Author: Stacey Higginbotham

  • Does HP Want to Be the New Apple?

    With HP’s $1.2 billion planned acquisition of Palm, the computer giant hopes to turn Palm’s webOS operating system into a platform to rival Apple’s mobile computing franchise. “Ultimately the Palm webOS and Apple are the two that can scale best over multiple devices and we are going to compete with Apple going forward in the broader mobile category,” said Brian Humphries, SVP of corporate strategy and development at HP.

    I spoke with Humphries last night after the deal was announced, but he declined repeatedly to give details as to when or what devices may get webOS. So we have no idea whether the HP Slate that Steve Ballmer, the CEO of Microsoft, was waving about at CES will stick with Windows or switch to webOS, but we do know that HP has a big vision for webOS — it hopes to put it across an array of mobile devices, creating a platform backed by the power of HP’s sales and distribution channels to which developers will flock.

    A huge portion of HP’s message around this deal is aimed at reassuring developers that webOS isn’t a dying platform and that HP is willing to invest. Humphries was adamant that developers will find a supportive HP (GigaOM Pro, sub req’d). “We’re clearly giving them dev tools, a platform they can port to, an easy financial model that’s viable to them and confidence that the OS will be scaled globally and on many different form factors,” Humphries said.

    It’s clear that HP is modeling its mobile computing vision on Apple’s platform. When I asked how many mobile operating systems the world has room for, Humphries hedged for a bit, saying the market is large and that it was difficult to see how things might develop. When pressed, however, he said that only webOS and Apple really have the ability to scale across many devices and many markets.

    As for HP’s willingness to be more open than Apple, perhaps taking a page from its personal computing heritage, it doesn’t look good. “Apple is proprietary but it also has a tremendous relationship with the app developer,” Humphries said. “And it may have a closed OS on which the app community can sit, but the apps make it open.”

  • Cell Phone Chip King Confirms Its Server Ambitions

    ARM plc has confirmed that within the next 12 months its architecture, which is currently used primarily in cell phones and consumer electronics, will also be used in servers — pitting it against the lifeblood of Intel’s chip business. Speaking with EETimes, Warren East, the CEO of ARM, said servers using ARM-based chips should appear within the year.

    The news shouldn’t come as a surprise to our readers, since earlier this month I profiled Smooth-Stone, one company trying to build low-power servers, and in that same post pointed to ARM’s server ambitions. And it’s not just startups that are interested in using the low-power ARM architecture inside data centers, either. Google recently acquired a secretive startup called Agnilux that was rumored to be making a server with the ARM architecture. We also reported on a Microsoft job listing that sought a software development engineer with experience running ARM in the data center for the company’s eXtreme Computing group.

    Over the last couple of decades, Intel’s x86 chips have come to dominate the data center, but as power considerations begin to outweigh the benefits of a cheap, general-purpose processor, other chip makers have started to smell blood. Nvidia is pushing its graphics processors for some types of applications, while Texas Instruments is researching the use of DSPs inside servers. So ARM’s server ambitions aren’t so far-fetched, and because of its ubiquity it may have the best chance at success, especially as more and more software is written for mobile devices, where ARM dominates. East told EETimes:

    The architecture can support server application as it is. The implementations [of ARM] have traditionally been aimed at relatively low performance optimized for minimum power consumption. But we are seeing higher speed, multicore implementations now pushing up to 2-GHz. The main difference for a server processor is the addition of high-speed communications interfaces.

    Smooth-Stone, for example, says it has developed intellectual property at the silicon level to handle the communications between the myriad ARM-based processors that would be needed inside a server. However, ARM isn’t the only low-power solution in the server world. Intel’s best hope may lie in companies using its low-power Atom chips to build greener boxes. I’m hoping we’ll hear more about ARM’s server ambitions when Ian Ferguson, the director of enterprise and embedded solutions at ARM and the guy in charge of those ambitions, speaks at our Structure 10 conference in June.

    Related content from GigaOM Pro (sub req’d):

    Hot Topic: Green Data Centers

  • Bloom Won’t Micromanage Data So Apps Can Scale

    Building webscale or cloud applications is hampered by the difficulty of spreading tasks across thousands of computers without slowing things down or requiring too many people to keep things running. Virtualization and faster storage help, as do new databases (GigaOM Pro, sub req’d) and caching techniques, but right now folks are trying to adapt how they program computers to reflect that one machine has become many.

    Bloom, a programming language created at the University of California, Berkeley by a group led by Joseph Hellerstein, is one such effort. Bloom was profiled this week as one of the top 10 emerging technologies by MIT’s Technology Review because it could help cloud computing continue to scale. Here’s how, according to Technology Review:

    The challenge is that these languages process data in static batches. They can’t process data that is constantly changing, such as readings from a network of sensors. The solution, Hellerstein explains, is to build into the language the notion that data can be dynamic, changing as it’s being processed. This sense of time enables a program to make provisions for data that might be arriving later — or never.

    Hellerstein also gave an extensive interview to HPC in the Cloud this week about what Bloom is and the problem it’s trying to solve. From that interview:

    To put it simply, what our work is trying to do is start with the data itself and get people to talk about what should happen to the data step-by-step through a program without ever having them specify at all how many machines are involved. So, when you ask a query of a database you describe what data you want—not how to get it.
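
    Bloom’s own syntax isn’t shown in the interview, so here is only a rough, hypothetical Python sketch of the gap Hellerstein describes: the imperative version spells out how to walk every shard of data, while the declarative version (using SQLite as a stand-in query engine) just states what data is wanted and leaves the "how" to the engine.

      import sqlite3

      # Imperative: the programmer spells out *how* to get the data,
      # shard by shard, row by row -- machine layout leaks into the logic.
      def recent_readings_imperative(shards, cutoff):
          results = []
          for shard in shards:
              for ts, value in shard:
                  if ts >= cutoff:
                      results.append((ts, value))
          return results

      # Declarative: describe *what* data is wanted; the engine decides
      # how (and, in a distributed setting, where) to evaluate it.
      def recent_readings_declarative(rows, cutoff):
          db = sqlite3.connect(":memory:")
          db.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
          db.executemany("INSERT INTO readings VALUES (?, ?)", rows)
          return db.execute(
              "SELECT ts, value FROM readings WHERE ts >= ?", (cutoff,)
          ).fetchall()

      rows = [(1, 0.5), (5, 0.9), (9, 1.2)]
      print(recent_readings_imperative([rows], 5))  # [(5, 0.9), (9, 1.2)]
      print(recent_readings_declarative(rows, 5))   # [(5, 0.9), (9, 1.2)]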

    The interview lays out how this programming effort came about (building network protocols) and who might care most about using Bloom (Amazon, Google or anyone with big data needs), but for me the best part of the interview was how Hellerstein underscored that the ability to harness a hell of a lot of servers and treat them as a single computer is the next big shift in information technology.

    We can call it cloud computing, webscale applications or merely bigger data centers, but the key element here is that the hardware has gone social in ways that require many-to-many ways of communicating and delivering instructions to the processors — inside the servers, between the servers, and soon, between data centers. The exciting aspect of this shift is that while larger companies like Google, Yahoo and Amazon are innovating, there is plenty of room for startups with a new appliance, server, networking technology or a chunk of code to make waves — and hopefully money.

    For more on the effort, please check out the FAQs Hellerstein has posted on his blog.

    Image courtesy of Flickr user tibchris

  • T-Mobile Drops 5GB Cap, Ushers in a New Mobile Broadband Future

    T-Mobile has announced that it will pull the 5 gigabyte-per-month cap on its mobile broadband service, part of an effort to push its HSPA+ network, which can deliver data speeds of up to 21 Mbps down. So is real competition coming to the wireless industry, or is this the end of flat-rate mobile broadband? I think it’s both.

    T-Mobile has changed its mobile data pricing plan to cut overage charges for customers of its 200 MB plan in half, and remove them entirely for customers who pay $59.99 per month (or $49.99 per month without a contract) for the 5 GB plan.

      The move is aimed at signing up customers in an increasingly competitive mobile broadband market. After all, Clearwire, Sprint and the cable companies are already selling WiMAX, which can deliver up to 6 Mbps down and 1 Mbps up, and Verizon is prepping for the launch of its LTE service during the fourth quarter of this year. AT&T will follow with LTE in 2011.

      All of which is good for consumers, but at least as far as T-Mobile’s lifting of its 5 GB-per-month cap goes, there’s a catch: go past that limit, and download speeds will slow. T-Mobile tries to play down such a caveat, however, saying: “When used as a mobile broadband solution in conjunction with an existing home broadband service, only a very small number of customers use more than 5GB per month.”

      But going forward, that number will only rise, as consumers are downloading ever more data, especially to watch video. So wireless providers, which have limited spectrum and a demand curve that resembles a steep uphill climb, are re-evaluating how they charge for mobile broadband (GigaOM Pro, sub req’d). The end of the flat-rate pricing is coming (GigaOM Pro) and the jury is still out as to how carriers will implement new options (GigaOM Pro).

      T-Mobile’s decision to slow speeds after a user hits 5GB per month is likely an answer to Clearwire’s unlimited mobile broadband offering, one that also protects the carrier from overloading its cellular network. And as the next generation of wireless networks hits the market, such plans will become increasingly common.
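
      T-Mobile hasn’t said how slow post-cap speeds will be, but the mechanics of such a soft cap are simple; a minimal sketch, with the throttled rate as a purely hypothetical figure:

        FULL_SPEED_KBPS = 21000   # HSPA+ headline rate cited above
        THROTTLED_KBPS = 128      # hypothetical; T-Mobile hasn't disclosed a figure
        CAP_BYTES = 5 * 1024**3   # the 5GB monthly threshold

        def allowed_rate_kbps(bytes_used_this_month):
            # Soft cap: past the threshold, slow the user down
            # instead of billing overage charges.
            if bytes_used_this_month < CAP_BYTES:
                return FULL_SPEED_KBPS
            return THROTTLED_KBPS

        print(allowed_rate_kbps(2 * 1024**3))  # under the cap -> 21000
        print(allowed_rate_kbps(6 * 1024**3))  # over the cap  -> 128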

    • VMware and Salesforce.com Create the VMforce Love Child

      Salesforce.com and VMware have teamed up to offer an enterprise Java cloud called VMforce. The offering, which ties the existing Salesforce.com infrastructure to VMware’s SpringSource-based Java platform, is an indication of a larger trend for infrastructure- and platform-as-a-service providers to sell not just the platform, but the app as well. It’s the difference between selling the services of a general contractor and selling someone a house.

      As my colleague Derrick Harris wrote in a GigaOM Pro article this weekend (sub req’d):

      The combination of cloud services designed for and hosted on cloud platforms seems like a surefire strategy to secure PaaS (or even IaaS) adoption. … By creating targeted applications designed specifically for use on their platforms, cloud providers can increase the likelihood of bringing customers into the fold (and can increase their profit margins, as well) by letting applications help sell the platform instead of relying on the platform itself. According to some surveys, at least, businesses presently find SaaS significantly more palatable than straight-up cloud computing.

      The move has also been highly anticipated ever since VMware purchased SpringSource last summer and said it would create a platform as a service for enterprises. Essentially, what this announcement means is that enterprise customers can use their existing Java experts to build applications on the Salesforce.com infrastructure and link them to Force.com and Salesforce.com databases and services.

      Under the hood, Salesforce.com is running VMware’s software in its own data centers for the VMforce cloud. It’s the first platform-as-a-service offering for VMware, which is continuing its march up the cloud stack, and it also shows how much sway Salesforce.com has with enterprise customers. When asked if VMware would host its Java cloud with any other provider, Mitch Ferguson, senior director of alliances at VMware, said the company was currently focused on this product.

      The VMforce offering will be available in developer preview at some undisclosed time this year, and pricing will be announced at that time. Maybe VMware President and CEO Paul Maritz will announce it when he speaks at our Structure 10 conference in June.

    • FCC’s Spectrum Task Force Had Better Be Better Than the X-Men

      The Federal Communications Commission today created a task force to help it bring 500 MHz of spectrum to market over the next 10 years as dictated by the National Broadband Plan. But since most of the hoped-for spectrum is controversial to one group or another, the folks behind this task force had better have some superhuman powers of persuasion, or at least an adamantium skeleton that will hold up against powerful lobbying.

      Julius Knapp, the group’s co-chair and head of the Office of Engineering and Technology, told me that its primary goal is to deal with issues that will arise across multiple FCC bureaus as the spectrum portions of the National Broadband Plan are implemented, a quick synopsis of which can be found in the chart below. For more details, read my post on the topic.

      But Knapp, an electrical engineer by training, will also be looking ahead to long-term spectrum needs (GigaOM Pro, sub req’d). Below are some of the upcoming issues Knapp says the agency and the task force are planning to tackle. And because every comic book hero has an arch nemesis, I’ve tried to focus on the players who will fight each of the FCC’s proposals.

      • Broadcast Spectrum: As we’ve explained in previous posts, the FCC is going up against the broadcast industry as represented by the National Association of Broadcasters with the hope of getting access to 120 MHz of underutilized spectrum. Already the broadcast industry is raising fear, uncertainty and doubt by saying that the FCC can’t start a spectrum proceeding without first doing an inventory of the current spectrum holdings as proposed in the Radio Spectrum Inventory Act, which recently passed the House and is awaiting passage in the Senate. However, as Knapp notes, the FCC doesn’t plan on dealing with formal rulemaking for reallocating broadcast spectrum until 2011 (it will open up the topic for comments in the third quarter), which should give it plenty of time to comply with the law.
      • MSS Spectrum: Also in the third quarter, the FCC plans to take a look at why satellite providers holding about 90 MHz haven’t yet deployed mobile broadband. If the FCC decides to relax the rules requiring those spectrum holders to deploy a significant satellite network, expect the CTIA and wireless carriers to play the arch-nemesis role.
      • TV Whitespaces: This spectrum is between the channels used by broadcasters to deliver digital television and was a big issue in 2008. In the third quarter the FCC plans to issue a notice of proposed rulemaking on the topic that would enable equipment manufacturers and network operators to start delivering products and services that use the white spaces. The nemesis here is once again the NAB.
      • Unlicensed Spectrum: Finally, toward the end of the year the FCC will come out with information on what band of spectrum it wants to use for delivering unlicensed services like Wi-Fi. The agency is currently talking to equipment manufacturers and public interest groups to determine which blocks might work. Until we know the spectrum the FCC plans to use and how much it wants to offer without a license, however, it’s hard to pinpoint the nemesis.

      The task force will also take steps to get the process for next year’s WCS, AWS-2 and AWS-3 spectrum auctions under way, but there is less controversy around those. Regardless, Knapp and his co-chair, Ruth Milkman, have a grueling few years ahead of them.

    • M2Z Is Back With Free Wireless Broadband Plan

      The County Executives of America plans to build a nationwide wireless broadband network to cover residents of its 700 member counties, and has applied for $122 million in stimulus grants to kick off the effort. The money would fund networks in 12 counties and would cover more than 14 million people.

      The CEA hopes to let M2Z, a Kleiner Perkins-backed startup that’s been trying to build a free wireless broadband network since May 2006, build out the network using the AWS-3 band of spectrum. We explained why M2Z was a bad bet back in 2008, but the utopian idea of free wireless broadband isn’t going away, especially since the current Federal Communications Commission Chair is so keen on mobile broadband as the great equalizer.

      Here are the details on the CEA plan:

      Specifically, the CEA broadband stimulus application aims to bring free broadband access to 12 major counties, serving the residents of Allegheny County, Pa.; Bronx County, N.Y.; Chambers and Kaufman Counties, Texas; DeKalb County, Ga.; Kenosha County, Wis.; New Castle County, Del.; Prince George’s and Montgomery Counties, Md.; Will and Cook Counties, Ill.; and Salt Lake County, Utah.

      But the plan submitted by the CEA offers two service options: a paid 6 Mbps offering (the application cuts off right as it says what the paid version would cost) and M2Z’s original free service with speeds of 768 kbps — what most would call barely broadband. The FCC in March suggested that it would create a free nationwide wireless broadband network, but my sources there assured me it wasn’t related to M2Z. It looks like M2Z is now attempting to take its free wireless broadband proposal to other entities.

      Image courtesy of Gavin St. Ours on Flickr.

    • Cable Considers Faster Broadband Standard

      CableLabs, the standard-setting organization for the cable industry, is pondering a next-generation cable broadband technology that could deliver up to 5 gigabits per second down, according to Multichannel News. Not only are the speed gains significant, but the standard would be more efficient, doing away with the current way cable companies parcel out their spectrum. However, it would require them to invest in new gear, both in their plants and for installation in consumers’ homes, as well as to switch to an all-IP infrastructure.

      The proposal is being floated in the cable community, but may never make it all the way to the standards process — or consumers’ homes. However, if implemented, it could solve a problem for the cable industry and perhaps enable cable companies to forestall putting in fiber to the home.

      Currently cable companies divide their spectrum into 6MHz channels, each of which delivers about 38 Mbps, the equivalent of two HD channels. The DOCSIS 3.0 standard bonds those channels together to create faster broadband speeds. The proposed new standard would eliminate the 6 MHz channels altogether, which could give cable companies more flexibility in how they manage their assets. It’s the difference between trying to cram blocks into a container Tetris-style and pouring sand in it. With the sand, you use the empty space more effectively.
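
      The paragraph’s figures make for easy back-of-the-envelope math (my arithmetic, not CableLabs’): at 38 Mbps per rigid 6 MHz channel, reaching the proposed 5 Gbps would take on the order of 132 bonded channels, which shows why doing away with fixed channels is attractive.

        CHANNEL_WIDTH_MHZ = 6
        CHANNEL_RATE_MBPS = 38   # per-channel figure cited above
        TARGET_MBPS = 5000       # proposed next-gen downstream rate

        def bonded_rate_mbps(channels):
            # DOCSIS 3.0 bonds fixed 6 MHz channels, so capacity
            # only comes in 38 Mbps blocks -- the "Tetris" problem.
            return channels * CHANNEL_RATE_MBPS

        channels_needed = -(-TARGET_MBPS // CHANNEL_RATE_MBPS)  # ceiling division
        print(bonded_rate_mbps(4))                      # 152 Mbps, a typical tier
        print(channels_needed)                          # 132 channels
        print(channels_needed * CHANNEL_WIDTH_MHZ)      # 792 MHz of plant spectrum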

      Such a level of efficiency is something the cable companies will have to deliver as consumers expect both faster broadband speeds and an infinite array of HD (maybe 3-D) content personalized just for them. So far, many of the larger cable companies haven’t committed to true IPTV, but most recognize that it’s merely a matter of time. Changing the network architecture like this would be a first step, although it would mean we’ll still have two different styles of networks in the U.S. for a long time to come.

      Related GigaOM Pro content (sub req’d): Who Will Profit From Broadband Innovation

    • Is Arkansas Really The Most Competitive State for Broadband?

      Arkansas, North Dakota and South Carolina are the three states with the most competitive broadband markets, according to a report released today by ID Insights and broadband consultant Craig Settles, president of Successful.com. For those questioning what makes these states so competitive, it’s the fact that no single provider — or even a duopoly of providers — serves a huge percentage of the state’s broadband subscribers. For example, in Arkansas the top two providers offer service to just 49 percent of broadband customers, whereas the top two ISPs in the least competitive market, Rhode Island, cover 95 percent of subscribers.
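
      The report’s ranking appears to rest on what economists call a two-firm concentration ratio; a quick sketch of that metric, with made-up subscriber counts rather than the report’s actual data:

        def top2_share(subscriber_counts):
            # Two-firm concentration ratio: the share of subscribers served
            # by the two largest providers. Lower means more competitive.
            total = sum(subscriber_counts)
            top_two = sum(sorted(subscriber_counts, reverse=True)[:2])
            return top_two / total

        # Hypothetical provider subscriber counts, not the report's data:
        arkansas_like = [30, 19, 15, 14, 12, 10]   # fragmented market
        rhode_island_like = [60, 35, 3, 2]         # near-duopoly
        print("%.0f%%" % (100 * top2_share(arkansas_like)))      # 49%
        print("%.0f%%" % (100 * top2_share(rhode_island_like)))  # 95%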

      What’s more interesting is that the survey draws out a correlation (not causation, guys) between home values, income and broadband competition, showing that the wealthier you are and the higher your home value, the less likely you are to live in a place with competitive access to broadband. Plus, the more people online in your state, the less competition there is.

      This may seem counterintuitive at some level. Look one layer deeper, however, and it begins to make sense: more prosperous states with many users and more wealth tended to attract the largest providers, and as infrastructure was built out and larger providers began to dominate markets, it became increasingly difficult for new entrants to establish themselves.

      Competition, or the lack of it, is one of the key reasons the U.S. lags behind in broadband speeds, and can also be tied to anti-competitive tactics such as tiered broadband. The Federal Communications Commission hopes to address the lack of competition with mobile broadband and better data, which this report helps provide, but I’m not holding out for any miracles of access technology coming to my home unless Google chooses Austin for its experimental fiber network. Below is a list of the top 20 states, and the full report can be found here.

      1. Arkansas
      2. North Dakota
      3. South Carolina
      4. Nebraska
      5. California
      6. Alabama
      7. Missouri
      8. Indiana
      9. Texas
      10. Kentucky
      11. West Virginia
      12. Wisconsin
      13. Minnesota
      14. Florida
      15. Montana
      16. Connecticut
      17. North Carolina
      18. South Dakota
      19. Oregon
      20. Michigan

      Image courtesy of Flickr user OakleyOriginals

    • Hey ISPs, Google Wants to Share Its Fiber Network

      Google’s planned experimental fiber network will be so open that the company hopes to see other ISPs ride it in order to deliver their own services, according to a story by BroadbandBreakfast.com. The publication quotes a project manager in charge of the program (whom we’ve also interviewed before) as saying:

      “We’re definitely inviting the Comcasts, the AT&T service providers to work with us on our network, and to provide their service offering on top of our pipe – we’re definitely planning on doing that,” said Minnie Ingersoll, Google’s product manager and co-lead for alternative access. “Our general attitude has been that there’s plenty of room for innovation right now in the broadband space, and it’s great what the cable companies are doing, upgrading to DOCSIS 3.0, but no one company has a monopoly on innovation.”

      Although I can’t imagine any of the larger ISPs taking Ingersoll up on the offer, it does represent a chance for them to get away from being dumb pipes by truly proving the value of the services they offer to their users. And in the process, they could try to overload Google’s network with their content, as they accuse Google of doing to their own networks. Except that Google’s small network would be fiber-to-the-home, rather than a more easily congested cable or copper pipe.

      While Ingersoll didn’t share when Google might announce which of the 1,100 municipalities that applied will get the fiber network built in their towns, she did say she was evaluating them based on “the efficiency with which such networks could be rolled out, and how the targeted communities could benefit from the roll-out of such a network,” according to BroadbandBreakfast.com. As much as I’d like to see that fiber installed in my city, I’m even more excited to see how the whole project plays out in terms of costs and what it can show us about the economics of delivering fiber to the home.

      Related GigaOM Pro content (sub req’d):

      Google Buzz, Fiber and Their Place in the Smart Grid

    • CenturyTel to Buy Qwest for $22.4 Billion

      CenturyTel said today that it’s agreed to buy Qwest Communications in a deal valued at $22.4 billion, continuing the consolidation of rural telephone companies and ending speculation as to when and how Qwest would manage to sell portions of its business. CenturyTel will spend $10.3 billion buying Qwest stock and will assume $11.8 billion in debt. The deal comes two years after CenturyTel purchased Embarq for $11.6 billion to become CenturyLink, and vaults it into the realm of the Big Bells, Verizon and AT&T.

      The combined Qwest and CenturyTel will have 5 million broadband customers, 17 million access lines, 1.4 million video subscribers and 850,000 wireless consumers (through a Qwest partnership with Verizon). For reference, AT&T has 17.5 million broadband customers, 4.5 million video subscribers and 26.6 million voice subscribers. The consolidation in the landline market is driven by a few factors, many of which spell bad news for consumer subscribers unless the winners of this consolidation fest are prepared to spend like mad.

      The demand for wireline telephone and DSL services is on the wane, but at the same time, the need to spend money to maintain old lines and invest in new technologies like fiber is on the rise. Unlike Verizon and AT&T, CenturyTel and Qwest don’t have a corresponding wireless business to offset the losses and increased infrastructure costs. AT&T’s wireline business, for example, provided just 24 percent of its fiscal first-quarter sales, down 3 percent from the year before, while 45 percent of sales came from its wireless business — a business that also provided operating margins of 45 percent.

      In addition to the wireline squeeze, these businesses are also located in areas where the population is spread out, making it more costly to maintain and invest in network upgrades. Verizon, for example, has been selling its rural lines where it can, and it doesn’t currently have plans to continue extending its FiOS fiber-to-the-home buildout to more of its subscribers, most of whom are located in less populated areas.

      There’s also the competition with cable, which can deliver faster speeds with a simple DOCSIS 3.0 upgrade that can cost a few hundred dollars per home, as compared to the higher cost of delivering fiber.

      Adding to this grim mix is the coming reform of the Universal Service Fund, a government subsidy program aimed at offsetting the costs of providing rural telephone service. The program is being shifted away from telephone subsidies and toward paying for broadband expansions. The Federal Communications Commission is also trying to rein in some of the waste associated with the program. Within five years the FCC hopes to stop paying companies like CenturyTel for voice lines with USF money. Some of that loss will be made up through new USF broadband subsidies, however, so this deal may be a way for CenturyLink to reap a larger portion of those fees.

      CenturyLink executives emphasized the stronger business relationships the company will win thanks to its acquisition of Qwest (which serves 95 percent of the Fortune 500), rather than any benefits for consumers. Qwest also has an emerging cloud computing product, which leads me to wonder if CenturyLink might eventually split the consumer and enterprise businesses further down the road. On the conference call, executives said they may keep the Qwest name for business and use CenturyLink for the consumer market.

      Historically, these telecom consolidation deals have been a loss for consumers and even for the firms that make them. Verizon has sold many of its rural assets, leaving their purchasers to file for bankruptcy. Taking on the burden of costly assets and a lot of debt doesn’t seem to be a winning strategy for telephone companies, but maybe the hope is to become something that’s just too big to fail. Given the government’s current focus on boosting broadband, perhaps such a strategy isn’t such a bad idea.

    • Open vs. Closed: Ubuntu Walks the Line

      Any debate over open vs. closed systems has to touch on open-source software and the ways in which companies are attempting to build code as a community effort, while still profiting from it in some way. So I chatted with Mark Shuttleworth, CEO of Canonical, the company that supports Ubuntu, about how his company walks the line between spending to support open-source software and finding a business model that works.

      Canonical’s 330 employees are responsible for maintaining, supporting and selling services for Ubuntu, an open-source version of the Linux operating system for servers, desktops and computer manufacturers. About 120 to 150 of the Canonical employees contribute directly to the new releases of the software that come out every six months, and most of the company’s revenue comes from supporting enterprise server customers and makers of computers who want to put Ubuntu on desktops. Consumers also download the software, but few pay Canonical for support. The company is not yet profitable.

      Shuttleworth believes that to develop a strong business model around an open approach, one has to create an open option early, ideally through a strong standardization process, and one also needs a lot of different open-source projects fighting it out. In the operating system world, for example, there wasn’t a strong history of open alternatives, which meant that Ubuntu had to out-open its proprietary competition, and that carries high costs.

      In that way, it has pushed Canonical perhaps further toward the open end of the spectrum. Shuttleworth counts among the direct costs of being so open the work of bringing people together in a way that empowers them and makes them feel like members of a community, and of reaching out and putting in place the infrastructure to run such a company. However, there are indirect costs as well.

      “There is a myth that being open is necessarily more efficient and cheaper, but there are no hordes of people showing up to do the hard stuff,” Shuttleworth says. “Occasionally wonderful, magical things happen — really incredible things do happen, like people show up unexpectedly with brilliant ideas — but it’s still hard and expensive and you still have to be willing to do all the hard and expensive things and do it in an open fashion. And you’re still likely to be accused of being open only when it’s convenient.”

      He points to the cloud-computing market as one that tends to pay a lot of lip service to openness, but where the lack of a big standardization effort and robust open-source competition could lead to a relatively closed ecosystem.

      “The basic story there is pretty bad at the moment,” Shuttleworth says. He notes that proprietary infrastructure, hypervisors and even the APIs and ways data is stored can lock folks into one cloud for life. “We need real open alternatives early in the process, making it possible for people to build their own cloud infrastructure that responds to the same APIs that Amazon’s do,” Shuttleworth says.

      He’s accepted that Amazon Web Services’ APIs, while not created through an open standards group, have become a de facto standard, and said that it’s more efficient to build open-source code around Amazon’s APIs than to try to develop new ones for accessing the cloud. Canonical has a partnership agreement with Eucalyptus, which offers open-source software to create an AWS-compatible cloud, so people can use Ubuntu and Eucalyptus to create their own cloud computing platform. But Shuttleworth would like to see more open-source options beyond Eucalyptus for building out a cloud computing service of your own.
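
      The practical upshot of a de facto API standard is that the same client code can target either cloud just by pointing at a different endpoint. As a sketch of the idea using today’s boto3 library (the private endpoint URL here is made up):

        import boto3

        def make_ec2_client(endpoint_url=None):
            # The same EC2-style client talks to AWS or to an
            # AWS-compatible private cloud such as Eucalyptus;
            # only the endpoint changes, not the application code.
            return boto3.client("ec2", region_name="us-east-1",
                                endpoint_url=endpoint_url)

        aws = make_ec2_client()  # endpoint_url=None targets real AWS
        private = make_ec2_client("https://cloud.example.internal:8773/")

        # Identical calls would work against either backend:
        # aws.describe_instances()
        # private.describe_instances()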

      At the platform-as-a-service level, the openness question will center on moving data from cloud to cloud easily. There’s room there for an open standard or open databases, he said. But at every level, when considering building a business around open-source software, he believes that “you want a common and clear standard with competing open source versions using that standard.”

      That keeps proprietary vendors at bay, and gives the companies building a business around the open source software a chance to decide where they want to be on the open-to-closed spectrum. But it also introduces the prospect of fragmentation, which we’ll leave for a later post.

    • FCC Plows Ahead With Broadband Plan Despite Comcast Ruling

      The FCC today began the long process of building a regulatory regime for a broadband-centric — as opposed to telephone-centric — communications world during an open meeting in which it sought comments on several sweeping policy changes, including reforming the federal telecommunications subsidy for providing rural telephone access. Even as the FCC issues these notices and requests for comments, however, Washington policy experts are still wondering when the FCC will take action to affirm its authority to implement some aspects of the broadband plan. Is the FCC blithely ignoring reality or is it just that confident?

      During the first week of April a federal appellate court ruled that the FCC was wrong to chastise Comcast for throttling P2P packets because it did not ground its decision-making in the proper authority. The ruling has the effect of putting the FCC in limbo (GigaOM Pro, sub req’d) as to whether or not it has the authority to compel high-speed Internet service providers to follow some of its regulations.

      FCC Chairman Julius Genachowski opened today’s meeting by saying he believes the agency has the full legal authority to move forward with these proceedings despite the Comcast ruling, but that’s what the FCC has been saying since the decision was handed down during the first week of April, and even today that authority was questioned by a fellow commissioner.

      The FCC then voted on six proposals:

      • It approved a Notice of Proposed Rulemaking and request for comments on reforming the Universal Service Fund so the fund would be used to advance the growth of high-speed Internet access in rural areas as opposed to supporting voice telephone lines.
      • It approved a change in its previous rules on wireless voice roaming, which forces carriers to automatically allow voice calls to roam, and it issued a Notice of Proposed Rulemaking on wireless data roaming in order to see if it’s feasible for wireless network providers to make it as easy for subscribers to roam on data networks in the U.S. as it is for voice.
      • It approved two proceedings related to opening up set-top boxes so they could access both cable networks and the Internet, leading to greater competition in television programming and in the hardware used to access video content.
      • It approved a Notice of Inquiry seeking comments on how to create fail-proof and resilient broadband networks in case of a natural disaster. One of the current qualms over IP communication is that it’s not as reliable as the copper telephone networks in case of a disaster.
      • It approved a Notice of Inquiry asking for comments on whether the FCC should create a cyber security program.

      But during this decision-making process the issue of the FCC’s authority to implement its plans was challenged by Robert McDowell, a Republican commissioner, who questioned the FCC’s authority to require wireless network operators to automatically open their networks to roaming in the wake of the Comcast decision.

      Which indicates to me that when the FCC has to implement anything difficult or controversial, the issue of its authority will come up again and again — both from lobbyists and later through the courts. So the real question is: As the FCC tries to implement a new broadband-centric regulatory regime, when is it going to take the time to clear up these lingering questions?

    • AT&T Bets Big on the Internet of Things

      AT&T today reported first-quarter earnings of $2.5 billion and sales that were largely unchanged from the year before, at $30.6 billion — but the flat sales mask the gains made in its wireless business, which grew to account for 45 percent of revenues. In short, AT&T is betting big on wireless through the sale of phones with data plans (it added 1.9 million wireless subscribers), prepaid plans and an emphasis on providing wireless connectivity for the Internet of things.

      For example, the carrier has a deal to provide connectivity for the Kindle and one with Jasper Wireless to help it provide wireless connectivity for myriad partners. I’ve spoken with Glenn Lurie, the executive in charge of AT&T’s machine-to-machine efforts, who was optimistic that margins would be higher in emerging devices such as connected photo frames. Earlier this year AT&T said it was providing connectivity to everything from dog collars that broadcast a pet’s location to pill bottles that will remind you to take your meds (and even tell on you if you don’t).

      The irony here is that M2M connectivity in many ways represents the dumb pipe future that AT&T is so worried about — it’s not providing anything to its partners but the bits. On the call, AT&T executives explained that the bits sent via the network for these devices are high-margin bits and that machine-to-machine clients have very low churn. The carrier’s total wireless operating margin rose to 44.5 percent.

      AT&T also said it had improved its wireless network (GigaOM Pro, sub req’d) in New York and that dropped calls in the region declined by 6 percent. For everyone on the wireless network, AT&T said its HSPA network upgrades are boosting download speeds by 32-47 percent in places where it has deployed fiber backhaul. Readers, has your AT&T experience improved? Let us know in the comments.

    • Breaking: Google Buys Stealthy Startup Agnilux

      Google has purchased a stealthy startup called Agnilux, according to PEHub. Agnilux was founded by engineers who formerly worked at startup PA Semi, which Apple purchased in 2008. The startup is supposedly making some type of server. Google buying such a company could go a long way toward proving that for webscale data centers there’s nothing like tweaking your infrastructure — from the silicon up.

      We’ve written about SeaMicro, which is making its own specialty server using Atom chips, and Smooth-Stone, which is using its chip design expertise to build an ARM-based server, and discussed today how the tide might be turning when it comes to commodity infrastructure. If Google has purchased this startup with the goal of making its own servers run more efficiently, or to adapt them to Google-specific compute needs, that’s a huge bet on specialty hardware for webscale computing.

    • Microsoft Speeds Up Its Data Center With Light and Mirrors

      Microsoft Research is the first commercial customer of a new optical equipment module made by a seven-year-old startup that hopes its gear will enable servers to send and receive information faster. Lightfleet, based in Camas, Wash., sold an alpha version of its Direct Broadcast Optical Interconnect system, which uses broadcast light to connect computing nodes, to Microsoft’s eXtreme Computing Group as part of a project to explore faster communication between servers in its cloud computing deployments.

      Lightfleet’s gear looks pretty cool, and would help eliminate the bottleneck that occurs as information is sent inside servers and from server to server in dense computing environments. The company’s DBOI system uses light and mirrors to send bits from each compute node inside a server to all other nodes at one time, rather than sending them via cables and a switch. So far, Lightfleet has raised $30 million in funding from angel investors and has 22 employees.
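
      A toy model (mine, not Lightfleet’s) shows why broadcast appeals in dense, chatty clusters: with point-to-point links, an all-to-all exchange needs a separate transmission for every node pair, while a broadcast medium needs only one transmission per node.

        def switched_all_to_all(nodes):
            # Point-to-point fabric: every node sends a separate copy
            # to each of the other nodes, all of it crossing the switch.
            return nodes * (nodes - 1)

        def broadcast_all_to_all(nodes):
            # Broadcast medium: one transmission per node reaches
            # every other node at the same time.
            return nodes

        for n in (8, 64, 512):
            print(n, switched_all_to_all(n), broadcast_all_to_all(n))
        # 8 -> 56 vs 8; 64 -> 4032 vs 64; 512 -> 261632 vs 512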

      Other companies are addressing this bottleneck in a variety of ways that include using fiber in the data center, specialized gear or virtualizing the network fabric (GigaOM Pro, sub req’d). Intel is proposing a similarly named optical cable technology called Light Peak for computer peripherals.

      While Microsoft may have an industrial-scale strategy around its data center operations that seems antithetical to buying gear from startups, its research arm shows that Redmond isn’t totally oblivious to new technologies to address the challenges of running hundreds of thousands of servers. Last week, I wrote about how Microsoft was looking for someone to work with solid state drives and ARM-based servers in its online services division.

      Microsoft’s willingness to see the “light” when it comes to networking is just another example of how the shift to webscale computing may be opening opportunities for hardware and software startups, as the current generation of “commodity” x86 gear hits the wall. I will be leading a panel discussing the prospects for hardware startups in a webscale world at our Structure 2010 conference in June, so we can see if this is the future or wishful thinking.

    • AT&T Tries to Strong-arm the Feds

      As regulators dive deep into broadband politics, AT&T has turned not only to lobbyists, but to threats. Ma Bell today issued a ho-hum press release saying it’s chosen Ciena as its optical equipment provider for upgrades to “maintain and expand” its metropolitan and long-haul network infrastructure. It’s a pretty standard release, noting, for example, that AT&T has delivered 18.7 petabytes of information over said backbone and that this investment will be part of a planned capital upgrade to its IP network for businesses. But the last line has me thinking the folks at AT&T have seen too many episodes of “The Sopranos:”

      AT&T in January announced total 2010 capital expenditures are expected to be between $18 billion and $19 billion, a level framed by the expectation that regulatory and legislative decisions relating to the telecom sector will continue to be sensitive to investment.

      It doesn’t take much reading between the lines to see that AT&T is suggesting it could hold its billions of dollars in capital spending hostage as it negotiates with Congress and the FCC on issues such as network neutrality and reclassifying broadband (GigaOM Pro, sub req’d) as a transport service rather than an information service. It’s done this before through lobbying efforts and in FCC filings, but in a random press release, it’s just too much.

      When asked about that section of the release, an AT&T spokesman said, “We always have a cautionary language statement in materials such as this.” And this particular language is in its fourth-quarter earnings, although it’s nowhere to be found in AT&T’s most recent capex-themed releases. But while yes, cautionary language statements are standard practice in the press releases of publicly traded companies, this language doesn’t read as cautionary so much as it reads like AT&T is saying, We’ve built a nice telecommunications network infrastructure here — sure would be a shame if anything were to happen to it.

      Really, Ma Bell? You’re going to stop maintaining and expanding your network if the FCC implements network neutrality regulations that don’t allow you to discriminate against certain types of network traffic — something you’re keen to say you’d never do anyhow? Or maybe it’s the idea that DSL might end up more directly under FCC authority through a reclassification process, something that already affects those copper lines since they’re already delivering voice traffic? Can you even afford to stop investing in your network, especially the wireless one?

      AT&T’s not-so-veiled threats leave me boiling with rage, especially given how its late-to-the-party attitude toward network upgrades has made the iPhone experience so crappy for so long. To basically threaten that its 85 million cell-phone subscribers, 2.1 million U-verse TV subscribers, 24.6 million voice subscribers and 17.2 million high-speed Internet subscribers would get degraded service because it won’t maintain or expand its network if the government enacts regulations “that aren’t sensitive to network investment” is reprehensible — and an open admission that AT&T thinks it can stop investing in its network and still make money off of it (possibly because there’s not a lot of competition). Even worse, many of those regulations would help protect consumers from anticompetitive practices and pricing.

      Plus, AT&T’s reaction to the FCC’s relatively benign policy efforts (the network neutrality clause leaves room for reasonable network management, which could be interpreted in pro-ISP ways) is so out of proportion as to be ridiculous. I could understand such posturing if the FCC, like some telecom agencies around the world, were considering a way to open up AT&T’s network for competitive services to travel over it, but the FCC in its National Broadband Plan steers very clear of that issue, instead deciding that data and possibly wireless access would have to be the stick to keep network providers such as AT&T in line. Threatening to halt several billion dollars of necessary capital investment over reclassification or network neutrality is like threatening to burn down your own house because you don’t like your homeowners association’s rules.

      Image courtesy of Flickr user Eddie~S

    • Computing’s Gone Mobile, So Now Make It Pay

      ARM, the licensing firm whose architecture is the basis of the processors inside most cell phones, as well as many consumer gadgets, will be touring the country over the next few weeks talking about the future of mobile computing. Before that effort got underway, I asked Bob Morris, director of mobile computing at ARM, for a sneak peek at ARM’s vision of the future, which he shared in the video below.

      Morris credits the rise of the web, now the hub of everything from personal communications to entertainment, with enabling companies to create consumption-centric devices like tablets and e-readers, and he expects the next big effort to be around linking secure payments to broadband-enabled devices, be they mobile computers or broadband-connected TVs. He explained the concept in the video (one PIN associated with all of your devices); you can also read about ARM’s partnership with Europe’s G&D for the nitty-gritty details.

    • When It Comes to Virtualization, Are We There Yet?

      As much as we hear about virtualization, it can be surprising to get actual numbers on deployments and realize how low they remain — just 18-19 percent of workloads on enterprise x86 servers have actually been virtualized, according to new data released by Lazard Capital Markets. The investment bank expects 48 percent of enterprise workloads to be virtualized by 2012, which means the number of virtual machines will grow to 58 million, from 5.8 million in 2008. So the software to manage virtual machines is going to be hot, and there’s still plenty of room for growth.
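
      Lazard’s numbers imply a striking growth rate; a quick check of the arithmetic:

        vms_2008 = 5.8e6   # Lazard's 2008 figure
        vms_2012 = 58e6    # Lazard's 2012 projection
        years = 2012 - 2008

        cagr = (vms_2012 / vms_2008) ** (1.0 / years) - 1
        print("Tenfold in %d years = %.0f%% growth per year" % (years, cagr * 100))
        # Tenfold in 4 years = 78% growth per year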

      The other things adding spice to the commodity hypervisor market are the growth of server virtualization among small- to medium-sized businesses and the next level of virtualization: virtualizing the desktop. Virtualizing the desktop allows IT departments to store a copy of each desktop on a server and deliver it to a remote client or PC. This is a boon for Citrix, which has been pushing desktop virtualization for a while and appears to be the leader over rival VMware when it comes to customer interest in the technology, according to the Lazard report issued on Friday. A Jefferies research report out this morning notes that in a survey of the top 25 software resellers, 44 percent said their customers had expressed an interest in virtualizing the desktop. From the note:

      Some VARs see VDI as the next logical step after app virtualization. Moreover, Windows 7 upgrades are causing IT depts to reassess their entire desktop infrastructure. Ironically, some customers are looking to use VDI as a way to increase life of their existing hardware.

      But beyond the battle between VMware and Citrix for enterprise server and desktop virtualization, SMBs are accelerating their virtualization plans and choosing Microsoft’s Hyper-V in order to do so. Before 2009, some 30 percent of global organizations had started virtualizing — a number that has since doubled, driven predominantly by SMBs, Lazard said. SMB penetration is expected to exceed large-enterprise penetration by next year, according to Lazard. So it looks like VMware will have to work hard to stay ahead of where the virtualization market is growing — on the desktop and with SMBs.

    • Storage Pain Is Fusion-io’s $45M Gain

      Fusion-io, a maker of specialty solid-state storage gear, has raised $45 million in a third funding round, bringing the total amount it’s raised since launching in 2007 to $111.5 million. Meritech Capital Partners led the round, and was joined by Accel Partners and Andreessen Horowitz. HP, Samsung and Dell have all contributed in previous rounds. The company, which I was excited about when I met with it at DEMO in September 2008, has gone through several executive shuffles, but appears to be succeeding based on the demand for its gear, which speeds up access to stored data.

      Fusion-io is one of several companies trying to ease the bottleneck created when users attempt to access gigabytes of information from web pages and expect it to appear instantly, such as with photos on Facebook. When a user clicks on a photo album, he expects to see it load within seconds, even though doing so involves finding and pulling the files from wherever they’re stored, then serving them up over the user’s broadband connection. To speed this process up, startups over the last three years have been tweaking storage gear and focusing on new types of databases. The gear helps send the bits where they need to go quickly, while the database technologies (GigaOM Pro, sub req’d), from Cassandra to memcached, attempt to organize information across a variety of servers so the gear finds the right bits faster.
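
      The division of labor described above is commonly expressed as a cache-aside pattern: keep hot data on a fast tier and fall back to slow storage on a miss. A minimal sketch, with a plain dict standing in for memcached and a sleep standing in for a disk seek:

        import time

        cache = {}   # stand-in for a fast tier such as memcached or flash

        def slow_disk_fetch(photo_id):
            # Simulates pulling a file from spinning disk or remote storage.
            time.sleep(0.05)
            return "<bytes of photo %s>" % photo_id

        def get_photo(photo_id):
            # Cache-aside: serve from the fast tier when possible; on a
            # miss, hit the slow tier and populate the cache for next time.
            if photo_id not in cache:
                cache[photo_id] = slow_disk_fetch(photo_id)
            return cache[photo_id]

        get_photo(42)   # miss: ~50 ms, touches the slow tier
        get_photo(42)   # hit: microseconds, served from the fast tier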

      Fusion-io is one of the startups tweaking storage gear, putting flash memory inside PCI Express modules that plug into existing servers. Once combined with Fusion-io’s software, the modules can be used to access data faster than spinning-disk hard drives, all while consuming less energy — albeit at a greater cost. HP sells servers with Fusion-io drives inside, and at SXSW this year Serkan Piantino of Facebook noted that the social network is testing Fusion-io drives. Other customers include MySpace. However, as I noted when Fusion-io last raised money in August, the company is up against not only startups like Pliant, which also has a proprietary flash memory drive, but big SSD makers like Intel.