Author: Stacey Higginbotham

  • Marvell Goes to Hollywood to Push Superfast Broadband

    Marvell is going to Hollywood next week in an effort to show the film industry what it’s missing because the U.S. has such slow broadband speeds. The chip firm is teaming up with Jason Reitman, director of “Up in the Air,” at the W Hotel in Hollywood on Tuesday, where it will show off a line of chips destined for home modems and residential gateways that can handle broadband speeds of between 100 Mbps and 2.5 gigabits per second.

    Why should Hollywood care about fast broadband? Well, with such fat pipes, 3-D would be just the beginning. One might be able to film a group of actors located in different parts of the world instead of bringing a bunch of people to one location. Plus, the actual movie-watching experience could become even more immersive if broadband speeds in the U.S. were more than 600 times faster than the nation’s average of 3.9 Mbps.
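
    For the curious, that “600 times faster” figure falls directly out of the numbers already cited in this post — a quick back-of-the-envelope check (a sketch using only the speeds mentioned above):

    ```python
    # Sanity-check the speed comparison using the figures cited in the post.
    avanta_top_mbps = 2.5 * 1000   # top Avanta speed: 2.5 Gbps, expressed in Mbps
    us_average_mbps = 3.9          # average U.S. broadband speed cited above

    print(avanta_top_mbps / us_average_mbps)  # ~641 -- "more than 600 times faster"
    ```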

    But unfortunately for those of us in the U.S., we’re not actually going to see Marvell’s Avanta chips designed for fiber networks for a while. Nafea Bshara of Marvell explains that the lack of competition between ISPs, coupled with the lack of a big government push to increase speeds, means that Marvell is counting on the Asia-Pacific region for sales of the Avanta chips over the next few years.

    But once we get fat pipes — and symmetrical ones with both fast upstream and downstream speeds (GigaOM Pro, sub req’d) — Bshara says we could experience the best aspects of having services reside in the cloud. “There are whole new applications and new business models for the Internet,” Bshara says. “We want to encourage new breakthrough usage models that will really drive broadband and Hollywood can do that.”

    Forget email. He envisions a world where web-based hardcore gaming is possible and there’s no need for someone to decide between a PlayStation and a Wii console because they’ll be able to access whatever type of game they want over the web. However, he may be overestimating Hollywood’s willingness to usher in the future of streaming movies instead of buying DVDs.

  • Sports Will Drive 3-D Adoption and Broadband Upgrades

    Hollywood is awash in 3-D these days, as witnessed by films from “Avatar” to “Alice in Wonderland,” but the real driver of 3-D demand was on display last week in the form of the Masters golf tournament, which Comcast and Verizon both showed in 3-D. Indeed, sports, not Hollywood movies, will drive 3-D adoption, and in doing so will likely lead to a wave of upgrades in our last-mile broadband infrastructure.

    Hollywood is betting on 3-D movies partially because it finally has the processing power and infrastructure available to film and edit movies in 3-D, which can produce a petabyte of digital information, but mostly because it’s hoping that 3-D movies will sell people on the cinema experience and later compel them to buy Blu-ray DVDs. “Avatar” director James Cameron earlier this year quipped that the movie industry would have gone the way of the music industry if it weren’t for bandwidth constraints. Beefing up the required bandwidth to watch a movie is one way to stall piracy and make streaming or downloads more difficult.

    I’m sure the cinema experience will be hot, but I think hoping that people will buy DVDs runs counter to the burgeoning trend of streaming (GigaOM Pro subscription req’d) or getting most of our content via services like iTunes, Unbox or Netflix. So when it comes to widespread 3-D content, look to last week’s Masters broadcast and ESPN’s upcoming broadcast of 25 World Cup games.

    Certain sports providers like Major League Baseball and the National Hockey League are already pushing the envelope by streaming games live, and the logical bet is they will turn to 3-D as the technology becomes more advanced and is increasingly used in consumers’ homes. Which is why, whether delivered as an over-the-top video service or as a pay-TV channel, 3-D may be the next killer app driving last-mile infrastructure upgrades.

    But for the network provider, that love of streaming or even watching 3-D sports via a pay-TV channel on Comcast or Verizon comes at a price. Comcast delivered its 3-D Masters stream using the equivalent of one HD channel, which requires about 18.75 Mbps. Most cable providers can fit two HD channels into each of a limited number of slots — a constraint dictated by the spectrum each cable plant has (see here for a video on how a cable plant works).

    Verizon, which is thinking even farther ahead, has told me that true holographic 3-D might require 100 Mbps to deliver, and estimates that delivering 3-D video over broadband pipes takes 1.8 times more bandwidth than delivering an HD stream. Of course, with a fiber network like FiOS, Verizon is happy to push the envelope on services.
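
    To put those figures side by side, here’s a quick back-of-the-envelope calculation using only the numbers cited above (a sketch; it assumes the 1.8x multiplier applies to a single HD-quality stream):

    ```python
    # Rough bandwidth math using the figures cited in this post.
    hd_channel_mbps = 18.75      # one HD channel, per Comcast's 3-D Masters stream
    three_d_multiplier = 1.8     # Verizon's estimate for 3-D vs. a plain HD stream
    holographic_mbps = 100       # Verizon's guess for true holographic 3-D

    three_d_stream_mbps = hd_channel_mbps * three_d_multiplier
    print(f"3-D stream: ~{three_d_stream_mbps:.1f} Mbps")                                # ~33.8 Mbps
    print(f"Holographic vs. one HD channel: {holographic_mbps / hd_channel_mbps:.1f}x")  # ~5.3x
    ```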

    The cable and pay-TV industry will likely focus on compression technology that can deliver both 3-D and 2-D channels in high definition in the space now used for 1.5 HD channels, says Jim Strothmann of Cisco’s Service Provider Video Technology Group, which provides both set-top boxes for pay-TV companies and the back-end equipment on the networks. However, the transition to 3-D television is coming, and Strothmann says it will put cable providers in a bind as they try to figure out how to allocate their limited spectrum for delivering television channels to allow for both HD and 3-D.

    Much like the current constraints on delivering channels in both HD and standard definition are forcing cable companies to use compression, switch certain channels from analog to digital signals and take other measures designed to free up capacity on their networks, the switch to 3-D could have the same effect. “People may not remember but HD launched in 1998…and we’re kind of the 1998 of 3-D TV,” Strothmann told me. “And when people start making that new TV purchase, today the new TV will be 3D-capable and we’ll see a similar adoption curve as it starts to penetrate.” Sports, he added, will be the driving factor.

    Perhaps greater demand for 3-D (GigaOM Pro sub req’d) will be enough to force cable companies to move to true IPTV, where channels are delivered on demand rather than all being broadcast at once, and force them and any other ISP attempting to compete to extend fiber further out toward the edge of their networks. I may not care about 3-D sports, but I wouldn’t mind it leading to faster broadband.

    Image courtesy of Flickr user Jimf0390

  • Is Twitter Really Building Its Own Data Center?

    Twitter will move into its own data center soon as it seeks to scale its social messaging service, according to a presentation by John Adams, one of the messaging service’s operations engineers. Speaking at the Chirp developer conference yesterday in a session on scale, he laid out Twitter’s strategy to keep the fail whale at bay — which included plans to soon move to its own data center (see slide below).

    It’s unlikely Twitter is building and operating this data center itself. In 2008 Twitter signed a contract to host its servers at NTT because the latency in the cloud was too high for its service, and last August NTT said it had leased 15,000 square feet in Santa Clara, Calif. for expanding its data center operations in part because of Twitter’s success. So my hunch is that Twitter is moving into NTT’s data center dedicated to the messaging service, as opposed to building and operating its own, which would take a while. I’ve reached out to NTT and Twitter for more information. In 2008 Twitter saw year-over-year traffic growth of 752 percent and from 2008 to 2009, traffic rose 1,358 percent. It serves 55 million tweets a day.

    Twitter has gone from hosting aspects of its service on Amazon Web Services and via Joyent to discarding the cloud because latency was too high (we’re going to have a talk about improving latency in the cloud at our Structure 10 conference in June). Essentially, like most IT folks, Twitter’s operations group plays a game of whack-a-mole, adding hardware, improving algorithms or deploying new code to clear the next engineering bottleneck.

    Even if the dedicated NTT data center space manages to improve the service for a while, will Twitter ever end up following Facebook, which this year said it would build its own data center so it could control costs and reduce its energy consumption? And what does this say about the evolution of scalable infrastructure?

    Chirp 2010: Scaling Twitter

  • NYC Cable Cos. Let Wi-Fi Roam and Users Get More Free Hotspots

    The three cable providers in the New York City metro area have banded together to create a Wi-Fi network that any of the companies’ customers can use, essentially turning the city into a cluster of hotspots for all those folks toting smartphones and iPads. Cablevision, Time Warner Cable and Comcast have signed roaming agreements, so customers of one can get on the Wi-Fi network owned by one of the others for free.

    Last month, Om mentioned that Time Warner Cable was offering Wi-Fi service to its customers in partnership with Cablevision. The announcement today adds Comcast and formalizes the idea of the three as roaming partners anywhere the members have Wi-Fi networks in the NYC area. Back in January I pondered this exact sort of relationship developing among ISPs:

    So will ISPs take the consumer love of ubiquitous broadband and carriers’ need for offload to the next level and create the equivalent of roaming agreements for Wi-Fi? Greg Williams, the new SVP of corporate development at BelAir Networks, thinks they might. …He wonders if carriers will negotiate with each other and fixed-line ISPs to get access for their wireless subscribers, especially in congested cities such as New York or San Francisco.

    At the time I was skeptical because I couldn’t see the big carriers — namely AT&T and Verizon — doing anything to radically cut into their data revenue (GigaOM Pro sub req’d) from their 3G networks, but having cable providers offer such a service makes sense, especially given that the cable guys right now are also up against Verizon FiOS in some of their markets. The ability to offer free Wi-Fi while on the go (plus paid mobile broadband service through the partnership with Clearwire) makes their broadband product portfolio competitive with Verizon (which signed a Wi-Fi agreement of its own) and should help decrease churn. And all of this makes for happy Wi-Fi users.

    Thumbnail image courtesy Flickr user Adventures in Librarianship.

  • Is Microsoft Testing Servers Running Cell-phone Chips?

    Microsoft may be testing ARM-based servers in addition to solid-state storage drives for its online services division — which operates sites like Bing — ostensibly in an effort to drive down energy costs without sacrificing performance. Just last week I wrote about a stealthy startup that uses the ARM-based architecture commonly found inside cell phones to deliver lower-power servers, and detailed how ARM plc appears to have plans to make inroads in the data center. Now I see that Microsoft has a job listing on its site for a software development engineer that reads:

    To provide sufficient server and networking capacity, the Autopilot Hardware team is involved in Data Center planning, new hardware expirementation [sic] including SSD and ARM, vendor relationships, delivery and installation, network management, and the development of software to automate provisioning and management of all hardware pieces in the dependency chain.

    Is this a huge win for ARM in the server business? No, most likely it’s Microsoft doing what any company with a gargantuan number of servers (Microsoft says hundreds of thousands) would do — which is test out all possible ways to cut down on energy usage. Last year, I wrote about the energy savings Microsoft experienced while using Intel’s low-power Atom-based processors inside its servers.

    However, its willingness to experiment with servers that use the ARM architecture found inside cell phones rather than the x86 architecture that Intel and AMD chips use is worth noting. Microsoft’s current data center strategy involves a manufacturing-line model where it uses a well-established supply chain and commodity parts that it can source easily and quickly in order to build data centers anywhere in the world as fast as possible. That’s not a data center operations model that’s going to support swapping out x86 servers for a specialized box running cell-phone chips anytime soon.

    Perhaps we’ll hear more about putting cell-phone chips in servers to cut back on energy consumption in the data center at our Green Net conference on April 29, when Bill Weihl, Google’s green energy czar, gives a presentation on how Google is innovating around energy use in its data centers (GigaOM Pro sub req’d). We’ll see then if Intel or AMD should be worried about their largest business lines — the sale of server chips — shrinking.

  • With More Cores, the Cell Phone Closes in on the Computer

    Intel plans to release a dual-core Atom chip during the second quarter, CEO Paul Otellini said on Tuesday. “The next innovation coming to Atom is on dual-core,” he said on the company’s first-quarter investor conference call. He didn’t, however, disclose whether the chips would be destined for Atom’s traditional home of netbooks, or for the smartphones and tablets into which Intel is also hoping to get its chips.

    The idea of smartphones with multiple processor cores isn’t a new one — last year I talked to Texas Instruments about it, and earlier this year Qualcomm said it would release a dual-core processor that could hit processing speeds of 3 GHz. Marvell is also exploring the idea of quad-core chips inside phones and other mobile devices. The goal of such tinkering is to beef up the performance of the smartphone so it can handle compute-intensive tasks like multimedia gaming and multitasking.

    As I explain in a new GigaOM Pro piece (sub req’d):

    Speaking in early January at the launch of the new Nexus One phone designed by HTC and Google, Andy Rubin, VP of engineering at the search giant, compared the device he held up to the laptops he carried around four or five years ago. He was a little off the mark; the Nexus One’s 1 GHz processor from Qualcomm isn’t quite as powerful as the 1.5 GHz Intel Centrino processor that sits inside my ancient Toshiba from 2004, and the phone doesn’t offer anywhere close to the 60 GB of memory provided by the eight-pound machine. But as he waved this phone around, his point was clear. In his hand wasn’t a mere phone, it was a computer.

    As the lines between computers and mobile devices blur, traditional PC vendors are building phones and the traditional phone manufacturers are trying to build mobile PCs. But with mobility come constraints — particularly around power consumption and battery life. So the big task for every device manufacturer is figuring out how to cram all the functionality of a big computer into a tiny handset. Many chip firms believe tomorrow’s phones will be powered by multicore processors that deliver the performance the consumer wants without destroying the lengthy battery life such devices need.

    So as more vendors add multiple CPU cores to their chips aimed at mobile devices, the computing gap between a mobile phone and a laptop will close, leaving users to focus on features such as keyboards and screen sizes when choosing their mobile compute device. The real question is when this happens. Texas Instruments believes we’ll see such multicore phones next year, but I wouldn’t be surprised if something sneaks out before this year is up.

  • Ericsson Sees the Internet of Things By 2020

    In 10 years there will be 50 billion devices connected to the web, declared Ericsson President and CEO Hans Vestberg yesterday. That differs from Intel’s estimate that by 2015 the world will have 15 billion connected devices, up from 5 billion now. However, the point is the same — mobile broadband and cheap chips equal a connected network of gadgets.

    Vestberg highlighted the benefits of connected health-care devices, which we’ve also featured. The smart grid (GigaOM Pro sub req’d) and the potential for connected appliances also will bring more devices online, in addition to the already proliferating connected consumer electronics devices like televisions, cameras and game consoles. Already, the carriers are salivating at the prospect of providing cellular connections to these products and have set up divisions dedicated to machine-to-machine connectivity, but Wi-Fi is also a contender as the wireless backhaul to the web.

    Large-scale projects such as Hewlett-Packard’s CeNSE network will also drive the number of connected devices, as will tracking modules for managing a company’s inventory or supply chain. So for those eyeing Ericsson’s connected future with skepticism, know that the technology already exists in the form of wireless broadband, while more chips combining the brains with radios will start hitting the market in the next few years. We’re just waiting on the business models and deployments.

  • Hmmm…Software That Predicts If You Will Do Crime & Time

    The Florida State Department of Juvenile Justice says it will use predictive analytics software from IBM to foretell which of its juvenile offenders are likely to return to crime. The software, made by the SPSS division that Big Blue purchased last year, will replace Excel spreadsheets analyzed by employees. The software can look at far more data inputs and potentially handle more juvenile offenders faster than the older methods, and presumably the ability to incorporate more data points could lead to better results. Those deemed likely to re-offend are given specialized treatment.
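
    The general approach — fit a statistical model on historical case data, then score new cases against it — can be sketched in a few lines. This is purely illustrative; it is not IBM’s SPSS product, and the features and data are hypothetical:

    ```python
    # Toy risk-scoring sketch -- NOT IBM SPSS. Features and data are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical records: [prior_offenses, age_at_first_offense, program_completed]
    X = np.array([[0, 16, 1], [3, 13, 0], [1, 15, 1], [5, 12, 0], [2, 14, 1], [4, 13, 0]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = re-offended, 0 = did not

    model = LogisticRegression().fit(X, y)

    # Score a new case; a high probability is what would flag someone for specialized treatment.
    new_case = np.array([[2, 13, 0]])
    print(model.predict_proba(new_case)[0][1])  # estimated probability of re-offending
    ```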

    The UK Ministry of Justice also uses IBM’s predictive software on its criminal population to see which offenders pose a greater threat to public safety upon release. IBM clearly plans to take SPSS beyond its former domain of market researchers and scientists and apply it to where the big money is — homeland security in these frightening times.

    Deepak Advani, vice president of predictive analytics at IBM, said, “Predictive analytics gives government organizations worldwide a highly-sophisticated and intelligent source to create safer communities by identifying, predicting, responding to and preventing criminal activities. It gives the criminal justice system the ability to draw upon the wealth of data available to detect patterns, make reliable projections and then take the appropriate action in real time to combat crime and protect citizens.”

    Is anyone else getting “Minority Report” flashbacks? I’m a little concerned that, even as we evaluate our laws protecting citizen and corporate electronic communications (GigaOM Pro sub req’d), we now have the tools to establish a reliable and cheap surveillance society. With the scale and flexibility of cloud computing, better data management flows and the infrastructure to run many of these queries, governments and private companies are going to have the resources to predict not only market trends and supply chain needs, but also behavior. IBM actually plans to marry its SPSS software to a scaled-out architecture to offer a data-analytics cloud.

    Combine good software and the cloud, and the scanning of older data for predictive analysis could soon start incorporating real-time data. Given that someone has already been arrested after making comments on his Twitter feed and the police regularly scour Facebook pages looking for suspects and threats, it’s not so far-fetched.

    Image courtesy of Flickr user AlanCleaver_2000

  • Do Neutral Wireless Networks Require an End to the Flat-rate Plan?

    The network neutrality debate — whether or not Internet Service Providers can discriminate against packets (GigaOM Pro sub req’d) or application providers — pits what the blogosphere often sees as the forces of good (Google, The Free Press) against the forces of evil (AT&T, Verizon, Comcast), while generally ignoring the technical realities and limits of the networks in question. So I was excited to see presented to the FCC this week a paper written by Scott Jordan, a professor of computer science at the University of California, Irvine, on whether or not one can or should apply net neutrality to wireless networks.

    The paper concludes that the differences between wireline and wireless networks do change the way network management is implemented, and suggests that creating the equivalent of an open interface for the transport layers (layers 1-3 in the OSI model) of a wireless network would be enough to prevent ISPs from stifling competition on wireless networks. From an abstract of the paper:

    We address whether differences between wired and wireless network technology merit different treatment with respect to net neutrality. We are concerned with whether the challenges of wireless signals and mobility merit different traffic management techniques, and how these techniques may affect net neutrality. Although wireless networks require stronger traffic management, we find these differences are only at and below the network layer, and hence wireless broadband access providers can effectively control congestion without restricting a user’s right to run the applications of their choice.

    However, for Jordan the open interface would be tied to pricing, notably the amount a user is willing to pay for certain prioritization or types of traffic at the lower levels. He explains in the paper itself:

    In contrast, many current plans are not application-agnostic and are hence not consistent with an open interface. Some plans for smartphones include unlimited amounts of data, but restrict use to certain devices (e.g. prohibit tethering to a laptop) and to certain applications (e.g. permit web browsing and email, but prohibit file sharing, streaming, and VoIP). The goals of traffic management can be more efficiently obtained through an application-agnostic interface that allows users to choose their own applications and to match these applications to QoS options based on price.

    Not only does this mean the flat-rate mobile broadband plan is dead, but the onus is on the consumer to understand what she wants to do with her device and subscribe to the correct pricing plan. Carriers are already weighing how they will change mobile broadband pricing (GigaOM Pro, sub req’d) to more accurately reflect usage, so by showing how usage-based pricing could fit within network neutrality rules, this paper could help carriers implement such plans. It explains from a technical perspective why wireless networks should also abide by network neutrality regulations and how to do so in a manner that respects the constraints unique to wireless networks. So there’s something in here for both consumers and carriers to potentially dislike.
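
    To make that concrete, here’s a toy sketch of what matching applications to priced QoS options might look like from the user’s side. The tier names, prices and usage figures are invented for illustration and are not taken from Jordan’s paper:

    ```python
    # Toy sketch of an "application-agnostic" interface: the carrier exposes priced QoS
    # tiers and the *user* maps applications onto them. All numbers are hypothetical.
    qos_tiers = {
        "best_effort":     {"price_per_gb": 0.00, "priority": 0},
        "low_latency":     {"price_per_gb": 0.05, "priority": 1},
        "high_throughput": {"price_per_gb": 0.02, "priority": 1},
    }

    # The user (not the carrier) decides which traffic gets which treatment.
    my_choices = {
        "voip": "low_latency",
        "video_streaming": "high_throughput",
        "web_browsing": "best_effort",
    }

    def monthly_cost(usage_gb_by_app):
        """Estimate a bill from per-application usage, given the user's tier choices."""
        return sum(usage_gb_by_app[app] * qos_tiers[my_choices[app]]["price_per_gb"]
                   for app in usage_gb_by_app)

    print(monthly_cost({"voip": 1.0, "video_streaming": 8.0, "web_browsing": 2.0}))  # 0.21
    ```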

    Image courtesy of Flickr user pfly

  • SpringSource Buys Startup to Scale Messaging in the Cloud

    SpringSource, a division of VMware, has purchased the open-source cloud messaging company behind the RabbitMQ software. The value of the deal was undisclosed, but the purchase of Rabbit Technologies Ltd. is yet another effort by VMware to become the operating system for enterprise clouds (GigaOM Pro, sub req’d) and add value to its commoditized hypervisor. It’s also the latest example of a company selling proprietary software buying up an open-source software company aimed at the cloud.

    Cloud providers use RabbitMQ to create a messaging server allowing them to quickly manage the flow of messages between applications. It can also be used to notify users of a web service when content on the site has changed, such as when someone posts a Facebook photo and the service sends out an email notifying all of a user’s friends.
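
    As a rough sketch of what that looks like in practice, here’s a minimal producer using the Python pika client. It assumes a RabbitMQ broker is running on localhost with default settings, and the queue name is made up:

    ```python
    # Minimal RabbitMQ producer sketch using the pika client (assumes a broker on localhost).
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Declare a queue (idempotent) and publish a notification-style message to it.
    channel.queue_declare(queue="photo_notifications")
    channel.basic_publish(exchange="",
                          routing_key="photo_notifications",
                          body="user123 posted a new photo")

    connection.close()
    ```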

    The RabbitMQ code was created by CohesiveFT and LShift based on the relatively young AMQP standards effort backed by major banks, Cisco and a handful of smaller companies. As hardware is virtualized, translating some of the network equipment, like load balancers, into software allows services running on the virtualized hardware to scale better. Hopefully we’ll learn more about SpringSource, RabbitMQ and VMware’s plans for becoming the cloud OS when VMware CEO Paul Maritz speaks at our Structure conference in June.

    Image courtesy of Flickr user Joshua Davis

  • Google on Net Neutrality, Its Fiber Buildout and Cloud

    Google’s core philosophy about opening up access to the world’s information is the reason behind the company’s pro-net neutrality stand, the building of its own fiber network and its search for protocols for moving information between cloud providers. Google discussed its information liberation efforts at the search giant’s Atmosphere event held today as part of Google’s efforts to push enterprise adoption of cloud computing.

    One of the biggest issues holding back cloud computing is the lack of protocols and a standard vocabulary around moving data from one cloud to another, said Vint Cerf, Google’s chief Internet evangelist. “Inter-cloud interaction is still in the formative stage,” he said. Cerf spoke on a panel at the all-day event held at Google’s headquarters with some 400 attendees representing corporate America. I’m watching via webcast, and my colleague Liz is contributing notes and photos from the event in person.

    Cerf likened the cloud today to corporate networks back in the pre-Internet days, explaining that the Internet emerged as the way to bring disparate networks together and to enable folks to move data from one network to another. That’s an especially interesting comparison given that both IBM and HP have said they view the cloud as open insofar as there are already existing protocols such as TCP/IP and HTTP to move data between different clouds.

    Cerf didn’t sound satisfied by this, and I don’t imagine he should be, given the security needs of data moving between clouds and the amount of bandwidth such information can require. Surely for sharing such large amounts of sensitive data, different protocols that are open and standardized might make sense. Look at companies like Aspera, which is offering a proprietary protocol to shift huge volumes of information between data centers.

    Plus, Cerf called for ways to move information between clouds that preserve the metadata that makes the data itself useful. If you think of the data as a piece of meat, say salami, the metadata is the additional ingredients and bread that determine whether the salami ends up as a muffaletta or an Italian sub when traversing from one cloud to another.

    In addition to protocols for moving information between clouds, Cerf said the industry needs to keep paying attention to IPv6 as the number of devices connected to the web increases, and it also needs to develop protocols to send information via broadcast, rather than sending everything via a one-to-one unicast connection. Other protocols or standards should focus on authentication and knowing who a person is on the web. Interestingly, none of the Googlers mentioned protocols for protecting data privacy or anonymity.

    The panel also touched on non-cloud issues such as the importance of net neutrality, with Cerf reiterating that Google isn’t calling for every packet to be treated the same, but rather making sure the owners of the pipe don’t behave anticompetitively toward content flowing over their pipes. Prioritizing the flow of information for legitimate network management reasons is fine, but blocking content to stifle competition isn’t.

    It wasn’t just Cerf speaking. Alan Eustace, SVP of engineering & research, and Jeff Huber, SVP of engineering, also shed light on Google’s plans. When asked about native or platform-specific apps versus the browser, Huber said, “The app model doesn’t scale well across different devices, and that’s why the browser and HTML 5 is important.”

    He also discussed a new native client that Google is developing that allows web apps to run at native speeds while keeping those apps partitioned off from all of the resources of the hardware, saying it will “raise the bar on what a web app will do.”

    Finally, Cerf and Huber explained why Google is building out its experimental fiber network to bring 1-gigabit-per-second speeds to 50,000 to 500,000 Americans. Simply put, Google needs data. “What does [a fiber network buildout] take technologically, and what does it cost not only to deliver it but to maintain it,” Cerf said. “Our business model isn’t to replicate that all over the world, but to understand it.” Later he added that Google might be able to bring new knowledge to the table, something that could help drive innovation in broadband (GigaOM Pro sub req’d).

    Google’s search for data and the trek to catalog information can obviously disrupt entire industries, but it’s clear that the company is stepping up to take a greater role when it comes to cloud computing and hosted applications. We’ve seen its effort to get enterprise customers on board, and it sounds like we’re going to see it attempt to drive standards as well. As it does so, Huber offered a reminder to those in the Atmosphere audience: “The more fundamental or structural thing is our commitment to openness…it’s your data, not something we’re trying to capture and keep.”

    I’m not sure that’s a message that’s getting through to some people so far, especially given Google’s reliance on proprietary code for its Google Apps platform to enable programs to scale across its infrastructure. But perhaps for enterprise customers, Google is saying enough of the right things to drive interest and eventual adoption. We’ll have to see.

  • Twitter Open-sources the Home of Its Social Graph

    Twitter today open-sourced the code that it used to build its database of users and manage their relationships to one another, called FlockDB. The move comes shortly after Twitter released its Gizzard framework, which it uses to query the FlockDB distributed data store up to 10,000 times a second without creating a logjam.

    The code was posted last night on GitHub, although as Twitter developer Nick Kallen writes (under a “warning” and a “what the hell is this?” label):

    This is in the process of being packaged for “outside of twitter use”. It is very rough as code is being pushed around. please forgive the mess.

    This is a distributed graph database. we use it to store social graphs (who follows whom, who blocks whom) and secondary indices at twitter.
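
    As a rough illustration of the data structure being described — a graph store that keeps both forward and backward edge indices so that “whom does X follow” and “who follows X” are each a single lookup — here’s a toy, single-machine sketch. It is not Twitter’s code; FlockDB distributes and persists this across many machines:

    ```python
    # Toy, in-memory sketch of a social-graph store with forward and backward indices.
    # FlockDB does this in a distributed, persistent way; this is illustration only.
    from collections import defaultdict

    class TinyGraph:
        def __init__(self):
            self.following = defaultdict(set)   # forward index: user -> users they follow
            self.followers = defaultdict(set)   # backward index: user -> users who follow them

        def follow(self, src, dst):
            self.following[src].add(dst)
            self.followers[dst].add(src)

        def unfollow(self, src, dst):
            self.following[src].discard(dst)
            self.followers[dst].discard(src)

    g = TinyGraph()
    g.follow("alice", "bob")
    g.follow("carol", "bob")
    print(g.followers["bob"])    # {'alice', 'carol'}
    print(g.following["alice"])  # {'bob'}
    ```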

    Still, ahead of Twitter’s Chirp conference this week — and in the wake of moves that may alienate some of the popular client applications through which many access Twitter — the company has released code that may improve the web for all. In a GigaOM Pro piece published over the weekend (sub. req’d), Derrick Harris said:

    Twitter’s newly open-sourced Gizzard tool seems to have promise, as well. By eliminating some pain from the often difficult sharding process, Gizzard makes it easier to build and manage distributed data stores that can handle ultra-high query volumes without getting bogged down. Like Google, Yahoo and Facebook before it, Twitter has played a role in evolving how we use the web, and software developed within its walls should be a hot commodity for present and future Twitter-inspired sites and products.

    Simply because of the number of users and the scale of its service, Twitter is solving problems that many other web-based startups hope they will have one day. So now I’m back to wondering if FlockDB and Gizzard will join the ranks of Hadoop or Cassandra as open-source solutions for managing data at webscale.

    Image courtesy of Flickr user Tim Morgan

  • Comcast Didn’t Kill Net Neutrality Last Week

    A federal court of appeals said last Tuesday that the Federal Communications Commission wasn’t justified when it censured Comcast back in 2008 for blocking peer-to-peer files. At the time, I said the ruling could call into question the FCC’s ability to regulate several aspects of high-speed Internet service, including network neutrality. But after talking last week to people in D.C., it’s clear the consensus is that regulations guaranteeing net neutrality will survive, and that the FCC will likely begin a proceeding to solidify its authority by reclassifying Internet access.

    The issue isn’t that Comcast sued the FCC, but that the FCC was on weak footing thanks to previous decisions it made beginning in 2002. In that series of rulings, the FCC decided that various forms of high-speed Internet access should not be classified as a transport service, like a telephone line is, but rather as an information service, like Google or Facebook. In 2002, the FCC classified cable as an information service, and did the same with DSL in 2005, wireless broadband in 2007 and broadband over power lines in 2008. The agency decided that Internet access was more than transport because ISPs also offered email accounts, portals, storage and other technologies on top of the transport layer.

    In a GigaOM Pro report published late Friday (sub req’d), I lay out the options the FCC has before it from a regulatory perspective, and explain how we got here and where the FCC will go next. I also outline how the ISPs will likely push for Congress to get involved, rather than see their Internet access reclassified as transport, because that reclassification gives the FCC more regulatory authority over their pipes.

    So keep an eye on the FCC, because we’re likely to see it issue a Notice of Inquiry in the coming weeks on the topic of reclassifying high-speed Internet access (not broadband, which may encompass the services such as email and storage that led the FCC to classify the offering as an information service rather than transport in the first place). That will undoubtedly be followed by months of comments and inflammatory rhetoric about ISPs already being committed to net neutrality and arguments saying that the FCC wants to regulate “the Internet” (it’s trying to regulate access to it, not the web itself).

    Once it gets through the reclassification process, which could take at least six months, the net neutrality proceedings will stay open and the FCC will likely take up the topic once its authority is firmly in place. Also, expect the FCC to follow any of its orders reclassifying Internet access as a transport service with later proceedings where it says it won’t regulate certain aspects of high-speed Internet access such as tariffs and peering agreements.

    Amid this wonky debate, keep in mind that regulating communications transport is what the FCC was set up to oversee, and the current Commission apparently intends to do it. It’s not going to be incredibly aggressive about it (otherwise it could issue a Declaratory Order saying that high-speed Internet access is a transport service without going through the comment period), but eventually net neutrality regulations that do contain provisions allowing ISPs to manage their networks will be passed. So Comcast didn’t kill net neutrality, but it did delay it for a while.

  • The FCC Wants to Test Your Broadband Speed Limit

    The Federal Communications Commission wants to know how fast your broadband speed is, so it’s looking for volunteers to install gear that will provide accurate readings of it. In a blog post today the agency said it has chosen SamKnows Ltd., which also worked to establish speed tests for British telecom regulator Ofcom, to help it in its task. The FCC will issue a public notice seeking more input on the process in “the coming days,” and will also detail how people can volunteer to install the gear in “the next few weeks.”

    Gathering quality data plays an important role in the FCC’s National Broadband Plan, acting as the agency’s only solution to the relative lack of competition in most U.S. broadband markets, so this move to install actual hardware on people’s modems is a big deal.

    It’s part of a multipronged effort to gather data on broadband quality, access and speed. Other efforts include getting consumers to go to the FCC’s broadband.gov site to test their speeds and a partnership with comScore, though that’s been criticized as being fairly unscientific. The agency has also expanded the information it collects from ISPs, but some of its ability to force carriers to give up that information has been thrown into doubt after the FCC lost a legal battle against Comcast over its authority to regulate aspects of high-speed Internet access. So this effort and the eventual volunteers might be the FCC’s best hope of gathering data that will stand up to court fights and help defend consumers from anti-competitive practices — at least while the current commissioners are at the FCC and want to fight for consumers.
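
    The basic measurement these boxes perform is conceptually simple — fetch a known payload from a test server and time it — even if doing it rigorously (peak vs. off-peak hours, in-home Wi-Fi interference, server placement) is the hard part. A minimal sketch, with a placeholder test URL rather than any real measurement server:

    ```python
    # Crude throughput measurement: download a known-size file and time it.
    # The URL is a placeholder; real tests (like SamKnows') use dedicated, well-placed servers.
    import time
    import urllib.request

    TEST_URL = "http://example.com/10MB.bin"  # placeholder test file

    start = time.time()
    data = urllib.request.urlopen(TEST_URL).read()
    elapsed = time.time() - start

    mbps = (len(data) * 8) / (elapsed * 1_000_000)
    print(f"Downloaded {len(data)} bytes in {elapsed:.1f}s -> ~{mbps:.1f} Mbps")
    ```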

    Related content from GigaOM Pro (sub req’d):

    Who Will Profit From Broadband Innovation?

  • Smooth-Stone Bets ARM Will Invade the Data Center

    Smooth-Stone CEO Barry Evans

    Intel, with its x86 architecture, has owned the corporate computing market for decades, but Barry Evans, CEO of Austin, Texas-based systems startup Smooth-Stone, thinks it’s time for a change. Smooth-Stone, which Evans co-founded in 2008, is using ARM-based processors to create a box for the data center. Its goal isn’t a slight improvement in power efficiency, he said, but to “completely remove power as an issue in the data center.”

    However, the specifics of Evans’ stealthy company are overshadowed by one key question: Is ARM ready to invade the data center? Evans thinks yes, and I think the IP licensing company behind the architecture does too, because it appears to be cooking up something that involves using its architecture inside servers. Ian Ferguson, director of enterprise and embedded solutions at ARM Plc, declined to talk to me for this story, saying the timing was not yet right to talk about the company and servers “for a few reasons that I can’t discuss.”

    Evans was formerly an executive at Intel, where he worked in the chip maker’s ARM business unit; he stayed with the division after Intel sold the line of chips to Marvell. Other members of the Smooth-Stone team hail from HPC systems startup Convey Computer and Newisys, a company that helped build the first server optimized for AMD’s Opteron chips and was purchased by Sanmina in 2004. Evans was coy about what exactly Smooth-Stone is doing, but did say that the system the company is building is not designed for the high-performance computing market and will use ARM-based chips.

    However, it’s not enough to swap out x86 chips for those based on ARM and expect the new systems to work. For one thing, it takes a lot of low-power processors to equal the performance of a single multicore Nehalem chip. An even bigger challenge is getting all of the cores to work together efficiently, a problem that another low-power systems company, SeaMicro, is likely solving as well with a box that contains 512 Atom chips. When I asked Evans if Smooth-Stone had built a custom chip to handle the networking and coordination of the ARM-based chips, he said, “Our IP goes all the way down to the silicon level.”

    As for when the rest of the world will see this product, Evans declined to give a date, nor would he list customers. But engineers at Cisco, Microsoft and Dell have all mentioned Smooth-Stone to me and appear to know something about what it’s attempting to do.

    I’ve written about how x86 may be on the verge of losing its hegemony as mobile computing turns to ARM-based architectures and graphics processors from AMD and Nvidia move upmarket into high-performance computing and even into some servers. But the commodity servers that populate the world’s data centers (there are still a few specialty servers using Sun’s Sparc chips or IBM’s PowerPC chips out there, but they are not in the mainstream) seemed fairly safe.

    After all, there’s a ton of software written for them, and the low cost of such machines makes it hard to imagine someone swapping them out for specialty boxes. Evans doesn’t deny the lure of the commodity server, but given the need to add so many more servers to meet our rising demand for computing, and the merely incremental power savings today’s approaches offer, he’s betting that the end of x86 domination may be in sight. I’m curious whether alternatives to commodity machines can make it inside the data center, and will be talking about it with some smart people at our Structure conference in June.

    “Think of the install base of servers and all of the new servers coming online and how most approaches today save 10 or 20 percent on power,” Evans says. “Now imagine saving 99 percent on power and how completely that changes things and takes power out of the equation.”

    If that’s what Smooth-Stone does, it could certainly get CIOs thinking.

    Related content from GigaOM Pro (sub req’d):

    Hot Topic: Green Data Centers

  • Verizon Tries to Patent Spot Pricing for the Cloud

    The U.S. Patent and Trademark Office today published a patent application from a division of Verizon Communications for a way to offer market-based spot pricing for cloud computing. The application was filed in October of last year. It could mean that Verizon plans to offer spot pricing for its cloud computing product, or it could just be the result of another overzealous legal department trying to corner the market on a way of doing business.

    If Verizon does plan to offer dynamic pricing, whereby what customers pay for compute time depends on how heavily the company’s cloud infrastructure is being utilized, that could be a good thing for the industry. Amazon in December launched spot pricing for its Elastic Compute Cloud service, and at the time Derrick Harris, a colleague at our GigaOM Pro research service, pointed out that while dynamic pricing was a good thing, Amazon’s huge market share meant that dynamic pricing without a major competitor (sub req’d) wouldn’t drive costs down quickly. Throwing Verizon into the mix could drive competition.
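
    For a sense of how utilization-based spot pricing works in general — this is not how Verizon’s patent application or Amazon’s auction actually sets prices — here’s a toy model in which the hourly price floats with how busy the cloud is. All numbers are invented:

    ```python
    # Toy utilization-based spot price: the fuller the cloud, the higher the price.
    # Numbers are invented for illustration; they describe no actual provider's pricing.
    def spot_price(base_price, utilization, floor=0.3, ceiling=4.0):
        """Scale a base hourly price by current utilization (0.0-1.0)."""
        multiplier = max(floor, min(ceiling, utilization / (1.0 - utilization + 1e-9)))
        return round(base_price * multiplier, 4)

    for u in (0.2, 0.5, 0.8, 0.95):
        print(u, spot_price(base_price=0.10, utilization=u))
    ```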

    And given that Verizon would be going up against Amazon, perhaps its decision to patent the idea of dynamic pricing makes sense. After all, it is Amazon that holds the infamous “1-Click” patent, which allows users to make purchases with just one click and which it used like a cudgel to beat Barnes & Noble at the online retailing game. Perhaps Verizon just didn’t want to be at the wrong end of the patent stick when it faced off against Amazon in the cloud.

  • You’ve Got Mail! Amazon Creates Cloud Notification Service

    Amazon Web Services has launched its Simple Notification Service (Amazon SNS), which allows developers to create a push notification system for applications. The service allows companies to deliver messages to customers of their applications or even to other applications in a couple of different formats, among them HTTP and email. Amazon SNS could be used by system administrators in an IT department (notifying clients that they’re hitting a certain limit on storage capacity or that latency on their service is too high), or it could be used to build out notifications for mobile applications, such as letting consumers know when friends check into a location or when they have new email.
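
    As a rough sketch of the workflow — create a topic, subscribe an endpoint, publish a message — here’s what the storage-limit example above might look like using the boto3 Python SDK. The topic name and email address are placeholders, and AWS credentials are assumed to be configured:

    ```python
    # Minimal SNS sketch using the boto3 SDK (assumes AWS credentials are configured).
    import boto3

    sns = boto3.client("sns")

    # Create a topic, subscribe an email address, and publish a notification to it.
    topic = sns.create_topic(Name="storage-alerts")
    sns.subscribe(TopicArn=topic["TopicArn"],
                  Protocol="email",
                  Endpoint="admin@example.com")   # placeholder address
    sns.publish(TopicArn=topic["TopicArn"],
                Subject="Storage limit warning",
                Message="Client foo has used 95% of its storage quota.")
    ```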

    Developers using the service pay per use, as with all Amazon cloud products. The price includes per-request, notification delivery and data transfer fees, but developers can get started with Amazon SNS for free. Each month, Amazon SNS customers get the first 100,000 Amazon SNS requests, the first 100,000 notifications over HTTP and the first 1,000 notifications over email free. After that, prices range from 6 cents to $2 per 100,000 messages sent for delivery, plus 8-15 cents per gigabyte of data transferred.

    Related GigaOM Pro content (sub. req’d): Report: Delivering Content in the Cloud

    Image courtesy of Flickr user Ed Siacoso (aka SC fiasco)

  • Gizzard Anyone? Twitter Offers up Code for Distributed Data

    Twitter last night offered up the code for Gizzard, an open-source framework for accessing distributed, scalable data stores quickly, which could become an important component of building out a web-based business, much like Facebook’s Cassandra project has swept through the ranks of webscale startups and even big companies.

    Gizzard is a middleware networking service that sits between the front-end website client and the database, and attempts to divide and replicate data in storage in intelligent ways that allow it to be accessed quickly by the site. From the Twitter blog post:

    Twitter has built several custom distributed data-stores. Many of these solutions have a lot in common, prompting us to extract the commonalities so that they would be more easily maintainable and reusable. Thus, we have extracted Gizzard, a Scala framework that makes it easy to create custom fault-tolerant, distributed databases.

    Gizzard is a framework in that it offers a basic template for solving a certain class of problem. This template is not perfect for everyone’s needs but is useful for a wide variety of data storage problems. At a high level, Gizzard is a middleware networking service that manages partitioning data across arbitrary backend datastores (e.g., SQL databases, Lucene, etc.).

    The goal is to deliver relevant information to users faster across the huge data sets that Twitter manages. Twitter said its FlockDB distributed graph database can serve 10,000 queries per second per commodity machine using Gizzard. I heard Twitter’s Kevin Weil talk about the project a few weeks ago at SXSW, and at the time he said the company was building something to help manage distributed data sets using a Scala framework. This appears to be exactly that.
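
    The core idea — a layer that decides which backend shard a given piece of data lives on — can be sketched in a few lines. This is a toy hash-based partitioner for illustration only; Gizzard itself also handles replication, fault tolerance and migrations:

    ```python
    # Toy hash-based partitioner: route each key to one of several backend stores.
    # Gizzard does far more (replication, fault tolerance, migrations); this is illustration only.
    import hashlib

    class Partitioner:
        def __init__(self, backends):
            self.backends = backends  # e.g., database connections; plain dicts stand in here

        def _shard_for(self, key):
            digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
            return self.backends[digest % len(self.backends)]

        def put(self, key, value):
            self._shard_for(key)[key] = value

        def get(self, key):
            return self._shard_for(key).get(key)

    store = Partitioner(backends=[{}, {}, {}])
    store.put("user:42:followers", ["1", "7", "19"])
    print(store.get("user:42:followers"))
    ```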

    Whether Gizzard turns into another Cassandra or fizzles is open for debate, but the act of figuring out how to work with giant data sets and then sharing that knowledge with others is an essential step in creating webscale businesses. Thus, Twitter’s decision to solve its own problem and then share its solution is beneficial for the startup community.

    I’ve chatted with developers who feel that Google’s development of BigTable and its decision to keep it to itself stalled the progress of building out webscale infrastructure for a few years, until Facebook opened up Cassandra. This may be sour grapes — after all, a company does not have to open up code that gives it a strategic advantage — but it does highlight how difficult it is to build code that can handle and scale for millions of users. Sharing ways to do that lowers the barriers to entry for startups, much like compute clouds such as Amazon’s EC2 or Rackspace’s Cloud Servers can.

    So for anyone who wants some Gizzard, Twitter is happy to share.

    Related GigaOM Pro Content (sub req’d): What Cloud Computing Can Learn from NoSQL

    Image courtesy of Flickr user Sifu Renka

  • Comcast vs FCC: In Battle For Net Neutrality, Did the Courts Hand Comcast a Pyrrhic Victory?

    The U.S. Court of Appeals for the District of Columbia handed Comcast a victory against the Federal Communications Commission today, but in winning its appeal, Comcast may have just set off a war — one it could wind up losing. As we noted, a three-judge panel took issue with the FCC’s attempts to regulate cable’s ability to manage its networks, not simply because there wasn’t a formal rule-making process in place, but because the FCC appears to have overstepped its bounds when it tried to regulate how a cable company managed its network.

    Stifel Nicolaus, an investment bank, lays it out well in a research note:

    Today’s ruling is destabilizing as it could effectively free broadband providers from FCC regulation over broadband, including net neutrality, rules requiring transparency letting customers know what actual speeds they are receiving, the ability to prioritize emergency communications, consumer privacy protections (though these could presumably be imposed to a certain degree by the FTC). But it could lead the FCC to reclassify broadband services as the more heavily regulated “telecommunications service” under the traditional Title II – which the Bells, cable, and wireless companies (e.g., T, VZ, CMCSA) strongly oppose.

    So while this decision does throw the FCC’s current network neutrality rule-making into disarray, it also could affect the agency’s attempts to regulate a wide variety of broadband issues, including how broadband providers tell consumers what their true Internet speeds are, as well as how the agency can enact universal service fund reform and set privacy rules on the Internet.

    What the FCC needs to figure out is whether it should assert its authority narrowly for each issue it’s trying to address, and risk that authority being challenged each time (possibly in court), or whether Congress needs to step in to grant the FCC more power. Already several consumer organizations have issued statements about the ruling, such as Gigi Sohn of Public Knowledge:

    “The FCC should immediately start a proceeding bringing Internet access service back under some common carrier regulation similar to that used for decades. Some parts of the Communications Act, which prohibit unjust and unreasonable discrimination, could be applied here. The Commission would not have to impose a heavy regulatory burden on the telephone and cable companies, yet consumers could once again have the benefit of legal protections and the Broadband Plan could go forward.”

    The FCC itself is being coy on the reclassification issue, saying:

    “The FCC is firmly committed to promoting an open Internet and to policies that will bring the enormous benefits of broadband to all Americans. It will rest these policies — all of which will be designed to foster innovation and investment while protecting and empowering consumers — on a solid legal foundation. Today’s court decision invalidated the prior Commission’s approach to preserving an open Internet. But the Court in no way disagreed with the importance of preserving a free and open Internet; nor did it close the door to other methods for achieving this important end.”

    Apparently everyone — even Comcast, which originally sued the agency — is in favor of a “free and open Internet.” Comcast, which declared itself vindicated, was careful to point out that it was in favor of the open Internet principles, despite its original P2P blocking efforts:

    Comcast remains committed to the FCC’s existing open Internet principles, and we will continue to work constructively with this FCC as it determines how best to increase broadband adoption and preserve an open and vibrant Internet.

    Other major ISPs have also been quick to favor the existing broadband principles (notably, those do not include network neutrality provisions on wireless networks or require ISPs to be transparent about any network management). AT&T, in what looks to be a bid for self-regulation by ISPs, suggested that the FCC’s censuring of Comcast wasn’t needed because Comcast had stopped the throttling on its own (although it did lie about what it was doing, pack hearings on the topic in its favor and behave poorly throughout the process). AT&T attributed its statement to Jim Cicconi, senior executive vice president of external and legislative affairs:

    “If, after assessing its options under Title I, the FCC feels it needs to clarify its jurisdiction as a result of today’s decision, we hope the issue would be referred to the U.S. Congress which alone confers the Commission’s legal authority. In any circumstance, AT&T pledges to work constructively with the FCC as it considers these questions.”

    Verizon’s statement — attributed to Randal S. Milch, executive VP and general counsel — doesn’t emphasize the need for Congress to get involved, despite one of its top policy wonks calling for Congress to figure out new ways to regulate broadband last month:

    “The court recognized that the FCC does have Title I ancillary authority over Internet access. In this case, the FCC simply failed to link its actions to its statutory responsibilities. The FCC’s authority supplements the various other consumer protection and competition laws that apply to all members of the Internet ecosystem.”

    As this devolves into a fight between lawyers, it’s important to realize what’s at stake amid all of the fancy language, namely the ability for content to pass relatively unimpeded over the pipes that provide your broadband access. At stake as well is who gets to make the rules that govern those pipes, no matter if the ISP is a telco, a cable company or even Google.

    Post and thumbnail photos courtesy of Flickr user Steakpinball

  • Scotty, We Need More Bandwidth!

    A slew of news out this morning — ranging from AT&T’s $1 billion expansion of its network to Cisco’s update of its unified computing system — highlights the continued need to invest in networking. We’re piling on compute power and boosting storage at a much faster pace than our networking infrastructure can handle — both inside the data center (GigaOM Pro sub req’d) and on the long-haul networks running between them (GigaOM Pro sub req’d). There isn’t really a Moore’s Law that pertains to networking.

    Which is why in some cases, it’s just a matter of plunking down more cash to add gear and perhaps undersea capacity, as AT&T said it plans to do for business networks. Cisco is taking a two-sided approach to the networking bottleneck by providing servers that can deliver faster and easier networking inside virtualized data centers with an upgrade to its unified computing system, as well as building routers for long-haul and edge networks that can handle a whole lotta terabytes. Last month Marvell upped the data center networking ante by announcing 40 gigabit Ethernet chips for when 10 won’t do.

    And back to the long-haul networks: Cisco sold its massive ASR-9000 core network router to NTT Communications Corp. last week, and today EETimes quotes NTT CTO Doug Junkins as explaining that he isn’t pleased by the higher prices for advanced networking gear (the optics components are 10-30 times more expensive than for 10 GigE gear), but is ready to take the plunge because of customer demand:

    “We are a wholesale IP transit provider, and our highest growth is in 10G Ethernet ports for new customers,” said Junkins who is also vice president of IP development for NTT Communications’ business network unit. “We have customers today bundling more than ten 10G Ethernets from our backbone to their net, so the day 100G Ethernet is available, we will start provisioning for it,” he said.

    It’s not just consumers downloading video or the love of smartphones that’s causing bandwidth demand to skyrocket, but the need for access to software, platforms and infrastructure as a service by businesses and our increasing reliance on the network for improving productivity and seeding innovation.