Author: Stacey Higginbotham

  • Is There a Rollable Display in Your Future?

    Sony’s announcement today that it’s developed a thin OLED display flexible enough to roll around a pencil (or any other 4 mm object) got me thinking about screens. The screen is quite literally our window to the web for computers, mobile phones, tablets and whatever other device we may have in our pockets, and anyone who walks outside with a Nexus One will quickly tell you that having a fast phone with neat apps doesn’t help when it’s a sunny day and your OLED touchscreen is so washed out that you can’t see anything.

    Flexible screens are pretty cool, as are those you can see in the daylight. But when considering the use cases for mobile phones, laptops, e-readers or tablets, what screen functionality do we need? When it comes to e-readers in particular, some argue that screens that don’t emit a lot of light make reading more comfortable. But then again, e-Ink devices like the Kindle don’t have color, which most people consider pretty darn essential. Humans, barring an accident, disease or genetic defect, are visual creatures, and so as we take computing on the go and embed both connectivity and electronics into our tasks, the right screen is essential.

    As I’ve noted previously, a greater emphasis on screen technology (and size) may change the dynamics in the semiconductor industry, in that beauty (nice screens) could soon cost twice as much as the brains (fast processors). But as Sony offers us an opportunity to add flexibility to our displays, the issue of figuring out what devices need in a display becomes that much more difficult. We may have one-size-fits-all devices, but will we ever make a one-size-fits-all screen? Readers, is a single screen (or device) really appropriate for our needs? If so, what features should that display have?

  • Stat Shot: Frightening Phone Bills

    The Federal Communications Commission, as part of its effort to boost competition by lowering the cost of switching providers, today released a survey showing that early termination fees (ETFs) keep consumers wedded to carriers even when they want a divorce. It also released data to support its effort to get carriers to notify consumers before their mobile phone bills get too expensive, the so-called bill shock regulations.

    The survey notes that 83 percent of adults in the U.S. have a cell phone, and 80 percent have a personal cell phone that an employer doesn’t subsidize. And 58 percent of users are happy with their coverage. That being said, 30 million Americans (roughly 17 percent) were found to have received bills that were higher than they anticipated — of those, 21 percent had children under the age of 18.
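
    As a rough sanity check, the 30 million figure lines up with the survey’s percentages if you assume roughly 235 million U.S. adults (my assumption for illustration; the survey doesn’t state its base):

        # Rough reconciliation of the FCC survey figures quoted above.
        # The 235 million adult-population base is an assumption for
        # illustration, not a number from the survey itself.
        us_adults = 235_000_000

        cell_owners = us_adults * 0.83   # 83 percent of adults have a cell phone
        bill_shock = cell_owners * 0.17  # roughly 17 percent got surprise bills

        print(f"Estimated cell owners: {cell_owners / 1e6:.0f} million")
        print(f"Estimated bill-shock victims: {bill_shock / 1e6:.0f} million")
        # ~195 million cell owners and ~33 million surprised by a bill,
        # the same ballpark as the 30 million the FCC cites.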

    It also found that 43 percent of customers with contracts said ETFs were a major reason they would stay with their current service (which may be why AT&T is following Verizon’s boost in ETFs with one of its own come June). The termination fees aren’t used as much in fixed-line broadband, with 21 percent of users saying that their contracts include an early termination fee. Of those users, however, fully 64 percent don’t know what the fee is — a higher level of confusion than for cell phone service.

    On a call to discuss the survey, FCC officials shied away from saying whether the agency would step in and regulate higher ETFs (so far it has questioned them, but hasn’t stepped in to change them). So for those who don’t want to be locked into a carrier, buying an unsubsidized handset (GigaOM Pro sub req’d) and paying (in some cases) higher monthly bills may be the way to go.

  • T-Mobile USA Gets New CEO: Here’s What He Needs to Do

    T-Mobile USA President and CEO Robert Dotson will leave the nation’s fourth-largest carrier as of May 2011 and will be replaced by Philipp Humm, the former CEO of T-Mobile Deutschland. Dotson has been at the helm of T-Mobile USA, which is owned by Deutsche Telekom, for the last 15 years. However, as the mobile market shifts to higher-powered devices that consume a lot of data, T-Mobile has faltered with late network upgrades and a recent quarter that exposed weakness in its once-strong prepaid offering.

    Humm will take over as CEO of T-Mobile USA in February 2011, while Dotson will remain on as a non-executive board member until May of that year. But as the mobile market in the U.S. deals with the fact that most Americans already own a cell phone and that growth now has to come from machine-to-machine services and stealing customers away from the competition, things could get rough. As the new head of the smallest player in the space, here are a few things Humm will need to keep an eye on:

    Transition to LTE: T-Mobile may say its rollout of HSPA+ technology across its network will offer “4G speeds,” and for the next two years or so it might, but T-Mobile can’t rest on smartphones during that time without preparing to deliver ever more data. LTE is a key component of that delivery, as it not only has the potential for faster speeds but also uses spectrum more efficiently, which means it will boost T-Mobile’s capacity.

    Spectrum: T-Mobile USA’s spectrum holdings limit where it can expand and how much capacity it will have for tomorrow’s bandwidth-sucking applications. The availability of 60 MHz of AWS spectrum, which will be auctioned off next year, would help, as would a deal with another player like Clearwire, which may be shopping its spectrum around.

    Prepaid: T-Mobile had its lunch handed to it during the first quarter, when its prepaid net adds dropped by 92 percent. More competitors are entering the prepaid market and prices are falling rapidly as those providers compete for market share. T-Mobile either has to cut costs to the bone so it can offer rock-bottom prices or differentiate (GigaOM Pro sub req’d) so subscribers will pay more for its service.

    M2M: AT&T, Verizon, and even Sprint have efforts to sell their underlying network capacity to makers of gadgets and appliances. T-Mobile is no exception, although it has been quieter than the others: it has an agreement with Echelon to supply network capacity for smart grid services and has also offered up its network for other devices, but its smaller coverage area makes it a hard sell.

    So as Humm takes over T-Mobile USA, the problems of stalled growth and a saturated market may be familiar from his time running T-Mobile in Germany. But he will have his work cut out for him.

  • Unboxing Goes High Performance at the National Petascale Computing Facility

    The unboxing video was initially recognized back in 2006 as a key part of geek culture, and since then, the care and digital bits devoted to slicing through tape, carefully unwrapping your new toy and peeling off the plastic from the screen have proliferated. Some have mocked it, but the phenomenon has spread to other industries, with women showing off their latest shopping hauls in videos that are clearly influenced by unboxings.

    I don’t want to go into what the spread of unboxing videos may mean for our culture, but I do want to point out the most random and incredibly sincere unboxing video I’ve come across — that of high-performance computing gear from IBM, which is acting as the warm-up system for the coming Blue Waters petaflop supercomputer at the National Petascale Computing Facility at the University of Illinois at Urbana-Champaign (hat tip Inside HPC). The video isn’t a play-by-play unboxing because that would take too long, but is a lovingly shot homage to the new gear that I figured fellow geeks might appreciate.

    Related GigaOM Pro Content (sub req’d): Supercomputers and the Search for the Exascale Grail

  • The Top 10 Cities With the Best Broadband

    The company behind the broadband speed testing site Speedtest.net is ready to go beyond testing broadband quality and into the data game. Ookla, the three-year-old company based in Seattle that’s behind the online speed service, introduced a broadband index today that tabulates the results from the more than 1 million speed tests done each day around the world. The global average broadband speed is 7.69 Mbps, while U.S. speeds average 10.12 Mbps.

    Mike Apgar, co-founder and managing partner of Ookla, said the indexes will measure broadband speeds, ping times and jitter (the variation in latency from one packet to the next). His goal is to move the testing beyond the tech-savvy market (we use it!), so as to get a better sense of how broadband speeds really play out across the world. The FCC is encouraging consumers to use the sites (Ookla also runs a site that tests jitter and packet loss at pingtest.net) as part of its nationwide testing goals, and many of Ookla’s ISP customers also offer the test to their customers and host Ookla’s servers.
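
    For the unfamiliar, here’s a minimal sketch of how a tool like pingtest.net might derive jitter from a handful of ping samples, jitter being the average change in latency from one packet to the next. The sample values are made up for illustration:

        # Toy calculation of average latency and jitter from ping round-trip times.
        # The sample values are illustrative, not real pingtest.net data.
        pings_ms = [42, 45, 39, 61, 44, 40, 47, 43]

        avg_latency = sum(pings_ms) / len(pings_ms)

        # Jitter: average absolute change in latency between consecutive packets.
        diffs = [abs(b - a) for a, b in zip(pings_ms, pings_ms[1:])]
        jitter = sum(diffs) / len(diffs)

        print(f"Average latency: {avg_latency:.1f} ms")
        print(f"Jitter: {jitter:.1f} ms")  # lower is better for VoIP and gaming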

    Providing tests for ISPs is actually most of Ookla’s business. The next plank of the business strategy is the index data: Ookla hopes to provide the information for free to academic researchers, but it also plans to charge ISPs, analysts and governments for it. Ookla has no debt or venture capital, and is profitable.

    The company also today released a list of the top worldwide and U.S. cities based on their broadband speeds. It measured only cities where more than 75,000 people have been connecting for more than three months, using a 30-day rolling average. The results are subject to change, and given that no place in the U.S. ranks in the global Top 10 (the first U.S. city is San Jose, which is ranked 18th), I hope the results do shift.
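
    For the curious, here’s a minimal sketch of the kind of 30-day rolling average Ookla describes, computed per city from individual test results; the data layout and field names are my own illustration, not Ookla’s actual schema:

        # Illustrative 30-day rolling average of download speeds per city.
        # Records and field names are hypothetical, not Ookla's real schema.
        from datetime import date, timedelta
        from collections import defaultdict

        tests = [
            # (city, test date, download speed in Mbps)
            ("San Jose, Calif.", date(2010, 5, 1), 14.8),
            ("San Jose, Calif.", date(2010, 5, 20), 15.3),
            ("Brooklyn, N.Y.", date(2010, 5, 2), 11.9),
            ("Brooklyn, N.Y.", date(2010, 5, 25), 12.3),
        ]

        as_of = date(2010, 5, 26)
        window_start = as_of - timedelta(days=30)

        speeds = defaultdict(list)
        for city, when, mbps in tests:
            if window_start <= when <= as_of:
                speeds[city].append(mbps)

        for city, values in speeds.items():
            print(f"{city}: {sum(values) / len(values):.2f} Mbps (30-day average)")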

    Here are the top 10 U.S. cities and their corresponding 30-day average speeds (in Mbps):

    1. San Jose, Calif. 15.02
    2. Saint Paul, Minn. 14.53
    3. Pittsburgh, Pa. 14.18
    4. Oklahoma City, Okla. 12.12
    5. Brooklyn, N.Y. 12.10
    6. Tampa, Fla. 12.05
    7. Bronx, N.Y. 12.01
    8. New York, N.Y. 11.85
    9. Denver, Colo. 11.68
    10. Sacramento, Calif. 11.34

    The global top 10 (in Mbps):

    1. Seoul, South Korea 34.49
    2. Riga, Latvia 27.88
    3. Hamburg, Germany 26.85
    4. Chisinau, Republic of Moldova 24.31
    5. Helsinki, Finland 20.58
    6. Stockholm, Sweden 19.97
    7. Bucharest, Romania 19.68
    8. Sofia, Bulgaria 18.99
    9. Kharkov, Ukraine 18.15
    10. Kaunas, Lithuania 17.46

    Related GigaOM Pro Research (sub req’d): Big Data Marketplaces Put a Price on Finding Patterns

  • Announcing the Structure 2010 Launchpad Finalists!

    Vishal Sikka (SAP) joins Stacey onstage at Structure 2009.

    I can’t open my email these days without seeing a new cloud-based startup pitch (or three). Meanwhile, startups offering everything from platforms as a service to various ways to bridge hybrid clouds are raising venture capital (GigaOM Pro, sub req’d) after a fairly fallow period for investment, and large buyers are scooping up more mature startups left and right. Which is why this year at Structure 2010, we’re adding a new feature: the Launchpad.

    Veteran attendees of GigaOM events will recognize the format from our Mobilize conference and our green IT-focused Green:Net as the session when startups launch — or launch their products or services — to a room full of VCs, industry players and financial press. Two awards will be given: an audience choice award (determined by SMS voting) and a judges’ selection.

    So after much deliberation and consideration, today we announce our Launchpad 11. It was meant to be 10, but we had such a hard time choosing, we just increased the finalists to 11. They are:

    • Benguela – A stealth-mode startup founded by Amazon EC2 veterans.
    • Cloudant – Provider of MapReduce-based data management for enterprises, including analytics and search.
    • Cloudswitch – Maker of software to connect private clouds to public clouds.
    • Datameer – Offers the ability to organize big data using Hadoop through a spreadsheet interface.
    • Greenqloud – Provides completely carbon-neutral cloud infrastructure with commodity gear.
    • GridCentric – Turns your compute farm into a high-performance, private cloud in seconds.
    • nephosity – Allows non-programmers to allocate tasks and workflows on the cloud.
    • Northscale – Provides a simple, fast, schema-free mechanism for storing data objects using memcached.
    • Riptano – Developer of software and services on top of the open-sourced Apache Cassandra (Facebook’s data layer) data storage platform.
    • SolidFire – Offers a next-generation block-based storage appliance for cloud computing providers and big enterprises.
    • Zettar Inc – Enables the creation of secure private storage clouds using commodity hard drives.

    And in honor of the Launchpad 11, for a limited time, we’re offering discounted tickets: $795 for the full two days vs. the regularly priced $995 — but just until this Friday at 11:55 p.m. PST. Simply enter the code LAUNCH11 when you register (applies to new registrations only).

    See you in June!

  • With Bandwidth Demand Booming, a New Kind of Optical Network Is Born

    Allied Fiber said today it’s begun construction of a nationwide wholesale fiber network that will span 11,548 miles. The New York-based company will build out the network in six phases, linking undersea cable landing points, data centers, colocation interconnection facilities, rural networks and wireless towers in order to feed the increasing demand for broadband capacity resulting from everything from the ever-growing number of cellular towers to cloud computing (we’ll talk about the bandwidth needs for cloud computing at our Structure 2010 conference next month).

    A New Model to Meet Broadband Demand

    But Allied’s effort isn’t just aimed at boosting overall capacity — it’s aimed at changing the underlying business model of providing long-haul telecommunications networks. Hunter Newby, CEO of Allied Fiber, wants to connect the U.S. with an open fiber network composed of the three disparate systems that essentially make up the backbone of the Internet, and is targeting data centers, high-bandwidth sites, rural ISPs, wireless companies and long-haul network providers as customers. But it remains to be seen whether Allied’s model will compete, not just with offerings from backbone providers such as Level 3 Communications, but also with colocation companies and the tower industry.

    Newby, who was the chief strategy officer at colocation provider Telex, is pretty impassioned about his plan to bring wholesale fiber to places where existing backhaul providers may not go. It’s a plan similar to Google’s experimental fiber network for consumer broadband, but enacted on a much larger scale, and for businesses. Newby believes that in underserved areas where Allied Fiber will have a presence, the cost of bandwidth will be driven down significantly because Allied will be willing to sell access to the long-haul network, at competitive rates, to anyone who wants it — something the incumbents aren’t inclined to do.

    Competition Drives Costs Down

    The construction of Allied’s network is a big deal for small ISPs, which can find themselves having to pay more than $100 a megabyte for bandwidth, and may mean they don’t have to implement bandwidth caps as a means to keep their own costs down. It’s also a big deal for cellular carriers like Sprint and T-Mobile, as it will give them access to less expensive backhaul without having to pay the likes of AT&T or Verizon.

    As Newby explains, rural providers or cellular providers needing rural coverage will be able to buy transport at wholesale rates from a colocation provider in the middle of a field somewhere along a railroad right of way (Allied has a deal with some railway companies for access to their ducts). Such an approach could provide access for a single provider near the colocation facility, or other regional providers could build off the Allied network. It would also open up the opportunity to locate data centers in rural areas, perhaps near renewable energy projects.

    “The incumbents have control and have made it quite clear they’re not willing to make any significant capital investments in rural areas and are selling off rural assets,” Newby told me. “But you need to change the economics, and if these buyers can buy at even $15 per megabyte…the number of gigs and terabytes will eclipse the current rate because right now it’s so expensive.”

    Building a High-Fiber Network

    The first phase of the Allied network will cost $140 million, will connect New York, Chicago and Ashburn, Va. and will be completed by the end of this year. Newby said the second phase (from Atlanta to Miami) will cost $180 million, and the third phase connecting Chicago to Seattle could cost as much as $350 million. However, he added that potential customers are willing to go in with him on the cost of the connection to Seattle because big bandwidth providers like NTT Corp. need a shorter route to get their traffic to Asia. The final three projects aren’t budgeted yet, nor is there a definitive time frame.

    The first phase will provide a combined 648 dark fibers, 19 colocation facilities and 300 tower sites. From the press release:

    Allied is deploying a 432-count, long haul cable coupled with the 216-count, short-haul cable that will be a composite of Single-Mode and Non-Zero Dispersion Shifted fibers. Allied Fiber has implemented a new, multi-duct design for intermediate access to the long-haul fiber duct through a parallel short-haul fiber duct all along the route. This enables all points between the major cities, including wireless towers and rural networks, to gain access to the dark fiber. In addition, the Allied Fiber neutral colocation facilities, located approximately every 60 miles along the route, accommodate and encourage a multi-tenant interconnection environment integrated with fiber that does not yet exist in the United States on this scale.
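
    A quick bit of arithmetic on the figures in that release shows where the combined 648 dark-fiber count comes from, and roughly how many colocation sites a 60-mile spacing would imply over the full 11,548-mile build (the latter is my extrapolation, not an Allied Fiber figure):

        # Back-of-the-envelope checks on the Allied Fiber figures quoted above.
        long_haul_fibers = 432
        short_haul_fibers = 216
        print("Combined dark fibers:", long_haul_fibers + short_haul_fibers)  # 648

        # My own extrapolation from the 60-mile spacing; phase one alone
        # includes 19 facilities on the New York-Chicago-Ashburn route.
        total_route_miles = 11_548
        colo_spacing_miles = 60
        print("Implied colo sites over the full build:",
              round(total_route_miles / colo_spacing_miles))  # roughly 192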

    If Allied Fiber can build an open fiber network that spans the country and includes colocation and towers, it could provide a way for municipal fiber networks and rural ISPs to get online and connect to backhaul for less, while bypassing their potential competitors (for example, a muni fiber network might compete against AT&T but may also have to buy access back to the Internet backbone from AT&T because it’s the only provider in the area). We’ve long argued that open networks are the way to go when it comes to big infrastructure, something with which Newby agrees. “I believe in the power of open networks,” he said, “but instead of talking about it or writing about it, I want to do it.”

    He went on to say: “I encourage other people to copy our model and philosophy of neutrality. It drives growth and it’s what drives the innovation and bridges the islands of broadband we have in this country.”

    Related GigaOM Pro Content (sub req’d): Who Will Profit From Broadband Innovation?

  • AT&T’s Tarnished $1.4B Sterling Sale

    AT&T said today it’s agreed to sell its Sterling Commerce software division to IBM for $1.4 billion. The deal will let AT&T offload a business unit that SBC Communications bought for $3.9 billion at the height of the dot-com boom (SBC went on to buy AT&T in 2005 but kept the iconic telecom name). Sterling offers pricing and e-commerce software that businesses can use to manage pricing in real time or to get an entire view of their inventory, from marketing to fulfillment.

    However, AT&T isn’t a software company, and the original rationale behind buying Sterling — namely that the phone company could become an exchange for online pricing — never panned out. So even though the sale price is much less than what SBC paid back in 2000, the deal is a good way for AT&T (which never integrated Sterling into its business) to get rid of a non-core asset. IBM’s acquisition will pit it against similar software from HP and other large enterprise software providers, which is where Sterling belongs anyhow.

    Image courtesy of Flickr user zzzack

  • It’s a Long Way to Widespread LTE

    It’s going to take almost 10 years for the sale of LTE devices to overtake 3G devices, according to an analyst who follows the industry. Keith Mallinson, founder of WiseHarbor, estimates the tipping point between LTE and 3G will occur in 2019 and said the U.S. will be an early leader when it comes to deploying the technology, in part because of the National Broadband Plan’s reliance on mobile. Mallinson also expects China to move quickly to LTE because its largest mobile operator, China Mobile, doesn’t like being forced to use Chinese-developed 3G technology.

    In the last few days, I’ve received several LTE reality checks, such as the news that by 2014 there will only be 150 million LTE subscriptions, or AT&T’s belief that true LTE handsets that are as diverse in features as the current 3G handsets won’t even hit the market until 2014 (even though Verizon is bringing five LTE handsets to market next year).

    Still, I’m optimistic, mostly because I can see faster speeds on the horizon. For those upset at my focus on speeds at the expense of network quality and capacity, I’m encouraged by LTE for two reasons: the technology itself is more efficient, which means we can cram more bits into each hertz, and it is also being deployed in new spectrum, which will help meet capacity and bandwidth needs as well. Of course, it’s not going to provide the quality or consistency of wireline broadband, but expecting that would be kind of like believing in the tooth fairy.
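
    To make the capacity argument concrete, here’s a toy calculation of how efficiency gains and new spectrum multiply. The bits-per-hertz and bandwidth figures are illustrative assumptions, not measured values for any carrier’s network:

        # Toy illustration: cell capacity scales with spectral efficiency * spectrum.
        # Every number here is an illustrative assumption.
        hspa_efficiency_bps_per_hz = 1.0   # assumed baseline
        lte_efficiency_bps_per_hz = 1.5    # assumed: LTE squeezes more bits per hertz

        hspa_spectrum_mhz = 10             # assumed existing allocation
        lte_spectrum_mhz = 10 + 10         # assumed: LTE lands in 10 MHz of new spectrum

        hspa_capacity = hspa_efficiency_bps_per_hz * hspa_spectrum_mhz
        lte_capacity = lte_efficiency_bps_per_hz * lte_spectrum_mhz

        print(f"Relative capacity gain: {lte_capacity / hspa_capacity:.1f}x")
        # The two factors multiply: better efficiency AND new spectrum, about 3x here.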

    Related GigaOM Pro content (sub req’d):
    Everybody Hertz: The Looming Spectrum Crisis

  • America’s Amazing Rise to 3G Dominance

    Nearly 70 percent of U.S. cell phone subscribers are on a 3G network, according to data released today (and based on info gathered at the end of the first quarter) by Wireless Intelligence. In comparison, at the end of last year only 20 percent of the world’s mobile phones were on a 3G connection. India is in the process of auctioning off its 3G spectrum and China plans to boost its 3G coverage over the next few years, which will boost 3G penetration worldwide. Meanwhile, the U.S. has a lead when it comes to innovation on the wireless front that is driven in no small part by its widespread access to 3G speeds, and its citizens’ willingness and ability to consume them.

    In the first quarter of this year, all four of the nation’s top carriers were among the top 10 carriers in the world by data revenue, with Verizon leading the list for the first time. As power in the industry shifts to mobile computing from the desktop, such dominance is a clear indicator of how fast broadband helps drive innovation.

    But it isn’t just the pipe. Consumers have to be able to access the faster networks on their phones, which is why devices like the iPhone have played such a huge role when it comes to both new mobile computing innovations and the boost in data revenues (GigaOM Pro, sub req’d) (and most importantly, data traffic). As we transition to next-generation networks like LTE or WiMAX, the relationship among the pipe, the device and the corresponding level of innovation is notable.

  • Google Tries to Offer a Grown-up Cloud

    Google has tweaked its App Engine platform as a service to make it palatable for business customers, the search giant said today at its developer conference in San Francisco. Its goal is to get corporations to build their in-house software on App Engine for Business as opposed to hosting it in their own data centers or on another cloud. But while App Engine for Business is a logical next step for Google, it still has a ways to go when it comes to providing a truly competitive PaaS that developers will use to build enterprise applications.

    App Engine for Business corrects certain issues that bother developers about App Engine, namely by providing “enterprise-level support” (in other words, trouble tickets and subsequent responses) and SQL database capability (due by the third quarter). Google’s use of Big Table had also frustrated developers, as it locked them into the App Engine platform. Google’s response was that it knew how to build a scalable infrastructure, so if you wanted to scale your app, App Engine and the Big Table limits were the way to go.

    To be fair, App Engine was designed to attract developers trying to build consumer web services, and Google last May integrated App Engine with Salesforce.com so business application developers could test it out. However, App Engine competes with services like Microsoft’s Azure, Heroku and the recently announced VMforce platform (GigaOM Pro, sub req’d) from VMware and Salesforce.com. And so far Google does well at providing web-based apps to folks interested in breaking out of an Office-dominated world (and the office), but less so when it comes to providing flexibility and the higher levels of services that those building their own enterprise apps in the cloud require.

    Perhaps because of its previous weakness at providing the service and an enterprise-friendly platform, Google has worked with VMware’s SpringSource division to develop a way to move apps from one cloud to another based on Spring’s Java framework. Oftentimes, it’s the weaker companies that work hardest to force interoperability and choice for customers, seeking an advantage.

    The ability to move applications from one cloud to another helps advance the cloud computing agenda because customers won’t get locked into one platform or infrastructure — a worry for anyone spending time and money building applications in the cloud. Google and VMware are hoping that their partnership and use of Spring makes enterprise customers that use Java to build in-house apps more comfortable building them and hosting them in the cloud. Apps built using Spring will run on App Engine, Amazon Web Services and any other platform that can support Java.

    The Google and VMware partnership is less about them working together than Google saying it will make sure apps built using the Spring framework will run seamlessly on App Engine. Google is also releasing tools that will allow any developer to add some code on top of the platform, which makes it possible to run any app on any device that supports a browser. That’s a nice feature. For more on platforms as a service and supporting enterprise cloud apps, attend our Structure 2010 conference on June 23 and 24 in San Francisco.

    Google’s cloud efforts so far aren’t that compelling for businesses, but perhaps that’s a good thing, as it’s forcing Google to open up and support cloud interoperability in ways it hasn’t in the past with its insistence on Big Table. Its infrastructure may be top-notch when it comes to supporting the search engine, but Google still has a lot to learn about supporting businesses in the cloud.

  • Amazon Tries to Take the Commodity Out of Cloud Computing

    Amazon will offer a lower-priced, less reliable tier of its popular Simple Storage Service, the retailer said today. The offering, called Reduced Redundancy Storage, is aimed at companies that wouldn’t be utterly bereft if the less reliable storage fails. From the Amazon release:

    Amazon S3’s standard and reduced redundancy options both store data in multiple facilities and on multiple devices, but with RRS, data is replicated fewer times, so the cost is less. Once customer data is stored, Amazon S3 maintains durability by quickly detecting failed, corrupted, or unresponsive devices and restoring redundancy by re-replicating the data. Amazon S3 standard storage is designed to provide 99.999999999% durability and to sustain the concurrent loss of data in two facilities, while RRS is designed to provide 99.99% durability and to sustain the loss of data in a single facility.
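
    Those durability figures are easier to grasp as expected losses. Here’s a rough illustration of what eleven nines versus four nines means for a large object count over a year, using a simplified model that reads durability as an annual per-object survival probability:

        # Rough comparison of S3 standard vs. Reduced Redundancy durability.
        # Simplified model: durability treated as an annual per-object
        # survival probability.
        objects = 10_000_000

        standard_durability = 0.99999999999  # eleven nines
        rrs_durability = 0.9999              # four nines

        standard_losses = objects * (1 - standard_durability)
        rrs_losses = objects * (1 - rrs_durability)

        print(f"Standard: ~{standard_losses:.4f} objects lost per year")  # ~0.0001
        print(f"RRS:      ~{rrs_losses:,.0f} objects lost per year")      # ~1,000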

    As the market for infrastructure-as-a-service platforms grows, Amazon is trying to offer variations and services that distinguish its compute and storage cloud from those of Rackspace and Verizon and from platforms such as Microsoft’s Azure or VMforce. Cheaper storage with a lower service level is one such distinction, and its spot pricing instances are another.

    On his blog, Amazon’s Jeff Barr offers an overview of the RRS offering. For more detail, check out Amazon CTO Werner Vogels’ explanation of how S3 works and what the magic behind RRS is.

    For more on the economics of cloud computing and how they will evolve, visit our Structure 2010 conference on June 23 and 24, where Amazon CTO Werner Vogels will be a keynote speaker.

    Related GigaOM Pro content (sub req’d):

    Spot Instances Won’t Commoditize the Cloud, and That’s OK

  • HP, to Get Greener Data Centers, Thinks Brown

    Give Hewlett-Packard 10,000 cows, and the computer company will give you the means to power a data center. HP today presented research from its HP Labs division showing how a data center can power its servers using the gas produced by cow poop, and dairy farmers, in turn, can take advantage of all the hot air data centers generate to make steam that will help power their farms.

    HP’s paper, presented at the ASME International Conference on Energy Sustainability in Phoenix, outlines how this natural gas exchange works. We’ll start with the 10,000 cows. The average dairy cow produces about 120 pounds of manure per day, which can generate 3.0 kWh of electrical energy — or enough to power television usage in three U.S. households.

    The waste simply needs to be processed at a plant using anaerobic digestion. HP’s researchers estimate that the resulting methane could power a 5,000-square-foot data center, with some left over to power the dairy farm. This keeps poop out of the waterways and methane out of the air. HP optimistically estimates that farmers would not only break even with such an approach, they’d profit. From the release:

    HP researchers estimate that dairy farmers would break even in costs within the first two years of using a system like this and then earn roughly $2 million annually in revenue from selling waste-derived power to data center customers.
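
    A back-of-the-envelope check of the aggregate numbers shows why 10,000 cows is the headline figure; the conversion to continuous power is my arithmetic, not HP’s, and reads the 3.0 kWh per cow as a daily figure (matching the daily manure estimate):

        # Back-of-the-envelope check on HP's dairy-farm figures.
        cows = 10_000
        kwh_per_cow_per_day = 3.0  # from the HP estimate quoted above

        daily_energy_kwh = cows * kwh_per_cow_per_day  # 30,000 kWh per day
        average_power_kw = daily_energy_kwh / 24        # spread over a full day

        print(f"Daily energy: {daily_energy_kwh:,.0f} kWh")
        print(f"Continuous power: {average_power_kw:,.0f} kW "
              f"(~{average_power_kw / 1000:.2f} MW)")
        # Roughly 1.25 MW of continuous power, plausibly enough for a modest
        # data center, with the leftover HP describes going back to the farm.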

    Animal waste is used for cooking in other parts of the world, and companies have generally used their waste heat from data center operations to heat their water and offices through onsite cogeneration plants. Yahoo has also turned to farms for inspiration on reducing power consumption — it modeled a data center design off of a chicken coop to reduce the need for air conditioning. So as far-fetched an idea as it may seem, it’s not a total load of crap.

    Related GigaOM Pro Content (sub req’d): Green Data Center Design Strategies

  • AT&T’s Slow Road to Fast Broadband

    AT&T is planning for faster wireless, but also wants to push its wireline networks to 80 Mbps downstream this summer in a trial using esoteric copper technologies such as vectoring, pair bonding and spectrum management. In an interview with me yesterday, John Stankey, president and CEO of AT&T Operations, explained how the carrier is testing faster speeds on its fiber-to-the-node network by upgrading to VDSL2 technology and hinted at AT&T’s ability and willingness to extend fiber closer to the customer’s home as demand rises. But it most assuredly isn’t ready to hop on the fiber-to-the-home bandwagon, nor is it convinced its customers need or want the 100 Mbps broadband by 2020 that the FCC is seeking.

    AT&T currently offers 24 Mbps down and 3 Mbps upstream as its top U-verse service tier, which is looking sloth-like when compared with the DOCSIS 3.0 being rolled out by its cable competitors and the fiber-to-the-home efforts of Verizon. Even Qwest is boosting speeds to 40/20 Mbps in some areas, although there are still plenty of people who would love U-verse speeds. Then there’s the looming specter of the National Broadband Plan, which includes the goal of offering 100 Mbps speeds to 100 million homes by 2020. I asked Stankey if AT&T could meet that goal using its fiber-to-the-node technology, which relies on copper from a neighborhood box to connect to the customer’s home.

    But Stankey was less focused on AT&T’s ability to meet the goal than on disparaging the goal itself. “I don’t know what informed the FCC that [100 Mbps to 100 million homes] was the right answer,” he said. “We’ve been doing wired broadband for 10 years and we have meaningful curves in terms of speeds and demand that are statistically accurate and predictable.” Based on those curves Stankey said AT&T knows exactly how much data and throughput are needed as opposed to choosing a “nice round number” to shoot for.

    “We feel comfortable…based on how we deploy, that we can match the needs of the customer,” Stankey said. For example, Stankey said that AT&T could extend fiber further along the local copper loop, reducing the number of homes served by each neighborhood cabinet and shortening the distance bits have to travel over the last-mile copper. Reducing the distance is a key element when it comes to improving the quality of signals and boosting speeds — the further out one is on the local loop, the slower the speeds are.

    As for the upstream capabilities, Stankey wouldn’t say what AT&T might offer, nor what it theoretically could offer using the bonding, vectoring and spectrum management. “We’re evaluating the upstream characteristics and we might take [the 80 Mbps speeds] down to lower levels to offer more upstream,” Stankey said. The trials will last through the end of the summer.

    Image courtesy of Flickr user Photo Monkey

    Related GigaOM Pro Content (sub req’d): When It Comes to Pain at the Pipe, Upstream Is the New Downstream

  • Nvidia Shows Off Its Survival Skills With IBM Win

    Nvidia today said its graphics processors will be part of a new IBM server used for high-performance computing and webscale deployments. IBM’s iDataPlex servers will combine x86-based CPUs with GPUs in order to offer parallel processing on the GPU, which is more energy efficient. However, the announcement that IBM has turned to Nvidia GPUs after it stopped producing its own specialty parallel processor also offers a guide on how to succeed in the data center.

    IBM’s own Cell processor, originally developed for the Sony Playstation with Sony and Toshiba, was killed late last year. It was used as an accelerator chip in the high-performance computing world to build cheaper, greener supercomputers. We wrote about the efforts IBM made to push it into the HPC market as well as into the data center, but it turns out the market didn’t want the Cell processor.

    There are likely a few reasons for the Cell’s death and the simultaneous survival of GPUs, reasons from which those pushing ARM-based servers should learn. In the Darwinian environment of the data center, where the search for energy efficiency, performance and low-cost computing collide, Nvidia had two advantages over the Cell. One, it had the CUDA programming tool introduced in 2007 that made it possible for developers to program in C and then watch their efforts run easily on the GPU’s parallel architecture. IBM had developer efforts, but programming for the Cell was still more difficult. And specialty programming raises costs.

    Nvidia also has a huge consumer market for its graphics chips, which lowered the overall cost of the chips, making them cheaper to buy for large-scale computing efforts. IBM’s Cell processors cost a lot to develop and a market for the chips outside of the Playstation and IBM’s supercomputers never materialized despite efforts by Toshiba, Sony and IBM. Both the Cell processor and GPUs deliver incredible performance gains when dealing with parallel processing compared to a CPU, but huge performance gains alone are not enough.

    So as chip and server vendors attempt to develop hardware for the new webscale companies, the tale of IBM’s Cell and Nvidia’s continued survival offers some solid lessons, namely: It’s not enough to be green; you have to be cheap, easy to program and orders of magnitude better at crunching data. As for how technologies once reserved for supercomputers are making their way into the data center, check out our Exascale Grail panel at Structure 2010 in June on where HPC is headed next and why it matters for software as a service, platforms as a service and other webscale vendors.

    Related GigaOM Pro Content (sub req’d): Supercomputers and the Search for the Exascale Grail

  • Exclusive: The Details on AT&T’s Bridge to LTE

    AT&T has what it hopes is an ace in the hole while it transitions to the Long Term Evolution fourth-generation wireless network technology — faster 3G over its entire footprint by the end of the year. How fast? Up to 14 Mbps through an upgrade to the HSPA+ technology standard, according to John Stankey, president and CEO of AT&T Operations, who spoke with me this afternoon.

    In the interview Stankey confirmed plans for the nation’s second-largest carrier to move from the current planned rollout of HSPA 7.2 (which offers maximum theoretical speeds of 7.2 Mbps down and real-world speeds of about 3.5 Mbps) to a version of HSPA+ that will offer real-world speeds closer to 7 Mbps down. He said that, for less than $10 million, AT&T can upgrade its 3G network to provide HSPA+ network access to 250 million people by the end of the year. AT&T still plans to begin its LTE rollout in 2011, but for less than $10 million it can provide a fallback network that’s more robust than the 3G network offered by its closest rival, Verizon. My hunch is that it can also afford to take more time completing its LTE rollout while still competing with its rivals, which are boosting speeds on their networks.

    Verizon’s 3G network is based on a CDMA standard (EVDO Rev. A) that currently offers speeds of up to 3.1 Mbps (I generally get about 1.7 Mbps down on my modem). As Verizon upgrades to LTE (it plans to cover 100 million people by the end of this year and its entire footprint by the end of 2013) it’s going to offer its users two networks with widely varying speeds. In places with LTE, Verizon says speeds will range from 5 to 12 Mbps down, while in places it has 3G, users will see speeds drop significantly. This is one argument in favor of Verizon looking at deploying EVDO Rev. B in some places, which offers speeds of up to 14.7 Mbps down. Verizon denies this plan.

    So, essentially AT&T wants to spend a fairly small chunk of change to make sure its customers have a network to fall back on without experiencing a steep drop in speeds. It also wants to buy itself some time to roll out an LTE network without looking like a laggard, speed-wise. Indeed, T-Mobile is deploying an HSPA+ network that’s delivering speeds of up to 8 Mbps in real-world tests.
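
    Using the real-world figures cited in this post, here’s a quick comparison of how steep that fallback would feel on each network. The Verizon LTE figure is the midpoint of the 5-12 Mbps range, and the AT&T LTE figure is assumed comparable for illustration:

        # Comparing the 4G-to-3G fallback drop using the speeds cited in this post.
        verizon_lte_mbps = 8.5    # midpoint of the 5-12 Mbps range Verizon cites
        verizon_3g_mbps = 1.7     # typical EVDO Rev. A speed mentioned above

        att_lte_mbps = 8.5        # assumed comparable LTE speed, for illustration only
        att_hspa_plus_mbps = 7.0  # real-world HSPA+ speed AT&T is targeting

        def drop(fast, slow):
            """Percentage drop when falling back from the fast to the slow network."""
            return (1 - slow / fast) * 100

        print(f"Verizon fallback drop: {drop(verizon_lte_mbps, verizon_3g_mbps):.0f}%")  # ~80%
        print(f"AT&T fallback drop:    {drop(att_lte_mbps, att_hspa_plus_mbps):.0f}%")   # ~18%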

    AT&T also wants to make sure its customers have good devices and coverage while the vendor community gets the LTE ecosystem up to speed. Stankey has long been vocal about his belief that LTE won’t be ready for the mainstream until 2014, and said today, “The vendors are experiencing some challenges on certain features and software, and first implementations in 2011 will be…pretty vanilla.”

    Among his worries are issues about roaming between 3G and 4G, and the handoffs between voice and data on 4G networks. He believes a wide variety of LTE handsets for the general consumer, as opposed to early adopters, won’t appear until 2014 — which is also the same time he expects voice to be delivered via VoIP on LTE. Until then, the handsets will be big, have bulky antennas and suffer from short battery life, he predicted. However, he also acknowledged that the HSPA+ handset ecosystem will take some time to develop and said the first products will likely be data cards — a forecast which effectively killed my hope of a fourth-generation iPhone that works with HSPA+ networks.

    Even if the handset experience for LTE is lame through 2014, the market for data cards or service for devices like the iPad is a growing opportunity that AT&T can’t ignore. And that’s the main benefit to an upgrade to HSPA+ for Ma Bell: It gets double the speeds on its network for a low price, and it won’t fall behind as it competes with what would otherwise be faster speeds on Verizon’s LTE network, Sprint and Clearwire’s WiMAX network and T-Mobile’s HSPA+ network next year and beyond.

    Related GigaOM Pro content (sub req’d):

    Everybody Hertz: The Looming Spectrum Crisis

    Thumbnail image courtesy of Flickr user mrbill

  • AT&T & Verizon’s Future Is in Your Fridge

    Carriers’ data revenue rose 22 percent to $12.5 billion in the first quarter of 2010 over the same period a year ago, according to the latest data from Chetan Sharma, a wireless analyst. However, while data now contributes slightly more than 30 percent to the total average revenue per user (ARPU), it also uses 70 percent of network capacity. Sharma estimates that by the end of 2010, data will contribute more than 35 percent to ARPU and devour 85 percent of network capacity.
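
    Putting Sharma’s percentages side by side makes the imbalance plain. Here’s the quick math on revenue earned per unit of network capacity consumed, using his figures:

        # Revenue per unit of capacity, using Chetan Sharma's percentages quoted above.
        data_revenue_share = 0.30    # data's share of ARPU today
        data_capacity_share = 0.70   # data's share of network capacity today

        voice_revenue_share = 1 - data_revenue_share
        voice_capacity_share = 1 - data_capacity_share

        data_yield = data_revenue_share / data_capacity_share     # ~0.43
        voice_yield = voice_revenue_share / voice_capacity_share  # ~2.33

        print(f"Voice earns ~{voice_yield / data_yield:.1f}x more revenue per unit of capacity")
        # Roughly 5x today, which is why carriers covet low-traffic, high-margin
        # services like texting and machine-to-machine connections.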

    So even as data revenue and traffic rises, carriers face two key challenges: One, the handset market is saturated; and two, users on smartphones are boosting their consumption of data at a far faster rate than carriers are boosting their data revenue. The answer to these challenges is selling data plans for your car. Your kitchen. And even your electric meter.

    Wireless providers are recognizing that the smartphone isn’t where the profits are going to lie, especially if they don’t rein in all-you-can-eat mobile data plans as Kevin laid out last week. Sharma’s data on how deeply voice subsidizes data is grim, but he predicts that it will only last through 2013, at which point things will even out. Then carriers will have to deal with a decline in overall ARPU.

    Obviously ARPU isn’t the only metric that carriers pay attention to, and selling data by the megabyte isn’t the only option available to carriers. For example, texts are a low-data, high-dollar and high-margin service, a trifecta that leads to profits without overburdening the network. Carriers are hoping to find other data services (GigaOM Pro sub req’d) that offer these characteristics.

    That’s why the promise of machine-to-machine communications is so important to the likes of AT&T and Verizon. Sharma notes that U.S. subscription penetration was at about 94 percent at the end of the first quarter and, if one eliminates children 5 or younger, past 100 percent. He writes that AT&T and Verizon added more connected devices than postpaid subs in the January through March time frame. Postpaid cell plans (even with data) just aren’t a growth area — unless we’re talking about using up network resources.

    That’s why AT&T is betting big on the Internet of things, providing service for the Kindle, pill bottles and dog collars. It’s why Verizon has a joint venture with Qualcomm for machine-to-machine connectivity. For industry watchers the question to ask is not why carriers are rushing to provide connectivity, but how it will happen.

    I think the business model questions have to be addressed before my fridge gets a wireless connection from one of the top carriers. For example, does the manufacturer of the appliance pay for the connection as Amazon does with its Kindle? Plus, bigger issues are at stake, such as why use cellular when Wi-Fi might suffice? For example, a connected appliance in the home doesn’t need to use a cellular network since it’s likely going to be part of a Wi-Fi network. As consumer electronics makers and automotive executives choose which cellular connection to put in a product, what attributes matter in terms of coverage, cost and contracts? Ironically, as carriers pursue this strategy they may find themselves at the mercy of their customers, providing the dumb pipe.

  • A Modest Proposal on Privacy

    Privacy is different for everyone. Robert Scoble is happy sharing, while I would hate showing off pictures of my daughter to my Twitter followers or even checking into a grocery store on Gowalla or Foursquare. Add the conflicting goals of a site like Facebook — which wants to make money from people’s data — to the disparity in people’s tolerance for sharing, and we’re faced with labyrinthine privacy policies and confused messaging as big services try to please a huge section of users, most of whom don’t want to sit down and go through 170 options to change their privacy settings.

    Now even Congress is getting involved — but wireless analyst Chetan Sharma proposed an interesting idea last night in his first quarter wireless data analysis. The analysis is worth checking out (Verizon edged past Japan’s NTT DoCoMo for the first time to become the carrier making the most money selling wireless data), but his suggestion for dealing with privacy is worth sharing with those outside of the wireless industry who might otherwise miss it:

    If people are really serious about tackling privacy, OEMs and carriers should build a physical/soft privacy button on the device with 3-5 levels (just like for the ringer volume) that allows users to open/close privacy across all applications and services with the touch of a button. All apps and services should adhere to the principle via APIs. The other mistake companies make about privacy is by treating everyone the same. Privacy is about the perception of control and transparency. If it is given back to the consumer, they are likely to engage more and have a more positive impact on revenue streams that are likely to flow.
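
    To make the proposal concrete, here’s a minimal sketch of what a device-wide privacy setting exposed to apps through an API might look like. The names, levels and API shape are entirely hypothetical, an illustration of Sharma’s idea rather than any carrier or OEM spec:

        # Hypothetical sketch of a device-wide privacy control along Sharma's lines.
        # Names, levels and the API shape are illustrative only.
        from enum import IntEnum

        class PrivacyLevel(IntEnum):
            PRIVATE = 0        # share nothing
            CONTACTS_ONLY = 1  # share with known contacts
            FRIENDS = 2        # share with the wider social graph
            PUBLIC = 3         # share with anyone

        # One setting, owned by the device, consulted by every app and service.
        device_privacy = PrivacyLevel.CONTACTS_ONLY

        def may_share(required_level: PrivacyLevel) -> bool:
            """Apps call this instead of keeping their own per-app settings."""
            return device_privacy >= required_level

        print(may_share(PrivacyLevel.CONTACTS_ONLY))  # True: contacts-level sharing allowed
        print(may_share(PrivacyLevel.PUBLIC))         # False: public sharing blocked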

    Clearly there are issues with this, including the fact that it would only work on mobiles, and that most people have different settings for different apps. Implementing such a thing would also require the carriers or handset makers to work together with app developers without trying to hijack standards or access to the information. But the idea of a privacy middleware layer or a service is intriguing, be it on a handset or as another layer in the cloud. What do you think? Let me know in the comments.

    Related GigaOM Pro Content (sub req’d): Could Privacy Be Facebook’s Waterloo?

  • TeliaSonera Shows That LTE Is Addictive

    TeliaSonera, which deployed the first 4G network in the world last December, has released data showing that once users have the faster Long Term Evolution service (GigaOM Pro sub req’d), more than half won’t go back to slower 3G. The Nordic carrier conducted the survey among its 4G customers after they had used 4G for 100 days, and 54 percent said they would never go back to 3G.

    Sure, these customers were already well versed in mobile technology — more than 90 percent had upgraded from an existing 3G connection and 43 percent had an iPhone. But if more than half are willing to stick around even as the price increases (TeliaSonera had a sweet deal for the first customers), that’s pretty awesome. Other results included a change in surfing habits, which makes sense given that 65 percent are relying on that 4G as a supplement to fixed-line broadband (in other words, it’s not a replacement), and now they can do even more. From the report:

    • 26 percent say they are working more on a mobile basis
    • 23 percent say they are downloading larger files
    • 19 percent say they watch online TV/stream movies
    • 16 percent say they began surfing the web more

    Carriers have to be thrilled with such data, as it shows that faster broadband speeds can make customers spend more time online and view 3G as something subpar. I imagine I’ll be paying a pretty penny for my LTE connection once Verizon upgrades its network by the end of this year. But I have a feeling it will be worth it.

    Image courtesy of Flickr user Panoramas

  • The Global Rise of the Smartphone


    BlackBerry maker Research in Motion broke into the top five handset vendors during the first quarter of this year, according to numbers released by research firm IDC. It attributed the success of RIM’s smartphones in part to “text-crazy teens” and strong demand for the BlackBerry Curve 8520 and BlackBerry Bold 9700.

    The handsets many of us think of when it comes to smartphones — Apple’s iPhone and the HTC phones for Android, which are found in IDC’s chart under the “others” category, plus RIM’s devices — are growing faster than the handset market overall, the data shows. In other words, the global rise of the smartphone is upon us.

    In the U.S. and in the tech community, we may take it for granted that the phone and the web should blaze along at 3G (and soon 4G) speeds with user-friendly interfaces, touchscreens and an app market, but the rest of the world — and even large portions of North America — has been moving at a slower pace. A whopping 83 percent of Americans have a feature phone, while there are only 400 million smartphones in a world with 4.6 billion mobile subscribers (less than 9 percent). But the smartphones are coming, as IDC noted with regards to handset maker LG’s success:

    The abundance of feature phones at varying price points kept the company in good stead with carriers and customers, particularly within emerging markets where LG reaped triple-digit growth. Still, the lack of a broad and deep smartphone portfolio made it vulnerable to competitor share gains, particularly within North America.

    This matters because in the debate over Apple vs. Android, and whether or not HP can save Palm’s webOS, it appears that the battle lines are drawn and that the market dynamics are already set. But clearly the rest of the world, and the large portion of people toting feature phones, are not the dregs of the market but the massive middle, and the opportunity to reach them with compelling products that are differentiated — perhaps cheaper or tailored to their circumstances (text-crazy teens perhaps) — is still one worth chasing. Just ask Nokia. It’s moving slowly, but it isn’t ready to hang up on the opportunity.

    Related GigaOM Pro Research (sub req’d):

    The App Developer’s Guide to Choosing a Mobile Platform