Author: Stacey Higginbotham

  • Huawei’s North American Conquest Continues

    We’ve kept a close eye on Chinese telecommunications equipment vendor Huawei for the last few years, and today the company justified our attention by telling me in an interview that in 2009 it grew its North American sales by 63 percent to $408 million. The base number is relatively small compared with Huawei’s global contract sales (deals that are signed, but where the revenue has yet to be recognized) of more than $30 billion, but North America is its fastest-growing market.

The company started its North American business in 2001 and now has 1,000 employees on the continent after adding 450 people in 2009. It’s cherry-picked some engineers from troubled equipment makers like Nortel and Nokia Siemens Networks for its eight offices in North America. Charlie Chen, an SVP of marketing and product management for the region, said it plans to achieve the same percentage growth in 2010, which would put its North American contract sales at around $665 million.

Yesterday, the GSMA released data showing that North American operators plan to spend $19 billion in capital equipment for mobile broadband alone. Given that Huawei sells equipment for wired and wireless carriers, it’s making inroads in North America but still has plenty of room for growth. Jonathan Goldberg, a Deutsche Bank analyst, notes that overall capital equipment spending in North America has been flat or decreasing, which makes Huawei’s 63 percent growth look a lot more interesting, especially given the troubles in the telecommunications equipment sector.

    Some highlights the company shared for the last year include:

    • It worked with Verizon to test the first 10 Gbps fiber-to-the-premises equipment
    • It deployed the first North American HSPA+ network for Telus in Canada
    • It’s conducting 42 LTE trials globally and five in North America
    • It’s building a 3G network in Chicago for Leap Wireless

  • Aspera’s iPhone App Sends Fat Files With Ease

    Aspera today launched a version of its rapid file transport software for the iPhone, which will allow iPhone users to squeeze their picture and video files through the crappiest connection that AT&T may have to offer. And it makes the transfer fast! Aspera says it can make file transfers over 3G networks three times faster than existing HTTP or FTP transfers.

    I’ve long been a fan of Aspera, which has a proprietary method for moving bits around, and — unlike popular protocols — takes full advantage of an existing broadband pipe for the file’s entire journey. It’s currently used by media companies for transporting huge video files from New York to Los Angeles and even for sending data to the cloud. Its move to the iPhone was something CEO and Co-founder Michele Munson talked to me about last September.

    To take advantage of Aspera’s proprietary protocol, users need to have the Aspera fasp-AIR client on their phones. The client is in beta and enables users to move fat files, like pictures and videos, to Aspera’s servers. The cost of the app or data transfer was not disclosed, although for some of its other products Aspera offers its software for free and charges for the transport. The iPhone client will be generally available in the first half of 2010. At that point, people trying to send video to CNN’s iReport or use Ustream will have a tool that speeds the file transfer process.

The Aspera client shines when it’s used to send stuff over longer distances, because the farther your packets travel, the more latency slows and degrades the transfer. So while Aspera can speed content transfer over small distances on 3G networks by three times, transfers over Wi-Fi networks connected to distant servers are 10 to 100 times faster than traditional transports. For the average user, this means you can send a fat file and watch it fly. For professionals, it turns an iPhone into a broadcast platform.
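The distance effect comes down to round-trip latency: transfers built on TCP, like HTTP and FTP, can keep at most one receive window of data in flight per round trip, a ceiling Aspera’s proprietary protocol is designed to sidestep. A minimal sketch of that ceiling, using illustrative window sizes and latencies rather than Aspera’s actual measurements:

```python
# Why distance throttles TCP-based transfers such as HTTP and FTP:
# a sender can have at most one receive window of data "in flight"
# per round trip, so throughput is capped at window_size / RTT.
# Window and RTT values below are illustrative, not Aspera figures.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput for a given window and round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

WINDOW = 64 * 1024  # a classic 64 KB TCP receive window

for label, rtt_ms in [("same city", 10), ("NY to LA", 70), ("transatlantic", 150)]:
    cap = max_tcp_throughput_mbps(WINDOW, rtt_ms)
    print(f"{label:>13} ({rtt_ms:3d} ms RTT): {cap:5.1f} Mbps ceiling")
```

The longer the round trip, the lower the ceiling, no matter how fat the pipe is, which is why a protocol that doesn’t wait a full round trip per window can be dramatically faster over long distances.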

    Related GigaOM Pro Content: Are Torrents a Tool for Predicting the Future?

  • Google Doesn’t Want to Be an ISP — It Wants to Be a Rabble-rouser

    Minnie Ingersoll

    Google rocked my world today when it unveiled its experimental fiber network, complete with speeds of 1 gigabit per second to the home. Already the Google-as-ISP meme has pervaded the technology world, but I don’t think Google is in this to be a traditional ISP. Rather, it wants to build out a fiber-to-the-home network to show the U.S. the future of communications — a far more frightening prospect for existing ISPs, and one that has big implications for entrepreneurs and consumers.

I wrote late last year about the future of communications in an all-IP world and noted that even the Federal Communications Commission is preparing for that day. Google clearly is as well, as we can see from products like Google Voice and its cloud efforts. To figure out why Google has declared war on the existing communications network with this plan, I chatted with Minnie Ingersoll, a product manager for alternative access at Google. Her group works on such issues as white spaces broadband, spectrum auctions and Google’s filings with the FCC related to the National Broadband Plan.

    GigaOM: Why this? Why now?

    Minnie Ingersoll: Some of this is because it’s a natural follow-on to the work we’ve done in response to the National Broadband Plan. Some of that advocacy includes saying the government should set up testbeds to set up super-fast connections, so we said, “How about we step up and put our money where our mouth is and offer one of these high-speed test beds ourselves?” Getting faster and cheaper Internet access is really core to the mission of the team that I work on.

    GigaOM: You’re calling for community involvement. Does that mean the government needs to offer tax breaks, stimulus funds or access?

Ingersoll: We’re not looking for government funding or subsidies here. It is a community partnership, if you will. One of the things we learned from municipal Wi-Fi is that we need to have an engaged, excited community. We need to find a place where we can get users to use this service, learn what infrastructure is available and understand how the local contracting process works.

    GigaOM: Google has a reputation for building its own gear inside its data centers. Free, the French ISP, has managed to lower its operational costs and the cost to subscribers by building some of its own telecommunications gear. Will you guys do that with your fiber network?

Ingersoll: When you mentioned Free, I thought you were going to ask about its openness, so I could talk about our thoughts on openness, which we are replicating from the Europeans. But it’s a little too early to know exactly what we’re going to do in terms of the hardware, and we have not fleshed out all the partnerships yet.

    GigaOM: By getting into the ISP business Google will expose itself to new regulations. What are the expectations there for you?

    Ingersoll: We expect to be regulated in the same way as anyone else is regulated. We don’t plan a video or voice offering. Our fiber to the home is strictly an IP data pipe. We will be governed by the regulations that apply there and are not seeking special treatment.

    GigaOM: What is the timing on the network?

    Ingersoll: We’ll be seeking to identify a community this year. Beyond that the timing will depend a lot on what communities we partner with.

    GigaOM: Google said it would offer a competitive price. How will that be determined?

    Ingersoll: It’s too early to make commitments to the price, but part of the goal is to have services that users are actually using, so we are pricing it to encourage people to use it.

    GigaOM: That’s a phenomenally fast connection. What will people use it for?

Ingersoll: Think back to when we all had dial-up and no idea what would be possible once we moved into this broadband world. This is like that, and that’s where the open nature of the network is important. We have a lot of Google engineers who are excited and experimenting with apps and services on the network, but the openness is there precisely so developers can offer products and services on top of the network. We’re also looking for innovation on the deployment side, such as new deployment techniques and technologies.

    GigaOM: So if I have a new edge router or an optical network termination design, I should call you?

    Ingersoll: Yes! We think that faster, improved Internet access is possible and we hope to take the learning from this test bed to the world.

    GigaOM: It has to be asked: Is Google going to become an ISP?

    Ingersoll: We are not planning to roll out a nationwide ISP network. This is a test bed for innovation.

  • Google’s Fiber Network Could Foil ISPs and Fuel Innovation

Google said today it plans to build an experimental fiber-to-the-home network in select areas of the country that would offer speeds of around 1 gigabit per second. It says it plans to serve between 50,000 and 500,000 people and will offer the service at a “competitive cost.” The company is currently seeking responses by March 26 to its request for information (RFI) from municipalities that may want such a service for their citizens, but let me be the first to put my hometown of Austin, Texas, in the running.

    When asked about the characteristics of those communities, a Google spokeswoman emailed the following response:

    Above all, we’re interested in deploying our network efficiently and quickly, and are hoping to identify interested community partners that will work with us to achieve this goal. To that end, we’ll use our RFI to identify interested communities and to assess local factors that will impact the efficiency and speed of our deployment, such as the level of community support, local resources, weather conditions, approved construction methods and local regulatory issues. We will also take into account broadband availability and speeds that are already offered to users within a community.

    Google’s announcement, which has been five years in the making, could positively shift the telecommunications landscape if it leads to new services that galvanize the FCC, communities and consumers to start demanding faster broadband. It also creates a potential testbed for innovative services that rely on broadband as a platform to work — benefiting entrepreneurs and those who invest in them.

    With its web DNA and commitment to openness, Google will likely attract entrepreneurs to its network that are willing to try something new on the services front. Presumably it will also offer a faster path to the end consumer than what an existing ISP might. I’ve always felt that much of the innovation around broadband has occurred despite the ISP or even by bypassing the ISP, so imagine what projects we might see if the pipe owner were an active contributor to that innovation.

We at GigaOM have said for years that broadband is the platform for innovation, and Google no doubt agrees. The pace of technological innovation in video conferencing, telemedicine and remote education is rapidly surpassing the average American’s connection speed, which ranges from 3 Mbps to 7 Mbps depending on the study. And without the demand for such services, or a cost-effective way to get there, ISPs and entrepreneurs that want to deliver products that require fat pipes are reluctant to invest. Think of it as a chicken-and-egg issue. Google can help change this.

The network can help spur innovation, but could also become a foil to the lobbying efforts of existing ISPs, many of whom are less than up-front about how their network costs are reflected in their prices, and tend to react with hysteria when faced with regulations that will limit their ability to control the bits running over their pipes. We’ve all read stories about how the web will break under the weight of network neutrality, or how ISPs need to raise prices or implement tiered pricing plans because some consumers are using too many resources. What we don’t have is the data showing that there’s an economic reason for this other than profiteering in an uncompetitive market.

    Google makes almost all of its money from selling ads over the Internet, not from selling the pipe itself. Therefore, if it is truly transparent about its costs and traffic demands, it could provide valuable data to the FCC and the industry that telecommunications companies do not. It wouldn’t exactly make the broadband market competitive, but it could help make the economics of operating such a network more transparent. And that could help regulators determine how competitively priced broadband is.

    Om wrote about Google’s interest in controlling its own bandwidth back in 2005 for Business 2.0, laying out an economic rationale for the creation of what he called the GoogleNet:

    An even more compelling reason for Google to build its own network is that it could save the company millions of dollars a month. Here’s why: Every time a user performs a search on Google, the data is transmitted over a network owned by an ISP–say, Comcast–which links up with Google’s servers via a wholesaler like AboveNet. When AboveNet bridges that gap between Google and Comcast, Google has to pay as much as $60 per megabit in IP transit fees. As Google adds bandwidth-intensive services, those costs will increase. Big networks owned by the likes of AT&T get around transit fees by striking “peering” arrangements, in which the networks swap traffic and no money is exchanged. By cutting out middlemen like AboveNet, Google could share traffic directly with ISPs to avoid fees.
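The economics in Om’s passage are straightforward to sketch: IP transit is billed per megabit of sustained traffic, so a per-megabit price turns directly into a monthly bill that peering avoids. A rough illustration, where the $60/Mbps price comes from the quote above and the traffic level is a made-up number:

```python
# Back-of-the-envelope transit-fee arithmetic. IP transit is billed per
# Mbps of (typically 95th-percentile) sustained traffic, so the per-Mbps
# price becomes a recurring monthly bill that a peering arrangement avoids.
# $60/Mbps is the 2005 figure quoted above; the traffic level is invented.

PRICE_PER_MBPS = 60  # dollars per Mbps per month

def monthly_transit_cost(traffic_mbps: float) -> float:
    """Monthly transit bill for a given sustained traffic level."""
    return traffic_mbps * PRICE_PER_MBPS

traffic_mbps = 10_000  # a hypothetical 10 Gbps of sustained traffic
bill = monthly_transit_cost(traffic_mbps)
print(f"Transit bill: ${bill:,.0f}/month (${bill * 12:,.0f}/year avoided by peering)")
```

At those rates, even modest sustained traffic adds up to millions a year, which is why cutting out the transit middleman was worth the capital expense of Google's own network.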

It didn’t happen five years ago, and my hunch is that Google was waiting for a last-mile technology that would last. DSL and cable don’t have the capacity to reach 1 Gbps (although cable can offer up to 200 Mbps), and in 2005, fiber to the home was still an expensive pipe dream. But now fiber to the premises, which has the capacity to meet bandwidth demand for decades, is a reality.

    It took Verizon’s $23 billion investment in its FiOS network to drive innovations for delivering fiber to the last mile and lowering the costs. Technologies such as bendable fiber and smaller optical network terminals that fit on desks rather than inside entire closets were pioneered for the FiOS effort. Google will surely take advantage of that with its deployment.

The fiber GoogleNet could become an indirect threat to Verizon and other ISPs because it could open the kimono on actual network costs and lead to services that would suck even more bandwidth on ISPs’ existing networks. So it’s ironic that the creation of such a network might owe a large debt to Verizon as it makes its own fiber push. Personally, I can’t wait.

    Related content from GigaOM Pro:

    When It Comes to Pain at the Pipe, Upstream Is the New Downstream

  • T-Mo’s HSPA+ Upgrade to Hit the Coasts First

    T-Mobile CTO Cole Brodman with Om

    T-Mobile may end up having one of the fastest mobile broadband networks in the country — at least for a short time, as it rolls out HSPA+ upgrades across its network this year, which will offer theoretical speeds of 21 Mbps down. And in an interview with me yesterday, T-Mobile’s Dave Mayo, VP of engineering, hinted that the first cities to get it (outside of Philly, which already has a test network) will be located on the coasts.

Mayo said that T-Mobile has already upgraded the existing HSPA software in “major cities” along the California coast and said “major cities from Washington, D.C. to Boston” will have it on the East Coast, including Philadelphia, where it’s already live. My requests for more information on the timing and exact cities were denied, but my guess is T-Mobile, which said it would have HSPA+ deployed by the middle of 2010 and that the network would offer access to 205 million people, is waiting for devices and better backhaul before flipping the switch.

Currently T-Mobile has about 10 devices that can use its existing HSPA network, which offers download speeds of up to 7.2 Mbps, but no HSPA+-capable gadgets. The relatively young 3G network was rolled out in 2008, but has several compelling devices such as the Nexus One, the MyTouch and unlocked iPhones scavenged from AT&T’s network. AT&T is currently deploying HSPA coverage, but will skip HSPA+ and go straight to a Long Term Evolution network, beginning with trials in 2011.

However, AT&T is currently beefing up its mobile backhaul faster than T-Mobile is, at least judging by Ma Bell’s statements to the world (GigaOM Pro, sub req’d). Mayo at T-Mobile said the operator doesn’t have the advantage of a fixed wireline network, but it has deployed fiber to 7 percent of its towers with 20 Mbps of capacity on those fiber strands. He also said that within the next few weeks the operator will turn on fiber to about 25 percent of its towers.

When it comes to mobile broadband, the HSPA+ deployment puts T-Mobile in a singular position with regard to the other national operators: none of the big carriers in the U.S. is rolling out HSPA+, although Bend Broadband, an Oregon cable operator, also has an HSPA+ network with even faster speeds. A T-Mobile spokesman points out that more than 25 operators worldwide have deployed HSPA+, so there are plenty of devices available. The speeds should beat out those offered by WiMAX (maybe even on older HSPA gear) and will likely compete with early Long Term Evolution deployments, especially at LTE’s slower end. For example, Verizon expects users to see speeds on its network of between 5 and 12 Mbps down.

    T-Mobile’s unique bet on HSPA+ is probably dictated in part by the high costs of rolling out entirely new infrastructure to build an LTE network, especially so soon after it upgraded to 3G from 2G. However, Mayo said the 3.5G network is an asset as more consumers pick up smartphones and want a fast surfing experience on them (sorry iPad fans, no word yet on T-Mobile’s plans for Micro SIMs). He believes T-Mo’s network will provide a good experience for those who want voice and mobile data on a handset.

    I think that’s true, especially since voice and converged handsets for LTE networks are farther out than carriers may want us to believe. Plus, while a select few with deep pockets or mobile lifestyles plunk down $60 a month for a data card or personal hotspot, smartphones will be the mainstream consumer device for accessing the mobile web for a while yet. So until about 2012 or later, when LTE handsets hit the mainstream, I think T-Mo may have a slight advantage over other mobile broadband providers. After that, HSPA+ may look more like a handicap.

  • YouTube Will Kill Flat-rate Mobile Broadband Pricing Forever

    Video is driving the projected increase in both mobile and wired broadband — but it’s not the proliferation of video that’s the problem for mobile operators so much as the relative ease with which consumers can now access it. Indeed, while mobile operators have long faced traffic congestion at cell sites thanks to peer-to-peer traffic, the widespread availability of video in formats that the average consumer can watch has changed the industry. And that’s causing mobile operators to rethink their pricing plans (GigaOM Pro, sub. req’d). In short, YouTube may be the death of unlimited mobile broadband on handsets.

    Mobile video streaming rose 99 percent between the first and second half of 2009, according to data released this week by Allot Communications. The firm, which sells network management gear to broadband providers, credits the accessibility of YouTube and its ilk for the rise in video streaming on mobile networks. Video overall comprises most of the mobile network traffic, but the amount consumed via peer-to-peer traffic has fallen as the amount of streaming video traffic has risen. Jonathon Gordon, vice president of marketing with Allot, says that P2P has decreased as a percentage of mobile network traffic although it still comprises 19 percent of total traffic.

    The reason for P2P’s gradual decline is that it’s more complicated to share files via P2P, which somewhat limits the audiences that will practice file sharing. For example, it requires finding and downloading software to a mobile phone, which not everyone is willing to do. YouTube, however, can be accessed by anyone in just a few clicks. As such, YouTube traffic accounts for 10 percent of all the traffic on mobile broadband networks, and 32 percent of all HTTP streaming traffic. And it rose 90 percent between the first half and second half of 2009.

Allot’s data shows that YouTube is a force today, but the latest numbers out from Cisco’s Visual Networking Index show the effects of streaming video on mobile broadband networks through 2014. And those effects are pretty brutal. Cisco estimates that by 2014, 82 percent of mobile broadband traffic will be HTTP streaming traffic, while total video traffic will account for about 2.3 exabytes of data a month.

Streaming traffic is more difficult for operators to manage simply because, as opposed to a video download, streaming is an ongoing process. For ways companies are trying to improve this process, check out our report on Adaptive Bit Rate Streaming (GigaOM Pro). Such real-time consumption of video has big implications for mobile operators’ networks, notably in that it can cause problems during periods of the day when other people want to use the same mobile network to surf the web, make phone calls or check email.

It’s no accident that AT&T’s Ralph de la Vega singled out video streaming in his answer to a question at an analyst event last December, in which he attacked the gratuitous use of network resources by iPhone owners. In his response, de la Vega said:

    “I’m not going to give you in detail what we’re going to do, but if three are causing 40 percent, then we’re going to try to focus on making sure we give incentives to those small percentages to either reduce or modify their usage so they don’t crowd out the other users in those same cell sites,” de la Vega said. “You’ll see us address that more in detail in the future…What’s driving usage on the network and driving these high-usage situations are things like video or audio that keeps playing around the clock. We’ve got to get to those customers and have them recognize and change their patterns.”

    I’ve suggested that AT&T might use pricing as a means to shape user behavior on the network, rather than simply forbid users from doing what they want on mobile phones. Indeed, AT&T (and other carriers) may find itself racing to keep margins high for mobile broadband as usage increases. On its fourth-quarter earnings call at the end of January, Ma Bell admitted that its data traffic had doubled while the costs to send bits had halved. So for now, AT&T is keeping its costs in line with demand. But according to a forecast from Cisco released today, the average amount of data consumed on mobile devices will rise to 7 GB per month by 2014 from just 1.3 GB per month today — a 438 percent increase. Can AT&T — or other operators — drive the cost of bits down in line with that amount?
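The margin math behind that closing question can be worked out from the figures above; the only assumption I’m adding is that the flat-rate plan price stays at roughly $60 a month:

```python
# Sketch of the flat-rate margin squeeze, using the article's figures.
# Assumption (mine, for illustration): the flat plan price stays at $60/month.

usage_today = 1.3   # GB per connection per month today
usage_2014 = 7.0    # GB per connection per month, Cisco's 2014 forecast

growth = (usage_2014 - usage_today) / usage_today
print(f"Usage growth: {growth:.0%}")  # the 438 percent cited above

flat_price = 60.0  # $/month, assumed unchanged
print(f"Revenue per GB today:   ${flat_price / usage_today:.2f}")
print(f"Revenue per GB in 2014: ${flat_price / usage_2014:.2f}")
print(f"Cost per GB must fall {usage_2014 / usage_today:.1f}x just to hold margins")
```

In other words, under flat-rate pricing revenue per subscriber is fixed, so the per-GB cost of carrying traffic has to fall by the same 5.4x factor that usage grows, which is a much steeper decline than the halving AT&T reported.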

Given that mobile resources are constrained by a variety of things, including the spectrum allotted to carriers, it’s likely that mobile broadband providers will eliminate flat-rate pricing for mobile broadband as a way to keep profits and network quality up while data use expands. When that happens, should we blame YouTube — or profiteering mobile operators?


  • Cisco: The Mobilpocalypse Is Coming!!!!!

    Cisco forecasts that by 2014 we will be using 3.6 exabytes a month on mobile networks worldwide, according to its Visual Networking Index figures released today. (For those pondering an exabyte, it’s equal to 1 billion gigabytes or half a trillion MP3 files.) And by 2014, we’re apparently going to be sucking down 40 exabytes annually from our mobile broadband networks, up from a total of 1.08 exabytes in all of 2009.
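For those who want to check Cisco’s units, here’s a quick sanity check; the roughly 2 MB average MP3 file size is my assumption:

```python
# Sanity-checking Cisco's units. An exabyte is a billion gigabytes;
# the ~2 MB average MP3 size is an assumption, not a Cisco figure.

EB = 10**18  # bytes in a (decimal) exabyte
GB = 10**9   # bytes in a gigabyte

assert EB // GB == 10**9  # 1 EB = 1 billion GB, as stated above

monthly_eb = 3.6  # Cisco's 2014 forecast, exabytes per month
print(f"Annualized: {monthly_eb * 12:.0f} EB/year")  # ~43, in line with the ~40 cited

mp3_bytes = 2 * 10**6  # assume a roughly 2 MB MP3 file
print(f"MP3s per exabyte: {EB // mp3_bytes:.1e}")  # about half a trillion
```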

    A desire for always-on connectivity is behind Cisco’s incredible predictions for growth as well as the increasing number of devices that allow us to surf the mobile web with wired-Internet ease. If the mobile web is a cocktail served at a bar, our devices have moved from being thin cocktail straws to data-quaffing iPads. Plus, more of us are now able to drink legally, which means more bar patrons and more consumption by those bellying up to the mobile broadband bar. It’s both a nightmare and dream for mobile operators, and a clear opportunity for equipment vendors like Cisco and gadget makers like Apple.

    For example, today, the average mobile broadband connection generates 1.3 gigabytes of traffic per month — while consumers using mobile broadband via a data card pay around $60 a month for a 5 GB-chunk of access. By 2014, the average mobile broadband connection will generate 7 GB of traffic per month, which means that operators are going to have to revamp their pricing plans while also lowering the costs associated with sending bits through their networks in order to keep margins up.

I’ll have more analysis on the numbers later, but here are the bare-bones stats, which should be enough to knock your socks off — or strike terror into the hearts of mobile operators, some of which can’t even handle the data deluge caused by the iPhone.

    • Global mobile data traffic has increased by 160 percent over the past year to 90 petabytes per month — the equivalent of 23 million DVDs.
    • Global mobile data traffic is growing 2.4 times faster than global fixed broadband data traffic.
    • Smartphones and laptop air cards will drive more than 90 percent of global mobile traffic by 2014.
    • Of the anticipated traffic, Wi-Fi offload and other offload will only reduce mobile data use by 25 percent by 2014.
    • Global mobile video traffic is forecasted to be 2.3 exabytes per month by 2014.
    • By 2014, more than 400 million of the world’s Internet users will access the network solely through a mobile connection.
    • Today, smartphones are only 10 percent of all handsets in use, but generate over 50 percent of global mobile data handset traffic.

Related GigaOM Pro report (subscription required): How AT&T Will Deal With iPad Data Traffic

  • Silicon Valley Has a Woman Problem, But Women Still Have a Baby Problem

    A post yesterday on TechCrunch did a wonderful job of illustrating how many more men than women there are in the U.S. venture capital industry — and how that imbalance extends to tech entrepreneurs. It also extrapolated a rationalization for this gap that, while reasonable, was incorrect. Silicon Valley’s gender problem isn’t that complicated — it boils down to babies. As in, those who have them can’t be a startup CEO, too.

    Vivek Wadhwa, the author of the TechCrunch post, included a nice list of reasons why women entrepreneurs and women-led venture-backed companies are scarce:

Sharon Vosmek, CEO of venture accelerator Astia, doesn’t think that VCs have an overt bias against women. Instead, it’s the way the venture-capital industry operates. Vosmek says that these “systematic or hidden biases” include:

    1. that VCs hold clear stereotypes of successful CEOs (they call it pattern recognition, but in other industries they call it profiling or stereotyping.) John Doerr publicly stated that his most successful investments – and the no-brainer pattern for future investments – were in founders who were white, male, under 30, nerds, with no social life who dropped out of Harvard or Stanford (2009 NVCA conference).

    2. VCs invest in people they know. If women aren’t in their natural networks, they won’t get through the door. We know that still today, men and women network in separate business networks.

    3. VCs want to invest in serial entrepreneurs. (This further reduces the chance for woman entrepreneurs.)

    4. The VC community is obviously male dominated, and it just got worse…after the cold freeze VCs experienced over the past 24 months, many women partners exited the industry. As the Diana Project research shows, a firm with women General Partners is more likely to invest in women entrepreneurs.

However, it was a comment from TechCrunch reader Chem that actually laid bare the issue of why women aren’t better represented in tech — essentially, it’s because women have babies, and the perception is that when we do, we leave the workforce to take care of them. And while Chem’s stereotype isn’t correct (I was back at work and even took on a more demanding job soon after my daughter was born), the fact that women are “supposed” to bear the brunt of raising children is a huge reason why women aren’t more visible at the helm of venture-backed startups. It’s the babies, stupid.

    Or rather, it’s the idea that women should shoulder the burden of raising children, an idea that dominates our society to such a degree that many women and men buy into it without question. Society at large explicitly perpetuates motherhood and not parenthood (check out the New York Times, from stories that demand mothers learn how to speak nanny, to the spate of “wow-men-are-now-staying-at-home” stories), and implicitly enforces the status quo through its policies around access to childcare for babies, school calendars and thousands of other complicating factors that any family, be they dual-income or single-parent, must navigate.

    And when that navigation does require a trade-off, it’s generally still the mother that makes it. Which means that yes, once women have babies there are forces that can keep them from taking on a 90-hour-a-week startup gig. We can bemoan a scarcity of female role models in tech, entice women into the math and science professions or even blame women who leave the work force to take care of kids for the lack of gender diversity, but to fix the problem, we’re going to have to discuss the lack of parity between men and women when it comes to raising children.

    Because Wadhwa is right: Gender diversity is important, and women shouldn’t have to choose between raising a family and building a startup any more than men should.

    Image courtesy of Flickr user anonymous to you

  • Stat Shot: How the iPhone Changed the Handset Market

    The change in the mobile phone market caused by the introduction of the iPhone in 2007 has slightly cut the profits for the handset industry overall, but has most severely affected Nokia and Sony Ericsson, according to data released today from Deutsche Bank. The investment bank issued a note showing how Apple and Research in Motion, the maker of the BlackBerry, garner most of the profits in the handset industry despite their relatively small market share.

    The report also shows an incredible loss for Nokia, which saw its share of handset profits cut in half by the shift in the handset market that occurred after the iPhone was released. In 2007 Nokia made about 60 percent of the profits in the industry, and in 2009 it had about 31 percent. Meanwhile the adoption of mobile broadband (and likely the fact that the iPhone is a consumer-focused device only available from one carrier) has helped RIM take about a fifth of the overall industry profits in 2009 as more corporations and people tried to access email and the web on their phones.

However, 2009 represented a bad year for average industry profits, which the bank believes will rise in 2010 and 2011. Part of that might be a better economic climate, but it’s also likely that after a few years of playing catch-up with the iPhone, the handset makers now have something that can compete with it thanks to Android and more web-friendly phones. The real question ahead for handset makers is which ones will fall by the wayside thanks to the overall shift in the market. Palm and Sony Ericsson aren’t due for a comeback based on this data, and Motorola’s overall share of the profits isn’t much to build a business on.

  • My Austin WiMAX Experience Was Good, But Not Good Enough

    I spent the last few weeks roaming around Austin with a dual-mode WiMAX modem from Sprint in order to see how well it works here. The verdict: It’s not strong enough to be a wireline replacement, but if I didn’t have a contract to fulfill on Verizon I’d ditch my MiFi and pick up the Overdrive 4G/3G personal hotspot and use that as my primary data connection.

    Sadly, the truly fast 4G service is only available in a limited area and the upload speeds are only so-so, which means I’m not going to go out of my way to make WiMAX happen for me. But anyone who doesn’t already have a data card should take a hard look at it. Data service from Sprint costs $59.99 a month and the personal hotspot will set you back $349 for the device without a contract or $99 with one. The cost of data service and the contract price on the device is comparable to that of the MiFi.

    However, my overall experience with WiMAX bums me out because I had high hopes for the 4G wireless service as a way to fill in some of the broadband black holes in town where cable or DSL doesn’t reach. Unfortunately, WiMAX doesn’t seem to reach those areas, either.

    WiMAX speeds downtown

    The best speeds I noted were 4 Mbps down and never more than 500 kbps up. I was really disappointed with the upload speeds until I spoke with someone from Clearwire, who said that upload speeds are limited to 1 Mbps in order to allocate more downlink capacity, which most people value over the upload speeds.

    Capability-wise, I was able to watch Hulu, stream music and make decent VoIP calls. Skype video was iffy, but I even have issues with that on my cable modem at home, so I’m not prepared to draw too many conclusions from just one test. Surfing the web was no problem — in fact, in some areas where WiMAX coverage was strongest, it was just like surfing at home.

    WiMAX at Zilker Park

    Verizon EVDO at Zilker Park

    However, in many areas of town I had a hard time getting a 4G signal at all, such as when I was being driven down MoPac and around the 360 in the western part of town. And the modem I was testing had a hard time transitioning between 3G and 4G signals, so VoIP in a moving car isn’t really possible. Some of this may be Austin’s hilly geography and part of it may be that the network in Austin, according to Sprint spokesman John Taylor, is still not 100 percent built out (he didn’t know how long that would take). In other areas, such as around Zilker Park and in the West Lake Hills, I got speeds of 2.5 Mbps down and 220 kbps up. The University of Texas campus isn’t covered by WiMAX at all since the university has its own Wi-Fi network.

    Taylor told me the thought was that students getting Wi-Fi for free wouldn’t pay for coverage, so Clearwire didn’t put up the towers. Which indicates that in the rush to keep its first-mover advantage over the coming Long Term Evolution networks being deployed by the large carriers, Clearwire is cutting some corners. Depending on how well and where Verizon implements LTE by the end of this year, this could come back to hurt Clearwire and Sprint.

    So overall, I think if you live in an area where the coverage is strong, WiMAX is a decent alternative to basic DSL or non-DOCSIS 3.0 cable service. If you don’t have access to either of those, even getting the 3-4 Mbps down speeds via WiMAX is a no-brainer. As a mobile broadband user, however, having a dual-mode device is a nice-to-have option, but isn’t really something I’d go out of my way for, especially since the area where the 4G service is significantly better than my current EVDO modem is small. What the WiMAX test really does is whet my appetite for a day when 4G is more ubiquitous.

  • AT&T Seen Keeping the iPhone Through 2011: Analyst

    AT&T will likely keep its exclusive hold on the iPhone for the next 12-18 months, rather than ending its exclusivity in mid-2010, writes Jonathan Chaplin of Credit Suisse in a note issued today. And because he thinks AT&T will have so much more time as the sole provider of Apple mobile phone goodness, he figures Ma Bell will have the chance to make the last year and a half’s network problems a thing of the past in the minds of consumers as it pulls out all the stops in boosting network capacity.

    An improved AT&T experience means consumers are less likely to run to another network at the end of the exclusivity period, leaving AT&T fat and happy and other carriers seeing marginal gains. From the report:

    We believe there is a 75% probability that AT&T keeps exclusivity in 2010. We arrive at this probability through a two step process: First, we try to determine whether the Apple / AT&T agreement expires in 2010. The consensus view is that it does; however, we couldn’t find compelling evidence that this is the case. We conclude that there is only a 50% probability that it ends in 2010. Next, we try to determine whether AT&T bids for another year of exclusivity if exclusivity does end in 2010. We conclude that they would and that they can afford to compensate Apple such that Apple would be economically indifferent. Our approach yields a 25% probability for this outcome. Taken together, we see a 75% probability that AT&T keeps exclusivity for another year.
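
    The arithmetic behind that 75 percent is easy to reproduce. Here is a quick sketch (my reconstruction; the note only gives the end figures, and the 25 percent outcome implies the bank gives AT&T a 50 percent chance of winning a new bid, conditional on the deal actually expiring):

    ```python
    # Credit Suisse's two-step estimate, reconstructed from the quote above.
    p_runs_past_2010 = 0.5  # chance the current deal simply doesn't expire in 2010
    p_wins_new_bid = 0.5    # chance AT&T re-secures exclusivity if it does expire

    p_keeps_exclusivity = p_runs_past_2010 + (1 - p_runs_past_2010) * p_wins_new_bid
    print(f"{p_keeps_exclusivity:.0%}")  # prints "75%"
    ```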

    However, at the end of the exclusivity period he believes Apple will make a CDMA device for Verizon and Sprint, which means anyone can pick up an iPhone if that’s their heart’s desire. Personally, I’m happy enough with the Nexus One from Google and HTC, and I wonder after four years of waiting how many others will rush out to buy the hallowed iPhone. Chaplin also downgraded Verizon in light of his thesis and upgraded Research in Motion, the maker of the BlackBerry.

    Related content from GigaOM Pro (subscription required):

    How AT&T Will Deal With iPad Data Traffic

  • Younger Cell Phone Users Mean More Cell Phone Sales

    The Pew Internet and American Life survey today released data showing that younger and younger teens are getting cell phones, with 56 percent of 12-year-olds toting the devices. If we assume that these kids will live until the age of 78 and take as fact a recent NPR story that said each cell phone has an average 16-month life expectancy, these teens will go through almost 50 cell phones in their lifetimes. That’s a lot of hardware and that’s a lot of months paying for cell plans. It boggles the mind.
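
    For the curious, that back-of-the-envelope math checks out. A sketch under the story’s own assumptions (the round numbers are illustrative, not a forecast):

    ```python
    # Lifetime handset count for a 12-year-old, using the figures cited above.
    current_age = 12        # Pew: 56% of 12-year-olds carry a phone
    assumed_lifespan = 78   # assumed life expectancy, in years
    phone_life_months = 16  # NPR: average handset life expectancy

    remaining_months = (assumed_lifespan - current_age) * 12
    phones_owned = remaining_months / phone_life_months
    print(phones_owned)  # 49.5, i.e. almost 50 handsets
    ```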

  • 2010: The Year Comcast Embraces Convergence

    Comcast today reported fourth-quarter and 2009 earnings that showed remarkable subscriber growth against the backdrop of such a down economy. More telling, however, are the three big forward-looking strategic initiatives the cable operator plans to focus on this year: expanding its mobile broadband offering through Clearwire, deploying some type of interactive advertising and signing up carrier customers for mobile backhaul. It will also complete the rollout of its DOCSIS 3.0 broadband, which can deliver speeds of up to 50 Mbps; expand its TV everywhere product, Xfinity; and attempt to close the joint venture with GE over NBC Universal.

    Essentially Comcast, which is about to finish laying the groundwork for a fast wired network, is focused on reaping the benefits of mobile broadband. Along the way it will also use Xfinity, the NBC-Universal deal and interactive advertising as a means to forestall becoming a dumb pipe for users. I have no idea if all of those efforts will succeed, but I applaud it for looking ahead and seeing the future of ubiquitous and fast broadband as a necessary platform in a way some of its rivals may not.

    Its priorities reflect the growing awareness of a converged communications world. It’s attempting to provide the underlying infrastructure of fixed and mobile broadband as a bundle for the consumer as well as find ways to monetize and control the content running over those pipes in a way that won’t draw an outcry from consumers or regulators. However, regulators are already scrutinizing Comcast’s control of NBC-Universal and will likely spend some time on Xfinity as well.

    But if we step back and look at the big picture, it’s clear that Comcast understands both the opportunity and the threat that ubiquitous broadband presents to its business. For Comcast, 2010 is when it will finish laying the groundwork for delivering ubiquitous broadband, and when it will build up the arsenal of tools to answer the threat that an all-IP network represents to its core video delivery business.



    Related GigaOM Pro content (subscription required):

    A Closer Look at Comcast’s NBC Universal Acquisition

    Thumbnail image courtesy of Flickr user Tyler Yip

  • Can Microsoft’s Azure Find True Blue Developers?

    Microsoft on Tuesday said that its Azure cloud computing platform was open for business after more than a year of development. While Redmond may be late to the cloud bonanza, it now has a platform that could become a major force in cloud computing — if it can get developers to trust it. Derrick Harris takes an in-depth look at Azure over at GigaOM Pro (subscription required) to see what exactly Microsoft is offering and how it compares with other clouds.

    Derrick is pretty optimistic about Microsoft’s chances to build a developer community for Azure. He said that since Azure offers a platform as a service, a fabric to join public and private clouds, and a robust SQL database, it will meet the needs of many potential customers. From his report:

    What sets Windows Azure apart from the competition is that it tries to be everything to everyone, and often times it succeeds. For example, the sheer variety of languages and frameworks it supports is rare among PaaS offerings, most of which target one language or stack (e.g., Ruby on Rails or LAMP) and build the best possible service around it. This means that Azure might be attractive to developers who really like to experiment or businesses that run various types of applications, but that Azure won’t likely be the best at serving any particular language (except for .NET, of course). It remains to be seen whether PaaS customers will buy into Microsoft’s reputation and relative openness with Azure, or whether they will take their business to the best clouds for their particular jobs.

    The question of Microsoft’s success may boil down to how much enterprises and customers need to consolidate all of their IT operations in a single cloud or with one vendor. It could also depend on whether those customers want to take advantage of the plethora of application-specific or language-specific platforms for each IT function. If they do, then there’s no need for a general purpose cloud that tries to be all things to all developers because customers will seek to find the best fit for each program they want to run.

  • VoIP Gaining Ground, So Where Will Legacy Voice Make Its Last Stand?

    Voice over Internet Protocol penetration among U.S. businesses will increase rapidly over the next few years, reaching 79 percent by 2013, compared to 42 percent at the end of 2009, according to research out today from analyst firm In-Stat. At this point I wonder what market demographic represents the last stand for legacy circuit switched voice. Will it be consumer landlines or will it be mobile voice over 3G networks?

    Current telephone networks are gradually being phased out as the world moves to IP communications. Right now in the U.S. only 78 percent of consumer homes have a landline and only 22 percent rely on them exclusively. In the next three years I imagine both numbers will be much lower, which is why the FCC is looking at how to support broadband access (which is necessary for IP telephony) for all.

    In the mobile world, legacy voice will stick around for a while longer. Even though the next-generation Long Term Evolution networks will support voice, it’s still unclear how carriers will manage voice calls over the all-IP LTE network. Plus, the existing 3G and even 2G networks will still be around delivering voice calls, so legacy voice is still going to rule on mobile phones.

    Related GigaOM Pro Content:

    Thumbnail image from Old Telephones via Flickr

  • Obama Budget Spells Benefits for the Cloud

    Quick, what spends almost $80 billion on information technology, likes the cloud, and is red white and blue all over? Yup, it’s the federal government, and in President Obama’s budget announced yesterday, the feds may have opened a window of opportunity for cloud computing companies both large and small hoping for some government largess.

    The feds want to increase spending on IT in 2011 by 1.2 percent, to $79.4 billion, and to use the dollars to transform the way the government’s IT runs. Considering that about 70 percent of the current budget goes to keeping the existing gear running, that’s not a lot of overhead for innovation. The government fiscal year ends at the end of September so the money would start to trickle out after Oct. 1, 2010, for projects including cloud computing, cyber security, procurement and performance management.
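
    To put that overhead figure in perspective, a quick sketch using the numbers above:

    ```python
    # Rough split of the proposed fiscal 2011 federal IT budget.
    total_it_budget = 79.4e9   # proposed 2011 IT spend, in dollars
    maintenance_share = 0.70   # roughly 70% goes to keeping existing gear running

    innovation_budget = total_it_budget * (1 - maintenance_share)
    print(f"${innovation_budget / 1e9:.1f}B left for new projects")  # about $23.8B
    ```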

    Cloud computing is the most exciting as it could make government IT more efficient, something our readers have called on the feds to do. There are opportunities that could benefit Salesforce.com, Google, Microsoft, Amazon and others. Back in September 2009, federal CIO Vivek Kundra announced a trial program that will be expanded across the government in 2011. The program allows federal agencies to buy IT services through a government portal from companies such as Google, Salesforce.com, Scribd, SlideShare and others. It’s possible that smaller companies could benefit as much as larger ones through such a system. Plus, the government can legitimize the cloud and provide a large buyer that should force vendors to implement the interoperability and openness standards that the cloud so desperately needs.

    There are also plans for data center consolidation, which will benefit server makers such as Dell, HP, IBM and perhaps even SGI, as well as providers that deploy software and services to help manage hypervisors. The government had more than 1,100 data centers in 2009, up from a still huge 432 data centers back in 1998. Despite the sprawl, things are not getting more efficient. Look at the issues federal officials have sharing data across different agencies and databases.

    There’s also $364 million for the operations of the Department of Homeland Security’s National Cyber Security Division, which may become more politically important as tales of Chinese hacking continue to percolate, as well as from placing more critical information and infrastructure online.

    Other than these data center and cloud-focused priorities there is plenty of room for technology in the budget, as Larry Dignan points out over at ZDNet. And for those of us looking at the government’s role in telecommunications, the Federal Communications Commission got a slight boost with a proposed annual budget of $352.5 million. I wonder if that means Chairman Julius Genachowski can afford gadgets for the hoped-for new technology lending library he talked about during his visit to our offices last month.

  • President Obama Hearts Net Neutrality and May Hate Metered Broadband

    President Obama took questions via YouTube today and professed a belief in net neutrality. It wasn’t just the generic net neutrality that most agree with, namely the idea that broadband providers shouldn’t block content or make companies pay for preferred delivery over their pipes, but a fairly specific vision that may even include resistance to allowing carriers to deliver managed services or possibly tiered pricing on the consumer side.

    “We’re getting pushback, obviously, from some of the bigger carriers who would like to be able to charge more fees and extract more money from wealthier customers. But we think that runs counter to the whole spirit of openness that has made the Internet such a powerful engine for not only economic growth, but also for the generation of ideas and creativity.”

    The idea of carriers extracting more money from wealthier customers could apply to the large ISPs pressuring the big content providers to pay more for delivery, but Obama may be tying the idea of net neutrality to those same ISPs trying to extract more money from the end consumer through tiered pricing plans or higher fees. It’s unclear, but as far as I can tell net neutrality isn’t going to stop efforts to implement tiered pricing for broadband such as what Time Warner Cable proposed last year. His statements aren’t all that surprising given that Obama had campaigned on a pro-net neutrality platform, but it’s still worth checking out the video below.

  • Squeezed Cell Networks Lead to Dealmaking

    Two startups making hardware for the mobile industry scored investments today: Pulsus, a Korean company that has a chip aimed at making handsets more efficient, has taken $4 million from Qualcomm Ventures, and SpiderCloud, a startup that helps with mobile offload, has gotten $25 million from a group of VCs.

    As users, application developers and carriers bump up against the technical constraints around mobile broadband’s popularity, expect more and more hardware investments and dealmaking in the mobile sector. A venture partner involved in the $1.3 billion program to fund startups that will benefit Verizon’s LTE network has even detailed to me what types of companies he and other participants in the program are looking to invest in.

    But back to today’s funding. SpiderCloud is building out hardware that keeps cell phone traffic on a proprietary local network, which the company has dubbed eRAN. So instead of a business sending all the mobile traffic generated in its building back to the web over the carrier’s cellular network, all the phones in the office can communicate via a smaller in-house network that connects to the web using the business’ own connection; it’s kind of like a femtocell in idea, but not in execution. Today’s infusion, from Opus Capital, Shasta Ventures, Charles River Ventures and Matrix Partners, brings its total to $40 million.

    Pulsus makes a mixed signal chip that converts analog signals to digital signals before it amplifies them. That allows a greater ability to modulate those signals for better sound quality but could also be used to increase the power efficiency of the handsets, possibly in a manner similar to Quantance. It also makes a variety of other audio chips and its CEO has said it hopes to one day “be the Qualcomm of Korea.”

    In a related move, today Austin chip startup Black Sand Technologies purchased an intellectual property portfolio from Silicon Labs, a mixed-signal chipmaker. Black Sand is building a power amplifier that will make some of the components inside mobile handsets cheaper to manufacture and will boost battery life on mobile phones. Black Sand raised $10 million last September.

    Image courtesy of Flickr user Jurveston

  • Apple Brings 3G VoIP to the iPhone

    While the world was watching Apple CEO Steve Jobs unveil the iPad, voice-over-IP programs that use AT&T’s 3G network were finally being released for the iPhone. Yesterday iCall sent out a release saying Apple had updated the iPhone software development kit to allow VoIP over AT&T’s cellular network, and touting that it has a working product. Today fring said it has a working VoIP over 3G application on the iPhone. Om checked out Skype and Nimbuzz, but those apps haven’t yet been updated, as you can see from Om’s screenshot.

    The tech world has been waiting for the ability to make VoIP calls over the AT&T 3G network ever since the Federal Communications Commission closed its inquiry into the blocking of the Google Voice application on the iPhone. As a result of the agency’s questioning, it was revealed that AT&T in fact prohibited VoIP on its network, but also that Apple was responsible for blocking Google Voice.

    So in October, AT&T said it would allow VoIP over its 3G network. While some doubt that VoIP on the iPhone will be a great experience, some just want to see it happen. For them, yesterday may live in their memory not for the launch of the iPad but because they can now use VoIP on their beloved iPhones anywhere they have a connection. For more, check out the coverage at The Apple Blog.

  • What the iPad Tells Us About Mobile Broadband Pricing

    No matter what you think of the newly launched Apple iPad, it confirms a shift in the way mobile broadband services are priced, albeit a subtle one. Yesterday I wondered if Apple had it in for AT&T because the prepaid data plan offered more data for less than existing AT&T prepaid plans, and because with an unlimited plan that mirrored the pricing on the iPhone plan, it seemed like AT&T would have double or quadruple the traffic without a corresponding increase in revenue.

    But after listening in on the AT&T earnings call today, I realize that AT&T thinks it won’t be so hard up, and that the iPad data pricing continues a pattern of pricing mobile broadband based on the device. Much as data prices for a card or dongle differ based on the expected usage pattern, the data plan on the iPad costs less because AT&T hopes a “substantial” amount of the web surfing will occur over Wi-Fi, AT&T CFO Rick Linder said on the call. If that turns out not to be the case, Linder said AT&T would re-evaluate the pricing plans. AT&T does expect the iPad data consumption to fall somewhere between that of an iPhone and a laptop.

    Plus the fact that AT&T isn’t subsidizing the iPad — and doesn’t have subscriber billing and other costs associated with it — means the carrier isn’t terribly concerned about the iPad’s effect on its margins. Check out the chart below to see the costs for AT&T associated with the two devices.

    • Avg. data consumption: 500 MB per month (iPhone) vs. 1-2 GB per month (iPad)
    • Unlimited plan: $29.99 (iPhone) vs. $29.99 (iPad)
    • 250 MB plan: none for the iPhone, though AT&T charges $29.99 for prepaid 250 MB plans, vs. $14.99 (iPad)
    • Subsidy: $351 (iPhone) vs. $0 (iPad)
    • Estimated traffic sent via Wi-Fi: 10 to 20 percent (iPhone) vs. 50 percent or more, analysts estimate (iPad)