Author: Stacey Higginbotham

  • AT&T: We Really Do Suck in SF & NYC

    Updated: AT&T this morning said its earnings rose 25 percent in the fourth quarter thanks to its wireless business, and told consumers, if not investors, what they wanted to hear by detailing plans to spend $18-$19 billion in capital expenditures, with a $2 billion increase aimed at the wireless network and backhaul. The carrier, which has the dubious honor of being the exclusive provider of the iPhone, has been the target of much ire on the part of its customers due to the poor performance of its network, especially in major cities.

    Yet in Apple’s own earnings call on Monday, an executive at the iPhone maker said AT&T had shown Apple its plans for improving its network, and that Apple was pleased. It must have been, as yesterday Apple said that AT&T would once again provide a data plan for its latest gadget, a tablet called the iPad. For a detailed look at AT&T’s past network improvements and how the iPad may affect its network, check out our GigaOM Pro item from yesterday (subscription required).

    So far, network performance hasn’t been up to par, and AT&T included charts today in its earnings presentation detailing how far off the mark service is in Manhattan and San Francisco. Additionally, AT&T released stats showing that in December 2008 more than one out of every 100 calls was dropped. Now almost one out of every 100 calls is dropped, but such calls are still heavily concentrated in a few major cities (or wherever a bunch of people with iPhones gather). Update: However, an AT&T executive stressed on the earnings call that, according to third-party data, its dropped-call rate of 1.32 percent is only two-tenths of a percentage point behind the national leader.
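    To put those rates in perspective, here’s a quick back-of-the-envelope sketch in Python. Note the leader’s rate of roughly 1.12 percent is my inference from AT&T’s “two-tenths behind” framing, not a figure AT&T reported directly:

```python
# AT&T's dropped-call rate, per the third-party data cited on the call.
att_rate = 1.32 / 100
# The national leader's rate, inferred as two-tenths of a percentage
# point better (an assumption based on AT&T's framing, not a reported number).
leader_rate = att_rate - 0.2 / 100

calls = 10_000  # a sample of ten thousand calls

att_dropped = att_rate * calls        # ~132 dropped calls
leader_dropped = leader_rate * calls  # ~112 dropped calls

print(f"AT&T drops ~{att_dropped:.0f} of every {calls} calls")
print(f"The leader drops ~{leader_dropped:.0f}, a gap of "
      f"{att_dropped - leader_dropped:.0f} calls per {calls}")
```

In other words, the gap AT&T is minimizing works out to roughly 20 extra dropped calls per 10,000, which matters far more in the dense iPhone-heavy cities the charts single out.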

    Fixing the problem will require more capital expenditures, and AT&T said it will spend $2 billion more in wireless network upgrades and backhaul to connect its towers and base stations to the Internet (although not all of that will be fiber). It’s also adding more radio network controllers or swapping old ones out in certain areas to better use the spectrum, and is still deploying its HSPA 7.2 upgrades to deliver faster speeds and more capacity.

    Such measures, plus the efforts AT&T made last year, will help, but carriers need to start looking at how to manage their networks holistically now that these large data-consuming devices are coming onto them. So while AT&T is bandaging its wounded network, and could succeed, carriers around the world need to think about how to manage data so they avoid such injuries in the first place.

  • Does Apple Have It In for AT&T?

    Today’s unveiling of the Apple iPad, which resembles a slightly larger iPod touch and has the tech world all a-twitter, seems like a brutal blow to AT&T. First of all, it’s unlocked, and as such can run on other GSM-based networks, which means that AT&T may not be the only place iPad users can get their mobile broadband. Plus the pricing seems like it will cut into AT&T’s profits.

    Granted, the extent to which a consumer could find another mobile broadband provider in the U.S. is unclear, as the micro SIMs that CEO Steve Jobs mentioned in his onstage presentation may not be tuned to other networks (see Why Your Nexus One Won’t Work on AT&T’s 3G for an explanation). PCMag has a nice story on micro SIMs for those who are curious — they’re basically smaller SIM cards. AT&T won’t comment on the announcement, but I imagine it will be a topic on the carrier’s quarterly results call tomorrow morning.

    The revolution in pricing offered by the iPad is very clear. Jobs said those wanting 3G connectivity could get a prepaid AT&T data plan that allows about 250 MB of data downloads for $14.99 or an unlimited plan for $29.99. The 250 MB plan costs half as much as AT&T’s other prepaid data plans, presumably cutting into Ma Bell’s data margins. And the unlimited plan is the same price that folks currently pay for unlimited data via the iPhone, which had AT&T executive Ralph de la Vega complaining in December that folks were downloading too much. And giving folks a bigger, faster device isn’t likely to get them to cut back on their data consumption.
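    A rough comparison of the two plans makes the margin pressure concrete. This is a Python sketch using only the prices Jobs quoted; the breakeven point is my own arithmetic, not an AT&T figure:

```python
# Prices from Jobs' onstage presentation.
capped_price, capped_mb = 14.99, 250   # prepaid 250 MB plan
unlimited_price = 29.99                # unlimited plan

# Effective cost per megabyte on the capped plan.
cost_per_mb = capped_price / capped_mb          # ~$0.06 per MB

# How many megabytes of use make the unlimited plan the better deal.
breakeven_mb = unlimited_price / cost_per_mb    # ~500 MB

print(f"Capped plan: ~{cost_per_mb * 100:.1f} cents per MB")
print(f"Unlimited pays off past ~{breakeven_mb:.0f} MB per month")
```

So anyone expecting to move more than about 500 MB a month has every incentive to take the unlimited plan, which is exactly the usage pattern de la Vega was complaining about.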

    For more details on the iPad, a roundup of how AT&T has prepared its network for this moment, and some great facts about the average monthly data consumption on a superphone vs. a data card, check out my latest piece at GigaOM Pro (subscription required). Hint: the average superphone user already consumes more data in a month than a 250 MB plan accommodates.

  • Will the iPad Kill the Kindle? In a Word, Yes

    Om is drooling over the iPad at the moment, getting ever more excited about the feel and capabilities of Apple’s new device. His thesis is that the Amazon Kindle is dead thanks to the rich media capabilities of the iPad as well as the full software-based keyboard. Oh yeah, and there’s all those apps Apple already has in its App Store, which put the Kindle’s platform efforts to shame.

    The iPad already delivers some of the stellar features that make the Kindle a worthwhile dedicated e-reader (and as such, better than your smartphone), such as a long battery life and a bigger screen, but instead of offering them in black and white, it offers them in color, complete with rich media. It’s like comparing Kansas and Oz. That rich media experience will change the way organizations deliver their content, allowing them to include video, interactive charts and all sorts of other fun things. The price for rocking your media world? At the low end, an iPad with Wi-Fi will set you back $499, but if you’re willing to splurge there’s a version with more memory and a data plan (that’s extra) for $829. It will be available starting in late March.

    Check back for more from Om later.

  • Roads Not Taken: The U.S. Needs Smarter Highways

    Concrete, steel and silicon are the key ingredients of the next generation’s highway infrastructure, according to a report out this morning from the Information Technology and Innovation Foundation, a think tank representing the interests of a variety of technology firms. The report details a future in which cars can access real-time traffic information and municipalities may implement a per-mile usage fee for roadways. It also charges the Department of Transportation to come up with a plan to implement such technology by 2014.

    An intelligent transportation system includes real-time traffic information, in-vehicle communications systems that “talk to the road and to other cars, public transit systems that can broadcast information about their whereabouts, better electronic tolling and intelligent infrastructure that can react to real-time situations.”

    The report asks that $2.5 billion-$3 billion annually go toward this informed highway system ($1 billion would be dedicated for the states with the rest aimed at nationwide, large-scale projects), noting that the benefits to gasoline consumption, as well as the decrease to both commuting time and wear and tear on roadways would be substantial. In addition I imagine that as electric cars gain in popularity we’ll see the nation’s gas tax revenue decline, cutting off a source of highway funding.

    Of course, for the nation and taxpayers to get the true bang for their buck, the report notes that we need buy-in from governments and consumers on a national level.

    While some ITS, such as ramp meters or adaptive traffic signals, can be deployed locally and prove effective, the vast majority of ITS applications—and certainly the ones positioned to deliver the most extensive benefits to the transportation network—must operate at scale, often at a national level, and must involve adoption by the overall system and by individual users at the same time to be effective, raising a unique set of system interdependency, network effect, and system coordination challenges.

    In addition to the money and will to implement such a system, I think that in order to ensure massive buy-in we need an open platform from multiple vendors for delivering the information an intelligent transportation system needs. Developers, car manufacturers and others should be able to build on that platform in order to offer compelling features and programs that work no matter where we are in the country or what kind of vehicle we’re driving. We need to take a hint from the standards that enabled the Internet to be the information superhighway and develop a similar protocol for creating the informed highway.


    Image courtesy of Flickr user kla4067

  • FCC Quizzes Carriers and Google on Early Termination Fees

    As part of a recently created pro-consumer task force at the Federal Communications Commission, the agency is sending out letters to the top four wireless carriers and Google asking about their early termination fees (ETFs). The FCC already quizzed Verizon after Big Red raised the fees a consumer pays for canceling their contract early to $350 for smartphones — and the answers provided by Verizon were pretty much an exercise in obfuscation.

    So now the agency has sent out letters to Verizon, Sprint, AT&T, T-Mobile and Google asking about ETF policies, including what each company’s ETF supports and even things like how much a consumer has to pay in restocking fees if they return a phone early (a personal pet peeve of mine since it’s hard to get a feel for a phone that’s attached to the wall via a security tether). Google and T-Mobile get special attention for their fees associated with the Nexus One, but I find it odd that questions weren’t sent to other retailers of mobile phones. Overall I hope to see a bit more clarity around consumer billing for mobile service.

    Maybe the next inquiry could ask why carriers are forcing folks to pay for data plans they may not want with certain phones. An even more forward-looking inquiry might ask whether certain phones need voice plans at all.

    Image courtesy of Flickr user Neubie

  • Chip Firms Want to Know: When It Comes to e-Readers, Will Brains or Beauty Win?

    Things are looking dire for dead tree media of all sorts as the consumer electronics industry takes aim at newspapers, magazines, and the humble mass of paper known as a book. But between iPhones, dedicated e-readers and the much anticipated tablet, what does the consumer electronics industry’s infatuation with reading mean for those providing the components that make such devices work?

    The answer could be big for the chip industry, as In-Stat estimates that up to 28.6 million dedicated e-readers could be sold in 2013. However, such a prediction doesn’t take into account the launch of Apple’s tablet and may not accurately factor in the habits of those who read on their smartphones. Mobile application analytics provider Flurry, as Om noted back in November, sees books becoming the leading category of apps downloaded on the iPhone.

    David Carey, a teardown expert at Portelligent, says it’s not clear if dedicated e-readers or tablets designed to consume media will displace existing netbooks, notebooks or smartphones. If these devices add to the overall number of gadgets we own, then yes, they will expand the market opportunity for chip vendors. If they don’t, they’ll just help to redistribute consumer device spending.

    It’s an obvious point, for sure, but less obvious is how the success of a particular type of device may affect industry players. Currently Freescale Semiconductor is the market leader in the dedicated e-reader space with chips in the Kindle and the Sony Reader. When it comes to smartphones, companies like Qualcomm, Samsung (its applications processor powers the iPhone) and Texas Instruments are vying for domination. Those same companies, along with Marvell and its Armada line of chips, will also attempt to power a tablet with Samsung said to be providing the silicon inside the new Apple device.

    That covers the brains of the reading devices, but what about their displays? When evaluating the total cost of a reading device certain form factors will imply more spending on the brains than the display — the component that I think of as beautiful. Dedicated e-readers have overall higher costs thanks to the proprietary nature of e-ink displays, and those costs are unlikely to come down as fast as the price for LCD screens. Using the brains vs. beauty ratio (application processor vs. display), Carey said dedicated e-readers have a 1:2 ratio with manufacturers spending $2 on beauty for every dollar spent on applications processors.

    With smartphones or superphones, for every dollar a manufacturer spends on beauty, they spend 75 cents on brains. For example, in Google’s Nexus One the Snapdragon processor with integrated cellular radio costs $30.50 and the screen plus the capacitive touch components costs $41. In the iPhone 3G S, the applications processor plus the cellular radio (it’s not integrated in the iPhone) costs $27.46 while the screen and touch components cost $35.75.
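    Using the component costs Carey cites, the brains-to-beauty math works out roughly like this (a Python sketch; the dollar figures are the teardown estimates quoted above):

```python
# Teardown cost estimates quoted by Portelligent's David Carey.
devices = {
    # name: (brains = processor + cellular radio, beauty = screen + touch)
    "Nexus One":   (30.50, 41.00),
    "iPhone 3G S": (27.46, 35.75),
}

for name, (brains, beauty) in devices.items():
    # Cents spent on brains for every dollar spent on beauty.
    ratio = brains / beauty
    print(f"{name}: {ratio * 100:.0f} cents of brains per beauty dollar")
```

Both devices land close to the 75-cents-on-brains figure in the text (about 74 cents for the Nexus One and 77 cents for the iPhone 3G S), versus the 50 cents implied by the 1:2 ratio on dedicated e-readers.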

    It’s likely that the ratio of brains to beauty will start out closer to the 1:2 ratio for dedicated e-readers, but that the prices will drop as the cost of LCD screens drops, estimates Carey. Of course, as consumers decide they want their tablet to do more, the demand for smarter slates could force the application processor innovation (and costs) up.

    So for the chip firms hoping that e-readers will provide a steady market for their wares, the future is still cloudy. The end products are so different they can’t possibly meet the needs of every consumer. Dedicated e-readers last longer on a single charge, but likely won’t ever deliver as rich a media experience as a tablet or smartphone could (plus young people aren’t that into them), while words on the LCD screens used by smartphones, and potentially on a tablet, can be hard to read over a long period of time. Some folks may use a smartphone, since they don’t read that much anyhow, while others will go to their graves clutching their ripped, dog-eared novels, proud to have owned only printed versions of the written word.


    Image courtesy of Flickr user timetrax23

  • Forget Cable’s WiMAX Dreams: Cox Trials LTE Network

    Cox, the nation’s third-largest cable company, said today that it’s successfully delivered a voice call and high-definition video streaming over a fourth-generation Long Term Evolution network using some of the more than $550 million worth of spectrum it purchased during the 2006 AWS and 2008 700 MHz auctions. But while the trials held in Phoenix and San Diego may bring Cox’s 6.2 million customers closer to a quadruple-play offering of video, voice, data and mobility, they stand in stark contrast to the 4G wireless plans of Cox’s cable competitors, which all involve WiMAX.

    Cox has detailed plans to deploy a 3G network using the same CDMA technology as both Verizon and Sprint, and in December tested that service in three markets, with Sprint as its partner providing nationwide 3G coverage. Cox plans to officially launch its 3G wireless network in March in those three markets and roll out coverage to the remaining markets over time.

    However, as Cox embraces LTE as its fourth-generation wireless technology of choice (using gear from Alcatel-Lucent and Huawei), its fellow cable companies have invested billions in Clearwire and its WiMAX rollout. That puts Cox in the same camp as most of the telecommunications providers, which are also betting on LTE.

    Now WiMAX and LTE are actually not that far apart, and Clearwire could theoretically upgrade its WiMAX network to one that uses LTE after it gets out from under Intel’s thumb. But that’s a big if, especially since Clearwire is racing to roll out WiMAX before the major carriers get their LTE networks out in force.

    What’s odd about Cox’s LTE moves is that it’s trialing the technology relatively early (its 3G network isn’t even deployed). Plus, since Cox has an agreement with Sprint for 3G roaming, it will have to find a partner for 4G roaming that’s using LTE. Spokesman David Grabert wouldn’t speculate on whether Cox would leapfrog 3G coverage in some markets and go straight to LTE, but the news of the 4G tests raises far more questions than it answers about Cox’s wireless plans. As to Cox’s LTE plans, we need to wait a few weeks until Stephen Bye, the company’s VP of wireless, delivers a keynote at the Mobile World Congress in Barcelona. As for devices, pricing and 3G plans, we’ll have to wait until the March launch of the network.

  • Intel Goes to the Super Bowl: We Are All Nerds Now

    Intel said today that it will sponsor the post-game show of this year’s Super Bowl, and will join other tech advertisers with promotional spots during the nation’s largest televised sporting event. Intel will show off new versions of its commercials that feature geeks as rock stars (my favorite one is below), touting the latest line of Core processors that the company announced at this year’s Consumer Electronics Show. Intel’s Super Bowl ad will not only mark the chip maker’s return to the football game after a 10-year absence, but will highlight how much technology has become part of our daily lives.

    From our growing obsession with high-end phones to ever more networked homes and 3-D television, the average consumer’s everyday products are increasingly embedded with silicon, and Intel aims to make sure its name is associated with our digital consumption. During the Super Bowl, Intel will debut two new commercials over three 30-second spots, the first airing during the fourth quarter of the big game on Feb. 7. Intel drove extensive consumer branding of a geeky component when its bunny-suited fab workers stormed the advertising world in the late ’90s, so we’ll see if a similar approach keeps it relevant as a wider variety of computing architectures challenges its x86 hegemony.

  • Why Juniper’s Polycom Partnership Is Destined to Fail

    Juniper Networks said today that it would team up with videoconferencing gear maker Polycom to form an alliance to deliver videoconferencing products that would ensure a quality experience. The products and services associated with this alliance will come out in the middle of this year.

    This is clearly a partnership made to ensure that Cisco’s move into the enterprise video conferencing space doesn’t go unchallenged. Gear from this alliance will be designed for providers offering videoconferencing services. Also as part of the partnership, Juniper routers will detect a Polycom video link and allocate the optimal bandwidth for it in real time.

    When Cisco said it would buy Tandberg, a Polycom rival, for $3 billion last year, we explained why it had reason to move deeper into the middle market for video conferencing. However, I’m not sure that Juniper and Polycom are an ideal match, mostly because tying the product to the networking gear is a strategy that ultimately follows along with Cisco’s aims. As a smaller rival to Cisco, Juniper can’t win by playing by the same rules as the larger company — it needs to break them.

    Cisco is tying its networking expertise to servers, video conferencing gear, consumer equipment and whatever else it thinks it can get away with, because it can guarantee a better quality of service, but also because that gives Cisco the means to keep its margins up. Cisco may call its servers open, but they are not commoditized the way other servers are, because Cisco has integrated this networking component.

    By adding in the network layer, Cisco hopes to keep its servers from becoming the commodity that Dell’s, HP’s and IBM’s are. The same goes with things like the Flip camera that Cisco acquired when it bought Pure Digital. Sure it wants to boost bandwidth consumption so it can sell more telco gear, but it also wants to build a high-margin ecosystem around the home tied to its gear, in the manner that Apple has. But creating this end-to-end ecosystem tying the network and the device together isn’t the only way to ensure quality (although it may be the easiest).

    Taking this back to the Juniper-Polycom deal, it’s not enough for these guys to shout, “Me too!” and hope that they can win playing Cisco’s game. A far more disruptive decision would be to open up the standards and ecosystem for teleconferencing and make Cisco-Tandberg gear look costly and less relevant. Polycom already delivers standards-based telepresence products that will work with other non-proprietary gear (Cisco’s TelePresence is proprietary). Of course, by opening up too much, Polycom and Juniper face an innovator’s dilemma that will almost certainly commoditize their hardware sales. They may be damned if they do, damned if they don’t.

  • Startups, If You Can Make Verizon’s LTE Network Awesome, There’s $1.3B to Be Had

    There are 1.3 billion venture dollars available for startups that want to play a role in the fourth-generation Long Term Evolution wireless network being deployed this year by Verizon, so I talked to Daniel Deeney, a partner at a venture firm that’s investing some of those dollars, to see what types of companies have a shot at the dough. Deeney is with New Venture Partners, a telecommunications-focused firm that invests primarily in corporate spin-outs and is about to make its first investment under the Verizon LTE program.

    The program, called the 4G Venture Forum, was created back in October. Participating venture funds, which also include Charles River Partners, North Bridge Venture Partners, Norwest Venture Partners and Redpoint Ventures, will sit down with Verizon executives each quarter to figure out which startups the wireless carrier is interested in and decide if they represent an investment opportunity.

    Deeney said he’s looking at everything from ways to improve capacity and speeds on the LTE network to technologies that allow devices to use bandwidth effectively or make the network smarter. In other words, he’s focused not so much on apps (although he’s interested in those that help enterprises take advantage of LTE connectivity) as on the deep infrastructure that will enable a robust mobile broadband network, one that can support updates from household appliances to peer-to-peer file sharing.

    Daniel Deeney

    Those kinds of nitty-gritty technology equipment companies can eat up a lot of capital, something most venture firms are especially leery of right now. But Deeney says the fact that many of the startups only need to build gear that works on standards-based IP equipment rather than the proprietary older 2G and 3G network gear will help them keep costs down. However, he also acknowledged that since existing 3G networks are here to stay, in some parts of the network, going all-IP wouldn’t make sense. My hunch is that a lot of the equipment investments will involve gear that can sit on the edge of the network, or technology such as better antennas inside devices.

    For example, the first investment he’s hoping to announce within the next month is in a Dallas-based stealth equipment startup that helps boost capacity at the edge of the network. Aside from hardware, software that helps track network or device activity and respond to problems proactively is another area of interest.

    Carriers currently have equipment that can measure the type of traffic flowing across the core of the network, but Deeney is looking at software that can aggregate traffic from the cell sites at the edge and suggest load-balancing changes to improve capacity quickly, or even in real time. In theory such software may mean fewer dropped calls as people move in and out of various parts of the network.

    Using that analytic software to help plan future network buildouts is also of interest to Deeney. Companies that provide software on the devices themselves that can help optimize the way the devices communicate with the network or even schedule tasks in ways that don’t use as much bandwidth could also expect interest from the 4G Venture Forum. For startups, the lure of Verizon as a customer, as well as investment dollars from a contracting industry, is strong. For consumers, the 4G Venture Forum could deliver a better mobile broadband experience.

    Image courtesy of Flickr user swanksalot

  • Could the Supreme Court and Cablevision Help the Wireless Biz Get More Spectrum?

    Cablevision, the nation’s fifth-largest cable provider, has been fighting the rules that require it to carry certain local broadcast stations in areas it serves, and hopes to get the Supreme Court to hear its lawsuit regarding those rules. These so-called “must-carry” rules ensure that the local access channels are watchable on cable in addition to the larger broadcasters like Fox or NBC. However, if the Supreme Court hears the case and sides with Cablevision, then cable providers could dump those less popular stations, and the rejects, finding it hard to stay alive, could end up relinquishing their valuable broadcasting spectrum.

    That’s a lot of ifs, but analysts at Stifel Nicolaus believe that if the Supreme Court hears the case it’s likely to overturn aspects of the must-carry rules, setting off a chain of events that could benefit the cable companies and the wireless business, while hurting smaller, local broadcasters. From a note the firm released today on the topic:

    We understand that roughly 40% of full-power stations are must-carry, and many of the stations that rely on must-carry for their MVPD/multichannel carriage would probably not survive without it. Those stations tend to be in larger cities, where wireless spectrum needs are greatest. Given the FCC’s search for additional spectrum for wireless broadband, a cable victory could present an important opportunity to reallocate spectrum from broadcasters seeking an exit strategy. In effect, rather than recovering some spectrum from all (or many) broadcasters, it could recover all spectrum from some broadcasters.

    Each broadcaster has a 6 MHz chunk of spectrum in each locale that’s generally within a range that’s good for providing mobile broadband. Now 6 MHz of spectrum isn’t a lot when compared to the 100 MHz or so that wireless carriers tend to have in large cities, but given the capacity crunch carriers like AT&T are obviously experiencing in places like New York City and San Francisco, getting that broadcast spectrum looks appealing.

    However, in order to ensure your iPhone stops dropping calls, a lot of spectrum trading would have to occur because it would be difficult for a wireless carrier to offer devices that work in too many disparate bands of spectrum. For more on this, check out why Google’s Nexus One doesn’t work on the AT&T 3G network. So if a broadcaster goes under on Long Island, it may give up spectrum in a band that’s different from a failed station in New York, creating an environment where third-party investors may have to come in and aggregate the spectrum in order to sell it to an interested carrier.

    In other words, any benefits to the wireless industry would likely be a long time coming. However, cable providers would benefit immediately as they could dump the stations and free up capacity for more high-definition channels on their network. Consumers would happily trade “Wayne’s World”-type programs for Comedy Central in HD and the possibility of better mobile broadband. For certain stations, that may be a fair trade, too.

    Image courtesy of Flickr user adamsofen

  • Facebook Matures, Will Build Its Own Data Center

    Artist's rendering of planned Prineville, Ore. data center

    Facebook said today that it would build its first data center, a 147,000-square-foot server farm to be located in Prineville, Ore. The social networking company didn’t disclose what it will cost to build, but according to public records dug up by The Bend Bulletin, the project would be valued at $188 million and employ 200 people during its 12-month construction before leveling out at 35 full-time employees.

    By choosing Prineville, Facebook avoids paying about $2.8 million a year in taxes for 15 years, and can apply for other waivers as well. In addition, now Facebook has the cachet that comes with owning its own data center. And by spending a lot in up-front costs, it will be able to enjoy the benefits of a lower cost of ownership on whatever servers it places in the data center, thanks to the ability to control the entire environment. Over time, this saves Facebook money and could help it innovate to deliver a faster experience for end users.
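    For a sense of scale, the tax break alone covers a meaningful chunk of the reported price tag. A Python sketch, where the $188 million figure is The Bend Bulletin’s valuation and the comparison itself is my own arithmetic:

```python
# Figures from public records and Facebook's announcement.
annual_tax_savings = 2.8e6   # ~$2.8 million per year in avoided taxes
abatement_years = 15         # length of the tax break
build_cost = 188e6           # project valuation per The Bend Bulletin

total_savings = annual_tax_savings * abatement_years   # ~$42M
share_of_cost = total_savings / build_cost             # ~22%

print(f"Total tax savings: ${total_savings / 1e6:.0f}M")
print(f"That covers ~{share_of_cost:.0%} of the $188M build cost")
```

Roughly $42 million over the abatement period, or about a fifth of the construction bill before counting any other waivers, which goes a long way toward explaining the Prineville choice.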

    Much like buying your own home rather than renting, becoming large enough to build your own data center is a big step in the life of a tech firm. However, Facebook could also be signaling that its wild growth over the last six years may be reaching a predictable and less torrid phase. It’s hard for a company to commit to constructing a permanent data center if it doesn’t know whether it will need 50 or 500 new servers within the next year, but as growth becomes predictable, it’s easier to justify laying out the funds for permanent infrastructure and to figure out how much of that infrastructure the company needs. Sure, a growing business could always add another data center, but building a single 100,000-square-foot data center costs far less than building two data centers of 50,000 square feet each.

    Like a homeowner, Facebook can make drastic changes in its server farm and run it however it wants in order to lower costs or boost performance. In a blog post, Jonathan Heiliger, VP of technical operations at Facebook, writes that the social networking company plans to take a variety of steps to reduce power consumption at the Prineville data center, such as using an evaporative cooling system instead of chillers, bringing in outside air to cool the building between 60 percent and 70 percent of the year, re-using the heat generated by the servers to heat the office when it’s cool, and using a new proprietary uninterruptible power supply.

    Facebook’s data center won’t open until the middle of next year, and it will still lease data center space in other parts of the country, but the site, with 350 million users, has clearly reached a point where it can plunk down the up-front capital and own.

  • Analyst: ARM Will Beat Intel in Ultra Mobile

    Intel isn’t going to beat the likes of Qualcomm, Texas Instruments and others when it comes to providing the brains behind smartphones, netbooks and even ultra-mobile PCs, according to ABI Research. The analyst firm said today that ARM-based ultra-mobile devices will surpass x86-based devices by 2013, a reversal from this past year when 90 percent of the ultra-mobile devices were x86-based.

    ABI classifies netbooks, mobile Internet devices (tablets), smartbooks and UMPCs as the sort of gadgets that will count on ARM-based application processors rather than CPUs from Intel, AMD or VIA Technologies. “2010 will be pivotal for building momentum behind non-x86 solutions,” writes ABI Senior Analyst Jeff Orr, “and gaining adoption in both distribution channels and by end-user populations worldwide.”

    I would argue that the hard work has already occurred, with ARM pushing to ensure that popular software, browsers and operating systems worked on its instruction set. Getting Android, which runs on ARM, onto a variety of devices, and making sure Adobe Flash runs on ARM-based chips are what will help the company gain favor with device makers and the end consumer.

    ARM has always had an advantage in mobile because the chips based on the instruction set were designed to sip power rather than glug it. That translates into a longer battery life and presumably a smaller form factor for the battery and end device. The biggest hurdle it had was that most of the software people want to use is designed to run on x86 chips. But in the last two years, thanks to the efforts to port software to its instruction set, and the overall movement of applications to the web, ARM has whittled away Intel’s advantage on that front, and we’re now seeing ARM-based devices finally hit the consumer market. Faced with faster processors, longer battery life and always-on connectivity, I agree with ABI that ARM will blow Intel’s Atom-based gadgets out of the water.

  • Broadband’s Next 100 Million Will Come From Emerging Economies

    The world will reach 500 million fixed-line broadband subscribers sometime this year thanks to growth in developing countries, according to research firm Informa. What’s more, through 2014 the next 100 million broadband subscribers will come from those countries as well. When it comes to wired broadband additions, China, Russia and Mexico are the new stars.

    The developed world has seen slowing rates of fixed-line broadband growth as markets there have become saturated, so in the meantime the focus has moved to the developing world. We talk a lot about mobile broadband being the wave of the future in developing countries, and how many in those places will skip the wireline infrastructure altogether, but when it comes to delivering high speeds necessary for video and even mobile backhaul, wired is the way to go.

    Which means wireline growth will continue even as mobile broadband via 3G and 4G networks expands. The Informa report predicts that most of the wired broadband growth will come from the seven countries seeing the most growth in subscribers: China, Russia, Mexico, India, Brazil, Turkey and Argentina. For a tally of broadband subscriptions as of the end of September, check out Informa’s chart below.

    Image courtesy of Flickr user ground.zero

  • Take Our Poll: What Perks Do You Want From Your Broadband Provider

    Comcast today said it would offer free Norton antivirus software for its broadband subscribers, adding to an array of perks Internet service providers are offering in competitive markets. Online storage, antivirus, Wi-Fi, better upstream speeds and even special content — such as Verizon delivering ESPN360 — are now offered to customers with regularity. Readers, what perks do you get from your ISP and which ones do you want?



    Thumbnail courtesy of Flickr user Stevendepolo

  • Comcast Adds Antivirus to Its Broadband Package

    Updated: Comcast said today that it will bundle a subscription to Norton’s antivirus software for its business and residential broadband customers, adding yet another perk for broadband subscribers. Residential subscribers can install it on up to seven computers and business customers can put the software on up to 25. As competition for broadband heats up in some markets, and providers attempt to lure customers, the nature of what a broadband subscriber should expect is changing.

    Online storage, antivirus, Wi-Fi, better upstream speeds and even special content — such as Verizon delivering ESPN360 — are now offered to customers with regularity. Of course, if your provider is still in the broadband dark ages like mine is, you’re not getting any of these options. Readers, please check out our poll and tell us what you expect from your provider, and what you get.

  • Transparency Is Good But Intelligence Is Better

    I took a break in the middle of the FCC hearing on open access and transparency today to figure out with Time Warner Cable, my Internet Service Provider, why the FCC feed and many other live online video feeds are problematic for me. It was more than a little ironic since it could be a lack of network management disclosure on the part of Time Warner causing my problem (it wasn’t), but it’s also a wonderful example of how even a transparent network doesn’t always deliver a great end user experience.

    The Federal Communications Commission is trying to mandate six network neutrality rules governing how and when Internet Service Providers can mess with the bits traveling over their pipes (GigaOM Pro, subscription required), and one of those rules dictates that ISPs be transparent with their users about how they mess with bits to improve the quality of the customer experience. The proposed rules also mandate that any such network management interference be “reasonable.” Comcast’s decision to throttle P2P traffic without letting users know would have violated the transparency principle, and the company was later censured because the FCC found that its actions weren’t reasonable.

    But sending bits across the web is far more complicated than most people may realize. Those bits pass through pipes owned by different companies, multiple routers, a variety of servers and even your home equipment, which means a transparent window on what an ISP is doing doesn’t always mean you can watch a live FCC broadcast without significant skips, buffering and other problems that may have a web user railing against their broadband provider. Not to say that the FCC rules aren’t a good thing (I think they are), but getting a quality video or even web site experience isn’t just an ISP issue, nor will network neutrality ensure high-quality video delivered via broadband.

    For example, a traceroute command on the FCC broadcast revealed that my cable provider wasn’t the issue; the FCC server was. A few months back, a problem I had with Cisco’s live webcasts may have been traceable to an ISP issue or insufficient CPU power on my end (although since I didn’t involve Time Warner on that one, I’ll never know). Most users don’t have the resources to get on the phone quickly with someone knowledgeable to see what might be causing a problem (and to be fair, when Hulu stutters during “The Daily Show,” I have to wait like everyone else). But even if they were able to access the intelligence of network engineers at their ISP or the host company, who wants to stop a YouTube video to figure out the issue when something better is a click away?

    So what does this mean for those of us who just want our online video to work? Knowing what your ISP is doing to your traffic is a key step, but for the best experience, I think there’s a market for a network intelligence-gathering service, offered ideally by the content provider, that a user can click on to get a sense of what issues might be halting content between the provider and the user’s home. Maybe the button starts a ping test and delivers the results in a user-friendly format noting which servers may be causing problems.
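
    Such an intelligence-gathering service could be as simple as collecting per-hop ping results and translating them into plain English. Here’s a minimal sketch in Python; the hop names, thresholds and tuple format are all made up for illustration, not taken from any real product:

```python
# A toy sketch of the "diagnose my video" button: given traceroute-style
# per-hop ping results, flag where the trouble starts. Hop names and
# threshold values below are hypothetical.

def diagnose(hops, loss_threshold=5.0, latency_jump_ms=50.0):
    """Return human-readable findings from a list of
    (hostname, avg_rtt_ms, packet_loss_pct) tuples, ordered from
    the user's home toward the content server."""
    findings = []
    prev_rtt = 0.0
    for name, rtt, loss in hops:
        if loss >= loss_threshold:
            findings.append(f"{name}: {loss:.0f}% packet loss")
        if rtt - prev_rtt >= latency_jump_ms:
            findings.append(f"{name}: latency jumped to {rtt:.0f} ms")
        prev_rtt = rtt
    return findings or ["No obvious problems between you and the server"]

# Example path: the latency jump and packet loss both show up at the
# final hop, pointing at the content server rather than the ISP.
path = [
    ("home-router", 2.0, 0.0),
    ("isp-gateway", 12.0, 0.0),
    ("isp-backbone", 25.0, 1.0),
    ("content-server", 190.0, 12.0),
]
print(diagnose(path))
```

    In the example path, both warnings land on the last hop, which is exactly the kind of distinction a traceroute can surface: it tells you whether to blame your broadband provider or the far end.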

    Router makers could even build such a tool into home equipment. Maybe YouTube isn’t going to start offering this (although Google does have some nice tools for getting more insight into your ISP’s network management), but as more business information and services move to the web thanks to the cloud, some sort of user-friendly tool would make a complex process a bit easier to understand. I hope the idea takes off and end consumers benefit. If it does, companies like AlertSite or Compuware, which bought Gomez, may find themselves with a consumer-facing quality assurance product.

  • Skype Steals Even More Minutes From Phone Companies

    When a crappy economy meets VoIP, cheaper IP telecommunications win, according to research out today from TeleGeography showing that Skype is taking market share from the international calling market. TeleGeography found that the projected growth of international telephone traffic was almost halved in 2009 — to a mere 8 percent — while Skype’s on-net traffic growth accelerated to a projected 63 percent.

    TeleGeography also notes that over the past 25 years, international call volume from telephones has grown at a compounded annual rate of 15 percent. In the past two years, however, international telephone traffic growth has slowed to only 8 percent on an annual basis, growing to an estimated 406 billion minutes in 2009 from 376 billion minutes in 2008.

    Skype’s on-net international traffic (between two Skype users) is projected to grow 63 percent in 2009, to 54 billion minutes. In comparison, the Skype-to-Skype traffic grew 51 percent in 2008.
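
    These figures hang together under a little arithmetic. The quick script below, a sketch of my own rather than anything from the TeleGeography report, recovers the 8 percent growth rate and the implied 2008 Skype-to-Skype volume:

```python
# Sanity-checking the TeleGeography figures quoted above.

total_2008 = 376  # billion minutes of international telephone traffic
total_2009 = 406  # billion minutes (estimated)

growth = (total_2009 / total_2008 - 1) * 100
print(f"{growth:.1f}%")  # roughly the 8 percent annual growth cited

# Skype's on-net traffic of 54 billion minutes in 2009, after 63 percent
# growth, implies about 33 billion Skype-to-Skype minutes in 2008.
skype_2009 = 54
skype_2008 = skype_2009 / 1.63
print(f"{skype_2008:.0f} billion minutes")
```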

    This increase is the effect of a sharp rise in the number of Skype users. At the end of 2008, Skype had 405 million registered users. Since then the company has passed the 525 million-user mark. At any given time there are more than 20 million people logged onto the Skype service, the company claims. Skype-to-Skype calls are free, as opposed to calls from a Skype user to a landline, which carry a charge that is still cheaper than the rates provided by the telecommunications companies.

    Skype is still the largest long distance phone company in the world. And as VoIP grows both on the computer and on mobile devices, thanks to an ever-increasing number of VoIP apps for mobile phones, international carriers are going to find yet another source of profits eroded.

  • Can Austin Ignite the Fire of Entrepreneurship?

    I recently attended, as part of an overall effort to get out around town more (and away from my desk), Ignite Austin, an event that gives people five minutes and 20 slides to share an idea, and left jazzed about Austin’s future as a place for entrepreneurs. No, it’s not a stellar place to find venture capital, but it’s a great place to start a business. Dell, Whole Foods, National Instruments and even companies like Sweet Leaf Tea were created in Austin, and there’s an ever-increasing amount of resources devoted to making Austin a home for entrepreneurs the way it’s already a home for live music.

    When I’m in Silicon Valley people talk about technology. In bars, over dinner and at coffee shops. And they talk about things they’d like to build from that technology. It’s awesome, especially for a nerd like me. But in Austin, people talk about their passions and how to tie them to technology. They also talk a lot about Austin, because the people here love the city so much they don’t just want to build something — they want to build it here.

    So the key is to bring those Austin-loving entrepreneurs together in a critical mass so they can both help one another and generate the kinds of successes that will keep Austin on the map as a top spot for startups. As I’ve written before, not every entrepreneur needs to be in the Valley.  Bijoy Goswami, who runs Bootstrap Austin as well as a site to promote Austin’s entrepreneurship called the Austin Equation, talked about how to do that in his presentation at the Ignite event. What will set Austin above the pack is creating what he calls a scene around entrepreneurship.

    “You’ve got to convene the scene, get involvement and then evangelize the scene to raise the attraction of it,” Goswami said. To that end, he’s mapped out a list of entrepreneurship resources in the city (it’s pretty cool) and once a month people from various parts of the Austin entrepreneur community meet to plan events, learn more about one another and try to raise the profile of the local entrepreneur scene.

    Will it work? Such things are pretty hard to measure, and as Goswami notes, there isn’t an actual end goal, just continual work to ensure that things keep humming along. “I don’t see this as finished work,” he explained. “Maintaining the scene is an ongoing effort. Maybe there’s fundamentally a bootstrap  mindset vs. a venture capital mindset, where the whole idea of exits is huge.”

    Austin may never be a city with fantastic venture capital-worthy exits, but if the city can help support startups and keep entrepreneurs coming to Central Texas, I’m OK with that. After all, SolarWinds, a local software company, managed to go public last year during a wretched time for such exits; last week 3-year-old startup Phurnace was bought by BMC; and this week Spiceworks, a local B2B company, raised $16 million.

    Thumbnail courtesy of Flickr user turtlemom4bacon

  • What the Net Neutrality Filings Say

    The Federal Communications Commission has received 23,137 filings and more than 100,000 comments on its proposed net neutrality rules, which would prohibit both wired and wireless Internet service providers from discriminating against the content flowing across their respective pipes. They range from “form filings” that were solicited by the Free Press’ Save The Internet campaign, to a 255-page tome submitted by AT&T.

    Free Press has done a nice job of excerpting sections from filings that match its pro-net neutrality stance, and Om has profiled what Skype has to say, so I thought it might be worthwhile to offer our readers a feel for what else has been submitted on the topic.

    Over the next few days I’ll dig through more comments, assimilate information and come up with more articles, but for now here’s a quick snapshot. I’m acutely disappointed that what was supposed to be a “data-driven” process has produced so little data, and instead churned out hundreds of pages of philosophical and legal arguments.

    From AT&T:

    The NPRM proposes all this new regulation, moreover, without any credible data-driven evidence of any market failure amid this robust competition. Instead, it bases its hyper-regulatory proposals solely on the basis of speculation that a market failure might arise someday in the future. This speculation rests on three deeply flawed premises: (1) that the Internet has always been a collection of “dumb pipes” that cannot distinguish among packets based on their associated applications or content; (2) that “[a]s a platform for commerce,” the Internet therefore “does not distinguish between a budding entrepreneur in a dorm room and a Fortune 500 company”; and (3) that only recently have new “[t]ools” emerged “that enable network operators to prioritize” particular data and that somehow threaten the Internet’s historic openness and “neutrality.” This threat, the NPRM posits, demands immediate, preemptive intervention.

    From the National Cable and Telecommunications Association:

    To minimize the damage to the continued growth and potential of the Internet, the Commission should limit any enforceable obligations to the four principles set forth in the Policy Statement, along with the proviso that permits reasonable network management – a proviso that, as proposed in the Notice, should be construed much less restrictively than in the Comcast decision. The addition of enforceable nondiscrimination and transparency principles would, unless very carefully tailored, seriously impair consumer welfare in the development of Internet services.

    From Comcast:

    First, if the Commission decides to adopt formal Internet regulations, it should limit such regulation to the first three principles of the Internet Policy Statement. This would address all of the hypothetical concerns that have been raised by various proponents of regulation, [ed. note: I find it awesome that Comcast is calling these threats hypothetical after it blocked P2P files] with minimal disruption of the status quo. The Commission, however, should not adopt the fourth principle as a rule. It is a laudable goal to state that a broadband ISP “may not deprive . . . users of the user’s entitlement to competition among network providers, application providers, service providers, and content providers,” but the concept is too vague and is ill-suited as an enforceable regulatory standard. Rather, this principle should be retained as an overarching, aspirational policy goal for the entire Internet ecosystem.

    From Google:

    While regulatory oversight targeted to last-mile broadband providers is consistent with regulatory precedent and the Commission’s statutory mandates, the FCC’s authority does not extend to most web overlay applications and services. These software-derived offerings are not associated with either the network provider’s transmission functions or the source of potential FCC concerns, i.e., affecting the facilities of communications by wire or radio. The FCC has broad authority to regulate communications in the public interest, but the Supreme Court has made clear that its jurisdiction is not unlimited.

    From the Free Press:

    In markets where technological change is relatively swift and competition is healthy, firms have a strong incentive to invest in order to keep up with or get ahead of their competitors. The current high-speed ISP market is characterized by swift technological change, but the overall level of competition is sub-optimal. The latter factor means that regulators must be vigilant to ensure that the lack of competition and presence of market power do not spill over from the ISP market into the adjacent content and applications markets. If ISPs are allowed to discriminate against content and applications, it will create incentives for them to profit from artificial scarcity by delaying or avoiding network investments — and it will reduce investment in the content and applications sector.

    Image courtesy of Flickr user Let Ideas Compete