Author: Stacey Higginbotham

  • T-Mobile’s CTO Talks Spectrum and Metered Data

    Brodman (left) with Motorola CEO Sanjay Jha

    T-Mobile USA, which detailed its next network upgrade yesterday, is betting that faster mobile broadband service today (well, before the end of this year) aimed at making smartphones perform better will justify its decision to move to a 3.5G network rather than the 4G LTE standard.

    I chatted with T-Mobile USA CTO Cole Brodman yesterday after the nation’s fourth-largest carrier explained the rollout of its HSPA+ network upgrade that will deliver speeds of 21 Mbps down to all of its markets by the end of the year. T-Mobile also said that Philly, New York, and Washington, D.C. already have those speeds today. For more on the rollout check out my story from February. For more on the consumer experience, check out Kevin Tofel’s review of the network on existing T-Mobile gadgets or on devices specifically designed for the upgraded network.

    GigaOM: Let’s start with data consumption. What are people consuming on the T-Mobile network?

    Brodman: The average Android customer consumes between 400 and 600 MB and we’re excited to see that because it means people are starting to use them for more than the occasional search for a ringtone.

    GigaOM: So at these rates, do you believe that some type of usage-based billing is inevitable as AT&T and Verizon apparently do?

    Brodman: The vast majority of consumers will consume at rates that carriers can stay ahead of and we want them to do that, but a small number of consumers will consume too much and it’s that small number that starts to endanger the economics for the average consumer. Around 2 percent of broadband stick users will ever reach our cap (5GB on the high end) and they will far exceed it. What makes sense is as they start to endanger the affordability for everyone we need to look at things like usage-based pricing.

    GigaOM: Don’t we already have some type of usage-based pricing with current plans that have different caps and overage charges?

    Brodman: It’s hard to expect consumers to know what a megabyte is because what that can offer changes every day. And the problem is overage is not a good experience for consumers. I don’t have an answer for you today on what usage-based pricing will look like, whether it’s upgrade buckets, app-based pricing, quality of service-based pricing or time-sensitive throttling.

    GigaOM: Right now, you are talking 3.5G when other carriers are talking about 4G and LTE. When might T-Mobile move to an LTE network?

    Brodman: We’ve not announced a time for LTE. HSPA+ has a really rich future…one of our network partners delivered 80 Mbps on HSPA+. And we can transition smoothly to LTE when we have to and are excited about that growth into LTE when we need to.

    GigaOM: T-Mobile has fewer spectrum resources than other carriers. Do you have the spectrum capacity to deploy LTE?

    Brodman: We certainly believe access to additional spectrum for mobile broadband is important to America. If you step back from that, we have quite a bit of additional capacity in our portfolio and we’re freeing up spectrum used for our 2G network for 3G and HSPA+. But I think I’m not going to answer the question directly. We believe that our portfolio will keep us competitive in the near and medium term — for the next couple of years.

    Related content from GigaOM Pro (sub req’d):

    Metered Mobile Data is Coming and Here’s How

    Everybody Hertz: The Looming Spectrum Crisis

  • Mobile Milestone: Data Surpasses Voice Traffic

    Mobile data bits traveling around the world outnumbered voice traffic for the first time during December of 2009, according to wireless equipment vendor Ericsson. Worryingly, that data traffic was generated by an estimated 400 million smartphones set against 4.6 billion mobile subscribers making voice calls. What happens when everyone has a smartphone?

    Ericsson measured traffic across networks around the world and discovered that once data traffic surpassed 140,000 terabytes per month, those bytes outnumbered the traffic generated by voice calls. Data traffic grew 280 percent during each of the last two years, and Ericsson expects it to double annually over the next five (see Cisco’s estimates here). Already traffic in 3G networks has surpassed that of 2G networks. Note that in India and China, the world’s two most populous countries, 3G networks are only now coming online or have yet to do so.
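    Ericsson’s doubling forecast compounds quickly. Here is a back-of-the-envelope sketch, assuming the ~140,000 TB/month crossover point as the baseline and strict annual doubling (my simplification of Ericsson’s projection):

```python
# Project monthly mobile data traffic under Ericsson's "doubles annually"
# forecast, starting from the ~140,000 TB/month level at which data
# traffic first passed voice. Strict doubling is an assumed simplification.
baseline_tb_per_month = 140_000

for year in range(1, 6):
    projected = baseline_tb_per_month * 2 ** year
    print(f"Year {year}: ~{projected:,} TB/month")

# Five doublings is a 32x increase: ~4.5 million TB per month,
# or roughly 4.5 exabytes.
```

    Even this crude math shows why carriers are so worried about what happens when everyone has a smartphone.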

    Ericsson credits/blames social networking as accounting for “a large percentage of mobile data traffic” consumption, although video seems to consume many more bytes. Ericsson notes that 200 mobile operators in 60 countries are deploying and promoting Facebook’s mobile products, with over 100 million active users accessing Facebook through their mobile devices. But Facebook is more an example of why we want always-on, always accessible broadband (we talk to our friends now, not merely via voice, but with pictures, texts and status updates) than it is the cause of the data deluge.

    Here’s a video from Ericsson, released yesterday, discussing this historic moment (which actually took place three months ago):

    Related GigaOM Pro research: Metered Mobile Data Is Coming and Here’s How (sub req’d)

  • Huawei Shows Off 1.2 Gbps Wireless — Yes, Wireless

    Huawei today demonstrated the next-generation Long Term Evolution network technology in trials that reached speeds of 1.2 Gbps. That’s faster than most wireline networks, but it’s delivered via a cellular network. With such speeds you could download an HD movie in 30 seconds.
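    The movie claim checks out arithmetically. A quick sanity check (the 4.5 GB movie size is my assumption, and real-world throughput would fall well short of the 1.2 Gbps peak):

```python
# Check the "HD movie in 30 seconds" claim at Huawei's 1.2 Gbps peak rate.
# An HD movie of roughly 4.5 GB is an assumed figure for illustration.
link_gbps = 1.2
seconds = 30

gigabits_moved = link_gbps * seconds   # 36 gigabits
gigabytes_moved = gigabits_moved / 8   # 4.5 gigabytes
print(gigabytes_moved)                 # 4.5
```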

    But before you dump your FiOS wireline subscription, be aware that the LTE Advanced network technology is years away. When I ask folks at Verizon Wireless or AT&T about LTE Advanced, I’m told that no one is thinking about it today.

    Plus, even with it far off (the standard won’t even be set until 2011), the test speeds bear little relationship to the actual speeds. For proof, check out the LTE demo I saw back in 2008 showing speeds of 150 Mbps down and the anticipated speeds of 5-12 Mbps down that Verizon is telling users to expect on its network. I explain why there’s such a disconnect in this post.

    There’s also a matter of finding enough spectrum to deliver peak speeds using LTE-Advanced. For the fastest speeds carriers will need wide swaths of spectrum — the 3GPP standard-setting group says 70 MHz. Currently Verizon expects to use 10 MHz bands to deploy its LTE in the 700 MHz block, so we’re talking a huge increase. Cobbling together that many airwaves will suck up the spectrum that the FCC wants to deliver as part of its National Broadband Plan.

    Still, 1 gigabit wireless is an exciting milestone that I couldn’t bear to pass up, especially given that our wired networks are hoping to hit a mere 100 Mbps by 2020, according to the National Broadband Plan goals. Also of note is Huawei’s continued advancement in the equipment business. It was also working with Verizon last year to deliver 10 Gbps to homes via fiber and has deployed nine commercial LTE networks.

  • LA, Miami and Middle America to Get WiMAX

    Clearwire said today that it would expand its WiMAX network to Cincinnati, Cleveland, Los Angeles, Miami, Pittsburgh, Salt Lake City and St. Louis this year. In its effort to cover 120 million people before the end of the year, these cities will join previously announced network buildouts in 2010 in New York City, San Francisco, Boston, Washington, D.C., Denver, Minneapolis and Kansas City.

    Sprint, which is a Clearwire partner, will provide the 3G coverage to supplement Clearwire’s 4G coverage in the regions. The two companies are in a race to sign up mobile broadband subscribers before the cellular operators launch their own 4G networks, based on the Long Term Evolution standard. For consumers, WiMAX and LTE mean faster mobile broadband speeds and eventually VoIP phone calls and superfast phones.

  • Mobile Broadband: You’re Gonna Pay for the Convenience

    We all know by now that people treat their mobile broadband connections like they treat their wireline connections — downloading as much data and expecting the same rapid performance. But Sandvine (you may remember it as the company that helped Comcast block P2P files) has released data showing exactly how much people use mobile broadband — and concluded that for carriers, such use is neither sustainable nor profitable (GigaOM Pro, sub req’d).

    Since the carriers are positioning themselves for the implementation of usage-based pricing schemes for mobile broadband, Sandvine is merely telling its future customers what they want to hear. But Sandvine’s data also paints a very clear picture (one we’ve been painting for at least a year) about the economics of mobile broadband demand and use (GigaOM Pro). From its report:

    While significant, these numbers are dwarfed by projections suggesting between 1 billion and 2 billion users by 2014, and revenues well in excess of $100 billion. And therein is one of the points of concern for wireless providers – while the number of users is expected to triple or even quadruple in the next five years, revenue is predicted to only double.

    Sandvine offers several charts showing why this is the case, starting with the quality of our phones in the U.S. (the report also covers Europe and Latin America, but I took only U.S. charts):

    In other words, the proliferation of high-end phones that can handle bigger applications and deliver faster speeds when it comes to downloading photos or watching video changes the types of things one can do on the mobile network (even when folks aren’t on a computer). Check out the top applications that Sandvine sees being used during peak times on U.S. mobile networks:

    So what does this mean for consumers? It means many of us will end up paying more for mobile broadband than we do under unlimited plans. Sandvine’s software offers carriers the ability to look at data usage on the network and set pricing tiers to ensure that, for example, only a certain percentage of users will fall into a basic tier (or be subject to overage charges). We’ve written about the downsides of having tiers as opposed to metering, but in the end we are going to pay for the convenience of being mobile. I can only hope that myriad hotspots and competition from Clearwire’s WiMAX network can keep the big cellular carriers competitive.

  • Is Baby Talk the New Startup Naming Convention?

    This weekend at DEMO a startup called Gwabbit launched a web-based contact syncing service. The company makes a popular app that adds contacts from the web to your Outlook client, but in seeing the name, I instantly thought of Kwedit, another tech startup whose oh-so-cutesy name makes me want to…well…womit.

    Look, I get that domain names are hard to find. And I have lived through the lowercase E and dot-com naming conventions of the late 90s. I carefully triple-check spellings as startups drop vowels (Flickr, Scanr) or bastardize common spellings to make themselves stand out (Digg, Reddit, Vidyo). When it comes to adding letters, I have a special disdain for the double-o (Orgoo, Faroo, Wufoo, Kadoo).

    I gamely look up the names of startups that insert periods in between all of their letters (del.icio.us, bit.ly, me.dium), but this childish lingo is pushing me to the edge. So, please, just add a lowercase “i” to your name or the word app and leave the baby talk out of it.

    Disclosure: Kwedit is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

  • AT&T Loses the Landline With New Triple Play

    AT&T today launched a new bundle of services containing video, data and voice, but this time consumers can choose whether they want wireless or a landline (it could be VoIP or a traditional circuit-switched line) for voice. This upends the idea that after the triple play, the next big ISP offering would be a quadruple play that includes voice, video, data and mobility.

    It also gives consumers a bit more choice in terms of paying for a service they actually use. AT&T still will offer a quadruple play for those who want it, but flexibility around voice gives AT&T an offering that the cable providers can’t match today with their bundles.

    For those of us watching the technical limits between TV, voice and the web erode (it’s going to be all IP soon enough), the willingness of a major service provider to accept that the landline is dying (and maybe hasten its death) is a hopeful sign. AT&T is doing this to help differentiate itself from its cable competitors, but if competition can lead to a phone company shrinking its bundle, there’s still hope.

    I’d like to see the bundle compress further. A wireline data and mobile data/voice offering would be awesome. Already there are people who buy wireless and wireline data (and some who just go all wireless) and leave the pay TV and landlines for those living in the 20th century. The key will be getting speeds that are fast enough and unencumbered by artificially low caps and tiers.

    Related GigaOM Pro Content (sub req’d): 

    How Mobile Network Operators Must Evolve as Data Ramps Up

    Image courtesy of Flickr user mrbill

  • Verizon Wireless Enters Online Payment Space

    Verizon Wireless has signed an agreement with online payments company Danal that will enable customers to buy digital goods online and have them billed to their Verizon account using just their mobile phone numbers. This puts the nation’s largest wireless provider in the same company as Apple, Amazon and PayPal when it comes to offering a payment platform, but with this strategy Verizon is swinging for the fences.

    Verizon is smart to create an online payment platform that it can offer its 91.2 million wireless subscribers, but getting people to use it will be a challenge. If Verizon can get people accustomed to putting in their phone numbers instead of credit cards while shopping online, then it could own a critical element in building an application and services platform that spans the wired and wireless world. Much like Apple has such a large stake in the mobile application and commerce space today because it has millions of credit cards in iTunes, Verizon could be expanding its own payments information for a similar goal.

    Here’s how Verizon’s billing will work: consumers go to a participating web site and choose something to download. When buying the approved game, music or other content, users click on the BilltoMobile button during checkout and enter their mobile numbers and mobile billing zip codes. Then they get a text message on their mobile phones with a one-time code, and once they enter this code into the online checkout window, they’re done. It’s not clear if Verizon will charge folks for this text.

    No pre-registration or links to credit cards or bank accounts are required, which is good. Also worth noting is that there is a $25 spending limit on purchases made via this platform, which means parents could let kids use it and control both the content the kids can download and how much they can spend. In fact, since teens have cell phones and not credit cards, such a service might really take off among the younger set.


    Image courtesy of Flickr user foforix

  • FCC’s Broadband Plan: Mobile Broadband Will Save Us!

    The Federal Communications Commission issued the long-awaited National Broadband Plan this week, a 376-page document that makes clear the agency accepts the reality of the current wireline duopoly — and as such, has decided to put the burden of competitive pressure on mobile broadband.

    There are many consumer-friendly aspects of the plan, such as opening up set-top boxes (GigaOM Pro, sub req’d) and creating an easy-to-understand label that shows people what their broadband connections are capable of (see image). But the FCC has clearly decided against a plan that requires a new infrastructure buildout when the current infrastructure will suffice. If only the agency had moved to tackle this issue back in 2002, when the telecommunications providers were thinking about how their fiber rollouts were going to occur, and implemented policies that could have resulted in a shared nationwide fiber network.

    When Life Gives You Lemons …

    But now that Verizon is spending $19 billion to push fiber to the home for 80 percent of its footprint (although that push may be slowing) and cable providers have pushed fiber out closer to the home in their networks and are deploying DOCSIS 3.0 upgrades, the FCC needs to work with what ISPs have in the field. So the bulk of the wireline reform coming out of the plan consists of regulatory tweaks to address predatory special access charges, inter-carrier compensation rules, set rates for access to underground conduits and utility poles, and in-depth proposals for universal service fund reform.

    Yes, the FCC is proposing that wireline networks will be faster if the 2020 goal of 100 Mbps down and 50 Mbps up is met, but that’s a goal, not something I’m sure the FCC can and will enforce. Another goal is 1-gigabit connections to community centers and schools, which depending on how it’s implemented could help drive faster networks as well. But again, those are 2020 goals.

    When it comes to ensuring competition between the duopoly in the short term, the FCC will rely on data. The plan proposes changes to both the type and amount of data the FCC collects, and also asks the Bureau of Labor Statistics to collect information on how people use broadband.

    The FCC says it will watch for price discrepancies and inequalities as newer networks are deployed and the types of services available to consumers diverge in speeds from wireless broadband’s 1 Mbps downstream speeds to fiber’s 100 Mbps downlink speeds. However, it doesn’t lay out how such inequalities — if they do emerge — will be addressed. Rather, mobile broadband is the star of the plan, both because it offers hope of a third broadband competitor in many areas, and also because of the potential for future growth and innovation of the U.S. economy.

    Airwaves Are the Key

    I’ll write more in the coming weeks on the spectrum aspects of the plan. The details as to how the FCC plans to go from having 50 MHz available for mobile broadband today to 500 MHz in 10 years will result in a pretty big legislative battle as the FCC tries to nab broadcaster spectrum and incumbents and tech firms position to own large chunks of those valuable airwaves.

    But the real benefit of mobile broadband as a competitive stick is threefold: it can cover the entire country relatively cheaply, existing operators are already moving to all-IP networks that the FCC sees as the future of its regulatory jurisdiction (the airwaves will always be part of the FCC oversight even if Internet applications and services are not), and the infrastructure is easily upgradable without tearing up streets and installing gear into people’s homes.

    So to push the mobile broadband envelope the FCC wants to take actions to free up 300 MHz by 2015. The chart lays out the spectrum bands and the timing for this FCC airwave grab, and I offer a bit more context below.

    WCS: This spectrum is contentious because Sirius Satellite is worried about interference from any cellular operators deploying service in this band. The plan proposes to resolve that issue this year.

    AWS 2 and 3: These 60 MHz should be relatively easy to get to auction or to allocate for mobile broadband once the government makes some decisions. At issue with some of this spectrum is whether it will be paired with spectrum the FCC will have to carve out from other federal holdings. The agency hopes to figure this pairing issue out with the National Telecommunications and Information Administration by Oct. 1. Paired spectrum is useful for deploying the more common frequency-division duplex (FDD) type of networks.

    D Block: These 10 MHz were too much trouble during the last spectrum auction because they were burdened with huge public safety network rules. The goal, to which the plan dedicates an entire chapter and $6.5 billion, is to build out a nationwide public safety network so all local, state and federal first responders can communicate in case of an emergency. These 10 MHz will have to connect with spectrum set aside for the National Public Safety network, and will have to be deployed to work with commercial handsets using LTE network technology. This makes such spectrum a good bet as a safety valve or a backup chunk of spectrum for an existing provider.

    MSS: Mobile satellite service providers such as Terrestar, SkyTerra, and Inmarsat own spectrum in this band because they’ve promised to build a combination satellite-and-terrestrial network. So far they’ve failed to make good on that promise, and I have huge doubts that they ever will. The FCC appears to be relaxing some of the more stringent requirements on satellite providers to see if they can deliver a credible mobile broadband service with devices consumers will buy. If the FCC eliminates some of the satellite requirements, the MSS spectrum holders hope their spectrum becomes more valuable.

    Broadcast TV: The FCC hopes to pry 120 MHz away from broadcasters in urban areas, where cellular providers have the most need for spectrum, which will pit the FCC and carriers against big broadcasters and over-the-air television watchers in big cities. Oh. My. God. It’s going to be a showdown. But I’m glad the FCC isn’t going for a token spectrum grab from rural broadcasters, which would be easy but wouldn’t alleviate network congestion.

    The FCC isn’t making friends in Congress (or with over-the-air television buffs) with this plan, but as the final arbiter on how televisions have to send out their signals, it has the ability to squish some channels together and dictate how broadcasters use their 6 MHz channels. To ease the pain of the FCC flexing this power over broadcasters’ spectrum allotments, it’s asking Congress to change the way spectrum auction proceeds are shared so as to let broadcasters have a piece of the pie. To bolster its controversial move, the FCC points out that cellular companies have valued each megahertz of spectrum per person covered at $1.28 while the television spectrum is currently valued at 11-15 cents. Why? Because mobile broadband is the future and over-the-air television is on its way out. Heck, the FCC even notes that poor consumers could get their broadcast through subsidized IPTV instead.
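    Those per-MHz-pop figures imply an enormous valuation gap. A rough illustration, applying the FCC’s quoted prices to the 120 MHz urban target (the 300 million U.S. population figure is my assumed round number):

```python
# Rough value of 120 MHz of spectrum at the FCC's quoted prices:
# $1.28 per MHz-pop for cellular vs. $0.11-$0.15 for broadcast TV.
# A U.S. population of ~300 million is an assumed round number.
mhz = 120
pops = 300_000_000

mobile_value = mhz * pops * 1.28    # ~$46 billion
broadcast_low = mhz * pops * 0.11   # ~$4 billion
broadcast_high = mhz * pops * 0.15  # ~$5.4 billion
print(f"mobile ~${mobile_value / 1e9:.1f}B, "
      f"broadcast ~${broadcast_low / 1e9:.1f}B-${broadcast_high / 1e9:.1f}B")
```

    By this math the same airwaves are worth roughly ten times as much for mobile broadband as for broadcast TV, which is the FCC’s whole argument in one line.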

    Getting more spectrum is the biggest aspect of expanding mobile broadband, but rules to make it easier to deploy microwave backhaul are also in the queue for 2010. And the FCC pledges to allocate a band for unlicensed wireless, although it doesn’t specify where this band might be. It also touches on the white spaces broadband the FCC approved in 2008, basically saying it wants to see devices and networks using white spaces broadband soon. We do, too. We thought we’d have more than a few trial networks by now. For folks watching and waiting for this flood of spectrum, the FCC and the NTIA set a deadline of Oct. 1 of this year to identify additional spectrum for use.

    Since mobile broadband is the lynchpin of our federal broadband plan, we’d better get this right.

  • Google’s Growing Infrastructure Advantage

    Google’s content comprises between 6 and 10 percent of global Internet traffic, making its internal network one of the top three ISPs in the world, according to Arbor Networks. The maker of deep packet inspection equipment, which runs a survey of international ISPs, detailed Google’s traffic in a blog post Tuesday.

    However, the total volume of traffic is just one measure of how big a web presence a company has — the other is how it can leverage that scale to cut its costs and boost its ability to better serve customers. For Google, which has long seen its infrastructure as a competitive advantage, the ability to keep its mighty web traffic on its own network rather than pay others to deliver it is a margin-boosting — and quality-boosting — advantage.

    Arbor notes that Google has consistently increased its direct peering, and through the use of its own content caching appliances located at ISPs around the world, it has cut out middlemen like Level 3 or Bandwidth.com. Are Yahoo and Microsoft taking notes?

    Related GigaOM Pro content (sub. req’d):

    Why Google Should Fear the Social Web

  • Qualcomm Will Bid on Indian Spectrum to Boost Mobile Broadband Demand

    Qualcomm plans to bid for a chunk of spectrum in India’s upcoming 3G auction, the chipmaker said today. Qualcomm is taking a page out of Google’s playbook — the search engine giant bid for spectrum in the U.S. but never had any plans to become a network operator. The San Diego-based chip maker doesn’t really want to be a network operator nor does it want to deploy a 3G technology — it wants to jump-start demand for its 4G chips and meet India’s demand for mobile broadband. From its release:

    “Qualcomm has a history of participating in spectrum auctions to expedite the commercialization of new wireless technologies. By participating in India’s BWA spectrum auction, Qualcomm can foster the accelerated deployment of TD-LTE.”

    If the chip firm wins a chunk of spectrum in the 2.3 GHz band, it will want to promote TD-LTE, a version of the fourth-generation wireless standard that uses less total spectrum in deployment. Qualcomm is also making the bet that India will want to skip quickly from a 3G to a 4G service, a bet China Mobile is also making with its upcoming TD-LTE deployment.

    Each generation of mobile broadband technology adds more capabilities for data, which we here in the U.S. are already consuming at a network-crushing pace. Bringing in LTE networks with more capacity will help with both speed and the total amount of data that can be transferred over the air. For cell-phone users in India, which has seen its 3G spectrum auctions delayed for years, there had been talk of skipping 3G and hopping right to 4G services that could handle current and future demand.

    Qualcomm is also aiming to jump-start market demand for its chips in end devices in India and China (the two most populous countries), especially as its 3G royalties begin a decline in coming years. Qualcomm has to be worried by the increasing deployment of WiMAX-based services, which don’t really require the company’s technology and thus will fail to line its pockets.

    Related content from GigaOM Pro (sub req’d):

    4G: State of the Union

  • SXSW: The Future Application Ecosystem

    In a world of web-based services that depend on various other services, like Twitter or Google Maps data, your product will only be as strong as your weakest API call. I’ve found it’s actually a fun topic to discuss. For developers and web-based businesses, thinking about and managing federated flows of information has a big impact on the end-user experience.

    I moderated the Can You Run a Serverless Business panel a few hours ago, and two of the panelists brought that up as an issue, with Jim Louderback of Revision3 (an occasional columnist for GigaOM) saying that at one point he ended up slowing down that company’s video delivery because it had relied on too many services located in the cloud, a sentiment echoed by Ethan Kaplan of Warner Music. It’s a topic I discussed yesterday with Sam Ramji, at Sonoa, which offers a service that helps track the health of APIs.

    But when I surveyed the audience at the panel, via a show of hands, about the types of web and cloud services they use, it was clear that most were using a mix of software, platforms and infrastructure as a service, with some also using traditional hosting or running their own data centers. Information technology has always been a confusing mix of gear and services, but it seems like that complexity is only expanding in terms of how and where you can build an offering as well as the “partners” whose APIs you might use to build a product.

    Making all of those elements work together, ensuring a good user experience, and eventually delivering service-level agreements seems inordinately complex, but if done right we should see the emergence of new ecosystems of data, maybe built around cloud providers like Amazon or Google, or maybe popular APIs such as Twitter’s. But like any ecosystem, the effects of small breakdowns will ripple throughout the whole, something for which we’re only now beginning to build tools and contracts.

    For the GigaOM network’s complete SXSW coverage, check out this round-up.

    Related GigaOM Pro Content (sub. req’d): Report: The Real Time Enterprise

  • Lessons in Phone Marketing, or Why the Nexus One Is Sucking Wind

    When it comes to selling a lot of new phones in a fairly short amount of time, an educated customer base, a pre-holiday launch and picking a carrier with a huge subscriber base are essential, according to an analysis released today by Flurry. The provider of app analytics for high-end handsets looked at the first 74 days of sales for the iPhone, the Droid and the Nexus One to see how each had sold in that time frame. It chose 74 days because that’s how long it took Apple to sell 1 million of the original iPhones.

    To the Flurry team’s surprise, however, even more Droids were sold in that amount of time, prompting them to come up with the above lessons. The Droid came out two and a half years after the original iPhone, so people were primed for a touch-enabled, app-happy handset, and its November launch positioned it perfectly for holiday shopping. Launching with Verizon and its 89 million subscribers also helped. So the moral of this story is that the Google experiment of making a really cool phone with the Nexus One and just tossing it over the fence isn’t working so far.

  • Entrepreneurial Stereotypes on Display at SXSW

    Quick, picture a tech startup founder: Are they male, maybe around 27 years old, a resident of Silicon Valley? Apparently that’s what it takes to build a tech startup for a seed incubation program, at least according to the explicit and implicit wisdom shared at the Seed Combinators panel today at South by Southwest.

    The panel, which featured Paul Graham from Y Combinator, David Cohen from TechStars, Naval Ravikant, co-founder of Venture Hacks, and Josh Baer from Austin’s Capital Factory, offered much of the expected commentary on how the ability to build cheap startups has changed the funding and entrepreneurial landscape. We at GigaOM have covered that, even the debate on whether you need to found your startup in the Valley (answer: if you want venture capital you have to go to the Valley), so what was most interesting was how narrow the definition of entrepreneur was for these programs.

    Someone on the panel tried to make a point about how not all startup entrepreneurs are young, but then backed off of that when Cohen said the median age of a TechStars participant was 27 — ditto for those in Y Combinator. When an audience member asked why the programs didn’t accept older entrepreneurs, the consensus onstage was that it’s much more difficult to find two founders who are older, ready to pull up stakes and move to a new place to risk it all on building a company. And no one wanted to back an entrepreneur without a co-founder.

    This makes sense in some ways. Investing in startups is risky, and all investors look for ways to mitigate those risks. Forcing folks to have a co-founder is simply part of that risk-mitigation formula, although it tends to force out older entrepreneurs. It’s also a reason why Valley companies tend to do better — they can find capital because they’re closer to their investors, eliminating travel and risk for their backers. So this brings me to the other elephant in the room, or rather the one that wasn’t in the room.

    There were very few women in the room — one in 10 would be a stretch. One actually asked about the lack of women in the programs, prompting Baer to note that of the 15 startups Capital Factory has worked with, five had women as co-founders. And TechStars tweeted that a mere 11 percent of its founders are women (I have yet to hear from anyone at Y Combinator). But instead of wondering where the women are, or saying you wish there were more (as a few panelists did), why not ask what it is about these programs that either makes it difficult for them to accept women or makes women not even bother to apply (or show up for panels on the topic)?

    I’ve put forth some ideas as to why there are so few women tech entrepreneurs, as have others. Do female entrepreneurs not get into these programs because they don’t fit the formula? Then we need to be talking about the risks to a startup’s success that women pose — we need to bring that into the open. If that’s not it, then we need to figure out why women aren’t in these programs. Are they not applying in equal numbers? My hunch is they aren’t.

    Doesn’t that mean we need to figure out why? Are there not enough of them? What opportunities are being denied to women, and more importantly, what are the ideas and businesses that venture firms lose out on? I welcome your thoughts and comments.


    For the GigaOM network’s complete SXSW coverage, check out this round-up.

  • EMC’s Crazy Plan to Create a Worldwide Data Cloud

    Pat Gelsinger, who moved to EMC late last year after 30 years at Intel, is stirring things up at the storage giant with a plan to virtualize and federate storage so data and compute can truly be linked together (hat tip The Register). The implication of this vision is that organizations will be able to keep constantly changing information up to date around the world in real time, despite the challenges of moving huge amounts of data over networks that measure data in gigabytes rather than petabytes.

    In a presentation on Thursday, Gelsinger pointed out that compute and storage are rapidly getting better at dealing with more information, while networks are trying to catch up. “Compute is doubling every two years. Storage doubles every 15 months, and networking is much, much, much slower, like every four years, so how do you deal with latency, bandwidth and consistency?” Gelsinger said.
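
    Those doubling periods compound quickly. A quick sketch of how far the gap widens (the eight-year horizon is an arbitrary illustration, not a figure from Gelsinger):

    ```python
    def growth_factor(doubling_period_years, horizon_years):
        """Multiplicative improvement over a horizon, given a doubling period."""
        return 2 ** (horizon_years / doubling_period_years)

    HORIZON = 8  # years

    compute = growth_factor(2.0, HORIZON)      # doubles every 2 years   -> 16x
    storage = growth_factor(15 / 12, HORIZON)  # doubles every 15 months -> ~84x
    network = growth_factor(4.0, HORIZON)      # doubles every 4 years   -> 4x

    print(f"compute {compute:.0f}x, storage {storage:.0f}x, network {network:.0f}x")
    ```

    At that pace storage outruns the network by a factor of roughly 20 in under a decade, which is exactly the gap caching has to hide.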

    Gelsinger’s answer is caching. Imagine a two-way content delivery network built on EMC appliances that tracks and replicates changes made to data at one node and then pushes them out to all the other nodes as quickly as possible. Gelsinger calls this freeing the information from physical storage, but it sounds more like making sure your information is in a bunch of different physical storage containers. He mentions EMC’s acquisition of intellectual property from Yotta Yotta as offering the breakthrough required to build this technology.
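
    The replication idea can be sketched as a toy model: a handful of nodes that replay every local write to their peers. This is an invented illustration of the concept only, not EMC’s (or Yotta Yotta’s) actual design:

    ```python
    class CacheNode:
        """One storage node in a mesh that pushes its writes to all peers."""

        def __init__(self, name):
            self.name = name
            self.store = {}
            self.peers = []

        def connect(self, *nodes):
            self.peers.extend(nodes)

        def write(self, key, value):
            # Accept the write locally, then push the change to every peer,
            # the way a two-way content delivery network would propagate updates.
            self.store[key] = value
            for peer in self.peers:
                peer.replicate(key, value)

        def replicate(self, key, value):
            # A peer's update lands here; no further fan-out, to avoid loops.
            self.store[key] = value

    # Three geographically separate "appliances" connected in a full mesh
    boston, london, tokyo = CacheNode("boston"), CacheNode("london"), CacheNode("tokyo")
    boston.connect(london, tokyo)
    london.connect(boston, tokyo)
    tokyo.connect(boston, london)

    boston.write("report.doc", "v2")
    print(tokyo.store["report.doc"])  # prints "v2": the change is visible everywhere
    ```

    The hard part Gelsinger is gesturing at is everything this toy ignores: conflicting writes arriving at two nodes at once, and links that are orders of magnitude slower than the disks on either end.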

    But at the end of the day, this is all a big if, not an actual product yet. If EMC can link storage and virtualized machines together, the data center that “follows the sun” — basically moving compute loads around the world to wherever it’s cheapest to run them — or automatic failover for cloud services becomes possible. However, it will be controlled by a proprietary hardware vendor, which certainly clouds its prospects a bit.

  • FCC’s Broadband Plan: The Role Of Competition

    Updated: The executive summary of the National Broadband Plan is out today, and in addition to the stuff we’ve already covered, we finally know how the FCC plans to treat the issue most responsible for the current state of broadband in the U.S. — the lack of competition. The FCC has proposed collecting more data, which is good, but what matters is how it uses that data, which isn’t outlined in the plan. If the FCC uses the data it hopes to collect as a means to rule and impose conditions on mergers, as well as enforce certain policies around special access reform or sharing fiber, then that’s going to have an impact.

    Despite outdated maps showing that most areas are hotbeds of competition, the FCC has no real data on which Internet service providers serve individual homes, what those ISPs charge and what speeds they deliver. We’ve discussed the fallacies of spending $7.2 billion in government money toward better broadband without such data, and have called for such data for years. But now that the FCC plans to get it, what else will it do to enforce competition?

    One element is a broadband certification program — a so-called “Schumer box” for broadband — that defines a standard format showing customers what they should be getting for their dollar. So instead of paying $45 a month for speeds advertised as up to 7 Mbps, but which really average out at 3 Mbps, the FCC would require that I get a more accurate assessment of the service, based on reality. Other elements include:

    • Special access reform: the FCC pledges to undertake a review of wholesale competition rules to see whether those buying middle-mile broadband are being charged competitive rates. It’s clear that in rural areas some are paying ten to a hundred times more for middle-mile access.
    • Deliver more unlicensed spectrum, which could be great, but it depends on what spectrum is freed up and would still require investment from tech companies for devices and network infrastructure.
    • Update rules for using microwave for wireless backhaul to boost capacity in urban areas and range in rural areas.
    • Figure out how to get wireless broadband providers to improve mobile broadband coverage everywhere, not just in cities. This might involve intercarrier compensation reform.
    • Change the rules around set-top-boxes to open them up.
    • “Clarify” a Congressional mandate that allows municipalities to provide broadband in their communities. I’m not sure how this would affect existing state and local laws that prevent municipalities from building fiber networks, but depending on the “clarification,” it might help cities avoid costly legal fights over building fiber networks.
    • Make a statement on consumer privacy when it comes to users’ online profiles. The FCC said it will “clarify the relationship between users and their online profiles to enable continued innovation and competition in applications and to ensure consumer privacy, including the obligations of firms collecting personal information to allow consumers to know what information is being collected, consent to such collection, correct it if necessary, and control disclosure of such personal information to third parties.” I’m not sure how far the FCC can go when it comes to ensuring disclosure about my online information, but my hunch is it relates more to schemes where ISPs try to track consumers’ web surfing to sell info to advertisers than to preventing involuntary disclosure of private information through services like Google Buzz.

    Taken together, better information about broadband speeds and pricing, special access reform, making it easier to build out municipal fiber, and open set-top boxes will likely have the greatest impact on consumers, while the ability to get better data on services could have the most far-reaching effect if the FCC decides to use that information to promote competition. For more details, we’ll have to wait for and read the several hundred pages of the complete plan coming out tomorrow.

    Update: Other than competition, the plan also includes a requirement that 100 million homes should have access to 100 Mbps down (which we knew), but also requires 50 Mbps upload speeds — a real coup. We’ve written about the need for better upstream speeds, and by requiring that, the FCC is pushing the cable operators and telcos to allocate far more broadband capacity, especially in cable systems that rely on DOCSIS 3.0. Cable companies and those deploying fiber can reach this goal, but those using copper will be left behind.

  • SXSW: When it Comes to Web Scale Go Cheap, Go Custom or Go Home

    Facebook's future home for big data

    Dealing with the terabytes of data generated by users online, and serving up relationships tied to that data quickly, are forcing web-scale sites like Twitter, Reddit and Facebook to investigate a variety of home-built and open-source software and hardware solutions, and to reject closed-source software (such as Oracle) and specialized hardware wherever possible.

    It may seem like a foregone conclusion, but the ideological and practical bias at web-scale companies against closed-source software and specialized hardware has big implications, not just for vendors like Microsoft and Oracle, which risk getting locked out of businesses built on the web, but for all businesses. That’s because broadband networks, cloud computing and the shift toward more rapid adoption and integration of web technology into our everyday lives change the business models and opportunities for all businesses.

    The new business models will take into account the need to attract users individually, on a personal level, while also connecting them out to other products they use. These services will be available and designed to be accessed on phones, monitors and any other screen. They are hyper-personal, even for services dictated by corporate IT departments. Because of the “use anywhere” nature of these services, and the myriad connections out to other applications, they will have to manage a lot of user data, a lot of requests from outside a network, and scale out to meet demand.

    Given this framework, the panel on scaling open source frameworks past MySQL was one of the more interesting ones at the South by Southwest Interactive conference this weekend. Scalable databases are part of the future of IT for many businesses. You can’t build the types of services discussed above without scalable databases. And those databases, and generally all of the tools used to achieve cheap and agile scale, are open source.

    Citing a desire to support open source code, as well as the need to peek under the hood and be able to solve problems quickly, a panel of four guys responsible for building various architectures at Twitter, Facebook, Reddit and Imgur said specifically that they avoid Oracle in favor of rolling their own databases. Most even derided proprietary hardware and specialized networking gear, with the exception of Facebook’s Serkan Piantino, who said the company does use proprietary F5 gear behind software load balancers. Piantino also said that Facebook was testing super-fast solid-state drives from a company called Fusion IO as a means to speed up access to data.

    But for the most part, building your own code and working with open source code ruled the day. Even if there wasn’t an open source solution that was readily available or mature, the consensus was that folks would wait until something was ready or, if the pain was too much, build it themselves. For example, an audience member asked the panel about good columnar database stores beyond Hadoop, and Kevin Weil from Twitter explained that there were some closed source options out there, but the open source world’s products were still “a little early.” So Twitter does without for now.

    Other tidbits of interest on scaling databases from the panel were:

    • Nginx got a big shout-out as an alternative to full Apache as a web server.
    • HAProxy is also a popular way to either load balance or simply break requests up so that a cache or a database can serve them faster.
    • Both Twitter and Facebook are using P2P technology (Twitter calls its system Murder) to provision servers: instead of taking five to seven minutes to bring one online, it takes 37 seconds.
    • Facebook plans to open source Haystack, its photo storage system, within a few months.
    • While Hadoop isn’t used much, if at all, on the front end at Twitter and Facebook, engineers use it on the back end to deliver granular analytics about how people use the site that otherwise wouldn’t have been possible.
    • If you can’t speed up the process with better databases, caching or anything else on the software and hardware side, try user interface tricks to make things seem faster, such as saying a video is done uploading even if it isn’t yet.
    • Facebook no longer thinks in terms of deploying servers; it now thinks in terms of deploying entire racks. The software the company runs is rack-aware, so it can take advantage of all of the bandwidth on a given switch in the rack. It looks like an intermediate step toward running your data center as a computer.
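
    That P2P provisioning speedup is easy to model. A back-of-envelope sketch (the machine count and per-copy time here are invented for illustration, and this is not Twitter’s actual Murder code) shows why BitTorrent-style fan-out beats a single distribution server: coverage roughly doubles each round instead of growing one machine at a time.

    ```python
    import math

    def central_push_time(n_machines, seconds_per_copy):
        # One distribution server copies the payload to each machine in turn.
        return n_machines * seconds_per_copy

    def p2p_push_time(n_machines, seconds_per_copy):
        # BitTorrent-style fan-out: every machine that already has the payload
        # re-seeds it, so the number of machines served roughly doubles each round.
        rounds = math.ceil(math.log2(n_machines + 1))
        return rounds * seconds_per_copy

    print(central_push_time(1023, 10))  # 10230 seconds, done sequentially
    print(p2p_push_time(1023, 10))      # 100 seconds with doubling fan-out
    ```
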

    For the GigaOM network’s complete SXSW coverage, check out this round-up.

  • Clearwire’s Big Bet on Our Broadband Addiction

    Despite doubts about Clearwire’s ability to compete against the coming rival 4G networks of Verizon and AT&T, its users are apparently pleased with the service. Mike Sievert, chief commercial officer at Clearwire, said the company’s mobile users (those on laptops and dongles outside the home) consume an average of more than 7GB of data per month. That’s a shocking amount of mobile data consumption, especially when all we’re hearing is how scarce spectrum is and how operators can’t keep up with mobile demand.

    But slaking that thirst for mobile data, and doing it cheaply, underlies Clearwire’s overall strategy. For now, that’s why it has bet on WiMAX; but WiMAX plays only a small role in Clearwire’s cost advantage, which means the company won’t be beholden to the technology after 2011, when the agreement with Intel that kept Clearwire and WiMAX together expires. Sievert was coy when asked directly about the Long Term Evolution standard the two largest U.S. carriers are experimenting with, but rather than obsess over the radio access technology, let’s look closer at the real disruption Clearwire offers.

    Sievert said it cost Clearwire “somewhere in the mid-$20 range” per person to build out its WiMAX network, an estimate that relies on several things, from the cost of the spectrum to the number of towers Clearwire needs to deploy. In contrast, analyst Chris King at investment bank Stifel Nicolaus has put the per-person cost near $20 for Verizon’s rival LTE network build.

    But it’s once the network gets humming that Sievert believes Clearwire starts looking good: it will be cheaper to send bits across, and the network will let the company provide more capacity to data-hungry users, something that may play a larger role as rivals introduce tiered pricing plans, as both Verizon and AT&T have talked about doing.

    Sievert credits the all-IP architecture of the Clearwire network for its ability to deliver bits cheaply, pointing out that Verizon and AT&T both will have more expensive legacy networks to run that include equipment for dealing with circuit-switched voice. In the short term this is an advantage for the LTE crew, because they can offer data across their 4G networks and keep voice on 3G — ensuring a consistent level of quality.

    But long term, Sievert thinks the advantage is Clearwire’s, especially after it introduces handsets in 2011 that will use Sprint’s 3G network for voice (see video) and will then transition to VoIP. Sievert did not give a time frame for the all-4G phone. Eventually, however, the LTE providers will also move to VoIP, but they aren’t likely to abandon their older networks for decades.

    But the biggest advantage is Clearwire’s deep spectrum resources. If nothing else, the last few months have focused the tech world’s attention on the scarcity of available mobile spectrum. Well, Clearwire has a lot of it — about 150 MHz in many markets, while the other major carriers claim just two-thirds or less of that amount.

    It also has 30 MHz chunks of spectrum that it can use for WiMAX, while Verizon, for example, has 20 MHz for LTE. Spectrum can be used to increase both speed and capacity, so while Clearwire’s current speeds of 3-6 Mbps down aren’t going to compare to Verizon’s 5-12 Mbps for LTE, Sievert says Clearwire could allocate another 10 MHz to match speeds and still have another 10 to spare to boost capacity.
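
    Sievert’s capacity argument is simple proportional scaling. As a hedged back-of-envelope sketch (the channel widths here are illustrative, and throughput rarely scales perfectly linearly with bandwidth in practice):

    ```python
    def scaled_speed(base_speed_mbps, base_bw_mhz, extra_bw_mhz):
        # Assume throughput scales roughly linearly with channel bandwidth
        # at the same spectral efficiency -- a simplification.
        return base_speed_mbps * (base_bw_mhz + extra_bw_mhz) / base_bw_mhz

    # Illustrative only: if ~6 Mbps peak comes from a 10 MHz slice today,
    # adding another 10 MHz of spectrum roughly doubles the peak speed.
    print(scaled_speed(6.0, 10, 10))  # 12.0
    ```
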

    So Sievert is content that he can profitably meet the needs of mobile broadband customers with his existing resources, without having to resort to pricing gimmicks that may anger customers. And since, as I’ve argued, the average consumer isn’t too worried about whether their mobile wireless is LTE or WiMAX, Clearwire does have a chance. Add to that a relationship with the cable providers and Sprint, and Sievert claims he has access to 100 million customer relationships through his partners. In other words, if consumers decide they want unlimited wireless broadband from their existing cable provider, rather than a constrained offering from their wireless provider, Clearwire may succeed.

    Related content from GigaOM Pro (sub req’d):

    Metered Mobile Data is Coming and Here’s How

  • I Can’t Navigate My Location Friend Requests

    Leading up to South by Southwest my inbox has been littered with friend requests on Gowalla, a check-in service that I can use to show those friends where I am at any point in time. Underneath each request is a line that reads: “We recommend you accept friend requests only from people you know and want to share your travels with.” I confess, I read these friend requests from folks I have never met, talked to, tweeted with or emailed, and I don’t really know what to do. Accept them? Ignore them? Bemoan them on Twitter?

    I have included a poll below asking when and with whom you guys share your location, because as a shy and privacy-focused person I tend to err on the side of keeping my digital presence and my real-world presence separate: not anonymous, exactly, but I certainly don’t broadcast my whereabouts to the world. And I think that will eventually mean I lose out on the serendipitous connections that location services can provide. For example, I might miss out on meeting the stranger sitting next to me in a coffee shop who reads the site and could offer a great conversation on the future of semiconductors.

    With more than 400,000 users of Foursquare and Gowalla already, there are plenty of interesting connections I or anyone else could make. But there are also plenty of people who, like me, are clearly waiting to see how this check-in concept — and by extension, always-on location services like Google’s Latitude or Loopt — plays out. I’m hoping that at SXSW we’ll start seeing tools that use the check-in concept not to award points or badges, but to facilitate useful interactions among relative strangers: if you see a neighbor checking in at your kid’s school every afternoon, maybe you can meet them and set up a carpool.

    Much like it took time for people to see use cases and value in Twitter, which was an entirely new means of communicating, it will take time and a display of beneficial results before folks will see the value in displaying their location rather than focusing on the loss of anonymity. Until that happens, many people, when faced with an unfamiliar friend request, will likely hit delete. And without that large network of strangers, the idea of machine-mediated serendipity remains just that — an idea.

  • Rob Glaser Defines the Superphone and Predicts the Mobile Future

    The future of media will be information consumed on superphones while on the go, said Rob Glaser, chairman of RealNetworks, today in his first public speech since stepping down from his CEO position. In the speech, given in Seattle at a Mobile Broadband Breakfast event, he forecast that by 2013 the installed base of smart and superphones (see chart for Glaser’s definition of each) will exceed the installed base of PCs, and those web-surfing devices will be mobile. In this world he sees five big opportunities:

    1. People want digital persistence: They have an expectation that their content will be available everywhere at any point in time.
    2. People want universal access to content across all devices.
    3. The industry needs to make discovery easy, which means that once people have access to digital content, they need to be able to find their stuff, and new stuff they will like, using semantic data.
    4. There will be new ways to empower social expression and engagement, much in the same way Twitter created a new category of expression and a way to communicate.
    5. The digital revolution will be a global phenomenon.