Author: Stacey Higginbotham

  • Former FCC Chair Lays Out the Limits on the Agency’s Authority

    Regulating big consumer issues, such as the availability of Internet apps on mobile devices and metered broadband, is outside the Federal Communications Commission’s authority, said Kevin Martin, the former FCC chairman, speaking today in Seattle at the Mobile Broadband Breakfast. Martin, who is now with Patton Boggs LLP, was asked what the FCC can do as networks open up and the activities to keep them closed get pushed out to hardware. “The further it is pushed out the more difficult it is for the Commission to address it,” he said. “The FCC’s core regulatory authority is on wireless and carriers, so its direct authority is less and less the further out you go.”

    The issue of just how far the FCC’s authority reaches was raised last summer, when the agency opened up a probe into the blocking of Google Voice on the iPhone. It turns out that Apple had blocked the application, not AT&T, prompting questions as to what the FCC could really do to force Apple to allow Google Voice on the device. Apple has yet to relent.

    Additionally, the FCC doesn’t appear to have authority to stop the recent efforts of carriers and ISPs to introduce metered wired broadband, according to Martin. The Commission can, of course, step in when ISPs discriminate by tying services to required service bundles, but he doesn’t think metering violates any rules.

    Martin also praised the potential for more spectrum for mobile broadband, but suggested that more spectrum alone isn’t the key. He said getting traffic off the wireless network faster will help, as will bigger and better fiber connections. Additionally, he said the FCC’s stated goal of providing 500 MHz of spectrum to carriers, and its plan to offer broadcasters compensation for relinquishing some of their spectrum, will require changes in the law that only Congress can implement.

    Martin steered clear of passing judgment on the current FCC, but after hearing him delineate the limits of the Commission’s authority, I left feeling doubtful that the agency has the authority to implement some of the policy goals it’s recommending in the National Broadband Plan. I also wish that instead of focusing on things it can’t control, the FCC would come up with a solid plan to address competition in wired and wireless broadband, so that a competitive market can do what the FCC cannot.

  • Is Cheap Wireless Broadband for Real This Time?

    The FCC said today that as part of its National Broadband Plan it might allocate spectrum for a free or low-cost wireless broadband network as a means of addressing the affordability of broadband for poor people. If all this sounds familiar, maybe you recall the efforts of M2Z Networks, a Kleiner Perkins-backed venture that tried to offer filtered, low-cost broadband using WiMAX.

    A source at the FCC assures me that the agency’s efforts, which will be detailed next week when the National Broadband Plan comes out, are not similar to M2Z’s plan. M2Z wanted to offer free subscribers dial-up-like speeds of 768 kbps and would have provided filtered access to the web. The source said the FCC’s plan would offer speeds “that are real broadband” and would likely involve using proceeds from Universal Service Fund reform to offset the cost of building out a network.

    However, any federal involvement in the network could lead to a return of the filtering issue that bogged down M2Z. Those in power are easily swayed by the argument that allocating a federal resource (spectrum) to provide free broadband that children could use to access porn would lead to negative publicity. A cynic might say this offers excellent cover for lawmakers who may also be swayed by the telecommunications industry’s obvious reluctance to see low-cost or free broadband.

    Any company focused on free or low-cost wireless broadband would also have to figure out how to build a network — a multibillion-dollar proposition. For example, M2Z estimated its network buildout would cost $3 billion to $5 billion. To put that into perspective, the stimulus dollars allocated for broadband are limited to $7.2 billion, and the USF (Universal Service Fund) is currently an $8 billion program.

    Getting past a filtering mandate, offering real speeds and finding the billions needed to build out a nationwide network are all essential to any FCC plan to “consider use of spectrum for a free or very low-cost wireless broadband service,” as was stated in today’s release on the topic. I wish them luck.

    Related GigaOM Pro content (sub req’d):

    Everybody Hertz: The Looming Spectrum Crisis

    Image courtesy of Gavin St. Ours on Flickr.

  • 10 Austin Startups You Should Meet While You’re at SXSW

    Since more than 30,000 people are coming here to Austin for South by Southwest, I figured I’d offer up a list of local companies that members of the digerati should take the time to meet while they’re in town. Austin has a ton of startups, but I tried to highlight the ones doing things that Austinites do well (such as enterprise social media efforts and hardware) as well as those I think are about to break out and become bigger.

    A note to those folks following the manufactured Foursquare-Gowalla smackdown: Gowalla is not listed because most people have already met with Josh Williams, Gowalla’s founder, and I wanted to save room for some unknown Austinites.

    1. Gendai Games — This is the company behind GameSalad, a platform that anyone can use to build iPhone games. Its software could become to App Store game development what Microsoft’s FrontPage was to building web sites without having to know HTML. One local tech watcher says he thinks that Gendai’s platform could be Apple’s answer to Flash.
    2. Smooth-Stone — I write about this company all the time because I’m intrigued by its plan to use ARM-based chips in servers as a way to conserve energy and match the processing power to the workload required by web-scale companies.
    3. Plerts — A stealthy startup doing some form of personal alerts on the iPhone. I’m hoping the company will launch at SXSW.
    4. Whurleyvision — William Hurley, better known as Whurley, is an Austin tech celebrity, but visit his R&D firm to discuss the future of augmented reality and his rather passionate call for better AR hardware.
    5. ATX Innovation — These are the developers behind the TabbedOut app, which connects with a bar’s payment system and allows you to track and then settle your bar tab with one click. Having waited around for more than 20 minutes for my tab on many occasions, I’m loving this. The app is free and users pay a 99-cent fee on each transaction. It’s available for iPhones, and is being tested right now in several Austin bars and restaurants.
    6. Infochimps — I love this startup because I love anything that makes access to data easier. Infochimps aggregates and then licenses data sets in formats that folks can then use to create new apps, demographic models or whatever; public data sets are free and private ones (or ones that Infochimps has scrubbed) cost money. Anyone can submit a data set.
    7. AreYouWatchingThis.com — If you’ve ever blown off a football game after the first quarter and later found out that your team rebounded in the fourth to win in an upset, you need RUWT. This 3-year-old startup has built a bot that factors in signals such as games going into overtime, talented teams unexpectedly losing and other indications of an exciting match-up, and then sends out an alert via text or email so that true sports fans never miss out.
    8. LugIron — This company founded by ex-Cisco guys wants to be the middleware between social media and enterprise customer relationship management or business intelligence software. The goal is to provide software that can correlate information from Twitter or Facebook to how it affects your business.
    9. Appozite — This two-person (one man and one woman) startup is behind @cheaptweet, which has 22,000 followers and scours Twitter for savings. If you don’t meet with them, at least follow them so you can score some savings.
    10. OtherInbox — Email isn’t going away, but it is becoming increasingly cumbersome. OtherInbox allows users to organize their email, automatically routing messages such as Facebook notifications and iTunes receipts into folders that you can ignore until you have spare time. The endgame is to grab relevant information from your inbox and surface it easily, but we’re not there yet.

    Image courtesy of Flickr user Igor Bespamnatyov

  • What Silicon Valley Needs to Read to Learn What’s Going on in Washington, D.C.

    As much as those in Silicon Valley like to avoid politics, it’s becoming increasingly clear that if they want their companies to get ahead, they need to stay tuned to what’s going on in Washington. With that in mind, I’d like to point out three items worth reading today that cover what’s happening in our nation’s capital as it relates to any online business.

    First up is The Hill’s interview with Rep. Rick Boucher, the man who will oversee the implementation of the National Broadband Plan if he stays in office. Boucher, the chairman of the House’s Energy and Commerce Subcommittee on Communications, Technology and the Internet, is an influential congressman when it comes to broadband policy. He’s up for re-election this year in what’s expected to be a hotly contested race; in the interview he talks about a spectrum inventory bill, how he proposed weakening the plans to take broadcast spectrum for mobile broadband and how he intends to regulate online privacy.

    Also in congressional news, Rep. Eric Massa, the New York representative who tried to ban tiered pricing by ISPs, resigned today. Stop the Cap provides more details, but Massa, who is battling cancer as well as allegations of ethical misconduct, was paying attention to the broadband consumer even if his legislative efforts there didn’t get very far.

    Finally, for those who want to question the Federal Communications Commission’s authority to regulate the Internet — as well as get into a philosophical debate about the status of network transport — read this two-part blog post by Susan Crawford, a respected privacy advocate and the former co-lead on the FCC Agency Review team for the Obama-Biden transition. Crawford writes about how the FCC, by deciding to regulate the cable companies differently from DSL providers because they provided services and not just transport, may have boxed itself in when it comes to its ability to regulate those cable providers.

    Her second blog post on the topic explores how the agency’s wishy-washy stance on regulating cable is now coming back to haunt it, as exemplified by Comcast challenging its ability to regulate net neutrality in light of the way the FCC had classified the cable service years back. Comcast argues that the FCC can regulate transport providers but not those providing services on top of the transport. As Crawford notes, the FCC can cave to the ISPs or it can reclassify all broadband service under a different jurisdiction that would ultimately put the ISPs in a far more regulated environment — and make their dumb pipe status a regulatory fact.

    Related GigaOM Pro Content (sub. req’d):

    Forget Twitter, the Real Firehose is Government

  • Does the Cloud Need a Specialized Chip?

    Intel's single chip cloud computer

    Tilera, a startup building chips that contain anywhere from 16 to 100 cores, said today it’s raised $25 million in a third round of funding from investors including Broadcom. Chips made by Tilera, which we named as one of five multicore startups to watch two years ago, are aimed at boosting performance and energy efficiency for networking and cloud computing, which is likely why Broadcom invested. But as Tilera spends more time emphasizing the cloud and big players like Intel do the same, we have to ask: Do cloud computing and web-scale computing need their own chips?

    Broadcom likely wants an edge should Tilera’s RISC-based processors (rather than Intel’s x86) set fire to the cloud computing world as equipment companies attempt to develop power-efficient chips that can be adapted to specific workloads. For Broadcom, an investment in Tilera is a direct challenge to Intel’s dominance in the data center computing space, as well as a bet on faster networking chips.

    Tilera has advantages in cloud computing because its chip architecture allows for a lot of lower-power processors to talk to one another using an interconnect technology that doesn’t cause bottlenecks. In plain English, Tilera has figured out a way to get a lot of cores to talk without having to pause to listen to one another, which slows things down as you add more cores. A Tilera executive told me last year that if just 10 percent of cloud computing or web-scale customers took a chance on the startup’s architecture, it could succeed.

    But while Tilera, which started developing its chips in 2004, may have the lead when it comes to building massively multicore chips with a mesh interconnect, Intel smells an opportunity as well, and as such is building out what it calls a “single-chip cloud computer” with 48 cores for the cloud computing market. There are also systems vendors trying to solve similar problems for those needing energy-efficient web-scale computing, such as SeaMicro and Smooth-Stone.

    A key problem in all of these endeavors is figuring out how to get the multiple chips or cores to function together in such a way that performance scales linearly with the addition of each new core, rather than tapering off as the communication between the cores or chips becomes overloaded. Intel and Tilera are hoping to do this on the chip itself, while systems vendors are trying to do it with a better box.
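
    To put rough numbers on that tapering, here’s a minimal back-of-envelope model in Python using Neil Gunther’s Universal Scalability Law, a standard way to describe how contention and inter-core coherency traffic erode throughput. The sigma and kappa coefficients are invented for illustration, not measurements of any Tilera or Intel part.

    ```python
    # Illustrative only: why adding cores yields diminishing returns when
    # the cores must coordinate over a shared interconnect.

    def usl_throughput(n, sigma=0.05, kappa=0.001):
        """Relative throughput of n cores vs. one core, per the Universal
        Scalability Law: contention (sigma) serializes work, and coherency
        traffic (kappa) grows with every pair of communicating cores."""
        return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

    for cores in (1, 16, 32, 48, 64, 100):
        t = usl_throughput(cores)
        print(f"{cores:3d} cores: {t:5.1f}x single-core throughput "
              f"({100 * t / cores:3.0f}% of linear scaling)")
    ```

    Even with these modest penalties, the modeled throughput peaks at around 30 cores and then declines; that cliff is what better interconnects on the chip, and better boxes around it, are both trying to push further out.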

    Related GigaOM Pro Content (sub. req’d):

    We Can Call It A Cloud, But It’s Still Hardware

  • The Unreleased iPad Haunts SXSWi

    Apple’s iPad is the subject of no fewer than four panels at the upcoming South by Southwest Interactive festival next week, despite the fact that it won’t be out until two and a half weeks after the show ends. No matter — the iPad is one of the hottest memes welling up in advance of the show. A quick glance at tweets and panels, plus a few conversations with hopeful attendees, reveals that the others will be Android, location and real-time anything.

    Sure, Ev Williams of Twitter is keynoting and will hopefully provide us with Twitter’s revenue model, and Spotify’s CEO Daniel Ek is up for a talk as well (perhaps to tell us when the rockin’ music service will finally be available in the U.S.?), but it’s the iPad — and the mobile experience overall — that’s underpinning South by Southwest Interactive this year. The iPad is the most visible symptom of the mobile trend, but the underlying cause of the iPad excitement is the search for the best mobile experience for users.

    That theme is exemplified by Google’s all-day Android Hackathon going up against Facebook’s Developer Garage (among the gathering’s hottest events in 2008 and 2009) on Sunday. The SXSW “who’s attending” function appears to be broken, so it’s hard to know which event will have more participants. But my guess is that the promise of building mobile apps for multiple devices will win out over social networking.


    Other indications that mobile is taking over are the panels that focus on iPads, location (and one that focuses on both), new user interfaces for phones and several on augmented reality. In a very cool way, it’s as if South by Southwest Interactive has realized that the interactivity that began with blogging and progressed to social media is now ready to invade our gadgets, thanks to smarter, more power-efficient chips and ubiquitous wireless. So even as attendees scratch their iPad itch, the true cause of that excitement isn’t just a new tablet, it’s the opportunity for mobile computing that’s, yes, interactive.

    Related GigaOM Pro Content (sub. req’d):

    5 Tips for Developers Targeting the iPad

  • What You Need To Know About the National Broadband Plan

    The FCC will deliver its National Broadband Plan to Congress a day earlier than originally scheduled — on March 16. Also on that day, the five FCC commissioners will vote on a “mission statement” intended to represent the spirit of the submitted documents. The plan, which Congress called for as part of the stimulus package passed last year, will recommend ways to provide universal broadband access as well as encourage Congress and industry to use broadband in health care, education and energy efficiency programs.

    So far we have been relatively unimpressed with the aspects of the plan that have been pre-announced, as have a number of economists and analysts. It tends to favor the existing broadband duopoly while offering little in the way of innovative ideas for expanding access. While the complete document isn’t out yet, many of the big-picture items in the plan have been previewed within the last two weeks. So with the caveat that it may all change by March 16, here is a rundown of the main components and beneficiaries that we believe our readers are most likely to care about.

    Wired Broadband: The plan’s most far-reaching goal is to deliver 100 Mbps to 100 million households by 2020. But as we’ve already pointed out, such a goal isn’t really that ambitious given that 65 percent of homes in America will have broadband that could deliver 100 Mbps speeds by the end of this year and 90 percent will have it by 2013.

    Also as part of improving wired broadband, the plan will tackle the issue of universal service reform, a snore-inducing topic but one that nevertheless will be a primary means of funding rural broadband access. Currently, the USF is an $8 billion program aimed at phone-based technologies rather than broadband and IP services. On Friday the FCC suggested a decade-long transition that would gradually shift that spending from voice to IP-based services. The long transition should ease the pain for rural phone companies — such as CenturyLink or Windstream — that have benefited from the program.

    Aside from USF reform, when it comes to higher speeds, it’s the carriers using fiber-to-the-node strategies that are likely to lose. Despite assurances from chip vendors such as Ikanos that 100 Mbps over copper is feasible, Qwest, AT&T (T) and other providers relying on DSL and copper for any part of the last mile will struggle to reach 100 Mbps unless they make huge investments.

    Wireless Broadband: This is where things get a bit more interesting. FCC Chairman Julius Genachowski is a big proponent of mobile broadband — he almost always seems to steer the conversation from wired broadband to mobile — perhaps because he’s acknowledged that the lack of competition in the U.S. is a problem he doesn’t know how to solve. As such, he has dedicated many FCC resources to a variety of cellular issues, from calling out carriers on early termination fees to finding more spectrum assets. The National Broadband Plan asks the government to free up 500 MHz of spectrum — less than the wireless industry asked for, but still a nice chunk.

    However, the government won’t demand that spectrum from broadcasters, as the FCC had originally floated. Instead the FCC is asking broadcast television spectrum holders to voluntarily give up their spectrum in exchange for compensation. Such an offer won’t go far in freeing up much-needed spectrum in urban areas, but it will be a boon to spectrum holders and broadcasters in rural areas, which will get payouts for something they aren’t using.

    The plan also tackles the nearly decade-old problem of the lack of a nationwide wireless public safety network by saying it will enable some type of public-private partnership using consumer-grade equipment that could be cheaper for the feds to buy — a plan that could cost up to $16 billion. This might hurt Motorola, which provides a lot of specialty public safety gear — unless the government gets Android handsets.

    Wireless broadband will also have a role to play in connecting the roughly 5 percent of the population that will be too expensive to reach with wired broadband; some of that connectivity will come from wireless phone service and some from satellite. The key will be ensuring that satellite and wireless broadband providers that serve as the sole means of broadband don’t cripple the service with low caps and wretched speeds.

    Adoption: Once the infrastructure is in place, there’s still a key hurdle to adoption — namely the cost of broadband and convincing people they need it. The FCC recently found that 94 million people don’t have broadband, choosing to forgo fat pipes because they can’t afford them or just don’t value them. Providing access for the poor through some sort of government lifeline program will likely be part of the plan; what’s unclear is whether the government would pay high retail rates for those lines or cheaper, wholesale ones.

    The ISPs benefit either way, although they would love to get the higher prices. But using additional programs modeled after cable’s A-Plus project to boost access for the poor by subsidizing the cost of broadband may result in slower speeds for those receiving the subsidized service — and as such, won’t help convince people broadband is worth paying for!

    In the meantime, it’s unclear if the plan will tackle special access issues, whereby ISPs in rural areas pay far more than ISPs in urban areas because the providers selling rural ISPs an on-ramp to the Internet backbone are so few. Attempting to solve that problem by investigating the abuse of a monopoly and regulating prices in those areas would lower broadband costs for ISPs, which could then afford to provide faster speeds without building new infrastructure in rural areas.

    National Purposes: This aims to solve the chicken-and-egg problem plaguing broadband adoption, smoothing the path for services such as health care, education, smarter energy management systems and even government programs to be delivered via broadband. Demand for such services will not only spur investment from infrastructure companies, but will also give folks a reason to adopt broadband in their homes.

    The National Broadband Plan will likely contain several miscellaneous items as well, such as a section on opening up set-top boxes and making room for unlicensed wireless spectrum, but at its core it’s still a recommendation. Bringing many of the recommendations to fruition will require legislative action on the part of Congress as well as formal rule-making procedures at the FCC, especially when it comes to things like USF reform and special access reform. Other federal agencies will also get involved, such as the Food and Drug Administration, which is already working with the FCC on ways the agencies can regulate wireless medical devices. Given the sclerotic pace of the bureaucracy and the amount of lobbying the communications firms will continue to bring to bear, expect many aspects of this plan to take years to become reality.

    Related content from GigaOM Pro (sub req’d):

    Everybody Hertz: The Looming Spectrum Crisis

  • What’s Slowing Down Verizon’s LTE Speeds?

    Verizon continues to say it will launch its fourth-generation, super-fast Long Term Evolution wireless network to cover 100 million people by the end of this year, but it’s also been clear that it expects its LTE network speeds to be just 5-12 Mbps. So how does LTE — a technology that can deliver a theoretical 150 Mbps — get whittled down to less than a tenth of its top speed?

    Thanks to the recent attention on the potential shortage of wireless spectrum for mobile broadband, and the hope that mobile broadband could act as a decent substitute for wired broadband, it’s worth figuring out how spectrum, cell towers and subscribers all factor into the speeds a wireless carrier can offer. Peter Rysavy, a wireless analyst, offers a great explanation using Verizon as an example. Note that in a Network World article, Tony Melone, executive VP and CTO of Verizon Wireless, confirmed that the carrier will use all of its 700 MHz spectrum for LTE service. Here’s what Rysavy has to say on that:

    “Looking forward to advanced technologies such as LTE, capacity will be higher, but it will still be extremely limited compared to wireline capacity. Verizon Wireless’ LTE network will operate in the 700 MHz band using 10 MHz radio channels. With a spectral efficiency of 1.5 bps/Hz, this delivers sector throughput of 15 Mbps.

    Meanwhile, there are about 1000 subscribers in the US for every cell site, which makes for an average of 333 subscribers per sector. If 10% of them were using the LTE data service, that would mean 33 users for the 15 Mbps data channel. Now, compare this with a subscriber of a wireline high‐speed Internet service of 50 Mbps that is dedicated, and not shared, as shown in Figure 2 (below). The point is not that the wireless network cannot deliver extremely useful and valuable services, since it can, but rather that wireless capacity is inherently limited compared to wireline capacity.”

    Basically, Verizon can cram only a certain number of bits into each hertz — a function of how LTE allocates bits and a physics constraint. Multiply that number of bits by Verizon’s 10 MHz channels, which are dictated by regulatory policy as well as what Verizon paid for during the spectrum auction, and you have the top speed. Deliver those 15 Mbps channels to your cell sites (determined by Verizon’s investment decisions as well as community regulations that govern where towers are allowed) and divide by the number of devices sucking bits.
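
    For the arithmetic-inclined, here’s that calculation as a short Python sketch. It uses Rysavy’s illustrative figures (1.5 bps/Hz, 10 MHz channels, roughly 1,000 subscribers per three-sector cell site, with 10 percent active), not anything Verizon has disclosed about real-world loads.

    ```python
    # Back-of-envelope version of Rysavy's math: shared sector capacity
    # divided among the users actively pulling data at once.

    SPECTRAL_EFFICIENCY = 1.5    # bps per Hz, Rysavy's LTE estimate
    CHANNEL_WIDTH_HZ = 10e6      # Verizon's 10 MHz radio channels at 700 MHz
    SUBS_PER_SITE = 1000         # rough U.S. average, per Rysavy
    SECTORS_PER_SITE = 3
    ACTIVE_SHARE = 0.10          # fraction of subscribers using data at once

    sector_capacity = SPECTRAL_EFFICIENCY * CHANNEL_WIDTH_HZ        # 15 Mbps
    active_users = SUBS_PER_SITE / SECTORS_PER_SITE * ACTIVE_SHARE  # ~33

    print(f"Sector capacity: {sector_capacity / 1e6:.0f} Mbps")
    print(f"Active users per sector: {active_users:.0f}")
    print(f"Average share per user: {sector_capacity / active_users / 1e6:.2f} Mbps")
    ```

    Run it and the average share works out to roughly 0.45 Mbps per active user; the 5-12 Mbps Verizon promises reflects bursts on lightly loaded sectors, not a sector being shared by all 33 active users at once.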

    It’s complex math getting the infrastructure in place, but add in the infinite variables of iPads, smartphones, M2M sensors and even human traffic patterns, and you have a situation in which delivering fast mobile broadband looks nothing short of miraculous.

    Related content from GigaOM Pro (sub req’d):

    Everybody Hertz: The Looming Spectrum Crisis

  • Compiled Networks Aims to Link Clouds, Make Wi-Fi Mobile

    Compiled's Bob Locklear (left) and Jasson Casey

    Compiled Networks, a stealthy Austin, Texas-based startup, is building an appliance that can securely link two clouds at the network level. But before starting on its cloud product, the founders managed to develop one that ISPs can use to deliver Wi-Fi that behaves less like a fixed wireless network and more like a cellular one, with seamless handoffs and one-time authentication. One unnamed ISP is already using it for its Wi-Fi network.

    But Compiled, which is seeking a Series A round of $2.5 million, has its eye on the cloud. The software it uses to turn a series of Wi-Fi access points into a seamless network can also run inside an appliance located inside a cloud, and virtually extend a data center’s Layer 2 Ethernet network to that cloud. In doing so it offers an IT operations person a level of security and control that they might not get from other private virtualized networks, such as Amazon’s or even Savvis’s data-center-in-a-box offering, which tend to operate by managing IP addresses at Layer 3.
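
    Compiled hasn’t said how its appliance does this, so the Python sketch below is purely illustrative of the general technique its product category relies on: encapsulating raw Layer 2 Ethernet frames inside ordinary UDP packets so two distant networks can behave like a single broadcast domain. The header layout, port number and virtual-network ID are all invented for the example.

    ```python
    # Illustrative L2-over-L3 tunneling, not Compiled's actual protocol:
    # wrap an Ethernet frame in a small tunnel header and ship it over UDP.

    import socket
    import struct

    TUNNEL_PORT = 47000  # hypothetical UDP port for the tunnel endpoint
    VNET_ID = 42         # hypothetical ID separating tenants' virtual networks

    def encapsulate(frame, vnet_id=VNET_ID):
        """Prefix a raw Ethernet frame with an 8-byte tunnel header."""
        return struct.pack("!II", vnet_id, len(frame)) + frame

    def decapsulate(packet):
        """Recover the virtual-network ID and the original frame."""
        vnet_id, length = struct.unpack("!II", packet[:8])
        return vnet_id, packet[8:8 + length]

    # Round-trip check, then send one encapsulated frame to the remote end.
    fake_frame = bytes.fromhex("ffffffffffff") + bytes(54)  # dummy broadcast
    assert decapsulate(encapsulate(fake_frame)) == (VNET_ID, fake_frame)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encapsulate(fake_frame), ("192.0.2.1", TUNNEL_PORT))
    ```

    The pitch for doing this in a dedicated appliance rather than in software is the one Casey makes below: the encapsulation happens at line rate instead of stealing CPU cycles from the servers whose traffic is being tunneled.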

    Jasson Casey, founder and CEO of Compiled, says the company’s appliances allow an IT manager to build a virtualized version of a network inside a cloud without adding latency or requiring more CPU power. Other companies are attempting to virtualize the network and bridge clouds using managed software, but Casey believes those methods are unwieldy and won’t scale as well.

    Casey didn’t offer any more details about the software, citing the company’s relative stealth, but said for now it is working on a deal with a large cloud provider that could buy the Compiled appliance and pop it into its cloud as a way to offer customers a more secure form of virtualized networking. In order to work, the Compiled appliance has to be inside the cloud, which could be a barrier for the company if it can’t get large infrastructure-as-a-service providers interested. Even large companies trying to move applications and workloads to public clouds may not have the cachet to demand that a Savvis or Terremark put an appliance inside the clouds they offer.

    In the meantime, Compiled has a cloud product and a potential partner willing to try it in its cloud, and a deal with an unnamed ISP for its Wi-Fi product. I will say that the company has certainly managed to straddle two large markets with its technology. Now we just need to see if it can sell into them.

    Related GigaOM Pro content (sub. req’d):

    Amazon’s Virtual Private Cloud: What’s New, What’s Next

  • Austin’s uShip Profits Off Slow Growth

    Matt Chasen, CEO and Founder, uShip

    Six years ago I attended a business plan competition where I watched a couple of guys explain how they wanted to streamline the process of shipping bulky items across the country by linking folks up via the Internet. The social web was starting to heat up, but I liked how these guys and their company, uShip, were trying to use the web to bring people together, not to swap songs or photos but to take advantage of empty space in moving vans. uShip will hold a panel at the upcoming South by Southwest conference explaining how to build a profitable business, so ahead of that, I decided to check in with the company to see what the founders have learned.

    uShip, which provides a matching service that connects people who want to send things across the country with those who happen to have extra space in their vehicles as they make the drive (and also offers an alternate revenue stream for professional shippers), is now profitable, thanks to commissions the company takes when it matches shippers and shippees. The Austin, Texas-based company brought in between $5 million and $7 million in sales during 2009, double that of the previous year.

    uShip has raised $7 million in outside capital from Benchmark and DAG Ventures, and has helped broker more than $140 million in shipping contracts — with about 85 percent of those occurring during the last two years. Last week, it received its 1 millionth listing on the site. So I asked founder and CEO Matt Chasen for a few lessons he learned as part of building uShip and he offered up via email the following tidbits:

    • Know Thy Revenue Streams. “Focus on proving out how you will make money. It is critically important to build a unit economic model that shows how much it costs to acquire a customer and the metrics around how much you can earn per customer.” (A bare-bones version of such a model follows this list.)
    • Avoid PPC Addiction. “Don’t be overly reliant on paid search marketing. It’s okay to use search engine marketing to seed a business, but have plans in place for how you will grow the business with search engine optimization, word-of-mouth and other free channels.”
    • Launch the Minimum Viable Product. “For internet-based businesses, don’t wait until your product is ‘perfect’ before launching, because you will likely just end up changing it. I’m a big fan of the Minimum Viable Product (MVP) concept – build and launch only the bare minimum that is necessary to get customer feedback and data. Sometimes this may even only be a landing page describing your product and value proposition – with AdWords you can inexpensively test entire business models and start understanding your customer acquisition cost before you even start building your product.”
    • Stay Lean. “Continually revisit your budgets, assumptions and strategy. Focus on niche markets that are big enough to build a profitable business – but that also have very large adjacent opportunities.”
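
    Chasen’s first tip lends itself to arithmetic, so here’s a bare-bones Python sketch of the kind of unit economic model he describes. Every number is an invented placeholder; none are uShip figures.

    ```python
    # Hypothetical unit economics: what one customer costs to acquire vs.
    # what that customer earns the business per year.

    CAC = 40.00                   # cost to acquire one customer (ad spend, etc.)
    REVENUE_PER_SHIPMENT = 25.00  # average commission on a matched shipment
    GROSS_MARGIN = 0.80           # share of commission kept after direct costs
    SHIPMENTS_PER_YEAR = 3        # how often a typical customer ships

    annual_profit = REVENUE_PER_SHIPMENT * GROSS_MARGIN * SHIPMENTS_PER_YEAR
    payback_months = 12 * CAC / annual_profit

    print(f"Gross profit per customer per year: ${annual_profit:.2f}")
    print(f"Months to recoup acquisition cost: {payback_months:.1f}")
    ```

    If the payback period stretches much past a year, paid acquisition becomes a treadmill, which is exactly why Chasen’s second tip warns against over-reliance on search ads.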


  • Check Out a Big Primer on Big Data

    Big data is certainly on the tip of everyone’s tongue these days, as both the amount of data entered online and the ways to track objects and people grow with wireless connectivity and sensors. We have both more information being entered and more sources of that information, providing a river of data that somehow we’re going to capture and use to make money and better decisions.

    For those wondering about the big picture and some of the nitty-gritty details (metadata, data visualization, open document formats), The Economist has a killer package on big data. Download it at the web site, or wander out and get your own copy of the magazine. I was intrigued by the idea that the “it” career in a data-driven world is statistician.

    Writing about the concept of big data is kind of like trying to write about water. Water is essential and touches so many aspects of life — from evolution to the current location of cities — that one article, one book or even one field of study can’t articulate its influences. Data will be the same way in the not-too-distant future, thanks to cheap, scalable computing and ubiquitous broadband enabling a connected everything.

    Related GigaOM Pro content (sub. req’d):

    What Comes Next for the Web

    Thumbnail image courtesy of Flickr user Esparta

  • Wikia-Dell Spat Highlights Divide Between Corporate and Web Computing

    Wikia, the service that hosts wikis on any known subject, is quitting Dell servers thanks to both a functional and a philosophical disagreement stemming from Dell’s new demand that all hard drives in its 11th-generation PowerEdge servers be certified by the Round Rock, Texas-based company. Artur Bergman, VP of engineering and operations at Wikia, told me in a conversation conducted via chat that “Dell basically wants $2500 for 100GB SSD [solid-state drives], and their plan is to only allow dell [sic] drives in their servers.”

    Wikia, which has only 200 servers for its popular web site, may not be the largest buyer of Dell servers, but it is part of a community of startups and web-based companies that are becoming more frustrated with the products the traditional server and chip vendors are pushing at them. For example, Facebook VP of Technical Operations Jonathan Heiliger said last year at our Structure Conference that web companies and the vendor market were far out of step — a divide that startups such as SeaMicro and Smooth-Stone are trying to bridge with new products optimized for web-scale companies.

    For Wikia, Dell’s moves mean that it couldn’t run the 64GB Intel X25-E, which Bergman pegs at about $750 on Amazon — about a third of the cost of the approved Dell hardware (although smaller). He also uses the 160GB Intel X25-M, which costs $450. Dell has said the certification ensures reliability for customers, but Bergman said he’s OK with his drives failing, and has planned for that in his network architecture (he uses RAID 0), so Dell is “optimising for something we really don’t care about.”

    Given the cost differential, Bergman said he has “a hard time seeing this as anything but dell trying to increase margins.” And considering the bifurcation between the demands of web-scale companies and those of enterprise-class corporate data centers that will willingly shell out for more expensive optimized gear, I think these sorts of spats will become all the more common. Whether they will be common enough to support a new breed of equipment startup, only time will tell.

    Related GigaOM Pro content (sub req’d):

    Report: The Future of Data Center Storage

    Image courtesy of Flickr user JonSeb

  • Yahoo’s Carol Bartz Hates Net Neutrality? Nope.

    Yahoo CEO Carol Bartz, in an interview on CNBC today, slammed government involvement in broadband deployments and consumers who are too worried to spend money because they don’t have jobs, and said she would have taken Microsoft’s offer of $36 per share back in 2008. However, she didn’t come out against net neutrality, despite assertions to the contrary in a tweet from Verizon’s executive director of external communications and an email alerting us to Bartz’s comments from Ted Hearn, VP of communications for the American Cable Association, which read:

    On CNBC, Yahoo CEO Carol Bartz just said government “should stay out” of regulating broadband access providers. A big crack in the Net Neutrality coalition?

    But Bartz didn’t say anything about net neutrality (another member of the debate did). When an anchor asked her how she felt about “the government getting involved in building out the Internet so Yahoo can get into more homes” she replied:

    “I think the government should stay out of it. It was wonderful with DARPA and all the science that happened, but people in companies, and company investments are what really brought the Internet to the American public.”

    The video, focused on the broadband regulatory debate, is embedded below, but before you watch seven talking heads scream at one another, let me caution you that the facts presented regarding broadband access are based on an older standard of broadband. Also, just because the CEO of Yahoo doesn’t want government involvement in broadband buildout doesn’t mean all tech companies are eager to see the U.S. continue with the anti-competitive duopoly that keeps innovation down in most towns.

  • U.S. Carriers Are Running Out of Growth Options

    It’s hard to grow in a saturated market, but despite 89 percent cell phone penetration in the U.S., AT&T has managed to pull out some impressive revenue growth over the past three years — not because it has the iPhone, but because it’s been buying other companies. We’ve written about AT&T’s dependence on the iPhone, but this chart from TeleGeography shows that AT&T’s sales growth over the period — on par with that of service providers in countries where cell phone subscriber numbers are still growing — is tied primarily to Ma Bell’s acquisitions.

    The companies found in the lower part of the chart, which operate in saturated Western European markets, are a glimpse of the future for AT&T and even Verizon as U.S. companies run out of acquisition targets. The carriers hope that machine-to-machine communications will save them, but they’re still searching for the right business model as well as compelling applications. I suppose if times get too tough, there’s always Sprint.

    Related GigaOM Pro content (sub. req’d):

    Metered Mobile Data Is Coming and Here’s How

  • Force10 IPO — Better Late Than Never?

    Force10, the networking gear maker founded in 1999, filed for an initial public offering today, part of a rush of companies seeking to hit the public markets while the window seems open. IPO filings are up more than 900 percent in 2010, according to Renaissance Capital, which tracks the IPO market. Despite its relative youth compared with some of this year’s filers, Force10 represents not a hot new startup seeking access to the public markets, but a grizzled 11-year-old veteran trudging toward an IPO because it simply has to exit, and no buyers have emerged.

    At least 13 venture firms have invested more than $205 million in Force10 within the last five years alone. The company is seeking to raise $143.75 million through its offering. But given the amount of time its investors have waited, a history of losses, a balance sheet complicated by a series of transactions, and a slew of larger competitors, Force10’s IPO looks a bit like a shotgun wedding.

    Force10 sells telecommunications networking gear as well as 10 gigabit Ethernet gear for the data center. It’s the data center market that’s growing fastest for Force10, although providing core network and backhaul components for telecommunications providers as they switch to all-IP architectures is another mid-term opportunity.

    Three years ago, after a $60 million Series F round of funding, Om wondered when Force10 would file to go public. But any company that didn’t make it out before the fall of 2008 was stuck staying private thanks to the economic freeze and the credit crunch. In 2009 Force10 purchased Turin, a maker of wireless backhaul gear. It reported pro forma sales (which combined Force10 and its Turin acquisition) of $199.2 million in 2009 and a loss of $76.3 million.

    Force10’s IPO may not reflect the return of the big-ticket technology IPO as much as it reflects a lack of buyers for the business and the chance to get an exit while IPOs are possible. Instead of comparing it to Tesla, the electric vehicle maker that recently filed to go public, or Silver Spring, a smart grid startup that is expected to file soon, a better comparison would be Calix, the telecommunications gear maker that has raised a similar amount of money in its long history, and filed late last year. Given that too many hot IPOs can overshadow older candidates, perhaps it’s better for Force10 that popular online businesses such as Yelp or Facebook are holding off on IPOs this year.

    Related content from GigaOM Pro (sub req’d):

    What the VC Industry Upheaval Means for Startups

  • Liquid Computing & the Curse of a Computer Hardware Startup

    It’s hard out there for a systems vendor — that is, if the death of Liquid Computing, an Ottawa, Ontario-based startup that until last week was building a unified computing box to help manage the virtualization of the data center, is any indication. Last week a round of funding failed to materialize at the last minute and the company was shut down, confirmed Liquid’s now-former CEO, Vikram Desai, who subsequently found himself out of a job.

    Liquid was one of many startups trying to build a box to improve performance inside data centers. In Liquid’s case it was trying to address some of the complexity introduced in networking and storage once servers were virtualized. But like many other startups targeting corporate data centers, Liquid had a hard time selling its wares.

    Desai was hired in December 2008 to help Liquid move from selling proprietary boxes to selling its software optimized for generic hardware made by Intel and NetApp. “It’s very risky for large enterprises to adopt the hardware platforms of young companies as their primary solution, and moreover, with the release of advanced hardware platforms from companies like Intel, there isn’t a need for them,” Desai said. “The most successful companies in the data center will be software-centric.”

    Desai didn’t get into details about Liquid’s demise, but the Ottawa Citizen reports that the 7-year-old startup laid off most of its Ottawa workers last week, and that it’s in the process of shutting down because it was unable to find additional funding (hat tip insideHPC). It was building a server integrated with networking capability, much like Cisco’s Unified Computing System or HP’s efforts to add networking to its servers.

    As a startup, the company was pitting itself against giants, and had raised more than $50 million in order to outrun them. But despite the fact that its products had advantages that Cisco’s and HP’s products don’t, and that the company was winning customers, Liquid’s backers weren’t willing to front any more cash. A bankruptcy isn’t in the cards, but the company may sell its intellectual property.

    The inability to raise more funding signals the challenges that even large systems startups still face when it comes to raising capital. SiCortex, Quadrics and Verari are other examples; they failed either because they were unable to find money of their own or because their investors were skeptical of the returns they might get and didn’t want to keep throwing good money after bad. In fact, after Verari shut down, I wondered if Liquid and BLADE Network were next.

    SeaMicro and Smooth-Stone, both of which are building server systems for the data center, may also be having troubles. Sources tell me that Smooth-Stone, a startup manufacturing ARM-based servers in Austin, is only building out a system to show off its software, a practice that can require a lot of up-front capital. Desai said that Liquid found itself forced to do the same thing in order to sell its software.

    Liquid spent 2009 shifting from building hardware to selling software, but the venture firms were unable or unwilling to give it more time. However, Desai is still optimistic about the prospects for unified computing, which he believes is one of the cornerstones of the next-generation data center. “We are still at the early stage of unified computing, but that and virtualization are the key to truly deliver an automated data center.”

    Related GigaOM Pro content (sub req’d):

    If We Compute in the Cloud, We Need a Network Fabric

    Image courtesy of Flickr user br1dotcom

  • We Need Another Location Technology Like We Need Another Social Network

    Rosum, a company that offers a way to use broadcast TV signals to derive location, today said its technology will be used (PDF) in a new chip from mobile TV chip maker Siano. The Alloy chip is aimed at providing location and timing information for femtocells, people and inventory tracking, and local ads via mobile TV. I think it’s an idea that’s doomed to fail because the overall market is too small.

    Rosum has been selling the idea of using broadcast TV to determine location for a while. But we already have two methods of measuring location — GPS, which uses a satellite signal and a database, and Wi-Fi triangulation, as used by Skyhook. Do we really need a third?

    Wi-Fi location information is faster and works better in urban areas where GPS signals have a hard time penetrating. GPS works where there are no Wi-Fi networks, such as in rural areas or along highways. So what does Rosum’s broadcast TV solution have to offer? Rosum says it’s for areas where Wi-Fi networks aren’t able to penetrate, such as deep inside buildings, or where there are a lot of networks that may confuse location, such as in highly urban areas. However, Wi-Fi still seems to work well even in those situations.

    There’s also the issue of putting a separate broadcast TV chip inside a device. Wi-Fi chips are already embedded in many handsets and devices, and provide a primary function outside of delivering location. GPS chips are also becoming more common for mobile devices that don’t have Wi-Fi, or that offer navigation. However for many devices, using Wi-Fi alone will suffice.

    But Rosum’s primary function is location, and it relies on grabbing broadcast TV signals. So to take advantage of the Rosum solution, a device would have to need a broadcast television signal and not need Wi-Fi or GPS. How often does that happen? I can’t think of a reason to add a broadcast TV signal-detecting chip to a femtocell, but I can see a reason to add Wi-Fi. I can see Rosum winning customers among device makers wanting to add location to the mobile televisions that will use the Open Mobile Video Coalition’s standard for mobile TV, but that’s a young and small market.

    For now, I’m highly skeptical as to Rosum’s chances for making broadcast signals a third source of location information. Yes, location is hot, but between Wi-Fi and GPS, broadcast TV looks more like a third wheel.

    Related GigaOM Pro Content (sub. req’d):

    Location: The Epicenter of Mobile Innovation

  • How AT&T Plans to Keep SXSW From Swamping Its Network

    Smartphones, including iPhones, were all the rage at SXSW in 2009.

    Last year, the hordes of South by Southwest-attending geeks toting iPhones blew out the AT&T network around the convention center in Austin, resulting in dropped calls and crappy connections for many attendees. The subsequent news coverage showed off Ma Bell’s network failures to the entire world (or at least the world that cares about such things). This year, having activated more than 8.7 million additional iPhones since last March’s debacle, AT&T is pulling out all the stops to make sure the digerati have the coverage they want during SXSW 2010. Here’s how.

    • A Distributed Antenna System (DAS) at the Austin Convention Center: This system provides the equivalent coverage of eight cell sites, with 50 antenna nodes providing coverage throughout the venue. The system was completed in recent weeks.
    • Beefing up the Cell Sites: Austin isn’t the only city to benefit from this, but AT&T has moved from one radio network “carrier” to three in the city, which essentially enables the carrier to use more of its spectrum. My sources tell me this means AT&T is using about 30 MHz of spectrum for 3G rather than the 10 MHz that one radio network carrier would offer. And speaking of spectrum, the upgrade to the 850 MHz band that was begun in a rush during the last SXSW will also help, as will the upgrade to HSPA that AT&T completed across its network earlier this year.
    • Three Temporary Cell Sites: The carrier will deploy two Cells on Wheels, as well as add a third temporary site on an undisclosed rooftop. Those sites will provide AT&T Wi-Fi as well as 3G service, and are positioned where SXSW organizers and AT&T expect to see large amounts of traffic.
    • Better Backhaul: AT&T offered scant details, but said via email: “Compared with last year, we have added fiber-optic connections to more than quadruple the backhaul capacity of each of the eight cell sites that serve the event area, and temporary sites will also be served by extensive backhaul.”

    AT&T worked with SXSW event planners to make sure the system in place will suffice, and it’s not turning its back on the event once it starts, either. Last year the complaints caught the carrier off guard, but for 2010 a team of AT&T network engineers will monitor the Austin network 24/7 throughout the event to make sure it stays up.

    Related GigaOM Pro Content (sub. req’d):

    How AT&T Will Deal with iPad Data Traffic

  • Meet the iPad’s Micro SIM

    The Apple iPad isn’t even out yet, but already thousands of words have been written about its influence on hardware, media, computing and even ergonomics. It has a new chip, a new contract-free pricing plan from AT&T and a new SIM in order to get onto the carrier network. Since I was curious about the new micro SIM format, I visited the offices of Gemalto, which sold about 1 billion SIM cards last year, to learn more about it.

    Ray Wizbowski, a marketing director with Gemalto, showed off the micro SIM format for me and explained how it contains exactly the same hardware as a traditional SIM card, just in a smaller plastic casing. Theoretically you could cut your existing SIM down to size, although the electrical contacts would have to line up. Wizbowski says the smaller casing makes room for more hardware inside a device like the iPad, but it also provides a smaller platform (52 percent smaller) for carrier advertising. Did Apple go smaller in order to cram in more functionality or to marginalize AT&T?

    As far as current uses of micro SIMs go, a company called Loc8 Solutions uses a micro SIM keyed for tracking children, and Wizbowski says the cards might also show up in cell phone wrist watches or cameras connected to a wireless network. The micro SIM is based on a standard, so other companies can make them as well. For more on micro SIMs and machine-to-machine connectivity, check out the video below.

    Related content from GigaOM Pro (sub req’d):

    With The iPad, Apple Takes Google To the Mat

  • Pokey Mobile Broadband Isn’t Cutting It in the New App Era

    As more people pick up smartphones and shell out for mobile data plans, carriers, application developers and phone manufacturers need to keep one thing in mind: Speed matters. Even if it’s mobile, a connection to the web still needs to feel like broadband. Otherwise, people aren’t going to use their phones as often, or for as long. But speed is a double-edged sword, because as newer, faster networks are deployed, the data tsunami already swamping carriers grows taller.

    At a GigaOM Bunker Session (GigaOM Pro, sub. req’d) in our offices on Wednesday, Artur Bergman, VP of engineering and operations at Wikia, said that folks visiting the site from an iPhone using slower 3G networks spend about four minutes there vs. the five to five-and-a-half minutes spent by iPod touch users coming in with (generally) faster Wi-Fi connections — a difference of as much as 38 percent.

    Slow load times are also why I plan to dump my BlackBerry the second the Nexus One comes out on Verizon. I don’t even try to load web pages on that thing anymore, as I don’t have time to wait. I’d rather turn on my Mi-Fi and use my iPod touch. Yup, I carry three devices with me to sate my web addiction and make phone calls.

    I’m apparently not the only one who’s impatient. Data from AdMob shows that folks using an iPod touch and Wi-Fi to connect to the web spend 100 minutes a day on their devices using apps, while those using 3G on the iPhone spend just 79 minutes. For other 3G handsets, that number rises to 80 minutes for Android phone users and 87 minutes for those on Palm devices.

    I’m inferring from that data, my own experience and Bergman’s comments that if it ain’t fast, users go home. The speed of a mobile application can be a result of the connection, the phone hardware and the application’s design (which can also involve the web browser instead of an app). That’s why faster processors for handsets and new WiMAX and LTE networks will not only appease current web users with a need for speed, but will drive demand ever higher.

    Image courtesy of Flickr user zenera