Author: David Meyer

  • Vodafone shares dip as Verizon denies $245 billion takeover rumors

    Well, that was quick. On Tuesday, the buzz was that Verizon and AT&T were thinking of bidding an eye-watering $245 billion for the UK-based carrier Vodafone. If true, this would have represented the biggest M&A transaction ever.

    However, late on Tuesday evening Verizon issued the following statement, which keeps alive the perennial possibility of Verizon buying out Voda’s stake in their joint venture, Verizon Wireless, but which is also pretty categorical on the latest rumor:

    “As Verizon has said many times, it would be a willing purchaser of the 45 percent stake that Vodafone holds in Verizon Wireless. It does not, however, currently have any intention to merge with or make an offer for Vodafone, whether alone or in conjunction with others.”

    The denial knocked down Voda’s share price by – at the time of writing – 2.2 percent in intra-day trading. That said, according to Bloomberg, the rumor brought a 6.1 percent bump on Tuesday, so for now it did more good (for Voda’s investors) than harm.

    According to Reuters, the problem with the Verizon-buying-out-Vodafone’s-stake scenario is that the $115 billion transaction would land Voda with a $20 billion tax bill – hence the idea of carrying out a merger instead. Apparently that’s now not going to fly, so it’s back to the drawing board.

  • EE harnesses London cabs to tout its 4G prowess

    Right now, Deutsche Telekom and France Telecom’s EE joint venture is the only 4G-toting carrier in the United Kingdom. That will change soon enough though, as the much-delayed 4G spectrum auction is now done and dusted, so EE is doing all it can to capitalize on its early start.

    The operator’s latest marketing tactic involves one of London’s greatest icons: the Hackney carriage, or “black cab” as most people call it. EE has put a 4G MiFi router into 50 of the vehicles in London and Birmingham (40 in the former, 10 in the latter), and passengers will be able to use the service for free.

    Displaying a true marketing professional’s grasp of physics, EE brand chief Spencer McHugh claimed in a statement that users will be able to “browse, download, catch up on emails, Tweet and check Facebook literally at the speed of light”.

    EE’s three-month promotion is not the first to combine the black cab with wireless connectivity. Back in December, ad firm Eyetease said it had gained approval from London’s transport authorities to put hotspots into the vehicles, with users needing to watch a 15-second ad in order to get 15 minutes of free surfing. The ISP Virgin Media also gains a great deal of exposure by providing Wi-Fi for commuters in certain London Underground stations.

  • Google faces wrath of European regulators over unified privacy policy

    When Google abruptly unified its privacy policies a year ago, data protection authorities in France reckoned the result broke EU law. The French regulator, CNIL, subsequently took up the cause on behalf of its peers across the various European nations, and sent Google a comprehensive list of questions about the change. Then, in October, following unsatisfactory responses from Google, the regulators came back with a series of recommendations for the company.

    Google did not implement the recommendations within the allotted four months, even after a meeting in March with CNIL and data protection authorities (DPAs) from Germany, the UK, the Netherlands, Spain and Italy. And now we see the result. According to a CNIL statement on Tuesday:

    “It is now up to each national data protection authority to carry out further investigations according to the provisions of its national law transposing European legislation. Consequently, all the authorities composing the taskforce have launched actions on 2 April 2013 on the basis of the provisions laid down in their respective national legislation (investigations, inspections, etc.)

    “In particular, the CNIL notified Google of the initiation of an inspection procedure and that it had set up an international administrative cooperation procedure with its counterparts in the taskforce.”

    The UK Information Commissioner’s Office (ICO) also confirmed that it had opened an investigation to check whether the privacy policy complies with that country’s Data Protection Act (each EU member state transposes EU law into its own version, and there are sometimes variations in interpretation).

    Google, meanwhile, said in a statement that its privacy policy “respects European law and allows us to create simpler, more effective services”. “We have engaged fully with the DPAs involved throughout this process, and we’ll continue to do so going forward,” the company added.

    What did Google do wrong?

    The main problems with the privacy policy, according to the regulators, are that Google doesn’t provide clear and comprehensive information about the data it collects and what it uses the data for, and that it also doesn’t give users control over the way data is mixed and matched across different services.

    The DPAs want Google to give its users “the opportunity to choose when their data are combined, for instance with dedicated buttons in the services”, as well as a centralized opt-out for data collection. They also want Google to be much clearer with its users about the way it gathers and exploits their data, ideally “with three levels of detail to ensure that information complies with the requirements laid down in the [Data Protection] Directive and does not degrade the users’ experience”.

    So what happens if Google fails to satisfy the DPAs? As this is now being dealt with on a national basis, that depends on the DPA. In the case of the UK ICO, Google could in theory be hit with a monetary penalty of up to £500,000 ($758,000), but it could also be forced to change its processes and practices.

    It’s not hard to see the benefit of Google unifying its services, but the DPAs do have a point about the levels of information and control afforded to Google’s customers. It must surely be possible for both Google and the regulators to get their way, although the variable there is the ability of the users to understand and act on the information and controls they are given.

    In the new realpolitik required by the collision of big data and privacy, perhaps people will need to start getting used to this kind of granularity.

  • CloudSigma goes all-SSD to boost HPC performance in the public cloud

    Public clouds offer lots of flexibility, but not necessarily the sort of performance you need for handling big data. The Zurich-based provider CloudSigma has felt this pinch more than most, as it is a supplier to Europe’s performance-hungry science cloud, Helix Nebula, and now it says it has found the solution: going all-SSD. Well, that and rolling its own stack.

    CloudSigma, which operates out of both Switzerland (Zurich) and the U.S. (Las Vegas), was one of a handful of infrastructure-as-a-service (IaaS) providers that signed up last November for SolidFire’s all-SSD storage system. The result is now here: CloudSigma has ditched all its hard-disk drives and, as a result, it now feels confident enough to offer a service-level agreement (SLA) for performance, as well as uptime.

    What’s more, despite the fact that solid-state storage costs about eight times as much as hard-disk, CloudSigma hasn’t changed its pricing – its SSD-based utility service costs $0.14 per GB per month, same as the HDD-based service did. Customers can also pick up the SSD storage service unbundled from CPU and RAM if they so choose.

    HPC in the public cloud

    According to CloudSigma COO Bernino Lind, the shift to SSD is a major help when it comes to handling high-performance computing (HPC) workloads, such as those of Helix Nebula users CERN, the European Space Agency (ESA) and the European Molecular Biology Laboratory (EMBL):

    “They want to go to opex instead of capex, but the problem is there is no-one really who does public infrastructure-as-a-service which works well enough for HPC. There is contention — variable performance on compute power and, even worse, really variable performance on IOPS [Input/Output Operations Per Second]. When you have a lot of I/O operations, then you get all over the spectrum from having a couple of hundred to having 1,000 and it just goes up and down. It means that, once you run a large big data setup, you get iowaits and your entire stack normally just stops and waits.”

    Lind pointed out that, while aggregated spinning-disk setups will only allow up to 10,000 IOPS, one SSD will allow 100,000-1.5 million IOPS. That mitigates that particular contention problem. “There should be a law that public IaaS shouldn’t run on magnetic disks,” he said. “The customer buys something that works sometimes and doesn’t work other times – it shouldn’t be possible to sell something that has that as a quality.”
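
    To put rough numbers on why that matters, here is a minimal back-of-the-envelope sketch using the IOPS figures Lind cites; the workload size is invented purely for illustration:

    ```python
    # Back-of-the-envelope: pure I/O wait for a random-read-heavy job.
    # IOPS figures are the ones Lind quotes; the workload size is hypothetical.

    def io_wait_seconds(num_reads: int, iops: float) -> float:
        """Seconds spent waiting on storage for a given number of random reads."""
        return num_reads / iops

    reads = 10_000_000  # random reads in a hypothetical big data job

    for label, iops in [("spinning-disk array (~10,000 IOPS)", 10_000),
                        ("single SSD, low end (~100,000 IOPS)", 100_000),
                        ("single SSD, high end (~1,500,000 IOPS)", 1_500_000)]:
        print(f"{label}: {io_wait_seconds(reads, iops):,.0f}s of I/O wait")
    ```

    At the spinning-disk end of Lind’s range the same job spends well over 100 times longer stalled on storage, which is exactly the “iowait” effect he describes.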

    CloudSigma has also resolved another contention point around RAM, Lind claimed:

    “A modern CPU can ask for a lot of data because it’s fast and efficient, so it is possible to saturate and make contention on your memory bus. That has been solved with NUMA topology, which is like a multiplexer to get access to memory banks. You get asynchronous access, which means you don’t have contention on accessing the RAM.

    “However, public cloud service providers turn this off so the actual instance doesn’t have access to NUMA. We figured out a way to pass on the NUMA topology so, when you run really extensive compute jobs, you won’t hit a kind of contention when you want access to RAM. This is really important for big data workloads.”
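
    For readers who want to check whether a given instance actually exposes that NUMA topology, one quick sketch is to read the Linux sysfs entries from inside the guest (paths assume a standard Linux kernel; on a cloud that hides NUMA you will typically see only a single node):

    ```python
    # List the NUMA nodes visible to this (virtual) machine and the CPUs on each.
    import glob
    import os

    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "cpulist")) as f:
            cpus = f.read().strip()
        print(f"{os.path.basename(node)}: CPUs {cpus}")
    ```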

    In-house stack

    Speaking of things that public cloud providers tend to turn off, CloudSigma’s stack – everything apart from the underlying KVM hypervisor was written in-house – makes it possible to access all the instruction-set goodies that are built into modern processors, such as the AES encryption instruction set.

    Public clouds may run on a variety of physical hosts that encompass a range of CPU generations, only some of which will have certain instruction sets hard-coded onto the silicon. Providers will often turn off these instruction sets to make their platform homogeneous, but that means losing out on the performance benefits offered by hard-coding. According to Lind, CloudSigma’s stack allows a heterogeneous cloud based on allocation pools – say, one of older Intel chips and another of newer AMD 6380 chips – that customers can choose according to their performance needs.
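
    Whether an instance really exposes something like the AES instruction set is, again, easy to verify from inside a Linux guest; a minimal sketch:

    ```python
    # Check /proc/cpuinfo for hardware AES support (the "aes" flag).
    # If a provider masks the instruction set, the flag won't appear in the guest.
    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    print("AES-NI available:", "aes" in cpu_flags())
    ```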

    What does all this mean in practice? Lind cited the example of augmented-reality gaming outfit Ogmento, which recently used CloudSigma’s all-SSD setup to power a mobile, location-based version of a popular title. “They [said] all their I/O-heavy stuff, databases and so on, saw an 8x-12x performance increase,” he noted. “Their entire stack saw a 2x-4x performance increase. That means they need to use less compute power in order to run their system.”

    With the budgetary constraints faced by European scientists these days, it’s not hard to see how that same kind of effect could make a real difference in more serious applications too.

  • Liberty Global buys minority stake in Dutch ISP Ziggo for $810M

    Liberty Global, which became the largest ISP outside China when it bought Virgin Media in February, has made another investment in Europe, this time by buying a stake in Dutch cable operator Ziggo.

    Ziggo is the largest cable provider in the Netherlands. Liberty announced on Thursday that it had picked up 25.3 million shares in the ISP at a total value of €632.5 million ($810.2 million), giving it a 12.65 percent stake.

    This is not Liberty’s first Dutch investment. Indeed, it is also the owner of UPC — founded in the Netherlands — which provides broadband services across 10 European countries, namely the Netherlands, Austria, Ireland, Belgium, Germany, the Czech Republic, Poland, Slovakia, Switzerland and Romania. In the Netherlands, UPC is the second-largest cable provider. Liberty also holds a majority stake in Belgium’s Telenet and maintains a European content division called Chellomedia.

    According to Thursday’s statement:

    “Liberty Global considers the acquisition of this minority stake in Ziggo as an attractive opportunity to make a strategic investment in a market where it already enjoys a sizeable presence through its UPC Netherlands subsidiary. The Purchase Price is also financially attractive given the stock’s approximate 7.4 percent dividend yield, which is implied by Ziggo’s expectation that it will pay €370 million of dividends during 2013.”

  • Egyptian coastguard arrests divers over major broadband cable cut

    Most times a submarine internet cable gets cut, it’s the result of someone dropping anchor in the wrong place. In the case of the cut off the Egyptian coast, on which my colleague Om Malik reported yesterday, it seems that more deliberate action may have been involved.

    According to the Associated Press, on Wednesday the Egyptian coastguard detained three scuba divers in a dinghy near Alexandria, who were “cutting the undersea cable” of local telco Telecom Egypt. Egyptian news agency MENA identified the affected cable as SMW4: the same one whose cutting caused an internet slowdown in parts of Africa, the Middle East and Asia.

    MENA quoted officials as saying services would be “back 100 percent on Thursday morning” via the use of “alternative feeds”. Telecom Egypt will apparently bear the cost of the repairs, both of this disruption and a separate cable cut last Friday.

    Incidentally, the SMW4 cable (more properly known as South East Asia–Middle East–Western Europe 4 or SEA-ME-WE 4) was also involved in a very serious outage five years ago, which cut the capacity of the main Europe-Middle East connection by 75 percent. This one appears to have been less drastic.

  • Moxtra’s collaborative take on personal information management goes global

    After WebEx founder Subrah Iyar sold his company to Cisco five years ago, he spent some time helping with the transition then became an investor in companies such as Huddle. About a year ago his daughter suggested the idea of shared virtual binders for college students — and around the same time, Iyar got back together with some of his former WebEx colleagues, who were keen on the idea of helping people access all their personal information from mobile devices.

    The result, which launched in January, was Moxtra, a service for students and small businesses that combines collaboration capabilities with the ability to access data not only from cloud services such as Dropbox, but also from the user’s desktop computer.

    And now the company has made a big global push, releasing versions of the iOS app in 18 new languages, namely Chinese, Danish, Dutch, English, Finnish, French, German, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Swedish, Thai, Turkish and Vietnamese. This should help a lot more people join in the service’s collaboration aspect.

    Moxtra has some pretty clever features on that front, starting with the range of things that can be rolled into these “binders” and the sources from which they can be derived: they could be photos from the tablet’s camera, or documents from the user’s remote desktop, or audio from a cloud storage account. In true Evernote style, web clippings can also be added. Updates from any member of the collaborating team will show up in a Facebook-like activity stream.

    There’s also an ad-hoc meeting facility in there (again: WebEx guy) and the ability to share binders publicly or within a private group. However, the cleverest feature in my view is Moxtra Note, which lets you annotate files and binders with your voice (video is apparently also on the horizon, Iyar told me). You can then send out the annotated result to people who aren’t even themselves Moxtra users, who can then view it like a video clip.

    With Evernote continually adding new features, and with rivals such as Wunderlist in hot pursuit, this personal information management space is getting quite frisky. Moxtra’s collaborative take gives it an interesting new avenue to go down.

  • What you need to know about the world’s biggest DDoS attack

    The last week has seen probably the largest distributed denial-of-service (DDoS) attack ever. It’s being reported in fairly dramatic terms, with the New York Times and BBC talking about the internet getting jammed or slowed down.

    So what’s actually going on? Here’s a rundown of some key points:

    A what attack?

    DDoS attacks, as the “distributed” part suggests, involve large numbers of computers bombarding a target system with traffic, with the idea being to stop that system from functioning. A bunch of South Korean banks and broadcasters got temporarily crippled by such an attack a week ago, for example.

    Who got hit this time?

    The intended target appears to be Spamhaus, a European organization that maintains a blacklist of ISPs that supposedly host “spam gangs” and refuse to stop serving them as customers. Spamhaus is pretty resilient, as its own network is distributed across many countries, but the attack was still enough to knock its site offline on March 18.

    The reason was the attack’s sheer volume. At the time, it looked to be around 75Gbps of traffic — which is a lot — hammering Spamhaus’s servers. Cloudflare, the security firm that Spamhaus called for help, subsequently published a good explainer of what happened:

    “The largest source of attack traffic against Spamhaus came from DNS reflection… [This method has] become the source of the largest Layer 3 DDoS attacks we see (sometimes well exceeding 100Gbps). Open DNS resolvers are quickly becoming the scourge of the Internet and the size of these attacks will only continue to rise until all providers make a concerted effort to close them…

    “The basic technique of a DNS reflection attack is to send a request for a large DNS zone file with the source IP address spoofed to be the intended victim to a large number of open DNS resolvers. The resolvers then respond to the request, sending the large DNS zone answer to the intended victim. The attackers’ requests themselves are only a fraction of the size of the responses, meaning the attacker can effectively amplify their attack to many times the size of the bandwidth resources they themselves control.”
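
    The arithmetic behind that amplification is straightforward; here is an illustrative sketch, with packet sizes and attacker bandwidth picked as plausible round numbers rather than measurements from this attack:

    ```python
    # Illustrative DNS reflection amplification arithmetic (all figures hypothetical).
    query_bytes = 60        # small DNS request with a spoofed source address
    response_bytes = 3_000  # large DNS zone answer sent to the victim

    amplification = response_bytes / query_bytes   # ~50x
    attacker_gbps = 6                              # bandwidth the attackers control

    print(f"Amplification factor: ~{amplification:.0f}x")
    print(f"Traffic reaching the victim: ~{attacker_gbps * amplification:.0f} Gbps")
    ```

    In other words, a comparatively modest amount of attacker bandwidth, bounced off enough open resolvers, is all it takes to reach the triple-digit-gigabit figures discussed below.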

    Whodunnit?

    Spamhaus has no shortage of enemies, given its line of business. Spammers are a nasty lot, although there are in fact some serious arguments to be had around the weight carried by blacklists of this kind, and who controls them.

    However, all eyes seem to be on CyberBunker, a Dutch host that prides itself on hosting anything except terrorist material and child pornography (Wikileaks was a client). Spamhaus lists CyberBunker (or CB3ROB, as it is also known) as the world’s number-one offender when it comes to hosting spam gangs, and around 18 months ago it blacklisted the host’s ISP, A2B Internet. A2B responded by reporting Spamhaus to the Dutch police as DoS offenders – if you want to delve deeper into that nasty dispute, here are accounts of what happened from CyberBunker, A2B and Spamhaus.

    After this latest attack hit, the NYT got hold of one Sven Olaf Kamphuis, who claimed to represent the attackers. Kamphuis claimed CyberBunker was retaliating against Spamhaus in concert with Eastern European and Russian gangs, saying: “Nobody ever deputized Spamhaus to determine what goes and does not go on the internet… They worked themselves into that position by pretending to fight spam.”

    Spamhaus itself is reticent about naming CyberBunker as the culprit. I’ve approached CyberBunker for comment, and will add it in if and when I get it.

    What about this “slowing down the internet” stuff?

    Remember that 75Gbps number? Well, that was then and this is now. The BBC quoted Spamhaus CEO Steve Linford on Wednesday as saying the attack had peaked at 300Gbps. That would make it the biggest DDoS in history – or at least the biggest publicly disclosed DDoS.

    Professor Alan Woodward of the University of Surrey, one of the UK’s premier computer security experts, told me that the attack “seems to be orders of magnitude larger than anything seen before”:

    “In some places it’s been mounted, it has had some collateral damage, for example Netflix, although these are transient effects… The thing that got people talking is that it’s a DNS amplification attack. The point is, if you’re targeting something and [the target has] a 10Gbps switch, you only have to throw 11Gbps at it and you’ve pole-axed the system. If it is at 300Gbps, then potentially some of the main infrastructure is being affected, though I’m not sure how much it’s really affecting it.”

    Woodward used the analogy of a highway. Such an attack could briefly take out the highway ramps, he said, but the “main backbone of it is unlikely to be affected for any length of time”.

    The thing is, in terms of figuring out whether this attack really has slowed down chunks of the internet, there are other factors to consider. For example, in the last week we’ve also seen yet another submarine cable cut off Egypt, slowing down internet access in that region. Together, these factors could have a cumulative impact.

    “I don’t think there’s any immediate effect on the internet, but it is a wake-up call,” Woodward said. “If it was done really seriously in a wider attack, then it could affect [many users]. Trying to take down the whole internet is impractical, but you could start to decapitate sections of it.”

  • Here today, gone tomorrow: director of Nokia’s mapping platform joins SoundCloud

    The director of Nokia’s Here mapping platform, Sylvain Grande, is leaving the company to join SoundCloud, GigaOM can reveal.

    Grande, who will take on an as-yet-undisclosed role at the Berlin-based audio platform firm next week, has worked on Here Maps (formerly Nokia Maps) since the end of 2008. He ran the teams – also located in Berlin – that develop Here for Windows Phone, the web and other platforms. According to his LinkedIn profile, he was also “strongly involved in Nokia Maps’ key partnerships (from negotiation to delivery) with Yahoo!, Microsoft and others”.

    Nokia tells me Grande won’t have a direct replacement as such. He reported to Thom Brenner, Nokia’s vice president of applications, location and commerce, and various members of Brenner’s team will take over his responsibilities.

    The move comes at an interesting time for both SoundCloud and Nokia’s Here platform. SoundCloud has done a great job becoming the so-called YouTube of audio, but is only now starting to get serious about making money. Meanwhile, last month Nokia announced that it is opening up the Here platform to third-party developers, a shift that I reckon points to a strengthening of the platform’s significance for the company.

    Grande’s jump to SoundCloud isn’t unprecedented. Indeed, SoundCloud co-founder Eric Wahlforss used to work at gate5, the Berlin mapping company that, along with Navteq and Plazes, was acquired by Nokia to form the underpinnings of what is now Here. Sources tell me at least one other developer has also left Nokia’s Berlin operations for SoundCloud, so there appears to be some active courting going on.

  • Ericsson is angling for Microsoft’s Mediaroom IPTV package, report claims

    Microsoft may sell its Mediaroom IPTV package to Ericsson, according to a Bloomberg report on Wednesday.

    The report quotes unnamed “people with knowledge of the matter” as saying the two companies are in talks over the potential sale. Neither company would comment on the report when I asked them today, but the idea of the sale is perfectly plausible.

    Mediaroom is a customizable platform that is mostly bundled by telcos, who want to take on more traditional TV service providers with their broadband services — customers include AT&T, Deutsche Telekom and Telus. As it also comes with DVR capabilities, Mediaroom is in some ways a competitor to TiVo.

    According to Bloomberg’s source, Microsoft wants to focus more on delivering IPTV through its Xbox consoles. Swedish networking equipment vendor Ericsson, meanwhile, is all about servicing its telco customers, so the addition of Mediaroom to its lineup would make a lot of sense. The company could integrate the software with its existing IPTV managed services and systems integration portfolio – in February, Ericsson said it had signed 240 IPTV contracts with various service providers.

  • DataStax pushes NoSQL into Europe with new London-based subsidiary

    Last year was a good year for NoSQL outfit DataStax. The big data company’s customer base increased roughly tenfold to 270, including 20 Fortune 100 firms and names such as eBay, Netflix and Thomson Reuters. It also picked up a $25 million C round in October, with one of the intended uses of that funding being global expansion. Now it’s making good on that promise by opening a European subsidiary.

    The DataStax Enterprise 3 big data bundle fuses Hadoop with the Apache Cassandra database and Apache Solr enterprise search platform, creating what CEO Billy Bosworth claims is “the first viable alternative to Oracle since Oracle.” The big selling points here are linear scalability, operational simplicity and an emphasis on business continuity.
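
    For a flavour of what working with the Cassandra side of that bundle looks like, here is a minimal sketch using the DataStax Python driver; the contact point, keyspace and table names are placeholders, not anything specific to DataStax Enterprise 3:

    ```python
    # Minimal Cassandra round trip with the DataStax Python driver (placeholder names).
    import uuid
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])   # replace with real contact points
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)
    """)

    session.execute("INSERT INTO demo.events (id, payload) VALUES (%s, %s)",
                    (uuid.uuid4(), "hello from CQL"))

    for row in session.execute("SELECT id, payload FROM demo.events LIMIT 5"):
        print(row.id, row.payload)

    cluster.shutdown()
    ```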

    As the company has noticed that much of its new customer base is located in Europe, the Middle East and Africa (EMEA), its latest move makes sense: DataStax has opened up a London office, and it’s a full-on subsidiary rather than just a branch office.

    As Bosworth told me, the idea here is to be able to respond quickly to European market demands, which range from language variation to a different style of partnership:

    “Without any presence in EMEA, we ended up in 2012 with 10 percent of our customers located in the EMEA region – that was 100 percent inbound; we didn’t do any programs or outbound activity. We have Scoreloop in Germany, the mobile gaming platform, and Trademob, the mobile app platform. We have mobile carriers who are decommissioning Oracle because they have to have a multi-data-center solution, and a London-based bank chose DataStax over Oracle for their ecommerce platform.

    “In the UK, the business aspect of it is not that different from the U.S. … but as you move into the European continent, you do want to have some local language skills. And when you move into France and Spain and Italy, now you’re into a very boutique partner network. Those partners have very good relationships with their customers but are often not on the same scale as a big [systems integrator] like Accenture. The only way to really get close enough to that partner network is for us to be in the region as well.”

    With a portfolio as open-source-centric as DataStax’s is, Bosworth added, the company is also looking forward to hosting “a ton of meet-ups in the region” in the coming months.

  • Babbel raises $10M to step up language-learning fight against Rosetta Stone

    Lesson Nine, the German company behind online and app-based language-learning platform Babbel.com, has picked up a $10 million Series B funding round that it intends to use for international expansion.

    Babbel’s strongest market is in its home country, but the plan now is to push into the Americas, the rest of Europe and emerging markets. The funding round was led by Reed Elsevier Ventures (see disclosure), Nokia Growth Partners and existing investors, including the Investitionsbank Berlin (IBB) and Kizoo Technology Capital.

    “If we look at France or the UK or U.S., then our presence there is rather poor,” CEO Markus Witte told me. “What we need to get is a good reach and brand recognition.”

    Going large

    Marketing will be part of that, but so will cooperation with various other players, such as platform providers (the company recently collaborated with Microsoft on releasing apps for Windows 8 and Windows Phone), hardware manufacturers and media companies such as newspaper publishers, which may be diversifying and keen to get into the sale of language courses.

    The mention of hardware manufacturers is interesting. Witte wouldn’t get into details there, but the sorts of tie-ins that phone makers could offer would extend to preloading the Babbel app or promoting it in an app store channel.

    Just last week, Babbel also bought a small Silicon Valley language-learning app firm called PlaySay. Witte suggested to me that, while further acquisitions are possible, they’re not in the game plan right now. “In general we don’t feel the market is mature enough for a rollup strategy,” he noted.

    Competition

    So where does Babbel sit in the grand scheme of things? The big competitor is Rosetta Stone, a giant from the CD-ROM days that now offers its language courses by download, although the price still runs into the hundreds of dollars. Babbel charges monthly fees, starting at $7.45, and of course – being web-based — it hews to a continuous deployment model rather than the download-and-install model of old.

    Then there are more community-based options such as Busuu and LiveMocha (which appears to be going more for the B2B market these days), as well as innovative new rivals such as DuoLingo, which combines language lessons with the translation of real-life web content. In short, there’s a lot of competition out there.

    Still, Babbel has a fair amount of traction already. According to Tuesday’s release, the apps have been downloaded more than 8 million times and there are over 15 million users overall.

    Disclosure: Reed Elsevier Ventures is also an investor in the parent company of GigaOM.

  • Less digging and more speed: how Europe plans to get back on the broadband track

    Europe’s digital agenda chief, Neelie Kroes, may have lost all her funding for ensuring fast broadband coverage across the continent by 2020, but she’s still not giving up hope. Her latest push, unveiled on Tuesday, has two main strands: cutting some of the red tape around deploying 4G masts and antennas, and changing regulations around civil works.

    The European Commission reckons 80 percent of high-speed network deployment costs are to do with civil engineering, mostly digging up roads. For example, it may be that a road is being dug up anyway for the laying of new waterworks or electricity cables, and it would be a no-brainer to lay some fiber in there at the same time – however, in many European countries that kind of coordination is not in place, and that’s what Kroes wants to fix.

    Kroes maintains that this could take €40-60 billion ($51-77 billion) off the overall cost of deploying fiber-based broadband in Europe. She also wants rules brought in to ensure that newly-built or renovated buildings have the necessary equipment in order to receive fiber directly to the premises, and to mandate reasonably-priced open access conditions on infrastructure such as ducts and poles.

    On the mobile front, Kroes says permits for new masts and antennas should be granted or refused within six months. Her office is painting all of these changes as cuts to red tape – while this interpretation may be debatable, as some of the changes would actually involve new rules, the overall result would at least be one of more efficient bureaucracy.

    “Everyone deserves fast broadband. I want to burn the red tape that is stopping us from getting there,” Kroes said in a statement. “The European Commission wants to make it quicker and cheaper to get that broadband.”

    Kroes’s Digital Agenda office intends to see, by 2020, that everyone in Europe has access to at least 30Mbps broadband, and that half the EU is able to surf at 100Mbps or more. She recently threw €50 million in the direction of “5G” research, so that mobile can carry more of the load in meeting those goals.

  • Why the collision of big data and privacy will require a new realpolitik

    When it comes to protecting privacy in the digital age, anonymization is a terrifically important concept. In the context of the location data collected by so many mobile apps these days, it generally refers to the decoupling of the location data from identifiers such as the user’s name or phone number. Used in this way, anonymization is supposed to allow the collection of huge amounts of information for business purposes while minimizing the risks if, for example, someone were to hack the developer’s database.

    Except, according to research published in Scientific Reports on Monday, people’s day-to-day movement is usually so predictable that even anonymized location data can be linked to individuals with relative ease if correlated with a piece of outside information. Why? Because our movement patterns give us away.

    The paper, entitled Unique in the Crowd: The privacy bounds of human mobility, took an anonymized dataset from an unidentified mobile operator containing call information for around 1.5 million users over 14 months. The purpose of the study was to figure out how many data points — based on time and location — were needed to identify individual users. The answer, for 95 percent of the “anonymous” users logged in that database, was just four.

    From the paper:

    “We showed that the uniqueness of human mobility traces is high, thereby emphasizing the importance of the idiosyncrasy of human movements for individual privacy. Indeed, this uniqueness means that little outside information is needed to re-identify the trace of a targeted individual even in a sparse, large-scale, and coarse mobility dataset. Given the amount of information that can be inferred from mobility data, as well as the potentially large number of simply anonymized mobility datasets available, this is a growing concern.”
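
    To make the mechanism concrete, here is a toy sketch of the kind of check the researchers describe: given a few externally-observed (time, place) points about a target, count how many users in an “anonymized” trace are consistent with all of them. The data below is invented purely for illustration:

    ```python
    # Toy re-identification check against an "anonymized" mobility trace.
    # Real datasets have millions of rows; this one is invented for illustration.
    from collections import defaultdict

    # (pseudonymous_user_id, hour_of_week, cell_tower_id)
    trace = [
        ("u1", 9, "towerA"), ("u1", 13, "towerB"), ("u1", 20, "towerC"),
        ("u2", 9, "towerA"), ("u2", 13, "towerD"), ("u2", 21, "towerC"),
        ("u3", 9, "towerE"), ("u3", 14, "towerB"), ("u3", 20, "towerC"),
    ]

    points_by_user = defaultdict(set)
    for user, hour, tower in trace:
        points_by_user[user].add((hour, tower))

    # A couple of pieces of "outside information" about the target
    observed = {(9, "towerA"), (13, "towerB")}

    matches = [u for u, pts in points_by_user.items() if observed <= pts]
    print("Pseudonyms consistent with the observed points:", matches)  # -> ['u1']
    ```

    The paper’s striking result is that, with a real 1.5-million-user trace, four such points were enough to leave a single match for 95 percent of users.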

    Just because you’re paranoid…

    For those already worrying about the privacy-busting implications of mobile device use, this should come as no surprise. As CIA CTO Ira “Gus” Hunt stressed last week at GigaOM’s Structure:Data conference, mobility and security do not go hand-in-hand. You can be constantly tracked through your mobile device, even when it is switched off. What’s more, those sensors you’re pairing with your device make it ridiculously easy to identify you.

    From Hunt’s speech:

    “You guys know the Fitbit, right? It’s just a simple three-axis accelerometer. We like these things because they don’t have any – well, I won’t go into that [laughter]. What happens is, they discovered that just simply by looking at the data what they can find out is with pretty good accuracy what your gender is, whether you’re tall or you’re short, whether you’re heavy or light, but what’s really most intriguing is that you can be 100 percent guaranteed to be identified by simply your gait – how you walk.”
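
    To give a flavour of how much signal sits in a “simple three-axis accelerometer”, here is a minimal sketch that estimates a wearer’s step cadence from acceleration samples; synthetic data stands in for a real recording, and identifying an individual’s gait would of course need far more sophisticated features:

    ```python
    # Estimate step cadence from accelerometer magnitude via an FFT (synthetic data).
    import numpy as np

    fs = 50.0                      # sample rate in Hz
    t = np.arange(0, 20, 1 / fs)   # 20 seconds of "walking"
    cadence_hz = 1.8               # synthetic ground truth: ~1.8 steps per second

    # Gravity + periodic step impacts + sensor noise
    magnitude = (9.81 + 2.0 * np.sin(2 * np.pi * cadence_hz * t)
                 + 0.3 * np.random.randn(t.size))

    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    freqs = np.fft.rfftfreq(magnitude.size, d=1 / fs)
    print(f"Dominant step frequency: {freqs[spectrum.argmax()]:.2f} Hz")
    ```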

    One of the explicit purposes of Unique in the Crowd was to raise awareness. As the authors put it: “these findings represent fundamental constraints to an individual’s privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.”

    But this isn’t just about mobility; it’s also about the implications of our big data society. These are effectively two sides of the same coin – mobile devices make it easy to collect data, while big data capabilities make it increasingly trivial to take the resulting mass of supposedly anonymized data and tease out the kind of specificity that the anonymizers were trying to erase.

    This was precisely the sort of problem foreseen by Europe’s cybersecurity agency, ENISA, a few months back when evaluating the continent’s proposed “right to be forgotten”. If a citizen really wants all traces of their personal data removed from the web, ENISA pointed out, that would have to mean removing their data from anonymized datasets as well as from more obvious repositories such as social networks and search indices.

    As ENISA said at the time:

    “Removing forgotten information from all aggregated or derived forms may present a significant technical challenge. On the other hand, not removing such information from aggregated forms is risky, because it may be possible to infer the forgotten raw information by correlating different aggregated forms.”

    Shall we just give up now?

    The Unique in the Crowd authors stressed in a BBC interview that “we really don’t think that we should stop collecting or using this data — there’s way too much to gain for all of us — companies, scientists, and users.” So what can be done?

    Personally speaking, I have been writing about issues around data privacy for many years, and I still cannot see any easy solution to this problem. If it were simply a case of which side of the argument carries more weight, I would have no hesitation in siding with the privacy brigade: selling data to advertisers in order to fund that “free” app does not justify the creation of a surveillance society.

    But it’s just not that simple. That Fitbit is also trying to help you keep fit — the fact that it can identify you by accident doesn’t change that. Mobile operators’ datasets help keep their networks running. Location-based services don’t work without location. We even hope big data capabilities will help us fight diseases and socio-economic problems. And, most importantly, despite the fact that most people in the U.S. and European Union insist they want better data privacy, we see time and again that this desire doesn’t translate into action – people still give up their data without much consideration.

    What we need is a new realpolitik for data privacy. We are not going to stop all this data collection, so we need to develop workable guidelines for protecting people. Those developing data-centric products also have to start thinking responsibly – and so do the privacy brigade. Neither camp will entirely get its way: there will be greater regulation of data privacy, one way or another, but the masses will also not be rising up against the data barons anytime soon.

    There needs to be better regulation that works in practice – unlike Europe’s messy cookie law or the “right to be forgotten”. It may be that the restrictions will need to be on the use of data rather than its collection, as proposed in a recent World Economic Forum report. However, regulators tend not to be very proactive, particularly when the risks, while inevitable, remain mostly theoretical.

    I suspect the really useful regulation will come some way down the line, as a reactive measure. I just shudder to think what event will necessitate it.

  • Google puts spectrum database to use in Cape Town white space broadband trial

    Microsoft recently said it intended to trial white space technology in Kenya, and now Google is also experimenting with the wireless broadband system in Africa, this time in Cape Town, South Africa.

    White spaces are the gaps in between broadcast TV channels in the radio spectrum. These gaps are left empty as buffers, in order to avoid the TV channels bleeding into each other, but they also have the capacity to carry wireless broadband. And, because the spectrum we’re talking about is quite low-frequency, it is very good at carrying that wireless broadband over great distances – hence the technology’s promise for mostly rural areas that lack good fixed-line broadband.
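
    The propagation advantage of that low-frequency spectrum is easy to put rough numbers on using the standard free-space path loss formula; a minimal sketch (real-world terrain, antennas and interference complicate the picture considerably):

    ```python
    # Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    # Lower-frequency TV white space signals lose far less energy over distance.
    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    for freq_mhz in (600, 2400, 5800):   # roughly: TV white space vs. Wi-Fi bands
        print(f"{freq_mhz} MHz over 10 km: {fspl_db(10, freq_mhz):.1f} dB path loss")
    ```

    Every extra 6 dB of loss roughly halves the usable range, which is why sub-gigahertz white space spectrum is so attractive for covering sparsely-populated areas.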

    The Cape Town trial, launched on Monday, is experimenting with white spaces as a way of bringing connectivity to schools. The base stations are being sited on the Tygerberg hill, which is next to several heavily-populated areas (I’m from Cape Town, as it happens), so the trial should provide a good idea of how white space broadband interferes – or hopefully doesn’t – with licensed spectrum holders in the vicinity.

    Google’s involvement extends to sponsorship and the use of its newly-launched spectrum database, while others taking part include the Council for Scientific and Industrial Research (CSIR), the Tertiary Education and Research Network of South Africa (TENET) and of course the local telecoms regulator, ICASA. The equipment comes from Neul and Carlson Wireless.

    The trial will last six months. According to TENET’s explanation, each of the 10 schools involved will get a “dedicated 2.5Mbps service with failover to ADSL” – hardly impressive speeds, but this is still an experiment after all.

    According to Fortune Mgwili-Sibanda, Google’s public policy manager in South Africa, Google’s intention here is partly to drive regulatory change there. Like Wi-Fi spectrum, white space spectrum can be used license-free in the U.S. This may also happen in the UK, depending on what the regulator Ofcom decides. “We hope the results of the trial will drive similar regulatory developments in South Africa and other African countries,” Mgwili-Sibanda wrote in a blog post.

  • Evernote adds document search functionality to mark Deutsche Telekom partnership

    Evernote has struck a strategic partnership with Deutsche Telekom, which will see Telekom customers get one-year Evernote Premium subscriptions. And, to mark the occasion, Evernote has added a new Premium feature: document search.

    Evernote Premium’s existing features include higher upload limits, offline note availability, added note-sharing options and better note search. But now Premium subscribers are also able to search documents, spreadsheets and presentations that are attached to notes, as long as the documents in question were created in Microsoft Office, Apple iWork or OpenOffice.

    The initial result of the Telekom tie-in is not in itself unique by any means – many Orange and Taiwan Mobile customers, for example, also got a year’s free Premium access last year. However, the use of the term “strategic partnership”, along with the companies’ insistence that this Premium deal is only “the first part” of that arrangement, suggest more is yet to come.

    For Evernote, such deals mean more reach and entrenchment, which comes at an opportune time given Google’s entry into the same market with Keep and the rise of other rivals such as Wunderlist.

    For Deutsche Telekom, the partnership is a continuation of the telco’s strategy of bundling the premium versions of popular web brands as a way of making its own services more attractive: it does much the same with Spotify for one of its youth-oriented tariffs.

    “At Deutsche Telekom, we count on partnerships to pave the way for innovations,” Telekom business development SVP Heikki Makijarvi said in a statement. “Our goal is to offer highly innovative and unique services with the easiest access possible. The cooperation with Evernote is an excellent example of two companies combining their strengths for the benefit of our customers.”

  • Why Bitcoin poses an interesting ethical conundrum for journalists

    Bitcoin is having a wild ride right now. Partly due to the euro crisis, and partly due to a lot of press coverage, some people seem to be taking a keen interest in the crypto-currency — in the last week, its value in relation to the U.S. dollar has shot up by almost 60 percent.

    As I mentioned the last time I wrote about Bitcoin, I’m not an economist: I’m a technology journalist who is intrigued on a technical level by the theory and mechanics behind a distributed, algorithmically-generated “currency”, and that makes me want to track it and occasionally cover it when the tech angle is strong. Now, I don’t actually own any Bitcoins, but what if I did?

    If I were to own stock in a tech company (I don’t, by the way) and I found myself writing about said company, I would at the very least be obliged to put a disclosure into the article — in fact, I would probably just avoid writing about the firm altogether. However, bloggers and journalists don’t follow that convention with currencies. Imagine an American journalist covering the fortunes of the dollar, and putting in a disclaimer to say that all her savings are held in USD – it would seem daft.

    So I posed the question on Twitter earlier: “What are the ethics of writing about Bitcoin if you’ve bought some (I haven’t). Does it require stock-style disclosure?” Some quickly responded in the affirmative:

    Whereas others were more circumspect:

    There’s some serious debate going on about whether or not Bitcoin actually is a currency, but I (like the U.S. Treasury Department, it seems) feel that it is, albeit a unique one. Its uniqueness stems not only from the way in which its creation is automated, but also from its current volatility and, crucially, the fact that people don’t generally understand it very well. We all know what nationally-issued currencies like dollars and yen are, and we don’t need the concept explained to us every time we read an article about them. The situation with Bitcoin is very different.

    I strongly suspect Bitcoin’s meteoric rise in recent weeks is largely an echo chamber effect — coverage begets coverage — which puts those writing about it in an unusual position. We bloggers and journalists have an extraordinary amount of influence in people’s perceptions of Bitcoin and, as a result, the trajectory of its value. For that reason alone, I think any coverage from a writer who has bought into Bitcoin should come with a clear disclosure.

    That said, my colleague Tom Krazit also brought up an interesting tangential point in discussion, suggesting that writers covering Bitcoin may actually have an obligation to buy into it on a low level, so they can conduct a few transactions and basically have a clearer idea of what they’re talking about in their coverage.

    This is all clearly a new and unusual field to explore, so I’d be interested in hearing further thoughts on the subject.

    • Why the EU is unlikely to crack down on Apple over its carrier contracts

      Carriers have passed information to the European Commission’s antitrust chief about the contracts Apple makes them sign, according to a report in The New York Times. The Commission says it is looking into the information, although it stops short of calling them formal complaints, meaning it is not obliged to consider a formal investigation into the matter.

      The details of this information remain sketchy, although the report suggests that French carriers are concerned that Apple’s contracts hold back competition by setting excessively high quotas for iPhone sales, thereby making it difficult to assign marketing resources to rival smartphones. While no one is forcing the operators to sell the iPhone, they really want to do so because customers want it, and that means agreeing to Apple’s demands. The terms of such contracts are always secret.

      Here’s a statement sent out on Friday by Antoine Colombani, spokesman for Competition Commissioner Joaquín Almunia:

      “The markets for smartphones and tablets are very dynamic, innovative and fast-growing. Samsung’s growing market position and the success of Google’s Android platform are good reasons to believe that competition is strong in the markets for smartphones and tablets.

      “The Commission has been made aware of Apple’s distribution practices for iPhones and iPads. There have been no formal complaints, though. The Commission is currently looking at the situation and, more generally, is actively monitoring market developments. We will intervene if there are indications of anticompetitive behaviour to the detriment of consumers.”

      I find it hard to believe this will come to anything. As the statement suggests, iOS devices are not the only game in town — in fact, the iPhone only has around 25 percent share of the smartphone market across the five biggest European economies. Apple certainly has a lot of weight to throw around in the mobile market, but nowhere near enough as to constitute a monopoly.

      A good (though not perfect) point of comparison here would be Intel, which found itself the subject of a $1.45 billion EU fine back in 2009 for abuse of its dominant position. Intel, which utterly dominated the x86 processor market then as it still does now, gave secret kickbacks to computer manufacturers and retailers for not stocking AMD-based products. It even paid manufacturers to delay or cancel the launch of non-Intel products.

      That was a clear-cut case of illegal practices, hurting consumers by limiting their choices. It’s hard, if not impossible, to argue that consumers in the EU do not have easy access to non-Apple mobile devices. In the Intel case, those manufacturers and retailers did not seriously have the option of telling the chipmaker to show itself the door. In this Apple business, the anonymous carriers in question could likely have done what U.S. Cellular did, and just not stock the iPhone. There are plenty of alternatives.

      I suspect that the carriers in this situation are simply trying to weaken Apple’s hand in contract negotiations, and that the Commission is highly unlikely to step in and help.

    • Watch out, big CDNs: OnApp and its federation are coming for your resellers

      OnApp is quietly amassing extensive cloud resources around the world, and without having to build out its own infrastructure. OnApp’s game involves federating the resources of hosting providers and telcos who want to get into the cloud, and right now it’s making a particular push on the content delivery network (CDN) front, having recently launched its own CDN.net brand in order to sell capacity to web businesses.

      Now, CDN.net can’t quite rival the likes of Akamai, Limelight or Level3 in terms of points of presence (PoPs): OnApp’s federation includes just over 150 PoPs, whereas Akamai, for example, has around 1,200 (also, CDN.net itself has launched with just 30 PoPs, although it says more can be added according to demand). However, its services are flexible and available on a pay-per-use basis, allowing it to target smaller businesses rather than blue-chip customers.

      And now London-based OnApp is taking on the big CDN players by gunning for their resellers.

      Business-in-a-box

      It’s doing so by essentially giving those resellers a CDN business-in-a-box. OnApp has “open-sourced” the tools used to build CDN.net, so now service providers – whether or not they are currently in OnApp’s federation – can roll out their own rival. The package contains a customer portal, configuration and reporting tools and billing functionality, and it will be available to providers for a usage-derived monthly fee with no long-term contract and no minimum bandwidth commitments.

      According to OnApp Federation MD Stuart Simms, flexibility is again the key here, as service providers can use the ready-made storefront to sell specialized CDN services. What’s more, he said, OnApp promises greater profitability than the Akamais and Level3s of this world can offer:

      “The OnApp federation is a diverse community of service providers, and now there’s an easy way to tap into that rich resource, and create unique CDN services based on whatever attributes are important to you and your customers — location, speed, quality and more. You can build CDNs across a handful of locations, or across the world; offer more attractive pricing for end users; and still get more margin than you would from legacy vendors, who have to recoup the cost of the entire network.”

      Remember that CDN is only part of OnApp’s strategy: storage is another big piece, and compute capacity is coming up too. So this “instant CDN” package, as OnApp calls it, is a model for other virtual service provider packages that will come out later this year.

      New entrants

      The key here is that these packages are no longer restricted to those service providers who were already offering up their data center resources to be sliced and diced in OnApp’s federation. Now those resources can be exploited by entirely virtual service providers who have no physical infrastructure of their own to offer, but who are willing to pay those fees to OnApp and, in turn, the real infrastructure owners who are making this all possible.

      Back to Simms:

      “Opening up the federation is the next phase in its growth. It’s great news for our customers, because it’ll drive more traffic for the companies supplying the federation. It’s great news for other service providers, who can take advantage of our CDN service alongside their existing services.

      “We’ll see other companies using the network too — technology companies who have struggled with the capital expense of building their own network, who can now focus on innovation. We’ve created a launch pad and channel for business applications, games, social media apps, app stores and all kinds of innovative new services that need global performance and reach, out of the box.”

      OnApp said this week that it has almost 600 service provider customers in 68 countries, who are all running clouds based on the company’s orchestration software (which was how OnApp first created its federation). The firm claims this makes it “the most widely used public cloud platform on the market today”.

      Of course, this scale doesn’t translate directly into PoPs, and those contemplating reselling OnApp’s CDN are still going to get more reach from Akamai, Limelight et al. However, for a lot of providers – both real and wannabe virtual – OnApp’s terms may prove mightily tempting.

    • Why Nuance sees the semantic web as a key to smarter natural language interfaces

      Natural language user interfaces – think Siri – promise to revolutionize everything from call-centers to hospitals, but what about their data sources? How do the companies and organizations deploying them ensure real choice, rather than just assuming a certain one or two sources will do the job? These are questions that came up today at GigaOM’s Structure:Data conference in New York and, according to Nuance Communications CTO Vlad Sejnoha, the answer may lie in the semantic web.

      The semantic web, a term coined by Tim Berners-Lee, is an initiative being developed under the auspices of the World Wide Web Consortium (W3C). It aims, through the use of standardized tags and formats, to build a framework where content on the web has machine-understandable meaning, rather than simply being searchable by keyword.
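
      To give a concrete flavour of those standardized formats, here is a minimal sketch using the rdflib library: it publishes a tiny fragment of machine-readable data and then queries it with SPARQL. The vocabulary and values are illustrative, not anything Nuance actually uses:

      ```python
      # A tiny semantic web example: describe a resource in RDF, then query it.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, FOAF

      EX = Namespace("http://example.org/")
      g = Graph()

      g.add((EX.restaurant1, RDF.type, EX.Restaurant))
      g.add((EX.restaurant1, FOAF.name, Literal("Luigi's")))
      g.add((EX.restaurant1, EX.cuisine, Literal("Italian")))

      # A natural-language front end could translate "find Italian restaurants"
      # into a structured query like this, rather than hardwiring each data source.
      results = g.query("""
          SELECT ?name WHERE {
              ?r a <http://example.org/Restaurant> ;
                 <http://example.org/cuisine> "Italian" ;
                 <http://xmlns.com/foaf/0.1/name> ?name .
          }
      """)
      for row in results:
          print(row.name)
      ```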

      Asked by GigaOM Research analyst George Gilbert how best to tap a variety of information repositories without having to “hardwire” the language interface into each one separately, Sejnoha said the answer lies in an open approach:

      “The conversation stack… has to interact with content sources, other services, applications and devices. Today, integrating those interfaces into those resources is a one-off job. Some applications on the market make choices on behalf of the user, and this brings important questions about openness. I do think the promise of the semantic web remains very important there.

      “I hope we get to the point where people who have important services or content on the web publish them in standard formats that we can connect to and [use] almost automatically… I am hopeful that this will gain greater support in the industry as these folks realize without something like this the interface might become opaque to new entrants.”

      However, Gilbert demurred, suggesting that incumbents have “no incentive to make a common interface.”

      Check out the rest of our Structure:Data 2013 coverage here.

