Author: David Meyer

  • O2 UK moves away from handset subsidies with decoupled Refresh tariffs

    Much as various carriers have done in other European countries, and as T-Mobile has done in the U.S., O2 has become the first in the UK to decouple the cost of the handset from its service contracts. The operator has done so through a new option called O2 Refresh, which will launch this coming Tuesday.

    It works like this: customers sign up for separate phone and airtime plans at the same time, with the duration of the plans being 24 months. If the customer wants to upgrade their phone within that period, they can simply pay off the remainder of the phone plan, with no further penalty. O2 will then unlock their phone, unless the device is exclusive to O2’s network.

    Similarly, if customers hit the end of the two years and don’t want a new handset, they only have to pay the monthly airtime fee from that point on. This is a lot fairer to the user than the traditional system, where the benefit to the operator of not having to provide an expensive new device after two years isn’t necessarily passed on to the customer in full.

    Bye-bye “free”

    By adopting this sort of interest-free financing scheme, O2 is effectively moving away from the traditional, opaque model of subsidising the handset up-front, then burying the true cost of the device in monthly contract payments. That’s not to say customers will pay more under the Refresh scheme — it just means they will more accurately see what they’re paying for, and will no longer be under the illusion that the handset they’re buying is “free”.

    In the words of an O2 spokesman today:

    “This isn’t about subsidies. Our main reason for doing this is to give people more freedom to get the latest phone whenever they want without paying any extra charges — our customers are telling us they don’t want to be tied to their current phone for up to two years.

    “By allowing customers to pay for their phone and tariff in this way, we are also able to more responsibly manage our costs, which will mean a better service for our customers and greater investment in future products and services.”

    That is indeed a problem carriers have these days with the subsidized model: people are increasingly adopting smartphones, which are complex and therefore quite expensive. By encouraging people to upgrade more often, O2 is making it likely that customers will pay off their phones more quickly than before. It is surely no coincidence that the Refresh focus is on high-end devices such as the iPhone 5 and Samsung Galaxy S4.

    The move also handily pre-empts Ofcom’s probable introduction of new rules for carriers around price hikes and letting customers leave early. The telecoms regulator is annoyed with the operators for raising their prices, then penalizing people who subsequently want to end their contracts before the term is up.

    The Refresh airtime plans start at £12 ($18.43) a month, which will get you 600 minutes of call time, unlimited texts and 750MB of data. The phone plan pricing depends on the phone, obviously, but O2 said by way of example that the HTC One would cost £49.99 up-front, then £20 a month. The carrier said customers would end up paying the same amount as they would on a combined tariff.
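
    To make the numbers concrete, here is a back-of-the-envelope sketch in Python using O2’s example figures from above; the month-12 upgrade point is a hypothetical chosen for illustration, not something O2 has specified:

      TERM_MONTHS = 24
      airtime_pm = 12.00     # entry-level airtime plan, per month
      phone_upfront = 49.99  # HTC One example: up-front payment
      phone_pm = 20.00       # HTC One example: monthly phone plan

      # Total outlay if the customer sees out the full 24 months
      full_term = phone_upfront + TERM_MONTHS * (phone_pm + airtime_pm)
      print(f"Full-term cost: £{full_term:.2f}")  # £817.99

      # Upgrading early means settling the remaining phone-plan
      # instalments, and nothing more
      months_elapsed = 12
      settlement = (TERM_MONTHS - months_elapsed) * phone_pm
      print(f"Settlement to upgrade now: £{settlement:.2f}")  # £240.00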

    So how will this pan out for O2 and its customers? For that, we can turn to an admirably frank post on O2’s The Lab blog from last month, written in response to T-Mobile USA’s similar move, and bearing in mind the experience of O2 parent Telefonica in Spain, where subscriber numbers subsequently increased:

    “Could it work in the UK? Would customers be willing to pay up-front for their handsets? Would customers rather take out a loan from their mobile network and pay for the handset separately? Would customers compare prices across networks and simply choose the one which is cheapest today rather than looking at the [total cost of ownership]?

    “I think moving to removing subsidies is great for consumers. It lowers the price they pay and means that they’re not beholden to an evil operator gouging them for two years. And, if at any point the customer wants the latest phone – they don’t have to go through a complicated upgrade procedure – just slap down the cash.

    “For the operator, I think it’s also good news. It forces them to concentrate on customer service. They don’t need to extend large loans to the customer, nor do they need to compete on up-front cost. The downside, of course, is that the monthly revenue generated by the customer could be lower.”


  • Secure cloud storage outfit Tresorit posts $10K hacker bounty

    Popular cloud storage services such as Dropbox and Google Drive are terrifically easy to use, but only boast middling security (hence the existence of third-party client-side encryption services such as BoxCryptor). However, there are many rivals out there that offer much stronger client-side security and anonymity — LaCie’s Wuala, SpiderOak and Kim Dotcom’s Mega all spring to mind.

    So, if you’re an upstart in this business with serious security chops, how do you set yourself apart? Do what Tresorit is doing, and offer a bounty to any hacker who can breach your cryptography.

    Tresorit was founded in 2011, received $1.7 million in funding last year from Euroventures and nine private investors, and is now freshly out of closed beta, with its storage being based on Azure. The firm has strong security cred as a spinoff of the Hungarian security outfit CrySyS Lab, which was responsible for identifying the notorious Duqu worm. And Tresorit is so sure of itself that, from April 15th, it will offer a $10,000 reward to any hacker who busts its cryptography.

    “We’re positioning ourselves as an enterprise or small and medium business solution, but right now we’re targeting consumers too, because we need to reach credibility,” CEO István Lám explained to me. “That’s why we’re starting this campaign where we offer $10,000 to the first one who can hack this encryption.”

    Crypto challenge

    One issue with some Tresorit rivals, such as Mega and Wuala, is that they use something called “convergent encryption”, which essentially means they can deduplicate the stuff their customers are storing on their systems. In the case of Mega, whose customers frequently use the service for storing movie files (entirely legally, of course), this helps avoid a situation where the same multi-gigabyte file is stored thousands of times, thereby keeping down Mega’s costs.

    Some security specialists are wary of this approach because they fear it can undermine user privacy.

    “If 10,000 people upload the same movie to Mega, they only have to store one file,” Lám said. “That leaks information about who has the same file – you can track one [piece of] information from another. So, from the very beginning, we dropped the idea of convergent crypto because that’s simply unacceptable for us.”
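
    To see what Lám means, note that convergent encryption derives the encryption key from the file’s own content, so identical plaintexts always produce identical ciphertexts. A minimal Python sketch of the idea (standard library only; the “storage ID” standing in for a real ciphertext is a simplification):

      import hashlib

      def convergent_key(plaintext: bytes) -> bytes:
          # The key comes from the content itself, so anyone holding
          # the same file derives the same key
          return hashlib.sha256(plaintext).digest()

      def storage_id(plaintext: bytes) -> str:
          # Deterministic encryption means a deterministic ID for the
          # stored blob; here a hash of the key stands in for it
          return hashlib.sha256(convergent_key(plaintext)).hexdigest()

      movie = b"...the same multi-gigabyte file..."

      # The provider stores one blob instead of two (deduplication), but
      # it also learns that two users hold identical files (the leak)
      assert storage_id(movie) == storage_id(movie)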

    Of course, Tresorit is ultimately going for a somewhat different user base; one that demands secrecy but that isn’t necessarily going to be uploading dozens of bulky movies. As such, while Mega famously offers 50GB of free storage, Tresorit’s free option maxes out at 5GB…

    … Although, if you’re reading this before 23:59 GMT on May 20th 2013, you can get a free 50GB Tresorit account for life by signing up here. Just thought I’d mention that. Anyway…

    In terms of business-friendly features, Tresorit uses public key cryptography to establish keys between people, so users can share access to files without sharing passwords. There are no master keys for bosses, but Lám said the company will soon introduce a “threshold cryptography” system, where at least two managers will need to be present in order to decrypt and open an employee’s account.
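
    Lám didn’t spell out the mechanics, but the simplest form of the threshold idea is an n-of-n secret split, in which an account key is divided so that no single share reveals anything on its own. A toy two-manager sketch in Python (real threshold schemes, such as Shamir’s secret sharing, also allow k-of-n arrangements):

      import secrets

      def split_2_of_2(key: bytes) -> tuple[bytes, bytes]:
          # One share is pure randomness; the other is the key XORed
          # with it, so either share alone says nothing about the key
          share_a = secrets.token_bytes(len(key))
          share_b = bytes(k ^ a for k, a in zip(key, share_a))
          return share_a, share_b

      def recombine(share_a: bytes, share_b: bytes) -> bytes:
          return bytes(a ^ b for a, b in zip(share_a, share_b))

      account_key = secrets.token_bytes(32)
      manager1, manager2 = split_2_of_2(account_key)

      # Both managers must be present to recover the employee's key
      assert recombine(manager1, manager2) == account_key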

    Right now the client is only available for Windows, but OS X, iOS and Android versions will arrive before June, as will the first paid-for premium Tresorit accounts. Lám declined to reveal the pricing or capacity for these accounts ahead of that launch.


  • Duedil rakes in $5M for its open data-powered due diligence services

    Want to see how open data translates into new business models? Look to the London-based fintech firm Duedil, which has just completed a $5 million Series A round of funding.

    Duedil is an aggregator and data visualization outfit that provides due diligence services through a freemium model. It focuses on private companies and it gets its information about them largely thanks to the U.K.’s open data policies, although it also buys data from sources such as Companies House and the Ministry of Justice (for county court judgement debt information). Quite neatly, users can also sync the service with their LinkedIn accounts, so they can perform due diligence on their contacts.

    The firm focuses on making this range of complex data — asset value, intellectual property records, turnover, litigation and health-and-safety violation records, director information — easier to interpret, and its customers range from corporate lawyers and venture capitalists to small businesses checking up on their suppliers. Seventy-five of the FTSE 100 companies are clients. Duedil has also notably been used in some of The Guardian‘s (see disclosure) data journalism efforts, such as its recent exposé on companies (and individuals) that ferret their money away in tax havens.

    The service is generally free to use, although company document downloads and credit reports need to be purchased through a pay-as-you-go system. In a couple of weeks, though, Duedil will launch subscription packages that come with varying quotas of these each month.

    The funding round was led by Notion Capital and Oak Investment Partners, with others such as Passion Capital and Spotify investor Shakil Khan also taking part. According to Duedil CEO Damian Kimmelman, the investment will be used to expand into new regions and hire more engineers and data scientists.

    Powered by open data

    It seems a happy coincidence that Duedil is planning to move into other European countries just as the governments of those countries have agreed to adopt open data policies.

    “The value lies in linking datasets from data providers, governments and businesses themselves into one place,” Duedil commercial officer Andrew Connolly told me. “There is quite a progressive culture here [in the UK] in terms of open data. European data is quite exciting for us – we’ll take data from wherever we can get it.”

    Duedil isn’t the only British company working in this field. Another interesting example is OpenCorporates, which is pulling in data from all over the world, but the difference there is that OpenCorporates is entirely focused on open data, whereas Duedil works with a mix of open/free and closed/paid-for, both in terms of the data coming in and the services going out.

    “They’ve been a great team in terms of making data easily accessible, but they have slightly different source feed,” Connolly said. “They want to give everything away for free. For us, we believe in open data but not all data should be open. A director might not be comfortable with his home address being open to the public.”

    As for expansion outside Europe, Connolly certainly sounded wary of the U.S. market, which he described as a “quagmire” due to the wide variety of jurisdictional regulations around private company data.

    Disclosure: The Guardian is an investor in Giga Omni Media, which publishes GigaOM.


  • Garbage in, data out: Enevo gets funding for its smart waste services

    The internet of things — the scenario where everyday items are equipped with sensors and pumping out data — still feels largely theoretical. However, it’s bleeding into reality, often in rather prosaic ways. And you don’t get much more prosaic than garbage collection.

    One interesting company dabbling in this field, Helsinki-based Enevo, just picked up €2 million ($2.6 million) in funding from Finnish Industry Investment and Lifeline Ventures. The money will be used to help Enevo push its cleantech services across Europe and into North America.

    Enevo isn’t the only company working on smarter waste management, but rivals (such as BigBelly) are largely trying to sell more intelligent bins. Enevo, on the other hand, is a services firm that provides sensor units to waste management companies for free. The unit (pictured in the hands of CEO Fredrik Kekäläinen) measures variables such as volume and temperature within the bin, then sends the data back to Enevo via GPRS. The company then uses that data to dynamically optimize collection intervals and routes for its customers.

    The intended result? Fewer overflowing bins and fewer pointless journeys to empty bins that are barely full. In the trillion-dollar industry that is waste management (according to Lifeline), that adds up to a pretty big deal. According to Kekäläinen, Enevo’s existing 10-or-so customers are already saving 30 percent on direct waste logistics costs.

    Kekäläinen told me on Wednesday that Enevo has started mass production of its sensor units (using a Finnish manufacturer “to make sure it’s really high quality”) and is recruiting salespeople across Europe. The company has deals with municipalities across Scandinavia and is setting up a pilot project in Canada. It’s also in talks with the big bin manufacturers to try to get the sensor units integrated into their products.

    All in all, it’s a simple idea that can produce tangible results – cost savings for firms and greater efficiency and environment-friendliness for communities. As such, it’s a pretty good example of what we’re hoping to see come out of the much-hyped internet of things.


  • European governments agree to open up public data

    Member states of the European Union have endorsed new rules for opening up publicly-funded data to developers, businesses and citizens.

    The 27 countries agreed on the rule change on Wednesday, according to the European Commission, which is behind the proposed revision of a 2003 directive on public sector information. If the European Parliament adds its stamp of approval, national governments will then transpose the changes into their laws sometime in the next 18 months or so.

    According to Neelie Kroes, the digital agenda commissioner, the European Parliament will sign off on the change soon.

    The change will give developers, businesses and citizens the right to get their hands on public data at low cost or for free. They will also be able to use data from museums, libraries and archives for the first time. Public sector bodies will only be able to charge marginal costs for sharing their data, and will also have to be more transparent about their charges. They will also be encouraged to make their data available in machine-readable formats.

    According to a webpage setting out the Commission’s hopes on the matter, the data in question will cover digital maps, weather data and road congestion data, as well as information on companies and court proceedings:

    “Most of Public Sector Information raw data could be re-used or integrated into new products and services, which we use on a daily basis, such as car navigation systems, smartphone apps with weather forecasts, information services for companies integrating information from various sources, such as statistical data with economic forecasts, company register data and other publicly available information.”

    Some European countries, such as the UK, already have established open data initiatives (and so, of course, does the U.S.).

    Sources close to the negotiations tell me that agreement was reached on the basis that cultural institutions in particular could charge a bit more than originally planned for handing out their data. Some governments had apparently been hoping to be able to charge a lot more for their institutions’ data, but were convinced that they would get more money in the form of taxation from the businesses that would spring up around open data, the sources noted.


  • Viber for BlackBerry finally finds its voice

    The Skype and WhatsApp competitor Viber has at last released a beta version of its BlackBerry OS app that features voice calling: Viber for BlackBerry 2.4.

    In a statement on Wednesday, the Cyprus-based startup said the new version of its BlackBerry app, which was previewed in January, included free calls to other Viber users for those on BlackBerry OS5 and OS7, as well as “performance improvements” for OS5 and various other bug fixes. However, BlackBerry 10 – the make-or-break latest version of the platform – is not supported.

    “BlackBerry is one of the most important markets for us and represents our third largest user base,” Viber CEO Talmon Marco said. “We are thrilled to bring this community free voice calling, letting them communicate freely with all of their important contacts across multiple platforms.”

    The release means that, over two years after Viber first hit the scene, the only remaining major platforms on which Viber is a voiceless, text-and-photo-only service are Nokia Series 40 and Samsung’s Bada OS. The omission of BlackBerry 10 support isn’t as crazy as it might sound — most BlackBerry users will still be on older versions of the platform, and the company is still launching new BlackBerry OS7 devices in emerging markets.


  • Cloud storage security service BoxCryptor previews business-friendly new version

    BoxCryptor, the German startup that provides added security for information held in cloud storage vaults such as Dropbox, SkyDrive and Google Drive, is previewing a new version of its client-side encryption tool.

    The new version of BoxCryptor is more explicitly aimed at teams. It includes new features such as the ability to share file access permissions without sharing private passwords, and to share files with entire teams with a single click. BoxCryptor 2 also does away with the original version’s mapping of files to folders, a system that meant every folder and project required the setting-up of a new BoxCryptor drive – now, there’s just one drive.

    Another compliance-friendly new feature for businesses is the option to have a master key covering everything encrypted by employees, just in case someone leaves suddenly or goes rogue.

    The update comes as BoxCryptor finds itself up against an increasing range of rivals such as Viivo and DigitalQuick. CipherCloud closed a $30 million investment round last December, and Symantec is also now in the Dropbox encryption game. In short, everyone has clicked that consumer cloud services will be used in business, like it or not, and that their consumer-grade security leaves a market opportunity to serve more serious users.

    According to BoxCryptor CEO Andrea Wittek, the full version of BoxCryptor 2 will come out sometime this quarter. The technical preview, launched on Tuesday, can be downloaded for Windows and Android.


  • Is it a good thing that Elsevier bought Mendeley?

    When rumors sprang up in January about the scientific journal publisher Elsevier buying British reference manager and academic social network Mendeley, the reaction was negative in some quarters. Elsevier has a bad reputation among many academics over the amount it charges for access to its journals, which are generally populated by taxpayer-funded research. Mendeley’s community is all about open collaboration, so the takeover rumors inspired a #mendelete Twitter campaign.

    So, now that the takeover has come to pass (the Financial Times reports the deal value as £45 million ($69 million), or around £20 per user), what fallout should we expect?

    Cleaner data

    According to Mendeley CEO Victor Henning, everything should be just fine. Mendeley will “stay an independent site” with plans to expand its 50-strong team to 80 within the next 18 months, he told me, adding that the deal would give both parties better data:

    “All those resources will help us do two things. One is more integration — the biggest gap in our product offering is that it’s too difficult for users to get to full text content. When people found something on Mendeley it was usually just the metadata with a link to the publisher’s website. Elsevier publishes around 20pc of the world’s scientific output and has deals with other publishers for its Scopus database. We’ll be working to integrate Mendeley with Scopus and ScienceDirect to make it easier for our users to enrich and clean up the content we already have — our content is crowdsourced. Elsevier has a lot of clean structured data we can use to clean it up, and our data can enrich Elsevier’s because we have rich social information.

    “We can now also take a more long-term perspective about monetization versus feature development and user growth. As an independent startup we were always trying to break even as soon as possible, and were under pressure to monetize new features. Now we can pick up certain things for the roadmap, for example hiring a fully-fledged mobile team. There will be a new iOS app soon, and we’re going to start building an Android app from scratch.”

    Henning added that Elsevier’s existing 17 million author profiles would also have a positive effect. “Now, once we’re integrated, when you sign up to Mendeley we will immediately be able to present you with your profile to claim,” he said. “It will make it easier for users to get started.”

    But what about all that criticism of Elsevier? There, Henning insisted there was little risk of users taking flight:

    “People have criticized Elsevier for things they’ve done in the past but, particularly last year when they were subjected to criticism for their stance on open access publishing, they’ve taken that feedback to heart. They’ve doubled the number of open access journals that they publish. They do support open access publishing and they will expand on it in the future. Another move they made last year is, people were saying they’d like to text-mine content that you have, and they opened up to the community about that.”

    Elsevier, meanwhile, also said in a blog post that Mendeley was “open, social and collaborative, and it is important to [Elsevier] that it retains all of those traits”.

    “Elsevier has all the power”

    However, not everyone is sounding so positive. One notable perspective is that of Jason Hoyt, Mendeley’s former R&D head and, since leaving the company, founder of open access publisher PeerJ.

    Hoyt said in a post on Tuesday that Elsevier had previously hampered or outright stymied open access projects at Mendeley, including the service’s PDF preview functionality and a scheme to automatically put papers filed with Mendeley into the open access archive of the author’s institution:

    “If one is honest, from a business perspective the Mendeley founders did the right thing to comply with Elsevier’s demands. My personal passions about Open Access hindered that, so no surprise it didn’t work out for more than a few years… I think that Mendeley as it stands today will continue to be useful even at Elsevier. That said, I think it will be challenging for Mendeley to become a truly transformative tool in science, which is what had originally convinced me to move from San Francisco to London four years ago.”

    Meanwhile, open access blogger Mike Taylor noted that “Elsevier has all the power in the relationship” with Mendeley:

    “So Mendeley say things like ‘very little will change for you as a Mendeley user’ and ‘we will continue to support standard and open data formats’, and I’m sure they believe them. But it’s dependent on the whim of Elsevier. The moment it becomes inconvenient or financially disadvantageous for them to do these things, they’ll stop.”

    It will be worth keeping an eye on the user numbers of Mendeley and its main academic community rivals (such as ResearchGate) and reference management rivals (such as Zotero) to see which way the scholarly users themselves feel the wind is blowing.

    Disclosure: Reed Elsevier, the parent company of science publisher Elsevier, is an investor in GigaOmniMedia, the company that publishes GigaOM.


  • Full speed ahead, as EE reveals 4G boost for the summer

    The carrier EE has a head-start on 4G in the UK, but a recently-concluded spectrum auction will allow its rivals to join the party this summer. So, ahead of that, EE has announced that it will be doubling its LTE speeds in a few months’ time, reaching a “headline” speed of 80Mbps (i.e. the speed you will get if you’re dangling from a mobile mast with one hand) and a real-world average of over 20Mbps.

    EE gets to do this because it is using 1800MHz spectrum – formerly dedicated to 2G services – for its 4G. As it is the result of a merger of France Telecom’s and Deutsche Telekom’s UK operators, Orange and T-Mobile, it has an unusually large amount of this spectrum to play with (even after giving some to rival Three), and it will double the speeds by simply doubling the amount of 1800MHz spectrum it dedicates to LTE, from 10MHz to 20MHz.

    As well as warding off the threat of rival LTE carriers, the speed boost may also convince some consumers who see LTE as not that much faster than HSPA+, and therefore not worth the switch.

    Here’s what EE chief Olaf Swantee said in a statement:

    “We are ensuring that the UK remains at the forefront of the digital revolution. Having already pioneered 4G here, we’re now advancing the country’s infrastructure again with an even faster, even higher-capacity network, and at no extra cost to our customers.

    “Since we launched 4G, we’ve seen a huge shift in the way people are using mobile. Video already accounts for 24 percent of all traffic on our 4G network – that’s significantly more than on 3G. Maps, mobile commerce, sat-nav tools and cloud services are all seeing a similar rise. Mobile users in the UK have a huge appetite for data-rich applications, and this will only grow as people become more familiar with and reliant upon next generation technologies and services.”

    On the subject of take-up, EE said it hoped to have a million 4G customers by the end of the year:

    “Among 4G network rollouts around the world, converting 10 percent of pay monthly base after 24 months is considered to indicate a successful deployment. More than one million 4GEE customers would represent around 8 percent of the EE pay monthly user base, upgraded or acquired from rival networks within just 14 months.”

    The carrier added that it intended to trial carrier aggregation this year, combining spectrum from the different bands it has at its disposal – in addition to the 1800MHz spectrum, EE picked up 800MHz and 2.6GHz spectrum at the auction. Carrier aggregation is key to LTE-Advanced, the next generation of LTE and, technically speaking, the first mobile broadband technology that should be able to bear the moniker “4G”. EE also noted that it was working on supplying its customers with voice-over-Wi-Fi services — a move that might lessen the load on its LTE network — and voice-over-LTE.

    Right now, EE’s LTE network covers 50 towns and cities in the UK. The doubling of the speeds will affect customers in 10 of those cities initially, namely Birmingham, Bristol, Cardiff, Edinburgh, Glasgow, Leeds, Liverpool, London, Manchester and Sheffield.

    Here’s hoping EE also boosts the amount of data it offers for its 4G customers — the entry-level package there comes with just 500MB, which already doesn’t go far with 10Mbps usage speeds.


  • Google could face Android antitrust investigation in Europe, after Microsoft complains

    Google may find itself in trouble for bundling key applications in its lineup with the Android operating system, after a lobbying group including Microsoft, Nokia and others complained to the European Commission over the practice.

    The Microsoft-led group, called FairSearch, was already behind a previous (and as yet unresolved) complaint to the Commission over Google’s desktop search practices, in particular its alleged tendency to rank Google services higher than those of rivals. However, Nokia (the handset maker) and Oracle (the anti-Android litigator) joined FairSearch last September, indicating that the fight would be extended to the mobile sphere.

    That has now happened. According to a FairSearch statement on Tuesday, “Google uses deceptive conduct to lock out competition in mobile”. The main issue at play here is the way in which Google bundles its suite of services with Android: if a phone manufacturer wants to build an Android phone that includes consumer favorites such as Maps or YouTube, the manufacturer is then also obliged to “pre-load an entire suite of Google mobile services and to give them prominent default placement on the phone”, the complaint states.

    The other issue is that of Google’s distribution method. FairSearch characterizes the giving-away of Android as “predatory” and “below-cost”, arguing that it “makes it difficult for other providers of operating systems to recoup investments in competing with Google’s dominant mobile platform”.

    According to FairSearch counsel Thomas Vinje:

    “Google is using its Android mobile operating system as a ‘Trojan Horse’ to deceive partners, monopolize the mobile marketplace, and control consumer data. We are asking the Commission to move quickly and decisively to protect competition and innovation in this critical market. Failure to act will only embolden Google to repeat its desktop abuses of dominance as consumers increasingly turn to a mobile platform dominated by Google’s Android operating system.”

    So far, Google’s only response has been to say: “We continue to work cooperatively with the European Commission.” Meanwhile, a New York Times interview with EU Competition Commissioner Joaquín Almunia suggests that European antitrust officials had already been looking into Android separately from their long-running Google desktop search investigation.

    Is there a case here?

    The fundamental concept in antitrust regulation is that of market dominance – if the target of the regulation doesn’t dominate the market in a way that potentially lets them stunt competition, regulators can’t hold them back, as that would mean distorting the market unnecessarily. That’s why I don’t believe anything will come of complaints made over Apple’s carrier contract terms, for example – iOS devices don’t actually dominate their market.

    The case for Android dominating the smartphone market, though, is much stronger. We’re not talking about the levels of dominance Google enjoys in desktop search – there, it owns just under 90 percent of the market – but, as FairSearch has noted, around 70 percent of smartphones shipped worldwide at the end of 2012 carried Android. That is a lot, but does it amount to market dominance?

    There are three main problems with this theory. The first is that iOS, while not dominant, is very strong; much stronger than OS X was as a rival to Windows when Microsoft (oh, the irony) got hit with a €497 million EU antitrust fine for bundling Windows Media Player with its OS. Indeed, in the EU, iOS has a market share of around 25 percent, and Android has a market share of just over 60 percent (the 70 percent figure quoted by FairSearch is weighted somewhat by the high numbers of Android phones being shipped to developing countries).

    Secondly, it is viable to fork Android and forgo the standard Google suite. Amazon has done just that with its Kindle Fire range of tablets, which is doing just fine. In China, Baidu has done the same, replacing the Google suite with its own services. In Russia, Yandex is also developing its own set of rivals to Google’s services, although its strategy is more a case of piggybacking on standard Android than of rip-out-and-replace – in itself, this demonstrates that rival services can get a chance on Android, particularly if the operator rolling out the phone is keen.

    Finally, this is a market in constant flux. Android’s rise has certainly been meteoric, but there is a chance that some alternative, whether it be Firefox OS or a Kindle phone or a de-Googlified Samsung OS, will stop it in its tracks. Microsoft and Nokia would certainly have something to gain from straitjacketing Google in the near future, as they want Windows Phone to succeed, but the regulators may be queasy at the thought of interfering in an already tumultuous scene.

    In short, this one is complicated. Whatever happens, though, it’s a formal complaint, so the EU will be forced to acknowledge it and decide whether or not to launch a formal investigation.

    Anything else?

    Glad you asked! Almunia also dropped a few interesting tidbits in that NYT interview about the Google search case. He insisted that the Commission wouldn’t require Google to change its ranking algorithms, but he did say Google would need to start more clearly identifying results that link to its own services.

    “Maybe we will ask Google to signal what are the relevant options, alternative options, in the way they present the results,” he suggested.

    According to Almunia, Google will submit proposals this week about settling the investigation. In the U.S., the Federal Trade Commission (FTC) has already concluded a similar investigation without any major crackdown on Google, but that will not necessarily influence the Commission’s thinking, particularly as Google has a greater share of the European search market than it does in the U.S.


  • Microsoft hopes to sell its Mediaroom IPTV platform to Ericsson

    It was a rumor, but now it’s reality: Microsoft is to sell its Mediaroom IPTV business to Ericsson. The financial terms of the deal have not been disclosed.

    The deal should go through in the second half of the year, the companies said in a statement on Monday. According to a separate blog post by Yusuf Mehdi, strategy head at Microsoft’s interactive entertainment division, the sale will allow Microsoft to “commit 100 percent of its focus on consumer TV strategy with Xbox” – bear in mind that the next-generation Xbox is expected to be unveiled in the coming months.

    Mediaroom is a telco-oriented, customizable IPTV platform that should be a good fit for Ericsson, a company that already sells networking equipment and services – including IPTV equipment and services – to telcos. There are more than 40 existing Mediaroom customers, including AT&T (which brands it as U-verse) and Deutsche Telekom (Entertain). According to the statement, Ericsson will have a market share of over 25 percent if the deal goes through.

    As is customary for such sales, the deal will be subject to regulatory approval in various countries. Microsoft’s Mediaroom division employs 400 people, who are based in Mountain View, California.

    According to Ericsson SVP Per Borgklint, “future video distribution will have a similar impact on consumer behavior and consumption as mobile voice has had”, and the staff transferring with the platform will give Ericsson “senior competence and some of the most talented people within the field of IPTV distribution”.

    Here’s an excerpt from Mehdi’s post:

    “We are proud of the world-class engineering and business achievements within Mediaroom. They have a rich history of driving innovation in IPTV. As early pioneers, they built the infrastructure to stream video on limited bandwidth, and today they enable multiscreen entertainment experiences for pay TV subscribers. Mediaroom has contributed to the evolution of TV and powers 22 million set-top boxes today in 11 million subscriber households.

    “With the sale of Mediaroom, Microsoft is dedicating all TV resources to Xbox in a continued mission to make it the premium entertainment service that delivers all the games and entertainment consumers want – whether on a console, phone, PC or tablet.”


  • Panasonic Automotive buys streaming radio firm Aupeo

    When Panasonic Automotive and Aupeo said in January that they were starting a “strategic collaboration” around in-car infotainment services, they were, it now seems, just being coy. As it turns out, Panasonic has bought the Berlin-based Pandora competitor outright.

    The deal closed in late March, Aupeo CEO Holger Weiss – who will continue to head up his 20-person team – told me. It’s big news for the connected car industry, but also for Berlin, a city whose startup scene has been eagerly awaiting the validation of a major exit for the last couple of years. Frustratingly, though, the financial terms of the deal have not been disclosed.

    According to Weiss, the deal will see both parties extend their reach:

    “They’re buying technology, customer relationships and an experienced and gifted team. We’re already in Mercedes, BMW and all those other guys, and have been focusing strongly in the last 12-18 months on the connected car.

    “Panasonic understands strategically that the internet-enabling of cars will [change] the use of the internet in the car fundamentally. You have the possibility of taking your smartphone and connecting it with the car, but the level of integration and according technology that are required to provide stable streaming while driving at 180mph is something a regular consumer-focused service cannot do.”

    The deal is reminiscent of Harman’s 2010 purchase of Aha Radio, whose in-car Aha platform is found these days in vehicles from Subaru, Honda and Acura. Aupeo, meanwhile, has partnerships with Mercedes, BMW, Mini and Pioneer for its platform, which covers news and weather, radio, podcasts and audiobooks, and also uses text-to-speech technology. As for Panasonic, that company is already involved in Chrysler’s Uconnect platform (as is Harman, albeit as developer of a hands-free communication system) and Chevrolet’s MyLink.

    The January announcement said Panasonic and Aupeo would “create customized cloud-based solutions, optimized for the automotive market”, and it now looks like a lot of that work will happen in Berlin.

    According to Weiss, Aupeo will remain in Berlin as a wholly-owned subsidiary of Panasonic Automotive Systems Company of America, and will keep its name (and consumer apps and services). Berlin will become the “development hub” for Panasonic’s connected services strategy, he added.


  • 9 in 10 Londoners will soon have free Wi-Fi in Tube stations

    From June, almost all Londoners should be able to get free Wi-Fi access in London Underground stations, after O2 became the latest major carrier to sign up as a wholesale customer of Virgin Media.

    Virgin Media has been providing internet access in Tube stations since the Olympics in mid-2012. The service was initially free for all, but after the Games Virgin started charging on a daily, weekly or monthly basis for those who aren’t customers of Virgin Mobile or the company’s fixed-line services. EE and Vodafone – respectively, the UK’s first and third-largest mobile carriers — signed up as wholesale partners in November, ensuring that their customers would also get free access.

    O2, the second-largest mobile operator, has now done the same, with its customers getting access from June. According to Virgin, those three carriers account for 89 percent of London’s population, leaving only overseas tourists and subscribers of other carriers – notably the smallest of the big four, Three – having to pay up for access.

    “Having O2 on board is excellent news for the thousands of people that use the Tube every day,” London Underground strategy chief Gareth Powell said in a statement. “Most customers will now be able to access live travel information or use social media to plan their social life while on the move.”

    London Underground also used the announcement to reveal 12 more stations that will be Wi-Fi-enabled, including Baker Street, Bank, Earl’s Court and Sloane Square. The total number of stations bearing connectivity is now 120 (the Tube network has 270 stations, although many aren’t in central London, as the Wi-Fi-enabled ones tend to be).

    I’ve asked Three whether it’s talking to Virgin about getting its customers into the scheme, and will add the response in when I get it.


  • Qubit revamps analytics platform for ecommerce sites

    The British site analytics firm Qubit has released version 2 of its platform, which aims to give ecommerce proprietors a simpler unified interface for seeing and acting upon what visitors to their sites do.

    Qubit was started a few years ago by a group of ex-Googlers, who are now taking on their former employer’s Analytics product as well as other rivals such as Chartbeat, Adobe Omniture and Mixpanel. Qubit draws in more than 100 data points for each site visit, which it combines with factors such as geography to create a behavior model – first-time visitors tend to do this, second-time visitors tend to do that, and so on.

    The new version of the platform gives marketers a more fully integrated workflow, as Qubit CEO Graham Cooke explained to me:

    “We can do an analysis, understand who the users are and look at the journeys they’ve taken. So, for example, [certain users] always have these issues when putting things in their basket; they don’t know if they spend another £10 [$15.30] they will get free delivery. So you can choose that segment from the analytics platform and choose to target that segment with a message, which [we] put straight into the website without needing to rewrite the website. You just need a single line of code on the site.”
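
    Qubit hasn’t published its rule engine, but the free-delivery example boils down to a simple segment predicate. A hypothetical Python sketch, with the field name and the £50 threshold invented for illustration:

      FREE_DELIVERY_AT = 50.00

      def nudge_for(session: dict) -> str | None:
          # Target visitors whose basket is within £10 of free delivery
          gap = FREE_DELIVERY_AT - session["basket_total"]
          if 0 < gap <= 10:
              return f"Spend another £{gap:.2f} to get free delivery"
          return None

      print(nudge_for({"basket_total": 42.50}))
      # -> Spend another £7.50 to get free delivery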

    Cooke said Qubit had invested heavily in in-memory processing to deal with the resulting tens of billions of data points, as technologies such as Hadoop and MapR “mean it takes five minutes to get the result back”. (It should be noted that rivals such as Chartbeat and Mixpanel are also working with in-memory technologies.)

    “We have a mix of open-source technologies that we put together with our own query language, built around our own in-memory clusters to do that,” Cooke said. “Hadoop is part of our framework but not part of this solution. This is about making big data friendly and being able to build a hypothesis and write that change into your site targeting a specific user group.”

    Qubit already has an impressive roster of customers, including the BBC, the Financial Times, Expedia and British retail chains such as Staples and Topshop. According to Cooke, the company’s technology leads to around a 25 percent uplift in revenue.

    Internet Explorer users are valuable

    To mark the launch of Qubit v2, the company has also released some in-house research about the relative “value” of users of different browsers – one of the many variables that the platform takes into account. The research took in data from around 100 million sessions across 90 different retail sites between December and January.

    Interestingly, the research shows Internet Explorer (IE) customers to be the most “valuable” customers because they are the easiest to tempt into a sale, with a 3.14 percent conversion rate and an average basket total of £76.87. Safari users have a much higher average basket total of £108.44, but are harder to sell to, with a conversion rate of 1.64 percent.

    Firefox users actually have the highest average basket total, at £110.99, and have a 2.18 percent conversion rate, while Chrome users average £90.36 for basket total and are only slightly easier to sell to than Safari users, with a 1.84 percent conversion rate. In terms of average customer value, then – multiplying basket total by conversion rate – the results in descending order look like this: IE (£2.42), Firefox (£2.41), Safari (£1.78) and Chrome (£1.66).


  • The UK moves to preserve its digital history, paywalled content (and some tweets) included

    For the last century, the U.K. has had what is known as a legal deposit law requiring a copy of every book, pamphlet, magazine and newspaper to be sent to the British Library, and allowing five other major libraries to also request copies. Now the rules are being updated: from Saturday, the same will apply to digital content, including blogs and other content published online.

    The idea, much as it was with printed content, is to archive the U.K.’s cultural and intellectual output. The libraries — including the British Library, the national libraries of Scotland and Wales, Trinity College Library Dublin, the Bodleian Libraries and Cambridge University Library — will be allowed to scrape and store everything on the .uk domain, and to demand copies of ebooks, e-journals and even CD-ROMs published in the U.K.

    One detail in the FAQs stands out: a British Library spokesman confirmed to me on Friday that the rules also cover paywalled content. However, given that people will only be able to access the archive by physically visiting the libraries in question, and that there will be a seven-day lag between publication and archiving, that shouldn’t be too much of a problem for the publishers.

    The spokesman said social media output would also be included, “as long as it is U.K.-based and openly available on the web,” and confirmed that this includes identifiably U.K.-based individuals’ Twitter feeds, although “we’d need to select people because it’s a .com” — no Library of Congress-style catch-all approach, then.

    “The main thing we’re trying to capture first time round is .uk domain websites,” the spokesman added, while also stressing that no non-public social media material would be scraped.

    On the book publishing side, The Bookseller reported that priority will be given to ebook-only publishers. This is presumably because those who aren’t ebook only are already submitting their books under the previously existing legal deposit scheme.

    So why is this all happening? As my colleague Mathew Ingram pointed out last year, digital content can often be ephemeral and easily lost. That sentiment was echoed on Friday by British Library chief executive Roly Keating:

    “Ten years ago, there was a very real danger of a black hole opening up and swallowing our digital heritage, with millions of web pages, e-publications and other non-print items falling through the cracks of a system that was devised primarily to capture ink and paper.

    “The regulations now coming into force make digital legal deposit a reality, and ensure that the Legal Deposit Libraries themselves are able to evolve — collecting, preserving and providing long-term access to the profusion of cultural and intellectual content appearing online or in other digital formats.”

    The U.K. is not the first country to update its legal deposit rules in this way – similar requirements are in place in Denmark, Finland, Sweden and New Zealand.


  • Jolla’s Sailfish OS SDK installers are now out for Windows, OS X and Linux

    Software development kit (SDK) installers for the Sailfish smartphone operating system are now out, Jolla has announced on Twitter. The SDK was previously trailed at Mobile World Congress in February.

    Jolla, which is led by ex-Nokians, has taken the abandoned MeeGo OS and wrangled it into a new, slicker version called Sailfish. The Linux-based OS will in theory be available for a number of device types, but the first commercially-available version will be on a smartphone sold through the Chinese distributor D.Phone and the Finnish carrier DNA.

    According to a separate tweet a few days ago, the timescale for that release is looking a bit fuzzy.

    One significant partnership announced at Helsinki’s Slush Festival last November may already be in trouble, namely that with ST-Ericsson – a chipmaker that is in the process of being broken up by parent companies STMicro and Ericsson.

    Sailfish will certainly find itself in choppy waters this year. There are a range of factors that threaten the iOS-Android duopoly, from Windows Phone and BlackBerry to newer players such as Firefox OS — and let’s not forget Nokia’s low-end Asha platform, which will likely compete in the same market as Sailfish and Firefox OS, and Facebook Home, whose effect on the smartphone scene is yet to be felt.

    Jolla will have a tough time establishing itself, but at least developers can really sink their teeth into its native app potential (they can also submit Android, HTML5 and Qt apps) now that they have the SDK.


  • Flexiant raises $5.7M to push its cloud orchestration tools into the U.S.

    British cloud orchestration outfit Flexiant has picked up a fresh round of £3.75 million ($5.67 million) in funding, which it says it will use to push further into European and North American markets.

    Flexiant’s Cloud Orchestrator suite is aimed at service providers, mostly telcos, who want to become infrastructure-as-a-service (IaaS) wholesalers with minimal effort. The company’s biggest rivals are probably VMware and Citrix, although OnApp and Joyent also play in the same space to varying degrees.

    The funding is the largest tranche Flexiant has received thus far, taking its total funding to date to around £6 million. The cash comes from London-based private investors, rather than venture capital. “We have a strong local investor base,” CEO George Knox told me. “They’re investing in exposure to the cloud market – this is part of their balanced portfolio.”

    U.S. service providers in particular can now expect to have Flexiant knocking on their doors, Knox explained:

    “The money will allow us to continue to be innovative on the development process. We’re very focused… we don’t try and compete in the private cloud, on-premise space. This allows us to concentrate on the service provider space and in particular to put more focus into the U.S. service provider space. At the moment we’re still relatively unknown there. Even in the European market it’s only in the last 6-9 months that we’ve become known.”

    Interestingly, Knox claimed that OpenStack isn’t a particularly serious rival, due to the needs of Flexiant’s target market.

    “The service provider market doesn’t want to spend time, energy and money putting a whole lot of consultants to work,” he said. “We do get some organizations that are open source zealots but [the drivers of that] are in the IT department. As soon as you talk to the business people, [who are concerned with] how you get to market quickly and how do you monetize – they don’t want to go down the 9-12 month route.”

    Last month Flexiant also announced a partnership with France’s USharesoft. USharesoft has a “cloud software factory” product called UForge, which lets service providers automatically create and maintain full software stacks – applications included — as cloud server templates. These templates can now be published straight to the Flexiant Cloud Orchestrator image library, ready for self-provisioning.

    Flexiant says it picked up 14 customers in the first quarter of this year alone (and has recently been touting the Canadian service provider Cartika as a case-study customer).


  • Yes, you should care about Bitcoin, and here’s why

    Everybody’s talking about Bitcoin these days, which is quite remarkable given the highly technical nature of the crypto-currency. So why is it such a big deal?

    To explain why, I’m going to start with the implications of Bitcoin, then get into the technical nitty-gritty. Why that way round? Because there’s more to Bitcoin than the technical wow-factor, or indeed the crazy speculation that’s going on now. Even if Bitcoin itself fails, it’s a sign of things to come.

    All about decentralization

    Bitcoin is to state-issued currencies – often referred to as fiat money – as P2P file-sharing is to traditional broadcast media. There is no centralized source for it that can be controlled or moderated or regulated. It is difficult if not impossible to track from the outside. It is more complex to use than its better-known counterpart, but there are at least theoretical advantages to doing so.


    In the case of file-sharing copyrighted content, the big advantage (apart from not paying for stuff) is the ability to ignore program scheduling or territorially-based release windows. In the case of Bitcoin — which has the added advantage of being lawful — users get to send money anywhere in the world for minimal fees, and to protect that money from the political considerations that influence central banks.

    In short, both Bitcoin and file-sharing are peer-to-peer, meaning users get to both cut out the middleman and, on an emotional level, stick it to The Man. That last factor is not trivial: Satoshi Nakamoto (the pseudonym for Bitcoin’s Keyser Söze-like initiator or initiators) seems to have had strong libertarian ideals in mind when he, she or they set the experiment in motion.

    Hang on. “Experiment”?

    Yup. People may be throwing money into Bitcoin at a scary rate (the total value of all Bitcoins passed the billion-dollar mark at the end of March, and some workers may even be opting to receive part of their salaries in Bitcoin), but it remains experimental. No one’s really sure where it will end up, because no one has really done this distributed, borderless digital currency thing for real before – yes, there are Facebook Credits and Linden dollars, but these are still centralized and controlled as such.

    However, now that the train is in motion, in a sense it doesn’t really matter if Bitcoin succeeds or fails. The original Napster failed and guess what? Unlawful file-sharing is still with us, and will remain with us for a long time. On a conceptual level, whatever happens, it’s now very difficult to see a future without Bitcoin or something like it. It may not replace fiat currencies, just as unlawful file-sharing has not killed off lawful distribution, but it may persist as a viable alternative and, by doing so, force change in the way its traditional predecessors function.

    Now’s probably a good time to look at the technical side of what we’re talking about.

    Crypto-currency

    Each Bitcoin user has a digital wallet, which can be stored on a computer or a memory stick, or in a cloud-based service, or technically even on paper. The wallet contains a list of Bitcoin addresses, which in turn contain both public and private cryptographic keys that prove the holder owns their Bitcoins and is allowed to spend them. The addresses are pseudonymous, in that there is no registry of who owns which address, so Bitcoin is great for conducting untraceable transactions.

    To receive a payment, the payee gives one of their addresses to the payer. The payer then uses that address to initiate the transaction, signing with their own private key to prove they have the funds, and the transaction then has to be certified by the network (rather than by banks, as happens with regular money).
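
    Under the hood, that signature is an ECDSA signature over the secp256k1 curve. Real transactions sign a carefully structured set of inputs and outputs rather than a string, but the primitive itself looks like this sketch, which uses the third-party ecdsa Python package:

      from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

      private_key = SigningKey.generate(curve=SECP256k1)  # lives in the wallet
      public_key = private_key.get_verifying_key()        # basis of an address

      payment = b"pay 0.5 BTC from address A to address B"  # simplified stand-in
      signature = private_key.sign(payment)

      # Any node can check the signature against the public key;
      # no bank needs to vouch for the payer
      assert public_key.verify(signature, payment)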

    Mining tar sands

    Now this is where the creation of new Bitcoins also comes into play. The network is made up of computers called “miners” that are all competing with each other to solve increasingly complex computational problems. Once a miner beats the others to solving a particular problem, it gets to add the solution as a so-called “proof of work” to a block of transactions, and add that block to the “block-chain” — essentially a record of all Bitcoin transactions that have ever taken place.
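
    A toy Python version of that puzzle: keep trying nonces until the hash of the block data meets a difficulty target. Bitcoin itself uses double SHA-256 and a vastly harder, self-adjusting target, but the principle is the same.

    ```python
    # A toy proof-of-work loop: find a nonce whose hash meets the target.
    import hashlib

    def mine(block_data: bytes, difficulty: int = 4) -> int:
        """Return a nonce such that sha256(block_data + nonce) starts
        with `difficulty` hex zeros."""
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce  # the winning nonce is the proof of work
            nonce += 1

    # Finding the nonce takes many attempts; checking it takes just one hash.
    print("proof of work:", mine(b"block of recent transactions"))
    ```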

    As a reward, the miner gets newly-generated Bitcoins, plus any transaction fees the payers have attached to the transactions in that block. Apart from generating new Bitcoins, this distributed verification system also ensures that people can’t double-spend their Bitcoins.

    It is important to understand that, while fiat money is issued and controlled by governments and their laws, Bitcoin is generated and controlled by algorithm. While governments can always print more money according to their needs, there will only ever be just under 21 million Bitcoins (right now there are around 11 million), because that’s how the algorithm works.

    Every 210,000 blocks – roughly every four years – the number of Bitcoins generated with each block halves: during the first four years of Bitcoin, each block came with 50 Bitcoins; right now it’s 25; from around 2017 it will be 12.5, and so on. But, even after Bitcoins cease to be produced (the current guess is that this will happen around the year 2140), miners will still want to create more blocks because of the associated transaction fees, so the network will still have the incentive to keep the economy going.
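
    The 21 million cap falls straight out of that halving schedule, as a quick back-of-the-envelope Python calculation shows (the real network also rounds rewards down at each halving, which this ignores):

    ```python
    # Sum the halving schedule: 210,000 blocks per era, reward halving each era.
    reward = 50.0   # Bitcoins per block in the first era
    total = 0.0
    while reward >= 1e-8:        # rewards below one satoshi round to nothing
        total += 210_000 * reward
        reward /= 2
    print(round(total))          # -> 21000000, i.e. just under 21 million
    ```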

    The limit on the number of Bitcoins also makes the system inherently deflationary. Because no central authority can expand the supply, as long as the Bitcoin economy continues to grow (and as people lose their Bitcoins, removing them from the system), transactions will take place in ever-smaller fractions of a Bitcoin. However, this shouldn’t be as much of a problem as it would be with a normal currency, because Bitcoins are divisible to a very fine degree. There is currently a limit of eight decimal places (taking us down from “bitcents” to the “satoshi”), but even smaller fractions could be enabled in the future.
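
    That eight-decimal limit is why Bitcoin software typically counts in whole satoshis rather than fractional Bitcoins, which sidesteps floating-point rounding. A minimal sketch (the helper name is my own):

    ```python
    # Represent amounts as integer satoshis: 1 Bitcoin = 100,000,000 satoshis.
    SATOSHI_PER_BTC = 100_000_000

    def btc_to_satoshi(btc: str) -> int:
        """Convert a decimal Bitcoin string (at most 8 decimal places)
        to an integer number of satoshis."""
        whole, _, frac = btc.partition(".")
        return int(whole) * SATOSHI_PER_BTC + int(frac.ljust(8, "0") or "0")

    print(btc_to_satoshi("1.5"))         # -> 150000000
    print(btc_to_satoshi("0.00000001"))  # -> 1, a single satoshi
    ```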

    Okay, so who’s using it?

    There is some debate around whether Bitcoin is a currency or commodity. The issue there is whether people are converting fiat money into Bitcoin in order to profit off its current meteoric rise in value, or whether they intend to actually spend it.

    Bitcoin’s critics frequently point out that its original user base largely consisted of people frequenting Silk Road, the underground online marketplace for drugs and other illegal things. Silk Road only allows trade in Bitcoin, but in the offline world people buy a lot more drugs with dollars and euros, so frankly I fail to see the point there.

    It would certainly be a mistake to see Bitcoin’s non-Silk Road usage as widespread, but as people find out about the currency, some are starting to use it. You can famously use Bitcoins to buy pizza, but these days they can also pay for VPN and VoIP services, music, cupcakes and, er, Linden dollars. WordPress (see disclosure) takes Bitcoin and Expensify will handle it. “Anarcho-capitalist libertarian” Jeff Berwick also wants to roll out Bitcoin ATMs.

    Does Bitcoin have rivals (apart from fiat money)?

    Yes, although none as successful. For example, there’s an interesting project called Ripple that is just getting off the ground. There have also been multiple previous attempts at creating non-digital alternative currencies, such as the Liberty Dollar, which earned its creator Bernard von NotHaus a counterfeiting conviction. Bitcoin may share the anti-statist motivation behind that wannabe currency, but it’s hard to see how it could constitute counterfeiting.

    Is it smart?

    Technically speaking, Bitcoin is very smart indeed: it’s the first digital currency to remove the need for a trusted third party – usually a bank – in financial transactions.

    That said, however, it’s crazily volatile at the moment. At the start of 2013, one Bitcoin was worth around $13. Things went nuts with the Cyprus crisis in March, and right now the price is bumping up and down around the $137 mark. It certainly looks like a bubble at this point, although the huge amount of interest Bitcoin is getting at the moment could lead to an uptick in use, which would in turn legitimize it as a viable currency. Either way, the current volatility will probably dissuade people from spending their Bitcoins right now, and make life hard for vendors setting prices in Bitcoin.

    Then, despite the supposed inviolability of the Bitcoin itself, there are multiple security issues. Before we even consider nefarious activities such as hacking, an interesting wrinkle in the Bitcoin methodology is that, if you lose your Bitcoin wallet (and any backup of its keys), the money is lost forever, to everyone. If you lose your bank card, it doesn’t wipe out the money in your account, and your bank will issue you a new one. There is no such mechanism in place here; losing Bitcoins is effectively like burning banknotes.

    Similarly, if someone steals your Bitcoin wallet by hacking into your computer, there is no heavily-insured bank to absorb the loss. You’re on your own. This happened to a user named “allinvain” back in 2011, costing him 25,000 Bitcoins. You can even get Bitcoin wallets for smartphones these days, but then you’re running a big risk if you lose your phone. As for cloud-based wallets, well, Instawallet has just suspended operations after being hacked.

    The best idea is probably to keep your Bitcoins on a device that is securely stored and not permanently connected to the internet – an approach often referred to as “cold storage”.

    Should you get Bitcoins? I don’t know – the value against the U.S. dollar could continue to rise, or the bubble could burst. But frankly, I don’t really care. From where I’m sitting, Bitcoin is already proving its worth as a disruptor and as a test-case for how technology could divorce currency from certain external factors. If it fails, it may hurt those who bought into it big-time, but it’s not a large enough ecosystem to have wider repercussions. And if it does fail, it will have successors.

    Let’s see what happens next, because the crypto-currency genie is out of its bottle.

    Disclosure: Automattic, maker of WordPress.com, is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, GigaOm. Om Malik, founder of GigaOm, is also a venture partner at True.

  • Triposo’s iOS travel guides gain “opinion mining” data and faster OpenStreetMap rendering

    The travel service Triposo, which assembles its many city guide apps through the use of algorithms, has just added a raft of new social features and other improvements to its iOS apps.

    The biggest boost is the addition of what Triposo CEO Douwe Osinga described to me as “opinion mining.” The company’s founders are ex-Googlers, and earlier iterations of the Triposo apps showed a very Google-like approach to judging the importance of sights: if the algorithms picked up that a lot of people were photographing a particular monument, they judged that it must be important. Now the apps also search texts written about the sights or facilities in question, analyzing the sentiments that previous visitors have expressed.

    “We started tracking social mentions in a number of social sites and used that to come up with a much better measure of what is the best place to have coffee in a city, for example,” Osinga told me. “If you look up what is the best place based on the star ratings of users, [it can be] influenced by naysayers more than fans. If one guy is a bit rude then their star rating won’t necessarily be high. Instead, we do text analysis of what people say about these places and correct for these things.”
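
    To illustrate the idea – this is my own toy Python example, not Triposo’s actual pipeline – a crude word-list sentiment tally across many mentions dilutes a lone naysayer in a way that a raw star average doesn’t:

    ```python
    # A toy lexicon-based sentiment tally versus a raw star average.
    POSITIVE = {"great", "lovely", "best", "friendly"}
    NEGATIVE = {"rude", "awful", "worst"}

    def sentiment(mention: str) -> int:
        """Score one mention: +1 per positive word, -1 per negative word."""
        words = set(mention.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    mentions = [
        "best coffee in town and friendly staff",
        "lovely little place",
        "great espresso",
        "the waiter was rude",              # the lone naysayer
    ]
    stars = [5, 5, 5, 1]

    print(sum(stars) / len(stars))              # 4.0: one 1-star pulls down the mean
    print(sum(sentiment(m) for m in mentions))  # 3: the tally stays clearly positive
    ```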

    Speaking of reviews, Triposo is now also integrated with the Yelp API, so users can check out star ratings from that service. Users can also now share tips and pictures with each other more easily, as well as organize records of their own trips.

    The other big change is to do with Triposo’s maps. Despite being headed up by a bunch of ex-Googlers, the company has always used OpenStreetMap, largely because Google Maps doesn’t offer offline access – a total must for those roaming abroad – through its API. OpenStreetMap also boasts more detailed coverage than its Googlish counterpart in certain countries, such as the U.K. and Germany.

    However, Triposo previously rendered its maps using in-house technology. Now it’s opted for Skobbler’s GeOS toolkit, which has apparently resulted in faster rendering, as well as a smaller footprint for the overall app. In a way, it’s surprising that the Triposo-Skobbler tie-in took this long to materialize, as both companies are based in Berlin.

    On that note, Skobbler also claimed in a release on Wednesday that GeOS, which is still in private beta for now, is seeing “significant interest… from a number of companies, including leading brands in both the travel and automotive space.”

    Osinga said the Android versions of the guides would be updated at some point down the line. “We first spearhead our features on iOS and, when we have a good understanding and know how our users like it, we also port it for Android,” he said.

  • Cisco wades deeper into small cell waters with $310M Ubiquisys purchase

    The networking giant Cisco has been into small cell technology for about five years, since it made a strategic investment in the picocell and femtocell outfit ip.access. However, Cisco was clearly not satisfied – its hunger for more small cell tech has now led it to offer $310 million in cash and employee retention incentives for UK-based Ubiquisys.

    Small cells are effectively tiny cellular base stations that offload mobile data traffic onto a wired backhaul service. Along with good old Wi-Fi, they are seen as essential to warding off the so-called mobile data crunch, as they free up capacity on the macrocells that form the basis of cellular networks. Ubiquisys is something of a leader in the field, having recently been ranked number one (ip.access was number three) by ABI Research for enterprise and residential femtocells.

    Ubiquisys also works on self-optimizing network (SON) technology, and offers a “smart cell” system called EdgeCloud, delivered in partnership with Intel, that combines server and small cell functionality to deliver cloud applications from the edge of the network – a bit like a content delivery network (CDN) for mobile rather than fixed-line surfers.

    This is the third major acquisition Cisco has announced in the last four months that is aimed at boosting its portfolio for mobile carriers — in December it was Broadhop (policy management) and in January it was Intucell (SON). According to a blog post on Wednesday from Cisco business development chief Hilton Romanski, the Ubiquisys deal “complements Cisco’s mobility strategy along with the recent acquisitions of BroadHop and Intucell, reinforcing in-house research and development, such as service provider Wi-Fi and licensed radio”:

    “As carriers around the world increase cellular data capacity to serve the rapidly growing population of smartphone and tablet users, adding small cells is one of the most cost-effective ways to multiply data capacity and make better use of scarce spectrum assets. Ubiquisys’ indoor small cells expertise and its focus on intelligent software for licensed 3G and LTE spectrum, coupled with Cisco’s mobility portfolio and its Wi-Fi expertise, will enable a comprehensive small cell solution to service providers that supports the transition to next generation radio access networks.”

    Assuming it clears the necessary approvals, the deal is expected to close in the fourth quarter of this year. The Ubiquisys employees, who are based in the English town of Swindon, would then join the Cisco Service Provider Mobility Group.
