Author: David Meyer

  • Yes, it’s another Verizon-Vodafone rumor, this time of a $100B buyout

    And lo, the Verizon-Vodafone rumor mill rumbles on. Just weeks after Verizon scotched rumors that it was planning a $245 billion takeover of its British counterpart, a new report out of Reuters suggests Verizon is now preparing to buy out Voda’s share in their joint venture, Verizon Wireless.

    Verizon owns 55 percent of the cellular enterprise, and – according to Reuters’s unnamed sources – it now thinks it can pick up the rest for $100 billion, half of which would come from bank financing and half of which would take the form of Verizon’s own shares. The U.S. firm apparently wants amicable discussions with Vodafone over this, but is willing to “take a bid public” if Voda doesn’t play ball.

    I’ve asked Voda for comment on this latest notion, and am interested to see how any acceptance of the offer would overcome the hurdles faced in the past. The problem there, as another unnamed Vodafone investor previously explained, is that a sale of its share would hit Voda with an enormous capital gains tax bill – which is why a merger seemed more attractive.

    As those in the U.K. know all too well, Vodafone is allergic to big tax bills, so let’s see how this latest outbreak of whispering pans out in reality.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.


  • The ex-MySQL gang is back together, pushing MariaDB as a neutral ‘bridge’

    Bad news for Oracle, maybe: some of the key pre-Sun-takeover MySQL players are back together, and their MariaDB fork of MySQL looks like it’s gaining serious traction.

    The reunion comes courtesy of a merger between open source database services firm SkySQL (which supports both MySQL and MariaDB deployments for customers ranging from Harvard to Shutterstock) and a company called Monty Program — yes, as in Monty Widenius, who named MySQL after his oldest daughter My and its fork after his younger daughter, Maria.

    So now we have Widenius and other ex-MySQLers such as Colin Charles back together with players such as MySQL co-founder David Axmark and former MySQL sales director Magnus Stenberg. Actually, that’s underselling the magnitude of what’s happened here: out of the 70 employees of the fused operation (which is continuing under the SkySQL name), 50 used to be at the original MySQL firm.

    Open appeal

    At the same time, MariaDB seems to be capitalizing on the disillusionment of some in the open source community with Oracle’s stewardship of MySQL — doing things like releasing extensions for the commercial version but not the free version was never going to win favor in that scene. Wikipedia migrated to MariaDB in the last few days, and the Fedora and OpenSUSE Linux distros will both make the jump in their next releases.

    The MariaDB Foundation, which is busy sorting out its governance structure and which now claims SkySQL as an early member, also took on former Sun Chief Open Source Officer Simon Phipps as its CEO a week ago.

    “It is a pleasure to have a company representing the reunited core team of our code base joining the Foundation at its inception,” Phipps said in a statement this week.

    MariaDB the “bridge”

    The fused team has a unique NewSQL proposition: not only is MariaDB fully compatible with MySQL, but it can also interface with newer NoSQL databases such as Cassandra and LevelDB. According to SkySQL CEO Patrik Sallner, SkySQL will continue to service both MySQL and MariaDB customers and won’t be forcing anyone to jump to MariaDB — but he expects many customers to make that leap nonetheless:

    “Right now, because MySQL belongs to Oracle, it’s not necessarily perceived as independent. Linux is the default operating system in most enterprise contexts. Oracle, IBM and Microsoft control the vast majority of business in databases and most companies have at least two of these, which are not compatible with each other. And, as companies deploy new applications, they use new [NoSQL] database technologies to meet their needs.

    “We believe that MariaDB has an opportunity to become a truly independent and interoperable open source database, meaning we can provide a solution that’s a neutral ground for companies. … Our aspiration is to start building this into a new form of database platform that ties together other databases in a seamless manner. By providing a bridge, we believe we can create more innovation.”

    Sallner noted that there isn’t currently a great deal of difference between MySQL and MariaDB, apart from the latter’s “pluggable” approach to storage engines. “Using the SQL language allows us to be compatible with other databases, and we have a connect engine which allows us to add on-the-fly support for other data formats,” he said.
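    To make the “pluggable” idea concrete, here is an illustrative sketch (not from the article) of what MariaDB’s storage engines look like at the SQL level: the same CREATE TABLE shape can target InnoDB, a CSV file via the CONNECT engine, or a Cassandra keyspace. The engine names follow MariaDB’s documented syntax of the era, but treat the exact option names as assumptions and check the server docs before relying on them.

```python
def create_table_ddl(name, columns, engine, **options):
    """Build a CREATE TABLE statement for a given MariaDB storage engine.

    `columns` is a list of (name, type) pairs; extra keyword arguments
    become engine-specific options appended after the ENGINE clause.
    """
    cols = ", ".join(f"{c} {t}" for c, t in columns)
    opts = " ".join(f"{k}='{v}'" for k, v in options.items())
    return f"CREATE TABLE {name} ({cols}) ENGINE={engine} {opts}".strip()

# Ordinary InnoDB table -- identical to what you would send to MySQL.
innodb = create_table_ddl("users", [("id", "INT"), ("name", "TEXT")], "InnoDB")

# CONNECT engine exposing a CSV file as a table (on-the-fly data formats).
csv_table = create_table_ddl(
    "sales", [("region", "CHAR(8)"), ("total", "DOUBLE")],
    "CONNECT", table_type="CSV", file_name="sales.csv")

# Cassandra storage engine bridging SQL queries to a NoSQL keyspace.
cassandra = create_table_ddl(
    "events", [("rowkey", "VARCHAR(36)"), ("payload", "BLOB")],
    "CASSANDRA", keyspace="analytics", column_family="events")
```

    The point of the “bridge” pitch is visible here: the client-facing SQL stays the same while the engine clause swaps out what actually stores the rows.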

    As a next step, Sallner said he hoped to see other database providers join the MariaDB Foundation, in order to maintain this open common ground. “We’re not competing against DB2 or Oracle or Microsoft today — we’re all serving different needs,” he said. So does he want to sign up Oracle itself? “That’ll be a stretch, but it would be a huge sign of success,” he laughed.

    It’s not all bonhomie, though — Sallner reckons large internet companies will engage with MariaDB in a way that they haven’t with Oracle’s MySQL.

    “We believe those companies are willing to contribute the work they’ve done back to MariaDB,” he said. “Facebook and Twitter have contributed substantial new features to MariaDB. They probably wouldn’t have contributed that to Oracle.”

    UPDATE (10.55am PT): This piece originally and incorrectly stated that Widenius is the new SkySQL CTO, whereas he is in fact the CTO of the MariaDB Foundation. Widenius is on the board of SkySQL, but his role is non-operational.


  • Ubuntu Server 13.04 targets carriers and the big data crowd

    It’s Ubuntu release time again. On Thursday, version 13.04 of the venerable Linux distribution will come out, with the server version touting several new tricks for those using it in cloud deployments. It’s not a long-term support (LTS) release – you’ll have to wait another year for that, if you’re being cautious — but this “Raring Ringtail” version provides an opportunity to test out new features beforehand.

    New features

    First off, the default installation is for a virtualized environment. As Mark Baker, Ubuntu Server product manager at sponsor company Canonical, told me, this is because users are increasingly deploying the OS on hypervisors and Canonical wants to show off the OS’s capabilities there.

    “While KVM has been big on Ubuntu since 2008, it’s not the only game in town,” Baker said. “We’re seeing customers wanting to understand integration or compatibility between ESX and Ubuntu, or even Hyper-V and Ubuntu, and we’re ensuring testing on these – and of course KVM and Xen — so when we are engaged with customers or users we can say we know Ubuntu provides a robust experience on the prevalent hypervisors.”

    The other major aspect of this release is its integration with the new Grizzly release of OpenStack. Canonical has been involved with OpenStack since the start, and the release cycles for the two products are aligned (Grizzly came out a few weeks ago).

    Ubuntu 13.04's Juju orchestration “charms” have been updated to deploy OpenStack for high availability – for example, when the user deploys MySQL, the charm will set up 3 nodes in a failover configuration, and a similar approach applies to the deployment of the Rabbit messaging server. Of course, those deploying in a test environment won’t be too keen on running 2 or 3 of everything, so it will still be possible to install in a “less highly available way”, as Baker put it. The Juju GUI has also seen a lot of work this cycle “to improve usability”, he added.
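    As a rough sketch of what that HA-by-default behaviour amounts to at the command line: the charm names and the -n unit-count flag below mirror Juju’s 1.x-era CLI, but treat the exact invocations as assumptions rather than Canonical’s documented procedure.

```python
def deployment_plan(services, ha=True):
    """Return the juju commands for each service, tripling the unit
    count when high availability is requested (one unit otherwise)."""
    units = 3 if ha else 1
    return [f"juju deploy -n {units} {svc}" for svc in services]

# An HA control plane: three MySQL nodes, three RabbitMQ nodes.
ha_plan = deployment_plan(["mysql", "rabbitmq-server"])

# A test environment can opt out and run single units of everything.
test_plan = deployment_plan(["mysql", "rabbitmq-server"], ha=False)
```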

    Meanwhile, the Ceph storage subsystem is now fully integrated with Ubuntu and OpenStack, in order to please Canonical’s telco and service provider clients, and Ubuntu’s Floodlight OpenFlow controller has also been updated. Although Canonical and VMware are working closely on Nicira, “having an open-source alternative to Nicira is also important,” Baker pointed out.

    Carrier adoption

    Speaking of carriers and service providers, this is the market segment where Canonical appears to be thriving.

    “OpenStack certainly has been the biggest growth areas for us in the last 12 months,” Baker said. “We have got engaged with the types of customers that we could only have dreamed of, looking back a few years. OpenStack is gaining adoption with carriers, and most people doing that to scale are doing that with OpenStack on Ubuntu. Most of the major telcos, the global names that you’ll see, are deploying their OpenStack on Ubuntu.”

    Baker also claimed that OpenStack is seeing traction in the big data space, with users deploying Hadoop and Cassandra on Ubuntu – he suggested this may be out of “developer affinity” with the Linux distro.

    “It’s fair to say the bread and butter of our user base is running web infrastructure,” Baker said. “A lot of that user base is moving that web infrastructure into the cloud. We’ve gained significant popularity on Azure – there is a fair proportion of that running Linux. While you wouldn’t think it a natural fit to provide Ubuntu on a Microsoft cloud, we actually think it’s quite exciting.”


  • The WhatsApp-friendly Asha 210 is a reminder of Nokia’s low-end capabilities

    WhatsApp should receive a boost in emerging markets through a Nokia phone, announced on Wednesday, that features a dedicated hard key for the SMS rival.

    The Asha 210, which will come in both single- and dual-SIM versions with retail prices starting at $72, has a physical QWERTY keyboard and is therefore well-suited to messaging and social networking services. The handset will come with a free subscription to WhatsApp, which usually costs $0.99 a year, and the service is also integrated with the 210's phonebook.

    “We are very excited about our partnership with Nokia Asha complementing our strategy of giving people around the world an easy experience when keeping in touch with their friends,” WhatsApp co-founder Brian Acton said in a statement.

    Like other Asha phones, the device runs the Series 40 operating system. Nokia started calling the touchscreen Asha phones (of which the 210 is not one) “smartphones” last year, much to the annoyance of some observers, but in some ways that was a fair move: after all, Series 40 handset owners also get to download apps from an app store that contains many of the offerings familiar from Android and iOS. The social experience that is the focus for many “proper” smartphone users can be found here too, albeit in a slightly cut-down fashion.

    The Asha 210 comes preloaded with YouTube, Twitter and Facebook (the recently-launched Asha 205 came with a dedicated Facebook button) and a 2MP camera with its own hard key. As with the 205, a feature called Slam makes it possible to share content with nearby Bluetooth phones without having to pair the devices. The phone’s battery lasts for up to 46 days on the single-SIM version, and up to 24 days on the dual-SIM version – you don’t see this kind of longevity on a touchscreen phone.

    This is a great deal for WhatsApp, particularly as many of its key rivals – such as Tencent’s WeChat — are strongest in the emerging markets where Nokia’s low-end devices are sold. These alternatives can still be found in Nokia’s S40 app store, but users should be effectively steered in WhatsApp’s direction by the inclusion of the hard key. A reminder of the numbers here: WhatsApp may have 200 million users, making it “bigger than Twitter”, but WeChat has 300 million users.

    And from the Nokia perspective, the Asha 210 is a reminder of what can be done with the now-aged S40 platform in certain markets. This device will be going up against very low-end Android phones, which offer a much wider range of apps but not necessarily better performance (and seriously, battery life is a major issue in many of these markets), and the soon-to-be-released Firefox OS phones, which are HTML5-only and as such an unknown quantity at this point. Given its social chops, the 210 will be a fairly impressive contender for many users.


  • Sush.io raises $325K to plug web services into financial visualizations

    Paris-based Sush.io has picked up $325,000 in seed financing ahead of the launch of its financial analytics app for small businesses. The app, due to be unveiled on Thursday before general availability next month, plugs into accounts for the likes of PayPal, GitHub, Amazon Web Services, Google AdWords and even mobile phone operators, so that the user can get an overview and analysis of their total spending.

    Sush.io is pitching itself as “Mint.com meets IFTTT” and — as the list of services that can be plugged into the app attests — it’s very much targeting tech startups at this early stage.

    Here’s how co-founder Thomas Guillaumin explained the focus of the app to me:

    “The problem is collecting all these services and having a top down view of your finances – how much cash you have, what’s your burn rate… that can really help you run a business better.”

    Another indicator of Sush.io’s tech startup focus is the fact that it’s launching with an OS X desktop app first. But, as Guillaumin told me, versions for other desktop and mobile platforms will be out by the end of the year.

    The investors in this seed round include Kima Ventures, Jacques-Antoine Granjon (the founder of online flash sale pioneer Vente-privee), the 50 Partners accelerator and Mediastay co-founder Jonathan Zisermann. According to Guillaumin, the cash will be used for the launch and also to hire three more staff members, taking the total (including founders) to five.

    Guillaumin said the Sush.io service will operate on a freemium model, with the paid subscription kicking in depending on the number of services you want to add. This will cost between £30 and £50 ($46–$76) a month – cheaper than paying a CFO, certainly. Right now the company is hawking its wares in the European startup hubs of Paris, London and Berlin, but in June it intends to push into the U.S., too.

    That said, Guillaumin sounds quite wary about the American market due to potential competitors there – namely the Geckoboard-Zapier partnership and even IFTTT itself, on the chance that IFTTT integrates an analytics dashboard at some point.

    He added that, once it’s gotten off the ground, Sush.io may develop into a more fully-fledged business intelligence product that adds more KPIs (key performance indicators) to its current financial focus.


  • The first Firefox OS dev phones are on sale

    The developer test phones for Firefox OS are now on sale. They’re being produced by Geeksphone, a small Spanish outfit that used to make Android handsets for true open-source cognoscenti, and that is now backing Mozilla’s operating system as the way forward.

    Geeksphone is far from the only company pushing Firefox OS – operators seem especially keen, largely because they want to shake up the Google/Apple smartphone duopoly. However, it is the only firm thus far to start selling devices using the operating system (ZTE will also sell Firefox OS phones from around the middle of the year).

    Firefox OS’s big differentiator is its treatment of HTML5 web apps as native, which means apps built for this platform should run on other smartphone platforms too.
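    In practice, “HTML5 apps as native” meant a plain web app described by a small JSON manifest. The sketch below builds one in Python; the field names follow the Open Web App manifest format of the time, but the values (names, paths, URLs) are invented for illustration.

```python
import json

# A minimal Open Web App manifest, as served in a manifest.webapp file.
# All values here are placeholders, not from any real app.
manifest = {
    "name": "Hello Gigaom",
    "description": "A plain HTML5 app packaged for Firefox OS",
    "launch_path": "/index.html",
    "icons": {"128": "/img/icon-128.png"},
    "developer": {"name": "Example Dev", "url": "https://example.com"},
}

manifest_webapp = json.dumps(manifest, indent=2)
```

    Because the app itself is just HTML, CSS and JavaScript behind that manifest, the same code can in principle run in any modern browser, which is the portability argument made above.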

    “With early access to hardware, developers can test the capabilities of Firefox OS in a real environment with a mobile network and true hardware characteristics like the accelerometer and camera that are not easily tested on the Firefox OS Simulator,” Stormy Peters, head of developer engagement at Mozilla, said in a blog post. “Plus, new hardware is fun to play with!”

    The two phones – the Keon and the Peak – can be shipped anywhere in the world. The Keon, which costs €91 ($119) plus taxes, is representative of the kind of hardware that will ship to Firefox OS customers first: 3.5-inch screen, entry-level Qualcomm 1GHz processor and a 3MP camera. This is very much aimed at the low end of the market in regions such as South-East Asia and South America.

    The €149 ($194) Peak is more forward-looking in terms of Firefox OS, although the specs will be familiar to those who have scouted out current low-to-mid-range Android handsets: 4.3-inch screen, dual-core Qualcomm 1.2GHz processor and an 8MP camera. Both devices feature 512MB of RAM, 4GB of ROM, MicroSD support and the various sensors you’d expect in a modern smartphone.


  • Google fined $189K by German privacy regulator, who wishes he could fine more

    Google has been hit with a €145,000 ($189,000) fine in Germany over the “negligent” collection of people’s personal data by Google’s Street View cars. The fine was levied by Hamburg’s data protection chief, Johannes Caspar, who made it very clear that he wished he could fine the company more.

    This all follows on from the great Street View data collection scandal of 2010, where it emerged that Google’s vehicles weren’t just photographing roads and buildings, but also scraping fragments of emails, photos and passwords from open wireless networks that they passed. Google logs Wi-Fi access points in order to help its geolocation services locate the user more quickly, but the collection of data being transmitted over those access points was, Google has always argued, a terrible accident – the company blamed this on rogue code in its software.

    Germany, the birthplace of data protection law, was always going to come down harshly on Google over what happened almost three years ago, and indeed Caspar levied almost the maximum €150,000 fine at his disposal for a merely negligent data protection breach. Had he not been convinced by Google that the breach was accidental, he would have had a €300,000 cap to work with – still hardly enough to make a difference to a company the size of Google.

    Here’s what Caspar said in a statement:

    “In my estimation this is one of the most serious cases of violation of data protection regulations that have come to light so far. Google did cooperate in the clarification thereof and publicly admitted having behaved incorrectly. It had never been the intention to store personal data, Google said. But the fact that this nevertheless happened over such a long period of time and to the wide extent established by us allows only one conclusion: that the company internal control mechanisms failed seriously…

    “As long as violations of data protection laws are punishable by discount rates, the enforcement of data protection laws in a digital world with its high potential for abuse will be all but impossible.”

    Under the proposed new EU-wide data protection regulation, companies could be fined up to 2 percent of their annual turnover for such breaches, a level that Caspar said would “enable violations of data protection laws to be punished in a manner that would be felt economically.”
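    For a sense of scale, here is the 2 percent cap worked through in a quick sketch. The $50 billion turnover figure is a rough round number for a Google-sized company, chosen for illustration rather than taken from the article.

```python
def max_eu_fine(annual_turnover, rate=0.02):
    """Proposed EU cap: up to 2 percent of annual turnover."""
    return rate * annual_turnover

hamburg_fine = 145_000                       # euros: the actual fine levied
eu_style_cap = max_eu_fine(50_000_000_000)   # ~$1bn on $50bn of turnover
ratio = eu_style_cap / hamburg_fine          # several thousand times larger
```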

    Incidentally, Germans need not worry about Street View cars scraping their data anymore, not just because Google says it has cleaned up its act, but also because the company stopped taking new Street View pictures in the country a couple of years back. This was after Google allowed Germans to apply to have their properties manually blurred out – so many people took the company up on this that it required the hiring of scores of temporary workers to carry out the blurring, and eventually Google just gave up.

    Regarding Monday’s fine, Google released the following statement:

    “We work hard to get privacy right at Google. But in this case we didn’t, which is why we quickly tightened up our systems to address the issue. The project leaders never wanted this data, and didn’t use it or even look at it. We cooperated fully with the Hamburg [data protection authority] throughout its investigation.”


  • CDNify launches, based on OnApp’s federated CDN

    A month after OnApp launched its CDN-in-a-box package for resellers, here comes an exemplary result: a new, startup-oriented content delivery network (CDN) called CDNify.

    CDNs using OnApp’s federation (OnApp also has its own effort here, CDN.net) can tap into the spare capacity of service providers around the world, with more than 150 points of presence (PoPs) theoretically being at their disposal. CDNify is launching with 40 PoPs, and it’s trying to win over customers on price and simplicity, as founder James Mulvany told me:

    “We want to have a very low cost of entry, starting with a free account – you can start using it straight away, to play with the system. We’ve got things like nice reporting, good graphs so you can see your usage, decent support and video tutorials.

    “The front-end system that customers will use is something we’ve built in-house. It’s fairly simple, although we’ve got lots of exciting features that we’ll be rolling out over the next year. We’re trying to create a very clean experience.”

    In terms of price, CDNify charges $0.05/GB/month, which significantly undercuts both Amazon and Rackspace ($0.12/GB/month for the first 10TB of traffic). Additionally, Mulvany said, “Amazon charges you for the number of hits you get, whereas we just charge on bandwidth.”
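    The difference between the two models is easy to see in a back-of-the-envelope calculation. The per-request figure used for the Amazon-style model below is a made-up placeholder to show the shape of the arithmetic, not a quoted CloudFront rate.

```python
def cdnify_bill(gb):
    """CDNify's model: a flat $0.05 per GB, no per-request charge."""
    return 0.05 * gb

def metered_bill(gb, requests, per_gb=0.12, per_10k_requests=0.0075):
    """A bandwidth-plus-requests model like Amazon's.

    The $0.12/GB rate is the figure quoted above; the per-request rate
    is purely illustrative.
    """
    return per_gb * gb + per_10k_requests * (requests / 10_000)

# Serving 1 TB (1,000 GB) and 10 million requests in a month:
flat = cdnify_bill(1_000)                  # $50
metered = metered_bill(1_000, 10_000_000)  # $120 bandwidth + $7.50 requests
```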

    CDNify is privately funded off the back of the founders’ other business, internet radio outfit Wavestreaming — and, as it happens, the launch of that service provided the impetus for CDNify. “We started looking at CDNs and found they were aiming towards the enterprise, the big guys,” Mulvany said. “We decided to build CDNify for people like us: web companies, startups and mobile app developers.”

    Mulvany said CDNify expects its initial growth to come from the North American and European markets, although the company is “not targeting any specific region.”


  • Planes, trains and automobiles: Waymate unveils its ambitious travel comparison app

    Berlin’s Waymate has launched its mobile app for comparing various local and long-distance transport options on the go.

    We wrote about Waymate and its rival GoEuro last month – both companies are trying to navigate the world of travel comparison services, but Waymate is taking the extra step of letting people book journeys directly from the service, rather than sending them off to the train or plane operators’ websites.

    As we noted at the time, this is difficult from a data point of view, due to the complexity of the various services on offer. There’s an even greater barrier, though, in the unwillingness of many operators to let a third-party service handle their bookings.

    Despite these barriers, Waymate’s iOS app is now out and its website is fully up and running. In this initial version, users cannot book journeys directly from the app – instead, they can select a journey then email themselves a link, allowing them to complete the booking on Waymate’s website. The service is also yet to be internationalized, meaning long-distance journeys need to originate in the Eurozone and local journeys can only be searched within major German cities.

    The chief benefit of Waymate is the ability to compare all sorts of journey modes: planes, trains and automobiles (car-sharing schemes and taxis are included), as well as metro services and buses. Price and journey duration are clearly displayed on a visual timeline. Sensibly, Waymate has scrapped earlier plans to have two separate apps for local and long-distance travel: this one folds in both ideas.

    “Now the task is to expand the app and the website with thrilling new features — especially in social networking — and to internationalize,” Waymate CEO Maxim Nohroudi said in a statement. “In short, we want travel planning to be completely simple and joyful.”

    It’s an ambitious aim and one that (as far as I am aware) no-one has been able to achieve so far. It would be no surprise to see the app that finally pulls it off come out of Europe, as the fragmented nature of the market creates a substantial need for a service like this. Now let’s see how far Waymate’s rivals dive into this space.


  • You can pluck graphene from thin air – but then what?

    Graphene! It’s the wonder stuff: the thinnest, stiffest, strongest and most impermeable material known to humanity, as well as the best thermal and electrical conductor. What’s more, a company called Graphene Technologies has figured out how to more-or-less pluck the stuff out of thin air – the firm has a scalable, patented technique for creating very pure graphene out of carbon dioxide.

    So why does Graphene Technologies CEO John Myers sound so downbeat about the atom-thick carbon lattice?

    Speaking at Graphene Live in Berlin — co-located with the Printed Electronics Europe 2013 event — Myers pretty much asked the crowd of attendees whether any of them had any idea what to do with the stuff:

    “I’m skeptical about the market I’m in now. There’s a lot of enthusiasm, but also a lot of confusion. The problem is there isn’t a market of any significant size for graphene.

    “We all do ourselves a disservice with our inarticulate, self-congratulatory posturing. The fact is there isn’t a killer app yet and there’s no reason to think there will be, except there’s a lot of [effort] and money being thrown at it, and the material does appear to have a lot of potential.”

    That potential is a big reason for the hype around graphene (which, we should bear in mind, was first isolated less than a decade ago). Because of graphene’s properties, many see it as a possible successor to silicon — a material whose own computing-friendly properties will break down if we miniaturize it much more than we already do.

    The problem there is that graphene doesn’t have an intrinsic band gap, making it tricky to use in transistors — simply put, you can’t turn a pure graphene transistor off. This may yet be fixed through clever doping (coating) techniques, but we still don’t know for sure whether that can be done while retaining graphene’s advantages.

    What about touchscreens? Graphene is transparent and highly conductive, so in that regard it could be a great rival to the frequently used indium tin oxide (ITO) as a conductive coating – and it’s more flexible, too. However, as IDTechEx analyst Khasha Ghaffarzadeh pointed out, graphene doesn’t significantly outperform ITO. It also has serious rivals on the flexibility front, chiefly from carbon nanotubes. Then there’s the fact that while there are concerns over the future supply of indium, an ever-increasing amount of the rare metal is being retrieved through recycling.

    Graphene is also touted as a replacement for activated carbon in the electrodes of supercapacitors, which are used in electric car batteries, for example. But, Ghaffarzadeh said, “it is again trying to replace a material that is well-known and low-cost.” And as a replacement for graphite (the source of graphene, of course) in carbon fiber? Ditto. How about for use in conductive inks? Again, carbon pastes are the rival, and they’re pretty cheap too.

    As Ghaffarzadeh said:

    “The potential is enormous, but it’s trying to do things that already exist, only a little bit better and a bit cheaper. We need new concepts that graphene alone is enabling: new platforms.”

    Myers noted that we are “more than likely going to end up with a range of carbon nano-products, each of which will have a range of interesting features and uses.” Regarding graphene, he added that he hates competing on price, and doesn’t want to “go into a market where the value proposition is that I’m cheaper than the other guy.”

    “I would urge everyone in the field to think about the process opportunity,” Myers said. “There’s no practical limit to the amount of this material that can be made. That means that, in the bulk world, graphene is going to be a commodity. As a business, you have to think about what kind of value you can create with the material, because you’re not going to make any money producing it.”

    To that end, he added, Graphene Technologies has joined the brand new Graphene Stakeholders Association, which opened its doors on Thursday. There, he suggested, various players in the nascent scene can educate each other and collaborate.

    And, hopefully, find the killer app for this wondrous substance.


  • How energy harvesting tech could power wearables and the internet of things

    It’s all very well talking about the evolution of wearable computing and the internet of things, but something has to power these thin and/or tiny devices. For that reason, it’s a good thing that so many ideas are popping up in the field of energy harvesting and storage.

    Some of these ideas were on display this week at the Printed Electronics Europe 2013 event in Berlin, which took in a variety of sub-events including the Energy Harvesting & Storage Europe show. The concepts ranged from the practical to the experimental, so let’s start with the practical.

    Here’s Perpetuum‘s Vibration Energy Harvester (VEH), being carried around (appropriately) on a model train.

    Perpetuum train sensor

    The VEH is a wireless sensor that gets attached to rotating components, such as wheel bearings, on trains. Cleverly, the device is powered by the very mechanical vibration it measures. It also measures temperature, and it wirelessly transmits the results to the train’s operator, who can then spot a failure in its early stages.

    It’s a simple, low-maintenance idea (there’s no battery that needs replacing) that promises big savings, as Perpetuum CEO Roy Freeland told me, referring to an unnamed operator:

    “The user has achieved a very fast payback because the system has enabled him to delay maintenance on the bearings until the fleet was due for a major train overhaul.”

    Perpetuum is part of an EU-funded consortium called Wibrate, which aims to introduce this kind of self-powered vibration monitoring technology into a variety of industrial systems.

    Meanwhile, a similar principle was at play in Cherry’s energy-harvesting switch.

    Cherry wireless switch

    The light you see in that picture can be wirelessly turned on and off by a switch that does not itself require any external powering: the act of pressing the switch creates enough mechanical energy to briefly power its wireless transmission capabilities. This is somewhat preferable to wiring up switches, in terms of both effort and flexibility, and who knows? Perhaps the principle could be employed in certain internet-of-things scenarios, too.

    Then there’s good old photovoltaic technology, which may soon find itself woven into a new generation of smart fabrics. Another EU-funded project called Powerweave aims to create two kinds of fiber – one for harvesting solar energy and the other for storing it – that can be woven together into one self-contained system. This could theoretically be used to power soft sensors in clothing, but there are far more large-scale applications in store.

    According to Christian Dalsgaard, founder of consortium member Ohmatex, the goal is to create a fabric that can generate 10W per square meter. Once that is achieved, he noted, there are “no limits how big such a fabric can be made”, and a 100m2 piece of fabric would in theory be able to generate a kilowatt of power. Commercial applications could range from flexible roofing, tents and sun awnings to a new generation of autonomous airship (balloon manufacturer Lindstrand is also in the consortium). The fabric could even be a valuable part of aid packages, Dalsgaard noted:

    “The end fabric should be foldable, so you can fold a large fabric – 100m2 – into a package. It’s not enough to roll it up… The requirement is to fold it, put it in a package and drop it from an airplane.”

    Powerweave isn’t quite there yet, though. While a lot of progress has been made on the solar cell and storage fibers, “the challenge is to ensure the solar fibers are on top of the fabric and battery fibers are beneath, and that there is a supporting layer to provide strength,” Dalsgaard added.

    But what about fabrics that can harvest energy from movement, rather than light? Yep, people are working on that idea too, although problems remain. As Steve Beeby of the University of Southampton said at the conference: “Textiles offer a good opportunity for energy harvesting… but clothes are designed for [comfort], not to resist your movement.” And don’t forget, any flexible electronics built into the fabric of clothes need to be machine-washable, too, connectors and all.

    And finally, a less technically interesting but nonetheless worthwhile little gadget that was on show: the Clicc.

    Clicc

    These dinky little solar panels can be clipped into tiny units that store the captured energy for charging mobile devices — I wouldn’t expect vast amounts of charge, but it’s handy in a pinch — or they can be chained as the picture shows, to increase the total amount of energy captured. Unfortunately the firm behind them, Sonnenrepublik, hasn’t yet come up with a unit to store and output that aggregated power, but it’s a nice thought nonetheless.

    In the end, all ideas that take us closer to sustainable energy use are welcome.


  • Nokia will launch another ‘hero’ Lumia device this quarter, says Elop

    Nokia intends to release a new flagship Lumia phone in the U.S. during this quarter, CEO Stephen Elop has said.

    Elop predicted the launch in an earnings call on Thursday, after the release of Nokia’s results for the first quarter of the year. He said to “expect to see another hero move” during the quarter – a reference to the strategy of arranging strong promotion with a particular carrier, as happened with the Lumia 900 and AT&T last year.

    All the rumors point to this hero phone being the Verizon Lumia 928. The Financial Times has also reported that Nokia will soon launch a large-screened Lumia to rival Samsung’s Galaxy Note II, as well as a Lumia phone with the high-quality camera of the Symbian-toting Pureview.

    “The next hero move in the U.S. kicks off a season of new product introductions,” Elop added. “We have a lot of juice ahead as it relates to the Lumia product line.”

    Elop also addressed falling sales of Nokia’s most low-end “smartphone” line, the full-touch Asha range, noting how the Lumia 520 demonstrated the company’s intention of “driving lower and lower with Lumia products as well”.


  • Nokia results: treading water for now, but Lumia sales are up

    It’s still hard to tell how far Nokia’s fortunes have turned around. Following a surprise return to profitability around the end of last year, the Finnish handset maker’s latest interim report shows a continuation of underlying profitability – but its shareholders are still losing money.

    The company’s devices and services division managed to eke out a profit of €4 million ($5.2 million) in the first quarter of 2013, if you ignore “special items” during the quarter (namely, a €72 million restructuring charge, a €27 million boost from a cartel claim settlement and a €1 million hit associated with the purchases of Novarra, MetaCarta and Motally). That’s up from a €126 million loss in the same quarter of 2012, based on the same non-IFRS terms.

    However, earnings per share were still -€0.02 for the quarter. That’s a loss of $0.03 per share, slightly better than analysts’ predictions of a $0.05 per-share loss, and significantly better than the $0.10 per-share loss in Q1 2012.

    But let’s look at handset sales.

    Nokia sold 11.1 million smartphones in the quarter — that’s 5.6 million Lumias (up from 4.4 million in the previous quarter), 0.5 million Symbian smartphones (down from 2.2 million in the previous quarter) and 5 million Series 40-based Asha full-touch devices (down from 9.3 million in Q4 2012, which is probably a combination of seasonality and the rise of cheap Androids in the emerging markets).

    The average selling price of a Nokia “smart device” is up 34 percent year-on-year, from €143 to €191. This has helped the devices and services division hit underlying profitability for the second quarter in a row – the wider Nokia group has now managed three such quarters in succession.

    Here’s what CEO Stephen Elop said:

    “At the highest level, we are pleased that Nokia Group achieved underlying operating profitability for the third quarter in a row. While operating in a highly competitive environment, Nokia is executing our strategy with urgency and managing our costs very well.”

    For the second quarter of this year, Nokia predicted a slight worsening of its devices and services operating margin from -1.5 percent to -2 percent, citing the reason as “competitive industry dynamics continuing to negatively affect the Mobile Phones and Smart Devices business units”.

    In short, the turnaround remains far from complete, and Nokia still has to prove itself with the Lumia range. Perhaps the large-screen Lumia smartphone rumored by the FT on Wednesday might help. I imagine the lower-priced Lumias announced in February will also provide a boost.


  • Metaio predicts augmented reality chips in devices by end of 2013

    Metaio, the German augmented reality firm, says it expects to have its dedicated chips in mobile devices by year-end, despite the collapse of its one big announced customer, ST-Ericsson. The outfit has also revealed the opening of a new R&D lab in Dallas, Texas.

    The company announced the ST-Ericsson deal at Mobile World Congress in February, under which ST-Ericsson’s future chipsets were to include a dedicated augmented reality (AR) processor using Metaio’s designs. Much as with the dedicated GPUs found in mobile devices today, the point of a dedicated AR chip is to cut the power draw of specific functions – in this case augmented reality – so people can fire up applications using those functions without worrying about their phone or tablet dying too quickly.

    However, less than a month later STMicroelectronics and Ericsson announced the end of their chipset joint venture, along with the cancellation of the ST-Ericsson NovaThor chipsets that were also announced at Mobile World Congress. Nonetheless, Metaio told me at the time that it was still in talks with both STMicro and Ericsson about the use of its technology.

    According to Metaio spokeswoman Anett Gläsel-Maslov, these talks are still underway, as are negotiations with other (undisclosed) companies. What’s more, she said, the company is near-certain that it will see its “AR Engine in devices by the end of the year”.

    To develop its AR Engine designs further, Metaio is to open a new research facility in Dallas, the company said on Wednesday. Metaio already has an office in San Francisco, so it’s not a matter of getting closer to potential customers – instead, Gläsel-Maslov told me, the firm hopes to scoop up engineers who might be at a loose end following Texas Instruments’ winding-down of its OMAP mobile processor business.


  • Mercedes and Bosch push for new ideas around connectivity and big data

    Some of Germany’s industrial titans have decided to partner up with the Berlin chapter of Europe’s Startupbootcamp accelerator, in the hope of stimulating fresh ideas in the fields of connectivity, mobility and big data.

    Mercedes-Benz, Bosch and the industrial insurer HDI are all involved in the new partnership with Startupbootcamp, dubbed SBC2go. Cars will probably be a focus here: Mercedes-Benz is of course one of the world’s best-known car manufacturers, and parent company Daimler is behind the Car2Go car-sharing service — you may note a similarity in the naming of that and the accelerator partnership.

    Bosch, meanwhile, may be a familiar name for power tool users, but it is also neck-deep in a variety of other areas, including machine-to-machine communications (sensors, smart packaging and so on) and automotive technology (drivetrains and networked infrastructure for electric vehicles, to name but two specialties).

    The SBC2go program should kick off in August. The 10 selected startups won’t have to already be in Berlin – which, after all, is generally known more for its ecommerce and consumer services – but they will have to move there for the duration of the program. Each will get €15,000 ($19,700) in investment.

    “The brands will contribute some of their top talent to the mentor pool, open up their global innovation resources and networks, and support marketing efforts,” Startupbootcamp’s Alex Farcet wrote in a blog post. “In return, the brands access the Startupbootcamp open innovation movement driven by early stage, nimble startups which are usually below the radar of global companies.”


  • Twitter forces Flattr to stop letting users tip ‘favorited’ tweets

    Less than a month after Flattr made it possible to leave virtual tips for tweeters by favoriting tweets, Twitter has told the Swedish micropayments company to cut it out.

    Flattr, co-founded by The Pirate Bay’s Peter Sunde, is quite a simple system. The user signs up to donate a certain amount – $5 for example – each month, and then “flattrs” people for their content, with the recipient getting an amount equal to the monthly pot divided by the number of flattrs the user has made. It was originally for tipping bloggers who had Flattr buttons on their sites, but last month the company expanded the functionality to allow the automatic tipping of people posting on Twitter, SoundCloud, Instagram and other sites.
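The split described above is simple enough to sketch in a few lines. This is a simplified model of my own, not Flattr's actual code: the monthly pot is divided equally among everyone the user flattred that month, and Flattr's 10 percent cut (mentioned below) comes off each payment.

```python
# Simplified model of a Flattr monthly settlement: equal shares of a
# fixed pot, minus the service's 10 percent cut. Illustrative only.

def settle_month(pot_eur, recipients, flattr_cut=0.10):
    """Return {recipient: net amount} for one month's flattrs."""
    if not recipients:
        return {}
    share = pot_eur / len(recipients)
    return {name: round(share * (1 - flattr_cut), 2) for name in recipients}

# A 5-unit pot split across five creators: each gross share is 1.00,
# of which 0.90 reaches the recipient.
print(settle_month(5.0, ["blogger", "podcaster", "a", "b", "c"]))
```

So a user who flattrs only one person in a month hands that person almost the entire pot, which explains the lopsided tip sizes mentioned in the disclosure below.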

    However, Flattr has now removed the Twitter functionality after Twitter asked it to desist. As Flattr co-founder Linus Olsson explained in a blog post on Tuesday:

    “Recently Twitter contacted us and told us that we are violating their API terms citing the second part of a clause (IV. Commercial Use, 2C. Advertising Around Twitter Content) saying ‘Your advertisements cannot resemble or reasonably be confused by users as a Tweet. For example, ads cannot have Tweet actions like follow, retweet, favorite, and reply. And you cannot sell or receive compensation for Tweet actions or the placement of Tweet actions on your Service.’

    “This is a quite logical clause as it would stop companies to sell e.g. retweets and followers. It’s an understandable rule to keep the Twitter network clean but in this case the rule is strangely stomping out innovation on their platform.”

    Now, Flattr is a for-profit firm that takes a 10 percent cut of payments carried out over its system. But even after Flattr offered to forego that cut in the case of flattred tweets, Twitter apparently said no. So, from today, favoriting tweets won’t result in the tweeter getting money – although those using the Flattr browser extension can still flattr tweeters through this alternative mechanism.

    I’ve asked Twitter whether it sees another way in which Flattr can operate on its platform, but am yet to receive a response.

    On the face of it, this action of Twitter’s seems to tally with its recent shutting-down of Ribbon’s service, which used Twitter’s Cards technology to allow full-on payments to take place within tweets themselves. However, Ribbon and Flattr appear to have broken two different rules – in Ribbon’s case, it looks like the company wasn’t making the right kind of Cards access request, and in Flattr’s it was the contravention of the tweet action regulation.

    It’s probably too early to tell whether Twitter has an ulterior motive here, but Olsson suspects it does. As he told me:

    “I would speculate that they want to control all the ways that money is changing hands on Twitter – if they control that, they can control the flow and in future get a cut of it. That would be a logical business model for Twitter – if you use Twitter to sell something you need to pay Twitter for it. I’m just speculating here, of course.”

    Flattr will now put this theory to the test, Olsson added, by building a system “where you can send a flattr to someone on Twitter by tweeting them instead”. In the meantime, the service continues to expand its reach by adding YouTube to the roster of services through which flattrs can be made.

    I should probably add by way of disclosure that, since signing up for Flattr a month ago, I have received two flattrs for tweets of mine: one for €3 ($3.92 – I’m guessing the user didn’t favorite many tweets that month) and the other for €0.16. After Flattr took its cut, the remainder was €2.84, which is just less than the €3 that I have set as my monthly budget for flattring others.


  • Orange outs Libon for Android and adds voice chat to iOS version

    Orange has released the first version of its Libon app for Android smartphones and is adding new functionality to the iOS version.

    Libon appeared for iOS in November last year, giving Orange a clear competitor to so-called over-the-top (OTT) applications such as Skype and WhatsApp. Like T-Mobile USA’s Bobsled and Telefonica’s Tu Me, the app provided free HD calls and messaging to other users of the same platform — regardless of their carrier — as well as voicemail transcription.

    Now it’s available on Android as well as iOS. According to Giles Corbett of the Orange Vallée R&D department, the Android version is “completely integrated” into the native OS in a way that isn’t possible with iOS (see also, Facebook Home). “For instance, it integrates all of your incoming and outgoing GSM calls and SMSs in all of the conversations,” he noted, adding that setup, including the redirection of voicemail, could all be controlled from within the app.

    On the iOS side, meanwhile, the new version — to be set live on Tuesday — will remain a step ahead of its Android counterpart, with the integration of audio chat (as in, conducting an asynchronous conversation using audio messages) and photo messaging. That said, Corbett said this functionality would be added to the Android version in the coming weeks.

    I asked Corbett how Orange’s OTT efforts were keeping pace with developments such as Telefonica’s Tu Go, which gives O2 U.K. contract customers a Wi-Fi-capable app through which they can make and receive calls and texts using their existing number, with charges being integrated with their standard bill.

    Corbett responded by pointing out that Libon creates a similar experience for customers of certain Orange operators. For example, customers of Orange’s low-cost Sosh brand in France can use Libon to call landlines and mobile numbers on “advantageous terms”, with call recipients seeing the caller’s standard number and — for calls to certain countries, at least — with charges coming out of their standard allowance.

    Meanwhile, Orange Poland is to adopt a similar strategy, and by the end of June Libon will be integrated with core Orange services in 5 countries. For those who just want to use it as an OTT app alongside core services from other carriers, availability stretches to 95 countries. “It’s a way for Orange to reach and explore new customer bases,” Corbett said.


  • UK audit office probes 4G auction results

    The UK’s National Audit Office is to look into the recent 4G spectrum auction, which pulled in £2.34 billion ($3.62 billion) against a government forecast of £3.5 billion.

    The news was first broken by The Guardian (see disclosure) over the weekend, and was subsequently confirmed to me by the National Audit Office (NAO) itself on Monday morning. The upcoming investigation was triggered by member of parliament Helen Goodman, who complained that the government “failed to get value for money” in the auction.

    “It’s a little early to say exactly what we’re going to be looking at,” a spokesman for the NAO told me. “We will soon be in a position to put a remit of the study and a timescale on our website.”

    This should be an interesting one. The telecoms regulator Ofcom, which has hailed the result as a success, has always been crystal clear on the fact that its auction was not designed to raise the maximum revenue possible (everyone has learned their lessons from the £22.5 billion 3G auction a dozen years ago, which nearly crippled the industry), but rather to keep the market competitive and make sure as many people as possible get coverage.

    As for the source of the £3.5 billion figure floated by the government, there seems to be a disturbing amount of buck-passing going on. As I wrote on the auction’s completion in February:

    “The reserve price for the auction was £1.3 billion, although the government had budgeted for it to bring in £3.5 billion. Does that make the result disappointing? That depends on whether you see the government forecast as politically motivated or focused on the actual worth of the spectrum. There was never much justification given for the £3.5 billion figure, and no-one appears to be taking responsibility for it — today the Treasury told me to take my questions about the figure’s rationale to the Department for Culture, Media and Sport (DCMS), and the DCMS told me to ask the Treasury.”

    Now, according to The Guardian, the Treasury is claiming the figure came out of the Office for Budget Responsibility. Whoever came up with it, I’ve not seen a scrap of the rationale behind it.

    It is worth noting that the £3.5 billion figure was floated last year at a time when the Chancellor of the Exchequer, George Osborne, was trying to maintain that the national deficit would fall, not rise, in 2013. Without the predicted boost from the spectrum auction, the margin would have been much smaller. And, as it turned out in last month’s Budget statement, the deficit for 2013 is indeed up on that for 2012, not down.

    Disclosure: Guardian News & Media, which publishes The Guardian, is a minority investor in GigaOM.


  • How technology is slowly developing its sense of smell

    This week I attended what was, I think it is fair to say, the oddest conference I have been to yet. It was the first world congress of the Digital Olfaction Society (tagline: “The Smell of Digital”), the stated goal of which is to “digitize, transmit, reproduce and recapture smells, flavors and fragrances”. You know that perennial April Fool’s joke about sending odors through the internet, most recently spun up by Google? That.

    The thing is, as my colleague Barb Darrow pointed out in the wake of Google’s gag this year, there really are serious efforts underway to make the digital capture and production of aromas a reality. The conference was small, but the participants spanned the disciplines of computer science, biochemistry, engineering, smart clothing design and perfume retail.

    The society is the brainchild of Dr. Marvin Edeas, who is also the president and founder of the International Society of Antioxidants in Nutrition and Health, and Professor Takamichi Nakamoto of the Tokyo Institute of Technology’s engineering school, whose team is gradually refining its smell detection and generation systems.

    Edeas’s specialty is the fight against ageing and obesity, and he is intrigued by the recent discovery that aroma can activate intestinal receptors, making people feel more full than they are. Pointing out that experiments have also shown dogs can smell cancer and diabetes, he foresees the development of a “digiscented world” where smells are deployed and captured for medical, gaming, security and justice purposes, and where cinemas use a version of Smell-O-Vision that actually works.

    But there are barriers – after all, it’s a century since Alexander Graham Bell bemoaned the lack of a true “science of odor”, and we still don’t live in that digiscented society. Fundamental problems include the unpredictability of air flows, the complexity of smells, the difficulty of managing timing and intensity, and the fact that culture and individuality play significant roles in the way each person perceives a given smell.

    One highly sceptical voice in the room was that of Patrick Mielle, a microchemical sensor expert from the University of Burgundy. E-noses have been commercially available for 20 years, he complained, but they have failed to evolve beyond fairly basic gas sensors.

    “It was marketed as a general-purpose instrument, but there are very few commercial applications now. I don’t know one in the food industry after 20 years. Maybe we are missing the link with the human… Nobody is able to predict the odor response for a mixture – it’s impossible to model. Odor doesn’t exist. It’s a neural signal processing from a chemical vector. An odor is not the same for me and for you. It’s really a cultural concept.”

    Then there’s the issue of there being no “primary odors” – no equivalent of red, green and blue from which we can weave any combination, no matter how exotic. Sure, we can pump out a smell that roughly synthesizes that of coffee, Mielle noted, but we cannot reproduce the smell of a particular fine variety.

    That doesn’t stop the likes of Nakamoto from experimenting with blended chemicals, though. Witness the professor’s Virtual Ice Cream Shop: part artwork, part demonstration of the team’s odor generation work.

    Takamichi Nakamoto's Virtual Ice Cream Shop

    We don’t know the precise set of “odor components” needed to recreate any given specific smell at the moment, but Nakamoto claims around 30 such components are sufficient to at least achieve “approximation”, reducing the exploratory area to help researchers search for more precise reproductions.
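One way to picture that kind of approximation — and this is a toy illustration of my own, not Nakamoto's actual method — is to treat each odor as a vector of sensor responses and fit a target smell as a weighted blend of a small palette of components:

```python
# Toy illustration: approximate a target odor as a weighted blend of a
# few "odor components" by least squares. The component vectors and
# feature dimensions here are invented for demonstration.
import numpy as np

# Rows = 3 hypothetical odor components, columns = 4 feature responses.
components = np.array([
    [1.0, 0.0, 0.2, 0.0],
    [0.0, 1.0, 0.1, 0.3],
    [0.0, 0.0, 1.0, 0.5],
])

# A target odor constructed as 0.5*c0 + 0.2*c1 + 0.3*c2.
target = 0.5 * components[0] + 0.2 * components[1] + 0.3 * components[2]

# Solve for blend weights w minimizing ||components.T @ w - target||.
weights, *_ = np.linalg.lstsq(components.T, target, rcond=None)
print(np.round(weights, 3))  # recovers [0.5, 0.2, 0.3]
```

With real odors the fit is never exact — hence "approximation" rather than reproduction — but a ~30-component palette shrinks the search space dramatically.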

    Here, the Virtual Ice Cream Shop produces aromas that are supposed to remind the user of basic ice cream flavors such as strawberry and chocolate – it has a graphical user interface that allows flavor blends (performed in the vapor phase) and the whole thing is hooked up to a MIDI keyboard, with flavors paired with supposedly appropriate musical timbres. It was fun to try, if a bit strange.

    Nowhere near as strange as Meta Cookie, though.

    Meta Cookie 2

    Meta Cookie is an experimental “pseudo-gustatory display” (the finest phrase I have ever noted down, incidentally) that attempts to modify the perception of flavor by changing the food item’s appearance and masking its true smell with another, simulated scent. It’s a truly bizarre set of headgear that combines augmented reality with a series of tubes for emitting smells in front of the user’s nose.

    It doesn’t work terribly well. Scent quality aside, Meta Cookie relies on the system recognizing a symbol branded onto a plain cookie, so it can superimpose a picture of a strawberry or maple or chocolate cookie over it. As soon as you eat part of the symbol, it ceases to work – hence, I found myself having to nibble around the edges of the symbol, like a squirrel wearing a flatulent robotic squid on its head.

    Anyway, Tomohiro Tanikawa, one of the researchers behind Meta Cookie, reckons this technology could ultimately be used for “augmented satiety” – in other words, to help dieters fool themselves into thinking they’re eating something larger than it really is.

    Then we have the smelly devices that may seem little more than gimmicks, but that are – let’s face it – the likeliest to be commercialized in the near future. Here’s the Multi Aroma Shooter, developed at Japan’s National Institute of Information and Communications Technology (NICT).

    Multi Aroma Shooter

    Not much to explain here: the associated research is to do with temporal and spatial control of odor production, and the Shooter is a USB-powered device that is supposed to emit well-timed smells to augment scenes in games and movies. In this demonstration, a video of a woman eating various fruits is accompanied by the appropriate smells at the appropriate times. There’s no clever blending going on here – in fact, the most accurate preset smells were those based on good old essential oils, such as rose and orange.

    As for sending smells through the internet, here’s a rather rudimentary example: Kiko Tsubouchi’s ChatPerf, a fragrant dongle for the iPhone (an audio-jack-based version for all platforms including Android and Windows Phone will come out in July).

    ChatPerf

    The idea is for developers to use the ChatPerf SDK to build apps around the platform, so someone can, for example, send a virtual rose to their lover, fragrance included. It’s a cute idea, and it may sell well as a novelty item, but it suffers from two fundamental problems: the recipient will have to have the dongle plugged into their smartphone in order to get the message in full, and each cartridge for the thing only comes with one smell.

    On the smell detection side of things, we may not have moved beyond simple gas sensors, but there’s still some interesting research being done in that area.

    For example, Achim Lilienthal’s Mobile Robotics and Olfaction Lab, housed at Örebro University in Sweden, is working on robots that can move around and locate gas leaks (not coincidentally, a distant cousin of Lilienthal’s died in a major gas explosion four decades ago). This involves a lot of data-crunching, as the robot constantly needs to map the gas distribution around it in three dimensions.

    As Lilienthal told the conference, one reason digital olfaction is so complex is the number of disciplines that need to work together on it:

    “For example, there’s biology — this could be your starting point. Then we have sensors. Physics and chemistry are also very important, to know about the physics of gas distribution and turbulent effects. And computer science: you need a lot of machine learning, because the models are not precisely known. You need probabilistic models to get some robustness. And you need signal processing.”
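The mapping task at the heart of all this can be sketched in miniature. The following is an illustrative toy of mine (in the spirit of kernel-based gas mapping, not the lab's actual algorithm): each point measurement spreads its concentration onto nearby grid cells with a Gaussian weight, and each cell's estimate is the weighted mean of the readings near it.

```python
# Minimal kernel-weighted gas distribution map (2D here for brevity;
# the robot does this in three dimensions). Illustrative sketch only.
import math

def build_gas_map(measurements, grid, sigma=0.5):
    """measurements: [(x, y, concentration)]; grid: [(x, y)] cell centres.
    Returns {cell: estimated concentration}, or None where no data."""
    gas_map = {}
    for cx, cy in grid:
        wsum, csum = 0.0, 0.0
        for mx, my, conc in measurements:
            d2 = (cx - mx) ** 2 + (cy - my) ** 2
            w = math.exp(-d2 / (2 * sigma ** 2))
            wsum += w
            csum += w * conc
        gas_map[(cx, cy)] = csum / wsum if wsum > 1e-9 else None
    return gas_map

# Two readings: high concentration near (0, 0), low near (2, 0).
readings = [(0.0, 0.0, 8.0), (2.0, 0.0, 1.0)]
grid = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
estimates = build_gas_map(readings, grid)
# The cell at (0, 0) is dominated by the nearby high reading.
print(estimates[(0.0, 0.0)])
```

The probabilistic models Lilienthal mentions go further, tracking the variance of each cell's estimate so the robot knows where its map is least trustworthy.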

    The lab’s Gasbot project has attracted some attention for its leak-finding potential. The prototype is designed to roam around landfill sites from which methane is captured and used to generate power – it is, Lilienthal noted, “of economic importance to find leaks”. A future version may take the form of a microdrone, used to scan larger areas for natural methane leaks. Sensors need to improve though, he pointed out, as do the algorithms.

    Occupational hygienists have also expressed interest in the technology, Lilienthal added, for its potential in constantly monitoring workplaces. The idea there would be, for example, to get a better picture of how people are exposed to concentrations of particulate matter in factories, and to correlate that data with reported health problems.

    All in all, the field of digital olfaction remains extremely young. Where machines can be designed around the detection or production of specific smells, we can see basic and sometimes highly useful applications starting to emerge. But as for systems that can identify a random and rare odor, then reproduce it as a blend of primary ingredients on the other end of the line… don’t hold your breath.


  • Why Bitcoin crashed, and how Ripple might avoid the same fate

    To the surprise of very few people, Bitcoin has crashed. That’s not to say it’s a goner, but even those who bought into it just before the craziness of the last two weeks are now looking at losses: from a U.S. dollar exchange rate high of $266 two days ago, the crypto-currency is – at the time of writing – trading at around $70.

    What went wrong? Apart from being a bubble (albeit a bubble of a kind we’ve never quite seen before), it looks like Bitcoin fell victim to a single point of failure. But wait, you say, it’s a decentralized currency – how can that happen?

    That single point of failure is the most popular Bitcoin currency exchange, MtGox. There are other exchanges, but the bulk of Bitcoin trading happens there. MtGox claims that, amid the last couple of weeks’ mania, it was hit by the twin ills of denial-of-service attacks and sudden, excessive popularity – both of which amount to the same thing: MtGox’s systems falling over. The operation (which is based in Japan) has also shut down its own service at least once in an attempt to “cool down” the market.

    And every time that has happened, a panic sell-off has been the result. That’s not surprising: MtGox’s status as the best-known exchange has led it to become the main data source for most of the Bitcoin rate visualizations out there, so when MtGox goes down it affects visibility for a lot of people. And when people can’t see what’s going on, they panic, find another exchange and sell, sell, sell. The same goes for the biggest exchange unilaterally deciding to cool down the market – hardly a sign of viability.

    (Some people have theorized about more sinister motives, too.)

    As I write, MtGox is conducting an AMA session on Reddit, in which it is explaining what it’s doing to stem the problems:

    “Upgrading computer systems means ordering more servers (two weeks timeframe), setting up (one day), load testing (two weeks) and deployment (one day). It’s a process that can take up to one month in total… We are now enforcing new rules for people placing large amounts of trades in order to reduce risks of lag.”

    Hardly ideal. But what – apart from boring old state-issued currency – is the alternative?

    Introducing Ripple

    The ideal alternative for Bitcoin as an ecosystem is to try to even the load between different exchanges, or at least settle on one that doesn’t fall over when people get keen on the currency. However, there is also an emerging rival of sorts called Ripple (not to be confused with the charitable donation tool of the same name).

    Although its use is also pseudonymous, Ripple isn’t quite the grassroots effort that Bitcoin is: on Thursday its sponsor, OpenCoin, picked up a round of funding from Andreessen Horowitz, FF Angel IV, Lightspeed Venture Partners, Vast Ventures and Bitcoin Opportunity Fund, “an investment vehicle for Bitcoins and Bitcoin-related companies.” OpenCoin’s development chief is Jed McCaleb, the guy who founded MtGox in 2010.

    However, while it doesn’t have the same Stick-It-To-The-Man vibe as Bitcoin does, Ripple does have a few advantages over its better-known rival. Chief among those is the fact that it doesn’t need currency exchanges: in fact, it is its own distributed currency exchange.

    Ripple can be used to convert dollars into rupees, or for that matter Bitcoins, and send them across the world for the nominal fee of one “ripple” – this fee is only charged to stop people from swamping the system with millions of transactions. OpenCoin says a ripple is worth around a thousandth of a cent, and the company will put 100 billion of them into circulation — three quarters of which it will give away and a quarter of which it will keep for itself, in the hope that the value goes up. No more ripples will ever be created.
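    To put those numbers in perspective, here’s a quick back-of-the-envelope calculation (a Python sketch based only on OpenCoin’s stated figures above):

    ```python
    # Figures from OpenCoin's stated plan: a ripple is worth roughly
    # a thousandth of a cent, and 100 billion will ever exist.
    ripple_value_usd = 0.01 / 1000          # ~ $0.00001 per ripple

    total_supply = 100_000_000_000          # 100 billion ripples, fixed forever
    given_away = total_supply * 3 // 4      # three quarters to be given away
    kept_by_opencoin = total_supply - given_away

    # Implied dollar value of the entire supply at today's rate
    total_value_usd = total_supply * ripple_value_usd

    fee_per_transaction = 1 * ripple_value_usd  # one ripple per transaction

    print(total_value_usd)        # $1,000,000 for the whole supply
    print(kept_by_opencoin)       # 25,000,000,000 ripples retained
    print(fee_per_transaction)    # $0.00001 -- negligible, anti-spam only
    ```

    In other words, at today’s quoted rate the entire supply is worth about a million dollars, and the anti-spam fee is vanishingly small – the value OpenCoin hopes to capture lies in the price of a ripple rising.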

    So, ripples are both in-service tokens for the mechanism of sending and exchanging real money, and a virtual currency in their own right. Of course, Ripple users will need to get their hard cash into the system somehow, so the system employs what it calls “gateways.” Anyone will be able to act as a gateway, even individuals and convenience stores – although online services will probably be the most convenient.

    A faster self-regulating network

    Ripple is a bit like Bitcoin, in that the network verifies transactions – this is essential if you’re removing the “trusted third party” role that banks usually fill, because someone needs to ensure that people aren’t double-spending their virtual money. However, there’s a big difference in how this happens.

    With Bitcoin, nodes on the network called “miners” compete with each other to verify each block of transactions every 10 minutes. To verify a block, the miner has to complete a complex computational puzzle faster than its rivals do. In return for the electricity spent in doing so, the miner gets a certain number of freshly minted Bitcoins – this is how the system keeps working, and how new Bitcoins are brought into circulation.
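    The competition miners engage in can be illustrated with a toy proof-of-work loop. This Python sketch is a simplification – real Bitcoin mining uses a double SHA-256 hash and a vastly harder difficulty target – but it shows the basic brute-force search for a winning nonce:

    ```python
    import hashlib

    def mine(block_data: str, difficulty: int) -> int:
        """Try successive nonces until the block hash starts with
        `difficulty` zero hex digits -- a toy stand-in for Bitcoin's
        much stricter difficulty target."""
        nonce = 0
        prefix = "0" * difficulty
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce
            nonce += 1

    # Whichever miner finds a valid nonce first "wins" the block
    # and collects the freshly minted coins; everyone else's work
    # on that block is wasted electricity.
    nonce = mine("block of transactions", difficulty=4)
    digest = hashlib.sha256(f"block of transactions{nonce}".encode()).hexdigest()
    assert digest.startswith("0000")
    ```

    Note that verifying the winner’s answer is instant – anyone can hash the block once and check the zeros – which is what makes the competition fair even though finding the nonce is expensive.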

    With Ripple, the nodes on the network also maintain a shared ledger – the equivalent of Bitcoin’s blockchain – but they don’t compete with each other to do so. Instead, the system uses a complex consensus mechanism to make sure transactions get verified and added to the ledger.
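    Ripple’s actual consensus protocol involves multiple voting rounds, lists of trusted validators and cryptographic signatures, but the core idea – nodes agreeing on a transaction set without competing for a reward – can be sketched as a simple supermajority vote (a toy illustration, not the real algorithm):

    ```python
    # Toy supermajority vote, loosely inspired by Ripple-style consensus.
    # Each node proposes the set of transactions it considers valid;
    # only transactions endorsed by a supermajority make the ledger.

    def consensus(votes: dict[str, set[str]], threshold: float = 0.8) -> set[str]:
        """Return the transactions endorsed by at least `threshold`
        of the voting nodes."""
        node_count = len(votes)
        tally: dict[str, int] = {}
        for tx_set in votes.values():
            for tx in tx_set:
                tally[tx] = tally.get(tx, 0) + 1
        return {tx for tx, n in tally.items() if n / node_count >= threshold}

    votes = {
        "node_a": {"tx1", "tx2"},
        "node_b": {"tx1", "tx2"},
        "node_c": {"tx1"},
        "node_d": {"tx1", "tx3"},
        "node_e": {"tx1", "tx2"},
    }
    print(consensus(votes))  # only tx1 clears the 80% bar
    ```

    The key contrast with mining: no node “wins” anything here, so there is no puzzle to solve and no reason to burn electricity racing the others.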

    As this process has nothing to do with mining the virtual currency, there is no need to control the timing of the verification, meaning transactions can happen within seconds rather than the 10 minutes or more it takes with Bitcoin. This is clearly a big advantage, and there are others, such as the ability for users to create a chain of IOUs, either through people they personally know and trust, or by using ripples.
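    The chain-of-IOUs idea can be pictured as path-finding through a trust graph: a payment “ripples” from payer to payee along a chain of people who each trust the next. Here’s a minimal Python sketch of that idea (the names and graph are made up for illustration; the real system is considerably more elaborate):

    ```python
    from collections import deque

    # Toy trust graph: an edge A -> B means A is willing to hold an IOU
    # from B, so a payment can pass through that relationship.
    trust = {
        "alice": ["bob"],
        "bob": ["carol"],
        "carol": ["dave"],
        "dave": [],
    }

    def iou_path(payer: str, payee: str) -> list[str] | None:
        """Breadth-first search for a chain of trusted intermediaries."""
        queue = deque([[payer]])
        seen = {payer}
        while queue:
            path = queue.popleft()
            if path[-1] == payee:
                return path
            for nxt in trust.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no trust path: fall back to paying in ripples

    print(iou_path("alice", "dave"))  # ['alice', 'bob', 'carol', 'dave']
    ```

    When no chain of personal trust exists between two parties, ripples themselves act as the universal fallback – which is part of why they have value at all.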

    But…

    Thus Ripple solves some of Bitcoin’s problems: transactions can take place more quickly, there’s no need for shaky third-party exchanges, and the whole shebang doesn’t need to waste a bunch of electricity on solving computational problems. However, I suspect Ripple will have its own problems.

    The first is to do with money laundering. This is also a big potential problem for Bitcoin – although good luck to anyone who tried that this week – but I’m a bit confused about how well-controlled these “gateways” will be. Ripple’s own explainer states that “your neighbor or corner grocery could be a gateway,” yet OpenCoin told me that “we believe these gateways should be properly licensed and regulated in the same way as other financial institutions.” Does. Not. Compute.

    The second problem is OpenCoin’s role in Ripple. The company maintains that it’s “just here to pay to develop and promote the network,” and doesn’t control Ripple, but at the same time it’s a for-profit company (hence this week’s investment) that has a vested interest in seeing the value of ripples increase. At the very least, there may be an inherent problem of perception here, particularly for those subscribing to Bitcoin’s core ethos.

    Ripple will also need to find enough “validating nodes” to ensure an above-board network consensus process. Unlike Bitcoin’s miners, these nodes don’t get anything for their efforts other than seeing the system stay up and running. Granted, they also don’t need to expend as much electricity in doing their job, but their participation is still not a sure thing.

    Finally – and perhaps most importantly – Ripple is really hard to understand, compared with traditional “fiat” currency. A lot of pieces need to be in place for it to work, and a lot of education needs to take place too. I would even go so far as to say that Ripple is more complex (even on a conceptual level) than Bitcoin, and that’s saying something.

    Still, Ripple is interesting, and perhaps having an official sponsor will make it more viable than Bitcoin. Whatever happens, it’s another step towards the post-experimental use of digital currencies.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.