Blog

  • Microsoft bundles tempt Surface Pro buyers

    Yesterday, I spent about 45 minutes at Microsoft Store San Diego, which was busy — a sight not seen since Kinect’s November 2010 launch. Shoppers came to see Surface, and there were lots of questions and explorations of both tablets, although clearly Pro was the draw. Unfortunately, only the 64GB model is in stock, which somewhat muted sales, or so I observed.

    If Surface is a failure, as so many bigmouths on the InterWebs claim, what company wouldn’t want one like this? There are many measures of success in retail, and just getting people in the door is one of them. Once inside, shoppers may buy something, or walk out feeling better about the brand, leading to sales of something else later on. “Jesus! Can you believe that Microsoft? Baby, you shop here for my birthday!”

    Sales Sense

    The “Surface sales suck” crowd likes to make big comparisons to Apple and iPad and cast anything less as failure. Again, success has many measures. Surface RT and Pro are Microsoft’s first commercial computers. The company is new to this business. Now that Apple can sell millions of new iPads or iPhones during launch weekends, the bigmouths accept no other measure. But that first million was tougher to come by when Apple was the newcomer; it took 74 days with the original iPhone. In 2007, tech bloggers and Apple cultists heralded 1,000,000 in 74 days as a big success. Yet if Microsoft does as well or better, Surface is a failure.

    Some financial analysts put holiday RT sales at around 600,000 units, which, as I explained in December and January, is on par with iPad on a per-store basis, if not better. Last month, Ryan Reith, IDC program manager, estimated fourth-quarter Surface RT shipments of 900,000 — and that’s for only about 67 days, not 90, since the tablet launched on October 26.

    I got a kick out of posting the photo above on Google+ yesterday and observing the Microsoft-hate reaction. Strangely, I don’t see these people faulting Google Nexus 10 sales, which surely aren’t high compared to iPad. Does anyone seriously think the search giant has sold 1 million 10.1-inch tablets? Yet the Samsung-built slate is trumpeted as a success.

    The Extras

    Ancillary sales are another measure of success — what you can tack on to the big purchase. Tech retailers love to sell extended warranties, because they’re generally pure profit. The number of people paying for protection far exceeds those needing new hardware because of coffee spills or other mishaps. AppleCare+ is a great value for iPad or iPhone, because insurance is otherwise costly or tough to come by. Apple charges $99 for iPad and iPhone, which extends the base warranty to two years and replaces damaged hardware for $49, up to two instances.

    Then there is all the stuff sold around the gadget, like connectors and cases. Suddenly a $499 iPad becomes $499 plus $99, $39 and $29 for AppleCare+, a basic case and a Lightning adapter. That’s good business.

    Microsoft Store sells two Surface Pro bundles:

    • $199.99: Office, Microsoft Complete, carrying case, screen protector
    • $299.99: Office, Microsoft Complete, Touch Cover, carrying case, screen protector

    Buyers can choose Office Home and Student or Office 365 Home Premium, sold separately for $139.99 and $99.99 ($79.99 with a new PC), respectively. Microsoft Complete is a two-year extended warranty that includes repair or replacement for accidental damage. It normally costs $149 for Surface Pro, but Microsoft is running a $99 promotion. Touch Cover sells for $129.99. There’s value in either bundle once you add in the carrying case; see the rough math below. The screen protector doesn’t excite me, but it is a good way to dampen glare for outdoor computing.
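    For a rough sense of where the bundle value sits, here is a back-of-the-envelope check using the list and promo prices quoted above. The carrying case and screen protector aren’t priced separately, so the savings shown understate the bundles by whatever those two items are worth.

    ```python
    # Back-of-the-envelope check of the Surface Pro bundle math, using the
    # list/promo prices quoted above. The carrying case and screen protector
    # have no separate price here, so savings are understated by their value.
    office_options = {"Home and Student": 139.99, "Office 365 Home Premium": 99.99}
    microsoft_complete = 99.00   # normally $149, currently a $99 promotion
    touch_cover = 129.99

    for name, office_price in office_options.items():
        a_la_carte_small = office_price + microsoft_complete
        a_la_carte_large = a_la_carte_small + touch_cover
        print(f"{name}: ${a_la_carte_small:.2f} a la carte vs. $199.99 bundle; "
              f"${a_la_carte_large:.2f} vs. $299.99 bundle")
    ```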

    Surface or Air?

    I’ve argued that Surface Pro competes with Windows ultrabooks or MacBook Air. The $999 Surface Pro is comparable to the $1,099 MacBook Air. Both come with a 1.7GHz Intel Core i5 processor, HD 4000 graphics, 4GB of memory and a 128GB SSD. The Surface display measures 10.6 inches diagonally, compared to the MacBook Air’s 11.6 inches. But Microsoft’s screen supports stylus and touch input and has much higher resolution (1920 x 1080 vs 1366 x 768). The Mac has a keyboard, which costs extra for Surface Pro.

    Before tax, with the more expensive bundle, Surface Pro 128 is $1,298.99 out the door. MacBook Air 128 is $1,527.90 (adding $249 for AppleCare, $139.95 for Office for Mac Home and Student 2011 and $39.95 for a carrying case). Buying Office 365 instead would reduce the price to $1,487.94, or $1,447.92 if choosing Apple’s iWork apps (Pages, Numbers and Keynote).
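    A quick sanity check of those out-the-door totals, assuming the Mac App Store’s $19.99-per-app price for Pages, Numbers and Keynote:

    ```python
    # Sanity check of the before-tax totals quoted above.
    surface_pro_128 = 999.00 + 299.99                         # Pro 128 + $299.99 bundle
    air_with_office_2011 = 1099.00 + 249.00 + 139.95 + 39.95  # AppleCare + Office 2011 + case
    air_with_office_365 = 1099.00 + 249.00 + 99.99 + 39.95    # swap in Office 365
    air_with_iwork = 1099.00 + 249.00 + 3 * 19.99 + 39.95     # Pages, Numbers, Keynote

    print(round(surface_pro_128, 2))        # 1298.99
    print(round(air_with_office_2011, 2))   # 1527.9
    print(round(air_with_office_365, 2))    # 1487.94
    print(round(air_with_iwork, 2))         # 1447.92
    ```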

    Which is the better value? You tell me. Price isn’t the only consideration; weigh which benefits matter more to you.

    Photo Credit: Joe Wilcox

  • To meet the FCC’s Gigabit Challenge, cities will have to get political

    A few weeks ago, a tremor was felt in the Force as FCC Chairman Genachowski announced his Gigabit City Challenge – an initiative to get at least one citywide gigabit network per state by 2015. The range of responses ran from cautious optimism to “is this the best we can do?”

    Meanwhile, as we were getting our heads around the Challenge, the Empire, um, incumbent telcos struck back last week in Georgia with an anti-muni-network bill that appears reasonable but would kill hopes for a gig city in the Peach State. Windstream, AT&T and Georgia’s other incumbents are incapable of delivering gigabit services, so they have taken the easy way out and lobbied the legislature to kill cities’ ability to do so. Meanwhile, most of the gigabit networks elsewhere are run or being built by municipal governments and public utilities, with just a few private companies leading gig projects.

    Even the most ardent community broadband supporters, while happy about the FCC’s gigabit challenge, believe the devil is in the details. Sure, quite a few fiber networks have moved past the planning stage. But it’s going to take hard work to meet the FCC challenge. Some of the hurdles are money-related. Others come from broadband policies and legislation that need to be approved, improved, or, as in Georgia, flat-out rejected.

    The road to gigabit cities

    FCC Chairman Julius Genachowski.

    The FCC news release on the Gigabit City Challenge offers few specific details for moving forward other than creating a clearinghouse for ideas and best practices. A panel of community broadband experts and advocates convened on my Gigabit Nation radio talk show to put a few brush strokes on this canvas so listeners could at least get an initial picture of what lies ahead.

    The panel consensus was that the FCC and other policymakers must do more to remove ALEC-type barriers to community networks (ALEC being the American Legislative Exchange Council). The FCC’s National Broadband Plan specifically advocates preventing states from restricting local broadband solutions, and just Friday FCC Chairman Genachowski formally voiced his opposition to this type of legislation. Communities are displaying a range of creative solutions for bringing broadband where it needs to be, and this must be encouraged, not hijacked by telcos that refuse to serve the areas most in need.

    The panel went on to describe a need for the FCC, broadband advocates and others to understand that a lot more education needs to happen.

    “We have to make sure the audience we’re trying to reach is ready for the message we’re trying to send,” states Arkansas State Senator Linda Chesterfield, a legislative champion for greater broadband deployments. “We have a youthful population here who sees the necessity of a gigabit network. Then you have the people with BlackBerries who think these are good enough to get online. Until you have an audience that is ready to accept the services you’re trying to render, efforts to convince them to support this initiative will do no good.”

    Everybody partner up

    Putting aside the discussion of large incumbents stifling communities’ efforts, many private sector companies collectively are also a necessary component of any drive for more gigabit cities. Yet they face barriers too. From panelist and Broadband Communities Magazine Editor Masha Zager’s perspective, “The Chairman’s goal is achievable in that there are providers who could bring this capacity to communities, but aren’t doing so today. However there is the question of whether they can do so and meet their ROI needs.”

    Jim Baller, president of the Baller-Herbst Law Group, notes that many legal issues hold potential providers back from developing gigabit networks. These include the IRS’s ‘private use’ rules that discourage public-private partnerships, FCC limitations on access to universal service subsidies (such as the preference for price-cap carriers) and FCC rules that adversely affect small providers.

    Given the challenges facing both communities and private-sector companies, one logical course of action is a greater pursuit of public-private partnerships in which both groups are full partners in projects.

    The panelists went on to describe a number of policy, logistical and financial issues that public, private and government stakeholders need to resolve if the U.S. wants to meet or surpass the FCC’s initiative. As people roll up their sleeves and prepare for some heavy lifting, it will be difficult to ignore the 800-pound gorilla in the room – politics.

    “I’ve been very critical of the FCC, but I believe this is a good initiative from this particular FCC,” stated Christopher Mitchell, Director, Telecommunications as Commons Initiative at the Institute for Local Self-Reliance. “You have to recognize the power of the carriers in Washington. If the FCC had come out with a truly bold initiative that would have knocked us all backwards, it would have incited the carriers to give a whole bunch of money to Congress, who would have been on the FCC and probably taken away the FCC’s authority. We have to recognize that we must change more things if we’re going to have an FCC that will take the actions we would like to see it take.”

    As much as some people prefer to avoid the hurly burly of the state and national capitals, it is almost inevitable that every broadband project will become political, for better or for worse. Therefore it is best to be prepared for that which we cannot avoid.

    Craig Settles is a consultant who helps organizations develop broadband strategies, host of radio talk show Gigabit Nation and a broadband industry analyst. Follow him on Twitter (@cjsettles) or via his blog.


  • Google May Open A String Of Retail Stores, But What Does It Hope To Gain?


    Microsoft and Apple already have their own physical retail stores, but thus far Google has managed to resist that particular temptation.

    If a recent report from 9to5Google is to be believed though, that may not be the case much longer. According to a single “extremely reliable source,” Google will erect its own standalone stores by the holidays in an effort to more effectively push its hardware to consumers.

    These stores will reportedly carry Google’s Nexus devices as well as Chromebooks, but the curious report goes on to note that Google conceived the project as a way to get its ambitious Glass project in front of more people. But is this all really necessary?

    Let’s just say that these rumors are true — the value of something like Glass can be hard to discern without seeing what it brings to the table first-hand, but the more practical thing to do would be to leverage its existing partnerships. Google has a fair number of Chrome Zone experience areas already installed in existing retailers like Best Buy and PC World in the U.K., and those stores already get plenty of foot traffic (if perhaps less than in recent years). Even if Google had to pay for some more experienced folks to demo Glass, it could still be less expensive and potentially more impactful than going it alone in the retail space.

    Sure, there’s something to be said for Google controlling that experience end-to-end the way Apple does, but that approach isn’t without its potential pitfalls. Putting Glass aside for a moment, Google may have a hard time turning a profit off these stores thanks to some of its other products — devices like the Nexus 4 smartphone and the Nexus 7 and 10 tablets are sold at or around cost, meaning that Google hardly makes any money on them. Google’s hardware then is something of a Trojan horse (and not all that different from what Amazon offers): it’s generally cheap and powerful enough to make it worth a purchase, and Google has been aiming to make up that money in Play Store revenue down the line.

    That’s all well and good, but running a physical store takes a decent chunk of money. Rent is a pain, as are utilities, training and staffing costs, paying for interior design and fixtures; there’s a considerable amount of overhead that goes into a venture like that. Sure, Google could still make some money in the long run but it doesn’t seem like much of a sure thing unless Google manages to perform very, very well in terms of sales volume. If we’re looking at this whole situation purely in terms of dollars and cents, a big retail push seems like a very dicey decision.

    Of course, that’s not to say this whole thing is completely impossible — Google may be going after more than just money. A move like this may serve to solidify Google as a real consumer brand instead of just that thing you use when you want to scour the Internet for, well, everything. That sort of shift in public perception could only help when it comes to pushing hardware products in the future, especially if Google really does end up creating ambitious new devices on its own. Rumors of a hi-res Chromebook Pixel have more or less petered out (thanks in large part to the incredibly sketchy way that its supposed existence was revealed), but the furor it caused shows rather nicely that there’s interest for that sort of high-end Chrome computing experience.

    And to return to the whole issue of Google Glass, the notion of carving out small retail locations to highlight new and novel Google-powered experiences isn’t without precedent. Consider Google’s Fiber Space in Kansas City — while it’s set up to provide in-person customer support for Google Fiber’s growing number of users, it’s also meant to showcase what the Fiber service is capable of. It’s a very pretty little area that Google has put together, and it already plays host to at least a few Chromebooks, so it’s not inconceivable that Google would take that concept, tweak it a little, and transplant it into some “major metropolitan areas.”

    Still, if true, this retail crusade would be a pretty drastic little about-face for Google. Google Shopping’s Sameer Samat told AllThingsD just this past December that the company doesn’t “view being a retailer right now as the right decision,” so either this is all bunk, or Google’s having to adjust to the sea change more rapidly than it expected.

  • What to do when Amazon decides to jump into your business

    Amazon’s recently introduced Elastic Transcoder service makes it relatively easy to encode video at scale for web distribution. It’s a great addition to Amazon’s service portfolio. It’s also yet another example of AWS competing against the very customers that rely on its infrastructure to power their developer-targeted services.

    When Amazon introduced its new service, there were cries that it would put Zencoder, another cloud transcoding provider, out of business with low pricing. I think this concern is overblown, because Zencoder solidly beats AWS on a number of key dimensions besides price that are important to its customers (more on that below).

    Similarly, Sendgrid, a cloud-based email delivery provider, went up against Amazon after the company introduced its transactional email service in January 2011. Two years on, not only is Sendgrid still in business, it’s thriving. It counts companies like Pinterest and Foursquare as customers, and it raised a further $21 million even after some had pronounced it dead. (Note: My company, Screenlight, is a paying customer of Zencoder and Sendgrid, but we have no other financial or advisory relationship; I chose them for this piece only because they’re examples with which I’m intimately familiar.)

    There are plenty of other pain points in the cloud where developers have staked a claim that may tempt Amazon. The question then is what can a company do when suddenly matched up against an 800-pound gorilla? Here’s a look at the successful strategies employed by Zencoder and Sendgrid.

    Give your target customer better options

    Elastic Transcoder is a fairly representative example of how AWS launches a new service: It starts with a bare-bones offering that appeals to a broad base of customers in different industries. Amazon then rounds it out and adapts the service based on customer feedback.

    You can win by knowing exactly who your target customer is (it may not be the typical AWS customer) and delivering the full suite of features that they value. By that I don’t mean a laundry list of features, but rather the key features that they need and are willing to pay for. All of the things you learned through customer development and talking with your customers will pay off here. You understand your customer’s problems better than anyone else, right?

    In Zencoder’s case, it offers a much richer feature set than AWS Elastic Transcoder (i.e., HLS streaming support, closed captioning, live streaming and so on). All of these features are likely of high enough value to Zencoder customers that it’s somewhat protected from price-based competition. For customers to switch to Amazon, they have to be willing to give up these core features to save money. For many companies that makes it a non-starter.

    Likewise, Sendgrid continues to differentiate its service from AWS SES by offering far more features (dedicated IP addresses, advanced tracking and deliverability features, advanced API features, etc.). All of this is backed by phone, email, chat and forum support. For basic, low-cost, highly scalable email sending, AWS may work for a lot of customers. But for those with more advanced deliverability needs (and a willingness to pay), Sendgrid is one of several superior options.

    Create a better user experience

    With Amazon, a new service like Elastic Transcoder is just another API that is offered alongside many others. With AWS, support is a paid-service offering. When customers are getting started or are experiencing problems, their only recourse is to pore over the documentation and dig through forums.
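    For illustration, here is roughly what “just another API” looks like in practice: a minimal sketch of submitting a transcoding job with the AWS SDK for Python (boto3). The pipeline ID, S3 keys and preset ID below are placeholders, not real values.

    ```python
    # Minimal sketch: submit one transcoding job to AWS Elastic Transcoder.
    # Assumes a pipeline (input/output buckets and IAM role) already exists;
    # the IDs and keys here are placeholders for illustration only.
    import boto3

    transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

    response = transcoder.create_job(
        PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
        Input={"Key": "uploads/source.mov"},        # object in the pipeline's input bucket
        Outputs=[{
            "Key": "web/source-720p.mp4",
            "PresetId": "1351620000001-000010",     # a system preset ID (placeholder)
        }],
    )
    print(response["Job"]["Id"], response["Job"]["Status"])
    ```

    Everything around that call (monitoring, choosing presets, handling failures) is left to the developer and the documentation.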

    By contrast, companies like Zencoder and Sendgrid offer premium support services. In my experience with both companies, there has always been a real human ready to help answer a question or solve a pressing problem. Thus to differentiate your business, you need to offer the care and attention that Amazon simply can’t lavish on a single service.

    The opportunity to differentiate through customer experience goes well beyond offering support when things go wrong. Every touch point offers an opportunity. For example, as someone goes through the sales funnel, there is room to provide videos and clear marketing material that educates customers  and outpaces the static efforts of Amazon. Likewise, the customer on-boarding process can be addressed with timely emails and outreach that helps resolve common stumbling blocks when getting started. (For ideas around this, check out Customer.io.)

    Design of the user interface provides another powerful differentiator, and since most customers interact with infrastructure services through an API, it’s particularly important. Here Zencoder does an excellent job with a clean and well-documented API that includes a request builder that simplifies integration and testing.

    Price based on value – and communicate it

    Price is only one of the four Ps. The only way to sustainably differentiate your service based on price is if you are the lowest-cost provider; when you are competing against your infrastructure provider, that’s not going to happen.

    In response to a discussion on Hacker News about entry-level prices that were 50 percent lower than Zencoder’s, CEO Jon Dahl made a great point by explaining why a 50 percent lower price doesn’t necessarily add up to a better value.

    He’s absolutely right. Whether AWS is 1/2 or 1/10 the price per unit of your service, your potential customers need to know that Amazon vs. You is an apples-to-oranges comparison. Furthermore, they need to clearly understand why your oranges taste better and deserve a higher price.

    In some industries, particularly perfectly competitive ones, price is the dominant attribute that matters to customers. However, in most other markets, there are additional value drivers for your customers. The key to competing against AWS is to ensure that your value proposition delivers against these attributes, and is priced accordingly. When Amazon shows up, instead of panicking, slashing prices and getting into a price war you’re bound to lose – accelerate innovation and double down on the customer experience.

    Chris Potter is co-founder of cloud-based video collaboration and sharing service Screenlight. Follow him on Twitter @potta.


  • Of course Microsoft limits Office 2013 rights

    I’m not surprised about the weekend furor over changes to Office 2013 retail licensing terms. Gregg Keizer, writing for Computerworld, has done some of the best reporting on this topic. He deserves your pageviews, starting with this story. I can confirm what he writes, that the new End User License Agreement restricts usage to one PC and isn’t transferrable. Whether or not Microsoft actually enforces the provision, or changes it, is another matter. We’ll see.

    What does perplex me: why there is no backlash about other licensing-term changes that are considerably more onerous and costly. As I explained last month, “Microsoft really doesn’t want you to buy Office 2013”. That is the reason for all these licensing changes. The company wants consumers to purchase Office 365 instead.

    What the Hell?

    Microsoft’s method is simple: carrot and stick. The carrot is generous Office 365 licensing terms, which permit the productivity suite on up to five devices and provide cloud-connected benefits. The stick is more restrictive licensing for the boxed suite. Office 2010 users can take the license with them — it’s not bound to a single PC. The successor is. (Microsoft instituted similar restrictions on Windows during the last decade, if I recall rightly.) Under the old terms, Office Home and Business or Professional edition buyers could install on up to two PCs, and Home and Student buyers on up to three. Office 2013 reduces the licenses to one.

    The software giant doesn’t want to sell perpetual licenses (the buyer pays once and can use the suite forever), and uses more restrictive terms to discourage consumers from choosing them. By contrast, Office 365 is a subscription product that gives the user access to the software as long as he or she pays. If you don’t renew the service, that’s the end of Office.

    Conceptually, Microsoft takes less per license, since consumers pay $99 per year versus, say, $399 for Office Standard. The company realizes several benefits:

    • Converts customers to subscriptions, which evens out revenue, reducing sales spikes and slides around new releases and between them.
    • Generates new revenue. Microsoft otherwise sells to a saturated market — growth is gone — that upgrades only every four years or more.
    • Reduces fragmentation by keeping consumers on the newest Office version all the time, allowing Microsoft to innovate faster and pass updates on quickly to users.
    • Brings Office and customers using it to the cloud, where they get benefits of anytime, anywhere and on-anything computing (and hopefully keeps them from Google Docs).

    Stick: Pay More

    The licensing-term changes essentially are price increases. Big ones. I’m surprised how little complaint there is about that. Recapping some of what I explained in my September story “What Office 2013 pricing means to you”, the changes effectively raise the per-license cost by as much as 180 percent.

    For example, Office Home and Student 2010 lists for $149.99, with the aforementioned three licenses. Its successor, with one license, is $139.99. Microsoft nearly trebles the cost of Office Home and Student 2013 for anyone wanting three licenses (from $149.99 to $419.97). Home and Business 2013 is $219.99 for one license, compared to $199.99 for the 2010 version, which comes with two licenses. Professional: $399.99 for 2013; $349.99 for 2010. Companies wanting two Office 2013 licenses will pay $439.98 for Home and Business or $799.98 for Professional.
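    Working out the per-license math behind that 180 percent figure (a quick sketch, using the list prices and 2010 license counts quoted above):

    ```python
    # Per-license cost, Office 2010 retail (multi-license) vs. Office 2013
    # (one license), using the list prices quoted above.
    editions = {
        # edition: (2010 price, 2010 licenses, 2013 single-license price)
        "Home and Student":  (149.99, 3, 139.99),
        "Home and Business": (199.99, 2, 219.99),
        "Professional":      (349.99, 2, 399.99),
    }

    for name, (price_2010, seats_2010, price_2013) in editions.items():
        per_seat_2010 = price_2010 / seats_2010
        increase = (price_2013 - per_seat_2010) / per_seat_2010 * 100
        print(f"{name}: ${per_seat_2010:.2f} per license in 2010 -> "
              f"${price_2013:.2f} in 2013 (+{increase:.0f}%)")
    ```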

    For many consumers or small businesses, the ability to install Office on two or more PCs for a lower price hugely appeals. Microsoft will still let them do that, if they buy into the subscription model. We go from stick to carrot, which for consumers starts at $99 per year, with rights to install Office on up to five PCs. By the way, Apple and Google impose no limitations like this on their productivity suites.

    Carrot: Get More

    How does Office 365’s value compare? That $149.99 price for Office Home and Student 2010 is a one-time payment covering three licenses. By the second year of Office 365, the buyer has paid about $50 more than that ($99.99 x 2 = $199.98) to continue using the product; by year three the subscriber has paid roughly double the 2010 price ($99.99 x 3 = $299.97 versus $149.99). The one-time payment covers you indefinitely, while Office 365 is another $99.99 every year, and that’s assuming Microsoft doesn’t increase the subscription price later.

    However, the Office version included with 365 is equivalent to Professional, which adds Access, Outlook and Publisher to Excel, OneNote, PowerPoint and Word. That version, with a single perpetual license, sells for $399.99, or 300 percent more than one year of Office 365. Then there are added incentives for the subscription version, such as Office app cloud access via a browser on any PC, 20GB of SkyDrive storage and 60 minutes of Skype calls per month.
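    Put another way, here is the running cost over three years at the prices quoted above, assuming the subscription stays at $99.99 per year:

    ```python
    # Cumulative cost over three years at the prices quoted above, assuming the
    # Office 365 subscription price holds at $99.99 per year.
    office_365_per_year = 99.99
    office_professional_2013 = 399.99   # perpetual, one license, comparable apps
    home_and_student_2010 = 149.99      # perpetual, three licenses, fewer apps

    for year in (1, 2, 3):
        print(f"Year {year}: Office 365 ${office_365_per_year * year:.2f} | "
              f"Professional 2013 ${office_professional_2013:.2f} | "
              f"Home and Student 2010 ${home_and_student_2010:.2f}")
    ```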

    So from another perspective, Office 365 is comparatively a helluva bargain, as long as the buyer doesn’t care about having a perpetual license. To be honest, I wouldn’t.

    Again, that’s the carrot. If you want to eat sweet and use Microsoft Office, subscription pricing is the future. The point: Microsoft rewards customers choosing Office 365 and penalizes those opting for perpetual license.

    Photo Credit: mikeledray/Shutterstock

  • Synthetic molecule first electricity-making catalyst to use iron to split hydrogen gas

    To make fuel cells more economical, engineers want a fast and efficient iron-based molecule that splits hydrogen gas to make electricity. Online Feb. 17 at Nature Chemistry, researchers report such a catalyst. It is the first iron-based catalyst that converts hydrogen directly to electricity. The result moves chemists and engineers one step closer to widely affordable fuel cells.

    “A drawback with today’s fuel cells is that the platinum they use is more than a thousand times more expensive than iron,” said chemist R. Morris Bullock, who leads the research at the Department of Energy’s Pacific Northwest National Laboratory.

    His team at the Center for Molecular Electrocatalysis has been developing catalysts that use cheaper metals such as nickel and iron. The one they report here can split hydrogen as fast as two molecules per second with an efficiency approaching those of commercial catalysts. The center is one of 46 Energy Frontier Research Centers established by the DOE Office of Science across the nation in 2009 to accelerate basic research in energy.

    Fuel cells generate electricity from a chemical fuel, usually hydrogen. The bond within a hydrogen molecule stores that energy: two electrons connect two hydrogen atoms like a barbell.

    Fuel cells use a platinum catalyst — essentially a chunk of metal — to crack a hydrogen molecule open like an egg: The electron whites run out and form a current that is electricity. Because platinum’s chemical nature gives it the ability to do this, chemists can’t simply replace the expensive metal with the cheaper iron or nickel. However, a molecule that exists in nature called a hydrogenase (high-dra-jin-ace) uses iron to split hydrogen.

    Bullock and his PNNL colleagues, chemists Tianbiao “Leo” Liu and Dan DuBois, have taken inspiration for their iron-wielding catalyst from a hydrogenase. First Liu created several potential molecules for the team to test. Then, with the best-working molecule up to that point, they determined and tweaked the shape and the internal electronic forces to make additional improvements.

    One of the tricks they needed the catalyst to do was to split hydrogen atoms into all of their parts. If a hydrogen atom is an egg, the positively charged proton that serves as the nucleus of the atom would be the yolk. And the electron, which orbits around the proton in a cloud, would be the white. The catalyst moves both the proton-yolks and electron-whites around in a controlled series of steps, sending the protons in one direction and the electrons to an electrode, where the electricity can be used to power things.

    To do this, they need to split hydrogen molecules unevenly in an early step of the process. One hydrogen molecule is made up of two protons and two electrons, but the team needed the catalyst to tug away one proton first and send it away, where it is caught by a kind of molecule called a proton acceptor. In a real fuel cell, the acceptor would be oxygen. 

    Once the first proton with its electron-wooing force is gone, the electrode easily plucks off the first electron. Then another proton and electron are similarly removed, with both of the electrons being shuttled off to the electrode.
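    Put as a textbook half-reaction, what the catalyst drives is the standard oxidation of molecular hydrogen, with the protons handed to the acceptor and the electrons delivered to the electrode:

    ```latex
    \[
      \mathrm{H_2 \;\longrightarrow\; 2\,H^{+} \;+\; 2\,e^{-}}
    \]
    ```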

    The team determined the shape and size of the catalyst and also tested different proton acceptors. With the iron in the middle, arms hanging like pendants around the edges draw out the protons. The best acceptors stole these drawn-off protons away quickly.

    With their design down, the team measured how fast the catalyst split molecular hydrogen. It peaked at about two molecules per second, thousands of times faster than the closest, non-electricity making iron-based competitor. In addition, they determined its overpotential, which is a measure of how efficient the catalyst is. Coming in at 160 to 220 millivolts, the catalyst revealed itself to be similar in efficiency to most commercially available catalysts.

    Now the team is figuring out the slow steps so they can make them faster, as well as determining the best conditions under which this catalyst performs.

    This work was supported by the Department of Energy, Office of Science.


    Reference: Tianbiao Liu, Daniel L. DuBois and R. Morris Bullock. An iron complex with pendent amines as a molecular electrocatalyst for oxidation of hydrogen, Nature Chemistry, February 17, 2013, doi:10.1038/NCHEM.1571.

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website.

  • Why is Facebook’s e-commerce offering so disappointing?

    Facebook probably has high expectations for e-commerce, but it seems that the company and the brands and businesses present on the social network are out of sync.

    Some commentators see enormous growth opportunities; consultants Booz & Company predict global social commerce sales will reach $30 billion in 2015. Others, however, see huge effort for no return. Retail giants like the Gap and the department store chains JC Penney and Nordstrom have closed their Facebook shops. And a recent study by W3B suggests that just two percent of Facebook’s billion-plus users have ever even made a purchase through the social network.

    My company has more than 2,000 shops on Facebook, and yet we see more orders from New Zealand, where we have no marketing, no sales presence and no country-specific website, than we do from Facebook.

    So where is Facebook’s e-commerce effort missing the mark? There are two key questions: Is Facebook even the right forum for shopping, and if so, are companies trying hard enough?

    Commerce versus culture

    While the social media marketing manager dreams of millions of loyal customers eagerly sucking up any new product and enthusiastically passing it on to their stable of 140 friends, in reality most Facebook visitors pop in for a virtual beer with a buddy, or to share and compare cat or holiday photos.

    It’s not particularly active communication, and there’s not a direct opportunity to buy anything. It shouldn’t be surprising then that any current F-commerce successes are typically limited to “quick wins” where a visitor is already motivated by the offer of specific deals or the latest iPad sweepstakes.

    Sticking to the metaphor of Facebook as a bar, companies have to accept that people generally aren’t receptive to an inappropriate sales pitch while trying to relax. (When was the last time you saw a bank representative rocking out in the club and then turning around and offering their financial services products?) Even a good friend would be de-friended if they pushed their own projects or purchases too often, or even worse, used affiliate links to earn five to seven percent commission by selling to a “friend.”

    Making the commonplace complex

    That said, Facebook is failing brands by making it difficult to engage in e-commerce on the site. It’s very hard for businesses to highlight that they even have a shop, and every redesign of the site means yet another reworking of it.

    Facebook also wants all transactions to be carried out via its own checkout system. And since other checkout methods aren’t integrated to use Facebook registration as a common login, customers are forced through multiple logins for delivery and payment information. This tedious, frustrating process is a conversion killer; the consumer is better off leaving Facebook to shop where the process is more streamlined and reliable.

    A lack of natural navigation for visitors, both in finding a shop and checking out, is another failing for F-commerce. Consumers have experienced enough good communication to expect easy-to-use shopping. When they don’t get it, they just go elsewhere.

    And finally, while Facebook Credits may have advantages when it comes to virtual goods and micropayments, it’s doubtful the average consumer really wants to deal with yet another form of currency. (As someone who does business in France, Germany, the UK and the U.S., I already have three, thank you very much.)

    Unsurprisingly, these difficulties mean that brands and retailers aren’t doing very much on Facebook. If you make it too hard, consumers will never discover that they can buy on the site, and brands will shy away from investment, especially when they see no return.

    Why there’s still hope

    Despite all those setbacks, I’d still argue that the conditions are right for F-commerce:

    • Facebook still has a very large, loyal and active user base. If you have customers, it is more than likely that they’re on Facebook. It is also possible that they react differently to real content than to pure advertising.
    • More and more people are now buying from the living room (and soon probably even from the bar). For many of them Facebook could be the first place to look for an opportunity to make a purchase, or one of the most important starting points.
    • And while they currently aren’t making many purchases, many Facebook visitors happily talk about brands, recommend products, and share shopping experiences on Facebook. It’s ripe for a well-designed ecommerce approach.

    So what can Facebook do to help brands experience better F-commerce?  It definitely needs a more friendly way to integrate a shop. The eBay setup, for example, is much easier for retailers to edit and build a section where customers can easily shop online. And Facebook could improve its mobile offering, as the rise of the tablet begins to dictate online sales.

    We all know that brands need to maintain their relevance on Facebook with engaging and topical content that activates fans, makes them into advocates and creates a presence for the brand via their fans’ news streams. Yet, so far we haven’t come across anyone in ecommerce who’s truly excited by Facebook. Facebook has to change that if it’s going to make e-commerce a success.

    Phillip Rooke is CEO of personalized clothing commerce platform Spreadshirt. Follow him on Twitter @PhillipRooke.


  • The newest overhyped mobile industry buzzword: LTE-Advanced

    Admittedly, mobile technology evolves at a very fast pace. But somewhere along the way we seem to have skipped an entire generation of networks.

    This week Broadcom unveiled its first LTE chipset for mobile devices, but it wasn’t just any LTE chip, it was an LTE-Advanced chip. Sprint and T-Mobile were late to the LTE party, but that’s okay. They aren’t building any old LTE networks. They’re building LTE-Advanced networks.

    Everywhere you look, some infrastructure vendor is bragging about its LTE-Advanced base station or some carrier is talking up its LTE-Advanced-capable network. With these claims, it’s hard to imagine that just two years ago plain-Jane LTE was on the cutting edge of mobile technology.

    It’s all hogwash.

    There are no true LTE-Advanced networks, chips or devices in the market today and there won’t be for many years. The mobile industry is playing an old game: technology inflation.

    You may remember that a few years back T-Mobile and AT&T magically transformed their HSPA networks from 3G systems into 4G systems by waving their marketing wands. That technology inflation, however, began years before, when Sprint first attached the 4G moniker to its WiMAX network.

    Even today, mobile technology purists would argue the world has yet to see its first 4G network, since no carrier system yet meets the original 4G guidelines established by the International Telecommunication Union. Instead of condemning the industry’s fast-and-loose play with the term, the ITU simply caved, retroactively defining 4G as pretty much anything the carriers wanted it to be. 4G has always been an iffy term, but after the ITU dropped the ball it became a meaningless one.

    Now the same thing is happening with LTE. In an effort to seem more progressive than their competitors, carriers, infrastructure vendors and chipset makers are finding loopholes in the technical standards to elevate their LTE technologies to the rarified status of LTE-Advanced. Basically, the industry is carrying around a Cadillac keychain but it’s really driving a Buick.

    For a more detailed explanation of what LTE-Advanced actually is, you can check out these posts from Stacey Higginbotham and me about the technology’s nuts and bolts (if you’re a GigaOM Pro subscriber, there’s also this more in-depth piece). Here’s the general gist: LTE is an iterative technology, much like 3G HSPA before it. Just as the industry started out with slower UMTS networks and migrated to faster HSPA and HSPA+ systems, LTE will go through the same evolutionary process over the next decade or so.

    Nokia Siemens Networks’ conception of a heterogeneous network

    With each new step on that evolutionary path, downlink and uplink speeds will get faster and more resilient, latency levels will drop and overall network capacity will balloon. At some point we’ll follow that path into a set of technologies and techniques that the mobile standards bodies have defined as LTE-Advanced.

    We’ll start seeing big changes in how cellular networks and devices are designed. Infrastructure and handset makers will start bolting multiple pairs of antennas onto their towers and devices. Carriers will be able to bond disparate bands of spectrum together to create super-connections. Small cells and Wi-Fi access points will merge into the fabric of our big umbrella cellular grids creating the heterogeneous network. But we’re nowhere that point today.

    The devil is in the technical specs

    It’s important to note that LTE-Advanced isn’t a monolithic technology; it’s really a collection of technologies. You can think of LTE-Advanced as a menu from which carriers will order depending on their needs. Some will order up the improved air interfaces, while others will munch on multiple-antenna or advanced interference-mitigation techniques — many operators will do all of the above.

    One operator’s LTE-Advanced is going to look very different from another operator’s LTE-Advanced, but there are some minimum guidelines. One of those guidelines is the amount of capacity the network will support over a single 20 MHz swathe, or “carrier,” of spectrum. According to the standards group that defines these things — the 3GPP — at the very least an LTE-Advanced carrier should deliver more than 300 Mbps of downlink capacity or more than 50 Mbps of uplink capacity.

    I’m going to pick on Broadcom for a minute, only because it happens to be the most recent offender. In its materials, Broadcom clearly states its super-chip supports 150 Mbps on the downlink and 50 Mbps on the uplink. Impressive, yes, but it’s not LTE-Advanced. What Broadcom has built is known in industry parlance as an LTE user equipment category 4 chip. LTE-Advanced doesn’t start until category 6. This is fairly technical but take a look at this chart of user equipment categories compiled by Wikipedia editors (A quick reference guide: Release 8 is LTE and Release 10 is LTE-Advanced):

    LTE category speed chart
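    For context, here are the chart’s key rows, with approximate, rounded peak downlink rates from the 3GPP UE category definitions (Release 8 is LTE, Release 10 is LTE-Advanced):

    ```python
    # Approximate peak downlink rates by 3GPP UE category (rounded figures).
    # Release 8 = LTE; Release 10 = LTE-Advanced (Category 6 and up).
    peak_downlink_mbps = {
        "Category 3 (Release 8)": 100,
        "Category 4 (Release 8)": 150,
        "Category 6 (Release 10)": 300,
    }
    # A Category 4 chip reaches only half of Category 6's 300 Mbps floor.
    print(peak_downlink_mbps["Category 4 (Release 8)"] /
          peak_downlink_mbps["Category 6 (Release 10)"])   # 0.5
    ```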

    Broadcom is only halfway to even the minimum definition of LTE-Advanced’s speed specs of 300 Mbps. The same goes for Qualcomm and any other LTE chip vendor. In fact, today’s networks are right smack in the middle of the regular LTE standard (maxing out at 100-150 Mbps on the downlink), and they’re probably going to remain that way for some time.

    So how is everyone getting away with calling their products LTE-Advanced? Why, through marketing of course. They’ve latched onto a single spec in the LTE-Advanced standards, a technique called carrier aggregation. Carrier aggregation is the super-connection technology I mentioned earlier, and in truth it’s older than the hills. T-Mobile and many other global carriers already use it in their networks to support their 42 Mbps services.


    By boasting technical support for carrier aggregation on LTE networks, marketers have made the huge leap to LTE-Advanced, which is ridiculously misleading. It’s the equivalent of ordering a Coke and then claiming you’ve indulged in a full meal.

    We’re going to get to LTE-Advanced eventually, and those networks will be truly awesome. But the industry isn’t doing itself any favors by promising us technology it can’t yet deliver. It’s the 4G overhype all over again, and it needs to stop.

    Feature image courtesy of Shutterstock user B & T Media Group Inc.


  • Happy Valentine’s, Google — see you in court

    Payam Tamiz may not be a name very well known in Silicon Valley, or indeed much beyond his small hometown of Margate, a dilapidated coastal resort not far from London. But the wannabe politician has discovered a way to get the giants of the internet to sit up and take notice.

    This week Tamiz made waves with an appeal against Google, which he has been trying to sue over defamatory comments made about him in a Blogger post. In a case that goes back to 2011, Tamiz had argued that Google was effectively the publisher of a series of comments calling him, falsely, a thief and a drug dealer, and should have deleted them as soon as it was made aware of them. Google did delete the comments, but only after a five-week gap.

    Tamiz is familiar with online controversy: one reason he was a lightning rod for angry comments in the first place was that he stepped down as a local election candidate in 2011 after calling Margate’s women “sluts” on Facebook. And so, when he did not originally win his case — the first judge ruling that Google was not the publisher of the comments — he appealed to a higher court. There, Google’s inaction was found to be troubling, though the court did not actually overturn the libel ruling itself.

    As the Financial Times reported:

    Although Lord Justice Richards and Lord Justice Sullivan agreed with the original ruling that Google was not the primary or secondary publisher of the content it hosted, they said it was “at least arguable that some point after notification Google became liable for continued publication of the material”.

    The Lords Justice likened the situation to a 1930s court case in which a golf club was held responsible for defamatory material left on its noticeboard because it failed to remove it after it was notified.

    Cue the shrill sound of the press screeching into action. “Blogger.com libel case opens door for internet giant being required to monitor users’ posts”, squealed the Daily Mail with barely contained delight. Except, as the story itself outlines, the headline is essentially trolling — Tamiz was denied his libel claim and asked to pay 50 percent of Google’s legal costs, likely a tidy sum. And it’s a stretch to suggest, as much of the commentary does, that this is another step towards internet regulation — asking a company to respond to notices of illegal content may not be popular (just see the DMCA), but it is reasonable to expect it to comply with local law.

    Still, Tamiz — and the kerfuffle around his case — does show the amount of energy being expended around online libel in Britain right now.

    Defamation laws in the U.K. are notoriously harsh, in large part because they lean in favor of the plaintiff and put the burden of proof on the defendant: it’s a case of “prove your comments were true” rather than “prove their comments were false”.

    And the precedent for defamation in online publishing stretches back 15 years, to the case of Godfrey v Demon Internet Service, in which a physics lecturer sued an ISP over comments made in a Usenet group it hosted. The ISP settled the case because a pre-trial ruling intimated that it was potentially culpable: despite knowledge of the situation, it had refused to act for 10 days. Although the award was small — just £15,000 in 1997, the equivalent of around $33,000 today — it laid the groundwork in Britain.

    It’s one major reason many media companies employ battalions of comment moderators, and carefully police the comment threads on their own stories.

    But remember, we are all media companies now. And that means that we are all open to the same set of rules. There have also been plenty of high-profile cases on Twitter and Facebook against individual users, but so far there has not been much success in taking on platform providers themselves. Just last week a judge in Northern Ireland ruled that while anonymous comments made on Facebook were defamatory, Facebook itself was not liable.

    Still, with Godfrey in the background and more and more cases coming along, you can understand why people see Tamiz’s case as another push at a brick in the wall between platforms and publishing.

    Yes, everyone’s a media company now: and eventually that will go for Google, Facebook, Twitter and the rest as much as it does you and me.


  • New playlists: “Ancient clues,” “Planes, trains and automobiles” and “Are we alone in the universe?”

    TED playlists are collections of talks around a topic, built for you in a thoughtful sequence to illuminate ideas in context. This weekend, three new playlists are available: “Ancient clues,” “Planes, trains and automobiles” and “Are we alone in the universe?”

    Ancient clues
    Five fascinating talks by archaeologists and evolutionary biologists about humanity’s beginnings and journey.

    Planes, trains and automobiles
    Drive a plane? Race a car with your eyes closed? Fly? 11 innovators in transportation show that getting from point A to point B doesn’t have to be boring.

    Are we alone in the universe?
    Can it really be possible that Earth is the only life-sustaining planet in existence? These 5 speakers think there might just be something or someone else out there, and urge us not to stop searching.

  • This week in cloud: Amazon upsets Apple; NTT backs Cloud Foundry; cloud taxes in dispute

    Amazon bests Apple in consumer appeal

    Amazon is the most widely admired U.S. company, edging out last year’s favorite, Apple, according to the new Harris Interactive poll on most reputable companies. The online bookseller and cloud services provider ranked in the top five in five of six criteria, and its combined reputation quotient, or “RQ,” score was 82.62. Harris calculates the RQ from factors including quality of products and services, workplace environment, social responsibility, financial performance and emotional appeal, querying some 14,000 respondents.

    Anything over 80 is viewed as excellent. Amazon got nearly 100 percent positive rankings on “all measures related to trust,” tremendous support and “word of mouth,” according to Harris’ summary. Those words must come as music to Amazon CEO Jeff Bezos’ ears. He continuously champions Amazon’s customer service and low pricing as key to its success — although some bearish Wall Streeters might differ with him on that.

    To be fair, this score is probably more related to Amazon’s consumer-focused e-commerce service than to its less visible (to consumers, anyway) Amazon Web Services IT-services-for-rent business.

    Apple trailed Amazon with an RQ of 82.54.

    Other fun facts from Harris:

    • Bank of America remained in Harris’ bottom five companies, but also saw the largest reputation rebound, of 6 points.
    • Google was the only other tech company in the top ten, with an RQ of 81.32
    • Microsoft ranked 15th with an RQ of 76.46
    • Dell came in 26th with 73.05
    • IBM logged in at 28th at 72.21
    • Hewlett-Packard ranked 34th with 70.01
    • Facebook, new to the list, debuted at 42nd with an RQ of 65.63

    NTT climbs aboard Cloud Foundry

    NTT, Japan’s gigantic telco, is making Cloud Foundry the basis of its upcoming Platform as a Service. It joined Cloud Foundry Core, a push launched last year by VMware to make its open-source Cloud Foundry the basis for a slew of compatible higher-level PaaSes. And a bunch of companies – AppFog, ActiveState, Uhuru and Tier 3 – now all offer Cloud Foundry-based platforms.

    According to a February 12 guest post on the Cloud Foundry blog by Hideki Kurihara, product lead for NTT Communications’ Global Cloud Services, the telco is reacting to customer demand for an agile, flexible development platform:

    “But we also hear concerns about vendor lock-in and ability to meet the needs of a complex enterprise environment. We chose to build CloudPaaS on top of Cloud Foundry because of its multi-cloud nature, ability to integrate with existing assets, and solid API foundation for adding management and monitoring features. Using Cloud Foundry as the base, we are extending CloudPaaS for developers and enterprise customers in Japan. Together with other Cloud Foundry Core partners, we are delivering cloud portability to Japanese users as well as global users of Cloud Foundry.”

    The Cloud Foundry Core definition baseline includes runtimes and services built on Java, Ruby, Node.js, MongoDB, MySQL, PostgreSQL, RabbitMQ and Redis.

    VMware launched Cloud Foundry two years ago but is now in the process of spinning that work off into the Pivotal Initiative, a move that has some members of the Cloud Foundry ecosystem worrying about what changes could be in store.

    States rethink cloud computing sales taxes

    Fearing that cloud computing companies will flee for business-friendlier environs, several states are moving to remove sales taxes levied on cloud computing services. Last week, a legislative panel in Idaho agreed to hammer out the issue once and for all, according to the Idaho Spokesman-Review. The Idaho House’s tax committee said it will introduce legislation classifying cloud computing services as, well, services, not tangible physical goods, whose sales are taxed.

    Nineteen years ago, a state law held that software is taxable regardless of how it is delivered.

    Meanwhile, across the country in Vermont, Governor Peter Shumlin is also working to remove a state tax on cloud services, according to VTdigger.com.

    Shumlin’s administration “backed a retroactive cloud computing moratorium that reimbursed businesses for about $2 million in taxes that had already been collected. This time, the proposal would make the exemption permanent,” according to the publication.

    Removing yet another source of revenue from cash-strapped states is bound to stir up controversy, however.

    Feature art courtesy of Shutterstock user Gena96


  • Last week on Pro: the internet of things and cleantech, Mailbox.app and more

    If you’re in the midst of a Presidents’ Day break, be sure to download our latest podcast (the iWatch, Dr. Big Data and more) to keep you entertained and updated. And if you’ve got road trip envy, check out Katie Fehrenbacher’s take on the Tesla/New York Times debacle, which riled up automotive enthusiasts and clean tech advocates alike.

    Meanwhile, over on GigaOM Pro, our analysts are looking at how the internet of things will impact the connected (and electric) car, the future of software-defined networking, and what’s next in social networking.

    Note: GigaOM Pro is a subscription-based research service offering in-depth, timely analysis of developing trends and technologies. Visit pro.gigaom.com to learn more about it.

    Cleantech: Cleantech and the internet of things
    Adam Lesser

    The internet of things (IoT) is quickly gaining traction (and buzz) among investors and industry forecasters alike. GigaOM’s own Stacey Higginbotham deemed IoT “the new land grab for chip makers” in a recent post, and analyst Adam Lesser agrees with her, citing the importance of chips, sensors, and radios in the world of cleantech. As devices such as connected cars and smartmeters become part of the mainstream (and not just cleantech-focused) consumer landscape, the internet of things – and the components that power it – will become an increasingly vital component as well.

    Cloud: Forecast: sizing the software-defined networking market
    Lee Doyle

    Analyst Lee Doyle looks at the rapidly evolving software-defined networking (SDN) landscape, as fresh batches of startups and established corporations alike enter this competitive arena. The outlook for the enterprise SDN market continues to expand, and recent acquisitions (such as VMware’s $1.2 billion deal for Nicira) indicate major opportunities ahead. Doyle analyzes the current and near-term prospects for SDN in the enterprise, focusing on the products (hardware, software and services) and the use cases and applications that business customers will find most relevant.

    Mobile: Proximity-based mobile social networking: outlook and analysis
    Peter Crocker

    Analyst Peter Crocker looks at the most recent iteration of social networking, defining proximity-based social networking applications as services that connect users based on their physical proximity to each other, as well as facilitating connections between people in a certain time and place. Led by apps such as Highlight and Grindr, mobile application vendors have quickly and deeply latched onto this market segment, and Crocker predicts that the market will grow to $1.9 billion by 2016. In this report, Crocker analyzes the types of social networking mechanics presently in use and the existing technology and market structure, before providing his forecasts for the next four years.

    Social: Orchestra’s Mailbox makes email triage effortless
    Stowe Boyd

    In his latest blog post for Pro, analyst Stowe Boyd provides his initial analysis of Orchestra’s Mailbox app, which launched last week with much fanfare and a very long waiting list. While acknowledging the app’s limitations (it’s currently only available for iOS, and only connects to Gmail), Boyd provides a personal walk-through of his experiences using Mailbox for his own email triage, and how the app stacks up against other task management solutions, such as Asana and Remember the Milk.


  • Sony Xperia Z gets put through the underwater test and passes with flying colors


    You already know the Sony Xperia Z should be pretty durable thanks to its dust- and waterproof casing, but some nonbelievers wanted to see how the device would hold up in more extreme conditions, like a public pool. As you can see in the video below, the Xperia Z’s camera not only keeps recording underwater, but the quality of the video and audio is as impressive as the build quality of the device itself. One caveat: the on-screen buttons are not functional underwater, so users will need to hit the record button before putting the smartphone through its paces beneath the surface. Still, you gotta admit it’s pretty cool seeing Sony’s durability claims hold up so well.

    Still a nonbeliever in the Xperia Z’s potential? Hit the break to check out the video for yourself.


    source: Xperia Blog


  • News for 16th February 2013

    Copied from my Twitter account @egyptologynews, in no particular order

    Today is the 90th anniversary of the opening of Tutankhamun’s tomb by Lord Carnarvon and Howard Carter. The Telegraph http://bit.ly/12QKidm

    A New Kingdom jigsaw puzzle from Malqata: reconstructing a pottery vessel and a bone disc. iMalqata dig diary http://bit.ly/XP0PuK

    The new volume of The Journal of Egyptian Archaeology (EES) is now out (vol. 98, 2012). The Table Of Contents is at http://www.ees.ac.uk/userfiles/file/JEA98-Contents.pdf

    The head of the Egyptian antiquities ministry believes that archaeology and tourism will bounce back. NBC Science http://nbcnews.to/Xa5RC0

    Digital Egypt: Museums of the Future:

    I had a brilliant afternoon volunteering at the Petrie Museum’s Digital Egypt event today. I’ll be writing it up on the Museum’s blog very soon. If you’re interested in the digitization of archaeology in museums, you might check out @3DPetrie on Twitter, charting the museum’s digitization work. If you’re on Facebook, here’s my initial enthusiastic blurb re the Digital Egypt event at the Petrie (under the pic). http://on.fb.me/12ufxvd

  • Disrupt Darlings GTar Talk About What Happens After You Succeed On Stage, Raise $350K, And Have To Ship Product


    Last May, Incident Tech launched the gTar, a guitar with real strings that connects to a smartphone for some amazing sound processing. In the last few months, founder Idan Beck and his team have been busy preparing the 800 guitars he pre-sold on Kickstarter for shipment. Theirs is a story of creativity, cool, and the next generation in music technology. I spoke with Idan briefly about his Disrupt experience and how it felt to go from zero to shipping in less than a year.

    TC: So how have things been going since Disrupt?

    Idan: Things have been extremely busy and going well! Shortly after Disrupt we shifted our primary focus to getting the gTar into mass production out in China. While we had already been going out there for nearly a year at that point, we spent the next six months hammering out every issue imaginable in production and learning about how much goes into making a thousand of something.

    Now we’re starting to get units out of China in batches and fulfill them out to our amazingly supportive and patient Kickstarter backers. Over the last six months the product has really improved as well, with the end result and build quality far exceeding our expectations. Production forced us to make certain changes to the design and architecture of the product, which allowed us to make some significant improvements to the technology, along with the ability to upgrade the product in the future through iPhone-delivered updates as well as hardware upgrades that our customers can install themselves.

    TC: Tell us about the gTar before and after Disrupt. What did you think would happen before you got on stage?

    Idan: Before Disrupt the gTar was still a relatively secret project being worked on in a closet-sized office in the flatland of Santa Clara. Before that I had originally started building the product in my garage in Cupertino and after that we were bouncing around for a while (even working for a month or so on an Icelandic ferry docked in the SF bay), but once we knew we were going to Disrupt everything sort of got official. Driven by the pressure to get things right, our team pulled together a really professional looking video and presentation in a matter of weeks while gearing up for what we felt was going to be a make it or break it point for the product.

    TC: Were you scared? Excited? How does it feel to launch on stage?

    Idan: It’s definitely exciting and almost foreboding to get up on the stage, especially considering that you have such a short amount of time and it’s not really possible to leave much to chance. You’re somehow stuffing three years of work into such a short little moment, and hope that people understand implicitly what had to go on under the hood to make all of that happen.

    It definitely has this sort of epic feel to it and we were definitely nervous as all hell. We spent every waking moment practicing and rehearsing every word and sentence we were going to say. Also, our dependence on our early stage prototype hardware was always something we were worried about. For example, the night before our presentation, Josh had to run out to get a Dremel tool that he somehow managed to find at the only open hardware store in Manhattan, so that I could make some internal tweaks for us to re-route some wires through the prototype to avoid any potential battery issues or audio problems that might pop up on stage.

    That prototype is in a case now, and we’re planning to hang it up as a piece of art. It was very much a super early prototype (and the only fully functional gTar in existence at that point) and we easily had disassembled and reassembled it at least 10-20 times over those few days. In fact, we did it so much that we were ruining the screws holding on the pick guard and by the last day we only had 3 left!

    TC: How many did you pre-sell that day?

    Idan: We launched the project around 2PM or something, and we hit our $100K Kickstarter goal in just over 11 hours, so by the end of the day we had pre-sold north of 200 gTars. The project ended up raising over $350K with about 850 people pledging to get a gTar.

    TC: Why didn’t you play any really smoking-hot reggae jams on stage? Like “Stir It Up?”

    Idan: To be honest I think we could have chosen a better set of songs for our demos, but we were also playing it a little safe since we wanted to choose a song that I could play well enough knowing that I’d probably freeze up on stage. I think you can probably see my leg shaking if you look carefully enough in the video of the first presentation. We actually got a lot of feedback on that demo, so for the second presentation we did change up the songs, which definitely was a good move.

    TC: What’s next for gTar? Another version?

    Idan: We’re still working hard to get a gTar into the hands of everyone that backed us on Kickstarter, and are making solid progress and getting some great positive initial feedback. We’re eagerly awaiting another large shipment that’s on its way and on the ocean as we speak. We’ll be putting some serious effort into an Android dock and app, as well as Web browser based compatibility. We have done some light conceptualizations of how other instruments would work within our platform, but are mainly focused on the gTar for the moment.

    We’re working hard to continuously make the gTar a better product, and as a result of some of the design changes that went into effect during production, the units we are sending out today will also have the capability to benefit from those improvements as we roll them out. This includes continued improvement to our own app, such as a deeper exploration and development of the social aspects of the product.

    A few weeks ago we launched an online store that is already generating pre-orders for the spring, and we’re developing retail distribution channels for the summer and holiday seasons. We’re also looking to expand our team over the next year as well!

    TC: If Disrupt were an eBay account, what would you write in the review?

    Idan: I think the comparison is much more like a summer fling. It’s a short, intense, and immensely rewarding experience that ends up surprisingly thrilling for everyone involved. At the end you might not end up being number one, but the experience will change you for the better.



  • Offline speech recognition for third-party apps discovered in Google Search update


    Developers of the voice command app utter! have discovered a nice surprise included in the latest Google Search update that increases the usability of speech recognition on Jelly Bean-powered Android devices. Much of the focus since Google released the latest update has been on the new widget for Google Now. There has also been some interest in new partners and access to college basketball teams for sports cards. The change discovered by utter! may have a more far-reaching impact than those features.

    With the latest update, apps from third-party developers can now access the speech-recognition dictionaries on your device. Previously, only Google’s own apps could access the dictionaries. For end users, this means the speech-recognition function can run faster and is accessible even when no data connection is available. To add the dictionaries to your device so they will be available for other apps, go to your Google Now settings, look under “Voice” and find the dictionary you want to download.

    source: AndroidCentral


  • Is your PaaS composable or contextual? (Hint: the answer matters)

    I want to touch on a topic that is subtle but has a profound impact on the way anti-fragile IT systems will evolve and on which Platform-as-a-Service offerings companies will choose to use: the difference between two types of extensibility and programmability in systems, contextual and composable. This topic is an important part of my continued exploration of how the concepts of devops, complex adaptive systems and anti-fragility apply to software development and IT operations in the era of cloud computing.

    These two patterns are described well in this recent post from Neal Ford, self-described “Director, Software Architect, and Meme Wrangler” at systems integrator ThoughtWorks:

    In my keynote, I defined two types of extensibility/programmability abstractions prevalent in the development world: composable and contextual. Plug-in based architectures are excellent examples of the contextual abstraction. The plug-in API provides a plethora of data structures and other useful context developers inherit from or summon via already existing methods. But to use the API, a developer must understand what that context provides, and that understanding is sometimes expensive…The knowledge and effort required for a seemingly trivial change prevents the change from occurring, leaving the developer with a perpetually dull tool. Contextual tools aren’t bad things at all – Eclipse and IntelliJ wouldn’t exist without that approach. Contextual tools provide a huge amount of infrastructure that developers don’t have to build. Once mastered, the intricacies of Eclipse’s API provide access to enormous encapsulated power…and there’s the rub: how encapsulated?

    In the late 1990s, 4GLs were all the rage, and they exemplified the contextual approach. They built the context into the language itself: dBASE, FoxPro, Clipper, Paradox, PowerBuilder, Microsoft Access, and similar ilk all had database-inspired facilities directly in the language and tooling. Ultimately, 4GLs fell from grace because of Dietzler’s Law, which I defined in my book Productive Programmer, based on experiences by my colleague Terry Dietzler, who ran the Access projects for my employer at the time:


    Dietzler’s Law for Access

    Every Access project will eventually fail because, while 80% of what the user wants is fast and easy to create, and the next 10% is possible with difficulty, ultimately the last 10% is impossible because you can’t get far enough underneath the built-in abstractions, and users always want 100% of what they want.


    Ultimately Dietzler’s Law killed the market for 4GLs. While they made it easy to build simple things fast, they didn’t scale to meet the demands of the real world. We all returned to general purpose languages.

    Composable systems tend to consist of finer grained parts that are expected to be wired together in specific ways. Powerful exemplars of this abstraction show up in *-nix shells with the ability to chain disparate behaviors together to create new things. A famous story from 1992 illustrates just how powerful these abstractions are. Donald Knuth was asked to write a program to solve this text handling problem: read a file of text, determine the n most frequently used words, and print out a sorted list of those words along with their frequencies. He wrote a program consisting of more than ten pages of Pascal, designing (and documenting) a new algorithm along the way. Then, Doug McIlroy demonstrated a shell script that would easily fit within a Twitter post that solved the problem more simply, elegantly, and understandably (if you understand shell commands):

    tr -cs A-Za-z '\n' | tr A-Z a-z | sort | uniq -c | sort -rn | sed ${1}q

    I suspect that even the designers of Unix shells are often surprised at the inventive uses developers have wrought with their simple but powerfully composable abstractions.
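    For readers who don’t read shell fluently, here is that same pipeline broken out stage by stage (a sketch only; as in the original, the ${1}q step assumes the desired word count n is passed as the script’s first argument):

    tr -cs A-Za-z '\n' |   # turn every run of non-letters into a newline: one word per line
    tr A-Z a-z |           # lowercase everything so "The" and "the" count as the same word
    sort |                 # group identical words together
    uniq -c |              # collapse each group to a single line prefixed with its count
    sort -rn |             # order by count, largest first
    sed ${1}q              # print the first n lines, then quit

    None of these stages knows anything about word frequency as a problem; each is a general-purpose tool, and the behavior lives entirely in the composition.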

    Ford goes on to describe the pros and cons of each approach in much more detail, but the key conclusion he reaches is, I think, critical to understanding how one should develop the tools and tool chains that drive new IT models:

    These abstractions apply to tools and frameworks as well, particularly tools that must scale in their power and sophistication along with projects, like build tools. By hard-won lesson, composable build tools scale (in time, complexity, and usefulness) better than contextual ones. Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy. However, trying to extend them in ways not designed into the API ranges in difficulty from hard to impossible: Dietzler’s Law Redux. This is especially true in tools where critical parts of how they function, like the ordering of tasks, are inaccessible without hacking.

    Ford’s distinction is one that finally helps me articulate a key concern I’ve had with respect to Platform-as-a-Service tools for some time now. In my mind, there are primarily two classes of PaaS systems on the market today (now articulated in Ford’s terms). One class is contextual PaaS systems, in which a coding framework is provided, and code built to that framework will gain all of the benefits of the PaaS with little or no special configuration or custom automation. The other is composable PaaS, in which the majority of benefits of the PaaS are delivered as components (including operational automation) that can be assembled as needed to support different applications.

    Contextual PaaS

    Examples of contextual PaaS include the original releases of Google App Engine, Heroku and other “first-generation” PaaS systems that asked the developer to adhere to a specific architecture and consume PaaS-specific classes in the application itself. These systems were incredibly powerful for building applications that were variations of what these frameworks were designed to do, but began to fail quickly for applications that fell outside of that domain.

    The classic example is Google App Engine’s limit of 30 seconds for any backend request to complete. Great if you were building a Facebook game, but a requirement that eliminated its use for many multi-step transactional applications. Of course, there were ways to deal with those situations, as well, but they were mostly complicated and added risk to the system.

    There is a parallel here with the 4GLs of the late 1990s that Ford talks about in his post. At that time, I worked for Forte Software (acquired by Sun Microsystems in 1999), which built a 4GL development and operations environment for distributed application development. We had a business model where we relied heavily on systems integrator partners to help our customers deliver these often sophisticated applications, and every one of those SIs eventually built a framework environment to make building complex applications “easier.”

    The problem? Almost every customer that used one of these frameworks had a requirement (or many) that the framework didn’t handle well. This resulted in either the SIs scrambling to modify their frameworks to support these requirements — inevitably making the framework much less “easy” to use — or the customer bypassing the framework altogether for those needs, resulting in an application that was harder to debug and operate.

    Composable PaaS

    Composable PaaS systems, on the other hand, do much less to anticipate the architecture or functionality of the application built on them, and do much more to simplify the assembly of services, including underlying infrastructure, automation, data sources, specialized data tools, etc. I think the classic example of a composable PaaS is Cloud Foundry, the open source PaaS effort from VMware that’s now part of its Pivotal Initiative spinoff. Modern versions of Heroku, EngineYard, CloudBees and others also exhibit more of this approach than “first-generation” PaaS systems.

    An old, but illustrative, Cloud Foundry diagram.


    Perhaps most importantly, however, there are open source “build” tool chains being deployed directly to infrastructure services that exhibit a purely composable approach toward delivering and operating applications. Combining GitHub with Jenkins with Gradle with AWS CloudFormation and Autoscaling and so on gives a fully automated, flexible “platform” for application development and operations — everything you want from a PaaS. The catch, of course, is that you’ll need to assemble and maintain that tool chain over time (rather than letting the PaaS vendor do it for you).
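    To make that concrete, here is a deliberately bare-bones sketch of what assembling such a chain by hand might look like. The repository URL, bucket and template names are invented for illustration, and in practice the steps would live in Jenkins jobs triggered by GitHub pushes rather than in one script:

    # 1. Pull the source (GitHub is just a git remote here)
    git clone https://github.com/example/my-app.git && cd my-app

    # 2. Build and test the artifact with Gradle
    ./gradlew clean test build

    # 3. Publish the artifact somewhere instances can fetch it
    aws s3 cp build/libs/my-app.jar s3://example-releases/my-app.jar

    # 4. Create (or later update) the CloudFormation stack that defines the
    #    load balancer, launch configuration and Auto Scaling group
    aws cloudformation create-stack \
        --stack-name my-app-prod \
        --template-body file://app-stack.json

    Every step is a separate, swappable tool that knows nothing about the others, which is exactly the composable property Ford describes.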

    Now, take the concept a step further. Imagine a deployment environment that delivers a wide variety of these individual tools and components and simplifies the process of creating tool chains on demand from them. Imagine that environment would let each development team choose from known tool chain “patterns,” but modify them as they see fit for each project. This, I believe, will be the ultimate general purpose PaaS success, not some hard-and-fast framework-based PaaS.

    The concept of composable and contextual applies to a lot more than PaaS and cloud, of course. And it is important to note that it’s not an either/or choice, much like stability and resiliency. Parts of an IT environment should be composable, but there will always be elements where the relative stability of contextual extension makes more sense. And composable systems can leverage API-driven systems that themselves are designed primarily for extensibility via contextual approaches.

    The key is to think about each system from the perspective of how it will be used, and to target its extensibility mechanism based on needs. Just remember, however, that choosing a contextual path will dictate a lot more about how your system could be used in the future than a composable approach would.

    I’d love to hear your thoughts, either in the comments below, or on Twitter, where I am @jamesurquhart.

    Feature image courtesy of Shutterstock user Nenov Brothers Photography.


  • Yes! Google should open retail stores

    I can’t say if rumors flashing across the InterWebs yesterday are true about Google opening retail shops this year. Not that it matters. The search giant should open stores — and lots of them. Timing is right, too, and who could have imagined that two or even three years ago?

    Make. No. Mistake. In the 22 months since returning as CEO (following a 10-year hiatus), Larry Page has injected new vim, vigor and vibrancy into the Google empire. The company is now one of the most disruptive forces across techdom. The rebranding of Android Market to Google Play, Google+, Google Now, Nexus tablets, low-cost Chromebooks and stores selling them inside major retailers all debuted on his watch. Then there is ever-tightening cross-integration of products and services creating one of the most formidable cloud applications stacks available anywhere. Google Now, Google Play and Android and Chrome OS devices are reasons enough for retail stores, because the company has a digital lifestyle to sell.

    Boutiques, Baby

    In June 2010, I explained why “Microsoft and Apple stores are the future of technology retailing“, the day after the software giant opened its fourth branded shop four doors and a walkway down from the fruit-logo company’s store. Sony also operates a store in the same mall, Fashion Valley, although it closed last month for renovations. I predicted in 2010, and there is increasing evidence now, that tech boutiques and not big box stores would be the future of tech retail.

    The three company stores share similarities relevant to providing products, services and customer service:

    • In-store training for using hardware, software and services.
    • Self-branded consumer electronics and supporting third-party add-ons.
    • In-store tech support: Apple Genius Bar, Microsoft Answers and Sony Backstage.
    • Similar product activities: Computing, gaming, home entertainment, mobility, photography and videography, among others.

    Halo How

    Most importantly, each shop promotes a vertically-integrated digital lifestyle around the brands, products and services. During the January earnings conference call, Apple CEO Tim Cook discussed the sales pull one product can have on others:

    If somebody buys an iPad mini or an iPad and it’s their first Apple product, we had great experience through the years of knowing that when somebody buys their first Apple product, a percentage of these people wind up buying another type of Apple product. And so if you remember what we had termed the halo effect for some time with the iPod, with the Mac, we are very confident that that will happen, and we are seeing some evidence of that on the iPad as well.

    The point is digital lifestyle and selling it, and consumers spread the halo effect. Apple and analysts agree that consumers bringing their own devices to work is a major driver of enterprise iPhone and iPad adoption. Where are they most likely to truly discover these products, and supporting software and services? Apple Store. Google wants into the enterprise, too, and with greater ambitions, such as Google Apps.

    Something else: Retail stores reach small businesses. Directly. Like consumers. Imagine the value of the digital lifestyle story to the small business owner, who walks out of Google Store with a Chromebook, Nexus 4 smartphone and Nexus 10 tablet for less than the cost of a MacBook Air — all tidily synced with contacts, calendars, docs, email and more.

    Curbing Conflict

    Neither these buyers nor general consumers will see the benefits at shops selling many competing, horizontally-oriented products.

    One reason is conflicting objectives. Big box retailers maximize profits by selling anything consumers want to buy. Then come their own brand priorities. Microsoft Surface or Sony Bravia don’t define Best Buy’s brand. But these products do define the manufacturers’ brands and how people use them.

    From that perspective, the company stores are as much about brand marketing as they are places to sell stuff. Apple, Microsoft and Sony all clearly focus on building brand — through in-store marketing initiatives — and making sure customers feel good about the companies, their products and sub-brands. Hence, while profitability is important, the customer relationship takes precedence over the transaction.

    Right Risks

    But benefits don’t stop there. In May 2011, I explained why “Apple would be nothing without its retail stores“. I was there when the company opened the first shop, at Tysons Corner, in May 2001.

    The move into retail came seemingly at the worst time. There was a recession underway, Apple had reported several consecutive quarterly losses and Gateway was in the process of shuttering all its company-owned shops. Apple Store was madness. But CEO Steve Jobs saw something else: using the shops to build brand awareness and sell a digital lifestyle around Apple products.

    A dozen years later, Apple operates more than 400 stores, about 150 of them outside the United States. During calendar fourth quarter, with an average of 396 stores open, sales per shop reached $16.3 million. Total visitors: 121 million.

    The shops allow Apple to take product risks competitors can’t. For example, if the iPhone battery dies or the handset has any other problems, buyers can feel confident popping over to an Apple Store to get it replaced. Apple could risk such unorthodox design (at the time iPhone launched) because of the stores. What would that same customer do if his or her Samsung phone with a fixed battery had problems? Mail it to South Korea? Meanwhile, customer feedback influences future designs, while Genius Bar visits help Apple detect design problems.

    I don’t believe that Microsoft would have so easily risked releasing a branded tablet if not for its stores — and there are many more of them than just a few months ago. Microsoft operated 27 shops before Surface RT and Windows 8 launched on October 26, when 32 “pop-up” holiday shops opened. Today, according to Microsoft’s retail store locator, most of the pop-ups remain, with some going permanent (moving from the mall floor to a dedicated store). By my count, Microsoft now operates 64 locations in Canada and the United States. Five new shops will open in the coming months, and four more pop-ups will be converted to permanent stores.

    Go Google

    Google retail expansion makes sense to me. Surely company math-whizzes see the benefits from the Chromebook kiosks inside Best Buy stores. Google hired employees who could speak about broader digital-lifestyle benefits to staff the kiosks.

    Then there are Nexus devices or Chromebooks from four OEMs: Acer, HP, Lenovo and Samsung. Lenovo’s ThinkPad Chromebook is only sold to educators, who could try and buy from Google Store.

    More importantly, Google isn’t just selling a digital lifestyle but one that needs some explaining about cloud benefits.

    Yes, the company should open retail stores. I’d love to see Apple, Google, Microsoft and Sony stores at the same mall here in San Diego.

    Photo Credit: Joe Wilcox

  • The real reason Pinterest bought Punchfork

    Pinterest’s recent acquisition of Punchfork has largely been misinterpreted by the industry and onlookers as merely a content-centric acquisition. While Punchfork’s database of recipes certainly will have value for Pinterest, in reality there is a Trojan horse in this acquisition that was the real focus of Pinterest’s move: the technology that powers Punchfork.

    By combining a search engine that indexes content, popularity scoring based on how often that content is shared on social networks, and an intelligent display of metadata to enhance the search results, Punchfork built a powerful engine under its hood. So with Pinterest scooping it up, the move can be seen as a marked shift from a user-generated content approach to one that leverages search technology to discover, index, and present content to its user base.

    Punchfork’s secret sauce

    Punchfork’s founder, Jeff Miller, leveraged his technical skills as a quant trader to index recipes from across the internet and mash up this information with social signals from social networks (Twitter, Facebook, StumbleUpon and Pinterest) to develop a popularity score. These popularity scores assist consumers in discovering the best recipes for a given type of food or particular diet. Miller also developed a robust API that was Punchfork’s sole source of revenue (aside from some tests with advertisements on the site). Punchfork’s API was available across multiple tiers, from a low-volume free tier to a $995-a-month tier.

    So as observers have pointed out, Pinterest and Punchfork were certainly an obvious match due to their similarities: both favor a grid-based aesthetic with visually inspiring photos and content, and recipes are already one of the most popular types of content on Pinterest.

    But long term, the key value for Pinterest is Punchfork’s robust and well-documented API.

    Implications for Pinterest

    The divide between content and commerce is closing. Pinterest is certainly at the forefront of helping consumers discover products and content of interest, but currently consumers can’t make a one-click purchase of all that merchandise they’ve painstakingly collected and pinned.  This poses a problem for Pinterest – and an opportunity for Google, Amazon, and Facebook to take a greater portion of the ecommerce market share.

    Pinterest must move quickly and leverage the technology and innovation from Punchfork to execute across three main areas:

    • Releasing a publicly accessible and monetizable API
    • Developing paid advertising
    • Enabling consumers to make seamless purchases of products they discover

    Release an API already!

    Pinterest has a wealth of products within its database that could be leveraged by publishers to monetize traffic and provide a source of revenue for Pinterest. This offering would combine the interests of brands, the wealth of products already pinned within Pinterest, and sites that want to monetize their content and traffic. Pinterest could easily create an offering where merchants log into Pinterest and see how many of their products are already pinned, enticing them into a PPC, CPM, or CPA advertising offering.

    For example, imagine if the furniture manufacturer West Elm could log into Pinterest to determine the prevalence of its pinned products and then launch a paid campaign that would enhance its product listings with up-to-date pricing and promotions. Or sites such as CNN or Design Sponge could embed a piece of JavaScript throughout their sites that would display relevant pins and product information contextually relevant to the content of the publishers’ pages. Depending on the revenue model, Pinterest would then be compensated for the clicks, displays of the pins, or eventual purchase of those products from the etailers.
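    None of this exists today, but to illustrate the shape of the offering, a merchant-facing query might look something like the following. The host, path, fields and token are all hypothetical, invented purely for illustration:

    # Hypothetical request: how often is my catalog pinned, and where?
    # Endpoint, fields and token are invented; Pinterest offers no such API today.
    curl "https://api.pinterest.example/v1/domains/westelm.com/pins?fields=pin_count,repin_count,top_boards" \
         -H "Authorization: Bearer $MERCHANT_ACCESS_TOKEN"

    A response along those lines is what would let a brand decide whether a PPC, CPM or CPA campaign against its already-pinned products is worth buying.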

    Pinterest has largely focused on growing its user base and, aside from user backlash around its use of Skimlinks, it hasn’t yet revealed a revenue model. It seems inevitable, though, that Pinterest will have to develop revenue streams from sponsored pins and/or sponsored profiles, along with recommended pins and profiles, in either a CPA, CPC, or CPM format. These would mirror successful offerings from other social networks such as Facebook and Twitter, and make it easier for brands and agencies to incorporate them into a media buy.

    Empowering consumers

    More often than not consumers come across compelling images of products on Pinterest but cannot find out where to purchase those items, resulting in a lost opportunity for Pinterest and the merchant. Using technology from the Punchfork acquisition, Pinterest will be able to fix this problem in a variety of ways:

    Image recognition for unsourced pins 
    Numerous startups are attempting to leverage image recognition for monetization, ranging from GumGum applying advertising to relevant imagery, to Stipple attempting to be the image genome. In the context of Pinterest, users often upload a photo saved from a website or a camera and neglect to link that product to the manufacturer or store where the item can be purchased.

    But if Pinterest were to incorporate a form of image recognition, the company could attempt to associate images of products without a source with sites that sell that particular product. For example, if I uploaded a photo from my camera depicting a tie from Everlane and didn’t link back to everlane.com, Pinterest might be able to automate identification of that tie via image recognition.

    Enhanced Product Information
    Currently pins offer very little information to Pinterest users, with the exception of anything contributed directly by the user who pinned the image. Pinterest could enhance the product information associated with particular images by incorporating data such as pricing, availability, and product specifications. Pinterest started to address the pricing aspect in an archaic fashion by automatically capturing information when items are pinned from sites such as Etsy; however, the future may lend itself to a message-based protocol where products are matched to retailers via an API. This bi-directional API could enable retailers large and small to provide timely updates to prices, promotions, and availability. For example, if I came across a new line of nail polish from Sephora pinned on Pinterest, I would be able to see availability, up-to-date pricing information, and any discounts that could be applied to a potential purchase.
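    Again purely hypothetical, but the bi-directional part could be as simple as letting a retailer push updates against its own pinned products. The endpoint, fields and token below are invented for illustration only:

    # Hypothetical push from the retailer's side: refresh price and availability on a product
    # Endpoint, fields and token are invented; no such Pinterest API exists today.
    curl -X PUT "https://api.pinterest.example/v1/products/sephora-nail-polish-123" \
         -H "Authorization: Bearer $RETAILER_ACCESS_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"price": 12.50, "currency": "USD", "in_stock": true, "promotion": "10 percent off this week"}'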

    Eric Fader is the director of analytics for Ignited, an L.A.-based advertising agency. He previously worked for MySpace and BizRate. Fader can be reached via Twitter @efader or via LinkedIn.


  • Is Sony testing a new device running Android 4.2 with a 1080p display?


    Sony may be testing a new device, and if the latest findings are correct, it’s going to be an impressive one. With the model number C680x, Sony’s device will be running Android 4.2 and sporting a full HD screen with a resolution of 1920 x 1080. Because the model number is higher than that of the anticipated Xperia Z, this might be an indication of yet another Sony flagship phone. Very few details are known at this point, but check back with TalkAndroid for the latest reports.

    source: Xperia Blog
