Author: Barb Darrow

  • If you think tech has changed, get a load of the new enterprise sales model

    In the traditional enterprise IT sales scenario, big companies cough up six or seven figures for new software licenses, server and/or networking gear every year and then spend the next nine months deploying that stuff. Or maybe not deploying it — the amount of “shelfware” at big accounts is probably mind-boggling.

    But companies go through the motions because (a) that’s what they’ve always done and (b) they want to stay legal. That model may not be dead yet, but its days are numbered, at least according to some observers. What’s contributing to its decline is the increasing use of public cloud infrastructure like Amazon Web Services, which, together with Software-as-a-Service offerings, pretty much obliterates the need for most on-site server and software upgrades.

    Try-before-you-buy model gains steam

    And, more enterprise customers — not just startups anymore — have glommed onto the try-before-you-buy model that lets them download software, use it as they see fit, and when the time comes to deploy across the company or to get support, all they have to do is pick up the phone.

    Sunil Dhaliwal, who founded Amplify Partners, a VC firm that backs infrastructure startups, sees a massive transition underway in how companies buy IT. “This threatens the big infrastructure and enterprise IT companies to their core. It’s even bigger than changes in the technology itself,” he told me recently. (GigaOM’s Stacey Higginbotham wrote about Amplify here.)

    The real deal, he said, is that enterprise customers are “very tired of getting the short end of the stick in their sales experience.”

    Enterprise customers arise

    Paul Santinelli, partner at North Bridge Venture Partners, agreed. Many companies just don’t have to install software for many core functions any more. Instead they try out things like Box for document sharing and storage or Okta for identity management. “You download it, try it and then buy it without ever meeting the sales guy,” Santinelli said.

    And, there’s a generational shift among IT buyers. Younger people are happy to download and try things and let business units make their own choices.

    That’s not the kind of sale companies like EMC, Oracle, IBM and even VMware — all with big well-paid sales organizations — want to hear about.

    Granted, the transition will take time, and the legacy vendors are not stupid: they see it happening, but it’s hard for them to react fast. “This is not about them not getting it. EMC gets it. The problem is the classic Clayton Christensen Innovator’s Dilemma stuff — they have to make their quarters and you don’t do that by cutting your direct sales people,” Dhaliwal said. But companies like EMC, which Dhaliwal called the “best-ever factory for turning BC football players into highly-compensated sales guys,” will have to adapt eventually.

    Newer generations of IT buyers won’t want to relinquish the freedom of downloading specialized software from young, nimble vendors instead of locking into one or two huge vendors for a wide array of applications.

    Exception that proves the rule

    One thing that may militate against faster change is the current regulatory and compliance climate. Companies in the financial services industry, for example, must show that all their technology is up to date and fully supported. That explains why companies still pony up for Red Hat Enterprise Linux as opposed to CentOS, even though many see no substantive differences between the two.

    And vendors play right into that fear of being out of compliance. A VP with a large New York-based bank told me vendors like IBM and Oracle “give you full access to all their software to use or not but then come in with an audit or the threat of an audit to make sure you pay for every bit of it and try to lock you into an enterprise license agreement.” He made clear he is not at all pleased with that situation.

    But, regulations aside, change will come, Santinelli and Dhaliwal agreed. Enterprise buyers are so fed up with the old model that they’re willing to take risks.

    “One reason that OpenStack has gotten so much traction for something that’s not cooked is because it’s an alternative to [VMware] vCloud Director and companies don’t want to see vCloud as the new Microsoft CAL-style license lock-in,” he said.

    Microsoft is famous for using its client access licenses, which are often cheap or free for new products initially, to get companies using those products, and then jacking up the license prices. That’s just the sort of enterprise sales technique that companies resist.

    Financial services companies are locked into Oracle — and its salespeople — for now, Santinelli said. The top IT guy “will keep buying Oracle from a rep in a suit but many of the people who work for that guy are already running applications like Hadoop or Couchbase on a server under their desk or in a VM in the public cloud. Those people will likely replace that IT guy in 5 or 7 years. Then they’ll be buying software, compute and storage just like you buy electricity — on a monthly usage-based rate. They won’t need to own the power plant.”

    Feature photo courtesy of Shutterstock user Peshkova

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • Report: IBM and EMC eye SoftLayer

    These are certainly interesting times in the cloud computing world. Not one, but two IT behemoths — IBM and EMC — are reportedly considering a buyout of SoftLayer, the Dallas-based cloud services provider. Reuters, citing unnamed sources, reported that any deal could be worth $2 billion. All three of the companies issued their standard “no comment” when contacted.

    If IBM and EMC are pursuing SoftLayer, it’s interesting for a few reasons. First, IBM has spent billions building its own cloud computing business, cobbling together technologies from Tivoli, WebSphere and other sources from the last decade. Just last week, it started rolling out pieces of its OpenStack-based cloud. If it’s really going all out to buy a cloud provider of SoftLayer’s size, it shows that time is of the essence for Big Blue.

    As for EMC, the storage giant owns about 80 percent of VMware, which yesterday confirmed that it is building its own public Infrastructure-as-a-Service cloud to take on Amazon Web Services. SoftLayer competes with AWS for many workloads.

    SoftLayer is a big cloud services provider with a flair for innovation and choice. It offers managed services, private and public cloud infrastructure as needed. It has lots of startup and enterprise customers including Path, SendGrid, SlideShare, Cloudant, Citrix, ZipServers and AT&T.

    If IBM and EMC are considering this deal, it means they see the need to buy a fully established cloud player with real customers and without a ton of reliance on older technologies. And that is the name of the game as legacy tech players realize they need to compete better with Amazon and, increasingly, Google and Microsoft, all of which are building out massive scale-out cloud infrastructure for enterprise as well as startup workloads. Enterprise customers, as we know, are the lifeblood of IBM and EMC alike.

    Because of its cloud experience, its customer list, and its size, SoftLayer could make an attractive target for an older, bigger company wanting to boast of a state-of-the-art, modern cloud. To put things in context, Rackspace, based in San Antonio, Texas, is also a rumored buyout target every month or so.


  • Why eating dirt makes this a better world

    Hacktivist groups like Anonymous or, more broadly, groups like Wikileaks may cause big problems for institutions and companies, but overall they’re good for us and good for the internet, said Joi Ito, director of the MIT Media Lab.

    Ito likened these groups and the challenges they pose to inoculating a young child against illness. If you protect him so stringently against germs to keep him from getting sick, you can bet he will get really sick at some point. But if you let him “eat dirt” and expose him to lots of things, he’ll be a healthier child, Ito said during a BBC World Service event held at MIT and broadcast live on Thursday.

    The specter of groups like Anonymous lurking in the background may also encourage better behavior by people and organizations, Ito added.

    These groups make us more “transparency robust,” and that’s a good thing, he noted. The thinking is: if you know someone might be poking around in your business, you’ll probably be better, more ethical and smarter about how you conduct that business in the first place.

    This was a good, wide-ranging session with good insights on the maker movement and other topics. I’m sure it will be streamed later today. I’ll include the link here when it becomes available.


  • Microsoft explains latest Hotmail, Outlook glitch

    Microsoft attributed the March 12 glitch affecting Hotmail and Outlook.com to a temperature spike in one of its data centers. Many users said they had no access to the services on Tuesday night and into Wednesday morning. SkyDrive was also affected.

    According to a Wednesday Outlook blog post by Microsoft VP Arthur de Haan:

    “On the afternoon of the 12th, in one physical region of one of our datacenters, we performed our regular process of updating the firmware on a core part of our physical plant. This is an update that had been done successfully previously, but failed in this specific instance in an unexpected way. This failure resulted in a rapid and substantial temperature spike in the datacenter. This spike was significant enough before it was mitigated that it caused our safeguards to come in to place for a large number of servers in this part of the datacenter.”

    Many Hotmail and Outlook users have reported ongoing issues with the services since January, when Microsoft started migrating Hotmail users over to Outlook.com. Making things harder to track, not all of these issues show up on the Microsoft Live Status page, which only reflects problems affecting a “significant” number of users. Microsoft told users who are having issues to log into their accounts for more information on their status.


  • Netflix fronts $100K for best cloud ideas

    Do you have a great cloud computing idea? Could you use $10,000? If so, check out the Netflix Cloud Computing Challenge which will offer 10 prizes of $10,000 each for the best cloud ideas entered.

    The streaming video company, famous for its use of cloud services, is putting up $100,000 to urge developers to come up with new features or “improve usability, quality, reliability and security of computing resources delivered as a service over the internet.” The company’s not new to contests: in 2006 it launched the Netflix Prize for the best collaborative filtering algorithm to aid in personalized film ratings. That prize was discontinued a few years later.

    As for the new challenge, Netflix chief product officer Neil Hunt said in a statement:

    “Cloud computing has become a hot topic recently, but the technology is still just emerging … No doubt many of the key ideas that will take it to the next level have yet to be conceived, explored, and developed. The Netflix Cloud Prize is designed to attract and focus the attention of the most innovative minds to create the advances that will take cloud to the next level.”

    Prizes will be offered in 10 categories, and entries will be judged by a panel including Werner Vogels, CTO of Amazon; Martin Fowler, chief scientist of ThoughtWorks; Simon Wardley, cloud strategist; Joe Weinman, author and Telx SVP; Aino Corry, developer training expert at the University of Aarhus; and Yury Izrailevsky, Netflix VP of cloud.

    Deadline for entry is September 15, 2013, with winners to be announced at the Amazon Web Services (AWS) re:Invent conference in November. There’s more information on the prize at GitHub.

    AWS already hosts a startup challenge, but contests like this might bring in some fresh thinking from new and exciting sources.  I look forward to seeing what comes of this contest.


  • Vagrant gets busy with VMware Fusion, Rackspace support

    Vagrant, a popular open source tool that automates the setup of virtual workspaces for software developers, is getting promised support for VMware Fusion and for the Rackspace Open Cloud with the Vagrant 1.1 release, due on Thursday.

    Initially, Vagrant ran only on Oracle’s VirtualBox, but in November, Vagrant creator Mitchell Hashimoto said he planned to add support for more platforms, including VMware Fusion. He launched a company, HashiCorp, to do this and to offer ancillary services for Vagrant.

    In February, Hashicorp started testing a plugin for Amazon Web Services so that developers using Vagrant for local configuration can also hook right into Amazon’s public cloud. The new Rackspace support gives them a choice of clouds as well.

    For companies with many developers, configuring each machine for their work can take days or even weeks. Vagrant automates that workflow. As Hashimoto told me last fall, Vagrant makes it much easier to create isolated virtualized sandboxes for each project. Vagrant hooks into VirtualBox (and now Fusion) and uses CFEngine, Chef or Puppet to set up the workspaces.
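    As a rough sketch of what that automation looks like in practice, a developer describes the sandbox declaratively in a Vagrantfile and Vagrant builds it. (This is a hypothetical example; the box name and Chef recipe below are placeholders, not real published artifacts.)

    ```ruby
    # Vagrantfile — hypothetical sandbox definition.
    # "precise64" and the "myapp" recipe are placeholders for illustration.
    Vagrant.configure("2") do |config|
      # Base image the virtual machine is built from
      config.vm.box = "precise64"

      # Forward the app's port so a browser on the host can reach it
      config.vm.network :forwarded_port, guest: 8080, host: 8080

      # Hand the rest of the setup to Chef (Puppet or CFEngine work too)
      config.vm.provision :chef_solo do |chef|
        chef.add_recipe "myapp"
      end
    end
    ```

    Running `vagrant up` against a file like this builds the VM on the configured provider (VirtualBox by default, or VMware Fusion with the new plugin) and hands provisioning off to Chef, so every developer gets an identical workspace.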

    Vagrant was initially a labor of love — or maybe of necessity — for Hashimoto, who built it for his own projects as a student at the University of Washington. But it took off beyond his expectations. Users include DISQUS, BBC News, Mozilla, Yammer, Expedia, LivingSocial, Nokia, and the New York Times.


  • VMware’s hybrid vCloud takes on Amazon. Kinda.

    VMware’s new vCloud Hybrid service, now in beta and due to ship mid-year, is the company’s public Infrastructure-as-a-Service play, according to VMware CEO Pat Gelsinger.

    This may be one of the industry’s worst-kept secrets, with stories about the plan surfacing last summer on GigaOM and CRN. But it is nonetheless important. VMware will make all the relevant code available to its existing VSPP partners, Gelsinger told analysts at a New York investor event held by VMware, parent company EMC and the Pivotal Initiative spinoff. That may reassure some of those service providers and VARs who were already implementing vCloud Director in data centers of their own.

    The selling point is that vCloud Director running customers’ private clouds will interoperate well with vCloud running in public clouds, making it easy to move workloads back and forth. That’s pretty much been VMware’s story for years. But clearly VMware has Amazon and its public cloud might on the brain, as evidenced by VMware’s recent “Amazon will kill us all” comments.

    Gelsinger said all the vCloud Hybrid intellectual property will be made available to the company’s partners, which may reassure them that VMware will not build out its own Amazon-like public cloud infrastructure. Time will tell. VMware CFO Jonathan Chadwick tried to lay that fear to rest. “We will leverage other people’s infrastructure” rather than building out VMware’s own data centers, he said.

    Forrester cloud analyst James Staten wrote in a research note: “VMware said its public cloud will be aimed at its existing customer base and sold through its existing VAR and SI [system integrator] channel. This explains CEO Gelsinger’s strong comments from last month’s Partner Exchange – it wasn’t public clouds he was worried about but non-VMware public clouds. But for this channel fulfillment strategy to come true, its partners will have to get with the cloud program too and like the [infrastructure and operations] clients they serve, many don’t see more revenue at the end of the public cloud rainbow.”

    Staten makes a good point. It’s also by no means clear that VMware’s vCloud push is gaining traction. Most service providers that offer it also offer other, non-VMware options. One big hurdle for vCloud adoption is price. One IT consultant summed it up last week:

    “I did a cost analysis for a big [integrator] last week – VMware is $6 per GB ram per month – adds about 30% in some cases to price – and do you think customers care what the hypervisor is? I can’t see how VMware’s core revenue maintains in any shape or form.”


  • The Pivotal Initiative, in case you were wondering, is now official

    Not that there was a lot of doubt, but the Pivotal Initiative, the spin-off from EMC and VMware, is now official and will likely go public, according to EMC CEO and Chairman Joe Tucci, speaking at an investor event in New York.

    EMC chairman and CEO Joe Tucci


    Pivotal is 60 percent owned by EMC and 31 percent by VMware, with about 1,250 employees and $300 million in revenue, Tucci said. As has been reported, EMC contributed Pivotal Labs and Greenplum, and VMware ponied up Cloud Foundry, Spring and Cetas.

    “It’s not the riskiest thing we’ve ever done, with an experienced executive, Paul Maritz, taking charge,” Tucci said.

    Structure 2011: Paul Maritz – CEO, VMware


    Maritz is the former CEO of VMware and was a longtime top executive at Microsoft. He is on the agenda later today and will speak more about Pivotal; he will also appear next week at GigaOM’s Structure: Data event in New York City.

    “We have a great opportunity to bring EMC and VMware and Pivotal together and create great value for customers who want to deal with a few strategic suppliers,” Tucci said. But he also emphasized choice and flexibility. “If VMware wants to do something in storage that EMC might not like, they can do it.”

    This post will be updated throughout the morning.


  • Microsoft Hotmail, Outlook problems crop up. Again.

    Not to sound like a broken record, but Microsoft Hotmail and Outlook users are having problems again. In fact, since we reported on issues related to the Hotmail-to-Outlook.com changeover in early January, there has been a fairly steady flow of complaints from users about inaccessible or only partly operative email. Much of that time, the Microsoft status page showed no issue, but on Tuesday night it lit up like a Christmas tree.


    There was another public flare-up of problems at the end of February. At that time, a Microsoft spokeswoman explained that when a small number of users are affected, the status page will not show a problem. Issues that impact a significant number of customers, on the other hand, “will be noted and visible on the server status page.”


  • MIT’s role in Aaron Swartz prosecution assailed at memorial

    The role that the Massachusetts Institute of Technology played in the prosecution of Aaron Swartz was front and center at a memorial service for Swartz Tuesday afternoon at the MIT Media Lab. Swartz, the 26-year-old co-founder of Reddit and founder of Demand Progress, committed suicide in January. He was facing trial on charges that he illegally downloaded massive numbers of documents from the academic archive JSTOR over MIT’s network.

    Swartz’s partner Taren Stinebrickner-Kauffman and his father Robert Swartz both called on MIT to open up its investigation into its own actions and to do it soon. Much of the coverage after Swartz’s death focused on the role of U.S. Attorney Carmen Ortiz, who was slammed by critics for pursuing an overzealous prosecution for a minor offense. U.S. Attorney General Eric Holder and others have defended the prosecution. But there was little mention of Ortiz or the U.S. prosecutors today; at this event, it was MIT itself being scrutinized.

    After Swartz’s death, the school announced an internal investigation into its actions. “I was hopeful that it could learn from mistakes made and make sure this injustice and tragedy is not repeated,” Stinebrickner-Kauffman told a couple hundred people at the event. “I have since become less hopeful,” she said.

    “I fear a PR exercise, a whitewash. The [MIT] general counsel is running this. Aaron’s lawyers and father have not been interviewed and there is no sign that the report will be released,” she said.

    Taren Stinebrickner-Kauffman


    She said that while MIT’s stated mission of generating and disseminating knowledge is perfectly aligned with Swartz’s ethic, the school has diverged from that mission, as evidenced by the fact that it could have stopped the prosecution several times.

    “MIT called in the Secret Service when it could have handled the issue internally. When people called on them to drop the case, MIT refused. MIT helped the prosecution while it refused to provide [information] access to the defense,” she said.

    It’s been two months since Swartz’s death and there is no sign of that report, she said.

    Other speakers, including some employed at the school, also worried about MIT’s standing here.

    MIT Media Lab director Joi Ito, who hosted the event, acknowledged his conflicted role as a member of the institution and a friend and colleague of Swartz. Introducing the proceedings, Ito said: “I have an official voice and a personal voice. If it wasn’t for the official voice, I would have spoken out more on this.”


  • One secret about Ray Ozzie’s secretive startup is out: It will tap Amazon’s cloud

    Ray Ozzie, as is his practice, has been nearly silent on the topic of his new startup, Talko. But now we know that it, like thousands of other startups, will use Amazon Web Services. How do we know this? Ray’s Talko colleague Ransom Richardson spoke at a local AWS meetup in Cambridge, Mass. Monday night, according to several attendees.

    Ray Ozzie


    Richardson spoke about remote management but did not offer many (or any) details about Talko’s product or its timing, according to two attendees. “We did learn that they run on AWS and he said it would be a communications service for mobile — something that takes into account the pervasiveness of mobile devices and tries to provide a more engaging experience,” one attendee said. That’s pretty much all that Ozzie has said publicly about Talko, which was once called Cocomo.

    Last March, Ozzie signaled that he was open to using a wide array of services including but not limited to those from Microsoft, where he was chief software architect then chief strategist and which he left in 2011. Talko has netted $4 million in funding that we know about.

    The Talko team also includes Neil Ozzie (Ozzie’s son), Eric Patey and Matt Pope. Patey, Pope and Richardson were with Ray at Groove Networks, his last startup, which Microsoft acquired in 2005. A check of LinkedIn also shows other employees, including Richard Speyer, another Microsoft veteran who also spent time at Endeca, and Howard Nager, from Digitas and Microsoft. Some have been with Ozzie since his days at Iris Associates, the Lotus Development Corp.-affiliated company that built Lotus Notes, now part of IBM.

    There has been speculation that Talko/Cocomo is working on a mobile backend-as-a-service (MBaaS). But for now we’re stuck in guesswork mode because Ozzie’s not talking. Reached by email, he had no comment.


  • Egnyte opens up cloud options for its storage service

    Choice is becoming a big deal in cloud storage. Even customers with hybrid cloud implementations want to be able to pick the “back end” cloud that’s best for them.
    That’s why Egnyte, which already lets customers store their stuff on premises or in Egnyte’s cloud, will now let customers tap Amazon S3, Google Cloud Storage, Microsoft Azure and NetApp Storage GRID as well.


    The Mountain View, Calif. company has always maintained that companies want to be able to keep some of their files and other digital paraphernalia within their own data centers and some in external clouds. But until now, the external option was Egnyte’s own considerable infrastructure — the company has 9 petabytes worth of cloud storage running out of data centers in northern California; Asheville, N.C.; and Amsterdam.

    But, as we all know, latency is an issue in the cloud world, so the addition of these massive third-party clouds, which run out of data centers around the world, might appeal to companies with far-flung offices.

    Egnyte is focusing more on enterprise accounts and claims some big ones, including Young & Rubicam, the advertising giant owned by WPP, for which Egnyte manages 125 TB worth of storage, according to CEO Vineet Jain.

    The company, with about 159 employees worldwide, brought in $16 million in Series B funding last summer from Google Ventures and others, bringing total venture backing to about $32 million.

    Jain said he sees the pace of cloud storage adoption picking up, and fast. “Last year we got one or two requests for proposals a month and now we get three to four per week. People have budgeted for this, they’re now comfortable with it. In 2007 or 2008 we’d bring on 10 TB every 15 days and now we add as much every 48 hours. There is just a deluge of data and people need to deal with it.”

    The new EgnytePlus is available now. The market may be booming but so is the number of competitors. Box is very aggressive in targeting the enterprise cloud storage market, as are Panzura, Nasuni, ownCloud and other companies, including legacy storage giant EMC. And don’t forget that the big cloud guys — Google, Microsoft and Amazon — have their own enterprise storage plays.


  • Startup Strongloop brings supported Node.js to Red Hat

    Strongloop, founded by heavy-hitting Node.js committers Bert Belder, Ben Noordhuis and Al Tsang, has come out with a version of the popular server-side JavaScript platform for Red Hat Linux. Since Red Hat Enterprise Linux (RHEL) is the Linux of choice for many enterprises, this is a significant development for the growing community of Node.js programmers and for enterprise developers who want a supported version of the platform for their own work.

    While there has been a Node.js download available for RHEL and its cousins Fedora and CentOS via the Red Hat Package Manager (RPM), there was no formal support from Red Hat or Joyent (the company behind Node.js), and Node.js itself is not included in the Red Hat distribution. Besides Red Hat/CentOS release 6.3, Strongloop Node also supports:

    • Debian/Ubuntu 12.10 (DEB)
    • Mac OS X Mountain Lion 10.8 (PKG)
    • Microsoft Windows 7 (MSI)


    The official support and service that Strongloop provides could be critical for RHEL developers who want to make use of Node.js’ event-driven talents. Until now, a RHEL developer with a Node.js issue or problem had to go to the mailing list for help. “Now they can get support from us and we write Node.js,” Tsang told me.

    As Joyent CTO Jason Hoffman once told GigaOM, Node.js is a very good way to write high-performance servers that need to handle APIs and facilitate very fast data ingress and egress. Those are attributes that might come in handy for enterprise developers.
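    To make that event-driven point concrete, here is a minimal sketch of the style of server Node.js is built for (the port and JSON payload are arbitrary choices for illustration, not Strongloop’s code). A single event loop dispatches a callback per request instead of dedicating a thread per connection:

    ```javascript
    // Minimal event-driven HTTP server: one process, one event loop,
    // a callback per request instead of a thread per connection.
    const http = require('http');

    const server = http.createServer((req, res) => {
      // Respond with a small JSON payload echoing the requested path
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true, path: req.url }));
    });

    server.listen(3000); // port chosen arbitrarily for this sketch
    ```

    Because nothing blocks while waiting on I/O, one process can juggle many simultaneous connections, which is what makes this model attractive for API front ends and fast data ingress and egress.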

    Strongloop’s news comes the same day Node.js v0.10.0 debuted.


  • Wanted: More jobs for Watson

    IBM’s Watson has already proven it can beat human brainiacs at Jeopardy and has shown promise in cancer research. It has even gone to college at Rensselaer Polytechnic Institute. Now, results of a new academic challenge at the University of Southern California may show that Watson can also suck cost out of legal proceedings and help sufferers of post-traumatic stress disorder (PTSD).

    Winning team of USC Watson competition.


    Two dozen student teams participating in the recent IBM Watson Academic Case Competition at USC had 48 hours to come up with a new use case along with a business plan that would harness Watson’s natural language processing skills and cognitive capabilities. The plans were judged by a panel of IBM execs, school officials and business leaders.

    “The goal is to come up with a business plan that had to be feasible, well thought-through, come up with a go-to market, and it had to have a stable business model,” said Steve Gold, VP of Watson Solutions for IBM. IBM provided a crash course on Watson’s capabilities, one-on-one consulting and an open Q&A — and then turned the teams loose.

    IBM wanted students to work across schools and areas of expertise. Its stated goals were to expose new people to Watson’s capabilities, advance the curriculum around Watson and encourage research that could help build out Watson’s talent pool.

    Putting Watson to work in law, training and PTSD research

    The first-place team came up with a plan to use Watson in a legal setting, an area some would say is ripe for disruption. Law firms have traditionally relied on per-hour fee arrangements, and those fees add up quickly. Now, law firms are “encouraged” to use flat-fee arrangements, Gold said. The outsourcing of legal work (when a law firm ships work off to a third-party provider but bills clients for that service) is already a $4 billion business, Gold said. All those legal documents are a perfect example of the reams of unstructured data that Watson is great at parsing. In January, GigaOM’s Derrick Harris touched on legal discovery as a big data application.

    This team recommended that Watson be used to conduct discovery for corporate legal departments — sifting through court documents, briefs, legal articles and related material. The takeaway, according to IBM:

    “By placing Watson in charge of research, firms can recover time and costs while delivering better legal outcomes. In turn, firms that leverage Watson’s speed and efficiency can address the legal trend towards ‘flat fee’ billing and research outsourcing.”

    The second-place team recommended that corporate human resources departments use Watson to evaluate data about employee career goals and assess and recommend the training options needed to attain them. In theory, a successful application would mean better-prepared employees and a higher level of job satisfaction.

    The third-place team proposed that doctors use Watson to find undiagnosed PTSD patients by sifting through military veterans’ data, including their medical histories and combat records. The problem with PTSD is that it often is not reported by the sufferer.

    None of the USC teams actually had access to Watson, although that could happen down the road, Gold said. “There’s only so much you can do in 48 hours. The goal here is to find a market that needs addressing, and deliver a plan to attack it that would be applicable to Watson’s talents,” he said.

    Next year, IBM and USC hope to expand the competition to 500 students across USC’s various schools. IBM has run similar Watson competitions at Cornell and the University of Rochester. One thing’s for sure: IBM will milk Watson’s success for all it’s worth and also use it as a recruiting tool. Last year, Gold’s group brought on 17 interns and plans to field 26 to 28 interns next year, including two high school students.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • Who’s the biggest cloud of all? The numbers are in

And the winner is … Amazon, at least among Infrastructure as a Service (IaaS) providers, by a very wide margin in the fourth quarter of 2012, according to new numbers from Synergy Research Group. Synergy ranked IBM second and, somewhat surprisingly — to me anyway — British Telecom third worldwide.

    Overall revenue from IaaS and Platform as a Service (PaaS) made up just 15 percent of the overall cloud infrastructure market, although they were the fastest growing categories. That’s hardly a surprise given the cash funneled into these arenas, not only by Amazon but by Rackspace, HP, IBM, Red Hat, and all the telcos.

    According to a post by Telegeography, a Synergy partner and the company behind all the cool fiber pipeline maps:

“In the past year, IaaS and PaaS revenues increased 55% and 57%, respectively. Amazon dominates the IaaS segment, accounting for 36% of revenues, and is quickly approaching PaaS leader Salesforce’s 19% market share. Although well behind Akamai and Level 3 in the CDN/ADN segment, Amazon holds the number three spot with a 7% market share.”


Rackspace, because of its hosting roots rather than its budding OpenStack cloud business, leads the managed hosting segment, followed by Verizon and NTT. Across all segments, Amazon is the market leader in North America and NTT leads in Asia. Europe, the Middle East and Africa, or EMEA, is a battleground hotly contested by France Telecom-Orange, British Telecom, and Deutsche Telekom.

Content Delivery Network (CDN) leader Akamai remains at the front of the pack in that category, followed by Level 3 and Amazon.

Numbers like these are fascinating snapshots of what will doubtless be a changing market. I was surprised to see Amazon so strong in PaaS — it was ranked second after Salesforce.com. Amazon’s Elastic Beanstalk PaaS just doesn’t seem to have that much traction — but definitions of PaaS vary, and the array of higher-level services Amazon offers atop its IaaS foundation (which aren’t part of Elastic Beanstalk) could be considered PaaS-like. Salesforce.com’s PaaS tally presumably includes both Force.com and Heroku.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • New AppDynamics release aims to fix, not just find, application problems

    The goal of application performance management (APM) products is to monitor how applications work and alert IT if things start to go awry. AppDynamics says its new release will automatically fix many of those problems without human intervention — a tall order.

    “We need to handle apps more dynamically and expand more into operational management. We started with monitoring and now we’re expanding into automating the fixing of problems as well. We have to move more work into the machine itself instead of doing it all manually,” said Jyoti Bansal, CEO and founder of the San Francisco-based company.
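The monitor-then-remediate loop Bansal describes can be sketched in a few lines. This is an illustration only, not AppDynamics’ mechanism; the metric names, the 5 percent threshold and the restart action are all hypothetical.

```python
def check_and_remediate(metrics, restart_app, error_threshold=0.05):
    """Restart the app when the error rate crosses a threshold.

    Returns the action taken so a monitoring tool could log it either way.
    """
    error_rate = metrics["errors"] / max(metrics["requests"], 1)
    if error_rate > error_threshold:
        restart_app()  # the automated fix, no human in the loop
        return "restart"
    return "none"

# A degraded app (8% errors) triggers the automated fix; a healthy
# one (1% errors) is left alone.
actions = []
assert check_and_remediate({"requests": 1000, "errors": 80},
                           lambda: actions.append("restarted")) == "restart"
assert check_and_remediate({"requests": 1000, "errors": 10},
                           lambda: actions.append("restarted")) == "none"
assert actions == ["restarted"]
```

The interesting engineering is in choosing remediations that are safe to run unattended, which is why Bansal’s autopilot analogy fits: automation handles the routine corrections while humans keep the instrumentation in view.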

“I use the analogy of flying a Boeing 757. You have to be trained and need instrumentation and dashboards to fly it, but you also have autopilot,” he added. And the product, which previously handled Java and .NET applications — arguably comprising 90 percent of corporate workloads — is adding PHP apps to the mix with this release as well.

AppDynamics, which raised $50 million in Series D funding in January, competes with New Relic in APM, although Bansal would argue that New Relic targets startups and smaller companies while AppDynamics takes on big enterprise clients; it claims noteworthy customers including Netflix, Time Warner Cable, Orbitz, Stubhub, and Fox News. New Relic also manages .NET, Java, Ruby, PHP and Python applications.

The new capabilities run as Software as a Service but can also be deployed on premises if the company prefers.

As companies trust more of their workloads to outside cloud providers, dashboards, the Boeing instrumentation Bansal mentioned, become all the more important. The metrics provided must be reliable and factual. If users cannot believe what they are seeing, as happened in the recent Rap Genius-Heroku case, the consequences could be huge.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • The week in cloud: VMware cloud saga continues; Amazon kicks up monitoring wars

We’ve heard it before and we’re hearing it again: VMware plans to take on Amazon Web Services. GigaOM reported it last July, breaking news of a planned spin-off tasked with this job. Part of that spin-off would revolve around Project Rubicon, a public Infrastructure as a Service play. CRN followed up in August with its report on Project Zephyr, which it described as a challenge to Amazon’s public cloud. This week CRN again reported on VMware’s AWS killer.

    One can only hope that VMware CEO Pat Gelsinger, EMC CEO Joe Tucci and EMC chief strategist Paul Maritz will put some clarity around all this Wednesday when they talk to institutional investors. This event has been billed as an update on the Pivotal Initiative, the aforementioned spin-off which was finally announced in December.

    Meanwhile, as talk of VMware’s proposed AWS killer circulated, Forrester analyst James Staten had his own interesting take on how VMware should attack its public cloud problem and it doesn’t hew to VMware’s vSphere-and-vCloud Director blueprint. That’s a non-starter in this new world of mobile and net-new applications, he wrote:

“What you should be doing is admitting you screwed up with vCloud Director 1.0 and 1.5 and kicking ass in engineering to get a true cloud to market ASAP. Your real threat? CloudStack and OpenStack atop Xen or KVM sold to a new cloud administrator inside the enterprise who starts with the service elements of a cloud the DevOps crowd values and worries less about the underlying abstraction layers and infrastructure. This is your real threat.”

    GigaOM Pro analyst Jo Maitland took those thoughts a step further. “VMware should buy Eucalyptus for customers that want an internal cloud that is compatible with AWS and build an OpenStack alternative to AWS just like everyone else. vSphere and vCloud Director do not a cloud make.”

    AWS partners shouldn’t be shocked, shocked!

Some providers of AWS monitoring services expressed outrage last week after Amazon offered a free trial of its own Trusted Advisor service. The issue is that Trusted Advisor offers configuration advice and other perks that compete with some of what Newvem, Cloudyn, Cloudability and other third parties offer.

Here’s the thing: Trusted Advisor has been around for a while — GigaOM reported on it last June. As Ed Byrne, CEO of CloudVertical, another monitoring provider who seemed less perturbed by the move, put it: “It’s been in beta for probably six months — everyone in the industry knew it would exit beta and become part of the basic offering. It’s a little rich to cry wolf now.”

    Indeed. Any third party who thinks that AWS won’t keep building more services and capabilities is nuts. It’s like Amazon started out building a big, airy room with lots of space and gaps to be filled by third parties but then starts sucking all the air out of that room itself as it adds more of its own stuff.

    Byrne summed it up: “If we have to make our business hoping AWS won’t do obvious value-adds for their customers, we are dead in the water. Of the $1.7 trillion per year spent on IT infrastructure, probably $3 billion is on AWS. There’s a whole lot of people trying to understand cloud economics for their own organization, for a migration, for a private or hybrid strategy [there’s] plenty of scope for providers like ourselves and the others.”


    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • What if your cloud dashboard isn’t telling the truth? Hint: It ain’t good

    What happens if you can’t believe your own dashboard?  Whether it’s for your car, your plane or your computing cloud, it’s not a good thing if the console that’s supposed to tell you what’s really going on just isn’t doing so.

That’s why the recent Heroku-Rap Genius dustup is important. To recap: About two years ago Rap Genius, which runs its Ruby-based application on Heroku’s platform as a service, started noticing performance issues. As traffic grew, it dutifully added more Heroku resources, “dynos” in Heroku parlance. But performance still lagged. Rap Genius fielded lots of customer complaints even though its Heroku log files and related New Relic dashboard said nothing was amiss.

    Customer mandate: transparency and trust

It turns out that Heroku, the PaaS company acquired by Salesforce.com in 2010, had tinkered with the routing underpinnings of its site in such a way that jobs were not getting deployed optimally. The issue was the move from “intelligent load distribution” to “random load distribution,” plus the fact that the change was neither documented nor publicized to customers.

    In a February 13 Rap Genius blog post detailing the issue, the company said:

    “A Rails dyno isn’t what it used to be. In mid-2010, Heroku quietly redesigned its routing system, and the change — nowhere documented, nowhere instrumented — radically degraded throughput on the platform. Dollar for dollar a dyno became worth a fraction of its former self.”
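The throughput collapse the post describes is a queueing effect: with single-threaded dynos, random routing can pile requests up behind a busy dyno even while other dynos sit idle, whereas sending each request to the least-busy dyno keeps waits near zero at the same load. A toy simulation illustrates the gap; this is a sketch with made-up parameters, not Heroku’s actual router.

```python
import random

def simulate(routing, n_requests=10000, n_dynos=90,
             service_time=1.0, arrival_rate=60.0, seed=42):
    """Simulate single-threaded dynos under two routing policies.

    Each dyno serves one request at a time; free_at[i] is when dyno i
    next becomes idle. Returns the mean time a request waits in queue.
    """
    rng = random.Random(seed)
    free_at = [0.0] * n_dynos
    clock, total_wait = 0.0, 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)   # Poisson arrivals
        if routing == "intelligent":             # route to least-busy dyno
            i = min(range(n_dynos), key=lambda d: free_at[d])
        else:                                    # "random": pick any dyno
            i = rng.randrange(n_dynos)
        start = max(clock, free_at[i])           # wait if the dyno is busy
        total_wait += start - clock
        free_at[i] = start + service_time
    return total_wait / n_requests

# At identical load (about two-thirds utilization here), random routing
# produces substantially longer queue waits than least-busy routing.
assert simulate("random") > simulate("intelligent")
```

The practical upshot matches Rap Genius’ experience: under random routing, adding more dynos buys far less headroom per dollar than the dashboard math would suggest.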

That blog post generated a ton of “up-votes” on Hacker News and probably prompted an apology from Heroku, which TechCrunch covered.

    Rap Genius co-founder Tom Lehman described what happened in a recent phone interview. “We had been running 90 dynos at $20,000 a month which we thought was sufficient based on the incorrect data we were getting but it turned out that 90 dynos was woefully insufficient. So we upgraded to 300 dynos at $40,000 per month and performance is still bad. We can’t pay $40,000 a month for this.”

On February 16, Heroku issued a more detailed apology and outlined a plan of action including:

    • Improving our documentation so that it accurately reflects how our service works across both Bamboo and Cedar stacks
    • Removing incorrect and confusing metrics reported by Heroku or partner services like New Relic
    • Adding metrics that let customers determine queuing impact on application response times
    • Providing additional tools that developers can use to augment our latency and queuing metrics
    • Working to better support concurrent-request Rails apps on Cedar

    When asked for comment, Heroku referred back to its blog post.

    Lehman said his company is in a tight spot. It can’t sustain payments of $40,000 per month. “Unless something changes we have to move.”

The likely destination? Amazon Web Services, a transition he would not take lightly because Heroku does much that AWS cannot. On the other hand, many of Rap Genius’ third-party providers are already on AWS. “I still have love for Heroku. Without it we couldn’t get to where we are today, but they have not been 100 percent upfront with customers.”

    In his view, this should not be the end of the story. “We feel Heroku (and therefore Salesforce.com) overcharged and misled a bunch of small (and big!) start-ups and if they indeed did something wrong they should be held accountable.”

    The bigger picture

I’ve asked Lehman if he is party to the class action suit and will update when he responds, but let’s get back to the broader issue. Update: Lehman said he is not part of the lawsuit.

Companies already get the heebie-jeebies over the perception that moving to the cloud means a “loss of control” over their IT. Imagine the impact if they think they can’t trust the metrics their providers give them.

This is about way more than Heroku and Rap Genius. It’s about customer trust, and a lack of it is a real danger to cloud adoption.

    Photo courtesy of Shutterstock user 3Art

    This story was updated at 8:42 a.m. PST to reflect Lehman’s position on the class action suit.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • Amazon slices prices on DynamoDB database service

Amazon’s DynamoDB NoSQL database is just about a year old, and Amazon has cut prices on it to celebrate. It is also offering reserved capacity to customers that qualify — you have to run all the instances in one region, for example — and can commit to one or three years of usage.

Specifically, the company is cutting the cost of reads and writes by 35 percent and indexed storage by 75 percent across all regions; the details are in the usual chart on the AWS blog.

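The percentages translate into a monthly bill straightforwardly. A quick sketch, with hypothetical dollar figures; only the 35 and 75 percent cuts come from Amazon.

```python
# Announced cuts: 35% on read/write throughput, 75% on indexed storage.
READ_WRITE_CUT, STORAGE_CUT = 0.35, 0.75

# A made-up pre-cut monthly bill, purely for illustration.
old_bill = {"reads_writes": 400.00, "indexed_storage": 200.00}

new_bill = {
    "reads_writes": old_bill["reads_writes"] * (1 - READ_WRITE_CUT),
    "indexed_storage": old_bill["indexed_storage"] * (1 - STORAGE_CUT),
}

# $400 -> $260 for throughput, $200 -> $50 for indexed storage.
assert round(new_bill["reads_writes"], 2) == 260.00
assert round(new_bill["indexed_storage"], 2) == 50.00
```

Reserved capacity would discount further in exchange for the one- or three-year commitment, though Amazon’s post, not this sketch, is the source for those rates.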

    In his blog, Amazon CTO Werner Vogels lauded how customers like Shazam have wielded the managed database service.

    As Vogels told GigaOM last year at the product launch, NoSQL suits social gaming and web applications but is also critical for the big data applications demanded by business.

Since DynamoDB debuted, Amazon has launched a series of other big data and enterprise-focused services, like the newly shipping Data Pipeline, and promises more as it faces heightened competition from OpenStack players, which include legacy IT giants IBM, Red Hat and Hewlett-Packard as well as Rackspace and others. Rackspace just bought into NoSQL database services with its acquisition of ObjectRocket and its MongoDB technology.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • For sale from Pivotal Initiative: Cloud Foundry

    The Pivotal Initiative is now selling software and support subscriptions for the Cloud Foundry Platform as a Service (PaaS) and is opening up governance of that effort to bring outside voices into the process.

The addition of “external committers” to the project could ease tensions brewing among some Cloud Foundry backers — companies that built their own PaaSes atop the Cloud Foundry framework.

    But then again, the fact that Pivotal is now selling software/support could open new areas of contention with partners that may want to do the same thing. Such is the life of an open source project where coopetition is the rule of engagement.

    As set forth in a new blog post, Cloud Foundry is going to add “full-time external committers” to the process. Governance and openness had been an ongoing issue with the PaaS project according to an exec with one Cloud Foundry vendor. “We just didn’t have any visibility into what was going on [inside the project],” he said.

    He would like to see the whole effort turned over to a vendor-neutral foundation for management, as Rackspace did with OpenStack and IBM did with Eclipse. That didn’t happen here but the addition of outside committers is a step in the right direction and, to be fair, some folks in the OpenStack community complained that Rackspace took its sweet time to make its move.

    Lucas Carlson, CEO of AppFog, another Cloud Foundry backer, said he’s seen other good signs from Cloud Foundry. He is thrilled, for example, that the code is back on a public Github repository. It had been removed some time ago. “We see it as a sign of a more open approach from the Cloud Foundry team,” he said.

    Collaborators or competitors: a fine line

Some history: The worry initially was that Cloud Foundry, despite all the talk of open-source goodness and just plain openness, was too closely associated with one vendor: VMware. Then, when VMware spun it off to a VMware-and-EMC-backed entity (Pivotal), there was more uncertainty about its future.

There was also concern that some of the Cloud Foundry players were going to take the work they’d done and fork the project altogether because of the lack of visibility into Cloud Foundry plans. Under this definition, a “fork” — and yes, I’ll get hate mail on this — could lead to the creation of several not-always-compatible versions of a project. For some in the open source community, there is no such thing as a bad fork.

    But for mere mortals there is worry about an actual ecosystem divergence when many members of the same community start getting their updates from different places instead of relying on a central source, in this case Pivotal. To be fair, there is analogous concern that several versions of OpenStack backed by many vendors — some contributing back more than others — will lead to the same problem. At any rate, that’s the kind of angst Pivotal is trying to lay to rest.

    In Thursday’s blog post, James Watters, head of product for Cloud Foundry, reiterated that the project will support multiple clouds, promising “open interfaces, support and continued development on AWS, OpenStack, vCloud and vSphere environments.”

And, he maintained, the addition of outside committers was always a goal:

“ … we are engaged with several organizations about putting dedicated resources on the extended engineering team — we believe this to be a very important step forward. The scale of these external investments is significant and a major milestone in our growth. The heart of Cloud Foundry, however, really comes from individual community contributions and users, so of course, we invite you to join us. All you need to do is send a pull-request.”

Going forward, it will be interesting to see which engineers from which companies will be added as committers. For now, the naysayers appear to be relieved at what Cloud Foundry has done.

    Watters endorsed Cloud Foundry’s existing “corporate sponsored, Apache 2 licensed, pull request driven approach” as the right way to go. The outside committers will open up the process going forward, but he also left the door open to further changes. He wrote: “The massive growth of the community and ecosystem requires mediating a diverse set of needs and we will always be open to other governance models for the project in the future.”

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.