Author: Barb Darrow

  • Amazon takes another step to suck up more enterprise data

    Face it, for all the drama around Microsoft’s recent travails, it owns a good chunk of today’s enterprise. I would wager even most Google Apps shops also run Microsoft Office, for example. And then there are tons of corporate data living in Windows-centric IT shops — in SQL Server and SharePoint. That’s why Amazon, in its push to draw more enterprise customers, had to make sure the Amazon Storage Gateway will run in Microsoft Hyper-V virtualized shops. Which it now does.

    The news, released on the AWS blog, adds Hyper-V support to the year-old storage gateway — a key bridge between in-house company data and Amazon’s cloud — which already supported VMware’s popular ESXi hypervisor. As the AWS blog explains, the gateway:

    “… combines a software appliance (a virtual machine image that installs in your on-premises IT environment) and Amazon S3 storage. You can use the Storage Gateway to support several different file sharing, backup, and disaster recovery use cases. For example, you can use the Storage Gateway to host your company’s home directory files in Amazon S3 while keeping copies of recently accessed files on-premises for fast access. This minimizes the need to scale your local storage infrastructure.”
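
    The pattern in that excerpt — the full data set in S3, recently accessed files kept on-premises — is essentially a cache with least-recently-used eviction. A minimal sketch in Python, with plain dictionaries standing in for S3 and the local appliance (purely illustrative, not the gateway’s actual code):

```python
from collections import OrderedDict

# Stand-ins for the real stores: a dict for S3, an OrderedDict as the
# on-premises cache. Illustrative only -- not the gateway's implementation.
s3_bucket = {"report.doc": b"full contents...", "budget.xls": b"numbers..."}

class LocalCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.files = OrderedDict()

    def read(self, name):
        if name in self.files:                 # recently accessed: fast local hit
            self.files.move_to_end(name)
            return self.files[name]
        data = s3_bucket[name]                 # miss: fetch from cloud storage
        self.files[name] = data
        if len(self.files) > self.capacity:    # evict least recently used file
            self.files.popitem(last=False)
        return data
```

    Reads of hot files never leave the premises; only cold files trigger a round trip to S3, which is what keeps local storage needs small.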

    If companies are to trust more of their data to a cloud — any cloud — they need to see that pumping their data in and out of it is: a) easy as pie; and b) secure. That’s the reason Microsoft snapped up StorSimple last October. And that’s the idea behind such slick services as Nasuni and Panzura. Face it, it’s in Amazon’s best interest to blur the line between in-house data and data that lives in its cloud — and that’s the idea behind the storage gateway.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

        

  • Cloud adoption: It’s not about the price, stupid

    Don’t look now, but there has been a shift in thinking around why companies move — or should move — workloads to the cloud. A few years ago, most of the talk was all around saving money. Look at how cheap Amazon Web Services are! Pennies per hour to spin up instances! We don’t need to buy more servers!

    But over the last year, the discussion has morphed into how cloud offers companies flexibility and agility, and there’s growing realization that for stable, non-variable workloads, cloud — even public cloud — is not the cheapest option at all; those workloads might actually be cheaper to run in house. But that flexibility for occasional or variable workloads remains the public cloud’s siren call. Check out posts from Virtual Geek and Cloudave for more thinking on this trend.

    So the reason to go to cloud is no longer price but being able to move fast — deploy, re-deploy, and un-deploy workloads as needed without having to buy servers and software that could become shelfware next week or next month.
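
    The break-even arithmetic behind that shift is easy to sketch. With hypothetical prices — say $0.68 per instance-hour for a large cloud instance versus roughly $400 a month for an amortized in-house server — a steady 24x7 workload and an occasional one land on opposite sides of the line:

```python
# Hypothetical prices for illustration only: a large cloud instance at
# $0.68/hour versus an in-house server whose hardware, power and admin
# costs amortize to about $400/month.
CLOUD_RATE_PER_HOUR = 0.68
IN_HOUSE_PER_MONTH = 400.0

def monthly_cloud_cost(hours_used):
    """Cloud bill for one instance running hours_used hours in a month."""
    return CLOUD_RATE_PER_HOUR * hours_used

steady = monthly_cloud_cost(730)      # always-on: ~730 hours/month
occasional = monthly_cloud_cost(80)   # bursty: 80 hours/month

# The steady workload costs more in the cloud; the occasional one costs less.
cheaper_in_house = steady > IN_HOUSE_PER_MONTH
cheaper_in_cloud = occasional < IN_HOUSE_PER_MONTH
```

    With those assumed numbers, the always-on workload runs just under $500 a month in the cloud while the bursty one comes in around $55 — which is exactly the variable-versus-stable split the posts above describe.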

    IaaS follows SaaS arguments of the past

    What’s interesting to me is that this debate is evolving much like the discussion around Software as a Service (SaaS) did a decade or so ago. Initially, when Salesforce.com was coming into its own, most of the sales pitch was around price. Salesforce was so much cheaper than Siebel Systems. (Remember Siebel Systems? It’s now part of Oracle).

    At that time, Microsoft was getting into the CRM business with its own on-premises edition. Its counter-pitch was: “Sure, Salesforce.com may be cheaper at first, until you use it for three years. Then Microsoft on-premises CRM is cheaper.”

    Of course, when Microsoft started rolling out its own cloud-based CRM, that price-based argument dissipated. The new thinking was that “cloud” CRM is better because everyone’s on the same, latest release and you can add or subtract users easily. Salesforce.com’s message likewise evolved to mirror that same message — especially as the more feature-rich Salesforce.com package options started to get, um, quite pricey. Then Salesforce’s benefit became that it freed companies from the tedium and expense of on-site server and software upgrades. You could focus on business and leave the IT heavy lifting to your provider.

    Everest Group partner Scott Bils agrees that this shift in thinking around cloud deployment motivation is happening. “No doubt the conversation has shifted from [total cost of ownership] to agility,” he said. A survey Everest conducted of about 350 attendees at last week’s Cloud Connect show reflects that trend.

    Cloud Connect 2012 Enterprise Cloud Adoption Survey

    Customers surveyed cited reduced time to provision applications and infrastructure as their primary reason to move to cloud, followed by the cloud’s overall flexible capacity. TCO, on the other hand, came in way down the list. Now, remember, these people were at a cloud computing conference, so they may be more up to speed on these issues than the average IT user. But as Bils noted: “Interestingly, vendors still mistakenly believe [cost remains] the most important factor.”


  • Microsoft takes hits after bad PC numbers

    Wall Street analysts piled on Microsoft after new research showed how low the PC market could go. On Wednesday, IDC pinned at least part of the blame for bad PC sales numbers on sluggish Windows 8 adoption. Microsoft shipped Windows 8 in late October and made a big bet on Surface, a business-friendly tablet alternative to Apple’s popular iPad. Right now, neither of those bets is doing very well.

    On Thursday, Goldman Sachs downgraded Microsoft shares to “Sell” from “Neutral” and Nomura Securities cut its call to “Neutral” from “Buy.” The moves came a day after IDC called the first quarter of 2013 “the worst quarter” ever, with PC sales down 14 percent from the year-ago quarter. (Gartner numbers were slightly better: it had PC sales off only 11.4 percent year over year for the quarter.)

    “At this point, unfortunately, it seems clear that the Windows 8 launch not only didn’t provide a positive boost to the PC market, but appears to have slowed the market,” Bob O’Donnell, IDC program vice president for clients and displays, said in a statement. (Full IDC statement here.)

    Long-time Microsoft watcher Rick Sherlund at Nomura Securities wrote that the combination of “sluggish” Windows 8 adoption and the “lack of compelling new hardware is disappointing with no relief likely” until later this year when Intel releases the new Haswell notebook processor.

    As if on cue, the Wall Street Journal (subscription required) reported that Microsoft plans a new 7-inch Surface tablet to come later this year.

    To be fair, for the first quarter IDC also acknowledged that industry darling Apple faded a bit. While it did better than the overall U.S. market, shipments of Apple PCs also slipped — apparently because more people are opting for iPad tablets as PC replacements.

    MSFT data by YCharts


  • Dropbox adds single sign-on support to woo more business users

    Business workers hate, hate, hate having to sign onto multiple services — cloud-based or on-premises — with different passwords and credentials. That’s why Dropbox is bolstering its business version with single sign-on, or SSO, capabilities. First, it’s supporting the Security Assertion Markup Language (SAML), which means that if your IT people have set up a SAML federated process in the office, you can sign on once to access all the affiliated applications.

    It’s working with identity management experts — Ping Identity, Okta, OneLogin, Centrify and Symplified — to bring SSO to those users. And, in case it’s not clear that Dropbox wants to attract business users, it’s re-christening Dropbox Teams as Dropbox for Business. Got it? Good.

    IT admins can already integrate Dropbox with Microsoft Active Directory, the directory services scheme used by many companies, to automate the creation and removal of Dropbox for Teams accounts from an existing directory. But until now (well, actually until next month, when it comes online) it did not support SSO.


    Dropbox is the undisputed king of consumer-focused file-share-and-sync — as of November it claimed more than 100 million users. It is far from clear, however, how many of those users graduate from the free to the paid consumer service. Nor does the company provide numbers for Dropbox for Teams, er, for Business users; the service costs $795 per year for 5 users plus $125 for every additional user. But it does say that Dropbox is used in 95 percent of all Fortune 500 companies.
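
    That pricing formula is simple enough to capture in a few lines. A quick sketch (the function name is mine; the numbers are the article’s):

```python
def dropbox_business_annual_cost(users):
    """Annual cost per the article: $795/year covers the first 5 users,
    plus $125 for each additional user."""
    BASE_PRICE, INCLUDED_USERS, EXTRA_PER_USER = 795, 5, 125
    if users <= INCLUDED_USERS:
        return BASE_PRICE
    return BASE_PRICE + EXTRA_PER_USER * (users - INCLUDED_USERS)
```

    So a 20-person team would pay $795 + 15 × $125 = $2,670 per year.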

    As we all know by now, people who use a given service at home like to use it at work, which makes that 95 percent figure credible. We also hear about Fortune 500 companies — including IBM — prohibiting the use of such consumer-focused products (including Dropbox specifically), and that’s the trend Dropbox is trying to nip in the bud here.

    Earlier this year, Dropbox added a more IT-friendly console that lets admins restrict access and transfer of company documents and helps them track user activity.

    Sujay Jaswa, VP of business development for Dropbox, said the company does not see Dropbox competing with SkyDrive — which Microsoft has tied tightly into Office and Windows — nor with Box, which would love to be the Dropbox of the Enterprise. “We just want to build the kinds of features people love,” he said.

    But anyone outside of Dropbox would say that it is definitely contending with Microsoft, Box and the Google Apps-and-Drive tandem in business accounts.


  • Cloud wars to rage on with dueling OpenStack, AWS events next week

    Neither of the principal parties would admit this, but the competition between the OpenStack cloud forces and Amazon Web Services will play out next week, with the OpenStack Summit taking place in Portland, Ore., April 15-19 and the Amazon Web Services Summit in New York on April 18. Both events are sold out, although, realistically, can you remember the last tech event you attended that was not “sold out”?

    I have no numbers for the AWS event (at the Javits Center), but as of Tuesday night the count for the OpenStack Summit was 2,400 registered attendees, up from 1,314 for last year’s San Diego extravaganza, according to an OpenStack source with access to that data. (These numbers are supposedly public, although darned if I could find them.)

    Rackspace, HP pack OpenStack show

    Rackspace — one of OpenStack’s granddaddies, along with NASA — has registered 199 people, a number one OpenStack member characterized as overkill. Hewlett-Packard, depending on how you count or spell it, has 169 people or so on tap. Here’s how that list breaks out: HP (85); Hewlett Packard, no dash (30); Hewlett-Packard, with dash (22); HP Cloud Services (21); HP Cloud (7); and Hewlett Packard Co. (4). Seriously, HP, what’s up with that?

    Red Hat is on with 74, IBM with 72 and the list goes on. What I’ll be looking for, however, will be real, live OpenStack customers, who are starting to trickle out. OpenStack Foundation member Cloudscaling (which registered 14 summit attendees) just announced video game publisher Ubisoft as a customer and has already claimed LivingSocial and IBS Datafort as reference accounts.

    Structure 2011: Werner Vogels – CTO, Amazon.com

    Some other interesting tidbits from the OpenStack Summit attendee list: Controversial foundation member VMware registered a whopping 4 people. VMware bought Nicira, a big OpenStack player in software-defined networking. And non-member Oracle registered 14 people. Interesting. Oracle is going its own way with cloud but recently bought Nimbula, an OpenStack member.

    Amazon to tout OpsWorks, other enterprise-class services

    Meanwhile, on the other coast, AWS CTO Werner Vogels will probably talk up AWS’ value to the enterprise and tout its new-and-improved cloud management features and services, including the OpsWorks lifecycle management offering and Redshift, Amazon’s inexpensive data warehouse alternative to Teradata, Oracle, IBM and HP products.

    AWS, a favorite among developers at startups and big companies alike, still needs to persuade financial services companies and organizations in other heavily regulated industries that its public cloud infrastructure can be trusted for sensitive workloads — things beyond archival storage. And there are indications — including the private cloud it’s allegedly building for the CIA — that it’s getting over its aversion to private cloud deployment as well.

    OpenStack clouds are starting to gel — at least at some customer accounts. What remains to be seen is which of the many OpenStack cloud providers will gain traction. And meanwhile, AWS continues to chug along.


  • Upstart Server Density sets sights on RightScale with new cloud management goodies

    Cloud monitoring startup Server Density has big plans to take on RightScale in the multi-cloud management space.

    The London-based company made its bones by offering customers — which include Electronic Arts, Intel and The New York Times — an easy way to monitor their Amazon Web Services and Rackspace workloads. In that arena it competed with open-source tools like Nagios and Cacti and commercial offerings like Scoutapp and Cloudkick, which Rackspace purchased in 2010.

    Now it’s moving into the more rarefied air of multi-cloud monitoring services, where RightScale of Santa Barbara, Calif., reigns as a big, entrenched competitor.


    Server Density, which now has 13 employees, put Server Density v2 in private beta a few weeks back and will start rolling it out more broadly in coming weeks, co-founder and CEO David Mytton said.

    RightScale has its vulnerabilities, in Mytton’s view, chief among them what he terms its “awful UI” and pricing that he says is more enterprise-y than you might expect for a cloud-focused company. (For the record, RightScale offers a 60-day free trial; after that, pricing starts at $500 per month for one account with 5 users.)

    Server Density monitors an unlimited number of servers for $10 per month and then charges per server when the user enables additional capabilities. Its route to market is bottom-up: sysadmins sick of dealing with multiple cloud dashboards — from AWS and Rackspace — typically use their credit cards to check out Server Density, and its use often spreads to whole departments, Mytton said.

    “We’ve spent the past year taking feedback from our existing monitoring customers and are adding cloud provisioning which is our first step into infrastructure management — we provide an abstraction layer for web and mobile that lets you control your Rackspace and Amazon instances without having to use those APIs,” he said.

    Of course, RightScale isn’t standing still. The company builds and buys additional capabilities as needed. And it works with lots of clouds, including Google Compute Engine in addition to AWS and Rackspace.

    Server Density will also evaluate adding more clouds as it grows, but for now AWS and Rackspace are the two huge opportunities, Mytton said.

    In some ways, this upstart and the company it seeks to unseat also have to face the fact that the cloud providers themselves are adding more monitoring and management tools. AWS OpsWorks is an example. Then the argument is that most companies don’t want to lock into one cloud and will need a tool set to monitor and manage multiple cloud infrastructure providers.


  • Salesforce.com and Rackspace gear up for mobile developers

    If there was any doubt that mobile development is where the action is, witness two pieces of news. First, Rackspace, the infrastructure-as-a-service and hosting company, is launching a pre-packaged mobile “stack” specifically for mobile applications. Second, Salesforce.com is beefing up its mobile software development kit (SDK) and is coming out with “quick start” packs to jump-start HTML5 or hybrid mobile applications.

    Salesforce says developers using its tools can build apps that tap into troves of legacy data from existing CRM customers.

    Given these developments, and rumblings that public cloud king Amazon Web Services is gearing up its mobile development push, it looks like legacy cloud giants are crowding into a space pioneered by smaller, more focused providers of mobile back-end services. (GigaOM Pro analyst Janakiram MSV has a good take on choosing an MBaaS here — subscription required.)

    Who needs an MBaaS?

    Salesforce.com’s pitch is that, while there are tons of useful consumer mobile apps, enterprise apps to date are still lacking. “It’s hard to build mobile apps that don’t just look nice but are engaging, and that comes down to data. They need to be connected into your work data,” said Adam Seligman, VP of developer relations at Salesforce.com. “You have to make it easy to build the apps, the client-side stuff, but you also need those hooks into corporate data.”

    The new mobile packs, which support three lightweight mobile frameworks — jQuery Mobile, Backbone.js and AngularJS — should help on the ease-of-development front.

    Salesforce, which backs both the Force.com and Heroku Platforms as a Service (PaaS), subscribes to the school of thought that a specialized Mobile Backend as a Service (MBaaS) — from Parse, Kinvey, Kii or StackMob — isn’t necessary. Those smaller competitors would no doubt argue that developers need to build applications that connect to myriad applications from many sources — not just those from one company.

    Rackspace wraps up mobile stack in an easily deployable package

    Rackspace already hosts “tons of mobile apps,” but it wants to make it easier for developers and companies to deploy them, CTO John Engates said. So it’s wrapped up a mobile-focused technology stack as a sort of prepackaged cloud for that type of user.

    “We want to streamline things. We put together a stack — including Linux, MySQL, PHP, Memcached and Varnish cache — in a sort of blueprint that we can deploy consistently and quickly,” he said.

    This backend runs in Rackspace’s public cloud infrastructure, but on cloud servers that are dedicated to that customer. “We’re basically running a single tenant infrastructure on a multi-tenant cloud,” Engates said. “Heroku is a multi-tenant platform that lives on Amazon, a multi-tenant infrastructure cloud. We’re trying to build a single-tenant platform atop a public cloud. You can build your own deployment and specs and scale it for what you need.”
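
    In a stack like the one Engates describes, Memcached typically sits between the application and MySQL in a cache-aside arrangement. A rough sketch of that pattern in Python, with plain dictionaries standing in for Memcached and the database (the general pattern, not Rackspace’s actual blueprint):

```python
# Illustrative cache-aside lookup, the role Memcached typically plays in a
# LAMP-style stack. Plain dicts stand in for Memcached and a MySQL table.
cache = {}                                   # stand-in for Memcached
database = {"user:42": {"name": "Ada"}}      # stand-in for a MySQL table

def get_user(key):
    if key in cache:          # hot data served from memory
        return cache[key]
    row = database.get(key)   # cache miss: query the database
    if row is not None:
        cache[key] = row      # populate the cache for next time
    return row
```

    The first lookup of a user hits the database; repeat lookups are served from memory, which is what lets a mobile backend absorb bursty read traffic.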

    Rackspace CTO John Engates

    The entire stack is open source and developers can use their SDKs of choice to develop for any mobile device. Rackspace has also signed up some partners to work with its stack: FeedHenry, New Relic, Sencha, SOASTA, StackMob and Trigger.io.

    “The idea there is you use our infrastructure but then SOASTA can test your application from many perspectives — not just Rackspace — and throw a load up there to make sure it scales before you deploy it,” Engates said.

    As more of these bigger, broader “cloud” companies add mobile development and hosting capabilities, it may be time for consolidation in the MBaaS business to kick off for real.


  • Firebase brings Google Docs-like collaboration to its real-time backend

    Firebase, which offers a real-time back-end for software developers, is adding capabilities that let developers easily build real-time collaboration into their applications, according to co-founder James Tamplin.

    Based on what it’s seen from its users, the San Francisco startup sees collaboration as a really big vertical, Tamplin said in an interview. “Many people have tried to build collaborative text editors like Google Docs but it’s really difficult. Users were using Firebase to synchronize their whole text block, but that’s not as efficient as Google Docs, which just syncs the changes,” Tamplin added. Figuring out how to deal with just the deltas, and how to handle redos and undos when multiple people work on the same thing at the same time, is a really hard problem, he said.
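
    The whole-block-versus-deltas distinction Tamplin describes can be illustrated in a few lines of Python. Here difflib stands in for the operational-transform machinery a library like Firepad actually uses; the point is only that a delta of retain/delete/insert operations is far smaller than re-sending the whole document:

```python
import difflib

# Instead of shipping the whole text block on every change, compute and
# send only the delta. difflib is a stand-in for the real OT machinery.
def make_delta(old, new):
    ops = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("retain", i2 - i1))          # keep old span
        else:
            if i2 > i1:
                ops.append(("delete", i2 - i1))      # drop old span
            if j2 > j1:
                ops.append(("insert", new[j1:j2]))   # add new text
    return ops

def apply_delta(old, ops):
    out, pos = [], 0
    for op, arg in ops:
        if op == "retain":
            out.append(old[pos:pos + arg]); pos += arg
        elif op == "delete":
            pos += arg
        else:  # insert
            out.append(arg)
    return "".join(out)
```

    For a one-word insertion into a long document, the delta is a handful of small operations — the hard part, as Tamplin notes, is transforming concurrent deltas against each other, which this sketch does not attempt.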


    “We actually replicated a full collaborative text editing library atop Firebase and are open sourcing it [under the MIT license],” he said.

    As GigaOM has reported, developers use Firebase to easily create and debug web applications without having to worry about server infrastructure.

    If there were a Firepad plug-in for WordPress, several people collaborating on a blog post would be able to work on the same document at the same time, with Firepad tracking edits and enabling redos as needed, Tamplin said.

    Atlassian is using Firepad in a plug-in for Stash, a tool for managing Git code repositories. That add-on lets different team members edit code together. And startup LiveMinutes is using it to build a way to pull content out of Evernote and work with that content collaboratively, Tamplin said.

    What Firebase is doing with Firepad is similar to Etherpad, which Google bought in 2009, and Firebase competes in a broader sense with companies like Pusher and PubNub.

    Being able to endow apps with collaboration is becoming table stakes for building the next wave of applications, Tamplin said. “We want the next Twitter or Facebook to be built on Firebase,” he said.


  • Superstorm Sandy and Hurricane Irene aside, folks still want to build up their new data centers in New York

    Lobby at Verizon office at 140 West Street, New York post-Sandy

    This is surprising — at least to me. Despite the angst that Superstorm Sandy and Hurricane Irene caused data center providers and their customers in the New York metro area over the last two years, businesses still want to expand their data center capacity in that low-lying, suddenly storm-surge-prone area.

    According to a new survey for Digital Realty Trust, 65 percent of the 148 companies surveyed that definitely plan to expand their data centers want to do so in New York City or its environs. This flies in the face of speculation that big New York-area companies would put more of their new data center firepower far from the coast. (GigaOM’s Jordan Novet has more on the research here.)

    Financial services companies and exchanges clustered in New York obviously need some compute power nearby to reduce latency on trades, but data center experts said those capabilities could be parceled out judiciously to local data centers while most of the other heavy lifting could be shipped off to data centers located in areas far from the coastal flood plain.


    According to the new research:

    “The majority of respondents who definitely plan to expand in 2013 would prefer to locate a new or expanded data center in New York City (65%); Los Angeles (47%), Dallas (36%), Chicago (31%), San Francisco (30%) and Phoenix (28%) are other U.S. cities mentioned often.”

    Other highlights:

    • Security was cited as the most important factor in decisions about location.
    • Folks tend to opt for a site close to their current work location: 69% chose their home city as one of their expanded data center locations.

    Of course, when two 100-year storms hit the same area within two years of each other, you might start evaluating new locations, and then the question becomes what areas are not susceptible to natural disasters. As Chris Perretta, CIO and EVP of State Street, told GigaOM last year: “In the Midwest you get tornadoes, on the coast you get surge, in Florida you get hurricanes, in the west you get wildfires, in California you get earthquakes.”

    Given that, maybe these findings are not such a surprise after all.


  • Serious question: Is it too late for HP Project Moonshot to disrupt anything?

    Hewlett-Packard said its first “Generation 2” Project Moonshot server, based on Intel’s Atom Series 1200 chip, is available as of Monday, with other versions running chips from Calxeda, AMD, Applied Micro and Texas Instruments to come.

    The goal of Project Moonshot, as initially stated last year, is to offer super energy-efficient and compact servers capable of running the world’s biggest webscale operations (and biggest enterprises) at a fraction of the cost. HP said it shipped a number of early versions for customer proofs of concept last year, but today’s news represents broad availability of what HP execs called a “software-defined server designed for the data center.”

    The new server puts 4,500 ProLiant servers in one HP 1500 enclosure. Compared to traditional ProLiant servers, this iteration uses 89 percent less energy and 80 percent less space, and it is 97 percent less complex than the former state of the art at 77 percent less cost.

    It’s understandable, given HP’s huge installed server base in enterprises, why it lays out that comparison, but companies might be more interested in how Moonshot boxes compare with webscale servers from what used to be no-name rivals like Quanta and Inventec. The notion of BYO servers is also spreading. In January, for example, Rackspace, the big hosting and cloud provider, said it would start building its own servers.

    That trend puts traditional server vendors like HP, Dell and IBM in a tough spot. It’s good to see HP willing to cannibalize its installed base; the question is whether those big web-scale workloads have already set sail on no-name servers.

    I will update this story as needed throughout the rest of today’s HP web conference.


  • From outsider to IBM Fellow in less than 2 years: Neil Bartlett sets a record

    Neil Bartlett set some sort of land-speed record when he was named an IBM Fellow last week.

    IBM Fellows 2013. Neil Bartlett is at far right.

    “I came into IBM a year and a half ago… it’s shocking that IBM would allow someone like me to become a fellow,” Bartlett said in an interview.

    There have been only 246 IBM Fellows in the 50 years since the program launched, and 85 of those are still active out of a total of 442,000 IBM employees worldwide. Last week, IBM tapped eight more, including Bartlett, who became an IBMer when Big Blue bought his company, Algorithmics, in September 2011.

    IBM’s gonzo over analytics

    Given that IBM (like many other tech powers) has gone ga-ga over analytics, it’s not surprising that Bartlett’s specialty is risk analytics. That’s a specialty Toronto-based Algorithmics honed with huge financial services and insurance companies.

    As an IBM Fellow — his other title is director, development & CTO for risk analytics — Bartlett hopes to take what he and IBM have learned about evaluating risk and make it more available to smaller entities. “I’ve worked with big banking organizations, large buy-side institutional investors and insurance companies. What I’m hoping to do with IBM in the mix is to do a better job servicing those guys, but also bring what we learned to a much, much larger audience,” Bartlett told me.

    Understanding risk is all about managing uncertainty, and smaller companies face risk and have uncertainty to manage too, he said.

    What can he do as an IBM Fellow that he could not do before? For one thing, he can get access to the top. “I can pick up the phone and say, ‘Ginni, how about this?’” Ginni is Virginia “Ginni” Rometty, CEO and chairman of IBM.

    Becoming an IBM Fellow is a little bit like being named a MacArthur Fellow, although no one at the company will talk about what, if any, monetary award might be involved.

    Money or no money, it’s a huge honor and gives the recipient a big platform and access to all of IBM’s tools — yes, even Watson, the technology known for beating human Jeopardy champs. And, like many MacArthur recipients, Bartlett was surprised that he was tapped. “I was in London when the call came, and the number had an unusual series of digits; to be honest, I ignored the first two calls but picked up the third,” he said. “It was IBM Software GM Steve Mills with the big news.”

    Attacking the opportunity in Brazil

    IBM does expect its Fellows to pull their weight business-wise. Each becomes an ambassador for one of IBM’s targeted “growth markets.” In Bartlett’s case, that growth market is Brazil — which is a little odd, since Bartlett speaks some French, Italian, German, Spanish, Japanese, Thai and Russian, but not Portuguese.

    Brazil has huge potential as it emerges as a world economic power. “Algorithmics was there for a few years as a private company and there’s a lot of value we can bring to the table, especially in Brazilian banks, as the company grows and changes,” he said.

    According to Bartlett’s IBM biography, he earned a degree in computer science and electronics at the University of St. Andrews. According to the bio:

    “I had plans of going on for my doctorate until I talked to my bank manager, who told me how much it would cost me. I decided I needed to start working,” recalled the eldest son of a London car mechanic.

    Last year, USA Today reported that IBM was the sixth-largest spender on R&D among U.S. companies — after Microsoft, Pfizer, Intel, Merck and Johnson & Johnson, according to S&P Capital IQ — spending $6.3 billion over the previous year.

    IBM R&D Expense Quarterly data by YCharts


  • The week in cloud: Google and Amazon cut prices (again); OpenStack Grizzly debuts

    Amazon and Google trade price cuts. Again

    The incumbent public cloud champ and its wanna-be rival took turns cutting prices again last week.

    Amazon Web Services sliced the price on Windows on-demand EC2 instances by 26 percent — although the price still depends on region. That move came within hours of Google cutting prices of most of its GCE instances by an average of 4 percent — a tidbit buried in the larger news that Google is opening up access to Google Compute Engine to any customer willing to pay $400 a month for Google Gold Support. But because the AWS price cuts were for Windows, that move may have been directed at Microsoft Windows Azure more than Google — but why quibble? NetworkWorld has more, as does the Motley Fool.

    ProfitBricks, another cloud contender, extended its scale-up vs. scale-out cloud pitch last week as well, making its biggest instance bigger. The new super-duper instance weighs in at 62 cores and 240GB of RAM, up from 48 cores and 196GB of RAM.

    “By offering variable instance sizes, which now tip the scales at 62 cores and 240GB of RAM, ProfitBricks continues to define Cloud Computing 2.0. ProfitBricks customers can now run massive computational processes at a lower cost while taking advantage of better speed and performance. It also enables users of databases and big data software to scale their virtual servers vertically rather than horizontally.”

    Talkin’ Cloud has more here.

    When is Amazon cloud not the cheapest option? Hint: it’s more often than you think

    Over the past week, several conversations with tech vendors have come around to the fact that, when it comes to actual production workloads, the most cost-effective deployment model — repeated price cuts notwithstanding — is not AWS at all.

    For example, the venerable analytics company SAS Institute, when it was testing out its new visual analytics tool, did so on AWS because it couldn’t deploy its own hardware fast enough. But that lasted about a month. “Amazon was way too expensive, so we brought it in-house,” SAS CEO and founder Jim Goodnight told me in a recent interview. “Amazon doesn’t give it away for free,” he said.

    Once companies start deploying higher-end services and running advanced analytics, other options are cheaper, Goodnight and his CMO and SVP Jim Davis told me. The two execs were on a nationwide road show to show off the company’s new visual analytics service, which will be widely available within months and will eventually be offered from SAS’s own data centers or via private clouds, as the New York Times reported.

    If a company uses the vendor’s new visual analytics applications for six months or more, it’s cheaper to run them on SAS infrastructure than on AWS, they said.
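    That kind of claim lends itself to simple break-even arithmetic. The sketch below is illustrative only: every number in it is a made-up assumption, not actual SAS or AWS pricing.

```python
# Hypothetical break-even sketch. All figures are invented for illustration;
# they are not SAS's or Amazon's actual prices.
def breakeven_months(cloud_monthly: float, inhouse_upfront: float,
                     inhouse_monthly: float) -> float:
    """Months after which cumulative in-house cost drops below cloud cost.

    Solves cloud_monthly * m = inhouse_upfront + inhouse_monthly * m for m.
    """
    return inhouse_upfront / (cloud_monthly - inhouse_monthly)

# e.g. $10,000/month on a public cloud vs. $40,000 of hardware
# plus $2,000/month to run it in-house:
months = breakeven_months(10_000, 40_000, 2_000)
print(round(months, 1))  # 5.0
```

    With those made-up figures, in-house wins after about five months, roughly the horizon the SAS execs describe; change the inputs and the break-even point moves accordingly.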

    I was talking about this conversation last week with Buzzient CEO Timothy Jones, and he agreed wholeheartedly with the assessment that AWS is fine for getting going, but less than optimal on price for actual production use. AWS is a “honey pot,” he noted. “You can get in cheap but pretty soon it’s not very cheap at all.”

    I would love to hear from readers in the comment field about specific scenarios when the AWS public cloud goes from being a great cradle for new applications to a less-than-optimal site to run them.

    OpenStack crowd gears up for summit

    The new OpenStack Grizzly release was ready for download last week, two weeks before the OpenStack Summit kicks off in Portland, Ore. This, the seventh OpenStack release, adds better support for VMware and Hyper-V hypervisors; support for multiple storage options; and some software-defined networking (SDN) perks.

    As Lew Tucker, VP of cloud computing for Cisco, told InformationWeek, Grizzly’s updated Quantum component lets networking companies create applications that will programmatically control the underlying network based on rules and policies.

    Photo courtesy of Shutterstock user Brian A Jackson


  • Seeking startup cred: SAP pushes HANA as a platform for data startups

    SAP and Kendall Square have a lot in common. They are both legacy tech powers that want to attract — and keep — shiny new data startups.

    That’s the back story to Friday’s SAP Startup Forum held at the hack/reduce facility in Cambridge, Mass.’s Kendall Square neighborhood. There SAP talked up HANA, the company’s analytics database, as a development platform for data (or big data) applications to more than a dozen startups including Hadapt, Entagen, Diffeo, Objective Logistics, InsightSquared, Luminoso, Sqrrl, and Veracode. Those startups, in turn, were able to tout their business plans and demonstrate their products to an audience of reporters, VCs and others.

    “We’re pitching to the startups and they’re pitching to us,” Scott Jones, SAP’s senior director for startup training and enablement, told about 100 attendees. HANA, which debuted three years ago, has given SAP traction with an audience beyond its usual big-company ERP customer base, and SAP fully intends to press that advantage. SAP Ventures, the company’s VC arm, is increasingly active in finding and funding data startups. And it would very much like them to build their technology atop HANA.

    The SAP execs repeatedly talked up HANA ONE, which runs on Amazon Web Services, as if to say “this isn’t the traditional, big iron, expensive SAP” of another era. It costs $3.49 per hour to run HANA ONE on an AWS EC2 8-core cluster.
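    At that rate, a quick back-of-the-envelope calculation puts the bill around $2,500 a month — assuming the instance runs around the clock, which is my assumption for illustration, not SAP’s pricing model.

```python
# Rough monthly cost for the $3.49/hour figure quoted above, assuming the
# instance runs 24/7 (an assumption for illustration, not SAP pricing).
HOURLY_RATE = 3.49
hours_per_month = 24 * 30
monthly = HOURLY_RATE * hours_per_month
print(f"${monthly:,.2f}/month")  # $2,512.80/month
```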

    As for the startups, many were clearly intrigued by HANA’s capabilities, although none of those I talked to had actually run it. The consensus was that this type of event and the promised perks — Jones offered free “no strings attached” licenses and training — are what cash-strapped startups need. Indeed, that may be the only way for a large commercial software vendor like SAP to hook small companies born and bred in a world dominated by free or nearly free open-source software and rentable AWS infrastructure.

    Plea to local startups: Stay put

    A gaggle of area VCs were also on hand to sweet-talk entrepreneurs into staying local rather than decamping to Silicon Valley after graduating from Harvard or MIT, as has been standard practice. Chris Lynch, the former CEO of Vertica Systems who has helped nurture a big data startup community in and around Boston, was on hand to talk up that effort.

    And Dr. Sam Madden of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) was there to help out. “Boston is an awesome place for startups — there’s a spectacular pool of outstanding, hungry young talent [here].” It helps that the VCs that used to live way out on the 128 corridor have relocated closer to the hub. In the past two years, Kendall Square has seen a huge building boom, with a growing presence from Microsoft, Google, Amazon, IBM, Oracle and others. It’s a hip area to work for young techies — many of whom don’t own cars and like how transit- and bike-friendly the area is.

    Still, not every attendee was buying either pitch completely. Cyrille Vincey, CEO and founder of qunb, a data analytics and visualization startup that does use HANA, extolled its features and performance, but had one suggestion: “HANA is simple and fast. It feels like open source. Why don’t you open source it?”

    And Timothy Jones, CEO and founder of Buzzient, said his company, which has been based in both Cambridge and Boston but is now virtual, may relocate to the San Francisco area. “We can get office space cheaper there than in Kendall Square,” he noted.


  • You want to crunch top-secret data securely? CryptDB may be the app for that

    There are lots of applications for data crunching in the security-obsessed worlds of the defense, healthcare, and financial services industries. The problem is that these organizations have a hard time crunching all that data without potentially exposing it to prying eyes. Sure, it would be great to pump it all into Amazon Web Services and then run a ton of analytics, but that whole public cloud thing is problematic for these kinds of companies.

    Dr. Sam Madden of MIT’s CSAIL lab.

    CryptDB, a project out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), may be a solution for this problem. In theory, it would let you glean insights from your data without letting even your own personnel “see” that data at all, said Dr. Sam Madden, CSAIL director, on Friday.

    “The goal is to run SQL on encrypted data. You don’t even allow your admin to decrypt any of that data, and that’s important in cloud storage,” Madden said at an SAP-sponsored event at hack/reduce in Cambridge, Mass.

    He described the technology in broad strokes, but it involves an unmodified MySQL or Postgres app on the front end that talks to a CryptDB query rewriter in the middle, which in turn talks to a MySQL instance at the back end.

    According to CryptDB’s web page:

    “It works by executing SQL queries over encrypted data using a collection of efficient SQL-aware encryption schemes. CryptDB can also chain encryption keys to user passwords, so that a data item can be decrypted only by using the password of one of the users with access to that data. As a result, a database administrator never gets access to decrypted data, and even if all servers are compromised, an adversary cannot decrypt the data of any user who is not logged in.”
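    To illustrate the principle, here is a toy sketch of deterministic encryption, one way equality predicates can be evaluated over ciphertext. The HMAC stand-in below is my own illustration of the general idea, not CryptDB’s actual SQL-aware encryption schemes.

```python
# Toy illustration of one idea behind CryptDB: deterministic encryption lets
# the server evaluate equality predicates on ciphertext without ever seeing
# plaintext. This HMAC-based stand-in is NOT CryptDB's actual scheme (which
# layers several SQL-aware encryption schemes); it only shows the principle.
import hashlib
import hmac

KEY = b"client-side-secret"  # held by the trusted proxy, never by the server

def det_encrypt(value: str) -> str:
    # The same plaintext always maps to the same token, so a rewritten
    # "WHERE col = x" predicate still works on the encrypted column.
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The "stored" rows hold only tokens; the server never learns the names.
rows = [det_encrypt(name) for name in ["alice", "bob", "alice"]]

# The proxy rewrites  SELECT count(*) WHERE user = 'alice'
# into a token-equality query:
matches = sum(1 for r in rows if r == det_encrypt("alice"))
print(matches)  # 2
```

    The trade-off, which CryptDB’s layered design addresses, is that deterministic tokens leak which rows are equal to each other; stronger layers reveal less but support fewer query types.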

    The technology is being built by a team including Raluca Ada Popa, Catherine Redfield, Nickolai Zeldovich and Hari Balakrishnan.

    CryptDB could also run in a private cloud, but there are still some big implementation questions. Asked how CryptDB would negotiate data transmission through firewalls, for example, Madden punted. “That’s not something we’re focusing on. The great thing about being an academic is we can ignore some problems,” he said.


  • Man bites dog: Rackspace sues “notorious” patent troll

    Rackspace, which just successfully defended itself in a lawsuit filed by one patent troll, is now declaring war on another.

    On Thursday, the company said it sued IP Nav and Parallel Iron, asking the federal court in its hometown of San Antonio, Texas, for damages for breach of contract and for a declaratory judgment asserting that Rackspace does not infringe on Parallel Iron’s patents.

    The back story, according to a Rackspace blog post, is that Parallel Iron sued Rackspace and 11 others in Delaware. That suit alleges that the defendants infringed on three patents that Parallel Iron claims cover the use of the open-source Hadoop Distributed File System (HDFS).

    In his post, Alan Schoenbaum, Rackspace SVP and general counsel wrote:

    “Parallel Iron is the latest in a string of shell companies created to do nothing more than assert patent-infringement claims as part of a typical patent troll scheme of pressuring companies to pay up or else face crippling litigation costs. At least that is what it looks like on the surface.”

    As GigaOM’s Jeff Roberts has reported, many of these litigious companies (aka trolls) are shells created by patent aggregators. Their goal is to wring money out of targets. Sometimes, legitimate tech companies give their IP to trolls in order to harass rivals or even create their own shell to pursue this sort of litigation.

    Patent shell companies claim that they give small companies — those without the resources to enforce their own patents — a way to do so. Under that theory, these small companies, academics or a nongovernmental organization (NGO) might turn their IP over to a shell company to protect it.

    That’s a contention that Schoenbaum called “laughable.” Rackspace’s complaint is here.


  • Google cracks open access to its compute cloud — a little bit

    The Google Compute Engine — the company’s response to Amazon’s EC2 service — is starting to open the floodgates to new users. While technically still in preview — frankly what at Google is not in preview? — customers willing to plunk down $400 a month for Google Gold customer support can get access to GCE.

    Previously, GCE was an invite-only sort of thing, which made it analogous to a swank nightclub: if you make it past the velvet rope and the surly bouncer — perhaps with a wad of cash — you’re in. Now you just need a few hundred dollars a month. You could also sign up online for an account — but my nightclub analogy falls apart there. As a sweetener, Google also said it cut prices of its instances by an average of 4 percent. It’s all outlined on the company’s blog.

    As GigaOM has reported here and here, many see GCE — which debuted last June — as perhaps the only real competitor to AWS on the compute side. Third parties are starting to support its APIs, and some businesses that want either an alternative to AWS or a supplement to it are eager to try it out.

    In February, RightScale, which helps customers monitor and manage their multi-cloud implementations, said it will resell and support GCE. That was right about the same time Google started rolling out more comprehensive, tiered support options for GCE, Google App Engine and other parts of its cloud empire.

    Of course, Amazon has ramped up its customer support options — especially for businesses — over the past year and rolled out better management tools as well.

    Google won’t divulge the number of current GCE users or talk about the wait list, but a third party with knowledge of the situation said there is a huge backlog of would-be users — tens of thousands of them — waiting to get in.

    That’s some kind of rope line.


  • Anatomy of a security fix: Postgres launches massive update to address vulnerability

    It’s the kind of call any software vendor — or open-source project — dreads: A large customer (in this case NTT) flagged a vulnerability in PostgreSQL, the popular open-source database also known as Postgres. That happened March 12. The next step for the Postgres community (and it is a community, not a single vendor, which complicates things) was to assess the vulnerability, evaluate whether it’s really an issue and then figure out what it takes to fix it.

    In this case, the security SWAT team deemed this to be a real problem and scrambled to address it. “The team evaluated it and wrote the code for the fix, which actually took very little time, and then started scheduling the release,” Josh Berkus, a Postgres core team member, told me in an interview. And it was that scheduling, coordinating the rollout to thousands of Postgres repositories, that was really tricky.


    When it comes to any big fix or patch, the practice is to create an installer to ease its application. In Postgres’ case, because it runs on virtually every flavor of Linux as well as Windows, the team needed to come up with 80 different packages. “That’s why delay was built into the process,” Berkus said.

    “If it were a normal, minor update for bugs, we don’t worry about making them all available at once, but for this, we felt we needed to.”

    There  was also the issue of disclosure. You want users to be alert as to what’s happening but you don’t want to “provide a roadmap” of the vulnerability to the script kiddies, Berkus said.

    A March 28 message posted on the PostgreSQL message board alerted folks to a patch to come April 4. Then Heroku, the popular platform as a service that supports lots of Postgres users, posted that it was issuing the patch starting April 1. That timing set off some of the Postgres faithful, who felt that Heroku was getting special treatment. It also garnered some press attention and Hacker News comments.

    Berkus said there’s a reason for that. Heroku provides the database as a service, not the binary code itself, and it requested early access because it has lots of machines. And, as it turns out, the vulnerability could impact any Postgres user that has port access to the database, even without a valid account.

    The nature of that vulnerability meant Heroku — which runs on Amazon Web Services — or any Postgres user running on AWS or another public cloud could be vulnerable, depending on how they set up their servers. Many customers running on public clouds leave ports open, probably because they don’t know better.

    From the Postgres FAQ about the issue:

    Any system that allows unrestricted access to the PostgreSQL network port, such as users running PostgreSQL on a public cloud, is especially vulnerable. Users whose servers are only accessible on protected internal networks, or who have effective firewalling or other network access restrictions, are less vulnerable.

    This is a good general rule for database security: do not allow port access to the database server from untrusted networks unless it is absolutely necessary. This is as true, or more true, of other database systems as it is of PostgreSQL.
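    The FAQ’s advice boils down to knowing whether your database port is reachable from untrusted networks. Here is a minimal, generic sketch of such a check; the host and port below are placeholders of my own, not part of the Postgres advisory.

```python
# Minimal sketch of the FAQ's advice: check whether a database port is
# reachable from a given vantage point. Run it from outside your network,
# against your own server, to see what the public internet sees.
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts and unreachable hosts.
        return False

# Postgres listens on 5432 by default. If this prints True when run from
# an untrusted network, the server falls into the FAQ's "especially
# vulnerable" category.
print(port_is_open("127.0.0.1", 5432))
```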

    Berkus and the folks at Heroku who spoke to me on this issue were quick to assert that while this was a big vulnerability — much bigger than the last one back in 2005 — there was no sign of any exploits.

    So, net net, Heroku rolled out its fix earlier this week. Minor drama ensued. But as of now, the rest of the world is covered as well.


  • OpenStack Grizzly adds scale, storage options. Now, bring on the users

    OpenStack, the open-source cloud stack backed by nearly every tech vendor you can name, remains a work in progress, but the latest release, the seventh, dubbed “Grizzly,” addresses some key pain points.

    Bring your own hypervisor

    For one thing, it adds support for VMware ESX and “especially” Microsoft Hyper-V hypervisors, said Jonathan Bryce, executive director of the OpenStack Foundation. Up till now, OpenStack was largely KVM- and Xen-focused. Microsoft — one of the few non-OpenStack companies left — helped with Hyper-V. And VMware, which had been another OpenStack holdout but joined the effort last summer, helped with ESX, said OpenStack COO Mark Collier. Support for multiple hypervisors was a key customer request, both execs said.

    Grizzly also attacks (pardon the pun) scalability with a new “Cells” capability that lets customers manage multiple OpenStack compute environments as a single unit. “You expose a single API endpoint and a single control system but underneath that can be a whole nest of clusters,” Bryce said in an interview. And, a new “NoDB” architecture manages how data is shared within an OpenStack environment and reduces reliance on a single database.
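    As a rough mental model of the Cells idea Bryce describes — one endpoint, a nest of clusters underneath — consider the sketch below. The class and method names are invented for illustration and bear no relation to OpenStack’s actual APIs.

```python
# Illustrative sketch of the "Cells" pattern: a single control point fronts
# several independent compute clusters. Names here are hypothetical; this is
# not OpenStack code.
class Cell:
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity = name, capacity

    def boot(self, instance: str) -> str:
        self.capacity -= 1  # one fewer free slot in this cell
        return f"{instance} scheduled on cell {self.name}"

class SingleEndpoint:
    """The one API endpoint callers see; cells stay hidden underneath."""
    def __init__(self, cells):
        self.cells = cells

    def boot(self, instance: str) -> str:
        # Trivial scheduler: pick the cell with the most free capacity.
        target = max(self.cells, key=lambda c: c.capacity)
        return target.boot(instance)

api = SingleEndpoint([Cell("east", 10), Cell("west", 25)])
print(api.boot("vm-1"))  # vm-1 scheduled on cell west
```

    The point is that the caller never names a cell; placement is the endpoint’s problem, which is what lets the nest of clusters grow without changing the API.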

    Grizzly also expands block storage options. “You can now create an OpenStack block storage service that sits in your data center in front of your high-performance storage, your archival storage, your spinning disks, and lets you intelligently put your work on different types of storage arrays as needed,” Bryce said. There are also better drivers and support for storage from Ceph, Coraid, HP, Huawei, IBM, NetApp, Red Hat (Gluster), SolidFire and Zadara.

    And a new dashboard is there to expose and manage all these new features.

    The code is available now, two weeks in advance of the OpenStack Summit in Portland, Ore.

    As usual, the foundation touted the number of contributors to this release — 517, up 56 percent from the previous Folsom release.

    Wanted: real-world OpenStack users

    Here’s the thing, though: What folks need to start seeing are real, live end users at companies beyond the tech vendors that support OpenStack as part of their cloud offerings. To claim Cisco/WebEx as an OpenStack user does not hold the same weight as saying a huge bank or a consumer packaged goods company is a customer. To date, Disney has been one charter end user. At this year’s show, Comcast (the country’s largest cable company and an OpenStack member) and Best Buy will present case studies.

    The other — possibly related — concern is that the myriad OpenStack implementations — from Rackspace, HP, IBM, Internap, Cloudscaling, Red Hat, Nebula, Canonical et al. — may not be fully compatible with each other. After all, the pressure will be on for HP to offer features and perks that distinguish its OpenStack cloud from IBM’s or Red Hat’s OpenStack clouds. Foundation members assure the world that will not be so, but doubts remain.

    Many companies are kicking the tires of OpenStack as an alternative or additional cloud to Amazon Web Services. GigaOM Pro analyst David Linthicum sees three pools of potential OpenStack adopters: companies looking to deploy a private cloud; companies that don’t want to move to AWS; and companies that “think they’re protecting themselves by leveraging a standard.”

    He added: “The key concern about OpenStack, as with other standards, is that the providers will move off into their own proprietary directions and thus hurt compatibility. Clearly most of them won’t wait for the standard to mature to get to the features their users and the market demands. New releases, such as Grizzly, will curtail some of that, but there is not a chance that the standard will move as fast as the distribution providers need them to move.”

    To see a demo of the new OpenStack dashboard, check out the video below.



  • A Tableau IPO could validate the big data visualization push — or not

    When Tableau Software’s IPO actually happens, it could validate — or poke a hole in — the hype bubbling up around data visualization, a market segment some experts predict will hit $51 billion within three years.

    Tuesday, Seattle-based Tableau filed for an IPO with an initial “placeholder” value of up to $150 million. Unlike many tech companies going this route, Tableau is profitable, with net income of $2.7 million in 2010, $3.4 million in 2011, and $1.6 million in 2012. Its revenue more than doubled last year to $127 million. It also boasts an impressive customer list including Bank of America, Barclays, Pfizer, Goldman Sachs and others.

    Total venture funding for the company, which will trade on the NYSE under the ticker symbol “DATA,” stands at $15 million in two rounds, both from NEA.

    Tableau is not the first new-look data visualization company to go public.  Competitor QlikView went public in July 2010 at $15 per share and is now trading at $24.70. Splunk, which analyzes and visualizes machine data, went public last August at $17 per share and is now trading at just over $39 per share.

    The company’s challenge going forward will be to keep up the pace it has set thus far with its “80+ percent year-over-year growth,” said Ovum analyst Fredric Tunvall. The company will clearly need to keep investing, and it may be hard to manage shareholder expectations based on past performance, he said in a statement.

    At the same time it must add more advanced and richer analytics to the mix and factor in back-end data management capabilities including data integration, data quality and master data management, Tunvall added in a research note.

     

    QLIK data by YCharts


  • What is SignalFuse and why should we care?

    Silicon Valley is chock-full of stealthy startups — but some are more interesting than others. SignalFuse, based on its pedigree, is one of the more intriguing of these mysterious companies.

    SignalFuse co-founder Karthik Rau has said nothing about its plans, but a Silicon Valley source said the San Mateo, Calif. company is building technology that takes time-series data from multiple systems, analyzes it fast and puts it into trend lines. “It’s like Splunk but in real time,” the source said. The trend lines are roughly analogous to what Bloomberg does with stock data — it tracks prices and movement to show volatility over time. Those patterns can then be used to predict future problems, according to the source. If that is true and if they can execute, SignalFuse is attacking a big, important problem.
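    If the source’s description is accurate, the core computation resembles rolling statistics over metric streams. The sketch below is pure guesswork about that general technique, not a description of SignalFuse’s actual technology.

```python
# Guesswork illustration of "trend lines" over time-series metrics: a rolling
# mean plus a simple volatility measure (standard deviation) per window.
# Nothing here reflects SignalFuse's actual product.
from statistics import mean, pstdev

def rolling_trend(samples, window=3):
    """Yield (rolling mean, rolling stdev) for each full window."""
    for i in range(len(samples) - window + 1):
        chunk = samples[i:i + window]
        yield mean(chunk), pstdev(chunk)

latencies = [10, 12, 11, 30, 12, 11]  # a latency spike at t=3
for avg, vol in rolling_trend(latencies):
    print(f"mean={avg:.1f} stdev={vol:.1f}")
```

    A spike inflates the rolling standard deviation well before it shifts the mean much, which is the kind of volatility signal a monitoring system could flag as a predictor of trouble.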

    But what really has piqued interest are the people at the top. The resumes and reputations of Rau and his co-founder Phillip Liu are stellar. Rau, who as a 27-year-old worked for VMware co-founder Diane Greene during his six-year tenure at the company, is viewed as instrumental in growing it from a hypervisor vendor with $100 million in revenue into what has become a $4+ billion platform provider. He helped lead the effort to build out VMware’s infrastructure business, which included vCenter, vMotion, and the tiered vSphere Virtual Infrastructure packages.

    Rau worked with Liu before that at Loudcloud, the Marc Andreessen startup which became Opsware. When HP bought Opsware, Liu became chief architect of server automation and a distinguished technologist. Then, at Facebook, Liu worked for Jonathan Heiliger, former VP of technical operations and infrastructure, as a software architect who helped design Facebook’s Amazon EC2 equivalent for the company’s data centers and built out its IaaS platform.

    That’s a pretty impressive pool of talent in two founders — it will be really interesting to see what they and their team come up with.

    Officially, here’s all the company has to say on its nascent website:

    “We are a stealth-mode company led by former Facebook and VMware executives that has raised $8M in venture financing from Andreessen Horowitz. We are hiring world-class engineers who want to work on hard problems with smart peers and build software that will be used by millions of people. If you are passionate about distributed systems, data science and statistics, or simple and elegant user interfaces, we’d love to hear from you. We are based in downtown San Mateo, 2 blocks away from the Caltrain station.”

    This is one Valley startup that will be engrossing to watch unfold.

    Feature photo courtesy of Flickr user ryanmilani
