Author: Derrick Harris

  • Why Amazon Should Worry About Google App Engine for Business

    I wrote last week that the time may be right for Amazon Web Services to launch its own platform-as-a-service (PaaS) offering, if only to preempt any competitive threat from other providers’ increasingly business-friendly PaaS offerings. The time is indeed right, now that Google has introduced to the world App Engine for Business.

    That’s because App Engine for Business further advances the value proposition for PaaS. PaaS offerings have been the epitome of cloud computing in terms of automation and abstraction, but they left something to be desired in terms of choice. With solutions like App Engine for Business, however, the idea of choice in PaaS offerings isn’t so laughable. Python or Java. BigTable or SQL. It’s not AWS (not that any PaaS offering really can be), but it’s a big step in the right direction. App Engine for Business is very competitive in terms of pricing and support, too.
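
    For anyone who hasn't poked at App Engine's Python runtime, here is a minimal sketch of a Datastore-backed request handler (the BigTable side of that "BigTable or SQL" choice). The model and handler names are my own illustrations, not anything Google ships:

    ```python
    # Minimal App Engine (Python runtime) handler backed by the Datastore.
    from google.appengine.ext import db, webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class Greeting(db.Model):
        author = db.StringProperty()
        content = db.TextProperty()
        created = db.DateTimeProperty(auto_now_add=True)

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Fetch the ten most recent greetings from the Datastore.
            greetings = Greeting.all().order('-created').fetch(10)
            self.response.out.write('\n'.join(g.content for g in greetings))

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    if __name__ == '__main__':
        run_wsgi_app(application)
    ```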

    Google is often cited as a cloud computing leader, but until now it had yet to deliver a truly legitimate option for computing in the cloud. Mindshare and a legit product make Google dangerous to cloud providers of all stripes, including AWS.

    The integration of the Spring Framework in App Engine for Business is important because it means that customers have the option of easily porting Java applications to a variety of alternative cloud environments. Yes, AWS supports Spring, but the point is that Google is now on board with what is fast becoming the de facto Java framework for both internal and external cloud environments.

    Meanwhile, in the IaaS market, AWS is busy trying to distinguish itself on the services and capabilities levels now that bare VMs are becoming commodities. Thus, we get what we saw this week, with AWS cutting storage costs for customers who don’t require high durability (a move some suggest was in response to a leak about Google’s storage announcement), and increasing RDS availability with cross-Availability-Zone database architectures. It’s all about differentiation around capabilities, support and services, and every IaaS provider is engaged in this one-upmanship.
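
    To make the storage-cost move concrete, the lower-durability option surfaces as little more than a flag in client libraries. A rough sketch using the boto library, assuming its S3 key objects accept a reduced_redundancy parameter; the credentials, bucket and file names are placeholders:

    ```python
    # Store an object in S3 using the cheaper, lower-durability storage class.
    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')   # placeholder credentials
    bucket = conn.get_bucket('my-log-archive')        # placeholder bucket

    k = Key(bucket)
    k.key = 'logs/2010-05-20.gz'
    # reduced_redundancy=True requests the reduced-durability (and cheaper) class.
    k.set_contents_from_filename('2010-05-20.gz', reduced_redundancy=True)
    ```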

    If PaaS is destined to become the preferred cloud computing model, and if the IaaS market is becoming a rat race of sorts, why not free cloud revenues from the IaaS shackles and the threat of PaaS invasion? Amazon CTO Werner Vogels will be among several cloud computing executives speaking at Structure 2010 June 23 & 24, so we should get a sense then of what demands are driving future advances for AWS and other cloud providers. For more on Google vs. Amazon and PaaS vs. IaaS, read my entire post here.




  • CA Delivers on Cloud Investment with Service-Management Suite

    It’s taken a full year and upward of $700 million in acquisitions, but CA Technologies (yes, it’s a new moniker) finally delivered on its cloud-computing strategy with several major product announcements. The Cloud-Connected Management Suite — the centerpiece of CA’s announcements — leverages pieces of technology it acquired from Cassatt, Oblicore, NetQoS, 3Tera and Nimsoft over the last year, as well as, no doubt, large amounts of internal innovation within CA.

    As I describe in detail in my research note on GigaOM Pro (sub required), the new products – Cloud Insight, Cloud Compose, Cloud Optimize and Cloud Orchestrate – attempt to simplify decision-making by letting organizations know what services are available to them and which of their physical, virtual or cloud-based resources are best suited for hosting them. The products complement CA’s infrastructure-management tools, which now support Cisco UCS and provisioning in Amazon Web Services. With these products, CA has set the bar for how management software must act within cloud-connected organizations. It must recognize resources of all types, understand that some services will be hosted elsewhere, and somehow enable users to make sense of it all.

    Of course, evolution happens fast in cloud computing, and there’s no telling what the competitive landscape will look like once the products start becoming available in the fourth quarter of this year. Competitors — be they systems-management vendors like IBM, or virtualization vendors like VMware, or both, like Microsoft — fully understand the cloud market (some played integral roles in shaping it, in fact), and they could invest in research or acquisitions to match CA’s service-focused approach to cloud computing. CA might have more innovations up its sleeve, too, which will just serve to up the value proposition for its products. Furthermore, the company maintains partnerships with vendors like VMware and Microsoft around their hypervisors, so customers might be able to build best-of-breed virtualized environments.

    However, the success of CA’s approach actually might hinge on the success of the Carnegie Mellon University-led SMI Consortium and the Cloud Commons, two new entities backed by CA and on which its new software relies heavily. SMI stands for “Service Management Index,” which is a matrix of six factors against which cloud services are rated. Cloud Commons contains SMI ratings for thousands of cloud services, as well as qualitative data from experts and users, and CA Cloud Insight compiles SMI ratings for internal services to compare against what’s available in the cloud. It will be interesting to see what happens if neither effort catches on before the first of CA’s products hits shelves in the fourth quarter, or if competitors latch onto both and incorporate them into their own virtualization- and cloud-management offerings.
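
    To make the rating idea concrete, here is a toy sketch of an SMI-style comparison: score a service against a set of weighted factors, then compare an internal deployment with a cloud alternative. The factor names, weights and ratings below are hypothetical, not the consortium's actual rubric:

    ```python
    # Toy SMI-style scoring: weighted average of per-factor ratings (0-10 scale).
    FACTORS = ['agility', 'cost', 'performance', 'risk', 'security', 'quality']

    def smi_score(ratings, weights):
        total_weight = float(sum(weights[f] for f in FACTORS))
        return sum(ratings[f] * weights[f] for f in FACTORS) / total_weight

    weights      = {'agility': 2, 'cost': 3, 'performance': 2, 'risk': 1, 'security': 1, 'quality': 1}
    internal_crm = {'agility': 4, 'cost': 5, 'performance': 8, 'risk': 8, 'security': 9, 'quality': 7}
    cloud_crm    = {'agility': 9, 'cost': 8, 'performance': 7, 'risk': 6, 'security': 6, 'quality': 7}

    print('internal CRM:', smi_score(internal_crm, weights))
    print('cloud CRM:   ', smi_score(cloud_crm, weights))
    ```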

    Read my full report for in-depth product, competitive and roadmap analysis, and attend Structure 2010 to hear how the rest of the software industry is tackling the cloud.

    Photo courtesy CA Technologies.




  • With SaaS, Microsoft Sweetens Its Azure Offering

    Microsoft this week rolled out its CampaignReady suite of services, which is anchored by the Windows Azure-hosted TownHall. Designed for political campaigns, the suite works by letting candidates connect with constituents via TownHall, while Microsoft’s online collaboration and advertising tools help campaign workers communicate with each other and spread their messages. Especially for local or regional campaigns without the resources to build the specialized tools President Obama’s campaign utilized, Microsoft’s pitch — a prepackaged solution that can be set up, torn down and paid for on demand — should be appealing. But Microsoft’s SaaS-plus-PaaS business model has legs beyond politics, and beyond Redmond.

    As I describe in my column this week at GigaOM Pro, the combination of cloud services designed for and hosted on cloud platforms seems like a surefire strategy to secure PaaS (or even IaaS) adoption. By creating targeted applications designed specifically for use on their platforms, cloud providers can increase the likelihood of bringing customers into the fold (and can increase their profit margins) by letting applications help sell the platform instead of relying on the platform itself.

    The possibilities are perhaps best exemplified by the number of Salesforce.com customers using its flagship CRM offering, which sits atop its Force.com platform — more than 72,000, according to the company. Presumably, it was positive experiences with the SaaS application that inspired 200,000-plus developers to build more than 135,000 custom applications that run on Force.com. It’s possible that Force.com could have attracted an equally large base as a standalone offering not intrinsically connected with Salesforce.com’s SaaS business, but unlikely.

    The issue for most cloud providers is figuring out how to develop an application strategy to complement their infrastructural competencies. Microsoft, on the other hand, brought its decades of software experience with it when it launched Windows Azure. It developed its Pinpoint marketplace of third-party applications ready to run on the platform, it partnered with business-friendly ISVs like Intuit, and now it’s gotten into the SaaS act itself with TownHall. Azure has garnered its fair share of praise, and if Microsoft continues down the SaaS path, Azure could garner more than its fair share of customers and dollars. Read the full post here.

    For more on cloud computing, join the GigaOM Network at its annual Structure conference June 23 & 24 in San Francisco.

    Image courtesy of Janet Wall via PhotoExpress

  • MorphLabs Launches Services to Help MSPs Ride the Cloud Wave

    MorphLabs today made its mCloud series of cloud computing solutions available in the U.S.; the offerings are designed to let managed service providers (MSPs) enter the cloud provider market. The company, which has offices in El Segundo, Calif., Japan and the Philippines, has been testing its solutions pre-release with eight Japanese service providers and already has two U.S. customers lined up. Transforming MSPs into cloud providers is becoming more common as traditional service providers try to fend off cloud-based competition from the likes of Amazon Web Services, Rackspace Cloud, GoGrid and others.

    The mCloud series consists of two things: mCloud Controller and mCloud Server. The former is an appliance “used to convert commodity hardware into a cloud” for customers already running virtualized environments, while the latter is a holistic solution (which includes mCloud Controller) in the form of a preconfigured IBM BladeCenter S. The appliance-based approach is novel among cloud solutions, but according to MorphLabs CEO Winston Damarillo, it eases the transition into a cloud environment and, in this case, houses the solution’s hardware-based failover mechanism. In the same way that VMware partners can offer vCloud Express-branded offerings, MorphLabs customers can brand their cloud offerings as mCloud On Demand.

    MorphLabs also hopes to capitalize on its compatibility with Amazon Web Services. Not only can end users port applications from a MorphLabs-powered offering to AWS, but smaller MSPs can complement AWS’s impersonal service with offerings such as personalized SLAs, server sizes and other such touches. For large enterprises that implement MorphLabs in-house, AWS compatibility means a smoother path to a hybrid-cloud environment. Letting its MSP customers work with AWS rather than necessarily against it aligns with MorphLabs’ own experience as a cloud provider. Damarillo says it’s no use trying to compete with AWS when you can leverage its popularity to bolster your own offerings.

    The company could face a tough road trying to sell against established vendors like 3Tera — which is now part of CA and which Damarillo says was a regular MorphLabs competitor in Japan — and cloud pundit Reuven Cohen’s company, Enomaly. Likewise, vendors such as Eucalyptus and VMOps have strong internal cloud products that could be part of MSP cloud transition efforts, too. What’s certain, however, is that small MSPs and traditional hosters won’t have to vanish when there are so many tools available to let them ride the cloud computing hype while continuing to sell personalized offerings that the big boys aren’t really equipped to sell.

    For more on cloud computing, join the GigaOM Network at its annual Structure conference on June 23 & 24 in San Francisco.

    Images courtesy of Morph Labs

  • Q1 in the Cloud: It Was All About Big Vendors

    When talking about cutting-edge topics like cloud computing and web infrastructure, it can be easy to let startups and niche vendors dominate the discussion. After all, they’re often the ones driving innovation and issuing case studies that illustrate entirely new methods of computing. In the first quarter, however, the IT infrastructure market was all about the big boys.

    As I describe in the latest Quarterly Wrap-up at GigaOM Pro (sub req’d), the big news in cloud computing was general availability of Microsoft Windows Azure and its related suite of services. The company had been touting the platform since October 2008, and the reaction when it finally hit the ground was overwhelmingly (but not entirely) optimistic, thanks in part to Redmond’s smart strategies around partnerships and attracting traditional businesses. In fact, Microsoft also had a hand in the quarter’s second-biggest cloud trend, which was the call for deeper looks into the legal aspects of cloud computing. Microsoft’s Brad Smith called for congressional action on existing laws to account for the cloud.

    Meanwhile, CA and VMware both made big splashes in the internal cloud space. Systems-management giant CA did so by announcing an aggressive cloud strategy marked by intriguing acquisitions. After buying NetQoS and the floundering Cassatt in 2009, CA kicked off 2010 by folding Oblicore, 3Tera and Nimsoft into its cloud mix. However, just when it looked like CA was set to run away with cloud systems management, VMware executed a coup d’état by acquiring parent company EMC’s Ionix business. Now, VMware will be able to match CA (and others) across a variety of core functionalities, including the very important ability to manage and provision both physical and virtual infrastructure.

    The cloud-based collaboration space also saw VMware play a big role by acquiring Zimbra. VMware is not, however, the biggest fish in that pond. During the first quarter alone, IBM updated its Lotus Live strategy and convinced Panasonic to move some 300,000 personnel to the system, and SAP finally got its cloud act together by announcing its StreamWork collaboration and corporate networking service.

    Perhaps the only place major vendors and providers didn’t make a mark during the first quarter was in the sometimes contentious debate over open-source software. Discussions over the role of open source in cloud computing and web data centers have inspired many different theories about both business and technology, but it’s quickly become clear that proprietary vendors — especially large proprietary vendors — have little to no place at the infrastructural level in these brave new worlds. The database tier provided a prime microcosm of this attitude during the first quarter, as developers debated the merits of open-source NoSQL tools vs. open-source MySQL.

    We’ll discuss many of these topics onstage at the upcoming Structure 2010 event, and many more will no doubt surface as the cloud’s leading minds mingle in the hallways and networking receptions. The event’s theme is “Put the Cloud to Work,” and the elevated presence of mega-vendors at all levels indicates those aren’t empty words.

    Read the full Q1 report here.

  • Can Facebook or Twitter Spin Off the Next Hadoop?

    Like most people, I suspect, I wasn’t too surprised to find out that Hadoop-focused startup Karmasphere has secured a $5 million initial funding round. After all, if Hadoop catches on like the evidence suggests it will, Karmasphere’s desktop-based Hadoop-management tools could pay off investors many times over. In some ways, though, the fact that Hadoop is mature enough to inspire commercial products means it’s yesterday’s news. Now, I’m wondering, which open-source, big-data-inspired product will be the next to launch a wave of startups and drive tens of millions in VC spending?

    Big data has narrowed the gap between the needs of bleeding-edge web companies, their offspring and even traditional businesses. Hadoop has caught on across industry boundaries as an analytics tool for unstructured data sets, and it seems logical that other web-based tools will catch on in other parts of the data layer. In my weekly column over at GigaOM Pro (sub req’d) today, I took a look at the potential for Cassandra, which grew out of Facebook, and Gizzard, Twitter’s ill-named big-data baby.

    Given its growing popularity and expanding functionality, Cassandra right now seems like a prime candidate. Rackspace has taken over its development reins, and it’s found varied applications within Digg, Twitter, Reddit, Cloudkick and Cisco, to name a few. This diversity illustrates Cassandra’s versatility; it’s not just for the social media crowd. Furthermore, Cassandra graduated to a top-level Apache project in February, signifying the quality of the work done on it thus far and, most likely, a groundswell of new developers.
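
    For a sense of what working with Cassandra looks like from Python, here is a minimal sketch using the pycassa client; the keyspace, column family, server address and row data are placeholders, and pycassa's connection API has shifted between versions:

    ```python
    # Insert and read back a row in Cassandra via the pycassa client library.
    import pycassa

    pool = pycassa.ConnectionPool('Keyspace1', server_list=['localhost:9160'])
    users = pycassa.ColumnFamily(pool, 'Users')

    # Rows are keyed strings; columns are passed as a simple dict.
    users.insert('jsmith', {'full_name': 'John Smith', 'state': 'CA'})

    # Fetch the row back; Cassandra returns its columns as an ordered mapping.
    print(users.get('jsmith'))
    ```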

    Twitter’s newly open-sourced Gizzard tool seems to have promise, as well. By eliminating some pain from the often difficult sharding process, Gizzard makes it easier to build and manage distributed data stores that can handle ultra-high query volumes without getting bogged down. Like Google, Yahoo and Facebook before it, Twitter has played a role in evolving how we use the web, and software developed within its walls should be a hot commodity for present and future Twitter-inspired sites and products.
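
    Gizzard itself is Scala middleware, but the chore it automates, routing each row to a shard by hashing its key, can be sketched in a few lines. The shard hosts below are hypothetical, and this shows the underlying idea rather than Gizzard's actual implementation:

    ```python
    # Route each row key to one of N database shards via a stable hash.
    import hashlib

    SHARDS = ['db-shard-0.internal', 'db-shard-1.internal',
              'db-shard-2.internal', 'db-shard-3.internal']

    def shard_for(key):
        digest = hashlib.md5(key.encode('utf-8')).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    # The same key always lands on the same shard, so reads find their writes.
    print(shard_for('user:12345'))
    ```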

    Which do you think will take off?

    Read the full article here.

    Photo courtesy Flickr user zzzack

  • Who’s Making Money From Open Source in the Cloud?

    Event moderator Bernard Golden, GigaOM Pro analyst and CEO of HyperStratus; SugarCRM CEO Larry Augustin; and Jim Zemlin, executive director of the Linux Foundation

    At this morning’s Bunker Session, the central question of the relationship between cloud computing and open-source software was answered early and often. The one thing everybody agreed upon is that most clouds of any appreciable scale are built on open-source software and, in fact, might not even exist without it. As to whether there’s any money to be made with open source, however, there was enough contention to go around.

    The rub, as posited by Citrix’s Simon Crosby, is that everybody making money with open source actually has a proprietary angle. Open source is a great tool for advancing products, branding a company and expanding its reach, but vendors make their money with proprietary solutions. This holds true for companies ranging from Citrix with XenServer to Amazon with EC2. The numbers actually back up this proposition: Jim Zemlin, executive director of the Linux Foundation, pointed out that the leading open-source investors aren’t VCs, but large IT companies like IBM, Intel and Google. Investment in open-source projects helps these companies crowdsource R&D — which saves time and money — before rolling out a commercial offering based on the results. This isn’t an indictment of vendors’ use of open source, by the way; it’s just reality — not to mention smart business.

    However, as Om pointed out from the crowd, there is a big difference between helping vendors make money and actually making money yourself. Save for Red Hat, most truly open-source companies don’t last too long before they’re snatched up by proprietary vendors that want to leverage the associated momentum and product capabilities in their own businesses. But the definition of success isn’t universal. SugarCRM CEO Larry Augustin countered this argument with the position that open-source companies like JBoss and SpringSource, which did great things and built huge communities, are no less successful because they exited via acquisition rather than IPO.

    Despite debate over what constitutes a viable open-source business model, the area where we can most expect to see one emerge is cloud computing. As I discussed recently in a GigaOM Pro column (sub. req’d), open-source products are building momentum in private-cloud settings especially, and the reason might be that they help users achieve the same efficiencies as large IT companies that invest in open-source projects.

    Another possible avenue for open source success in the cloud is interoperability. When Yahoo’s Tom Hughes-Croucher asked the Bunker crowd what the most likely solution to the lack of interoperability among public clouds is, open source won by a landslide over vendor-developed standards. Opscode’s Jesse Robbins buttressed this opinion by pointing out that many concerns over cloud interoperability and application portability can be addressed in the planning stages and by utilizing automation capabilities from companies like RightScale. Another possibility is to use an open-source interface like libcloud, which simplifies movement between clouds.
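
    To show what that buys you, here is a short sketch that lists servers on two different providers through libcloud's single driver interface. The credentials are placeholders, and the import paths follow libcloud's compute-namespaced API, which has changed across releases:

    ```python
    # List nodes on Rackspace and EC2 through one normalized interface.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Rackspace = get_driver(Provider.RACKSPACE)
    EC2 = get_driver(Provider.EC2)

    rack = Rackspace('username', 'api-key')        # placeholder credentials
    ec2 = EC2('access-key-id', 'secret-key')       # placeholder credentials

    # One loop, two clouds: each driver returns provider-agnostic node objects.
    for driver in (rack, ec2):
        for node in driver.list_nodes():
            print(driver.name, node.name, node.state)
    ```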

    Where the money comes into play is when CIOs demand these types of capabilities before moving to cloud delivery models. As Accenture’s Joe Tobolski noted, they want to leverage the cloud, but they want to know there’s a workable Plan B and Plan C in case the primary cloud provider goes down. Cloud platforms don’t necessarily need to be open source if they’re open enough to work with third-party management solutions. If an open-source company can build a cross-cloud automation product that lets businesses tweak it to suit their specific needs, that company might find itself in a position like the one Red Hat found itself in when users were searching for a viable alternative to Windows.

    Check out response to the event on Twitter via #fosscloud.

    Full event video and post-game analysis are available to GigaOM Pro subscribers. For a limited time, subscribe using the coupon code BUNKER0310 to get 20 percent off of our already discounted, charter-year price of $79.

  • Cloudkick Moves (Quickly) Into the Hot Hybrid Cloud Market

    Cloudkick, a San Francisco-based cloud monitoring startup, today launched Hybrid Cloudkick, an extension to its cloud-monitoring service that brings non-cloud servers into the fold. The company is moving pretty fast with a major upgrade just a few months after releasing its flagship service to the public, but rolling out new features fast is a hallmark of cloud startups.  And with businesses looking for easy and low-risk methods for cloud adoption, anything “hybrid” is sure to draw some eyes.

    The service’s key feature is that it lets users monitor both dedicated servers and cloud servers (from several leading providers) using the same API, and from a single dashboard. Those dedicated servers can be on-premise or colocated, physical or virtual. So, for example, Rackspace customers running both managed and cloud servers could use Hybrid Cloudkick to monitor the entirety of their virtual infrastructure. Rackspace itself doesn’t even offer hybrid computing or management between its services (although GoGrid does), so a hybrid monitoring service, at least, could be particularly appealing to those customers.

    Another key feature is the Cloudkick Proxy, which adds security to the communication between dedicated servers and the Cloudkick service. This will be especially important to enterprise customers, which tend to be more leery about cloud security than are some of their smaller counterparts. In fact, Cloudkick had larger customers in mind when it developed the proxy feature, as co-founder Alex Polvi told me Hybrid Cloudkick is aimed at users managing 500-plus servers, who likely were using multiple monitoring tools in the past. Co-founder Dan Di Spaltro told Om in January that the flagship Cloudkick service targets users in the 10-100 server range.

    As with its main service, though, Cloudkick isn’t operating without competition in the hybrid monitoring space. The field actually is quite small, but recent CA purchase Nimsoft is another third party offering single-pane-of-glass monitoring of internal and external resources, and cloud provider Voxel just announced a similar capability for its customers. But while Cloudkick, comparatively speaking, lacks both corporate backing and an established track record, it makes up for those deficiencies with two attributes that many of its web-startup customers will appreciate — a narrow product focus and an open-source pedigree.

    Related content from GigaOM Pro (sub req’d):

    For Open Cloud Computing, Look Inside Your Data Center

  • For Open Cloud Computing, Look Inside Your Data Center

    For all the talk about openness and interoperability in cloud computing, both public-cloud and private-cloud providers still operate very much in their own silos.

    Amazon, Rackspace, Google and Microsoft are all doing wonderful things — but they’re doing so largely within their own environments. And while (most) data center vendors can’t offer users complete vertically integrated cloud stacks, they’re more than happy to lock users into their product lines as much as possible and form strong partnerships in areas where they don’t play.

    However, the writing on the wall suggests that, from the customer’s perspective, things might be changing for the better — especially when it comes to internal clouds.

    Two of the best examples, as I discuss in my weekly column over at GigaOM Pro, are Red Hat and Eucalyptus. Both open-source companies have increasingly popular products that compete well with the big dogs — VMware, Microsoft, Citrix and Amazon. Red Hat’s consistently high profits in the face of the economic recession suggest a customer confidence that might follow it into the cloud once it starts pushing such a migration. Eucalyptus appears to be doing strong business as well. According to reports, the company, which has raised $5.5 million to this point, is now valued at $100 million.

    Openness is picking up on the hardware side, too. Dell, for one, has been touting its open approach to picking components, and it bolstered its argument with a slew of cloud announcements this week, as well as InfoWorld test results that show Dell blades performing on par with those from market leaders. At this point, a standards-based approach anywhere in the stack should be welcome: While open standards have long been a rallying cry of cloud commentators, reports from the recent Cloud Connect event suggest we can expect to wait a long while until meaningful software standards actually emerge.

    How the internal-cloud market will play out is anybody’s guess, with systems and software vendors all trying to establish themselves as cloud-computing leaders.  What’s clear, however, is that open source and open standards will have a place within cloud data centers at levels currently not present in the public-cloud sphere.

    Subscribe to GigaOM Pro to read the full article and view a subscriber-only webcast of the GigaOM Bunker Session on open source and cloud computing technologies, to be held March 31, from 9 a.m. – 12 p.m. (PST).

  • To Space and Beyond: The Rise of Research-driven Cloud Computing

    I remember attending the inaugural GridWorld conference in 2006 and hearing Argonne National Laboratory’s Ian Foster discuss the possible implications of the newly announced Amazon EC2 on the world of grid computing that he helped create. Well, 2010 is upon us, and some of the implications Foster pondered at GridWorld have become clear, among them: For many workloads, the cloud appears to be replacing the grid. This point is driven home in a new GigaOM Pro article (sub req’d) by Paul Miller, in which he looks at how space agencies are using the cloud to do work that likely would have had the word “grid” written all over it just a few short years ago.

    Miller cites a particularly illustrative case with the European Space Agency, which is utilizing Amazon EC2 for the data-processing needs of its Gaia mission, set to launch in 2012. Processing the 40GB per night that Gaia will generate would have cost $1.5 million using local resources (read “a grid” or “a cluster”), but research suggests it could cost in the $500,000 range using EC2. The demand for cost savings and flexibility isn’t limited to astronomy research, either.

    Research organizations that need sheer computing power on demand are looking at EC2 as the means for attaining it. Several prominent examples come from the pharmaceutical industry, where companies like Amylin and Eli Lilly have publicly embraced the cloud, as has the research-driven Wellcome Trust Sanger Institute. A related case study comes from CERN’s Large Hadron Collider project, which is using EC2’s capabilities as a framework for upgrading its worldwide grid infrastructure. So high is demand for cloud resources, in fact, that even high-performance computing software vendors, such as Univa UD (which Foster co-founded), are building tools to let research-focused customers run jobs on EC2.

    Unlike HPC-focused grid software, however, the cloud opens up doors beyond crunching numbers. Miller also highlights NASA’s Nebula cloud, a container-based internal cloud infrastructure used to host NASA’s many disparate web sites. Built using Eucalyptus software, Nebula lets NASA users provision the resources they need for their sites as those needs arise. In theory, they could call up some of those resources for parallel processing, too. While grid computing projects often federate resources and democratize access to them, they do so at a scale that makes tasks like site-hosting impractical, and grids don’t provide the nearly bare-metal access that makes cloud resources so flexible.
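
    Because Eucalyptus speaks the EC2 API, provisioning against a private cloud built on it can look nearly identical to provisioning against AWS. A rough sketch using boto, assuming a standard Eucalyptus-style endpoint; the host, credentials, image ID and key name are placeholders:

    ```python
    # Launch an instance on a Eucalyptus-based private cloud via the EC2 API.
    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name='eucalyptus', endpoint='cloud.example.gov')
    conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY',
                         region=region, port=8773,
                         path='/services/Eucalyptus', is_secure=False)

    # Boot a small instance from a (hypothetical) machine image to host a site.
    reservation = conn.run_instances('emi-0abc1234', instance_type='m1.small',
                                     key_name='my-keypair')
    print(reservation.instances[0].id)
    ```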

    Of course, none of this is news to Foster. In early 2008 he noted the myriad similarities between the two computing models, including the ability to process lots of data in a hurry. In late 2009, the cloud market having matured considerably, he observed that a properly provisioned collection of Amazon EC2 images fared relatively well against a supercomputer when running certain benchmarks. There are plenty of reasons why cloud services will not displace high-end supercomputers, but where simple batch processing and cost concerns meet, the cloud could make in-house grids and clusters things of the past.

    Full article on GigaOM Pro (sub req’d)

  • Webscale Databases: Is Open Source Really Necessary?

    When it comes to deploying databases — or any infrastructural pieces, really — at web scale, many large sites opt to “go cheap, go custom or go home.” Given their unique needs, this credo makes sense, but I wonder if the companies following it aren’t making more work for themselves than is necessary. Might the resources spent developing open-source projects or building tools from scratch not become extraneous if companies could buy solutions that would work just fine?

    Isn’t it plausible that a proprietary vendor –- Oracle, let’s say –- could launch a webscale database or analytics solution that would do the trick for a company like Facebook? If there’s one thing Larry Ellison knows better than relational databases, it’s how to make a buck. Hypothetically speaking, Oracle could offer database and data-analysis solutions that could save a company like Facebook from having to act like a software company itself. It certainly hasn’t hesitated to buy its way into alternative markets in the past.

    Another consideration is where web companies draw the line regarding commercial solutions: Is an open-source but subscription-based vendor like Red Hat out of the question? What about any of the emerging startups tackling file systems, memcached and other issues?

    I’m not suggesting that Facebook et al are heading down the garden path with their current approaches, or that there’s a glut of proprietary products on the market, only that it’s not inconceivable that commercial vendors could meet the needs of these companies. You can read my full column over at GigaOM Pro (subscription required). What do you think? Are open-source and DIY solutions really the best bet for webscale companies?

  • Appistry Joins Cloudscale Storage Fray, and Brings Hadoop With It

    Appistry today added another element to its cloud-computing application platform, announcing the April availability of CloudIQ Storage. With the release, the St. Louis-based company joins the growing ranks of companies seizing on demand for cloud storage solutions that can maintain performance in the face of rapidly growing data volumes. Appistry hopes to distinguish its scale-out storage offering from the competition, however, with two key innovations: (1) an ace-in-the-hole that it calls “computational storage,” and (2) CloudIQ Storage Hadoop Edition.

    Customers can achieve the performance benefits of computational storage by launching a commodity-server-based cloud with Appistry CloudIQ Engine, and installing CloudIQ Storage on the same pool of servers. This is a big change from the standard model of having separate islands of processing and storage connected by what Appistry VP of Product Management and Marketing Sam Charrington calls “a straw.” In a computational-storage model, processing tasks automatically route themselves to the relevant data, wherever it’s located across the file system. Charrington says this capability lets applications access data at bus speed (i.e., within the same box), thus eliminating the network bottleneck.
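
    The "computational storage" idea reduces to a scheduling rule: send the task to whichever node already holds the data, rather than pulling the data across the network to a separate compute tier. A conceptual sketch, with hypothetical node and file names:

    ```python
    # Dispatch work to the node that stores the data, not the other way around.
    BLOCK_LOCATIONS = {
        'renders/frame-0001.exr': 'node-03',
        'renders/frame-0002.exr': 'node-17',
    }

    def dispatch(task, path):
        node = BLOCK_LOCATIONS.get(path)
        if node is None:
            raise KeyError('no replica known for %s' % path)
        print('running %s on %s (data is local, no network hop)' % (task, node))

    dispatch('tonemap', 'renders/frame-0001.exr')
    ```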

    With the Hadoop Edition, Appistry hopes to “upgrade” the performance and availability of Hadoop-based applications by replacing the Hadoop Distributed File System (HDFS) with CloudIQ Storage. While Hadoop is wildly popular right now, one issue is its use of a “namenode” – a centralized metadata repository that can constrain performance and create a single point of failure. Appistry’s approach retains Hadoop’s MapReduce engine to assign parallel-processing tasks, but attempts to resolve namenode problems with CloudIQ Storage’s wholly distributed architecture.

    Charrington told me that Appistry also is “in the lab as we speak” with an intelligence-sector customer that has “massive, massive” applications built on HBase, a distributed NoSQL database with Hadoop at its core. Although CloudIQ Storage doesn’t formally support HBase, it has helped the customer improve database throughput, and formal support might be on the way. Because of their inherently scalable natures, Charrington says CloudIQ Storage and NoSQL databases are complementary solutions to handle structured and unstructured data.

    The idea behind cloud storage is the same as the idea behind cloud computing: Organizations want to meet their ever-expanding storage needs as they arise, and they want to do so at lower price points than are available from incumbent vendors like EMC and NetApp. For customers in areas like social media, scientific imaging or film rendering, though, scale and price must be matched with performance. This is where companies like Appistry come in, but it certainly isn’t alone in the quest for these dollars. Startups Scale Computing, Pivot3, MaxiScale and ParaScale all have raised millions for their unique offerings, and HP last summer snatched IBRIX to boost its relevance in the performance-hungry film-rendering market.

  • HP Takes the Consulting Fight to IBM With Cloud Design Service

    HP, continuing down its consulting-based path to cloud computing revenues, today introduced its Cloud Design Service, an attempt to capitalize on its role in designing private cloud infrastructures, including the Department of Defense’s much-ballyhooed RACE platform. While HP’s cloud portfolio might be incomplete overall, it’s difficult to argue with the company’s decision to try to one-up IBM in the services department, especially when the idea of private clouds is coming into its own.

    Essentially, the Cloud Design Service aims to assist customers in building their own clouds by assessing their specific needs and proposing an individually tailored plan of action. Because HP relies upon its own cloud reference architecture and ITIL v3 best practices, customers can be confident that HP isn’t just shooting from the hip in regard to how their clouds should look. According to its Cloud Consulting Services web site, HP also will provide customers with a “[d]etailed bill of materials and implementation plan,” which supposedly will ease concerns over unpleasant cost surprises when it comes time to move from planning to building. The Cloud Design Service is a natural evolutionary step after HP’s pre-existing information- and business strategy-based Cloud Workshop and Cloud Roadmap services.

    From the company’s perspective, the best part about the new service is likely to be that it takes HP’s cloud consulting a step beyond what IBM (or anybody else, really) currently offers. For now, IBM’s cloud consulting options resemble HP’s prior collection in that they revolve around creating business models that utilize cloud computing, but do not address the ground-level concerns of actually building a cloud infrastructure. IBM does offer its own, not exclusively cloud-focused infrastructure-planning services, but they appear more strategic, while HP’s new service appears more operational. In short, IBM will tell customers what services and infrastructural elements they might need to achieve certain goals, whereas HP will give customers a ready-to-build blueprint.

    However, the limitations of HP’s new service are indicative of its cloud strategy as a whole. By focusing on consulting and infrastructure, HP loses value for customers who want to leverage external cloud services, as well as internal cloud services, without seeking third-party solutions. After all, not every customer will have security and compliance needs on par with the Department of Defense. IBM, on the other hand, offers a cadre of external cloud services, including ones for storage and test development, and intends to roll out even more. For businesses willing to let a major vendor design their cloud infrastructures, the inclusion of these complementary services could be a major selling point, even if it means less guaranteed bang for their consulting bucks.

    Related content from GigaOM Pro (sub req’d):

    Delivering Content In the Cloud

  • CA Wants to Be the Enterprise Watchdog in the Cloud

    Let’s be honest, systems management vendor CA doesn’t exactly inspire visions of innovation (heck, until 2006, it went by the not-so-intriguing Computer Associates moniker). That’s about to change, however. Over the past year, CA has been buying up thought leaders across a variety of disciplines — notably Cassatt, NetQoS and Oblicore — each of which plays a critical role in CA’s mission to become the undisputed leader in managing cloud-connected IT departments.

    According to Chris O’Malley, executive vice president of CA’s new Cloud Products & Solutions business line, the company’s goal is to give its customers everything they need in order to analyze the available cloud solutions, figure out how to integrate them into those customers’ existing environments, then make sure they perform as expected. Ultimately, O’Malley believes, even mission-critical workloads will be delivered as services, and ensuring such services stay live is where CA believes it will have the competitive advantage. It wants to provide the guards watching over the enterprise crown jewels.

    This is where CA’s new intellectual property comes into play. As Jasmine Noel of Ptak, Noel & Associates noted to me, “They’ve picked up some gems with their acquisitions,” which include:

    • Cassatt: Cassatt’s software automates the movement and scalability of applications across an organization’s pool of resources, based on a variety of user-defined policies, which means service-level agreements (SLAs) for on-premise applications are all but guaranteed because the software scales to meet demand.
    • NetQoS: NetQoS helps ensure that the network isn’t a weak link affecting an application’s performance. This is increasingly important today, when applications and services might span physical, virtual and even cloud boundaries. Personnel across various IT disciplines need business-level information to help them monitor performance without having to decipher technical data they might not understand.
    • Oblicore: If IT departments are going to act like service providers they will need the tools to translate technical metrics into actionable data. Oblicore provides reports, dashboards and other tools that make it easy to figure out, even predict, which services are living up to their guarantees. For instance, O’Malley notes that CA currently utilizes 60-plus external services on top of its internal ones, which is a lot to track in real time without software to help.

    What’s Next? We’ll Know More in May…

    Although CA was happy to announce its newly found cloud ambitions, it won’t be releasing any specific details until CA World in May. There, according to O’Malley, the company will introduce initial products and its cloud road map — and customers can expect to see an increase in SaaS options. After that, CA plans to roll out new capabilities at an “aggressive,” SaaS provider-like pace, with an eye to getting ahead of the cloud management market.

    For now, the plan is to develop many new capabilities internally. “The scale of innovation that’s going on within these four walls is probably at a rate and pace and depth that we haven’t had in many, many years,” said O’Malley, a 23-year CA veteran. However, he didn’t close the door on acquiring new ones. In fact, although he declined to elaborate, O’Malley admitted that he sees some “interesting companies” selling capabilities that align with CA’s cloud vision. This seems in line with new CEO William McCracken’s plans to spend at least $300 million both this year and next on cloud computing and security acquisitions.

    So who else should CA buy? Here are a few possibilities:

    • Eucalyptus: A startup selling software that lets companies create their own Amazon EC2-style infrastructures internally, Eucalyptus would give CA a soup-to-nuts cloud computing solution. While Cassatt’s technology and CA’s current products let customers manage services across existing infrastructure, Eucalyptus’s technology would allow customers to provision new virtual resources as the need arises internally. Plus, it would make connecting to EC2 even easier should customers seek hybrid cloud platforms.
    • Cloudkick: Oblicore lets CA customers manage SLAs across cloud services, but Cloudkick’s dashboard would let them monitor the underlying service provider performance in real time. Assuming it expanded the scope to cover additional cloud platforms, Cloudkick’s service would let CA customers troubleshoot and possibly make real-time decisions about which platforms to launch any new services on. Additionally, CA could fulfill its SaaS dreams by selling the service even to businesses that don’t use CA for systems management.
    • Appirio: Appirio would bring a much-needed services angle to CA’s cloud offerings. Managing and monitoring services and infrastructure is great, but helping customers transition to cloud computing and/or SaaS, even helping them develop new cloud-based applications, adds an incredible amount of value to any company serious about pushing cloud solutions. Appirio also offers Cloud Connectors, which bridge Salesforce.com applications with Google Apps, Amazon Web Services and/or Facebook. But the services business would be the real key if CA made this move.

    Whatever it decides, CA had better live up to its promise of aggressively advancing its cloud-computing business. The other big systems management vendors — IBM, HP and BMC — are not asleep at the wheel. Indeed, 2010 could bring, as Ptak, Noel & Associates’ Noel put it, a race to see who can actually get companies to sign checks for cloud solutions.

    Related content from GigaOM Pro (sub req’d):

    As Cloud Computing Goes International, Whose Laws Matter?

  • Nasuni Targets Primary Storage in the Cloud

    Cloud storage startup Nasuni entered public beta today, bringing with it a new, but familiar, approach to storing primary data. Instead of competing in the already overpopulated cloud storage-provider market — where offerings generally target backup operations and simple file storage — the Natick, Mass.-based startup sells software that looks and acts like a traditional file system but stores data in cloud offerings from Amazon, Rackspace, Nirvanix and Iron Mountain.

    For $250 a month plus capacity fees (Nasuni covers data transfer costs), customers can use Nasuni to house their primary data in the providers’ clouds. One-year-old Nasuni, which has so far raised $8 million from Sigma Partners and North Bridge Venture Partners, is betting that putting a familiar face on cloud storage will make it accessible to small and medium-sized businesses that require robust storage arrays but don’t want to pay the capital expenses required to meet skyrocketing data volumes. In short, it’s doing exactly what the cloud ought to do.

    Nasuni’s feature set includes thin provisioning, snapshots, deduplication and escrowed encryption, where the data is stored with one cloud provider and the encryption key with another. And rather than getting rid of their existing hardware, customers can keep around a few hundred gigabytes to serve as a local cache or database. According to CEO Andres Rodriguez, managing 1.5TB for three years with Nasuni would cost about half as much as doing so with a low-end EqualLogic array, and far less than an EMC CLARiiON system.

    However, Nasuni’s product is available only as a VMware virtual appliance, which could limit its potential customer base. Virtualization still isn’t ubiquitous, and Rodriguez acknowledges that no one will buy VMware just to use Nasuni’s solution. Still, he’s optimistic, hoping for 100 paying customers by the end of 2010 and, via a strong partner program, 30,000 customers by the end of 2013.

    Rodriguez also drew a distinction between his company and cutting-edge Silicon Valley startups: While they’re leading the charge when it comes to building large cloud infrastructures and new delivery models, he’s sticking with the storage systems expertise for which the East Coast is known, adding that only “the guts are different.”

    Nasuni hopes this familiar focus will help bridge the culture gap between the two coasts and bring skeptical businesses to the cloud. This strategy makes sense (see, e.g., IBM’s pragmatic cloud strategy), but Nasuni’s 30,000-customer goal could be derailed should other East Coast storage vendors decide to get into this game themselves. Not only do they have mindshare advantages, but vendors like EMC have their own clouds to cut out the middle man.

    Related content from GigaOM Pro (sub req’d):

    Thumbnail image courtesy of Flickr user Jeff Kubina

  • Can the Cloud Catalyze Change in International Data Laws?

    Despite imagery of the cloud as a global collection of servers in the sky, among which data and applications move freely, the truth is that cloud computing is far more down to earth and far more localized. As a new GigaOM Pro report explains (sub. req’d), most cloud providers house services in only a few geographically distributed data centers, and national or continental data storage regulations can limit how — and if — organizations move their operations to the cloud. A question that could affect the ultimate scope of cloud adoption is whether legislation can be passed that takes into account the economic and technological realities of a cloud-based world.

    As the report makes clear, European data protection laws are particularly tough, making it difficult for Europeans to use cloud services, which are largely U.S.-based. In the meantime, different data retention times in different EU countries make intra-continental cloud use a challenge (presently, for example, Amazon Web Services has an Availability Zone only in Ireland, and Microsoft will offer Azure zones in Ireland and the Netherlands). European organizations considering cloud computing need to figure out whether the data involved limits their choice in cloud providers or precludes the move entirely.

    The good news for supporters of a truly global cloud is that efforts are underway that could change the way governments view cloud data. Microsoft, for example, has been actively lobbying the United States to pass laws protecting sensitive data in the cloud, and lobbying the EU to relax its data transportation laws. Certainly, strict laws in the U.S. would make it much easier to convince Europe to loosen up. On the compliance front, security guru Christopher Hoff is pushing the A6 audit, which is designed specifically for cloud environments and could assuage governments concerned about differing security protocols among different providers. And as the report notes, there are technological advances that could enable the application of different security policies depending on geographical location.

    It’s possible, of course, that no new laws ever get passed, rendering certain applications and data unfit for the cloud. But in light of the love shown for cloud computing by governments on both sides of the pond, I’m betting on progress sooner rather than later.

    Image courtesy of Flickr user jivedanson.

  • Clouds and CDNs: A Match Made in Heaven?

    Not only are there numerous synergies between the content delivery and cloud computing markets, but the two are set to become increasingly intertwined, according to a new GigaOM Pro report (sub. req’d). Indeed, given how fundamentally different the tasks CDNs and clouds perform are — delivering cached content and running applications, respectively — such a suggestion might seem odd. But the two are in fact highly complementary, in ways both CDN and cloud providers are trying to cash in on.

    Cloud providers understand that latency breeds contempt, so they turn to CDNs to get a boost. Rackspace and Limelight are close partners, GoGrid just teamed with EdgeCast, and Amazon Web Services provides its own CloudFront service. The result won’t be improved application performance or faster database calls, but videos and files will load far faster than they would if they were delivered from a centralized data center.

    However, it’s not just cloud providers that are taking advantage of their Internet-delivery cohorts. CDN leader Akamai, especially, has inserted itself smack into the middle of the cloud computing ecosystem, to the benefit of the SaaS market. Through a partnership with SaaS-platform provider OpSource, for example, Akamai’s route-optimization technology speeds the delivery of web applications across the Internet, giving customers a more real-time experience. Akamai even fancies itself a cloud provider by letting customers deploy Java applications across its collection of 50,000 servers, an offering it calls EdgeComputing.

    The SaaS connection actually seems like something that all CDN and cloud providers should be looking to exploit. As the report notes, CDNs are struggling to make video delivery a profitable business, and while SaaS is profitable, cloud providers are still trying to convince users to move important applications into the troposphere. Clouds give CDNs something to deliver, and CDNs give clouds confidence-inspiring delivery. And they might want to get busy, as telcos seem well positioned to capitalize on this synergy unilaterally should they get proactive.

  • From Azure to VMware: A Look Back at Infrastructure Trends From Q4

    Looking back at the past three months of data center and cloud computing news, what’s striking is not so much what happened, but what will happen. As I outline in the latest Quarterly Wrap-up for GigaOM Pro (sub. required), there were plenty of major announcements and big happenings, to be sure, but many won’t materialize until later this year. When they do, the results could alter their respective landscapes significantly.

    Data Center Shape-shifting

    Of all the infrastructure trends during the fourth quarter, the biggest may have been the changing shape of the data center market. Once composed of separate vendors for separate functions, the space is now full of cross-component partnerships and alliances, most notably that of Cisco, VMware and EMC. The three formed their Virtual Computing Environment alliance to peddle the jointly developed Vblock solution, and Cisco and EMC finally launched their long-awaited joint venture, Acadia. Reactions to this trifecta included alliances between and among competitive vendors like Microsoft, NetApp, Dell, Fujitsu and others.

    Microsoft Azure Wows With What Might Be


    In the cloud space, the soft launch of Microsoft Windows Azure had the cloud community and prospective customers alike discussing — with much anticipation — the merits of a platform and associated features that won’t be publicly available until later in 2010. Likewise, Amazon Web Services’ introduction of Spot Instances for EC2 sparked much discussion about the possibility of a free market for cloud computing instances. The requisite pieces for such a system aren’t yet in place, but many think it’s now just a matter of time until it materializes.
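
    For developers, the spot-market mechanics amount to bidding a maximum hourly price and holding capacity only while the market price stays below it. A rough sketch using boto; the bid, AMI, count and key name are placeholders:

    ```python
    # Bid for spare EC2 capacity at a maximum price of $0.04 per instance-hour.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')   # placeholder credentials

    requests = conn.request_spot_instances(price='0.04',
                                           image_id='ami-12345678',
                                           count=2,
                                           instance_type='m1.small',
                                           key_name='batch-workers')
    # Each request is fulfilled only while the spot price stays under the bid.
    for req in requests:
        print(req.id, req.state)
    ```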

    Oracle Cleared Final Hurdle to Sun Buy

    In the ongoing saga that is Oracle’s acquisition of Sun Microsystems, the fourth quarter brought the first signs of progress since the deal was announced in April. Oracle laid out a list of concessions that seemingly allayed European Commission concerns over the future of MySQL, which has been the primary obstacle in clearing the purchase. Despite MySQL creator Monty Widenius’s late campaign to free the popular open-source database from Oracle’s clutches, the EC approved the deal yesterday.

    Green Shoots in Q4 Financials

    Economic recovery in the IT sector seemed likely when third-quarter results were announced in October, with IT vendors across the board reporting higher revenues and other results that beat Wall Street’s estimates. Additionally, the server market, fresh off the worst quarter since the 1990s, showed quarterly revenue gains for the first time in a year, and VC funding was up 16 percent from the second quarter. Collective demand for computing resources also kept the data center market expanding through the fourth quarter, spurring M&A activity and driving up stock prices.

    The Downsides

    The fourth quarter was not great for everybody, however. Intel was hit with two lawsuits — one by the State of New York and one by the FTC — and settled its existing litigation with AMD for $1.25 billion. And every company associated with “the cloud” suffered a black eye as a result of Microsoft and T-Mobile losing Sidekick users’ personal data. Although the data ultimately was recovered, the incident garnered much media attention and resulted in a class-action lawsuit against the companies involved.

    A more in-depth look at these trends and others is available in the latest Quarterly Wrap-up from GigaOM Pro. Get the scoop on last quarter’s happenings in our five focus areas — NewNet, Mobile, Green IT, Connected Consumer and Infrastructure — along with dozens of detailed research briefings and in-depth articles on specific topics in each of these areas. You can subscribe here.

  • Why Hadoop Users Shouldn’t Fear Google’s New MapReduce Patent

    Updated: Google, nearly six years since it first applied for it, has finally received a patent for its MapReduce parallel programming model. The question now is how this will affect the various products and projects that utilize MapReduce. If Google is feeling litigious, every database vendor leveraging MapReduce capabilities – a list that includes Aster Data Systems, Greenplum and Teradata — could be in trouble, as could Apache’s MapReduce-inspired Hadoop project. Hadoop is a critical piece of Yahoo’s web infrastructure, is the basis of Cloudera’s business model, and is the foundation of products like Amazon’s Elastic MapReduce and IBM’s M2 data-processing platform.

    Fortunately for them, it seems unlikely that Google will take to the courts to enforce its new intellectual property. A big reason is that “map” and “reduce” functions have been part of parallel programming for decades, and vendors with deep pockets certainly could make arguments that Google didn’t invent MapReduce at all.
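
    The "map" and "reduce" idioms are indeed old hat. Here is the canonical word-count example written with nothing but the Python standard library; Hadoop and Google's MapReduce distribute this same pattern across a cluster:

    ```python
    # Word count expressed as map, shuffle and reduce phases in plain Python.
    from itertools import groupby
    from operator import itemgetter

    documents = ['the cloud is the new grid', 'the grid came first']

    # Map phase: emit (word, 1) pairs from every document.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle phase: group the pairs by word.
    mapped.sort(key=itemgetter(0))
    grouped = groupby(mapped, key=itemgetter(0))

    # Reduce phase: sum the counts for each word.
    counts = dict((word, sum(n for _, n in pairs)) for word, pairs in grouped)
    print(counts)
    ```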

    Should Hadoop come under fire, any defendants (or interveners like Yahoo and/or IBM) could have strong technical arguments over whether open-source Hadoop even infringes the patent. Then there is the question of money: Google has been making plenty of it without the patent, so why risk the legal and monetary consequences of losing any hypothetical lawsuit? Plus, Google supports Hadoop, which lets university students learn webscale programming (so they can become future Googlers) without getting access to Google’s proprietary MapReduce implementation.

    So why get the patent at all? Well, it certainly doesn’t hurt Google to have it, and it lets the company head off the possibility of a patent troll claiming the technique first and taking the fight to Google. Or maybe it wants the ability to assign patent rights for its MapReduce version. Say what you will about the ethics of software patents, but as long as it doesn’t do evil by offensively enforcing this patent, you can’t blame Google for protecting itself.

    Update: A Google spokeswoman emailed this in response to our questions about why Google sought the patent, and whether or not Google would seek to enforce its patent rights, attributing it to Michelle Lee, Deputy General Counsel:

    “Like other responsible, innovative companies, Google files patent applications on a variety of technologies it develops. While we do not comment about the use of this or any part of our portfolio, we feel that our behavior to date has been inline with our corporate values and priorities.”

  • Elastra Makes Its Cloud Even Greener

    Elastra has incorporated energy-efficiency intelligence into its Cloud Server solution, allowing customers to define which efficiency metrics are important to them and then rely on the software to route each application to the optimal resources within their internal cloud environments. Elastra’s efforts are just the latest in a growing trend toward saving data center costs by using the least possible amount of power to accomplish any given task. Especially in the internal cloud space, power management capabilities are becoming a must-have, with vendors from Appistry to VMware offering tools to migrate workloads dynamically and power down unneeded servers.

    What sets apart Elastra’s approach is its focus on application needs as opposed to just server utilization rates. After Elastra’s ECML and EDML markup languages determine application and resource properties, respectively, the Plan Composer function lets customers set their own policies based on application needs and specific power metrics (such as wattage, PUE, number of cores, etc.). Therefore, if an application requires 4GB of RAM and two cores for optimal performance, and if the customer is concerned with straight wattage, Elastra’s product will automatically route it to the lowest-power 4GB, dual-core virtual machine available.
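
    A simplified sketch of that placement policy: filter the virtual machines that satisfy the application's requirements, then pick the one drawing the least power. The inventory below is hypothetical:

    ```python
    # Pick the lowest-wattage VM that meets the application's RAM and core needs.
    VMS = [
        {'name': 'vm-a', 'ram_gb': 4, 'cores': 2, 'watts': 180},
        {'name': 'vm-b', 'ram_gb': 8, 'cores': 4, 'watts': 310},
        {'name': 'vm-c', 'ram_gb': 4, 'cores': 2, 'watts': 140},
    ]

    def place(app_ram_gb, app_cores, inventory):
        candidates = [vm for vm in inventory
                      if vm['ram_gb'] >= app_ram_gb and vm['cores'] >= app_cores]
        if not candidates:
            raise ValueError('no VM satisfies the requirements')
        return min(candidates, key=lambda vm: vm['watts'])

    print(place(4, 2, VMS))   # picks vm-c: same fit, fewer watts
    ```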

    While server virtualization and efficient hardware reduce the total cost of maintaining the status quo, this new breed of software solutions shaves off additional dollars by using such resources as efficiently as possible — all without the need for human intervention. The main drawback for customers, however, is that they’re tethered to cutting-edge internal cloud platforms, which many people are still afraid of — just ask the now-defunct Cassatt. As interest in internal clouds and Green IT continues to rise, these solutions will become more than good stories to tell, and we’ll see real-world examples of how much money they actually can save.

    Related GigaOM Pro Research:

    For Truly Green IT, We Need Truly Clean Data Centers

    Report: Green Data Center Design Strategies