Author: Mike Kirkwood

  • This Tweet is Priority 1: SalesForce.com’s Chatter is Transactional Social Media

    Soon, Twitter users will be in a better position to get satisfaction from the companies they do business with. This morning, SalesForce.com is announcing that the Chatter beta developer preview has grown to 500 companies and is integrated with its popular Service Cloud offering. The company has shown its ability to leverage the disruption of social media – rather than be disrupted by it.

    We had a chance to review the new tools and experience what an end-to-end social-media-driven customer experience looks like. It was eye-opening for us – and is coming soon to the 70,000-plus customers of the SalesForce platform.


    The first thing we learned in our briefing with SalesForce is that the company has fully digested the reality of the new web. The company talks about how it started on a mission to bring the power of great web applications like Amazon.com to enterprise customers. Now, ten years later, the web and the company have moved on to the new dominant engagement models on the web: Facebook, YouTube, and Twitter.

    Here is a graph the SalesForce team shared with us on the emerging trend of Internet usage, a key driver in how the Chatter product was conceived.

    Internet Usage

    SalesForce makes the case that a fundamental shift is underway, and it is completely refactoring its engagement model. The company calls it the “Facebook Imperative”, which we interpret as “be as social and easy to use as Facebook, or wither”. Reminiscent of Wired magazine’s “Wired/Tired” lists, SalesForce shares its observations of the fundamental shifts in the industry. We see Amazon.com as the old incumbent leader of the Internet being replaced by Facebook, along with a series of observations that show a landscape change dominated by mobile, location, and web standards.

    Here, we see a Chatter-enabled service desk, where we can easily see the different channels that have opened tickets for customer service.

    Service Cloud Dashboard

    A case that has been opened via Twitter is seen in the dashboard here. It can be shared among team members, or escalated. We think this is an interesting evolution of the “follower” mechanic borrowed from Twitter. In this case, you can be assigned a topic to follow, since in the enterprise there is a job to be done.

    Twitter Case in Service Cloud

    Here, we see the familiar Twitter interface as the origination point of the case being managed internally.

    Twitter Response Received

    From what we learned, several marquee customers such as Bank of America plan on rolling out Chatter plus Service Cloud. Shown here is the BofA Twitter feed responding to individuals in the public forum.

    Twitter BofA Chatter

    Some of the productivity benefits of Chatter plus Service Cloud cited by the company are listed here:

    • “Monitoring Priority Cases: Service agents can stay on top of high priority cases, updates to critical knowledge articles, and the latest product updates
    • Locating Expertise: Service agents can follow experts across their organization and instantly get help from other agents, other departments, or from across the company
    • Real-Time Case Collaboration: For high priority cases, service supervisors can assemble the best expertise and information to close complex cases faster
    • SLA Management: Salesforce Chatter proactively can alert service agents of upcoming service level agreement milestones that they must meet
    • Sales-Service Alignment: Service agents and sales reps can share the latest case and opportunity updates for their customer to ensure good service means good business”
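
    The SLA alerting described above reduces to scanning open cases for milestones that fall inside an alert window. Here is a toy Python sketch of that idea; the field names are hypothetical, not Salesforce’s actual schema:

```python
from datetime import datetime, timedelta

# Hypothetical case records; field names are illustrative only.
cases = [
    {"id": "C-101", "priority": "high", "sla_due": datetime(2010, 4, 5, 17, 0)},
    {"id": "C-102", "priority": "low",  "sla_due": datetime(2010, 4, 9, 17, 0)},
]

def upcoming_sla_alerts(cases, now, window=timedelta(hours=4)):
    """Return ids of cases whose SLA milestone falls within the alert window."""
    return [c["id"] for c in cases if now <= c["sla_due"] <= now + window]

print(upcoming_sla_alerts(cases, datetime(2010, 4, 5, 14, 0)))  # ['C-101']
```

    A real implementation would run this scan on a schedule and push matches into agents’ Chatter feeds rather than printing them.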

    We think there could be several big winners with the SalesForce Chatter release.

    • SalesForce may have found its way into the entire enterprise, where it becomes essential to connect departments and individuals together in the best collaboration model possible.
    • Twitter seems like a big winner here, as it is now being demonstrated as the front end to customer service relationships. This pattern has been developing for several years with leaders like Comcast servicing customers over Twitter. Now, it’s moving to the next level: when you Tweet an issue, you’ll essentially be opening a ticket. And where tickets are opened, you can be sure that it is someone’s job to close them. It seems that Twitter is being cemented into enterprise processes just like the telephone of yesteryear.
    • Consumers win by getting faster answers with less searching in document bases, or waiting in call center queues. Consumers also win by bringing speed and transparency to the process. No longer, will we wait on hold all alone, as we’re bringing our followers with us with every Tweet.
    • IT departments that have invested in document management and other solutions will now be able to extend their reach.
    • Customer service departments that have the job of closing tickets and meeting SLAs (Service Level Agreements) win as well, gaining a new channel for doing that job.
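
    The tweet-to-ticket pattern described above can be sketched as a simple mapping; every field name here is illustrative rather than any vendor’s schema:

```python
import itertools

_ticket_ids = itertools.count(1001)  # hypothetical ticket numbering

def tweet_to_ticket(tweet):
    """Open a service ticket from an inbound tweet (illustrative mapping only)."""
    return {
        "ticket_id": next(_ticket_ids),
        "channel": "twitter",
        "reporter": tweet["user"],
        "summary": tweet["text"][:80],  # truncate long messages for the queue view
        "status": "open",
        "assignee": None,  # routed later to whoever follows this topic
    }

ticket = tweet_to_ticket({"user": "@jane", "text": "My card was double-charged!"})
print(ticket["ticket_id"], ticket["status"])  # 1001 open
```

    The interesting part is everything after this mapping: sharing, escalation, and assignment by topic, which is where the Chatter follower mechanic comes in.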

    Welcome to the future of customer service, no telephone required, but your smart mobile device is definitely invited.

    Do you believe SalesForce.com is onto the next big shift in enterprise computing with the upcoming launch of Chatter?

    Photo credits: Salesforce.com



  • Channeling Murdoch: Choices for Content Payments on the iPad

    With the launch of the iPad, the value of content is being reconsidered once again. It’s clear that free (the way of Google) isn’t the goal of the publishing industry. As a result, many new iPad owners are faced with the question of whether the book or magazine they own or consume should be purchased again on the iPad.

    Similarly, developers are faced with a set of decisions on how to appropriately extract payment from consumers. In this post, we’ll take a look at the solution offered by Apple and a host of solutions on the web that are available to the iPad as applications or as web sites that charge for views.


    iPad Apps Apple

    To App, or Not to App, That Is the Question

    For content providers that have websites, or an existing relationship with Amazon, the question of whether to push content to Apple’s book store, or to build an app is more complicated than first meets the eye.

    On one hand, getting paid through multiple channels makes a ton of sense.

    Apple, being very wise, is working to find balance between the closed and open ecosystems. Its breakthrough product, the iPad, supports web sites, applications like Kindle, and its own book store simultaneously.

    On the other hand, it can be a pain to track customer relationships and inventory across multiple channels.

    If your content is recurring (weekly, monthly), it is even more complicated. And, if you already have subscribers on your web site behind a paywall, it can be even more challenging to rationalize how to merge these customer populations and price points. In a way, we’re all learning together and competition across channels is going to make it harder in the short term to figure out the right mix.

    Distribution Matters

    Apple’s Store Kit framework is the toolkit for developers to enable payments in iPhone and iPad applications.

    Apple has evolved its own rules and technology in the last several years to do this, and in 3.0 released last summer offered the Store Kit API. When first launched, only paid apps were able to offer recurring payments through Store Kit. However, in one of the more recent updates offered by Apple, the company changed the terms of Store Kit to allow developers to set the price of their application for “Free” and then to offer updates or subscriptions as additional payments.
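
    On the server side, a purchase made through Store Kit can be confirmed by posting the receipt to Apple’s verifyReceipt endpoint, which returns a JSON body with status 0 for a valid receipt. Here is a minimal Python sketch of that check; the mock response at the end is illustrative and no network call is made:

```python
import base64
import json
import urllib.request

# Production endpoint; test purchases go to sandbox.itunes.apple.com instead.
VERIFY_URL = "https://buy.itunes.apple.com/verifyReceipt"

def build_verification_request(receipt_bytes):
    """Package a raw receipt for Apple's verifyReceipt endpoint."""
    payload = json.dumps({"receipt-data": base64.b64encode(receipt_bytes).decode()})
    return urllib.request.Request(
        VERIFY_URL, payload.encode(), {"Content-Type": "application/json"})

def is_valid(response_body):
    """Apple returns status 0 for a valid receipt; anything else is an error code."""
    return json.loads(response_body).get("status") == 0

# Checking a (mock) response without hitting the network:
print(is_valid('{"status": 0, "receipt": {"product_id": "com.example.monthly"}}'))  # True
```

    A subscription-based app would run this check server-side before unlocking the current issue, rather than trusting the device alone.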

    In the release of iPad this week, we saw veterans of the industry release amazing reader applications, but as reported in The Huffington Post, price does matter, and parity across offerings is still a question to be answered.

    Amazon offers its books and other goods on the iPhone via its Kindle for iPhone application. In this free application, Amazon is the back-end for payment and Apple is the distributor of the application.

    Amazon, like Apple, has a direct relationship with publishers to offer their inventory in its stores. In recent weeks, it has been reported that intense negotiations have been underway between the publishers and Amazon on pricing and margin. As reported by AP, Amazon has seemingly conceded a new level of pricing control to publishers.

    Amazon and Apple are setting the pace for the future of computing. Both are engaged with content providers to work out all the kinks of pricing in this new world. We wonder if there is a way that both can win, as certainly they are both extremely well positioned and successful in payments, computing, and content distribution.

    Getting Paid is Hard – However Recurring Billing is a Real Pain

    In contrast to these end-to-end platforms for distribution, we took a look at companies that offer the payment process itself as a platform to content and application developers.

    Aria Systems announced a new release of its PCI-compliant platform to support the iPad today.

    The company’s core vision is to “Simplify Your Billing”. To do that, the company is betting that content providers will want to extend their relationships with consumers via the iPad and integrate their web applications, and it has announced a future integration with Safari to that end.

    Aria Systems brings a plug-and-play solution for integrating with back-end systems a company may be running, including QuickBooks and SalesForce.com. It also specializes in recurring billing transactions, which can be a difficult thing to work out in the context of subscription revenue recognition. The platform is described as being PCI compliant, which can be a huge benefit for content and application owners that don’t want to take on the burden and liability of all of the controls around handling credit cards and personal privacy.

    Chargify is another software-as-a-service application, whose stated core value is “Build Your Business, Not Your Billing System”. We think this is the value-add that both Chargify and Aria bring to companies that want to offer recurring payments. Many do not have the time to also become experts in the hard work of accounting and compliance.

    The Chargify solution is offered as a set of APIs to integrate into an existing application and offers an iPhone application to peer into the payments processes in play.
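
    As a rough illustration of what integrating a hosted billing API involves, here is a Python sketch that builds a Chargify-style “create subscription” REST request. The subdomain, API key, and product handle are placeholders, and the exact endpoint and payload shape should be checked against Chargify’s own API documentation:

```python
import base64
import json
import urllib.request

# Placeholders: every merchant gets its own subdomain and API key.
API_KEY = "your-api-key"
BASE = "https://example-site.chargify.com"

def create_subscription_request(product_handle, email):
    """Build a Chargify-style 'create subscription' HTTP request (not sent here)."""
    body = json.dumps({"subscription": {
        "product_handle": product_handle,
        "customer_attributes": {"email": email},
    }})
    # HTTP Basic auth: API key as the username, dummy password.
    auth = base64.b64encode(f"{API_KEY}:x".encode()).decode()
    return urllib.request.Request(
        f"{BASE}/subscriptions.json", body.encode(),
        {"Content-Type": "application/json", "Authorization": f"Basic {auth}"})

req = create_subscription_request("monthly-plan", "reader@example.com")
print(req.get_full_url())
```

    The point is how little billing machinery the app itself carries: one authenticated request, and dunning, proration, and card storage stay on the provider’s side.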

    Here’s a quick view of how it works.

    Chargify: How It Works

    Content Distribution Enters Into a New Chapter

    We are left wondering whether Apple or Amazon will extend their reach further beyond their current distribution.

    Will they grow their payment solutions to become the center of gravity for all payments for content on the web?

    Rupert Murdoch

    Or, will solutions like Aria Systems and Chargify get closer to the distribution channel and offer app developers and content owners a deeper relationship with consumers by removing distribution as the driver of payments? We expect that in the coming years content providers will want the best of both worlds: great distribution and simple billing.

    Perhaps we’re asking the wrong question by focusing on what Rupert Murdoch is doing.

    Maybe we should ask what Wolverine would do.

    After all, content is king.

    Are you looking at recurring payments for your content or application? In 2010, is it possible to choose a single solution?



  • Cloud Aware Monitoring: GroundWork and Eucalyptus Offer Private Cloud Beta Program

    Tomorrow, GroundWork Open Source Inc. and Eucalyptus Systems will be announcing that they have partnered to deliver monitoring and management of applications running in a Eucalyptus private cloud environment.

    If your enterprise is running a private cloud powered by Eucalyptus, you can now plug it into GroundWork’s monitoring solution. This allows you to join your view of resources from Amazon and other servers in your enterprise with your private cloud.


    What is Eucalyptus?

    We covered Eucalyptus recently in an interview with the company’s founder and CTO. The company is a first-mover in helping organizations build private clouds that can achieve parity with Amazon’s EC2. The company’s enterprise edition will allow you to run an Amazon instance on your VMware infrastructure, effectively joining your virtual infrastructure and the Amazon cloud.

    “Detailed monitoring and management of private cloud applications can give Eucalyptus users important real-time information to increase productivity and reduce costs,” said Marten Mickos, CEO of Eucalyptus Systems. “Through our partnership with GroundWork Open Source, Eucalyptus open source users and Enterprise Edition customers can now benefit from a proven, open source solution to monitor private clouds as part of their overall network environment.”

    GroundWork’s newest solution offers the ability to monitor topology of your private cloud and to plug the results into the monitoring you are doing with other servers and the Amazon public cloud infrastructure.

    In the briefing we attended with company executives, several things emerged that we’re considering.

    First, it was pointed out that private clouds are “where the action is” for large enterprises. What we heard is that some companies, like the pharmaceutical firms GroundWork currently has in its portfolio, simply won’t be able to move all of their data out to the public cloud yet. But they do want to get the benefits of cloud computing internally.

    Second, we learned that one thing GroundWork offers is a flexible hosting model, where your monitoring infrastructure can be hosted internally or in the cloud on a managed EC2 instance.

    Recently, we checked out Cloudkick, another cloud monitoring startup that can also monitor servers in the cloud and in the enterprise. The GroundWork solution launching in beta offers both a topology view of the private cloud and flexible hosting options that may be attractive to enterprises that plan on keeping most of their assets internal.

    From what we can see, Cloudkick is positioned for companies that are starting in the cloud for scaling purposes, while GroundWork seems positioned toward companies where the center of gravity is inside the data center and, now, the private cloud.

    “More and more of our customers are investigating and investing in private cloud usage. Eucalyptus gives incredible power and cost savings to IT teams building out cloud services. Coupled with GroundWork’s automatic instance and application monitoring, this partnership provides a robust cloud solution with clear ROI that enterprises can take advantage of quickly,” said Peter Jackson, GroundWork Open Source President and CEO.

    What is GroundWork’s private cloud solution?

    GroundWork starts from the premise that if you are running a private cloud, the monitoring solution needs to be aware of your architecture (topology, software stacks, and servers).
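
    GroundWork Monitor builds on Nagios, so a private-cloud check can follow the standard Nagios plugin convention: print one status line (optionally with performance data after a pipe) and return exit code 0, 1, or 2. Here is a toy check of running instance counts, with thresholds chosen purely for illustration:

```python
# Nagios-style plugin convention, which GroundWork Monitor understands:
# exit 0 = OK, 1 = WARNING, 2 = CRITICAL; first output line is the status text.
OK, WARNING, CRITICAL = 0, 1, 2

def check_instance_count(running, warn_below=3, crit_below=1):
    """Check that a cloud tier has enough running instances."""
    perfdata = f"instances={running}"
    if running < crit_below:
        return CRITICAL, f"CRITICAL - {running} instances running|{perfdata}"
    if running < warn_below:
        return WARNING, f"WARNING - {running} instances running|{perfdata}"
    return OK, f"OK - {running} instances running|{perfdata}"

# In a real plugin, sys.exit(code) would report the status to the scheduler.
code, message = check_instance_count(running=2)
print(message)  # WARNING - 2 instances running|instances=2
```

    A topology-aware version would pull `running` from the Eucalyptus cloud controller per tier rather than taking it as an argument.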

    Here is a visual representation of how the company envisions cloud aware monitoring:

    Cloud Aware Monitoring

    Here is a screenshot of the GroundWork monitoring solution:

    GroundWork Monitor

    Here is a bit more from the companies on the beta program:

    The GroundWork Monitor Enterprise Cloud for Eucalyptus beta program offers:

    • “GroundWork Monitor Enterprise Cloud usage to cover on-premise, public or private cloud hosted applications and infrastructure
    • Access to Eucalyptus EE, including VMware support to implement private clouds in existing environments
    • The opportunity to provide direct feedback to the engineering and product teams, helping define the future of IT operations in the cloud
    • Engineering and technical assistance for the duration of the beta program.
    • Participants will gain these benefits with the combined GWOS and Eucalyptus
    • Quickly and easily build and monitor private and hybrid clouds with your existing environment and other public clouds
    • Run Amazon Machine Image (AMI) instances on VMware-based hypervisors within your Eucalyptus private cloud
    • Seamlessly manage environments with multiple hypervisors (Xen, KVM, vSphere, ESX™ and ESXi™) under one management console and transition applications without any modifications
    • Manage service performance and availability based on IT monitoring insight trend and usage reports across environments”

    More information about the beta program is available at http://www.gwos.com/products/Enterprise_Cloud_beta.html

    It is becoming clear that private clouds are increasingly becoming an important part of the enterprise. Eucalyptus has a real opportunity as a first-mover in deploying them with its tools. From experience, we know that where enterprise-class computing exists, monitoring follows.

    GroundWork and Eucalyptus are working together to make a seamless offering that plugs into the private cloud deployment process in this beta release – and they are asking for feedback from administrators interested in the program.

    Does deploying a private cloud change your view of administration tools and monitoring?



  • iPad: Citrix Brings Windows 7 Back to the Future

    When will the iPad deliver in the enterprise? We first asked this question on February 11th when we interviewed the Citrix team. At the time, we gave it a thumbs-up, as the Citrix team had good answers for all of our questions.

    Today, with the launch of the iPad, Citrix has delivered on its promise of making Citrix Receiver (powered by XenApp) for iPad available as a day-one app in the iPad App Store. Seeing is believing, so today we took a look at the new application on the new device.


    Citrix isn’t shy about the opportunity for iPad. The demonstration they shared focused on one of the toughest IT nuts to crack, health care applications in the hospital. From what we see, with some reasonable configuration, the iPad, with Citrix Receiver installed, is ready to read and write health care records today.

    Citrix Receiver uses software virtualization to bring back-office applications to all sorts of smart clients, including Windows, Windows Phone, iPhone, and iPad. Here is a demonstration offered by Citrix of Citrix Receiver on the iPad.

    Today, the ReadWriteWeb team installed the Citrix Receiver for iPad app; here are our first impressions.

    • Real-time setup. The app installed perfectly, but setting up the Citrix account takes 15 minutes. In our real-time world, we found ourselves downloading other apps and almost forgot to check back. A minor issue, but perhaps something to consider for enterprise deployments.
    • Gestures. In the world of iPad, point and click converts to pinch and slide. As noted in our earlier post, Apple has done a lot of the heavy lifting to get parity between mouse and finger controls. Still, we asked whether the gestures mapped; our first impression is that all the basics are there.
    • Custom icons for each application session. We reported that this might be an important feature, and Citrix acknowledged it being part of the implementation. Today, we see that this is part of the framework enabled by Apple and supported by Citrix. Each enterprise application can be dropped onto the iPad home screen and it will have a custom icon per your setup.
    • Streaming. One of the considerations for all software virtualization is the ability to stream real-time changes from screen to screen. In the demonstration above, we see examples of real-time heart visualization in the application. This capability seems to be key for leveraging existing investments and avoiding having to rebuild these applications for the iPad.

    More About the iPad Launch

    Click here for our full archive of posts about the iPad launch.

    Come back throughout the day for more of our coverage of the iPad launch.

    Citrix has done some heavy lifting to prepare iPad for the enterprise by focusing on its strength in preparing back office enterprise applications to be available to smart clients like iPad.

    Now, the big question. Are you preparing your enterprise for iPad? Do you have questions or wish-list items for the team at Citrix?

    Credit: Frederic Lardinois, RWW’s brave iPad owner, who wrestled control of his new day-one device from his family to do this evaluation.

    Don’t miss the ReadWriteWeb Mobile Summit on May 7th in Mountain View, California! We’re at a key point in the history of mobile computing right now – we hope you’ll join us, and a group of the most innovative leaders in the mobile industry, to discuss it. Register now »



  • Are You the Next Zynga? The Rocket Science at RightScale Helps Deliver a Safe Liftoff

    Zynga is a leading example of how to wield cloud infrastructure to achieve scale. The company uses RightScale to help match demand for its incredibly successful game franchises with appropriate resources. Zynga seems to be a master of understanding how to model customer demand and underlying resources. As even virtual goods have COGS (cost of goods sold), server resources are part of the bill when conjuring up virtual goods for tens of millions of users.

    Although we can’t all be as smart (or cute) as Zynga, many of us are catching on that scaling into the cloud is a smart choice. This brief analysis of RightScale looks at its offerings and the momentum the company is gaining in the market.


    What Does RightScale Offer?

    RightScale is a platform that abstracts cloud offerings from Amazon and a host of other cloud providers to help orchestrate the management and provisioning of cloud assets.

    In the case of social games, this may mean algorithms that help spin up services during a dramatic swing in usage. Or, in the reverse case, it may be scaling infrastructure across the life cycle of a property as it is launched, goes viral, and eventually is replaced by the next thing.

    The company also offers resource portability, where it can deploy servers with Amazon or other cloud providers that compete in providing cloud workload services and the ability to spin up new services through APIs. RightScale has tuned its tools both to learn and to react to changes required in the infrastructure of applications using the platform.
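
    As a toy illustration of the demand-matching idea (not RightScale’s actual algorithm), a scaling rule can resize a server pool so that average utilization stays near a target:

```python
import math

def desired_servers(servers, utilization, target=0.6, floor=2):
    """Toy demand-matching rule: resize so average utilization lands near `target`.

    Total load is servers * utilization; dividing by the target utilization
    gives the pool size that would carry that load at the target level.
    """
    return max(floor, math.ceil(servers * utilization / target))

# Traffic spike: 10 servers at 90% grow to 15; quiet night: 10 at 12% shrink to 2.
print(desired_servers(10, 0.90))  # 15
print(desired_servers(10, 0.12))  # 2
```

    Production systems layer cooldown periods, boot-time lag, and cost ceilings on top of a rule like this, which is exactly the operational detail a platform such as RightScale automates.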

    New customer announcements include Hitachi Systems and Services in Japan and ProKarma in the United States.

    Both are strong systems integrators that have chosen RightScale as the platform to bring the cloud to their customers. RightScale has announced over one million servers launched using its platform.

    Maybe Zynga is the next Zynga

    The company certainly has the viral pattern down, and delivery nailed. And one thing that we’ve learned in watching the excitement of social games is that demand can be like a roller-coaster.

    In addition to all of the natural benefits of cloud infrastructure in cost and timing, we think being ready for wild success is just good practice – it can be much less expensive than failing to scale. More importantly, having a platform that scales can open new doors to business that may not have existed without it.

    RightScale: For All Shapes and Sizes

    At RightScale, it doesn’t matter if your application is an addictive game or a monthly billing application. The company knows that in the coming years hosting in the cloud is likely to make sense for internet infrastructure, and it is well positioned to be a piece of a lot of solutions that want to scale with demand.

    If the momentum with heavy-hitting system integrators continues, RightScale will be coming to you through its partners. Of course, you can also try it for free and get started in managing the cloud. The company is targeting companies that have more than a handful of servers and has a compelling offering to get started and to grow from there.

    Does RightScale fit into your scaling plans?

    Photo Credit: jurvetson



  • Cloudkick Broadens its Scope: Now Monitors the Datacenter

    Cloudkick is a cloud monitoring startup that helps system admins manage cloud servers. Today, the company announced it is getting physical, bringing its cloud monitoring capabilities to internally hosted servers and virtual machines.

    The company has had a lot of success in helping companies that start up in the cloud and begin to achieve scale. It already counts a host of hot startups among its users, including Posterous, Bump Technologies, and Urban Airship. Through listening to users, the company decided to offer local server support to merge its view of all server assets for these organizations.


    What is Cloudkick?

    Cloudkick enables a company to manage internally hosted servers by running the Cloudkick agent and reporting into the same console as its cloud computing infrastructure from AWS, RackSpace, SliceHost and others. When installed, the Cloudkick agent responds to status checks from the Cloudkick monitoring solution, which is itself a distributed cloud application. Cloudkick supports a host of cloud provider solutions and publishes a report of supported features.

    We met with the company at their offices in San Francisco. Upon entering the warehouse near the Mission District, called “The Farm”, we realized that this was a true technology startup, founded by system administrators trying to make their jobs easier. The team participated in Y Combinator and has received an initial capital infusion from Avalon Ventures.

    The Cloudkick system offers consolidated server reports and shows server events by polling registered clients in the cloud (and now data centers) and piping them to Cloudkick’s multi-tenant event aggregator.

    The tools are modeled after administrative tools like Cacti, Nagios, and Munin, but are delivered on top of an agent-driven, real-time view of the underlying server infrastructure.

    When checking out the demonstration, we also noted that the browser is updated in real time as events are polled. This keeps the information fresh without having to re-check and brings the best of browser-based real-time communication to system administrators.

    Cloudkick’s implementation is simple and elegant. The young company is demonstrating product leadership by living the mantra of simplicity and utility.

    Here’s a sample of the graphs from Cloudkick’s feature inventory.

    Cloudkick Graphs

    Monitoring Every Server

    The goal of this release is to bring the power of cloud monitoring to servers in the datacenter. It allows a larger and larger region of infrastructure to rely on outside controls to monitor its health and well-being.

    One feature we were intrigued by in Cloudkick was the ability to tag and filter groups of hosts, and then to set rules across them. For example, tagging all servers “web apps” allows custom uptime-check rules to be set for the whole group quickly.
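
    The tag-and-filter mechanic can be sketched in a few lines of Python; the host records and rule format here are illustrative, not Cloudkick’s actual data model:

```python
# Hypothetical host inventory; tags group hosts across providers and datacenters.
hosts = [
    {"name": "web1", "tags": {"web apps"}},
    {"name": "web2", "tags": {"web apps"}},
    {"name": "db1",  "tags": {"database"}},
]

def apply_rule(hosts, tag, rule):
    """Attach a monitoring rule to every host carrying the given tag."""
    matched = [h for h in hosts if tag in h["tags"]]
    for h in matched:
        h.setdefault("rules", []).append(rule)
    return [h["name"] for h in matched]

print(apply_rule(hosts, "web apps", {"check": "uptime", "interval_s": 60}))
# ['web1', 'web2']
```

    The appeal of the approach is that a rule written once keeps applying as new hosts pick up the tag, instead of being configured server by server.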

    The company offers an API for its services and uses 2-legged OAuth for API authentication. OAuth is “an open protocol to allow secure API authorization in a simple and standard method from desktop and web applications.”. The company also offers a proxy service that streamlines and secures the connections for hosts that will connect to the Cloudkick services.
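
    Two-legged OAuth means the client signs each request with only its consumer key and secret, with no per-user token. Here is a minimal HMAC-SHA1 signing sketch following the OAuth 1.0 scheme (RFC 5849); the endpoint URL is a placeholder, not Cloudkick’s actual API:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote

def sign_request(method, url, params, consumer_key, consumer_secret):
    """Produce 2-legged OAuth 1.0 HMAC-SHA1 parameters for one request."""
    enc = lambda s: quote(str(s), safe="~")  # RFC 3986 percent-encoding
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": secrets.token_hex(8),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    # Signature base string: METHOD & encoded URL & encoded sorted params.
    all_params = {**params, **oauth}
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(all_params.items()))
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    key = enc(consumer_secret) + "&"  # no token secret in 2-legged mode
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    oauth["oauth_signature"] = base64.b64encode(digest).decode()
    return oauth

header = sign_request("GET", "https://api.example.com/nodes", {}, "key", "secret")
print(sorted(header))
```

    Because the signature covers the method, URL, and parameters, a proxy or eavesdropper cannot replay a tampered request, which is what makes this workable over plain HTTP APIs.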

    Cloudkick is a cloud company monitoring clouds, and in many ways it shows us the architecture of the future. In one of the company’s blog posts, the team shares its “love affair with Cassandra” and how multi-master database technology is an enabler for co-location of server assets in infrastructure clouds.


    Where does Cloudkick go from here?



  • Rulers of the Cloud: A Multi-Tenant Semantic Cloud is Forming & EMC Knows that Data Matters

    EMC is a large company focused on high-performance storage for enterprises. Its offerings are closely aligned with the idea of extending infrastructure from virtualization to private cloud infrastructure. The company wants to help make IT data provisioning services as easy as Amazon and as secure as Fort Knox.

    To get a handle on where enterprise data storage meets the web, we looked for inspiration from architects of the web and Internet, including web pioneer Sir Tim Berners-Lee and Vint Cerf. We take a look at EMC as positioned the closest, physically, to the core assets of the enterprise.


    In this report, we also spoke with Ted Newman, CTO of the Cloud Infrastructure Group of EMC Consulting (part of EMC Global Services), to find out what is really happening in the enterprise sales and delivery engines.

    We mashed his thoughts up with some big-thinkers in the core of computing to get perspective on the company’s future as a map to enterprise information assets.

    Where Does Data Live?

    EMC’s tagline is “Where Information Lives”, and with the company being a leading provider of storage solutions, this claim is literal indeed.

    Here, we see that data does have a home.

    In this case, in an enclosure, in a data center. This YouTube video shares a 2009 demonstration of EMC’s Symmetrix V-Max. This unit, built in partnership with Intel, can be configured with up to two petabytes of storage and one terabyte of cache.

    Based on our interview with Newman and the company’s focus on creating and extending private clouds, we think EMC recognizes the vast power of extending the enterprise outward and providing services that compete with the ease and speed of Amazon Web Services while also providing enterprise-class controls and performance.

    Where Does Data Dance?

    Sir Tim Berners-Lee sheds some light in this interview about the future of the web and its data.

    Question: “Is your vision of the Semantic Web one in which data is freely available, or are there access rights attached to it?”

    Answer: “A lot of information is already public, so one of the simple things to do in building the new Web of data is to start with that information. And recently, I’ve been working with both the U.K. government and the U.S. government in trying not only to get more information on the Web, but also to make it linked data. But it’s also very important that systems are aware of the social aspects of data. And it’s not just access control, because an authorized user can still use the right data for the wrong purpose. So we need to focus on what are the purposes for accessing different kinds of data, and for that we’ve been looking at accountable systems.

    Accountable systems are aware of the appropriate use of data, and they allow you to make sure that certain kinds of information that you are comfortable sharing with people in a social context, for example, are not able to be accessed and considered by people looking to hire you. For example, I have a GPS trail that I took on vacation. Certainly, I want to give it to my friends and my family, but I don’t necessarily wish to license people I don’t know who are curious about me and my work and let them see where I’ve been. Companies may want to do the same thing. They might say, “We’re going to give you access to certain product information because you’re part of our supply chain and you can use it to fine-tune your manufacturing schedule to meet our demand. However, we do not license you to use it to give to our competition to modify their pricing.”

    This vision is where the opportunity lies: accountable means controls, and shared means cloud. Perhaps a new term is in the making: accountable clouds.
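The accountable-systems idea Berners-Lee describes can be sketched in a few lines of code. Here is a minimal, purely illustrative model (the class and purpose names are ours, not from any real system): each piece of data carries the purposes it is licensed for, and every access states a purpose and is logged.

```python
# A minimal sketch of "accountable" data access: records carry the
# purposes they are licensed for, and every access states a purpose.
# All names here are illustrative, not from any real system.

class AccountableRecord:
    def __init__(self, data, licensed_purposes):
        self.data = data
        self.licensed_purposes = set(licensed_purposes)
        self.audit_log = []  # accountability: every access attempt is recorded

    def access(self, requester, purpose):
        allowed = purpose in self.licensed_purposes
        self.audit_log.append((requester, purpose, allowed))
        if not allowed:
            raise PermissionError(f"{purpose!r} is not a licensed purpose")
        return self.data

# A GPS trail shared for social use, but not for hiring decisions.
trail = AccountableRecord("gps-trail.kml", ["share-with-friends"])
print(trail.access("family-member", "share-with-friends"))  # allowed
```

Note that the audit log is the "accountable" part: even authorized users leave a trail showing the purpose they claimed.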

    Does Your Cloud Compile?

    Vint Cerf, Google's Chief Internet Evangelist, posted Cloud Computing and the Internet to the Google Research blog, further expanding on vocabulary management and cloud computing. We see a definition of cloud computing emerging here that ties it to data portability and capability – a defining moment in the definition of the semantic web.

    “Interestingly, my colleague, Sir Tim Berners-Lee, has been pursuing ideas that may inform the so-called “inter-cloud” problem. His idea of data linking may prove to be a part of the vocabulary needed to interconnect computing clouds. The semantics of data and of the actions one can take on the data, and the vocabulary in which these actions are expressed appear to me to constitute the beginning of an inter-cloud computing language. This seems to me to be an extremely open field in which creative minds everywhere can be free to contribute ideas and to experiment with new concepts. It is a new layer in the Internet architecture and, like the many layers that have been invented before, it is an open opportunity to add functionality to an increasingly global network.”

    All of a sudden, the semantic web seems required to realize the vision of the cloud. And the great thing about it is that the cloud layer, as a first example of the semantic web, shows us we can start in information technology's own backyard.
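Cerf's "inter-cloud vocabulary" can be imagined as something as plain as typed links between resources living in different clouds – the same triple shape Berners-Lee's linked data uses. A toy sketch (the URIs and predicates are our own invention, not any real inter-cloud standard):

```python
# Toy inter-cloud "linked data": (subject, predicate, object) triples,
# where subjects and objects are resources identified by cloud URIs.
# All URIs and predicate names below are invented for illustration.

triples = [
    ("aws://s3/bucket-a/orders.csv", "replicatedTo", "azure://blob/orders.csv"),
    ("aws://s3/bucket-a/orders.csv", "schema", "http://example.org/order-schema"),
    ("azure://blob/orders.csv", "readableBy", "analytics-cluster"),
]

def links_from(subject):
    """Follow all typed links out of one resource."""
    return [(p, o) for s, p, o in triples if s == subject]

for predicate, obj in links_from("aws://s3/bucket-a/orders.csv"):
    print(predicate, "->", obj)
```

The point of the sketch: once the vocabulary (the predicates) is shared, any cloud can answer "what is this data, and where else does it live?" without knowing the other cloud's internals.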

    EMC’s Opportunity

    The enterprise of the future needs to share nicely, store petabytes at-will, and be available on demand.

    Also, to the degree that organizations run sensitive or personalized enterprise software, the platforms it runs on and interacts with will need to demonstrate controls and permissions similar to those inside the enterprise today. This will be a key factor in whether enterprise systems can gracefully consume cloud computing – and what they can adopt it for.

    This is the space open for EMC to provide hardware solutions coupled with software to manage the resources of the cloud, including storage, computing, and network.

    This is also the area of much focus – from monitoring to provisioning. And a winner is not going to be determined overnight.

    A roundup of open questions for the company and the enterprise information industry:

    • VMware and Not – Can EMC win solely with ties to VMware? If open source hypervisors take significant market share, can and will the company be well positioned in those architectures?
    • Oracle with Sun – Will Oracle’s move into hardware, cloud, and storage have an impact on the company’s positioning?
    • S3 Servers in the Enterprise – We may have made this up. It seems clear that S3 and other Amazon Web Services will become the core fabric for IT adopting the cloud. It only makes sense to do the same with abstracting storage in the enterprise. We believe in the power of the cloud to creep in, and we want to see how big storage providers react to this new logical competitor. A key here for EMC and the rest of the IT industry is that Amazon sells storage with no consulting involved and no waiting period. At EMC, global services was responsible for 37% of total revenue in 2009 and is an important part of servicing customers. We wonder, should EMC offer an “S3” for the enterprise that plugs into Ionix and other EMC offerings?
    • Open Protocols Inside, APIs Outside? – We asked recently in a discussion with Hitachi Data Systems whether open protocols, rather than APIs, would be the driver for industry interoperability. Amazon is clearly API-driven, while things closer to the core of the storage tier are protocols, worked on in tandem by many and influenced by those who matter.
    • Helping IT Respond to Now – In a way, EMC and cloud computing meet in the IT budgeting process. We think that “always available” and “highly available” will meet “low latency” and “DR” in a real way in future Amazon-vs.-internal discussions. What we mean is that Amazon’s “scale as you go” is a perfect disruption for the IT department: infrastructure scales, IT budgets don’t. This can be a big headache for IT trying to predict the future, and it is an opportunity for EMC to provide a better solution for enterprise capacity management. Yes, that means paying with a credit card – at least sometimes.
    • Intel / Cisco as Partners – New types of network management and cloud services are evolving in the chipset and network layer. We see the company’s maturity in its global partnerships with these companies to help the channel and drive solutions. At the same time, this century’s IT industry is more of a mosh pit than a sing-along, and it seems like it is going to get very cozy in the area of network and cloud management.
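The "S3 in the enterprise" question above boils down to an object-store interface that hides whose disks sit behind it. A hypothetical sketch of such a facade (the class, backend names, and keys are ours – not an EMC or Amazon API):

```python
# Hypothetical enterprise object-store facade: callers speak one put/get
# interface whether the bytes land on local arrays or a public cloud.
# Backend names and keys are invented for illustration.

class ObjectStore:
    def __init__(self, backend_name):
        self.backend_name = backend_name
        self._objects = {}  # in-memory stand-in for the real backend

    def put(self, key, data):
        self._objects[key] = data
        return f"{self.backend_name}://{key}"

    def get(self, key):
        return self._objects[key]

# The enterprise could route by policy: sensitive data stays internal,
# bulk data goes to the public tier.
internal = ObjectStore("emc-internal")   # hypothetical on-premise tier
public = ObjectStore("s3")               # hypothetical cloud tier

uri = internal.put("hr/payroll.csv", b"...")
print(uri)  # emc-internal://hr/payroll.csv
```

The appeal for IT is exactly the no-consulting, no-waiting property the bullet describes: the same two calls regardless of which tier the policy picks.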

    This EMC rant on YouTube is a funny take on where the company is positioned.

    cisco Wants Data Center

    If EMC plays its cards right, enterprises will choose its tools to “control the shape” of the data and systems in the data center. And, if it evolves quickly enough, the same IT managers will have solutions that keep all of the company’s assets, including public cloud offerings, under one umbrella.

    Is your enterprise moving your data out into the cloud?

    Or is the cloud moving into your company’s data?

    Photo credit: paul_clarke

    Discuss


  • Earth Hour: Is it Time to Virtualize the Electrical Grid?

    Another Earth Hour has passed by this weekend. Electrical systems across the globe were shut down to observe, for an hour, that energy is precious. In this moment, we also acknowledge that, as humanity, we have the power to do better for ourselves. One great thing about Earth Hour is the photos. If you haven’t yet, check out the brilliant photo essay at Boston.com on Earth Hour 2010.

    If you haven’t taken the initiative to shut down your computer yet, read on for a refresher on how better computing resource utilization creates a better world.

    Sponsor

    Earth Hour Translates to Megahertz

    The link between energy and efficiency is clearly evident in the data center. Where electricity is bundled in time units, processing is calculated in megahertz.

    We can see how important the work is at Intel and others (AMD, IBM, Apple) to get higher processing per energy consumption at the core.

    In the data center, applications and processes drive resources, as do flows of traffic from users. In a way, solving the challenge of energy-efficient data centers is where information management and physics collide.

    Higher utilization is the promise of server virtualization. However, as with many things, scaling up is harder than scaling down. The tricky part is the linkages across the network and storage. These configurations are where further opportunity exists to abstract the workload, infrastructure, and energy – to orchestrate a flow of resources that turn off and on as needed. In this way, we wonder, will we find ways to connect energy consumption to workload – and cost?
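The workload-to-energy linkage can be made concrete with rough arithmetic. A back-of-envelope sketch (every wattage, count, and ratio below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope: consolidating lightly used servers onto fewer hosts.
# All figures below are illustrative assumptions, not measurements.

servers_before = 100          # physical servers at low utilization
watts_per_server = 400        # assumed average draw, incl. cooling overhead
consolidation_ratio = 10      # assumed VMs packed per virtualized host

servers_after = servers_before // consolidation_ratio

def kwh_per_year(server_count):
    # watts -> kilowatt-hours over a full year of continuous operation
    return server_count * watts_per_server * 24 * 365 / 1000

saved = kwh_per_year(servers_before) - kwh_per_year(servers_after)
print(f"{saved:,.0f} kWh/year saved")  # 315,360 kWh/year saved
```

Even with these crude inputs, the shape of the argument holds: the energy line scales with physical server count, so consolidation shows up directly on the electric bill.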

    Is Energy Social?

    We see a time in the future where personal computing is a utility and the plug knows who we are. With smart homes, mobile computing, and personal health records, it has to be so.

    One thing that struck a chord with us about Earth Hour is how easy it is to do locally. All you do is turn off the switch.

    In California, there is a very lively discussion about automated, or “smart,” meters from the default electrical company, PG&E. See (some) of the dialog on PG&E’s smart meter site on Facebook.

    On one hand, having computerized meters gives utilities the management needed to observe consumption in real time and optimize the grid.

    On the other hand for many users, this type of oversight needs to be tied to consumer privacy and pricing.

    As shown with Earth Hour, there is an important social component to giving back to the world, not just to shareholders. People question the intentions of a monopoly, and as individuals we seem to get a better win with a simple “turn it off,” where we each get a chance to contribute ourselves.

    For us, Earth Hour represents people rallying for the future.

    Around the world, from Sydney to Singapore, Buenos Aires to Boston, people are doing it because we are a people – not to support the systems.

    Here’s to hoping that someday we can all check in to Earth Hour in a way that turns off our gear, lights, and grids – if only for a moment.

    Location Matters. Huddling Up to Where It’s Warm

    Oregon has a lot of natural resources. From salmon, honey, and redwoods to mobile technology, the state is blossoming like spring.

    One interesting trend is the massive data centers popping up out of the ground (from the likes of Facebook and Google), placed close to energy resources.

    In several small towns in Oregon, modern high-density computing environments are being deployed next to the oldest technology for generating power, the dam.

    These sites show that tariffs and pricing do matter when it comes to energy and how it converts to the bottom line for the leaders in cloud computing.

    Computing Matters: A Few Green Guide Resources to Consider

    Intel: Seeing the Sensitivity of Server Refresh is an Intel internal review of the ROI of pulling in the newest versions of server technology and doing technology refreshes. Density does matter.

    VMware: This energy efficiency analysis walks us through the concepts of energy efficiency by pooling servers as virtual resources. The Gartner quote below shows us how seriously energy is tied to computing costs.

    “Gartner estimates that over the next 5 years, most enterprise data centers will spend as much on energy (power and cooling) as they do on hardware infrastructure.”

    Even if all of the technology were free, energy would still be a very significant expense in running a data center operation.

    VMware also shows that energy savings can be viral – they can expand into other areas of the corporate environment.

    Earth Hour is a Question, Not an Answer

    One of the best aspects of Earth Hour is that we know it won’t work for the real-time web. We aren’t ready to shut down the Internet, or data center. Instead, as technology leaders, we may be able to design systems that react and become more efficient. With time, perhaps the Internet at large will “go dark” for an hour or so per year in celebration.

    For us, Earth Hour was a trigger to consider the impacts of energy and to look at it as a system instead of a free resource.

    We compiled a few questions for enterprise managers considering how to tie a global movement and its questions into the day job:

    • Earth Hour made a 0.9% difference in the electrical grid in some areas. We know virtualization offers more. What number are you using in your enterprise for virtualization energy savings?
    • How long will it take for electrical grids and computing grids to merge? Will it happen in our lifetime?
    • Would your company be able to take your network down for an hour with the flip of a switch? What part of the infrastructure would you be most concerned about?
    • How high a priority is it for your organization to reduce its energy footprint?

    Photo credits: demorganna & xshamx

    Discuss


  • Rulers of the Cloud: Your CEO has a SalesForce.com-Powered TweetDeck, and She’s Following You

    Today, we drop another segment in the Rulers of the Cloud series, focusing on SalesForce.com, the cloud innovator that re-invented the rules of CRM (Customer Relationship Management).

    SalesForce is growing into a big company, recently announcing an annual revenue run rate of over $1 billion. Yet the company is still an agile organization focused on upheaval of the enterprise through cloud services. The newest release brought a major new services focus, SalesForce Chatter. We took a look and found that this product may be the service that brings the company further into the enterprise as a dominant cloud and collaboration vendor.

    Sponsor

    salesforceLadyBug.jpg

    Chatter is an industrial-grade collaboration framework designed to mix following with deal flow – finding the place where communication drives sales. Chatter feels like Twitter for the enterprise, with the advantage that its multi-tenant approach can be hosted and segmented for your organization. The toolkit was recently opened to select developers as part of the company’s release dubbed LadyBug.

    We’ll take a look at the core business and how this product may inspire IT leaders to create real-time tools for the enterprise.

    A Critical Asset: The Business Forecast

    To plot out the company’s future, we want to highlight the past and present briefly. The company competes with big enterprise vendors such as SAP and Oracle for CRM. From day one, SalesForce has had a “No Software” mantra, focused on the power of the cloud platform approach. The lightweight, easy-to-install platform has lots of tools for the management of hardcore customer information, including the scenario shown here.

    A Critical Asset: Developer Tools

    SalesForce’s offerings for the enterprise are evolving. Key updates to the platform continue to roll out, such as those shown for the Spring 2010 LadyBug release.

    In our recent briefing on SalesForce Chatter, the thing that impressed us most is how the development community can use all of the SalesForce platform APIs in concert with the new Chatter services. In this case, the developer of “Chatter Bubbles” has taken the Chatter experience back to the future, with closer parity to Twitter.

    saleforce Chatter bubbles app

    This demonstration piqued our interest, showing how the Chatter experience could easily tug at the “I could build a better Twitter” emotion. Now each enterprise team that deploys Chatter can customize microblogging for the company or sales team on top of the SalesForce collaboration cloud.
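The multi-tenant segmentation that makes "Twitter for the enterprise" viable can be sketched simply: every post lives under a tenant, and one tenant's feed never leaks into another's. This is our own illustration of the pattern, not Chatter's actual API:

```python
# Sketch of tenant-segmented microblogging: posts are partitioned per
# organization, so one company's feed can never surface in another's.
# Class and tenant names are invented for illustration.
from collections import defaultdict

class MicroblogService:
    def __init__(self):
        self._feeds = defaultdict(list)  # tenant -> ordered list of posts

    def post(self, tenant, author, text):
        self._feeds[tenant].append((author, text))

    def feed(self, tenant):
        # the tenant key is the hard boundary: no query spans tenants
        return list(self._feeds[tenant])

svc = MicroblogService()
svc.post("acme", "alice", "Closed the Q2 deal!")
svc.post("globex", "bob", "Demo at 3pm")
print(svc.feed("acme"))  # only acme's posts
```

The design point is that segmentation happens at the data-partition level, not in the client, which is what lets a hosted service serve many companies from one platform.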

    A Critical Asset: Platform as a Service

    We noticed that SalesForce.com has a deep set of partners and relationships with technology companies. For this review, we took a look at the SalesForce and Adobe partnership as an example of where the company has, as with its relationship with Google, created a partnership that brings the organizations’ developers together.

    In the announcement here, we see that Adobe AIR and the Flash platform are being enabled to consume SalesForce objects and to create persistent rich client applications. AIR has seen a lot of exposure in the Twitter application space, with very popular applications living on its client technology.

    Killer Enterprise Apps are Right Here, Right Now

    If we put all those things together, we see a new class of application emerging in the enterprise, literally a Tweetdeck-like, keyword-filter powered command center for each facet of the organization. We think enterprise software is headed there, and with the pieces SalesForce has put together, it could be built.
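A TweetDeck-style command center is, at its core, a set of keyword filters fanning one stream into columns. A minimal sketch of that mechanic (the column names, keywords, and messages are invented for illustration):

```python
# One incoming message stream, fanned out into keyword-filtered columns,
# TweetDeck-style. Columns, keywords, and messages are invented examples.

columns = {
    "deals": ["closed", "pipeline", "quote"],
    "support": ["ticket", "outage", "refund"],
}

def route(message):
    """Return the names of every column whose keywords match the message."""
    text = message.lower()
    return [name for name, words in columns.items()
            if any(w in text for w in words)]

stream = [
    "Ticket #512: customer reports outage in EU region",
    "Pipeline update: Acme quote sent",
]
for msg in stream:
    print(route(msg), msg)
```

In a real enterprise command center, the stream would come from the platform's feed APIs and each user would define their own columns; the filtering step itself stays this simple.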

    TweetDeck

    This Tweetdeck screenshot sparked our imagination of how we could build a rich client for the enterprise.

    In the example shown, we can see the streams flowing together across the enterprise-to-social bridge. In this perfect world, we see @GigaOM as our CIO and @TechCrunch as head of marketing. Demi Moore is our CEO, and she wants to know whether your deal is flowing.

    In this not-so-distant future, we see the threads of decisions, meetings, and key concepts fly by in real time, and simple, user-controlled filtering could give personalized views to any stream.

    The Cloud Opportunity is Still Evolving

    In a way, SalesForce’s biggest challenge is opportunity. The platform works; it has an obvious opportunity to chip away at the CRM market and adjacent markets through the dynamics it has been founded on.

    We wonder how platforms will bind themselves to SalesForce and how the enterprise cloud might evolve. Here are a few questions we’ll be interested in learning more about:

    • Should the company go much further in building a developer community, or should it integrate with the communities of other platforms (Google, Adobe, Microsoft)?
    • As a platform company, will SalesForce.com also be able to build the killer app for Chatter? Is it addictive? From our view, the question isn’t whether Chatter will beat other tools but whether it will become a dominant form of communication. We wonder, could Chatter beat email? From what we’ve heard so far, it has promise, but we’d like to see it.
    • How does the Force.com cloud map to cloud efforts at Amazon, Google, Microsoft, and VMware? Will there emerge a deeper integration between online and offline cloud resources, or a peering of services between SalesForce and Amazon, SalesForce and VMware? What is SalesForce.com’s trajectory with core services like compute, storage, and other things that are getting clouded in the enterprise?
    • Do multi-vendor collaboration platforms work? Should we expect that both Buzz and Chatter will be at our fingertips, or will, in the end, one application win? We see the advantage of being “the message bus,” like Twitter, and enabling smart clients to define the experience, similar to TweetDeck’s relationship with Twitter. In this case, it is the application (TweetDeck) that decided to support multiple social clouds, such as Facebook and Twitter, simultaneously. Perhaps we’ll see the same in enterprise collaboration.
    • Will SalesForce.com update its brand to show off the breadth of the opportunity? As an example, Apple Computer became Apple, Inc. to represent its broader scope. Could SalesForce.com become Force? Does it need to?

    Personalities Matter: Are you Social with Your Boss?

    A lot of organizations are awakening to the enterprise social opportunity, including the small and growing Yammer and Jive. These companies are bringing next-generation communications to the enterprise.

    There seems to be a communication landscape change, where the boundaries of “water cooler” and “board meeting” will meet. It will be interesting to see how these tools promote themselves and how social etiquette will evolve.

    Will our CEO send us an inspirational quote of the day, like so many others do on Twitter?

    demiProgress.jpg

    Or, instead, next time you log on, will there be a direct message: “Come to my office”?

    This brings us back to SalesForce.com. For many in the enterprise, the question isn’t only “What’s happening”, like it is on Twitter, but instead “Did you close?”.

    This is where we think SalesForce’s core premise – building a strong core business from CRM – along with its well-formed APIs gives it a path to meet its ambition of delivering on leadership’s thirst for knowledge.

    We wonder, will SalesForce.com power your CEO’s real time view of your organization?

    Discuss


  • Act Now. Amazon and Microsoft Launch Windows Server License Mobility Pilot

    Early this morning, we received an announcement from Amazon that the company is launching a pilot allowing enterprise EC2 customers to move existing Microsoft Windows Server licenses to Amazon and receive a discount on the new EC2 instance.

    The offer is open until September and is being called a pilot by the companies – a way to test the waters and the pattern for hosting Windows within Amazon.

    Sponsor

    The note from Amazon on Windows Server license mobility prompts immediate action:

    “Dear Amazon EC2 Customer,

    We are excited to announce the immediate availability of the Microsoft Windows Server® License Mobility Pilot, which enables customers with Microsoft Enterprise Agreements (EA) to migrate their existing Windows Server licenses to Amazon EC2. By moving existing licenses to the cloud, you can leverage licenses that you have already purchased to reduce your cost of running Windows On-Demand or Reserved Instances by up to 41%. Microsoft will stop accepting new enrollments for the pilot on September 23, 2010 so it is important to act quickly.

    To participate in this pilot, Microsoft requires that your company meet the following criteria:

    * Your company must be based (or have a legal entity) in the United States
    * Your company must have an existing Microsoft Enterprise Agreement (EA) that is valid for a minimum of 12 months after your entry into the pilot
    * You must already have purchased Software Assurance from Microsoft for your EA Windows Server Enterprise, Datacenter, and Standard licenses
    * You must be an Enterprise customer (Academic and Government institutions are not covered by this pilot)

    Once you have enrolled in the pilot, you will be eligible to run your Windows Server licenses in Amazon EC2 for the next 12 months following your sign-up. You will still be responsible for maintaining the appropriate number of Client Access licenses and External Connector licenses needed to operate your EA Windows Server licenses.

    To learn more about this pilot or sign-up, please visit http://aws.amazon.com/ec2/windows-license-mobility-pilot. We hope that you take advantage of this new pilot!”
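The "up to 41%" figure in the note translates into simple arithmetic. A hypothetical example (the hourly rate below is invented for illustration; only the 41% ceiling comes from the announcement):

```python
# Hypothetical savings from Windows Server license mobility on EC2.
# The hourly rate is an invented example; the 41% ceiling is from the
# announcement ("reduce your cost ... by up to 41%").

hourly_windows_rate = 0.12          # assumed on-demand Windows $/hour
max_discount = 0.41                 # per the pilot announcement
hours_per_year = 24 * 365

full_cost = hourly_windows_rate * hours_per_year
best_case = full_cost * (1 - max_discount)
print(f"${full_cost:,.2f}/yr without the pilot, "
      f"${best_case:,.2f}/yr best case with it")
```

Scale that delta across a rack of always-on Windows instances and the incentive to enroll before the September cutoff becomes obvious.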

    By clicking that link, you are one form away from that hardware in your closet moving to Amazon. Make sure your license administrator is handy: the form asks for the basics and, of course, your company’s agreement numbers with Microsoft.

    microsoft Enterprise Agreement

    Microsoft is Going Big into Cloud. And Amazon, Deep

    For the right license holders, this could be a big opportunity to jump to Amazon’s cloud, and it could cause organizations to look at old hardware with new rigor. If this program grows, we could see whole blocks of infrastructure move to Amazon and new Windows servers materialize there.

    Even if license mobility is a low- or no-revenue event for Microsoft in year one, the company wins by seeing those servers in action. Each one, no matter where it is hosted, represents value for the company and the ecosystem.

    We think Amazon wins with Windows Server license mobility. We can see system admins de-provisioning hardware, adding a step to the script – “fire up new instance at EC2” – and feeling even better about tidying up the data center.

    License mobility seems to be another sign of how the cloud may just be more portable by nature. And faster than FedEx.

    Are your Windows Server licenses ready for EC2?

    Discuss


  • Healthcare Reform is a Cloud: Interview with Matthew Holt & Richard MacManus

    It’s a sunny afternoon in San Francisco, and health care is in the air. I’m sitting at the Peet’s in the SF Ferry Building, eating a vegan ginger cookie and waiting for Matthew Holt, founder of The Health Care Blog and leader of the Health 2.0 conference, to show up for an interview. He arrives wearing shorts and a Health 2.0 t-shirt, and has his dog with him. He tells me he jogged to our location on the bay from Health 2.0 headquarters seven minutes away. It’s a beautiful day – and here in the United States, the health care reform bill just passed.

    ReadWriteWeb’s founder and leader, Richard MacManus, joins us, and we dive into a conversation on the revolution underway in cloud, mobile, and social health tools. By the end of the day, we were left with one question: Will health care reform build a health Internet, or will entrepreneurs do it because they can?

    Sponsor

    A Brief History

    One nice thing about profiling the thoughts of bloggers is that they leave a trail to track them down.

    Here are a few of Holt’s social and technology posts on The HealthCareBlog:

    Here are a few of MacManus’ posts at ReadWriteWeb that track to health care:

    Health Care Reform is like Ice Skating in San Francisco

    A phenomenon I see every December in San Francisco is the setup of the ice-skating rink. Palm trees and skaters. For children and adults alike, it’s a way to dream about a past and a present, whether real or fiction.

    And, yet, while good for humanity, something about it doesn’t quite hold the spirit of the pristine pond and cabin by the lake. We know, even though the ice is icy, generators are pumping along the edges. It’s not quite pristine, and it’s not quite ours.

    That’s how health care reform feels – a victory indeed, but for some reason not a personal win. Somehow, reform feels artificial and hard to grasp. A small part inside of me wants to scream out, “Is there an app for that?”

    Is it One Big Health Cloud?

    To get the conversation started, I asked Holt and MacManus, “What is your take on cloud computing for health care?”

    Holt asked in return, with a grin, “What exactly is the cloud? Is it a thing, or is it a collection of services that are connected together?”

    We discussed this question in practical terms.

    Holt: “Here’s a question: Will Salesforce’s cloud be merged with other organizations’ contacts, and will we have shared controls? Is that the difference between cloud computing and SAAS?”

    We came back to our business: blogging. Blog software like Movable Type (RWW) and WordPress (The Health Care Blog) generates common feeds in simple formats (RSS) that can be used and mashed up in all sorts of ways. But that doesn’t mean that MT and WordPress themselves are hot-swappable, as there are controls, widgets, and other tools optimized in the application layer.

    Perhaps, in this way, EHR (Electronic Health Record) systems can be thought of as blogs, where people are the categories and events are the posts. If so, what is needed for health care information exchange is a basic feed for the key members of the exchange – doctors, patients, pharmacies – that new systems can connect on top of.

    For health care exchange, connecting patients is so much more than connecting infrastructure, platforms or software. Like all good software, it’s about finding the shortcut. We should endeavor to find, build, and monetize the simplest exchange that will drive the future generations of meaningful interoperability.
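The EHR-as-blog analogy suggests health events could ride on something as plain as an RSS-style feed. A toy sketch using Python's standard XML library (the events and fields are invented; a real exchange would of course need consent, identity, and security layers on top):

```python
# Toy "health feed": EHR events rendered as a minimal RSS-style document,
# with people as categories and events as items, per the blog analogy.
# All events and fields below are invented for illustration only.
import xml.etree.ElementTree as ET

events = [
    {"title": "Blood pressure reading", "category": "patient:jane",
     "value": "120/80"},
    {"title": "Prescription filled", "category": "pharmacy:main-st",
     "value": "refill #2"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Health events"
for e in events:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = e["title"]
    ET.SubElement(item, "category").text = e["category"]  # person as category
    ET.SubElement(item, "description").text = e["value"]

feed_xml = ET.tostring(rss, encoding="unicode")
print(feed_xml)
```

The shortcut argument is exactly this: any system that can read a feed could participate in the exchange, the way any feed reader can follow any blog.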

    As we spoke, a light turned on.

    Is Health Part of the Internet of Things?

    MacManus: “Health devices are one of my favorite use cases for the Internet of Things. Let’s take the example of a blood pressure monitor. It’s a portable device that augments your life and well-being, and the promise of connecting to other things and streams is real.”

    Holt: “…and look at these devices closer – we see they are intelligent and self-adjusting, and include feedback loops and reminders. These devices are starting to connect to the Internet and to people.” “And what about the Wii,” he continued. “The Mii is already a virtual me, and the WiiFit is compelling and network-enabled.”

    All of us noted that Nike’s work in this area is inspiring – from ease of use to business model implications, there is something great going on with the Nike + sensor and the company’s broader ambitions.

    We realized that technology has already started a revolution in health – and it’s getting traction.

    nike Plus Shoe iPhone

    MacManus: “I’m fresh from SXSW and have location on my mind. We heard that Foursquare is at work on a next-generation feature for websites, where checking in will connect virtual and real worlds. Also, with innovations like self-tagging StickyBits and Microsoft Tag floating around, real-world augmentation is starting to take form and connect with the Internet world.”

    Holt: “UPC tag scanners, such as mobile phone bar code readers like ScanAvert, connect real-world things to facts about them, such as ingredient and nutrition information.”

    We were reminded of the Quantified Self movement. This is a meetup that has growing momentum in the SF Bay Area and around the country. It is a place where self-reporters get together and share war stories.

    Organized by Gary Wolf and Kevin Kelly, it combines what’s on the cutting edge with our overwhelming fascination with creating a digital diary by logging data about oneself. And, best of all, the meetings focus on “What did you learn about yourself?” – which keeps the meetup about us, not just technologies or business models. We learn that our motivations matter.

    Let’s Run it All on Amazon and Get Scale

    The tools are ready, entrepreneurs are on board, and we all believe that the cloud is here.

    But, what about the data?

    That is a tougher question, and a familiar storyline of permissions, identity, matching, EDI, XML – it’s enough to make you sick considering all of the potential work to align it all.

    In the spirit of the shortcut, the three of us came up with an idea: What if, instead of connecting all of the hospitals, we connected every person in the U.S.? What if each of us had a server in the cloud, tuned to receive and share our own health transactions? This health server on the network would run software to receive files, add streams, and connect devices under our direct control.

    The three of us did a bit of back-of-napkin work and believe that we could outsource the entire thing to Amazon for about US $1 billion yearly. This would cover server fees and data access for every American to have their own server instance optimized for transmitting health information.

    Here’s our math:

    300 million people × a base fee of $30.00 per year × a 0.1 concurrent utilization rate ≈ $900 million per year. The 0.1 factor comes from building a cloud architecture that reduces cost by 10 times, leveraging computing systems that spin up on demand and therefore dramatically reduce physical costs.

    We think this type of math, however crude (and perhaps wrong), is worth thinking about as we spin up the servers for health care reform.
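The napkin math, spelled out in code (the inputs are the conversation's own rough assumptions, not real pricing):

```python
# The back-of-napkin estimate from the conversation, spelled out.
# All inputs are the article's rough assumptions, not real pricing.

population = 300_000_000        # every person in the U.S.
base_fee_per_year = 30.00       # assumed per-person server fee, $/year
concurrent_utilization = 0.1    # assumed fraction active at once, achieved
                                # by spinning servers up only on demand

annual_cost = population * base_fee_per_year * concurrent_utilization
print(f"${annual_cost / 1e9:.1f} billion per year")  # $0.9 billion per year
```

The on-demand assumption is doing all the work: without the 0.1 factor, the same population at $30 each would cost ten times as much.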

    We’re Convinced: People Eat, Sleep, Pirouette, Take Pills

    By the end of our conversation, MacManus, Holt, and I were left with an invigorating idea about the new health care reform: it isn’t a thing, it’s a moment in time.

    Innovations for health care are already springing out of the Web and will thrive on their own merits, so the job of health care reform technology should be to instigate this innovation, stat.

    What would you do if offered a fixed bid contract for $1 billion annually to build a new health cloud for America?

    Who would you bring along to get the work done?

    Photo credit: abhijittembhekar

    Discuss


  • Enterprise Cloud Control: Q&A with Eucalyptus CTO Dr. Rich Wolski

    Eucalyptus is a software layer that forms private cloud patterns in the enterprise. Private clouds are bringing together the best of Linux, Amazon, and VMware in a practical way.

    It could be argued that the cloud itself is a product of the open source spirit. So, with that in mind, we took a closer look at Eucalyptus and sat down with Dr. Rich Wolski, Chief Technology Officer of the Eucalyptus team, to figure out what the opportunity is and why it is gathering the attention of successful open source entrepreneurs, investors, and partners.

    Sponsor

    A Cloud Forest. Where the Cloud and Servers Meet?

    We asked this abstract, but also practical question.

    Eucalyptus offers a solution that models enterprise resources around Amazon’s core cloud services. The result is resources in the enterprise having parity with instances in Amazon’s cloud.

    By modeling the enterprise cloud after EC2, EBS, and S3, and joining a cloud control center into the enterprise, the company introduces a control point for enterprise resources.

    AWS-Eucalyptus2.png

    The resources are bound together in the core model of compute and store, building a network control point for surrounding services.

    Simple, but elegant.

xeonDr. Wolski pointed us to a reference implementation of a cloud-enabled data center, the Intel® Cloud Builder Guide to Cloud Design and Deployment on Intel® Platforms, which features a scenario provided by the Ubuntu cloud.

    We found this scenario a great description of the powerful join happening around open source and Amazon’s AWS (Amazon Web Services).

    eucalyptus Reference Cloud whitepaper graphic

The components of this model, described in the white paper, show how the cloud controller acts as the brains of the system: it has access to each of these core services on the network and choreographs how they connect.

    Here is a little bit more about each, offered by the white paper.

ubuntu Enterprise Cloud“• The Cloud Controller provides the primary interface point for interacting with the cloud. Commands to create or terminate virtual machines are initiated through the API interface at the Cloud Controller.
• The Walrus Storage Service exposes the object store. The object store is used to hold the virtual machine images prior to instantiation and to hold user data.
• The Storage Server hosts the actual bulk storage (a 1.4 TB JBOD in this case). Storage is exposed to the Block Storage Controllers and the Walrus Controller as a set of iSCSI volumes.
• The Cluster Controllers manage a collection of Node Controllers and provide the traffic isolation.
• The Block Storage Controllers (SCs) manage dynamic block devices (e.g., EBS) that VMs can use for persistent storage.
• The Node Controllers (NCs) are the servers in the pools that comprise the compute elements of the cloud.”

It is noted that many of these pieces (e.g., Walrus) are interchangeable in this example with other components. Also noted: Eucalyptus supports numerous hypervisors on the market today.

    So, in this quick list of components we have a real-life definition of cloud computing, in the form of an enterprise service layer.
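As a rough sketch of that choreography (the class and method names here are our own illustration, not actual Eucalyptus code), the cloud controller can be modeled as a registry that accepts run-instance requests and routes them to the node controller with the most headroom:

```python
# Minimal sketch of the component roles listed above. Class and method
# names are our own illustration, not actual Eucalyptus code.

class NodeController:
    """A server in the pool that hosts virtual machine instances."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # free instance slots
        self.instances = []

    def run(self, image_id):
        self.instances.append(image_id)
        self.capacity -= 1
        return f"{self.name}:{image_id}"

class CloudController:
    """Primary interface point: accepts API calls and picks a node."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        self.nodes.append(node)

    def run_instance(self, image_id):
        # Choreography: route the request to the node with most headroom.
        node = max(self.nodes, key=lambda n: n.capacity)
        if node.capacity <= 0:
            raise RuntimeError("no capacity left in the cloud")
        return node.run(image_id)

cloud = CloudController()
cloud.register(NodeController("nc-01", capacity=2))
cloud.register(NodeController("nc-02", capacity=4))
print(cloud.run_instance("emi-1234"))  # → nc-02:emi-1234
```

In the real system, of course, the Cluster Controllers sit between these two layers and handle traffic isolation; the sketch collapses that for brevity.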

    Is the enterprise more complex in reality? You bet. Now, the fun begins.

    If a Tree Falls in the Forest, Does the Forest Know?

Now your server distribution of Ubuntu, et al., can go one-click to cloud. That is interesting, but we know there is more. We found that cloud computing capability and cloud design are two different things, and there are many pieces ripe for market upheaval.

    In a way, Dr. Wolski and team bring a new protagonist into the network, as he told us “A new abstraction to the toolkit”.

    If Eucalyptus works, we’ll see the company continue to grow as a piece of the fabric and bring this cloud object into the enterprise toolkit in a substantial way. IT leaders will start to plan around it, model it, and evolve it into core practices, disaster recovery, and the many scenarios around turning down and bursting resources.

To do all of this, Eucalyptus creates a lens onto distributed resources, a join of all the facets of the cloud that should move in sync. This model is built on the core compute fabric offered by Amazon, isolating the simplest go-to-market pattern for connecting enterprise and public resources. Here, we see a view of this model from the white paper.

    auto Scaling Fractal

    A Pristine Forest of Enterprise Cloud Servers

We have a few questions remaining, so we plan on keeping in touch with Eucalyptus.

    cloud Forrest

• Will Eucalyptus gain enough mass in the private cloud while continuing a cozy relationship with Amazon?
    • Will Eucalyptus bring forward competitors to AWS and/or commoditize Amazon’s services by offering “in parity” providers? Is it possible to compete?
• What impact might this have on VMware’s core offerings? Will it move VMware’s cloud computing offering closer to Amazon’s AWS?
    • How does this impact software that is packaged for a data center and/or cloud?
• Will this model reach critical mass for deploying Amazon’s stack? How does it trend with Amazon’s own deployments of private data centers and cloud monitoring services?

    Also, a bigger question came to mind.

Is Eucalyptus further evidence that AWS was “the shot heard round the world”? Computing may never be the same, now that the base computing solution is in the cloud.

It really feels like competing has forever changed, and as a server, it just doesn’t make sense to be alone when a forest is all around you.

    Open Source – Fastest Way to 10 million downloads

eucalyptus logoEucalyptus seems to have chosen the path of least resistance and brought open source into its corner. Packaged at the core of the Linux distribution and connected to other fabric, it has the opportunity to grow quickly.

    And, since it’s a private cloud, it can also grow for critical tasks. To that end, we see friends of open source, like Intel, Extreme, and Ubuntu ready to go the distance with Eucalyptus in their stacks.

    We asked about traction for the product. Dr. Wolski chuckled a bit when he mentioned the large volume of downloads it has received with company partners. “It’s rewarding being in open source model. It’s in the core of our company and our motivations”.

    Can you win by binding dominant platforms with open source? And, is that itself, open source?

    Photo credits: kubina & lgb06

    Discuss


  • Rulers of the Cloud: Google Becomes the Cloud, Search is a Feature

    verb can bend a nounThe shortest way to describe this is that Google is no longer a verb. It’s becoming a noun. Not just the few clicks to find information, but the information itself and the experience surrounding it.

Today, we get to add Google’s chapter to our “Will One Company Dominate the Cloud” introspective series and take a glimpse of the silent revolution from “index” to “be” that is transforming the company and its products into the default way to engage the Internet.

As fate has it, Google has done us a big favor in preparing for this piece. The company has launched an assault on the enterprise with its moves in Google App Engine, had a stand-off with China, and negotiated with the EU. And that was just a bit of Google news from this week.

    Sponsor

Whereas it’s fairly clear where Amazon and Cisco win as they head towards the cloud (see our recent analysis), with Google it takes a more expansive view. We have to pull the focus out a bit to be able to dial in on the details.

    Acknowledgment: Developers are the Products they Build

    Tim.jpgWe recently had the opportunity to sit down with Tim Bray. He has been a key contributor and thought leader in key areas of interoperability and information design, including his leadership in bringing XML to the world. He recently announced that he’s joining Google and focusing on Android in a transition from Sun.

    Several things struck us in our dialog that we think are key for Google.

First, when Bray described his new job at Google, he talked about what he wanted to do and what he saw that needed to be done. Within three days of being there, he had a sense of ownership of the company’s products and mission. In some organizations, you may never get such a luxury.

    Second, Bray described his opportunity to “roll up his sleeves” and get back in the groove as a developer on a project he feels passion for. He mentioned his desire to take the open APIs of Android and expose some of the information in a more portable way, for example to transfer a call log from one phone to another. A very interesting project, with tangible results. This type of innovation lives on top of all the work the company has done to make the API exist, and to attract individuals who are willing to rethink how it should really work.

We think that open innovation is the most interesting thing about where Google is right now. Its “open” mantra gives the company the ability to see a whole generation into the future of information channel disruption. And, by bringing in “no holds barred” developers like Bray and a legion of others, the company is patiently solving problems that many of us don’t even know exist.

    Lastly, Bray said something that caused us some deep thought.

verb_muscle.jpgHis comment: “when the Drizzle team was acquired by Rackspace, they just kept working on their open source project and things stayed nearly the same.”

What caused us to pause was that open source development, whether Linux or XML, gives the developer, as a person, a way to contribute to the world. And it’s documented. If the Internet were the Bible, leading a key open source initiative is like getting your own chapter in the book. Here, time will be the judge of your actions. Much better than your manager alone.

To some, their project is their baby. It is nice to know that the hard work, intellectual capital, and of course libraries are available to the world after the project is complete. This really speaks to the artist in us; in a way, the paid open source developer is using Google as a canvas.

    If working at Google offers this emotional spark to employees to go further, it will gain entirely new efficiencies in solving the big problems. Developers like to contribute to a version of the greater good…and want fans to witness.

What we learned: acknowledgment matters, and connection to the whole population of people is an amazing vehicle offered through open source. Google: you can become an indie rock star – with the strength of your grep.

    All of the Information on Earth

verb pointsGoogle’s destiny to become the hub of the world’s information is intertwined with history. And this comes with artifacts of policy and posturing. To start with, not everyone agrees that Google should achieve a dominant cloud position. As we’re noticing, stopping it is another matter.

    We’d like to suggest that in 2010, the company is not shy about stepping towards its future and will use its power, technology, and cash to stir it up. Here is our list of organizations in the world that Google has, is, or will be, continually bumping into in its quest for cloud information dominance.

• China (countries own the filters for the people)
• AT&T (service providers own the consumer on the network)
    • Penguin (book publishers own the words in the texts)
    • Visa (financial institutions own the digits in the transactions)
    • Facebook (social networks know the details)
    • Amazon (commerce sites own the decision point)
    • Twitter (owns “what’s happening”)
    • Microsoft (owns the computer applications and files)

    Open can be a Key to Unlock Doors

verb excuse meWe see both practical and strategic reasons that Google has a deep connection with the open source movement. Strategically, being the new optimized layer and removing all historic barriers to information gives the company more leverage. Practically, solutions can be built where information is free.

Reviewing a few examples, such as Google Earth, Android, and even GMail, we see that where there are open protocols and information, disruptive products can be built. Once they are built, Google wields a significant economic advantage in binding the world’s information assets and converting them to eyeballs.

Here, we take a quick look at the information assets that Google is investing in for the global cloud.

• Results: Google has moved away from PageRank to “closest object” in its default results. What this means is that many businesses today show up as widgets in Google’s results with embedded links, maps, and other efficiencies.
• Ads: This is perhaps Google’s best known and most valuable insight and unique asset: who wants to pay for which customer.
    • Realtime index: Google has worked to keep up with Twitter’s realtime firehose
    • Semantic index: The company continues to add more and more microsyntax parsers into its index, giving more controlled tools for publishers
    • GMail: It had to be done. And it is monetized.
    • Documents and files: Google Docs and the Apps Marketplace create a whole new stream of information about an individual. Private, personal, and shared.
    • Mobile transactions: This is an interesting sample of where Google’s strategy to build the Android OS pays off in the cloud. Not only does Google get to connect mobile to the rest of the offerings, but also to be able to dial in on movements, calls, and other critical tasks in our real-time lives.
• Books: Indexing all of them first is an interesting piece of the strategy to break apart historic containers of knowledge. Is the book copyrighted? How about the quote?
    • Browsers: The browser knows a lot. Google’s Chrome moves it from being default search, to being default experience. This was a great example of where access to information “Faster pages” is the simple value proposition for consumers to switch.
    • Filters: Protecting companies, trademarks, and interpreting the legality of free speech. Someone has to do it, if we’re all one people.
• Health transactions: Google has even taken on one of the most sensitive challenges, private health information, and its connections to legacy systems that prefer EDI to JSON.

It’s clear that Google is making progress. What we’ve also learned in this review is that the company’s biggest asset – people – may scale to solve problems in lightweight ways that entire teams and companies haven’t been able to in the past. Perhaps being open, or transparent, gives the company a unique advantage in being prepared for a cloud future.

    Is the cloud where the action is?

    What verb would you be if you were hired at Google?

    Discuss


  • Got Budget? Virtualization Leads to Fewer Meetings

electricalWire.jpgMcKesson is a global health care leader with 26 operating companies. The central IT group had the vision to automate “the last mile” of IT planning: the budget approval process. We think of it as the budget approval dance, and when containing costs, it’s a ritual that can leave scars. The company has evolved to the point of improving the cost of budgeting itself, making it faster and smarter by understanding the assets, services, and service delivery of IT.

Budgeting can be painful because it happens in slow motion. Contrast this with real-time controls such as VMware VMotion and Amazon’s web services console, and we see a great linkup for driving process change through budgeting, and for driving budgeting by cloud and virtualization. We took a look at McKesson’s journey and the service catalog functions of NewScale, an IT services catalog company.

    Sponsor

McKesson: Let’s Start with Fewer Meetings and Fewer 5MB Spreadsheets

    NewScale has customers like McKesson and Charles Schwab and competitors like HP, IBM, Tivoli. The company has been growing its customer base and helping stable-state enterprises to leverage Service Management. And that leads directly into cloud procurement.

We tracked the use case at McKesson, where the company landed at the service desk in the cloud as a means to an end in its journey to build a low-impact budget process.

    lowCostBudgeting.jpg

We see a lot of benefit in this approach: if successful, it means the advantages of going with commodity, pre-approved services dramatically improve the timing and effort of procurement. This is a lever that gives Finance a significant hand in the IT spend. Since cloud and virtualization offerings can be spun up with a service call, the cloud is well positioned to be there as budgeting and approval processes are automated.

    In phase one, the company reported significant progress in moving processes towards the service catalog.

    mckessonCatalog.jpg

    One click vs. Fill Out the Form

In the end, the move towards enterprise standards may be won on simplicity: is it fewer clicks to provision? This means connecting the dots between processes, systems, software, teams, and policy.

    front office graphic

    To EC2, or to EC2 through Official Channels: That is the Question

    IT services management comes into the picture and could make a difference in how the business and technical contributors of organizations are rewarded for moving to a standard platform.

logo-itil.gifThe Information Technology Infrastructure Library (ITIL) is a tool set given to IT managers to wrap standard language around IT service management. It gives the enterprise a common way to manage processes for IT and track the changes involved in building and operating systems.

Services platforms like Amazon and Salesforce can be considered IT disintermediation. We all know an IT leader out there somewhere who is funding a project by credit card out in the cloud. IT, of course, knows this also (especially since they are likely watching your network traffic). One part of the service management offering is making it even easier than Amazon. Carrot vs. stick.

Service catalog management shows its promise when it wraps offerings like Amazon’s EC2 or VMware’s, giving the enterprise a way to get the same service from the web. And, with budget approval and IT approval baked in, the carrot is there.
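A sketch of what “budget approval baked in” could look like: a catalog item that carries a pre-approved spend limit, so requests under the limit provision in one click and only exceptions go to review. The field names and dollar figures here are our own illustration, not NewScale’s actual schema.

```python
# Sketch of a catalog item that carries its budget pre-approval with it,
# so an in-budget request provisions in one click. Field names are
# illustrative, not NewScale's actual schema.
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    monthly_cost: float
    pre_approved_limit: float   # Finance signed off up to this spend

def request(item, quantity, current_spend=0.0):
    total = current_spend + item.monthly_cost * quantity
    if total <= item.pre_approved_limit:
        return "provisioned"              # one click, no meeting
    return "routed to budget review"      # fall back to the old dance

small_vm = CatalogItem("EC2 small (wrapped)", 70.0, pre_approved_limit=500.0)
print(request(small_vm, 3))   # → provisioned
print(request(small_vm, 10))  # → routed to budget review
```

The design choice is the point: the approval policy lives with the catalog entry, not in a meeting.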

All of IT moves towards transparency, with IT processes being measured as processes. In the ITIL community, there is discussion of the next layer of the library moving towards service delivery in ITIL Version 3. It’s easy to see “provision server” becoming fully automated. Soon, all the IT functions below it become invisible. We see this as a future cloud inflection point where, instead of separate “cloud services”, we are all in one.

    Zen Mashup

What has been your experience mashing up ITIL and ITIL Service Delivery in your environment? Do your IT services flow like water?

    Discuss


  • Future: Amazon’s ‘Think Clouds’ are Data Aware

Amazon DudeAt the RSA Keynote a few weeks back, Amazon’s security lead, Steve Riley, participated on a panel with other security leaders of the industry. We were impressed with the openness of all of the participants, and particularly excited by the new concepts coming from Amazon. Riley used a term that is being used within his part of Amazon: the “Think Cloud”.

As we understand it from the discussion on stage, a Think Cloud is a “body of knowledge”: a real-time information base of the Amazon cloud that can be pivoted all the way down to individual threads and data concurrency. It would be an index that acts as a control point, helping define the movement of data through servers and compute tasks. It looks at the journey from the data’s point of view, including data about the environment itself, how to repair itself when damaged, and how to keep data concurrency intact.

    Sponsor

Here’s the RSA cloud security keynote, for a bit of inspiration on the benefits of portable (cloud) computing.

In this 30-minute discussion, there are several notable considerations from the contributors on how the cloud security challenge can be thought of as a big opportunity, and on whether now is the time to debunk the myth that security is not a part of the cloud.

    We picked out a few of Riley’s comments that we believe are leading towards the idea of the Think Cloud and why Amazon may be there first.

    I/O

amazon cloud hits the streetsAmazon knows it is critical to have good inputs and outputs, and it emphasizes ease of use even more than data portability standards themselves.

Riley described a great use case where an unnamed customer used Amazon for compute, another cloud provider for data processing, SalesForce for crunching, and then pushed the results to Facebook. Interconnection is happening, and applications are already “using all the clouds out there”. In this case, all the way down to the consumer.

When we look at this pattern, we see parts that mimic the history of the web in the enterprise: back-end systems moving data around, optimizing, and passing it to a web portal. And the portal demanding “real time” updates for key pieces of data, while relying on batch for others.

We can see that the idea of a Think Cloud may come into this pattern to help set boundaries and checks, so that when a piece of data passes through Amazon it is returned reliably, every time. Perhaps a Think Cloud is a registry that does part of what a smart Enterprise Service Bus does when registering new applications for master data; that is, it keeps track of activity.

In a way, we need to solve the cloud equivalent of the “floating point” problem from the CPUs of generations past.

On the CPU math co-processor, the question was, “Does it know how to do math correctly every time, under all conditions?”

Perhaps the questions in the cloud are “Are all my customers still in the database even though that thread died?” and “Do we have encryption set on every CPU where this user’s information is stored, in memory or on disk?” Solving that problem of interchange is where the concept of the Think Cloud might lead.
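One way to picture such a registry is a digest check on each record as it leaves and re-enters the cloud. This is our own minimal sketch of the idea, not anything Amazon has described:

```python
# Our own minimal sketch of a "data concurrency" registry: record a
# digest when data leaves, compare when it returns. Not an Amazon API.
import hashlib
import json

def digest(record):
    # Canonical JSON keeps the hash stable across serializations
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

registry = {}

def check_out(record_id, record):
    registry[record_id] = digest(record)

def check_in(record_id, record):
    return registry[record_id] == digest(record)

customer = {"id": 42, "name": "Acme", "balance": 100.0}
check_out("cust-42", customer)

# ... the record travels through another provider's compute step ...
assert check_in("cust-42", customer)      # returned intact
customer["balance"] = 0.0                 # a thread died mid-update
assert not check_in("cust-42", customer)  # the registry catches it
```

A real Think Cloud would presumably track far more than a hash (location, encryption state, lineage), but the shape is the same: an index that knows what the data looked like.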

    Many legacy applications won’t make it to the cloud.

    At least, not as-is. Riley comments that “servers are disposable horsepower, they come, they go”.

amazon databases in the cloudIn other words, since applications sit on top of servers, and servers are sinking into the cloud, applications will sink or swim based on how they migrate to this model. The first movers are “the rats” that jumped ship as it started to sink. Follow the rats, or drown.

The tear-down of the server into the n-resource cloud breaks, or sub-optimizes, server-based applications in a fundamental way.

Thinking back, this is very similar to the web services revolution in the enterprise, where just because an application can export its data model doesn’t mean it is optimized for web services or API-level interaction.

We find this almost a reverse trend to server virtualization, which has expanded the physical compute space. Perhaps we are finding that there is some new turf to be claimed where the cloud reaches and virtualization ends.

We like to think of it as “smart service bus” meets “smart application” on infinite resources. Infinite, of course, equaling the credit in your PayPal (or other) form of payment required by either, or both, parties.

As reported by The Register’s Cade Metz, Microsoft’s Steve Ballmer recently pointed out that this is a potential opportunity for Microsoft and Azure: instead of “only” focusing on infrastructure clouds, the company is working towards a new programming model. Ballmer said on March 4, 2010:

    “I think Azure is very different than anything else on the market. I don’t think that anyone else is trying to redefine the programming model”

In our recent post, Is Amazon’s Computing Fabric a New Economy, we noted a series of services outside of core computing that start evolving Amazon quickly down the path of a new development paradigm. Abstracting storage, network, monitoring, and perhaps in the future security, in raw terms gives rise to new opportunities to bind them back together.

RSA 2010 LogoSecurity is the topic for RSA. Compliance is the reason to get it right. If the computing model wants to be secure, it needs to know the assets and their relationships. As reported by Search Cloud Computing, Amazon’s Riley also tipped the audience at RSA that Amazon is weighing encryption-as-a-service offerings. This is another example where, now that Amazon supports new services such as Virtual Private Cloud, it moves one step closer to the knowledge point for all the key assets, including their peers within the corporate network.

Devo Venn DiagramWe find this area, as well as certificate management, ripe for the type of thinking we see at Amazon. The problem to be solved isn’t a better routine, but how to apply it in tandem with moving assets and data whose demand is ever changing.

    Perhaps We Needed to Get to Random, to Get to Secure

We wonder if Amazon’s Think Cloud is something new, and if so, whether it is a path towards solving the collision of the major parties in the network. If it joins network, storage, person, and server resources together, perhaps it is the brains of the next-generation Internet.

    The winner will be the one that makes it simple, because as Devo on Chatroulette is proving, demand is asymmetric, and access control is from the eighties.

    Photo credit: RSA, Devo, Inc.

    Discuss


  • Rulers of the Cloud: Will Amazon’s Computing Fabric Become a New Economy?

    monopoly dice and housesThis is the third entry in our exploratory series “Will One Company Dominate the Cloud“. Today we’re blinking twice after reviewing the innovation engine at Amazon.

The Amazon AWS product is all about services. While others are marketing the cloud with an exclamation point, the cloud leader is focused on the raw building blocks. This includes everything from storage to people. Amazon is learning how to find new ways to optimize connections and monetize them in increments of time.

    Sponsor

    Amazon, the Verb: Motion

When thinking of Amazon as a verb, one word stands out: motion. When Amazon was first introduced as the Internet bookstore, it immediately created a change in the landscape.

    It seemed like the writing was on the wall for brick and mortar retail, and to a large degree, it was. In a mere 15 years, it has disrupted the entire book vertical with an end-to-end digital system. Amazon is now in the position to completely automate the flow of content bits from upstream to downstream.

AWS logoNow let’s look at the AWS services to see if it can do the same for computing. We’ll analyze the services Amazon offers and how they work together, specifically in four areas: computing, storage, networking, and people.

    (Although we didn’t include several areas in this roundup, including database and monitoring, we see them as clear signs of momentum and scope of Amazon’s evolution.)

    Compute

ComputeWe signed up (again, as a new user) for EC2 to refresh ourselves with its offerings and to remind ourselves what it means to be utility-based.

    Amazon defines workload in relationship to the types of instances the company offers in the EC2 solution.

Microsoft logoWindows on EC2 is optimized around bringing a three-tier Windows web environment into the Amazon stack. It supports ASP.Net, AJAX, IIS, and SQL Server. Amazon has also tuned its network and storage offerings to plug nicely into the Windows on EC2 package and offer seamless integration with existing Amazon EC2 features like Amazon Elastic Block Store (EBS), Amazon CloudWatch, Elastic Load Balancing, and Elastic IPs.

IBM LogoIBM WebSphere is also supported on EC2 and hosts a lineup of enterprise computing tools including WebSphere Server, Portal Server, DB2, Tivoli Monitoring, and Data Quality products. IBM mentions that one of the targets is getting developers to use this model for getting development or proof-of-concept projects up and running quickly.

The patterns for firing up a new instance are defined as AMIs (Amazon Machine Images), so the software has been appropriately targeted to the infrastructure instance it will run within. Have extra licenses, or want to retire legacy hardware? IBM has an agreement with Amazon that allows you to migrate your licenses to EC2.

Hadoop Map ReduceAmazon Elastic MapReduce is a service that targets large data streams and optimizes the processing of these data sets. It leverages the Hadoop MapReduce project and provides an example of breaking the computer entirely into services.

The MapReduce service doesn’t just host an application stack; it is automatically configured to use Amazon Simple Storage Service (Amazon S3). This is an example of an open source project (through Apache) being optimized in such a way that it fits on the EC2 stack as a core feature, becoming a peer to the WebSphere and .Net patterns.
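To make the map/shuffle/reduce shape concrete, here is a toy word count run in a single process. Elastic MapReduce distributes this same shape across EC2 instances, with S3 for input and output; the code below is our own illustration, not Amazon’s API.

```python
# A toy word count run in one process to make the map/shuffle/reduce
# shape concrete. Elastic MapReduce distributes this same shape across
# EC2 instances, with S3 for input and output. Our illustration only.
from collections import defaultdict

def map_phase(line):
    # Emit a (word, 1) pair for every word in the input line
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Group the 1s by word, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum each word's list of 1s into a final count
    return {key: sum(values) for key, values in groups.items()}

lines = ["the cloud is the computer", "the computer is elastic"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # → 3
```

The interesting part is what the sketch leaves out: on the real service, the shuffle step moves data between machines, and that is exactly what Amazon’s storage and network services are being paid to do.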

    Storage

    storageThe storage offerings include S3, Elastic Block Storage, and Input/Output.

Amazon S3 (Simple Storage Service) has been out there for several years, serving web-based applications as their simple cloud away from home. Customers have famously stood up their entire data solutions for images and other key storage tasks on Amazon’s S3 service. It’s popular, well known, and evolving to include additional features that enforce data-level integrity, as databases do.

Elastic Block Storage is another storage service offered by Amazon. Instead of being a simple, writable data service in the cloud like S3, it is focused on EC2 instances that need storage as part of their footprint. An EBS volume can be built alongside the EC2 instance, from 1 GB to 1 TB in size, and mounted from that instance. This is designed for applications that expect raw physical storage locally addressed by the server.

    Network

    network imageAmazon offers Elastic Load Balancing. Considering Amazon’s power as an elastic compute provider, this is a critical piece of the puzzle. Here, load can be configured to continually monitor and self heal across a set of hosts, moving the resources towards optimal performance.

    The company also offers Virtual Private Cloud, which enables an enterprise to segment access to a portion of Amazon’s cloud with access control and security enforcements (such as subnet, encrypted VPN).

    Virtual Private Cloud

    People

An amazing thing about all of these services is that Amazon is a consumer-facing company with a deep relationship with consumers.

    Amazon has the ability to learn about us. We share our ideology (books we buy), lifestyle (products we consume), and financial position (credit cards we use). The company has also implemented an important part of identifying consumers by going deeper with services and verifying identity.

    The company implements a two-factor signup process that goes the extra step in granting authorization to a user to change compute resources.

This second factor gives Amazon some assurance that the person really is that person: in addition to the credit card and password (which are network resources), it also calls out to your phone to verify that the person logging in to the network has the phone (a physical resource) at the same time.

    Here is step one: Signup

    Amazon Identity Step 1

    Here is step two: Verify PIN on your mobile phone:

    Amazon Identity Step 2

    And, step three, proceed (you are now free to spin up resources):

    Amazon Identity Step 3

Combining these two things, Amazon is in a position to easily bring its current customer base to a two-factor security solution, providing a service that meets government-level controls. And with two-factor credentials, it’s less likely that automated bots will be deployed in Amazon’s cloud by scripts or hackers.
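The PIN step can be sketched as a one-time code delivered out of band and echoed back. This is our own illustration of the pattern, not Amazon’s implementation:

```python
# Sketch of the PIN-verification step: a one-time code is generated,
# delivered out of band (the automated phone call), and access is
# granted only if the user echoes it back. Illustrative, not Amazon's code.
import secrets

def start_verification():
    # 4-digit one-time PIN, zero-padded
    return f"{secrets.randbelow(10000):04d}"

def verify(expected_pin, entered_pin):
    # Constant-time comparison avoids leaking digits via timing
    return secrets.compare_digest(expected_pin, entered_pin)

pin = start_verification()       # read aloud to the user's phone
assert verify(pin, pin)          # user keys the PIN back in: authorized
wrong = "0000" if pin != "0000" else "1111"
assert not verify(pin, wrong)    # a guess fails
print("second factor verified")
```

The security comes from the delivery channel, not the code itself: the PIN only proves something if it arrives over the phone rather than the same network session.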

Amazon is in a unique position to view the next-generation computing fabric from the consumer sales process. Amazon may be the only company in a position to see how it all fits together, and perhaps it has an even longer view of the future supply chain than its new book competitor, Apple.

    In addition to consumers and developers, Amazon also has the power of people as resources, with the Mechanical Turk marketplace.

    Need a simple task completed and queued for the Internet (of people) to execute on? Get started with one of these sample scripts and draw legions to your command.

    Amazon HIT Template

We find it compelling that Amazon has connected consumers, verified individuals, and tasks to be executed. These pieces are perhaps foundations for a broad appetite for connecting workers with resources and optimizing along the way.

    Banking with Amazon – or – Selling Time Instead of Licenses

    The time value of money is the value of money figuring in a given amount of interest earned over a given amount of time.

    When signing up for AWS features as a new user, we found ourselves looking at pricing options that reminded us of bank products. Earn more by committing to 1, 2, or 3 years. Are Amazon Web Services an economy, and the individual services themselves currency?

    First, let’s look at Microsoft and its revenue. A server is sold, and Microsoft gets a piece from the sale of the OS. Part of this business model is very predictable (the company gets x% of all PC shipments), and part of it is a bit lumpy. Where consumers have choices, they may choose to exercise them: for example, choosing Google Docs as an alternative to Microsoft Office, or bypassing an entire OS update, such as Vista. These choices represent risk to Microsoft’s revenue position.

    Amazon is increasingly using something more predictable to sell its services: time. And the nice thing about time is that it’s always ticking. So instead of waiting for an entire “new PC” or “OS update”, Amazon sells resources tied to contracts. And if this works, the consumer chooses the service longevity and the risk is reduced for Amazon.

    To put this in financial terms, the time value of money principle states: “The method also allows the valuation of a likely stream of income in the future, in such a way that the annual incomes are discounted and then added together, thus providing a lump-sum ‘present value’ of the entire income stream.”
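    The quoted definition reduces to simple arithmetic: discount each year’s income back to today, then sum. The rate and cash flows below are illustrative.

```python
def present_value(annual_incomes, rate):
    """Lump-sum present value of a stream of annual incomes."""
    return sum(cash / (1 + rate) ** year
               for year, cash in enumerate(annual_incomes, start=1))

# Three years of $100 discounted at 5% is worth about $272 today.
pv = present_value([100.0, 100.0, 100.0], rate=0.05)
```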

    What this means is that Amazon will understand the value of its AWS users over the entire life of their contract and can start to model interaction patterns against future events. For example, if Amazon knows you have a 3-year contract for EC2, but you’re 50% more likely to renew it if you also use SimpleDB services, it can trigger events and discounts based on these service connections. Here we see the EC2 reserved instance pricing chart. There is heavy discounting for committing to a term.

    Amazon EC2 Reserved Instances
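    The economics behind that chart can be sketched as a break-even calculation: an upfront fee buys a lower hourly rate, so heavy usage favors the reservation. The rates below are placeholders, not Amazon’s actual prices.

```python
def breakeven_hours(on_demand_rate, reserved_rate, upfront_fee):
    """Hours of usage at which a reserved instance starts saving money."""
    return upfront_fee / (on_demand_rate - reserved_rate)

# With these illustrative rates, the reservation pays off after 5,000 hours,
# i.e. roughly seven months of an always-on server.
hours = breakeven_hours(on_demand_rate=0.10, reserved_rate=0.04, upfront_fee=300.0)
```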

    From what we see, Amazon will be successful in gaining new efficiencies in the pricing of computing resources, as it did with books. We expect the company to successfully squeeze out hard costs that exist in the middle.

    We feel that Amazon is the quiet cloud company that you can “go long” with in terms of its future value. Like the market itself, Amazon is a prime innovator at pricing the future into the terms of the present.

    Will cloud computing re-factor how we look at the technology stack for good, and will “payment” be in the middle? If so, is time the business model?

    Photo credit: wwworks

    Discuss


  • LadyGaga as a Service: Bringing Apple and Google to Commerce 2.0

    gagaBooty.jpgLady Gaga, along with her record company, is evolving the album in the form of software as a service. Considering the content of her hit new video, Telephone, it is fitting that she would use software to tackle the hard problem of getting paid by amazing fans.

    On her path to global dominance, her site, LadyGaga.com, has innovated the next generation of brand management for artists. To do this, she creates a join between Google’s YouTube, Apple’s iTunes, Twitter, and Facebook. Way beyond having a Twitter account, Lady Gaga is hosting an interface party, and you’re invited. She’s a performer who is inventing ways to use multiple platforms to juice the network effects.

    Sponsor

    Commerce 2.0

    Like it a lot? Take a souvenir home from the party for the low introductory price of $1.99 in your iTunes.

    Today, we noticed another cultural icon, VC Fred Wilson posted this question on his blog as to what will emerge as Commerce 2.0.

    avc-logo.jpg

    “So the question is who will the YouTube, Facebook, and Twitter of commerce be? Maybe they exist today and will emerge as large scale web services soon. Or maybe they are still ideas in the minds of entrepreneurs and will be hatched in the coming years.

    It’s an area I am excited about and will be on the lookout for. Clearly I’m not the only one.”

    This is where we think LadyGaga.com’s promotion for Telephone stands out as an example of the new world economy. This world is connected by the best ad engines of Google. And it is directly connected to Apple’s amazing commerce engine. Apple, in the context of digital goods is showing how extremely well it is positioned to be the industry payment engine.

    Embedding YouTube: Get it Now, Anywhere

    We witnessed YouTube transition from the Wild West to a control point for record labels. Now, a lot of the newest official artist videos flow straight to Vevo, the branded label friendly site that runs ads and controls the experience of the brand.

    YouTube is taking advantage of its place as a channeling service for video. In this case, the top, most requested inventory pays for the rest of the service. The higher the demand, the more attention it gets.

    LadyGaga.com, like many sites, uses embeddable YouTube. In addition to pointing to commerce services like iTunes, the video embeds curated links into other properties at Vevo. This gives the site a custom promotion experience while leveraging the YouTube and Vevo distribution channel.

    ladyGagaTelephoneSite.jpg

    iTunes is prominently offered both for buying the video (which is also free on the same page) and for the album. So there is a bet here that people want to own it, or place value in their iTunes library through this connected service.

    This brings the user to a one-moment to buy scenario. Shown here, there is the familiar transition to iTunes from the LadyGaga site.

    oneMomentPlease.jpg

    And, the authorization to ‘Buy Now’.

    iTunesGaga.jpg

    In default mode, iTunes requires a validation step (a second click) to buy the media. The user can easily opt to bypass this step, enabling one-click buying from the web in the future.

    iTunesGagaAuthorize.png

    This feature is available to any Apple affiliate, but we find it particularly effective coming from the artist embedded with the video and other endorsements.

    Tweeting, End to End, Facebook, FTW

    We noticed that with a simple interaction, we can log on to Facebook and Twitter from LadyGaga.com and drop a status post, or “tweet”, into either service. Incredibly, both services are available inside iTunes as well. The real story is that social networks and commerce networks work together, end-to-end, and, for-the-win.

    beyonceVevo.jpgIn a twist of fate, in this version of the digital music future the record labels win big. They do it by being close to both eyeballs (Google) and library (Apple), and bringing out the thing they know: the pop.

    LadyGaga is on a roll

    With the help of Twitter, Facebook, Google, and Apple she will connect to more platforms than ever before, with fewer clicks and passwords.

    We wonder how this will evolve further into other platforms. Will Lady Gaga’s services continue to find new ways to leverage real-time services? We’re starting to envision personal, mobile, location-aware fan applications.

    Will the forces of cloud computing and commerce force Apple and Google to be best-friends-forever in music?

    And, will we ever build a phone that doesn’t disrupt us while on the dance floor?

    And, for god’s sake, damn, Beyoncé has her back.

    Discuss


  • Will Google’s Cloud be a Cozy Nest for Aviary?

    aviaryBirdLede.jpgAviary, the online creative platform, is a visionary tool. When it launched a few years back, the irony of a Flash-based Photoshop competitor was, well, ironic.

    With the launch of Aviary in Google’s App Marketplace, we can say that the company is close to making lightning strike twice, this time by creating a home for creative professionals and their most important assets.

    We want this to work – so we ran it through the paces. Here we got a front-line view on where cloud app meets cloud. We looked forward to counting the pixels that get wasted in the process.

    Sponsor

    Aviary and Google will disrupt Microsoft (the default filesystem for the world), and alongside it Apple and Adobe, with this simple joining of services that allows users to create, share, publish, and present with a simple Web-based client and “always available” files.

    GoogleAviary.png

    It feels like the tide has changed and soon it will be hard to imagine an app not defaulting to file storage in the cloud. In a world of cloud-hosted apps, writing to a PC filesystem just seems wrong and goes against the grain of a mobile workforce. The creative professional’s cloud is going to be in vivid color and available from the local coffee shop.

    As a clear sign of preparation for these applications, Google Docs recently started accepting files of any type.

    If you’re a user, you’ll likely see this headline at the top of your account, like we do.

    uploadAnyFile!.jpg

    Google Supports a Virtual File System for Business Documents

    For images, this is useful for people who use Google’s presentation software. Today, all of your other files are online. Now you can have your images close at hand, so it’s easy to use all your files when you need them, as shown here in this piece by Aviary and Google.

    In this Google Docs upload feature demonstration, we see that Google interprets certain filetypes and offers to convert them into a native Google file format upon uploading. When this happens with an Office-based document, for example a .docx file, Google processes it as needed to make it usable in the Google Docs document editor.

    uploadFiles.jpg
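    The decision made at upload time can be sketched as a MIME-type lookup: recognized Office types get offered for conversion to a native editable format, while everything else is stored as-is. The mapping here is illustrative, not Google’s actual logic.

```python
import mimetypes

# Office types that would be offered for conversion to the native editor.
CONVERTIBLE = {
    "application/msword",  # .doc
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",  # .docx
}

def upload_action(filename):
    """Decide whether an uploaded file is convertible or just stored."""
    mime, _ = mimetypes.guess_type(filename)
    return "convert-to-native-editor" if mime in CONVERTIBLE else "store-as-is"
```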

    Aviary is part of Google Apps Marketplace and part of the Google Docs application.

    AddToGoogleApps.pngComing from the Aviary side of the world, we see this as a natural extension of the work the company has done in joining accounts with Flickr, Facebook, and others. Images need editing, and to be shared many times over. Aviary makes it easy to get started with Google, using a third-party login capability to join accounts.

    When this sharing hits productivity apps like presentations, that’s where we start to see an interesting landscape emerging. Google is playing the role of a peer (e.g. sharing images with multiple editors) and is also moving toward being the “cloud of choice” for consumer document management.

    Below is a Google Apps-powered Google Docs listing after Aviary has been installed. Aviary is now available as an editor, a library has been created for Aviary documents, and when saving a document in a properly configured Aviary-Google account, a list of Aviary docs will show up in the main listing.

    gdocsAviaryMenus.jpg

    A page opens with a view of the image and the option to open the image in Aviary.

    aviaryNinjaEdity.jpg

    Our ninja file is edited and saved…

    ninjaBlood.jpg

    Mime Type 2.0

    In practice, all of this marketplace integration is harder than it might first look.

    Here are a few of the feature and landscape issues that make this experience “not quite” the same as saving a file from Photoshop to Windows.

    • Multiple entry points can be confusing to newcomers. We found that launching from Aviary.com versus launching from Google Docs exposed subtle features and connections that worked differently (in our account, they offered different views of the total image library, and different repositories were set up as the default). In a way, both models need to be supported, but even subtle differences can make the overall solution more error-prone.
    • What is the default for saving a new file? We noticed this especially when moving files from Google and expecting to see them in Aviary. As with setting a specific application to open certain files, when there are dual masters (or apps), editing becomes much more difficult. We would like Google to (at least) recognize more about the file post-Aviary and launch it when we bring in new images, or at least offer to. This begs the interesting question of whether a person’s files should have a default home.
    • On the reverse side, “Save As” to your Google Docs from Aviary may need fine tuning. This is a software and workflow challenge that didn’t exist when there was an implied “master” of all the files. We see this challenge existing also with the desktop experiences and how the apps react to changes from these repositories. In a way, if Google Apps was master for all the docs, it would move the experience forward. But, Windows, Photoshop, and even Aviary, may feel different.
    • Does the likelihood of failure increase due to interdependencies, as well as other factors that make the services less predictable? After a brief error or two in getting Aviary to save to Google rather than save to Aviary, two things are of note. 1: Helping the user know what is happening is going to be important, especially if two (or more) ways are supported. 2: This needs to be as easy as finding “My Documents” on the PC, or adoption will suffer.

    This is Aviary in “Google mode” trying to save the document to a Google Docs account, but not completing the job. (We’re not saying it doesn’t work, just that it doesn’t work sometimes.)

    GAppsSaveError.jpg

    Creative professionals may not use Aviary as their default tool… yet. And Google Docs may not be as fast or be as reliable as a PC. But for those of us who do light image edits and are Google Doc users, this is a major leap forward.

    We see this as an unlocking of the desktop (both machine and software) and love the promise of creating anywhere, storing anywhere, getting paid.

    As this starts to work, it’s clear that Google, Aviary, and cloud applications will continue to encroach on the workflow of things to come.

    Where’s your limit to what you do with Aviary and Google Apps in a Google Cloud?

    Discuss


  • Health Clouds Forming: California’s Health Internet Exchange

    arnold.jpgToday, the California Health and Human Services Agency convened a summit with an expected three hundred people in the interest of a state HIE (Health Information Exchange). The project has been staffed by volunteers and state groups and led by Jonah Frohlich, deputy secretary of California Health and Human Services. The teams formed have already cleared a series of hurdles in preparation for the next big phase: executing the next-generation system and raising an initial seed of $38.8m to move the effort forward.

    At stake is at least $3 billion for doctors and hospitals that qualify by using the HIE as built. This means that doctors using HIE services can bill for more Medi-Cal and Medicare payments expected to become available in coming years from American Recovery and Reinvestment Act funds. Additionally, the services being created will need to support applications that engage consumers as they play a role.

    We see the opportunity for California’s investment to touch many interesting areas of cloud computing, identity management, and mobile – right as it is getting interesting.

    Sponsor

    cchs.jpgLast week, Governor Arnold Schwarzenegger and California Health and Human Services Agency Secretary Kim Belshé named a new nonprofit entity called Cal eConnect to oversee the development of Health Information Exchange services. One of the first tasks at hand is to finish the CA HIE Operational plan and to finalize details in budget, technical, and engagement plans to execute with the recent first grant by ONC for $38.8m.

    Today’s meeting and web conference is part of a process kicked off last July and offers monthly reports towards a state plan to direct funding for building a next generation model of HIE.

    Leading up to this point, the CHHS efforts were led by Jonah Frohlich and drew on many organizations and individuals contributing time to secure the funding for the state. In addition to work-group updates, the meeting included major parts of the organizations impacted the most, with quick-fire discussions from CalPSAB’s Bobbie Holm, Medi-Cal’s Kim Ortiz, and, for Public Health, Linette Scott.

    Internet of Services, or, a Big Private Pub/Sub/Hub/Sub Cloud?

    Shown here are the different core services that exist, how they link through HIE security practices, and how secondary services are built from there. This framework is focused on simplifying the overlay with existing services in a complex environment.

    CAHIETechnicalArch.jpg
    Here are some of the goals for the technical architecture:

    • Provide a trust infrastructure for the electronic exchange of health information across organizations that have no pre-existing data-sharing arrangements
    • Provide a directory infrastructure for providers to locate each other and to determine the format(s) that they mutually support for health information exchanges
    • Assist organizations to match exchanged health information to the correct patient records
    • Address gaps in the NHIN (Nationwide Health Information Network) specifications with respect to achieving HIE for meaningful use

    Core, Context, and You

    Determining what to offer and how to model the core services was a major part of the discussions in the technical work group.

    One challenge the group faces is that identity and assertions for a person live at the edge today, embedded in each application in the form of passwords or tokens. Therefore, the CA HIE wants to be pragmatic in its approach and have the first phases of the architecture mirror today’s situation.

    At the same time, it was noted in the meeting today that the absence of a citizen registry from the core services may be fatal. In this model, authorization, access, and consent are dealt with in the architecture as “best effort” to integrate the patient data, but not “guaranteed”. This is due to the practical challenge that all the endpoints aren’t perfectly aligned in data or practices, and that the HIE itself isn’t a panacea for identity on the Internet.

    At a high level, the question becomes: Is the HIE a service where citizens are registered “agents” joined directly to each message about themselves, or is information about people in the system exchanged only by agents on their behalf? We think the more people are engaged the better, and solving this will lead to a citizen-registry concept in the future.

    Here we show the planned services, networks, registries and directories architecture view.

    CAHIETechArch2.jpg

    A New Network Forms Around Providers and Documents

    The CA HIE project aligns with the work at the federal level with NHIN.

    We got a briefing from Brian Behlendorf on the recent work from the NHIN Connect project. He gave some context of the base thinking going into the models.

    The NHIN standards are, in a way, DNS (who has records for this patient?) and HTTP (transfer this data, securely) for health IT. They build as much as possible upon pre-existing components (SOAP standards, HL7, etc.), using a document-exchange-oriented paradigm that matches the use cases closely. The approach encourages the use of information models for health IT, to make the data as computable as possible, but does not require them. It is being used for needs as diverse as patient-record search, public-health reporting, and disability determinations. The trust model is the least well-developed portion: every node on the network signs an agreement called the DURSA that sets a high bar for patient privacy, consent, auditing, and such, but it is agnostic to who a node is.
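    The “DNS for health IT” analogy can be sketched as a record-locator lookup: given a patient identifier, return the endpoints that hold documents about them. The structure and names here are ours, not the NHIN specification’s.

```python
# Illustrative record-locator registry: patient ID -> document endpoints.
REGISTRY = {
    "patient-042": [
        "https://hospital-a.example/xds",
        "https://clinic-b.example/xds",
    ],
}

def who_has_records(patient_id):
    """Answer the 'who has records for this patient?' question."""
    return REGISTRY.get(patient_id, [])
```

    The transfer step (the “HTTP” half of the analogy) would then fetch documents securely from each endpoint returned.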

    Shown here, CA HIE is both a supporter of NHIN and a source of several suggestions for modifying the scope to include a tighter link to provider registries and communication practices, for the practical reason of ensuring end-to-end provider-to-provider communications.

    This seems like a good balance where the state is more focused on the entities doing business and should have a tighter link with these entities and the business they conduct on HIE.

    CANHINFeedback.jpg

    People: The Hardest Service

    While moving forward with HIE for California, an important question is being raised. Do clouds merely contain information about citizens, or do they contain user-identity services that join and connect individuals?

    This is something that is somewhat hard to grasp, but we feel it will play a role in what the HIE experience is like for the person coming into the system. We’d like to see third-party authorizations and trusted identity federation for citizens evolve, and HIE seems like the right backdrop to get it done. As reported earlier, third-party logon can work, with the right incentives.

    Jokingly, we ask ourselves: will HIE work with RSS for all of my provider feeds?

    And with a more critical eye, we ask the same question. Shouldn’t all the services of HIE be using the best sharing technologies and patterns already in use? If I have a health concern, it seems that it should be as easy as “following” “Mike’s asthma” to get every update from providers in a real-time feed. It should be easy to mash it up with other personal data, devices, and social forces.
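    Following one condition as a feed could look like filtering an RSS stream by category. This sketch uses only the standard library and an invented feed; a real HIE feed would of course carry authenticated, consented data.

```python
import xml.etree.ElementTree as ET

# An invented provider feed, for illustration only.
RSS = """<rss><channel>
  <item><category>asthma</category><title>Peak-flow reading logged</title></item>
  <item><category>dental</category><title>Cleaning scheduled</title></item>
</channel></rss>"""

def follow(feed_xml, condition):
    """Keep only the feed items tagged with the condition being followed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if item.findtext("category") == condition]
```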

    We hope that the architecture gives extra consideration to the person, so they can do just as much, if not more, “mashing” of their own data streams.

    We Worked Extra Hard to Weave in Lady Gaga

    ladyGagaTelephone.jpgCelebrities, even ones not from California, are people with trials and tribulations too. In fact, in her new video released today, “Telephone”, she addresses the issue of too much access and interruption, and the benefits of both glamour and lifestyle.

    She was spotted in California recently. The pictures of her hanging out at a SoCal mall reminded us that she is very much human.

    Considering Gaga, we have added a few practical questions that the health cloud will need to manage as final food for thought.

    • Will it work for traveling citizens and non-citizens alike, who may have different records, languages, and locations?
    • How will it enable care providers who manage children or the elderly and will sign in and out of systems to pick up medications, for example?
    • Does it work with mobile communications and social networks?
    • Can it really offer true privacy for superstars, and everyday citizens with their health information? Is this even the right question?

    We bet Jonah, the work groups, and the new team at Cal eConnect will be working hard to find answers.

    Those answers may lead us to a next evolution: a health cloud that powers the Internet at large.

    Why does Health Information Exchange seem harder than colonizing Mars?

    Discuss


  • Cloud Religion: Do’s, Do Not’s, and a Glimpse of Nirvana

    Samuel JacksonAs the cloud is getting more players and interfaces, best and worst practices are emerging. As the market grows and more companies try to plug in, the cloud may benefit from guiding principles.

    Similar to new technology movements in the past, a natural process is underway to define “what is good”, which, for some in the industry, equates to “what is open”. Like religion itself, open can be defined in ways that are uplifting, or on the other side of the coin, restricting. Also, we learn again, nothing is free.

    Sponsor

    Cloud APIs Must Walk on Water

    If you’ve been part of a software development project, you know that sometimes it’s hard to get the team to all agree on best practices for interface design, database optimization, or even what technology to use. In this analysis, we take a look at some of the movements in cloud computing that start to lay a framework of good as it relates to this technology.

    In this context, API designers for cloud applications need to think ahead and avoid common pitfalls, more than ever before, for several reasons. First, many people will be accessing your one piece of code. Second, in this world of open APIs, it’s easy to compare your code against another’s.

    We notice that data management practices are at the core, and details matter when provisioning in platforms. At the same time that groups are forming to align practices around virtualization and cloud standards, a voice whispers that perhaps this is a free-market problem: people who benefit from solving it will; others will ignore it or compete directly. We enjoyed this post from Joyent on where standards matter in a practical sense.

    In essence, the question raised is: if a vendor makes it easy and bakes in the ability to “just do it”, do you know or care about the standards? This seems to mirror the iPhone development paradigm, which is to expect the vendor SDK or libraries to do the work. The SDK wraps standards implementations in the way best understood by that vendor.

    Do Unto Others as You Would Have Done To You

    commandments parchmentWe know the cloud is big – perhaps it will inevitably be bigger than the Internet itself as it usurps our conception of location, space and time.

    Where power forms, rules, groups, and organizations form as well. In information technology there is always tension between open standards and de facto standards. The former are crafted through agreements, the latter through leadership and market dominance.

    We asked in a prior series “Will a single company become the dominant provider in the cloud?” Today we look at the more practical side of “who is winning now” – who is setting the rules and who is in the trenches.

    Quite a number of the responses to our earlier posts emphasized that “the cloud should be free”, meaning that it should have governing principles to keep any one vendor from owning the landscape.

    Here are a few groups that have emerged to provide some context in how this may come together, both philosophically and practically. In both, the devil is in the details. A good summary of some of the current combining of forces is by the Open Grid Forum. (In our opinion, grids have given way to clouds as the dominant concept in this technology makeover).

    • A resource directory of initiatives is located at the Cloud Standards Wiki, which in itself was formed by a handful of organizations and movements working to align around setting rules and patterns for cloud computing.
    • The Open Cloud Consortium is organized around developing practices for sharing resources and has recently focused on developing a test bed.
    • The DMTF is working at the core definition of virtualization. It recently focused on the 1.1 version of the Open Virtualization Format (OVF) specification, which concentrates on packaging virtualization instances and creating a portable distribution mechanism by defining envelope and collection parameters around the virtual machine and its services. The organization, whose members include IBM, Microsoft, Dell, VMware, XenSource, Sun, and NEC, has submitted 1.1 for consideration as an ANSI and ISO standard.
    • The efforts by the federal government in its data.gov initiative shows that there’s a market that’s starting to see the value of raw government data formats. Soon, we would expect this to be powered by a mesh of computer resources that allow all sorts of jobs – integrated jobs – to work with these data sets. It would comprise an active government cloud.

    Do Not Covet Thy Neighbor’s Network Resource

    When looking for things to avoid, we found a lot of philosophical questions around data ownership, logging and portability. These discussions are alive and well and seem to be being absorbed into vendor solutions and consortiums like the ones mentioned earlier.

    For a more practical view, we turned to a friend of ReadWriteWeb, Thorsten von Eicken, and have summarized his thoughts from a recent post, “Top Cloud API Sins”. Bold items are our (loose) mapping to biblical terms.

    • Do not covet your neighbor’s resources: Listing resources without the details, e.g., a list-servers call that doesn’t return all the details for each server. This makes it very expensive to poll for server state changes.
    • Do not make cast idols: Not returning a resource ID on creation. Some APIs don’t give you a server ID when you request a server.
    • Labor six days, rest on the seventh: Providing a task queue. Several APIs I’ve seen have a task queue that is supposed to provide updates on tasks in progress, e.g., you launch a server and get a handle onto a task descriptor. For us that’s just overhead.
    • Thou shalt not bear false witness: Not returning deleted resources in a “list resource” call. In particular, terminated servers must be returned in a list-servers call for a certain duration, probably at least an hour. Ouch!
    • Shalt not covet thy neighbor (or force me to repaginate): Pagination that goes page-wise instead of using a marker, e.g. where you get page one or the first 100 resources and then issue a query for “page 2” or “from 100 on”. Explain to me how a client can get a consistent resource listing when resources can be added and removed concurrently.
    • Randy Bias added to Thorsten’s post: Treat others as you want to be treated: Your UI MUST use your API, so you understand how to be a consumer of your own API.
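    The pagination sin is easiest to see in code. A marker-based listing, sketched below with an invented resource shape, keys each page off the last ID the client saw, so concurrent adds and removes can’t shift items between pages the way page numbers do.

```python
def list_servers(servers, marker=None, limit=2):
    """Return up to `limit` servers after `marker`, plus the next marker."""
    ids = [s["id"] for s in servers]
    start = ids.index(marker) + 1 if marker in ids else 0
    page = servers[start:start + limit]
    next_marker = page[-1]["id"] if len(page) == limit else None
    return page, next_marker

fleet = [{"id": f"srv-{i}"} for i in range(5)]
page1, marker = list_servers(fleet)
page2, _ = list_servers(fleet, marker=marker)
```

    Even if a server is deleted between the two calls, the second page still starts right after the last ID the client actually saw.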

    We plan on keeping up with this list and seeing how it intersects with implementations and standards that evolve.

    Nirvana: Smells Like Service Orientation

    Thorsten goes on to describe a picture of the future: “Now here’s what I’d really like to see. This is what we’re working on for internal purposes and it’s not easy, which is an event based interface instead of a request-reply based interface… ”

    nirvana Smart services in the cloud, rather than resources alone. This starts to get us closer and closer to an object-oriented network. Maybe that’s what the cloud will be for platforms, infrastructure, and software. The industry has been quick to identify the layers. But perhaps the point is piecing them together in a smart transactional framework.

    A way to engineer highly reliable systems around these architecture challenges may sound familiar to those who monitor existing data centers today.

    Thorsten continues, “We run a good number of machines that do nothing but chew up 100% cpu polling EC2 to detect changes. Fortunately cpu cycles are cheap :-)”.
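    The polling he describes is exactly what an event-based interface removes. Here is a toy in-process version of the idea: the service pushes state changes to subscribers instead of making clients burn CPU asking. A real system would put a message bus behind this; the names are ours.

```python
class ServerEvents:
    """Minimal publish/subscribe hub for server state changes."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a callback taking (server_id, new_state)."""
        self._subscribers.append(callback)

    def emit(self, server_id, new_state):
        """Push a state change to every subscriber."""
        for callback in self._subscribers:
            callback(server_id, new_state)

seen = []
events = ServerEvents()
events.subscribe(lambda sid, state: seen.append((sid, state)))
events.emit("i-123", "running")
```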

    This is the practical intersection between vision and getting it done.

    We find it refreshing to hear this type of dialog in the industry and see a fresh opportunity for defining efficient patterns for this next generation of the cloud infrastructure.

    Perhaps a new concept is forming: “Divine Computing”.

    What buying decisions will be based on the openness of cloud resources and common APIs?

    Photo credit: tsarkasim, Amsterdam Esogna

    Discuss