Author: Mike Kirkwood

  • Network as a Service: Open Source Enables Efficient Cloud Hosting

    To keep up with the growth of cloud computing and virtualization, networks keep evolving. But unlike Twitter’s Trending Topics, IT budgets don’t scale up. In fact, one of the major initiatives in many IT shops is to creatively reduce their own expenses.

    To get to a scalable cloud infrastructure where costs are contained, the networking industry may be approaching its own “Linux” moment. An open source alternative for building networks may disrupt the networking landscape and give network admins an open source network operating system.


    Virtualization: It’s in the Network Too

    Distributing workload across machines, storage, and environments has required networks to be smarter than ever. Now, the network needs to be intelligent enough not only to route traffic, acting as both a bridge and a toll-gate, but also to provision and de-provision all aspects of the environment at a moment’s notice.

    Providers like Rackspace are in the business of using the network to optimize the performance of the entire data center. To be effective in keeping up with dynamic system provisioning, technical teams need access to all tiers of the computing environment to reduce operations overhead.

    Hosting providers such as Amazon Web Services and Rackspace innovate new integration patterns – including ones in the core of the network – to get their job done. Network operating systems that are open, like ExtremeXOS, enable large-scale hosting providers to look deeper into networking gear and start to tune it themselves. Enterprises may follow this trend.

    Servers Don’t Sleep at Night, but Applications and Admins Do

    For a long time, networks have been used to detect peers and devices. Many of us use the nearly ubiquitous DHCP (Dynamic Host Configuration Protocol), the protocol that automatically assigns an IP address to a PC when it plugs into the network.
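    A toy sketch can show the kind of bookkeeping DHCP automates. This is not a real DHCP implementation (there are no DISCOVER/OFFER packets, and the pool and MAC addresses are made up); it only illustrates the lease idea of handing each device on the wire an address:

```python
import ipaddress

class TinyLeasePool:
    """Toy illustration of DHCP-style address assignment (not a real DHCP server)."""

    def __init__(self, network):
        # Hand out host addresses from the given subnet, in order.
        self.free = list(ipaddress.ip_network(network).hosts())
        self.leases = {}  # MAC address -> IP address

    def discover(self, mac):
        # A returning device keeps its existing lease; a new one gets the next free IP.
        if mac not in self.leases:
            self.leases[mac] = self.free.pop(0)
        return str(self.leases[mac])

pool = TinyLeasePool("192.168.1.0/30")
print(pool.discover("aa:bb:cc:dd:ee:01"))  # first host address in the pool
print(pool.discover("aa:bb:cc:dd:ee:01"))  # same device, same lease
```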

    By analogy, a “super DHCP” is needed that can keep up with highly virtualized cloud infrastructure on a per-virtual-instance basis. To do this, engineers look deeper to find efficiencies in how the network talks to the hardware and software behind the virtual machines.

    A good example of the benefit is a resource with peak loads during the day. Due to natural usage, the application’s compute power sits idle during the night. Using monitoring and provisioning tools, the network can de-provision the extra hardware and offer it to another service. This “freeing up” of allocation saves power and money.
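    The decision logic is simple to sketch. The node names and thresholds below are hypothetical; a real system would draw utilization from its monitoring tools rather than a dictionary:

```python
def nodes_to_release(utilization_by_node, idle_threshold=0.10, keep_min=1):
    """Pick nodes whose CPU utilization is below the threshold, keeping a minimum
    number online so the service stays available. Returns nodes to de-provision."""
    idle = sorted(n for n, u in utilization_by_node.items() if u < idle_threshold)
    busy_count = len(utilization_by_node) - len(idle)
    # Never release so many nodes that we drop below keep_min.
    can_release = max(0, len(utilization_by_node) - max(busy_count, keep_min))
    return idle[:can_release]

# Overnight readings: most of the pool is idle.
overnight = {"web1": 0.62, "web2": 0.04, "web3": 0.03, "web4": 0.07}
print(nodes_to_release(overnight))  # -> ['web2', 'web3', 'web4']
```

Scaling back up in the morning is the mirror image: the same monitoring loop re-provisions capacity as load crosses the threshold in the other direction.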

    This is a simple example of how virtual data center solutions are being innovated in the industry to further timeshare computing resources. The network’s ability to help manage scale down to the moment is enabled by its reach to everything over IP (Internet Protocol).

    The Open Network Wins, Developers Rule

    Extreme Networks is betting that IT leaders who have become very familiar with Linux and open source hypervisors like Xen will want to tweak the network. For the data center manager who wants to go into the core network engines and innovate, there is a need for APIs, SDKs, and open access libraries.

    Extreme’s openness takes the form of web services, many offered as XML or CLI scripting, that let administrators integrate tools into the core of the network via XML and configure edge ports for security and VoIP access as part of dynamic provisioning.
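    As a rough sketch of what XML-driven port provisioning could look like (the element names and values here are invented for illustration and are not Extreme’s actual schema), a script might build a request like this:

```python
import xml.etree.ElementTree as ET

def build_port_request(port, vlan, voice=False):
    """Build a hypothetical XML provisioning request for an edge port."""
    req = ET.Element("configure-port", number=str(port))
    ET.SubElement(req, "vlan").text = str(vlan)
    if voice:
        # Flag the port for VoIP treatment (QoS, voice VLAN, and so on).
        ET.SubElement(req, "service").text = "voip"
    return ET.tostring(req, encoding="unicode")

# A phone plugs into port 48: put it on VLAN 200 with voice handling.
print(build_port_request(48, vlan=200, voice=True))
```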

    The company offers a code workbench of its own for downloading widgets to plug into the network. Designed for the open source developer, it shares the familiar pattern that prevails in the open source community for application frameworks and operating system code sharing.

    Shown in the diagram, Extreme’s network offers real-time provisioning of code widgets in the network.

    extreme os widgets

    Play Nice: The Network’s Worst Enemy May Be Success

    Will the network evolve to see an open source player that drives change in pricing and value?

    In the rush to enable new efficiencies, we wonder if this is an Apple a-ha moment in the making. The question seems to be whether the giants in the space can walk the fine line of delivering a better end-to-end experience for managing the environment, and whether vendors do it best. If we follow the Apple example of industry success, an end-to-end play for the network may be in the cards.

    Last month, Juniper announced it has created a new business group and a commitment to a Junos ecosystem.

    Juniper has made a big move toward open source innovation in its recent re-branding, and at least one analyst, John Furrier of SiliconAngle, seems to be suggesting that Juniper is judo’ing Cisco, much as Google did with “open” against Microsoft. That probably doesn’t feel the least bit nice to the market leader, especially when Cisco is priming its engines to change the Internet forever.

    Cisco announced the opening of IOS in 2007 as part of an effort to compartmentalize IOS within its overall movement toward a more software-based organization. With its complex series of network enhancements and feature sets, it will be interesting to see how Cisco views “open” vs. “customizable” and where control lives for network management and up-time.

    When visiting the Cisco IOS website today, we see the standard license and no clear mention of open source licensing. Cisco strikes a balance between open and controlled in its approach to defining what an open network is and where networks will be encapsulated as services.

    We wonder whether Cisco will deliver the capabilities to pull more traffic into its end-to-end range while open networking APIs rise as part of the network service stack. In this market, it’s likely both. At the very least, open networking has a role in determining the fate of the network and where territories are being defined.

    The Cloud is a Network of Services

    The cloud is defining a world where service orientation rules both the software and physical layers. And it is breaking the rules of workload distribution, where network topologies are changing. The requirements for connecting layer 2 and layer 3 networks, and for the IT leaders building solutions for mass scaling (enterprises or service providers), are evolving, driven by the need to be efficient at the workload level.

    Extreme Networks Technical Brief, Dynamic Network Virtualization Overview, explains the value of plug and play network components in today’s topology.

    “By leveraging Extreme Networks® ExtremeXOS®, a modular, edge-to-core operating system, and our extensibility framework including Universal Port Scripting and an XML interface, Extreme Networks is able to tightly integrate the switching network with the virtualization environment to create a virtualization-aware network fabric that automates the network-level virtualization required in next generation data center and cloud computing environments. This unique functionality enables Extreme Networks to provide seamless support of virtualization capabilities across the various hypervisor platforms, including Citrix ZEN, Microsoft and VMware. The highly integrated solution allows the Extreme Networks solutions to trigger responses to virtualization moves as they happen in the network by virtue of a tightly integrated XML-based network management framework.”

    Extreme, and now Juniper, are moving in the direction of offering IT administrators control points in networks and protocols to optimize, and that openness expands the market.

    It looks promising to give administrators leverage in buying services without vendor lock-in, or waiting on feature releases from the vendor. And it mirrors the open source movement in bringing communities together to solve problems and build compatible services.

    Open APIs may define the cloud’s network of the future for large hosting providers. We wonder if they will do the same for the enterprise.

    Photo credit: opensourceway



  • Cisco in the Core: Preparing for the Next Generation Internet

    Today, Cisco announced the CRS-3, a game-changing system for managing the network core. But for CEO John Chambers, the news is about market transition forces and being ready for the next generation of the Internet.

    The company invested $1.6 billion in research and development on the CRS-3 to be ready for the next phase of market growth that merges the video, cloud, and mobile trajectories. When asked, Chambers said that when his team looks out three to five years, it sees network growth of 300 to 500 percent. Cisco is betting that another revolution is on the way for consumer access and enterprise productivity.


    Getting a Handle on Scale

    Cisco’s fabric offers network speeds never seen before.

    According to its own estimates, the CRS-3 could offer enough bandwidth for:

    • Every man, woman, and child in China to make a simultaneous video call.
    • All movies ever made to be downloaded in 4 minutes (if you had enough disk to store them).
    • A 1 Gb link to every household in San Francisco.

    Virtualization and Cloud: Moving from Plumber to Business Architecture

    The network has many touch points. Chambers said that this has been moving Cisco from being a technology partner to a trusted business partner for nearly all its enterprise and service provider accounts. He views this as a tipping point in how Cisco engages customers and innovates.

    To that end, with a major market transformation underway in the cloud, Cisco has positioned its network, including the CRS-3, to offer tight linkage between the data center and virtualized services.

    Chambers mentioned, “It’s all about the cloud; the CRS-3 family talks directly to the UCS in the data center.” To that end, it’s nice to consider the end-to-end network being prepared for the connections in the data center, especially for physically distributed environments that benefit from fast links between them.

    Chambers continued: “We kept our partners VMware and EMC in mind in this solution, to be ready to fulfill our vision in the data center”.

    Here is a summary diagram of how the CRS-3 connects data centers.

    crs-3-cloud-innovation.jpg

    Service Providers: Critical Network Backbone

    Keith Cambron, President and CEO of AT&T Labs, added: “AT&T was the first user of the CRS-1, with its 40 Gb interfaces, and we have been using them to manage our network growth. We are testing the 100 Gb interface in the labs and in real production environments.”

    Chambers mentioned that Cisco’s goal is to have long-term partners and to never compete with service providers. With a company like AT&T, Cisco’s product goals are to help it be ready for where things are headed, to be there when it’s needed. Cisco does not want to be the bottleneck for the Internet.

    He continued: “Service providers are our partners. If our goal is to bring this technology to everyone in the world, we must work in a tight fashion and follow the market transitions with them.”

    Q&A

    One of the best parts of this dialog was the question-and-answer session; a few excerpts below show the depth of thinking Cisco is bringing to the intelligent network.

    Q: How does this impact the mobile data flow?
    A: Chambers said, “As a consumer, I want any video any time. To share it on any device in the living room and to bring it with me when I’m on the go. The network has an important role in enabling that future.”

    Cambron spoke to AT&T’s goals for this technology: “It provides a single network design around the globe. This is particularly important for customers who are using private networks and deploying mobile applications. A common network design that is highly video-centric is central to our business.”

    Q: Is Cisco an open or a closed company? Will core innovations from the CRS-3 be open source?
    A: Chambers responded “Interoperability is one thing we don’t debate in Cisco. We believe we need to bring together all of these protocols into one network. We will of course be an open architecture.”

    Q: What do you think about Google’s service provider announcement?
    A: Chambers responded, “Google is a great company. We love anyone who adds load to networks. We think the question here is how do we find the ‘and’ here and find ways to build load and also build great networks with the right partners.”

    Q: Why is Cisco building in silicon?
    A: Chambers was excited to talk about how important silicon is to the company’s products. “Cisco’s investments in ASICs have been a key part of many of the core products. The reason that Cisco does our own is that silicon ties all of the key components together. Cisco had to change its working style to have a collaborative team build this next generation of silicon.”

    Cisco is in a unique position to see the future of bandwidth better than anyone.

    Will Cisco be rewarded for avoiding future network bottlenecks and propelling the network forward with the CRS-3?



  • Do Open Protocols Bring Storage Costs Down?

    The move to virtualization leaves no stone unturned. It touched the public network via EC2 (and now a host of hosts), formed the cloud, and fused a new generation of the Internet. Service orientation also hits the data center, and that means switches, servers, and disk.

    At the core of the virtualization movement is freedom from the physical environment: optimize hardware performance and set the workload free. In the process, the promise of cost savings has set off a storm of re-factoring in the data center.

    This is the first in a series of posts taking a look at areas of the data center and how an openness strategy becomes a driver for winning customers by bringing costs down.


    We took a look at the storage landscape from the eyes of Hitachi Data Systems, “HDS”.

    We spoke with Hu Yoshida, CTO of HDS. He gave us a practical overview of how enterprise costs are being moved down, with a focus on reducing operational costs.

    One thing he mentioned was that Hitachi’s HDS division was able to grow its storage business in this tough climate, which is impressive considering it is an industry that follows economic spending as a whole.

    Yoshida attributes part of this to HDS’s decision to deliberately disrupt its own “closed” box solution, where storage and management are sold together. This allows IT shops to have more choice and to decouple vendors. He said that this was a big decision for the company, as it opened up more competition to a core business.

    Protocol vs. API

    Yoshida said that the team at HDS decided it was inevitable for this protocol-level standardization to exist, and his team felt that HDS needed to be a leader in this opportunity. He cited a customer that uses an HDS head as a management function with NetApp storage behind it, a pattern HDS supports that several years ago would have required a partnership rather than protocol-level support.

    Although in this scenario HDS didn’t win “all tiers” of this storage solution, it was able to be a fabric and join a customer that “loves NetApp” and loves HDS too.

    Mr. Yoshida said that his company decided to fully embrace protocol-level integration with the surrounding systems, instead of releasing only APIs, as a means to allow more competition – and cooperation – in the ecosystem through technology rather than selective partnerships.

    Considering the Tiers

    An area of storage that is ripe for cost savings is supporting different types of solutions, e.g. production vs. development, and different classes of storage based on the application.
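    A tiering policy of the kind discussed here can be sketched in a few lines. The tier names, thresholds, and application profile fields below are hypothetical, not HDS terminology:

```python
def storage_tier(app):
    """Map a hypothetical application profile to a storage class."""
    if app["environment"] != "production":
        return "tier3-sata"          # dev/test lands on cheap disk
    if app["iops_required"] > 10_000:
        return "tier1-ssd"           # latency-sensitive production workloads
    return "tier2-fc"                # everything else in production

print(storage_tier({"environment": "development", "iops_required": 50_000}))  # tier3-sata
print(storage_tier({"environment": "production", "iops_required": 50_000}))   # tier1-ssd
```

The point of such a policy is that the expensive tier is reserved for the workloads that actually need it, which is where the operational savings come from.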

    In his blog post, New Considerations for Tiered Storage, Yoshida examines the reduction of costs.

    Looking under the covers, we see that there are a lot of questions to ask in the details of these strategies: marketing matters in how solutions are perceived, different types of hardware (for example, Seagate vs. HDS) make a difference for buyers, and to be a leader it is key to have answers across the industry ecosystem.

    When we look at the decision as being about moving the cost needle down for operations management instead of hardware savings, it becomes clear that playing nicely pays. HDS is a company that plays on both sides of the storage spectrum (management layer and disk), and its partnerships include relationships with HP (as an OEM) and with companies like Cisco and Brocade as go-to-market partners. It is tempting to “hardwire” solutions together, but it is a bigger win when these are instead loosely coupled and partner-ready.

    Looking at it from the angle of cost reduction, open standards give us a pivot point for considering the natural tension virtualization offers: a promise that binds vendors together to optimize their solutions for the plug-and-play data center.

    Does an open protocol powered data center reduce total costs? IBM, HP, Cisco, NetApp, Oracle…Hitachi thinks so, do you?



  • Reinventing the Handshake: Polite Servers and Smart Networks Lead to Active Security

    If there were a real-time tag cloud for the RSA conference this year, three words would be in big bold letters: Security (of course), Cloud, and Virtualization. Paul Congdon, from HP’s ProCurve Networking group, gave us a view into the not-so-distant future where servers, like good house guests, knock before entering. In this case, it’s a link they request, and to get it they will properly announce themselves and their intentions, allowing the host to prepare to accommodate them.

    This capability is a linchpin in removing the process bottleneck in provisioning new services in the data center. For most organizations, the network is manually configured. To keep up with the movement of the provisioning of virtual machines, the network needs to enable “plug and play”.


    Complexity Means Controls

    The network is in a unique position as both a “pipe” and a “control”: it needs to know which communications go where, and it plays the role of traffic cop.

    This means opening ports between servers, controlling traffic and setting monitors to make sure traffic is optimized. When things change, configuration does as well, especially when a new service is requested. Today, this is controlled by human processes and controls to keep the network up to date with the applications and servers that host them.

    congdonNetworkSwitches.jpg

    In the future, there is an opportunity to move toward auto-configuration, or even smarter handshakes.

    In essence, to oversee this process, a directory of resources, or inventory, would exist that allows the network to “know” what is in place within it. This is a new control point for the data center, and a resource to the network.

    congdonSmartNetwork.jpg

    Solutions in Protocols

    802.1X is a technology that has been used for Wi-Fi connections. One reason it was useful in that context is that wireless links are expected to drop and reconnect frequently; the same opportunity is now seen for the wired physical link as well.

    802.1RevisionHistoryFull.jpg

    The potential upgrades to 802.1 would enable a richer dialog between the server and the network as the server starts up its networking process. This would allow the server to announce itself and its requirements (e.g. encryption) and allow the network to respond appropriately (e.g. by setting an encryption key). This process can become a big win for configuration management: the server can come up on the network and be provisioned according to policy.

    802.1ServerFramework.jpg
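    The shape of that richer dialog can be sketched as a simple announce-and-respond exchange. The capability names and policy below are invented for illustration; this is not the actual 802.1X frame format:

```python
# Network-side policy: what the switch can grant (hypothetical capabilities).
SWITCH_POLICY = {"encryption": "macsec", "vlan": 100, "qos": "best-effort"}

def announce(server_requirements):
    """Server announces its needs; the 'network' answers with grants or refusals."""
    response = {}
    for need in server_requirements:
        offered = SWITCH_POLICY.get(need)
        response[need] = offered if offered is not None else "unsupported"
    return response

# A server coming up asks for an encrypted link on a specific VLAN.
print(announce({"encryption": "macsec", "vlan": 100}))
# -> {'encryption': 'macsec', 'vlan': 100}
```

The win is that the grant-or-refuse step replaces a manual ticket to the network team: the policy answers at link-up time.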

    All of this reminds us of the benefits enjoyed by a company like Apple. Having the unique opportunity to control the model from end to end means having the ability to make better tools. We wonder if natural evolution will give multi-vendor shops a solution for all of their IT assets.

    What will it take to get to a model-driven data center?

    Photo credit: orinrobertjohn



  • Feeding Amazon’s Cloud of Workers with Virtual Peas

    Crowdsourcing may be stretching the geo-political landscape much the same way that cloud computing is redefining the data center. In short, nothing is safe and yet everything has the promise of being better.

    Companies, industries, and economies are forming in the cloud by taking on these tasks and participating in the ecosystem across the world. For the last several years, Amazon has been leading the way in defining this market with Mechanical Turk. Today, when we checked, there were 75,000 HITs (Human Intelligence Tasks) waiting to be completed.


    Last night’s crowdsourcing panel at the San Francisco Commonwealth Club included several thought leaders and was moderated by Brad Stone of the New York Times. It was filled with young people – and a few of receding years – and it explored the phenomenon and the marketplace being built by crowdsourcing.

    There are a number of twists and turns already being uncovered in the world of crowdsourcing. Below is a brief profile of the three companies the speakers on the panel represent.

    Working On The Virtual Farm

    Lukas Biewald, CEO of CrowdFlower, shared his company’s success in being a platform for crowdsourcing. Its tools live on top of Mechanical Turk and help optimize the platform for quality of delivery.

    CrowdFlower guides companies through the process of designing questions and setting targets that map to the goals of the projects, and offers reports and manageability for clients like O’Reilly, Microsoft, and others that have digital assets that need batch processing by humans.

    One of the interesting observations from Biewald is that the company is now a large buyer of Zynga’s virtual currency for FarmVille. His audience values virtual currency enough to trade it directly for tasks. A bit of brain power for you; a bit of brain candy for me.

    He offered the crowd a brain twister that might befuddle even the deluxe version of TurboTax:

    “A person completes an intelligence task assigned to them, which is a job that pays them in virtual currency. They then buy seeds from Farmville created by another person who created them in Farmville and is earning virtual currency. Who pays what taxes for which goods?” He also mentioned that his company is not working on solving that problem.

    CrowdFlower gives us a glimpse of a new world of the real-time workforce and how it moves up the stack. In a way, this seems reminiscent of the rise of outsourcing. Originally it was call centers; then it moved up the IT stack to provide services all the way into business strategy and complete outsourcing.

    Activating a Globe: Moving from the Farm to the Machine

    Leila Janah, the founder of Samasource, was also on the panel. Her stories focused on how crowdsourcing can fight global poverty by giving people around the world an opportunity to earn money by completing knowledge tasks.

    Even though many Turk jobs pay low wages, the teams that she has set up have been able (with a little help) to get pods of people working in India, Africa, Haiti, and other places.

    Here’s an example mission focused on spinning up a group of translators to help efforts in Haiti sort out communications (in this case, translating text messages).

    Mission 4636 from CrowdFlower on Vimeo.

    Janah reports that these individuals aspire to participate in the global economy and to move ahead as knowledge stakeholders.

    She also noted that their approach is to set up high-performing teams that work together to manage crowdsourced tasks. It is exciting to think of these knowledge outposts forming around the world.

    Samasource has teamed up with CrowdFlower to offer a program that shares work with individuals in Africa to help fight global poverty. They have released an iPhone app called Give Work that allows a person to give work.

    Avoiding the Crowd SweatShop, or Cesar Chavez for the Cloud

    Lilly Irani, a Ph.D. candidate at the University of California, Irvine, has been researching the phenomenon of the new distributed workforce. She brought a cautious point of view to the panel, reminding the audience that crowdsourcing has created a massive shift in power toward the employer.

    She asked us to consider some of the practical challenges in equitable pay and in enforcing contracts for payment with providers in a distributed environment.

    Irani reminds us that over the decades we’ve seen waves of globalization, and globalization has massive impacts on the worker-employer balance. She has contributed to a project called Turkopticon, an overlay on top of Amazon’s Mechanical Turk that allows workers to rate employers, which is currently not a feature of the system.

    This presentation, “http://www.slideshare.net/lirani/agency-and-exploitation-in-amazon-mechanical-turk”, co-written by Six Silberman and Irani, gives a good overview of Amazon’s Mechanical Turk. It also explains several risks to the population of workers, including some basic protections you may take for granted in an in-person job.

    Her work urges us to look at the past and to consider the balance of power in crowdsourcing. Will these knowledge workers be able to take advantage of the benefits of the historical labor movement?

    unionizingBrainWorker.png

    Will this movement bring the world closer together? What would you be willing to do for Farmville peas?



  • Reporting From RSA 2010: Identity, Health Care, and a Higher Realm of Credentials

    This week we are reporting from RSA, the security conference in San Francisco. We’ve seen hackers, threats, and industry leaders roaming these halls – and among them we found leaders of the identity community, people focused on creating a safe Internet for all individuals.

    This includes folks in the Identity Commons and OASIS workgroups, and in the one-year-old Kantara Initiative. The latter was announced to the public at RSA 2009, and this year it hosted an all-day workshop that brought cloud computing to the forefront of the dialog.


    Diverse Community of Interests Coming Together

    Today’s all-day workshop offered by the Kantara Initiative focused almost exclusively on identity services and included viewpoints from several perspectives: enterprises (CA, Ping Identity, Aetna, Oracle, HP), service providers (NTT), consumer applications (PayPal, Google), and government agencies (NIH).

    The room was packed – standing room only. After the kickoff we had a chance to ask Trent Adams, chair of the Kantara leadership council, to share his thoughts about identity, cloud computing and year one of the new organization.

    He talked about the potential big win that exists for the organization because of its involvement in preparing standards for federal government approval. These are historic times, he said, and embracing openness at the federal level was an opportunity the organization decided was valuable for the community. We’re keeping our ears open to learn more about how identity services will be enabled and approved through the government.

    Landscape Change: Cloud Computing Invigorates Identity Efforts

    One thing that is clear is that things get more complicated when combining identity services with cloud computing. We were reminded that many of the technologies that have been developed, including OpenID and SAML, were designed around the same scenarios of sharing across domains. Identity can be solved in a multi-vendor, multi-protocol, multi-infrastructure world.

    Matthew Gardiner of CA summed up the importance of the link between identity solutions and cloud computing in his talk, “Identity as Security Glue for the Cloud”:

    “I want to say the phrase cloud security in the first few moments of my talk because you’ll be hearing it a thousand times before the end of the conference. Cloud security can be viewed as a Rubik’s cube of security implications when combining identity services with the vectors of IaaS, PaaS, and SaaS across private, public, and hybrid clouds.”

    The West Coast Perspective on Health Care

    RSA and HIMSS fall in the same week this year. While nearly all of the healthcare IT leadership headed to Atlanta, several companies also came to San Francisco.

    Yesterday, MEDecision presented their solution and connections to different Web applications and health care records and systems, and gave a very tangible set of scenarios showing how cloud computing and identity meet around sharing information about a person who is a patient.

    At the same time on the East Coast, MEDecision was also at HIMSS, demonstrating open exchange of health information in an HIE (health information exchange) product offering that helps connect services across providers in order to aggregate a view of an individual. The company offers software and services to insurers to negotiate their cloud-based workflow, including moving private data across pharmacies, doctors, insurers, and the entire health care landscape.

    No Passwords in the Cloud

    Patrick Harding of Ping Identity spoke about what his company has learned about cloud computing in his session, “How the Cloud is Changing Federated Identity Requirements”. A few of his observations:

    • Software is no longer build vs. buy. It now includes subscribe, which by definition is a shorter-term relationship.
    • Cloud computing is an evolution of architecture. It arrives after Web services, which evolved from the Web, client-server, and the mainframe.
    • The identity layer is more complex than ever for the simple reason that there are more apps per user than ever before.
    • Services are becoming any-to-any, where internal (employee) and external (customer) classifications don’t matter nearly as much as before. Because of this, firewalls are losing their usefulness.
    • Audit is no longer an afterthought. Auditors don’t care how or where applications are hosted, but they do need their reports! This includes Sarbanes-Oxley, HIPAA, Gramm-Leach-Bliley, and more.

    A core theme of this session was how the consumer mindset is driving requirements for application experience. Consumers expect it to work on any device, be secure, and be portable. To deliver on this, it must be easy to use. At the same time, password risk must be reduced.

    A key trend that Harding pointed out is the move of identity systems from “push” models to “pull” models. Instead of updating partners and directories with batch services, companies need to build real-time identity resolution into applications.
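    The difference between the two models can be sketched as follows; the directory contents here are hypothetical. In the push model a partner works from an exported snapshot that goes stale between batch runs, while the pull model resolves the identity against the live source at request time:

```python
# A stand-in for the authoritative identity store.
DIRECTORY = {"alice": {"role": "admin", "active": True}}

# Push model: partners work from a periodically exported snapshot,
# which goes stale between batch runs.
snapshot = {user: attrs.copy() for user, attrs in DIRECTORY.items()}
DIRECTORY["alice"]["active"] = False  # alice is deactivated after the export

def is_active_push(user):
    return snapshot.get(user, {}).get("active", False)

def is_active_pull(user):
    # Pull model: resolve the identity against the live source at request time.
    return DIRECTORY.get(user, {}).get("active", False)

print(is_active_push("alice"))  # True  (stale snapshot still admits her)
print(is_active_pull("alice"))  # False (live lookup sees the deactivation)
```

The stale-admission case is exactly the audit and security gap that real-time resolution closes.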

    We asked Harding if he had any predictions for where that type of service will come from. His response led us to the conclusion that the leader will be a brand and service whose motivations people trust and understand. It will likely enter the market from a higher realm of credentials than Twitter or Facebook – perhaps from financial services.

    Context is Fundamental: Person, Father, Employee, All of the Above

    One thing we learned today is that Google’s App Engine is worth watching as this space evolves. Several interesting things are being done in this sandbox that haven’t been accomplished elsewhere, including connecting consumer services to enterprise login discovery using domains.

    Google has inserted itself into the sweet spot by getting consumers and enterprises alike hooked on their applications, giving the company a unique view of the challenges and solutions in joining identity with cloud computing. We’ll be taking a closer look at these offerings and where Google is headed.

    Another thing we observed is the power of the network. NTT demonstrated mixing identity protocols (SAML and OpenID) to connect social, information, and financial transactions in the browser with one login. It starts to show how the next-generation Internet might work, where the application requests a profile from the cloud rather than a user typing it in.

    A summary of overlapping-world-multi-protocol integration has been shared on Google’s site.

    OverlapIdentity.jpg

    Discuss


  • Lady Gaga as the Killer App: Moving Identity into the Cloud

    ladyGagaHat.jpgProtocols, protocols, everywhere, and not a drop to drink. OAuth, OpenID, UX, Shibboleth, SAML, XRI, FOAF, Facebook Connect – that is a small sampling of the technologies that have been invented to move Internet identity forward for the web.

    Today, at the OpenID User Experience Summit, a jaw-dropping statistic was shared: 89% of users coming to LadyGaga.com chose a third-party logon rather than create a new account. “Signup with Facebook, Twitter, or MySpace” is the default option on LadyGaga.com – and it works.

    Sponsor

    Do You Have the Credentials: Want to be on the Guestlist?

    We wondered why this site is getting such a high level of adoption of third-party logons – a level that hasn’t been seen anywhere else.

    It seems that in addition to the momentum of the OpenID community, part of the story is about the landscape change occurring around video content. One great example is in the changes at YouTube including the Vevo ad wall. More than ever, tracking of the users that access protected content is becoming the norm. Even innocent phenomena like Rickrolling have been at risk as copyright holders are removing more and more content from YouTube.

    Have you seen one of these notices on YouTube before?

    ladyGagaPulledDownUMA.jpg

    It’s an example of a piece of content that has been pulled. Vevo now serves up the same content, and the content owner is provided with marketing tools to place around it – ads, placement, positioning.

    Google likes to get paid. It’s easy to see that while searching for Lady Gaga on the current version of YouTube. In contrast to the list of free videos that once stood, sites like LadyGaga.com are offered as one of the top links right next to the Vevo-supported link. Somewhere, in an office in Mountain View, there is a voice whispering “Show me the money”.

    ladyGagaYouTubeSearchAnnotated.jpg

    Third Party Logon: Facebook and Twitter Connect our Personas

    LadyGagaSignUp.jpgIronically, when clicking on the link from YouTube and arriving at LadyGaga.com, the most prominent third-party logon solutions displayed are Facebook, Twitter and MySpace. Universal Music uses the RPX service from JanRain, a Portland, Oregon, company, to power this capability for the site.

    Yesterday, at the OpenIDUX community meeting in Chicago, Brian Ellin of JanRain presented a set of slides on best practices in the OpenID world for User Experience and third-party logon.

    The talk included best practices and notes drawn from JanRain’s supported population of 173,000 websites that use its service for third-party logon. The company offers credentialing from over 10 key identity providers, including Twitter, Facebook, Google, Microsoft, and soon LinkedIn.

    Identity as a Service

    For the user, there is little reason to know the technology that is behind the scenes. All we need is to be able to easily choose the provider we want – and for it all to work. In fact, when it’s done right, it takes fewer clicks and less time. And of course, one less password to remember.

    That being said, on the LadyGaga.com site at least three protocols are supported from the home page alone: OAuth for Twitter, Facebook Connect, and OpenID with MySpace. Getting the experience right first has allowed these companies to support an integrated logon experience while the industry continues to innovate on the different core protocols for sharing identity across websites.
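    As a thought experiment, the juggling act behind the scenes might look something like this – one chooser dispatching to three different protocols. The provider names come from the article; the endpoints and handler logic below are our own hypothetical sketch, not JanRain's actual implementation.

```python
# Hypothetical sketch of a multi-protocol login chooser, as on LadyGaga.com.
# Provider names are real; endpoints and logic here are illustrative only.
PROVIDERS = {
    "twitter":  {"protocol": "oauth", "endpoint": "https://api.twitter.com/oauth/authenticate"},
    "facebook": {"protocol": "facebook-connect", "endpoint": "https://www.facebook.com/login.php"},
    "myspace":  {"protocol": "openid", "endpoint": "https://api.myspace.com/openid"},
}

def login_redirect(provider: str, return_to: str) -> str:
    """Return the URL to send the user to; the protocol stays invisible."""
    entry = PROVIDERS[provider]
    if entry["protocol"] == "openid":
        # An OpenID checkid_setup request carries the return URL explicitly.
        return f"{entry['endpoint']}?openid.mode=checkid_setup&openid.return_to={return_to}"
    # OAuth and Facebook Connect both redirect back via a callback parameter.
    return f"{entry['endpoint']}?callback={return_to}"
```

    Whichever branch fires, the user only ever picked a brand – never a protocol.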

    janRainLogoFeb2010.jpgWe spoke with Lisa Hannah, Director of Marketing at JanRain. She shared information on their analysis of the social media trends of third-party logon users. They represent a balance in building industry relationships, while at the same time finding consumer solutions that work well enough to drive adoption.

    The Nice Thing About Change is That it Requires a Lot of Hard Work

    The OpenID community is iterating on the solution to get it right. The organizations and leaders in the community, including individuals like Brian Kissel, Allen Tom, Monica Keller, Mike Jones, Joseph Smarr, Eric Sachs, David Recordon and Chris Messina, have continued to build momentum in the community and find common ground – even beyond where the companies they represent (Google, Microsoft, Facebook) are deciding on the balance of power of this generation of the Internet.

    Third-party logon is becoming standard practice. And like many things, it becomes easier to use with time. For this user, it’s starting to feel familiar, like an old friend.

    We’d Like to Ask Lady Gaga to Solve Health Care

    It had to happen somewhere, and LadyGaga.com is the site that shows us a glimpse of a new world of the Internet. Behavior change is all about the incentive – and bling. Perhaps this type of evidence will motivate the federal government in the direction of third-party logon.

    In the case of identity on the Internet, it is clear that there is benefit when open, mixed, and hybrid solutions are supported for the good of the user. As always, the experience is what matters.

    We’ll continue looking at Lady Gaga and third-party logon in an upcoming piece where we go through the details from a user experience and technical view.

    Do you support the movement towards users signing in using third-party logons? What do you think about Twitter’s OAuth, Facebook Connect and OpenID?

    Discuss


  • Be Found on Twitter: Connecting Our Dots in the Social Graph

    twitter_logo.pngToday, Twitter took the wraps off a new feature of the site. When logging in, it prompts the user to set defaults on being discovered with their email address or mobile phone number. It’s called “Be Found on Twitter”. Our contact at Twitter told us that, like many new features, this will show up for some users today and others soon.

    Up to this point, Twitter has allowed people to create a persona for themselves that may not be directly correlated to the real world. You can’t do that on Facebook (assuming that you’re following the terms and conditions).

    This change in settings – even if it is optional – represents a shift in how the service is working behind the scenes to connect people that already know each other. Personal data is moving in between the social networks and becoming a key part of cloud services.

    Sponsor

    Do You Want Followers? Default Settings Make it a Reality

    So far, Twitter hasn’t offered a way to make this kind of connection easy. We believe the reason this service is being offered now is simple: Twitter wants to take your email inbox and turn it into relationships.

    Below is the screen that popped up for Web client users of Twitter who are being offered this enhancement. (Note: Those are my personal email and mobile phone digits, but I chose to opt out of the service. In case you want to contact me, email is still an option.)

    beFoundTwitter610.jpg

    Thinking a bit into the future, perhaps Twitter will offer to take my email folders and auto-magically create lists of users from the email accounts and phone numbers in them.

    This all gets interesting in the context that Twitter lists are viewable to the outside world – and inbox settings are not. There still seems to be more work to do to make this all make sense, but for now, it sits somewhere between Buzz’s and Facebook’s approaches to connecting users to their intimate relationships.

    The Reverse of Buzz, or, Do Memes have Cellphones

    We can see the motivation for Twitter to launch this feature. One of the challenges the service has is also its greatest feature: no rules. Anyone, anything can have an account today: devices, dogs, spacecraft, germs, conferences. All of ’em are out there somewhere and are one button away from being in your feed.

    Something to think about in this mixed model of accounts is that although the settings on Twitter are now moving towards discovering email and phone numbers for our contacts, we don’t expect the Cassini Saturn spacecraft to have a mobile phone number.

    cassiniSaturn.jpg

    Although Twitter is amazing for finding information about the world in real time, one of the things that Twitter has lacked is stickiness with intimate contacts.

    Trying to get folks to use Twitter to connect to their real friends, the people they work with, and family members is part of the battle for the real-time Web. Facebook has unique features and momentum in this area (e.g. requiring your real name), and Google Buzz made a big move in connecting the inbox for millions of Gmail users to its social service.

    API Makes it Harder to Create Harmonized Settings for Users

    This is a great example of where Twitter, being so decentralized, has to rely on partners to roll out these types of features. Traditional Web users see these features offered by the company, but others – Seesmic, Tweetdeck, Tweetie – may not ever offer this feature in their client.

    One thing to watch will be how Twitter evolves the terms of service and default settings as it ramps up its efforts to compete further for mind share in the real-time web.

    Taking this all into consideration, do you want to be found on Twitter?

    Discuss


  • Sometimes it Pays to Solve Hard Problems: CA Acquires 3Tera

    CALogoFeb2010.gifAs ReadWriteWeb’s Richard MacManus reported in 2006, 3Tera is a company to watch: “3Tera strikes me as a company to keep an eye on – they’re tackling a complex problem and they have a lot of potential customers out there.”

    CA must agree. The companies have entered into a definitive agreement for CA to purchase 3Tera, adding it to CA’s growing list of cloud acquisitions.

    Sponsor

    Simplifying Deployment

    3Tera’s focus is simplifying the deployment of environments. The tools also help synchronize capabilities between cloud providers and so-called “private clouds” hosted inside a company’s data center.

    The company has a GUI-based application to help visualize, manage, and deploy solutions in the cloud. This is an important problem to solve, especially if time is of the essence in getting your cloud-based application supported by your IT team, and in keeping your choices open after it is deployed.

    The Cloud is Mainstream

    3teraFeb2010.jpgAs self-reported by the 3Tera team on their blog, this acquisition represents cloud computing becoming mainstream in IT. CA sees a need to fill in this piece of its portfolio, and IT leaders are asking for tools to deploy and manage cloud infrastructure assets.

    “We started 3Tera to radically ease the way IT deploys, maintains and scales – MANAGES – applications. Our AppLogic® cloud computing platform provides the foundation of our partners’ orchestration of cloud services for public and private clouds around the world. Today, we’re taking the next step in moving toward making cloud computing mainstream by joining CA.”

    It looks like cloud computing is becoming essential to the enterprise. Is it in yours?

    Discuss


  • Microsoft to Government CIOs: Choice is Here

    USFlagLede.jpgAt the 8th annual U.S. Public Sector CIO summit in Redmond, Microsoft shared its progress in offering cloud software services to the attendees. The company has been making progress along multiple fronts, showing the power of focus and persistence.

    Microsoft has been investing in its products to meet the security requirements that are popular in the government setting. The company also applies its software-plus-service pattern as the way to reach knowledge workers in highly secure settings that require high availability.

    Sponsor

    msftCIOSummit.jpgThe event kicked off with presentations and press announcements from numerous Microsoft government and cloud teams. One area that is central to Microsoft’s message is that the company is committed to federal standards for security, including – deep breath – International Organization for Standardization (ISO) 27001, Statement on Auditing Standards (SAS) 70 Type I and Type II, Health Insurance Portability and Accountability Act (HIPAA), Family Educational Rights and Privacy Act (FERPA), Title 21 CFR Part 11 of the Code of Federal Regulations, Federal Information Processing Standard (FIPS) 140-2, and Trusted Internet Connections (TIC) compliance.

    Folks in health care can take note that HIPAA is on the list, giving us a glimpse into a future where this service may eventually complement Microsoft’s other health care solutions in the market. In the context of health care, it’s worth mentioning that in Microsoft’s roadmap, two-factor authentication will be addressed in the next six months.

    Microsoft posted a primer on YouTube on its software-plus-services government strategy. The key theme: choice.

    Software-Plus-Services Translated Scenario: Running Excel Locally

    Business Productivity Suite includes hosted Exchange, SharePoint, and Office collaboration tools. This solution doesn’t include the applications themselves online; instead, the pieces that join the files together live in the collaboration workflow. At the starter price of $10 per user, per month, it seems very attractive in comparison with other offerings in the market. We used the cost calculator and found that, as you would expect, the price per user goes down as the volume increases.
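    To make that volume effect concrete, here is a toy calculator. Only the $10 per user, per month starter price is Microsoft's published figure; the tier breakpoints and discount rates below are invented for illustration and are not Microsoft's actual rates.

```python
# Toy volume-discount model. Only the $10/user/month starter price is real;
# the tier breakpoints and discounts below are purely illustrative.
STARTER_PRICE = 10.00  # USD per user, per month

def monthly_price_per_user(users: int) -> float:
    """Hypothetical tiered discount: larger organizations pay less per seat."""
    if users >= 1000:
        return round(STARTER_PRICE * 0.80, 2)  # invented 20% volume discount
    if users >= 100:
        return round(STARTER_PRICE * 0.90, 2)  # invented 10% volume discount
    return STARTER_PRICE
```

    The shape of the curve, not the exact numbers, is the point: per-seat cost falls as seat count rises.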

    Microsoft seems to be leveraging its strength in Exchange and SharePoint, coupled with local versions of Office software, to enter the market. Several reasons may shape this solution.

    • Protecting the dynasty of Office revenue
    • Making it easier to get started for existing license holders and users already familiar with their local version of Office
    • Online solutions aren’t ready, or don’t meet the usability requirements
    • Microsoft’s vision that some software runs best when local

    We wondered how this compares with the capability of editing a Google Spreadsheet with Google Apps. As a user, running the entire application on the network is a bit of a double-edged sword today.

    On one hand, browser-based applications aren’t as feature-rich as current desktop applications and are still maturing. Where this innovation happens is also unclear; HTML5, JavaScript, Silverlight, and Flash all have roles to play.

    On the other hand, when you have the document hosted online like in Google Docs, collaborating becomes a natural and expected part of the experience. The app can offer simple cues like always telling a person who else is editing the document at the same time.

    Today, we are signing up for the trial service to give it a test, and will report back on our findings as we set up the service and put it through its paces.

    Microsoft’s Commitment to Government: Is it Winner-Take-All?

    Microsoft has benefited in the past by understanding how to leverage the network effect. For example, file formats and applications like Word became so dominant that it is assumed that all users use .doc, or use an application that talks to this file format. This, in effect, makes it very difficult to displace Word as an application.

    The push into secure cloud services seems like a race to ubiquity. The vendor that wins “first” may end up being the default way to collaborate in government departments.

    We can see how valuable it will be for the systems to overlay with existing technologies such as email, word processing, and data analysis. They will need to hit the same ubiquity so that all members of a team or project can share information freely and securely.

    Microsoft is heavily invested in government services. You can keep up on the happenings by following one of the handful of Microsoft government Twitter accounts the company has. We also found this Gartner interview with Ray Ozzie valuable in framing the cloud from Microsoft’s perspective.

    Is Microsoft positioned to be the default delivery of cloud services to the Feds?

    Discuss


  • Rackspace Cloud: An App Store that Pays

    rackerLede.jpgDo you like getting paid?

    Today, Rackspace Cloud announced a new cloud partner program designed to reward partners for bringing new business to the company’s cloud hosting offerings.

    Now with the Rackspace program, resellers can receive direct rewards in the form of percentage points on the back-end and join in the financial benefits of cloud hosting.

    Sponsor

    Where Did This Idea Come From?

    We got a chance to speak with the Rackspace team. They emphasized customer feedback and testing with beta customers as key drivers for this program. From what we can see, the developers said “pay me” and “make it simple”. And that is what Rackspace aims to offer.

    “As a leader in cloud computing and a company that is committed to maintaining tight bonds with our partner community, we have aimed to create one of the most compelling partner programs in our industry,” said Emil Sayegh, general manager, The Rackspace Cloud.

    How Does it Work for Resellers?

    Resellers are Rackspace Cloud customers who buy hosting services directly and re-sell them to their own customer base. A reseller may divide the hosting services into smaller bundles to generate a profit from customer fees, or build an application and attach it to the Rackspace infrastructure.

    In a way, this model is like a reverse app store: instead of paying 30% to be on the iPhone, as app developers pay Apple, developers get paid for bringing business to the platform.

    If this program takes off, it could be a signal of a major sea-change where application developers – rather than system integrators – are the ones being rewarded for bringing infrastructure together.

    And now that the company has extended its offerings with a Windows product, it is broadening its appeal even further. The new Windows beta gives Cloud Server users new and existing features, including control panel and API access for create, delete, reboot, and rename functions. Encapsulating the server into common functions and administration is one move towards consolidated infrastructure.
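    Those lifecycle functions map naturally onto simple REST calls. Here is a rough sketch of what a reboot request might look like; the hostname, path, token header, and payload shape are our assumptions modeled on the general REST style of such APIs, not a verified reference for Rackspace's actual endpoints.

```python
import json

# Hypothetical request builder for a Cloud Server "reboot" action.
# The URL, headers, and JSON shape are assumptions for illustration,
# not a verified description of the real Rackspace Cloud Servers API.
def build_reboot_request(account_id: str, server_id: str, hard: bool = False):
    """Return (url, headers, body) for a server reboot action."""
    url = f"https://servers.api.example.com/v1.0/{account_id}/servers/{server_id}/action"
    body = {"reboot": {"type": "HARD" if hard else "SOFT"}}
    headers = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}
    return url, headers, json.dumps(body)
```

    Create, delete, and rename would follow the same pattern: one resource URL, one small JSON verb.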

    From the press release: “The Rackspace Cloud Reseller Program offers discounts of up to 12 percent for resellers selling large volumes of cloud hosting across Rackspace Cloud’s services.”

    This also might be a good approach to monetizing great open source solutions: the software is free, but it runs on a slice of the cloud, and the developer gets paid as part of the whole bundle from the infrastructure provider.

    How Does it Work for Affiliates?

    RackSpaceMonthlyPayout.gif“The Rackspace Cloud Affiliate Program is designed for companies that have offerings or content that attracts users interested in a cloud hosting solution. It is offered as a text or image link from a site, blog, or tweet.

    Members can earn from 5 to 7 percent (depending on the total number of referrals) of a referred hosting customer’s payments for a period of up to three years.”
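    To put the quoted numbers in perspective, a quick back-of-the-envelope sketch. The 5 to 7 percent range and the three-year cap come from Rackspace's description; the referral-count threshold that bumps the rate up is our own invented placeholder.

```python
# Hypothetical affiliate payout: 5-7% of a referred customer's payments
# for up to three years (per the quoted program terms). The referral
# threshold at which the rate steps up is an assumption for illustration.
def affiliate_payout(monthly_bill: float, months: int, total_referrals: int) -> float:
    rate = 0.07 if total_referrals >= 20 else 0.05  # invented threshold
    months = min(months, 36)  # payments are capped at three years
    return round(monthly_bill * months * rate, 2)
```

    A single referred customer paying $100 a month for the full three years would earn a small-scale affiliate $180 at the 5 percent rate – modest, but it compounds across referrals.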

    If you see a button like this one on the web, be assured that the company presenting it participates in the revenue stream if a referred visitor signs up for Rackspace.

    rackSpaceAffiliateButton.gif

    [Note: Author has asked management our position on this for sites like RWW]

    Apple offers an affiliate program for iTunes and the App Store. Many other online retailers offer similar features today for goods bought on the Internet, including Amazon with Amazon Associates.

    When the program was first released, there were some questions about the process and payouts that have been addressed in this new release, including a cookie-tracking mechanic and easier ways to report on activity. It sounds like Rackspace isn’t giving up and instead is innovating further in making infrastructure a click away from the web.

    Will Amazon, Others Respond?

    Rackspace is growing its partner network and incentives, and is offering a sales model that rewards application resellers, developers, and websites.

    Amazon has amazing affiliate outreach with its core store; we wonder if it will soon offer more programs to attract software to bundle with infrastructure.

    Will more and more software come with “infrastructure inside”?

    Discuss


  • Blazing the Path to Email Collaboration: Without all of the Buzz

    yousenditLogoFeb2010.gifToday, email is nearly as ubiquitous as the computer itself. It offers a simple process that “just works” for most users, and it has become a de facto communication process for enterprises and individuals alike.

    YouSendIt found its place in the evolution of email by providing existing email users a solution to a common problem – sending large files. Along the way, the company has leveraged its position in cloud-based solutions to offer additional benefits to its users.

    Sponsor

    What do we Know about Email

    A few things about email define it as a communications tool:

    • Each message can be targeted to a person, a list of people, or an entire group of people.
    • Individuals can respond to the message, ignore it, or mark it as Spam.
    • Its security and privacy model starts as opt-out, rather than opt-in – meaning that if you get an account, *anyone* can send you a message.
    • Spammers are the single largest sender of email traffic.
    • Email messages can include file attachments that offer a way to send a file from one computer to the user on another one.

    Improving the flow of attachments is the part of email that YouSendIt specializes in.

    Large Files Needed a Home

    For user experience, technical infrastructure, and cost reasons, many email systems cap the size of attachments they allow through the gateway. Large files are routinely blocked, leaving email users to figure out another way to get them across the network.

    One way to think about it is that large file attachments’ “real” home is not the inbox, but more rightfully the filesystem (aka My Documents). And increasingly, these files are being stored in the cloud rather than the filesystem.

    A diagram describing how it works shows how it creates a new channel for connecting the user to their file, while continuing to use email “as-is”.

    howYousenditWorks.gif

    Collaboration Happens: Did You Get my Email?

    By offering a cloud solution to deliver the files, YouSendIt was also able to track whether the recipient downloaded the file. The company currently has about 5 million “file batches” sent per month, with over 10 million downloads. On average, each file is read by two people on the other side, and the sender is able to see which ones.
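    That headline average is simply total downloads divided by total batches – 10 million over 5 million is 2 readers per file – and the same per-batch bookkeeping is what lets a sender see who downloaded what. A minimal sketch of that tracking, with hypothetical names:

```python
# Sketch of per-batch download tracking (names are hypothetical, not
# YouSendIt's actual data model). The headline average is just
# total downloads / total batches: 10,000,000 / 5,000,000 = 2.
downloads = {}  # batch_id -> set of recipient addresses that downloaded

def record_download(batch_id: str, recipient: str) -> None:
    """Note that a recipient downloaded a file batch (duplicates ignored)."""
    downloads.setdefault(batch_id, set()).add(recipient)

def readers(batch_id: str) -> int:
    """How many distinct people downloaded this batch."""
    return len(downloads.get(batch_id, set()))
```

    Keeping recipients in a set means a second download by the same person doesn't inflate the reader count.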

    This auditability gives YouSendIt users a way to close the loop and gain deeper insight into the status of their communications.

    Imagine a salesperson who sends a brochure to prospects getting a report back on who opened it and who didn’t. It automatically separates the interested from the others and gives an opportunity to target the next message.

    Email is a social application with its own rules and nuances – Google reminded the world recently, with the launch of Buzz, that connecting email and social networks is harder than it looks.

    YouSendIt might be onto something. Instead of reinventing the entire social context of email, the company is focused on enhancing the existing email system as it works today.

    YouSendIt Adopts the Enterprise

    Microsoft Exchange has become the dominant email system in the enterprise. YouSendIt spent a lot of time working closely with Exchange and its client counterpart Outlook to bring its cloud-based attachments solution to the platform. This solution offers an approach to a gradual transition to cloud computing – one message at a time.

    yousenditOutlook.jpg

    Change is in the air around collaboration and email. We wonder if the next generation of email is going to evolve into the killer app for bringing social networking into the enterprise.

    What do you think, will email ever fade away?

    Discuss


  • Rulers of the Cloud: Will Cloud Computing Be the Second Coming of Cisco?

    ciscoLogo.gifCisco is betting heavily on the network as the platform. We took a look at the role of the network in the emerging landscape of cloud computing as part one of our analysis, “Will One Company be Dominant in Cloud Computing”. We started with Cisco, since the cloud implies a network to float upon.

    Like religion itself, Cisco is a company that evokes deep emotions. Many IT leaders believe in Cisco and bet their operations on the company. And to unbelievers, using Cisco gear is one of the deadly sins.

    Sponsor

    heavenDoor.jpgFollowing that analogy, on the first day, there was IP. And the Internet was formed. And on the second day there was virtualization, and the virtual machine was born. On day three there was pay-as-you-go computing, and Amazon released EC2. On day four, the iPhone was released. And there was rejoicing.

    We’re not sure yet what will happen on days five and six – but on the seventh day, there was a globally interconnected cloud, powered by the Internet, IP, and more than likely, Cisco gear. In Cisco’s prophecy, this leads to The Human Network.

    Cloud Computing is the Next Version of the Internet

    Cisco enjoyed massive benefits in the first phase of the Internet in the late 1990s. Its gear powered the Internet, and the market rewarded the company for its leadership. Cisco’s business was built by the enterprise and their huge appetites for interconnected networks to connect companies and power Web applications.

    To diversify and expand its opportunities in the late 1990s, Cisco made big bets in the service provider and SMB marketplace. In the service provider space, Cisco drove combining voice, data, and video networks. In SMB, Cisco entered into the commodity space with Wi-Fi and lower-end products. These have paid off and have positioned the company to be ready for the next phase of the Internet.

    cisco20YearStock.jpg

    Charting Cisco’s performance from its first day on NASDAQ 20 years ago to today shows a massive spike in its stock price during the rise of the Internet.

    In the next phase, we’re seeing a repeat opportunity for Cisco to assist in the disruption of the datacenter in the move to virtual data centers and private clouds. At the same time, the company is pushing further into the hottest area of the service provider opportunity: the core of mobile computing.

    We wonder, will the market see another spike in Cisco as cloud computing goes mainstream?

    IP Everywhere – Networking is the Platform

    Cisco has invested in several trends that will define the next generation of the network. The network is a complex thing. If anything, the opportunity for Cisco is also the risk. Can it manage these complex infrastructures and prove that it is the one company that can bring it all together?

    First, working closely with VMware, Cisco has defined the virtual data center – one with a virtualization-aware unified fabric. Cisco’s Nexus switches bridge storage, virtual instances, and the network.

    To get a high-performing distributed virtualized data center, it is important to have all servers on a switched network, deployed as the same Layer 2 VLAN. This means extending VLANs over Layer 3 routed networks. To do this, Cisco has introduced a new data center interconnect solution called Cisco Overlay Transport Virtualization (OTV), which aims to provide the performance of Layer 2 while preserving most of the scalability, resiliency, multipathing, and failure-isolation characteristics of a Layer 3 connection.

    OTVCisco.jpg
    “OTV can be thought of as MAC-address routing, in which destinations are MAC addresses, and next hops are IP addresses. OTV simply maps MAC address destinations to IP next hops that are reachable through the network cloud. Traffic destined for a particular MAC address is encapsulated in IP and carried through the IP cloud to its MAC-address routing next hop. The rich information in the MAC-address routing protocol enables Layer 2 connectivity over Layer 3 networks based on MAC-address destinations.”
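    Stripped to its essence, the mechanism quoted above is a lookup table from destination MAC address to IP next hop, plus encapsulation. A toy model, with invented addresses – real OTV is of course far more involved:

```python
# Toy model of OTV-style MAC-address routing: map a destination MAC to
# the IP next hop of the remote site's edge device, then "encapsulate"
# the Layer 2 frame for transport across the IP cloud. The addresses
# below are invented documentation examples, not real deployments.
MAC_TO_IP_NEXT_HOP = {
    "00:1b:54:aa:01:02": "192.0.2.10",    # server at remote data center A
    "00:1b:54:bb:03:04": "198.51.100.7",  # server at remote data center B
}

def encapsulate(frame: bytes, dest_mac: str) -> tuple[str, bytes]:
    """Return (IP next hop, payload) for a frame bound for dest_mac."""
    next_hop = MAC_TO_IP_NEXT_HOP[dest_mac]
    # Real OTV wraps the Ethernet frame in IP (and transport) headers;
    # here we just pair the frame with the next hop it would travel to.
    return next_hop, frame
```

    The point of the quote is exactly this shape: destinations are MAC addresses, next hops are IP addresses, and Layer 2 reachability rides on top of ordinary Layer 3 routing.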

    The second major area of innovation Cisco is making news in right now is the mobile core.

    3GNetworkCisco.jpgCisco announced in December 2009 that it had finished the acquisition of Starent Networks for over $1 billion. The combined Starent-Cisco puts the company in a key leadership position in routing mobile data, delivering an end-to-end mobile multimedia IP architecture for the mobile operator packet core. Cisco sees the Internet of Things coming, and connecting mobile operations with virtualization is part of making it all happen. Virtualization and mobile are coming together – two major trends that will form the future of cloud computing.

    This week, Cisco CEO and board chairman John Chambers shared his thoughts at Mobile World Congress on the value Cisco is bringing to the service provider.

    As the Next Generation Emerges, the Stakes are High

    On one hand, there is a joining that’s happening where we see Cisco, EMC and VMware partnered in a significant way. The announcement in November 2009 describes a vision where the companies offer enterprises a coordinated set of solutions. These solutions cover the core three tiers of computing: OS, storage, and network.

    But on the other hand, HP and Cisco are parting ways. Even in the best of times, this relationship was a bit strained, as Cisco and HP have been both competing and cooperating over the last few years. The companies signed a global partner agreement, but Cisco competes with HP in unified computing, and HP has introduced the ProCurve line of network devices and purchased 3Com. Now Cisco has decided to pull HP out of its partner program. HP seems to be the one company (outside of Cisco’s network competitors) that wants to challenge Cisco’s destiny.

    Does cloud computing give Cisco a second chance to outperform the world as a dominant vendor? What are your hopes or fears of a Cisco-powered cloud computing fabric?

    Photo credit: rebeauty

    Discuss


  • Google Certification Program: Building Cloud Approved Developers

    qualifiedDeveloperGoogle.gifIn an effort to court enterprises, Google is moving full steam ahead with its developer certification program. This includes a directory of talent certified on Google’s APIs who have successfully launched projects into production.

    Opening its APIs to the world has been a big boon for Google’s ambitions to be a hub of the world’s information. It has made it easy for developers to build solutions using Google Maps, Search, and other offerings. With this program, the company is making it easier for businesses to find qualified developers.

    Sponsor

    Developers Matter

    This program reminds us of the mantra held by Microsoft, “win the hearts and minds of developers”. Where a development community is committed, good things happen with the platform and for customers.

    As competition in the cloud explodes, Google is committing resources to make its offering successful by giving developers incentives. These include a listing on the Solutions Marketplace, where companies can find solutions to plug in. Additionally, Google is offering warm fuzzies to developers. From their FAQ:

    By attaining qualification, the developer receives:

    • Google’s recognition as a tested and Qualified Developer.
    • The official Google Qualified Developer logo specific to the API for which qualification was obtained, and which can be displayed on a website to showcase the developer’s skills and help attract clients.
    • Listing in the Developer Directory, which validates the Google Qualified Developer credentials.
    • A warm, fuzzy feeling for this grand accomplishment.

    Google is steering its greatest asset – traffic – toward developers who participate in this program.

    Simply put, developers will get seen more if they complete the certification. Here’s a sample of the directory as it looks today.

    GoogleDeveloperDirectory.jpg

    We’re wondering if Google will go further in the future and sprinkle these listings into other parts of Google. For example, if a search is done on the main site for “maps developer”, will a developer from the list show up in a premium position?

    Enterprise Visions

    Google’s enterprise footprint is growing. On the enterprise site today, several core services, such as search and maps, are available to businesses.

    This week at Mobile World Congress, Eric Schmidt mentioned again that “Google’s Future is in the Enterprise“. The company is focusing on how to grow its revenue engine – and to continue making progress in unseating Microsoft in key areas such as enterprise applications.

    We’ll have a chance to see soon how far Google will go toward the enterprise with Buzz. Will Google figure out how to offer its new but popular Buzz social networking application to users of Google Apps? Here, where privacy will really matter, we’ll get a chance to see if Google will adjust to the reality of corporate requirements. In some cases, this means paying attention to the nuances of enterprises that are not as computer savvy as the staff at Google. This is another reason why a strong developer program – one that Google learns from – is a needed piece of the puzzle.

    What Does it Take to Get the Certification?

    Developers entering the program will be required to earn points in several categories to receive certification.

    • Application Development (can you do it?)
    • Community Participation (are you an asset to the development community?)
    • References (can you deliver?)
    • The Qualification Exam (a 50-question API test; this is the highest-weighted piece)
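    As a rough illustration, the points-based model above can be thought of as a weighted sum. The weights and category keys below are hypothetical placeholders – Google has only said that the exam is the highest-weighted piece:

```python
# Hypothetical category weights; Google has not published the real ones.
WEIGHTS = {
    "application_development": 0.20,  # can you do it?
    "community_participation": 0.15,  # are you an asset to the community?
    "references": 0.15,               # can you deliver?
    "qualification_exam": 0.50,       # the 50-question test, weighted highest
}

def qualification_score(scores):
    """scores maps category -> value in [0, 1]; returns the weighted total."""
    return sum(weight * scores.get(category, 0.0)
               for category, weight in WEIGHTS.items())

print(qualification_score({
    "application_development": 1.0,
    "community_participation": 0.8,
    "references": 1.0,
    "qualification_exam": 0.9,
}))
```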

    Are you ready to become Google Certified?

    Discuss


  • Facebook and Twitter: SalesForce.com Offers Social, Real Time Enterprise Tools

    chatterSmall.jpgToday, SalesForce takes the wraps off its Chatter pilot program. After several months of testing with select customers, it is going into production for this group. We reviewed Chatter with SalesForce’s VP of Corporate Strategy, Bruce Francis, and SVP of Product Marketing, Kraig Swensrud, to find out what all the excitement was about.

    Chatter has the goal of bringing the best of social media tools to the enterprise, making enterprise sales tools as easy to use as Facebook or Twitter. With all of the buzz around privacy and social networking tools, it’s refreshing to hear that Chatter uses the same security model that the rest of SalesForce has built into its platform.

    Sponsor

    Adding Collaboration to the SalesForce architecture

    Chatter is a new module in the SalesForce architecture that takes advantage of the existing APIs and services, while providing rich collaboration features. “Collaboration as a Service” is now trending as a new category in the industry.

    chatterArch.jpg

    Adds Dialog to Existing Sales Flow

    If you’re used to using SalesForce today, you’ll see new features embedded in your screen with Chatter, including the “What are you working on” box, which mimics the experience of posting status on Facebook or Twitter.

    Similar to Twitter, Chatter users can add a hashtag to a post and create a new topic to share in the enterprise, inviting others to join the dialog using the same hashtag. For those of you not familiar with the #hash, it is used to create a ‘topic’ that makes it easier to find posts about the same subject. Learn more about the hash on Wikipedia.
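    As a generic sketch of the mechanic (not SalesForce’s actual implementation), pulling the hashtag topics out of a post is a one-line pattern match:

```python
import re

# A '#' followed by word characters (letters, digits, underscore).
HASHTAG = re.compile(r"#(\w+)")

def extract_hashtags(post):
    """Return the topic names mentioned in a post, in order of appearance."""
    return HASHTAG.findall(post)

print(extract_hashtags("Kicking off the #acme renewal today. Thoughts? #q1forecast"))
```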

    We had questions about these features and whether the #hash would be robust enough to support enterprise sharing. On Twitter today, we see groups overwriting each other and suffering “hash overload”. Will this happen in the enterprise? From our investigation, it seems that SalesForce has this figured out and is adding more features in this area. We’re not yet sure how these dynamic groupings will work alongside more static topics that may exist. This seems to be an area of opportunity for companies to build their own patterns.

    salesScreenChatter.jpg

    Follow People, Documents, and Applications

    followersChatter.jpg

    Chatter supports setting up follower lists, teams, and advanced permissions for supporting projects. In addition, a person can follow a document or a record in a database. For example, a person may want to be apprised of all changes in a sales forecast document and have those changes piped into their feed. This feature follows the “like Facebook” mantra that we’ve heard from the SalesForce product team: instead of the user going to the document, the updates come to them.

    Dominic Shine of Reed Exhibitions gave us some insight into why his company is a first-mover in the pilot program.

    “Many of our employees use Facebook and/or LinkedIn or Twitter so are fairly familiar with the concept but we’ve never managed to get these to work internally properly. We are very enthusiastic about the concept of having that sort of functionality integrated with our sales and marketing processes and also as a more general purpose tool that will allow general social networking but also the ability to “follow” what is happening in processes or on platforms.”

    We still have a few questions about how easy it will be for users to get the following mechanics right.

    Will there be social blunders in the enterprise, where your favorite team member doesn’t follow you back? Will employees “game” the system and follow everyone? These questions and more are some of the things we look forward to learning more about from customers as they proceed in their production roll out of Chatter.

    We asked the SalesForce team about these questions. Kraig Swensrud, SVP of Product Marketing, shared his perspective on the design goals for followers in Chatter:

    • It is designed to be less of a popularity contest and more about the people you need across the company.
    • The followers will represent the social graph of connections in a company.
    • Expect custom “groups” of followers to emerge as a pattern around projects, sales leads, and business processes.
    • Trending topics will surface the most-followed documents and records, bringing the most valuable enterprise assets to the top.

    We have additional questions that aren’t yet answered. For example, how Chatter bridges Facebook and Twitter, and what it means for a Chatter user to “follow” someone on Twitter (does the Twitter user know that they are considered a lead?). So, to dig in a bit deeper, we asked a few SalesForce customers about their experience setting up the tools during the first phase of testing, and whether they are seeing any surprises. Here’s what we heard so far:

    Our question: What are your thoughts on promoting “proper” following in your enterprise? Have you seen users do things that you wouldn’t expect? Any social challenges or surprises?

    James Sheppard of Vetrazzo responded:

    “Some gravitate to heavier use than others, as would be expected. I’m very pleased that many employees have stopped communicating the old fashioned way of “Broad CC” email and instead are turning to Chatter to both ask questions and share ideas and resources. It’s too soon to call them “social patterns” perhaps, but I see real potential to have topic oriented conversations in a way that’s far more intuitive and useful than email.”

    Dominic Shine added:

    “There’s some natural interpretation about who to follow at first. But people have talked for years about how work really gets done through informal networks. The surprising thing about Chatter is that it’s a simple way to for people to interact through those informal networks without chasing people. If someone is great at vendor negotiations, I follow that person. If someone is a superstar account rep, I follow that person. I can start to pick up and replicate their success through that collaboration. And, with Chatter, everyone can tailor it and its use for their own needs. I think we’re starting to see, and will see more of, Chatter helping to “flatten” the organisation so that people at all levels of the organisation can collaborate around an issue, sale, product etc whereas the communication on these items can be hierarchical in organisations and there is frustration sometimes about slow communication or decision making up and down the hierarchy – Chatter can democratize that communication flow.”

    All in all, this seems like a great move by SalesForce. It will likely take some tweaking to get it perfect, but it seems on track to be a hit for SalesForce customers. Customers we spoke to have been giving feedback on features that can be enhanced to meet their needs. And, to the credit of the SalesForce team, they are taking the time to gather that feedback and get it right.

    The enterprise is becoming more social. Maybe soon you’ll be able to tweet your way to a nice bonus, thanks to Salesforce and cloud computing.

    Is your enterprise ready for social sharing and the power of Chatter?

    Discuss


  • Investing in the Cloud: Trending Topics for a New Fund

    you.jpgToday we had the chance to get inside the head of one of the top entrepreneurs in cloud computing – a guy who also just happens to be an investor at one of the Valley’s top firms. Satish Dharmaraj, former founder and CEO of Zimbra, spoke with us about the trends he is looking at in his role at Redpoint Ventures, a Silicon Valley venture firm. Redpoint recently closed a $400 million fund that is focusing on mobile, cloud computing and clean technology.

    We wanted to ask Satish where the sweet spot is in cloud computing and share it with you – just in case you’re starting a killer cloud venture today.

    Sponsor

    ‘Computing Utility’

    redpoint_satish_large_final.jpgTo start off, we asked Satish what the cloud is today versus where it was several years ago.

    When Amazon’s EC2 and S3 came out, for the first time you could rent compute and storage power at a moment’s notice. Elastic, on-demand provisioning is what it originally meant when Amazon introduced the concept of the cloud.

    Now, cloud computing also includes SaaS. It also includes apps that sit on top of the cloud. Additionally, with the invention of the private cloud, large enterprises are able to look at their datacenter with a new focus. The evolution is underway on how to leverage the concept of “computing utility” in the enterprise and run the datacenter as an on-demand cloud for hosting internal applications.

    Redpoint’s Investment Thesis for the Cloud

    First, a bit of background on where Redpoint sits in the investment spectrum. Redpoint’s fund is focused on early-stage companies, with 75% allocated to Series A investments and 25% to Series B. Additionally, the fund sometimes makes seed investments of between $250,000 and $1 million, while Series A investments run from $2 million to $5 million.

    There are two areas Redpoint is looking at for cloud computing and virtualization.

    1. Taking applications that used to run behind the firewall and moving them to the cloud. This is a big trend for SMBs and is emerging for enterprise-class applications. SMBs are enjoying this trend now because they don’t have large IT departments already in place. In some cases, Redpoint also thinks that large enterprises will adopt these applications. It gives them more freedom of choice.
    2. The second area Redpoint is looking at is large enterprises whose data centers are becoming private clouds, running vendor software that has been packaged for a virtualized footprint on their own infrastructure. The new data center is an on-demand set of services that supports elastic computing. In the future, it will offer internal departments advantages similar to the public cloud: they will be able to order computing services with a Web form and expect delivery in hours, rather than weeks or months. With this will come applications for billing, provisioning and configuration management. Redpoint has already invested in one company in this space, VMOps, which is considered an IaaS (Infrastructure as a Service) company.

    Additionally, there is a big trend among service providers with Web hosting operations (like 1&1 and Savvis). They are finding that they can cut costs to a tenth by moving their dedicated server business to a virtual server business. Most dedicated servers are utilized only about 10% of the time, so it makes sense for them – and for their customers – to reduce the data center footprint and cost infrastructure.
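    The consolidation math behind that claim is easy to sketch. The figures below are illustrative assumptions, not numbers from 1&1 or Savvis:

```python
import math

# Illustrative assumptions, not provider data.
dedicated_servers = 1000       # physical boxes in the hosting fleet
avg_utilization = 0.10         # each box is busy ~10% of the time
target_utilization = 0.70      # sensible packing target for virtual hosts
cost_per_server = 200.0        # monthly cost per physical server

# Packing 10%-busy workloads onto hosts run at 70% needs far fewer machines.
hosts_needed = math.ceil(dedicated_servers * avg_utilization / target_utilization)

monthly_before = dedicated_servers * cost_per_server
monthly_after = hosts_needed * cost_per_server
print(hosts_needed, monthly_after / monthly_before)
```

    With these (made-up) inputs the fleet shrinks from 1,000 machines to 143, putting the virtualized cost in the neighborhood of one-seventh of the original – the same order of magnitude as the one-tenth figure cited above.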

    redPointVentures.jpg

    Is There a Way to Summarize This Generation?

    We took advantage of Satish’s background as an early Java developer, unified messaging architect, and cloud computing leader to ask him if there was a way to summarize what is happening in this technology movement.

    If Java’s promise was “write once run anywhere”, what is the promise of the cloud?

    According to Satish, previously the method was write once and target any processor. Now companies can package the whole thing as a virtual machine and don’t even need to care about where it runs – the virtualization layer removes that problem. A new way to think about it might be: “write once and run everywhere”.

    Will there be a Dominant Company in the Cloud?

    Following up on our recent post, we asked Satish, “Will there be a dominant company in the cloud?”

    In essence the answer is, “It’s possible”. Satish believes one company could be the nexus for cloud computing, but he’s not sure which one, yet. Here is a summation of his thoughts on some of today’s top leaders in cloud computing:

    • SalesForce is an interesting player that is growing as a cloud company. They might have a cross-channel relationship challenge in the concept of the AppExchange. It’s too early to tell if they have enough channel leverage to enable other companies to build large businesses on their platform.
    • VMware is really an OS company and is extending its reach and presence to take advantage of its leadership position. They could grow a lot in coming years.
    • Facebook is already in an interesting role. They’ve seen a big win in the market with Zynga. A big, big win.
    • Amazon is really leading the pack. It could be a very different company a decade from now. They are the biggest on-demand software-as-a-service player and have the vision and leadership to grow.

    What do you think about Redpoint Venture’s focus? Are there any companies that come to mind that you might suggest, if you were to meet with Satish and his team? Let us know by posting in the comments section below.

    Discuss


  • Will One Company Become the Dominant Player in Cloud Computing?

    OneCloudRing.gif

    With each new milestone in technological evolution we’ve seen a company emerge as the clear leader. In the current landscape, we observe this happening in several key parts of the marketplace including networking, search and operating systems.

    Cloud computing is a new disruptive force that makes us ask whether we’ll see the future of the cloud dominated by a single company. In this multi-part series, we’ll take a look at a handful of companies and envision what the world might look like if, in fact, they win it all. We’ll also analyze what it will take for a new company to rise up and claim the leadership role in this chapter of computing.

    Sponsor

    Dominance Happens: A Bit of Recent History

    There has been a love/hate relationship with companies that dominate markets. On one hand, it’s us consumers who make it happen. But when they become giants, we cheer as government regulators and competitors knock them down.

    courtHouse.jpg

    Microsoft has faced this issue perhaps more than any company in the past few decades. When the browser battles were in full swing in the late 1990s, Microsoft was taken to court by the Department of Justice for antitrust violations.

    In this note released in 2000 – Technology, Market Changes, and Antitrust Enforcement – Microsoft evaluated whether it is consistent with public welfare for a company to “win” a technology market, and what it means to have a network effect in technology.

    Microsoft makes the point that no technology company will hold a dominant position for long if it doesn’t innovate and expand the market definition. Additionally, if a company doesn’t find the right balance of trust and pricing with its customers, new technologies will find a way into the market and cause customers to defect.

    Point: A Dominant Vendor Will Emerge in the Cloud

    moutainPeakCloudSmall.jpg

    Taking these factors into consideration, we believe several points support the argument that a dominant player will emerge in cloud computing. Due to the nature of market forces, a single vendor will emerge as the clear leader in offering cloud solutions.

    • First mover advantage: We’re already seeing amazing things happen at first-movers like Amazon, which are defining products and pricing. This gives them an advantage in fueling further growth by learning and iterating on solutions in the market. Being first in an infrastructure-driven business will help them reach a scale that others just cannot reach easily – and potentially price where others can’t match.
    • Vendor lock: Once you get started with an infrastructure provider, it becomes interwoven into business operations. Given the current nature of the cloud (e.g., few standards, a lot of innovation), being first with leading solutions adds more momentum to the first-mover that wins strategic customers.
    • Strategic synergies: When we look at the combination of cloud computing and collaboration, we see a natural fit in services that meet more needs and take more market share. It may just work out that bundling works also in the cloud and creates the network effect that Microsoft is famous for. Cisco is also partnering across the landscape, with a focus on preparing the network for the cloud. By making it easier to manage your cloud with Cisco gear, it will provide IT leaders a reason to expand their relationships today, and stay tomorrow.
    • Acquisitions and Partnerships: Companies that buy their way into the market will be a big factor in putting momentum behind their offerings. Companies to watch: VMware, Cisco, Oracle. These companies are already showing that the race is on to win the cloud through aggregation of capabilities. Cisco has a blog dedicated to cloud computing, and Oracle is going on tour sharing its ambitions for the cloud.

    Counterpoint: A Dominant Company Will Not Emerge in the Cloud

    Perhaps no single organization will have the ability to create a dominant foundation in cloud computing. Instead, we’ll see many types of solutions as equal peers in the market.

    In a way, this runs against the grain of the existing technology landscape and our history with successful innovations. Maybe that is why we love the idea of the cloud itself?

    • It’s too big to own: One big reason to doubt a single dominant force in the cloud is that it feels like owning the Internet. Even Cisco with its strengths can’t make such a claim. Perhaps the cloud is the perfect market, where the barriers of entry are low enough that continual evolution will occur.
    • It’s a movement, not a layer: Another argument against the cloud having a dominant player is its fuzzy definition. There are many parts and pieces to it, and it’s not clear today what it would mean to “win” the cloud computing market.
    • Portability will keep vendors in check: If customers demand solutions where they can move from vendor to vendor freely, it will impact the landscape. Companies with cloud solutions in the marketplace could be required by these customers to remove barriers to moving data and services between different entities. Additionally, standards and best practices may emerge that allow companies and individuals to move freely between providers. In this world, it will become a fluid market that prevents vendor lock and promotes pricing and trust as brand differentiators.

    A Glimpse at Potential Futures

    We’ve compiled a list of companies worth reviewing as candidates as possible dominant players in cloud computing. We’ll be looking at their brand and the available assets that could be leveraged to achieve this position. Finally, we’ll take a fresh look at what it might feel like if they succeed and shape the brave new world of cloud computing.

    The list of candidates we’re analyzing includes: Google, Microsoft, Apple, VMware, IBM, HP, Cisco, Amazon, Salesforce, Facebook, and our favorite: the new startup you add to our list by posting a comment below.

    Please let us know what your hopes and fears are for the cloud computing marketplace. Are there any companies we should add to our list (or remove)? What’s your take: is there one company today that is best positioned to win the cloud?

    Photo credit: reddodo & savingfutures

    Discuss


  • Is Virtualization to Windows what Windows was to DOS?

    dinosaur.jpgEvolution happens.

    When Windows first arrived on the scene, there were lots of questions in the industry, like “Will people use it?”, “I prefer the command line”, or “Does it take up too much processor?”. Similar questions have been asked of the virtual server. “Is it secure?” and “Is it easy to manage?” are common questions for IT leaders considering virtualization today.

    With the constant state of technology evolution in mind, will the virtual server win the day and become the de facto pattern for software running in the datacenter?

    Sponsor

    Taking the Pulse of the Industry

    Virtualization.info is a site that has been aggregating news and trends for the last several years, keeping track of industry predictions, especially as they change from year to year. For example:

    In May 2007, for example, Gartner predicted that virtualization would be part of nearly every aspect of IT by 2015.

    In April 2008, Gartner also said that 4 million virtual machines were expected by 2009, while we would have 611 million virtualized PCs by 2011.

    In October 2009, Gartner released a press announcement disclosing that only 16% of workloads run inside virtual machines.

    Recently, Gartner predicted that this figure will reach around 50% by 2012, equal to 58 million deployed virtual machines.

    The analyst firm suggests starting small and growing as your team gains experience. “Gartner advocates a ‘start small, think big’ approach to virtualized server deployments that begins with a specific project but builds towards a wider strategic plan that includes management and process changes.”

    However, once that is behind you, sometimes it is best to just go all the way.

    One Company Goes End-to-End

    HayGroup_logo.gifHay Group is a global management consulting firm with 85 locations in 47 countries. The company has partnered with Forbes Magazine to help craft the “World’s Most Admired” list.

    Worlds-Most-Admired-logo-2009-200x67.jpg

    Hay Group’s lifeblood is its IT infrastructure. It has adopted vSphere 4 to build an internal cloud where users can access infrastructure and provision servers as needed, then extend that cloud by leveraging VMware vCloud providers – tapping into additional computing resources when needed. For instance, if its business processes require additional resources, but only once every 30 days, Hay Group can lease those extra resources from a vCloud provider for a day each month.

    This makes computing resources much more like a utility: when you want more, turn it on; when you don’t, turn it off. We wonder what it would be like if it were that easy to manage human resources.
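    To see why the once-a-month lease is attractive, compare owning burst capacity year-round with renting it for one day per month. The prices below are made-up placeholders for illustration, not VMware or Hay Group figures:

```python
# Made-up placeholder prices for illustration only.
burst_servers = 20                   # extra servers needed for the monthly job
owned_cost_per_server_month = 150.0  # amortized monthly cost of owning one server
leased_cost_per_server_day = 10.0    # on-demand daily rate from a vCloud provider

# Owning: pay for the capacity every month, even when it sits idle.
yearly_if_owned = burst_servers * owned_cost_per_server_month * 12

# Leasing: pay only for the one day each month the capacity is used.
yearly_if_leased = burst_servers * leased_cost_per_server_day * 12

print(yearly_if_owned, yearly_if_leased)
```

    Under these assumptions the idle-most-of-the-time capacity costs $36,000 a year to own but only $2,400 a year to lease on demand, which is the whole economic argument for the utility model.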

    The Future

    With companies going full-on virtual, it seems a new pattern will emerge as the dominant path over the next several years. We’ll see companies stand up virtualization-powered data centers for their computing resources, with cloud resources standing by to handle the extra load when needed.

    If you had the budget and time to start fresh with a new data center architecture, would you virtualize your computing resources end-to-end?

    Photo credit: slworking

    Discuss


  • Loving the iPad: A Real Computer for Virtualizing Enterprise Apps

    ipadLeadFeb2010.jpgThe iPad is dropping soon. The question remains: how big an opportunity is it for the enterprise? Today we take a look at the work being done by software virtualization leader Citrix to get ready to stream applications to the iPad. And it looks more promising than ever that enterprises can move quickly to supporting a tablet experience within their four walls.

    Building on the massive momentum of the iPhone, software virtualization (running non-native apps directly on the iPad) allows existing apps to run on the iPad without changing them. Citrix Receiver, an application designed to stream software from another machine to your iPhone, is being prepared for the iPad and will also be able to interact with existing Windows applications in production environments.

    Sponsor

    The Ultimate Mashup: iPad Enterprise-Ready on Day 1

    iPhoneExcelGraph.png

    If your application isn’t designed for Safari, or uses media objects that don’t run on the iPhone, using Citrix Receiver – along with some design considerations – can give you an amazing mobile-ready experience for your existing applications. The Citrix Community Blog takes a look at how to optimize your current Web applications and desktop tools for the iPhone and iPad.

    Attention to Detail

    iPadName.pngWe thought that perhaps all of this was too good to be true, so we reached out to Chris Fleck of the Citrix mobile team. Here’s what we learned.

    • Can the iPad processor handle virtualized software and display it smoothly? The 1 GHz A4 processor looks clearly up to the task of delivering Web and desktop applications to the device and rendering them in real time. Citrix Receiver is already working on the iPhone, so at first glance it looks like scaling up to the bigger screen works nicely with the faster processor.
    • What about the issue of background processing? This was one of the obstacles we were initially wary of. Today, the iPhone does a great job of enabling hooks within applications to remember their state at shutdown, so when the user returns, the same map or email is there. Citrix Receiver does a similar thing, but goes even further by allowing the application (if wanted) to continue running in the background on the host. And, according to Fleck, IT can set a policy on whether the application requires a new logon, based on your preferences.
    • Can it be branded with my own icon? So, we wondered, can a team customize this experience so it carries the proper branding? Citrix Receiver allows you to give the application a custom icon, which is preset to a specific location on the home screen of the iPhone/iPad and tied to a specific Citrix-hosted app.
    • Screen size. Steve Jobs said on stage at the Jan. 28 event that the iPad is “the best Web browsing experience you’ll ever have.” One of our questions was whether the iPad would work well for existing applications. The good news as reported by Fleck on his blog: “It turns out the 9.7 inch display on the iPad with a 1024×768 screen resolution works great for a full VDI XenDesktop. Windows applications run unmodified and securely in the data center, and even multiple applications at once.”
    • Interactions: Mouse, gestures, landscape? Okay, this is an area that is going to be hard to confirm until we touch it, but we asked some hard questions. First, whether gestures such as swiping, zooming, and the virtual keyboard will be supported. The answer is yes. Those are in the iPhone Receiver application now and are being worked on behind the scenes. In short, with the native functions in the iPhone SDK and some design work by Citrix, a good experience will be delivered out of the box.
    • What about video, Flash, or processor-heavy applications? One use we expect to be important is streaming protocols. This is an area where it is not yet clear that the processor has enough juice to keep up with streaming video. In exploring this with the Citrix team, we discussed ways to use smart application design (e.g., moving video streaming out of the Receiver and into native apps) so that streaming video or Flash applications with lots of screen activity are optimized for viewing on the mobile device. Standard Flash applications should be no problem at all. It’s something that is still speculative in our minds, but this could very well be a place where a mixed-mode design could support streaming needs and also support native Web applications.
    • Peaceful co-existence: Native vs. non-native apps joined as one? This is one of the areas we dove into. Is it possible for a user to have a great experience in the virtual session and then move back and forth to native applications? Apple, by being forward-thinking in its integration of Web applications, has provided several tools to make this easier: proxy settings to open native apps when links are clicked, and the ability to drag virtual apps (like Web apps) to the home page of the iPad or iPhone go a long way in giving developers the tools they need.

    Citrix Receiver does part of the work by creating a way for a Web link to have a custom icon and to “remember” its session, so that when the application is clicked, it goes to the right place in the virtual application. What does this mean? Since Safari running on the iPhone/iPad cannot display a Receiver app at the same time, the real enterprise app can keep running on a Citrix server and be quickly reconnected after the user leaves Safari.

    Health Care Opportunity

    win7ipad4.jpgA few weeks back, we explored the opportunity for an iPad-like device in health care. In the context of virtualization leading the way to enterprise adoption, the hospital may be a sweet spot for innovation.

    If you’re an Epic or Cerner shop already, you are likely using software virtualization to deploy EHRs (electronic health records) to the PCs in your hospitals. Citrix is already in this software virtualization space with its XenDesktop and XenApp products. Right now, we can picture the smiles on the folks in Kaiser Permanente’s IT team. Their jobs just got more interesting, an effect we expect to see rippling across the industry.

    Who Wins

    Microsoft Windows: Applications get another client, and Microsoft wins by keeping its customers happy with support for the newest innovation in technology.

    Apple: The iPad will be able to run native apps and virtualized enterprise apps at the same time. That means a new opportunity in the iEnterprise and more sales into the channel. Perhaps it will take the iPad to finally bring the iPhone inside the enterprise as well.

    Citrix: It goes without saying that the XenDesktop and XenApp clients will rely heavily on this capability. New customers may emerge, and existing customers will get massive value from their existing relationships with Citrix.

    Epic and Cerner: This is a bit harder to predict, but we suspect that even though these vendors are moving forward with native iPhone applications, it can only be good news to see their existing products supported on the iPad. With that, the work of optimization can begin. In the future, we can see these applications being tuned to enable “virtual but custom” views on the iPad and iPhone, while supporting the investments already in place today.

    Cisco’s network: Having been a Cisco employee for five years, I always like it when the network wins. Here it clearly does: streaming applications in real time saves time and money, and brings more value to real-time connections.

    Users everywhere: And finally, those of us who spend our time in the large enterprise. It just wouldn’t be fair for another class of cool technology to pass the knowledge workers by. This might be the time to investigate how your team can get iPads in hand – sooner rather than later.

    Are you thinking about the iPad and running XenDesktop today? Do you see holes in this approach to bringing the tablet to the enterprise?

    Discuss


  • Paperboy 2.0: Using the Cloud to Get Paid for Application and Content Subscriptions

    ariaLogoJan2010.jpgFor content and application developers, there is more opportunity than ever to monetize subscriptions. The Apple App Store has sparked a revolution in the mobile space, generating billions of dollars for Apple and spawning look-alike services from nearly every mobile vendor. In iPhone OS 3.0, Apple included Store Kit, which allows an application such as a game or a news source to offer subscription services.

    Additionally, some of the crown jewels of the content industry – The Wall Street Journal and The New York Times – have been exploring the implications of a pay wall for both mobile and Web access to content. The news industry is in the midst of defining where free versus paid access is appropriate. All in all, it’s a complicated issue, but the signs are clear: not everything will be free, no matter how hard Google tries to make all content available. For the content and apps that individuals want to pay for, Aria Systems is making it easy for companies to manage the connection between their assets and the users who want to access them.

    Sponsor

    Subscription and Billing in the Cloud

    ariaChart.jpgThis week, we had a chance to sit down and talk with Ed Sullivan, CEO of Aria Systems, to learn more about subscription payment in the cloud, covering everything from casual games to enterprise-class applications.

    It gets complicated for developers implementing such a subscription service to consider all of the details of subscription user tracking and revenue recognition. Many of these scenarios require detailed business logic that can take time away from the core offering. So, the question becomes: is there a way for a content or application provider to get a handle on all of its customers across different channels through a cloud offering that can be connected to the different form factors applications are delivered into?
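    To make the revenue-recognition point concrete, here is a minimal sketch of the kind of bookkeeping a billing platform abstracts away. It assumes a simple flat monthly plan and straight-line recognition; the class and field names are our own illustration, not Aria's API:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Subscription:
    """Tracks one subscriber's plan and the revenue recognized so far."""
    user_id: str
    monthly_fee: float      # amount invoiced each month
    start: date
    months_billed: int = 0
    recognized: float = 0.0  # revenue earned as service is delivered

    def bill_month(self):
        """Invoice one month in advance: cash collected, not yet earned."""
        self.months_billed += 1

    @property
    def deferred(self):
        """Cash collected for service that has not yet been delivered."""
        return self.months_billed * self.monthly_fee - self.recognized

    def recognize_month(self):
        """At month's end, move one month of fees from deferred to earned."""
        if self.deferred >= self.monthly_fee:
            self.recognized += self.monthly_fee


sub = Subscription("reader-42", 9.99, date(2010, 3, 1))
sub.bill_month()        # collect $9.99 up front
print(sub.deferred)     # all of it is still unearned
sub.recognize_month()   # one month of service has now been delivered
print(sub.recognized)
```

    Even this toy version shows why providers outsource the problem: add proration, refunds, multiple plans, and per-channel pricing, and the logic quickly dwarfs the core product.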

    Big or Small Businesses

    One benefit for on-demand companies is that they can maximize their back-office functionality with a platform like Aria, which integrates billing, customer management, and marketing tools into a single, on-demand application. In our interview, Sullivan pointed out that Aria plugs into the QuickBooks, Salesforce, and NetSuite systems that companies may have deployed today.

    Case Study: iPad Subscriptions for Content Providers

    steveiPadFeb2010.jpgOne company, Issuu, has already signed on with Aria in order to be ready for subscription revenue management on the iPad. The iPad is a great example of a platform where a traditional media provider may want to charge for the content itself, or charge the individual for subscription services across form factors.

    Although we don’t know who Issuu’s customers are and whether a brand like the New York Times is part of the mix, it seems ideal to get news on the iPad and to pay once for Web, mobile, and print versions. Paperboy, please deliver a copy of the Times to our iPad.

    As far as cloud applications go, getting paid for consumer subscriptions is an important piece of the fabric for managing customer relationships. Are you a developer who has grappled with these issues before? What do you think about getting this service from the cloud?

    Photo credit: curiouslee

    Discuss