Author: Joe Weinman

  • The Cloud in the Enterprise: Big Switch or Little Niche?

    Cloud computing — where mega-data centers serve up webmail, search results, unified communications, or computing and storage for a fee — is top of mind for enterprise CIOs these days. Ultimately, however, the future of cloud adoption will depend less on the technology involved and more on strategic and economic factors.

    On the one hand, Nick Carr, author of “The Big Switch,” posits that all computing will move to the cloud, just as electricity — another essential utility — did a century ago. As Carr explains, enterprises in the early industrial age grew productivity by utilizing locally generated mechanical power from steam engines and waterwheels, delivered over the local area networks of the time: belts and gears. As the age of electricity dawned, these were upgraded to premises-based electrical generators — so-called dynamos — which then “moved to the cloud” as power generation shifted to hyper-scale, location-independent, metered, pay-per-use, on-demand services: electric utilities. Carr’s prediction appears to be unfolding, given that some of the largest cloud service providers have already surpassed the billion-dollar milestone.

    On the other hand, at least one tech industry luminary has called the cloud “complete gibberish,” and one well-respected consulting group is claiming that the cloud has risen to a “peak of inflated expectations” while another has found real adoption of infrastructure-as-a-service to be lower than expected. And Carr himself does admit that “all historical models and analogies have their limits.”

    So will enterprise cloud computing represent The Big Switch, a dimmer switch or a little niche? At the GigaOM Network’s annual Structure conference in June, I’ll be moderating a distinguished panel that will look at this issue in depth, consisting of Will Forrest, a partner at McKinsey & Company and author of the provocative report “Clearing the Air on Cloud Computing”; James Staten, principal analyst at Forrester Research and specialist in public and private cloud computing; and John Hagel, director and co-chairman of Deloitte Consulting’s Center for the Edge, author and World Economic Forum fellow. We may not all agree, but the discussion should be enlightening, since ultimately, enterprise decisions and architectures will be based on a few key factors:

    Drivers and Barriers: These can include enhanced flexibility and agility from on-demand, scalable resources; reduced total cost via optimal hybrid solutions; accelerated time-to-market via ready-to-ware applications and innovation platforms; application architecture constraints and requirements; security and more.

    Development and Deployment Options: Solutions can be built in-house from scratch; use pre-built software on owned, dedicated infrastructure; go all-cloud; be outsourced; or combine any of these approaches.

    Metrics and Models: A true apples-to-apples comparison of financials, risk, compliance, customer experience and competitiveness is tricky, because many factors must be taken into account: sunk costs and depreciated assets; migration costs; the marginal capital investments and ancillary costs required to implement and transition to robust solutions; and power, cooling, space, management, administration, certification and training. Stair-step effects complicate matters further: new capacity is provisioned in large blocks that sit underutilized at first, and the need for one additional quantum of compute capacity can drive construction of an entire new data center (see the cost sketch after this list).

    Trends: Differences between enterprise and cloud cost structures will shift over time based on competitive intensity, scale economies and learning curve effects, as well as technology and best practices diffusion. There is a widespread belief that larger cloud providers have dramatic scale economies, but these may be illusory or unsustainable, since the same building blocks — servers, storage, automation tools, even containerized data centers — and potentially the same options (such as optimal site location) may be available to both enterprises and cloud providers.

    Demand: Variable and unpredictable customer demand — due to macroeconomic factors, bullwhip effects in the supply chain, and fads and floods — impacts the total cost/benefit equation, while the bottom-line benefits of pay-per-use services vary with the shape of the demand curve, as the sketch below illustrates.
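    To make the stair-step and pay-per-use dynamics concrete, here is a minimal sketch in Python. The demand series, block size, unit costs and price premium are all hypothetical; the point is only the shape of the comparison: owned capacity must be sized for the peak in whole blocks, while pay-per-use charges a premium rate only on what is actually consumed.

        import math

        # Hypothetical peak demand per period, in arbitrary capacity units
        demand = [120, 340, 80, 900, 150, 410, 60, 700]

        BLOCK_SIZE = 250            # owned capacity arrives in stair-step blocks
        OWN_COST_PER_UNIT = 1.0     # cost per owned unit per period (hypothetical)
        CLOUD_PRICE_PER_UNIT = 1.6  # pay-per-use price per unit consumed (hypothetical premium)

        # Owned: provision for the peak, rounded up to whole blocks, paid every period
        blocks = math.ceil(max(demand) / BLOCK_SIZE)
        owned_capacity = blocks * BLOCK_SIZE
        owned_cost = owned_capacity * OWN_COST_PER_UNIT * len(demand)

        # Pay-per-use: higher unit price, but only for units actually consumed
        cloud_cost = sum(demand) * CLOUD_PRICE_PER_UNIT

        utilization = sum(demand) / (owned_capacity * len(demand))
        print(f"Owned: {blocks} blocks = {owned_capacity} units, "
              f"cost {owned_cost:.0f}, utilization {utilization:.0%}")
        print(f"Pay-per-use cost: {cloud_cost:.0f}")

    In this toy example the spiky demand leaves owned capacity roughly a third utilized, so pay-per-use wins despite a 60 percent unit-price premium; with flat demand the comparison tilts the other way. Roughly speaking, pay-per-use tends to win whenever the price premium is smaller than the peak-to-average demand ratio.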

    Consider all of these factors together. Nick Carr’s predictions will likely be realized in cases where a public cloud offers compelling cost advantages, enhanced flexibility, improved user experience and reduced risk. On the other hand, if there is a high degree of technology diffusion for cloud enablers such as service automation management, limited cost differentials between “make” and “buy,” and relatively flat demand, one might project a preference for internal solutions, comprising virtualized, automated, standardized enterprise data centers.

    It may be overly simplistic to conclude that IT will recapitulate the path of the last century of electricity; its evolution is likely to be far more nuanced. Which is why it’s important to understand the types of models that have already been proven in the competitive marketplace of evolving cloud offers — and the underlying factors that have driven those successes — in order to see more clearly what the future may hold. Hope to see you at Structure 2010.

  • What If Metcalfe’s Law Is Wrong?

    Networks — be they telecom, social, transportation or otherwise — are the fabric of modern society. They provide immense value to consumers and businesses alike, enhancing mutual relationships and enabling the distribution of goods, services and information. But does this value grow as the size of the networks grow? And if so, how much?

    “Metcalfe’s Law” has long been accepted as characterizing the value — and value growth — of fully connected networks. It states that the value of a network is proportional to the square of the number of its nodes, which may take the form of devices — such as computers — or users, in which case a network connection is represented by a “friend” or “follower.” But there are times when the “law,” which has been used to explain network effects and justify mergers and acquisitions, appears to overstate a network’s value. And if that’s the case, what can service providers do about it?

    While the number of possible connections in a network is indeed proportional to the square of the number of nodes, value is not necessarily equivalent to number. After all, I may have 10 bills in my wallet, but it matters a lot whether they are $1 or $10,000 denominations.

    As I’ve previously observed, at Telecosm and in an accompanying mathematical analysis (PDF), there are several reasons that Metcalfe’s Law can overestimate the value of a network. First, typically only a fraction of the possible connections have value. Second, there are natural limits to consumption of that value. And third, the value of the entire network may decline over time.

    Convergent Value Distributions

    The number of links in a fully connected network is certainly proportional to the square of the number of nodes. If each connection had the same value as any other, then Metcalfe’s Law would be correct. What would that mean in practice? It would mean that you would spend equal time on the phone with each of the nearly 7 billion people in the world, that they would all friend or follow you, and that you would reciprocate. But humans don’t behave that way.

    In 1992, anthropologist Robin Dunbar posited that primates have neurobiologically based limits to the size of their social networks. For humans, “Dunbar’s Number” is roughly 150. This is exemplified by the fact that the most popular social networking site on the planet now has more than 400 million users, yet the average number of “friends” a person has is only 130, and only a small percentage of those “friends” actually communicate with one another. And although there are a variety of ways to slice the data, the largest microblogging site has close to 100 million users, but the average number of followers is roughly 126. Even if we were to assume that tweets have the same “value” as intimate face-to-face interactions and that electronic media might expand Dunbar’s number in some way, there is still an upper bound to the number of relationships, or even weak ties, that can be maintained. If the total value of such social media is related to engagement, and engagement is related to the number of friends, then in these cases value would grow linearly with the size of the network, rather than with the square of its size, as the sketch below illustrates.
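    A quick sketch makes the difference in growth rates visible. Generously equating a network’s “value” with its number of active ties, the following Python comparison (with illustrative membership counts) contrasts all-pairs counting with a Dunbar-style cap of 150 ties per member:

        DUNBAR = 150  # approximate ceiling on maintainable relationships per person

        def all_pairs(n):
            # Every pair connected: n(n-1)/2 edges, proportional to n squared
            return n * (n - 1) // 2

        def capped_ties(n):
            # Each member maintains at most DUNBAR ties, so the total grows linearly
            return n * min(n - 1, DUNBAR) // 2

        for n in (1_000, 1_000_000, 400_000_000):
            print(f"n={n:>11,}: all pairs {all_pairs(n):,} vs. capped {capped_ties(n):,}")

    At 400 million members, all-pairs counting overstates the capped tie count by a factor of more than two million, and the gap keeps widening, since one total grows with the square of n while the other grows in simple proportion to n.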

    Intrinsic Limits of Consumption

    Suppose you did have equal social interest in the nearly 7 billion people on the planet, or the more than 100 million shared video clips or even the more than 100,000 touchscreen phone apps out there. You then would run into intrinsic limits to your ability to benefit from all those relationships or consume all that content. Perhaps in the early days of TV it would have been possible for a single individual to consume all the content produced. Currently, however, nearly a day’s worth of content is uploaded to YouTube every minute. Assuming that all those clips did have equal value, even a multitasking insomniac couldn’t keep up. All networks have intrinsic upper limits of consumption, be they bandwidth or dollars or time or attention span.
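    The arithmetic behind that claim is quick to check. Taking the figure above of roughly a day’s worth of video uploaded every minute:

        # Back-of-envelope: a day's worth of video is uploaded per wall-clock minute
        upload_rate = 24 * 60   # minutes of new video per minute of real time
        viewing_rate = 1        # even a nonstop viewer watches one minute per minute

        share = viewing_rate / upload_rate
        print(f"A round-the-clock viewer sees at most {share:.3%} of new uploads")
        # ~0.069 percent, and the share shrinks as upload volume keeps growing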

    Holistic Network Value Declines

    Even if all nodes were of equal value, and there were no limits to consumption, well, people get jaded. Emotional rewards from novel stimuli are processed by dopamine receptors in the striatum, but the brain is built to habituate, that is, to respond less strongly to repeated stimuli. What this means is that an entire social or content network may “grab” you at first, or even for a couple of years, but this infatuation may eventually wear off, and you’ll depart in search of the next new thing. Technological progress can also cause the value of the entire network to decay — consider what the web and email have done to the value of fax networks.

    Strategies

    There are ways to manage these three effects, however.

    If the network node values follow a convergent distribution, ensure that whatever value is present can be extracted by reducing or eliminating core bottlenecks and enhancing the process of discovery. Specific approaches such as scalable non-blocking network infrastructure, content delivery networks, tagging, recommendation and search engines can help.

    To extract maximum value when there are intrinsic limits of consumption, not only is removing access bottlenecks effective, but so are personalization, richness, context sensitivity and multitasking facilitation.

    And to keep a given network exciting and the dopamine system revved up, new features, content or applications can help. Even if the core “network” — whether social site or app store — remains the same, using a platform for new content or apps can continue to trigger the pleasure receptors associated with novelty, maximizing value and engagement.

    Overall, the behavior of real-world networks isn’t always as simple as what’s represented by Metcalfe’s Law. However, understanding their underlying characteristics can help users and service providers maximize their value as well as create new business opportunities.

  • How Falling Prices Have Created Video Ubiquity

    With Super Bowl XLIV just hours away, it’s a little late to run out and take advantage of the insane sales on big-screen TVs. But that doesn’t matter as much as it would have a few years ago, as prices have been heading steadily lower not just for displays, but all elements in the video value chain.

    Improvement in the video price/performance ratio means $99.99 can now buy an HD video camera roughly the size of three fingers, a pen that shoots video or a digital photo frame with video playback. Such a “squanderable abundance” of video capability is leading to video ubiquity, with transformational consequences for both networking and IT.

    Video Ubiquity

    That’s because lower-cost CMOS and CCD image sensors don’t just mean lower-cost video cameras; they mean ubiquitous video embedded everywhere. By analogy, consider computers, which used to be multimillion-dollar monuments encased in glass house data centers. Moore’s Law didn’t mean (just) cheaper data centers, but that compute power is now found in everything from thermometers to toasters to toys. Today’s car buyers often focus less on style and performance than on information technologies such as navigation systems, accident alerting, and on-board entertainment systems.

    In other words, when things are cheap enough, it makes good business sense to leverage that so-called squanderable abundance in order to differentiate products or enhance customer relationships. Greeting cards today cost $4.95 whether they are just paper and glitter or can record and play back a voice message. They’ll still be $4.95 tomorrow, but be capable of recording and playing back a video greeting. At some point it will make sense for manufacturers to build a two-way live customer service videoconferencing capability into each dryer or copier or refrigerator, even if it’s only used once a year — or never used at all. Moreover, why wait for the customer to place that video call when cameras mounted inside the dryer can easily report that the drum belt is fraying?

    Transformational Impact

    The impact of more video devices in more places is the consumption of more bandwidth than ever before, which will transform networking. And more processing and storage will be required than ever before, which will transform IT, including cloud computing.

    Today’s HDTV streams need somewhere between 4 and 7 megabits per second. Streams in 4K (or, in a few years, Ultra-HDTV) will need tens of megabits per second, just for one channel. Increase that further for 60 frames per second, finer gradations of color, 3-D, and multiple screens, and network capacity will need to increase ten- or twenty-fold, or more.
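    A rough calculation shows how quickly these factors compound. The baseline comes from the figures above; the multipliers are assumptions for illustration, not measurements:

        hdtv_mbps = 5        # midpoint of today's 4-7 Mbps HDTV stream
        resolution = 4       # roughly 4x the pixels for 4K/Ultra-HD-class video
        frame_rate = 2       # 30 -> 60 frames per second
        color_and_3d = 1.5   # finer color gradations plus stereoscopic 3-D
        screens = 2          # simultaneous streams per household

        per_stream = hdtv_mbps * resolution * frame_rate * color_and_3d
        household = per_stream * screens
        print(f"{per_stream:.0f} Mbps per stream ({per_stream / hdtv_mbps:.0f}x today), "
              f"{household:.0f} Mbps per household")

    Even with these conservative multipliers, a single stream lands at roughly 12 times today’s HDTV rate, and just two simultaneous streams push a household into the twenty-fold range.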

    Consumer networks already carry rapidly growing volumes of user-generated video, IPTV and peer-to-peer traffic, and Cisco forecasts that video will account for 90 percent of network traffic by 2013. Sure, there’s text and images and spreadsheets and slideshows traversing networks too, but it takes a lot of 140-character tweets to equal one full-length motion picture. Enterprises are increasingly adopting mobile, desktop, and immersive telepresence solutions. Meeting all this demand will require increased investment in wireless and wired networks.

    IT will also be transformed. After all, how effective will databases that were designed for alphanumeric data be when a majority of future IT expenditures will be for acquiring, managing and maintaining enormous repositories of unstructured video for security/surveillance, merchandising optimization, field service, collaboration, depositions, entertainment, or applications we haven’t imagined yet?

    The cloud will change, as it increasingly moves from just using multicore CPUs to also using GPUs, due to cost-effectiveness as well as the affinity that GPUs have for parallel compute-intensive tasks such as scene analysis, ray-tracing and compositing. Also, cloud-based video functions such as bridging, transcoding, transrating, and rate adaptation will become more important as they allow multiple users and devices using different technologies to interconnect.
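    As one concrete illustration of rate adaptation, here is a minimal sketch; the ladder, function name and numbers are hypothetical, not any particular product’s API. A cloud transrating service keeps a “ladder” of renditions of a single source stream, and each client is handed the highest bit rate its measured bandwidth can sustain:

        # Renditions produced by transrating a single source, in kbps
        LADDER_KBPS = [400, 800, 1500, 3000, 6000]

        def pick_rendition(measured_kbps, headroom=0.8):
            """Highest rung that fits within a safety fraction of measured bandwidth."""
            budget = measured_kbps * headroom
            fits = [rung for rung in LADDER_KBPS if rung <= budget]
            return fits[-1] if fits else LADDER_KBPS[0]  # fall back to lowest rung

        for client_kbps in (450, 2000, 10000):  # e.g., phone, DSL, fiber
            print(f"{client_kbps} kbps link -> {pick_rendition(client_kbps)} kbps stream")

    The same ladder idea underlies adaptive streaming generally: one source, many renditions, and a per-client selection that can change as network conditions do.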

    So while the game may be over in a few hours, we are just now kicking off a new era in video.
