Author: Jason Verge

  • Nlyte 7 Extends Further Into the Data Center

    The name of the game with data center infrastructure management (DCIM) has been aligning it with all of an organization’s business IT management. It’s what Nlyte has focused on in the latest iteration of its DCIM software.

    With version 7, the Nlyte suite now includes extensive real-time monitoring of power and virtualized resources, as well as centralized management of all data center assets. The suite includes a business intelligence engine, a contextual data repository, an intuitive web interface, improved workflow and business process management, global scalability, asset discovery and reconciliation, and resource visualization now extended to the rack level.

    “The complexities of the data center have grown, so in developing the seventh generation of our software, we have responded to hundreds of customer-inspired feature enhancements,” said Rob Neave, Co-Founder and CTO of Nlyte. “We’ve created a next generation data center management platform that leverages Nlyte’s patented intelligent asset placement technology, state-of-the-art data storage and retrieval techniques and business analytics for real-time business decision making. Nlyte 7 ensures the management of the data center fabric is aligned with existing business practices and ITSM solutions in place today.”

    The Nlyte data repository provides contextual relationships between all enterprise data center attributes. These interrelationships, and related workflow and analytics capabilities, support SLAs, technology refresh projects, data center consolidations and/or migrations, allowing for advanced capacity planning and facilitating “what-if” scenario planning.
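    The kind of “what-if” check such a repository enables can be sketched in a few lines. The rack and asset records below are hypothetical, not Nlyte’s actual data model:

```python
# Illustrative sketch only: a minimal "what-if" capacity check of the
# kind a DCIM repository enables. Rack and asset fields are made up.

def rack_headroom(rack, assets):
    """Return remaining power (watts) and space (rack units) in a rack."""
    used_watts = sum(a["watts"] for a in assets)
    used_units = sum(a["units"] for a in assets)
    return rack["max_watts"] - used_watts, rack["max_units"] - used_units

def what_if_add(rack, assets, new_asset):
    """Would adding new_asset stay within the rack's power and space budget?"""
    watts_left, units_left = rack_headroom(rack, assets)
    return new_asset["watts"] <= watts_left and new_asset["units"] <= units_left

rack = {"max_watts": 8000, "max_units": 42}
assets = [{"watts": 450, "units": 2} for _ in range(12)]  # twelve 2U servers

print(what_if_add(rack, assets, {"watts": 800, "units": 4}))    # fits
print(what_if_add(rack, assets, {"watts": 3000, "units": 20}))  # exceeds power budget
```

    A real DCIM tool layers workflow, discovery and analytics on top, but the planning question it answers per placement is essentially this one.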

    Visualizing the Infrastructure

    In addition, the Nlyte 7 suite offers contextual visualization that encompasses physical, logical and virtual infrastructure views. The Nlyte 7 DCIM platform offers what the company claims is the industry’s only business intelligence engine, providing enhanced real-time reporting and dashboard information, along with automated asset discovery and reconciliation on the network. It has direct support for Intel’s DCM real-time energy metric technology.

    The capabilities are extending both within a data center, and within an organization. “The operational value of DCIM is well understood,” said Rhonda Ascierto, senior analyst, datacenter technologies and eco-efficient IT at 451 Research. “Companies are now seeking greater financial insight and management of their datacenters—to address capacity limitations and to make IT service decisions. Nlyte Software is helping not just datacenter operators but also C-level and other executives better understand their large-scale data center environments.”

    Aside from the operational benefits, there’s a big push to make DCIM user friendly and to present it that way. Nlyte provided a customer example from Grasshopper, which provides phone forwarding and management for entrepreneurs and small businesses.

    “Before Grasshopper adopted Nlyte as our DCIM solution, we tracked our data center assets and managed capacity with spreadsheets and Visio drawings – that old system was time-consuming and nearly impossible to keep current,” said Craig Tata, Compliance Manager, Grasshopper. “With Nlyte, I am able to save five to six hours a week thanks to its repository tracking our data center assets, capacity and performance. With Nlyte 7, I am especially looking forward to the Physical and Logical Row Viewing with the Cabinet Device Overlay Reports as they’re able to discover and represent data that I would otherwise not have access to. Armed with reporting capabilities like these, I can begin to look forward to previously unforeseen ways to manage the Grasshopper data center to further our savings.”

    Grasshopper’s previous strategy – spreadsheets and Visio drawings – is what many companies still rely on to get by. Nlyte’s Mark Harris, Vice President of Marketing and Data Center Strategy, offers a good general analogy for the benefits of using DCIM.

    “You can do your taxes with an abacus, but why not use TurboTax (or other tax software)?” said Harris.

  • PeakColo: All Channel, All the Time


    The executive team at Denver-based PeakColo, which has experienced strong growth by focusing on sales of turnkey “white label” cloud offerings through reseller channels. (Image: PeakColo)

    PeakColo has recorded triple digit growth annually for the last few years, but you don’t often hear about it. That’s because PeakColo is often the provider behind the provider – the company is channel based, offering white-label cloud services that are resold by managed service providers (MSPs) and value-added resellers (VARs).

    The company was founded in 2006, and has powered Infrastructure as a Service (IaaS) clouds for a variety of customers, from local VARs to data center providers offering hybrid services. PeakColo enables its customers to supply IaaS under their own brand in a turnkey fashion. Customers include Arrow Enterprise Computing Solutions, Avnet Technology Solutions and CDW.

    “Our differentiator is we’re 100% building clouds through the channels,” said Luke Norris, Founder and CEO of PeakColo. “Growth has been great, 100% each year. The adoption in the last year has picked up, with bigger names, bigger customers.”

    Filling A Need for Resellers

    The company supplies and manages the nuts and bolts of an IaaS offering and acts as senior-level support. There are a lot of agents, value-added resellers and system integrators out there, and many are finding they simply can’t compete without a cloud offering. PeakColo helps them snap one on. “Our customers are predominantly in the mid-market, the $50 million to $150 million or so shop,” said Norris. A customer like Arrow has around 25,000 VARs in its channel that can leverage IaaS, which is ultimately provided by PeakColo.

    The company has three cloud offerings: RedCloud, BlueCloud and WhiteCloud. Red is the hosted private cloud, targeted at enterprises that need to meet critical governance and compliance requirements like PCI, HIPAA and SOX; it’s VMware-based, comes via a vDirector portal and has an SLA of 100 percent. Blue is the public cloud, and White is the white-label cloud. The company’s storage infrastructure is built entirely on NetApp.

    The company has distributed cloud nodes in major cities throughout the continental United States, with infrastructure in Seattle, Chicago and three nodes in Denver. “One of our delineators is to eventually be in every NFL tiered city,” said Norris. Last January, the company expanded its VMware vCloud-Powered platform into the New York and New Jersey metro areas through its partnership with NYI.

    Trends in Cloud

    In addition to a healthy channel market, the company is seeing several trends in cloud. While PeakColo doesn’t work directly with end customers, it has visibility into the channel, where customers are working with their VARs to put their cloud strategies together.

    “The big trend we’re seeing is an enterprise production push,” said Norris. “It’s no longer solely disaster recovery and development driving cloud. There’s also a tremendous uptick of ‘geo-dispersement’; customers are picking multiple nodes.” The company also says that VMware still seems to be dominant in terms of demand, but Microsoft’s Hyper-V is picking up.

    “There’s a dramatic uptick of organizations getting out of the enterprise consumption model,” said Norris, referring to the tech refresh cycle that plagues an organization that chooses to do everything on-premises. “Now, you’re able to drive business in a way that hasn’t been seen before.”

    “A tremendous driver is helping match applications to infrastructure,” said Norris. While price points are compressing and VARs are getting pinched, offering cloud and showing customers how to use it properly makes their role an integral one.

  • SoftLayer Infrastructure Supports 100 Million Gamers


    SoftLayer’s Singapore data center (pictured above) provides an international footprint that is key to some of the company’s game developer customers. SoftLayer says its infrastructure now supports more than 100 million gamers. (Photo: SoftLayer)

    SoftLayer now supports more than 100 million active game players online worldwide, and has added 60 new gaming companies to its customer list in the last two quarters alone, the company said this week.

    What’s SoftLayer’s secret sauce with gaming companies? In a world in which a couple of milliseconds of lag can be the difference between virtual life and death, and the inability to scale up with demand can kill a new game title, there is little room for failure on the part of the hosting provider. In the game hosting arena, success breeds success, allowing SoftLayer to build organic growth atop its track record.

    The company has built a solid reputation via gaming conferences and word of mouth. “It’s kind of a tight knit space, and they all talk,” said Marc Jones, VP of Product Innovation at SoftLayer. Notable game developers among SoftLayer’s customers include Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment.

    “Game developers don’t have the time or resources to manage their own complex infrastructure because they need to focus on their core business – developing great games, launching on time and keeping players engaged,” said Jones. “Because we understand the high stakes of their operations, we’ve tailored our cloud platform to meet gaming companies’ demands – from initial game release and explosive, overnight growth, to the performance and availability demands that come with everyday play.”

    SoftLayer provides these companies with the infrastructure to test, deploy, manage, play, and grow their games. SoftLayer’s global infrastructure platform spans more than 100,000 servers in 13 data centers across the U.S., Europe and Asia. The company’s ability to provide bare metal (dedicated servers) combined with a low-latency network is key to its appeal.

    Bare Metal vs. Public Cloud

    “The ability to have hybrid solutions from a bare metal standpoint is perfect for a gaming world with a lot of real-time interactions, multi-player and social aspects,” said Jones. “There’s constant communication. A lot of the companies are capturing interaction points to understand how people are engaging, and using this info to help them tailor the experience. Bare metal gives you much better performance than public cloud, and they have the ability to use public to scale when needed during spikes.”

    In most of those cases, gaming companies are running a database on the back end, and many are leveraging NoSQL database options in particular, according to SoftLayer. One example is Hothead Games, which has released six games that have hit the Top 10 in both the Apple and Google Play app stores, including the BIG WIN Sports series and the recently launched Rivals at War. The company runs its back-end database, Cloudant, on the SoftLayer platform, enabling Hothead Games to scale massively and economically, handling billions of database transactions per day while delivering a superior experience to gamers.
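    Cloudant exposes a CouchDB-style HTTP API, so a pattern like this typically stores each game event as a JSON document and writes them in batches. A rough sketch follows; the event fields are hypothetical, and a real client would POST the payload to the database’s `_bulk_docs` endpoint:

```python
import json
import time
import uuid

# Sketch only: building a CouchDB/Cloudant-style bulk-write payload for
# hypothetical game telemetry. No network call is made here; a real client
# would POST this body to /<database>/_bulk_docs.

def make_event(player_id, action):
    return {
        "_id": uuid.uuid4().hex,   # client-assigned doc ID avoids an extra round trip
        "type": "game_event",
        "player_id": player_id,
        "action": action,
        "ts": int(time.time()),
    }

def bulk_payload(events):
    """Request body for a bulk insert -- one HTTP request for many writes."""
    return json.dumps({"docs": events})

events = [make_event("p-%d" % i, "level_up") for i in range(3)]
body = bulk_payload(events)
print(len(json.loads(body)["docs"]))  # 3
```

    Batching writes this way is one reason a document store can absorb billions of small transactions per day without a round trip per event.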

    “From a compute standpoint, (bare metal cloud) is definitely what we see as an advantage,” said Jones. “On equal footing is our network. Maintaining a private network that interconnects all our data centers is appealing.” The company has its own private network, which allows it to deliver a predictable, low latency experience.

    The company’s international presence is also a selling point. “A lot of our bigger gaming customers have a lot of servers deployed in multiple data centers,” said Jones. “A few customers are active in Amsterdam, Singapore and the US.”

    Gaming Trends

    By all accounts the online gaming vertical continues to grow at a rapid pace. “We definitely see a lot of online games – Facebook style games, social games and mobile applications,” said Jones. “Those are the ones we’ve seen the most in the last six months to a year. We have hundreds of gaming customers, and the size of those customers is usually pretty substantial. They’ll build out that infrastructure, as they get popular.”

    The company can’t disclose its largest customers, but provided examples of mobile, first-person shooter (FPS) and MMO (massively multiplayer online) titles, all of which have unique needs.

    The Social Gaming Customer: Peak Games

    Peak Games is the largest and fastest-growing gaming company in Turkey, the Middle East and North Africa. With over 200 employees and 45 million gamers, it is one of the largest global social gaming providers. “The ability to add tens of thousands of users overnight is the value of working with SoftLayer,” said Safa Sofuoglu, CTO of Peak Games. “SoftLayer understands the needs of game developers. We can very quickly double our infrastructure requirements when one of our games takes off, and easily manage and support new users without compromising on performance while not incurring massive costs. SoftLayer gives us the flexibility to utilize what we need without being locked in.”

    THE FPS: Ballistic

    Rumble Entertainment has a first-person shooter named Ballistic. With an FPS, low latency is of the utmost importance, with players obsessing down to the millisecond. These are games of accuracy and precision. “We’re expanding our first-person shooter game Ballistic into Asian markets, and we wanted to partner with a cloud service provider that could deliver not only raw computing power but also high-quality network service,” said Jim Tso, senior producer for Rumble Entertainment. “SoftLayer’s data center in Singapore and global network footprint help us overcome any network latency issues, giving our users a great online experience.”

    THE MMO: Path of Exile

    “Path of Exile is unique among online action RPGs (role playing games) because players play on one large international realm,” said Chris Wilson, managing director for Grinding Gear Games. “SoftLayer’s data centers on multiple continents, and the free bandwidth between them, let us run servers local to the individual players while still allowing them to play with their international friends if they choose to. SoftLayer’s ability to provision new servers quickly allowed us to deal with the immense demand we faced when we launched Path of Exile’s Open Beta. We’re extremely pleased with SoftLayer and the server reliability that it allows us to offer our customers.”

  • RightScale: Cloud Providers Aggressively Slashing Prices


    The cloud pricing wars are on. Cloud management specialist RightScale says it is seeing aggressive price-cutting on the part of major cloud providers.

    The company has counted 29 price reductions over the course of 14 months from AWS, Google Compute Engine, Windows Azure, and Rackspace Cloud. Amazon led the pack with eight price reductions on core cloud services, while Rackspace had four, and Google and Azure each cut prices three times in eight months. RightScale tracks pricing with PlanForCloud, a tool that attempts to track cloud pricing across the major providers; it includes over 12,000 different prices across six cloud providers.

    Pricing remains volatile, partly due to competition. Rackspace introduced new tiered pricing for storage just this February that resulted in price reductions of as much as 25 percent; AWS, Azure and Google keep one-upping each other with price cuts.

    PlanForCloud is useful because cloud pricing is in no way uniform; comparing providers can often feel like comparing apples to oranges. There are several subsets of services, and these cuts have varying degrees of impact.

    RightScale, as a management platform for cloud, wants the underlying infrastructure to be as transparent as possible. It gives the following recommendations:

    1. Develop competency in cloud forecasting – Price cuts are a positive trend when justifying cloud, but they don’t eliminate the need to forecast as accurately as possible. Of course, part of the pitch for RightScale’s tool is that it assists in this regard.
    2. Consider all your options for price, performance, features and support – Each cloud provider has a different mix of features, performance and support, so pricing is only one consideration. Pricing often gives a cloud provider the edge in one arena, but you can bet it makes up that cost somewhere else.
    3. Efficiently use the cloud resources you have – Over-provisioning and running unnecessary resources are costly. Cloud doesn’t make sense if you’re not using it properly and efficiently.
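    A minimal forecasting sketch along the lines of the first recommendation. The hourly rates are made up for illustration; real prices change too often to hard-code, which is PlanForCloud’s whole premise:

```python
# Hypothetical numbers, for illustration only. A simple forecast multiplies
# expected usage by each provider's hourly rate for a roughly comparable VM,
# scaled by how heavily you expect to use what you provision.

HOURS_PER_MONTH = 730

# provider -> assumed $/hour for a comparable instance (made-up figures)
rates = {"provider_a": 0.12, "provider_b": 0.115, "provider_c": 0.13}

def monthly_cost(rate, instances, utilization=1.0):
    """Forecast: instances * hours * rate, scaled by expected utilization."""
    return instances * HOURS_PER_MONTH * rate * utilization

forecast = {p: round(monthly_cost(r, instances=10, utilization=0.8), 2)
            for p, r in rates.items()}
cheapest = min(forecast, key=forecast.get)
print(cheapest, forecast[cheapest])
```

    Note how the utilization factor bakes in recommendation 3: resources you provision but don’t use still show up in the bill unless you actually shut them down.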

    Keep RightScale’s position in mind here, as this information supports the use of its platform. However, it’s all sound advice regardless of vendor. RightScale has been going the extra mile to provide a transparent look into clouds, recently releasing an overview of major outages the company tracked across public cloud, private cloud and hosting providers. It all makes for a useful compendium when shopping around.

  • As Cloud Wars Heat Up, Server OEMs Bet on OpenStack


    The largest server vendors are now embracing a movement that at one time represented a threat: cloud computing. In doing so, these OEMs (original equipment manufacturers) are getting into the same business as some of their largest customers, which can be a risky proposition in the IT world. But in recent years, large cloud players have used their leverage to squeeze margins, threatening to commoditize servers. As margins have narrowed, cloud has altered the economics of the server world, and marquee server brands have been emboldened to launch their own cloud offerings.

    HP, Dell and IBM have all turned to public clouds based on OpenStack to remain relevant and position themselves to capitalize on enterprise hybrid strategies. The move by the “Big Three” also is certain to add momentum for OpenStack, the open source cloud computing infrastructure project which grew out of a collaboration of Rackspace and NASA, and has quickly built a stable of support that includes Red Hat, Intel, Dell, Cisco, AT&T, Canonical, and SUSE.

    The server vendors are seeking to build offerings that differentiate them from the public clouds offered by market leader Amazon Web Services (AWS) and Internet titans Google (AppEngine) and Microsoft (Windows Azure). In rolling out their own offerings, Dell, HP, and IBM are setting up a play for a hybrid sale in which they sell gear and run an open cloud. They’ve followed slightly different paths:

    • Dell has been part of the OpenStack community since its creation, and has worked closely with a number of key OpenStack partners, including Rackspace, Citrix, Opscode, Canonical, Intel, and others. Dell is also one of the industry’s most experienced partners to top global cloud services providers, including Facebook and Microsoft Windows Azure. It has also built out a global network of data centers to support its public cloud infrastructure.
    • HP announced its plans to support OpenStack in 2011. It sees its “converged cloud” project as an opportunity to enable customers, partners and developers with unique infrastructure and development solutions across public, private and hybrid cloud environments.
    • IBM developed SmartCloud before OpenStack was founded two and a half years ago. A year after joining the OpenStack community, IBM made a splash, announcing it had moved into open cloud architecture, which included a new cloud offering based on OpenStack. The company sees open standards as an opportunity for businesses to take full advantage of the opportunities associated with interconnected data, such as mobile computing and big data.

    “History has shown that standards and open source are hugely beneficial to end customers and are a major catalyst for innovation,” said Robert LeBlanc, IBM senior vice president of software. “Just as standards and open source revolutionized the Web and Linux, they will also have a tremendous impact on cloud computing. IBM has been at the forefront of championing standards and open source for years, and we are doing it again for cloud computing. The winner here will be customers, who will not find themselves locked into any one vendor — but be free to choose the best platform based on the best set of capabilities that meet their needs.”

    Competing with AWS

    The three server vendors are fighting incumbent cloud player Amazon Web Services with standards. They all see hybrid cloud as the endgame and want to retain their position in the enterprise as this evolution occurs. While it might seem that OpenStack has the lead in the cloud world based on the headlines, the fact of the matter is that AWS owns the lion’s share of the cloud market, more than two-thirds by most accounts.

    AWS had a head start in cloud computing and is aggressively pursuing the enterprise. As AWS continues to add enterprise-friendly features, it’s a very real threat to these OEMs. Features like Availability Zones and Auto Scaling groups make AWS very compelling to enterprises. The server vendors’ previous response was to simply attack cloud as not enterprise-ready, but now there’s a realization that cloud, for some uses, is inevitable. OpenStack is a way for OEMs to partner against AWS dominance.

    Chris Kemp, CEO of Nebula and former NASA CTO, would disagree that OpenStack is competing with Amazon Web Services, mentioning in a keynote during an OpenStack event that both have completely different use cases in the industry. However, having a public cloud offering to complement private cloud is a necessity – and while the use cases might be different now, AWS is going after all cloud business.

  • American Internet Lands $43.5 Million Credit Facility

    American Internet Services has refinanced at an opportune time, lowering its cost of capital thanks to historically low interest rates while securing capital for continued growth. Fortress Credit Corp, an affiliate of Fortress Investment Group, has provided the company a $43.5 million senior secured credit facility. Terms of the transaction were not disclosed.

    AIS was founded in 1989. It provides tailored data center and cloud service solutions to companies with an emphasis on security, compliance, connectivity, and customer service. AIS operates SSAE 16-compliant, SOC 1-, 2-, and 3-audited, redundant facilities in San Diego, Los Angeles and Phoenix. It has more than 600 enterprise customers worldwide, and is backed by private equity firms Seaport Capital, Viridian Investments, and DuPont Capital Management.

    “Refinancing AIS’ debt enables us to take advantage of historically low interest rates while providing resources for additional investment in new products and services, continuing our market expansion, and meeting the developing needs of our customers,” said Tim Caulfield, Chief Executive Officer at AIS. “We look forward to working with Fortress, a knowledgeable and experienced lender to the internet infrastructure sector.”

    Fortress Investment Group LLC is a global investment firm with over $53 billion in assets under management as of December 31, 2012. Founded in 1998, Fortress manages assets on behalf of over 1,400 institutional clients and private investors worldwide across a range of investment strategies – private equity, credit, liquid hedge funds and traditional fixed income.

    “This financing illustrates Fortress’s continuing support for middle market companies in the data center and cloud service sector,” said Ken Sands, a Managing Director of the Credit Funds, Fortress Investment Group. “AIS is a recognized regional leader in tailored data center and cloud service solutions and we are pleased to have arranged this financing that will support their future growth.”

    DH Capital served as advisor to AIS on the financing. DH Capital is a private investment banking partnership specializing in Internet infrastructure, telecommunications, and SaaS with a focus on M&A and capital placements.

  • Apple Hits 100% Renewable Energy in its Data Centers


    An aerial view of one of Apple’s two major solar panel arrays in Maiden, North Carolina, which supply electricity to help support the power requirements for a nearby Apple data center. (Photo: Apple)

    In the wake of pressure from the environmental group Greenpeace, Apple said Thursday that it has achieved 100 percent renewable energy at all of its data centers, including facilities in North Carolina, Oregon, California and Nevada. The company also is using renewables to support office facilities in Austin, Elk Grove, Cork, and Munich, and its Infinite Loop campus at Cupertino.

    The road to renewables was a formidable one. Apple doubled the size of an already huge solar array in North Carolina, buying another 100 acres of land to support the expansion. The two separate 100-acre solar arrays in Maiden, N.C. each produce 42 million kilowatt-hours (kWh) of energy annually. Apple also uses biogas from nearby landfills to power Bloom Energy Server fuel cells at its Maiden site.
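    As a back-of-the-envelope check on those figures, two arrays at 42 million kWh per year each work out to roughly 9.6 megawatts of average continuous output:

```python
# Sanity check on the reported numbers: average continuous power equals
# annual energy divided by the hours in a year (8,760).

annual_kwh_per_array = 42_000_000
arrays = 2

total_kwh = annual_kwh_per_array * arrays   # 84 million kWh/year
avg_mw = total_kwh / 8760 / 1000            # kWh per hour -> kW -> MW
print(round(avg_mw, 1))  # ~9.6 MW average across both arrays
```

    Peak output on a sunny afternoon would be several times that average, since solar arrays generate nothing at night.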

    Although it’s been secretive about the project, the company has been vocal in its plans to use renewable power exclusively for its new data center in Prineville, Oregon. That energy will come from a mix of sources, such as wind, hydro, solar and geothermal power.

    Facebook also has a data center in Prineville that uses an evaporative cooling system, in combination with the naturally moderate climate, to save on energy costs. Facebook initially faced heat from Greenpeace over using energy from PacifiCorp, which is derived largely from coal.

    Gary Cook, senior IT analyst at Greenpeace, called Apple out at an Uptime Symposium, saying that it and Facebook should “wield (its) power to alter the energy paradigm.” Apple has since stepped up in a big way. Since 2010, it has achieved a 114 percent increase in the usage of renewable energy at corporate facilities worldwide, up to 70 percent overall from 35 percent.

    “Apple’s announcement shows that it has made real progress in its commitment to lead the way to a clean energy future,” Cook said in a statement Thursday. “Apple’s increased level of disclosure about its energy sources helps customers know that their iCloud will be powered by clean energy sources, not coal.”

    Cook insisted that Apple “still has major roadblocks” to meeting its 100 percent clean energy commitment in North Carolina, where he said electric utility Duke Energy “is intent on blocking wind and solar energy from entering the grid.” Greenpeace called on Apple to disclose more details on its plans for using renewable resources in all its data centers.

    See Apple’s environmental impact statement for details of its announcement.


    Apple has also deployed a 10 megawatt installation of fuel cells in Maiden. The Bloom Energy Servers use biogas from a nearby landfill to generate electricity to support Apple’s data center operations. (Photo: Apple)

  • ProfitBricks Raises $19.5 Million For Its Muscular Cloud

    Profitbricks CEO Achim Weiss in front of a diagram of the company's InfiniBand data center network. (Image: Profitbricks)

    Profitbricks has raised a $19.5 million investment from the company’s founders and from United Internet AG, a European Internet services provider. United Internet is the parent company of mass market hosting giant 1&1 (you might have seen their commercials) so it’s starting to look like quite the web power play in Germany. The founders of ProfitBricks, Achim Weiss and Andreas Gauger, were also the founders of 1&1.

    Profitbricks has now raised $38.3 million since its founding in 2010. The funding will go towards development of ProfitBricks’ “virtual data center” offering, as well as helping the company expand into new industries.

    Profitbricks seeks to differentiate itself through the ability to provide both vertical and horizontal scale, flexibility in the network, and a data center design tool whose interface makes building a virtual data center a fairly easy and straightforward endeavor. The ability to perform live vertical scaling – that is, to run applications on one large server rather than many servers – combined with the InfiniBand network it uses, sets the company apart. ProfitBricks calls itself the “second generation of cloud infrastructure,” and it has been growing at a quick clip since launching.

    Check out a recent profile on the company here.

  • Joyent, Cloudant Launch Database Service Atop SmartOS

    The database as a service market is heating up, as cloud providers look to court application developers with promising offerings. The latest such move is the result of a deepening partnership between Joyent and Cloudant.

    Cloudant is now available on the Joyent high-performance cloud platform, and the two companies today announced a multi-tenant Cloudant cluster running in Joyent’s Jubilee data center in Ashburn, Virginia. Dedicated Cloudant clusters running on Joyent will be available this month and can be hosted in any Joyent data center.

    Cloudant initially partnered with Joyent back in April 2012. That partnership made Cloudant’s NoSQL DBaaS available on Joyent’s infrastructure.

    Joyent has always been tuned towards application developers. “We built Joyent to power real-time web, social and mobile applications, so it makes sense to have a DBaaS partner like Cloudant that’s geared toward operational application data,” said Steve Tuck, SVP and general manager at Joyent Cloud. “Giving Cloudant the ability to quickly deploy throughout the global environment of our public cloud service aligns with our focus on scalability and performance. That’s what our customers care about most: low cost, real-time systems that are easy to use and support their apps.”

    Joyent’s cloud was built with and runs on SmartOS, an open-source distribution of the OpenSolaris fork illumos, which is optimized to support high-scalability apps for cloud computing. Another differentiator is DTrace, a dynamic tracing framework for real-time troubleshooting that provides insight into global system performance. DTrace allows both Cloudant and Joyent to deliver increased application performance.

    “Collaborating with Joyent to run Cloudant on SmartOS is just another example of how we efficiently improve service,” said Alan Hoffman, co-founder and chief product officer at Cloudant. “Making sure operational data scales flawlessly with application code is challenging, which is why when we see technology that helps our customers, we start integrating it. Now, with SmartOS, we’re able to quickly provision Cloudant accounts across the global network of Joyent data centers.”

    One interesting feature of Joyent is it allows a customer to conceptually build a stack on its website. The company has been partnering with companies throughout the platform, application, data, infrastructure, and services layers.

    Cloudant recently received strategic investment from Samsung Venture Investment Corporation.

  • Siemens Brings Clarity to Crowded DCIM Market

    With more than 75 companies now offering tools under the wide umbrella of DCIM, it isn’t easy for a new player to make a splash. Unless that new player is global electronics and electrical engineering powerhouse Siemens, which has focused its ambitions on the data center and is heading into DCIM in a big way.

    Datacenter Clarity LC is the company’s foray into the world of DCIM (data center infrastructure management), a suite that combines IT management and facilities management functions. The company has thrown its muscle into this effort, boosted by a broad existing portfolio of data center solutions, a history in efficiency and a global talent pool of engineers in support.

    Meeting point between IT and Facilities

    The DCIM solution unveiled last month, Datacenter Clarity LC, consists of engineering and lifecycle management software tools that ensure uptime while optimizing energy and operational efficiencies to accommodate the rapidly changing needs of today’s data centers.  It integrates information from both IT and facility assets, workflows and work orders, and conducts “what if” analyses.

    “Datacenter Clarity LC can help you optimize capacity planning while driving operation and energy efficiencies,” said John Kovach, Siemens’ new Global Head of Data Centers.

    Datacenter Clarity has an open API architecture that facilitates interoperability with other systems. “Our vendor neutral solution supports more than 400 protocols from both the IT and facility perspective, giving customers total visibility of their data centers,” said Kovach.

    Siemens Ripe for DCIM Play

    Siemens’ DCIM power play wasn’t out of the blue. Siemens has an established track record in facility/enterprise infrastructure development, separately providing different aspects of data center infrastructure to customers over the years. The list of what the company provides isn’t short:

    • Medium Voltage gear that connects the building to the utility grid.
    • Low Voltage gear that distributes the power throughout the building.
    • All the interconnecting controls to enable the Emergency Generators and UPS equipment.
    • Complete Power Monitoring system that provides detail on the usage, power balance and consumption of the entire facility.
    • Building Automation and temperature controls for the cooling of the facility.
    • Fire and Life Safety systems to protect the people and equipment within the building.
    • Perimeter and physical security systems that control access to the building, plus CCTV systems that provide visual coverage of critical locations within the facility.

    The company sees a formal DCIM play as the logical evolution of its data center strategy, and is setting out to bridge the divide of IT and facilities management.

    “The exponential growth and importance of data centers was leading to a need to bridge the growing “silos” of IT and facilities’ management of data centers,” said Kovach. “Having the two areas collaborate and work together was a constant challenge requiring a central system that would eliminate the inefficiencies that were developing from those separate silos.  This is the purpose of DCIM – and Siemens’ existing expertise in the different infrastructure areas, coupled with our established leadership in energy and operational efficiency, seemed a perfect fit.”

  • SunGard Seeks to Make Business Continuity User-Friendly

    A close look at the customer cabinets inside SunGard Availability Services’ data center located at 1500 Spring Garden, Philadelphia, PA.


    SunGard Availability Services believes business continuity services should be simpler for its customers to manage. This isn’t a hunch: SunGard engaged more than 100 customers every three weeks in its effort to design and develop a better way to deliver business continuity services.

    The result is SunGard Assurance(cm), a continuity management software-as-a-service offering. The product is more user friendly – “Facebook easy,” one might say. It also serves as a platform for democratizing the entire process, allowing less technical stakeholders to provide valuable input.

    SunGard Assurance(cm) is a secure SaaS solution available anytime from any device, including mobile devices, with a service level agreement (SLA) guaranteeing 99.9 percent uptime. Its interface is simplified and designed to be accessible even for those who aren’t disaster recovery experts. The solution also incorporates dynamic plan templates that dramatically reduce the amount of data entry needed to create plans that meet the test of disasters. There’s also integration with Configuration Management Databases (CMDBs) to provide a real-time view of data center infrastructure configuration.

    “This solution is designed for all people involved in business continuity planning and execution,” said Louis Grosskopf, general manager of Business Continuity Software for SunGard, who said there’s a good reason to involve more personnel in the planning and execution. “There’s a common misperception that the team of business continuity (BC) planners are the only ones responsible for BC. However, these teams – usually five to 10 people depending on the size of an organization – must rely on others in their organization to understand and be prepared to act on their BC plan.

    “The ‘novice planners’ also called the ‘innocent bystanders,’ are the people that outnumber the BC plan team and usually have no disaster recovery (DR) experience,” Grosskopf added. “However, they’re expected to provide the information critical for recovery (e.g. the manager of accounts payable, or a bank branch manager; someone with no DR experience).”

    Business continuity assurance helps customers deliver on the core business benefits of disaster recovery and business continuity planning at the time of need:

    • Providing service to customers with less interruption
    • Safeguarding customers and employees before, during and after disaster scenarios
    • Protecting corporate reputation
    • Enhancing shareholder value

    Since the 1980s, business continuity management (BCM) has seen numerous shifts in regulatory pressure, and each issue forced customers to react swiftly, according to Grosskopf.

    “First it was data center recovery, then Y2K and next was terrorism,” he said. “The current issue disrupting BCM is state-sponsored cyber threats. As the world of business continuity stays focused on ever-increasing pressures from regulators, business leaders today demand broader participation in the planning process and increased confidence that today’s plans will lead to better outcomes. These changes in market dynamics are driving a need for a new business continuity approach.”

    The company gives the example of natural disasters like Superstorm Sandy and the business risks they pose to customers.

    “The outcome we all seek is to keep the business running, our employees safe and our shareholders protected from risk,” said Grosskopf. “With higher engagement from the whole organization, assurance aids customers by increasing confidence at the time of need that the plan meets the test of disruption.”

  • Aspera Powers Video Delivery for UFC’s Media Empire

    ufc-cage

    The UFC broadcasts its mixed martial arts programming to 800 million TV households throughout 145 countries, as well as via UFC.com and every major mobile gaming platform. The company now uses Aspera to speed its digital asset delivery backbone. (Photo: UFC)

    The Ultimate Fighting Championship (UFC) has built a sports empire atop massive streams of video, delivering mixed martial arts action to a global audience via live pay-per-view events, reality TV shows and an endless stream of promotional videos. The UFC has turned to Aspera, which specializes in software to move large data at maximum speed, to serve as the back-end for its video services.

    “We needed to push large volumes of high-quality content from locations that sometimes have poor connectivity, so we developed a solution using Aspera,” said Christy King, vice president of digital and technology research and development at UFC. “There is simply no way we could have ever distributed this much content as broadly as we can today without the ability to deliver video as quickly as Aspera does.”

    UFC programming is broadcast to 800 million TV households throughout 145 countries, and content is hosted on UFC.com and delivered on every major mobile gaming platform including iTunes, Hulu, Xbox, Amazon, Roku, and Sony PS3. Exclusive footage offered includes fighters in training, event week activities such as workouts, weigh-ins and press conferences, and behind-the-scenes interviews.

    Huge Shift from Tape to Digital

    Eighteen months ago, the UFC delivered 90 percent of its content on tape. It has since overhauled its operation using transport technologies from Aspera, which now delivers around 95 percent of UFC video. The relationship has had a chain reaction, as UFC’s partners also moved to digital delivery; the UFC now has over 200 clients who push and pull assets.

    UFC deployed Aspera as the backbone of its digital asset management system allowing field production teams to quickly pull existing content from venues all over the world for inclusion in the live broadcasts. The same system links UFC’s Las Vegas headquarters and Los Angeles office, with video editors transferring gigabytes of file-based media on a daily basis.

    The UFC team has a small mobile staff that travels frequently, working from hotels and conference centers. “The company often drops a 60-foot office trailer and we do a lot of our operations on site because we have so much content that rolls into the broadcast,” said Tim O’Toole, VP of Event Production.

    “They’re pulling in content behind the scenes, editing, and pushing it out in a matter of hours. Production wise, on a given week, we can be creating 4-5 post-produced shows, along with promo videos for upcoming fights,” said Mike Saindon, Production Engineer with UFC.

    Over the last year, the UFC has completely changed its workflow over to Aspera, which enables a small team to create massive amounts of content and distribute it across a wide variety of partners. Aspera is also at the heart of UFC’s content delivery to over 150 international partners.

    “We are thrilled to support UFC in their exponential growth by allowing them to scale and accelerate their content delivery workflow,” said Richard Heitmann, vice president of marketing at Aspera. “With Aspera software deployed on-premise and in the cloud, UFC can confidently rely on its technology investments to meet the growing worldwide demand for mixed martial arts content.”

    Here’s a behind the scenes look at the UFC’s operation:

    UFC_Truck

    The UFC has an entire mobile video production operation housed in a 60-foot truck that travels to event venues for its mixed martial arts cards. (Photo: Aspera)

    UFC_ingest_racks

    The UFC deploys an array of on-site IT to handle video ingest – the capture, storage and transport of MMA video. The cases are used to repack the gear for transport to the next venue. (Photo: Aspera)

  • SEC Outsources to IO in $17.5M Contract

    io-nj-modules

    A row of data center modules inside an IO data center. The company has signed a $17.5 million deal with the SEC.

    IO Government Services, part of IO Data Centers, landed a $17.5 million contract to provide outsourced services to the U.S. Securities and Exchange Commission (SEC).

    This is a win for IO, and a positive move for overall government data center consolidation efforts, but it does threaten the standing of the SEC’s self-run Alexandria facility and its workers. The approach is also significant because of what that data center does: it runs the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, an online database of corporate filings. One might conclude that government agencies are getting more and more comfortable outsourcing higher-level, sensitive functions.

    This contract makes the SEC less dependent on the General Green Way building that currently houses this infrastructure. The shift will save the SEC millions, according to former SEC chairwoman Mary Schapiro, who wrote in a letter to Congress that outsourcing and eliminating the Alexandria facility from its portfolio of leased space would save $18 million.

    More government consolidation comes as no surprise, as it’s been a major ongoing initiative. The contract is for one year with renewal options for up to nine years, and it’s the second contract for the outsourcing of the SEC’s staff-operated data center at 6432 General Green Way in Alexandria. The facility is leased from Cafferty Commercial Real Estate Services.

    According to the Washington Business Journal, Commission officials have floated several proposals over the past few years to shift the Alexandria workers to sites in downtown DC.

    It’s believed that the staff at the Alexandria facility would be moved to sites in downtown DC, including the SEC’s Station Place headquarters. This has some real estate experts questioning the figures Mary Schapiro cited in her letter to Congress, since downtown rent is higher ($61 vs. $35 per square foot) than at the Alexandria facility. However, rent per square foot isn’t the true expense; running the actual data center is what costs a significant amount. As the outsourcing initiative continues and savings are realized, bigger and bigger self-run data centers will be outsourced.

  • The Green Grid’s New Metric Tackles IT Recycling

    biggreen-earthday

    The Green Grid continues to broaden its scope. After expanding from data center energy to broader resource metrics, it is now tackling the current state of IT recycling.

    The industry group is proposing an electronics disposal metric for commercial end-users of IT equipment, and believes that its new Electronics Disposal Efficiency (EDE) metric will boost recycling and influence change on a global scale. The Green Grid intends for this metric to be used as a way for organizations to measure themselves and improve over time, rather than as a score to be compared with other entities. We’re looking at you, PUE.

    EDE has been developed as a simple, easy-to-use metric for understanding the disposition of IT assets. The new metric was introduced last week at The Green Grid’s 2013 Forum in Santa Clara. The goal is to provide a way for organizations to measure the responsible management of IT EEE (electronics and electrical equipment) that reaches EOCU (end of current use) or EOL (end of life), covering reuse, recycling and disposal, by providing guidance and maintaining standards.

    The EDE describes the percentage of all decommissioned IT assets that are known to have entered a material stream where they will be handled under responsible disposition guidelines. The basic formula divides the weight of equipment responsibly disposed by the total weight disposed, so in all cases EDE is less than or equal to 100 percent.
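    As a quick arithmetic sketch (the function name and tonnage figures below are hypothetical illustrations, not taken from The Green Grid’s materials), the calculation reduces to a single ratio:

    ```python
    def electronics_disposal_efficiency(responsible_kg, total_kg):
        # EDE = weight of equipment responsibly disposed / total weight
        # disposed, expressed as a percentage. Because the responsibly
        # disposed weight is a subset of the total, EDE never exceeds 100.
        if total_kg <= 0:
            raise ValueError("total disposed weight must be positive")
        return 100.0 * responsible_kg / total_kg
    ```

    For example, an organization that responsibly disposed of 4,200 kg out of 5,000 kg of decommissioned gear would score an EDE of 84 percent.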

    Data for Different Disposal Streams

    However, EDE doesn’t stop there. There are also calculations to track the channels and disposal streams. The ultimate goal is efficiency, with proper disposal, reuse or recycling when applicable, so tracking a company’s performance on this metric isn’t cut and dried.

    The EDE metric complements the Data Center Maturity Model (DCMM), which contains clear goals and direction for improving energy efficiency and sustainability throughout a data center.

    The Green Grid believes the two sides of green IT are converging: content (materials and end of life) with resources (energy, emissions, water), and the data center (servers, storage, networking) with the office (desktops, laptops, workstations). Its metrics continue to address this evolution. Over the past few years, The Green Grid has developed a series of metrics to evaluate and improve data center operations, including Power Usage Effectiveness (PUE), Data Center Energy Productivity (DCeP), Energy Reuse Effectiveness (ERE), Data Center Compute Efficiency (DCcE) and others.

    The Green Grid delves far deeper into this in a white paper that also discusses the development of the metric, goals for EDE, considerations and recommendations.

  • Oracle Buys Nimbula as Tech Giants Wake to Cloud Potential

    ellison-cropped

    Last fall Oracle CEO Larry Ellison announced a new Infrastructure as a Service cloud computing offering. Today Oracle said that it has bought Nimbula, which makes private cloud technology. (Photo: John Rath)

    Larry Ellison might love the cloud after all. Oracle has acquired open cloud player Nimbula, a provider of private cloud infrastructure management software.  Nimbula’s product is complementary to Oracle’s growing cloud play, with Nimbula expected to be integrated with Oracle’s cloud offerings. The transaction is expected to close in the first half of 2013.

    Nimbula’s flagship product is Nimbula Director, which allows enterprises and service providers to build large-scale, fully functional infrastructure services from bare metal in a matter of hours. Nimbula Director differentiates itself through its high level of self-service, automation, application orchestration features and ease of use. Providing a one-stop virtual data center management solution, Nimbula Director isolates customers from the operational and hardware complexities associated with deploying a private, hybrid or public cloud. Nimbula joined the OpenStack movement last October.

    Oracle most likely was attracted to Nimbula because it addresses a private cloud management need, as well as adding some heavy cloud talent in the form of the company’s founders. So we have one of the most promising early entrants in the cloud landscape joining with one of the most misunderstood (in terms of cloud) tech giants. The details of the acquisition are sparse, but the deal indicates that Oracle is continuing to get serious about its cloud play.

    Nimbula, which emerged from stealth mode in June 2010, was founded by former Amazon executives Chris Pinkham and Willem van Biljon, who led the development of the Amazon EC2 public cloud service. It was an early player on the scene, and one that was surrounded by a lot of hype thanks to its founders. The company never quite seemed to live up to its promise, mainly because its promise was astronomical and market confusion around cloud has been pervasive.

    Oracle’s Misunderstood Cloud Ambitions

    Also misunderstood has been Oracle’s cloud strategy. By many accounts, Oracle and CEO Larry Ellison used to be a bit disdainful about the cloud. This perception was built on a few comments by Ellison rather than the actual business, as Oracle has a growing play across all parts of the cloud stack (Infrastructure as a service, Platform as a Service, and Software as a Service). However, perception is a driving force in the market.

    Oracle has been forward-thinking in terms of cloud in some regards; it has several SaaS-based enterprise applications and has pushed to bring social capabilities across the portfolio. Ellison was an early backer of NetSuite, one of the earliest SaaS players. Most of the disdain Oracle and Ellison have displayed in the past appears to be for “cloud washing,” an industry-wide rebranding of everything and anything as “cloud.” However, Google “Ellison Cloud” and you’ll see quite the controversial history.

    There are several promising cloud players out there that offer a piece of the larger puzzle, but the overall picture remains fragmented. However, deals such as this one are occurring more frequently as traditional technology giants finally move away from legacy practices (namely, license and maintenance fees) as the driving force behind revenue.

    Cloud flips long-established business models on their heads, which is why there’s been some hesitancy on the part of the largest technology companies. These tech giants, particularly the public ones, are under pressure from investors to maintain license and maintenance revenue, and cloud/recurring revenue services has historically been seen as a cannibalization of these revenues. However, both investors and enterprise tech giants are realizing that cloud is the way of the future, so there’s been, and will continue to be, consolidation occurring in the market. Companies like Oracle will continue to pick up important cloud pieces out there to build out full cloud plays.

  • Markley Launches Cloud, as Colo & Cloud Boundaries Continue to Blur

    One-Summer-Street

    The interior of a data hall inside One Summer Street, The Markley Group’s data center hub in Boston. The company has introduced a new suite of cloud services. (Photo: Markley Group)

    The Markley Group in Boston is offering cloud services, making it the latest facilities-centric provider going after hybrid customers. Markley is the owner of One Summer Street,  one of the prominent multi-tenant data center hubs serving Boston, so the news marks the continuing blurring of colo and cloud. 

    “Our vision for Markley Cloud Services is built upon the same virtues that framed our company formation in 1992 – custom-design based on customer demand that paves the way for innovation and performance,” said Jeffrey Markley, CEO of Markley Group. “MCS leverages our broad carrier network to offer customers the very best in connectivity and bandwidth so they worry less about cloud initiatives and more about how IT can impact their overall business goals.”

    One Summer Street is an 800,000 square foot building located atop the intersection of key fiber rings serving the Boston market and New England region. That’s made it a popular data center destination for the region’s IT operations, including Markley Group tenants like the Boston Red Sox, Harvard University, MIT, the Boston Internet Peering Exchange and the New York Times. Markley’s key offerings at One Summer include carrier-neutral colocation and managed services.

    Aimed at the Enterprise

    The new Markley Cloud Services (MCS) is an Infrastructure as a Service (IaaS) packaging a utility-style service for compute and storage with cloud management software atop. This offering is aimed squarely at the enterprise; the company utilizes VMware for its cloud, differentiating MCS from public clouds built on commodity servers and network hardware. The solution includes reserved RAM and enterprise-class SAN-based storage to ensure no oversubscription, and allows customers to configure their virtual machines (VMs), including virtual central processing units (vCPUs), memory, storage and network, as they choose and on demand.

    The IaaS integrates with resources in dedicated colo cabinets and suites through private fiber optic connections via cross connects. By enabling a fully integrated hybrid computing environment, it allows current customers to gain efficiencies in cost and capacity planning from cloud in a secure manner. MCS allows businesses to develop or run applications in an on-demand cloud environment that accommodates seasonal workloads or development and testing scenarios, without being forced into making additional equipment purchases for short-term or unpredictable requirements.

    “Markley focuses on the customer rather than taking the one-size-fits-all approach so common in most cloud environments,” said Andy Shoemaker, principal consultant for JNS, a consulting services firm for IT organizations. “Recently, a client of JNS had an unusual project, and the only cloud provider willing to take on the challenge was the team at Markley. Markley Group’s hybrid cloud approach ensured their systems provisioned on-time, performed as expected and did so at a reasonable price.”

    Markley Group is usually pretty quiet due to its security focus, but another trend is occurring in the market: data center providers are opening up about what they’re doing. They’re no longer (figuratively) cold buildings; they’re becoming social and more marketing-oriented. Last week the Markley Group held the latest in a series of data center summits offering presentations and panel discussions on industry trends.

    It’s also important to note that some colo providers continue to avoid direct cloud offerings for fear of stepping on customer toes. Facilities that have cloud providers residing within their walls often opt to partner with these customers rather than compete. There are also offerings like Amazon Web Services’ Direct Connect that allow providers to address hybrid plays.

  • Latisys Secures $200 Million Credit Facility

    Latisys-Ashburn

    The interior of a Latisys data center. The company has arranged a new $200 million credit line.

    The expansion at Latisys shows no signs of slowing. The company has just announced a new $200 million credit facility, including a six-year, $180 million institutional term loan and a five-year, $20 million revolving credit facility. This means the company will continue its 2012 momentum, spreading that capital across Latisys’ Infrastructure as a Service (IaaS) platform to drive accelerating growth and customer acquisition. The data center service provider said it will continue to expand its facilities, adding high-density capacity, enhancing its technology platform, increasing automation and adding highly skilled personnel to the team.

    “Latisys’ growth strategy centers around ongoing strategic expansion of our IaaS platform and our ability to provide innovative right-sized, hybrid IT solutions that solve business problems,” said Doug Butler, Chief Financial Officer for Latisys. “The new credit facility provides additional capital necessary to maintain technology leadership as well as additional support services required to respond to increased demand for higher margin managed hosting and cloud services.”

    The credit facility was substantially oversubscribed, with commitments from several leading sector lenders and institutional investors. The credit facility was arranged by RBC Capital Markets, TD Securities (USA) and SunTrust Robinson Humphrey, and funded by a consortium of over 20 leading financial institutions and institutional investors.

    Over the past four years Latisys has invested more than $125 million in expanding its facilities and services to keep pace with demand. Latisys’ national expansion has been ongoing through 2012 and into 2013.  Recent announcements include DEN2—Latisys’ state-of-the-art data center in Denver—along with the ASH1 DC5, CHI DC6 data centers that added 22,000 and 10,000 square feet of secure, ultra high-density raised floor in Northern Virginia and Chicago respectively. In Southern California, Latisys recently announced an additional 12,000 square feet in its Irvine, CA data center.

    Latisys’ total data center platform now exceeds 343,000 square feet across seven data centers in four major markets. Product-wise, it launched its next generation managed hosting and cloud platform and launched its unified service desk in 2012.

  • Boston ARMing Developers With ARM-as-a-Service Cloud

    Boston-Ltd-Purp

    Boston Limited’s Viridis server will power a new “ARM-as-a-Service” cloud offering based on ARM technology from Calxeda. (Photo: Boston Limited)

    Boston Limited wants to help developers future-proof their applications for an upcoming ARM-based world. The low-cost, power-efficient processors have been drawing a lot of attention, and now comes a commercially available cloud for developer needs.

    The Boston ARM-as-a-Service (AaaS) cloud was unveiled at CeBIT 2013. It was built specifically to assist in migrating and porting applications from x86 to ARM, providing developers with the tools and services required to do so. “The Boston cloud is the ultimate resource for application and software developers looking to port their software onto ARM,” said David Power, Boston’s Head of HPC. “Our platform provides all the tools to facilitate porting software to ARM in one easy-to-use cloud offering.”

    The Boston AaaS is based on Calxeda’s EnergyCore ARM-based processor as well as Breeze technology. Breeze is used for tracing programs as they run in order to monitor file dependencies and environment settings. It shows the inner workings of complex scripted flows such as those used in semiconductor design or complex software builds. ARM also uses Breeze to profile and troubleshoot applications on their own HPC cluster.

    “We understand the hardware constraints that many software developers will face in trying to enter the ARM environment,” said Dr. Rosemary Francis, Managing Director of Ellexus, “and Boston’s cloud offering provides a fantastic opportunity for those wanting to future-proof their applications as more and more users move over to ARM technology.”

    Boston’s ARM-as-a-Service delivers dedicated physical quad-core nodes, as opposed to virtual CPUs typically seen with cloud offerings. Users will be able to develop on single nodes or test scaling capabilities of applications across multiple nodes within the cluster. Users will be able to choose from varying levels of software and professional services to assist in their migration.

    “There is tremendous demand for easy access to ARM server technology, so the time is ripe for AaaS,” said Karl Freund, VP of Marketing at Calxeda. “We are thrilled to see Boston and Ellexus stand up this service to provide cloud-based access to engineers to build and optimize their server codes for ARM.”

    There is also an ARM OpenStack test-bed available at TryStack for short-term testing.

  • Pertino Raises $20M to Bring Cloud Networking to SMBs

    cloud-connectivity

    Another player looking to redefine networking in the cloud era has raised a nice chunk of change. SDN-powered cloud networking player Pertino has raised $20 million in Series B funding. The company will use the funds to expand its platform and market strategy. The company previously raised an $8.85 million Series A round in April of 2012.

    Pertino looks to bring SDN-powered wide-area networks to the masses. Demand is being driven by increasing globalization, the rise of cloud computing and an increasingly mobile workforce. The proliferation of mobile usage and remote employees means companies are becoming more dependent on wide-area networks (WANs), and Pertino wants to bring these capabilities to small and medium-sized businesses (SMBs) by cutting out the barriers of cost and complexity. Pertino’s “cloud network engine” allows the creation of a cloud-based network in minutes, with no hardware, expertise or upfront investment required. In short, it aims to bring a previously cost-prohibitive capability to the SMB world – hence “to the masses.”

    “SMBs comprise almost half of IT spending worldwide, yet a significant number of these organizations struggle with having the resources to deploy new technology and applications,” said Craig Elliott, Pertino co-founder and CEO. “By leveraging the cloud and SDN technology to radically simplify networking, Pertino is poised to unlock a massive opportunity and Jafco’s SMB and business development experience in Asia will help us realize it.”

    Priced for the SMB Market

    The company is targeting home offices and SMBs with its pricing as well. Pertino allows customers to build a free network for up to three members with three devices each, and then pay $10 per member per month as they grow.

    The Series B round was led by new investor Jafco Ventures with existing investors Norwest Venture Partners and Lightspeed Venture Partners also participating. Jafco’s portfolio consists of several cloud and network security companies including Reputation.com, Huddle, FireEye, and Palo Alto Networks.

    “One of the things that attracted us to Pertino is the fact they have built a cloud-based solution leveraging the most innovative and disruptive technology to hit networking in a decade – SDN, and they’re delivering it in a practical and consumable way to an underserved global market,” said Jeb Miller, general partner at Jafco. “Disruptive technology and a massive global market opportunity, coupled with the pedigree and experience of the executive team makes Pertino an ideal portfolio company for us.”

    The company launched into limited availability last month and says it saw significant demand. The limited launch came after concluding a successful beta program within the Spiceworks IT community that resulted in over 250 customers deploying and testing their own Pertino cloud network. Since then, the number of deployed customers has grown to over 700.

  • eBay’s DSE: One Dashboard to Rule Them All?


    Has eBay developed one dashboard to rule them all? The company took a big step closer to the holy grail of a unified data center productivity metric, unveiling a methodology called Digital Service Efficiency (DSE) at The Green Grid Forum 2013 in Santa Clara, Calif.

    In the conference keynote, eBay’s Dean Nelson outlined a system of metrics to tie data center performance to business and transactional metrics. DSE enables balance within the technology ecosystem by exposing how turning knobs in one dimension affects the others, providing a “miles per gallon” measurement for technical infrastructure. In drawing direct connections between data center performance and cost, the dashboard provides eBay with insights that go directly to its bottom line.

    “We’re making $337 million per megawatt,” said Nelson, the Vice President, Global Foundation Services at eBay. “That’s the productivity of our infrastructure, not the cost overhead. Through the DSE Dashboard, these numbers are laid out in simple terms that are understandable across business roles. This starts conversations at every level about how we achieve goals. It’s that bridge that’s been missing for so long.”

    That data point provides a vivid example of the productivity of data center infrastructure, which typically has construction costs of $5 million to $10 million per megawatt for large users like eBay.

    How To Measure Productivity?

    The Green Grid has spent several years evaluating various metrics that could be used to measure data center productivity. The industry group popularized the use of Power Usage Effectiveness (PUE) as the leading metric for data center energy efficiency. But PUE was primarily a measure of facilities infrastructure, and didn’t address the effectiveness of IT systems within the data center. Various gauges have been proposed to measure productivity, but none has addressed all the objectives for an industry-level metric.

    With Digital Service Efficiency, eBay has developed a methodology it believes can bring these diverse puzzle pieces together. It’s based on eBay’s e-commerce operations, but the company says its approach can be adapted by other data center operators, who can substitute their own business metrics. “While the actual services and variables are specific to eBay, the methodology can be used by any company to make better business decisions,” eBay writes in an overview of its process. “Just as ‘your mileage will vary’ from any MPG rating, DSE provides an introspective view of how well a company has optimized its technical infrastructure.”

    Most importantly, eBay believes it has sorted out a way to integrate the many variables that a data center must serve.

    “Think about this as a Rubik’s cube,” explains Nelson. “On one side it’s performance. You have cost on the other side. You’re going to know cost per transaction. The third dimension is environmental impact. The fourth dimension is revenue; how much revenue is generated per transaction. There’s a balance needed – you can solve one side fairly easily, but solving all four sides is the goal and the true value.”

    During his presentation, Nelson shared some key metrics on eBay’s data center operations. The auction giant has 52,075 servers consuming 18 megawatts of power to support 112.3 million active users. That equates to revenue of $54 per user and $117,000 per server.
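    As a quick sanity check, the figures Nelson cited hang together to within rounding: the total revenue implied by $337 million per megawatt across 18 megawatts, divided back out by users and servers, reproduces the per-user and per-server numbers.

```python
# Back-of-the-envelope check of the DSE figures cited in the talk.
# All inputs are the article's numbers; the derived values show they
# are mutually consistent to within rounding.

revenue_per_mw = 337_000_000    # dollars per megawatt, as stated
megawatts = 18
servers = 52_075
active_users = 112_300_000

total_revenue = revenue_per_mw * megawatts          # ~$6.07 billion
revenue_per_user = total_revenue / active_users     # ~$54
revenue_per_server = total_revenue / servers        # ~$116,500, rounded
                                                    # to $117,000 in the talk

print(f"total revenue:      ${total_revenue / 1e9:.2f}B")
print(f"revenue per user:   ${revenue_per_user:.0f}")
print(f"revenue per server: ${revenue_per_server:,.0f}")
```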

    The development of DSE began three years ago, when the company was looking to unify its view of the business and the infrastructure, and to assess their impact in terms of energy, cost and environment. DSE is a dashboard for the company’s technical ecosystem – the data centers, compute equipment and software that combine to deliver its digital services to consumers.

    An MPG Rating for Data Centers

    “Much like a dashboard in a car, DSE offers a straightforward approach to measuring the overall performance of technical infrastructure across four key business priorities: performance, cost, environmental impact, and revenue,” said Nelson. Drawing a parallel to the Miles Per Gallon (MPG) measurement for cars, Nelson argues that DSE enables a view into how a company’s “engine” performed with real customer consumption, how the car performed as it was being driven, or in eBay’s case, how the eBay.com engine ran while its users drove it.

    “This is what is being consumed, this is how our customers are driving our car,” he said.