Author: Jason Verge

  • CA Integrates Eaton Power Management Into DCIM Suite

    Two players in the data center management space just got closer: Eaton’s power management technologies have been integrated into DCIM software from CA Technologies, creating a turnkey solution that helps bridge the gap between IT and facilities.

    Eaton’s power management technologies are designed to help customers increase the reliability of their power infrastructure and control cost by reducing energy consumption. CA DCIM software provides a web-based, centralized solution for monitoring power, cooling and environmental conditions across facilities and IT systems in the data center. Their functions complement one another, forming a turnkey solution.

    “CA Technologies and Eaton are two of the leading companies in data center technology, each providing critical products in very different areas,” said Andy Lawrence, vice president of Data Center Technologies and Eco-Efficient IT, 451 Research. “Both companies understand that the most effective and competitive data centers in the future will make full use of embedded intelligence and intelligent management systems. Their formation of a complementary hardware and software partnership is a notable and very logical step forward for data center systems and DCIM.”

    IT Software Talks to UPSes, PDUs

    The Eaton and CA Technologies collaboration provides integration between CA DCIM technology and select Eaton power management hardware, including uninterruptible power systems (UPSes) and power distribution units (PDUs). As a result, CA DCIM can collect, analyze and report on metrics such as temperature, humidity, power by phase, current by phase and power factor.

    Customers can use CA DCIM to create intelligent alerts to identify environmental and power issues before they have an opportunity to threaten IT infrastructures. Customers can also accurately calculate total PDU power usage for Eaton equipment or other device groups. These Eaton data points are automatically added to CA Technologies unified management portal views.
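
    The arithmetic behind those roll-ups is simple. The sketch below, with invented field names and thresholds, illustrates the kind of per-phase power summation and environmental alerting the integration reports on; it is not CA’s or Eaton’s actual API.

    ```python
    # Hypothetical illustration of rolling up per-phase PDU readings and flagging
    # environmental thresholds. Field names and limits are invented for this sketch.

    PHASE_READINGS = [
        # volts, amps, power factor for phases A, B, C of one PDU
        {"phase": "A", "volts": 208.0, "amps": 14.2, "pf": 0.97},
        {"phase": "B", "volts": 208.0, "amps": 13.8, "pf": 0.96},
        {"phase": "C", "volts": 208.0, "amps": 15.1, "pf": 0.95},
    ]

    TEMP_LIMIT_C = 27.0            # hypothetical alert threshold
    HUMIDITY_RANGE = (20.0, 80.0)  # percent relative humidity


    def total_pdu_power_watts(readings):
        """Sum real power (V * I * PF) across all phases of a PDU."""
        return sum(r["volts"] * r["amps"] * r["pf"] for r in readings)


    def environment_alerts(temp_c, humidity_pct):
        """Return human-readable alerts when sensors leave their allowed band."""
        alerts = []
        if temp_c > TEMP_LIMIT_C:
            alerts.append(f"Temperature {temp_c:.1f} C exceeds {TEMP_LIMIT_C} C")
        low, high = HUMIDITY_RANGE
        if not low <= humidity_pct <= high:
            alerts.append(f"Humidity {humidity_pct:.0f}% outside {low}-{high}%")
        return alerts


    if __name__ == "__main__":
        print(f"Total PDU load: {total_pdu_power_watts(PHASE_READINGS):.0f} W")
        print(environment_alerts(temp_c=28.4, humidity_pct=15.0))
    ```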

    “Eaton’s alignment with CA Technologies represents an evolution in infrastructure management and will provide integrated, pre-engineered solutions for efficient and effective business performance optimization,” said Hervé Tardy, vice president and general manager of Eaton’s Distributed Power Quality Division. “This collaboration is designed to provide a level of power hardware integration and data center control that isn’t currently available.”

    Bridging Facilities and IT

    DCIM provides organizations with greater insight into critical data center infrastructure, across both facilities and IT systems. Both companies come to the table with complementary functionality to this end, helping data center managers increase operational efficiency, mitigate risk and enhance performance.

    The combined solution will also enable managed service providers to grow their businesses by providing high-value DCIM-as-a-service offerings that include monitoring and maintenance of customers’ critical infrastructure and business systems.

    “Our agreement with Eaton will enable customers to increase efficiency while safeguarding IT service delivery,” said Terrence Clark, general manager, DCIM, Energy & Sustainability Solutions, CA Technologies. “These benefits are critical as demands on IT and data center managers continue to escalate at a pace that exceeds the ability of MSPs and enterprise customers to fund, power and flexibly operate their data centers.”

  • The Azure Cloud, Exposed to the Azure Sky

    Data center modules packed with servers sit outside at the newest phase of the Microsoft data center in Quincy, Washington. In the background is the vast concrete shell of the company’s initial Quincy data center. (Photo: Microsoft)

    QUINCY, Washington – As the Windows Azure cloud expands across central Washington, the physical building has all but disappeared. Lightweight enclosures filled with servers, known as ITPACs, sit under the barest skeleton of a facility. They are self-contained data centers, assembled in days, housed on concrete slabs and attached to a power “spine” supplying connections to the grid and the Internet. It’s completely open to the air, and in production.

    With the latest phase of its data center in Quincy, Microsoft is getting out of the air conditioning business and deploying thousands of servers inside factory-built modules, which can be installed in days and allow the company to reach new heights of energy efficiency. The ITPACs take advantage of the natural environment in Quincy, allowing cool air to flow through the modules and cool the servers powering the Azure cloud.

    One Campus, Three Experiences

    Walking through the Quincy campus provides three very different experiences. After passing through security, you encounter Microsoft’s first facility on the six-year-old campus, a typical data center in an immense concrete shell. The next phase is a lightweight building housing ITPACs. The end of the trip takes you to open air – yet even here, the Microsoft cloud continues to grow.

    After passing through a gate that could stop a truck, you arrive at the 470,000 square foot concrete building, constructed by Microsoft in 2007. The Columbia campus feels like a fortress, with a gauntlet of security that includes a staffed access gate, biometrics and a mantrap corridor (which earns its name from its doors at both ends, which cannot open at the same time). After you’re cleared, as a guest you receive a badge that expires after a set time, the word “VOID” appearing across it when it does. This is your traditional enterprise data center with all the trimmings, and then some. It features all the physical redundancy, the giant generators and the massive UPS rooms you would expect to be hidden beyond such security.

    Once you walk outside, you begin to see the evolution of Microsoft’s data center design. The next building you enter isn’t really a building at all, but a steel and aluminum framework. Inside the shell are pre-manufactured ITPAC modules. Microsoft has sought to standardize the design for its ITPAC – short for Information Technology Pre-Assembled Component – but also allows vendors to work with the specifications and play with the design. These ITPACs use air side economization, and there are a few variations.

    Essentially, they are data centers in a box. Cooling is supplied by fresh air and the equivalent of a garden hose. Fresh air is drawn into the enclosure through louvers lining the side of the module, which functions as a huge air handler with racks of servers inside. Each IT module is also equipped with an evaporative cooling system in which air passes through a moist media filter. The system uses just 1 percent of what the traditional data center uses.
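
    To make the airflow description concrete, here is a minimal, hypothetical sketch of the control decision such a module might make; the setpoints are invented, and Microsoft’s actual ITPAC control logic is not described in this article.

    ```python
    # Hypothetical sketch of an air-side economizer with evaporative assist.
    # Setpoints are assumptions for illustration only.

    SUPPLY_SETPOINT_C = 24.0   # target supply-air temperature (assumed)
    EVAP_APPROACH_C = 5.0      # cooling achievable by the moist media (assumed)


    def cooling_mode(outside_temp_c):
        """Pick a cooling mode based only on outside-air temperature."""
        if outside_temp_c <= SUPPLY_SETPOINT_C:
            return "free cooling: outside air through the louvers"
        if outside_temp_c - EVAP_APPROACH_C <= SUPPLY_SETPOINT_C:
            return "evaporative assist: outside air through the moist media"
        return "recirculate and mix: hold supply air at setpoint"


    for temp in (12.0, 27.0, 35.0):
        print(f"{temp:>5.1f} C -> {cooling_mode(temp)}")
    ```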

    Inside A ‘Secret Garden’ of Servers

    These ITPACs are highly efficient and quick to deploy. The shell around these first ITPACs has screened walls.

    However, by the end of the trip, you’re back in open air, in a “Secret Garden” of servers. The building has disappeared, but each ITPAC unit acts as its own steel perimeter. You’re still on camera, you’re still in a highly secure area, but it feels wide open.

    Just as the security evolves, so does the power infrastructure.  Cloud servers drive a whole different design, down to the components.

  • Cloudant Raises $12 Million for Database-as-a-Service

    Mobile and web applications are growing at an astounding clip, with the market for hosted databases in tow. That growth is behind the investment in Database-as-a-Service company Cloudant, which has raised an additional $12 million in Series B funding from Devonshire Investors, Rackspace and Toba Capital. Current investors Avalon Ventures, In-Q-Tel and Samsung Venture Investment Corporation also purchased additional shares. The funding will support global expansion, as well as go toward growing the company’s support, service and go-to-market strategies.

    “The market opportunity for managed, hosted databases is large, and the NoSQL model is where major mobile and Web applications are moving,” said David Jegen, managing director at Devonshire Investors. “We’re seeing that shift accelerate across the industry with Cloudant in the sweet spot of this market, adding big customer names with a highly scalable and durable DBaaS.”

    Cloudant provides a scalable, managed NoSQL DBaaS based on a globally distributed network of data centers. Application developers build their back ends on the Cloudant Data Layer cloud database, freeing developers from the mechanics of data management so they can focus exclusively on their applications. It allows customers to offload the administrative burden of operating and scaling distributed databases.
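
    Cloudant’s service exposes a CouchDB-style HTTP/JSON interface, so a back end can store and fetch documents with plain REST calls. The sketch below uses the Python requests library with placeholder account names, credentials and database; Cloudant also offers official client libraries that wrap these calls.

    ```python
    # Minimal sketch of talking to Cloudant's CouchDB-style HTTP API.
    # Account, credentials and database names below are placeholders.
    import requests

    ACCOUNT = "example-account"              # placeholder
    AUTH = ("example-user", "example-pass")  # placeholder credentials
    BASE = f"https://{ACCOUNT}.cloudant.com"
    DB = "app_sessions"

    # Create the database (a 412 response simply means it already exists).
    requests.put(f"{BASE}/{DB}", auth=AUTH)

    # Store a JSON document keyed by a user id.
    doc = {"user": "alice", "cart": ["sku-123"], "last_seen": "2013-05-14T12:00:00Z"}
    resp = requests.put(f"{BASE}/{DB}/alice", json=doc, auth=AUTH)
    print(resp.status_code, resp.json())

    # Read it back.
    print(requests.get(f"{BASE}/{DB}/alice", auth=AUTH).json())
    ```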

    Samsung announced a strategic investment in the company last February, given its interests in the mobile space.

    Support from Rackspace

    Rackspace’s investment is also of note, given the service provider is at the heart of the “Internet of things” movement. The company is careful and calculated in terms of what it invests in and acquires, usually focusing on what it sees working and grabbing the most attention firsthand.

    “We hear all the time from customers that dealing with the complexities of large-scale systems infrastructure just slows them down,” said Pat Matthews, senior vice president of corporate development at Rackspace. “Developers want control of their infrastructure, but they don’t want to have to manage it 24×7. Cloudant is the natural extension of this idea at the database layer. We’re partners that share a commitment to delivering the highest level of customer support, which is why investing in Cloudant works so well from a Rackspace perspective.”

    Momentum in the enterprise market has also helped the company court new investors like Vinny Smith at Toba Capital, the former CEO of Quest Software, which he led to IPO and a $2.4-billion acquisition by Dell.

    “Enterprises are quickly realizing that they want a cloud that isn’t one size fits all. They want to scale their app without having to customize it to fit within a third-party cloud,” said Smith. “Spending on cloud infrastructure is no longer an IT line-item; it’s now a major line-of-business concern. With strategic support from Rackspace, Cloudant is providing a clearer path for businesses to run large production workloads in the hybrid cloud.”

    Cloudant also announced the opening of a new office in San Francisco. Market demand recently drove the company’s expansion into the U.K. with an office in Bristol. The company’s headquarters are in Boston, and it has an office in Seattle as well. Cloudant will use its new presence to strengthen its relationships with the Web and mobile application development communities and to build its brand in the enterprise software market.

  • Digital Realty Trust Launches DCIM Software

    Data center developers provide the bricks, mortar, power and ping to support their tenants. But they’re increasingly finding the need to get into the software side of the data center business, offering tools to make management easier. The latest to do so is turnkey wholesale giant Digital Realty Trust, which today launched EnVision, a comprehensive data center infrastructure management (DCIM) solution.

    Digital Realty says EnVision is a DCIM solution built by a data center operator for data center operators. The software will provide increased visibility into data center operations through a user-friendly interface, offering access to historical data as well as predictive capabilities. The EnVision rollout will begin this month and take approximately 18 months to complete across Digital Realty’s global data center portfolio, which consists of 122 properties in 32 markets.

    “Up until now, data has been collected, but it has not necessarily been easily accessed or arranged in an intuitive manner that is helpful to a data center operator,” said David Schirmacher, senior vice president of portfolio operations at Digital Realty. “The goal in rolling out EnVision across our global portfolio is to give our customers a common database that is structured around the specific needs of data center operators and can therefore manage the millions of data points that are found in today’s large-scale facilities.”

    Diversifying its Capabilities

    The announcement further blurs some of the traditional lines in the data center business, and reflects Digital Realty’s move to diversify its business to offer a broader set of capabilities. In recent years Digital Realty has expanded into colocation and dark fiber services. It’s not the first infrastructure provider to develop its own management software (one early example is IO, which entered the DCIM market in 2011), but as the world’s largest data center landlord, Digital Realty has the resources to be a player very quickly. To speed the process, last year Digital Realty hired Schirmacher, who previously worked at DCIM specialist Fieldview and helped automate infrastructure at Goldman Sachs.

    The new product will let current and future Digital Realty customers analyze data within specific racks, buildings, entire states and even across the entire global portfolio, providing insight on either a granular or a high-level basis.

    “EnVision links data center IT and infrastructure metrics in order to give our customers real-time, historical and predictive views into their operations,” said Michael Foust, chief executive officer at Digital Realty. “This will benefit our customers in a variety of ways. For example, it will provide improved efficiency analysis and help operations teams to support future planning. We are excited to bring EnVision to market and feel that it represents the next critical stage in the ongoing evolution of DCIM solutions.”

    EnVision provides an organized view, not only saving time and adding efficiency, but also addressing a key data management challenge by pulling together siloed, or stranded, data and presenting it in context, providing a complete real-time view into the environment rather than just a view into a portion of the environment over a “slice” of time.
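
    The hierarchy Digital Realty describes – rack, building, portfolio – amounts to rolling the same metric stream up at different levels. The sketch below is a generic, hypothetical illustration of that idea; it does not reflect EnVision’s actual data model or API.

    ```python
    # Hypothetical roll-up of power readings at different levels of a portfolio.
    # Building and rack names are invented sample data.
    from collections import defaultdict

    readings = [
        # (building, rack, kW)
        ("Building-A", "R101", 4.2),
        ("Building-A", "R102", 3.8),
        ("Building-B", "R210", 5.1),
    ]


    def rollup(rows, level):
        """Aggregate power by rack, building, or the entire portfolio."""
        totals = defaultdict(float)
        for building, rack, kw in rows:
            key = {"rack": f"{building}/{rack}",
                   "building": building,
                   "portfolio": "all"}[level]
            totals[key] += kw
        return dict(totals)


    print(rollup(readings, "rack"))
    print(rollup(readings, "building"))
    print(rollup(readings, "portfolio"))
    ```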

    Schirmacher will discuss Digital Realty’s DCIM initiative tomorrow in one of the afternoon keynote sessions at The Uptime Symposium in Santa Clara, Calif.

  • IO Partners With TMI to Bring Modules to Asia-Pacific

    A look at IO’s modular data center technology in a recent deployment. The company has partnered with TMI to distribute its “Data Center 2.0” technology in Singapore, Malaysia and Brunei. (Photo: Rich Miller)

    IO has been aggressively trying to expand global distribution of Data Center 2.0, its modular solution. The company is partnering with Tractors Machinery International (TMI) to distribute its data center modules in Singapore, Malaysia and Brunei. TMI will serve as the exclusive distributor of the IO.Anywhere platform in these countries, providing the local presence IO needs to expand its concepts in the region.

    Modular design was initially viewed as a niche play by many in the data center industry, but has been seeing increasing traction. Once thought to be limited to mobile requirements, temporary capacity or novel designs like cloud computing facilities, modular designs are now the subject of a growing number of wins, discussions and partnerships. The IO-TMI partnership extends that trend to new geographies.

    “IO’s Data Center 2.0 products are embraced by the most demanding enterprise technology users in the world, and Asia Pacific is a highly sophisticated market,” said Oon Ho Tan, General Manager of Tractors Machinery International Pte. Ltd. “With IO, we are empowered to deliver next-generation technology that optimizes service delivery, reduces risks and aligns the data center with the needs of business and IT.”

    This latest partnership is a good one for IO, which has been looking to expand its presence in Asia-Pacific and announced a facility in Singapore a while back. TMI also will distribute IO.OS, which IO bills as the first true data center operating system, integrating modular and legacy infrastructure with the entire IT stack to provide the visibility, insight and control needed to optimize data center performance.

    By partnering with IO, TMI gains exclusive rights going forward to distribute IO.Anywhere products within the territory, as well as access to IO training and certification programs, technology resources, and sales and marketing support.  As an IO global distribution partner, TMI is poised to generate incremental revenue by leveraging IO’s next–generation technology platform and global brand equity.

    “With the global rise of mobility, cloud and big data, companies everywhere must rely on their data centers to deliver unprecedented business agility,” said Adil Attlassy, IO Senior Vice President of Global Operations.  “TMI is ideally situated to help IO’s global clients improve data center efficiency, agility, security, reliability and sustainability.”

  • Equinix Unveils New ‘Crown Jewel’ for Ashburn Campus

    The new Equinix DC11 data center is the largest facility yet on the company’s six-building primary campus in Ashburn, Virginia. (Photo: Equinix)

    Equinix keeps growing in northern Virginia, expanding the largest Internet exchange in North America with the largest facility yet on an already immense campus. The new DC11 facility will support growing network traffic in Ashburn, which shows no signs of slowing as the integral East Coast network hub.

    DC11 is the company’s eighth facility in northern Virginia, and physically sits next to DC6 on the Equinix Ashburn campus. The 230,000 square foot facility has room for 120,000 square feet of colocation space, with the initial phase adding more than 42,000 square feet of space, enough for 1,200 cabinets. In terms of power, there’s the potential for 15 megawatts of critical capacity.

    DC11 is one of eight fiber-connected buildings on a single campus with more than 500,000 square feet of data center space. The DC11 project represents about $79 million in capital investment, another indication of the growing demand for data center space in northern Virginia.

    New Crown Jewel for Ashburn Campus

    Equinix is known for the distinctive look of its facilities, but its data center design continues to evolve. The facility also includes flex space for business continuity and disaster recovery, as well as several amenities. The data center isn’t a cold, faceless entity anymore. It’s a community, and DC11 is the crown jewel in an already impressive cluster of buildings.

    The facility employs overhead ducting rather than raised floor, as is the long-time practice at Equinix. It has state-of-the-art security meeting the most stringent requirements, including perimeter fencing around the entire campus, a guard house with controlled parking entry, mantrap entry, biometric hand-reader access, 24×7 guards and recorded CCTV monitoring throughout the facilities.

    Equinix continues to see strong demand across a number of verticals, including cloud and IT, content and media, and retail. As this is Ashburn, government is also a big vertical, and DC11 meets Federal Data Center Consolidation Initiative mandates for efficiency, as well as industry compliance frameworks such as HIPAA and PCI. DC11 offers access to the five leading providers of the GSA Networx Telecom services contract: AT&T, CenturyLink, Level 3 Communications, Sprint and Verizon.

    Retail Also Solid in Ashburn

    Ashburn is the key East Coast data center market (along with New York Metro) and continues to see a flurry of activity. Ashburn is a key communications hub for heavily populated U.S. East Coast and European markets, with a high concentration of IT, telecommunications, biotech, federal government, and international organizations setting up shop in the area.

    While many of the facilities being built are more focused on the wholesale sector of the data center market, Equinix is leading the pack in terms of retail colocation. That can be partially attributed to its connectivity story. The campus in Ashburn features more than 10,000 cross connects, over 900 networks and direct access to 90% of Internet routes. Connections are available to more than 300 network carriers, 140 cloud and IT service providers, and 100 content and digital media companies.

    Equinix also has a thriving ecosystem of over 4,000 businesses in the Equinix marketplace. Equinix has over 95 data centers and over 7 million square feet of space worldwide.

  • Wall Street Going Wireless in Bid for Ultra-Low Latency

    Can wireless connectivity provide faster ultra-low latency connectivity for financial traders? (Image copyright David Neale and licensed for reuse under the Creative Commons Licence)

    There’s growing interest in wireless as a way to get faster connectivity for financial customers conducting low-latency trading, a trend seen in several announcements this week. 325 Hudson announced the addition of a wireless Meet Me Room (MMR) through a partnership with NexxCom. Meanwhile, Hudson Fiber Network has added ultra low-latency wireless infrastructure through ULL Networks.

    In the race to zero latency, the technologies are changing and evolving. Wireless technology providers are one avenue that fiber providers and financial customers are looking at closely as they seek ever-faster connectivity for their trading systems. Wireless can offer speed advantages over cabling, as signals can travel faster through air than fiber, and wireless transmission can allow data to move in a straighter path than fiber cabling routes (see Telecom Ramblings for a good explainer on wired vs. wireless).
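
    The physics is straightforward: light in standard optical fiber travels at roughly two-thirds of its speed through air, and a microwave path can hug the straight line between endpoints while fiber follows rights-of-way. A back-of-the-envelope comparison, using illustrative distances for a New York–Chicago route, is sketched below.

    ```python
    # Back-of-the-envelope check on why microwave can beat fiber on latency.
    # Distances and the fiber detour are illustrative assumptions.

    C_AIR_KM_S = 299_700.0      # ~ speed of light in air (km/s)
    C_FIBER_KM_S = 204_000.0    # ~ speed of light in optical fiber (km/s)

    straight_line_km = 1_150.0  # roughly New York to Chicago, great-circle
    fiber_route_km = 1_350.0    # assumed detour of a real fiber path

    wireless_ms = straight_line_km / C_AIR_KM_S * 1_000
    fiber_ms = fiber_route_km / C_FIBER_KM_S * 1_000

    print(f"one-way wireless: {wireless_ms:.2f} ms")
    print(f"one-way fiber:    {fiber_ms:.2f} ms")
    print(f"advantage:        {fiber_ms - wireless_ms:.2f} ms per direction")
    ```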

    Data center providers in the New York and New Jersey markets depend on ultra-low latency financial customers as their bread and butter. Hudson Fiber Network bills itself as the premier data transport provider, targeting financial, content, carrier, and enterprise clients with flexible networking solutions. 325 Hudson is strategically located at the fiber-dense crossroads of Hudson Street and the Holland Tunnel, and is also emphasizing its services for financials.

    325 Hudson’s Wireless MMR

    325 Hudson, the carrier-neutral core interconnection facility strategically located at the fiber-dense crossroads of Hudson Street and the Holland Tunnel in New York City, has partnered with NexxCom Wireless for the first managed wireless Meet Me Room (MMR), which will operate from the 325 Hudson rooftop. Wireless services will be made available on both a private network basis and as a managed service, initially to several key sites in New Jersey.

    The design of the wireless MMR is provided by NexxCom, and is intended to minimize frequency interference, maximize roof space and make wireless connections easy for customers. The building’s management says it will provide the lowest latency connections across Manhattan, Northern New Jersey and beyond to support users with financial exchange, mobile backhaul and disaster recovery connectivity needs.

    “NexxCom’s proprietary wireless technology provides the first solution where high capacity, high availability and low latency aren’t mutually exclusive,” said Sal Benti, Chairman of NexxCom Wireless. “The wireless Meet Me Room will offer the state of the art in wireless capabilities from technology to planning to provide customers with a tailored solution to suit each user’s specific needs.”

    “Our clients require data center access solutions that approach the speed of light; these requirements have been the driving force behind our creation of these wireless links,” said Benti. “As a result, today we can provide point-to-point wireless access between key trading firms and financial data centers at latencies that are superior to traditional optical fiber solutions.”

    The wireless MMR will enhance connectivity to subsea cables, New Jersey data centers and exchanges, and provide access into long haul fiber and wireless networks to Chicago and additional western points. The wireless offering is coupled with access to multiple fiber and core transport providers, and acts as a low-cost alternative for first- or last-mile connectivity alongside the building’s wired MMR and 325 Hudson’s interconnection facility.

    “Our partnership with NexxCom Wireless to provide the first-of-its-kind, carrier-neutral wireless Meet Me Room in New York City further exemplifies our commitment to provide our customers with cutting-edge, state-of-the-art services,” said Hunter Newby, Joint Venture Partner at 325 Hudson Street. “We look forward to bringing the submarine and terrestrial lit transport communities together with the microwave and millimeter wave community at 325 Hudson.”

    NexxCom Wireless is a broadband wireless equipment and systems business focused on low latency and ultra broadband networks, and specifically targets firms conducting High Frequency Trading (HFT).

    Hudson Fiber Network to Distribute ULL Networks

    Hudson Fiber Network (HFN) is exclusively distributing ULL Networks’ ultra low latency RF wireless connectivity capabilities within the New York and New Jersey metropolitan areas. Wireless routes will be offered for connections between major New York and New Jersey exchange points, including the Equinix NY1, NY4, NY8 and NY9 facilities, and key financial data centers in Weehawken, Mahwah and Carteret, New Jersey. Nationwide routes to Chicago facilities will also be available.

    HFN will offer ultra low-latency RF wireless services at the maximum available bandwidth of 1 Gig, with latency reduction ranging from 30 to 60 percent depending on the wireless route. HFN and ULL Networks will partner to provide additional routes in the future.

    “Our clients are some of the most influential players in the financial industry,” said Brett Diamond, President of HFN. “By introducing this capability to our extensive lowest-latency fiber routes, we have the versatility to offer the financial community stand-alone fiber and wireless services, as well as hybrid services, based on each customer’s specific needs. Our partnership with ULL furthers our capabilities as the premier provider of low-latency services across the board nationwide, with a specific focus on the New York and New Jersey metropolitan areas.”

    “HFN was the partner of choice for ULL Networks in the New York/New Jersey market,” said Ed Kopko, CEO of ULL Networks. “Their lowest-latency fiber routes, now coupled with wireless, are simply the best offerings in market. Our best-of-breed wireless services will help HFN’s financial customer base by giving them seamless access to ultra-low-latency options in the ongoing pursuit of the highest possible performance.”

    Both of these moves target latency-sensitive customers, as well as disaster recovery and business continuity operations. As providers look to appeal to financials, they’ll continue to blaze new trails in terms of technology adoption in the hopes of gaining an edge.

  • BMC Acquired By Private Investors for $6.9 Billion

    Data center management is hot, and investors are picking up on the trend. Software maker BMC is going private in a deal that sees the company acquired by private equity firms Bain Capital and Golden Gate Capital together with GIC Investments and Insight Venture Partners. BMC will be acquired for $46.25 per share in cash, or approximately $6.9 billion. Credit Suisse, RBC Capital Markets and Barclays have agreed to provide debt financing.

    There are several reasons why BMC would choose to go private. At the top of the list is the flexibility it provides from a strategic standpoint. Answering to public investors is difficult, particularly during technological paradigm shifts such as the current move to the cloud.

    “BMC believes the opportunity to become a private company will provide additional flexibility and position us to invest more strategically to drive powerful innovation and deliver cutting edge customer solutions,” said Bob Beauchamp, chairman and chief executive officer at BMC. “We look forward to working closely with all parties to complete this transaction and enter into our next chapter of growth and industry leadership.”

    The board of directors has approved the deal, which offers only a modest premium above the current stock price. “After a thorough review of strategic alternatives, the BMC board of directors is pleased to reach this agreement, which provides shareholders with immediate and substantial cash value, as well as a premium to our unaffected share price,” said Beauchamp. Shares of BMC have fluctuated between $44 and $45 in recent weeks, and traded at $45.45 this afternoon.

    Elliott Management, which owns 9.6 percent of the BMC common stock, has agreed to vote its shares in favor of the transaction.

    BMC an Attractive Play for Investors

    BMC’s flexibility and its already-strong position in the infrastructure management market made it attractive to the investor group. The company has been expanding its management capabilities, both for internal and external infrastructure.

    “BMC is the only enterprise software vendor that can go from mainframe to mobile, with solutions that help IT drive real business innovation and optimize operations management and employee productivity,” said Ian Loring, managing director at Bain Capital. “We and the rest of the Investor Group look forward to working with the management team and employees of BMC to execute additional growth strategies designed to expand the Company’s capabilities and enhance its relationships with customers and partners around the world.”

    “BMC is an innovative leader in IT operations management and has strong leadership positions in growing segments such as cloud management, service management and workload automation,” said Prescott Ashe, managing director of Golden Gate Capital. “We are excited to work with the management team and employees to accelerate BMC’s growth and strengthen its position as the best-in-class provider of IT management software for heterogeneous environments.”

  • Dell Acquires Cloud Management Player Enstratius

    Dell has acquired Enstratius (previously known as Enstratus), a provider of enterprise cloud-management software and services that delivers single and multi-tenant cloud management capabilities. The acquisition continues the streak of tech giants buying up some of the most interesting cloud pieces in order to flesh out their own cloud portfolios and capabilities. Back in March, Oracle purchased Nimbula in a similar “giant gobbles up interesting cloud player” move.

    Terms of the transaction were not disclosed. Enstratius was founded in 2008 and is headquartered in Minneapolis, Minn. Dell plans to retain the staff of Enstratius and, as with previous acquisitions, will continue to invest in additional engineering and sales capability to grow this business. The question of whether such acquisitions will thrive within a larger organization or wither away is always present. However, Dell does have a good track record in terms of integrating acquisitions (Boomi is one popular example).

    The acquisition of Enstratius enhances Dell’s ability to provide cloud management solutions to its customers, as the company fleshes out its portfolio of cloud offerings.  Enstratius helps organizations manage applications across private, public and hybrid clouds, including automated application provisioning and scaling, application configuration management, usage governance, and cloud utilization monitoring.

    Enstratius is available as software-as-a-service or as on-premises software, enabling full control from within a customer’s data center, or via a hosted service.   Enstratius currently supports more than 20 public and private cloud platforms, including OpenStack, VMware, Rackspace, Amazon Web Services and Windows Azure, with the added flexibility to easily add new clouds.  The bottom line: Enstratius has been going out of its way to ensure compatibility across the cloud landscape, and its acquisition positions Dell to be similarly compatible.
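
    The core idea behind a cloud-agnostic management layer is a single provisioning interface backed by per-provider adapters. The sketch below is a deliberately generic illustration of that pattern; the class and method names are hypothetical and do not represent Enstratius’s actual API.

    ```python
    # Generic sketch of the "one interface, many provider adapters" idea.
    # All names here are hypothetical illustrations.


    class CloudAdapter:
        def launch(self, image: str, size: str) -> str:
            raise NotImplementedError


    class AwsAdapter(CloudAdapter):
        def launch(self, image: str, size: str) -> str:
            return f"EC2 instance from {image} ({size})"


    class OpenStackAdapter(CloudAdapter):
        def launch(self, image: str, size: str) -> str:
            return f"Nova server from {image} ({size})"


    def provision(adapter: CloudAdapter, image: str, size: str) -> str:
        """The application asks for capacity; the adapter hides provider details."""
        return adapter.launch(image, size)


    for cloud in (AwsAdapter(), OpenStackAdapter()):
        print(provision(cloud, image="app-base-image", size="medium"))
    ```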

    Big Technology Players Embracing Multi-Cloud Strategies

    The big trend among customers is to embrace cloud either in a hybrid setup as part of a larger strategy, or to employ multiple clouds for reasons including cost savings, redundancy and extended reach. However, managing multiple clouds is tricky. This is where Enstratius comes in. The big technology players are looking to acquire cloud management offerings that are cloud agnostic. The Enstratius acquisition enables customers to choose from a wide variety of public and private cloud providers, including Dell and non-Dell clouds.

    “As enterprises increase their use of public, private and hybrid clouds, the need for controls, security, governance and automation becomes more critical,” said Tom Kendra, vice president and general manager, systems management, Dell Software. “Dell, together with Enstratius, is uniquely positioned to deliver differentiated, complete cloud-management solutions to enterprise customers, large and small, empowering them with the efficiency and flexibility in the allocation and use of resources.”

    Dell says cloud management is a key strategic priority, given customers’ rapid adoption of cloud-based applications and the compelling array of cloud deployment models. Enstratius brings Dell cloud infrastructure management for public, private and hybrid-cloud deployments, and complements the capability Dell recently acquired from Gale Technologies, now Active System Manager (ASM), by providing enhanced multi-cloud management and application configuration capabilities.

    Enstratius also builds upon Dell Software’s strong portfolio of technologies such as Foglight performance monitoring, Quest ONE identity and access management, the Boomi integration platform, and data protection offerings such as AppAssure and NetVault to create a stronger systems management portfolio that enhances multi-cloud management.

    “We are excited to join the Dell team and bring our expertise to Dell’s rapidly growing cloud-management capabilities,” said David Bagley, CEO of Enstratius.  “Together, Enstratius and Dell create new opportunities for organizations to accelerate application and IT service delivery across on-premises data centers and private clouds, combined with off-premises public cloud solutions. This capability is enhanced with powerful software for systems management, security, business intelligence and application management for customers, worldwide.”

  • New QTS Lab Will Advance High-Security Federal Clouds

    A look at some of the data center space inside the QTS Richmond campus (Photo: QTS)

    QTS (Quality Technology Services) wants to help federal agencies get comfortable with cloud computing, and is dedicating some of its data center space toward this goal. The company, in conjunction with i2 Sentinel Associates, has set up a testbed inside its massive data center campus in Richmond, Virginia that will focus on creating highly secure cloud computing capabilities based on the needs of the U.S. Department of Defense, federal agencies and the U.S. intelligence community.

    QTS sees the continuous transformational environment (CTE) lab at its Richmond data center as an exciting development in the federal sector’s adoption of cloud computing.

    “The Richmond CTE lab contains some of the most advanced technologies in cloud computing today,” said Scott Shinberg, executive vice president, federal systems group – QTS. “Its technology paired with the unbiased and secure environment will provide a gateway to enhancing the performance and interoperability of critical government applications. Today’s ribbon cutting marks the start to speeding the development and deployment of tomorrow’s cloud computing services.”

    CTE is housed within QTS’ 1 million square foot Richmond Data Center campus, where a grand opening was held today. The lab provides a stable environment for critical testing and analysis of off-the-shelf software and hardware for commercial and government users. The lab will be used to evaluate and improve the development of secure cloud computing technologies.

    The goal is to develop production-ready software and hardware for high performance computing, cloud computing, cross-community partnerships and shared knowledge transfer, which is an important priority for the intelligence communities within the U.S. government.

    “With industry and more federal players looking to cloud computing, physical and logical security in this area will continue to be a critical element,” said i2Sentinel CEO Thomas Preston. “QTS has long played a role in this space with the presence of its Richmond facility, and the addition of this lab solidifies the company as a major player in the federal market for cloud computing.”

    Attendees at the CTE lab’s grand opening included John A. Marshall, Deputy Director of the NGA National System for Geospatial Intelligence Program Management Office, and Brigadier General Brian D. Beaudreault, USMC, Deputy Director, Future Joint Force Development, Joint Staff J7, among others. Attendees were given technology demonstrations and a tour of the CTE lab.

  • Surviving Sandy: Two Views of the Superstorm

    A look at some of the damage wrought by Superstorm Sandy on a property adjacent to the IFF data center in Union Beach, New Jersey. (Photo: IFF)

    LAS VEGAS – For Alex Delgado, things were going from bad to worse as Superstorm Sandy slammed the Jersey Shore. It was high tide, during a full moon. There was a 13 foot storm surge, and the data center was less than a mile from the beach. Six hours into the storm, the company’s operations team in India had to be evacuated due to a cyclone.

    The staff at the International Flavors & Fragrances (IFF) data center in Union Beach, N.J. used to joke about a single telephone pole that carried “half of the Internet and half of its power.” As Sandy came ashore, that was the pole that fell. In short, if Delgado had won a raffle that week, the prize would have been a spot in the Hunger Games. Everything was going wrong.

    The campus was swamped with six to seven feet of water. Both of its power substations were under water, as were the diesel fuel pumps. UPS batteries were nearing their end of life. Street power was out, and access to the facility was hindered by a partially collapsed road.

    Different Scenarios, Different Considerations

    Delgado, the Global Operations and Data Center Manager for IFF, shared his experience this week as part of a keynote panel at Data Center World in Las Vegas. The panel showcased two stories of Sandy’s impact: one from the Jersey Shore at the heart of the damage, another from Philadelphia.

    The data center in Union Beach supports more than 50 manufacturing facilities around the world for IFF,  a chemical manufacturing company that did over $2.8 billion in revenue last year. While Delgado and his team struggled with the storm, the event had no major impact on customers, as the company didn’t lose a single order.

    The 4,500 square foot facility is a single tenant building with 30 minutes of UPS backup. Its disaster recovery site is 2 hours away at an IBM facility in Sterling Forest, New York. As the storm intensified, IFF was able to shift its critical operations to the backup facility.

    The damage in Union Beach was severe. The data hall stayed dry, as it was on the second floor of the building. But the storm surge took out power and mechanical infrastructure, and flooded the machine shop, ruining most of the facility’s power tools and spare parts. With the power out and roads closed or blocked, staff stayed in place for days. With provisions exhausted after the first 48 hours, IFF staff subsisted on vending machine food as they began the recovery effort, Delgado said.

    The data center was returned to service on Dec. 8 with new generators and infrastructure. Delgado wound up procuring 300 batteries and 3 generators.

    Delgado’s key “lessons learned” included vendor support. “If you don’t have a good relationship with your vendors, start shaking some hands today,” he said. He also noted that the company had moved to cloud email, which saved a ton of headaches in terms of communication.

    The View From Philly

    Donna Manley, IT Senior Director at the University of Pennsylvania, showed a different side of the storm. While Philadelphia wasn’t hit nearly as hard, the operational impact of the storm was still significant.

    The university’s data center is in a multi-tenant building, with a main data center of 4,850 square feet in the University City section of Philadelphia. Manley’s story is important because it revealed a larger concern than just the data center: the city of Philadelphia’s aged infrastructure.

    A week prior to the storm, Manley and her team started the planning process. They identified teams, began tarping the windows, and put disaster recovery provider SunGard on alert. “We started our crisis command center on the 29th, setting up a separate box.net instance just in case we lost power and were in an emergency situation,” said Manley.

    Understanding the geographic diversity of the staff was important, as some employees lived in areas where the storm hit hard. “We had very few individuals that could have been on site,” said Manley. “We needed to make sure there was technical and management staffing.”

    Cloud Services Play a Role

    Manley leveraged online storage provider Box.net to get them through the storm. “Resourcing doesn’t just mean people,” said Manley. “One of the big things we have going on is our documentation. Up until recently, we had it in Sharepoint. We made it available on box.net, and we didn’t have to worry about servers going down and documentation not being available to us.”

    Manley’s advice is to have a data center crash kit checklist. “Because we’re an urban campus, we have a couple of unique items on there – respirator masks, and subway tokens to get to the disaster recovery site at SunGard,” she said.

    She said it’s also important to read the fine print on disaster recovery agreements to see whether a fee is required to put your provider on standby. There’s also food to consider, as there’s a chance workers will have to stay put at the data center for extended periods, and local restaurants aren’t as committed to staying open as a data center is.

    Both organizations said the prospect of managed services and hosting now appealed to them a little bit more than prior to the storm. Cloud services played an important role in both disaster plans, even if only to keep communications open through email.

  • David Shaw of IO is AFCOM’s Data Center Manager of the Year

    LAS VEGAS – David Shaw of IO has received the Data Center Manager of the Year Award from AFCOM, the leading association for data center managers. Shaw, the Senior Vice President of IO, was honored Wednesday night in an awards ceremony at Data Center World in Las Vegas.

    Shaw manages more than 1.5 million square feet of data center capacity for IO, which has been a pioneer in deploying modular data center designs. He oversees IO’s “Data Center as a Service” offering to deploy managed data space in IO’s factory-built modules, which also includes the company’s IO.OS data center management software.

    The other finalists were Tate Cantrell, the Chief Technology Officer at Verne Global, and Donna Manley, the Senior IT Director at the University of Pennsylvania.

    Shaw, who has been working in the industry since 1987, oversaw the opening of the world’s largest modular data center in Edison, New Jersey, and ensured that the facility remained 100 percent operational during Hurricane Sandy and in its aftermath.

    The Power of People

    In accepting the award, Shaw noted that there are six things data center managers work with – power, cooling and connectivity, and people, process and technology. Of those, he said people were the most important part of the equation. He dedicated the award to his team of 50 staff members, who work in four data centers across the U.S.

    Shaw also emphasized the importance of bringing “new blood” into the industry by getting the next generation of IT workers working in the field.

    Prior to joining IO, Shaw led the greenfield build and operational implementation of a 7 megawatt data center to support electronic medical records and critical patient care systems for 26 hospitals and services across five states. He was also responsible over a 10-year period for global data center management at Perot Systems.

    He received in-depth specialist data center and operations training from IBM and the UK Ministry of Defense, and is certified in ITIL service management. While serving in the Royal Air Force, he graduated in electronic engineering and studied computer-aided engineering, specializing in robotics.

    The data center manager of the year was selected in a blind test judged by three past winners. The award is named for Len Eckhaus, the founder of AFCOM.

    AFCOM was founded in 1980 to support the educational and professional development needs of data center and facilities management professionals around the globe. The association has more than 3,500 members and 40 chapters worldwide, and provides education and networking for data center managers through its Data Center World conferences, regional chapters, and Data Center Management magazine.

  • BYOD is Not the Enemy: Using Consumer Tech to Manage the Data Center

    LAS VEGAS – BYOD is not the enemy. Instead, the Bring Your Own Device movement of adopting consumer technology can be of great benefit to an IT organization, according to Joseph Furmanski, Associate Director of Data Center Facilities and Technology at the University of Pittsburgh Medical Center (UPMC).

    In a presentation Tuesday at the AFCOM Data Center World Spring 2013 conference, Furmanski outlined how consumer tech such as iPhones and tablets were embraced at UPMC and have become valuable tools for the huge health care provider. Workers bringing their own devices to the workplace is inevitable, he noted, so why not embrace it?

    For UPMC, consumer tech offered a way to do more with less manpower. “There’s 3 people, we stretch them a lot and want to minimize that,” said Furmanski.

    Furmanski believes adopting consumer tech is important in addressing long-term staffing challenges facing the data center industry. Many in the data center field are no longer spring chickens, and the industry will need younger workers who are accustomed to using iPhones and tablets rather than Blackberries and PCs.

    Attracting A New Generation of Staffers

    “We have to train a new generation and get them excited, and the key is using the tools they use,” said Furmanski, who said his thinking was influenced by discussions at AFCOM and other industry groups on attracting and retaining Millennials. “The people we’re hiring now grew up with this stuff.”

    UPMC operates more than 20 hospitals, with 3,200 physicians and more than 55,000 employees at 400 clinical locations, which include hospitals as well as long-term care and senior living facilities. UPMC also operates a health plan with nearly 1.8 million members.

    With the objective of improving data center management and IOC support, the company began looking at consumer tech, with the stipulations that it be low or no cost and require little to no custom programming. The effort initially focused on the most popular applications – Facebook, Twitter, Dropbox, Evernote and QR codes – to perform various functions and communications.

    Quick Deliverables, Quick Wins

    “We had to convince management that we weren’t doing it just to have fun,” said Furmanski. The initial stages were about quick deliverables and quick wins.

    UPMC’s main data center is in Forbes Tower, a 10,750 square foot facility with limited staff. The hardware is leased on 3-4 year cycles, and the environment is heavily virtualized, with over 5,000 virtual machines in use. It’s not a large data center, and the team is constantly looking at how it can improve efficiencies. This is where consumer tech came into play.

    The first phase of the plan cost under $2,000 in equipment. The staff would use QR codes and code scanners for things like the FM 200 manual, making it easier to access documentation. The staff took video for information purposes, and used Skype to call subject matter experts to solve problems.
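
    The QR-code workflow is easy to reproduce. The snippet below is a minimal sketch using the third-party Python qrcode package (install with pillow support via pip install qrcode[pil]); the documentation URL is a placeholder, since UPMC’s actual intranet links are not public.

    ```python
    # Minimal sketch: encode a runbook URL as a QR code so a phone camera can
    # pull it up at the rack. URL and filenames are placeholders.
    import qrcode

    MANUAL_URL = "https://intranet.example.org/docs/fm200-manual.pdf"  # placeholder

    img = qrcode.make(MANUAL_URL)       # build the QR code image
    img.save("fm200_manual_qr.png")     # print and post next to the suppression panel
    print("QR code written to fm200_manual_qr.png")
    ```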

    UPMC was able to more effectively use limited staff, save on paper and organize documentation through use of consumer tech.  As time passed, using tablets improved quality and processes and saved the company a lot of paper.

    It wasn’t all smooth. “Integration was the beast that stopped the project,” said Furmanski. Security integration was a particular pain point with devices like iPads and iPhones. “There were a lot of good vertical applications, but we hated logging in over and over,” he said. “There was little to no integration. We talked to a lot of vendors about this.”

    The Surface to the Rescue?

    The company then looked to Microsoft’s Surface tablet, and believes the new Surface Pro will make many of those headaches go away. “Security and content worked really well with it,” said Furmanski. “We found we can run any web or Windows based app.” Working with these devices is now a DCIM requirement.

    The company uses these devices to access data on the overall health of the data center. The wealth of applications emerging from app stores for consumer devices is proving useful to UPMC. Staff can view real-time PUE and the environmental control system, all from a tablet. Furmanski says SharePoint is a key knowledge repository, and Windows 8 and Active Directory pass context through, so the silos between apps are breaking down.

    The bottom line for UPMC is that a small staff, with limited investment in consumer tech devices, was able to do more and virtually eliminate the heavy paper usage that plagued the company, Furmanski said. Information is at their fingertips; a limited staff can do more and can access documentation quickly and easily. UPMC will continue to look into how consumer tech can improve its everyday operations.

    By implementing proper usage of BYOD and consumer tech, the data center can greatly improve processes and drive valuable insight, even with limited manpower. There are a wealth of applications from data center management providers coming out every day that increase the value of these devices, so it’s worth looking into the consumerization of data center management.

  • Microsoft: Centralization is Driving Energy Efficiency

    Microsoft’s Brian Janous was the keynote speaker at Data Center World Spring 2013, which got underway yesterday at the Mandalay Bay in Las Vegas. (Photo: Josh Ater)

    LAS VEGAS – Microsoft is looking deeply into energy efficiency and insists that its gaze must encompass the entire system – not just the level of the data center, but what it takes to create a unit of content.

    We are now in an era of centralization, Microsoft Director of Energy Strategy Brian Janous said yesterday in his keynote address at Data Center World Spring 2013 at the Mandalay Bay, drawing many parallels between the data center and energy industries at the turn of the last century.

    While the talk was dubbed “commoditization of the cloud,” a key theme was the centralization of data that is occurring thanks to cloud, and the efficiencies this is driving in terms of energy – the oxygen of the cloud.

    Energy is Critical

    Cloud is driving centralization, which is in turn driving the need for efficiencies. “There’s probably no other industry, at this point, where energy is more critical,” said Janous.

    Janous oversees energy agreements for Microsoft, as well as strategic partnerships to make sure the company’s power supply is reliable and sustainable. Before joining Microsoft in 2011, he had spent most of his career in the energy sector. Janous believes data center operators must look into the entire supply chain for opportunities. This includes “out of the box thinking” on topics such as fuel cells, methane, solar and everything in between.

    “Physicists believe it’s possible to create a perpetual motion machine; if this happens, everything I talk about will be irrelevant,” Janous joked.

    What Microsoft is Doing and How They’re Doing It

    Microsoft has been moving from a traditional box software company to an online services company. Microsoft runs 200 online services worldwide. Office 365 is the fastest growing product in the company’s history, Janous said, serving more than 1 billion customers in the cloud. Janous provided a quick snapshot of where Microsoft’s data centers are and noted that the company is still in rapid expansion mode. “It’s an exciting time to be on the forefront,” he said.

    The company sees the evolution of efficiency in the data center occurring over five generations. The first was colocation: multi-tenancy to take advantage of scale. The second generation was a focus on density, the third on containment. The fourth generation is modular. The fifth generation is integration, and this is where Janous says PUE will approach the 1.0 mark.
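
    As a quick refresher, Power Usage Effectiveness is total facility energy divided by the energy delivered to IT equipment, so “approaching 1.0” means driving non-IT overhead toward zero. A worked example with illustrative numbers:

    ```python
    # Quick arithmetic behind the "PUE approaching 1.0" claim.
    # The overhead figures below are illustrative only.

    def pue(it_kw, overhead_kw):
        """Power Usage Effectiveness = (IT + overhead) / IT."""
        return (it_kw + overhead_kw) / it_kw

    print(f"legacy facility:   PUE = {pue(it_kw=1000, overhead_kw=800):.2f}")  # ~1.80
    print(f"contained/modular: PUE = {pue(it_kw=1000, overhead_kw=150):.2f}")  # ~1.15
    print(f"integrated design: PUE = {pue(it_kw=1000, overhead_kw=50):.2f}")   # ~1.05
    ```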

    “Integration is thinking about system on a chip, but it’s a little broader than that; it’s about what it takes to make a data center work,” said Janous. “What integration means is thinking about the upstream and downstream value chain and how we optimize it.”

    In short, efficiency isn’t just about the facility. “Let’s figure it out all the way to creating a unit of content,” said Janous.

    Centralization Drives Efficiency

    Janous spoke of parallels between online services and the energy sector, particularly the electricity sector. “Beyond the dependencies, there are a lot of similarities: large centralized plants, a network, and we have a need to balance supply and demand on a near instantaneous basis,” said Janous. “There’s not a lot of industries that need to do this.”

    “On a macro level, I think of the development of the electricity sector over the last 100-150 years: it started as a bunch of components, and then became centralized in plants. This is what we’re seeing in data centers these days.”

    Janous notes that there were roughly 50,000 power plants distributed across the United States at the turn of the previous century. “Businesses didn’t trust utilities,” he said. “Then we had a massive revolution, not just on the tech side, but the business side. Centralization meant you could spread costs over customers, driving costs down and driving up demand.”

    Janous believes the data center industry needn’t fear commoditization. He uses the example of the light bulb. Electric companies believed that because the light bulb was 4 times more efficient, it would drive them out of business. Instead, it created 4 times more lights. The corollary: as we get more energy efficient in the data center, it creates a bigger demand.

  • SoftLayer and Basho Team on Turnkey Big Data Solution

    Service providers are quickly coming up with big data solutions in a turnkey fashion, as cloud presents a massive opportunity for big data going forward. Global cloud infrastructure provider SoftLayer and distributed systems company Basho have teamed up on a big data solution. Riak and Riak Enterprise are now available on SoftLayer infrastructure on a consumption-based pricing basis. Riak is an open source, distributed database.

    The combination, dubbed SoftLayer big data solution: Riak, will provide database administration and infrastructure management with the flexibility of SoftLayer’s consumption-based pricing. Common use cases for Riak include content delivery platforms and global session stores; aggregating large amounts of data for logging, search and analytics; and managing, storing and streaming unstructured data and general-purpose data stores.
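
    For a sense of how one of those use cases looks in practice, here is a minimal session-store sketch using Basho’s Python client for Riak, with placeholder keys and data; it assumes a reachable Riak node and is not tied to SoftLayer’s specific deployment.

    ```python
    # Hedged sketch of a session-store pattern with Basho's Python client
    # (pip install riak). Bucket, keys and values are placeholders.
    import riak

    client = riak.RiakClient()            # defaults to a node on localhost
    sessions = client.bucket("sessions")  # buckets are created lazily

    # Store a user's session document as JSON.
    obj = sessions.new("user-42", data={"cart": ["sku-123"], "auth_token": "abc"})
    obj.store()

    # Fetch it back later, e.g. from another app server.
    fetched = sessions.get("user-42")
    print(fetched.data)
    ```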

    “The customer demand for easy to deploy and manage big data cloud-based solutions continues to rise,” says Duke Skarda, CTO of SoftLayer. “We are seeing substantial adoption through joint customers, such as Bump, who want to leverage this joint offering. Our SoftLayer big data solution: Riak offering is further validation of our commitment to deliver big data solutions that can run over our highly scalable, automated cloud platform.”

    The customer mentioned, Bump, is one of the most popular mobile apps on the market today. The app makes it easy for users to share their contact information, photos, and other objects by simply “bumping” their smartphones. Bump runs on SoftLayer’s infrastructure while using Riak to store user data including events, communications sent and received, handset information and tokens needed to authenticate using social networks.

    “Operational ease is key to our business success,” says Mark Smith, Operations Lead at Bump. “The combination of SoftLayer, who we already trust with our business and data, and Basho, who makes the database that we trust at scale, saves us time and effort and allows us to focus on our business, not our data infrastructure.”

    The new solution makes it easier to create and configure customized Riak clusters at the push of a button. Deployment is standardized, reducing the risk of human error. It uses pre-engineered systems optimized for Riak, including dedicated bare metal servers, for high performance and reliability. The solution applies best practices based on the companies’ joint insights, expertise and experience, giving Riak users optimized hardware and OS configurations, automated deployment of multi-data center fault-tolerant clusters, and integrated monitoring and support in the cloud.

    High Performance, Scalable Riak Environments in the Cloud

    The combination means a more accessible, more flexible, pay-as-you-go option for Riak distributed databases. It’s purchasable a la carte through SoftLayer, enabling organizations to quickly deploy big data applications. Organizations can start using Riak more quickly by provisioning a complete solution set through SoftLayer’s portal or API for ease of management, administration and support.

    “Basho and SoftLayer have long catered to innovative developers building the next generation of web, social and mobile applications. Today, enterprise customers are demanding the same: an architecture that provides for zero data loss and ensures zero downtime,” says Bobby Patrick, Executive Vice President and CMO of Basho. “We believe distributed systems software, such as Riak, and distributed infrastructure are required to help customers truly achieve these ambitions. Basho is excited to partner with SoftLayer to help companies easily deploy applications that are truly distributed, scalable, and always available.”

    Basho is a distributed systems company dedicated to making software that is highly available, fault-tolerant and easy to operate at scale. Basho’s distributed NoSQL database, Riak, and its cloud storage software, Riak CS, are used by fast-growing web businesses and by over 25 percent of the Fortune 50 to power their critical web, mobile and social applications and their public and private cloud platforms.

    SoftLayer’s Riak solutions are available immediately with prices starting at $359 per month. This price includes an Intel Xeon 1270-based server with 8GB of RAM and two 500GB SATA storage drives.

  • New Eucalyptus Features Boost Hybrid Clouds for AWS

    Eucalyptus Systems continues its laser focus on enterprises using Amazon Web Services that need private cloud software, adding three of the most anticipated capabilities in version 3.3. Support for Auto Scaling, Elastic Load Balancing and CloudWatch has been added, making Eucalyptus better suited for testing applications built for AWS in a private environment. The company has also added resource tagging, expanded instance types and a new maintenance mode, as well as support for the Netflix OSS tools Chaos Monkey, Asgard and Edda.

    With each subsequent release, Eucalyptus is building on its existing EC2, S3, EBS, and IAM features so it maintains compatibility with Amazon, positioning itself as the private cloud option in a hybrid setup.

    “We’re focused on three principles,” said Andy Knosp, Vice President of Product at Eucalyptus. “The first is increasing agility, allowing an organization to be much more responsive. The second is cost-effective control – over time, doing dev and testing can be quite expensive, and Eucalyptus is a low-cost alternative for test and development on a private platform (the customer) controls. The third is that the future is hybrid, and we will continue to develop and enable capabilities around hybrid.”

    What The Feature Additions Mean

    Auto Scaling: Auto Scaling allows application developers to scale Eucalyptus resources up or down based on policies defined using Amazon EC2-compatible APIs and tools. With Auto Scaling, cloud resources can be seamlessly increased or decreased to maintain performance and meet SLAs.
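
    Because the service speaks the same API as AWS, existing Amazon tooling such as the boto Python library can simply be pointed at a private Eucalyptus endpoint. A rough sketch, in which the endpoint, credentials, image ID and group sizes are all placeholder assumptions:

        from boto.ec2.autoscale import (AutoScaleConnection, LaunchConfiguration,
                                        AutoScalingGroup, ScalingPolicy)
        from boto.regioninfo import RegionInfo

        # Point boto's Auto Scaling client at a Eucalyptus cloud instead of AWS.
        # Endpoint, path and credentials below are placeholders.
        conn = AutoScaleConnection(
            aws_access_key_id='EUCA_ACCESS_KEY',
            aws_secret_access_key='EUCA_SECRET_KEY',
            region=RegionInfo(name='eucalyptus', endpoint='autoscaling.example.internal'),
            port=8773, path='/services/AutoScaling', is_secure=False)

        # Define what to launch and the group that scales it.
        lc = LaunchConfiguration(name='web-lc', image_id='emi-12345678',
                                 instance_type='m1.small', key_name='ops')
        conn.create_launch_configuration(lc)

        group = AutoScalingGroup(group_name='web-asg', launch_config=lc,
                                 availability_zones=['cluster01'],
                                 min_size=2, max_size=8)
        conn.create_auto_scaling_group(group)

        # A simple policy: add one instance whenever it is triggered.
        scale_up = ScalingPolicy(name='scale-up', as_name='web-asg',
                                 adjustment_type='ChangeInCapacity',
                                 scaling_adjustment=1, cooldown=300)
        conn.create_scaling_policy(scale_up)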

    Elastic Load Balancing: Elastic Load Balancing is an AWS-compatible service that distributes incoming application traffic across multiple Eucalyptus instances to provide greater fault tolerance for applications.
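
    The same approach works for load balancing; in the sketch below the endpoint and instance IDs are again placeholders, not part of the Eucalyptus documentation:

        from boto.ec2.elb import ELBConnection
        from boto.regioninfo import RegionInfo

        # Reuse boto's ELB client against a Eucalyptus load balancing endpoint.
        elb = ELBConnection(
            aws_access_key_id='EUCA_ACCESS_KEY',
            aws_secret_access_key='EUCA_SECRET_KEY',
            region=RegionInfo(name='eucalyptus', endpoint='loadbalancing.example.internal'),
            port=8773, path='/services/LoadBalancing', is_secure=False)

        # Listen on port 80 and forward HTTP traffic to port 80 on the instances.
        lb = elb.create_load_balancer('web-lb', zones=['cluster01'],
                                      listeners=[(80, 80, 'http')])
        lb.register_instances(['i-11111111', 'i-22222222'])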

    CloudWatch: CloudWatch is an AWS-compatible service that monitors cloud resources and applications running on Eucalyptus clouds. It provides a reliable and flexible monitoring solution that allows application developers and cloud administrators to programmatically collect metrics, set alarms, identify trends, and take action to ensure applications run smoothly.
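
    A CloudWatch-compatible alarm can then tie monitoring back to the Auto Scaling policy sketched above; the endpoint and policy ARN here are placeholders:

        from boto.ec2.cloudwatch import CloudWatchConnection, MetricAlarm
        from boto.regioninfo import RegionInfo

        # Connect to a CloudWatch-compatible monitoring endpoint (placeholder details).
        cw = CloudWatchConnection(
            aws_access_key_id='EUCA_ACCESS_KEY',
            aws_secret_access_key='EUCA_SECRET_KEY',
            region=RegionInfo(name='eucalyptus', endpoint='monitoring.example.internal'),
            port=8773, path='/services/CloudWatch', is_secure=False)

        # Alarm on sustained CPU load across the group and fire the scale-up policy.
        alarm = MetricAlarm(
            name='web-cpu-high', namespace='AWS/EC2', metric='CPUUtilization',
            statistic='Average', comparison='>', threshold=70,
            period=300, evaluation_periods=2,
            dimensions={'AutoScalingGroupName': 'web-asg'},
            alarm_actions=['arn:placeholder:scaling-policy/scale-up'])
        cw.create_alarm(alarm)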

    Leveraging Netflix Tools

    Additionally, the company has added support for Netflix OSS tools, including Chaos Monkey, Asgard, and Edda, through its API fidelity with AWS. Chaos Monkey is an important testing tool that introduces random errors to identify potential problem points. Asgard is Netflix’s open-sourced management console for AWS, which makes it easier to work with the service through additional functionality that isn’t accessible in the normal AWS web interface, removing a lot of the command line tools required to do certain things. Edda tracks changes in the cloud. The three all come out of a company that has had its share of experience with AWS, and offers its software as open source.

    “Eucalyptus was the first private cloud platform to support Netflix OSS tools, including Chaos Monkey, Asgard and Edda, through its API fidelity with AWS,” said Adrian Cockcroft, cloud architect at Netflix. “Thanks to this integration, those tools can now be used in both private and public cloud environments.”

    The goal is to make Eucalyptus as AWS-compatible as possible, providing a standardized and consistent environment that spans both private and public clouds. That helps budget- and resource-strapped teams of developers, test engineers and QA staff test and get things running more quickly.

    Some examples of sophisticated engineering organizations leveraging Eucalyptus include MemSQL, AppDynamics, Mosaik Solutions and Nokia-Siemens Networks. All have already deployed Eucalyptus private clouds for continuous high-volume, large-scale testing of their applications built for AWS.

    Eucalyptus has also released resource tagging, which allows the assignment of customizable metadata to resources in Eucalyptus and goes a long way toward helping categorize cloud resources in different ways. The expanded instance types align closely with the new instances Amazon has released for EC2, making it easy to go back and forth, and a new maintenance mode allows administrators to perform maintenance with zero downtime to instances or apps running in the cloud.
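
    Resource tagging uses the same EC2 calls that AWS users already know; the sketch below tags an instance and then filters on that tag, with the endpoint, credentials and instance ID standing in as placeholders:

        from boto.ec2.connection import EC2Connection
        from boto.regioninfo import RegionInfo

        # Connect boto's EC2 client to a Eucalyptus compute endpoint (placeholders).
        ec2 = EC2Connection(
            aws_access_key_id='EUCA_ACCESS_KEY',
            aws_secret_access_key='EUCA_SECRET_KEY',
            region=RegionInfo(name='eucalyptus', endpoint='compute.example.internal'),
            port=8773, path='/services/Eucalyptus', is_secure=False)

        # Attach custom metadata, then filter on it when listing resources later.
        ec2.create_tags(['i-11111111'], {'team': 'qa', 'env': 'aws-staging-mirror'})
        qa_reservations = ec2.get_all_instances(filters={'tag:team': 'qa'})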

    “This will lead into the ability to do live upgrades,” said Knosp. “We hadn’t really leveraged the live migration features. Many customers were designing with failure in mind, (so) there wasn’t a huge demand. But we’re increasingly seeing more customers as they move, and demand is coming from the administrator side.”

    “Before Eucalyptus, our engineers had to manually configure each set of nodes for distributed testing, which was incredibly slow and painful,” said Eric Frenkiel, CEO of MemSQL. “Now, we can set up and run new instances in just 30 seconds. This allows our engineers to quickly run thousands of tests, and deliver the highest quality product.”

    “The ability to automate infrastructure across hybrid environments is becoming increasingly important for IT organizations tasked with accelerating development cycles and achieving greater efficiencies,” said Bryan Hale, VP of Online Services, Opscode. “Like AWS, Eucalyptus utilizes Opscode Chef as an automation engine, which helps ensure consistency across cloud environments. With this release, Eucalyptus is driving even deeper compatibility between public and private clouds to meet customer demand for hybrid cloud computing.”

  • Schneider Electric Opening New R&D Facility Near Boston

    Schneider Electric does plenty of research, and a new R&D facility is part of its commitment to advancing the energy efficiency and management market. The company announced today that it plans to build a new high-tech R&D Innovation Center in Andover, Massachusetts, just outside of Boston.

    The 235,000 square foot property will serve as the company’s North American R&D hub, accommodating more than 850 Schneider Electric employees. Out of the gate, the facility promises to be one of the most energy efficient buildings in the world, with LEED certification at the time of its opening. The Andover site is scheduled to be fully occupied by fall of 2013.

    “The vibrant ecosystem of innovation in the Greater Boston area is the ideal backdrop for Schneider Electric’s Global Innovation and Technology Center,” said Chris Curtis, president and CEO, North America, Schneider Electric. “The center will be a rich resource for customers, and it will bring together cutting edge innovators in the region with researchers from around the world, allowing for a cross-pollination of ideas essential for transformative and disruptive innovation.”

    The facility will combine all of Schneider Electric’s business segments, including Buildings, IT, Industry, Power and Corporate, under one roof. It will operate an R&D laboratory at the facility to develop and test new classes of technologies, ranging from data center management, to home and small business automation, to commercial business automation. It will also include a StruxureLab – a cross-discipline technology integration laboratory where Schneider Electric tests and validates its solutions, as well as a customer innovation center, a training facility, and a state-of-the-art conference facility.

    “We are bringing together top talent to collaborate across several disciplines, with the expectation that we will deliver breakthroughs in energy efficiency that will change the industry forever,” said Barry Coflan, senior vice president, Buildings Business, Schneider Electric, and member of Schneider Electric’s Global Innovation and Technology Council. “In addition, the new facility will be a fertile place for innovation, attracting new employees, students, researchers and customers to the Boston area, driving business and community development.”

    The facility will showcase the company’s portfolio of technologies, including StruxureWare energy management software applications and suites, along with critical power and cooling, power distribution and control, and video surveillance and lighting, all from Schneider Electric. It will also use the SmartStruxure building management solution and a highly efficient chilled beam HVAC system that will reduce costs as well as operation and maintenance requirements.

    Schneider Electric is headquartered in Rueil-Malmaison (Paris), France, and the new Global Innovation and Technology Center in Andover joins four existing Schneider Electric Global R&D centers located in North America, Europe and Asia.

  • Open Data Center Alliance Tackles Cloud Lock-In


    Just how easy is it to move workloads around in the cloud? The Open Data Center Alliance (ODCA) is testing virtual machine interoperability in the enterprise cloud. A new report looks at hypervisor interoperability – just how easy is it to move a virtual machine from hypervisor to hypervisor? It identifies gaps that hypervisor and VM solution providers need to address in order to move VMs between public and private enterprise clouds going forward. It tackles the dangers of “cloud lock-in” at the hypervisor level, and tries to establish an ODCA VM Interoperability Usage Model.

    “A capability for VM interoperability is an important precondition to truly realize the oft expressed benefits of virtualized clouds, such as the ability to balance resources through fungible pools of resources, business continuity and load balancing by leveraging distributed publicly available resources, as well as demonstrable avoidance of lock in to a single Cloud Provider, platform or technology,” states the report.

    The report details a test of hypervisor interoperability, with the specs for the test bed solution stack and its hardware components documented in the report. The aim was to:

    • Check interoperability
    • Move or copy VMs between two hypervisors and cloud providers
    • Leverage common operations and interoperability

    The results aren’t meant to be a definitive verdict on interoperability, but rather show there’s room for maturity when it comes to hypervisors playing nice with one another. The report includes matrices showing how well commonly used VMs moved between VMware, Citrix Xen, KVM and Microsoft Hyper-V. Who was the friendliest hypervisor during the test? Who was a big, fat jerk hypervisor? You’ll have to read the report (note: the report does not actually call any hypervisor a “big fat jerk”).
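
    In practice, moving a VM between hypervisors usually starts with converting the disk image format, for example from VMware’s VMDK to the qcow2 format KVM prefers. The rough sketch below shells out to the qemu-img tool with placeholder file names; drivers, guest tools and network configuration are where the maturity gaps the report describes tend to surface:

        import subprocess

        # Convert a VMware disk image (VMDK) to the qcow2 format used by KVM.
        # File names are placeholders; this handles only the disk format, not
        # drivers, guest tools or network configuration inside the guest.
        subprocess.check_call([
            'qemu-img', 'convert',
            '-f', 'vmdk',    # source format
            '-O', 'qcow2',   # target format
            'app-server.vmdk',
            'app-server.qcow2',
        ])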

    Cloud interoperability remains a concern, particularly with all of this hybrid/multi-cloud talk. The ODCA continues its efforts to resolve interoperability challenges that might prevent cloud adoption.

  • StackPop Aims to Become the Mint.com of Infrastructure Management


    There’s a flood of vendors out there looking to make infrastructure management as smooth and as manageable as possible. StackPop is a cloud-based service that helps enterprises analyze and optimize their IT infrastructure spending. Its pitch is that it’s easy to use and brings in real savings. The New York-based company addresses the disconnect between finance and IT, helping track contracts and spending, and providing useful intelligence when it’s time to renegotiate or shift vendors and technologies. It also acts as a comparison tool and marketplace for buyers and sellers, pitting it against colo brokerages in addition to spend tracking and aggregation.

    While its appeal is obvious for the end users of infrastructure, for the infrastructure providers themselves, StackPop has the potential to become a very potent marketplace to drive sales. It aids in comparing, configuring and buying from over 450 infrastructure providers in 40 countries, and it wants customers to never buy blind again. As it grows this part of the business, it stands to gain insight into general buying and infrastructure trends across the world, so users can see what is being paid on average.

    The challenge lies in making infrastructure services like colocation as transparent as cloud or hosting. As hybrid infrastructures continue to gain traction, a management platform like StackPop stands a good chance of becoming something of note. It already touts some large customers like gaming site IGN, social check-in provider Foursquare, and online fashion retailer Gilt Groupe.

    A large part of what StackPop does is analogous to personal finance and budgeting tools such as Mint.com, only for IT infrastructure. Mint.com, now owned by Intuit, is an application that helps people understand their finances, giving users the ability to aggregate and monitor all of their financial accounts from one simple, attractive, mobile-friendly app. It drew a large crowd of users as a startup and was acquired by the financial software giant. Mint.com and StackPop are similar in terms of what they hope to achieve, although StackPop’s features go beyond the “read-only” data aggregation seen at Mint.

    Roots at Panther Express

    StackPop was founded in October of 2011. The co-founders were infrastructure guys, network and systems engineers. Co-founder and CEO Jason Evans says the talent behind the company grew its global chops at content delivery network Panther Express, where they started with 10 servers and grew the company to 45 global locations before being acquired by CDNetworks.

    Evans then moved to Mediamap, a real-time bidding platform that had a handful of servers on Rackspace when he arrived and built out to six global data centers, including locations in Asia Pacific and Europe, the Middle East and Africa.

    “We’ve always had the idea of creating better tools to help grow and scale the infrastructure,” said Evans. “The original idea for StackPop came from an unused cage we had purchased on a two-year agreement at Panther, and it was a big waste. We tried to sublease it but couldn’t.”

    That unused IT purchase highlighted a problem, and an opportunity. “The idea came out of the question – ‘How do we create a second-level marketplace for space capacity at a discounted rate?’” said Evans. In April of 2011, StackPop was formed to pursue solutions.

    Evans teamed with StackPop CTO Aram Grigoryan, also a Panther Express alumnus, and put together a seed round of funding in the fall of 2011. The beta for the service came out in March of last year, and the company has closed $1.2 million in transactions through the platform and its partners. Provider feedback has been positive.

    Infrastructure Spending Insight That Goes Beyond Cloud

    There are a lot of websites that provide information about infrastructure, but there’s no single transactional platform that drives and dominates infrastructure spending. “There’s a need for a more transparent and transactional platform,” said Evans. “We had to get a little more involved from a personal level.”

  • Survey Says: Multi-Cloud Usage Growing, DevOps on the Rise


    Enterprise cloud usage has begun to hit a point of maturity, and there’s increasing preference for a multi-cloud approach, according to a survey from cloud management provider RightScale. The report also documents the rise of DevOps – the marriage and integration of development and operations – which has grown like a weed, taking hold among 54 percent of respondents.

    The company assessed what stage businesses and enterprises had reached in their cloud strategies: 49 percent are either partially or heavily using cloud, 17 percent are developing cloud strategies and 26 percent are working on proofs of concept or first projects. That means only 8 percent of respondents weren’t thinking about or planning to use cloud.

    Larger organizations are choosing multi-cloud and hybrid cloud strategies by a large margin. Seventy-seven percent of large organizations said they’re going multi-cloud, while close to half, at 47 percent, said they are planning or already using hybrid. Enterprises with hybrid cloud strategies are making progress toward their goals, with 61 percent of those organizations already running apps in public cloud, 38 percent in private cloud and 29 percent in hybrid cloud environments.

    The trend shows organizations moving up the maturity model, and reaping increasing benefits as they do. The top benefits reported in the survey are faster access to infrastructure, greater scalability, faster time to market with apps, and higher availability. The main issues surrounding cloud tend to disappear as these organizations move up in terms of cloud maturity. For example, security and compliance is often a major concern. It’s viewed as less of a challenge as cloud usage matures: 38 percent of beginners but only 18 percent of experienced cloud users viewed it as a challenge. Thirty percent of “Cloud Beginners” reported that they were gaining faster access to infrastructure, while 87 percent of “Cloud Focused” respondents realized that same benefit.

    More cloud is being used, it’s increasingly being used in hybrid or multi-cloud configurations, and DevOps has indeed taken a grip on the enterprise. RightScale has interest across the board here, from helping cloud watchers and beginners to more advanced users who want to scale out their use of cloud. The full RightScale 2013 State of the Cloud report is available here (registration required), and last year’s survey here.