Author: Jason Verge

  • CyrusOne Grows Houston Campus for Oil and Gas Industry

    The Houston campus for CyrusOne just grew significantly. The colocation provider has purchased 32 acres adjacent to its Houston West facility, bringing the campus to a total of more than 45 acres. The campus targets the sizeable oil and gas industry, touting itself as the largest multi-tenant digital energy campus in the country for seismic exploration computing. CyrusOne has finely tuned high-density server environments catering specifically to the industry, accommodating densities of 1,000 watts per square foot and higher for these compute-intensive jobs.

    “Over the past decade, CyrusOne has created a geophysical center of excellence for seismic exploration computing at its Houston West data center,” said Gary Wojtaszek, president and CEO of CyrusOne. “Purchasing adjacent property presents an exciting opportunity for us to continue expanding and providing more of the innovative products and services that our customers are demanding from us.”

    Oil and Gas Industry Continues to be Focus in Houston

    The company is building on its strong market position with the oil and gas industry – it does business with nearly all super-major and major oil and gas firms worldwide. The company believes the campus will attract significant research and development investment to the Houston area as more companies and countries look to expand their exploration expertise, notably in the newer hydraulic fracturing (“fracking”) procedures.

    The company developed a high-performance computing (HPC) cloud solution specifically for the oil and gas industry. It’s been a whirlwind 2013 thus far for CyrusOne. In addition to this land grab, the data center service provider completed its IPO, opened two large data centers, upgraded power at another, and reported record sales and leasing in its first earnings report as a public company. The company, which was spun off from parent Cincinnati Bell, is said to be looking at new markets and could expand through acquisition, according to executives earlier this year.

  • Internap Steps Closer to “Cloudy Colo” Dashboard

    An illustration of the front facade of Internap’s newest data center, located in Los Angeles. It will be one of the launch facilities for the company’s “Cloudy Colo” dashboard.

    There’s a continued movement toward using a single dashboard to see and control all IT assets, concurrent with the movement to use hybrid infrastructures. Internap, positioned squarely in the center of this thanks to a diverse portfolio of colocation, cloud and managed services, has moved closer to this one dashboard vision.

    The company has formally announced the “cloudy colo” capabilities it teased earlier. The universal customer portal is now in limited release for Internap Labs customers and will be generally available at the end of the second quarter for customers in the company’s Los Angeles, Santa Clara, and Dallas data centers. The rollout will gradually extend across the entire footprint.

    The company is bringing cloud-like remote visibility and management benefits to colocation customers and enabling hybridization of cloud and colocation footprints through a universal customer portal. The portal will be provided as a standard part of Internap’s offering and will deliver granular visibility and management of the colocation environment, increasing control while reducing costly visits to the data center or the use of remote hands services.

    Making Colo Feel Like Cloud

    “Based on growing comfort with the automation offered by cloud services, organizations are seeking easier and faster access to their infrastructure,” said Carl Brooks, analyst, Internet infrastructure services at 451 Research. “As a result, there’s a strong opportunity for service providers to give customers access to elastic, on-demand resources with new kinds of controls, agility, ease-of-use, and infrastructure hybridization.”

    Rather than replacing traditional services like colo, cloud often acts as a complement or breeding ground for future colo customers.

    “Both cloud and colocation will continue to play critical roles in meeting organizations’ diverse application requirements,” said Raj Dutt, senior vice president of technology at Internap. “Internap’s universal customer portal bridges these typically distinct worlds with ‘cloudy colo’ capabilities, providing remote visibility into the colocation environment – unprecedented in data center offerings – and enabling the on-demand integration of colocation, cloud and other infrastructure with the simplicity of one trusted network, one contract and one support team.”

    The portal will initially include the following colocation management features:

    • Inventory management with integrated support tracking: Customers can review their entire colocation footprint; check device power status and create alerts; deploy stencils for device-level inventory tracking and management; and open support tickets instantly and receive feedback directly from Internap’s NOC.
    • Power utilization monitoring and management: Customers can view circuit-level power utilization trends; remotely reboot or power down any configured device without incurring charges; and easily access and view log files of all initiated power actions (a conceptual sketch of this kind of portal-driven power control follows after this list).
    • Environmental and bandwidth monitoring: Customers can view rack-level temperature and humidity conditions; track IP traffic and conduct trend analyses; and capture and analyze device health and usage stats.
    • On-demand provisioning of hybrid services: Customers can integrate management of colocation – typically a “siloed” environment – with on-demand provisioning and scaling of cloud compute, bare metal and cloud storage assets to rapidly align their infrastructure portfolio with changing business and application needs.

    The company calls this on-demand hybridization. It’s made possible by Internap’s PlatformConnect service, which provides private network connectivity between multiple Internap services – including colocation, managed hosting and cloud – within the same data center. Customers can hybridize application environments on demand via the universal portal, rather than over days or weeks.
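
    Internap hasn’t published API details for the portal, but the remote power actions described above – reboot, power down, and a log of every initiated action – suggest the general shape of such a system. The sketch below is a hypothetical model; all class and device names are invented for illustration and are not Internap’s actual interface:

    ```python
    import datetime

    class ColoPowerPortal:
        """Hypothetical model of portal-driven remote power actions.

        An illustrative sketch only -- not Internap's actual API.
        """

        def __init__(self, configured_devices):
            # Only devices explicitly configured for remote power control
            # may be acted on (mirroring "any configured device" above).
            self.configured = set(configured_devices)
            self.audit_log = []  # every initiated power action is recorded

        def _do(self, device, action):
            if device not in self.configured:
                raise ValueError(f"{device} is not configured for remote power control")
            # An out-of-band call to the rack PDU would happen here.
            self.audit_log.append({
                "device": device,
                "action": action,
                "timestamp": datetime.datetime.utcnow().isoformat(),
            })

        def reboot(self, device):
            self._do(device, "reboot")

        def power_down(self, device):
            self._do(device, "power_down")

    portal = ColoPowerPortal(["web-01", "db-01"])
    portal.reboot("web-01")
    print(portal.audit_log)  # the log of all initiated power actions
    ```
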
  • Storage Player Exablox Emerges From Stealth With $22M in Funding

    The newest player on the storage scene is Exablox, which emerged from stealth mode today with $22.5 million in funding from Norwest Venture Partners (NVP), Doll Capital Management (DCM) and US Venture Partners (USVP).

    Exablox promises to provide turn-key, enterprise-grade, object-based storage and data protection in less than five minutes. It targets resource-constrained organizations struggling with unstructured data and backup/recovery storage, with a solution sold exclusively through the channel. The company delivers scale-out network attached storage (NAS) built on object-based storage through an appliance called OneBlox and its OneSystem management software.

    The Mountain View, Calif. company looks to solve businesses’ common storage pain points: complicated installation, cumbersome storage management, lack of data security and forklift upgrades.

    Exablox is also upfront about pricing, which begins at under $10,000 for a 32TB solution and under $40,000 for a replicated four-node 64TB disaster recovery solution. The company has been working with beta customers, but is now open to select customers.

    Two Years in Development

    “We spent the last two years working with customers and partners to build a next-generation storage solution that addresses the pain points they’re confronting as they deal with the explosion of unstructured data,” said Douglas Brockett, CEO of Exablox. “We’re tearing down the technology barriers that have forced customers into choosing between the features they need and the solutions they can afford. At Exablox, we think every customer should feel their data storage is safe and scalable.”

    OneBlox is a scale-out object-based appliance that is expandable and uses flexible media, whether cost-effective SATA drives or performance-boosting SAS or SSD. It is accessible via the Server Message Block (SMB) and Common Internet File System (CIFS) network protocols. Additional OneBlox appliances can be added automatically, enabling dynamic scalability rather than forklift upgrades.

    The storage architecture doesn’t limit the choice of drive types or capacities, the company says, allowing organizations to mix and match drive technologies and sizes. The ability to add a new drive at any time offers the potential for just-in-time storage capacity, dynamically pooled within the global file system. OneBlox includes inline deduplication for primary and disk-based backup and recovery storage, to minimize waste and maximize storage utilization.
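
    Exablox hasn’t disclosed the internals of its deduplication engine, but inline dedup generally relies on content addressing: each block is hashed before it is written, and a block whose hash already exists in the store is referenced rather than stored again. A minimal conceptual sketch, with fixed-size blocks and SHA-256 hashes as simplifying assumptions not drawn from Exablox’s design:

    ```python
    import hashlib

    class DedupStore:
        """Minimal content-addressed block store illustrating inline dedup.

        A conceptual sketch of the general technique, not Exablox's design.
        """

        BLOCK_SIZE = 4096  # fixed-size blocks for simplicity

        def __init__(self):
            self.blocks = {}  # sha256 digest -> block bytes (written once)

        def write(self, data: bytes) -> list[str]:
            """Store data, returning the list of block digests (the 'recipe')."""
            recipe = []
            for i in range(0, len(data), self.BLOCK_SIZE):
                block = data[i:i + self.BLOCK_SIZE]
                digest = hashlib.sha256(block).hexdigest()
                # Inline dedup: duplicates are detected before hitting disk.
                if digest not in self.blocks:
                    self.blocks[digest] = block
                recipe.append(digest)
            return recipe

        def read(self, recipe: list[str]) -> bytes:
            return b"".join(self.blocks[d] for d in recipe)

    store = DedupStore()
    recipe = store.write(b"A" * 8192 + b"B" * 4096)
    assert store.read(recipe) == b"A" * 8192 + b"B" * 4096
    print(len(recipe), "logical blocks,", len(store.blocks), "stored")  # 3 vs 2
    ```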

    The OneSystem storage management offering is cloud-based, so there’s nothing to install and no command line. A drag-and-drop interface offers site-to-site replication and configuration, allowing service providers the ability to proactively manage and monitor all of the storage resources across companies and geographies.

    “If storage admins try to solve the challenges associated with managing unstructured data with the same technology they’ve been using, insanity may certainly be in their future because things just won’t get better,” said Terri McClure, senior analyst at Enterprise Strategy Group. “Vendors like Exablox are out there with innovative technology that is designed for the unstructured data challenges IT faces today, not the challenges faced 20 or 30 years ago.”

  • Google Pumps $400 Million More into Iowa, Investment Now Tops $1.5 Billion

    An overhead view of the server infrastructure in Google’s data center in Council Bluffs, Iowa. (Photo: Connie Zhou for Google)

    The data center building boom continues to accelerate at Google. The search giant today announced plans to invest another $400 million in its Council Bluffs Southlands facility, pushing the investment in its Iowa facilities to more than $1.5 billion. The announcement comes the same day that Facebook confirmed plans to build a major data center in Altoona, making it a giddy day for Iowans.

    Google spent $600 million to build the initial phase of capacity in Council Bluffs, followed by phased expansion investments of $300 million, $200 million and now an additional $400 million. Total: $1.5 billion. Let that sink in.

    The data center houses some of the infrastructure behind Google search, Gmail, Google Maps and the Google+ social network. “Google has not only invested heavily in Iowa, they’ve helped to create an atmosphere of high tech development that is the envy of the Midwest,” said Lt. Gov. Kim Reynolds.

    Need for Growth

    “Our commitment to Council Bluffs and Iowa grows stronger each day,” Google Data Center Operations Manager Chris Russell said. “As demand for our services grows, our operations need to grow as well. We’re excited to be an integral part of Iowa’s expansion into next generation technology.”

    The company is spending like crazy on its infrastructure, and has made Council Bluffs the focus of continued rounds of investment.

    “Google has again put Council Bluffs at the center of high tech development in Iowa with this $400 million investment in the next phase of their growth and development south of the City,” said Council Bluffs Chamber President and CEO Bob Mundt. “They’ve also been a tremendous corporate citizen, supporting our local schools, colleges, the Chamber and the City of Council Bluffs. Google’s continued expansion in Council Bluffs is a strong vote of confidence in Iowa and the Council Bluffs region as a major player in the ever expanding technology sector.”

    Continued Investments in Renewable Energy

    Google reaffirmed its commitment to renewable energy. In 2010, as a measure to reduce its carbon footprint in Iowa, the company entered into a long term agreement to purchase 114 megawatts of wind energy produced by a wind farm in Story County. Google has also made investments to help accelerate the wind power industry in the state; in 2012 the company invested $75 million in a 50 megawatt wind farm in Rippey, Iowa developed by RPM Access.

    The commitment to renewable energy extends beyond its own needs. Google is working with its utility partners to develop solutions for large electricity consumers that want to increase their use of renewable energy, essentially by sharing information and outlining how it does so; the company says it looks forward to developing similar solutions in Iowa.

    Google’s investment in Iowa goes beyond servers and data centers, as the company has focused on being a good neighbor and engaged in the local community. Some examples:

    • Sponsored the 2012 Iowa State Fair
    • Sponsored 52 free “Iowa! Get Your Business Online!” events for Iowa small businesses wanting to establish sales in the digital marketplace.
    • Became a key proponent of the Governor’s Iowa STEM initiatives, and has extended over $600,000 in educational grants for science and technology efforts.
    • Established the 2012 Caucus Hangout and Media Center for Iowa’s First in the Nation caucuses
    • Launched its Google for Veterans efforts in Iowa.
    • This summer, Google will offer free Wi-Fi during RAGBRAI, a first for this iconic Iowa event.

    Iowa is very happy about this continued investment. “Several years ago, state and local officials began a partnership to streamline our economic development efforts for the high tech industry,” said Iowa Gov. Terry Branstad. “Those efforts paid off then with Google establishing a data center in Council Bluffs, and today those efforts continue to bear fruit. Iowa is now at the epicenter for high tech development and our work on this front will continue to be a focus of my administration.”

  • T5 Plans $800 Million Campus in Colorado Springs

    T5 Data Centers is developing a major project in Colorado Springs. Pictured are Craig McKesson of T5 Data Centers, Colorado Governor John Hickenlooper, Robert Branson of Iron Point Partners, and Vince Colarelli of T5 Data Centers. (Photo: T5 Data Centers)

    T5 Data Centers has unveiled plans for an $800 million data center campus in Colorado Springs, Colorado. The project marks a major step forward in Colorado Springs’ ambitions as a data center destination, and continues a steady expansion by Atlanta-based T5.

    Colorado Springs has been a potential data center hotspot for a while, thanks to cheap power rates and a climate friendly to free cooling – part of the reason T5 Data Centers was attracted to the region for its sixth complex.

    T5@Colorado is situated on 64 acres of land, with completion of the first phase of the project expected by the first quarter of 2014. The campus will offer 100 MW of available power, with power rates of 4.4 cents per kilowatt hour and potential to use free cooling for up to 97 percent of the year, according to T5.
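
    To put the quoted rate in perspective, here is a back-of-the-envelope annual energy cost per megawatt of continuously drawn critical load. It assumes a flat $0.044/kWh and 100 percent utilization, and ignores demand charges and cooling overhead – illustrative arithmetic only:

    ```python
    # Rough annual energy cost per MW of continuous IT load at T5's quoted
    # rate. Assumes flat $0.044/kWh and 100% utilization; ignores demand
    # charges and PUE overhead -- illustrative only.
    rate_per_kwh = 0.044
    hours_per_year = 8760
    kw_per_mw = 1000

    annual_cost_per_mw = rate_per_kwh * hours_per_year * kw_per_mw
    print(f"${annual_cost_per_mw:,.0f} per MW-year")  # ~$385,440
    ```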

    “Ideal Location” for Data Centers

    “Colorado Springs is the ideal location for the next phase of our expansion,” said Peter Marin, President and CEO of T5 Data Centers. “This is a vibrant, growing area with a strong and supportive business climate, and the proximity to Denver, as well as local colleges and military installations, gives us access to terrific talent and local resources. We want this new data center campus to be groundbreaking as an eco-friendly green facility, offering our customers the best possible enterprise infrastructure and support services at competitive rates.”

    Colorado Springs is home to numerous colleges and the Air Force Academy, as well as an active community of contractors working with the military.

    T5 currently offers wholesale data center space in business-critical data center facilities in Atlanta, Los Angeles, Dallas, and Charlotte, with new projects announced in Portland and Colorado.

    As part of T5’s larger strategy of leveraging free cooling, the new Colorado Springs campus will be able to take advantage of the high plains climate by using the cooler, drier external air to reduce air conditioning and operating costs. The location is also strategically placed close to the Denver Technology Center, a tech and communications hub, and will serve as a central data relay center to lower latency for business-critical enterprise applications country-wide.

    T5 executives anticipate that the new data center campus will create 400 to 600 new jobs in the area.

    Bringing Data Centers to Colorado Springs

    There have been several efforts to bring data centers to Colorado Springs over the years. Benefits of the region include tax incentives, cheap power rates, a data center-friendly business atmosphere, an environment relatively free of natural disasters (though wildfires are arguably a threat) and a climate ideal for free cooling.

    In 2011, Wal-Mart and FedEx announced Colorado Springs as the home of new facilities, attracted to the region thanks to both tax incentives and the ability to build environmentally sustainable facilities. The city of 426,000, located about 60 miles south of Denver, is also home to existing data centers for Verizon Wireless, HP, FedEx, T. Rowe Price, Progressive and Intel, among others.

  • Power, Network Upgrades Underway at 325 Hudson Street in NYC

    Construction is underway on upgrades to the meet-me room and power infrastructure at 325 Hudson Street in New York.

    325 Hudson in Manhattan is in the midst of an overhaul at a time when there’s a lot of movement in the New York City data center scene in the wake of Superstorm Sandy. The new owners – a partnership of Amerimar Enterprises, Jamestown Properties and Hunter Newby – have commenced construction on a renovation announced back in October. They say the planned enhancements at 325 Hudson are a direct response to industry demand.

    “We have experienced increasing and specific demand for carrier-neutral colocation and interconnection options in New York City,” said Newby, a veteran of the Manhattan data center market and Joint Venture Partner at 325 Hudson Street. “Our acquisition of the building and MMR upgrades further affirm our commitment to addressing this demand and providing a stable, long-term, diverse, neutral environment for global network operators to thrive here in New York.”

    In the cards is an upgrade to the building-owned and managed Meet Me Room (MMR) to support the requirement for diversity in global carrier and enterprise 100G DWDM (Dense Wavelength-Division Multiplexing) network deployments. Improvements will reduce network operating expenses while adding redundancy and route diversity for customers. 325 Hudson is currently executing new MMR agreements with new and existing network operators, including submarine cable systems, carriers, content providers, and broadcast, media and entertainment companies.

    The owners are also upgrading power and back-up power supplies during the construction period. These include:

    • The building’s AC 110v UPS capacity will be upgraded to 2N at 275 kVA, while the DC 48v capacity will be upgraded to 2N at 3,400 amps, backed by a 1.5 MW diesel generator, with plans for additional upgrades.
    • Sixteen new 4-inch conduits and 56 innerducts are being pre-built between four diverse manholes into the Empire City Subway system, through diverse Points of Entry, terminating directly in the Meet Me Room. This eliminates the need for any customer to perform outside or inside plant duct or conduit construction.

    Several experienced players have committed to renovating the strategically located building, disclosing plans last October. 325 Hudson Street is a 10-story, 240,000-square-foot telecom building located in New York City’s Hudson Square area. The building has direct access to transatlantic cables and major metro and regional fiber providers. The first phase of the development will include cage and cabinet spaces as well as a large Meet Me Area where customers will have the opportunity to make physical interconnections with one another.

    A former industrial building, 325 Hudson was redeveloped in 1998 into a telecom center. In April the partnership of Jamestown and Amerimar bought the facility from Young Woo & Associates and the Bristol Group, with a reported sale price of $110 million. The building offers heavy floor loading capacity, 12-foot ceilings, as well as robust HVAC, power, and back-up power supply.

  • TeraCool’s Audacious Idea: Data Centers Next to Liquid Gas Plants

    Will future data centers be located next to liquefied natural gas plants, like this one in Tokyo? That’s the idea being put forth by TeraCool. (Photo by Yo-sei_Shoshi via Wikimedia Commons)

    Will data centers leverage liquid gas plants to generate cooling and electrical power? Concord, Massachusetts-based TeraCool believes so. Locating data centers in close proximity to liquefied natural gas (LNG) terminals improves the efficiency of both facilities, the company says. Natural gas storage plants produce excess refrigeration, and waste enough energy to potentially power data centers.

    TeraCool has developed a way to bring the two industries together. The company says its approach can achieve mutual energy conservation, significant cost savings and environmental benefits, including reductions in air and water impacts and greenhouse gas emissions. It has created a method that links a data center’s rejected heat and an LNG terminal’s surplus refrigeration via a heat transfer loop. There’s a potential symbiotic relationship here: the waste heat from servers can help vaporize natural gas, and the energy released by the process could in turn power a data center.

    Natural gas is extracted from the ground, then liquefied by condensing it. It is transported in liquid form by tanker to LNG plants and stored in tanks, then turned back into a gas when it’s needed. Liquefaction stores energy, and regasification releases it. That energy and cooling normally go to waste because LNG terminals are usually isolated from buildings that could use them. Now it’s a matter of convincing data center operators to build alongside these plants.

    TeraCool recently won the “Audacious Idea Award” at the Uptime Institute’s 2013 Green Enterprise IT (GEIT) awards, which recognize new, unprecedented ideas for realizing energy and resource efficiency. Other finalists included Microsoft’s “data plant” proof-of-concept in Wyoming and a freestanding chilled air duct developed by QTS (Quality Technology Services).

    Other GEIT Award Winners of Note

    The Uptime Institute says the award helps surface new approaches to complex problems, and contributes to industry innovation.

    “There’s two things behind the award scenes,” Matt Stansberry, Director of Content and Program Director, notes about the awards. “The judges panel consists of radical, household name companies. It’s a double blind process, one of the most democratic processes out there. The other thing that’s important about the awards is the data churn. The documents they submit are really about sharing what worked for them. The biggest reason Ken (Brill, founder of Uptime Institute) started this was to get people talking.”

    “The bottom line is that Green IT has gotten a lot better, especially on the enterprise level,” said Stansberry.

    Interxion, TD Bank Also Noteworthy Winners

    Other winners include Interxion, which won the “Facility Retrofit Award” for its use of seawater to cool its Stockholm data center. Interxion is a leading provider of cloud and carrier-neutral colocation data center services in Europe, supporting more than 1,300 customers at 33 data centers across 11 countries.

    Another noteworthy award winner was TD Bank Group, which won for Facility Design Innovation. TD Bank Group’s new facility integrates sustainable design elements including rainwater harvesting, onsite renewable energy generation, heat recovery systems and natural lighting. The company met its IT goals through server virtualization, tiered storage platforms, energy efficient infrastructure, overhead cabling and more. In short, it went above and beyond in terms of design and implementation. The phased construction project is Tier III certified by the Uptime Institute and LEED Platinum certified by the USGBC.

    AOL, Barclays Feted in Server Roundup

    AOL and Barclays won the second annual Uptime Server Roundup for their efforts to improve data center efficiency. “The purpose of Server Roundup is to highlight what should be a routine activity – removing obsolete hardware from the data center, and moving it to the forefront of the conversation,” said Stansberry.

    This was the second straight win for AOL, which prevailed in the initial 2011 roundup. This year it decommissioned 8,253 servers, resulting in gross total savings of almost $3 million from reduced utility costs, maintenance and recovery of asset resale/scrap value. Environmental benefits were seen in the reduction of more than 16,000 tons of carbon emissions, according to AOL.

    Barclays, a global financial organization, removed 5,515 obsolete servers in 2012, with power savings of around three megawatts, and $3.4 million annualized savings for power, and a further $800K savings in hardware maintenance.

    According to industry estimates, around 20 percent of servers in data centers today are obsolete, outdated or unused. Decommissioning one rack unit (1U) of servers can result in a savings of $500 per year in energy costs, an additional $500 in operating system licenses and $1,500 in hardware maintenance costs.
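
    Those per-unit estimates can be loosely sanity-checked against the reported totals, assuming roughly one rack unit per decommissioned server – a simplification, since many servers occupy more space:

    ```python
    # Generic industry estimates cited above, per 1U decommissioned, per year:
    per_1u_estimate = 500 + 500 + 1500  # energy + OS licenses + hw maintenance

    # Rough cross-check against the reported figures (~1U per server assumed):
    aol_per_server = 3_000_000 / 8253   # ~$364/server, gross, all categories
    barclays_power = 3_400_000 / 5515   # ~$617/server, power alone

    print(f"industry estimate: ${per_1u_estimate}/1U/yr")
    print(f"AOL reported:      ~${aol_per_server:.0f}/server")
    print(f"Barclays reported: ~${barclays_power:.0f}/server (power only)")
    ```

    Barclays’ power-only figure lands in the neighborhood of the $500 energy estimate, while AOL’s all-in gross figure comes in lower per server – a reminder that these are rough industry averages rather than guarantees.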

    All winners will present their case studies at Uptime Institute Symposium 2013, taking place May 13-16, 2013, at the Santa Clara Convention Center in Santa Clara, Calif.

  • Amazon’s S3 Storage Cloud Hits 2 Trillion Objects

    Amazon Web Services launched its Simple Storage Service (S3) way back in 2006. The company hit the 1 trillion object mark in June of 2012. Just 10 short months later, that figure has doubled. Amazon cloud evangelist Jeff Barr announced today on his blog that S3 now holds 2 trillion objects.

    Earlier this month, Iain Gavin, the director of Amazon Web Services for the UK and Ireland, said the service had hit 1.7 trillion objects and was peaking at 835,000 requests per second.

    What’s driving this remarkable growth? AWS is an engine for startups and innovators across the web, and often serves (at least partially) as the backend for a lot of big time applications like Dropbox. The world is accepting storage of data in the cloud, and S3 is the biggest cloud storage service.

    Barr tried to put this growth in perspective, as 2 trillion is a hard number to wrap your head around. Our galaxy is estimated to contain about 400 billion stars, writes Barr. That works out to five objects for every star in the galaxy. The field of Paleodemography estimates that 100 billion people have been born on planet Earth. Each of them can have 20 S3 objects. Our universe is about 13.6 billion years old. If you added one S3 object every 60 hours starting at the Big Bang, you’d have accumulated almost two trillion of them by now.
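
    Barr’s comparisons check out arithmetically; a quick verification:

    ```python
    objects = 2e12  # 2 trillion S3 objects

    stars_in_galaxy = 400e9                    # ~400 billion stars
    people_ever_born = 100e9                   # paleodemography estimate
    universe_age_hours = 13.6e9 * 365.25 * 24  # ~13.6 billion years, in hours

    print(objects / stars_in_galaxy)     # ~5 objects per star
    print(objects / people_ever_born)    # ~20 objects per person
    print(universe_age_hours / objects)  # ~60 hours per object since the Big Bang
    ```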

    Amazon’s announcement serves as a revealing data point in documenting the demand for data center space. All that data needs a place to live. If Amazon’s storage cloud is doubling in 10 months, what impact will cloud applications have on data center requirements as other providers scale up their storage clouds? The numbers may appear to reside in the clouds, but the bits live in servers within physical data centers.

  • TSO Logic Targets Power Management at the Application

    A screen shot from the TSO Logic energy management software, which launches today.

    The first wave of innovation in data center efficiency focused on energy management between the grid and the rack. The next wave is focusing on managing power usage on the server, and optimizing for specific applications. That’s the goal of TSO Logic, which has launched its new energy efficiency software for data centers.

    TSO Logic is advancing “Application Aware Power Management,” with software that analyzes activity at the application level and manages power usage at the server level, throttling idle servers or turning them off completely.

    “Most data centers leave all of their servers on all of the time, regardless of the actual demand,” said Aaron Rallo, CEO and founder of TSO Logic. “We think that’s like leaving your car running all day just in case you have to drive somewhere. I started TSO Logic after first-hand experience running businesses that relied on large-scale server farms. Server demand would go up and down, but our huge power bill was always the same. The lack of data, insights, and viable solutions was frustrating.”

    Recognition from Uptime

    The company says the software can cut server power costs by more than 50 percent, and gives visibility into how data center power dollars are spent. TSO Logic’s claims got early validation from the Uptime Institute, which awarded the company a Green Enterprise IT Award for a pre-release deployment of its solution with Toronto-based digital media studio Arc Productions. The TSO Logic product suite identified 56 percent potential server energy savings at the studio’s data center, which houses more than 600 servers. Now Arc Productions is using TSO Power Control to realize those savings.

    Server power capping isn’t new, and has been available for years from vendors including Intel and HP. There are other software players in this space, including Emerson Network Power, whose Trellis DCIM software is being used by Joyent to track app-level power usage.

    Power capping and management have long offered great potential to improve efficiency, but some companies have been apprehensive about turning servers off or down to capture the savings. TSO’s approach seeks to create a comfort level for customers with very granular management functions, allowing fine-tuned control over which servers are monitored, what days and times those servers are managed, and how aggressively to save energy. Relevant data is displayed on a dashboard that allows graphing of the data over longer periods of time.

    “It makes a great tool for an IT guy to hand the data to the CFO and say ‘here’s what’s trending, here’s what’s going on,’” said Rallo.

    Targeting Variable Workloads

    TSO Logic says it is already managing 2,165 enterprise servers, helping these pre-release customers save on server power costs while reducing their environmental footprints. The company believes it can bring significant cost savings to anyone with variable workloads – content providers, digital animation studios and retailers, to name just a few examples.

    The majority of data centers experience what is known as variable load, which simply means that the demand on servers varies from hour to hour. The problem is that in order to stay prepared for periodic surges – “peak demand” or “peak load” – these data centers must keep running at full capacity all of the time. This means that energy is wasted on idle servers, which TSO aims to address.

    The product suite consists of TSO Metrics and TSO Power Control. The two software toolsets are deployed together in one install, typically on a single server. The software has no negative impact on server performance and requires no significant changes to data center infrastructure. It uses application-level inspection to determine how much of a data center’s power draw is going toward revenue-generating activities versus idle servers. That insight is meant to drive electricity savings without sacrificing performance, by automatically controlling the power state of servers based on application demand.
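
    TSO Logic hasn’t published its control algorithm, but the behavior described – watch application-level demand, power down idle servers within admin-defined policies, wake them when demand returns – can be sketched roughly as follows. All thresholds, server names and policy knobs are hypothetical illustrations, not the product’s actual configuration:

    ```python
    import datetime

    # Hypothetical policy knobs mirroring the controls described above:
    # which servers are managed, when, and how aggressively.
    MANAGED_SERVERS = {"app-01", "app-02", "app-03"}
    MANAGEMENT_WINDOW = (22, 6)  # only act between 10 PM and 6 AM
    IDLE_THRESHOLD = 0.05        # <5% application demand counts as idle
    MIN_ACTIVE = 1               # always keep headroom for demand spikes

    def in_window(now):
        start, end = MANAGEMENT_WINDOW
        return now.hour >= start or now.hour < end

    def plan_power_actions(demand, powered_on, now):
        """Return (server, action) pairs based on application-level demand."""
        if not in_window(now):
            return []
        actions = []
        idle = [s for s in sorted(powered_on)
                if s in MANAGED_SERVERS and demand.get(s, 0.0) < IDLE_THRESHOLD]
        # Power down idle servers, but never drop below the reserve floor.
        for server in idle[: max(0, len(powered_on) - MIN_ACTIVE)]:
            actions.append((server, "power_down"))
        # Wake managed servers that are off but seeing demand again.
        for server in sorted(MANAGED_SERVERS - powered_on):
            if demand.get(server, 0.0) >= IDLE_THRESHOLD:
                actions.append((server, "power_on"))
        return actions

    now = datetime.datetime(2013, 4, 18, 23, 0)
    demand = {"app-01": 0.01, "app-02": 0.40, "app-03": 0.02}
    print(plan_power_actions(demand, {"app-01", "app-02", "app-03"}, now))
    # [('app-01', 'power_down'), ('app-03', 'power_down')] -- app-02 stays up
    ```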

  • Windows Azure Launches IaaS Cloud, Targets Amazon

    Racks of servers housed inside the Microsoft data center supporting Windows Azure cloud services. (Photo: Microsoft).

    Taking clear aim at cloud computing leader Amazon Web Services, Microsoft announced the general availability of its Windows Azure Infrastructure Services (IaaS), the final piece of the puzzle in Microsoft’s cloud portfolio, and committed to match Amazon Web Services on pricing.

    Microsoft now offers customers a comprehensive hybrid cloud solution that integrates existing IT infrastructure with all the benefits of the public cloud. “Customers don’t want to rip and replace their current infrastructure to benefit from the cloud; they want the strengths of their on-premises investments and the flexibility of the cloud,” writes Bill Hilf, General Manager, Windows Azure Product Management, on the Azure blog. “It’s not only about Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), it’s about Infrastructure Services and Platform Services and hybrid scenarios. The cloud should be an enabler for innovation, and an extension of your organization’s IT fabric, not just a fancier way to describe cheap infrastructure and application hosting.”

    Additionally, Microsoft is announcing a commitment to match Amazon Web Services’ prices for commodity services like compute, storage and bandwidth. This starts with reducing GA prices on Virtual Machines and Cloud Services by 21-33 percent. “If you had concerns that Windows Azure was more expensive, we’re putting those concerns to rest today,” GM Steven Martin said. It’s worth noting that in matching Amazon’s pricing, Microsoft actually appears to have raised prices on some virtual machine instance types that were discounted during the trial period.

    RightScale’s recent survey found a trend of aggressive price cutting among cloud providers.

    Broadening the Cloud

    Microsoft has added new high-memory VM instances (28 GB/4-core and 56 GB/8-core) to run demanding workloads. Based on customer feedback, it has also added a number of new Microsoft-validated instances to its list, including SQL Server, SharePoint, BizTalk Server, and Dynamics NAV, to name a few.

    The broader cloud strategy is enabling hybrid solutions. Customers will now be able to use Windows Azure Infrastructure Services to preserve existing on-premises investments and, as a result, reap the benefits of greater speed, scale and economics – three key drivers for enterprise companies.

    Hilf gives an example of one of these hybrid strategies. Automotive marketing and social media firm Digital Air Strike is using Windows Azure’s Infrastructure Services and Platform Services to create an instant feedback mechanism for all car purchases and service transactions for automotive giant General Motors. This enables GM to monitor the health of their customer relationships in near real time, providing deep and valuable business insights.

    Another example is Telenor, a Norwegian telecommunications company that needed to upgrade to the latest SharePoint release across 13 business units and 12 countries. It spun up its SharePoint 2013 farms on Azure, reducing set-up time from three months to two weeks and saving 70 percent in costs on its test environment, according to a Microsoft blog post.

    This example is used to highlight Microsoft’s commitment to avoiding vendor lock-in. For production, Telenor is leveraging the VM portability available between Windows Azure and Windows Server to move its final deployment to its existing third-party hosting provider. So again, cloud is in many cases being used as a complement, not a threat, to existing infrastructure, with the tech giants focusing on enabling hybrid plays rather than the “move everything to the cloud” sentiment that once prevailed throughout the industry.

  • PeakColo Extends Petabyte-Scale Storage to the Channel

    Channel-focused cloud provider PeakColo has launched a new storage offering addressing petabyte-scale amounts of data. Called Mountain-Moving Storage, it’s a white-label, object-oriented storage-as-a-service offering that extends petabyte-scale object storage to providers, averting the need for them to roll out their own.

    PeakColo enables value-added resellers, agents, and service providers to roll out cloud services, addressing a growing need. “PeakColo’s Mountain-Moving Object-Oriented Storage service is a key differentiator for our channel partners – from value-added resellers to data center providers – that look to diversify their product portfolios with cloud-based offerings,” said Luke Norris, CEO and Founder of PeakColo. “With the massive amounts of unstructured data created and used by enterprises of all sizes, cost-effective management can be a real problem. Our Mountain-Moving Storage is easy to manage, while still allowing end-users quick access to their stored data. And unlike other providers, Peak does not charge customers for seeding or retrieving their data.”

    Mountain-Moving Storage is based on NetApp’s Distributed Content Repository Solution. It offers multiple solutions for managing massive data repositories, including deep archival of unstructured data, long term backup retention, and archival of video and image objects such as medical records, addressing regulatory requirements including PCI, HIPAA, and SOX.

    “PeakColo’s pure channel focus offers high-margin financial models to NetApp partners,” said Jon Mellon, vice president and general manager, Service Provider Partners, NetApp. “PeakColo’s new object-based storage service is a great addition to their portfolio and will help our channel partners better meet customer needs for managing petabyte-scale unstructured data.”

    This is yet another piece that PeakColo can offer to channel partners to integrate into their own service offerings, increasing competitive advantages. The company is 100% channel focused, as covered in a recent profile.

    Providers can create customized back-up solutions for their clients with full protection of existing data, as well as plan for future data requirements. Once stored, data can be accessed anytime, anywhere. The offering adds to PeakColo’s appeal among providers serving customers who run, or want to leverage, applications designed with object storage in mind, such as Web 2.0 and emerging applications.

  • Schneider Electric Addresses the Dangers of Arc Flash

    Arc flash is a serious safety issue in electrical maintenance, and Schneider Electric is taking the initiative on this issue. Schneider says its Virtual Main Arc Flash Mitigation system is a new concept that reduces arc flash energy across the entire low-voltage switchgear, rather than just reducing energy levels for downstream equipment, as has largely been the case in the past. It’s designed to improve worker safety, enhance electrical system reliability, and help organizations comply with new standards.

    Extending arc flash mitigation to low-voltage switchgear and switchboards has typically been more difficult. Yet users can be subjected to dangerous levels of arc flash incident energy when low-voltage switchgear is fed directly from a power transformer. The system reduces arc flash energy on low-voltage switchgear and switchboards, including the main incoming power distribution switchboard.

    Components of the Virtual Main Arc Flash Mitigation System include:

    • An engineering study to evaluate the optimum settings for the relays and circuit breakers in the unit substation. Optimizing the circuit breaker settings improves the reliability of service while assuring a reduced arc flash level at the substation. This is done by setting the virtual main relay to operate fast enough to reduce arc flash energy while operating slower than the downstream circuit breakers (circuit breakers closest to the fault).
    • A switching device with fault interruption capability on the high-voltage side of the service transformer. If the high-voltage disconnecting device does not have fault-interrupting capability, a circuit breaker or other vacuum interrupter can be retrofitted in its place. When the disconnecting means on the high-voltage side of the transformer trips, all of the low-voltage equipment, including the bussing at the incoming line section of the switchgear, is de-energized. This prevents an arcing fault from propagating within the switchgear.
    • Three relaying class current transformers installed on the secondary side of the service transformer in the transformer compartment. The current transformers are installed in the transformer compartment (not the switchgear enclosure) to minimize the possibility of arc propagation beyond the current transformers.
    • A self-contained relay package including a microprocessor-based relay and the necessary terminal blocks, pilot lights, and selector switches. The self-contained package is easy to install and connect. It is factory wired and tested, minimizing the required shutdown in the field.

    The Dangers of Arc Flash

    The solution comes at a time when electrical safety and arc flash protection are increasingly top of mind for a wide range of organizations, including commercial buildings, industrial plants, data centers, and government and healthcare facilities.

    When an electrical arc occurs, employees working on electrical equipment without adequate Personal Protective Equipment (PPE) risk serious injury or death. Even someone standing more than 10 feet from the fault source can be fatally burned. According to the American Society of Safety Engineers, more than 3,600 workers suffer disabling electrical contact injuries annually. Check out 10 Arc Flash Prediction and Prevention Myths for more information.

    Schneider’s Virtual Main Arc Flash Mitigation System helps organizations comply with new standards from the Occupational Safety and Health Administration (OSHA) and the National Fire Protection Association (NFPA). Some of those rules were discussed in an Industry Perspectives column in September 2012. One of those new standards is NFPA 70E, which requires organizations to implement arc flash protection boundaries.

  • Verizon Terremark Backs Cloudstack and Xen

    Here’s another big win for open standards: Verizon Terremark, the cloud computing arm of the huge telco, says it is investing in the Xen project and CloudStack. These are the company’s first active investments in open cloud projects. While Verizon Terremark says it has long been supportive of open standards, it believes now is the right time to get formally involved in the open-standard ecosystem.

    It’s an interesting revelation, given the company’s deep VMware roots. VMware was an investor in Terremark, which was one of the first companies to roll out a vCloud-based public cloud offering. So why the open source love now?

    Verizon Terremark believes supporting open source programs is important because they increase the overall market acceptance of these platforms, thus allowing the company to provide additional choice to its customers. The bottom line: the company believes open standards are driving innovation, and it needs to be able to provide choice as hybrid cloud becomes the major play going forward.

    Participating in CloudStack, Contributing to Xen

    The company is endorsing the CloudStack project and actively participating in the community. With Xen, it is making a monetary contribution to the development project and joining the Linux Foundation as an advisory board member (the Linux Foundation is the new home of the Xen project).

    The investment grew out of the existing close relationship with Citrix, the company said. Citrix currently supports the Verizon Terremark portfolio of enterprise-class IT services. As open cloud wars are heating up, Verizon Terremark is a nice notch in the belt of CloudStack. There’s room for more than one open cloud standard, but there’s definitely a race to win support from the enterprise heavyweights.

    Verizon Terremark says it is investing in technologies that allow it to bring high quality products to market, while also helping participate in the long term development of key components of the cloud service delivery platform.

    “From our perspective, investing in open source technologies at this stage of market development makes sense because it accelerates sharing, technology and ecosystem growth and reduces development and go-to-market costs,” writes Chris Drumgoole, SVP Global Operations, in a company blog.

    Focus on Security

    On one hand, the move is surprising given Verizon Terremark’s history with VMware, but it makes sense given its relationship with Citrix. In terms of cloud, the company is largely identified as a VMware shop, focusing on security-centric verticals such as its large federal business.

    However, even the most enterprise-centric companies are embracing open standards. Strategies are shifting to supporting hybrid infrastructures and both public and private cloud deployments, so companies can no longer focus solely on one type of cloud, but rather on enabling cloud usage as a whole.

    Verizon Terremark sees many benefits in supporting open standards, with Drumgoole listing some in his blog post:

    • API, application and technology sharing – Open source virtualization platform capabilities and applications make it easier and faster to develop programs and reduce training and compliance costs for end users. Technology sharing leads to higher quality, more robust implementations.
    • Ecosystem and market growth – Open standards allow developers to build rich systems of cooperating solutions, fostering a market and encouraging a higher level of adoption by businesses of all sizes, as well as developers and consumers.
    • Cost reductions – Standards lower the barrier to entry for new technology companies as well as service costs for established players. End users ultimately win with increased price competition and innovation.

    The road forward is paved with open standards. All of the large OEMs hopping on OpenStack is one example of this mentality permeating the industry. Verizon Terremark is spreading its chips, hedging its bets and committing to moving cloud forward in general because of the potential impact on its business. Although this is its first active investment in open cloud projects, it will certainly not be the last in terms of supporting the open source movement.

  • Equinix Strengthens London Ecosystem as CME Expands

    The cabling in these trays at an Equinix data center reflects the dense connectivity found in the company’s financial trading hubs. Today the CME Group said it was expanding with Equinix in the London market. (Image: Equinix)

    Equinix is reinforcing its financial ecosystem in the London market, as the derivatives marketplace CME Group is establishing a Globex hub inside the Equinix LD4/LD5 campus in Slough, scheduled to open in May 2013. Locating the CME hub at Equinix places CME Group’s business in close proximity to Europe’s leading trading platforms and electronic trading customers.

    CME Group exchanges offer trading across all major asset classes, including futures and options based on interest rates, equity indexes, foreign exchange, energy, agricultural commodities, metals, weather and real estate. The group brings buyers and sellers together through its CME Globex electronic trading platform and its trading facilities in New York and Chicago.

    Approximately 25 percent of CME Group’s electronic trading volume comes from outside the United States, primarily from the EMEA (Europe-Middle East-Africa) region. “We continue to see growing demand from our customers based throughout Europe for our product offerings, which means that we also need to focus on building our infrastructure and technology capabilities in the region,” said William Knottenbelt, managing director EMEA, CME Group.

    Community of Potential Customers

    By choosing Equinix, CME Group further aligns with its regional customers and gets access to an expansive community of potential customers located inside Equinix International Business Exchange (IBX) data centers. Customers interested in connecting to the CME Globex hub simply need to acquire space in Equinix’s LD4/LD5 campus and cross-connect to the platform, or lease a line.

    It’s another case of customers begetting customers on the part of colocation players, and Equinix’s strength in the financial vertical means customers are eager to get in the same ecosystem as like-minded businesses.

    “In today’s evolving market, exchanges want to reach the largest trading community with the lowest infrastructure costs, using data centers already well-populated with their target customers,” said Stewart Orrell, managing director of Global Financial Services at Equinix. “Equinix is the only network-neutral data center provider that’s able to meet these needs globally, and CME Group will be a uniquely powerful addition to the thriving financial ecosystem inside Equinix.”

    New market regulations across Europe are driving the movement of derivatives to trade on exchanges, resulting in trade processing through central clearing houses and data being reported and housed in trade repositories. Over the past few years, Equinix has built a cross-asset class business which is well-positioned to meet the demands of the evolving algorithmic trading market and its growth into additional asset classes such as FX and derivatives.

  • Stream Data Centers Building Again in San Antonio

    The new Stream Data Centers private data center property in Richardson, Texas. The company will build a similar facility in San Antonio.

    Stream Data Centers is expanding again, building a greenfield data center on land it has acquired in San Antonio. The company announced today it has acquired 9.6 acres of land in Westover Hills Business Park. This will be the company’s second data center in San Antonio, and ninth in Texas. Stream Data Centers will break ground on its San Antonio Private Data Center in May 2013, and the facility will be fully commissioned and ready for occupancy in February 2014.

    The data center will be a 75,840 square foot purpose-built facility that will initially deliver 2.25 MW of critical load power, with the ability to easily expand the critical load to 6.75 MW with all necessary conduit and pads in place. It is being divided into three private data center suites, each containing 10,000 square feet of raised floor space.
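
    The announced figures imply a fairly conservative initial power density; a quick calculation, assuming the critical load is spread evenly across the three suites:

    ```python
    # Implied power density from the announced figures (illustrative only).
    initial_mw, full_mw = 2.25, 6.75
    raised_floor_sqft = 3 * 10_000  # three suites of 10,000 sq ft each

    print(f"{initial_mw * 1e6 / raised_floor_sqft:.0f} W/sq ft initially")  # 75
    print(f"{full_mw * 1e6 / raised_floor_sqft:.0f} W/sq ft at build-out")  # 225
    ```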

    “We are excited to build upon our previous success in San Antonio and start our latest project in the area,” said Paul Moser, Co-Managing Partner of Stream Data Centers. “San Antonio’s diverse mix of Fortune 1000 companies and their growing IT requirements is driving the need for more data center space in the city. San Antonio is also attractive to out-of-region enterprise data center users due to its central U.S. location, reliable infrastructure, and stable cost of electricity.” The company has witnessed the strength of the San Antonio market in the past, selling a previous project to a Fortune 100 company.

    Stream will utilize its standard 2N electrical / N+1 mechanical configuration and the project will include dual feed power from two separate substations. Additionally, the carrier-neutral facility will include redundant telecommunication rooms serving each PDC Suite with access to the multiple fiber providers serving the site.

    It is being constructed using Miami-Dade County building code standards, providing the ability to withstand 146-mph straight-line winds and uplift. Stream is using the accredited construction and design practices required to achieve LEED Gold certification.

    Stream strategically selected the site for this development in Westover Hills Business Park because it boasts high security and a robust fiber and power infrastructure. The site is in close proximity to other primary enterprise data centers occupied by Microsoft, Chevron, Lowe’s Corporation, Valero, Frost Bank, Christus Health and others.

    San Antonio is one of the fastest growing oil and gas markets, a result of its proximity to Houston and its attraction for large companies creating operational hubs in the city. It is also home to a large concentration of financial services, healthcare, and government-related organizations. Microsoft can at least partially be credited with kicking off a strong San Antonio data center market way back in 2008, when it decided to build a mammoth data center there.

    “We look forward once again to working with Stream Data Centers in San Antonio to identify and recruit enterprise data center users to the area,” said Mario Hernandez, president of San Antonio Economic Development Foundation.

    Stream has other data center developments in Dallas, Houston, Denver and Minneapolis. Stream Data Centers has a fourteen-year track record of providing space for enterprise data center users including Apple, AT&T, The Home Depot, Chevron, Catholic Health Initiatives, Nokia and others. During that time, Stream has acquired, developed and operated more than 1.5 million square feet of data center space in Texas, Colorado, Minnesota, and California, representing more than 125 megawatts of power.

  • Google Invests $390 Million to Expand Belgium Facility

    View of sunset over the exterior of Google’s data center in St. Ghislain, Belgium. (Photo: Google)

    Google continues to make big infrastructure investments, in this case in a key facility powering European services. The company is investing 300 million euros ($390 million U.S.) to expand its data center in Belgium. It’s the latest in a series of expansion announcements for Google, which sees its data centers as the technology engine powering its online search and advertising platform.

    In January, we noted the company had poured $1 billion into its data centers in a period of three months. That investment follows other major commitments at multiple data centers, such as an additional $600 million in North Carolina, bringing Google’s total investment there to over $1.2 billion. Last year, the company’s investment in Iowa passed the $1 billion mark. The year before, there was a $600 million expansion in Oklahoma. Google also recently unveiled its first data center project in South America, which will be located in Quilicura, Chile.

    The Belgian facility in St. Ghislain, southwest of Brussels, is the underpinning of Google’s services in Europe, including search, Gmail, and YouTube. The center currently has approximately 120 employees, and the facility is touted as highly energy efficient. Google also operates data centers catering to the European market in Ireland and Finland. The Hamina data center in Finland received $184 million in investment last year.

    Hallmark of Belgium Data Center is Efficiency

    The climate in Belgium supports free cooling almost year-round, and the facility is chiller-less. The facility is “water self-sufficient,” as it draws water from a nearby industrial canal and has built a 20,000-square-foot water treatment plant to prepare the canal water for use in the data center. This is among the reasons the facility is a top performer when it comes to energy efficiency, hitting a Power Usage Effectiveness (PUE) of 1.11 on a 12-month average in 2011. For more details on how Google runs without chillers, see Google’s Chiller-Less Data Center for our coverage of the engineering prowess behind the concept.
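
    For context, PUE is total facility energy divided by IT equipment energy, so a PUE of 1.11 means only 11 percent overhead (cooling, power distribution, lighting) on top of the IT load:

    ```python
    # PUE = total facility energy / IT equipment energy.
    pue = 1.11

    it_load_kwh = 1000                # arbitrary IT consumption
    facility_kwh = it_load_kwh * pue  # total, including overhead
    overhead_kwh = facility_kwh - it_load_kwh

    print(f"{overhead_kwh / it_load_kwh:.0%} overhead beyond IT load")  # 11%
    ```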

    Google also allows the ambient temperature in the data halls of its Belgium facility to rise, with the humans working there staying within climate-controlled sections of the building for the most part. For the majority of the year, it’s cool enough that this design works with no problems, but when it heats up in Belgium, the facility enters what the company calls “excursion hours.” Indoor temperatures can rise above 95 degrees, and the humans leave the server area. This rarely occurs, and the machines work just fine – it’s only uncomfortable for humans.

  • Compass Commissions Its First Data Center In Nashville

    Compass Datacenters has completed construction and commissioning of its first data center, a 21,000 square foot, 1.2 megawatt stand-alone facility in Franklin, Tennessee. Groundbreaking to customer handover took just six months using Compass’ patent-pending “Truly Modular Architecture.” The facility in suburban Nashville has been leased by a customer (previously identified as Windstream) that is taking possession of the data center this month.

    “Completing a stand-alone, hardened, Tier III-certified data center facility in only six months is a fraction of the time it typically takes for this kind of facility, but that is the standard timeline with Compass’ methodology,” said Chris Crosby, CEO of Compass Datacenters. “It’s not uncommon for this kind of project to take more than a year or two with traditional design and construction practices. Compass was founded to make that a thing of the past, and our very first project is a successful demonstration of the advantages of our methodology.”

    “There were 50 full days of rain in Nashville during that timeframe,” Crosby added. The company still hit its deadline. “For a greenfield build, it’s a big deal.”

    The facility in Franklin was built using Compass’ modular architecture, which makes it possible for companies to locate their data centers where they need them, at an affordable cost, rather than where their provider happens to have a facility. The centerpiece of the design is the CompassPod, which provides 10,000 square feet of column-less raised-floor space supported by 1.2 megawatts of electrical power with 2N power distribution. The facility delivers a PUE of 1.2 to 1.5 or lower, even at loads as low as 25 percent. CompassPods are contained within, and protected by, the CompassStructure, a hardened, energy-efficient, highly secure shell for the facility’s mission-critical IT systems.
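
    To put the pod’s specs in perspective, here is a quick back-of-the-envelope check using only the figures quoted above; this is a sketch, not Compass’ published engineering data:

    ```python
    # Back-of-the-envelope density check from the quoted CompassPod specs:
    # 10,000 sq ft of raised floor backed by 1.2 MW of critical power.
    pod_power_watts = 1_200_000
    pod_floor_sqft = 10_000

    watts_per_sqft = pod_power_watts / pod_floor_sqft
    print(f"{watts_per_sqft:.0f} W/sq ft")  # 120 W/sq ft of raised floor

    # At 25 percent load the pod draws only ~300 kW of IT power; holding
    # PUE in the 1.2-1.5 band at that partial load is the notable claim,
    # since fixed overhead (UPS losses, fans) usually inflates PUE when
    # utilization is low.
    quarter_load_kw = pod_power_watts * 0.25 / 1_000
    print(f"{quarter_load_kw:.0f} kW at quarter load")  # 300 kW
    ```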

    The CompassPowerCenter provides the 2N UPS and 2N switchgear required to ensure uptime and reliability. Each facility also includes a dedicated CompassSupport module that meets the needs of operational staff and handles logistics for data center operations, including a security center, lobby, office space, loading dock, break area and restrooms.

    “In terms of momentum, this is huge,” said Crosby. “Raleigh-Durham is going through Level 5 commissioning next week. Once again, it’s within the six-month time frame. With only six months from groundbreaking to delivery, this brings the concept of just-in-time delivery to data center facilities, enabling customers to take delivery of their new standalone facilities on a timeline that was never before possible.”

    In short, Crosby believes the architecture has proven itself. Because it lets customers expand incrementally and time their capital spending, Compass’ model is attractive. Compass customer Windstream, as one example, has been able to add space as it adds revenue. That takes the guessing game out of the equation and allows a company to expand data center space in line with the business.

    One of the big differences Crosby sees is the hardened nature of the facilities. “The level of hardening is unique for our space and will continue to set us apart,” said Crosby. “During construction, there was a tornado that touched down basically across the street. Those folks were happy they were inside that facility at the time.”

    The company is seeing continued interest across the country. “The level of interest in the secondary markets is high. The next set of markets, Minneapolis and Columbus, also are seeing high interest,” said Crosby. “From an overall funnel perspective, we’ve been tracking 75-80 megawatts of opportunity. We feel pretty good about where things are at. We’re at the negotiation stage with a few clients.” The company’s goal is to be able to manage up to 10 projects at the same time in 2014.

    “2012 was the year of the prototype; this year was the engine. We’ll probably work another six to 10 projects this year,” said Crosby. The company says that even as it has improved the efficiency of its builds, it keeps a continuous improvement program in effect. Crosby gives the example of a car model undergoing tweaks from year to year to become better and better. Compass believes it has the blueprint to do things right, but will keep looking for ways to improve at every step.

    The company’s prospects have prompted it to add Jay Forester, formerly of Digital Realty Trust, to the talent pool. “We are getting an unbelievable resource here,” said Crosby. “It’s really an opportunity for him to take industrialization and move it to productization. Jay will lead that charge.”

    Forester was named Senior Vice President of Data Center Product Delivery, a new position at the company with responsibility for the construction and delivery of data center facilities across the United States.

  • HP Project Moonshot: Low-Power Chips To Increase Density

    HP is now selling its first Project Moonshot systems, the bleeding edge of its server line, which HP calls “the world’s first software-defined server to run Internet-scale applications.” The Moonshot 1500 uses a low-power processor, Intel’s Atom 1260, a class of chip found in cell phones, so it uses less energy and less space while reducing complexity and cost.

    There has been a broad movement toward ultra-low-power servers, and Project Moonshot is HP’s entry. HP clearly sees an opportunity in building low-power, many-core servers, which can slash power usage across large footprints of Internet infrastructure.

    As CEO Meg Whitman put it, “We’re living in a period of enormous change. There will be hundreds of billions of devices connected.” As the IT world enters the era of the Internet of Things, where every device and appliance is connected, creating and storing data, demand for compute keeps growing. “It’s no longer about petabytes, but brontobytes,” said Whitman. “And all of this takes a lot of elements in the background. We’re on a path that is not sustainable from a space, cost, and energy perspective.”

    Converged Infrastructure

    The Moonshot 1500 platform uses a converged infrastructure, packing workload-optimized, extremely low-energy “server cartridges” into an enclosure that pools resources across thousands of servers using HP Converged Infrastructure technology. This allows the sharing of resources, including storage, networking, management, power and cooling.

    The HP Moonshot 1500 System chassis is similar to a blade chassis, but on steroids: a 4.3U (7.5 inches tall) chassis that hosts 45 independent hot-plug ProLiant servers, all attached to multiple fabrics.

    One Moonshot system can pack 180 servers, including built-in switches, with high-speed uplinks connecting all the servers at 10 terabits per second of I/O. HP says one rack of Moonshot systems can replace eight racks of traditional 1U two-processor servers, using 89 percent less energy and 80 percent less space with 97 percent less complexity, which leads to 77 percent less cost.
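
    For a rough sense of where the density claims come from, here is a sketch built from the figures above. The 42U rack height is our assumption (HP’s announcement does not specify one), and the eight-rack replacement figure is HP’s workload-equivalence claim, not a raw node count:

    ```python
    # Rough density arithmetic from the figures quoted above.
    # Assumption: a standard 42U rack; HP's announcement does not specify.
    RACK_UNITS = 42
    CHASSIS_UNITS = 4.3        # Moonshot 1500 chassis height (7.5 inches)
    SERVERS_PER_SYSTEM = 180   # per HP, including built-in switches

    systems_per_rack = int(RACK_UNITS // CHASSIS_UNITS)      # 9 chassis
    moonshot_nodes = systems_per_rack * SERVERS_PER_SYSTEM   # 1,620 nodes
    one_u_nodes = RACK_UNITS                                 # 42 nodes, one per U

    print(moonshot_nodes, "Moonshot nodes vs", one_u_nodes, "1U servers per rack")
    # HP's "one rack replaces eight" is about workload equivalence, not
    # this raw node count: each Atom node is far less powerful than a
    # two-processor 1U machine.
    ```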

    While the first Moonshot version on the market uses Intel processors, additional servers shipping later in 2013 will use chips from partners including AMD, Calxeda, Applied Micro and Texas Instruments.

    Project Moonshot represents a new class of server designed to run Internet-scale workloads, targeting specific applications such as gaming, genomics, telecom, video analysis and more.

    According to HP, client-server infrastructure was not designed to handle the level of computing that Internet-scale organizations are running, and without a new approach the economics behind social, mobile, cloud and big data will deteriorate.

    The company also announced the Pathfinder Innovation Ecosystem, a program focused on servers for different workloads. Some Internet-scale organizations today operate more than one million servers, and many enterprises in finance run tens of thousands. HP sees an opportunity to market a solution that meets the needs of these kinds of businesses: moving large enterprises away from general-purpose servers and into a new era of the software-defined server. The future, according to HP, belongs to servers designed specifically for different workloads and built to power a wide range of applications.

    HP first revealed it was building such low-power machines in the fall of 2011.

    Close-up of the Moonshot 1500 from HP. The company rolled out the units, which use low-power processors.

  • The Evolution of DataSite Marietta

    Some of the racks inside DataSite Marietta, a 73,000 square foot facility in Georgia that is the company’s second facility, along with DataSite Orlando.

    DataSite is adding 18,000 square feet of purpose-built data center space to its Marietta, Georgia facility, where it offers a product it calls hybrid colocation. The company has invested $19 million in the current phase of expansion, with future phases bringing the total investment to $30 million.

    The company’s two properties are DataSite Marietta and DataSite Orlando, each with its own evolution. While DataSite has offered wholesale colocation in Orlando for some time, the 73,000 square foot Georgia project took an alternate approach.

    “Marietta was a canvas upon which we could paint the data center we wanted to build,” said Jeff Burges, CEO of Burges Property + Company. “With this, we’re announcing the next iteration: the colocation footprint.”

    The new space at DataSite Marietta is designed to provide a minimum of 2 megawatts of UPS load in a standard footprint, and is scheduled to begin accepting customers this month. DataSite is carrier and vendor neutral and has designed its offerings to be flexible enough for clients with specific data center requirements.

    What is Hybrid Colo?

    Hybrid colocation gives customers options in how they provision four key infrastructure components: the main utility (switchgear and utility service), the generator plant, the UPS plant and the cooling plant. The approach was developed for users who didn’t want to be conventional colo customers, giving them the option to own and control the pieces they want, funded with their own capital.

    “We dedicated and delivered exclusive use of UPS and cooling for a particular user at Marietta,” said Burges, citing one example of a hybrid arrangement. “It was one case where I said: I’m going to build you your own UPS and cooling plant. But generator and switch gear, that’s an expensive proposition – so you’ll share, and then get your own UPS and cooling.”
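
    One way to picture the model is as a per-component checklist: each of the four infrastructure pieces can be shared or dedicated, and a dedicated piece can be funded and controlled by the customer. The sketch below uses hypothetical type and field names, not DataSite’s own tooling, to encode the arrangement Burges describes:

    ```python
    from dataclasses import dataclass

    # Hypothetical model of hybrid colocation: each of the four
    # infrastructure components is either shared across tenants or
    # dedicated to one customer, who may also own and control it.
    @dataclass
    class Component:
        name: str
        dedicated: bool       # exclusive to a single customer?
        customer_owned: bool  # funded and controlled by that customer?

    # The arrangement described above: shared switchgear and generators,
    # with UPS and cooling built out for one user's exclusive use.
    hybrid_deal = [
        Component("utility service / switchgear", dedicated=False, customer_owned=False),
        Component("generator plant", dedicated=False, customer_owned=False),
        Component("UPS plant", dedicated=True, customer_owned=False),
        Component("cooling plant", dedicated=True, customer_owned=False),
    ]

    for c in hybrid_deal:
        print(f"{c.name}: {'dedicated' if c.dedicated else 'shared'}")
    ```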

    A customer can eventually take over the management, operations and service levels of the cooling plant, or elect to have the DataSite team maintain it.

    The company plans to announce one or two more facilities this year in undisclosed locations, which will also feature the flexible product model. “You have to be able to accommodate 40 watts a square foot, and 400 watts a square foot,” said Burges. “We’re trying to be as broadly flexible as we can.”

    Tracking the Industry’s History

    Burges’ career tracks the growth and evolution of the data center industry. As CEO of Burges Property + Company, he started out with a commercial real estate background, operating large office complexes that included some laboratory space. “The experience in labs gave me background in specialty real estate,” said Burges.

    The company’s first real foray into IT facilities was the acquisition of 274 Brannan Street, a long-distance hub building in San Francisco. “We did not go in on purpose; it was luck,” said Burges. “We brought great improvements. We took that model and became arguably the biggest telecom owner in the country. We got in, and then we got out, very healthy and very happy. We were stingy about what we bought and we were careful. The telecom hotel evolution over 4-5 years gave us unique insight heading into the data center business.”

    The company was wise to take advantage of the Exoduses, AboveNets and WorldComs, companies that expanded too fast and built top-quality facilities that sat largely empty when the bubble popped. “We started to acquire these beautiful, empty data centers in 2004-06,” said Burges. One such facility was 1920 East Maple in El Segundo, Calif., which it sold a year later, in 2005, to Equinix.

    “Perhaps the finest building was the AT&T facility in Orlando in late 2004,” said Burges. “We bought it and sat on it for a while. The most difficult thing is to capture that low basis if you can. If you can acquire something well engineered, it is a vastly better proposition.”

    During 2005-2007, Burges says, the nature of customer requirements began to change. “Then, we were still in that ‘I need to own and control the UPS and cooling system’ stage,” he said. “That is changing now. In 2009-10, there was a move to trusting the colo operator.

    “DataSite Marietta represents a history book of that evolution,” said Burges. “We went ahead and recapitalized our Datasite Marietta and Orlando. We opened both buildings in 2009 with vastly different models.”

    Matching the Infrastructure to the Customer

    DataSite Orlando is a wholesale data center, with the company entering its third phase of build-out there; Orlando has 8 megawatts of critical load. DataSite Marietta was built to suit. Half of DataSite Marietta is occupied, and in the other half the company is building out its hybrid colocation offering.

    The company believes that there is a gap between the cutting edge designs touted by many providers and the needs of most data center customers.

    “There’s a lot of talk about efficiency and low PUE, but what we see the market wanting is boring and typical,” said Burges. “What we’ve been successful with is heat exchangers and managed chilled water. The majority of the clients are still air cooled, under 200 watts per square foot. We go cautiously into the high density world. I fear the future for some of the folks who have gone mechanical enclosure, 20kW (rack density), because it comes down to the almighty dollar.

    “Our customer base is a vast array of meds, feds and eds,” said Burges, referring to healthcare, government and education tenants. “They all have their own way of doing things, but we find that static UPS is the way to go. Continuous power systems have too many points of failure.

    “We’re practical. We have tremendous redundancy and a terrific uptime record. Customers are able to grow by the rack and not have to decide what the footprint is going to be 10 years from now,” said Burges.

  • Digital Keeps Buying With Deals in Dallas, Phoenix

    Digital Realty keeps the acquisitions coming, picking up a data center in Dallas and a future development site in Phoenix, Arizona, two very hot markets showing strong growth right now. The Dallas acquisition continues the trend of buying fully leased properties in attractive markets, while Phoenix adds future inventory to a market experiencing heavy demand.

    The Dallas, Texas, facility is an operating, 61,750 square foot data center acquired for $8.5 million. It’s a single-tenant facility leased on a long-term basis to an undisclosed provider of business, information technology and communications solutions, making this another acquisition of a fully leased property, following the one earlier this week in Minnesota. The facility is approximately 3.5 miles from Digital Realty’s Digital Dallas Datacenter Campus. Dallas is, of course, a major hosting hub and has long been a hotbed of data center activity.

    “The acquisition of the Dallas facility is a continuation of our strategy of adding income-producing data center facilities to our global portfolio that offer attractive returns for our shareholders,” said Scott Peterson, Chief Acquisitions Officer of Digital Realty.

    The second property is in Phoenix, Arizona, and consists of three buildings totaling around 227,000 square feet that sold for $24 million. The first building, at 109,000 square feet, is going into Digital’s inventory of space held for development; subject to market conditions, it is capable of supporting 7.2 megawatts of IT capacity. The seller, which remains the current tenant, occupies the deal’s remaining two office buildings on a short-term lease.

    Phoenix has been home to a lot of recent data center activity (check out the Phoenix Region on DCK), so it’s definitely an area where Digital wants to get its ducks in a row. “The acquisition of the Phoenix site adds future inventory to a market where we have experienced significant absorption at our existing facilities coupled with continued strong demand from enterprise customers,” said Peterson.

    “Both acquisitions expand our footprint in markets where we see positive demand from customers for our flexible data center solution offerings,” said Peterson.