Author: Jason Verge

  • Interxion Uses Sea Water to Cool Stockholm Data Centers

    A cold aisle containment system in an Interxion data center, viewed from above. (Photo: Interxion)

    European data center provider Interxion is no stranger to innovation. Over the years, the company has been a pioneer in modular design and cold aisle containment, and is now using seawater to cool a Stockholm data center, generating some serious efficiency benefits. Energy costs have been reduced by 80 percent, the company said, freeing up enough capacity for additional IT load and allowing more customers to colocate in the facility.

    Interxion says the Power Usage Effectiveness (PUE) for its Stockholm facility has dropped to 1.09, making it one of the most efficient data centers in Europe. The kind of efficiency Interxion is experiencing in Stockholm is most commonly associated with facilities that use air economization (free cooling) to leverage a cool climate to cool servers.

    “We don’t use outside air. We use chilled water, and we achieve 1.2 from this,” said Lex Coors, VP data center technology for Interxion. “With the sea water we can achieve a PUE of 1.1 because we do not have to cool it over time. With sea water, you can take it in and push it out easily.”

    Mother Nature as Your Chiller

    Seawater cooling systems pump deep, cold seawater through a data center’s HVAC system. As a result, the air circulating within the facility is cooled, lowering the inside temperature. Although the mechanics of this process are similar to chiller systems, seawater cooling completely eliminates the need to cool the water down, which requires high levels of energy.
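
    Power Usage Effectiveness (PUE) is simply total facility power divided by the power delivered to IT equipment, so removing mechanical chilling shrinks the numerator. The rough Python sketch below, whose load figures are assumptions for illustration rather than Interxion’s actual numbers, shows how cutting cooling energy pushes PUE toward the 1.09 the company reports:

    ```python
    # Illustrative PUE comparison: mechanical chillers vs. seawater cooling.
    # All kW figures are assumptions for illustration only, not Interxion data.

    def pue(it_load_kw, cooling_kw, other_overhead_kw):
        """PUE = total facility power / power delivered to IT equipment."""
        return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

    it_load = 1000.0          # kW drawn by servers (assumed)
    other = 40.0              # kW for lighting, UPS losses, etc. (assumed)

    chiller_cooling = 200.0   # kW for mechanical chilling (assumed)
    seawater_cooling = 50.0   # kW for seawater pumps only (assumed)

    print(f"Chiller PUE:  {pue(it_load, chiller_cooling, other):.2f}")   # ~1.24
    print(f"Seawater PUE: {pue(it_load, seawater_cooling, other):.2f}")  # ~1.09
    ```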

    Interxion’s seawater cooling system is particularly notable because it runs water through multiple data centers multiple times, instead of the conventional strategy to run water through just one facility. This method also reduces operational and environmental costs, as it requires half the amount of water to cool each of the data centers. Interxion also doubles the use of seawater by reusing the warm water to heat local offices and residential buildings before returning it to the sea.

    There are a number of techniques to tap external sources of cold water, effectively using Mother Earth as your chiller. But some work better than others, Coors said. He noted the challenges of using deep lake cooling systems versus seawater and aquifers.

    “With deep lake and aquifers there is basically a push back on using water,” said Coors. “People are so afraid of legionella that they don’t use water power. We have to look for alternatives. There’s drilling into the ground, but that’s not allowed often. We can use sea water and salt aquifers.”

    Coors mentions there are advantages to working in Europe. “In Europe, we do not have to focus on the smart grid pipes, because basically if you are connected to a power grid in Europe, you’re connected to the national and international grid,” he said. “It’s a matter of paying a little bit more and telling them ‘I want this kind of green power.’”

    So will seawater cooling, and its benefits, become a trend in the US? “Depending on the area, it should be a design trend,” said Coors. “It really helps with the environment. If you have to run a pipe a mile, it’s still very beneficial. The Gulf is a bit hot, but in California there are enough opportunities.”

    A Track Record of Innovation

    Interxion has a history of innovating; in addition to seawater cooling, the company has been a pioneer in phased design and cold aisle containment. The company has been practicing phased construction in its data centers since 1999, a time when most data centers were built as “barns” with large open floor plans, constructed in their entirety up front.

    “We had to build data centers in 11 countries and only had a limited amount of capital available,” said Coors. “I came from a shipping company and took the idea of shipping containers back to the data centers. I didn’t want to build a bunch of data centers I’d have to upgrade in three to four years. I did not install the whole infrastructure, nor did I build out the whole data center. Instead, we chopped the building into 4 phases of 10,000 square feet and installed infrastructure to support a limited capacity, adding additional infrastructure over time so as not to interrupt operation.”

  • Gartner, IDC See Server Sales Turning a Corner


    Server sales are up after a sluggish start to 2012, but growth isn’t coming from the incumbent server vendors, say several new reports. The “other” category, which includes Quanta and other companies building custom servers for large cloud companies, had the most impressive growth, suggesting a significant impact for Open Compute designs at the expense of major server makers like Dell and HP.

    While reports from IDC and Gartner show a slightly down year for servers, the fourth quarter returned to growth, driven by hyperscale data centers. Enterprise adoption of new server hardware was sluggish, leaving hyperscale Internet companies and service providers to drive the gains.

    Gartner and IDC saw revenue increase 5.1% and 3.1%, respectively, in the fourth quarter of 2012 versus the same period in 2011, but both recorded declines of 0.6% and 1.9% for the full year. IDC said the Q4 increase was the first quarterly increase in five quarters.

    Growth Offset by Enterprise Sluggishness, Virtualization

    While hyperscale data centers were a bright spot in fourth quarter sales, overall annual sales are still being dragged down by enterprise delays. “2012 was a year that definitely saw budgetary constraint which resulted in delays in x86-based server replacements in enterprise and mid-sized data centers,” said Jeffrey Hewitt, research vice president at Gartner. “Application-as-a-business data centers such as Baidu, Facebook and Google were the real drivers of significant volume growth for the year.”

    Gartner’s outlook for 2013 suggests that modest growth will continue. These increases continue to be buffered by the use of x86 server virtualization to consolidate physical machines as they are replaced. Some replacements are likely to begin in the enterprise segment as servers continue to age and economies improve.

    IDC says the x86 server market experienced sharp revenue growth in 4Q12, as systems based on Intel’s Sandy Bridge processor, launched in early 2012, saw strong demand that helped drive sharply higher average selling prices across the market.

    Trends of Note

    IDC is observing an increased interest from the market for converged systems, suggesting enterprises are latching onto the idea of turnkey solutions. Servers optimized for high density were helped by the growth of service providers in the market as well.

    “In addition to HPC, Cloud and IT service providers favor the highly efficient and scalable design of Density Optimized servers,” said Jed Scaramella, research manager with IDC. Revenue for Density Optimized Servers grew 66.4 percent year over year in 4Q12 to $705 million.

    “Blade servers are being leveraged in enterprises’ virtualized and private cloud environments,” said IDC, which noted that enterprise IT organizations view converged systems as a way to simplify management and improve time to value. IDC said blade server revenue grew 3.3 percent year over year to $2.4 billion, accounting for 16.3 percent of total server revenue. Gartner said blade servers posted a revenue increase of 3.2 percent but a shipment decline of 3.8 percent for the year.

    Another interesting trend is being driven by Open Compute. Gartner’s data shows strong growth for vendors lumped under “Others,” which includes Quanta and other companies (Hyve, ZT Systems, WiWynn) building custom servers for large cloud companies.

    High performance computing and cloud helped Linux, which, according to IDC, now represents 20.4% of all server revenue. Microsoft Windows server demand continues to increase as well. Unix took the biggest hit, with its share declining for the sixth consecutive quarter. “Relatively weak mainframe and RISC/Itanium Unix platform market performance kept overall revenue growth in check,” said Jeffrey Hewitt, research vice president at Gartner.

    Growth From a Geographic Perspective

    IDC had forecast that server demand would begin to improve in the second half of 2012, following a number of product refreshes in the first half of the year. That forecast proved correct, although the turnaround came a quarter later than expected due to sluggish enterprise buying and soft market demand in particular regions.

    “While this did happen in the fourth quarter, market demand was uneven with the U.S., Asia/Pacific and Latin America all experiencing sharp growth, while demand in all other regions remained soft,” said Matt Eastwood, group vice president and general manager, Enterprise Platforms at IDC. “Average selling prices for servers increased sharply in the quarter as large and small enterprises continued to invest heavily in new server capacity to drive additional consolidation and virtualization initiatives.”

    Gartner saw similar growth patterns from a geographic perspective. There was server revenue growth in spite of relative softness in some regions, most notably Western Europe. In terms of unit shipments, the three highest growth rates came from North America (5.5 percent), Asia/Pacific (3.4 percent) and Latin America (0.2 percent), the only regions to see shipments increase; the same three regions grew 16.3, 15.5 and 6 percent, respectively.

    The Revenue Market Share Rankings

    Both research houses once again had IBM as the top server vendor. IDC said it held 36.5% market share, while Gartner accords Big Blue a 34.9% share. IDC attributed IBM’s growth to improvements in demand for its System z mainframes, which recorded their highest quarterly revenue in more than a decade. Revenue for IBM’s System z mainframe running z/OS increased 55.6 percent year over year to $1.8 billion, representing 12.3 percent of all server revenue in the fourth quarter.

    Jean S. Bozman, Research Vice President in IDC’s Enterprise Platforms Group attributed the System z growth to several factors, including “technology refresh, new products such as zEnterprise, new accounts in emerging economies, and consolidation of some enterprise Linux workloads onto IBM System z, using the Integrated Facility for Linux (IFL) specialty engines. Although revenue results for System z are traditionally heavier in the fourth quarter, this accelerated acquisition shows the breadth and depth of the IBM mainframe installed base.”

    HP is number two, with both IDC and Gartner pegging its share at 24.8 percent. Dell ranks third, with IDC giving it 15.1 percent factory revenue market share in the quarter, while Gartner has it at 14.3 percent.

    And then there’s Cisco Systems, the networking giant that entered the server market in 2009. This was the first quarter in which Cisco held a position in IDC’s top five server rankings. Of Gartner’s top five vendors in worldwide server shipments, Cisco was the only one to post a shipment increase in the fourth quarter of 2012, with worldwide server shipments up 40.9 percent in the quarter.

  • Microsoft Joins Open Data Center Alliance

    Microsoft has joined The Open Data Center Alliance (ODCA), an industry group that publishes usage models for Open Specifications for Cloud Computing. The Open Data Center Alliance comprises more than 300 companies that represent over $100 billion in annual IT spending.

    “In line with Windows Azure’s commitment to openness and interoperability, we are pleased to join ODCA and work with industry leadership on standards for the cloud,” said Bill Hilf, general manager, Windows Azure. “We are dedicated to serving the industry and customers by providing an open, reliable and global approach to the cloud, and we look forward to contributing to the ODCA’s mission.”

    ODCA aims to be a voice for enterprise IT in articulating the requirements for the transforming enterprise IT landscape with focus on topics including open, interoperable delivery of compute infrastructure as a service, cloud security, and best practices for adoption of big data analytics. Microsoft brings a valuable perspective to the organization.

    “The ODCA brings together leaders from across industries to work together towards a vision of open, industry standard cloud solution delivery,” said Mario Mueller, BMW’s Vice President of IT Infrastructure and Chair of the Alliance. “In order to truly accelerate availability of cloud services, enterprise IT needs to work closely with cloud service and solution providers.  Microsoft’s participation is a valuable addition to the organization’s mission, and we heartily welcome their membership.”

  • Rackspace Acquires ObjectRocket for MongoDB Service

    Rackspace Hosting is acquiring ObjectRocket, a provider of database as a service (DBaaS) offerings built on the MongoDB database. The acquisition expands Rackspace’s big data play, allowing it to offer Open Cloud customers with demanding applications a NoSQL DBaaS.

    The acquisition is expected to close today, and the ObjectRocket offering will be available to Rackspace customers in early March. The offering will roll out first for customers in the company’s Chicago data center, but will soon be integrated across Rackspace’s Open Cloud portfolio. Financial terms of the deal were not disclosed.

    Rackspace is establishing itself in the high growth NoSQL database market. NoSQL offerings forego traditional approaches to databases, including the use of the relational database model or Structured Query Language (SQL), in favor of an approach that provides better scalability across a distributed architecture. The NoSQL approach has become popular for use in large cloud applications. A recent report from The 451 Group projects that NoSQL software revenue will grow at an annual rate of 82 percent to reach $215 million by 2015.

    “Databases are the core of any application and expertise in the most popular database technologies will be critical to us delivering Fanatical Support in the open cloud,” said Pat Matthews, senior vice president of corporate development at Rackspace. “As we look to expand our open cloud database offering into the MongoDB world, we are really excited to work with the entrepreneurs and engineers at ObjectRocket.”

    MongoDB has already been adopted by organizations such as Disney, The New York Times and Craigslist, among others. Built on an open source NoSQL engine, MongoDB stores structured data in JSON-like documents with dynamic schemas. By offering MongoDB as a service, ObjectRocket makes it easier for customers to use by eliminating much of the setup and configuration.
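
    To illustrate that document model, here is a minimal Python sketch using the standard pymongo driver; the connection string and database name are hypothetical placeholders, not an actual ObjectRocket endpoint:

    ```python
    # Minimal MongoDB usage sketch with pymongo.
    # The host and credentials below are hypothetical placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://user:password@example-objectrocket-host:27017/appdb")
    db = client["appdb"]

    # Documents are JSON-like and need no predefined schema.
    db.articles.insert_one({
        "title": "Rackspace Acquires ObjectRocket",
        "tags": ["mongodb", "dbaas", "rackspace"],
        "published": True,
    })

    # Query by a value inside the tags array.
    for doc in db.articles.find({"tags": "mongodb"}):
        print(doc["title"])
    ```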

    ObjectRocket will continue to be sold as a standalone service, so it’s still usable in conjunction with other clouds; it also leverages AWS Direct Connect to provide low latency and free bandwidth to AWS customers.

    The ObjectRocket founding team collectively brings more than 50 years of experience in scaling large data systems, including MongoDB. They have also designed and managed systems that power some of the busiest sites on the web, and played key founding development roles at companies like Shutterfly, PayPal, eBay and AOL. 

    “With Rackspace’s open cloud philosophy and our shared emphasis on providing the highest level of customer support, we feel this union is an ideal fit,” said Chris Lalonde, co-founder and CEO of ObjectRocket. “Since the beginning, our focus has been on creating a DBaaS platform that would perform, scale and support critical workloads in a superior manner. Joining forces with Rackspace will enable us to achieve this goal, while delivering one of the most advanced MongoDB DBaaS solutions on the market.”

    At the beginning of the year, Rackspace CTO John Engates stated in his cloud predictions that “this is the year when Big Data makes its way into enterprise conversations.” This acquisition reinforces that belief.

  • After Strong Quarter, Internap Preps Cloudy Colo

    The exterior of an Internap data center. The company’s shares have gained in recent days on the strength of fourth-quarter earnings. (Photo: Internap)

    Shares of Internap have surged after the company recorded a strong quarter, indicating it is striking the right chords with its diverse portfolio of colocation, managed services and cloud. The company has also been touting what it calls “Cloudy Colo,” a true hybrid solution available through a single portal. The strong finish to the year and the cloudy colo concept are signs that Internap has found its identity.

    In the two trading sessions since the release of its fourth quarter earnings, shares of Internap have gained 11.5 percent, rising from $7.91 on Thursday to a close of $8.81 on Monday. The fourth quarter saw the highest quarterly revenue, segment profit and adjusted EBITDA in company history.

    Revenue for 2012 was $273.6 million, with fourth quarter revenue of $69.7 million, up 2 percent from the previous quarter. The growth was attributed to its data center services segment, which includes Voxel, which Internap acquired in 2011. Data center services revenue hit $43.7 million, up 24 percent compared to the same period last year, and up four percent from the third quarter. For the full year, data center services generated revenue of $167.3 million, up 25 percent over the prior year. IP services revenue was roughly flat sequentially but slightly down year over year. Customer churn was down. The company counted 3,700 customers as of December 31, 2012.

    Strong Finish to 2012

    “We are pleased with the strong finish to 2012,” said Eric Cooney, President and Chief Executive Officer of Internap. “The continued execution of our growth strategy is reflected in full year revenue and Adjusted EBITDA growth of 12 percent and 20 percent, respectively. Successful integration of the Voxel business and focus on our organic colocation, hosting and cloud infrastructure businesses have delivered full-year growth in data center services revenue of 25 percent.

    “As we look forward to 2013, the priority is simple – focus on continued execution of the strategy to deliver a platform of high-performance, hybridized IT Infrastructure services,” Cooney continued. “We remain confident that the opportunity for long-term profitable growth and stockholder value creation is significant in the market for outsourced IT Infrastructure services.”

    Internap has shifted its focus over the years. The company was founded in 1996 on its expertise in IP services and route optimization. It later added colocation and content delivery services, but has had its stumbles, most notably the 2006 acquisition of VitalStream, which led to a $99 million write-off amid customer support problems. Cooney became CEO in 2009 and immediately focused on the company’s colocation business. Because Internap was realizing higher margins on its company-owned data centers, it began phasing out its use of third-party space and building its own data center space. The company rolled out 26,000 square feet of company-controlled data center space in 2012.

    Sneak Peek: Cloudy Colo

    The company is working on what it informally calls “Cloudy Colo.” It is an extension of its core data center OS platform, with some customers using the beta version.

    “Our whole goal is to redefine what the limitations around colo are,” said Raj Dutt, Senior VP of Technology at Internap and former CEO of Voxel. “We’re going to start giving visibility and control into the obvious things that people don’t get from colo – reboot, bandwidth, inventory management, asset management, the ability to hybridize managed cloud in the same portal.

    “Through software – DCIM-like software – customers can focus on stuff in the rack rather than outside of the rack,” said Dutt. “DCIM has barely started in terms of inside the rack. The roles of machines are changing, and DCIM falls short on this. This is where it starts to get interesting.”

    Offering up a variety of services through one portal, from colo to cloud, as well as giving DCIM-like insight into total infrastructure, will aid Internap in cross-selling its services.

    “This makes colo great for colo customers,” said Dutt. “It also makes colo within reach for cloud guys. As cloud customers need colo, it’s an easier way to go about that. From an infrastructure standpoint, we don’t think the cloud is the be-all and end-all,” he said. “If you’re deploying any application, the best solution is a hybrid situation.”

    Dutt noted that Internap offers everything from colocation to dedicated servers to cloud. “Very few people are offering all of these product sets as one infrastructure fabric,” he noted.

    Dutt believes the economics of cloud are often misinterpreted, and cloud is not always the most cost-effective approach for the customer. “I’d rather sell 100 racks of cloud than 100 racks of colo any day,” said Dutt, stating that the margins for providers are simply better for cloud within the same footprint.

    Dutt attributes Internap’s success with its diverse portfolio to one thing. “The market certainly got more educated,” he said. “More and more people are treating infrastructure as a competitive weapon more than a cost center.”

    It’s still early, so there’s no formal “cloudy colo” product yet. The company is evaluating different models. However, all indications are that Internap is working on giving customers deeper control and analytics across the portfolio, from colo to cloud, with deeper DCIM-like functionality. A formal announcement is most likely coming within the next quarter.

  • Latisys Adding More Space, Power at California Campus

    The interior of a Latisys data center in Virginia. The company has announced plans to expand its data center campus in Irvine, California. (Photo: Latisys)

    Latisys is expanding its west coast hub, adding 12,000 square feet of raised floor and 1.8 megawatts of power to its OC2 data center in Irvine, California, which currently features 20 megawatts of power capacity and 93,000 square feet of data center space. The new space is expected to be available to customers by the end of the second quarter of 2013.

    In addition, Latisys also announced that it has purchased its OC1 data center, a 50,000 square foot facility next door to OC2, in a strategic move to control existing assets for the long term.

    “Southern California continues to grow as a destination of choice for outsourced IT infrastructure services and as a gateway to Asia Pacific,” said Tom Panarisi, Regional Sales Director for Latisys. “Latisys has been an active participant in the Orange County business community for several years and we take our role as the region’s trusted provider for outsourced IT solutions very seriously. We have come to know the most dynamic large and mid-size enterprises—not just in Orange County but across the U.S.—and we are proud to serve as an extension of any organization’s critical IT team.”

    National Demand for IaaS Solutions

    Latisys’ growth in Southern California reflects increasing national market demand for data center, hosting and cloud services, with Gartner predicting the Infrastructure as a Service (IaaS) market will grow by 47.8% through 2015.

    The Irvine data center serves as the west coast hub of Latisys’ national IT Infrastructure as a Service (IaaS) platform – which also includes Denver as an ideal disaster recovery site, along with Chicago and Northern Virginia as gateways to Europe and Latin America.

    “Every day we see accelerating demand for high performance, highly secure, hybrid Infrastructure as a Service,” said Pete Stevenson, CEO of Latisys. “And because that demand is national, Latisys continues to invest in our platform so we can ensure that businesses continue to have multi-site deployments across the most flexible, secure and resilient infrastructure solutions, optimized for today and scalable for tomorrow.”

    Latisys’ national expansion has been ongoing through 2012 and into 2013. Recent announcements include DEN2, Latisys’ newest state-of-the-art data center in Denver, along with the DC5 and CHI2 data centers, which added an additional 22,000 and 10,000 square feet of secure, ultra high-density raised floor in Northern Virginia and Chicago, respectively. Latisys’ total data center platform now exceeds 343,000 square feet across seven data centers in four major markets.

    OC2 is tied directly to a variety of fiber carriers, monitored 24x7x365 by on-site NOC personnel and systems, and operated under SOC 2 Type II and SOC 3 audited controls.

  • With New Owners, Kansas City Carrier Hotel Gets Upgrades

    One of the entrances at 1102 Grand, the data center hub in Kansas City, which has been acquired by Amerimar. (Photo: 1102 Grand)

    The new ownership of Kansas City data hub 1102 Grand has commenced redevelopment. Amerimar Enterprises and Hunter Newby acquired the property in October 2012 and quickly worked on a master plan to reinforce its position as the leading carrier hotel in the region.

    “Kansas City is booming with innovative business opportunities and pushing the limits of telecommunication network capacity and as 1102 Grand is the City’s carrier hotel, these building improvements are necessary to foster continued growth,” said Newby, a joint venture partner at 1102 Grand.  “We look forward to strengthening the building’s infrastructure and providing connectivity options for the many growing companies in the region.”

    This plan includes a complete overhaul of the electrical and cooling infrastructure. A permit has been secured for two additional 2 megawatt generators, and construction has commenced on the first of them. During the installation of the new generators, the partnership will simultaneously increase the incoming electrical service at the property by 66 percent, to 5 MVA.

    “These infrastructure improvements are in response to existing customer growth and robust new demand we are seeing in the market,” said Gerald Marshall, CEO of Amerimar Enterprises and owner of 1102 Grand.

    Cooling Upgrades

    Cooling upgrades will include significant capacity increases, as well as redundancy at each level from origination to delivery. The building will also benefit from a full façade repointing and selective replacement and resealing of windows. With these improvements, the new owners seek to secure the building envelope and eliminate any potential points of intrusion that could compromise the data center operations.

    The partnership will also be upgrading the building management systems to monitor all security, fire alarm, fire suppression, cooling and power systems from one central command center, which will also be available remotely. The size of the facility’s support team has nearly doubled as well.

    The project ownership cited the importance of cooperation from the city of Kansas City, Missouri, and the leadership of the Planned Industrial Expansion Authority, which have authorized the public-private partnership necessary to reinvigorate the historic structure.

  • Fighting Back: Big Data Center States Push for More Tax Incentives

    This is the newest data center building at Digital Realty’s northern Virginia campus. State legislators in Virginia recently passed additional tax incentives for data centers to stay competitive. (Photo: Rich Miller).

    Financial incentives have been popular among states looking to build a reputation as data center destinations, and to compete with neighboring states. It looks like the growth of incentives is now prompting the traditional data center hubs to beef up their tax breaks. Both Virginia and Texas, already two of the most active data center markets, have recently passed or are working on bills to attract more of these high-tech facilities.

    Individual states recognize that data centers are good business, and they’re an increasingly important component of site searches for data center projects. Several states have customized incentives programs for data center operations, which focus on full or partial exemption of sales/use taxes on equipment, construction materials, and in some cases purchases of electricity & backup fuel.

    Virginia is for Data Center Lovers

    Just a year after revising their existing offerings, Virginia legislators are working on new incentives yet again. Barbara Comstock introduced House Bill 1699, which aims to promote and expand Virginia’s data center industry by providing tax incentives. The bill is intended to “create a separate classification, for purposes of permitting localities to set a lower personal property tax rate, on computer equipment and peripherals used in a datacenter.” It passed the House of Delegates 95-5 on February 5.

    Senate Bill 1133, an identical measure introduced by Republican Senator Ryan McDougle, passed in the House 94-4.

    “The data center industry is projected to grow by hundreds of millions of dollars in the coming years. This bill will help Virginia continue to be a leader in this 21st century marketplace,” Comstock said in a statement. “Data center jobs and investment are a critical element in diversifying Virginia’s technology economy and attracting private sector jobs as federal spending and procurement decreases.”

    Loudoun County recently announced that 3 million square feet of new data centers are under contract or construction. Loudoun is already home to 5 million square feet of server farms, and estimates that as much as 70 percent of the world’s Internet traffic passes through the county.

    Virginia passed its data center incentives in 2009, partly to remain competitive with North Carolina, which has been competing aggressively for major data center projects. The original incentive package had a 2017 sunset date for the expanded sales tax exemptions, but in early 2012 those incentives were updated to extend the tax benefits to 2020 – a key consideration for companies looking to build or lease data centers with an expected lifespan that will include several server refresh cycles.

    Comstock has continued to work with the Northern Virginia Technology Council and the tech community to build upon her bill passed last year. The updated incentives passed last year were cited as a key factor in a Capital One project, as well as being a factor in several large leases for DuPont Fabros Technology (DFT).

    Tax Incentives Bigger in Texas

    Texas has been a strong market in recent years, but apparently feels the need to play defense as other states enact generous exemptions and other incentives for data center owners and operators. There was a lot of talk of potential new tax incentives at the end of last year, and now there is a draft: HB 1223 was filed on Feb. 12.

    The bill would provide refunds to qualifying data centers on any taxes on the purchase of “tangible personal property,” including electricity, power and cooling equipment, backup generators, servers, storage devices, networking gear and software. The refund is available to data center owners, operators or tenants who create at least 20 jobs and spend at least $150 million in Texas on power and improvements to equipment installed at the data center over a four-year period after initial construction or refurbishing of the facility. Qualifying jobs must pay 120% of the county average weekly wage. Republican Harvey Hilderbran of District 53 is the bill’s primary sponsor.

    Tax incentives remain an integral part of site selection. The moves on the part of both Virginia and Texas suggest that these states can’t rest on their leadership laurels, but need to position themselves as attractive options compared to rival states.

  • NTT Communications Adds Enterprise Cloud Locations

    Singapore Serangoon Data Center is located in northeastern Singapore. The tier III+ data center offers co-location, cloud services, NTT Com’s global network services and other related services.

    NTT Communications’ Singapore Serangoon Data Center is one of three facilities in which the company is adding its enterprise cloud computing service offering. (Photo: NTT Communications)

    NTT Communications launched its Enterprise Cloud last year, and is hoping its initial successes translate globally. New locations were announced today, as the company made its cloud available worldwide through data centers in Asia, the United States, and Europe.

    NTT Communications’ Software-Defined Networking (SDN)-based Enterprise Cloud was initially launched via data centers in Japan and Hong Kong in June 2012. Today’s expansion adds Enterprise Cloud locations in Singapore, Virginia and California in the US, and England. NTT anticipates opening three more data centers in Australia, Malaysia and Thailand in March 2013.

    “NTT Communications’ Enterprise Cloud is a full-layer, self-manageable virtual private cloud that is now global, and growing to incorporate virtualized networks in eight countries and nine locations by March 2013,” said Motoo Tanaka, Senior Vice President of Cloud Services at NTT Communications.

    NTT noted it is seeing strong interest from global enterprises who view Enterprise Cloud as a flexible extension of their own data centers, enabling them to connect existing private networks to the cloud and gain additional cost-optimized and secure compute capacity.

    “NTT Com understands the enterprise client, their struggles, goals and needs,” said Tanaka. “Being truly enterprise class is what makes NTT Com the leading partner of choice for client cloud transformation through comprehensive cloud lifecycle services, and is what has led us to develop this real-world cloud, built on a foundation of advisory, migration, operational and management services.”

  • OnRamp Will Build Second Austin Data Center

    The interior of the OnRamp data center in Raleigh as it was preparing to open. The company is also building a new data center in Austin, Texas. (Photo: OnRamp).

    Data center operations company OnRamp announced it is building a 42,000 square foot data center in Austin which will open early in the fourth quarter of this year. This will be the second data center for the company in Austin. The announcement of OnRamp’s Austin II project comes just a week after the company announced the opening of a data center in the heart of Research Triangle Park in Raleigh, NC.

    OnRamp says the additional facility was necessitated by demand. “We are excited to open a second, enterprise-class Data Center in Austin,” said OnRamp CEO Lucas Braun. “We’re an Austin-based company, and a large percentage of our managed and cloud hosting and HIPAA compliant hosting services are delivered by our teams in Austin.” The facility is being designed for industry-leading levels of high density computing, with the capability of delivering upwards of 30kW per rack, contiguously. In addition, the SSAE 16, SOC I Type II, HIPAA and PCI Data Center will feature a separate high security area for HIPAA hosting. OnRamp’s Redundant Isolated Path Power Architecture delivers true 2N power to customers, from the utility to the rack.

    OnRamp is working with Square One Consultants to oversee the design, development and construction of the facility.

    OnRamp was founded as an ISP in 1994 in Austin, Texas. Its first colo customer came a year later, and its first managed server followed in 2000. It built its first data center with 2N power in 2003. Private cloud came in 2007, and an investment from Brown Robin Capital followed in 2009.

    The company offers colocation, cloud computing, high security hosting and disaster recovery services backed by what it calls Full7Layer support, meaning support across all seven layers of the network stack, all the way up to the application layer.

  • Cologix Opens Second Site at Dallas INFOMART

    The distinctive facade of the Dallas INFOMART, where colocation provider Cologix now operates two data centers totaling 40,000 square feet of space. (Photo: Cologix)

    Interconnection and colocation company Cologix has been rapidly expanding since it acquired its first data center at the Dallas INFOMART from NaviSite back in 2010. Now, after expanding across North America, the company comes full circle, launching its second data center at the INFOMART.

    Cologix today announced the successful commissioning and launch of the new facility at the INFOMART, also known as 1950 North Stemmons Freeway. The 12,000 square foot data center holds over 300 cabinets and is supported by 3.2 megawatts of power from three diverse substations. The new Dallas facility is Cologix’s 12th North American data center; its footprint includes key carrier hotel locations in Toronto, Montreal, Vancouver, Minneapolis and Dallas. Cologix bought its first data center at the INFOMART from NaviSite in late 2010 and first announced the addition of a second facility there in April 2012.

    The new data center includes hot aisle containment pods, modular power distribution units (PDUs) and in-row cooling technology, which collectively provide for rapid deployments and the ability to dynamically cool equipment specific to the needs of individual cabinets. “Our hot aisle containment and in-row cooling systems are unique in the Dallas market and provide for enhanced customer experience and efficiency,” said Rob DeVita, General Manager of Cologix Texas. “We are excited to introduce this technology to the Dallas community and look forward to supporting our customers’ growth.”

    Key Internet Data Hub

    Dallas-Fort Worth is the fourth largest metro market in the US and host to twenty Fortune 500 company headquarters.  Its central location and network density make it a primary Internet peering point and natural location for regional and national network nodes.

    “The continued rapid adoption of the cloud by all customer segments is dramatically elevating traffic and network performance requirements, which continues to heighten the value of colocation and interconnection options in the downtown INFOMART,” said Cologix CEO Grant van Rooyen.  “Cologix is focused on providing our customers network neutral access to broad connectivity options, represented today in this new Dallas inventory.”

    Customers at the new facility can directly interconnect with existing customers and more than 25 networks in the existing Cologix meet-me room (MMR), as well as access other tenants in the carrier hotel.

    The Dallas INFOMART is a 1.2 million square foot technology hub with tenants including SoftLayer, ViaWest and Equinix, as well as network providers including MCI, Allegiance Telecom and Level 3. The Infomart was built by Trammell Crow in 1985, and was initially envisioned as a hub for computer industry trade shows. The building’s glass facade was designed to be a replica of the Crystal Palace, built in London in 1851 as part of the first World’s Fair.

  • Amazon Launches Data Warehouse Service Redshift

    AWS announced a new data warehouse service, called Redshift.

    In a continued bid to gain enterprise market share for storage, Amazon Web Services (AWS) officially launched Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. The company announced the service late last year with a limited preview by invitation only. The service is now available in U.S. East (Northern Virginia), with plans to expand to other AWS Regions in the coming months.

    AWS built Redshift on technology licensed from ParAccel, in which Amazon is an investor.

    Impact on the Marketplace

    With Redshift, Amazon is taking on established offerings from Oracle, IBM and Teradata, and it’s challenging them on cost. At its re:Invent conference in November, AWS pitched its pay-as-you-go pricing, calculating that it would cost between $19,000 and $25,000 per terabyte per year at list prices to build and run a good-sized data warehouse on premises.

    Redshift is a good example of AWS working harder to provide an enterprise-friendly service. The Redshift service follows AWS Glacier, which provides low cost cold storage/archive storage with the tradeoff that archives aren’t available instantaneously. The company also recently unveiled high memory instances. It appears that Amazon is going hard after enterprise data on a variety of fronts, and with Redshift, it’s expanding into the big data marketplace.

    Ease of Use

    Users can manage Redshift from the AWS Management Console. It includes a variety of graphs and visualizations to monitor the status and performance of clusters, as well as the resources consumed by each query. Customers can resize clusters, add or remove nodes, change instance types, create snapshots and restore them to new clusters, all within the console in a couple of clicks.

    Redshift offers fast query performance when analyzing virtually any size data set, using the same SQL-based tools and business intelligence applications in use today. The company says it designed Redshift to be cost-effective, easy to use and flexible, and expects it to deliver 10 times the performance at one-tenth the cost of on-premises data warehouses. This is achieved through columnar data storage, advanced compression, and high-performance disk and network I/O.
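
    Because Redshift speaks the PostgreSQL wire protocol, existing SQL clients and drivers can generally point at a cluster endpoint and just work. Here is a minimal sketch using Python’s psycopg2 driver; the endpoint, credentials and table are hypothetical placeholders:

    ```python
    # Querying a Redshift cluster with a standard PostgreSQL driver.
    # Endpoint, credentials and table names are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com",
        port=5439,              # Redshift's default port
        dbname="analytics",
        user="awsuser",
        password="example-password",
    )

    with conn.cursor() as cur:
        cur.execute("""
            SELECT event_date, COUNT(*) AS events
            FROM clickstream
            GROUP BY event_date
            ORDER BY event_date DESC
            LIMIT 10;
        """)
        for row in cur.fetchall():
            print(row)

    conn.close()
    ```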

    Redshift integrates with a number of other AWS services, including S3 and Amazon DynamoDB. Customers can also use the AWS Data Pipeline to load data from Amazon RDS, Amazon Elastic MapReduce, and Amazon EC2 data sources.

    Users can start out small (in terms of data warehousing, a couple of hundred gigabytes) and scale up as needed.

    Pricing

    • High Storage Extra Large (15 GiB of RAM, 4.4 ECU, and 2 TB of locally attached compressed user data): $0.85 per hour

    • High Storage Eight Extra Large (120 GiB of RAM, 35 ECU, and 16 TB of locally attached user data): $6.80 per hour

    With either instance type, customers pay an effective price of $3,723 per terabyte per year for storage and processing. One-Year and Three-Year Reserved Instances are also available, pushing the annual cost per terabyte down to $2,190 and $999, respectively.
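
    The effective per-terabyte figure follows directly from the hourly rates and the storage on each node; a quick check of the on-demand math:

    ```python
    # On-demand Redshift cost per terabyte per year, derived from the listed rates.
    hours_per_year = 24 * 365  # 8,760

    # High Storage Extra Large: $0.85/hour with 2 TB of user data per node
    xl_per_tb = 0.85 * hours_per_year / 2
    print(f"XL node:  ${xl_per_tb:,.0f} per TB per year")   # ~$3,723

    # High Storage Eight Extra Large: $6.80/hour with 16 TB per node
    xl8_per_tb = 6.80 * hours_per_year / 16
    print(f"8XL node: ${xl8_per_tb:,.0f} per TB per year")  # ~$3,723
    ```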

    To keep up with Data Center Knowledge’s cloud computing coverage, check the Cloud Computing channel.

  • Rackspace Accelerates OpenStack Enterprise Push

    Rackspace announced OpenStack private cloud capabilities and partnerships with AMD, Brocade, Hortonworks and Arista Networks.

    Rackspace Hosting wants to make it easier to deploy and run clouds, and has been partnering with leading hardware and software providers to create three new Private Cloud Open Reference Architectures. Reference architectures and test criteria for OpenStack solutions help to ensure consistent performance, supportability and compatibility.

    “The cloud is a paradigm shift that affects IT operations and introduces an entirely new business model; therefore defining Open Reference Architectures is an essential step towards achieving cloud maturity,” wrote Paul Rad, Vice President of Private Cloud at Rackspace, in a blog post. The reference architectures are meant to ease enterprise OpenStack adoption.

    Along with the new reference architectures, the company has developed the Rackspace Private Cloud Certification Toolkit, which validates the functionality of an OpenStack private cloud so that a cloud operations team can be sure the cloud is operational and has all of the necessary components properly installed and configured. Some of the first partners certified include AMD, Brocade, Hortonworks and Arista Networks.

    Ensuring compatibility and interoperability means that customers using Rackspace Private Cloud Software with OpenStack can more easily implement a reliable and flexible private cloud solution.

    The three reference architectures are:

    • Mass-compute with external storage: A scalable-compute cloud architecture where data can be stored on external resilient volumes and exported over iSCSI.
    • Mass-compute: A scalable-compute cloud architecture for variable workloads where data resides directly on the compute nodes.
    • Distributed Object Storage: An architecture for an object storage cloud that stores critical data across multiple zones for resiliency.

    Here’s a look at some of the hardware and software providers included in the first round of certifications:

    AMD SeaMicro

    The AMD SeaMicro SM15000 server is certified for the Rackspace Private Cloud. Product certification for mass compute and object storage ensures that enterprise deployments of Rackspace Private Cloud on AMD’s SeaMicro SM15000 servers are tested and solid.

    AMD’s SeaMicro SM15000 system is a very high-density, energy-efficient server. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and up to five petabytes of storage with a 1.28 terabyte-per-second high-performance supercompute fabric, called Freedom Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simpler operational environment.

    The AMD SeaMicro SM15000 server has been certified for the following Rackspace Private Cloud reference architectures:

    • OpenStack Compute (“Nova in a Box”) scales horizontally and integrates with legacy systems and third-party technologies
    • OpenStack Object Store (“Swift in a Rack”) provides a massively scalable, redundant storage system.

    “The AMD SeaMicro SM 15000 system offers Rackspace Private Cloud customers unprecedented density, storage capacity and performance, bringing enterprises one step closer to running the cloud in their own data centers,” said Rackspace’s Rad.

    Brocade

    The Brocade VDX switch with VCS Fabric technology underwent the validation process for compatibility and interoperability and was given the thumbs up.

    “Ethernet fabric adoption has now reached critical mass and our enterprise and service provider customers are reaping the benefits of Brocade VCS Fabric technology as part of their cloud-based architectures,” said Jason Nolet, vice president of Data Center Networking, Brocade. “This certification from Rackspace is validation that the Brocade VDX switch family with VCS Fabric technology is perfectly designed to deliver the automation, reliability and agility expected by Rackspace and their customers.”

    A member of the OpenStack community since 2011, Brocade has embraced this open source cloud platform as part of its cloud architecture strategy and is optimizing its networking solutions for OpenStack.

    Hortonworks Data Platform

    Hortonworks, a leading contributor to Apache Hadoop, today announced that the Hortonworks Data Platform (HDP), an enterprise-ready, 100-percent open source platform powered by Apache Hadoop, has achieved certification for Rackspace Private Cloud.

    With HDP, data can be processed from applications that are hosted on Rackspace Private Cloud environments, allowing organizations to quickly and easily obtain additional business insights from this information. The provisioning, monitoring and management components of HDP are important enablers for the integration with the Rackspace Private Cloud, providing an easy path for getting data into and out of the cloud. HDP qualifies for the Rackspace Private Cloud Open Reference Architecture “Mass Compute with External Storage”.

    “The Hortonworks Data Platform is emerging as the de facto Apache Hadoop distribution for cloud providers, and the certification for Rackspace Private Cloud is another significant step in the enterprise viability of Hadoop,” said Herb Cunitz, president, Hortonworks. “Our commitment to the 100-percent open source model ensures that cloud providers will avoid any vendor lock-in when deploying HDP and Rackspace Private Cloud, and further extends the Apache Hadoop ecosystem to the private cloud, providing another method for exploring and enriching enterprise data with Hadoop.”

    Arista Networks

    Arista 7050 Series switches have achieved quality assurance and certification for Rackspace Private Cloud. The Arista 7050 Series enables wire-speed 10 GbE and 40 GbE switching, powered by Arista EOS (Extensible Operating System) for software-defined networking applications.

    “The combination of Arista 7050 Series switches with Rackspace Private Cloud Software provides enterprise IT professionals with a certified, next generation data center architecture that drives new levels of IT efficiency,” said Ed Chapman, vice president of Business Development, Arista Networks.

    The Arista 7050 Series switches provide the low latency, wire speed network performance required in an OpenStack-powered cloud, in form factors of up to 64 ports in a 1 RU chassis. In addition, the Arista 7050 Series provides industry leading power efficiency with typical power consumption of less than 2 watts/port with twinax copper cables, and less than 3 watts/port with SFP/QSFP lasers.

  • CiRBA Targets ‘Licensing Sprawl’ in Data Centers

    The rise of the virtual machine has added a layer of complexity to software licensing, a headache that is made worse in a cloudy world where virtualization decouples virtual machines from physical servers. Data center management software has helped data center operators optimize their use of server capacity. One provider believes it can now help customers save money by optimizing their spending on software licenses.

    CiRBA, a provider of capacity management software, has added a new software license control system that delivers optimal virtual machine (VM) placements for processor-based licensing models. The idea is that targeting virtual machine sprawl can reduce “licensing sprawl” as well. Like playing Tetris with an environment, CiRBA moves the blocks (VMs) around so that they’re optimally placed to make the best use of server capacity. It now offers similar optimization for software licenses, with an add-on module that targets capacity-based licensing models.

    “Licensing optimization is now becoming a capacity management challenge,” said Andrew Hillier, CTO of CiRBA. “By cleverly placing workloads on licensed servers in such a way that the overall footprint is minimized, license costs can be reduced by 40 to 70 percent. It is a showcase example of how the right analytics can save millions of dollars in unnecessary spend.”

    Reducing License Purchases and Renewals

    Through the Software License Control module, CiRBA optimizes the placement of licensed software on machines, which it says has saved customers an average of 55 percent on data center software licensing costs. The savings are realized through lower expenditures for renewals, deferral of new software license purchases, and reduced yearly maintenance. Savings can reach into the millions of dollars for expensive operating system, database and middleware platforms. “Database optimization analysis delivers 10x the savings (compared to OS) on maintenance alone,” said Hillier.

    “In the past, licensing has been more of a bean counting exercise,” said Hillier. “The shift to virtual and cloud has led to a much more dynamic picture. Now we can actively manage these environments, minimizing their footprints.”

    Through analytics, CiRBA conducts a “defrag” in which, for example, it can consolidate the Windows components onto the minimum safe footprint. “Within constraints, we’ll minimize the footprint,” said Hillier. “We’re not overdriving those hosts. Too many SQL servers and you’ll blow up the IO, so we limit that, as one example.”

    Aligning Licenses with Physical Servers

    The CiRBA Software License Control module optimizes VM placements in virtual and cloud infrastructure, reducing the number of processors/hosts requiring licenses. It also determines optimized VM placements to both maximize the density of licensed components on physical hosts and isolate these licensed VMs from those not requiring the licenses. It then contains the licensed VMs on the licensed physical servers.
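
    To make the idea concrete, here is a deliberately naive first-fit-decreasing sketch of packing licensed VMs onto as few hosts as possible. It is not CiRBA’s analytics, only an illustration of why denser placement shrinks the processor-licensed footprint:

    ```python
    # Conceptual sketch only: naive first-fit-decreasing packing of licensed VMs
    # onto as few hosts as possible. This is NOT CiRBA's algorithm; it simply
    # illustrates why denser placement reduces the number of licensed hosts.

    def place_licensed_vms(vm_cpu_demands, host_cpu_capacity):
        """Return hosts as [remaining_capacity, [vm_demands]] lists."""
        hosts = []
        for demand in sorted(vm_cpu_demands, reverse=True):
            for host in hosts:
                if host[0] >= demand:       # first host with room wins
                    host[0] -= demand
                    host[1].append(demand)
                    break
            else:                           # no host had room: open a new one
                hosts.append([host_cpu_capacity - demand, [demand]])
        return hosts

    # Assumed, illustrative numbers: ten licensed VMs on 16-core hosts.
    vms = [4, 2, 6, 3, 5, 2, 4, 3, 6, 2]
    hosts = place_licensed_vms(vms, host_cpu_capacity=16)

    print(f"Licensed hosts needed: {len(hosts)}")  # fewer hosts, fewer licenses
    for i, (free, placed) in enumerate(hosts, 1):
        print(f"  host {i}: VMs {placed}, {free} cores free")
    ```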

    Since virtual environments are dynamic and always changing, CiRBA also enables organizations to profile software licensing, configuration, policy and utilization requirements as new VMs come into an environment, routing these VMs to appropriately licensed physical servers, and reserving capacity for the new VMs through its Bookings Management System.

    This is essential when managing dynamic virtual and cloud environments, and also provides visibility into requirements to grow or modify license pools based on upcoming demand. Through this booking and reservation process, CiRBA ensures that density remains optimized by considering both the bookings and organic growth in the environment, and using this to forecast the impact on capacity and licensing.

    CiRBA is a transformation and control system built to optimize virtual and cloud infrastructure, driving up efficiency while driving down costs. It has been known in the market for its migration capabilities, moving machines from point A to point B: physical to virtual, migration to cloud, and data center consolidation. It optimizes density and increases utilization, “kind of like a hotel reservation system for virtual environments,” said Hillier. It is all policy-based.

    The service is available on a subscription basis.

  • Lyatiss Brings Application Defined Networking to the Cloud

    Lyatiss believes the cloud needs to evolve toward Application Defined Networking. The company took its first step toward that vision with the beta release of CloudWeaver, a tool that brings compute and network together and allows a customer to orchestrate it all from one place. CloudWeaver works atop Amazon Web Services, with support for additional services to come down the line.

    So what is CloudWeaver, and what is Application Defined Networking, for that matter? Lyatiss feels it has found something lacking in the market: cloud orchestration and visibility that reaches deep into the network. The network is often hidden to cloud infrastructure, particularly with public cloud, and latency issues are becoming critical for cloud customers. As developers build increasingly sophisticated and constantly changing applications, monitoring and troubleshooting an infrastructure based on a “black box network” becomes a challenge. Application Defined Networking looks to connect the network to the cloud, through an intuitive orchestration of cloud networking.

    Addressing the Limitations of SDN

    Software Defined Networking (SDN) defines a new architecture for the “network machine” and is the first step in trying to address these issues. But Lyatiss feels that SDN solutions only partially fulfill the need for agile networking and don’t solve the problems that application administrators experience in the cloud. SDN is not sufficient to address the predictability and performance issues encountered in current cloud applications, nor can it meet the need for infrastructure differentiation and software control in this scenario.

    Instead, Lyatiss asserts, the cloud industry will have to evolve to Application Defined Networking (ADN) that orchestrates application flows. ADN accelerates and streamlines the movement of data throughout the entire virtual infrastructure of each application. ADN gives the application the ability to adapt its networking environment using APIs, so that application delivery and performance across public and private cloud networks are optimized, without compromising on application portability or security.

    “It’s a top down approach, and the goal is to serve the application,” said Pascale Vicat-Blanc, CEO of Lyatiss. “In the cloud, there are immediate needs that have to be covered. ADN answers these needs – this gives customers greater visibility of the network. This is related to SLAs in the cloud.”

    Focus on Infrastructure, Not the Application

    An application developer can’t see the latency constraints in the cloud, said Vicat-Blanc. With Lyatiss, “you can build the communication graph of the application. You also get the knowledge of how: the application performance management.” The difference between Application Performance Management and Application Defined Networking is that you are instrumenting the infrastructure rather than the application. CloudWeaver shares the communication patterns and correlates them.
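    To make the notion of a communication graph concrete, here is a minimal sketch that builds one from observed flow records and flags outlier links; the (source, destination, latency) record format is an assumption for illustration, not CloudWeaver’s data model.

    ```python
    # Minimal sketch: build an application communication graph from flow records
    # and flag outlier links. The record format is an assumption for illustration.
    from collections import defaultdict

    flows = [("web-1", "app-1", 3.2), ("web-2", "app-1", 2.8),
             ("app-1", "db-1", 11.5), ("app-1", "cache-1", 0.9)]

    graph = defaultdict(list)             # node -> list of (peer, latency_ms)
    for src, dst, latency in flows:
        graph[src].append((dst, latency))

    # Correlate: flag edges whose latency stands out against the average.
    mean_latency = sum(lat for _, _, lat in flows) / len(flows)
    hot_edges = [(src, dst, lat) for src, peers in graph.items()
                 for dst, lat in peers if lat > 2 * mean_latency]
    print(hot_edges)                      # -> [('app-1', 'db-1', 11.5)]
    ```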

    The company started in France and has a history of helping highly demanding applications get the most out of networks. Now headquartered in Silicon Valley, the team consists of more than 20 people and is expected to double this year.

    How CloudWeaver Works

    First, CloudWeaver runs a discovery. A customer provides his or her Amazon credentials, and in a few minutes a network topology map is created. “This is the first time a customer has seen a visual representation of what their AWS infrastructure looks like,” said Ankit Agarwal, VP of product at Lyatiss.
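    For a sense of what such a discovery step involves, here is a hedged sketch that enumerates EC2 instances with boto3 and groups them by availability zone and subnet; it illustrates the general idea only and is not CloudWeaver’s discovery engine.

    ```python
    # Illustrative only: enumerate EC2 instances and group them into a simple
    # topology (availability zone -> subnet). Not CloudWeaver's actual discovery.
    from collections import defaultdict
    import boto3

    def discover_topology(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)    # uses the caller's AWS credentials
        topology = defaultdict(lambda: defaultdict(list))
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    az = inst["Placement"]["AvailabilityZone"]
                    subnet = inst.get("SubnetId", "ec2-classic")
                    topology[az][subnet].append(inst["InstanceId"])
        return topology

    # for az, subnets in discover_topology().items():
    #     print(az, dict(subnets))
    ```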

    A flow map is then created: a heat map showing the network path and points of latency. For public cloud, this is deep insight into the network. CloudWeaver analyzes the data, including latency, throughput and usage information, and performs bottleneck detection. It shows usage statistics, allows the setting of user-defined thresholds, and orchestrates it all, moving, changing and cloning nodes as needed. It has built-in network services for integration, reconfiguring network resources. “It’s a very tight coupling of the network and the instances,” said Agarwal. “I need to be able to perform solo actions and network actions. I need to perform network actions like creating a load balancer. Later, you’ll be able to secure other network services for security, VPN.”

    A CloudWeaver customer can see the region and availability zones. Clicking on a node brings up its information; for more detail, a customer can SSH into the node. CloudWeaver can help fix the following:

    • Unpredictable latencies and uneven user experiences. 
    • Potentially disastrous cascading effects from bottlenecks, failures and cloud outages.
    • Poor performance and lack of isolation from a large number of users.
    • Wasted capacity resulting from over-provisioned infrastructure.
    • Increased networking complexity, making it almost unmanageable.
    • Spiraling costs from lack of visibility in resource interactions.

    Several use cases were provided. The first is in staging, allowing customers to anticipate high loads. Customers are using CloudWeaver to test specific configurations and topologies to identify the weakest points of their infrastructure and remediate them. “It’s difficult to plan in advance without seeing and imitating these situations,” said Agarwal.

    The other use is for planning and programming. The company gave the example of a social gaming customer that provides a platform for games and may need to scale from 500,000 to 2 million users for a few hours. It’s critical for that company to anticipate activity and get the right trend metrics in real time. It uses them to access the capacity it needs, and likes the graphical user interface.

    CloudWeaver can also be used as a staging environment to demonstrate potential scenarios to management. This helps save time and money, because the user doesn’t have to oversize the infrastructure. CloudWeaver is delivered in a SaaS model, and has an intuitive GUI. It has a RESTful API and SDN-SDK for easy integration.

    “This is a very cohesive orchestration that brings compute and network together, and you do it all from one tool,” said Agarwal. “Customers need awareness and intelligence of application flow,” said Vicat-Blanc.

  • Citing Sandy, Data Center Firm Adds Disaster Recovery Space


    A look inside the Xand data center in Westchester County, New York. Xand is adding 35,000 square feet of disaster recovery space. (Photo: Xand).

    Data center and managed services provider Xand has added new disaster recovery space, citing increased demand in the wake of Hurricane Sandy as the impetus for the move. The 35,000 square feet of new dedicated and shared workspace will be added to its facilities in Marlborough, Mass. and Hawthorne, N.Y. The new space will be available to clients as private suites, shared workspace, and combinations of both, and is scheduled to be online during the first quarter of 2013.

    “During Hurricane Sandy, we enabled a multitude of businesses and organizations to rapidly rebound from the devastating effects of the storm,” said Yatish Mishra, Xand’s President and CTO. “With the addition of over 35,000 square feet of brand new workspace, we’re excited to offer even more disaster recovery options for our existing customers, while continuing to welcome new clients who are reassessing their current business continuity needs.”

    1,000 Customers On-Site During Hurricane

    Xand has six facilities, which provided a home to nearly 1,000 customer staff members during Hurricane Sandy, with over half of those people at the Valley Forge, Pennsylvania facility. The company says it was also able to accommodate several new clients whose previous providers couldn’t meet their Recovery Time Objectives (RTOs).

    Xand maintained 100 percent uptime in all of its data center facilities during the storm. All six of Xand’s locations are well-positioned, within 30 to 90 miles of New York, Boston and Philadelphia.

    “Everyone in the Northeast has heeded the lessons of Hurricane Sandy and Xand is stepping up to the table as well,” said Mishra. “We’ve seen a strong market demand for disaster recovery solutions, housed not just in secure facilities, but also on our cloud platform. The timing is perfect right now for more expansion to meet this increased customer demand.”

    Xand’s disaster recovery offerings include replication, online vaulting and electronic backups and cloud recovery, as well as traditional tape, physical asset recovery, shared and dedicated workspace and other custom designed applications. In 2011, Xand was acquired by private equity firm ABRY Partners to grow its footprint and market presence.

    Nicole Henderson at theWHIR notes a few other post-Hurricane Sandy disaster recovery moves: Nirvanix offered a “Disaster Avoidance Program” to move customers away from its data center in New Jersey before the storm hit, while Advanced Internet Technologies offered a free month of hosting to any company that experienced downtime or data loss with any other web host caused by Hurricane Sandy.

  • Host.net Acquired by Canadian Private Equity NOVACAP

    Canadian private equity firm NOVACAP announced the acquisition of US-based Host.net, a network infrastructure services provider that focuses on colocation, cloud computing, virtualization and storage. The deal signifies continued engagement by private equity firms in the internet infrastructure space, NOVACAP’s entrance into the U.S. market, and it is the 100th transaction for DH Capital, which served as exclusive financial advisor to Host.net. Terms of the deal were not disclosed.

    Host.net is based in South Florida, and has over 700 customers ranging from small companies to large multinationals. Private equity backing will allow Host.net to continue its growth and expansion. “The transaction will allow Host.net to continue to lead the industry, to grow to the next stage by adding more data centers, and to expand their portfolio of services to remain the industry’s benchmark,” said Ted Mocarski, Senior Advisor at NOVACAP, which has $790 million in assets under management, and is one of Canada’s leading private equity firms.

    NOVACAP will leverage experience acquired in the Canadian market as it expands its investment strategy to the United States. “It is part of a plan to increase our presence in the United States, and this agreement shows that we are a serious player in the market,” said Pascal Tremblay, President of NOVACAP Technologies. “Our expansion will benefit our portfolio of companies, and will help us find additional opportunities throughout North America and in international markets.”

    “We are delighted to be working with NOVACAP, whose insight and investment will definitely benefit Host.net’s growth strategy,” said Jeffrey Davis, Co-Founder & Chief Executive Officer of Host.net, in the release. “Their experience in the industry will bring focus to the strategic steps needed in order to grow the company.”

    Host.net’s management team will remain in place and will be supported by newly appointed board members. “With this new acquisition, NOVACAP wishes to show its confidence in Mr. Davis’ team,” said Tremblay.

    Host.net was founded in 1996 and is headquartered in Boca Raton, FL. The company operates multiple enterprise-class data centers connected to an extensive fiber-optic backbone delivering Internet, MPLS and layer 2 communications using a wide array of last-mile options. It serves customers in most major metropolitan regions of North America as well as portions of Europe.