Author: Jason Verge

  • Digital Realty Trust Acquires Six Austin Data Centers

    Digital Realty Trust acquired a six-building portfolio in Austin, Texas for $31.9 million. The portfolio is located at the MetCenter Business Park and consists of 337,000 square feet of operating data centers and flex space.

The overall portfolio is 90 percent leased to a variety of companies, with two of the six buildings, totaling approximately 100,000 net rentable square feet, fully leased to three tenants.

    This move continues Digital’s recent streak of acquiring fully leased properties for income, plus adds some new development space for upside.

“The acquisition of this portfolio achieves several key objectives for us,” said Scott Peterson, Chief Acquisitions Officer at Digital Realty. “It expands our existing data center footprint in the Austin market, while providing stable cash flow immediately at an attractive going-in cap rate. Second, it provides a near-term opportunity to add value by leasing existing vacant space. And third, it offers the option to convert a portion of the property to data center space over the longer term as leases expire.”

    The six buildings are located adjacent to Digital Realty’s data center at 7500 Metro Center Drive, approximately five miles southeast of the Austin central business district and near Austin-Bergstrom International Airport.

    “This acquisition adds inventory to a market where we have already seen substantial absorption at our existing facilities, as well as strong demand from enterprise customers,” said Michael Foust, Chief Executive Officer at Digital Realty. “It is a continuation of our strategy of growing a world class data center portfolio in markets where our customers want to be located.”

    Where are the customers wanting to be located? Just this year, the company has acquired facilities in Dallas, Phoenix, Minnesota, Toronto, Paris, and Sydney. All of these acquisitions were in markets where the company says it saw very positive demand, and most of these properties were fully leased at the time of acquisition. It also launched its own DCIM, announced it was seeking Tier certification for 20 facilities, and is building dark fiber connecting its key internet gateways. It upgraded its POD infrastructure to offer up to 1.2 megawatts in each data hall, up from 1.125. The company has kept busy.

  • Texas Data Center Tax Incentives on the Horizon

    Major data center tax incentives are expected to become law in Texas, and the state’s data center operators are excited about it. The Lone Star State is already a central data center and hosting hub, but the tax breaks are designed to bring in new technology firms and capital investment. In recent years, other states have begun offering aggressive tax cuts, and the new incentives will help Texas defend its prominent position.

    The Texas Legislature has approved legislation to create a temporary sales tax exemption intended to attract major data center projects to Texas. This exemption will apply to data centers with single occupants only. It now comes down to Governor Rick Perry, who has 20 days to sign the bill, veto it, or allow it to become law without his signature.

    “We are a very competitive state for large data centers in terms of our economy, geography and climate, but we haven’t been in terms of our tax code,” said Rep. Harvey Hilderbran, R-Kerrville, who authored House Bill 1223 creating the incentives.

    Data Center Providers Express Their Support

    Stream Data Centers, which has facilities in Dallas, Houston and San Antonio, and companies like CAPSTAR, which owns a large property in the Dallas market, are notably excited.

    “Large data center users consider these economic incentives as part of their total cost analysis, and Texas was being priced out of the market,” said John Patterson, who works with CAPSTAR Real Estate Advisors and has a partnership in the 3000 Skyline Dallas data center building. “These tax incentives benefit the technology firms, but they also have a huge economic impact on the state and the communities where they are located.”

    If signed, the sales and use tax exemption takes effect September 1, 2013, and applies to personal property that is necessary and essential to operate a qualified data center, including electricity; an electrical system; a cooling system; an emergency generator; hardware or a distributed mainframe computer or server; a data storage device; network connectivity equipment; a rack, cabinet, and raised floor system; a peripheral component or system; software; and any other equipment or system necessary to operate qualified property, including a fixture and a component part of any qualified property.

    Senate Adds Its Two Cents

    House Bill 1223 passed last Friday, and House Ways and Means Committee Chairman Harvey Hilderbran told the House he was willing to accept two important changes that the Senate made to the bill:

    • The Senate increased the minimum capital investment required to qualify for the exemption. Projects involving a capital investment of $200 million qualify for a ten-year sales tax exemption; those with a $250 million investment qualify for a 15-year exemption.
    • The Senate defined a qualifying data center to require that the center be used by a single tenant.

    How to Qualify

    The significant thresholds are:

    • $200 million investment in infrastructure, hardware, software, electricity, etc. over the first five years following certification
    • 20 new full-time jobs paying 120 percent of the county’s average pay rate
    • A building of 100,000 square feet or larger
    • Dollars spent on or after September 1, 2013

    “This bill is significant because it isn’t location-specific,” said Todd Kercheval, a government affairs consultant who has lobbied for data center tax incentives. “These incentives are truly going to benefit enterprise companies, technology firms, and many communities all across Texas.”

    To qualify, the data center owner, operator, or occupants must jointly or independently meet the required capital investment and create 20 permanent full-time jobs that pay at least 120 percent of the average weekly wage of the given county, with these jobs maintained for five years.

    No investments made or jobs created prior to September 1, 2013 will count. The Comptroller of Public Accounts must pre-approve and will issue registration numbers.
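    Taken together, the bill’s thresholds read like a simple rules check. Here is a minimal sketch of that logic in Python (the function and its parameter names are hypothetical, not part of the legislation’s text; it only encodes the figures reported above):

    ```python
    from datetime import date

    # Effective date reported for the exemption (illustrative sketch only).
    EFFECTIVE_DATE = date(2013, 9, 1)

    def exemption_years(investment_usd, new_jobs, wage_ratio, building_sqft, spend_date):
        """Return the sales tax exemption length in years, or 0 if unqualified.

        investment_usd -- capital investment over the first five years after certification
        new_jobs       -- permanent full-time jobs created (maintained for five years)
        wage_ratio     -- pay relative to the county's average weekly wage (1.2 = 120%)
        building_sqft  -- size of the data center building
        spend_date     -- when the dollars were spent (must be on/after Sept. 1, 2013)
        """
        if spend_date < EFFECTIVE_DATE:
            return 0  # investments made before the effective date don't count
        if new_jobs < 20 or wage_ratio < 1.2 or building_sqft < 100_000:
            return 0
        if investment_usd >= 250_000_000:
            return 15  # Senate tier: $250M investment -> 15-year exemption
        if investment_usd >= 200_000_000:
            return 10  # base tier: $200M investment -> 10-year exemption
        return 0
    ```

    Under this sketch, a $250 million project with 25 qualifying jobs in a 120,000 square foot building would receive the full 15-year exemption, while the same project funded before September 1, 2013 would receive nothing.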

    What the exemption doesn’t apply to: office equipment or supplies, maintenance or janitorial supplies or equipment, equipment or supplies used primarily in sales activities or transportation activities, property on which the purchaser has received or has a pending application for an enterprise zone refund, personal property not otherwise exempted that becomes an improvement to real property, equipment rented or leased for a year or less, or a taxable service that is performed on property exempted by the bill.

    “Data centers benefit communities by increasing real estate values in areas that are often underutilized,” Hilderbran said. “Higher real estate values mean more tax dollars for schools.”

  • Cisco Buys JouleX for $107 Million for Energy Management SaaS

    Cisco is acquiring JouleX, a provider of enterprise IT energy management for data center assets, for approximately $107 million. The acquisition enhances Cisco’s software-as-a-service offerings with energy management, fitting particularly well with Cisco EnergyWise. The acquisition is expected to be completed in the fourth quarter of this year.

    JouleX developed an agentless system that detects devices on IP networks and tracks their power use. The combined solution will provide customers with a way to measure, monitor and manage energy usage for network and IT systems without the need for device-side agents, hardware meters or network configurations. It uses the capabilities of the network to gain visibility into and control of energy usage across global IT environments. JouleX previously raised $17 million back in 2011.

    “JouleX’s technology will strengthen Cisco Services’ Smart Offerings and complements our evolving services strategy,” said Faiyaz Shahpurwala, senior vice president, Industry Solutions at Cisco. “It extends our ‘Internet of Things’ capabilities and is a good alignment to Cisco EnergyWise. With network-enabled devices increasing exponentially, our partners and customers are asking for this solution today to operationalize their energy management capabilities in the network and reduce cost. JouleX’s cloud-enabled, agent-less architecture will allow our partners and customers to quickly deploy this solution at scale in addressing their IT energy management needs.”

    JouleX employees will be integrated into the Connected Energy Solutions team within Cisco’s Industry Solutions Group, reporting to David Goddard, vp and general manager. Cisco will pay approximately $107 million in cash and retention-based incentives in exchange for all shares of JouleX.

    JouleX is headquartered in Atlanta with offices in Shanghai, Tokyo, Paris, Munich and Kassel, Germany, and throughout the United States.

    There is a big focus on network and IT energy efficiency, with enterprises seeking solutions to control energy consumption across infrastructure. JouleX gives Cisco some deeper intelligence around energy consumption, in a SaaS-based, agentless way, boosting its capabilities. The company is privately held with capital investments from Target Partners, TechOperators, Sigma Partners, Flybridge Capital Partners and Intel Capital.

  • The eBay Dashboard Shows Company Performance

    The eBay dashboard launched in March.

    With its Digital Service Efficiency (DSE) tool, eBay revealed just how truly efficient its infrastructure is across a variety of metrics. The company has now hit its first full quarter of public data that measures how the company performed against its own goals.

    In March, eBay launched its Digital Service Efficiency (DSE) methodology – a “miles per gallon” equivalent that displayed infrastructure effectiveness in real-time across four key business priorities: performance, cost, environmental impact and revenue.

    Among Q1 highlights, the company raised the number of transactions per kilowatt-hour and exceeded its cost-per-transaction goal. Despite increasing the number of servers powering eBay.com, power consumption rose only 16 percent (2.69 MW), which the company attributes to the efficiency of new servers.

    There were also improvements across performance, cost, and environmental impact. In terms of environmental impact, a big boost came from the company’s Salt Lake City data center’s solar array, increasing the company’s clean energy use. While the solar array is small, it increased owned clean energy powering eBay.com by 0.17 percent, and there’s a fuel cell installation expected to come online this summer.

    The company’s goals were:

    •   Increase transactions per kWh by 10 percent (transactions per kWh increased 18 percent year over year)

    •   Reduce cost per transaction by 10 percent (cost per transaction decreased 23 percent in Q1 alone, exceeding the initial goal)

    •  Reduce carbon per transaction by 10 percent (carbon per transaction showed a net decrease of 7 percent; this was the only metric the company didn’t blow past). The Utah Bloom fuel cell installation going live this summer is expected to contribute significantly to the 10 percent carbon reduction goal set for the year. The company is confident it will remain on track to hit this goal, recognizing that infrastructure is dynamic and changes.

    Some of these positive trends were driven by newer, smarter features on the site including feed technology rolled out last fall. The feed technology attempts to personalize the shopping experience by showing auctions that might be of interest based on a user’s history.

    Also important to note is that the company fine-tuned the methodology. It now only looks at server pools that receive external web traffic so it can measure “buy” and “sell” traffic, calling all other server pools not receiving external web traffic “shared.”

    The company said it was making $337 million per megawatt last time around, but this metric has been fine-tuned to measure revenue per megawatt-hour, representing total consumption per quarter and year rather than quarterly averages.

    The auction giant had 52,075 servers then, a figure that is up to 54,011 servers now. It was consuming 18 megawatts of power to support 112.3 million active users; now it consumes 19.08 MW for 116.2 million active users. Next quarter will paint a clearer picture of revenue per user, as on a quarter-over-quarter basis it seems to have dropped $1, from $15 to $14. For last year, the company showed revenue of $54 per user and $117,000 per server.
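    As a back-of-the-envelope check on the figures above (this is my arithmetic on the quoted numbers, not eBay’s published DSE methodology):

    ```python
    # Quarter figures quoted above for eBay.com infrastructure.
    power_mw = 19.08        # total power consumption, megawatts
    active_users = 116.2e6  # active users
    servers = 54_011        # server count

    # Convert megawatts to watts and divide across users and servers.
    watts_per_user = power_mw * 1e6 / active_users
    watts_per_server = power_mw * 1e6 / servers

    print(f"{watts_per_user:.2f} W per active user")  # prints "0.16 W per active user"
    print(f"{watts_per_server:.0f} W per server")     # prints "353 W per server"
    ```

    In other words, each active user costs eBay roughly a sixth of a watt of continuous draw, and the average server draws about 350 watts, figures that make the per-transaction efficiency goals above easier to interpret.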

    DSE provides a vivid example of the productivity of data center infrastructure, which typically has construction costs of $5 million to $10 million per megawatt for large users like eBay. All the information is publicly available at dse.ebay.com.

  • What About Dell’s $1 Billion Cloud Buildout?

    Dell’s use of modular data centers – like this unit deployed for eBay – was a key part of its plans to expand its data center network to host its own public cloud offering, an initiative which was discontinued last week. (Photo: eBay)

    Dell has abandoned its plans to offer its own public cloud, shifting to a partner-focused cloud model instead. Most of the chatter has been around what this means for OpenStack. But what does this mean for Dell’s plan to invest $1 billion in data centers?

    Back in 2011, Dell announced its plans to spend $1 billion on data centers to deliver its public cloud products. The company planned to build 10 data centers in 24 months, an aggressive plan by all accounts. However, with Dell dropping its direct public cloud offering, much of the infrastructure that would have populated these data centers has now disappeared.

    Has Dell held strong to its data center buildout plans? If not, has that plan significantly changed since dropping public cloud? If the company has or will invest this money in infrastructure, what will it be used for? Data Center Knowledge reached out to Dell to ask how its shift in public cloud plans will impact its planned data center expansion, but the company hasn’t responded.

    What We Know: Deployments in Quincy and Slough

    Few public announcements have been made about Dell’s data center expansion. Here’s a look at what we know.

    In late 2011, a UK data center in Slough entered production. It wasn’t a massive amount of space, consisting of 5,000 square feet divided into two sections – a raised-floor area featuring rows of cabinets using hot aisle containment and in-row cooling units, and a second section with IT capacity deployed in Dell Modular Data Centers. The Slough facility, developed by retrofitting an existing structure, was designed to meet Tier III reliability standards with a power usage effectiveness (PUE) of about 1.5.

    One of the U.S. data centers announced and built was in Quincy, Washington where Dell purchased 80 acres of land and filed plans to build a 350,000 square foot data center. This was to be a key component of a global data center expansion to support the company’s push into cloud computing services. The first phase of the project was unveiled in February 2012, featuring 40,000 square feet of data center space.

    Last year a Dell executive outlined plans to build 20 data centers in the Asia Pacific region, commencing with one in India. In 2011, CEO Michael Dell stated that the company would build a data center in Australia.

    Dell’s Shift in Strategy

    The plan was to target its cloud offerings to all three tiers of the cloud market, including Infrastructure as a Service (IaaS) offerings for both compute and storage, Platform as a Service (PaaS) for application development and deployment, and a SaaS-level Virtual Desktop as a Service offering atop Microsoft’s Hyper-V virtualization solution.

    A few weeks ago, the company acquired Enstratius, which greatly deepened its capabilities in cloud management. Dropping public cloud makes sense considering the company’s growing play in the enterprise and on the platform level. There’s a lot of competition in public cloud – AWS, Google, Microsoft, Rackspace and OpenStack all come to mind – as well as new players joining the fray every day (VMware is a recent example). By offering its own public cloud, Dell threatened to cannibalize its channel somewhat, though the company was positioning its offering as complementary to partners. Dropping these plans means there are no longer any potential conflicts of interest, and Dell can supply those partners in a completely complementary way. But it raises the question – what about the infrastructure?

    The company’s shift to using modular data centers means it might not have had to make the same capital commitments to cloud as it once pledged. Dell does have some initiatives, such as Workstations, that it hopes will fill up data center space. The company still believes in the growth of the public cloud; it just isn’t supplying it directly out of its own data centers anymore.

    “Many Dell customers plan to expand their use of public cloud, but in order to truly reap the benefits, they want a choice of providers, flexibility and interoperability across platforms and models, the ability to compare cloud economics and workload performance, and a cohesive way to manage all of it,” said Nnamdi Orakwue, vice president, Dell Cloud, after dropping the public offering. “The partner approach offers increased value to Dell’s customers, channel partners and shareholders, as part of our comprehensive cloud strategy to deliver market-leading, end-to-end cloud solutions.”

    It’s a sound strategy. However, the earmarked billion dollars now leaves us asking – what happens with the infrastructure?

  • United States Remains the Best Place to Build a Data Center

    Compared to other nations around the globe, the safest and lowest-risk place to locate a data center is the United States, and there’s an abundance of potential in several secondary and maturing markets, according to a report from Cushman & Wakefield, hurleypalmerflatt and Source8.

    The United States has the lowest risks likely to affect the operation of data center facilities, the group determined in its evaluation of the 30 most important global markets. Additionally, tenant activity is showing renewed vigor in 2013, with the San Francisco Bay Area and Northern Virginia highlighted. The report, titled “Data Center Risk Index 2013 Edition,” was released last week.

    The United Kingdom held second position. The nation’s high scores for international internet bandwidth and ease of doing business helped maintain its place above all other locations surveyed in Europe. Sweden, with its cool climate and stable power grid, took third place this year, jumping from the eighth spot last year.

    The aim of the report is to help companies make informed investment decisions about where to locate data centers, as well as to develop strategies to mitigate anticipated risk. Factors such as energy and labor costs, internet connectivity, ease of doing business, natural disaster potential, and political instability are all taken into consideration and weighed to reflect different risk levels.

    What Variables Were Considered?

    The United States has long been at the top of the list as the best-served country in terms of Information and Communication Technology (ICT) infrastructure and general connectivity, according to the report.

    “In the United States, factors such as robust internet bandwidth capacity and connectivity and stable power costs contribute to its top ranking,” said Jeff West, Director of Cushman & Wakefield’s Data Center Research in the Americas. “Throughout the Americas, secondary and maturing markets hold an abundance of potential. Despite ranking last on the Index, Brazil’s dynamic economy and strong demand ahead of the World Cup and Olympic Games is fueling a swell of new submarine fiber-optic cable and infrastructure construction, while Canada’s solid mix of strong market fundamentals and low risk is sure to continue to attract investment from the U.S. and Europe.”

    The rapid adoption of technology and its impact on data center real estate shows no signs of slowing and underlying market fundamentals will continue to trend positive, according to the report. The U.S. construction pipeline continues to be robust with most established data center markets seeing variable levels of new supply as demand moves from the sidelines into decision making and outsourcing becomes more popular.

    Other Countries to Note

    Canada, which is sometimes called “America’s Hat,” (I’m originally from Canada so I can say that) took fifth place. The Greater Toronto Area makes up the majority of the Canadian market, but all metros remain strong in terms of demand.

    Other interesting areas of the world include South America, and most notably Brazil, its major data center market, as well as the Nordics and the appeal of the hydroelectricity they can offer.

    In Asia, Hong Kong held the least amount of risk at sixth place. With natural disasters and a shaky economy, Japan dropped the most out of all countries surveyed, from 20th to 26th.

  • Fortress Buys arvato to Boost Modular Capabilities

    Fortress International Group consults on data center design, which is rapidly evolving, particularly in regards to modular architecture. This is part of the impetus behind the data center consulting and engineering specialist’s acquisition of the data center integration services business from arvato digital services for $1.5 million.

    This acquisition expands Fortress’ capabilities so it can perform a full range of services both to the rack, and inside the rack. The acquisition is expected to generate over $10 million of annualized revenue and be accretive to Fortress International Group’s results in the second half of 2013.

    “We are very excited to expand our data center services offerings to include integration services of IT equipment,” said Anthony Angelini, CEO of Fortress, which is perhaps best known for its Total Site Solutions brand. “This acquisition is strategically significant and will be financially accretive to our business.  Strategically, the acquired integration business is an ideal complement to our current offerings in both the traditional and modular data center markets.  As the company evolves, our ability to perform a full range of services for data center customers both to the rack, and now inside the rack, will enhance the value proposition that our customers gain by trusting their data center requirements to us.”

    About arvato

    arvato provides custom rack layout design and configuration for large enterprise IT solutions consisting of large banks of computer servers, digital information storage and networking equipment, with custom cabling, power and cooling within data center racks and mobile or containerized data centers.

    The business also includes testing and deployment, including onsite installation and network set-up of completed data center racks and mobile or containerized data centers. Additionally, the business provides configuration services including the configuration of IT equipment, which consists of loading applications or systems software, customizing memory or storage capacities, adding peripherals, and testing (including hardware power-up testing, diagnostics and software boot testing).

    “Modular data center services represent an important and growing business for us, and, with this acquisition, we now offer an unmatched set of capabilities to the overall data center market and particularly the modular data center space,” said Angelini. “The transaction represents a key step along our strategic roadmap, and we are very excited about the team of people joining us as a result of the acquisition. We anticipate a number of synergies in both business development and cost savings as we integrate our sales teams, existing customers and management teams. The transaction further provides an enhanced end to end solution to both existing and potential customers.”

    Fortress (FIGI) also said this week that it had secured a new credit facility through Bridge Bank to support the company’s growth strategy and provide financial flexibility. The facility provides a line of credit up to $6 million over the next two years.

  • ViaWest to Build In (Figuratively) Hot Minneapolis Market

    A look at one of the power rooms inside the ViaWest Lone Mountain data center, a Tier IV in Las Vegas. ViaWest will build a similar Tier IV facility in Minneapolis. (Photo: ViaWest)

    Minnesota has a chilly climate, but the data center industry is warming to its charms. Colocation provider ViaWest is going to build a Tier IV data center in Minneapolis, a market that has been heating up as of late. ViaWest evaluated several key markets before acquiring 28 acres of land, existing buildings and infrastructure in a Minneapolis suburb. The company expects to start construction of its 150,000 square foot facility in time to start accepting customers by the first quarter of 2014.

    ViaWest said the expansion is a response to a strong appetite for its services. “Demand within our hybrid niche has continued to increase, and we have identified the Minneapolis region as a market in which our local approach to meeting customer needs with tailored solutions will be well-served,” said Nancy Phillips, President and CEO, ViaWest. “Following the ViaWest blueprint, we plan to plant our flag in Minnesota, become a part of the community by hiring local technology talent and elevate business growth throughout the region.”

    ViaWest built out the first multi-tenant Tier IV facility, in Las Vegas. Achieving Tier IV design certification in Minneapolis will look good to the region’s healthcare, financial and government sectors in particular.

    Lots of Recent Activity in Minneapolis Market

    There’s been a lot of activity in Minneapolis and surrounding suburbs as of late – it’s a key emerging market. Cologix is growing there, having purchased the Minnesota Gateway, located in the carrier hotel at 511 11th Avenue South; the deal gave the company 20,000 square feet in the most connected building in Minnesota. The 511 Building is a 270,000 square foot building adjacent to the Metrodome. DataBank acquired VeriSpace back in March, expanding beyond its primary Dallas footprint. Compass Datacenters received a 50 percent property tax abatement for a planned 89,000 square foot facility in Shakopee, Minn., and Digital Realty Trust acquired a fully leased facility in Eagan, Minn. in April as a sale-leaseback.

  • Cloudscaling Raises $10 Million for OpenStack Solutions

    While OpenStack is once again in the spotlight following Dell’s exit from public cloud, in the background, more venture bets are being placed on OpenStack. The latest is a new $10 million, Series B round for Cloudscaling. The company provides an OpenStack-powered cloud infrastructure system. The funding comes from Trinity Ventures and two new big name tech investors: network equipment maker Juniper Networks (through its Junos Innovation fund), and storage specialist Seagate.

    “This financing round caps a tremendous year of momentum for the company,” said Michael Grant, CEO of Cloudscaling. “That momentum affirms the voice of the market, clearly stating that customers want more than OpenStack. They want an on-premise, OpenStack-based private or public cloud turnkey system solution that delivers architectural and behavioral fidelity with major public clouds like Amazon Web Services. Our Open Cloud System product delivers on that need to enable hybrid cloud application deployments that span private and public cloud services.”

    In 2013 Cloudscaling has secured key customer wins with LivingSocial, EVault, Ubisoft and DataFort, launched a channel partner program, and announced support for OpenStack Grizzly in the third generation of its OCS technology, Open Cloud System 2.5.

    Partnership With Juniper

    When Cloudscaling announced Open Cloud System 2.5 in April, it was also the first step of its partnership with Juniper through the integration of Juniper’s virtual network control (VNC) technology, JunosV Contrail, into Open Cloud System (OCS). OCS is a turnkey, OpenStack-powered cloud infrastructure system for enterprises, SaaS providers and cloud service providers. Juniper liked what it saw.

    “Juniper Networks and Cloudscaling share a vision of how cloud infrastructure should be built and operated to support a new generation of cloud-aware workloads,” said Jeff Lipton, VP of venture and strategic investments at Juniper.  “Collaborating to integrate our technology into OCS was just the first step. We are excited to continue our work with Cloudscaling and support the company as a strategic investor.”

    Cloudscaling and Juniper plan to continue delivering innovative, joint networking solutions that are open and standards-based to support elastic cloud services and a new generation of enterprise workloads.

    Cloudscaling also joined the Seagate Cloud Builder Alliance Partner program in April. Seagate liked what it saw.

    “Seagate and Cloudscaling are working together on innovative solutions of jointly-optimized cloud systems supported by Seagate products,” said Rocky Pimentel, EVP and chief sales and marketing officer, Seagate. “We are pleased to deepen our relationship with them as an equity investor and be part of a collaborative effort to define and promote open source standards for cloud computing.”

    Cloudscaling and Seagate are focused on the development of optimized storage solutions for OpenStack-powered cloud infrastructure.

    Seagate and Juniper join Trinity Ventures, which so far seems happy with Cloudscaling’s prospects. “Cloudscaling has executed on a vision of elastic cloud infrastructure as a turnkey solution that many agree with but few have delivered,” said Dan Scholnick, general partner, Trinity Ventures. “The team has gained new customers and partners at an accelerating pace, highlighting their success at tapping an emerging, growing need among enterprise, SaaS and service provider segments.”

  • EdgeCast Launches Dedicated CDN for eCommerce

    CDN provider EdgeCast has built a completely separate, dedicated network to serve eCommerce, dubbed EdgeCast Transact. It’s PCI compliant, and it incorporates device detection and dozens of commerce-specific optimizations. This “share nothing” approach to network architecture is unique in the content delivery industry, and it might start a trend of offering exclusive footprints to specific sets of customers.

    “Nowhere do speed and availability matter more than in eCommerce,” said Ted Middleton, EdgeCast VP of product management. “The Internet’s top retailers told us they wanted a discrete, secure, global network that they didn’t have to share with other types of content, so we spent the past year building exactly that.”

    EdgeCast designed, tested and built the new network in major metropolitan centers around the world over the course of a year. It’s based on the same proven methods EdgeCast has been using for the past six years on its content delivery network.

    The new solution offers an optimized communication path to serve content and handle transactions regardless of conditions on the broader internet or on EdgeCast’s other CDN networks. eCommerce customers completely avoid competing for resources with other customers in different segments.

    Security being a major concern, the network is architected with robust redundancy and failover, elastic provisioning for holiday-type traffic spikes, and credit card detection and removal algorithms.

    Performance optimizations include secure pre-establishment of sessions between origin and end user. Mobile device detection and front-end optimization are built in. Optimization is executed directly on edge servers for the best possible performance.

    EdgeCast also aligns the network’s operating policies with eCommerce business cycles, conducting code freezes during the busiest shopping times to ensure 100 percent availability and stability.

  • Microsoft Launches Azure in China Via 21Vianet Group


    Microsoft is the first major U.S. provider to launch a public cloud in China. Windows Azure is rolling out in China through partner 21Vianet Group, a large carrier-neutral internet data services provider. Windows Azure service in China will be available on June 6.

    Microsoft CEO Steve Ballmer attended an event for the launch with 21Vianet CEO Josh Chen, US Ambassador to China Gary Locke and Shanghai Governor Jiang Liang. Also in attendance were CEOs from several of the platform’s initial and potential customers.

    This is a big development for Microsoft, and huge news for 21Vianet. In November 2012, Microsoft, 21Vianet and the Shanghai Municipal Government announced a strategic partnership agreement in which Microsoft licensed the technology know-how and rights to operate and provide Office 365 and Windows Azure services in China to 21Vianet.

    “21Vianet will act as an operation entity for Azure, hosting the service in its data centers and handling the customer relationship,” said 21Vianet’s CFO, Shang Hsiao. “We support the infrastructure too. That’s one of the reasons Microsoft selected 21Vianet – we specialize in China internet infrastructure. We’re considered the biggest Internet data center services provider in China.

    “In China at this moment, we don’t have open cloud services,” Hsiao continued. “This will be the first cloud partner outside of China to serve cloud customers. It’s very important.”

    21Vianet already has several customers lined up for the service. Named in the press release are Pactera; RenRen Inc; PPTV, a leading online video company in China; Kingdee International Software; and QOROS Auto Co., an independent international car company. Many of these names will be unfamiliar to Western audiences, but that is precisely why this announcement is huge: China is a massive market whose potential hasn’t been tapped. Microsoft, through 21Vianet, is first in with an outside public cloud.

    “We are extremely excited to officially launch Microsoft Windows Azure services in China and believe 21Vianet will provide great contributions to the growth of cloud infrastructure and services throughout China,” said Chen, Chairman and CEO of 21Vianet. “Our cooperation further enhances 21Vianet’s capabilities in helping to develop China’s cloud infrastructure services and strengthening our core competency for customers.

    “As a cloud enabler, 21Vianet is pleased to offer Microsoft’s world-class cloud services for the first time to businesses in China,” Chen added. “By providing carrier-level services for better public cloud operations, including security and compliance, datacenter networking, maintenance, highly reliable engineering and customer services related to cloud operations, 21Vianet and Microsoft are committed to offering the best cloud services available throughout China.”

  • Microsoft Plans New Data Centers in Singapore, Australia

    Racks of servers housed inside the Microsoft data center in Dublin, Ireland. The company is planning new server farms in Singapore and Australia. (Photo: Microsoft)

    The Windows Azure Cloud is expanding its footprint in the Asia-Pacific Region. Microsoft is building a new data center in Singapore, with a facility expected online in 2014, the company has confirmed. No other details are available at this time, beyond the company confirming it is building a major project there.

    Meanwhile, Microsoft has announced plans for additional data centers in Australia, with new Azure sub-regions in New South Wales and Victoria. These sites will use a geo-redundant configuration, making it easier for customers to back up their data within Australia, meeting rules for “data sovereignty” in disaster recovery.

    Hot in Singapore

    Singapore is perhaps the hottest Asia-Pacific data center market, with all the major providers either building or seeking projects there over the last few years. Bottom line: Singapore is the primary hub for AsiaPac cloud going forward.

    There are many reasons why Microsoft would choose Singapore as a major data center hub. The location lets it extend delivery of its services to the Asia-Pacific market, where it’s currently seeing its highest growth rates. Office 365 and Azure are its major plays in the region going forward. All major analyst firms expect explosive growth in cloud services there over the next few years, and Microsoft diligently studies where its opportunities lie.

    Singapore is one of the region’s leading financial and business centers, with many customers looking to deploy critical business applications there. However, there has historically been a limited supply of enterprise-quality data center space, which has led most major providers to build and set up shop to address growing demand. Singapore is also a location where many global customers want a presence. According to the Singapore Economic Development Board, Singapore is currently home to approximately 50 percent of South East Asia’s data center capacity.

    A look at projects in Singapore over the years also gives an indication of the region’s data center boom.

    • First and foremost, Amazon added AWS Singapore, making its cloud available in the Asia-Pacific region. Microsoft needs Azure to compete in AsiaPac and can’t do so without a strong local presence.
    • Equinix is like Starbucks in that it does a ton of research into where it locates its data centers. Its continual investment and expansion in the region is one big indicator of Singapore’s growing prominence as a connectivity hub. It has announced a series of major projects there.
    • Digital Realty Trust purchased a Singapore data center in 2010, citing the Asia-Pacific region as the logical target for its next phase of growth. It has seen strong leasing in the region, signing blue-chip tenants such as IBM, Adobe, and SoftLayer.
    • In 2011, IBM opened a cloud data center in Singapore. While cloud services had been attractive, concerns about consistent service performance, driven by potential network latency and the location of data, had inhibited their uptake for critical workloads. That is why IBM chose a location in Singapore. The increased availability of enterprise-class cloud services will underpin the acceleration of cloud adoption in APEJ as the market shifts from the SMB sector to the large enterprise.
    • SoftLayer leases sizeable space from Digital Realty in Singapore. The hosting giant recently gave a few reasons why Singapore was so attractive.
    • Going even further up the stack, Salesforce.com located a data center there to accommodate strong adoption of Salesforce CRM and Force.com platform. “Asia-Pacific is our fastest growing market, and there has never been a better time for enterprise cloud computing,” said Marc Benioff, chairman and CEO, salesforce.com. “Our new Singapore data center represents continued investment in our global real-time infrastructure to accelerate customer success with cloud computing worldwide.”

    There are also Savvis, BT and NTT Communications, all of which have a strong foothold in Singapore and the surrounding region. IO announced a partnership to bring its modules to Singapore last week. T5 Data Centers, which has facilities in Atlanta, Dallas, and Los Angeles, is also looking seriously at Singapore for a wholesale play.

    Are we forgetting someone? Most likely. There’s a ton of activity in Singapore as it turns into the premier AsiaPac connectivity hub. Any cloud provider that wants to win market share in AsiaPac needs a presence in Singapore.

  • Uptime Will Certify 20 Facilities for Digital Realty Trust

    The Uptime Institute has a history of being focused on the enterprise data center, but service providers have begun embracing Uptime and getting their facilities certified with the group’s Tier system. Today Uptime scored a major win on this front, as Digital Realty Trust announced it will have 20 of its facilities certified by Uptime.

    Why is this a big deal and a big win for Uptime? Four years ago, Digital Realty might have been considered a critic of Uptime, with one of its executives publicly debating the merits of the Tier system as an industry benchmark. That’s clearly changed, as the multi-tenant universe has warmed to the value of tier certification. Digital Realty says that five of its projects have achieved Tier III certification: two in Sydney, two in Melbourne, and one data center in Trumbull, Connecticut.

    “As a global developer, operator and long-term owner of enterprise-quality data centers, we have long since designed our Turn-Key Flex solution to meet rigorous engineering and reliability standards,” said Jim Smith, chief technology officer for Digital Realty. “Working with Uptime Institute to obtain Tier III certifications for these new projects further demonstrates our commitment to meeting these high standards on behalf of our customers.”

    Tier Certification Making More Sense for Providers

    With enterprise users more frequently choosing OpEx-friendly colocation deals over in-house data centers, a multi-tenant facility with a tier certification provides these enterprises some peace of mind: it tells them that their provider meets rigorous uptime requirements. For multi-tenant data center providers, tier certification confirms that their facilities have an effective life beyond current IT requirements, meaning it’s more than just an attractive pitch to potential customers. The Uptime Institute is a knowledge base of information culled over the years from numerous data center operators, so certification is also a chance to leverage that collective knowledge.

    In addition to Digital Realty Trust, several service providers have jumped on certification, including ATAC, IO, ViaWest, Compass Datacenters, and CyrusOne.

    Uptime Institute created the standard Tier Classification System to evaluate data center infrastructure in terms of a business’ requirements for system availability. The Tier Classification System provides the data center industry with a consistent method to compare typically unique, customized facilities based on expected site infrastructure performance, or uptime. Furthermore, Tiers enable companies to align their data center infrastructure investment with business goals specific to growth and technology strategies.

  • Going Off The Grid: Delaware Data Center Will Generate its Own Power

    There’s a major project brewing in Delaware, with a group called The Data Centers LLC (TDC) planning a sizeable data center near Newark. TDC says it is planning to invest more than $1 billion in the project, with construction alone for the first two phases expected to be around $400 million. The group wants to construct approximately 900,000 square feet of space.

    The massive project will also feature a large on-site energy component. The facility will draw no electricity from the grid; instead, the plan is to sell power back to the grid.

    This means added redundancy will be built into the project. The plant consists of a proprietary configuration of natural gas turbines, steam turbines and gas engines, with two independent natural gas supply lines on site to provide the reliability to deliver uninterrupted, fault tolerant power to the data center.

    Operating as a Grid-Free Island

    “The patent-pending design combines best-in-class data center energy efficiencies with the efficiencies of on-site cogeneration and tri-generation plants that can operate as an island without relying on the electrical grid as a backup,” writes Bruce Myatt, CTO of The Data Centers, in a summary of the project. “That means that critical power generation with gas turbines, steam turbines, and adsorption chillers back up one another to power and cool the data center while excess power can be supplied to the grid to support demand response requirements. The facility will secure long-term gas contracts to keep operating costs low and competitive.”

    TDC has signed a lease with the University of Delaware to occupy a site on the STAR Campus and has lined up over half of the construction funding with investment bankers. The STAR campus is a 272-acre property purchased by the University of Delaware from Chrysler during its bankruptcy back in 2009. TDC will lease 43 acres, and will be the second tenant there, next door to Bloom Energy, which makes solid-oxide fuel cells.

    Three tenants have already agreed to occupy space at the TDC facility when the site is operational in late 2014, including the University of Delaware. Opportunities exist for additional tenants to reserve space in the first phase of the facility as well.

    Project Boosted by Infrastructure Grant

    According to TDC CEO Gene Kern, the site will employ approximately 370 full-time employees (FTEs) and is expected to attract “over 90 other workers from our tenants, vendors, consultants, and our tenants’ tenants.” The company has begun discussing its plans in recent weeks. Kern is a veteran IT infrastructure consultant and cofounder of WAKE Technology Services. The TDC team also includes President and COO Robert Krizman, previously a senior VP at Jones Lang LaSalle, and Myatt, who is familiar to many in the industry as a co-founder of the Critical Facilities Round Table.

    There’s a lot to like about Delaware, according to TDC. State officials have approved a $7.5 million infrastructure grant, with the usual caveats, including meeting certain conditions and documentation that state aid will be spent on infrastructure. State funds will help pay for bringing natural gas and water service to the site, with TDC planning to run two new, dedicated gas lines through Eastern Shore Natural Gas. Part of the $7.5 million grant will go towards building a new electrical substation near the building, which the city will own.

    There’s a symbiotic relationship forming here: TDC will bring jobs, strengthen the power infrastructure, making service more reliable in the southern part of Newark, and possibly lead to lower power costs for local residents. TDC will also lay down fiber, in addition to its secure lines, to help the university attract future tenants. Then there’s the taxes: the size of the project means TDC will pay a combined $20 million in property taxes to the city, New Castle County, and the Christina School District.

    With its high reliability design and managed services capabilities, TDC says the data center has the potential to be an ideal location for high-performance computing and cloud computing operations.

  • Quincy: Data Centers Bloom Where Beans Once Grew

    Some of the filtration tanks inside Microsoft’s water treatment plant at its data center in Quincy, Washington. The company will lease the facility to the city for $10 a year as part of a partnership to develop a more sustainable water supply in Quincy.

    QUINCY, Wash. – Beans once grew on the land where Microsoft’s data centers now stand. Quincy was a small farming community that has grown into a town. As you arrive in Quincy, you can see the changes brought by the arrival of a cluster of huge Internet data centers.

    Driving through town, something stands out: the fire station. Instead of a one-engine company, the town’s fire station looks state of the art, rivaling the best in the biggest cities. It’s one thing Microsoft and other providers have helped bring to Quincy through their investment in the community.

    Quincy’s motto is “Where Agriculture Meets Technology.” There are 200,000 acres of farmland surrounding Quincy, which is known for its rich soil and food processing plants. It’s also almost a perfect location for data centers. The Columbia River provides low-cost power. The land was dirt cheap when Microsoft and Yahoo first purchased property here. And a strange thing happened in this small community 20 years ago: the mayor decided to invest heavily in dark fiber. The citizens, by all accounts, thought this was crazy.

    The Economic Benefits of a Data Center Cluster

    Not any more. The combination of cheap power, cheap land and dark fiber created the perfect storm. Now, thanks to Microsoft and others like Yahoo locating here, property values are rising, and new houses and stores are everywhere. There is always new construction in town, which has grown from 5,400 residents in 2007 to more than 6,200 today.

    While just 35 to 50 people work on the Microsoft Quincy campus, the arrival of data centers has meant much more than just jobs. The town has benefited in a variety of ways, from a surge in construction to being able to get 100 Mbps internet connections for 20 bucks a month.

    After receiving $700,000 in sales taxes in 2005, Quincy’s tax revenue grew to $1.5 million in 2006 and nearly tripled to $4.3 million in 2007 due to data center construction by Microsoft and Yahoo. Those two Internet giants were followed by new data center projects from Intuit, Sabey Corp., Dell and Vantage Data Centers.

    On the Technology Frontier

    In the process, Quincy has become home to two of the world’s most advanced data centers. Both Microsoft and Yahoo have deployed cutting edge designs featuring pre-fabricated components and using fresh air for cooling, placing them among the most efficient facilities in the industry.

    The phased buildouts at Microsoft and Yahoo reflect the maturation of the data center industry. The first phase of Microsoft’s Quincy facility (known as Columbia 1 and 2) is a typical colocation facility. It has 36-inch raised floors, and the roofs were painted white to reflect heat and improve energy efficiency. A UPS room provides 20 minutes of capacity for the switchover to generators. The generators are always ready; Microsoft holds patents on how it pre-heats them. Outside, diesel fuel is contained in glass-lined tanks, with enough on site to operate for days and agreements in place for more. The generators are tested every month.

    With its latest phase, Microsoft has shifted to lightweight enclosures filled with servers, known as ITPACs. They are self-contained data centers, assembled in days, housed on concrete slabs and attached to a power “spine” supplying connections to the grid and the Internet.

    Down the street, Yahoo has seen a similar transformation. Its first phase, built in 2007, features a relatively traditional concrete-shell data center. Next to the building sit several new “computing coops,” prefabricated metal structures measuring about 120 feet long by 60 feet wide. Each of the coops has louvers built into the side to allow cool air to enter the computing area.

    Water Plant: A Model of Infrastructure Sharing

    Microsoft has also built a water processing plant for cooling, which showcases the relationship between the town and its data centers.  In a move that will save millions of gallons of potable water for the local community, Microsoft and Quincy have teamed to retool the city’s water treatment infrastructure. The multi-million dollar water treatment plant built by Microsoft to support its data center will be leased to the city for just $10 a year. The plant will be retrofitted and expanded to support the water reuse initiative, which will allow other nearby businesses and data centers to benefit.

    The relationship between the town and its data centers has not been without controversy. A tiff between Microsoft and the local utility over power usage quotas made the New York Times in 2012. The growing number of diesel generators providing emergency power for the data centers generated debate in 2010, when Microsoft applied to add more generators for the second phase of its campus. The Washington State Department of Ecology conducted an evaluation of the health risks from diesel engine exhaust particulates, and found that the Microsoft expansion, viewed in isolation, was not likely to impact public health. An independent board later supported that ruling.

    In an era when it’s not always easy to quantify the economic bottom line of data center development, Quincy has emerged as the most prominent example of the two rationales for incentives: that landing one major data center will attract others and form a “cluster,” and that the collective impact of the cluster will have an economic benefit for the community.

  • AlteredScale Opens Doors at 601 Polk in Chicago


    601 West Polk in Chicago is the home of a new data center for AlteredScale. The facility will be managed by Norland Managed Services.


    601 West Polk is alive and kicking. The 100-year-old structure just west of the Loop in Chicago has been through a lot over the years, including a previous owner’s passage through bankruptcy and several million dollars’ worth of renovations. AlteredScale, a provider of mission-critical data center solutions, announced this week that it has chosen Norland Managed Services to operate and maintain its data center at 601 West Polk.

    The historic building features 25,000 square feet of newly built raised-floor space, which AlteredScale says is the largest contiguous chunk of colocation space in downtown Chicago.

    Real estate pickings in downtown Chicago can be slim, and there’s a lot of history around this building. “Despite being one of North America’s most strategic data center markets, Chicago has suffered from a lack of capacity in the central business district,” said Kevin Francis, President of AlteredScale. “With the completion of phase one at 601 West Polk we are excited to serve the colocation needs of enterprises throughout Chicago and the Midwest. AlteredScale provides IT capacity in a convenient downtown location.”

    601 Polk is situated on the primary fiber-optic ring serving downtown Chicago, and near multiple power substations. In addition to the 25,000 square feet of data center space on the first floor, 601 Polk also includes 28,000 square feet of expansion space available for data center and office use.

    AlteredScale chose Norland Managed Services, Inc. to operate and maintain the critical facilities, and Chicago-based Tiburon Security Inc., which custom-built the security protocols and staffs guard personnel for the site.

    601 Polk’s Long History

    Built more than 100 years ago, the 601 Polk building was at one point a warehouse for the retailer Marshall Field’s and was later slated to be a “carrier hotel” for a telecommunications company. The building was purchased out of bankruptcy by Pi Data Holdings, LLC in 2011 for $10 million, a deal that allowed the previous owner to pay off creditors and drop a Chapter 11 bankruptcy case, according to Chicago Real Estate Daily. Several million was spent on renovation, leaving about 25,000 square feet of first-floor space for multi-tenant usage. Existing tenants of the building at the time included Comcast Corp., France Telecom and Kozy’s Cyclery, a bike shop.

    The 108,000-square-foot building was renovated extensively and turned from raw industrial space into modern, flexible data center space. It was a worthwhile endeavor, given the building’s location, access to power and connectivity.

  • CyrusOne Launches Internet Exchange Across Sites


    An aerial view of the new CyrusOne data center in Phoenix. The company today launched a national Internet exchange. (Photo: CyrusOne)

    Colocation provider CyrusOne continues building up its connectivity story, introducing its National Internet Exchange (IX), an on-net platform deployed across CyrusOne facilities in Texas and Arizona. The platform enables high-performance, low-cost data transfer and accessibility for customers, uniting 12 CyrusOne sites in Dallas, Houston, Austin, San Antonio, and Phoenix, with other locations coming online soon.

    CyrusOne first entered the interconnection market back in February 2012. Earlier this year, it launched a Texas IX, and now is focusing on a wider build-out of the exchange.

    “With the launch of our Texas IX earlier this year, and with the recent opening of our data center site in Phoenix, CyrusOne has completed the important first steps in building out the CyrusOne National IX,” said Josh Snowhorn, vice president and general manager of Interconnection for CyrusOne. “No matter what kind of scalability our customers choose, the National IX will deliver robust, national connectivity and enable content and ISP peering that brings the heart of the Internet closer to CyrusOne data centers. Enterprises benefit from core access to the most powerful networks in the world—opening the door to facility-to-facility interconnection at costs and performance metrics that were previously not available to them.”

    The interconnection play is a great business, complementing existing assets while not requiring a great deal of capital.  National IX delivers interconnection across states and between metro-enabled sites within the CyrusOne facility footprint and beyond.  CyrusOne customers have the ability to “mix-and-match” solutions to unite top-tier data centers within and across metro areas for both production and disaster recovery needs.

    The CyrusOne IX gives customers choice when building out capacity to transport large amounts of data. Customers may choose CyrusOne’s bandwidth marketplace, its Internet Exchange platform, or cross-connect to cloud services. Customers using the CyrusOne National IX have the ability to connect between CyrusOne facilities region-to-region at greatly reduced wholesale cost via terabit-class capacity. This capability can also provide cross-connection with any on-net third-party facility within metro regions for a minimal charge.

    “CyrusOne remains ahead of the multi-site deployment curve by continuously anticipating needs and changes while aggressively building and integrating data centers throughout the U.S.,” said Snowhorn. “We are excited about the opportunities associated with our new CyrusOne National IX.”

    While it’s billed as a national exchange, CyrusOne currently doesn’t have data centers in several major Internet markets, including Silicon Valley, northern Virginia and the greater New York market.

  • Latisys Launches Disaster Recovery as a Service

    Cloud service provider Latisys has launched Disaster Recovery as a Service (DRaaS), a tailored service requiring no capital investment by the customer. The portfolio of DRaaS solutions ranges from simple offsite data backup to near-instantaneous continuous availability (geoclustering) services.

    Latisys DRaaS services feature a consultative approach that begins with a critical business impact analysis, which distills the need for disaster recovery into three key concepts:

    • Recovery Point Objective (RPO) – how much data can you afford to lose?
    • Recovery Time Objective (RTO) – how soon do you need to have your systems up and running?
    • Cost of Downtime – how much does an hour of downtime actually cost?
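    The third concept lends itself to simple arithmetic. A minimal sketch of how a business impact analysis might put a dollar figure on an hour of downtime (the function and all inputs are hypothetical examples, not a Latisys formula):

```python
# Rough hourly downtime cost estimate (all inputs are hypothetical examples).
def downtime_cost_per_hour(hourly_revenue, employees_idled, loaded_hourly_wage):
    """Lost revenue plus the cost of idle staff for one hour of outage."""
    return hourly_revenue + employees_idled * loaded_hourly_wage

# Example: $12,000/hour in revenue, 40 idle employees at a $60/hour loaded cost.
print(downtime_cost_per_hour(12_000, 40, 60))  # 14400
```

    Real analyses also fold in contractual penalties and reputational damage, which are harder to quantify but often dominate.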

    “Our customers are increasingly asking for comprehensive DR solutions,” said Christian Teeft, VP of Engineering, Latisys. “In the past you had to maintain a completely redundant infrastructure at a cost of 2X, putting DR out of reach for most small-to-medium enterprises. Today we have a range of options that can be tailored to your specific RPO (Recovery Point Objective) and RTO (Recovery Time Objective), making DR more affordable, more powerful and more effective.”

    Based on the traditional concept of cold, warm, and hot site Disaster Recovery, Latisys’ DRaaS offerings include a wide range of options:

    • Data Protection (cold) – Managed data protection using EMC Avamar. These managed backup services ensure that data can be restored from disk – a good option if the cost of downtime is low, or if there is a contractual or regulatory obligation to fulfill.
    • Storage Replication (warm) – Several options for storage replication are available, including using the HP 3PAR StoreServ storage platform to replicate from the storage system to a remote location. This is ideal if RPO and RTO both need to be less than 12 hours.
    • Workload Replication with VMware (even warmer) – VMware Site Recovery Manager (SRM) maintains a scripted recovery plan to shut down specified virtual machines and automatically restore them to a recovery site. RPO is less than one hour and RTO is less than four hours.
    • Workload Replication with Microsoft (warmer still) – The Microsoft Hyper-V Replica function performs asynchronous replication over commercially-available broadband networks, enabling enterprises to perform manual failover in the event of a disaster. This form of hypervisor replication is a good option for smaller enterprises or those already invested in Microsoft technologies.
    • Geoclustering (hot) – When cost of downtime is very high, Latisys can design active/active geoclustered database replication and globally load-balanced sites with nearly instant failover. This is a good option for companies with thousands of transactions per hour.
    • The Latisys Cloud – Powered by the HP CloudSystem Matrix, Latisys’ enterprise cloud infrastructure provides flexible and cost-effective access to DR compute resources.
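    The cold-to-hot spectrum above can be sketched as a simple decision function. This is a hypothetical illustration, not Latisys code; the thresholds are drawn from the figures quoted in the bullets:

```python
# Hypothetical mapping of RPO/RTO targets (in hours) to the DR tiers above.
def choose_dr_tier(rpo_hours, rto_hours):
    if rpo_hours < 0.1 and rto_hours < 0.1:
        return "Geoclustering (hot)"                # near-instant failover
    if rpo_hours <= 1 and rto_hours <= 4:
        return "Workload Replication (VMware SRM)"  # RPO < 1 hr, RTO < 4 hrs
    if rpo_hours < 12 and rto_hours < 12:
        return "Storage Replication (warm)"         # both under 12 hours
    return "Data Protection (cold)"                 # managed backup to disk

print(choose_dr_tier(0.5, 3))   # Workload Replication (VMware SRM)
print(choose_dr_tier(24, 48))   # Data Protection (cold)
```

    In practice the choice also weighs the cost of downtime against the cost of the tier, which is exactly what the consultative analysis is for.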

    The portfolio features a range of options, balancing the continuous availability needs of the high end of the market, with simplicity and flexibility needed for a large part of the market looking to get a DR plan together.

    “Latisys has made the capital investment in hardware as well as the operating investment in people and processes to tailor a DR solution to specific customer requirements,” said Pete Stevenson, CEO of Latisys. “DR is an increasingly important component of any enterprise IT business strategy and Latisys is focused on making DR both affordable and available so resources are actually there when businesses need them most.”

  • Box is Beefing Up its Network for the Enterprise


    Box is one of those Cinderella technology stories. The cloud file-sharing and storage company started with just a couple of guys and has grown to serve more than 150,000 businesses, including 92 percent of the Fortune 500. Its vision: to let you share, manage, and access your content from anywhere.

    With half of its activity coming from outside the U.S. and 40 percent coming from mobile devices, its customers have tested that mission statement. The company has been boosting Accelerator, its global data transfer network, as well as adding several key certifications in a bid to keep its global enterprise customer base happy. Further infrastructure expansion lies ahead.

    “We really think we’re solving a problem for an end user,” said Jeff Queisser, VP of Technical Operations for Box. “But we’re also solving an IT concern: they can get all the auditing and compliance they need. This can be run in a very safe way.”

    Engineering for the Enterprise User

    The company is still seeing triple digit growth year over year, with over 150 percent growth last year. That has prompted the company to tailor its service in the best ways possible to serve the enterprise crowd, which requires fast uploads and often has geographically dispersed workloads and workforces.

    An astounding 50 percent of Box activity is happening outside of the U.S., either from international firms or U.S. enterprises with a global presence.

    “It’s a tipping point where it became a first class problem,” said Queisser. “Speed is absolutely critical. If you have sites all around the world, you need blazing fast download speeds.”

    Accelerator: Infrastructure Plus Intelligent Routing

    This enterprise customer need was the impetus behind Box Accelerator. The company has established upload endpoints in key global data center hubs featuring end-to-end encryption. The company has built patent-pending intelligent routing and optimization technology that delivers uploads 2.5 times faster on average. It has built a network that helps you get data into Box as fast as possible.
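The selection logic behind that routing can be sketched simply: probe each candidate upload path, including the direct one, and pick whichever is currently fastest. The endpoint names and RTT figures below are hypothetical stand-ins; Box’s actual patent-pending routing technology is proprietary.

```python
# Hypothetical candidate paths: the direct route plus regional accelerator
# nodes. This only illustrates the "probe, then pick the fastest path" idea.
ENDPOINTS = ["direct", "us-east", "eu-west", "ap-southeast"]

def probe_rtt(endpoint):
    """Stand-in for a real probe (e.g., a small timed HTTPS request).
    Returns a simulated round-trip time in seconds."""
    simulated = {"direct": 0.210, "us-east": 0.180,
                 "eu-west": 0.095, "ap-southeast": 0.240}
    return simulated[endpoint]

def pick_upload_path(endpoints):
    """Return the endpoint with the lowest measured RTT. Sometimes that is
    an accelerator node; sometimes the direct path wins."""
    rtts = {ep: probe_rtt(ep) for ep in endpoints}
    return min(rtts, key=rtts.get)

print(pick_upload_path(ENDPOINTS))  # eu-west, given the simulated probes
```

A continuous feedback loop of such measurements is what lets the network adapt as conditions change, rather than pinning a client to one fixed ingestion point.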

    “(With) most consumer operating systems, networking stacks are not optimized,” said Queisser. “There’s the bandwidth delay problem. TCP is an amazing protocol, but wasn’t made for these types of distances and this kind of bandwidth. It’s a testament to how amazing the protocol is that it’s done what it’s done.

    “What we’ve done is unique in that it’s optimizing inbound data,” Queisser added. “How do you ingest 100MB rather than send it out? The other piece is that we built these nodes, and a routing feedback loop technology. It determines the fastest way to get to Box. Sometimes it’s an accelerator node, but there are times when direct is the fastest path.”
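The bandwidth-delay problem Queisser mentions is easy to quantify: classic TCP can have at most one window of data in flight per round trip, so throughput is capped at window size divided by round-trip time, no matter how fat the link. A quick illustration (the figures are generic examples, not Box measurements):

```python
# TCP throughput ceiling: at most one window of data per round trip.
def max_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput in megabits per second."""
    return window_bytes * 8 / rtt_seconds / 1e6

# A 64 KB window (the classic un-scaled TCP maximum) over a 150 ms
# intercontinental round trip tops out near 3.5 Mbps:
print(max_throughput_mbps(64 * 1024, 0.150))  # ~3.5

# Filling a 1 Gbps path at that RTT needs a window of roughly
# bandwidth * delay = (1e9 / 8) * 0.150 bytes, about 18.75 MB:
bdp_bytes = 1e9 / 8 * 0.150
print(bdp_bytes / 1e6)  # ~18.75
```

This is why simply having a big pipe doesn’t help a distant uploader, and why terminating the connection at a nearby accelerator node, where the RTT is short, pays off.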

    Neustar conducted a performance analysis test and found that “Box had the lowest average upload time across all locations, about 66% faster than the closest competitor.”

    More Cloud-Based End Points, and an API in Box’s Future

    Accelerator started off as nine new points of infrastructure, but has been growing. It’s a small footprint that provides a big performance boost. The ultimate goal is to have cloud-based endpoints in all regions.

    The locations of the Box accelerators are also telling in that these are the areas where the company is seeing the most growth, and/or anticipating the most growth. If you see an endpoint pop up, it means a combination of latency mapping and customer growth gave birth to it. For example, one of the latest endpoints not yet on the official map is Dublin, an area that has seen its fair share of Internet infrastructure growth as a key European market.

    The future for the company is more Accelerator locations, and an upcoming API that will allow developers to leverage the work that Box has done for its own apps.

    API on the Way

    “We will have a beta for an API that lets any developer in the world use what we’ve built,” said Queisser. “If you’re trying to build something that’s as fast as possible, you don’t want to have to do all we had to do. Instead you get all of that with an API call.”

    The company is also planning to apply this technology to file downloads. Accelerator has added speed to enterprise uploads, but the company says it is looking to speed up downloads in similar fashion. “We need to do that in a way where it’s encrypted and it isn’t cached,” said Queisser.

    In terms of certifications, Box just added ISO 27001 this week and announced support for HIPAA last quarter. ISO 27001 is the international standard for information security management systems (ISMS), and the certification demonstrates how the policies and controls put in place at Box protect user data. In short, the standard prescribes requirements and best practices for systematically building, deploying, verifying and managing information, content and data. Box also holds SOC-1/SSAE 16 Type II and SOC-2 Type II reports.

  • QTS Now Has $575 Million Unsecured Credit Facility

    One of the power rooms inside the QTS Richmond Data Center. (Photo: QTS)

    With a new financing move, QTS (Quality Technology Services) has a substantial amount of money in its coffers to propel the company’s growth going forward. The data center and managed service provider announced its credit facility has increased to $575 million. Along with the $135 million increase, the credit facility was converted from a secured facility to an unsecured facility and its term extended through May 1, 2017.

    “QTS appreciates the confidence and trust of our lending partners. This announcement is a strong endorsement of our company’s success and growth and reinforces our partnerships with these financial institutions,” said Chad Williams, chief executive officer, QTS. “The credit facility allows us to further execute our development strategy. The financial flexibility enables us to focus on the continued expansion of our facilities in Atlanta, Richmond, Santa Clara and Sacramento and commence development of our recently acquired facility in Dallas.”

    QTS’s expansion is continuing across the United States. Data Center Knowledge recently covered QTS growth in major markets such as Dallas (QTS Enters Dallas Market, Buys 700,000 SF Facility), Sacramento (QTS Acquires Herakles to Expand into Sacramento), and a new project in Richmond (New QTS Lab Will Advance High-Security Federal Clouds).

    QTS engaged KeyBank National Association to serve as administrative agent and KeyBanc Capital Markets to serve as sole lead arranger for the amendment and extension. Eight additional financial institutions have joined KeyBanc Capital Markets as credit facility participants, including Bank of America Corp., Deutsche Bank Trust Company Americas, Goldman Sachs Bank USA, an affiliate of The Goldman Sachs Group, Inc., JPMorgan Chase & Co., Morgan Stanley Bank, Regions Bank and Stifel Bank & Trust.