Author: Julius Neudorfer

  • Data Center Design for a Mobile Environment

    This is the sixth and final article in a series on the DCK Executive Guide to Data Center Design.

    The trend toward the mobile user continues at an accelerating pace, and trends indicate that mobile applications and hardware (smartphones, tablets and even vehicle-based systems) will exceed the PC-based information client. This transformation cuts across many divergent business types, from social media and search to streaming entertainment media and even basic retail banking, such as using a smartphone to take a picture of a check to deposit it. While on the surface this would not appear to impact the design of the physical data center facility, long term it may well influence some of the IT architecture and hardware that resides in the data center. In fact, it is foreseeable that as wireless devices and networks require and carry more data than existing land-based networks, data centers may directly or indirectly need to integrate into the wireless network infrastructure. This may change the design landscape for data centers that are designed primarily to deliver services to mobile clients.

    The Bottom Line
    As a senior management executive, it is your ultimate responsibility to look down the road and set the course for your organization’s business direction and how it will shape the IT architectural roadmap. In addition to predicting the future, you also need to see around the next corner to foresee the fork in the road or avoid the cliff at the end of a wrong turn. In the information systems world, every year (or sometimes every month) seems to bring “The Next Big Thing.” And while previously most of those trends did not really have much impact on the physical design of the data center itself, over the past few years even that is no longer a certainty.

    We are still at the dawn of the 21st century, and one only needs to look at the technological developments that have occurred since 2000. The rate of change in information technology is accelerating, and it has become totally interwoven with nearly every aspect of daily life. What is commonplace in daily life today was barely imagined in the science fiction stories of the earlier part of the last century. IT hardware built only 5 years ago may still be operational, but in most cases is considered functionally or technically obsolete, as are many data centers that were built only 10 years ago but were designed based on historic IT requirements.

    It may seem easier to simply build on last year’s data center designs and avoid looking too far down the road. Nonetheless, today’s data center needs to be designed for the future, not the past. Do not let the fear of endless “scope or feature creep” keep you from being open to new design options. Yes, you will still need to draw a line somewhere, whether for budget or time constraints, but do not close your own mind, or limit the design team’s options, to new ideas without first understanding their advantages (as well as their potential pitfalls). Expansion and flexibility must be pre-designed in, not tacked on or retrofitted afterward as requirements change. The entire scale and scope of the demands and the delivery platforms have changed rapidly, and in some cases radical paradigm shifts in designs have occurred.

    The physical infrastructure still needs to be reliable and solidly built, since it is the critical underlying foundation necessary to the security and availability of the IT systems it contains. However, in today’s socially conscious world, long-term sustainability is no longer optional; environmental stewardship is now a requirement when planning any new project. Expect environmental sustainability issues to grow in importance in the immediate and foreseeable future.

    And so in closing, we hope that this Executive Series has provided you with the insight and strategies to ask the right questions, to challenge and provoke yourself as well as your IT architects and data center designers, and ultimately to enable you and them to make more informed decisions about what needs to be considered in the design of your next data center.

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Design Lifecycle: Leading Edge vs Current Practice

    This is the fifth article in a series on the DCK Executive Guide to Data Center Design.

    One of the design issues facing data center operators is the projected life cycle of the facility, and the ability of its infrastructure systems to be upgraded in order to feasibly and cost-effectively extend its long-term viability. The data center has been evolving at a much faster pace over the last several years, especially when compared to the pace of change over the previous 35 years. Designs and systems that were once considered leading edge can become the new normal: state-of-the-art, reliable, modern facilities with a good long life cycle, if they have been well planned and have solid technical underpinnings. One such example is the use of “fresh air free cooling,” which would have been seen as unthinkable less than 10 years ago but is becoming more common (see part 3, Energy Efficiency).

    The Software Defined Data Center

    IT systems have moved to virtualize every aspect of the IT landscape: the virtual server, storage and network. The next step is the virtualization of the data center itself; the term “Virtual Data Center” has begun to appear, along with “Software Defined Data Center.”

    While this sounds a bit fanciful, it does not mean that the physical walls and rows of racks of the data center will literally move or morph with the click of a mouse. Rather, it refers to the concept that all the key IT components (servers, storage and networking) will be fully virtualized and will transcend the underlying limitations of a physical data center. This does not mean the physical data center will cease to exist, but it does imply that new data centers must be ready and flexible enough to accommodate more changes in IT hardware designs and their new requirements. Virtualization has helped improve availability, resource allocation and effectiveness, yet in many cases the physical facility designs have not reflected the changes that can result from a fully virtualized IT architecture.

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Design for Efficient Operational and Energy Management

    This is the fourth article in a series on the DCK Executive Guide to Data Center Design.

    While most data centers have some basic form of Building Management System (BMS), any new design needs to include a highly granular network of sensors in virtually all of the systems and sub-systems of the facility power and cooling infrastructure. Older, general-purpose BMS systems typically had simple alarms to warn of equipment failures and perhaps a moderate amount of basic information on energy use. In recent years it became clear that, as data centers grew larger and used more complex systems, it was increasingly difficult for operators to keep track of all the critical infrastructure systems’ operational details, maintenance requirements and energy efficiency.

    A newer, more sophisticated class of systems designed specifically for data centers has been developed, known as Data Center Infrastructure Management (DCIM). These systems not only monitor energy usage and optimize efficiency; they can also improve operational reliability through early detection of operational anomalies. DCIM systems can also help track and schedule preventive maintenance and spot trends of recurring problems.

    While DCIM software varies widely based on vendor offerings and continues to evolve, the data center design should include pre-installed sensors, or provision for sensors, at all the critical systems and sub-systems. You will need to review a variety of DCIM vendor offerings to see which product’s features offer what you need. Regardless of your choice, make sure that you include the pre-installation of sensors in the design phase. Adding sensors after building the facility is costly and can be intrusive if the systems are already operational. Examples would be energy monitoring for every CRAC/CRAH and, if a chiller system is involved, chilled-water flow metering, as well as energy monitoring of all the individual components such as compressors, pumps and fans. This information will allow you to optimize cooling system operation and energy efficiency. Moreover, with real-time monitoring and maintenance management you can detect trends and anomalies and proactively address potential issues before they become critical problems.
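    As a minimal illustrative sketch (not the method of any particular DCIM product), the trend and anomaly detection described above can be as simple as comparing each cooling unit’s latest power reading against its own recent rolling baseline; the unit name, sampling interval and threshold below are assumptions for the example.

        from collections import deque
        from statistics import mean, stdev

        class PowerAnomalyDetector:
            """Flags a reading that deviates sharply from its rolling baseline."""
            def __init__(self, window=96, z_limit=3.0):
                self.readings = deque(maxlen=window)   # e.g. 96 x 15-min samples = 24 h
                self.z_limit = z_limit                 # beyond 3 sigma counts as anomalous

            def check(self, kw):
                anomalous = False
                if len(self.readings) >= 10:           # wait for a minimal baseline
                    mu, sigma = mean(self.readings), stdev(self.readings)
                    if sigma > 0 and abs(kw - mu) / sigma > self.z_limit:
                        anomalous = True
                self.readings.append(kw)
                return anomalous

        # Hypothetical usage: one detector per CRAH unit, fed every 15 minutes.
        crah_07 = PowerAnomalyDetector()
        for kw in (41.8, 42.1, 41.9, 42.3, 42.0, 41.7, 42.2, 41.9, 42.1, 42.0, 55.4):
            if crah_07.check(kw):
                print(f"CRAH-07 power anomaly: {kw} kW")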

    In addition, every point in the power distribution system to the IT equipment should have branch circuit monitoring pre-installed. This will allow you to gather real-time information on IT systems’ energy usage and correlate it with computing activity, providing better capacity planning and resource optimization, avoiding islands of stranded capacity, and improving facility-side provisioning of IT equipment deployments. No new data center should be designed or built without some form of DCIM system as part of the base infrastructure. While DCIM requires additional investment, it can ultimately lower the TCO by improving operational and energy efficiency, while reducing the number of data center and IT support staff.
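    To make “stranded capacity” concrete, here is a toy sketch of how branch-circuit readings can be compared against circuit capacity; the circuit names, breaker size, voltage and the 80% continuous-load derating are assumptions for illustration, not values from any specific installation.

        from math import sqrt

        def usable_kw(breaker_amps, volts_ll=208, derate=0.8):
            """Continuous capacity of a 3-phase branch circuit in kW (power factor ~1.0)."""
            return sqrt(3) * volts_ll * breaker_amps * derate / 1000.0

        # Hypothetical measured peak draw (kW) per circuit from branch monitoring:
        measured_peak = {"PDU1-BR04": 3.1, "PDU1-BR05": 7.9, "PDU2-BR11": 1.2}

        for circuit, peak in measured_peak.items():
            cap = usable_kw(breaker_amps=30)   # assume 30 A breakers throughout
            print(f"{circuit}: {cap - peak:.1f} kW stranded of {cap:.1f} kW usable")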

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Design for a Dynamic Environment

    This is the fourth article in a series on the DCK Executive Guide to Data Center Design.

    Historically, data center IT loads have been relatively stable and predictable when viewed over a 24-hour or weekly period. This is beginning to change, for several reasons. The first is virtualization, which originally allowed individual applications running on distributed and underutilized servers to be consolidated onto more centralized hardware resources such as blade servers, resulting in higher CPU and overall server utilization in less space. More advanced virtualization software offers energy management features which can monitor computing demands. Excess resource capacity, such as unutilized servers, can be put into low-power sleep modes or even powered off automatically when not needed, then powered up and brought back online as computing demands rise.
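    As a toy sketch of the logic such energy-management features imply (not any vendor’s actual algorithm; the cluster size, target utilization and demand curve are invented for illustration):

        from math import ceil

        def servers_needed(demand_fraction, total_servers, target_util=0.75):
            """How many servers must stay online so each runs near target utilization."""
            active = ceil(demand_fraction * total_servers / target_util)
            return min(max(active, 1), total_servers)

        # Hypothetical 24-hour demand curve, as a fraction of total cluster capacity:
        for hour, demand in ((3, 0.15), (10, 0.55), (14, 0.70), (20, 0.40)):
            n = servers_needed(demand, total_servers=100)
            print(f"{hour:02d}:00  keep {n} of 100 servers online, sleep the rest")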

    The second reason is that IT hardware itself has become dynamic while becoming more energy efficient. Instead of wasting a substantial amount of power when idle, servers now reduce power significantly at idle, yet draw more power (and generate more heat) when called upon to do work. The US EPA Energy Star program has required this for Energy Star certification of IT equipment such as servers since 2009, and is now in the process of finalizing the standards for storage and network equipment (see part 3, Energy Efficiency).

    The result is twofold: the total IT power and cooling load has begun to vary more over time, as the computing load increases and decreases over a 24-hour cycle. Moreover, IT heat loads have begun to shift from rack to rack and row to row in response to demand-driven computing activity, creating traveling hot spots across the data center.

    While the overall goal is to improve the energy efficiency of the IT systems, this has challenged many older, more traditional cooling systems which were not designed to handle these new, more dynamic conditions. When considering a new data center design, the IT team needs to work with the facility design team to provide more information on the type of hardware they plan to use, as well as on any energy management features of the virtualization software, which can impact the design of the cooling system.

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Designing for High Availability and System Failure

    This is the third article in a series on the DCK Executive Guide to Data Center Design.

    In the world of mission critical computing, the term “data center” and its implied and projected level of “availability” has always referred to the physical facility and its power and cooling infrastructure. With the advent of the “cloud,” what constitutes “availability” of a “data center” may be up for re-examination.

    Designing for failure and accepting equipment failure (facility or IT) as part of the operational scenario is imperative. As was discussed previously (see part 1, Build vs Buy), the ascending tier levels of power and cooling equipment redundancy can mitigate the impact of a facility-based hardware failure. However, the IT architects are responsible for the overall availability of the IT resources, by means of redundant servers, storage and networks, as well as the software to monitor, manage, re-allocate and re-direct applications and processes to other resources in the event of an IT systems failure.

    Traditionally there has been very little discussion or interaction between the IT architects and the data center facility designers regarding the ability of IT systems to handle failover. As more enterprise organizations begin to utilize public and private cloud resources, the amount of redundant IT resources located within any one physical data center may change, creating a logical redundancy shared among two or more sites. The ability to shift live computing loads across hardware and sites is not new and has been done many times in the past; server clustering technology, coupled with redundant replicated data storage arrays, has been available and successfully used for over 20 years. While not every application may yet fail over perfectly or seamlessly, we cannot underestimate the long-term importance of including the IT systems’ resiliency in the overall goal of availability when deciding what level of facility-based infrastructure redundancy is required to meet the desired level of overall system availability.

    The holistic approach of including an evaluation of the resiliency of the IT architecture in the “availability” design and calculations should be part and parcel of the overall business requirements when making decisions regarding the facility tier level, the number of physical data centers, as well as their geographic locations. This can potentially reduce costs and greatly increase overall “availability,” as well as business continuity and survivability during a crisis. Even basic decisions, such as how much fuel should be stored locally for generator back-up (i.e. 24 hours, 3 days, a week), need to be re-evaluated in light of recent events such as Super Storm Sandy, which devastated the general infrastructure in New York City and the surrounding areas (see part 4, Global Strategies).

    Ideally, this realistic re-assessment and analysis should be a catalyst for a sense of shared responsibility between the IT and Facilities departments, as well as for the re-evaluation of how data center “availability” is ultimately architected, defined and measured in the age of virtualization and cloud-based computing. These types of conversations and decisions must be motivated and made at the executive level of management.

    Designing an enterprise owner-operated data center is different from designing a co-lo, hosting or cloud data center. Also, the level of system redundancy does not have to exactly match the tier structure. Many sites have been designed with a higher level of electrical redundancy (i.e. 2N) while using an N+1 scheme for cooling systems. This is particularly true for sites that use individual CRAC units (which are autonomous), rather than a central chilled-water plant.

    Site Selection and Sustainable Energy Availability and Cost
    The design and site selection processes need to be intertwined. Many issues go into site selection, such as geographic stability and power availability, as well as climatic conditions, which will directly impact the type and design of the cooling system (see part 2, Total Cost of Ownership). Generally, the availability of sufficient power is near the top of the first critical checklist of site evaluation questions, along with the cost of energy. However, in our present era of social consciousness of sustainability issues, as well as watchdog organizations such as Greenpeace, the source of the power has also become a factor, based on the type of fuel used to generate the power, even if the data center itself is extremely energy efficient. Previously, those decisions were typically driven by the lowest cost of power. Some organizations have picked locations based on the ability to purchase commercial power that has some percentage of generation from a sustainable source. The Green Grid has defined the Green Energy Coefficient (GEC), a metric that quantifies the portion of a facility’s energy that comes from green sources.
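    Expressed informally, the GEC is a simple ratio:

        GEC = green energy consumed (kWh) / total energy consumed (kWh)

    For example (with invented numbers), a facility consuming 10,000 MWh per year, of which 2,500 MWh is purchased or generated from green sources, would have a GEC of 0.25.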

    In other cases, some high-profile organizations have built new leading edge data centers with on-site generation capacity such as fuel cells, solar and wind, to partially offset or minimize their use of less sustainable local utility generation fuel sources, such as coal. While this impacts the TCO economics, since it requires a larger upfront capital investment, there may be local and government tax or financial incentives available to offset the upfront costs. Nonetheless, while this option may not be practical for every data center, green energy awareness is increasing and should not be ignored.

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Data Center Designs for Evolving Hardware

    This is the second article in a series on the DCK Executive Guide to Data Center Design.

    Current designs for traditional enterprise data centers aren’t necessarily flexible enough for the myriad of newer devices coming their way. IT hardware is beginning to morph into different form factors, which may involve non-standard physical configurations, as well as unconventional cooling and power schemes. This does not necessarily mean that a traditional design will not work in the near future; however, long-term IT systems planning must be evaluated to understand the potential impact on the physical data center facility. Just as the widespread use of blade server technology and virtualization had a radical impact on the cooling systems of older data centers, other hardware and software developments may also begin to influence the physical design requirements and should not be overlooked.

    Server Architecture
    Unique business models can also have an impact on the IT systems and therefore should be considered when designing a new data center. For example, while the X86 architecture has been (and still is) the dominant general-purpose processor platform over the last two decades, major IT manufacturers have launched a new generation of highly scalable servers that utilize low-power processors originally designed for smartphones and tablets. One major vendor has just released a modular server system that it claims can pack over 2,000 low-power processors in a single rack, capable of delivering the same overall performance as 8 racks of its own X86-based servers for certain types of hyper-scale tasks such as web-server farms. Of course, this architecture may not be in your IT roadmap today; however, it may need to be considered as a possibility in the foreseeable future, and its potential impact should not be ignored.

    The IT equipment landscape is also changing, and manufacturers’ product lines are becoming more encompassing and fluid. Major competing vendors are crossing traditional boundaries, and the lines of separation between server, storage and network are becoming blended and blurred. This can potentially impact the layout and location of equipment (rather than the previous island-style layouts), as well as the interconnecting backbone structured cabling (migrating from copper to fiber to meet bandwidth demands). This needs to be considered and discussed by the facility and IT design teams.

    IT hardware physical forms are changing as well. In an effort to become more energy efficient while delivering ever higher computing performance at greater densities, even liquid-based cooling is becoming a mainstream possibility. As an example, while we have previously discussed broader operating temperatures and the greater use of “free cooling” in the most recent version of the ASHRAE TC 9.9 Expanded Thermal Guidelines (see part 3, Energy Efficiency), it also contained a set of standards for water-cooled IT equipment, defined as classes W1-W5.

    These water-based standards outline “cooling” systems that can harvest the waste heat from IT equipment and deliver hot water to be used to heat buildings. The Green Grid has also addressed this with the Energy Reuse Factor (ERF), a metric that identifies the portion of energy that is exported for reuse outside of the data center. This type of water-cooled IT hardware may not be a mainstream reality for every operation, but the mere fact that it was incorporated into the most recent ASHRAE guidelines and addressed by The Green Grid makes it a foreseeable scenario within the realm of possible options for hyper-scale or high performance computing, and it may eventually become more widespread in mainstream data centers.
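    Like the GEC, the ERF is informally a simple ratio, ranging from 0 (no reuse) toward 1.0:

        ERF = energy reused outside the data center (kWh) / total energy consumed by the data center (kWh)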

    Moreover, there is a trend toward open source hardware (such as Open Compute), similar in nature to open source software. One need simply look at the success of Linux, which was originally developed as an open source “freeware” alternative to UNIX (at the time the “Gold Standard” for enterprise-class organizations). Now Linux is considered a reliable mainstream operating system for mission critical applications. Similarly, Open Compute has publicly available hardware designs which can be used as the basis for a blueprint for open source computer hardware (see part 5, Custom Data Centers).

    Storage Architecture
    Storage demands have soared, in both the absolute total volume and the speed required to access and search through the data. Concurrent with that demand, Solid State Drives (SSDs) have come to the forefront as the preferred, but more expensive, first-level storage technology, due to their significantly higher read-write speeds, as well as their lower power use. Prices of SSDs have come down significantly, and they will soon become the dominant form of first-level storage, with slower spinning disks as the second level in the storage hierarchy. Moreover, SSDs are also able to operate over a much wider environmental envelope (32-140°F / 0-60°C) than traditional spinning hard disks. This will lower data center cooling requirements and needs to be considered as part of the long-term strategy in the data center design.

    Network Architecture
    Although the design of the IT network fabric architecture is not directly part of designing the data center facility, the nature of its design, and the related structured cabling and network equipment required by the IT end user of the facility, must be taken into account rather than arbitrarily assumed or surmised by the data center designer.

    Data transmission demands and speeds have continued to increase astronomically. Over the last 20 years we have gone from 4/16 Mbps Token Ring, to 10 and 100 Megabit and then 1 Gigabit Ethernet networks, and currently 10, 40 and 100 Gigabit networks are the state of the art for the data center “backbone.” Yet not long after we deploy each next generation of hardware with its increased performance, we always seem to be bandwidth constrained again. Even now the Institute of Electrical and Electronics Engineers (IEEE) is already working on a 400 Gigabit standard, with 1000 Gigabit not far behind. This affects the physical size and shape of network equipment and impacts its port density and the size and type of network cabling (shifting from copper to fiber), as well as the cable support systems deployed around the data center. This not only impacts the amount of space, power and cooling, it also requires more flexibility as networking standards and architectures evolve. In addition, as mentioned above, some vendors are merging and converging IT product lines, which can impact the traditional island-style layouts of servers, storage and networks, which in turn redefines the cable paths.

    One should also consider the significant changes that have occurred in the manner in which information is accessed, displayed and utilized by businesses and consumers on mobile devices such as tablets and smartphones. How do we architect a data center to meet this ever-increasing onslaught of end-user-driven demand for more storage, more computing performance and greater bandwidth, which in turn impacts the IT equipment and therefore, ultimately, the data center?

    When designing a new data center, perhaps one of the first questions to ask is: who is the end user? A traditional enterprise organization will want a solid design with a proven track record, most likely using standard racks and IT hardware from major manufacturers, but may still have its own unique set of custom requirements (see part 5, Custom Data Centers). A co-location facility, in contrast, will need to offer a more generic traditional design to serve a wide variety of clients. In sharp contrast to both, a large-scale Internet hosting or cloud services provider is more likely to have radically different requirements and may use custom-built servers housed in physically different custom racks (see part 5, Custom Data Centers). Even the need for the traditional raised floor has been called into question, and some new data centers have been built without one, locating IT cabinets directly on the slab.

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • DCK Executive Guide to Data Center Design

    Data center designs have varied widely, especially over the last several years. Originally, data center designs focused primarily on reliability and availability, with little regard to energy usage or long-term sustainability. As energy costs rose and operating efficiency gained more importance, a variety of technologies and designs that were previously not considered feasible came into use in data centers.

    This article series is the sixth in a series of guides, and reviews some of the major factors in the design and construction of a data center that have been discussed in previous editions, such as “Build vs Buy,” “Total Cost of Ownership” and “Energy Efficiency.”

    This series of articles will examine some of the classic design adaptations, as well as new data center designs adopted to meet the paradigm shift underway in the computing landscape, as businesses strive to meet the challenges driven by evolving IT architecture, social media and mobile computing, as well as higher availability for cloud services.

    This last guide in the Executive Series is not just about the “nuts and bolts” (uninterruptible power supplies, back-up generators, chiller plants, etc.) of the data center facility. While these are necessary to support the IT equipment and are important elements of the design process, they are the enablers, not the drivers, of the design process.

    If you are a denizen of the “C suite” (CEO, CIO, CTO, etc.), you are more likely to be concerned about meeting the challenges of globalized economies, along with the ever-increasing pressure to deliver competitive innovations and greater performance, while economically burdened to do so with fewer resources. You need to engage with customers by embracing mobile, social and big data analytics. While you will still need to rely on the experts who are intimately familiar with the “inner workings” that make up the data center, senior IT management must set the long-term logical information systems direction that in turn drives the physical design criteria.

    Here are the articles that will follow over the coming days:

    • Design for Evolving Hardware
    • Design for High Availability and System Failure
    • Design for a Dynamic Environment
    • Design for Efficient Operational and Energy Management
    • Design Lifecycle: Leading Edge vs Current Practice
    • Design for a Mobile Environment

    The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.

  • Business Justifications for a Custom Data Center

    This is the fifth article in a series on the DCK Executive Guide to Custom Data Centers.

    The business case for a custom data center is sometimes driven by special technical requirements, rather than a better ROI. In those cases, the IT architects should be asked to make a solid business justification for the long-term viability of the specialized IT hardware, as well as the competitive advantages of the leading edge technology.
    However, if some reasonable compromise and proper research is done prior to and during the design phase, it may be possible to deliver a custom-built data center with minimal impact on the long-term TCO (refer to part 2, “Total Cost of Ownership”). A case in point is the high density cooling requirements of blade servers or other high-powered, densely packed IT equipment, sometimes called “server farms,” used for large Internet-driven applications and also for “hyper-scale” computing. Older data centers that were not designed to handle these higher density cooling loads had difficulty properly cooling the equipment and were typically forced to “overcool” in an effort to deal with heat loads beyond their design capacities. This resulted in wasted energy, which raised energy-based operating costs.

    More recent designs can handle some level of higher densities, but are not necessarily able to cool in the most energy efficient manner (see part 3, “Energy Efficiency”). A custom data center specially designed for high density systems can effectively handle the high cooling loads and also maximize energy efficiency. However, before committing to a custom, extremely high density design, be aware that if it is not utilized at or near its maximum capacity, the projected CapEx and OpEx numbers will not be realistic. This also relates to the question of facility ownership vs leased facilities, which is also a significant part of the TCO portion of the business justification (please refer to part 1, “Build vs Buy”). There are also public-relations factors involved, such as sustainability related to the use of energy as well as the source of energy, as was discussed above in the Alternate and Sustainable Energy Sources section.

    The Bottom Line

    IT systems continue to evolve at an ever-accelerating rate, while the physical facility tries to allow for “future proofing” yet still keep a reasonable semblance of adherence to current industry standards. The leading data center designers are torn between anticipating perceived changes in IT equipment and providing the flexibility to support them, while continuing to provide support for mainstream systems. This flexibility should be taken into consideration when looking at a custom design.

    If you are a typical mid-size to large enterprise organization that is using primarily standard IT hardware, you may want to stay within the typical designs for a traditional data center, perhaps with a modicum of customization, which may well be accommodated with a “build to suit” offering. Even if you decide to build and own your data center, you still need to consider that your organization may need to sell or lease it out in the future. This could be driven by any number of reasons, such as out-growing it, a merger or acquisition, or even downsizing. A highly customized data center may be harder to sell or lease out.

    If you have carefully examined your IT roadmap and business requirements and taken a holistic approach to the long-term goals, you may find that a custom solution is a good long-term investment that produces a technical competitive advantage and a lower TCO; however, do not be swayed simply because a certain design is the “latest trend.” Long-term reliability and maintainability are still critical elements for the data center.

    On the other side of the fence, if you are a multi-site Internet-services-driven organization that plans to operate specially designed or unique non-standard IT hardware, you may want to take advantage of some of the more esoteric designs that provide a technical performance edge or deliver an extremely high level of energy efficiency, perhaps with a lower level of physical power and cooling system redundancy.

    Once you have a clear understanding and justification of your IT architecture, systems and strategy requirements, and have decided on a build-to-suit or a highly customized design, it is imperative that you interview design and build organizations and select one that you are comfortable with. They should have demonstrated experience in delivering standard data centers, as well as an open-minded design staff comfortable thinking outside the box, both figuratively and literally.

    You can download a complete PDF of this article series on the DCK Executive Guide to Custom Data Centers, courtesy of Digital Realty.

  • Timelines and Potential Pitfalls of a Custom Data Center

    This is the fourth article in a series on the DCK Executive Guide to Custom Data Centers.

    The advantages of a custom design can look, and be, attractive, since it may be able to accommodate non-standard IT hardware or provide very high energy efficiency or high cooling density. However, there are potential pitfalls for those who have not had a great deal of experience with custom designs. If you change the design radically to meet specialized custom-built hardware requirements, you may lose the ability to easily adapt to different equipment after a hardware technology refresh cycle. This is especially true if your custom hardware requires non-standard cabinets, since the majority of typical computer hardware is designed for standards-based cabinets.

    That is not to say that one should forgo a custom data center design to support specialized hardware and simply stay with standard designs. You may choose to allocate one section of the data center to deal with specialized hardware requirements, if your IT systems can gain a substantial performance benefit.

    Timeframe
    If, after carefully weighing the facts, you decide to proceed with a custom design, bear in mind the impact on the timeline. A standard data center can be designed and built in 12-18 months by an experienced builder, once the basic size and capacity have been defined. With a custom design you need to anticipate extended timelines. The first is the preliminary technical requirements discussion, as well as the business and cost justification, within your own organization. Once your internal requirements have been defined, additional time will be required for meetings with designers and builders to explore the feasibility and cost projections for your custom requirements. These extra steps can add 6-12 months to the timeline. Once the custom design has been finalized, the build-out should take 12-18 months if standard power and cooling equipment is used; however, if custom equipment needs to be specially fabricated, then additional time may be required for those items.

    You can download a complete PDF of this article series on the DCK Executive Guide to Custom Data Centers, courtesy of Digital Realty.

  • Custom Data Centers: Responsibilities of the Stakeholders

    This is the third article in a series on the DCK Executive Guide to Custom Data Centers.

    Like any large scale project, when commissioning a data center design, whether standard or custom, the points of contact (POC) and/or project managers (PM) need to be carefully selected, with a clear understanding of responsibilities agreed to by all involved parties. It is highly recommended that the POC or PM for the organization that is purchasing or leasing the data center be generally familiar with, and have some experience in, the operation and basic technologies of a data center. This is especially important for a custom design, and simply appointing an “all purpose” internal POC or PM without any specific data center experience should be avoided if at all possible. If such a qualified person is not available internally, consider utilizing a qualified independent consultant to act as the POC or PM, or at the very least as a trusted advisor. While they do not have to be an engineer, they do need to be able to fully understand what is being asked of the bidding design-and-build firms, and the implications of their responses, questions or change requests as the designs are developed.

    Before delving into the details, let’s first clarify the general data center categories and terms: standard, build-to-suit and, of course, custom design.

    Standard Data Center
    While there really is no such thing as a generic “standard” data center, the term generally describes a design that follows common industry standards and best practices. This usually covers the layout of the rows of cabinets, typically capable of supporting a moderate power density, then selecting the tier level of infrastructure redundancy and a total facility size commensurate with your organization’s immediate and future growth expectations. This type of data center is readily available for lease or purchase (please see part 1 of this series, “Build vs Buy”) and is built using standard equipment and straightforward designs.

    Build-to-Suit Data Center
    The “Build to Suit” term, and other similar marketing names such as “Turn Key” and “Move-In Ready,” are used by some data center builders and providers in the industry. While the name sounds like, and would seem to imply, a completely custom design, it generally offers a somewhat lower level of customization within certain limits of a basic standard design. This should be given serious consideration, since in many cases it may meet some or most, if not all, of your specialized requirements, with minimal cost impact. Also, by keeping within the basic framework of a standard design, it is less likely to face early obsolescence should a normal traditional technology refresh occur.

    Custom Design Data Center

    Like a custom built race car, designed and built for performance, a custom data center should represent a technically leading edge, tour de force design. In the case of a data center, the extreme performance is typically manifested in the form of higher flexibility, reliability, energy efficiency and power density, or some combination thereof.

    Hardly a week goes by without some headlines in the data center publications announcing a new custom built data center based on a radical new design, most commonly by a high profile firm in the Internet search, social media and cloud services arena, such as Google, Facebook, or Microsoft. It is important to understand that these are typically based on very large scale dedicated applications and may involve specialized custom built hardware for use in so called hyper-scale computing. As an example, Facebook and Google utilize unique custom built servers (each has their own different server design), which do not have standard enclosures and require special matching cabinets, as well as specialized power and cooling systems.

    This results in some technical and financial advantages, primarily related to lower cost per server and better overall data center energy efficiency. However, before embarking down the path of a highly customized data center design, it is important to understand that it requires a sufficiently large scale and a matching IT architecture. It may also limit the general ability to support standardized racks and IT equipment. Let’s look at some emerging trends in custom data center designs.

    Hybrid and Multiple Tier Levels
    Tier levels generally refer to the level of redundancy and fault tolerance, resulting in a projected availability rating for a data center (Tier 1 the lowest, Tier 4 the highest).

    One area of customization that is becoming more popular is the incorporation of multiple tier levels of infrastructure redundancy within the data center. This can lower costs and may increase energy efficiency by creating a lower tier level (i.e. Tier 2) zone for less critical applications, while still providing a high-redundancy (Tier 3-4) area for the most critical systems and applications.

    There are also data center operators and owners who do not feel that they have to exactly follow all the requirements of the tier level system, but may prefer to use selected concepts in a hybrid design. This gives them the flexibility to provide a greater level of redundancy in the electrical systems (i.e. a 2x[N+1] dual-path system, comparable to a Tier 4 design), while using a less complex and lower cost cooling system with only N+1 cooling components (for more details on tier levels please refer to the “Uptime” section in part 1, “Build vs Buy”).
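    To make the equipment-count difference between these redundancy schemes concrete, here is a minimal sketch; the 1,200 kW load and 400 kW unit size are invented figures for illustration only.

        from math import ceil

        def units_required(load_kw, unit_kw, scheme):
            """Units needed to serve a load under common redundancy schemes."""
            n = ceil(load_kw / unit_kw)   # N = units required just to carry the load
            return {"N": n, "N+1": n + 1, "2N": 2 * n, "2x[N+1]": 2 * (n + 1)}[scheme]

        # Hypothetical 1,200 kW cooling load served by 400 kW units:
        for scheme in ("N", "N+1", "2N", "2x[N+1]"):
            print(f"{scheme}: {units_required(1200, 400, scheme)} units")
        # N: 3, N+1: 4, 2N: 6, 2x[N+1]: 8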

    Of course, once you have begun to explore a custom design, you may choose to mix the multiple and hybrid design schemes to match your organization’s various applications and systems requirements, which may also lower your CapEx and OpEx costs.

    There is also a growing trend to segregate hardware by environmental requirements. Systems such as tape backup equipment in particular require tight environmental control, yet do not require much actual cooling or power density. By isolating such equipment from other hardware such as servers, you are able to properly support and maintain the reliability of the more sensitive disk-based storage and tape library equipment by tightly controlling the temperature and humidity. This also improves the energy efficiency of the cooling system for the more robust hardware, such as servers or the new solid state storage systems, by allowing raised temperatures and expanded humidity ranges (for more on this please refer to part 3, “Energy Efficiency”).

    Containerized Data Center
    The data center in a container is an alternative that is beginning to find some traction in the data center industry. Containers can be either an add-on to a traditional facility or the basis for an entire “data center” based primarily on containerized or modular prefabricated units. Some designs are based on a core power and cooling infrastructure supporting weather-proof units that can be placed on a prepared slab and then connected to the core power and cooling systems, while other containers may require a warehouse-type building to shelter them, and again need to be connected to the core support systems.

    Although similar in concept, it is important to distinguish between actual container units and some modular data center systems. It is also important to note that containerized solutions and modular systems are not necessarily an inexpensive alternative to a traditional brick and mortar data center facility. They are typically best suited to very high density applications of tightly packed, mostly identical hardware, typically thousands of small servers or several hundred blade servers, configured to deliver hyper-scale computing. Their main attraction is for large organizations that require the ability to respond quickly to rapid growth in computing demand, and also, to a certain degree, to minimize initial capital expense by adding containers or modules on an as-needed basis.

    Regardless, whether you consider a container or a modular system, it still has to be installed at a data center facility that will support and secure it, and the overall facility infrastructure must be pre-designed and pre-built for the total amount of utility power, generator back-up capacity and power conditioning (i.e. the UPS typically required for most containers), and in some, but not all, cases a centralized cooling plant.

    Containers can also be part of a hybrid custom design, based on a relatively standard traditional building serving as the core primary data center, where the overall facility infrastructure has pre-allocated space, as well as power and cooling infrastructure, for containerized systems which can then be easily added as needed for rapid expansion.

    Open Compute Project
    There are also some resources for “non-standard” or leading edge “outside of the box” designs. One in particular is the Open Compute Project (OCP), which has published its highly energy efficient basic designs and specialized IT equipment specifications. While not every organization is an ideal candidate for all the elements disclosed in the OCP designs, some aspects of the designs can be chosen selectively and incorporated into a custom data center. Some data center providers offer to build a data center based on the OCP designs.

    You can download a complete PDF of this article series on the DCK Executive Guide to Custom Data Centers, courtesy of Digital Realty.

  • 10 Considerations in Building a Global Data Center Strategy

    It would be imprudent to oversimplify all the tangible and intangible elements that need to be fully understood and evaluated when creating a global data center initiative. Yet here are ten considerations to evaluate when building your global data center strategy. This is the fourth article in a series on Creating Data Center Strategies with Global Scale.

    1) Site Selection and Risk Factors – Knowing Where to Build
    Once you have selected a general geographic area, it takes a very experienced team to fully evaluate the suitability of a foreign location for building a new data center. Identifying risk factors, both the obvious ones, such as known seismic or flood zones, and the less obvious ones, such as adjacency to “invisible” but potential hazards like airports and their related flight paths, must be an essential part of the final decision.

    2) Geopolitical Ownership Considerations
    Beyond the basic factors related to physical and logistical resources, the political stability of the country and region should be considered. In some cases the nationality or type of organization of the owner or tenant may make it a target for local political factions.

    Insurance costs, and even the ability to get coverage, may be impacted by building a data center in a potentially lucrative and growing market that carries a higher risk profile than a nearby country with viable communications bandwidth into the target market.

    However, be aware that in some volatile or politically restrictive countries, Internet traffic is filtered, blocked and/or monitored.

    3) Global Risk Issues
    Given the recent and more frequent catastrophic weather-related events affecting even highly developed areas, we all need to review and perhaps re-evaluate our basic assumptions. While there is still some contention about how much global warming impacts the world, it is no longer a matter of “if.” Planning based on 100-year flood zones may no longer be considered ultra-conservative. The evaluation of any potential data center or other critical infrastructure site is not a cut-and-dried exercise. Geographic diversity for replicated or back-up sites is no longer an option; it is a necessity.

    4) Extended Operation and Autonomy During a Crisis
    Regarding availability and continuous operation, how much fuel should be stored locally (i.e. 24 hours, 3 days, a week)? During a small localized utility failure, 24 hours of fuel may previously have been considered adequate, but given more recent events, 3-7 days offers a better safety margin. During an extended widespread crisis, the expectation of daily refueling may prove difficult, if not impossible, to meet (cases in point: Hurricane Katrina and “Super Storm” Sandy). In some cases, so much of the general infrastructure was damaged that even fuel availability and delivery to back-up generators became a severe problem (both for data centers and for their employees, limiting their ability to get to work). In the end, you will typically pay more for the co-lo with the greatest levels of redundancy, resources and better SLAs, but it would be imprudent to assume that nothing will ever happen to impact the operation of your own data center because you are in a “safe” area. Storing more fuel may be a small overall price to pay for the extended autonomy, and could be the difference between being operational or shut down during a major crisis.
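    As a back-of-the-envelope sketch of what those autonomy targets imply for on-site storage (the generator load and the ~0.07 gal/kWh diesel consumption figure are rough assumptions for illustration; a real sizing exercise would use the generator manufacturer’s fuel curves):

        def fuel_storage_gal(gen_load_kw, autonomy_hours, gal_per_kwh=0.07):
            """Rough diesel storage needed to ride out a utility outage."""
            return gen_load_kw * autonomy_hours * gal_per_kwh

        # Hypothetical 2,000 kW generator load:
        for label, hours in (("24 hours", 24), ("3 days", 72), ("1 week", 168)):
            print(f"{label}: ~{fuel_storage_gal(2000, hours):,.0f} gallons")
        # 24 hours: ~3,360; 3 days: ~10,080; 1 week: ~23,520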

    Also understand that these same problems could impact your communications providers, so investigate their capabilities for extended operations during a crisis. It is useless if your data center is operational but you have no viable communications network during a major event.

    5) Availability and Cost of Power and Water
    Of course, picking a site location that is physically secure and has reliable access to power, water and communications is an important first step. Since energy is the most significant operating cost of a data center, focus your attention on the cost of power and its long-term impact. Energy costs are highly location dependent and are based on local or purchased power generation costs (related to fuel types or sustainable sources such as hydro, wind or solar), as well as any state and local taxes (or tax incentives). In the United States rates vary, but are generally low compared to some foreign markets. Internationally, energy costs are higher and can vary widely. It is important to check local rates and look for utility and energy incentives; some countries are offering tax and other incentives to build data centers. Another factor is long-term overall market demand for constrained resources such as power and water, which can ultimately limit the data center’s capacity.

    If the site is relatively remote and needs to be newly developed, be sure to factor in the cost of bringing in new high voltage utility services, which can be expensive and require long lead times to be planned, approved and constructed.

    Site selection can also directly impact the facility’s energy efficiency. The relative energy efficiency of the data center facility infrastructure is measured as “Power Usage Effectiveness” (“PUE”), alongside the IT equipment’s use of power vs its computing performance. One of the largest uses of energy is cooling, which is location dependent, since it is related to the ambient temperature and humidity conditions. With the rising acceptance of the use of outside air for “free cooling,” picking a location with a moderate climate can offer the opportunity to save a significant amount of energy cost over the long term, as well as a lower initial capital investment through the reduced need for mechanical cooling systems. For more details see part 3 of this series.
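    For reference, PUE is a simple ratio, with an ideal facility approaching 1.0:

        PUE = total facility energy / IT equipment energy

    As an invented example, a facility drawing 1.5 MW in total to support 1.0 MW of IT load has a PUE of 1.5; effective free cooling in a moderate climate shrinks the non-IT share of the numerator.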