Author: Industry Perspectives

  • Software-Defined Power: The Path to Ultimate Reliability

    Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.

    CLEMENS PFEIFFER
    Power Assure

    About half of all service outages in data centers today are caused by power problems, and that percentage is expected to increase as the electric grid struggles to meet growing demand on an aging infrastructure. Part of the reason for this shift is that hardware has become remarkably reliable, and the virtualization of servers, storage and network components, or the so-called “Software-Defined Data Center,” has made applications largely immune to single points of failure. Power problems, by contrast, are only partially addressed by the uninterruptible power supply (UPS) and backup generator.

    To enhance their business continuity and disaster recovery strategies, most organizations now operate multiple, geographically dispersed data centers. While this investment is made primarily to protect against catastrophic events caused by major natural disasters, the arrangement can also afford greater immunity from power problems, whether caused by weather or disruptions on the grid.

    What is Software-Defined Power?

    Software-Defined Power is emerging as the solution to application-level reliability issues being caused by power problems. Software-Defined Power, like the broader Software-Defined Data Center (SDDC), is about creating a layer of abstraction that makes it easier to continuously match resources with changing needs. For SDDC, the resources are the servers, storage and networking equipment, and the need is application service levels. For Software-Defined Power, the resource is the electricity required to power (and cool) all of that equipment, but the need is exactly the same: application service levels.

    With Software-Defined Power, overall reliability is improved by shifting the applications to the data center with the most dependable, available and cost-efficient power at any given time. Software-Defined Power is implemented using a software system capable of combining IT and facility/building management systems, and automating standard operating procedures, resulting in the holistic allocation of power within and across data centers, as required by the ongoing changes in application load.
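
    To make the matching concrete, here is a minimal sketch (in Python, with hypothetical data and field names) of the kind of decision a Software-Defined Power layer automates: score each candidate data center on power reliability, current price and available capacity, then pick the best target for an application’s load.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        reliability: float    # 0.0-1.0, e.g. derived from grid and UPS telemetry
        price_per_kwh: float  # current utility rate in $/kWh
        headroom_kw: float    # spare IT plus cooling capacity in kW

    def best_site(sites, load_kw, max_price=0.20):
        """Return the most reliable site that can host the load within budget."""
        candidates = [s for s in sites
                      if s.headroom_kw >= load_kw and s.price_per_kwh <= max_price]
        # Prefer reliability first, then cheaper power.
        return max(candidates, key=lambda s: (s.reliability, -s.price_per_kwh),
                   default=None)

    sites = [Site("east", 0.999, 0.11, 800), Site("west", 0.995, 0.07, 1200)]
    print(best_site(sites, load_kw=500))  # -> the "east" site: most reliable with room to spare
    ```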

    It’s About the Applications

    Once configured with the service level and other requirements for all applications, the Software-Defined Power solution continuously and automatically optimizes the resource allocations as it shifts loads between or among data centers. Adding power to the already existing software-defined computing, storage and network components of an application environment makes it possible to abstract applications fully from an individual data center and its power dependency. This is what enables the shifting and shedding of application capacity across multiple data centers by adjusting the IT equipment and critical facility infrastructure required at each, resulting in the maximum possible application-level reliability at the lowest operating cost.

    Not only does shifting loads between data centers help increase reliability by affording greater immunity from power problems that cause unplanned downtime, it also creates wider windows for the planned downtime required for routine maintenance and upgrades within each data center. This makes it easier to operate applications 24×7 with no adverse impact on either availability or performance from power-related issues.

    Follow-the-Moon Strategies

    In addition to the increased reliability, Software-Defined Power also pays for itself by minimizing energy spend and enabling participation in lucrative demand response programs. Power is the most dependable and available at night, which is also when rates for electricity are normally the lowest. So shifting the load to “follow the moon” can afford considerable savings.

    Shifting load to a distant data center also enables shedding that load locally. A best practice in Software-Defined Power, therefore, is to power down the servers until they are needed again. This same ability to de- and re-activate servers can also be used to dynamically match capacity to load within a single data center on a regular schedule or in response to changing application demand.

    Because utilities pay exorbitant rates for wholesale energy during periods of peak demand, they are willing to pay commercial and industrial customers handsomely to reduce usage during these peaks. Software-Defined Power enables data centers to participate in these demand response programs without adversely impacting on application service levels. Organizations can even go one step further: By knowing about potential grid issues, IT and facility managers can take preventive action to shift applications to another data center in advance of any power problems.

    The combination of paying less for energy and wasting less to power (and cool) idle servers (including during demand response events) can result in savings of over 50 percent. And considering that the operational expenditure for energy alone exceeds the capital expenditure for the average server today, the electric bill for a full rack of servers can be cut by as much as $25,000 every year.
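
    As a sanity check on those numbers, a quick back-of-envelope calculation (the rack density, PUE and utility rate below are assumptions, not figures from the article) shows how a roughly $25,000 annual reduction per rack can fall out of a 50 percent saving:

    ```python
    rack_it_load_kw = 20       # assumed: a densely populated rack
    pue = 2.0                  # assumed: total facility power / IT power
    rate_per_kwh = 0.14        # assumed: blended utility rate in $/kWh
    hours_per_year = 24 * 365

    annual_bill = rack_it_load_kw * pue * hours_per_year * rate_per_kwh
    print(f"annual bill ~${annual_bill:,.0f}, 50% saving ~${annual_bill * 0.5:,.0f}")
    # -> annual bill ~$49,056, 50% saving ~$24,528
    ```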

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

  • Are You Suffering from PUE Envy?

    Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

    TOM ROBERTS
    AFCOM

    It’s kind of become an “our PUE is less than your PUE” world as companies battle it out for green data center efficiency bragging rights.

    Of course, data centers with the most resources—financial, natural and manpower—have an advantage.

    Here are some recent examples:

    Yahoo! spent close to $200 million on its 1.07 PUE-boasting data center in Lockport, NY. Carbon-free hydroelectric power generated from Niagara Falls feeds its servers.

    Google, with an average PUE of 1.12 for its data centers, just purchased a $200 million stake in a wind farm in west Texas to add to an already impressive green portfolio that includes offshore wind power and solar. This brings the Internet giant’s total investments in alternative energy to more than $1 billion.

    Apple spent about $1 billion to build iDataCenter, its first data center facility in Maiden, North Carolina. With two massive solar arrays and a nearby fuel cell farm, it also manages a PUE of roughly 1.1.
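
    For reference, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment. A quick illustration with assumed numbers shows what a figure like 1.12 implies:

    ```python
    it_load_kw = 1000     # assumed power drawn by servers, storage and network gear
    overhead_kw = 120     # assumed cooling, UPS losses, lighting, etc.

    pue = (it_load_kw + overhead_kw) / it_load_kw
    print(round(pue, 2))  # 1.12 -- overhead adds only 12% on top of the IT draw
    ```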

    You are not alone if you’re experiencing a touch of PUE or budget envy from the previous examples. Slashdot.org recently reported “green fatigue” among data center managers who were tiring of the constant PUE chase.

    Let’s Get Real

    For most of us, building data centers next to magnificent rivers or buying huge chunks of real estate to place solar arrays is just pie-in-the-sky thinking. Our solutions must be much more grounded.

    In fact, Data Center World keynote speaker Brian Janous from Microsoft addressed this very real-world frustration. At the end of his talk, someone asked him: “It is great that (Microsoft) can experiment in using other sources of fuel to power its centers, but how do we (smaller data centers) benefit from that?”

    Well, because the Microsofts of the world can experiment with new ideas and renewable fuel sources, it takes the pressure off us to determine what is viable and what is not. If they find their experiment did not work as planned, they learn from it and try something new. Their experimentation turns into our future implementations.

    Practical Lessons from the Megascale Projects

    While we may not be able to match the scope of what these corporate giants achieve, we certainly can apply the lessons that make practical sense in our data centers. For example, you can thank the larger data centers for “discovering” the use of outside air and evaporative cooling to lower temperatures as well as establishing a safe threshold for raising them. Just choose the projects that make the most sense now, for your specific situation and budget.

    Research firm Gartner suggests keeping these guidelines in mind:

    • More exotic projects, like alternative energy and green building design, may take a decade or longer.
    • Five-year paybacks are probable for projects that attempt to change employee behavior, and for lifecycle management programs and green legislative initiatives.
    • Two-year paybacks are possible for efficient facility designs, advanced cooling, processor and server designs, and heating and power issues.

    Try the five methods below (from AFCOM’s Communique newsletter) to make quick, cost-efficient improvements in your energy usage:

    1. Place Power Distribution Units (PDUs) in wider, warmer aisles in chimney-ducted rooms. Since they don’t need the low temperatures that cold aisles offer, PDUs shouldn’t be using up the colder air that other equipment requires.

    2. Don’t use doors on the cold aisle sides of cabinets. Fans can produce both kinetic and radiant heat during regular operation. The easier time these fans have aspirating air through cabinet door perforations, the less heat they will produce, saving on required cooling.

    3. Humidify using a combination of partial extraction of the warm return air, an atomizing spray of water, and a supply of 10-degree cooler air in back, overhead, and directly into the cross aisles. The natural vapor pressure will keep the area humidified.

    4. Align cabinets with hot sides on the other side of a demising wall that forms the perimeter of the data center. Use a second concentric demising perimeter external to the first demising wall to form a security barrier and a warm air collection point for drawing heat in winter to warm office spaces. Finally, supply cooled air directly below exterior windows in an office area back to the interior of the computer room or NOC to assure a complete airflow/air replenishment circuit.

    5. Use custom, break-away ductwork in a chimney-ducted room. Affix the ductwork to the back door of cabinets that only vent horizontally, allowing hot air to redirect up and into a chimney when the back door is closed. The result is that the warm aisle stays cooler, with less chance of mixing the warm air with the cold air within the data room.

    Follow the Leaders

    The challenges of lowering energy costs are here to stay. Whether you’re in a position to take on massive projects or nip away at smaller ones, be sure to keep your eyes and ears open for the next great PUE-lowering strategy from the leaders in this arena. You—and the next generation of data centers—are bound to get something out of it.


  • How Design Can Save the Average Data Center More than $1M

    Peter Panfil is Vice President Global Power Sales, Emerson Network Power. With more than 30 years of experience in embedded controls and power, he leads global market and product development for Emerson’s Liebert AC Power business.

    PETER PANFIL
    Emerson’s Liebert AC Power

    There are many options to consider in the area of data center power system design, and every choice has an impact on data center efficiency and availability. The data center is directly dependent on the critical power system, and a poorly designed system can result in unplanned downtime, excessive energy consumption and constrained growth.

    When making choices, consider the UPS system configuration, UPS module design and efficiency options, and the design of the power distribution system.

    Increase Utilization Rate to Improve Efficiency

    Most businesses need to consider having some level of redundancy in their UPS system to mitigate the cost of downtime, eliminate single points of failure and provide for concurrent maintenance.

    A concern often raised in discussions about redundancy is utilization rate. A 2N UPS system has the highest availability but unfortunately offers the lowest utilization. Each bus of a 2N system can only be loaded to 50 percent so that one bus can provide full load in the event the other bus is not available. Many business critical data centers use 40 percent as the peak loading factor on each bus in this configuration to allow for variations in IT power draw and provide a cushion for immediate expansion capability. Customers have expressed concern that they don’t trust all UPS suppliers to be able to support 100 percent load.

    Find a UPS supplier you can trust, whose UPS can support full load across the range of high and low line conditions, temperatures up to 40°C, blocked filters, fan failures and altitude. Potential cost savings of moving utilization to 45 percent: $2k/yr
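
    The saving comes mainly from running the UPS modules higher on their efficiency curve. A rough illustration of the arithmetic (the load, efficiency points and utility rate below are assumptions, not vendor data):

    ```python
    it_load_kw = 400               # assumed critical load carried by the 2N system
    hours = 24 * 365
    rate = 0.10                    # assumed utility rate in $/kWh

    def annual_loss_cost(efficiency):
        """Cost of the energy lost inside the UPS at a given operating efficiency."""
        input_kw = it_load_kw / efficiency
        return (input_kw - it_load_kw) * hours * rate

    # Double-conversion UPS efficiency typically improves as per-bus loading rises.
    cost_at_40_pct = annual_loss_cost(0.945)   # lightly loaded buses
    cost_at_45_pct = annual_loss_cost(0.952)   # slightly better-loaded buses
    print(f"saving ~${cost_at_40_pct - cost_at_45_pct:,.0f}/yr")  # on the order of $2-3k
    ```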

    Don’t Gamble on Availability – Fault Isolation Matters

    Transformers play a critical role in the power system by providing circuit isolation, localized neutral and grounding points for fault current return paths, and voltage transformation.

    Removing the transformers can result in a smaller, lighter footprint that is well suited for installation in the row of racks. Removing the transformers also exposes the UPS system to faults that could reduce the availability or push the critical load over onto utility power more often.

    One very common fault that has this effect is a DC ground fault. Shorting the positive or negative battery terminal to ground in a transformer-based system results in an alarm, but the UPS continues to provide protected power. Shorting the positive or negative battery terminal to ground in a transformer-less architecture at best results in a transfer to bypass, exposing the load to unprotected power, and at worst drops the critical load.

    One transformer-less UPS manufacturer even filmed the performance of their transformer-less UPS on a battery ground fault. The UPS output waveform went through severe gyrations, the UPS groaned, cables shook and the UPS transferred to bypass. That manufacturer touted this as robust performance.

    Do you consider transferring to bypass during one of the most common UPS system faults robust? Don’t bet your career on it. Potential Cost Savings of increased availability: $505k per occurrence

    Modern Transformer-Based Topologies + Advanced Energy Optimization = State of the Art Technology

    There is the misperception that transformer-based UPS systems are “old technology”. This myth is spread for the most part by UPS manufacturers who only offer transformer-less UPS. Modern transformer-based UPS systems deploy the latest DSP-based controls and energy optimization features to offer the best availability for business-critical applications, and at efficiencies that meet or exceed transformer-less offerings.

    One such energy optimization mode is Intelligent Eco-Mode, which provides the majority of the critical bus power through the continuous duty bypass. This technology keeps the inverter active and always ready to assume the load in the event of an outage — a dramatic improvement over energy optimization modes that do not keep the inverter active. UPS systems that do not deploy the latest active-inverter Intelligent Eco-Mode often have a notch in the output waveform going in and out of Eco-Mode. They have to perform an interrupted transfer to turn off the bypass before turning on the inverter. Notch in the output? Interrupted transfer? Gulp! Increased cost savings using Intelligent Eco-Mode: $20,350/yr.

    Weigh Safety Risks and Hidden Costs of Alternative Distribution Voltages

    480V 3-wire distribution is the norm in enterprise data centers. There has been a lot of discussion about going 400/230V 4-wire direct from the UPS to the server. While this configuration looks good on paper, it has some significant limitations. Fault current can be much higher in these direct-to-the-server configurations, which poses an equipment and personnel risk. This configuration also strands capacity in the gear, requires higher-ampacity buses and can increase wiring costs. Before going to this extreme, consult with your data center trusted adviser to understand the costs and risks associated with this architecture. Potential Cost Savings using higher distribution voltages: $3,500/yr

    Do Your Research

    Due diligence on the latest UPS technology and efficiency optimization modes will help you choose improved critical power systems with the highest availability and new levels of utilization and efficiency.


  • Optimizing Infrastructure for the Big Data V’s – Volume, Velocity and Variety

    Patrick Lastennet is director of marketing & business development, financial services segment for Interxion.

    PATRICK LASTENNET
    Interxion

    The use of big data, in general, is still in its early stages for many industries, but the financial services industry has been dealing with big data for years. In fact, it’s already been managed and embedded into core financial processes. What used to be done in hours can now be done in minutes thanks to advanced data processing capabilities being applied to everything from capital market portfolio management applications to financial risk management. Prior to such advancements, data from previous days or weeks was analyzed to help re-strategize market approaches for the next day’s trading. But now, with more complex data analytics capabilities, financial firms are able to shorten that window for data processing and create more up-to-date strategies and trading adjustments in real time.

    However, it’s not just the increasing volume of data sets that is of concern to financial firms. There’s also the velocity and variety of the data to consider. When pulling clusters of diverse databases together for both structured and unstructured data analysis, financial firms rely on having powerful processing speeds, especially as real-time insight is increasingly a key strategic factor in market analysis and trading strategies. But are financial institutions equipped with the proper infrastructure to effectively handle the three V’s of Big Data – volume, velocity and variety – and benefit from real-time data analysis?

    Increasing the Value of Real-Time Operations

    With real-time data analysis, financial institutions are better able to manage risk and alert customers to issues as they occur. If a firm is able to manage risk in real time, that not only translates into better trading performance, but also helps ensure regulatory compliance. Such improvements can be seen on the consumer side in enhanced credit card transaction monitoring and fraud protection and prevention measures. But, on a larger scale, the most recognizable incident that would have benefited from better data analysis may have been the collapse of Lehman Brothers.

    When Lehman Brothers went down, it was called the Pearl Harbor moment of the U.S. financial crisis. Yet it took the industry days to fully understand how firms were exposed to that kind of devastating risk. For every transaction made, it’s imperative that financial firms understand the impact or, in the extreme scenario, risk another “Lehman-esque” collapse. Today, with advancements in big data analysis and data processing, whenever a trader makes a trade, financial firms can know the impact in real time through the risk management department; that is, if they have the right infrastructure.
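
    As a toy illustration of the difference (the positions, limit and names below are hypothetical), real-time risk management means every trade updates exposure the moment it happens rather than waiting for an end-of-day batch:

    ```python
    from collections import defaultdict

    exposure = defaultdict(float)   # counterparty -> net exposure in $
    LIMIT = 50_000_000              # assumed single-counterparty limit

    def on_trade(counterparty: str, notional: float):
        """Called on every trade as it happens, not hours later in a batch job."""
        exposure[counterparty] += notional
        if exposure[counterparty] > LIMIT:
            print(f"ALERT: exposure to {counterparty} is ${exposure[counterparty]:,.0f}")

    on_trade("Bank A", 30_000_000)
    on_trade("Bank A", 25_000_000)  # -> ALERT: exposure to Bank A is $55,000,000
    ```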

    Optimizing Current Infrastructure

    The crux of handling the volume, velocity and variety of big data in the financial sector lies within the underlying infrastructure. Many financial institutions’ critical systems are still dependent on legacy infrastructure. Yet to handle increasingly real-time operations, firms need to find a way to wean themselves off legacy systems and become more competitive and more responsive to their own big data needs.

    To address this issue, many financial institutions have implemented software-as-a-service (SaaS) applications that are accessible via the Internet. With such solutions, firms can collect data through a remote service without needing to worry about overloading their existing infrastructure. Beyond SaaS apps, other financial companies have addressed their infrastructure concerns by using open source software that allows them to simply plug their algorithms and trading policies into the system, leaving it to handle their increasingly demanding processing and data analysis tasks.

    In reality, migrating off legacy infrastructure is a painful process. The time and expense required to handle such a process means the value of the switch must far outweigh the risks. Having a worthwhile business case is, therefore, key to instigating any massive infrastructure migration. Today, however, more and more financial firms are finding that big data analysis is impetus enough to make a strong business case and are using solutions like SaaS applications and open source software as stepping stones for complete migrations to ultimately leave their legacy infrastructure behind.

    Integrating Social Data

    While the velocity and variety of big data volumes from everyday trading transactions and market fluctuations may be enough of a catalyst for infrastructure migrations and optimization, now that social data is creeping into the mix, the business case becomes even more compelling.

  • The Hidden Costs of System Sprawl

    Florin Dejeu, director of product management, SEPATON, Inc., has more than 20 years of product management experience, overseeing the development of products that address the information management needs of large enterprises with emphasis on storage, archiving, classification, HSM and data protection solutions.

    FLORIN DEJEU
    SEPATON

    While data center managers have grown accustomed to rapid data growth, few could have anticipated the unprecedented data growth and increased complexity that have overwhelmed many data center backup environments in the past few years. According to industry analysts, data in large enterprises is growing at 40-60 percent compounded annually.
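
    To put that growth rate in planning terms, a simple doubling-time calculation (the arithmetic below is illustrative, not an analyst figure):

    ```python
    import math

    for cagr in (0.40, 0.60):
        doubling_years = math.log(2) / math.log(1 + cagr)
        print(f"{cagr:.0%} annual growth -> data doubles roughly every {doubling_years:.1f} years")
    # 40% -> about every 2.1 years; 60% -> about every 1.5 years
    ```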

    Data growth is fueled by the proliferation of new business applications, the introduction of Big Data analytics, the increased use of mobile devices and tablets in the workplace, and the increased use of large databases to run core company functions (ERP, payroll, HR, production management). Companies are not only creating massive volumes of data, they are also under pressure to meet increasingly stringent and complex requirements for protecting and managing that data. For example, they have to back it up in shorter times, retain it for longer periods of time, encrypt it without slowing backup performance, replicate it efficiently, and restore it quickly.

    Until recently, many enterprise data managers responded to data growth by simply adding disk-based backup targets. The most common type of disk-based backup target provided inline data de-duplication and a reasonable level of performance and capacity to accommodate the increased data volume. However, these systems are simply not designed for today’s massive data volumes or fast data growth because they lack two critical capabilities: they do not scale and they do not de-duplicate enterprise backup data efficiently. As a result, for many large enterprise data centers, the “add another system” approach has reached its breaking point.

    The Hidden Costs of Sprawl – Total Cost of Ownership

    For many organizations, the breaking point for non-scalable systems is the point at which they can no longer meet their backup windows. While adding a single system may not seem overly cumbersome, for large enterprise data centers that require several of these systems, it can add unplanned cost, complexity, risk, and administrative time. The hidden costs and total cost of ownership (TCO) impact are significant:

    • Overbuying systems. Companies are forced to add an entire system when they have plenty of capacity but only need more performance or conversely, have performance and need more capacity.
    • Wasting money on capacity. By separating data onto multiple non-scalable systems, these systems cannot de-duplicate globally, reducing the efficiency of their capacity optimization.
    • Wasted IT admin time. To add a new non-scalable backup system, IT admins have to divide the existing backup(s) onto multiple new systems and load balance for optimal utilization, a process that becomes more time-consuming and complex with every new system added.
    • Added maintenance cost. Each new system adds significantly to the cost of system maintenance every time a software or hardware update or upgrade is needed, or standard maintenance is required.
    • Slow backups. Non-scalable systems typically use hash-based, inline de-duplication that slows backup performance over time. They are highly inefficient in the database backup environments common in enterprise data centers for two reasons (see the sketch after this list). First, databases often store data in sub-8KB segments that are too small for inline, hash-based de-duplication to process efficiently without becoming a bottleneck to backup. Second, they do not support fast multiplexed, multi-streamed database backups, requiring IT staff to choose between fast backups and capacity optimization.
    • Rising data center costs. In simple terms, more systems with less-efficient de-duplication mean more rack space, power, cooling, and data center floor space.
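
    The sketch below (a simplified, fixed-size-chunk version in Python; real appliances are far more sophisticated) shows why inline, hash-based de-duplication struggles with small segments: every chunk must be hashed and looked up in an index on the ingest path, so smaller chunks mean many more operations per gigabyte backed up.

    ```python
    import hashlib

    index = {}          # fingerprint -> location of the stored chunk
    stored_bytes = 0

    def ingest(data: bytes, chunk_size: int = 8192):
        """Split a backup stream into fixed-size chunks and store only new ones."""
        global stored_bytes
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint not in index:   # index lookup sits on the ingest path
                index[fingerprint] = stored_bytes
                stored_bytes += len(chunk)

    ingest(b"A" * 65536)   # first backup: 8 chunks, only 1 unique
    ingest(b"A" * 65536)   # an identical second backup stores nothing new
    print(len(index), stored_bytes)        # -> 1 8192
    ```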

    Less is More for Low TCO

    In today’s fast-growing enterprise backup environments, consolidating backups onto a single, enterprise-class disk-based backup appliance is proving to be both more cost-efficient and less prone to human error and data loss than the “siloed” approach described above.

    Backup and recovery appliances are designed specifically to handle the massive data volumes and complex backup requirements of today’s data centers. These purpose-built backup appliances (PBBAs) are designed to back up, de-duplicate, replicate, encrypt, and restore large data volumes quickly and cost-efficiently. To choose an enterprise-class backup and recovery appliance, follow these best practices:

    Opt for Guaranteed High Performance

    Understand the performance impact that processing-intensive functions, such as de-duplication, replication, and encryption, have on the system. Enterprise-class systems are designed to perform these functions in a way that does not slow performance; some even offload them from the CPU. Ensure that any published performance rates are for guaranteed, continuous performance, and not simply the highest rates achievable amid a widely varying ingest rate.

    Grid Scalability is Essential

    As described above, adding, managing, and using multiple backup systems is not practical or cost-efficient in today’s fast-growing, complex data centers. Enterprise-class backup and recovery systems offer grid scalability, that is, the ability to add performance and/or capacity independently as you need it. This pay-as-you-grow model eliminates over-buying, reduces IT management time, and enables you to store tens of petabytes of data in a single, consolidated backup appliance.

    Storing data in a single, optimized system has the additional benefits of enabling highly efficient, global de-duplication, and eliminating the need for load balancing and ongoing system-tuning.

    Ensure Deduplication is Designed for Enterprise Data Centers

    One of the most effective ways to reduce the cost of backup and recovery is to implement enterprise-class de-duplication. Unlike de-duplication optimized for small-to-medium businesses, enterprise de-duplication is designed to enable faster backup performance and better overall capacity efficiency. It is also capable of tuning de-duplication to specific data types for optimal use of CPU, disk, and replication resources. For example, it can de-duplicate database data at the byte level for optimal capacity savings, or recognize data that will not de-duplicate efficiently (e.g., image data) and back it up without de-duplication. This “tunability” can save enterprises thousands of dollars in capacity and processing costs.

    Reporting and Dashboards Enable Savings

    Detailed reporting and dashboards are key to enabling IT administrators to manage more data per person. They automate disk subsystem management processes and put detailed status information at the administrator’s fingertips. They also provide predictive warning of potential issues, enabling administrators to take action before those issues become urgent.

    Lowest Total Cost of Ownership

    For today’s large enterprise backup and recovery environments, the days of adding more and more backup systems are over. The speed of data growth, massive volume of data, and complexity of backup and recovery policies necessitate the use of enterprise-class purpose-built backup appliances. These appliances enable organizations to maintain backup windows by moving massive data volumes to the safety of the backup environment at predictable, fast ingest rates. They also reduce complexity by consolidating tens of petabytes of stored data onto a single, cost-efficient, easy-to-manage system.


  • Capture Client Satisfaction: Seventh Key to Brokering IT Services Internally

    Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.

    DICK BENTON
    GlassHouse

    Last August, I outlined seven key tips IT departments should follow to build a better service strategy for their internal users. Since then, I’ve taken a deeper dive into each of these steps on the way to becoming an Internal Cloud Provider (ICP), an essential transformation if IT wants to align with company goals and user expectations. My last post addressed the sixth step, proving what you delivered; in other words, it’s important to show management and service consumers that you’ve met the established service level agreements (SLAs) and key performance indicators (KPIs) for your IT services.

    Now, we’ve come to the seventh and final step in this process: capture client satisfaction.

    So you now have your nascent cloud service offerings out there in consumer land. Your service offerings are being selected by the end user from your Web-based service catalog. They are intelligently choosing the service they really need, because you have provided service attributes in terms the consumer can understand, and you have also identified the cost of each service offering to assist in their selection.

    You have provided a mechanism not only for auto selection, but also for auto deployment. Services selected are now provisioned automatically under appropriate policies agreed upon by management. Mean time to provision is now a matter of minutes or hours instead of days and weeks. Each month, you produce your scorecard showing which groups, departments or divisions have consumed which service offerings and at what cost, and you have confirmed in formal reporting that all SLAs have been either met or exceeded.
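
    A minimal sketch of that monthly scorecard (the catalog, prices and usage records below are hypothetical): roll up each department’s consumption and price it from the service catalog.

    ```python
    from collections import defaultdict

    catalog = {"small_vm": 45.00, "large_vm": 160.00, "backup_tb": 12.00}  # $/month

    usage = [  # (department, service offering, quantity consumed this month)
        ("finance", "small_vm", 12), ("finance", "backup_tb", 30),
        ("marketing", "large_vm", 4),
    ]

    scorecard = defaultdict(float)
    for dept, service, qty in usage:
        scorecard[dept] += catalog[service] * qty

    for dept, cost in sorted(scorecard.items()):
        print(f"{dept:<10} ${cost:,.2f}")
    # finance    $900.00
    # marketing  $640.00
    ```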

    Determine Satisfaction and Look to Tomorrow

    What more can IT do? To lock in your new understanding of consumer needs and to stay abreast of trends in these needs, it is essential to include a survey into your processes. This is not just a satisfaction survey vainly seeking confirmation that IT has indeed “done well”. Rather, it’s critical to use this opportunity to ask probing questions to get a handle on how needs are changing and how future offerings might be driven.

    The satisfaction survey process provides an opportunity to capture consumer needs, consumer consumption behaviors and service offering usage as well as satisfaction levels with your services. It is a tool you can use to better understand individual and overall service requirements. This is what we mean by aligning IT with business needs. Your metrics reporting should have already allowed you to distinguish frequent consumers from occasional ones. You should also have a good handle on who is consuming what and be able to classify small, medium and large consumers. If you have offered services that lend themselves to being turned off as easily as they are turned on, you can also get a handle on the mean time-to-live of the various service offerings.

    Survey Can Help Set Your Roadmap for the Future

    The above information allows you to craft an intelligent survey that seeks information to assist you in planning new services to meet changing needs, changes to existing services as business drivers change, and even end-of-life decisions for some services that are no longer in demand. This knowledge is critical to retaining your relevancy to the consumer for your service offerings, and to remain competitive with outside public operations. It’s always a good idea to formally review what your competition is up to. Just because you think your consumers are captive doesn’t mean they see themselves as captive. Review the competition’s service offerings at least monthly, and follow their PR news feeds for service offering announcements. These new service offerings may well be suitable grist for your survey, to identify if there is a need for these within your organization.

    Finally, there is room for the classic component of the satisfaction survey. How well did you do? A scale of one to five is usually sufficient. Further levels of granularity add little. One through five provides a high, medium and low score with something in between for those who want to be picky. Start by asking consumers to assess the service offerings themselves. Are the service offerings meeting their needs? Would more offerings be helpful?  Be sure to allow for written feedback as well. Then move to questions around ease of use. Can they find the service offerings they are looking for? How easy is it to find the service offering they seek? How easy is it to select the service offering and place an order? Are the terms and conditions of the service offering clear? Do they think the costs of the service offerings are reasonable and competitive?

    Next, move on to the deployment process. The key question here is to ask how they feel about the mean time to provision the service they ordered (self-selected). Did they get the service they asked for? Were there any additional clerical steps required for approval? (This allows you to count such instances). Was the service delivered as promised? Were SLAs met on each occasion? Was the monthly/weekly reporting adequate? Did they need to escalate any issue? (Capture, count and classify). How do they feel about IT’s ability to respond to their issues? How do they feel about IT’s ability to resolve their issues? How do they rate the internal IT cloud against the competition (Amazon)?

    The creative mind can conjure a number of other questions to include in the survey; however, there is a risk of driving boredom or even dissatisfaction once a survey gets beyond a certain size. Perhaps 10 to 15 questions should be sufficient to capture key information about the services you offer, your ability to respond to consumer demands, and trends and future service offering needs. There are quite a few Web-based survey tools, and many, such as SurveyMonkey, are free.

    By running regular surveys, and even embedding mini surveys in your selection, approval or quote process and the provisioning and deployment process, a wealth of information becomes available to the IT organization dedicated to improving consumer satisfaction and continuous improvement. In summary, here are the top tips to keep in mind:

    • Build surveys into your service order and service fulfillment procedures;
    • Run a quarterly satisfaction survey on your clients;
    • Differentiate between large clients and most frequent users, and small clients and least frequent users; and
    • Use the information to ensure you are delivering the service offerings your consumers need, with attributes they value, at prices they can afford.


  • Building A Data Center Can Be A Blast: A Little TNT Can Help

    Chris Curtis is the co-founder and SVP of Development for Compass Datacenters. We are publishing a series of posts from Chris that will take you inside the complexity of the construction process for data centers. He will explore the ups and downs (and mud and rain) of constructing data center facilities and the creative problem-solving required for the unexpected issues that sometimes arise with every construction process. For more, see Chris’ previous columns on the planning process.

    CHRIS CURTIS
    Compass Datacenters

    High explosives. Who doesn’t love them? Isn’t a large part of our culture based on blowing things up? Certainly some of our leading celebrities have made whole careers out of appearing in movies that feature one massive explosion after another. Well, the world of data center development is no different. It doesn’t happen often but, every once in a while, we have to roll out the dynamite and do some serious blasting. Like most things in the development world, the need to conduct controlled explosions has some plusses and some minuses.

    Lessons in Geology

    Most of the time, the average data center can be built without the need to prepare the site using cataclysmic force, but our site resides on what geologists refer to as a “limestone shelf.” In the technical parlance used by us developers, this is referred to as “a bunch of rock.” Maybe not as scientific, but a lot more descriptive. I don’t mind telling you, this news made the on-site guys positively giddy with excitement. The prospect of going to work and getting to say things like “fire in the hole” just seems to bring out the best in folks.

    Despite the electric atmosphere that the prospect of dynamite utilization brings, this is serious stuff. You know how your mother used to say that it’s “all fun and games till someone gets hurt”? Well, this is a few notches above that. Being blown to smithereens has a degree of permanence that you just aren’t going to find with the average office-related mishap. Just like any refined activity, there is a protocol that must be followed before you can begin demolishing large swaths of real estate.

    Telling the Neighbors

    First, you must alert the locals. This means going from house to house to advise the occupants of the homes surrounding your project site that they might just want to keep the kiddies and pets inside between the hours of 9 and 11 this coming Tuesday. Naturally folks have questions, “Will it be loud?,” “Am I at risk from flying debris?,” “Can I watch?,” to which the answers are of course, “Yes,” “No” and “Sorry, but our lawyers won’t allow that.”

    Second, you put up signs and mark off the area. With this type of signage I’ve found that it’s best to be simple and declarative: “Blast Site – Keep Out,” for example. Some developers prefer “High Explosive Area – Trespassers Keep Out,” but I find this a little pretentious and wordy. Short and pithy also eliminates the possibility of your sign being liberally interpreted. No one wants someone’s body parts distributed throughout the job site because they live in the neighborhood and decided that the word “Trespasser” didn’t apply to them. This type of thing can really hurt morale. When marking off the blast area I prefer to go conservative. Sure, it costs you a little more in orange plastic fencing, but I think we can all agree that the phrase “better safe than sorry” applies here.

    Dress Code: Hard Hat and Ear Plugs

    I don’t think I can describe the level of anticipation leading up to the big day. Remember waiting for that special gift at Christmas? This is better, since you know it’s actually going to happen, and you’re not going to get a sweater instead of that new bike you wanted. When blast day finally comes, everyone gets to wear a hard hat and ear plugs; this is a developer’s version of a Fourth of July celebration. I must admit that even though I’ve been to a few of these things, I can barely make it until the time the big switch is thrown. And once it’s thrown – wow. The explosions are deafening, there’s dirt and debris flying everywhere, grown men are jumping up and down and pointing – you just don’t get entertainment like this every day.

    Someone once said that “There is always one guy who doesn’t get the memo,” and that’s the case with blasting. Just accept the fact that no matter how thorough your canvassing, or how many signs you post, someone in the neighborhood is going to complain. This being the case, I was not surprised when I received a nasty email from a local resident complaining about the noise and helpfully suggesting that I build my data center somewhere else. Since all it takes is one crank with a friend who works at Channel 8 to turn your project into a PR nightmare, I recommend handling these situations face to face. As I said, I’ve been through this drill before, so I put on my sympathetic face (Note: It’s good to practice this before your visit. Sometimes a sympathetic face can look more like an “I could care less” face, or worse, the “surly punk” face, so you really need to get into character before you go) and went to visit the offended party.

    The Developer’s Listening Skills

    My first grade teacher always told me to be a good listener. This is great advice for these types of “disgruntled neighbor” situations because, really, what else can you do? After all, the blasting is already done, and there’s a big hole in the lot behind their house, so you sit and listen. Remember to nod at all the points they use to tell you that your actions are akin to a crime against humanity, and assure them that the data center you are building will not have a negative impact on the neighborhood. And this is true. Since it only takes a few folks to run a facility and the building is full of servers, traffic and noise aren’t going to be ongoing issues. This is what folks really want to know. Once you’ve apologized and assured them that the worst is over, even the most disgruntled citizen tends to listen to reason. After all, doesn’t everyone really just want to have their “day in court”?

    As a developer, the pendulum of your activity can swing widely. One day, you’re just another swarthy guy enjoying the primal thrill of blowing things up, and the next, a mild-mannered Dr. Phil talking an irate neighbor off the ledge. In this role, you must be prepared for anything.

    Stay tuned for the next article in the series, titled, Maybe We Should Turn This Data Center into an Ark: How Bad Weather Can Cause Chaos with a Construction Timeline.


  • Understanding Data Center Commissioning and Its Benefits

    Michael Donato, QCxP, LEED AP BD+C, is on the team at Emerson Network Power, Electrical Reliability Services.

    MICHAEL DONATO
    Emerson Network Power

    Commissioning has existed as a discipline of the building construction industry for nearly three decades, yet it is continually evolving. Despite widely available standards, there is still considerable difference of opinion as to the definition of commissioning and the processes involved. As a result, commissioning is generally misunderstood and some of the most valuable commissioning processes are underutilized.

    In the data center world, many owners don’t seem to have a clear picture of the purpose and value of this important quality assurance program. Commissioning is most often used to ensure a new data center process, system or expansion meets the owner’s needs. Specifically, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) asserts that the focus of commissioning is “verifying and documenting that the facility and all of its systems and assemblies are planned, designed, installed, tested, operated and maintained to meet the needs of the owner.”

    Because commissioning activities always tie back to meeting the owners’ needs, the owner is the best person to oversee the commissioning process. However, rarely does the owner have the time or expertise to fill this role, particularly in the middle of a large project. This is why owners typically hire a Commissioning Authority (CxA), such as Electrical Reliability Services (ERS), to provide building commissioning services, and oversee and execute the entire commissioning process.

    Unlike a Commissioning Agent, who has legal authority to make decisions on behalf of the owner, the CxA does not have any decision-making power on the project. However, a quality CxA will offer the expertise, guidance, and direction the owner needs to make informed commissioning decisions. Another way to think of the CxA is as a quality assurance professional that keeps the project focused on the goals of the owner, from start to finish, in order to realize the following benefits.

    Less Unplanned Downtime and Fewer Repairs

    Preventing or greatly reducing the possibility of unplanned downtime, which can be devastating to a business, is perhaps the greatest value commissioning provides for data center facilities. Commissioning activities ensure that mission-critical equipment is properly installed and that systems are fully integrated. The process checks for redundancy and single points of failure. It includes comprehensive system testing to verify availability in all operating modes. These activities help identify potential system-related problems so they can be resolved before leading to major equipment damage or a disruption of service. Commissioning can also ensure a well-trained and well-equipped operations and maintenance (O&M) staff that is less likely to make mistakes that lead to system failure.

    Reduced Life Cycle Costs

    Done properly, commissioning improves system performance throughout the life cycle of a data center. Better system performance not only optimizes data center performance, it also decreases operation and maintenance costs and cuts down on energy consumption for smaller utility bills.

    Fewer Change Orders and Delays

    Under the oversight of the CxA, projects experience fewer change orders, delays, and rework, avoiding the considerable costs of late occupancy, liquidated damages, extended equipment rentals, and other costs associated with delays.

    Cost-Effective Problem Resolution

    The commissioning process helps identify system-related problems early in the project when it is most economical to correct the issues. For example, design problems can be identified during design reviews as opposed to late in the construction process when it is much more time consuming and costly to correct them. Installation issues are pinpointed before system startup, and O&M process problems are noted before a component fails.

    Full System Integration

    For maximum data center availability, all critical systems – power, cooling and IT infrastructure – must function together as a fully integrated system. Historical approaches to testing and startup verified only that each individual system component functioned independently. Today, a CxA employs more sophisticated processes and tests to verify that components work together as an integrated system.

    Informed Workforce

    One of the outcomes of the commissioning process is a robust knowledge base about the new system or process, which can be translated into quality training activities, training materials, and O&M resources. Involving the CxA in the training process and Systems Manual preparation ensures that the O&M staff is well prepared and well equipped to operate and maintain the newly commissioned system. In addition, both veteran staff and new hires will have quality references for future training, refreshers, or troubleshooting.

    Benchmarking Data

    Commissioning creates extensive documentation for benchmarking system changes and trends. The data can be used to identify future problems with the system or process, maintain optimal operations, and evaluate future maintenance decisions.

    Improved Efficiency

    If efficiency features have been designed and built into the new system, commissioning activities can verify that the features function as intended. Commissioning can also ensure that the O&M staff has the training and operating resources it needs to fully leverage the design efficiencies, thus realizing the resulting energy cost savings.

    Enhanced Safety and Compliance

    The commissioning process produces a safer data center and reduces owner liability by uncovering safety problems throughout the design, construction, and occupancy phases of a project. Commissioners can ensure that owners and O&M staff receive proper education on safe operating and maintenance procedures pertaining to electrical and mechanical equipment.

    LEED Certification

    Commissioning is a requirement for Leadership in Energy and Environmental Design (LEED) certification. Projects attempting the certification must complete fundamental commissioning activities and can complete enhanced commissioning activities for optional credit. LEED projects must involve the CxA midway through the design phase or earlier. Involving your CxA early will help ensure your project is commissioned per LEED requirements.

    Return on Investment

    The benefits of commissioning often create a return on investment that far exceeds the cost of the commissioning project itself. In all recent ERS projects, cost/benefit analyses of key issues discovered and corrected during the commissioning process revealed value for the owner well beyond the cost of commissioning. These analyses took into account only material and labor costs and did not factor in the cost of data center downtime that likely would have occurred had the identified issues not been resolved.

    Despite the differences of opinion in the data center industry as to what the commissioning process should entail, commissioning is verifiably a critical step in the design and build of a new facility, system or addition. Ultimately, commissioning leads to greater availability, safety, and efficiency while reducing project and operating costs throughout the life cycle of the data center.


  • Real-Time Command and Control in NOCs

    Simon Clew is Sales Director at Adder Technology Limited.

    SIMON CLEW
    Adder Technology

    Since the arrival of virtualization, the Keyboard, Video and Mouse (KVM) market for data centers has changed dramatically. KVM is no longer as critical within the data center as it once was; rather, the focus has shifted to improving KVM within the network operations centers (NOCs) that run and manage multiple data sources.

    Today’s NOCs and Command/Control Centers are characterized by vast arrays of screens and control panels being used and managed by a team of busy, and more than likely stressed, individuals. In these hubs of activity, the ability to notice and react quickly to any situation is critical; otherwise, the result could be a catastrophic data center shutdown. For example, Emerson Network Power surveyed 41 data center companies and discovered that the average cost of an outage was $507,000.

    A key element for ensuring responsiveness in NOCs and CCCs is to give operators the ability to see clearly, and in real time, what is occurring in the systems they are managing or using. This has been the cause of huge problems for many NOC/CCC operators due to ineffective KVM solutions being used to view and control what was happening on a system network. Often the image on the screen would be poor quality or pixelated. Even at the desk the image may appear acceptable, but enlarged onto a video wall such small imperfections are hugely magnified, underlining the limitations of analogue systems. Add to this the inherent latency of legacy KVM solutions and the lack of support for input devices such as touch screens, and the operator’s ability to act quickly is further limited.

    Real-time Control

    The industry has moved toward NOCs with a real focus on real-time control. For example, in NOCs running a range of data centers, operators are looking at a multitude of data sources. If any of these are affected by a serious situation, the controller will need to act immediately. Those looking after data centers that are part of a critical infrastructure system, such as power distribution and management, will not have time to wait for a system to boot up and connect to the affected machine.

    Fortunately, with the advent of digital video and USB connectivity, real-time control and low-latency video are a reality. Another benefit offered by improved KVM in NOCs is the simplification of operator functions through the use of specialist input devices, such as keyboards containing a number of unique keys, touchscreens and tablets, combined with common access card readers and multifunction mice.

    Digital KVM is also making an impact in NOCs through providing the ability to command and control multiple screens (and computers) seamlessly using just one keyboard and mouse, a capability offered in switching solutions such as the Adder CCS4USB. Allowing an operator to monitor several systems from one work station has a range of benefits, not least of which is improved efficiency.


  • Beating the Storage Odds in Age of Big Data

    Ambuj Goyal is general manager, IBM Systems, Storage and Networking.

    AMBUJ GOYAL
    IBM

    Technical evolution moves at different rates and for different reasons. Unlike other areas of computing, for example, storage solutions for distributed systems have evolved as a result of proliferation, rather than more traditional reasons such as price, performance and technical advancements. In other words, when organizations have bought a particular storage technology, they’ve grown with it whether they planned to or not.

    That’s largely because storage vendors have spent a lot of time creating products that are based on a variety of individual architectures and protocols. Once an organization commits to one of those architectures, it’s difficult to even consider adding or transitioning to a different architecture, even if that alternative offers cost, performance, or management benefits. The result of being painted into this proverbial corner, of course, is that it can lead directly to things like storage sprawl, underutilized storage systems, and complex management – all of which reduce productivity and add cost.

    Storage Controllers at the Center

    One area of repeated isolation has been the storage controller, or the brains of the storage system. For various reasons, the industry has had a propensity to create separate storage controllers for different protocols, such as block, file or object. Even though the media on which these controllers store the information is the same, each storage system supports only the protocol it was designed to serve. The software (or so-called microcode) simply interprets the protocol and stores the information.

    So the question becomes, why has the industry produced so many different controllers? One reason is that technology has a tendency to be “fast out of the gate.” The industry is rife with examples of technologies that have raced to production and market only to be reined in at a later point with standards or consortium-led initiatives that enable more competition, ease of use, or ease of management. And to be honest, it’s often in the vendor’s best interest to push the concept of “engineered” or “optimized” boxes for each protocol.

    The Revolution is Here

    The storage situation is not dissimilar to what the industry experienced with the original x86 ecosystem, where suppliers and vendors succeeded by creating a certain technology proliferation in the enterprise. Today, however, that ecosystem has been revolutionized. Through workload consolidation technologies implemented in private and public clouds, there is now higher utilization and more consistent management. Note, too, that in the mainframe and Unix worlds, workload consolidation and the resulting improved utilization have been the norm for more than a decade.

    The storage environment is ready for the same kind of revolution. It’s ready for solutions that abandon the proliferation strategy of days gone by and help organizations avoid lock-in through wide protocol support, and encourage scalability through openness. That’s what we’re working on at IBM. Our Storwize platform of high-capacity systems, for example, tackles these issues head on.

    Do Your Research

    But don’t take my word for it. Ask yourself, what if there was a way to abstract the protocols from the basic store and retrieve functions? What if you could use old storage and new storage simultaneously, thus maximizing the return on capital investments? What if an application provider could automatically manage the life cycle of storage without getting a storage administrator engaged?

    That’s where the storage industry should be headed.
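
    To make those “what ifs” a little more concrete, here is a minimal sketch, assuming hypothetical class and method names rather than any vendor’s actual design, of how block and object front ends could share a single store-and-retrieve back end:

    ```python
    class BackingStore:
        """One shared pool of capacity behind every protocol front end."""
        def __init__(self):
            self._chunks = {}

        def write(self, key: str, data: bytes) -> None:
            self._chunks[key] = data

        def read(self, key: str) -> bytes:
            return self._chunks[key]


    class BlockFrontEnd:
        """Speaks block semantics, but only translates them onto the shared pool."""
        def __init__(self, store: BackingStore):
            self.store = store

        def write_block(self, lba: int, data: bytes) -> None:
            self.store.write(f"block/{lba}", data)

        def read_block(self, lba: int) -> bytes:
            return self.store.read(f"block/{lba}")


    class ObjectFrontEnd:
        """Speaks object semantics against the very same pool."""
        def __init__(self, store: BackingStore):
            self.store = store

        def put(self, bucket: str, name: str, data: bytes) -> None:
            self.store.write(f"object/{bucket}/{name}", data)

        def get(self, bucket: str, name: str) -> bytes:
            return self.store.read(f"object/{bucket}/{name}")


    # One pool of media, two protocol personalities layered on top of it.
    pool = BackingStore()
    BlockFrontEnd(pool).write_block(42, b"\x00" * 512)
    ObjectFrontEnd(pool).put("backups", "db.dump", b"...")
    ```

    The point of the sketch is simply that the protocol interpretation lives in a thin layer, while capacity, placement and lifecycle management can be common underneath it.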


  • Showing IT Value through Proper Metrics

    Hani Elbeyali is a data center strategist for Dell. He has 18 years of IT experience and is the author of Business Demand Design methodology (B2D), which details how to align your business drivers with your IT strategy.

    HANI ELBEYALI
    Dell

    Even when IT investments show positive business results, how can you prove beyond doubt that the improvement is attributable to those investments? Perhaps it stems from external economic factors, a shift in market demand, or a competitor’s weak strategy. Measuring and reporting the benefits to the business is fundamental, but how and what you measure matter just as much. The right metrics will support your efforts to realize and sustain IT value.

    IT is Core to the Organization

    Reporting to management on IT should not mean using IT jargon. This doesn’t help IT and can be counterproductive because the unfamiliar language causes the IT department to be seen as an outsider to the organization’s core.

    Imagine the CIO walking into a board meeting. The head of sales reports on the execution strategy to meet the organization’s financial targets, the CMO reports on the globalization strategy to reach new customers, and the CFO reports on the financial health of the organization. Now comes the CIO’s turn to report on IT, and he or she starts by talking about the data center PUE ratio, the new ERP modernization project, the unified fabric offerings and, more abstractly, the cloud.

    Immediately, everyone in the room puts their heads down, then looks at each other. There’s confusion everywhere – no one understands what this jargon really means. They are asking, “How does any of that IT stuff enable us to perform our business and compete? Why is the CIO even here?” And so on. This type of communication cements the notion that IT is not core to the organization; that it’s only a supporting function. What exacerbates the issue is the attitude that if IT is not core to the organization, why not outsource it? Or, at a minimum, out-task some functions at a cheaper price?

    What to Measure and Communicate are the Real Issues

    I certainly agree that showing IT’s value to the enterprise is challenging. The problem is not the value IT creates, but what to measure and how to communicate that value. Current practices in IT performance measurement, metrics and reporting do not help, because they concentrate on how IT spends money rather than on the value created from that spending.

    Businesses usually measure success in monetary terms: profits and losses, and attainable financial targets. Investments in IT are made only in initiatives yielding a positive return on investment (ROI). In many cases IT projects are long term and the ROI comes over long payback periods, which is not an attractive proposition to the business. Chargeback and re-allocation make things worse, because each line of business argues that it is paying too much. To change the perception of IT into that of a business driver, you must stop reporting on hardware and software performance and start reporting IT’s contribution to the success of the business.

    What Happens When IT’s Business Impact Is Reported Correctly

    To demonstrate technology’s contribution to the business, and to show what executives expect to see when IT reports to management, take a look at the example report[1] provided in Table 1.0, which is for a mid-size organization. Keep in mind this report is for illustration only; the intent is to show the important factors worth reporting to the business units for the past quarter.
    Table 1.0 (click to enlarge).

    Communicating IT Value

    The sample report above speaks the same language as the business and reflects IT’s contribution to the business over the past quarter:

    • IT expense as a percentage of revenue and gross income for the quarter was 6.3%; the IT organization ranked in the top 10% when compared with the IT industry.
    • IT’s contribution helped grow the business by 14.9% and drive 26.7% top-line revenue growth while keeping operating expenses flat; again, the IT organization ranked in the top 10% of the IT industry.
    • The report shows evidence of the value strategic technology investment can add to business performance, even in a downturn economy.
    • Most important to point out: the business continues to squeeze and shrink the wrong operating expense. IT is only 6.3% of the firm’s revenue.

    The business only wants to know the impact of the IT performance on revenue, cost and margin. It’s the job of the IT leader to act as a gateway and ensure communications are translated properly in both directions.
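
    As a back-of-the-envelope illustration of that kind of reporting, the headline ratios above can be reproduced with a few lines of arithmetic; the revenue and spend figures below are invented for the example and are not taken from the sample report:

    ```python
    # Hypothetical quarterly figures for a mid-size firm, in millions of dollars.
    revenue_this_quarter = 480.0
    revenue_last_quarter = 418.0
    it_expense = 30.2            # total IT spend for the quarter
    operating_expense = 310.0    # non-IT operating expense, held roughly flat

    it_share_of_revenue = 100 * it_expense / revenue_this_quarter
    revenue_growth = 100 * (revenue_this_quarter - revenue_last_quarter) / revenue_last_quarter

    print(f"IT expense as a share of revenue: {it_share_of_revenue:.1f}%")
    print(f"Quarter-over-quarter revenue growth: {revenue_growth:.1f}%")
    ```

    The same handful of numbers, trended quarter over quarter and benchmarked against industry peers, is what a board will actually read.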

    Please note the opinions expressed here are those of the author and do not reflect those of his employer.



    [1] This is an example report, for illustration purposes only.

  • 5 Key Steps to a Smooth Cloud Transition

    Alan McMahon works for Dell in enterprise solution design, across a range of products from servers and storage to virtualization, and is based in Ireland.

    ALAN McMAHON
    Dell

    In today’s Virtual Era, the role of IT has changed because of the speed at which technology is evolving, and it requires the same evolution of our IT teams. For teams to support and carry out the goals of their organizations, they need the infrastructure to grow while becoming more agile and efficient. But these changes bring challenges. Many companies, especially smaller ones that are just getting started, lack the resources to make them. It can also be difficult to manage the outdated hardware and software still in use.

    It’s important that we begin streamlining processes and making better use of the cloud in order to gain an edge over the competition while fulfilling our company’s objectives. There are five imperatives to follow in order to accomplish a smooth transition to the cloud. Each helps transform IT as we know it and reduces cost and complexity, and each is largely self-funding on its own. Together, these five imperatives make IT more agile and efficient and free the IT team to spend its time, energy and financial resources on other areas of innovation and growth for the company. Let’s take a look at the five imperatives:

    Leverage the Power of Virtualization

    The first thing to do is virtualize in order to boost efficiency. By removing touch points, silos and servers, costs are dramatically reduced. This task can prove challenging for companies with an inflexible system that won’t allow for full integration at present. But since technology is ever-changing, more and more resources are becoming available to help you unify everything seamlessly.

    Manage Your Data

    Today, data is being created at unprecedented levels, and it’s difficult to manage because it is relatively unorganized and inefficiently stored. We need it organized so that we can find what we want, when we want it, with little effort. Our data needs to move around easily and transfer to the device we want it on, at the moment we want it. While it may seem reasonable to just add more storage devices to organize and handle the vast volume of data, the better choice is to virtualize it so that it can be everywhere, all the time. Virtual storage that is optimized and automated is the answer. Our data needs to be protected, recoverable and archived so that it can be integrated with our organization’s existing infrastructure.

    Enable Mobility

    Many of us do our work virtually, working from home, around town or even while travelling the world. We may have team members that we seldom see face-to-face, if we’ve ever met them at all. It’s essential to be able to have access to data and applications from anywhere there’s a wi-fi connection, at any hour of the day. Our teams also work with a variety of devices, such as tablets and smart phones, so it’s important that they have safe connections that allow employees access to the information which they need to get their work done.

    Consumerize IT

    As our workforce becomes more tech-savvy, IT needs to be made accessible to everyone in order to enable our employees to work remotely with their own devices of choice. There’s a vast array of devices, operating systems, hardware and software that our team members are comfortable using, so the IT organizations need to put systems in place to ensure that our data is monitored, protected, secure and backed-up, while still remaining accessible to everyone in the company.

    Transition To the Cloud

    In order to leverage our resources, work more efficiently, reduce costs and streamline management, we need to make the move to the cloud. Remote servers and NAS storage make this possible. Each company has its own needs and will approach the transition differently, but getting into the cloud paves the way for a more advanced, efficient working environment.

    As we move forward into this Virtual Era and transition to a more mobile work environment, we must keep these five imperatives in mind. Not only will they help with the transition, they will ensure our data is secure yet available to the people who need it.


  • Cloud Crosswinds Ahead as Behemoths Battle on Price

    Ted Chamberlin is vice president, market development, cloud for Coresite.

    TED CHAMBERLIN
    Coresite

    With the recent announcement from the Microsoft Azure camp stating “its commitment to price match Amazon Web Services prices for commodity services like compute, storage and bandwidth, aligned with the general availability of Windows Azure infrastructure services,” the official race to zero begins in the Infrastructure as a Service market. This shot across the bow will certainly bother AWS, but it should scare the stuffing out of the rest of the IaaS market – particularly the providers that traditional businesses trust a bit more for enterprise-class workloads.

    Providers like GoGrid, Tata, Savvis, Terremark, Rackspace Cloud and others just entering the market will face the heavier crosswinds as these behemoths do battle. The stark reality is that hyper-scale providers like Amazon use their operational acumen and scale to drive IaaS pricing down on a regular basis, creating an ever tougher environment for the rest of the cloud providers. How is an IaaS provider to survive, let alone thrive?

    The Power of Community

    Many of these market entrants already have the solution teed up in their data centers. Many companies choose their colocation, hosting or cloud providers based on the current occupants of those facilities, so that they can trade traffic, conduct commerce or simply inhabit a robust ecosystem.

    The next step in this evolution is to develop these neighbors into a fully-fledged community where users can vote/nominate who enters the community and freely conduct business in these exchanges. The community clouds truly contain elements that represent the best of both worlds in cloud adoption, scalability and exclusivity.

    According to my alma mater Gartner, “Private Cloud Computing is among the highest interest areas across all cloud computing according to Gartner, with 75% of respondents in Gartner polls saying they plan to pursue a strategy in this area by 2014.” Many enterprises understand cerebrally that public cloud scale is a key enabler for growth, but many are still uncomfortable moving critical, compliant and sensitive workloads to the public cloud.

    Community clouds, although still nascent in the landscape, will start to provide the benefits of interconnection and limited participation that will help companies move those workloads to a community cloud. These communities of like-minded organizations also represent a departure from commodity, low-cost, low-value services. The value of the community rises as the participants – provided they are the right ones – join.

    Operators and enablers of cloud communities also have value-based roles to play, such as providing the interconnect services, advertising the services of the community, and developing and managing the hardware and software stacks. The U.S. government’s FedRAMP initiative has test-driven the community cloud for government agencies and partners, and despite the less-than-atomic pace the Federal government exhibits, growth is starting to show in Federal cloud initiatives.

    Next up? Potentially digital media clouds, expansion of financial services low-latency communities, and healthcare exchanges. All of this potential commerce and transaction in these communities represents a path to differentiated, value-based IaaS services that will help slow the race to zero, so that everyone can carve out their own revenue opportunity in the new world order of cloud services.


  • DCIM: Inside the Information Bubble

    Gary Bunyan is Global DCIM Solutions Specialist at iTRACS Corporation, a Data Center Infrastructure Management (DCIM) company. This is the 11th in a series of columns by Gary about “the user experience”. See Gary’s most recent columns: Keep Your Servers Cool – And Your Business Hot, Turning DCIM’s Big Data into Actionable Insight and Unlock Your Capacity By Unplugging Your Ghost Servers.

    GARY BUNYAN
    iTRACS

    If you’re like me, then the data center is the center of your professional universe. With tens of thousands of assets co-habitating in a vast web of inter-relationships and inter-dependencies, it’s one of the most complex entities on the planet and a simply amazing environment in which to work.

    But the millions and millions of technology users and consumers out there barely know that we, and our data centers, even exist. And that’s the way it should be.

    Consumers can’t – nor should they – be bothered with the mind-boggling challenge of keeping all of those IT, facilities, and building management systems purring in harmony on the data center floor. All they want is the information those systems deliver in milliseconds so they can pursue their professional and personal lives with maximum freedom, choice, and success.

    And I don’t blame them.

    In fact, I’m very much like them.

    Immediate and Valuable Information

    I, too, want holistic information at my fingertips. But the information that I want, instantly and effortlessly, is a different kind of information. My colleagues and I want context-rich information about all of the assets, systems, applications, and workflows that feed into the operation of the data center so we can manage the physical infrastructure smarter.

    I don’t want to be buried in massive amounts of unrelated and indecipherable spreadsheets. I don’t want to be overwhelmed with tons of infrastructure-level minutiae, literally trying to “connect the dots.”

    Instead, I want holistic management information – instantly – at my fingertips. Information that is already “connected” and is contextually-rich, meaningful, and actionable so I can clearly see the disposition of the entire physical ecosystem and how to optimize it.

    I want to be in what I call the “Information Bubble.”

    The Information Bubble

    DCIM with interactive 3D visualization creates what I describe as an “information bubble” – a holistic, intuitive, real-time management environment, which, both literally and figuratively, immerses you in analytics, dashboards, and management tools giving you complete command-and-control over the physical ecosystem.

    I call it a “bubble” because all of the information you need is right at your fingertips – and all of the underlying data that got you there is beyond the bubble, outside of your periphery. So you’re neither distracted nor confused.

    Inside the bubble:

    • You are navigating within a living, breathing 3D model of your infrastructure so you can see, understand, and manage it with unparalleled clarity.
    • You are immersed in deep-dive reports, dashboards, and analytics.
    • Information comes to you intuitively and holistically – you don’t have to go get it.

    The “information bubble” is like your own private operations center. Everything you need at your fingertips. Image courtesy of iTRACS software.

    The Value of Interconnected Data

    So how does DCIM take all of the underlying data about the infrastructure and present it contextually and holistically inside the “bubble?”

    By collecting, aggregating and analyzing data about both individual assets and the interconnected environment in which they are operating.

    Think of it this way:

    The amount of data collected from today’s physical infrastructure is increasing exponentially. This includes data about power, space, network connectivity, cooling, server utilization, business output, work-per-watt, facilities loads, and much more. But this data isn’t information. And the proliferation of this data isn’t insight. It’s the interconnectedness and context of the data that creates the value.

    Data becomes information when it is presented in context, with an awareness of the interconnected whole and a complete real-time view into the environment – not just a portion of the environment over a “slice” of time.

    Inside the “bubble,” the exact disposition of your assets, and what you need to do about it, is crystal clear. Image courtesy of iTRACS software.

    Remember: Give someone a piece of data and you haven’t changed anything. Teach them how to turn it into information and you’ll change the world.


  • Staying on Top of an Ever-Evolving Industry

    Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

    TOM ROBERTS
    AFCOM

    As you read this column, I am in Las Vegas surrounded by more than 2,000 of my closest friends from the data center and facilities management fields. As president of AFCOM, I couldn’t think of a better place to be than Data Center World Spring 2013!

    As you know, no one in this industry can afford to grow stale or complacent. There are many ways to stay updated and in touch, such as AFCOM’s bi-monthly magazine (Data Center Management), monthly e-mail newsletter (The Communique) and the comprehensive Digital Library on our members’ website. All of these can assist data center professionals in gaining relevant and practical education.

    Face-to-Face Interaction With Colleagues

    Yet there is still nothing as interactive as face-to-face meetings and conversations, which is why we convene two events per year and support our local chapters. At Data Center World, you’ll find the industry’s “movers and shakers” all under one roof. By that I mean data center professionals from every industry – government, health, financial, education – with expertise in all facets of the data center. Through case studies given by pros who are on the front lines, you can really “dig in” to the information. For example, here are some of the thought leaders who will be at DCW:

    • Eric Lakin, manager of ILM Facilities for Trinity Health, will address ILM, one of the least understood but most beneficial long-term strategies an organization with a large amount of data can implement. Business value is the driver for ILM policies including active storage, archiving, and ultimately disposal. His session, “Why You Should Have an Information Lifecycle Management Strategy,” will examine the benefits of an ILM program and discuss the first steps required to make it a reality.
    • Joseph Furmanski, Associate Director of Data Center Facilities and Technology for UPMC, will present a fascinating ongoing project in which consumer technologies are used to manage corporate infrastructure—a shift away from traditional desktops and laptops to tablets and cell phones with apps such as YouTube, Blogspot, DropBox, Skype and QR Codes.
    • Myron Sees, Sr. Staff Data Center Specialist at Chevron, will discuss design standards in a session titled “The Development of Data Center Design Standards for Global Data Centers.” In it, he describes how Chevron developed design standards and standard operating procedures for 100 plus data centers that spanned the globe, yet varied in required redundancy, power, cooling and other requirements.

    Through the years, Data Center World attendees have requested more and more peer interaction so this conference includes a number of “platforums,” a term we coined to refer to idea-generating panels and roundtable discussions. Sometimes the most innovative solutions come from these casual, yet specific settings. They include topics like Business Continuity/Disaster Recovery, Energy Efficiency Rebates and DCIM.

    One Stop Shopping, Too

    Once attendees discover a solution or solutions that fit their data center, they can move to the expo hall, where exhibitors talk specifically about products. The final step is face-to-face time with the companies of your choice to help you make a decision.

    At AFCOM, our mission is to advance the education of industry professionals by providing comprehensive, vendor-neutral insight and analysis in key areas affecting all data centers. Data Center World is a cornerstone of our mission to keep the industry up to speed.

    If you would like to know what’s going on at Data Center World this very moment, visit us on Facebook and Twitter (@DataCenterWorld) or follow all the conference news at Data Center Knowledge (@datacenter).


  • Earth Day Sparks A Look At Data Center Energy

    Marina Thiry is director of strategic marketing – data centers for ABB.

    MARINA THIRY
    ABB

    As we celebrate Earth Day this week, many corporations are looking at their environmental strategies and seeking to become more “green.” This begs the question: for organizations seeking to reduce their energy footprint, is it possible for a data center to employ distributed energy resources, such as solar power, without giving up reliability? It is a worthy goal, but there are major issues to address along the way.

    First, let’s be clear: yes, it is possible. The characteristics of distributed energy resources for the data center include distributed energy sources (that is, beyond the emergency back-up generator) and – this is key – centralized control that may operate with the main power grid but can also operate independently of it. The latter is sometimes referred to as on-site generation. Another characteristic of a distributed energy resource system is energy storage, though no one really has that yet in the sense we are discussing here.

    Resilience and Sustainability

    One of the advantages of distributed energy resources is that they add resilience and sustainability to the total energy system within the data center. A distributed energy unit can achieve a level of reliability as high as, or higher than, any single resource. The challenge is to manage, utilize and optimize the unit in a dynamically changing fashion.

    From what I see, as businesses recognize the competitive advantages an agile data center enables, they begin to invest in modernizing their data center infrastructure and operations so they can keep up with business requirements – whatever it takes to deliver more web services faster, and in a sustainable way. So, at ABB, we’re constantly innovating energy solutions so our customers can respond to this demand.

    Consider, for example, mobile applications. Apple reported that customers downloaded over 40 billion apps, with nearly 20 billion in 2012 alone. These mobile apps, and the information and transactions collected from them, create enormous increases in data, as well as huge increases in the IT infrastructure and the energy required to support the business requirements behind those apps. At the same time, data centers are faced with the challenge of consolidating their resources. So data center operators are in dire need of finding significant ways to optimize.

    DCIM Allows For Energy Monitoring & Management

    If you are interested in reducing your energy footprint, one of the most effective strategies for attaining aggressive yet sustainable growth is a data center infrastructure management (DCIM) system. A DCIM system capable of managing those energy assets is vital to lowering operating costs while maximizing availability and reliability. This approach helps extend the life of the data center by safely and reliably boosting the productivity of existing assets, getting more from less while keeping track of the return on the sustainable energy investments that also help reduce the energy footprint.

    The combination of distributed energy resources and DCIM offers significant reliability and efficiency improvements that begin with the energy source and purchase, and extends to improving energy utilization. A DCIM system like Decathlon provides the granular visibility, decision support and centralized control technologies—including the energy trading capabilities that enable data centers to exploit these new efficiencies safely and reliably.

    Tips for Managing Distributed Energy

    We recognize that every data center is different, and one data center’s successful approach may not work for another. Talk to experts who are well versed with power conversion and delivery technologies, utilities, and DCIM. Here are a few tips to keep in mind as you proceed in using distributed energy resources and DCIM:

    • To enable near-instantaneous balancing of data center energy supply and demand, you will need two-way communications delivering real-time information. Consider how your approach will manage multiple levels of integration and interoperability among the various components of your data center.
    • As distributed energy technologies evolve, so will the applications and benefits. Think beyond the sources of energy; consider how your data center infrastructure will manage distributed power generation, storage, process automation and demand-response technology.
    • Examine effective ways of integrating different forms of distributed energy resources. Depending on your data center’s geography, some energy sources may be more practical or offer better economies of scale.
    • Finally, don’t underestimate the significance of the monitoring, decision support and process automation capabilities in your DCIM system. For example, consider the extent and depth of its energy management capabilities, such as alerting you when energy is cheaper to purchase, or scheduling compute loads into less expensive periods (a toy sketch of that idea follows this list). It won’t matter how robust or technically advanced your energy delivery network is if your data center infrastructure management is inadequate for the task.
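
    Here is the toy sketch mentioned in the last point: a greedy scheduler that pushes a deferrable batch workload into the cheapest hours of the day. The hourly prices and load figures are invented for illustration and are not ABB data.

    ```python
    # Place deferrable batch work into the cheapest hours of the day (toy example).
    hourly_price = [0.11, 0.10, 0.09, 0.09, 0.10, 0.12,   # EUR/kWh, hours 0-5
                    0.15, 0.18, 0.21, 0.22, 0.21, 0.20,   # hours 6-11
                    0.19, 0.19, 0.20, 0.22, 0.24, 0.25,   # hours 12-17
                    0.23, 0.20, 0.17, 0.14, 0.13, 0.12]   # hours 18-23

    batch_hours_needed = 6   # deferrable compute, in hours of runtime
    load_kw = 250            # power drawn while the batch runs

    # Choose the cheapest hours of the day for the deferrable load.
    cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])[:batch_hours_needed]
    cost_cheapest = sum(hourly_price[h] for h in cheapest_hours) * load_kw
    cost_worst = sum(sorted(hourly_price, reverse=True)[:batch_hours_needed]) * load_kw

    print(f"Run the batch during hours {sorted(cheapest_hours)}")
    print(f"Energy cost when scheduled cheaply: {cost_cheapest:.2f} EUR")
    print(f"Energy cost at the worst hours:     {cost_worst:.2f} EUR")
    ```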


  • Making Robust Data Center Design Decisions

    Bruno Raeymaekers is a data centre consultant at ARCADIS, a global project management, consultancy and engineering company, which has delivered over 600,000 square meters of net IT space in the past 15 years.

    BRUNO RAEYMAEKERS
    Arcadis

    Over the years, our industry has come up with a veritable buffet of technological solutions. Whether you’re placing many eggs in one basket and have opted for a Tier IV facility, or your multi-site IT strategy allows for a number of decentralized Tier II locations, during design you’ll be hit with sheet after sheet of technical information, operational benefits, reliability impact assessments, and so on. Beyond those topics, cost will quite likely take a high position in the overall comparison of design choices. Many whitepapers have analyzed, down to the last nut and bolt, all the pros and cons of what’s available on the market.

    This often leads to the assumption that standard designs should be readily available. And yes – whoever has worked in the industry for some time has encountered many technical solutions, has investigated many more, and has formed an opinion regarding preferred solutions, based on experience in construction and operation of mission critical facilities.

    But do these preferred solutions stand fast for different markets? Can your off-the-shelf design withstand not only the expected reliability litmus test for your financial/banking client, but likewise the high expected EBITA/Return On Investment of your pharmaceutical client?

    Venturing into the design process and costing exercise of your soon-to-be crown jewel, an evaluation of Total Cost of Ownership (TCO, or whatever you wish to call it) will give you a bottom-line comparison of the different alternatives. Be it your cooling system, the choice of a certain type of UPS or generator, or the facility as a whole: after incorporating investment, operational and replacement/end-of-life (EOL) costs, you’ll get a pretty clear view of where to head.

    Depending on your company situation and business case, you’ll have to account for different parameters in the calculations. What is your current and predicted electricity cost? Where do government incentives come in? What is your expected uptake profile over the next couple of years?

    TCO expressed as Total Present Value: choose your parameters wisely. (Click to enlarge.)
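
    As a rough sketch of how those parameters roll up into a single present-value figure, the snippet below discounts yearly operating and replacement costs back to today and adds the up-front investment. The discount rate, energy price, uptake profile and efficiency (PUE) figures are placeholders for illustration, not recommendations.

    ```python
    def total_present_value(capex, annual_opex, replacement_by_year, discount_rate):
        """Discount each year's spend back to today and add the up-front CapEx.

        annual_opex: list of yearly operating costs (energy, maintenance, staff).
        replacement_by_year: dict mapping year -> end-of-life replacement cost.
        """
        tpv = capex
        for year, opex in enumerate(annual_opex, start=1):
            cash_out = opex + replacement_by_year.get(year, 0.0)
            tpv += cash_out / (1 + discount_rate) ** year
        return tpv


    # Illustrative 10-year comparison of two cooling concepts (figures invented).
    years = 10
    uptake = [0.3, 0.6, 0.9] + [1.0] * (years - 3)   # share of a 1,000 kW IT load in use
    price_kwh, hours_per_year = 0.12, 8760


    def yearly_energy_cost(pue):
        return [1000 * u * hours_per_year * price_kwh * pue for u in uptake]


    chiller_based = total_present_value(4.0e6, yearly_energy_cost(pue=1.6), {8: 0.8e6}, 0.08)
    economiser = total_present_value(4.6e6, yearly_energy_cost(pue=1.25), {8: 0.6e6}, 0.08)
    print(f"Chiller-based concept, total present value:       {chiller_based:,.0f} EUR")
    print(f"Air-side economiser concept, total present value: {economiser:,.0f} EUR")
    ```

    The structure of the calculation matters more than the invented numbers: every parameter discussed above (electricity price, uptake profile, discount rate, replacement timing) has a visible place where it can swing the outcome.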

    The table below summarizes just a few of the high/low parameter values that feed into any TCO analysis, and that we’ve encountered over the years:
    Table: high/low parameter values encountered over the years (click to enlarge).

    The figures above combine a wide variety of clients and locations. Building a facility in the central US, in Western Europe or in an emerging market will quickly tip any assessment in another direction. Most projects present a healthy mix of minimum and maximum values for that specific client, time and place.

    Entering those parameter sets into design evaluation studies will yield significantly different overall TCO values. Adding other weighting criteria, such as required resiliency, modularity and scalability, will show you that different design choices fit different projects.

    An Example of Efficiency

    In past years, the fight for energy efficiency has mostly been won on the cooling infrastructure of the facilities. Assuming you’re set for an air cooling system, and well into air flow management (cold or hot aisle containment and everything that goes with it), an air-side economizing system might prove to be your best bet.

    But does it also make sense for your economic situation? Company 1 wants a 28°C supply air condition, has a business case for an 8% IRR, plans to take up the full 500 m²/1,000 kW space within half a year, and pays €0.19/kWh for electricity.

    Company 2, with its legacy equipment, doesn’t want to risk anything beyond 20°C (yes, they do still exist), yet expects a 15% IRR, will be building a site expected to fill up over three years, and is paying only €0.08/kWh.

    Both have their minds set on a Tier III resilient facility.

    After crunching the numbers, you might find that for Company 1 the air-side economiser solution makes perfect sense, considering the impact the conditions above have on CapEx and OpEx.

    But Company 2, with the same solution, will discover that its 15% IRR target is nowhere to be reached: the financial savings from the more efficient installation are in large part negated by the low electricity cost and the longer uptake delay (even with a scalable build scheme). Furthermore, the 20°C expected supply air condition not only hurts efficiency directly, it also requires an additional chiller plant, compared with case 1, which is nearing the compressor-free tipping point and thereby significantly reduces CapEx and maintenance costs.

    So that’s that – or is it?

    Looking closer at the Company 1 analysis, the preferred design choice is just a few percentage points away in overall TCO from another option. You’ll therefore want to make sure your solution is robust: what if your uptake is hit with an unforeseen delay due to divestments in three years’ time? Or your energy costs increase at double the expected rate? Such changes can quickly erase a couple of percentage points and shift the balance between design options – but does the bottom line, all things considered, remain defensible?

    Performing a sensitivity analysis will allow you to identify those risks and unforeseen factors, and to at least start thinking about planning for some alternative scenarios should you need to adjust your concept along the way.
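
    One way to run such a sensitivity check, sketched below with entirely made-up cost models and numbers, is to sweep the TCO comparison across a grid of the parameters you trust least – here electricity price and uptake delay – and watch whether the preferred option ever flips:

    ```python
    # Toy sensitivity sweep: does the more efficient (but pricier) design A keep
    # beating the cheaper-to-build design B when the least certain inputs move?
    # All cost models and figures below are invented for illustration.
    def tco(capex, yearly_energy_mwh, price_per_mwh, years=10, rate=0.08):
        return capex + sum(
            yearly_energy_mwh * price_per_mwh / (1 + rate) ** y
            for y in range(1, years + 1)
        )

    for price in (80, 120, 190):               # electricity price scenarios, EUR/MWh
        for delay_years in (0, 1, 3):           # slower uptake of the white space
            usage = 8000 * (1 - 0.15 * delay_years)   # crude proxy for a slower ramp-up, MWh/year
            a = tco(capex=4.6e6, yearly_energy_mwh=usage * 0.8, price_per_mwh=price)
            b = tco(capex=4.0e6, yearly_energy_mwh=usage, price_per_mwh=price)
            winner = "A (efficient)" if a < b else "B (cheaper build)"
            print(f"price={price} EUR/MWh, uptake delay={delay_years}y -> {winner}")
    ```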

    Correctly assessing these topics with a broad and independent view of the technology market, and discussing them with your facility, IT and financial managers, might set you back a couple of weeks in the early stages of your project – but it will avoid those unpleasant “why on earth did you?” fights down the road. Or it will at least create a clear, written understanding of the assumptions behind those evaluations in five or ten years’ time, when no one remembers who said what.


  • Technology Proves its Value in Wake of Boston Bombings

    Bill Kleyman is a virtualization and cloud solutions architect at MTM Technologies, where he works extensively with data center, cloud, networking and storage design projects. You can find him on LinkedIn, and more of his regular contributions here on Data Center Knowledge.

    BILL KLEYMAN
    MTM Technologies

    In light of this week’s events in Boston and elsewhere, one of the strongest statements we can live by is that the good guys will always outnumber the bad ones. While some people have said that these types of events push people to live closer to the edge (as in You Only Live Once, or YOLO), the reality is that these horrible events actually bring people closer together and deepen our appreciation of each other’s humanity.

    In the wake of the Boston Marathon bombings – which brought many reminders of 9/11 – we saw a new hero emerge: technology. The fast responses of medical professionals already onsite likely saved numerous lives. The runners who finished the race and then continued on to donate blood at the local hospital should be praised as well. The human element, no doubt, played a vital role in minimizing casualties and helping people get medical attention quickly. Still, as the smoke clears and we begin to analyze and understand the situation, law enforcement and the officials working on this case have some interesting new tools at their disposal.

    Technology: The ‘Good Guy’ Multiplier

    According to Boston Police Commissioner Ed Davis, the site of the bombing and the surrounding area – Boylston Street, which serves as the finish line of the Boston Marathon – is one of the most well-photographed and documented areas in the country. Although the crime scene is complex, technology can help pinpoint the chain of events that led to this horrible incident. In 2001, the prevalence of recording and digital equipment wasn’t anywhere near as high as it is today. On April 15, 2013, a lot more “eyes” were watching the course of the day unfold. Let’s look at some of the areas where technology was involved in the event and its aftermath.

    • IT consumerization. This is a common marketing term, along with BYOD, but the true magnitude of IT consumerization was on display on Monday. Because so many people have cell phones or other devices with cameras, thousands of high-quality photos were taken from almost every angle and vantage point during the Boston Marathon. People were on the ground, in buildings, at the finish line and everywhere in between. Within minutes, photos of the bombing were circulated and analyzed. These digital photos were used to piece together a very difficult puzzle. Modern phones are capable of taking 8-12 megapixel images; compare that with phones from 2002, which could only manage 0.3 MP. As people took videos and photos documenting the event, these higher-resolution images captured more detail that authorities can potentially use to find those responsible.
    • Everyone is a “digital technician.” In the aftermath, the numerous high-quality images being produced from the event have helped law enforcement in their efforts to bring light to the situation, and citizens are helping to analyze them. All across the web, amateur digital technicians are examining photos and processing individual video frames to catch inconsistencies. Just like law enforcement, these technicians have an eagle-eye for technology and can actually help officials pinpoint irregularities. Cloud computing and social media have been busy sharing pictures; discussing analysis of the event and helping everyone involved understand what happened.
    • CCTV and surveillance. As runners approached the finish line, they made their way through 26 miles of very public road. Along the way, there were hundreds of cameras and CCTV installations where live video was recorded and documented. A department store’s high-quality outdoor security camera has already helped police identify people of interest. The ability to zoom into a face or feature is paramount to helping bring those responsible to justice. These technologies are becoming more and more prevalent, and high-quality feeds are capable of doing so much more than ever before. As the picture becomes clearer, officials will use every single frame from every source that they can. This means that if the perpetrators took public transportation, video surveillance from around the city can help trace their steps.
    • Social media and the cloud. The events of April 15 were captured both on video and, simultaneously, on the Internet. Social media reports were posted as quickly as professional reporters brought the news on TV, radio or the Internet. Twitter, Facebook and other heavily used sites became hot spots for conversation. Social media served as a way to determine whether friends and family on the ground were alright. In fact, I found out that a dear friend who was only two blocks away from the blast was alright – via a Facebook update. Furthermore, valuable pictures, recordings and new vantage points have helped people put together the course of events that day. Above all, social media (and cloud computing) helped bring people together. Whether it was words of support, an image of a friend, or just a thought posted on Facebook, technology pushed aside human differences and the sharing through social networks brought everyone closer together.

    Today’s always-on, always-connected world strives to bring people and information closer together. During these types of events, technologists all over the world offer their support and work to utilize technological advancements to help people progress. Everyone from pro photographers to ordinary people with their high megapixel smartphone cameras can help authorities solve one of the most complex crime scenes in recent history. During these difficult times, the IT community has continued to offer its support to anyone who needs it.

    As a technologist, journalist and writer for Data Center Knowledge – I say with my whole heart – Boston, we stand with you.


  • What CFOs Need to Know

    Matthew Goulding is managing director of Cannon Technologies – consultants, designers and turn-key builders of data centers based in the UK.

    MATTHEW GOULDING
    Cannon Tech

    As a CFO, CEO or CIO with a data center (or possibly several) in your organization, you’re undoubtedly aware that they devour electricity like small cities and that the up-front capital investment, cost of expansion or re-development is somewhere between painful and eye-watering. This column looks at how a very new data center construction method is set to allow true incremental ‘pay-as-you-grow’ whilst reducing total CapEx and OpEx.

    Over recent years, there have been several attempts at ‘pay-as-you-grow’ data center solutions – but whilst these have successfully deferred CapEx, they have led to a more expensive overall solution with higher lifetime CapEx costs and sometimes higher OpEx too.

    There is now a new ‘incremental modular’ pay-as-you-grow technique. So new, in fact, that your data center people may not have heard of it yet. The system has been implemented by Cannon Technologies in conjunction with telecoms network operators around the world over many years, and has now been adapted for data center ‘new-builds’ and upgrades.

    Reducing the Power Cost

    Unless your data center was built very recently it’s probably a massive bricks-and-mortar building with cavernous data-halls full of racks or cabinets which are, in turn, full of servers and other IT ‘hardware’.

    Each server cabinet could contain maybe twenty or so servers – many of which actually do fairly little most of the time. So while each cabinet would only use as much electricity as two or three one-bar fires, they are very inefficient.

    Hot air from all of these cabinets heats up the air (and the walls) of the great cavernous space – and massive computer-room air conditioning units (CRACs) around the perimeter then suck out the heat using almost as much energy again as the IT equipment is consuming. So you’re actually paying for nearly double the amount of power your servers need.

    Recent advances in server technology have massively increased efficiency. A technique known as ‘virtualization’ means that one piece of server hardware can pretend to be several dozen servers. This keeps the IT guys happy because they can still have servers dedicated to specific tasks – but it means that the actual hardware works at 80% capacity rather than 10%, which, power-wise, is far more efficient.

    The new generation of equipment is also much more miniaturized, so 300 to 500 ‘virtual’ server cores can be squeezed into a single cabinet. This would allow the data center footprint to be reduced, were it not for the fact that business and user applications demand more and more servers, storage and so on, year on year.

    Moving to these new-style servers improves energy efficiency but creates problems too. Each cabinet now uses as much energy as 30 to 60 one-bar fires, and the old CRAC cooling method described earlier is far too inefficient. So to contain and manage the cooling, a system of rooms-within-rooms has to be built (we call it cold-aisle cocooning), together with cooling units installed in very close proximity to these massive ‘heaters’.

    Compare to Bricks and Mortar

    Bricks-and-mortar data centers have always been horrendously expensive, whether new-build, re-purposed from an existing building or rented. Given the need to provide for expansion, buildings have always had to be built, purchased or rented at a considerably greater size than you currently need.

    Not only is the up-front CapEx several hundred percent more than the current requirement for cabinet space, the OpEx of operating a half-empty building year on year is exorbitant too.

    It can be ten years before the building is operating effectively at 80 per cent of utilization. And then within another five years or less it’s full and the IT guys need another new building.

    Frankly, for most organizations, it was never a sound business model. To be fair, there used to be few alternatives. But now there are.


  • Sixth Key to Brokering IT Services Internally: Prove What You Delivered

    Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.

    DICK BENTON
    Glasshouse

    In our last post, I outlined the fifth of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: building the order process. That means developing an automated method for provisioning services via a Web console that can satisfy today’s on-demand consumers.

    Measuring and Communicating Your Outcomes

    This post covers how to prove what you delivered, because without metrics, monitoring and reporting that demonstrate you’ve fulfilled Service Level Agreements (SLAs), your service consumers and your management won’t know that you’ve met your commitments.

    Service offerings and the subsequent signed SLAs will typically contain two types of service delivery metrics. The first group comes under quality of service and may include performance (e.g. IOPS), availability (scheduled hours) and reliability (number of nines). The second group covers protection attributes including operational recovery point and recovery time objectives, as well as disaster recovery point and recovery time objectives. Some organizations also include a historical recovery horizon and retrieval time as service attributes. Service offerings may also typically offer some level of compliance or security protection, and most importantly, the offerings should include the cost of the deployable resource unit of the service offering.
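
    A minimal sketch of how one such service offering, with its quality-of-service attributes, protection attributes and a deployable-unit cost, might be captured as structured data; the field names and values are illustrative, not a GlassHouse template:

    ```python
    from dataclasses import dataclass


    @dataclass
    class ServiceOffering:
        name: str
        # Quality-of-service attributes
        performance_iops: int          # sustained IOPS per deployable unit
        availability_window: str       # scheduled hours of availability
        reliability_nines: float       # e.g. 99.9
        # Protection attributes
        operational_rpo_hours: float
        operational_rto_hours: float
        dr_rpo_hours: float
        dr_rto_hours: float
        # What the consumer is actually billed for
        unit: str
        cost_per_unit_month: float


    gold_storage = ServiceOffering(
        name="Gold block storage",
        performance_iops=5000,
        availability_window="24x7",
        reliability_nines=99.9,
        operational_rpo_hours=1.0,
        operational_rto_hours=4.0,
        dr_rpo_hours=24.0,
        dr_rto_hours=48.0,
        unit="100 GB provisioned",
        cost_per_unit_month=42.00,
    )
    ```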

    Determining KPIs

    It is very important that the process to establish service offering metrics includes the very people who must execute to the key performance indicators (KPIs) around each service. The operations staff must strongly believe in its ability to deliver to the target metrics. This is not the time to allocate stretch goals. In fact, nothing is more detrimental to consumer satisfaction (and IT morale) than IT failing to meet a published goal. Initial metrics must be absolutely achievable, and operations people must believe that they have an excellent chance of meeting those targets. Once operations have settled in, and the bumps have been worked out, then the process of using upper and lower thresholds and tracking actuals within the desired ranges can start to drive improvements and a better service level for the next service catalog publication. This means IT is now visibly improving its service levels and thus consumer satisfaction.

    Determining how to measure service attributes can require some creative thinking. You need a metric that can actually be captured and trended. The service indicators can be measured relatively easily for servers, storage and networks. However, operational protection service indicators can be more challenging. The dimension of time frame is also important. For example, will your metric offer a standard for a single point in time, a trend between upper and lower thresholds during the operational day, or a standard at peak periods of the day? It is important to focus your choice of metrics on measures that the end consumer can understand and value. If you are going to differentiate between services based on such metrics, they need to be in “consumer speak” rather than “IT speak.” Formulating an appropriate policy on metrics, their time frame and their reporting should be a fundamental part of your service catalog.

    Realistic Measurements

    The prudent CIO will take steps to ensure that each of the attributes mentioned in the service offering (as detailed in the organization’s service catalog) can be empirically tracked, monitored and reported. These indicators should be established with target operations occurring between upper and lower thresholds. Using a single target metric instead of upper and lower thresholds can inhibit the ability to intelligently track performance for continuous improvement, and can result in a potentially demoralizing black-and-white picture for the operations team. In other words, you either made it or you didn’t. With a range of “acceptance” metrics, the IT organization can ensure their own “real” target is smack in the middle of the acceptable range, with consumer expectations set at the lower threshold. It is important to ensure that the end consumer perceives the lower end of the range as an acceptable service level for the resource they have purchased. This approach gives IT some wiggle room, while the system and the processes and people supporting it go through the changes needed to deliver effective services. More importantly, it also provides an incentive to rise above the target with service level improvements.

    Now, Measure!

    Now that you know exactly what it is you are measuring and how the attributes will be measured, you have a specification for selecting an appropriate tool or tools to support your efforts. Unfortunately, finding the tools to produce the metrics can be a challenge. There are few, if any, that can work across the range of infrastructure and the vendors who provide it. Typically, more than one tool is required. Many organizations have chosen a preferred vendor and stick with that vendor’s native tools, while others have selected two or more third-party tools with the hope of staying viable as vendors constantly enhance and improve their products. However, at the end of the day, a simple combination of native tools and some creative scripting will provide all the basics you need.
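
    In that spirit, even a few lines of scripting can turn raw outage minutes exported from a native tool into a threshold-based availability report; the thresholds and outage figures below are hypothetical:

    ```python
    # Minutes of unplanned outage recorded for one service during the month,
    # e.g. pulled from a native monitoring tool's export (figures invented).
    outage_minutes = [0, 12, 0, 0, 35, 0, 8]
    minutes_in_month = 30 * 24 * 60

    availability = 100 * (1 - sum(outage_minutes) / minutes_in_month)

    # Upper/lower thresholds published in the service catalog, with the
    # consumer expectation set at the lower bound.
    lower_threshold, internal_target = 99.50, 99.90

    if availability < lower_threshold:
        status = "SLA missed - report and remediate"
    elif availability < internal_target:
        status = "within SLA, below internal target - watch the trend"
    else:
        status = "meeting internal target"

    print(f"Availability this period: {availability:.3f}%  ->  {status}")
    ```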

    Finally, the prudent CIO will develop and publish a monthly “score card” showing which divisions or departments are using which service offerings, how much those service offerings cost, and most importantly, how IT performed in meeting its service level objectives for the period and in comparison to the previous reporting period. This provides a foundation on which new relationships and behaviors can be based, with IT being able to empirically prove that they delivered what they promised, and in some cases, beat what they promised.

    This is part of a seven part series from Dick Benton of Glasshouse Technologies. See his first post of the series.
