Author: Industry Perspectives

  • Countering the Threat of Cloud: IT Ops With a Service-Oriented Approach

    Vic Nyman is the co-founder and COO of BlueStripe Software. Vic has more than 20 years of experience in systems management and APM, and has held leadership positions at Wily Technology, IBM Tivoli, and Relicore/Symantec.

    VIC NYMAN
    BlueStripe Software

    Cloud computing is perceived as a significant threat by some data center organizations. By changing the focus away from managing server resources and adopting a Service-Oriented approach to IT Operations, IT organizations can turn that threat into an opportunity while helping to deliver business innovations to their enterprises.

    As you know, the corporate data center faces unprecedented competition for its internal customers. The threat of wholesale IT departmental outsourcing has been with us for quite a while, but for the individual employee, an outsourcing contract has often meant keeping the same job, just on the payroll of XYZ Systems instead of Venerable Bank and Trust.

    Cloud has the potential to be different. It means your company’s applications will run in somebody else’s data center, using somebody else’s employees to manage somebody else’s servers. It means that business-based application groups can bypass the operations process entirely. And while it is considered unlikely that large, critical legacy applications will move to the cloud in the immediate future, over time cloud adoption can mean data center consolidations and staff reductions.

    At my company, we regularly deal with customers facing this issue. One Director of IT Infrastructure recently told a story about trying to impose some discipline on their server management process, and being told flat out, “We can get the server we want from Amazon in 15 minutes. What’s wrong with you?” Clearly this is a whole new world.

    Better Service Is the Answer

    The key for data center teams is to deliver better service than the cloud providers do. Part of the demand for cloud services stems from the promise of efficient, hassle-free delivery on service levels – that business applications will deploy and run as requested, with minimal friction and downtime.

    In many companies, the data center teams use a server resource-based approach to managing application performance. The focus is often on machine resource metrics – CPU and memory utilization, disk IO, network performance – metrics that are only loosely correlated with actual application performance. A better approach is to concentrate on application and transaction response times.

    We’ve seen customers who take this approach achieve significant improvements in their key performance metrics. Availability for mission-critical applications has far exceeded SLA levels, and IT Operations teams have been freed up to deliver new capabilities.

    Here’s how they’ve changed the way they manage the delivery of applications:

    • First, recognize that transaction response times are more important than resource utilization. In a large, interconnected application with multiple tiers and extensive virtualization, chances are good that some servers will show high CPU utilization. Chances are also good that those servers will have nothing to do with an application slowdown. Focusing on the individual transaction response time will yield the source of the problem and help avoid “red herring” activities that don’t contribute to the solution (a minimal instrumentation sketch follows this list).
    • Second, recognize that every component affects transaction response times. Highly distributed, inter-connected services typically involve ten or more servers – sometimes hundreds of servers. Don’t just look at the application server – the team doing the triage needs to consider the web tier, authentication, middleware, database, and even third-party services.
    • Last, recognize that within the problem server, every infrastructure layer affects component response times. Every dependency of the problem server is the potential culprit during a slowdown – rather than just looking at CPU and memory utilization, the data center team needs to look at the application component, other applications on the server, the operating system, virtualization, storage, networking, shared services like DNS, and even server management tools.
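    To make the first two points concrete, here is a minimal, hypothetical sketch of transaction-level instrumentation – the tier names and sleep-based latencies are invented stand-ins for real instrumented calls, and this illustrates the idea rather than any particular product’s method:

    ```python
    import time
    from contextlib import contextmanager

    timings = {}

    @contextmanager
    def timed(tier):
        # Record the wall-clock response time contributed by one tier.
        start = time.perf_counter()
        try:
            yield
        finally:
            timings[tier] = time.perf_counter() - start

    def checkout_transaction():
        # Hypothetical tiers; in practice these wrap real web, auth,
        # middleware, database, and third-party calls.
        with timed("web"):         time.sleep(0.02)
        with timed("auth"):        time.sleep(0.01)
        with timed("middleware"):  time.sleep(0.05)
        with timed("database"):    time.sleep(0.31)   # the actual bottleneck here

    checkout_transaction()
    total = sum(timings.values())
    for tier, t in sorted(timings.items(), key=lambda kv: -kv[1]):
        print(f"{tier:<11} {t * 1000:7.1f} ms  ({t / total:5.1%} of response time)")
    ```

    Ranked this way, the slowest component surfaces immediately – regardless of which server happens to be showing high CPU at the time.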

    By applying the Service-Oriented approach, data center teams can greatly improve their results in dealing with application management – making themselves competitive with cloud offerings.

  • Quantifying the Cloud: Bringing the Science Back

    Mike Goodenough is global director of cloud field engineering for savvisdirect at Savvis, a CenturyLink company and global provider of cloud infrastructure and hosted IT solutions.

    MIKE GOODENOUGH
    Savvis

    Far back in the mists of time – at the pre-dawn of the digital age – scientists and engineers huddled in small rooms, devising ways to move electrons in such a manner as to represent the operations of logic upon the physical domain. No more would the slide rule be the sword of innovation and discovery. The future of the world rested firmly in the whirring circuits of that which would become: the computer.

    Computers weren’t things that rested on one’s desk. No, they were mighty monoliths erected as harbingers of mathematical, scientific and engineering feats that recreated the world of humanity in its own, towering image. To be master of the computer was to hold the destiny of the universe in your hand.

    A scant few years later, though, things changed.

    Where once a high degree of science was required to understand, navigate and create meaning from numbers, figures and formulas in a swarm of electro-mechanical interactions, now even high-school dropouts could construct powerful systems and arrange them strategically to illuminate the fabric of world commerce.

    Return to Science

    At some point along the way, the science was lost. When we abandoned the mainframes for personal computers and servers, math succumbed to convenience. Physics became a distant memory. And the culture that at once feared and admired the messengers of technology knelt down to worship tiny fruit-based entertainment.

    In this fashion, the corporate environment shifted. Enormous tasks could be tackled with stacks of small machines, instead of acres of memory core. Simplicity became the watchword, and when computer science was replaced by information technology, so too was knowledge traded for the total cost of space and power.

    But now cloud computing returns us full-circle to where we began, with enormous data centers housing collective systems so massive that a million businesses can fit inside. The science of “big computers” is here once again.

    Evolution of Cloud

    Bringing cloud into the mix changes the nature of modern business computing. Until recently, you would measure IT in terms of physical CPUs, disks, networks, blades and other manifestations of technology to be managed, replaced and amortized—entities of their own design. However, the budgets with which these eventualities were addressed became so compressed that they began to collapse in upon themselves, ultimately resulting in an explosion of outsourced infrastructure.

    The equation thus mutated from purchasing power to operational effectiveness. Services now deliver what once was provided by legions of staff. And budgets for computers and software are instead shifted to create meaningful, lasting value for the enterprise. With the business itself rejoining this computing equation, the service model provides a solution to an evolutionary change in commercial mechanics.

    Measuring the Intangible

    Compared to the corporeal world in which we live, computer services are the energy in a universal system of hardware matter. Energy is mobile—it can transfer states, add or subtract properties, and alter its surroundings—yet exists regardless of its bindings. Cloud services are likewise portable, divorced from the platform on which they operate. Moving cloud energy around to satisfy the demands of a continually changing business environment does not end in chaos, but merely reflects a subtle alteration in technical trajectory.

    This is the promise of the future data center. Cloud meets the requirements of the operating budget. What was once implemented with CapEx servers costing $7 per day is delivered for $3 per day in the OpEx cloud. Your operational parameters shift from return on investment (ROI) on assets to value realized from capabilities.

    Indeed, cloud is more than just cost efficacy. It moves beyond hardware implementation and software licensing, and is instead quantified by the value of the services being provided. And while the basic variables in the algorithms of the cloud remain—compute, memory, storage and network—they become hidden by a universal architecture that focuses on the what, not the how.

    IT is not about defining “what is the cloud.” Rather, it is about deriving value by conjoining business principles with technology innovation. How does the equation fit you?

  • The Importance of Intangibles in Cloud TCO Analysis

    Ravi Rajagopal, Vice President at CA Technologies, has led and managed organizations that delivered innovative and practical technical and business solutions for corporations and governments around the globe.

    RAVI RAJAGOPAL
    CA Technologies

    In my last post, I discussed how the cloud changes the economic value of IT, and introduced a new model for understanding TCO and ROI. That’s the only way an organization today can make rational decisions about IT investments.

    One of the most popular cases for adopting the cloud is that it promotes organizational agility. Once you go cloud, the argument goes, the organization can do things it never could before, or can do established things much faster.

    Cloud Expands Horizons

    As an example, I know a company that recently switched from an on-premise call center to a cloud-based solution. Among other things, moving to a cloud service meant they could hire people anywhere in the world, wherever there was an IP connection. Before, employees had to be on-premise at a limited number of locations.

    They gained access to a much larger labor pool. They could offer more flexible hours to employees, and even let them work from home or while traveling. And they opened up to new geographic markets they couldn’t even dream of servicing before. That’s agility.

    If we’re talking about making rational economic decisions about the cloud, how can we account for the transformative impact it can have? This is hard to quantify beforehand, as are many hidden infrastructure costs in IT. Most organizations remain blissfully ignorant about the full impact of these intangible costs.

    Focus on the Intangibles

    That makes it hard to arrive at a good, hard-dollar decision. But if you don’t focus on the intangibles, you won’t have a complete picture of the hard numbers. Once you have a handle on tangibles, start perimeterizing the intangibles. They might not be core to the decision, but you can get a sense of their boundaries.

    And the more data you have, the better the organization will be at making the decisions. That company that moved to a cloud-based call center? Their move to the cloud was initially close to break-even. Their understanding of the intangibles served to reassure them that they were making the right decision, economically and strategically.

    What are Your Outcomes?

    One way to measure the intangibles is to focus on the outcomes rather than on the inputs. You could, for example, start looking at some of the customer statistics, both in absolute numbers and in overall trending. For that to happen, you need a good baseline. You must also work with the same questions and parameters so you can make a valid before-and-after comparison.

    For example, you set the customer-stat baseline before you make the transition. Once you’ve made the transition, you look again at the same customer-stat parameters. That will tell you whether you’re moving in the right direction, and how you can optimize your execution.
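    As a toy illustration of such a before-and-after comparison (all figures are invented for this example, not taken from the company described above):

    ```python
    # Baseline captured before the transition; post_move measured afterward,
    # using the same questions and parameters so the comparison stays valid.
    baseline = {"avg_handle_time_min": 9.4, "first_call_resolution": 0.71,
                "customer_satisfaction": 0.82, "calls_per_agent_day": 38.0}
    post_move = {"avg_handle_time_min": 8.1, "first_call_resolution": 0.78,
                 "customer_satisfaction": 0.86, "calls_per_agent_day": 44.0}

    for metric, before in baseline.items():
        after = post_move[metric]
        change = (after - before) / before
        print(f"{metric:<24} {before:>6} -> {after:<6} ({change:+.1%})")
    ```

    Trending each metric against the same baseline shows whether the move is paying off and where execution can be optimized.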

    Above all, it’s essential you understand your legacy environment, both tangible and intangible, so you can make a fully informed decision beforehand. Then, when you make the transition, you’re in a great position to compare.

    The broader discussion here is that there are substantial benefits to being a data-driven organization, which many organizations are not. Most businesses are measuring some things, but few are measuring everything. If you’re not a data-driven organization, taking a holistic approach to cloud TCO analysis is a great way to get started on becoming one—and the best, and perhaps only, way to really measure the cloud for business value.

  • March Madness: Lessons for Networks

    David White is President North America for Ipanema Technologies. White is a senior executive with more than 25 years of experience in sales, marketing and business development, with a background in WAN Optimization.

    DAVID WHITE
    Ipanema Technologies

    March Madness, the largest NCAA college basketball tournament in the nation, has a reputation for hurting work productivity at companies throughout the United States. The annual, month-long single-elimination tournament is in its 75th year and is being broadcast online by CBS. Since the early-round games take place during the workday, the bad news for businesses extends past productivity into how their WANs are being used on those days.

    The bandwidth capabilities of company networks have been pressured as many dedicated fans stream the tournament from their office computers throughout the week. U.S. businesses anticipated a spike in demand for network resources during March Madness, and data centers, Wide Area Networks and companies alike had to prepare, lest they risk the performance of business-critical applications.

    Mad for Networks

    During previous March Madness tournaments, there have been 2.4 million daily unique viewers following the games. The growth of Internet access at work will stress U.S. corporate networks like never before: not only can games be watched on computers, but mobile devices can now stream content as well. In a 2012 report, Challenger, Gray & Christmas, an outplacement firm, estimated that more than 2.5 million unique visitors per day each spent an average of 90 minutes watching games.

    What These Challenges Mean

    According to a Harris Interactive poll, 64 percent of Americans watch online video while at work. As the games occur during business hours, employees are highly likely to go online to track their favorite teams. Take the stats mentioned above and imagine the impact on businesses should their employees flock to watch March Madness while their networks cannot withstand the increase in demand.

    According to a survey by Modis, a global provider of IT staffing and recruiting services, two in five IT professionals report that March Madness has impacted their network in the past, with about a third reporting system slowdowns or complete crashes. If companies aren’t prepared, their business-critical applications will be affected as they compete against YouTube and online video for limited resources. IT departments need to know how to guard against this.

    According to a leading analyst house, application performance problems really matter. Losing a mere 5 minutes per day costs 1 percent of overall productivity (five minutes is roughly 1 percent of an eight-hour workday). It’s also important to consider that applications are a huge investment, costing roughly $360 per employee per month when maintenance and up-front costs are considered. With all this at stake, it makes sense to protect that investment.

    Why Bandwidth Isn’t the Complete Answer

    There are two solutions: companies can purchase more bandwidth before March Madness (or other known high-interest events), or they can use what they have more effectively through Application Performance Guarantee solutions, which control and dynamically guarantee critical application performance across networks.

    Adding more bandwidth isn’t the solution, as more bandwidth is rarely enough. Non-critical applications are bandwidth-hungry: the more bandwidth you have, the more they consume, and they will systematically cannibalize critical application performance. Added resources alone will never guarantee critical application performance.

    Application traffic tries to use up all the available bandwidth. Simply increasing it is like filling a bottomless pit: expensive and never enough to satisfy ever-growing usage demands. The additional traffic may also hinder the performance of business-critical, and often resource-thrifty, applications.

    It’s Not Size — It’s Sophistication

    The answer lies in having solutions that allow an IT department to control applications as they flow across the network.

    Not deploying an Application Performance Guarantee solution means uncertain business application performance and control, with lost productivity and end-user complaints the likely result. Enterprises carry on running IT in a reactive mode that is neither optimal for the business nor satisfying for the IT department. Business applications continue to suffer from bad and/or unstable performance even while the network is over-sized. Unsatisfied users and business managers continue to complain, and the WAN is unnecessarily upgraded without solving application performance and end-user experience troubles.

    Managing Traffic

    If we think of networks as roads, and applications as cars, an Application Performance Guarantee solution might be a police officer. It can direct cars into appropriate queues. It can slow cars that are less critical to the business (for instance, the YouTube car) and prioritize those that the business depends on (such as the Salesforce or SAP cars). This then allows vehicles to get to their destination in a timely, secure fashion – regardless of the amount of traffic on the ‘road’.
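    In spirit, the ‘police officer’ is a priority scheduler. Here is a minimal, hypothetical sketch of the idea (the application names and priority values are invented; real Application Performance Guarantee products classify and shape traffic far more dynamically):

    ```python
    import heapq

    # Lower number = more business-critical traffic; unknown apps default to 5.
    PRIORITY = {"salesforce": 0, "sap": 0, "voip": 1, "email": 2, "youtube": 9}

    class Shaper:
        def __init__(self):
            self._queue, self._seq = [], 0

        def enqueue(self, app, packet):
            self._seq += 1  # tiebreaker preserves FIFO order within a priority class
            heapq.heappush(self._queue, (PRIORITY.get(app, 5), self._seq, app, packet))

        def dequeue(self):
            _, _, app, packet = heapq.heappop(self._queue)
            return app, packet

    shaper = Shaper()
    for app in ["youtube", "salesforce", "youtube", "sap", "email"]:
        shaper.enqueue(app, b"...")
    while shaper._queue:
        print(shaper.dequeue()[0])  # salesforce, sap, email, youtube, youtube
    ```

    The YouTube ‘cars’ still reach their destination – they simply wait while the Salesforce and SAP traffic the business depends on goes first.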

    As March Madness concludes, it stands as just one example of the need for Application Performance Guarantee solutions. These technologies will continue to grow, helping enterprises face Internet traffic growth in an easy, user-friendly way by authorizing Internet and video traffic while ensuring that business applications perform at their best and IT costs remain reasonable.

  • Five Essential Keys to Success When Relocating a Data Center

    Bruce Cardos is a principal consultant at Datalink, a publicly held data center IT solutions and services provider. Bruce is a PMI-certified project management professional who specializes in data center relocations. Bruce has planned and managed nearly 100 data center openings, closings, consolidations, and relocations. His experience also includes managing several data centers, as well as positions in data center network design and implementation.

    BRUCE CARDOS
    Datalink

    A data center relocation (DCR) is not just about moving servers and plugging them in at their new locale.

    In reality, a DCR can be one of a company’s most complex and challenging endeavors. With mission-critical information and high-stakes money on the line, the failure of any key step in the process can have potentially devastating repercussions. Valuable data can be lost. Expensive IT equipment can be damaged. Critical systems may remain offline for hours, days or even weeks as problems are resolved. Such issues can end up costing a company thousands – or even millions – of dollars in lost productivity and lost revenue.

    To ensure your company’s future data center relocation is successful, we offer five essential keys – all steps that occur before the first server is ever uninstalled and moved to the new location.

    Key #1: Recognize DCR Needs Special Management Skills

    Assigning a knowledgeable, experienced project manager (PM) is key to any successful data center relocation. While many companies have competent, professional project managers on staff, a data center relocation presents a different challenge. This requires a project manager with prior DCR experience. DCR project management involves identifying and pre-planning unique DCR issues that will impact creation of timelines. It involves managing associated people, budgets, and DCR risks. It also requires defining and executing the DCR’s critical macro and micro milestones while overseeing the production of key DCR planning documents.

    If you don’t have a knowledgeable and experienced PM on staff with deep expertise in data center moves, try finding a DCR partner with this skill set. Even if you appoint an internal PM (which we also highly recommend), you will want an experienced DCR professional to lead the project and transfer knowledge to your team.

    Key #2: Equate Good Planning with Good Documentation

    Complete and detailed planning is inseparable from good-quality DCR documentation. This documentation emphasis may surprise technical teams who’ve grown accustomed to keeping critical details ‘in their heads’. When it comes to DCR, however, this informal practice creates a guaranteed single point of failure. While there is no cookie-cutter approach to data center relocation, certain documents are necessary for every successful data center move.

    The Big Four: Your DCR’s ‘Must-Have’ Docs

    At a minimum, DCR project information should appear in four main documents:

    • Where You Are Now: The Present Method of Operation (PMO). The PMO comprehensively documents what will be moved. It should include diagrams and detailed lists describing everything in the existing environment–from all hardware and software components to storage requirements, any logical or physical interactions, application dependencies, network connections, inventory lists and any support processes currently in use.
    • Where You Hope to Be: The Desired Future State (DFS). The DFS details the desired successful outcome of the relocation. This includes defining project attributes, success conditions, and details associated with the new placement of all moving components. As part of the DFS’ expected end state, you should include enough detail to resume various service management processes, such as change management, incident management and configuration management. The DFS should also define any anticipated updates or IT changes (i.e., virtualization, enhanced storage, technology uplift for some or all servers, network upgrades, etc.).
    • Your Roadmap to Get There: The Design Plan. The completion of the first two documents defines the end of the ‘Requirements Process’. After this process is approved, the Design Plan begins. This is the ‘roadmap’ for getting from the PMO to the DFS. It should convey a good understanding of overall processes needed to complete the relocation while defining any incremental budgets needed to acquire necessary components. Included in the design plan: Details on the various move groups, any new hardware and/or software needs, pre-requisite steps, known risks and their contingencies, a high level timeline, communication plan and the impact of client processes on the design.
    • Who Will Do What, When and Where: The Implementation Plan. The Implementation Plan is derived from the Design Plan. It includes all steps, dates, and responsible parties for the tasks to be accomplished, in their proper order and with all the appropriate interactions and linkages defined. Included here is a Day of Move Plan, which documents the hour-by-hour details for the move event(s) to be completed during the data center relocation. An updated project schedule is also included.

  • Building A Cloud-Savvy Model for TCO and ROI

    Ravi Rajagopal, Vice President at CA Technologies, has led and managed organizations that delivered innovative and practical technical and business solutions for corporations and governments around the globe.

    RAVI RAJAGOPAL
    CA Technologies

    Economic benefits almost always lead the argument for moving to cloud computing. We’re told many things: cloud is cheaper; cloud frees up IT resources; cloud reduces capital expenditures; cloud allows organizations to scale with demand.

    Maybe it does. Maybe it doesn’t. The only way to make an informed decision, backed by a solid return on investment (ROI), is to first understand the total cost of ownership (TCO) for your current and planned cloud infrastructure in advance of any cloud adoption.

    This is obvious, right? But you might be surprised to learn that many large organizations commit to cloud computing without really knowing their TCO and projected ROI. It’s not that they’re irresponsible and ignoring this requirement. It’s that the tools most IT teams use to evaluate TCO and ROI are inadequate when applied to the cloud.

    An Improved TCO Model

    That’s why I set out to create a better TCO model. In addition to my work at CA Technologies, I also teach at NYU. One of my classes is about managing the cloud. When I first taught the class three years ago, I heard lots of assumptions, such as “the cloud is not secure” or “it’s less expensive.” These statements were nearly always based on opinions and word-of-mouth buzz.

    I engaged the class in researching the topic, with an eye towards developing the tools IT leaders need to get objective insights about cloud computing. We worked to develop a complete view of the cloud, beyond just the technical pieces. The result is a new approach that takes a business view of cloud computing by considering the economics and measuring its business value.

    Simply put, TCO changes for the cloud because the cloud changes IT’s business model. Cloud computing has taken the information technology silo and made it a business service. And from the standpoint of TCO analysis, this adds complexity, because the cloud can be both a function of, as well as an alternative to, in-house IT resources.

    For example, in the pre-cloud era, IT was simply a departmental function. You could calculate the IT department’s cost, break it down using whatever algorithm you wanted, and allocate cost back to the business units.

    But today IT, and its cost, is a function of many business units (including IT itself). The business units need visibility into their costs, plus a clear understanding of the value they’re getting from these expenditures.

    Unless the organization understands its total IT costs across all domains in the organization, it’s hard to arrive at an apples-to-apples comparison between what you’re spending in-house versus what’s available in the cloud.

    A Wide-Ranging Perspective

    To analyze cloud TCO, you must use a comprehensive view of your entire infrastructure and all services being provided by it, for it, or running on it, whether in the cloud, on-premise, or legacy. Only then will you be able to make an informed decision based on an accurate understanding of your total IT costs.

    Not long ago, McKinsey reported that moving to the cloud caused companies to spend around 25 percent more than they would otherwise for the same services. As you can imagine, this caused a controversy, as it ran contrary to what cloud service providers were saying.

    Once the study’s methodology was explained, however, what was happening became clear. Organizations were moving to cloud while keeping their legacy infrastructure in place. That’s fine if you’re piloting cloud or want to keep your options open, but it’s not a strategy to reduce cost.

    This is a key point about cloud TCO that many organizations miss. If you don’t make the right choices and changes when using cloud computing, you’ll end up adding services and cost to the infrastructure. Vendor promises of cost savings go right out the window.

    What Are Your Embedded Costs?

    It’s hard for many organizations to get a handle on the true cost of an application because there are so many embedded costs: servers, OSes, the network, electricity, the real estate, personnel, and more. Does moving an application to the cloud shave those costs? How do you remove the infrastructure cost from the total cost associated with the application?

    Part of the cost of an application is a service cost, which is visible and obvious. You can go into Salesforce.com and measure it on a per-user basis. But what’s not obvious is the associated infrastructure cost that’s needed, and what’s being done in the legacy environment.

    If you’re not diligent in removing that piece from your analysis, you’re going to run into cost issues: you would still be incurring the part of the legacy-system cost that will not be eliminated, while incurring additional costs from a SaaS perspective.
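    A minimal sketch of that diligence, with entirely hypothetical monthly figures, shows how stranded legacy cost changes the math:

    ```python
    # Hypothetical monthly costs for one application's legacy footprint.
    legacy = {"servers": 4200, "os_licenses": 900, "network": 1100,
              "power_space": 1600, "personnel": 5200}

    saas_per_user, users = 65, 120
    saas_monthly = saas_per_user * users               # the visible service cost

    # Only the decommissioned pieces are actually saved by the move; the rest
    # is stranded cost that remains in the legacy environment either way.
    retired = legacy["servers"] + legacy["os_licenses"]
    stranded = sum(legacy.values()) - retired

    naive_saving = sum(legacy.values()) - saas_monthly  # assumes all legacy cost vanishes
    true_change = saas_monthly - retired                # what the budget really sees

    print(f"naive monthly saving: {naive_saving}")      # 5200 - looks like a win
    print(f"true monthly change: {true_change:+}")      # +2700 - actually costs more
    print(f"stranded legacy cost: {stranded}")          # 7900 still being paid
    ```

    This is precisely the pattern behind the McKinsey finding mentioned above: move to the cloud while the legacy infrastructure stays in place, and total spend goes up, not down.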

    These are just a few examples of how a better model for cloud TCO can help managers get quantitative analysis of cloud costs with no subjectivity. And as I mentioned earlier in this post, we’ve taken these insights and have started building a new model for determining the total cost of ownership of cloud services.

    Much of our research is now embedded in a spreadsheet which I am planning to make available to customers. I’ll be blogging about these efforts as we refine the model over the next few months, sharing what we’ve learned as well as your feedback on the findings. Stay tuned.

  • Standardizing Data Center Education Can Work Wonders

    Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

    TOM ROBERTS
    AFCOM

    If you’re struggling to fill a job in the data center, you are not alone. With approximately 4 million IT jobs available in the United States alone, to say a shortage of qualified people exists today is an understatement. It has created a workers’ market, with only 4 percent unemployment in the technology sector—about half the overall jobless rate.

    This shortage exists for any number of reasons:

    • Graduates from universities and two-year tech schools entering the workforce are greener than a solar-powered data center and require far too much on-the-job training.
    • Those currently employed in IT seem to be staying put because they like what they’re doing and companies are no longer in layoff mode.
    • Others have reached or are rapidly approaching retirement and taking their decades of experience with them.
    • High-profile companies like Facebook, Google and Apple seem far more “sexy” than traditional corporations and compete directly for the best and brightest of the younger generation.

    I believe the best way to conquer all of the above challenges is to standardize the education path for data center professionals. Treat those in the industry just like the architects, engineers, school administrators, mental health professionals and social workers who must adhere to rigorous CEU requirements to move up the ladder or stay qualified.

    The source of education is secondary; it can be gained through tech schools, conferences or corporate America. This will help boost standardization with respect to career paths, job descriptions, and skillsets.

    Here’s what I would like to see happen. In addition to being president of AFCOM, I’m chairman of Data Center World, a conference and trade show for data center professionals. For the first time we are offering attendance certification for those who attend our educational sessions. Then, in the near future, these records of attendance can be used as CEUs to supplement their current certification(s) obtained from the leading data center education companies (EPI, ICOR, C-Net Training, IDCP, etc.).

    As an association with a goal of advancing data center and facilities management professionals, AFCOM’s role is to provide ongoing education, as it has for more than 30 years. I think it makes a lot of sense to work hand-in-hand with these companies so that education gained from conferences also counts toward specific career goals and paths.

    If we can cross-track and document education from all different sources and provide an easy way for data center professionals to access a composite list, it would be a win-win for those recruiting and looking for work.

    Right now, inconsistencies are far too common. Two companies may be recruiting for a person to fill the same position, e.g., facilities manager, but the actual responsibilities and needed skills don’t match up. No two IT job descriptions are the same. You may attract a person with a mechanical engineering degree and someone who fixes furnaces with the same advertisement. I read that the typical time-to-hire process for an individual IT resource is 55 days. Who has that kind of time?

    Change never stops in this industry, and now more than ever, you must keep your data center current or fall behind. The fact that so many companies can’t seem to find the right people with the right skills is a disaster waiting to happen.

    Let’s all work together to make sure it doesn’t.

  • Designing For Dependability In The Cloud

    David Bills is Microsoft’s chief reliability strategist and is responsible for the broad evangelism of the company’s online service reliability programs.

    DAVID BILLS
    Microsoft

    This article kicks off a three-part series on designing for dependability. Today I will provide context for the series and outline the challenges facing all cloud service providers as they strive to provide highly available services. In the second article of the series, David Gauthier, director of data center architecture at Microsoft, will discuss the journey that Microsoft is on in our own data centers, and how software resiliency has become more and more critical in the move to cloud-scale data centers. Finally, in the last piece, I will discuss the cultural shift and evolving engineering principles that Microsoft is pursuing to help improve the dependability of the services we offer.

    Matching the Reliability to the Demand

    As the adoption of cloud computing continues to grow, expectations for utility-grade service availability remain high. Consumers demand access 24 hours a day, seven days a week to their digital lives, and outages can have a significant negative impact on a company’s financial health or brand equity. But the complex nature of cloud computing means that cloud service providers, regardless of whether they sell offerings for infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS), need to be mindful that things will go wrong — because it’s not a case of “if things will go wrong,” it’s strictly a matter of “when.” This means, as cloud service providers, we need to design our services to maximize the reliability of the service and minimize the impact to customers when things do go wrong. Providers need to move beyond the traditional premise of relying on complex physical infrastructure to build redundancy into their cloud services to instead utilize a combination of less complex physical infrastructure and more intelligent software that builds resiliency into their cloud services and delivers high availability to customers.

    The reliability-related challenges that we face today are not dramatically different from those that we’ve faced in years past, such as unexpected hardware failures, power outages, software bugs, failed deployments, people making mistakes, and so on. Indeed, outages continue to occur across the board, reflecting not only on the company involved, but also on the industry as a whole.

    In effect, the industry is dealing with fragile (sometimes referred to as brittle) software. Software continues to be designed, built, and operated on what we believe is a fundamentally flawed assumption: that failure can be avoided by rigorously applying well-known architectural principles as the system is being designed, testing the system extensively while it is being built, and relying on layers of redundant infrastructure and replicated copies of the data. Mounting evidence further invalidates this assumption: articles regularly appear describing failures of heavily relied-on online services, and service providers routinely supply explanations of what went wrong, why it went wrong, and the steps taken to avoid repeat occurrences. The media continues to report failures, despite the tremendous investment that cloud service providers continue to make as they apply the same practices noted above.

    Resiliency and Reliability

    If we assume that all cloud service providers are striving to deliver a reliable experience for their customers, then we need to step back and look at what really comprises a reliable cloud service. It’s essentially a service that functions as the designer intended it to, functions when it’s expected to, and works from wherever the customer is connecting. That’s not to say every component making up the service needs to operate flawlessly 100 percent of the time though. This last point is what brings us to needing to understand the difference between reliability and resiliency.

    Reliability is the outcome that cloud service providers strive for. Resiliency is the ability of a cloud-based service to withstand certain types of failure and yet remain fully functional from the customers’ perspective. A service could be characterized as reliable simply because no part of the service (for example, the infrastructure or the software that supports the service) has ever failed, and yet the service couldn’t be regarded as resilient, because that characterization completely ignores the notion of a “Black Swan” event – something rare and unpredictable that significantly affects the functionality or availability of one or more of the company’s online services. A resilient service assumes that failures will happen and for that reason has been designed and built to detect failures when they occur, isolate them, and then recover from them in a way that minimizes impact on customers. To put the relationship between these terms differently: a resilient service will — over time — become viewed as reliable because of how it copes with known failure points and failure modes.
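    One common software pattern for this detect-isolate-recover loop is a circuit breaker. The sketch below is a generic illustration of the pattern, not a description of Microsoft’s implementation:

    ```python
    import time

    class CircuitBreaker:
        """Detect repeated failures, isolate the failing dependency, then probe recovery."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures, self.reset_after = max_failures, reset_after
            self.failures, self.opened_at = 0, None

        def call(self, fn, *args, fallback=None):
            if self.opened_at is not None:                 # "open": dependency isolated
                if time.monotonic() - self.opened_at < self.reset_after:
                    return fallback                        # fail fast, degrade gracefully
                self.opened_at = None                      # "half-open": probe recovery
            try:
                result = fn(*args)
                self.failures = 0                          # success resets the counter
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()      # trip: stop hammering it
                return fallback
    ```

    Wrapped this way, a failing downstream component degrades the experience instead of collapsing it – the customer sees a fallback response while the service isolates the fault and periodically checks for recovery.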

    Changing Our Approach

    As an industry, we have traditionally relied heavily on hardware redundancy and data replication to improve the resiliency of cloud-based services. While cloud service providers have experienced success applying these design principles, and hardware manufacturers have contributed significant advancements in these areas as well, we cannot become overly reliant on these solutions as the path to a reliable cloud-based service.

    It takes more than just hardware-level redundancy and multiple copies of data sets to deliver reliable cloud-based services — we need to factor resiliency in at all levels and across all components of the service.

    That’s why, at Microsoft, we’re changing the way we build and deploy services that are intended to operate at cloud scale. We’re moving toward less complex physical infrastructure and more intelligent software to build resiliency into cloud-based services and deliver highly available experiences to our customers. We are focused on creating an operating environment that is more resilient and enables individuals and organizations to better protect information.

    In the next article of this series, David Gauthier, director of data center architecture at Microsoft, discusses the journey that Microsoft is making with our own data centers. This shift underscores how important software-based resiliency has become in the move to cloud-scale data centers.

  • Colocation Communities Are a Match for Cloud

    Kevin Dean is Chief Marketing Officer at Interxion.

    KEVIN DEAN
    Interxion

    The number of data centers may be shrinking, but their capacity is growing. Current market calculations show disruptive technologies like server virtualization and cloud computing are effectively consolidating servers enough to actually shrink the United States’ vast data center footprint. In fact, IDC predicts that the total number of U.S. data centers will fall from 2.94 million in 2012 to 2.89 million in 2016. However, while new data center facilities themselves may be on the decline, the data they house certainly isn’t, given that 2.5 quintillion bytes of data are created every day.

    With fewer facilities, but more data than ever, what can companies do?

    While the shift to more virtualized and cloud-based environments means that companies will build fewer data centers overall, it also means many will look to specialized, third-party data centers to support their data-intensive needs. Further predictions from IDC reveal that data center capacity will grow from 611.4 million square feet in 2012 to more than 700 million square feet in 2016, so colocation facilities that have the capacity to handle cloud and virtualization requirements are well suited to support this data boom.

    Statistics aside, today’s information explosion is proof enough that data is the lifeblood of any business and, therefore, how it’s contained is a top concern. As more enterprises put their internal servers under scrutiny, they are noticing that legacy enterprise data centers are becoming increasingly ineffective, no longer able to provide the space, power and security requirements necessary to support a company’s transition to the cloud. As a result, companies are optimizing their data through outsourcing options, such as data center colocation facilities. By choosing to colocate, enterprises benefit from a wide range of power connections with full backup, multi-layer security to protect data, lower maintenance expenses and more cost-effective cooling.

    Beyond these colocation benefits, however, one big advantage remains: communities of interest. These communities located within such colocation facilities are one of the biggest draws for companies to choose colocation in the first place. For instance, cloud communities offered by carrier-neutral colocation providers allow service providers across cloud markets to scale their resources and match fluctuating customer requests. Similarly, businesses that are part of communities of interest within finance and digital media content hubs benefit from colocation facilities’ interconnection with leading cloud platforms, which enable community members to take advantage of cloud computing and its cost efficiencies.

    Cloud Communities Make Gains

    Cloud service providers in particular benefit from multi-tenant colocation facilities’ communities, which enable members to connect with each other and with partners over near-instantaneous connections. The traditional selection criteria for data center facilities, such as power, space, security and cooling capacity, are now topped by the requirement of close proximity to end users with unbeatable connectivity and performance speeds – speeds that are achieved in such highly connected industry hubs. Since these hubs host a variety of service providers, CDNs, carriers, ISPs and Internet exchanges under one roof, enterprises and cloud service providers have a marketplace of cloud-based services at their fingertips.

    Additionally, cloud hub participants benefit from partnership opportunities and additional revenue streams made possible through member interaction. Furthermore, as the market shifts to more dynamic, hybrid cloud environments, the connectivity between private infrastructure and public cloud servers is more essential than ever before. To ensure that these connections perform as fast as possible for their customers, many colocation participants can establish private connectivity between a public cloud platform and their existing dedicated IT infrastructure. This interconnectivity allows members to take control of their hybrid environment while reducing network costs, increasing bandwidth and providing a more consistent network experience than Internet-based connections.

  • Why is Data Storage Such an Exciting Space?

    Srivibhavan (Vibhav) Balaram is the Founder and CEO of CloudByte Inc. He is a General Manager with more than 25 years of industry experience. He has spent 5 years working in the United States with companies like Hewlett Packard, IBM and AT&T Bell Labs.

    VIBHAV BALARAM
    CloudByte

    For a while, the storage industry appeared to be fairly stable (read: little technology innovation), with consolidation around a few large players. Several smaller companies were bought out by larger ones – 3PAR by HP, Isilon by EMC, Compellent by Dell. In the last year, however, we’ve seen renewed action in the space, with promising new start-ups dedicated to solving the storage problems of new-age data centers. So, what exactly is the problem with legacy storage solutions in new-age data centers?

    Evolution of Storage Technology

    For better perspective, let’s start with a quick recap of data storage technology evolution. In the late 1990s and early 2000s, storage was first separated from the server to remove bottlenecks on data scalability and throughput. NAS (Network Attached Storage) and SAN (Storage Area Networks) came into existence, Fibre Channel (FC) protocols were developed and large scale deployments followed. With a dedicated external controller (SAN) and a dedicated network (based on FC protocols), the new storage solutions provided data scalability, high-availability, higher throughput for applications and centralized storage management.

    Server Virtualization and the Inadequacy of Legacy Solutions

    Legacy SAN/NAS-based storage solutions scaled well and proved adequate until the advent of server virtualization. With server virtualization, the number of applications grew rapidly, and external storage was now being shared among multiple applications to manage costs. Here, the monolithic controller architecture of legacy solutions proved a misfit, as it resulted in noisy-neighbor issues within shared storage. For example, if a backup operation was initiated for one application, other applications received lower storage access and eventually timed out. Further, storage could no longer be tuned for a particular workload, as applications with disparate workloads shared the storage platform.

    Rising Costs and Nightmarish Management

    Legacy vendors attacked the above issues through several workarounds, including faster controller CPUs and recommendations of additional memory with fancy acronyms. Though these workarounds helped to an extent, the brute-force way to guarantee storage quality of service (QoS) was to either ridiculously over-provision storage controllers (with utilization below 30-40 percent) or dedicate physical storage to performance-sensitive applications. Obviously, these negated the very purpose of sharing storage and containing storage costs in virtualized environments, and storage costs relative to overall data center costs increased dramatically. Being hardware-based, legacy vendors didn’t see any reason to change this situation. With dedicated storage for different workloads, there were several storage islands in a data center that were chronically underutilized. Soon, “LUN” management became a hot new skill and also a nightmare for storage administrators.

    The New-Age Storage Solutions

    With the advent of the cloud, today’s data centers typically have hundreds of VMs that require guaranteed storage access, performance and QoS. Given the limitations of legacy solutions in these virtualized environments, it was inevitable that a new breed of storage start-ups would crop up. Many of these start-ups chose to simplify the “nightmarish” management, either by providing tools to observe and manage “hot LUNs” (a term for LUNs that serve demanding VMs) or by providing granular storage analytics on a per-VM basis. However, the management approach does not really cure the noisy-neighbor issue, leaving a lot of other symptoms unresolved.

    Multi-tenant Storage Controllers

    There is a desperate need for solutions that attack the noisy-neighbor problem at its root cause, i.e., by making storage controllers truly multi-tenant. These controllers should be able to isolate and dedicate storage resources for every application based on its performance demands. Here, storage endpoints (LUNs) are defined in terms of both capacity and performance (IOPS, throughput and latency). Such multi-tenant controllers can then guarantee storage QoS for every application right from a shared storage platform.
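    As a rough sketch of what such a performance-defined endpoint might look like (the field names and numbers are illustrative, not any vendor’s actual interface):

    ```python
    from dataclasses import dataclass

    @dataclass
    class QoSLun:
        """A storage endpoint defined by capacity *and* performance, per tenant."""
        name: str
        capacity_gb: int
        iops_limit: int          # per-tenant ceiling that isolates noisy neighbors
        throughput_mbps: int
        max_latency_ms: float

    # Two tenants sharing one controller: the backup stream's bursts are clamped
    # to its own ceiling instead of starving the latency-sensitive database.
    tenants = [
        QoSLun("oltp-db", 500, iops_limit=20000, throughput_mbps=400, max_latency_ms=2.0),
        QoSLun("backup", 4000, iops_limit=2000, throughput_mbps=150, max_latency_ms=50.0),
    ]

    def admit(lun: QoSLun, requested_iops: int) -> int:
        # Grant at most the tenant's reserved ceiling, never the whole controller.
        return min(requested_iops, lun.iops_limit)

    print(admit(tenants[1], requested_iops=80000))  # backup burst clamped to 2000
    ```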

  • Exposing the Six Myths of Deduplication

    Darrell Riddle, senior director of product marketing for FalconStor Software, is a professional with more than 23 years of experience in the data protection industry. Darrell has an extensive understanding of both the technical and business aspects of marketing, product management and go-to-market strategies. Prior to joining FalconStor, Darrell worked at Symantec.

    DARRELL RIDDLE
    FalconStor

    Most companies have lots of duplicate data. That’s a fact. Many companies are aware of it, but it falls in the category of cleaning out the garage or a spare room. You see the problem, but until you run completely out of space, it usually doesn’t get straightened up.

    Many IT managers believe the software and/or hardware they purchased already deals with this kind of problem. The truth is, this may or may not be correct. In fact, few enterprises are taking full advantage of how current technology can eliminate redundant data. In some cases, companies have not turned on features that help them with duplicate data (a process hereafter known as “deduplication” or “dedupe”), nor are they actively using deduplication as a key aspect of their data protection plans. The reluctance of IT administrators to embrace dedupe usually stems from a lack of knowledge of its potential benefits or from past experience with a less-than-robust solution.

    However, deduplication is a critical aspect of every backup environment, bringing cost savings and efficiency to the enterprise. Depending on which report you read, companies face data growing at rates from 50 percent annually to nearly doubling every year. That impacts the entire data protection strategy. It also makes data slow and dopey, like a koala bear. Backup windows aren’t being met, and there is no way that disaster recovery testing can take place. Attacking this problem with yesterday’s tools is like picking up a squirt gun to put out a fire – it just won’t work.

    Deduplication solutions are also valuable to disaster recovery (DR) efforts. Once the data is deduplicated, it is then transferred (or replicated) to the remote data center or offsite DR facility, ensuring that the most critical data is available at all times. Deduplication is crucial as it reduces storage and bandwidth costs, provides flexibility and data availability, and integrates with tape archival systems. Deduplication is a vital part of the future of data protection and needs to be integrated.
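    The core mechanism is simple to sketch. Below is a toy fixed-block, hash-based dedupe (real products add variable-size chunking, compression and global indexes; this illustrates the principle, not any vendor’s engine):

    ```python
    import hashlib

    BLOCK = 4096                      # fixed-size chunking, for simplicity
    store = {}                        # fingerprint -> unique block (the repository)

    def dedupe(data: bytes) -> list:
        """Store each unique block once; return the fingerprint 'recipe' for the data."""
        recipe = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)        # duplicate blocks cost nothing extra
            recipe.append(fp)
        return recipe

    def restore(recipe: list) -> bytes:
        return b"".join(store[fp] for fp in recipe)

    backup1 = dedupe(b"A" * 8192 + b"B" * 4096)   # three blocks, two unique
    backup2 = dedupe(b"A" * 8192)                  # entirely duplicate data
    print(len(store))                              # 2 unique blocks serve both backups
    assert restore(backup1) == b"A" * 8192 + b"B" * 4096
    ```

    Replicating only the unique blocks and their recipes is also why deduplicated data is so much cheaper to ship to a DR site.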

    In this article, I will dispel six myths attached to deduplication, bring clarity to the technology and outline the cost savings and efficiencies enterprises can reap.

    Myth 1: Deduplication methodology is a life sentence with no chance of parole. Most enterprise IT admins feel that if they purchased a specific deduplication solution, they are stuck with that method for life.

    Reality: Flexibility is at the core of modern deduplication solutions, which allow firms to choose the deduplication methods that are the best fit for specific data sets. Many companies offer portable solutions, similar to being able to move electronic music from one device to the next. By doing this, IT can align its backup policies with business goals.

    Myth 2: Each server is its own island and there are no boats. The myth is that each server is its own island with separate deduplication processes and none of the islands talk to each other.
    Reality: Just as the Internet has expanded our ability to communicate globally, deduplication solutions have gone global to eliminate multiple copies of data. With global deduplication, each node within the backup system is deduplicated against all the data in the repository. Global deduplication spans multiple application sources, heterogeneous environments and storage protocols.

    Myth 3: I don’t have the money to swap out or upgrade my hardware, and even if I did, I would spend it on something else. The perception is that deduplication servers need to be replaced when space on the server runs out. The system doesn’t allow for upgrades. To increase capacity, companies need to exchange the equipment and implement more servers and memory.
    Reality: Scalability is key to all IT environments, as data is growing exponentially. IT administrators must be able to scale capacity in the backup target disk pool and build disk-to-disk-to-tape backup architectures around the deduplication system. Rather than requiring a swap-out replacement, deduplication repositories can scale as needed with cluster and storage expansions.

    Myth 4: Deduplication slows down performance worse than my antivirus product. IT admins feel that the performance of their systems will slow down because there is too much work for the deduplication server to handle. This performance will hamper the entire backup environment and cause issues when data needs to be recovered quickly.
    Reality: Deduplication can scale to high speeds and can defer data to post-processing to take pressure off the backup window and increase speed. In choosing a deduplication solution, IT administrators must consider how it will support the latest high-speed storage area networks (SANs), which is critical for achieving fast deduplication times. Solutions with read-ahead technology provide fast data restores, even from deduplicated tapes.

  • Mapping a Course to Data Center Efficiency

    Jack Pouchet is vice president of business development and director of energy initiatives for Emerson Network Power.

    JACK POUCHET
    Emerson

    Data center energy efficiency has been an increasing focus since the issue emerged in 2007. We believe dramatic energy savings can be realized without heroic measures that compromise availability. The key is to focus on the core IT systems, rather than just support systems. This is based on the cascade effect, which shows that focusing first on saving energy at the server-component level will drive energy savings throughout the data center.

    In 2007, Emerson Network Power introduced a free, vendor-neutral roadmap to saving 50 percent of your data center energy use. While many of the roadmap’s core principles – such as the cascade effect – still hold true, the industry has evolved at a rapid rate over the past five years. The need to maintain or to build highly available data centers remains the same, but IT and critical infrastructure technologies have changed, creating new opportunities to optimize efficiency and capacity strategies.

    As a result, we’ve updated the approach to incorporate advances in technology and new best practices that have emerged since 2007.

    Ten updated strategies serve as a roadmap. In total, they have the potential to reduce a data center’s energy use by up to 74 percent in a typical 5,000 square-foot data center with a PUE of 1.9 and energy consumption of 1.5 MW (a quick arithmetic check of these figures follows the list).

    • Low-Power Components: The cascade effect rewards energy savings at the server component level, which is why low-power components, such as high-efficiency processors, represent the first step. [Save 172KW or 11.2%].
    • High-Efficiency Power Supplies: Power supply efficiency has improved since our original approach in 2007, but power supplies continue to consume more energy than is necessary. The average power supply efficiency is now estimated at 86.6 percent, well below the 93 percent that is available. [Save 110KW or 7.1%].
    • Server Power Management: Server power management can significantly reduce the energy consumption of idle servers. Data center infrastructure management systems that collect real-time operating data from rack power distribution systems and then consolidate that data can track server utilization, aiding in the effective use of power management. [Save 146KW or 9.4%].
    • ICT Architecture: Unoptimized network architectures can compromise efficiency and performance. Implementing a cohesive ICT architecture involves establishing policies and rules to guide design and deployment of the networking infrastructure, ensuring all data center systems fall under the same rules and management policies. [Save 53 KW or 3.5%].
    • Server Virtualization and Consolidation: Virtualization is facilitating the consolidation of older, power-wasting servers onto much less hardware. It also increases the ability of IT staff to respond to changing business needs and computing requirements. Most data centers have already discovered the benefits of virtualization, but there is often opportunity to go further. [Save 448KW or 29%].
    • Power Architecture: Historically, data center designers and managers have had to choose between availability and efficiency in the data center power system. Now, new advances in double-conversion UPS technology have closed the gap in efficiency, and new features enable double-conversion UPS systems to reach efficiencies on par with line-interactive systems. [Save 63KW or 4.1%].
    • Temperature and Airflow Management: Take temperature, humidity and airflow management to the next level through containment, intelligent controls and economization. From an efficiency standpoint, one of the primary goals of preventing hot and cold air from mixing is to maximize the temperature of the return air to the cooling unit. [Save 80KW or 5.2%].
    • Variable-Capacity Cooling: Cooling must be sized to handle peak load conditions, which occur rarely in the typical data center. Cooling systems that can adapt to changing conditions and operate efficiently at partial loads save energy. [Save 40KW or 2.6%].
    • High-Density Cooling: Optimizing data center energy efficiency requires moving from traditional data center densities to an environment that can support much higher densities. High-density cooling makes that possible. [Save 23KW or 1.5%].
    • Data Center Infrastructure Management: Data center infrastructure management technology can collect, consolidate and integrate data across IT and facilities systems to provide a centralized real-time view of operations that can help optimize data center efficiency, capacity and availability. DCIM also delivers significant operational efficiencies by providing auto-discovery of data center systems and simplifying the process of planning for and implementing new systems. [Because DCIM is integral to many Energy Logic 2.0 strategies, it isn’t possible in this model to attribute an isolated savings percentage to DCIM.]
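
    As a sanity check, the stated line-item savings are roughly additive. A few lines of Python confirm the math; the 1,535 kW baseline is inferred from the stated percentages (172 kW ≈ 11.2 percent), and DCIM is excluded per the note above.

        # Summing the per-strategy savings listed above (DCIM excluded):
        savings_kw = {
            "low-power components": 172,
            "high-efficiency power supplies": 110,
            "server power management": 146,
            "ICT architecture": 53,
            "virtualization and consolidation": 448,
            "power architecture": 63,
            "temperature and airflow management": 80,
            "variable-capacity cooling": 40,
            "high-density cooling": 23,
        }
        total_kw = sum(savings_kw.values())   # 1135 kW
        baseline_kw = 1535                    # inferred: 172 kW / 0.112
        print(f"{total_kw} kW saved = {total_kw / baseline_kw:.0%}")  # -> 74%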

    This new process demonstrates the potential that still exists to optimize the data center. The introduction of a new generation of management systems that provide greater visibility and control of data center systems, and a continued emphasis on efficiency, serve as proof that there is no time like the present for the industry to begin taking significant actions to reduce the overall energy consumption of data centers.

    Roadmap

    Organizations need a clear roadmap for driving dramatic reductions in energy consumption without jeopardizing data center performance. But just how far can a data center efficiency approach drive you? Take a look at how far each of the 10 energy-saving steps could take you via electric car. The cumulative result can literally drive you around the world.

    To see how much each strategy can save your data center visit the Cascading Savings Calculator. This online tool lets you explore the impact of each strategy by entering information that is specific to your data center, such as the load and facility PUE.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

  • Do You Know the Hydro-Footprint Of Your Data Center?

    Ron Vokoun DBIA, LEED AP BD+C, leads the Mission Critical Market for JE Dunn Construction’s western region. Ron is a 25-year veteran of the construction industry with a focus on mission critical facilities and sustainability. Harold Simmons, Director of Strategy and Mission Critical Solutions for United Metal Products, co-authored this article. See more on Simmons below.

    RON VOKOUN
    JE Dunn Construction

    In my last column, Water Consciousness Continues in the Data Center, I discussed cooling technologies proven to reduce water consumption and outlined issues surrounding the availability and potential alternative sources of water for data center cooling. Clearly, there are ways to make data center water usage more sustainable. In this column, let’s discuss the complex relationship between water and energy use in the data center.

    WUE and PUE

    The Green Grid has been at the forefront of the data center energy efficiency movement and is again leading the way in monitoring the use of water in data centers. Its Water Usage Effectiveness (WUE) metric establishes a Key Performance Indicator (KPI) for the amount of water used in a data center.

    Water Usage Effectiveness (WUE) is a new metric to evaluate water use, promoted by the Green Grid.

    The complexity in the relationship between water and energy use can be illustrated by comparing WUE and PUE in a particular data center. If your focus is purely on reducing on-site water use, you can use air-cooled chillers and have a great WUE. However, there may be a premium paid in the form of higher energy usage compared to a technology such as evaporative cooling depending on your location, thereby elevating your PUE.

    An aspect of water use that is often ignored is the amount of water used in the production of the power that is used in the data center, which leads us to the discussion of hydro-footprint.

    Hydro-Footprint

    Depending on the location of your data center, the production of power can be quite water intensive. The National Renewable Energy Laboratory (NREL) performed a study titled Consumptive Water Use for U.S. Power Production (PDF) that analyzed the amount of water used in the production of power in each state. To illustrate the impact of water used in power production, let’s continue using the data from my last column.

    According to the NREL study, power produced in the state of Arizona, on average, uses 7.85 gallons of water per kilowatt-hour. The table below illustrates the annual power use for a 36,000 CFM cooling unit using four different technologies. (Assumptions for both charts: DC located in Phoenix, based on ASHRAE recommended humidity range, based on inlet supply temperature to servers at 80-degrees F, water and power consumption is for 36,000 CFM unit, data is representative and does not apply to all brands, and data provided by United Metal Products.)

    [Table graphic: Annualized power consumption for a 36,000 CFM cooling unit across the four technologies.]

    The table below shows the amount of water used annually for cooling by the four different technologies, as well as the amount of water used in the production of the power used in their operation, giving the total hydro-footprint of each unit.

    [Table graphic: Annual cooling water use and total hydro-footprint for the four technologies.]

    As you can see, although the air-cooled chiller uses the least amount of water in cooling operations, its higher power use yields a higher overall hydro-footprint than Options 1 & 2. This exercise highlights the need to look at water use more holistically and include the water used in the production of power.
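
    The underlying arithmetic is simple enough to sketch. In the toy Python example below, the 7.85 gal/kWh figure is NREL’s Arizona average cited above, but the site-water and energy numbers are invented placeholders, not United Metal Products data.

        WATER_PER_KWH = 7.85  # gallons per kWh generated (NREL Arizona average)

        def hydro_footprint(site_water_gal: float, annual_kwh: float) -> float:
            """On-site cooling water plus water embedded in the power consumed."""
            return site_water_gal + annual_kwh * WATER_PER_KWH

        # Hypothetical units: the air-cooled chiller uses no water on site but
        # draws more power; the evaporative unit trades site water for energy.
        air_cooled  = hydro_footprint(site_water_gal=0, annual_kwh=500_000)
        evaporative = hydro_footprint(site_water_gal=900_000, annual_kwh=150_000)
        print(f"air-cooled:  {air_cooled:,.0f} gal/yr")   # 3,925,000
        print(f"evaporative: {evaporative:,.0f} gal/yr")  # 2,077,500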

    The Green Grid’s WUE metric also has the ability to take this into consideration by adding the water used during the production of the power used by the cooling equipment to the annual site water usage.

    [Formula graphic: The Green Grid’s WUE source calculation.]

    Whichever metric you use, it will help you weigh the options for both power and water consumption and make an informed decision. By tracking your data center’s efficiency in consuming water and energy, you take a huge step toward creating a more operationally sustainable data center environment.

    HAROLD SIMMONS
    United Metal Products

    Co-author Harold Simmons is Director of Strategy and Mission Critical Solutions for United Metal Products and Chil-Pak.


  • Six Tips for Selecting HDD and SSD Drives

    Gary Watson is Chief Technology Officer, Nexsan, an Imation Company.

    GARY WATSON
    Nexsan

    With today’s wide variety of storage devices comes lots of confusion about what types of drives to use for what data types. Adding to the confusion are Serial ATA (SATA) and SAS, which refer to disk drive interfaces, and Solid State Drive (SSD), which refers to a particular kind of internal technology. Then there are considerations of random access performance, sequential performance, cost, density and reliability.

    All these factors make selecting the right drives a challenge. This article offers six tips for navigating through this complexity to help you pick the right solutions for your needs.

    1. Don’t Confuse Interface Type with Disk Performance or Reliability.

    In the past, SAS and SATA were used as convenient shorthand for fast (SAS) or dense (SATA) disk drives. Now, however, we have SSD drives with SATA interfaces as well as inexpensive and dense but relatively low-IOPS 7200 RPM drives with SAS or even Fibre Channel interfaces. Users can no longer make blanket assumptions like “SAS is better for databases.” For example, if we’re comparing a blazing fast SLC SSD with a SATA interface vs. a relatively sluggish 7200 RPM NL-SAS drive, we might be off by a factor of 1,000.

    Users can’t even use SAS or SATA as shorthand for desired drive reliability. There are several SATA drives that have a claimed 2.0M hour MTBF (mean time between failure), for example a 4TB enterprise hard drive from one of our technology partners. This is in contrast to the typical 1.6M hour MTBF  number for many 3.5-inch 15,000 RPM SAS drives, or the even lower 1.4M hour MTBF number for some 2.5-inch small form factor (SFF) 7200 RPM NL-SAS drives.

    Think about that last number for a minute – for a 40TB system, users would need 40 of the 1TB SFF NL-SAS drives, while only needing 10 of the 4TB drives referenced above – one fourth as many. Furthermore, and this is crucial, because there are four times as many SFF drives and each is less reliable, roughly 5 times as many SFF drives would fail per year. Additionally, the 4TB-drive system would consume only 113 Watts, whereas the SFF drives would consume over 200 Watts for the same capacity. When power is a concern, 3.5-inch drive systems often deliver twice the gigabytes per Watt of 2.5-inch drive systems.
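
    That failure-rate claim is easy to verify with back-of-the-envelope math using the MTBF figures above: expected annual failures scale with drive count and inversely with MTBF.

        HOURS_PER_YEAR = 8760

        def expected_failures_per_year(drives: int, mtbf_hours: float) -> float:
            return drives * HOURS_PER_YEAR / mtbf_hours

        # The 40TB example above:
        sff = expected_failures_per_year(40, 1_400_000)  # 40 x 1TB SFF NL-SAS
        big = expected_failures_per_year(10, 2_000_000)  # 10 x 4TB enterprise
        print(f"SFF: {sff:.2f}/yr, 4TB: {big:.2f}/yr, ratio: {sff / big:.1f}x")
        # -> SFF: 0.25/yr, 4TB: 0.04/yr, ratio: 5.7x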

    2. For Best $/GB, 3.5-Inch 7200 RPM SATA Is Still King.

    Storage vendors have a seemingly endless variety of pricing models, but one constant seems to be that 2.5-inch systems cost twice as much per gigabyte as 3.5-inch systems, assuming both are using “enterprise-grade” drives. But as previously noted, the 3.5-inch solution will be far more reliable.

    10K and 15K SAS solutions in either 2.5-inch or 3.5-inch form factor will be approximately 3X to 6X more expensive per gigabyte. SSD solutions can be from 10X to 50X more per gigabyte than comparable SATA drives.

    3. HDD Performance Is Mostly Dictated by Density and Mechanical Speed.

    The random or transactional (IOPS) performance of spinning drives is dominated by the access time, which in turn is determined by rotational latency and seek time. Interface performance has almost no influence on IOPS, except in the negative sense that complex or new interfaces sometimes have bloated or immature driver stacks which can hurt IOPS. Highly random applications which benefit from high IOPS drives include email servers, databases and hypervisor environments.
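
    A rough model makes the point. In the Python sketch below, access time is average seek time plus half a rotation; the seek times used are typical published values, assumptions rather than specs for any particular drive.

        def hdd_iops(rpm: int, avg_seek_ms: float) -> float:
            rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution
            access_ms = avg_seek_ms + rotational_latency_ms
            return 1000 / access_ms                     # random IOPS, queue depth 1

        print(f"15K RPM:  ~{hdd_iops(15_000, 3.5):.0f} IOPS")  # ~180
        print(f"7200 RPM: ~{hdd_iops(7_200, 8.5):.0f} IOPS")   # ~80

    Note that the interface never enters the calculation – exactly the point of Tip 1, and consistent with the two-to-three-times advantage of 15K drives cited below.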

    Sequential performance, which is important for applications like video and D2D backups, is dominated by the RPM of the drive times the bits per cylinder. This number will decrease 50 percent or more as the drive moves from the outermost to the innermost cylinders. Again, as long as the interface is fast enough to keep up (and it is in all modern hard drives), the interface speed (or even the quantity of interface ports) has no measurable effect on sustained performance. The fastest drives today can sustain less than 200 MB/s, which is less than the performance of a single 3 Gb/s SATA port.

    4. Consider SSD Instead of 10K or 15K Drives for Transactional Workloads.

    Due to their ever-increasing performance and reliability, 7200 RPM SATA drives are taking on more types of workloads including moderate transactional applications. However, 15,000 RPM drives can deliver roughly two to three times as many small block random transactions as 7200 RPM drives due to their lower rotational latency and much more powerful actuator arm. As a result, they are often used for demanding database or email server workloads.

    Recently, SSDs have become mainstream options from most storage vendors. Though not faster at sequential workloads, they are incredibly fast at random small block workloads and may be a superior choice for demanding SQL, Oracle, VMware, Hyper-V and Exchange requirements. Many customers report that they can support more guest virtual machines (VMs) per physical server due to the lower latency of SSD solutions, which may offer tremendous cost savings depending on specifics of licensing and hardware.

    SSDs continue to advance at a very fast pace, and are now the leading technology in terms of dollars per IOPS as well as IOPS per watt. Today it is very likely that an all-SSD solution will have lower overall capital and operational cost than one made from 15,000 RPM drives, due to the reduction in total slots required to achieve a given transaction performance and the greatly reduced power footprint compared to spinning drives for a given number of transactions. Some enterprise SSDs meet or even exceed the reliability and durability of 15,000 RPM drive systems because far fewer SSDs are required to achieve any given IOPS level.

  • Reducing the Storage Footprint & Power Use in Your Data Center

    Eric Bassier is Director of product marketing at Quantum. He has over 11 years of experience in the storage industry, with a focus on data protection and archiving.

    ERIC BASSIER
    Quantum

    I was talking with a company based in Manhattan, and they told me that they have maxed out their data center’s power usage. They simply cannot get more power.  And because of the constraints of city real estate, they can’t add square footage to their data center – there’s nowhere to grow.  Moving their data center off the island is an option (and long-term plan), but clearly reducing the footprint of storage in their data center and minimizing power consumption are important factors, as they would be for any IT investment.  Although Manhattan has some unique challenges in this area, IT departments in many metro areas are facing similar problems. Data center floor space requirements and power consumption costs are both increasing and becoming important factors for all of us.

    When it comes to storage and data protection, there are a couple of key technologies that can dramatically reduce storage footprint and power consumption.

    Deduplication

    Deduplication technology is now mainstream, especially as it is applied to backup storage and disaster recovery. Deduplication – especially variable-length deduplication – enables users to store 20 to 50 times more data in the same disk space as standard disk storage. As elsewhere in IT, deduplication appliances continue to improve their disk density, with some vendors now offering appliances using 3 TB hard drives, which can further reduce storage footprint and power consumption. A purpose-built deduplication appliance using 3 TB drives can reduce power by as much as 50 percent compared to a similar appliance using 2 TB drives.
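
    The footprint math compounds quickly. The sketch below is illustrative arithmetic only – the 20x ratio is the conservative end of the range above, and the per-drive wattage is an assumption, not a spec for any appliance.

        dedup_ratio = 20         # conservative end of the 20-50x range
        protected_tb = 600       # logical data to protect (example value)
        raw_tb = protected_tb / dedup_ratio   # 30 TB of physical disk

        watts_per_drive = 8      # assumed per-spindle draw
        for drive_tb in (2, 3):
            spindles = raw_tb / drive_tb
            print(f"{drive_tb} TB drives: {spindles:.0f} spindles, "
                  f"~{spindles * watts_per_drive:.0f} W")
        # -> 2 TB drives: 15 spindles, ~120 W
        # -> 3 TB drives: 10 spindles, ~80 W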

    As power consumption and the amount of spinning disk is reduced, power and cooling costs both decrease as well.

    Cloud Storage, Backup, and DR

    One of the interesting things about deduplication is that it is a great “engine” for reducing data set sizes before replicating that data over a WAN.  This makes deduplication a perfect technology for getting data into the cloud. Some vendors are now offering cloud-based storage, backup and disaster recovery services.  Moving storage away from metro areas (with expensive real estate, expensive power and a high cost of living for IT staff) to data centers in different geographic regions will continue to be a driving trend for IT in the future.

    Tape

    Speaking of the cloud, many companies continue to use tape the way many cloud providers do – as the lowest-cost storage medium with the lowest power requirements. To many, tape always seems to be on the brink of death, but in reality tape continues to have the lowest acquisition cost, and for long-term data storage the power and cooling costs are extremely low.

    Just like the disk drive industry, tape densities continue to improve to enable businesses to reduce their data center footprint. The latest generation of tape media, LTO-6, can store over 6 TB of compressed data on a single cartridge not much larger than a smartphone. With advancements in the storage densities of tape libraries, users can store multiple PBs of data on tape in less than a full rack of space.

    If reducing data center footprint and power consumption is a concern, consider exploring and investing in these technologies. With smart management of data storage, the Manhattan company I mentioned may be able to postpone its move from the Big Apple.


  • HIPAA and PCI Compliance Are Not Interchangeable

    Mike Klein is president and COO of Online Tech, which provides colocation, managed servers and private cloud services. He follows the health care IT industry closely and you can find more resources at www.onlinetech.com/compliant-hosting/overview.

    MIKE KLEIN
    Online Tech

    When thinking about compliance, many companies assume PCI DSS is interchangeable with HIPAA, or at least that the gap between the two is small. This ignores the fact that HIPAA and PCI DSS compliance protect different types of information, with different audit guidelines, safeguard requirements, and consequences for non-compliance or breaches.

    Origins and Audits

    HIPAA compliance is monitored by Health and Human Services, and the audit is based on OCR (Office for Civil Rights) protocols that are continuously updated and enforced. These are governmental entities, not private companies. KPMG was selected as HHS’ auditor of choice, and investigations of compliance with the Security and Privacy rules carry the fully informed and funded weight of a well-respected auditing powerhouse.

    Conversely, PCI compliance is defined by the PCI SSC (Payment Card Industry Security Standards Council). This council is a collaboration including Visa, Mastercard, American Express, Discover, and JCB (Japan Credit Bureau), with these companies having a vested interest in keeping consumer data safe.

    Consequences of Non-Compliance

    The cost of a breach is very different between HIPAA and PCI compliance as well. HIPAA is a US federal law. There are criminal and civil penalties associated with a breach, as well as fines. This means that in addition to stiff financial consequences, willfully negligent stakeholders can go to jail for non-compliance. If a breach occurs, healthcare providers are required to post public press releases in traditional media outlets to inform patients of the potential threat to their information. This damage to the image and credibility of an institution can have long lasting impacts.

    With PCI compliance, there are contractually agreed upon fines, but no criminal charges. You aren’t going to see anyone going to jail for not being PCI compliant. This isn’t to say that PCI costs aren’t serious. A PCI breach could cost anywhere from thousands to millions in fines to the credit card companies, and could result in the loss of card processing privileges, which severely impacts business cashflow. Of course, there is also always a threat to a company’s reputation that might discourage current or future buyers.

    Requirements

    When you peel back the curtain on HIPAA and PCI requirements, they look very different. HIPAA is very focused on policies, training, and processes. It’s more subjective and broad in application, caring about how a company handles breach notification, whether an organization insists on BAAs (Business Associate Agreements) with its vendors, or whether the cloud provider associated with a company has conducted a thorough risk assessment against all administrative, physical, and technical safeguards. To this last point, the final HIPAA Privacy and Security Rules published by HHS recently clarified that data center and cloud providers are, in fact, considered Business Associates that must be HIPAA compliant if there is Protected Health Information (PHI) in their data centers or on their servers. HIPAA doesn’t precisely describe technical specifications or methods to achieve compliance. Each Covered Entity and Business Associate must complete a risk assessment and management plan addressing each of the HIPAA safeguards.

    The Business Associate Agreement is unique to HIPAA, and extends the ‘chain-of-trust’ and liabilities for protecting PHI from the Covered Entities (healthcare providers), throughout its network of supporting vendors. Any company that stores, processes, or accesses patient health information is automatically considered a Business Associate. As such, they will be held to the full legal liability to keep PHI safe. Turning a blind eye only makes the penalties steeper.

    PCI DSS requirements, by comparison, are much more prescriptive. The technical requirements are more detailed, explicitly outlining the necessity for processes like daily log review and encryption across open, public networks, while processes around training and policies are not as prevalent. PCI DSS does not have an equivalent of a Business Associate Agreement required between a company that needs to be PCI DSS compliant and its vendors.

    Do HIPAA and PCI Compliance Overlap?

    Well, yes and no. The technical PCI requirements can set up a nice framework that could work as a prescriptive guide for some of HIPAA’s technical safeguard requirements. However, the foundation of HIPAA compliance is a documented risk assessment and management plan against the entire security rule. PCI does not share this cornerstone as the basis for meeting compliance.

    The bottom line is that passing a PCI audit does not mean you’re HIPAA compliant, or that KPMG is going to care about PCI when it comes to an evaluation on due diligence to meet HIPAA compliance.

    The reverse is also true. Passing an independent audit against the HIPAA Security and Privacy rules does not imply PCI compliance either. Even with overlap, they’re still separate and should be treated as such. The best course when looking at hosting providers is to request an audit report, read the details, and confirm that HIPAA compliance is based on the OCR Audit Protocols and PCI compliance is based on the PCI DSS. This ensures that the business not only understands the difference between the two compliance regimes (if both are necessary), but also that the company has truly been diligent in keeping your data safe. After all, compliance is not a checkmark, it’s a culture.


  • A Great Time to be in the Data Center Industry

    Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

    TOM ROBERTS
    AFCOM

    Today, you can plug the words “data center design and build services” into an Internet search engine and get results literally in the millions.

    A decade or so ago, however, data center specialists were scarce. Finding an architectural and engineering group that understood the complexities of the data center and spoke our language proved challenging, to say the least.

    It was certainly a source of frustration for me and my industry peers. I was director of data center operations for a healthcare group back then, and it became painfully obvious that we had outgrown our second-floor office building location and needed more space and efficiency to accommodate present and future growth.

    We approached the project logically, looking at site locations, talking with real estate groups, reviewing utility capabilities and conducting site evaluations. Yet, each time we met with prospective builders and/or designers and brought up our needs for a “hardened” data center with built-in redundancies, N+1 cooling, hot and cold aisles, raised flooring, emergency backups, etc., their eyes glazed over.

    Most of them, while completely proficient in building and designing other structures, didn’t fully grasp the concept that data centers must be able to withstand power outages, natural disasters and equipment failure on a 24/7 basis. It took just as much effort to explain the “room to grow” aspect of the project.

    Then, during the actual design process, it seemed that regardless of what we discussed in meetings, something different came back in the design plans. An obvious gap in communication and imbalance between demand for, and supply of, data center specialists existed.

    Different Ecosystem Today

    Thankfully, that changed soon enough. IT gained clout and visibility with the maturity of companies like Yahoo, Facebook and Amazon – all start-ups in 1994-1995. Data centers came into a whole new light and had to step up their games to keep pace with the evolution of computing needs. It often required complete redesigns or building from scratch … and the market responded admirably.

    The need for businesses to have an online presence to complement brick-and-mortar operations to stay competitive ushered in the era of more data, more applications, more servers, more end users, and it all took more energy and “out-of-the-box” thinking to accomplish.

    For example, I didn’t have the multi-million dollar funding required to install dual power feeds for our facilities, so we implemented emergency redundant backup systems instead. It took a lot of meetings and conversations to reach that understanding. “Back in the day,” data centers housed a bunch of old servers that ran at 2-3 kW per rack, took up a lot of space and were not very efficient. Now, it’s all about consolidation, doing more with less, and on average drawing 8-12 kW per rack – and in many cases, much more.

    Partners Abound

    The good news is you won’t have any problem finding companies that not only speak our language, but do it fluently. The challenge is to find one that will work with you, and at the same time, bring fresh ideas to the table. I recommend you zero in on those that not only understand, listen, and contribute, but that fit your culture too—a major factor in the selection process.

    Your company likely has one of three cultures: a process culture, which follows the letter of the law and doesn’t want to bend or break any rules; a normative culture, with very stringent procedures and very high standards of ethics, where procedures match ethics; or a cross between the two – a collaborative culture – with a higher threshold for creativity and a willingness to combine efforts.

    So, for example, if you come from a process culture and try to work with a company that just fires out ideas with little regard for getting from A to B to C in that order, your clashing styles will prevent progress and increase frustration. It’s in your best interest to search out companies that match your culture – a luxury that should be appreciated and not taken for granted.

    It is a great time to be in the data center industry – just think what we will know tomorrow.


  • Brokering IT Services Internally: Building the Order Process

    Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.

    DICK BENTON
    Glasshouse

    In my last post, I outlined the fourth of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: advertise the Ts and Cs. That means developing a simple and easy-to-read list of the terms and conditions under which IT services are supplied. Now we will address actually building the order process, so that services can be provisioned in an automated way that can satisfy today’s on-demand consumers. One of the major cloud differentiators is its ability to support self-service selection followed by automated provisioning; in other words, being able to offer services on a Web page which will trigger scripts to automatically provision the selected resource.
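
    As a sketch of that self-service pattern – with hypothetical catalog items and names, and a print standing in for calls to a real hypervisor or cloud API – the flow can be as simple as:

        CATALOG = {
            "small-vm":  {"vcpus": 2, "ram_gb": 4,  "disk_gb": 50},
            "medium-vm": {"vcpus": 4, "ram_gb": 16, "disk_gb": 200},
        }

        def order(item: str, requester: str) -> dict:
            """Handle a catalog order: validate, then provision automatically."""
            spec = CATALOG.get(item)
            if spec is None:
                raise ValueError(f"unknown catalog item: {item}")
            # Policy checks (quotas, chargeback codes) would run here in code,
            # replacing the multi-week, form-driven approval obstacle course.
            hostname = f"{item}-{requester}"
            print(f"provisioning {hostname}: {spec}")
            return {"hostname": hostname, **spec}

        order("small-vm", requester="jdoe")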

    Historically, the largest roadblock to an expedited provisioning process has been the “legacy” approach to service fulfillment. Typically, when the busy consumer seeks access to a resource, he or she is faced with a daunting obstacle course of approvals. Usually, this will involve a lengthy form to complete (sometimes even an online form) with an explicit rationale for the request, along with minute detail on the amount of resources to be consumed and information like peak loadings. Some organizations include an additional section for risk management covering the impact or association the resource may have with various compliance legislation and internal compliance regulations. Other sections will provide space for approval of active directory additions, network access, storage utilization, backup protection and even disaster recovery. Often there will be a section covering the data to be used with the resource and a questionnaire on this data covering its corporate sensitivity and the need for encryption.

    CYA Approach

    Although the process seems to place every conceivable obstacle between the consumer and the resource sought, the process is typically one that has been built over time. Each checkpoint probably developed in response to some painful, embarrassing or uptime-impacting event in the past. The process is designed to specifically eliminate the possibility of those past events recurring. The “cover-your-backside” process is not uncommon in IT procedures.

    Just-in-case provisioning is designed to ensure that the risk of any inappropriate or unauthorized allocation or usage is reduced to near zero. In the past, this has been the foundation of provisioning time frames lasting weeks and even months. Also, it’s why IT is often seen as an impediment rather than an enabler of the business. These forces have also driven an evolution in the defense mechanisms of the resource requestor. The resource requestor learned what the “right” answers were in order to get their request through the process with only a three- or four-week delay. Again, typically there is no detection mechanism nor any procedure for withdrawal of a resource found to have been used in a manner that is different from its “planned” purpose.

  • Creating An Effective Data Warehouse Strategy

    Alan McMahon works for Dell. He has worked for Dell for the last 13 years involved in enterprise solution design, across a range for products from servers, storage to virtualization, and is based in Ireland.

    ALAN McMAHON
    Dell

    Every company has a stockpile of data – loads and loads of data. That data isn’t very useful, however, if you can’t access it without dedicating copious amounts of time and effort to the endeavor. That’s where an effective data warehouse strategy comes in.

    Contrary to what some companies may still believe, effective data warehouse solutions do not have to be costly. Nor do they have to be complex or limited to a single size and scope.

    What No Longer Works

    In the so-called olden days, which in the high-tech world can be as recent as last year, data warehousing was attempted using two fairly common methods.

    One was relying on external resources to cobble together a system as the company went along. Such systems could contain any number and type of servers, storage arrays and software. Companies hoped that, when combined, such a collection would work as an effective data warehousing solution, although that has proven less and less likely to be the case. Disparate units thrown together can create an increasingly complex system that is difficult to monitor, track or manage in an effective manner.

    The do-it-yourself approach also runs into trouble for companies with limited internal IT resources to dedicate to the creation and management of an effective warehousing system. The IT staff may not be large enough, or have the availability, to focus on implementing or managing a sprawling warehousing system.

    Another old-school method of data warehousing was going for a system based on proprietary technology. While this type of system may have offered the capabilities and technology to meet the needs of many businesses, the cost was typically high. The initial outlay was costly, as were the ongoing contracts required to ensure the systems would be continuously optimized and maintained. Reaching for proprietary systems could also result in over-provisioning for small and medium businesses, which would not necessarily need such an extensive system but were forced to pay for it anyway, believing it was the only available option.

    The drawbacks of former data warehousing methods include high cost, low efficiency, and the simple inability to make any useful sense of the data being stored.

    What You Can Do with a System That is Much More Effective

    Instead of having vast amounts of unorganized and inaccessible data, an effective data warehouse strategy lets you access the data easily and rapidly for a number of uses. Reviewing various types of data allows you to track past and current trends while predicting future trends and issues – resulting in meaningful business intelligence reports.

    Vast amounts of data stored in an inefficient manner can result in drastically reduced system performance. As data volume increases, so can the amount of time it takes for data to load for even the simplest routine operations. Throw in a few queries to locate a specific item, and the system can lag even further as it attempts to sift through and process existing data. These time lags not only affect employees’ productivity, but they can also affect the company as a whole if downtime or bottlenecked traffic results.

    Extensive and ever-expanding data collections are a major challenge for today’s businesses. Internal and external sources are constantly adding more data to the mix in a variety of formats and complexity levels. Duplicated and redundant data are neither uncommon nor of any practical use.

    Online Analytical Processing, or OLAP, can be a very handy application for mining data from different databases, but it places an extreme workload on a system that may not be designed to handle anything so complex or large.

    Effective data warehousing can also eliminate archaic data storage systems that have long outlived any useful purpose or free up other devices that are too stocked with data to perform additional functions.

    What to Look for in an Efficient Data Warehousing System

    Capacity and performance are the two big factors to review when choosing a data warehouse strategy. The framework should be capable of supporting and balancing the hardware and software that comprise the system, and should offer features vital to today’s enterprises. These include:

    • Ability to handle extensive sequential scans
    • Capabilities compatible with OLAP systems
    • Configurations that implement next-generation servers and storage arrays
    • Rapid installation with minimal impact on daily operations and operational systems
    • Scalability to meet business needs without over-provisioning
    • Ability to increase scale for business growth with cost-effective additions down the road
    • Cost-effectiveness to fit a variety of price points and budgets
    • Available upgrades and updates as technologies advance

    Size Matters

    The option to choose the right size is a must, to keep processing speeds high and costs low. Small, medium and large data warehousing options should be available to meet the specific needs of your business. Small and medium businesses, for instance, may do well with a 5 terabyte (TB) platform consisting of a single server with internal storage. Slightly larger businesses may be able to create an effective strategy using a 10 TB platform with a larger server and internal storage array. The largest enterprises, by contrast, may want nothing less than a 20 TB platform based on a large server and Fibre Channel storage array that can handle the massive loads.
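
    A sizing decision like this reduces to a simple tiering rule. The Python sketch below mirrors the 5/10/20 TB examples above; the thresholds and platform descriptions are illustrative, not a Dell sizing guide.

        def warehouse_platform(data_tb: float) -> str:
            if data_tb <= 5:
                return "single server, internal storage (~5 TB platform)"
            if data_tb <= 10:
                return "larger server, internal storage array (~10 TB platform)"
            return "large server, Fibre Channel storage array (20 TB+ platform)"

        for size_tb in (3, 8, 18):
            print(f"{size_tb} TB -> {warehouse_platform(size_tb)}")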


  • Unleash Your Applications with Cloud-aware Application Delivery

    Kavitha Mariappan is director of product marketing (Stingray Business Unit) at Riverbed Technology.

    KAVITHA MARIAPPAN
    Riverbed

    The evolution of cloud architectures and their ability to deliver a greater level of efficiency and flexibility has been a hot topic recently. So why put your apps in the cloud?

    Why Should I Use the Cloud?

    Ultimately, business priorities should drive the decision, not simply some arbitrary need to streamline IT resources. It’s evident that the business advantages of moving to the cloud are significant and provide sustaining benefits. There are good reasons for this. One of the most compelling is elasticity—IT resources can automatically scale up and down as required by the business. The risks of under-provisioning and the costs of over-provisioning evaporate.

    Running applications from an enterprise-grade cloud eliminates the need to sink excessive capital into complex and expensive hardware and other infrastructure components. Additionally, it eliminates the recurring and cumulative costs—a kind of tax, really—of maintaining hardware-based, on-premise infrastructure components. And, the cloud, no longer novel, has been widely utilized and is mature enough to support a large variety and scale of applications.

    Shedding much of the costs of delivering services enables you to provide your customers with a much richer user experience and access a greater variety of apps and services. In the cloud, developers can put their ideas to the test sooner and potentially mitigate issues that otherwise might arise only after the application has been moved to production.

    Cloud computing isn’t limited to a collection of virtual machines and storage you rent by the hour in a location far away from your data center. Mature cloud providers offer the ability to extend existing on-premise infrastructures into cloud facilities, creating a unified architecture with the benefits of instant infrastructure. Applications can span both, and users need not notice the difference.

    To better understand the business benefits of deploying applications in the cloud, let’s examine three compelling aspects:

    1.) Reduced complexity. Deploying applications in the cloud reduces the burden of hands-on system administration and allows you to spend more time thinking strategically. Application developers experience greater agility to innovate and contribute to your bottom line and avoid the wastefulness and boredom of day-to-day heavy lifting.

    2.) Reduced costs. The two biggest savings realized by many organizations that move to the cloud come from economies of scale and a usage-based pricing model. Pay-as-you-go brings true capital cost savings, eliminating the need to invest in unused capacity while ensuring that spikes in demand don’t cripple your business. As their processes mature, enterprises minimize operational costs by automating rote tasks using repeatable and standardized components and blueprints.

    3.) Increased flexibility and agility. The cloud offers increased agility, dynamic scalability, and faster speed to market. Imagine a scenario in which your IT department no longer sits idle while waiting for the UPS truck to arrive and no longer camps out in your data center during the weekends to install new hardware. Cloud resources can scale to match demand at any point in time. Automation allows you to simplify and streamline formerly manual and cumbersome IT processes.

    What Should I Worry About in the Cloud?

    Performance, availability, and security top the list of concerns that our customers mention. Notably, these same concerns apply to on-premise implementations, too. Our customers expect that they will receive similar levels of service, efficiency, performance, and security as their applications migrate to the cloud.

    Together, these requirements constitute a certain degree of measurable business continuity. How can a shared platform, not under your control, deliver the same or better response times? How can it protect your applications from security threats? How can it ensure continued customer satisfaction, revenue growth, and productivity when network latency varies from one location to another?

    In reality, these questions apply to any IT infrastructure, whether on premise or in the cloud. Deployment practices that alleviate these concerns work in any environment – traditional data centers, virtualized private clouds, public clouds, and hybrid clouds.

    How Do I Deploy Effectively in the Cloud?

    As you start to deploy business applications in the cloud, take some time to consider—or reconsider—your application delivery infrastructure. Does it provide the performance, flexibility, cost savings, and agility that you need, now and in the future?

    Legacy load balancers sit in front of web and application servers. They accept requests on behalf of external users and manage the dialog with the application. They traditionally focus on enhancing the reliability of the back end of the data center by ensuring the availability and scalability of applications. They implement features such as server offload and content caching to reduce application server costs. Traditional load balancers deal with problems that arise from traffic surges and spikes, but with a server-side focus.
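
    The core of that server-side role is a scheduling decision. Below is a minimal least-connections balancer in Python – a sketch of the concept only; real load balancers and ADCs layer health checks, session persistence, SSL offload, caching and layer-7 inspection on top.

        class LeastConnections:
            def __init__(self, servers: list[str]):
                self.active = {s: 0 for s in servers}  # open connections per server

            def pick(self) -> str:
                """Route a new request to the least-loaded back end."""
                server = min(self.active, key=self.active.get)
                self.active[server] += 1
                return server

            def done(self, server: str) -> None:
                self.active[server] -= 1  # call when the connection closes

        lb = LeastConnections(["app1", "app2", "app3"])
        print([lb.pick() for _ in range(5)])  # spreads load across the pool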

    Applications undergo constant evolution, and one rapidly emerging property is a high degree of distributed processing across multiple locations. Application delivery solutions must similarly evolve to meet the requirements of large-scale distributed processing readily available in the cloud. Such requirements include:

    • Enhancing efficiency and response times of applications and services
    • Improving availability between instances that span multiple geographic zones and regions
    • Solving latency problems with content optimization and acceleration tools
    • Ensuring proper protection, using intelligent layer-7 inspection, against known and unknown threats
    • Scaling resources to provide encryption and compression services without affecting performance

    Yesterday’s load balancers and legacy application delivery controllers are not designed for the cloud. The mismatch is clear. Only a modern, cloud-ready application delivery solution can truly help you make this shift – software-based application delivery controllers (ADCs) have emerged as the right solution for cloud-based application deployments.

    Software ADCs are natively designed for virtualization and cloud portability. Pure software solutions are intended for the widest variety of deployments and enable a more flexible application delivery strategy. This approach is the foundation of a true opex-centric resource model.
