Author: Bill Kleyman

  • The Modular Data Center Market

    This is the fourth article in the Data Center Knowledge Guide to Modular Data Centers series.

    As the modular market develops, tremendous innovation and engineering design effort has gone into these solutions. The market is maturing, with ever more large enterprises actively deploying the modular data center platform. To further illustrate the traction in the modular industry, a recent Uptime Institute survey showed that 41% of respondents are already considering modular, pre-fab data centers or their components.


    Modular Products
    Many of the major server vendors have modular data center products; while these are optimized for the vendor’s own hardware, they will typically support anything a standard rack supports. Vendors worldwide have engineered their own version of a container or module and incorporated a variety of unique capabilities into their solutions. Having one vendor supply all components for and within a module’s racks allows that vendor to engineer the module as a complete solution, which modular power and cooling products can then complement. Modular data center products, including containers, are available from: IO, HP, IBM, SGI, Dell, Cisco, Cirrascale, Google, Elliptical Mobile Solutions, IPSIP, Toshiba, Bull, AST, Schneider Electric, PDI (acquired by Smiths Interconnect), Emerson Network Power, Silver Linings, Telenetix and Active Power.

    Modular Providers
    Within the everything-as-a-service model, a modular provider is able to offer the entire data center as a service by quickly adding a module of IT with all supporting power and cooling infrastructure. The idea is to deliver a data center-in-a-box solution: quickly built out and provisioned, it can be made operational far faster than a standard brick-and-mortar facility.

    The entire module is available as a package, integrating all aspects of the IT within it and its subsystems through DCIM or other management tools, such as DCOS (data center operating systems). Modular data center providers include: IO, NxGen Modular, mSun Modular Data Centers, IBM, COLT, Toshiba, Cannon, Pacific Voice and Data, BladeRoom, Pelio & Associates, Dock IT, Lee Technologies (acquired by Schneider Electric), Datapod and Turbine Air Systems (TAS)/Celestica (CLS).

    The complete Data Center Knowledge Guide to Modular Data Centers is available for download in a PDF format and brought to you by IO. Click here to download the DCK Guide to Modular Data Centers.

  • Improve Connectivity with HP Virtual Application Networks

    With more devices and more connectivity into new technologies like cloud computing, many organizations are looking for better ways to optimize their networks. To have a solid infrastructure, organizations must remain agile as reliance on the modern data center increases. In this white paper by ESG, commissioned by HP, you can learn about key trends which closely align with business agility and data delivery. These trends include:

    • Aggressive data center consolidation.
    • Increasing use of server virtualization technologies.
    • Wide and growing deployment of web-based applications.
    • Consumerization of IT and BYOD.

    Because of these growing trends, companies must deploy highly agile network environments capable of scaling and meeting both business and end-user demands. In this white paper, ESG goes on to point out that existing legacy networks will not be able to sustain future growth; therefore, a new “cloud-friendly” architecture is required. This new architecture will support rapid growth, handle dynamic environments based on business and IT policies, and provide sufficient levels of automation and orchestration for rapid software-based deployment of end-to-end services. More specifically, the network must be:

    • A foundation for connectivity and performance.
    • Virtualized and abstracted.
    • Tightly integrated with adjacent domains and orchestration programs.

    To meet network demands and continue to deliver powerful cloud solutions, HP introduced its Virtual Application Networks, or VANs. VANs are logical, purpose-built virtual networks that leverage the existing FlexNetwork architecture and are designed to connect users to applications and services, resulting in a scalable, agile, and secure network that streamlines operations.


    [Image source: Enterprise Strategy Group, 2012]

    Download this white paper to learn about HP’s VAN methodology. The paper covers a comprehensive approach to designing compute, network, and storage environments, with an overarching theme of converged infrastructure. HP’s vision for cloud computing has set the wheels in motion for many organizations to move off of legacy networks. By using HP’s VAN solution, organizations can alleviate short-term challenges while building a flexible network for future business and IT initiatives.

  • Managing the Data Center – One Rack at a Time

    The modern data center is a combination of technologies all working together to help deliver data. As cloud computing, IT consumerization and big data continue to shape the industry, the data center will remain at the heart of it all. In working with today’s data center infrastructure, administrators must build not only around agility but around efficiency as well. The idea is to simplify data center management by breaking a complex environment into more manageable pieces: the racks. In working with rack technologies, there need to be tools in place that understand space, power and cooling. These tools can assist the data center manager in areas such as asset management, real-time monitoring, capacity planning, and process management.

    In simplifying the data center into rack components, administrators are able to better gather and quantify metrics within their infrastructure, for example (a simple hierarchy is sketched after this list):

    • Rack – total rack power
    • Rack PDU – power at the rack PDU
    • Device – power consumed by the IT device
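
    To make those three levels concrete, here is a minimal sketch (our own illustration, not from the white paper; all class and field names are hypothetical) of how device, PDU and rack power roll up:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Device:
        name: str
        power_w: float  # power consumed by the IT device, in watts

    @dataclass
    class RackPDU:
        name: str
        devices: list[Device] = field(default_factory=list)

        def power_w(self) -> float:
            # Power at the rack PDU: the sum of its attached devices
            return sum(d.power_w for d in self.devices)

    @dataclass
    class Rack:
        name: str
        pdus: list[RackPDU] = field(default_factory=list)

        def power_w(self) -> float:
            # Total rack power: the sum across all rack PDUs
            return sum(p.power_w() for p in self.pdus)
    ```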

    Furthermore, there are direct benefits in managing inventory within the rack environment. At the rack level, there are three primary resources which, as the white paper outlines, must be considered when determining whether the rack can support a new asset (a minimal check is sketched after this list):

    • Is there enough contiguous space to house the asset?
    • Is there sufficient redundant power for the asset?
    • Is there enough cooling to remove the heat generated by the asset?
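
    A minimal sketch of those three checks, assuming the rack’s free contiguous space, power headroom and cooling headroom are already known (names and units are illustrative, not from the white paper):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Asset:
        size_ru: int        # contiguous rack units the asset occupies
        power_w: float      # power the asset draws, in watts
        heat_btu_hr: float  # heat the asset generates, in BTU/hr

    def rack_can_support(asset: Asset,
                         free_contiguous_ru: int,
                         redundant_power_headroom_w: float,
                         cooling_headroom_btu_hr: float) -> bool:
        """Apply the three rack-level checks from the list above."""
        return (free_contiguous_ru >= asset.size_ru                  # space
                and redundant_power_headroom_w >= asset.power_w      # power
                and cooling_headroom_btu_hr >= asset.heat_btu_hr)    # cooling
    ```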

    By using intelligent monitoring tools, not only are you able to answer questions around space, power and cooling – you’re also creating a smarter rack. With connections to existing rack management hardware, the intelligent cabinet can provide the following advanced functionality:

    • Asset management – location of all rack assets down to the rack unit
    • Power management – rack PDU and overall rack power management
    • Rack security – access monitoring and lock control via badge or keypad
    • Environmental monitoring – temperature, humidity, air flow, and other sensors
    • KVM – access to rack devices through a KVM switch
    • Touch screen and keyboard at the front of the rack

    As the data center continues to evolve – there will be a greater need to work with intelligent tools that can analyze the complex relationships between space, power and cooling. Download this white paper from No Limits Software to learn how smart software tools can create smarter data centers.

  • Adapt Your Technology, Services and Organization to Cloud

    Let’s face it – the cloud is here and cloud computing is only going to continue to evolve. Many organizations are either looking at some type of cloud solution or have already jumped in. The reality is that the cloud can be a very powerful platform which can bring numerous benefits to your organization. They key to creating that powerful cloud environment would include deploying the right model, involving the right technologies, and having the right business case.

    In doing this alone, you may face various challenges in terms of best practices, services and just general knowledge around the cloud. This is where cloud innovators can offer some help. HP’s Converged Cloud Workshop helps you gain clarity on your cloud strategy, identify the cloud initiatives that can work for your business, and create a roadmap that defines your steps forward. It is an exploration and expansion of your understanding of the cloud computing model and how it fits in with, and changes the dynamics of, traditional IT solution sourcing.

    HP2

    [Image source: HP – Adapt your technology, services, and organization to cloud]

    In this white paper, you learn not only about the evolution of the cloud, but you are also able to see how a strategic cloud workshop can set your organization on the right path towards the cloud. The workshop involves discussing a range of cloud computing concepts with customers, followed by questions about where they see themselves in their current as well as future cloud computing plans. The Converged Cloud Workshop works through several cloud elements including:

    • Applications
    • Facilities
    • Infrastructure
    • Security
    • Reliability

    Download HP’s white paper today to see how the Converged Cloud Workshop can help your organization integrate the cloud directly with your business and growth plans. In working with cloud computing, it’s always important to involve partners who can help guide the way. By partnering with HP and using their workshop, an organization can better understand the direct impact and fit that cloud computing can have.

  • How Can You Save Up to 30% in Data Center Operation Costs?

    With more users, more devices and many more connections coming into the cloud – the data center has become an integral part of any IT infrastructure. There is much more reliance on data center operations and there is a direct need for optimal efficiency.

    This leads to an increased demand for controlling data center components that goes far beyond hardware management and advanced cooling systems. The complexity of the modern environment calls for a more holistic energy optimization solution.

    Accurate monitoring of power consumption and thermal patterns creates a foundation for enterprise-wide decision making, with the ability to (a minimal alerting sketch follows this list):

    • Monitor and analyze power data by server, rack, row, or room;
    • Track usage for logical groups of resources that correlate to the organization or data center services;
    • Automate condition alerts and triggered power controls based on consumption or thermal conditions and limits; and
    • Provide aggregated and fine-grained data to web-accessible consoles and dashboards, for intuitive views of energy use that are integrated with other data center and facilities management views.
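
    As a rough sketch of the alerting idea only (our own illustration; this is not Intel DCM’s actual API), a condition alert over per-group power readings might look like:

    ```python
    # Hypothetical power readings, keyed by logical group (rack, row, room, ...)
    readings_w = {
        "row-3/rack-12": 6800.0,
        "row-3/rack-13": 7450.0,
    }

    POWER_LIMIT_W = 7000.0  # illustrative per-rack consumption limit

    def over_limit(readings: dict, limit_w: float) -> list:
        """Return the logical groups whose power consumption exceeds the limit."""
        return [group for group, watts in readings.items() if watts > limit_w]

    for group in over_limit(readings_w, POWER_LIMIT_W):
        # A real deployment would raise an alert or trigger a power cap here
        print(f"ALERT: {group} exceeds {POWER_LIMIT_W} W")
    ```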
    RICH MILLER
    Editor-in-Chief

    Moderated by Data Center Knowledge Editor-in-Chief Rich Miller, this webinar will feature an in-depth conversation with Intel’s Jeff Klaus, General Manager of Data Center Manager (DCM) Solutions. The webinar will revolve around specific use cases as they apply to real data center situations. For example, in a jointly tested proof of concept (POC) conducted over a three-month period in late 2011 at Korea Telecom’s existing Mok-dong Data Centre in Seoul, South Korea, results showed that a Power Usage Effectiveness (PUE) of 1.39 would yield approximately 27 percent energy savings, achieved by using a 22°C chilled water loop.
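
    For context, PUE is total facility energy divided by IT equipment energy, so for a fixed IT load, total energy scales with PUE. The 27 percent figure is consistent with improving from a baseline PUE of roughly 1.9; that baseline is our back-of-the-envelope assumption, not a figure from the webinar:

    ```python
    # PUE = total facility energy / IT equipment energy
    baseline_pue = 1.9   # assumed starting point (our assumption)
    improved_pue = 1.39  # PUE reported for the Korea Telecom POC

    # For a fixed IT load, total facility energy is proportional to PUE
    savings = 1 - improved_pue / baseline_pue
    print(f"Energy savings: {savings:.0%}")  # prints "Energy savings: 27%"
    ```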

    JEFF KLAUS
    Intel

    The 60-minute online conversation will explain Data Center Infrastructure Management (DCIM) and its contributions to managing power and cooling usage in the data center. For example, identifying temperatures at the server, versus at the room or even rack levels, can help data center managers more accurately understand what the real ambient temperature should be for individual servers to have optimal lifespans. Register today to join Rich Miller of Data Center Knowledge and Intel’s Jeff Klaus on April 25, 2013 (2:00pm-3:00pm EDT) to learn how these types of assessments can represent a significant savings in data center environment management.

  • Why Consider a Modular Data Center?

    This is the third article in the Data Center Knowledge Guide to Modular Data Centers series. The initial black eye for containers and the modular concept was mobility. The Sun Blackbox was seen on oil rigs, in war zones and in places a data center is typically not found. To an industry of large brick-and-mortar facilities that went to all extremes to protect the IT within, the notion of this data center in a box being mobile was not only unattractive, but laughable as a viable solution. What it did do, however, was start a conversation about how the very idea of a data center could benefit from a new level of standardized components and IT delivered in a modular fashion around innovative ideas.

    Faced with economic downturn and credit crunches, businesses took to modular approaches as a way to get funding approved in smaller amounts and to mitigate the implied risk of building a data center. The two biggest challenges typically cited for data centers are capital and speed of deployment: the traditional brick-and-mortar data center takes a lot of money and time to build. Furthermore, the quick evolution of supporting technologies further entices organizations to work with fast and scalable modular designs. Outside of those two primary drivers, there are many benefits and reasons why a modular data center approach is selected.

    Design

    • Speed of Deployment: Modular solutions have incredibly quick timeframes from order to deployment. As a standardized solution, a module can be manufactured, ordered, customized and delivered to the data center site in a matter of months (or less). Having a module manufactured also means that site construction can progress in parallel instead of as a linear, dependent transition. Remember, this isn’t just a container; rather, it is a customizable solution capable of being quickly deployed within an environment.

    • Scalability: With a repeatable, standardized design, it is easy to match demand and scale infrastructure quickly. The only limitations on scale for a modular data center are the supporting infrastructure at the data center site and available land. Another characteristic of scalability is the flexibility granted by modules that can be easily replaced when obsolete or when updated technology is needed. This means organizations only need to forecast technological change a few months in advance, so a cloud data center solution doesn’t have to take years to plan out.

    • Agility: Being able to quickly build a data center environment doesn’t only revolve around the ability to scale. Being agile with data center platforms means being able to quickly meet the needs of an evolving business. Whether that means providing a new service or reducing downtime, modular data centers are designed directly around business and infrastructure agility. Where some organizations build their modular environment for capacity planning purposes, other organizations leverage modular data centers for highly effective disaster recovery operations.

    • Mobility and Placement: A modular data center can be delivered wherever the end user desires. A container can claim ultimate mobility, as an ISO-approved method for international transportation. A modular solution is mobile in the sense that it can be transported in pieces and re-assembled quickly on-site. Mobility is an attractive feature for those looking at modular for disaster recovery, as it can be deployed to the recovery site and be up and running quickly. As data center providers take on new offerings, they will be tasked with staying as agile as possible, which may very well mean adding additional modular data centers to help support growing capacity needs.

    • Density and PUE: Density in a traditional data center is typically 100 watts per square foot. A modular solution uses space very efficiently and features densities of as much as 20 kilowatts per cabinet. The PUE can be determined at commissioning, and because the module is pre-engineered and standardized, PUEs can be as low as 1.1–1.4. The PUE metric has also become a good gauge of data center energy efficiency. Look for a provider that strives to break the 1.25–1.3 barrier, or at least one that’s in the +/- 1.2 range.

    • Efficiency: Because modules are engineered products, their internal subsystems are tightly integrated, which yields power and cooling efficiency gains within the module. First-generation and pure IT modules will most likely not see efficiency gains beyond those enjoyed from a similar containment solution inside a traditional data center. Having a modular power plant in close proximity to the IT servers saves money on costly distribution gear and reduces power loss over distance. There are also opportunities to use energy management platforms within modules, with all subsystems engineered as a whole.

    • Disaster Recovery: Part of the reason to design a modular data center is resiliency. A recent Market Insights Report conducted by Data Center Knowledge points to the fact that almost 50% of surveyed organizations are looking at disaster recovery solutions as part of their purchasing plans over the next 12 months, which makes a modular design a sensible fit. Quickly built and deployed, the modular data center can serve directly as a disaster recovery site. For organizations that must maintain maximum uptime, a modular architecture may be the right solution.

    • Commissioning: As an engineered, standardized solution, the data center module can be commissioned where it is built, requiring fewer steps once placed at the data center site.

    • Real Estate: Modules allow operators to build out in increments of power instead of space. Many second-generation modular products feature evaporative cooling, taking advantage of outside air. A radical shift in data center design takes away the true brick and mortar of a data center, placing modules in an outdoor park, connected by supporting infrastructure and protected only by a perimeter fence. Some modular solutions also offer stacking, putting twice the capacity in the same footprint.

    Operations

    • Standardization: Seen as part of the industrialization of data centers, the modular solution is a standardized approach to building a data center, much like the approach Henry Ford took to building cars. Manufactured data center modules are constructed against a set model of components at a location other than the data center site. Standardized infrastructure within the modules enables standard operating procedures to be used universally. Since the module is prefabricated, the operational procedures are identical and can be packaged together with the modular solution to provide standardized documentation for the subsystems within the module.

    • DCIM (Data Center Infrastructure Management): Management of the module and the components within it is where a modular approach can take advantage of the engineering and integration built into the product. Many, if not all, of the modular products on the market include DCIM or management software that gives the operator visibility into every aspect of the IT equipment, infrastructure, environmental conditions and security of the module. The other important aspect is that distributed modular data centers will now also be easier to manage. With DCIM solutions now capable of spanning the cloud, data center administrators can have direct visibility into multiple modular data center environments. This also raises the question of what’s next in data center management.

    • Beyond DCIM – The Data Center Operating System (DCOS): As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to truly manage their infrastructure. There will be a direct need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. DCOS looks to remove the many challenges administrators face when creating a road map and building around efficiencies. In working with a data center operating system, expect the following:
    – An integrated end-to-end automated solution to help control a distributed modular data center design.
    – Granular centralized management of a localized or distributed data center infrastructure.
    – Real-time, proactive environment monitoring, analysis and data center optimization.
    – DCOS can be delivered as a self-service automation solution or provided as a managed service.

    Enterprise Alignment

    • Rightsizing: Modular design ultimately enables an optimized delivery approach for matching IT needs. The ability to right-size infrastructure as IT needs grow aligns enterprise, IT and data center strategies. The module or container can also quickly provide capacity when needed for projects or temporary capacity adjustments. Why is this important? Resources are expensive. Modular data centers help right-size solutions so that resources are optimally utilized; over- or under-provisioning of data center resources can be extremely pricey and difficult to correct.

    • Supply Chain: Many of the attributes of a modular approach speak to the implementation of a supply chain process at the data center level. As a means of optimizing deployment, the IT manager directs vendors and controls costs throughout the supply chain.

    • Total Cost of Ownership:
    – Acquisition: Underutilized infrastructure due to over-building a data center facility is eliminated by efficient use of modules, deployed as needed.
    – Installation: Weeks or months instead of more than 12 months.
    – Operations: Standardized components to support, with modules engineered for extreme efficiency.
    – Maintenance: Standardized components enable universal maintenance programs.
    Information technology complies with various internal and external standards. Why should the data center be any different? Modular deployment makes it possible to quickly roll out standardized modules that allow IT and facilities to finally be on the same page.

    The complete Data Center Knowledge Guide to Modular Data Centers is available for download in a PDF format and brought to you by IO. Click here to download the DCK Guide to Modular Data Centers.


  • Accelerate Your IT for Better Business Results

    With the emergence of cloud computing, big data and IT consumerization, many data centers and organizations have been redesigning their IT infrastructure. One of the biggest drivers within the modern data center is to design an environment capable of scalability, agility and, of course, efficiency. In creating such an infrastructure, many organizations are working with new types of converged technologies which can help achieve greater density.

    Based on a recent IDC study, customers who have implemented a converged infrastructure have been able to:

    • Shift more than 50 percent of their IT resources from operations to innovation—flipping the ratio from 70 percent of your people and budget focused on operations to 70 percent dedicated to innovation to improve customers’ experience, increase employee productivity, and make the business more competitive.
    • Cut time to provision applications by 75 percent. Based on the IDC research study, it takes IT organizations 20+ days to deploy a new application in traditional environments—and only five days in a converged infrastructure—a 75 percent decrease.
    • Reduce downtime by 97 percent. Go from an average of 10 hours of downtime per year down to less than 20 minutes.

    There are truly direct benefits in working with evolving converged infrastructure solutions. HP’s Converged Infrastructure is able to offer data centers the flexibility and capability to expand as needed. The core functions of a data center are all tied into one framework where management overhead is reduced and efficiency is increased.


    [Image source: HP – Accelerate your IT for better business results]

    Download HP’s white paper on HP Converged Infrastructure to see how this technology can bring direct benefits to your organization. These include:

    • Accelerate innovation
    • Accelerate responsiveness
    • Accelerate cloud
    • Accelerate security
    • Accelerate disaster recovery
    • Accelerate ROI

    In HP’s white paper, you are able to see how intelligent and efficient technologies like converged infrastructure can not only improve management processes but also help drive business, reduce operating risks and lower costs.

  • Sweden’s Östersund Gets in the Data Center Game

    The global IT infrastructure is evolving beyond centralized data centers and into a distributed system of globally connected points. With more users, more devices, and a lot more data – the data center has become an integral part of any organization. US-based firms are now trying to bring data closer to the user to help deliver better end-user performance.

    As cloud computing and IT consumerization continue to push technology forward, organizations will need to seek out new places to house their data centers. The city of Östersund, Sweden wants to be one of those new data center destinations, and is outlining the merits of its location for medium to web-scale data centers.


    The site at Torvalla Industrial Park, located just outside the city centre, is prepared for construction and is “shovel ready”. The Östersund site offering includes:

    • Location in the heart of the Power region
    • National electricity price zone 2, among the lowest prices in Europe
    • Reduced energy tax, 34% lower than South Sweden
    • Extremely stable grid, with no interruptions in the last 30 years
    • Redundant electricity supply connected to the national grid

    Download this white paper on the new Östersund site to learn about an advanced location capable of delivering a powerfully redundant infrastructure. Designing a robust data center involves many considerations; in this white paper, those concerns are addressed and expectations taken to the next level.

    With an ideal climate, optimal cooling capabilities, and the ability to deliver a packet to St. Petersburg in 16ms or less, the Östersund site creates an optimal opportunity for any organization to distribute its infrastructure. The focus on data delivery and cloud components will only continue to evolve. With that evolution, organizations will need to find new places to locate their data centers to ensure the consistent availability of information for the end-user.

  • Business Value of Blade Infrastructures

    The modern data center environment is evolving to support more users, more data and a lot more services. As the data center becomes an integral part of any organization, the focus becomes efficiency and a reduction in management overhead. Furthermore, IT environments are continuously striving to increase user density while still improving data center efficiency.

    As more cloud computing projects launch and more emphasis is placed on the data center, administrators are going to look for more efficient ways to deliver data and user workloads. One very effective delivery mechanism is a highly scalable blade infrastructure. Not only are blades easier to scale, they bring other direct benefits to an organization, including:

    • Better density and computing resources
    • Improved network bandwidth control and utilization
    • Better IT facilities management
    • Simplified management processes
    • Increased data center flexibility

    In this IDC research white paper, sponsored by HP, we are able to see how and where a blade infrastructure can bring direct benefits to an organization. For example, the HP BladeSystem enclosure uses Platinum power supplies that are 94% efficient. Additionally, HP Dynamic Power Saver mode enables more efficient use of power by placing power supplies in standby mode during periods of low utilization. HP Power Regulator, built for ProLiant, dynamically changes each server’s power consumption to match the needed processing horsepower, automatically reducing power consumption during periods of low utilization.

    Download this white paper to see IDC’s ROI analysis showing that customers can achieve considerable cost savings and improve the agility of their infrastructure by migrating to an HP BladeSystem environment. Remember, in building out any data center, one of the most important requirements is the ability to scale and stay agile. Since the business environment can change at any time, it only makes sense to have an infrastructure capable of the same flexibility.

  • An In-Depth Guide for Data Center Transformation

    The modern data center consists of numerous vital components all working together to facilitate the delivery of information. Now, more than ever, the data center has truly become the heart of any organization. Big or small, the growing reliance on data center environments is evident. During this growth, many administrators began to adopt technologies which directly revolved around efficiency: in some cases better cooling systems and power capabilities, in others high-density computing platforms.

    Now, many data centers are being tasked with new types of technological requirements. This can range from hosting a virtual desktop infrastructure to running a cloud platform. Many organizations are now adopting some type of cloud model. Whether it’s public, private, hybrid or community – businesses are seeing benefits in a cloud computing platform. The bottom line is this: To truly transform your data center you will need a holistic framework.


    In Cisco’s comprehensive guide, we are able to see the roadmap for a successful transition so that your organization can identify and achieve business goals. The conversation revolves around Cisco Domain Ten. Domain Ten can be applied to a diverse range of data center projects, from cloud and desktop virtualization to application migration, and is equally applicable whether your data center serves enterprise businesses, public sector organizations or service providers.

    Download this guide to better understand the data center transformation process and all of the key steps along the way. Cisco’s comprehensive framework helps the administrator ensure that key aspects are considered and, where appropriate, acted upon as you plan, build and manage your data center project.

  • Utility Storage for Virtual and Cloud Computing

    Today’s IT environment is being built around direct efficiencies. This means better resource utilization, improved monitoring, and the consolidation of enterprise systems. Many organizations are building a business around an efficient and well-controlled IT environment. The idea is to create an IT-as-a-service model where administrators can offer self-provisioned services and use automation to help control administrative overhead.

    This is where technologies around the converged infrastructure can really help. Intelligent storage systems can help an organization cut costs and control very vital resources. In HP’s whitepaper, we learn how utility storage creates a unified platform for efficiency and growth. Directly modeled for the needs of virtualization and cloud computing, HP’s converged storage infrastructure leads to three very direct benefits:

    • Distributed environment multi-tenancy
    • Geographical resource federation
    • Cloud, virtualization, and data center efficiency

    Furthermore, converged systems add a lot more than advanced functionality. With the goal of simplifying the IT environment, a converged storage platform helps with both ease of management and agility. Administrators are now able to control logical storage segments while still staying flexible across the entire organization.

    Download HP’s whitepaper on converged storage environments to see how an organization can embark on the path to IT as a service. These new types of platforms create an infrastructure capable of advanced automation and self-provisioned services on demand. Intelligent systems like HP’s utility storage will help reduce administrative costs and bring other key benefits as well. By reducing the hardware footprint and deploying advanced platforms like HP’s converged storage, organizations can actually help extend the life of their data center. Consolidated systems operating with more efficient technologies are easier to control, cool and maintain; in fact, according to the whitepaper, companies can extend the life of their data centers by two to five years through a combination of IT strategies. In creating a robust environment, remember to always use intelligent systems which will enhance your ability to be both flexible and agile while still consistently maintaining infrastructure control.

  • 10 Essential Domains for Moving to Private Cloud

    With the expansion of cloud computing and various cloud services, many organizations are now seriously considering some type of cloud model. In many instances, companies that want to keep their data out of a provider’s hands are looking to move to a private cloud platform. While this type of environment can certainly bring a lot of benefits, the deployment and planning process has to be conducted very carefully. One of the first concepts to understand is that there isn’t just one magical cloud product; rather, cloud computing revolves around the functionality of many different data center and infrastructure components.

    To move to a cloud model, there has to be a solid understanding of those underlying components and how they all work together. Starting with a look at infrastructure and virtualization through process and governance, Cisco’s video discusses the Cisco Domain Ten: the ten essential domains you need to know to get started on the cloud journey.


    [Image source: Cisco.com | Cisco Domain Ten – Simplifying Data Center Transformation]

    Based upon many cloud deployments (private and public; enterprise, public sector and service provider), Cisco formulated this comprehensive framework to help you transform your data center and guide new initiatives. In many cases, these new projects may include cloud, virtual desktop, application migration, and data center consolidation.

    Click here to view Cisco’s video on the Cisco Domain Ten framework. The important takeaway is an understanding of the ten key framework areas, or domains, that are critical to consider, plan for and address as part of your data center and cloud transformation process.

  • Better Airflow Improves Cooling Capacity, Cuts Operating Costs

    New data center environments are being designed and built to support large numbers of users. Furthermore, these infrastructures are created to handle powerful workloads capable of distributing data and information all over the world. In architecting the modern data center platform, administrators are striving to create an environment built on performance and efficiency. Part of the development process will always revolve around airflow and room-level control.

    In many cases, data centers are built with some of the best equipment and top-of-the-line power management systems, and utilize space optimally. However, in some cases the all-important process of airflow control is left to the last minute of design. In fact, recent research by Upsite Technologies covering 45 computer rooms reveals that, on average, 48% of conditioned air escapes through unsealed openings and misplaced perforated tiles.

    There are a few ways to deploy a solid airflow management system. While the size and design of your data center will dictate the optimal approach to airflow management, the actual control process is very important nevertheless. In Upsite’s whitepaper, we are able to see the direct benefits of controlling airflow within the data center environment. Many sites have implemented airflow management and cooling best practices and have seen some of the following benefits:

    • Improved IT intake air temperatures
    • Improved IT equipment reliability
    • Increased volumes of cooling airflow delivered by perforated tiles
    • Ability to add more perforated tiles to the room and cool more cabinets without compromising raised-floor static pressure
    • Increased cooling unit efficiency
    • Increased cooling unit capacity
    • Reduced operating expense

    In creating a solid data center, all aspects of efficiency must be considered. This includes airflow control and management. Download Upsite’s white paper to learn about the direct data center benefits which revolve around bypass airflow management. Furthermore, you’ll see the four necessary steps to creating a better airflow system for your data center environment.

  • Five Myths of Cloud Computing

    Technologies around the Internet and the WAN have been around for some time. However, it wasn’t until very recently that a specific term began circulating to capture the combination of these technologies. Cloud computing was born out of the idea of a distributed computing system where information is available from numerous different points. Although the idea has certainly caught on, there are still some misconceptions and confusion around the cloud.

    Many businesses have found great ways to utilize a cloud model. Now, they’re able to be more agile, grow faster and even add to their business resiliency. Still, there are those that have never really worked with an enterprise cloud model and are held back by myths and confusion points around the technology.

    In HP’s Five myths of cloud computing, we learn some of the biggest myths currently circulating in the cloud industry. Remember, the cloud is a vast, diverse model which can accommodate many different types of organizations. Whether it’s a private, public, hybrid or community cloud, there may be a fit for your organization. Still, without fully understanding the cloud model, it’s easy to be confused by so many different types of offerings.

    The Five myths of cloud computing whitepaper outlines the key areas where IT managers and business stakeholders should seek more clarification. Specifically:

    • Myth 1: The public cloud is the most inexpensive way to procure IT services
    • Myth 2: Baby steps in virtualization are the only way to reach the cloud
    • Myth 3: Critical applications do not belong in the cloud
    • Myth 4: All cloud security requirements are created equally
    • Myth 5: There is only one way to do cloud computing

    Download HP’s whitepaper on the five myths of cloud computing to see where you can adjust your cloud strategy and whether your environment is really ready for a cloud computing model.

  • How the Data Center Has Evolved to Support the Modern Cloud


    There’s little argument among IT and data center professionals that over the past few years, there have been some serious technological movements in the industry. This doesn’t only mean data centers. More computers, more devices, and the strong push behind IT consumerization have forced many professionals to rethink their designs and optimize for this evolving environment.

    When cloud computing came to the forefront of the technological discussion, data center operators quickly realized that they would have to adapt or be replaced by some other provider who is more agile.

    The changes have come in all forms, both in the data center itself and how data flows outside of its walls. The bottom line is this: If cloud computing has a home, without a doubt, it’s within the data center.

    There are several technologies that have helped not only with data center growth, but with the expansion of the cloud environment. Although there are many platforms, tools and solutions which help facilitate data center usability in conjunction with the cloud – the ones below outline just how far we’ve come from a technological perspective.

    • High-density computing. Switches, servers, storage devices, and racks are all now being designed to reduce the hardware footprint while still supporting more users. Let’s put this in perspective. A single Cisco UCS chassis is capable of 160Gbps. From there, a single B200 M3 blade can hold two 8-core Xeon processors (16 processing cores) and 768GB of RAM. Each blade can also support 2TB of storage and up to 32GB of flash memory. Now, if you place 8 of these blades into a single UCS chassis, you can have 128 processing cores, 6TB of RAM, and 16TB of storage (the math is tallied in the sketch after this list). This means a lot of users, a lot of workload and plenty of room for expansion. The same holds true for logical storage segmentation and better usage of other computing devices.
    • Data center efficiency. To help support larger numbers of users and a greater cloud environment, data centers had to restructure some of their efficiency practices. Whether through a better analysis of their cooling capacity factor (CCF) or a better understanding of power utilization, modern technologies are allowing the data center to operate more optimally. Remember, with high-density computing we are potentially reducing the amount of hardware, but the hardware replacing older machines may require more cooling and energy. Data centers are now focusing on lowering their PUE and are looking for ways to cool and power their environments more optimally. As the cloud continues to grow, there will be more emphasis on placing larger workloads within the data center environment.
    • Virtualization. Virtualization has helped reduce the amount of hardware within a data center. However, we’re not just discussing server virtualization any longer. New types of technologies have taken efficiency and data center distribution to a whole new level. Aside from server virtualization, IT professionals are now working with storage virtualization, user virtualization (hardware abstraction), network virtualization, and security virtualization. All of these technologies strive to lessen the administrative burden while increasing efficiency and resiliency and improving business continuity. More appliances can be placed at various points within the data center to help control data flow and further secure an environment.

    • WAN technologies. The Wide Area Network has helped the data center evolve in the sense that it brings facilities “close together.” Fewer hops and more connections are becoming available to enterprise data center environments, where administrators can leverage new types of solutions to create an even more agile infrastructure. Having the capability to dedicate massive amounts of private bandwidth between regional data centers has proven to be a huge factor. Data center resiliency, recovery and manageability have become a little easier because of these new types of WAN services. Furthermore, site-to-site replication of data and massive systems now happens at a much faster pace. Even big data has seen new developments that help large data centers quantify and effectively distribute enormous data sets. Projects like the Hadoop Distributed File System (HDFS) are helping data centers realize that open-source technologies are powerful engines for data distribution and management.
    • Distributed data center management. This is, arguably, one of the biggest pieces of evidence of how well the data center has evolved to support the modern cloud. Original data center infrastructure management (DCIM) solutions usually focused on singular data centers without much visibility into other sites. Now, DCIM has evolved to support a truly global data center environment. In fact, new terms are being used to describe this new type of data center platform. Some have called it “data center virtualization,” or the abstraction of the hardware layer within the data center itself. This means managing and fully optimizing processes running within the data center and then replicating them to other sites. In other cases, a new type of management solution is starting to take form: the Data Center Operating System. The goal is to create a global computing and data center cluster capable of providing business intelligence, real-time visibility and control of the data center environment from a single pane of glass.
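
    The per-chassis figures in the high-density bullet above are simple multiplication; as a quick tally (using the numbers as cited, not independently verified):

    ```python
    blades = 8                 # B200 M3 blades per UCS chassis
    cores_per_blade = 16       # two 8-core Xeon processors
    ram_gb_per_blade = 768
    storage_tb_per_blade = 2

    print(blades * cores_per_blade)           # 128 processing cores
    print(blades * ram_gb_per_blade / 1024)   # 6.0 TB of RAM
    print(blades * storage_tb_per_blade)      # 16 TB of storage
    ```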

    The conversation has shifted from central data points to a truly distributed data center world. Our information is now heavily replicated over the WAN and stored in numerous different data center points. Remember, much of this technology is still new, still being developed, and only now beginning to see some standardization. This means that best practices and thorough planning should never be skipped. Even large organizations sometimes find themselves in cloud conundrums; for example, all those that experienced the recent Microsoft Azure or Amazon AWS outages are definitely thinking about how to make their environments more resilient.

    The use of the Internet as well as various types of WAN services is only going to continue to grow. There are now even cloud API models striving to unify cloud environments and allow for improved cloud communication. More devices are requesting access to the cloud, and some of these are no longer just your common tablet or smartphone. Soon, homes, entire businesses, cars, and other daily-use objects will be communicating with the cloud. All of this information has to be stored, processed and controlled. This is where the data center steps in and continues to help the cloud grow.

  • Cloud Computing: A CFO’s Perspective

    Cloud computing and the technologies surrounding the platform have made a big impact on the modern business. More organizations are now looking for ways to leverage the cloud and see where it can create cost savings. Although many IT professionals will fight for a cloud model, in many cases the CFO needs to make a good recommendation as well.

    The IT infrastructure is an absolutely vital part of any company. In fact, IT is now at the top of the CFO’s agenda. According to Gartner’s The CFO’s Role in Technology Investments, 26 percent of IT investments require the direct authorization of the CFO, and 42 percent of IT organizations now report to the CFO. This is why, in recent years, the IT department and the business side have grown much closer on the technologies the organization wants to deploy. Just like any new technology, the cloud can have very positive results for a company; however, these results only come about after thorough planning around cloud computing.

    In this whitepaper, HP takes a deeper look at the cloud – but directly from a CFO’s viewpoint. This means analyzing key benefits in moving towards a cloud platform. This includes:

    • Moving capex to opex.
    • Adding speed and flexibility.
    • Creating instant access to innovation.
    • Creating a better and more resilient environment.

    Download HP’s whitepaper to see how a CFO should view the cloud and where key benefits are located. Remember, there are a lot of uses for cloud computing. Many organizations can leverage a cloud model to reduce legacy systems or create a private infrastructure capable of agile growth. However, the key here is ensuring that the entire business entity can see the direct benefits of cloud computing. That means other IT departments, other business units, and of course – the CFO.

  • Defeating Cyber Threats Requires a Wider Net

    As more organizations and users utilize the Internet, there will be more data, more management needs, and a lot of worries around security. The big push around cloud and the modern cloud-ready data center really revolves around IT consumerization and newly available resources. Just like any infrastructure, the bigger and more popular it gets – the bigger the target.

    Cyber threats have been growing at an alarming rate. Not only have frequencies increased; the creativity of the intrusions and attacks is staggering as well. There is plenty of evidence supporting this:

    • Malware is reaching new all-time highs – Trend Micro, for example, had identified 145 thousand malicious Android apps as of September 2012. Keeping malware at bay, already a “treading water” challenge, is intensifying.
    • BYOD is a growing threat vector – Frost & Sullivan estimates smartphones shipped in 2012 will reach 558 million, and tablets will reach 93 million. With more users using more cloud networks, targets will become larger as well.
    • Distributed Denial of Service (DDoS) attacks are approaching mainstream – In a 2012 survey of network operators conducted by Arbor Networks, over three-quarters of the operators experienced DDoS attacks targeting their customers.
    • Exposure footprint is expanding – According to a Frost & Sullivan 2012 global survey of security professionals, slightly more than one-third of the respondents cite cloud computing as a high priority for their organizations now, and that percentage increases to 54 percent within two years.

    With the evident change in the technological landscape, there will undoubtedly be a need to re-evaluate existing security environments. Why is that the case? Simple: many existing security platforms are just not enough to handle today’s cyber security demands. In this white paper, new types of security platforms are explored. Specifically, Arbor’s ATLAS platform is presented as a leader in enterprise-ready security and traffic monitoring. Between its two sources, Arbor collects data covering all assigned IP addresses: service-active IP addresses from Arbor platforms and service-inactive IP addresses from darknet-hosted ATLAS sensors.

    Arbor’s ATLAS platform

    Launched in 2007, ATLAS collects network traffic data from sensors hosted in carriers’ darknets, along with data from carrier- and enterprise-deployed Arbor monitoring platforms. Download this white paper to see how the ATLAS platform delivers direct benefits for carriers and enterprises. These include:

    • More threats are proactively mitigated, resulting in a lower overall risk posture.
    • Less remediation occurs. With fewer attacks being successful, remediation efforts will be fewer in number and smaller in scale.
    • As ATLAS researchers monitor and assess traffic data from Arbor platforms and darknet sensors, carrier and enterprise security analysts gain the benefits of this threat analysis without incurring the work effort.

    Remember, the cyber threat environment will only continue to grow and evolve. Whether your environment utilizes the WAN or some type of cloud environment, it’s time to evaluate your security infrastructure and see how new, advanced platforms can help.

  • A Roadmap for Data Center Transformation

    Data centers are more important than ever. As more organizations move towards IT consumerization, cloud computing and distributed technologies, the data center continues to play an integral role in the entire process.

    What’s the Path to the Future?

    You’re invited to learn more about data center transformation when IO Senior Vice President Aaron Peterson has an in-depth conversation with the Editor-in-Chief of Data Center Knowledge, Rich Miller, during the next DCK webinar. The discussion will revolve around the increasing pace of global demands being placed on data center providers.

    Register now for the DCK Data Center Transformation Webinar on March 28 at 2 p.m. EST.

    There is no doubt that the cloud and data center environment will continue to evolve. We have now progressed from what was known as “Data Center 1.0” to a new type of platform. With more emphasis on affordability, sustainability, integrated operations and many more new functions that revolve around the data center, we are entering the era of “Data Center 2.0.”


    [Image source: IO Data Center 2.0 Manifesto]

    Join the Discussion

    Register for this event on March 28 to gain a greater understanding of the current and future roadmap for data center transformation. Business and executive leaders have been striving to leverage data center technologies more and more as their organizations continue to grow. In this interview and conversation, Peterson will discuss various vital topics around existing and future data center transformation areas. This includes:

    • The predominantly static data center and its limitations.
    • The approaching crisis, a “perfect storm” on the data center horizon, made up of supply and demand constraints, which must be prioritized and transformed.
    • The technology-based, sustainable solution that is Data Center 2.0, which represents a fundamental transformation of data center DNA.

    Growing technology demands will place new types of requirements on Data Center 2.0 environments. Join the discussion and increase your understanding of how business agility and new types of business drivers are evolving the data center.

  • Why Anti-DDoS Services Matter in Today’s Business Environment

    Although the Internet has been around for a while, the boost in cloud computing has increased the utilization of WAN services. Any organization now using the cloud or some type of Internet-based service must be aware of the security risks that come with the platform. The evolution of the modern data center, and the use of cloud computing, has created more targets for attackers to go after. The widespread availability of inexpensive attack tools enables anyone to carry out distributed denial of service (DDoS) attacks. This has profound implications for the threat landscape, risk profile, network architecture and security deployments of Internet operators and Internet-connected enterprises.

    With the direct increase in cloud services, organizations are utilizing more Internet services and greater amounts of bandwidth. Because of this, attackers are increasing the size and number of their attacks on targeted organizations. A recent survey conducted by Arbor Networks shows that the size of volumetric DDoS attacks has steadily grown. The truly troubling piece, however, was the 2010 report of a 100 Gbps attack. To put that in perspective, that is more than double the size of the largest attack reported in 2009. This staggering figure illustrates the resources hackers are capable of bringing to bear when attacking a network or service.


    [Image source: Arbor Networks — Worldwide Infrastructure Security Report, Volume VI]

    Although these attacks have been simplified in deployment – they’ve certainly evolved in complexity. The methods hackers use to carry out DDoS attacks have evolved from the traditional high bandwidth/volumetric attacks to more stealthy application-layer attacks, with a combination of both being used in some cases.

    In working with DDoS-type attacks, administrators must understand the depth of the DDoS problem. Volumetric attacks are getting larger, with a larger base of either malware-infected machines or volunteered hosts being used to launch them. Well-known groups, such as Anonymous, have also brought a new motivation for DDoS attacks into scope: hacktivism. As these attacks become more prevalent, IT administrators must have good visibility into the complex threat environment and understand the true need for a full-spectrum solution. Download this white paper to see how DDoS can affect a business and why a solid security infrastructure is so important. In this paper, Frost & Sullivan outline the various points in creating an all-encompassing security solution. Key points include:

    • Integrity and Confidentiality vs. Availability
    • Protect Your Business from the DDoS Threat
    • Cloud-Based DDoS Protection
    • Perimeter-Based DDoS Protection
    • Out-of-the-Box Protection
    • Advanced DDoS Blocking
    • Botnet Threat Mitigation
    • Cloud Signaling

    The increase in cloud computing will result in more DDoS attacks on organizations. Since more targets are being presented, attackers may have any of a myriad of reasons to target an IT environment. This white paper outlines the key points in understanding DDoS attacks and how to strategically protect your environment. With a solid security solution in place, administrators are able to secure their infrastructure at both the perimeter and the cloud level, as the sketch below illustrates.
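    As one illustration, consider the cloud signaling concept from the list above: a perimeter device mitigates what it can locally and escalates to an upstream scrubbing service once the attack exceeds its capacity. The sketch below is a hypothetical rendering of that hand-off; the CloudScrubber class and the 10 Gbps capacity figure are invented for the example.

    ```python
    # Hypothetical sketch of "cloud signaling": on-premises gear handles
    # attacks within its capacity, then signals an upstream cloud scrubbing
    # service when it is overwhelmed. The API below is invented.

    class CloudScrubber:
        """Stand-in for an upstream provider's mitigation service."""
        def activate(self, victim_prefix: str) -> None:
            print(f"Cloud scrubbing engaged for {victim_prefix}")

    class PerimeterDefense:
        LOCAL_CAPACITY_BPS = 10e9  # assumed 10 Gbps on-premises mitigation limit

        def __init__(self, scrubber: CloudScrubber) -> None:
            self.scrubber = scrubber

        def handle_attack(self, victim_prefix: str, attack_bps: float) -> None:
            if attack_bps <= self.LOCAL_CAPACITY_BPS:
                print(f"Mitigating {attack_bps / 1e9:.0f} Gbps locally")
            else:
                # Local gear is saturated: ask the provider to clean the
                # traffic in the cloud before it ever reaches the edge.
                self.scrubber.activate(victim_prefix)

    # A 100 Gbps flood overwhelms the local device and triggers escalation:
    PerimeterDefense(CloudScrubber()).handle_attack("203.0.113.0/24", 100e9)
    ```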

  • Public Cloud Security, Readiness and Reliability


    The modern idea of the “cloud” may be something new, but a lot of the technology it uses has been around for a while – since 1997, in fact. As with any technology, the most important aspects of deploying a new solution will be an understanding of the platform and, of course, thorough planning.

    Ready for Public Cloud?

    When considering public cloud options, it’s important to understand where there is a direct fit. This means that both key business stakeholders and IT executives will need to see the benefits of moving towards a public cloud “Infrastructure as a Service” environment. Although there are many benefits, administrators should weigh several considerations when looking at public cloud options.

    • Public cloud and security. This is a major consideration for any organization. Although a public cloud can certainly be secured, some organizations have specific regulations as to how their data can be delivered over the WAN. Also, securing the server and application environment will differ when these workloads are pushed through a cloud environment. Special planning meetings and considerations have to go into understanding the types of security requirements an environment might have.

    It’s important NOT to get overwhelmed when we talk about cloud security options. Yes, there are new technologies revolving around cloud security, but the topic is manageable. As mentioned earlier, we can break down cloud security at a high level by examining the following:

    • Security on the LAN: The first step is understanding the security elements of your LAN. Is data being encrypted internally? Are there ACLs on the switches? How are the firewalls and load-balancers configured for data leaving the local network?
    • Security at the end-point: How is the end-point accessing the data? Is it through a VPN or through an encrypted connection? Is there a secure client involved? Understanding end-point security settings and policies is important to ensure that the data reaches its destination safely.
    • Security in the middle: When data is being transmitted over the WAN, there have to be security settings in place from beginning to end. That means setting up a secure tunnel for the data to travel through, constantly monitoring the links, and proactively maintaining server and LAN security policies (see the sketch after this list).
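    As a small example of “security in the middle,” the Python sketch below refuses to exchange data over the WAN unless the TLS tunnel validates. It uses only the standard library; the host name is a placeholder.

    ```python
    # Minimal sketch: validate the encrypted tunnel before trusting it.
    # HOST is a placeholder endpoint, not a real service.

    import socket
    import ssl

    HOST = "example.com"

    # create_default_context() verifies certificate chains and host names.
    context = ssl.create_default_context()

    with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
        # The handshake raises ssl.SSLCertVerificationError if the server's
        # certificate chain or host name does not check out, so nothing is
        # ever sent over an unverified tunnel.
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("Negotiated", tls_sock.version(), "with", HOST)
    ```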

    Remember one main point as you plan out your environment: cloud security isn’t really one discrete component. Rather, it’s a collection of security best practices applied for the purpose of transmitting data over the WAN. This is where using next-generation security tools can really help. Advanced device interrogation engines as well as intrusion prevention/detection systems (IPS/IDS) can further secure a cloud platform.

    • Environment readiness and reliability. Although public clouds can be easy to adapt to, some environments may not be ready for a cloud initiative. Having the right infrastructure in place to support a cloud move may be required. In these cases, organizations should take the time to evaluate their current position and see whether going to the cloud is the right move.

    Just like any other infrastructure, it’s important to create an environment capable of supporting business continuity needs. This means accepting that the cloud can, and occasionally will, go down. For example, in a recent major cloud outage, a simple SSL certificate was allowed to expire. This created a global, cascading failure that took down numerous vital public cloud components. Who was the provider? Microsoft Azure.
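    That kind of lapse is exactly what routine monitoring catches. Below is a minimal sketch of a certificate-expiry check in Python, using only the standard library; the host names are hypothetical placeholders.

    ```python
    # Hedged sketch: flag certificates that are close to expiring, the kind
    # of lapse behind the outage described above. Hosts are placeholders.

    import datetime
    import socket
    import ssl

    def days_until_expiry(host: str, port: int = 443) -> int:
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' uses a fixed OpenSSL format, e.g. 'Jun  1 12:00:00 2025 GMT'
        expires = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (expires - datetime.datetime.utcnow()).days

    for host in ("portal.example.com", "api.example.com"):  # placeholder hosts
        remaining = days_until_expiry(host)
        if remaining < 30:
            print(f"WARNING: {host} certificate expires in {remaining} days")
    ```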

    • Deploying the right workload. The larger the workload, Virtual Desktop Infrastructure (VDI) for example, the longer it will take to be delivered. Some core applications require backend database connectivity where a public cloud model may not be the right fit. Before moving to the cloud, make sure to have a complete understanding of what will be utilized in the public cloud arena. From there, a good decision can be made as to whether a given application or even virtual node is the right fit for a cloud model.
    • Maintaining control. Just like in a local, non-cloud environment, administrators must retain control of their environment. This is especially important in pay-as-you-go models. With little control or oversight, administrators might be provisioning Virtual Machines (VMs) and resources when they’re simply not needed. This is where a public cloud can quickly lose its value. IT organizations must keep a watchful eye on their cloud-based workloads and resources to know what is being used and to ensure the environment is utilized efficiently (a small audit sketch follows this list).
    • End-user and administrator training. The success of almost any new deployment will hinge on user acceptance. If an organization deploys a new public cloud capable of delivering entire workloads to the end-user, there must be core training associated with it. What good is a robust, highly scalable infrastructure if the end-user is confused or unsure how to use it? Since users are often averse to change, all modifications should be gradual and well documented. Information passed to the user should be easy to understand and simple to follow. With good training and solid support on the backend, administrators can deliver powerful data-on-demand solutions to the end-user.
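    To make the control point tangible, here is a purely illustrative audit sketch. The inventory records and threshold are invented; a real version would pull utilization and cost figures from the cloud provider’s monitoring or billing APIs.

    ```python
    # Illustrative only: spot pay-as-you-go resources that are running but
    # doing no real work. The records and threshold below are hypothetical.

    vms = [
        {"name": "web-01",  "avg_cpu_pct": 46.0, "hourly_cost": 0.12},
        {"name": "test-07", "avg_cpu_pct": 1.5,  "hourly_cost": 0.48},
        {"name": "db-02",   "avg_cpu_pct": 38.0, "hourly_cost": 0.96},
    ]

    IDLE_CPU_PCT = 5.0  # assumed threshold for "probably forgotten"

    for vm in vms:
        if vm["avg_cpu_pct"] < IDLE_CPU_PCT:
            monthly_waste = vm["hourly_cost"] * 24 * 30
            print(f"{vm['name']}: near-idle, roughly ${monthly_waste:.2f}/month; review or deprovision")
    ```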

    Cloud computing is here to stay – and there are many benefits to such a powerful Wide Area Network-based platform. Whether administrators need to provision a new workload or test out an application, a public cloud solution can help an organization stay innovative. Remember, as with any new environment, it’s important to plan out the infrastructure and identify the need behind the deployment. When it comes to a public cloud, administrators should evaluate their needs and see how this type of cloud platform can directly benefit them.

    The goal with many recent cloud articles is to debunk the myth that cloud computing is an insecure, Wild West environment. Unlike the dot-com bust or other failed technologies, our generation is evolving into a data-on-demand environment where cloud computing acts as the delivery mechanism for vast amounts of information. So while you may not be ready to embrace the technology, it’s important to start understanding it and learn the facts, not the hype.