Author: Bill Kleyman

  • The Evolution of the IT Professional – Understanding the Cloud’s Demands

    A staff member in an Equinix facility.

    The technology landscape has truly evolved over the past few years – and with that evolution comes a new demand for the future of IT administration.

    IT managers and engineers are tasked with knowing more, understanding more of the components within their own environments, and being genuinely creative. The old days of IT saw engineering dedicated to a single process. Interaction between teams was the exception rather than the norm. Cross-IT team collaboration usually happened at the management level, and even then it wasn’t always successful.

    A new breed of engineers is being born from the era of cloud computing. New job titles are being created, with demands placed on engineers who bring unique skills and the ability to communicate. In this post, we’ll outline some of the traits that are in high demand for this new type of IT professional.

    Among many new traits and personality qualities that an engineer may have, the following are beginning to emerge as truly defining characteristics of the new IT pro:

    • Communication. There is a misconception, and even a stereotype, that IT people are quiet, introverted and often don’t communicate well. Although in some cases this may be true, many successful IT professionals have broken out of that shell. Communication is crucial to the success of any IT person. The new technological environment calls for people who can not only walk the walk – but explain what route they took. Furthermore, this oftentimes means explaining various technology objectives to both end-users and executive staff.
    • Leadership. Administrators and engineers must take more of a leadership role if they wish to progress in their careers. Usually, this means saying ‘no’ to things. However, a real leader won’t only know when to say ‘no’, but also how to say it. Effectively challenging ideas and collaborating to help evolve solutions is much better than a flat-out ‘no’.
    • Drive. Complacency in IT has always had detrimental effects. With all of these new technologies being released on a seemingly daily basis – being complacent now is worse than ever. IT pros must have the drive to keep pushing forward, learning new things and expanding their horizons.
    • “Thinking outside of the data center.” Cloud computing and distributed infrastructures have created a new line of thinking. IT folks, both young and seasoned, must know how to see the big picture whenever they’re working with a large corporate environment, including a thorough understanding of business goals and objectives. Cloud computing has truly created a new breed of architects who have to incorporate various technologies to establish sound solutions. We’ll get into this in the next blog!
    • Collaboration. The ability to work with various teams and to collaborate on projects – even as a junior engineer – is truly important. Sharing ideas for the sake of best practices and collaboration will help ensure a good deployment. Even more important, it’ll enhance teamwork!

    The ability to think on your feet and go far beyond simple troubleshooting within an organization can mean the difference between a desk job and sought-after career advancement. Outside of the common “cloud” computing technology, what else is driving this type of demand in today’s business IT world?

    The reality is that the logical progression of technologies has not only created the need for articulate engineers and architects – organizations now seek folks who can speak the language of many different technologies. As the data center advances, cloud and data center architects will have to learn about the various components that make up a solid data center infrastructure. This means understanding some of the following technologies:

    • Unified, converged and high-density computing.
    • Various types of network and WAN connection best practices.
    • End-user management and security.
    • IT consumerization and BYOD controls.
    • Data management, replication and even analytics.
    • Understanding around various cloud models, APIs, and deployment strategies.
    • Disaster recovery, business continuity, and backup.
    • Global server and traffic management.
    • Data center design best practices and efficiencies.

    The modern data center, in reality, can be considered the home of the cloud. We can no longer think of the data center as a stand-alone physical unit. These data warehouses and processing centers are creating massive connectivity points for an ever-expanding Internet and cloud environment. Today’s data center is really a logical connection point to many other data centers and the services that they may be running.

    Cloud and data center architects of tomorrow need to understand how this vast environment functions together and how it affects the end-user. One of the biggest challenges for the new breed of engineers is designing an environment around one very important business and technology aspect: mobility. The ability to be agile, highly mobile and deliver on-demand services is becoming a standard requirement for many data center providers.

    Not only will engineers and architects need the ability to communicate, they will need to know about the many different technologies which directly affect the modern data center. These IT professionals will often act as the liaison between numerous different, and very important, business stakeholders. This means translating user and executive needs into direct IT solutions. Professionals who can do this will be in demand, both now and in the future.

  • DOD Cloud Adoption Helps U.S. Troops Stay Connected

    Soldiers at the Fort Stewart, Ga. Education Center work on class material and catch up on their e-mail. The adoption of the cloud by the U.S. Department of Defense has helped soldiers in the United States and deployed abroad stay connected with their families. (Photo: U.S. Army)

    As the U.S. Department of Defense (DOD) adopts new types of globally distributed technologies, it is working to improve communications for servicemembers, both on the battlefield and with family at home.

    While those of us in the corporate world may take for granted the ability to clearly relay a message between multiple parties, communications in the military’s IT world can be critically important.

    The military’s shift in technology is happening at all levels and the DOD is fully embracing the new infrastructure push. Just two years after its inception, the Department of Defense Enterprise Email system has reached its one millionth user. This milestone means that the DOD Enterprise Email (DEE) is now one of the largest independent email systems in the world.

    “For the war fighters, using DEE means wherever they are, they can use their email, whenever they need it. It is not necessary to start a new email account when you move or deploy. It is as mobile as the servicemember,” said Air Force Lt. Gen. Ronnie Hawkins, Director of the Defense Information Systems Agency (DISA).

    With so many important users at any given time, the DOD and DISA are working to ensure optimal performance and maximum communication capabilities for U.S. troops. The landscape for the typical soldier or sailor has changed quite a bit. Just a few years ago, communicating with home meant a long wait and a short chat. Now, with better WAN connectivity and solid infrastructure, U.S. soldiers based all over the world can connect with friends and family via everyday communication tools. This could mean that a soldier at Bagram Airbase can communicate with his or her loved ones via Skype, Facebook and even Gmail.

    These are truly common tools that civilians use every day. Now, because of the advancement in global distributed infrastructure designs, these same platforms can be used to bring home a little closer to the people defending the national interests abroad.

    Here’s a look at some of the behind the scenes infrastructure projects which are helping to bolster all cloud, Internet and WAN-based communications for the military.

    Data Center Consolidation

    The data center is changing to support more cloud, more data and a lot more users. The DOD quickly realized that it needed to jump on this evolution bandwagon and update its data center infrastructure. With almost 1,200 data centers, there was a direct need to consolidate and deploy better, more efficient technologies. In fact, the press release announcing the DEE milestone indicates that in using DEE, the DOD is doing just that.

    DEE saves millions of dollars for the department by leveraging the buying power of the entire department. Enterprise services reduce costs by consolidating system hardware requirements and maintenance, eliminating unnecessary and inefficient administration and resource allocation. That means the military services and defense organizations using enterprise services can save money in IT services to preserve more resources for their primary mission.

    Unified computing systems, converged infrastructures and intelligent hardware components are finding their way into the DOD’s data center environment. These high-density environments are capable of being locked down, diversified and given the opportunity to support numerous different services. These technologies are capable of virtualization and even logical segmentation of workloads. This means that administrators are able to granularly control how communications and data enter and leave their data centers.

    Improved Global Infrastructure

    Directly related to these consolidation and efficiency efforts has been the direct improvement within the global data center and communications infrastructure. What can be achieved now with intelligent switching technologies is truly amazing. One logical switch controller is able to deliver hundreds – even thousands – of virtual connections. These connections are able to be controlled and can cross-communicate when necessary. Virtual appliances can be deployed within various points in an environment creating a highly agile system. Software-defined networking has helped the DOD create more redundancy on a global scale. Furthermore, the increase in bandwidth and direct communication links has improved the type of information that can be passed point-to-point. Plus, the speed at which such data travels has greatly increased as well.

  • The Robot-Driven Data Center of Tomorrow

    Tape libraries, like this one at Google, provide an early example of the use of robotics to manage data centers. Robotic arms (visible at the end of the aisle) can load and unload tapes. (Photo: Connie Zhou for Google)

    There is an evolution happening within the modern data center. Huge data center operators like Google and Amazon are quietly redefining the future of the data center. This includes the integration of robotics to create a lights-out, fully automated data center environment.

    Let’s draw some parallels. There’s a lot of similarity between the modern warehouse center and a state-of-the-art data center. There is an organized structure, a lot of automation, and the entire floor plan is built to be as efficient as possible. Large organizations like Amazon are already using highly advanced control technologies – which include robotics – to automate and control their warehouses.

    So, doesn’t it make sense to logically carry over this technology to the data center?

    Robotics in the Data Center

    As the reliance on the data center continues to grow, full software and hardware robotics automation is no longer a question of if, but a matter of when, technologists predict. Robotics organizations like Chicago-based DevLinks LTD are already having conversations and creating initial designs for data center robotics automation.

    Scott Jackson, Senior Robotics Programmer at DevLinks, says it’s becoming quite feasible to have a robot fetch a drive, blade or even a chassis and deliver it to a central bay for replacement.

    “Simple RFID tags, laser and barcode identifiers can create true data center automation,” Jackson explains. “For example, you can tag drives with RFIDs and assign them to be wiped, destroyed and reused as needed.” Conveyor systems are able to run in parallel to robotics within the data center environment.
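    The drive lifecycle Jackson describes (tag, wipe, destroy, reuse) is essentially a small state machine keyed by RFID tag. The sketch below is purely illustrative – the class and state names are hypothetical, not from any vendor’s API – but it shows how an automation system could enforce valid transitions before a robot or conveyor acts on a drive.

```python
# Hypothetical sketch: tracking drive lifecycle by RFID tag.
# All names here are invented for illustration; they do not come
# from DevLinks or any real data center automation product.
from enum import Enum


class DriveState(Enum):
    IN_SERVICE = "in_service"
    PENDING_WIPE = "pending_wipe"
    WIPED = "wiped"
    DESTROYED = "destroyed"
    REUSED = "reused"


# Allowed lifecycle transitions: a wiped drive can be reused,
# a pending-wipe drive can be wiped or destroyed, and so on.
TRANSITIONS = {
    DriveState.IN_SERVICE: {DriveState.PENDING_WIPE},
    DriveState.PENDING_WIPE: {DriveState.WIPED, DriveState.DESTROYED},
    DriveState.WIPED: {DriveState.REUSED},
}


class DriveRegistry:
    """Maps RFID tags to drive states and enforces valid transitions."""

    def __init__(self):
        self._drives = {}  # RFID tag -> DriveState

    def register(self, rfid_tag: str) -> None:
        self._drives[rfid_tag] = DriveState.IN_SERVICE

    def assign(self, rfid_tag: str, new_state: DriveState) -> None:
        current = self._drives[rfid_tag]
        if new_state not in TRANSITIONS.get(current, set()):
            raise ValueError(f"cannot move {rfid_tag} from {current} to {new_state}")
        self._drives[rfid_tag] = new_state

    def state(self, rfid_tag: str) -> DriveState:
        return self._drives[rfid_tag]
```

    In a real deployment, the `assign` step would be triggered by an RFID or barcode scan at a conveyor station, with the registry deciding where the part travels next.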

    There are already working examples of robotics in the data center. Tape archives seen at Google and high-performance computing data centers use robotic arms to locate and retrieve backup storage tapes.  (For an example, see this video of a system in action at the NCAR data center).

    What Will Be Different?

    What might a robot-driven “lights-out” data center look like? There would be rail-based robotics capable of scaling the entire data center. Here’s an interesting wrinkle: the modern data center would no longer be limited by horizontal expansion space. When using robotics, data centers can literally scale upwards. Utilizing space in the best possible manner is always a challenge for data center providers, so having the ability to scale both horizontally and vertically becomes a huge advantage.

    “These robotics can scale the entire rack, which can now be much taller because these intelligent robots can reach higher,” said Jackson. “Once a part is removed, a conveyor at the bottom can move the part to the appropriate floor space. Furthermore, detailed vision technology has progressed a long way as well. Solutions like Cognex are able to allow machines to take pictures of a device, barcode and many other variables to help identify the part’s destination or origin.”

    Large organizations that invest heavily in their data center infrastructure are actively exploring robotics solutions to help them better control their data centers. IT shops such as Amazon and Google are looking at ways to create a fully automated, lights-out data center. AOL has taken a first step in that direction with an unmanned data center facility.

    The Cost Equation

    As with any technology, costs for custom data center robotics will start high and come down as time progresses and platforms become smarter. Smaller robotics are already becoming less expensive. Manufacturers like FANUC develop large machines; but they also create smaller, more agile robotics. Models like the LR Mate and the M-1iA are paving the way for super-agile, fast robotics capable of granular part identification and distribution.

    Data center, automation, and robotics technologies have all come a very long way over the past decade. From the warehousing perspective, robots already know where everything is located and how to put things in order, and they can interact directly with human-created automation scenarios. Because of robotics, something very interesting has happened: instead of the human going to the warehouse, the warehouse comes to the human.

    Soon it will be possible to do this at the data center level.

    This would enable entirely new approaches to operations. Your data center will be able to run at a different temperature level, you won’t need any lights, and you can directly integrate your new robotics platform into a modern-day automation and orchestration platform. From a central command center, the human operator can maintain visibility into their data center environment, the robotics infrastructure and the workloads that are being managed. This can all be done without the need of a single person on the data center floor.

  • Gaining More Efficiency from Power Distribution Within Data Centers

    As the reliance on the modern data center continues to grow, organizations are continuously looking for ways to optimize their environments. Today’s business demands are placing new challenges on the data center infrastructure. One of those challenges revolves around supplying additional computing power while using less energy in a smaller space. Furthermore, data center managers are tasked with staying within budget constraints and maintaining mission-critical reliability.

    The IT world is seeing a lot more cloud computing, more devices and a lot more consumerization. All of these new trends add reliance to the data center infrastructure and create new demands around resources. According to this white paper, ever-increasing server densities are causing an increase in kW power density, resulting in increased cooling requirements in today’s data centers. For every kW increase in power, an equal amount of cooling capacity is required. This never-ending cycle of increasing power and cooling requirements translates into more and larger power cables under the floor – stealing valuable cooling space. In this white paper from Raritan, you will learn how an overhead bus system eliminates the jungle of wires that obstructs air flow under the floor, making it one of the most energy-efficient and safe power systems available on the market today.
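    The paper’s rule of thumb – every kW of IT load needs an equal kW of cooling – can be made concrete with a quick back-of-the-envelope conversion, using the standard constants of roughly 3,412 BTU/hr per kW of heat and 12,000 BTU/hr per ton of refrigeration. A minimal sketch:

```python
# Illustrative calculation of the equal-capacity cooling rule of thumb.
BTU_PER_KW_HOUR = 3412   # 1 kW of heat ~= 3,412 BTU/hr
BTU_PER_TON = 12000      # 1 ton of refrigeration = 12,000 BTU/hr


def cooling_required(it_load_kw: float) -> dict:
    """Return the matching cooling load in kW, BTU/hr and tons."""
    btu_per_hour = it_load_kw * BTU_PER_KW_HOUR
    return {
        "cooling_kw": it_load_kw,       # equal-capacity rule of thumb
        "btu_per_hour": btu_per_hour,
        "tons": btu_per_hour / BTU_PER_TON,
    }
```

    For example, a 100 kW row of high-density racks implies roughly 28 tons of cooling capacity just to keep pace – which is why reclaiming under-floor airflow matters.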

    [Image source: Raritan]

    There are many benefits to moving towards an overhead track busway system. Effectively, any location in the data center can have the racks installed, moved, reconfigured or removed without affecting anything else in the space, and without risking an unplanned outage. Additionally, this white paper covers the following benefits in moving towards a track busway:

    • Creating a sustainable environment.
    • Improving scalability.
    • Creating more usable space in the data center.
    • Improved monitoring around power usage and other environmental variables.
    • Additional cost savings.
    • Creating power to spare.

    Download this white paper today to learn how power management solutions play a fundamental role in implementing more versatile data centers. For any organization, having a data center which can quickly evolve to address the demands and challenges of the future is a vital part of the business agility process.

  • Why Data Centers are Shifting Towards Converged Infrastructures

    With more virtualization, IT consumerization and a lot more data – the data center needs a new way of deploying high-density computing platforms. In this white paper, IDC (with sponsorship from VCE) outlines the rapid growth in high-density computing needs. By downloading this paper, you can learn how IDC found that higher utilization of IT assets and operational efficiency — which results from running more virtual machines on new-generation servers — reaches a plateau and often levels out at a certain point. This happens because the shift to virtualized servers often leads to strains in other areas of the infrastructure:

    • Virtual server sprawl increases server/storage/network stress and the accompanying administrative burdens required to deal with this stress. This makes support/maintenance more challenging and threatens application performance.
    • Handling this anticipated pressure by overloading/overprovisioning storage and data network facilities forces time-consuming, costly, and often unnecessary hardware upgrades.
    • Application performance and recovery behaviors (data recovery and cleanup) on error conditions can vary unexpectedly, stalling plans to migrate more business critical applications to virtual environments.

    The reality within the data center world is pretty direct: the infrastructure management challenge has increased, as depicted in the “Virtualization Management Gap” in Figure 1. Furthermore, IDC and this white paper indicate how recent research shows that IT departments now spend three-quarters (76.8%) of their time and resources maintaining the environment and less than a quarter (23.2%) on value-added activities.

    [Figure 1: The Virtualization Management Gap. Source: IDC]

    In this white paper, IDC outlines how the data center is entering a new business cycle in IT where customers are prepared to trade choice for both ease of installation and simplicity of management. While the majority of customers are still evaluating converged systems, over a quarter (27.4% — approximately double the previous year’s percentage) are currently using or planning to use them. IDC expects adoption to increase as 44% (also double the previous year’s percentage) of those considering convergence will likely adopt in the next three years.

    There are direct data center technologies which help deliver high-density computing, excellent resource utilization and, of course, the ability to converge an infrastructure. This white paper outlines the Vblock platform and how VCE – formed by Cisco and EMC with investments from VMware and Intel – markets an integrated converged infrastructure solution for the data center. In fact, VCE develops a range of platforms and solutions for virtualized environments based on components from Cisco, EMC, and VMware. Download this white paper today to learn how VCE’s goal is to accelerate the adoption of converged infrastructure and cloud-based computing models which can dramatically reduce the cost of IT while improving management and growth capabilities.

  • How Cloud Computing Has Empowered The End User

    Until the last couple of years, organizations focused their technologies on corporate efficiency, growth capabilities and business continuity. Of course, these are all still important. However, with the advancements around cloud computing, WAN technologies and virtualization, a new trend has begun to emerge. Because of cloud computing and IT consumerization, there is now a distinct focus on the end user.

    Targeted data center technology is driving better performance, happier users and improved productivity. More users are bringing their own devices into the workplace. Not only has IT consumerization changed the IT playing field, more companies are conducting workforce analysis to see how they can make their employees happier and more productive. A lot of those results came back with something simple: workplace technology flexibility.

    As more organizations begin to leverage cloud technologies, they will turn their focus to optimizing the end-user experience. Furthermore, they are going to try their best to optimally deliver more workloads with better resource utilization, both at the data center and at the end-user level. There are already more solutions which directly help data center administrators control users and the information that they are trying to access. Going forward, be prepared to see many more user-centric technologies start to rise up. Let’s understand why:

    • Remote users. More contractors and users are now logging in remotely. There has to be a system in place that’s capable of supporting such an environment.
    • Flexible work schedules. More employees are asking to work from home, or during their own hours. For many organizations, this isn’t an issue. However, delivering a positive user experience over the WAN to someone’s house can prove to be a challenge.
    • International user base. Technologies are allowing us to re-provision hardware and software to accommodate new time zones. This means we don’t have to have duplicate resources to support more users.
    • IT consumerization. Almost every organization is allowing their users to bring in an iPad or smartphone into an environment. These organizations are continuously tasked with allowing users to connect to corporate data.

    Cloud computing has facilitated the growth in end-user devices being brought into the corporate environment. Furthermore, more organizations are creating truly distributed platforms where users have the freedom to log in from anywhere, anytime and on virtually any device. End-user optimization and the performance of an application or desktop are vital for optimal workforce productivity. When the focus turns to the end-user, here are some technologies to keep an eye on:

    • Complete User Abstraction. Imagine being able to carry all of your settings with you at all times. I mean literally – all of them. From how your applications function, to the slightest detail in your profile – all yours. It’s happening now and will continue to grow. Organizations like AppSense are presenting the ability to abstract not only the user, but the hardware and software layer as well.
    • More ShareFile/Dropbox. Right now, companies don’t like Dropbox. However, there is a need for a secure file sharing platform. It will grow and it will enter the enterprise. Organizations can now create Dropbox-like environments and completely silo their data. That means information doesn’t have to leave a data center – or even a region.
    • Content Optimization. Not just WAN optimization. Specific QoS at the content level. Things like ICA, PCoIP, Flash, Video and Audio are all end-user centric. This type of content will be easier to manage. And, it won’t require a big hardened appliance to do so.
    • Mini-WAN Accelerators. Much like their bigger brothers, these would be given out to very remote employees. Why not optimize traffic into their home to ensure a better user experience? When the costs come down, this technology will see some big growth.
    • More Cloud APIs/Connectors. More applications are being built in the cloud space. As a result, there has been a greater emphasis around cloud connectors and APIs. The idea is to help the user connect faster to more applications around the web. Moving forward, these technologies will continue to grow and expand as more cloud-based platforms are brought up.
    • More IT Consumerization Control. Almost every organization is facing the BYOD truth. There are more users, more data and a lot more devices trying to connect into a network. Instead of blocking users, many data center administrators have switched tactics and are now trying to empower the end-user. Solutions like XenMobile or MobileIron create granular BYOD policies and controls to allow administrators to deliver more content to the end-point. Remember, the goal isn’t to block or track devices – it’s to allow users to become more efficient.
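    The granular-policy idea in the last bullet amounts to checking a device’s posture against a policy before delivering content. The sketch below uses invented names – nothing here comes from XenMobile or MobileIron – but it shows the empower-rather-than-block pattern: pass the posture checks and the content flows.

```python
# Hypothetical BYOD policy check; class and field names are
# illustrative only and not tied to any MDM vendor's API.
from dataclasses import dataclass


@dataclass
class Device:
    owner_role: str      # e.g. "engineer", "contractor"
    os_version: tuple    # e.g. (16, 2)
    encrypted: bool
    jailbroken: bool


@dataclass
class Policy:
    min_os_version: tuple
    require_encryption: bool
    allowed_roles: set


def allowed_to_connect(device: Device, policy: Policy) -> bool:
    """Empower the user when posture checks pass, rather than block outright."""
    if device.jailbroken:
        return False
    if policy.require_encryption and not device.encrypted:
        return False
    if device.os_version < policy.min_os_version:
        return False
    return device.owner_role in policy.allowed_roles
```

    A real MDM platform layers much more on top (app-level controls, selective wipe, per-network rules), but the policy evaluation at its core looks a lot like this.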

    Cloud computing and IT consumerization weren’t just new technological platforms. They were a new way of thinking. IT administrators are now working with a widely distributed networking infrastructure whose components may be located all over the world. Furthermore, there is the challenge of delivering more applications, workloads and data to end-users who are not using traditional means of access.

    There are core benefits in focusing your IT efforts around the end-user. Not only will organizations create better user personality profiles, they’ll be able to align their business around end-user functionality. This means creating an environment which allows the end user to become more efficient, utilize more devices, and – very importantly – enjoy using your company’s technology platform. At the end of the day, this all translates to a happier and more productive user.

  • Ground to cloud with PSSC Labs cloud-ready data centers

    The modern IT landscape has shifted towards the data center infrastructure. At the heart of the cloud – and, in reality, any organization – is the data center environment. More businesses rely on the core data center platform to help them function on a day-to-day basis. With more workloads, a lot more virtualization and the addition of cloud computing, more demands are being placed on the data center infrastructure. These new needs don’t only revolve around greater amounts of computing power. There is a direct need to deploy high-density computing systems in a very efficient manner.

    In this white paper, you’ll learn why building a private cloud isn’t always an easy task. Whether the obstacle is staffing or simply a lack of experience, your organization doesn’t have to watch the private cloud industry pass it by. This white paper will discuss how there are partners and vendors who are capable of not only delivering a fully functional private cloud solution – they can help you control, maintain and manage it as well. In working with PSSC Labs and their data center to cloud initiative, organizations can leverage a true turnkey private cloud solution.

    The idea behind building a cloud-ready data center is to cover the entire process. This means:

    • Custom server design, manufacturing and deployment.
    • Server integration and testing.
    • Rack integration and installation.
    • Delivery logistics.
    • Deployment, support and monitoring.

    [Image source: PSSC Labs]

    The power of this cloud model is that it can be custom built, provisioned, delivered and installed with minimal effort. Furthermore, this type of private cloud can grow with the needs of the organization since the technology is directly designed around scale. Remember, these designs are built around direct scalability and efficiency. Download this white paper to learn how your data center can be designed to support a robust private cloud while still lowering the total cost of ownership and simplifying management.

  • How to Ensure Your Monitoring Meets Your Needs

    With IT consumerization, more devices, and a lot more data – the reliance on the corporate data center will only continue to grow. Administrators are being tasked with running a leaner and more efficient data center, all while keeping costs down. It’s clear that resources will always be finite. Furthermore, some resources can be very expensive. In creating a truly efficient data center environment, administrators must control how and where their resources are allocated. The reality is simple: monitoring in data centers responds to key industry concerns around energy consumption, availability and costs, maintaining an optimal environment for IT equipment, and coping with increasing total cost of ownership.

    In this white paper from Raritan, you will learn the valuable benefits behind creating an efficient data center monitoring platform. Beyond setting up a good infrastructure monitoring system – the paper addresses the following as well:

    • Why monitor?
    • Who monitors and what do they monitor?
    • How are they monitoring?
    • What do they do with the information?
    • Where does the monitoring fit, and for what purpose?

    Download this white paper today to see how current monitoring and analysis systems appear stretched when answering the critical questions of:

    • How and where can I save energy?
    • How can I accurately track costs and identify cost savings?
    • How can I track power availability (so I can ensure that power within the facility goes where it is needed while reducing wastage where it is not needed)?
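    The power-availability question in particular comes down to comparing monitored per-rack draw against a budget before placing new load. A minimal sketch, with hypothetical function names, of the kind of check a monitoring platform would make:

```python
# Illustrative sketch of capacity checks from monitored power data.
# Function names are invented for this example.

def rack_headroom_kw(budget_kw: float, readings_kw: list) -> float:
    """Headroom against the worst observed draw, not the average,
    so transient peaks don't trip the branch circuit."""
    return budget_kw - max(readings_kw)


def can_place(server_draw_kw: float, budget_kw: float, readings_kw: list) -> bool:
    """Can this rack absorb another server's nameplate draw?"""
    return rack_headroom_kw(budget_kw, readings_kw) >= server_draw_kw
```

    Sizing against peak rather than average draw is the conservative choice; it is what lets power go where it is needed without risking an overload.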

    According to the paper, a major cause for dissatisfaction is that deployment of technology is not moving as fast as corporate and facility requirements. In other words, the requirements for actionable information and analysis have grown and changed faster than the originally deployed technologies can satisfy. The future data center must be designed around direct efficiency and a granular monitoring system. This way, organizations can plan their resources for both current and future demands.

  • Unleash the Full Potential of BYOD with Confidence

    In working with today’s business environment, managers now have to answer demands around more users, more data, more cloud and many more devices. Of course, organizations don’t want to take ownership of user-owned devices. However, they still want to manage and control the workload that is delivered to these end-points.

    IT consumerization will only continue to grow and evolve. Even now, users rely on 3-5 devices to access the Internet or corporate resources. Because of this, many organizations have run into challenges around engineering and managing a BYOD solution. This includes:

    • Onboarding users
    • Ensuring high service quality and availability
    • Maintaining security and mitigating risk
    • Supporting diverse users and devices
    • Enabling a consistent user experience
    • Accelerating deployment cycles with security

    This is where an intelligent BYOD design can really help. Instead of deploying fragmented solutions – organizations must look at a unified platform that can help them control devices and still stay secure as well as agile.

    In designing a unified BYOD environment, it’s important to work with technologies which are capable of supporting this type of solution. In this white paper from HP, you will learn about the unified BYOD infrastructure offering. In creating an easy-to-operate platform, administrators are able to gain more control over their infrastructure and the user end-point environment. As outlined in this white paper, HP’s BYOD solution benefits include:

    • Universal policy provisioning and enforcement
    • Flexible access control with device fingerprinting and self-registration portals
    • Device posture assessment and control
    • Rich traffic shaping and bandwidth management tools
    • Comprehensive usage and performance reporting
    • Detailed user behavior analysis
    • Single pane-of-glass management across wired and wireless infrastructure
    • Unified wired and wireless network
    • Network ready for SDN

    Download this white paper today to see how HP’s BYOD solution can help your organization deploy a logical, unified BYOD-ready environment. By creating an agile platform ready for IT consumerization, your organization can create a more powerful data center as well as a more productive workforce.

  • A Look at the Best Way to Build a Cloud

    Business agility and the needs of today’s business environment have pushed organizations into new technological areas. This includes deploying cloud computing platforms to help meet the needs of the market. Currently, there are more users, more devices, more data and a lot more reliance on the data center. All of these new demands require an agile infrastructure built on intelligent systems and a solid design. In working with unified, intelligent designs, administrators are able to deploy platforms which are capable of high-density user scalability. Furthermore, these systems simplify management and create direct benefits for a cloud-ready environment.

    One of these technologies, as outlined by the white paper, is the HP Converged Infrastructure. This type of platform helps organizations overcome the rigidity and high cost created by IT sprawl. In this white paper by IDG Tech and HP, you will see how the HP CI architectural blueprint eliminates silos and integrates technologies into shared pools of interoperable resources, all managed from a common management platform and all based on standards and customer choice. The result is a data center that delivers a new level of simplicity, integration and automation. More resources can be applied to innovation to deliver your desired business outcomes, including faster time-to-revenue, lower acquisition and implementation costs, flexibility to respond to business changes, and lower risk.

    In deploying an agile cloud environment, organizations should work with platforms which unify technologies and simplify the entire cloud deployment process. Download this white paper to learn how working with technologies like the HP CloudSystem and the HP 3PAR Storage platform can help create a formula directly built around efficiency.

    By unifying cloud-ready technologies, your organization can find benefits in numerous different ways. This includes:

    • Save Time: Minutes, not hours or days to provision a service
    • Be Efficient: Provision and grow resources on demand
    • Stay Secure: Role based access
    • Avoid Errors: SPM discovers and verifies configurations
    • Provision Services: Includes Storage and SAN Fabric

    When it comes to cloud computing – the best deployment method will revolve around a solid cloud foundation, good planning, and an eye towards the future.

  • PSSC Labs Focuses on Right-Sizing Private Clouds

    There is a change happening within many organizations where new technologies like cloud computing are being explored. Private cloud platforms are shaping how the new data center server landscape looks. In moving to a private cloud – the benefits can certainly be plentiful. The ability to use the Internet to help distribute data over vast distances has been around for some time. However, the idea around cloud computing has only become a reality over the past few years. This white paper examines the private cloud design and illustrates the right use-cases for this type of deployment. Remember, public clouds may initially appear attractive; however, there are certain elements IT administrators need to be aware of:

    • Unknown cost structures
    • Relinquishing control
    • Public data center lock down
    • Regional site resiliency issues
    • Poor performance & limited resource allocation

    Organizations are able to develop an infrastructure capable of great performance and scale. By deploying a private cloud infrastructure, companies are supporting more users, more functions and adding more business value with significantly lower expenses. In this white paper, you will learn how PSSC Labs helps create a private cloud able to accomplish this by “right-sizing” the computing platform. In addition, a private cloud allows for greater dedicated computing performance.

    In this white paper from PSSC Labs, you will learn about the examples of practical uses for private cloud technologies, including:

    • Virtual desktops and applications.
    • Files and data services.
    • Private cloud portals and collaboration spaces.
    • Compliance or regulatory-based data delivery.
    • High performance computing resources for design & engineering.

    The cloud revolution will only continue to expand. As more organizations jump on the cloud computing bandwagon, they’ll be able to leverage even more benefits of a widely distributed, highly-connected environment. Download this white paper to learn how to create a robust and agile private cloud infrastructure using PSSC Labs.

  • Start Small, Grow Tall: Why Cloud Now

    Over the past year, more organizations have jumped onto the cloud computing bandwagon. There’s really no doubt that many companies have already adopted some part of the cloud model. IT administrators are seeing the direct benefits in moving towards a cloud computing platform. There is the clear ability to scale, become more efficient and even simplify management.

    Still, with cloud computing being a relatively new concept for some organizations, managers have to be clear about what their cloud goals really are. Because the cloud is only beginning to make its way into the corporate data center, companies face a number of obstacles to cloud adoption. In this white paper from HP, you’ll learn about these challenges and how to overcome them. In working with a cloud model, some obstacles may include:

    • Differences between business and IT executives about the pace of adoption;
    • Differing stages of maturity within the cloud adoption continuum; and
    • The need to avoid compromising the cloud’s benefits with scattershot, uncoordinated adoption.

    The general idea behind cloud computing is that the platform can deliver some pretty powerful benefits. The key sits in the planning, use-case and deployment. Using a step-by-step deployment approach is one way to control the cloud initiative. It’s also important to work with a very agile cloud platform. In this white paper, you’ll learn about the HP CloudSystem and how this technology is tailored to the requirements of enterprises and service providers at various stages of cloud maturity. These offerings include:

    • Entry configuration for infrastructure as a service (IaaS) with HP CloudSystem Matrix that lets IT customers provision infrastructure and applications in minutes.
    • Full-scale deployment of private and hybrid cloud environments with HP CloudSystem Enterprise, which lets customers unify management across private, public, and hybrid clouds and adds advanced infrastructure-to-application lifecycle management.
    • Advanced capabilities for service providers with HP CloudSystem Service Provider, facilitating deployment of public and hosted private clouds that deliver complete service aggregation and management.

    Download this white paper today to learn how the HP Cloud platform can help your organization become more agile and ready for the cloud environment. Remember, with BYOD, IT consumerization, more data and more users – there will only be a greater reliance on the data center and the cloud.

  • Trends Driving the Enterprise Wireless LAN

    With more devices, more data and many more connections to a given network – the conversation around Wireless LAN (WLAN) technologies continues to thrive. Organizations are continuously working to optimize and create efficient WLAN environments capable of meeting the ever-changing needs of the end-user. As more users bring in their own devices which require connectivity into a WLAN platform, organizations may need to re-evaluate their existing platform and see where they can optimize. According to this white paper, a key driver in this trend is business users demanding the usability and functionality experienced with consumer devices. This means requirements for smartphones, tablets, laptops, and media-intensive applications in business environments are putting greater, and different, demands on the enterprise WLAN.

    In this white paper, HP and Indaba outline the key WLAN market dynamics including:

    • Employee productivity
    • Business process throughput
    • Customer/end-user satisfaction
    • Cost savings
    • FTE savings
    • Tangible vs. intangible benefits

    In creating a powerful WLAN environment, managers must analyze the capabilities being offered today and see how they will meet demands both now and in the future. This is why it’s important to work with flexible platforms which are capable of direct scalability. Download this white paper to learn about HP’s WLAN-specific product strategy, which focuses on delivering wire-line performance via Wi-Fi certified GbE WLAN client access. Furthermore, this white paper will outline how this type of technology can deliver up to a 50% increase in user density and performance for delivery of multimedia and cloud-based applications.

    When developing a WLAN strategy, IT administrators must always consider management as part of the platform. When controlling WLAN devices as well as the entire WLAN infrastructure, ease of management is a must. In this white paper, HP and Indaba outline the importance of a good management console – one that provides centralized configuration of multiple access points, supports up to 2,500 mobile devices, and reports wired and wireless device status and network performance. All of these WLAN technologies are built around helping your business stay agile and continue to meet today’s end-point demands.

  • Modular Data Center Due Diligence

    This is the fifth article in the Data Center Knowledge Guide to Modular Data Centers series. Modular solutions can benefit a variety of businesses and requirements — but not all. Similar to any data center project, proper planning is paramount. While predicting future IT requirements can be more guessing than science, it is still a vital part of the larger strategy. Investigating a modular approach means optimizing your research and making that perfect fit for realizing your objectives. Here are some items to consider when investigating modular products or providers.

    Modular Products
    • Is the product UL and/or CE certified? What local or state codes may be applicable to bringing this type of device to your site?
    • Will you need additional protection for the module? While many of the modular solutions are able to withstand a great deal of outside conditions, there are security factors to consider as well as how to optimally fit the modules into the structure or site you have.
    • On-site integration — can your facility/site accommodate modules and the overall power requirements?
    • What voltage distribution is required to the module and how will you provide it?
    • Do you require true mobility in a modular solution?
    • Does disaster recovery play a major role within your organization?
    • Integrated modular data center or separate power and cooling modules?

    Modular Providers
    • Where do you need the modular solution provided? On-site, dedicated site or colocated with the provider?
    • What integration options are available to manage and automate IT and infrastructure within the module?
    • What type of monitoring and security is required?
    • What data needs to be collected and reported?
    • Are you providing some type of distributed cloud solution?
    • Does the modular solution have a solid DCIM or DCOS option capable of spanning multiple data center modules?
    • Are there provisions for future management technologies such as DCOS?

    In both approaches, the foundational data for evaluation is power. Match the IT needs and forecasts for power consumption with the right-sized modular implementation in 100–500 kW increments. Additionally, any energy efficiency or environmental guidelines for the organization should be followed. Invite facilities, IT and all pertinent parties to the table to select the best fit for a holistic, optimized data center strategy.
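
    As a hypothetical illustration of matching forecast power to module increments, the snippet below rounds a forecast IT load up to whole modules. The 250 kW module size is an assumption chosen from within the 100–500 kW range cited above; the function name is invented for this sketch.

```python
# Hypothetical sizing exercise: round a forecast IT load up to whole modules.
# The 250 kW default module size is an assumption within the 100-500 kW range.
import math

def modules_needed(forecast_kw: float, module_kw: float = 250.0) -> int:
    """Number of modules required to cover the forecast load."""
    if module_kw <= 0:
        raise ValueError("module size must be positive")
    return math.ceil(forecast_kw / module_kw)

print(modules_needed(1150.0))  # prints 5 (1150 kW across 250 kW modules)
```

    The same arithmetic works against any vendor's actual module rating; the point is to expense capacity in increments that track the forecast rather than building for the decade up front.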

    To summarize this article series: in some regards, the decision about modular mirrors that of build vs. buy. The choice is either to put a lot of up-front capital into constructing a facility that you estimate will fulfill IT requirements for the next decade or so, or to build (and expense) modularly in increments that match IT needs in the years to come. Similarly, there are cost-analysis exercises comparing the operational costs of running a large facility with matching infrastructure against the cost of the module or modules deployed, and the efficiencies of both. Modular data centers are somewhat of a disruptor to the traditional build vs. buy decision, as they offer an alternative approach to building that can save significant capital and operational expense over the constructed data center.

    Although modular data center solutions are relatively new, cloud computing and the power of distributed technologies have made this type of platform a viable option. Managers are able to quickly understand the cost of a solution and deploy a data center which can directly integrate with the needs of the organization. Second-generation modular designs are purpose-built around today’s IT 2.0 environment.

    While modular solutions are increasingly taking market share, they are still not a perfect fit for every need. Like all other aspects of a data center strategy, it requires knowing what IT needs are now and in the future, and what the specific requirements are for efficiently optimizing the supporting data center infrastructure. In many cases the modular product or provider is a perfect fit for a retrofit, expansion or new data center project. Finding the right modular solution means knowing which one will benefit your needs the best. Taking a modular approach toward data center design is an innovative way to tightly integrate IT and facilities, and deliver it with extreme agility.

    The complete Data Center Knowledge Guide to Modular Data Centers is available for download in a PDF format and brought to you by IO. Click here to download the DCK Guide to Modular Data Centers.

  • An Open, Integrated Platform for Cloud Services

    There has been a shift in technology where cloud computing and the emphasis around WAN technologies have skyrocketed. There are more users, more devices and a lot more data for organizations to utilize and try to manage. Now, many companies are moving towards some type of cloud computing model to help them achieve their business goals. Why is this important? According to HP’s white paper on the CloudSystem, cloud computing is a key component of an organization’s ability to gain unencumbered access to information technology—to access “Infrastructure Anywhere, Applications Anywhere, Information Anywhere, or better said: Services Anywhere.”

    [Image source: HP CloudSystem]

    In order to deliver on the “Services Anywhere” promise, organizations will have to think differently about IT and how cloud computing plays a direct role. The key will be understanding the unique requirements of each service, such as availability, cost, performance, and regulatory needs, then address them in the most efficient and cost-effective way.

    When designing a cloud environment, building around adaptability and expansion are key concerns. As HP outlines in this whitepaper, CloudSystem is an integrated system that unifies the control and delivery of cloud services, whether their provenance is your data center, or an external source such as HP Cloud Services or Amazon Web Services. With HP CloudSystem, you get a secure, scalable cloud solution that includes:

    • A complete, integrated system to build and manage cloud services
    • Single services view across private, public, and hybrid cloud
    • Multi-hypervisor, multi-OS, heterogeneous infrastructure
    • Intelligent automation and lifecycle management; infrastructure-to-application
    • Broker service delivery across multiple clouds from a single, integrated point of control
    • Scalability and elasticity
    • Prepackaged service design tools

    In this white paper you will learn that, as a part of the HP Converged Cloud architecture, clients have a simplified, integrated platform that is easier to manage and provides flexibility and portability between private, public, and managed clouds. Download this white paper to see how HP’s CloudSystem and Converged Infrastructure can help you deliver a more powerful cloud computing environment.

  • Deploying Intelligent Storage Solutions with HP Converged Storage

    With more users, devices and connection points to the Internet – data is being generated at very rapid speeds. In fact, a recent Digital Archive Market Forecast showed that by 2015, the total worldwide cumulative digital archive capacity is projected to be at 300,000 petabytes. As the reliance on the data center continues to grow and more organizations move towards a cloud computing model – the storage components will still sit at the heart of the infrastructure. There is a growing need for storage solutions which are intelligent and are capable of scale. One effective way to deliver these platforms is through HP’s Converged Infrastructure – specifically, Converged Storage. In this white paper, you will learn about HP’s strategy for delivering Converged Storage that improves the ability of your business to capitalize on information.

    [Image source: HP Converged Storage]

    By delivering innovative yet standard platforms which provide the foundation for server and storage products, HP is able to supply not only outstanding storage products, but also unique advantages that no other storage vendor can offer. In this white paper, you will learn about these specialized advantages, which include:

    • Flexible deployment options including a range of form factors (rack, tower, blade, hybrid) as both physical disk systems and as software-defined storage
    • Easier administration through common management interfaces for remote support and service
    • Simplified hardware maintenance via common component leverage with servers
    • Greater visibility into operational metrics (like power and cooling) with a “sea of sensors” for the data center
    • Converged networking to reduce cable sprawl and lower costs
    • Enhanced performance through standards-based storage hardware innovation

    Download HP’s white paper to see how they are extending Converged Storage into new solutions and segments with a new initiative that introduces the next evolution of the HP Converged Storage strategy and vision. Remember, resources are always going to be finite – this is why working with HP’s intelligent Converged Storage infrastructure will help your organization gain more control over your data center and storage requirements.

  • Converging the Data Center Infrastructure: Why and How

    The modern infrastructure has advanced beyond the standard server. Now, with BYOD, IT consumerization, cloud computing and big data – there is a greater reliance on the data center than ever before. More organizations are looking for ways to optimize their data center environments in an effort to meet industry demands. Through advanced technologies, including virtualization and high-density computing, your company can revisit the data center infrastructure conversation. This means planning around new platforms which are not only capable of handling current computing needs, but future growth demands as well.

    According to the white paper:

    IDC finds that higher utilization of IT assets and operational efficiency — which results from running more virtual machines on new-generation servers — reaches a plateau and often levels out at a certain point. This happens because the shift to virtualized servers often leads to strains in other areas of the infrastructure:

    • Virtual server sprawl increases server/storage/network stress and the accompanying administrative burdens required to deal with this stress. This makes support/maintenance more challenging and threatens application performance.
    • Handling this anticipated pressure by overloading/overprovisioning storage and data network facilities forces time-consuming, costly, and often unnecessary hardware upgrades.
    • Application performance and recovery behaviors (data recovery and cleanup) on error conditions can vary unexpectedly, stalling plans to migrate more business critical applications to virtual environments.

    There is no question that the IT landscape will continue to evolve. As companies face a future in which they will need to deploy and effectively use hundreds, thousands, and even tens of thousands of server (and/or desktop) application instances in a virtual environment, they should consider deploying optimally (e.g., densest, greenest, simplest) configured converged infrastructure systems (server, storage, network) that are managed as unified IT assets.

    In this white paper by IDC, sponsored by VCE, you will learn about research conducted with five organizations that have implemented Vblock Infrastructure Platforms. According to the research in this white paper, VCE shows substantial business benefits associated with IT convergence and improved asset sharing.

    IDC and VCE results also indicate reduced IT costs per unit of workload, faster deployment, and reduced downtime. These organizations reported reducing calendar time for deployment of new infrastructure from five weeks to one week and reducing staff time to configure/test/deploy by 75%. Download this whitepaper to learn how your environment can benefit from a new converged infrastructure. This includes learning how to utilize the reduction in infrastructure hardware costs and IT staff time to manage operations – which will not only lower the average annual datacenter cost but also help increase efficiency and scalability.

  • The Importance of Networking in Data Center TCO

    Today’s modern infrastructure is built around data-on-demand and a constantly connected end-user. More organizations are utilizing elements of cloud computing and are striving to be more agile. One challenge facing many organizations is that they are inhibited by their legacy network infrastructures, which are often proprietary. Their networks are no longer able to meet current business demands, let alone support business growth or new technological innovations. It’s important to find that right medium of technology where your organization can increase network capabilities and performance without increasing costs.

    This white paper from HP takes a look at next generation technologies and how they’re able to improve a data center infrastructure. Furthermore, there is an emphasis on using intelligent systems which can also meet the needs of today’s business demands. With a modern network environment, organizations are able to gain competitive advantages with network infrastructures that are open and based on industry standards.

    Download this white paper to learn how a modern network infrastructure can help both your data center and reduce your TCO. These benefits include:

    • Modern network infrastructures use open and industry-standard components, eradicating costly vendor lock-in.
    • Modern networking technologies are automated and intelligent, so your IT staff can dedicate more time to supporting business growth.
    • Network management is centralized, simplified, and streamlined through a single-pane-of-glass management platform, removing the need to maintain multiple management platforms.
    • Branch office network management is simplified and doesn’t require dedicated IT staff at remote locations.

    Not only can a modern network infrastructure improve scalability, security and management – it can also increase the capabilities of the entire organization. The key idea here is to utilize smart – open – technologies to facilitate network migration and mitigate the risk and cost of change when the network needs to adapt to new business needs.

    [Image source: HP FlexNetwork Architecture]

    With an ever-growing emphasis on cloud computing and IT consumerization, more organizations will have to ensure that they are able to meet these new demands. This white paper outlines how the FlexNetwork architecture allows enterprises to meet contemporary business challenges like those previously mentioned using open, industry-standard protocol implementations.

  • How Intelligent Storage Controllers Have Revolutionized the Industry

    The storage of data (and lots of it) is a continued business demand. The storage industry is evolving to keep pace.

    The data center environment continues to evolve. Current market and business demands have changed to revolve around cloud computing, more devices, and a focus on the end-user computing experience. Large or small – infrastructure is what has been keeping organizations operational. Within the data center, numerous technologies all work together to help bring powerful technologies to other sites, branches and the end-user. A major part of this environment has always been the storage component.

    Over the past few years, the storage controller has advanced far beyond a device which only handles storage needs. With more cloud and IT consumerization – managing data, space and future storage requirements has become a greater challenge. So, as other technologies evolved, storage did as well. With modern storage appliances, organizations are able to do so much more than ever before. In effect, storage has helped revolutionize how we work with and control data. Remember, resources are still expensive. So, why not deploy intelligent technologies which not only optimize, but can scale directly as well?

    • Logical storage segmentation/multi-tenancy. As organizations grow, many will develop regional departments or branch offices. In some cases, administrators would have to deploy a new storage controller to numerous locations, even if they needed just a bit of non-replicated storage. Now, modern controllers can be logically split to facilitate the delivery of “virtual storage” slices to various departments. Unlike simple storage provisioning, the branch administrator receives a graphical user interface (GUI) and a “virtual controller.” To them, it looks like they have their own physical unit. In reality, there is a main storage cluster which has multi-tenancy enabled. The primary admin can see all of these slices, but the branch administrators will only see the slice that they are provided. Those private instances can be controlled, configured, and deployed all without impacting the main unit.
    • Storage thin provisioning. Storage utilization and provisioning have always been a challenge for organizations. With virtualization and many more workloads being placed onto a shared storage environment, organizations needed a way to better control data. With that came thin provisioning: the on-demand allocation of blocks of data versus the traditional method of allocating all the blocks at the very beginning. In using this type of storage-optimized solution, administrators are able to eliminate almost all whitespace within the array. Not only does this help avoid poor utilization rates (sometimes as low as 10 to 15 percent); thin provisioning can also optimize storage capacity utilization efficiency. Effectively, organizations can acquire less storage capacity up front and then defer storage capacity upgrades in line with actual business usage. From an administrative perspective, this can reduce data center operating costs, like power usage and floor space, normally associated with keeping large amounts of unused disks spinning and operational.
    • Connecting to the cloud. No core data center function can escape the demands of the cloud. This includes storage technologies. With more systems connecting into the cloud, storage technologies have adapted around virtualization, cloud computing, and even big data. There really isn’t any one major, cloud-related, storage advancement. Rather, numerous new features and technologies have surfaced which directly optimize, secure and manage cloud-based workloads. For example, solid-state and flash storage arrays have been growing in number when it comes to high IOPS workloads. Technologies like VDI require additional resources to allow hundreds and even thousands of desktops to operate optimally. Another example is geo-fencing data and storage. In creating regulatory compliant storage environments, organizations can now fully control where their data goes and where the borders are required. Not only does this help with file sharing, it helps companies control how their data lives in a public or private cloud scenario.
    • Controlling big data. It really didn’t take too long for storage vendors to jump on the “big data” bandwagon. The big picture here is that data and the utilization of data will continue to grow. Storage vendors like EMC and NetApp took proactive approaches in partnering and deploying intelligent systems capable of supporting big data initiatives. For example, the Open Solution from NetApp delivers a ready-to-deploy, enterprise-class infrastructure for Hadoop so businesses can control and gain insights from their data. Furthermore, in partnering with server makers, storage vendors are now able to deploy validated reference architectures which provide reliable Hadoop clusters, seamless integration of Hadoop with existing infrastructure, and analysis of any kind of structured or unstructured data. From EMC’s perspective, their powerful Isilon scale-ready platform for Hadoop combines EMC Isilon scale-out network-attached storage (NAS) and EMC Greenplum HD. In working with these types of technologies, organizations are able to utilize a powerful data analytics engine on a flexible, efficient data storage platform.
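
    The thin-provisioning behavior described above can be sketched in a few lines: physical blocks are allocated on first write rather than reserved up front. This is a minimal illustration only; the class and method names are hypothetical and do not correspond to any vendor’s actual API.

```python
# Minimal sketch of thin provisioning: physical blocks are allocated on first
# write rather than reserved up front. Class and method names are illustrative
# and do not correspond to any vendor's actual API.

class ThinVolume:
    def __init__(self, virtual_size_blocks: int):
        self.virtual_size = virtual_size_blocks  # capacity advertised to the host
        self.blocks = {}                         # physically allocated blocks

    def write(self, block_no: int, data: bytes) -> None:
        if not 0 <= block_no < self.virtual_size:
            raise IndexError("block outside virtual volume")
        self.blocks[block_no] = data             # allocate on demand

    def allocated(self) -> int:
        return len(self.blocks)                  # actual physical consumption

vol = ThinVolume(virtual_size_blocks=1_000_000)  # the host sees 1M blocks
vol.write(0, b"boot")
vol.write(42, b"data")
print(vol.allocated())  # prints 2 -- only written blocks consume capacity
```

    The gap between the advertised virtual size and the allocated count is exactly the whitespace that thin provisioning lets administrators avoid purchasing up front.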

    With so many vendors pushing hard to advance the storage market, the list above could easily be much longer. Market trends clearly indicate growth in the consumer market as well as within the enterprise. That means more endpoints, many more users and a lot more data. Furthermore, high-resource workloads demand smarter storage solutions that work to prevent bottlenecks.

    In designing your data center, always plan around the core components driving technological advancement. This means deploying scalable servers, solid networking components and an intelligent storage system that can control growing data demands. As the market continues to push forward, administrators will need to work with storage solutions that meet business requirements both now and in the future.

    For more on storage news and trends, bookmark our Storage Channel.


  • HP Converged Infrastructure Reference Architecture Design Guide

    The modern data center has become the heart of almost any organization. Because of this, there is now a greater emphasis on making data center systems efficient. That means eliminating complex, distributed resource platforms in favor of optimized converged infrastructures.

    In HP’s Architecture Design Guide, you can see how information technology as a “service” has moved from concept to reality. Early adopters have already deployed major solutions, and the approach has become a standard objective for mainstream Information Technology (IT) architects and planners. HP has adopted the term “Converged Infrastructure” to describe how HP products and services address it. This Reference Architecture guide provides a business and technical view of the adoption process.

    Today’s IT demands span the data center, from capacity to technology, processes, people, and governance. In this technical guide, HP outlines how its Converged Infrastructure opens the door to new approaches, and can enable IT management to defer or avoid costly data center expansions. For example:

    • Simplification: Collapse siloed, hierarchical, point-to-point infrastructure into an easily managed, energy-efficient and reusable set of resources.
    • Enabling growth: Efficiently deploy new applications and services, with optimum utilization across servers, storage, networking, and power.
    • On-demand delivery: Deliver applications and services through a common framework that can leverage on-premise, private cloud, and off-premise resources.
    • Employee productivity: Move resources from operations to innovation by increasing automation of application, infrastructure, and facility management.

    HP Converged Infrastructure enables organizations to achieve these goals while getting ahead of the growth curve and the cost curve. In working with data centers today, administrators must design environments capable of high density and efficient scalability.

    [Image source: HP: The transformation to HP Converged Infrastructure]

    This Reference Architecture Design Guide outlines the major components of the HP Converged Infrastructure design, including:

    • Virtual resource pools
    • Working with FlexFabric
    • Using the Matrix Operating Environment

    Download HP’s Architecture Design Guide to learn how to create a more efficient data center platform. As the need for user density grows, it’s important to ensure that the right technologies are in place to help facilitate that expansion.