Author: Serkadis

  • Samsung’s ATIV Windows 8 Smart PC Pro 700 slate now includes integrated LTE

    Microsoft’s Surface Pro doesn’t yet come with integrated mobile broadband, so if that’s what you’re after, Samsung’s Windows 8 tablet might be worth a look. The company announced Monday that it has a new model of its Windows 8 slate, the ATIV Smart PC Pro 700T, which comes with a built-in LTE radio. The 11.6-inch tablet with docking keyboard can roam around the country on AT&T’s LTE network when Wi-Fi can’t be found.

    The connectivity comes at a cost, though: Samsung’s retail price for this particular model is $1,599. Hopefully, that’s just a suggested price, because the non-LTE version of the slate lists for $1,199.99 and can be had for slightly less online. Connectivity is certainly valuable, especially when there’s none to be found, but a $400 premium seems a bit excessive.

    Aside from the integrated mobile broadband radio, I see no other differences between this tablet and the non-LTE model.

    It runs on Intel’s Core i5-3317U processor with Intel HD 4000 graphics, includes 4 GB of RAM and 128 GB of flash storage, and comes with Microsoft Windows 8. The 11.6-inch touchscreen provides 1920 x 1080 resolution and attaches to a full keyboard dock for laptop use. Samsung estimates that the 1.98-pound tablet will run for up to 8 hours on a single charge.

  • Big Brother and sensor-based surveillance will ensure you wash your hands

    Packing some RFID tags and a few sensors into a plastic bracelet might hold the key to better hand washing. MIT’s Technology Review profiled IntelligentM, one of many companies trying to help stop the spread of infections in hospitals by using sensors and connectivity to police the hand-washing habits of doctors.

    But IntelligentM and its ilk are just another example of how employers, and maybe even governments, will use connectivity plus sensors for surveillance. The question then becomes: how much of our privacy are we willing to trade for the benefits of reducing infections in a hospital setting, catching criminals, or even improving traffic safety by monitoring cars?

    The IntelligentM bracelet contains RFID tags and an accelerometer and can track when, where and how well a doctor washes his or her hands. The bracelet vibrates, for example, to let the doctor know when the hand-washing effort is sufficient.

    The RFID tags interact with receivers near the sinks and at the doors of patients’ rooms to communicate with the bracelet, and the collected data is uploaded at the end of the shift. Because the IntelligentM bracelet relies on RFID, it is somewhat less intrusive: it tracks hand washing at sinks and at the doors to patient rooms, as opposed to the doctor’s real-time movements around the hospital.
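
    As a rough mental model of how such a system can work (the field names and the 20-second threshold below are illustrative assumptions, not IntelligentM’s actual data format), each badge read and scrub reduces to a small event record that is logged locally and batch-uploaded after the shift:

    ```python
    from dataclasses import dataclass

    # Illustrative event record for an RFID hand-hygiene system. Field names
    # and the 20-second scrub threshold are assumptions for this sketch, not
    # IntelligentM's actual format.
    @dataclass
    class HygieneEvent:
        bracelet_id: str      # RFID tag in the clinician's bracelet
        station_id: str       # receiver at a sink or patient-room door
        kind: str             # "sink" or "room_entry"
        scrub_seconds: float  # duration estimated from the accelerometer

    def is_compliant(event: HygieneEvent, min_scrub: float = 20.0) -> bool:
        """A sink event counts as compliant if the scrub lasted long enough."""
        return event.kind == "sink" and event.scrub_seconds >= min_scrub

    # Events accumulate during the shift and are uploaded in one batch at the end.
    shift_log = [
        HygieneEvent("brc-0142", "sink-3", "sink", 24.0),
        HygieneEvent("brc-0142", "door-7", "room_entry", 0.0),
        HygieneEvent("brc-0142", "sink-3", "sink", 9.5),  # too short: bracelet vibrates
    ]
    washes = [e for e in shift_log if e.kind == "sink"]
    print(f"{sum(is_compliant(e) for e in washes)} of {len(washes)} washes compliant")
    ```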

    However, these bracelets are part of a larger shift toward monitoring workers to ensure compliance with company procedures and perhaps to measure productivity. For example, the Wall Street Journal recently wrote about how some employers are using physical trackers, placed on employees and around the office, from a company called Sociometrics to track how people move around and interact.

    And last March at our Structure:Data conference, my colleague Mathew Ingram got into a debate with the CEO of Cataphora, which analyzes employee emails, IMs and other electronic messages for risky or illegal behavior. But instead of looking for trigger words, the company’s software mines the entire text to understand the “digital tone” of the employee. That data can be used to prevent risk, but also to identify the company’s “best” managers.

    And that’s the rub in many of these cases: more information, as long as it’s interpreted accurately, will benefit those who perform well or fit within the norms of the group. However, those using the data bring their own norms and value judgments to the analysis, which puts the scrutinized at risk. It’s hard to argue with promoting hand washing in hospitals. But if that same data shows a doctor entering a patient’s room more often than the hospital’s best practices call for in order to talk to a patient, that doctor might get penalized.

    The courts have thus far been fine with employers monitoring employee email and communications in the workplace, but not with demanding the password to someone’s Facebook account. As more and more sensor-based surveillance occurs, we’re going to need new rights for employees, especially around employers applying their morals and norms to the workplace.

  • $338 Million Powerball Ticket Winner To Be Announced

    Saturday night’s Powerball drawing saw a ticket sold in New Jersey win the $338 million jackpot. The winner is supposed to be announced on Monday.

    The winning numbers were: 17, 29, 31, 52, 53, Powerball: 31.

    The last jackpot winner was Dave Honeywell from Virginia, who won a $217 million jackpot in February.

    In addition to the jackpot winner, Saturday’s drawing saw a Match 5 Power Play winner ($2 million) in Iowa, and $1 million Match 5 winners in Arizona, Florida, Illinois, Minnesota, North Carolina, New Jersey, New York, Ohio, Pennsylvania, South Carolina, and Virginia.

    There were a total of 2,486,305 winners on Saturday, and $30,662,765 in non-jackpot prizes won.

    The jackpot is now back down to $40 million.

  • Cloud News: Red Hat, Panzura, Avaya

    News from the cloud computing sector includes developments from Red Hat, Panzura and Avaya:

    Red Hat collaborates with Code for America.  Red Hat (RHT) announced a collaboration with Code for America (CfA), a non-profit organization that partners with local governments to foster civic innovation, focused on using technology to increase civic engagement. The collaboration brings Red Hat’s OpenShift Platform-as-a-Service (PaaS) offering to CfA Fellows and partner communities free of charge to help achieve CfA’s goal of fostering collaboration between city hall and city residents and innovative problem solving through technology. In a contribution worth approximately $300,000, the CfA Fellows will have access to OpenShift free of charge for one year and have the option of one additional year of free hosting and services. OpenShift supports many popular frameworks, such as Zend, Java EE, Spring, Rails and Play, with built-in platform support for Node.js, Ruby, Python, PHP, Perl and Java. OpenShift offers an application platform in the cloud that manages the stack so that developers can focus on their application code.

    Panzura selected by California State, Northridge.  Cloud storage provider Panzura announced that California State University, Northridge (CSUN), selected the Panzura Global Cloud Storage System to transform its off-site data backup protection process by utilizing Panzura’s encrypted Quicksilver cloud storage controllers. CSUN will streamline its off-site data protection efforts and significantly reduce storage needs, while also shifting CapEx to OpEx. The university manages approximately 300TB of NAS/SAN storage, protected by tape and disk storage. Backup processes were becoming increasingly slow and cumbersome, and the university needed to transition from the endless CapEx cycle of refreshing tape backup equipment and provisioning more capacity for offsite data protection. “Our goal was to increase process efficiency and reduce storage footprint for off-site backups by eliminating tape and avoiding use of campus-owned disk for off-site backup storage,” said Chris Olsen, Sr. Director of Infrastructure Services and ISO at CSUN. “We had some initial hesitations about cloud storage, including the cost to get data to the cloud, data security, and controlling capacity. With Panzura’s Global Cloud Storage System, we simply pointed our Symantec NetBackup application to the Panzura Quicksilver Cloud Storage Controller and the cloud became our backup target, while RSA 4096-bit encryption protected our data with us owning the encryption keys. The solution was straightforward to deploy and the deduplication exceeded our expectations.”

    Avaya Collaborative Cloud for Cloud Service Providers. Avaya announced additions to the Avaya Collaborative Cloud with new offers specifically designed for cloud service providers (CSPs) that allow them to brand and deliver Avaya’s unified communications, contact center and video solutions. With these new solutions, CSPs can help organizations off-load the challenges of managing BYOD environments, widely dispersed workforces and the shifting demands of end-customers. The new offers enable CSPs to evolve and augment enterprise communications with cloud-based solutions as well as provide greater interoperability across vendors, domains and protocols. “With Avaya Collaborative Cloud, cloud service providers can offer a differentiated UC, contact center or video solution to enterprises,” said Joel Hackney, SVP and general manager, Cloud Solutions, Avaya.

    To see other cloud computing news, visit our Cloud Computing Channel.

  • Exclusive Video Is Coming To Spotify [Report]

    Spotify has become the go-to source of music entertainment for many people on the Internet, and soon, users may also be watching shows with the service.

    Business Insider cites “two sources briefed on the company’s plans” as saying the company is planning on investing in original video content that would compete with providers like Netflix and HBO.

    From the sound of it, we’re not looking at movie and television show availability in the format we’re used to for music from Spotify, but rather video content produced specifically for Spotify users. Perhaps this will be the company’s way of testing the waters in video, and if successful, we’ll see it grow to compete more directly with the Netflixes of the world. But that’s a big if, and certainly a lot to ask of a service that we don’t even know for sure will exist.

    Nicholas Carlson reports:

    Our sources said that Spotify is looking for partners that can help it fund and create exclusive content. It is unclear if these talks would lead to a new round of investment in Spotify itself.

    Spotify, so far, has not commented on the rumor.

    Earlier this month, Spotify announced that it has added a million paid subscriptions in three months.

  • Google Kills Blocked Sites Feature

    About two years ago, when Google was in the early stages of the Panda update, it launched another means of helping users get more quality results in front of them. This one, unlike the Panda update, left it more up to the users, giving them more control over their own results. That was the domain blocking feature.

    The feature has now been killed. Late last year, people noticed that the feature wasn’t working. Now, Google has officially acknowledged its demise. Google says in a message on its Inside Search site (via Search Engine Roundtable):

    The Blocked Sites feature is no longer available. To block particular sites from your search results, we recommend the Personal Blocklist Chrome extension from Google. You may also download your existing blocked sites list as a text file.

    Google doesn’t offer much in the way of explanation as to why it killed the feature. Most likely, it wasn’t being used a whole lot. And really, isn’t the feature kind of an admission that Google isn’t getting results right?

    At least for those who want to continue blocking sites, Google provides an alternative. That’s more than Google Reader users got.

  • Network News: Zayo Partners With Internet2

    Here’s our review of some of today’s noteworthy links for the networking sector of the data center industry:

    Zayo and Internet2 bring 100G to the north. Zayo Group announced its partnership with Internet2 to add substantial new capacity on Zayo’s fiber route from Chicago to Seattle. The system will have greater than 4 terabits of overall capacity to support Internet2’s new 100G network. Set to be completed in the spring, the infrastructure will extend the nation’s leading research and education network’s 100G services to universities and research centers in Idaho, Montana, North Dakota, Minnesota, Washington, and Wisconsin. The project will provide new 100G national backbone paths between Internet2’s core routers in Seattle and Chicago, reduce latency for time-sensitive applications and increase capacity for global innovation with partners throughout the West and Asia that connect through Seattle. “This project demonstrates Zayo’s commitment to building strong strategic alliances with the research and education community and our investment into our extensive fiber footprint,” says Zach Nebergall, vice president of Wavelength Product Group at Zayo. “Internet2 is helping to bring substantial amounts of additional capacity to the research and education community via its partnerships.”

    CenturyLink Deploys Ciena 100G.  Ciena (CIEN) announced that CenturyLink (CTL) recently utilized Ciena’s converged packet optical platform with WaveLogic coherent optical technology to modernize and upgrade its network, which spans more than 50 metropolitan locations across the United States. With the upgrade, CenturyLink can offer 1GE, 10GE, 100GE and equivalent wavelengths, utilizing Ciena’s 6500 Packet-Optical Platform. The 6500 platform will also offer integrated packet switching, which gives CenturyLink agility in the delivery of groomed Ethernet services to its enterprise customers. “CenturyLink understands the increasing need for scalability, capacity and high-speed network services for today’s business requirements,” said Pieter Poll, senior vice president of national and international network planning, engineering and construction, CenturyLink. “Ciena’s converged packet and coherent optical technology allows us to provide speed and capacity improvements to our international and domestic regional networks, creating a true, end-to-end 100G network to deliver today’s bandwidth-intensive services and applications.”

    Level 3 to build data center in Bogota, Colombia. Level 3 Communications (LVLT) announced the construction of its newest data center in Bogota, Colombia, as a result of increased demand for IT services among its customers. This new, 500-square-meter Premier Elite data center, designed to support managed services, will provide onsite technical staff, high levels of availability, enhanced security and high power density cabinets and suites. “The Colombian market shows a growing demand for colocation, housing, hosting and value-added services,” said Luis Carlos Guerrero, sales vice president for Level 3’s Andean region. “The trend to outsource these services to a trusted business partner – one that will support the customer in its expansion strategy – is crucial for companies today so they can focus on their core business.”

    Extreme Networks solutions tested by EANTC. Extreme Networks (EXTR) announced that its high-performance cloud and Mobile Backhaul Ethernet switching solutions were among the first to be tested by the European Advanced Networking Test Center (EANTC) for carrier-focused Software Defined Networking (SDN), MPLS and Hybrid Timing combining Synchronous Ethernet (SyncE) and IEEE 1588 Precision Time Protocol (PTP). EANTC’s final test plan for the SDN/MPLS and IPv6 testing was rigorous and included 51 test outcomes, 19 of them covering new ground in SDN testing. The SDN OpenFlow tests highlighted Layer 2 and 3 forwarding, OpenFlow topology discovery, failure recovery in OpenFlow, and policy-based routing. “Extreme Networks continues to deliver first-to-market, high-performance SDN and Mobile Backhaul Ethernet solutions for sophisticated multi-tenant data centers and mobile 4G networks,” said David Ginsburg, CMO for Extreme Networks. “Our successful completion of the EANTC-organized testing in Berlin in 2013 further validates our ability to support the network architectures required by new carrier service offerings.”

  • Sony brings Xperia ZL smartphone to US but it’s costly at $719

    Hoping to grow its share of the smartphone market, Sony is now taking pre-orders for its Xperia ZL handset. Dubbed the “world’s most compact smartphone with a 5″ display,” the Xperia ZL ships on or around April 8. And on paper, it’s arguably the best smartphone Sony has created yet. But without a carrier partner in the U.S. to subsidize the cost, consumers will pay Sony outright for the phone, to the tune of $719.99.

    What do you get for that kind of cash? In terms of hardware, the Xperia ZL rivals the flagship phones from any other handset maker these days. Sony’s Reality Display offers 5 inches of full high-definition resolution and uses the company’s Mobile BRAVIA Engine 2, bringing Sony’s television technology to the small screen. Even with the large display, Sony kept the phone dimensions relatively small at 5.18 x 2.7 x 0.39 inches.

    The main camera is a 13-megapixel unit with an Exmor RS sensor and f/2.4 aperture. A range of photo modes, 1080p video support and HDR capture for both stills and videos are included.

    Sony includes 2 GB of memory to run Google Android 4.1 (not Android 4.2, sadly) on Qualcomm’s 1.5 GHz quad-core Snapdragon S4 Pro chipset. Internal storage tops out at 16 GB but can be expanded by up to another 32 GB through removable storage. At this price, the handset only supports HSPA+ networks and below. An Xperia ZL model with LTE support is available for $759.99.

    While the phone may be worth the price, I wouldn’t expect Sony to sell too many units here. The U.S. is only now flirting with full-price phones, as consumers have been addicted to hardware subsidies for years. T-Mobile is the first of the big four to make any headway toward a BYOD model, and as the smallest of the four, it won’t have the traction to help Sony move the Xperia ZL.

  • Yahoo Acquires Summly To Integrate Into Its Own Mobile Offerings

    Yahoo announced on Monday that it is acquiring mobile product company Summly, which has an app (or had, at least) billed as “pocket sized news for iPhone”.

    Summly Founder Nick D’Aloisio and his team will join Yahoo in the coming weeks, and the Summly app will close. Yahoo is acquiring the technology, and will use it in its own mobile experiences soon, a Yahoo spokesperson tells WebProNews.

    Yahoo SVP, Mobile and Emerging Products, Adam Cahan, writes in a blog post:

    At the age of 15, Nick D’Aloisio created the Summly app at his home in London. It started with an insight — that we live in a world of constant information and need new ways to simplify how we find the stories that are important to us, at a glance. Mobile devices are shifting our daily routines, and users have changed not only what, but how much information they consume. Yet most articles and web pages were formatted for browsing with mouse clicks. The ability to skim them on a phone or a tablet can be a real challenge — we want easier ways to identify what’s important to us.

    Summly solves this by delivering snapshots of stories, giving you a simple and elegant way to find the news you want, faster than ever before. For publishers, the Summly technology provides a new approach to drive interest in stories and reach a generation of mobile users that want information on the go.

    “Our vision is to simplify how we get information and we are thrilled to continue this mission with Yahoo!’s global scale and expertise,” says D’Aloisio. “After spending some time on campus, I discovered that Yahoo! has an inspirational goal to make people’s daily routines entertaining and meaningful, and mobile will be a central part of that vision. For us, it’s the perfect fit.”

    “With over 90 million summaries read in just a few short months, this is just the beginning for our technology,” he says. “As we move towards a more refined, liberated and intelligent mobile web, summaries will continue to help navigate through our ever expanding information universe.”

    The acquisition is expected to close in the second quarter. Terms of the deal are not being disclosed.

  • Multi-County Initiative Gets Local Attention

    It was nice to see the recent broadband meeting held in Hinckley get attention from the local press. I wrote about the meeting earlier; it was a great gathering of several counties: Pine, Kanabec, Mille Lacs, Carlton and Aitkin. All came to make a plan to expand broadband in their area.

    The Kanabec County Times picked up on the economic development opportunities that broadband could bring…

    From an economic development perspective, [Bernadine] Joselyn said research shows that economic growth follows telecommunications investment.

    “Companies seeking new locations quickly bypass communities without world-class broadband,” she pointed out. “Many of Minnesota’s highest earners, including retired or semi-retired professionals, would prefer to live next to a lake or on a hobby farm. Unconnected communities stand little chance of attracting or retaining these potential taxpayers, not to mention recent college graduates.”

    Bill Coleman, president of Community Technology Advisors, serves as a catalyst to bring people together in support of regional broadband goals. “The future is already here,” Coleman observed. “It’s just not evenly distributed.” Coleman and Connect Minnesota’s Bill Hoffman encouraged people to visit the Connect Minnesota website to view an interactive map identifying broadband providers and connection speeds.

  • Yahoo Building a Bigger Computing Coop

    The exterior of the Yahoo Computing Coop buildings in Lockport, New York. The company is planning to expand its campus in Lockport. (Photo: Yahoo)

    Yahoo’s ultra-efficient “chicken coop” data center in upstate New York is getting bigger. The Internet company has announced plans to invest an additional $168 million in the campus for its hydro-powered, wind-cooled server farm in Lockport, N.Y. The expansion will include an additional 7.2 megawatts of data center space, along with a call center. The projects are expected to create 115 jobs between them.

    The expansion was expected, as Yahoo indicated last year that it would buy additional land at its property in Lockport. The company is seeking breaks on property taxes and sales taxes on servers and equipment for the project, according to the Buffalo News. The New York Power Authority will expand the site’s power capacity to support the new construction.

    “We are happy to be a part of the Western New York community and are excited about our expansion plans,” said David Dibble, executive vice president of central technology for Yahoo. “We are appreciative of our close partnerships with local municipalities and are grateful to our outstanding workforce in Lockport. Yahoo is committed to being an environmentally responsible company, and we thank New York state and local authorities for working with us to ensure we continue to power our data center with clean energy.”

    The Yahoo Lockport facility, which is optimized for air-cooling, is one of the world’s most efficient data centers, operating with a Power Usage Effectiveness (PUE) of 1.08. The data center, which is supported by hydro-electric power from the NYPA, requires mechanical cooling for a handful of hours each year.
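
    Power Usage Effectiveness is simply total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.08 means roughly 8 watts of cooling and distribution overhead for every 100 watts of IT load. Here’s a minimal sketch with hypothetical figures (the 10 MW IT load and the 1.5 comparison PUE are assumptions for illustration, not Yahoo’s numbers):

    ```python
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """Power Usage Effectiveness: total facility power over IT power."""
        return total_facility_kw / it_load_kw

    # Hypothetical 10 MW IT load at the reported PUE of 1.08, versus an
    # assumed conventional facility at PUE 1.5 for comparison.
    it_kw = 10_000.0
    print(pue(it_kw * 1.08, it_kw))   # 1.08
    print(it_kw * (1.08 - 1.0))       # ~800 kW of overhead at PUE 1.08
    print(it_kw * (1.5 - 1.0))        # ~5,000 kW of overhead at PUE 1.5
    ```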

    The first two phases of the Lockport project, built in 2010 and 2011, featured multiple 120-by-60 foot prefabricated metal structures using the Yahoo Computing Coop data center design. The coops, modeled on the thermal design of chicken coops, have louvers built into the side to allow cool air to pass through the computing area. The air then flows through two rows of cabinets and into a contained center hot aisle, which has a chimney on top. The chimney directs the waste heat into the top of the facility, where it can either be recirculated or vented through the cupola. See A Closer Look at Yahoo’s New Data Center for more photos and video.

    This approach to heat management allows the Lockport data center to operate without chillers, which provide refrigerated water for cooling systems and are among the most energy-intensive components of a data center. The facility uses an evaporative cooling system during those 9 days a year when it is too warm to use fresh air. The buildings were positioned on the Lockport property to allow Yahoo to bring in cool air from either side of the coop, based on the prevailing winds.

  • You Can Get Keyword Data From Facebook Graph Search in Google Analytics

    Will Facebook’s Graph Search become a major piece of successful online marketing strategies? It’s still in its infancy, and does only a small fraction of what it promises to do at this point, but just given the fact that it’s the search feature of Facebook (over a billion users), it seems like something that should play a significant part.

    Not only does Graph Search not currently have all the functionality that Facebook has planned for it, but it’s also still in the process of slowly rolling out. And I do mean slowly. Any notions you have about Graph Search thus far are simply incomplete. What’s available now is nothing compared to what will be available.

    Even still, some have big hopes for Facebook’s revamped search and its potential effects on small businesses. Consider this infographic from Advantage Capital Funds:

    Infographic: Can Facebook Graph Make You Money? (by Advantage Capital Funds)

    That’s all fine and good, but online marketers need data. When it comes to search marketing, keyword data is obviously of the utmost importance (though it’s getting harder to come by thanks to the whole “not provided” ordeal), but this isn’t something that’s readily available from Facebook. You can’t just look at your search data in Google Analytics and see the Graph Search referrals, because Graph Search is part of Facebook, which Google considers social rather than search, even though Graph Search sends users to Bing results in cases where Facebook’s own data doesn’t match the query.

    It’s entirely possible that the situation will get better for webmasters and marketers in the future, but for now, there is a workaround, which Glenn Gabe discusses in a blog post (via Search Engine Land).

    Facebook does have keyword data available via referral strings. As Gabe noticed, the keyword is being passed along in the referrer.

    “As you can guess, I jumped into Google Analytics to see how this was being picked up,” Gabe writes. “Since Facebook isn’t an official search engine in GA, it was still showing up as a referring site (without the keyword showing up). But, since the q= querystring parameter was being passed in the referrer, I knew I could surface those keywords via advanced filters. So, I quickly set up a new profile and added a filter that would capture graph searches from Facebook. And it works.”

    Gabe goes on to provide step-by-step instructions for doing this, so check out the post if this is something you want to do.
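
    To make the mechanism concrete: the Graph Search referrer carries the query in a q= parameter, much like a conventional search engine referrer, so extracting the keyword is ordinary query-string parsing. Here’s a minimal sketch (the referrer URL is a made-up example of the shape Gabe describes, and this is no substitute for his actual GA filter setup):

    ```python
    from urllib.parse import urlparse, parse_qs

    def graph_search_keyword(referrer: str):
        """Return the q= query term from a facebook.com referrer, if present."""
        parsed = urlparse(referrer)
        if not parsed.netloc.endswith("facebook.com"):
            return None
        return parse_qs(parsed.query).get("q", [None])[0]

    # Hypothetical referrer of the shape described in the post:
    ref = "https://www.facebook.com/search/results.php?q=photography+tips&type=web"
    print(graph_search_keyword(ref))  # photography tips
    ```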

    Tracking this data is bound to make Graph Search a lot more helpful to your business. And wait until the product really gets into full swing.

  • GPU News: Cray Awarded $32 Million Contract

    Surrounding the GPU Technology Conference this week in San Jose, Cray, Cirrascale and Penguin Computing have GPU announcements:

    Cray awarded $32 million Swiss contract.  Cray announced that it has signed a contract with the Swiss National Supercomputing Centre (CSCS) to upgrade and expand its Cray XC30 supercomputer. When the upgrade is completed, the system nicknamed “Piz Daint” will be the first petascale supercomputer in Switzerland. Currently a 750-teraflop Cray XC30, it will be transformed to include NVIDIA K20X GPU accelerators. CSCS is the first customer to order a Cray XC supercomputer with NVIDIA GPUs. “Piz Daint will help advance the research projects of our diverse user community by leaps and bounds,” said Thomas Schulthess, director of CSCS. “With GPU acceleration integrated into Cray’s latest generation supercomputer, the application performance and the energy efficiency of our simulations will improve significantly. We are very excited about the collaborative development of a truly general-purpose, hybrid multi-core system with Cray.” The contract is valued at more than $32 million, and the upgraded system is expected to be operational in 2014.

    Cirrascale launches GB5400 GPGPU blade server.  Cirrascale announced the next generation of its GB5400 blade server supporting up to eight GPU cards. Utilizing a pair of the company’s proprietary 80-lane PCIe switch-enabled risers, the GB5400 supports up to eight discrete GPGPU (General-Purpose Graphics Processing Unit) cards in a single blade. “We have redesigned the GB5400 to handle the latest cards from leading GPU providers, including NVIDIA and their wide assortment of high-end GPU cards,” said David Driggers, CEO, Cirrascale Corporation. “Our customers and licensed partners in cloud and High Performance Computing are asking for this increased density and performance, while maintaining the ability to scale the solutions they choose. We’re confident that the GB5400 meets these needs, and in fact, surpasses them.” The Cirrascale GB5400 blade server, including the entire GB series line of GPGPU solutions, as well as the Cirrascale proprietary PCIe switch-enabled riser, is immediately available to order and is shipping to customers now.

    Penguin Computing unveils Relion 2808GT. Penguin Computing announced the availability of the Relion 2808GT. The Relion 2808GT supports eight GPUs or coprocessors in two rack units and provides a higher compute density than any other server on the market. The GPUs are supported by a dual-socket platform based on Intel’s Xeon E5-2600 product family. The Relion 2808GT also features an onboard dual 10GbE BASE-T controller and up to 512GB of ECC memory. “Penguin has been delivering integrated GPU computing clusters since version 1.0 of this technology,” said CEO Charles Wuischpard. “The new Relion 2808GT platform in conjunction with the latest GPU and coprocessor technology delivers unprecedented levels of performance. The Relion 2808GT enables our HPC customers to further accelerate their research by shortening the time to result for their simulations.”

  • Comparing the Cost of a Custom Data Center

    This is the third article in a series on the DCK Executive Guide to Custom Data Centers.

    It should be noted that a custom data center design may cost somewhat more than a standard data center. This aspect should be examined closely: a higher initial CapEx (whether amortized or factored into a lease) should not be the deciding factor on its own. Over the long run, a custom design can actually represent a lower Total Cost of Ownership (TCO) if it results in lower operating costs from improved energy efficiency. Data center designs have also been evolving, particularly over the last several years, to improve energy efficiency. There have been several new designs involving the use of so-called “Free Cooling”, which can greatly impact the TCO.
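
    As a back-of-the-envelope illustration of that trade-off (every figure below is hypothetical, not taken from the guide): a custom build that costs more up front can still come out ahead on TCO once years of energy savings accumulate.

    ```python
    # Hypothetical CapEx/OpEx figures to illustrate the CapEx-vs-TCO trade-off;
    # none of these numbers come from the guide itself.
    def tco(capex: float, annual_opex: float, years: int = 10) -> float:
        """Simple undiscounted total cost of ownership over the facility's life."""
        return capex + annual_opex * years

    standard = tco(capex=10_000_000, annual_opex=3_000_000)
    custom = tco(capex=12_000_000, annual_opex=2_400_000)  # assumed 20% energy savings

    print(f"standard: ${standard:,.0f}")  # $40,000,000
    print(f"custom:   ${custom:,.0f}")    # $36,000,000
    ```

    A real comparison would discount future OpEx and model energy prices, but the direction of the result is the point: higher CapEx is not automatically higher TCO.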

    Higher Power and Cooling Densities
    Most standard general purpose data center designs can accommodate 100-150 watts per square foot (and/or an average of up to 5 kilowatts per rack); the quick check below shows how those two figures line up. This design is typically based on the use of a raised floor as a cool-air delivery plenum, coupled with down-flow perimeter cooling units. It has the inherent advantage of a proven track record with standard cooling equipment and offers the ability to easily accommodate moves, additions and changes by placing (or replacing) floor tiles to meet the heat load of the rows of racks as needed (until the maximum cooling capacity per rack is reached).
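
    Assuming a gross footprint of roughly 30 square feet per rack (cabinet plus its share of aisle and clearance space, a round number that is not taken from the guide), the two figures are consistent:

    ```python
    # Rough consistency check between the per-square-foot and per-rack figures.
    # The 30 sq ft gross footprint per rack is an assumed round number.
    watts_per_sqft = 150
    gross_sqft_per_rack = 30

    kw_per_rack = watts_per_sqft * gross_sqft_per_rack / 1000
    print(kw_per_rack)  # 4.5 -- close to the ~5 kW/rack design ceiling
    ```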

    Some organizations have moved to significantly higher power densities, ranging from 10-25 kilowatts per rack. While some data center cooling designs can accommodate more than 5 kilowatts per rack, that capacity is typically available only on a limited, case-by-case basis. Most standard designs cannot properly cool large quantities of high-density racks across the entire data center. These higher power-density requirements are typically valid candidates for a custom data center.

    Designs for Extremely High Energy Efficiency
    While good energy efficiency is important to any data center, there are two areas where new developments can significantly improve the energy efficiency of the major infrastructure, though they may come with other limitations.

    Power Systems
    In the US market, most data centers typically use industry-standard voltages within the data center: 480 volts AC for the UPS and cooling equipment, which is then stepped down to 208 or 120 volts AC for most IT equipment. However, some systems purported to be more energy efficient than the standard power systems are beginning to find their way into US data centers. They generally fall into two categories. The first is the European-type system, based on distributing 400/230 volts AC within the data center to power the IT equipment. Since this system can be implemented relatively easily and supports virtually any new IT equipment with no change, it is beginning to make some inroads in the US market.

    The second is Direct Current (DC) based systems, which generally fall into two sub-categories: one at 380 volts DC, and others at one or more lower voltages, such as 48 volts DC (the US telephone system standard) and several other variations based on other lower DC voltages. While these DC-based systems have been built and are in operation at a limited number of sites, at this time they generally require specially designed, custom-built or modified IT equipment. There are technical and economic pros and cons to all of these DC-based systems, which are still actively debated, but exploring them in detail is beyond the scope of this article. Before committing to a DC-powered design, however, be aware that if a universal DC IT equipment standard does not emerge, a DC-based system cannot easily or cheaply be retrofitted to support standard US AC-based, off-the-shelf computing equipment.

    It should be noted that while older data centers had much greater losses in their electrical power chain, this was primarily due to older-technology UPS systems. The newest UPS systems are far more energy efficient than their predecessors and therefore shrink the energy-saving advantage that the non-standard power systems offer. Consider this carefully before moving toward a non-standard power system.
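
    The arithmetic behind that point: losses in a power chain multiply, so end-to-end efficiency is the product of each stage’s efficiency. A quick sketch with assumed (not measured) stage efficiencies shows how little headroom a modern chain leaves for non-standard distribution to reclaim:

    ```python
    from math import prod

    # Illustrative stage efficiencies for an AC power chain; the percentages
    # are assumptions chosen to show the effect, not measured values.
    older_chain = [0.90, 0.97]  # older UPS ~90%, step-down transformer ~97%
    newer_chain = [0.96, 0.97]  # modern UPS ~96%, same transformer

    print(prod(older_chain))  # ~0.873 -> ~12.7% lost before the IT load
    print(prod(newer_chain))  # ~0.931 -> ~6.9% lost
    ```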

    Alternate and Sustainable Energy Sources
    In most cases the data center simply purchases electricity generated by a utility. The origin of that power has become a matter of public awareness, and some sustainability organizations have criticized data centers over it, even when the facility itself is new and energy efficient. This can impact the public image and reputation of the data center operators. In some cases it has affected the potential location of the data center, based on the type of fuel used to generate the power, whereas previously those decisions were driven strictly by the lowest cost of power. Some new leading-edge data centers have even begun to build solar and wind generation capacity to partially offset or minimize their use of less sustainable local utility generation fuel sources, such as coal. This would certainly fall under the category of a custom design, and it would also change the TCO economics, since it raises the upfront capital cost significantly.

    Cooling Systems
    Of all the factors that can impact energy efficiency (and therefore OpEx), cooling represents the majority of facility-related energy usage in the data center, outside of the actual IT load itself. The opportunity to save significant amounts of cooling energy by moderating the mechanical (compressor-based) requirements and expanding the use of “free cooling” is enormous.

    One of the areas where an investment in customization can produce significant OpEx savings is the expanding use of “Free Cooling”. The traditional standard data center cooling system primarily consists of standard data center grade cooling units (CRAC or CRAH; see part 3, “Energy Efficiency”, for more information), typically placed around the perimeter of the room blowing cold air into a raised floor. This is typically a closed-loop air path with virtually no introduction of outside fresh air, which means that mechanical cooling, requiring significant energy to operate the compressors, is the primary method of heat removal. This is the time-tested and most commonly utilized design. Some systems include some form of economizer to lower annual cooling energy, but few standard systems can totally eliminate the use of mechanical cooling.

    However, more recently some data centers have been built using so-called “Fresh Air Cooling”, which brings cool outside air directly into the data center and exhausts the warmed air out of the building whenever outside conditions permit. There are many variations on this methodology, and it is still being developed and refined. The method was pioneered and built mostly by Internet giants such as Facebook, Google and Yahoo and would have been considered unthinkable only a few years ago for an enterprise-class data center. While this is not yet a widespread, commonly accepted method of cooling, it is being considered by some more mainstream operators for their own data centers. Of course, its effectiveness is greatly related to climatic conditions and therefore it is not ideal for every location. (Please see part 3, “Energy Efficiency”.)
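
    The appeal of these designs is easy to quantify: mechanical cooling energy scales with the hours the compressors actually run. A toy estimate (the compressor draw is an invented figure; the nine too-warm days echo the Yahoo Lockport example above):

    ```python
    # Toy estimate of compressor energy avoided by fresh-air ("free") cooling.
    # The 500 kW compressor draw is invented for illustration only.
    HOURS_PER_YEAR = 8760
    mech_cooling_kw = 500

    conventional_hours = HOURS_PER_YEAR   # closed loop: compressors run year-round
    fresh_air_hours = 9 * 24              # mechanical cooling only on ~9 warm days

    saved_kwh = mech_cooling_kw * (conventional_hours - fresh_air_hours)
    print(f"{saved_kwh:,} kWh per year avoided")  # 4,272,000 kWh per year
    ```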

    You can download a complete PDF of this article series on DCK Executive Guide to Custom Data Centers courtesy of Digital Realty.

  • Ultra HD TV might get off to stronger than expected start

    Ultra HD TVs with 4K resolution were the talk of the Consumer Electronics Show this year, but not even the director of industry analysis at the group that organizes CES each year was convinced Ultra HD TV would see meaningful adoption in 2013. According to a recent report, however, HDTVs with Ultra HD resolution might get off to a stronger than expected start. Digitimes claims its unnamed market observer sources expect display panels destined for Ultra HD TVs to account for as much as 20% of HDTV panel shipments in 2013. China-based TV vendors are expected to account for much of the early demand, and Innolux and AUO are named as the main suppliers of 4K display panels. Digitimes notes that pricing is still a barrier for widespread adoption, as Ultra HD panels cost about twice as much as comparable 1080p displays.

  • Why I totally support background checks… for members of Congress and the Obama administration

    In the midst of the big debate about “background checks,” I thought it important to weigh in on the issue and profess my full and total support for background checks…
  • Urgent health warning issued over Adya Clarity detox liquid containing aluminum, sulfuric acid

    The non-profit Consumer Wellness Center (www.ConsumerWellness.org) has issued a consumer health warning over Adya Clarity, a “detox” product that was seized by the FDA in 2012 and tested at over 1200ppm aluminum. The product is currently being marketed through a series…
  • Cyprus bank bailout agreement is pure theft: 40% of private deposits to be looted from selected accounts

    A brand new looting arrangement has been reached concerning Cypriot banks. It involves seizing the funds of all accounts over 100,000 euros, then stealing up to 40% of those funds sometime over the next few weeks, or whenever EU bureaucrats get around to deciding exactly…
  • Vaccine victory: Widespread resistance from parents to HPV jab for daughters shows truth is spreading far and wide

    Parents with young daughters are increasingly wising up to the human papillomavirus (HPV) vaccine scam, according to a new study published in the journal Pediatrics. Based on the latest figures, more than 16 percent of parents in 2010 rejected popular HPV vaccines like…
  • Youth with diabetes at greater risk following transition from pediatric to adult care

    Type 1 diabetes is a condition in which the body does not produce insulin and cannot convert sugar, starches and other food into energy. Generally diagnosed during childhood or adolescence, the disease requires lifelong access to medical care and intensive daily self-management.
     
    As children with Type 1 diabetes grow into young adults, they must leave their pediatric health care providers for adult providers. But the timing of this process and its impact on the young people’s health had not been fully explored.
     
    In a new study published in the April issue of the journal Pediatrics and currently available online, UCLA researchers found that young people with Type 1 diabetes who had transitioned from pediatric to adult care were 2.5 times more likely to have chronically high blood glucose levels, putting them at higher risk for heart attacks, strokes, blindness and kidney failure later in life.
     
    The estimated median age of patients when this transition occurred was 20.1 years, the researchers said, and 77 percent had left pediatric care by age 21. 
     
    The findings suggest that young adults need additional support and guidance when leaving their pediatric providers to avoid the risk of poor diabetes control.
     
    “The transition to adulthood can include changes in health care providers, insurance and often living situations as patients move from high school to college or work,” said the study’s lead author, Dr. Debra Lotstein, an associate clinical professor of pediatrics at the David Geffen School of Medicine at UCLA and Mattel Children’s Hospital UCLA. “These transitions can be challenging for anyone, but youth with a chronic health problem like diabetes are at risk of losing the support of their health care providers and their family that helps them stay healthy. When this transition goes poorly, it increases the risk of worse health outcomes in adulthood.”  
     
    Previous research on youth with Type 1 diabetes in the U.S. had looked primarily at young people from a single diabetes specialty center or a single geographic area, or it had examined youth at just one point in time — either before or after leaving pediatric care. The current study, however, involved the largest national cohort of youth with Type 1 diabetes in the U.S. to be followed over a period of time.
     
    Researchers analyzed data from the multi-center SEARCH for Diabetes in Youth Study, which has tracked children and young adults with diabetes from six centers across the country since 2002. The cohort included 185 adolescents and young adults with Type 1 diabetes who were enrolled in the study in the year after their diabetes was diagnosed. The youth included in these analyses were cared for by pediatric diabetes physicians at the time of their initial study visit and were followed for an average of 4.5 years. 
     
    The authors found that a young patient’s type of insurance — public versus private insurance, for instance — made no difference in the switch to adult care, but they did observe that older patient age, lower levels of parental education and lower baseline blood-glucose levels were independently associated with increased odds of transitioning to adult care. 
     
    “One surprise was that those patients with poor diabetes control were more likely to stay with their pediatric providers, compared to others,” Lotstein said. “We theorized that the doctors have a higher level of concern for those patients with poor control and may care for them longer in an attempt to prevent their condition from worsening.”
     
    The next stage in the research, the authors said, is to directly follow young adults transitioning to adult care to see what happens as they age and to examine how different types of support aimed at easing the transition affect health outcomes.  
     
    The SEARCH for Diabetes in Youth Study is funded by the Centers for Disease Control and Prevention and the National Institutes of Health’s National Institute of Diabetes and Digestive and Kidney Diseases.
     
    Additional study authors included Michael Seid (Cincinnati Children’s Hospital Medical Center); Dr. Georgeanna Klingensmith (University of Colorado Denver School of Medicine); Doug Case (Wake Forest University School of Medicine); Jean M. Lawrence (Kaiser Permanente Southern California); Dr. Catherine Pihoker (University of Washington); Dr. Dana Dabelea (Colorado School of Public Health); Elizabeth J. Mayer-Davis (University of North Carolina); Dr. Lisa K. Gilliam (Kaiser Permanente Northern California); Dr. Sarah Corathers (Cincinnati Children’s Hospital Medical Center); Dr. Giuseppina Imperatore (Centers for Disease Control and Prevention); Dr. Lawrence Dolan (Cincinnati Children’s Hospital Medical Center); Andrea Anderson (Wake Forest University School of Medicine); Ronny A. Bell (Wake Forest University School of Medicine); and Beth Waitzfelder (Center for Health Research, Kaiser Permanente, Hawaii).
     
    The authors have no financial ties to disclose.
     