Author: Serkadis

  • Search and Remove Duplicate Files Quickly and Free

    With today’s tendency to hoard files on the hard disk, it should come as no surprise that the same file may be located in more than one place, chipping away at the free space you would otherwise use for storing more relevant data.

    Although there are plenty of utilities that can clean up the computer of unnecessary files and thus reclaim free space, only some of the…

  • What The Industry Thinks About Google Reader’s Demise

    Google recently dropped the bombshell that it is closing down Google Reader, much to the chagrin of its loyal user base. I’ve done my share of ranting about it, and discussed why some businesses may want to think more seriously about their email strategies. We’ve since reached out to a handful of prominent bloggers and industry professionals for some additional perspectives on what the closing of Google Reader means for blogs and publishers.

    “I think it’s net positive that Google is shutting down its reader,” Automattic/WordPress founder Matt Mullenweg tells WebProNews. “It encourages people [to try] the great new experiences that have been developed over the past few years, including the WordPress.com reader.”

    According to Mullenweg, the open source WordPress software is used by 16% of the web.

    And trying new experiences we are. Feedly, for one, is getting a great deal of attention since Google’s announcement. Two days later, Feedly announced it had already seen 500,000 new users coming from Google Reader. At times they’ve had trouble keeping up with the demand.

    “I think that Google Reader is a standalone technology and not indicative of whether the world will shift away from RSS,” says Human Business Works CEO and all around popular social media guy Chris Brogan. “The notion that social networks and human sharing has replaced RSS is like saying that fireplaces have replaced central heating. Quaint, but not effective.”

    Not everyone quite agrees with that sentiment, however.

    Jeremy Schoemaker, author of the popular ShoeMoney blog, says he has about 70,000 RSS readers but that the amount of traffic from them has dropped significantly.

    “For me Social Media has become the new RSS,” he says. “I use a free service called Twitter Feed, which automatically posts my new posts to Twitter and my Facebook personal and fan page. I see far more traffic from that than any news reader. I hadn’t thought of it until now, but I haven’t logged into my Google Reader account for years. I don’t ever think RSS will die, but it will be used more as an API-like tool to interact with websites than a reader.”

    Long-time blogger and EVP/Global Strategy and Insights for Edelman, Steve Rubel, tells us, “The majority of large sites won’t see an impact. Most of their traffic now comes from Twitter and Facebook. In addition Google (search) is a large source of traffic. The smaller sites, however, will be impacted. Their more dedicated readers are using Google Reader. These sites will need to gravitate to other forms of distribution such as email newsletters and other vehicles.”

    “It’s hard to say,” says Search Engine Land and Daggle blogger Danny Sullivan about the impact of Reader’s demise. “Technically, all those readers can easily continue to be readers by taking their feeds elsewhere. In practice, some might not make the effort. I expect that some blogs that see traffic from RSS are about to take a hit, though it might not be anywhere near as bad as they fear. We have, of course, been through this before after the decline of Bloglines.”

    TopRank Online Marketing CEO Lee Odden tells us, “It’s a disappointment and a little puzzling that Google would shut down reader. What’s next, FeedBurner? Probably. Google is a data-driven company, so clearly they have their reasons. The cost must now outweigh the goodwill created by offering a free and useful service like reader. Still, I have to wonder if there isn’t useful usage data with reader that Google could use?”

    “For content marketers, the main consideration is the impact on reach of content,” he adds. “If a substantial portion of a blog’s readers are using Google Reader, it’s a big deal. The blog would do well to point those readers to another service like Feedly.”

    Zee Kane, CEO of The Next Web, says, “I think older blogs, perhaps primarily ‘tech blogs’, might experience a degree of negative impact as many (ourselves included) have hundreds of thousands of RSS subscribers. Many of our readers are early adopters and geeks who consume (technology) news as though their life depended on it, and Google Reader is/was an undoubtedly brilliant way of doing so. With Google Reader disappearing, we’ll see an even heavier focus on social as a means to distribute stories, as a way to rank stories and as a means to increase readership.”

    This is, of course, a small sampling of industry opinion, but it’s interesting to hear people’s different takes on the effects. Really, we won’t know what impact it truly has until Google Reader is finally gone. In the meantime, other services will pop up, and existing alternatives will strive to improve and outdo their peers.

    For a while there, it was starting to look like Google was really pushing for an end to the RSS format, as even its RSS Subscriptions Chrome extension disappeared from the Chrome Web Store. Thankfully, that was said to be a mistake, and it came back. Meanwhile, Google is phasing out links to Google Reader from its other properties. We’re still waiting to find out if Google will keep the RSS option alive in Google Alerts, which seems to be suffering its own neglect from the company. Interestingly, Google is giving advice on how to build news readers for Android.

    RSS.com is currently on sale with a $200 million asking price.

  • New low-end iPhone expected to cost half as much as iPhone 5, margins seen at 38%

    Apple (AAPL) is expected to switch things up this year and launch not one but two new iPhone models. Industry watchers believe one will be an incremental update to the current iPhone 5, dubbed “iPhone 5S,” that will feature the same design with an updated camera and processor. The second will reportedly be a brand new low-end iPhone that may help Apple gain share in emerging markets. Regarding the latter, Credit Suisse analysts said in a recent research note picked up by ValueWalk that the new cheaper iPhone will likely achieve an average selling price of $329. Apple’s current iPhone 5 starts at $649, so it’s understandable that investors are concerned over crunched margins. According to Credit Suisse’s analysis, however, Apple should be able to pull off 38% gross margins with the entry-level iPhone, suggesting the new handset might not be as much of a burden on Apple’s sliding margins as anticipated.
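
    As a rough sanity check on those figures (back-of-the-envelope arithmetic only, not Credit Suisse’s actual model), a 38% gross margin on a $329 average selling price implies a per-unit cost of goods of roughly $204:

        # Margin arithmetic implied by the Credit Suisse figures cited above.
        # The $329 ASP and 38% gross margin come from the article; the per-unit
        # cost is just what those two numbers imply, not a reported figure.
        asp = 329.00          # estimated average selling price ($)
        gross_margin = 0.38   # estimated gross margin

        cost_of_goods = asp * (1 - gross_margin)
        gross_profit = asp - cost_of_goods

        print(f"Implied cost of goods per unit: ${cost_of_goods:.2f}")  # ~$203.98
        print(f"Implied gross profit per unit:  ${gross_profit:.2f}")   # ~$125.02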

  • Channel Intelligence Execs On Why This Was The Right Acquisition For Google

    Last month, Google signed an agreement to acquire Channel Intelligence to improve Google Shopping. At the time, a Google spokesperson told WebProNews, “We want to help consumers save time and money by improving the online shopping experience. We think Channel Intelligence will help create a better shopping experience for users and help merchants increase sales across the web.”

    Earlier this month, the deal was finalized. We had a Q&A with Channel Intelligence CEO Doug Alexander and co-founder Rob Wight about why this was a good pick up by Google, and how search-based ecommerce is evolving.

    Neither will be working for Google. Wight will continue the role he began two years ago, as founder and CEO of myList, and Doug Alexander will support him in this role while he continues as President of ICG, the company Google bought CI from. myList was spun off from Channel Intelligence in 2012, and it has operated, and will continue to operate, as a separate entity.

    “[Brand visibility in ecommerce] is important for the same reason that it’s important to be on a shelf in a store,” Alexander tells WebProNews. “It is how you get into the consumer’s decision set when they are ready to purchase. And just like in a store, brands that do a better job of merchandising and a better job of giving the customer the complete and accurate information they need to make a purchase (correct pricing, sizes, availability, etc.) will be the brands that customers choose to buy.”

    “[Google’s new paid inclusion model for Google Shopping] gives Google an opportunity to create better shopping experiences for consumers,” he says. “Now that it is paid, retailers will focus on presenting the very best offering to win the consumer’s attention. When it was free, it was easier for retailers to treat this channel more casually.”

    “For over ten years, CI has focused on making it easy for consumers to find and buy products online, whether the buying process initiates on a brand’s website or on a shopping platform,” he says, on why this was the right acquisition for Google. “Our expertise with product data optimization and our deep relationships with retailers, manufacturers, publishers and agencies makes CI a natural fit with Google’s ongoing innovation to create outstanding consumer shopping experiences.”

    “It should help all businesses, regardless of size, be more successful in reaching consumers with their products within Google Product Search,” he adds. “What will it mean for consumers? It should mean that consumers will have an even easier time finding, researching, and buying products online.”

    Wight thinks search-based ecommerce is evolving into a more social media based experience.

    “One of the places people naturally go when they’re thinking of making a purchase is to their friends,” he says. “You’ve seen the studies – Nielsen reported a year ago that 92% of people say they trust recommendations from people they know … which is well over the 47% who said they trust ads on TV or in magazines. Social media enables this discovery process to happen online, and people are already trading information – lots of information – there about the products and services they trust. It’s just still in a really fractured kind of way, with one-off conversations and, generally speaking, incomplete information, which is inefficient. So it’s not a matter, necessarily, of people changing anything they’re doing on Google. It’s more a matter of making what they’re already doing in offline conversations and within social media a lot more effective, and hopefully a lot more fun.”

    “A friend of mine posted a really cool GPS tracking watch the other night,” says Wight. “He was raving about it. I was really intrigued, and pretty sure I wanted one too, but I didn’t want to leave where I was to go search for the price, colors, retailers, and all of that, so I just kept scrolling. If the picture my friend had shared had included the important product info, had shown me that five of my other friends already had the watch, and had a simple link to buy it, I’m pretty sure I’d have made the order right there.”

    “There are a lot of ‘disembodied heads’ of products floating around in social media,” he adds. “Just pictures, with a comment or a like. They’re just begging to be gathered up into one complete representation of the product, along with the rest of the kind of merchandising they’d get in a store, or that the brand would put on their product page on the brand site.”

    Wight also sees another opportunity in social for brands to be able to act with more knowledge.

    “This is good for consumers, for the brands, and for the social platform,” he says. “Ads are spam when they’re irrelevant, but [when] they’re relevant, they’re welcome. Google has done a great job with this: when I search for, say, a wetsuit, Google shows me ads for wetsuits. Helpful. I love that. I’ve told Facebook that I like triathlons, but up until now, there hasn’t been a way for me to signal when I’m in the market for a triathlon bike. If I’m talking about them, posting pictures of them to friends, pulling together a list of ones I might want to buy, that’s hugely useful for the brand to know, right? What brand wouldn’t advertise to me on Facebook if they knew I was actively interested? And as a consumer, these advertisements would be welcome, not an intrusion, because they’d be relevant … meaning I’d value the ads more, the platform more … a win for everyone.”

    Google acquired CI for $125 million.

  • Apple now powering its cloud with solar panels, fuel cells (photos)

    Apple has turned on the first halves of both its massive solar panel farm and adjacent fuel cell farm, and is using the systems to provide power for its $1 billion, 500,000 square-foot data center in Maiden, North Carolina. The clean power projects are some of the largest non-utility owned systems in the world, and they’re part of Apple’s plan to use 100 percent clean power for its data centers. Apple revealed the information in a new environmental report on Thursday.

    Bloom Energy

    The Maiden data center was built partly to power its iCloud services and is part of Apple’s overall plan to increasingly provide music, media and other applications from its cloud computing infrastructure. So think about it this way — when you back up your new Rihanna album, the servers that are powering it could (probably indirectly) be using fuel cell or solar power.

    The fuel cell farm (pictured above) was developed by Silicon Valley startup Bloom Energy, and the live system is currently at 4.8 MW. Apple has decided to double the size of the fuel cell farm and will be installing more fuel cells shortly to make the farm 10 MW. An Apple spokesperson tells me that those new fuel cells will be live very soon.

    Apple Solar Farm

    Apple’s live 20 MW solar farm uses solar panels from San Jose’s SunPower. Apple has also decided to double the size of the solar panel farm and is building another 20 MW solar panel farm next to this current one. Apple says in its new environmental report that its second solar farm will be completed by the end of the calendar year, and when that is finished and live Apple will have an installed annual capacity of 167 million kWh from local Apple-owned clean power projects.
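
    For a sense of scale, that 167 million kWh figure is roughly consistent with the planned 40 MW of solar plus 10 MW of fuel cells. The sketch below uses generic capacity factors that are assumptions on our part, not numbers from Apple’s report:

        # Sanity check on the ~167 million kWh/year figure (illustrative only).
        HOURS_PER_YEAR = 8760

        solar_mw = 20 + 20       # existing 20 MW farm plus the planned second 20 MW farm
        fuel_cell_mw = 10        # fuel cell farm after the planned doubling

        solar_cf = 0.22          # assumed solar capacity factor (sunlight-limited)
        fuel_cell_cf = 0.95      # assumed fuel cell capacity factor (runs near-continuously)

        annual_mwh = (solar_mw * solar_cf + fuel_cell_mw * fuel_cell_cf) * HOURS_PER_YEAR
        print(f"Estimated annual output: {annual_mwh / 1000:.0f} million kWh")  # ~160 million kWh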


    Apple is producing enough clean power at the Maiden facility to provide 60 percent of the total energy for the data center. Remember it’s a big facility and servers are energy hogs (even if Apple’s facility is very energy efficient). Apple gets the rest of the power from the local grid, where Duke is the utility. While North Carolina largely gets its power from coal and nuclear, Apple is buying renewable energy credits to offset the rest of the grid-power used.

    Apple is using this combination of direct clean power generation, buying clean power from providers and buying renewable energy credits to reach 100 percent clean power for all of its data centers. Apple is building large data centers in Prineville, Oregon and Reno, Nevada and is using this approach in those locations, too.

    Apple says in its new report that across all of its buildings — both data centers and corporate offices — 75 percent of its energy usage comes from clean power. That’s up from 35 percent in 2010.

    Clean energy generation on the scale that Apple is doing for its data center is new, so the infrastructure is a mix of some complex and sometimes indirect methods. State incentives and laws are also different in each state, so that changes the strategy per state. For example, in California Apple can buy clean power through a program called Direct Access, but in North Carolina Apple decided building actual power plants was a better option.

    In Maiden, Apple is selling the power from the fuel cells to the local utility, so it won’t necessarily be using the fuel cell energy onsite to power the data center. The fuel cells will use biogas, and Apple can earn money by selling the power and associated Renewable Energy Credits to Duke. The biogas used will come from landfills, and will be “directed biogas” which means it will be injected into natural gas pipelines and not used directly in the actual fuel cells at the site.


  • Google Axes Frommer’s Print Travel Guides [Report]

    According to a report from travel news site Skift, Google has quietly killed Frommer’s print travel guidebooks.

    News that Google acquired Frommer’s came out back in August. Google would be buying the brand from John Wiley & Sons. Terms of the deal were not disclosed, but according to All Things D, the deal wasn’t “a huge one”. Frommer’s would be incorporated into Zagat.

    Google said at the time, “The Frommer’s team and the quality and scope of their content will be a great addition to the Zagat team. We can’t wait to start working with them on our goal to provide a review for every relevant place in the world.”

    As TechCrunch’s Sarah Perez noted, “No definitive decision has been made on the Frommer’s printed guides, but the deal is supposed to enable users [to] discover reviews across Google, which means online.”

    Skift’s Jason Clampet reports today:

    The last two Frommer’s books to roll off the presses were guides in the all-color Day-by-Day series devoted to Napa and Sonoma and Banff and the Rockies, and went on sale in early February. The last book in the traditional complete guide series was Frommer’s Florida in late December.

    Starting with Frommer’s New York City With Kids, which can still be found on Amazon, Barnes & Noble, and in other bookstore inventories and was supposed to publish on February 19, the entire future list of Frommer’s titles will not see the light of day. Many of the authors attached to these 29 titles told Skift that they were informed by editors now working at Google that the books would not publish. Some authors were told that the books would merely be delayed before new contracts were signed. None of the authors contacted reported that their titles would appear in print.

    Greg Sterling speculates that Google will ultimately kill the Frommer’s brand, suggesting that it can’t survive without the print guidebooks, though the Frommer’s site is so far still alive and well.

    We’ve reached out to Google for comment, and will update if we hear back.

  • I Hope Google’s Not Killing Alerts Too

    Google Alerts have been messed up for a while, and it doesn’t appear that Google is doing much about it. If they are, they’re being very quiet about it, even though others have been vocal about the issues.

    Danny Sullivan at Search Engine Land wrote about the issue over a month ago, saying that over the prior several weeks it had become “nearly useless”. The main problem is that the alerts aren’t returning all the links they should be returning, particularly for those who opted to receive “everything” on the keywords they selected. Sullivan says Google told him at the time that things had improved shortly after he wrote the story. They still aren’t better now.

    The issue is in the spotlight again today, as TheFinancialBrand.com vented about it in a post that got picked up by tech news link highlighter Techmeme.

    Google recently announced it would be shutting down Google Reader, as you probably know. For a short time, Google’s RSS Subscription Chrome extension was missing too, though it has since come back. Some were questioning whether or not Google would continue to support the RSS option for Alerts if the company is distancing itself from RSS, though the return of the Chrome extension is a good sign. Another good sign is that Google is actively encouraging others to build news readers.

    The lack of action on Google Alerts isn’t a great sign, though. I remember similar neglect of Reader, and Google ended up shutting that down. Hopefully Google Alerts won’t soon be on the “spring cleaning” list too.

    We’ve reached out to Google for comment, and will update if we receive one.

  • Squeezing More Efficiency Out of Microsoft’s Cloud


    The new data halls in Microsoft’s Dublin data center feature white cabinets and narrower hot aisle containment systems. Rows of cabinets are nestled into containment enclosures, the structures with the green end doors. Newly-arrived cabinets are in place and waiting for the remainder of the row to be filled and then enclosed. The white cabinets are a new feature, reflecting available light and allowing Microsoft to use less energy on overhead lighting. (Photo: Microsoft)

    DUBLIN, Ireland – After you’ve built one of the most efficient data centers on earth, how do you make it even better? One refinement at a time, as  Microsoft has found in its data center in Dublin, the primary European hub that powers the company’s online services throughout the region.

    When the $500 million Dublin facility came online in 2009, it was an early example of a data center operating with no chillers, relying almost totally upon fresh air to cool thousands of servers in the 550,000 square foot facility, which powers the company’s suite of online services for tens of millions of users in Europe, the Middle East and Africa.

    Early last year Microsoft added a $130 million expansion that nearly doubled the capacity of the data center. The expansion allowed Microsoft to implement several new tweaks to its design that have allowed it to more than double the compute density of each server hall while using less power.

    Along the way, Microsoft has also improved the facility’s energy efficiency, lowering the Power Usage Effectiveness (PUE) from 1.24 in the first phase to 1.17 in the newest data hall. The PUE metric compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion. The average PUE for enterprise data centers is about 1.8.
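
    Put concretely, PUE is total facility power divided by IT power, so the overhead per watt of IT load is simply PUE minus one. A quick illustration of what the figures above mean (our arithmetic, not Microsoft’s internal methodology):

        # PUE = total facility power / IT equipment power.
        # For every watt delivered to servers, (PUE - 1) watts go to cooling,
        # power conversion, lighting and other overhead.
        def overhead_per_it_watt(pue: float) -> float:
            return pue - 1.0

        for label, pue in [("Dublin phase 1", 1.24),
                           ("Dublin newest hall", 1.17),
                           ("typical enterprise", 1.80)]:
            print(f"{label}: PUE {pue:.2f} -> {overhead_per_it_watt(pue):.2f} W overhead per IT watt")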

    Data-Driven Refinement: The Next Phase of Efficiency

    Squeezing more efficiency and density out of bleeding-edge facilities is the next phase in the data center arms race. It’s a process that other leading players will be undertaking as they seek to get more mileage out of new server farms that came online in the huge construction boom from 2007 to 2010.

    “We’re all moving towards constant evolution and improvement,” said David Gauthier,  Director of Data Center Architecture and Design at Microsoft, who helped design and launch the Dublin facility in 2009.

    One key to improvement is relentless review of data from the early operations of new data centers, according to Gauthier. As Microsoft studied the operating data it collected, he says, it found that it could be more aggressive in its use of free cooling.

    “We were being conservative at first, because it was new and we hadn’t done it before,” said Gauthier. Microsoft had installed a small number of DX (direct expansion) cooling units in the first phase to provide backup cooling if the temperature rose above 85 degrees. The climate in Dublin, which has ideal temperature and humidity ranges for data center operations, never tested those levels.  The DX units were retired, making additional power available, which was used to install more servers and cabinets in the data halls.

    In place of the DX units, Microsoft added a less energy-intensive backup system to address “just in case” scenarios of unusually warm weather.  It used adiabatic cooling, in which warm outside air enters the enclosure and passes through a layer of media, which is dampened by a small flow of water. The air is cooled as it passes through the wet media.

    But Microsoft has now shelved the adiabatic systems in its most recent data halls, as Dublin’s weather simply doesn’t require it. “The climate in Dublin is awesome,” said Gauthier.

    Greater Density, Same Power Footprint

    Inside the data center, Microsoft is using more powerful and efficient servers, and configuring data halls to house more cabinets and servers. Each row of cabinets is housed in a “server pod” featuring a hot aisle containment system, with the cabinets housed in a fitted opening in the side of a fixed enclosure.

    Microsoft designed the contained hot aisles so they could easily use cabinets of different heights, with one enclosure fitted with some 40U cabinets and some 48U cabinets, for example. This allows the company flexibility if it opts to use different server vendors. It has also narrowed the hot aisles themselves, which frees up more space for servers in each data hall.

    These refinements, along with advances in processor power and efficiency, have helped boost Microsoft’s server density within the same power footprint.

    Other recent refinements include the installation of energy-saving LED lights tied to motion sensors, meaning Microsoft uses less energy to power its lights, and only uses them when staff are present in a room. It has also adopted white cabinets, which can save energy since the white surfaces reflect more light, helping illuminate the server room with less intense lighting.

    The focus on energy savings extends to the backup power systems. Microsoft uses short-duration UPS units, which provide about one minute of runtime during a utility outage before shifting load to the building generators. This approach allows Microsoft to forgo a huge battery room in favor of a smaller enclosure within its power room. Rather than cooling the entire power room to protect battery life, only the enclosure is air conditioned, using just enough energy to cool a small space instead of the entire room.

    Microsoft is not alone in the effort to pursue energy gains in company-built facilities. Google recently “gutted” the electrical infrastructure of its data centers in The Dalles, Oregon to upgrade it for more powerful servers. The facility in The Dalles was built in 2006.

  • If the future of BI is Hadoop, SQL and the cloud are the glue

    Starting with the well-known quote — “A good way to predict the future is to invent it” – Ravi Murthy, engineering manager at Facebook, kicked off an interesting panel discussion at GigaOM Structure:Data 2013 Thursday with four industry experts on business intelligence (BI) and Hadoop. Hadoop has a big place in that future, but not by itself. The conclusion? Applications and SQL databases built atop Hadoop are needed for better BI, noted the panel.

    “Why are so many systems being built in the BI landscape? If Hadoop can deliver the promise, why have all these other solutions?” asked Murthy.

    Ashish Thusoo, co-founder and CEO at Qubole, said that putting SQL on top of Hadoop just makes sense. “As a system, Hadoop is not a low-latency system, opening the need for faster SQL-based systems to query the data. And there’s probably only space for a half-dozen of these solutions in the market; not dozens.”

    Agreeing with Thusoo was Tomer Shiran, director of product management at MapR Technologies. “With our open source Apache Drill we’re enabling lots of differing BI use cases, allowing companies to do different things with Hadoop. One use case is the ability to interactively query and explore data.” Apache Drill is an interactive, low-latency SQL way to get at the data reservoir in Hadoop. Ben Werther, founder and CEO of Platfora, completely agreed, saying that customers are looking for much more agile approaches to data exploration that don’t create more IT work.
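
    The workload the panelists describe is essentially an analyst firing ad-hoc aggregate SQL at a large data reservoir and expecting an answer in seconds. A minimal sketch of that pattern follows; the table, columns and the in-memory SQLite engine are illustrative stand-ins, where in practice the same query would go to a SQL-on-Hadoop layer such as Hive or Apache Drill:

        # Illustrative only: SQLite stands in for a SQL-on-Hadoop engine, and the
        # table and columns are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE page_views (region TEXT, views INTEGER);
            INSERT INTO page_views VALUES ('us', 120), ('us', 80), ('eu', 60), ('apac', 40);
        """)

        # The kind of interactive, ad-hoc aggregate an analyst would run against
        # the data reservoir the panel describes.
        query = """
            SELECT region, SUM(views) AS total_views
            FROM page_views
            GROUP BY region
            ORDER BY total_views DESC
        """
        for region, total in conn.execute(query):
            print(region, total)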

    But Hadoop is still an important underlying part of the puzzle. Justin Borgman, CEO of Hadapt, noted that “Hadoop scales so cost effectively; it’s a landfill where you can dump everything. That opens up new opportunities to explore that data, including indexing to boost performance and interactivity across a broader data set.”

    When asked for a use case of the benefits, Werther pointed out an unnamed customer. “They had 50 analysts working against SQL stores in a very siloed fashion. We moved them to a Hadoop-based stack and built a data reservoir. Only 5 of the 50 were able to be productive before. Within a week, all 50 became productive.”

    Of course, the cloud is also part of BI’s future, although it’s not without risks. Sure, running Hadoop in the cloud is very elastic so that you can use as many resources as you need in near real-time. But the issues of security and data gravity in particular are worth noting: Generating data in the cloud could make it tough to move out in the future and may require more apps built on this data to also be in the cloud.


  • BlackBerry says 100,000 BlackBerry 10 apps now available

    As BlackBerry (BBRY) continues to fight the bloody battle for No.3 in the global smartphone market, apps are becoming less of a problem for the company’s new platform. Sort of. BlackBerry announced on Thursday that the BlackBerry World app store is now home to more than 100,000 BlackBerry 10 apps. The news comes just seven weeks after the 70,000-app milestone was reached. BlackBerry listed a host of popular apps and games alongside the announcement, but a quick look around BlackBerry World shows that the company still has a ways to go — many top apps are nowhere to be found and BlackBerry 10 seems to be running into the same problem BlackBerry’s PlayBook suffered from, where most of the available apps are just filler. Hopefully now that the 100,000 threshold has been reached, BlackBerry can focus on quality over quantity. The company’s full press release follows below.
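
    For scale (simple arithmetic on the two milestones above, not a BlackBerry figure), going from 70,000 to 100,000 apps in seven weeks works out to a bit over 4,000 new BlackBerry 10 apps per week:

        # Growth rate implied by the 70,000 and 100,000 app milestones cited above.
        apps_now, apps_then = 100_000, 70_000
        weeks = 7

        per_week = (apps_now - apps_then) / weeks
        print(f"~{per_week:,.0f} new BlackBerry 10 apps per week")  # ~4,286 per week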


  • IBM rethinks the transistor to keep scaling compute power

    IBM has come up with a model for a new transistor, the device that’s at the heart of every chip and the foundation of our tech-heavy society. The potential IBM breakthrough is a new coating for the transistor that allows the device to read ionic signals as opposed to electric ones. This is important because it could help enable chipmakers to put more transistors on a chip.

    The tech giant hasn’t actually built a chip with the new transistor; instead, it has demonstrated a rough circuit. IBM expects the technology to leave the lab within the next five to seven years, and assuming that happens it can actually be produced using the same processes used today.

    Why we need a new transistor

    The benefit to this fundamental shift is that we can continue to make smaller chips with more processing power and keep to the schedule set by Moore’s Law. That “law” dictates that we double the number of transistors on a chip every 18 months (or two years). However, this doubling has pressed the chip industry to its limits — packing those transistors onto small chips is like cramming a bunch of angsty teenagers into a small space.
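
    To make that compounding concrete (simple arithmetic, not an IBM projection), a doubling every two years multiplies transistor counts roughly 32-fold over a decade, and a doubling every 18 months roughly 100-fold:

        # Transistor-count growth implied by a doubling every 18 or 24 months.
        # Purely illustrative; actual process roadmaps vary.
        def growth_factor(years: float, doubling_period_years: float) -> float:
            return 2 ** (years / doubling_period_years)

        for period in (1.5, 2.0):
            factor = growth_factor(10, period)
            print(f"Doubling every {period} years -> ~{factor:.0f}x more transistors in a decade")
        # Doubling every 1.5 years -> ~102x; every 2.0 years -> ~32x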

    One might argue that Moore’s Law doesn’t matter, but the ever decreasing cost of computing power is the reason Google is able to deliver its awesome search index for the pennies people pay in search advertising, or why Facebook can spend hundreds of millions on its infrastructure and still not charge you a thing.

    A big problem associated with smaller transistors (and more of them) on a chip is leakage — current leaks out of the transistors, wasting power as heat. This new coating and the use of ions as signals reduces leakage, which means chipmakers can continue placing more transistors on the chip and the cost of computing will continue going down.

    And that is a good thing.


  • Sabey Opens High-Rise Manhattan Data Tower


    Sabey Data Centers has retrofitted the Verizon building at 375 Pearl Street as Intergate.Manhattan, a data hub for the 21st century. (Photo: Sabey)

    Some New Yorkers who look upon the huge Verizon high-rise at 375 Pearl Street have trouble seeing past its foreboding stone facade. The team at Sabey Data Centers saw it as a blank canvas: an opportunity to remake 1 million square feet of Manhattan real estate as a high-tech data hub.

    “This has been an extremely exciting project,” said John Sasser, Vice President of Operations at Sabey Data Centers. “There aren’t many opportunities to go into a 32-story building and remake it as a purpose-built data center.”

    On Wednesday Sabey opened the doors on the new Intergate.Manhattan, having completed an extensive retrofit and commissioning process. Sabey, a Seattle-based developer, outfitted the property with all new core infrastructure and upgraded the power capacity from 18 megawatts to 40 megawatts.

    The building was developed in 1975 as a Verizon telecom switching hub and later served as a back office facility. Verizon continues to occupy three floors, which it owns as a condominium. The property was purchased in 2007 by Taconic, which later abandoned its redevelopment plans. Sabey and partner Young Woo acquired the building in 2011.

    Plenty of Challenges

    Sabey saw an opportunity at 375 Pearl, but there were many challenges as well, according to Leonard Ruff, a principal with the design firm Callison, which has partnered with Sabey on several of its data center projects. Ruff and Sasser shared details of the retrofit project earlier this month at the DatacenterDynamics Converged conference in New York.

    One issue was more than 35 years of undocumented changes in the building’s mechanical and electrical systems, according to Ruff. The vertical chases were highly congested with conduit, wiring and piping, and the tight site footprint didn’t allow much space for storing construction supplies and equipment. There was also the presence of Verizon and “severe penalties” should the construction process interrupt the telco’s operations, which continued apace on floors eight through 10.

    The building provided an opportunity to re-think data center operations in a vertical layout. “We can take that 40 megawatts and spread it across the building in whatever way makes sense,” said Ruff.

    The 13.2 kV electrical service enters the building at substations on the second and third floors. The fourth and fifth floors house the UPS infrastructure (double-conversion systems with efficiency of up to 97 percent), while diesel backup generators are housed on floors two, three, four and 31 and vent their exhaust through the roof.

    The initial phases of data center technical space are being deployed on four floors – 6, 7, 11 and 12. Each floor has generous vertical space, with ceiling clear heights of between 14 and 23 feet. Sabey will offer hot aisle containment for its customers, with a water-side economization system supported by five cooling towers on the roof. The roof can accommodate up to 16 cooling towers if needed as Sabey expands its data center operations to additional floors within the building.

    Diesel fuel will be stored in the basement, which has a larger footprint than the high-rise section of the building, allowing Sabey to store 155,000 gallons of fuel at present, with the ability to add another 100,000 gallons as it expands. Sasser said the basement level at 375 Pearl remains more than a dozen feet above the high water mark seen during Superstorm Sandy, but said Sabey is taking no chances and equipping the fuel depot with submersible pumps.

    Ceremony Marks Building’s Opening

    On Wednesday, the building was opened to New York media for a ceremony with city officials. That included Mayor Michael Bloomberg, who used most of his podium time to speak about anti-crime measures to address political news developments in New York.

    Company president Dave Sabey took it in stride, noting that the city’s progress on crime helped construction workers and staff feel safe during the development process. “This is a big day for Sabey, and for New York City,” said Sabey.

    Sabey now operates 3 million square feet of data center space as part of a larger 5.3 million square foot portfolio of owned and managed commercial real estate. The company has developed a national fiber network to connect its East Coast operations with its campuses in Washington state, where it is the largest provider of hydro-powered facilities.

    Sabey’s data center properties include the huge Intergate.East and Intergate.West developments in the Seattle suburb of Tukwila, the Intergate.Columbia project in Wenatchee and Intergate.Quincy.

    The opening of Intergate.Manhattan comes amid an eventful period for the New York City data center market. After buying 111 8th Avenue, Google has discontinued efforts to lease vacant space at the historic New York telco hub, apparently intending to dedicate the remainder of the building for use as Google office space. At 60 Hudson Street, newcomer DataGryd is marketing new space.  Meanwhile, two buildings have recently added new data center space. Data Center NYC Group has recently opened space at 121 Varick Street, while Telehouse has acquired data center space at 85 10th Avenue.

    All this comes against the backdrop of Superstorm Sandy, which is prompting a variety of responses in data center operations and real  estate as companies assess the storm and its implications for disaster recovery. While some companies are now wary of New York, others are seeking new space outside of the financial district, which experienced the brunt of the flooding from Sandy’s storm surge. Those companies represent some of the most promising prospects for new space in other areas of Manhattan, including the Sabey building.

  • Happy Birthday! Twitter turns 7

    I’ve been on Twitter so long, I forgot just how short a time that really is — or how much has changed since March 21, 2006. The service claims 200 million active users tweeting 400 million times a day. But the real measure is much larger — how Twitter, and other innovations arriving around the same time, fundamentally changed billions of lives five to seven years later.

    The service’s editorial director, Karen Wickre, calls Twitter a “global town square”, which is an appropriate description. People gather to look, listen, gossip, grab news or listen to the town crier. I’ve often grumbled about the 140-character limitation, but brevity has benefits. Statements are succinct. No one talks on and on and on without interruption. If anything, butting in defines Twitter interaction. You will be heard whether or not anyone wants you to be.

    “The percentage of internet users who are on Twitter has doubled since November 2010, currently standing at 16 percent”, according to Pew Internet. Eighteen to 29 year-olds (27 percent), African Americans (26 percent) and urbanites (20 percent) are most-likely to use the social network. This pales in comparison to Facebook — 67 percent of U.S. Internet users and 86 percent of 18-29 year-olds.

    Twitter’s start marks a golden age of innovation that transforms the lives of a large chunk of the earth’s population. Facebook and Twitter opened to the public the same year — 2006. YouTube did the same at the end of November 2005, but Google’s acquisition 11 months later took the service everywhere. Then in June 2007, Apple released iPhone, which forever changed mobile devices. All the while, Google pushed forward development of Android and Chrome, which came to market in late 2008. Everything today is different because of these products — and others surrounding them.

    I remember how hard watching video online was before YouTube. Now every major TV network streams programs, while services like Amazon, Hulu and Netflix develop compelling, original programs for streaming not traditional broadcast. YouTube is the global pulse of original video content, with the service now claiming 1 billion unique visitors a month. Facebook’s 1 billion number is more — unique users. While Twitter’s reach isn’t as wide, its impact, particularly as a tool used with the others, cannot be overstated.

    The first real sense of what these tools could mean for connecting people, getting out information and even how it’s reported the world over came in summer 2009 on the streets of Tehran. The best reporting on the Iranian protests wasn’t from CNN or many news organizations but Flickr, Twitpic, Twitter and YouTube. Tweets, images and videos poured out in real time. Where did CNN get some of its best material? Citizen journalists, like this story and images from CNN’s citizen-driven iReport.

    The process repeated during the Arab Spring — protests erupting across the Middle East during early 2011 — and in many events since. Where once white men in suits controlled editorial content appearing on TV or in newspapers, these tools empower you. Twitter often is where major news breaks first. Much of this reporting comes via mobile devices. Where once TV stations sent out film crews, you can capture the moment with photos, videos or Tweets sent from your phone. Name a major event since 2009 in which Twitter played no part. There are none.

    Something else: Twitter, along with Facebook, YouTube and other social sharing services, is a valuable forensic tool — for historians, journalists or law enforcement trying to reconstruct events. Imagine if these tools had existed on Sept. 11, 2001. What if people trapped above the raging fires where the planes impacted the Twin Towers could have posted last-minute photos (via a service like TwitPic) or videos (to YouTube) or messages to friends and family (Tweets and Facebook Wall)? Forensic investigators could have used the videos to better reconstruct what happened to the towers and when, as they sought to understand what brought the towers down and what changes should be applied to future buildings.

    Except for YouTube, none of these tools were available to the masses before March 21, 2006. Happy birthday, Twitter. We can’t remember life without you.

  • Bing Expands Its Knowledge Graph-Like ‘Snapshots’

    Last year, Bing launched its “Snapshots” feature, which shows direct answers for searches on the results page. The feature was updated in December for people and places, and it looks a great deal like Google’s Knowledge Graph. Today, Bing announced that it has expanded the offering to include more.

    “For Bing, search is about more than blue links,” a spokesperson for Bing tells WebProNews. “It’s also about understanding the entities – people, places, things – around us and the relationship between them. With people and places being two of the most common searches on Bing, we want to ensure that whether you’re looking for the details on your new coworker, the official Facebook page of your favorite celeb, or who won best actress in 2010, Bing gives you the answers you’re looking for quickly and in one place.”


    Bing’s Richard Qian discusses the changes on the Bing blog. “The underlying technology for Snapshot is designed to develop deep understanding of the world around us not only as a collection of entities (people, places and things) but also the relationships between those entities,” he writes. “Inside the Bing engineering team, we call this technology Satori, which means understanding in Japanese. Over time, Satori will continue growing to encompass billions of entities and relationships, providing searchers with a more useful model of the digital and physical world.”

    “Today, we are inviting people to check out Snapshot to experience our expansion of Satori,” he continues. “Since its introduction in June, we have expanded Satori to include a significantly larger number of entities from more domains with a deeper level of understanding about them. They include people, places, and things which are among the most common searches on Bing. So, whether you’re searching for answers about a celebrity, co-worker, animal, geographic location, or man-made structure, Bing helps you understand the world around you by providing at-a-glance answers about the people, places and things you care about.”

    Bing’s offering sprinkles in social information from Twitter, Facebook, Klout, and LinkedIn for people. Qian shows examples for “leopard, Abraham Lincoln, Lawrence Ripsher, John Kerry, and Mount Everest.”

  • Getting beyond the cult of big data

    Basho CTO Justin Sheehy, whose company supports the Riak database, wants people to get beyond thinking that they can implement some kind of “big data strategy” in order to drive success. In a rapid-fire talk at Structure:Data 2013 in New York, he explained the concept of cargo culting and how organizations today seem to rely on that as opposed to really figuring out what they want data to do.

    Cargo culting is derived from the behavior of people in the Pacific Islands during World War II who watched U.S. airmen drop cargo from planes on the islands. They would see someone walk out into a field, wave some batons in the air, and then boxes of clothing and food would fall on the runway. Sheehy said that to this day on certain islands, someone will walk out onto old airfields and wave sticks in the air in hopes that some food and clothing might fall on them.

    He used that analogy to argue that people in business are behaving the same way around big data. They don’t have a real strategy or even goals, but are just hoping to copy the technologies that others are using to wrangle their data. He ended with a bit of a diatribe:

    “Strategy is not about how you do things. Strategy is about why you do things, and why you do things is about you. So if you stop asking about ‘How is this better than Hadoop,’… and instead you start asking questions about why your business should take a specific course of action or not, then you have a chance to stop being a cargo cultist and start being a strategist.”


  • CalPERS CIO: Cleantech has been a noble way to lose money

    The country’s largest pension fund, the California Public Employees’ Retirement System, has lost a considerable amount of money investing in clean technology. CalPERS had close to a 10 percent negative return (9.8 percent) on the roughly $900 million it has put into the cleantech sector, which includes $460 million put into cleantech venture funds, said CalPERS CIO (chief investment officer) Joseph Dear at the Wall Street Journal’s Eco:nomics conference on Wednesday night.

    Over the past five years, CalPERS has been one of the largest sponsors of cleantech venture-backed innovation. Dear said CalPERS received one tenth of the capital back from its cleantech fund, and has now dialed back on both its investments in venture capital and in cleantech VC funds. CalPERS has also tried to be more careful with its partner selection, he said.

    “Just because it’s a good idea doesn’t make it a good investment … This has been a noble way to lose money.”

    Across the CalPERS portfolio, Dear said, he has to make a 7.5 percent a year return.

    The indictment is the most definitive one I’ve heard to date confirming that, for the most part, the experiment of venture capitalists investing in cleantech didn’t work. A few funds have done fine and are launching new cleantech funds, such as The Westly Group. Khosla Ventures, Braemar Energy Ventures, DBL Investors and Lux Capital are also still investing in cleantech. But the list continues to shrink.

    VantagePoint Capital Partners CEO Alan Salzman said during the same panel that corporations are stepping in to pick up some of the cleantech investments. VantagePoint recently stopped raising a planned billion dollar cleantech fund due to lack of interest from limited partners.

    But according to the Cleantech Group’s figures, corporate investing also has been dropping every quarter throughout 2012. Corporate investors put $31 million into cleantech startups in the fourth quarter of 2011, and that dropped to $26 million in Q1 2012, $24 million in Q2 2012, $20 million in Q3 2012, and $18 million in Q4 2012.


  • At the Optical Transport Conference, a 100G Party

    At the OFC/NFOEC (Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference) event in Anaheim, California this week, several vendors made competing 100G technology announcements, fueling the ability to drive big data through ultra-fast networks.

    Juniper launches small supercore and 100G routing interface. Juniper Networks (JNPR) announced the new PTX3000 Packet Transport Router. Featuring a 10.6 inch depth design, it can rapidly scale up to 24 terabits per second (Tbps), which allows it to simultaneously stream HD video to as many as three million households. The router follows Juniper’s 2011 introduction of the Converged Supercore, a new architecture to bring together the packet and transport worlds. Additionally, Juniper announced an integrated packet-transport physical interface card (PIC) with two ports of line rate 100 Gigabit forwarding for the entire PTX family, which will now enable service providers to cost-effectively interconnect sites more than 2,000 kilometers (1,243 miles) apart. “To effectively deliver advanced services and remain competitive, service providers need a core network solution that will help streamline their business and reduce operational costs,” said Rami Rahim, executive vice president, Platform Systems Division, Juniper Networks. “The Converged Supercore is an innovative platform that enhances service provider economics while providing greater value to their subscribers. Following on the heels of the revolutionary PTX5000, the PTX3000 extends these benefits to new markets and geographies with a solution that is tailored for their specific needs.”
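
    That three-million-household figure is easy to sanity-check (rough arithmetic on our part, not Juniper’s published math): 24 Tbps shared across three million simultaneous streams leaves about 8 Mbps per household, a typical HD video bitrate:

        # Back-of-the-envelope check on the "three million HD households" claim.
        # The 8 Mbps-per-stream result is what the two cited numbers imply; treating
        # it as a typical HD bitrate is our assumption.
        total_capacity_bps = 24e12   # 24 terabits per second
        households = 3_000_000

        per_household_mbps = total_capacity_bps / households / 1e6
        print(f"Bandwidth per household: {per_household_mbps:.0f} Mbps")  # 8 Mbps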

    Kotura launches 100G with WDM in dense package.  At the OFC/NFOEC event Kotura demonstrated its Optical Engine in a Quad Small Form-factor Pluggable (QSFP) package. Kotura is the only photonics provider to demonstrate WDM (Wavelength Division Multiplexing) in a 100 gigabits per second (Gb/s) 4×25 QSFP package with 3.5 watts of power. “The QSFP package enables our customers to fit 40 transceivers across the front panel of a switch, providing 10 times more bandwidth than CFP solutions,” said Jean-Louis Malinge, Kotura president and CEO. “Because we monolithically integrate WDM and use standard Single Mode Fiber duplex cabling, our solution eliminates the need for expensive parallel fibers. No other silicon photonics provider can offer WDM in a 3.5 watt QSFP package.”

    Applied Micro launches standalone OTN processor. Applied Micro (AMCC) announced the TPO215 processor, a standalone OTN processor that enables 10 x 10G line cards for OTN cross connect and Packet-Optical Transport System (P-OTS) applications. Delivering advanced framing, mapping and multiplexing, the TPO215 doubles the capacity of existing OTN framers while providing advanced security features. The product supports 10 x 10G channels for a total capacity of 100G. “AppliedMicro continues to pioneer technologies that will drive a new generation of networking equipment for telecommunications, data center and cloud connectivity,” said George Jones, vice president and co-general manager, Connectivity Products, at AppliedMicro. “The desire to transition to packet-aware optical transport networks requires network equipment vendors to partner with semiconductor companies that have established expertise in the latest optical networking solutions. This processor helps enable the required infrastructure for dramatically improved user experiences.”

    Broadcom enables higher density 100G long haul. Broadcom (BRCM) announced a fast CMOS transmitter PHY for long-haul, regional and metropolitan data transport. The BCM84128 100G transmitter achieves an aggregate data rate of 128 Gbps at a low power draw of only two watts. Using 40 nanometer CMOS process technology, it provides a full-rate clock output at 32 GHz and paves the way to 100G long-haul networks. “The BCM84128 high performance transmitter PHY reflects the industry-leading innovation we are known for, allowing OEMs to leverage 100G PHYs developed in standard CMOS process technology with its inherent advantages of lower power and reliability,” said Lorenzo Longo, Broadcom Vice President and General Manager, Physical Layer Products (PLP). “Today’s introduction provides Broadcom with the opportunity to participate in a new market segment and pave the way for 100G optical transport.”

  • Schmidt: Chrome And Android To Remain Separate (But With More Overlap)

    Some of us have expected that Google’s Android and Chrome operating systems would eventually converge into one operating system. That’s mostly because Google co-founder Sergey Brin once implied that this would be the case. Since then, Google has given off other signals that this could potentially happen.

    For example, we’ve recently seen indications of Android’s Google Now functionality coming to Chrome. Even more recently, Android chief Andy Rubin has stepped away from leading the operating system, as Google now has Sundar Pichai leading both Android and Chrome.

    Now, former CEO and current Executive Chairman Eric Schmidt has come out and said that Android and Chrome will remain separate products, though we can expect more overlap between the two, according to a report from Reuters. He also said that rumors about him leaving Google were “completely false,” which is helpful to know.

    It will be interesting to see just how much overlap does take place between Google’s dueling operating systems, particularly as Google is now pushing notebooks with touchscreens (the Chromebook Pixel).

    Eventually, it seems, it would make sense for the two to converge as the overlap increases, but even if it’s not going to happen in the near future, who is to say that it never will?

    Android has already attained massive popularity, but Chromebook options and availability are really just starting to take off. This week, the company announced availability in six more countries.

  • A great visualization of Apple and Google’s smartphone market dominance

    Smartphone Market Share Chart
    Just over half a decade ago, the smartphone landscape in the United States looked absolutely nothing like it does today. Companies like Microsoft (MSFT), BlackBerry (BBRY) and Palm (RIP) dominated the market in the U.S. and even Symbian had a healthy share in 2005. A major shift began in 2007 when Apple (AAPL) debuted the iPhone, and any hangers-on were quickly dispatched over the coming years after Google (GOOG) unleashed Android. We all know the story, but a picture is worth a thousand words and comScore issued a great chart during its recent “Mobile Future in Focus” webinar that shows just how quickly and decisively iOS and Android took over the U.S. market. The chart follows below.


  • What Eric Schmidt REALLY SAID about the future of Android and Chrome OS

    Eight days ago, Google dropped an atomic bomb on the Android Army, with Andy Rubin’s sudden departure as commander-in-chief. Sundar Pichai, who is responsible for Chrome and Apps, assumed Android leadership. The change led to much speculation that the operating system would sometime soon merge with Chrome OS. As the fallout spreads, an answer arrives: The question is irrelevant.

    Google Executive Chairman Eric Schmidt tells reporters in India that Android and Chrome OS will not merge but converge, says Reuters’ Devidutta Tripathy. But there’s no quote, just paraphrase, which worries me about context. Fortunately, there is a video that provides context and reveals a different priority: Chrome.

    Someone asked Schmidt if either operating system would suffer Google Reader’s fate. “No is the answer. We don’t make decisions based on who the leader is”. Google makes decisions “based on where the technology takes us”.

    From today’s many news stories, which focus on the two operating systems remaining separate, you might assume Schmidt addresses the question: “Will they merge?” Instead, he responds to one about a Google Reader-like execution and then changes direction.

    “Chrome and Chromium are the world’s best HTML5 development and authoring systems”, he says. “You should be using Chrome. It’s faster, it’s safer, it’s more secure than any of your other browser choices. In Android, which is more of a Java-like development environment, it solves a different problem”.

    This confirms what I observed last week. Rubin’s stepping aside and Pichai taking responsibility is about the browser, not an OS merger. “There will be more commonality for sure, but they certainly are going to remain separate for a very, very long time”, Schmidt says. “They solve different problems”.

    But the context for the problems solved is Chrome. On Android, “it” — Chrome — “solves a different problem”. Google’s go-forward platform of choice isn’t Android or Chrome OS but the browser.

    Chrome and Chromium fulfill Netscape’s late-1990s vision for a browser-based platform, something Microsoft sought to prevent.

    The browser, as a development platform, can co-opt operating systems like iOS, OS X or Windows, while also fronting Chrome OS. The browser is a more natural fit for Google services and anchors them anywhere.

    By contrast, Android, while hugely popular, is constrained by OEM partners like Samsung. Google delivers fresh features to Chrome and Chrome OS users about every six weeks. Chrome Beta for Android updates are considerably more frequent.

    Android updates are less frequent, and carriers and device manufacturers bottleneck their delivery. For example, Jelly Bean, which was released in July 2012, made up just 25.5 percent of the devices accessing Google Play in the 14 days before March 5. Meanwhile, during the fourth quarter, Samsung accounted for about 43 percent of Android smartphone sales, according to Gartner. Who primarily controls the user experience? Not Google.

    Chrome, by contrast, is all about Google. The platform connects the company’s core services and generates revenue via advertising. Chrome can go everywhere, and — to re-emphasize — co-opt other operating systems. The browser engine supports apps, too, whether web-based or running on Android.

    I think Schmidt is decidedly clear about Google’s platform of choice, which absolutely favors Chrome OS over Android long term. Chrome is why.