Author: Jordan Novet

  • Why Intel thinks data democratization is a good bet

    Intel doesn’t see money in the blending of personal data with publicly available data sets, not yet, anyway. But the chip maker does suspect that a market could form around that sort of behavior down the line. That’s why the company has been backing several initiatives around data lately, from wethedata.org to the upcoming National Day of Civic Hacking. It has also introduced its own Hadoop distribution. Many of these projects are aimed at democratizing data, a major goal being to make data easily accessible to large swaths of people.

    Brandon Barnett

    Perhaps it’s not surprising that a big company like Intel retains ethnographers, anthropologists and other social scientists on staff. Those people, who work inside Intel’s Interaction and Experience Research Lab, study how people around the world use technology and how technology affects cultures.

    Lately, the researchers have been exploring the implications of the changing role of data in the computing ecosystem. The idea is to figure out how the growing openness of data from companies and governments could present business opportunities for Intel, said Brandon Barnett, director of business innovation at Intel.

    It’s only natural that this is a bit amorphous — Intel doesn’t know exactly what will come out of it all. But as it throws support behind data-democratization efforts, the company has in mind a few use cases that hint at where commercialization could come into play.

    • An Intel researcher has built an application on top of multiple publicly available data sets depicting the trees and their pollen activity in Portland, Ore. The idea is for people to be able to find routes around the city that avoid trees that could cause allergy attacks. Without the application, people who move to town with allergies but no tech savvy might have a hard time commuting.
    • High school students looking at colleges should be able to plug in information about their academic performance and non-accredited extracurricular activities and get recommendations for schools they should check out, based on demographic data, grant availability and workforce demand. While the data might be out there, it’s not all navigable in one place, and it’s not interactive. (This is one area that Intel wants people to work on.)
    • Intel let Londoners look at their home energy consumption and compare it with others in the same age bracket or type of home. The trick here was to open up the data that’s relevant to certain users and to display it in a simple way. (Opower can do something related — compare people’s energy consumption with that of their neighbors.)
    • It could be possible to suggest better public-transit routes by cross-referencing people’s transit habits with data on bus delays caused by traffic. That sort of data can otherwise remain in a silo and never get shown to people who could actually benefit from alerts based on it.
    • A next-generation music application might be able to watch what music a person listens to, and then offer a ticket to see a preferred band when it stops by a local venue on a day the person has some free time. Such a service would take personal data and run with it by removing the hassle of identifying a band, checking out the tour schedule, finding a date that works and scoring a ticket.

    It’s nice to see that Intel has ordinary people in mind, at least for now. It’s interested in constructing applications that let lots of people understand their own data and put it in context with narrowly tailored outside data, “so we don’t end up with a bunch of hackers across the country coming up with better visualization tools,” Barnett said.

    Of course, more public-private data mixing and analyzing could bring about the need for more computing power, and Intel wants to be ready if and when that situation arises. Currently available chips might be up for the job, or maybe a new model will be warranted. Intel has shown willingness to make custom chips for a single webscale company, Facebook, so delivering special equipment to fit a growing use case might not be unthinkable, assuming the scale is there.

    Feature image courtesy of Shutterstock user Andrea Danti.


  • Cisco to acquire energy software startup JouleX for $107M

    Cisco on Wednesday said it plans to buy JouleX, a company that has developed software for tracking data center energy use, for $107 million in cash and incentives. The deal is expected to close in the fourth quarter of this fiscal year.

    JouleX software lets customers monitor the energy use of all devices attached to a network. Reports on usage can stream in at the data center, rack, slot or business-unit level, among others. Customers can keep an eye on carbon emissions, set alerts based on energy use and enforce policies in order to lower costs.
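
    To make the alerting idea concrete, here is a minimal sketch in Python of a threshold-based energy policy. It is purely illustrative: the device names, wattage readings and the 600 W limit are invented for the example and are not drawn from the JouleX product.

        # Toy sketch of threshold-based energy alerting, in the spirit of what
        # JouleX is described as doing. Device names and wattages are made up.
        READINGS_WATTS = {
            "rack-12/server-03": 410,
            "rack-12/server-04": 655,
            "office-4f/printer-01": 95,
        }

        ALERT_THRESHOLD_WATTS = 600  # hypothetical per-device policy

        def check_energy_policy(readings, threshold):
            """Return the devices whose power draw exceeds the policy threshold."""
            return {dev: watts for dev, watts in readings.items() if watts > threshold}

        for device, watts in check_energy_policy(READINGS_WATTS, ALERT_THRESHOLD_WATTS).items():
            print(f"ALERT: {device} drawing {watts} W (limit {ALERT_THRESHOLD_WATTS} W)")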

    Cisco already has its EnergyWise energy-management protocol for tracking energy use inside networks. The JouleX software will be able to integrate with EnergyWise.

    The combination of the two “will provide customers with a simple way to measure, monitor and manage energy usage for network and IT systems across the enterprise, without the use of device-side agents, hardware meters or network configurations,” according to a statement.

    Webscale data center operators have been keen on highlighting their energy efficiency, with disclosures coming from Facebook, eBay and Google. Greater adoption of the JouleX software as a result of the acquisition could lead enterprise data center operators to boast of their own improvements on power-usage effectiveness (PUE).

    At the same time, lower energy costs on premise could also reduce the likelihood that an enterprise customer would feel pressure to look to a public or hybrid cloud model. And that’s good for Cisco, which wants to ensure that companies keep buying switches and other network gear.

    Meanwhile, the deal is a boon to Atlanta-based JouleX’s investors. Most recently the company raised $17 million from Flybridge Capital Partners, Intel Capital, Sigma Partners, Target Partners and TechOperators.


  • Five9 takes on $34.5M to get more call centers aboard the cloud

    Like other industries, call centers and their contact-center relatives have been looking more to cloud computing and relying less on on-premise gear. One of the beneficiaries of that trend is Five9, which makes software for call- and contact-center employees that runs on Five9 servers in colocation facilities.

    Five9 was founded in 2001 “as a pure-play Software as a Service (SaaS) back in the ASP (application service provider) day,” said Mike Burkland, the president and CEO. Over the years, the San Ramon, Calif.-based company has moved from doing business mainly with small and medium-sized businesses to large companies, too. It claims more than 1,800 customers.

    The company’s software lets customers have automated call-answering with voice recognition. It routes calls to available representatives and lets employees handle both inbound and outbound calls. It’s possible for managers to keep an eye on employee efficiency, and the system hooks in easily with customer-relationship management (CRM) software. Now, Burkland said, it’s just a matter of selling the concept to more companies.

    That’s where Five9’s new round of funding comes in. On Wednesday the company announced $34.5 million in funding, including $22 million of Series D equity led by SAP Ventures and $12.5 million in bank revolver debt from City National Bank. Previous investors Adams Street Partners, Hummer Winblad Venture Partners and Partech International also participated in the equity round. The company has now raised $71.6 million in total venture funding.

    In its campaign to phase out proprietary software running on these facilities’ on-premise gear, Five9 naturally faces competition from vendors offering those solutions, particularly Avaya, Genesys and Cisco, Burkland said. As for smaller cloud-based competitors, Burkland said, “They tend to come and go.”

    Part of the appeal of horizontal SaaS products from Five9 and its ilk is that customers don’t need to worry about paying up front for hardware or on an ongoing basis for external support or internal management. Payment is more granular, as customers pay per seat per month. While there are fewer revenue streams to count on, the revenue comes in more frequently. “There’s very low risk profiles compared to their classic enterprise software brethren, which used to have to make every new quarter with customer sales,” Burkland said. The trick is to keep existing customers while adding new ones, and that’s what Five9 is aiming to do.


  • ‘Other’ server vendors keep gaining momentum, while HP, Oracle slide

    New sales figures out from Gartner Tuesday show that server shipments from no-name vendors, those outside big names such as Hewlett-Packard, IBM and Fujitsu, gained market share in the first quarter of 2013 compared with the year-ago period. The takeaway is not the decline of the server business at big vendors such as IBM — remember that IBM reportedly considered selling its server division — so much as the rise of little-known vendors such as Quanta and Wistron that customize their gear for big customers. The pattern isn’t new, but it’s becoming more evident.

    In the first quarter of 2011, “other” vendors claimed 32.1 percent of market share in terms of unit shipments. HP had almost 30 percent, and Dell had 22 percent. Now the “other” guys in aggregate have moved forward and hold 37.5 percent of the market in terms of units shipped; HP has dropped to 24.9 percent, while Dell’s share was roughly flat. IBM in the same period fell from 11.8 percent to 9.9 percent.

    Source: Gartner

    HP is trying to shake things up with its new energy-efficient Moonshot servers. The microserver bet looks like a good one, although Moonshot’s adoption is an open question. An HP executive told my colleague Barb Darrow that the data showing the growth of white-box vendors emerged “before the world knew about Project Moonshot.”

    Dell, for its part, grew year over year last quarter, but far more modestly than the “other” vendors. It’s unclear what going private will mean for the company in terms of its server sales.

    Cisco grew, too, but its chunk of market share — 2.3 percent — is small, so its growth appears larger than it actually is.

    Oracle is absent from the list of vendors measured by server shipments, but its server business doesn’t look promising either, at least not compared with the “other” vendors: Oracle server revenue slipped more than 27 percent year over year.

    The growth of Facebook, Google and other webscale companies out of colocation facilities and into their own data centers has introduced the opportunity to go with tailor-made gear that makes sense at their economies of scale. (At GigaOM’s Structure conference in San Francisco in three weeks, this topic might very well come up in conversation with Werner Vogels, chief technology officer at Amazon.com, and Jay Parikh, Facebook’s vice president of infrastructure engineering.)

    Some companies running lots of applications have moved in this direction as well — take Amazon and Rackspace, for example. Hence the rise of manufacturers that build this sort of custom gear for webscale players.

    With more hardware designs becoming available through the Open Compute Project, still more business could flow to no-namers in the years to come. And that’s to say nothing of switches, a piece of hardware certainly ripe for commoditization.


  • A gigabit is not enough. New research takes us to 400Gbps.

    This article was updated at 11:46 a.m. PT to correct the bandwidth measurement in the article’s excerpt and to clarify a statement about signal-to-noise ratio.

    Researchers at Alcatel-Lucent’s Bell Labs have devised a new way to transmit data super-fast over fiber cables: using “twin waves” of information rather than just one and then bringing them together when they arrive at their destination. The approach cuts down on signal distortion and led to rates of 400 Gbps across a record distance of 12,800 kilometers, or more than 7,900 miles, according to the research paper, published online Sunday by the journal Nature Photonics.

    The pairing of signals in essence cancels out the ups and downs — peaks and troughs, in physics terms — of data. That means the signal-to-noise ratio improves, which lets fiber optic communications travel farther without more gear along the way to boost the signal. That’s a big deal.
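
    To make the pairing idea concrete, here is a toy numerical sketch in Python, not the Bell Labs method itself: if the distortion picked up by one wave is the mirror image of the distortion picked up by its twin, averaging the two at the receiver largely cancels it. The signal values and the distortion model are invented purely for illustration.

        import random

        # Toy illustration of the twin-wave idea: send a signal and a "mirror" copy,
        # let each pick up equal-and-opposite distortion, and recombine at the receiver.
        # The numbers and the distortion model are invented for illustration only.
        random.seed(42)

        signal = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
        distortion = [random.uniform(-0.4, 0.4) for _ in signal]

        received_a = [s + d for s, d in zip(signal, distortion)]  # original wave
        received_b = [s - d for s, d in zip(signal, distortion)]  # twin picks up the opposite distortion

        recombined = [(a + b) / 2 for a, b in zip(received_a, received_b)]

        print("worst error before pairing:", max(abs(r - s) for r, s in zip(received_a, signal)))
        print("worst error after pairing: ", max(abs(r - s) for r, s in zip(recombined, signal)))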

    The researchers’ demonstration suggests that telcos can apply this approach to delivering data over great distances — in other words, in fiber moving bits under the sea. But the method could also one day speed up connections inside data centers. For that to happen, fiber needs to become popular in data centers first.

    Speeds faster than 400 Gbps are not unheard of, but the distance here is the key. Researchers have managed to send data at speeds exceeding 100 terabits per second, although it wasn’t clear how far those speeds could be sustained. Last year Verizon clocked in at 21.7 terabits per second across a stretch of more than 900 miles with the help of NEC’s “superchannels.” The concept at work was combining a bunch of 100 Gbps channels into one stream. “Imagine taping together a bunch of cocktail straws in order to suck up more liquid,” my colleague Stacey Higginbotham explained. The Bell Labs researchers have taken a different tack.

    Gradually we are moving closer to the 100 gigabit per second age Stacey forecast a couple of years ago. And it could be that the proliferation of gigabit internet service in more and more cities will prompt the emergence of 100 terabit per second backhaul.

    While companies implementing faster speeds will make competition more lively, consumers and business customers will benefit too. With advancements like the twin-wave research, real-time, data-rich applications will become feasible more quickly, and the window of what’s possible will open wider.


  • Easy Solutions takes on $11M to protect banks’ coffers

    While tech companies must be watchful of the data entrusted to them, banks and other companies that run financial transactions need to take steps to protect their money as well as their data. Bigger players can often ask their engineers to build fraud-prevention systems, but smaller entities might not have that option. Hence the rise of Easy Solutions, which has taken on plenty of customers across Latin America and is now hungry to get its software installed at companies in the U.S., Europe and Asia.

    To help Easy do that, Medina Capital is backing the company with $11 million in Series B funding, bringing the total the company has raised to $14.2 million.

    Easy has different kinds of software available to cover customers on multiple fronts. The products can be deployed on premise or in the cloud.

    Beyond monitoring online transactions, the company also tracks transactions coming through phones, ATMs and point-of-sale terminals. As individual customers rack up transactions, Easy uses neural networks to paint profiles of behavior against which future transactions are measured. The software can show whether a transaction is potentially fraudulent in less than half a second, said Easy’s CEO, Ricardo Villadiego (pictured).

    “We store every transaction that is occurring,” he said. Even though that might seem like a lot of data to manage, whether in customers’ data centers or in the cloud, it’s actually compact — around 280 bytes per account, Villadiego said.

    And when Easy hasn’t picked up much user data to assess risk, the software analyzes browsing patterns and other variables that could signify issues, Villadiego said.
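
    As a rough sketch of the pattern described above, profiling each account’s past transactions compactly and flagging new ones that fall far outside that profile, the following Python snippet uses a simple statistical test; the amounts, the threshold and the test itself are illustrative stand-ins, not Easy Solutions’ neural-network model.

        import statistics

        # Illustrative per-account profile: keep a short history of amounts and score
        # new transactions by how far they deviate from the account's own behavior.
        # The z-score test and the threshold stand in for the vendor's real model.
        class AccountProfile:
            def __init__(self):
                self.amounts = []

            def record(self, amount):
                self.amounts.append(amount)

            def looks_fraudulent(self, amount, z_threshold=3.0):
                if len(self.amounts) < 5:
                    return False  # not enough history; fall back to other signals
                mean = statistics.mean(self.amounts)
                stdev = statistics.stdev(self.amounts) or 1.0
                return abs(amount - mean) / stdev > z_threshold

        profile = AccountProfile()
        for amt in [42.0, 38.5, 55.0, 47.0, 40.0, 51.5]:
            profile.record(amt)

        print(profile.looks_fraudulent(49.0))    # False: in line with past behavior
        print(profile.looks_fraudulent(2400.0))  # True: far outside the profile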

    Easy software verifies that users are who they say they are with multi-factor authentication. It can also spot and prevent phishing, pharming (redirecting traffic to a fraudulent website), malware installations and other nefarious activities. Altogether, Easy sees competition from RSA and Trusteer.

    The dashboard on Easy Solutions’ Detect Monitoring Service

    While safeguarding money is ultimately the main focus for Easy Solutions, it also bolsters security for websites, an area that’s received lots of investment recently. In the first quarter of the year, IT security investments from venture capitalists fell hard and fast, in a way not seen in years. But last week Blue Coat Systems agreed to acquire Solera Networks, and Skyhigh Networks got $20 million in Series B funding. Now, with the Easy funding, it looks more like the investor excitement will stick around for another quarter.


  • How Ancestry.com transforms mounds of data into legible digital records

    Sure, genealogy nerds might have fun poking through U.S. Census records, birth certificates and other documents in pursuit of information about their relatives on Ancestry.com. When it comes to showing off individual records to friends and relatives, though, the presentation can lack punch, and telling the whole story of an ancestor’s life isn’t straightforward.

    The people behind the Ancestry.com service have realized this. Now they’re making the most of their 4PB storehouse of official personal records, user-submitted information and other data with a new feature delivering sleek computer-generated but customizable summaries of information available on users’ ancestors.

    Ancestry started rolling out the feature, known as Story View, earlier this quarter to a tiny share of its customers, and now it’s active for 10 percent of them. The plan is to analyze the use of Ancestry with and without Story View and round out the feature before making it generally available, probably later this year, said Eric Shoup, the company’s executive vice president of product, in a recent interview. Already Ancestry has made the feature more interactive by letting users move the images of documents around a single page and edit the associated bodies of text derived from the documents.

    How it works

    Story View builds on top of Ancestry’s already highly evolved tools for mining data about relatives, including some handwritten records. But sometimes only critical fields, such as name and place of residence, have been processed for inclusion in Story View. A customer can access a handwritten record, scroll down to the row in which a relative is described and toggle across columns to see data that hasn’t been processed, such as that person’s occupation.

    Ancestry is working on getting more out of its handwritten records by gradually directing armies of “keyers” to decipher handwriting and turn it into searchable text. Street addresses have been added in this way, and other fields will come later. And since Ancestry continues to add records to its repository, life stories will gain more sources to draw from as well.

    The Story View life summary for one of Shoup’s relatives

    To generate one-paragraph summaries drawing on information from multiple documents (check out the top paragraph in the picture above), Ancestry looked to Narrative Science, a company founded in 2010 to make machines turn out readable copy. Early use cases came in the production of coverage of sports events and public companies’ earnings reports, but now Narrative Science technology is handling much more personal information.

    When Ancestry first got involved with Narrative Science, it was only possible to produce data in big batches, said Reed McGrew, lead developer on Ancestry’s narrative and context services team. “They’ll produce huge numbers of financial reports, and that’s not really the experience we’re trying to deliver,” McGrew said. “Because it was meant for batches, it was pretty slow.”

    Within a few months, Narrative Science came out with a new API that could work on a more granular level. “On kind of a user-by-user basis, they generate our life stories,” McGrew said.

    Ancestry knows a thing or two about serving up genealogy information. The company’s editors provided editorial standards, or “rules,” for how the data should inform the narratives and how the narratives should sound, McGrew said. One Ancestry standard? “We don’t talk about births that happen to mothers less than 10 years old,” he said. “They’re more likely keystroke errors. They do happen in reality sometimes, for sure, but more often than not, when we find them, they’re errors.”

    One of several records containing information on a relative of Shoup’s

    Underneath a picture and life summary of an ancestor in Story View are zoomed-out pictures of documents, instead of discrete fields of structured text. Next to the images, Ancestry can plug in blurbs generated from information in the document. Those draw from a system that engineers drew up in house. Once Ancestry has found all the records associated with a person, it selects specific facts to pull out of them based on Ancestry editors’ rules, and assembles them into full sentences. Once the document-based blurbs are displayed in the browser, customers can edit and save them before sharing.
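
    Conceptually, that in-house system is a rule-driven templating step: select whitelisted facts from a record, drop anything the editors’ rules disallow, and render a sentence. A highly simplified sketch along those lines (the record fields, the rule and the wording are hypothetical, not Ancestry’s actual code) might be:

        # Simplified sketch of rule-driven blurb generation: pick facts from a record,
        # apply an editorial rule, and assemble a sentence. The fields, rule and
        # template are hypothetical and not Ancestry's system.
        ALLOWED_FACTS = ("name", "year", "place", "occupation")

        def mother_age_plausible(record):
            """Editorial rule in the spirit of the one quoted above: skip implausible births."""
            mother_age = record.get("mother_age_at_birth")
            return mother_age is None or mother_age >= 10

        def blurb_from_record(record):
            if not mother_age_plausible(record):
                return None  # treat it as a likely keying error and say nothing
            facts = {k: v for k, v in record.items() if k in ALLOWED_FACTS and v}
            return "In {year}, {name} was living in {place} and working as a {occupation}.".format(**facts)

        census_row = {"name": "Jane Example", "year": 1910, "place": "Portland, Oregon",
                      "occupation": "seamstress", "mother_age_at_birth": 32}
        print(blurb_from_record(census_row))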

    Sharing ain’t easy

    The challenge is not the creation and storage of new data and websites that users create, said Scott Sorenson, Ancestry’s chief information officer. Storage has gotten cheaper and cheaper, and that trend should continue. Accurate processing of handwritten records generally is not an issue, either. Often the keyers are in China, Sorenson said. “The Chinese character set is much larger than our alphabet,” he said. “They’re actually very skilled at keying these records.”

    The real hard part is to make sure the service is highly available, to serve up all the right documents and text for millions of users and to keep the site from crashing when traffic peaks. But since one goal of Story View is to get more people checking out content on the site and eventually signing up, that would be a good problem to have.


  • Australian researchers get closer to scalable quantum computing

    Researchers in Australia are making progress in their quest to construct a scalable quantum computer, having developed a method for extracting information from an electron racing around a phosphorus atom in silicon, the MIT Technology Review reported Wednesday. The achievement suggests that commercial use — and, therefore, wider implementation of a probabilistic computing model much faster than current systems — could be just a wee bit closer.

    The idea of operating a quantum computer with a quantum bit — or qubit — based on a phosphorus atom harks back to a vision articulated by Australian Bruce Kane in research published in Nature in 1998. “The realization of such a computer is dependent on future refinements of conventional silicon electronics,” Kane explained in the abstract to his paper. Researchers in Australia have been striving to put Kane’s concept into practice for more than 10 years, the MIT Technology Review article notes, and their latest step is to get information from an agitated electron:

    These guys implanted a single phosphorous atom in a silicon nanostructure and placed it in a powerful magnetic field at a temperature close to absolute zero. They were then able to flip the state of an electron orbiting the phosphorous atom by zapping it with microwaves.

    The final step, a significant challenge in itself, was to read out the state of the electron using a process known as spin-to-charge conversion.

    The end result is a device that can store and manipulate a qubit and has the potential to perform two-qubit logic operations with atoms nearby; in other words the fundamental building block of a scalable quantum computer.

    Still, the Australians have work to do, according to their paper, which they submitted on Monday:

    Future experiments will focus on the coupling of two donor electron spin qubits through the exchange interaction, a key requirement in proposals for scalable quantum computing architectures in this system. Taken together with the single-atom doping technologies now demonstrated in silicon, the advances reported here open the way for a spin-based quantum computer utilising single atoms, as first envisaged by Kane more than a decade ago.

    Meanwhile, researchers in England have done work of their own on quantum entanglement involving phosphorus atoms.

    Quantum computers from Canada have seen some commercial adoption, with Lockheed Martin and a Google-initiated lab signing up for D-Wave Systems quantum computers. If competitors from England and Australia come onto the scene, further innovation could follow and cause prices to fall.

    Feature image courtesy of Shutterstock user isoga.


  • Facebook sheds some light on what it can get out of Parse

    When Facebook acquired Parse last month, it was unclear what good could come of the deal for Facebook. On Thursday, Facebook executives didn’t share detailed new plans for its developer platform or Parse per se, but they did lay out broadly how the social networking giant can benefit.

    Mike Vernal, Facebook’s director of engineering, said the integration of Parse technology could boost ad sales by making cross-platform mobile apps easier for developers to build and run.

    If a startup builds an iOS app with a way to connect into Facebook, great, but its reach is limited to the number of people with iOS devices. Then the developers would have to start over to build a version of the app for Android and Windows Phone operating systems.

    That’s where Parse comes in. As a provider of a Mobile Backend as a Service (MBaaS) with software-development kits for multiple operating systems, Parse lets developers quickly build out applications without having to worry about managing servers. When a startup expands its offering from just iOS to Windows Phone and Android apps and drops them in app stores, promotion becomes important. Facebook can help with that, by getting ads in front of users. The ads expose more users to the startup’s app, excite those users and — here’s the important part — bring in more ad revenue.
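
    For a flavor of the “no servers to manage” pitch, the snippet below stores an object through Parse’s hosted REST API as it was documented at the time; the class name, fields and the two keys are placeholders for a real app’s credentials, and the endpoint and headers should be checked against Parse’s current documentation.

        import requests  # third-party: pip install requests

        # Store an object in a Parse-hosted class over the REST API, no app server
        # required. The app ID, REST key, class name and fields are placeholders.
        PARSE_APP_ID = "YOUR_APP_ID"
        PARSE_REST_KEY = "YOUR_REST_API_KEY"

        resp = requests.post(
            "https://api.parse.com/1/classes/GameScore",
            headers={
                "X-Parse-Application-Id": PARSE_APP_ID,
                "X-Parse-REST-API-Key": PARSE_REST_KEY,
                "Content-Type": "application/json",
            },
            json={"playerName": "example", "score": 1337},
        )
        print(resp.status_code, resp.json())  # 201 and the new object's id on success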

    Getting more from mobile has been a key area for Facebook, and that’s why the Parse deal begins to make more sense. This is particularly important following the mixed reception of Facebook Home.

    Aside from being an ad revenue driver, Parse makes sense from a content perspective. Not every Facebook user updates his or her lists of favorite things and other fields, so enabling fresher content from more external sources is desirable; it could boost engagement. Facebook recently rolled out to all users the ability to be selective about what content third-party applications can push back to Facebook, and now users can confidently approve the sharing of stories into the news feed and timelines from more and more apps that developers come up with.

    Down the line, Facebook also wants to make this data more accessible through its newish Graph Search tool, Vernal said. That move would scratch another item off Facebook’s long Graph Search to-do list.

    As for Parse, it will keep running the way it has been, said Ilya Sukhar, a co-founder of Parse (pictured), whether developers want to use Facebook as a means of promotion or not.

    One unanswered question is what will happen to all the apps developers run on Parse. “It’s business as usual, so we’re actually staying on Amazon Web Services,” Sukhar said. But Facebook has a boatload of custom-built infrastructure. Couldn’t it just move Parse-backed apps to Facebook data centers, effectively turning Facebook into a quasi-cloud service provider? Apps will keep running on AWS “right now,” said Facebook’s director of product management, Doug Purdy. But the key words are “right now.”

    Purdy made it clear that Facebook simply wants to enable third-party developers to build and run apps that people can enjoy regardless of the device they choose. It turns out that’s in Facebook’s best interest, too.


  • Box acquires Folders technology to enrich iOS offering

    Box indicated on its blog Thursday that the cloud-storage company has “acquired the technology” for Folders, an iOS app that enables users to open many kinds of files on the iPhone. The deal marks Box’s third acquisition, closely following news of the Crocodoc deal.

    While Box has been taking an industry-by-industry approach to enterprise adoption, the Crocodoc buy showed that Box is also serious about serving up a slick and intuitive consumer-grade user experience for the enterprise. The Folders deal is more proof of that, and it offers important capabilities that keep Box competitive as enterprises let employees bring their own devices — many of them iOS-based — into the workplace.

    Folders code viewer. Source: Folders

    In picking up Folders, Box gets an app that can do a bunch of neat tricks. Files can be copied and deleted. The app can work with the Mail app, upload files to a cloud and download pictures to an iPhone’s Camera Roll. Offline access is available. Users can search and flip through pages of PDFs. The app opens Microsoft Office files in full screen. A text editor has support for markdown, and a code viewer lets developers highlight code in preview mode.

    Users can also search files stored across Box, Dropbox or Google Drive. But perhaps this support for multiple clouds could fall away as Folders gets absorbed into Box — Google and Dropbox, after all, are key competitors against Box in the fight to be the Dropbox of the enterprise.

    The Folders technology will be worked into “the next generation Box for iOS” that’s currently in the works, according to the Box blog post from Sam Schillace, Box’s vice president of engineering. (Schillace will talk with my colleague Derrick Harris at GigaOM’s Structure conference in San Francisco on June 20.)

    The Folders app was designed by Martin Destagnol, the CEO of Reedian. It’s unclear how Reedian will be affected by the acquisition.

    Even though getting enterprise adoption is important, in day-to-day reality, sometimes it’s the small things that matter to people. If enterprise employees see that they can open certain documents on their mobile phones, they might be less likely to get annoyed. And if employee discontent is minimal, companies could end up sticking with Box instead of flocking to other cloud-storage providers. The Folders deal looks like it will help Box move closer in that direction.


  • ConteXtream claims mystery carrier has deployed its SDN technology to 40M customers

    ConteXtream, an early player in software-defined networking (SDN), has apparently deployed its software for around 40 million subscribers of a wireless service provider in the United States, although it refuses to say which company is using the service. Telecommunications companies have been kicking the tires on implementing software-defined networks for the sake of efficiency, cost savings and agility, and a deployment this big suggests that the benefit could outweigh the cost.

    With offices in Israel and Palo Alto, Calif., ConteXtream was founded in 2007 and focuses on using software to help service providers virtualize network functions. It has picked up investments from Comcast Ventures and Verizon Ventures, among others. “You can imagine that those two investment arms are very interested in what we do,” said Nachman Shelef, ConteXtream’s CEO and a co-founder.

    The customer’s total subscriber base is at least double the number that the virtualized network covers, Shelef said, meaning that it could be either AT&T or Verizon. And given that Verizon has invested in ConteXtream, it seems hard to imagine the company would be selling services to its investor’s biggest rival, although Shelef wouldn’t say one way or the other.

    Running ConteXtream’s Grid software on servers would allow a wireless carrier to more quickly roll out new revenue-generating features while also looking at traffic flows and directing certain subscribers only to the services they need, such as content filtering, customized billing, video optimization and subscriber statistics. “The old way of doing this was all the traffic goes through all functions, whether it needs to or not, without identifying each flow,” Shelef said. As a result, network appliances can be used more efficiently.
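
    The contrast Shelef draws, every flow passing through every function versus per-subscriber steering, can be sketched in a few lines of Python; the plans and function names below are invented for illustration and are unrelated to ConteXtream’s actual Grid software.

        # Toy illustration of per-subscriber service chaining versus sending all
        # traffic through every network function. Plans and function names are invented.
        ALL_FUNCTIONS = ["content_filtering", "custom_billing", "video_optimization", "subscriber_stats"]

        SERVICE_CHAINS = {
            "family_plan": ["content_filtering", "subscriber_stats"],
            "video_bundle": ["video_optimization", "custom_billing"],
        }

        def old_way(flow):
            return ALL_FUNCTIONS  # every flow traverses every appliance, needed or not

        def steered(flow):
            return SERVICE_CHAINS.get(flow["plan"], ["subscriber_stats"])

        flow = {"subscriber": "555-0100", "plan": "family_plan"}
        print("old path:    ", old_way(flow))
        print("steered path:", steered(flow))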

    It’s a step toward the future of running all network functions as software on servers. “That vision is still very far off,” Shelef said.

    Early SDN deployments have come from managed-hosting providers. Webscale deployments from Amazon and Facebook appear to be in the works. Enterprises have been slower to jump on board, even though many have expressed interest in SDN.

    ConteXtream and other SDN vendors are eager to capitalize on the continuing hype cycle, and VMware is no exception to that, following its $1.26 billion acquisition last year of Nicira. Using software to virtualize networks is one component of VMware’s software-defined data center vision, which VMware CEO Pat Gelsinger will discuss with my colleague Om Malik at GigaOM’s Structure conference in San Francisco on June 19. Also at Structure, Juniper Networks executive Bob Muglia will talk SDN with GigaOM Research Analyst David Linthicum.


  • Skyhigh Networks gets $20M to lift IT out of the shadows

    The use of shadow IT — storing and sharing files on non-sanctioned clouds from Box, Dropbox and others, partly propelled by the bring-your-own-device trend — is not news, because it’s been going on for years despite the compliance and security problems it can pose. But IT leaders are fighting back, and new investment in security startup Skyhigh Networks suggests that they’re hungry for tools that reveal the use of cloud services and quantify the potential for data breaches and other risks.

    The company announced $20 million in Series B venture funding on Wednesday, bringing the total raised to more than $26 million. Sequoia Capital led the new round, which also included a contribution from Greylock Partners.

    Along with highlighting problematic use across multiple cloud services, the Skyhigh software also lets IT administrators take steps to minimize the impact of the rogue behavior by controlling access to certain clouds and encrypting data, which could make activity more secure. Cisco and Equinix use the Skyhigh product. Skyhigh wants to use the new funding to add more customers and to invest in marketing and engineering.

    The news falls in line with an increase in investments in security recently. In addition to the Skyhigh investment, Blue Coat Systems has announced plans to acquire Solera Networks, and McAfee said it would buy Stonesoft.

    But shadow IT is just one challenge facing CIOs these days, along with the push to try cloud services and implement big data projects. My colleague Barb Darrow will discuss challenges like these with the CIOs of the Clorox Co. and the Pabst Brewing Co. at GigaOM’s Structure conference in San Francisco on June 19.

    Feature image courtesy of Shutterstock user alexmillos.


  • How robots can do more in data centers and lower the costs of operating the cloud

    Machines are giving us better and better suggestions for things to read, restaurants to eat at and people to date. Behind the curtain, some of the ways these services are delivered are also being automated.

    An article out Wednesday from Data Center Knowledge envisions the next few steps for automating operations inside the data centers. Robots can move literally higher up the stack than humans and still be safe, which means data center builders can build vertically instead of horizontally. That could bring better use of data center floor space.

    If robots do all the work on the floor, lights might become unnecessary, and poof: just like that, a line item can be nixed from the budget. Deploying robots could also lead to less downtime, as they could act with more certainty than people when it comes to replacing a server or another hardware component.

    Using robots to grab equipment is “becoming quite feasible,” and Google does it to get backup storage tapes, according to the article. Most gear isn’t really made for machines to handle, though, so this area might be in need of tinkering before it can get widely adopted.

    The article also makes mention of unmanned data centers, including one operated by AOL. Apple revealed plans last year to build one of these facilities in Prineville, Ore., before saying it would expand the site to add data centers where some people would work. As more companies move in that direction, prices will drop, leading to further market penetration.

    Despite this, the article suggests that data centers will still need administrators, so not everyone working inside data centers will lose their jobs as this wave of automation carries through — for now.

    Meanwhile, data center admins can also optimize their facilities by changing out hardware and software to match use cases. Pat Gelsinger, CEO of VMware, will talk about his vision for the software-defined data center, and Andrew Feldman, general manager and corporate vice president of AMD, will talk about how companies can do these things at GigaOM’s Structure Conference in San Francisco on June 19.

    With these sorts of upgrades, while the initial capital expenditures might be high, they could bring operating expenses down for public, private and hybrid cloud providers, resulting in price drops for customers in time.


  • Blue Coat to acquire Solera and sweeten network-security story as cyberattacks continue

    There’s been lots of investor interest recently in backing startups focused on making IT more secure, at least in part because of the barrage of news of cyberattacks against government agencies and companies, including a defense contractor. Now comes more security investment activity. Blue Coat Systems, a vendor of network gear for security and WAN optimization, is bolstering its hardware and software strategies with the acquisition of Solera Networks.

    Terms of the deal were not disclosed.

    Among Solera’s products is a BlackBox Recorder that, true to its name, records all network activity, which analysts can play back after attacks to see what happened. Solera’s DeepSee software conducts deep-packet inspection to reveal the nature of applications and associated files in play inside a network. It also brings up timelines of suspicious activity, runs analytics and enables integration with other network tools, such as those from Palo Alto Networks. Dashboards and other visual features are available. Solera also makes traffic-monitoring appliances, storage boxes and a virtual appliance that runs as a VMware virtual machine.

    The deal follows news earlier this month of Blue Coat’s acquisition of the SSL product line from Netronome. The thinking behind that deal was to offer enterprises more visibility into both inbound and outbound traffic and apply policy across all traffic, according to a Blue Coat statement.

    Also this month, McAfee agreed to pay around $389 million for network-security provider Stonesoft.

    In the midst of the attention on cybersecurity among other enterprise IT hot spots, it’s not surprising to see vendors rushing to bolt on products and services and then pinging potential customers to promise the solution to their problems (and then announcing sales gains). The real indicator people should be on the lookout for, though, is a drop in cyberattacks, and judging by some recent headlines, that might still be a ways off.


  • VMware lays out prices for hybrid cloud offering — now customers have the ball

    VMware re-announced its long-awaited vCloud Hybrid Service as an Infrastructure-as-a-Service (IaaS) play for current vSphere customers to use. It will become available in an early access program in June and generally available in the third quarter of the year.

    The company is pitching the platform both for legacy vSphere applications already running in company data centers and for brand-new applications designed from the ground up. VMware execs up to and including CEO Pat Gelsinger promised “seamless” interoperability between on-premises implementations and the vCloud Hybrid Service.

    They promised it will let customers move data from on-premise infrastructure to public clouds on Layer 2 or Layer 3 networks and create the same virtual-networking infrastructure, such as load balancers and firewalls. Management will all happen inside current VMware software tools. Managing and moving virtual machines will be possible inside vSphere through a free plugin. The idea is to help customers move existing applications around and develop new applications on the public cloud. Some customers will want to run certain applications on the public cloud and keep key data on premises, said Gelsinger, who will be a featured speaker at GigaOM Structure next month.

    Bill Fathers, VMware’s senior vice president and general manager of hybrid cloud services, described vCloud Hybrid Service as the easiest public cloud to adopt. It will be available through current partners, so licensing won’t be different. And customers can get support for the vCloud Hybrid Service from VMware, just as they can for other services.

    Partners that endorsed the platform included Tibco, Microsoft, SAP, Puppet Labs (see disclosure) and Pivotal, VMware’s stepbrother that is co-owned by VMware and parent company EMC. VMware holds a significant stake in Puppet. “VMware will be the first and only cloud provider to provide SAP software, including HANA, as a subscription service on premise and in the cloud,” Fathers said.

    The vCloud Hybrid Service actually has two flavors: a Dedicated Cloud with “physically isolated and reserved compute resources” for predictable workloads, and a multitenant Virtual Private Cloud for seasonal workloads that require greater elasticity. The former service will start at 13 cents an hour for a 1 GB virtual machine with a single processor on an annual basis, while the latter will start at 4.5 cents an hour on a monthly basis. But those prices will come as year-long licenses. Fathers said he expects customers to use both in parallel. The pricing model helps, but it doesn’t provide insight into the cost of storage and networking services.
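
    Some rough arithmetic on those published starting rates, assuming a single 1 GB virtual machine running around the clock and ignoring storage and networking charges, which, as noted, are not covered by these figures:

        # Back-of-the-envelope cost of the published starting rates, assuming a VM
        # that runs 24x7. Storage and networking charges are not included.
        HOURS_PER_YEAR = 24 * 365
        HOURS_PER_MONTH = 24 * 30

        dedicated_rate = 0.13  # $/hour, Dedicated Cloud starting price, billed annually
        virtual_rate = 0.045   # $/hour, Virtual Private Cloud starting price, billed monthly

        print(f"Dedicated Cloud, 1 GB VM, one year: ${dedicated_rate * HOURS_PER_YEAR:,.2f}")
        print(f"Virtual Private Cloud, one month:   ${virtual_rate * HOURS_PER_MONTH:,.2f}")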

    vmware price 1

    To provide the infrastructure for the vCloud Hybrid Service in the United States, VMware will pull from infrastructure in Santa Clara, Calif.; Las Vegas; Dallas; and Sterling, Va. Fathers said the plan is for “an asset-light model” in which the facilities in those cities are “third-party data centers.”

    Beta customer Julio Sobral, senior vice president of post production for Fox Broadcasting, said the movement of certain applications to VMware’s public cloud, particularly collaboration tools for dispersed employees, had, in fact, been “seamless.”

    vmware price 2

    Other beta customers include the state of Michigan; the city of Melrose, Mass.; and Planview. The question is how many of VMware’s roughly 500,000 customers will move onto the service too, rather than keep using IaaS providers such as Amazon Web Services for certain applications, as some customers have.

    There could also be friction with existing VMware cloud partners. They have been underwhelmed by the offering, and the service-provider partners not selected to host it now feel they are competing with their supplier, as my colleague Barb Darrow has noted.

    Disclosure: Puppet Labs is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.


  • Orchestrate.io gets $3M to crunch many kinds of data in the cloud

    As a co-founder of Basho Technologies, the company behind the Riak database, Antony Falco observed that companies already had lots of databases. It makes sense, given that not every database was created equal. But Falco noticed inherent structural problems with using multiple databases.

    Keeping data isolated inside any one database prevents companies from making discoveries across multiple data sets. Plus, he said, at least one database tends to have trouble at any given time.

    Earlier this year, Falco started Orchestrate.io to respond to these issues. The company provides a single API through which customers can send data from multiple databases. This way, customers can join, say, geolocation data, time-series data and tweets, drawing graph relationships and doing full-text searches on top of it all.
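
    Purely to illustrate the idea of one interface fronting several data models, and emphatically not Orchestrate.io’s actual API, which was still in private beta, a unified store might be sketched like this:

        # Hypothetical illustration of a single interface fronting several data models;
        # this is NOT Orchestrate.io's API, just a sketch of the idea described above.
        class UnifiedStore:
            def __init__(self):
                self.key_value = {}   # plain documents
                self.events = []      # time-series entries: (key, timestamp, payload)
                self.relations = []   # graph edges: (from_key, relation, to_key)

            def put(self, key, doc):
                self.key_value[key] = doc

            def append_event(self, key, timestamp, payload):
                self.events.append((key, timestamp, payload))

            def relate(self, from_key, relation, to_key):
                self.relations.append((from_key, relation, to_key))

            def neighbors(self, key, relation):
                return [t for f, r, t in self.relations if f == key and r == relation]

        store = UnifiedStore()
        store.put("user:1", {"name": "Ada", "city": "Portland"})
        store.append_event("user:1", "2013-05-22T10:00:00Z", {"tweet": "hello"})
        store.relate("user:1", "follows", "user:2")
        print(store.neighbors("user:1", "follows"))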

    To build out the infrastructure to do this with multiple cloud providers and bring on customers, Portland, Ore.-based Orchestrate.io is taking on $3 million in seed funding. True Ventures is leading the round (see disclosure) alongside contributions from Frontline Ventures and Resonant Venture Partners.

    Some companies were already testing out the Orchestrate.io service, although Falco declined to identify them. He said the price of using the service is tied to the number of queries per second customers make.

    When it comes to competition, Falco said, “Certainly there’s Amazon.” On Amazon Web Services, customers can get a slew of tools, from RDS for relational databases to DynamoDB for nonrelational work to Elastic MapReduce for Hadoop. And, of course, if companies don’t buy into the Orchestrate.io logic, existing databases constitute challengers. But Falco has an answer for that. “Databases can do most of these queries,” he said. “The problem is, they can’t do them efficiently and at scale at the same time.”

    Falco thinks that, once the company comes out of private beta, Orchestrate.io has the potential to be a go-to provider for lots of different kinds of data-analysis services, just as companies look to Twilio for voice services and SendGrid for email. “(There’s a) shift of operational burden from a corporation or the end user to a service provider,” he said. “I think we’re just part of the trend. You’re going to continue to see that over the next several years.”

    Disclosure: Orchestrate.io is backed by True Ventures, a venture capital firm that is an investor in the parent company of GigaOM. Om Malik, founder of GigaOM, is also a venture partner at True.


  • With push for data democratization, Intel tries to play both sides of the big data debate

    Intel has been taking steps in recent months to promote the democratization of consumer data — the idea that consumers should be able to check out the information that companies are collecting on them — even though it might not be immediately obvious how the chip maker could generate revenue through the initiatives, according to an article from the MIT Technology Review.

    Intel Labs is engaged in a research partnership with wethedata.org, a “hub of conversation, news, and events celebrating innovative communities who are each focused on democratizing data in their own way.” Intel also has contributed to a hackathon for building tools consumers can use to understand publicly available data, and it’s sponsoring the National Day of Civic Hacking for getting people across the country to come up with ways to analyze open data sets.

    It’s somewhat surprising for Intel to be pushing for data democratization. Intel chips are at the heart of servers that companies and government organizations use to crunch heavy loads of consumer data. And Intel also has come out with its own Hadoop distribution for handling big data.

    The sort of rhetoric floating around the Wethedata.org site — “we are the customer, but our data are the product. … How do we regain more control over what happens to our data and what is targeted at us as a result?” — seems more likely to come from a nonprofit or even a government agency than from a collaboration that includes a corporation such as Intel. But that might help explain why the efforts are noteworthy.

    Intel isn’t the only one active on this front. Andreas Weigend, a former chief scientist at Amazon.com, often raises the topic with executives in his consulting work with big companies around the world, partly because some data is simply wrong, and consumers ought not be penalized for it. And as the MIT Technology Review article notes, legislators have taken stabs at the issue, albeit with little success so far.

    Now that Intel is on board, perhaps more tech companies will join in and prompt the tide to change. And if that happens, interesting questions would arise, such as how exactly companies would roll out more data on their customers, whether companies should give consumers access to algorithms that factor into decision-making, and how much visualization software and other tools should be made available.

    Feature image courtesy of Shutterstock user Lasse Kristensen.


  • New networking features make hybrid clouds possible on Google Compute Engine

    Toward the tail end of Google I/O on Friday, Sunil James, a Google product manager (on left in picture), and John Cormie, a software engineer focusing on networking for Google Compute Engine (GCE), showed off new network capabilities for GCE that can enable hybrid clouds running between GCE deployments and on-premise data centers.

    GCE customers are now able to do things like establish virtual private Layer 3 networks and assign static public IP addresses to instances, James said. Connecting networks will also become possible. And a load-balancing service is on the way “as part of the native fabric for Google Compute Engine,” James said.

    Developers interested in trying out GCE load balancing can fill out a form to do so. Developers can also sign up for early access to all emerging Google Cloud Platform features.

    The load balancing and routing services are the sorts of things that could help more businesses make the decision to try real projects on the newly publicly available Infrastructure as a Service (IaaS) piece of the Google Cloud Platform.

    And the new capabilities move Google a few steps further in its campaign to become a top, widely used IaaS provider — if not one day bigger than Amazon Web Services, then at least No. 2. That position already seems feasible for Google as it is.


  • The future, according to Google

    During a fireside chat with four Google Research heavyweights — artificial-intelligence guru Peter Norvig, Google Glass guy Thad Starner, MapReduce paper co-author Jeff Dean and distributed computing wizard Alfred Spector — on Thursday, an audience member sucked up the air in the overcrowded room when he asked “where we’ll be 10 years from now.”

    Without a doubt, the panel, at Google I/O, was an apt forum for that question. If any company is innovating in a big way, it’s Google, with recent advancements in voice recognition, wearable technology, quantum computing and other realms. So it wouldn’t be surprising to see some of the Google luminaries’ ideas actually come into being. Here’s what they had to say:

    “Speech recognition and vision are showing dramatic improvements over the last few years. We just need to scale them up and make them work better. … They’re (mobile devices) going to vanish into much smaller devices that you carry around and aren’t full-size laptops.” — Jeff Dean

    “We’re getting more contextualized. The computer is not what you go to to use. It’s something that’s around you all the time and sort of more integrated into your life, rather than a separate thing.” — Peter Norvig

    “I would argue that we’re currently living the singularity, where the tool stops and the mind begins will start becoming blurry.” — Thad Starner

    So there you have it, folks — the computer as a smaller and more natural extension of the human brain. Now, let’s set the kitchen timer for 10 years and see what actually happens.


  • Why Google thinks the GPU is the engine for the web of the future

    For years, the internet provided users with static clumps of information stored and refreshed in databases on the back end. But as interactive games, animations and fancy scrolling have become popular, graphics have become fancier and screens richer. Throughout this evolution, hardware components on users’ devices have gotten more capable, and now Google seems to think the GPU is the best tool for the internet of tomorrow.

    At a talk at the Google I/O conference on Thursday, Googlers Colt McAnlis (pictured), a developer advocate working on Chrome games and performance, and Grace Kloba, the technical lead on Chrome for Android, gave developers some tips for making better use of the GPU. Doing some of these things can help websites display their graphics as soon as possible and become optimized for “touch events” such as scrolling without sacrificing performance.

    Chrome developers can split up many website components into GPU layers, each of which can be subdivided into a bunch of tiles for an entire page — think of a grid overlaid on top of the page. Instead of asking the CPU to upload the pixels to the whole screen area, the GPU caches those tiles inside its memory when a page is accessed and then serves up select tiles in response to user behavior, such as scrolling. This approach “allows the CPU to drink margaritas and essentially chill out while the GPU does all the heavy lifting,” McAnlis said.

    But there’s a tradeoff to this layering approach. Making many layers can result in entirely too many tiles, and the GPU “has a static, non-growable memory resource in its texture cache,” McAnlis said. “If the cache is full, you have to push old tiles out of the cache before you put new tiles in.” And that can result in a decrease in performance.

    In short, developers have to figure out the right number of layers for each page. For example, if a user ends up not using a tile that is loaded and cached on the GPU, it’s a waste of a GPU compute cycle. Developers can learn more about the use of GPU inside Chrome in the Chromium Project’s design documents and get insight into GPU use with the Trace Event Profiling Tool. Developers can also run experiments through Chrome, McAnlis said.
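
    A back-of-the-envelope calculation shows why piling up layers can exhaust the texture cache. The 256-by-256 tile size and 4 bytes per pixel below are common figures used here as assumptions, and the viewport size and layer count are invented:

        import math

        # Rough memory cost of compositor layers. The tile size (256x256) and
        # 4 bytes per pixel (RGBA) are assumptions for the arithmetic; the
        # viewport size and layer count are invented.
        TILE = 256
        BYTES_PER_PIXEL = 4

        def layer_bytes(width_px, height_px):
            tiles = math.ceil(width_px / TILE) * math.ceil(height_px / TILE)
            return tiles * TILE * TILE * BYTES_PER_PIXEL

        per_layer = layer_bytes(1280, 800)
        print(f"one full-viewport layer: ~{per_layer / 1e6:.1f} MB")
        print(f"ten such layers:         ~{10 * per_layer / 1e6:.1f} MB of texture memory")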

    To demonstrate good use of layers, McAnlis pointed, perhaps unsurprisingly, to a Google site, the mobile version of the Google I/O conference site. “Look at the source code,” he said. “It’s a great example.” The header is its own layer, he said, and it expands and contracts and adjusts the times of conference sessions as the user scrolls up and down the page.

    The winners on the web over the next few years will be the sites that can serve rich, compelling content as fast as possible. It looks like Google believes taking full advantage of the GPU might be the best way to accomplish that goal.
