Author: Jordan Novet

  • Google, NASA quantum computing project could bring stronger machine learning to the masses

    It’s been almost two decades since Peter Shor came up with a breakthrough algorithm for finding the prime factors of a number with a quantum computer, sparking great interest in quantum computing. But commercial adoption has been pretty much nonexistent. On Thursday, though, Google came forward with news that it’s launching a Quantum Artificial Intelligence Lab that will include a quantum computer, apparently making it only the second company to pay for one. The development suggests that quantum computing could finally be taking off.

    Earlier this year Lockheed Martin shared details of its implementation of a D-Wave Systems quantum computer, which reportedly cost $10 million: The contractor is using the computer to develop new aircraft, radar and space systems.

    Now Google is taking steps toward incorporating more quantum computing into its operations with the Quantum Artificial Intelligence Lab, which will be located at the NASA Ames Research Center in Moffett Field, Calif. Researchers from the Universities Space Research Association will be able to use the machine 20 percent of the time, Forbes reports. That could lead to lots of interdisciplinary thinking and collaboration.

    For Google, though, the goal of the initiative is to make strides in machine learning, according to a Thursday Google Research blog post. The best results could trickle down to end users, perhaps in search results and speech-recognition applications.

    Quantum computing could mean smarter smartphones

    Google has already assembled machine-learning algorithms that involve quantum elements, Hartmut Neven, a Google director of engineering, explained in the post:

    One produces very compact, efficient recognizers — very useful when you’re short on power, as on a mobile device. Another can handle highly polluted training data, where a high percentage of the examples are mislabeled, as they often are in the real world.

    It’s not hard to imagine how quantum computing could inform machine learning on a smartphone with just a drop of battery life left. It could be that a smarter smartphone one day will take a minuscule amount of input and determine with a high probability who a user wants to talk to or what information it needs right away, rather than forcing the user to cycle through a string of commands and risking the death of the battery altogether.

    The applications might have grown out of Google’s earlier partnership with D-Wave, which came to light in a different blog post from Neven in 2009.

    Google has already used machine learning to recognize faces and other things in photos and videos. New technology Google executives talked about at the Google I/O developer conference in San Francisco on Wednesday also appears to use machine learning to stitch together photos and clean them up.

    What Google has learned so far is that the best results come from blending regular binary computing, using ones and zeros, with quantum-style computing. Quantum computing accommodates the space between a one and a zero with quantum bits of information, or qubits. It can express likelihood as well as take shortcuts by approximating when handling certain kinds of workloads. Given what Google has observed thus far, it could decide to build hardware combining quantum and classical computing capabilities.
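
    To make that concrete with standard notation (a textbook formulation, not anything specific to Google’s post): a qubit’s state is a superposition of the two classical values,

    $$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

    and measuring it yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$ — the sense in which a qubit expresses likelihood rather than a definite one or zero.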

    For now, though, Google is diving deeper into quantum computing with the D-Wave machine. The move could kick off a sort of arms race for webscale companies to buy quantum computers and come up with new notions by way of probabilistic logic. In this way, Google could help push the development of quantum computing much like its invention of MapReduce changed the way firms do distributed data processing.

    In any case, quantum computing has a long way to go before reaching commercial viability. That could take decades (so far it has). But because the organization at the helm of the quantum research is Google and not IBM or Bell Labs, regular people could start seeing much more of the advantages in just a few years’ time, which in turn could drive commercialization.

    Feature image courtesy of Shutterstock user pixeldreams.eu.


  • Google gains appeal for cloud services, but there’s this company called Amazon

    With Google opening up its Google Compute Engine (GCE) for anyone and expanding the feature set of its Google Cloud Platform, the web giant appears to have its gaze fixed on easing Amazon Web Services’ lock on the Infrastructure-as-a-Service (IaaS) market. But it won’t be easy, with many startups and enterprises already entrenched in AWS thanks to its early general availability and plethora of services.

    Some developers hanging out at the Google I/O conference in San Francisco on Wednesday thought Google could be a viable option for certain workloads going forward, but they don’t see it as the “it” cloud for today. And that might be all right, because adoption of IaaS clouds is still far from complete, and because Google is indicating that it has plenty of ideas for enhancing the Google Cloud Platform.

    “We’ll continue to add new services which lower the amount of tedious grunt work that developers have to do,” Greg DeMichillie, a director of product management for the Google Cloud Platform, told members of the press in a roundtable discussion following the Google cloud announcements. Better networking services could be one area for innovation, he suggested.

    Indeed, my colleague Barb Darrow has expressed on multiple occasions that Google’s position in the IaaS world is worth watching. The trouble is, the road ahead looks steep.

    The current cloud market

    A July-October 2012 survey of 100 IT professionals at medium and large enterprises from 451 Research showed that 19 percent of those running IaaS deployments were doing so on Amazon, considerably more than on other options. Verizon came in second with 8 percent, followed by Rackspace with 5 percent. Google apparently held 1 percent or less. Looking toward the future, respondents named the vendors they expected their companies to move to, with CenturyLink, Amazon and Verizon coming out on top. Google had 1 percent or less there, too.

    Why the lack of presence from Google in the standings? For one thing, “Amazon has been pushing this game along for a long period of time,” said Peter ffoulkes, research director at 451 Research. The other factor is that not many enterprises are ready to run on public clouds. ffoulkes fully expects Google to show up in the rankings in forthcoming surveys, but it’s too early for him to say when.

    To be fair, since the 2012 survey wrapped up, Google has added to the Google Cloud Platform, with moves such as adding capabilities to BigQuery. It’s also acquired Talaria for software that could make Google server use more efficient. And remember that Google Compute Engine launched less than a year ago and just became generally available today.

    Google has serious work to do in making the Google Compute Engine a top choice for enterprises. For one thing, Google has not (yet) opened a marketplace of services on par with AWS. Such a step could help Google in its efforts to drive more developers onto GCE.

    What developers think

    Google has a few opportunities to gain market share with GCE. One startup I spoke with has run workloads on Google App Engine (GAE) for a few years but still does data analysis and data mining on on-premises servers. Since GAE and GCE hook in well with one another, the startup is looking at moving the on-prem activities to GCE. Another area of opportunity is around using GCE for narrowly tailored high-performance workloads that scale out. Engineers at one major retailer in the United States said they were exploring public clouds for certain jobs, and Google Compute Engine is a possibility for exactly this sort of thing. Generally speaking, strong results could lead to larger deployments beyond tests and lower-priority applications.

    Developers praised Google for introducing pricing that is granular down to the minute (after a 10-minute minimum) instead of billed by the hour, and for increasing the maximum size of a persistent disk from 1 TB to 10 TB.
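
    As a rough sketch of how that billing scheme works — the per-minute rate below is hypothetical; only the per-minute granularity and the 10-minute minimum come from the announcement:

    ```python
    import math

    def gce_billed_minutes(runtime_minutes: float) -> int:
        """Bill per minute, rounded up, with a 10-minute minimum."""
        return max(10, math.ceil(runtime_minutes))

    RATE_PER_MINUTE = 0.132 / 60  # hypothetical rate, for illustration only

    for runtime in (3, 10.5, 47):
        billed = gce_billed_minutes(runtime)
        print(f"{runtime} min used -> billed {billed} min (${billed * RATE_PER_MINUTE:.4f})")
    ```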

    But just as AWS has had notable service issues, Google App Engine, the Platform-as-a-Service (PaaS) piece of the Google Cloud Platform, has had multiple service disruptions of its own, and that doesn’t help adoption.

    Plus, several developers noted that Amazon was the forerunner in the IaaS market, which seems to be a major reason why Google faces a steep road. One developer said his hosted VoIP company just moved from on-premises servers to AWS. Translation: Too little, too late, Google.

    The lock-in question

    However long it takes for Google Compute Engine to get on the board in the IaaS conversation, the ease of migration from AWS and other IaaS providers to Google will eventually become a hot topic. What sort of lock-in issues could arise? That’s been a good question since cloud computing took off a few years ago and as options have proliferated. Amazon in particular has faced criticism on the lock-in point.

    Performance is a whole other matter. Will GCE be a kind of exotic car of public clouds? Different customers will have different answers to that question, as not all workloads were created equal. Benchmarks attempt to give some insight into this, but they have drawbacks.

    As developers try spinning up instances on GCE and do comparisons for themselves, the subject of price will come up. Google foresees more price cuts to its cloud services, as it’s in the company’s best interests to make its infrastructure as efficient as possible. That could entice more enterprises to join in. At the same time, AWS is likely to keep growing, slashing its prices and speedily bolting down enterprise customers. (To get a peek at what Amazon has in mind, check out GigaOM’s Structure conference in San Francisco on June 19, when Werner Vogels, Amazon’s chief technology officer, will take the stage.)

    However the game plans play out, Google is optimistic at the moment. “It’s obviously a hugely important use case for us, a hugely important customer set,” DeMichillie said of enterprise users. “It’s early days, but we think over the next 12 months, we expect to see a pretty big upswing in that.”


  • How Google is setting the new search standard with voice and knowledge graph

    Google’s search capabilities are king, and they’re getting richer now with features including more powerful voice recognition on mobile devices and desktops.

    At its Google I/O conference Wednesday, company execs introduced “conversational search” capabilities. As Google implements its “hotwords,” users will no longer need to click the microphone in the search bar to start using voice recognition. All users have to do is say, “OK, Google,” and then speak commands. Google relies on natural language processing to figure out what users want to do and then serves up results.

    Combine that with Google search’s ability to go beyond serving up graphs and other data in response to user questions and actually weave in additional information Google thinks users are looking for. For example, if you search for China, Google will not only show changes in population over the decades, but it will also graph the countries China’s population is often compared to — India and the United States.

    This is possible as Google keeps expanding its Knowledge Graph, which now has more than 570 million entities, such as people, places and things, said Amit Singhal, a senior vice president and Google Fellow.

    Coming soon: More knowledgeable searches

    The Knowledge Graph operates with searches in English and eight other languages. Starting today, Singhal said, it will be available in simplified Chinese, traditional Chinese, Polish and Turkish.

    Google is also integrating personal data into searches in Chrome on desktops and laptops, which makes loads of sense. Flight reservations, restaurant reservations, package deliveries and other user-generated information can be rapidly pulled up in the familiar interface of Google search results. That could put an end to digging through emails or paper for this sort of information, saving users time.

    Johanna Wright, vice president of search and assist for mobile at Google. Source: Janko Roettgers

    Google has provoked lots of buzz and some concerns with its Google Now feature on mobile devices. The application will soon allow users to set reminders — to call someone, buy something — and expect them to surface only at the right time.

    Parlaying personal and general data

    Johanna Wright, vice president of search and assist for mobile at Google, took some of these new and upcoming features for a spin. As an example, she said she wanted to plan a day trip to Santa Cruz, Calif. So she said “OK, Google” — bringing Google to attention — “show me pictures of the Santa Cruz boardwalk.” Up came multiple pictures in a horizontal bar at the top of search results. She wanted to know the length of the trip and said, “OK, Google, how far is it from here?” Google figured out that “here” was her current location, in San Francisco, and “there” was Santa Cruz and displayed a map and spoke back that the drive would take an hour and 21 minutes.

    She then asked for seafood restaurants and got a list. Then she asked Google a tough question: “How tall do you have to be to ride the Giant Dipper?” Google came back with, “You must be at least 4 feet 3 inches tall to ride the Giant Dipper.” “Nice,” she said. “Looks like my son can go on.”

    On a mobile device, Wright also directed Google Now to send a quick email based on her voice commands, which happened right away, and set a reminder for her to call a friend when she arrives in New York on a business trip. Finally, she was able to tell Google to show the pictures she took during a previous trip. And about 16 pictures came right up.

    The combination of personal data with more traditional search data is a logical next step for Google, which has no shortage of either. While Google Now has critics, it could become more popular with these new features. And how could people — investors included — question Google’s innovations in search, its core product? The voice recognition capabilities make searching still more intuitive and set the bar still higher for everyone else.


  • Google turns up location data usage on Android apps

    Amid all the announcements Google is making at the Google I/O conference in San Francisco this week are three APIs that show Google wants to make the most of the sensors in Android devices and will let developers incorporate rich location data into their apps.

    The first API, called Fused Location Provider, will use very little battery power — less than 1 percent per hour, said Hugo Barra, director of product management for Android — to share Android device users’ locations.

    The second API is Geofencing, which “lets you define virtual fences around geographical areas” and creates triggers whenever a user enters or exits a location. And, get this, users can have “over 100 geofences simultaneously active per app,” Barra said.

    Google’s new Geofencing API. Source: Janko Roettgers
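
    The API itself lives in Google Play Services, but the core trigger logic is easy to illustrate. Here is a minimal, hypothetical sketch of enter/exit detection for a circular geofence — a conceptual stand-in, not Google’s implementation:

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        r = 6371000  # mean Earth radius
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    class Geofence:
        def __init__(self, lat, lon, radius_m):
            self.lat, self.lon, self.radius_m = lat, lon, radius_m
            self.inside = False

        def update(self, lat, lon):
            """Return 'enter' or 'exit' when the boundary is crossed, else None."""
            now_inside = haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m
            event = None
            if now_inside != self.inside:
                event = "enter" if now_inside else "exit"
            self.inside = now_inside
            return event

    # A fence around Moscone Center (coordinates approximate), 200 m radius.
    fence = Geofence(37.7841, -122.4010, 200)
    for fix in [(37.8000, -122.4100), (37.7842, -122.4012), (37.7900, -122.4000)]:
        print(fence.update(*fix))  # None, then 'enter', then 'exit'
    ```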

    Finally, the Activity Recognition API will track users’ physical activities and uses machine learning to determine exactly how users are moving — whether they’re walking, running, riding in a car or just idling, Barra said.

    Taken together, the APIs could let companies gain far more intelligence about their customers through Android apps, without annoying them with user-experience issues like battery drain. At the same time, apps making use of these APIs could make consumers more conscious of how and when they are being tracked — can companies see where customers are all the time? — and could lead to new discussions and best practices around privacy.

    Developers interested in using these APIs can sign up to get access to them through Google Play Services today.


  • How Stanford’s Andreas Weigend leads by example in pursuit of data symmetry

    Advances in technology tend to lump people into three categories: the indifferent, the Luddites and the trendsetters, who, by virtue of their behavior and the beliefs they share with others, influence the future. When it comes to how companies use the ever-growing supplies of data on consumers, one of those trendsetters is Andreas Weigend, once the chief scientist at Amazon.com and now a lecturer at Stanford University.

    At a talk alongside other data scientists in San Francisco in April, he brought up the notion of a single place where consumers could see the data companies collect. This sort of thinking suggests that Weigend is part of a group of people defining what data sharing should look like in the years to come, and how both companies and consumers will have to adapt.

    Perhaps another indication is that he’s got fans. After the April talk, he took a few students and friends out to dinner at a family-style Italian restaurant. Long after the meal, some students lingered and asked him questions, as if he were an oracle or celebrity. And it is easy for people to listen to him talk for hours. He frequently makes references to foreign people, places and companies and seems to take it for granted that you are just as worldly as he is. If you engage him in conversation, you will immediately receive a vigorous response, as if he is pre-programmed to share his views, so as to have the best shot at getting others on board. This is not a man who keeps his hunches to himself. He looks the part of an idea guy, with blond curls fizzing up from his head.

    Andreas Weigend. Source: Flickr user alvy.

    When your data is no longer your data

    When it comes to being transparent with data, Weigend thinks Amazon has done a pretty good job. “One of the things we worked for at Amazon was to make it trivially easy (to show) all of the things you clicked on,” he said. The site also lets customers see what they purchased. Those are key data points for Amazon’s recommendation engine, which Weigend describes as a grid — if you view or purchase one item and then another, Amazon can line up that behavior with that of other users and then serve up items you might like.
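
    A minimal sketch of that grid idea — plain item-to-item co-occurrence counting, a simplified stand-in for (not a reconstruction of) Amazon’s actual engine:

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical view/purchase histories, one list of item IDs per user.
    histories = [
        ["book_a", "book_b", "lamp"],
        ["book_a", "book_b"],
        ["book_b", "lamp", "desk"],
    ]

    # The "grid": how often each pair of items shows up in the same history.
    grid = defaultdict(int)
    for items in histories:
        for x, y in combinations(sorted(set(items)), 2):
            grid[(x, y)] += 1

    def recommend(item, top_n=3):
        """Rank other items by how often they co-occur with `item`."""
        scores = {}
        for (x, y), count in grid.items():
            if item in (x, y):
                other = y if x == item else x
                scores[other] = count
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("book_a"))  # ['book_b', 'lamp']
    ```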

    Amazon also uses customers’ purchase history to help improve the user experience for purchases that customers attempt to make in real time. “If you buy a book which you have bought before, Amazon tells you, ‘Are you sure? You bought this item already, on December 17, 2007,’” he said. “It’s trying to help you minimize regret. It’s trying to help you make a better decision. This is how we refine the raw data, the data you created, in order to help you make a better decision.”

    Other companies are not so revealing. “Some airlines don’t remind you, ‘Look! Your miles are expiring in three months,’” he said. He has nothing against airlines. It’s just that he flies a lot — he splits his time between homes in San Francisco and Shanghai and attends many conferences each year — and has plenty of examples to share in the context of flight.

    An airline customer-service representative won’t permit Weigend to hear about his previous customer-service calls over the phone or see that data on the screen behind the counter at an airport, even though Weigend was the one who helped the airline create that data. And flight attendants might use fake names on name badges, even though they could conceivably access customers’ names. Weigend has a word for this sort of peculiarity: asymmetry. He is trying to fight against it.

    As a consultant, he tries to change the way companies generate, analyze and share data about users and customers, among other things. That might mean advocating for data symmetry. It also might mean motivating companies to assign costs to problems such as unresolved customer calls and then figure out ways to improve the situation. His ideas stem from experiences such as ensuring that particle-physics data wasn’t being thrown off with dirt on a photo plate at CERN and incorporating external data sets to arrive at new insights while reviewing financial data for Goldman Sachs and other companies.

    Voice recordings, itineraries, maps

    Still, the world is not yet as Weigend feels it should be. How does Weigend live in this imperfect world so lacking in data symmetry? He leads by example, in a sense.

    On his personal website, Weigend lists his flight reservations. When he does call an airline customer-service representative, as soon as he hears someone say this call may be recorded, he retorts that he will most certainly be recording the call. He carries around a voice recorder, and he has a mic that can hide underneath his shirt.

    Recording customer-service calls might come across as a bit awkward. But should it really push our social buttons? One day it could be common for companies to share that sort of data with customers, and then it will not seem so surprising.

    Weigend also uses mobile devices in combination with Google Latitude for keeping tabs on his whereabouts, going back several years. He makes current location data public on his website and shares it with friends.

    When it comes to Latitude, he knows he is an “edge case.” But his father, Johann Weigend, spent years as a political prisoner in East Germany, where the government was convinced he was an American spy. “I believe in having people know where I am. If something happens to me, somebody at least knows where I am,” said Weigend, a native of Germany, no stranger to issues of personal privacy.

    When data is just wrong

    In using geolocation, Weigend has become aware of a problem he calls sketchy data. He believes users should be able to correct data, because it’s not always right. At least once, Google Latitude has shown Weigend was in one place (Weehawken, N.J.) when he was actually in another (the west side of Manhattan). Google might think Weigend is in a head shop when in fact he is visiting his friend who lives above the head shop. And it’s not unusual for county officials to enter real-estate data into computer systems wrong, he said.

    Some websites permit users to change their data. Amazon.com, for one, lets customers remove items from their purchase history. It also asks if something is a gift, so the system won’t let gifts skew its picture of users’ actual preferences. Weigend likes those options a lot.

    Where personal and business meet

    How do his personal patterns overlap with his perspectives about what companies should do? It might come down to the best way to help people and companies and engender trust among all.

    “Getting people to think about the amazing world of big data, that’s more about Hadoop and all that poop,” he said. “It really is about the questions that we ask. What world do we want to create? And that’s, you know, my little part in this world.”

    Feature image courtesy of Flickr user aweigend.


  • Software AG makes app development easy for business users with Live software

    Software AG, the German software company, wants to make it easy for people who can’t write code to plan, build and run their own business applications, just as people with programming chops can build on top of Platform-as-a-Service (PaaS) offerings such as Engine Yard and Heroku. Toward that end, it’s unveiled a new line of products, Software AG Live.

    The software line includes a trio of components that function on their own but can work well together: a tool for collaborating and laying out the functions, processes, inputs and outputs of an application; a system for assembling and tweaking pieces of the application itself; and a vehicle for integrating data from existing applications. The applications that come out of Software AG Live can also run on mobile devices, which falls in with the trend of using PaaSes to build mobile apps.

    AgileApps Live is the Platform-as-a-Service (PaaS) component of the new Software AG Live product line.

    With this PaaS, “subject-matter experts are now empowered to build their own solutions and apps,” said Ivo Totev (pictured), head of Software AG’s cloud business unit and a member of the company’s executive board.

    The platform is available now, and the other pieces are on the way. They will all be able to run on the Software AG cloud hosted on Rackspace, a spokesman said, but can be deployed on other clouds or on premises.

    Last month Software AG said that it had acquired LongJump, which previously provided part of the new software bundle. Now, a few weeks later, Software AG is turning around and announcing the full line under a new name.

    “This really opens up some new potential audiences to Software AG,” said John Rymer, a Forrester Research VP and principal analyst focusing on application development. “If they can get the integration right, they can actually offer a pretty broad spectrum of development experiences and runtimes compared to the competition.” Competitors in the area of visual and cloud-based platforms for application development include Mendix, Rymer said.


  • Wal-Mart gets PaaS and social software chops through OneOps, Tasty Labs buys

    It’s clear that Wal-Mart Stores wants to stay on top as a major online retailer in the United States and abroad, as it takes steps to turn stores into fulfillment centers and bolster its virtual capabilities. What hasn’t been clear is how that transition will look, or how long it will take.

    @WalmartLabs gave people a glimpse at its playbook on Tuesday by disclosing in a blog post the acquisition of OneOps, a finalist in the LaunchPad competition at GigaOM’s 2012 Structure conference, as well as Tasty Labs, whose CEO, Joshua Schachter, founded Delicious.

    Terms of the acquisitions were not disclosed.

    According to the @WalmartLabs blog post, OneOps has “developed a Platform-as-a-Service (PaaS) capability that enables us to significantly accelerate our PaaS and Private Cloud Infrastructure-as-a-Service (IaaS) strategies.” The OneOps acquisition could strengthen Wal-Mart’s chances of rapidly building, tinkering with, testing and deploying new applications onto walmart.com and other online retail sites affiliated with the company, such as samsclub.com. The unification of the back-end components of many sites is an ongoing project for Wal-Mart.

    As for the Tasty Labs buy, it could lead to additional social experiences for site visitors. Wal-Mart has been incorporating social elements into its sites, such as pins from Pinterest as well as recently reviewed and fast-moving items. But with Tasty Labs’ Jig product, customers have been able to type in their needs and get responses about products that could help meet said needs.

    However Wal-Mart ends up making use of OneOps and Tasty Labs, the company could well keep acquiring companies as it strives to enhance its hardware and software arsenal. Wal-Mart might find a new acquisition target at this year’s Structure conference, coming up on June 19-20 in San Francisco, where another batch of companies will compete in the LaunchPad event, including Factor.io, Mertica and SaltStack.


  • Data center space in parts of New Jersey is 4 times the cost of space in Manhattan

    Companies are willing to pay $600 or more per square foot to locate their gear close to high-priority data centers in New Jersey that host the Nasdaq and the New York Stock Exchange — a rate four times the cost of square footage across the river in Manhattan, according to the New York Times, which covered the price differential as part of a story on Equinix and other data center operators.

    The story shows that, like in all real estate, finding a home for your servers is all about location, location, location. In this case, it’s not school districts or an urban core drawing customers — it’s proximity to data. As we’ve explained, there are benefits to locating auxiliary data and services in close proximity to big repositories of financial data. It can keep latency (and bandwidth costs) low, while making big-money trades as speedy as possible and rapid data analytics as close to real time as can be.

    The law of data gravity at work in New Jersey, where major data center construction and expansion has been forecast, has also played out to some extent in northern Virginia, where Amazon Web Services’ US East infrastructure runs, as well as in New York, even after superstorm Sandy.

    Beyond reporting the New Jersey phenomenon, the Times shows that the circumstances are ripe for data center operators such as Digital Realty Trust and Equinix to function as quasi-utilities, even though they are not regulated as utilities. They negotiate power deals with utilities and then resell power to customers that want to run servers inside the data centers. And the data center companies can post robust profits when customers agree to pay for, say, double the amount of power they typically use.

    As much as those profits might bode well for Equinix and its ilk, power could turn out to be an issue. Energy costs could increase, and outages can happen. And as the article points out, demand for power, in New Jersey and elsewhere, could keep increasing beyond the capacity of the data centers, which means the data center companies might not be able to use their facilities to full effect.

    It could be that data gravity begets data center construction. If that’s the case, places such as New Jersey could see data center construction for years to come.


  • Google I/O sensors will detect motion and generate data for real-time visualization

    While there will be no shortage of smartphone-equipped developers and media recording the goings-on at the Google I/O developer conference later this week, Google plans on conducting its own experiments. To get the most out of the event at the Moscone Center in San Francisco, it will deploy a bunch of Arduinos throughout the venue to detect humidity, motion, sound and temperature.

    According to a Monday blog post from Michael Manoochehri, a Google developer program engineer, Google will take the data coming in from the Arduino boards and visualize it all in real time with Google Cloud Platform services such as Google Compute Engine and BigQuery. And it’s no teensy-weensy data set:

    “Altogether, the sensors network will provide over 4,000 continuous data streams over a ZigBee mesh network managed by Device Cloud by Etherios.”

    The visualizations will be on display on screens during the conference. And Google said it will make the Cloud Platform code and the resulting data available in open source.

    O’Reilly Media has used Arduinos at events for similar purposes before, as I reported in February. How are the deployments different? For one thing, Google uses the Google cloud — surprise, surprise — while O’Reilly has used Amazon Web Services. The question is whether the project will persuade non-Google developers to try using the Google Cloud Platform for their own programs to crunch data generated by sensors.


  • Dyn picks up mobile monitoring company Trendslide and looks toward devops

    Plenty of companies have woken up to the reality that IT administrators, executives, salespeople and others want to use their mobile devices to get a sense of their operations. Now Dyn, which provides DNS and email-delivery services for enterprises, has bought Trendslide, which has built out simple mobile dashboards for keeping track of sales data and other inputs.

    But Dyn doesn’t want to offer that to its customers; it will transform the app into a tool for devops. In other words, instead of integrating with Salesforce.com, Marketo, Google Analytics and other data sources, users will be able to bring together application-performance management data from providers such as New Relic and Compuware’s Gomez alongside analytics on domain-name server queries and big email campaigns. That sort of mobile app could make it easy for devops folks to see from home whether applications are operating without issue, email campaigns are working as intended and DNS queries are flowing through data centers within the bounds of what customers have signed up for. If there are issues, devops should be alerted that they need to act.

    The acquisition, whose terms were not disclosed, ties in nicely with website-performance analytics technology from Verelo, which Dyn also has acquired.

    Cory von Wallenstein, Dyn’s chief technology officer (pictured), was the original investor in Trendslide. He will speak about upgrading infrastructure while maintaining uptime at GigaOM’s Structure conference in San Francisco on June 20.


  • Twitter reportedly plans to expand Sacramento data center space

    Twitter is leasing more data center space at RagingWire’s 500,000-square-foot campus in Sacramento, Data Center Knowledge reported Friday, attributing its report to “industry sources.” There’s no square-footage figure available, but power use for the expansion is pegged at “more than 20 megawatts.” And so the mystery about Twitter’s infrastructure continues.

    The reported expansion comes on top of Twitter’s existing infrastructure footprint, which apparently includes space at the Sacramento facility and might also include space at the data center custom-built for Twitter, which the company moved into in 2011. At that time, former Twitter vice president of engineering Michael Abbott wrote that the social network had arrived at its “final nesting ground.” But it seems that nesting ground was not big enough.

    Like Facebook, Twitter does not see flat traffic. The infrastructure needs to accommodate traffic spikes — think of how people clung to Twitter during Hurricane Sandy — and having more space can keep Twitter ahead in those types of situations.

    Keeping latency low as monthly active user count increases — it was at more than 200 million in December, up from 100 million in September 2011 — is likely a high priority, too.

    More data center space also makes for better backup capability. Remember when Sandy proved the importance of being ready for disasters with flooding and power outages on the East Coast? A bigger footprint for Twitter translates into lower likelihood of a fail whale.

    Twitter’s infrastructure expansion comes following news of other webscale players bumping up their respective footprints. Facebook reportedly will build a new data center in Altoona, Iowa, with the first phase measuring 476,000 square feet and costing $300 million. Also in Iowa, Google said it would expand its data center in the city of Council Bluffs, and Data Center Knowledge reported last month that LinkedIn is expanding its data center space, too.

    Twitter did not respond to a request for comment.

    The company deserves credit for talking about its features in public and open-sourcing many of them, although it has been cagey about disclosing information about its infrastructure and the causes of service disruptions. We know very little about Twitter’s infrastructure, in contrast to Facebook’s and Google’s installations. In the past it was unclear where the custom-built data center was: the plan in 2010 was for the Salt Lake City area, but then it was reported that Twitter was actually moving servers to Sacramento. Twitter’s status in Atlanta is another unsolved mystery.


  • Google Drive document lists go down, then come back up

    Google Drive had some issues midday Friday, as users took to Twitter to report that they were finding their Drive file lists empty.

    Google acknowledged that issues were afoot by indicating a “service disruption” on its Apps Status Dashboard.

    Google Drive service disruption reported on Google’s App Status Dashboard

    “We’re investigating reports of an issue with Google Drive. We will provide more information shortly,” Google reported.

    Google PR had no further information to share.

    According to the dashboard, the last time Drive service was disrupted was on April 17, for around three hours. Gmail, Talk, Groups, Contacts and other Google products were also affected. The day before that, there had been “a misconfiguration of (the) user authentication system,” which prompted login requests to ping fewer servers than normal. The problem turned out to be a capacity issue, as opposed to a heavy influx of traffic.

    Before the April 17 incident, there were “disruptions” to Google Drive on March 18, 19 and 21.

    Files did show back up in the main Drive view at around noon Pacific time, though, so the disruption did not last long.

    Nevertheless, this sort of event doesn’t help Google’s efforts to bring enterprises on board with Google Apps. It might also hurt Google’s prospects at gaining customers on Google Compute Engine and the Google Cloud Platform, as more enterprises flock to and expand their use of Amazon Web Services and other public clouds.


  • Box acquires Crocodoc to make document previews richer

    While Box seems to be among the leaders in the race to become the Dropbox of the enterprise, it wants to be easy for individuals to use, so as to get their companies to sign up as paying customers. To make that happen, Box is acquiring Crocodoc, which lets developers convert PDFs, Word documents and other files into HTML5 for clear display in web browsers. Terms of the deal were not disclosed.

    “We have to build a consumer-grade experience,” Aaron Levie, Box’s co-founder and CEO, said at a Thursday meeting at its San Francisco office. The deal will be Box’s second acquisition; it acquired Increo in 2009, a company representative said.

    Crocodoc already takes care of this type of document conversion for files on several sites, including the recruiting function on LinkedIn and document sharing on Yammer. Once a document is there, users can see the details of fancy typefaces and add comments on desktops and mobile devices, without Flash or plugins required. Other document functions, like editing, are not yet available.

    “We want to bite off different pieces of that puzzle,” said Ryan Damico, Crocodoc’s co-founder and CEO. “In the end game, we want to cover all of them.”

    Yammer document previewing via Crocodoc technology

    Box will swap out its existing previewing mechanisms for the Crocodoc technology in the next few months. Also coming are new versions of previewing, such as a carousel with pages passing by, a sliding option, a scrolling option and perhaps a page-flipping option, Damico said. Box will also enable developers to keep using the Crocodoc API to upload documents, convert them into HTML5 and then embed the HTML5-enabled content in their own websites.

    “This is what Instagram is to Facebook,” Levie said. “Photos are important to them; documents are deeply important to us, and they’re deeply important to business use cases.”

    While it’s been four years since the previous Box acquisition, Box does want to keep building out its product lineup. “We intend on being very acquisitive,” Levie told me. Box has raised $312 million to date, including contributions from Andreessen Horowitz, Draper Fisher Jurvetson and NEA. Crocodoc has raised more than $1 million, with investors including SV Angel and 500 Startups.

    As Box makes itself into more of a platform than simply a venue for document storage and sharing — last month I wrote about its health care applications and compliance with the Health Insurance Portability and Accountability Act — it also needs to make sure documents show up clearly. It does seem like a no-brainer, which is why it seems like this acquisition should have come earlier.


  • How analyzing Wikipedia page views could help you make money

    Plenty of companies have been looking at software for analyzing private large data sets and combining it with external streams such as tweets to make predictions that could boost revenue or cut expenses. Walmart, for instance, has come up with a way for company buyers to cross sales data with tweets on products and categories on Twitter and thereby determine which products to stock. Here’s another possible data source to consider checking: Wikipedia.

    No, this doesn’t mean a company that wants to predict the future should take a guess based on what a person or company’s Wikipedia page says. However, researchers have found value in page views on certain English-language Wikipedia pages. The results were published Wednesday in the online journal Scientific Reports.

    The researchers looked at page views and edits for Wikipedia entries on public companies that are part of the Dow Jones Industrial Average, such as Cisco, Intel and Pfizer, as well as entries on economic topics such as capitalism and debt. Changes in the average number of page views and edits per week informed decisions on whether to buy or sell the DJIA. In other words, a major increase in page views could have prompted a sale, followed by a buy to close out the deal, or vice versa (a decrease in page views, say, would cause a buy, followed by a sale).
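
    A simplified sketch of that trading rule, reconstructed from the description above rather than from the researchers’ code (the weekly page-view figures are made up):

    ```python
    # Hypothetical average weekly page views for one DJIA company's Wikipedia entry.
    weekly_views = [1200, 1150, 1400, 1380, 1100, 1500]

    def signals(views):
        """Sell after a rise in attention, buy after a fall; close out the following week."""
        return ["sell" if cur > prev else "buy" for prev, cur in zip(views, views[1:])]

    print(signals(weekly_views))  # ['buy', 'sell', 'buy', 'buy', 'sell']
    ```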

    The researchers compared this investment strategy with a random investing strategy. What they found is that returns based on views of the DJIA company Wikipedia pages “are significantly higher than the returns of the random strategies,” to the tune of a 141 percent return, according to a news release.

    Returns on strategies based on view and edit data for Wikipedia entries on companies in the Dow Jones Industrial Average, courtesy of Scientific Reports.

    There was also a significant difference between the returns from the random strategy and the returns on the strategy tied to page views of economic topics. In that case, the strategy would have yielded a 297 percent return.

    Returns on strategies based on view and edit data for Wikipedia entries on economic topics, via Scientific Reports

    To check that there wasn’t a hidden variable in the data on views of company and topic pages, the researchers compared the earnings on Dow Jones investments tied to page views of actors and filmmakers, which had just as many page views as the pages on the DJIA companies. Indeed, they found no statistical significance there. And that makes sense in theory — who checks out Matt Damon’s Wikipedia entry before making an investment? But checking a Wikipedia page on Cisco might be a more reasonable action before investing in Cisco.

    Incidentally, some of the researchers behind this project have also investigated connections between the Dow Jones and the use of certain financial search terms on Google. Other researchers have previously found connections between Google search patterns on stocks and stock price changes over time.

    While predictive analytics has become a hot area — with applications from social media conversations to crime, from the flu to retweets — data scientists often acknowledge that people need to be sure the data they want to use for analysis is solid and reliable. Edit data from Wikipedia isn’t inherently reliable in the sense that anyone can edit it — and it turns out to be not statistically significant. Page views could perhaps be manipulated by a computer pinging Wikipedia again and again, which could throw off an algorithm pulling page view data in real time.

    And tweets can be all over the place — there’s no style guide or fact-checking for Twitter. So getting a good read on sentiment based on tweets from, say, StockTwits can be hit or miss. And Google’s Flu Trends feature, heralded as an early use of crowdsourced data, reportedly overestimated the flu outbreak late last year.

    Clearly, there are caveats to these data sets. Still, it’s neat to see new models emerging on the uses of public data, and some people who want to make money off Wikipedia metadata might want to experiment with it. Just don’t blame us if the experiments backfire.


  • Difference of opinion might have prompted Fusion-io leadership changes

    The stock market was shocked Wednesday by news that two founders of flash memory vendor Fusion-io, CEO David Flynn and Chief Marketing Officer (and one-time CEO) Rick White, have suddenly left the company. Some analysts are speculating that a difference of opinion over the company’s future direction could be to blame for the shakeup.

    Former Hewlett-Packard executive and Fusion-io board director Shane Robison (pictured) takes the CEO position. Robison previously was executive vice president and chief strategy and technology officer at HP. Reuters reported that Robison was involved with HP’s Autonomy and Palm deals. Flynn and White will get into investing, according to a statement.

    With Flynn and White leaving, it’s possible that there was a disagreement about the company’s future direction, rather than an operational matter causing issues. As one analyst note put it:

    Departure due to operational reasons or strategy disagreement? Given that both David Flynn and Rick White are founders, we think the issue may not be operational but more of a disagreement regarding the long term direction of the company. We continue to believe that Fusion-io has potential for long term growth and could be seen as a possible acquisition target for numerous large cap tech companies including NetApp, EMC, Cisco, IBM and possibly Oracle, Intel and Samsung.

    Fusion-io, which announced last month that it had paid $119 million for NexGen Storage, has been in an acquisition state of mind in the past few years. It also has picked up IO Turbine and ID7.

    As the company has matured since its 2005 establishment, it has become more of a known quantity in storage, with webscale customers in Apple and Facebook and original-equipment manufacturer relationships with Dell, HP and IBM.

    The company is not directly selling NexGen’s hybrid-storage gear; rather, it will rely on systems integrators to do that, Flynn told me last month. That means the company won’t compete against its existing customers, such as HP and IBM. The idea is to help systems integrators compete with the likes of EMC for business from small and medium-sized businesses, Flynn said.

    At the same time, through the NexGen acquisition Fusion-io got hold of ioControl software that could help diversify Fusion-io’s software capabilities. Flynn clearly positioned the NexGen buy as a software-defined storage grab. But it wouldn’t be surprising if Flynn and White didn’t want to work for a software-first company. Robison, meanwhile, seems to be eyeing the software-defined storage space for his new company. “Fusion-io has an incredible opportunity to continue to transform the software defined storage industry,” he said in the company statement.

    Regardless of what happens at Fusion-io, recent commitments to flash memory on the part of EMC and IBM promise that it will continue to be a hot space. Smaller vendors such as Fusion-io, Violin Memory and Virident might want to shift and prepare for acquisitions by the bigger guys, and that might not be what Flynn and White had in mind all these years.


  • Smaller equals scale for Riverbed’s new mini application-delivery software

    The computing services available through Amazon Web Services and Windows Azure let customers pay for only as much compute and storage as they use. But what about functions like load balancing across servers, content compression and data encryption?

    Some of these functions can run on specialized hardware, on gear sometimes called an application delivery controller (ADC). Riverbed Technology has managed to take its virtual version of an ADC — courtesy of the 2011 acquisition of Zeus — and miniaturize the software so a bunch of copies of the application can run on a single machine. Customers will be able to get their hands on Riverbed’s Stingray Services Controller in the third quarter.

    The mini-ADC instances can be easily added or subtracted to best match the needs of different applications, making it easier for customers to scale their ADC use to the jobs at hand. That differs from the usual model, in which network administrators must guess how much ADC throughput they will need and then stick with that choice longer than necessary. That’s one reason why Riverbed is calling the new version of the software ADC as a service. The other reason is that users will be able to deploy and manage ADCs on their own, enabling self-service.
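
    A toy sketch of that scale-to-demand idea — the per-instance capacity is invented, and Riverbed has not published its sizing logic:

    ```python
    import math

    INSTANCE_CAPACITY_MBPS = 500  # hypothetical throughput per mini-ADC instance

    def instances_needed(current_throughput_mbps: float) -> int:
        """Size the ADC pool to observed demand instead of a one-time guess."""
        return max(1, math.ceil(current_throughput_mbps / INSTANCE_CAPACITY_MBPS))

    for load in (120, 900, 2600):
        print(f"{load} Mbps -> {instances_needed(load)} mini-ADC instance(s)")
    ```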

    The miniaturization also means different workloads can run on different ADCs, putting an end to what Kavitha Mariappan, director of Riverbed’s Stingray product line (pictured), called “the noisy-neighbor problem” — basically decreased performance.


  • Now testing its SDN controller, Juniper homes in on release later this year

    Following Juniper Networks’ acquisition of software-defined networking (SDN) startup Contrail, the company’s new JunosV Contrail controller for software-defined networking is now being deployed in beta tests with AT&T, the China Mobile Research Institute and other enterprises and service providers. The controller will become broadly available in the second half of the year.

    Juniper says its Contrail software will be widely adoptable across devices from many vendors because it supports common protocols such as BGP and XMPP, although not OpenFlow. The controller “doesn’t have a dependency on a particular protocol like OpenFlow,” said Brad Brooks, vice president of business development at Juniper. So network admins can run the software on much of their existing Juniper or Cisco gear. Still, the architecture of the Contrail software is flexible and could support OpenFlow in the future, said Jennifer Lin, Juniper’s senior director of product management (pictured).

    Juniper is following through on a long-term strategic shift as it prepares to sell the controller under a new software-licensing model that lets customers pay for the software independent of hardware. The strategy could help Juniper stay relevant with software that works with current and forthcoming hardware, preventing commoditization while enabling centralized configuration, programmability and applications higher up the stack.

    Those objectives are increasingly important as data centers become more complex and fluid with virtualization, multitenancy and rapid scale-out. Juniper wants to run networks that allow those things to happen easily — and stay in the hardware business, too.


  • Funding soars for security startups as cyberattacks keep coming

    Cyberattacks hitting one company after another — including defense contractor QinetiQ — have garnered plenty of headlines in recent months. And while that’s got to cause headaches for victims, it might not be such a bad thing, because it makes governments and other businesses notice. It turns out that venture capitalists have taken note, too, and have been putting more of their dollars behind security startups in hopes that those companies go big.

    The numbers bear out the trend. In the first quarter of 2013, VCs dumped nearly $353 million into IT security deals, up 90 percent over the same quarter the previous year, according to MoneyTree Report data provided to GigaOM by the National Venture Capital Association. Dividing the total funding by the number of deals, the average deal size was more than $16 million, up 125 percent over the $7.1 million average in the first quarter of 2012.

    Average Q1 IT-security startup venture funding
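
    Those figures imply rough deal counts, which the report doesn’t state directly; a quick back-of-the-envelope check (approximate, since the published averages are rounded):

    ```python
    q1_2013_total = 353e6   # ~$353M into IT-security deals, Q1 2013
    q1_2013_avg   = 16e6    # average deal size, Q1 2013
    q1_2012_avg   = 7.1e6   # average deal size, Q1 2012

    # The total was up ~90% year over year, so back out the Q1 2012 total.
    q1_2012_total = q1_2013_total / 1.90

    print(round(q1_2013_total / q1_2013_avg))            # ~22 deals in Q1 2013
    print(round(q1_2012_total / q1_2012_avg))            # ~26 deals in Q1 2012
    print(round((q1_2013_avg / q1_2012_avg - 1) * 100))  # ~125% growth in average size
    ```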

    Security startups that have taken on VC funding rounds this year include Cylance, TraceVector and vArmour Networks, among others.

    The intersection of big data and security has been a hot space: companies want to collect lots of security information and analyze it all as fast as possible, just as they want to derive insights from ever-larger and more complex data sets that can lead to overhead reductions and new revenue streams. For example, in October, EMC said it would buy Silver Tail Systems, which tracks web and mobile-app traffic and points to unusual behavior and violations of rules that customers can set. And to separate the wheat from the chaff among the vulnerabilities that multiple security systems might discover, and to use security staffers efficiently, Risk I/O prioritizes issues. Last year it got $5.25 million.

    Are the cyberattacks nudging VCs to shell out millions? Shirish Sathaye, a general partner at Khosla Ventures, which has invested in Cylance and TraceVector along with Lookout and DB Networks, thinks the cyberattack news onslaught is making a difference.

    “The first reason is, yes, every time you open a newspaper, you read about somebody being attacked,” he said, whether against consumers or companies. The likelihood and complexity of attacks only become greater as more people get online, often with multiple devices.

    The multiplicity of devices accessing a network — a trend in its own right — could pose security challenges on its own, and Tenable Network Security picked up $50 million last year following the addition of features that look for vulnerabilities popping up as mobile devices join networks and that provide information on those devices, such as whether they are jailbroken. Last month Ionic Security, which keeps data encrypted as it moves to devices, said it had raised $9.4 million in new funding.

    Another reason for the security funding boom, Sathaye said, is the success of network security player Palo Alto Networks, which held its public offering last year; the steady drumbeat of threats might well have helped its appeal.

    From Sathaye’s point of view, it’s critical to nurture the ecosystem of options for strengthening network and endpoint security. “As bad guys keep innovating, good guys have to innovate at least as fast as them, if not faster,” he said. With more money going toward IT-security startups, it does seem that plenty of other VCs think there’s an opportunity here.

    Feature image courtesy of Shutterstock user LeventeGyori.


  • When a defense contractor gets hacked repeatedly, you know cybersecurity is a problem

    QinetiQ North America, a prominent defense contractor to the U.S. government, endured extensive on-again, off-again hacks from 2007 to 2010 by spies in China, resulting in the loss of many terabytes of sensitive data, including more than 10,000 passwords, chip architecture for military robots and weapons information, according to an article from Bloomberg on Thursday.

    The hackers accessed confidential data across multiple facilities, from laptops and servers alike, the article stated. To avoid being observed on a company network, in one instance the hackers siphoned out data in small quantities. And QinetiQ’s own employees apparently removed malware-detection software from their computers — with the IT department’s permission — after becoming frustrated with how it hurt their machines’ performance.

    Despite the known hacks, the federal government awarded a cybersecurity contract to QinetiQ in 2012, according to the article. QinetiQ sells cybersecurity products, including the Knowledge Discovery Appliance and the Social Engineering Protection Appliance, although the article noted that many other defense contractors have also suffered from cyberattacks.

    While federal agencies have investigated the hacks, QinetiQ retains its ability to work with military technology, according to the Bloomberg report, even though hacks resurfaced many times over a several-year period, and even when it’s in the government’s best interest to shut down what has effectively served as a back door into federal networks. The article reported that “the State Department lacks the computer forensics expertise to evaluate the losses.” That’s pretty bad — and the problem might only get worse as the federal government looks at ways to consolidate its IT footprint.

    Following on a string of cyberattacks on companies earlier this year, the news of the QinetiQ hacks is another example of the need for better security protections for businesses and other organizations. It also calls into question whether the feds can do more to prevent cyberattacks.

    And it points to an opportunity. If this is the golden age of enterprise IT, brought on by big disruptions such as cloud computing and the bring-your-own-device trend, security could become an even hotter space over the next few years for VCs to back.

    Feature image courtesy of Shutterstock user alexskopje.


  • With hundreds of customers, Asana now seeks enterprise adoption

    Facebook co-founder Dustin Moskovitz and Google and Facebook veteran Justin Rosenstein founded the collaboration- and productivity-focused Asana in 2009. It released a free version for teams of up to 30 in 2011, and premium versions followed in April 2012. Now the company is adding features to appeal to larger businesses and groups — specifically those with more than 100 users. The software’s new capabilities amount to the next logical step in Moskovitz and Rosenstein’s plan to increase productivity for people the world over.

    The new features are referred to collectively as Organizations. They include a way to organize employees into teams; visibility for a high-level executive to see what members of all teams are up to; the ability to hide certain teams from the rest of the company; unified inboxes and task lists for people on multiple teams; and roles for IT and other administrators to monitor use and set security and access policies.

    The Asana office in San Francisco.

    Asana itself is small, with 40 employees. Perched on the ninth floor of a high-rise building outside of San Francisco’s busy South of Market neighborhood, it’s removed from the hustle and bustle. Working at Asana comes with perks, including yoga and skills coaching. Moskovitz and Rosenstein carry no titles other than founders.

    If Asana succeeds in penetrating the enterprise, case studies might look back on the company’s way of doing things and suggest that other ambitious startups follow its lead (and, of course, use the Asana software, as Asana does internally). For now, though, the difference between the Asana setting and the corporate world is stark.

    Moskovitz and Rosenstein, as usual, declined to disclose revenue, so it’s hard to assess how well the company is actually doing. To date the company has taken on $38.5 million in venture funding from Andreessen Horowitz, the Founders Fund and others. Tens of thousands of “teams” — companies or business units — use Asana, including Airbnb, Disqus, Foursquare, Pinterest and Uber. Fewer than 1,000 pay for it.

    As a freemium product, Rosenstein said, you “always prioritize gaining more users and growing the company. It (Asana’s profit-and-loss statement) looks a lot like other big freemium companies — same business model. … We want to instead take the market and have everyone use (Asana) as fast as possible. Revenue will help with that, but growth is our number-one priority.”

    Everything is going according to plan for Asana, then. But as for getting boatloads of companies to pay for the service and making the company profitable, it’s unclear just how long that will take.
