Author: Jordan Novet

  • Big Switch open-sources software to ease the move to commodity switches

    Hot off a new round of funding, Big Switch Networks says it now has open-source and commercial software to help companies scale out networks more easily and cheaply with commodity switches, further threatening legacy network gear sellers.

    Big Switch’s new Switch Light software implements the OpenFlow networking protocol in physical and virtual switches. It lets data center administrators automatically and centrally push out policies from one location when new switches are added to the network, instead of going through a time-consuming, hands-on process.
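
    To make the centralized model concrete, here is a minimal sketch of what pushing a policy through an OpenFlow-style controller’s northbound API could look like. The endpoint, field names and payload are hypothetical, invented for illustration; this is not Big Switch’s actual API.

    ```python
    # Hypothetical sketch: define a policy once and let the controller fan
    # it out to every switch. The URL and JSON fields below are invented
    # for illustration, not Big Switch's real API.
    import requests

    CONTROLLER = "https://controller.example.com/api/flows"  # hypothetical endpoint

    policy = {
        "name": "block-telnet",
        "match": {"tcp_dst_port": 23},  # match Telnet traffic
        "action": "drop",               # drop it at every switch
        "apply_to": "all-switches",     # the controller handles distribution
    }

    resp = requests.post(CONTROLLER, json=policy, timeout=10)
    resp.raise_for_status()
    print("policy accepted:", resp.json())
    ```

    The workflow, not the payload, is the point: administrators describe intent once, and any switch that later joins the network picks up the same rules from the controller.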

    The open-source version of Switch Light is available for free because “we want to make sure (OpenFlow) industry standards are enabled in the data plane,” said Jason Matlof, vice president of marketing at Big Switch. The commercial version comes with technical support and is more scalable and highly available than the open-source version, Matlof said.

    Switch Light is based on existing open-source technology developed a few years ago under the name Indigo. Customers can sign up to use the Switch Light software under a licensing agreement along with Big Switch’s other software-defined networking products — the Big Switch Controller for the network’s control plane, the Big Virtual Switch and the Big Tap monitoring program.

    Consider the news another blow to Cisco, as the Big Switch software is aimed at customers that want to move away from lock-in to legacy network hardware vendors and shift elements of their network stacks to white-label suppliers. Cisco still holds 65 percent of the market share for Ethernet switches. Arista and Juniper play here, too.

    As I reported earlier this month, Quanta Computer is keen on selling network gear such as switches directly to companies through a newly formed subsidiary, Quanta QCT, just as it has shifted from being primarily an original-design manufacturer to a direct seller of servers. Quanta and other commodity switch makers, such as Supermicro, could benefit from the Switch Light release as well.

    All eyes are on Cisco and Arista to make the next move. Meanwhile, as Switch Light affords Big Switch a more rounded-out product line, the company could again look like a good buy, just as it did last summer.


  • Zynga CIO Chrapaty jumps ship, becomes cloud-storage CEO

    Debra Chrapaty, a former executive at Cisco, Microsoft and E*Trade, will leave her CIO post at Zynga next month to become CEO of enterprise cloud-storage vendor Nirvanix.

    Chrapaty’s move isn’t a complete surprise, given that (a) cloud storage is hot, (b) Zynga is not and (c) Chrapaty has served as Nirvanix’s executive board chairwoman since November.

    Chrapaty has the infrastructure chops to lead a company keen on growing its cloud business. At Zynga, she oversaw the company’s switch from Amazon Web Services to its own specialized cloud. Earlier, she was in charge as Microsoft expanded its data center footprint. She has also been president and chief operating officer of E*Trade and senior vice president of the Collaboration Software Group at Cisco.

    Based in San Diego, Nirvanix has racked up more than 1,200 customers and $70 million in venture funding. Backers include Khosla Ventures and Intel Capital. Peddling as it does public, private and hybrid clouds and cloud gateways for enterprises, it sees competitors in many directions, including Amazon and Microsoft on the public front, Dropbox and Box in the smaller-business area, and EMC and NetApp on the legacy-vendor stage.

    Chrapaty nevertheless sounded optimistic as she talked about her new gig. “What I’m excited about for Nirvanix is we actually offer this unique solution that’s both public- and private-cloud based, highly available, secure for the enterprise, but also variable deployment models,” she said. “So we’re meeting a unique need for the Fortune 1000 companies.”

    In the coming months, Nirvanix will make some decisions about expanding its focus to products other than storage, which could distinguish it, but Chrapaty declined to go into detail about the possibilities. We’re eager to see the game plan, as it could bring more enterprises into the cloud-storage fold and make Nirvanix a bigger player.

    What will come of Zynga? That’s a whole other story. Chrapaty is the latest executive to leave the one-time social-gaming star, and the stock has lingered under $4 since August.

    Zynga on Monday sent a statement from its chief operations officer, David Ko, naming Chrapaty’s successor:

    “We thank Debra for her leadership and contributions to Zynga over the past years and wish her luck in her future endeavors. Today, I am proud to announce that Dorion Carroll is moving into the role of Chief Information Officer effective immediately. Dorion is a 25-year engineering veteran with deep experience developing products and services as well as scaling teams from start-up phase to large companies. As one of our Zynga Fellows, Dorion has provided direction, leadership and management across numerous technology and products teams at Zynga over the past 3 years as well as being one of our most senior technology leaders. Our global network of players relies on the exceptional talent of our technology teams and the services they provide. We look forward to Dorion’s leadership to bring even more prioritization and focus to these teams.”


  • Foundation wants a better way of combing through isolated data on nonprofits

    Sure, plenty of websites maintain free data on public health, nonprofits and startups focused on health and governments’ health initiatives. The thing is, they don’t combine easily to show the bigger picture. The Bill & Melinda Gates Foundation is offering a $100,000 grant to solve the problem, which it calls a matter of data interoperability.

    One scholar has likened the data interoperability predicament to a bunch of Legos, Lincoln Logs and Erector sets — lots of building materials, but they don’t come together seamlessly. Kids should be able to bring all those toys together in one magnificent sculpture that sticks together elegantly without Krazy Glue or duct tape, just as different kinds of users should be able to evaluate data from different sites without having to normalize it all. A potential donor considering an investment in, say, a health nonprofit shouldn’t have to spend lots of time digging and bringing information together from Guidestar, Glasspockets and local and state government sites in order to check a nonprofit’s goals and progress and compare against other nonprofits.

    The resulting product could draw on natural language processing, APIs for grabbing data and compelling visualizations with maps and other Tufte-approved images. The deadline for submissions is May 7.
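
    To get a feel for the normalization work a winning entry would automate, here is a toy sketch in Python. The sources, field names and values are all invented for illustration.

    ```python
    # Toy illustration of the interoperability problem: three sources
    # describe the same nonprofit with different field names, and a small
    # mapping layer joins them onto one common schema.
    guidestar_record = {"org_name": "Health For All", "annual_revenue_usd": 1200000}
    glasspockets_record = {"organization": "Health for All", "transparency_score": 87}
    state_record = {"entity": "HEALTH FOR ALL", "registered": True}

    def normalize_name(name):
        """Produce a canonical key so records from different sites can be joined."""
        return " ".join(name.lower().split())

    field_maps = [
        (guidestar_record, {"org_name": "name", "annual_revenue_usd": "revenue"}),
        (glasspockets_record, {"organization": "name", "transparency_score": "transparency"}),
        (state_record, {"entity": "name", "registered": "registered"}),
    ]

    combined = {}
    for record, mapping in field_maps:
        translated = {mapping[k]: v for k, v in record.items() if k in mapping}
        key = normalize_name(translated.pop("name"))
        combined.setdefault(key, {}).update(translated)

    print(combined)
    # {'health for all': {'revenue': 1200000, 'transparency': 87, 'registered': True}}
    ```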

    Just as companies and federal agencies find themselves awash in big data and in need of clarity as more data sets become available for analysis, the nonprofit space needs its own way to make sense of it all. The result of the challenge — a common place to check out interdisciplinary data — could be introduced to other areas with big public data repositories, too.


  • Lockheed Martin wants to use a quantum computer to develop radar, aircraft systems

    Lockheed Martin is looking at several challenging applications for the quantum-computing hardware it has purchased from D-Wave Systems, the New York Times reported Friday. The use of quantum computing is a big deal because as we depend more on computing, we’re going to need different types of processors. Lockheed’s commercial use suggests that the probabilistic problem-solving approach and breakneck speed of quantum computing could be more widely adopted in the near future.

    For the record, D-Wave and Lockheed formed their commercial relationship a couple of years ago, although at the time the defense contractor apparently didn’t discuss possible applications. Now there are some specifics on how Lockheed could employ its D-Wave computer, following projections on other types of applications.

    Lockheed Martin will use its D-Wave computer “to create and test complex radar, space and aircraft systems,” the Times’ Quentin Hardy wrote. “It could be possible, for example, to tell instantly how the millions of lines of software running a network of satellites would react to a solar burst or a pulse from a nuclear explosion — something that can now take weeks, if ever, to determine.”

    Rather than working with binary yes-or-no questions — ones and zeros — quantum computing is more probabilistic, also allowing a combination of zero and one to simultaneously answer many questions with quantum bits of information, or qubits, and tell users more about the likelihood of a situation. It’s not necessarily useful for all kinds of computing, but it could solve problems that current computers can’t.
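
    For the curious, the probabilistic idea can be illustrated with a few lines of ordinary code that simulate the arithmetic of a single qubit in equal superposition. This is a classical toy, not how a D-Wave machine is actually programmed.

    ```python
    # A qubit's state is a pair of complex amplitudes rather than a
    # definite 0 or 1; measurement yields each outcome with probability
    # |amplitude|^2. This simulates that math on a classical machine.
    import math
    import random

    # Equal superposition of |0> and |1>
    amp0 = complex(1 / math.sqrt(2), 0)
    amp1 = complex(1 / math.sqrt(2), 0)

    p0 = abs(amp0) ** 2  # probability of measuring 0
    p1 = abs(amp1) ** 2  # probability of measuring 1
    assert math.isclose(p0 + p1, 1.0)

    counts = {"0": 0, "1": 0}
    for _ in range(10_000):
        counts["0" if random.random() < p0 else "1"] += 1

    print(counts)  # roughly 5,000 each: neither outcome is fixed in advance
    ```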

    It’s also a great way forward for computing to keep following the spirit of Moore’s Law, in the sense that it could permit more powerful computing than what’s possible today. The question is how soon it will become commercially viable. The quantum computer cost Lockheed $10 million, according to one report, so it will take some time and more commercial interest before the price can come down.

    Commercial applications of quantum computing are a long time coming. In a 2010 GigaOM Research report on quantum computing (subscription required), my colleague Stacey Higginbotham wrote that commercial viability could take decades, not years.



  • Big data needs people, leaders and real-time analytics: A Structure:Data 2013 recap

    In the afterglow of GigaOM’s Structure:Data conference this week, a few big-picture trends and surprising quotes stuck with us.

    Data needs people, my friend

    Despite the much-discussed power of data, there are roles for people to play in big data projects. Data increasingly influences companies’ decision making processes, but several speakers hit on the notion that people should be involved in big data storage and analysis.

    It all starts with a human question. Before machines generate answers, employees from many departments should feel empowered to ask good questions of data, said John Sotham, vice president of finance at BuildDirect.

    Beyond questions, humans need to decide which algorithms to employ and which data to use to answer questions, said Scott Brave, founder and chief technology officer of Baynote.

    In data science, machines use algorithms to make decisions with clean data for the sake of prediction and optimization, said Sean Gourley, chief technology officer of Quid. But in “data intelligence,” humans “create, change and shape the world we’re in” using small sets of messy data, he explained.

    Sometimes algorithms don’t deliver results as good as people do. One website crowdsources the identification of top news stories, as my colleague Kevin Tofel wrote. And at times, it’s wise to throw lots of people at big data challenges. At TopCoder, there are competitions to discover the best software architecture, algorithms and analytics, said the company’s chief technology officer, Mike Lydon.

    There was an exception to the man-and-machine rule: BeyondCore’s software makes machines crunch all available variables to isolate the biggest profit generators. It displays charts and audibly tells you its findings.

    It takes leadership

    Becoming a data-driven company requires a human push, said Paul Maritz, chief strategist at EMC. “Change requires leadership,” he said. “It requires people to understand what is happening and really get behind it and drive organizations to transform, because none of us really like to change.” Only then can companies discover better ways to make money.

    Meanwhile, Amaya Souarez, director of data center services at Microsoft, said that lots of internal data doesn’t automatically effect changes in strategy. “The data will help you in your discussions, but it’s not everything,” she said. “It really does take a lot of personal interaction and commitment to that relationship.”

    We want analytics and we want it now

    Whether in Hadoop or in specialized databases, our speakers showed why they want big data analytics to happen in real time.

    Muddu Sudhakar, vice president and general manager of the Pivotal Initiative’s Cetas cloud and big data analytics platform, called for “Hadoop high throughput, low latency.” And SQLstream CEO Damian Black said that 2013 “seems to be the year where it’s all happening now. All Hadoop distributions are talking about streaming technology.”

    Ashok Srivastava, chief data scientist at Verizon, talked about what machines could do if they process data in real time: go through millions of new pictures users take on their cellphones and predict the health of a person or a machine based on changes over time. Similarly, Maritz identified an opportunity telecommunications companies have yet to take advantage of: texting customers to apologize for a dropped call. “They can’t even do that today, let alone do more ambitious things on top of that,” Maritz said.

    Big data words to the wise

    Executives, IT administrators and others will likely discuss these themes in the coming months. A few statements from speakers also stand out:

    – “…What’s really most intriguing is that you can be 100 percent guaranteed to be identified by simply your gait — how you walk.” — Ira “Gus” Hunt, chief technology officer of the CIA, in a statement on the capabilities of a three-axis accelerometer

    – “Hadoop is hard — let’s make no bones about it. It’s damn hard to use. It’s low-level infrastructure software, and most people out there are not used to using low-level infrastructure software.” — Todd Papaioannou, founder and CEO of Continuuity, in a statement on his lessons from Yahoo, where he was chief cloud architect

    – “I get asked all the time to explain, How is Riak better than Hadoop?” — Justin Sheehy, chief technology officer of Basho Technologies, in a statement about how hype surrounding Hadoop and big data gets in the way of real discussion about solving data problems

    – “What if you could send your sperm over email to somebody else and print the sperm on the other end?” — Naveen Jain, founder and CEO of Inome, in a statement about disruptions in big data from other industries


  • No, not every database was created equal. Here’s how they stand out

    SQL or NoSQL? In-memory or hard disks? Graph? These questions have been top of mind in recent years as developers and IT administrators check out new-age databases capable of handling scale-out data sets. Executives from four database companies showed how they stand out in a hot market at GigaOM’s Structure:Data conference on Thursday.

    Emil Eifrem, CEO of Neo Technology, touted the power of Neo4j and other graph databases to show relationships among disparate varieties of data with nodes, edges and key-value properties. (Think of Facebook’s Graph Search as one version.) The style takes inspiration from the connections among neurons and synapses inside the brain, Eifrem said. But, like other NoSQL databases, Neo Technology’s Neo4j product doesn’t use the SQL programming language, which could limit its adoption among enterprises.
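
    For readers unfamiliar with the model, here is a bare-bones property graph sketched in plain Python: nodes and edges that both carry key-value properties, queried by walking relationships rather than joining tables. It is a generic illustration, not Neo4j’s API or its query language.

    ```python
    # Minimal property graph: nodes and edges each hold key-value data.
    nodes = {
        1: {"label": "Person", "name": "Alice"},
        2: {"label": "Person", "name": "Bob"},
        3: {"label": "City", "name": "San Francisco"},
    }
    edges = [
        {"from": 1, "to": 2, "type": "FRIEND", "since": 2010},
        {"from": 1, "to": 3, "type": "LIVES_IN"},
        {"from": 2, "to": 3, "type": "LIVES_IN"},
    ]

    def neighbors(node_id, edge_type):
        """Follow outgoing edges of one type -- the basic graph traversal."""
        return [e["to"] for e in edges if e["from"] == node_id and e["type"] == edge_type]

    # "Where do Alice's friends live?" -- hop FRIEND edges, then LIVES_IN edges.
    friend_ids = neighbors(1, "FRIEND")
    cities = [nodes[c]["name"] for f in friend_ids for c in neighbors(f, "LIVES_IN")]
    print(cities)  # ['San Francisco']
    ```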

    Damian Black, CEO of SQLstream, touted his database’s use of SQL, calling it “lingua franca for data management.” Sure, it isn’t the easiest language to use. Still, “you know it’s going to save, it’s going to work,” he said. “It’s auto-optimizing. It’s proven.” Plus, it might be easier to find developers who can use it. As specialized databases get more attention, that’s become a more important point, said the moderator of the talk, GigaOM Research Analyst David Linthicum.

    Different databases have different sweet spots. For Ryan Garrett, vice president of product at MemSQL, it’s comparing real-time data — from the trading floor, say — with recent historical data from perhaps a day or a week ago. Andrew Cronk, CEO of TempoDB, said his database excels at crunching time-series data in long columns coming off of many new connected devices.
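
    A toy version of the real-time-versus-history comparison Garrett describes might look like the following, with each live tick checked against a rolling window of recent values. The prices and thresholds are invented for illustration.

    ```python
    # Compare a just-arrived data point with a window of recent history.
    from collections import deque
    from statistics import mean, stdev

    window = deque(maxlen=1_000)  # e.g., the past day or week of prices

    def on_tick(price):
        """Flag a live price that deviates sharply from recent history."""
        if len(window) >= 30:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(price - mu) > 3 * sigma:
                print(f"alert: {price:.2f} is >3 sigma from recent mean {mu:.2f}")
        window.append(price)

    for p in [100.1, 100.3, 99.9, 100.0] * 10 + [112.0]:
        on_tick(p)  # the final tick trips the alert
    ```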

    Black believes storing data in memory provides a clear advantage. “It’s obviously going to be faster if you’re pulling it from memory,” he said. Eifrem took issue with that notion, saying that Neo4j runs wherever sufficient memory is available. “Generally speaking, we want to be as horizontal as possible,” he said.

    Legacy database vendors such as Oracle still command large swaths of the database market. But specialized databases such as the four on display here could keep chipping away as data sets get larger and larger. Because there are so many flavors, a few databases could become leaders, rather than just one, as they really do have different strengths and weaknesses and use-case sweet spots. At least they do for now.

    Check out the rest of our Structure:Data 2013 live coverage here.



  • Why Guavus analyzes lots of telecommunications data before storing it all

    It’s not unusual to think that if data scientists want to analyze data, the first step is to collect it and spend a lot of time looking at it — asking questions, refining data sets and then getting some possible answers. But at Guavus, the emphasis is on analyzing petabytes of data as soon as it comes in to deliver real-time results, Anukool Lakhina, the company’s founder and CEO, told attendees at GigaOM’s Structure:Data conference on Thursday.

    A decade ago, when Lakhina worked at Sprint Labs, Sprint employed deep-packet-inspection probes to collect information about how subscribers were using the telecommunications company’s services. It was a good idea — “if we knew how they were interacting, we’d be invisible, we’d know everything about our business,” Lakhina said. But the data couldn’t really be harnessed quickly. FedEx trucks drove around and picked up quickly filled storage arrays sitting next to the probes around the Sprint network. Engineers jokingly referred to the process as the “package-switch network,” rather than a packet-switch network, Lakhina said. Once the data was collected, researchers reviewed roughly day-old data and matched it with other data. They reported their findings and were roundly turned away, because the data was, well, dated.

    Guavus, founded in 2006, replaces that FedEx model with automation, so telcos can derive insights from data immediately. Guavus offers its customers customizable dashboards with the self-service simplicity of a consumer application.
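
    As a rough illustration of that compute-first approach, consider aggregating each usage record the moment it arrives instead of batching a day of raw data for later analysis. The record fields and threshold below are invented; this shows the general streaming pattern, not Guavus’s implementation.

    ```python
    # Streaming aggregation: update running totals per record as it arrives.
    from collections import defaultdict

    bytes_by_subscriber = defaultdict(int)

    def on_record(record):
        """Fold a usage record into the totals the moment it arrives."""
        sub = record["subscriber"]
        bytes_by_subscriber[sub] += record["bytes"]
        # An unusually heavy user is visible immediately, not a day later.
        if bytes_by_subscriber[sub] > 5_000_000_000:  # 5 GB, invented threshold
            print("flag for review:", sub)

    stream = [
        {"subscriber": "A", "bytes": 4_800_000_000},
        {"subscriber": "B", "bytes": 1_000_000},
        {"subscriber": "A", "bytes": 400_000_000},  # pushes A over the threshold
    ]
    for rec in stream:
        on_record(rec)
    ```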

    The change in thinking from store first to compute first has led to a lot of clear return on investment, at least as Guavus has applied it. One service provider using Guavus discovered that some cab drivers who were supposed to be using the network for credit card transactions were actually carrying live video streams. That use violated the end users’ contract terms and resulted in renegotiations. Another Guavus customer used the product to respond during customer-care calls and explain why end users were getting charged extra for large data use. Data from Guavus can also let customers pass down intelligent information to end users through self-service portals.

    These are $100 million problems, Lakhina said. “And you don’t need to do a lot of hunting around to discover these big use cases,” he said.

    Check out the rest of our Structure:Data 2013 live coverage here.



  • Hadoop applications abound, but Hadoop still needs improvement

    Since Hadoop emerged in the mid-2000s, a whole ecosystem has sprouted up around it. The buzz still has not abated, nor has the need for further development of Hadoop as a platform. Entrepreneurs who build applications around Hadoop talked about the use cases they see the most and the next essential steps for the ecosystem to grow at GigaOM’s Structure:Data event on Thursday.

    Jonathan Gray, founder and chief technology officer of Continuuity, said he’s seen lots of Hadoop implementations for analytics applications for gaming and advertising purposes. “Those guys have tons of data,” he said. “(They) run off the back of analytics.” Popular use cases include attribution and retargeting of advertisements.

    While use cases have popped up across many industries, there isn’t one Hadoop panacea. Different parts of the Hadoop ecosystem — such as HBase, MapReduce and so on — might fit different sorts of applications, said Muddu Sudhakar, vice president and general manager of the Pivotal Initiative’s Cetas cloud and big data analytics platform.

    Sudhakar identified a few ways in which Hadoop needs to adapt further.

    For starters, he said, “Hadoop needs to be virtualized,” to enable the sort of dynamic resource management popular in public clouds. For now, power consumption for Hadoop deployment across many servers can be very expensive. “We are living in the cloud world, so that whole thing needs to be solved,” he said.

    Hadoop might be able to process large data sets, but it could take you a while. That’s why Sudhakar called for “Hadoop high throughput, low latency.”

    Look to Google to get a sense of where Hadoop capabilities need to go, said Omer Trajman of WibiData. “It’s amazing to look (when) Google sends you messages from the future,” he said. “The ultimate feature is I’m Feeling Lucky — just tell me what’s next. That’s what everyone else is going to need.”

    Check out the rest of our Structure:Data 2013 live coverage here.



  • Six ideas from entrepreneurs for solving your big-data problems

    Entrepreneurs from six big data startups took the stage Wednesday at GigaOM’s Structure:Data conference to share insights on the industry as a whole. Taken together, their ideas give a sense of the ideal way to crunch big data in an enterprise or any other organization with large data sets on its hands.

    • Just because you have a lot of data doesn’t mean you’re doing a good job of acting on it. Numenta CEO Rami Branitzky made the point with an example. Data scientists working at utility companies might act on just 0.5 percent of data, and it might take them three weeks to build a model, let alone deploy it. A better solution, Branitzky said, would derive insights immediately as fast as data streams come in, just as the brain processes information pretty much as soon as a person captures it through the five senses.
    • Sure, Hadoop is hip and hefty — just ask my colleague Derrick Harris, who recently wrapped up a four-part series on it — but it ain’t necessarily easy for statistics-savvy data scientists familiar with quick and dirty programming languages such as Python to wrangle data with Hadoop in Java, said Doug Daniels, chief technology officer of Mortar Data. Hence the company’s offering of Hadoop deployable through Python, which could make more sense for certain customers. (A sketch of the general Python-on-Hadoop pattern follows this list.)
    • Airlines have modernized pilot dashboards over the years, although multiple iterations haven’t necessarily added more new measurements for pilots to keep track of, said Stephen Messer, co-founder and vice chairman of Collective[i]. Instead, the companies put right in front of pilots’ eyes the information most relevant to them at any given moment. “Is this the best technology out there? No. It’s taking existing technology and reutilizing it,” Messer said. Similarly, his company seeks to give customers existing technology that’s easily accessible and therefore very powerful.
    • Asking questions of your data is only effective if you know the right questions to ask. But what if you don’t? Arijit Sengupta, CEO of BeyondCore, showed off his company’s answer to that question — software that quickly computes thousands of options based on all available variables to show charts and actually talks to you to identify the biggest drivers of, say, profit.
    • The number of “open-data APIs” that can provide data freely to the public has grown in the past five or six years from fewer than 100 to more than 8,000, said Sharmila Shahani-Mulligan, founder and CEO of ClearStory Data. Companies should be able to take advantage of all those publicly available sets by easily crossing them with privately held data to draw new insights, she said.
    • As Ayasdi Co-founder and CEO Gurjeet Singh sees it, the popular word “insight” should have a commonly accepted definition. He proposed one: an actionable truth about a problem discovered from data. By “actionable,” he meant that it should be compact, because otherwise it’s unlikely that anyone will act on it. Regarding “truth,” it can’t be random. “In large data sets, it’s easy to find whatever you want to find.” There must be statistical proof bearing out a theory. And it must be “discovered” as a result of a customer’s questions.
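
    Here is that sketch: a generic word count via Hadoop Streaming, where the mapper and reducer are plain Python scripts that read stdin and write tab-separated pairs to stdout, so no Java is required. It illustrates the broader Python-on-Hadoop pattern, not Mortar Data’s specific product.

    ```python
    # wordcount.py -- generic Hadoop Streaming word count in pure Python.
    # Hadoop feeds each script input splits via stdin and collects
    # tab-separated key/value pairs from stdout.
    import sys

    def mapper():
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reducer():
        # Hadoop sorts mapper output by key, so equal words arrive adjacent.
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rsplit("\t", 1)
            if word != current:
                if current is not None:
                    print(f"{current}\t{total}")
                current, total = word, 0
            total += int(count)
        if current is not None:
            print(f"{current}\t{total}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()
    ```

    On a cluster, this runs through the standard streaming jar, along the lines of hadoop jar hadoop-streaming.jar -input <in> -output <out> -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py.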

    Check out the rest of our Structure:Data 2013 coverage here.



  • For big data achievements, IT and analysts need to work together

    One trend emerging throughout GigaOM’s Structure:Data conference today is the collaboration between man and machine to solve big-data problems. Speaking with Phil Francisco, vice president of product management for big data at IBM, and Emile Werr, head of enterprise data architecture at the New York Stock Exchange, my colleague Barb Darrow spent a session Wednesday explaining how people — a company’s IT experts and business experts — sometimes need to work in different ways to achieve the same business goals.

    Developers need to build systems for crossing lots of data sets from legacy data warehouses as well as Hadoop clusters and make available options for visualizing trends that might not otherwise be obvious, Francisco said. That’s when business experts come into play to ask questions and derive insights that could lead to new strategies and campaigns.

    How does that work in practice? Facing greater volumes of data, the NYSE has trained business analysts as “data architects” to develop a system with IBM products for capacity planning and spotting patterns to detect fraud in billions of transactions each day, Werr said. Analysts also need to be able to figure out if a possible fraud case is a false positive. Those are early-stage use cases for analyzing data in near-real time.

    For now, financial deployments tend to play out on premise, though Werr pointed out places where public clouds make sense: developers can test out new data architectures on their data sets, and the cost advantage of running at production scale on Infrastructure as a Service (IaaS) such as Amazon Web Services is appealing, he said. But, at least for now, bandwidth across multiple data centers is an issue.

    Check out the rest of our Structure:Data 2013 live coverage here.



  • How Aetna uses patient data to prevent diabetes and heart attacks

    With 30 million customers and more historical and new data coming in all the time, Aetna has big data. Under a new initiative, Aetna Innovation Labs, the insurer is trying out several approaches to use all that data to keep patients healthy, lower customer costs and decrease the company’s own spending on health care.

    Speaking with my colleague Ki Mae Heussner at GigaOM’s Structure:Data conference on Wednesday, Aetna’s head of innovation, Michael Palmer, opened up about the Innovation Labs’ initiatives, the challenges they face and the neat opportunities ahead.

    The Innovation Labs has focused on a few conditions since starting last year, including cancer, heart disease and metabolic syndrome. There are five telling signs of metabolic syndrome, including a large waist circumference and high blood pressure. Metabolic syndrome is a sort of gateway to diabetes, cardiovascular issues and other conditions, so Aetna wants to prevent patients from getting metabolic syndrome in the first place. It has started using data on 18 million of its customers’ employees to tell doctors which of the five signs of metabolic syndrome their patients are likely to develop in the next year, Palmer said.

    Aetna appears to want to pull in more kinds of data for patients and physicians to examine to provide better care. Asked if Aetna will start working in genomic data, Palmer said the Aetna Innovation Labs are running a pilot with some companies to allow patients to get their own genetic information and receive genetic counseling. “That will drive a wellness program driven around genomics,” he said.

    And social information from patients could provide much-needed feedback to show how effective a medication is in real time, for that patient and for others. “Some companies are doing that,” Palmer said, referring to incorporating social streams into health care. “The challenge is, a lot of HIPAA laws prevent the ability to connect those two in a way that would be ultimately useful,” he said.

    Aside from regulatory compliance, patients simply might not want to share as much data as insurers and doctors want them to, as it could get out to the wider public. Keeping health data out of employers’ hands, for example, is a live issue right now, as Ki Mae mentioned, with media outlets reporting on Wednesday that CVS employees must provide health information or pay up. Palmer said Aetna has been working with large patient data sets for many years, though, and is cognizant of complying with HIPAA laws.

    It’s ironic, but Aetna — and probably others in the health care community — finds it challenging to get people to work on their own health. Employers want to see that happen, so their employees will stay healthier and work more, Palmer said.

    Toward that end, Aetna is looking to work with other companies, mainly small startups, on what Palmer called “medication adherence” — texting, calling or otherwise contacting patients about taking their medications and doing other things to improve their health. It’s not easy, though, because, as Palmer said, what works for some patients doesn’t necessarily work for others.

    Check out the rest of our Structure:Data 2013 coverage here.



  • EMC to bring out enterprise version of its Syncplicity file-sharing service

    Storage vendor EMC is establishing an enterprise edition of its Syncplicity file-sharing Software as a Service, less than a year after EMC acquired Syncplicity.

    EMC is aiming the new product at companies with more than 25 users. It can work with cloud-based storage and on-premise storage appliances such as EMC’s Isilon network-attached storage and Atmos object-based storage. That could make Syncplicity’s Enterprise Edition an appealing option for IT administrators who want to support the bring-your-own-device trend with a simple user interface but don’t want to worry about the security implications of using Dropbox and other offerings.

    Companies that use EMC’s Documentum enterprise content-management software can also sync files in the Syncplicity Enterprise Edition. Syncplicity keeps a customer’s data centers in multiple geographies automatically updated, and users access files from the nearest data center to keep latency low.

    Unlike Amazon Web Services, which charges customers a different amount each month to reflect elastic use of storage resources for its Simple Storage Service (S3) and other products, Syncplicity charges based on the number of users — a more palatable option for larger businesses with hundreds or thousands of employees, said Jeetu Patel, Syncplicity’s general manager.

    While EMC might have wanted to have a product for small businesses in its line when it bought Syncplicity last year, it’s not surprising to see the company create a version that makes sense for large businesses, too. EMC is no Box, much as Box wants to add enterprise customers. But now it comes closer, by crossing ease of use with the security advantages of on-premise deployments.


  • 10gen rolls out new features to woo more enterprises to MongoDB

    Homing in further on the enterprise market, MongoDB creator 10gen is bringing out features for enterprise customers and announcing upgrades of existing products for all users of its open-source non-relational database.

    10gen had greater business adoption in mind last year when it raised $42 million and vowed to focus on research and development to improve MongoDB. Now, around 60 percent of customers are enterprises, said Kelly Stirman, the company’s director of product marketing.

    Once signed up for the MongoDB Enterprise software, customers can use their own on-premise hardware to run an extension of the MongoDB Monitoring Service to track MongoDB deployments with more than 100 metrics and receive alerts. MongoDB Enterprise comes certified for deployment on several operating systems. It also supports the Kerberos authentication protocol, which is popular among insurance companies and banks, and can hook in to customers’ existing monitoring services, such as Nagios. And it introduces roles for giving certain abilities to certain database users.

    New features in the MongoDB 2.4 release available to all users include full-text search for querying the database, an option to evenly shard data across machines, more accurate measurements of the distance between locations, the ability to count items in a database 20 times faster than before and the ability to maintain and query leaderboards of, say, the top 50 scorers in a baseball league.
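
    As a sketch of the leaderboard use case, a sorted, limited query over an indexed score field is all it takes. The collection and field names below are invented, and this is the general pymongo pattern rather than a demo of the 2.4-specific features.

    ```python
    # Leaderboard-style query with pymongo: index the score field, then
    # fetch the top 50 in descending order. Names here are invented.
    from pymongo import MongoClient, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    db = client["baseball"]

    # An index on the score field keeps the sorted query cheap.
    db.players.create_index([("score", DESCENDING)])

    top50 = db.players.find().sort("score", DESCENDING).limit(50)
    for player in top50:
        print(player["name"], player["score"])
    ```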

    The NoSQL database market is crowded, and differentiation is important. That’s why it’s a good thing 10gen — which is based in New York and Palo Alto, Calif., and has other offices in Australia, England, Ireland and Spain — will increase its headcount by 75 percent in the next year, Stirman said. The time between product releases is getting shorter and shorter, he said, which means that still more improvements could be just a few months away.



  • Salesforce rolls out new mobile features for its Chatter social network

    Software-as-a-Service (SaaS) giant Salesforce.com is announcing new features for its Chatter social network’s Android and iOS native apps, as part of a larger effort to enhance its mobile offerings.

    Inside the Chatter app for smartphones and tablets, users will be able to view and edit the status of deals in progress, share and view files, assign tasks, start polls and see updates in real time. Some of those abilities are available now, and others will come in the second half of the year. Until this week, though, users have been limited to viewing and posting status updates and browsing through user profiles.

    The new functions let salespeople and other employees do more of their work and keep track of projects on the go, whether during the morning commute or outside the door of a potential client. The most talented salespeople might not want to be tethered to a desk to use legacy customer-relationship management software; they want to work with the mobile devices they know, said Anna Rosenman, senior manager of product marketing for Salesforce Chatter.

    The rollout follows a Salesforce announcement last month about new live chat and co-browsing capabilities for mobile users of the company’s Service Cloud product.

    Stay tuned for other mobile announcements from Salesforce later this year, a spokesman said. How will the company execute on its mobile strategy? Look for it to make acquisitions, as it did for the co-browsing technology. As my colleague Barb Darrow reported last month, Salesforce will “be aggressive and look at everything” in terms of acquisition prospects, CEO Marc Benioff told analysts.


  • AppNeta lands $16M as networking and application monitoring heats up

    AppNeta, whose monitoring services let devops teams track the performance of a website, the networks it uses and the external applications it depends on, announced on Monday a $16 million Series C round of venture funding, demonstrating that investors still like the area despite a crowded market.

    Bain Capital Ventures, Business Development Bank of Canada, Egan-Managed Capital and JMI Equity led the round, which brings the total AppNeta has raised to $47.8 million.

    AppNeta, which has offices in Boston and Vancouver, B.C., offers Software as a Service (SaaS) that gauges the performance of the components of a customer’s site and lag times attributable to web servers, the network and an end user’s browser. Those services are known as application-performance management (APM). The SaaS also breaks out performance of the networks underlying the apps and data the site depends on to run on end users’ devices — what’s called network-performance management (NPM). The data from AppNeta can quickly show devops employees when and how performance is falling short of service-level agreements (SLAs) so they can take action accordingly.
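
    Conceptually, the APM side boils down to decomposing a request’s total latency into components and checking the sum against an SLA. Here is a toy sketch; the component names, timings and threshold are invented, and real products instrument browsers, servers and networks automatically rather than with stub timers.

    ```python
    # Break total latency into components and compare against an SLA.
    import time

    SLA_MS = 800  # hypothetical service-level target for a full page load

    def timed(fn):
        """Time a single component in milliseconds."""
        start = time.perf_counter()
        fn()
        return (time.perf_counter() - start) * 1000

    components = {
        "web_server": timed(lambda: time.sleep(0.12)),      # stand-in workloads
        "network": timed(lambda: time.sleep(0.50)),
        "browser_render": timed(lambda: time.sleep(0.25)),
    }
    total = sum(components.values())
    worst = max(components, key=components.get)
    status = "OK" if total <= SLA_MS else f"SLA breach, worst component: {worst}"
    print(f"total {total:.0f} ms -> {status}")  # here, the network is the culprit
    ```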

    The APM market looks a bit like a venture capital and product-feature arms race. Last week New Relic, fresh off an $80 million round of funding, announced the ability to monitor end users’ mobile experiences. After recently picking up $50 million, AppDynamics moved more toward IT automation in a new product release last week, and it appears poised to add mobile-app support in the near future.

    The NPM market lacks the current momentum of APM, although there is competition from SevOne, which got $150 million in January, as well as from Riverbed and other vendors.


  • Dell, Cisco looking at vendor-led SDN consortiums, but is it too late?

    Dell is joining a growing list of vendors that want to develop their own standard for software-defined networking (SDN), and perhaps dilute the influence of customer-led standards in the data center.

    Last week, the nonprofit Object Management Group (OMG) said Dell intends to create an OMG committee on SDN. The Dell news follows rumored moves by Cisco, Citrix, HP and other vendors to sponsor or contribute to a new consortium code-named Project Daylight. Reports of the project surfaced last month, and others have followed.

    Vendors’ efforts to set standards boil down to a matter of securing as big a place in the data center as possible. In recent years, webscale companies such as Amazon, Facebook and Google have effectively prompted the efforts by moving away from legacy IT vendors and toward custom gear that better fits their needs at their huge scale.

    In 2011, Facebook announced the Open Compute Project, an effort for customers such as Facebook to name their own needs and wants for servers and other data center components. Dell has joined the Open Compute Project. But now not one server going into Facebook’s newest data center, in the Swedish city of Luleå, comes from a traditional server maker like Dell.

    Open Compute isn’t the only new customer-led standards organization. Facebook and several service providers, including Verizon and Comcast, created the Open Networking Foundation in 2011 to build a standard around the OpenFlow networking protocol. Dell’s SDN committee looks like an attempt to ensure a place higher up the stack.

    But it could be too late for network-appliance makers to get out in front of enterprises with standards of their own, as SDN startups capture more and more customers and their products become more easily adaptable.

    “The networking industry needs clearly defined leadership in the SDN technology space, and Dell is taking an important step to coalesce a standard under OMG through an open, international transparent standards process,” OMG said in the announcement it released Wednesday. That sort of language probably grates on the ears of Embrane, Pertino and other SDN startups, as they have already made inroads and could have trouble gaining a foothold in a standard with Dell and other hardware-focused players, let alone taking the lead.

    As companies throw around new definitions for SDN, it becomes harder to understand what it is, and what it’s not, just like what has happened with the term cloud computing and the variations on it. SDN already has OpenFlow, thanks to the Open Networking Foundation. Do vendors really need to try to challenge those standards? Shareholders might want to see that sort of maneuvering, although it could be too little, too late.


  • How an unknown Taiwanese server maker is eating the big guys’ lunch

    It all started, Mike Yang says, with a conversation he had with Facebook’s vice president of technical operations in 2007 or 2008. Rather than source servers through a traditional vendor like IBM for its data centers, Facebook turned to Quanta.

    Back then, Quanta didn’t sell servers directly to customers; it only built them for traditional server vendors, which then put their names on them and sold them to customers. Fast forward a few years, and a majority of Quanta’s server revenue stems from direct deals — 65 percent in 2012, and a forecasted 85 percent this year. Now, it counts other large-scale server buyers such as Rackspace and Amazon among its customers.

    Mike Yang. Source: Quanta

    Yang, the man in charge of Quanta’s cloud computing business unit, beamed during an interview on Thursday as he spoke about how the company can directly offer energy-efficient and high-performance products for webscale customers and smaller ones, too. If the Taiwan-based hardware maker’s 85 percent forecast proves out, the company could become a more recognized supplier for cloud computing venues, further threatening old-line server vendors like Hewlett-Packard and Dell.

    The company, with U.S. headquarters in Fremont, Calif., didn’t share projections of server revenue in dollars or total server shipments but said it shipped 1.2 million server motherboards in 2012 and plans to ship at least 10 percent more — 1.32 million motherboards — this year.

    Quanta appears to be on a roll with direct sales of Quanta-brand servers. At the same time as it’s doing custom jobs for webscale customers, it’s also promoting direct sales of other gear, including off-the-shelf storage and network appliances, to smaller customers through a subsidiary Quanta established last year, Quanta QCT.

    The company has a few strategies in mind for shifting from an original-design manufacturer to a name brand in its own right, at least in servers. It sees full racks of equipment, under the Rackgo name, as a major seller this year. The Rackgo offering, which includes compute, storage and network appliances, can appeal to customers because there’s simply one company to go to when problems arise, Yang said.

    And then, of course, there’s the Open Compute Project — the Facebook-led open-source hardware initiative that kicked off Quanta’s evolution as a direct server vendor. Quanta will come out with multiple products based on Open Compute specifications later this year, although exact timelines weren’t immediately available.

    Next month, the company will open an office in Seattle in order to be closer to customers. It counts Seattle-based Amazon as a customer, and Yang said Quanta has other customers in the area, although he declined to name them. Microsoft, which is building huge data center capacity for Windows Azure and its Live offerings, is a short drive from Seattle, in Redmond, Wash., and Seattle is much closer to Quincy, Wash., a hotbed of data centers, than is the Fremont office. Quanta will add more U.S. offices for sales and service this year, Yang said.

    Quanta is also opening up to the press, rather than silently working behind the scenes. That campaign started last year.

    The company’s business model has undergone a sea change. If the upward trajectory keeps up and the server-market dynamics keep shifting in its favor, Quanta could become one of the stalwart name brands of IT technology.


  • Facebook names Schroepfer as its new CTO, as infrastructure takes center stage

    Facebook has appointed Mike Schroepfer as its new chief technology officer, AllThingsD reported Friday. Schroepfer has been vice president of engineering since 2008. Before coming to Facebook, he was vice president of engineering at Mozilla.

    Schroepfer is keenly aware of Facebook’s hardware and software needs. In 2009, a few months before Facebook opened up about building its own data centers, he spoke about Facebook’s uniquely high user demand, saying that users were spending 8 billion minutes on the site a day at the time. Since then, he has worked on launching the Timeline product and establishing a Facebook engineering outpost in New York.

    Facebook’s Jay Parikh, who spoke at our Structure:Europe conference last year, has also made a mark on Facebook’s infrastructure improvements, such as cold storage and abundant use of flash memory.

    Facebook’s previous CTO, Bret Taylor, said last June that he would leave the company. It’s not clear yet who will take up the title Schroepfer is vacating.


  • DeveloperAuction nets $2.7M to connect employers and developers

    DeveloperAuction, a site where companies can bid for developers seeking jobs, has taken on $2.7 million in first-round funding, suggesting that investors think tech recruiting can be disrupted again, even with LinkedIn, Stack Overflow and other online venues playing in the space.

    NEA and Sierra Ventures led the DeveloperAuction funding round, and Crosslink Capital, Google Ventures, SoftTech VC and Step Partners contributed as well.

    Last month I wrote about how venture capitalists were showing interest in DeveloperAuction despite its shortcomings, which span from monetizing a recruiting process that already happens naturally to using an arguably questionable word, auction, in the company’s name.

    Even so, the site does operate on a concept that challenges the generic recruiting model. (According to a statement, companies have floated more than $225 million in job offers.) The idea of asking companies to appeal to job seekers contrasts strongly with the usual way of making many job seekers compete for individual openings. It’s a refreshing approach. And Matt Mickiewicz, a co-founder and CEO of the company, told me in an interview last month that he wants to bring the DeveloperAuction way to other verticals.

    The question is, how many types of jobs have consistently high demand and low supply in many geographical areas, as is the case for developers? Now that VCs have a stake, the question could become more important than it was when news of the company first surfaced a few months ago.


  • Facebook tweaks its algorithms to improve Graph Search; comment search coming

    Facebook is trying to improve the algorithms and expand the reach of its Graph Search function. Sriram Sankar, engineering manager on the social network’s search infrastructure team, detailed its plans in a blog post the company published on Thursday. While Graph Search still isn’t available to every Facebook user, its evolution is worth following, as it’s a large-scale implementation that could offer lessons for startups and enterprises dealing with continually growing databases.

    Since its release to a select few users in January, Graph Search has caught attention for privacy reasons and for being less than ideal for marketers. Ordinary users have noticed shortcomings, too — searches don’t take status updates and comments into account, and results might turn up information that’s just not up to date. After attending a briefing on Graph Search at Facebook’s headquarters in Menlo Park, Calif., I pointed to a few of the improvements engineers were kicking around.

    Thursday’s blog post suggests that engineers are making progress:

    “We are also extending our search capabilities to do better text processing and ranking and have better mobile and internationalization support. Finally, we are also working on building a completely new vertical to handle searching posts and comments.”

    As Facebook users play around with Graph Search, Facebook can observe how it’s used — and take user feedback — and adjust accordingly. One way to see if the work is paying off is click-through rate, Sankar writes. Once engineers come up with a possible means to improve the algorithms, they test it, try it out on a small group of users and then compare results.

    So, almost two months after the Graph Search beta launch, how is it ranking search results? Exact algorithms are not publicly available. But here are some characteristics that give a sense of how they work (a toy scoring sketch follows the list):

    • The search engine for querying Unicorn, the database underlying Graph Search, doesn’t have to spit out exactly what you ask it to. It can consider several factors on the back end before serving up results that you might actually find more relevant: how far away you are from places you search for, such as restaurants; how many degrees of Facebook-friend separation there are between you and the people you search for; how similar search results are to search queries; and what you have searched for in the past.
    • Unicorn brings together results in no specific order. But then they get ranked by their scores, which require metadata to be associated with each person, place or thing searchable with Graph Search.
    • When you start typing in a search string — take the words “people who live in s,” for example — Graph Search uses natural-language processing to make suggestions for searches of places that start with the letter s, rather than people whose names start with s.
    • Instead of offering search results based on the highest scores alone, Unicorn eliminates duplicates. For instance, a bunch of pictures of Mark Zuckerberg might not be what a user had in mind when she searched for “photos of Facebook employees.”
    • If results include people, places and things, not just one of those, then Unicorn needs to normalize the results and put them all in the best possible order.
    • Queries that combine more than one of those elements — say, “restaurants that friends like” — require Unicorn to score both restaurants and friends and share data between the two scoring efforts.
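
    Here is that toy scoring sketch: candidates scored from several signals, deduplicated, then normalized per result type before the final ordering. The weights and fields are invented; as noted, Facebook’s actual algorithms are not public.

    ```python
    # Toy ranker in the spirit of the traits above: blend signals,
    # deduplicate, normalize per result type, then merge into one order.
    candidates = [
        {"id": 1, "type": "place", "distance_km": 2.0, "similarity": 0.9},
        {"id": 2, "type": "person", "friend_hops": 1, "similarity": 0.7},
        {"id": 1, "type": "place", "distance_km": 2.0, "similarity": 0.9},  # duplicate
        {"id": 3, "type": "place", "distance_km": 40.0, "similarity": 0.8},
    ]

    def raw_score(c):
        """Blend signals like the ones described, with made-up weights."""
        score = c["similarity"]                      # query/result similarity
        if c["type"] == "place":
            score += 1.0 / (1.0 + c["distance_km"])  # closer places rank higher
        if c["type"] == "person":
            score += 1.0 / c["friend_hops"]          # closer friends rank higher
        return score

    # Drop duplicates on (type, id), then normalize within each result type
    # so people and places can be merged into one ordering.
    unique = {(c["type"], c["id"]): c for c in candidates}.values()
    by_type = {}
    for c in unique:
        by_type.setdefault(c["type"], []).append((raw_score(c), c))

    ranked = []
    for scored in by_type.values():
        top = max(s for s, _ in scored)
        ranked += [(s / top, c) for s, c in scored]  # crude per-type normalization

    ranked.sort(key=lambda pair: pair[0], reverse=True)
    print([(c["type"], c["id"]) for _, c in ranked])
    ```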

    It’s nice to see that engineers are trying to improve Graph Search. The product could become more fun and useful. But there’s plenty more work to do, so don’t be caught off guard when Facebook announces further adjustments in coming months. With such a visible product, Facebook will have to keep making improvements, and that’s all the better for developers looking to refine search engines of their own.
