Category: News

  • Traditional authentication is outdated, but what do consumers really want to replace it with?

    Let’s face it — secure online authentication is a chore. Aside from the few people who enjoy juggling very complex passwords and/or a password manager, most of us find it difficult to use a secure combination of characters for each and every website where we have an account. Two-factor authentication isn’t all that convenient either, requiring a secondary means of generating a secure code. Often that’s a token issued by the bank, a text message sent by the service provider, or an app.

    Is that modern? Well, it depends on your definition of the word. I consider today’s online authentication to be merely a slight evolution of the methods we have used over the last decade. That’s not to say it’s a bad thing, but it’s certainly not where the visionary pictures, videos or predictions of not too long ago would have us by now. We’re not using flying cars, that’s for sure, nor some wonder authentication method for that matter.

    According to a study carried out by the Ponemon Institute, which surveyed 1,924 consumers in Germany, the United Kingdom and the United States, two-thirds of respondents would consider using a “multi-purpose identity credential” issued and managed by a trustworthy organization in order to authenticate with various services.

    The largest share of respondents, 32 percent, said they would prefer using “information contained inside a mobile device”. A biometric system and an ID card with an RFID chip follow, each favored by 23 percent of respondents. The least popular method is, wait for it, a chip implanted in one’s body. Unsurprisingly, that option won over a mere 1 percent of respondents.

    Judging by the other data in the study, the answers above shouldn’t come as a big surprise. Only five percent of respondents have never experienced authentication failures during online transactions, while 26 percent, 29 percent and 34 percent of respondents from the US, UK and Germany, respectively, have experienced the issue frequently.

    Furthermore, 54 percent, 53 percent and 47 percent of respondents from the US, UK and Germany, respectively, said that it takes too long to reset a username or password. Similar or higher numbers apply to forgetting overly complex or long passwords and to getting locked out of websites due to authentication failures.

    What is interesting, though, is that consumers don’t want the easy way out either. According to the study, 46 percent, 45 percent and 65 percent of respondents from the US, UK and Germany, respectively, have trouble trusting systems or websites that rely on passwords as the sole means of authentication.

    Slightly lower shares (38 percent, 37 percent and 46 percent, respectively) display a similar distrust of systems or websites that do not ask users to change their passwords frequently.

    After all is said and done, security remains a very serious problem, with risks that shouldn’t be taken lightly. Personally, I find it rather difficult to believe that many consumers are actively searching for ways to protect their online accounts and other important ones.

    It’s in our nature, as tech-oriented folks, to be aware of the dangers lurking around every corner, but for the most part I think the number of people who actually do something about it, where it matters, is quite low. For this reason I believe that new means of authentication, and new devices designed to keep our online endeavors safe, will not be adopted easily.

    Infographic Credit: Nok Nok Labs

  • How the public is reshaping media at Reddit, Vox and LinkedIn

    Digital technology is transforming not just how media is made, but who is making it. At elite digital brands, readers and the general public are playing an unprecedented role in shaping media and content creation.

    At a paidContent Live session hosted by Slate editor Jacob Weisberg, three media companies described a new breed of creators who are equally at ease with content and technology. This has led to the emergence of non-traditional media influencers such as comment communities at Reddit, and at Vox Media sites The Verge and SB Nation.

    “In the day, if you wanted to create media, you had to start as an intern making coffee,” said Vox CEO Jim Bankoff, adding that now anyone with $100 can make a movie. He explained that this democratization of content creation has resulted, in some cases, in Vox hiring people on the basis of their comment contributions.

    This has led to a culture of empowerment in which everyday people are as fluent in media as many traditional journalists. More and more, they are taking to public platforms to not just report, but to take part in the news.

    Erik Martin, GM of Reddit, cited “random acts of pizza” — a community on the site that sends pizza as a gesture of support, most recently to emergency workers in Boston.

    But does this new culture of public participation also have a dark side? Slate’s Weisberg pointed to Reddit’s ongoing efforts to identify the Boston bomber, including posting a suspect’s photo on the site, as approaching vigilante justice.

    “Is Reddit about to be Richard Jewell?” asked Weisberg, referring to the security guard who discovered a bomb at the 1996 Atlanta Olympics only to be falsely accused as a suspect in a traumatizing “trial by media.”

    Martin said he regarded the role of Reddit employees as “groundskeepers” who helped discrete communities determine their own standards.

    Dan Roth, a former Fortune editor who now oversees news on LinkedIn Today, offered a further example of how non-journalists are creating media. He described how executives like Virgin Group founder Richard Branson are now writing regular columns in their own voices. While such contributions in traditional media typically amounted to little more than press releases, Roth said that readers’ ire at inauthenticity has forced even corporate executives to reevaluate how they write.

    Check out the rest of our paidContent Live 2013 coverage here.



  • Reinhart, Rogoff, and How the Macroeconomic Sausage Is Made

    After watching a presentation by Kaggle founder and CEO Anthony Goldbloom at a conference last year, I went up to the front of the room to ask him a question about macroeconomics.

    Kaggle organizes competitions in which data scientists (which in most cases means anybody who wants to sign up) compete to build predictive models based on huge troves of data. Goldbloom founded the company after working as a macroeconomic modeler at the Reserve Bank of Australia and the Australian Treasury.

    “Could you use the Kaggle approach to make macroeconomic predictions?” I asked him.

    “No,” he replied. “Not nearly enough data.”

    I couldn’t help but think back to that as controversy erupted this week over Harvard economists Carmen Reinhart and Kenneth Rogoff’s oft-cited three-year-old finding that economic growth plummets when a country’s debt-to-GDP ratio exceeds 90%. Three University of Massachusetts economists — Thomas Herndon, Michael Ash, and Robert Pollin — came out with a working paper that recrunched the Reinhart and Rogoff data set and arrived at a very different result: instead of average -0.1% growth in countries with debt/GDP of more than 90%, they came up with 2.2% growth.

    Most of the attention since then has focused on an Excel error that Herndon, Ash, and Pollin found — which caused five countries to be excluded from the analysis — and Reinhart and Rogoff have subsequently acknowledged. That’s pretty embarrassing, but it only changed the result by 0.3 percentage points. Most of the difference had to do instead with how Reinhart and Rogoff weighted the results from different countries. They chose to give each country’s average growth in a particular debt/GDP range the same weight, regardless of how many years the country had been in that situation. As Herndon-Ash-Pollin write, this isn’t an indefensible approach (they do argue that Reinhart and Rogoff should have devoted a lot more ink to defending it). But by taking a different approach, and instead weighting countries’ results by how many years they were above 90% debt/GDP, they were able to get a very different result.
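
    To make the weighting dispute concrete, here is a toy example in Python with invented growth figures (not the actual Reinhart-Rogoff data set) showing how the two schemes can pull the average in opposite directions:

        # Invented numbers, not the actual data set: growth (%) in years
        # with debt/GDP above 90%, for two hypothetical countries.
        growth = {
            "A": [2.5, 2.0, 2.2, 1.8, 2.1, 2.4],  # six years above the threshold
            "B": [-7.0],                          # a single bad year above it
        }

        # Equal country weights (the Reinhart-Rogoff choice): average each
        # country first, then average the country means.
        country_means = [sum(v) / len(v) for v in growth.values()]
        equal_weighted = sum(country_means) / len(country_means)

        # Country-year weights (the Herndon-Ash-Pollin choice): pool every
        # year, so countries with more years above the threshold count more.
        pooled = [g for years in growth.values() for g in years]
        year_weighted = sum(pooled) / len(pooled)

        print(f"equal country weights: {equal_weighted:+.2f}%")  # -2.42%
        print(f"country-year weights:  {year_weighted:+.2f}%")   # +0.86%

    Country B’s single bad year counts as much as Country A’s six steady ones under equal weighting, and it’s that kind of choice, multiplied across 110 country-years, that separates -0.1% from 2.2%.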

    This is watching the sausage of macroeconomics being made. It’s not appetizing. Seemingly small choices in how to handle the data deliver dramatically different results. And it’s not hard to see why: The Reinhart-Rogoff data set, according to Herndon-Ash-Pollin’s analysis, contained just 110 “country-years” of debt/GDP over 90%, and 63 of those come from just three countries: Belgium, Greece, and the UK.

    This is a problem inherent to macroeconomics. It’s not like an experiment that one can run multiple times, or observations that can be compared across millions of individuals or even hundreds of corporations. In the words attributed to economist Paul Samuelson, “We have but one sample of history.” And it’s just not a very big sample.

    So what to do about it? One response is to dig for more data, and Reinhart and Rogoff have been doing that, going back to 1800 to examine episodes of public debt overhangs. Another is to have different people crunch it in different ways, which is what Herndon-Ash-Pollin did, or assemble different data sets, as several other scholars have done.

    But the biggest challenge may be how to present it. My reading of Reinhart-Rogoff, Herndon-Ash-Pollin, and the other papers linked to in the preceding paragraph is that rising debt loads do weigh on growth. Yes, there’s causation at work in both directions: low growth results in bigger debts — which has clearly been the case in the U.S. over the past couple of years. But attempts to separate that effect out by looking at growth rates well after a spike in debt do indicate slower growth after higher debt. And for economists of every school but so-called modern monetary theory, it’s logical that big debts would eventually eat up resources and slow growth.

    What there isn’t, though, is an obvious tipping point where debt becomes too high, and deficit spending becomes a drag rather than a stimulus. At least not one that’s obvious before the fact. The initial Reinhart-Rogoff research seemed to indicate a sharp dropoff in average growth after debt passed 90% of GDP. But they also reported a significantly smaller dropoff in median growth, and their subsequent analyses, as well as the Herndon-Ash-Pollin rework of their data, similarly show a dropoff but not a dramatic inflection point.

    In the 1990s, the consensus seemed to be that for the U.S. the inflection point was a public debt/GDP ratio of 50% — which is exactly what the country was nearing at the time. Higher than that, and the bond market vigilantes would punish the U.S. with much higher interest rates on government debt. The central teaching of what came to be known as Rubinomics was that cutting the deficit would actually stimulate the economy as it brought interest rates down.

    Now, of course, U.S. public debt is up to 76% of GDP, yet the bond market vigilantes all seem to have retired or moved to Europe. In the long aftermath of a global financial crisis, with deflation a real threat, the U.S. can get away with running huge deficits with no immediate consequence. In fact, the Keynesian reasoning goes, big deficits now will lead to a better long run growth picture (and thus lower future debt/GDP ratios).

    Is this reasoning correct? Well, right now the evidence would seem to support it: The U.S. is muddling through, while austerity measures have pushed Europe back into recession and most of Southern Europe into depression. For whatever it’s worth, Reinhart and Rogoff have advocated continued deficit spending too — at least for now.

    But this is macroeconomics. It’s hard to muster conclusive evidence, and almost impossible to generate much in the way of useful predictive ability. One response to this fog would be to throw up our hands and not do anything at all. Another is to acknowledge that our knowledge is limited and proceed anyway on a mix of data, theory, and intuition.

    This, to a certain extent, is what the Reinhart-Rogoff project of the past few years (most notably their book This Time is Different) has been all about. It’s a combination of history, data-crunching, and informed opinion — intended to be consumed and debated by an audience far beyond academic macroeconomics. Which is exactly what’s happening now. That can’t be a bad thing, can it?

  • Various reports indicating that Samsung’s flexible display will be delayed


    It appears that the world will have to wait a wee bit longer for Samsung’s innovative flexible display to hit devices. ETnews reports that because of the flexible display’s vulnerability to moisture and oxygen, Samsung’s display unit is reviewing encapsulation technologies to protect the flexible panel and, more importantly, the guts of the device underneath it. In other words, Samsung hasn’t quite figured out how to build a flexible display that won’t let moisture and air seep into a device, so it is working on alternative methods to seal the display regardless of how the device is bent or angled.

    Because of the delay, the world should expect to see the arrival of the first smartphones featuring flexible displays by the end of 2013. Hopefully Sammy will work out all those kinks by then or it’ll see its competition beat it to the punch.

    source: SamMobile


  • Learn How DCIM Can Save You 30 Percent in Data Center Operations Costs

    Plus, make your data centers more productive and reliable.

    Please join Rich Miller and Intel’s Jeff Klaus on April 25 for a webinar titled “Is It Worth 60 Minutes to Save Up to 30 Percent in Data Center Operation Costs?”, where they will explain Data Center Infrastructure Management (DCIM) and its contribution to managing power and cooling usage in the data center.

    Klaus will focus on the servers that account for the majority of the power consumed in the data center and explain how managers can introduce a holistic energy optimization solution. Accurate monitoring of power consumption and thermal patterns creates a foundation for enterprise-wide decision making with the ability to:

    • Monitor and analyze power data by server, rack, row, or room;
    • Track usage for logical groups of resources that correlate to the organization or data center services;
    • Automate condition alerts and triggered power controls based on consumption or thermal conditions and limits; and
    • Provide aggregated and fine-grained data to web-accessible consoles and dashboards, for intuitive views of energy use that are integrated with other data center and facilities management views.
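
    A minimal sketch of that roll-up idea in Python (the field names and readings here are hypothetical, not Intel’s DCIM data model):

        from collections import defaultdict

        # Each sample carries its physical location so it can be
        # aggregated at any level: server, rack, row, or room.
        readings = [
            {"server": "srv-01", "rack": "R1", "row": "A", "watts": 310, "inlet_c": 22.5},
            {"server": "srv-02", "rack": "R1", "row": "A", "watts": 295, "inlet_c": 23.1},
            {"server": "srv-03", "rack": "R2", "row": "A", "watts": 410, "inlet_c": 24.0},
        ]

        def rollup(samples, level):
            """Total power and peak inlet temperature per rack, row, etc."""
            totals = defaultdict(lambda: {"watts": 0, "max_inlet_c": float("-inf")})
            for s in samples:
                bucket = totals[s[level]]
                bucket["watts"] += s["watts"]
                bucket["max_inlet_c"] = max(bucket["max_inlet_c"], s["inlet_c"])
            return dict(totals)

        print(rollup(readings, "rack"))  # per-rack totals for dashboards
        print(rollup(readings, "row"))   # per-row view for cooling decisions

    The same aggregates can feed the condition alerts in the third bullet: compare each bucket’s totals against configured power or thermal limits and trigger a control action when a threshold is crossed.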

    Identifying temperatures at the server level, versus at the room or even rack level, can also help data center managers more accurately understand what the real ambient temperature should be for individual servers to have optimal lifespans. This assessment of real temperatures has enabled data centers to increase the overall room temperature by one to two degrees, which can represent a savings in excess of 30% of the air-conditioning expense.

    This Data Center Knowledge Webinar on DCIM will take place on April 25 at 2:00 p.m. Eastern / 11:00 a.m. Pacific. Please register today and be sure to participate in the Q&A session, asking your questions on these case studies as well as your questions on DCIM and data center efficiency.

    Click here to register for this event.

  • I bought the fastest server so why is my app slow?

    It may seem obvious that having the “best” solution doesn’t guarantee a better outcome, but in IT we don’t always see it that way. We often forget that there are larger issues at play than whether a piece of our infrastructure or one of our applications is “the best,” so here’s how I like to think about pinning down what is often a subjective and variable concept in IT.

    The thought to write about what “best” means in technology came to me after reading Joe Weinman’s book “Cloudonomics”. In the book he points out several times that having the best technology doesn’t guarantee that you’ll end up with the best solution or service. So how do you determine what’s best?

    Some definitions of “best” in IT include:

    • Most comprehensive solution
    • Lowest priced solution
    • Easiest solution to install and get up and running
    • Best performing in high latency situations
    • Doesn’t require capital
    • Doesn’t lock me in
    • Highest price

    I’m guessing that after having read the seven bullets above, you’re already starting to get a sense of how we sometimes make assumptions about the appropriateness of an IT solution based on incomplete considerations. It seems self-evident what best should mean, but it’s often true that we don’t recognize how priorities can contradict or shift our definitions.

    Buying the best

    When you buy the best solution in any product category you assume that you are in fact getting the most appropriate solution for the money. That value might be any combination of things, from a name (such as Hermes) to performance or sex appeal (like Ferrari) or maybe time per transaction (I won’t wait in line at Whole Foods markets).

    The reality is that best could mean all of those things or none of them. While it’s almost certain that a bag you buy from Hermes is going to be well made, is it 30 times better than another brand? Is a $1,000 bottle of wine 20 times better than a $50 bottle? If money is no object to you, then the answer is more likely to be yes, a $1,000 bottle is better. On the other hand, if you value the ability to have different bags for different events, or simply prefer the ability to buy a new bag more often, then the $150 bag is probably “best” compared to the Hermes bag.

    Buying IT is no different, except that it’s immensely more complex. In IT there are myriad variables that affect the ability to get the most from any solution. These variables include price, features, latency, maintenance, flexibility, open vs. proprietary, required training, user interface, APIs, and more. What about your team’s ability to sell the solution as the right choice to your customers? How about whether or not you’ve got the correct organizational and financial structure to support the solution appropriately?

    How to determine what’s best

    The decision process for every new technology or solution selection needs to cover a wide list of criteria. Most of these criteria will be the same for each business, but the priorities will change depending on your organization.

    The majority of the standard selection assumptions (need, ROI, cost, etc) are well understood, but even among those “standards” there is room for better decision making and prioritization. I like to include the following non-standard criteria when my team is making a solution selection.

    • How much value do we get out of the solution at 80 percent of total feature set?
    • What other capabilities does this solution open the door to in the future?
    • How many of my customers need to use the solution and to what extent before it adds new value?
    • What organizational support (business and IT) do I have for the long term “ownership” needs (staff, training, champions, budget, lifecycle, etc.)?
    • How does this solution position my team to execute against larger IT and business visions?
    • Does this solution leverage other partners and technologies already in use?
    • What’s the time to install vs. cost to purchase or time to benefit? (In other words, will I get 30 percent net new benefit value in year one vs. nothing in year one and two, but 80 percent in year three from another solution?)
    • Ownership risk assumptions (what assumptions are you making at the front end of any solution selection and are those assumptions still accurate? With modern cloud/SaaS etc., you might not have the ownership risk you “enjoyed” with many legacy platforms)

    While I could easily argue that all of the above bullet points are important to every organization, they must each be measured against the organization’s situation at the time. Are you low on cash, but growing fast, do you have a higher risk profile or regulatory concern? The process of prioritization can only be done by the business making the purchase.

    Seems like an oxymoron

    Sometimes the best purchase is the one you deliberately avoid; other times it’s the one you never got around to making. IT is littered with examples of purchases that would have been better left undone, but just as common are those purchases that were never made because they weren’t “perfect”. When you’re looking for perfect, keep in mind that there’s no such thing in software, and in many cases in hardware; if you can solve a problem even at 70 or 80 percent, the purchase might still be better than waiting for the “perfect” option.

    The test-and-fail option is much more real today than it has ever been, and that’s a good thing. Now you can test, fail, and retry three or four times, all for less effort and cost than making one selection in the past. So step forward boldly, but when you’re thinking “best,” make sure you’ve really developed a case for what best means to your organization.


  • Are You Listening to Your Most Important Customers?

    You don’t see many tweets touting companies as #betterthanaverage. But there’s plenty of social media chatter about really good or bad customer experiences. The same holds true for the feedback companies receive directly via e-mails or phone calls. Customers are most likely to offer feedback when they have either a really bad experience or a great one — and they almost never say a word when their experience falls somewhere in the middle.

    The problem is, that silent majority in the middle typically drives the success or failure of a business.

    Measuring the voice of the silent majority starts with understanding the difference between collecting feedback and measuring the customer experience. Feedback is opt-in, and inherently reactive, because businesses focus on addressing the issues raised by the “squeaky wheels.” Measurement is random and representative, which allows businesses to prioritize changes based on everyone’s experience: lovers, haters, and those who fall somewhere in between.

    Consider the graph below, which compares opt-in feedback and random-sample measurement (such as the kind designed by my company, ForeSee) to a normal distribution curve. The gray line represents the normal curve. The blue line shows satisfaction scores based on 44,000 opt-in feedback surveys, which can be deployed by posting a button on the homepage that says “give us your feedback here.”

    [Chart: Opt-In Feedback vs. Random-Sample Measurement]

    The yellow line shows satisfaction scores based on more than 5,000 random-sample measurement responses — these are collected from a small sample of web visitors (usually 1% or less of overall site traffic) randomly intercepted and invited to take a survey.

    As you can see in the chart, random sampling does a better job of measuring the wider range of customer experiences, rather than just the very happy and very unhappy customers who tend to respond to an opt-in feedback button. Measuring social listening (another form of feedback) could produce an even more dramatically skewed satisfaction score, and it’s something we’re researching now. But the bottom line is that if you track opt-in feedback only, you’re missing the silent majority.
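
    The sampling effect is easy to simulate. The following Python sketch uses invented numbers (not ForeSee’s data or methodology): most visitors have middling experiences, but middling visitors rarely click a feedback button, so the opt-in pool over-represents the extremes while a random intercept sample tracks the true distribution:

        import random

        random.seed(42)

        # True experience scores for 100,000 visitors: most are middling.
        population = [min(100, max(0, random.gauss(70, 12))) for _ in range(100_000)]

        def opts_in(score):
            # Squeaky wheels: the very unhappy and the delighted click
            # "give us your feedback" far more often than everyone else.
            return random.random() < (0.10 if score < 50 or score > 90 else 0.005)

        opt_in_scores = [s for s in population if opts_in(s)]
        random_sample = random.sample(population, 5_000)  # random intercept

        share_middle = lambda xs: 100 * sum(50 <= x <= 90 for x in xs) / len(xs)
        print(f"middling scores, full population: {share_middle(population):.0f}%")
        print(f"middling scores, opt-in feedback: {share_middle(opt_in_scores):.0f}%")
        print(f"middling scores, random sample:   {share_middle(random_sample):.0f}%")

    The opt-in pool ends up dominated by the tails even though roughly nine in ten visitors sit in the middle band, which is exactly the gap between the blue and yellow lines described above.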

    Now, I am not suggesting that we ignore feedback — any time a customer wants to tell you something, you’d better be listening. The beauty of feedback is that customers or users put issues and improvements into their own words. That’s important information to track and act on quickly. I’m making the case that you should drive your decisions and future investments by real measurement data.

    A good approach to customer experience measurement will allow organizations to answer some specific questions:

    How am I doing? What is my performance? You can’t manage what you don’t measure, and you can’t begin to evaluate or improve until you know where you are right now.

    Sterling Jewelers — the largest specialty jeweler in the U.S. and a ForeSee client — utilized customer experience analytics to help evaluate website redesigns for their Kay Jewelers and Jared the Galleria of Jewelry brands. They asked customers about merchandising, navigation, functionality, and other website elements in order to understand what was driving revenue, loyalty, and recommendations. Scientific measurement gave them a metric the company could rally around, whereas opt-in feedback had never made a compelling enough case to build internal consensus for change.

    Where should I focus my efforts? Customer satisfaction, when measured properly, predicts sales, loyalty, and recommendations. Research from the University of Michigan shows that customer satisfaction can even help predict stock prices.

    Some research we recently conducted on top luxury brands illustrates this point. When customers were asked for feedback about what they like best about a luxury brand website, they typically said things like “great clothing.” When asked what they liked least, they often complained about the price. The research shows that if luxury brands were to evaluate the relative success or failure of their websites based on that feedback, they could mistakenly think that changing the price is the obvious way to improve customer satisfaction. But with proper customer measurement, luxury brands would learn that the merchandise is really the top priority for customers. Price, meanwhile, is typically the lowest-scoring factor. So in this case, merchandise improvements will have a much more significant impact on satisfaction than changes in price.

    Perry Ellis used analytics based on continuous measurement of the customer experience to discover that people who were engaged by an associate when entering a store were 10 percent more satisfied, and they spent roughly 50 percent more than those who weren’t greeted. This is the kind of intelligence that a company can use to focus its initiatives and improve its bottom line.

    Why should I take action? Will the payback be worth it? A good customer measurement system is a critical tool for managing decisions and setting goals.

    Another one of our clients, PBS.org, used customer experience measurement to learn that many of their website visitors were looking for recipes from their cooking shows, something they didn’t provide online at the time. Visitors were searching exhaustively, and in vain, for these recipes. These insights prompted PBS.org to create a new food site that significantly improved visitor satisfaction. PBS Food now receives more than one million page views per month, a sign that their customer measurement is paying off.

    A company should be able to quantify the impact of alterations in pricing, product mix, content, or customer service before any changes are made, and predictive customer experience analytics are a piece of that puzzle. By addressing the changes that are most likely to improve satisfaction using continuous measurement, instead of addressing issues that get the most complaints, business leaders can better manage their organizations.

    In the new “big data” world, in which organizations collect and analyze enormous amounts of data in order to improve the way they do business, it’s important not to get overwhelmed. Understanding the difference between types of data, putting proper data in context, and knowing which data to “listen” to is the key. While feedback data gives you a chance to react, measurement allows you to be proactive and strategic, giving you an advantage over competitors who are still running around oiling squeaky wheels.

  • The Young Turks is about to become the first news channel with a billion views on YouTube

    Beyonce, Bieber and… The Young Turks? Cenk Uygur’s liberal online video network is set to become the first news outlet to score a billion views on YouTube this week. For Uygur, that’s proof that online video isn’t just a novelty anymore, but a primary source of information for a whole generation. “They grew up on YouTube. That is their TV,” he told me during an interview last week.

    Uygur has been doing political commentary online since 2005, but also brought his brand to traditional media, appearing first on MSNBC and then Current. Asked about the secret for hitting a billion views, he told me it’s partially because other media organizations have long underestimated YouTube and online video in general. “We have been really lucky that this field has been left alone to us for so long,” he said.

    The Young Turks were one of the news organizations tapped by Google for YouTube’s original content push, which included sizeable advances to bring more professionally produced content to the site. Traditional news organizations like Reuters struggled to find an audience for content funded by these kinds of advances on the site, but Uygur told me that the collaboration with YouTube was a full success for his company. “We were the first channel to get refunded,” he said.

    In the coming months, he plans to continue that growth story by adding additional shows, but he also hinted at further TV collaborations, including production work for TV networks and additional platforms for his existing shows.

    So what’s Uygur’s advice for news organizations that have a hard time with online video? “There is one trick: It’s to be authentic,” he told me. Having an authentic voice is the only way to speak to a generation that has grown up with YouTube, he argued, adding: “To them, TV now equals fake. And online equals real.”


  • Don’t Like Facebook Single Sign-On For iOS? Use Google+ Sign-In For iOS Instead

    In iOS 6, Facebook became more closely integrated with the mobile OS. Part of that integration made it easier to implement Facebook Single Sign-On in iOS apps. Of course, some developers may not like Facebook, or maybe they prefer Google+. For those developers, Google has just the thing.

    Join +Silvano Luciani and +Xiangtian Dai from the Google+ iOS team as they show off how easy it is to implement Google+ Sign-In on iOS, using our own iOS quick-start as a guide! Check it out for yourself at developers.google.com/quickstart/ios and leave your questions here!

  • Midokura Launches MidoNet Network Virtualization Platform

    At the OpenStack Summit this week in Portland, Midokura announced the general availability of MidoNet. First introduced to the U.S. market last fall, MidoNet is a distributed software-defined virtual network solution specifically designed for Infrastructure as a Service (IaaS). MidoNet is integrated with the OpenStack Quantum networking project and has support for OpenStack Nova network drivers as well. This technology treats networking like one big distributed system.

    “We are excited to disrupt the market and offer the industry general access to our MidoNet network virtualization technology,” said Dan Mihai Dumitriu, co-founder and CEO of Midokura. “Unlike other solutions out there, MidoNet pushes intelligence to the edge of the network, as it takes an overlay-based approach to network virtualization and sits on top of any IP-connected network. Given this, MidoNet makes it easier for enterprises to build fully featured, secure and scalable clouds.”

    Japan-based Midokura also recently announced that it has raised $17.3 million in Series A funding. The round was led by Innovation Network Corporation of Japan (INCJ), a Japanese public-private partnership. Other investors who participated in the round include NTT Group’s Venture Fund: NTT Investment Partners, L.P. and NEC Group’s Venture Fund: Innovative Ventures Fund Investment L.P.

    “As enterprises and carriers embrace and build out IaaS clouds, an overlay-based network virtualization platform will soon be a must-have technology,” said Mr. Kimikazu Noumi, President and CEO of INCJ. “The financial support of Innovation Network Corporation of Japan, and other key backers, validates our strategy as well as the work we’ve done over the past three years developing our industry-leading product MidoNet. This funding will enable us to accelerate our product engineering, the establishment of partnerships, and the growth of our customer base. We look forward to delivering the most performant, scalable and fault-tolerant network virtualization solution to the IaaS infrastructure market.”

  • Former Google engineer builds service to stop companies from tracking people online

    As advertising companies continue to push the boundaries of online tracking in an effort to woo clients with eerily accurate ad targeting techniques, online privacy is seemingly becoming a thing of the past. One startup is looking to stop third-parties from tracking users on the web, however, and one of the company’s co-founders may be in a better position than most to accomplish this lofty goal.


  • What CFOs Need to Know

    Matthew Goulding is managing director of Cannon Technologies – consultants, designers and turn-key builders of data centers based in the UK.


    As a CFO, CEO or CIO with a data center (or possibly several) in your organization, you’re undoubtedly aware that they devour electricity like small cities and that the up-front capital investment, cost of expansion or re-development is somewhere between painful and eye-watering. This column looks at how a very new data center construction method is set to allow true incremental ‘pay-as-you-grow’ whilst reducing total CapEx and OpEx.

    Over recent years, there have been several attempts at ‘pay-as-you-grow’ data center solutions – but whilst these have successfully deferred CapEx, they have led to a more expensive overall solution with higher lifetime CapEx costs and sometimes higher OpEx too.

    There is now a new ‘incremental modular’ pay-as-you-grow technique. So new, in fact, that your data center people may not have heard of it yet. The system has been implemented by Cannon Technologies in conjunction with telecoms network operators around the world over many years, and has now been adapted for data center ‘new-builds’ and upgrades.

    Reducing the Power Cost

    Unless your data center was built very recently it’s probably a massive bricks-and-mortar building with cavernous data-halls full of racks or cabinets which are, in turn, full of servers and other IT ‘hardware’.

    Each server cabinet could contain maybe twenty or so servers – many of which actually do fairly little most of the time. So while each cabinet only uses as much electricity as two or three one-bar fires, it is very inefficient for the little work being done.

    Hot air from all of these cabinets heats up the air (and the walls) of the great cavernous space – and massive computer-room air conditioning units (CRACs) around the perimeter then suck out the heat using almost as much energy again as the IT equipment is consuming. So you’re actually paying for nearly double the amount of power your servers need.

    Recent advances in server technology have massively increased efficiency. A technique known as ‘virtualization’ means that one piece of server hardware can pretend to be several dozen servers. This keeps the IT guys happy because they can still have servers dedicated to specific tasks – but it means that the actual hardware works at 80% capacity rather than 10%, which power-wise is far more efficient.
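
    The power argument is easy to sketch with a toy model in Python. The figures below are invented for illustration (real idle and peak draws vary widely by server), but they show why a few hosts running hot beat many hosts idling: an idle server still burns a large share of its peak power.

        IDLE_W, MAX_W = 150, 300  # assumed per-server draw at 0% and 100% load

        def draw(utilization):
            """Rough linear power model for a single server."""
            return IDLE_W + (MAX_W - IDLE_W) * utilization

        # Before: 20 dedicated servers, each only ~10% busy.
        before = 20 * draw(0.10)  # 20 * 165 W = 3,300 W

        # After: the same work (20 x 10% = 2 "servers' worth") consolidated
        # onto hosts run at ~80% busy: 2 / 0.8 = 2.5, rounded up to 3 hosts.
        after = 3 * draw(0.80)    # 3 * 270 W = 810 W

        print(f"{before:.0f} W before, {after:.0f} W after")  # roughly a 75% saving

    And every watt saved at the server is saved again at the cooling plant, since the CRAC load tracks the heat the IT equipment gives off.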

    The new generation of equipment is also much more miniaturized, so 300 to 500 ‘virtual’ server cores can be squeezed into a single cabinet. This would allow the data center footprint to be reduced were it not for the fact that business and user applications demand more and more servers, storage, etc., year on year.

    Moving to these new-style servers improves energy efficiency but creates problems too. Each cabinet now uses as much energy as 30 to 60 one-bar fires, and the old CRAC cooling method I described earlier is far too inefficient. So to contain and manage the cooling, a system of rooms-within-rooms has to be built (we call it cold-aisle cocooning), together with cooling units coupled very closely to these massive ‘heaters’.

    Compare to Bricks and Mortar

    Bricks-and-mortar data centers have always been horrendously expensive, whether new-build, re-purposed from an existing building or rented. Given the need to provide for expansion, buildings have always had to be built, purchased or rented at a considerably greater size than currently needed.

    Not only is the up-front CapEx several hundred per cent more than the current requirement for cabinet-space, the OpEx costs of operating a half-empty building year on year are exorbitant too.

    It can be ten years before the building is operating effectively at 80 per cent of utilization. And then within another five years or less it’s full and the IT guys need another new building.

    Frankly, for most organizations, it was never a sound business model. To be fair, there used to be few alternatives. But now there are.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

  • 65 percent of Buzzfeed’s traffic now comes from mobile devices

    Buzzfeed now sees 65 percent of its traffic coming from mobile devices, according to Kenneth Lerer, chairman of Buzzfeed and Betaworks. “Everything is going to the phone,” Lerer said at GigaOM’s paidContent Live 2013 conference in New York Wednesday.

    Lerer co-founded the Huffington Post and now serves as managing director at Lerer Ventures. “It’s the best time in the last eight years to invest in digital content companies,” he said. With the technology for the web as well as mobile having been more or less built out, it’s now time to fill those pipes, he argued. “Content is king at a certain time. And I think content is king now.”

    However, Lerer also cautioned that digital media investments are risky, with timing being everything. “If you’re too early, you lose all your money. If you’re too late, you don’t make any money,” he explained. And when you give cutting-edge companies seed money, it’s hard to predict how their business plans are going to pan out. “You have to kind of take the measure of the person,” he admitted.

    So what are the big trends Lerer is seeing in media, aside from a huge shift to mobile? Video will play a huge role going forward, but there’s also a more fundamental shift in how consumers look at media properties. In short, they ignore everything media execs hold dear. Instead of expecting a curated front page, they’re much more comfortable with a social feed, Lerer said. “When they see a Buzzfeed – to them, it’s just normal,” he added.

    In the end, the companies that aren’t married to those old rules will prevail, Lerer argued: “I think the world is ending for traditional media companies, but it’s just beginning for digital media companies.”

    Check out the rest of our paidContent Live 2013 coverage here.



    • Don’t Write Your Own File Picker In Your Google Drive App

      You’re building a Google Drive app, and now you want to implement a file picker. You can either build your own or use an existing service. Google argues against the former in its latest Google Drive SDK hangout:

      Writing your own file picker with the Drive API is easy, right? Not so fast! Watch to find out about the hidden complexity that can turn an otherwise easy task into a pain for users. We’ll show you ways to do it right when you have no choice as well as some alternative approaches that are quick and easy to implement.

      Check out more Google Drive news and tutorials here.

    • Samsung Galaxy S 4 launch details emerge from T-Mobile, Sprint and AT&T

      It’s not as if the Samsung Galaxy S 4 itself is a surprise: the phone was introduced in the U.S. last month. Details on pricing and availability, however, are just arriving now, with T-Mobile and Sprint both announcing their plans, while AT&T has updated its Galaxy S 4 order page with a delivery date. If you select a 16 GB Galaxy S 4 on AT&T’s site in either White Frost or Black Mist for $199 with contract, it says the phone will be shipped on April 30.

      T-Mobile customers can begin ordering their Galaxy S 4 for $149.99 down on April 24, followed by 24 interest-free monthly payments of $20. The carrier eliminated phone subsidies and contracts last month. Customers can now pair the $50, $60 or $70 Simple Choice monthly service with their Galaxy S 4. The benefit here is that once the Galaxy S 4 handset is paid off — a total cost of $629.99 — the $20 handset payments disappear from ongoing monthly bills. T-Mobile expects delivery of the Galaxy S 4 in retail stores on May 1.
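
      The quoted payoff figure is just the down payment plus the installments, as a quick Python check confirms:

          down, monthly, months = 149.99, 20.00, 24
          total = down + monthly * months
          print(f"${total:.2f}")  # -> $629.99, matching T-Mobile's quoted cost

      That makes the unsubsidized handset price explicit, which is the point of T-Mobile’s no-contract model: the device cost is separated from the service plan rather than buried in it.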

      Sprint is still subsidizing smartphones and is charging $249.99 for the Galaxy S 4 in either White Frost or Black Mist with a two-year agreement. Customers on other carriers, however, can save $100 by porting their number to Sprint and opening a new account. Sprint is opening pre-orders on April 18, with phones in Sprint stores on April 27. The carrier advertises unlimited data service for the Galaxy S 4.

      The handset is LTE capable on Sprint’s network but if you’re not in a Sprint LTE coverage area, service will drop down to the carrier’s much slower 3G network. That’s a key difference from the Galaxy S 4 on AT&T or T-Mobile: when those handsets are used outside of an LTE zone, they’ll revert to HSPA+ service, which is typically 5 to 10 times faster than 3G.


    • Facebook Graph Search Gets Its First Wave of Ads

      A small percentage of Graph Search users are now seeing some ads in their search results, marking the first time that Facebook has put any sort of advertising in their new product.

      Josh Constine at TechCrunch has a shot of the ads, which look like your basic sidebar ads. The Graph Search ads appear between the page breaks in Graph Search results, if there’s more than one page of results on the particular query.

      The ads are not based on your specific Graph Search query – in fact, they’re simply ads targeted to your basic information. This includes age, gender, likes, Open Graph activity and, of course, cookies. In other words, they’re the most basic form of ads that Facebook has employed for years – the ones you see gracing the side of your news feed on desktop. They’ll sport an image and some text, and clicking on them will lead you to a third-party site.

      This is a small test. The ads are only being shown to a small percentage of Graph Search users, and Facebook has yet to roll out Graph Search to all users.

      Of course, this is just the beginning. Eventually, Graph Search ads could be targeted based on keywords. Like Facebook’s Sponsored Search results of yore, businesses could pay to suggest queries inside the Graph Search bar. But for now, most of you won’t see any ads in Graph Search (assuming you even have it). And for those who do, they’ll simply be standard info-targeted ads at the page breaks on search results.

      Still, it’s a start. And you know more is coming.

    • Fox Pulls Family Guy Episode Over Boston Marathon Joke

      The network TV show Family Guy is known for its satirical and sometimes insensitive humor, but when jokes, in hindsight, hit too close to reality, even Fox won’t stand behind its hit show.

      According to a Reuters report, Fox this week has pulled the Family Guy episode “Turban Cowboy” from the Fox website and Hulu due to a joke in the episode that involves murder at the Boston Marathon. The studio also announced that the episode will not be rebroadcast.

      The episode, which aired on March 17, 2013, depicted the show’s main character, Peter Griffin, as he drives a car through Boston Marathon runners. He is then interviewed by sports reporter Bob Costas, who asks him how he “won” the marathon.



      Later in the same episode, Peter is depicted as accidentally setting off two separate bombs using his phone. These clips were later edited together to make it seem that Family Guy had predicted the bombings at the Boston Marathon this week.

      Family Guy creator Seth MacFarlane has condemned the clip, calling it “abhorrent.”

    • 5 Emergency Items Every Motorist Should Have


      It dawned on me recently, as I was driving along in middle-of-nowhere California, that most drivers aren’t properly prepared for those “Oh SHIT!” moments. So I’ve decided to make a quick list of things you should never be without, and it applies to all drivers, regardless of your daily distance traveled.

      1. Cell Phone: These little devices are generally the first line of defense when you’re stranded out there on the open road. They give you comfort, provide you with the contact and emergency information of those who can help, and give you the ability to call on #2 on the list. Oh… and don’t forget to keep your cell charger in the car at all times.


      2. AAA Card:
      The American Automobile Association (or Triple-A) has been around since 1902 with the main purpose of helping stranded motorists wherever they may be. Now here’s the deal: if you haven’t signed up for their service, do so NOW! And don’t be cheap – get their “Premier” service. It gives you 200 miles of free towing and only costs around $119.00 per year. Trust me on this, because if you’ve ever been stranded without any type of towing service then you already know that it costs upwards of $4.50 per mile for a tow, and that can add up real quick!


      3. Can of Fix-a-Flat:
      Many of today’s newest cars come with 19-inch or larger wheels, which means the odds of you actually having a spare are slim to none. Some manufacturers are kind enough to provide you with a can of the stuff already, but if you’re like me, you know it’s better to have a spare can anyway. Also, if you have one of these kits already in your car, make sure to check the expiration date (yes, they do expire) so you’re not S.O.L. if a tire goes south.


      4. Roadside Flares: Being stuck on the side of the road sucks, especially at night when oncoming automobiles can’t see you. Carrying 3 or 4 roadside flares solves this problem by not only alerting other drivers to your presence, but also giving you peace of mind so that no one (hopefully) slams into you.


      5. First Aid Kit: From slamming that finger in the car door, to hitting your head on the hood-latch while checking the oil, a small first aid kit can come in handier than you know. Get a good one with some bandages, band-aids and a good disinfectant just in case you really mess yourself up.

    • Rep. Mike Rogers Blocks Pro-Privacy Amendments From Being Added To CISPA

      The House will vote on CISPA this week. This vote will decide whether or not the House majority thinks companies should be able to share your private online information with the government while enjoying total legal immunity. The second debate of the bill shows that the bill’s proponents don’t care about your privacy at all.

      The EFF reports that CISPA went up for debate before the rules committee. During the hearing, congressmen were able to question the bill’s author, Rep. Mike Rogers, on the more troubling parts of the bill. The entire report is a little depressing as Rogers argued that CISPA has enough privacy protections already, and that the bill’s opponents are 14-year-olds living in their basement.

      Those who questioned CISPA at the hearing had the same concerns that the White House expressed in its veto threat. The two main concerns were that not enough was being done to protect private information before it’s sent to the government, and that the bill doesn’t require shared data to go through a civilian agency first. Two valid concerns, and concerns that Rogers says are moot points.

      In response to the first concern, Rogers says that identifiable information can’t be sent to the government because it’s all “zeroes and ones.” He seems to be under the impression that the government will be so busy scanning binary for cyberthreats that it will never collect any personally identifiable information from the content being shared with it. Rogers’ view displays a level of ignorance that shouldn’t be tolerated in Congress.

      The second concern was framed in the context of how it would hurt the Web economy. Rep. Jared Polis said that allowing companies to share your private information with the government, including military agencies, would decrease the users’ trust in the Internet. He argues that online services would see a decrease in business thanks to decreased trust in their services:

      This directly hurts the confidence of Internet users. Internet users – if this were to become law – would be much more hesitant to provide their personal information, even if assured under the terms of use that it will be kept personal, because the company would be completely indemnified if they ‘voluntarily’ gave it to the United States government.

      It appears that Rogers didn’t even provide a proper response to this concern. He just said that it wouldn’t be a problem and moved on.

      Rogers’ response is why CISPA is so dangerous to begin with. Every concern that’s brought up is met with a simple response of “It won’t be a problem.” Such a response does nothing to dissuade fears. In fact, it makes us fear CISPA more if its author can’t even mount a proper response to its critics. In any other debate, arguing that a problem isn’t a problem without the proper evidence to back it up would be laughed off the stage. It’s apparently not only welcome, but encouraged, in the House though.

      After providing non-responses to the concerns brought forward by other representatives, Rogers also blocked a number of pro-privacy amendments from making it into the final CISPA that will go before the House for a floor vote. One such amendment, from Rep. Adam Schiff, would have automated the removal of identifiable information from data before it was shared with the government. As CISPA currently stands, it leaves it to the government to remove any identifiable information after the data is already in its hands.

      We’re likely to see a vote on CISPA today or tomorrow. The vote isn’t likely to last long, and Rogers will most likely attempt to just ram it through without any more debate. We’ll let you know how the vote went, but don’t expect good news.

    • Nexus Tablet Sales Estimate Shows The Nexus 10 Is Probably Not A Popular Option


      Nexus tablet device sales remain a bit nebulous, since Google doesn’t give out specific numbers around them. But industry watchers, and Benedict Evans in particular, often try to pierce the veil to find out where the Nexus brand stands compared to the rest of the industry when it comes to sales. The Nexus 10, it seems, probably pales in comparison to most.

      The tablet, manufactured by OEM partner Samsung, went on sale in November of last year, shortly after the iPad mini. Evans’ estimates come from modeling active Android users, combined with Google’s developer data on the screen sizes and resolutions in use. Based on there being 680 million active Android users by the end of April, Evans puts the total at 6.8 million Nexus 7 devices, and only around 10 percent of that figure, or 680,000, for the Nexus 10.
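
      The chain of estimates is short enough to spell out (the roughly 1 percent Nexus 7 share is implied by the figures reported):

          active_android = 680_000_000     # active users by end of April
          nexus7 = active_android * 0.01   # implied ~1% share -> 6,800,000
          nexus10 = nexus7 * 0.10          # ~10% of Nexus 7 volume -> 680,000
          print(f"{nexus7:,.0f} Nexus 7s, {nexus10:,.0f} Nexus 10s")

      Each step is a modeled share rather than a measured number, but as the Apple comparison below shows, the gap is wide enough that the conclusion holds even if the estimate is only loosely accurate.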

      As Evans notes, Apple sold somewhere near 10 million iPads during just the final two months of Q4 2012 by comparison. It also announced total iPad sales of 22.9 million for its first fiscal quarter of 2013, including both the iPad mini and the standard iPad. To say that Google’s efforts with the larger Nexus tablet so far haven’t had customers rushing to stores would be fair, even if Evans’ estimate is only loosely accurate.

      Google is said to be preparing to debut a next-generation Nexus 10 already, with an improved CPU and GPU. But the issue here is not really about device quality; many found the Nexus 10 a fine performer, especially compared to many other larger Android devices from other sources.

      Google’s line of self-branded hardware has never been fully about selling product. The Nexus devices started out as reference designs, meant to show OEM partners what was possible with the platform. Lately, thanks to extreme affordability and increasingly impressive hardware like the Nexus 4, they’re becoming more popular with general consumers. But an LG-made smartphone in a proven product category, filling a need most consumers already know they have, is a far cry from a 10-inch Android tablet, a category in which the market has so far shown little interest, no matter the source or the price.

      The inflection point for Android tablet sales still has yet to appear. Google’s Nexus efforts in this case could be a crucial element of helping that happen, but only if the company can also start to aggressively expand software options tailored for Android tablets and make sure customers are aware of why they might want such a device.