Author: Mathew Ingram

  • What would the perfect news application designed for Google Glass look like?

    To say there’s a lot of debate about the “wearable technology” known as Google Glass would be an understatement. Some enthusiasts see it as the future of mobile man-machine interfaces, while others say it is more likely to be the new Apple Newton — in other words, a widely hyped product that will ultimately fail. But let’s assume some form of head-mounted display becomes commonplace: how will it change the way we consume content, and how will news outlets of all kinds have to change the way they think about what they do?

    Google showed off some prototype apps it came up with for its virtual display at the South by Southwest Interactive festival, including interfaces for photo-sharing and other services that used either voice commands or menus that rely on the device’s touch panel (which sits on the side of the headset). One of the apps it demonstrated was a New York Times app — designed by a developer at the newspaper — which mostly just pulled up headlines, but also allowed the user to ask for a story to be read aloud.

    Voice interface, real-time, location aware

    The voice interface for Glass is one of the obvious differences between it and other devices, although both the iPhone and Android phones support similar features for specific tasks via services like Siri and voice search. The need for audio input with Glass is driven in part by the size of the display, which is probably one of the most significant limiting factors when it comes to content: since it projects only a small virtual screen, there isn’t a lot of real estate for images or large chunks of text.

    So what would the perfect news app designed for Glass look like? What follows are a few ideas I came up with — feel free to add your own in the comments:

    • Short excerpts: If you have limited real estate, then you need to be concise, so a headline and a short snippet of text would be ideal — at least as a starting point. In addition to Google News, there are already a number of services that are focusing on this approach for mobile devices, including Circa and Summly (Circa will be part of our startup showcase at paidContent Live on April 17). Theoretically at least, news-wire services would be best equipped for this kind of content.
    • Real-time updates: In addition to concise summaries of news stories, Circa also offers another interesting feature that would be very useful for a device like Glass, which is the ability to “follow” a story and get real-time updates as they arrive. In a sense, this would be like a news-specific version of Twitter — very short, real-time and likely curated or filtered by an editor, whether a human being or an algorithm or both.
    • Designed for voice and touch: As the Google prototype shows, voice is going to be an obvious interface for Glass, and using the touch panel will also be an important way of interacting with the content. That means a news app that can be navigated via spoken keywords (next, more, etc.) as well as one that is segmented in some way so that chunks can be chosen quickly and easily with a tap. This would require news outlets to do a fair amount of work with metadata and tagging of their content.
    • Location aware: To me at least, one of the most interesting aspects of a mobile device like Glass is that it knows where you are, and thanks to Google’s image-recognition technology, in many cases it even knows what you are looking at. The potential for adding useful information is huge, and Google has provided a glimpse of what that might be like with its Field Trip app, which adds “augmented reality”-style data. News updates and archives could be a significant source of useful information about specific locations, events and objects.
    • Prescriptive data: In addition to Glass, one of Google’s more interesting pieces of technology is Google Now, the dashboard it provides on some Android platforms (and may be bringing to iOS) that pulls together information from a variety of sources — calendar, email, photos, traffic — to tell a user what they need to know. Robin Sloan and Matt Thompson envisioned this kind of content in 2011 as part of a future in which heads-up displays appear on objects like mirrors, photo frames and eyeglasses.
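
    Taken together, these ideas amount to a loose spec: short, tagged chunks of content plus a simple keyword-driven navigator. A minimal sketch in Python might look like the following (every name and structure here is hypothetical, invented for illustration, and not part of any actual Glass or publisher API):

```python
# Hypothetical sketch of a structured, tagged news "card" and a
# keyword-driven navigator for a Glass-style app. All names are
# illustrative; none of this is a real Glass or publisher API.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class NewsCard:
    headline: str                                    # short, display-sized headline
    snippet: str                                     # one- or two-sentence excerpt
    tags: List[str] = field(default_factory=list)    # metadata for filtering
    location: Optional[Tuple[float, float]] = None   # (lat, lon) for geo-aware surfacing
    follow_id: Optional[str] = None                  # story thread for real-time updates

class VoiceNavigator:
    """Steps through a list of cards using spoken keywords."""
    def __init__(self, cards: List[NewsCard]):
        self.cards = cards
        self.index = 0

    def command(self, word: str) -> NewsCard:
        # Tiny dispatcher: "next" advances, "back" rewinds; a real app
        # would also handle "more" (full story) and "read" (text-to-speech).
        if word == "next" and self.index < len(self.cards) - 1:
            self.index += 1
        elif word == "back" and self.index > 0:
            self.index -= 1
        return self.cards[self.index]

cards = [
    NewsCard("Rodman visits Pyongyang", "Basketball diplomacy continues...",
             tags=["north-korea"]),
    NewsCard("Glass apps demoed at SXSW", "Google shows prototype interfaces...",
             tags=["google-glass"]),
]
nav = VoiceNavigator(cards)
print(nav.command("next").headline)  # -> Glass apps demoed at SXSW
```

    A real app would pull its cards from a publisher feed; the point is simply that voice navigation only works if the content has already been segmented and tagged.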

    Not just news, but useful information

    The migration of content to mobile platforms like Glass — which in many ways is just part of the ongoing evolution begun by mobile phones and tablets — poses a number of challenges for traditional and even new-media outlets. The technological know-how to take advantage of Google’s APIs, and to structure and tag content with metadata that will make it useful, is one challenge.

    Another challenge is the ability to think of information in different ways: not necessarily just as “news” but specific kinds and formats of news, or even more broadly as simply “useful information” for someone wearing a mobile device. This isn’t something that most traditional media outlets are used to thinking of as important, but they are going to have to start doing so.

    That’s not to say every news organization has to suddenly divert resources to the creation of content for Google Glass or other heads-up displays — but it does mean they need to start thinking about what it would involve now, and transforming some of the ways they produce content to take advantage of it. Not only will those skills be useful for all kinds of mobile devices, but if they don’t start the evolution soon, Google will fill the data gap itself and they will be left on the outside looking in.

    Images courtesy of Flickr users Thomas Hawk and Arvind Grover

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

    • Why a LinkedIn acquisition of Pulse would make sense — content requires context

      According to a number of reports from insiders close to the company — including some who have talked to Om and some who have talked to All Things Digital — LinkedIn is considering an acquisition of Pulse, the news-reading app, for as much as $100 million. At first glance it might seem like an odd pairing: why would a site that is focused on corporate networking want to buy a content-recommendation app? But as the world of content continues to evolve, such a combination actually makes a lot of sense.

      Pulse is one of a number of news-recommendation apps that try to apply algorithms and other filters to suggest content to users — a group that includes Zite (which was acquired by CNN in 2011) as well as News360, Flipboard and Prismatic. Pulse was one of the first to make a big splash, in part because Apple co-founder Steve Jobs mentioned it on stage during the launch of the original iPad, and also because the New York Times accused the company of copyright infringement for aggregating its content.

      Since its launch, Pulse has grown to the point where it has about 20 million users, but it’s still seen by many as a runner-up to Flipboard in the news-recommendation market, so an acquisition in the $100-million range would likely make sense for the company and its backers.

      LinkedIn is becoming a media network

      For LinkedIn, meanwhile, the purchase of a service that aggregates and recommends content from a wide variety of news sources would be an interesting extension of its recent moves to bulk up the media side of its business. When the company first launched its LinkedIn Today service — which aggregated news based on which links were shared within a user’s network of contacts — it seemed to some (including me) like a side project designed primarily to drive traffic to the site, which was mostly being used as a place to store a resume or connect with potential employers.

      Since then, however, the company has made a number of other efforts on the content side that are more ambitious — directed by former Fortune magazine staffer Dan Roth — such as the launch of the Influencers program, in which the site gets prominent personalities such as Richard Branson and Reid Hoffman to blog about topics of interest to its users. In many ways, this is analogous to what alternative blogging platforms like Medium and Svbtle have been doing (and WordPress seems to be interested in doing as well).

      So what could LinkedIn do with something like Pulse? Peter Kafka of All Things Digital has one idea, based on a video that Dan Roth made for a Fortune app that had LinkedIn integration — so that users could see who they were connected to at a specific company that was mentioned in the news. But while this might be useful to some, it seems a lot less interesting than using it as a kind of extension of LinkedIn Today: in other words, a way of recommending content that would target users based on their interests.

      It’s all about the “interest graph”

      As we’ve tried to explain a number of times, this kind of “interest graph” targeting is the holy grail for both content companies and social networks. It’s the reason why Facebook is constantly tweaking its news feed, and why Twitter is pouring resources into improving recommendation filters like its Discover tab and other features — and why Google is trying so hard to get people to share and “plus one” more content through its Google+ network.

      If there’s one overwhelming reality of the digital age, it’s that we are all to some extent drowning in content from an ever-growing range of sources, and we all spend an increasing amount of time trying to filter out the noise and find the signal. LinkedIn has a large and growing graph of the social-network connections between people based on their work — a graph some believe could make the company an acquisition target for someone like Bloomberg — and that could potentially be very valuable for users.

      So a LinkedIn-Pulse combination might start as a version of the app that functions almost exactly like the current version, but also tracks content shared by a user’s business-related graph from LinkedIn, and then grows into a larger service incorporated into the site itself. And data from such a service would likely also be very interesting to Pulse partners like the Wall Street Journal, who use the app as a secondary method of distribution and subscription revenue.

      And if such a deal does end up happening, of course, a LinkedIn purchase of Pulse would be just another example of a non-media company (e.g. Facebook or Twitter) establishing a powerful foothold in an area of the business that has traditionally belonged to newspapers and magazines.

      Images courtesy of Shutterstock / noporn and Arvind Grover

      • The future of online etiquette is already here — it’s just unevenly distributed

        As anyone who has missed an important email knows by now, modern communications etiquette is a minefield of unspoken expectations and potential anxiety-inducing behavior. If you need further proof, all you have to do is look at some of the responses to a recent blog post by New York Times writer Nick Bilton about his approach to email, voice mail and texting: some reacted with distaste bordering on horror, while others cheered his take on the topic. Part of the problem is that different users look at these tools differently — and in some cases have wildly different views of what is appropriate and what isn’t.

        For example, Bilton says his father insists on leaving him voice-mail messages but the NYT writer never listens to them, so his frustrated parent eventually called his sister to complain, and she told their father to text him instead — and Bilton adds that his mother has progressed to the point where they communicate mostly through Twitter. Is this a son helping his parents adapt, or a rude refusal to meet them on their own turf? Many saw it as the latter.

        We have too many ways to communicate

        Author Ian Leslie noted in a response on his own blog that Bilton’s description of what’s wrong with modern communication — whether it’s voice mail or texting or Twitter — and of his relationship with his parents misunderstands what communication is for. If you look at these tools as pure information delivery, Leslie says, then they are riddled with problems. But if you see them as a way of socializing with others who are close to us, then they look completely different:

        “The problem here isn’t just that Bilton unintentionally comes off as rather rude… his argument betrays a fundamental misunderstanding of communication. Writing about computers a lot, he assumes communication is all about the transfer of information from one hard drive to another. That being so, the more efficient the transfer is, the better.”

        I think a larger problem Bilton touches on, but doesn’t address directly, is that we have more competing forms of communication available to us than ever before — and not only are different people at different stages in their evolution from one to the other, but people also use them for very different purposes. So for Bilton’s dad, voice mail is a great way of passing on important information, but Nick prefers the real-time nature of texting or Twitter messaging.

        The NYT blogger mentions how a whole new kind of etiquette had to be developed around the telephone, and how debate raged over the appropriate way to answer (Alexander Graham Bell preferred the term “Ahoy!,” which just reinforces why we shouldn’t let the inventors of things decide how we use them). But at least people in the 1920s only had one new form of communication to figure out — we have email, voice mail, texting, Facebook messaging, Twitter and more.

        It gets worse when the person you are trying to correspond with uses all of these tools: I’ve tried to contact someone I know fairly well by email, voice mail, text message, Twitter direct messaging and everything short of smoke signals, and I never know from one day to the next which of those methods (if any) are going to work. We have more ways than ever to communicate, but sometimes that just means more ways to miss each other.

        Not every tool works for every purpose

        In a lot of cases, I think the problem boils down to one of asynchronous vs. synchronous behavior and expectations. Part of the reason why many people (particularly geeks) dislike talking on the phone is that it forces both sides to be present at the same time, instead of allowing a user to consume or respond to the information at their own pace — or multi-task while they are doing so. Phone calls also have no natural time-span.

        The other conflict is over what the purpose of the communication is. Someone who sends a long email or leaves a voice mail asking you to call them back may wish to have a long, rambling conversation purely to socialize, and get offended when you send a curt response (or no response at all). Similarly, if you only ever text or use Twitter direct messages with someone, you may be communicating really efficiently but you miss a lot of the personal nuances that still make up much of human communication.

        And then there are the obvious age-related issues: I have tried valiantly to get my mother to use Facebook, arguing that this is a great way to keep in touch — however transiently — with her grandchildren, none of whom has any interest whatsoever in using email or talking on the telephone. But for my mother, email and the phone are her primary means of connecting with the world, and the former was something that took ages for her to get comfortable with. And now that she has grown comfortable with it, no one is using it any more.

        All I think we can really say for sure is that this state of affairs is likely to continue, if not get worse. As William Gibson said in a different context: “The future is already here, it’s just not very evenly distributed.” And so we are all at different stages of adapting to this new communications future. Perhaps the one thing we need most is to be patient with those who aren’t where we are.

        Images courtesy of Shutterstock / Steve Woods and Arvind Grover

      • Plagiarism and the link: How the web makes attribution easier — and more complicated

        Nate Thayer, the writer who touched off a debate this week about how freelancers are compensated, found himself embroiled in another controversy on Friday when he was accused of plagiarizing large parts of the piece that The Atlantic wanted him to re-work for free. In his defence, Thayer and his editor said links weren’t included in the original version due to an editing error, a mistake they later corrected. This failed to satisfy some of the writer’s critics, however, including the author of the piece that Thayer based some of his reporting on.

        If nothing else, the incident helps reinforce just how blurry the line is between plagiarism and sloppy attribution — and also how the web makes it easier to provide attribution via hyperlinks, but at the same time makes it harder to define what is plagiarism or content theft and what isn’t.

        To Jeremy Duns, who first blew the whistle on what he said was Thayer’s plagiarism, the case seemed open and shut: chunks of the article about North Korea and basketball, including a number of quotes, appeared to have been lifted straight from a piece by San Diego Union-Tribune writer Mark Zeigler on the same topic in 2006. And there was virtually no attribution of any kind in the original version of Thayer’s story, which appeared at the NKNews.com site, apart from one oblique reference to the Union-Tribune — and no links.

        Even as Duns was writing his blog post about this incident of plagiarism, however, links began to appear in the Thayer piece, including a link to Zeigler’s original story. To Duns, this was evidence that the author was trying to cover his tracks, but in a comment to Columbia Journalism Review, NKNews editor Tad Farrell said that the lack of links was due to an editing error and that the site added them as soon as it could. Thayer vehemently denied that he was a plagiarist or that he intended to leave out the attribution.

        So all’s well that ends well, right? In a follow-up post, the CJR’s Sara Morrison said that Duns clearly jumped to the wrong conclusions (since at least one of those who provided a quote that Duns questioned confirmed that they had in fact talked to Thayer for his piece). Duns wasn’t buying it, however, saying the attribution and links were only added later under protest. As he put it:

        “Even hyperlinking to such a huge lift without mentioning the publication or author at all would have been something of a stretch – it’s a hell of a lot of material taken directly to cite with just one bolded word.”

        Interestingly enough, Zeigler wasn’t all that satisfied either: although he said he wasn’t prepared to call Thayer a plagiarist, he didn’t think a couple of small links were enough to give him the appropriate attribution for his work. As he put it: “I don’t think just highlighting a few words of type in a different color necessarily qualifies as a proper attribution,” adding that his story “took a lot of work and a lot of man hours” to report and write.

        The problem is that while adding hyperlinks is a great way of avoiding a charge of plagiarism — something that might have helped Fox News opinion writer Juan Williams and other alleged plagiarists — there is no accepted protocol for how or where to add those links, or how much content someone can cut and paste into their story or blog post without crossing the line from borrowing into plagiarism or copyright infringement.

        How much content is too much to take?

        This is also the root of the controversy over what some call the “over-aggregation” by sites like The Huffington Post and Business Insider, where large chunks of stories from other sites — and in some cases, the entire story or post — are published, along with a “via” link somewhere at the bottom of the post. Other blogs, including The Verge and Engadget, have been criticized in the past for burying links to the original source of the content they reproduce, to try and disguise what they have borrowed.

        And if you broaden the lens even further, a similar problem is at the root of the fight that Google has been up against in country after country over its use of excerpts from news stories in Google News — stories that come from newspapers and other traditional sources. Germany has passed a law to control the use of such excerpts, even those as short as a single word, and in other countries like France and Belgium, those traditional outlets have sued Google to try and force payment for that content.

        Google’s defence is that it links prominently to the original source, and this drives traffic to the publisher’s site, which is fundamentally the same argument that Business Insider and Huffington Post and others use to defend their aggregation of content. But those whose content is used argue, as Brian Morrissey of Digiday did in a back-and-forth with Business Insider founder Henry Blodget, that taking their content produces far more value for the aggregator than it provides in return.

        So it seems that when it comes to making use of someone else’s content, linking as a way of providing attribution and credit is enough — except when it isn’t.

        Images courtesy of Shutterstock / Zurijeta and Arvind Grover

        • Facebook vs. Twitter: How do you like your social news feed, filtered or unfiltered?

          New York Times writer Nick Bilton’s complaints this week about how little engagement his content gets on Facebook sparked a debate about whether the network is deliberately hiding certain types of content in order to promote its paid-reach services — but it also highlighted how much Facebook controls the feed users see, often in ways that they don’t understand or may not even be aware of.

          Facebook is going to be launching some new features for its feed on Thursday, which may include new ways of filtering specific kinds of content and possibly new advertising features. Meanwhile, Twitter continues to show you everything, without filtering or ranking it in any way. Which method is better? That depends on how and why you are using it.

          Much of Bilton’s criticism revolves around what some call the “subscribe” function, which allows users to get updates from others without having to ask their permission. When it launched in the fall of 2011, it was widely seen as an attempt to copy Twitter’s “asymmetric following” model, since Twitter lets users get updates from whoever they wish — whereas Facebook’s model has always been symmetric, in the sense that users must agree to be friends before they can see each other’s updates. Late last year, Facebook changed the name of this feature to “follow,” which made the similarity to Twitter even more obvious.

          Do you want to see everything in your feed?

          As an attempt to copy Twitter, the follow feature seems to be largely a failure — at least if the experiences of Bilton and others who have complained about Facebook’s newsfeed, such as billionaire entrepreneur Mark Cuban, are anything to go by. They say they don’t get much engagement, which makes them question whether their content is even reaching their subscribers, and whether Facebook is tweaking their feed so that certain kinds of updates don’t show up as frequently.

          The last time this topic came up, when Cuban and actor George Takei were criticizing the network because of the lack of engagement from subscribers, a number of Facebook users attacked the company for filtering their feeds and not showing them all of the updates from pages or individuals they were following. Some users said the equivalent of: “If I subscribe to someone, I want to see all their updates, not just the ones that you choose to show me.”

          In a nutshell, this is the fundamental difference between Twitter and Facebook: the former doesn’t apply any filters to the stream of updates users get, apart from those required by law — if you follow a couple of thousand users, as I do, then you get all of the updates from all of those users, and they flow past you in a giant river of undifferentiated tweets, in reverse chronological order.

          Facebook, however, applies all kinds of algorithmic tweaks to a newsfeed based on what some call EdgeRank (although this isn’t a term Facebook uses internally, according to Anthony De Rosa of Reuters), and therefore some updates are more prominent than others, and in some cases updates may never appear at all. Users have control over some of the knobs and dials that will hide or reveal certain kinds of posts, but there is also a lot of filtering that goes on behind the scenes, which makes Facebook a bit of a Google-style black box.
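
          The difference between the two models can be boiled down to a few lines of code. In the sketch below, the scoring weights are invented purely for illustration; Facebook's actual ranking formula is unpublished:

```python
# Illustrative contrast between an unfiltered reverse-chronological feed
# (Twitter-style) and an algorithmically ranked one (Facebook-style).
# The scoring weights below are invented for the example.
posts = [
    {"text": "lunch photo",   "time": 100, "likes": 2,  "close_friend": False},
    {"text": "breaking news", "time": 90,  "likes": 50, "close_friend": False},
    {"text": "family update", "time": 80,  "likes": 5,  "close_friend": True},
]

# Twitter-style: everything, newest first, no filtering.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# Facebook-style: rank by an opaque affinity/engagement score instead.
def score(p):
    return p["likes"] + (30 if p["close_friend"] else 0) + p["time"] * 0.1

ranked = sorted(posts, key=score, reverse=True)

print(chronological[0]["text"])  # -> lunch photo
print(ranked[0]["text"])         # -> breaking news
```

          The point is not the particular weights but that any weighted ranking, unlike a pure chronological sort, embeds editorial choices the user never sees.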

          It’s hard to know what you’re missing

          Depending on how you see them, these two different approaches can be a good thing or a bad thing: Twitter’s method is theoretically more transparent and comprehensive, since it is completely unfiltered — but it can also be overwhelming, and the network has worked hard to try and help users cope with this vast stream of content, via things like the Discover tab. Facebook’s method seems a lot more invasive and secretive, but at the same time it can make it easier to cope with the never-ending ocean of content — an average of 2,000 posts a day for each user.

          Danny Sullivan of Search Engine Land has a useful analogy for the difference between the two: Twitter is a little like real-time TV news, while Facebook functions more like a DVR that lets you watch things after they have happened (although to some extent the network chooses what to show you, which your DVR doesn’t). They are two very different experiences of a social stream.

          While Facebook users might complain that they want to see everything their social graph posts, the reality is that they likely wouldn’t see everything anyway — unless they sat at their computer all day long reading everything that was posted. Most die-hard Twitter users likely don’t see everything the people they follow post either, unless they watch the network 24 hours a day, and many use lists (as I do) to try and cope with the volume of content that is posted, or services like Paper.li that allow them to “time shift” that content and catch up with it later.

          In the end, the question hangs not just on how you want to handle that stream of updates from your social graph, but who you trust to do that management for you: in the case of Twitter, you are pretty much on your own, and that can be chaotic — but there is a certain purity to it. With Facebook, you have some tools at your disposal to manage that content, but the network itself also does a lot behind the scenes without telling you much about how it works. Facebook says it’s for your own good, but how do you really know what you are missing?

        • The new economics of media: If you want free content, there’s an almost infinite supply

          Writer Nate Thayer set off the media equivalent of a fragmentation grenade on Tuesday, with a lament about the state of freelance writing that sent virtual shrapnel flying in all directions. The main target of his ire was The Atlantic, which he says asked him to rewrite one of his pieces and offered to pay him nothing — and this was seen by many as a symbol of the parlous state of online writing, not to mention the general decline of the media. Is that fair? Not really. But there’s no question the economics of content have changed.

          The article that The Atlantic wanted Thayer to repurpose was a long feature about how the relationship between North Korea and the U.S. revolves around basketball, pegged to a recent trip by American basketball star Dennis Rodman. Olga Khazan, a relatively recent addition to the Atlantic‘s editorial staff, sent an email asking Thayer to submit a shorter version for the magazine, and when the writer asked how much the Atlantic was prepared to pay, the editor said zero — but offered exposure as an inducement:

          “We unfortunately can’t pay you for it, but we do reach 13 million readers a month. I understand if that’s not a workable arrangement for you, I just wanted to see if you were interested.”

          The economics of writing have changed

          Needless to say, Thayer was a little offended at this, as he describes on his blog (he also provided a somewhat more colorful response to New York magazine). For Thayer, and for many who responded both on his blog post and on Twitter, this was just another sign of how far the media have fallen, and how little people value good writing. Eventually, the Atlantic‘s editor-in-chief apologized for offending the writer, saying the case was “unusual,” and that all the editor was trying to do was help Thayer’s work find a larger audience.

          Felix Salmon tried to analyze what happened to Thayer in a blog post at Reuters, and came to the conclusion that freelancing is a lot harder to make a living at than it used to be — in part because online media works in such a way that having staff writers is a lot more efficient than using outside contributors. But I think he missed the most important aspect of what Thayer’s treatment says about the practice of writing now, and the economics of digital media (writer and editor Jane Friedman has a good overview of the issues).

          In some ways, it’s odd that the Atlantic would even bother to ask Thayer for permission to run a condensed version of his piece: many outlets would have simply excerpted large chunks of it with links back to Thayer’s original — the way that GlobalPost did — since that costs nothing and achieves virtually the same thing (Thayer even mentions this possibility in his blog post). Whether you believe this is right or wrong, it arguably serves a purpose in the media ecosystem, and we are more or less stuck with it.

          Some will always be willing to work for free

          As former YouTube staffer Hunter Walk pointed out on Twitter, and Matt Yglesias noted at Slate, there is no shortage of free writing out there — in fact, the supply of free writing is theoretically infinite, since there will always be people who want to write and are willing to be compensated in other ways: by broadening their reach, enhancing their reputation, etc. This is why new publishing platforms like Medium and Svbtle are having some success, not to mention the rapidly expanding LinkedIn “Influencers” program.

          This same process was famously — or infamously — also the foundation of The Huffington Post, and sparked a huge amount of controversy about that company’s practice of not paying its bloggers. As a number of people pointed out at the time (including me), there will always be people who want to write for free, and that’s not necessarily a bad thing. Unless, of course, you are one of those writers who used to profit from the lack of marketplace competition.

          When it comes to things like media, your real competition isn’t the product that is better than you, but the one that is good enough to satisfy your customers — and if readers are happy to patronize media outlets that use writing they got for free, or writing they have aggregated and excerpted, there is precious little that freelance writers or any of us can do about it. Our only option, as a number of commenters at Hacker News pointed out, is to make it clear that we want better quality writing by actually paying for and/or clicking on it.

          The part that Thayer and his supporters aren’t talking about is how much easier it is for writers of all kinds to make a living if they want to — not by submitting their work to a handful of traditional outlets, but by turning it into e-books and Byliner singles and other formats, something that has expanded the field of writing more than just about anything since the printing press. Are there new economics for writing? Yes. Are they unrelentingly evil and negative? No.

          Images courtesy of Shutterstock / patpitchaya and Poynter

          Related research and analysis from GigaOM Pro:
          Subscriber content. Sign up for a free trial.

        • Why the Washington Post is smart to try sponsored content, and why others should too

          Like virtually every other traditional media outlet, the Washington Post has been squeezed hard by the decline in print advertising revenue and the inability of digital ad revenue to fill that gap. Unlike almost every other outlet, however, the Post has resisted putting up a paywall (for now at least) and instead has been experimenting with other methods of monetization. Its latest venture is sponsored content — something that is controversial, but deserves to be tried by anyone interested in figuring out how digital content works now.

          As noted by my paidContent colleague Laura Owen and by Digiday, the Post has launched a program called BrandConnect, which gives advertisers the ability to create content — either by themselves or by working with the paper’s staff — that is then highlighted in a special section of the newspaper’s online front page. The content states pretty clearly that it is sponsored (although that doesn’t seem to have mollified some of the company’s critics so far).

          In all of the important ways, this doesn’t seem all that different from what newspapers have traditionally done with what they refer to as “advertorial” — that is, special sections or articles that are written like newspaper stories but paid for by brands. According to Digiday, no editorial staff are involved in creating the content, and the sponsored headlines appear in a small box that looks different from the rest of the page, much like Techmeme’s sponsored posts.


          Critics like Andrew Sullivan — who recently left the Daily Beast to start a reader-funded site — argue that sponsored content is ethically dubious, and have raised concerns about the way that BuzzFeed handles such content. As Laura notes, The Atlantic has also come under fire for the way it has done some sponsored features, including one about Scientology (we’ll be talking about this more with Sullivan and BuzzFeed’s Jon Steinberg at paidContent Live on April 17).

          While there are debates around how and when to publish sponsored content, and what kinds of content are appropriate for which media outlets, there are some good reasons why other newspapers and traditional media players might want to experiment with this new format as well:

          • It’s an additional source of revenue: At this point in their evolution, newspapers and other traditional outlets can’t really afford to turn a blind eye to any potential addition to their revenue base, however distasteful it might appear at first glance.
          • It’s something advertisers seem interested in: Rates for traditional display advertising are dropping because advertisers simply don’t see them as valuable enough any more — and arguably neither do readers.
          • It doesn’t have to be ethically compromised: Like any kind of advertising or commercial relationship, sponsored content or “native advertising” can be handled well or it can be handled badly. That doesn’t mean it can’t be done in an ethically responsible way.
          • It can be a valuable service for readers: If advertiser-created content provides something useful that readers are interested in, it’s a win-win for the editorial outlet, since they get paid and readers are satisfied.

          Readers should be the judge of what is useful

          The last point in this list might be the most important one of all: if it is handled properly, sponsored content can serve much the same purpose as unsponsored content — in other words, it can be informative and useful for readers. Isn’t that the ultimate purpose of much of what we call journalism? Media insiders might flinch at the phrase “brand journalism” or “native advertising,” but if content produced by an advertiser is helpful to a reader, is that such a bad thing?

          In an interview with Beet.tv, MIT Technology Review editor Jason Pontin points out that while many journalists may not like it, users often find advertising-related content almost as useful and memorable as traditional editorial content. This was the breakthrough that Google has taken advantage of to build a multibillion-dollar business via AdWords: to many users, those ads aren’t just clutter, but are actually useful content worth clicking on.

          The approach taken by some publications such as Forbes — which has a BrandVoice platform that is similar to what the Washington Post is launching — is that marketing or advertising-driven content from brands is given more or less equal prominence to that created by editorial staff, with the appropriate disclaimers. Corporate bloggers at Forbes have the exact same platform that a staff blogger does, with all the same tools.

          In that environment, it is up to the reader to decide whether something is useful or not useful, interesting or not interesting, valuable or not valuable. Whether it is “advertising” is largely irrelevant. In a sense, it has always been this way — perhaps it is just becoming more obvious now.

          Images courtesy of Shutterstock / Eldorado3D and Poynter


        • Remember, Facebook isn’t a platform for you to use — you are a platform for Facebook to use

          Facebook seems to be making users upset and/or confused again with the way it handles its news feed. A few months ago, it was actor George Takei and billionaire Mark Cuban who were upset with what they saw as changes to the Facebook algorithm that made their content less visible, and this time around it’s New York Times writer Nick Bilton, who complained that his posts haven’t been getting as many likes or shares as they used to. The assumption is that Facebook wants you to pay to get this kind of reach, but regardless of whether that’s what is happening, it still sends a valuable message: you are not in control — Facebook is.

          Bilton described in a piece for the Bits section of the Times how his posts used to get as many as 50 or even 100 likes and shares, from users of Facebook who had signed up to get his feed using the network’s relatively new Subscribe feature. But even though the number of users who subscribe has soared from just 25,000 after the feature was launched to almost half a million now, Bilton said that he gets far fewer responses to his posts — sometimes as few as 10 or 15 likes and shares. After paying Facebook to promote his posts, however, that number increased by almost 1,000 percent.

          Facebook denies it is tuning users out

          I’ve noticed the same kind of phenomenon as Bilton has with my own feed, albeit on a somewhat smaller scale. While Bilton has almost half a million subscribers, I have about 75,000 — but I’ve also found that the content I post is getting a lot less interaction than in the early days of the feature. I haven’t experimented with paying Facebook to promote my posts, but I have no doubt I would see the same kind of increase in activity if I did. That’s kind of the whole point.


          The conclusion that everyone seems to be jumping to is the same one that Mark Cuban arrived at when he complained in November about the increasing difficulty of reaching his fans on the network: namely, that Facebook is deliberately tuning out (or at least turning down) the signal coming from some users so that it can convince them to use promotional tools like ads and “sponsored stories.” Cuban said he was so irritated by the move that he was diverting almost all of the marketing budget from his various brands away from Facebook to Twitter and other platforms.

          Facebook gave much the same response then that it has made to Bilton’s column (as reported by my GigaOM colleague Eliza Kern): it said that it tweaks its ranking algorithms all the time, in order to try and decrease spam and increase the visibility of content that users like, and that this is not an attempt to market its other services such as advertising or various promotional features. An official post on the Facebook site entitled “Fact Check” says:

          “Our goal with News Feed is always to show each individual the most relevant blend of stories that maximizes engagement and interest. There have been recent claims suggesting that our News Feed algorithm suppresses organic distribution of posts in favor of paid posts in order to increase our revenue. This is not true.”

          Like Google, Facebook is a black box

          It’s worth noting that former YouTube executive-turned-venture-capitalist Hunter Walk came up with some alternate theories about why Bilton and others might have seen a dropoff in their likes and shares, including the fact that some of the followers and subscribers that boosted those numbers were spam accounts or bots who have lost interest. I certainly noticed after the “Subscribe” feature launched that I got a lot of spammy responses as well as likes and shares, and those have died down as well. In that sense, decreasing the amount of activity would actually qualify as a good thing.


          Zach Seward of Quartz had another theory that I also think has a lot of merit: in a comment on Walk’s post, he noted that Facebook often devotes a substantial amount of energy to promoting its new features — such as the subscription offering, as well as the “social newsreader” offerings that were launched by a number of newspapers such as The Guardian and the Washington Post. But after a certain time, the network almost always tweaks the ranking algorithm so that these new features are downplayed relative to when they were launched, which often causes problems for those who relied on them.

          The bottom line, of course, is that there is no real way for anyone to know why Facebook’s algorithm behaves the way it does, any more than it’s possible for us to know why certain pages rank high in Google. They are both a black box, and the way they function is a mystery. As I tried to point out to Cuban, Facebook is entitled to do whatever it wants with your news feed, including using it to convince you to pay for promotional tools, because it owns your news feed — not you. It’s good to be reminded of that sometimes.

          Post and thumbnail images courtesy of Flickr user balakov and Flickr user Pew Center


        • If Bradley Manning and WikiLeaks are guilty, then so is the New York Times

          While the trial of Bradley Manning has sparked some interest in certain circles, many people probably think the former U.S. Army private’s case will have little impact on either them or American society as a whole. Harvard law professor Yochai Benkler, however, argues that they are wrong — and that if Manning is found guilty of “aiding the enemy” for releasing classified documents to WikiLeaks, it could change the nature of both journalism and free speech forever.

          Why? Because as Benkler points out, the charge for which Manning is being court-martialed could just as easily be applied to someone who leaks similar documents to virtually any media outlet, including the New York Times or the Washington Post. In other words, if the U.S. government has seen fit to go after Manning and WikiLeaks, what is to stop them from pursuing anyone who leaks documents, and any media entity that publishes them?

          WikiLeaks is a media entity just like the Times

          I’ve argued in the past that WikiLeaks is a media entity, and a fairly crucial one in this day and age, and Benkler clearly agrees. As the Harvard professor (who will likely be testifying at Manning’s trial) describes in his piece:

          “Someone in Manning’s shoes in 2010 would have thought of WikiLeaks as a small, hard-hitting, new media journalism outfit — a journalistic ‘Little Engine that Could’ that, for purposes of press freedom, was no different from the New York Times.”


          And we don’t have to hypothesize about whether the government would have gone after Manning for leaking documents directly to the New York Times instead of to WikiLeaks: as Benkler notes, the chief prosecutor in the case was asked that exact question by the judge in January and responded “Yes ma’am.” In other words, for the purposes of the government’s case against Manning, there is no appreciable difference between WikiLeaks and the Times, or any other traditional media outlet.

          Benkler argues that the government’s behavior constitutes “a clear and present danger to journalism in the national security arena” — not just because it is trying to penalize a whistleblower, but because the state is arguing that Manning is guilty of “aiding the enemy,” a charge that could put him in prison for life. Benkler also notes that unlike the other charges against Manning, aiding the enemy is something even civilians can be found guilty of.

          Isn’t the New York Times aiding the enemy too?

          So if handing documents over to a media entity that subsequently publishes them qualifies as “aiding the enemy” in the eyes of the government, then giving them to the New York Times would fit that description just as well as giving them to WikiLeaks. And if providing classified documents to a publisher can qualify, then wouldn’t the entity that actually published them be guilty as well — regardless of whether it’s WikiLeaks or the Times?

          The First Amendment would seem to protect the NYT in a case like this, and I’ve argued before that it should protect WikiLeaks as well — an argument that former Times executive editor Bill Keller has said he agrees with. But the U.S. government continues to pursue WikiLeaks for its role in publicizing the documents that Manning leaked, and some U.S. legislators have mused aloud about whether espionage charges could be laid against other media entities like the New York Times as well.

          Benkler’s warning shouldn’t be taken lightly: if Manning is guilty of aiding the enemy for simply leaking documents, then anyone who communicates with a newspaper could be guilty of something similar. And if the leaker is guilty, then the publisher could be as well — and that could cause a chilling effect on the media that would change the nature of public journalism forever.

          Images courtesy of Shutterstock / Nata-lia and Flickr user jphilipg


        • When advertising becomes content, who wins — advertisers or publishers, or both?

          Andrew Sullivan, the former Daily Beast writer who recently launched his own standalone publishing venture, has made it pretty clear that he doesn’t like advertising, which is why his site is supported entirely by reader subscriptions. And he also made it clear in a recent series of posts that he doesn’t like the growing trend of sites like BuzzFeed using what they call “sponsored content” as a replacement for traditional advertising — something he suggested was ethically questionable for media entities of all kinds.

          Like it or not, however, this phenomenon is becoming more and more commonplace — and not just at new-media ventures like BuzzFeed but also at traditional publishers like The Atlantic. Is it the savior of online media, or just another mirage in the advertising desert? This is a question we are going to be discussing at length at paidContent Live in New York on April 17, including a panel entitled “The future of native advertising: Blurring ads and content.”

          If it’s useful, does it matter if it’s sponsored?

          The principle behind what some call sponsored content and others refer to as “native advertising” (and newspapers and magazines called “advertorial”) is that marketing messages and other forms of advertising are more successful when they look and feel just like the other content that surrounds them, rather than an annoying and/or irrelevant interruption. If you can make your message useful, the theory goes, then users are more likely to click or remember.


          The most obvious example of this is the kind of advertising that both Twitter and Facebook offer: namely, features like “promoted tweets” and “sponsored stories.” They appear in a user’s stream just like any other status update or message, but they are advertising that is based on — and in some cases even includes — the activity of a user around specific topics (although Facebook’s version has caused some controversy over the inclusion of status updates).

          BuzzFeed, whose president Jon Steinberg will be on our paidContent Live panel, is one of the leading proponents of this concept: co-founder Jonah Peretti has talked about how the startup decided from the beginning not to use traditional banner ads and other forms of advertising, but to pin its hopes on sponsored content. But critics like Sullivan have complained that the sponsored content is too hard to distinguish from the regular content at BuzzFeed.

          Another form of native advertising is the kind that Forbes magazine specializes in, with its BrandVoice program. In a nutshell, the magazine provides marketers and advertisers with a platform that is indistinguishable — apart from the brand names and disclaimers that are posted on their pages — from the content that appears elsewhere on the magazine’s website.

          Advertising is just another form of media


          Forbes chief product officer Lewis D’Vorkin, who will also be on our panel at paidContent Live, has written about the thinking behind this platform: the idea is that branded or marketing-related content should be given a status equal to that of the magazine’s traditional content, and that it should succeed or fail based on whether it is actually useful to readers. So a blog written by someone who works for a brand or corporate sponsor looks and functions almost exactly the same as any other blog written by a Forbes staffer.

          A whole separate category of sponsored content or native advertising is what some marketers like to call “brand journalism,” and Kyle Monson of Knock Twice — a former journalist who used to run the “brand journalism” practice at JWT in New York — is going to be on our paidContent Live panel talking about that. This approach sees brands like Coca-Cola and Qualcomm and Intel creating their own content or journalism around topics that are of interest to their customers, without making it explicitly an advertising message.

          So with brands becoming publishers and producing “brand journalism,” where does that leave traditional media companies? And are the blurring lines between sponsored content and traditional media a problem, or are critics like Andrew Sullivan just reluctant to embrace this new way of doing things online? Those are just some of the questions we will be tackling at paidContent Live on April 17, so I hope you can join us to continue the debate.

          paidContent Live: April 17, 2013, New York City. Register Now

          Image courtesy of Shutterstock / Gl0ck and The Everett Collection


        • Google may be winning battles with publishers, but it is losing the war

          Google is clearly trying hard to portray a new German law involving the republishing of news as a victory, and some observers seem to agree, saying the company “defeated” publishers who wanted it to pay for the right to publish excerpts. But if you look more closely, this is not an obvious win for the search giant — just as recent deals with French publishers and Belgian publishers were a lot closer to being a saw-off for both sides than an outright win.

          And with every deal it strikes, Google makes it harder to argue that paying publishers for excerpts is unnecessary and even counter-productive — or that there is something to be gained by allowing even large companies to engage in the “fair use” of content for the larger good.

          As my colleague David Meyer has reported, the lower house of Germany’s parliament, the Bundestag, passed a bill on Friday known colloquially as the “Google Law.” It doesn’t officially become legislation until it is approved by the second chamber, the Bundesrat, but it has already caused a firestorm of criticism — much of that stoked by Google and its “Defend Your Internet” campaign. The law was promoted by most of Germany’s major media companies, who believe Google News is stealing their content by including excerpts of news stories.

          Is it a victory for Google? Not really


          In its original form, the bill would have required Google and others who use even a single word of a publisher’s copyrighted content to pay for the privilege. After what appears to have been much lobbying and late-night pressure from the search company, the German legislature tweaked the bill so that the use of a single word or a “small snippet” by services such as Google News would not require licensing or payment — which Google says is a victory.

          As David notes, however, on closer inspection this doesn’t really look like much of a victory at all: it’s not clear that Google News has been absolved of anything, in fact, since the wording of the bill doesn’t specify what a “small snippet” consists of. The legislation also clearly gives publishers the right to control what a third-party site or service does with their content, and in effect it leaves it up to them to determine what constitutes unfair use.

          In a similar way, Google tried to argue that its deal with French publishers — which involved the payment of $82 million to set up a “digital innovation fund,” as well as a commitment by Google to help publishers with their digital advertising — was a victory, when what it really looks like is hush money or an extortion payment. As in Germany, the search giant might protest that it could have been much worse, but to other publishers and media players in Europe it looks a lot like Google is willing to cave in on its core beliefs if you push hard enough.

          Has Google lost the will to fight?

          Some publishers — even those in the United States — would probably argue that this is a good trend rather than a bad one, and that Google should be paying publishers for their content, even short excerpts (I happen to believe that they are wrong). And Google has obvious corporate reasons for being expedient and cutting deals, even if that involves backing down on its principles, because it needs to do business in these countries.

          Despite all that, however, it still feels as though something has been lost, or is in the process of being lost. In the past, Google’s argument in cases like these — or other cases on similar issues, such as the Google Books lawsuits launched by publishers and authors — has always been that a) the principle of “fair use” should allow it to use short excerpts of both books and news articles, and b) that there is an exchange of value involving the users that Google News drives to a publisher’s content that many media companies fail to appreciate.

          Of course, the U.S. principle of “fair use” doesn’t exist in the same way in most European countries. And perhaps it’s unfair to expect Google to try and somehow force other jurisdictions to see the value of such a principle. But if Google doesn’t do it, then who will? So much of the company’s success has been built on fair use that it seems a little churlish to just cut a deal with whoever comes along, regardless of the long-term effects that might have on the open web.

          Post and thumbnail images courtesy of Shutterstock / Alexander Santander and Flickr user Pew Center


        • Bradley Manning provides more evidence of why we need a media entity like WikiLeaks

          Bradley Manning, the former U.S. army private who is being tried by a military court for leaking classified documents after spending almost three years in jail, admitted on Thursday that he gave information — including a video of an attack by U.S. forces on civilians in Iraq — to WikiLeaks. But Manning also provided some details about his leaking of documents that reinforce why having an independent quasi-media entity like WikiLeaks is important: he says he tried to provide the same information to traditional news outlets, including both the New York Times and the Washington Post, but was ignored.

          This information came out during a statement that Manning read aloud in court, so most of the details couldn’t be immediately verified, but the former military intelligence agent said that he called the New York Times to offer them a story based on the documents he had, but his voicemail message was never returned. Manning said that he also spoke to someone at the Washington Post and described what he had, but no one ever followed up.

          According to some reports, Manning’s call went to the public editor’s voice mail at the Times, which could explain why no one in the newsroom contacted him — as anyone who has ever worked in a large newsroom knows, crank calls and vaguely conspiratorial reports from would-be tipsters come with the territory, and many don’t result in any action. The part of his story about speaking with someone at the Washington Post directly would seem a little more damning, but he apparently didn’t provide many details to the reporter he spoke to.

          Even with all of those caveats, the incident still brings home how valuable it is to have something like WikiLeaks, an entity that Jay Rosen has called “the world’s first stateless news organization.” It’s not that the New York Times or the Washington Post failed to do their jobs as media outlets or journalistic investigators — it’s simply that there was an alternative available where Manning could take the documents that would ensure that they saw the light of day. In the pre-WikiLeaks days, he might never have found a way of publicizing them at all.

          As Jeff Jarvis noted on Twitter, Manning’s confession brings up an even more interesting question, namely: What would have happened if he had gotten through to someone at the Times and they wrote a story, without WikiLeaks ever being involved? Manning might still be on trial for his behavior, but it’s unlikely there would have been the same kind of U.S. government attack on the media entity that published the documents, since the Times is seen as protected in a way that WikiLeaks is not — although it arguably should be.

          Post and thumbnail image courtesy of Shutterstock / Rob Kints


        • Disruption guru Clay Christensen says incumbent media players are making a classic mistake

          Harvard Business School professor Clay Christensen, who has helped shape much of the thinking around technological disruption with his landmark book “The Innovator’s Dilemma,” has been taking a close look at the media industry recently — one of the markets that he believes is undergoing a fundamental disruption. In a panel session at the Nieman Foundation on Wednesday, he warned that many existing media entities are still thinking about what they do in the wrong way, just as incumbents in industries from the telegraph to the automobile have in the past.

          A key part of Christensen’s theory is that the incumbent players in a particular industry routinely fail to make the necessary changes to the way they do things, even when they can see the disruption occurring all around them. In almost every case, they see the disruptors as not worthy of their attention because they are operating at the low end of the market, and either don’t see that as important or are too committed to their existing business models.

          Low-end competitors open up new markets

          Existing players are often good at what the Harvard scholar calls “sustaining” innovation, but they are rarely good at disruptive innovation. The latter is the kind that transforms something that used to be complicated and expensive — and therefore available only to the wealthy or those with special skills — and makes it available to a much broader group of users.

          So in telecom, he said, existing companies didn’t see the potential disruption from cheap flip-phones and ubiquitous cellular networks because they were too focused on large corporate customers, not individual users, and their businesses weren’t set up to take advantage of this new market:

          “The flip-phone and wireless made it so affordable and accessible that people around the world could now have access to telecommunications, and in almost every part of the world, the people who were the pioneers were not the existing wire-line players because it didn’t fit their business models… I think you see this playing out in journalism too.”

          Value is created in new places


          Although Christensen didn’t mention them by name, the obvious low-end competitors in the media business are players like The Huffington Post and BuzzFeed — both of which started at the low end of the value chain but have been moving up steadily, a trend that Christensen’s theory also describes. The Harvard professor also made some positive comments about Forbes magazine, and what it has been able to do online compared with other traditional magazines such as Fortune and Newsweek.

          “Compare, for example, Newsweek and Fortune on one side against Forbes on the next — the core business just got killed. McGraw-Hill sold Newsweek to Bloomberg for a dollar… but with Forbes, while the traditional magazine got commoditized, they’ve created different business models above and below that are really kind of interesting.”

          (Note: Professor Christensen appears to be confusing Newsweek and BusinessWeek here — Bloomberg bought BusinessWeek, while Newsweek was sold for a dollar to the financier behind The Daily Beast).

          The Forbes example reinforces another key point in Christensen’s description of disruption: as one layer of what technologists call “the stack” of processes that make up a business becomes commoditized, it creates value in other layers that can be captured by new players. So in journalism, Christensen says, the job of accumulating and distributing information about the world — something newspapers like the New York Times used to have a monopoly on — has become commoditized:

          “As disruption occurs, it commoditizes a layer in the stack, so what used to be a high value-added activity that was very profitable and others couldn’t replicate, now becomes cheap and easy and anyone can do it. It used to be that news and information was one of those layers in the stack — no one could play that game like the New York Times… but now everyone has access to more information than they could possibly use.”

          Find other jobs that news consumers want done


          The key to managing that disruption, Christensen says, is to find those other value-added businesses or markets or functions — “jobs to be done,” as he calls them — that news or journalism consumers are looking for. One example, he suggests, might be taking in all of the information people are deluged by and telling them what is true and what isn’t (something mainstream media outlets often fail to do, as I tried to describe in a recent post):

          “Are there jobs for which there have not yet emerged viable competitors? I’m awash in information, but I need someone who will tell me what is true, and it’s not clear that anyone has really done that job yet — the New York Times thinks they’ve nailed that, but it’s not clear to me that they have.”

          Christensen also warned — as he has in the past, including in the report that he co-wrote last fall with Nieman Fellow David Skok, entitled “Breaking News” — that many existing players in the media business are trying to innovate within their traditional corporate structure, and that this almost always fails. In answer to a question about the Boston Globe, he said the approach of having a separate site called Boston.com run by a separate team was smart.

          When an audience member said the site was now being run from within the Globe newsroom, however, Christensen changed his mind, saying: “Oh my gosh, really? Then put on your helmet, because it will force Boston.com to conform itself to the newsroom. That’s the way it always works. Sorry about that.” The full audio stream of the interview is available at the Nieman Journalism Lab.

          Related research and analysis from GigaOM Pro:
          Subscriber content. Sign up for a free trial.

        • What a pig, a goat and an eagle can tell us about the decline of traditional media

          If the rise of social media — and specifically the explosion of “viral” content on networks like Facebook and Twitter — has done nothing else, it has certainly given mainstream media plenty of “user-generated content” to add to their dwindling repertoire of journalism. Almost every newscast seems to include a video of cute animals or some other clip that is making the rounds on the social web. Unfortunately, no one seems to care much whether any of these videos are real or not, and that is a very real problem.

          The New York Times has written about one recent example of user-generated content gone bad: namely, a video clip of a baby pig “rescuing” a hapless baby goat who is trapped in the pond at a petting zoo. Within hours of the clip being posted to YouTube last fall and subsequently shared on Reddit, it had appeared on The Today Show, NBC’s Nightly News, Good Morning America and dozens of other channels — and why not? It was incredibly cute, and had a feel-good message of the kind that morning shows in particular enjoy.

          Of course, the video turned out to be a clip from a new TV show, which the creators manufactured and then uploaded as a kind of viral-marketing ploy. Not only did the baby pig not “rescue” the baby goat, but the producers of the show had to spend hours building an underwater track to even get the pig anywhere near the animal — and in the end they had to use a trained pig, after the one they were originally planning to use showed no intention of going into the pond.

          Does it matter whether these clips are real?

          As the NYT piece notes, when NBC Nightly News host Brian Williams introduced the video clip, he said he “felt duty bound to share this” with the audience, and added that he didn’t know whether it was real or not. Is that enough of a disclaimer to absolve a media outlet of responsibility for figuring out whether something can be verified or not? Many would argue that it is not. Kelly McBride of the Poynter Institute compared it to “a form of malpractice” for journalists (McBride has more on that in a blog post about the incident at Poynter).

          Obviously, part of what shows like Good Morning America do is pure entertainment — in other words, not journalism by any stretch. But clips like the baby goat rescue show up on programs like The Nightly News as well, and the hosts rarely say anything about whether a clip is real or not. In some cases, these videos come right after a news report about something serious. How are audiences to know when something is “just entertainment” and therefore hasn’t been checked?

          In another recent incident, a video purporting to show a golden eagle snatching a small child from a park went “viral” on the social web and showed up on a number of media outlets. It too turned out to be fake — the creation of some hard-working students in a computer-generated imagery course at a school in Montreal. The students deliberately chose something that seemed almost believable, based on “urban legends” of such incidents in the past.

          We need to be careful what we amplify

          Interestingly enough, the clip was debunked within hours of being uploaded, by another young programmer with some expertise in computer-generated imaging (as well as by other outlets such as Gawker, which pointed out obvious signs others could have noticed). But as with many corrections in a digital age, it took longer for the truth to propagate than it did for the original video — and many of the outlets that shared the original didn’t bother to update their audience with the facts.

          Om wrote recently about how one of the key responsibilities of journalists in this new age of “democratized distribution” of information is to pay attention to what they choose to amplify and what they don’t, and incidents like the baby goat video bring that home with a vengeance.

          If all a media outlet is doing is sharing the latest video from Reddit or a tweet from a celebrity, how is that adding anything meaningful to what viewers can get elsewhere? It isn’t. And if traditional media continue to imitate their online competitors like BuzzFeed or Reddit without adding anything of value, then they will likely find that audiences are happy to go to the original source of that content rather than relying on the TV news to find it for them.

          Post and thumbnail images courtesy of Shutterstock / Donskarpo


        • Variety doubles down on digital — drops paywall in what it calls “end of an error”

          Jay Penske, who bought the century-old Hollywood tabloid Variety in a fire sale last year, has clearly gotten religion about the power of the web — which isn’t surprising, since his Deadline Hollywood site is likely one of the factors that helped bring about Variety’s demise. So it shouldn’t come as a shock that Penske is dismantling much of the existing magazine, including its daily print edition, and is getting rid of the paywall in a move he described as “the end of an error.”

          Variety announced the moves early Tuesday, saying the tabloid will drop its daily print edition as of March 1 and publish only a weekly version on paper. The paywall, which charged users $250 a year for access to Variety content, comes down at the same time — Penske called it “an interesting experiment that didn’t work” — and in a somewhat unusual decision, the paper’s editor has been replaced with three editors, each of whom will run different sections of the magazine.

          Penske, the son of famed race-car driver and NASCAR operator Roger Penske, isn’t a newcomer to the power of digital: he was a co-founder of Mail.com, which he sold to a German internet company in 2010, and before that helped start a mobile company aimed at children called Firefly.

          The new owner bought Variety in October from owner Reed Elsevier for $25 million, after the European publishing conglomerate reportedly cut the price it was asking for the magazine — once reportedly valued at more than $200 million — by 25 percent. Penske added it to a stable of online properties that includes the Deadline site and MovieLine.com, as well as the well-regarded technology blog Boy Genius Report and HollywoodLife.com, a site run by the former editor of Cosmopolitan, Bonnie Fuller.

          If Penske was hoping that his moves would be applauded by his other sites, he doesn’t know veteran Hollywood gossip writer Nikki Finke, who runs Deadline Hollywood: in a scathing post about the dropping of the paywall and the decline of what she called “the beleaguered trade,” Finke said editorial morale at the entertainment trade magazine “is at its lowest ebb and anxiety is running sky high,” and described advertising as “non-existent” and readers as “few and far between.”

          Sharon Waxman, who runs a competitor called The Wrap, also warned that Variety could have a lot of work on its hands, since — like many newspapers and magazines — print advertising in the daily edition likely made up a large proportion of its revenues.

          Post and thumbnail images courtesy of Shutterstock / wavebreakmedia and Flickr user Pew Center


        • Should you be worried about the new “six strikes” anti-piracy rules? Yes and no

          A new system designed to combat copyright infringement was launched in the U.S. on Monday, a joint venture between content companies and internet service providers known as the Copyright Alert System. The name sounds harmless enough, and its supporters argue that it is an appropriate balance between copyright and an open internet — but critics argue that the so-called “six strikes” process is the thin edge of an increasingly broad wedge that copyright holders are trying to drive between consumers and digital content.

          The new rules, which have been in the works for over a year and have been repeatedly delayed, are being administered by the Center For Copyright Information — a non-profit entity made up of theoretically independent representatives from agencies like the Internet Education Foundation and the Future of Privacy Forum, and includes Jerry Berman, a former director of the Electronic Frontier Foundation, as well as Gigi Sohn of Public Knowledge. They have partnered with five of the largest ISPs, including Verizon and Comcast.

          Part of what makes this new strategy difficult to understand is that each service provider’s method for implementing the rules is different. Verizon says that after several warnings via email and popup message, users who are downloading or sharing copyrighted content will be given several options, including a temporary reduction in their internet speed. AT&T’s policy apparently says that after several warnings a user’s ability to access popular websites will be blocked until they complete a course in understanding piracy and copyright infringement.

          So should you be afraid of these new rules? That depends. Are you only worried about how they might affect you directly, or are you concerned about the ways in which private corporations are seeking to snoop on and limit your behavior? Let’s break these two viewpoints down:

          Why you shouldn’t be worried:

          It doesn’t affect all internet service providers: Although providers like Comcast and Verizon are huge, they don’t cover all internet users in the United States, so it’s possible that you might not even be affected by the new restrictions even if you do download a lot of copyrighted content.

          You get six strikes, which is probably more than you need: Copyright owners and the Center for Copyright Information say that the intent of these new rules is to go after the most egregious downloaders and sharers of content, not the person who occasionally downloads a new song or a movie. So if you don’t do a lot of peer-to-peer file-sharing, you probably won’t be affected.

          You won’t get cut off, just lectured and irritated: Even if you do get flagged for something, the worst that most of the ISPs say they will do is limit your download speeds, show you popup warnings or send annoying emails. And some have said even if you ignore them, nothing will happen (although they could always change their minds about that later).

          There are lots of ways around these restrictions: One of the criticisms of such rules isn’t that they are too invasive, but that they don’t work against the really hard-core file-sharers that are allegedly the target of this strategy — since virtual private networks, proxy addresses, cloaking software and other tools can make it almost impossible to detect infringing downloads.

          Why you should be worried:

          Your ISP is going to be doing some heavy snooping: One of the broader risks that groups like the EFF point to in their criticism of these new restrictions is that they rely on ISPs snooping on their users to an almost unprecedented degree — and this raises the same issues about privacy that debates around technology like “deep packet inspection” have. The potential downside is fairly significant.

          The new rules don’t take into account fair use: Much of the material produced by the Center for Copyright Information makes it sound as though anyone downloading or sharing any copyrighted content is breaking the law — but that’s not the case at all. There are many instances in which the principle of fair use applies, and these rules don’t take that into account.

          Copyright holders are unlikely to stop here: One fear about the six-strikes process is that it is just the latest move in an ongoing attempt by copyright holders and content companies to exert more and more control over what users can do, and that allowing it to proceed only encourages them to pursue even harsher measures such as SOPA and PIPA.

          This puts commercial entities in place of laws: One of the biggest criticisms from free speech and open-web advocates is that the six-strikes rules essentially allow private corporations — movie studios, music labels and large telecom providers — to set up a quasi-legal process for pursuing their copyright claims, when the legal system is the appropriate place for those arguments.

          The bottom line: There’s reason for concern

          In the end, while this move may not affect you directly — or may only be a minor irritation in your daily life — the fact remains that it marks another attempt by content owners to exert their influence in areas that should belong to the courts and should in principle be protected by things like the First Amendment and the principle of fair use, neither of which is even mentioned by the promoters of this process.

          Not only that, but as my colleague Jeff Roberts notes, focusing on these kinds of efforts feels a lot like what the music industry did while it was trying hard not to innovate as the web grew bigger and bigger. The risk for copyright owners is that they rely too much on these kinds of measures, instead of working to create a market and a digital ecosystem that fosters the creation, sale and distribution of content in a way that works with the web instead of against it.

          Post and thumbnail images courtesy of Shutterstock / Cienpies and Flickr user Pew Center


        • Why Marissa Mayer’s ban on remote working at Yahoo could backfire badly

          Not long after her arrival at Yahoo, new CEO Marissa Mayer started handing out carrots to her new employees, including new smartphones, free food and other Google-style amenities. Now she has brought out the stick: namely, a directive that employees are no longer allowed to work from home, something that is expected to affect as many as 500 Yahoos. Mayer’s move has its supporters, who argue that she is trying to repair Yahoo’s culture — but in doing so, she could be sending exactly the wrong message for a company that is trying to spur innovation after a decade of spinning its wheels.

          In the internal memo published by All Things Digital, Yahoo’s head of human resources said that the company wanted to improve its working environment, and that in order to do so, it needed people to work in the same physical location. According to the memo, “speed and quality are often sacrificed when we work from home,” and therefore working at home was no longer going to be supported — in other words, find a way to work at the office or quit:

          “To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings… We need to be one Yahoo!, and that starts with physically being together.”

          Yahoo says it needs to re-build its culture

          Although most of the responses from tech-industry insiders have been resoundingly negative, the Yahoo plan does have its supporters: some say the company has fallen so far behind its competitors after years of inaction and bad strategy that Mayer needs to bring the scattered remnants of its corporate culture together, and one of the best ways to do that is through physical proximity. In other words, the company’s “insights from hallway discussions” argument has some truth to it.

          According to some ex-Yahoo staffers, many of those who currently have work-at-home arrangements are disgruntled employees who provide little value, and so forcing them to work in an office is either a) a way of getting them to drop this attitude, or b) an easy way to get them to quit and save the company some money. Either way, the argument goes, Yahoo as a whole winds up benefiting financially. But at what cost to the company’s reputation?

          Yahoo has also taken fire from critics who see the move as an attack on employees who can’t afford to work in an office, including single mothers and others who require more flexible work arrangements. This is an argument that the company should theoretically be more open to, they say, because Mayer herself is a new mother — although she also happens to be one with a built-in nursery in her office according to some reports.

          Many argue that remote workers are more efficient

          The debate over whether employees are more productive in the office or at home has been going on for at least a decade, if not longer, and there is still plenty of disagreement on both sides. In addition to the impromptu hallway conversations and other social benefits of working alongside other people — which are clearly very real, as I and many other remote workers will admit — some managers believe employees who work at home invariably goof off and get less done (although as our GigaOM Pro analyst Stowe Boyd argues, this often says more about those managers than their staff).

          Companies like Automattic, however — the for-profit arm of the WordPress community (see disclosure below) — say they are more efficient and friendlier as a workplace without any real corporate office to speak of, and distributed teams like those behind Wikipedia and Linux have been able to accomplish incredible things without a traditional office environment. Surveys repeatedly show that companies with more flexible working arrangements are more efficient than those without.

          Most technology companies (including GigaOM) support remote working because it provides a lot more freedom for employees, and because giving staff the opportunity to live virtually anywhere and work wherever they wish broadens the available talent pool enormously. And isn’t that what Yahoo theoretically wants to do, or should want to do? Maybe people are already pushing down the doors demanding to be hired at the company, but if so then it’s a well-kept secret.

          What message does this send about Yahoo?


          I think David Heinemeier Hansson of 37signals puts his finger on the problem in a recent post about Mayer’s decision, in which he says that Yahoo’s move is “an admission that Yahoo management doesn’t have a clue as to who’s actually productive and who’s not.” He goes on to argue that, for a company that is so desperately in need of talented employees who are willing to go the extra mile to rescue the former web giant, the decree abolishing remote working isn’t going to help, but will rather do the opposite:

          “Are you going to be filled with go-getter spirit and leap to the opportunity to make Yahoo more than just “your day-to-day job”? Of course not. Yahoo already isn’t at the top of any “most desirable places to work” list. A decade of neglect and mounting bureaucracy has ensured that. Further limiting the talent pool Yahoo has to draw from… is the last thing the company needs.”

          The danger for Yahoo here is that a decision driven by what are theoretically positive motives — to get employees to feel more like a team, to encourage innovation through serendipitous encounters, and to drive low-performing staff away — could wind up sending exactly the wrong message: namely, that it is a bureaucratic and centrally-controlled organization with no interest in being flexible when it comes to the living arrangements of its employees.

          Disclosure: Automattic, the maker of WordPress.com, is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

          Post and thumbnail images courtesy of Getty Images / Chris Jackson Shutterstock / ER 09 and Flickr user Pew Center


        • Marco Arment’s digital magazine and the paywall vs. sharing problem

          When it comes to new-media players worth watching, Marco Arment’s iPad-only publication — known simply as “The Magazine” — is at or near the top of the list, if only because it is a totally new, digital-native media venture that appears to already be profitable according to its founder. So it’s interesting to note that Arment recently announced a significant change by making full articles available for sharing on the web via a metered paywall approach. Like so many publishers, The Magazine’s founder is trying to find a happy medium between charging and sharing. But is there one, and if so where is it?

          As Arment explains in his blog post about the change, the need to open up his magazine’s content more for sharing was brought home by the response to a recent piece he published by Jamelle Bouie on the topic of race and technology writing. As with most of the essays in The Magazine, the writer was free to publish on his own blog as well, which he did — and while The Magazine’s version got plenty of readers, the response to Bouie’s piece after it appeared on his own site was substantially larger:

          “We allow authors to republish their articles on their own sites (or anywhere else) just 30 days after we publish them. Bouie did exactly that, as many of our authors have. Only then did his article explode into the huge discussion I suspected may result from it — and The Magazine wasn’t a part of it.”

          The magazine was cut off from the social web

          The Magazine wasn’t part of this broader web and social-media discussion because Arment initially showed only a short excerpt at the website — as well as a download link for the iOS app — when readers shared a story. As the publisher points out, since his magazine doesn’t rely on advertising at all but gets its revenue entirely from subscriptions, a web presence with full content seemed like a fairly low priority, if not an outright negative. Arment calls this “the biggest mistake I’ve made with The Magazine to date.”

          “You’d share a link, and everyone would just see the truncated teaser. Some of them would subscribe and see the rest, but most would get turned off by the truncation and just abandon the effort, as we web readers tend to do. Most people with big followings would quickly realize this and, understandably, avoid linking to our articles.”


          This is similar to the problem (one of many) that News Corp.’s iPad-only magazine The Daily ran into when it launched: it didn’t even have a website, per se, so initially users who followed a shared link from a subscriber would get a static page. In the early days of the app, in fact, readers were actually sent to an image of the page from the app — something that was impossible to click on or otherwise interact with. The sharing experience was so broken that many likely never bothered.

          Where should the freemium line be drawn?

          Arment’s problem is a microcosm of the tension that publishers everywhere are experiencing, from the New York Times to the smallest local paper. While some media companies — including News Corp. with some of its British papers — have chosen to go with what are called “hard” paywalls, where virtually no content is provided to readers for free, almost everyone else is trying to find a happy medium between that and no subscription barrier or paywall at all.

          The NYT started by providing 20 free articles, and giving anyone who came in via a link on social media a free view, a so-called “porous” paywall approach many other newspapers have adopted. But the paper recently cut the number of free articles in half. Andrew Sullivan, meanwhile — who recently launched a standalone blog funded solely by subscriptions — has made virtually all of his content free via RSS, but imposed a click-through wall for readers on the site.
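
The “porous” paywall described here reduces to a simple counter check: a monthly free-article quota, plus an exemption for visits arriving from social links. Here is a minimal illustrative sketch in Python, assuming a hypothetical quota value and referrer list (this is not any publisher’s actual implementation):

```python
# Illustrative model of a "porous" metered paywall: readers get a
# monthly quota of free articles, and visits arriving via social-media
# links bypass the meter. Quota and referrer list are assumptions.

FREE_ARTICLES_PER_MONTH = 10
SOCIAL_REFERRERS = {"twitter.com", "facebook.com"}

def can_read_free(articles_viewed_this_month, referrer_domain=None):
    """Return True if the visitor may read the full article for free."""
    if referrer_domain in SOCIAL_REFERRERS:
        return True  # social links are exempt from the meter
    return articles_viewed_this_month < FREE_ARTICLES_PER_MONTH
```

In practice the view count is tracked in a cookie or server-side session, which is also part of why such meters are easy for determined readers to circumvent.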

          The issue for everyone from Sullivan (who will be appearing at our paidContent Live conference in New York in April) and Arment to the New York Times is how much they need to be part of the social web vs. how much they plan to rely on reader subscriptions. A hard paywall essentially means a publication will be supported solely by existing readers, plus a few new sign-ups here and there — but newer or smaller publishers need the word-of-mouth that sharing brings in order to build awareness (and older brands may need it as well).

          As traditional advertising continues to decline in value — something that has taken both the New York Times and the Financial Times to the point where subscription revenue now exceeds advertising revenue for the first time — more and more publishers are going to have to confront this tension between paying and sharing. And in all likelihood, there is no single right answer.

          Post and thumbnail images courtesy of Flickr user Giuseppe Bognanni and Shutterstock / Daniilantiq


          • How Google did the right thing with the NASCAR crash video, and why it matters

            At a NASCAR event on Saturday, debris created by a serious crash flew into the stands and injured a number of fans. As with many such events, a bystander caught the disaster on video and quickly uploaded it to YouTube, but within a matter of minutes it was removed due to a copyright claim by NASCAR. It seemed like yet another case of a commercial entity taking advantage of copyright law to smother free speech — until Google reinstated the video and said NASCAR had overstepped its bounds. In this case at least, the search giant did the right thing.

            The NASCAR crash followed much the same pattern so many news events do now, in the age of real-time and social media: moments after the crash occurred, there were multiple eyewitness photos and videos of the incident, including one particularly horrific one captured by university sophomore Tyler Anderson, who was sitting just to the left of the section that was hit by the debris — including a tire that flew off the race car in question. Soon, a link to the video on YouTube was racing through Twitter and other channels.

            In this case, Google decided to overrule NASCAR

            Suddenly, however, the video was no longer available, and in its place was a standard YouTube message about the content being removed because of a copyright claim by NASCAR. This raised a host of questions for those who were trying to access it, including: How could the racing entity remove the video so quickly? Why didn’t YouTube protest that it should be protected by the principle of fair use, since it was a news event? And how could NASCAR claim that it had copyright over a video that was created by a fan?

            The last of those questions was answered a number of hours later, when YouTube reinstated the video and released a statement saying that partners such as NASCAR are only allowed to remove content that breaches their copyright, and the content in question didn’t pass that test (even though NASCAR asserts in the fine print when you buy a ticket that it owns everything fans produce while at an event). Said the YouTube statement:

            “Our partners and users do not have the right to take down videos from YouTube unless they contain content which is copyright infringing, which is why we have reinstated the videos.”

            The other two questions people had are even easier to answer. In a nutshell, Google provides its YouTube partners with an easy way to have content removed almost immediately: it’s a tool called Content ID, and it’s essentially a back-door to the YouTube content-management system. When a company like CNN or NBC or some other partner sees their TV shows or news clips being shared on YouTube without permission, they can submit a form and have it pulled down.

            One of the main reasons why Google does this — and why it doesn’t bother (except in extreme cases) to protest or demand an explanation for takedown requests — is that the Digital Millennium Copyright Act or DMCA only gives services like YouTube “safe harbor” from copyright-infringement charges so long as the company acts quickly when it receives a takedown notice. In effect, there is virtually no leeway for protests or attempts to get a provider to defend their demands.

            As a number of observers — including Jillian York of the Electronic Frontier Foundation — noted during the NASCAR incident, this is just one of the many ways in which the DMCA actually fosters bad behavior, or at least behavior that seems bad if you believe in free speech and freedom of the press. The fact that Google acted quickly to put the content back up is admirable, but it shouldn’t have to do this, and there are no doubt many other important cases in which it hasn’t that don’t involve something as attention-getting as a race-car crash.

            And as Jason Pontin of MIT’s Technology Review pointed out in a recent essay on free speech in a digital era, our speech is to a large degree controlled by private corporations like Google and Twitter and Apple, and in many ways we are still coming to grips with what that means for us as a society.

            Post and thumbnail images courtesy of Flickr user Petteri Sulonen


            • It’s not you, Facebook, it’s me — okay, it’s partly you: Why I unfriended almost everyone

              There has been a rash of posts of late from people who have quit Facebook or decided to unfriend everyone they know on the network. I haven’t gone that far, but I recently went through what I like to call “The Great Unfriending,” in which I unfollowed or disconnected from almost 80 percent of the people in my Facebook social graph. Doing so has changed the way I use the network, and I think that change — and the reason why I felt compelled to do so — says a lot about some of the challenges Facebook is facing.

              Unlike Julia Angwin, who says she unfriended everyone she was connected to because Facebook “cannot provide me the level of privacy that I need,” I don’t really have any issues with privacy on Facebook. Angwin said that she was troubled by the fact that “when I share information with a certain group or friend on Facebook, I am often surprised by where the data ends up,” and I respect her decision. But that’s not what bothered me about using the social network.

              It’s not the privacy, it’s the overload

              For better or worse, I made a deliberate decision when I joined the service (and Twitter, and almost every other social network) to be as open as possible, and to share almost everything about myself, within reason. I would never say that everyone should do this, and there are plenty of reasons why people keep certain things off the web — information about their children, for example — but for the most part I agree with Jeff Jarvis that the benefits of “publicness” outweigh the disadvantages.


              So if privacy wasn’t the problem, what was it? In a nutshell, information overload. In the same way I’ve had to struggle with my addiction to real-time connectedness on a mobile device (something I wrote about recently that many readers disagreed with), I started to find that Facebook was a painful experience. And the more I thought about it, the more I thought that the problem was partly me — and the way I was using it — and partly the way Facebook was changing.

              I started to think about how some people I admire, including Union Square Ventures founder Fred Wilson, had pared back their use of Facebook by unfriending a lot of people. And such sentiments don’t seem to be unique: a recent survey by the Pew Center showed that two-thirds of users had taken an extended break from the service, and close to 30 percent were planning to use Facebook less.

              Partly Facebook and partly me

              The part of this that I think was my fault stems from the way I set up my account when I first joined Facebook in 2006: in keeping with my desire to push the limits of openness, I accepted friend requests from almost everyone who sent them, even if they weren’t actually “friends.” And yes, I knew at the time that doing this carried some risk, but I didn’t fully appreciate what it would be like, or how it would eventually ruin the experience for me.

              What I wound up with was almost a thousand “friends,” many of whom were people I had met at conferences, or people who were connected to me through others, or some who were just fans of my writing (who can still use the “subscribe” feature). To these people — all of whom I have since unfriended — I would just like to say that you are all wonderful, but I couldn’t take it any more. My stream became a sea of information I had little or no interest in, with only a few scattered pieces of flotsam and jetsam from the people I am actually close to.


              The part of this that I see as Facebook’s fault has to do with how cluttered my stream became, especially with all of the “sponsored stories” and “liked” pages that began to show up more and more — when a “friend” liked a page about Coca-Cola or Ford, for example. And yes, just like the notifications I complained about on the iPhone, I know that Facebook has knobs and dials that you can tweak so that you don’t see certain things. But who has the time to spend twiddling all those dials all the time? I certainly don’t.

              Facebook has just become less relevant

              So what happened after The Great Unfriending? Facebook became a whole lot more usable as a particular kind of network — the one that lets me see what actual friends and family are doing, including those who are far away (the kind of “ambient intimacy” that researcher Leisa Reichelt talks about). Except for my teenaged daughters, of course, who don’t even use Facebook any more, preferring to spend all their time on Tumblr and Twitter. That’s just one of the things that should worry Mark Zuckerberg, I think.

              What I am left with is a more useful network, but also one that I only use for very specific things, and don’t really spend much time on. If I want to connect with people related to work, I do it through LinkedIn; if I want to connect to people through photos, I do it on Instagram or Flickr (which is why Instagram was such a smart acquisition for Facebook to make); and if I want to connect to people I don’t really know, I use Twitter. If I could get more of my friends to use Path, I might use that for friends and family, in which case I wouldn’t need Facebook at all.

              Facebook faces a whole series of challenges as it tries to grow and justify its $65 billion market value. But its biggest problem — bigger than the shift to mobile or the need to generate ad revenue — is that it has to not only remain relevant in people’s lives, but offer them more and more things that will keep them engaged. For me at least, and it seems for others as well, it is losing that battle.

              Post and thumbnail images courtesy of Shutterstock / Stuart Jenner and Flickr user Pew Center
