Author: Mathew Ingram

  • Google Buzz: Should You Cross the Streams?

    What is Google Buzz for? As with any social networking tool or social media service, this is the fundamental question both Google and its users collectively have to answer. Part of the reason why Buzz has come under so much criticism over privacy and other issues (which Google addressed during a panel at SXSW this weekend) is that users and the search company seem to have different ideas about what Buzz is, and how it should be used. To the company’s credit, it has been listening and adapting as quickly as it can, and has even asked sociologist and Microsoft researcher Danah Boyd to come and speak about privacy and Buzz at Google, after she gave a very critical talk at SXSW about the service.

    Apart from the privacy aspect of Buzz, one of the ongoing debates about the service is whether it should be used for aggregating or publishing — or both. When Buzz was first released, many people (including me) did what they thought the creators of the service wanted them to do: they followed a bunch of their contacts from Gmail, and then selected a bunch of other content platforms and social networks to connect to their Buzz stream — Flickr photos, Google Reader shared items, Twitter and so on. But a growing number of Buzz users are unhappy with the practice of connecting various content streams to the service. Like Egon Spengler from “Ghostbusters,” they don’t want people to “cross the streams.”

    One of the most thoughtful critics of this practice has been Mahendra Palsule, an editor at Techmeme who goes by the handle Sceptic Geek. Both on Buzz and on his personal blog, Palsule has described what he sees as the downsides of combining multiple streams into one place — namely, repetition, a lack of focus and so on. But mostly, he seems to feel that dumping content from one social network into another defeats the purpose, and that the right thing to do is to create and publish content specifically for Buzz — in other words, to treat it as a separate platform like a blog, rather than as an aggregator. As Palsule puts it:

    This is a real problem with social media today. Everyone wants maximum likes, shares, retweets on each and every thing they share. Their hope, understandably, is that each morsel they throw into social media becomes a feast on which everyone will drool. Well, count me out. If someone is auto-feeding the same thing on all networks, it doesn’t add any value to me to follow them on all networks. Especially if they are not engaging in conversation where their content is landing.

    At least one member of the Google Buzz team seems to agree. Google engineer DeWitt Clinton noted in a recent post that O’Reilly Media founder Tim O’Reilly had stopped feeding Twitter into his Buzz stream (which O’Reilly described in a Buzz post). Clinton said: “Tim O’Reilly is now using Buzz the way I think Buzz can be used best — as a publishing and conversation platform in its own right. It is great to see Tim now posting long-form articles on Buzz, and the conversations that follow are just as high-quality and engaging as I had hoped for.”

    Like O’Reilly, I have also stopped importing my Twitter posts into Buzz, partly because tweets seem to get imported in clumps of several posts at a time, which makes them hard to filter or differentiate. But I still have my Flickr and Google Reader accounts connected to the service (although there’s a separate issue with shared Reader items, namely that Buzz users can’t comment on them unless you add them individually to a group of contacts who have that ability). And I’m still on the fence about treating Buzz as a separate platform that requires unique content. How many platforms can we post content to? Between a blog, Twitter, Facebook, Flickr, YouTube and half a dozen other networks, it can get exhausting to post and monitor unique content everywhere. To me, one of the benefits of services like FriendFeed and Buzz is that they pull those streams together.

    DeWitt Clinton says that he sees the potential for the service to play a combined role, both as its own platform and as a way of extending the reach of other content services — specifically, blogs. For example, he says the future of Buzz includes implementing a cross-platform commenting protocol called Salmon (which Laurie Sullivan does a great job of describing here), which would allow users to comment on blog posts both through the original blog platform and through Buzz, with both places showing all the comments, regardless of where they originated. As he goes on to note:

    Another trend I’ve noticed is that many people are returning to their full format blogging platforms (wordpress, blogger, etc), and are taking advantage of the real-time and full-fidelity content-preserving syndication to Buzz to engage with their readers here. To me, this is exactly the way it should be, and I’d be personally thrilled if Buzz helps usher in a return to the era of long-form personal publishing.
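    For the technically curious, here is a rough sketch of what that Salmon-style flow involves: the site where a comment is written builds an Atom entry and pushes it back “upstream” to an endpoint advertised by the original post, so both sides end up showing the same conversation. This is a loose Python illustration, not the protocol’s reference code; the function names and endpoint are invented, and real Salmon also wraps each entry in a signed “Magic Envelope” so the receiver can verify the author, a step omitted here.

        import urllib.request

        SALMON_MEDIA_TYPE = "application/atom+xml"

        def build_comment_entry(author_uri, comment_text, in_reply_to):
            # Build a bare Atom entry for the comment. Real Salmon wraps this
            # in a signed "Magic Envelope" so the receiver can verify the author.
            return f"""<?xml version='1.0' encoding='UTF-8'?>
        <entry xmlns='http://www.w3.org/2005/Atom'
               xmlns:thr='http://purl.org/syndication/thread/1.0'>
          <author><uri>{author_uri}</uri></author>
          <content>{comment_text}</content>
          <thr:in-reply-to ref='{in_reply_to}'/>
        </entry>"""

        def swim_upstream(salmon_endpoint, entry_xml):
            # POST the comment back to the original post's Salmon endpoint,
            # which the blog would advertise in its feed (link rel='salmon').
            req = urllib.request.Request(
                salmon_endpoint,
                data=entry_xml.encode("utf-8"),
                headers={"Content-Type": SALMON_MEDIA_TYPE},
                method="POST",
            )
            return urllib.request.urlopen(req)

    The point of the design is that the comment travels to wherever the conversation started, rather than staying siloed in the aggregator.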

    How do you use Buzz — or do you see yourself using it at all? Do you think it should be an aggregator or a separate publishing platform of its own? Let us know in the comments.

    Related content from GigaOM Pro (sub req’d):

    Can Enterprise Privacy Survive Social Networking?

    Post and thumbnail photos courtesy of Flickr user Arekev

  • Why Social Media Policies Don’t Work

    Maybe Thomson Reuters was feeling nostalgic about the flurry of negative attention that both the New York Times and the Washington Post got last year when they came out with policies on the use of social media tools such as Twitter and Facebook. For whatever reason, the wire service recently issued new guidelines for its staff, and they suffer from many of the same problems that both the NYT and WaPo policies did. All of these flaws boil down to one thing: A desire to control something that fundamentally can’t be controlled, and a fear of what happens when that control is lost. Without even bothering to enumerate the positive aspects of social media use, the policy starts in with the warnings right away:

    We want to encourage you to use social media approaches in your journalism but we also need to make sure that you are fully aware of the risks — especially those that threaten our hard-earned reputation for independence and freedom from bias or our brand.

    The risks, of course, are everywhere — someone might say something embarrassing, or post a tweet that others could twist to disparage Reuters:

    The advent of social media does not change your relationship with the company that employs you — do not use social media to embarrass or disparage Thomson Reuters. Our company’s brands are important; so, too, is your personal brand. Think carefully about how what you do reflects upon you as a professional and upon us as an employer of professionals.

    The overwhelming message is that, while social media is great and useful for many things (although none of those things are ever mentioned), it is a minefield of potential dangers and even a potential threat to the company’s traditional media business:

    We’re in a competitive business and while the spirit of social media is collaborative we need to take care not to undermine the commercial basis of our company.

    The policy says that “where practical, you should ask someone to check content of Twitter posts,” even as it admits that this is frequently impossible, and warns that supervisors will be monitoring those tweets to see if they cross any lines. It even says that “when using Twitter or social media in a professional capacity, you should aim to be personable but not to include irrelevant material about your personal life.” Why not? No reason is given, but the obvious implication is that it’s “unprofessional” or might “damage the brand.”

    I happen to think the opposite is true (within reason, of course). I enjoy it when journalists I know — like Reuters reporter @bobbymacreports, for example — post things that indicate they are human beings. And I don’t think any less of a media brand when one of its employees posts something that turns out not to be true, because I know that they are doing their best, and will correct what needs to be corrected.

    Right at the end of the new policy, Reuters says something that cuts to the heart of all the difficulties with social media guidelines. The policy baldly states: “Don’t scoop the wire.” So I mentioned on Twitter that Reuters’ own editor-in-chief, David Schlesinger, did exactly that when he was tweeting from Davos last year and posting about a number of newsworthy events. Schlesinger then responded that “some stuff belongs on the wire first. some stuff belongs on tweets. some stuff you can’t always tell immediately.”

    That phrase could just as easily be applied to all of the other potential negative outcomes that Reuters is trying to avoid with its policy. Some things are bad to say on Twitter, and some things are not — and some stuff you can’t always tell immediately. Obviously, you probably shouldn’t chew out a source publicly on Twitter using abusive language. But that’s a little like putting a warning sign on a chainsaw saying “Do not stop chain with hand.” If your employees need to be told that kind of thing, they are probably too stupid to be on your payroll and should be sent to work for your competitors instead.

    If you trust your writers and editors, whom you presumably hired and continue to employ because they are smart and capable, then let them use social media for what it was meant for: engaging with readers in as many ways as possible. Don’t get consumed with fear about a loss of control over them — embrace it.

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

    Post and thumbnail photos courtesy of Flickr user LunaDiRimmel.

  • Hal Varian Is Right: Newspapers Need to Engage

    As part of the Federal Trade Commission’s ongoing hearings into the future of journalism, Google’s chief economist, Hal Varian, gave a presentation on newspapers and their financial problems that is well worth taking some time to read (or view). The slide deck is embedded below, and Martin Langeveld has a great overview at the Nieman Journalism Lab that also includes a transcript of Varian’s presentation. The Google economist (who also wrote a blog post) does a pretty thorough job of explaining the untenable position that newspapers currently find themselves in, and how it isn’t the Internet’s fault (in other words, it isn’t Google’s fault).

    The biggest problem, Varian says, is that the news part of what newspapers do — the hard reporting and crime and investigative stuff that everyone thinks of when they say the word “journalism” — has traditionally been subsidized by all the rest of what newspapers do, such as the automotive section, the travel section, the lifestyle features and so on (which almost no one thinks of when they say the word “journalism”). Those other parts of the paper, unfortunately, are being targeted by subject-specific web sites and services, leaving the news part of the operation unprotected. As he put it:

    Traditionally, the ad revenue from these special sections has been used to cross-subsidize the core news production. Nowadays internet users go directly to websites like Edmunds, Orbitz, Epicurious, and Amazon to look for products and services in specialized areas. Not surprisingly, advertisers follow those eyeballs, which makes the traditional cross-subsidization model that newspapers have used far more difficult.

    Although it’s admittedly a bit presumptuous to expect Varian to come up with solutions to this problem, he is a little light on the solutions front, mentioning Google’s “Fast Flip” experiment as one possible answer, as well as Living Stories and a couple of other Google projects. But one part of his presentation really hit home with me, and that was when he talked about the amount of time people spend with the news online: on average, he said, about 70 seconds a day. Varian says part of the reason for that is people reading online at work, where they have less time to spend with the news.

    That could well be part of the problem, but I think Varian puts his finger on something important towards the end of his presentation, when he says that newspapers “need more engagement.” One of the reasons why the news in general fails to hold people’s attention for very long, and why newspapers have fairly pathetic “time spent” statistics compared to lots of other web sites, is that it does little or nothing to engage the reader. The delivery of most news stories is a bare-bones “here are the facts” approach, with little or no interactivity or room for external input. Why would anyone stick around?

    (Embedded slide deck: “030910 Hal Varian FTC Preso”)

    Even when there are tools that are designed for interactivity, such as reader comments on news stories, they are typically ignored by the majority of newspaper writers (with the exception of some bloggers) and therefore become a kind of interactivity ghetto, a haven only for the disturbed and/or the disgruntled attention-seeker. All this despite the fact that research shows readers spend more time with news stories that have comments, and also return to those pages more often.

    As Varian notes in his presentation, newspapers also spend comparatively little time looking at what brings people to their pages, what they are searching for and reading and recommending and commenting upon, all of which provides incredibly detailed and useful audience information. It’s like a retailer not paying attention to what his or her customers are buying, or how much they pay, or what they say about a product – but instead, just putting on the shelves whatever he or she wants to sell.

    Can newspapers change these aspects of the culture and take advantage of the web? If they can’t, then not even Google will be able to help them.

    Post and thumbnail photos courtesy of Flickr user MarcelGermain

  • The NYT Needs to Learn the Value of the Link

    In the coverage of New York Times writer Zachary Kouwe, who resigned recently amid accusations of plagiarism, much has been said about the demands of writing for the always-on web, and how this might have contributed to Kouwe’s missteps — something the writer himself referred to in a discussion of the incident as described by NYT public editor Clark Hoyt. But Reuters columnist Felix Salmon was the first to put his finger on what I think is the real culprit: a lack of respect for the culture of the web, specifically for the value and necessity of the link.

    Kouwe describes in an interview with the New York Observer how he felt under pressure to cover offbeat news items for the blog as they came up, and would pull together bits and pieces of coverage from elsewhere on a story and then rewrite them into his own post or story. This, he says, is how the plagiarism occurred: by not realizing which pieces of text he had pulled from somewhere else, and which he had written himself. As Salmon notes, what a blogger would do in this case (or at least a good blogger) is link to other sources of material on the same topic rather than rewriting them:

    Anybody who can or would write such a thing has no place working on a blog. If it’s clear who had a story first, then the move into the age of blogs has made it much easier to cite who had it first: blogs and bloggers should be much more generous with their hat-tips and hyperlinks than any print reporter can be.

    Linking isn’t just a matter of etiquette or geek culture (although it is both of those things); it’s a fundamental aspect of writing for the web. In fact, the ability to link is arguably the most important feature of the web as a communications or information-delivery mechanism. Before the web came along, journalism and other forms of media were like islands unto themselves, each trying to pretend that it existed alone, without any connection to what came before it. Links are like bridges and roads, allowing these islands to connect to each other, and making it easier for readers to draw connections.

    Links also make it easier for readers to understand a writer’s perspective, and thus are an important tool in disclosing bias (in an eloquent discussion of how transparency is the new objectivity, author David Weinberger said that objectivity was something “you rely on when your medium can’t do links”).

    Unfortunately, however, those bridges and roads can also take readers elsewhere, and if your business depends (or you think it depends) on keeping those readers on your island, you might think twice about building that bridge. So you might recreate information that exists elsewhere, in the hope that readers won’t notice. Is that part of what pushed Kouwe to rewrite material for the blog? Salmon suggests that it might be. And if it did, the NYT writer is far from alone.

    That’s not to say web-only sites are free from this kind of behavior. Some news sites have become notorious for either rewriting an entire post from a competitor, or excerpting huge portions of the content on their own sites, with just a small link that credits the original source. The economic incentive is the same, whether it’s a web-only outlet or a traditional media web site: to aggregate page views and sell them to advertisers. But at least most web-only sites that do this tend to include links (even if they are in small print at the bottom). Similar behavior in print publications usually comes with no links at all.

    Plenty of mainstream publications have avoided linking out until relatively recently, or at least have linked as little as possible. The New York Times is in that group, despite its status as a leader in so much of what we think of as “new media” online. For a long time, the newspaper’s web site would only link (when it linked at all) to internal NYT topic pages. It has started adding more links to external sites, but many stories still contain no links at all. Lots of newspapers do the same thing.

    In some cases this is a technical issue, in that print-based content management systems often make it difficult to include links. But an even bigger part of the problem is cultural. Traditional print media workers are used to thinking of themselves as the be-all and end-all of information, the only source that anyone could possibly need (despite the fact that many stories are based either wholly or in part on reporting by wire services such as the Associated Press and Reuters), and are loath to give anyone else credit. That has to change.

    The ethic of the web, as Jeff Jarvis repeatedly points out, is “do what you do best, and link to the rest.” If Kouwe or his employer had fully embraced that approach, he might not have had to apologize for anything.

    Thumbnail photo courtesy of Flickr users Skedonk and Lujaz

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

  • Net a “Fundamental Right,” 4 Out of 5 Say

    Do you feel that Internet access is a fundamental right? Four in five adults across 26 countries agree with you, according to a new poll sponsored by the BBC World Service. The poll asked more than 27,000 adults about their attitudes towards the Internet, and found that 87 percent of those who regularly use the Internet believe that access should be “the fundamental right of all people.” More than 71 percent of non-Internet users also felt that they should have the right to access the global network. In both South Korea and Mexico, more than 90 percent of those surveyed agreed that access was a fundamental right.

    The survey found that most web users are positive about the Internet: close to 80 percent said they felt it had brought them greater freedom, 90 percent said they thought it was a good place to learn, and just over 50 percent said they enjoyed spending their time on social networking sites like Facebook and MySpace. However, some expressed concern as well, with almost half saying they did not agree with the statement that “the Internet is a safe place to express my opinions.” Germany (with 72 percent) and South Korea (70 percent) had the highest proportion who felt the Internet was not a safe place.

    According to the poll, most users believe that the Internet should not be regulated by governments. More than half of the Internet users surveyed said that “the Internet should never be regulated by any level of government anywhere,” including large proportions of the population in South Korea (83 percent), Nigeria (77 percent), and Mexico (72 percent). A large number of those surveyed said that they didn’t think they could cope without the Internet, including 84 percent of those polled in Japan and 81 percent of those in Mexico.

    Those who were surveyed in the United States were more likely than the average to say the Internet has given them freedom (85 percent compared to 78 percent worldwide). They were also among the most likely to say that they feel able to express this freedom in speech, with 55 percent (compared to 48 percent worldwide) agreeing that the Internet is a safe place to express their opinions.

    Thumbnail photo courtesy of Flickr user Stefan

    Related content from GigaOM Pro (sub req’d):

    Is Google’s China Problem a Groundswell of the Closed Internet?

  • Outside.in to AOL’s Patch: Bring It On

    Mark Josephson, CEO of hyper-local news aggregator Outside.in, doesn’t seem all that concerned about AOL’s plans to pour $50 million into its own hyper-local news operation, Patch.com. That’s because while AOL is trying to generate its own custom content for dozens of small cities and towns in New York state and elsewhere, Outside.in is happy to take on the much less resource-intensive job of pulling together what is created by others — from traditional media outlets such as newspapers and TV affiliates to local bloggers and even municipal listings and announcements. If anything, the expansion of Patch.com will just give Outside.in even more content to aggregate.

    “I saw they were planning to spend $50 million, and I tried to think of ways we could spend $50 million and I just couldn’t do it,” Josephson said in a recent interview. Although they’re coming at the marketplace from two different perspectives, the goal of both Patch.com and Outside.in is similar — to tap into the local advertising market. “I think we see the same thing, which is an opportunity to capture an advertising audience at the local level and then roll that up into large national buys,” the Outside.in CEO says. Other hyper-local efforts with their eyes on the same prize include Topix and Placeblogger.com.

    “They’ve decided to create original content for everywhere they want to be, whereas we aggregate and pull together what is out there already,” Josephson said of AOL. Outside.in pulls from more than 40,000 different sources, including traditional publications such as local newspapers and TV stations, bloggers, social networks such as Twitter, Facebook and Foursquare, as well as municipal information sources and real estate listings. Those latter sources make Outside.in a little like Everyblock, the startup founded by programmer Adrian Holovaty that was sold last year to MSNBC. Microsoft is also experimenting with some aggregation of hyper-local blogs through its Bing search engine.

    Josephson says Outside.in has been working closely with traditional publishers to “help them take costs out of their business.” The company offers a self-serve version for publishers that allows them to aggregate content for a specific location and then use it to beef up their local coverage. “We’re working with about 100 publishers, including the New York Post, Tribune Co., Media General, as well as TV stations such as CBS, NBC and Fox,” Josephson says. “We don’t want to see the traditional media go away — we want to help everybody to stay in business.” One of the more recent additions to Outside.in’s list of clients was CNN, which is not only working with the service but also invested an undisclosed amount in the company in December.

    Meanwhile, Josephson says the New York Post’s web site, which used to have just four sub-sites for its regional bureaus, now has a toolbar with hundreds of different boroughs, towns and other locations, all powered by Outside.in — providing news headlines as well as blog posts, and a local map with events and news items pinned to it at various locations. The service also pulls in content from Gothamist and other news sites, via their RSS feeds. And the content feed from Outside.in that publishers such as the NY Post use comes with ads embedded in it (publishers can pay a licensing fee for a feed with no ads, but 90 percent of the company’s customers take the ads).

    As for other hyper-local or aggregated-content services such as Daylife and Topix, Josephson says they all serve slightly different purposes. Daylife “does for other sections of the paper what we do for news,” he says, creating vertical and topic pages around content areas such as travel and entertainment, while Topix “does a great job of aggregating community and comment boards, but doesn’t really focus on news.” In any case, the Outside.in CEO says there is “plenty of room for everybody in hyper-local. There’s $115 billion spent on advertising focused on local markets, through TV, radio, print and outdoor — and only about $15 billion of it is online.”

    And if AOL spends $50 million and brings a lot of attention and advertisers into online hyper-local, says Josephson, “then we all benefit.”

    Post and thumbnail photos courtesy of Flickr user Thomas Merton

    Related content from GigaOM Pro (sub req’d):

    Developers, Meet Your Hungry New Market: The News

  • Asia, Middle East Users Now 25% of Facebook

    One in four Facebook users now comes from Asia or the Middle East, according to O’Reilly Media research analyst and blogger Ben Lorica, who’s been tracking the statistics behind various demographic and geographic groups on the social networking site. Given that Facebook now claims a total of some 400 million users, that translates into about 100 million people. And the number of users from Asia in particular is growing much faster than in any other major geographic region (while the share held by North American users has been declining rapidly). In the past three months alone, the O’Reilly researcher notes, Facebook has added 2.3 million users from South Asia.

    Lorica also notes in his post that, with a market penetration of just 1.7 percent in Asia and Africa, Facebook “has barely scratched the surface in both regions.” In a similar post last fall, the O’Reilly researcher said that in the prior three months, Asia had added more than 17 million users. Meanwhile, the share of users from the Middle East/North Africa region remains stable at just over 8 percent, he says, and had “the second fastest-growth rate over the past 12 weeks,” according to his research data. Lorica also says that the number of users who are in the 18-25 age group is higher outside the U.S., particularly in Asia, the Middle East/North Africa, Africa and South America.

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

    Post photos courtesy of O’Reilly Media, thumbnail photo courtesy of Flickr user Betta Design

  • Can IBM Help You Write a Better Blog Post?

    If you have a personal blog, or even a corporate one that you help write, you’ve undoubtedly run into it: the Wall. Also known as “writer’s block,” it’s the inability to come up with something to write about, or a lack of enthusiasm for doing so. Well, researchers at IBM think they may have come up with a way to get past the Wall, with what they’re calling the Blog Muse (PDF link) — a kind of social recommendation system for blog posts in which users say what they want to read about and then other users vote on those suggestions, and the most popular topics get distributed to those most likely to want to write about them.

    Casey Dugan and Werner Geyer of IBM Research started working on the problem a couple of years ago, after finding “blogger’s block” or “blog fatigue” to be one of the leading problems for both internal social networks at IBM and some of the company’s clients. “We asked what they had trouble getting people to contribute content to, and they said blogs,” Dugan said. “They said that people would start them and then stop writing, or that there were a lot of blogs with not very much on them.” In fact, IBM research shows that about 80 percent of those who begin a corporate blog never post more than five entries.

    Geyer said many companies see the value of blogs because they allow people to share information, but that “often people stop blogging because they don’t get any attention with what they’re writing, no one comments on their blogs, they don’t know what to write, and so on.” As the researchers describe in their paper:

    In order to inspire bloggers, our system suggests topics they can write about. The audience is given a voice by letting blog readers share topics they would like to read about with the blogging community. Our system then suggests these topics to potential blog writers who can decide whether or not they would like to address the topic requested.

    The system consists of two simple widgets added to BlogCentral, the internal blog network at IBM, which was launched in 2003 and has since seen a total of more than 145,000 blog posts written on 16,000 different blogs by over 14,000 users. One widget allows users to suggest topics they’d like to see written about, while the other allows them to vote on suggestions from others. In tests, the system used the profiles created by users in SocialBlue (IBM’s internal version of Facebook) to find up to 50 users who might be interested in writing about a given topic, and sent them the suggestion. When a post was written, the system sent it to everyone who had voted for that topic.
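    Based on that description, the core matching loop might look something like the following sketch. This is a hypothetical Python reconstruction, not IBM’s code: the function names, data shapes and keyword-overlap scoring are all my assumptions, and only the up-to-50-writers figure comes from the paper.

        from collections import Counter

        MAX_CANDIDATES = 50  # the paper describes suggesting a topic to up to 50 users

        def rank_topics(votes):
            # votes is a list of (topic, user) pairs; return topics by popularity.
            return [topic for topic, _ in Counter(t for t, _ in votes).most_common()]

        def match_writers(topic_keywords, profiles):
            # profiles maps user -> set of interest keywords (e.g. from SocialBlue);
            # score each profile by overlap with the topic and keep the top candidates.
            scored = sorted(profiles.items(),
                            key=lambda item: len(topic_keywords & item[1]),
                            reverse=True)
            return [user for user, kws in scored[:MAX_CANDIDATES] if topic_keywords & kws]

        def notify_voters(votes, topic):
            # Once a post on the topic is written, send it to everyone who voted for it.
            return [user for t, user in votes if t == topic]

    The closing of the loop is the notable design choice: the people who asked for a topic are guaranteed an audience for whoever writes it, which is presumably why the resulting posts drew more comments and views.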

    And what did the research show? According to Geyer, in a study of 1,000 users who tried Blog Muse, “blog posts created from our system got twice as many comments and got more views as well, and they got 3 times as many stars (or likes).” Interestingly enough, Dugan writes that “we didn’t find an increase in the number of blog posts, so maybe there was some substitution going on there — maybe people didn’t write more, but the ones they wrote got more readers.” There was also some resistance from blog writers who wanted to follow their own muses rather than playing to the crowd. As he put it:

    Some described how they already have topics they write about, are without a shortage of ideas, and find blogging a “personal” activity that suggestions might infringe upon. One went as far as saying, “This would be similar to writing paid reviews for consumer products.”

    Among bloggers who didn’t write as frequently, however, there was support for the system because it helped them come up with ideas. The researchers said in their report that their goal was “to inspire users to write more blog posts, and our approach is to involve readers by allowing them to share their topics of interest with the blogging community. Sharing and voting on topics adds a new communication channel to the blogging ecosystem.” IBM says it’s planning to roll out Blog Muse internally, and may look at commercializing it at some point in the future.

    There are a number of blogosphere recommendation systems that do something similar to Blog Muse — arguably, topic filters such as Techmeme and Tweetmeme perform the same kind of function, by letting bloggers know what topics are getting the most attention from readers. But does this remove some of the serendipity that can make blogging so powerful? We don’t always know in advance what we want to read about or what will move us. What do you think?

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

    Post and thumbnail photos courtesy of Flickr user Kristina B.

  • UPDATED: Who Uses Social Media More, Men or Women?

    Updated: Men are more positively inclined towards social media activities and use social networking sites more than women, according to what Liberty Mutual called a “comprehensive national survey” of online behavior it released yesterday. This is somewhat surprising, since it’s the exact opposite of what other surveys have found, including a recent one from Royal Pingdom that looked at user profile data from some of the major social networks. Among the findings in Liberty’s survey, which was done as part of the Responsibility Project:

    • Men (57 percent) are more likely than women (50 percent) to have more than one social networking account.
    • With the exception of Facebook, men are generally more likely than women to use social media accounts at least a few times per week, particularly Twitter. For MySpace, the breakdown is 35 percent of men vs. 26 percent of women; for LinkedIn, 25 percent of men vs. 16 percent of women; and for Twitter, 53 percent of men vs. 38 percent of women.
    • Dads are more likely than moms to have a MySpace account or a Twitter account, at 43 percent vs. 29 percent and 50 percent vs. 32 percent, respectively.

    The Royal Pingdom survey, meanwhile, found that of 19 social networking sites studied — including Facebook, Twitter, MySpace and Bebo — the majority (84 percent) had more female than male users. The exceptions to that rule were social news sites such as Digg, Reddit and Slashdot (the latter had 82 percent male users). Twitter and Facebook, meanwhile, were found to have similar proportions of female users — 59 percent and 57 percent, respectively. The most female-dominated site was Bebo, with 66 percent female users, closely followed by MySpace and Classmates.com with 64 percent each. Royal Pingdom said the average ratio across all 19 sites was 47 percent male and 53 percent female.

    Royal Pingdom’s numbers are very similar to those produced by others who have studied the issue, including Brian Solis, who put together some numbers on male-female ratios at different social networks and concluded that “In the World of Social Media, Women Rule.” So what explains the discrepancies between Liberty Mutual’s survey and the others? It could be as simple as the difference between what people say they do and what they actually do — since Liberty asked people how many networks they belonged to, whereas Royal Pingdom and others used actual data from people’s profiles. Liberty Mutual’s survey was also based on a relatively small sample size of just 1,000 people. I’ve emailed the company to ask for a comment, and will update if and when I get one.

    Update:

    A spokesperson for Liberty Mutual responded via email and said that the survey went out to a random sample of the population that reflected the gender breakdown of the U.S. — 52 percent women, 48 percent men. Respondents were also screened to ensure that they had at least one social media account.

    “We did not focus on the findings of how many women versus men on each network, but rather looked at data AMONG women on each network compared to data AMONG men on each network. And that is the analysis where we found men to be more active (with the exception of Facebook) – that is, more likely to say they use each network at least a few times a week, not necessarily more likely to have those accounts.”

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

    Post and thumbnail photos courtesy of Flickr user Wili Hybrid.

  • Zoompass Trials Mobile Payment Tag

    Zoompass, a mobile payment service launched last year by a consortium of Canadian telecom players, is branching out with the introduction of a wireless payment sticker that can be attached to a mobile phone, effectively turning it into a “tap-and-pay” debit card system. Zoompass was developed by EnStream, a partnership among Canada’s three major telecom companies: Bell Mobility, Rogers Communications and Telus Corp. The service allows members to send money to friends or family with their handheld device via an iPhone app, BlackBerry app, etc., and to pay for products and services through a credit card linked to their Zoompass account.

    Now, the service has launched a sticker that attaches to a phone or other handheld device and works as a “contactless payment tag.” The sticker can be scanned by any mobile payment system that supports it, including many that are already in use across Canada at coffee shops, gas stations and other retail locations. Both MasterCard and Visa have also been running trials of special credit cards that allow for contactless payment, which involves waving the card near a payment terminal rather than having to insert or swipe it through a slot.

    Will users want to stick something to the back of their iPhone or BlackBerry that lets them tap and pay for things? A short video of the device in action (embedded below) makes it look relatively inconspicuous, but the reality is that you’re still sticking something to your phone, and it has to be thick enough to transmit a wireless signal. It remains to be seen how willing users are to do this, and whether they will trust the consortium to handle access to their bank and/or credit card accounts.

    That said, however, Zoompass probably has a better chance of making contactless payment work than some other startups that have tried to do so — including one that also involved Bell Mobility and Telus. The two carriers partnered with two of Canada’s major banks (TD Canada Trust and National Bank) to launch a tag-based payment system called Dexit in Toronto in 2001, installing payment terminals in various coffee shops and other locations, but the service never took off and has since been shut down. That system used a separate keychain-style fob that users had to carry, however, while the Zoompass system uses a device that everyone already carries with them.

    Related content from GigaOM Pro (sub req’d):

    NewNet Winners and Losers of 2009

    Post and thumbnail photos courtesy of Flickr user Andres Ruida.

  • Zynga Gets Unfairly Slammed Over Haiti Donations

    If you want to see a Twitter mob in its larval stage, just do a search on Zynga or Farmville and Haiti and you will see one emerging over a report that the social-gaming company kept 50 percent of the money that it raised in donations for the country in the wake of a devastating earthquake. The report originally appeared in a Brazilian magazine called Superinteressante, which did a feature on Zynga and Farmville and mentioned in the piece that it had only given 50 percent of what it raised to Haiti. That was in turn picked up by a leading Brazilian newspaper called Folha de Sao Paulo, which said that Zynga had admitted to only sending 50 percent of the money it raised for Haiti to that country.

    That story got written about in several places around the Web, including at Social Media Today (in a post that has since been removed and replaced with a different one featuring an altered headline) as well as at the opinion site True/Slant, where Marcelo Ballve — a former Associated Press reporter in Brazil — summarized the Folha story about how Zynga had misled Farmville players into thinking 100 percent of their donations would be going to Haiti for earthquake relief (he has since posted an update). The story was also written up at Gawker, which repeated the allegations.

    The Folha story, however, blurs together two Farmville campaigns to raise money for Haiti: One was set up before the earthquake, and specifically said that only 50 percent of the money raised would be sent to Haiti (a screenshot is embedded below). The second, which involved the purchase within the game of special “white corn” for a user’s farm, said that 100 percent of the proceeds would be sent to earthquake relief. According to an emailed statement from a Zynga spokesperson that I’ve embedded below, this is exactly what happened (a similar statement has been posted at the bottom of both the True/Slant post and the Folha story, and referred to by Gawker, but not by Social Media Today, although the latter has since posted an update and apology). The initial campaign for Haiti raised $1.2 million for the country, and the subsequent “white corn” campaign raised an additional $1.5 million.

    Meanwhile, dozens of Twitter messages are still being posted every minute (based on a recent survey of the social network) saying that Zynga “admits to keeping half the money it raised for Haiti,” despite the repeated efforts by Zynga CEO Mark Pincus to rebut such claims through his own Twitter account. The eagerness with which people seem to believe such claims could have something to do with the language barrier between the initial reports and those who have repeated them — but it could also be a result of some negative press that Zynga has received in the past, alleging “scammy” behavior related to lead-generation offers within its games.

    If nothing else, Zynga’s current woes are just another example of social media’s ability to spread both information and misinformation at lightning-fast speeds. For another recent example, see our report about the “death” of folk legend Gordon Lightfoot.

    Related content from GigaOM Pro (sub req’d):

    How The Next Zynga Could Reinvent Social Gaming

    Post and thumbnail photos courtesy of Flickr user Rusty Boxcars.

  • Venture Capital Market Warming Up in Canada

    The venture capital business hasn’t exactly been setting the house on fire in Canada in recent years. The number of successful exits has been tiny, and some funds have all but dried up or stopped making new investments. According to a recent survey released by the Canadian Venture Capital & Private Equity Association, investment levels in 2009 were the lowest they’ve been in over a decade. That said, there have been some encouraging signs that money is starting to flow — or may soon begin to flow — into smaller-stage companies: A new $20 million investment fund called Mantella Venture Partners launched this week, and the Quebec government also announced that it’s selected three seed-capital venture funds to receive a total of C$100 million ($96.82 million) in provincial funding.

    Mantella Venture Partners is a collaboration between Mantella Corp. — a family-owned real estate development firm based in Toronto — and Basecamp Labs, a technology fund run by Robin Axon and Duncan Hill. Both Axon and Hill were formerly at Vancouver-based Ventures West, and left to set up Basecamp Labs, which they describe as an “accelerator” for early-stage companies. The fund provides financing for startups, but also gets involved in hands-on support, including business development, marketing and team development. Hill says that he feels that the “more passive investment model that is common in Silicon Valley” works there because the Valley has a strong ecosystem of repeat entrepreneurs, but that a more hands-on approach works better in a market like Toronto, where most startups are run by first-timers.

    Hill says that Mantella Venture Partners is looking to fund 10-15 companies, and will provide initial rounds of between C$100,000 and C$500,000 with an option to join additional rounds or top up those amounts. Among other opportunities, Mantella says it is interested in startups that are emerging from the technology community that has formed around BlackBerry developer Research in Motion, which is based in Waterloo, Ontario. Basecamp’s current portfolio companies include Chango, which is developing a web-based advertising platform, and a mobile entertainment platform called PushLife (founded by a former RIM employee).

    Despite the downturn in Canadian investment over the past few years, “innovation is still thriving,” Hill said in a statement. “With the venture market in such a state of flux, the timing could not be better for the launch of a new fund that is focused on both early-stage investing and providing the hands-on support entrepreneurs need.” Before he joined Ventures West as entrepreneur-in-residence, Hill was the founder and chief technology officer of Think Dynamics, a data center automation software company that was bought by IBM in May 2003. Axon worked at MD Robotics (formerly Spar Aerospace) and the Canadian Space Agency, where he helped prepare the Canadarm2 for installation onto the International Space Station.

    The Quebec venture funds that are going to receive government financing include FounderFuel Ventures (which is focused on the information and communications technologies industry), AmorChem (focused on life sciences) and Cycle-C3E Capital (which will focus on green technologies). Quebec has agreed to provide a total of $100 million to the three funds through several provincial agencies, including $50 million from Investissement Québec, $33 million from the Solidarity Fund QFL and $17 million from a fund called FIER Partners. Each of the three new funds must also find a minimum of $8.25 million from the private sector in order to receive the government funding.

    FounderFuel Ventures was created by the team behind a Montreal-based venture fund called Montreal Start-up, a group that includes entrepreneur and angel investor Austin Hill, the founder of Zero Knowledge Systems, now known as Radialpoint.

    Related content from GigaOM Pro (sub req’d):

    What The VC Industry Upheaval Means For Startups

    Post and thumbnail photos courtesy of Flickr users Tico and rogiro.

  • GetGlue Expands From Toolbar to Popups

    GetGlue, the social network and recommendation engine from New York-based Adaptive Blue, has launched some new features that it hopes will broaden the appeal of its service: popup content widgets, offering related content and recommendations (i.e. likes and dislikes), on any page that contains a link to Wikipedia, the Internet Movie Database, Last.fm or dozens of other popular sites. GetGlue already offers a toolbar that shows up when users visit certain web sites such as Amazon and eBay — giving them video clips, photos or recommendations related to a product, for example — but the company says it wanted to provide even more ways for people to see that kind of related content and recommendations wherever they are on the web.

    Ami Greko, director of business development for GetGlue, says that while the toolbar provides recommendations and related content for a wide range of popular sites, it can’t possibly cover every site on the Internet. The feature that launched today, she says, is a way of providing those recommendations and related content on any page, whether it’s a major retailer’s site or a friend’s blog, provided the page links to a recognized web site or service such as Wikipedia or IMDB. In a demo, she showed how the links to a GetGlue popup widget were available (to users who install the browser plugin) on the Twitter web site and a page of Google search results.

    After installing the plugin, when GetGlue detects a link to a site it supports, a small “G” icon appears next to the link. When a user’s cursor hovers over the icon, a small popup window appears that contains all the information that the GetGlue toolbar does — including information from the site that is being linked to, but also whatever the service can find elsewhere, such as related YouTube videos, Flickr images and ratings from people you’re friends with on the social network.
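    Conceptually, at least, the detection step is simple pattern-matching against a list of recognized services. The sketch below is an illustrative Python version; the real feature is a browser plugin, and the domain list, class name and method here are my assumptions rather than anything from Adaptive Blue.

        from html.parser import HTMLParser
        from urllib.parse import urlparse

        # Stand-ins for the services GetGlue recognizes.
        SUPPORTED_DOMAINS = {"en.wikipedia.org", "www.imdb.com", "www.last.fm"}

        class LinkAnnotator(HTMLParser):
            # Collect hrefs whose host matches a supported service; the plugin
            # would then place a small "G" icon next to each matching link.
            def __init__(self):
                super().__init__()
                self.matches = []

            def handle_starttag(self, tag, attrs):
                if tag != "a":
                    return
                href = dict(attrs).get("href", "")
                if urlparse(href).netloc in SUPPORTED_DOMAINS:
                    self.matches.append(href)

        parser = LinkAnnotator()
        parser.feed('<a href="http://www.imdb.com/title/tt0087332/">Ghostbusters</a>')
        print(parser.matches)  # links that would get the popup treatment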

    Such functionality is very similar to that of a blog plugin called Apture, which also embeds small icons next to links in web pages with information about the content at the link — YouTube videos, Wikipedia entries and so on. The two major differences are that Apture simply shows you in a popup a sample of what you would get by following the link, while GetGlue shows you other related info as well, and GetGlue adds the social recommendations from your friends.

    Although getting all sorts of info about a site combined with recommendations from your friends seems like a good idea, services such as GetGlue and StumbleUpon suffer from a number of drawbacks. One is that you have to download and install a toolbar, which not everyone wants to do, and then that toolbar pops up whenever you go to a web site that might have related content, behavior that some users also don’t like (since it can also slow down the loading of the page). The other requirement is that you follow lots of people in the network, since the recommendation part of the service requires a certain amount of scale before it becomes useful.

    Greko wouldn’t say how many users the GetGlue network has, but said the number is growing rapidly, and that its members provide 40,000 recommendations (likes, dislikes, etc.) every day on a variety of items such as books, movies and related services. The service doesn’t carry any advertising, she said, because “right now we’re really just interested in creating the best user experience we can, so we’re not really focusing on advertising at the moment.” Which raises an obvious question: How does GetGlue plan to make money?

    Related content from GigaOM Pro (sub req’d):

    The New Digital Retail Revolution

    Post and thumbnail photos courtesy of Flickr user Ryan Grove.

  • Twitter Staffer Stops Blogging After Backlash

    Updated: Alex Payne, a Twitter engineer, is shutting down his personal blog after a comment he posted on Twitter became the subject of a TechCrunch blog post and caused a minor firestorm among Twitter application developers and others involved with the company. The comment (which has since been deleted from Payne’s stream) referred to “some nifty site features” that had been implemented on the internal version of the Twitter site. The Twitter engineer said that if users had access to the same features, “you might not want to use third-party clients.”

    As the TechCrunch post described, this caused a bit of consternation among developers, some of whom were concerned that Twitter would be implementing features that might compete with third-party Twitter tools such as Tweetdeck, Seesmic, etc. As TechCrunch writer MG Siegler noted in a post on his personal blog about the response to his piece, certain Twitter staffers were unimpressed with the article and expressed their displeasure (via Twitter, of course) over what they seemed to think was an overreaction to Payne’s comment.

    So did the Twitter incident cause Payne to stop blogging? He says in his final blog post that while he intended the personal blog to be a place where he could talk about ideas, his posts had started to “spark whole conversations that I never intended to start in the first place, conversations that leech precious time and energy while contributing precious little back.” He also responded to someone on Twitter that he had been considering taking a break from blogging, but that the TechCrunch post “certainly pushed me to consider how I communicate.” And he said that he is “still baffled as to why anyone pays that level of attention to what I have to say.”

    While the Twitter engineer said he will continue to use the service he helped create, it sounds like he will be more cautious about what he posts and the possible implications. He says: “Over time, I’m coming to realize what sort of messages I can communicate effectively via Twitter, and what sort I can’t.” And in a Twitter post, he says: “Learn from my mistake: talk about your business carefully.”

    Although it’s too bad that Payne will no longer be sharing his thoughts about the service and its implications on his blog (which I confess I had become a fan of), it’s somewhat comforting to know that even one of the key architects behind this popular social media tool is still learning how to use it effectively — as we all are.

    Update: Here’s a video clip of some Twitter staffers discussing Payne’s comments at a recent “tweetup” with Twitter developers at the company’s headquarters in San Francisco (thanks to Kosso from Phreadz, who posted a link to this video in a comment below):

    Related content from GigaOM Pro (sub req’d):

    Why NewNet Companies Must Shoulder More Responsibility

    Thumbnail photo courtesy of Flickr user BrittneyBush. Feature photo courtesy of Flickr user Charlesdyer.

  • UPDATED: Google Gets Tough With EU on Street View

    Updated: If the European Union continues to ask for changes to the way Google handles its Street View images, the search engine company may simply decide not to take any new photos for the service in Europe, according to a senior Google executive. Chief technology advocate Michael Jones, who was instrumental in developing Google Earth, made the comments in an interview with Bloomberg at the CeBIT technology conference in Hanover.

    An EU body made up of privacy regulators from all the member countries recently recommended that Google shorten the amount of time it keeps unblurred photos of people and other identifying items such as license plates and street addresses. Currently, Google keeps those images on its servers for a year before they are replaced (users of the Street View service only see blurred images, as part of a previous requirement from privacy regulators).

    Jones told Bloomberg that having to generate new images every six months instead of every year would make the company reconsider whether Street View was worth the investment of time and money. “I think we would consider whether we want to drive through Europe again, because it would make the expense so draining,” he said. Update: In an emailed statement, a Google spokesperson said that Jones’s comment “was not in reference to the retention policy, but was simply a statement generally about how frequently we’d want to update Street View.” In a previous statement to the agency that made the request — an EU advisory group known as the Article 29 Data Protection Working Party — Google lawyer Peter Fleischer said:

    The need to retain the unblurred images is legitimate and justified — to ensure the quality and accuracy of our maps, to improve our ability to rectify mistakes in blurring, as well as to use the data we have collected to build better maps products for our users.

    The dispute over keeping unblurred photos is only part of what has been an ongoing battle with both the European Union and individual European countries over privacy issues surrounding Google Street View. As part of the launch of the German version of the service, for example, the search company has had to agree that it will allow residents to have their pictures removed from the service before they are published, and has also provided a tool that will let German citizens remove photos of themselves, their homes, etc. from Street View after publication.

    Related content from GigaOM Pro (sub req’d):

    As Cloud Computing Goes International, Whose Laws Matter?

    Post and thumbnail photos courtesy of Flickr user dspain.

  • Macmillan Stacking Sandbags Against e-Book Flood

    If you want to see someone frantically struggling to defend an existing analog business model against the disruption that comes from digital, look no further than a blog post today from John Sargent, CEO of book publisher Macmillan. You might remember Macmillan as the company that had all of its books briefly yanked from Amazon’s electronic store a while back, as the two fought over the pricing of e-books. Amazon wanted to force Macmillan and others to sell books for $9.99, but the publisher wanted Amazon to adopt a new “agency model” that would provide more flexibility in pricing. Macmillan’s ace in the hole: Apple had already agreed to the agency model for the forthcoming iPad. Faced with this competitive threat, Amazon wound up conceding defeat.

    In his blog post, Sargent describes (somewhat patronizingly) how the agency model works. In a nutshell, instead of allowing a retailer such as Amazon or Apple to set the price of e-books, the publisher sets the price and gives the retailer a larger cut of the proceeds in return (and lets them think of themselves as “agencies” instead of just humble old retailers). As I mentioned in a previous post about Macmillan and Amazon, this approach might seem a little odd if you’re used to the real world, where retailers typically set the price of the goods they carry on their shelves based on what they think the market wants, supply and demand, and all of those other quaint principles.

    What becomes obvious after reading Sargent’s blog post is that Macmillan wants to retain flexible pricing for e-books for one simple reason: to protect its existing printed book business. You can see this reflected in the very first point the Macmillan CEO uses to justify why the agency model is a better approach than the traditional publisher-retailer relationship, when he talks about the end of the practice known in the industry as “windowing” (which is deeply flawed in its own right). As he describes it:

    All the new adult trade books for which we have the rights to publish in e-book format will be available at the first release of the printed book. We will no longer delay the publication of e-books (read: no windowing). Readers were clearly frustrated at the lack of availability of new titles, and the change to the agency model will solve this problem.

    Here’s the thing: This “problem,” as Sargent calls it, was wholly created by publishers like Macmillan, which hold back the release of e-books to milk traditional hardcover and paperback sales for as long as they can. So now, in response to Amazon and others acceding to their demands on price, Macmillan is going to be good enough to stop doing that. This is the retailing equivalent of the serial killer who scrawls “Stop me before I kill again” on a mirror in lipstick. Could the publishers not have stopped this practice themselves at any time and spared readers the frustration?

    Then Macmillan’s CEO moves on to book pricing itself, and notes that e-book prices will effectively move in lockstep with the prices of printed hardcover and paperback books, although they will start out somewhat cheaper. For example, Sargent says that hardcover books typically sell for between $24 and $28, whereas the e-book versions will be priced between $12.99 and $14.99.

    He describes this as a “tremendous discount,” but that overlooks a couple of important points. For one thing, the vast majority of books aren’t sold at the cover price. Why? Because retailers discount them when they aren’t moving. For another, e-books cost orders of magnitude less to produce than printed books, although debate continues about exactly how much.
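
    A quick back-of-the-envelope calculation shows how much the first point matters. The 40 percent retail discount below is an illustrative assumption about typical street prices for new hardcovers, not a figure from Sargent's post:

    ```python
    # How "tremendous" is the e-book discount once real-world street
    # prices are factored in? The retail discount is an assumption.

    hardcover_list = 26.00          # midpoint of Sargent's $24-$28 range
    typical_retail_discount = 0.40  # assumed discount big retailers apply
                                    # to new hardcovers off the cover price
    ebook_price = 13.99             # midpoint of the $12.99-$14.99 range

    street_price = hardcover_list * (1 - typical_retail_discount)  # $15.60

    vs_list = 1 - ebook_price / hardcover_list    # ~46% off the cover price
    vs_street = 1 - ebook_price / street_price    # ~10% off the street price

    print(f"vs. list price:   {vs_list:.0%} off")
    print(f"vs. street price: {vs_street:.0%} off")
    ```

    Measured against what readers actually pay at the register rather than against the cover price, the "tremendous discount" shrinks to something closer to 10 percent, for a product that carries no printing, shipping or returns costs.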

    More than one reader of Sargent’s blog post noticed that the publisher is effectively trying to replicate the existing price structure and business model of the printed book industry in electronic form (something that publishers have been trying to do for some time now, as John Siracusa noted in a piece at Ars Technica). One commenter said:

    So how much more expensive is hardcover e-ink over paperback e-ink? Your model is doomed.

    While another said:

    This seems pretty well-considered. There’s only one point I’m confused about — can you please explain to me the difference between a “hardcover” and “paperback” e-book? Because that don’t make a lick of sense. Unless of course your definition of sense is “artificial price stratification of identical content.”

    In effect, Macmillan is trying to do exactly the same thing that many other media companies are desperate to do — from newspapers to music labels to movie companies — which is to replicate the pricing model of an analog, real-world business in digital form. In other words, it is trying to artificially reproduce the kind of scarcity (and thus pricing power) it used to have in one medium in a medium that doesn’t even know what scarcity is. Sooner or later, that attempt will fail (among other things, iTunes appears to show that flexible pricing actually leads to lower sales). For now, Macmillan and other publishers have managed to convince Amazon and Apple to accept the new agency model, but those sandbags aren’t going to last for long.

    Related content from GigaOM Pro (sub req’d):

    The Price of e-Book Progress

    Post and thumbnail photos courtesy of Flickr users Cindy47452 and radioher.

  • Books Now Outnumber Games on the iPhone

    People love their iPhone apps — after all, more than a billion of them have been downloaded since Apple launched the App Store. And a big proportion of those apps are games. But you know what else a growing number of people love to have on their iPhone? Books. According to Mobclix, which does mobile advertising for apps, the number of books in the iTunes store now exceeds the number of games for the first time since the device was launched, making books the largest category in the store. The numbers from Mobclix, which keeps a regular tally of the most popular apps and downloads, show that there are more than 26,000 books in iTunes, compared with a little over 24,000 games.

    This fits in with something Om wrote recently based on data from Flurry, which also showed a substantial increase in the number of books being downloaded to the iPhone. At the time, Flurry said that Apple was “positioned to take market share from the Amazon Kindle” for book reading, despite the small size of the display, and that “with Apple working on a larger tablet form factor, running on the iPhone OS, we believe Jeff Bezos and team will face significant competition.” That larger form-factor device, of course, will soon be available under the name iPad, and it looks like an even better book reader than the iPhone.

    In many ways, the popularity of the iPhone as an e-book reader has created a ton of momentum for Apple when it comes to launching the iPad, which as some have pointed out looks like a much larger version of the phone or its cousin the iPod touch. Readers who have grown used to books on the smaller device will probably find it natural to trade up to a larger one that makes books look even better, and will likely gravitate toward something that looks and feels familiar (and has a full-color touchscreen) rather than something like the Kindle.

    Having downloaded and read many books on the iPhone myself, through e-book apps such as the Kindle one, Stanza (which Amazon acquired last year) and Classics, I’ve grown quite used to reading them on the device, despite its small size. But I’d be happy to have something that functioned the same way with a bit more real estate, and I’m sure many e-book fans would share that feeling.

    Related content from GigaOM Pro (sub req’d):

    Evolution of the e-Book Market

    Post and thumbnail photos courtesy of Flickr user striatic.

  • Worlds of Email Marketing and Twitter Collide

    In another sign of how the lines between traditional digital marketing and social media are blurring, email marketer ExactTarget has acquired Twitter management tool CoTweet for an undisclosed sum. ExactTarget, based in Indianapolis, has tools that allow companies to target email marketing pitches and related “permission-based” web campaigns, and has a customer list that includes Expedia, Home Depot, Gannett Co. and Wellpoint Inc. Although it does some text messaging as part of its marketing services, it is not known for its work with social media. The CoTweet acquisition is clearly an attempt to change that. The deal even got the imprimatur of Twitter COO Dick Costolo, who said:

    This acquisition is strong validation that valuable, sustainable businesses are emerging from the Twitter ecosystem. An ExactTarget and CoTweet combination should lead to even further digital marketing innovation through use of the Twitter platform.

    CoTweet, founded in 2008, has become popular with companies such as McDonald’s, Microsoft, Ford, Dell and Pepsi as a tool for managing multiple Twitter accounts from a single page. This makes it easy for marketing staff at those companies to handle the social media pitching and responses they do as part of their campaigns and customer service. CoTweet — which raised $1 million in seed capital in July from a group including The Founders Fund, Ron Conway’s SV Angel fund, First Round Capital, Maples Investments and Freestyle Capital — was one of the first Twitter-related businesses to recognize that the social network could be used as part of an integrated customer relations management service.

    ExactTarget, which filed for an initial public offering in 2007 but never actually went public, raised $70 million in financing from a group including Battery Ventures and Scale Venture Partners in May of last year, and then just a few months later raised another $75 million from Technology Crossover Ventures. It has been using the funding to expand its business outside the United States, and said recently that it recorded its 36th consecutive quarter of growth in the fourth quarter of last year and had annual revenues of $95 million.

    Embedded below is a video of ExactTarget CEO Scott Dorsey (no relation to Twitter co-founder Jack Dorsey, as far as we know) talking about the deal with CoTweet founder Jesse Engle.

    Post and thumbnail photos courtesy of Flickr user foxspain.

    Related content from GigaOM Pro (sub req’d):

    How Human Users Are Holding Twitter Back

  • Mobile Deal Brings Ads to Your Twitter Stream

    Twitter may be working on the imminent launch of its own advertising platform, but that hasn’t stopped others from rushing to profit from the social network. A Twitter ad service called 140proof announced today that its ads will now be integrated into the iPhone and Android mobile apps from HootSuite, a Twitter tool that many businesses use to manage their social media marketing campaigns. Unlike some other advertising options for Twitter, which have seen celebrities paid to endorse products in their posts, 140proof ads are messages posted to a user’s stream by the company in service of a specific targeted ad campaign.

    140proof, which is based in San Francisco and backed by a $2 million investment raised last summer from Blue Run Ventures and Founders Fund, said that its algorithm aims ads at users based on their profiles and other public data. Other Twitter advertising services include Ad.ly, which has gotten some press attention for paying celebrities such as Kim Kardashian thousands of dollars to endorse products to their followers, as well as Magpie, Assetize and IZEA.
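
    140proof has not published how its targeting works, so the sketch below is only a guess at the general shape of profile-based ad matching; every class, name and scoring rule in it is hypothetical.

    ```python
    # Hypothetical sketch of profile-based ad targeting of the kind
    # 140proof describes. The real algorithm is not public; this only
    # shows the general idea: score each campaign against a user's
    # public data and serve the best match.

    from dataclasses import dataclass

    @dataclass
    class Campaign:
        advertiser: str
        message: str
        keywords: set

    @dataclass
    class UserProfile:
        handle: str
        bio_terms: set           # terms pulled from the public bio
        recent_tweet_terms: set  # terms from recent public tweets

    def score(campaign, user):
        """Naive relevance score: keyword overlaps, weighting the bio
        (a stable signal) more heavily than recent tweets."""
        bio_hits = len(campaign.keywords & user.bio_terms)
        tweet_hits = len(campaign.keywords & user.recent_tweet_terms)
        return 2 * bio_hits + tweet_hits

    def pick_ad(campaigns, user):
        """Return the best-scoring campaign, or None if none is relevant."""
        best = max(campaigns, key=lambda c: score(c, user))
        return best if score(best, user) > 0 else None

    campaigns = [
        Campaign("SneakerCo", "New trainers, 20% off", {"running", "fitness"}),
        Campaign("GadgetCo", "Smartphone deals today", {"iphone", "android", "apps"}),
    ]
    user = UserProfile("jdoe", {"developer", "iphone"}, {"apps", "coffee"})
    print(pick_ad(campaigns, user).message)  # -> Smartphone deals today
    ```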

    The question all of these services will inevitably confront — including Twitter itself, once it launches its own platform — is how users will react to a wave of advertising in what was once an ad-free social network (in the case of 140proof, of course, you can simply not use HootSuite’s mobile apps and you won’t see them). Many of these services are only just ramping up in what will undoubtedly become a much bigger campaign to bring ads to the Twittersphere. So what will you do when ads start appearing in your Twitter stream?

    Related content from GigaOM Pro (sub req’d):

    How Human Users Are Holding Twitter Back

  • “Startup University” Pays Off for VC

    As in so many other cities that aren’t located in or near Silicon Valley, the startup scene in Toronto is a fairly small and close-knit community. It benefits from events such as “Demo Camp” and “Bar Camp,” where young entrepreneurs can bounce their ideas off others who have been down that road before, and get some critical feedback. Inspired by those kinds of events, and building on their own efforts to create a community within their portfolio companies, the venture capitalists at Extreme Venture Partners decided last year to fund a kind of school/competition called Extreme University. It went so well that EVP recently announced it’s running another one this summer.

    Based loosely on the format developed by Paul Graham at his Y Combinator incubator, ExtremeU asked entrepreneurs to “audition,” “American Idol”-style, for one of three slots in the program. Successful applicants got $5,000 each from Extreme Venture Partners, as well as office space and free Internet access at the EVP offices in Toronto, and 12 weeks to build and launch their app, product or service. The program was run by Farhan Thawar, the VP of engineering for Xtreme Labs (one of EVP’s portfolio companies), who says he’s hoping to make it even larger by opening it up to more startups. I’ve embedded a short video interview with Farhan below:

    Saif Ajani, co-founder of Assetize, was one of those who applied for and was accepted to Extreme University, and says there were a number of benefits to being in the program. One was the ability to get feedback from Thawar and the founders of Extreme Venture Partners (which has a $10 million fund) as the company’s idea was evolving. Ajani says this was instrumental in helping Assetize shift from its original idea, which was to monetize Twitter by buying and selling user accounts (just as some companies do with domain names), and move towards its current service, which is an advertising network built on Twitter.

    Ajani also says the program allowed Assetize to make connections through Extreme Venture Partners that would not have been possible otherwise, or would have taken much longer. As a result of connections between the venture firm and sports management agency Octagon, he says, Assetize was able to form a partnership with the agency to use its Twitter advertising tools, and recently launched a site called FanWaves.com with the Phoenix Suns as a major partner. As Ajani recalls: “One of the EVP partners said, ‘You know who this would be perfect for is sports celebrities like Michael Phelps,’ and because they had worked with Octagon, that helped us connect with someone there.”

    A third benefit of the Extreme University program, Ajani says, was being able to leverage the development work that was already going on in the offices of Xtreme Labs, which Assetize and the other members of Extreme University shared. The largest of EVP’s venture portfolio companies, Xtreme Labs is a web and mobile development shop that works on iPhone, BlackBerry and Android apps, and has done work for sites such as Urbanspoon, CitySearch and Dictionary.com. About 30 developers are set up in a single room, working in what is called “paired development” teams, which involves two programmers (typically one experienced and one less experienced) working together on a project. Thawar says this structure allows not just the Extreme University companies but all of EVP’s portfolio companies to take advantage of the brainpower of Xtreme Labs.

    The Xtreme Labs VP says the “startup university” idea not only provided a boost for some startups and built goodwill in the Toronto startup community, but also gave Extreme Venture Partners connections to some strong entrepreneurs and companies that could turn into full-fledged portfolio companies in the future (as part of the Extreme University program, EVP gets a 10 percent stake in each of the startups that are accepted). Assetize, for example, is looking for its first round of financing with help from EVP, which has agreed to provide some of the funding if the company finds a second VC to join the round.

    Related content from GigaOM Pro (sub req’d):

    What The VC Industry Upheaval Means for Startups

    Post and thumbnail photos courtesy of Flickr user Thomas Hawk.