Author: Marshall Kirkpatrick

  • Gmail Becomes an App Platform: Google Adds OAuth to IMAP

    You may or may not be excited by the acronyms OAuth and IMAP/SMTP, but the combination of the two is very exciting news. Google Code Labs announced this afternoon that it has just enabled 3rd party developers to securely access the contents of your email without ever asking you for your password. If you’re logged in to Gmail, you can give those apps permission with as little as one click.

    What does that mean? It means mashups based on the actual emails in your inbox. If you’ve given a 3rd party app secure access to your Twitter account, then you’ll be familiar with the user experience. The first example out of the gate is a company called Syphir, which lets you apply all kinds of complex rules to your incoming mail and then lets you get iPhone push notifications for your smartly filtered mail. Backup service Backupify will announce tomorrow morning that it is leveraging the new technology to back up your Gmail account, as well.


    People are often wary about the idea of giving outside services access to their email, and well they should be. OAuth is designed to make that safe to do. Combined with the IMAP/SMTP email retrieval protocols, it gives an app a way to ask Gmail for access to your information. Gmail pops up a little window and says “this other app wants us to give it your info – if you can prove to us that you are who it says you are (just give Gmail your password) – then we’ll vouch for you and give it the info.” The 3rd party app never sees your password and can have its access revoked at any time. You can read more about OAuth, how it was developed and how it works, on the OAuth website.
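    As a rough illustration of what that flow looks like on the wire, here is a minimal Python sketch of an OAuth-authenticated IMAP login to Gmail. It assumes the app has already completed the OAuth dance and holds an access token; the SASL string format shown follows the XOAUTH2 convention Gmail documents, and the function names are my own.

```python
import imaplib

def build_oauth_auth_string(user, access_token):
    """Build the SASL client response Gmail expects for token-based
    IMAP login (XOAUTH2 format). The app never sees the user's
    password, only a revocable token."""
    return f"user={user}\1auth=Bearer {access_token}\1\1"

def fetch_inbox_message_ids(user, access_token):
    """Connect to Gmail over IMAP and authenticate with the token.
    (Makes a network call -- shown for illustration only.)"""
    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.authenticate(
        "XOAUTH2",
        lambda _challenge: build_oauth_auth_string(user, access_token).encode(),
    )
    imap.select("INBOX")
    _typ, data = imap.search(None, "ALL")
    return data
```

    If the token is revoked, the `authenticate` call simply fails; the user never has to change a password.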

    Why is this so exciting? Because it means that the application we all spend so much time in, where so much of our communication goes on and where you can find some of our closest work and personal contacts – can now have value-added services built on top of it by a whole world of independent developers, without your having to give them your email password.

    That’s the kind of thing that the data portability paradigm is all about. It’s the opposite of lock-in and seeks to allow users to take their data securely from site to site, using it as the foundation for fabulous new services. Google says it is working with Yahoo!, Mozilla and others to develop an industry-wide standard way to combine OAuth and IMAP/SMTP.

    See also: Rapportive – an incredible GMail contacts plug-in.



  • Unvarnished: Is Pete Kazanjy an Evil Genius?

    Unvarnished is a new website where you can post and read anonymous reviews of people and their professional performance. That sounds a little frightening, doesn’t it?

    TechCrunch has been writing about it for days and the company just started rolling out invites. See Michael Arrington’s thought-provoking, if extreme, post Reputation Is Dead: It’s Time To Overlook Our Indiscretions and Evelyn Rusli’s review Unvarnished: A Clean, Well-Lighted Place For Defamation. I told Unvarnished founder Pete Kazanjy that I thought he was doing more harm than good, I heard his response and now I’ve tried his site. It turns out that reality is a lot more complex than the hype. Unvarnished is both more intellectually interesting and less freakishly prurient than you might think.



    Above: A “trusted reviewer” badmouths VC Dave Hornik. But the criticism isn’t anything you couldn’t read elsewhere (like TheFunded) and is pretty debatable in its validity. This was the only criticism of a person I could find on the site in early browsing today, and it’s pretty tame stuff.

    Unvarnished could be positioned as a place you can anonymously slam your former bosses, or a place you’ve got to visit in order to see what’s been written about you. It could just as accurately be described as LinkedIn with teeth, minus the sappy reviews people post to each other’s profiles on that site. LinkedIn with teeth makes it seem more mundane, and that is the truth of the matter. Browse around a little and you’ll calm down pretty quickly. Come back later when you’re considering working with someone and you may find it useful.

    Could the service be abused? It could, but first let’s look at how it works.

    Unvarnished operates on top of Facebook, which is both good and bad. To create an Unvarnished account, you have to get a request to be reviewed sent by a Facebook friend; you have to use Facebook Connect to log in to the service; and you have to have demonstrated a certain amount of activity on Facebook, to prove that you aren’t setting up a fake account just to post critical reviews of people on Unvarnished. At many points in navigating the site you’re encouraged to post reviews of your Facebook friends.
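    The Facebook-activity gate is easy to picture in code. This is a purely hypothetical sketch of such an eligibility check; Unvarnished has not published its actual rules, and every field name and threshold below is invented.

```python
def eligible_to_review(account):
    """Hypothetical anti-throwaway check: require enough of a Facebook
    footprint that an account created just to slam someone does not
    qualify. All thresholds are illustrative, not Unvarnished's real ones."""
    return (
        account.get("friends", 0) >= 25
        and account.get("account_age_days", 0) >= 180
        and account.get("recent_posts", 0) >= 5
    )
```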

    The reviews you post are tied to your profile, but readers and the people you review cannot trace back from your reviews to see who posted them. They can only see your aggregate activity history on the site and how highly rated your other reviews have been. In other words, if you’ve reviewed a lot of people and many other users have approved of your reviews, then your next review is going to carry extra weight in the minds of readers. Chronically judgmental but on balance positive? You’ll love Unvarnished!
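    The aggregate-credibility idea can be sketched as a simple scoring function. This is not Unvarnished’s actual formula, just one plausible way to weight a reviewer by volume and approval history, with invented names throughout.

```python
def review_weight(reviews_posted, approvals, disapprovals):
    """Hypothetical reviewer-credibility score: the more reviews you've
    posted and the more often readers approved of them, the more weight
    your next review carries. A brand-new account scores near zero."""
    if reviews_posted == 0:
        return 0.0
    total_votes = approvals + disapprovals
    approval_rate = approvals / total_votes if total_votes else 0.5
    # Dampen by volume so one lucky review doesn't dominate.
    volume_factor = reviews_posted / (reviews_posted + 10)
    return approval_rate * volume_factor
```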

    The downside of the close Facebook integration is that, once again, Facebook is centralizing our identity as we navigate around the larger web. Expect to see many more sites do this, though, as it makes authentication really easy and means that every new user automatically arrives with demographic, social and taste data. Sorry, OpenID and distributed data portability: 400 million people voted for Facebook.

    Opportunities for Abuse

    You can’t delete things that get said about you on Unvarnished. It’s like Yelp but for individuals, and many businesses already hate Yelp. What’s to stop people from saying untrue, unkind, unfair and unattributed things about you? Not much.

    “A lot of people say ‘I don’t want people to make reputation claims about me’,” site founder Kazanjy says, “but they also say ‘I certainly would like to consume reputation claims about other people’.”

    People on the site have the opportunity to say bad things about you and your supporters have the opportunity to respond. You might be a bully with a posse of bullies who have your back. Your critics might be marginalized people who make no use of Unvarnished other than to shed much-needed light on your abuses of power, or they might be people with an axe to grind who jump onto the site to post terrible, untrue things about you.

    Kazanjy’s contention is that an unfair critic’s low reputation on the site, combined with a group of vocal supporters who have your back, can overcome any unfair criticism of you. That’s not very convincing.

    Unvarnished as a Democratic Force

    When he says that both the offline world and the web at large work in the same way (anyone can post anything about anybody) and that Unvarnished is merely centralizing this discourse, things start to get interesting.

    Few people have the knowledge, the broadcast platform or the search engine pull to really post a free-flying slam against a person online in a place it could be easily found. The relatively few people who could do that have an unspoken agreement not to do so. It would be uncouth and open them up to other powerful people doing the same thing to them.

    Unvarnished aims to create one centralized, democratized place to learn about a person’s reputation. Suddenly even people who are not powerful public figures will have a single, prominent place to post their criticisms of others – and they’ll have very little disincentive to do so.

    Is that evil? Perhaps it is, a little. Is it a little bit genius as well? Time will tell. Unvarnished invites have begun filtering through Facebook today. If you see one, take a few minutes to check it out.



  • Boom! Tweets & Maps Swarm to Pinpoint a Mysterious Explosion

    What would you do if you heard a giant boom and you didn’t know where it came from? If you’re like thousands of people in Portland, Oregon, you might hit Twitter and Google Maps to participate in the city-wide exploration of a slightly frightening mystery. Last night at about 8 p.m., people in a big part of the city felt their windows shake and no one could tell them what caused it.

    Was it a sonic boom? An angry deity? Even the mayor himself tweeted this morning that he was looking into the sound. In the meantime, thousands of people were using the hashtag #pdxboom and adding themselves to a hastily configured Google Map showing where they lived and how loud the boom had been there. In just a few hours, a pattern emerged, with reports clustering around one city park. This morning the police found a detonated pipe bomb there and cited the Google Map in their announcement.


    Pausing the Stream

    Reid Beels is a designer, geo-developer and one of the community organizers of Portland’s forthcoming conference Open Source Bridge (“The conference for open source citizens”).

    Beels says he was sitting in a restaurant in southeast Portland when he heard the boom, and saw tweets streaming in about it within minutes. He searched Twitter for “boom” and “explosion,” limiting the results by location. Within five minutes, he says, a hashtag had emerged: #pdxboom.
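    Spotting an emerging tag like #pdxboom in a raw search stream is a straightforward counting exercise. A minimal sketch, assuming the tweets have already been fetched as plain strings (the function name is my own):

```python
import re
from collections import Counter

def trending_hashtags(tweets, top_n=3):
    """Count hashtags across a batch of tweet texts to spot an
    emerging tag. Tags are compared case-insensitively."""
    counts = Counter()
    for text in tweets:
        counts.update(tag.lower() for tag in re.findall(r"#\w+", text))
    return counts.most_common(top_n)
```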

    What was the #pdxboom, people wanted to know. Some people said it sounded like thunder. Lots of people said it sounded like an empty trash Dumpster crashing to the ground. They mentioned their locations in their Tweets, and Beels quickly grew frustrated that all this data was just streaming into the ether, lost to analysis.

    So he threw up a Google Map with instructions to put a pin in your location and describe how the boom sounded to you.

    Within an hour 100 people had placed pins on the map. Beels and developer Audrey Eschright came up with a color coded system to describe the intensity of the sound, and began retroactively coloring in pins based on any comments people left.
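    A hypothetical version of that color-coding step, mapping a free-text comment to a pin color by keyword. The keywords and color scale here are invented, not Beels and Eschright’s actual scheme.

```python
def intensity_color(comment):
    """Classify a free-text boom report into a pin color by keyword.
    Checks strongest signals first, falls back to gray if nothing matches."""
    text = (comment or "").lower()
    if any(w in text for w in ("shook", "windows", "loud", "huge")):
        return "red"      # strong -- felt or heard forcefully
    if any(w in text for w in ("heard", "boom", "thunder")):
        return "yellow"   # moderate -- clearly audible
    if any(w in text for w in ("faint", "barely", "distant")):
        return "green"    # weak -- barely noticed
    return "gray"         # no usable description
```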

    Then they found out that Google Maps will only display the 200 most recent pins placed in a public map. Beels’ friend Aaron Parecki wrote a script to download the map’s data every fifteen minutes. That came in handy when a few hours later someone vandalized the map by dragging a large number of markers outside the town. It was trivial to roll back to the last valid data.
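    The archive-and-roll-back approach is simple to reproduce. Here is a sketch, with invented function names, of periodically snapshotting the marker data and recovering the newest copy that passes a validity check (e.g. “no marker dragged outside the metro area”):

```python
import json
import pathlib
import time

def snapshot(markers, directory="map_snapshots"):
    """Save the current marker set to a timestamped JSON file, the way
    Parecki's script archived the map every fifteen minutes."""
    path = pathlib.Path(directory)
    path.mkdir(exist_ok=True)
    fname = path / f"markers-{int(time.time())}.json"
    fname.write_text(json.dumps(markers))
    return fname

def last_valid_snapshot(directory="map_snapshots", is_valid=lambda m: True):
    """Roll back: return the newest archived marker set that passes
    the validity check, or None if no snapshot qualifies."""
    files = sorted(pathlib.Path(directory).glob("markers-*.json"), reverse=True)
    for f in files:
        markers = json.loads(f.read_text())
        if is_valid(markers):
            return markers
    return None
```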

    Photo by Igal Koshevoy

    The local TV news and the newspaper ran stories about the boom, and pointed their audiences to the Google Map. Thousands of people visited it, and just under 1,000 added a pin marking where they were and how loud the boom had sounded to them.

    It became clear that the boom originated near the Sellwood Bridge; a big cluster of red markers surrounded the area, especially to the east. Thousands of people are still streaming in to look at the map; at the end of the day it’s approaching 70,000 views, even though the mystery, if not the crime, has been solved.

    Some people thought it was a precursor earthquake boom. (I woke up convinced my house was in an earthquake.) But the Portland police went to a park in the area most filled with red markers on the map and found a large detonated pipe bomb. A Portland police spokesperson said the maps and tweets were very helpful.

    A topographic view of the map made some inclined to believe that cliffs across the river and low-hanging clouds combined to make the sound travel as far across the city and in the direction that it did.

    That Was a Practice Run

    Beels says two big lessons came out of the experience for him. First, the tools they used were easy and fast, but they were also quite limited. Google Maps in particular was capable of multi-user collaboration but did poorly when it came to displaying a large amount of data. As Eschright wrote after the action, “It’s not the best platform for a couple hundred people, many without prior experience editing maps, to be using all at once.”

    Inspired by campaigns like CrisisCampPDX and the CrisisWiki, Beels says the community is interested in setting up an installation of the open-source crisis-support software Ushahidi on standby in case a real crisis has to be dealt with.

    Beels says he’s inspired not just by what was done in this situation, but by what it revealed about the future. “The community of people who will search for things online and go out of their way to try to figure out what’s going on,” he says, “is larger than you might think.”

    Marshall Kirkpatrick is leading a webinar for Poynter’s News University on Thursday about how location services are changing the news.



  • Cracking Facebook’s Dominance: New Cross-Network Commenting Protocol Could Be a Game Changer

    Two companies outside Silicon Valley say they are the first implementers of a new open source protocol called Salmon, which allows comments to be sent over the walls of one social network to communicate with users of another. Imagine being able to post a message on Facebook to “@janedoe@twitter” and then seeing Jane receive the message in real time on Twitter. It’s a vision comparable to being able to call any telephone number, whether it’s part of your phone provider’s network or not.

    Facebook isn’t implementing Salmon, but that’s what Canadian open-source business microblogging service Status.net and Florida-based stream service Cliqset announced this morning they have implemented between their networks. Think of it as a technical foil to monopoly, beginning to unfold.


    Because Salmon is an open standard, any service can implement it without formal business relationships, and Google Buzz is expected to enter the Salmon ecosystem next. If a substantial portion of the technical community implements Salmon, Facebook could be under a lot of pressure to do so as well. (As it was with OpenID, for example.) If you could still message your friends inside and outside Facebook, it would be a lot easier for innovative new alternative networks to lure you away from the one big site that 400 million people use today.
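    At its core, a Salmon “slap” is an Atom entry pushed upstream to the target network’s endpoint. Below is a rough Python sketch of building such an entry, using the Atom Threading Extension’s in-reply-to element to tie the reply to the remote status. The real protocol also wraps the entry in a Magic Signatures envelope so the receiver can verify the author; that step is omitted here, and the function name and example identifiers are my own.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"
THR_NS = "http://purl.org/syndication/thread/1.0"  # Atom Threading Extension

def build_salmon_entry(author_uri, in_reply_to, content):
    """Build the (unsigned) Atom entry a Salmon slap carries: a reply
    created on one network, addressed to a status on another."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    author = ET.SubElement(entry, f"{{{ATOM_NS}}}author")
    ET.SubElement(author, f"{{{ATOM_NS}}}uri").text = author_uri
    ET.SubElement(entry, f"{{{ATOM_NS}}}content").text = content
    # thr:in-reply-to ties the slap to the remote conversation.
    ET.SubElement(entry, f"{{{THR_NS}}}in-reply-to", ref=in_reply_to)
    return ET.tostring(entry, encoding="unicode")
```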

    The Players

    Evan Prodromou of Status.net says his service has 1.2 million users, hosts 12,000 sites on its cloud and is adding 800 sites per week. It’s a hot little startup that’s fast implementing new technical protocols and making high-profile hires. Status.net began rolling out Salmon support earlier this month but today announced that it was working with Cliqset on displaying the cross-network communication. “We’ve got disparate implementations communicating well using this open standard for cross-network conversations,” Prodromou said today. “It’s the first time!”

    Cliqset is better at trailblazing innovation than at user acquisition but is a very respected member of the technical community working to create social network interoperability.

    Google Buzz appears to have seen a lukewarm public reaction to its launch but is most disruptive because of its support for open data standards. Salmon is still listed in the “coming soon” stage of the Buzz roadmap.

    Today’s news isn’t just about those players, it’s about the Salmon protocol that would allow any social network to participate. Salmon was developed primarily by Google employee John Panzer. If you’ve seen the way that the Echo commenting system displays Tweets, trackbacks and other social media mentions below blog posts, that’s the kind of model that Salmon aims to make open source.

    Interoperability as Foundation for Choice, Innovation, User Control

    Facebook’s near monopoly on mainstream social networking means that users have limited options in how they experience social networking and they have to play by Facebook’s rules. Not everyone likes how Facebook changes its rules, especially its privacy policy.

    Likewise, though Facebook is incredibly quick to innovate, it’s generally assumed that a market with more than one competitor gives all companies in question more incentive to try to win the hearts of users.

    Simply put, if you could leave Facebook and still communicate with people using Facebook (you can’t today), then leaving Facebook would be a lot easier, and more social networks would have reason to invest in building a compelling service for you to use. If there were more than one meaningful option, those services would compete to build the best social network they possibly could. And Facebook would have more reason to be careful when considering dramatic changes in things like its privacy policy. Today, where else are you going to go without losing touch with all your friends?

    That’s why interoperability is important and that’s why it’s a big deal that two small social networks used by early adopters have pushed Salmon-based interoperability out into the wild.



  • Facebook May Share User Data With External Sites Automatically

    Imagine visiting a website and finding that it already knows who you are, where you live, how old you are and who your Facebook friends are, without your ever having given it permission to access that information. If you’re logged in to Facebook and visit some as yet unnamed “pre-approved” sites around the web, those sites may soon have default access to data about your Facebook account and friends, the company announced today.

    Barry Schnitt, Senior Manager, Corporate Communications and Public Policy at Facebook, told us in an email that “the right way to think about this is not like a new experience but as making the [Facebook] Connect experience even better and more seamless.” There will be new user controls made available, but this is a new experience: this makes Facebook Connect opt-out instead of opt-in.


    The proposed change was first written about by Jason Kincaid on TechCrunch, who called it Facebook’s Plan To Automatically Share Your Data With Sites You Never Signed Up For.

    Here’s the language Facebook used to describe the draft policy:

    Pre-Approved Third-Party Websites and Applications. In order to provide you with useful social experiences off of Facebook, we occasionally need to provide General Information about you to pre-approved third party websites and applications that use Platform at the time you visit them (if you are still logged in to Facebook). Similarly, when one of your friends visits a pre-approved website or application, it will receive General Information about you so you and your friend can be connected on that website as well (if you also have an account with that website). In these cases we require these websites and applications to go through an approval process, and to enter into separate agreements designed to protect your privacy.

    That sounds downright creepy. It’s nice to have one-click access to your Facebook info if you decide to share it with other sites – that’s what Facebook Connect does – but the prospect of having that information automatically shared when you show up on another website seems like an idea that won’t be well received by users. There’s a big difference between opt-in and opt-out “data portability.”

    Schnitt says: “People love personalized and social experiences and that’s why Facebook and Facebook Connect have been so successful. We think there are some instances where people would benefit from this experience as soon as they arrive on a small number of trusted websites that we pre-approve.”


    Schnitt is the man who told us in a previous interview about Facebook’s fundamental shift away from being private by default (Why Facebook Changed Its Privacy Strategy) that users generally go along with the company’s default privacy settings because they agree with the company’s recommendations and because the world is changing to be less private. He cited the growth of Twitter, blogging and reality TV as evidence that the world was changing this way and that people are less interested in privacy.

    In that interview, Schnitt also acknowledged that business reasons, like pageviews and advertising, were part of why Facebook was transforming away from privacy as well. We asked if this new opt-out Facebook Connect was the first step in a Facebook Ad Network, where your profile on Facebook is used to target ads that Facebook sells on sites all over the web. Schnitt told us, “this has absolutely nothing to do with advertising.”

    Do you buy all that?

    Do you trust Facebook to select trustworthy websites to automatically share your data with as you browse around the web? If you don’t trust Facebook’s judgment, you will be able to opt out of exposing that data. But by default you’ll be sharing it.

    By default, you’re sharing more and more these days, with more and more people. Perhaps that’s because of your love for Twitter and reality TV, but perhaps it’s because of Facebook’s cultural and commercial agenda.



  • The Day EveryBlock Came to Town

    A fight just broke out down the street from my house. Yesterday, a dog in my neighborhood had one of its legs amputated. That’s the kind of news I like to know and so I’m very excited that MSNBC’s hyper-local news aggregator EveryBlock has expanded this week to include services in Portland, Oregon.

    EveryBlock is one of scores of competing services that serve up public records, social media content and local announcements on a neighborhood-by-neighborhood, or in this case block-by-block, basis. What does it mean when the most successful of these services rolls into your town? 12 hours into the experience, here’s what some of the local (human) media geeks have to say about it. This conversation offers a unique view into the front-line battle to offer news consumers more, and faster, information about our own neighborhoods than we’ve probably ever had before.


    Does existing local media consider EveryBlock a threat? Local TV news personality and new media experimenter Stephanie Stricklen doesn’t. “I can’t think of any reason why it’s not awesome,” she told us. “Any time you bring another source of information into a city, especially one where you can access info about such a small geographic perspective, I like that.”

    “No matter what you think of online journalism, everything is changing and the more players that come to the table the better off we all are. We serve different audiences. The local TV stations could never have the time to visit every single block every day, there’s not enough people, not even the newspaper could.”

    Might local human reporters use a service like EveryBlock to find stories they should investigate and put in context? “Absolutely,” Stricklen said, “I can see myself using something like this.”

    As I write this story, some kind of animal problem has been reported at an intersection near where I live and an experimental short film screening was just blogged about by a neighborhood arts organization. The films aren’t my style, to be frank, but I love that I am aware of the event.

    In fact, many of the updates from public records are maddeningly unclear. Many others are so trivial that lots of readers wouldn’t consider them news. The health department visited the Chinese restaurant down the street and found the ice-scooper stuck handle-side down in the ice machine! Some lady on Yelp said she didn’t like the tapas restaurant. Someone just flagged down a police car, but EveryBlock has no idea what it was about. To this, EveryBlock’s Dan O’Neil says: “Are there gaps in EveryBlock’s knowledge? Yes. Are there gaps in human knowledge in real life? Yes! There’s a comment field, help us out!” Is this a newswire of completed stories? No, this is something different. (But it is a complete publishing of the public records your taxpayer-funded agencies make available, O’Neil points out.)

    To be honest, I like reading that kind of stuff. Maybe you do too. As O’Neil says, “we do have a wider definition of what news is.” Not everyone feels satisfied with the level of detail being provided or the absence of filtering the signal from the noise. It’s hard to imagine machines replacing the human storytelling that journalists provide. The machines could augment that journalism, though, and there’s lots of room for them to do an even better job of it.

    Where Humans and Machine Work Together

    EveryBlock founder Adrian Holovaty told us in January that the organization had hired a full-time editor to research various government agency codes in order to articulate public records in a more human-readable way. “It’s one thing to publish public records; it’s another to make sense of them,” he said.

    EveryBlock’s O’Neil told us that editor’s name is Paul Wilson and said Paul put in hours interviewing Portland municipal staff in order to translate the data fields the city publishes into the format EveryBlock now publishes. O’Neil says those municipal staff members are unsung heroes, especially Rick Nixon of the Bureau of Technology Services.

    “It’s a very complex endeavor to publish regularly updated data,” O’Neil says.

    “Portland has excellent metadata and contact info, but a lot of times it’s hard to get to the expertise and for those experts to explain it to someone else. When it’s not your job to answer phone calls from web developers and tell them what spreadsheets mean, it’s tough. We’re in a weird gap time. In the future the expectations and questions we bring to data will be more common and it will be a part of people’s job descriptions – but the people in Portland should be commended for already really trying to figure out what these things mean.”

    Portland makes a lot of this data officially available as part of its brand-new CivicApps program, but EveryBlock worked with the county restaurant inspection agency to get that data in particular through other channels. “We’re cycling through 5,000 restaurants on a nightly basis,” O’Neil says, “and the restaurant inspections in Portland are the most plain language content of all the cities we look at. It’s great to see those people speaking in human and not just municipal language.”

    Home-Team Geeks

    “What will be really exciting is to see what Portland’s indigenous community of developers and web journos do with the content the city is making available,” says Steve Suo, editor and executive VP of Portland’s real-time, white-label EveryBlock competitor NozzlMedia. Nozzl is made up of long-time newspaper guys, now building something for the future. (See our write-up of Nozzl: “Welcome to the Age of Robot Reporters”.) EveryBlock’s arrival in town happened just days after the city’s celebrated opening of a substantial amount of new data through CivicApps, and with help from the city. Nozzl thinks it can do a better job of putting this data into context. “The more eyes you have on the data, the more insights we’ll see brought to bear,” Suo says.

    “We’re currently adding all the same Portland data for our Portland metro news customers,” Nozzl co-founder and CEO Steve Woodward says. Woodward says that in addition to prioritizing context and serving white-label customers, Nozzl pulls from more sources of data, covers a broader geographic area, and focuses on real-time data. “EveryBlock will tell you what crimes occurred near your home over the last several days. Nozzl will give you information about that siren you hear at this very moment.”

    EveryBlock’s O’Neil basically says bring it on, pointing to Portland’s mere five-minute delay on 911 call data and his site’s real-time bulletin feature.

    These are remarkable times. There are services like EveryBlock, Nozzl, Outside.in, Fwix and more all battling it out to best serve us users with new and innovative ways to drill down into more details about our immediate physical surroundings.

    EveryBlock is the biggest player in the game, though, and our awareness of hyper-local news here in tech-savvy Portland has probably been changed for good.



  • Facebook Confirms & Reconsiders Forthcoming Location Feature

    Facebook confirmed today that it is working on a location-based product but said that it has re-evaluated its plans to focus more on places like restaurants.

    As part of a larger blog post about clarifying language around privacy controls, Facebook deputy general counsel Michael Richter said today that the company now has “different ideas” that are “even more exciting” than what it previously planned to do with location. More details will be available, including regarding privacy, as the company finalizes the product.


    That doesn’t sound like an open product development cycle with privacy policy discussions going on before the product is finalized, but from an innovation perspective it’s hard not to be excited about something “more exciting” than simply adding location to posted items.

    Anonymous sources told the New York Times earlier this year that Facebook was developing a location feature to be released at the F8 developers conference in April.

    Here’s the relevant section of today’s post:

    The last time we updated the Privacy Policy, we included language describing a location feature we might build in the future. At that point, we thought the primary use would be to “add a location to something you post.” Now, we’ve got some different ideas that we think are even more exciting.

    So, we’ve removed the old language and, instead added the concept of a “place” that could refer to a Page, such as one for a local restaurant. As we finalize the product, we look forward to providing more details, including new privacy controls.

    The reference to Pages like local restaurants may allude to a very close tie-in with local business advertising at the launch of the location feature.

    The difference between location and “place” is a significant one. Substantial resources are dedicated by location-aware social networks to determine what “place” your location refers to. That might mean neighborhood, it might mean business name and it might mean recognizing when you are posting from home so that location can be selectively hidden if you so choose.
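    Mapping a raw coordinate to a named “place” is, at its simplest, a nearest-neighbor lookup against a place database. A toy Python sketch of that step; the place list, radius and function names are invented for illustration:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlng = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def nearest_place(lat, lng, places, max_km=0.2):
    """Resolve a coordinate to the closest named place within max_km --
    the lookup needed to say 'at Joe's Cafe' rather than '45.52, -122.68'.
    Places are (name, lat, lng) tuples; returns None if nothing is close."""
    best, best_d = None, max_km
    for name, plat, plng in places:
        d = haversine_km(lat, lng, plat, plng)
        if d <= best_d:
            best, best_d = name, d
    return best
```

    A real system would also decide when the resolved place is your home and should be selectively hidden, as the paragraph above notes.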

    What kind of “place” analysis does Facebook have in mind that goes beyond location? Time will tell, but hopefully user privacy will be handled effectively. Location disclosure is a very touchy subject and Facebook’s recent about-face towards a default all-public privacy stance could cause a substantial backlash when it comes to the mainstreaming of location sharing.

    Physical location is one of the most sensitive forms of information we possess, and it’s going to be very tempting for Facebook to push people towards being more public than they might like. Can the company get the balance right? Privacy and useful features are the two big questions.

    See also: Why Facebook Changed Its Privacy Strategy



  • Digg’s iPhone App Might Be Better Than the Website

    Digg released its official iPhone app this morning and in many ways it’s more usable than the website itself. The app is a little buggy, doesn’t allow you to post comments and doesn’t include the video or images section of the site – but it’s still quite good.

    The best is likely yet to come, as the app’s login process points to the next version of Digg as a platform. In that future scenario, top stories will be surfaced faster and more to your liking by integrating links shared by your friends on Twitter and Facebook. Meanwhile, there are a number of reasons I’m more likely to use this new app than I am to visit Digg.com.


    The new Digg app’s design is clean, simple and easy to browse. No ads, no story summaries until you ask for one, no extraneous text about Digg features. It’s much easier on the eyes than the site itself, which has grown overloaded with features, colors and text. Feature request: It sure would be nice to be able to peruse the images section of Digg on the iPhone, especially if there was a way it could be made fast.

    Bookmarking stories to read later is smart. That’s a feature unique to the iPhone interface. Browse your favorite categories looking for the stories you’re interested in reading, bookmark them, then go back and see them all in one place. That’s a compelling use-case, especially on mobile. What percentage of the stories on any Digg page do you really want to read? Hunting for interesting content amongst what’s popular is part of the fun. Why stop hunting for fear of losing track of a gem you’ve found? Feature request: Instapaper integration. I’d love to hit up Digg while sitting on the tarmac before a plane takes off and easily save some good stories to read while I’m up in the air. Perhaps just off-line reading like the New York Times app is what I’m looking for.

    The “most popular” paradigm is particularly well-suited for mobile. If you’re a serious web user, you probably get news and links from all over the web, not from one site. Right now Digg’s links are probably old news to you, too. The limited selection of most-popular, “in case you missed it” stories you’ll find on the Digg front page is uniquely well suited to mobile reading. I don’t have time to do too much exploration on my phone, so please recommend me some good stuff in a hurry. Feature request: Please bring personalized recommendations to this app ASAP.

    The sooner those recommendations can be tied to the activities of my friends on Twitter and Facebook, the better. Unfortunately, it looks like that New Digg is still several months away.

    Until it comes out, though, the Digg iPhone app is a pretty good way to follow popular news online. You can get it here or search for it in the app store.

    Discuss


  • Test Shows: iPhone Touchscreen Still the Best

    If the future is all about touchscreen interfaces, then a screen’s performance in registering where it’s been touched is pretty important. International design firm Moto ran a robotic finger test on six leading touchscreen smartphones to see how well they registered a robot’s loving touch.

    Some of the phones did remarkably poorly, like the BlackBerry Storm and the Motorola Droid. The iPhone, Google Nexus One and HTC Droid Eris all did quite well. Check out the video below to see the tests and marvel at the apparent differences between touchscreens and their performances.

    Sponsor

    Robot Touchscreen Analysis from MOTO Development Group on Vimeo.

    As Sadat Karim writes on Neowin,

    “Hope is not lost though, as Moto Labs concludes that they do expect these problems to be remedied in the future as touchscreens mature and gain further traction in the industry. Commitment and competition will ultimately deliver seamless touch experiences for all consumers over time, since phone makers are continuously perfecting their products.”

    To see touchscreen hardware nerds duke it out over the test, check out the Moto Labs blog. How about you, readers? Have you felt the difference in performance across some of these handsets?

    See also: User Interfaces Rapidly Adjusting to Information Overload

    Don’t miss the ReadWriteWeb Mobile Summit on May 7th in Mountain View, California! We’re at a key point in the history of mobile computing right now – we hope you’ll join us, and a group of the most innovative leaders in the mobile industry, to discuss it.

    Discuss


  • Twitter Hacker, TechCrunch Document Leaker, Arrested in France (UPDATED)

    The AFP is reporting that the person who leaked internal business documents from Twitter Inc. to the blog TechCrunch last July is also the same person who compromised the Twitter accounts of Barack Obama and other celebrities last year. A 25-year-old who went by the name “Hacker Croll” has been tracked down and arrested in France by French authorities, with the assistance of the FBI. It’s not clear from the report what charges are to be filed.

    Reportedly, the FBI alerted France to the man’s presence in that country almost a year ago, in the same month the internal documents were leaked. Update: Hours after the report of the man’s arrest, the AFP now says he has been released after questioning. Apparently the man explained that he merely guessed people’s passwords, and the police were unimpressed. “He’s not a genius,” a source explained.

    Sponsor

    The media report doesn’t make mention of the leaked documents, only the illicit takeover of Obama’s account. “Hacker Croll” was identified as the source of the controversial files, though. It seems possible that these two incidents are being improperly connected, but the report filed indicates they were carried out by the same person.

    We’ve reached out to both Twitter and TechCrunch for comment.

    When the documents were sent to TechCrunch, that blog deliberated publicly at length about whether it had a journalistic obligation to publish or suppress them. Founder Michael Arrington in the end decided to work with Twitter executives to identify the most sensitive documents but published other, less sensitive information days later. The resulting blog posts provided a very interesting look into the thinking of one of the most important companies on the Internet, but proved damaging to TechCrunch’s reputation with people in the industry who considered the decision to publish them an unacceptable betrayal.

    TechCrunch argued that it was within its legal rights to publish the information, and that the law-breaking had been done by the person who sent it the files. Now that person has apparently been identified.

    The ethical and perhaps legal implications of TechCrunch’s decision will no doubt be discussed again due to this turn of events.

    The one clear lesson from all this that no one can argue with, though: don’t mess with Twitter, or the FBI will hunt you down wherever you may be in the world.

    Discuss


  • That’s Not a Phone, It’s a Tiny Computer: Global Mobile Data Surpasses Voice

    The mobile phone’s days as primarily a phone were short-lived. Global mobile company Ericsson announced at the CTIA conference today that mobile data traffic surpassed voice traffic worldwide, at approximately 140,000 terabytes per month, at the end of last year.

    GigaOM’s Stacey Higginbotham writes, “Worryingly, that data traffic was generated by an estimated 400 million smartphones set against 4.6 billion mobile subscribers making voice calls. What happens when everyone has a smartphone?” This is an historic moment in terms of both technical capacity and the development of innovative features to serve mobile users.

    Sponsor

    The mobile industry is just coming to terms with this “tsunami of data” and the challenges it poses. Tricia Duryee wrote last year on MocoNews that two years ago none of the mobile companies would admit they faced a shortage of capacity, but that changed dramatically at the CTIA conference last year. In calling for more wireless spectrum, Qualcomm co-founder Irwin Mark Jacobs said last year, “In the lab, we’ve done everything we know how to do to optimize spectrum. We have to use different tricks now.”

    These tiny computers trying to use the spectrum that phones have traditionally used for voice are real game changers. As Duryee again reported last year, one smartphone equals 30 feature phones on a network, and one netbook or aircard equals 450 feature phones.
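    Those reported equivalences can be turned into a quick back-of-the-envelope load calculation. The sketch below is purely illustrative: the ratios come from Duryee’s reporting, but the device counts and the idea of a “feature-phone equivalent” unit are invented here for the example.

```python
# Back-of-the-envelope network load comparison using the ratios Duryee
# reported: 1 smartphone ~ 30 feature phones, 1 netbook/aircard ~ 450.
# The device counts below are hypothetical inputs for illustration only.

FEATURE_PHONE_EQUIVALENT = {
    "feature_phone": 1,
    "smartphone": 30,
    "netbook_or_aircard": 450,
}

def network_load(devices):
    """Total network load expressed in feature-phone equivalents."""
    return sum(FEATURE_PHONE_EQUIVALENT[kind] * count
               for kind, count in devices.items())

# A hypothetical cell serving mostly feature phones...
mostly_voice = network_load({"feature_phone": 1000, "smartphone": 10})

# ...versus roughly the same subscriber count skewing toward smartphones.
mostly_data = network_load({"feature_phone": 700, "smartphone": 300,
                            "netbook_or_aircard": 10})

print(mostly_voice)  # 1300
print(mostly_data)   # 14200
```

    Even with a similar number of subscribers, the smartphone-heavy cell carries an order of magnitude more load, which is the “tsunami of data” the carriers are describing.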

    It’s not just about capacity, either. As mobile search specialist Peggy Anne Salz wrote last summer, there are a whole lot of feature-development possibilities opening up because of this data:

    “The advance of Internet-specific smartphones and the spread of app store schemes turns up the pressure on mobile operators (and their content providers) to decipher data transactions (on and off the network), combine it with location and demographic data and use the results to create a 360-degree view of the individual.”

    Hopefully that will mean cool new features to serve users, not just mobile profiles to follow us around and target us with ads. So far smart phones have treated us pretty well though, haven’t they? They certainly aren’t just phones anymore.

    Don’t miss the ReadWriteWeb Mobile Summit on May 8th in Mountain View, California! We’re at a key point in the history of mobile computing right now – we hope you’ll join us, and a group of the most innovative leaders in the mobile industry, to discuss it.

    Discuss


  • Automated Sports Reporters Coming This Summer

    Make room on the bleachers: the robot reporter wants to sit down and watch the game. Sports statistics company StatSheet says it will have technology ready this summer to turn statistics for hundreds of small college basketball games into richly reported blow-by-blow coverage of how the contests unfold.

    People have been talking about robot reporters for years, but sports coverage is a logical, structured field for it to happen in and StatSheet says it will soon bring a product to market.

    Sponsor

    A veteran sports reporter can recall from memory all kinds of stories and history, but that’s no match for the bulk number crunching that a computer can perform to discover patterns and context over the history of a sports season or a player’s career. Engineer and StatSheet founder Robbie Allen says his company will soon launch technology that produces sports narrative that 90% of readers won’t be able to discern from human reporting of college basketball. Then he’ll expand into NFL, NBA, NHL and MLB games.

    StatSheet today offers embeddable statistics to sports media sites around the web. The company licenses bulk stats from a vendor and then analyzes that data to draw out higher-level insights. Allen says the next logical step is to build narrative prose around those insights. Once he’s got the tools built to narrate one game, there’s zero marginal cost to apply them to hundreds of college basketball teams around the country. Many of those teams are small enough that they don’t get much attention from human reporters, Allen contends.

    Human reporters know a team and a season, but Allen says they also “have their scripts written.” “They already think they know what to look at as the most interesting things that have happened,” he says. “I’m talking about codifying that knowledge, to build a wider corpus of interesting facts to draw from.”

    Scientists in Belgium have built software that automates live video coverage of basketball games. It balances tracking the ball with capturing the most movement of players on the court and alternates between wide angle and close-up shots. Might players someday “play to the robot camera”?

    Qualitative events like defensive plays are often not made explicit in sports stats but Allen says that’s the new frontier for stats companies and will become easier to incorporate in the future.

    Even if at any given moment a human can generally beat a machine writer, “in many ways this is going to surpass a lot of the sports media that is out there,” Allen says. “There are going to be times that any writer can outperform a computer, but when you look at the breadth it’s going to be hard to beat a computer.”

    Allen says he’s not trying to replace human sports writers, just to augment their coverage. Sports media organizations are currently limited by the number of people they can throw at a league and at statistical analysis. There’s no reason not to automate much of that work, he says.

    Traditionalists might doubt that the writing could possibly be as good, but a look at excerpts generated from a competing academic project called StatsMonkey makes robot reporters look pretty capable of the basics.

    An outstanding effort by Willie Argo carried the Illini to an 11-5 victory over the Nittany Lions on Saturday at Medlar Field.

    Argo blasted two home runs for Illinois. He went 3-4 in the game with five RBIs and two runs scored.

    Illini starter Will Strack struggled, allowing five runs in six innings, but the bullpen allowed only no runs and the offense banged out 17 hits to pick up the slack and secure the victory for the Illini.

    The Illini turned the game into a rout with four in the ninth inning.

    That’s not perfect, but it’s pretty good! It’s quite basic, too. It will be interesting to see how much more StatSheet can offer in its robot coverage. Allen says he’s having a lot of fun building out complex flow charts and tracking for statistical anomalies.
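    Recaps like the excerpt above can be sketched with a simple template-driven approach: pick the most notable facts out of a box score and render them through pre-written narrative frames. The sketch below is illustrative only; it is not StatSheet’s or StatsMonkey’s actual system, and the stat names and thresholds are invented for the example.

```python
# A minimal template-driven game recap generated from a box score.
# Hypothetical sketch -- not StatSheet's or StatsMonkey's real code.

def recap(game):
    winner, loser = game["winner"], game["loser"]
    lines = [f"{winner['name']} beat {loser['name']} "
             f"{winner['score']}-{loser['score']} on {game['day']}."]
    # Choose an angle from the most notable stat, the way a templated
    # system selects among pre-written narrative frames.
    star = max(winner["players"], key=lambda p: p["points"])
    if star["points"] >= 30:
        lines.append(f"{star['name']} led the way with {star['points']} points.")
    if winner["score"] - loser["score"] >= 20:
        lines.append("The game turned into a rout.")
    return " ".join(lines)

game = {
    "day": "Saturday",
    "winner": {"name": "Illinois", "score": 85,
               "players": [{"name": "W. Argo", "points": 31}]},
    "loser": {"name": "Penn State", "score": 60, "players": []},
}
print(recap(game))
```

    The hard part, of course, is not the templating but the “codifying that knowledge” step Allen describes: deciding which of thousands of possible facts is the story.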

    “It’s going to follow a standard disruptive curve,” StatSheet’s Allen told us. “Maybe version one will be a little rough, but there will be plenty of opportunity and incentive for improvement. It’s not like an algorithm is going to have writer’s block.”

    He says he plans to offer a variety of writing voices in which to interpret the facts of a game. Readers or publications could choose between “over the top” and “subtle” coverage, for example, depending on their tastes.

    There are all kinds of artistic and ethical questions to consider in this vision. Will Google punish these non-human content creators? Should it? If a robot reporter breaks a news scoop, is its engineer responsible for making sure it protects its sources? How will athletes feel when they graduate from a minor sports league covered by machine media into the big time and a human sports reporter’s beat? “They may be disappointed,” Allen quips, “because the coverage may not be as good.”

    Discuss


  • Mobile Media Gets Pushy: Push Notifications With a Media Payload

    Portland, Oregon mobile service provider Urban Airship announced today that it now offers push notifications as a service – with a multi-media payload. The white label technology, called AirMail, sends users of iPhone, BlackBerry and soon Android phones a push notification that when clicked launches not just an app, but specific content like images, videos or text inside that app.

    Developers who put the AirMail library into their apps will also receive full analytics showing how many recipients opened the messages, how long they spent viewing the content and more. AirMail is available only as a developer preview today but a preview video can be viewed below.
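    To make the idea concrete, a notification-with-media-payload might look something like the structure below. This is a hypothetical illustration of the concept only; the field names are invented here and are not Urban Airship’s actual AirMail API.

```python
# A hypothetical push-notification payload carrying a media reference.
# Field names are invented for illustration -- NOT the real AirMail API.

import json

payload = {
    "alert": "New video: behind the scenes",   # text shown in the notification
    "deep_link": "myapp://inbox/msg-42",       # where a tap lands inside the app
    "media": {
        "type": "video",
        "url": "https://example.com/clips/42.mp4",
    },
    "track_opens": True,                       # feeds the open-rate analytics
}

print(json.dumps(payload, indent=2))
```

    The key difference from a plain push notification is the `media` reference and the deep link: tapping the alert opens specific content inside the app, not just the app itself.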

    Sponsor

    The downside to using services like Urban Airship is always dependence on 3rd-party service providers. This newest feature is probably the startup’s most intimate integration yet from a technology perspective; the development required is non-trivial, but the resulting functionality is likely to be a real boon to publishers.

    If you’ve got multi-media in an iPhone app, this is a way for it to reach out and grab (with push) your users and keep them engaged.


    Discuss


  • Google’s China Move: What Does it Mean?

    Google has just announced that it will stop censoring search results in China and will instead redirect mainland Chinese web visitors to an uncensored version of the site based in Hong Kong. China may block access to the Hong Kong site as its next move.

    In announcing its decision to stop censoring search results, Google didn’t argue that censorship was intolerable, but said that the situation was getting worse, citing a wave of hacking attacks against companies including Google, evidence that the Gmail accounts of human rights activists had been compromised, and the government’s shutting off of access to YouTube, Blogger, Twitter and Facebook. What comes next?

    Sponsor

    Committee to Protect Journalists: “We welcome this stand against censorship and hope that all Internet companies operating in China take a similar principled position… In the long run, however, we hope that [Google] ramps up pressure on the Chinese government to allow its citizens to access the news and information they need to be informed and engaged citizens.”

    One future scenario that could unfold is that China could block access for its citizens to the Google Hong Kong site and Google could call on the US government for assistance. The US government may or may not be interested in intervening.

    Google might also determine that it did the best it could: that its hands are now clean, that Chinese citizens who wanted to could access the Hong Kong site by proxy and that there is probably limited interest in Google inside China anyway. (Though Google is believed to hold a 30% share of the search market in China, the company has said its revenues there are “immaterial.”)

    Finally, Google could make this move in anticipation of political pressure to do something about censorship around the world. Senate Democrats have said they will soon introduce proposed legislation that will “require Internet companies to take reasonable steps to protect human rights, or face civil and criminal liability.”

    Different companies are liable to have different reactions to such political pressure. Facebook told a Senate committee it is still too small to decide how to deal with international questions of censorship!

    Human Rights Watch: “This is a crucial moment for freedom of expression in China, and the onus is now on other major technology companies to take a firm stand against censorship.”

    Google may well have made this latest move in order to spare itself from an impending wave of political pressure later.

    Or maybe Google just did the morally right thing today. It’s hard to consider that the most likely explanation, though, when it wasn’t the censorship that was the last straw; it was the hacking attacks, the Gmail break-ins and the blocked access to a variety of sites.

    What do you think this means? What do you think will happen next? Stay tuned for what’s sure to be an interesting, unfolding story.

    See Also: ReadWriteWeb’s interview last week with Chinese digital activist Wei Wei.

    Discuss


  • @ Symbol Acquired by Museum of Modern Art

    The @ symbol has been added to the prestigious collection of New York’s Museum of Modern Art, the organization announced today. Unlike some other ephemera, the Museum didn’t pay a penny for the symbol, nor will it claim exclusive rights to its use.

    “It might be the only truly free–albeit not the only priceless–object in our collection,” Paola Antonelli, Senior Curator, Department of Architecture and Design, wrote this morning. Why add such an artifact to the collection? Antonelli’s explanation is quite artful itself. For social web users, though, the meaning of the symbol is more contentious than the Museum has acknowledged. Isn’t good art generally the subject of controversy?

    Sponsor

    Says MoMA:

    “The appropriation and reuse of a pre-existing, even ancient symbol–a symbol already available on the keyboard yet vastly underutilized, a ligature meant to resolve a functional issue (excessively long and convoluted programming language) brought on by a revolutionary technological innovation (the Internet)–is by all means an act of design of extraordinary elegance and economy.”

    Discussion of the @ symbol around the web today has focused on its widespread use in email addresses, but how do you use it most often each day? If you’re like us, your use of the symbol on Twitter and Facebook is becoming increasingly common.

    That might seem a minor matter, but in fact the changing use of the @ could be a symbol itself for a larger battle over identity on the internet. Twitter and especially Facebook are jostling to become the primary providers of our identities online. When we address each other as @myfriend – these days that’s an in-network message that assumes we all use the same identity host. That’s very different from [email protected].

    It’s about monopoly, competition, innovation, control, freedom and interoperability. @ may be a work of art, but it’s also an important point of contention for the future.

    See also: Email as Identity: Google Turns on Webfinger and Not Everyone Is Excited About Facebook Vanity URLs

    Discuss


  • Inventor of the Web Gets Backing to Build Web of Data

    Sir Tim Berners-Lee, inventor of the World Wide Web, and prominent researcher Nigel Shadbolt will lead a new British Institute for Web Science with $45 million in government backing. The announcement was not without its critics, but the Institute could have a worldwide impact.

    The two men collaborated in helping build the excellent data.gov.uk and will now expand upon that work. Prime Minister Gordon Brown said of the move: “We are determined to go further in breaking down the walled garden of Government…This Institute will help place the UK at the cutting edge of research on the Semantic Web and other emerging web and internet technologies.”

    Sponsor

    Understanding the Web of Data

    Berners-Lee said two years ago last month that all the pieces were in place to build the semantic web, a paradigm based on giving structured meaning to and clear links between otherwise unstructured content floating around the web. Many people believe that a web with semantic structure will be the same type of boon to innovation that common standards like HTML have been.

    Berners-Lee famously described his vision of the semantic web like this:

    I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
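    At the heart of that vision is a very simple data model: everything is expressed as subject-predicate-object statements, or “triples,” which machines can link and query. The sketch below illustrates the idea in plain Python; real semantic web systems use RDF serializations and query languages like SPARQL rather than literal tuples, and the example facts are just for illustration.

```python
# The semantic web's core data model is the triple:
# (subject, predicate, object). A toy illustration -- real systems
# use RDF and SPARQL, not Python tuples.

triples = {
    ("ReadWriteWeb", "type", "Blog"),
    ("ReadWriteWeb", "foundedBy", "Richard MacManus"),
    ("Richard MacManus", "type", "Person"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern; None is a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "What do we know about ReadWriteWeb?"
print(query(s="ReadWriteWeb"))
```

    Because every statement shares this shape, facts published by different sites can be merged and queried together, which is exactly the kind of machine-to-machine handling Berners-Lee describes.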

    Today’s announcement came alongside government calls to build super-fast broadband to every home in the UK. Prime Minister Brown claimed that such a development could foster economic development and create as many as 250,000 new jobs.

    As writer Tom Foremski pointed out this morning about the Web Science Institute, however, “internet technologies have resulted in fewer jobs created than have been lost — which is the way of all disruptive technologies.”

    Making the Vision Real is Hard, Too

    Super-fast broadband to every home is a much easier thing to sell to the public than a future-web of structured, machine-readable data. After years of expectations, the semantic web remains in search of its clearly comprehensible killer app. Earlier this month, semantic social bookmarking service Twine quietly slid into obscurity and was bought up by news recommendation service Evri. Twine was said to be possibly “the first mainstream semantic web app” two years ago. Founder Nova Spivack raised half as much money for Twine’s parent company, Radar Networks ($24m), as Berners-Lee’s entire new institute is receiving. Twine faltered under poor usability and the leadership of Spivack, considered to be both one of the smartest people in the internet industry and a caustic egotist.

    Thus is the dilemma for this supposed next stage of the web. Andrew Orlowski tears into today’s announcement in The Register, calling it “a confluence of two groups of people with a shared interest in bureaucracy.” Orlowski says the Web of Data is a fraud as well: “Of course, most web data is personal communication that happens to have been recorded. Most of the rest is spam, generated by robots, or cut-and-paste material ‘curated’ by the unemployed or poor graduates – another form of spam, really.”

    That’s a funny critique but the truth is probably somewhere in between the superlatives and the condemnation. Critics like Orlowski have already grown jaded about the world-changing impact of the last iteration of the web (easy social publishing) and underestimate the platform potential of this next iteration.

    “I’ve always been sceptical of the need for a ‘new discipline’ [Web Science],” says leading semantic web consultant Paul Miller.

    “A significant tranche of funding such as that announced by the Prime Minister this morning will be helpful in tackling some of the issues (both hard and soft) that still remain as we push more and more data out into the public sphere. I hope and expect that Nigel, Tim and others will devote at least as much attention to issues of trust, provenance, licensing etc as to the details of angle brackets, triples and ontologies.”

    Miller did a podcast interview with Shadbolt for the UK’s largest semantic web company Talis, here.

    For an in-depth explanation of Berners-Lee’s perspective, see the two-part interview ReadWriteWeb founding Editor Richard MacManus did with him last year. MacManus has said that Berners-Lee’s vision of a read/write, two-way web, was a key inspiration behind the founding of the publication you’re reading now.

    We wish the Institute for Web Science the best of luck in delivering on its rich promise.

    Discuss


  • Want to Read Good Journalism? Try NewsTrust’s New Personalized Filtering Tool

    Fair, thorough, enterprising and in context – that’s what we’re looking for in the journalism we read, isn’t it? At a time when shallow ranting takes up so much space in public discourse, a new media evaluation technology offers hope, inspiration and is a lot of fun to use.

    NewsTrust is a media technology organization funded by the Omidyar Network and the MacArthur Foundation. Yesterday it launched a personalized news filtering tool called MyNews. The tool helps users review the quality of journalism from all over the web and discover high-quality content they and their friends might enjoy. A light-weight, crowd-sourced, personalized recommendation engine that adds value on top of existing content? Sounds like our kind of app!

    Sponsor

    When reading content from around the web through NewsTrust, the user is presented with a well-designed interface through which to review the quality of journalism in question. Users are prompted to evaluate stories based on things like how well they were sourced, whether both sides of a controversy were explained and how enterprising the story was. Short and long reviews are supported and it’s easy to review a story in less than 30 seconds if you feel so inclined.

    The ability to post links to Twitter and Facebook with a single click means that users who already share articles around social networks have an opportunity to pause briefly and add another layer of value by using NewsTrust.

    The new MyNews product released yesterday leverages that network of reviewers to draw in a stream of high-quality links from around the web, on particular topics. In addition to NewsTrust reviewers, the service also delivers stories discovered and vetted algorithmically and it pulls links shared by your friends on Facebook and Twitter into the NewsTrust ecosystem. It’s one thing to get a vote of apparent approval from friends sharing links on social networks, it’s another to peruse those links through a lens of community grading for journalistic quality.
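    A system blending those signals could look something like the sketch below: community quality ratings weighted against a social-share signal. This is purely a hypothetical illustration of the concept; NewsTrust has not published its algorithm, and the weights and formula here are invented for the example.

```python
# Hypothetical blend of the two signals MyNews is described as using:
# community review scores plus friend shares. Weights and formula are
# invented for illustration -- not NewsTrust's actual algorithm.

def story_score(review_ratings, friend_shares, w_reviews=0.7, w_social=0.3):
    """Combine 1-5 community quality ratings with a social-share signal."""
    if review_ratings:
        quality = sum(review_ratings) / len(review_ratings) / 5.0  # normalize to 0-1
    else:
        quality = 0.0
    social = min(friend_shares / 10.0, 1.0)  # cap the share signal at 10 shares
    return w_reviews * quality + w_social * social

# A well-reviewed story with a few friend shares outranks a widely
# shared but poorly reviewed one.
print(story_score([5, 4, 5], friend_shares=3))
print(story_score([2, 2], friend_shares=10))
```

    Weighting reviews above raw shares is the point of the product: popularity among friends gets a story noticed, but community grading for journalistic quality decides its rank.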

    The end result is a personalized news reader populated with generally high-quality topical stories that have been reviewed by other readers. It’s a useful product and one that would work well as a mobile app, where browsing through lots of content of variable quality is less appealing.

    NewsTrust and MyNews aren’t for everyone, though. Only so many people will be interested in a news consumption interface so closely wedded to review activities. Many people will, no doubt, bristle at the prospect (or reality) of amateurs reviewing the quality of professional journalistic product. Some will find the site too left-leaning for their tastes. (Though it tries hard not to be.)

    Many people will enjoy MyNews, though, and we suspect everyone who follows social software in general will find this project particularly interesting. Projects like this may or may not be able to change the way news producers operate, but the news consumers who use it will likely find MyNews a helpful way to enrich their time on an otherwise all-too-often low-quality web of news content.

    Discuss


  • How US Government Spies Use Facebook (Updated)

    The US Department of Justice this week released slides from a presentation deck titled Obtaining and Using Evidence from Social Networking Sites. The document was released in response to a Freedom of Information Act request by the Electronic Frontier Foundation (EFF).

    The DoJ presentation describes Facebook as much more co-operative with law enforcement requests for user information than Twitter and MySpace are. Update: Facebook’s Barry Schnitt contests this interpretation of the document, says the company is resistant to illegitimate government requests for user information and offers one example of that resistance in a comment posted below. The document also explains to officers what the advantages of going undercover on social networking sites are. The EFF posted IRS training documents for using various internet tools as well, including Google Street View, but those were much tamer than the Justice file.

    Sponsor

    Highlights from the deck include:

    • On “getting info from Facebook” – options include photos, contact info, group contact info and IP logs. “HOWEVER, Facebook has other data available.” The deck notes that Facebook is “often cooperative with emergency requests.”
    • MySpace and Twitter, on the other hand, are described differently. MySpace “requires a search warrant for private messages/bulletins less than 181 days old.” Twitter “will not preserve data without legal process,” has a “stated policy of producing data only in response to legal process” and has no Law Enforcement Guide (or spying manual, as some parties call such documents). Wouldn’t you like your social network to say no before it says yes and require a warrant before handing over information to law enforcement? We reached out to Facebook this evening about the government claim that it was unusually co-operative but have not yet received a response. Update: Facebook has responded in comments below and says that the company is in fact resistant to any requests for user information that it does not believe are an emergency and even then hands over a minimal amount of user data.
    • Funny: As social networks go, LinkedIn’s “use for criminal communications appears limited” the document says. You don’t say. LinkedIn can be useful in finding expert witnesses, however.
    • “Why go undercover on Facebook, MySpace, etc?” the document asks. Three reasons are offered: 1. Communicate with suspects/targets. 2. Gain access to non-public info. 3. Map social relationships/networks.
    • “If agents violate terms of service,” the document asks, “is that ‘otherwise illegal activity’?” No answer is offered in the text.
    • “Many witnesses have social-networking pages,” the presentation notes. Those pages can be a “valuable source of info on defense witnesses” and “potential pitfalls for government witnesses.”
    • Also funny: DoJ prosecutors are urged to “use caution in ‘friending’ judges, defense counsel.”

    We expect the Electronic Frontier Foundation to offer further analysis in coming days. You can download a PDF of the document yourself here. For further discussion of these documents, see blog posts clustered on Techmeme.

    Discuss


  • Oops: Google Denied Trademark on Android Nexus One

    It’s been a rough day for Google’s Android phone, the Nexus One. First we learned this morning that initial sales have been far weaker than what the iPhone saw when it first came out of the gate. Now it’s being reported that the U.S. Patent and Trademark Office has rejected Google’s application for a trademark on the name Nexus One.

    The name “Nexus One” was ruled too close to Portland, Oregon-based Integra Telecom’s own registered trademark for its Nexus fixed-bandwidth integrated voice and internet T1 product.

    Sponsor

    Mike Rogoway, of Portland’s The Oregonian newspaper, got the following statement from Integra:

    “We appreciate that the PTO is protecting our trademark rights. Integra has over $60 Million in annual revenue associated with our Nexus brand and it represents millions of new revenue for the company each year. Google hasn’t contacted us since the PTO issued its objection but we hope we can work together to achieve our respective business goals.”

    Does that mean Google will rename the Nexus One, or that it will end up paying the trademark holder for the privilege of using the name? Google just expanded the Nexus One onto the AT&T network today.

    Either way, we wouldn’t be surprised if the hunt for a new name is already on. What would you suggest, readers?

    It’s tempting to say this is another example of the Patent and Trademark Office moving too slowly, but note that Integra was granted its trademark in December 2008. The Nexus One was only released on January 5, 2010.

    Meanwhile, the open Android operating system marches on. XML co-creator Tim Bray announced this weekend that he has joined Google to work on Android. He called the iPhone in a blog post “a sterile Disney-fied walled garden surrounded by sharp-toothed lawyers. The people who create the apps serve at the landlord’s pleasure and fear his anger.”



  • Internet of Things Explained (Video)

    IBM’s Smarter Planet team has created a great five-minute video explaining the emerging Internet of Things, an exciting topic ReadWriteWeb has covered and will continue to cover frequently and in depth. The Internet of Things refers, as the video explains, to the coming future when there are more “things” on the Internet (sensors especially) than there are people.

    The result of that will be “a kind of global data field” the video says. “If we can actually begin to see the patterns in the data, then we have a much better chance of getting our arms around this. That’s where societies become more efficient, that’s where more innovation is sparked.” Check out this artistic, succinct, optimistic and inspiring video explaining what could well become a big factor in how the future unfolds.

    This is heavy stuff, clearly aimed at fostering positive and substantial cultural change through technology, by opening up a new plane of options for humanity. Of course there’s little critique of this movement in videos like this; that’s something we’re still exploring, but we imagine surveillance is one downside. There’s also some risk of paying so much attention to our machines that we lose track of the joy of engaging directly with the world around us.

    The upside as described in the video is big, though.

    “When we talk about a smarter planet, you can say that it has two dimensions. One is to be more efficient, be less destructive, to connect different aspects of life which do affect each other in more conscious and deliberate and intelligent ways. But the other is also to generate fundamentally new insights, new activity, new forms of social relations. So you could look at the planet as an information creation and transmission system, and the universe was hearing its information but we weren’t. But increasingly now we can, early days, baby steps days, but we can actually begin to hear the planet talking to us.”

    To track this trend across multiple vendors, check out ReadWriteWeb’s Internet of Things archive.

    Photo by Svilen Milev.
