
Category: News
-
T-Mobile CEO rips ‘greedy hedge funds’ allegedly trying to block MetroPCS merger
T-Mobile CEO John Legere strikes a rather populist tone for a businessperson, and now he’s going after the “greedy hedge funds” that are allegedly trying to block his company’s merger with MetroPCS (PCS). Per Bloomberg, Legere this week expressed confidence that MetroPCS shareholders would vote in favor of merging with T-Mobile “despite the greedy hedge funds that are trying to take a double-dip out of that process.” Legere went on to explain that big hedge funds that own large stakes in companies typically make a lot of noise during acquisitions because they want “to get more money” through empty sabre rattling. Legere also made headlines this week when he described rival carrier AT&T’s (T) mobile plans as “the biggest crock of s—” he’s ever seen.
-
Does Your Internet Seem Slower Today? It Might Be Due To A Massive Cyberattack
These days, the most popular form of cyberattack is the Distributed Denial of Service (DDoS) attack. DDoS attacks rarely affect anyone outside of those attempting to access the attacked Web site, but a recent one is proving to have widespread effects.
The BBC reports that Spamhaus, an anti-spam outfit, and CyberBunker, a Web host that will host anything including spam sites, got into a spat recently after Spamhaus blocked a few of CyberBunker’s servers. In retaliation, CyberBunker reportedly launched a massive DDoS attack against Spamhaus.
So, how does this affect the Internet at large? Spamhaus’ DNS servers are under attack, and these servers are what help convert domain names into IP addresses. Spamhaus hosts servers all around the world, so these attacks are slowing down the Internet for everyone.
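For readers unfamiliar with DNS, the lookup at issue here is the same one any networked program performs before it can connect anywhere. A minimal sketch in Python, using only the standard library (the system resolver performs the actual DNS query for non-local names):

```python
import socket

def resolve(hostname):
    """Ask the system's resolver, which queries DNS for non-local
    names, to translate a domain name into an IPv4 address."""
    return socket.gethostbyname(hostname)

# "localhost" resolves locally, without any network round trip
print(resolve("localhost"))
```

Knock out the servers that answer those queries and the name-to-address step fails, which is why an attack on DNS infrastructure is felt so widely.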
What’s terrifying about all of this is that CyberBunker is launching attacks that peak at 300 gigabits per second. To put that into perspective, Spamhaus CEO Steve Linford says that a 50-gigabit attack is enough to bring down a major bank. How is Spamhaus still online, then? The distributed nature of the company’s servers ensures that it can stay up amidst the attacks, and companies that rely on Spamhaus’ services, like Google, are reportedly offering up servers to absorb much of the traffic.
The attacks have been going on for over a week now and show no sign of slowing down. It’s already being called the biggest cyberattack in history. It’s gotten so bad that five national cyber police forces are launching investigations into the incident. There’s no telling when the attacks will die down, either. CyberBunker is reportedly coordinating the attacks, but the actual traffic is said to be coming from criminal outfits in Eastern Europe and Russia.
We’ll continue following this story, and let you know of any developments. It will be interesting to see what will happen if things escalate. Maybe Danny Hillis wasn’t too far off the mark when he argued that the Internet needed a Plan B just in case something like this happened.
-
Sony Xperia A ‘Dogo’ and Xperia UL ‘Gaga’ heading to Japan this summer
Can someone please tell me when Sony will run out of letters? We can now throw the Xperia A and Xperia UL into the mix. Both are headed to Japan this summer. The Xperia A is headed to NTT DoCoMo and is codenamed ‘Dogo’ with a model number of SO-04E. It will sport a 4.6-inch HD display (not sure if it will be 720p or 1080p), a quad-core Qualcomm Snapdragon 600 CPU, 2GB of RAM, 32GB of storage, and a 2,300mAh battery. It’s also said to be water/dust resistant and will come with mobile wallet, One Seg, and an IR blaster.
The Xperia UL is headed to KDDI and is codenamed ‘Gaga’ with a model number of SOL22. It will have a design similar to the Xperia ZL, and it will sport a 5-inch 1080p display, a quad-core Qualcomm Snapdragon 600 CPU, 2GB of RAM, 32GB of storage, and a removable 2,300mAh battery. Just like the Xperia A, it will be water/dust resistant and come with mobile wallet, One Seg, and an IR blaster.
There is also a rumor of another handset that could wind up on NTT DoCoMo as well, but with a whopping 6.4-inch 1080p display. Its codename is ‘Togari.’ Anyone want to take a guess at what letter scheme Sony will go with for this one?
source: xperiablog
-
School for Life: From out-of-school kid to university student
Abubakari Sulemana Hafiz tells me about his journey from being an out-of-school child to now studying at the University of Ghana
Abubakari Sulemana Hafiz has a lot to be proud of. He is one of 11 undergraduates who are part of Ghana’s first cohort of veterinary science students at the University of Ghana. This achievement is even more impressive given that, until he was 14 years old, he had never been to school. Abubakari comes from Kumbungu district in Ghana’s northern region, where many families simply cannot afford the opportunity costs of sending their children to school. While Ghana has been experiencing high growth rates, the north has been left behind – over the past decade the number of poor declined by 2.5 million in the south while it grew by 0.9 million in the north. Abubakari explains that his story is common.
“Children are kept out of school so they can work to help support their families – selling goods at the market, doing manual labour on the farms, or rearing cattle.”
Until School for Life came to Abubakari’s village in 1999, he was living with his cousin and looking after their farm. School for Life is an NGO that provides an accelerated learning programme for out-of-school children, known as complementary basic education. Students attend three-hour classes, 5 days a week for 9 months, taught in their mother tongue. Founded in 1995, School for Life has been very successful at getting out-of-school children into school. Part of its success centres on strong community involvement. School for Life offers communities the chance to run the 9-month programme by appointing community members to form a school management committee.
“In every community where we start School for Life, we hold a large discussion forum where we explain what we do. If the community wants to start a programme, they go on to elect a committee of 3 women and 2 men who oversee the programme,” the deputy manager, Mr Braimah, explains.
School for Life deliberately seeks to get women to take a leading role on the committee, and ensures that over 50% of the out-of-school children who enrol are girls. This is because more than half of girls in these communities are not in school (see my earlier blog post on our partnership with Camfed).
The committee is then tasked with ensuring that the children who signed up attend the classes, that parents are kept informed of their children’s progress, and that the community gains a stronger understanding of the value of education. The strong linkage with the community does not stop there. The teachers, known as facilitators, are also recruited from the same communities in which they teach. The community appoints a facilitator, who is trained by School for Life to teach the class and who commits to personally ensuring that all the enrolled children attend.
The founder of School for Life, Mr Saaka, explains that the success of the programme very much stems from the fact that communities trust the people running the programme.
“Having a facilitator from the community promotes high retention rates, excellent attendance, and quality engagement with parents and the community. The facilitator will go around to an individual family’s house if their child is not coming to the classes. There is also flexible school timetabling that allows children to support their family’s need to earn an income during the morning.”
School for Life has really worked to reduce the number of out-of-school children, currently around 500,000. From 1998 to 2007, it educated more than 85,000 children aged between 8 and 14. More than 90% of these students graduated from the programme, and around 70% were integrated into the formal school system. In 2006, a 2-3% increase in the national enrolment rate was attributed to the School for Life programme.
Abubakari commends the close partnership between School for Life and the Ghana Education Service, which enables graduates to transition to regular school on completing the programme. Usually graduates enter class 4 of primary school.
“Straight after, I went to Kumbungu primary school. My teacher knew that I was a School for Life graduate and really supported me to improve my English skills, and recognised that I enjoyed Maths. She was the one who pushed me to enter the northern region’s Science and Maths competition, where I came first. That really gave me the confidence to continue my education, go on to senior high school, and become the first person in my family to go to university.”
A community facilitator from School for Life teaches the out-of-school children in his community how to read and write
DFID Ghana first started supporting School for Life in 2008. Since then, we have supported 38,000 out-of-school children to go through this second-chance learning programme. Based on the strong results, DFID Ghana is now supporting a scale-up of complementary basic education to reach 120,000 out-of-school children by replicating the School for Life model and partnering with other NGO-providers.
In addition to providing access, the programme will support the Government of Ghana to move the out-of-school agenda forward. The Chief Director (most senior civil servant in the Ministry of Education), Enoch Kobbinah, will chair the taskforce on complementary basic education. DFID will help strengthen the government’s ability to procure complementary basic education programmes through non-state providers and provide advice on the draft policy. We have also provided support to refine the existing package of government-approved teaching and learning materials. The well-known Indian NGO Pratham has been sharing ideas and collaborating with School for Life to improve the teaching and learning materials. There will also be a large research strand to the programme: a longitudinal study will be carried out to track the out-of-school children supported over a ten-year period. The study will fill the evidence gap on out-of-school children, assessing how much they learn and the longer-term value for money of complementary basic education programmes.
For Abubakari there is no doubt that School for Life’s complementary basic education programme has changed his life.
“It is hard to believe that I am now living in the capital, and studying veterinary science. I really hope to be able to help increase Ghana’s agricultural productivity, so we can grow more and do that more efficiently, and rear healthier animals. This will also help poor families, like mine, to be able to send their children to school.”
-
Gay Marriage Debate Forces Twitter Users to Draw Lines in the Sand
It appears that gay rights and marriage equality are issues that compel people to speak their minds, no matter what the cost. It’s just one of those topics that people know will cause fervent debate, but it’s simply too important to stay silent.
After a quick glance down your Twitter stream or your Facebook news feed, this is more than obvious.
But what you’re about to see is a simple graph that shows exactly how much of a disruption in the normal Twitter flow has been caused by the reinvigorated same-sex marriage debate (thanks to the Supreme Court’s interest in the topic).
First spotted by SFGate’s Tech Chronicles blog, it looks like the use of the term “unfollow” has seen a surge in the days leading up to the opening arguments in the two same-sex marriage Supreme Court cases.
Here’s the past week’s worth of mentions on Twitter (provided by Topsy). The yellow is mentions of “unfollow.” You can see the spike occurred near similar spikes for terms like “gay marriage” (blue) and “same-sex” (red).

It doesn’t take a genius to infer why these terms saw a similar surge. For the most part, you see people drawing a line in the sand, saying “hey, I support/oppose gay marriage, and if you don’t like it you can unfollow me.”
For example:
Im sorry but if you’re not for Gay Rights, unfollow me now.
— Corinaa♔ (@corona717) March 27, 2013
If you’re against Gay Marriage, go ahead and unfollow me please, thank you.
— Sofia Herrera (@SoofHerrera) March 27, 2013
No, im against gay marriage. Unfollow me if you want, but the bible is not for gay marriage. #Sorrynotsorry
— Brunette Love (@brunettelove123) March 27, 2013
If you’re against marriage equality, please unfollow me. You can hate LOST all you want, but you can’t hate love.
— Damon Lindelof (@DamonLindelof) March 26, 2013
Just FYI: if you scan Twitter, support for gay marriage vastly outweighs the opposition.
Then you have the people who have already made the decision to unfollow someone based on one of their tweets:
I’m disgusted that any person I follow would tweet gay marriage is wrong. #Unfollow #GayIsOkay
— Juliet! ♥ (@Juliet_Dean98) March 27, 2013
Also, people that are warning others that they will be unfollowed if they tweet a certain way:
Pause, I will unfollow every person who has a problem with gay marriage or rights. Let me know how you feel 1x to make this simple.
— Kait (@darlingkait) March 27, 2013
And judging by the sheer volume of gay marriage-related tweets I’ve seen in the past couple of days, I can assure you that there is probably a whole lot of unfollowing going on.
[Image via erin_wagner, Instagram]
-
6 Videos Highlight Google’s ‘Full Value of Mobile’
This week, Google announced a new initiative to help marketers better understand the impact of mobile on their businesses, both online and off. The campaign, called “Full Value of Mobile,” includes a Calculator tool.
Google has now put out a handful of videos related to the initiative, which we’ve compiled below.
This one is aimed at showing you how an in-store purchase can originate with a search on a smartphone:
The next one shows how replacing a shredded suit was made possible through a purchase on a mobile website.
This one shows how a spontaneous party was made possible with purchases made via phone calls:
This one shows the purchase of a new dog collar on a laptop, based on research done on a smartphone after “being inspired while out and about.”
This one shows how a pizza order from a smartphone app led to a romance:
The last one includes all of the above stories in one.
“Customers’ constant connectivity through mobile has created new paths to purchase that start on their smartphones,” Google says of the video. “They can call a business, download apps, look for store directions, buy on a mobile website or continue on a different website. As a marketer, it is key that you account for all these new types of conversion and understand the return on investment you’re getting from your mobile efforts. This video, ‘How they got there’, is a short story told backwards that will show you 5 ways in which mobile can drive value for your business.”
-
Apple ensnared in Chinese patent fight over Siri
A Chinese company that makes an automated online chat technology is suing Apple in China, charging that Siri infringes on patents it holds, according to a report Wednesday in the Shanghai Daily.
Shanghai Zhi Zhen makes a product called Xiaoi, which the company calls a “chat robot system” used for customer service and hotlines. While Apple owns a patent on Siri, its voice-activated personal assistant app, the Chinese company claims its patent was applied for in 2004 and was granted in 2006. Siri appeared first on the iPhone in fall 2011.
Siri was developed with a technology Apple acquired when it purchased the company behind it in 2010. The speech recognition engine is believed to have been built using technology licensed from Nuance Communications.
Shanghai Zhi Zhen’s problem is with the robot-interaction aspect of Siri, not its speech recognition, according to what its spokeswoman told the Shanghai Daily:
“The core technology of Siri is man-machine interaction rather than speech recognition, and that is based on the word chat robot system Xiaoi patented,” Mei [Li] said.
Though the original suit was filed last year, the first hearing is set to take place Wednesday.
Last year Apple was forced to pay $60 million to a local company after a Chinese court ruled against Apple in a trademark dispute over the iPad. The company that won that damages award was bankrupt and looking for cash. This company, Shanghai Zhi Zhen, has not asked for any damages yet; instead, it is asking for its patents to be enforced.
Apple, for its part, has reportedly asked the country’s intellectual property agency to invalidate Shanghai Zhi Zhen’s patent.

-
Designing For Dependability In The Cloud
David Bills is Microsoft’s chief reliability strategist and is responsible for the broad evangelism of the company’s online service reliability programs.
This article kicks off a three-part series on designing for dependability. Today I will provide context for the series and outline the challenges facing all cloud service providers as they strive to provide highly available services. In the second article of the series, David Gauthier, director of data center architecture at Microsoft, will discuss the journey that Microsoft is on in our own data centers, and how software resiliency has become more and more critical in the move to cloud-scale data centers. Finally, in the last piece, I will discuss the cultural shift and evolving engineering principles that Microsoft is pursuing to help improve the dependability of the services we offer.
Matching the Reliability to the Demand
As the adoption of cloud computing continues to grow, expectations for utility-grade service availability remain high. Consumers demand access 24 hours a day, seven days a week to their digital lives, and outages can have a significant negative impact on a company’s financial health or brand equity. But the complex nature of cloud computing means that cloud service providers, regardless of whether they sell offerings for infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS), need to be mindful that things will go wrong — because it’s not a case of “if things will go wrong,” it’s strictly a matter of “when.” This means, as cloud service providers, we need to design our services to maximize the reliability of the service and minimize the impact to customers when things do go wrong. Providers need to move beyond the traditional premise of relying on complex physical infrastructure to build redundancy into their cloud services to instead utilize a combination of less complex physical infrastructure and more intelligent software that builds resiliency into their cloud services and delivers high availability to customers.
The reliability-related challenges that we face today are not dramatically different from those that we’ve faced in years past, such as unexpected hardware failures, power outages, software bugs, failed deployments, people making mistakes, and so on. Indeed, outages continue to occur across the board, reflecting not only on the company involved, but also on the industry as a whole.
In effect, the industry is dealing with fragile (sometimes referred to as brittle) software. Software continues to be designed, built, and operated based on what we believe is a fundamentally flawed assumption: that failure can be avoided by rigorously applying well-known architectural principles as the system is being designed, testing the system extensively while it is being built, and relying on layers of redundant infrastructure and replicated copies of the data for the system. Mounting evidence further invalidates this assumption: articles regularly appear describing failures of heavily relied-on online services, and service providers routinely supply explanations of what went wrong, why it went wrong, and the steps taken to avoid repeat occurrences. The media continues to report failures, despite the tremendous investment that cloud service providers continue to make as they apply the practices I’ve noted above.
Resiliency and Reliability
If we assume that all cloud service providers are striving to deliver a reliable experience for their customers, then we need to step back and look at what really comprises a reliable cloud service. It’s essentially a service that functions as the designer intended it to, functions when it’s expected to, and works from wherever the customer is connecting. That’s not to say every component making up the service needs to operate flawlessly 100 percent of the time, though. This last point brings us to the difference between reliability and resiliency.
Reliability is the outcome that cloud service providers strive for. Resiliency is the ability of a cloud-based service to withstand certain types of failure and yet remain fully functional from the customers’ perspective. A service could be characterized as reliable simply because no part of the service (for example, the infrastructure or the software that supports it) has ever failed, and yet not be regarded as resilient, because that characterization completely ignores the notion of a “Black Swan” event – something rare and unpredictable that significantly affects the functionality or availability of one or more of the company’s online services. A resilient service assumes that failures will happen, and for that reason it has been designed and built in such a way as to detect failures when they occur, isolate them, and then recover from them in a way that minimizes impact on customers. To put the relationship between these terms differently, a resilient service will — over time — come to be viewed as reliable because of how it copes with known failure points and failure modes.
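One common way application code puts this detect-isolate-recover idea into practice is the circuit-breaker pattern. The sketch below is a minimal, illustrative Python version; the class, thresholds, and names are invented for this example and do not describe Microsoft's actual implementation. After a run of consecutive failures the breaker "opens" and fails fast, isolating the broken dependency; after a cooldown it lets one probe call through to detect recovery.

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: after max_failures consecutive
    errors the breaker 'opens' and calls fail fast; after reset_after
    seconds one probe call is allowed through to check for recovery."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # isolate the dependency
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The design choice is the point: instead of hammering a broken dependency with retries, callers fail fast while the cooldown gives the dependency time to recover, which is exactly the detect, isolate, and recover behavior described above.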
Changing Our Approach
As an industry, we have traditionally relied heavily on hardware redundancy and data replication to improve the resiliency of cloud-based services. While cloud service providers have experienced successes applying these design principles, and hardware manufacturers have contributed significant advancements in these areas as well, we cannot become overly reliant on these solutions as paving the path to a reliable cloud-based service.
It takes more than just hardware-level redundancy and multiple copies of data sets to deliver reliable cloud-based services — we need to factor resiliency in at all levels and across all components of the service.
That’s why we’re changing the way we build and deploy services that are intended to operate at cloud-scale at Microsoft. We’re moving toward less complex physical infrastructure and more intelligent software to build resiliency into cloud-based services and deliver highly-available experiences to our customers. We are focused on creating an operating environment that is more resilient and enables individuals and organizations to better protect information.
In the next article of this series, David Gauthier, director of data center architecture at Microsoft, discusses the journey that Microsoft is making with our own data centers. This shift underscores how important software-based resiliency has become in the move to cloud-scale data centers.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
-
System Mechanic 11.7 ekes more performance from your PC
Iolo Technologies has released System Mechanic 11.7 Free and System Mechanic 11.7, a minor update to its popular Windows system optimization tool that delivers refinements to existing technologies in order to eke more performance out of PCs. Version 11.7 comes with three major new features, aimed at the paid-for versions of the software: streamlined startup speeds, more machine-oriented optimisation and Direct Expert Connection.
System Mechanic 11.7’s promise of faster start-up times is delivered via enhancements to System Mechanic’s boot-optimization technology, which iolo says will make Windows ready for use much more quickly than with previous builds of the program. This builds on previous enhancements, including one that gives the user complete control over what boots when — for example, creating “black out” times during which no boot-time operations are performed.
System Mechanic already makes use of special Tune-up Definitions, which pass iolo’s research findings on performance-related issues to the program and in turn deliver improved performance on the user’s PC. Ongoing testing has allowed iolo to inject more personalized recommendations into its definitions, which means performance tuning can now be tailored to the individual setup and profile of the user’s PC.
The final new feature is the Direct Expert Connection, which sees iolo’s collective powers being delivered direct to the desktop, allowing computers to benefit even more quickly than before from the latest cutting-edge performance data.
The enhanced features build on other recent improvements — version 11.5 extended Windows 8 support, introduced cloud-based Guided Recommendations based on advice from other System Mechanic users, and dropped per-PC licensing restrictions, for example.
System Mechanic 11.7 Free is available as a free, reduced-functionality download, while both System Mechanic 11.7 and System Mechanic Professional 11.7 are available as free trial downloads for PCs running Windows XP or later. The 11.7 update is free to all registered users of System Mechanic 11.
You can purchase both for a significant discount through the Downloadcrew Software Store: System Mechanic 11.7 costs $24.95, a saving of 50% on the MSRP, while System Mechanic Professional 11.7 can be bought for just $39.95, saving you 43 percent.
Photo Credit: studio online/Shutterstock
-
Twitter ad revenues higher than expected on strong mobile numbers — report
A research firm predicts that Twitter will earn nearly $1 billion in advertising revenue in 2014, largely on the strength of mobile ad sales. The figure exceeds the firm’s earlier predictions and reflects Twitter’s ongoing emergence as a force in mobile marketing.
According to eMarketer, Twitter will pull in $528 million this year and $950 million in 2014 with 53 percent of that money coming from mobile ads. The firm had earlier pegged the 2014 number at $800 million.
There are two takeaways from these figures. One is the obvious observation that Twitter is killing it, fulfilling predictions that it will become a media and advertising behemoth. The other is that Twitter is one of the few companies to crack the mobile morass — the problem, faced by many websites, in which users are moving to mobile devices but ad dollars are not.
Twitter is sitting pretty here because its mobile experience is highly conducive to so-called “native ad units” (sponsored or promoted tweets) that drop easily into its regular story flow. Twitter’s surging mobile numbers could also bode well for Facebook which is ramping up its own advertising efforts, and will likely expand options for marketers to drop “sponsored stories” (another type of native ad unit) into its mobile News Feed.
The other figure that jumps out from the eMarketer report is how much of Twitter’s ad money still comes from the U.S. The firm says that the figure was 90% in 2012 and predicts it will be 83% in 2013.
Twitter continues to hire high-profile figures to drive its advertising ambitions; in February, it announced the hiring of Jeffrey Graham, a Googler and former New York Times executive.


-
MakerBot To Enable Gamers To 3D-Print Their Own OUYA Android Console Cases

MakerBot and OUYA announced a partnership today that will allow gamers to print their own OUYA game console cases at home. The partnership will see OUYA create 3D design files for Thingiverse.com, MakerBot’s 3D printing design repository, which are designed to be used with the MakerBot Replicator 2 Desktop 3D printer.
The OUYA Game Console Enclosure design created by MakerBot allows OUYA console owners to print their own case, which includes a lid and a spring-loaded button for housing the hardware. They can also be printed on the MakerBot Replicator 2X Experimental 3D printer for those who want to use ABS instead of PLA to print their designs.
It’s a move that brings an advanced level of customization to the OUYA, which is already based on an open-source development kit that, while limiting developers in some ways, allows for a wide range of flexibility. The addition of home 3D-printable hardware elements makes for yet more personalization options, and could create additional opportunities for game creators to develop case-mod tie-ins for their titles.
MakerBot says on its website for the OUYA console kit design that it can be opened with a user’s own 3D printing software to make modifications and additional customizations, so we could see much more than the standard Yves Behar-sourced cube with a rounded edge at the bottom.
-
Robin Roberts: ESPN to Award Her an ESPY For Courage
ESPN this week announced that the 2013 Arthur Ashe Courage award will go to news anchor Robin Roberts.
Roberts, one of the hosts of ABC’s Good Morning America, has captivated morning show audiences this past year with her battle against myelodysplastic syndrome (MDS), a blood disorder of bone marrow stem cells. Before she was the darling of morning network TV, though, she was a sportscaster for over a decade.
Roberts began working for ESPN in 1990 as one of the first black female sports journalists in the business. She earned three Emmy Awards for her work at ESPN, and now the network will be honoring her for her achievements.
The 2013 Arthur Ashe Courage award will be given to Roberts during this year’s ESPY awards, which air on July 17. Past recipients of the award have included Muhammad Ali and Dean Smith.
“Robin brings an amazing amount of energy, compassion and determination to everything she does,” said John Skipper, president of ESPN. “Those qualities made her an incredible asset during her time here at ESPN, and they have served her well as she battled the terrible health challenges that she’s had to face.”
My humble thanks to @ESPN for naming me recipient of the Arthur Ashe Courage Award. He was a true friend and mentor. An amazing honor.
-
Better Late Than Never: Temple Run Finally Comes To Windows Phone 8
One of the main obstacles facing Windows Phone is its lack of apps. The situation is getting better with high-profile apps like Pandora finally making the jump to Microsoft’s mobile platform, but it’s still lacking the high-profile mobile games that Android and iOS players have enjoyed for the past few years.
Engadget reports that the situation is getting better this week as Temple Run will finally make its debut on Windows Phone during the Game Developers Conference. There’s no word on whether or not its immensely popular sequel – Temple Run 2 – will make it to the platform.
Alongside Temple Run, Windows Phone will also be getting a few more popular mobile titles, including 6th Planet, Propel Man, Orcs Must Survive, Fling Theory and Ruzzle. With these additions, Microsoft is one step closer to achieving parity with the Apple App Store and Google Play. It still has a ways to go, however, until it can offer the breadth of content available on competing platforms.
All the above games should be available this week. A quick check of the Windows Phone store reveals that they haven’t been released just yet, but it shouldn’t be much longer before Windows Phone 8 owners can finally enjoy one of the best mobile games of recent years.
-
Android apps make up 20% of all BlackBerry 10 apps
BlackBerry (BBRY) included an Android emulator in its BlackBerry 10 operating system that allows developers to easily port their applications from Android to BlackBerry. The decision to include such a tool has paid off for the company, which launched its new platform with more than 70,000 apps. BlackBerry recently announced that its app store is now home to more than 100,000 BlackBerry 10 applications, of which only 20% are ported from Android. While the operating system is still missing key apps such as Instagram and Netflix (NFLX), for the most part BlackBerry has been able to attract developers to its still unproven platform.
-
BioShock Infinite Review (PC)
Without a doubt, one of the biggest games of this generation was the original BioShock: developer Irrational Games managed to deliver a never-before-seen narrative set in a unique environment, the underwater city of Rapture, with many great characters and a twist players won’t soon forget. Now, Irrational has the job of surpassing its seminal … (read more)
-
Will We Get a Second-Hand Market for Digital Goods?
When the U.S. Supreme Court rendered its decision on Kirtsaeng v. John Wiley & Sons, it dipped a toe into waters that run much deeper. As I discussed in an earlier post, that case addressed the question of whether a book buyer, having taken ownership of a physical book, is at liberty to turn around and sell it. But that question is, to slightly mix metaphors, only the tip of the iceberg. The decision says little about the much bigger fight brewing over ownership of digital products — e-books, digital music files, and electronic copies of movies and software.
Does the “first sale” doctrine the court applied in Kirtsaeng also apply to these goods? Do consumers have the right to resell songs purchased from Apple’s iTunes store, or books downloaded to their Amazon Kindles, or other media accessed entirely through the cloud — goods, that is, of which they never took possession as a physical copy, standalone or embedded? What role will first sale play in a future where all information products, like software, are increasingly produced and distributed entirely through the cloud?
For the past decade, emerging digital markets have operated under the assumption that, despite the obvious similarity between a music CD and an MP3 downloaded from iTunes, digital goods are different. Despite what is often sloppy use of terminology, Apple, Amazon, and others maintain that you don’t “buy” a copy of an e-book, nor do you “own” the music in your iTunes library or the copies of apps loaded on your mobile phone. You rent them. Or, to use the legal term, you license their use.
How does that happen? The short answer is through terms-of-service agreements and other click wrap. When you check the “I agree” box, you enter into a contract with the seller, including a litany of conditions that restrict how you can make use of the licensed good. Depending on the license, that includes how long you can use it, how many users can simultaneously access it, and, notably, what rights you have to transfer your license to someone else.
The short answer to that last question? In most cases: no (re)sale.
The disproportionate bargaining power of licensors over licensees aside, there are sound economic reasons why producers of digital media have insisted — so far successfully — on licensing rather than selling their goods. In brief, resale markets are dangerous, especially for information goods. If they are particularly robust, they can drive down the price for first sales, undermining the opportunity for copyright holders to recover their high sunk costs. Which is, in turn, the whole point of copyright’s “exclusive rights” in the first place.
Publishers and media companies may not have worried much about second-hand stores when it came to physical goods. Used physical goods deteriorate rapidly and, in the days before near-perfect market information, were relatively hard to find. Since media purchases are often impulse buys, customers tended to be willing to pay a premium price to avoid even a short delay — buying hardcover instead of paperback. Likewise, many music lovers are willing to just click the “buy” button for a music track rather than wait for the song to come around again on Pandora.
But a digital copy of the latest “Iron Man” movie is a perfect replica, and infinitely reproducible at a marginal cost of zero. Stored in the cloud, it will remain a perfect copy so long as there is software that understands the format it’s been encoded into. (Which may not be as long as nervous media executives think.) An unrestricted ability to resell could, therefore, easily lead to efficient second-hand markets that directly compete with first sales. The result could be, ironically, less original content in the first place — a lose-lose outcome.
So the media industry’s slow acquiescence to allow their copyrighted works to be distributed in digital form at all has come with the quid pro quo that reselling is forbidden. In theory, consumers have implicitly traded the right to resell for a lower price and more content to choose from.
Courts have so far upheld license terms that forbid resale against arguments that they violate “first sale.” Kirtsaeng, as copyright scholar Eric Goldman notes, is of no help. In the licensing of digital goods, after all, there was no sale, not even a first sale. Just a rental.
But even if courts and legislatures continue to support licensing agreements that bar resale, that doesn’t end the discussion. Market pressure has long been building for changes in the relationship between media companies and their customers, and customers are gaining the upper hand, thanks again to near-perfect market information. Many of those customers prefer, if only for sentimental reasons, to own rather than to rent their digital goods. Today, they cannot do so — at any price.
Several start-ups have foundered on the rocks of copyright law trying to bridge the gap. In 2000, for example, MP3.com lost a life-or-death case over a service that allowed consumers to “register” music CDs they owned and access the contents from the company’s servers. The court found that whatever the “equities” involved, MP3.com could not lawfully make and distribute copies without permission “simply because there is a consumer demand for it.”
A more recent start-up, ReDigi, is now facing its own existential struggle with copyright law. The company’s beta service allows users to barter MP3 files licensed from iTunes (and soon other digital sellers) to other ReDigi users at discounted prices. In effect, ReDigi operates a market for license transfers among iTunes users.
According to the company’s website, the value of such transfers can only be used to acquire other licenses — users cannot cash out whatever price they can get from each other for used iTunes licenses. They are simply “recycling” their music purchases. Everything stays within the iTunes universe.
Not surprisingly, the company’s self-imposed limits didn’t satisfy the highly litigious music industry. Capitol Records quickly sued the company, and the case now awaits decision in a New York federal court. Capitol argued that the service necessarily makes copies of protected works without permission — straight-up copyright infringement. ReDigi argues that no copy is made — that it transfers the “exact file” from the seller’s device to its servers. More to the point, it argues, the transfer of the licensed file does not violate the copyright holder’s exclusive distribution right because, like Dr. Kirtsaeng, its users are immunized by the first sale doctrine.
ReDigi is another good example of a business model I would call “barely legal by design.” Here, the company is putting its eggs entirely in the “first sale” basket. If the court decides there is no “particular copy” that an iTunes customer “owns,” the first sale defense will fail. It’s hard to see how the service can continue in that event. For what it’s worth, the relevant case law, if applied, bodes poorly for the startup. (The company did not respond to a request for comment.)
But even if ReDigi becomes the latest casualty in the war between media companies and their customers, it will hardly be the final word. The merchants themselves may find that a carefully designed second-hand market can generate profit without undermining primary markets. According to a recent New York Times article, both Amazon and Apple have filed patent applications for systems that would enable digital resale, at least for the digital goods they respectively market. (Amazon’s patent has already been granted.)
Reading between the lines of the applications, the companies seem to have in mind markets that would allow their customers to transfer licenses to other customers within the system. You might someday be able to trade, sell, rent, or even loan out your Kindle books to other Kindle readers, in other words, but only under the watchful eyes of Amazon.
Will that be enough? Perhaps you, like many consumers, have a visceral reaction to the very idea that you have not purchased but merely rented your digital goods, and under highly restrictive terms. “I bought it,” you might be thinking even as you read this, “It’s mine. And I can dispose of it any way I like.” That, of course, is the essence of the consumer mindset, one that manufacturers have strongly encouraged ever since the Industrial Revolution made possible identical, cheap goods sold at fixed prices.
But operating in parallel with the consumer paradigm, there have always been markets for licensed goods. When you buy a ticket to a movie, you are licensing the use of a seat in the theater, for one particular showing of the film. Your ticket is yours to keep, but it doesn’t give you the right to watch the movie again, or to sell someone else that right. No one feels outraged or cheated when they have to vacate the seat.
That’s the only workable model, content companies believe, for digital goods. As books, entertainment, and software are distributed less and less in physical media, the companies argue that what you are paying for is much more like a movie ticket than a manufactured good: a limited right to use, but nothing to own. You can listen to the music, read the book, watch the movie, or access the software. But only at agreed-upon times and places.
Economically, they may be right. If so, however, the reeducation of consumers from buyers to renters will be a long, uphill battle. Consumers will resist, and start-ups will try to push the legal envelope to help them. The courts, in any case, are on the side of the incumbents. At least so far.
Long term, however, media companies can take heart in the realization that our digital future is one in which the benefits of owning for consumers are being quickly outweighed by the costs of storing, maintaining, and replacing quickly outmoded and inferior versions. Licensing is also more flexible than ownership, and that could mean that in the future we’ll see more rental options (pay-as-you-go, all-you-can-eat, subscriptions, ad-supported, hybrids), each with its own price.
We may be more comfortable emotionally with ownership, in other words, but may soon come to see the superiority of licensing. For ourselves, not just the producers.
Author Kevin Kelly argued in a provocative 2009 essay that the very idea of ownership for digital goods is an anachronism, an unnecessary and expensive way of thinking about information which is quickly losing relevance. Indeed, says Kelly, usage rights are far more consistent with the economics of information than the ownership of copies. “An idea can’t be owned in the way gold can; in fact an idea has little value unless it is shared or used to some extent. Its value paradoxically can increase the less it is owned privately. But if no one owns it, who gains the benefit of that increase in value? In the new regime users will often assume many of the chores that owners once had to do. And so in a way, usage becomes ownership.”
Kelly may be further up the evolutionary ladder than the rest of us. For many consumers, outright ownership still matters, whether an information good takes physical form or not. So it should matter to them, too, what role the courts and legislatures play in deciding how such questions are resolved.
-
The next Windows won’t be called Blue
Microsoft knows something about cool codenames, but little about naming actual products. Whistler, Longhorn, Cougar, Blackcomb, Vienna and even Blue all sound great, resounding and promising, but that impression goes away fast when Microsoft baptizes its creations: XP, Vista or 7. The guy with the cool names went on a bathroom break, and all the boring suits took over.
That’s the very same impression I get after reading Microsoft’s “Looking Back and Springing Ahead” blog post, which touts a number of apparently impressive achievements and future plans that the company has. Lo and behold, there’s even a strategy in place to raise the pace of “updates and innovations” — that’s the “new normal across Microsoft”, according to the company. But then I notice the Windows Blue reference.
On Windows Blue, Frank X. Shaw, corporate vice president of corporate communications at Microsoft, says: “Chances of products being named thusly are slim to none. And don’t start with the ‘so you’re telling me there’s a chance’ bit”. Blue may not be the most imposing name, but it’s out there with the big boys, and now the software giant is practically telling us that the boring route will be used instead.
I can only speculate that Windows Blue will be named Windows 9 or something along those lines, after the product is released into public hands (even as a preview). Admittedly, Microsoft may want to avoid future genitalia puns (which I shall not name), but I expected the company to grow a pair and get bold.
Apple manages to deliver successful operating system releases that users adopt, complete with cool, memorable nicknames. Just think of Mac OS X 10.6 Snow Leopard. Anything is better than boring, and when you want to hang around with the cool kids, hip is the way to go. Adapt and conquer.
But what else is there in that blog post? Microsoft reveals that Windows Azure has twice as many users, with revenue growing threefold, while sales of Windows Server 2012 Datacenter licenses increased by more than 80 percent. Great news comes from Office 365 “paid seats”, which tripled year over year during the previous quarter. The software giant also quotes an IDC report that places Windows Phone at 10 percent market share “in a number of countries”, surpassing BlackBerry and iPhone shipments in 26 and seven markets, respectively.
This is hardly surprising, but Microsoft also announced that a “unified planning approach” has been implemented to deliver “devices, apps and services working together wherever [users] are and for whatever [users] are doing”.
Judging by the title of the blog post and the innuendo at the end, Microsoft did not bury the hatchet after Google killed a couple more services in its now-traditional spring cleaning. “See, spring isn’t just for cleaning/whacking away at things. It’s also a time to plant and get ready for summer. So…get ready!” Shaw says.
-
Review of the Department of Labor’s Site Exposure Matrix Database
Final Book Now Available
Beginning with the development of the atomic bomb during World War II, the United States continued to build nuclear weapons throughout the Cold War. Thousands of people mined and milled uranium, conducted research on nuclear warfare, or worked in nuclear munitions factories around the country from the 1940s through the 1980s. Such work continues today, albeit to a smaller extent. The Department of Energy (DOE) is now responsible for overseeing those sites and facilities, many of which were, and continue to be, run by government contractors. The materials used at those sites were varied and ranged from the benign to the toxic and highly radioactive. Workers at DOE facilities often did not know the identity of the materials with which they worked and often were unaware of health risks related to their use. In many instances, the work was considered top secret, and employees were cautioned not to reveal any work-related information to family or others. Workers could be exposed to both radioactive and nonradioactive toxic substances for weeks or even years. Consequently, some of the workers have developed health problems and continue to have concerns about potential health effects of their exposures to occupational hazards during their employment in the nuclear weapons industry.
In response to the concerns expressed by workers and their representatives, the Department of Labor (DOL) asked the Institute of Medicine (IOM) to review the Site Exposure Matrix (SEM) database and its use of a particular database, Haz-Map, as the source of its toxic substance-occupational disease links. Accordingly, this IOM consensus report reflects the committee’s careful consideration of its charge, and describes the strengths and shortcomings of both databases. To complete its task, IOM formed an ad hoc committee of experts in occupational medicine, toxicology, epidemiology, industrial hygiene, public health, and biostatistics to conduct an 18-month study of the scientific rigor of the SEM database. The committee held two public meetings at which it heard from DOL Division of Energy Employees Occupational Illness Compensation (DEEOIC) representatives, the DOL contractor that developed the SEM database, the developer of the Haz-Map database, DOE worker advocacy groups, and several individual workers. The committee also submitted written questions to DOL to seek clarification of specific issues and received written responses from DEEOIC. The committee’s report considers both the strengths and weaknesses of the SEM and Haz-Map databases, recognizing that the latter was developed first and for a different purpose. The committee then discusses its findings and recommends improvements that could be made in both databases, with a focus on enhancing the usability of SEM for both DOL claims examiners and former DOE workers and their representatives. Review of the Department of Labor’s Site Exposure Matrix Database summarizes the committee’s findings.
Topics: Health and Medicine
-
Springpad moves beyond the app, making its notebooks portable to other websites
Springpad has always made it easy to take content from all over the web and organize it in notebooks on its online portal and mobile apps. Now it’s allowing its customers to take those same notebooks outside of its app and display them anywhere on the web.
As part of its upgrade to version 4.0 of its service, Springpad on Wednesday unveiled a notebook-embedding feature for publishers and brands. The idea is that brands will create notebooks full of relevant content for their customers and then post those notebooks on their websites. Customers can browse and interact with those notebooks just as they would through Springpad’s web and mobile apps, and if they find something they like they can save those notebooks into their own Springpad libraries.
For instance, one of Springpad’s new partners, Glamour, is using embedded notebooks to aggregate everything from beauty tips and shopping list suggestions to specific articles on fashion or product pages. Customers never have to leave Glamour’s site to explore that notebook, but if they want to save the notebook it will be copied into a new or existing Springpad account. There the notebook lives in the user’s library; every time Glamour updates it, the customer’s digital copy reflects the new content.
Springpad co-founder and VP of Business Development Jeff Janer said that brands have long been taking advantage of social media and curation services to promote their content and products, but while Facebook and Pinterest generate an awful lot of traffic, there’s limited follow-through. For instance, many customers may “like” a brand’s Facebook profile, but there’s little chance they’ll return to it after the initial liking. Pinterest is a great way for brands to display their wares in a visually appealing way, but beyond the visual, there are few options for displaying other forms of content.
While embedded notebooks are initially targeted at companies and advertising agencies that will pay Springpad for the service, Janer said they’re just a first step in the startup’s strategy to make all of its user-organized content portable. Right now a lot of loose information flows into Springpad, gets organized and then stays in Springpad. The company wants to encourage users to take those notebooks outside Springpad’s confines and show the world their organizational labors, Janer said.
Right now, anyone can embed a notebook into a Facebook page, but Janer said Springpad is working with blogging platforms and other social networks to increase its reach. Eventually Springpad hopes to make posting a notebook anywhere on the web as easy as embedding a YouTube video.
Springpad 4.0 isn’t quite as dramatic a facelift as last year’s 3.0 upgrade, which effectively turned Springpad from a note-taking service into a social networking and collaboration tool. But it does support another nifty new feature: intent-based search. Springpad has created new search categories that parse a user’s content based on specific interests or activities.
For instance, if you want to be entertained, you can hit the “watch something” button and Springpad will dig up every movie or TV show you’ve ever “sprung” and display them in a menu. Any movie or show that is available instantly through Netflix will pop up on top. Movies that are available for rent or purchase on iTunes or Amazon will appear next. And finally, showtimes and prices for films in the theater will appear at the bottom.

-
Could Banning Google Glass In The Car Actually Save Fewer Lives?
As previously reported, West Virginia is already looking to outlaw the use of devices like Google Glass while driving. Other states are likely to follow.
The device has not even become available for the public to buy yet, outside of the lucky chosen few who won Google’s contest. Should it really be outlawed before we can see how it can be used?
H.B. 3057 was recently introduced in the West Virginia legislature. It would add to the state’s existing traffic safety rules, specifically banning “using a wearable computer with head mounted display”, described as “a computing device which is worn on the head and projects visual information into the field of vision of the wearer.”
The bill doesn’t single out Google Glass, of course (there will be plenty of competing devices), but it is a response to Google’s much-hyped device. The bill’s authors see the amendment as an extension of the existing ban on texting while driving. It’s understandable that they would want to prevent deaths from reckless driving before they occur. However, an outright ban on the device could potentially prevent lives from being saved, too.
You have to take into account that at this point we have no idea what these devices are really capable of, and it’s highly likely that developers will create applications that actually enhance safety. Consider this talk from one of the Google Glass engineers, who discussed this kind of technology as it pertains to contact lenses (but it still applies here).
During his presentation, he outlines possibilities for the future, which include several types of vision improvement, such as “super vision,” night vision and multi-focal electronic lenses. In other words, it’s possible that at some point, devices like Google Glass could actually be used to help the vision impaired see better and more clearly. It’s possible that they can enhance anyone’s vision at night. Obviously, any of these scenarios could actually prevent auto accidents.
But that’s all just speculation for a possible future. The point is, do we want to have these devices banned before we really know what they can do? For that matter, if the technology makes it to contact lens form, how would any law ever be enforced?
It’s also worth considering what they’re already capable of today, and one thing they already do is shift the focus away from devices that require you to look away from the road. You take your eyes off the road when you look at your phone, or even your dashboard or console. With Google Glass, you don’t.
As Matt Peckham at Time says, “West Virginia already bans texting while driving or using a phone without a hands-free device…But isn’t Google Glass also a hands-free device for your eyes? A way of potentially freeing you from looking at things that might otherwise take your eyes completely off the road, whether glancing at your phone to check the time or answer a call or scan the weather?”
Still, it’s not that simple. I, for one, have not had the pleasure of trying one of the devices on, much less driving while wearing and operating one. I can’t speak from first-hand experience. It’s entirely possible that it does create distractions, and maybe there is a valid argument for a ban. But banning the devices this early seems like a snap judgment that doesn’t take all possible factors into consideration.
Let’s not forget that Google started creating self-driving cars to reduce the number of auto accidents and make the roads safer. Some states like the idea of these being legal. Of course, driverless cars are more accident-prone when humans are involved.
What do you think?
Lead image: Google co-founder Sergey Brin driving while automatically snapping photos from Google Glass