Blog

  • Flash memory, the cloud and software-defined storage in 2013

    When flash memory hit the consumer market, it transformed the user experience in ways no one could have anticipated, upending the HDD-based mobile consumer device market. When flash memory floods the enterprise storage market in 2013, it will have similar effects, though the situation for the enterprise will be much more complex.

    Looking at data center architecture, for example, cloud-based storage is bulldozing convention in the design of data centers. Further, the emergence of “software-defined storage” platforms is making the decision-making process for designing IT infrastructure nothing short of perplexing.

    New and accelerated deployments of VDI, big data, server virtualization and performance-hungry databases will place growing pressure on enterprise infrastructure. The need for better storage will produce innovation opportunities in the data center. Some organizations will certainly face pitfalls in architectural decision making.

    Key topics of discussion:

    • How IT infrastructure is impacted by major shifts in storage technology in 2013
    • What happens when cloud architecture and flash memory collide
    • When and where to apply flash memory and software-defined storage architectures
    • The emerging role of software-defined storage architectures
    • How an organization can capitalize — or falter — on decisions this year
    • How to use storage as a strategic differentiator for your organization

    For a thorough discussion of these issues, please join GigaOM Research and our sponsor Pure Storage for “Flash memory, the cloud and software-defined storage in 2013,” a free analyst webinar on Monday, March 18, 2013, at 10 a.m. PT. Register today.

  • Philips opens an iOS SDK for Hue, its connected lightbulb system

    Not satisfied with merely being able to control your Hue lightbulbs with a few taps on your iPhone? Starting Monday Philips is opening up Hue’s APIs and issuing a software development kit (SDK) for iOS developers who want to make their own mobile apps to turn off, dim, time or sync their Hue bulbs.

    Philips’ smartphone-controlled LED lightbulbs have been for sale in the Apple Store since late October. Since then, the company says some developers have already gotten creative with Hue, including one app that syncs the bulbs with music and another that uses the iPhone’s calendar to schedule when the lights should be on.

    Using the low-power wireless protocol ZigBee Light Link, Hue bulbs talk to a bridge that in turn talks to the iPhone (or an Android device). ZigBee is also used by other wireless lighting devices for the home, so between that, the open APIs and the SDK, we should see a lot more creative ways to control not just home lighting systems but other devices in the mix as well.
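
    For developers who want to poke at the system ahead of the official SDK, the bridge itself speaks a simple local HTTP/JSON interface. Below is a minimal, illustrative Python sketch of what switching on and dimming a bulb through the bridge might look like; the bridge address, the API username and the bulb number are placeholders for your own setup, not values supplied by Philips.

        import requests  # third-party HTTP client library

        BRIDGE = "192.168.1.10"     # placeholder: your Hue bridge's LAN address
        USERNAME = "developer-key"  # placeholder: an API username authorized on the bridge

        def set_light(light_id, on=True, brightness=128):
            """Turn a bulb on or off and set its brightness (0-254) via the bridge."""
            url = f"http://{BRIDGE}/api/{USERNAME}/lights/{light_id}/state"
            response = requests.put(url, json={"on": on, "bri": brightness})
            response.raise_for_status()
            return response.json()

        # Dim bulb 1 to roughly half brightness
        print(set_light(1, on=True, brightness=128))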

    Philips calls its new developer effort “just the first phase” of what’s to come. It says new features are already in the pipeline, including geofencing and scheduling. (And, as you might expect with wireless connected bulbs, the bulbs are self-updating.) iPhone-controlled lightbulbs are just a splash in the coming connected home and internet of things wave. And as is the case for most early adopters, the people who dip their toes in the water first are going to have to lay out some dough for the privilege. Philips’ Hue system is not cheap: the starter kit, which includes the bridge and three bulbs, is $199, and additional bulbs are $59 each.

  • Czechs pull plug on 4G spectrum auction after carriers bid too much

    Radio spectrum auctions are generally supposed to be about assigning spectrum in the most efficient way, so that new mobile services run as well as possible. That said, they also make money for governments, and some see this as their primary purpose.

    Not so the Czech telecoms regulator, CTU, which has suspended the country’s auction of spectrum in the 800MHz, 1800MHz and 2.6GHz bands because the carriers bid too much. The private equity firm PPF and the local businesses of telecoms giants Deutsche Telekom, Telefonica and Vodafone had collectively bid 20 billion crowns ($1.03 billion) before the regulator pulled the plug, saying investments of that level would mean unreasonably high prices for consumers.

    “When announcing the conditions in the first half of last year, we stressed that the main motivation of the auction was the quick availability of a 4G network for Czech citizens and the possible entry of a fourth operator — never about profits for the state,” CTU chairman Pavel Dvorak said in a statement (I’ve used an English translation of the quote from Reuters).

    Even though the reserve for the Czech auction was only $377 million, the billion-dollar figure doesn’t appear that high on the face of it: the UK spectrum auction last month pulled in $3.6 billion, and the Dutch auction in December accrued $5 billion.

    What’s wrong with raising too much? At the extreme end of the scale, we have the 3G spectrum auctions of a decade ago — there, carriers paid tens of billions for their licenses, effectively causing an industry-wide crash from which they and their vendors took years to recover. Let’s remind ourselves of what EU digital agenda commissioner Neelie Kroes had to say back in January, in reaction to the Dutch result:

    “Was nothing learned from previous auctions for UMTS [3G] frequencies, when the share price of KPN dropped substantially and the ecosystem of small supply companies in the telecom sector was severely damaged? … Telecom companies paid high prices. KPN saw a further decline in its credit rating. Prices for attracting money for infrastructure investments are expected to rise. The rollout of high-speed internet will slow down and the suppliers will be put out of business. This ‘Christmas gift’ could be a huge burden for the sector, and for all other businesses, entrepreneurs and citizens who need super-fast mobile internet.”

    Dvorak cited Kroes in his statement, pointing out that allowing excessive auction revenues would clash with his agency’s mandate of creating conditions for efficient investment. And so, the Czech Republic can look forward to a rebooted spectrum auction, hopefully sometime later this year.

  • Google ‘X Phone’ specs reportedly revealed: Quad-core CPU, 4.7-inch HD display, 16MP camera

    Motorola X Phone Specs
    Google (GOOG) is believed to be working on a high-end smartphone with Motorola, codenamed X Phone, that is expected to debut this May. Initial reports claimed the handset would include a top-notch camera, a flexible display and revolutionary software features; however, most specs have remained a mystery. According to a report from Android World, the flagship smartphone will be equipped with a 4.7-inch full HD display, a quad-core Tegra 4i processor and a 16-megapixel camera. The device will also reportedly include the latest version of Android, rumored to be called Key Lime Pie, and a 5-megapixel front-facing camera with eye-scrolling technology. Purported dimensions are 131.2 x 66.7 x 7.9 mm, which would make the device slightly thinner than the DROID RAZR MAXX HD.

  • Hate Daylight Saving Time? Here’s a Petition for You

    If you woke up this morning in a haze and needed an extra cup or four of coffee to get going, you can probably blame Daylight Saving Time. Or your hangover. But probably DST.

    On Sunday morning at 2:00am our clocks jumped an hour ahead, forcing millions to lose out on a crucial hour of weekend sleep. If you think that the practice of springing forward and falling back is archaic and unnecessary, you’re not alone. And there’s a White House petition that I’d like to bring to your attention.

    Hosted on the White House’s We The People online petition site, “Eliminate the bi-annual time change caused by Daylight Savings Time” has garnered over 19,000 signatures in less than a week. If it can get 100,000 by April 4th, it will warrant an official response from the Obama administration.

    Here’s the argument proposed by creator C.D. from Loveland, Ohio:

    Daylight Savings Time is an archaic practice in our modern society.

    The original reasons for the policies are no longer applicable, and the most cited reason for keeping DST (energy savings) has never been shown to be true.

    Some industries still like DST (like sporting equipment retailers), but there are many more who dislike the changed hours (like television).

    The real issue, however is not the later hours or extra sunlight. Studies have shown that changing the clocks is responsible for health problems (including increased heart attack and vehicular accident risks) and leads to hundreds of thousands of hours of lost productivity in workplaces across the country. Also: It’s really annoying.

    We should either eliminate DST or make it the year-round standard time for the whole country.

    The L.A. Times calls today one of “the most dangerous days of the year,” citing numerous reports that the Monday after the start of Daylight Saving Time is an especially risky day for much of the country. Various studies have tied the onset of DST to more heart attacks, more traffic accidents, more accidents in the workplace, and more.

    You can believe that or not; it’s up to you. But I’m sure most of us can agree that losing an hour of your weekend around the beginning of spring each year is just plain annoying. What do you think?

  • BlackBerry Z10 Hits AT&T March 22, Costs $199

    After rumors pointing to a March 22 release date, AT&T has finally come forward to confirm the U.S. launch of the BlackBerry Z10.

    AT&T announced today that the BlackBerry Z10 will indeed be available on March 22. The device will cost $199 under a two-year contract. There’s no mention of how much the device would cost off contract, but another U.S. seller was selling the Z10 for $999.

    “AT&T customers were the first to experience BlackBerry smartphones and services in the U.S. and we are thrilled to bring the next evolution, the BlackBerry Z10, to the nation’s fastest 4G LTE network,” said Jeff Bradley, senior vice president, Devices and Developer Services, AT&T Mobility. “Customers who have grown to love the tried and true BlackBerry experience will continue to enjoy the easy typing and the secure platform they expect with a fresh platform that lets them get more out of their smartphone with easy access to all their messages in BlackBerry Hub.”

    Enterprise customers will be happy to know that AT&T will also be offering BlackBerry Enterprise Service 10 alongside the Z10’s launch. It will be added as one of the many Enterprise Management options offered by AT&T.

    There’s still no word on when the Z10 will be available on other carriers. The official story for all carriers is that a March release is still in the cards, so we’ll probably see some announcements from Verizon and T-Mobile later this week.

    If you find yourself craving a BlackBerry Z10, AT&T says that preorders for the device will open tomorrow.

  • Microsoft entices students with Office 365 deal

    Just in time for mid-term exams, or for the few students who actually work during Spring Break, Microsoft offers a suite — ah, sweet — deal on Office 365. Not coincidentally, the offer carries many, if not most, in higher ed through the end of the school year.

    Microsoft’s Jeff Meisner explains: “Starting today, college students in the U.S. can get three months of Office 365 University and 20 GB of SkyDrive storage for free”.

    In January, Microsoft launched Office 365 Home Premium. The University edition is slightly less full-featured, including Word, Excel, PowerPoint and OneNote, which are likely the only apps that are really important to students anyway, although I am sure some would not mind having access to Publisher. Office 365 University does, at least, include the same 20GB of free SkyDrive space that comes with a subscription to the Home plan.

    In addition to offering the three free months, Microsoft adds an additional three months if the student shares the offer on Facebook — talk about pleading for viral marketing. Customers will need to provide a valid .EDU address to qualify and, according to the terms, “This Offer commences at 12:00 AM Pacific Standard Savings Time on March 11, 2013 and all Offer redemptions must be made via the Offer Site by 11:59 PM Pacific Standard Time on May 12, 2013 or while supplies last at which time the Offer ends. Redemption codes must be used no later than May 12, 2014”.

    I am not exactly sure how supplies of a downloaded program can run out, or why Microsoft waited until now to make this offer given that we are approaching the end of the school year, but it is still a good deal for a segment of Office users who frequently cannot afford to shell out the normal cost for software such as this.

    Photo Credit: Goodluz/Shutterstock

  • Guide to Optimizing Your Data Center’s Efficiency

    The modern data center faces challenges from dynamic technology lifecycles and a demanding business environment. Having evolved from server closets to multi-million-dollar facilities, the modern data center incorporates a number of innovations that contribute to its effective and efficient operation. Facility and IT infrastructure must be agile and keep pace with the business while maintaining mission-critical reliability.

    To foster an agile, proactive approach towards facility and IT operations, data center owners and operators must adopt suitable engineering tools and techniques. Monitoring, management and simulation techniques help to:

    • Understand the cadence of business.
    • Function as a link to illustrate the efficiency of the data center investment.
    • Identify, assess and determine the risk of vulnerabilities.
    • Develop business and technical planning scenarios to deal with risk and uncertainty.
    • Identify and mitigate lost capacity.

    Using the right tools and techniques for the right job means monitoring, simulation and DCIM (Data Center Infrastructure Management) working in harmony to protect and sustain the data center investment. Applying simulation techniques before a data center goes into production, or while it is in production, can identify conditions that lead to unnecessary costs, lost capacity, and service interruptions.
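
    As a loose illustration of my own (not drawn from the paper itself), “lost” or stranded capacity can be thought of as the gap between what a rack is provisioned for and what its tightest constraint, whether power, cooling or physical space, actually allows. A monitoring or DCIM tool might surface it with a calculation along these lines; the rack figures below are made-up placeholders, not measurements.

        # Toy sketch: estimate stranded ("lost") capacity per rack.
        # All numbers are illustrative placeholders, not real measurements.
        racks = {
            # rack: (provisioned_kw, power_headroom_kw, cooling_headroom_kw,
            #        free-space headroom expressed as a kW equivalent)
            "A01": (10.0, 6.0, 4.5, 8.0),
            "A02": (10.0, 9.0, 9.5, 2.0),
        }

        for name, (provisioned, power, cooling, space) in racks.items():
            usable = min(power, cooling, space)      # the tightest constraint wins
            stranded = max(provisioned - usable, 0)  # capacity paid for but unusable
            print(f"{name}: usable {usable:.1f} kW, stranded {stranded:.1f} kW")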

    This paper explores the value that monitoring, simulation and DCIM offer, and the additional value of integrating these engineering tools and techniques. It explores the importance of promoting an environment where IT and Facilities departments take a unified approach to the data center. The engineering tools available to data center managers also present the risk of inaccurate interpretations and unreliable forecasting. Finally, the paper examines the misconception of perceived capacity, the complexities of capacity management, and how an integrated simulation platform can act as a catalyst for regaining lost capacity in the data center.

    Click here to download this white paper on Optimization in the Data Center.

  • Apple’s @iBookStore Account Retweets Salty Tweet

    In what is most likely a case of the ol’ personal vs. business Twitter account login switcheroo, Apple’s official iBookstore Twitter account retweeted and then quickly removed a tweet containing an obscene phrase.

    Early Monday morning, the official @iBookstore account retweeted a tweet that said “Let me suck a dick and tell you how much I love introspective novels.”

    The tweet, which was sent out to over 214,000 followers, was removed within minutes.

    But not before some Twitter users had the chance to catch it. Apple’s iBookstore account has not referenced the tweet since.

    The tweet was originally sent out by Alison Agosti, a writer at Upright Citizens Brigade who sports over 228,000 followers herself.

    Although the retweet mishap is pretty tame in the realm of NSFW social media mishaps, it just goes to show that employees who operate official business accounts need to make sure that they’re logged into their personal accounts before retweeting off-color posts.

    [9to5Mac, Image via Michael Steeber, Twitter]

  • The US race is on: AT&T starts BlackBerry Z10 pre-orders on March 12

    U.S. network operator T-Mobile had hoped to be the first in the country to sell the new BlackBerry Z10 handset, but AT&T may yet beat its peer to the punch. On Monday, AT&T announced it would accept pre-orders for the Z10 starting March 12. The offer is open only to new and current AT&T subscribers with a consumer plan; corporate accounts will have to wait until March 22.

    AT&T has set the price at $199.99 with contract for the Z10, which is the general going rate for any flagship phone. And as the first BlackBerry 10 handset, complete with competitive hardware, this surely is the BlackBerry flagship.

    I’m inclined to agree, however, with Chris Davies at Slashgear, who thinks $200 for the Z10 may be too much. Sure, the BlackBerry faithful will be happy with this device at $200, but it’s a tougher sell at that price for those happy with an iPhone or Android handset. The platform still has some catching up to do in the app store, for example. Yes, you can sideload Android apps, but the process is convoluted enough that most consumers aren’t likely to do so.

    In terms of communications the Z10 surely excels, thanks to the BlackBerry Hub. That makes sense, given BlackBerry’s rich messaging history. And the gesture-based user interface is enjoyable to use. Is that, plus the BlackBerry name, worth $200 to a large U.S. audience? Sound off in the comments and let me know what you think; I suspect the Z10 would be a much better seller at $149.99 with contract.

    This post was updated at 8:36 am to reflect that new customers can also pre-order the Z10.

  • What’s wrong with tech in US K-12 education today

    If you surveyed the different directions K-12 school districts take in the United States, you’d find nothing less than a hodgepodge of technologies. The mess once known as “Novell Hell” has given way almost universally to a diverse array of technologies including Active Directory, campus-wide Wi-Fi, iPads, Chromebooks, and a little bit of everything else in between. While it’s reassuring that most districts I’m in discussions with are moving to cloud-based Google Apps or Office 365 for their email, the end-user device side of things is murkier.

    I’m not going to call myself an expert in K-12 technology and policy, but having spent the last four years supporting and training users at my former high school district, I have solid experience with the issues affecting teachers and students alike. After attending educational tech conferences year after year, the consensus stands: everyone in education knows where they want to be, but the paths some of them take to get there are muddled with too much idealism and not enough realism.

    Readers who frequent BetaNews know that I’m very passionate about the topic of educational tech. I’ve already shared my thoughts on why the Surface and Chromebook are better devices for 1:1 student engagement, and specifically why the Surface has better long-term odds in K-12 than Apple’s iPad. But the debate has to go further than devices and “cool factor”. Our public education system ranks a measly #17 among the world’s greatest powers, and I think leveraging technology properly is one way we can get back on track and regain the elite ranking we held a half century ago.

    Problem 1: Many Teachers refuse to realign their Teaching Styles

    The move towards 1:1 computing in K-12 education is meaningless if teachers aren’t willing to rethink, realign, and downright renovate their entire instructional mentality. Too much of the 1990s and early 2000s was spent wasting oodles of money on pet projects and technology implementations that didn’t further students’ overall foundations of technical aptitude in any way. And teachers were then thrown under the bus by administration and told to “adopt or die” even if the path forward was illogical and useless for the overall betterment of the student body.

    Much of this issue extends well into today, where my consulting company FireLogic comes across many seasoned teachers who are scared to branch out and use new-age technologies like Google Apps or Chromebooks full time. Some of the fears stem from legitimate, related problems like sporadic Wi-Fi coverage in buildings, but a lot of it is tied to a reluctance to change the status quo. It’s easier to stay on the familiar path than to deviate and build something fresh.

    While the student body at large has already been moving towards an online-driven and device-enriched culture, the teaching body as a whole is stuck in the textbook-and-paper mindset. The few technologies that teachers have adopted fairly well (like Microsoft Word and online research) are unchanged from the way they were used back in the mid 1990s. So now we have a generational gap where students are truly outpacing their teachers, and many instructors are torn over whether to keep up or just give up on tech.

    At my former high school district, for example, I spent years trying to show teachers, conceptually and functionally, how Google Docs was a much better approach for student-created work (Office Web Apps affords similar benefits to Google Docs now, too, through Office 365). Students could collaborate on teamwork in real time, share notes and spreadsheets with fellow students in a single click, and the fear of losing work was eliminated thanks to Docs’ cloud auto-save functionality. So what was the hangup that kept Docs adoption down? Teachers refused to change the way they graded papers — with red pen in hand, no less.

    There is a lot of technology in education that never sees the light of day, benefits and all, because we have a generation of teachers who are afraid to leave their comfort zone, whether those fears are justified or not. Some facets of K-12 education have been making the right moves, like adopting 1:1 programs and/or flipping their classrooms, but the progress has not kept up with the general direction of student learning expectations.

    Problem 2: Too Much Focus on Consumption

    The consumption-first mentality is one of the biggest problems surrounding K-12 “growing up” when it comes to technology adoption for the 21st century. What does consumption-first in K-12 look like? Districts that push 1:1 computing initiatives driven around app-centric devices like the iPad or Android tablets. These hardware choices are not well suited for education in a number of ways, including the lack of traditional input devices like keyboards/mice and too much importance placed on consumption instead of creation.

    I’m not here to discourage iPad or Android tablet plans out of hand. There are situations and areas where these devices make sense. Special Education, for example, benefits greatly from iPads in my own former district, and this is something that should be praised. But applying such a focused use case to the macro level, a.k.a. an entire high school student body, is a recipe for disaster.

    Where do devices like the iPad excel? In consumption-oriented tasks, driven by single-purpose apps built for limited functionality. Sure, Google Apps offers the Drive app for Docs access, but is this an ideal way for students to access and organize their work day in and day out? Definitely not. The same goes for research, analysis, and information dissemination in a number of other areas for the modern student. If asking a student to type research papers and collaborate in groups on a touch-based tablet is the new norm, then I must be behind the times.

    Microsoft Surface could be one of the best post-PC platforms for students and educators alike, if only K-12 were willing to look past the iPad standard that is overtaking many districts.

    While there is a place for consumption in K-12 education, we shouldn’t fall into the trap of training the next generation of workers to believe that consumption matters more than anything else. Well-made apps that teach a given topic have their role in the classroom — but they should not be the driving force guiding device policy, especially in the wake of 1:1 computing pilots. We should ask more of our students, preparing them for a real world that expects ever-more demanding competency in typing, analysis, mathematics, and extensive group work.

    The new wave of computing hitting districts nationwide may be moving away from centralized computer labs, but it certainly does not necessitate a shift away from keyboard-based input and the flexibility that traditional input devices afford. And I think there are lots of options that provide the flexibility our students need. Chromebooks, Surface tablets, and other mixed-use hardware are a much better answer to this reality than single-purpose iPads that need dongles and extras just to double as usable input-focused machines.

    Problem 3: Adopting Technology that is guided by Fads, not Goals

    One good example is the misguided approach many K-5 districts took early on when deciding which computing platform to unify upon. At least in my neck of the woods, that platform happened to be Apple, which left those graduating into high school and college with the reality check that the real world still runs primarily on Windows PCs. Luckily, my own high school district standardized on Windows machines nearly across the board, but many districts do students an injustice by pushing technologies that are not widespread in the world of work.

    A fellow BetaNews reader emailed me with his shared viewpoint on this problem. “The use of Apple based products in schools is something I never could understand. In fact I consider it ridiculous from a cost point of view and a practical benefit as well”, Robert Peltier stated in his message. “How many businesses use Apple compared to Microsoft Windows?”

    A good question indeed. If operating system usage numbers continue to hold, Windows will still command a solid 90-plus percent of the global computer share. In plain speak, this means that more likely than not, students entering the work world will deal with Windows-based machines nine times out of ten. Perhaps more, as established businesses generally prefer Windows over OS X at an even higher rate. So what benefit does a childhood of schooling centered around Apple equipment really provide? If the educational system is here to prepare students for their next steps in life, I’m not sure this approach makes much sense.

    My own former employer and high school district made a similar mistake when it jumped on the touch bandwagon without a solid vision of the why and the how behind the initiative. With budget money bursting at the seams, the district decided that every classroom would receive an ill-conceived QOMO tablet. At face value, the devices seem to fill the input gap between pen/paper and digital.

    But the implementation and execution were flawed in every way from the very start. Like many other failed tech launches in K-12, the devices were simply dumped into classrooms without any proper training or guidance. The expectation was that, with the iPad craze in full swing, everyone would cozy up to touch and incorporate the tablets into day-to-day teaching. Were they ever wrong.

    In my experience, less than 5 percent of our district’s teachers even considered the devices; the few who did try them ran into a range of issues caused by sub-par technology design, leading most of them to drop the tablets from lesson plans altogether. And so the devices sat, pushed into corners, with our technology department expected to “keep supporting them” with no end in sight. They became the laughingstock of tech policy in our district.

    I don’t think any single party can be blamed for failed initiatives like these. Districts have a spending problem in that they are forced to burn through budget money before they lose it. Educational tech leaders are too often caught up in fads pushed both by outside influence and by a vocal subset of teachers – a sort of “keeping up with the Joneses” mentality in education today.

    I think if districts were expected to resolve problems the way corporate America does, we wouldn’t see the kind of waste on ineffective technologies and short-sighted pilots that we do today. A focus on ROI (return on investment) would mean balancing student education with logical technology platforms as much as with common-sense spending habits.

    Let’s shift the Debate: How can we BEST prepare the Next Generation?

    My background in educational technology has taught me a few things, and one of the most notable ones is that using tech fads as a guide for what’s best in a student’s hands does no one justice. Budgets are unnecessarily bloated by expensive hardware (Apple gear isn’t the only culprit here); K-12 tech leaders believe that new hardware alone will solve all ills; and administration just wants to be able to push out the next self-praising press release to the community and save face.

    The messes of yesteryear are finally giving way to a more reasoned approach and fresh thinking about the way districts bring tech into students’ hands. One of the best moves now sweeping K-12 across America, 1:1 computing, is our best shot at grasping the moment to rethink what students need going forward and how to best equip them for that vision. But let’s not make the same mistakes of the past.

    Chromebooks save $935 on average over a three-year span, compared to other alternatives like iPad. They are rugged, cost-effective, and excellent options for 1:1 programs. (“Quantifying the Economic Value of Chromebooks for K-12 Education”; IDC, Aug 2012)

    Collaboration is definitely one of the keys to this debate. Inclusive platforms like Google Apps and Office 365 are giving students and educators the means to weave together rich, online-driven instruction without tying everyone to a single brand of hardware or price level. Unlike the iPad and the iTunes Store, Office 365 and Google Apps can be used on a PC or an Apple device with a similar experience, give or take some bells and whistles here or there.

    But if we are to believe that touch is where computing is headed, the Surface is a device that truly offers the best of both worlds: a platform that shifts from tablet to traditional laptop in a matter of seconds and seamlessly combines tactile input with touch-based capability. I really think this is where education wants to be; it just doesn’t know how best to get there. The one common theme from teachers I mingled with at educational tech conferences was that touch on the iPad was great, but the technical boundaries and reliance on Apple’s strict iTunes Store for nearly every aspect of functionality were near-universal grievances.

    And so I ask: why don’t we broaden the horizon? Instead of focusing so much energy and attention on the iPad and its Android clones, let’s ask ourselves if this family of single-function devices is really what students need to excel and prepare themselves for post-K12 and life in general. Too many college professors at America’s finest higher-ed institutions question the preparedness of incoming students year after year. Taking the opportunity that is 1:1 computing and dumbing it down to a generation of iPad-aholics is a sad excuse for technology in the classroom.

    So when I learned just a few weeks back that my former high school district is moving to Google Chromebooks for all students, it was a true sigh of relief. There is still some dose of reasoned sanity in the educational sector, it seems. If teachers can integrate the devices to capitalize on their inherent benefits, perhaps we can close that American education gap — one baby step, and district, at a time.

    Derrick Wlodarz is an IT professional who owns Park Ridge, IL (USA) based computer repair company FireLogic. He has more than seven years of experience in the private and public technology sectors, holds numerous credentials from CompTIA and Microsoft, and is one of a handful of Google Apps Certified Trainers & Deployment Specialists in the States. He is an active member of CompTIA’s Subject Matter Expert Technical Advisory Council, which shapes the future of CompTIA examinations across the globe. You can reach out to him at [email protected].

  • Leaked prototype reveals that Apple built the world’s first ‘phablet’

    Apple Phablet Prototype
    The success of Samsung’s (005930) super-large Galaxy Note “phablet” was an important milestone for the company, as it showed that Samsung is capable not just of following industry trends but of starting trends of its own. But now Ars Technica has gotten hold of pictures showing that Apple (AAPL) may have actually been the first company ever to design a “phablet,” albeit not one that’s designed as smartly or stylishly as the Galaxy Note.

  • Rihanna Cancels Concert Due to Laryngitis

    TMZ reported this weekend that pop star Rihanna cancelled her scheduled Sunday night concert in Boston. The singer blamed the cancellation on laryngitis.

    Live Nation stated that the no-show was doctor’s orders for Rihanna, and that the Boston show will now be rescheduled. The singer is still expected to take the stage in Baltimore on Tuesday as scheduled.

    Rihanna used her Twitter feed to apologize to her many disappointed fans in Boston. The pop star said that she was “embarrassed” and “hurt” that she let fans down.

    Rihanna recently revealed that she and the hot-headed Chris Brown are once again an item. Also, a “Rihanna sex tape” link that was going around Facebook last month was revealed to be a malware scam.

  • Tesla delays production of Model X electric car to the end of 2014

    In February 2012, when Tesla first showed off a prototype of its third electric car, a crossover SUV/minivan, the company told us it wanted to launch the car at the end of 2013, with volume sales starting in 2014. Now, in its latest annual report, Tesla says it plans to start production of the Model X in late 2014 (hat tip Los Angeles Times, Autoblog Green).

    The delay sounds like it could be at least a year, which won’t make those early Model X reservation holders happy. Just after showing off the Model X prototype, the company said it had brought in more than $40 million worth of reservations for about 500 cars without any advertising. Of course reservations don’t always translate into sales, and the delay will likely lead to potential customers dropping off that reservation list.

    Tesla Model X with falcon wings fully open

    The Model X is an important car for Tesla, as it’s the company’s third car, and would signal to the world that Tesla has bloomed into a full-fledged automaker with a line of cars for all demographics (see my research note for GigaOM Pro, subscription required). The Model X is supposed to seat seven, has cargo space in both the back trunk and the front trunk, and has those “falcon wing” doors, which are a Tesla-designed double-hinged play on gull wing doors. Tesla designed the doors not only to add that extra cool-factor, but also to appeal to, say, a mom or family driving to the mall and parking in a tight spot.

    Model X

    The Model X is supposed to be built on the core technology of the Model S, but with a different chassis, so Tesla will likely spend less on development of the Model X than the Model S. Tesla told the Los Angeles Times and Autoblog Green that the delay of the Model X production is not expected to have a material impact on the company’s profitability in 2013 or 2014. Tesla’s stock is up 1.22 percent in morning trading.

  • How Nonprofits Prove Their Worth

    Everyone agrees that social enterprises need to use data to assess their effectiveness. But precisely what data should organizations capture? How should they incorporate it into meaningful evaluations that prove their approaches work? And, how should potential funders analyze these evaluations to determine which enterprises are most effective?

    At GiveWell (where I’m co-founder) we’ve analyzed data and evaluations from hundreds of social enterprises and programs to identify standout organizations. We serve donors who want to be confident that their money is going to the most effective programs they can find. The best data and evaluations we’ve seen have two characteristics:

    1. They’re balanced. The data shared has a good chance of demonstrating that an organization’s program is ineffective. Many evaluations we’ve seen appear to be stacked in the organization’s favor from the start, and we wonder whether the evaluation could possibly have reached a negative conclusion and, if it did, whether it would ever have seen the light of day.
    2. They explore alternative causes. The evaluation should consider alternative hypotheses for the observed changes. For example, if the program provides training for farmers and observes rises in farmer incomes, the evaluation should question whether other factors — such as rainfall or changes in the accessibility of local markets — could have driven the changes.

    Let’s look at two charities — VillageReach and the Against Malaria Foundation — that have produced evaluations demonstrating these qualities.

    VillageReach
    VillageReach, an organization focused on last-mile delivery of vaccines to remote rural areas in Mozambique, produced the best evaluation we’ve seen from an organization itself (as opposed to academics). The evaluation stands out because it collects meaningful data directly connected to the program’s social impact, describes its methodology in detail, and discusses its inherent limitations — all characteristics we’ve rarely seen. The data make the case that the program significantly increased immunization rates in the province in which it worked, where childhood immunization rates rose from less than 70% to more than 90% during the course of the project.

    Among other measures, the evaluation presents data on how often health clinics were out of vaccines when VillageReach monitors visited and the change in immunization rates. These data could have shown that the program failed: had immunization rates stayed flat or stockout rates stayed high, we would have known that the approach didn’t work. In fact, the evaluation showed large changes in both measures.

    Nevertheless, it still leaves some important questions unanswered. Did VillageReach cause the increase in immunization rates or did other factors, perhaps related to Northern Mozambique’s emergence from a recent civil war and subsequent donor interest in the area, cause the improvement? To its credit, the organization addresses alternative hypotheses in its evaluation, even if it was not able to arrive at compelling answers to these questions. We don’t have reason to believe that other factors were the primary driver of change, but we also have no way of knowing that they were not. The data made a case for impact but, necessarily and through no fault of VillageReach’s, left important questions of possible alternative causes of the measured changes unanswered.

    Against Malaria Foundation (AMF)
    AMF, which funds distribution of malaria nets, also makes a compelling case but through the use of independent evidence. The organization cites more than 20 randomized controlled trials that demonstrate that distributing nets saves lives. They also rely on additional evidence collected from household surveys across many countries by MEASURE DHS and the World Health Organization to show that the people who receive nets distributed in the type of real-world, large-scale distributions that AMF supports use them about as consistently as those who received them in the rigorous trials.

    The points above make a strong case that net distribution causes a reduction in malaria deaths. But AMF supplements the case with data it collects itself, data that would enable it (and outsiders) to know if its program were failing. Before distributing nets, the organization conducts pre-distribution surveys of all potential recipient households to determine whether they need additional nets. It then performs post-distribution surveys of a sample of recipients every six months to monitor net usage rates. The organization also collects malaria case rate data from local health centers to monitor malaria rates in the regions it serves.

    The combination of independent evidence (which addresses potential alternative causes of impact) and AMF-collected data (which would tell us if its program were failing) makes a compelling case for impact.

    Of the hundreds of organizations we’ve looked at, we’ve rarely seen cases for impact like those described above. Why is that?

    Donors may be the problem. When funders give to organizations based on vague and superficial stories, rely on poor evaluations, or don’t critically assess programs’ impacts at all, they show organizations that there’s no need to produce high-quality evaluations. Even worse, some donors give to organizations with the lowest “overhead ratios” as if that’s a proxy for effectiveness, making investment in assessment near impossible. We want to change the conversation around giving from one that assumes that all well-intentioned people are accomplishing great things to a more open, critical discussion of what organizations do and how well it works.

  • AmeriCorps: Service, Sacrifice, and Solutions

    Each generation of Americans embraces the belief that no problem is too big for a determined group of people to conquer. This challenge is central to national service, which gives thousands of Americans a chance to unite with like-minded people and work toward improving the lives of our most-vulnerable citizens.

    Each year, we take time to honor this American tradition of service as we mark AmeriCorps Week.

    During this week, March 9th – 17th, we salute AmeriCorps members and alums for their service, thank AmeriCorps community partners, and communicate AmeriCorps’ impact on communities and on the lives of those who serve.

    Since 1994, more than 800,000 AmeriCorps members have contributed more than 1 billion hours in service to others across America. Currently, more than 75,000 AmeriCorps members are touching the lives of millions as they tackle challenges that improve lives, strengthen communities, expand economic opportunity, and bolster civic and faith-based organizations. 

    AmeriCorps may be one of America’s best assets, with members making an impact through organizations such as the American Red Cross, Habitat for Humanity, Public Allies, and Teach for America. In addition, AmeriCorps National Civilian Community Corps (NCCC) and VISTA, the longtime anti-poverty program, are transforming communities every day.

  • Not Many Developers Are Working On Wii U Games

    The Wii U has a few obstacles it needs to overcome, but the biggest is definitely its lack of software. A recent poll of developers shows that the Wii U’s software drought may not be ending anytime soon.

    A new survey out of the Game Developers Conference polled a number of developers asking which platforms they’re developing for. Surprisingly, the Wii U came in dead last with only 4.6 percent of developers saying they were working on a title for Nintendo’s console. Other consoles didn’t fare too well either as only 13.2 percent of developers are making a game for the Xbox 360, whereas 13 percent are working on the PlayStation 3.

    Drops in console development leave plenty of room for PC and mobile development to pick up the slack. An astonishing 55 percent of developers said that they’re making their next game for smartphones or tablets. Another 48 percent said that their next game would be on the PC and/or Mac.

    Now, before everybody starts freaking out, these numbers need a little context. For starters, GDC is no longer attended exclusively by people working at a major publisher. In fact, most of the attendees at GDC these past few years have been indie developers. The latest numbers only reinforce that fact as over 53 percent of respondents to the GDC survey identified themselves as an indie developer.

    So, how does this tie into the low console numbers and high mobile numbers? It shows that indie developers are flocking to the PC and mobile markets because they’re easier and cheaper to develop for. It doesn’t hurt that PC and mobile platforms are far more open than consoles, despite Sony’s and Nintendo’s best efforts to change that.

    In short, the high number of indie developers attending GDC ensures that the numbers for indie platforms, like mobile and PC, are going to be higher. Now this doesn’t mean that Nintendo gets away without any criticism. It needs to do a better job of courting third-party developers and indies. The Wii U launched with a great selection of indie titles, and the hardware maker needs to continue that trend going into the future.

    As big AAA games cost more and take longer to develop, smaller indie titles will become increasingly important to the livelihood of any platform. Sony seems to be embracing indie developers with the PS4, saying the console will support any kind of game. Nintendo is reportedly doing much the same with the Wii U. Now these hardware makers just have to prove it by securing quality indie content while supporting these developers with the help they need to realize their vision.

    [h/t: Gamasutra]

  • Who’s the biggest cloud of all? The numbers are in

    And the winner is … Amazon, at least among Infrastructure as a Service (IaaS) providers, by a very wide margin in the fourth quarter of 2012, according to new numbers from Synergy Research Group. Synergy ranked IBM second and, somewhat surprisingly to me anyway, British Telecom third worldwide.

    Overall revenue from IaaS and Platform as a Service (PaaS) made up just 15 percent of the overall cloud infrastructure market, although they were the fastest growing categories. That’s hardly a surprise given the cash funneled into these arenas, not only by Amazon but by Rackspace, HP, IBM, Red Hat, and all the telcos.

    According to a post by TeleGeography, a Synergy partner and the company behind all the cool fiber pipeline maps:

    “In the past year, IaaS and PaaS revenues increased 55% and 57%, respectively. Amazon dominates the IaaS segment, accounting for 36% of revenues, and is quickly approaching PaaS leader Salesforce’s 19% market share. Although well behind Akamai and Level 3 in the CDN/ADN segment, Amazon holds the number three spot with a 7% market share.”

    Rackspace, thanks to its hosting roots rather than its budding OpenStack cloud business, leads the managed hosting segment, followed by Verizon and NTT. Across all segments, Amazon is the market leader in North America and NTT leads in Asia. Europe, the Middle East and Africa, or EMEA, is a battleground hotly contested by France Telecom-Orange, British Telecom, and Deutsche Telekom.

    Content Delivery Network (CDN) leader Akamai remains at the front of the pack in that category, followed by Level 3 and Amazon.

    Numbers like these are fascinating snapshots of what will doubtless be a changing market. I was surprised to see Amazon so strong in PaaS — it was ranked second after Salesforce.com. Amazon’s Elastic Beanstalk PaaS just doesn’t seem to have that much traction — but definitions of PaaS vary, and the array of higher-level services that Amazon offers atop its IaaS foundation (but that aren’t part of Elastic Beanstalk) could be considered PaaS-like. Salesforce.com’s PaaS tally presumably includes both Force.com and Heroku.

  • Mr. Wolfdog Is Old Spice’s New Marketing Chief [VIDEO]

    Old Spice has just introduced their new chief director of marketing to the world, and he’s a wolfdog. This is going to go places, I can feel it.

    “Sometimes you gotta eat people, America. That’s how business works.”

  • Here comes the real

    Inflation is finally biting Brazilian policymakers. The real strengthened around 1.5 percent last week without triggering the usual shrill outcries from government ministers. Nor did the central bank intervene in the currency market, even though the real is the best-performing emerging-market currency this year. The bank in fact shifted towards a more hawkish policy stance at its March meeting, a move that seems to have had the blessing of the government.

    Friday’s data showed the benchmark consumer price index, IPCA, up 0.6 percent, for a year-on-year inflation rate of 6.31 percent. President Dilma Rousseff, who faces elections next year, took to the airwaves soon after to reassure voters of her commitment to taming inflation, announcing a series of tax cuts. That is effectively a signal that there is now no political constraint on raising interest rates. According to the political risk consultancy Eurasia Group:

    If the government doesn’t enact measures during the first half of this year to anchor inflationary expectations, Rousseff would run one of two risks. She would either run the risk of inflation starting to eat into the disposable income of families in a manner that could hurt her politically, or relatedly, put the central bank in a position of having to raise interest rates more aggressively later in the year to control inflation with more negative repercussions to growth.

    Accordingly, swap markets and analyst polls alike are penciling in more rate rises — the Selic rate is seen rising 75 bps by end-2013, to 8 percent. Foreigners, for their part, are betting on more real strength. The currency has broken 1.95 per dollar, a level that has previously triggered intervention (after spending a year hobbled in the 2.0-2.10 range), and the next level to watch may be 1.9357, last hit in May 2009.

    Bernd Berg, head of emerging currency strategy at Credit Suisse, reckons markets will keep testing the central bank’s tolerance for currency appreciation, and that the bank could respond with some intervention should the currency appreciate too fast. But it is not too far-fetched to expect the real to trade at 1.90 per dollar later this year, he says.