Author: Amanda Alvarez

  • Steering clear of the iceberg: three ways we can fix the data-credibility crisis in science

    As I detailed yesterday, science has a data-credibility problem. There’s been a rash of experiments that no one can reproduce and studies that have to be retracted, all of which threatens to undermine the health and integrity of a fundamental driver of medical and economic progress. For the sake of the researchers, their funders and the public, we need to boost the power of the science community to self-correct and confirm its results.

    In the eight years since John Ioannidis dropped the bomb that “most published research findings are false,” pockets of activist scientists from both academia and industry have been forming to address this problem, and it seems this year that some of those efforts are finally bearing fruit.

    The research auditors

    One interesting development is that a group of scientists is threatening to topple the impact factor, a journal-level citation metric that is widely used as a proxy for the quality of the individual studies a journal publishes. Because this filter for quality research rests on journal prestige, some scientists and startups are beginning to use alternative metrics in an effort to refocus on the science itself (rather than on the publishing journal).

    Taking a cue from the internet, they are citing the number of clicks, downloads, and page views that the research gets as better measures of “impact.” One group leading that charge is the Reproducibility Initiative, an alliance that includes an open-access journal (the Public Library of Science’s PLOS ONE) and three startups (data repository Figshare, experiment marketplace Science Exchange, and reference manager Mendeley). The Initiative isn’t trying to solve fraud, says Mendeley’s head of academic outreach William Gunn. Rather, it wants to address the rest of the dodgy data iceberg: the selective reporting of data, the vague methods for performing experiments, and the culture that contributes to so many scientific studies being irreproducible.

    The Initiative will leverage Science Exchange’s network of outside labs and contract research organizations to do what its name says: try to reproduce published scientific studies. It has 50 studies lined up for its first batch. The authors of these studies have opted in for the additional scrutiny, so there is a good chance much of their research will turn out to be solid.

    Whatever the outcome, though, the Initiative wants to use this first test batch to show the scientific community and funders that this kind of exercise is value-adding despite the costs, which are estimated to be $20,000 per study (about 10% of the original research price tag, depending on the study).

    Gunn likens the process to a tax audit: not all studies can or should be tested for reproducibility, but the likely offenders may be among those that have high “impact factors,” much like high-income earners with many deductions warrant suspicion.

    A stumbling block may be the researchers themselves, who like many successful people have egos to protect; no one wants to be branded “irreproducible.” The Initiative stresses that the replication effort is about setting a standard for what counts as a good method, and finding predictors of research quality that supersede journal, institution or individual.

    The plumbers and librarians of big data

    While the Reproducibility Initiative is trying to accelerate science’s natural self-correction process, another nascent group is working on improving the plumbing that serves data. The Research Data Alliance (RDA), which is partially funded by the National Science Foundation, is barely a few months old, but it is already uniting global researchers who are passionate about improving infrastructure for data-driven innovation. “The superwoman of supercomputing” Francine Berman, a professor at Rensselaer Polytechnic Institute, heads up the U.S. division of RDA.

    The RDA is structured like the World Wide Web Consortium, with working groups that produce code, policies for data interoperability, and data infrastructure solutions. There is not yet a working group for data integrity, but it is within RDA’s scope, says Berman. While the effort is still in its infancy, the broad goals would be to come up with a way to make sure that the data contained in a study is more accessible to more people, and also that it doesn’t simply disappear at some point because of, say, storage issues. She says that with data we are in a position like the Industrial Revolution, when a new social contract had to be created to guide how we do research and commerce.

    The men who stare at data

    You can build places for data to live and spot-check it once it’s published, but there are also things researchers can do earlier, while they’re “interrogating” the data. After all, says Berman, you’re careful around strangers in real life, so why jump into bed with your data before you’re familiar with it?

    Visualization is one of the most effective ways of inspecting the quality of your data, and getting different views of its potential. Automated processing is fast, but it can also produce spurious results if you don’t sanity-check your data first with visual and statistical techniques.

    Stanford University computer scientist Jeff Heer, who also co-founded the data munging startup Trifacta, says visualization can help spot errors or extreme values. It can also test the user’s domain expertise (do you know what you’re doing, and can you tell what a complete or faulty data set looks like?) and prior hypotheses about the data. “Skilled people are at the heart of the process of making sense of data,” says Heer. Someone with domain expertise who brings their memories and skills to the data can spot new insights. Context, in the form of metadata, is rich and omnipresent, Heer argues, as long as we’ve collected the right data the right way; it can aid in interpretation and combat the determinism of blindly collected and reported data sets.
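    To make the spot-checking idea concrete, here is a minimal, hypothetical Python sketch of the kind of visual and statistical sanity check described above. It is not from Trifacta or any of the researchers quoted; the data set, the injected bad value, and the z-score threshold are all assumptions for illustration.

    ```python
    # A minimal sketch of sanity-checking data visually and statistically
    # before handing it to an automated pipeline. Illustrative only.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Hypothetical measurements with one corrupted entry standing in for a data-entry error.
    values = rng.normal(loc=10.0, scale=1.0, size=200)
    values[42] = 105.0  # the kind of extreme value a blind pipeline would happily average in
    df = pd.DataFrame({"measurement": values})

    print(df["measurement"].describe())  # quick summary: min/max already expose the outlier

    # Simple z-score flag for suspicious points before any modeling step.
    z = (df["measurement"] - df["measurement"].mean()) / df["measurement"].std()
    print(df[z.abs() > 4])

    # Two quick views of the distribution; the stray point is obvious in both.
    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    axes[0].hist(df["measurement"], bins=30)
    axes[0].set_title("Histogram")
    axes[1].boxplot(df["measurement"], vert=False)
    axes[1].set_title("Box plot")
    plt.tight_layout()
    plt.show()
    ```

    The point is not the specific threshold but the habit: a few lines of summary statistics and plotting before automated processing is cheap insurance against spurious results.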

    The three-pronged approach — better auditing, preservation and visualization — will help steer science away from the iceberg of unreliable data.


  • Dodgy data: the iceberg to science’s Titanic

    There’s an epidemic going on in science, and it’s not of the H7N9 bird flu variety. The groundbreaking, novel results that scientists are incentivized to publish (and which journalists are then compelled to cover) seem peppered with gaps: experiments that no one can reproduce, studies that have to be retracted, and an integrity crisis of iceberg proportions, both for scientists personally and for those who rely — medically, financially, professionally — on the data they produce.

    As a recent PhD, I can attest to the fact that many researchers first experience the iceberg as no more than an ice cube-sized annoyance impeding their work towards a Nobel Prize. Maybe the experimental instructions from a rival lab don’t quite seem to work. Maybe the data you’re trying to replicate for your homework assignment don’t add up, as UMass graduate student Thomas Herndon found. If a spreadsheet error that underpins sweeping global economic policy doesn’t convince you, here is some more evidence that we are already scraping the iceberg of a research and data reliability crisis.

    Newton, we have a problem

    All those life-saving cancer drugs? Drug makers Amgen and Bayer found that the majority of the landmark preclinical cancer studies they tried to replicate couldn’t be reproduced.

    What about neutral reporting of scientific results? A number of pervasive sources of bias in the scientific literature have been discovered, such as selective reporting of positive results, data suppression, or academic competition and pressure to publish. The hunger for ever more novel and high-impact results that could lead to that coveted paper in a top-tier journal like Nature or Science is not dissimilar to the clickbait headlines and obsession with pageviews we see in modern journalism. (The long-form investigative story is experiencing a bit of a digital renaissance, so maybe that bodes well for the appreciation of “slow science.”)

    Readers of the New York Times Magazine will recall last month’s profile of Diederik Stapel, prolific data falsifier and holder of the number-two spot in the list of authors with the most scientific paper retractions. ScienceIsBroken.org is a frank compendium of first-hand anecdotes from the lab frontlines (the tags #MyLabIsSoPoor and #OverlyHonestMethods reflect the dire straits of research funding and the corner-cutting that can result) and factoids that drive home just how tenuous many published scientific results are (“Only 0.1% of published exploratory research is estimated to be reproducible”). The provocative title of a 2005 essay by Stanford professor John Ioannidis sums it up: “Why Most Published Research Findings Are False.”

    Interestingly, unlike in science, where many results are interdependent and on-the-record, in big data accuracy is not as much of an issue. As my colleague Derrick Harris points out, for big data scientists the ability to churn through huge amounts of data very quickly is actually more important than complete accuracy. One reason for this is that they’re not dealing with, say, life-saving drug treatments, but with things like targeted advertising, where you don’t have to be 100 percent accurate. Big data scientists would rather be pointed in the right general direction faster — and course-correct as they go — than have to wait to be pointed in the exact right direction. This kind of error-tolerance has insidiously crept into science, too.

    Lies, damn lies, and statistics

    Fraud (the principal cause of retractions, which are up roughly tenfold since 1975) is not a new phenomenon, but digital manipulation and distribution tools have increased the spread and impact of science, both faulty and legitimate, beyond the confines of the ivory tower. Patients now look for new clinical trial data online in search of cures, and studies that are ultimately retracted (with a delay of 12 years, in the case of the infamous MMR vaccine study published in The Lancet) can persist in their effects on public health, and public opinion.

    Even in fields that don’t end up in the news or have direct medical impact, the effects of a retraction can be broad and disconcerting. When a lab at the Scripps Research Institute had to retract five papers because of a software error, hundreds of other researchers who based their work on those findings, and who used the same software, were affected. When the proverbial giants on whose shoulders scientists stand turn out to be a house of cards, years of effort can go down the drain.

    For those providing the funding, like foundations and the federal government, events like this present a further justification to tighten the purse strings, and for the general public, they serve to deepen the distrust of science and increase the reluctance to support audacious (and economically and medically important) projects. This is not a PR crisis borne out of the actions of greedy charlatans or “nutty professors” – they are but a highly visible minority. The real problems for research reproducibility have to do with how we handle data, and are much more benign – and controllable.

    Openness to the rescue?

    As I reported last week, blind trust in black box scientific software is one part of the problem. Users of such software may not fully understand how the scientific sausage they end up with is made. Om has written about the deterministic influence big social data can have on people and businesses. As the velocity and volume of data have increased, the practices of science have struggled to keep pace. (I will discuss some potential solutions to the data-credibility problem in a separate post tomorrow.)

    The question of whether the retraction and irreproducibility epidemic is spreading is akin to the debate over whether spiking autism or ADHD rates, for example, are real or the result of better diagnoses. The recent insight that retractions are correlated with impact factor (a measure of journal prestige derived from the number of citations papers in journals receive) seems to suggest that much of the most valued and publicized science could be less than trustworthy. (Retractions can result from innocuous omissions or malfunctioning equipment as well as more severe and willful acts like plagiarism or fraud.)

    Some see the phenomenon not as an epidemic but as a rash, a sign that the research ecosystem is getting healthier and more transparent. Openness is indeed a much-touted solution to the woes of science, with even the White House mandating an open data policy for government agencies; the National Institutes of Health (the major biomedical research funder in the U.S.) and many foundations already require research they fund to be publicly accessible.

    Efforts to combat seedy science are popping up at an almost viral pace. A new Center for Open Science has launched at the University of Virginia, and the twitterati of the ScienceOnline un-conference spearhead a number of initiatives to improve the practice, publishing and communication of science.

    Motivation through tastier carrots

    Culling the bird flu epidemic. Getty Images.

    Many in these communities point to a broken incentive structure as the source of compromised science. Novel, “breakthrough” results are rewarded, though the foundation of science rests in the power of real physical phenomena to be experimentally replicated. If the best research – the real McCoys in a sea of trendy one-hit wonders – can be identified and rewarded, science, industry and the public stand to benefit. The challenge is finding predictors of high-quality research, beyond the traditional impact factor metric.

    Much like scientists can rapidly sequence the genetic code of new influenza strains like H7N9, they are also starting to identify systemic frailties, attitudes and entrenched practices, and take measures to inoculate science against them. Stay tuned for tomorrow’s post, which will cover three remedies to tame the epidemic and revitalize reproducible research.


  • The devil you know: entrepreneurs prefer current ventures to new ones

    Whether in gambling or relationships, many people have the feeling that they just need to stick with their chosen course of action if they’ve already invested blood, sweat, and dollars in it. Would-be entrepreneurs are also taught to be persistent, a quality that is often highly praised by both fellow entrepreneurs and the press. It is therefore not surprising that entrepreneurs are subject to many of the same biases that cause people to persist with their investments, and don’t always behave in ways that maximize outcomes when presented with lucrative new opportunities, a new study says.

    The sunk cost fallacy is the nearly automatic psychology that causes you to stick with losing bets rather than jump ship. A strategy to combat this fallacy — mentally focusing on the potential gains of cutting and running rather than on the losses — was outlined last week in The Atlantic. Persistence, an important quality in entrepreneurs, can also be amplified by self-justification or normative pressures to stick with in-progress endeavors, according to researchers at Oregon State and Utah State universities. They surveyed 135 (mostly male, middle-aged) high-tech entrepreneurs in the U.S. and found that entrepreneurs don’t always pursue decisions that would maximize utility if doing so requires giving up a current business venture.

    Thinkstock

    Like mere mortals, entrepreneurs are also motivated in their decision-making to maximize potential financial and non-financial benefits (value) given the likelihood of achieving those benefits (expectancy). Entrepreneurs also have a host of other factors to consider, like their past startup experience, the size of their current business, potential psychological, social, and financial switching costs, and whether starting a new business meshes with their personal values around autonomy and risk aversion, for example. When it comes to leaving their current business and starting a new one, however, it turns out that the way expectancy and value play into the decision isn’t straightforward.

    Here’s how the survey worked, in the researchers’ own words:

    “The participants were asked to rate the likelihood that they would pursue a series of hypothetical entrepreneurial opportunities. Each hypothetical opportunity was presented as a comparison with the participant’s current business across four criteria: value of financial returns, likelihood of financial returns, value of non-financial benefits, and the likelihood of non-financial benefits.”

    To test the effects of expectancy and value on entrepreneurs’ persistence with their existing business, the researchers set different prior conditions: in some cases the entrepreneurs would have the resources to continue with their current business as well as start a new one, while in others they had to choose between staying with the old business or launching a new one. These two conditions resulted in unexpected behaviors.

    If the potential value of the current business was higher than that of the alternative, or if the probability of success was higher for the current venture than for the alternative, entrepreneurs tended to discount potential highly successful or financially rewarding outcomes associated with the hypothetical new business venture. That is, expectancy and value weren’t simply additive factors that predicted whether entrepreneurs would spring for a new business opportunity.

    There are a number of factors that could be at play here beyond just money. The researchers also looked at the size of the company (entrepreneurs were more likely to leave bigger firms) and past startup experience, which could help entrepreneurs better evaluate the market (over 80 percent of those surveyed had previous startups under their belt). Uncertainty (as opposed to risk) may also play a role, with entrepreneurs opting to stick with their existing venture where some amount of uncertainty may already have been eliminated.

    Of course, an online survey is no match for the real world, where entrepreneurs would have much more information and time to guide their business decisions. But if entrepreneurs can self-diagnose and recognize tendencies for overconfidence, counterfactual thinking, and a drive to avoid uncertainty, they can potentially maximize their returns and generate some kickass businesses in the process.


  • Underwater batteries are making a splash for energy storage

    Hydroelectricity generation exploits the tremendous height differential that occurs naturally at waterfalls or artificially at dams as water flows through the system. Now, efforts are underway to harness a differential of another sort for both energy storage and generation: the pressure under the sea. A Norwegian company called Subhydro is making forays into underwater hydroelectric power plants, and Canadian company Hydrostor is creating an underwater grid storage system.

    Think of water rushing in through the open hatch of a submarine, and you get an idea of the forces at work underwater. Atmospheric pressure and the weight of the water combine to create pressures that climb with increasing depth. At a depth of 400 meters (almost a quarter mile), for example, the pressure is about 40 atmospheres, one atmosphere being the pressure we experience at sea level. Subhydro envisions installing large concrete tanks at depths of 400-800 meters, and the deeper the better for maximizing energy generation.
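    The 40-atmosphere figure follows directly from the hydrostatic pressure formula. The short calculation below is an illustration only; the seawater density and the depths are assumed round numbers, not Subhydro specifications.

    ```python
    # Hydrostatic pressure at depth: P = P_atm + rho * g * h (illustrative values only).
    P_ATM = 101_325.0      # sea-level atmospheric pressure, Pa
    RHO_SEAWATER = 1025.0  # approximate seawater density, kg/m^3
    G = 9.81               # gravitational acceleration, m/s^2

    def pressure_at_depth(depth_m: float) -> float:
        """Total absolute pressure in pascals at the given depth."""
        return P_ATM + RHO_SEAWATER * G * depth_m

    for depth in (400, 800):
        p = pressure_at_depth(depth)
        print(f"{depth} m: {p / 1e6:.1f} MPa, about {p / P_ATM:.0f} atmospheres")
    # 400 m works out to roughly 41 atmospheres, consistent with the ~40 quoted above.
    ```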

    When the “hatch” is opened, water is allowed to flow into the tanks through a turbine that drives an electric generator. The more and larger the tanks, the longer the generation can go on. When the tanks are filled, the turbine can be reversed to pump out the water, a process that draws on the power grid and consumes energy. In this way, the pumped storage plant functions like an underwater battery that can be recharged, much like a hydroelectric plant on dry land pumps water into an upper reservoir after it has passed through a turbine.
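    How much energy one tank can hold follows from the same hydrostatic relationship: the work done by water flooding a tank at depth is roughly the gauge pressure times the tank volume. The tank volume and depths below are purely illustrative assumptions, not Subhydro figures.

    ```python
    # Rough stored energy for one submerged tank: E ~ rho * g * depth * volume.
    # Ignores the tank's own height relative to its depth; illustrative only.
    RHO_SEAWATER = 1025.0  # kg/m^3
    G = 9.81               # m/s^2

    def tank_energy_mwh(depth_m: float, volume_m3: float) -> float:
        joules = RHO_SEAWATER * G * depth_m * volume_m3
        return joules / 3.6e9  # 1 MWh = 3.6e9 J

    # Assumed example: a 1,000 cubic-meter tank at 400 m and at 800 m.
    for depth in (400, 800):
        print(f"{depth} m, 1000 m^3 tank: ~{tank_energy_mwh(depth, 1000):.1f} MWh per cycle")
    # Doubling the depth doubles the energy per cycle, which is why deeper is better.
    ```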

    According to Subhydro, the round-trip efficiency of the underwater plant is about 80 percent, comparable to efficiencies achieved at conventional pumped-storage plants. Integrating the plant with wind or solar farms could create a grid storage system that uses excess renewable generation to pump out the tanks, then floods them back through the turbines when demand peaks.

    Another approach to underwater grid storage is in the works at a depth of 80 meters in Lake Ontario, just offshore of Toronto. There, Hydrostor will begin building underwater tanks that will hold compressed air. Surplus energy from renewables (wind, solar) will provide the energy to compress air from the atmosphere and pump it into the tanks. To put energy back into the grid, the air is allowed to surface, driving generators as it expands back to atmospheric pressure.

    Hydrostor is partnering with Toronto Hydro to build the 1MW/4MWh compressed air energy storage demonstration facility. The system will run at 70 percent efficiency, according to Hydrostor. Earlier this month MaRS Cleantech Fund announced an investment in Hydrostor’s tech.
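    The 1MW/4MWh rating and the 70 percent efficiency figure translate into simple back-of-the-envelope numbers; the short calculation below only restates the figures quoted above.

    ```python
    # Back-of-the-envelope numbers for the stated 1 MW / 4 MWh, 70%-efficient system.
    power_mw = 1.0          # maximum discharge power
    capacity_mwh = 4.0      # energy delivered per full discharge
    round_trip_eff = 0.70   # fraction of input energy recovered

    hours_at_full_power = capacity_mwh / power_mw
    input_required_mwh = capacity_mwh / round_trip_eff

    print(f"Discharge duration at full power: {hours_at_full_power:.0f} hours")
    print(f"Surplus energy needed to charge it: ~{input_required_mwh:.1f} MWh")
    # So roughly 5.7 MWh of surplus wind or solar yields 4 MWh back onto the grid.
    ```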

    Clearly, there are still some hurdles to overcome before energy companies everywhere take the plunge. The environmental impact of offshore submerged facilities will need to be considered, as will the building materials themselves. To withstand the underwater pressure, Subhydro is working with research partners to develop thin concrete reinforced with steel fibers, while Hydrostor’s system will use inflatable polyester bags to hold compressed air. Building underwater facilities is itself energy-intensive, so whether the process can be made cost- and energy-effective will determine whether cleantech is ready to get its feet wet.

    Image via Knut Gangåssæter/Doghouse


  • Hydrogen energy the chloroplast way: solar-to-fuel with the artificial leaf

    With atmospheric carbon dioxide recently hitting a record 400 parts per million, the discovery of alternative renewable energy sources has taken on added urgency. One effort is the so-called “artificial leaf,” a photosynthetic system that uses light energy to split water molecules and produce hydrogen. Researchers at Lawrence Berkeley National Lab have recently published details of their new nanowire-based system that mimics the way plant chloroplasts transport charged particles.

    The artificial leaf’s titanium dioxide and silicon nanowires are arranged in an array that actually resembles a microscopic forest of straight pines. The key to achieving good solar-to-fuel conversion efficiency is the integration of the components — the nanowire semiconductors that absorb light, an interfacial layer, and co-catalysts for the water splitting reaction — in a structure that resembles and functions like a chloroplast.

    Plants are so efficient at turning sunlight into sugars partly because of what is termed the “Z-scheme”: the daisy chain of molecules that deliver a charged electron from a chloroplast to molecular energy production in the cell. The artificial leaf uses the Z-scheme, too, but with the silicon nanowires responsible for the hydrogen generation and the titanium dioxide nanowires contributing to the formation of by-product oxygen. The use of two semiconductor materials allows for a large part of the sunlight spectrum to be harnessed (the silicon works off visible light and the titanium dioxide uses UV), while the forest-like array of nanowires increases the surface area for the solar-to-fuel reactions, which are helped along by embedded catalysts.

    The artificial leaf has a conversion efficiency of 0.12 percent, comparable to that of natural photosynthesis. To be commercially viable, the efficiency number will have to get into the single digit percentages, and companies like MIT spin-off Sun Catalytix have already chosen to refocus their efforts away from artificial leaf tech. Replacing the current-limiting titanium dioxide anode in the system is the Berkeley researchers’ next target for improving conversion efficiency.


  • Black box software: a problem for science that extends to big data

    You probably don’t need to know how a calculator makes two plus two equal four, or how your favorite smartphone app works, but the way the background software is implemented can make a big difference to the output. Slight rounding errors or slow load times in these cases might be annoying, but when you scale up to big data modeling, for instance, you might want to take a closer look at the software running your calculations before you click go.

    Blind trust in black box, or click-and-run, software is a growing problem in science, according to a commentary published Thursday in the journal Science, and the concern extends beyond formal research to other domains that use high performance computing.

    The researchers who addressed the “troubling trend in scientific software use” were motivated by a growing unease that the abundance of powerful software is letting scientists derive answers without a thorough understanding of what the software is doing. Software snafus have been responsible for some high-profile data misinterpretations and retractions.

    This wouldn’t normally cause a blip on the average citizen’s radar, but now a lot of these scientific conclusions have real-world implications, from climate modeling and weather forecasting to high volume financial trading. In any domain using big data, misplaced trust in the power of software can be problematic, particularly when the decision makers don’t know what the software they are using is doing, said lead author Lucas Joppa of Microsoft Research.

    So what does ecology have to do with any of this? Joppa is an ecologist by training, and works on computational techniques in that field that may also have applications for big data more broadly. He and his colleagues surveyed scientists in a sub-field of ecology — species distribution modeling (SDM) — to find out how they choose software and how well they understand its inner workings.

    “Lots of SDM techniques are only available as computational methods, but there is a lot of discourse going on in the literature about whether the methods themselves are correct,” said Joppa. Scientists use SDM to forecast where plants and animals will be in the future given current numbers, known habitats, and climate change. It’s a niche area of research, but the disquieting survey results should be noted in any domain where forecasting is done by plugging data into software.

    Only 8 percent of the more than 400 scientists who responded had validated their modeling software against other methods. “The number speaks for itself,” said Joppa. “The real crux of the problem is the results from software being published in a peer-reviewed journal, versus the software itself having been peer-reviewed,” which is rare. Software packages, whether proprietary or not, are often black box systems that can’t be opened and inspected. Even if you can get under the proverbial hood, like with open source software, said Joppa, most people will still have no idea what they are looking at, or how to judge its quality.


    To top it all off, establishing confidence in what your software is doing runs into a massive computational catch-22: how do you know the software is giving you the right answer if you can’t get the answer without running the software? The level of confusion over what algorithms are doing in the SDM field is illustrated by a debate over which of two statistical techniques is superior. It turns out, Joppa explained, that the two techniques were mathematically equivalent, but the ways they were implemented in software resulted in big predictive differences.
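    Joppa’s point that two mathematically equivalent techniques can diverge once they are turned into code is easy to reproduce outside ecology. The sketch below is not the SDM case he describes; it is a generic, hypothetical illustration using two algebraically identical variance formulas that give visibly different answers in floating point.

    ```python
    # Two algebraically equivalent variance formulas that disagree once implemented
    # in floating point. A stand-in for the SDM debate described above, not the actual case.
    import random

    random.seed(1)
    # A small spread sitting on top of a large offset: the worst case for naive formulas.
    data = [1e9 + random.gauss(0, 1) for _ in range(100_000)]

    def variance_two_pass(xs):
        """Textbook definition: mean of squared deviations from the mean."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def variance_one_pass(xs):
        """Algebraically identical E[x^2] - (E[x])^2, but prone to catastrophic cancellation."""
        n = len(xs)
        s, sq = 0.0, 0.0
        for x in xs:
            s += x
            sq += x * x
        return sq / n - (s / n) ** 2

    print("two-pass:", variance_two_pass(data))   # close to the true variance (~1.0)
    print("one-pass:", variance_one_pass(data))   # can be wildly off, even negative
    ```

    A reviewer reading only the equations would see no difference between the two; the discrepancy lives entirely in the implementation, which is exactly the part that rarely gets peer-reviewed.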

    This sort of mix-up isn’t surprising given the messy nature of software development (if you can even call it that) in research environments. Joppa lauded efforts like Software Carpentry that teach scientists basic software fundamentals for better programming, and said the days of getting a doctorate by merely pushing a button are over.

    “Scientists themselves can learn a bare minimum of software engineering,” said Joppa. On the flip side, he said computer science students should have more exposure to scientific methods. “People with traditional software engineering training become uncomfortable with the way scientists want to work with software, where the design and specs are constantly changing. The way that scientific software is built is fundamentally different from consumer apps.”

    Developers of scientific software, like MathWorks or SAS, may want to watch this space. If Joppa’s suggestions are implemented, journals may start requiring that even proprietary software be opened up for inspection and peer review. Nearly half of the surveyed ecologists report using the free statistical language R as their primary software, so maybe there is hope yet, both for open, inspectable code and for computational science becoming more accessible while yielding trustworthy, high-impact results.


  • This electromagnetic coil can detect brain injuries wirelessly

    This head-mounted coil can’t read your mind, but it can tell if you’ve experienced brain trauma. The cheap medical diagnostic, a volumetric electromagnetic phase-shift spectroscopy (VEPS) clinical coil, is meant to function as a substitute for full CT scans in parts of the world where those aren’t available, and has now been field-tested in Mexico.

    The VEPS technique isn’t quite the same as reading brain waves: it looks at perturbations made by the brain tissue in a weak electromagnetic field. If excess fluid is present due to swelling or bleeding, it will show up as blips in the conductivity. This can be indicative of brain trauma that might not be externally obvious, but that could require time-sensitive treatment.

    A small-scale study using the VEPS coil has now validated that it can indeed distinguish between healthy adults and individuals who were known to have brain trauma from earlier CT scans. This means that VEPS could stand in for CT scans in places like rural Mexico. Writing in the journal PLOS ONE, the researchers also demonstrated that VEPS can tell brain edema, or swelling, from bleeding. Another insight from the study was that the aging brain’s electromagnetic transmission starts to resemble that of a younger brain with a hematoma (bleeding).

    Two of the authors, Boris Rubinsky of UC Berkeley and Cesar Gonzalez from Mexico’s National Polytechnic Institute, are patent holders on some VEPS-related IP that has been licensed to a company called Cerebrotech. Cerebrotech has received funding from TriStar Technology Ventures and the National Space Biomedical Research Institute. The research published today, however, was not done with Cerebrotech’s involvement.


  • Makers go to market with hardware startups for learning, play, and IoT

    Hardware accelerator HAXLR8R unveiled its newest class of startups at a demo day in San Francisco on Monday. This year’s crop of startups skewed heavily towards gadgets for learning, play, and the internet of things, with devices like a connected vibrator, bike handlebars with technicolor lights and GPS tracking, and the hardware hacker’s favorite product — a drone. The entrepreneurial teams hailed not only from the U.S. and China but also Singapore, Canada, and the U.K. After 111 days of perfecting their prototypes in Shenzhen, China, the ten teams returned to the Bay Area to pitch investors and enter the vanguard of the “hardware renaissance,” as HAXLR8R co-founders Cyril Ebersweiler and Sean O’Sullivan put it.

    In addition to seeking seed funding, many of the startups are commencing or have already launched Kickstarter campaigns. Engaging in crowdfunding may reflect the unwillingness of the VC ecosystem to fully back hardware-based efforts, but it may also speak to the lack of staying power for quirky, fun gadgets, which are a dime a dozen on Kickstarter and other similar sites. While many of the products presented at the demo day were indeed colorful, fun, and eye-catching, I wondered whether their creators had harnessed the full potential of the fast product iteration and vast component availability of Shenzhen touted by the HAXLR8R team. Many of the ideas seemed to address decidedly first-world desires or needs, rather than the stated goal of “solving real problems or creating a meaningful change to our current technological state.”

    Ironically, one way some of the startups are innovating is not so much with their product or design, but in their business model. Some of the companies are using their apps or software as a Trojan horse for the actual hardware product, while others are using platform-as-a-component plans. One company is just going for the “sex sells” strategy (literally). Here’s a recap of HAXLR8R’s inaugural class last year, and below are this year’s top six startups to watch.

    LightUp

    The brainchild of two Stanford students, LightUp is like a digital erector set. Magnetic snap blocks let kids build working circuits and learn about electronics through trial and error. Besides the physical play kits, which will be available via Kickstarter for $30-200, LightUp also has an augmented reality app that acts like a tutor and lets you visualize current flow in a circuit. The Arduino-compatible system is powered by a button battery, can be used for building all kinds of electronics projects, and will be launched at select partner schools in August.

    HEX Air Robot

    Chinese company HEX is betting that the FAA will follow through with opening up the skies to commercial drones in 2015. They’ve developed a modular auto-pilot system that they will sell to the DIY drone community, as well as two drone bodies, the smaller of which will debut on Kickstarter next month. HEX’s system also includes an app to launch, land, and have the drone follow the user like an airborne puppy. For photo enthusiasts, the mini HEX includes a camera, and the full-sized drone has a detachable auto-balancing arm for GoPro camera integration.

    Molecule Synth

    Honeycombs meet Legos in the build-your-own musical instrument from Molecule Synth. It has color-coded parts for pitch control, sound generation, and sensors, and can hook up to an iOS device or a keyboard. A mobile app lets users share compositions, and an upcoming Bluetooth module will give the synth drum machine capabilities. This is definitely the kit for music geeks who want a hyper-customized system, or DJs who want to out-Skrillex Skrillex.

    Helios

    Helios’ mission is to solve the dual dilemmas of safety and security for the hipster biker. Not only does the high-tech handlebar have blinker indicators for turning, it has a built-in super-bright headlight and a GPS tracker. An iOS app lets you change the blinker colors at will, and can even coordinate the indicators with turn-by-turn directions. The $199 bullhorn or drop bars also have two built-in rechargeable batteries and a dedicated battery for the GPS, giving you a 15-day window to find your bike (or probably just the removed handlebars) should it get stolen.

    Spark Devices

    Spark makes connected hardware built around its Arduino-compatible Wi-Fi chip, which can be embedded into existing electronics. This “core” tech is gaining traction with early-adopter hobbyists on Kickstarter. For enterprise customers, Spark provides a cloud service that lets Spark-connected devices talk to each other or to online services via a REST API. For more on Spark, check out Stacey Higginbotham’s recent post.
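    For a rough sense of what “talking to a device over a REST API” looks like in practice, here is a minimal, hypothetical Python sketch. The endpoint URL, token, device ID, and function name are illustrative placeholders, not Spark’s documented interface.

    ```python
    # Hypothetical sketch of driving a cloud-connected device over a REST API.
    # The URL shape, token, device ID, and function name are placeholders for
    # illustration, not Spark's documented calls.
    import requests

    API_BASE = "https://api.example-device-cloud.com/v1"  # placeholder cloud endpoint
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"                     # issued by the cloud service
    DEVICE_ID = "0123456789abcdef"                         # the embedded Wi-Fi core to address

    def call_device_function(name: str, argument: str) -> dict:
        """POST a named function call to the device and return the JSON response."""
        resp = requests.post(
            f"{API_BASE}/devices/{DEVICE_ID}/{name}",
            data={"access_token": ACCESS_TOKEN, "args": argument},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        # e.g. toggle an LED wired to the embedded chip from anywhere on the internet
        print(call_device_function("led", "on"))
    ```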

    Vibease

    Most other devices are smart now, so why not vibrators? Vibease has over 1,000 pre-orders for its $99 rechargeable Bluetooth vibrator. Their companion “fantasy marketplace” app aims to be the iTunes for erotica, with crowdsourced audio fantasies that synchronize with the vibrator’s intensity. The app plus licensing of the Vibease chip to other sex toy manufacturers will form the core of Vibease’s business model.



  • Crowdsource your carbon: power plant mapping project seeks location, CO2 data

    A citizen science project based at Arizona State University has put out a call for data: they want eagle-eyed environmentalists to help map power plants on Google Maps. The leader of the Ventus project, a carbon emissions modeler, wants to use the crowdsourced data to improve global carbon cycle models.

    The estimated 30,000 power plants worldwide account for about 40 percent of global carbon emissions. While about 80 percent of these plants can be found on a map, there are still unknowns about what these plants are doing, like what fuel they use and how much electricity they generate. The ASU scientists hope to source this information by turning Ventus into a competition: the player who provides the most usable information within the first year will get a trophy and be included as an author on a research publication, plus (according to the website) gain serious street cred “among our very elite, newly-formed global group of citizen scientist enviro-nerds.”

    To contribute to Ventus, users are asked to input the exact location of a power plant, its carbon dioxide emissions, what fuel it uses, and its electricity output. Not all of these pieces of information are needed, and users can also edit existing information for 25,000 power plants that have already been mapped, using data from the Center for Global Development. The researchers are specifically hoping to target people who live near or work at power plants, especially the thousands of new facilities in the developing world about which little data exist.

    Using Ventus’ Google Maps interface seems simple enough, but as Nature News points out, a previous crowdsourcing effort to map dams wasn’t all that successful. But if the power plant data does start pouring in, researchers should be able to better track fossil fuel emissions, and apply this knowledge to tackle one of the largest contributors to climate change.


  • The best student entrepreneurs at Stanford are working on health tech and energy

    Eager, nervous students, angel investors, and representatives from top VC firms crowded into a campus conference room at Stanford University on Friday to hear pitches in the annual startup competition organized by the Business Association of Stanford Entrepreneurial Students (BASES). In fields from biotech to e-commerce, novice and serial entrepreneurs — medical doctors, computer science students, and MBAs — presented their ideas in the hopes of scoring some of the $150,000 prize money on offer in three entrepreneurial tracks: social, general, and product-focused. After closed-door judging by a mix of VC and industry representatives, startups in the medical device (Awair) and non-profit patient education fields (Anjna) emerged victorious; the winners are described below.

    The showcase for the product track packed an auditorium with 50 next-big-thing prototypes, apps, and inventions. Offerings included geolocation apps, hotel and travel services, sanitation and energy products targeted at the developing world, assistive technologies, and big data approaches to property search, programming, and human resources. More than a few of the teams looked a bit sleep-deprived, telling me they had cobbled together their platforms in a few days or weeks. Besides the Bluetooth pepper spray device (Deimos Defense) we hope we will never need, here are a few startups that stood out from the crowd.

    Energy

    Dragonfly Systems has patent-pending tech to boost the output of solar panels. Instead of the weakest panel in a linked installation bringing all the others down, Dragonfly’s module reroutes the energy that would otherwise be lost as heat back to the grid. Each of their modules costs about $9, and Dragonfly said it brings the best of a costly parallel circuit system into the standard serial way that panels are linked, as the toy calculation below illustrates. Their tech recently earned them third place in the Department of Energy FLOW clean business challenge.
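    The series-string problem Dragonfly is attacking can be sketched with a deliberately simplified toy model. The panel currents and voltage below are assumed numbers, and the model ignores bypass diodes and maximum-power-point details; it only shows why one weak panel can drag down a whole string.

    ```python
    # Toy illustration (assumed numbers): a series string is limited by its weakest panel,
    # while per-panel power optimization recovers the output of the healthy panels.
    panel_currents_a = [8.0] * 9 + [4.0]   # ten panels, one shaded at half current
    panel_voltage_v = 30.0                 # assumed operating voltage per panel

    # Series string: every panel is forced to the weakest panel's current.
    string_power_w = min(panel_currents_a) * panel_voltage_v * len(panel_currents_a)

    # Per-panel optimization: each panel contributes what it can.
    optimized_power_w = sum(i * panel_voltage_v for i in panel_currents_a)

    print(f"plain series string: {string_power_w:.0f} W")
    print(f"per-panel optimized: {optimized_power_w:.0f} W")
    ```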


    Cloudfridge, from a company called Visible Energy, does what the name implies: in place of the traditional thermostat, it takes refrigeration to the cloud. A large fraction of commercial energy use goes towards refrigeration (think walk-in meat lockers). Cloudfridge uses Wi-Fi and sensors to optimize commercial-grade cooling, and has just been awarded a grant by the California Energy Commission.

    Defense

    One of the developers of this mine-sniffing tech is a native of Sri Lanka who was inspired to name his company after a poem by Nobel Prize winner Rabindranath Tagore. Red Lotus Technologies brings the beeping handheld metal detector into the digital age by visualizing buried hazards on a tablet. This visual feedback method could also improve training for human mine detectors. Red Lotus’ tech is being trialed by the Department of Defense later this year.

    Engineering

    The winner of the product showcase challenge was Alice, construction engineering software supercharged with artificial intelligence. In a matter of seconds, Alice churns out project management schedules optimized by equipment, manpower, and materials availability, to enable construction projects to proceed efficiently and on time. Alice’s assembly-line-for-buildings tech earned its team $20,000.


    Medical devices

    Awair won the $25,000 general entrepreneurial challenge with its patient ventilation system. The gag-inducing tubes used in intensive care units to deliver air are often accompanied by heavy sedation. Awair uses topical nerve numbing so reduced or no sedation is needed, leading to improved patient comfort and faster healing times.

    Patient education

    Anjna is a non-profit that harnesses the natural proclivity for texting in its low-income target demographic. Their system automates appointment and medical reminders via text, and also delivers tailored medical content. Anjna took home the grand prize of $25,000 in the social entrepreneurial track.


  • Open source flight, from the Drone Lab to Twitter: Q&A with Dave Lester

    I recently had the chance to catch up with Dave Lester, a soon-to-be graduate of UC Berkeley’s School of Information and a web developer who has been involved in a number of open source initiatives. Dave has been working on bringing technology together with the humanities and education through an un-conference he co-founded, and in his former role as assistant director of the Maryland Institute for Technology in the Humanities. We talked about his drone hacking project, the importance of code integration, and his upcoming foray into open source at Twitter in an email interview.

    How did you become interested in open source and community building?

    I was contributing to an open source web publishing system for digital archives called Omeka. The primary goal of Omeka is to make publishing digital archives of historical photographs and stories as easy as publishing a blog. We patterned our community strategy around Mozilla and WordPress, trying to create a ladder of contributions where people of varying skill levels could get involved, and I was helping coordinate developer community growth. Shortly after launching our first public beta, we realized that the community of interested users was more diverse than we imagined, not only from museums and archives but also libraries.

    For me, community building began mostly as a way of understanding and negotiating the differences and needs of these institutions. You need direct, personal connections with your users in order to understand their needs; in the process, you start to draw connections between the work of others and play a role of matchmaker.

    My interest in community building led me to help co-found THATCamp, The Humanities and Technology Camp, an un-conference. THATCamp is a BarCamp-style event, bringing together technologists and humanists to create sessions related to digital humanities. Sessions vary from event to event, but my favorites have always been ones that focus on building. And since 2008, there have been over 100 THATCamp events around the world.

    You’re involved in open web projects through the Mozilla Foundation, right?

    I’ve been working as an Integration Engineer Contractor with the Open Badges team at Mozilla, mostly helping third-party developers integrate with APIs to create and display badges. Open Badges is a standard for recognizing learning online through the open sharing of digital badges. It’s an exciting approach to informal learning and to using badges as a way to capture achievements that are otherwise not visible on a resume.

    One of my contributions to the project has been creating several WordPress plugins to make it easier to issue and display badges; it’s important that a variety of platforms adopt the standard to give the community a variety of ways to hook into our infrastructure.

    You’re also interested in hacking hardware, such as drones. What has this taught you about coding?

    This semester I helped organize a group of fellow graduate students at UC Berkeley to form what we’ve called “Drone Lab”, an informal group that has met weekly to hack, discuss, and investigate creative and problem-solving uses of consumer-grade quadcopters. These are hobbyist toys that you can buy at your local shopping mall, but the ability to control them using software that you script unleashes the potential to tap into their cameras and sensors from heights and hard-to-reach places that are new and exciting. What we ended up focusing our hacking on were new ways to control the quadcopters, including voice and tracking head movements.


    What I found fascinating the last several months was introducing several of my classmates to Node.JS through programming these drones. Learning to program can often be a frustrating and unrewarding experience, but with just a proper development environment and a few lines of Javascript, you can fly a copter. Programming shouldn’t be limited to terminal windows, and the feedback of seeing the drone fly can be very rewarding. This also fosters creativity and unexpected things – sometimes you’ll see the drone do something in flight that seems odd, which prompts new questions about your code and experimentation that can be less common in programming.

    So were you part of last year’s TacoCopter stunt?

    TacoCopter is a project that I’m not involved with; I believe it’s meant to be more of a joke than a real thing. Still, there’s something intriguing and futuristic about a flying robot delivering Mexican food that gets people’s attention. We joke a lot about delivering tacos via drones.

    What do you see on the horizon for programming and the open source movement?

    In the age of GitHub where it’s easy for anyone to share code online and gain a following, the proliferation of projects both big and small can come at the expense of a clear way to integrate various codebases together. In my experience, it’s often the “glue code” and examples that are most valuable to users who want to use your software; the last 10 percent, so to speak. To be effective in open source community building, understanding those needs of integration is crucial and something I’ll be spending a lot of time working on.

    In general, I’m excited to see more companies using and releasing open source software, not for the goal of selling it but in an effort to develop better services and give back to communities that they benefit from. The precise model for how this software will be supported, grown, managed, and sustained is still to be defined; these are often projects without a software foundation. I hope to see more coordination and partnerships among companies regarding open source contributions.

    Finally, what’s next after you finish your Master’s?

    I’ll be joining Twitter as an Open Source Advocate in June. I’ll be responsible for building relationships with communities to drive adoption of our open source projects and APIs. Twitter has over 100 open source projects, and as an organization has made a big investment in using and releasing open source software.

    Images via OpenBadges.org, UC Berkeley School of Information


  • CheckinDJ is the Foursquare for Spotify

    CheckinDJ may be the cure for the bar jukebox dominated by die-hard Nickelback fans. True to its name, it’s a check-in app that uses music preferences from social network profiles to create playlists for coffeeshops and other venues.

    Built by Mobile Radicals, a group of researchers and developers at Lancaster University in the U.K., the little jukebox lets users input their music tastes by tapping their phones on the device. The combined tastes of the group determine the playlist, which is streamed from Spotify. The playlist is fluid depending on people’s participation, so no one user can hog the music with their own favorites. There is also a limit on how many times a user can check in, and the majority has to agree on a musical genre for it to get played.

    CheckinDJ uses a capability that many smartphones already have – near field communication (NFC), similar to RFID and present, for example, in the Samsung Galaxy SIII (Samsung sells NFC tags under the TecTiles name). Checking in involves tapping the phone to the CheckinDJ “jukebox,” which is built off a Raspberry Pi mini-PC. CheckinDJ can also be used with other NFC-tagged items like library or loyalty cards, and once a few musical genres are selected and a social network identity is input (this happens automatically when using smartphones), the user can enter the jukebox “system of influence,” where they will start to affect the playlist.

    Playlist influence increases with each additional linked social networking account and each new connected friend that checks in. The system updates every 20 seconds to adapt to changing group composition and preferences. CheckinDJ sounds like the perfect app to help turn your neighborhood diner into a Harlem Shake flash mob.
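    Mobile Radicals haven’t published their weighting algorithm, but the behavior described above can be sketched in a few lines of Python. Everything here, from the weighting rule to the data shapes, is a guess for illustration, not CheckinDJ’s actual code.

    ```python
    # Hypothetical sketch of a crowd-weighted genre vote like the one described above.
    # The weighting rule and data shapes are guesses, not CheckinDJ's implementation.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class CheckIn:
        user: str
        genres: list                 # genres pulled from the user's social profiles
        linked_accounts: int = 1     # more linked networks -> more influence
        friends_present: int = 0     # connected friends who have also checked in

        @property
        def weight(self) -> float:
            return 1.0 + 0.5 * (self.linked_accounts - 1) + 0.25 * self.friends_present

    def winning_genre(checkins) -> str:
        """Tally weighted genre votes for everyone currently checked in."""
        votes = Counter()
        for c in checkins:
            for g in c.genres:
                votes[g] += c.weight
        genre, _ = votes.most_common(1)[0]
        return genre

    crowd = [
        CheckIn("ana", ["indie", "electronic"], linked_accounts=2, friends_present=1),
        CheckIn("ben", ["metal"]),
        CheckIn("caro", ["indie"]),
    ]
    # In the real system a tally like this would be re-run every ~20 seconds as the crowd changes.
    print(winning_genre(crowd))  # -> "indie"
    ```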


  • Google Glass minus glass: Dekko makes the world your OS

    With the advent of wearable tech, apps and operating systems will have to radically adapt to the changing ways users want to interact with their devices and the world. One startup that thinks it is up to this challenge is San Francisco-based Dekko, a new company founded by the husband-and-wife team that used to work at augmented reality pioneer Layar.

    Dekko plans to launch its real-world operating system on Thursday. It brings the promise of augmented reality (AR) to fruition using the camera on your mobile device, powerful computer vision algorithms, and some serious financial backing.

    Dekko’s tech overlays content on the real world, a kind of out-of-the-device OS that goes way beyond just superimposing search results over a snapshot of a landmark or restaurant, like Google Glass does. The system actually builds up a 3D model of the scene in front of you from image frames, then reconstructs it and inserts whatever you want — a favorite cartoon character, perhaps, or a guided walking tour — onto the image. “The tech layer can run anywhere that the camera can see anything,” said Dekko CEO and co-founder Matt Miesnieks. This is a big change from other AR efforts that require an anchor or a known object to interpret the scene.

    The biggest advance from Dekko is the ability to complete the entire AR process — scene modeling, object recognition, and reconstruction — in real time on the relatively limited processor of an iPad Mini. This is impressive considering computer vision techniques still struggle with basic pattern recognition, even with powerful post-processing. Dekko’s algorithms don’t even rely on two cameras (like the stereo vision of human eyes) or an infrared field (like the Kinect) to calculate depth. The system just uses the slight differences between moment-to-moment image frames to build up a 3D model of the world, and focuses on surface textures to segment objects.


    As is the case for other AR ventures, the cluttered and dynamic real world still poses a challenge for Dekko. The OS works best in static environments, and can now model a 10-by-10-foot window in front of the user. Miesnieks is confident that his team can solve the problem of tracking far objects, and said the window will be expanded to 100-by-100 feet in the next six months. Real time reconstruction at the pixel level should also become possible, with improvements in mobile device GPUs and CPUs.

    Co-founders Matt and Silka Miesnieks are veterans of Layar, whose technology superimposes digital content onto snapshots of printed pages. Disillusioned with what Matt Miesnieks calls the “gimmicks” of earlier AR efforts that devolved into marketing, the pair are focusing on gaming as Dekko’s entrée. “We consciously chose gaming as a vertical because it’s often how new technology is introduced to the market,” he explained, citing Microsoft’s bundling of Solitaire with Windows to get users comfortable with graphical user interfaces. “It’s a new way for people to see apps outside the box.”

    Dekko’s tech will almost certainly have advertising applications as well. Samsung, Intel, and Facebook have already expressed interest in using it to augment their services and devices, and Dekko is in talks with major hardware manufacturers to integrate its core tech into new devices. On the app side, toy, game, and media companies want to have their superheroes and creatures frolicking among the dishes and books on your coffee table. This capability will be demonstrated when Dekko Monkey, a tabletop game app, comes out this summer.

    Dekko is working jointly with developers to build apps rather than just opening up its tools, since Miesnieks thinks that the company occupies a unique space and has privileged and complex algorithms. He concedes that a tension exists between framing Dekko as a tech platform versus a stand-alone app. “Augmented reality has the exciting potential of a goldmine, but no one has come out with a nugget of gold,” Miesnieks mused. “We need to go in ourselves to get the first nugget before selling shovels to others.”

    Dekko has already scored something akin to gold, securing $1.9 million in funding last September. Today the company announced an additional $1.3 million of seed funding, mostly from MicroVentures. That cash should help Dekko scale up its OS and make good on the AR promise of a seamless experience between digital and real.


  • Parkour robot can leap ledges in a single bound

    Your Roomba can’t jump, can it? UPenn researchers presented the acrobatic feats of their X-RHex-Light robot today at the IEEE International Conference on Robotics and Automation in Germany.

    Their research paper “Toward a Vocabulary of Legged Leaping” details how they taught the robot the tricks to not only run, but also jump and execute the equivalent of robotic back flips and triple jumps. The nearly 15-pound, 20-inch-long robot can jump up onto ledges, and can even do leap grabs that let it ascend an impressive 28 inches. Now that the robot knows the leaping lingo, it could use it to carry instruments or sensors to the right locations, or right itself when it flips over.

    Image via UPenn Kod*lab


  • University of Florida embraces Internet2’s 100-gigabit network, launches new supercomputer

    Universities have been pioneering participants in developing and using the internet since its inception. Now, the consortium of education and research institutions known as Internet2 has reached another milestone, with the announcement that the University of Florida has implemented Internet2’s next-gen computing architecture, the Innovation Platform. The platform should bring superfast connections and software-defined networking (SDN) to campuses across the country.

    The 300 or so universities and government labs that belong to Internet2 have been able to access a 100-gigabit network backbone since its launch in 2006. But on Tuesday UF became only the fourth university to roll out a full 100Gbps connection to Internet2 (most other schools are still at 10Gbps and working to expand to full bandwidth), and the only school so far to fulfill the other two Innovation Platform requirements: SDN and a Science DMZ, a kind of buffer between the campus network and the wider internet that lets research data move freely without being slowed by firewalls. The amplified bandwidth will let researchers share huge amounts of data or access supercomputer resources, like the simultaneously announced HiPerGator. With a peak speed of 150 teraflops, HiPerGator is Florida’s fastest supercomputer and one of the top 500 supercomputers globally.
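
    To get a feel for what that bandwidth jump means in practice, here is a quick back-of-the-envelope calculation (mine, not Internet2’s, assuming an 80 percent effective link efficiency) of how long a 5-terabyte genomics dataset takes to move at 10 Gbps versus 100 Gbps.

    ```python
    # Back-of-the-envelope: moving a 5 TB dataset over 10 Gbps vs 100 Gbps.
    # Illustrative numbers only; real throughput depends on protocol and tuning.
    def transfer_hours(size_terabytes, link_gbps, efficiency=0.8):
        bits = size_terabytes * 8e12            # terabytes -> bits
        seconds = bits / (link_gbps * 1e9 * efficiency)
        return seconds / 3600

    for gbps in (10, 100):
        print(f"{gbps:>3} Gbps: {transfer_hours(5, gbps):.2f} hours")
    # -> roughly 1.4 hours at 10 Gbps vs about 8 minutes at 100 Gbps
    ```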


    Everything from genome sequencing to drug discovery and climate modeling relies on computing power, and 30-odd schools are working to realize the Innovation Platform to fully take advantage of big data research and long-distance collaboration. SDN, for example, will allow disparate machines to be programmed to communicate, share, and manipulate data, a step towards a massive academic data center. Another project that will be made possible by the Innovation Platform is the Global Environment for Network Innovations, a testbed for exploring future internets and developing network science and engineering breakthroughs that is supported by the National Science Foundation.

    Internet2 has been on a roll lately on other fronts, collaborating with the Smithsonian Institution on content distribution and launching a videoconferencing service from Vidyo. There are also over 30 cloud services available to Internet2 member institutions, including collaboration, storage, and productivity apps. While Internet2 is in no way designed to replace the commercial web, the tech it spawns will probably shape the cyberinfrastructure we depend on, which will be crucial for fields like telemedicine and big data.

    Images via Internet2


  • Mmm, invisibility donut… researchers try new takes on the “invisibility cloak”

    Duke University researchers who previously demonstrated invisibility cloaking in the lab have employed 3D printing to build their latest “cloak” – a disk that can block microwaves.

    The thickness of the donut-like disk roughly matches one wavelength, and its combination of air and dielectric (insulating, nonconducting) composite material deflects microwaves. An object placed in the center effectively “disappears” when microwaves are aimed at it. Thanks to transformation optics (a branch of physics in which electromagnetic waves are steered much as space-time is warped in relativity), the shell of the disk eliminates any backward reflection that a viewer or detector would use to see the object, and also suppresses shadows and scattering.

    The cloaking disk is made of plastic, but another transparent polymer or glass would work equally well, the researchers say. Simulations show that the cloak could be made thinner and larger in area, and could potentially work for shorter wavelengths, like visible light.


    Transformation optics also underlies another advance towards invisibility with metamaterials. These are engineered materials with new kinds of properties that don’t normally exist in nature. The key development by the researchers from Stanford and Spain is tailoring the new metamaterial’s refractive index, or the degree to which it can bend light. Only positive refractive indices (like 1.33 for water) exist in nature, but using transformation optics the investigators were able to design constituents of the new material that have a negative refractive index.
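
    The sign of the refractive index falls out of the material’s permittivity and permeability: when both are negative, as in these engineered constituents, the physically consistent index is negative too. Here is a toy calculation with illustrative numbers, not values from the Stanford and Spain study.

    ```python
    # Toy calculation of the effective refractive index n = sqrt(eps_r * mu_r).
    # When both the relative permittivity and permeability are negative, the
    # physically consistent root is negative, which bends light the "wrong" way.
    # The numbers below are illustrative only.
    import cmath

    def refractive_index(eps_r, mu_r):
        n = cmath.sqrt(eps_r * mu_r)
        # For a double-negative (left-handed) material, take the negative branch.
        if eps_r.real < 0 and mu_r.real < 0:
            n = -n
        return n

    print(f"{refractive_index(1.77, 1.0).real:+.2f}")    # water-like: +1.33
    print(f"{refractive_index(-2.0, -1.5).real:+.2f}")   # double-negative: -1.73
    ```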

    In order for metamaterials to have the interesting properties they were designed for, they need to interact with both magnetic and electric fields. The constituent “atoms” of the new material can do both, which means their interactions with light over broader wavelengths can be controlled. The visible spectrum of light extends from 400-700 nanometers, but previous invisibility efforts have only been able to cover about 50 nm of this range.

    In their theoretical analysis, the researchers started with an infinite sheet of material that they fold into a crescent shape on the nanometer scale. This is their constituent “atom,” which is placed into an array with other identical ones in a background material. The result is a structure that has a negative refractive index, i.e. “invisibility,” over much of the visible spectrum, in a band more than 200 nm wide. Engineering a material from the bottom up opens new optical possibilities, like precisely controlling the light path, and changing the geometry of the nano-crescents or shrinking them could help the invisibility band grow to cover the whole visible spectrum. The material’s negative refraction is shown in the video below.

    Image of Yaroslav Urzhumov via Duke University


  • The bionic ear is closer than you think with new apps, implants, and biomimetic mics

    It’s not a secret that replicating what the human brain and senses do naturally still presents a substantial challenge in engineering. New advances in restoring and improving hearing are getting closer to the real thing by going mobile and taking cues from nature.

    A little yellow fly that lays parasitic eggs is actually the inspiration for a next-gen hearing aid. Its complex “ear” near the base of the front legs responds to male cricket calls, and is the model for tiny microphones. Two or more microphones detecting the pressure changes in sound waves are better for hearing aids, but the smaller and closer the microphones, the harder it is to accurately detect those waves. The mechanics of the MEMS microphone design come directly from the fly, and new research scheduled to be presented at the International Congress of Acoustics in June shows that by tweaking certain parameters, the prototype hearing aid can be made much smaller than conventional ones, with a greater tolerance to noise.
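
    To see why microphone spacing matters, here is a generic illustration (not the researchers’ MEMS design) of how direction is usually estimated from the arrival-time difference between two microphones; the numbers are hypothetical.

    ```python
    # Generic illustration: estimate where a sound came from using the
    # arrival-time difference between two microphones, found by cross-correlation.
    # The catch described in the article: as the mics get smaller and closer
    # together, that time difference shrinks, which is why the fly's mechanically
    # coupled "ear" is such a useful model.
    import numpy as np

    def bearing_from_two_mics(sig_left, sig_right, fs, spacing_m, c=343.0):
        corr = np.correlate(sig_left, sig_right, mode="full")
        # Positive delay -> the right channel lags -> source is on the left side.
        delay_samples = (len(sig_right) - 1) - np.argmax(corr)
        sin_theta = np.clip(c * delay_samples / fs / spacing_m, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_theta))

    # Hypothetical demo: broadband noise, right mic delayed by 10 samples.
    fs, spacing = 48_000, 0.15                 # 48 kHz sampling, mics 15 cm apart
    rng = np.random.default_rng(0)
    left = rng.standard_normal(4096)
    right = np.roll(left, 10)                  # simulate a 10-sample lag
    print(bearing_from_two_mics(left, right, fs, spacing))   # ~28 degrees
    # At hearing-aid scales (millimeter spacing) the same delay is a fraction of
    # one sample, the problem the fly-inspired design tackles mechanically.
    ```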

    Of more immediate impact are hearing aid apps for smartphones, which combine the microphone, processor, and headphones needed for a hearing aid at a fraction of the cost of a dedicated device. Hearing aid apps with diverse features and prices are available, but one recent app that stands out is BioAid, a free and open source hearing aid.

    The difference with BioAid is that its algorithm performs compression and amplification selectively across different frequency bands, instead of a uniform gain across all frequencies, like turning up the volume. Since it was built by hearing researchers in the U.K., it has been lab tested with real hearing-impaired volunteers. With the app’s sliders, users can adjust and save filters for different background noise levels. Plus, the phone’s touchscreen and charger save users from the tiny-battery-fat-fingers dilemma.
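
    BioAid’s published algorithm models cochlear compression and is considerably more sophisticated, but the core idea of per-band processing can be sketched in a few lines: split the signal into frequency bands, then compress and amplify each band separately rather than applying one volume knob to everything. The band edges and gains below are hypothetical.

    ```python
    # Much-simplified sketch of per-band compression and amplification.
    # Not BioAid's actual algorithm; just the general idea.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def per_band_compress(x, fs, band_edges_hz, gains_db, comp_exponent=0.6):
        out = np.zeros_like(x, dtype=float)
        for (lo, hi), gain_db in zip(band_edges_hz, gains_db):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, x)
            # Instantaneous compression: shrink the dynamic range in this band...
            band = np.sign(band) * np.abs(band) ** comp_exponent
            # ...then apply that band's prescribed gain.
            out += band * 10 ** (gain_db / 20)
        return out

    # Hypothetical settings: boost high frequencies more, as in typical
    # age-related hearing loss.
    fs = 16_000
    bands = [(125, 500), (500, 2000), (2000, 6000)]
    gains = [0, 6, 12]                 # dB per band
    t = np.arange(0, 1.0, 1 / fs)
    speech_like = np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
    processed = per_band_compress(speech_like, fs, bands, gains)
    ```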

    On a regulatory note, the Food and Drug Administration is paying close attention to mobile health services, which may explain why some apps shun the “hearing aid” label or are plastered with heavy disclaimers. Congressional hearings in March were a prelude to upcoming new FDA guidelines on smartphone and mobile health apps; 75 apps have been approved so far.


    The bulky cochlear implant, one of the great successes of sensory prosthetics, is also getting a makeover. Georgia Tech researchers, for example, have developed a thin-film electrode array that flexes to fit the tiny, two-millimeter-diameter cochlear surface to better stimulate the auditory nerve. This still needs to be connected to a battery, microphones, and a processor, which make up the externally visible part of conventional cochlear implants. A fully internal system, trialed by industry leader Cochlear in 2007, would mean users never have to remove the device to swim, sleep, or shower. The downside with an internal microphone, apparently, is that you can really hear your own chewing and heartbeat, but signal processing can mitigate this effect.

    With upstarts like Nurotron driving down the cost of conventional implants through cheap manufacturing and fast-tracked regulatory approval in China, and biotech providing new materials and designs, a true bionic ear (3D printed, of course) may be arriving at the speed of sound.


  • Government lab demonstrates stealth quantum security project

    Quantum cryptography is supposed to be a kind of holy grail solution for securing the smart grid, cloud computing, and other sensitive networked resources. The technology is still experimental, with only a handful of companies globally providing quantum key distribution services. Now, researchers at Los Alamos National Lab have quietly revealed that they’ve successfully been running what amounts to a mini quantum internet for the past two-and-a-half years.

    The basic premise of keeping information secret using quantum mechanical phenomena lies in what is popularly called the observer effect. A quantum message, sent as photons, will be permanently altered if someone observes it, so the sender and recipient will be able to tell if there was a breach. What this means currently is that only one-to-one quantum secured communications are possible over a single optical fiber. Routing the message onward is problematic, again because of the observer effect: reading the sending instructions in the message alters the message itself.
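
    The effect is easy to see in a toy simulation in the spirit of the BB84 protocol (an illustration of the principle, not Los Alamos’s system): with no eavesdropper, sender and receiver agree almost perfectly on the bits where they happened to use the same measurement basis; with an intercept-and-resend eavesdropper, roughly a quarter of those bits come out wrong.

    ```python
    # Toy simulation of the "observer effect" behind quantum key distribution,
    # in the spirit of BB84. Not LANL's system: an eavesdropper who measures
    # photons in transit disturbs them, and that shows up as a raised error rate.
    import random

    def bb84_error_rate(n_photons=20_000, eavesdrop=False, rng=random.Random(1)):
        errors, compared = 0, 0
        for _ in range(n_photons):
            bit = rng.randint(0, 1)
            alice_basis = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
            value, basis = bit, alice_basis

            if eavesdrop:                        # Eve measures in a random basis
                eve_basis = rng.randint(0, 1)
                if eve_basis != basis:
                    value = rng.randint(0, 1)    # wrong basis randomizes the bit
                basis = eve_basis                # photon re-sent in Eve's basis

            bob_basis = rng.randint(0, 1)
            measured = value if bob_basis == basis else rng.randint(0, 1)

            if bob_basis == alice_basis:         # keep only matching-basis bits
                compared += 1
                errors += (measured != bit)
        return errors / compared

    print(f"no eavesdropper:   {bb84_error_rate():.1%}")                 # ~0%
    print(f"with eavesdropper: {bb84_error_rate(eavesdrop=True):.1%}")   # ~25%
    ```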

    To get around this issue, the Los Alamos scientists developed a hub-and-spoke architecture for their quantum network. The nodes on the network’s spokes talk to each other via the hub: messages are converted to conventional bits at the hub and then re-encoded as quantum bits for the onward hop, so each leg of the journey stays quantum-secured.
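
    One common way to realize such a hub-mediated scheme, sketched below with one-time-pad XOR encryption, is for each spoke to share a quantum-distributed key with the hub, which decrypts and re-encrypts traffic in classical form before it heads down the next spoke. The Los Alamos implementation may differ in detail; the sketch just shows why the hub has to be trusted.

    ```python
    # Sketch of a generic trusted-relay ("hub-and-spoke") scheme using one-time-pad
    # XOR. Each spoke shares a (notionally QKD-derived) key with the hub; the hub
    # sees the plaintext, which is exactly why its security matters so much.
    import secrets

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"grid telemetry"
    key_a_hub = secrets.token_bytes(len(message))   # key shared by node A and hub
    key_hub_b = secrets.token_bytes(len(message))   # key shared by hub and node B

    ciphertext_a = xor_bytes(message, key_a_hub)             # node A -> hub
    plaintext_at_hub = xor_bytes(ciphertext_a, key_a_hub)    # hub sees the message
    ciphertext_b = xor_bytes(plaintext_at_hub, key_hub_b)    # hub -> node B
    print(xor_bytes(ciphertext_b, key_hub_b))                # b'grid telemetry'
    ```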

    This system is not yet a perfect “pure” quantum internet because its security is only as good as that of the hub, and true node-to-node quantum communications aren’t yet possible. However, the extremely short latencies and the scalability of the system are a significant advance for quantum networked communications. The researchers highlighted some possible applications for the system, including cryptography for the smart grid, where optical fiber is already widely deployed, and as a “retrofit” solution to existing communication infrastructures. Plug-and-play crypto-modules are already in the works, and could be coming to your TV or computer in the not-so-distant future.

    The researchers explain their setup in this video.


  • Adaptive streaming will let you access apps, HD video and your whole OS from the cloud

    Mozilla has teamed up with Hollywood rendering company OTOY to create a new codec to stream video and apps from the cloud directly to the browser. The JavaScript library ORBX can render apps, gaming platforms or an entire operating system in any HTML5-capable browser, including Chrome, Safari or Firefox, even on a mobile device. The announcement is another attempt at destabilizing the hegemony of the H.264 video-compression standard, famously advanced by Apple over Flash and present in all iOS devices, following Google’s promotion of the Matroska-based WebM format.

    The impacts of the purely JavaScript-based system are multiple: for end users, the ability to run native PC apps on any device with an internet connection and to purchase and protect content without digital-rights management (DRM); for content creators, cheaper, faster rendering and the ability to distribute anywhere viewers can type in a URL; and for open web or cloud-computing advocates, a push away from proprietary or legacy plug-ins and an embrace of HTML5. With the presence of William Morris Endeavor CEO Ari Emanuel at the launch on Friday, the creators of the ORBX.js technology were also seeking to emphasize its piracy-fighting powers for the movie and TV industries: with video streams or apps watermarked in the cloud, DRM in the browser becomes unnecessary.

    OTOY and Mozilla came together recently with the realization of a shared goal: trying to turn the web into the platform for all apps. Mozilla’s effort to implement H.264 in Firefox inspired OTOY to rewrite their own codec to run in JavaScript, said OTOY founder and CEO Jules Urbach at the launch event in San Francisco on Friday, and the partnership has now culminated in an optimized rendering experience that is approaching native app speeds in Firefox. Among the capabilities demonstrated at the launch were a virtualized Windows desktop running in Safari, lag-free gaming in a browser and streaming that can be adaptively encoded based on a user’s bandwidth.

    “Web is the medium,” said Autodesk CTO Jeff Kowalski, who was very upbeat about the possibilities of the new tech for increasing work collaboration and creativity, and reducing delays through real-time rendering. Besides investing in OTOY, Autodesk’s interest is in providing 3D apps to their customers using cloud resources. The implications for agility — both for individuals and for enterprises — are freeing: a low-power home device can drive the centralized, high-power cloud machine, eliminating the need for a high-end workstation or provisioning of hardware assets to employees or contractors. Kowalski’s suggestion, in fact, was that such a move would allow users to downgrade their hardware, because it no longer has to match the needs of the software.

    So what is needed for ORBX.js to work? Any HTML5 browser (Chrome, Safari, Firefox, IE10 or Opera) will do, but it needs to have WebGL technology to take advantage of the codec’s full decoding speed. Mozilla CTO Brendan Eich predicted that Apple will eventually come around to more fully accept WebGL. When asked if Apple, Google, or big streaming providers can do anything to stop the use of ORBX, Urbach said nothing short of getting rid of the browser would stop the tech from being used.

    The central issues with streaming all of your computing are bandwidth and money. Video seemed to stream well on an iPhone over 4G, and with the adaptive streaming and superior compression of ORBX, Urbach projects a 25 percent bandwidth savings for, say, Netflix streaming. For that to happen, Netflix, Amazon and other providers have to adopt ORBX, something that the Mozilla-OTOY partnership is actively working on. They are hoping that their solution will be the one to put the format wars to rest, and allow consumers to access the highest-definition content possible in a way that is format-agnostic. With respect to pricing, a ballpark figure suggested at the launch was $300 per year for OTOY’s cloud-rendering engine to take over one person’s computing needs. Pricing is still up in the air, but Urbach expects an Amazon Machine Image (AMI) to launch later this year with the second generation of ORBX, which will also include HDR encoding capabilities.
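
    OTOY hasn’t published ORBX’s encoder internals, but the adaptive-streaming behavior described works roughly like any bitrate ladder: measure the client’s recent throughput and pick the highest quality level that fits, with some headroom. The sketch below uses hypothetical quality levels and bitrates, and spells out the projected 25 percent savings as plain arithmetic.

    ```python
    # Generic sketch of adaptive bitrate selection (not ORBX's actual encoder):
    # pick the highest quality whose bitrate fits the measured bandwidth,
    # leaving headroom for throughput swings.
    BITRATE_LADDER_MBPS = {        # hypothetical quality levels
        "480p": 1.5,
        "720p": 3.0,
        "1080p": 5.8,
        "1080p_hdr": 8.0,
    }

    def pick_quality(measured_mbps, headroom=0.8):
        affordable = {q: b for q, b in BITRATE_LADDER_MBPS.items()
                      if b <= measured_mbps * headroom}
        if not affordable:
            return min(BITRATE_LADDER_MBPS, key=BITRATE_LADDER_MBPS.get)
        return max(affordable, key=affordable.get)

    print(pick_quality(4.0))     # -> "720p"
    print(pick_quality(12.0))    # -> "1080p_hdr"

    # Urbach's projected 25 percent bandwidth savings, as plain arithmetic:
    h264_mbps = 5.8
    print(f"{h264_mbps * (1 - 0.25):.2f} Mbps")   # ~4.35 Mbps for the same stream
    ```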

    The videos below show streaming video and gaming through a browser using ORBX (via Mozilla).


  • Games meet brains: the new immersive tech of gaming

    From joysticks to gamepads and now gesture recognition, gaming has been downsizing the gadgets over the decades, and it is getting close to tapping into the ultimate controller: your brain. About 300 people gathered to learn about and test the latest neurogaming tech at the eponymously named conference and expo in San Francisco this week, and many an attendee was outfitted like they belonged to the Borg. Besides the “neuro” side of the tech offerings, haptics technology — involving devices that stimulate the touch or body sense — was also well-represented.

    The majority of exhibitors were peddling tech to monitor brain waves, and it seemed like EEG (electroencephalography) headsets were available in every color, shape, and size, from sleek Google Glass-like headbands to traditional electrode-laden caps. Since the measured waves represent the activity of the whole brain, it’s not possible for the devices to literally read your mind; you can’t, for example, move left or right in a game just by thinking it (yet). However, because some wave activity is associated with alertness, games like Intific’s NeuroStorm can incorporate the user’s extra focus into gameplay. Sustained concentration can also be used to launch and fly Puzzlebox’s Orbit helicopter. If you want to stimulate your brain rather than just read it, Foc.us has a headband that supposedly increases focus while playing.
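
    None of the vendors at the expo disclose their exact algorithms, but attention metrics from consumer EEG headsets are generally built on relative band power, for example comparing beta-band activity (associated with alertness) to alpha-band activity (associated with relaxed idling). A rough, illustrative sketch:

    ```python
    # Rough sketch of how "focus" is typically inferred from EEG: compare power
    # in the beta band to power in the alpha band. Illustrative only; not the
    # algorithm of any particular headset at the expo.
    import numpy as np
    from scipy.signal import welch

    def focus_index(eeg, fs):
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        df = freqs[1] - freqs[0]
        def band_power(lo, hi):
            mask = (freqs >= lo) & (freqs < hi)
            return psd[mask].sum() * df
        alpha = band_power(8, 13)     # Hz, relaxed/idle
        beta = band_power(13, 30)     # Hz, alert/engaged
        return beta / alpha           # higher -> more "focused" (crudely)

    # Hypothetical demo signal: mostly alpha with a little beta, plus noise.
    fs = 256
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = (np.sin(2 * np.pi * 10 * t)             # strong 10 Hz alpha
           + 0.3 * np.sin(2 * np.pi * 20 * t)     # weaker 20 Hz beta
           + 0.2 * rng.standard_normal(t.size))
    print(f"focus index: {focus_index(eeg, fs):.2f}")   # well below 1: "relaxed"
    ```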

    Electroactive polymers — perhaps the base material for future shapeshifting phones — are being used to deliver enhanced touch and feel to gaming in Bayer MaterialScience’s Vivitouch technology. The thin film can minutely expand and contract, so that when attached to a gamepad the player can feel blasts and receive force feedback. Vivitouch will be launching a new product at E3 in June. Tactical Haptics’ Reactive Grip is like a Wiimote on steroids that lets you feel like you’re really gripping, firing, or wielding weapons, but it’s still a tethered device.

    Noticeably absent were offerings in the augmented reality or facial tracking space. Predictions from a few of the conference’s speakers about where neurogaming is headed included integrating data from wearable and smartphone sensors to enhance gameplay, and artificial intelligence-modulated games that could, for example, level the playing field between beginners and grandmasters in chess.

    For mass adoption, of course, the “tech” can’t be too technical. That’s why smell and sound-driven game experiences are under development, to appeal to the limbic brain in all of us.

    Image via NeuroGaming Conference and Expo 
