Author: Amanda Alvarez

  • Study: Perceived value of “hard” versus “soft” engineering might drive gender pay gap

    Inequality in engineering isn’t just a product of how few women are in the profession: the tasks women perform within engineering are relevant, too. Deep-rooted ideologies also contribute to a gender wage gap, according to a Rice University sociologist.

    As “first generation” bias — discrimination based on overt factors like gender or ethnicity — becomes unacceptable, we need to dig deeper into the cultural processes that reproduce inequality. That’s what Erin Cech has done in a new study. After all, inequality is still apparent, with women and minorities continuing to be underpaid and underrepresented in many job sectors. Cech thinks an implicit dualism in engineering — the notion of “hard” technical work versus “soft” social or people-focused activities — contributes to women’s lower pay.

    To test her theory, Cech used data gathered by the National Science Foundation from nearly 10,000 recent college graduates who identified as being employed as engineers. Women made up only 11 percent of the sample. There was a clear pay gap between men and women — $13,000 annually or about a 16 percent difference — across all engineering subfields. Cech found that women were more likely to work in “softer” fields like industrial engineering, or chemical and bioengineering, than in electrical, computer, or mechanical engineering. Women were also underrepresented in technical work activities, like research and development, and overrepresented in management, administration, and teaching activities.

    When she drilled down further into the numbers, though, she found that women actually experience a pay penalty for engaging in technical work, and also a slight penalty when their work is related to their highest academic degree. As Cech wrote in the paper, “women are devalued for engaging in technical primary work activities but not social ones.” Seemingly benign cultural beliefs that persist in engineering, like the separation of its technical and social aspects, thus appear to contribute to the wage gap.

    In engineering especially, the “purest” forms of the profession, like design, research, or computational activities, are valued more highly than management, sales, or teaching, according to Cech. She compared the data from the engineers to other scientists, but found that the same wage inequality patterns were not apparent in biology or physical sciences. The technical/social dualism doesn’t appear to drive segregation in those fields.

    Cech thinks this is because the ideology is especially strong in engineering, where judgments of professional competence or fit are associated with the parts of the profession that are most valued. Thus, what’s driving the devaluation of women isn’t their gender, but their engagement in undervalued parts of the profession, like management, or perceived unsuitability for its more valued technical aspects.

    Cech thinks the cultural ideologies that contribute to the wage gap in engineering could be changed through training and by refuting the technical/social dualism in college engineering education, where the professional culture for tech gets ingrained. The simple realization that both men and women engage in heterogeneous work activities in engineering could be a start. Because the cultural contributions to pay inequality appear to be strongly specific to engineering, Cech believes training may be more effective than attempts to create broad “inclusive climates.”

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • First in flight: Maryland professor’s robot bird good enough to fool the real thing

    Satyandra Gupta apparently loves birds so much he decided to build one. His skills as a professor of mechanical engineering at the University of Maryland probably didn’t hurt in his quest, and this week he announced the Robo Raven is now a reality. The robotic avian can dive and roll and looks so realistic that other birds have attacked it in flight.

    Developing the robot bird was a decidedly start-and-stop affair. Over the course of eight years, design flaws caused incapacitating crashes in each iteration of the robot. The first successful flight, by a prototype with simultaneously flapping wings, came in 2007. By 2012, Gupta and colleagues had succeeded in developing a model that could flap its wings independently. For the robot, at least, simultaneous wing-flapping was a drawback, but engineering independent wing-flapping behavior was time-consuming and also made the robot heavier.

    The Robo Raven has two motors that are coupled to coordinate the movements between the two wings. It can be programmed with arbitrary flight patterns, as can be seen in the video below. To compensate for the additional weight of a bigger onboard battery and microcontroller, the robotics team used lightweight 3D printed parts for the body. Aerodynamic optimization allowed the Robo Raven to reproduce observed flight behavior of real birds.

    Like quadcopter drones, the future of flapping wing micro air vehicles may lie in surveillance, or in just looking really really cool.


  • Skinny RFID tags could soon show up embedded in paper

    Two new developments in RFID research could pave the way for tags that are thinner, cheaper, and more versatile. Using new materials and cutting-edge laser fabrication, engineers at North Dakota State University have made RFID tags compatible with paper or metal, with applications ranging from banknotes to cargo containers.

    The key to embedding ultra-thin RFID tags into paper is what’s called Laser Enabled Advanced Packaging. Instead of using the pick-and-place robotic methods generally employed with today’s larger tags, a laser pulse is used to insert the RFID circuitry into a substrate: in this case, paper. The force generated by this laser pulse is essential when dealing with chips that are so thin — 20 microns, thinner than most commercial RFID chips — in order to overcome the attractive forces that could hinder pickup and placement with conventional methods. Static electricity, for example, can make the super-skinny chips stick to the robot, which impacts assembly speed and precision.

    The speed and precision of this contactless method beats current manufacturing techniques, according to the researchers, and it also doesn’t result in bumps in the paper. An added benefit is that the chip’s silicon becomes flexible at such tiny scales, so it can bend if needed. The paper-embedded RFID tag still has a tiny antenna, which is first printed onto the paper before the laser etching. Another NDSU discovery has done away with the antenna altogether, overcoming the interference problems associated with tagging metals or containers filled with liquid.


    The passive ultra-high frequency tags use the metal objects to which they are attached as antennas. This means that they can be thinner, because no spacer is required to isolate the tag from the metal surface to make it readable. In addition, the tag’s highly permeable material lets current flow into the integrated circuit. “RFID on metal” could be used to track assets from laptops to medical devices and oil barrels, and because they can be embedded in the metal itself, they can stay with an object from origin to end. While the creation of antenna-less RFID tags isn’t entirely new, the development marks another step towards realizing the internet of things.

    Image via North Dakota State University Center for Nanoscale Science and Engineering


  • Four ways data scientists are using digital art to humanize data

    The growing pains of big data were apparent at the Data 2.0 Summit on Tuesday in San Francisco.

    One panel even debated the assertion that data science is dead. Along with the habitual tension between the requirements of business and consumer end users and the “elitist” ideas of data scientists and engineers, other themes included making data more accessible, changing behaviors, and encouraging better decision-making with data. Everyone from sales and marketing people to fitness enthusiasts, it turns out, can be motivated by pretty pictures.

    As IBM’s Alan Keahey put it during a panel, “there is a hunger for friendly data,” and visualization can help to humanize those threatening terabytes. Here is a selection of new, and new-to-us, visualization tools that came up at the meeting.

    Bringing climate change home: Databasin.org

    A mapping and analytics platform from the Conservation Biology Institute that has 10,000 datasets on everything you need to understand how extreme weather will impact natural resources, renewable energy, and endangered species. Here is one projection of maximum temperatures in 2080.


    Sparkvis by Chloe Fan

    This app is for the quantified self junkie who loves to interpret their burned calories as abstract art. The research behind the colorful display of Fitbit (see disclosure) data is explained here. Image via QuantifiedSelf.com


    Disqus Gravity

    The commenting platform’s diverse content is brought together in an interactive and live visualization. Pulling from about 500 sites that use Disqus, Gravity brings together the “small” data of individual comments within the context of 11 content categories. Another visualization, Orbital, shows realtime comments geolocated on a spinning globe.


    IBM Many Eyes

    Originally conceived by visualization guru Martin Wattenberg and colleagues in 2007, Many Eyes lets you plug in any dataset and generate nifty figures. Here, for example, is the distribution of U.S. foreign aid over a 60-year period.


    Disclosure: Fitbit is backed by True Ventures, a venture capital firm that is an investor in the parent company of GigaOM. Om Malik, founder of GigaOM, is also a venture partner at True.


  • Researchers have created a 21st century global mood ring with data mining

    After your morning stock market and weather updates, maybe add a check of the hedonometer to your list. The new site draws on tweets — and soon, the New York Times, Google Trends, and other sources of textual sentiment — to gauge population-level happiness. This big data approach is taking the collective mood temperature across space and time, but it’s unlikely to reveal the secret to achieving happiness.

    Launching today and updated every 24 hours with faster refresh rates to come, Hedonometer.org uses English-language tweets to create a happiness index. The system is based on a 10,000-word “emotional temperature” database, in which words are ranked on a scale of 1 to 9 by volunteers using Amazon’s Mechanical Turk. Words like “laughter,” “happiness,” and “love” top the list, while “loneliness,” “bad,” “inflation,” and “surgery,” along with assorted expletives, round out the bottom, with rankings close to 1. The emoticon “:(” has a rating of 2.36.
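    In essence, the hedonometer averages the crowd-assigned ratings of the words it recognizes in a stream of text. Here is a minimal Python sketch of that scheme; the word scores below are illustrative stand-ins (only the “:(” value of 2.36 is quoted above), and the real system uses a 10,000-word database.

```python
# Toy version of hedonometer-style word-average scoring. Scores are
# illustrative stand-ins except ":(" (2.36, quoted in the article).
WORD_SCORES = {
    "laughter": 8.5, "happiness": 8.4, "love": 8.4,
    "bad": 3.2, "loneliness": 1.9, ":(": 2.36,
}

def happiness_index(text, scores=WORD_SCORES):
    """Average the scores of recognized words; unknown words are ignored."""
    hits = [scores[w] for w in text.lower().split() if w in scores]
    return sum(hits) / len(hits) if hits else None

print(happiness_index("love and laughter"))  # (8.4 + 8.5) / 2 = 8.45
```

    Scaled up to tens of millions of tweets a day, the same averaging yields the population-level index described below.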

    Users can zoom in on any day all the way back to September 10, 2008, check out the balance of positive and negative words, and see how these compare to the week before and after. Saturdays, for example, tend to be happier than Tuesdays. Christmas Day stands out as being the happiest day of the year, every year. The hedonometer developers, mathematicians from the University of Vermont along with scientists from the MITRE Corporation, found that April 15, the day of the Boston bombings, was the unhappiest day on record, with an average happiness index of 5.88. Other recent sad days include December 14 last year (Newtown school shooting) and June 25, 2009 (death of Michael Jackson).

    Indeed, eyeballing the global happiness index suggests a slight downward slope since 2008. Whether or not this effect is real depends on establishing a normal background happiness level, and on comparison with geographic, socioeconomic, and political metrics. What’s interesting is that the hedonometer is turning more than 50 million daily micro-statements into a “quantitative macro-story,” as UVM’s Chris Danforth put it. Individually insignificant words and tweets swell into a collective emotional response, the blips and dips of which stand out and correlate with major events.


    The research from Danforth and his colleagues got some press earlier this month, when they reported that happiness went up the further Twitter users were from home. Other insights from the same team included the fact that obesity and happiness were inversely correlated, and that cities’ happiness scores were related to swear words, suggesting that “geoprofanity” could be a good marker for regional happiness differences.

    The hedonometer is set to draw on more data streams soon, including blogs, news transcripts, and Bit.ly shortened links, and will be data mining in a dozen languages. Nonetheless, the happiness index will remain an aggregate measure, like a nation’s GDP, and may not have much impact in and of itself. The underlying methodology, however, is the real driver, with broad applicability to big data, whether social media-generated or not.

    Image via Chris Danforth, University of Vermont


  • Smartphones evolving from flat to flex with new shapeshifting prototypes

    The mobile devices of tomorrow will be shapeshifters, and experimentation in the design space will make them a reality, according to Anne Roudaut. She’s a computer scientist at the University of Bristol who, along with U.K. and German colleagues, is unveiling flexible touchscreen prototypes at CHI2013 on Monday. The devices they’ve built incorporate smart materials and can morph into different shapes, hence the name “Morphees.”

    Static touchscreens can be compared on dimensions like pixel density, screen size, or refresh rate, but no such vocabulary exists for shape-changing devices, Roudaut said. One of the goals of the Morphees project was to create metrics to describe flexible devices and their ability to change shape. Having these “shape resolution” descriptors will help engineers build devices to fit the services they are designed to support. Roudaut cites the example of a stress ball: downloading the app would cause the device to collapse into a sphere that the user could squeeze.

    Right now, the Morphee prototypes need external help to change shape, like wires, springs, and actuators, but in the future, the flexible material, touch sensor, and actuator will be merged. “All the layers will be made of flexible material,” says Roudaut. “My work is to make that happen faster, with new prototypes, and pushing the vision so companies [become] interested in making higher fidelity devices.”


    Roudaut and colleagues experimented with six different Morphees — made of materials like wood, dielectric electroactive polymers, and smart memory alloys — and measured their shape resolution along dimensions like speed (how fast a device can deform) and ability to curve. They wanted to get a sense of what kinds of materials are functional — and safe. “The electroactive polymer requires a huge voltage. We had to figure out how to use it without electrocuting ourselves,” says Roudaut.

    A tiled touchscreen made of wood and smart memory alloy wires (above) seemed the most promising of the prototypes. It was able to hold its shape and could quickly curve. Another prototype using a two-millimeter-thick E-ink display could roll into a cylinder, but could achieve greater flexibility if it were even thinner, according to Roudaut.

    These concepts are already being brought to life by companies like Fremont, Calif.-based Tactus. Its shape-changing display layer takes the place of the front glass on a smartphone and creates a physical keyboard with inflatable buttons that can appear and recede. The challenges in this space, according to Roudaut, are finding suitably heat-resistant materials that can create sufficient force – after all, having your stress ball phone crumble when you squeeze it would be decidedly stress-inducing.


  • Predicting Twitter popularity is all about probability

    Tweets have the power to decimate markets, but they also have users and companies seeing dollar signs. With huge marketing, political, and social mobilization potential, how can you predict which tweets will get more views, and which retweets will go viral? A new study developed a statistical model that attempts to estimate the popularity of tweets, and thus how memes spread.

    Starting with 52 “root” tweets from users both famous and obscure, the researchers first analyzed the dynamics of retweeting, like the speed and spread of a tweet from a user to followers and then their followers. The researchers, from the University of Washington, MIT, and Penn, used the Twitter API to collect all the retweet information and found that most retweets occurred within one hour of the original tweet. Not surprisingly, they also found that root tweets are retweeted more than the retweets themselves.

    They then plugged the important variables — number of followers, retweet speed, retweets of other tweets — into a Bayesian model, a statistical approach that uses prior evidence (the root tweets) to calculate how the retweet graph evolves. They experimented with feeding the model different amounts of prior evidence to see how much was needed to make an accurate prediction. Using only 10 percent of the retweets to guide the model, they were able to predict retweet time and volume reasonably accurately, and the error decreased the more retweet data they included. The average retweet time was only 4.4 minutes.
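    The study’s model is richer than this, but the flavor of Bayesian updating from prior evidence can be sketched with a textbook Gamma-Poisson conjugate pair: treat retweets as a Poisson process and refine the rate estimate as early retweets arrive. Every number below is invented for illustration; this is not the paper’s actual model.

```python
# Gamma-Poisson sketch of Bayesian rate estimation -- a stand-in for the
# paper's retweet model; all priors and counts are invented.

def update_rate(prior_shape, prior_rate, retweets_seen, minutes_elapsed):
    """Conjugate update: posterior Gamma parameters for the
    retweets-per-minute rate after the observed window."""
    return prior_shape + retweets_seen, prior_rate + minutes_elapsed

def expected_total(shape, rate, horizon_minutes):
    """Posterior-mean rate multiplied by the prediction horizon."""
    return (shape / rate) * horizon_minutes

# Prior of roughly 1 retweet/minute; then observe 30 retweets in 10 minutes.
shape, rate = update_rate(1.0, 1.0, retweets_seen=30, minutes_elapsed=10.0)
print(round(expected_total(shape, rate, horizon_minutes=60.0)))  # 169
```

    Feeding in more observed retweets tightens the posterior, which mirrors the researchers’ finding that prediction error shrank as more retweet data was included.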


    Throwing more information into the prediction engine (like whether a particular follower has a large number of followers of his or her own) could improve the accuracy. Their model was thrown off, it seems, by a few anomalous tweets with a very rapid onset and termination of retweets that didn’t follow the same pattern as the other tweets. (Though they don’t identify who sent those tweets, my bet is on @KimKardashian, whose followers’ actual and predicted retweet timecourse is pictured above.) The researchers didn’t even consider the time of day a tweet was posted, nor its content; there is likely huge potential to mine those domains for what, and when, leads to trending.

    With the abundance of the Twitterverse open to developers via API, this study represents just the tip of the iceberg in predicting tweeting behavior, something that startups like Blab are busily pursuing. It also shows that robust methods like Bayesian statistics can predict if a tweet has any retweet life left, and thus whether it can gather more eyeballs and clicks, something that is sure to prove very lucrative.


  • Now there’s an app to help you dodge bullets

    Researchers from Vanderbilt University have developed a new app and hardware module that will help you find the direction of gunfire. The system uses the sonic signatures associated with gunfire to pinpoint its location and displays the result on an Android smartphone map.

    Originally developed for the Department of Defense, acoustic shockwave bearing estimation was designed to help soldiers locate snipers. The technology takes advantage of the properties associated with gunfire – the initial flash of the muzzle blast and the shockwaves that follow. The supersonic speeds and whizzes of bullets can be tracked with microphones and a really precise clock hooked up to a microprocessor. These sensor nodes communicate with smartphones via Bluetooth; data from a few differently placed sensor nodes are required to triangulate the location of the gunshots.
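    The real system fuses muzzle-blast and shockwave detections across several nodes, but the core geometry is time difference of arrival (TDOA). Here is a hedged sketch for just two microphones and a distant source; the microphone spacing and timing values are made up for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # meters/second in air at roughly 20 C

def bearing_from_tdoa(delta_t, mic_spacing):
    """Far-field angle of arrival (radians, measured from the line joining
    the two microphones): cos(theta) = c * delta_t / d."""
    x = SPEED_OF_SOUND * delta_t / mic_spacing
    return math.acos(max(-1.0, min(1.0, x)))  # clamp against timing noise

# Sound hits mic A 0.5 ms before mic B; mics are 0.34 m apart.
angle = math.degrees(bearing_from_tdoa(0.0005, 0.34))
print(round(angle, 1))  # 59.7
```

    Two microphones give only a bearing; combining bearings (or arrival times) from several spatially separated nodes is what allows the triangulation described above.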

    The sniper location system was built into combat helmets, but the research team has now updated it for smartphones with funding from DARPA. Some nodes are still required, so civilian use may not be practical. But the researchers think security details or police squad cars could make use of the smartphone version.


  • Would you like some storage in your storage?

    A new type of memory device that will allow for much greater miniaturization and efficiency than current RAM has proved to have a surprising property. German researchers reported in a recent issue of Nature Communications that ReRAM (resistive memory cells) has a battery-type effect in which the devices actually store charge. This helps explain some anomalous behavior in memristors, the class of circuit elements that subsumes ReRAM.

    Resistive memory cells (ReRAM or RRAM) have the potential to become a front-runner technology among nonvolatile memories. First developed by HP in 2008, ReRAM exhibits fast switching times and is suitable for low-power applications, because it requires less voltage. ReRAM differs from conventional computer memory by using ions (charged atoms), rather than electrons, to store data; electrons, owing to their far smaller size, are harder to control, which impacts both data storage density and energy use.


    The new research study reports that the ions in ReRAM behave like a battery. The data storage function in ReRAM is realized by the movement of ions between the memory cell’s two electrodes, silver or copper on the active end, and platinum, for example, on the inert end. A positive voltage leads to metal depositing at the counter electrode, which eventually forms a filament that short circuits the cell by connecting the two ends. This corresponds to the cell’s “on” state, while the “off” state can be restored by applying an oppositely polarized voltage.
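    The switching behavior can be caricatured as a two-terminal element whose filament grows under positive voltage and dissolves under negative voltage. This deliberately simplistic Python model is only a cartoon of the mechanism described above; real ReRAM dynamics are analog and electrochemical, and the growth rate and threshold here are arbitrary.

```python
# Cartoon of resistive switching: positive voltage grows a metal filament
# until it bridges the electrodes ("on"); reverse polarity dissolves it
# ("off"). Rates and thresholds are arbitrary, for illustration only.
class ReRAMCell:
    def __init__(self):
        self.filament = 0.0  # 0.0 = fully dissolved, 1.0 = bridging

    def apply_voltage(self, volts):
        self.filament = min(1.0, max(0.0, self.filament + 0.5 * volts))

    @property
    def state(self):
        return "on" if self.filament >= 1.0 else "off"

cell = ReRAMCell()
cell.apply_voltage(+1.0)
cell.apply_voltage(+1.0)
print(cell.state)  # on: the filament short-circuits the cell
cell.apply_voltage(-2.0)
print(cell.state)  # off: opposite polarity restores the off state
```

    The battery-like voltage reported in the study is precisely what this two-state cartoon leaves out: the ion movement itself stores charge.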

    A byproduct of this switching is the generation of an electric voltage, meaning the ReRAM cells act like tiny batteries. This finding has an impact not just for potentially improving data readout (an idea that the research team has patented), but also has theoretical implications. ReRAM, with active electrochemical components, violates the definition of memristors as passive circuit elements. The researchers argue that their finding means the memristor concept needs to be expanded, and that ReRAM cells are real memristors, something memristor pioneer Leon Chua has also argued.

    Image via Jülich Aachen Research Alliance


  • Forget touchscreens: paint a computer interface anywhere with WorldKit

    Ubiquitous, gesture-controlled interfaces are one step closer to reality, thanks to a new system developed at Carnegie Mellon University. WorldKit lets you create interactive apps on any surface just by waving your hand. The project was announced by the university on Thursday.

    Instead of being tethered to your hardware, WorldKit is designed to make access to computing instant and mobile by making the world your touchscreen. Right now, the system involves a ceiling-mounted camera and projector that record hand movements and then project onto the surface of your choice. Some potential uses include TV remote controls, which can be accessed by rubbing the arm of a sofa, or calendars that can be swiped onto doors.

    With projectors and depth-sensing cameras (the current system uses a Kinect) getting smaller, the researchers envision that a system like WorldKit could eventually fit into a light bulb. Any room thus equipped could become a smart environment, where objects and walls become display surfaces. One member of the research team, Chris Harrison, previously worked on the Skinput device that allows users to turn their own arms into touch interfaces.

    In the future, users should be able to design their own interfaces with WorldKit. The system currently allows for things like buttons, multitouch drawing (akin to a whiteboard), and counting the number of objects within an interaction “bubble.” The existing prototype still has limited resolution and input dimensions, but hardware advances and future research could allow voice commands or even interaction in free space rather than on surfaces. The CMU team will be presenting their work at CHI2013 on April 30.

    Image via Chris Harrison/Carnegie Mellon University


  • Keeping Fitbit safe from hackers and cheaters with FitLock

    As if having the caloric details of your sex life posted publicly wasn’t enough, new research has exposed additional security vulnerabilities in the popular Fitbit fitness tracking devices (see disclosure). A team from Florida International University has shown that Fitbits can be subject to attacks including denial of service, injection, and data capture.

    Many of these problems stem from the fact that the Fitbit uses plain HTTP in its communications, exposing usernames, passwords, and data to opportunistic attackers. A suite of probing tools created by the researchers was able to capture data from any Fitbit tracker within a radius of 15 feet. Another type of attack they tested forced the Fitbit to attempt frequent data uploads, draining the battery 21 times faster than normal once-a-day uploading.

    An additional problem the researchers identified is an absence of a data consistency check on the Fitbit and its associated online social network. For example, they were able to inject 12.6 million steps into a user account, which the system translated into only 0.02 miles traveled, based on the initial calibration to the user’s stride length. This kind of data injection could be exploited by cheats, people who don’t want to work for the badges and monetary rewards that are available to fitness over-achievers.
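    The missing consistency check is straightforward in principle: reported distance should roughly equal steps times calibrated stride length. A sketch of such a server-side check, where the stride value and the 10 percent tolerance are hypothetical:

```python
FEET_PER_MILE = 5280.0

def plausible_upload(steps, miles, stride_feet, tolerance=0.10):
    """Flag uploads where reported distance disagrees with steps * stride.
    The 10% tolerance and the stride values used below are hypothetical."""
    expected_miles = steps * stride_feet / FEET_PER_MILE
    if expected_miles == 0.0:
        return miles == 0.0
    return abs(miles - expected_miles) / expected_miles <= tolerance

# The study's injected upload: 12.6 million steps but only 0.02 miles.
print(plausible_upload(12_600_000, 0.02, stride_feet=2.5))  # False
print(plausible_upload(10_000, 4.7, stride_feet=2.5))       # True
```

    A check like this would have rejected the 12.6-million-step injection outright, since the reported mileage is wildly inconsistent with the step count.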

    While such an attack on a given individual might seem far-fetched, hackers could be motivated to expose or misuse sensitive personal health data. The consequences of that exposure could be no more than embarrassment for the Fitbit’s owner, but the security and privacy ramifications could go much deeper for similarly vulnerable wireless devices used in larger settings by healthcare companies.

    The researchers also highlighted a few more bizarre “mule” attacks, such as attaching the Fitbit to a spinning rope or a car wheel (you can “burn” about 350 calories in 20 minutes with the latter method).

    To combat these attacks, they developed FitLock, a hacked-together defense system that includes encryption. A data consistency check also verifies new uploads against stride length and basal metabolic rate so that the number of steps, distance traveled, and calories burned correspond. According to the recently released research, this additional security results in a negligible increase in processing time of 37 ms, about 2.4 percent more than normal Fitbit overhead. They also propose an extra step to thwart mule attacks: using a smaller, more accurate GPS chip to detect when location isn’t changing while steps are being taken (rope attack), or when location is changing far too much (wheel attack).

    The attacks averted by FitLock are not unique to the Fitbit, or even to fitness trackers. Insulin pumps and cardiac defibrillators, for example, could be manipulated with the same methods, with much more dire consequences.

    Disclosure: Fitbit is backed by True Ventures, a venture capital firm that is an investor in the parent company of GigaOM. Om Malik, founder of GigaOM, is also a venture partner at True.


  • 12 hours of Kevin Bacon: finding anyone in a social network

    Crowdsourcing as a method of locating important people or information has become a familiar accompaniment to the aftermath of disasters. But are people really effective at locating a target, or do they just throw out blanket broadcasts? How quickly can people mobilize their social networks? The answers to these questions have become even more relevant with the fervor displayed on Reddit during the search for the Boston bombing suspects.

    The small-world experiment that gave us the concept of six degrees of separation has a new counterpart in the internet age. The State Department’s Tag Challenge had teams search for five “thieves” (portrayed by actors, pictured below) in five international cities. The winning team, who found three of the five targets in less than 12 hours, have now released research analyzing their performance. They were interested in people’s mobilization efforts under time pressure, particularly whether messages were targeted (like @ mentions on Twitter) or whether social network participants engaged in a blind search.

    In their Tag Challenge data, the researchers found that geographically targeted tweets increased over time, especially as the deadline approached. They think this represents conscious mobilization efforts as time became critical to the task, similar to the locally targeted geographic mobilization seen during Occupy Wall Street. They also found that successful mobilization requires passive participants. These are people who don’t sign up or recruit their friends into the challenge, but are aware of the efforts and pass on this information in other ways.


    In a simulation of how people choose to use their social network to locate someone, the proportion of messages reaching the target cities was about 0.46, higher than what would be expected with a random flow of messages. The simulation was based on the constraints of the Tag Challenge, where the targets’ geographic locations (but not identities, save for a mugshot) were known, so real world situations might play out slightly differently.
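    The gap between targeted and random forwarding can be given intuition with a toy Monte Carlo: compare uniform forwarding across five candidate cities with mildly geo-targeted forwarding. This is not the study’s simulation, and every parameter below is invented; it merely shows how a modest targeting bias lifts the hit rate from the random baseline of 0.20 toward the observed 0.46.

```python
import random

random.seed(42)  # reproducible toy run

CITIES = ["target", "a", "b", "c", "d"]  # five candidate cities

def fraction_reaching_target(geo_bias, trials=20_000):
    """With probability geo_bias a sender deliberately forwards toward the
    target city; otherwise the message goes to a uniformly random city."""
    hits = 0
    for _ in range(trials):
        if random.random() < geo_bias:
            city = "target"
        else:
            city = random.choice(CITIES)
        hits += city == "target"
    return hits / trials

print(fraction_reaching_target(0.0))   # blind broadcast: close to 0.20
print(fraction_reaching_target(0.33))  # mild targeting: close to 0.46
```

    Analytically, a targeting bias of 0.33 gives a hit rate of 0.33 + 0.67 × 0.20 ≈ 0.46, matching the proportion reported in the study’s data.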

    The researchers think the fast discovery of people via social networking depends on thoughtful targeting. When people are being bombarded with news and social media, an @ mention may cause them to pay more attention, and a geographically targeted message may hit closer to home and give the recipient more of a reason to care. The recursive incentive scheme used by the winning team, which landed them 4,400 sign-ups within 48 hours, was also a crucial part of their success.

    There is no doubt that the world has shrunk with online social networking. “We can find any person (who is not particularly hiding) in less than 12 hours,” wrote the study’s authors; their claim seems to be borne out by other research showing only four degrees of separation on Facebook. Correct identification may not be as easy in the real world, though, where “suspects” don’t wear t-shirts identifying them as targets, and the wisdom of the crowd can degenerate into frenzied fingerpointing.


  • Why facial recognition software isn’t ready for prime time

    In the wake of the manhunt for the Boston bombers, opinions are divided on whether facial recognition technology helped or hindered the search. Headlines like “Why Facial Recognition Failed” (Salon.com) are echoed in a statement from the Boston police commissioner, who told The Washington Post that the technology “came up empty.”

    The opposite interpretation can be found at Technorati (“Facial Recognition Technology Helps Identify Boston Marathon Bombing Suspects”). So who is right, and were today’s facial recognition techniques up to the task?

    The high-tech video intelligence methods hyped in the media during the manhunt may be available for use by investigators, but that doesn’t mean they’re effective or actually used by law enforcement. Neither San Francisco nor San Jose police use facial recognition, for example, and an FBI biometric system planned for introduction in California and eight other states next year apparently only makes exploratory use of face recognition, relying instead mostly on the trusty fingerprint.

    Jim Wayman, director of the National Biometric Test Center at San Jose State University, said automated facial recognition didn’t fail in the Boston case: it simply wasn’t used. Contrary to reports like that of San Francisco’s ABC7, Wayman said video intelligence company 3VR’s products were not used to find the Boston bombing suspects.

    3VR did not respond to our request for comment. The FBI also has no large-scale automated face recognition system, according to Wayman.

    The essential problem with face recognition is getting an algorithm to correctly match degraded cell phone or surveillance images with well-lit, head-on photos of faces. While this is effortless for the human brain (unless you have prosopagnosia), hair, hats, sunglasses, and facial expressions can throw off automated recognition methods. Of course, before you can even get to the matching stage, you have to identify a suspect, and hope their face is included in driver’s license, mugshot, or other databases.
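Automated matching generally boils down to comparing feature vectors extracted from images against enrolled ones. A minimal sketch — the embeddings, names, and threshold below are all made-up illustrations, not any real system’s — shows how a degraded probe image can fall below the match threshold and return nothing:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def best_match(probe, database, threshold=0.9):
    """Return the name of the most similar enrolled embedding,
    or None when nothing clears the threshold (a 'no match')."""
    name, score = max(((n, cosine_similarity(probe, v)) for n, v in database.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None

enrolled = {"alice": [1.0, 0.0, 0.2], "bob": [0.1, 1.0, 0.0]}
clean_probe = [0.95, 0.05, 0.22]   # well-lit, head-on shot resembling "alice"
degraded_probe = [0.5, 0.6, 0.9]   # blurry surveillance frame: matches nobody
```

The clean probe scores near 1.0 against its enrolled counterpart; the degraded one scores in the middle against everybody, which is exactly the failure mode hats, blur, and odd angles produce.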

Image: face recognition with a Hopfield network

    In the Boston case, video surveillance more broadly was useful for tracking the movements of the suspects. This still required considerable human effort: the Post reports one agent watching the same video clip 400 times.

    The next development step for facial recognition, both academically and commercially, is 3D, using shadows and facial landmarks to create best-guess models of faces. Face recognition challenges organized by the National Institute of Standards and Technology have expedited improvements at a Moore’s law-like pace, but the nuances that impede computers, like image alignment, occlusion, and face angle, remain a problem.

    Better and cheaper (and more ubiquitous) cameras should address the issues of grainy and blurry images; an international standard requires a resolution of 90 pixels between the eyes for facial recognition algorithms to work, says Wayman, whereas the images released of the Boston suspects had only 12 to 20 pixels. A database against which to compare faces is still required, however; identifying and tracking a face across video streams would be much more useful.
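A back-of-the-envelope pinhole-camera calculation — the camera parameters here are illustrative assumptions — shows why typical surveillance footage falls so far short of that 90-pixel standard:

```python
import math

def pixels_between_eyes(image_width_px, fov_deg, distance_m, ipd_m=0.063):
    """Pinhole camera model: focal length in pixels f = W / (2 * tan(FOV/2));
    a feature of physical size s at distance d spans f * s / d pixels.
    0.063 m is a typical adult interpupillary distance."""
    f_px = image_width_px / (2 * math.tan(math.radians(fov_deg) / 2))
    return f_px * ipd_m / distance_m

# A 1080p camera with a 70-degree field of view, 10 meters from the face:
span = pixels_between_eyes(1920, 70, 10)  # about 8.6 pixels -- nowhere near 90
```

Under these assumptions, even a modern 1080p camera only meets the 90-pixel standard at roughly arm’s-length distances.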

    And even when facial recognition technologies improve and mature, the question still remains: should they be ready for prime time, in a way reminiscent of Minority Report? Wayman said currently employed systems that compare live people to their passport photos at airports still have a false negative rate of about 15 percent. If performance in such controlled situations is so fickle, it seems there is still a lot of work to do before these systems can automatically, and accurately, pick out faces of interest from surveillance footage.

    Image via Wikimedia Commons user Mrazvan22


  • Taming the HetNet with Wi-Fi traffic cops

    Interference and clogs over wireless networks — the result of Wi-Fi, Bluetooth devices, and even baby monitors competing for bandwidth — could be reduced with software that acts like a wireless traffic light.

    GapSense, developed by University of Michigan computer scientists, lets heterogeneous devices talk to each other, coordinating the start and stop of their packets and making them wait their turn to use the airwaves. UM hopes to develop the technology into a commercial product.

    CTIA, the wireless industry trade group, has estimated there are more than 320 million wireless-enabled devices in the U.S. With protocols using different spectrum widths and placing varying levels of demand on the network, data collisions that cause interference between these devices are bound to happen. Using a coordinated sequence of pauses and pulses, GapSense reduces these collisions.

    In one test, interference between ZigBee (a protocol that can be used e.g. for automating home lighting and temperature control) and Wi-Fi was reduced by 88 percent. Because these devices operate at different clock speeds, GapSense’s modulation of data transfer can also lower power consumption by 44 percent for Wi-Fi devices, according to the researchers.
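GapSense’s actual signaling uses timed energy pulses and gaps; purely as a loose illustration of why coordinated turn-taking reduces collisions at all, here is a toy time-slot model (not the real protocol — slot counts, probabilities, and the alternation rule are all assumptions):

```python
import random

def count_collisions(n_slots=1000, p_tx=0.5, coordinated=False, seed=0):
    """Two radios each want to transmit in any given slot with probability
    p_tx. Uncoordinated, both may fire in the same slot (a collision);
    coordinated, device A only uses even slots and device B odd slots,
    so simultaneous transmissions cannot occur."""
    rng = random.Random(seed)
    collisions = 0
    for slot in range(n_slots):
        a = rng.random() < p_tx
        b = rng.random() < p_tx
        if coordinated:
            a = a and slot % 2 == 0
            b = b and slot % 2 == 1
        if a and b:
            collisions += 1
    return collisions
```

The coordinated case eliminates collisions entirely in this sketch, at the cost of each device deferring half the time — real protocols like GapSense aim for the same effect with far less wasted airtime.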


  • QWERTY out, KALQ in: the new fast keyboard for touchscreens

    A re-imagined touchscreen keyboard layout promises to speed up typing on tablets. The split keyboard, known as KALQ, features two 4×4 grids of keys optimized for thumb typing, delivering speeds up to 34 percent faster than QWERTY, according to new research. The new layout will be available as a free Android app in May.

    Research into optimal keyboard layouts is as old as QWERTY itself, a legacy inherited from 19th century typewriters. Thumb typing with QWERTY is notoriously inefficient on touchscreen tablets and phones. Starting from the basics — how a touchscreen device is held in one’s hands — an international team of researchers drew on user behavioral data and computational models to develop the new layout. The lead investigator, Antti Oulasvirta of the Max Planck Institute for Informatics, will officially unveil this research at CHI2013 on May 1.

    The model predicts that users should be able to reach 49 words per minute with KALQ, and because the study’s subjects were non-native English speakers, typing speeds could conceivably be even higher among native speakers. KALQ was designed so the most commonly used letters are clustered, which means travel distances are short and both thumbs work roughly equally and alternately. Most of the vowels are positioned near the space bar and are handled by the right thumb, while the left thumb takes care of most of the consonants and most of the first letters of words. For lefties, the orientation can be reversed, and the key size can even be scaled for different hand sizes.

    KALQ keyboard layout

    For KALQ to work, tablets should ideally be gripped horizontally, with the corners cradled in the valley at the base of the thumbs. On a 7-inch tablet (the researchers used the Samsung Galaxy Tab), test subjects had the fastest movement times and best thumb mobility with this configuration, though the grip gave them access to less tablet surface area overall.

    Based on this tablet gripping strategy, the researchers used computational techniques to determine the optimal key assignments. Their model of thumb movements was trained on millions of English-language tweets that originated from mobile devices. The end result, KALQ, minimizes movement times, and worked even better when users were trained to move their thumbs simultaneously and anticipate moves by hovering the thumb over the next letter.
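The study’s optimizer works over a full keyboard with detailed models of two alternating thumbs, but the core idea — minimize expected travel weighted by letter-pair frequencies — can be sketched at toy scale by brute force. The letters, frequencies, and key positions below are illustrative, not the study’s data:

```python
import math
from itertools import permutations

def travel_cost(layout, bigram_freq, positions):
    """Expected thumb travel for a layout: the sum over letter pairs of
    pair frequency times the Euclidean distance between their keys."""
    cost = 0.0
    for (a, b), freq in bigram_freq.items():
        xa, ya = positions[layout.index(a)]
        xb, yb = positions[layout.index(b)]
        cost += freq * math.hypot(xa - xb, ya - yb)
    return cost

positions = [(0, 0), (1, 0), (0, 1), (1, 1)]   # a 2x2 toy key grid
bigrams = {("t", "h"): 0.35, ("h", "e"): 0.30, ("e", "r"): 0.20, ("t", "e"): 0.15}

# Exhaustively try every assignment of the four letters to the four keys
best = min(permutations("ther"), key=lambda lay: travel_cost(lay, bigrams, positions))
```

Brute force only works at toy scale; with 28 keys the search space explodes, which is why the real optimization relies on computational techniques rather than enumeration.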

    Novice tablet users reached typing speeds that eclipsed those achievable with QWERTY after about 10 hours of training, and continued to improve, reaching 37 words per minute. This is the fastest thumb typing speed ever reported, according to Oulasvirta and colleagues, and is 19 percent faster than typing speeds found in previous studies. The end result represents a 34 percent improvement over baseline QWERTY performance in this study’s subjects.
