Blog

  • CoCo Therapeutics Names Steve Butcher COO

    UK-based biotech startup CoCo Therapeutics Ltd has named Steve Butcher as Chief Operating Officer. Dr Butcher was a founder and Scientific Director of the Fujisawa Institute of Neuroscience, before holding management positions of increasing responsibility at Pharmacia AB. The company is backed by Advent Life Sciences.

    PRESS RELEASE
    CoCo Therapeutics Ltd, a newly formed biotechnology company, today announced the appointment of Dr Steve Butcher as Chief Operating Officer. Dr Butcher was a founder and Scientific Director of the Fujisawa Institute of Neuroscience, before holding management positions of increasing responsibility at Pharmacia AB. More recently he held executive positions in a number of biotechnology companies, including CSO at BioImage A/S and COO at TopoTarget A/S.

    Raj Parekh, General Partner at Advent Venture Partners and the Chairman of CoCo said “We are pleased that an individual of Steve’s calibre is now leading the programmes at CoCo Therapeutics. The Board looks forward to working closely with Steve to evaluate RAR-alpha agonists in Alzheimer’s disease.”

    Alzheimer’s disease (AD) is the most common cause of dementia, affecting around 5.3 million people in the US, 417,000 people in the UK and many millions of others worldwide. It is estimated that this number will more than double by 2050, should current trends continue.

    About CoCo Therapeutics
    CoCo Therapeutics is a UK biotechnology company focussed on the discovery of new drugs for the treatment of Alzheimer’s disease. The company’s programmes are based on the recent scientific discoveries from the laboratory of Professor Jonathan Corcoran at King’s College London, made during research funded by the Wellcome Trust. This research has implicated RAR-alpha, a novel target, in Alzheimer’s disease.

    About Advent Venture Partners
    Advent Life Sciences is the dedicated Life Sciences team at Advent Venture Partners, one of Europe’s best established venture capital firms. Advent Life Sciences invests predominantly in early-stage and growth equity life sciences companies in the UK, Europe and the US. It will back companies that have a first- or best-in-class approach in a range of sectors within life sciences, including new drug discovery, enabling technologies, med-tech and diagnostics.

    Advent Life Sciences is a leader in European life sciences venture capital. Its investments include: PowderMed, a therapeutic DNA vaccine company sold to Pfizer; Thiakis, an obesity treatment company acquired by Wyeth Pharmaceuticals; Respivert, a drug discovery company focused on respiratory diseases that was acquired by Johnson & Johnson; EUSA Pharma, a transatlantic speciality pharmaceutical company acquired by Jazz Pharmaceuticals; Avila Therapeutics, a biotechnology company developing targeted covalent drugs acquired by Celgene Corporation, Micromet, a biotechnology company acquired by Amgen and Algeta (OSE: ALGETA), an oncology company developing treatments for bone metastases and disseminated tumours.

    The post CoCo Therapeutics Names Steve Butcher COO appeared first on peHUB.

  • Google Expands Public Alerts to Japan to Help with Natural Disaster Preparedness

    Google has expanded its new public alert program to Japan, a country that is still feeling the effects of the massive earthquake and tsunami that hit two years ago.

    Google Public Alerts, first launched in the U.S. following Hurricane Sandy, are now available in Japan. Public Alerts provide pertinent information about natural disasters and other emergency situations inside Google Search, Google Maps, and Google Now.

    Japan is the first country Public Alerts have reached since the U.S. launch.

    Now, when people in Japan search Google or Google Maps for information pertaining to, say, an earthquake, the alert info will appear right at the top of the search results on both desktop and mobile. A link inside the alerts will let users access “more info,” including full disaster profiles from the Japan Meteorological Agency, among other resources.

    “We hope our technology, including Public Alerts, will help people better prepare for future crises and create more far-reaching support for crisis recovery. This is why in Japan, Google has newly partnered with 14 Japanese prefectures and cities, including seven from the Tōhoku region, to make their government data available online and more easily accessible to users, both during a time of crisis and after. The devastating Tōhoku Earthquake struck Japan only two years ago, and the region is still slowly recovering from the tragedy,” says Google.

    The Public Alerts are also featured on Google Now, and are tailored to the user’s location. “For example, if you happen to be in Tokyo at a time when a tsunami alert is issued, Google Now will show you a card containing information about the tsunami alert, as well as any available evacuation instructions,” says Google.

    Google says it is looking to expand these Public Alerts to other countries soon.

  • Nokia releases Glam Me app for narcissistic Windows Phone 8 Lumia users

    Nokia targeted hipsters with #2InstaWithLove a couple of days ago, and now the Finnish smartphone manufacturer has shifted its focus onto narcissists. Nokia Glam Me allows users to take enhanced self-portraits for later adulation and, obviously, sharing across various social networks.

    There’s a catch though. In order to take advantage of the benefits that Nokia Glam Me touts, one has to own a compatible Lumia smartphone running Windows Phone 8. And, for the best results, users might want to look towards a higher-spec’d model in the front-facing camera department, like the Lumia 920 or the recently-introduced Lumia 720.

    Self-adulation can be achieved simply by using the default camera software to take self-portraits, of course, but that’s not the Nokia way. The Finnish company takes it a step further by allowing users to “lighten up that minor blemish” or spruce up their day with automatic enhancements and manual adjustments.

    The app provides tweaks for facial details as well as “cool artistic effects optimized for self-portrait” like a grayscale effect of sorts. There’s even “intelligence inside”, because what’s beauty without brains?

    Nokia Glam Me is available to download from the Windows Phone Store.

  • Job cuts to accompany T-Mobile, MetroPCS merger

    The proposed merger of No. 4 wireless carrier T-Mobile USA and MetroPCS (PCS) cleared a major hurdle earlier this week, but some troubling news accompanied the win: more than 100 T-Mobile employees are reportedly set to lose their jobs. The Seattle Times reports that more than 100 people in T-Mobile’s marketing group and across other departments will be laid off during “integration” meetings scheduled to take place on Thursday. T-Mobile employed approximately 36,000 people across the country in 2012. MetroPCS shareholders will convene on April 12th to vote on the merger, which is still being reviewed by regulators.

  • Disney’s Hyperion will reportedly sell off most of its titles, focus on TV-related books

    Hyperion, the book publisher owned by Disney, is reportedly planning to sell off most of its backlist. Going forward, the company will focus on books that tie into ABC-Disney TV shows, according to a report in the Wall Street Journal.

    The company, under the direction of publisher Ellen Archer, has been moving in this direction for some time. Last year, it hired former Hollywood talent agent Laura Popper as editorial director of franchise publishing. The goal is for Hyperion to be able to promote its books across multimedia platforms, and it is a lot easier to do that if a book’s author already has his or her own TV show.

    The WSJ says that Hyperion will “look for books either linked to ABC television properties or that it believes can be extended to television or other corners of Walt Disney,” citing an unidentified source. Hyperion is already doing this to an extent, publishing books “written by” the main character on the show Castle. It also published cookbooks from the daytime show The Chew and by Jamie Oliver and tie-ins to the soap opera General Hospital.

    Hyperion has also had plenty of non-TV-related bestsellers: It publishes Mitch Albom’s books, for example (The Five People You Meet in Heaven), Randy Pausch’s The Last Lecture and J.R. Moehringer’s The Tender Bar, among others. Those big names presumably won’t be sold off, but I’ve asked Hyperion for comment.


  • Whatever it Takes: Paralyzed Motocross Rider – Darius Glover

    The next time you feel sick, stub your toe or maybe even sprain an ankle, you need to think of Darius Glover. He’s the guy in this video who’s soaring through the air on a dirt bike while physically strapped in. You see, at 15, while racing, Darius was involved in an accident that left him paralyzed from the waist down. His doctors, friends and relatives said he’d never ride again – Darius disagreed. Click through and prepare for the best 4 minutes and 43 seconds of your day.

    Source: YouTube.com

  • HIPAA and PCI Compliance Are Not Interchangeable

    Mike Klein is president and COO of Online Tech, which provides colocation, managed servers and private cloud services. He follows the health care IT industry closely and you can find more resources at www.onlinetech.com/compliant-hosting/overview.

    MIKE KLEIN
    Online Tech

    When thinking about compliance, many companies assume PCI DSS is interchangeable with HIPAA, or that the gap between the two is small. This overlooks the fact that HIPAA and PCI DSS compliance protect different types of information, with different audit guidelines, safeguard requirements, and consequences for non-compliance or breaches.

    Origins and Audits

    HIPAA compliance is monitored by the Department of Health and Human Services (HHS), and audits are based on Office for Civil Rights (OCR) protocols that are continuously updated and enforced. These are governmental entities, not private companies. KPMG was selected as HHS’ auditor of choice, so an investigation of compliance with the Security and Privacy rules carries the fully informed and funded weight of a well-respected auditing powerhouse.

    Conversely, PCI compliance is defined by the PCI SSC (Payment Card Industry Security Standards Council). This council is a collaboration including Visa, Mastercard, American Express, Discover, and JCB (Japan Credit Bureau), with these companies having a vested interest in keeping consumer data safe.

    Consequences of Non-Compliance

    The cost of a breach is very different between HIPAA and PCI compliance as well. HIPAA is a US federal law. There are criminal and civil penalties associated with a breach, as well as fines. This means that in addition to stiff financial consequences, willfully negligent stakeholders can go to jail for non-compliance. If a breach occurs, healthcare providers are required to post public press releases in traditional media outlets to inform patients of the potential threat to their information. This damage to the image and credibility of an institution can have long lasting impacts.

    With PCI compliance, there are contractually agreed upon fines, but no criminal charges. You aren’t going to see anyone going to jail for not being PCI compliant. This isn’t to say that PCI costs aren’t serious. A PCI breach could cost anywhere from thousands to millions in fines to the credit card companies, and could result in the loss of card processing privileges, which severely impacts business cashflow. Of course, there is also always a threat to a company’s reputation that might discourage current or future buyers.

    Requirements

    When you peel back the curtain on HIPAA and PCI requirements, they look very different. HIPAA is very focused on policies, training, and processes. It’s more subjective and broad in application, caring about how a company handles breach notification, whether an organization insists on BAAs (Business Associate Agreements) with its vendors, or whether the cloud provider associated with a company has conducted a thorough risk assessment against all administrative, physical, and technical safeguards. To this last point, the final HIPAA Privacy and Security Rules published by HHS recently clarified that data center and cloud providers are, in fact, considered Business Associates that must be HIPAA compliant if there is Protected Health Information (PHI) in their data centers or on their servers. HIPAA doesn’t precisely describe technical specifications or methods to achieve compliance. Instead, each Covered Entity and Business Associate must complete a risk assessment and management plan addressing each of the HIPAA safeguards.

    The Business Associate Agreement is unique to HIPAA, and extends the ‘chain-of-trust’ and liabilities for protecting PHI from the Covered Entities (healthcare providers), throughout its network of supporting vendors. Any company that stores, processes, or accesses patient health information is automatically considered a Business Associate. As such, they will be held to the full legal liability to keep PHI safe. Turning a blind eye only makes the penalties steeper.

    By comparison, PCI DSS requirements are much more prescriptive. The technical requirements are more detailed, explicitly outlining the necessity for processes like daily log review and encryption across open, public networks, while processes around training and policies are less prominent. PCI DSS has no equivalent of the Business Associate Agreement that HIPAA requires between a company and its vendors.

    Do HIPAA and PCI Compliance Overlap?

    Well, yes and no. The technical PCI requirements can set up a nice framework that could work as a prescriptive guide for some of HIPAA’s technical safeguard requirements. However, the foundation of HIPAA compliance is a documented risk assessment and management plan against the entire Security Rule, and PCI does not share this cornerstone as the basis for meeting compliance.

    The bottom line is that passing a PCI audit does not mean you’re HIPAA compliant, or that KPMG is going to care about PCI when it comes to an evaluation on due diligence to meet HIPAA compliance.

    The reverse is also true. Passing an independent audit against the HIPAA Security and Privacy rules does not imply PCI compliance either. Even with overlap, they’re still separate and should be treated as such. The best course when looking at hosting providers is to request an audit report, read the details, and confirm that HIPAA compliance is based on the OCR Audit Protocols and PCI compliance is based on the PCI DSS. This ensures that the business not only understands the difference between the two compliance regimes (if both are necessary), but that the company has truly been diligent in keeping your data safe. After all, compliance is not a checkmark, it’s a culture.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.

  • Inside Facebook’s Internal Innovation Culture

    Business news headlines featuring social-networking giant Facebook change almost as often and as dramatically as a teenager updates her Facebook status online.

    But one big part of the Facebook story that rarely gets told is that of the company’s internal innovation culture. It is obviously a story worth investigating, as executives at established corporations and startups alike wonder: how does such a young organization grab so much attention, and a billion customers, regardless of whether people “like” its products, its privacy policies, or its volatile stock?

    Today, Facebook is expected to announce significant changes to the design of one of its core products, the news feed. Ahead of these changes, I sat down with Kate Aronowitz, Facebook’s Director of Design, a veteran executive of two other Silicon Valley giants that changed the way the world does business online, LinkedIn and eBay (where she directed design as well). I asked her to reveal how Facebook encourages creativity and collaboration, both philosophically and practically.

    Our conversation revealed these innovation tips that Facebook is currently following — and shared exclusively with HBR.

    1. Encourage everyone — even those in the C-suite — to learn by making. At Facebook, Aronowitz told me, top executives, including CEO Mark Zuckerberg and Vice President of Product Christopher Cox, are “super involved” in the conception and design of all products. In an age when user experience is king, it’s important that top management weigh in directly on prototypes themselves before approving any project. “There isn’t a review board that designers and engineers go present to with PowerPoint slides. We’re very much a build and prototype culture. Ideas presented on slides just don’t stick,” she said, echoing the credo of Steve Jobs.

    At Facebook, all top executives are cast as entrepreneurial thinkers, not as judges. “They’re just other people on the team, in a way, even if they are Time magazine’s Person of the Year. Our innovation process is less about getting approval, and more about getting these thinkers to participate. Why have them sequestered?” Aronowitz told me. If leaders’ hands get dirty in design decision-making, then decisions don’t come as a surprise. Plus, as she added, “It can be hard to judge something if you’re not part of the process of making it.”

    2. A winning mobile strategy: ask what’s essential and contextual. More active Facebook users now access Facebook daily on mobile phones than on desktops or laptops, and Facebook now faces the challenge of quickly becoming a primarily “mobile” company. What strategies are being used to make Facebook the #1 most-used mobile app (across Android and iOS, according to the latest stats from ComScore)? Sure, there are the no-brainer approaches, such as simplifying images and text for smaller screen sizes to make them appealing on a handheld device. But the way Facebook approaches mobile innovation is to ask: What’s the most essential thing I can present to somebody? “Our attention span is different when we’re using a phone. We need to give users something interesting, relevant, and create an experience where they can take action very quickly,” Aronowitz said. “They’re not focused, like they are at a desk.”

    The way Facebook promotes mobile innovation internally is through an internal mobile design think tank, which it established last year. The team created mobile-experience best practices for all of Facebook and maintains a strategy looking 1-2 years into the future. The company also embedded a designer steeped in mobile strategy in each and every product team.

    Facebook is working on refining the experience of “contextual sharing,” Aronowitz said, in terms of offering information that can be understood, questioned, or answered while on the go on a phone. For instance, engineers and designers constantly ask how to improve the experience of asking friends, in real time, for directions or advice on a tourist destination while you’re on vacation.

    One general mobile innovation tip Aronowitz does offer: design learnings from creating mobile interfaces can flow back into the design of the website, making it more streamlined and appealing.

    Also, companies in general need to think hard about the appropriateness of their mobile notifications, say, when a bill is due in the case of a bank, or when offering new discounts or offers via texts, emails, and other channels on phones. “We’ve learned at Facebook, where we offer so many notifications, that we can’t flood phones with those, as it gets annoying. We constantly ask, what do we buzz about? It’s important to think of every part of the mobile experience: what’s the most essential experience to serve at any time?” But the biggest challenge is how to deliver an experience compelling enough to satisfy someone using their phone while commuting or waiting for a flight in an airport.

    3. Physically mix up your work environment on a regular basis. “Your physical environment influences how you think and feel. If you want to build openness and collaboration, then the office must reflect that,” Aronowitz said. Although such an observation might seem painfully obvious in an era of open-plan seating, what sets Facebook apart is that engineering, management, and other teams physically move their desks and furniture to join new groups and hatch fresh ideas, in person and on a daily basis, rather than shuttling back and forth from permanent desk locations.

    Also related to Facebook’s focus on keeping an office in constant flux, the company is currently expanding its headquarters (with Frank Gehry). Sure, cynics will say this is purely a show-off move because Gehry is a marquee-name designer, but his design isn’t a swooping, shimmering showpiece. Facebook and Gehry are designing a space that is not only large and open, but dotted with many intimate meeting areas. Colors will be “residential and comforting,” Aronowitz says, reflecting the current palette of Facebook’s offices, so workers feel comfortable and at ease while at work. The new space will have moveable walls and furniture so workers can feel nimble and ready to switch gears, building on the current Facebook practice of reconfiguring desks and chairs.

  • Tablets Now Taking A Greater Global Share Of Web Page Views Than Smartphones, According To Adobe’s Digital Index

    The proportion of web traffic coming from tablets has pushed past smartphones for the first time, according to Adobe’s latest Digital Index, which has tracked more than 100 billion visits to 1,000+ websites worldwide from June 2007 to date to compare which device types are driving the most page views. The monitored markets are the U.K., U.S., China, Canada, Australia, Japan, France and Germany. While the difference between smartphone and tablet traffic is marginal, with tablets accounting for eight per cent of the measured page views and smartphones seven per cent, the growth in tablet page views is impressive, especially considering how new the category is (the first iPad launched in April 2010).

    Of course, both mobile device types still account for a fraction of the total share of page views when compared to desktops/laptops, which accounted for 84 per cent of the page views, according to Adobe’s data. But both are taking a growing share, and tablet growth is on an especially steep trajectory.

    Adobe attributes the rise of tablet page views to how well-suited the form factor is for web browsing, with the most obvious attribute being tablets’ larger screen size vs smartphones (albeit, that gap is closing as some tablets shrink and some smartphones swell). On average, Adobe found that Internet users view 70 per cent more pages per visit when browsing with a tablet compared to a smartphone, so tablet users are doing more leisurely (and presumably leisure-time) browsing.

    While there is a good spread of different activities across both tablets and smartphones, Adobe’s index indicates that online shopping is a particularly popular activity for tablet users. Retail websites receive the highest share of tablet traffic across all industries, according to its data, while automotive and travel shopping websites also get a “significant share” of tablet traffic.

    Writing on its digital index blog, Adobe adds:

    We’ve been keeping a close eye on how quickly tablets have taken off. Just a year ago in January we uncovered that visitors using tablets spend 54% more per online order than their counterparts on smartphones, and 19% more than desktop/laptop users. During the past holiday shopping season we saw that 13.5% of all online sales were transacted via tablets. And last month before the Super Bowl we learned that online viewership via tablets doubles during big sporting events. Now we know that not only is tablet traffic more valuable in terms of ecommerce and engagement, tablets have also become the primary device for mobile browsing.

    The U.K. leads Adobe’s Index for tablet page views, with the U.S. second.

    All countries tracked saw their share of traffic from tablets double over the course of 2012, a trend Adobe expects to continue through 2013. It added that some slight dips in tablet share in certain countries in November were down to PC traffic surging, rather than tablet page views dropping.

  • I’m Afraid Bankers Really Do Earn Their Bonuses

    The debate over the pay of top bankers is highly charged. One person’s reward for generating significant revenues is another’s blank check for doing little else than gambling with a client’s life savings.

    In what is now a famous 2009 interview with the UK’s Sunday Times, Goldman Sachs chief executive officer Lloyd Blankfein suggested he was simply a banker “doing God’s work,” even while recognizing that “people are pissed off, mad, and bent out of shape” over bankers’ actions.

    At least he recognized that it was hard for the average wage slave fearful for his job to absorb the fact that during the low-water point of the global recession in 2008, the average banker at Goldman Sachs still received an annual pay packet, including bonuses, of $700,000.

    Of course, Goldman Sachs (GS) employees are smarter than the average bear. They also work harder than the average bear. But it’s still a fair question: Is Blankfein right to think that they really deserve all that money?

    Investment banks like GS reward their employees by reserving between 40 and 50 percent of net revenues for compensation. According to GS’s 2011 annual returns, this came in at just over $12 billion, or 42 percent of net revenues.

    To judge whether this outrageous-looking sum is fair or not, I apply a metric I call the Return on Invested Talent (or ROIT). Like Return on Invested Capital (ROIC), which reflects what a company earns, how much capital it needs to earn it and the ratio between the two, ROIT reveals what the company earns, how much it has to spend on its talent to earn it, and the ratio between the two. In short, we divide the (financial) outputs by the major (talent) inputs.

    At face value, a GS banker on average generates $3.05 of revenues for every $1 spent on his or her salary. Applying the same ROIT equation to profits before tax reveals that GS’s talent generates $1.44 of profit for every dollar expended on the typical GS employee.

    To put this in context, using the same equation, the average employee at a low-cost airline generates a profit before tax of $1.94 per dollar of pay. On that basis, the average banker is arguably about 25% less profitable than the average airline employee, but that 25% difference certainly doesn’t account for the huge gap between the average take-home pay of the banker ($367,000 in 2011) and the pay check of the average low-cost airline employee ($67,000).

    So what does explain that differential? Another metric, total earning assets per head, provides a possible answer. According to this measure, in the year ending 2011, each GS banker was handling roughly nineteen times the assets of a typical low-cost airline employee ($27.7m versus $1.4m). On this basis, the GS banker is arguably under-compensated, since his or her salary is about six times that of the airline employee.
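    As a minimal sketch, the ROIT arithmetic is just a division. The figures below are hypothetical round numbers chosen to mirror the per-dollar ratios quoted above, not actual GS accounts:

```python
def roit(output: float, talent_spend: float) -> float:
    """Return on Invested Talent: financial output generated per
    dollar spent on compensation, by analogy with ROIC."""
    if talent_spend <= 0:
        raise ValueError("talent spend must be positive")
    return output / talent_spend

# Hypothetical firm: $30.5m of revenue and $14.4m of pre-tax profit
# on a $10m compensation bill.
revenue_multiple = roit(30.5e6, 10e6)  # $3.05 of revenue per $1 of pay
profit_multiple = roit(14.4e6, 10e6)   # $1.44 of profit per $1 of pay
```

    The same division, applied to total earning assets per head instead of revenues, captures the nineteen-fold asset gap discussed above.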

    The point is that we cannot all be paid at the same rate as a banker because most of us are not dealing directly with the same volumes of capital. It is perhaps this financial reality alone, rather than the alternative argument that twenty-first-century financial market making demands technically complex instruments and the highest levels of human capital, that justifies the differential.

    Bottom line: when facts rather than emotion are applied, bankers on average might well deserve their bonuses. Which rather makes one wonder whether the European Union’s plan to cap them isn’t just an act of political petulance.

  • 5 reasons why the future of Hadoop is real-time (relatively speaking)

    In some ways, Hadoop is like a fine wine: It gets better with age as rough edges (or flavor profiles) are smoothed out, and those who wait to consume it will probably have a better experience. The only problem with this is that Hadoop exists in a world that’s more about MD 20/20 than it is about Relentless Napa Valley 2008: Companies often want to drink their big data fast, get drunk on insights, and then have some more — maybe something even stronger. And with data — unlike technology and tannins — it turns out older isn’t always better.

    That’s a crude analogy, of course, but it gets at the essence of what’s currently plaguing Hadoop adoption and what will propel it forward in the next couple years. The work being done by companies like Cloudera and Hortonworks at the distribution level is great and important, as is MapReduce as a processing framework for certain types of batch workloads. But not every company can afford to be concerned about managing Hadoop on a day-to-day basis. And not every analytic job pairs well with MapReduce.

    In Part I of our four-part series on Hadoop, we looked at how the technology was born and grew into the juggernaut it is today. In Part II, we laid out the map of the current products and projects that comprise the Hadoop ecosystem. In this installment, we’ll take a closer look at some of them and how they’re positioning themselves to be important players down the road.

    If there’s one big Hadoop theme at our Structure: Data conference March 20-21 in New York, it’s the new realization that people shouldn’t be asking “What’s next after Hadoop?” but rather “What will Hadoop become next?” Based on what’s transpiring today, the answer to that question is that Hadoop will become faster in all regards and more useful as a result.

    Interactivity, big-data-style

    Source: Shutterstock user hauhu.


    As I explained in some detail a couple weeks ago, SQL is what’s next for Hadoop, and that’s not because of familiarity alone or the types of queries permitted by SQL on relational data. It’s also because the massively parallel processing engines developed to analyze relational data over the years are very fast. That means analysts can ask questions and get answers at speeds much closer to the speed of their intuition than is possible when querying entire data sets using standard MapReduce.

    But just as SQL and its processing techniques bring something to Hadoop, Hadoop (the Hadoop Distributed File System, specifically) brings something to the table, too. Namely, it brings scale and flexibility that don’t exist in the traditional data warehouse world, where new hardware and licenses can be expensive; so only the “valuable” data makes its way inside and only after it has been fitted to a pre-defined structure. Hadoop, on the other hand, provides virtually unlimited scale and schema-free storage, so companies can store however much information they want in whatever format they want and worry later about what they’ll actually use it for. (Actually, though, most Hadoop jobs do require some sort of structure in order to run, and Hadoop co-creator Mike Cafarella is working on a project called RecordBreaker that aims to automate this process for certain data types.)
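    The schema-on-read pattern described above can be sketched in a few lines. This is a toy Python illustration of the principle, not Hadoop code, and the record fields are hypothetical:

```python
import json

# Write side: store heterogeneous records as-is, with no pre-defined
# schema (the "store everything now, structure it later" model).
raw_store = [
    json.dumps({"user": "a", "clicks": 3}),
    json.dumps({"user": "b", "clicks": 5, "referrer": "search"}),
    json.dumps({"user": "c"}),  # missing fields are fine at write time
]

# Read side: impose structure only when a question is asked,
# tolerating records that lack the queried field.
def total_clicks(store):
    return sum(json.loads(rec).get("clicks", 0) for rec in store)
```

    A traditional warehouse would instead reject the third record at load time for failing a pre-defined schema; here the cost of structure is deferred to query time.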

    How hot is the SQL-on-Hadoop space? I profiled the companies and projects working on it on Feb. 21, and since then EMC Greenplum has announced a completely rewritten Hadoop distribution that fuses its analytic database to Hadoop, and an entirely new player called JethroData has emerged along with $4.5 million in funding. Even if there’s a major shakeout, there will be a few lucky companies left standing to capitalize on a shift to Hadoop as the center of data gravity, a shift that EMC Greenplum’s Scott Yara (albeit a biased source) thinks will be the data equivalent of the mainframe’s demise.

    This is your database. This is your database on HDFS

    The SQL versus NoSQL debate appears to be dying down as companies and developers begin to realize there’s definitely a place for both in most environments, but a new debate — with Hadoop at the center — might be about to start up. At its core is the concept of data gravity and the large, attractive (in a gravitational sense) entity that is HDFS. Here’s the underlying question that might be posed: If I’m already storing my unstructured data in HDFS and am expected to replace my data warehouse with it, too, why would I also run a handful of other databases that require a separate data store?

    This is in part why HBase has attracted such a strong following despite its relative technical and commercial immaturity compared with comparable NoSQL database Cassandra. For applications that would benefit from a relational database, startups such as Drawn to Scale and Splice Machine have turned HBase into a transactional SQL system. Wibidata, the new startup from Cloudera co-founder Christophe Bisciglia and Aaron Kimball, is pushing an open-source framework called Kiji to make it easier to develop applications that use HBase.

    “If you talk to anyone from Cloudera or any of the platform vendors, I think they will tell you that a large percentage of their customers use HBase,” Bisciglia said. “It’s something that I only expect to see increasing.”

    MapR seems to think so, too: the Hadoop-distribution vendor is getting ahead of the game by selling an enterprise-grade version of HBase called M7. Should hot startups such as TempoDB and Ayasdi decide to take their HBase-reliant cloud services into the data center, they’ll tap into Hadoop clusters, too.

    And the National Security Agency built Apache Accumulo, a key-value database similar to HBase but designed for fine-grained security and massive scale. It’s now being sold commercially by a startup called Sqrrl. There’s even a graph-processing project called Giraph that relies on HBase or Accumulo as the database layer.

    Whatever “real-time” means to you

    Real-time is one of those terms that means different things to different people and different applications. The interactivity that SQL-on-Hadoop technologies promise is one definition, as is the type of stream processing enabled by technologies like Storm. When it comes to the latter, there’s a lot of excitement around YARN as the innovation that will make it happen.

    YARN, aka MapReduce 2.0, is a resource scheduler and distributed application framework that allows Hadoop users to run processing paradigms other than MapReduce. This could mean many things, from traditional parallel-processing methods such as MPI, to graph processing, to newly developed stream-processing engines such as Storm and S4. Considering how many years Hadoop meant HDFS and MapReduce, this type of flexibility is certainly a big deal.

    Stream processing, of course, is the antithesis of batch processing, for which Hadoop is known, and which is inherently too slow for workloads such as serving real-time ads or monitoring sensor data. And even if Storm and other stream-processing platforms somehow don’t make their way onto Hadoop clusters, a startup called HStreaming has made it its mission to deliver stream processing to Hadoop, and it’s on other companies’ radars, as well.
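    The batch-versus-stream distinction is easy to sketch in Python: a batch job cannot answer until the whole data set exists, while a stream processor emits a running answer as each record arrives. This is a toy stand-in for what engines like Storm do at scale (the running-total "job" is invented for illustration):

```python
import itertools

def batch_total(records):
    """Batch: the complete data set must exist before any answer does."""
    return sum(records)

def streaming_totals(records):
    """Streaming: emit an updated answer as each record arrives."""
    yield from itertools.accumulate(records)

data = [3, 1, 4, 1, 5]
print(batch_total(data))                   # 14, but only after all records land
print(list(streaming_totals(iter(data))))  # [3, 4, 8, 9, 14], one per arrival
```

    Both paths reach the same final answer; the difference is entirely in when intermediate answers become available.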

    For what it’s worth, though, VertiCloud Founder and CEO and former Yahoo CTO Raymie Stata thinks we should do away with terms such as batch, real-time and interactive altogether. Instead, he prefers the terms synchronous and asynchronous to describe the human experience with the data rather than the speed of processing it. Synchronous computing happens at the speed of human activity, generally speaking, while asynchronous computing is largely decoupled from the idea of someone sitting in front of a computer screen awaiting a result.

    The change in terms is associated with a change in how you manage SLAs for applications. Uploading photos to Flickr: synchronous. Running a MapReduce job: most likely asynchronous. Ironically, according to Stata, stream processing data with Storm is often asynchronous, too. That’s because there’s probably not someone on the other end waiting for a page to update or a query to return. And unless you’re doing something where guaranteed real-time latency is necessary, the occasional difference between milliseconds and 1 second probably isn’t critical.
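    Stata’s distinction can be sketched in a few lines: synchronous code makes the caller block until the answer is back, while asynchronous work is dropped onto a queue and processed whenever a worker gets to it, with no one waiting. The doubling "job" below is invented for illustration:

```python
import queue
import threading

def process(item):
    return item * 2  # stand-in for real work

def run_sync(item):
    """Synchronous: the caller blocks until the answer comes back."""
    return process(item)

# Asynchronous: submit work and move on; results arrive whenever they arrive.
jobs, results = queue.Queue(), []

def worker():
    while True:
        item = jobs.get()
        if item is None:  # sentinel: no more work
            break
        results.append(process(item))

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    jobs.put(i)   # nobody is sitting at a screen waiting on these
jobs.put(None)
t.join()

print(run_sync(21), sorted(results))  # 42 [0, 2, 4]
```

    The SLA difference follows directly: the synchronous path must answer in human time, while the queue only has to drain eventually.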

    Time to insight starts at the planning phase

    Even when MapReduce is the answer, though, not everyone is game for a long Hadoop deployment process coupled with a consulting deal to identify uses and build applications or workflows. Sometimes, you just want to buy some software and get going.

    Already, companies such as Wibidata and Continuuity are trying to make it easier for companies to build Hadoop applications specific to their own needs, and Wibidata’s Bisciglia said his company is doing less and less customization the more it deals with customers in the same vertical markets. “I think it’s still a couple years out before you can buy a generic application that runs on Hadoop,” he told me, but he does see opportunity for billion-dollar businesses at this level, possibly selling the Hadoop equivalent of an ERP or CRM application.

    Cloudera CEO Mike Olson at Structure: Data 2012. (c) 2012 Pinar Ozger

    And Cloudera CEO Mike Olson told the audience at our Structure: Data conference last year that he’ll connect startups trying to build Hadoop-based applications with funding opportunities. In fact, Cloudera backer Accel Partners launched a Big Data Fund in 2011 with the sole purpose of funding application-level big data startups.

    But maybe Cloudera, like database vendor Oracle before it, will just get into the application space itself. According to Hadoop creator and Cloudera chief architect Doug Cutting:

    “I wouldn’t be surprised if you see vendors, like Cloudera, starting to creep up the stack and sell some applications. You’ve seen that before from Red Hat, from Oracle. You could argue that the relational database is a platform for Oracle and they’ve sold a lot of applications on top. So I think that happens as the market matures. When it’s young, we don’t want to stomp on potential collaborators at this point, we want to open that up to other people to really enhance the platform.”

    Cloud computing is proving to be a big help in getting Hadoop projects off the ground, too. Even low-level services such as Amazon Elastic MapReduce can ease the burden of managing a physical Hadoop cluster, and there are already a handful of cloud services exposing Hadoop as a SaaS application for business intelligence and analytics. The easier it gets to store, process and analyze data in the cloud, the more appealing Hadoop looks to potential users who can’t be bothered to invest in yet another IT project.

    Google (and Microsoft): A guiding light

    Lest we forget, Hadoop is based on a set of Google technologies, and it seems likely its future will also be influenced by what Google is doing. Already, improvements to HDFS seem to mirror changes to the Google File System a few years back, and YARN will enable some new types of non-MapReduce processing similar to what Google’s new Percolator framework does. (Google claims Percolator lets it “process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.”) The MapR-led Apache Drill project is a Hadoop-based version of Google’s Dremel tool; Giraph was likely inspired by Google’s Pregel graph-processing technology.

    Cutting is particularly excited about Google Spanner, a database system that spans data geographies while still maintaining transactional consistency. “It’s a matter of time before somebody implements that in the Hadoop ecosystem,” he said. “That’s a huge change.”

    It’s possible Microsoft could be an inspiration to the Hadoop community, too, especially if it begins to surface pieces of its Bing search infrastructure as products like a couple of company executives have told me it will. Bing runs on a combination of tools called Cosmos, Tiger and Scope, and it’s part of the Online Services division ran by former Yahoo VP and Hadoop backer Qi Lu. Lu said that Microsoft (like Google) is looking beyond just search — Hadoop’s original function — and into building an information fabric that changes how data is indexed, searched for and presented.

    However it evolves, though, it’s becoming pretty obvious that Hadoop is no longer just a technology for doing cheap storage and some MapReduce processing. “I think there’s still some doubt in people’s minds about whether Hadoop is a flash in the pan … and I think they’re missing the point,” Cutting said. “I think that’s going to be proven to people in the next year.”

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • Public Cloud Security, Readiness and Reliability


    The modern idea of the “cloud” may be something new, but a lot of the technology it uses has been around for a while, since 1997 in fact. As with any technology, the most important aspects of deploying a new solution will be an understanding of the platform and, of course, thorough planning.

    Ready for Public Cloud?

    When considering public cloud options, it’s important to understand where there is a direct fit. This means that both key business stakeholders and IT executives will need to see the benefits of moving towards a public cloud “Infrastructure as a Service” environment. Although there are many benefits, administrators should take some considerations into account when looking at public cloud options.

    • Public cloud and security. This is a major consideration for any organization. Although a public cloud can certainly be secured, some organizations have specific regulations as to how their data can be delivered over the WAN. Also, securing the server and application environment will differ when these workloads are pushed through a cloud environment. Special planning and consideration have to go into knowing the type of security requirements an environment might have.

    It’s important not to get overwhelmed when we talk about cloud security options. Yes, there are new technologies revolving around cloud security, but the topic is manageable. As mentioned earlier, we can break down cloud security at a high level by examining the following:

    • Security on the LAN: The first steps will be the understanding of the security elements of your LAN. Is data being encrypted internally? Are there ACLs on the switches? How are the firewalls and load-balancers configured for data leaving the local network?
    • Security at the end-point: How is the end-point accessing the data? Is it through a VPN or through an encrypted connection? Is there a secure client involved? Understanding the end-point security setting and policies is important to ensuring that the data reaches its destination safely.
    • Security in the middle: When data is being transmitted over the WAN there have to be security settings in place from beginning to end. That means setting up a secure tunnel for the data to travel, constant monitoring of the links, and proactively maintaining server and LAN security policies.

    Remember one main point as you plan out your environment: Cloud security isn’t really just one component in itself. Rather, it’s a lot of security best practices being applied for the purpose of transmitting data over the WAN. This is where using next-generation security tools can really help. Advanced device interrogation engines as well as intrusion prevention/detection (IPS/IDS) can further secure a cloud platform.
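    As one concrete instance of the “security in the middle” item, Python’s standard ssl module already defaults to the strict client posture the checklist calls for; this small sketch just makes those defaults explicit (the example.com hostname is illustrative):

```python
import ssl

# A strict client-side TLS context: certificates are verified against the
# system trust store and hostnames are checked before any data moves.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# context.wrap_socket(sock, server_hostname="example.com") would then give
# an encrypted, authenticated tunnel for traffic crossing the WAN.
```

    The same idea applies on the LAN and at the end-point: encryption and verification should be the default, not something bolted on per application.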

    • Environment readiness and reliability. Although public clouds can be easy to adapt to, some environments may not be ready for a cloud initiative. The right infrastructure may need to be in place to support a cloud move. In these cases, organizations should take the time to evaluate their current position and see if going to the cloud is the right move.

    Just like any other infrastructure, it’s important to create an environment capable of supporting business continuity needs. This means accepting that the cloud can and will go down. For example, in a recent major cloud outage, a simple SSL certificate was allowed to expire, creating a global, cascading failure that took down numerous vital public cloud components. Who was the provider? Microsoft Azure.

    • Deploying the right workload. The larger the workload, Virtual Desktop Infrastructure (VDI) for example, the longer it will take to be delivered. Some core applications require backend database connectivity where a public cloud model may not be the right fit. Before moving to the cloud, make sure to have a complete understanding of what will be utilized in the public cloud arena. From there, a good decision can be made as to whether a given application or even virtual node is the right fit for a cloud model.
    • Maintaining control. Just like in a local, non-cloud environment, administrators must retain control of their environment. This is especially important in pay-as-you-go models. With little control or oversight, administrators might be provisioning virtual machines (VMs) and resources when they’re simply not needed. This is where a public cloud can quickly lose its value. IT organizations must keep a watchful eye on their cloud-based workloads and resources to know what is being used and to ensure they are utilizing that environment efficiently.
    • End-user and administrator training. The success of almost any new deployment hinges on user acceptance. If an organization deploys a new public cloud capable of delivering entire workloads to the end-user, there must be core training associated with it. What good is a robust, highly scalable infrastructure if the end-user is confused or unsure how to use it? Since users are often averse to change, all modifications should be gradual and well documented. Information passed to the user should be easy to understand and simple to follow. With good training and solid support on the backend, administrators can deliver powerful data on-demand solutions to the end-user.
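    The “maintaining control” point above lends itself to simple automation. This hypothetical sketch (the VM names, the cpu_percent field and the 5 percent cutoff are all invented) flags powered-on, pay-as-you-go VMs that are doing almost no work and therefore deserve a second look:

```python
# Assumed cutoff for "idle"; tune per workload.
IDLE_CPU_THRESHOLD = 5.0  # percent

# Invented inventory, roughly what a cloud provider's usage API might report.
vms = [
    {"name": "web-01",   "cpu_percent": 42.0},
    {"name": "batch-02", "cpu_percent": 1.2},
    {"name": "test-03",  "cpu_percent": 0.4},
]

def idle_vms(inventory, threshold=IDLE_CPU_THRESHOLD):
    """Return names of VMs burning money while doing almost no work."""
    return [vm["name"] for vm in inventory if vm["cpu_percent"] < threshold]

print(idle_vms(vms))  # ['batch-02', 'test-03']
```

    Even a report this simple, run on a schedule, gives administrators the oversight that pay-as-you-go billing otherwise erodes.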

    Cloud computing is here to stay – and there are many benefits to such a powerful Wide Area Network-based platform. Whether administrators need to provision a new workload or test out an application, a public cloud solution can help an organization stay innovative. Remember, as with any new environment, it’s important to plan out the infrastructure and find the need behind the deployment. When it comes to a public cloud, administrators should evaluate their needs and see how this type of cloud platform can directly benefit them.

    The goal with many recent cloud articles is to debunk the myth that cloud computing is an insecure, Wild West environment. Unlike the dot com bust or other failed technologies, our generation is evolving into a data-on-demand environment where cloud computing acts as the delivery mechanism for vast amounts of information. So while you may not be ready to embrace the technology, it’s important to start understanding it and learn the facts, not the hype.

  • Run Windows 8 apps in a desktop window with ModernMix

    Windows 8 has several annoyances, but perhaps one of the most notable is its requirement to run apps full screen, or in an ugly 2/3, 1/3 mode. When you’re used to being able to position and arrange application windows just as you like, this seems like a significant backward step: we have far less choice than we did before.

    But ModernMix, the latest release from Stardock, changes all that. Because this simple $4.99 program allows you to run Windows 8 apps in a resizable window on your desktop, just like anything else.

    Getting hold of the beta build is a little awkward right now, as you have to provide your email address and wait to be sent a link. But with that out of the way, the program downloads and installs quickly, before presenting you with a basic settings dialog. Just clear that for the moment (the default settings are fine), and you’re ready to go.

    Now launch an app from the Start Screen, and it’ll appear in a window on the desktop, where you can use it as normal. The window can be freely resized and positioned to suit your needs, and has a regular Close button in the top right corner, so you can shut the app down like any other program.

    The app also has a button on the taskbar, of course, and clicking there will bring it to the foreground. Right-click, though, and you’ll find an option to pin your favorite apps to the taskbar, so avoiding the need to switch to the Start screen at all. (And if you do launch an app that way, it’ll relaunch with whatever window size and position it had last time, so you only need set it up once.)

    While this all worked very smoothly for us, it’s possible that some apps might not work so well in a window. Or perhaps you’ll just want to switch back to the Start screen for some other reason. Either way, pressing F10 while running an app will switch you from a desktop window to the start screen, and back again. Or, if you prefer to use the mouse, ModernMix adds a tiny overlay to the top right corner of the app which allows you to do the same thing.

    And if you’re unhappy with any of this, a settings box makes it easy to change. You can turn off the window overlay, say; disable or change the F10 hotkey; or maybe set things up so that apps run full screen when launched from the Start screen, but in a window when launched from the desktop.

    Despite being a beta, all this worked very well for us, with no noticeable problems or issues at all. And so, if you spend more time on the desktop than the Start screen, ModernMix comes highly recommended: it’s the best Windows 8 extension we’ve seen to date, and at $4.99 is an absolute bargain.

  • Facebook may someday charge users for an ad-free experience

    Facebook (FB) may have plans to introduce a monthly subscription fee option that gives users more features on the world’s largest social networking site. A patent application titled “Paid Profile Personalization” describes how the company could “replace advertisements or other elements that are normally displayed to visitors of the user’s profile page that are otherwise controlled by the social networking system.”


  • ASUS ‘launches’ the Transformer AiO for Android and Windows 8 lovers

    If you’re in the market for an all-in-one PC running Windows 8 but you also want an Android tablet to carry about inside the house, Taiwanese manufacturer ASUS has just the thing for you — the new Transformer AiO. Designed as a niche of a niche product, the Transformer AiO appears to have it all figured out.

    The all-in-one aims to give users the power of legacy and Modern UI Windows 8 apps, combined with the vast and mobile-oriented Android ecosystem. On the Windows 8 side, the Transformer AiO brings an 18.4-inch LED-backlit IPS display with 10-point multitouch and a resolution of 1920 by 1080. Power comes from a third-generation Intel Core i3, i5 or i7 processor backed by an Nvidia GeForce GT 730M graphics card with 2GB of RAM. Like you’d expect, it features the usual array of ports, including HDMI and USB 3.0.

    Other specs include 4GB up to 8GB of RAM; 1TB up to 2TB HDD; DVD-RW; Wi-Fi 802.11 a/b/g/n; Bluetooth 4.0; 1MP front-facing camera and two 3W speakers.

    But how does it provide the two software platforms? Well, the display is the key.

    The 18.4-inch display is actually a detachable tablet — you didn’t expect this, did you? — which features a quad-core Nvidia Tegra 3 processor and Android 4.1 Jelly Bean as the operating system of choice. Other features include 32GB of internal storage; 2GB of RAM and a G-sensor, among others.

    ASUS says that the Transformer AiO can switch between Windows 8 and Android simply by pressing a button. The tablet can also act as a wireless remote desktop when detached, because it comes with Wi-Fi 802.11 a/b/g/n connectivity and Bluetooth 3.0 with EDR (Enhanced Data Rate).

    The device does not break any battery life records with ASUS giving an estimate of five hours of detached operation for the 38W battery, a performance similar to the Microsoft Surface Pro tablet PC.

    And, to help lug the tablet-side around and make it usable as a tablet, ASUS has also added a carrying handle and a folding stand, because it weighs 2.4 kg and measures 466 x 294 x 18 mm.

    The Taiwanese manufacturer has yet to announce official pricing or date of availability.

  • Android Ice Cream Sandwich encryption broken with the aid of a freezer

    When Google released Android 4.0 (Ice Cream Sandwich) back in 2011, it introduced a new data scrambling system designed to protect sensitive user information from snoopers who successfully managed to bypass the lock screen.

    It’s strong security, but a team of German researchers have managed to crack the encryption by freezing a Galaxy Nexus and using a toolset called FROST (Forensic Recovery Of Scrambled Telephones) to retrieve contact lists, browser histories, and photos (basically everything you’d want to keep private).

    The process, detailed here, involved first unlocking the bootloader, then packing the Galaxy Nexus into a freezer bag and putting the device inside a -15 degree Celsius freezer for an hour until the phone temperature was below 10 degrees. Once cold, they turned the phone on to check it was working, dismantled it, reassembled it, and put it into fastboot mode.

    From there (still acting quickly) they connected it to a Linux PC via USB, flashed the pre-compiled frost.img recovery image, and were able to use the software to decrypt the user partition.

    There’s something amusing about breaking Ice Cream Sandwich encryption using a freezer (perhaps they tried Gingerbread with a cup of tea initially) but the method works because cooling the RAM chips slows down the speed that data fades from them, giving the crackers more time to access the phone’s contents.

    Having cracked the Galaxy Nexus, the researchers say they plan to try out their system on other Android devices.

    If you have a Galaxy Nexus and fancy trying it for yourself — and are prepared to accept the risks involved with sticking your phone in a freezer — you can download the FROST recovery image and everything else you’ll need from the website.

  • IoT podcast: When devices can talk, will they conspire against you?

    Alex Hawkinson lives in the smartest house around. As the CEO of SmartThings, a company building a hub and cloud service for the internet of things, he’s playing with connected toys all the time and trying to get them to work together. In this podcast, we talk about SmartThings, how we may one day program our fridge door to lock if we don’t meet our fitness goals and how your smart house could foil your future surprise parties.

    (download this episode)

    Subscribe to RSS

    iTunes

    Stitcher Radio

    Show notes:
    Host: Stacey Higginbotham

    • Why we need a “physical graph” for our stuff online
    • Hawkinson’s crazy smart house that stalks him throughout the day
    • What kind of privacy and security can we expect in a connected home?
    • The cool things we can do when we connect everything and that data is open

    SELECT PREVIOUS EPISODES:
    Call-in show: Why the “I’m leaving iPhone” trend?

    Internet of things Podcast – Almond+’s nutty idea: Making sensor connectivity a snap

    Yahoo’s WFH Boo-Boo

    PlayStation Snore?

    Podcast: Why the internet of things is cool and how Mobiplug is helping make it happen

    Podcast: Ballmer’s in the Dell, do tweets ruin TV? And how ISPs are not like gas pumps

    Podcast Q&A: MotoACTV smartwatch now or wait? Lumia 822 in India? Best running apps?

    Podcast: Kabam founder on scaling globally and designing for different platforms


  • Pertino takes on $20M to build out wide-area networks through clouds

    Software-defined-networking (SDN) player Pertino, whose software on public clouds enables wide-area networks (WAN) for users anywhere, has taken on $20 million in Series B funding, helping the company gear up to add more network services and customers.

    Jafco Ventures led the funding round, while previous investors Lightspeed Venture Partners and Norwest Venture Partners also contributed. The company has now raised a total of $28.85 million.

    Pertino has more than 700 customers. It has aimed at small and medium-sized businesses, and in time it will look to add enterprise customers, Pertino Co-founder and CEO Craig Elliott said.

    The software lets customers easily spin networks up or down to meet operational needs, just as they can do with compute or storage power on Amazon Web Services and other public clouds. Also, the virtualized networks can move around to other clouds Pertino runs, in case of a natural disaster. The virtual WAN is designed to be as secure as the local-area network (LAN) that employees use inside a single office.

    Software-defined networking (SDN) remains hot. But it isn’t necessarily one big market. While Big Switch Networks, Embrane and Nicira target SDN deployments in the data center, Pertino works on servers elsewhere. That’s why Elliott said he doesn’t believe Pertino will take business away from those other SDN companies, and vice versa. Still, all of those companies focus on separating the control plane and the data plane with software, which precludes the need for appliances that take up space and require considerable capital expenditure.

    Cisco, Juniper Networks and other companies also are using SDN to optimize wide-area network (WAN) traffic, according to a GigaOM Research report (subscription required).

    With plenty of SDN hype still hanging around, it’s not hard to imagine more companies jumping into the SDN-via-cloud market. But use cases are few and far between, and that could mean new entrants might have to wait before racking up a long list of production-scale clients. Pertino, meanwhile, could maintain its lead.


  • Jawbone Design Guru Helps Bring Wearable Tech & Data Tracking To Your Golf Game


    “I have a tip that can take five strokes off anyone’s golf game: It’s called an eraser,” Arnold Palmer once remarked. Yes, even for those brave enough to wear ridiculous clothes and hack a small white ball around a manicured lawn, golf is a difficult, and sometimes humiliating, sport.

    Luckily for golfers, John McGuire feels your pain and is on a mission to make the game just a little less painful for anyone daring (and ignorant) enough to pick up a club. His new company, Active Mind Technology, wants to give the golfing masses access to the same tools traditionally reserved for the pros by leveraging the same wearable sensor-based technologies found in health-tracking devices like Fitbit, Basis and Jawbone’s Up.

    And who better to assist in that endeavor than the mastermind behind the design of products like Jambox, Jawbone and Jawbone Up? Joining McGuire and his team of twenty is Yves Behar, the design and branding guru (and Chief Creative Officer of Jawbone) known for helping to design the products mentioned above as well as those for PUMA, General Electric, Samsung, Prada and more.

    While Behar hasn’t assumed a title in the company, McGuire tells us that he has not only led the design of the UI, UX, branding and packaging of Active Mind’s newest product, he’s also an investor and, “thankfully, even acts like a founder.”

    This week, McGuire, Behar and team officially unveiled Game Golf, a wearable product that employs a combination of sensors, GPS and NFC technologies to provide golfers with a stream of data and feedback to help them improve their scores.

    Essentially, the device, which includes transmitter tags that are inserted into clubs and a receiver that can be attached to your belt, tracks every shot a user takes during a round, as well as distance, club selection and so on. And, a la health and fitness trackers, Game Golf compiles this data and syncs it with the cloud, allowing users to then access their performance data via its apps on their mobile devices and personal computers.

    Golfers can then share highlights of their round and their overall progress with friends by way of their social network(s) of choice, and see the percentage of shots that they hit in the fairway, greens in regulation, and putting performance. Backing its software, the team has designed Game Golf’s battery to accommodate two full rounds of data tracking before requiring a charge.
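    The round statistics described above are simple to derive once every shot is logged. This hypothetical sketch shows the idea (the per-hole fields and values are invented; Game Golf’s actual data model isn’t documented here):

```python
# Invented round data, one record per hole, roughly as a shot tracker
# might log it.
round_data = [
    {"hole": 1, "fairway_hit": True,  "gir": True,  "putts": 2},
    {"hole": 2, "fairway_hit": False, "gir": False, "putts": 3},
    {"hole": 3, "fairway_hit": True,  "gir": False, "putts": 2},
    {"hole": 4, "fairway_hit": True,  "gir": True,  "putts": 1},
]

def percentage(holes, field):
    """Share of holes where a boolean stat is true, as a percentage."""
    return round(100.0 * sum(1 for h in holes if h[field]) / len(holes), 1)

fairways_pct = percentage(round_data, "fairway_hit")                    # 75.0
gir_pct = percentage(round_data, "gir")                                 # 50.0
putts_per_hole = sum(h["putts"] for h in round_data) / len(round_data)  # 2.0
```

    The hard part of a product like this is capturing the shots reliably; once captured, the analytics layer is mostly bookkeeping like the above.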

    Though that all equates to a good start, one feature is conspicuously absent: the device is not able to measure the velocity of one’s swing (or its relative accuracy). While this could deter some early adopters, it’s not a flat-out deal breaker, and adding this capability down the road could become a significant selling point for those sitting on the fence.

    And, unfortunately for those looking for instant gratification, Game Golf isn’t yet available in stores. Instead, the company has launched a crowdfunding campaign on Indiegogo through which it hopes to raise $125,000 in an effort to finance its product development and distribution. In spite of (or perhaps because of) the fact that it will cost a hefty $249 when it does become publicly available in stores, McGuire tells us that Game Golf has become the fastest money-raising campaign in Indiegogo’s history, raising $63K in 12 hours.

    Now, two days removed from launch, the campaign has raised over $108,000. At this rate, it should meet its goal within a week, which the founder takes as a promising sign of the potential demand for its golf tracker.

    Based on its initial concept and after recruiting well-known pro golfers like Lee Westwood and Graeme McDowell to help with early testing (and invest), Active Mind was able to raise seed financing from a bevy of reputable investors, including Chamath Palihapitiya, Jerry Yang (of AME Cloud Ventures), Morado Venture Partners, Crosslink Capital and Ed Colligan (the Former CEO of Palm) — to name a few.

    “Game Golf gives everyone access to crucial data that can dramatically improve your golf game and handicap,” McDowell says of its appeal to golfers. “[It’s] intuitive, doesn’t disrupt your game and is essential for any golfer looking to understand their game better and knock down their handicap.”

    With its Indiegogo campaign acting as a proof of concept, the startup is currently in the process of raising what McGuire tells us will be a $4 million series A round. If Game Golf is able to sustain this early demand, it will eventually look to expand into other sports, like board and motor sports and soccer, for example.

    While the near-term plan involves serious iterating around Game Golf, McGuire said that the platform is being architected in such a way that it will be able to eventually help users measure activity — and provide a gamification and social layer — across multiple sports.

    As to Game Golf, the founder said that users can expect to see its public launch sometime this summer.

    For more, find the startup’s Indiegogo campaign here, along with a video demo below:

  • Meet Thermodo: A Tiny External Thermometer That Lives In Your iPhone’s Headphone Jack

    Danish startup Robocat has built a lot of software for Apple’s iOS devices, but today the company is branching out with the launch of a new hardware accessory for the iPhone, iPad, and Android devices. It’s called Thermodo, and it’s a very small hardware thermometer that fits in your device’s headphone jack, and transmits real temperature data for use in apps.

    The Thermodo hardware has a passive temperature sensor, housed in an audio jack and protected by a small cylindrical end cap that extends only about a quarter of an inch out from your device. It doesn’t need its own power source. Instead, it transmits temperature data as an audio signal that your phone picks up and translates into the corresponding reading via an API, which the company will first use in a dedicated Thermodo companion app for iOS, as well as in two of its previously released apps, Haze and Thermo.
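    To make the headphone-jack approach concrete, here’s a minimal sketch of how such a scheme can work in principle: the phone plays a known tone out through the sensor, and the attenuated level read back on the mic line depends on the sensor’s impedance, forming a simple voltage divider. The series resistance and function names below are illustrative assumptions, not Thermodo’s actual circuit or API.

    ```python
    # Hypothetical model of audio-jack impedance sensing (not Thermodo's real design).
    R_KNOWN = 10_000.0  # ohms; assumed series resistor in the mic input circuit

    def sensor_impedance(v_measured: float, v_played: float) -> float:
        """Infer the sensor's impedance from the tone amplitude that survives
        the voltage divider: v_measured / v_played = Z / (R_KNOWN + Z)."""
        ratio = v_measured / v_played
        return R_KNOWN * ratio / (1.0 - ratio)

    # If half the tone's amplitude comes back, the sensor impedance equals R_KNOWN.
    print(sensor_impedance(0.5, 1.0))  # -> 10000.0
    ```

    The appeal of this design is that the sensor itself stays passive: all the signal generation and measurement happens on the phone, which is why the dongle needs no battery.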

    The Thermodo works offline, indoors and out, and comes with a carrying case keyring to make sure you don’t lose the tiny thing when it’s not in use. Robocat says that eventually, any device could potentially support Thermodo, including Raspberry Pi, Macs, and Arduino-based gadgets, thanks to the company’s open source SDK.

    I talked to Robocat founder Willi Wu about the project, and why it came to be in the first place. He says the company branched out from its core focus on mobile weather apps based on feedback from users.

    “The idea [for] Thermodo is actually based on an indirect request from our users,” he explained. “We received several one-star reviews because our users wanted the feature of measuring the temperature themselves, right where they are. Currently the iPhone does not support any access to a temperature reading within the phone, nor is there a dedicated sensor for this purpose. We wanted to attack this problem anyway and came up with the most simple solution we could imagine, Thermodo.”

    While other devices like the Square credit card reader and the Jawbone UP fitness band use the headphone jack as a way for accessories to communicate with smartphone devices, Wu says that Thermodo is fundamentally different in its approach. That opens up plenty more possibilities for how the company could use the tech in the future to create other kinds of sensors, he says.

    “Thermodo is not translating sounds to data like Square or other softmodem-based products,” he said. “It turns out that we can apply this method to all kinds of applications. What we do is convert the temperature into an electrical impedance, and this impedance is determined by what we call the ‘Thermodo Principle.’ Now we can convert all kinds of things into an electrical impedance, like for example wind speed, pressure, brightness and so on.”

    Wu says Robocat’s technical lead is already measuring his resistors and capacitors in this manner, and that the company is experimenting with some of these alternate sensing capabilities. Eventually, Thermodo could have a number of sibling devices to gauge just about everything under the sun (including the sun’s brightness).

    Thermodo is looking for just $35,000 in funding, and pre-order pledges start at just $19 for a single Thermodo unit. This is a project that will hit its goal quickly, and I can’t wait to see what comes next from Robocat’s new hardware focus.