Category: News

  • Morning Advantage: What Makes a Mobile Ad Effective

    More and more of us find our smartphones all-absorbing, addictive, impossible to put down. And yet advertisers have been slow to allocate their dollars in accordance with our attentions, in part because of the received wisdom that “mobile ads don’t work.” Columbia Business School’s Ideas@Work reports on research done by Miklos Sarvary, Yakov Bart, and Andrew T. Stephen on mobile ads that do.

    Logically, you might expect that someone on their mobile phone using an inexpensive app would be most interested in ads that, say, serve up other inexpensive mobile apps — perhaps a new one that they haven’t heard of before. That’s exactly wrong, say the researchers. Instead, the ads that work best a) are for big-ticket items, like cars and plane tickets, and b) feature brands you’ve already heard of. “The ad’s strength is not adding new data,” says Sarvary, “but reminding you of what you already know and making you think about the product again.” The right way to use mobile ads, they say, is to deploy them after your major TV and print campaign, using the tiny mobile ads as reminders.

    Lessons Learned

    Why It’s a Mistake to Write Off Japan (The Boston Globe)

    HBS dean Nitin Nohria reflects in The Boston Globe on a recent visit to Japan. Despite Westerners’ view that the country is “a place that’s been fighting the same set of vexing economic problems for so long with so little effectiveness that it resembles a sports team mired in a long winless streak,” he found Japan and its business leaders to be dynamic, engaged, and optimistic. As so many countries struggle with fiscal imbalances, no-growth economies, and the aftermath of an epic bubble, there’s much we can learn from Japan — but only if we’re paying attention. —Dan McGinn

    HAVE YOUR CAKE

    Someone’s Got to Earn the Money to Support Charities; It Might as Well Be You (Quartz)

    Trying to figure out how you can make the world a better place? One obvious way is to work for a nonprofit, but did you ever consider the option of “earning to give”? That’s the term William MacAskill applies to making a lot of money — in finance, say — and then giving a lot of it away to charities. One of his former students, for example, now works at a trading firm and gives away 50% of his income. The ex-student’s donations alone could pay the wages of several people working for nonprofits. The nonprofit sector already has plenty of people working for it, MacAskill says; what it really needs is your money. So go get rich. —Andy O’Connell

    BONUS BITS:

    Remember Friendster?

    Autopsy of a Dead Social Network (MIT Technology Review)
    Lifestyles of the Mega-Wealthy, in One Mega-Chart (The Reformed Broker)
    The Strange Behavioral Logic of the Sequester Stalemate (HBR.org)

  • Fon scores a big one: crowdsourced Wi-Fi community signs DT for millions of hotspots

    It may not be the investment that was rumored earlier this year, but Deutsche Telekom has struck a deal with the crowdsourced Wi-Fi outfit Fon to provide coverage across Germany. This comes a month after Fon signed with a DT subsidiary in Croatia – a country, as we pointed out at the time, that DT sometimes uses as a testbed for new services that it intends to roll out more widely.

    Fon is a community of people who submit their Wi-Fi hotspots for inclusion in a global pool. By doing so they become “Foneros” who let others use their Wi-Fi connections for free, and in exchange they get to do the same around the world. Due to ISPs’ terms and conditions, which generally forbid letting strangers onto customers’ connections, this idea works best in concert with the ISPs themselves – BT in the UK was a trailblazer here, and DT is certainly one of the biggest ISPs that Fon has landed.

    The DT offering is called WLAN TO GO, and through it DT’s customers who offer up their own connection will gain access to around 8 million hotspots worldwide. As DT itself has 12 million broadband lines and around 12,000 Wi-Fi hotspots, there’s clearly scope for major expansion of Fon’s reach too – this deal doesn’t just cover Germany, but also DT’s local subsidiaries in Bulgaria, Greece, Romania, Slovakia and Hungary.

    For DT, there’s an extra motivation too: if its customers start using more hotspots, they will theoretically use less mobile data – a boon for networks feeling the strain of bandwidth hogs such as mobile video. Here’s how DT CEO Rene Obermann put it in a statement this morning:

    “The partnership with FON fits perfectly with Telekom’s network expansion strategy. The astonishing increase in data traffic calls for network optimization and expansion, as well as the implementation of new high-speed networks.

    “By the year 2016, we want to set up more than 2.5 million additional hotspots in Germany with the WLAN TO GO offering. With our technology mix of mobile communication, fixed lines and Wi-Fi, we can gradually introduce our customers to the benefits of internet access anywhere and anytime.”


  • Fixtures Living Accelerates Growth with Catterton Partners Backing

    Fixtures Living has announced a strategic partnership with Catterton Partners. The partnership will support the expansion of the company throughout the United States. Fixtures Living is a retail concept specializing in providing customers a tailored assortment of premium lifestyle products for the home. Terms of the transaction were not disclosed.

    PRESS RELEASE

    Fixtures Living (the “Company”), a fast growing retail concept specializing in providing customers a tailored assortment of premium lifestyle products for the home, today announced a strategic partnership with Catterton Partners, the leading consumer-focused private equity firm. The strategic partnership endorses the remarkable growth potential of Fixtures Living and is designed to support the expansion of the Company throughout the United States. Terms of the transaction were not disclosed.

    Founded in 2009, Fixtures Living carries a select array of best-in-class products for indoor and outdoor living spaces — kitchens, laundry rooms, and bathrooms. Unlike typical industry showrooms, Fixtures Living stores invite consumers to enjoy a 360 degree sensory experience that ignites the imagination by implementing live kitchens and working bath fixtures (from decorative plumbing to entire Health and Wellness systems), and by recruiting and retaining a welcoming and knowledgeable staff who encourage guests to “Live Joyfully™.” The Company currently has three experiential showrooms in Southern California and plans to expand rapidly across the U.S. over the next several years, including adding two locations later this year – one in San Diego at the Westfield UTC luxury center and one in the iconic Glendale Galleria.

    Chief Executive Officer Jeffery Sears and the current management team will continue to lead the Company as it expands its presence across the country. Sears, along with co-founder and Chairman Jim Stuart, will retain a significant stake in the Company.

    Sears commented, “The Fixtures Living concept has created a new way for the consumer to choose lifestyle goods for the home. Our innovative approach is embraced enthusiastically by homeowners and industry professionals who appreciate the opportunity to test-drive products in an inviting and interactive setting. This fosters a connection between the visitor and the products that helps them create the types of moments they wish to share in their homes. We look forward to working with Catterton, a partner with significant experience growing similar best-in-class retailers such as Restoration Hardware. Together, we are well-equipped to capitalize on the vast opportunity we see to fill a void in the current marketplace.”

    “Fixtures Living has developed a game-changing retail model,” said Scott Dahnke, Managing Partner at Catterton Partners. “By delivering a full-immersion retail experience that facilitates the relationship between consumers, trade professionals, and leading brands, the Company has succeeded in creating a retail concept that is positioned to win. This concept is unlike any other that we have seen. We are excited to partner with the talented team at Fixtures Living to help the Company realize its immense potential.”

    Recently, Fixtures Living has received numerous awards and honors, including:

    Listed #78 on Forbes’ annual ranking of “America’s Most Promising Companies” (2013)
    Winner, “Best New Prototype or Reinvention of a Prototype” by Retail Traffic Magazine (2012)
    Winner, “Store Design of the Year” (Large Format) by Retail Week Magazine (2012)
    Winner, Grand Prize, “Sustainability,” from the Association for Retail Environments (2012)
    Winner, Individual Element, “In-Store Communications” from the Association for Retail Environments (2012)
    Winner, Outstanding Merit, “Design,” from the Association for Retail Environments (2012)
    Winner, “Retail Store of the Year” (Hard-Lines 15k-25k sf category) from Chain Store Age (2012)
    Winner, International Award of Merit, “Best Retail Concept” (Hard-Lines category) by the Retail Design Institute (2012)
    Winner, “Best in Show Worldwide: In-Store Signage and Graphics” (2012)
    LEED Silver Certification from USGBC (2012)
    Appeared as cover story or featured story in the industry’s top five magazines

    Valtus Capital Group acted as Valuation Advisor to Fixtures Living in connection with the transaction.

    About Fixtures Living

    Fixtures Living is a place for trade professionals and consumers to dream about, play with, and choose products that lead to better living. Specializing in premium lifestyle goods for the home, the store carries a hand-culled array of best-in-class brands for indoor and outdoor kitchens, laundry rooms, and the bath, from exquisite decorative plumbing to entire Health-and-Wellness Systems. Currently in three California markets, the store will soon have a presence in the Glendale Galleria, and is poised to expand into several other major U.S. cities within the next 12 months.

    About Catterton Partners

    With more than $3.0 billion currently under management and a twenty-three-year track record of success in building high growth companies, Catterton Partners is the leading consumer-focused private equity firm. Since its founding in 1989, Catterton has leveraged its category insight, strategic and operating skills, and network of industry contacts to establish one of the strongest private equity investment track records in the middle market. Catterton Partners invests in all major consumer segments, including Food and Beverage, Retail and Restaurants, Consumer Products and Services, Consumer Health, and Media and Marketing Services. Catterton’s investments include, among others: Restoration Hardware, Bloomin’ Brands (Outback Steakhouse, Carrabba’s Italian Grill, Bonefish Grill, Fleming’s Prime Steakhouse & Wine Bar, and Roy’s), Edible Arrangements, Cheddar’s Casual Café, Noodles & Company, Plum Organics, Frederic Fekkai, Build-A-Bear Workshop, Kettle Foods, Odwalla, and P.F. Chang’s China Bistro.

    SOURCE Catterton Partners


  • Praxair Acquires NuCO2, Backed by Aurora Capital Group

    Praxair has acquired NuCO2 for $1.1 billion from Aurora Capital Group. NuCO2 is a provider of fountain beverage carbonation in the US with approximately 162,000 customer locations and 900 employees.

    PRESS RELEASE

    Praxair, Inc. (NYSE: PX) announced today that it has completed the previously announced acquisition of NuCO2 Inc. for $1.1 billion from Aurora Capital Group.

    NuCO2 is the largest provider of fountain beverage carbonation in the United States with approximately 162,000 customer locations and 900 employees. The NuCO2 micro-bulk beverage carbonation offering is the service model of choice for customers offering fountain beverages as it is cost effective, reliable and less labor intensive.

    “NuCO2 is a high-quality business that fits directly within Praxair’s strategy of building density and providing customer reliability. It will continue to operate with a focus on customer service and will benefit from Praxair’s strong competencies in distribution and productivity,” said Eduardo Menezes, executive vice president of Praxair. “Praxair will grow its beverage carbonation businesses domestically and internationally through expanded product and service offerings,” he added.

    Praxair, Inc. is the largest industrial gases company in North and South America, and one of the largest worldwide, with 2012 sales of $11 billion. The company produces, sells and distributes atmospheric and process gases, and high-performance surface coatings. Praxair products, services and technologies are making our planet more productive by bringing productivity and environmental benefits to a wide variety of industries including aerospace, chemicals, food and beverage, electronics, energy, healthcare, manufacturing, metals and others.

    This document contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These statements are based on management’s reasonable expectations and assumptions as of the date the statements are made but involve risks and uncertainties. These risks and uncertainties include, without limitation: the performance of stock markets generally; developments in worldwide and national economies and other international events and circumstances; changes in foreign currencies and in interest rates; the cost and availability of electric power, natural gas and other raw materials; the ability to achieve price increases to offset cost increases; catastrophic events including natural disasters, epidemics and acts of war and terrorism; the ability to attract, hire, and retain qualified personnel; the impact of changes in financial accounting standards; the impact of changes in pension plan liabilities; the impact of tax, environmental, healthcare and other legislation and government regulation in jurisdictions in which the company operates; the cost and outcomes of investigations, litigation and regulatory proceedings; continued timely development and market acceptance of new products and applications; the impact of competitive products and pricing; future financial and operating performance of major customers and industries served; the impact of information technology system failures, network disruptions and breaches in data security; and the effectiveness and speed of integrating new acquisitions into the business. These risks and uncertainties may cause actual future results or circumstances to differ materially from the projections or estimates contained in the forward-looking statements. Additionally, financial projections or estimates exclude the impact of special items which the company believes are not indicative of ongoing business performance. The company assumes no obligation to update or provide revisions to any forward-looking statement in response to changing circumstances. The above listed risks and uncertainties are further described in Item 1A (Risk Factors) in the company’s Form 10-K and 10-Q reports filed with the SEC which should be reviewed carefully. Please consider the company’s forward-looking statements in light of those risks.

    SOURCE: Praxair, Inc.


  • Synteract and Harrison Clinical Research Close Deal

    Synteract, a full-service contract research organization has completed its acquisition of Harrison Clinical Research to form a new multinational business – SynteractHCR. Synteract is a portfolio company of San Francisco-based Gryphon Investors.

    PRESS RELEASE

    Synteract, a leading full-service contract research organization (CRO) and portfolio company of San Francisco-based Gryphon Investors, has completed its acquisition of Harrison Clinical Research (“HCR”) to form a new multinational CRO – SynteractHCR. The company will have its corporate headquarters in San Diego County. Wendel Barr will lead SynteractHCR as CEO. Dr. Francisco Harrison, formerly HCR’s chairman and founder, will remain a senior member of the executive team and become a member of the Board of Directors. The existing management stays in place, with former Synteract COO Stewart Bieler becoming president of U.S. operations and former HCR CEO Benedikt van Nieuwenhove becoming president of European operations for SynteractHCR.

    The merger of the companies will make SynteractHCR a top-tier global CRO and provide additional resources and scale to support large, later-phase programs:

    Geographic Footprint — The combined company will have offices in 16 countries with operations in both Western and Eastern Europe, Israel and South America, as well as the U.S. In addition to its headquarters in San Diego, California, SynteractHCR’s U.S. presence will include two offices on the East Coast, in Research Triangle Park, North Carolina, and Princeton, New Jersey, reflecting coverage of three important biopharmaceutical hubs. This expanded presence will allow the company to offer strong international and regional clinical trial support to clients throughout the world. SynteractHCR also maintains a clinical in-patient unit in Germany that will help enlist and treat trial patients, as well as a clinical research training center in Belgium to assist internal and external professionals.
    Longstanding Experience and Talent — SynteractHCR has exceptionally strong, full-service clinical development expertise with a staff of more than 800. Typical employee tenure at the company is greater than five years. Employees have a lower-than-average turnover rate compared to the industry.
    Client Loyalty and Repeat Business — A strong tradition of customer service excellence combined with a high regard for trust and transparency are cornerstones of ongoing client relationships, resulting in a high percentage of repeat and referral business.
    Reinforced Therapeutic Breadth — Both organizations have a complementary heritage of managing Phase I-IV clinical trials across multiple therapeutic areas, much of which strategically aligns with the largest areas of clinical R&D investment, including oncology, CNS, infectious disease, endocrinology, cardiovascular and respiratory.

    SynteractHCR is focused on enabling and supporting the innovation and development of better therapies in healthcare. A continued directive of the new combined company will be to maintain a strong connection with emerging to midsize biopharmaceutical companies through which a consultative approach and strong clinical development expertise provide the foundation of its customer relationships.

    “Our longstanding drug development expertise allows us to form a new global leader with enhanced scale and therapeutic breadth, but with the personal approach that we have always taken to working with clients,” said CEO Wendel Barr. “We will provide a continuum of service that allows us to work with clients throughout the entire development life cycle, from emerging products through post-marketing, and will bring technology efficiencies to the company that will help to take time and cost out of drug development.”

    SynteractHCR is a portfolio company of Gryphon Investors, a San Francisco-based premier middle market private equity firm, which was the lead financial partner in the acquisition. Fairmount Partners provided financial advice to Synteract, while KPMG International provided financial advice to HCR. Terms of the transaction were not disclosed.

    The new company will unveil its new brand identity and messaging at the DIA EuroMeeting in Amsterdam, from March 4-6, 2013, in booth 616. In addition, Andrei Kravchenko, MD, PhD, Head of Office, Ukraine, will participate as session chair on Tuesday, March 5th at 2-3:30 p.m. CET on “Enhancing Clinical Research Effectiveness.”

    About SynteractHCR

    SynteractHCR is a full-service contract research organization with a successful two-decade track record supporting biotechnology, medical device and pharmaceutical companies in all phases of clinical development. With its “Shared Work – Shared Vision” philosophy, SynteractHCR provides customized Phase I through IV services collaboratively and cost-effectively, ensuring on-time delivery of quality data so clients get to decision points faster. Operating in 16 countries, SynteractHCR delivers trials internationally, offering expertise across multiple therapeutic areas including notable depth in oncology, CNS, infectious disease, endocrinology, cardiovascular and respiratory, among other indications.

    CONTACT: Beth Walsh [email protected] 760-230-2424

    SOURCE SynteractHCR


  • Xbox Live premieres its first movie

    In tough economic times, raising the money and getting a movie made without any major stars in it can be more than a little challenging for independent film makers. Getting it distributed is even harder.

    So instead of trying to get their movie into cinemas, releasing it straight to DVD, or even putting it out on YouTube, the makers of Pulp are distributing their low-budget British comedy via an alternative method — Xbox Live.

    The film, which follows a struggling comic book publisher who becomes mixed up in a plot to take down a crime syndicate, is available to Xbox 360 Live subscribers from today, making it the first film to premiere on the games console. It’s currently only being offered in the UK, but if it’s a success Microsoft may also release it in the US and other international markets later this year.

    Pulp’s co-director Adam Hamdy says: “Microsoft might not seem like the obvious partner for an indie comedy, but the film industry has changed. Xbox 360 can instantly distribute Pulp to millions of UK customers, and publicize the release in ways that simply aren’t possible traditionally”.

    Talking about the decision to distribute the movie, Xbox Live product manager Pav Bhardwaj says: “It’s a great fit. The film is really well aligned with our audience and it’s great to support British talent”.

    Microsoft says it intends to distribute more films in the future, confirming what my colleague Joe Wilcox observed three weeks ago, that the future of Xbox isn’t gaming.

     

  • UpCloud bursts out of Finland for European launch, with U.S. in sights for this year

    An ambitious Finnish infrastructure-as-a-service company called UpCloud has launched across Europe with local rivals such as CloudSigma and Elastichosts in its sights, and U.S. expansion already on the agenda.

    The first two data centers are in London and Helsinki, and more are planned for Chicago towards the middle of the year, then Las Vegas and Singapore. A spin-off of sorts from the established Finnish hosting firm Sigmatic, UpCloud claims to have 100 percent homegrown virtualization tech. According to general manager Antti Vilpponen, this keeps costs low and services flexible:

    “Our offering is truly scaleable and flexible – we’re not giving any predetermined set of instances. You can freely choose the amount of CPUs, memory and storage… We’re really fast – all servers are deployed in less than 60 seconds – and our network stack is 100 percent redundant.”

    Alongside enterprise-grade redundancy, the company offers fully dedicated server resources and one-click migration between availability zones. What’s more, Vilpponen said, unlike with Amazon Web Services, the cheapest service levels UpCloud offers have the same level of performance you’d get with more expensive packages. As the tech was built in-house, UpCloud offers a 100 percent Service Level Agreement (SLA) and 50x downtime compensation — that’s a decent uptime guarantee, which should bolster the enterprise-grade status UpCloud is going for.

    There’s a browser-based control panel and apps for Android and iOS, while those with more complex control needs also have an API (although it’s not fully Amazon-compatible) that they can play with. CPU, memory, storage (both HDD and SSD), OS, firewall and IP addresses are all billed on an hourly basis. Network traffic and storage device I/O requests are billed by use.

    Right now UpCloud is being personally bankrolled by founder and CTO Joel Pihlajamaa, who is also Sigmatic’s CEO. That makes it pretty impressive to see the company apparently already undercutting local rivals, if not Amazon itself at this point. For example, Vilpponen showed me a chart comparing a low-tier UpCloud package of 2GB memory and 100GB storage priced at €41.84 ($54.47) monthly, versus €57.40 on Elastichosts and €72.03 on CloudSigma (it should be noted that CPU amounts were included in the calculation but not explicit in the price, “as they change quite a lot between the providers”).
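
    As a rough sanity check on those numbers, here is a minimal sketch (Python, purely illustrative) that divides the quoted monthly totals back into implied per-hour prices under UpCloud’s hourly billing model; the 730-hour month is an assumption, since the post doesn’t state the underlying hourly rates.

        # Implied hourly prices from the monthly figures quoted in the article.
        # Assumption: a 730-hour billing month; actual hourly rates aren't given.
        HOURS_PER_MONTH = 730

        monthly_eur = {
            "UpCloud": 41.84,        # 2GB memory / 100GB storage package
            "Elastichosts": 57.40,
            "CloudSigma": 72.03,
        }

        for provider, monthly in monthly_eur.items():
            hourly = monthly / HOURS_PER_MONTH
            print(f"{provider:12s} ~EUR {hourly:.4f}/hour  (EUR {monthly:.2f}/month)")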

    “We really think offering constantly high-performing resources at affordable prices is key to how we want to position ourselves,” Vilpponen said. “We won’t be able to overcome Amazon in Europe within a couple of years, but we’ve got big plans.”

    UpCloud is even offering the first 1,000 people that register from Monday morning (and buy €10 of credit) an extra €100 in free credit, so there’s clearly a fair amount of money to back this up — for now. Bootstrapping will only take you so far in the scale-driven cloud business, though, so UpCloud would do well to raise fresh funding this year, as it intends to.


  • If Bradley Manning and WikiLeaks are guilty, then so is the New York Times

    While the trial of Bradley Manning has sparked some interest in certain circles, many people probably think the former U.S. Army private’s case will have little impact on either them or American society as a whole. Harvard law professor Yochai Benkler, however, argues that they are wrong — and that if Manning is found guilty of “aiding the enemy” for releasing classified documents to WikiLeaks, it could change the nature of both journalism and free speech forever.

    Why? Because as Benkler points out, the charge for which Manning is being court-martialed could just as easily be applied to someone who leaks similar documents to virtually any media outlet, including the New York Times or the Washington Post. In other words, if the U.S. government has seen fit to go after Manning and WikiLeaks, what is to stop them from pursuing anyone who leaks documents, and any media entity that publishes them?

    WikiLeaks is a media entity just like the Times

    I’ve argued in the past that WikiLeaks is a media entity, and a fairly crucial one in this day and age, and Benkler clearly agrees. As the Harvard professor (who will likely be testifying at Manning’s trial) describes in his piece:

    “Someone in Manning’s shoes in 2010 would have thought of WikiLeaks as a small, hard-hitting, new media journalism outfit — a journalistic ‘Little Engine that Could’ that, for purposes of press freedom, was no different from the New York Times.”


    And we don’t have to hypothesize about whether the government would have gone after Manning for leaking documents directly to the New York Times instead of to WikiLeaks: as Benkler notes, the chief prosecutor in the case was asked that exact question by the judge in January and responded “Yes ma’am.” In other words, for the purposes of the government’s case against Manning, there is no appreciable difference between WikiLeaks and the Times, or any other traditional media outlet.

    Benkler argues that the government’s behavior constitutes “a clear and present danger to journalism in the national security arena” — not just because it is trying to penalize a whistleblower, but because the state is arguing that Manning is guilty of “aiding the enemy,” a charge that could put him in prison for life. Benkler also notes that unlike the other charges against Manning, aiding the enemy is something even civilians can be found guilty of.

    Isn’t the New York Times aiding the enemy too?

    So if handing documents over to a media entity that subsequently publishes them qualifies as “aiding the enemy” in the eyes of the government, then giving them to the New York Times would fit that description just as well as giving them to WikiLeaks. And if providing classified documents to a publisher can qualify, then wouldn’t the entity that actually published them be guilty as well — regardless of whether it’s WikiLeaks or the Times?

    The First Amendment would seem to protect the NYT in a case like this, and I’ve argued before that it should protect WikiLeaks as well — an argument that former Times executive editor Bill Keller has said he agrees with. But the U.S. government continues to pursue WikiLeaks for its role in publicizing the documents that Manning leaked, and some U.S. legislators have mused aloud about whether espionage charges could be laid against other media entities like the New York Times as well.

    Benkler’s warning shouldn’t be taken lightly: if Manning is guilty of aiding the enemy for simply leaking documents, then anyone who communicates with a newspaper could be guilty of something similar. And if the leaker is guilty, then the publisher could be as well — and that could cause a chilling effect on the media that would change the nature of public journalism forever.

    Images courtesy of Shutterstock / Nata-lia and Flickr user jphilipg


  • Charcoal Black Samsung Galaxy Note 8.0 Photo Leaked


    Samsung has always been one for adding a bit of colorful flair to its devices. Be it the Galaxy S III in Pebble Blue, the Note II in Pink, or the Note 10.1 in Garnet Red, Samsung loves to give us choices. Well, it appears that the Korean giant’s newest addition to the family, the recently announced Note 8.0, will be seeing multiple colors as well. As a leaked press shot suggests (below), the tablet will not just come in white but will also be available in a Charcoal Black color.

    [Leaked press shot: Samsung Galaxy Note 8.0 in Charcoal Black]

    This rumored color addition does appear to add some weight to the idea that Samsung is abandoning its short-lived, influenced-by-nature color scheme, though that isn’t to say we won’t see different color options added down the road. We won’t know for sure until the Note 8.0 debuts in the second quarter of this year. Stay tuned!

    Source: PhoneArena


  • Google currently testing data compression feature for Chrome Browser


    Chrome is no doubt one of the most popular web browsers on our Android devices, but it appears that Google is intent on making the web browsing experience just a little faster than it already is. Taking a page from browsers such as Opera, Google has unveiled a preliminary build of its Chromium web browser that utilizes data compression. Here’s how Google describes the data compression feature:

    “Reduce data consumption by loading optimized web pages via Google proxy servers”.

    So essentially Google’s data compression will do two things: provide more efficient security and speed up page load times. The security boost comes from the Chrome browser talking to Google’s proxy servers over SPDY, a connection that forces SSL encryption for all sites. The speedup in page load times, on the other hand, comes from multiplexing multiple streams of data over a single network connection and assigning high or low priorities to the page resources being requested from a server.
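
    To make the multiplexing idea a bit more concrete, here’s a tiny, purely illustrative sketch (Python, not SPDY itself and not Google’s actual code): several logical streams share one ordered send queue over a single connection, and frames belonging to higher-priority resources are dispatched first.

        # Toy illustration of prioritized stream multiplexing over one connection.
        # Not SPDY: just the core idea of interleaving frames by priority.
        import heapq
        from dataclasses import dataclass, field
        from itertools import count

        @dataclass(order=True)
        class Frame:
            priority: int    # lower number = higher priority (e.g. HTML before images)
            seq: int         # tie-breaker keeps a stream's frames in order
            stream_id: int = field(compare=False)
            payload: bytes = field(compare=False)

        def multiplex(streams):
            """Interleave frames from many streams onto one ordered send queue."""
            queue, seq = [], count()
            for stream_id, (priority, chunks) in streams.items():
                for chunk in chunks:
                    heapq.heappush(queue, Frame(priority, next(seq), stream_id, chunk))
            while queue:     # one connection: frames leave one at a time
                frame = heapq.heappop(queue)
                yield frame.stream_id, frame.payload

        # Hypothetical page load: the HTML document outranks an image resource.
        streams = {
            1: (0, [b"<html>...", b"</html>"]),          # high priority
            2: (1, [b"PNG chunk 1", b"PNG chunk 2"]),    # low priority
        }
        for stream_id, payload in multiplex(streams):
            print(stream_id, payload)

    The real protocol of course does this at the framing level, with flow control and header compression on top; the sketch above only captures the scheduling intuition.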

    Google hasn’t exactly made this feature public yet, but it is available for those of you ambitious Android 4.2 users who like to try things out. Hit the break for the full instructions on how to get this neat feature going.

     

    • Go to Settings, then About phone.

    • Find the build number.

    • Tap on the build number repeatedly, about seven times.

    • Note that after the third tap, you’ll see a snarky dialog that says you are four taps away from being a developer.

    • After the seventh tap, you will see a message that says “You are now a developer.”

    Once that’s done, the Developer options entry (which you need in order to enable USB debugging) will appear in the device’s Settings app.

    Come comment on this article: Google currently testing data compression feature for Chrome Browser

  • Accidental Empires, Part 11 — Role Models (Chapter 5)

    Eleventh in a series. The next installment of Robert X. Cringely’s 1991 tech-industry classic Accidental Empires is highly appropriate for the industry today. He discusses concepts like “look and feel”, how pioneers freely copied ideas, and where attitudes began to change. There’s something prescient here, given the aggressive patent litigation waged by Apple and some other companies today.

    This chapter also explores the incredible contribution one research lab, Xerox PARC, made to personal computing as we know it – germinator of the graphical user interface, the mouse, Ethernet and the laser printer, among others. The photo is of the Alto, arguably the first computer workstation and one of many, many products conceived but not marketed.

    This being the 1990s, when the economy is shot to hell and we’ve got nothing much better to do, the personal computer industry is caught up in an issue called look and feel, which means that your computer software can’t look too much like my computer software or I’ll take you to court. Look and feel is a matter of not only how many angels can dance on the head of a pin but what dance it is they are doing and who owns the copyright.

    Here’s an example of look and feel. It’s 1913, and we’re at the Notre Dame versus Army football game (this is all taken straight from the film Knute Rockne, All-American, in which young Ronald Reagan appeared as the ill-fated Gipper — George Gipp). Changing football forever, the Notre Dame quarterback throws the first-ever forward pass, winning the game. A week later, Notre Dame is facing another team, say Purdue. By this time, word of the forward pass has gotten around, the Boilermakers have thrown a few in practice, and they like the effect. So early in the first quarter, the Purdue quarterback throws a forward pass. The Notre Dame coach calls a time-out and sends young Knute Rockne jogging over to the Purdue bench.

    “Coach says that’ll be five dollars”, mumbles an embarrassed Knute, kicking at the dirt with his toe.

    “Say what, son?”

    “Coach says the forward pass is Notre Dame property, and if you’re going to throw one, you’ll have to pay us five dollars. I can take a check”.

    That’s how it works. Be the first one on your block to invent a particular way of doing something, and you can charge the world for copying your idea or even prohibit the world from copying it at all. It doesn’t even matter if the underlying mechanism is different; if the two techniques look similar, one is probably violating the look and feel of the other.

    Just think of the money that could have been earned by the first person to put four legs on a chair or to line up the clutch, brake, and gas pedals of a car in that particular order. My secret suspicion is that this sort of easy money was the real reason Alexander Graham Bell tried to get people to say “ahoy” when they answered the telephone rather than “hello”. It wasn’t enough that he was getting a nickel for the phone call; Bell wanted another nickel for his user interface.

    There’s that term, user interface. User interface is at the heart of the look-and-feel debate because it’s the user interface that we’re always looking at and feeling. Say the Navajo nation wants to get back to its computing roots by developing a computer system that uses smoke signals to transfer data into and out of the computer. Whatever system of smoke puffs they settle on will be that computer’s user interface, and therefore protectable under law (U.S., not tribal).

    What’s particularly ludicrous about this look-and-feel business is that it relies on all of us believing that there is something uniquely valuable about, for example, Apple Computer’s use of overlapping on-screen windows or pull-down menus. We are supposed to pretend that some particular interface concepts sprang fully grown and fully clothed from the head of a specific programmer, totally without reference to prior art.

    Bullshit.

    Nearly everything in computing, both inside and outside the box, is derived from earlier work. In the days of mainframes and minicomputers and early personal computers like the Apple II and the Tandy TRS-80, user interfaces were based on the mainframe model of typing out commands to the computer one 80-character line at a time—the same line length used by punch cards (IBM should have gone for the bucks with that one but didn’t). But the commands were so simple and obvious that it seemed stupid at the time to view them as proprietary. Gary Kildall stole his command set for CP/M from DEC’s TOPS-10 minicomputer operating system, and DEC never thought to ask for a dime. Even IBM’s VM command set for mainframe computers was copied by a PC operating system called Oasis (now Theos), but IBM probably never even noticed.

    This was during the first era of microcomputing, which lasted from the introduction of the MITS Altair 8800 in early 1975 to the arrival of the IBM Personal Computer toward the end of 1981. Just like the golden age of television, it was a time when technology was primitive and restrictions on personal creativity were minimal, too, so everyone stole from everyone else. This was the age of 8-bit computing, when Apple, Commodore, Radio Shack and a hundred-odd CP/M vendors dominated a small but growing market with their computers that processed data eight bits at a time. The flow of data through these little computers was like an eight-lane highway, while the minicomputers and mainframes had their traffic flowing on thirty-two lanes and more. But eight lanes were plenty, considering what the Apples and others were trying to accomplish, which was to put the computing environment of a mainframe computer on a desk for around $3,000.

    Mainframes weren’t that impressive. There were no fancy, high-resolution color graphics in the mainframe world — nothing that looked even as good as a television set. Right from the beginning, it was possible to draw pictures on an Apple II that were impossible to do on an IBM mainframe.

    Today, for example, several million people use their personal computers to communicate over worldwide data networks, just for fun. I remember when a woman on the CompuServe network ran a nude photo of herself through an electronic scanner and sent the digitized image across the network to all the men with whom she’d been flirting for months on-line. In grand and glorious high-resolution color, what was purported to be her yummy flesh scrolled across the screens of dozens of salivating computer nerds, who quickly forwarded the image to hundreds and then thousands of their closest friends. You couldn’t send such an image from one terminal to another on a mainframe computer; the technology doesn’t exist, or all those wacky secretaries who have hopped on Xerox machines to photocopy their backsides would have had those backsides in electronic distribution years ago.

    My point is that the early pioneers of microcomputing stole freely from the mainframe and minicomputer worlds, but there wasn’t really much worth stealing, so nobody was bothered. But with the introduction of 16-bit microprocessors in 1981 and 1982, the mainframe role model was scrapped altogether. This second era of microcomputing required a new role model and new ideas to copy. And this time around, the ideas were much more powerful — so powerful that they were worth protecting, which has led us to this look-and-feel fiasco. Most of these new ideas came from the Xerox Palo Alto Research Center (PARC). They still do.

    To understand the personal computer industry, we have to understand Xerox PARC, because that’s where most of the computer technology that we’ll use for the rest of the century was invented.

    There are two kinds of research: research and development and basic research. The purpose of research and development is to invent a product for sale. Edison invented the first commercially successful light bulb, but he did not invent the underlying science that made light bulbs possible. Edison at least understood the science, though, which was the primary difference between inventing the light bulb and inventing fire.

    The research part of R&D develops new technologies to be used in a specific product, based on existing scientific knowledge. The development part of R&D designs and builds a product using those technologies. It’s possible to do development without research, but that requires licensing, borrowing, or stealing research from somewhere else. If research and development is successful, it results in a product that hits the market fairly soon — usually within eighteen to twenty-four months in the personal computer business.

    Basic research is something else — ostensibly the search for knowledge for its own sake. Basic research provides the scientific knowledge upon which R&D is later based. Sending telescopes into orbit or building superconducting supercolliders is basic research. There is no way, for example, that the $1.5 billion Hubble space telescope is going to lead directly to a new car or computer or method of solid waste disposal. That’s not what it’s for.

    If a product ever results from basic research, it usually does so fifteen to twenty years down the road, following a later period of research and development.

    What basic research is really for depends on who is doing the research and how they are funded. Basic research takes place in government, academic, and industrial laboratories, each for a different purpose. Basic research in government labs is used primarily to come up with new ideas for blowing up the world before someone else in some unfriendly country comes up with those same ideas. While the space telescope and the supercollider are civilian projects intended to explain the nature and structure of the universe, understanding that nature and structure is very important to anyone planning the next generation of earth-shaking weapons. Two thirds of U.S. government basic research is typically conducted for the military, with health research taking most of the remaining funds.

    Basic research at universities comes in two varieties: research that requires big bucks and research that requires small bucks. Big bucks research is much like government research and in fact usually is government research but done for the government under contract. Like other government research, big bucks academic research is done to understand the nature and structure of the universe or to understand life, which really means that it is either for blowing up the world or extending life, whichever comes first. Again, that’s the government’s motivation. The universities’ motivation for conducting big bucks research is to bring in money to support professors and graduate students and to wax the floors of ivy-covered buildings. While we think they are busy teaching and learning, these folks are mainly doing big bucks basic research for a living, all the while priding themselves on their terrific summer vacations and lack of a dress code.

    Small bucks basic research is the sort that requires paper and pencil, and maybe a blackboard, and is aimed primarily at increasing knowledge in areas of study that don’t usually attract big bucks — that is, areas that don’t extend life or end it, or both. History, political science, and romance languages are typical small bucks areas of basic research. The real purpose of small bucks research to the universities is to provide a means of deciding, by the quality of their small bucks research, which professors in these areas should get tenure.

    Nearly all companies do research and development, but only a few do basic research. The companies that can afford to do basic research (and can’t afford not to) are ones that dominate their markets. Most basic research in industry is done by companies that have at least a 50 percent market share. They have both the greatest resources to spare for this type of activity and the most to lose if, by choosing not to do basic research, they eventually lose their technical advantage over competitors. Such companies typically devote about 1 percent of sales each year to research intended not to develop specific products but to ensure that the company remains a dominant player in its industry twenty years from now. It’s cheap insurance, since failing to do basic research guarantees that the next major advance will be owned by someone else.

    The problem with industrial basic research, and what differentiates it from government basic research, is this fact that its true product is insurance, not knowledge. If a researcher at the government-sponsored Lawrence Livermore Lab comes up with some particularly clever new way to kill millions of people, there is no doubt that his work will be exploited and that weapons using the technology will eventually be built. The simple rule about weapons is that if they can be built, they will be built. But basic researchers in industry find their work is at the mercy of the marketplace and their captains-of-industry bosses. If a researcher at General Motors comes up with a technology that will allow cars to be built for $100 each, GM executives will quickly move to bury the technology, no matter how good it is, because it threatens their current business, which is based on cars that cost thousands of dollars each to build. Consumers would revolt if it became known that GM was still charging high prices for cars that cost $100 each to build, so the better part of business valor is to stick with the old technology since it results in more profit dollars per car produced.

    In the business world, just because something can be built does not at all guarantee that it will be built, which explains why RCA took a look at the work of George Heilmeier, a young researcher working at the company’s research center in New Jersey and quickly decided to stop work on Heilmeier’s invention, the liquid crystal display. RCA made this mid-1960s decision because LCDs might have threatened its then-profitable business of building cathode ray picture tubes. Twenty-five years later, of course, RCA is no longer a factor in the television market, and LCD displays — nearly all made in Japan — are everywhere.

    Most of the basic research in computer science has been done at universities under government contract, at AT&T Bell Labs in New Jersey and in Illinois, at IBM labs in the United States, Europe, and Japan, and at the Xerox PARC in California. It’s PARC that we are interested in because of its bearing on the culture of the personal computer.

    Xerox PARC was started in 1970 when leaders of the world’s dominant maker of copying machines had a sinking feeling that paper was on its way out. If people started reading computer screens instead of paper, Xerox was in trouble, unless the company could devise a plan that would lead it to a dominant position in the paperless office envisioned for 1990. That plan was supposed to come from Xerox PARC, a group of very smart people working in buildings on Coyote Hill Road in the Stanford Industrial Park near Stanford University.

    The Xerox researchers were drawn together over the course of a few months from other corporations and from universities and then plunked down in the golden hills of California, far from any other Xerox facility. They had nothing at all to do with copiers, yet they worked for a copier company. If they came to have a feeling of solidarity, then, it was much more with each other than with the rest of Xerox. The researchers at PARC soon came to look down on the marketers at Xerox HQ, especially when they were asked questions like, “Why don’t you do all your programming in BASIC — it’s so much easier to learn”, which was like suggesting that Yehudi Menuhin switch to rhythm sticks.

    The researchers at PARC were iconoclastic, independent, and not even particularly secretive, since most of their ideas would not turn into products for decades. They became the celebrities of computer science and were even profiled in Rolling Stone.

    PARC was supposed to plot Xerox a course into the electronic office of the 1990s, and the heart of that office would be, as it always had been, the office worker. Like designers of typewriters and adding machines, the deep thinkers at Xerox PARC had to develop systems that would be useful to lightly trained people working in an office. This is what made Xerox different from every other computer company at that time.

    Some of what developed as the PARC view of future computing was based on earlier work by Doug Engelbart, who worked at the Stanford Research Institute in nearby Menlo Park. Engelbart was the first computer scientist to pay close attention to user interface — how users interact with a computer system. If computers could be made easier to use, Engelbart thought, they would be used by more people and with better results.

    Punch cards entered data into computers one card at a time. Each card carried a line of data up to 80 characters wide. The first terminals simply replaced the punch card reader with a new input device; users still submitted data, one 80-column line at a time, through a computer terminal. While the terminal screen might display as many as 25 lines at a time, only the bottom line was truly active and available for changes. Once the carriage return key was punched, those data were in the computer: no going back to change them later, at least not without telling the computer that you wanted to reedit line 32, please.

    I once wrote an entire book using a line editor on an IBM mainframe, and I can tell you it was a pain.

    Engelbart figured that real people in real offices didn’t write letters or complete forms one line at a time, with no going back. They thought in terms of pages, rather than lines, and their pens and typewriters could be made to scroll back and forth and move vertically on the page, allowing access to any point. Engelbart wanted to bring that page metaphor to the computer by inventing a terminal that would allow users to edit anywhere on the screen. This type of terminal required some local intelligence, keeping the entire screen image in the terminal’s memory. This intelligence was also necessary to manage a screen that was much more flexible than its line-by-line predecessor; it was comprised of thousands of points that could be turned on or off.

    The new point-by-point screen technology, called bit mapping, also required a means for roaming around the screen. Engelbart used what he called a mouse, which was a device the size of a pack of cigarettes on wheels that could be rolled around on the table next to the terminal and was connected to the terminal by a wire. Moving the mouse caused the cursor on-screen to move too.

    With Engelbart’s work as a start, the folks at PARC moved toward prototyping more advanced systems of networked computers that used mice, page editors, and bit-mapped screens to make computing easier and more powerful.

    During the 1970s, the Computer Science Laboratory (CSL) at Xerox PARC was the best place in the world for doing computer research. Researchers at PARC invented the first high-speed computer networks and the first laser printers, and they devised the first computers that could be called easy to use, with intuitive graphical displays. The Xerox Alto, which had built-in networking, a black-on-white bit-mapped screen, a mouse, and hard disk data storage and sat under the desk looking like R2D2, was the most sophisticated computer workstation of its time, because it was the only workstation of its time. Like the other PARC advances, the Alto was a wonder, but it wasn’t a product. Products would have taken longer to develop, with all their attendant questions about reliability, manufacturability, marketability, and profitability — questions that never once crossed a brilliant mind at PARC. Nobody was expected to buy computers built by Xerox PARC.

    There is a very good book about Xerox PARC called Fumbling the Future, which says that PARC researchers Butler Lampson and Chuck Thacker were inventing the first personal computer when they designed and built the Alto in 1972 and 1973 and that by choosing not to commercialize the Alto, Xerox gave up its chance to become the dominant player in the coming personal computer revolution. The book is good, but this conclusion is wrong. Just the parts to build an Alto in 1973 cost $10,000, which suggests that a retail Alto would have had to sell for at least $25,000 (1973 dollars, too) for Xerox to make money on it. When personal computers finally did come along a couple of years later, the price point that worked was around $3,000, so the Alto was way too expensive. It wasn’t a personal computer.

    And there was no compelling application on the Alto — no VisiCalc, no single function — that could drive a potential user out of the office, down the street, and into a Xerox showroom just to buy it. The idea of a spreadsheet never came to Xerox. Peter Deutsch wrote about what he called spiders — values (like 1989 revenues) that appeared in multiple documents, all linked together. Change a value in one place and the spider made sure that value was changed in all linked places. Spiders were like spreadsheets without the grid of columns and rows and without the clearly understood idea that the linked values were used to solve quantitative problems. Spiders weren’t VisiCalc.

    If Xerox made a mistake in its handling of the Alto, it was in almost choosing to sell it. The techies at PARC knew that the Alto was the best workstation around, but they didn’t think about the pricing and application issues. When Xerox toyed with the idea of selling the Alto, that consideration instantly erased any doubts in the minds of its developers that theirs was a commercial system. David Kearns, the president of Xerox, kept coming around, nodding his head, and being supportive but somehow never wrote the all-important check.

    Xerox’s on-again, off-again handling of the Alto alienated the technical staff at PARC, who never really understood why their system was not marketed. To them, it seemed as if Kearns and Xerox, like the owners of Sutter’s Mill, had found gold in the stream but decided to build condos on the spot instead of mining because it was never meant to be a gold mine.

    There was a true sense of the academic — the amateur — in Ethernet too. PARC’s technology for networking all its computers together was developed in 1973 by a team led by Bob Metcalfe. Metcalfe’s group was looking for a way to speed up the link between computers and laser printers, both of which had become so fast that the major factor slowing down printing was, in fact, the wire between the two machines rather than anything having to do with either the computer or the printer. The image of the page was created in the memory of the computer and then had to be transmitted bit by bit to the printer. At 600 dots-per-inch resolution, this meant sending more than 33 million bits across the wire for each page. The computer could resolve the page in memory in 1 second and the printer could print the page in 2 seconds, but sending the data over what was then considered to be a high-speed serial link took just under 15 minutes. If laser printers were going to be successful in the office, a faster connection would have to be invented.

    PARC’s printers were computers in their own right that talked back and forth with the computers they were attached to, and this two-way conversation meant that data could collide if both systems tried to talk at once. Place a dozen or more computers and printers on the same wire, and the risk of collisions was even greater. In the absence of a truly great solution to the collision problem, Metcalfe came up with one that was at least truly good and time honored: he copied the telephone party line. Good neighbors listen on their party line first, before placing a call, and that’s what Ethernet devices do too — listen, and if another transmission is heard, they wait a random time interval before trying again. Able to transmit data at 2.67 million bits per second across a coaxial cable, Ethernet was a technical triumph, cutting the time to transmit that 600 dpi page from 15 minutes down to 12 seconds.
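
    Cringely’s figures hang together, roughly. Here is a minimal back-of-the-envelope check (the 8.5-by-11-inch page size is an assumption; the text doesn’t state it):

        # Rough reconstruction of the arithmetic in the text.
        # Assumption: an 8.5" x 11" page imaged at 600 dots per inch.
        page_bits = 8.5 * 600 * 11 * 600       # ~33.7 million bits per page
        old_link_bps = page_bits / (15 * 60)   # implied speed of the old serial link
        ethernet_bps = 2.67e6                  # original PARC Ethernet

        print(f"bits per page:       {page_bits / 1e6:.1f} million")
        print(f"implied serial link: {old_link_bps / 1e3:.0f} kbit/s")
        print(f"time over Ethernet:  {page_bits / ethernet_bps:.1f} seconds")

    The implied serial link works out to roughly 37 kbit/s, and the Ethernet figure lands at about 12.6 seconds, matching the “just under 15 minutes” and “12 seconds” in the text.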

    At 2.67 megabits per second (Mbps), Ethernet was a hell of a product, for both connecting computers to printers and, as it turned out, connecting computers to other computers. Every Alto came with Ethernet capability, which meant that each computer had an individual address or name on the network. Each user named his own Alto. John Ellenby, who was in charge of building the Altos, named his machine Gzunda “because it gzunda the desk”.

    The 2.67 Mbps Ethernet technology was robust and relatively simple. But since PARC wasn’t supposed to be interested in doing products at all but was devoted instead to expanding the technical envelope, the decision was made to scale Ethernet up to 10 Mbps over the same wire with the idea that this would allow networked computers to split tasks and compute in parallel.

    Metcalfe had done some calculations that suggested the marketplace would need only 1 Mbps through 1990 and 10 Mbps through the year 2000, so it was decided to aim straight for the millennium and ignore the fact that 2.67 Mbps Ethernet would, by these calculations, have a useful product life span of approximately twenty years. Unfortunately, 10 Mbps Ethernet was a much more complex technology — so much more complex that it literally turned what might have been a product back into a technology exercise. Saved from its brush with commercialism, it would be another six years before 10 Mbps Ethernet became a viable product, and even then it wouldn’t be under the Xerox label.

    Beyond the Alto, the laser printer, and Ethernet, what Xerox PARC contributed to the personal computer industry was a way of working — Bob Taylor’s way of working.

    Taylor was a psychologist from Texas who in the early 1960s got interested in what people could and ought to do with computers. He wasn’t a computer scientist but a visionary who came to see his role as one of guiding the real computer scientists in their work. Taylor began this task at NASA and then shifted a couple of years later to working at the Department of Defense’s Advanced Research Projects Agency (ARPA). ARPA was a brainchild program of the Kennedy years, intended to plunk money into selected research areas without the formality associated with most other federal funding. The ARPA funders, including Taylor, were supposed to have some idea in what direction technology ought to be pushed to stay ahead of the Soviet Union, and they were expected to do that pushing with ARPA research dollars. By 1965, 33-year-old Bob Taylor was in control of the world’s largest governmental budget for advanced computer research.

    At ARPA, Taylor funded fifteen to twenty projects at a time at companies and universities throughout the United States. He brought the principal researchers of these projects together in regular conferences where they could share information. He funded development of the ARPAnet, the first nationwide computer communications network, primarily so these same researchers could stay in constant touch with each other. Taylor made it his job to do whatever it took to find the best people doing the best work and help them to do more.

    When Xerox came calling in 1970, Taylor was already out of the government following an ugly experience reworking U.S. military computer systems in Saigon during the Vietnam War. For the first time, Taylor had been sent to solve a real-world computing problem, and reality didn’t sit well with him. Better to get back to the world of ideas, where all that was corrupted were the data, and there was no such thing as a body count.

    Taylor held a position at the University of Utah when Xerox asked him to work as a consultant, using his contacts to help staff what was about to become the Computer Science Laboratory (CSL) at PARC. Since he wasn’t a researcher himself, Taylor wasn’t considered qualified to run the lab, though he eventually weaseled into that job too.

    Alan Kay, jazz musician, computer visionary, and Taylor’s first hire at PARC, liked to say that of the top one hundred computer researchers in the world, fifty-eight of them worked at PARC. And sometimes he said that seventy-six of the top one hundred worked at PARC. The truth was that Taylor’s lab never had more than fifty researchers, so both numbers were inflated, but it was also true that for a time under Taylor, CSL certainly worked as though there were many more than fifty researchers. In less than three years from its founding in 1970, CSL researchers built their own time-sharing computer, built the Alto, and invented both the laser printer and Ethernet.

    To accomplish so much so fast, Taylor created a flat organizational structure; everyone who worked at CSL, from scientists to secretaries, reported directly to Bob Taylor. There were no middle managers. Taylor knew his limits, though, and those limits said that he had the personal capacity to manage forty to fifty researchers and twenty to thirty support staff. Changing the world with that few people required that they all be the best at what they did, so Taylor became an elitist, hiring only the best people he could find and subjecting potential new hires to rigorous examination by their peers, designed to “test the quality of their nervous systems.” Every new hire was interviewed by everyone else at CSL. Would-be researchers had to appear in a forum where they were asked to explain and defend their previous work. There were no junior research people. Nobody was wooed to work at CSL; they were challenged. The meek did not survive.

    Newly hired researchers typically worked on a couple of projects with different groups within CSL. Nobody worked alone. Taylor was always cross-fertilizing, shifting people from group to group to get the best mix and make the most progress. Like his earlier ARPA conferences, Taylor chaired meetings within CSL where researchers would present and defend their work. These sessions came to be called Dealer Meetings, because they took place in a special room lined with blackboards, where the presenter stood like a blackjack dealer in the center of a ring of bean-bag chairs, each occupied by a CSL genius taking potshots at this week’s topic. And there was Bob Taylor, too, looking like a high school science teacher and keeping overall control of the process, though without seeming to do so.

    Let’s not underestimate Bob Taylor’s accomplishment in just getting these people to communicate on a regular basis. Computer people love to talk about their work — but only their work. A Dealer Meeting not under the influence of Bob Taylor would be something like this:

    Nerd A (the dealer): “I’m working on this pattern recognition problem, which I see as an important precursor to teaching computers how to read printed text”.

    Nerd B (in the beanbag chair): “That’s okay, I guess, but I’m working on algorithms for compressing data. Just last night I figured out how to … ”

    See? Without Taylor it would have been chaos. In the Dealer Meetings, as in the overall intellectual work of CSL, Bob Taylor’s function was as a central switching station, monitoring the flow of ideas and work and keeping both going as smoothly as possible. And although he wasn’t a computer scientist and couldn’t actually do the work himself, Taylor’s intermediary role made him so indispensable that it was always clear who worked for whom. Taylor was the boss. They called it “Taylor’s lab.”

    While Bob Taylor set the general direction of research at CSL, the ideas all came from his technical staff. Coming up with ideas and then turning them into technologies was all these people had to do. They had no other responsibilities. While they were following their computer dreams, Taylor took care of everything else: handling budgets, dealing with Xerox headquarters, and generally keeping the whole enterprise on track. And his charges didn’t always make Taylor’s job easy.

    Right from the start, for example, they needed a DEC PDP-10 time-sharing system, because that was what Engelbart had at SRI, and PDP-10s were also required to run the ARPAnet software. But Xerox had its own struggling minicomputer operation, Scientific Data Systems, which was run by Max Palevsky down in El Segundo. Rather than buy a DEC computer, why not buy one of Max’s Sigma computers, which competed directly with the PDP-10? Because software is vastly more complex than hardware, that’s why. You could build your own copy of a PDP-10 in less time than it would take to modify the software to run on Xerox’s machine! And so they did. CSL’s first job on its way toward the office of the future was to clone the PDP-10. They built the Multi-Access Xerox Computer (MAXC). The C was silent, just to make sure that Max Palevsky knew the computer was named in his honor.

    The way to create knowledge is to start with a strong vision and then ruthlessly abandon parts of that vision to uncover some greater truth. Time sharing was part of the original vision at CSL because it had been part of Engelbart’s vision, but having gone to the trouble of building its own time-sharing system, the researchers at PARC soon realized that time sharing itself was part of the problem. MAXC was thrown aside for networks of smaller computers that communicated with each other — the Alto.

    Taylor perfected the ideal environment for basic computer research, a setting so near to perfect that it enabled four dozen people to invent much of the computer technology we have today, led not by another computer scientist but by an exceptional administrator with vision.

    I’m writing this in 1991, when Bill Gates of Microsoft is traveling the world preaching a new religion he calls Information At Your Fingertips. The idea is that PC users will be able to ask their machines for information, and, if it isn’t available locally, the PC will figure out how and where to find it. No need for Joe User to know where or how the information makes its way to his screen. That stuff can be left up to the PC and to the many other systems with which it talks over a network. Gates is making a big deal of this technology, which he presents pretty much as his idea. But Information At Your Fingertips was invented at Xerox PARC in 1973. Like so many PARC inventions, though, it’s only now that we have the technology to implement it at a price normal mortals can afford.

    In its total dedication to the pursuit of knowledge, CSL was like a university, except that the pay and research budgets were higher than those usually found in universities and there was no teaching requirement. There was total dedication to doing the best work with the best people — a purism that bordered on arrogance, though Taylor preferred to see it more as a relentless search for excellence.

    What sounded to the rest of the world like PARC arrogance was really the fallout of the lab’s intense and introverted intellectual environment. Taylor’s geniuses, used to dealing with each other and not particularly sensitive to the needs of mere mortals, thought that the quality of their ideas was self-evident. They didn’t see the need to explain — to translate the idea into the world of the other person. Beyond pissing off Miss Manners, the fatal flaw in this PARC attitude was their failure to understand that there were other attributes to be considered as well when examining every idea. While idea A may be, in fact, better than idea B, A is not always cheaper, or more timely, or even possible  — factors that had little relevance in the think tank but terrific relevance in the marketplace.

    In time the dream at CSL and Xerox PARC began to fade, not because Taylor’s geniuses had not done good work but because Xerox chose not to do much with the work they had done. Remember this is industrial basic research — that is, insurance. Sure, PARC invented the laser printer and the computer network and perfected the graphical user interface and something that came to be known as what-you-see-is-what-you-get computing on a large computer screen, but the captains of industry at Xerox headquarters in Stamford, Connecticut, were making too much money the old way — by making copiers — to remake Xerox into a computer company. They took a couple of halfhearted stabs, introducing systems like the Xerox Star, but generally did little to promote PARC technology. From a business standpoint, Xerox probably did the right thing, but in the long term, failing to develop PARC technology alienated the PARC geniuses. In his 1921 book The Engineers and the Price System, economist Thorstein Veblen pointed out that in high-tech businesses, the true value of a company is found not in its physical assets but in the minds of its scientists and engineers. No factory could continue to operate if the knowledge of how to design its products and fix its tools of production was lost. Veblen suggested that the engineers simply organize and refuse to work until they were given control of industry. By the 1970s, though, the value of computer companies was so highly concentrated in the programmers and engineers that there was not much to demand control of. It was easier for disgruntled engineers just to walk, taking with them in their minds 70 or 80 percent of what they needed to start a new company. Just add money.

    From inside their ivory tower, Taylor’s geniuses saw less able engineers and scientists starting companies of their own and getting rich. As it became clear that Xerox was going to do little or nothing with their technology, some of the bolder CSL veterans began to hit the road as entrepreneurs in their own right, founding several of the most important personal computer hardware and software companies of the 1980s. They took with them Xerox technology — its look and feel too. And they took Bob Taylor’s model for running a successful high-tech enterprise — a model that turned out not to be so perfect after all.

    Reprinted with permission

  • How and why LinkedIn is becoming an engineering powerhouse

    Most LinkedIn users know “People You May Know” as one of that site’s flagship features — an omnipresent reminder of other LinkedIn users with whom you probably want to connect. Keeping it up to date and accurate requires some heady data science and impressive engineering that keeps data constantly flowing between the various LinkedIn applications. When Jay Kreps started there five years ago, this wasn’t exactly the case.

    “I was here essentially before we had any infrastructure,” Kreps, now principal staff engineer, told me during a recent visit to LinkedIn’s Mountain View, Calif., campus. He actually came to LinkedIn to do data science, thinking the company would have some of the best data around, but it turned out the company had an infrastructure problem that needed his attention instead.

    How big? The version of People You May Know in place then was running on a single Oracle database instance — a few scripts and heuristics provided intelligence — and it took six weeks to update (longer if the update job crashed and had to restart). And that’s only if it worked. At one point, Kreps said, the system wasn’t working for six months.

    When the scale of data began to overload the server, the answer wasn’t to add more nodes but to cut out some of the matching heuristics that required too much compute power.

    So, instead of writing algorithms to make People You May Know more accurate, he worked on getting LinkedIn’s Hadoop infrastructure in place and built a distributed database called Voldemort.

    Since then, he’s built Azkaban, an open source scheduler for batch processes such as Hadoop jobs, and Kafka, another open source tool that Kreps called “the big data equivalent of a message broker.” At a high level, Kafka is responsible for managing the company’s real-time data and getting those hundreds of feeds to the apps that subscribe to them with minimal latency.
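
    In practice that publish-and-subscribe pattern looks roughly like the sketch below, written against the open-source kafka-python client; the broker address and topic name are invented for illustration:

        import json
        from kafka import KafkaProducer, KafkaConsumer

        # One application publishes an activity event to a feed (topic)...
        producer = KafkaProducer(
            bootstrap_servers="localhost:9092",
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        )
        producer.send("page-views", {"member_id": 42, "page": "/in/example"})
        producer.flush()

        # ...and any downstream application that subscribes to the feed sees it shortly after.
        consumer = KafkaConsumer(
            "page-views",
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
            auto_offset_reset="earliest",
        )
        for message in consumer:
            print(message.value)
            break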

    Espresso, anyone?

    But Kreps’s work is just a fraction of the new data infrastructure that LinkedIn has built since he came on board. It’s all part of a mission to create a data environment at LinkedIn that’s as innovative as that of any other web company around, and that means the company’s application developers and data scientists can keep building whatever products they dream up.

    Bhaskar Ghosh, LinkedIn’s senior director of data infrastructure engineering — who’ll be part of our guru panel at Structure: Data on March 20-21 — can’t help but find his way to the whiteboard when he gets to discussing what his team has built. It’s a three-phase data architecture comprising online, offline and nearline systems, each designed for specific workloads. The online systems handle users’ real-time interactions; offline systems, primarily Hadoop and a Teradata warehouse, handle batch processing and analytic workloads; and nearline systems handle features such as People You May Know, search and the LinkedIn social graph, which update constantly but require slightly less than online latency.
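
    A crude way to picture the split is by how fresh an answer has to be. The tier names below follow Ghosh’s description, while the thresholds and examples are made up for illustration:

        from enum import Enum

        class Tier(Enum):
            ONLINE = "online"      # serve the member's request right now
            NEARLINE = "nearline"  # constantly updated, slightly relaxed latency (PYMK, search, graph)
            OFFLINE = "offline"    # batch and analytic work on Hadoop or the warehouse

        def pick_tier(max_staleness_seconds: float) -> Tier:
            if max_staleness_seconds < 0.1:
                return Tier.ONLINE
            if max_staleness_seconds < 60:
                return Tier.NEARLINE
            return Tier.OFFLINE

        print(pick_tier(0.05))     # Tier.ONLINE   -- rendering a profile page
        print(pick_tier(10))       # Tier.NEARLINE -- refreshing People You May Know
        print(pick_tier(86_400))   # Tier.OFFLINE  -- nightly model building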

    Ghosh’s diagram of LinkedIn’s data architecture

    One of the most-important things the company has built is a new database system called Espresso. Unlike Voldemort, which is an eventually consistent key-value store modeled after Amazon’s Dynamo database and used to serve certain data at high speeds, Espresso is a transactionally consistent document store that’s going to replace legacy Oracle databases across the company’s web operations. It was originally designed to provide a usability boost for LinkedIn’s InMail messaging service, and the company plans to open source Espresso later this year.
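
    The difference matters to application code. The toy classes below are purely conceptual stand-ins, not the real Voldemort or Espresso APIs, but they show why a transactional document store suits something like a mailbox: related fields change together or not at all, whereas independent key-value puts can briefly be observed out of step.

        from contextlib import contextmanager

        class ToyKeyValueStore:
            """Dynamo-style store: every put is independent (eventual consistency)."""
            def __init__(self):
                self.data = {}
            def put(self, key, value):
                self.data[key] = value
            def get(self, key):
                return self.data.get(key)

        class ToyDocumentStore:
            """Espresso-style store: changes made in a transaction become visible together."""
            def __init__(self):
                self.committed = {}
            @contextmanager
            def transaction(self):
                staged = dict(self.committed)
                yield staged                  # callers mutate the staged copy
                self.committed = staged       # swap in the whole update atomically

        kv = ToyKeyValueStore()
        kv.put("mailbox:42:unread", 7)        # a lag between these two puts can leave
        kv.put("mailbox:42:last", "msg-9")    # readers seeing a mismatched pair

        docs = ToyDocumentStore()
        with docs.transaction() as mailbox:   # both fields change as one unit
            mailbox["mailbox:42"] = {"unread": 7, "last": "msg-9"}
        print(docs.committed)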

    According to Director of Engineering Bob Schulman, Espresso came to be “because we had a problem that had to do with scaling and agility” in the mailbox feature. It needs to store lots of data and keep consistent with users’ activity. It also needs a functional search engine so users — even those with lots of messages — can find what they need in a hurry.

    With the previous data layer in tact, he explained, the solution for developers to solve scalability and reliability issues was doing so in the application.

    However, Principal Software Architect Shirshanka Das noted, “trying to scale [your] way out of a problem” with code isn’t necessarily a long-term strategy. “Those things tend to burn out teams and people very quickly,” he said, “and you’re never sure when you’re going to meet your next cliff.”

    L to R: Kreps, Das, Ghosh and Schulman

    Schulman and Das have also worked together on technologies such as Helix — an open-source cluster management framework for distributed systems — and Databus. The latter, which has been around since 2007 and the company just open sourced, is a tool that pushes changes in what Das calls “source of truth” data environments like Espresso to downstream environments such as Hadoop so that everyone can ensure they’re working with the freshest data.
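
    The pattern Databus implements, change data capture, is easy to sketch in miniature: the source-of-truth store appends every committed change to an ordered log, and downstream systems replay that log from a checkpoint. The code below is a generic illustration of the pattern, not the Databus API:

        change_log = []     # stand-in for the source database's commit log

        def commit(entity, data):
            change_log.append({"seq": len(change_log) + 1, "entity": entity, "data": data})

        def fetch_changes_since(checkpoint):
            events = [e for e in change_log if e["seq"] > checkpoint]
            return events, (events[-1]["seq"] if events else checkpoint)

        # Writes land in the source of truth...
        commit("member:42", {"headline": "Principal Staff Engineer"})
        commit("member:43", {"headline": "Data Scientist"})

        # ...and a downstream consumer (search index, Hadoop staging, and so on) catches up.
        checkpoint = 0
        events, checkpoint = fetch_changes_since(checkpoint)
        for event in events:
            print("apply downstream:", event)   # applied in commit order, so every copy converges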

    In an agile environment, Schulman said, it’s important to be able to change something without breaking something else. The alternative is to bring stuff down to make changes, he added, and “it’s never a good time to stop the world.”


    Next up, Hadoop

    Thus far, LinkedIn’s biggest push has been in improving its nearline and online systems (“Basically, we’ve hit the ball out of the park here,” Ghosh said), so its next big push is offline — Hadoop, in particular. The company already uses Hadoop for the usual gamut of workloads — ETL, model-building, exploratory analytics and pre-computing data for nearline applications — and Ghosh wants to take it even further.

    He laid out a multipart vision, most of which centers around tight integration between the company’s Hadoop clusters and relational database systems. Among the goals: better ETL frameworks, ad-hoc queries, alternative storage formats and an integrated metadata framework — which Ghosh calls the holy grail — that will make it easier for various analytic systems to use each other’s data. He said LinkedIn has something half-built that should be finished this year.

    “[SQL on Hadoop] is going to take two years to work,” he explained. “What do we do in the meanwhile? We cannot throw this out.”

    Actually, the whole of LinkedIn’s data engineering efforts right now put a focus on building services that can work together easily, Das said. The Espresso API, for example, allows developers to connect a columnar storage engine and do some limited online analytics right from within the transactional database.

    With Hadoop plans laid out.

    Good infrastructure makes for happy data scientists

    Yael Garten, a senior data scientist at LinkedIn, said better infrastructure makes her job a lot easier. Like Kreps, she was drawn to LinkedIn (from her previous career doing bioinformatics research at Stanford) because the company has so much interesting data to work with, only she was fortunate enough to miss the early days of spotty infrastructure that couldn’t handle 10 million users, much less today’s more than 200 million users. To date, she said, she hasn’t come across a problem she couldn’t solve because the infrastructure couldn’t handle the scale.

    The data science team embeds itself with the product team and they work together to either prove out product managers’ hunches or build products around data scientists’ findings. In 2013, Garten said, developers should expect infrastructure that lets them prototype applications and test ideas in near real time. And even business managers need to see analytics as close to real time as possible so they can monitor how new applications are performing.

    And infrastructure isn’t just about making things faster, she noted: “Some things wouldn’t be possible.” She wouldn’t go into detail about what this magic piece of infrastructure is, but I’ll assume it’s the company’s top-secret distributed graph system. Ghosh was happy to go into detail about a lot of things, but not that one.

    A virtuous hamster wheel

    Neither Ghosh nor Kreps sees LinkedIn — or any leading web company, for that matter — quitting the innovation game any time soon. Partially, this is a business decision. Ghosh, for example, cites the positive impact on company culture and talent recruitment, while Kreps points out the difficult total-cost-of-ownership math when comparing paying for software licenses or hiring open source committers versus just building something internally.

    Kreps acknowledged that the constant cycle of building new systems is “kind of a hamster wheel,” but there’s always an opportunity to do new stuff and build products with their own unique needs. Initially, for example, he envisioned two target use cases for Hadoop but now the company has about 300 individual workloads; it went from two real-time data feeds to 650.

    “But companies are doing this for a reason,” he said. “There is some problem this solves.”

    Ghosh, well, he shot down the idea of relying too heavily on commercial technologies or existing open source projects almost as soon as he suggested it was a possibility. “We think very carefully about where we should do rocket science,” he told me, before quickly adding, “[but] you don’t want to become a systems integration shop.”

    In fact, he said, there will be a lot more development and a lot more open source activity from LinkedIn this year: “[I’m already] thinking about the next two or three big hammers.”


  • Low-end Huawei Prism II with Jelly Bean coming to T-Mobile soon


    Images and specs of the T-Mobile Prism II (Huawei U8686) have been leaked online via @evleaks. A follow-up to the original Huawei Prism, the Prism II will feature a similar form factor to its predecessor and will come with Android 4.1.1 Jelly Bean, a 1.0 GHz CPU and an HVGA (480 x 320) display. Not much else is known about this device yet, but check back with TalkAndroid as we learn more.

    Sources: Twitter & Twitter


  • Soul to sole: Eye surgeon Anthony Vipin Das has developed shoes that see for the blind

    Video still from Le Chal, courtesy Anthony Vipin Das.

    A haunting black-and-white video screened during the TED Fellows talks depicts people speaking into a device and then walking — at first taking halting steps, then more confident strides. As the video unfolds, the camera zooms in on the faces of the walkers — revealing that they are blind.

    With his team, TED Senior Fellow Anthony Vipin Das, an eye surgeon, has been developing haptic shoes that use vibration and GPS technology to guide the blind. This innovation — which could radically change the lives of the vision-impaired — has drawn the interest of the United States Department of Defense, which has recently shortlisted the project for a $2 million research grant. Anthony tells us the story behind the shoe.

    Tell us about the haptic shoe.

    The shoe is called Le Chal, which means “take me there” in Hindi. My team — Anirudh Sharma, Krispian Lawrence and I — are working on a haptic shoe that uses GPS to guide the blind. The most difficult problems that the blind usually face when they navigate are orientation and direction, as well as obstacle detection. The shoe is in its initial phase of testing: We’ve crafted the technology down to an insole that can fit into any shoe and is not limited by the shape of the footwear, and it vibrates to guide the user. It’s so intuitive that if I tap on your right shoulder, you will turn to your right; if I tap on your left shoulder, you turn to your left.

    The shoe basically guides the user on the foot on which he’s supposed to take a turn. This is for direction. The shoe also keeps vibrating if you’re not oriented in the direction of your initial path, and will stop vibrating when you’re headed in the right direction. It basically brings the wearer back on track as we check orientation at regular intervals. Currently I’m conducting the first clinical study at LV Prasad Eye Institute in Hyderabad, India. It’s very encouraging to see the kind of response we’ve had from wearers. They were so moved because it was probably the very first time that they had the sense of independence to move confidently — that the shoe was talking to them, telling them where to go and what to do.
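
    Stripped to its essentials, the guidance he describes is a bearing comparison: work out the angle between the wearer’s heading and the bearing to the next waypoint, then buzz the left or right insole until the two line up. A minimal sketch of that idea, with an invented tolerance:

        def guidance(heading_deg, bearing_to_waypoint_deg, tolerance_deg=15):
            """Return which insole to vibrate, given GPS heading and bearing to the next waypoint."""
            # signed difference in (-180, 180]; positive means the waypoint lies to the right
            diff = (bearing_to_waypoint_deg - heading_deg + 180) % 360 - 180
            if abs(diff) <= tolerance_deg:
                return "none"                            # on course: stop vibrating
            return "right" if diff > 0 else "left"       # the shoulder to turn toward

        print(guidance(heading_deg=0, bearing_to_waypoint_deg=90))    # right
        print(guidance(heading_deg=0, bearing_to_waypoint_deg=350))   # none (within tolerance)
        print(guidance(heading_deg=90, bearing_to_waypoint_deg=0))    # left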

    How do you tell the shoe where you want to go?

    It uses GPS tracking, and we’ve put in smart taps: gestures that the shoe can learn. You tap twice, and it’ll take you home. If you lift your heel for five seconds, the shoe might understand, “This is one of my favorite locations.” And not just that. If a shoe detects a fall, it can automatically call an emergency number. Moving forward, we want to try to decrease the dependency on the phone and the network to a great extent. We hope to crowdsource maps and build up enough data to store on the shoe itself.

    The second phase we are working on is obstacle detection. India has got such a varied terrain. The shoe can detect immediate obstacles like stones, potholes, steps. It’s not a replacement for the cane, but it’s an additive benefit for a visually impaired person to offer a sense of direction and orientation.

    Are you still in the development stage?

    The insole is already done. We are currently testing it. I’m using simple and complex paths — simple paths like a square, rectangle, triangle and a circle, and complex paths include a zigzag or a random path. Then we are going to step it up with navigation into a neighborhood. From there we’ll develop navigation to distant locations, including the use of public transportation. It will be a stepwise study that we’ll finish over the middle of this year, then go in for manufacturing the product.

    You’re an eye doctor. How did you get involved in this?

    I’m an eye surgeon who loves to step out of my box and try to see others who are working in similar areas of technology that are helpful for my patients. So Anirudh Sharma and I, we’re on the same TR35 list of India in 2012. I said, “Dude, I think we can be doing stuff with the shoe and my patients. Let’s see how we can refine it.” There was already an initial prototype when he presented last year at EmTech in Bangalore. Anirudh teamed up with one of his friends, Krispian Lawrence of Ducere Technologies in Hyderabad, who is leading the development and logistics to get this into the market. We just formed a really cool team, and started working on the shoe, started testing it on our patients and refining the model further and further. Finally we’ve come to a stage where my patients are walking and building a bond with the shoe.

    Are these patients comfortable with the shoe?

    Yes, it’s totally unobtrusive. And more importantly, we are working on developing the first vibration language in the world for the Haptic Shoe. We’re looking at standardizing the vibration, like Braille, which is multilingual. But even more crucial than the technology, the shoe is basically talking to the walker. How can they trust the shoe? So that’s an angle that we are looking at. Because at the end of the day, it’s the shoe that’s guiding you to the destination. We’re trying to build that bond between the walker and the sole.

    Building a bond with the sole. That’s good. I’m going to use that.

  • LG hits the 10 million mark for LTE phones sold


    The latest announcement from LG is that they’ve officially sold 10 million LTE-enabled smartphones globally. While they’re still trailing some of the biggest competitors in the market by a large chunk, it’s still a very impressive number, especially considering LTE isn’t commonplace all over the world like it is in countries like the US. Besides that, the number doesn’t include the Nexus 4, which has no (official) LTE but is still one of the best selling phones available on the market right now.

    We’ve seen LG make a push for more LTE phones lately, and I think it’s going to pay off for them. 10 million is a good stepping stone, and LG has plenty of room to go up. When the wave of next-generation devices hits the shelves in the coming months, expect LG to post some impressive numbers.

    source: Newswire


  • Browsing the web on an iPad stinks–and Apple likes it that way

    When iPads were first introduced in 2010, an Apple press release promised that the “iPad’s revolutionary Multi-Touch interface makes surfing the web an entirely new experience, dramatically more interactive and intimate than on a computer.” The implication was that the web via the tablet would be unrecognizable and vastly superior: hoverboarding compared with surfing on my laptop and doggie paddling on my phone.

    Yet, here it is three years on, and we’re still waiting for that “interactive and intimate” browsing experience (and hoverboards, for that matter).

    A recent study conducted by Onswipe revealed that iPads account for a whopping 98.1 percent of tablet traffic on websites. Despite this, the actual experience of surfing the web on an iPad is underwhelming at best and infuriating at worst. Simply put, today’s state-of-the-art tablet browsers, especially Safari, don’t do the Internet, the user, or the iPad justice. Apple wasn’t totally wrong: The iPad has proven itself to be a revolutionary device that absolutely has the potential to offer a transformative web-browsing experience. It just hasn’t yet. Which means there’s a gap in the market for an intuitive, immersive, innovative iPad browser. Whoever develops it is going to win big.

    Safari is deliberately hobbled

    As more and more of the services we use on a daily basis have migrated to the cloud, the web browser has become the computer’s most essential app. And when we surf the web on a computer, we encounter few obstacles. Though we may have to scale the occasional paywall or sit through an obligatory five seconds of an ad before accessing content, the navigational experience of a computer user is fluid and frictionless — as anyone who’s gone down the rabbit hole researching alpaca breeds or underrated Val Kilmer films at 3 a.m. can attest.

    Surfing the web is far less pleasurable on an iPad. Visiting a site frequently presents one with a pop-up and a dilemma: Download the app, or endure the diminished experience of a website designed for another device. Safari is essentially a limited version of its desktop sibling – and apps almost always provide a better experience. (Or, as Firefox UX Lead Alex Limi has summed it up, it’s “kind of sucky.”)

    Of course, this is sort of the point. It’s in Apple’s, or any tablet maker’s, best interest to make using (read: buying) apps preferable to visiting websites. Safari is designed to make using web-based apps on an iPad inconvenient, if not impossible. In response, most companies focus their mobile development resources on creating native apps rather than optimizing their content for tablet browsers. The result is a browsing experience full of flow-breakers. In short, on a computer the browsing experience is limitless; on a tablet, it’s filled with blind alleys and false doors.

    Why web browsing still matters

    There is an impulse among some to assume that the rise of apps – or, more sensationally, the death of the website – will eventually render browsers, or at least mobile ones, obsolete. While it’s true that more and more content is consumed through apps, and that personalization has shifted our approach to content from searching to getting, the number of Google searches has steadily increased – now topping an astounding one trillion queries a year.

    But even if we accept that the importance of mobile websites is on the wane, there’s no reason for mobile browsers to beat them to an early grave. There is plenty of room for resurrection, but only if we throw out desktop-based notions of what a browser looks and feels like. Freed of all the tasks and responsibilities that other apps accomplish, tablet browsers should offer an absorbing, engaging, innovative experience. Further, they should evolve the idea of what a browser is and can be on a tablet. Take GarageBand, for example: The iPad version is infinitely more interactive and tactile than the desktop version.

    I’ve mostly been picking on Safari. As the native browser for a tablet that accounts for 98.1 percent of tablet traffic, its influence is enormous. However, that’s not to say there aren’t more innovative browsers taking steps in the right direction. Dolphin, for instance, allows you to create your own gestures for various functions. And though there are any number of other browsers contending in the space, as of yet none has emerged as the standard-setter or must-have. Mozilla’s forthcoming iPad browser, Junior, which completely throws out desktop-inspired design and focuses on simplicity, could be a contender, but for now we have to wait and see.

    What we’ve lost

    As it currently stands, the shoehorning of hobbled desktop browsers onto tablets is forcing us to move from a browser-based experience to an app-based one. This is not necessarily a negative development, but we must carefully consider what we lose as our web experience becomes siloed – or, alternatively, consider in our app design how to preserve the kind of surfing serendipity that made web browsing valuable in the first place.

    The web as we have known it was designed to facilitate the browsing experience – to be a boundlessly linked rhizomatic structure of hypertext. But we have quite willingly begun to fence it off as we have shifted our experience to the iPad and individual apps. Even worse, though, is that most of the apps and services that have attempted to fill the browsing void have only further constricted the experience of the web via the tablet.

    Under the claim of “personalization” and making the browsing and discovery experience more individually valuable and meaningful, they really provide little more than constricting customization confined to an editor’s picks or your social graph. Most of it is expected or retreaded. What is lost is the magic of blazing a trail from one page to the next, the anticipation of revealing the unknown that lurks behind the next link. Personalization shouldn’t be an either/or experience of web discovery, and neither should browsing on the tablet.

    While we will continue to make strides in personalizing the web, and hopefully even enhancing the web experience on tablets, I’m also looking forward to a browser that lets me fall down an unexpected rabbit hole once in a while. As long as there are alpacas and Val Kilmer movies, there will be surfers. It’s up to developers to provide the hoverboards.

    Hank Nothhaft is the co-founder and chief product officer of Trapit, a personalized content discovery platform.


     


  • Join First Lady Michelle Obama in Google+ Hangout about Let’s Move! on Monday

    Ed. note: This post was originally published on the official Let's Move website. You can read that post here.

    Last week, First Lady Michelle Obama traveled around the country to celebrate the third anniversary of Let’s Move!, her initiative to ensure that all our children grow up healthy and reach their full potential.

    Mrs. Obama highlighted great progress being made in schools, towns and businesses across America. She also announced several new programs that will help families make healthier choices and will enable our kids to be more physically active, including a new MyPlate partnership that identifies recipes that meet USDA nutrition standards on the largest food sites on the web, and the launch of Let's Move Active Schools, which empowers schools to find free or low-cost ways to incorporate movement before, during, and after the school day.

    On Monday, March 4th at 11:10 a.m. ET, the celebration continues as the First Lady joins her first ever Google+ Hangout.

    Mrs. Obama will participate in a completely virtual conversation from the Blue Room of the White House, speaking with families from around the country. The hangout will be moderated by Kelly Ripa, co-host of LIVE with Kelly and Michael, and we hope you’ll join, too!


  • 2013 Mercedes-Benz SLS AMG GT Roadster

    SLS AMG GT Roadster

    It’s a well-known fact that the 2013 Mercedes-Benz SLS AMG is one serious performer. However, when the boys at Benz decided to chop the top and create the SLS AMG GT Roadster, some enthusiasts thought that the car would lose some of its performance magic. InsideLine.com has come through with some favorable track test numbers that we hope will put all those who were skeptical at ease. Check it out after the jump.

    Source: InsideLine.com

  • Hacking Sand Hill: How the cloud will help security startups lure VCs

    The computer security industry is far from an easy place to build a successful startup. Security has traditionally been controlled by a small group of established firms that maintain a vice-like grip on the major IT sales channels. And understandably, big-ticket customers like the military and large enterprises can be hard for startups to sell into. The technology in fields such as encryption and intrusion detection is complex and arcane, and often requires expensive certifications.

    But even in the face of such challenges security remains a hot field and offers opportunities for startups. So-called endpoint security for consumers was a $4.9 billion market in 2012, according to IDC, and enterprise security software and hardware is roughly $31.4 billion worldwide. In the past two years there have been over $12 billion in security acquisitions, with many of the notable exits in 2011 and 2012 having hit north of $800 million.

    It’s also a disruptive field. Security is constantly evolving to confront the mercurial world of hackers and cybercriminals. With the proliferation of professional financial cybercrime and high-profile state-sponsored hacking, modern adversaries for information security are incredibly sophisticated. The rise of this generation of hackers creates a demand for new and better security technologies, and two fields in particular are currently big areas of interest for Sand Hill VCs.

    Cloud and next-gen infrastructure security

    Cloud and infrastructure security refers to the hardware and software associated with protecting modern IT infrastructures. As more businesses move workloads to the cloud, critical financial and personal data becomes exposed to the public internet. Securing data in flight to the cloud and at rest off-site is mission critical.

    VCs will be heavily investing in hardware and software in this field because demand for it rises in step with the success of cloud computing; as companies demand the flexibility and cost-savings of the cloud, they will also require next-generation security built to secure the infrastructure of public and hybrid-cloud environments.

    This is a hard area for startups to play in. Proving compliance with draconian and mercurial regulations like PCI-DSS or the Common Criteria is a difficult and frequently expensive endeavor. As a result of high barriers to entry, systems incumbents such as NetApp and Oracle have an advantage.

    But several new startups in this space have navigated these issues through the engagement of established veterans and a focused but superior feature set. These include encryption-focused Ciphercloud and the back-end infrastructure-focused CloudPassage. (Note: I have no financial or professional relationship with these or any of the other companies mentioned in this article.) Both Ciphercloud and CloudPassage augment the security of an existing IT infrastructure and uniquely target bringing compliance-grade security to hybrid cloud environments. Compliance is a serious and expensive issue for the enterprise, and these industry veteran-led startups are attractive to VCs because they provide an economical but well-monetized alternative to purely consulting-based solutions.

    Intrusion detection and prevention systems

    Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) refer to software and hardware solutions that detect and halt attacks or attackers as they attempt to compromise a system in real time. The rocket science-esque fields of IDS and IPS aren’t new, but with the advent of this generation of sophisticated attackers and widespread interest in big data analysis, IDS and IPS are quickly becoming a hot topic for VCs.

    A great example of IDS/IPS success can be seen in Silvertail Systems. Acquired late last year by EMC, Silvertail used complex algorithms to detect attacks from the outside and even internal attacks launched by compromised accounts. VCs liked that Silvertail’s tech was managed by a team of industry veterans and that from the beginning they closed deals with large enterprises.

    Late-stage starlet FireEye seems poised for success by employing the same formula. Their late 2012 hire of ex-McAfee CEO Dave DeWalt and success in traditional security verticals like US DoD, US federal, and large financial have well prepared the company for their imminent IPO.

    SF-based CloudFlare can also be considered an IDS/IPS company. CloudFlare intercepts and sifts traffic to a site through an analysis engine to improve performance and protect websites from modern attacks like Distributed Denial of Service (DDoS) and Cross-Site Request Forgery (CSRF). CloudFlare protects a significant portion of the internet and remains on the watch list for nearly every VC on Sand Hill.

    CloudFlare’s frictionless sales model is also an interesting point for VCs. Bucking the traditional IT model of inside/outside sales teams, infrastructure companies like CloudFlare and New Relic allow customers to directly purchase through their sites. This decreases sales cycle time and increases margins – both key diligence metrics for VCs. In a busy space like IPS/IDS (or IT in general), employing positive differences like a unique sales architecture helps startups distinguish themselves in the eyes of investors.

    Finding an edge

    As a security startup you can do a few things to improve your chances of closing your round. Make sure your team is led by veterans who know how to build and sell into your verticals (or actively recruit them). Also, align your company with sectors that have complementary demand with big tech trends.

    And, as in any industry, attack big problems that people are willing to pay lots of money to solve.

    Andrew “Andy” Manoske is an Associate at GGV Capital, a Sand Hill and Shanghai-based venture capital firm. Prior to GGV, he was a product manager at NetApp and managed the design of security features across the company’s entire product line. Follow him on Twitter @a2d2.


     Photo courtesy alexmillos/Shutterstock.com.


  • 5 tips for startup founders from startup founders

    Building a new tech company from the ground up is incredibly hard. Here are some tips from founders and co-founders who have already scaled that mountain, which might help ease the journey for others.

    David Mytton, CEO of Server Density (and friends.)

    1: Haste makes waste. It’s natural to be in a hurry to get product out the door, but take a breath first and really gauge where you are. Slow down when it comes to key decisions, said Dan Belcher, co-founder of Boston-based Stackdriver, a startup focused on monitoring and managing cloud workloads. “Doing things too early is as dangerous as — or even worse than — doing them too late. Think hard about when you start to invest in sales and marketing, when you start forecasting, when you need to implement roles and controls.”

    Yesware founder and CEO Matthew Bellows

    Matthew Bellows, co-founder of Yesware, an email provider for salespeople, agreed. “Don’t sell your product too soon. [That’s a] hard lesson for a salesperson like me to learn but our board was very clear that I shouldn’t start selling the product before the product was getting tons of in-bound interest.”

    2: Do everything. This is easy because you’ll have to, but embrace this opportunity to get outside your comfort zone. “Founders should do every role first before hiring someone to take it over. This helps me understand who I’m hiring, what they should be good at, what they should be doing and how to measure their success,” said David Mytton, founder of Server Density, a London-based provider of server monitoring services.

    Karl Wirth, co-founder of Apptegic, which helps companies tailor content shown to website users based on who they are and their activities, agrees. “For the first year and a half, I was our only salesperson.” This meant he learned how to cold-call prospects, find buyers, assess a prospect’s problem and then work overtime to close the deal. “I knew sales would be important — I didn’t expect it to also shape and refine us so profoundly,” he said.

    GrabCAD founder Hardi Meybaum.

    Hardi Meybaum, co-founder and CEO of GrabCAD, an online marketplace for mechanical engineers, is all over this notion. “You are engineer, then product manager, then sales manager, then you’re raising money, then you hire smarter people than yourself to run product, engineering, sales, and marketing and then you need to lead by trust and great communication,” Meybaum said.

    Apptegic founder Karl Wirth.

    3: People are your biggest asset. Hire carefully. Mytton feels founders need to hold off on any new hires until things start hurting. “Hiring ahead of demand is the fastest way to burn through money,” he said. But, conversely, founders always need to look for new talent — perhaps for hiring down the road. “You should always be interviewing and always be hiring regardless of your headcount plan,” says Stackdriver co-founder Izzy Azeri. “It’s so hard to find good people and the founder is always the best recruiter.”

    4: It’s all about the user, stupid. Ok, that’s harsh. But any startup or older company that loses its focus on the customer and solving a customer problem is toast.

    “If you are genuinely helping people work more effectively, you will get pulled into companies,” said Yesware’s Bellows. “The days of selling to the IT department and the office of CIO are coming to an end. Frankly, the days of sales-and-marketing-driven companies are coming to an end.” So talk to your users and, perhaps more importantly, listen to them.

    Cloze co-founders Dan Foody (left) and Alex Cote.

    Cloze co-founders Dan Foody (left) and Alex Cote.

    5: Be prepared to fail. Expect it; it’s part of the gig. Dan Foody, co-founder of Cloze, the maker of an iOS app that consolidates a user’s mail and social media messages, said anyone in that line of work should heed Path CEO Dave Morin’s adage that the first version of any mobile app will fail.

    Morin’s right, says Foody. “The real reason is that Apple restricts developers to at most 100 beta test devices for any app. In today’s world that’s not nearly a large enough audience to refine an app (especially a consumer-focused one),” he said.

    “You need hundreds to thousands of beta testers. How can you avoid this pitfall?  Build a web app first so you can learn the hard lessons up front with a wide audience without being restricted by platform and store limitations.”

    That’s a good micro example, but generally speaking, failure is how we learn. So founders: be prepared to fail. It can be a badge of honor, especially if you learn from the experience.
