Blog

  • Reuters – Best Buy Spurns $1bn Minority Investment

    Best Buy Co has turned down a $1 billion minority investment proposal by founder Richard Schulze’s three private equity partners, Reuters reports. Under the proposal, Leonard Green Partners, Cerberus Capital Management and TPG Capital would have each received a seat on the board of the world’s largest electronics retailer.

    Reuters – Best Buy Co Inc (BBY.N) turned down a $1 billion minority investment proposal by founder Richard Schulze’s three private equity partners, two sources familiar with the situation told Reuters Friday.

    Under the proposal, Leonard Green Partners, Cerberus Capital Management and TPG Capital would have each received a seat on the board of the world’s largest electronics retailer, the sources said, asking not to be named because they were not authorized to speak to the media.

    A Best Buy spokesman declined to comment. Calls to Leonard Green and Cerberus were not immediately returned. TPG declined to comment.

    (Reporting By Jessica Toonkel; Editing by Gerald E. McCormick)

    The post Reuters – Best Buy Spurns $1bn Minority Investment appeared first on peHUB.

  • KRG’s Fort Dearborn Completes FetterGroup Deal

    KRG Capital Partners’ portfolio company Fort Dearborn Company has completed the acquisition of FetterGroup’s paint & coatings labels business. KRG made its initial investment in Fort Dearborn Company in August 2010, and the acquisition of FetterGroup’s paint & coatings labels business represents the 179th investment for KRG since its inception. Fort Dearborn Company is a supplier of cut and stack, pressure sensitive, roll-fed and shrink sleeve labels.

    PRESS RELEASE

    KRG Capital Partners (KRG) announces that one of its Fund IV platform companies, Fort Dearborn Company, a leading supplier of cut & stack, pressure sensitive, roll-fed and shrink sleeve labels, has completed the acquisition of FetterGroup’s paint & coatings labels business. KRG made its initial investment in Fort Dearborn Company in August 2010, and the acquisition of FetterGroup’s paint & coatings labels business represents the 179th investment for KRG since its inception.

    About Fort Dearborn Company: Fort Dearborn Company is a leading supplier of high-impact decorative labels for the beverage, food, household products, nutraceutical, paint and coatings, personal care, private label/retail and spirits markets. The company provides cut & stack, pressure sensitive, roll-fed and shrink sleeve labels across multiple print technologies including digital, flexographic, offset lithographic and rotogravure. Headquartered in Illinois, the company has ten operating divisions in North America, employing nearly 1,200 associates.

    About KRG Capital Partners: Founded in 1996, KRG is a Denver-based private equity buyout firm with $4.4 billion of cumulative capital either deployed or available for future investment, which includes approximately $1.1 billion deployed since inception by institutional equity co-investors. The firm seeks investment opportunities for its partners where KRG can work in concert with owners and operating managers who are committed to expanding their companies and becoming industry leaders. The result is a partnership that focuses on creating a significantly larger enterprise through a combination of internal growth and complementary add-on acquisitions. Since inception, KRG has invested in 45 platform companies and has completed 134 add-on acquisitions for those platforms.

    The post KRG’s Fort Dearborn Completes FetterGroup Deal appeared first on peHUB.

  • Culbro Completes Minority Investment in TDBBS

    TDBBS, a Richmond, VA-based manufacturer and marketer of all natural dog treats and chews, has completed a minority investment by Culbro, a family-controlled private equity firm based in New York City. Terms were not disclosed.

    PRESS RELEASE

    TDBBS, LLC, a Richmond, VA-based manufacturer and marketer of all natural dog treats and chews, announced the completion of a minority investment by Culbro, LLC, a family-controlled private equity firm based in New York City. The transaction closed on February 28, 2013; terms were not disclosed.

    Founded by Avrum and Lauren Elmakis in 2007, TDBBS has achieved extraordinary growth since inception and was ranked #384 in Inc. Magazine’s 2012 “Inc. 500” ranking of the nation’s fastest-growing private companies. TDBBS specializes in bully sticks, elk antlers, and other natural, high protein dog chews that are highly digestible and are becoming increasingly popular among dog owners across the U.S. and abroad. In addition to bully sticks, the company also offers a wide variety of complementary all natural dog treats and chews to consumers through its proprietary ecommerce sites and through other retail channels.

    Culbro, LLC was formed in 2005 as the private equity investment vehicle of the Cullman family and is presently managed by members of that family. Among the numerous successful businesses it has owned, the Cullman family owned and operated General Cigar Holdings, Inc., the largest premium cigar company in the U.S. with prestigious brands such as Macanudo and Partagas, for over forty years before selling the company in 2005. The principals of Culbro have significant operating and investing experience across a number of industries, with a particular focus on consumer products companies.

    Avrum Elmakis, Founder and CEO of TDBBS, said, “I am very excited to partner with Culbro. The knowledge and first-hand experience that the Culbro team has through its development of a world-class consumer products company like General Cigar will be invaluable to TDBBS as we seek to capitalize on the many exciting growth opportunities in the pet industry. This transaction will enable TDBBS to accelerate our growth at an even greater pace due to the resources, guidance and access to capital that Culbro brings.”

    “In a relatively short time span TDBBS has established itself as a leader in the dynamic pet products industry by providing pet owners with innovative products that are all natural, great-tasting and healthy for their pets,” said Edgar Cullman, Jr., Managing Member of Culbro. “We are looking forward to working with Avrum and the rest of the TDBBS team to continue to build upon their success to date.”

    David Danziger, Managing Member of Culbro, said, “Avrum and his team have done an extraordinary job building a successful business with a number of attributes that we look for when making an investment, including a strong leadership team, loyal customer relationships and sustainable competitive advantages. The fact that TDBBS operates in an industry that is expanding so rapidly makes this investment even more compelling for Culbro.”

    Marriott & Co. acted as exclusive financial advisor to TDBBS on this transaction. Williams Mullen provided legal advice to TDBBS, and Gunster provided legal advice to Culbro on this transaction.

    Justin Marriott, Managing Director of Marriott & Co., said, “We thoroughly enjoyed working with Avrum and the rest of the TDBBS team on this important transaction. The cultural fit between TDBBS and Culbro was evident throughout this entire process, and I know that TDBBS is poised for great things under this new partnership.”

    Added Mr. Elmakis, “I can’t thank the Marriott & Co. team enough for the amazing job they did throughout this process. They managed an efficient and professional process to ensure that TDBBS found a private equity partner that truly understands our unique business model.”
    About TDBBS, LLC:

    Headquartered in Richmond, VA, TDBBS, LLC is a leading manufacturer and marketer of premium pet related products throughout the U.S. and abroad. TDBBS provides its loyal customer base with a diverse line of unique products in the pet industry including all natural dog treats and chews. Among its product offerings are bully sticks, which are all natural, high protein dog treats and chews that the company markets in a variety of shapes, sizes and varieties. TDBBS sells its products to consumers through its ecommerce sites, and it has also developed relationships with some of the leading distributors and retailers across the U.S.

    TDBBS was founded in 2007 and has achieved a triple-digit revenue growth rate since its inception. In Inc. Magazine’s 2012 “Inc. 500” ranking of the fastest-growing private companies in the U.S., TDBBS was ranked #384.

    About Culbro, LLC:

    Based in New York, NY, Culbro, LLC (www.culbro.com) is the private equity investment vehicle of the Cullman family and is presently managed by members of that family. Culbro primarily focuses on making equity investments of $10 to $20 million in middle market companies across a variety of sectors, including branded consumer products, healthcare services, education products and services and other outsourced services including information technology. The firm invests its partners’ capital and is supported by a number of allied investors who often invest alongside it, providing for flexible investment structures and time horizons. The Managing Members of Culbro are former operating executives who seek to partner with strong existing management teams to help their companies grow.

    The Cullman family has a rich history dating back to 1884 which includes owning and managing a number of successful companies, including General Cigar Holdings, Inc., the largest premium cigar manufacturer and marketer in the U.S. with prestigious brands including Macanudo and Partagas. Upon completing a sale of its interests in General Cigar to Swedish Match AB in 2005, the Cullman family created Culbro, LLC as its private equity investment vehicle. In addition to its U.S. operations, Culbro invests in India through its associated company, Helix Investments.

    Contacts:

    Avrum Elmakis, CEO – TDBBS, LLC, Phone: (804) 525-7169, [email protected], www.tdbbsllc.com
    David Danziger, Managing Member – Culbro, LLC, Phone: (646) 461-9277, [email protected], www.culbro.com

    ###

    The post Culbro Completes Minority Investment in TDBBS appeared first on peHUB.

  • Kofax Acquires Altosoft

    Kofax, a provider of smart process applications, has acquired Altosoft, a developer of business intelligence and analytics software. Altosoft will function as a wholly owned subsidiary of Kofax, conducting its business as usual while also enhancing Kofax’s product portfolio with near real-time process and data analytics, visualization and ETL capabilities.

    PRESS RELEASE

    Kofax® plc (LSE: KFX), a leading provider of smart process applications for the business critical First Mile™ of customer interactions, today reported it has acquired Altosoft, Inc., a leading developer of business intelligence and analytics software. Altosoft will function as a wholly owned subsidiary of Kofax, conducting its business as usual while also enhancing Kofax’s product portfolio with near real-time process and data analytics, visualization and ETL capabilities. With this acquisition, Kofax now has all of the core capabilities necessary to provide market leading smart process applications.

    “This acquisition is consistent with our stated acquisition strategy and in an adjacent area of interest that we’ve been talking about for some time now. It fundamentally allows us to provide more actionable information to our customers sooner than would otherwise be possible,” said Reynolds C. Bish, chief executive officer of Kofax. “Altosoft is a great fit with Kofax as the leadership team has prior experience in the BPM market, and the technology is an ideal complement to our existing product offerings. We’re pleased to welcome Altosoft to the Kofax family.”

    Altosoft’s software provides rapid, no-coding development of near real-time reporting and dashboard applications through the use of a data integration and analytics engine utilizing in-memory techniques. The software is available for both traditional on-premise deployments and as a hosted SaaS subscription offering featuring multi-tenant capabilities. It is developed using Microsoft’s .NET environment and is therefore consistent with and easily integrated into Kofax’s software products. Altosoft has a significant presence in the healthcare industry and was noted in Gartner’s 2012 Magic Quadrant for Business Intelligence Platforms as appropriate for related business use cases.

    According to the Forrester Research Report, The Forrester Wave™: Business Intelligence Service Providers, Q4 2012, “Business intelligence now has a much more important role. All products and services continue to become more commoditized in our global economy. … But if there are two businesses that are marketing and selling identical products or services and one has more insight into its customers’ behavior — or even if two share the same insights but one gets that information a day sooner — that business has a much higher chance of success.”

    “We look forward to new growth opportunities within Kofax’s larger direct and indirect sales channels, and being able to enhance Kofax’s smart process application solutions,” stated Scott Opitz, president and chief executive officer of Altosoft. “Our strength in business intelligence is a great fit with Kofax’s leadership in smart capture, BPM, dynamic case management and mobile applications.”

    Kofax acquired all of Altosoft’s stock for $13.5 million in cash. Additional payments may be made subject to the achievement of specific annual revenue growth rates and EBITDA levels during calendar years 2013, 2014 and 2015 and certain management employment conditions.

    Altosoft was a privately held company headquartered in Media, Pennsylvania with approximately 43 employees and contractors, principally in the U.S. and Russia. Its unaudited financial statements for calendar year 2012 reported revenues of $3.4 million, EBITA of $0.5 million and gross assets of $1.4 million with no material debt at closing. Kofax expects it to be EBITDA neutral during calendar year 2013 and accretive in subsequent periods. Scott Opitz, 53, and Alex Elkin, 47, Altosoft’s chief technical officer, were its most important employees and majority shareholders, and will remain as employees.

    About Kofax

    Kofax® plc (LSE: KFX) is a leading provider of innovative smart capture and process automation software and solutions for the business critical First Mile of customer interactions. These begin with an organization’s Systems of Engagement, which generate real time, information intensive communications from customers, and provide a fluid bridge to their Systems of Record, which are typically large scale, rigid enterprise applications and repositories not easily adapted to more contemporary technology. Success in the First Mile can dramatically improve an organization’s customer experience and greatly reduce operating costs, thus driving increased competitiveness, growth and profitability. Kofax software and solutions provide a rapid return on investment to more than 20,000 customers in banking, insurance, government, healthcare, business process outsourcing and other markets. Kofax delivers these through its own sales and service organization, and a global network of more than 800 authorized partners in more than 75 countries throughout the Americas, EMEA and Asia Pacific. For more information, visit kofax.com.

    © 2013 Kofax, plc. “Kofax” is a registered trademark and “First Mile” is a trademark of Kofax, plc.

    The post Kofax Acquires Altosoft appeared first on peHUB.

  • Morning Advantage: What Makes a Mobile Ad Effective

    More and more of us find our smartphones all-absorbing, addictive, impossible to put down. And yet advertisers have been slow to allocate their dollars in accordance with our attentions, in part because of the received wisdom that “mobile ads don’t work.” Columbia Business School’s Ideas@Work reports on research done by Miklos Sarvary, Yakov Bart, and Andrew T. Stephen on mobile ads that do.

    Logically, you might expect that someone on their mobile phone using an inexpensive app would be most interested in ads that, say, serve up other inexpensive mobile apps — perhaps a new one that they haven’t heard of before. That’s exactly wrong, say the researchers. Instead, the ads that work best are a) for big-ticket items, like cars and plane tickets, and b) feature brands you’ve already heard of. “The ad’s strength is not adding new data,” says Sarvary, “but reminding you of what you already know and making you think about the product again.” The right way to use mobile ads, they say, is to deploy them after your major TV and print campaign, using the tiny mobile ads as reminders.

    Lessons Learned

    Why It’s a Mistake to Write Off Japan (The Boston Globe)

    HBS dean Nitin Nohria reflects in The Boston Globe on a recent visit to Japan. Despite Westerners’ view that the country is “a place that’s been fighting the same set of vexing economic problems for so long with so little effectiveness that it resembles a sports team mired in a long winless streak,” he found Japan and its business leaders to be dynamic, engaged, and optimistic. As so many countries struggle with fiscal imbalances, no-growth economies, and the aftermath of an epic bubble, there’s much we can learn from Japan — but only if we’re paying attention. —Dan McGinn

    HAVE YOUR CAKE

    Someone’s Got to Earn the Money to Support Charities; It Might as Well Be You (Quartz)

    Trying to figure out how you can make the world a better place? One obvious way is to work for a nonprofit, but did you ever consider the option of “earning to give”? That’s the term William MacAskill applies to making a lot of money — in finance, say — and then giving a lot of it away to charities. One of his former students, for example, now works at a trading firm and gives away 50% of his income. The ex-student’s donations alone could pay the wages of several people working for nonprofits. The nonprofit sector already has plenty of people working for it, MacAskill says; what it really needs is your money. So go get rich. —Andy O’Connell

    BONUS BITS:

    Remember Friendster?

    Autopsy of a Dead Social Network (MIT Technology Review)
    Lifestyles of the Mega-Wealthy, in One Mega-Chart (The Reformed Broker)
    The Strange Behavioral Logic of the Sequester Stalemate (HBR.org)

  • Fon scores a big one: crowdsourced Wi-Fi community signs DT for millions of hotspots

    It may not be the investment that was rumored earlier this year, but Deutsche Telekom has struck a deal with the crowdsourced Wi-Fi outfit Fon to provide coverage across Germany. This comes a month after Fon signed with a DT subsidiary in Croatia – a country, as we pointed out at the time, that DT sometimes uses as a testbed for new services that it intends to roll out more widely.

    Fon is a community of people who submit their Wi-Fi hotspots for inclusion in a global pool. By doing so they become “Foneros” who let others use their Wi-Fi connections for free, and in exchange they get to do the same around the world. Due to ISPs’ terms and conditions, which generally forbid letting strangers onto customers’ connections, this idea works best in concert with the ISPs themselves – BT in the UK was a trailblazer here, and DT is certainly one of the biggest ISPs that Fon has landed.

    The DT offering is called WLAN TO GO, and through it DT’s customers who offer up their own connection will gain access to around 8 million hotspots worldwide. As DT itself has 12 million broadband lines and around 12,000 Wi-Fi hotspots, there’s clearly scope for major expansion of Fon’s reach too – this deal doesn’t just cover Germany, but also DT’s local subsidiaries in Bulgaria, Greece, Romania, Slovakia and Hungary.

    For DT, there’s an extra motivation too: if its customers start using more hotspots, they will theoretically use less mobile data – a boon for networks feeling the strain of bandwidth hogs such as mobile video. Here’s how DT CEO Rene Obermann put it in a statement this morning:

    “The partnership with FON fits perfectly with Telekom’s network expansion strategy. The astonishing increase in data traffic calls for network optimization and expansion, as well as the implementation of new high-speed networks.

    “By the year 2016, we want to set up more than 2.5 million additional hotspots in Germany with the WLAN TO GO offering. With our technology mix of mobile communication, fixed lines and Wi-Fi, we can gradually introduce our customers to the benefits of internet access anywhere and anytime.”


  • Fixtures Living Accelerates Growth with Catterton Partners Backing

    Fixtures Living has announced a strategic partnership with Catterton Partners. The partnership will support the expansion of the company throughout the United States. Fixtures Living is a retail concept specializing in providing customers a tailored assortment of premium lifestyle products for the home. Terms of the transaction were not disclosed.

    PRESS RELEASE

    Fixtures Living (the “Company”), a fast growing retail concept specializing in providing customers a tailored assortment of premium lifestyle products for the home, today announced a strategic partnership with Catterton Partners, the leading consumer-focused private equity firm. The strategic partnership endorses the remarkable growth potential of Fixtures Living and is designed to support the expansion of the Company throughout the United States. Terms of the transaction were not disclosed.

    Founded in 2009, Fixtures Living carries a select array of best-in-class products for indoor and outdoor living spaces — kitchens, laundry rooms, and bathrooms. Unlike typical industry showrooms, Fixtures Living stores invite consumers to enjoy a 360 degree sensory experience that ignites the imagination by implementing live kitchens and working bath fixtures (from decorative plumbing to entire Health and Wellness systems), and by recruiting and retaining a welcoming and knowledgeable staff who encourage guests to “Live Joyfully™.” The Company currently has three experiential showrooms in Southern California and plans to expand rapidly across the U.S. over the next several years, including adding two locations later this year – one in San Diego at the Westfield UTC luxury center and one in the iconic Glendale Galleria.

    Chief Executive Officer Jeffery Sears and the current management team will continue to lead the Company as it expands its presence across the country. Sears, along with co-founder and Chairman Jim Stuart, will retain a significant stake in the Company.

    Sears commented, “The Fixtures Living concept has created a new way for the consumer to choose lifestyle goods for the home. Our innovative approach is embraced enthusiastically by homeowners and industry professionals who appreciate the opportunity to test-drive products in an inviting and interactive setting. This fosters a connection between the visitor and the products that helps them create the types of moments they wish to share in their homes. We look forward to working with Catterton, a partner with significant experience growing similar best-in-class retailers such as Restoration Hardware. Together, we are well-equipped to capitalize on the vast opportunity we see to fill a void in the current marketplace.”

    “Fixtures Living has developed a game-changing retail model,” said Scott Dahnke, Managing Partner at Catterton Partners. “By delivering a full-immersion retail experience that facilitates the relationship between consumers, trade professionals, and leading brands, the Company has succeeded in creating a retail concept that is positioned to win. This concept is unlike any other that we have seen. We are excited to partner with the talented team at Fixtures Living to help the Company realize its immense potential.”

    Recently, Fixtures Living has received numerous awards and honors, including:

    Listed #78 on Forbes’ annual ranking of “America’s Most Promising Companies” (2013)
    Winner, “Best New Prototype or Reinvention of a Prototype” by Retail Traffic Magazine (2012)
    Winner, “Store Design of the Year” (Large Format) by Retail Week Magazine (2012)
    Winner, Grand Prize, “Sustainability,” from the Association for Retail Environments (2012)
    Winner, Individual Element, “In-Store Communications” from the Association for Retail Environments (2012)
    Winner, Outstanding Merit, “Design,” from the Association for Retail Environments (2012)
    Winner, “Retail Store of the Year” (Hard-Lines 15k-25k sf category) from Chain Store Age (2012)
    Winner, International Award of Merit, “Best Retail Concept” (Hard-Lines category) by the Retail Design Institute (2012)
    Winner, “Best in Show Worldwide: In-Store Signage and Graphics” (2012)
    LEED Silver Certification from USGBC (2012)
    Appeared as cover story or featured story in the industry’s top five magazines

    Valtus Capital Group acted as Valuation Advisor to Fixtures Living in connection with the transaction.

    About Fixtures Living

    Fixtures Living is a place for trade professionals and consumers to dream about, play with, and choose products that lead to better living. Specializing in premium lifestyle goods for the home, the store carries a hand-culled array of best-in-class brands for indoor- & outdoor-kitchens, laundry rooms and the bath, from exquisite decorative plumbing to entire Health-and-Wellness Systems. Currently in three California markets, the store will soon have a presence in the Glendale Galleria, and is poised to expand into several other major U.S. cities within the next 12 months.

    About Catterton Partners

    With more than $3.0 billion currently under management and a twenty-three-year track record of success in building high growth companies, Catterton Partners is the leading consumer-focused private equity firm. Since its founding in 1989, Catterton has leveraged its category insight, strategic and operating skills, and network of industry contacts to establish one of the strongest private equity investment track records in the middle market. Catterton Partners invests in all major consumer segments, including Food and Beverage, Retail and Restaurants, Consumer Products and Services, Consumer Health, and Media and Marketing Services. Catterton’s investments include, among others: Restoration Hardware, Bloomin’ Brands (Outback Steakhouse, Carrabba’s Italian Grill, Bonefish Grill, Fleming’s Prime Steakhouse & Wine Bar, and Roy’s), Edible Arrangements, Cheddar’s Casual Café, Noodles & Company, Plum Organics, Frederic Fekkai, Build-A-Bear Workshop, Kettle Foods, Odwalla, and P.F. Chang’s China Bistro.

    SOURCE Catterton Partners

    The post Fixtures Living Accelerates Growth with Catterton Partners Backing appeared first on peHUB.

  • Praxair Acquires NuCO2 Backed by Aurora Capital Group

    Praxair has acquired NuCO2 for $1.1 billion from Aurora Capital Group. NuCO2 is a provider of fountain beverage carbonation in the US with approximately 162,000 customer locations and 900 employees.

    PRESS RELEASE

    Praxair, Inc. (NYSE: PX) announced today that it has completed the previously announced acquisition of NuCO2 Inc. for $1.1 billion from Aurora Capital Group.

    NuCO2 is the largest provider of fountain beverage carbonation in the United States with approximately 162,000 customer locations and 900 employees. The NuCO2 micro-bulk beverage carbonation offering is the service model of choice for customers offering fountain beverages as it is cost effective, reliable and less labor intensive.

    “NuCO2 is a high-quality business that fits directly within Praxair’s strategy of building density and providing customer reliability. It will continue to operate with a focus on customer service and will benefit from Praxair’s strong competencies in distribution and productivity,” said Eduardo Menezes, executive vice president of Praxair. “Praxair will grow its beverage carbonation businesses domestically and internationally through expanded product and service offerings,” he added.

    Praxair, Inc. is the largest industrial gases company in North and South America, and one of the largest worldwide, with 2012 sales of $11 billion. The company produces, sells and distributes atmospheric and process gases, and high-performance surface coatings. Praxair products, services and technologies are making our planet more productive by bringing productivity and environmental benefits to a wide variety of industries including aerospace, chemicals, food and beverage, electronics, energy, healthcare, manufacturing, metals and others.

    This document contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These statements are based on management’s reasonable expectations and assumptions as of the date the statements are made but involve risks and uncertainties. These risks and uncertainties include, without limitation: the performance of stock markets generally; developments in worldwide and national economies and other international events and circumstances; changes in foreign currencies and in interest rates; the cost and availability of electric power, natural gas and other raw materials; the ability to achieve price increases to offset cost increases; catastrophic events including natural disasters, epidemics and acts of war and terrorism; the ability to attract, hire, and retain qualified personnel; the impact of changes in financial accounting standards; the impact of changes in pension plan liabilities; the impact of tax, environmental, healthcare and other legislation and government regulation in jurisdictions in which the company operates; the cost and outcomes of investigations, litigation and regulatory proceedings; continued timely development and market acceptance of new products and applications; the impact of competitive products and pricing; future financial and operating performance of major customers and industries served; the impact of information technology system failures, network disruptions and breaches in data security; and the effectiveness and speed of integrating new acquisitions into the business. These risks and uncertainties may cause actual future results or circumstances to differ materially from the projections or estimates contained in the forward-looking statements. Additionally, financial projections or estimates exclude the impact of special items which the company believes are not indicative of ongoing business performance. 
    The company assumes no obligation to update or provide revisions to any forward-looking statement in response to changing circumstances. The above listed risks and uncertainties are further described in Item 1A (Risk Factors) in the company’s Form 10-K and 10-Q reports filed with the SEC, which should be reviewed carefully. Please consider the company’s forward-looking statements in light of those risks.

    SOURCE: Praxair, Inc.

    The post Praxair Acquires NuCO2 Backed by Aurora Capital Group appeared first on peHUB.

  • Synteract and Harrison Clinical Research Close Deal

    Synteract, a full-service contract research organization, has completed its acquisition of Harrison Clinical Research to form a new multinational business – SynteractHCR. Synteract is a portfolio company of San Francisco-based Gryphon Investors.

    PRESS RELEASE

    Synteract, a leading full-service contract research organization (CRO) and portfolio company of San Francisco-based Gryphon Investors, has completed its acquisition of Harrison Clinical Research (“HCR”) to form a new multinational CRO – SynteractHCR. The company will have its corporate headquarters in San Diego County. Wendel Barr will lead SynteractHCR as CEO. Dr. Francisco Harrison, formerly HCR’s chairman and founder, will remain a senior member of the executive team and become a member of the Board of Directors. The existing management stays in place, with former Synteract COO Stewart Bieler becoming president of U.S. operations and former HCR CEO Benedikt van Nieuwenhove becoming president of European operations for SynteractHCR.

    The merger of the companies will make SynteractHCR a top-tier, global CRO and provide additional resources and scale to support large, later-phase programs:

    Geographic Footprint — The combined company will have offices in 16 countries with operations in both Western and Eastern Europe, Israel and South America, as well as the U.S. In addition to its headquarters in San Diego, California, SynteractHCR’s U.S. presence will include two offices on the East Coast in Research Triangle Park, North Carolina and Princeton, New Jersey, reflecting coverage of three important biopharmaceutical hubs. This expanded presence will allow the company to offer strong international and regional clinical trial support to clients throughout the world. SynteractHCR also maintains a clinical in-patient unit in Germany that will help enlist and treat trial patients, as well as a clinical research training center in Belgium to assist internal and external professionals.
    Longstanding Experience and Talent — SynteractHCR has exceptionally strong, full-service clinical development expertise with a staff of more than 800. Typical employee tenure at the company is greater than five years, and turnover is lower than the industry average.
    Client Loyalty and Repeat Business — A strong tradition of customer service excellence combined with a high regard for trust and transparency are cornerstones of ongoing client relationships, resulting in a high percentage of repeat and referral business.
    Reinforced Therapeutic Breadth — Both organizations have a complementary heritage of managing Phase I-IV clinical trials across multiple therapeutic areas, much of which strategically aligns with the largest areas of clinical R&D investment, including oncology, CNS, infectious disease, endocrinology, cardiovascular and respiratory.
    SynteractHCR is focused on enabling and supporting the innovation and development of better therapies in healthcare. A continued directive of the new combined company will be to maintain a strong connection with emerging to midsize biopharmaceutical companies through which a consultative approach and strong clinical development expertise provide the foundation of its customer relationships.
    “Our longstanding drug development expertise allows us to form a new global leader with enhanced scale and therapeutic breadth, but with the personal approach that we have always taken to working with clients,” said CEO Wendel Barr. “We will provide a continuum of service that allows us to work with clients throughout the entire development life cycle, from emerging products through post-marketing, and will bring technology efficiencies to the company that will help to take time and cost out of drug development.”

    SynteractHCR is a portfolio company of Gryphon Investors, a San Francisco-based premier middle market private equity firm, which was the lead financial partner in the acquisition. Fairmount Partners provided financial advice to Synteract, while KPMG International provided financial advice to HCR. Terms of the transaction were not disclosed.

    The new company will unveil its new brand identity and messaging at the DIA EuroMeeting in Amsterdam, from March 4-6, 2013, in booth 616. In addition, Andrei Kravchenko, MD, PhD, Head of Office, Ukraine, will participate as session chair on Tuesday, March 5th at 2-3:30 p.m. CET on “Enhancing Clinical Research Effectiveness.”

    About SynteractHCR SynteractHCR is a full-service contract research organization with a successful two-decade track record supporting biotechnology, medical device and pharmaceutical companies in all phases of clinical development. With its “Shared Work – Shared Vision” philosophy, SynteractHCR provides customized Phase I through IV services collaboratively and cost-effectively, ensuring on-time delivery of quality data so clients get to decision points faster. Operating in 16 countries, SynteractHCR delivers trials internationally, offering expertise across multiple therapeutic areas including notable depth in oncology, CNS, infectious disease, endocrinology, cardiovascular and respiratory, among other indications.

    CONTACT: Beth Walsh [email protected] 760-230-2424

    SOURCE SynteractHCR

    The post Synteract and Harrison Clinical Research Close Deal appeared first on peHUB.

  • Xbox Live premieres its first movie

    In tough economic times, raising the money and getting a movie made without any major stars in it can be more than a little challenging for independent film makers. Getting it distributed is even harder.

    So instead of trying to get their movie into cinemas, releasing it straight to DVD, or even putting it out on YouTube, the makers of Pulp are distributing their low-budget British comedy via an alternative method — Xbox Live.

    The film, which follows a struggling comic book publisher who becomes mixed up in a plot to take down a crime syndicate, is available to Xbox 360 Live subscribers from today, making it the first film to premiere on the games console. It’s only currently being offered in the UK, but if it’s a success Microsoft may also release it in the US and other international markets later this year.

    Pulp’s co-director Adam Hamdy says: “Microsoft might not seem like the obvious partner for an indie comedy, but the film industry has changed. Xbox 360 can instantly distribute Pulp to millions of UK customers, and publicize the release in ways that simply aren’t possible traditionally”.

    Talking about the decision to distribute the movie, Xbox Live product manager Pav Bhardwaj says: “It’s a great fit. The film is really well aligned with our audience and it’s great to support British talent”.

    Microsoft says it intends to distribute more films in the future, confirming what my colleague Joe Wilcox observed three weeks ago, that the future of Xbox isn’t gaming.


  • UpCloud bursts out of Finland for European launch, with U.S. in sights for this year

    An ambitious Finnish infrastructure-as-a-service company called UpCloud has launched across Europe with local rivals such as CloudSigma and Elastichosts in its sights, and U.S. expansion already on the agenda.

    The first two data centers are in London and Helsinki, and more are planned for Chicago towards the middle of the year, then Las Vegas and Singapore. A spin-off of sorts from the established Finnish hosting firm Sigmatic, UpCloud claims to have 100 percent homegrown virtualization tech. According to general manager Antti Vilpponen, this keeps costs low and services flexible:

    “Our offering is truly scalable and flexible – we’re not giving any predetermined set of instances. You can freely choose the amount of CPUs, memory and storage… We’re really fast – all servers are deployed in less than 60 seconds – and our network stack is 100 percent redundant.”

    Alongside enterprise-grade redundancy, the company offers fully dedicated server resources and one-click migration between availability zones. What’s more, Vilpponen said, unlike with Amazon Web Services, the cheapest service levels UpCloud offers have the same level of performance you’d get with more expensive packages. As the tech was built in-house, UpCloud offers a 100 percent Service Level Agreement (SLA) and 50x downtime compensation — that’s a decent uptime guarantee, which should bolster the enterprise-grade status UpCloud is going for.
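
    The article doesn't spell out how that 50x downtime compensation is computed. Here is a minimal sketch of one plausible reading (credit equal to 50 times the fees accrued during the outage); the hourly rate and the multiplier semantics are assumptions for illustration, not UpCloud's actual billing terms:

```python
# Hypothetical reading of a "50x downtime compensation" SLA clause.
# Both numbers below are assumptions for illustration only.
HOURLY_RATE_EUR = 0.0575       # roughly the quoted €41.84/month spread over 730 hours
COMPENSATION_MULTIPLIER = 50   # credit 50x the fees accrued while down

def sla_credit(downtime_hours: float) -> float:
    """Credit for an outage: 50x what the instance would have billed."""
    return round(downtime_hours * HOURLY_RATE_EUR * COMPENSATION_MULTIPLIER, 2)

# Under this reading, a two-hour outage on a ~€0.0575/hour instance
# is credited as if 100 hours of service had been refunded.
two_hour_credit = sla_credit(2.0)
```

    Under that reading, a two-hour outage returns €5.75 in credit, roughly four days' worth of the instance's fees.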

    There’s a browser-based control panel and apps for Android and iOS, while those with more complex control needs also have an API (although it’s not fully Amazon compatible) that they can play with. CPU, memory, storage (both HDD and SSD), OS, firewall and IP addresses are all billed on an hourly basis. Network traffic and storage device I/O requests are billed by use.

    Right now UpCloud is being personally bankrolled by founder and CTO Joel Pihlajamaa, who is also Sigmatic’s CEO. That makes it pretty impressive to see the company apparently already undercutting local rivals, if not Amazon itself at this point. For example, Vilpponen showed me a chart showing a low-tier UpCloud package of 2GB memory and 100GB storage priced at €41.84 ($54.47) monthly, versus €57.40 on Elastichosts and €72.03 on CloudSigma (it should be noted that CPU amounts were included in the calculation but not explicit in the price, “as they change quite a lot between the providers”).

    “We really think offering constantly high-performing resources at affordable prices is key to how we want to position ourselves,” Vilpponen said. “We won’t be able to overcome Amazon in Europe within a couple of years, but we’ve got big plans.”

    UpCloud is even offering the first 1,000 people that register from Monday morning (and buy €10 of credit) an extra €100 in free credit, so there’s clearly a fair amount of money to back this up — for now. Bootstrapping will only take you so far in the scale-driven cloud business, though, so UpCloud would do well to raise fresh funding this year, as it intends to.

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • If Bradley Manning and WikiLeaks are guilty, then so is the New York Times

    While the trial of Bradley Manning has sparked some interest in certain circles, many people probably think the former U.S. Army private’s case will have little impact on either them or American society as a whole. Harvard law professor Yochai Benkler, however, argues that they are wrong — and that if Manning is found guilty of “aiding the enemy” for releasing classified documents to WikiLeaks, it could change the nature of both journalism and free speech forever.

    Why? Because as Benkler points out, the charge for which Manning is being court-martialed could just as easily be applied to someone who leaks similar documents to virtually any media outlet, including the New York Times or the Washington Post. In other words, if the U.S. government has seen fit to go after Manning and WikiLeaks, what is to stop them from pursuing anyone who leaks documents, and any media entity that publishes them?

    WikiLeaks is a media entity just like the Times

    I’ve argued in the past that WikiLeaks is a media entity, and a fairly crucial one in this day and age, and Benkler clearly agrees. As the Harvard professor (who will likely be testifying at Manning’s trial) describes in his piece:

    “Someone in Manning’s shoes in 2010 would have thought of WikiLeaks as a small, hard-hitting, new media journalism outfit — a journalistic ‘Little Engine that Could’ that, for purposes of press freedom, was no different from the New York Times.”


    And we don’t have to hypothesize about whether the government would have gone after Manning for leaking documents directly to the New York Times instead of to WikiLeaks: as Benkler notes, the chief prosecutor in the case was asked that exact question by the judge in January and responded “Yes ma’am.” In other words, for the purposes of the government’s case against Manning, there is no appreciable difference between WikiLeaks and the Times, or any other traditional media outlet.

    Benkler argues that the government’s behavior constitutes “a clear and present danger to journalism in the national security arena” — not just because it is trying to penalize a whistleblower, but because the state is arguing that Manning is guilty of “aiding the enemy,” a charge that could put him in prison for life. Benkler also notes that unlike the other charges against Manning, aiding the enemy is something even civilians can be found guilty of.

    Isn’t the New York Times aiding the enemy too?

    So if handing documents over to a media entity that subsequently publishes them qualifies as “aiding the enemy” in the eyes of the government, then giving them to the New York Times would fit that description just as well as giving them to WikiLeaks. And if providing classified documents to a publisher can qualify, then wouldn’t the entity that actually published them be guilty as well — regardless of whether it’s WikiLeaks or the Times?

    The First Amendment would seem to protect the NYT in a case like this, and I’ve argued before that it should protect WikiLeaks as well — an argument that former Times executive editor Bill Keller has said he agrees with. But the U.S. government continues to pursue WikiLeaks for its role in publicizing the documents that Manning leaked, and some U.S. legislators have mused aloud about whether espionage charges could be laid against other media entities like the New York Times as well.

    Benkler’s warning shouldn’t be taken lightly: if Manning is guilty of aiding the enemy for simply leaking documents, then anyone who communicates with a newspaper could be guilty of something similar. And if the leaker is guilty, then the publisher could be as well — and that could cause a chilling effect on the media that would change the nature of public journalism forever.

    Images courtesy of Shutterstock/Nata-lia and Flickr user jphilipg


  • Charcoal Black Samsung Galaxy Note 8.0 Photo Leaked


    Samsung has always been one for adding a bit of colorful flair to its devices. Be it the Galaxy S III in Pebble Blue; the Note II in Pink; or the Note 10.1 in Garnet Red, Samsung loves to give us choices. Well, it appears that the Korean giant’s newest addition to the family, the recently announced Note 8.0, will be seeing multiple colors as well. As a leaked press shot suggests (below), the tablet will not just come in white but will also be available in a Charcoal Black color.

    [Leaked press shot: Samsung Galaxy Note 8.0 in Charcoal Black]

    This rumored color addition does appear to add some weight to the idea that Samsung is abandoning its short-lived, influenced-by-nature color scheme. That isn’t to say we won’t see different color options added down the road, though. We won’t know for sure until the Note 8.0 debuts in the second quarter of this year. Stay tuned!

    Source: PhoneArena

    Come comment on this article: Charcoal Black Samsung Galaxy Note 8.0 Photo Leaked

  • Google currently testing data compression feature for Chrome Browser


    Chrome is no doubt one of the most popular web browsers on our Android devices, but it appears that Google is intent on making the web browsing experience just a little faster than it already is. Taking a page from web browsers like Opera, Google has unveiled a preliminary build of its Chromium web browser that utilizes data compression. Here’s how Google describes the data compression feature:

    “Reduce data consumption by loading optimized web pages via Google proxy servers”.

    So essentially Google’s data compression will do two things: provide more efficient security and speed up page load times. The security improvement comes from the Chrome browser connecting to Google’s proxy servers over SPDY, a protocol that forces SSL encryption for all sites. Page load times, on the other hand, are sped up by multiplexing multiple streams of data over a single network connection and assigning high or low priorities to page resources being requested from a server.
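
    The multiplexing-with-priorities idea can be sketched in a few lines. This is an illustration of the general technique only, not Chrome or SPDY source: a toy scheduler that interleaves frames from several streams over one logical connection, always serving the highest-priority stream that still has data.

```python
# Toy priority multiplexer: frames from several streams share one connection.
# Lower priority number = more important (served first).
import heapq

def multiplex(streams):
    """streams: list of (priority, name, chunks) tuples.
    Yields (name, chunk) frames interleaved over one logical connection,
    always draining the highest-priority stream with data remaining."""
    heap = [(prio, name, iter(chunks)) for prio, name, chunks in streams]
    heapq.heapify(heap)
    while heap:
        prio, name, it = heapq.heappop(heap)
        chunk = next(it, None)
        if chunk is not None:
            yield name, chunk
            heapq.heappush(heap, (prio, name, it))  # stream still has data

frames = list(multiplex([
    (0, "html", ["<head>", "<body>"]),   # page markup: high priority
    (1, "image", ["img1", "img2"]),      # images yield to markup
]))
# html frames drain first, then image frames follow on the same connection
```

    Real SPDY frames carry binary headers and flow control on top of this, but the scheduling principle is the same: high-priority resources such as markup go out before low-priority ones like images, all without opening extra connections.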

    Google hasn’t exactly made this feature public yet, but it is available to test for those of you on Android 4.2 who like to be ambitious and try things out. Hit the break for the full instructions on how to get this neat feature going.


    • Go to Settings, About Phone.

    • Find the build number

    • Tap on the build number repeatedly about seven times

    • Note that after the third tap, you’ll see a snarky dialog that says you are four taps away from being a developer.

    • After the seventh tap you will see a message that says “You are now a developer.”

    Once all is done, the Developer Settings option (which you need to enable USB debugging) will appear in the device’s Settings app.

    Come comment on this article: Google currently testing data compression feature for Chrome Browser

  • Accidental Empires, Part 11 — Role Models (Chapter 5)

    Eleventh in a series. The next installment of Robert X. Cringely’s 1991 tech-industry classic Accidental Empires is highly appropriate for the industry today. He discusses concepts like “look and feel”, how pioneers freely copied ideas, and where attitudes began to change. There’s something prescient here with respect to the aggressive patent litigation by Apple and some other companies today.

    This chapter also explores the incredible contribution one research lab, Xerox PARC, made to personal computing as we know it: germinator of the graphical user interface, the mouse, Ethernet and the laser printer, among others. The photo is of the Alto, arguably the first computer workstation and one of many, many products conceived but not marketed.

    This being the 1990s, when the economy is shot to hell and we’ve got nothing much better to do, the personal computer industry is caught up in an issue called look and feel, which means that your computer software can’t look too much like my computer software or I’ll take you to court. Look and feel is a matter of not only how many angels can dance on the head of a pin but what dance it is they are doing and who owns the copyright.

    Here’s an example of look and feel. It’s 1913, and we’re at the Notre Dame versus Army football game (this is all taken straight from the film Knute Rockne, All-American, in which young Ronald Reagan appeared as the ill-fated Gipper — George Gipp). Changing football forever, the Notre Dame quarterback throws the first-ever forward pass, winning the game. A week later, Notre Dame is facing another team, say Purdue. By this time, word of the forward pass has gotten around, the Boilermakers have thrown a few in practice, and they like the effect. So early in the first quarter, the Purdue quarterback throws a forward pass. The Notre Dame coach calls a time-out and sends young Knute Rockne jogging over to the Purdue bench.

    “Coach says that’ll be five dollars”, mumbles an embarrassed Knute, kicking at the dirt with his toe.

    “Say what, son?”

    “Coach says the forward pass is Notre Dame property, and if you’re going to throw one, you’ll have to pay us five dollars. I can take a check”.

    That’s how it works. Be the first one on your block to invent a particular way of doing something, and you can charge the world for copying your idea or even prohibit the world from copying it at all. It doesn’t even matter if the underlying mechanism is different; if the two techniques look similar, one is probably violating the look and feel of the other.

    Just think of the money that could have been earned by the first person to put four legs on a chair or to line up the clutch, brake, and gas pedals of a car in that particular order. My secret suspicion is that this sort of easy money was the real reason Alexander Graham Bell tried to get people to say “ahoy” when they answered the telephone rather than “hello”. It wasn’t enough that he was getting a nickel for the phone call; Bell wanted another nickel for his user interface.

    There’s that term, user interface. User interface is at the heart of the look-and-feel debate because it’s the user interface that we’re always looking at and feeling. Say the Navajo nation wants to get back to its computing roots by developing a computer system that uses smoke signals to transfer data into and out of the computer. Whatever system of smoke puffs they settle on will be that computer’s user interface, and therefore protectable under law (U.S., not tribal).

    What’s particularly ludicrous about this look-and-feel business is that it relies on all of us believing that there is something uniquely valuable about, for example, Apple Computer’s use of overlapping on-screen windows or pull-down menus. We are supposed to pretend that some particular interface concepts sprang fully grown and fully clothed from the head of a specific programmer, totally without reference to prior art.

    Bullshit.

    Nearly everything in computing, both inside and outside the box, is derived from earlier work. In the days of mainframes and minicomputers and early personal computers like the Apple II and the Tandy TRS-80, user interfaces were based on the mainframe model of typing out commands to the computer one 80-character line at a time—the same line length used by punch cards (IBM should have gone for the bucks with that one but didn’t). But the commands were so simple and obvious that it seemed stupid at the time to view them as proprietary. Gary Kildall stole his command set for CP/M from DEC’s TOPS-10 minicomputer operating system, and DEC never thought to ask for a dime. Even IBM’s VM command set for mainframe computers was copied by a PC operating system called Oasis (now Theos), but IBM probably never even noticed.

    This was during the first era of microcomputing, which lasted from the introduction of the MITS Altair 8800 in early 1975 to the arrival of the IBM Personal Computer toward the end of 1981. Just like the golden age of television, it was a time when technology was primitive and restrictions on personal creativity were minimal, too, so everyone stole from everyone else. This was the age of 8-bit computing, when Apple, Commodore, Radio Shack and a hundred-odd CP/M vendors dominated a small but growing market with their computers that processed data eight bits at a time. The flow of data through these little computers was like an eight-lane highway, while the minicomputers and mainframes had their traffic flowing on thirty-two lanes and more. But eight lanes were plenty, considering what the Apples and others were trying to accomplish, which was to put the computing environment of a mainframe computer on a desk for around $3,000.

    Mainframes weren’t that impressive. There were no fancy, high-resolution color graphics in the mainframe world — nothing that looked even as good as a television set. Right from the beginning, it was possible to draw pictures on an Apple II that were impossible to do on an IBM mainframe.

    Today, for example, several million people use their personal computers to communicate over worldwide data networks, just for fun. I remember when a woman on the CompuServe network ran a nude photo of herself through an electronic scanner and sent the digitized image across the network to all the men with whom she’d been flirting for months on-line. In grand and glorious high-resolution color, what was purported to be her yummy flesh scrolled across the screens of dozens of salivating computer nerds, who quickly forwarded the image to hundreds and then thousands of their closest friends. You couldn’t send such an image from one terminal to another on a mainframe computer; the technology doesn’t exist, or all those wacky secretaries who have hopped on Xerox machines to photocopy their backsides would have had those backsides in electronic distribution years ago.

    My point is that the early pioneers of microcomputing stole freely from the mainframe and minicomputer worlds, but there wasn’t really much worth stealing, so nobody was bothered. But with the introduction of 16-bit microprocessors in 1981 and 1982, the mainframe role model was scrapped altogether. This second era of microcomputing required a new role model and new ideas to copy. And this time around, the ideas were much more powerful — so powerful that they were worth protecting, which has led us to this look-and-feel fiasco. Most of these new ideas came from the Xerox Palo Alto Research Center (PARC). They still do.

    To understand the personal computer industry, we have to understand Xerox PARC, because that’s where most of the computer technology that we’ll use for the rest of the century was invented.

    There are two kinds of research: research and development and basic research. The purpose of research and development is to invent a product for sale. Edison invented the first commercially successful light bulb, but he did not invent the underlying science that made light bulbs possible. Edison at least understood the science, though, which was the primary difference between inventing the light bulb and inventing fire.

    The research part of R&D develops new technologies to be used in a specific product, based on existing scientific knowledge. The development part of R&D designs and builds a product using those technologies. It’s possible to do development without research, but that requires licensing, borrowing, or stealing research from somewhere else. If research and development is successful, it results in a product that hits the market fairly soon — usually within eighteen to twenty-four months in the personal computer business.

    Basic research is something else — ostensibly the search for knowledge for its own sake. Basic research provides the scientific knowledge upon which R&D is later based. Sending telescopes into orbit or building superconducting supercolliders is basic research. There is no way, for example, that the $1.5 billion Hubble space telescope is going to lead directly to a new car or computer or method of solid waste disposal. That’s not what it’s for.

    If a product ever results from basic research, it usually does so fifteen to twenty years down the road, following a later period of research and development.

    What basic research is really for depends on who is doing the research and how they are funded. Basic research takes place in government, academic, and industrial laboratories, each for a different purpose. Basic research in government labs is used primarily to come up with new ideas for blowing up the world before someone else in some unfriendly country comes up with those same ideas. While the space telescope and the supercollider are civilian projects intended to explain the nature and structure of the universe, understanding that nature and structure is very important to anyone planning the next generation of earth-shaking weapons. Two thirds of U.S. government basic research is typically conducted for the military, with health research taking most of the remaining funds.

    Basic research at universities comes in two varieties: research that requires big bucks and research that requires small bucks. Big bucks research is much like government research and in fact usually is government research but done for the government under contract. Like other government research, big bucks academic research is done to understand the nature and structure of the universe or to understand life, which really means that it is either for blowing up the world or extending life, whichever comes first. Again, that’s the government’s motivation. The universities’ motivation for conducting big bucks research is to bring in money to support professors and graduate students and to wax the floors of ivy-covered buildings. While we think they are busy teaching and learning, these folks are mainly doing big bucks basic research for a living, all the while priding themselves on their terrific summer vacations and lack of a dress code.

    Small bucks basic research is the sort that requires paper and pencil, and maybe a blackboard, and is aimed primarily at increasing knowledge in areas of study that don’t usually attract big bucks — that is, areas that don’t extend life or end it, or both. History, political science, and romance languages are typical small bucks areas of basic research. The real purpose of small bucks research to the universities is to provide a means of deciding, by the quality of their small bucks research, which professors in these areas should get tenure.

    Nearly all companies do research and development, but only a few do basic research. The companies that can afford to do basic research (and can’t afford not to) are ones that dominate their markets. Most basic research in industry is done by companies that have at least a 50 percent market share. They have both the greatest resources to spare for this type of activity and the most to lose if, by choosing not to do basic research, they eventually lose their technical advantage over competitors. Such companies typically devote about 1 percent of sales each year to research intended not to develop specific products but to ensure that the company remains a dominant player in its industry twenty years from now. It’s cheap insurance, since failing to do basic research guarantees that the next major advance will be owned by someone else.

    The problem with industrial basic research, and what differentiates it from government basic research, is the fact that its true product is insurance, not knowledge. If a researcher at the government-sponsored Lawrence Livermore Lab comes up with some particularly clever new way to kill millions of people, there is no doubt that his work will be exploited and that weapons using the technology will eventually be built. The simple rule about weapons is that if they can be built, they will be built. But basic researchers in industry find their work is at the mercy of the marketplace and their captains-of-industry bosses. If a researcher at General Motors comes up with a technology that will allow cars to be built for $100 each, GM executives will quickly move to bury the technology, no matter how good it is, because it threatens their current business, which is based on cars that cost thousands of dollars each to build. Consumers would revolt if it became known that GM was still charging high prices for cars that cost $100 each to build, so the better part of business valor is to stick with the old technology since it results in more profit dollars per car produced.

    In the business world, just because something can be built does not at all guarantee that it will be built, which explains why RCA took a look at the work of George Heilmeier, a young researcher working at the company’s research center in New Jersey, and quickly decided to stop work on Heilmeier’s invention, the liquid crystal display. RCA made this mid-1960s decision because LCDs might have threatened its then-profitable business of building cathode ray picture tubes. Twenty-five years later, of course, RCA is no longer a factor in the television market, and LCD displays — nearly all made in Japan — are everywhere.

    Most of the basic research in computer science has been done at universities under government contract, at AT&T Bell Labs in New Jersey and in Illinois, at IBM labs in the United States, Europe, and Japan, and at Xerox PARC in California. It’s PARC that we are interested in because of its bearing on the culture of the personal computer.

    Xerox PARC was started in 1970 when leaders of the world’s dominant maker of copying machines had a sinking feeling that paper was on its way out. If people started reading computer screens instead of paper, Xerox was in trouble, unless the company could devise a plan that would lead it to a dominant position in the paperless office envisioned for 1990. That plan was supposed to come from Xerox PARC, a group of very smart people working in buildings on Coyote Hill Road in the Stanford Industrial Park near Stanford University.

    The Xerox researchers were drawn together over the course of a few months from other corporations and from universities and then plunked down in the golden hills of California, far from any other Xerox facility. They had nothing at all to do with copiers, yet they worked for a copier company. If they came to have a feeling of solidarity, then, it was much more with each other than with the rest of Xerox. The researchers at PARC soon came to look down on the marketers at Xerox HQ, especially when they were asked questions like, “Why don’t you do all your programming in BASIC — it’s so much easier to learn”, which was like suggesting that Yehudi Menuhin switch to rhythm sticks.

    The researchers at PARC were iconoclastic, independent, and not even particularly secretive, since most of their ideas would not turn into products for decades. They became the celebrities of computer science and were even profiled in Rolling Stone.

    PARC was supposed to plot Xerox a course into the electronic office of the 1990s, and the heart of that office would be, as it always had been, the office worker. Like designers of typewriters and adding machines, the deep thinkers at Xerox PARC had to develop systems that would be useful to lightly trained people working in an office. This is what made Xerox different from every other computer company at that time.

    Some of what developed as the PARC view of future computing was based on earlier work by Doug Engelbart, who worked at the Stanford Research Institute in nearby Menlo Park. Engelbart was the first computer scientist to pay close attention to user interface — how users interact with a computer system. If computers could be made easier to use, Engelbart thought, they would be used by more people and with better results.

    Punch cards entered data into computers one card at a time. Each card carried a line of data up to 80 characters wide. The first terminals simply replaced the punch card reader with a new input device; users still submitted data, one 80-column line at a time, through a computer terminal. While the terminal screen might display as many as 25 lines at a time, only the bottom line was truly active and available for changes. Once the carriage return key was punched, those data were in the computer: no going back to change them later, at least not without telling the computer that you wanted to reedit line 32, please.
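    As an aside for modern readers, the line-at-a-time discipline is easy to model: every line is committed the moment it is entered, and the only way back is a command naming the line to re-edit. This toy editor, and all its names, are my own illustration, not any real line editor:

```python
class LineEditor:
    """Toy model of a line-at-a-time editor: lines are committed on
    entry, and changing one later requires naming it explicitly."""

    def __init__(self):
        self.lines = []

    def enter(self, text):
        # Like punching the carriage return: the line is committed at
        # once. Cards carried at most 80 characters, so truncate.
        self.lines.append(text[:80])

    def reedit(self, lineno, text):
        # The only way back: "reedit line 32, please" (1-indexed).
        self.lines[lineno - 1] = text[:80]

ed = LineEditor()
ed.enter("FIRST LINE OF DATA")
ed.enter("SECOMD LINE")       # typo committed, no going back...
ed.reedit(2, "SECOND LINE")   # ...except by naming the line outright
print(ed.lines)               # ['FIRST LINE OF DATA', 'SECOND LINE']
```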

    I once wrote an entire book using a line editor on an IBM mainframe, and I can tell you it was a pain.

    Engelbart figured that real people in real offices didn’t write letters or complete forms one line at a time, with no going back. They thought in terms of pages, rather than lines, and their pens and typewriters could be made to scroll back and forth and move vertically on the page, allowing access to any point. Engelbart wanted to bring that page metaphor to the computer by inventing a terminal that would allow users to edit anywhere on the screen. This type of terminal required some local intelligence, keeping the entire screen image in the terminal’s memory. This intelligence was also necessary to manage a screen that was much more flexible than its line-by-line predecessor; it comprised thousands of points that could be turned on or off.

    The new point-by-point screen technology, called bit mapping, also required a means for roaming around the screen. Engelbart used what he called a mouse, which was a device the size of a pack of cigarettes on wheels that could be rolled around on the table next to the terminal and was connected to the terminal by a wire. Moving the mouse caused the cursor on-screen to move too.
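    The two ideas in that paragraph, a screen that is nothing but a grid of on/off points and a mouse that moves a cursor by relative motion, can be sketched together. The dimensions and names here are illustrative, not the Alto's actual hardware interface:

```python
class BitmappedScreen:
    """A bit-mapped display: just a grid of points, any of which can
    be turned on or off, plus a cursor driven by mouse deltas."""

    def __init__(self, width=606, height=808):  # roughly Alto-sized
        self.pixels = [[0] * width for _ in range(height)]
        self.cursor = (0, 0)  # (x, y)

    def set_pixel(self, x, y, on=1):
        self.pixels[y][x] = on  # any point can be switched on or off

    def move_mouse(self, dx, dy):
        # Rolling the mouse moves the cursor by the same relative
        # amount, clamped to the edges of the screen.
        x, y = self.cursor
        w, h = len(self.pixels[0]), len(self.pixels)
        self.cursor = (min(max(x + dx, 0), w - 1), min(max(y + dy, 0), h - 1))

screen = BitmappedScreen()
screen.move_mouse(40, 25)
screen.set_pixel(*screen.cursor)
print(screen.cursor)  # (40, 25)
```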

    With Engelbart’s work as a start, the folks at PARC moved toward prototyping more advanced systems of networked computers that used mice, page editors, and bit-mapped screens to make computing easier and more powerful.

    During the 1970s, the Computer Science Laboratory (CSL) at Xerox PARC was the best place in the world for doing computer research. Researchers at PARC invented the first high-speed computer networks and the first laser printers, and they devised the first computers that could be called easy to use, with intuitive graphical displays. The Xerox Alto, which had built-in networking, a black-on-white bit-mapped screen, a mouse, and hard disk data storage and sat under the desk looking like R2D2, was the most sophisticated computer workstation of its time, because it was the only workstation of its time. Like the other PARC advances, the Alto was a wonder, but it wasn’t a product. Products would have taken longer to develop, with all their attendant questions about reliability, manufacturability, marketability, and profitability — questions that never once crossed a brilliant mind at PARC. Nobody was expected to buy computers built by Xerox PARC.

    There is a very good book about Xerox PARC called Fumbling the Future, which says that PARC researchers Butler Lampson and Chuck Thacker were inventing the first personal computer when they designed and built the Alto in 1972 and 1973 and that by choosing not to commercialize the Alto, Xerox gave up its chance to become the dominant player in the coming personal computer revolution. The book is good, but this conclusion is wrong. Just the parts to build an Alto in 1973 cost $10,000, which suggests that a retail Alto would have had to sell for at least $25,000 (1973 dollars, too) for Xerox to make money on it. When personal computers finally did come along a couple of years later, the price point that worked was around $3,000, so the Alto was way too expensive. It wasn’t a personal computer.

    And there was no compelling application on the Alto — no VisiCalc, no single function — that could drive a potential user out of the office, down the street, and into a Xerox showroom just to buy it. The idea of a spreadsheet never came to Xerox. Peter Deutsch wrote about what he called spiders — values (like 1989 revenues) that appeared in multiple documents, all linked together. Change a value in one place and the spider made sure that value was changed in all linked places. Spiders were like spreadsheets without the grid of columns and rows and without the clearly understood idea that the linked values were used to solve quantitative problems. Spiders weren’t VisiCalc.
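    Deutsch's spiders, as described here, amount to a single value linked into several documents at once: change it anywhere and every linked occurrence updates. A minimal sketch of the idea follows; the class and its methods are my invention, not PARC's code:

```python
class Spider:
    """A value (like '1989 revenues') linked into several documents.
    Updating it anywhere updates every linked occurrence."""

    def __init__(self, value):
        self.value = value
        self.links = []  # (document, key) pairs showing this value

    def link(self, document, key):
        self.links.append((document, key))
        document[key] = self.value

    def set(self, value):
        self.value = value
        for document, key in self.links:
            document[key] = value  # keep every linked copy in sync

annual_report, press_release = {}, {}
revenues = Spider(4_100_000)
revenues.link(annual_report, "1989 revenues")
revenues.link(press_release, "1989 revenues")
revenues.set(4_600_000)  # change the value in one place...
print(press_release["1989 revenues"])  # ...and it changes everywhere
```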

    If Xerox made a mistake in its handling of the Alto, it was in almost choosing to sell it. The techies at PARC knew that the Alto was the best workstation around, but they didn’t think about the pricing and application issues. When Xerox toyed with the idea of selling the Alto, that consideration instantly erased any doubts in the minds of its developers that theirs was a commercial system. David Kearns, the president of Xerox, kept coming around, nodding his head, and being supportive but somehow never wrote the all-important check.

    Xerox’s on-again, off-again handling of the Alto alienated the technical staff at PARC, who never really understood why their system was not marketed. To them, it seemed as if Kearns and Xerox, like the owners of Sutter’s Mill, had found gold in the stream but decided to build condos on the spot instead of mining because it was never meant to be a gold mine.

    There was a true sense of the academic — the amateur — in Ethernet too. PARC’s technology for networking all its computers together was developed in 1973 by a team led by Bob Metcalfe. Metcalfe’s group was looking for a way to speed up the link between computers and laser printers, both of which had become so fast that the major factor slowing down printing was, in fact, the wire between the two machines rather than anything having to do with either the computer or the printer. The image of the page was created in the memory of the computer and then had to be transmitted bit by bit to the printer. At 600 dots-per-inch resolution, this meant sending more than 33 million bits across the wire for each page. The computer could resolve the page in memory in 1 second and the printer could print the page in 2 seconds, but sending the data over what was then considered to be a high-speed serial link took just under 15 minutes. If laser printers were going to be successful in the office, a faster connection would have to be invented.
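    The numbers in that paragraph can be checked with a few lines of arithmetic. The serial-link speed below is an assumption on my part, since the text says only "high-speed serial link":

```python
# Rough check of the printing-bandwidth arithmetic in the text.
DPI = 600
PAGE_BITS = (8.5 * DPI) * (11 * DPI)  # one bit per dot, letter page
print(f"{PAGE_BITS / 1e6:.1f} million bits")  # ~33.7, "more than 33"

SERIAL_BPS = 38_400       # assumed serial-link rate (not in the text)
ETHERNET_BPS = 2_670_000  # the original 2.67 Mbps Ethernet

print(f"serial:   {PAGE_BITS / SERIAL_BPS / 60:.1f} minutes")  # ~14.6
print(f"ethernet: {PAGE_BITS / ETHERNET_BPS:.1f} seconds")     # ~12.6
```

At the assumed serial speed the "just under 15 minutes" and "12 seconds" figures both come out about right.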

    PARC’s printers were computers in their own right that talked back and forth with the computers they were attached to, and this two-way conversation meant that data could collide if both systems tried to talk at once. Place a dozen or more computers and printers on the same wire, and the risk of collisions was even greater. In the absence of a truly great solution to the collision problem, Metcalfe came up with one that was at least truly good and time honored: he copied the telephone party line. Good neighbors listen on their party line first, before placing a call, and that’s what Ethernet devices do too — listen, and if another transmission is heard, they wait a random time interval before trying again. Able to transmit data at 2.67 million bits per second across a coaxial cable, Ethernet was a technical triumph, cutting the time to transmit that 600 dpi page from 15 minutes down to 12 seconds.
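    The party-line rule (listen first, and back off for a random interval when the wire is busy) is the essence of what later became known as CSMA. Here is a toy simulation of the rule, with illustrative timings and names rather than anything resembling Xerox's implementation:

```python
import random

def send(wire_busy, rng, max_tries=10):
    """Party-line rule: listen before talking. If the wire is busy,
    draw a random backoff interval and listen again. Returns the
    attempt number on which the transmission went out."""
    for attempt in range(1, max_tries + 1):
        if not wire_busy():   # listen first
            return attempt    # wire is free: transmit now
        # Busy: wait a random interval before retrying. In hardware
        # this is a real delay; here we just draw the number.
        rng.uniform(0, 2 ** attempt)
    raise RuntimeError("wire never went quiet")

# A wire that is busy for the first two listens, then free:
states = iter([True, True, False])
print(send(lambda: next(states), random.Random(0)))  # 3
```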

    At 2.67 megabits per second (mbps), Ethernet was a hell of a product, for both connecting computers to printers and, as it turned out, connecting computers to other computers. Every Alto came with Ethernet capability, which meant that each computer had an individual address or name on the network. Each user named his own Alto. John Ellenby, who was in charge of building the Altos, named his machine Gzunda “because it gzunda the desk”.

    The 2.67 mbps Ethernet technology was robust and relatively simple. But since PARC wasn’t supposed to be interested in doing products at all but was devoted instead to expanding the technical envelope, the decision was made to scale Ethernet up to 10 mbps over the same wire with the idea that this would allow networked computers to split tasks and compute in parallel.

    Metcalfe had done some calculations that suggested the marketplace would need only 1 mbps through 1990 and 10 mbps through the year 2000, so it was decided to aim straight for the millennium and ignore the fact that 2.67 mbps Ethernet would, by these calculations, have a useful product life span of approximately twenty years. Unfortunately, 10 mbps Ethernet was a much more complex technology — so much more complex that it literally turned what might have been a product back into a technology exercise. Saved from its brush with commercialism, it would be another six years before 10 mbps Ethernet became a viable product, and even then it wouldn’t be under the Xerox label.

    Beyond the Alto, the laser printer, and Ethernet, what Xerox PARC contributed to the personal computer industry was a way of working — Bob Taylor’s way of working.

    Taylor was a psychologist from Texas who in the early 1960s got interested in what people could and ought to do with computers. He wasn’t a computer scientist but a visionary who came to see his role as one of guiding the real computer scientists in their work. Taylor began this task at NASA and then shifted a couple of years later to working at the Department of Defense’s Advanced Research Projects Agency (ARPA). ARPA was a brainchild program of the Kennedy years, intended to plunk money into selected research areas without the formality associated with most other federal funding. The ARPA funders, including Taylor, were supposed to have some idea in what direction technology ought to be pushed to stay ahead of the Soviet Union, and they were expected to do that pushing with ARPA research dollars. By 1965, 33-year-old Bob Taylor was in control of the world’s largest governmental budget for advanced computer research.

    At ARPA, Taylor funded fifteen to twenty projects at a time at companies and universities throughout the United States. He brought the principal researchers of these projects together in regular conferences where they could share information. He funded development of the ARPAnet, the first nationwide computer communications network, primarily so these same researchers could stay in constant touch with each other. Taylor made it his job to do whatever it took to find the best people doing the best work and help them to do more.

    When Xerox came calling in 1970, Taylor was already out of the government following an ugly experience reworking U.S. military computer systems in Saigon during the Vietnam War. For the first time, Taylor had been sent to solve a real-world computing problem, and reality didn’t sit well with him. Better to get back to the world of ideas, where all that was corrupted were the data, and there was no such thing as a body count.

    Taylor held a position at the University of Utah when Xerox asked him to work as a consultant, using his contacts to help staff what was about to become the Computer Science Laboratory (CSL) at PARC. Since he wasn’t a researcher himself, Taylor wasn’t considered qualified to run the lab, though he eventually weaseled into that job too.

    Alan Kay, jazz musician, computer visionary, and Taylor’s first hire at PARC, liked to say that of the top one hundred computer researchers in the world, fifty-eight of them worked at PARC. And sometimes he said that seventy-six of the top one hundred worked at PARC. The truth was that Taylor’s lab never had more than fifty researchers, so both numbers were inflated, but it was also true that for a time under Taylor, CSL certainly worked as though there were many more than fifty researchers. In less than three years from its founding in 1970, CSL researchers built their own time-sharing computer, built the Alto, and invented both the laser printer and Ethernet.

    To accomplish so much so fast, Taylor created a flat organizational structure; everyone who worked at CSL, from scientists to secretaries, reported directly to Bob Taylor. There were no middle managers. Taylor knew his limits, though, and those limits said that he had the personal capacity to manage forty to fifty researchers and twenty to thirty support staff. Changing the world with that few people required that they all be the best at what they did, so Taylor became an elitist, hiring only the best people he could find and subjecting potential new hires to rigorous examination by their peers, designed to “test the quality of their nervous systems.” Every new hire was interviewed by everyone else at CSL. Would-be researchers had to appear in a forum where they were asked to explain and defend their previous work. There were no junior research people. Nobody was wooed to work at CSL; they were challenged. The meek did not survive.

    Newly hired researchers typically worked on a couple of projects with different groups within CSL. Nobody worked alone. Taylor was always cross-fertilizing, shifting people from group to group to get the best mix and make the most progress. Like his earlier ARPA conferences, Taylor chaired meetings within CSL where researchers would present and defend their work. These sessions came to be called Dealer Meetings, because they took place in a special room lined with blackboards, where the presenter stood like a blackjack dealer in the center of a ring of bean-bag chairs, each occupied by a CSL genius taking potshots at this week’s topic. And there was Bob Taylor, too, looking like a high school science teacher and keeping overall control of the process, though without seeming to do so.

    Let’s not underestimate Bob Taylor’s accomplishment in just getting these people to communicate on a regular basis. Computer people love to talk about their work — but only their work. A Dealer Meeting not under the influence of Bob Taylor would be something like this:

    Nerd A (the dealer): “I’m working on this pattern recognition problem, which I see as an important precursor to teaching computers how to read printed text”.

    Nerd B (in the beanbag chair): “That’s okay, I guess, but I’m working on algorithms for compressing data. Just last night I figured out how to … ”

    See? Without Taylor it would have been chaos. In the Dealer Meetings, as in the overall intellectual work of CSL, Bob Taylor’s function was as a central switching station, monitoring the flow of ideas and work and keeping both going as smoothly as possible. And although he wasn’t a computer scientist and couldn’t actually do the work himself, Taylor’s intermediary role made him so indispensable that it was always clear who worked for whom. Taylor was the boss. They called it “Taylor’s lab.”

    While Bob Taylor set the general direction of research at CSL, the ideas all came from his technical staff. Coming up with ideas and then turning them into technologies was all these people had to do. They had no other responsibilities. While they were following their computer dreams, Taylor took care of everything else: handling budgets, dealing with Xerox headquarters, and generally keeping the whole enterprise on track. And his charges didn’t always make Taylor’s job easy.

    Right from the start, for example, they needed a DEC PDP-10 time-sharing system, because that was what Engelbart had at SRI, and PDP-10s were also required to run the ARPAnet software. But Xerox had its own struggling minicomputer operation, Scientific Data Systems, which was run by Max Palevsky down in El Segundo. Rather than buy a DEC computer, why not buy one of Max’s Sigma computers, which competed directly with the PDP-10? Because software is vastly more complex than hardware, that’s why. You could build your own copy of a PDP-10 in less time than it would take to modify the software to run on Xerox’s machine! And so they did. CSL’s first job on the way toward the office of the future was to clone the PDP-10. They built the Multi-Access Xerox Computer (MAXC). The C was silent, just to make sure that Max Palevsky knew the computer was named in his honor.

    The way to create knowledge is to start with a strong vision and then ruthlessly abandon parts of that vision to uncover some greater truth. Time sharing was part of the original vision at CSL because it had been part of Engelbart’s vision, but having gone to the trouble of building its own time-sharing system, the researchers at PARC soon realized that time sharing itself was part of the problem. MAXC was thrown aside for networks of smaller computers that communicated with each other — the Alto.

    Taylor perfected the ideal environment for basic computer research, a setting so near to perfect that it enabled four dozen people to invent much of the computer technology we have today, led not by another computer scientist but by an exceptional administrator with vision.

    I’m writing this in 1991, when Bill Gates of Microsoft is traveling the world preaching a new religion he calls Information At Your Fingertips. The idea is that PC users will be able to ask their machines for information, and, if it isn’t available locally, the PC will figure out how and where to find it. No need for Joe User to know where or how the information makes its way to his screen. That stuff can be left up to the PC and to the many other systems with which it talks over a network. Gates is making a big deal of this technology, which he presents pretty much as his idea. But Information At Your Fingertips was invented at Xerox PARC in 1973. Like so many PARC inventions, though, it’s only now that we have the technology to implement it at a price normal mortals can afford.

    In its total dedication to the pursuit of knowledge, CSL was like a university, except that the pay and research budgets were higher than those usually found in universities and there was no teaching requirement. There was total dedication to doing the best work with the best people — a purism that bordered on arrogance, though Taylor preferred to see it more as a relentless search for excellence.

    What sounded to the rest of the world like PARC arrogance was really the fallout of the lab’s intense and introverted intellectual environment. Taylor’s geniuses, used to dealing with each other and not particularly sensitive to the needs of mere mortals, thought that the quality of their ideas was self-evident. They didn’t see the need to explain — to translate the idea into the world of the other person. Beyond pissing off Miss Manners, the fatal flaw in this PARC attitude was the failure to understand that there were other attributes to be considered when examining every idea. While idea A may be, in fact, better than idea B, A is not always cheaper, or more timely, or even possible — factors that had little relevance in the think tank but terrific relevance in the marketplace.

    In time the dream at CSL and Xerox PARC began to fade, not because Taylor’s geniuses had not done good work but because Xerox chose not to do much with the work they had done. Remember this is industrial basic research — that is, insurance. Sure, PARC invented the laser printer and the computer network and perfected the graphical user interface and something that came to be known as what-you-see-is-what-you-get computing on a large computer screen, but the captains of industry at Xerox headquarters in Stamford, Connecticut, were making too much money the old way — by making copiers — to remake Xerox into a computer company. They took a couple of halfhearted stabs, introducing systems like the Xerox Star, but generally did little to promote PARC technology. From a business standpoint, Xerox probably did the right thing, but in the long term, failing to develop PARC technology alienated the PARC geniuses. In his 1921 book The Engineers and the Price System, economist Thorstein Veblen pointed out that in high-tech businesses, the true value of a company is found not in its physical assets but in the minds of its scientists and engineers. No factory could continue to operate if the knowledge of how to design its products and fix its tools of production was lost. Veblen suggested that the engineers simply organize and refuse to work until they were given control of industry. By the 1970s, though, the value of computer companies was so highly concentrated in the programmers and engineers that there was not much to demand control of. It was easier for disgruntled engineers just to walk, taking with them in their minds 70 or 80 percent of what they needed to start a new company. Just add money.

    From inside their ivory tower, Taylor’s geniuses saw less able engineers and scientists starting companies of their own and getting rich. As it became clear that Xerox was going to do little or nothing with their technology, some of the bolder CSL veterans began to hit the road as entrepreneurs in their own right, founding several of the most important personal computer hardware and software companies of the 1980s. They took with them Xerox technology — its look and feel too. And they took Bob Taylor’s model for running a successful high-tech enterprise — a model that turned out not to be so perfect after all.

    Reprinted with permission

  • How and why LinkedIn is becoming an engineering powerhouse

    Most LinkedIn users know “People You May Know” as one of that site’s flagship features — an omnipresent reminder of other LinkedIn users with whom you probably want to connect. Keeping it up to date and accurate requires some heady data science and impressive engineering to keep data constantly flowing between the various LinkedIn applications. When Jay Kreps started there five years ago, this wasn’t exactly the case.

    “I was here essentially before we had any infrastructure,” Kreps, now principal staff engineer, told me during a recent visit to LinkedIn’s Mountain View, Calif., campus. He actually came to LinkedIn to do data science, thinking the company would have some of the best data around, but it turned out the company had an infrastructure problem that needed his attention instead.

    How big? The version of People You May Know in place then was running on a single Oracle database instance — a few scripts and heuristics provided intelligence — and it took six weeks to update (longer if the update job crashed and had to restart). And that’s only if it worked. At one point, Kreps said, the system wasn’t working for six months.

    When the scale of data began to overload the server, the answer wasn’t to add more nodes but to cut out some of the matching heuristics that required too much compute power.

    So, instead of writing algorithms to make People You May Know more accurate, he worked on getting LinkedIn’s Hadoop infrastructure in place and built a distributed database called Voldemort.

    Since then, he’s built Azkaban, an open source scheduler for batch processes such as Hadoop jobs, and Kafka, another open source tool that Kreps called “the big data equivalent of a message broker.” At a high level, Kafka is responsible for managing the company’s real-time data and getting those hundreds of feeds to the apps that subscribe to them with minimal latency.
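    Kreps's description of Kafka, "the big data equivalent of a message broker," boils down to producers publishing to named feeds while subscribing apps receive each new message. The sketch below is an in-memory toy of that idea, not Kafka's actual API:

```python
from collections import defaultdict

class TinyBroker:
    """In-memory caricature of a feed broker: producers append to
    named feeds, and every subscriber callback sees each new message."""

    def __init__(self):
        self.feeds = defaultdict(list)        # feed name -> messages
        self.subscribers = defaultdict(list)  # feed name -> callbacks

    def subscribe(self, feed, callback):
        self.subscribers[feed].append(callback)

    def publish(self, feed, message):
        self.feeds[feed].append(message)      # retained in the feed
        for callback in self.subscribers[feed]:
            callback(message)                 # pushed to each app

broker = TinyBroker()
seen = []
broker.subscribe("profile-views", seen.append)
broker.publish("profile-views", {"viewer": 42, "profile": 7})
print(seen)  # [{'viewer': 42, 'profile': 7}]
```

The real system adds persistence, partitioning and consumer offsets; the pattern of named feeds with multiple independent subscribers is the part this toy keeps.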

    Espresso, anyone?

    But Kreps’s work is just a fraction of the new data infrastructure that LinkedIn has built since he came on board. It’s all part of a mission to create a data environment at LinkedIn that’s as innovative as that of any other web company around, and that means the company’s applications developers and data scientists can keep building whatever products they dream up.

    Bhaskar Ghosh, LinkedIn’s senior director of data infrastructure engineering — who’ll be part of our guru panel at Structure: Data on March 20-21 — can’t help but find his way to the whiteboard when he gets to discussing what his team has built. It’s a three-phase data architecture comprised of online, offline and nearline systems, each designed for specific workloads. The online systems handle users’ real-time interactions; offline systems, primarily Hadoop and a Teradata warehouse, handle batch processing and analytic workloads; and nearline systems handle features such as People You May Know, search and the LinkedIn social graph, which update constantly but require slightly less than online latency.


    Ghosh’s diagram of LinkedIn’s data architecture

    One of the most-important things the company has built is a new database system called Espresso. Unlike Voldemort, which is an eventually consistent key-value store modeled after Amazon’s Dynamo database and used to serve certain data at high speeds, Espresso is a transactionally consistent document store that’s going to replace legacy Oracle databases across the company’s web operations. It was originally designed to provide a usability boost for LinkedIn’s InMail messaging service, and the company plans to open source Espresso later this year.

    According to Director of Engineering Bob Schulman, Espresso came to be “because we had a problem that had to do with scaling and agility” in the mailbox feature. It needs to store lots of data and keep consistent with users’ activity. It also needs a functional search engine so users — even those with lots of messages — can find what they need in a hurry.

    With the previous data layer intact, he explained, developers had to solve scalability and reliability issues in the application itself.

    However, Principal Software Architect Shirshanka Das noted, “trying to scale [your] way out of a problem” with code isn’t necessarily a long-term strategy. “Those things tend to burn out teams and people very quickly,” he said, “and you’re never sure when you’re going to meet your next cliff.”


    L to R: Kreps, Das, Ghosh and Schulman

    Schulman and Das have also worked together on technologies such as Helix — an open-source cluster management framework for distributed systems — and Databus. The latter, which has been around since 2007 and which the company just open sourced, is a tool that pushes changes from what Das calls “source of truth” data environments like Espresso to downstream environments such as Hadoop, so that everyone can be sure they’re working with the freshest data.
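    The Databus pattern as described (capture every change in the source-of-truth store, and let downstream systems pull the changes in order from their own offsets) can be sketched as a simple change log. This is a toy model of the pattern, not Databus's real interface:

```python
class ChangeLog:
    """Toy change-capture stream: the source of truth appends every
    write; each downstream system pulls from its own offset, so it
    sees the changes exactly once and in order."""

    def __init__(self):
        self.changes = []

    def record(self, key, value):
        self.changes.append((key, value))  # every write is captured

    def pull(self, offset):
        # Return everything since `offset`, plus the new offset.
        return self.changes[offset:], len(self.changes)

log = ChangeLog()
log.record("member:7", {"title": "Engineer"})
log.record("member:7", {"title": "Principal Engineer"})

offset = 0  # a downstream consumer (say, a Hadoop loader) starts here
batch, offset = log.pull(offset)
print(len(batch), offset)  # 2 2
batch, offset = log.pull(offset)
print(len(batch))          # 0, already up to date
```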

    In an agile environment, Schulman said, it’s important to be able to change something without breaking something else. The alternative is to bring stuff down to make changes, he added, and “it’s never a good time to stop the world.”


    Next up, Hadoop

    Thus far, LinkedIn’s biggest push has been in improving its nearline and online systems (“Basically, we’ve hit the ball out of the park here,” Ghosh said), so its next big push is offline — Hadoop, in particular. The company already uses Hadoop for the usual gamut of workloads — ETL, model-building, exploratory analytics and pre-computing data for nearline applications — and Ghosh wants to take it even further.

    He laid out a multipart vision, most of which centers around tight integration between the company’s Hadoop clusters and relational database systems. Among the goals: better ETL frameworks, ad-hoc queries, alternative storage formats and an integrated metadata framework — which Ghosh calls the holy grail — that will make it easier for various analytic systems to use each other’s data. He said LinkedIn has something half-built that should be finished this year.

    “[SQL on Hadoop] is going to take two years to work,” he explained. “What do we do in the meanwhile? We cannot throw this out.”

    Actually, the whole of LinkedIn’s data engineering efforts right now put a focus on building services that can work together easily, Das said. The Espresso API, for example, allows developers to connect a columnar storage engine and do some limited online analytics right from within the transactional database.


    With Hadoop plans laid out.

    Good infrastructure makes for happy data scientists

    Yael Garten, a senior data scientist at LinkedIn, said better infrastructure makes her job a lot easier. Like Kreps, she was drawn to LinkedIn (from her previous career doing bioinformatics research at Stanford) because the company has so much interesting data to work with, only she was fortunate enough to miss the early days of spotty infrastructure that couldn’t handle 10 million users, much less today’s more than 200 million users. To date, she said, she hasn’t come across a problem she couldn’t solve because the infrastructure couldn’t handle the scale.

    The data science team embeds itself with the product team and they work together to either prove out product managers’ hunches or build products around data scientists’ findings. In 2013, Garten said, developers should expect infrastructure that lets them prototype applications and test ideas in near real time. And even business managers need to see analytics as close to real time as possible so they can monitor how new applications are performing.

    And infrastructure isn’t just about making things faster, she noted: “Some things wouldn’t be possible.” She wouldn’t go into detail about what this magic piece of infrastructure is, but I’ll assume it’s the company’s top-secret distributed graph system. Ghosh was happy to go into detail about a lot of things, but not that one.

    A virtuous hamster wheel

    Neither Ghosh nor Kreps sees LinkedIn — or any leading web company, for that matter — quitting the innovation game any time soon. Partially, this is a business decision. Ghosh, for example, cites the positive impact on company culture and talent recruitment, while Kreps points out the difficult total-cost-of-ownership math when comparing paying for software licenses or hiring open source committers versus just building something internally.

    Kreps acknowledged that the constant cycle of building new systems is “kind of a hamster wheel,” but there’s always an opportunity to do new stuff and build products with their own unique needs. Initially, for example, he envisioned two target use cases for Hadoop, but now the company has about 300 individual workloads; it went from two real-time data feeds to 650.

    “But companies are doing this for a reason,” he said. “There is some problem this solves.”

    Ghosh, well, he shot down the idea of relying too heavily on commercial technologies or existing open source projects almost as soon as he suggested it was a possibility. “We think very carefully about where we should do rocket science,” he told me, before quickly adding, “[but] you don’t want to become a systems integration shop.”

    In fact, he said, there will be a lot more development and a lot more open source activity from LinkedIn this year: “[I’m already] thinking about the next two or three big hammers.”

    Related research and analysis from GigaOM Pro:
    Subscriber content. Sign up for a free trial.

  • Low-end Huawei Prism II with Jelly Bean coming to T-Mobile soon

    Images and specs of the T-Mobile Prism II (Huawei U8686) have been leaked online via @evleaks. A follow-up to the original Huawei Prism, the Prism II will feature a similar form factor to its predecessor and will come with Android 4.1.1 Jelly Bean, a 1.0 GHz CPU and an HVGA (480 x 320) display. Not much else is known yet about this device, but check back with TalkAndroid as we learn more.

    Sources: Twitter & Twitter

    Come comment on this article: Low-end Huawei Prism II with Jelly Bean coming to T-Mobile soon

  • Soul to sole: Eye surgeon Anthony Vipin Das has developed shoes that see for the blind

    Video still from Le Chal, courtesy Anthony Vipin Das.

    A haunting black-and-white video screened during the TED Fellows talks depicts people speaking into a device and then walking — at first taking halting steps, then more confident strides. As the video unfolds, the camera zooms in on the faces of the walkers — revealing that they are blind.

    With his team, TED Senior Fellow Anthony Vipin Das, an eye surgeon, has been developing haptic shoes that use vibration and GPS technology to guide the blind. This innovation — which could radically change the lives of the vision-impaired — has drawn the interest of the United States Department of Defense, which has recently shortlisted the project for a $2 million research grant. Anthony tells us the story behind the shoe.

    Tell us about the haptic shoe.

    The shoe is called Le Chal, which means “take me there” in Hindi. My team, Anirudh Sharma and Krispian Lawrence and I, are working on a haptic shoe that uses GPS to guide the blind. The most difficult problems that the blind usually face when they navigate are orientation and direction, as well as obstacle detection. The shoe is in its initial phase of testing: We’ve crafted the technology down to an insole that can fit into any shoe and is not limited by the shape of the footwear, and it vibrates to guide the user. It’s so intuitive that if I tap on your right shoulder, you will turn to your right; if I tap on your left shoulder, you turn to your left.

    The shoe basically guides the user on the foot on which he’s supposed to take a turn. This is for direction. The shoe also keeps vibrating if you’re not oriented in the direction of your initial path, and will stop vibrating when you’re headed in the right direction. It basically brings the wearer back on track as we check orientation at regular intervals. Currently I’m conducting the first clinical study at LV Prasad Eye Institute in Hyderabad, India. It’s very encouraging to see the kind of response we’ve had from wearers. They were so moved because it was probably the very first time that they had the sense of independence to move confidently — that the shoe was talking to them, telling them where to go and what to do.
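    The feedback logic Das describes — vibrate on the side of the turn, keep vibrating while the wearer is off course, stop once they are headed the right way — can be sketched roughly as follows. This is purely illustrative; the function and parameter names are hypothetical and not from the Le Chal project.

    ```python
    def turn_signal(current_bearing_deg, target_bearing_deg, tolerance_deg=15):
        """Decide which insole should vibrate, mirroring the tap-on-the-shoulder
        cue described above. Returns 'left', 'right', or None when the wearer
        is already oriented along the path (so vibration stops)."""
        # Signed angular difference folded into (-180, 180]; positive = turn right.
        diff = (target_bearing_deg - current_bearing_deg + 180) % 360 - 180
        if abs(diff) <= tolerance_deg:
            return None  # on track: stop vibrating
        return "right" if diff > 0 else "left"
    ```

    Checking this at regular intervals against GPS fixes, as the interview describes, would continually nudge the wearer back on track.
    
    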

    How do you tell the shoe where you want to go?

    It uses GPS tracking, and we’ve put in smart taps: gestures that the shoe can learn. You tap twice, and it’ll take you home. If you lift your heel for five seconds, the shoe might understand, “This is one of my favorite locations.” And not just that. If a shoe detects a fall, it can automatically call an emergency number. Moving forward, we want to try to decrease the dependency on the phone and the network to a great extent. We hope to crowdsource maps and build up enough data to store on the shoe itself.
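    The “smart taps” described above amount to a mapping from recognized foot gestures to actions. A minimal sketch of that dispatch, with entirely hypothetical gesture and action names (the real system’s gesture vocabulary is not public):

    ```python
    # Illustrative gesture-to-action table based on the examples in the interview.
    GESTURE_ACTIONS = {
        "double_tap": "navigate_home",            # tap twice -> take me home
        "heel_lift_5s": "save_favorite_location", # lift heel 5 s -> mark favorite
        "fall_detected": "call_emergency_number", # fall -> automatic emergency call
    }

    def handle_gesture(gesture):
        """Return the action for a recognized gesture, or None if unrecognized."""
        return GESTURE_ACTIONS.get(gesture)
    ```
    
    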

    The second phase we are working on is obstacle detection. India has such varied terrain. The shoe can detect immediate obstacles like stones, potholes and steps. It’s not a replacement for the cane, but an added benefit that gives a visually impaired person a sense of direction and orientation.

    Are you still in the development stage?

    The insole is already done. We are currently testing it. I’m using simple and complex paths — simple paths like a square, rectangle, triangle and circle, while complex paths include a zigzag or a random path. Then we are going to step it up with navigation into a neighborhood. From there we’ll develop navigation to distant locations, including the use of public transportation. It will be a stepwise study that we’ll finish by the middle of this year, then move on to manufacturing the product.

    You’re an eye doctor. How did you get involved in this?

    I’m an eye surgeon who loves to step out of my box and try to see others who are working in similar areas of technology that are helpful for my patients. So Anirudh Sharma and I, we’re on the same TR35 list of India in 2012. I said, “Dude, I think we can be doing stuff with the shoe and my patients. Let’s see how we can refine it.” There was already an initial prototype when he presented last year at EmTech in Bangalore. Anirudh teamed up with one of his friends, Krispian Lawrence of Ducere Technologies in Hyderabad, who is leading the development and logistics to get this into the market. We just formed a really cool team, and started working on the shoe, started testing it on our patients and refining the model further and further. Finally we’ve come to a stage where my patients are walking and building a bond with the shoe.

    Are these patients comfortable with the shoe?

    Yes, it’s totally unobtrusive. And more importantly, we are working on developing the first vibration language in the world for the haptic shoe. We’re looking at standardizing the vibration, like Braille, which is multilingual. But even more crucial than the technology is that the shoe is basically talking to the walker. How can they trust the shoe? That’s an angle we are looking at. Because at the end of the day, it’s the shoe that’s guiding you to the destination. We’re trying to build that bond between the walker and the sole.

    Building a bond with the sole. That’s good. I’m going to use that.

  • LG hits the 10 million mark for LTE phones sold

    The latest announcement from LG is that they’ve officially sold 10 million LTE-enabled smartphones globally. While they’re still trailing some of the biggest competitors in the market by a large chunk, it’s still a very impressive number, especially considering LTE isn’t as commonplace around the world as it is in countries like the US. Besides that, the number doesn’t include the Nexus 4, which has no (official) LTE support but is still one of the best-selling phones on the market right now.

    We’ve seen LG make a push for more LTE phones lately, and I think it’s going to pay off for them. 10 million is a good stepping stone, and LG has plenty of room to go up. When the wave of next-generation devices hits the shelves in the coming months, expect LG to post some impressive numbers.

    source: Newswire

    Come comment on this article: LG hits the 10 million mark for LTE phones sold

  • Browsing the web on an iPad stinks–and Apple likes it that way

    When iPads were first introduced in 2010, an Apple press release promised that the “iPad’s revolutionary Multi-Touch interface makes surfing the web an entirely new experience, dramatically more interactive and intimate than on a computer.” The implication was that the web via the tablet would be unrecognizable and vastly superior: hoverboarding compared with surfing on my laptop and doggie paddling on my phone.

    Yet, here it is three years on, and we’re still waiting for that “interactive and intimate” browsing experience (and hoverboards, for that matter).

    A recent study conducted by Onswipe revealed that iPads account for a whopping 98.1 percent of tablet traffic on websites. Despite this, the actual experience of surfing the web on an iPad is underwhelming at best and infuriating at worst. Simply put, today’s state-of-the-art tablet browsers, especially Safari, don’t do the Internet, the user, or the iPad justice. Apple wasn’t totally wrong: The iPad has proven itself to be a revolutionary device that absolutely has the potential to offer a transformative web-browsing experience. It just hasn’t yet. Which means there’s a gap in the market for an intuitive, immersive, innovative iPad browser. Whoever develops it is going to win big.

    Safari is deliberately hobbled

    As more and more of the services we use on a daily basis have migrated to the cloud, the web browser has become the computer’s most essential app. And when we surf the web on a computer, we encounter few obstacles. Though we may have to scale the occasional paywall or sit through an obligatory five seconds of an ad before accessing content, the navigational experience of a computer user is fluid and frictionless — as anyone who’s gone down the rabbit hole researching alpaca breeds or underrated Val Kilmer films at 3 a.m. can attest.

    Surfing the web is far less pleasurable on an iPad. Visiting a site frequently presents one with a pop-up and a dilemma: Download the app, or endure the diminished experience of a website designed for another device. Safari is essentially a limited version of its desktop sibling – and apps almost always provide a better experience. (Or, as Firefox UX Lead Alex Limi has summed it up, it’s “kind of sucky.”)

    Of course, this is sort of the point. It’s in Apple’s, or any tablet maker’s, best interest to make using (read: buying) apps preferable to visiting websites. Safari is designed to make using web-based apps on an iPad inconvenient, if not impossible. In response, most companies focus their mobile development resources on creating native apps rather than optimizing their content for tablet browsers. The result is a browsing experience full of flow-breakers. In short, on a computer the browsing experience is limitless; on a tablet, it’s filled with blind alleys and false doors.

    Why web browsing still matters

    There is an impulse among some to assume that the rise of apps – or, more sensationally, the death of the website – will eventually render browsers, or at least mobile ones, obsolete. While it’s true that more and more content is consumed through apps, and that personalization has shifted our approach to content from searching to getting, the volume of Google searches has steadily increased – to an astounding one trillion-plus each year.

    But even if we accept that the importance of mobile websites is on the wane, there’s no reason for mobile browsers to beat them to an early grave. There is plenty of room for resurrection, but only if we throw out desktop-based notions of what a browser looks and feels like. Freed of all the tasks and responsibilities that other apps accomplish, tablet browsers should offer an absorbing, engaging, innovative experience. Further, they should evolve the idea of what a browser is and can be on a tablet. Take GarageBand, for example: The iPad version is infinitely more interactive and tactile than the desktop version.

    I’ve mostly been picking on Safari. As the native browser for a tablet that accounts for 98.1 percent of tablet traffic, its influence is enormous. However, that’s not to say there aren’t more innovative browsers taking steps in the right direction. Dolphin, for instance, allows you to create your own gestures for various functions. And though there are any number of other browsers contending in the space, as of yet none has emerged as the standard-setter or must-have. Mozilla’s forthcoming iPad browser, Junior, which completely throws out desktop-inspired design and focuses on simplicity, could be a contender, but for now we have to wait and see.

    What we’ve lost

    As it currently stands, the shoehorning of hobbled desktop browsers onto tablets is forcing us to move from a browser-based to an app-based navigation experience. This is not necessarily a negative development, but we must carefully consider what we lose as our web experience becomes siloed – or, alternately, consider in our app design how we can preserve and better enable the type of surfing serendipity that made web browsing valuable in the first place.

    The web as we have known it was designed to facilitate the browsing experience – to be a boundlessly linked rhizomatic structure of hypertext. But we have quite willingly begun to fence it off as we have shifted our experience to the iPad and individual apps. Even worse, though, is that most of the apps and services that have attempted to fill the browsing void have only further constricted the experience of the web via the tablet.

    Under the claim of “personalization” and making the browsing and discovery experience more individually valuable and meaningful, they really provide little more than constricting customization confined to the picks of an editor or your social graph. Most of it is expected or retreaded. What is lost is the magic of blazing a trail from one page to the next, the anticipation of revealing the unknown that lurks behind the next link. Personalization and open-ended web discovery shouldn’t be an either/or proposition, and neither should browsing on the tablet.

    While we will continue to make strides in personalizing the web, and hopefully even enhancing the web experience on tablets, I’m also looking forward to a browser that lets me fall down an unexpected rabbit hole once in a while. As long as there are alpacas and Val Kilmer movies, there will be surfers. It’s up to developers to provide the hoverboards.

    Hank Nothhaft is the co-founder and chief product officer of Trapit, a personalized content discovery platform.

    Have an idea for a post you’d like to contribute to GigaOm? Click here for our guidelines and contact info.
