Blog

  • PE Investors Temper Returns Expectations

    (Reuters) – Private equity investors hoping for outsized profits are facing an awkward truth – investment returns have shrunk and are unlikely to go back to their peak levels.

    At private equity’s annual global gathering in Berlin this week, investors acknowledged that the asset class looks unlikely to revert to its bumper past payouts, weighed down by modest global economic prospects as well as an influx of funds into the sector creating increased competition for deals.

    “It’s just too hard to see, with the level of capital out there, the baseline rates and the lack of growth globally, that you will be able to generate the kind of returns that were available in points of time in the past,” Howard Searing, director of private markets at pension fund manager Dupont Capital Management, told the SuperReturn conference in Berlin.

    Granted, prospective returns from the corporate buyouts that are private equity’s stock in trade are still generous set against meagre bond yields and volatile stock markets. It’s just that they are less generous than they were.

    A rise in leveraged buyout activity mostly in the United States, culminating in the $24.4 billion offer for computer maker Dell Inc backed by private equity firm Silver Lake as well as the company’s founder, has seen private equity fund managers spend more of their investors’ money on deals.

    Financing costs for deals are at historic lows as debt investors chase better returns amid persistently low interest rates, driving up demand for high-yield debt.

    This has in turn led to pledges by private equity executives that they will avoid relying on cheap debt, clever financial tricks and the other excesses of the heady days that preceded the financial crisis.

    Some acknowledged that private equity’s glory days are not coming back, at least not for the sector as a whole.

    “If you are wholly dependent on doing conventional buyouts, which today are very competitive with a lot of money around … frankly it’s going to be very difficult to generate traditional private equity returns in the low- to mid-20 percent (range),” said Leon Black, chief executive of buyout firm Apollo Global Management LLC.

    “If you and your limited partners (investors) have decided in this low interest-rate environment that low- to mid-teen returns are OK, then maybe there will be a lot of things to do,” Black added.

    GOOD OUTCOME

    Private equity has established a track record of outperforming other asset classes. The U.S. private equity index compiled by advisory firm Cambridge Associates LLC shows a net internal rate of return (IRR) of 13.7 percent in the 10 years through September 30, 2012, compared with an 8 percent return by the S&P 500 Index.

    Yet returns have come down as buyout funds proliferated.

    The top-performing 25 percent of U.S. fund managers whose fund launched in 2001 have delivered a net IRR of 36.5 percent; by comparison the net IRR of the top-performing 25 percent of funds launched in 2004, when 66 funds were raised as opposed to 24 funds in 2001, is 13.9 percent, Cambridge Associates said.

    “If you look at the private equity world over its 40-year history, the vintages when we as an industry create good returns are when it is toughest to raise capital,” said Kurt Björklund, co-managing partner of buyout firm Permira Advisers LLP.

    Many private equity funds suffered from overpaying for assets on the back of too much borrowing in the years leading up to the financial crisis of 2008. These funds however have a typical maturity of 10 years, so the jury is still out on their final performance.

    To be sure, there are still some private equity funds that deliver net IRRs of over 20 percent. But the industry is coming to terms with return expectations that are unlikely to be lifted by a new wave of private equity dealmaking.

    “I think net returns in the mid- to upper-teens would be a good outcome for most investors, especially when they look at the landscape of what the alternatives are today,” said Thomas Haubenstricker, chief executive of Goldpoint Partners, which manages assets for New York Life Insurance Co and other clients.

    Private equity investors typically include insurance firms, sovereign wealth funds, university endowments and family offices, but also large public pension funds that turn to the asset class to help them meet their pension liabilities.

    ASSUMPTIONS

    Apollo’s Black said fund managers who pay 9 times earnings before interest, tax, depreciation and amortization (EBITDA), the average price for U.S. private equity deals currently, make too many assumptions about what has to go well.

    Such assumptions include that interest rates will stay low for the next five years, that companies can be sold at the same EBITDA multiple they were bought, and that they can grow these companies faster than the underlying economy.

    Apollo has, however, managed to secure lower valuations in niches such as corporate carve-outs (buying businesses put on the block by a parent company), paying on average only 6 times EBITDA, Black added.

    Adding to deal price inflation is the capital accumulated by private equity firms, which they must spend or return to investors. As of January 2013, North America-focused private equity buyout funds had $189.4 billion in unspent capital, down just 12 percent from December 2011, according to market research firm Preqin.

    Since private equity saw its best returns when capital was scarce, it should become more profitable as investors refuse to stump up more capital for managers who underperform, Permira’s Björklund noted.

    “A number of the large funds have come down in size now so we would expect to see returns improving,” Björklund said.

    One way investors have been trying to boost private equity returns is by avoiding fees, either by co-investing with fund managers in companies or excluding fund managers completely and investing directly.

    “Returns are absolutely more attractive in the co-investment and direct investment portfolio,” said Rich Hall, head of private equity at Teacher Retirement System of Texas. “We are seeing about an 8 or 9 percent advantage relative to our funds portfolio,” he added, referring to the outperformance of such investments. (By Greg Roumeliotis)

  • Call-in Show: Chromebook Pixel pinch secrets

    It’s another edition of the weekly call-in show where we answer your tech questions. We devote this one to Google’s new Chromebook Pixel, which is both impressive and perhaps limiting based on your questions. Or is it?

    To be a part of the show, just call in and leave a voicemail at 262-KCTOFEL. If you do, we’ll play back the question on the show and answer it. Or you can tweet me at @kevinctofel on Twitter. Each week, I’ll answer as many questions as I can while keeping the podcast to a manageable amount of time: 20 to 30 minutes at most.

    (download)

    Subscribe to RSS

    iTunes

    Stitcher Radio

    Show notes:
    Hosts: Chris Albrecht and Kevin C. Tofel

    • Any decent audio or video web apps that work with Chrome OS?
    • Why doesn’t that touchscreen pinch & zoom out of the box and how can I enable it? chrome://flags is your friend!
    • What web apps really take advantage of this hardware? This one does!
    • Will the Pixel price ever drop?

    SELECT PREVIOUS EPISODES:
    PlayStation snore? Google Pixel and Tesla earnings

    Podcast: Why the internet of things is cool and how Mobiplug is helping make it happen

    Podcast: Ballmer’s in the Dell, do tweets ruin TV? And how ISPs are not like gas pumps

    Podcast Q&A: MotoACTV smartwatch now or wait? Lumia 822 in India? Best running apps?

    Podcast: Kabam founder on scaling globally and designing for different platforms

    Podcast: RoadMap Re-Run: Kickstarter’s Perry Chen on creativity and crowdsourcing

    Podcast: The Sporkful’s Dan Pashman on web and food culture (and how bacon is over)

  • Data Center Jobs: Alban Cat

    At the Data Center Jobs Board, we have a new job listing from Alban Cat, which is seeking an EPG Field Service Supervisor in Sterling, VA.

    The EPG Field Supervisor is responsible for directing and supervising activities of field operations and repairs; traveling from site to site and providing guidance and support to the EPG field technicians; and addressing customer concerns and disputes and resolving them in a timely manner. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

  • Nautic Partners Exits Big Train

    Nautic Partners has exited Big Train Inc., selling the company to Kerry Group plc. The terms of the transaction were not disclosed. Lake Forest, Calif.-based Big Train is a manufacturer of liquid and powdered beverage concentrates.

    PRESS RELEASE

    Nautic Partners, LLC (“Nautic”) announced today that it has completed the sale of Big Train, Inc. (“Big Train” or the “Company”) to Kerry Group plc. The terms of the transaction were not disclosed.

    Headquartered in Lake Forest, CA, Big Train is a manufacturer and marketer of liquid and powdered beverage concentrates used for blended ice coffee, fruit smoothies, chai tea, cocoa drinks and various syrups. The company distributes its products through multiple channels, including independent coffeehouses, large retail chains, and international distributors. Big Train serves its approximately 14,000 coffeehouse customers directly, as well as through distributors. Coffeehouses sell the Company’s product under the “Big Train” brand as well as under private label.

    “Our experience working alongside the Big Train management team has been excellent,” said Bernie Buonanno, Managing Director of Nautic. “The strength of the Company’s leadership and the high quality of its products are reflected in its double-digit growth over the last few years.”

    “Our partnership with Nautic was a tremendously positive experience,” said Robyn Hawkins, former Chief Executive Officer of Big Train. “With Nautic we expanded our relationships with multiple key customers, acquired substantial new business, and positioned the Company for strong growth in the coming years.”

    Edwards Wildman Palmer acted as legal counsel for the Company.

    The Big Train sale was Nautic’s fourth liquidity event since November 2012. The three other exits were GCA Services Group to The Blackstone Group, Aavid Thermalloy to Audax Private Equity, and Agilex Fragrances to MidOcean Partners.

    About Nautic Partners

    Founded in 1986, Nautic Partners is a middle-market private equity firm with over $2.5 billion of equity capital under management. The firm has completed 114 platform investments in partnership with management and delivered successful results to investors over three decades. Nautic targets equity investments of $25-$75 million, representing majority ownership in niche businesses with strong market share and growth potential, identified value enhancement opportunities and strong management teams. Areas of focus include business services, manufacturing, and healthcare.

  • Reuters – KKR, First Reserve, Ares Prepare Bids for Utex

    (Reuters) – Private equity firms KKR & Co, First Reserve Corp and Ares Management are preparing final bids for Utex Industries, a U.S. manufacturer of sealing products and services used for oil and gas drilling, according to two sources familiar with the matter.

    Bids for Utex, which is owned by New York-based private equity firm Rhone Capital, are due this week, the sources said. They said other parties may also bid on the company.

    Sources have said Utex could bring in bids in the $700 million to $800 million range.

    First Reserve declined to comment on the matter. KKR, Ares and Utex did not immediately respond to calls for comment.

    Reuters reported last month that Rhone had put the company on the block.

    Founded in 1940 and based in Houston, Texas, Utex makes products used for oil drilling, as well as water management and mining, according to its website.

    Rhone Capital, an investment arm of Rhone Group LLC, specializes in middle market leveraged buyouts, recapitalizations and partnership financings.

  • Why Qualcomm Wants To Bring Ultrasound Transmitters To Smartphones And Tablets

    Mobile chipmaker Qualcomm has a track record of pushing new capabilities into its chips faster than its competitors in a bid to carve out a bigger chunk of the market. Last year, for instance, its LTE Snapdragon processor helped it to take a 48 per cent revenue share in H1 (Strategy Analytics’ figure), helping to drive more LTE handsets into the market, which in turn accelerated the rate of 4G adoption.

    The company made an interesting acquisition last November, buying some of the assets of an Israeli company called EPOS which makes digital ultrasound technology. Ultrasound may seem an odd technology to push into consumer electronics but Qualcomm clearly sees it as another differentiator for its chips, thanks to its potential to offer some novel additions to the user interface space — both for stylus-based inputs and even touch-less interfaces like gestures.

    Discussing Qualcomm’s interest in ultrasound at the Mobile World Congress tradeshow in Barcelona, Raj Talluri, SVP of Product Management, explained that to put the technology to work in mobile devices an ultrasound transmitter could be located in a stylus, with microphones sited on the mobile device that can then detect the position of the pen.
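
    To make that concrete, here is a minimal sketch of how a pen's position could be recovered from ultrasound time-of-flight readings at two microphones. It illustrates the general geometry only and is not Qualcomm's or EPOS's actual method; the two-microphone layout, the assumption that the emission time is known (say, from a sync pulse) and the function name are all illustrative.

        import math

        SPEED_OF_SOUND = 343.0  # metres per second in air at room temperature, roughly

        def pen_position(mic_a, mic_b, tof_a, tof_b):
            # mic_a, mic_b: (x, y) microphone positions in metres.
            # tof_a, tof_b: ultrasound time of flight from the pen to each mic, in
            # seconds, assuming the emission time is known (e.g. via a sync pulse).
            r_a = tof_a * SPEED_OF_SOUND      # distance from pen to mic A
            r_b = tof_b * SPEED_OF_SOUND      # distance from pen to mic B
            ax, ay = mic_a
            bx, by = mic_b
            d = math.hypot(bx - ax, by - ay)  # distance between the two mics
            if d == 0 or d > r_a + r_b or d < abs(r_a - r_b):
                return None                   # readings are inconsistent, no fix
            # Standard intersection of two circles centred on the microphones.
            a = (r_a ** 2 - r_b ** 2 + d ** 2) / (2 * d)
            h = math.sqrt(max(r_a ** 2 - a ** 2, 0.0))
            mx = ax + a * (bx - ax) / d
            my = ay + a * (by - ay) / d
            p1 = (mx + h * (by - ay) / d, my - h * (bx - ax) / d)
            p2 = (mx - h * (by - ay) / d, my + h * (bx - ax) / d)
            # Two mirror-image solutions; keep the one on the writing side of the mics.
            return p1 if p1[1] >= p2[1] else p2

        # Example: two mics 10 cm apart along one edge of a phone.
        print(pen_position((0.0, 0.0), (0.10, 0.0), 0.000350, 0.000420))

    A real system would likely use more microphones, time-difference-of-arrival rather than a known emission time, and heavy filtering, but the core idea is the same: each arrival time constrains the pen to a curve around a microphone, and the position falls out of the intersection.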

    Samsung has already included a capacitive stylus with its Galaxy Note phablet but Talluri said an ultrasound-based stylus would extend the capabilities — allowing a stylus to be used off-screen, say on the table top next to where your phone is resting, and still have its input detected.

    “It’s better [than a capacitive stylus] in some key different ways which we’re working on getting to market – for example you could write here [on the table next to the phone] and it will still detect where it is. So let’s say you have a [paper] notepad… and you have a phone [nearby on the table] and you can start writing on your notepad it will actually also be transcribed into text on the phone because what happens is the ultrasound can be used to calibrate any reasonable distance,” he told TechCrunch.

    The technology could also support gesture-based interactions by positioning an ultrasound transmitter on the mobile device. “There are many use cases of ultrasound,” said Talluri. “You could put a little ultrasound transmitter here [on the corner of the screen] and transmit stuff and then when you cut the ultrasound field [by swiping above the device’s screen] you can do gestures.

    “There’s many different things you can do with it, once you have it. So we’re working on it and hopefully we’ll get it to commercial products.”

    Talluri would not be drawn on the likely timeframe of bringing this technology to market in Qualcomm chips, or which device makers Qualcomm is working with. “We haven’t announced anything yet. There’s clearly a lot of work to be done on it. We’re working on it, we’re just not ready to announce,” he said. “We are very interested in it, that’s why we acquired the assets.”

    He would say that Qualcomm is looking at both phone and tablet form factors for the ultrasound tech but added that it could work “anywhere” — including in wearable devices, such as Google Glass.

    The system also doesn’t necessarily require new microphones to function — opening up the possibility of ultrasound-enabled accessories that can be retrofitted to existing devices to extend their capabilities.

    “The other nice thing is that we find that the microphones [on existing mobile devices] that we put in to use for speech can also detect ultrasound waves — so you probably don’t need special microphones. There are lots of interesting ways to do it… You just need a transmitter somewhere,” said Talluri.

    Discussing how mobile chipsets are generally going to evolve, Talluri said in his view the focus will be not so much on simply adding more and more cores, but rather on getting all the various chipset elements to work together better.

    “We think the next generation of innovation is going to be more on heterogeneous compute. Right now if you look in the phone we’ve got CPUs, we’ve got GPUs, we’ve got video engines, we’ve got audio engines, we’ve got cameras, we’ve got security blocks but they all do one thing at a time.  Ideally you just want to say I want to do this and it should just go map itself to whatever its logical place is and if that place is busy it should work on something else, maybe not optimally,” he said.

    “That’s what I mean by heterogeneous compute. Every block should be able to do other things so that’s kind of where I think SOC in general will evolve to. How can you take advantage of the silicon that you put inside the die to do multiple things, not just one thing at a time. I think that’s a more interesting concept than just put more cores.”

  • The opportunities and dangers in the native advertising land rush

    Even if you’re not in the media or marketing space, you’ve surely noticed that there’s a content boom right now and that many media outlets are handling ad-like stuff a bit differently. Brands are now making more and sometimes better content than in years past, and seasoned journalists and editors are smudging the hard black lines separating editorial from advertising.

    For a few years now, I’ve been working in and around “native advertising,” though I’ve only recently (and somewhat grudgingly) started referring to it by that name. There’s a ton of opportunity here on both sides of the fence: new revenue streams for media companies and journalists (especially freelancers), and marketing channels that demand smart, engaging content — as smart and engaging as the “real” editorial content it coexists with — from brands.

    My favorite flavor of native advertising involves working with a top-notch media partner and publishing a blend of opinions from established thought leaders as well as a few brilliant people at the client company.  When done well, these campaigns are great for the publisher, the brand and the reader — or at least that’s the goal.

    But there are plenty of inherent dangers. On the brand side, it can be very hard for corporate spokespeople to write with a natural voice, and it often falls to me as editor to turn reflexive corporate-speak into something human. This can be a sticky proposition — nearly every piece goes through rounds of client and legal review before publishing, but for it to work, it can’t read like it’s been through the sausage grinder.

    But the real challenges in this type of advertising are on the publishers’ and writers’ sides. In the rush for media brands to become platforms and for journalists to become marketers, we’re missing some important considerations — branding considerations.

    Journalists: You’re brands, too

    Journalists, let’s start with you: You should know that I only want to hire you for your reputation, smarts and objectivity. (And partly for your audience, but if you say something smart, I can find an audience for it.) That’s your brand, and you have to protect it as fiercely as I protect my clients’ brands.

    When I reach out to a journalist with an assignment, I always stipulate that he’ll have full editorial control, and I expect him to exercise it. The client gets final approval of the content — but only on a binary “approve/reject” level — and the journalist gets paid either way. That lets you, the journalist, go about your business of telling the truth and saying interesting things, which is what we value journalists for in the first place.

    You’re free to push back on my edits, ask tough questions, even tar-and-feather the client, if that’s what you want. It’s my job to figure out a client response to that. I once got a client to approve a somewhat critical piece, with the stipulation that one of their people could write a counterpoint to it. It worked beautifully — a meaningful dialogue between influencer and company. On the other hand, I worked on a project with the editor-in-chief of a popular news site and had to throw out a lot of his work because he was obviously trying to please my client. That’s a ticket to journalistic purgatory.

    I work hard to protect the reputation of the writers I commission, partly because screwing over journalists is not a good earned-media strategy, and partly because a compromised journalist is of no use to anyone. But mostly because surprisingly honest writing is a great way to engage the reader and bolster the brand that commissioned it.

    Not every advertiser gets that, though, and I see bad examples of native advertising all the time: articles where the final copy clearly either isn’t the writer’s normal work or was passed through a corporate-speak filter. Journalists, that hurts your brand, and you should refuse to participate in that stuff.

    Publishers: Are you sure you want to be platforms?

    The biggest danger in native advertising is to the publishers and their brands. On Thursday, Andrew Sullivan asked “aren’t we in danger of destroying the village in order to save it?” And in my years of negotiations with media companies, I’ve been shocked at how little some of them value their reputations and brand equity.

    The “platform” idea perfectly encapsulates my point: august publishing companies are opening themselves up to the rantings of every amateur and careerist who wants to add “columnist” to a LinkedIn profile. Those companies also let brands publish on their sites with very little to distinguish ad from editorial.

    Guys, a trusted brand is better than an open platform, both for your readers and for your advertising customers. The internet is full of amateur ravings and branded swill, and is starved for great content. If you can make the latter, why open yourself up to the former? For new revenue streams? That story is short, and we know how it ends.

    I talked with a publisher a couple weeks ago who is taking a stand on native advertising. He was getting too much crap, and his readers were noticing, so he’s going to start rejecting stuff that isn’t up to his publication’s editorial standards.

    At the end of the day, he’ll be leaving money on the table by making that decision. But should he even be worried about the quality of the native ads that run in his publication? Should he pick that money up? Many publishers are asking themselves these questions, and you probably are, too. I’d love to hear your thoughts in the comments.

    Kyle Monson, who will be speaking more about native advertising at paidContent Live on April 17, is Chief Creative at Knock Twice, a startup agency that focuses on tech PR and advertising. In a previous life, Kyle was content strategy director at JWT, and spent almost a decade as a journalist and editor.

    paidContent Live: April 17, 2013, New York City. Register Now

    Image courtesy of Flickr user Dano

  • Argentina back in court

    Argentina squares off today in a U.S. appeals court with the so-called holdout creditors who are demanding $1.3 billion in payments on defaulted bonds. A decision will probably take a few days, but supporters of both sides have been mustering.

    Emails have been pouring into journalists’ inboxes thick and fast from the Argentine Task Force, a lobby group that wants Argentina to settle with bondholders and identifies its goal as “pursuing a fair reconciliation of the Argentine debt default”. And yesterday, a noisy pots-and-pans protest was held outside the London offices of Elliott Associates (the parent company of one of the two hedge fund litigants) by groups supporting Argentina in its battle against those it terms “vulture funds”. Nick Dearden, director of the Jubilee Debt Campaign, a group that calls for cancelling poor countries’ debts, says:

    If the vulture funds are allowed to extract their pound of flesh from Argentina today, we will see a proliferation of vulture funds in Europe tomorrow.

    Meanwhile, market jitters are also mounting. Argentine dollar bond yields have risen steadily since the start of the year, with the country’s 2017  dollar bond now yielding 15.5 percent, 400 basis points up from early January (it’s still off the 20 percent record high hit in November when a technical default looked imminent).

    Debt insurance costs too have surged. The annual cost of insuring one year of exposure to $10 million of Argentine debt via CDS has risen to around $5 million, according to Markit. That is double the level of one-year CDS at the start of 2013.

  • Edit audio with no loss of quality using WaveShop

    If you’d like to edit an audio file then there are plenty of free tools around to help; however, most of them are prone to altering your files in unexpected ways. To test this yourself, just open any file, save it with a different name, and compare that file with the original. Even though you’ve not performed any operations on the second file at all, you’ll still often find there are differences, and inevitably that’s going to mean some compromise in sound quality.
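
    If you want to run that comparison yourself, a byte-level check is the strictest possible test. Below is a minimal Python sketch; the file paths are placeholders, and bear in mind that two files can hold identical audio samples yet differ in container metadata, so a mismatch means something changed but not necessarily the audio itself.

        import hashlib
        import sys

        def sha256_of(path, chunk_size=1 << 20):
            # Hash the file in 1 MB chunks so large audio files don't need to fit in memory.
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    digest.update(chunk)
            return digest.hexdigest()

        if __name__ == "__main__":
            original, resaved = sys.argv[1], sys.argv[2]   # e.g. song.wav song-resaved.wav
            if sha256_of(original) == sha256_of(resaved):
                print("Files are bit-identical")
            else:
                print("Files differ")

    Run it as "python compare_files.py original.wav resaved.wav"; the article's claim for WaveShop is that saving a file without editing it leaves the bytes unchanged, so the two digests should match.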

    WaveShop takes an alternative approach. The program is specifically designed to be bit-perfect, only altering your audio when it’s absolutely necessary. So if you open a file and then save it immediately, there will be no changes. And if you carry out some editing task on one area of the file — fade it out at the end, say — everything else remains exactly as it was.

    The program is open source, and comes in a variety of forms: portable, 32-bit, 64-bit and so on. For performance reasons it’s designed to process audio files entirely in memory, so where possible you should always opt for a 64-bit build.

    Whatever you choose, though, the download will be relatively small (just over 1MB). Installation is quick and easy, and on launch you’ll find a very familiar interface — there are no great surprises here and you’ll know immediately what to do.

    Specifically, you open an audio file (WAV, AIFF, AU, AVR, CAFF, FLAC, HTK, IFF, MAT4, MAT5, MPC, OGG, PAF, PVF, RAW, RF64, SD2, SDF, VOC, W64, WVE and 6I formats are supported) and it’s presented in a standard waveform-type display; you can zoom in, select whatever area you like, and delete it or copy and paste it wherever you need.

    The program has some effects you can apply, too. You’re able to amplify audio, for instance, fade it in or out, or scan your file for clipped audio.

    A “Change Format” option adjusts the sample rate, sample size and number of channels.

    There are Reverse and Invert tools; you get simple Peak and RMS statistics, and there are a host of ways to work with channels. You can insert, delete or swap them, for instance; extract channels to mono files; even edit surround sound audio to decide which channel is assigned to a particular speaker.

    And if you’ve problems with any of this then an excellent Help file (a real Windows Help file, not a PDF or a link to some inadequate web page) does a good job of explaining what you need to know.

    WaveShop doesn’t have a lot of processing options, then, but there’s a reasonable selection for version 1.0, and its non-destructive nature means there may already be good reason to use the program for simple editing tasks. So if you’d like to preserve your audio quality as much as possible, it’s definitely worth a try.

    Photo Credit: asiana/Shutterstock

  • Exclusive: Happtique releases standards for ‘seal of approval’ for mobile health

    With an estimated 40,000 mobile health apps (PDF) available for doctors, consumers and others in healthcare, it can be hard to separate quality apps from, well, crap. A November report from the New England Center for Investigative Reporting highlighted the number of apps that overpromise and underdeliver. And while more doctors are using apps to monitor patients or check information, there are still valid concerns about reliability, privacy and security.

    To help give hospitals and health care providers more clarity around the good, bad and ugly in mobile health apps, New York-based Happtique has been working on a certification program for mobile apps and on Wednesday plans to release its final set of standards.

    “One of the things I hear all the time when I’m dealing with providers and institutions is ‘hey, there are so many apps out there, how do we know which ones have just even been looked at by clinicians? … Or within [a category] ‘how do we decide which ones that we’ll use or recommend to patients?’,” said Ben Chodor, CEO of Happtique. “They just need somewhere to turn where at least these apps have been peer-reviewed and scanned so we know that they’re safe.”

    In the past year, Happtique has enlisted experts and patient advocates to serve on its standards committee, and it’s met with hospital and medical associations and government agencies to hear their feedback. Last July, it released a draft of its standards (PDF) to give developers, care providers and other health care professionals the opportunity to comment.

    The final standards released Wednesday cover not only technical performance, including operability, privacy and security, but also content standards. For example, they encompass issues like the credibility of an app’s information and sources, the fairness of its description and claims, compliance with rules and regulations, and advertising disclosures.

    Chodor said they’re intended to give health care providers and consumers a Good Housekeeping-like “seal of approval” to look for, as well as provide app developers a set of guidelines to build to and a way to show customers their value.

    The Food and Drug Administration is still expected to hand down its own guidelines — and Happtique says its standards will shift to follow federal regulations. But the FDA will only cover some mobile apps, leaving others in a gray area still helped by an industry standard, Chodor said, adding that Happtique could also be a feeder to the FDA.

    Still, even though mobile health could certainly be helped by standards, some argue that Happtique’s plan is unfolding a little too early because there aren’t enough good apps worth filtering out. And, in the vast and quickly growing world of mobile health, Happtique will have to establish itself as a trusted, known name. But its pedigree and partnerships will likely serve it well — not only did it grow out of the hospital community (it was incubated in the venture arm of the Greater New York Hospital Association), it’s signed on impressive partners, including Mount Sinai Hospital, the NYU School of Medicine and Beth Israel Medical Center.

    While Happtique’s final guidelines will be released Wednesday, it won’t start taking submissions from developers until this spring. At that time, app developers interested in certification will pay $2,500 to $3,000, and their apps will then go to third-party partners for review. The Association of American Medical Colleges and the Commission on Graduates of Foreign Nursing Schools (CGFNS), a credentialing authority for healthcare professionals, will review the content and Intertek will scan for technical performance.

  • Time running out for Hungarian bonds?

    Could Hungary’s run of good luck be about to end?

    Despite controversial policies, things have gone the country’s way in recent months — the easing euro crisis and abundant global liquidity saw investors flock to high-yield emerging markets such as Hungary and also allowed it to tap international capital for a $3.25 billion bond. It has slashed interest rates seven times straight, cutting them this week to a record low 5.25 percent. The result is an increased reliance on international bond investors. Foreigners’ share of the Budapest bond market  is almost 50 percent, among the highest percentages in emerging markets.

    But analysts at Unicredit write that both markets and economic data had validated rate cuts in 2012, which may not be the case any more. Annual headline inflation fell from 6.6% in September 2012 to 3.7% in January 2013 while the economy contracted 1.7% last year. As a result, net foreign buying of Hungarian bonds rose in the second half of 2012 to 837 billion forints (an average daily rate of almost 6 billion forints), they note. Markets are pricing in at least three more cuts, which would take the rate to 4.5 percent.

    But support from foreigners is ebbing. Since the beginning of the year, Unicredit points out, foreign investors have cut holdings of government bonds by 236.8 billion forints (average daily outflow of 6.1 billion forints). Moreover, the most recent rate cuts have failed to fully translate into bond yield corrections, they say.  While the short-dated 2-5 year segment of the curve dropped 23-40 basis points, the belly (the middle) of the curve dipped by only 9-24 bps and longer-dated yields over 10 years have risen by around 18 bps. And the fall in inflation too could be a thing of the past if the government resorts to tax hikes in order to meet the deficit target of 2.7% of GDP  — that would persuade the European Union to lift the excessive deficit procedure it has triggered against Hungary for repeated budget deficit overshoots.

    The biggest complication could be the upcoming leadership change at the central bank, which is expected to tightly align monetary and government policies. That has contributed to the forint’s 1.5 percent weakening this year (after 11 percent gains in 2012) and many reckon portfolio flows could be seriously undermined.  Peter Attard Montalto, an economist at Nomura, calls outgoing governor Andras Simor “a level-headed defender of  financial stability, the currency, inflation and central bank independence even against unimaginable pressure from the government”. Montalto adds:

    Foreign investors will find the institution much more closed and no longer the ‘safe haven’ for information and analysis. That may well be even more damaging in both the short and long run than any policy unorthodoxies.

    The new governor is to be named on Friday and the front-runner is Gyorgy Matolcsy, widely seen as the architect of Hungary’s unconventional policies.

  • Morning Advantage: Weight Watchers Workers Want Fatter Wallets

    Weight Watchers leaders are waging a battle — and it’s not just over our expanding waistlines. They’re angry about the wages they’re being paid, complaining that the base rate for running meetings ($18) hasn’t increased in more than a decade, even as the company shells out millions to celebrities like Jennifer Hudson and Jessica Simpson to advertise the program. Hundreds have taken to an internal corporate website in protest, some arguing that Weight Watchers gets away with paying so little because most of its leaders are female.

    Steven Greenhouse at The New York Times says that the uproar comes at a tough time for Weight Watchers. Two years ago, the company reached a $6.2 million settlement that ended a class-action lawsuit in California in which employees complained about minimum wage violations, off-the-clock work and paychecks that didn’t explain how wages were calculated. And Weight Watchers is forecasting lower earnings this year, citing problems recruiting new members. (Is it any wonder, with its $42.95 monthly fee, when there are plenty of free weight loss apps that simulate the Weight Watchers experience?)

    Yet these disgruntled employees sure do seem to love their work. As Teri Weatherby, a meeting leader in Hartford, CT, puts it: “Other than the financial problems, it’s probably the most rewarding thing I’ve ever done. That’s what they prey upon. It’s like an abusive relationship. You know you should leave, but you stay because you love it.”

    IN RELATED NEWS…

    Cash is Nice, but Think About What Might Benefit Both Employees and the Company (Business Horizons)

    What if you could “pay” your employees in such a way as to not only give them something they valued but simultaneously improve their commitment or job-related knowledge? You can. As a reward for high performance, you can give employees the freedom to redesign their jobs, or you can provide extra training. How about giving them a sabbatical? That will get their attention. Herman Aguinis of the Kelley School of Business at Indiana University lays out the options for non-monetary compensation. —Andy O’Connell

    A WALK TO REMEMBER

    A Chance Encounter Becomes Impromptu Mentoring (Fortune)

    So, you’re a kid on a bike and you recognize the guy who takes a walk through your neighborhood as Mark Zuckerberg. Why does he take walks through your neighborhood? Because his office is nearby and it’s California and the weather is always beautiful. And what do you do? You strike up a conversation with him, and pretty soon you’re talking to him every day as he passes by. Before you know it, he’s urging you to learn to program and giving you detailed instructions on how to go about becoming a budding engineer. It’s a story of two people united by a sense of wonder and enjoyment. —Andy O’Connell

    BONUS BITS:

    But Can They Still Wear Their PJs?

    Marissa Mayer’s No-Working-From-Home Rule Is Stupid — Or It Could Save Yahoo (Wired)
    New Research: What Yahoo Should Know About Good Managers and Remote Workers (HBR)
    Yahoo CEO Marissa Mayer Installed a Nursery In Her Office (Gawker)

  • Ubuntu Touch developer preview coming to more devices soon

    My colleague Mihaita Bamburic posted his first impressions on the preview version of Ubuntu Touch yesterday, and now Canonical has announced its intention to bring the early version of the mobile operating system to a further 20+ devices.

    Originally only available to install on the Galaxy Nexus, Nexus 4, Nexus 7, and Nexus 10, the developer preview gives installers an early start with Ubuntu Touch, but it’s currently more of a taster than an actual, fully usable operating system.

    An update on the Ubuntu Wiki reveals a long list of devices that the developer preview will soon be available for, and includes the Motorola XOOM, Galaxy Nexus (codenamed toro and toroplus), Sony Xperia S and T, Samsung Galaxy S III (international, Verizon Wireless, and AT&T), Huawei Ascend G300, Samsung Galaxy S (GT-I9000), Samsung Galaxy S SCL (GT-I9003), Samsung Galaxy Note, Samsung Galaxy Note II, Samsung Galaxy S II (international), HTC One X, HTC One XL, HTC One X+ (multiple versions), Asus Transformer Infinity, LG Optimus 4x HD, Nexus S, Nexus One, Samsung Galaxy Tab 10.1 Wi-Fi, and the Asus Transformer Pad.

    Unlocking and install instructions, as well as the code/image are already available for some devices in the list, including the Sony Xperia S and T, and Huawei Ascend G300.

    It’s good to see the OS being made compatible with such a wide range of devices, and surprising too, as I wouldn’t have expected it to run all that well on older hardware like the Nexus One. Of course making it available for aging devices is a good thing as enthusiasts will be more willing to install the OS on an old phone in a drawer than on their current model.

    Do you plan to take the developer preview for a spin, or like me are you waiting for a more fully operational version? Leave your comments below.

  • Accidental Empires, Part 10 — Amateur hour (Chapter 3)

    Tenth in a series. Robert X. Cringely’s brilliant tome about the rise of the personal computing industry continues, looking at programming languages and operating systems.

    Published in 1991, Accidental Empires is an excellent lens for viewing not just the past but the future of computing.

    CHAPTER FOUR

    AMATEUR HOUR

    You have to wonder what it was we were doing before we had all these computers in our lives. Same stuff, pretty much. Down at the auto parts store, the counterman had to get a ladder and climb way the heck up to reach some top shelf, where he’d feel around in a little box and find out that the muffler clamps were all gone. Today he uses a computer, which tells him that there are three muffler clamps sitting in that same little box on the top shelf. But he still has to get the ladder and climb up to get them, and, worse still, sometimes the computer lies, and there are no muffler clamps at all, spoiling the digital perfection of the auto parts world as we have come to know it.

    What we’re often looking for when we add the extra overhead of building a computer into our businesses and our lives is certainty. We want something to believe in, something that will take from our shoulders the burden of knowing when to reorder muffler clamps. In the twelfth century, before there even were muffler clamps, such certainty came in the form of a belief in God, made tangible through the building of cathedrals — places where God could be accessed. For lots of us today, the belief is more in the sanctity of those digital zeros and ones, and our cathedral is the personal computer. In a way, we’re replacing God with Bill Gates.

    Uh-oh.

    The problem, of course, is with those zeros and ones. Yes or no, right or wrong, is what those digital bits seem to signify, looking so clean and unconnected that we forget for a moment about that time in the eighth grade when Miss Schwerko humiliated us all with a true-false test. The truth is, that for all the apparent precision of computers, and despite the fact that our mothers and Tom Peters would still like to believe that perfection is attainable in this life, computer and software companies are still remarkably imprecise places, and their products reflect it. And why shouldn’t they, since we’re still at the fumbling stage, where good and bad developments seem to happen at random.

    Look at Intel, for example. Up to this point in the story, Intel comes off pretty much as high-tech heaven on earth. As the semiconductor company that most directly begat the personal computer business, Intel invented the microprocessor and memory technologies used in PCs and acted as an example of how a high-tech company should be organized and managed. But that doesn’t mean that Bob Noyce’s crew didn’t screw up occasionally.

    There was a time in the early 1980s when Intel suffered terrible quality problems. It was building microprocessors and other parts by the millions and by the millions these parts tested bad. The problem was caused by dust, the major enemy of computer chip makers. When your business relies on printing metallic traces that are only a millionth of an inch wide, having a dust mote ten times that size come rolling across a silicon wafer means that some traces won’t be printed correctly and some parts won’t work at all. A few bad parts are to be expected, since there are dozens, sometimes hundreds, printed on a single wafer, which is later cut into individual components. But Intel was suddenly getting as many bad parts as good, and that was bad for business.

    Semiconductor companies fight dust by building their components in expensive clean rooms, where technicians wear surgical masks, paper booties, rubber gloves, and special suits and where the air is specially filtered. Intel had plenty of clean rooms, but it still had a big dust problem, so the engineers cleverly decided that the wafers were probably dusty before they ever arrived at Intel. The wafers were made in the East by Monsanto. Suddenly it was Monsanto’s dust problem.

    Monsanto engineers spent months and millions trying to eliminate every last speck of dust from their silicon wafer production facility in South Carolina. They made what they thought was terrific progress, too, though it didn’t show in Intel’s production yields, which were still terrible. The funny thing was that Monsanto’s other customers weren’t complaining. IBM, for example, wasn’t complaining, and IBM was a very picky customer, always asking for wafers that were extra big or extra small or triangular instead of round. IBM was having no dust problems.

    If Monsanto was clean and Intel was clean, the only remaining possibility was that the wafers somehow got dusty on their trip between the two companies, so the Monsanto engineers hired a private investigator to tail the next shipment of wafers to Intel. Their private eye uncovered an Intel shipping clerk who was opening incoming boxes of super-clean silicon wafers and then counting out the wafers by hand into piles on a super-unclean desktop, just to make sure that Bob Noyce was getting every silicon wafer he was paying for.

    The point of this story goes far beyond the undeification of Intel to a fundamental characteristic of most high-tech businesses. There is a business axiom that management gurus spout and that bigshot industrialists repeat to themselves as a mantra if they want to sleep well at night. The axiom says that when a business grows past $1 billion in annual sales, it becomes too large for any one individual to have a significant impact. Alas, this is not true when it’s a $1 billion high-tech business, where too often the critical path goes right through the head of one particular programmer or engineer or even through the head of a well-meaning clerk down in the shipping department. Remember that Intel was already a $1 billion company when it was brought to its knees by desk dust.

    The reason that there are so many points at which a chip, a computer, or a program is dependent on just one person is that the companies lack depth. Like any other new industry, this is one staffed mainly by pioneers, who are, by definition, a small minority. People in critical positions in these organizations don’t usually have backup, so when they make a mistake, the whole company makes a mistake.

    My estimate, in fact, is that there are only about twenty-five real people in the entire personal computer industry — this shipping clerk at Intel and around twenty-four others. Sure, Apple Computer has 10,000 workers, or says it does, and IBM claims nearly 400,000 workers worldwide, but has to be lying. Those workers must be temps or maybe androids because I keep running into the same two dozen people at every company I visit. Maybe it’s a tax dodge. Finish this book and you’ll see; the companies keep changing, but the names are always the same.

    Intel begat the microprocessor and the dynamic random access memory chip, which made possible MITS, the first of many personal computer companies with a stupid name. And MITS, in turn, made possible Microsoft, because computer hardware must exist, or at least be claimed to exist, before programmers can even envision software for it. Just as cave dwellers didn’t squat with their flint tools chipping out parking brake assemblies for 1967 Buicks, so programmers don’t write software that has no computer upon which to run. Hardware nearly always leads software, enabling new development, which is why Bill Gates’s conversion from minicomputers to microcomputers did not come (could not come) until 1974, when he was a sophomore at Harvard University and the appearance of the MITS Altair 8800 computer made personal computer software finally possible.

    Like the Buddha, Gates’s enlightenment came in a flash. Walking across Harvard Yard while Paul Allen waved in his face the January 1975 issue of Popular Electronics announcing the Altair 8800 microcomputer from MITS, they both saw instantly that there would really be a personal computer industry and that the industry would need programming languages. Although there were no microcomputer software companies yet, 19-year-old Bill’s first concern was that they were already too late. “We realized that the revolution might happen without us”, Gates said. “After we saw that article, there was no question of where our life would focus”.

    “Our life!” What the heck does Gates mean here — that he and Paul Allen were joined at the frontal lobe, sharing a single life, a single set of experiences? In those days, the answer was “yes”. Drawn together by the idea of starting a pioneering software company and each convinced that he couldn’t succeed alone, they committed to sharing a single life — a life unlike that of most other PC pioneers because it was devoted as much to doing business as to doing technology.

    Gates was a businessman from the start; otherwise, why would he have been worried about being passed by? There was plenty of room for high-level computer languages to be developed for the fledgling platforms, but there was only room for one first high-level language. Anyone could participate in a movement, but only those with the right timing could control it. Gates knew that the first language — the one resold by MITS, maker of the Altair — would become the standard for the whole industry. Those who seek to establish such de facto standards in any industry do so for business reasons.

    “This is a very personal business, but success comes from appealing to groups”, Gates says. “Money is made by setting de facto standards”.

    The Altair was not much of a consumer product. It came typically as an unassembled $350 kit, clearly targeting only the electronic hobbyist market. There was no software for the machine, so, while it may have existed, it sure didn’t compute. There wasn’t even a keyboard. The only way of programming the computer at first was through entering strings of hexadecimal code by flicking a row of switches on the front panel. There was no display other than some blinking lights. The Altair was limited in its appeal to those who could solder (which eliminated most good programmers) and to those who could program in machine language (which eliminated most good solderers).

    BASIC was generally recognized as the easiest programming language to learn in 1975. It automatically converted simple English-like commands to machine language, effectively removing the programming limitation and at least doubling the number of prospective Altair customers.

    Since they didn’t have an Altair 8800 computer (nobody did yet), Gates and Allen wrote a program that made a PDP-10 minicomputer at the Harvard Computation Center simulate the Altair’s Intel 8080 microprocessor. In six weeks, they wrote a version of the BASIC programming language that would run on the phantom Altair synthesized in the minicomputer. They hoped it would run on a real Altair equipped with at least 4096 bytes of random access memory. The first time they tried to run the language on a real microcomputer was when Paul Allen demonstrated the product to MITS founder Ed Roberts at the company’s headquarters in Albuquerque. To their surprise and relief, it worked.

    MITS BASIC, as it came to be called, gave substance to the microcomputer. Big computers ran BASIC. Real programs had been written in the language and were performing business, educational, and scientific functions in the real world. While the Altair was a computer of limited power, the fact that Allen and Gates were able to make a high-level language like BASIC run on the platform meant that potential users could imagine running these same sorts of applications now on a desktop rather than on a mainframe.

    MITS BASIC was dramatic in its memory efficiency and made the bold move of adding commands that allowed programmers to control the computer memory directly. MITS BASIC wasn’t perfect. The authors of the original BASIC, John Kemeny and Thomas Kurtz, both of Dartmouth College, were concerned that Gates and Allen’s version deviated from the language they had designed and placed into the public domain a decade before. Kemeny and Kurtz might have been unimpressed, but the hobbyist world was euphoric.

    I’ve got to point out here that for many years Kemeny was president of Dartmouth, a school that didn’t accept me when I was applying to colleges. Later, toward the end of the Age of Jimmy Carter, I found myself working for Kemeny, who was then head of the presidential commission investigating the Three Mile Island nuclear accident. One day I told him how Dartmouth had rejected me, and he said, “College admissions are never perfect, though in your case I’m sure we did the right thing”. After that I felt a certain affection for Bill Gates.

    Gates dropped out of Harvard, Allen left his programming job at Honeywell, and both moved to New Mexico to be close to their customer, in the best Tom Peters style. Hobbyists don’t move across country to maintain business relationships, but businessmen do. They camped out in the Sundowner Motel on Route 66 in a neighborhood noted for all-night coffee shops, hookers, and drug dealers.

    Gates and Allen did not limit their interest to MITS. They wrote versions of BASIC for other microcomputers as they came to market, leveraging their core technology. The two eventually had a falling out with Ed Roberts of MITS, who claimed that he owned MITS BASIC and its derivatives; they fought and won, something that hackers rarely bothered to do. Capitalists to the bone, they railed against software piracy before it even had a name, writing whining letters to early PC publications.

    Gates and Allen started Microsoft with a stated mission of putting “a computer on every desk and in every home, running Microsoft software”. Although it seemed ludicrous at the time, they meant it.

    While Allen and Gates deliberately went about creating an industry and then controlling it, they were important exceptions to the general trend of PC entrepreneurism. Most of their eventual competitors were people who managed to be in just the right place at the right time and more or less fell into business. These people were mainly enthusiasts who at first developed computer languages and operating systems for their own use. It was worth the effort if only one person — the developer himself — used their product. Often they couldn’t even imagine why anyone else would be interested.

    Gary Kildall, for example, invented the first microcomputer operating system because he was tired of driving to work. In the early 1970s, Kildall taught computer science at the Naval Postgraduate School in Monterey, California, where his specialty was compiler design. Compilers are software tools that take entire programs written in a high-level language like FORTRAN or Pascal and translate them into assembly language, which can be read directly by the computer. High-level languages are easier to learn than Assembler, so compilers allowed programs to be completed faster and with more features, although the final code was usually longer than if the program had been written directly in the internal language of the microprocessor. Compilers translate, or compile, large sections of code into Assembler at one time, as opposed to interpreters, which translate commands one at a time.

    By 1974, Intel had added the 8008 and 8080 to its family of microprocessors and had hired Gary Kildall as a consultant to write software to emulate the 8080 on a DEC time-sharing system, much as Gates and Allen would shortly do at Harvard. Since there were no microcomputers yet, Intel realized that the best way for companies to develop software for microprocessor-based devices was by using such an emulator on a larger system.

    Kildall’s job was to write the emulator, called Interp/80, followed by a high-level language called PL/M, which was planned as a microcomputer equivalent of the XPL language developed for mainframe computers at Stanford University. Nothing so mundane (and usable by mere mortals) as BASIC for Gary Kildall, who had a Ph.D. in compiler design.

    What bothered Kildall was not the difficulty of writing the software but the tedium of driving the fifty miles from his home in Pacific Grove across the Santa Cruz mountains to use the Intel minicomputer in Silicon Valley. He could have used a remote teletype terminal at home, but the terminal was incredibly slow for inputting thousands of lines of data over a phone line; driving was faster.

    Or he could develop software directly on the 8080 processor, bypassing the time-sharing system completely. Not only could he avoid the long drive, but developing directly on the microprocessor would also bypass any errors in the minicomputer 8080 emulator. The only problem was that the 8080 microcomputer Gary Kildall wanted to take home didn’t exist.

    What did exist was the Intellec-8, an Intel product that could be used (sort of) to program an 8080 processor. The Intellec-8 had a microprocessor, some memory, and a port for attaching a Teletype 33 terminal. There was no software and no method for storing data and programs outside of main memory.

    The primary difference between the Intellec-8 and a microcomputer was external data storage and the software to control it. IBM had invented a new device, called a floppy disk, to replace punched cards for its minicomputers. The disks themselves could be removed from the drive mechanism, were eight inches in diameter, and held the equivalent of thousands of pages of data. Priced at around $500, the floppy disk drive was perfect for Kildall’s external storage device. Kildall, who didn’t have $500, convinced Shugart Associates, a floppy disk drive maker, to give him a worn-out floppy drive used in its 10,000-hour torture test. While his friend John Torode invented a controller to link the Intellec-8 and the floppy disk drive, Kildall used the 8080 emulator on the Intel time-sharing system to develop his operating system, called CP/M, or Control Program/Monitor.

    If a computer acquires a personality, it does so from its operating system. Users interact with the operating system, which interacts with the computer. The operating system controls the flow of data between a computer and its long-term storage system. It also controls access to system memory and keeps those bits of data that are thrashing around the microprocessor from thrashing into each other. Operating systems usually store data in files, which have individual names and characteristics and can be called up as a program or the user requires them.

    Gary Kildall developed CP/M on a DEC PDP-10 minicomputer running the TOPS-10 operating system. Not surprisingly, most CP/M commands and file naming conventions look and operate like their TOPS-10 counterparts. It wasn’t pretty, but it did the job.

    By the time he’d finished writing the operating system, Intel didn’t want CP/M and had even lost interest in Kildall’s PL/M language. The only customers for CP/M in 1975 were a maker of intelligent terminals and Lawrence Livermore Labs, which used CP/M to monitor programs on its Octopus network.

    In 1976, Kildall was approached by Imsai, the second personal computer company with a stupid name. Imsai manufactured an early 8080-based microcomputer that competed with the Altair. In typical early microcomputer company fashion, Imsai had sold floppy disk drives to many of its customers, promising to send along an operating system eventually. With each of them now holding at least $1,000 worth of hardware that was only gathering dust, the customers wanted their operating system, and CP/M was the only operating system for Intel-based computers that was actually available.

    By the time Imsai came along, Kildall and Torode had adapted CP/M to four different floppy disk controllers. There were probably 100 little companies talking about doing 8080-based computers, and neither man wanted to invest the endless hours of tedious coding required to adapt CP/M to each of these new platforms. So they split the parts of CP/M that interfaced with each new controller into a separate computer code module, called the Basic Input/Output System, or BIOS. With all the hardware-dependent parts of CP/M concentrated in the BIOS, it became a relatively easy job to adapt the operating system to many different Intel-based microcomputers by modifying just the BIOS.
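
    The principle is easy to see in miniature. The sketch below (illustrative Python with invented class names, not actual CP/M or BIOS code) shows the same separation: everything hardware-specific sits behind one narrow interface, so porting the system to a new disk controller means rewriting only that one layer.

        class DiskBIOS:
            # Hardware-dependent layer: one implementation per disk controller.
            def read_sector(self, track, sector):
                raise NotImplementedError

        class ControllerABIOS(DiskBIOS):
            def read_sector(self, track, sector):
                return f"controller A: track {track}, sector {sector}".encode()

        class ControllerBBIOS(DiskBIOS):
            def read_sector(self, track, sector):
                return f"controller B: track {track}, sector {sector}".encode()

        class OperatingSystem:
            # Hardware-independent core: written once, reused on every machine.
            def __init__(self, bios):
                self.bios = bios

            def load_file(self, sectors):
                # The OS never touches the controller directly, only the BIOS.
                return b" | ".join(self.bios.read_sector(t, s) for t, s in sectors)

        # Porting to new hardware means swapping the BIOS and nothing else.
        print(OperatingSystem(ControllerABIOS()).load_file([(0, 1), (0, 2)]))
        print(OperatingSystem(ControllerBBIOS()).load_file([(0, 1), (0, 2)]))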

    With his CP/M and invention of the BIOS, Gary Kildall defined the microcomputer. Peek into any personal computer today, and you’ll find a general-purpose operating system adapted to specific hardware through the use of a BIOS, which is now a specialized type of memory chip.

    In the six years after Imsai offered the first CP/M computer, more than 500,000 CP/M computers were sold by dozens of makers. Programmers began to write CP/M applications, relying on the operating system’s features to control the keyboard, screen, and data storage. This base of applications turned CP/M into a de facto standard among microcomputer operating systems, guaranteeing its long-term success. Kildall started a company called Intergalactic Digital Research (later, just Digital Research) to sell the software in volume to computer makers and direct to users for $70 per copy. He made millions of dollars, essentially without trying.

    Before he knew it, Gary Kildall had plenty of money, fast cars, a couple of airplanes, and a business that made increasing demands on his time. His success, while not unwelcome, was unexpected, which also meant that it was unplanned for. Success brings with it a whole new set of problems, as Gary Kildall discovered. You can plan for failure, but how do you plan for success?

    Every entrepreneur has an objective, which, once achieved, leads to a crisis. In Gary Kildall’s case, the objective — just to write CP/M, not even to sell it — was very low, so the crisis came quickly. He was a code god, a programmer who literally saw lines of code fully formed in his mind and then committed them effortlessly to the keyboard in much the same way that Mozart wrote music. He was one with the machine; what did he need with seventy employees?

    “Gary didn’t give a shit about the business. He was more interested in getting laid”, said Gordon Eubanks, a former student of Kildall who led development of computer languages at Digital Research. “So much went so well for so long that he couldn’t imagine it would change. When it did — when change was forced upon him — Gary didn’t know how to handle it.”

    “Gary and Dorothy [Kildall’s wife and a Digital Research vice-president] had arrogance and cockiness but no passion for products. No one wanted to make the products great. Dan Bricklin [another PC software pioneer — read on] sent a document saying what should be fixed in CP/M, but it was ignored. Then I urged Gary to do a BASIC language to bundle with CP/M, but when we finally got him to do a language, he insisted on PL/I — a virtually unmarketable language”.

    Digital Research was slow in developing a language business to go with its operating systems. It was also slow in updating its core operating system and extending it into the new world of 16-bit microprocessors that came along after 1980. The company in those days was run like a little kingdom, ruled by Gary and Dorothy Kildall.

    “In one board meeting”, recalled a former Digital Research executive, “we were talking about whether to grant stock options to a woman employee. Dorothy said, ‘No, she doesn’t deserve options — she’s not professional enough; her kids visit her at work after 5:00 p.m.’ Two minutes later, Christy Kildall, their daughter, burst into the boardroom and dragged Gary off with her to the stable to ride horses, ending the meeting. Oh yeah, Dorothy knew about professionalism”.

    Let’s say for a minute that Eubanks was correct, and Gary Kildall didn’t give a shit about the business. Who said that he had to? CP/M was his invention; Digital Research was his company. The fact that it succeeded beyond anyone’s expectations did not make those earlier expectations invalid. Gary Kildall’s ambition was limited, something that is not supposed to be a factor in American business. If you hope for a thousand and get a million, you are still expected to want more, but he didn’t.

    It’s easy for authors of business books to get rankled by characters like Gary Kildall who don’t take good care of the empires they have built. But in fact, there are no absolute rules of behavior for companies like Digital Research. The business world is, like computers, created entirely by people. God didn’t come down and say there will be a corporation and it will have a board of directors. We made that up. Gary Kildall made up Digital Research.

    Eubanks, who came to Digital Research after a naval career spent aboard submarines, hated Kildall’s apparent lack of discipline, not understanding that it was just a different kind of discipline. Kildall was into programming, not business.

    “Programming is very much a religious experience for a lot of people”, Kildall explained. “If you talk about programming to a group of programmers who use the same language, they can become almost evangelistic about the language. They form a tight-knit community, hold to certain beliefs, and follow certain rules in their programming. It’s like a church with a programming language for a bible”.

    Gary Kildall’s bible said that writing a BASIC compiler to go with CP/M might be a shrewd business move, but it would be a step backward technically. Kildall wanted to break new ground, and a BASIC had already been done by Microsoft.

    “The unstated rule around Digital Research was that Microsoft did languages, while we did operating systems”, Eubanks explained. “It was never stated emphatically, but I always thought that Gary assumed he had an agreement with Bill Gates about this separation and that as long as we didn’t compete with Microsoft, they wouldn’t compete with us”.

    Sure.

    The Altair 8800 may have been the first microcomputer, but it was not a commercial success. The problem was that assembly took from forty to an infinite number of hours, depending on the hobbyist’s mechanical ability. When the kit was done, the microcomputer either worked or didn’t. If it worked, the owner had a programmable computer with a BASIC interpreter, ready to run any software he felt like writing.

    The first microcomputer that was a major commercial success was the Apple II. It succeeded because it was the first microcomputer that looked like a consumer electronic product. You could buy the Apple from a dealer who would fix it if it broke and would give you at least a little help in learning to operate the beast. The Apple II had a floppy disk drive for data storage, did not require a separate Teletype or video terminal, and offered color graphics in addition to text. Most important, you could buy software written by others that would run on the Apple and with which a novice could do real work.

    The Apple II still defines what a low-end computer is like. Twenty-third century archaeologists excavating some ancient ComputerLand stockroom will see no significant functional difference between an Apple II of 1978 and an IBM PS/2 of 1992. Both have processor, memory, storage, and video graphics. Sure, the PS/2 has a faster processor, more memory and storage, and higher-resolution graphics, but that only matters to us today. By the twenty-third century, both machines will seem equally primitive.

    The Apple II was guided by three spirits. Steve Wozniak invented the earlier Apple I to show it off to his friends in the Homebrew Computer Club. Steve Jobs was Wozniak’s younger sidekick who came up with the idea of building computers for sale and generally nagged Woz and others until the Apple II was working to his satisfaction. Mike Markkula was the semiretired Intel veteran (and one of Noyce’s boys) who brought the money and status required for the other two to be taken at all seriously.

    Wozniak made the Apple II a simple machine that used clever hardware tricks to get good performance at a smallish price (at least to produce — the retail price of a fully outfitted Apple II was around $3,000). He found a way to allow the microprocessor and the video display to share the same memory. His floppy disk controller, developed during a two-week period in December 1977, used less than a quarter the number of integrated circuits required by other controllers at the time. The Apple’s floppy disk controller made it clearly superior to machines appearing about the same time from Commodore and Radio Shack. More so than probably any other microcomputer, the Apple II was the invention of a single person; even Apple’s original BASIC interpreter, which was always available in read-only memory, had been written by Woz.

    Woz made the Apple II a color machine to prove that he could do it and so he could use the computer to play a color version of Breakout, a video game that he and Jobs had designed for Atari. Markkula, whose main contributions at Intel had been in finance, pushed development of the floppy disk drive so the computer could be used to run accounting programs and store resulting financial data for small business owners. Each man saw the Apple II as a new way of fulfilling an established need —  to replace a video game for Woz and a mainframe for Markkula. This followed the trend that new media tend to imitate old media.

    Radio began as vaudeville over the air, while early television was radio with pictures. For most users (though not for Woz) the microcomputer was a small mainframe, which explained why Apple’s first application for the machine was an accounting package and the first application supplied by a third-party developer was a database — both perfect products for a mainframe substitute. But the Apple II wasn’t a very good mainframe replacement. The fact is that new inventions often have to find uses of their own in order to find commercial success, and this was true for the Apple II, which became successful strictly as a spreadsheet machine, a function that none of its inventors visualized.

    At $3,000 for a fully configured system, the Apple II did not have a big future as a home machine. Old-timers like to reminisce about the early days of Apple when the company’s computers were affordable, but the truth is that they never were.

    The Apple II found its eventual home in business, answering the prayers of all those middle managers who had not been able to gain access to the company’s mainframe or who were tired of waiting the six weeks it took for the computer department to prepare a report, dragging the answers to simple business questions from corporate data. Instead, they quickly learned to use a spreadsheet program called VisiCalc, which was available at first only on the Apple II.

    VisiCalc was a compelling application — an application so important that it alone justified the computer purchase. Such an application was the last element required to turn the microcomputer from a hobbyist’s toy into a business machine. No matter how powerful and brilliantly designed, no computer can be successful without a compelling application. To the people who bought them, mainframes were really inventory machines or accounting machines, and minicomputers were office automation machines. The Apple II was a VisiCalc machine.

    VisiCalc was a whole new thing, an application that had not appeared before on some other platform. There were no minicomputer or mainframe spreadsheet programs that could be downsized to run on a microcomputer. The microcomputer and the spreadsheet came along at the same time. They were made for each other.

    VisiCalc came about because its inventor, Dan Bricklin, went to business school. And Bricklin went to business school because he thought that his career as a programmer was about to end; it was becoming so easy to write programs that Bricklin was convinced there would eventually be no need for programmers at all, and he would be out of a job. So in the fall of 1977, 26 years old and worried about being washed up, he entered the Harvard Business School looking toward a new career.

    At Harvard, Bricklin had an advantage over other students. He could whip up BASIC programs on the Harvard time-sharing system that would perform financial calculations. The problem with Bricklin’s programs was that they had to be written and rewritten for each new problem. He began to look for a more general way of doing these calculations in a format that would be flexible.

    What Bricklin really wanted was not a microcomputer program at all but a specialized piece of hardware — a kind of very advanced calculator with a heads-up display similar to the weapons system controls on an F-14 fighter. Like Luke Skywalker jumping into the turret of the Millennium Falcon, Bricklin saw himself blasting out financials, locking onto profit and loss numbers that would appear suspended in space before him. It was to be a business tool cum video game, a Saturday Night Special for M.B.A.s, only the hardware technology didn’t exist in those days to make it happen.

    Back in the semireal world of the Harvard Business School, Bricklin’s production professor described large blackboards that were used in some companies for production planning. These blackboards, often so long that they spanned several rooms, were segmented in a matrix of rows and columns. The production planners would fill each space with chalk scribbles relating to the time, materials, manpower, and money needed to manufacture a product. Each cell on the blackboard was located in both a column and a row, so each had a two-dimensional address. Some cells were related to others, so if the number of workers listed in cell C-3 was increased, it meant that the amount of total wages in cell D-5 had to be increased proportionally, as did the total number of items produced, listed in cell F-7. Changing the value in one cell required the recalculation of values in all other linked cells, which took a lot of erasing and a lot of recalculating and left the planners constantly worried that they had overlooked recalculating a linked value, making their overall conclusions incorrect.

    Given that Bricklin’s Luke Skywalker approach was out of the question, the blackboard metaphor made a good structure for Bricklin’s financial calculator, with a video screen replacing the physical blackboard. Once data and formulas were introduced by the user into each cell, changing one variable would automatically cause all the other cells to be recalculated and changed too. No linked cells could be forgotten. The video screen would show a window on a spreadsheet that was actually held in computer memory. The virtual spreadsheet inside the box could be almost any size, putting on a desk what had once taken whole rooms filled with blackboards. Once the spreadsheet was set up, answering a what-if question like “How much more money will we make if we raise the price of each widget by a dime?” would take only seconds.
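
    A few lines of modern code make the mechanism concrete. This is only a rough sketch of the idea in Python, with made-up cell addresses and formulas, not VisiCalc’s implementation: each cell holds either a value or a formula over other cells, and any edit triggers a recalculation of the sheet, so no linked cell can be forgotten.

        # Cells hold either a number or a formula over other cells.
        cells = {
            "C3": 10,                              # workers
            "D5": lambda get: get("C3") * 400,     # total wages
            "F7": lambda get: get("C3") * 25,      # items produced
        }

        def get(name):
            value = cells[name]
            return value(get) if callable(value) else value

        def set_cell(name, value):
            # The "event": a user edits one cell...
            cells[name] = value
            # ...and every cell is recomputed, so linked values stay consistent.
            return {k: get(k) for k in cells}

        print(set_cell("C3", 10))   # {'C3': 10, 'D5': 4000, 'F7': 250}
        print(set_cell("C3", 12))   # raising C3 updates D5 and F7 automatically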

    His production professor loved the idea, as did Bricklin’s accounting professor. Bricklin’s finance professor, who had others to do his computing for him, said there were already financial analysis programs running on mainframes, so the world did not need Dan Bricklin’s little program. Only the world did need Dan Bricklin’s little program, which still didn’t have a name.

    It’s not surprising that VisiCalc grew out of a business school experience because it was the business schools that were producing most of the future VisiCalc users. They were the thousands of M.B.A.s who were coming into the workplace trained in analytical business techniques and, even more important, in typing. They had the skills and the motivation but usually not the access to their company computer. They were the first generation of businesspeople who could do it all by themselves, given the proper tools.

    Bricklin cobbled up a demonstration version of his idea over a weekend. It was written in BASIC, was slow, and had only enough rows and columns to fill a single screen, but it demonstrated many of the basic functions of the spreadsheet. For one thing, it just sat there. This is the genius of the spreadsheet; it’s event driven. Unless the user changes a cell, nothing happens. This may not seem like much, but being event driven makes a spreadsheet totally responsive to the user; it puts the user in charge in a way that most other programs did not. VisiCalc was a spreadsheet language, and what the users were doing was rudimentary programming, without the anxiety of knowing that’s what it was.

    By the time Bricklin had his demonstration program running, it was early 1978 and the mass market for microcomputers, such as it was, was being vied for by the Apple II, Commodore PET, and the Radio Shack TRS-80. Since he had no experience with micros, and so no preference for any particular machine, Bricklin and Bob Frankston, his old friend from MIT and new partner, developed VisiCalc for the Apple II, strictly because that was the computer their would-be publisher loaned them in the fall of 1978. No technical merit was involved in the decision.

    Dan Fylstra was the publisher. He had graduated from Harvard Business School a year or two before and was trying to make a living selling microcomputer chess programs from his home. Fylstra’s Personal Software was the archetypal microcomputer application software company. Bill Gates at Microsoft and Gary Kildall at Digital Research were specializing in operating systems and languages, products that were lumped together under the label of systems software, and were mainly sold to hardware manufacturers rather than directly to users. But Fylstra was selling applications direct to retailers and end users, often one program at a time. With no clear example to follow, he had to make most of the mistakes himself, and did.

    Since there was no obvious success story to emulate, no retail software company that had already stumbled across the rules for making money, Fylstra dusted off his Harvard case study technique and looked for similar industries whose rules could be adapted to the microcomputer software biz. About the closest example he could find was book publishing, where the author accepts responsibility for designing and implementing the product, and the publisher is responsible for manufacturing, distribution, marketing, and sales. Transferred to the microcomputer arena, this meant that Software Arts, the company Bricklin and Frankston formed, would develop VisiCalc and its subsequent versions, while Personal Software, Fylstra’s company, would copy the floppy disks, print the manuals, place ads in computer publications, and distribute the product to retailers and the public. Software Arts would receive a royalty of 37.5 percent on copies of VisiCalc sold at retail and 50 percent for copies sold wholesale. “The numbers seemed fair at the time,” Fylstra said.

    Bricklin was still in school, so he and Frankston divided their efforts in a way that would become a standard for microcomputer programming projects. Bricklin designed the program, while Frankston wrote the actual code. Bricklin would say, “This is the way the program is supposed to look, these are the features, and this is the way it should function”, but the actual design of the internal program was left up to Bob Frankston, who had been writing software since 1963 and was clearly up to the task. Frankston added a few features on his own, including one called “lookup”, which could extract values from a table, so he could use VisiCalc to do his taxes.
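
    The behavior of that sort of lookup is simple to sketch. The snippet below is a rough illustration of one way such a feature can work (the table and rates are invented; this is not Frankston’s code): given an amount, return the table value for the largest key that does not exceed it, which is exactly the shape a tax table needs.

        def lookup(amount, table):
            # Return the value paired with the largest key <= amount.
            result = None
            for threshold, value in sorted(table):
                if threshold <= amount:
                    result = value
                else:
                    break
            return result

        # Hypothetical brackets: (taxable income threshold, marginal rate)
        brackets = [(0, 0.10), (10_000, 0.15), (40_000, 0.25)]
        print(lookup(25_000, brackets))   # 0.15
        print(lookup(50_000, brackets))   # 0.25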

    Bob Frankston is a gentle man and a brilliant programmer who lives in a world that is just slightly out of sync with the world in which you and I live. (Okay, so it’s out of sync with the world in which you live.) When I met him, Frankston was chief scientist at Lotus Development, the people who gave us the Lotus 1-2-3 spreadsheet. In a personal computer hardware or software company, being named chief scientist means that the boss doesn’t know what to do with you. Chief scientists don’t generally have to do anything; they’re just smart people whom the company doesn’t want to lose to a competitor. So they get a title and an office and are obliged to represent the glorious past at all company functions. At Apple Computer, they call them Apple Fellows, because you can’t have more than one chief scientist.

    Bob Frankston, a modified nerd (he combined the requisite flannel shirt with a full beard), seemed not to notice that his role of chief scientist was a sham, because to him it wasn’t; it was the perfect opportunity to look inward and think deep thoughts without regard to their marketability.

    “Why are you doing this as a book?” Frankston asked me over breakfast one morning in Newton, Massachusetts. By “this”, he meant the book you have in your hands right now, the major literary work of my career and, I hope, the basis of an important American fortune. “Why not do it as a hypertext file that people could just browse through on their computers?”

    I will not be browsed through. The essence of writing books is the author’s right to tell the story in his own words and in the order he chooses. Hypertext, which allows an instant accounting of how many times the words Dynamic Random-Access Memory or fuck appear, completely eliminates what I perceive as my value-added, turns this exercise into something like the Yellow Pages, and totally eliminates the prospect that it will help fund my retirement.

    “Oh”, said Frankston, with eyebrows raised. “Okay”.

    Meanwhile, back in 1979, Bricklin and Frankston developed the first version of VisiCalc on an Apple II emulator running on a minicomputer, just as Microsoft BASIC and CP/M had been written. Money was tight, so Frankston worked at night, when computer time was cheaper and when the time-sharing system responded faster because there were fewer users.

    They thought that the whole job would take a month, but it took close to a year to finish. During this time, Fylstra was showing prerelease versions of the product to the first few software retailers and to computer companies like Apple and Atari. Atari was interested but did not yet have a computer to sell. Apple’s reaction to the product was lukewarm.

    VisiCalc hit the market in October 1979, selling for $100. The first 100 copies went to Marv Goldschmitt’s computer store in Bedford, Massachusetts, where Dan Bricklin appeared regularly to give demonstrations to bewildered customers. Sales were slow. Nothing like this product had existed before, so it would be a mistake to blame the early microcomputer users for not realizing they were seeing the future when they stared at their first VisiCalc screen.

    Nearly every software developer in those days believed that small businesspeople would be the main users of any financial products they’d develop. Markkula’s beloved accounting system, for example, would be used by small retailers and manufacturers who could not afford access to a time-sharing system and preferred not to farm the job out to an accounting service. Bricklin’s spreadsheet would be used by these same small businesspeople to prepare budgets and forecast business trends. Automation was supposed to come to the small business community through the microcomputer just as it had come to the large and medium businesses through mainframes and minicomputers. But it didn’t work that way.

    The problem with the small business market was that small businesses weren’t, for the most part, very businesslike. Most small businesspeople didn’t know what they were doing. Accounting was clearly beyond them.

    At the time, sales to hobbyists and would-be computer game players were topping out, and small businesses weren’t buying. Apple and most of its competitors were in real trouble. The personal computer revolution looked as if it might last only five years. But then VisiCalc sales began to kick in.

    Among the many customers who watched VisiCalc demos at Marv Goldschmitt’s computer store were a few businesspeople — rare members of both the set of computer enthusiasts and the economic establishment. Many of these people had bought Apple IIs, hoping to do real work until they attempted to come to terms with the computer’s forty-column display and lack of lowercase letters. In VisiCalc, they found an application that did not care about lowercase letters, and since the program used a view through the screen on a larger, virtual spreadsheet, the forty-column limit was less of one. For $100, they took a chance, carried the program home, then eventually took both the program and the computer it ran on with them to work. The true market for the Apple II turned out to be big business, and it was through the efforts of enthusiast employees, not Apple marketers, that the Apple II invaded industry.

    “The beautiful thing about the spreadsheet was that customers in big business were really smart and understood the benefits right away”, said Trip Hawkins, who was in charge of small business strategy at Apple. “I visited Westinghouse in Pittsburgh. The company had decided that Apple II technology wasn’t suitable, but 1,000 Apple IIs had somehow arrived in the corporate headquarters, bought with petty cash funds and popularized by the office intelligentsia”.

    Hawkins was among the first to realize that the spreadsheet was a new form of computer life and that VisiCalc — the only spreadsheet on the market and available at first only on the Apple II — would be Apple’s tool for entering, maybe dominating, the microcomputer market for medium and large corporations. VisiCalc was a strategic asset and one that had to be tied up fast before Bricklin and Frankston moved it onto other platforms like the Radio Shack TRS-80.

    “When I brought the first copies of VisiCalc into Apple, it was clear to me that this was an important application, vital to the success of the Apple II”, Hawkins said. “We didn’t want it to appear on the Radio Shack or on the IBM machine we knew was coming, so I took Dan Fylstra to lunch and talked about a buyout. The price we settled on would have been $1 million worth of Apple stock, which would have been worth much more later. But when I took the deal to Markkula for approval, he said, ‘No, it’s too expensive’”.

    A million dollars was an important value point in the early microcomputer software business. Every programmer who bothered to think about money at all looked toward the time when he would sell out for a cool million. Apple could have used ownership of the program to dominate business microcomputing for years. The deal would have been good, too, for Dan Fylstra, who so recently had been selling chess programs out of his apartment. Except that Dan Fylstra didn’t own VisiCalc — Dan Bricklin and Bob Frankston did. The deal came and went without the boys in Massachusetts even being told.

    Reprinted with permission

  • Rackspace buys its way into MongoDB market with ObjectRocket

    Rackspace is buying its way into the hot MongoDB database market with its acquisition of ObjectRocket, a year-old provider of cloud-based MongoDB services.

    Chris Lalonde, co-founder and CEO of ObjectRocket

    The deal, whose terms were not disclosed, shows that major cloud infrastructure providers need to offer an array of database options — Rackspace already offers MySQL but Amazon Web Services offers a full slate of databases and managed databases. In December, SoftLayer launched hosted MongoDB as a service it developed with 10gen.

    “Mongo is breaking away from the pack and our customers are asking for it,” said Pat Matthews, SVP of corporate development for Rackspace, San Antonio, Texas. He said the company could have built its own version of the open-source database or partnered with a MongoDB provider — but was impressed with the expertise of the ObjectRocket co-founders Chris Lalonde, Erik Beebe and Kenny Gorman, who between them spent years at eBay, PayPal, Shutterfly and AOL.

    ObjectRocket characterizes its offering as MongoDB as a service, meaning that users don’t have to sweat a lot of the set-up nitty gritty. It competes with rivals like MongoHQ and MongoLab.

    “The primary difference between us and other database-as-a-service companies is we built out our cloud rather than layer on top of general platforms,” Lalonde said in an interview. “We built a cloud platform from the ground up specifically for MongoDB, we went to Equinix and did our own hardware platform and tuned the OS and the rest of the stack for Mongo in a way that enables us to get great performance and also have a more highly available system.”

    Of course that means integrating it into the Rackspace platform will take time, which is fine with Rackspace, according to Matthews. “The offering as it stands will exist for a while till we can figure out the best ways to integrate it. We will maintain or improve performance and we won’t rush to integrate it at the expense of what we have now.”

    Critics could argue that Rackspace is late to this party given the database options Amazon Web Services has, but then again, we’re still pretty early in the cloud deployment game.

  • Is your hard drive in shape? BenchMe

    If your PC has multiple drives – or even just several USB keys — then you’ve probably already decided exactly how each drive is going to be used. But are you sure that decision is correct? Do you know which drive is the fastest, for instance? If that might make a difference, then BenchMe is a simple and free device benchmarking tool which may be able to help.

    The program comes in a very small download (703KB), which unfortunately then requires installation. We’re not quite sure why — it looks like the kind of tool which could very easily be portable — but at least there’s no adware or other dangers to worry about.

    On launch BenchMe presents you with an extremely basic interface, which essentially consists of 5 buttons and an (initially blank) report screen. And so, while we’d normally complain about the lack of documentation — there’s no Help file, no Readme.txt, not even any tooltips — in this case you really don’t need any at all.

    All you really have to do is click the arrow to the right of the Start button, and choose the device or drive you’d like to benchmark. And that’s it, your work is done — you can now sit back and watch as BenchMe begins its checks.

    This simplicity doesn’t mean the program is short on features, though. It’ll start by giving you the model name of your drive, for instance. And then it’ll itemize your drive’s capabilities, so if you need to know whether it supports S.M.A.R.T., Automatic Acoustic Management, Native Command Queuing, Tagged Command Queuing, TRIM and so on, you can find out at a glance.

    BenchMe measures some values, too. It’ll tell you the drive’s minimum, maximum and average access time, for example. And you’ll see the number of IOPS (I/O Operations Per Second) the drive can handle, for queue depths of both 1 and 32.

    Perhaps most usefully, you’ll also get a graph which shows you the linear read speed and how it varies across the surface of the drive, with the minimum and maximum speeds highlighted.
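
    For the curious, the underlying measurement is easy to approximate. The sketch below is a rough illustration in Python, not BenchMe’s actual method: it times fixed-size reads at several offsets across a file and reports the throughput of each sample (the path is a placeholder, and operating-system caching will flatter the numbers on repeated runs).

        import os
        import time

        PATH = "large_test_file.bin"   # placeholder: point at a large file you own
        BLOCK = 1024 * 1024            # read 1 MiB per sample

        def read_speeds(path, samples=8):
            size = os.path.getsize(path)
            speeds = []
            with open(path, "rb", buffering=0) as f:
                for i in range(samples):
                    # Spread the sample offsets across the whole file.
                    offset = max((size - BLOCK) * i // max(samples - 1, 1), 0)
                    f.seek(offset)
                    start = time.perf_counter()
                    f.read(BLOCK)
                    elapsed = time.perf_counter() - start
                    speeds.append(BLOCK / elapsed / (1024 * 1024))  # MiB/s
            return speeds

        if __name__ == "__main__":
            for i, mib_s in enumerate(read_speeds(PATH)):
                print(f"sample {i}: {mib_s:.1f} MiB/s")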

    And when it’s all done, you’re able to print the results immediately, or copy them to the clipboard (in various sizes) for further processing elsewhere.

    We did have one problem with BenchMe. For some reason it was unable to detect our drive capabilities, and so they were all greyed out on the report screen.

    We’re not sure whether that’s a general issue, though, or just something specific to our hardware setup. And even without that feature, BenchMe is a handy benchmarking tool, straightforward and easy to use.

    Photo Credit: Sarah Cheriton-Jones/Shutterstock

  • Horsemeat scandal and global processed food suppliers linked to arms trafficking

    The unfolding horse meat outrage currently engulfing Europe has taken on a new, bizarre twist, according to investigators. Key figures involved in the unfolding scandal have been linked to a similar secretive network of companies that have ties to convicted Russian arms…
  • Flu vaccine warning and natural solutions

    First of all, you should know that the flu vaccine is based on “educated” guessing, not good science. In addition, these flu shots are created by mixing various flu virus strains (toxic germs) with formaldehyde, MSG, sodium chloride and mercury. And, here’s the kicker…
  • Explosive report: 98% of newborn babies are genetically screened

    “Newborn Screening in America,” a report from the Council for Responsible Genetics, states: “Before they are even a week old, ninety-eight percent of the 4.3 million babies born annually in the United States have a small sample of blood taken from their heels.” The…
  • Get rid of acne by ridding your diet of high glycemic foods and dairy

    If you’re looking to clear up your problem complexion – specifically, your acne problem – new research suggests you may be able to do so simply by implementing some changes in your diet. A study published recently in the Journal of the Academy of Nutrition and Dietetics…