Blog

  • The Lobbyists’ Ability To Control The Message

It certainly won’t come as much of a surprise to readers around here that lobbyists from Roche/Genentech were able to get 42 different members of Congress to include text the lobbyists had written into the Congressional Record. For far too long, we’ve seen how much politicians rely on lobbyists to write the legislation, create the talking points and (at times) even field questions on their behalf. Is it any wonder that lobbyists have become the new celebrities?

But what is rather stunning about the NY Times story on how Genentech’s talking points made it into the record (with multiple Congressional reps using the exact same language) is how unconcerned everyone is about it. The lobbyists wrote up talking points for both sides of the aisle. The goal wasn’t to support or oppose the current healthcare bill, but simply to get these Congressional reps “on the record” supporting key concepts, so that those same lobbyists can later point to such “bipartisan” support, even if the Congressional reps themselves don’t even know what they’re talking about.

The NY Times talked to a bunch of Congressional offices about this, and they all seem to admit freely that the language came from Genentech lobbyists and that they incorporated it directly (sometimes with a few minor changes) into the remarks that get put into the Congressional Record. This isn’t the fault of Genentech or its lobbyists — who, of course, are going to push for such things. The really damning part is that none of these Congressional reps seem to think there’s any problem with taking text directly from a company and putting it into their own remarks as if they agreed with the concept, when half the time they don’t even seem to understand what they’re saying. These sorts of Congressional remarks are often later used to show “Congress’ intent” in doing certain things. Perhaps they should just start being upfront and honest about the fact that these remarks are “the industry’s intent,” and simply sign them with the names of the companies that actually wrote the language (or at least tag the remarks with the name of the company or industry group that wrote them).






  • Starting ’em young: Crytek teams up with universities, aims to ‘hook’ students on CryENGINE 3

Crytek has just teamed up with universities yet again to push its technology, CryENGINE 3. The goal is simple: get students hooked. Crytek’s R&D …

  • Send your freeze reports straight to Sony, thanks to Firmware 3.10

With Firmware 3.10 officially out, Sony can now receive your report directly in case your PS3 encounters some major errors, say, unwarranted freezes…

  • Final Fantasy XIII rated by ESRB

It’s not as much of a shocker as, say, Cactuar being already taken, but it’s news just the same. The ESRB has finally come out with its rating…

  • Mushroom Caps Stuffed with Eggplant, Garlic, Onion and Honey Habanero


One of the appetizers I made for a book club I catered was this batch of delicious mushroom caps stuffed with eggplant and sautéed garlic and onion, with a kick of honey habanero mustard and a secret ingredient. The stuffed mushrooms had a nice zest without being hot.

For the guests, I had to take into consideration vegan diets as well as dairy and egg allergies. This appetizer is vegan, but I topped half the mushrooms with some Parmesan cheese. You could also use Vegan Parmesan.


    Ingredients (makes 14 mushrooms):

    • 14 large button mushrooms
    • 1/2 large eggplant
    • 1/4 onion
    • 5 cloves of garlic
    • 2 tbsp olive oil
    • 1/2 tbsp Dave’s Habanero & Honey mustard
    • Parmesan cheese or Vegan Parmesan
    • (optional secret ingredient) 1 tbsp Paradise Bakery Tomato Pineapple spread mixed with 1 tbsp mandarin orange or pineapple juice

    Let’s get cooking

    • Preheat the oven to 350 degrees.


• Dice the onion and garlic. Cut the eggplant into 1/2-inch pieces so it cooks faster.
    • Wash the mushrooms and remove the stems.
• Put 2 tbsp olive oil in a pan and heat on medium. When hot, sauté the garlic and onion until they’re sweaty, then stir in the Dave’s Honey Habanero mustard. If you want more heat (spice), add more of the Dave’s mustard. I used about 3/4 tbsp.
    • After stirring in the Dave’s mustard, toss in the eggplant and stir until the eggplant cooks. You might have to add another 1 tbsp of olive oil or 1 tbsp of water to help cook the eggplant. The eggplant is pretty absorbent. The cooked eggplant should be soft but not mushy.


• Take the eggplant off the burner and put it into a bowl to let the mix cool down for about 20 minutes.
• While the eggplant is cooling, massage the mushrooms with olive oil. Place a Silpat on a baking sheet, and bake the mushrooms at 350 degrees for 3 minutes to soften them up a bit, but not too much.


    • Take the mushrooms out of the oven to cool down, and then turn up the oven heat to 400 degrees.


• When cool, put the eggplant in a 3-cup food processor and blend until chunky. Pour into a bowl and mix in the Paradise Bakery Tomato Pineapple spread. You don’t need to use this special Paradise Bakery spread; it’s just my personal twist. In fact, you might want to try a little marinara sauce mixed with pineapple juice.
    • Stuff each mushroom with the eggplant mix.


    • Put a Silpat on a baking sheet, and arrange the mushrooms about one inch apart. Sprinkle the cheese on top of the mushroom caps or leave the cheese out.
    • Bake the mushrooms at 400 degrees for 12 minutes.
• Take the mushrooms out of the oven and let them cool for about 15 minutes before serving.

    Enjoy!


  • New primary care practice holds groundbreaking ceremony

    Practice to help address nationwide physician shortage

Bloomington, Ind. (November 20, 2009) – Today, the Southern Indiana Medical Group (SIMG) held a groundbreaking ceremony for its newest primary care practice to be located in Bloomington.  The practice, set to open in summer 2010, will be located at 1302 South Rogers Street, near the intersection of Rogers and Patterson Streets.

    “Nationally, our health care system faces a shortage of primary care physicians.  Estimates say in 2010 the shortage will be approximately 54,000 physicians.  By 2025, the Association of American Medical Colleges (AAMC) says the physician shortage will grow to between 124,000 and 159,000 physicians,” says Mark Moore, president & CEO of Bloomington Hospital.  “Today’s groundbreaking is moving our community one step closer to increasing the number of primary care physicians available in Bloomington and Monroe County.  We are excited to be able to offer a new primary care practice in this redevelopment area.”

    Primary care physicians, most often specializing in family medicine or internal medicine, provide patients with a “medical home” and are often the entry point into the health care system for many patients.  The primary care physician is the person who helps patients coordinate their care among specialists as needed, and follows them throughout their lifetime.  And, as the population ages, the need for additional primary care physicians will continue to grow.

    “Primary care medicine is now taking center stage locally and nationally to provide some of the missing pieces in our national health care system. This includes a return to having the patient at the center of attention, in a setting that facilitates partnerships between patients, their physicians and their families,” says Lee McKinley, M.D., a Bloomington physician board-certified in internal medicine and critical care medicine who will practice at the new facility.

    The Southern Indiana Medical Group’s new practice will have four to five primary care physicians in addition to nurse practitioners, nurses and other health care professionals.  First Capital Group is developing the new space, which was designed by Bynum Fanyo and will be built by Fox Construction, both local firms.

    Bloomington Hospital operates four primary care physician practices through the Southern Indiana Medical Group.  For more information about the Southern Indiana Medical Group, please visit bloomingtonhospital.org/simg.

    ###

Bloomington Hospital has been innovative in providing quality care to south central Indiana communities for more than a century. Offering a comprehensive continuum of care, Bloomington Hospital is a not-for-profit organization and has a patient base of 413,000 in 10 counties (Brown, Daviess, Greene, Jackson, Lawrence, Martin, Monroe, Morgan, Orange and Owen). Bloomington Hospital currently operates two hospital campuses (Bloomington and Orange County) with regional specialty offerings for Heart and Vascular, Behavioral Health, Cancer, Women and Children, Neurology and Orthopedic services.  As a leading hospital in Indiana, Bloomington Hospital enhances health by advancing the art and science of medicine through the use of new technologies, procedures and care.

  • Tuning out the tablet: Time to give the endless speculation a rest

    By Carmi Levy, Betanews


    I’m sure I’m not the only one who looks at renderings of Apple’s long-rumored tablet – or iTablet, or whatever name the faithful have assigned to it this week – and wishes the FedEx truck would pull up to my door with an early demo in time for the holiday season. I’m sure I’m also not the only one who’s ready for the endless speculation to, well, end.

    I don’t think I’ve ever seen an unreleased product generate so much discussion without so much as a peep from the vendor of record. I realize the frenzied speculation is as frenzied as it is because we’re talking about Apple, and that if this were any other company, we’d collectively yawn our response before moving on to the next big thing. This is a company that seems unique in its ability to generate so much activity around what is, for now at least, vapourware. And while I appreciate the value of healthy exchanges in advance of a major product launch, I can’t shake the feeling that the never-ending iTablet fever is just a little much, and that we’d all be doing ourselves a favor by giving it a rest and waiting until Apple actually ships a working product.

    Like junk food – great taste, not so healthy

    Don’t get me wrong: This is all good for Apple. Once again, without spending so much as a dime on advertising, Apple has managed to keep its corporate brand front and center in both tech and mainstream media. It hasn’t had to use its PR firm retainers to pre-announce anything because breathless conventional and social media folks have been perfectly willing to share their so-called news on the company’s behalf. Any other vendor would sell its first, second and third born offerings to have even a fraction of this kind of market visibility and influence. Decades from now, when media mastery is taught in institutions of higher learning, Apple’s ability to time and again conjure a deafening buzz around things that may or may not see the light of day will serve as an iconic case study.

    But is any of this good for us? Is it helpful or hurtful for consumers and wannabes alike to spend days on end hovering over blog entries, twittering madly or debating in online forums? These activities in and of themselves are the sign of a healthy community, of course, and are crucial to giving vendors the kind of insight they need to continue to deliver market-relevant products and services. But has the uber-hype that seems to follow Apple around – and that has seemingly impossibly shifted into an even higher gear for the iTablet – finally reached the point of diminishing returns?

    Too much of anything isn’t good for you

    I’m going to argue that the hype has gone well beyond the point at which it adds any value to our collective lives. We’re working ourselves into a tizzy over something we know nothing about. We don’t know what OS this thing will run, how large it will be, what kind of screen it will have and how much it’ll cost. We’ve seen lots of beautifully rendered images of it and heard a near endless string of confirmed – then scotched – confirmations of imminent component orders and production. And as much fun as it is to bat around possibilities, it hardly seems like a productive way to spend time.

    That’s because while we’re all breathlessly sharing thoughts and opinions – but precious few facts – on a mysterious device that we now won’t apparently see until late next year, we continue to be challenged with more mundane needs, like using technology that’s available today to keep customers happy, our bank accounts filled and our lights on. I have no issue gazing into the collective crystal ball as a means of informing the kinds of decisions we need to make either today or in the near future. Keeping at least half an eye on what’s coming is one way of avoiding nasty surprises and keeping one step ahead of everyone else. But when said crystal ball becomes our sole focus of conversation, I’d like to humbly suggest that we’ve gone too far. Balance matters here, too, and if we’re spending all our time discussing a mythical product that’s close to a year from possibly seeing the light of day, we’re missing the significance of today.

    We’re missing the real point

    In a way, it’s a little disappointing that the enormous halo cast by this not-quite-a-product product eclipses the real issue at hand: that vendors have for the better part of the last decade failed to convince consumers that they should pony up for devices in the empty space between pocketable mobile devices and laptops. UMPCs and later MIDs failed to gain any traction thanks to low value propositions and ridiculous pricing. Netbooks have come close, thanks largely to their just-good-enough-for-the-purpose performance, conveniently portable form factor and recession-friendly price point. Timing has also helped netbooks carve out a niche, as their short range wireless and, increasingly, carrier-supported 3G connectivity gives them mobile capabilities that earlier, less well-connected devices could only dream of. Increasingly Web-centric application models don’t hurt, either.

    The success of the netbook is giving rise to new forms of devices and revenue models that could – maybe – finally fill in the veritable valley of death that has already claimed so many mid-sized, mid-priced form factors. While it’s unclear where Apple’s product will ultimately fit, it’s hardly a big story until the company actually moves closer to marketing the thing. Until then, every other competing vendor has just gotten a bit of additional breathing room to figure out what resonates with consumers before Apple satisfies the fanboys and finally introduces its tablet. Or whatever it is.

Until then, count me among the cynics who really don’t care whether it has an OLED or a TFT screen, whether it’s released as one product or two, or whether it costs $2,000 or half that. Only when we begin to see actual data points will we be able to decide whether it’s worth pulling the plastic out of our wallets. For now, even Apple is capable of overstepping the limits of my patience.

    Carmi Levy is a Canadian-based independent technology analyst and journalist still trying to live down his past life leading help desks and managing projects for large financial services organizations. He comments extensively in a wide range of media, and works closely with clients to help them leverage technology and social media tools and processes to drive their business.

    Copyright Betanews, Inc. 2009






  • In Going Free, London Evening Standard Doubles Circulation While Slashing Costs

    In October, we wrote about how, just as Rupert Murdoch and crew look to put up paywalls for online content, the operators of the London Evening Standard were going in the other direction and making their physical paper free. So, how’s that been working out? mowgs alerts us to the news that the paper has doubled its circulation in just a month. Not bad. But what’s more interesting is that it’s also slashed its distribution costs massively. It used to cost about 30p, and now it’s just 4p per paper.

    This actually brings up a point that’s rarely talked about in the free vs. paid debate. Charging can be expensive. It takes quite a bit of effort to charge, to take money, to manage the money, to set up the accounting and bureaucracy for managing each transaction. And, even worse, if you’re working with third party distributors, like news agents, then you have to handle financial relationships with them as well. Getting rid of the per paper price changes the economics not just on the revenue side, but on the cost side as well — something that’s rarely discussed at all. And, yes, this impacts online news orgs too. Putting up a paywall is going to prove a lot more expensive than most people think on the cost side.






  • Etsy Find: Stylish Cat Beds from Apartment C PLUS BONUS GIVEAWAY

    Apartment C Cat Beds

    I’m absolutely in love with the beautiful designer fabrics used on these pet beds from Etsy seller Apartment C. There are so many to choose from! Each bed is handmade by Alling Welsh. The beds measure 18 inches, perfect for a cozy cat nest!

    Handmade Pet Beds from Apartment C

    I love these beds in the black and white patterns. Very mod! And the kitties below sure do seem to be enjoying their beds.


    BONUS GIVEAWAY! ENTER TO WIN! TWO WINNERS!

Alling is offering not one, but two beds for this giveaway! And here’s the hard part: the winners will get to choose whichever bed they want! Oh, that will be difficult! To enter, please take a look at the fabric selections in the Apartment C Etsy shop, then come back here and leave a comment on this post with the name of your favorite design. The winners will be chosen in a random drawing on November 27. One entry per person. This giveaway is open to US addresses only.


  • Invincible shield without the invincible

ZAGG is one of the most popular makers of full-body screen protectors for consumer electronics out there, and they are back with something new and unique. The new ZAGGskins is the highly advanced, unscratchable invincible shield with the added customization of a skin.

This new product is set to be launched tomorrow and will have everything a skin should have, with the protection of a full-body protector.

On Friday the 20th, visit ZAGG to get yours.

    Thanks For the Details


  • Smart Marketing = Greener Printing for J. C. Penney

    One of the terrific things about greening a print marketing program is that many of the best practices in marketing today have “green” as a by-product.

    Take the example of J. C. Penney, which made marketing headlines today when it announced that it would be discontinuing its semi-annual Big Book catalog after the Fall-Winter 09 season. Over the years, J. C. Penney was finding that its catalog was less a direct selling channel than a way to prime the pump for online sales. Instead of wasting volumes of paper, ink, and coating — not to mention the fossil fuels to deliver the 800-1000-page books — it decided to slim things down.

    Read more of this story »

  • Battlefield: Bad Company 2 US Beta client now available for download, servers still down

    The PS3-exclusive beta for DICE’s Battlefield: Bad Company 2 (game also on Xbox 360 and PC) is already available for download, so get it while it’s st…

  • CitiKitty Giveaway Winners


    It looks like a lot of people are excited about toilet training their cats! Out of 599 entries, the two lucky winners of the CitiKitty giveaway are:

    Not only do Cindy and Kimberly each win a CitiKitty Cat Toilet Training Kit, but they both get to go on a $100 shopping spree at www.CittyKitty.com! Have fun checking out all the cool cat products!

  • Media Create hardware sales: November 9 – 15, 2009

     And now for our weekly report on the hardware sales from Japan. Media Create has released the numbers for the week of November 9th through Novem…

  • Google Books Settlement 2.0: Evaluating Competition

    This is the third in a series of posts about the proposed Google Book Search settlement.

Now that we’ve described the proposed settlement agreement’s biggest potential upside for the public—expanded online access to books, particularly out-of-print books—that benefit must be weighed against the potential downsides. On that score, the settlement’s potential impact on competition in the online book market has loomed large. Critics of the settlement have emphasized two principal dangers:

    1. The potential for a Google monopoly over orphan and unclaimed books.
    2. The potential for monopolistic pricing of the Institutional Subscription Database, particularly for higher education.

    The revised Settlement 2.0 made little or no effort to address these concerns, leaving it to Congress or antitrust authorities to fix later.

    A Google Monopoly on Orphan & Unclaimed Books?

    At the heart of the proposed settlement is a bargain that lets Google (and only Google) leapfrog the problem of “unclaimed works”—books whose copyright owners cannot be found or whose owners can’t be bothered to fill out paperwork for a small payment disbursed by the Registry (consider how many “class action” notices you’ve tossed in the trash unread). Thanks to the magic of the class action process, the settlement solves this problem by resolving the copyright claims of these otherwise unreachable copyright owners and designating all of their works by default as available for “Display Uses” by Google. In other words, so long as no one steps forward to claim these books, Google (and only Google) has a license to make them available in all the ways the settlement allows.

    Many who filed objections to the proposed settlement, including the Department of Justice, Microsoft, Amazon.com, the Internet Archive, and Public Knowledge, among others, argued that this could create a de facto Google monopoly over online use of these unclaimed works. And while the revised Settlement 2.0 creates an “Unclaimed Works Fiduciary” (UWF) to act as a guardian on behalf of owners of unclaimed works, neither the UWF nor the Registry has the power to grant a similar license to any other entity that might want to make the same kinds of uses that Google will be entitled to make under the settlement.

    Nobody likes this “only-for-Google” aspect of the settlement—in fact, Google has said that it would support orphan works legislation that would empower the Registry to make the same deal (or even a better deal) with others who want to use these unclaimed works. (Where the claimed books are concerned, in contrast, the Registry will likely ask the rightsholders to appoint it to license companies other than Google. But that still leaves all the unclaimed books out.) The settlement agreement even has a provision that makes it clear that the UWF can license others “to the extent permitted by applicable law”—what amounts to an “insert orphan works legislation here” invitation.

    But absent some legislative supplement to the revised Settlement 2.0, it still seems that any other company would have to scan these books, get sued, and hope for a class action settlement. That, of course, is the kind of barrier to entry that any monopolist would envy.

    This raises a worthy question: if legislation is necessary to fix the competition problem posed by the settlement, then why do we need a class action settlement in the first place? Why not solve what seems like a quintessentially legislative problem with legislation, instead? (As Amazon points out, that’s exactly what was done when music publishers brought a class action against the first digital audio tape (DAT) recorders).

    Here’s where realpolitik enters the equation. Google correctly points out that Congress has been working on orphan works legislation for years, to no avail. And none of the legislative proposals came close to the comprehensive solution embodied in the proposed settlement. So the question boils down to a political one: do you believe that approval of Settlement 2.0 will make orphan works legislation more likely, or less likely? Without a crystal ball, it’s hard to know.

    Monopoly Pricing of the Institutional Subscription Database?

    One of the commercial services that Google is authorized to provide under the proposed settlement is the “Institutional Subscription Database” (aka “ISD”), which will provide “all-you-can-eat” access to the corpus of scanned books. The chief customers for the ISD are likely to be universities (the same folks who are providing Google with the books to be scanned), for whom instant digital access to every word in every book in Google’s collection is likely to be very compelling.

    The big question is whether, over time, the ISD will become the one database that no university can do without, and the one database with no market substitute (again, because Google will be the only company who can provide a comprehensive corpus without fear of copyright liability, for the reasons explained above). This, of course, is a recipe for monopolistic price gouging, as a group of academic authors led by Prof. Pam Samuelson have pointed out. Over time, universities could face spiraling prices as Google and the Registry conspire to maximize their revenues on the ISD product.

    Google and its supporters respond by pointing out that the settlement requires that pricing for the ISD be set with regard to “two objectives: (1) the realization of revenue at market rates for each Book and license on behalf of Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.” The settlement goes on to promise that Google and the BRR “will use the following parameters to determine the price of Institutional Subscriptions: pricing of similar products and services available from third parties, the scope of Books available, the quality of the scan and the features offered as part of the Institutional Subscription.”

    But Google’s own people have reportedly admitted that there might not be any “similar products and services” to the ISD. And the settlement does not give ISD subscribers the right to go to court to enforce these “objectives” and “parameters.” Instead, Google has entered into “side agreements” with some of its major library partners (U. of Michigan, U. of Wisconsin—both of which will be receiving subsidies from Google for their ISD fees) that allow only those institutions to challenge pricing, and only under certain circumstances. So what we are left with is a “trust us” from Google, the Registry, and their biggest library partners.

    Of course, the chances of this coming to pass are hard to know in advance. As we have pointed out, if many large publishers pull their books out of the ISD database, then perhaps the ISD service won’t become indispensable to universities after all. So, ironically, the more successful the ISD proves to be, the more of a danger its pricing mechanism might prove to be for higher education.

    Fixing the Competition Problem

    Just because the proposed Book Search settlement isn’t good for competition doesn’t mean it’s illegal. There is a robust debate going on (see, e.g., articles by Picker, Elhauge, Fraser, Lemley, and Picker again) about whether the proposed settlement might violate antitrust laws, and the Antitrust Division of the Department of Justice will doubtless continue its investigation.

    But we shouldn’t be satisfied with antitrust law here. This is not just a simple market transaction between commercial entities. Google is building an enormously important public resource, a task it can only undertake with the blessing of a federal court. The public deserves a solution that is not “barely legal,” but that instead encourages real, robust competition. As written, without some modification or legislative adjunct, Settlement 2.0 does not do that.

  • New Fast Company: The Meowtrix

    I CAN HAS SINGULARITY?

    My new Fast Company essay is now up, looking at the news that IBM researchers have produced a cortical computing system with the connection complexity of a cat’s brain. (My original title is shown here on the illustration; the replacement title is a bit inaccurate and I’ve suggested a replacement, so let’s just move along.) It’s a follow-up to the research from a couple of years ago on a mouse-scale brain simulation; we’re still on-target for a human-level brain connection simulation by 2020.

    All of the stories about this, including my own, have emphasized the cat brain aspect, but in reality the truly nifty development is the improved ability to map brain structures using advanced MRI and supercomputer modeling.

    Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated “Blue Matter” project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections. The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.

    Science fiction writers will have a field day with this, especially if they develop a way to “write” neural connections, and not just read them. Brain back-ups? Shadow minds in a box, used to extract secret knowledge? Hypercats, with brains operating at a thousand times normal speed? The mind reels.

    The phrase “shadow minds” should be familiar to anyone who read the Transhuman Space game books — this is almost exactly what the game talked about, and on an even more aggressive schedule!

  • The Statinator Paradox

    Pity the poor lipophobes and statinators.  They’ve just taken another grievous wound to their favorite theory and haven’t even got sense enough to know it.  In fact, not only do they not have sense enough to realize they’ve taken the hit, they’re actually crowing about it.

    The current issue of the Journal of the American Medical Association (JAMA) has an article titled Trends in High Levels of Low-Density Lipoprotein Cholesterol in the United States, 1999-2006 that puts another major dent in whatever validity remains of the lipid hypothesis of heart disease.

    I’m going to start categorizing the types of findings published in this paper under the rubric of The Statinator Paradox.  I find it interesting that whenever scientists discover data that shows the opposite of what their hypotheses predict, they don’t conclude that their hypotheses might be wrong; instead they deem the contradiction a ‘paradox’ and bumble on ahead with their hypotheses intact.

    The lipophobes hold the hypothesis dear that saturated fat causes heart disease.  When the data began to surface that the French eat tons more saturated fat than do Americans yet suffer only a fraction of the heart attacks, the French Paradox was born.  Nothing wrong with our hypothesis, it’s just those pesky French people who are somehow different.  It’s a By God paradox, that’s what it is.

    Same thing happened with the Spanish.  Researchers looked at the food consumption data in Spain and discovered that Spaniards had been eating more meat, more cheese and more dairy while decreasing their consumption of sugar and other carbohydrate-rich foods over a 15-year period.  And, lo and behold, during this same period, stroke and heart disease rates fell.  Can’t be.  Saturated fat causes all these things.  But the data show…  Thus came the Spanish Paradox.

    Statinators and lipophobes believe with all their little fat-free hearts that LDL-cholesterol is bad and is the driving factor behind heart disease.  So whenever I come upon data that gives the lie to this notion, I’m going to start calling it the Statinator Paradox.

    This JAMA paper is a classic case of the Statinator Paradox.

    Researchers using the NHANES data looked at the change in the prevalence of elevated LDL cholesterol and found that it fell substantially from 1999-2000 to 2005-2006.  In a period of about six years the prevalence of high LDL cholesterol dropped by a third, which is a lot of drop in a fairly short period of time.

    And since everyone knows that high LDL cholesterol causes heart disease, it should go without saying that during this same time period there occurred a significant decrease in the prevalence of heart disease.  Right?  Uh, well, no, not really.  If anything, the prevalence of heart disease actually increased.  But not to a statistically significant degree.  So statistically there was no difference in the prevalence of heart disease during a time in which high LDL cholesterol levels were falling.  But if high LDL cholesterol causes heart disease…? It’s the ol’ Statinator Paradox writ large.

    It was fun reading this paper because a basically fairly simple project was cloaked in all the regalia of academia and academic speak.

    It starts out with a great opening sentence that is a paragon of academic weaselry:

    High total blood cholesterol is recognized as a major contributing factor for the initiation and progression of atherosclerosis.

    Recognized?  What does that mean?

    I could substitute words in this sentence and come up with the following:

    The policies of Barack Obama are recognized as a major contributing factor in the initiation and progression of socialism in America.

    What does that mean?  Depends upon whom you say it to.  If I were to shout this sentence at a Sarah Palin campaign event, I would be cheered loudly.  If I said it at a Nancy Pelosi event, I would be tarred and feathered.  Since the ‘truth’ of the sentence is a function of the bias of the person hearing it, it’s not a meaningful sentence.  As written, the sentence doesn’t mean squat, which makes it perfect for academic writing.

    The authors, I’m sure, are believers in the lipid hypothesis but just can’t muster the gumption to write ‘high total blood cholesterol IS a major contributing factor…’  Instead they use the word ‘recognized,’ which makes the sentence meaningless and lets them off the hook should the lipid hypothesis ever blow up in their faces.

    In setting up the study, the researchers went through a lot of rigmarole to allocate subjects to three different categories depending upon their degree of risk for developing heart disease.  In determining this risk, researchers used the Framingham risk equation, which relies to a great extent on cholesterol levels to allocate that risk.  Which is strange since the Framingham Study has never shown elevated cholesterol to be a risk factor for heart disease.

    Once subjects were divvied into these three groups, the researchers measured LDL-cholesterol levels and calculated what percentage of subjects in each group had high LDL-cholesterol levels.  The threshold as to what was high varied as a function of the risk level of the group as a whole.  The bar for what was high was lowest in the high risk group and highest in the low-risk group.  In other words, if subjects had multiple risk factors, then an LDL-cholesterol level of anything over 100 mg/dl was considered ‘high,’ whereas in subjects in the lowest risk category, an LDL-cholesterol level over 160 was considered ‘high.’

    Researchers calculated as a percentage the number of subjects who had high LDL-cholesterol in each risk group and did the calculations again six years later.

    The weighted age-standardized prevalence of high LDL-C levels among all participants and among participants in each ATP III risk category decreased significantly during the study periods.

    Which is what they were crowing about.  Our therapy dramatically decreased the number of people at risk for heart disease.

    But as for heart disease itself:

    No significant changes were observed in the prevalence of CHD or CHD equivalents from 1999-2000 to 2005-2006.

    So what did our researchers conclude from the fact that there were one third fewer people with high LDL-cholesterol yet there was no decrease in heart disease?

    They concluded the obvious.  Two-thirds of people still had LDL-cholesterol levels that were too high.  And, no doubt, these people were not on statins.

    Don’t believe me?  Here it is in their own words.

    However, our study found that almost two-thirds of participants who were at high risk for developing CHD within 10 years and who were eligible for lipid-lowering drugs were not receiving medication.

    So, let me see if I’ve got this straight.  This study shows no evidence that lowering LDL-cholesterol levels decreases the prevalence of heart disease.  And what we conclude from this data is that we simply need to treat more people.  Brilliant!

    As I was reading this paper online, I got a bing alerting me that I had an email from Medscape bringing me the latest in mainstream medical thought.  I opened the email and began scrolling through the various articles displayed when my eye fell on one titled “Lipids for Dummies.”

    I clicked on it, and what opened was a video of a statinator of the deepest dye interviewing an alpha statinator about how to best deal with the risk of heart disease.

    It was unbelievable.

    Here in a short interview is everything that is wrong with mainstream medicine today.  We have two influential doctors at the pinnacle of their academic and clinical prowess – no doubt on the payrolls of multiple pharmaceutical companies – who are absolutely full of themselves blathering on about expensive treatments that have no true scientific grounding.  And their BS is being disseminated to practicing doctors everywhere. Instead of ‘Lipids for Dummies,’ this interview should have been called ‘Dummies for Statins.’

    Watch and just shake your head.


    These guys aren’t really talking about reducing the risk for heart disease or early death; they’re discussing how to use extremely expensive medications that are not particularly benign to treat lab values.  As I’ve written countless times, statins can quickly and effectively treat lab values, but there is little evidence they treat much else.  So if you want to have lab values that are the envy of all your friends, statins are the way to go.  But if you want to really reduce your risk for all-cause mortality, you might want to think twice before you sign up for a drug that will cost you (or your insurance company) $150-$250 per month, make your muscles ache, diminish your memory and cognition, and potentially croak your liver.

    If you wonder who underwrites these kinds of interviews, take a look at the actual Medscape link in which the video is embedded.  See if you, like Sherlock Holmes, can figure it out.

    This link requires free registration.

    (If I weren’t so pleased with a nice Sous Vide Supreme review we got today, this kind of nonsense would make me contemplate seppuku.)


  • Senate Exploring Med School Profs Putting Names On Ghostwritten Journal Articles In Favor Of Drugs

    We’ve had a few posts recently about the growing scandal in the pharma and publishing worlds, whereby big pharma companies would produce fake medical journals with the stamp of approval from big publishing houses, to make it look like their drugs had a lot more scientific support than they really did. To make matters even more insane, often the pharma companies would ghostwrite articles, and then get professors to basically put their names on the works, which were designed to emphasize the benefits of certain drugs, while hiding or de-emphasizing the risks. Copycense points us to the good news that Senator Grassley is at least asking various med schools to explain why this was allowed, while probing how putting professors’ names on ghostwritten articles is any different than plagiarism.






  • Say Hello to the New New GigaOM

    Two winters ago, we unveiled a new design for GigaOM — today, we are launching another one. Whereas before, our focus was on design, this time around we’re aiming to bring you a unique user experience.

    The biggest change at our company in the intervening two years, of course, has been in the growth of our network, which now totals seven blogs: In addition to this site, we also have TheAppleBlog, jkOnTheRun, NewTeeVee, Earth2Tech, OStatic and WebWorkerDaily. In short, we generate a lot of content that adheres to the basic ethos of GigaOM.

    While I remain a big believer in specialist niches, I feel it’s also important to surface more of the quality work being produced across these properties, such as the Car 2.0 coverage by Josie Garthwaite on Earth2Tech or Simon Mackie’s web working tips. So about six months ago, I asked our product guru, Jaime Chen, to come up with a game plan that would allow us to conduct a complete overhaul. Her mission was to:

    • Better showcase new content and related articles so that we can overcome the limitations of the blog format without really moving away from it.
    • Give readers an easy way to go to other GigaOM Network properties so that they can discover the work of our entire team of writers.
    • Focus on super simple content consumption and discovery.
    • Enable us to be more social.
    • When it comes to actual blogging, take us back to our roots.

    Jaime, instead of taking my word for it, went out and talked to a whole lot of our users — nearly 1,000 of you shared your feedback and insights with us. And you were not shy about your dislikes. As it turned out, most of what you wanted was already on my wish list. So we got ahold of our old friend Ryan Freitas and the ace design team of Shane Pearlman and Peter Chester to turn what we learned into a unique experience. They quietly toiled away for months and now, here you have it: The first step in the network-wide overhaul.

    What we’ve tried to do is strike a fine balance between what is a blog and what would be an online magazine. We have done this by adding a Featured Posts block at the top of the home page, while toward the bottom we’ve added topic pages and special reports. The rest maintains the typical blog format, but with a focus on extreme discoverability — the most-requested feature amongst our readers.

    To that end, many of you asked for a list of three bullet points that summarize the highlights of longer posts. You got it. A list of related posts was another common request, so we’ve implemented that as well. And for those of you who wanted the GigaOM Team to point to great blog posts we might have read across the web and found useful, we’re rolling out that feature later this week. It’s pretty simple — we don’t have a monopoly on ideas, and since our business is based on your attention, it’s our job to make sure that your attention is being put to good use. And that means helping you save time and pointing you to stuff you might find useful.

    A note about typography: I wanted us to make reading an easy experience, so I opted for white spaces, bigger fonts and some elements that you would typically see in a traditional print publication. I’ve been reading the test site on an older, smaller screen (1024 x 768 ThinkPad) and my eyes don’t hurt — yet. In addition, some of the typographic stylings come to our blog courtesy of font technologies from San Francisco-based startup Typekit. We’re using the Clifford font, which is being served using the Typekit technology (Disclosure: Typekit and Automattic, the company behind WordPress.com, are backed by True Ventures, investors in GigaOM and where I am a venture partner).

    In the end, I want us to be closer to my grand vision of what I see as the future of blogging — more visual, multimodal, interactive, real time and social. We’re not there just yet, but we will be in a few months. Today you can share our stories on Twitter and Facebook; you can also connect via Facebook Connect and leave comments on the site. It might come as a surprise, but this entire operation (including a fairly advanced publishing system) was built on top of WordPress.com, the on-demand blogging service based on the open-source software, WordPress. Without going into the dirty details, WordPress Jedi Mark Jaquith, our in-house coding champ Chancey Mathews and our dev team of Kelsey Damas, Nick Ohrn, Dan Cameron and Matt Wiebe and designers Reid Peifer and Brandon Jones – many of them spread across different time zones and geographies — helped us put together the whole back end for the new site (and our blog network).

    Now all this design and user experience is only as good as what we are supposed to do: create content you actually want to read. On that front, too, we have some good news. Liz Gannes, who till recently was the editor of NewTeeVee, has joined the GigaOM team as senior writer, where she will closely follow consumer web technologies and startups. She will be editor-at-large for NewTeeVee, where she will be contributing her insights into the world of online video as well. Liz is going to be joining me and Stacey Higginbotham, who has also been made a senior writer for GigaOM.

    Given her work ethic and deep insights, Stacey’s promotion is well deserved. She will continue to track broadband (including policy), the FCC and cloud computing. So there you have it: the GigaOM troika. We are going to be focusing on all the things we love, with a renewed emphasis on innovation. Thankfully we have our editor in chief, Sebastian Rupley, giving us his perspective on technology all the time — his experience brings a much-needed realism to the go-go nature of Silicon Valley.


  • Harper Chiller Project to Save Cold Hard Cash

    PALATINE, IL – Harper College will save at least $155,000 a year in electricity costs by installing new, high efficiency chillers that will provide air conditioning to six campus buildings. The project was approved by the Harper Board of Trustees at their regularly scheduled meeting on Thursday night.

    The new chillers will replace old air conditioning equipment, some of which uses chlorofluorocarbons (CFCs), which are considered harmful to the environment. In addition to being more environmentally friendly, the chillers will reduce energy usage by approximately 60% compared to the old equipment.

    “The old chillers were approaching the end of their useful life, so this is a good time to replace them and get considerable cost savings,” said Jim Ma, Director of Physical Plant. “Because of the poor economy, the bids for the chillers and construction came in well below estimates. It’s definitely a buyer’s market right now.”  

    “Given the difficult economy and the ongoing problems with state funding, it is essential that we find ways to reduce costs while not affecting the programs and services we provide,” said Harper College President Dr. Kenneth Ender. “With more efficient equipment, we should be able to reap the benefits of lower costs to cool these buildings for many years to come.”   

    Harper will spend about $3.2 million to install new chillers that will serve buildings F, L, P, R, A and W. The cost includes related infrastructure improvements, including new cooling towers, pumps, controls, piping, masonry and concrete work.

    Money for the project will be drawn from the $153.6 million capital referendum that was passed last year.  The new funds will be spent primarily on the repair and renovation of facilities.

    “By investing in critical infrastructure improvements now, Harper will be better equipped to address the needs of our rapidly growing enrollment,” said Harper Board Chair Laurie Stone. “This is especially important as we look to add new programs to help workers train for new jobs and careers.”