Category: News

  • Statinators spill the beans

    Oftentimes people become so fixed in their thinking – and in their belief that everyone else thinks the same way – that they unwittingly raise the curtain and expose the wizard of their flawed thinking, showing it for what it really is.  Statinators have done just that in an article in the current issue of the Journal of the American College of Cardiology (JACC).

    The study, Effects of High-Dose Modified-Release Nicotinic Acid on Atherosclerosis and Vascular Function, compares the increase in carotid artery plaque over a 12-month period in subjects taking niacin versus those taking a placebo.  It turns out that the subjects taking the niacin experienced a shrinkage of their plaque, whereas plaque grew larger in those taking the placebo. The revealing hitch in this study is that both groups were on statins, which means the group on statins alone was the placebo group.  Therefore the data from this study shows that statins alone do not stop the growth of plaque (at least not plaque in the carotid arteries) despite lowering LDL levels.  Taking the logic a little further, the data from this study gives weight to the idea that a lowered LDL doesn’t reduce plaque growth.

    There is a lot we can glean from this study and from the authors’ commentary on it.

    Let’s take a look.

    Researchers randomized 71 subjects–all of whom were on statins and all of whom had low HDL-C and either a) type II diabetes with coronary artery disease or b) carotid or peripheral atherosclerosis–into two groups.  The researchers did magnetic resonance imaging (MRI) studies of the carotid arteries of both groups, then started the subjects in the study group on niacin while the subjects in the other group got a placebo.  Subjects in both groups continued with their statin therapy.  At six months and one year later, MRI studies determined the degree of carotid atherosclerosis and whether it had increased, decreased or remained the same.

    After one year, it was found that the subjects receiving the niacin along with their statin significantly reduced their carotid atherosclerosis as compared to those subjects on placebo.  And remember, the placebo subjects were also on statins and still experienced an increase in their carotid atherosclerosis.

    Almost 90 percent (63) of the 71 subjects were males with an average age of 65.  As I’ve discussed previously, there is no evidence that statins provide any benefit in terms of decreased overall mortality to females of any age or to men over the age of 65 regardless of their state of health.  The only group in which statins have been shown to provide any benefit in terms of decreased all-cause mortality (the only statistic that really counts) is men under the age of 65 who have been diagnosed with heart disease.  Even in that group, the benefit is so small as to be questionable.  Knowing this, we can say (assuming an equal distribution of under 65 and over 65 to get an average of 65 years old for the group as a whole) that the majority of people in this study were taking statins unnecessarily.  Those males in the study who were under 65 and who had been diagnosed with heart disease were really the only ones who (according to all published research) may have received long-term benefit from the statin therapy.  This aside has nothing to do with the study or its outcome; it’s simply my commentary on the widespread overuse of statins.
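    To make the arithmetic behind that “majority” claim explicit, here is a rough tally in Python; the 50/50 age split among the men is the assumption stated above, not a figure reported in the paper:

        # Rough tally behind the "majority" claim above.
        # Assumption (stated in the text, not taken from the paper):
        # the 63 men split roughly evenly above and below age 65.
        subjects = 71
        men = 63
        women = subjects - men  # 8
        men_over_65 = men // 2  # ~31 under the 50/50 assumption

        # Per the evidence discussed above, neither women nor men over 65
        # show an all-cause mortality benefit from statins.
        no_benefit = women + men_over_65
        print(f"{no_benefit}/{subjects} = {no_benefit / subjects:.0%}")  # 39/71 = 55%

    So back to the study…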

    The authors reported on changes in blood values, blood pressure and body weight between the groups:

    In the NA-treated [niacin-treated] group, mean HDL-C increased by 23% and LDL-C was reduced by 19% at 12 months. Triglycerides, apolipoprotein B, and lipoprotein(a) were significantly decreased by NA compared with placebo. CRP was decreased by NA compared with placebo (p = 0.03 at 6 months, p = 0.1 at 12 months). Adiponectin was significantly increased at both 6 and at 12 months (p < 0.01). From the safety perspective, minor transient elevations were noted in creatine kinase and liver enzymes, but no significant, sustained elevations (>3× the upper limit of normal for 2 weeks) were observed in any subjects. Fasting glucose did not change significantly, but glycated hemoglobin showed a small increase in the NA group versus placebo (p = 0.02 at 6 months, p = 0.07 at 12 months). Blood pressure and body mass index did not change significantly in either group.

    As any of you who have taken niacin will understand, about 10 percent of the subjects dropped out because they couldn’t tolerate the flushing, itching and GI side effects of the niacin. (Some people have had good luck with taking niacin as inositol hexanicotinate, marketed as ‘No-flush Niacin,’ though the tolerance for this form isn’t perfect either.)

    Those subjects who were able to tolerate it had niacin (nicotinic acid) added to their statin dose and experienced a slight decrease in carotid plaque volume.  Meanwhile those on statins alone had their plaque volume increase.  Below is a representative MRI showing the difference:

    [MRI images: representative carotid artery cross-sections, niacin vs. placebo]

    To the untrained eye, these kinds of studies are difficult to read.  Even to the trained eye, they can be misread, so there have been computer programs designed to calculate the plaque area so that it can be quantified.  You can see the results graphically below:

    [Graph: quantified change in carotid plaque area, nicotinic acid vs. placebo]

    Before we all start thinking the combination of statins and niacin (nicotinic acid in the graph) is the second coming as far as atherosclerosis treatment is concerned, let’s be aware of a couple of facts.  First, these differences in plaque volume don’t really mean squat in terms of blood vessel functionality.  As the authors stated:

    Neither aortic distensibility nor flow-mediated dilation of the brachial artery was significantly altered by [niacin] treatment.

    The terms “aortic distensibility” and “brachial artery dilation” are measures of arterial function, and neither changed.  Also, as you can see from the MRI above, the differences in plaque size don’t seriously compromise the open area in the artery through which blood flows.

    The fact that none of these indicators of functionality changed and the plaque shrinkage didn’t make a measurable dent in the blood-carrying capacity of the arteries means that none of these subjects really got any short-term benefit from the therapy in terms of true risk reduction.  Maybe subjects who were worse off would have, but we don’t know.  And maybe if the therapy continued for the long term, really remarkable changes between the two groups would begin to become manifest. But we don’t know that for sure, either.

    What I found the most interesting about this study is what it didn’t say.  Or, I guess, a better way to put it is what it said, but probably didn’t intend to say.

    If you were to ask any statinator worth his/her salt what it would take to really significantly reduce the risk for heart disease, he/she would tell you to try to get LDL-cholesterol levels below 100 mg/dl.  If you then asked, “Well, what if we got those levels to 80 mg/dl, what then?” you would no doubt be told that the risk for heart disease would then be minimal.

    Well, the subjects on placebo – those on the statin alone – in this study had their LDL-cholesterol levels below 100 mg/dl.  In fact, at baseline their LDLs averaged 84 mg/dl and fell to 80 at six months and one year.  Yet their plaque continued to grow.

    We can conclude from this study that reducing LDL to these low levels doesn’t stop plaque growth.  We might also conclude that LDL levels may not have a whole lot to do with heart disease.  We can’t really draw that conclusion definitively from this data, but it sure adds strength to that hypothesis.

    In a JACC editorial (available by subscription only) about this study, the author begins thus:

    Despite the substantial clinical benefit offered by potent low-density lipoprotein (LDL)-reducing therapeutics such as statins, a majority of patients will still experience major cardiovascular events.

    Hmmm. Let’s tease out all the information loaded into this one sentence.

    The “substantial clinical benefit” provided by statins is really the substantial treatment of lab values, i.e., LDL-cholesterol lowering.  Statins lower LDL-C; no one denies that.  But to what end?  The last half of the sentence tells us:  a “majority of patients will still experience major cardiovascular events.”  If what you’re trying to do is reduce LDL levels, it sounds like statins are the drug of choice.  But if what you’re trying to do is reduce heart disease, maybe not.

    We know for certain that statins reduce LDL, so the sentence also tells us that LDL may not have squat to do with heart disease, since significantly lowering it obviously doesn’t accomplish a lot.

    Now, here’s how the authors of the paper started out in their introduction:

    Atherosclerosis is a systemic condition in which coronary, carotid, and peripheral arterial disease frequently coexist.  In patients with atherosclerotic disease, low-density lipoprotein cholesterol (LDL-C) reduction with [statins] has consistently shown reduction in major cardiovascular events and mortality.  However, treatment of LDL-C with statins prevents only a minority of cardiovascular events.

    Another few sentences filled with interesting truths.  What the authors say about statins reducing “major cardiovascular events and mortality” is true as long as the word ‘mortality’ is associated with ‘cardiovascular.’  In those who take them, statins do indeed reduce the incidence of cardiovascular events and deaths due to cardiovascular events.  What isn’t said in this sentence is that the cardiovascular deaths the statins prevent are more than made up for by deaths from other disorders that statins likely cause. As far as your risk for death is concerned, taking statins is a zero-sum game: you don’t die from heart disease, but you do die from something else within the same period.  What you want to do is not die.  Or at least not for a long time.  You want to decrease your all-cause mortality, i.e., deaths from all causes, not simply switch from one form of death to another.

    Also in the above paragraph, the authors – statinators to a man (or woman), I’m sure – state that treatment with statins “prevents only a minority of cardiovascular events.”  From this last sentence, we can once again draw the conclusion that – at least in the minds of true believers of the lipid hypothesis – lowering LDL doesn’t do diddly to reduce heart disease.  Yet they all continue to try to treat it by lowering LDL.

    I’m glad researchers are looking at niacin as a supplement to be used in the treatment of heart disease.  As I’ll discuss below, they have ulterior motives in doing so, which is why they combined niacin with a statin instead of having an arm of the study with niacin alone.  About 12 or 13 years ago MD and I found ourselves FAB (flat-a**ed broke) after sending three children through expensive private universities.  We had just written and published Protein Power, but it hadn’t started to sell, and we didn’t know if it ever would.  Our agent approached MD (who can write like the wind) about being the ghostwriter for one of the major university family medical guides (I can’t tell you which one, but it’s one of the Harvard-, Johns Hopkins-, Mayo Clinic-type of giant family medical guides that many of you may have in your homes) for a nice chunk of change.  She didn’t want to do it, and I didn’t want her to do it, but we decided that she should because it would probably make Protein Power a success.  Why did we decide this?  Because that’s how fate works.  We reasoned that if we didn’t take the deal, Protein Power would die on the vine, and we would be wishing that we had taken it.  If we took it and Protein Power took off, then we would be wishing that we hadn’t taken the ghostwriting deal and could buy our way out.  We took it, Protein Power took off (thank God), and MD bought her way out of her contract after having written about four fifths of the book.

    During this awful project, I did a lot of the research and MD did all the writing.  Plus MD did all the teleconferences with the major university honchos whose names are actually on the book.  After each of these conferences she would run for the wine, because these guys (all were guys) were so detached from reality that it was impossible to deal with them.  They were so hidebound in their mainstream way of thinking that no amount of reasoning could dissuade them.  Which is why MD didn’t want her name anywhere on the book.  She didn’t want to be associated with such idiocy when she had had years of hands-on clinical practice teaching her that most of what these people – who probably hadn’t treated patients in years, if ever – believed was bunk.

    Where this dreary tale is leading is that during the research for this book, we determined from all the published data out there that niacin was the only substance that had ever been shown to actually reduce all-cause mortality in cardiovascular patients.  That was in the mid-to-late 1990s and now they’re just getting around to evaluating it again.

    So why after all these years are they now looking at niacin in conjunction with statins in this study?

    Follow the money.

    Robin Choudhury, in whose lab this study was done, is on the payroll of several statin manufacturers, including Merck.  The study was underwritten by Merck, the maker of Mevacor and Zocor.  Okay, so why would statinators and statin manufacturers want to add what is basically a nutritional supplement to their beloved statins?  A discussion on an online cardiology site tells the tale.

    From heartwire (requires free registration):

    The paper comes as anticipation builds for the ARBITER-HALTS 6 study results. ARBITER-HALTS 6 is an imaging study comparing changes in carotid intima-media thickness in patients treated with ezetimibe (Zetia, Merck/Schering-Plough) or extended-release niacin; market analysts are already predicting a win for niacin. As previously reported by heartwire, ARBITER-HALTS 6 was stopped early: full results will be presented Monday, November 16, 2009 at the American Heart Association meeting in Orlando, FL.

    So, it appears that extended-release niacin is going to kick tail when compared head to head with Zetia, or at least that’s the way the market is betting.  And that’s usually because the market has info that the rest of us don’t.  If niacin is the clear winner, the press will be all over it, and many people (and their physicians) will be wanting to switch from other cholesterol-lowering drugs to niacin.

    With this study in hand, Merck and the other statin manufacturers can say, “Don’t give up your statins; the science shows that statins plus niacin is the effective combo.”  Just keep your statin and add some niacin. And prescription niacin, to boot, so it all stays in the Big Pharma family.

    Which is why – as heartwire reported – this paper is coming out now: to beat the rush.

    We’ve learned a couple of things from this study.

    First, we’ve learned that we have here a randomized, double-blind, placebo-controlled study showing that statins reduce LDL but don’t stop the progression of atherosclerosis, which, after all, is why we would take them.

    And we have learned from reading between the lines in this study that statinators don’t really believe their own hype.  As Samuel Johnson said about second marriages, the statinator’s reliance on statins as a cure-all for heart disease “is a triumph of hope over experience.”  Things haven’t really changed since MD wrote the family medical guide. If you’re worried about heart disease, take some niacin, the only substance yet shown to decrease all-cause mortality. And it doesn’t have to be the prescription variety.


  • Verizon doesn’t seem to know what “unlimited” means in Droid contracts

    Remember when you’d argue with your friends about how many of this or that you had or did, and someone would always trump you by saying they had infinity? And do you remember the inevitable retort? “Oh yeah, well I’ve got infinity plus one.” Seems someone at Verizon decided to put that childish little witticism into practice as a service plan for the Droid.


  • Court Says Telenor Doesn’t Need To Block The Pirate Bay

    I’m heading over to Norway in the next few days to give a talk at the Nordic Music Week event, and it’s nice to see that the courts in that country seem to recognize how silly the IFPI’s demand that major ISP Telenor block access to The Pirate Bay is. Telenor was smart enough to fight back, and the courts have now said that Telenor is not liable for what its users do, and should not have to block access to a site like The Pirate Bay. From TorrentFreak on the ruling:


    The court ruled that Telenor is not contributing to any infringements of copyright law when its subscribers use The Pirate Bay, and therefore there is no legal basis for forcing the ISP to block access to the site…. In making its decision, the court also had to examine the repercussions if it ruled that Telenor and other ISPs had to block access to certain websites. This, it said, is usually the responsibility of the authorities and handing this task to private companies would be “unnatural.”

    Good to see a court recognize that the entertainment industry doesn’t own the internet, and shouldn’t be the one to determine what is and what is not legal online.






  • Come on, everyone, of course Nintendo is working on the Wii HD

    Nintendo’s generally vocal president, Reggie Fils-Aime, made headlines today when he again denied that Nintendo was working on the Wii HD. He said, “I don’t know how forcefully we can say there is no Wii HD.” That’s pretty clear, but it’s also mostly a lie. What do you expect the man to say two months before Christmas? “Psst, don’t buy the $200 Wii for your kid this year. We’ve got something real special coming in a few months. You’re going to want that instead.”

    Does anyone seriously think that Nintendo is not building a high-def-capable system? You can’t even buy an SD TV larger than 20 inches anymore. Reggie probably told the truth when he said “there is no Wii HD,” as it’s probably not named Wii HD, but there has to be some sort of high-def gaming system in the works. If there isn’t, Nintendo is in trouble.

    Nintendo made the right decision in building the original Wii without the power to run HD graphics, which made it less expensive. It became an instant success because of not only the novel motion controller and easy-to-like bundled game, but also the relatively low starting price of $250 compared to the Xbox 360 and PS3. If Nintendo had built a more powerful system, one of those points would have given way and events might have turned out a tad differently.

    Plus, back when the Wii came out, HDTVs were still a luxury. They were only available in larger sizes and at higher prices. Now tube TVs are all but gone, and increasingly small LCDs are reaching 720p resolutions. By next year, 1080p will probably be the standard resolution for 32-inch or larger screens and every TV will be at least 720p; Nintendo will need to put out a system accordingly.

    Of course this next-generation system will employ a motion control scheme. It’s not like Nintendo is going to take a step backwards. This system might not be called the Wii HD, but it’ll follow the Wii philosophy and be high-definition.

    Even if Nintendo puts out a system next year that’s as powerful and cheap as a nettop today, it will be able to handle at least 720p graphics with a good graphics driver. Nintendo has proved that gameplay and accessibility are more important in the marketplace than graphics, but as time passes and more households upgrade to high definition, it’s becoming the standard, and Nintendo will have to work within those parameters.

    Reggie would never tell us a lie. He’s not like that. However, you can bet that Nintendo is working on a low-cost, but also high-def capable, Wii successor as we speak.


  • Google Chrome 4: Yes, it’s fast, but is it usable?

    By Scott M. Fulton, III, Betanews


    [Image: A first look at Google Chrome 4, with bookmarks freshly synched from Firefox.]

    If, as Google says, a Web browser is not so much an application as a platform upon which a new class of applications may be built, then that platform must provide support. It needs to give its users the ability to accomplish tasks, and to devise new and better ways to accomplish them. For as we all know now, “browser” is an inappropriate word for the thing we use to communicate with the Web using HTTP, because the Web is becoming a space for everyday applications deployment. Especially in the content industry, active work takes place within the browser, much more so than passive amusement.

    To that end, a browser may serve either as a springboard or a plank.

    Despite Google Chrome’s achievements, the crucial element of support remains missing. For all the spotlight we’ve given Chrome for being the fastest Web browser on Windows, it does not yet serve the purpose of supporting users and helping them to make their online tasks more efficient. This is why Google’s expert tuning of its V8 JavaScript engine for Chrome is so important, because the browser has truly evolved into a JavaScript platform rather than an HTML platform.

    For everyone I know who has, over the last year, made the switch from Microsoft Internet Explorer to any other browser, the reasons have had less to do with security than in the past. People who are compelled to switch are tired of how slow IE has become, and the sinking feeling that it’s getting slower — a feeling which Betanews confirmed this week with actual facts. If you’re the manufacturer of a competitive browser, and you have the opportunity to offer your customer a free alternative that’s close to 21 times faster overall than IE, and your brand is not only one of the most recognized in the world but the only one analysts believe can truly challenge Microsoft, you’d think there would be an exodus of mass proportions.

    There has been no such exodus. The reason is that, despite the number “4” on the version currently under development, Chrome gives one the feeling that it’s never been finished even once.

    In a way, it doesn’t make sense to have a JavaScript engine that’s as good as it is, running a platform that is so minimalistic. As the manufacturer of any set-top box can tell you, a viewer’s entire experience in front of his TV can be ruined if the functionality of the program guide isn’t solid. A browser user’s bookmarks list is the counterpart of the program guide; it’s “what’s on,” and it’s also how to get there.

    Not that a list of folders and bookmarks is anywhere near as informative as an STB’s program guide. But for years, Firefox has had the good sense to enable users to open the bookmarks list in their sidebar, to open and close it with a keystroke (Ctrl-I) and scroll through it using a scroll bar. For IE8, Microsoft added an appealing and versatile Favorites menu that opens with the same keystroke (as part of an effort to win back users who had fled to Firefox). This menu starts off life as a pop-up, but can then be pinned to the left side as a sidebar. Then it too can be switched from Favorites (bookmarks) to RSS Feeds and browsing history. It’s a versatile feature that Microsoft has thought through, and that performs well.



    [Screenshot: The mess that bookmarks can make of your desktop in just three layers, in Google Chrome 4.]

    In Chrome 3, the Bookmarks Bar was only part of the New Tab screen, and was actually provided by a Google Web page. With Chrome 4, the Bookmarks Bar becomes a feature of the actual program (along with curious re-additions such as an actual button for the home page, a recent Google discovery). But the complete list of bookmarks is only available through an “Other Bookmarks” menu on the right. Clicking on this button pulls up a drop-down menu, whose folders in turn pull up other pop-up menus. So you’re not perusing a folder tree as with Firefox or IE; instead, you’re scrolling through pop-ups. And you’re scrolling slow…ly… because these are classic menus; there are no scroll bars. So if you have a long bookmarks list, you’re not going anywhere fast.

    That I’m no fan of Chrome’s bookmarks system is nothing new — I first called attention to this last June. Back then, Chrome 3 was on the “beta” and “dev” tracks, while Chrome 2 was the version declared stable. There I noted that Firefox 3.5 was more adept at searching for stored bookmarks by various criteria than Chrome, the browser from the company that’s supposed to be known for search.

    But I’m not exactly the only one screaming for functionality out here. Our own Fileforum features reviews from testers over the months who have explicitly asked why Chrome seems to be all chassis and no interface. “It’s as useful as a chocolate fireguard,” wrote madmike; “very bland, short on features, but competent,” wrote bobad; and, “I wish they could make it look and act like Firefox,” wrote CyberDoc999.



    Shelving basic functionality under “Other”

    That the Google Chrome user might only keep eight or so Web sites on her New Tab page, plus a handful of “Other Bookmarks” in a menu that should never grow large enough to have to be scrolled; that History is a separate page and not a function you can use side-by-side with other pages; that Chrome lacks the ability to even add the searching, researching, and translating functionality that Google makes for its own Toolbar for IE and Firefox; and that the button for the home page is a new feature that’s just now being tested, are all indications that Google only projects its browser will be used lightly and occasionally, by folks who’ll do a Google query, read the result, and come back to Google. If that’s truly the case, one wonders why Google actually bothers making its underlying JavaScript engine as good as it is — effectively mounting a V8 engine to a tricycle.

    In fairness, Mozilla Firefox also lacks that functionality. But Mozilla knows how to help users make Firefox more functional: through a wide array of add-ons, along with a developers’ community that’s nurtured and educated in the ways of making proper software without so much initial trial-and-error.

    Plug-ins and add-ons to Chrome do exist, and forums such as this one have cropped up in anticipation of a burgeoning market in these things at some future date. But for now, the theme of these sites appears to be stuck with themes. Even now, when skinning of some applications has become an art form that has brought forth its own grass-roots competitive “Olympics,” Chrome themes are a throwback to the Netscape Navigator era, sometimes composed of celebrity photos cropped so that their faces fit just inside the tab area, decorated like the bedroom of some Disney Channel star.

    [Screenshot: Google Chrome 3.0.196.2 showing off one of its new optional themes, ‘Grass’]

    Tell me you’d actually intentionally make your own Web browser look like a just-fertilized lawn.

    While Internet Explorer is dog slow, and now slower by the month, version 8 has functionality and, for the first time in IE’s existence, a reasonable degree of versatility. It also is relatively stable — crashing is not its problem. Crashing is a Firefox trademark — to this day, the “stable” version crashes on average 1.5 times per day for me, a fact which this “Firefox user” is not proud to share.

    Yet even though Chrome does exhibit greater stability, it lacks the functionality that makes it adaptable to users’ everyday purposes, and that enables them to take it beyond the realm of “general purpose” into “heavy duty.” Google’s complete inability to make that jump, to get the clue, to provide evidence of having listened to even the smallest portion of tester sentiment, leaves me bewildered as to whether the company has any realistic notion of what “beta” means anyway.

    With Mozilla, the newest code is developed under a private track, which only means that the developers aren’t taking comments from the public about it, even though it’s publicly available. When it’s developed enough to demonstrate in public, then it enters the “beta” track, which Mozilla code-names “Shiretoko” for 3.5 test code and “Namoroka” for 3.6. When Mozilla delays the rollout of a new build to the stable channel, it gets groans and moans from folks like me…but it’s generally because real beta testers have found real problems, or have offered some really good ideas.

    A full-featured browser chassis capable of running thoroughly debugged JavaScript add-ons that won’t crash, and that contain the basic functionality that Microsoft and Mozilla discovered as far back as 2005, coupled with the proven superior V8 JavaScript engine, would clinch the alternative browser market in maybe one year’s time. But that year has already passed for Chrome, which is already gaining a reputation as a browser that makes up for its performance superiority with slow and cumbersome functionality. As long as Google continues not to get this message, we all need to face up to the fact that Google isn’t exactly open, is it?



    Copyright Betanews, Inc. 2009






  • Myka announces its latest Linux-based ‘net top box’

    By Tim Conneally, Betanews

    Early in the summer, IPTV startup Myka delivered an impressive Linux-based device which was not quite a set-top box and not quite a home theater PC (HTPC). Though the device’s identity was sort of nebulous, the company’s goal was crystal clear: to make the tons of different types of Internet video content easily viewable on the TV.

    This week, the company has announced its second device, the Myka ION, which pushes itself up against the HTPC category. Because it’s equipped with a 1.6 GHz dual core Intel Atom 330 CPU, it could even be called a “net-top box.”

    Whatever you want to call it, Myka is really charging toward its goal of making the vast spectrum of Web video available in an easy and compact way. Since the ION is effectively an Ubuntu 9.10 mini-ITX PC, it can run the popular media manager software Boxee and XBMC alongside the Hulu desktop client — a bit of useful software which neither Boxee nor XBMC can actually run themselves.

    In case the name didn’t already give it away, the Myka ION is equipped with an Nvidia ION GPU, which supports DirectX 10 graphics and full 1080p HD video without overtaxing the CPU.

    The company expects it to be shipping in about four weeks, and it will be available in various configurations, with hard drives of different capacities (up to 1 TB) and with additional options like a Blu-ray drive and 802.11n wireless.

    We’ll give it a closer look when it becomes available before the holidays.

    Copyright Betanews, Inc. 2009






  • Netflix Instant Streaming for PS3 works, is shown on YouTube


    Well, Netflix streaming on the PS3 works. Of course, you need to use the special disc (you can’t just download the software, eh? How quaint). But it appears to be working correctly. Check out the video above, which demonstrates that it’s working, and working smoothly.

    I’m actually glad that the PS3 has Netflix streaming now. It’s a nice addition to a really great Blu-ray player. If I owned a PS3, I’d totally get the disc and stream my heart out…

    You know, like I have been doing since 2008 on my 360.


  • Google Highlights Searches For Black Friday Deals

    With Thanksgiving just a few weeks away, consumers are already starting to search for deals online.

    Over the last seven days, fifty percent of the top "Black Friday"-related search terms have included the words ads, sales and deals, according to Google Insights for Search.

    [Chart: top Black Friday search terms]

    The Google Retail Advertising Blog provides more details. "Interestingly, when we look at Black Friday rising searches over the last 7 days, we see that ‘Early Black Friday’ is the second rising search term, meaning it has grown over 2,000% during this time period with respect to the previous time period."

    "It also may indicate that consumers are not only leveraging ‘Black Friday’ and searches to locate promotions on these days, but may be seeking out similar types of promotions and offerings even before Thanksgiving weekend."

    [Chart: rising Black Friday searches]

    Google also offers advice on what types of ads will appeal the most to bargain shoppers. "Text ads should highlight specific price points, discounts, coupon codes, and special promotions. Also, remember to keep an eye on search trends in your own category by leveraging Google Insights for Search."



  • Update On Choruss: Universities Not Talking, Mysterious 10,000 Students Still Nowhere To Be Found

    We’ve been pretty big critics of the music tax concept that was being pushed by Jim Griffin’s Choruss along with Warner Music (which hired Griffin to create the program). Of course, we’ve only been able to criticize what bits and pieces have leaked out from those who have seen Griffin’s presentations. That’s because, despite a busy conference schedule, Griffin never seems to publicly describe what Choruss really is. So, every time we hear some new info about Choruss and explain why it’s bad, we get angry emails from Griffin calling me all sorts of insulting names and insisting that I’ve mischaracterized Choruss. So, we ask for more details, and we don’t get them. Instead, we’re given amorphous descriptions about how it’s “an experiment.” But what is the experiment? Well, it will be lots of things. As soon as we narrow in on an example, however, and explain why it’s bad, we’re attacked because the plan might not include that particular example. But we haven’t yet heard an example that makes sense.

    Griffin had agreed (as part of an angry email) to answer questions from the Techdirt community, and we obliged by sending him a long list of questions. Griffin had some personal issues to deal with over the summer, which was totally understandable, but we still haven’t heard any answers. I’m beginning to wonder if we ever will.

    But the biggest question I had was whether he could explain who the “tens of thousands” of students were who, Griffin told a conference in June, would be using Choruss this fall semester. It seemed odd to find out that so many students had signed up for something when we still weren’t being told what it was. As the fall semester started, we asked to hear from students who were using Choruss, and got silence — which seemed odd. Apparently, it’s because those tens of thousands of students hadn’t signed up for the fall.

    However, as a bunch of you have sent in, now the claim is that six college campuses will be testing Choruss this spring semester, but Griffin won’t say who they are and the campuses won’t admit to participating. They claim that they’re afraid of backlash from folks like us — but that makes me wonder. If the concept is so good, why not stand up and defend yourself for being a part of the program? If you can’t defend the reasons for testing the program, it makes me wonder why you’re doing it in the first place.

    The article at the Chronicle of Higher Education provides a few new details that don’t sound particularly appealing. Rather than (as some had suggested earlier, but since Griffin never made it clear, we just don’t know if this was ever true) a system that would let students share files freely under some sort of blanket license, it sounds like “yet another limited music service.” It will allow unlimited downloads, but you have to use the Choruss service (again, perhaps the article is wrong, but that’s what it says). Similar services have been tried on various campuses and failed, so we’re curious to hear what’s so special about Choruss that will be different.

    It still seems like Choruss is trying to solve a problem that doesn’t exist. We’re seeing more and more smart musicians put in place business models that work. They work in a way that lets fans choose to send money to the artists they want to support directly, without a big middleman. Choruss appears (from all we’ve heard) to be an attempt to set up a big middleman that will take big chunks of money and then use some magical process to figure out how to dole it out. But why do we need that overhead? The market is figuring stuff out. It doesn’t need another middleman.






  • Ubuntu 9.10 Review

    Right after Ubuntu’s fifth birthday, it was time to celebrate once more, because a Karmic Koala was released, and it brought with it a lot of reasons to upgrade. If Ubuntu 9.10’s smart looks haven’t convinced you yet, maybe the fast boot times and overall enhanced performance will. You still don’t know what Ubuntu 9.10 (Karmic Koala) is all about? Then read on; we will clue you in.

    While writing this review, I tried to put myself in the “shoes” of a new Ubuntu user. This meant that I didn’t install my favorite programs or set everything up the way I like right after installing the operating system; instead, I tried to make do with what Ubuntu provides in the default installation. Also, I tried to stay away from the terminal and, largely, succeeded in doing so. We’ve tested Ubuntu 9.10 over a period of one week on the following systems:

    System 1:
    · AMD K8 nForce 250Gb Motherboard
    · AMD Sempron 2800+ Processor
    · Nvidia GeForce FX5500 Video Card
    · 512 MB RAM
    · IDE HDD 80 GB Maxtor
    · LG CD-RW/DVD-ROM Drive
    · 17″ LG Flatron L1730S LCD

    System 2:
    · Intel Gigabyte GA-965P Motherboard
    · Intel Pentium 4 3 GHz
    · Nvidia Leadtek GeForce 7300GS 256 MB VRAM
    · … (read more)

  • Strikeforce will be the ‘premier’ MMA league in EA Sports MMA


    Strikeforce (UFC’s closest competitor here in the U.S.) promotes one of the bigger fights of the year tomorrow in Fedor vs. Brett Rogers. It takes place in Chicago, which explains why EA Sports just held a press conference there to reveal more details of its upcoming MMA game, entitled EA Sports MMA. EA Sports says that Strikeforce will be the “premier” mixed martial arts league in the game, and that well-known referee “Big” John McCarthy will appear in it as well.

    The game, which is due out for the Xbox 360 and PS3 next year, will, as such, feature Brett Rogers and other Strikeforce fighters.

    In-game commentary will be provided by Frank Shamrock and Mauro Ranallo. I haven’t played an EA Sports game in three years, so I have no idea how good the commentary is these days.

    Other fighters confirmed to be in the game include current UFC star Randy Couture, Gegard Mousasi, Renato Sobral, and Cung Le. (Check Wikipedia for the full list of confirmed and rumored fighters. Hopefully Alistair Overeem makes the cut. You’d think he would, being that he’s the Strikeforce heavyweight champion, even if he hasn’t defended the belt in two years.)

    Presumably EA Sports MMA will now be colloquially referred to as “the Strikeforce game.” Hopefully EA can work in plenty of Dream fighters, as well as the Dream arenas.

    Oh, the first trailer of the game will debut during tomorrow night’s Strikeforce show that airs on CBS at 9pm. And if you’re interested in a little backstory, check out Showtime’s 30-minute documentary on both Fedor and Rogers. Fun stuff.


  • Sony Ericsson’s 720p shootin’, S60 runnin’ Kurara leaks out


    A year ago, Sony Ericsson’s prominence was dropping faster than Kirstie Alley could pound back Quarter Pounders, but its most recent offerings have been successful in generating buzz in a way that the Japanese-Swedish phone maker hasn’t experienced in quite some time. Fast forward to today, when pictures of the company’s second Symbian S60 device have leaked out by way of Russia, and again we have some buzz, this time about how its 8.1 megapixel camera might be able to record videos at resolutions up to 720p. It’s codenamed Kurara and is said to have a 3.5″ AMOLED display, but beyond that we really don’t know anything about it, other than that it’s likely to come sometime within the next six months. More pictures are available after the break.

    Thanks to everyone who sent this in!

    [Images: four leaked photos of the Sony Ericsson Kurara]


  • Stimulus Funds for Green Energy Projects Going Offshore along with Other U.S. Manufacturing

    The Obama Administration sold its $787 billion stimulus plan on the basis of improving the economy through investing in green energy and by doing so, increasing employment in the United States. But what is actually happening, particularly with wind and solar projects, is that the majority of the manufactured components are being built offshore in either Asia or Europe, resulting in foreign countries capturing a good deal of our stimulus funds and finding a lucrative haven for their products in the United States.

    Green Stimulus Money Going Overseas

    Since September 1, 84 percent of the $1.05 billion in clean energy grants has gone to foreign wind companies. Foreign countries benefiting from stimulus funds for wind technology are Spain (57%), Germany (12.6%), Japan (9.5%), and Portugal (5%).[i] Companies began applying for grants at the end of July, and awards were announced by the two joint administrators of the program, the Energy and Treasury Departments, beginning on September 1. In the first round of the grants, 77% went to foreign wind developers, followed by 84% in the second round. Of the 11 wind farms that received grants, 695 of the 982 installed turbines were manufactured by foreign companies.[ii]
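    As a quick check on those numbers, here is a back-of-the-envelope sketch in Python, using only the figures quoted above and assuming the four country percentages are shares of the full $1.05 billion:

        # Back-of-the-envelope check using only the figures quoted above.
        total_grants = 1.05e9  # $1.05 billion in grants since September 1

        shares = {"Spain": 0.57, "Germany": 0.126, "Japan": 0.095, "Portugal": 0.05}
        for country, share in shares.items():
            print(f"{country:9s} ~${share * total_grants / 1e6:,.0f} million")

        # The four shares sum to ~84.1%, consistent with the 84 percent headline figure.
        print(f"combined foreign share: {sum(shares.values()):.1%}")

        # Turbine count: 695 of the 982 installed turbines were foreign-made.
        print(f"foreign-made turbines: {695 / 982:.0%}")  # ~71%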

    Further, there are few restrictions on how the grants can be used. According to the Investigative Reporting Workshop at American University, over $800 million was provided to wind farms that were already producing electricity. As required by law, all 11 wind farms started operating after January 1, 2009, but before the grants were awarded.[iii]

    Turbine Manufacturing Dominated by Foreign Competitors

    The U.S. currently has the most installed wind capacity in the world, but it is not a leader in the manufacture of turbines. The Investigative Reporting Workshop reported that of the turbines currently under construction in the U.S., 67 percent are slated to be purchased from foreign-owned turbine manufacturers.[iv] According to U.S. customs data for 2008 and the U.S. Trade Commission, the U.S. imported $2.5 billion worth of wind turbines last year—up from $365 million in 2003, roughly a sevenfold increase in five years.

    In the future, wind turbines and/or their component parts may be coming from China, where lower labor costs have allowed Chinese-made products to dominate many categories of manufactured goods in the U.S. GE, a major U.S. wind turbine producer, already owns three facilities in China that produce turbine components. GE is also planning a factory in Vietnam that will employ 500 local workers and export 10,000 tons of components to GE Energy assembly plants around the world.[v]

    China is already beginning to establish its own foothold in U.S. wind power. A joint venture between China’s Shenyang Power Group, the U.S. Renewable Energy Group, and Cielo Wind Power LP to develop a 600 megawatt wind farm on 36,000 acres in West Texas, costing $1.5 billion, was announced on October 29, 2009.[vi] A-Power Energy Generation Systems Ltd., a provider of distributed generation systems in China and a fast-growing manufacturer of wind turbines, will supply the turbines. A-Power Energy entered the wind power industry last year.[vii] Delivery of wind turbines for the West Texas wind farm is scheduled for March 2010.[viii]

    Solar Cells Manufactured Overseas

    Not only are wind turbines mostly manufactured in countries overseas, but so are photovoltaic (PV) cells. Florida Power & Light (FPL) started operating its 25 megawatt photovoltaic solar plant in southwest Florida in conjunction with a visit to the plant by President Obama on October 27.[ix] The DeSoto plant is the first of a total of 110 megawatts of solar capacity that FPL will install at 3 different sites by the end of 2010. Although Obama praised FPL’s work in the solar arena, he did not tell the American public that the components of the DeSoto plant are from foreign countries. While the PV cells were provided by a firm from California, they were made in the Philippines. The steel PV frame holding the cells was produced in Canada, and the electrical parts and boxes were made in Germany, where solar power has been given heavy subsidies by the German Government. While German manufacturers have been producing PV technology for their country’s solar expansion, they are now concerned that China will take over their market due to costs that are 30% lower.[x]

    Conclusion

    The Obama Administration has told the American public that it will produce jobs and stimulate the U.S. economy through green energy technology. It has also touted that stimulus funds will be used for goods made in America. Yet the Investigative Reporting Workshop at American University finds that this is not the case. And further examination of green energy development in the U.S. shows Asian and European countries well established here in providing the component parts for green energy technology.

    The problem is not with international trade per se. In a genuinely free market, where politicians do not pick winners or losers, the most efficient firms would capture market share, be they American or foreign. The result would be the best products at the lowest prices for American consumers.

    The real problems are a government “stimulus” plan and efforts to centrally plan a “green economy.” The government can only “stimulate” by spending money that it has first taxed or borrowed from the private sector. It would be bad enough for the government to destroy jobs in the American fossil fuel industry while spending money on domestic producers of “green energy.” But it is particularly absurd for the U.S. government to cripple American industry while shoveling the lion’s share of the pork into the hands of foreign beneficiaries.


    [i] “Overseas firms collecting most green energy money”, October 29, 2009, http://investigativereportingworkshop.org/investigations/wind-energy-funds-going-overseas/

    [ii] Ibid.

    [iii] Ibid.

    [iv] Ibid.

    [v] “Vietnam’s first turbine component plant underway”, May 13, 2009, http://www.vietnewsonline.vn/News/Business/Companies-Finance/6072/Vietnams-first-turbine-component-plant-underway.htm

    [vi] www.reuters.com/article/pressRelease/idUS200008+29-Oct-2009+BW20091029

    [vii] “Lone Star, Meet Red Star: China’s $1.5 Billion Wind-Power Deal in Texas”, October 30, 2009, http://blogs.wsj.com/chinarealtime/2009/10/30/lone-star-meet-red-starchina%e2%80%99s-15-billiob-wind-power-deal-in-texas/

    [viii] www.reuters.com/article/pressRelease/idUS195122+29-Oct-2009+PRN20091029

    [ix] http://www.instituteforenergyresearch.org/2009/10/26/highest-cost-generating-plant-comes-on-line-in-florida-to-obama-fanfare/

    [x] “Solar-Power Incentives in Germany Draw Fire,” Vanessa Fuhrmans, Wall Street Journal, September 28, 2009, http://online.wsj.com/article/SB125383541153239329.html

  • You are not worthy of the 18-button OpenOfficeMouse (and it has an analog stick)

    You may recall our incredulity when SteelSeries announced their 15-button MMO Mouse. Not one to be passed by, Razer shortly thereafter came out with the 17-button Naga, which we’ll be reviewing soon. But unknown to them, a small team was working in obscurity to create an 18-button mouse… with an analog stick for your thumb, to boot.

    The OpenOfficeMouse, or OOMouse, isn’t exactly the most attractive piece of hardware, but its creator claims that “16 buttons divided into two 8-button halves were the maximum number of buttons that could be efficiently used by feel alone.” I guess if you take the thumb out of the equation with the analog stick, which neither Razer nor SteelSeries had the wherewithal to do, that’s probably true. They’ve set up profiles to make the OOMouse work with WoW, 3D Studio Max, Firefox, and many others — including, of course, the whole OpenOffice suite.

    Personally, I’m not a mega-mouse kind of person; ergonomics are far more important for me considering the amount of mousing I have to do, which is why I’m considering the Microsoft Natural as an alternative to the G500 and Mamba, which I switch between to keep things interesting. And while this OOMouse may look ridiculous, I’m sure there are some people who will find it a joy.


  • Chinese Michael Jackson phone is no Thriller

    ♪It’s close to midnight and something cheesy’s lurking in the dark
    Under the moonlight, you see a phone that almost makes you barf♪

    Man. Just last weekend, I was looking at my boring ol’ phone and thinking to myself: if only this were covered in faux-gold and diamonds and molded to look vaguely like Michael Jackson’s torso!


  • Tundra Headquarters Now on Twitter AND Facebook

    Here’s a quick update on our growth plan for our loyal readers. If you like the site, I would be grateful for any feedback or comments you have on these items, our plans, and what we can be doing better.

    New Stuff

    First, as you may or may not know, I’ve been posting on Twitter on behalf of TundraHeadquarters.com for a few months now.

    TundraHeadquarters.com Admin Jason is posting on Twitter – click the image to see his profile.

    Second, I set up a Facebook fan page for TundraHeadquarters just now. It’s sort of ugly right now (at least I think so), but I’m going to get some help with making it prettier.

    The first attempt at a Facebook fan page for TundraHeadquarters.com

    It’s also very empty right now…so anyone who wants to post a “Hey what’s up” or start a discussion or put a picture or whatever would be greatly appreciated.

    I’m also on Facebook myself and I’d like to be Facebook friends with anyone who wants to be friends with me. http://www.facebook.com/tundrahq.

    Third, we added a section for user reviews of Tundra accessories a month or two ago. It’s sort of empty right now, and we still need to add a bunch of parts, but I’m getting a helper to work with me on adding more stuff. If you have one of the accessories listed (like one of these air intakes), please take a moment to add your review. It will help other people out when they’re ready to buy.

    Our new Tundra Accessory user review system

    Future Plans

    • We’re going to re-do the Tundra Buyer’s Guide to be more useful.
    • We’re going to integrate a used Tundra classifieds system into the website.
    • We’re going to try and re-design the site to reduce clutter.
    • We’re going to try and publish an eBook about Tundra accessories, how they work, options, etc. for new Tundra owners or for anyone who wants to learn more about their truck.

    Big Picture

    As many of you know, TundraHeadquarters is about 90% me. I want that to change because I think the site could be better. I’d also like that to change because I want to create other sites like this one (one for Tacomas to start with, maybe some others…).

    I’ve got some help from Mark (a behind the scenes guy who I’d like to get more involved), I’ve got a couple of writers that help me with reviews, etc., but I’m always interested in the idea of having another author or two.

    If you’re interested in writing blog posts or articles (and if you have some experience), I’d like to hear from you. This is a paid writing opportunity, so if you know a writer who knows cars and you like their outlook on things, I’d love to hear from them.

    Final Thought

    Please, please, pretty-please, contact me with ideas, suggestions, things that don’t work, things that you’d like to see changed, etc. I’m very open to criticism and always looking to make this site better. You can email me directly jason[at]tundraheadquarters.com. Just replace the [at] with an @.

    Thanks to everyone who visits!

    Read user reviews of Tundra Accessories.

  • What hath Mac wrought? A remembrance after a quarter-century

    By Scott M. Fulton, III, Betanews

    [ME’s NOTE: This article was originally published on January 30, 2009, here in Betanews. I’m reprinting it today in honor of the memory of a man I refer to in this article, who was one of my early mentors in computing and in business, and who passed away last October 26: Elmer Zen “E.Z.” Million, the proprietor of the original Southwest Computer Conference, later the CEO of private aircraft services company Million Air, and occasional candidate for some lofty, high Oklahoma office. He was a brilliant businessman, a true fiscal conservative who really did teach me how to run a business, through long hours in his office poring over accurately written ledgers. And he was the absolute antithesis of everything people assumed a “computer pioneer” was, but he was all of that and more. I dedicate this to E.Z.’s enduring memory.]


    The reason there’s a Macintosh today is not because of some brilliant flash of engineering genius, as many revisionists like to believe. It’s because Apple had the audacity to make a few big mistakes first, and learn from them.

    The main reason I wasn’t escorted out of those first computer conferences, even though they typically displayed signs that expressly forbade anyone under 18 from entering, was that I looked the part of someone older who knew what he was doing. The moustache and the tailored suit somehow helped, like a rookie NASCAR driver who wanted to fit in with the big boys in the pit crews.

    It helped even more to know some people. Three decades ago now, I’d gotten to know a fellow who was one of the first great regional conference organizers, a promoter and business consultant whose given name truly was Elmer Zen Million. At first, he called me “The Kid,” which always made me shrink a little because that’s exactly what I tried not to look like at the time. After a few years, I was Scott to him and he was E.Z., and I was exempt from the 18-or-under rule…until one year when it finally didn’t matter. I was a private consultant, earning a small living, and just introducing myself to the publishers who would soon jump-start my career.

    It was the winter of 1983, one month before the rule wouldn’t matter anymore. By this time, a local computer store called High Technology had become one of Apple Computer’s top-selling independent retailers. I used to hang around that store and drum up business for myself, finding clients and helping them set up Apple II and Atari 800 computers. They didn’t mind because I’d end up sending my own customers back to them for more software, which was a high-margin business then. Folks were more interested in buying a computer that ran something weird-sounding like VisiCalc or Electric Pencil if they knew they’d have someone who could give them a hand.

    So High Tech had purchased the prime space at one of E.Z.’s semi-annual computer conferences, and I was there to help set up. The store manager had reserved a big chunk of his floor display for the arrival of a computer he hadn’t seen yet. It was coming directly from Apple, its delivery was already a few days late, and all we knew about it was that it was not the “Apple IV” that had been rumored, and that it would cost ten thousand dollars.

    “So are you sure you want The Kid around?” asked one fellow. “Who, Scott?” replied the High Tech man. “Are you kidding? I don’t even have an instruction manual for this thing. He’s the only hope we have.”

    The crate arrived after lunch, looking for all the world like the “major award” shipped to Ralphie’s dad in the movie A Christmas Story. We were told to move our food and drinks a respectable distance from this major device, since we didn’t know how delicate it would be, or how sensitive to soda pop drops and the grease from hot dogs. Some workmen gently extracted the device from its container, a process which consumed two hours, during which I probably consumed a six-pack of Dr. Pepper. And when it was eventually set up, it was missing its startup disk.

    The first attempt at a Macintosh: Apple's 're-invented' Lisa, model 1 (1983) [Photo credit: ComputerHistory.org]

    A High Tech associate eventually found it back at the store and drove it downtown, but in the meantime, we sat pondering what this new thing was going to do. “Lisa,” we’d concluded, must be a code-name and not the final brand. Somebody thought it would eventually be the Apple IV anyway, but the High Tech manager had heard from Cupertino that the Roman numerals had been declared history after “III.”

    I saw that Lisa came with a “puck.” At least that’s what I thought it was called; two years earlier, hanging around another computer conference, a guy from Tektronix had shown me how to use its CAD/CAM system. It came with a digitizer device called a “puck” that you placed on a tablet, and you could also slide it along the left side of the tablet to select functions for the program.

    I had met a guy the year before who called it a “mouse,” but I thought it was a stupid name, and surely not the one anyone would settle upon. It was only several years later, after sorting through the mountain of business cards I’d collected over the years, that I realized, in one of those “holy-crap” moments, that the guy was Doug Engelbart.

    And since I had also been privy to a demonstration of the Xerox STAR Workstation a year or so earlier (although the fellow there also refused to call it a “mouse”), I was the one designated to flip the switch on Lisa. It took me about an hour to figure out how to boot the thing. You couldn’t even pull out the diskette by yourself; a software switch made the disk slide out slowly and deliberately, like teasing a sideways sloth and being teased back. Even E.Z. laughed at me as he walked by, at one point saying, “Who would’ve thought Apple would be the one to make The System That Stumped Scott?”

    It was mid-afternoon, and only when the electric sloth stopped spitting out cherry-bomb icons did we start drawing a crowd. Although I did hear one fellow praise the cherry-bombs, with language that stuck with me: “You know, if you think about it, that’s not a bad deal,” he said. “Imagine an operating system that’s so smart that it knows it’s hosed.”

    We spent the next several hours trying to guess how this most “intuitive” of systems worked, and I took extensive notes. By “we” at this point, I mean a few hundred people — an audience had formed around our table. Some brought out Samsonite folding chairs, and E.Z. started making the rounds to make sure everyone was comfortable and had refreshments.

    We guessed wrong far more often than we guessed right. Ideas for what to do next were being shouted fast and furiously from folks in the crowd. The idea with Lisa was that you had this document, which you tore off from this on-screen pad using the puck. Then you used a menu to decide what to do with this open scrap of paper. Once we found LisaDraw, we started going to town with it. That’s when I could let the puck go for a while and let other people (carefully, now, this thing costs ten grand) experiment with making the arrow move the way their hands moved.

    The most amazing thing I remember was how many folks were afraid of it. Psychologists who’ve studied the history of advertising have pointed out that it’s color that attracts people to a new gadget first and foremost. People thought of the Apple II, and even Apple’s logo, as being about color; this thing was monochrome and beige, like a brick of vanilla ice cream left to melt in the sun. We had decided “Lisa” couldn’t possibly have been Jobs’ or Wozniak’s girlfriend — perhaps a junior-high-school Spanish teacher, but not anyone close.

    [Photo credit: An original Lisa advertisement, from ComputerHistory.org]


    The world that made the Mac

    When younger folks today (I don’t have to pretend I’m old enough anymore) ask me what it felt like to experience a Macintosh for the first time, expecting a moment of revelation as though I’d set foot on Mars, it’s hard for them to understand this embryo of the Mac in the context of the world we early developers lived in. While we appreciated the Apple II for having accelerated the pace of evolution in computing, and for having been smart enough to let people tinker with its insides like with the Altair 8800 a mere three years before the II premiered, most of us in the business had the sincere impression that Apple least of all understood what our work was about. The Apple III was proof — the only way it could run good software was when it could step down into Apple II emulation mode. And the Lisa didn’t even have that.

    What it did have was Motorola’s 68000 processor, and now we could really see what a world of difference it would eventually bring to our lives. Back in the early ’80s, the CPU ran not only the operating system and the software but whatever graphics the software was capable of doing — it wasn’t shipped off to some co-processor. Simply watching the mouse pointer move fluidly on the screen was impressive to us at the time — more amazing, even, than creating our first plaid-patterned polygons with LisaDraw for no particular reason.

    But also, the world of computing was full of so many more great names than today. Sure, IBM was marching in and would set the tone for the next few decades, but we still had Commodore, Atari, Osborne, KayPro, Ohio Scientific, HP (which had its own designs for business computers at the time), Exidy, Sinclair, Apollo, and the brand which brought me into this business in the first place, Radio Shack. The world was full, and new ideas in hardware were coming out everywhere. Sure, some geniuses in particular rocked the world, but from our vantage point, that’s what everyone was doing…that’s what we were doing. Steve Jobs was one of our rock stars, sure. But we had a plethora of others — Jay Miner, Adam Osborne, Chuck Peddle, Clive Sinclair; the writers like Rodnay Zaks and Peter McWilliams; the great publishers we loved (some of whom I’d later work for) like David Ahl, David Bunnell, Wayne Green; and the brilliant man whose name is so long forgotten, but who may have contributed as much as anyone, if not more, to the foundation of our computing: Gary Kildall.


    It started to come together with the advent of the Macintosh Plus, with 512K — enough memory to actually run software — and the numeric keypad.

    So when the Macintosh first entered the scene, you have to understand, “new” for us came every three months or so. Even then, some of us were still puzzling over the Lisa. Though we all loved the “1984” ad, our expectations of the first Mac were based on our supposition that it would be an attempt to correct the failures of the Lisa. Being priced $8,000 less certainly helped.

    But even the first Macs weren’t brilliant, not really. They suffered from what we all perceived to be Steve Jobs’ basic nature to go with whatever he had at the time, explaining that it’s all by design, and if we didn’t get it, then it’s our fault. The first Mac was a closed system — oh sure, it had a serial interface that was being “pioneered” by Apple, for the connection of external devices that we were promised but never actually saw. But we couldn’t get hard drives to work with the first Macs, no matter how hard we tried (SCSI would only come later). The very first Mac-only conferences, sponsored by the nation’s Apple II users’ groups — easily the friendliest and greatest computer users who ever walked this planet at any time in our history — were showered with businesses whose missions were to connect real peripherals to these things. We had laser printers, for crying out loud, and they were beautiful, but we had to fit our documents on floppy diskettes; and without compression, we were easily using one diskette per document.

    And because opening up one’s Macintosh to do something horrible and unsanctioned, such as hiding a hard disk inside, constituted a violation of the sacred and sacrosanct Apple Warranty, none of these businesses were given accreditation by Apple, and many of them were scared even to employ the Apple logo in their brochures for fear of retribution. Thus the first Mac users’ groups — the offshoots of the Apple II groups — flourished despite Apple. In fact, at about the time he was ousted from the company, many leaders of the Apple community were tired of Steve Jobs and his bloviating nature, and were more than happy to see him replaced with the down-to-earth, all-business, no-frills approach offered by John Sculley.

    Throughout the duration of the 1980s, the Macintosh was never the most powerful 68000-based computer you could buy. In terms of raw processing speed, the Atari ST (the focus of my career for about four years) blew the Mac away in every single challenge, yet that computer was being peddled by a company that was as clueless about computing as FEMA was about hurricanes. And for sheer fun and excitement and creativity, the Commodore Amiga ran circles around the Mac at warp speed. Both the ST and Amiga had orders of magnitude better software going into 1987. Meanwhile, the world’s best software authors all wanted to write for Mac and were stymied by all the hoops Apple made them jump through just to be certified, to get development kits, to attend the seminars, and to be treated in kind. Then something happened round about 1988, in the era of the Mac SE and the Mac IIci, when the 68020 and 68030 processors roared to life. The software got better, the systems became more reliable…HyperCard entered the public vernacular. And Apple became more desperate, more humble, and more willing to let other companies enter into its realm. There was an opening, for the first time. System 7 took bold steps forward in functionality and principle. It really took five long, painful years for the Mac to truly be born.

    Indeed, the Mac’s greatness derives from its designers’ willingness to break barriers. Not everything they tried was novel, and let’s face it, a good deal of it (just like Windows) was stolen from someone else. Many of the original concept’s ideas fell flat on their faces, which is the key reason none of us boot up with Workshop disks today.

    The first great Macintosh: The Mac SE (1986)

    The true brilliance of Macintosh is the ideal that computing can have one way of working that we can believe in and stick to. That brilliance was inside the crate the Lisa was delivered in, but it may have been too hard to notice on day one, when I flipped the switch for the first time. Back in the late ’70s and early ’80s, when systems crashed, we lost everything we were working on, and sometimes the disk it was stored on. Even the Lisa, though, brought forth the idea that an operating system could be all-encompassing, that you could be “in” the Lisa rather than “in” dBASE or VisiCalc or Valdocs. The computer itself could define the way its user worked.

    Granted, that was a great idea that was probably born in Gary Kildall’s mind before anyone else’s, but Apple made it work first. It took a lot of time and patience, and some swearing — most of which has been forgotten by revisionist history. But if you were there in the room when the switch was flipped, or if you can imagine sitting there on a folding chair and watching it happen and sharing the joys and the frustrations in equal measure, then you can truly appreciate what the Mac has brought us.

    [Photo credits: Scans of the Macintosh Plus (1985) and Mac SE (1986) from Byte Magazine]

    Copyright Betanews, Inc. 2009






  • 11 Top Open-source Resources for Cloud Computing

    Open-source software has been on the rise at many businesses during the extended economic downturn, and one of the areas where it is starting to offer companies a lot of flexibility and cost savings is in cloud computing. Cloud deployments can save money, free businesses from vendor lock-ins that could really sting over time, and offer flexible ways to combine public and private applications. The following are 11 top open-source cloud applications, services, educational resources, support options, general items of interest, and more.

    Eucalyptus. Ostatic broke the news about UC Santa Barbara’s open-source cloud project last year. Released as an open-source infrastructure (under a FreeBSD-style license) for cloud computing on clusters, Eucalyptus duplicates the functionality of Amazon’s EC2 and works directly with the Amazon command-line tools. Startup Eucalyptus Systems was launched this year with venture funding, and the staff includes original architects from the Eucalyptus project. The company recently released its first major update to the software framework, which is also powering the cloud computing features in the new version of Ubuntu Linux.
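
    Because Eucalyptus reimplements the EC2 API, existing Amazon client libraries can generally be pointed at a Eucalyptus front end with little more than an endpoint change. Here is a minimal sketch using the Python boto library; the host name, port, path, and credentials are placeholders you would swap for your own cloud’s values:

        import boto
        from boto.ec2.regioninfo import RegionInfo

        # Aim boto's EC2 client at a Eucalyptus front end instead of Amazon.
        # The endpoint and keys below are placeholders, not a real cloud.
        region = RegionInfo(name="eucalyptus", endpoint="cloud.example.com")
        conn = boto.connect_ec2(
            aws_access_key_id="YOUR-ACCESS-KEY",
            aws_secret_access_key="YOUR-SECRET-KEY",
            is_secure=False,              # stock Eucalyptus speaks plain HTTP
            region=region,
            port=8773,                    # default Eucalyptus API port
            path="/services/Eucalyptus")  # default Eucalyptus API path

        # From here on, the calls are the same ones you'd make against EC2.
        for image in conn.get_all_images():
            print("%s\t%s" % (image.id, image.location))

    Dropping the region, port, and path overrides points the same script back at Amazon itself, which is what makes mixed public/private deployments practical.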

    Red Hat’s Cloud. Linux-focused open-source player Red Hat has been rapidly expanding its focus on cloud computing. At the end of July, Red Hat held its Open Source Cloud Computing Forum, which included a large number of presentations from movers and shakers focused on open-source cloud initiatives. You can find free webcasts for all the presentations here. The speakers include Rich Wolski (CTO of Eucalyptus Systems), Brian Stevens (CTO of Red Hat), and Mike Olson (CEO of Cloudera). Stevens’ webcast can bring you up to speed on Red Hat’s cloud strategy. Novell is another open source-focused company that is increasingly turning its attention to cloud computing, and you can read about its strategy here.

    Traffic Server. Yahoo this week moved its open-source cloud computing initiatives up a notch with the donation of its Traffic Server product to the Apache Software Foundation. Traffic Server is used in-house at Yahoo to manage its own traffic, and it enables session management, authentication, configuration management, load balancing, and routing for entire cloud computing software stacks. Acting as an overlay to raw cloud computing services, Traffic Server allows IT administrators to allocate resources, including handling thousands of virtualized services concurrently.

    Cloudera. The open-source Hadoop software framework is increasingly used in cloud computing deployments due to its flexibility with cluster-based, data-intensive queries and other tasks. It’s overseen by the Apache Software Foundation, and Yahoo has its own time-tested Hadoop distribution. Cloudera is a promising startup focused on providing commercial support for Hadoop. You can read much more about Cloudera here.
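
    To make the Hadoop programming model concrete, here is a minimal word-count sketch written for Hadoop Streaming, which lets the map and reduce steps be ordinary scripts that read stdin and write stdout. The file names and the sample invocation in the closing comment are illustrative rather than specific to any one distribution:

        #!/usr/bin/env python
        # mapper.py -- emit one "word<TAB>1" pair for every word read.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print("%s\t%d" % (word, 1))

        #!/usr/bin/env python
        # reducer.py -- Hadoop sorts map output by key, so identical words
        # arrive together; sum the counts for each run of equal words.
        import sys

        current_word, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t", 1)
            if word != current_word:
                if current_word is not None:
                    print("%s\t%d" % (current_word, total))
                current_word, total = word, 0
            total += int(count)
        if current_word is not None:
            print("%s\t%d" % (current_word, total))

        # Illustrative launch (jar location varies by distribution):
        #   hadoop jar contrib/streaming/hadoop-streaming.jar \
        #     -input input/ -output output/ \
        #     -mapper mapper.py -reducer reducer.py \
        #     -file mapper.py -file reducer.py

    The cluster handles the splitting, shuffling, and sorting between the two scripts, which is exactly the kind of plumbing Cloudera’s distribution and support aim to make routine.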

    Puppet. Virtual servers are on the rise in cloud computing deployments, and Reductive Labs’ open-source Puppet, built upon the legacy of the Cfengine system, is hugely respected by many system administrators for managing them. You can use it to manage large numbers of systems or virtual machines through automated routines, without having to do a lot of complex scripting.

    Enomaly. The company’s Elastic Computing Platform (ECP) has its roots in the widely used Enomalism open-source provisioning and management software, designed to take much of the complexity out of starting a cloud infrastructure. ECP is a programmable virtual cloud computing infrastructure for small, medium and large businesses, and you can read much more about it here.

    Joyent. In January of this year, Joyent purchased Reasonably Smart, a fledgling open-source cloud startup based on JavaScript and Git. Joyent’s cloud hosting infrastructure and cloud management software incorporate many open-source tools for public and private clouds.  The company can also help you optimize a speedy implementation of the open-source MySQL database for cloud use.

    Zoho. Many people use Zoho’s huge suite of free, online applications, which is competitive with Google Docs. What lots of folks don’t realize, though, is that Zoho’s core is completely open source — a shining example of how SaaS solutions can work in harmony with open source. You can find many details on how Zoho deploys open-source tools in this interview.

    Globus Nimbus. This open-source toolkit allows businesses to turn clusters into Infrastructure-as-a-Service (IaaS) clouds. The Amazon EC2 interface is carried over, but is not the only interface you can choose.

    Reservoir. This is the main European research initiative on virtualized infrastructures and cloud computing. It’s a far-reaching project aimed at developing open-source technology for cloud computing and helping businesses avoid vendor lock-in.

    OpenNebula. The OpenNebula VM Manager is a core component of Reservoir. It’s an open-source answer to the many virtual machine management offerings from proprietary players, and interfaces easily with cloud infrastructure tools and services. “OpenNebula is an open-source virtual infrastructure engine that enables the dynamic deployment and re-placement of virtual machines on a pool of physical resources,” according to project leads.

    It’s good to see open-source tools and resources competing in the cloud computing space. The end result should be more flexibility for organizations that want to customize their approaches. Open-source cloud offerings also have the potential to keep pricing for all competitive services on a level playing field.

  • Facebook User Count Now 325 Million (Or More)

    On September 15th, Mark Zuckerberg announced that Facebook "now serves 300 million people across the world."  It seems the site’s growth hadn’t exactly come to a stop, either, as about a month and a half later, stats indicate the number of users has risen to at least 325 million.

    This isn’t a case of comScore-versus-Hitwise or Compete-versus-Nielsen discrepancies.  Nick O’Neill reported today, "According to Facebook’s own advertising statistics . . . the company is now beyond 325 million users and continuing to grow."  So everyone should be on the same page.

    What we appear to have here is a social network simply gaining 25 million users in the space of seven weeks.

    That’s an impressive fact all by itself.  Another way of looking at it is that, if Facebook users created their own country, the Land of Zuckerberg would have a larger population than the good old U.S. of A. (even if the users somehow didn’t leave America at the same time).

    What’s next, then?   Well, that’s hard to say.  Facebook’s almost sure to need additional hardware and employees.  Marketers are liable to worship it more, improving the results of monetization efforts.  Otherwise, this is pretty much uncharted territory.


  • Exclusive video of the Litl Webbook

    When news of the Litl Webbook broke on Wednesday, I was pleased to learn that the company is located here in Boston, since not nearly as many people in this area are making actual hardware devices, as opposed to building software and web companies.

    I got a chance to sit down with CEO John Chuang for a thorough overview of the Webbook, so check out the above video for some information about the design philosophy and user interface behind the $699 transforming internet computer.

    As for the machine itself, it’s a 12-inch laptop-style device with a screen that folds over into an “easel mode” for viewing full screen web channels. The screen has a 178-degree viewing angle and there’s a built-in HDMI output for quick connection to TVs.

    The computer stores very little actual data on its 2GB flash drive, instead connecting to existing services and web sites. As such, user settings are constantly synched between multiple Litl machines and there’s no need to worry about losing data, viruses, or any of that stuff. Updates are pushed out silently to machines during the night and you can even pre-customize the “web cards” that will appear on your desktop before ordering. Litl owners in different households can send photos and videos directly to each others’ machines as well.

    Most existing web sites can be turned into web cards to be viewed in easel mode and to appear with the other cards on the home screen, although the company has already tapped into various services’ APIs to create custom interfaces (Weather Channel, Photos, etc.). Easel mode can be controlled with a scroll wheel that’s built into the side of the computer or with an optional $19 remote control.

    Litl is priced at $699 and includes “a free two-year unconditional ‘satisfaction guaranteed or your money refunded’ warranty.” See the initial coverage and press release here.

    Litl [litl.com]