Category: News

  • Tuning out the tablet: Time to give the endless speculation a rest

    By Carmi Levy, Betanews

    Carmi Levy: Wide Angle Zoom

    I’m sure I’m not the only one who looks at renderings of Apple’s long-rumored tablet – or iTablet, or whatever name the faithful have assigned to it this week – and wishes the FedEx truck would pull up to my door with an early demo in time for the holiday season. I’m sure I’m also not the only one who’s ready for the endless speculation to, well, end.

    I don’t think I’ve ever seen an unreleased product generate so much discussion without so much as a peep from the vendor of record. I realize the frenzied speculation is as frenzied as it is because we’re talking about Apple, and that if this were any other company, we’d collectively yawn our response before moving on to the next big thing. This is a company that seems unique in its ability to generate so much activity around what is, for now at least, vapourware. And while I appreciate the value of healthy exchanges in advance of a major product launch, I can’t shake the feeling that the never-ending iTablet fever is just a little much, and that we’d all be doing ourselves a favor by giving it a rest and waiting until Apple actually ships a working product.

    Like junk food – great taste, not so healthy

    Don’t get me wrong: This is all good for Apple. Once again, without spending so much as a dime on advertising, Apple has managed to keep its corporate brand front and center in both tech and mainstream media. It hasn’t had to use its PR firm retainers to pre-announce anything because breathless conventional and social media folks have been perfectly willing to share their so-called news on the company’s behalf. Any other vendor would sell its first, second and third born offerings to have even a fraction of this kind of market visibility and influence. Decades from now, when media mastery is taught in institutions of higher learning, Apple’s ability to time and again conjure a deafening buzz around things that may or may not see the light of day will serve as an iconic case study.

    But is any of this good for us? Is it helpful or hurtful for consumers and wannabes alike to spend days on end hovering over blog entries, twittering madly or debating in online forums? These activities in and of themselves are the sign of a healthy community, of course, and are crucial to giving vendors the kind of insight they need to continue to deliver market-relevant products and services. But has the uber-hype that seems to follow Apple around – and that has seemingly impossibly shifted into an even higher gear for the iTablet – finally reached the point of diminishing returns?

    Too much of anything isn’t good for you

    I’m going to argue that the hype has gone well beyond the point at which it adds any value to our collective lives. We’re working ourselves into a tizzy over something we know nothing about. We don’t know what OS this thing will run, how large it will be, what kind of screen it will have or how much it’ll cost. We’ve seen lots of beautifully rendered images of it and heard a near-endless string of confirmed – then scotched – reports of imminent component orders and production. And as much fun as it is to bat around possibilities, it hardly seems like a productive way to spend time.

    That’s because while we’re all breathlessly sharing thoughts and opinions – but precious few facts – on a mysterious device that we now won’t apparently see until late next year, we continue to be challenged with more mundane needs, like using technology that’s available today to keep customers happy, our bank accounts filled and our lights on. I have no issue gazing into the collective crystal ball as a means of informing the kinds of decisions we need to make either today or in the near future. Keeping at least half an eye on what’s coming is one way of avoiding nasty surprises and keeping one step ahead of everyone else. But when said crystal ball becomes our sole focus of conversation, I’d like to humbly suggest that we’ve gone too far. Balance matters here, too, and if we’re spending all our time discussing a mythical product that’s close to a year from possibly seeing the light of day, we’re missing the significance of today.

    We’re missing the real point

    In a way, it’s a little disappointing that the enormous halo cast by this not-quite-a-product product eclipses the real issue at hand: that vendors have for the better part of the last decade failed to convince consumers that they should pony up for devices in the empty space between pocketable mobile devices and laptops. UMPCs and later MIDs failed to gain any traction thanks to low value propositions and ridiculous pricing. Netbooks have come close, thanks largely to their just-good-enough-for-the-purpose performance, conveniently portable form factor and recession-friendly price point. Timing has also helped netbooks carve out a niche, as their short range wireless and, increasingly, carrier-supported 3G connectivity gives them mobile capabilities that earlier, less well-connected devices could only dream of. Increasingly Web-centric application models don’t hurt, either.

    The success of the netbook is giving rise to new forms of devices and revenue models that could – maybe – finally fill in the veritable valley of death that has already claimed so many mid-sized, mid-priced form factors. While it’s unclear where Apple’s product will ultimately fit, it’s hardly a big story until the company actually moves closer to marketing the thing. Until then, every other competing vendor has just gotten a bit of additional breathing room to figure out what resonates with consumers before Apple satisfies the fanboys and finally introduces its tablet. Or whatever it is.

    Until then, count me among the cynics who really don’t care whether it has an OLED or a TFT screen, whether it’s released as one product or two, or whether it costs $2,000 or half that. Only when we begin to see actual data points will we be able to decide whether it’s worth pulling the plastic out of our wallets. For now, even Apple is capable of overstepping the limits of my patience.

    Carmi Levy is a Canadian-based independent technology analyst and journalist still trying to live down his past life leading help desks and managing projects for large financial services organizations. He comments extensively in a wide range of media, and works closely with clients to help them leverage technology and social media tools and processes to drive their business.

    Copyright Betanews, Inc. 2009






  • In Going Free, London Evening Standard Doubles Circulation While Slashing Costs

    In October, we wrote about how, just as Rupert Murdoch and crew look to put up paywalls for online content, the operators of the London Evening Standard were going in the other direction and making their physical paper free. So, how’s that been working out? mowgs alerts us to the news that the paper has doubled its circulation in just a month. Not bad. But what’s more interesting is that it’s also slashed its distribution costs massively. It used to cost about 30p, and now it’s just 4p per paper.

    This actually brings up a point that’s rarely talked about in the free vs. paid debate. Charging can be expensive. It takes quite a bit of effort to charge, to take money, to manage the money, to set up the accounting and bureaucracy for managing each transaction. And, even worse, if you’re working with third party distributors, like news agents, then you have to handle financial relationships with them as well. Getting rid of the per paper price changes the economics not just on the revenue side, but on the cost side as well — something that’s rarely discussed at all. And, yes, this impacts online news orgs too. Putting up a paywall is going to prove a lot more expensive than most people think on the cost side.
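    To make the cost side concrete, here is a minimal back-of-the-envelope sketch in Python. The 30p and 4p per-copy figures and the doubled circulation come from the story; the 250,000 starting circulation is purely an assumed placeholder, since the article doesn’t give one.

```python
# Back-of-the-envelope sketch of the Evening Standard's new economics.
# Per-copy costs come from the story; the starting circulation of
# 250,000 is an assumption purely for illustration.
old_cost_per_copy = 0.30   # pounds per copy, when the paper was paid-for
new_cost_per_copy = 0.04   # pounds per copy, after going free
old_circulation = 250_000  # assumed baseline
new_circulation = old_circulation * 2  # "doubled its circulation"

old_daily_cost = old_cost_per_copy * old_circulation
new_daily_cost = new_cost_per_copy * new_circulation

print(f"Old distribution cost per day: £{old_daily_cost:,.0f}")
print(f"New distribution cost per day: £{new_daily_cost:,.0f}")
print(f"Copies roughly doubled, cost cut by "
      f"{(1 - new_daily_cost / old_daily_cost):.0%}")
```

    Even with twice as many copies on the street, the per-day distribution bill falls sharply under these assumptions, which is the point the paragraph above is making.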






  • Etsy Find: Stylish Cat Beds from Apartment C PLUS BONUS GIVEAWAY

    Apartment C Cat Beds

    I’m absolutely in love with the beautiful designer fabrics used on these pet beds from Etsy seller Apartment C. There are so many to choose from! Each bed is handmade by Alling Welsh. The beds measure 18 inches, perfect for a cozy cat nest!

    Handmade Pet Beds from Apartment C

    I love these beds in the black and white patterns. Very mod! And the kitties below sure do seem to be enjoying their beds.

    Apartment C Cat Beds

    BONUS GIVEAWAY! ENTER TO WIN! TWO WINNERS!

    Alling is offering not one, but two beds for this giveaway! And here’s the hard part, the winners will get to choose whichever bed they want! Oh, that will be difficult! To enter, please take a look at the fabric selections in the Apartment C Etsy shop, then come back here and leave a comment on this post with the name of your favorite design. The winners will be chosen in a random drawing on November 27. One entry per person. This giveaway is open to US addresses only.


  • Invincible shield without the invincible

    ZAGG is one of the most popular makers of full-body consumer electronics screen protectors out there, and they are back with something new and unique. The new ZAGGskin is a highly advanced, unscratchable invincible shield with the added customization of a skin.

    This new product is set to launch tomorrow and will have everything that a skin should have, with the protection of a full-body protector.

    On Friday the 20th, visit ZAGG to get yours.

    Thanks For the Details


  • Smart Marketing = Greener Printing for J. C. Penney

    One of the terrific things about greening a print marketing program is that many of the best practices in marketing today have “green” as a by-product.

    Take the example of J. C. Penney, which made marketing headlines today when it announced that it would be discontinuing its semi-annual Big Book catalog after the Fall-Winter 09 season. Over the years, J. C. Penney was finding that its catalog was less a direct selling channel than a way to prime the pump for online sales. Instead of wasting volumes of paper, ink, and coating — not to mention the fossil fuels to deliver the 800-1000-page books — it decided to slim things down.


  • Battlefield: Bad Company 2 US Beta client now available for download, servers still down

    The PS3-exclusive beta for DICE’s Battlefield: Bad Company 2 (game also on Xbox 360 and PC) is already available for download, so get it while it’s st…

  • CitiKitty Giveaway Winners

    Giveaway Winners

    It looks like a lot of people are excited about toilet training their cats! Out of 599 entries, the two lucky winners of the CitiKitty giveaway are:

    Not only do Cindy and Kimberly each win a CitiKitty Cat Toilet Training Kit, but they both get to go on a $100 shopping spree at www.CitiKitty.com! Have fun checking out all the cool cat products!

  • Media Create hardware sales: November 9 – 15, 2009

     And now for our weekly report on the hardware sales from Japan. Media Create has released the numbers for the week of November 9th through Novem…

  • Google Books Settlement 2.0: Evaluating Competition

    This is the third in a series of posts about the proposed Google Book Search settlement.

    Now that we’ve described the proposed settlement agreement’s biggest potential upside for the public—expanded online access to books, particularly out-of-print books—that benefit must be weighed against the potential down-sides. On that score, the settlement’s potential impact on competition in the online book market has loomed large. Critics of the settlement have emphasized two principal dangers:

    1. The potential for a Google monopoly over orphan and unclaimed books.
    2. The potential for monopolistic pricing of the Institutional Subscription Database, particularly for higher education.

    The revised Settlement 2.0 made little or no effort to address these concerns, leaving it to Congress or antitrust authorities to fix later.

    A Google Monopoly on Orphan & Unclaimed Books?

    At the heart of the proposed settlement is a bargain that lets Google (and only Google) leapfrog the problem of “unclaimed works”—books whose copyright owners cannot be found or whose owners can’t be bothered to fill out paperwork for a small payment disbursed by the Registry (consider how many “class action” notices you’ve tossed in the trash unread). Thanks to the magic of the class action process, the settlement solves this problem by resolving the copyright claims of these otherwise unreachable copyright owners and designating all of their works by default as available for “Display Uses” by Google. In other words, so long as no one steps forward to claim these books, Google (and only Google) has a license to make them available in all the ways the settlement allows.

    Many who filed objections to the proposed settlement, including the Department of Justice, Microsoft, Amazon.com, the Internet Archive, and Public Knowledge, among others, argued that this could create a de facto Google monopoly over online use of these unclaimed works. And while the revised Settlement 2.0 creates an “Unclaimed Works Fiduciary” (UWF) to act as a guardian on behalf of owners of unclaimed works, neither the UWF nor the Registry has the power to grant a similar license to any other entity that might want to make the same kinds of uses that Google will be entitled to make under the settlement.

    Nobody likes this “only-for-Google” aspect of the settlement—in fact, Google has said that it would support orphan works legislation that would empower the Registry to make the same deal (or even a better deal) with others who want to use these unclaimed works. (Where the claimed books are concerned, in contrast, the Registry will likely ask the rightsholders to appoint it to license companies other than Google. But that still leaves all the unclaimed books out.) The settlement agreement even has a provision that makes it clear that the UWF can license others “to the extent permitted by applicable law”—what amounts to an “insert orphan works legislation here” invitation.

    But absent some legislative supplement to the revised Settlement 2.0, it still seems that any other company would have to scan these books, get sued, and hope for a class action settlement. That, of course, is the kind of barrier to entry that any monopolist would envy.

    This raises a worthy question: if legislation is necessary to fix the competition problem posed by the settlement, then why do we need a class action settlement in the first place? Why not solve what seems like a quintessentially legislative problem with legislation, instead? (As Amazon points out, that’s exactly what was done when music publishers brought a class action against the first digital audio tape (DAT) recorders).

    Here’s where realpolitik enters the equation. Google correctly points out that Congress has been working on orphan works legislation for years, to no avail. And none of the legislative proposals came close to the comprehensive solution embodied in the proposed settlement. So the question boils down to a political one: do you believe that approval of Settlement 2.0 will make orphan works legislation more likely, or less likely? Without a crystal ball, it’s hard to know.

    Monopoly Pricing of the Institutional Subscription Database?

    One of the commercial services that Google is authorized to provide under the proposed settlement is the “Institutional Subscription Database” (aka “ISD”), which will provide “all-you-can-eat” access to the corpus of scanned books. The chief customers for the ISD are likely to be universities (the same folks who are providing Google with the books to be scanned), for whom instant digital access to every word in every book in Google’s collection is likely to be very compelling.

    The big question is whether, over time, the ISD will become the one database that no university can do without, and the one database with no market substitute (again, because Google will be the only company who can provide a comprehensive corpus without fear of copyright liability, for the reasons explained above). This, of course, is a recipe for monopolistic price gouging, as a group of academic authors led by Prof. Pam Samuelson have pointed out. Over time, universities could face spiraling prices as Google and the Registry conspire to maximize their revenues on the ISD product.

    Google and its supporters respond by pointing out that the settlement requires that pricing for the ISD be set with regard to “two objectives: (1) the realization of revenue at market rates for each Book and license on behalf of Rightsholders and (2) the realization of broad access to the Books by the public, including institutions of higher education.” The settlement goes on to promise that Google and the BRR “will use the following parameters to determine the price of Institutional Subscriptions: pricing of similar products and services available from third parties, the scope of Books available, the quality of the scan and the features offered as part of the Institutional Subscription.”

    But Google’s own people have reportedly admitted that there might not be any “similar products and services” to the ISD. And the settlement does not give ISD subscribers the right to go to court to enforce these “objectives” and “parameters.” Instead, Google has entered into “side agreements” with some of its major library partners (U. of Michigan, U. of Wisconsin—both of which will be receiving subsidies from Google for their ISD fees) that allow only those institutions to challenge pricing, and only under certain circumstances. So what we are left with is a “trust us” from Google, the Registry, and their biggest library partners.

    Of course, the chances of this coming to pass are hard to know in advance. As we have pointed out, if many large publishers pull their books out of the ISD database, then perhaps the ISD service won’t become indispensable to universities after all. So, ironically, the more successful the ISD proves to be, the more of a danger its pricing mechanism might prove to be for higher education.

    Fixing the Competition Problem

    Just because the proposed Book Search settlement isn’t good for competition doesn’t mean it’s illegal. There is a robust debate going on (see, e.g., articles by Picker, Elhauge, Fraser, Lemley, and Picker again) about whether the proposed settlement might violate antitrust laws, and the Antitrust Division of the Department of Justice will doubtless continue its investigation.

    But we shouldn’t be satisfied with antitrust law here. This is not just a simple market transaction between commercial entities. Google is building an enormously important public resource, a task it can only undertake with the blessing of a federal court. The public deserves a solution that is not “barely legal,” but that instead encourages real, robust competition. As written, without some modification or legislative adjunct, Settlement 2.0 does not do that.

  • New Fast Company: The Meowtrix

    I CAN HAS SINGULARITY?

    My new Fast Company essay is now up, looking at the news that IBM researchers have produced a cortical computing system with the connection complexity of a cat’s brain. (My original title is shown here on the illustration; the replacement title is a bit inaccurate and I’ve suggested a replacement, so let’s just move along.) It’s a follow-up to the research from a couple of years ago on a mouse-scale brain simulation; we’re still on-target for a human-level brain connection simulation by 2020.

    All of the stories about this, including my own, have emphasized the cat brain aspect, but in reality the truly nifty development is the improved ability to map brain structures using advanced MRI and supercomputer modeling.

    Ultimately, this is a very interesting development, both for the obvious reasons (an artificial cat brain!) and because of its associated “Blue Matter” project, which uses supercomputers and magnetic resonance to non-invasively map out brain structures and connections. The cortical sim is intended, in large part, to serve as a test-bed for the maps gleaned by the Blue Matter analysis. The combination could mean taking a reading of a brain and running the shadow mind in a box.

    Science fiction writers will have a field day with this, especially if they develop a way to “write” neural connections, and not just read them. Brain back-ups? Shadow minds in a box, used to extract secret knowledge? Hypercats, with brains operating at a thousand times normal speed? The mind reels.

    The phrase “shadow minds” should be familiar to anyone who read the Transhuman Space game books — this is almost exactly what the game talked about, and on an even more aggressive schedule!

  • The Statinator Paradox

    Pity the poor lipophobes and statinators.  They’ve just taken another grievous wound to their favorite theory and haven’t even got sense enough to know it.  In fact, not only do they not have sense enough to realize they’ve taken the hit, they’re actually crowing about it.

    The current issue of the Journal of the American Medical Association (JAMA) has an article titled Trends in High Levels of Low-Density Lipoprotein Cholesterol in the United States, 1999-2006 that puts another major dent in whatever validity remains of the lipid hypothesis of heart disease.

    I’m going to start categorizing the types of findings published in this paper under the rubric of The Statinator Paradox.  I find it interesting that whenever scientists discover data that shows the opposite of what their hypotheses predict, they don’t conclude that their hypotheses might be wrong; instead they deem the contradiction a ‘paradox’ and bumble on ahead with their hypotheses intact.

    The lipophobes hold the hypothesis dear that saturated fat causes heart disease.  When the data began to surface that the French eat tons more saturated fat than do Americans yet suffer only a fraction of the heart attacks, the French Paradox was born.  Nothing wrong with our hypothesis, it’s just those pesky French people who are somehow different.  It’s a By God paradox, that’s what it is.

    Same thing happened with the Spanish.  Researchers looked at the food consumption data in Spain and discovered that Spaniards had been eating more meat, more cheese and more dairy while decreasing their consumption of sugar and other carbohydrate-rich foods over a 15-year period.  And, lo and behold, during this same period, stroke and heart disease rates fell.  Can’t be.  Saturated fat causes all these things.  But the data show…  Thus came the Spanish Paradox.

    Statinators and lipophobes believe with all their little fat-free hearts that LDL-cholesterol is bad and is the driving factor behind heart disease.  So whenever I come upon data that gives the lie to this notion, I’m going to start calling it the Statinator Paradox.

    This JAMA paper is a classic case of the Statinator Paradox.

    Researchers using the NHANES data looked at the change in the prevalence of elevated LDL cholesterol and found that it fell substantially from 1999-2000 to 2005-2006.  In a period of about six years the prevalence of high LDL cholesterol dropped by a third, which is a lot of drop in a fairly short period of time.

    And since everyone knows that high LDL cholesterol causes heart disease, it should go without saying that during this same time period there occurred a significant decrease in the prevalence of heart disease.  Right?  Uh, well, no, not really.  If anything, the prevalence of heart disease actually increased.  But not to a statistically significant degree.  So statistically there was no difference in the prevalence of heart disease during a time in which high LDL cholesterol levels were falling.  But if high LDL cholesterol causes heart disease…? It’s the ol’ Statinator Paradox writ large.

    It was fun reading this paper because a basically fairly simple project was cloaked in all the regalia of academia and academic speak.

    It starts out with a great opening sentence that is a paragon of academic weaselry:

    High total blood cholesterol is recognized as a major contributing factor for the initiation and progression of atherosclerosis.

    Recognized?  What does that mean?

    I could substitute words in this sentence and come up with the following:

    The policies of Barack Obama are recognized as a major contributing factor in the initiation and progression of socialism in America.

    What does that mean?  Depends upon whom you say it to.  If I were to shout this sentence at a Sarah Palin campaign event, I would be cheered loudly.  If I said it at a Nancy Pelosi event, I would be tarred and feathered.  Since the ‘truth’ of the sentence is a function of the bias of the person hearing it, it’s not a meaningful sentence.  As written, the sentence doesn’t mean squat, which makes it perfect for academic writing.

    The authors, I’m sure, are believers in the lipid hypothesis but just can’t muster the gumption to write ‘high total blood cholesterol IS a major contributing factor…’  Instead they use the word ‘recognized,’ which makes the sentence meaningless and lets them off the hook should the lipid hypothesis ever blow up in their faces.

    In setting up the study, the researchers went through a lot of rigmarole to allocate subjects to three different categories depending upon their degree of risk for developing heart disease.  In determining this risk, researchers used the Framingham risk equation, which relies to a great extent on cholesterol levels to allocate that risk.  Which is strange since the Framingham Study has never shown elevated cholesterol to be a risk factor for heart disease.

    Once subjects were divvied into these three groups, the researchers measured LDL-cholesterol levels and calculated what percentage of subjects in each group had high LDL-cholesterol levels.  The threshold as to what was high varied as a function of the risk level of the group as a whole.  The bar for what was high was lowest in the high risk group and highest in the low-risk group.  In other words, if subjects had multiple risk factors, then an LDL-cholesterol level of anything over 100 mg/dl was considered ‘high,’ whereas in subjects in the lowest risk category, an LDL-cholesterol level over 160 was considered ‘high.’
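    To make that tiered definition concrete, here is a minimal sketch of the threshold rule in Python. The 100 mg/dl and 160 mg/dl cutoffs come from the post; the 130 mg/dl cutoff for a middle tier is an assumption (the post doesn’t give the intermediate number).

```python
# Sketch of the risk-tiered definition of "high" LDL described above.
# The 100 and 160 mg/dl cutoffs come from the post; the 130 mg/dl cutoff
# for the intermediate-risk group is an assumption.
LDL_CUTOFFS = {
    "high_risk": 100,          # multiple risk factors / CHD equivalents
    "intermediate_risk": 130,  # assumed middle tier
    "low_risk": 160,           # lowest-risk category
}

def has_high_ldl(ldl_mg_dl: float, risk_group: str) -> bool:
    """Return True if LDL exceeds the threshold for the subject's risk group."""
    return ldl_mg_dl > LDL_CUTOFFS[risk_group]

# The same LDL reading counts as "high" for a high-risk subject but not a low-risk one.
print(has_high_ldl(120, "high_risk"))  # True
print(has_high_ldl(120, "low_risk"))   # False
```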

    Researchers calculated as a percentage the number of subjects who had high LDL-cholesterol in each risk group and did the calculations again six years later.

    The weighted age-standardized prevalence of high LDL-C levels among all participants and among participants in each ATP III risk category decreased significantly during the study periods.

    Which is what they were crowing about.  Our therapy dramatically decreased the number of people at risk for heart disease.

    But as for heart disease itself:

    No significant changes were observed in the prevalence of CHD or CHD equivalents from 1999-2000 to 2005-2006.

    So what did our researchers conclude from the fact that there were one third fewer people with high LDL-cholesterol yet there was no decrease in heart disease?

    They concluded the obvious.  There were still two thirds of people with LDL-cholesterol levels that were too high.  And, no doubt, these people were not on statins.

    Don’t believe me?  Here it is in their own words.

    However, our study found that almost two-thirds of participants who were at high risk for developing CHD within 10 years and who were eligible for lipid-lowering drugs were not receiving medication.

    So, let me see if I’ve got this straight.  This study shows no evidence that lowering LDL-cholesterol levels decreases the prevalence of heart disease.  And what we conclude from this data is that we simply need to treat more people.  Brilliant!

    As I was reading this paper online, I got a bing alerting me that I had an email from Medscape bringing me the latest in mainstream medical thought.  I opened the email and began scrolling through the various articles displayed when my eye fell on one titled “Lipids for Dummies.”

    I clicked on it, and what opened was a video of a statinator of the deepest dye interviewing an alpha statinator about how to best deal with the risk of heart disease.

    It was unbelievable.

    Here in a short interview is everything that is wrong with mainstream medicine today.  We have two influential doctors at the pinnacle of their academic and clinical prowess – no doubt on the payrolls of multiple pharmaceutical companies – who are absolutely full of themselves blathering on about expensive treatments that have no true scientific grounding.  And their BS is being disseminated to practicing doctors everywhere. Instead of ‘Lipids for Dummies’ this interview should have been called Dummies for Statins.

    Watch and just shake your head.

    Click here to view the embedded video.

    These guys aren’t really talking about reducing the risk for heart disease or early death; they’re discussing how to use extremely expensive medications that are not particularly benign to treat lab values.  As I’ve written countless times, statins can quickly and effectively treat lab values, but there is little evidence they treat much else.  So if you want to have lab values that are the envy of all your friends, statins are the way to go.  But if you want to really reduce your risk for all-cause mortality, you might want to think twice before you sign up for a drug that will cost you (or your insurance company) $150-$250 per month, make your muscles ache, diminish your memory and cognition, and potentially croak your liver.

    If you wonder who underwrites these kinds of interviews, take a look at the actual Medscape link in which the video is embedded.  See if you, like Sherlock Holmes, can figure it out.

    This link requires free registration.

    (If I weren’t so pleased with a nice Sous Vide Supreme review we got today, this kind of nonsense would make me contemplate seppuku.)



  • Senate Exploring Med School Profs Putting Names On Ghostwritten Journal Articles In Favor Of Drugs

    We’ve had a few posts recently about the growing scandal in the pharma and publishing worlds, whereby big pharma companies would produce fake medical journals with the stamp of approval from big publishing houses, to make it look like their drugs had a lot more scientific support than they really did. To make matters even more insane, often the pharma companies would ghostwrite articles, and then get professors to basically put their names on the works, which were designed to emphasize the benefits of certain drugs while hiding or de-emphasizing the risks. Copycense points us to the good news that Senator Grassley is at least asking various med schools to explain why this was allowed, while probing how putting professors’ names on ghostwritten articles is any different than plagiarism.






  • Say Hello to the New New GigaOM

    Two winters ago, we unveiled a new design for GigaOM — today, we are launching another one. Whereas before, our focus was on design, this time around we’re aiming to bring you a unique user experience.

    The biggest change at our company in the intervening two years, of course, has been in the growth of our network, which now totals seven blogs: In addition to this site, we also have TheAppleBlog, jkOnTheRun, NewTeeVee, Earth2Tech, OStatic and WebWorkerDaily. In short, we generate a lot of content that adheres to the basic ethos of GigaOM.

    Our product guru, Jaime Chen

    While I remain a big believer in specialist niches, I feel it’s also important to surface more of the quality work being produced across these properties, such as the Car 2.0 coverage by Josie Garthwaite on Earth2Tech or Simon Mackie’s web working tips. So about six months ago, I asked our product guru, Jaime Chen, to come up with a game plan that would allow us to conduct a complete overhaul. Her mission was to:

    • Better showcase new content and related articles so that we can overcome the limitations of the blog format without really moving away from it.
    • Give readers an easy way to go to other GigaOM Network properties so that they can discover the work of our entire team of writers.
    • Focus on super simple content consumption and discovery.
    • Enable us to be more social.
    • When it comes to actual blogging, take us back to our roots.

    Jaime, instead of taking my word for it, went out and talked to a whole lot of our users — nearly 1,000 of you shared your feedback and insights with us. And you were not shy about your dislikes. As it turned out, most of what you wanted was already on my wish list. So we got ahold of our old friend Ryan Freitas and the ace design team of Shane Pearlman and Peter Chester to turn what we learned into a unique experience. They quietly toiled away for months and now, here you have it: The first step in the network-wide overhaul.

    What we’ve tried to do is strike a fine balance between what is a blog and what would be an online magazine. We have done this by adding a Featured Posts block at the top of the home page, while toward the bottom we’ve added topic pages and special reports. The rest maintains the typical blog format, but with a focus on extreme discoverability — the most-requested feature amongst our readers.

    To that end, many of you asked for a list of three bullet points that summarize the highlights of longer posts. You got it. A list of related posts was another common request, so we’ve implemented that as well. And for those of you who wanted the GigaOM Team to point to great blog posts we might have read across the web and found useful, we’re rolling out that feature later this week. It’s pretty simple — we don’t have a monopoly on ideas, and since our business is based on your attention, it’s our job to make sure that your attention is being put to good use. And that means helping you save time and pointing you to stuff you might find useful.

    A note about typography: I wanted us to make reading an easy experience, so I opted for white spaces, bigger fonts and some elements that you would typically see in a traditional print publication. I’ve been reading the test site on an older, smaller screen (1024 x 768 ThinkPad) and my eyes don’t hurt — yet. In addition, some of the typographic stylings come to our blog courtesy of font technologies from San Francisco-based startup Typekit. We’re using the Clifford font, which is being served using the Typekit technology (Disclosure: Typekit and Automattic, the company behind WordPress.com, are backed by True Ventures, investors in GigaOM and where I am a venture partner).

    In the end, I want us to be closer to my grand vision of what I see as the future of blogging — more visual, multimodal, interactive, real time and social. We’re not there just yet, but we will be in a few months. Today you can share our stories on Twitter and Facebook; you can also connect via Facebook Connect and leave comments on the site. It might come as a surprise, but this entire operation (including a fairly advanced publishing system) was built on top of WordPress.com, the on-demand blogging service based on the open-source software, WordPress. Without going into the dirty details, WordPress Jedi Mark Jaquith, our in-house coding champ Chancey Mathews and our dev team of Kelsey Damas, Nick Ohrn, Dan Cameron and Matt Wiebe and designers Reid Peifer and Brandon Jones – many of them spread across different time zones and geographies — helped us put together the whole back end for the new site (and our blog network).

    Now all this design and user experience is only as good as what we are supposed to do: create content you actually want to read. On that front, too, we have some good news. Liz Gannes, who till recently was the editor of NewTeeVee, has joined the GigaOM team as senior writer, where she will closely follow consumer web technologies and startups. She will be editor-at-large for NewTeeVee, where she will be contributing her insights into the world of online video as well. Liz is going to be joining me and Stacey Higginbotham, who has also been made a senior writer for GigaOM.

    Given her work ethic and deep insights, Stacey’s promotion is well deserved. She will continue to track broadband (including policy), the FCC and cloud computing. So there you have it: the GigaOM troika. We are going to be focusing on all the things we love, with a renewed emphasis on innovation. Thankfully we have our editor in chief, Sebastian Rupley, giving us his perspective on technology all the time — his experience brings a much-needed realism to the go-go nature of Silicon Valley.


  • Harper Chiller Project to Save Cold Hard Cash

    PALATINE, IL – Harper College will save at least $155,000 a year in electricity costs by installing new, high efficiency chillers that will provide air conditioning to six campus buildings. The project was approved by the Harper Board of Trustees at their regularly scheduled meeting on Thursday night.

    The new chillers will replace old air conditioning equipment, some of which use chlorofluorocarbons (CFCs), which are considered harmful to the environment. In addition to being more environmentally friendly, the chillers will reduce energy usage by approximately 60% compared to the old equipment. 

    “The old chillers were approaching the end of their useful life, so this is a good time to replace them and get considerable cost savings,” said Jim Ma, Director of Physical Plant. “Because of the poor economy, the bids for the chillers and construction came in well below estimates. It’s definitely a buyer’s market right now.”  

    “Given the difficult economy and the ongoing problems with state funding, it is essential that we find ways to reduce costs while not affecting the programs and services we provide,” said Harper College President Dr. Kenneth Ender. “With more efficient equipment, we should be able to reap the benefits of lower costs to cool these buildings for many years to come.”   

    Harper will spend about $3.2 million to install new chillers that will serve buildings F, L, P, R, A and W. The cost includes related infrastructure improvements, including new cooling towers, pumps, controls, piping, masonry and concrete work.
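    Using only the figures quoted in the release, a rough payback calculation looks like this. It is a hedged sketch: it ignores energy-price changes, maintenance savings and financing costs, and treats the “at least $155,000” figure as a flat annual number.

```python
# Simple payback sketch using the figures quoted in the release.
# Ignores energy-price inflation, maintenance savings and financing costs.
project_cost = 3_200_000   # dollars: chillers plus related infrastructure
annual_savings = 155_000   # dollars per year, "at least"

simple_payback_years = project_cost / annual_savings
print(f"Simple payback: about {simple_payback_years:.0f} years")  # roughly 21 years
```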

    Money for the project will be drawn from the $153.6 million capital referendum that was passed last year. The new funds will be spent primarily on the repair and renovation of facilities.

    “By investing in critical infrastructure improvements now, Harper will be better equipped to address the needs of our rapidly growing enrollment,” said Harper Board Chair Laurie Stone. “This is especially important as we look to add new programs to help workers train for new jobs and careers.”

     

  • Viewing the world as a system will help us establish sustainability

    Paul Hawken was the keynote speaker at the Sustainable Industries Economic Forum in San Francisco on Thursday. He had some inspiring talking points (the forum’s goal was to ‘reinspire the inspired’), but one of the key takeaways was how we should be viewing sustainability. He started by saying that sustainability is easily definable: Sustainability means we survive. Living unsustainably means we don’t. But it was how he suggested we view this that was really interesting.


  • Five improvements for IT managers in 2010

    By Ed Moyle, TechNewsWorld

    Every year around this time, everyone from antimalware companies to analyst firms lines up to tell us about the top IT and security trends — what they are and why we should care. This year, chances are they’ll tell us all about cloud computing, virtualization, and social networking, and why these technologies are the new best (or worst) friends for security folks in 2010.

    Now if you’re sensing a bit of snarkiness here, you’re right — I find these lists a bit frustrating. That’s not because of inaccuracies in the lists themselves (to the contrary, many of them are dead-on), but instead because they sometimes inappropriately drive how IT managers make budgeting decisions. Don’t get me wrong, keeping abreast of the new areas is always valuable — and I’m always fully on board with keeping us and our staff up to date and capable of reacting to new types of threats. But it’s also important to keep in mind that what’s new isn’t always what’s most critical. Where should you be investing budget dollars? At critical areas, not just what’s new and shiny.

    To illustrate this, consider a firm that doesn’t use AV (antivirus) and also allows users to access social networking sites. The trend predictions are likely to clue us in about why social networking is something we should care about, but they might not mention malware at all (after all, that’s been around forever). But if your firm doesn’t yet have a cohesive antimalware strategy … well, you’ve got bigger fish to fry than how, when, or what your employees tweet. In other words, when it comes time to allocate budget for new projects, you need to consider both the new and the old — the upcoming trends that Big Analyst Firm says are emerging as well as the “tried and true” fundamentals that don’t get as much play.

    In the field, it’s all about the basics. When you stop to think about it, how many of us are really where we need to be when it comes to the fundamentals? Which position would you rather defend: that your firm was hacked because of some newly emerging threat, or that you got hacked because you weren’t doing the generally accepted minimum practice?

    So in the spirit of keeping one eye on the practical, here’s my New Year’s list — or, more accurately, my “reminder list”: a highly unscientific breakdown of the top five basics that are often overlooked in the enterprise. These are things that you probably should be doing, but might not be — and things that you could probably do more with, but maybe aren’t.

    1) Vulnerability Assessment. There are several reasons why you might not be doing as much vulnerability assessment as you could be: It can bring down critical systems, it requires some specialized knowledge to vet false positives, and it has a high overhead in terms of care and feeding by staff members. As such, there are quite a few organizations that just don’t use it at all — and for companies that do, it’s often inconsistently deployed.

    However, aside from this complexity, it’s also one of the most valuable areas of feedback that you can get about how your organization performs. Data about the effectiveness of your patch management processes, your password policy, and your system-hardening procedures are all directly and practically observable through the data coming out of vulnerability assessment results. If you haven’t deployed it yet, the technology is cheap, mature and commoditized.
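    As a rough illustration of that feedback loop, here is a minimal sketch that tallies scan findings by category so patching and hardening trends show up over time; the data format, host names, and categories are hypothetical rather than the output of any particular scanner.

```python
from collections import Counter

# Hypothetical scan findings: each entry is (host, category, severity).
# The format is an assumption; real scanners export far richer data.
findings = [
    ("web01", "missing_patch", "high"),
    ("web01", "weak_password", "medium"),
    ("db02",  "missing_patch", "high"),
    ("db02",  "unhardened_service", "low"),
]

by_category = Counter(category for _, category, _ in findings)
high_severity_hosts = {host for host, _, sev in findings if sev == "high"}

print("Findings by category:", dict(by_category))
print("Hosts with high-severity findings:", sorted(high_severity_hosts))
```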

    2) Asset Inventory. How many of us have a detailed inventory of all the “stuff” on our networks? Organizations tend to grow their IT organically, so many of us are in the situation where going back to inventory what we have fielded is a huge, expensive undertaking. Even when we do have some idea of what’s out there, there are very often “gaps” in our understanding of our environments. For example, we may have a pretty clear idea of what desktops are fielded but not have tremendous insight about applications.

    If you don’t have a clear idea of what you have fielded in your organization, now is the time to put together that inventory you’ve been putting off. Leverage existing tools like VA reports or business impact analysis documentation to put together a rough “map” of what you have fielded and keep it updated as changes occur. You don’t have to have a fancy system to do this. Start small and grow your inventory the same way you grew your network — organically.
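    A minimal sketch of that “start small and grow it” approach might look like the following; the file name and column names are assumptions for illustration, standing in for whatever report export you already have on hand.

```python
import csv
from collections import defaultdict

# Build a rough asset "map" from an existing report export.
# The file name and column names below are assumptions for illustration.
inventory = defaultdict(lambda: {"os": "unknown", "applications": set()})

with open("va_report_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        host = row["hostname"]
        inventory[host]["os"] = row.get("os", "unknown")
        inventory[host]["applications"].add(row.get("application", "unknown"))

for host, details in sorted(inventory.items()):
    print(host, details["os"], sorted(details["applications"]))
```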

    3) Provisioning. We all know the ideal end state of user provisioning: defined roles that govern access to network resources and applications. But in practice, when the topic of provisioning comes up we wind up going down a path that involves deploying complicated systems or that involves significant effort parsing out users based on vague or poorly defined roles. While we wait for the dust to settle, the day to day business of assigning new users to applications moves ever forward — often with little or no assist from security staff and even less organization.

    However, a solid map of roles doesn’t have to be complicated. Start by defining roles at the very highest levels, and get more granular over time. Don’t have a provisioning system deployed? Delegate responsibility for creating roles to subject matter experts who use the application all the time. Check in with them periodically to make sure they’re doing the right thing.
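    Here is a minimal sketch of what “roles at the very highest levels” can look like in practice; the role and application names are hypothetical, and the point is only that a coarse map is enough to start with.

```python
# Start with coarse roles and refine later; role and app names are hypothetical.
ROLES = {
    "finance":     {"erp", "expense_reporting"},
    "engineering": {"source_control", "bug_tracker"},
    "everyone":    {"email", "intranet"},
}

def allowed_apps(user_roles: set[str]) -> set[str]:
    """Union of application access granted by each of a user's roles."""
    apps = set(ROLES["everyone"])
    for role in user_roles:
        apps |= ROLES.get(role, set())
    return apps

print(sorted(allowed_apps({"finance"})))
# ['email', 'erp', 'expense_reporting', 'intranet']
```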

    4) Audit and Monitoring. IDS systems are chatty, and the individuals who review alerts are often under significant stress and workload already. At the OS and application levels, staffers are often too overwhelmed to review log and activity reports as much as they should. So who has time to keep up? In many organizations, the day-to-day monitoring of audit and activity logs tends to fall by the wayside — for folks that even have auditing features enabled.

    However, compliance mandates like PCI, HIPAA and others specifically require review of audit logs, so failure to make this happen is not an option. Step up what you review and how often you review it. Institute spot-checks to make sure that staffers are doing their jobs when it comes to reviewing this critical data.
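    A spot-check doesn’t need much tooling to get started. The sketch below pulls a small random sample of audit-log lines for human review; the log file name, and the assumption that a line-per-event text log exists, are illustrative only.

```python
import random

# Spot-check sketch: pull a small random sample of audit-log lines for
# human review. The log file name and format are assumptions.
SAMPLE_SIZE = 20

with open("auth_audit.log") as f:
    lines = f.readlines()

sample = random.sample(lines, min(SAMPLE_SIZE, len(lines)))
for line in sample:
    print(line.rstrip())
```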

    5) Business Continuity Planning. Let’s face it, BCP is a lot of work. It involves participation from all areas of the organization — from subject matter experts to business to management. Because of the number of folks involved, very often we don’t have time for formal models, or when we do, very often our analysis goes without updates for long periods of time.

    However, planning for contingencies is beyond critical, and all that data you’re getting about applications, systems and business processes can be recycled for other purposes within your security program, such as triage during an incident response exercise, risk analysis, and even asset inventory. So maybe now is a good time to do a refresh on this valuable data — or start doing it if you haven’t already.

    Now maybe your company has already hit all these topics, and you don’t need me to remind you to “eat your vegetables.” If so, nice work — count yourself among the well-positioned minority.

    However, if in reading through these items you see areas where you could be doing better, remember that boning up on the basics is just as important as looking for new ground to cover. After all, the basics might be “old hat,” but that doesn’t mean they’re not important.

    Originally published on TechNewsWorld

    © 2009 ECT News Network. All rights reserved.

    © 2009 BetaNews.com. All rights reserved.

    Copyright Betanews, Inc. 2009






  • Sony Bringing Fee-based Subscription To Playstation Network



    In Sony’s Investor/Analyst meeting today, they had a section in their slideshow called the “5 Key Advantages of Playstation 3,” which is a general overview of the system and its advantages in the market. The usual information was present, including mention of the Motion Controller (that will have vibration feedback), Blu-ray capability, PSP integration, and 3D gaming. However, there was one slide that had several interesting items listed:

    • “New revenue stream from subscription”
    • “Q2 FY2010: release non-game software development kit”

    I fully believe that this new subscription-based service would not replace the free Playstation Network access, but rather complement it. I think that a free Playstation Network will always exist and has been a strength for Sony from the start. However, I do think that they will offer some sort of subscription service based on entertainment offerings, such as allowing you to download a preset number of movies, games or themes each month. There are many other possibilities as well, including increased download speed, a more detailed Playstation Network profile online, and other exclusive benefits.

    It’s fair to say we will probably see this Playstation Network subscription offering in 2010/2011.

    As for the non-game software development kit, we haven’t been able to get more information at this time. Very unusual!

  • Location, Location, Location: SimpleGeo, Twitter, Flook

    In my first week back on the web beat at GigaOM, one of the topics I wanted to focus on was location. Let’s just say that hasn’t exactly been a difficult task. Coming at us from Boulder, San Francisco and London, here are today’s top three geo-tagging developments:

    SimpleGeo launched today, promising to build a contextual infrastructure of points and eventually polygons for the world so that people can build apps that incorporate where users are located. The company says it’s already received 600 beta inquiries in its first day out, and it also received both audience and judges’ choice accolades at the Under the Radar event where it debuted.

    “We’re selling shovels at the beginning of a gold rush,” is how co-founder Matt Galligan put it on a call today. “You want to add location, just come to us — it’s done.” Though four-person SimpleGeo still measures its age in months, it already has a price sheet: free, $399/month for small businesses and $2,499/month for custom implementations. Galligan said he expects to announce a funding round soon. (BTW, this follows the launch of competitor PublicEarth, which calls itself “the wiki for places,” yesterday.)

    While Boulder, Colo.-based SimpleGeo may have moved quickly in its short life, big social sites aren’t necessarily waiting for little startups to come fill their location-based needs. Today Twitter launched a geotagging API, at first only available as an opt-in feature for outside apps like Birdfeed, Seesmic Web, Foursquare, Gowalla, Twidroid and Twittelator Pro. When used, this feature associates a tweeter’s exact location (as best as it can be determined) at the time of tweeting with the tweet itself.
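    For illustration only, here is roughly what “associating a location with a tweet” looks like as data. The field names below are hypothetical and are not Twitter’s actual API schema; the point is simply that an opt-in latitude/longitude pair rides along with the status update.

```python
import json

# Illustration only: a plausible shape for a geotagged status update.
# Field names here are hypothetical, not Twitter's actual API schema.
tweet = {
    "text": "Coffee at the Ferry Building",
    "created_at": "2009-11-19T18:04:00Z",
    "geo": {
        "type": "Point",
        "coordinates": [37.7955, -122.3937],  # latitude, longitude
    },
    "opt_in": True,  # geotagging is opt-in per the announcement
}

print(json.dumps(tweet, indent=2))
```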

    And lastly, Ambient Industries debuted a social location app for the iPhone today called Flook. While there’s no lack of competition for fun iPhone apps that enable users to mark up the world, Flook is built to be quirky, easy to browse and contextual-ad ready. The basic interface consists of virtual geo-tagged “cards” with facts, photos and recommendations left at particular locations by Flook users. Users can swipe through cards and turn them over to leave comments in a jaunty orange and purple interface manned by cute little robots (see video demo above).

    What’s interesting is that Flook comes from two Symbian founders, Roger Nolan and Jane Sales. Said Nolan on a call from London today, “Apple seemed to just do all the things that Symbian and Nokia should have done for a long time.” So he and Sales (they’re married) along with two other co-founders raised $1 million from Eden Ventures and Amadeus Capital and founded Ambient a year ago. Flook is the company’s first project.


  • PDC 2009: What have we learned this week?

    By Scott M. Fulton, III, Betanews


    It ended up being a somewhat different PDC conference than we had anticipated, and even, to a certain extent, than we were led to believe. Maybe this was due in part to a little intentional misdirection to help generate surprise, but in the end, the big stories here in Los Angeles this week were more evolutionary than revolutionary. That was actually quite all right with attendees I spoke with this week, most of whom are just fine with one less thing to turn their worlds upside down. It’s tough enough for many of these good people to hold onto their jobs every week.

    We’ll start our conference wrap-up with a look at the flashpoints (remind me to call Score Productions for a jingle to go with that) we talked about at the beginning of the week, and we’ll follow up with the topic that crept in under the radar when we weren’t expecting it.

    Making up for UAC, or, making Windows 7 seem less like Vista. This was absolutely the theme of “Day 0,” which featured the day-long workshops. At this point, Windows engineers have absolutely no problem with the notion of disowning Vista, disavowing it, even though it was technically a stairstep toward making Windows 7 possible. But it is now perfectly permissible to acknowledge the performance hardships Vista faced, and let go of the past in order to move forward.

    Microsoft Technical Fellow Dr. Mark Russinovich

    Mark Russinovich leads the way in this department, and the fact that he’s appreciated leads others to follow suit. During his annual talk on “Kernel Improvements” — which he expanded this year to a two-parter — Russinovich spoke about the way that the timing of Windows’ response to user interactions was adjusted to give the user more reassurance that something was happening, rather than the sinking suspicion that nothing was happening.

    In an explanation of a user telemetry service he helped get off the ground called PerfTrack, he told attendees, “We went through and found roughly 300 places in the system where you interact with something, and there’s a beginning and then an end where you go, ‘Okay, that’s done,’ and optimized the performance of those user-visible interactions. We instrumented those begin-and-ends with data points, which collects timing information and sends that up to a Web service…and for each one of these interactions, we define what’s considered ‘great’ performance, what’s considered ‘okay’ performance, and what’s considered Vista — I mean, uh, ‘bad,’” he explained, with a little grin afterward that appeared borrowed from Jay Leno. “And then if we end up in that ‘okay’ or ‘bad,’ what we do is, selectively turn on more instrumentation using ETW [Event Tracing for Windows] — instrumentation of file accesses, Registry activity, context switches, page faults — and then we collect that information from a sampling of customer machines that are showing that kind of behavior.
    “We feed that back to the product teams, they go analyze those and figure out, ‘Why is their component sluggish in those scenarios?’ and optimize that.”
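    A minimal sketch of that begin/end idea, in Python rather than anything Microsoft ships: time a user-visible interaction, bucket it as “great,” “okay” or “bad,” and keep the result for later aggregation. The threshold values and names here are assumptions, not PerfTrack’s.

```python
import time
from contextlib import contextmanager

# Sketch of the begin/end instrumentation idea described above: time a
# user-visible interaction and bucket it. Threshold values are assumptions.
THRESHOLDS = {"great": 0.1, "okay": 0.5}  # seconds; anything slower is "bad"

results = []

@contextmanager
def interaction(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        if elapsed <= THRESHOLDS["great"]:
            bucket = "great"
        elif elapsed <= THRESHOLDS["okay"]:
            bucket = "okay"
        else:
            bucket = "bad"  # would trigger deeper tracing in the real system
        results.append((name, elapsed, bucket))

with interaction("open_start_menu"):
    time.sleep(0.05)  # stand-in for the actual work

print(results)
```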

    A graph showing performance improvements in Start Menu reactions between two different builds of Windows 7, from a talk by Mark Russinovich at PDC 2009.

    One of the results he demonstrated, shown here in this pair of charts, is that the number of user-reported instances of Start menu lag time leaned more toward the quick side than the slow side of the chart between two builds of the Windows 7 beta.

    The fact that performance matters was one of the key themes of PDC 2009, and attendees greeted that message with enthusiasm — or, maybe more accurately, with appreciation that the company had finally received the message. But there are still lessons to be learned here that can be applied to other product areas, if anybody out there is listening.

    Why Windows Azure? The major theme of Day 1 was the ability to scale services up — scaling local services up to the data center, and data center services up (or down, depending on your application) to Microsoft’s cloud provider, Windows Azure.

    Last year at this time, Microsoft went to bat with essentially nothing — no real definition of an Azure application, no clear understanding of who the customers will be, and absolutely no clue as to the business model. But now we know that services will be rendered on a utility basis like Amazon EC2, and we have a much clearer concept of the customer groups Azure will address. One is the small business that has never before considered data center applications; another is the class of customer that needs to plan for exceptional capacity traffic during unusual situations, but can’t afford to maintain that high capacity 24/7; and the third is the big customer building a new class of application that has never before been considered on any platform.

    Channeling customers to Microsoft’s cloud will be “Dallas,” the code name for large-capacity data bank services typically open for mining by the general public, which should eventually be given a typically Microsoft-sounding name; and AppFabric, the company’s new mix-and-match component applications system built on the IIS 7 platform. But in neither case is Microsoft breaking new ground; as I heard from a number of attendees this week, Microsoft is entering another crowded field of contenders (including SalesForce.com and IBM) that is already saturated with competition. Success in this venture is by no means assured.


    What will Office Web Apps do? Less than we once thought, apparently. The ability to view “rich content” created with the full Office applications in Office Web Apps remains strong. But since the Web apps will be free to everyone (for sensible reasons), the ability to create the same depth of rich content online will be artificially limited.

    Excel Web App 2010 screenshot

    Since many businesses use Excel as a kind of database, or as a window into their databases elsewhere, the utility of that product online will be the most restricted. Word may suffer the least, however: composing respectable-looking correspondence from wherever one happens to be is a pressing need that Word Web App can easily fulfill.

    Making the case for Office 2010. We expected Microsoft Office to be the star of Wednesday’s keynote, with demos of new functionality that, if it wasn’t major, would at least have been advertised as fresh and new. It was not to be. Although we did have an opportunity to speak with an Office product manager (more on that in the coming days), the message Microsoft was sending this year was very different.

    In the past, folks asked why a consumer applications suite was being prominently featured at a conference geared toward developers. Microsoft’s answer typically was: because Office is a platform, and developers build to platforms. The message Microsoft sent this year was that Office was not a platform. And that’s a problem, because if that’s true, there’s no conference for Office. The excuse for the lack of Windows Mobile news was that it’s a topic for MIX, the conference for Web developers set for next spring in Las Vegas.

    So does Office wait for TechEd? All of a sudden, this major profit center seems homeless.

    Outlook Social Connector screenshot

    There was a little buzz devoted to something called the Outlook Social Connector plug-in, a new tool for integrating individuals’ social media contacts within Office’s communications app. Deals with social network hosts such as LinkedIn were announced. In one respect, that does address consumer concerns; in another, it’s a little ironic. Here we have a situation where people take the time to broadcast their identities over multiple social services on purpose, as a way of spreading out…only to discover the need for a kind of “identity vacuum” to pull them all back into one cohesive whole.

    What we did see from the Office 2010 public beta (released Monday, then released Tuesday, then “launched” Wednesday) let us know that if Microsoft truly is listening to its customers and acting on their telemetry, then the word they’re saying most often must be, “Whoa!”

    Document Properties show up on the 'front page' of BackStage in Excel 2010.

    During the Technical Preview phase, Microsoft unveiled its BackStage concept — a way of organizing all the preparatory content of an application, such as print preview and preferences, in a more dimly-lit, cooler arrangement, making you almost want to whisper when you talk about it. The screenshot above shows BackStage in the Excel 2010 Technical Preview.

    Excel 2010 BackStage screenshot

    This is the same BackStage in Excel 2010 Beta 1. It’s more conservative in several obvious regards, including the staging. But notice also something very important: The “Office button,” which premiered in Office 2007 and which flattened down to become an icon menu tab in the Tech Preview, has now returned to being the File menu. If customers have been asking, “Where’s File/Save?” then you have to wonder when they started asking, and how long they’ve been at it.

    The new flavor of Visual Studio is already the old flavor. When you’re dealing with a development platform unto itself, the beta version is often, unofficially but certainly, the working edition for many developers. VS 2010 is already on Beta 2. More than one session presenter this week asked for a show of hands as to how many folks were already using Beta 2 as their development platform — and in each case, a majority of hands went up.

    Will virtualization envelop Windows? Hell if I know. One of the hottest topics of prior conferences was something of a dud this year, and that’s not good for a company that actually lags in its ability to virtualize 64-bit platforms on 32-bit systems — a feature Sun’s VirtualBox and VMware already provide. But once the problem of Hyper-V’s missing live migration was put to rest, virtualization took something of a breather, though it wasn’t off the radar altogether.

    The push toward online identity. Indeed, this ended up being the wildcard topic of the show. The principal security and architectural problem faced not only by developers but by administrators as well is enabling a secure single sign-on platform for local and remote applications. With multiple vendors supporting even more authentication protocols than there are vendors — or so it appears — this goal would seem impossible to achieve.

    Microsoft is working to address this in its upcoming Windows Identity Foundation library, which will require a push for Active Directory Federation Services 2.0 — a way to extend AD to servers that aren’t running Windows. But just getting all hosted application vendors on board with AD is a colossal task, made more difficult by a “competitive” spirit among application and security vendors that works against the very communication and federation they need in order to accomplish the goal of common identity. We will be talking more about this in the coming days, because we learned a lot about it at PDC.

    Now, there’s something I’m missing. Yes, Scott Guthrie, I know I missed you in my list of headliners…and I’m sorry, it was inadvertent, and I apologize. Though I do know Brian Goldfarb gave you heck about it. But there’s something else, let’s see, I’m trying to recall…help me out, Brian…

    Microsoft's Scott Guthrie as you've never seen him before.

    Oh thank you, Scott, much obliged. Silverlight 4. This one should have been on our radar for certain. Silverlight stole the show on Wednesday, and was much of the talk among developers on Thursday. The new version will provide 1080p video, which everyone wanted. And it will provide authenticated access to system services outside the sandbox, which everyone wants.

    If Office Web Apps were to run on Silverlight 4, you would get access to the right-click context menu — a critical feature of regular Office 2007 and Office 2010 that’s difficult to make up for with the ribbon alone. S4’s access to system devices will make it feasible for developers to craft iTunes-like smartphone applications for devices that are tethered to PCs…and maybe even applications running on the smartphones themselves, and not just Windows Phones.

    Which reminds me, there was that one Guthrie demo Wednesday that bit the bottom of the bit bucket, with that cool-looking phone. Did anyone ever make that work? …Brian Goldfarb to the rescue once again. Yes, it is indeed possible to perform adaptive streaming of movies to the iPhone using Silverlight. We talked at length with Goldfarb (more on that, too, in coming days), and here’s a preview of coming attractions:

    “We’ve worked with Apple to create a server-side-based solution with IIS Media Services; and what we’re doing is taking content that’s encoded for smooth streaming and enabling the content owner to say, ‘I want to enable the iPhone.’”

    Microsoft Silverlight 4 streaming video on iPhone, as demonstrated by UX Platform Manager Brian Goldfarb.

    Microsoft Windows Division President Steven Sinofsky during the Day 2 keynote at PDC 2009 with that Acer laptop everyone loves now.

    It was certainly more of an evolutionary than a revolutionary tone at this year’s PDC, but attendees seemed comfortable with that this time around. Here was one strange phenomenon we’d never noticed before: Attendance increased with each later day. Wednesday attendance was noticeably higher for sessions and the keynote than the previous day’s, and that was despite news of the big laptop giveaway being kept under lock and key. And Thursday — which has often been a day for “leftovers” — ended up being packed as well, including with attendees who brought those shiny new Acer multitouch laptops with them.

    Now, there’s something that hasn’t been touched on: Acer. Think about that for a moment. This is the same company that publicly dissed Vista in 2006 as a non-event for consumers, practically leading the wave of complaints that were to follow. And here it is lending its name to an event that not only promotes Windows 7, but prototypes its proper use (from Microsoft’s perspective) in all computing. Microsoft let Acer show everyone else how quick bootup and clean performance are supposed to be done.

    That’s the biggest indicator of Lessons Learned we saw all week.

    Copyright Betanews, Inc. 2009






  • 3D Coming To Many Sony Products, Including Digital Cameras, VAIO Computers



    Sony has presented some information in their recent Investor/Analyst meeting that is simply stunning. They have further outlined their 3D strategy, and it is much larger than I thought. Originally, when Sony announced their 3D intentions, they did it in a way that seemed like it would be somewhat limited to projectors, televisions, and other displays. But what I am seeing here in these slides leads me to believe that it will be an all-encompassing effort.

    From these slides it is very clear that Sony intends to bring 3D to their personal imaging line, including Cyber-shot cameras and their Alpha DSLR line. This would enable the creation of 3D photos. Sony also aims to bring 3D content to the home via the PlayStation Network and the BRAVIA Internet Video Link. There is also mention that Sony will release a 3D Blu-ray player (as expected; no word on whether previous models are upgradeable) and bring the technology to their VAIO laptop (and possibly desktop) lines. Wow.

    Judging from the presentation, these products and services should be expected from 2010 through 2012.


    Sony is committed to providing a firmware update to the PS3 that will enable 3D gaming. Nice.


    Sony also aims to have more than 3,000 3D projectors installed in theaters by the end of FY2010.