Author: Scott M. Fulton, III

  • EU seeks privacy enforcement rights in US courts through diplomatic agreement

    By Scott M. Fulton, III, Betanews

    Yesterday, the Chairman of the European multi-national group of ministers overseeing online privacy policy enforcement, Jacob Kohnstamm of the Article 29 Working Party (WP29), sent letters to the CEOs of Google, Microsoft, and Yahoo, urging them to alter their personal data retention policies in keeping with new EU standards. Kohnstamm wants their search engines to destroy personal data after six months’ retention rather than nine, as is Google’s current policy; and he simultaneously urged European Commission Vice President Viviane Reding for help getting that message across.

    In less than a day, Kohnstamm got his wish: This morning in Brussels, Comm. Reding placed a public call on the United States to forge an agreement that would enable the EC to sue the search engine leaders in US courts for failure to follow EU guidelines for data protection.

    “The EU and US are both committed to the protection of personal data and privacy. However, they still have different approaches in protecting data, leading to some controversy in the past when negotiating information exchange agreements (such as the Terrorist Finance Tracking Programme, so-called SWIFT agreement, or Passenger Name Records),” reads this morning’s statement from Brussels. “The purpose of the agreement proposed by the Commission today is to address and overcome these differences. Today’s proposal would give the Commission a mandate to negotiate a new data protection agreement for personal data transferred to and processed by enforcement authorities in the EU and the US. It would also commit the Commission to keeping the European Parliament fully informed at all stages of the negotiations.”

    In a taped address this morning, Comm. Reding characterized the agreement not only as essential for protecting citizens’ rights, but also as a necessary tool for both the US and EU in the war on terrorism.

    European Commissioner for the Information Society Viviane Reding, in a weekly address April 14, 2009.

    “We all want to have control over our personal information. This is why the EU has rules on the protection of personal data,” Reding said. “Our rights are clear and must be respected. Whenever your personal information is collected, whenever it is processed, and whenever it is used, these are the high standards we must live up to…We are facing common security challenges from international crime and terrorism. We have been confronted with devastating attacks in Europe and in the United States. We are working hard with the US to confront these challenges.”

    Yesterday, the European Commission published the draft of a letter to the CEO of Google (PDF available here). It’s easy to determine it was a draft since Chairman Kohnstamm left blanks where he intended to look up and confirm the name “Eric Schmidt” (unless Schmidt actually did receive a letter where his own name was omitted in the salutation “Dear,”).

    What Kohnstamm did not leave blank was his view of the pointlessness of Google’s so-called anonymization policy, which for Google is not actually a destruction of personally identifiable data but a partial erasure of it. The specific part being erased is the last octet of the IP address of the computer being tracked. Although in practice IP addresses are not necessarily personally identifiable, EU policy treats them as personal data since, in theory, they can be used to determine the identity of users. Kohnstamm told the unnamed Mr. Schmidt that it is a simple matter for a database manager to match a partial IP address older than nine months with personal cookie data, which Google retains for 18 months.

    “In its opinion, WP29 stressed the sensitivity of personal data related to search queries. I know that Google also shares this concern,” Kohnstamm wrote. “In response to the opinion, your company publicly announced you will ‘anonymize’ IP addresses in your server logs after 9 months. In practice, you have indicated that you will delete the last octet of the IP addresses held in the search query log files after a period of 9 months… deleting the last octet of the IP-addresses is insufficient to guarantee adequate anonymisation. Such a partial deletion does not prevent identifiability of data subjects. In addition to this, you state you retain cookies for a period of 18 months. This would allow for the correlation of individual search queries for a considerable length of time. It also appears to allow for easy retrieval of IP addresses, every time a user makes a new query within those 18 months. Therefore, WP29 cannot conclude your company complies with the European data protection directive.”
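    Kohnstamm’s objection is easy to illustrate. Below is a minimal Python sketch of last-octet deletion as the letter describes it, applied to a hypothetical log record; it is an illustration of the scheme, not Google’s actual pipeline.

```python
def anonymize_ip(ip: str) -> str:
    """'Anonymize' an IPv4 address by deleting (zeroing) its last octet."""
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

# A hypothetical search-log record after "anonymization":
record = {
    "ip": anonymize_ip("203.0.113.57"),  # "203.0.113.0": one of only 256 candidates
    "cookie_id": "PREF=a1b2c3d4",        # retained, per the letter, for 18 months
    "query": "example search terms",
}

# WP29's point: the cookie ID alone re-links every query this user makes
# over those 18 months, so zeroing one octet does not prevent identification.
```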

    Kohnstamm wrote a similar letter to Microsoft CEO Steve Ballmer, whose name was likewise left blank in the draft published by the EC. In it, he commends Microsoft for promising to anonymize Bing’s personal data after six months, as the EC requested. However, he doesn’t believe that promise is being kept, since it was apparently contingent upon Google’s and Yahoo’s willingness to follow suit. He also pointed out that last-octet deletion is pointless as an anonymization tool, a message he sent as well to (unnamed) Yahoo CEO Carol Bartz. Yahoo has pledged to anonymize personal data after three months.

    In a separate letter to US Federal Trade Commission Chairman Jon Leibowitz yesterday (insert name, please), Kohnstamm called upon the FTC to investigate whether the three search engines’ failure to comply with EC directives constitutes “unfair or deceptive acts or practices in the marketplace” under Section 5 of the FTC Act. If it did — or if there was a reasonable theory that it did — such a finding could be the legal basis for charging Yahoo, Microsoft, and Google with fraud in US courts.

    And if the agreement were written as Comm. Reding would prefer, according to the statement from Brussels this morning, it wouldn’t have to be the European Commission initiating the action. A European citizen could sue these parties in US courts for fraud and deception, if he believed his privacy was violated: “There would be an individual right of administrative and judicial redress regardless of nationality or place of residence.”

    The US State Dept. has yet to issue a statement on the matter. Sec. of State Hillary Clinton is currently in Seoul, South Korea, in meetings with its president and foreign minister over the alleged sinking of a South Korean naval vessel by North Korea.

    Copyright Betanews, Inc. 2010




  • Facebook CEO: ‘We are removing the connections privacy model’

    By Scott M. Fulton, III, Betanews

    In a move that may end up drastically scaling back what Facebook had hoped last month would be a redefinition of the Web itself, the social service will soon begin rolling out simplified privacy controls, according to a blog post today from CEO Mark Zuckerberg. The new controls may make it easier for Facebook users to limit the extent to which the system shares their personal information with others, especially including other Web sites.

    Continuing to deflect criticism, the CEO said that Facebook had always offered a multiplicity of privacy controls, but “if you find them too hard to use then you won’t feel like you have control. Unless you feel in control, then you won’t be comfortable sharing and our service will be less useful for you. We agree we need to improve this.

    “We’ve reduced the amount of basic information that must be visible to everyone and we are removing the connections privacy model,” Zuckerberg announced. “Now we’ll be giving you the ability to control who can see your friends and pages. These fields will no longer have to be public.”

    Although Zuckerberg describes the new privacy control as a “single” switch, an examination of the screenshot he provided reveals that one of the settings causing users the most headaches appears to be compartmentalized behind a subheading, “Applications and Websites.” As Facebook originally planned for its Open Graph API — a.k.a., “Like” — other Web sites can share with Facebook information about content their users have “favorited,” or voted up. That way, Facebook can assemble new links to its own content on the same or similar subject matter.

    But that system would require an implied open sharing status between Facebook and those other sites, one which many users might not readily trust if its presence were plainly explained to them.

    A preview of Facebook's re-revised privacy controls reveals some simplification, but also some selective compartmentalization.

    A check of Facebook’s updated privacy page does show, however, that the service did make one switch out of several: If a user turns off Facebook’s ability to receive shared “Like” data from other Web sites, she also shuts off her ability to use Facebook applications. (No more lunchtime harvesting, in other words.) This process is referred to by Facebook as “turning off platform.” As an alternative, the user may opt to turn off individual applications’ and Web sites’ access to Facebook data on a per-app basis, which is the more “granular” option that already existed, and that Zuckerberg said he had thought users would have preferred.

    The new Applications and Websites panel, reads the new privacy page, “controls what information is shared with websites and applications, including search engines (applications and websites you and your friends use already have access to your name, profile picture, gender, networks, friend list, user ID, and any other information you share with everyone). You can view your applications, remove any you don’t want to use, or turn off platform completely. Turning off platform means you won’t be able to use any platform applications or websites and we won’t share your information with them.”
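    The distinction between the global switch and the pre-existing per-app controls can be modeled in a few lines. This is a toy sketch with hypothetical names, not Facebook’s actual settings API.

```python
class PrivacySettings:
    """Toy model of the two options described: one global 'platform'
    kill switch versus per-app ('granular') blocking."""

    def __init__(self):
        self.platform_enabled = True   # master switch: "turning off platform"
        self.blocked_apps = set()      # per-app opt-outs

    def turn_off_platform(self):
        """All-or-nothing: no applications, no shared 'Like' data."""
        self.platform_enabled = False

    def block_app(self, app_name):
        """Granular: keep platform on, cut off one app or site."""
        self.blocked_apps.add(app_name)

    def can_access(self, app_name):
        return self.platform_enabled and app_name not in self.blocked_apps

s = PrivacySettings()
s.block_app("FarmVille")
print(s.can_access("FarmVille"))   # False: blocked individually
print(s.can_access("SomeQuiz"))    # True: platform still on
s.turn_off_platform()
print(s.can_access("SomeQuiz"))    # False: turning off platform cuts everything
```

The tradeoff the article notes falls out of the model: the single switch is simple but disables all applications at once, while the granular path preserves use of the platform at the cost of per-app bookkeeping.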

    Users will still see “recommended” privacy settings, however, which may still guide novice users into making relaxed, less stringent choices — a fact which may not extract Facebook from the hot water it finds itself in today.

    In an op-ed piece for the Washington Post last weekend, Zuckerberg showed reluctant acceptance for the notion that some people simply must have their privacy, even if they’re joining a social network. Today, he went a step further, literally but politely telling users that this is the last privacy upgrade they’ll be getting for a long while, so they’d better be happy with this one.

    “Finally and perhaps most importantly, I am pleased to say that with these changes the overhaul of Facebook’s privacy model is complete. If you find these changes helpful, then we plan to keep this privacy framework for a long time. That means you won’t need to worry about changes,” the CEO wrote. And if you had any doubt that he was biting his tongue a bit when he wrote that, he then added, “(Believe me, we’re probably happier about this than you are.)”

    At the time of this writing, the new privacy settings were not yet made available to those Facebook accounts to which we have access.





  • U-verse down: AT&T’s fiberoptic voice customers can’t get a dial tone

    By Scott M. Fulton, III, Betanews

    A nationwide service outage continues to affect customers of AT&T’s VoIP-based U-verse Voice service, who use the company’s fiber-to-the-home TV and broadband network for phone service as well. Home-based VoIP phone users are now waiting in Internet chat queues hundreds of users long seeking solutions, and at least a few customers report they’re waiting on hold from their (working) Verizon Wireless phones.

    Users of AT&T’s U-talk Peer-to-Peer Forum are being advised to register their complaints with someone named “David” in the company’s Tier 2 Technical Support office. This comes as users in Memphis, Atlanta, Chicago, Houston, and elsewhere continue to report no service, although customers in some metropolitan areas such as Sacramento report service has been restored.

    Though there has been no official explanation, customers who have gotten through to support personnel have been advised that the cause is some sort of server outage. That diagnosis would appear to be contradicted by evidence that service has been restored to some areas while still down in others; and also that the outage does not appear to affect TV or broadband service.

    In lieu of an official word, news of the service outage was first confirmed by contributors to Broadband Reports, then later broadcast over the Associated Press wire. One version of the AP story states that AT&T restored service to its customers by 2:45 pm ET Wednesday afternoon. Yet although that story carried a dateline of 5:57 pm CT / 6:57 pm ET Wednesday evening, it actually ran at around noon Wednesday — almost three hours before the time at which it reports service was restored.

    “Funny how AT&T doesn’t communicate with transparency,” one U-talk forum commenter noted. “An announcement on their website would go a long way in fostering confidence in their service and their ability to serve their clientele.”

    AT&T estimated last January that the number of U-verse customers also subscribing to U-verse Voice surpassed the 1 million mark that month; U-verse service itself currently reaches 2 million customers, according to AT&T’s December 2009 estimates. Today’s service outage may give more impetus to the growing number of those 1 million customers who say they may cancel their Voice service, having signed up to pay fixed rates of $25 or $40 per month extra only to end up paying more.

    Last week, prior to the service outage, an AT&T U-verse customer reported on the company’s forum having told the carrier’s customer representative, “If prices keep going up we will get rid of everything. Our bill has been running $162 and some change. We don’t watch that much TV especially this time of year, and with BlackBerrys don’t use the home Internet much either. As such, we are finding it hard to part with the money for what we get in return. We are looking into an antenna and setting up a computer with media center as a DVR with Netflix subscription.”





  • After two high-profile Microsoft exits, is WP7 a device or a platform?

    By Scott M. Fulton, III, Betanews

    Robbie Bach

    When a massive Microsoft corporate reorganization on September 20, 2005 vaulted Robbie Bach into the role of President of the Entertainment & Devices division, the explanation at the time was to enable the company to focus on devices where the goal was to promote devices, and on platforms where the goal was to promote platforms. Xbox was a device, whatever MP3 player the company would decide to produce was a device, and obviously cell phones would be devices should Microsoft ever choose to enter that business in earnest.

    Obviousness is highly susceptible to changes in perspective, especially over five years’ time. Today, with the launch of one of the company’s most important gaming initiatives, still called “Project Natal,” just months away, Bach has decided to leave the company, Microsoft confirmed this afternoon. Following in his wake will be Microsoft’s other high-profile gaming executive, J Allard, who leaves behind a real personal triumph in the form of XNA, the gaming platform that may yet unite development for Xbox 360, Windows, Windows Phone, and to some extent Zune.

    Ironically, it was the consolidation of the Entertainment & Devices division that led to the development of the most innovative and head-turning (if not yet entirely game-changing) platforms Microsoft has produced in a good many years. XNA is one. Windows Phone 7 may very well be the first mobile platform ever to come out of Microsoft that deserves, at the very least, the attention it’s received. And despite the fact that the Project Natal device looks like it could be a rejected candidate for the design of Marvin the Paranoid Android from the last Hitchhiker’s Guide movie, the fact that it’s injecting a new concept of user input into gaming development makes Xbox 360 once again a competitive platform.

    So the mystery today is that Microsoft is bringing in a platform leader — Corporate Vice President David Treadwell, formerly the head of Live Platform Services — to report to Senior Vice President Don Mattrick, while Mattrick, who spearheaded Project Natal, will report directly to CEO Steve Ballmer. Andrew Lees, the Senior Vice President in charge of “mobile devices” but who led the development of Windows Phone 7, will retain his position but report directly to Ballmer as well.

    Gates and Allard

    It’s a development that indicates that Microsoft realizes the importance of these assets as platforms, rather than as mere devices — a realization made feasible by the work of Bach and Allard. And yet off they go to parts unknown.

    Is there a message to be found in Microsoft’s relocation of its gaming and mobile development units back to the platform side of the business? Matt Rosoff, who researches the consumer and online side of the company for the analysis firm Directions on Microsoft, thinks…not.

    “I don’t think it matters so much which division of the company Microsoft’s mobile platform group is in,” Rosoff told Betanews this afternoon. “After all, Windows Phone 7 is a very good step, and it was created when the mobile division was under Bach in E&D, and I tend to think smartphones are a consumer-driven purchase — especially since the release of the iPhone. But maybe it’s time to think of mobile OSs in the context of small form-factor computers, in which case it might make sense to move that group back to Windows. That’s a hard call for Ballmer to make, which is probably why he’s taking the reins for a while.

    “The mobile space certainly has changed a lot in the last five years, hasn’t it?” said Rosoff. “I don’t think anybody at Microsoft (or many other places) would have predicted the iPhone and mainstreaming of smartphones and ‘apps,’ Google’s successful entry into the space, or Palm’s resurgence and acquisition by HP. Strategically, I think mobile is becoming far more important to Microsoft than it’s ever been — not only is there the opportunity of selling higher-margin software in tens of millions of devices per year, but there’s also the threat to desktop Windows from inexpensive dedicated hardware running competing mobile OSs, such as Apple’s iPad, HP’s planned slates, and possible Android or Chrome devices.”

    Could the exit of two executives closely associated with Project Natal, and who are partly responsible for its creation — an exit announced prior to the project’s final unveiling — indicate that perhaps it’s not as big a deal as the entertainment device press made it out to be?

    Rosoff sees a scenario where Natal infuses Xbox 360 with some of the “next generation” status that a full-scale successor to that console would have had, if Microsoft could afford to build one. So it’s a stepping stone in one respect, and a paperweight in another: “Project Natal could keep the Xbox 360 an active platform for another few years. That’s good because it helps Microsoft sell more games over the lifetime of each console so they can recover the initial costs of the hardware, and it delays the next generation of consoles, at which point they’ll have to subsidize the hardware again (at least that’s how it has worked with Sony, less so with Nintendo). So I don’t think it was ever viewed as a major business in itself.”

    Should we read anything into the fact that Bach’s and Allard’s exit is aligned so closely with HP’s acquisition of Palm, and the likelihood of a webOS-based HP tablet device as a result? Rosoff thinks not, pointing out that Allard was a principal designer of Windows Phone 7. In maintaining an advisory role, as Microsoft said today he would, Allard may prefer to retain ties to his own creation rather than let a competitor use him to help sink it.

    On the other hand, Allard’s “advisory role” could end up looking like the “advisory role” a TV show creator is bestowed by its executive producer after he’s been fired. “There are lots of companies who would value Bach’s and Allard’s experience in this space,” Matt Rosoff told Betanews. “Or, they might retire from the tech industry for good.”





  • Adobe Reader faces its first genuine competition from a free alternative

    By Scott M. Fulton, III, Betanews


    Download Nitro PDF Reader 1.1.1.13 from Fileforum now.


    The new, free Nitro PDF Reader

    Even today, we tend to use the phrase “Adobe PDF” when referring to the Portable Document Format, despite the fact that Adobe relinquished ownership of the standard, releasing it to the open community in 2008. The prevailing opinion has been that releasing PDF as ISO/IEC 32000-1 was more of a symbolic gesture, and that Acrobat would always remain the principal application for creating PDF files.

    But it isn’t a monopoly that users particularly like anymore. A study last February by security software provider Webroot of its own SMB customers revealed that nearly a quarter believe they are susceptible to cyber-attacks on account of insecure plug-ins, including Adobe Reader. That feeling is compounded by raw data from Mozilla showing that as many as half of all Firefox browser crashes are triggered by either Adobe Flash or Adobe Reader — a finding that compelled Mozilla’s engineers to redesign the entire plug-in model for Firefox 3.6.4.

    Today, the maker of what started out as a set of Acrobat plug-ins, and later became a commercial alternative to Acrobat itself, is making its first real bid to unseat Adobe as the de facto software provider for PDF: Nitro PDF Software is releasing a free alternative to Adobe Reader that replaces the Reader plug-in entirely.

    The replacement will be an application that not only lets users edit the PDF files they download, but type new text at any point into those files, and create new PDF files as well.

    “PDF is all that we do. We live and breathe it. So the next logical step for us was to introduce a free PDF reader, but one that is revolutionary and that the world has been waiting for,” pronounced Gina O’Reilly, Nitro PDF’s senior vice president of marketing, in an interview with Betanews.

    “With Adobe Reader, I’m sure you’re familiar with the typical complaints: It’s bloated, it’s vulnerable to security attacks, but more importantly, it’s simply lacking in functionality. All it is, is a viewer and a printer; and unless you have paid a premium to do anything else, that’s all it does,” O’Reilly continued. “A lot of other options in the market come with some sort of compromise. So in the case of Adobe Reader, that might be a large footprint and annoying reminder updates, [along with] restricted functionality, in the fact that there is none.”

    Multiple collaborators can comment on edits to a PDF document in Nitro PDF Reader.



    Nitro PDF Reader is a stand-alone application that uses the Ribbon UI made popular by Microsoft Office. If it has any drawback at all compared with Adobe Reader, it’s that it doesn’t display PDF files inside the browser. Instead, you download PDFs through whatever browser you happen to use, and they open in the separate Nitro application.

    But in that context, the application offers some features that compete with Acrobat, Adobe’s commercial PDF producer application line, the most obvious being the ability to create and save PDF files from scratch.

    Nitro PDF’s chief product officer Lonn Lorenz — a veteran product manager from Adobe — gave Betanews a demonstration: “Let’s say that I have a file that I want to send around for comment and review. If I zoom in on this page and I want to create a Post-It note…As I put a note in place, it keeps track of who put the note down, what time and date that they added it, and I can add my note to that.

    In Nitro PDF Reader, a user adds a comment to an existing passage, as part of its fully-featured review system.

    “When I send this out to people to get comments back on,” Lorenz continued, “I can just go to the File menu and e-mail this PDF, and it’ll automatically launch your e-mail client, attach this PDF, and you can send it off for review. Once you open this file and see this comment, you could add your own reply. With your reply to this comment, you could start a thread of conversation around this review. You could also use the text highlight tools, where you can highlight, cross out, or underline text…I can actually start to make use of my PDF files. Where the Adobe Reader and other readers are limited in this regard is, I can actually save my file! How simple is that? All the basic functionality in a free product.”

    One feature that I found truly inspiring — something I would literally use every day — is a simple button that lets you stamp a PDF form with a scan of your signature. Many legal firms have gotten into the habit of printing out the final page of a PDF contract, then signing the contract, scanning it back in, and appending it to the PDF, just so the final page can bear a signature. With this feature, Nitro PDF keeps scans of multiple signatures on hand, and you can stamp one and resize it to fit any location on a page.

    Nitro PDF Reader enables a fairly simple, though tremendously useful, quick feature: a way to sign any form using a scan of your signature.

    Though it’s too early to declare that Nitro PDF “thought of everything,” it’s clear that during its beta process, its engineers did think ahead. The free Nitro PDF Reader does include the ability to protect files with a simple password, and to embed fonts where necessary (the company’s commercial edition has more extensive options). And to help deter situations where certain PDFs can be maliciously re-engineered, users can selectively block their PDFs from making embedded calls to Web sites, with selected exceptions — a kind of localized firewall. Lorenz also told us users will have the ability to turn off JavaScript execution and leave it off.

    “What Adobe has potentially made a very grave mistake in doing is not being more proactive in responding to attacks, and not really addressing them in any meaningful way,” argued Nitro PDF’s Gina O’Reilly. “So when we conceptualized Nitro PDF Reader…one of our main goals was to ensure that our product is secure, and that we are responsive. We’re currently working with a third party to get validation for Nitro PDF Reader, so that we can stand up and say…the product stands up, and in the areas that we might not stand up, we will be addressing. We’ve kind of turned the security issue on its head, in that rather than expecting things to happen and then addressing them, we’re trying to get on the forefront with this. It’s very important that our users feel safe and secure when using the product.”







  • Toward a ‘Fourth Way:’ Congress prepares to completely overhaul telecom law

    By Scott M. Fulton, III, Betanews

    In an acknowledgment that the regulatory compromise proposed earlier this month by Federal Communications Commission Chairman Julius Genachowski, at the very least, may be inadequate for a long-term resolution to the debate over who or what gets to regulate the Internet in the US, Democratic leaders of both houses of Congress said today they will set the wheels in motion, starting now, for a potential rewriting of all US telecommunications law.

    Such an act could, if successful — and if it can be accomplished in our lifetimes — finally codify just what Internet communications is and what it does in its own right, rather than with respect to or in comparison with the telephone or the public airwaves. And it could very well result in an entirely new regulatory structure that’s not the FCC as we know it today, but may yet have the Congressional authority and mandate to regulate network neutrality in some form.

    The news came in an extremely brief press release this afternoon from the Senate Commerce Committee, chaired by Jay Rockefeller (D – W.V.): Speaking for Sen. Rockefeller and his House counterpart, Rep. Henry Waxman (D – Calif.), plus the chairmen of the Senate and House subcommittees on the Internet, Sen. John Kerry (D – Mass.) and Rep. Rick Boucher (D – Va.), the statement included this sentence: “As the first step, they will invite stakeholders to participate in a series of bipartisan, issue-focused meetings beginning in June.”

    By anyone’s measure, a complete rewrite of the Communications Act of 1934 would be a colossal undertaking. The last partial rewrite, completed in 1996, took at least 12 years. With that in mind, Sen. Kerry’s office issued a statement to The Hill late this afternoon, indicating that Chairman Genachowski’s “Third Way” proposal — regulating the Internet as though it fell under Title II of the Communications Act, treating it like a telephone except where it’s obviously not a telephone — may be necessary in the interim. “This process is complimentary to the efforts at the FCC, not a substitute for them,” the statement reads in part.

    In one respect, opposition to a rewrite of telecom law could be boxed in at this point. After Genachowski announced his “Third Way” proposal, ranking Republicans including the Commerce Committee’s Kay Bailey Hutchison (R – Texas) voiced their opposition to the notion that the FCC could declare itself the Internet’s formal regulator without Congressional mandate. A rewrite of telecom law would provide such a mandate, to the FCC or whatever agency may emerge from the process.

    Early response to the announcement from advocacy groups appears somewhat positive, if tentative. The American Cable Association’s CEO, Matthew Polka, released a statement to Betanews and others saying he’s hopeful that this process would give the nation’s smaller cable operators an opportunity to air their grievances over the current system of cable TV regulation: “Congressional action that would clarify the extent of the Federal Communications Commission’s authority to regulate cable broadband service holds the promise of providing greater certainty with fewer unintended consequences for operators and their customers. Review of the Communications Act also provides Congress the opportunity to address other issues impacting small cable operators and their consumers in smaller markets and rural areas, such as outdated retransmission consent and broadcast carriage rules, and ineffective program access regulations.”

    And one of the groups that has been most supportive of the “Third Way” to date, the Open Internet Coalition, indicated its leaders are nodding their heads as well. Writes OIC Executive Director Markham Erickson, “Our view from the day the case was decided was the FCC needed to act immediately to provide a stopgap protection for consumers and Congress needed to revisit the Communications Act for changes over the longer term…The FCC has already taken the critical first step, putting forward a common-sense plan that would allow the Commission to promote broadband deployment and to protect consumer choice this year. Today’s announcement will make sure Congress begins the necessary longer-term steps in this area that will support the FCC’s broadband plan. This partnership between the FCC and Congress is exactly what is necessary to make sure the US regains the lead in high quality, affordable, and widely available broadband Internet services.”

    Copyright Betanews, Inc. 2010






  • Seagate tries a new hybrid solid-state HDD, this time without Microsoft’s help

    By Scott M. Fulton, III, Betanews

    The latest hybrid notebook storage device announced today by Seagate Technology, the 2.5-inch form factor Momentus XT, promises radical performance improvements over every hybrid drive that has come before. Seagate says it can now offer this performance by divorcing the drive entirely from the Windows-based technology that catalyzed the company’s entry into the hybrid SSD business in the first place.

    Back in 2005, Seagate appeared to stand firm against what many believed to be the coming wave of solid-state storage technology, made feasible by more reliable flash RAM technology whose costs were plummeting and form factors shrinking. Seagate said at the time that flash wasn’t exactly as reliable as it seemed on paper compared to magnetic disks, in which the company was solidly invested.

    One year later, Microsoft helped bring about the formation of an industry alliance for building hybrid solid-state/hard disk drives. It did so by making the ability to support hybrid drives a requirement for notebook PC manufacturers to obtain the much-desired Vista Premium logo, one of the higher tiers of Microsoft’s originally intended multi-level support program for Windows Vista. So the following year at CES 2007, standing on the podium together to represent the new market that Microsoft was effectively forcing open, were representatives from Samsung, Toshiba, and most surprisingly of all, Seagate.

    The technology that these companies were relying upon was something called ReadyDrive, a series of Vista drivers that enabled caching in the SSD part of hybrid drives, especially to help accelerate boot times. But by 2008, hybrid drive manufacturers were complaining that the advantages offered by ReadyDrive were more than outweighed by the disadvantages of simply dealing with Vista.

    Seagate's cross-section depiction of its new Momentus XT hybrid SSD/HDD. [Courtesy Seagate]

    Fast-forward to today, when Seagate product marketing manager Joni Clark triumphantly announces to Betanews that Momentus XT will not be bound to ReadyDrive at all.

    “We are no longer dependent on the [operating system] to control what gets put into the flash,” Clark told us. “If you recall, the first [Seagate hybrid] drive needed ReadyDrive to enact or enable that functionality. We have a new driver that our engineers put in place — actually more of an algorithm — that we call ‘Adaptive Memory.’ This is a self-learning algorithm that monitors the LBA [Logical Block Addressing] access on the drive. That algorithm doesn’t care what OS is running it, whether it’s a game or an Office application, or who the user is. All it knows is that there are frequently accessed LBAs, and some of them are tougher to get to than others. So it’ll take the ones that are frequent and tough to get to, even though it takes longer, and place those LBAs into the solid-state flash.

    “This learning doesn’t happen just in the beginning,” Clark continued, “it’s constantly, always learning in the background. So it doesn’t matter whether you’re during the day a corporate executive, and by night you’re a gaming enthusiast; it’s going to constantly monitor and put these files into that solid-state, so you’re always going to be performing at the top of your game, no matter what you’re doing.”
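    Seagate has not published Adaptive Memory’s internals, but the behavior Clark describes (scoring LBAs by access frequency and retrieval cost, then promoting the winners into flash) can be sketched in a few lines. Every name below is a hypothetical illustration, not Seagate’s code:

```python
from collections import Counter

class AdaptiveCacheSketch:
    """Toy model of a self-learning hybrid-drive cache: promote LBAs
    that are both frequently accessed and expensive to reach on disk."""

    def __init__(self, flash_slots):
        self.flash_slots = flash_slots   # how many LBAs fit in flash
        self.access_counts = Counter()   # LBA -> times accessed
        self.seek_cost = {}              # LBA -> relative cost to read from platter
        self.flash = set()               # LBAs currently cached in flash

    def record_access(self, lba, cost):
        self.access_counts[lba] += 1
        self.seek_cost[lba] = cost
        self._relearn()

    def _relearn(self):
        # Score = frequency * cost; keep the top-scoring LBAs in flash.
        scored = sorted(self.access_counts,
                        key=lambda lba: self.access_counts[lba] * self.seek_cost[lba],
                        reverse=True)
        self.flash = set(scored[:self.flash_slots])

    def read(self, lba):
        return "flash" if lba in self.flash else "platter"
```

    The point of the sketch is the “doesn’t care who the user is” property Clark describes: the heuristic sees only access frequency and cost, never the application generating the I/O.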

    Clark freely admits now that Seagate’s original Momentus product line, which included the 5400 PST announced in June 2006, didn’t live up to many users’ expectations, and that it can’t heap all the blame upon Microsoft. Many of the complaints came from enthusiasts, including gamers, who didn’t see any performance improvements — and often saw the reverse — because their programs weren’t making use of ReadyDrive. Others came from Linux users…Let’s face it, there is no ReadyDrive for Linux.

    “With the first drive [5400 PST], we took a standard drive and we bolted on hybrid technology,” Clark had no trouble adding. “This new drive is hybrid-architected from the core.” Momentus XT’s new flash cache has been raised from 128 MB (or 256 MB in later models) all the way to 4 GB.

    How will these facts directly impact all the programs that users run, not just some of them, and all the functionality they experience? “The first thing that people will notice, I’ll tell you up front, is the boot time,” Seagate’s Joni Clark responded. “After the third boot with this drive, I’ve seen people come back and say, ‘My boot time was cut in half.’ I don’t know about you, but when I’m running from meeting to meeting, I’m the next presenter, and I’m waiting for my system to boot so I can share some files, it’s very, very frustrating.”

    A slide from a Seagate presentation revealing the benchmark test results for 'real-world' applications with its Momentus XT hybrid SSD/HDD drive.  [Courtesy Seagate]

    In a test designed to approximate real-world workloads, Seagate engineers tested a high-performance Asus G51J gaming notebook using a script that automated the same series of user interactions starting at bootup, then proceeding to running Excel, calling up files from iTunes, launching the game Crysis Warhead, and so on. A fully solid-state drive ran the script sequence in 140.1 seconds. Momentus XT — a 7200 RPM drive — ran the script in 153.8 seconds: not much slower than the SSD, and appreciably faster than a 10,000 RPM drive at 188.2 seconds and a standard Seagate 7200 RPM drive at 225.6 seconds.

    Perhaps most importantly, Momentus XT booted up the Windows 7 64-bit system in 23 seconds versus 59 seconds for the 7200 RPM HDD.
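    A quick arithmetic check of those quoted figures puts the hybrid’s standing in perspective:

```python
# Script completion times in seconds, as quoted from Seagate's test.
times = {
    "ssd": 140.1,
    "momentus_xt": 153.8,   # 7200 RPM hybrid
    "hdd_10k": 188.2,
    "hdd_7200": 225.6,
}

hybrid = times["momentus_xt"]
pct_slower_than_ssd = (hybrid / times["ssd"] - 1) * 100        # about 9.8% slower
pct_faster_than_10k = (1 - hybrid / times["hdd_10k"]) * 100    # about 18.3% faster
pct_faster_than_7200 = (1 - hybrid / times["hdd_7200"]) * 100  # about 31.8% faster
boot_improvement = (1 - 23 / 59) * 100                         # about 61% faster boot
```

    In other words, the hybrid gives up less than 10% to a full SSD on this workload while beating the standard 7200 RPM drive by nearly a third.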

    With respect to price, Seagate is also aiming for a sweet spot: $153 suggested retail for the 250 GB model, as opposed to $78 for a standard 250 GB 7200 RPM HDD, and $808 for a full 250 GB solid-state drive. Models at 320 GB and 500 GB will also be available, with pricing information to be revealed Wednesday.

    The fact that Seagate didn’t test Momentus XT in a system known for power conservation is a clear indicator that there’s not much about it that’s “green.” That’s a big lesson learned from the Momentus 5400 PST days, when Seagate also tried to appeal to the energy cost-conscious crowd.

    “The first hybrid drive did try to cut down on power consumption,” Clark told us, “and actually did quite a bit. There was some spin-down going on…and we tried to increase the reliability of the drive and the performance. We were trying to be everything to everybody with that first drive. But the market came back to us and said, if you’re going to put solid-state on a drive, make sure it’s for performance. That’s the number-one thing…So with this drive, we quit trying to be everything to everyone, and we focused on the one value point that customers told us they wanted: affordable performance. So you will not see this drive spin down at all. We will spin like a normal 7200 — it doesn’t consume any more power than a normal 7200 RPM drive. Your shock is the same, and even your acoustics are identical.”

    Seagate is inviting the general public to see this performance for itself, in a live webcast scheduled for Wednesday, May 26 at 2:00 pm EDT / 11:00 am PDT. Seagate is taking reservations now at this address, and attendees will be eligible for prize drawings including one of three Asus G73 notebooks.







  • Zuckerberg: Facebook will respect the privacy of those who really prefer it

    By Scott M. Fulton, III, Betanews

    If a user would rather that Facebook not share her personal information with other services without her knowledge, then there should be a simple switch that turns off Facebook’s ability to do so. This was the message delivered by Facebook CEO Mark Zuckerberg in an op-ed piece published in Sunday’s Washington Post.

    “Facebook has been growing quickly. It has become a community of more than 400 million people in just a few years,” Zuckerberg wrote. “It’s a challenge to keep that many people satisfied over time, so we move quickly to serve that community with new ways to connect with the social Web and each other. Sometimes we move too fast — and after listening to recent concerns, we’re responding.”

    The problem with automatically sharing personal data with other sites was magnified by last month’s unveiling of the ‘Like’ system, also known as Open Graph. Ostensibly, it enables sites such as YouTube to inform Facebook about the videos its users signify they “Like,” so that Facebook can respond by feeding those users more information about, for instance, the videos’ producers or subject matter.

    Facebook does give users a way to effectively say, “No, I’d rather not,” with respect to sharing information in this manner, but only on a site-by-site basis. In his op-ed piece yesterday, Zuckerberg explained that this type of “granularity” was something he had thought people would prefer, “but that may not have been what many of you wanted. We just missed the mark.”

    The CEO stated that a solution will be made available “in the coming weeks,” in response to what he characterized as complaints from a minority of users. The majority of others don’t complain, he said, but that won’t stop Facebook from trying to please everyone, including those few who think privacy is really important.

    “We have also heard that some people don’t understand how their personal information is used and worry that it is shared in ways they don’t want,” he wrote. “Many people choose to make some of their information visible to everyone so people they know can find them on Facebook.”

    Recently, many users have discovered their information was already made visible, and not by choice. That prompted Sen. Chuck Schumer (D – N.Y.) to ask the Federal Trade Commission to create new guidelines for all social networking sites, and to act as the police force for compliance nationwide. And earlier this month, Rep. Rick Boucher (D – Va.) introduced legislation that would mandate that any act of personal information sharing between Web sites be expressly indicated to the user at the time it happens, with the user being given the option to stop it.

    Zuckerberg’s solution — at least, to the extent he discussed it in the Post — would fall short of that mandate, opting instead to give users an extra option to turn all third-party sharing off. Conceivably, that option may be presented to all users upon logging into Facebook.











  • AT&T to raise two-year termination fee by 86% on iPhones, smartphones

    By Scott M. Fulton, III, Betanews

    In a cautiously worded notice to customers this afternoon, AT&T advised that it will be raising its early termination fee (ETF) for wireless service for smartphones and netbooks, evidently including Apple’s iPhone. Beginning June 1, the base rate for ETFs from two-year service agreements will be raised from $175 minus $5 per month of tenure, to $325 minus $10 per month.

    “One of the ways we do this is to offer you the industry’s leading wireless handsets below their full retail price when you sign a two-year service agreement,” reads AT&T’s notice. “In the event you wish to cancel service before your two-year agreement expires, you agree to pay a prorated early termination fee (ETF) as an alternative way to complete your agreement.”

    To help balance out the revenue stream, the ETF for basic phone and feature phone users will decrease after June 1 by $25, the company said, to $150 minus $4 per month of tenure.
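    The proration in all three schedules reduces to one small formula; a sketch (function name mine, figures from the article):

```python
def prorated_etf(base, monthly_credit, months_completed):
    """Early termination fee after a given number of completed months,
    floored at zero, per the schedules AT&T announced."""
    return max(base - monthly_credit * months_completed, 0)

# New smartphone/netbook schedule (effective June 1): $325 minus $10/month.
# Old schedule: $175 minus $5/month.
# Basic/feature phone schedule after June 1: $150 minus $4/month.
```

    A smartphone customer canceling 12 months into a contract would owe $205 under the new schedule versus $115 under the old one; the $325 base itself is the headline’s 86% jump over $175.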

    AT&T made its last adjustment to early termination fees on May 25, 2008, when it first adopted the “pro-rated” approach, in what was seen at the time as a response to pressure from Congress. A bill had been introduced in the Senate — the Cell Phone Consumer Empowerment Act of 2007 — that would have mandated pro-rating for ETFs, and possibly instituted caps as well. The benchmark that the bill’s author, Sen. Amy Klobuchar (D – Minn.), had set as being too high for most smartphone customers was $175. Soon after AT&T set its newly pro-rated fee at $175 and other carriers followed suit, action on the bill subsided, and it was never passed.







  • That’s one expensive logo: Symantec gets VeriSign checkmark for $1.28 B

    By Scott M. Fulton, III, Betanews

    On the surface, it might sound like one of those amateurish conclusions a blogger might reach after having just read the press release: Symantec, a software company now mainly known for security products, acquires some assets from a non-competitor in order to get that company’s logo. But in the deal between Symantec and VeriSign announced yesterday, there is no mistaking the fact that the antivirus maker acquired, among other things, the single asset that just last week VeriSign argued was the ticket to its own future stability: quite literally, its own logo.

    Up until yesterday, its name was the VeriSign Trust Seal. A big part of VeriSign’s business had been the licensing of that logo to “trusted” Web sites whose security services pass VeriSign’s test. So when online shoppers see that pixelated checkmark inside the circle, they conclude the site they’re shopping on is safe…and they’ll buy more.

    A marketing brochure published by VeriSign in March (PDF available here) tells the story of TheFind, an online mall representing a multitude of smaller retailers: among the retailers it serviced, those that displayed the Trust Seal saw click-throughs increase by 18.5% over rates for retailers without the seal (Symantec estimates the “sales uplift” for retailers bearing the logo at as high as 36%).

    “One of the most important issues to users about an online retailer is its procedure for safeguarding personal data such as credit card numbers that travel over the Internet when customers make purchases,” the brochure reads. “With the rampant growth of phishing and identity theft, consumers are increasingly wary about providing this information, especially to companies they do not know. Therefore one of the pieces of information TheFind publishes about retailers is the protection they employ for transmitting private data. In many cases a generic ‘SSL Encryption’ logo appears. When the retailer uses VeriSign SSL Certificates, however, users see the VeriSign seal.”

    The business that carried VeriSign this far has been the sale of SSL certificates to Web sites, and the subsequent licensing of its Trust Seal to those sites that meet VeriSign’s conditions. The brochure implied that the checkmark is an indicator of trustworthiness over and above the little padlock symbol that browsers use to indicate the presence of SSL or TLS encryption.

    Late last month, VeriSign published its financial results for Q1 2010, and they were not all bad. But they were a continued indication that the SSL certificate business was flagging, with executives crediting the company’s Naming Services division — where it competes with the likes of GoDaddy and Register.com — for picking up the slack.

    VeriSign began expanding its Trust Seal Services business last February. Two weeks ago, during its quarterly conference call with analysts, CEO Mark McLaughlin made a bold pronouncement in response to a Baird & Co. analyst (Seeking Alpha transcript available here): his company would begin marketing the Trust Seal to Web sites that don’t use SSL certificates, a move that risks diluting the meaning of the seal in exchange for addressing a much broader potential market.

    “The plan is to [market to] at least two groups of folks. The first one we are after is a broad-broad market that we’ve never addressed before, which would be sites that do not require SSL certificates. So they are non-transactional sites in nature,” said McLaughlin. “There are ten times more sites in that category than folks who would be in the e-commerce SSL total addressable market.”

    Just seven days ago, VeriSign announced the expansion of its Trust Seal Partner Program, in what was then considered an effort to get the checkmark pasted onto just about any site in the world that someone, somewhere might consider “good.”

    As VeriSign VP for marketing Armando Dacal stated at the time, “With the VeriSign Trust Seal, our partners now can bring the trusted VeriSign brand to a much broader marketplace, including content publishers, ad-supported Web sites, small online businesses, and e-commerce sites whose shopping carts are managed by a third-party service. Now every Web site whose success relies on a trusted relationship with consumers can display an extension of the most recognized trust mark on the Web.”

    However, as analysts from the online financial service Trefis predicted following that announcement, the dilution strategy would not be as effective as VeriSign hoped. Because the checkmark had come to stand for quality SSL certification, Trefis concluded, letting anyone stick it on a site at random and call itself a VeriSign partner would only reduce the mark’s attractiveness.

    A slide from Symantec's May 19, 2010 presentation depicting its key acquisition from VeriSign: its Trust Seal 'checkmark' logo.

    As Symantec made clear during its acquisition announcement yesterday, although it’s acquiring VeriSign’s SSL certification and Trust Services business, along with its logo, it’s not acquiring that company’s strategy. SSL certification could become an influential selling point for Symantec’s existing enterprise security products, such as Symantec Protection Suite. Rather than broaden the Trust Seal’s addressable market, Symantec now plans to tighten its focus, making it more of an incentive for online retailers to purchase not only SSL certificates but other Symantec products and services as well.

    So while a billion and a quarter in cash is a lot to pay for a logo, Symantec seized an opportunity to save an influential business from drowning itself in its own market strategy. In yesterday’s announcement, Symantec said it would try to keep VeriSign employees who were critical to the business, though it acknowledged that some would be let go. That might not be a bad idea either.







  • 10 questions for MPEG LA on H.264

    By Scott M. Fulton, III, Betanews

    Prior to Google’s announcement earlier today that it is open sourcing the VP8 video codec, a spokesperson for MPEG LA — the licensing agent that manages the patent portfolio for multimedia technologies relating to the H.264 codec, among others — agreed to answer ten questions submitted to the agency in advance. Those questions concern how it licenses the codec that Microsoft and Apple consider the best solution for HTML 5, the next markup language for the Web.

    Here, Betanews presents the questions and MPEG LA’s responses without editorial comment.

    1. One of the principal confusions we see, evidently, concerns the royalties process for non-commercial video. We know that MPEG LA does not charge royalties to producers of videos encoded using H.264 for the non-commercial use of video made with cameras, or through software, that use this codec. But royalties are paid, by someone, at some time, and perhaps people would better understand this process if they knew who, when, and why. People are evidently under the impression that they will, at some time, owe a bill to MPEG LA; the current conspiracy theory is that it will come due in 2015, the expiration date for the current royalty-free terms.

    MPEG LA: Please permit me to provide some general background with respect to our AVC/H.264 (“AVC”) License: The AVC License is effectively divided into two halves: (i) sublicenses granting the rights to “manufacture and sell” AVC Products and (ii) sublicenses granting the rights to “use” such AVC Products to provide AVC Video content for remuneration. An AVC Product is defined to be a product that contains one AVC decoder, one AVC encoder or a single product including a combination of the two (for example, hardware products, software products, etc.).

    In the case of (i) sublicenses granting the rights to make and sell AVC Products, the party that offers the end products concludes the License and is responsible for the applicable royalties associated with the products they distribute. Included in the royalty paid by a Licensee for the manufacture and sale of AVC encoders/decoders is the limited right for a Consumer to use the encoders/decoders for their own personal use (for example, in a teleconference or to watch personal video content). But, when the encoders/decoders are used to provide AVC video content to an end user for remuneration, the provider of such video service may be responsible for a royalty for the right to use the encoders/decoders in connection with the remunerated video. Specifically, AVC Video offered on a Title-by-Title, Subscription, Free Television, or Internet Broadcast basis will benefit from coverage under the AVC License (the first three bear royalties; the fourth [Internet Broadcast] does not).

    2. Has there ever been an instance in the history of the MPEG LA organization when an individual has been responsible for royalties for non-commercial use of technologies in its portfolio?

    MPEG LA: Individuals are responsible when they make [or] provide video for remuneration of the type for which a license is required and a royalty is payable, but there has been no instance where a consumer or end user was pursued for a royalty.

    3. Is it possible for any party to be held responsible in five years’ time for the non-commercial use this year of technology in MPEG LA’s portfolio?

    MPEG LA: We assume that by “non-commercial use” you mean something such as a Web site that distributes AVC video content free to end users (which is referred to in our AVC License as “Internet Broadcast AVC Video”). While that Web site would benefit from being a Licensee to our AVC License, it does not pay a royalty for the distribution of such video through December 31, 2015. Decisions regarding royalties for the period after 2015 are considered near the end of the prior term when then current business models and related conditions can be taken into account.

    4. If a Web site were to charge subscription fees for up-front access to its content, and among that content happened to be links to, or embedded streams of, videos that were created using H.264 for which there appeared to be no commercial intent at the time of creation, is anyone responsible for royalties? The producer, the Web site, the viewer? (For example, say The Wall Street Journal under a paywall was to host an H.264 video that it presents as having been independently produced, and the paywall pertains to the site as a whole.)

    MPEG LA: Yes, since the Web site is receiving remuneration for the AVC video content it makes available on a subscription basis, it would benefit from the coverage our AVC License provides. The amount of royalties owed, if any, would depend on the number of Subscribers to that website during a calendar year: 100,000 or fewer subscribers/year = no royalty; 100,001 – 250,000 subscribers/year = $25,000; 250,001 – 500,000 subscribers/year = $50,000; 500,001 – 1,000,000 subscribers/year = $75,000; and more than 1,000,000 subscribers/year = $100,000.
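    The subscriber tiers MPEG LA quotes translate directly into a lookup; a sketch (function name mine, figures as quoted):

```python
def avc_subscription_royalty(subscribers):
    """Annual royalty (USD) for subscription AVC video, per the
    tiers MPEG LA quoted: each (ceiling, royalty) pair is the top
    of a band and the flat fee owed within it."""
    tiers = [
        (100_000, 0),        # 100,000 or fewer subscribers: no royalty
        (250_000, 25_000),
        (500_000, 50_000),
        (1_000_000, 75_000),
    ]
    for ceiling, royalty in tiers:
        if subscribers <= ceiling:
            return royalty
    return 100_000  # more than 1,000,000 subscribers/year
```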

    Next: Why are browser and plug-in makers also responsible for royalties?

    5. If a Web site were to present a video produced using H.264 as an advertisement for a product or service sold commercially through that site, is that considered a commercial use of the codec, and if so, who’s responsible (and for how much, if you can say)?

    MPEG LA: Assuming there is no remuneration for the video itself (in this case, an advertisement), this would fall under “Internet Broadcast AVC Video.” The Web site would benefit from being a Licensee to our AVC License, but would not need to pay a royalty for the distribution of such video at least through the license term ending December 31, 2015.

    6. When an independent producer wishes to legitimately sell (or make commercial use of) a video or movie produced using technologies in MPEG LA’s portfolio, how does this producer make arrangements with MPEG LA?

    MPEG LA: For more information about MPEG LA’s AVC License or to request a copy of the License, the producer should visit this page.

    7. Since a plug-in technology such as Adobe Flash (which utilizes H.264) may or may not be used by viewers in the processing of videos that were distributed commercially, and for which royalties were apparently paid, why is Adobe responsible for royalties also? And why would the manufacturer of a (hypothetical) H.264 codec plug-in for Mozilla Firefox be responsible as well?

    MPEG LA: As explained earlier, Adobe is considered a seller of its AVC Product, and Adobe would benefit from coverage under our AVC License. The royalties owed, if any, would depend on the number of units Adobe distributed during a calendar year: 100,000 or fewer units/year = no royalty; 100,001 – 5,000,000 units/year = US$0.20/unit; 5,000,001+ units/year = $0.10/unit, with a cap of $5 million in 2010. Adobe and Mozilla would be responsible for paying the royalty as described above since they are the providers of the AVC Products.
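    The per-unit schedule can be sketched the same way. This reads the quoted rates as banded (first 100,000 units free, the next band at $0.20/unit, units beyond 5 million at $0.10/unit, capped at $5 million for 2010); the banding is my interpretation, since the answer doesn’t spell it out:

```python
def avc_unit_royalty(units):
    """Estimated annual royalty (USD) for AVC Product units sold,
    assuming the quoted rates apply per band. Computed in integer
    cents to avoid floating-point drift."""
    cents = 0
    if units > 100_000:
        cents += (min(units, 5_000_000) - 100_000) * 20  # $0.20/unit band
    if units > 5_000_000:
        cents += (units - 5_000_000) * 10                # $0.10/unit band
    return min(cents / 100, 5_000_000)                   # 2010 cap: $5 million
```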

    8. Some are under the understanding that when an open source, non-commercial codec that does not use H.264 is used in the processing or creation of videos that may be playable in consoles or with devices or software that utilize H.264, the creator of that codec is not responsible for royalties to MPEG LA. In other words, if a developer avoids the use of H.264 technology to create a video that a true H.264 codec recognizes as compatible, no charges apply. Is that accurate, and why or why not?

    MPEG LA: By definition, an H.264 video is playable using an H.264 codec. To the extent that is true, coverage is provided and applicable royalties are payable under the AVC License.

    9. Is there any reason for individuals to suspect that, if today they happen to use technology that attempts to be compatible with H.264, and that is later found to infringe upon MPEG LA patents, the individual users themselves (the viewers and producers of videos using infringing codecs) would become liable for unpaid royalties?

    MPEG LA: As answered in #2 above, the consumer, or end user, is not responsible for concluding our AVC License or paying a royalty. However, as also explained in #1 above, when the encoders/decoders are used to provide AVC video content to an end user, the provider of such video service will benefit from coverage under the AVC License.

    10. Under current US law, an Internet service provider is granted “safe harbor” against liability for copyright infringement if the system by which videos or other content is hosted there, or flows through its channels, is automatic and without the ISP’s intervention. That law is said to apply to YouTube (and thus Google) when videos from a major content producer (such as Viacom) are uploaded there (although Viacom is, of course, challenging this). But it’s my understanding that Google pays royalties to MPEG LA and others for the use of H.264 in the display of YouTube videos, which may include content for which it is shielded from copyright infringement liability. Why do royalties apply to providers of videos that may include not only non-commercial works but non-authorized or illicit ones as well?

    MPEG LA: Google is responsible for the use of patents in connection with any distribution of AVC video content that occurs on its Web site because its Web site is where the transaction took place with the end user, regardless of who supplied the video or whether the video infringed anyone’s copyright.







  • Microsoft: IE9 won’t block VP8 video, won’t build it in either

    By Scott M. Fulton, III, Betanews

    In a pair of blog posts released simultaneously this afternoon, Microsoft’s Internet Explorer General Manager, Dean Hachamovitch, walked on eggshells in explaining why his group is staying the course with respect to its decision on the H.264 codec in IE9. This comes in the wake of Google’s historic move today to release the VP8 video codec it acquired under a full open source license, under the umbrella title WebM, even though doing so could mean legal action against Google down the road.

    “The issue of potential patent liability is ‘ultimately for the courts to decide,’” wrote Hachamovitch in one post, citing an Engadget article from earlier this month. Reaffirming his company’s commitment to the ideals of HTML 5 — whatever those may be today — he stated at two points, “IE9 will support playback of H.264 video as well as VP8 video when the user has installed a VP8 codec on Windows.”

    Parsing that sentence, one notices that Hachamovitch did not mention the user needing to install an H.264 codec on Windows (although Silverlight does provide such a service), which suggests that the way one enables Internet Explorer to play any VP8 video will actually remain unchanged from today. Microsoft may not be distributing the VP8 codec itself with either IE or Windows Media Player, so IE users (along with Apple Safari users, most likely) will find themselves downloading the codec separately. Though it’s too early in the game to say for certain, Google could probably establish a portal for such distribution, maybe even through YouTube.

    In a second blog post, gently but very deliberately, Hachamovitch addressed what he characterized as “the uncertainty” over H.264 and HTML 5 by explaining that IE9 will be open in its acceptance of third-party plug-ins that offer functionality above and beyond HTML 5. Notice how that casts VP8 in the “other” category.

    “Of course, IE9 will continue to support Flash and other plug-ins,” Hachamovitch wrote for IEBlog. “Developers who want to use the same markup today across different browsers rely on plug-ins. Plug-ins are also important for delivering innovation and functionality ahead of the standards process; mainstream video on the web today works primarily because of plug-ins. We’re committed to plug-in support because developer choice and opportunity in authoring web pages are very important; ISVs on a platform are what make it great. We fully expect to support plug-ins (of all types, including video) along with HTML 5.”

    Hachamovitch noted that while Microsoft is a participant in the licensing group MPEG LA, which manages the portfolio for H.264, it actually loses money in the process, making back about half of what it puts in. “Microsoft pledged its patent rights to this neutral organization in order to make its rights broadly available under clear terms, not because it thought this might be a good revenue stream. We do not foresee this patent pool ever producing a material revenue stream, and revenue plays no part in our decision here,” he wrote.

    The general manager’s comments came a few hours after the posting of a detailed technical analysis of the VP8 codec by Jason Garrett-Glaser, the principal developer of the competing x264 encoder — an open source implementation that produces H.264-compatible video. The long post shares valuable information that Garrett-Glaser may not have been able to share before today.

    Though generally balanced throughout, Garrett-Glaser opens up his dissection of the VP8 specification (as opposed to the software itself) with a noise familiar to readers of the comic strip “Peanuts,” usually found whenever Lucy yanks the football out from Charlie Brown’s feet. Prior to several pages of extremely thorough explanation as to why, he suggests that merely open-sourcing the spec may not be enough: “The spec consists largely of C code copy-pasted from the VP8 source code — up to and including TODOs, ‘optimizations,’ and even C-specific hacks, such as workarounds for the undefined behavior of signed right shift on negative numbers. In many places it is simply outright opaque. Copy-pasted C code is not a spec. I may have complained about the H.264 spec being overly verbose, but at least it’s precise. The VP8 spec, by comparison, is imprecise, unclear, and overly short, leaving many portions of the format very vaguely explained. Some parts even explicitly refuse to fully explain a particular feature, pointing to highly-optimized, nigh-impossible-to-understand reference code for an explanation. There’s no way in hell anyone could write a decoder solely with this spec alone.”

    Garrett-Glaser points to areas where VP8’s overall performance is at least competitive with, but sometimes less than competitive with, both H.264 Baseline Profile and VC-1, the standard previously advanced by Microsoft. In his closing, he makes one very ominous, though perhaps acutely accurate, statement: “With regard to patents, VP8 copies way too much from H.264 for anyone sane to be comfortable with it, no matter whose word is behind the claim of being patent-free.”

    Betanews has been advised to await a forthcoming statement from MPEG LA, which may address precisely this point.

    Copyright Betanews, Inc. 2010






  • Google open-sources Web video codec; Mozilla, Opera, Adobe sign on

    By Scott M. Fulton, III, Betanews

    Banner: Breaking News

    This afternoon, Google opted for the most daring option available to it: making both the technology and the source code for the VP8 codec it acquired in its buyout of On2 Technologies available to a newly formed entity, the WebM Project. That project will serve as a licensing agent on Google’s behalf for the VP8 video codec, the Vorbis audio codec, and the Matroska multimedia container, all for royalty-free use — apparently in both free and commercial video.

    At least as extraordinary, if not more so, is the new WebM group’s list of charter supporters, which could be unofficially dubbed the “Everyone Except H.264 Coalition.” Browser makers Mozilla and Opera both appear on this list, along with Adobe, the maker of Flash — the Web’s most prevalent distribution system for streaming video. And on the hardware side, both AMD (parent of ATI) and Nvidia have signed on, along with all the principal players in handheld components: ARM, Freescale, Marvell, TI, and Qualcomm.

    Now, not only will VP8 become part of Google’s YouTube experiment with HTML 5 — as the newly launched WebM site confirmed today — but at least three of the world’s top five browsers will likely incorporate the WebM portfolio soon. And unless someone pulls that Flash logo off the WebM site for some reason, we can apparently expect Adobe Flash to provide the “other” browsers with the WebM support they would otherwise lack. It’s a deal that sews up all the loose ends; if it works, it would unify all opposition against the last holdouts for proprietary video on the Web: Microsoft, Apple, and Intel.

    In a statement released this afternoon, a Mozilla spokesperson confirmed that organization’s participation in the project.

    “Until today, Theora was the only production-quality codec that was usable under terms appropriate for the open Web,” reads a blog post moments ago from Mozilla chief evangelist Mike Shaver. “Now we can add another, in the form of VP8: providing better bandwidth efficiency than H.264, and designed to take advantage of hardware from mobile devices to powerful multicore desktop machines, it is a tremendous technology to have on the side of the open web. VP8 and WebM promise not only to commoditize state-of-the-art video quality, but also to form the basis of further advances in video for the Web.”

    Google developed its license terms for the VP8 codec in WebM based in large part, it says, on the BSD license. An excerpt reads: “Subject to the terms and conditions of the above License, Google hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer this implementation of VP8, where such license applies only to those patent claims, both currently owned by Google and acquired in the future, licensable by Google that are necessarily infringed by this implementation of VP8.”

    That means a licensee can indeed sell implementations of commercial software products that include VP8. This is where Google is now playing with fire, as the company’s rights to do so may be challenged, as Betanews learned last month. Rights holders to video technology may claim that neither Google nor anyone else has the right to give away basic concepts that On2 at one time did have, or may have had, the right to sell.







  • Or perhaps it’s something else: Microsoft combats a CRM + cloud colossus

    By Scott M. Fulton, III, Betanews

    Banner: Analysis

    Customer relationship management (CRM) software has typically fallen outside the realm that Betanews has covered, at least in recent years. It falls outside the realm a lot of publications cover — not because it’s the least bit obscure or unimportant (billions of dollars in invested capital revolve around the segment), but because it isn’t usually the stuff around which soap operas are based. If someone left a spare, unauthorized beta copy of Dynamics CRM at a bar, Gizmodo wouldn’t be snapping photos of it, and TMZ wouldn’t be scooping interviews about it.

    Nevertheless, CRM is a huge industry; and while Microsoft has been a big player in that market since its acquisition of Great Plains Software in 2001 and Navision the following year, it has never been the #1 player. PeopleSoft had a very competitive CRM offering for small businesses in the early part of the last decade, then Oracle acquired that company in 2004. Later, Siebel had the lead, and Oracle acquired that company in 2006. (You can see a pattern here.)

    Then, as though Calvin “Freakin’” Borel were jockeying it, Salesforce.com has surged forward with a spectacularly successful alternative that is now a prime example of the success of cloud-based Web apps. Microsoft could very well be left behind in this market unless it makes some deals.

    Evidently, that’s what it was trying to do: Jigsaw is a CRM software producer’s Holy Grail. Think of one colossal database of cloud-based, crowd-sourced contacts, assembled through the sheer momentum of tens of thousands of willing contributors. Connect Jigsaw to your CRM product and you have a high-bandwidth gold mine.

    Just last month, CRM Magazine’s Lauren McKay learned, Jigsaw was preparing to announce a deal with a handful of companies, including Microsoft and Oracle, to extend its cloud-based connectivity to Microsoft Dynamics CRM and Siebel. The result would probably have been a set of branded add-ons resembling a third-party utility, available today, that connects Dynamics with Jigsaw.

    Jigsaw Data Fusion for Salesforce.com, its new parent company.

    Then Salesforce.com purchased Jigsaw outright. Almost immediately, Salesforce began marketing Jigsaw Data Management as its own product — the biggest link to the broadest cloud database of sales contacts in the world.

    The problem with cloud-based products is that they’re practically monopolistic by design. Just as there’s no sensible business reason for anyone to build “another Wikipedia,” it would be considered foolish for anyone to try a “competitive cloud” for even a barely-established product, including Jigsaw.

    The objective now for any competitor, including Microsoft, is to keep enough of a foot in the door to remain a competitor before it gets boxed in with the slow horses in the middle. Apparently Microsoft chose to fire a broadside with the weapons at its disposal at the time, which yesterday consisted of some very old patents on “technologies” such as stacking toolbars together — methods for which you’d think the rest of the world had already been granted a perpetual license by default.

    But regardless of whether you buy into the theory that Microsoft has any rights to claim, for instance, the ability to embed a menu in a Web page (patent #5,742,768), it’s now an established fact that somebody in Tyler, Texas would buy into it, and it’s a simple matter to seat such a person on a jury. So even if Microsoft’s artillery rounds are made of paper, they can inflict damage.

    This lawsuit isn’t really about toolbar stacking. It’s about making certain a competitor with momentum and motive doesn’t lock down the cloud.







  • Microsoft to pay $200 M to VirnetX to make future patent suits go away

    By Scott M. Fulton, III, Betanews

    Two months ago, VPN builder VirnetX was awarded $105.75 million by a Tyler, Texas jury for Microsoft’s infringement of its patented tunneling protocol for private networks. Realizing that this could be just the first in a series of home runs from the same turn at bat, Microsoft has opted to pay VirnetX $200 million as a settlement of this and all future lawsuits.

    The technology that triggered the initial award was a way for VoIP phones to conduct communications on secure channels, without the phone user having to log in using some kind of keyboard. What Microsoft wanted for its Unified Communications suite was a way to keep the same “dialtone” when a user picks up a voice receiver and dials a recipient, and yet keep the channel between the parties secure using VPN technology.

    VirnetX definitely held a patent on something meeting that general description, though Microsoft contended that the basic innovation behind VirnetX’s twist on tunneling wasn’t much of a twist. After its fifth-of-a-billion-dollar payout, Microsoft will not be pursuing that argument on appeal.

    Instead, VirnetX will be putting its newfound revenue to use by funding something it calls the Secure Domain Name Initiative. Launched just last month, the company claims it will be utilizing the two patents it holds — the two upon which the jury said Microsoft infringed — to develop a system it describes as enabling always-on communications security between DNS endpoints, presumably using encrypted traffic. Imagine an HTTPS connection (or perhaps something more secure) where the browser doesn’t have to create the session key, and where all traffic is encrypted by default.

    To get to a Web where that’s the case, apparently engineers will have to go through VirnetX’s channels; and that $200 million payout doesn’t just pave the way, but puts up guardrails, fences, and gates as well.







  • ‘Free’ and ‘open’ Web video may be impossible after Microsoft backs H.264 only

    By Scott M. Fulton, III, Betanews

    The good news should be that everyone with a major stake in the outcome of the Web video standards debate has now publicly expressed support for something called “open” or “openness.” But that’s where the similarities, and even the niceness, end. Yesterday, Apple CEO Steve Jobs personally weighed in on the subject by making it an “us against them” battle, with Adobe and Flash cast as the villains.

    Late yesterday, the head of Microsoft’s Internet Explorer 9 project, Dean Hachamovitch, followed suit, representing the company whose decisions about what standards to support — or not support — have historically steered the course of Web development, for better or worse. Assuming a far more civil tone than Jobs, but with a message no less significant, Hachamovitch solidified Microsoft’s stance on high-definition Web video standards by announcing that IE9 would support H.264 for HTML 5 built-in video…and only H.264.

    “H.264 is an industry standard, with broad and strong hardware support,” Hachamovitch wrote. “Because of this standardization, you can easily take what you record on a typical consumer video camera, put it on the Web, and have it play in a Web browser on any operating system or device with H.264 support (e.g., a PC with Windows 7). Recently, we publicly showed IE9 playing H.264-encoded video from YouTube…For all these reasons, we’re focusing our HTML 5 video support on H.264.”

    The original reason for the creation of the <VIDEO> tag in HTML 5 was to enable browsers to implement built-in codecs that would play back “free video.” Soon, stakeholders in HTML 5 realized there may be no such thing: While licensors such as MPEG LA do extend royalty-free licenses to those who view Web video, that’s because the royalties are considered paid by those who produce the video using encoder tools and codecs. And while open source developers have been actively creating encoding tools such as x264 that don’t incur royalties, the question of whether their underlying technologies may still be claimed by patent holders somewhere in the world is thought to be a brutal battle just waiting to play itself out.

    Hachamovitch referred to this very point yesterday, in praising MPEG LA for its management of a licensing program that does not charge developers “additional royalty” for the use of the technology in H.264. Skillfully avoiding the use of the term “open,” he acknowledged that a critical difference exists between availability and ownership, and advised that perhaps the best course to follow is one where the owners are most reasonable and the availability is highest.

    But then he could not help but crash head-first into the issue of Adobe Flash. In his message yesterday morning, Steve Jobs thrashed Flash (against which he also holds a grudge for being a middleware platform) for being proprietary, insecure, and dictatorial — all qualities he then went on to characterize Apple as not having. The fervor over Jobs’ message, coupled with the fact that Flash is the most prominent video format on today’s Web, made the issue unavoidable for Hachamovitch.

    “Today, video on the Web is predominantly Flash-based,” the IE9 team leader wrote. “While video may be available in other formats, the ease of accessing video using just a browser on a particular Web site without using Flash is a challenge for typical consumers. Flash does have some issues, particularly around reliability, security, and performance. We work closely with engineers at Adobe, sharing information about the issues we know of in ongoing technical discussions. Despite these issues, Flash remains an important part of delivering a good consumer experience on today’s Web.”

    And that’s where the message ended, leaving it for readers to infer from it that IE9 will continue to make it easy for Adobe to plug itself directly into the browser. Supporters of the original principles of HTML 5 had come out against the use of video plug-ins — the problem that the <VIDEO> tag was created to solve — but have recently acknowledged that if browsers seek to remain “open,” then they must remain accepting of the Web’s most prevalent video format — and the plug-in vehicle that comes with it, security risks and all.

    Yesterday afternoon, in a video interview with The Wall Street Journal, Adobe CEO Shantanu Narayen answered Steve Jobs’ attack by saying that it is the iPhone that is the proprietary platform and Flash that is the open one, as evidenced by the huge wealth of quality Flash video on the Web. Narayen’s implication was that the mere fact that Adobe owns the methodologies behind Flash doesn’t make Flash any less open, or any more proprietary, than H.264 — the format which Jobs says Apple supports.

    Though it was Apple that stirred the pot yesterday, in recent months, it has been Google that turned up the heat from “simmer” to “boiling.” Its role in the Web video issue has been to catalyze debate and keep everyone else guessing, as its own stance on the subject has been all over the map.

    The one thing we do know for certain is that Google supports the <VIDEO> tag in HTML 5. But last year, Google threw a monkey wrench into the adoption process by setting itself squarely against the use of Theora, the open source video codec that was Mozilla’s preference, as the one HTML 5 codec. Google engineers predicted that if Theora were adopted, the resulting traffic from the sheer bulk of poorly encoded video would stall the entire Web.

    Within weeks of breaking that iceberg and setting it adrift, Google purchased On2 Technologies, the company actually responsible for creating the underlying principles of VP3, on which Theora was based. That led to speculation that Google would produce VP8, the current version of that codec, under an open source license — something that Betanews was told Google may not have the authority to do even though it now owns the company behind VP8.

    Google could, however, issue a royalty-free license for VP8, perhaps with little or no dispute. That would make VP8 appear to be the HTML 5 codec of choice for its Chrome Web browser, which is growing in popularity.

    But then, having yet to exhaust its supply of monkey wrenches, Google began testing building Flash directly into Chrome, helping to cement the position of its YouTube division as the world’s principal supplier of Flash video for the foreseeable future. Just yesterday, Adobe followed up by announcing direct support for Flash Player 10.1 in smartphones running Android, Google’s open-source small device operating system, starting in June. That would make things murky enough had Google not, three weeks earlier, announced it was openly funding the continued development of a version of Theora — the very codec its engineers warned would cripple the Web, and which now stands in opposition to VP8 — as an ARM component that could be built into the firmware of smartphones everywhere, including both Android and iPhone, bypassing whatever their browsers may choose to build in or plug in.

    The headline for that April 9 announcement was, “Interesting times for Video on the Web.” You think?







  • Microsoft aims to embed Media Center directly into HDTVs

    By Scott M. Fulton, III, Betanews

    Customizable screen from Windows Media Center in Windows Embedded Standard 7.

    “More and more people are getting excited about the opportunity of what PCs can do for them in their living rooms to improve their entertainment experience.” That was the message I was getting as far back as 2005, as companies including AMD, Intel, Microsoft, and yes, even Sun were exploring form factors for “entertainment PCs.” Soon, we’d be seeing brands like Intel Viiv, AMD Live, and Microsoft TV at a store near you.

    It’s five years later, and reality has officially set in. “Most people, from a consumer perspective, would not like to have a PC in their living room,” said Irena Andonova, the director of product management for Windows Embedded at Microsoft — the company where entertainment PCs were once all the rage. Today, with HDTV manufacturers embedding Internet connectivity and versatile functionality directly into their sets, the PC is just one more box. Microsoft realizes that now, so with Windows Embedded Standard 7 — which was released to manufacturing Tuesday — it’s aiming to put the operating system and the media player in the TV, where it now says they belong.

    “Making Windows Media Center available in specialized devices like set-top boxes and connected media devices, we now have devices that can live in the living room, and customers will be accustomed to having them there. It brings the same functionality and features from Windows PCs to the world of specialized devices.”

    Historically, the market for Windows Embedded has been industrial system makers, who build high-quantity or ultra-small form factor devices that need full-featured applications. And in recent years, the company has marketed Mediaroom, rather than Windows per se, to video service providers as an adaptable software platform for delivering cross-platform digital media. At CES 2010, Mediaroom actually stole the show from Windows 7 among device manufacturers.

    Customizable screen from Windows Media Center in Windows Embedded Standard 7.

    The new play for Windows Embedded Standard 7 will be to effectively wedge all of Windows into some of Mediaroom’s market space. With it would then come Internet Explorer, custom applications, and yes, Silverlight.

    As Andonova told Betanews, OEMs and platform customers looking to embed the new operating system can use Silverlight as their development platform for customized apps that can appear as part of the Media Center environment. It’s a somewhat familiar looking place, especially for users familiar with Windows Media Center today, as well as the company’s overall look-and-feel for media-driven platforms such as Zune and Windows Phone 7.

    But as a Microsoft demo indicates, some components of that look and feel may be customized by platform customers, so that it’s not Microsoft’s branding that comes through — a lesson learned from Mediaroom.

    “Windows Media Center is a customizable and extensible platform, so [OEMs] can actually customize the Media Center UI to provide the branding that they would like. They cannot change the horizontal and vertical menus — they can’t change that paradigm — but they can change the font, the background, and the branding,” she told us.

    A forthcoming Windows Embedded Standard 7-based home media device being announced by Swiss manufacturer Reycom.

    One early adopter of Windows Embedded Standard 7 for digital home media libraries, Microsoft tells us, is Swiss media device manufacturer Reycom. It’s expected to produce its first Windows Embedded Standard 7-based device (pictured above) in a few months. Andonova said Reycom’s take on the Media Center UI is “very modern, very sleek, very nice looking, something that you would expect to find sitting in a modern living room.”

    Customizable screen from Windows Media Center in Windows Embedded Standard 7.

    One of those customizable areas will be the movie library (shown above), which could theoretically be customized by potential customers whose names Microsoft couldn’t help but share…including Netflix.

    “Service providers like telecommunications companies, satellite and cable providers, what Windows Media Center and Windows Embedded Standard 7 offers is the opportunity to provide incremental, over-the-top content. We have demonstrated different services, like Netflix and combining videos from local sources and the Internet, bridging the experience between the Internet and broadcast, or viewing your pictures and photos on Standard 7 and merging Flickr, for example, with personal photo libraries…There can be different, additional revenue potential for service providers, by providing these incremental, over-the-top services.”

    Or to break that down another way: Cable and satellite companies could find additional revenue streams by funneling customers to Internet-based services. But to do that in such a way that viewers aren’t being transported to the Web browser/PC operating system world they might rather leave at work, Web apps need to become channels — things that viewers can tune in on their dials. Nevertheless, because Web apps come and go, and change, more rapidly than the firmware in most set-top boxes can keep up with, the operating system needs to be able to install these apps and uninstall them just like on a PC.

    Thus the creation of an extensible Windows Media Center on a real Windows operating system, but without the PC, embedded in the back of the latest HDTVs as well as the newest DVRs and home media devices like the one European customers should see from Reycom in the coming weeks.







  • Oh really? NAB head suggests to Congress FCC’s Broadband Plan is ‘voluntary’

    By Scott M. Fulton, III, Betanews

    There are a handful of issues of contention that broadcasters (who transmit content over the public airwaves) have with the Federal Communications Commission’s Broadband Plan. One such outstanding dispute concerns the FCC’s proposed reallocation of unused digital spectrum from broadcast to broadband purposes — a way to get at least some of the estimated 180 MHz of spectrum wireless operators say they need, without another complete re-auction.

    On Tuesday, FCC Chairman Julius Genachowski announced the formation of a so-called Spectrum Task Force, which many see as his way of connecting the necessary dots between the public airwaves (the FCC’s natural purview), wireless, and the Internet (the FCC’s disputed territory). In his announcement yesterday, the Chairman said, “To lead the world in mobile, the FCC must ensure that our nation’s spectrum is being put to its highest and best use.”

    It would seem, at first glance, that this stance would threaten the integrity of broadcasters’ hold over the public airwaves, which they had previously thought was solidified after they traded their old analog TV frequencies — which were then sold at auction to wireless providers — for new and broader digital channels. But in testimony before Congress on Tuesday, the President of the National Association of Broadcasters, former senator Gordon Smith, surprisingly told lawmakers he wasn’t as worried as he might have been.

    The reason, according to Smith (as covered by Multichannel News), is that in a speech to Smith’s NAB, Genachowski said the Broadband Plan’s goals were not mandates, and that “this broadband plan would never devolve from voluntary to compulsory.” Smith continued, “What he said is what he said, and we are prepared to work with him.”

    Indeed, certain elements of the Plan are voluntary for broadcasters, as Genachowski did point out, according to a transcript of his speech to the NAB convention on April 15 (PDF available here). After emphasizing, as he has in the past, the explosive data consumption rate among devices whose propagation among consumers is also exploding, Genachowski did lay out what he emphasized were options and choices that some heroic broadcasters could choose to make, though not necessarily all of them.

    The plan, as Genachowski described to the NAB, “proposes voluntary incentive auctions — a process for sharing with broadcasters a meaningful part of the billions of dollars of value that would be unlocked if some broadcast spectrum was converted to mobile broadband. The plan would give broadcasters the choice to contribute their licensed spectrum to the auction and participate in the upside. The plan would give broadcasters the option of channel sharing. For example, a broadcaster could contribute half of its capacity and share spectrum with another broadcaster in the market, continuing to broadcast their primary programming streams and more, while lowering their operating expenses and gaining infusions of capital.”

    The Chairman went on to emphasize just how optional participation would be. “One, these auctions are voluntary. Period. Participation is up to the licensee and no one else. Two, for the Plan to work, we don’t need all, most, or even very many licensees to participate. If a relatively small number of broadcasters in a relatively small number of markets share spectrum, our staff believes we can free up a very significant amount of bandwidth. And rural markets would be largely unaffected by the recommendation in the broadband plan because the spectrum crunch will be most acute in our largest population centers.”

    The day after Genachowski revealed the “voluntary” language, the National Journal reported that the Chairman made the pledge to the NAB’s Smith as a “backroom handshake,” the existence of which was revealed accidentally by Smith “during an exclusive, impromptu interview” with its technology blog. Or, the FCC chairman made the pledge during a public speech, which may be just another staging ground for a “backroom handshake.”

    In any event, the voluntary nature of this aspect of the plan was conveyed to Congress by Smith, so there’s certainly nothing “backroom” about it now. The opposition to Smith’s interpretation came from the representative of the wireless industry, former congressman Steve Largent, who heads CTIA. According to Multichannel News, Largent warned that if broadcasters voluntarily opt out of their participation in this plan, the wireless industry can’t wait another 15 years for the next spectrum auction.







  • Actual Analysis: NPD’s Ross Rubin on the formula for making HP + Palm work

    By Scott M. Fulton, III, Betanews

    Banner: Analysis

    For the record, the connection between Hewlett-Packard and Palm, Inc. was not something most of us saw coming, and very few reputable observers of this industry bet their reputations on it.

    In retrospect, one sees now some of the obvious connections we missed then: the fact that Todd Bradley, now chief of HP’s Personal Systems Group, was Jon Rubinstein’s predecessor as CEO of Palm; the fact that HP’s smartphone market share fell last year to below one hundredth of one percent, and yet Bradley was still charged with the task of devising an instant comeback; and the fact that HP’s latest iPaq, announced last December (pictured below) bears so little distinction from a BlackBerry Curve, Samsung BlackJack, or Motorola Q that it may as well be called the “Me2.”

    The HP iPaq Glisten smartphone

    But waving in front of HP’s obvious red flags were several more obvious white ones:

    • Since Mark Hurd took over as CEO, the company has dramatically cut costs, with an emphasis on streamlining operations.
    • While it’s nice to experiment with product categories outside one’s typical bailiwick, HP’s last go-round — its 2006 acquisition of high-class custom computer maker VooDoo PC — resulted in that division’s productivity declining to around that of iPaq.
    • With margins falling in the one-time-goldmine printer (and printing ink) business, the company’s wrestling with a complete makeover of its second most important consumer-facing brand, possibly merging it with the Personal Systems Group (PSG) division that handles PCs.
    • HP’s growth divisions today are services and enterprise technologies, especially now that it’s acquired EDS, and now that it’s challenging Dell on almost every front.
    • Finally, the cash-tight HP is already paying almost $3 billion for 3Com, the network technology pioneer that — so ironically now — became Palm’s parent company in 1997 through its acquisition of USRobotics, and then spent six years (1999 – 2005) spinning it off into a separate entity, in such a costly venture that 3Com became ripe for a buyout by HP.

    So HP buying Palm? Nah. No way.

    Anyone listening in yesterday to HP’s hastily arranged analysts’ call might have gotten the impression that just 24 hours earlier, HP execs would have said exactly the same thing — even former Palm CEO Bradley. When asked repeatedly how they plan to make this marriage work, Bradley and VP for Investor Relations Jim Burns indicated they may not actually know the answer to that for several months.

    Veteran smartphone business observer and NPD Executive Director of Industry Analysis Ross Rubin — whose insight is not only 20/20 but often 360 degrees as well — sees some of the coefficients, if not yet the entire formula, for sealing the deal. In an interview with Betanews yesterday, Rubin pointed out the synergies that could make the deal work…provided Palm transforms itself into yet another new kind of brand for an untapped market. Again.

    NPD Executive Director of Industry Analysis Ross Rubin

    Indeed, last week, Rubin was talking with us about the synergies that could make Lenovo + Palm work. “At least some of the factors that we discussed for Lenovo apply to HP,” Rubin told us late yesterday. “So there are all those opportunities to gain entrance into the carrier channel, to beef up what had been a modest smartphone portfolio business, and to gain access to an operating system that can be used on a broader range of devices, including tablets and smartbooks — which HP has either already launched or shown publicly. So all of that holds true.

    “There are some additional synergies between the two companies. Palm is doing quite a bit in terms of cloud backup and cloud services, which of course is an important business for HP. And HP has world-class R&D, global reach, and financial resources. On the conference call, Todd Bradley said that HP will invest heavily, both in the R&D and sales and marketing for Palm.”

    Usually an acquisition target does not announce a revenue shortfall on the morning that the deal is to be done. Palm did, in a move that by noon led veteran observers to believe that any deal — with Lenovo, Huawei, Wal-Mart, anyone — was off the table. Did HP really come up with a miracle solution for Palm in just minutes?

    “When you’re dealing with companies like HP, there has to be some thought given beyond the current quarter,” responded Rubin. “This is certainly an acquisition that is not income positive for HP in the short term.”

    Bradley and Burns did indicate that HP would have to spend more on R&D in the coming months than Palm is spending today. On Bradley’s side, Rubin pointed out, is his experience with just this kind of turnaround: He was credited with picking up the pieces of the otherwise disastrous HP + Compaq buyout, and making that formula work as well. But in the process, the iPaq appeared to be de-emphasized.

    This is where Ross Rubin perceives an opportunity for Palm to actually help out HP rather than the other way around: giving it retail prominence in markets the iPaq could not crack.

    “PCs aren’t smartphones; on the one hand, they are different channels. HP has developed great expertise in retail distribution,” he reminded us. “That may be more important for smartphones and other kinds of devices powered by webOS moving forward, but it’s certainly a different channel from the carrier channel. The Palm business gives them complementary distribution at the carriers where they haven’t seen a lot of business in the past.”


    The potential of HP + Palm + Microsoft

    Anyone who thinks HP hasn’t managed, or cannot manage, a software platform on its own has forgotten — or is wholly ignorant of — HP’s success as the master of HP-UX, which makes it the “Face of UNIX” for a big chunk of enterprise customers. When HP uses the single word “Scale” to describe the benefits it can offer Palm, that’s the scale it’s talking about.

    Not a lot of consumers know (or care) about HP-UX. “However, we have seen HP move to try to differentiate its products from other PCs running Windows by doing things such as developing the TouchSmart layer for its all-in-ones, and some of its touchscreen notebooks,” noted Rubin, in his talk yesterday with Betanews. “Perhaps, particularly faced with the prospect of having limited control over the user interface in Windows Phone 7, having access to webOS allows [HP] to define the customer experience a lot better than licensing another operating system might.”

    Wouldn’t HP have had that same opportunity if it had proceeded with what many expected it to do anyway: produce a line of Android phones? “Certainly Android has a lot of momentum in the marketplace right now,” Rubin responded. “There’s one liability with Android: Google has made some moves that might cause a major global company like HP to have some concerns, such as the relationship with the Chinese government, for example, or competing with its own partners as it has with the Nexus One.”

    Since neither HP nor anyone else appears to know precisely what its Palm roadmap for the future looks like, it’s fair to say that, at least today, nothing precludes it from continuing to produce the same iPaq phones it was planning to produce anyway — continuing to support Windows Mobile 6.5, or maybe even making that Android phone, despite the risks. This doesn’t have to be bad news for Microsoft, which some may have prematurely perceived as having lost another smartphone partner (after Motorola) to a competitive platform.

    The formula HP perceives for its synergies with Palm, from an investors presentation April 28, 2010.



    This is where NPD’s Ross Rubin perceives a potential opening for Palm and Microsoft, which have partnered before — at the time, rather successfully. Although that partnership began soon after Todd Bradley left Palm, HP’s existing close ties to Microsoft certainly don’t preclude the possibility of HP’s bringing Microsoft back into the Palm picture.

    How could it do that without kicking webOS aside? As Ross Rubin reminded us, Microsoft can make deals, and has made several already, with smartphone makers without binding them to Windows Phone 7.

    “HP has a massive enterprise business, and it’s very strong in several verticals in those industries. That’s part of what is going to drive demand for some of these new kinds of devices. Todd Bradley said a lot of these devices are very [new] product categories, and it remains to be seen how they’ll grow. But while BlackBerry certainly is very strong in the enterprise today…the Exchange group at Microsoft is willing to partner with handset makers other than those using Windows Phone 7 to compete with RIM. A great example of that is the partnership that Microsoft and Nokia have struck — obviously, Nokia not using Microsoft’s operating system in its phones, but using the Exchange ActiveSync architecture. Apple, of course, [is] using Exchange ActiveSync; and Palm today is using it. So BlackBerry very much remains in the crosshairs of Microsoft for a couple of reasons.”

    Extending Exchange ActiveSync technology to webOS would expand Microsoft’s service presence to yet another platform. And services are where platform makers cash in; typically, native platforms are mechanisms for funneling customers to native services (see: iTunes), but if other platforms provide the same funnels, that’s fine, too.

    As HP execs conceded yesterday, they hadn’t really thought about building up services for Palm — for instance, a competitor to iTunes, or to BlackBerry’s enterprise e-mail. With Microsoft’s help, it wouldn’t have to — it could fill in the gaps necessary to make Palm competitive in the enterprise, so that its device offering won’t look as limp and lifeless as the iPaq “Glisten.” If HP can deploy ActiveSync e-mail, and implement a portal to something similar to Microsoft’s Live mobile services, on the webOS cloud platform it’s acquiring from Palm, perhaps with portals to Office apps and Zune.net…everybody’s happy all of a sudden.

    …Okay, okay, maybe we should have seen this coming.







  • HP execs: Fate of Palm’s R&D team, iPaq, Pre, and Pixi still undecided

    By Scott M. Fulton, III, Betanews

    If financial analysts had concerns about Hewlett-Packard’s ability to resurrect Palm’s flailing fortunes, those concerns may have actually deepened following HP’s announcement call with analysts Wednesday afternoon.

    During the call, which lasted under 20 minutes, Executive Vice President Todd Bradley told analysts that he expects HP’s track record for building out communications infrastructure with eight of the world’s ten largest telecom carriers will earn HP points when making its case for carrying Palm products.

    “Today, HP provides the infrastructure for eight of the ten top carriers in the world, and as we build our execution plans, we focused on leveraging several large carriers instead of large numbers of small carriers. So we think that leverage and that focus will provide a very significant growth platform for these products as we go forward.”

    Those top 10 carriers, as reported by The Wall Street Journal last month, are: China Mobile, Vodafone Group (UK, co-owner with Verizon of Verizon Wireless in the US), Telefónica SA (Spain), América Móvil (Mexico), Telenor (Norway), Deutsche Telekom AG (Germany), China Unicom, TeliaSonera AB (Sweden), France Télécom SA (a.k.a. Orange), and Bharti Airtel (India).

    HP’s Bradley acknowledged that building out Palm’s developer network for webOS will be a key priority, and VP for Investor Relations Jim Burns confirmed that HP plans to invest in developers above and beyond what Palm has done so far. No numbers yet, but the key direction here is up.

    The Personal Systems Group (PSG) accounted for over one-third of HP’s revenue in fiscal 2009, with $10.6 billion. With substantial costs involved, it eked out $530 million in profit for that year. Palm issued guidance just this morning, warning investors that its numbers for fiscal Q4 2010 (three months ending in May) could come in below its earlier estimates. It could reap as little as $90 million in revenue for this quarter, on account of “slow sales of the Company’s products, which has resulted in low order volumes from carriers. Palm also expects to close its fourth fiscal quarter with a cash, cash equivalents, and short-term investments balance between $350 million and $400 million,” as Palm disclosed to the SEC this morning.

    That’s a staggering plummet from $350 million in revenue in the previous quarter, certainly due to more than just “seasonality.” Palm lost $22 million for that quarter. Current R&D costs for Palm were estimated by a JP Morgan analyst at $190 million annually — a figure which HP’s Jim Burns promised would increase. So with HP’s PSG earning upwards of $134 million per quarter, and increased R&D costs to come, one can imagine Palm draining PSG’s entire profit for at least the remainder of this year.
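    The back-of-envelope arithmetic here can be sketched as follows. This is a rough illustration using only the figures cited above; the even quarterly split of annual numbers, and the assumption that Palm’s R&D and losses would be charged entirely against PSG’s profit, are simplifications of ours, not anything HP has stated:

    ```python
    # Illustrative arithmetic from the figures cited in the article.
    # Assumptions (ours): annual figures split evenly across quarters, and
    # Palm's costs land entirely on PSG's books.

    psg_annual_profit = 530_000_000            # PSG's profit for the fiscal year
    psg_quarterly_profit = psg_annual_profit / 4   # ~$132.5M per quarter

    palm_rd_quarterly = 190_000_000 / 4        # JP Morgan's $190M/yr R&D estimate
    palm_quarterly_loss = 22_000_000           # Palm's loss in its prior quarter

    # What would be left of PSG's quarterly profit after absorbing Palm's
    # current R&D run rate and operating loss:
    cushion = psg_quarterly_profit - palm_rd_quarterly - palm_quarterly_loss

    print(f"PSG quarterly profit: ${psg_quarterly_profit / 1e6:.1f}M")
    print(f"Cushion after Palm:   ${cushion / 1e6:.1f}M")
    ```

    On these assumptions, PSG’s $530 million works out to roughly $132.5 million per quarter, and subtracting Palm’s current R&D run rate and its latest quarterly loss leaves only about $63 million of cushion, before any of the R&D increases that HP’s Jim Burns promised.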

    That’s why it’s so important to determine just who will be responsible for providing that R&D. Among the few questions analysts were permitted today, several probed into the fate of Palm’s current R&D team. The fact that HP execs responded by saying only that Palm’s executives’ careers were safe was a bit ominous.

    Speaking on behalf of Palm CEO Jon Rubinstein, HP’s Bradley passed on his excitement. “He’s very excited about staying and building out, actually executing his vision for the webOS into a broader market,” he said, “and I think HP brings those capabilities to him to do that. And I think it’s fair to say his team is excited as well.”

    When pressed by an RBC analyst as to whether HP intends to acquire Palm’s R&D team whole, and then either keep it intact or integrate its members into the company, Bradley was forced to concede that his plan appears to be to maintain one and only one R&D team. “We intend to operate [Palm] as a business unit, which is in line with the way we’re structured today,” he responded, before reiterating Rubinstein’s and Palm executives’ desire to stay on.

    “Palm’s operating at a loss right now,” added Burns at one point, “so we got work to do.”

    At one point, Bradley said HP could not go into specifics as to its future roadmap for Palm until it completed the deal and received the necessary regulatory approval. Estimates as to when that could take place were all over the map, extending into the beginning of HP’s fiscal 2011 (this December). Only when the final signatures are affixed to the paper will we start to see guidance about such important issues as: the fate of HP’s existing iPaq line (some may recall it actually has one — it consists of Windows Mobile phones at the moment); integration of webOS with existing HP hardware such as its Slate tablet; and just how HP plans to expand Palm’s presence, as Bradley promised at one point, into the commercial space to compete against RIM.

    “I think you have to break it up along [different] product categories. While Palm currently has the Pre and the Pixi smartphones, we see that as one space that is very consumer-oriented, and we’ll look at how we leverage both our retail and commercial channels to broaden the distribution of those sets of products,” HP’s Todd Bradley told Cross Research. “I think the tablet/slate products are such new markets, we see opportunities broadly for consumers. But at the same time, having just finished up our partner conference, enormous interest on behalf of channel partners with specific vertical deployments in things like health care and education. So I think you’ll see these products deployed in both segments, consumer and commercial. Again, we’ll talk about that time line as we get closer to completion.”



