Author: Scott M. Fulton, III

  • Office 2010 releases to manufacturing, availability as soon as May 1

    By Scott M. Fulton, III, Betanews


    The first volume licensing arrangements for Microsoft Office 2010 will be made through company partners on May 1, almost two weeks earlier than expected. The news came today from the company’s Office Engineering team, which released the final build of all versions of the company’s principal applications suite.

    “Since the start of our public beta in November 2009, we’ve had more than 7.5 million people download the beta version — that’s more than 3 times the number of 2007 beta downloads!” reads this afternoon’s post by the Engineering team. “The feedback that we’ve received from all these programs has shaped the set of products we’re excited about, and that I’m sure will delight our customers.”

    Pre-orders for individual US customers have already started from Microsoft’s online store. There, customers will find the Home and Student package (Word, Excel, PowerPoint, and OneNote) available for $149.99. This time, it’s Outlook that’s the premium component in the bundle; the Home and Business Package, which adds only Outlook to the Home and Student arrangement, sells for $279.99. The Professional bundle, which adds Publisher and Access, sells online for $499.99. Although SharePoint, Visio, and Project 2010 share the marketing umbrella with the other Office components, they are sold separately.

    The delivery date for consumer Office bundles has not yet been set. However, the official Office launch date (these days, software is rarely launched only once) is May 12, when Microsoft officials including Business Division President Stephen Elop will lead a gala press presentation from NBC Studios in New York City.

    Though Microsoft unofficially absorbed a truckload of user-crafted suggestions from the MakeOfficeBetter.com Web site launched by two company employees (and recently shut down), it was the beta program where company engineers did most of their listening to tester suggestions. From personal experience in that program, I can happily report that engineers were very receptive to input.

    Copyright Betanews, Inc. 2010






  • Interview: Internet coalition leader sees a way through for the Broadband Plan

    By Scott M. Fulton, III, Betanews

    Last Tuesday, in a conference that included invited members of the press including Betanews, Markham Erickson, the Executive Director of the Open Internet Coalition — which advocates for Google, Facebook, PayPal, Netflix, Skype, Sony, Twitter, Amazon, and TiVo, among others — urged the Federal Communications Commission to bounce back from its loss to Comcast last week in DC Circuit Court, by affirming its right to regulate broadband Internet services under a different section of US telecommunications law than it’s used before.

    Since that time, a surprising amount of water has passed under the bridge, including a round of Senate hearings Wednesday in which leaders suggested new legislation could solve the problem, so that the FCC would not have to declare regulatory authority under Title II of the Telecommunications Act — the part that typically applies to telephone networks. A key Senate Republican, Kay Bailey Hutchison (R – Texas), vowed to oppose any effort by the FCC to redeclare under Title II.

    Then in a public letter posted to many online political sites on Wednesday, including The Hill, Commerce Committee member and former presidential candidate Sen. John Kerry (D – Mass.) called upon ordinary citizens to act on the FCC’s behalf. In an extraordinary bit of irony, he asked interested parties to call their members of Congress and urge them to do nothing: precisely the level of inaction necessary to enable the FCC to redeclare its broadband mission, unencumbered by lawmakers.

    The details get a little technical, but it basically boils down to this: back in the Bush Administration, the FCC classified the Internet as an “information service” rather than a “communications service.” This limits what the FCC can do, which is, of course, just the way the big telecom companies want it.

    But the FCC could reclassify the service and preserve its traditional role. The telecom companies are giving it everything they’ve got to keep this from happening, and if you don’t speak up, they could win.

    A win for them would mean that the FCC couldn’t protect Net Neutrality, so the telecoms could throttle traffic as they wish — it would be at their discretion. The FCC couldn’t help disabled people access the Internet, give public officials priority access to the network in times of emergency, or implement a national broadband plan to improve the deplorable situation where the United States — the country that invented the Internet — lags far behind in our broadband infrastructure. In short, it would take away a key check on the power of phone and cable corporations to do whatever they want with our Internet.

    The telecom companies try to say that only Congress can pass a law to make this better. But having suffered through a year of record filibusters and procedural hurdles to grind the process to a halt, do you really think it’s a good idea for Congress to try and do this, when the FCC can have the authority right now?

    Look, eventually we may need to build a new legal framework for broadband service, but the Internet is moving too fast, the economy needs the innovation of the Internet too badly, to wait. Especially because we don’t have to. The FCC can act right now.

    Throwing monkey wrenches into the process since Sen. Kerry’s plea, two former FCC chairmen, in an interview with the Washington Post‘s Cecilia Kang, both advised the agency they once led not to attempt reclassification, and to start deciding which powers of broadband regulation the FCC should not have, by definition. Under existing laws, the former chairmen agreed, the FCC could still implement the current chairman’s Broadband Plan.

    “The Broadband Plan is the first comprehensive plan for rolling out communications and media services to all Americans, the first in the history of our country,” former FCC chairman Reed Hundt told the Post. “We’ve always approached this task, for better or worse, on a piecemeal basis. Now we have a complete plan…The FCC has plenty of jurisdictional power under existing statutes to implement those plans. Exactly which one is the one that will get through the very, very difficult passageway that is always the Court of Appeals, I don’t know. That’s too hard for me, but they have plenty of powers. There’s no question about that. Congress does not need to pass a law to have the Broadband Plan to go forth and be put into place, and the same thing is true with respect to net neutrality.”

    Then as if there weren’t enough fuel for this fire, Washington-based Internet policy advocate (and former ZDNet blogger) George Ou argued that even if the FCC were to redeclare broadband a telecom service under Title II, it would still lack the ability to tell Comcast, for instance, to stop throttling its Internet traffic.

    “Up until 2005, when the FCC reclassified wireline transport into an ‘Information Service’ under Title I, Title II Common Carrier requirements had only applied to the underlying DSL transport (the physical telephone wiring and the DSL head-end switches called DSLAMs) which were labeled as ‘Telecommunications Services,’” Ou wrote on Wednesday. “Title II classification required the Telecoms to share their DSL transport infrastructure with competing ISPs, and it gave the FCC the authority to regulate wholesale transport prices. But even before the 2005 reclassification, Title II had only applied to the transport and not the Internet Services riding on top of that transport. That means the entire discussion on Title II is irrelevant to the Comcast/BitTorrent case since that was an issue at the Internet service level and not transport.”

    Betanews discussed all these events happening in the interim with the leader of Tuesday’s press conference, Open Internet Coalition Executive Director Markham Erickson, also founding partner of the Washington law firm Holch & Erickson LLP.

    MARKHAM ERICKSON, OIC: I think George is right with point #1, that the DC Circuit didn’t obliterate the concept of ancillary authority; that legal theory still exists. The court just further described what they think ancillary authority means. It means that anything you’re doing under Title I has to be tied to a specific statutory mandate under Titles II, III, or VI of the Communications Act; and that what the FCC was doing in the Comcast decision — relying primarily on Sections 706 and 230 of the Communications Act — neither of those sections provided a statutory mandate, and they were mere policy statements rather than statutory mandates.

    The DC Circuit then went further to talk about several other provisions of the Communications Act that they didn’t think gave the FCC enough of a statutory hook to utilize a theory of ancillary authority. So while they left open some room for ancillary authority, it’s very narrow room, and it’s not quite clear whether you’d have, under their tests, a survivable theory of ancillary authority to regulate the behavior that Comcast was engaging in.

    However, George is wrong on his second point, and that is, if you do move to reclassification, certainly the FCC would have the legal authority to draft regulations that would govern exactly the kind of behavior Comcast was engaging in. This is where George and others sometimes mistakenly create a straw man, intentionally or unintentionally. If the 2002 cable modem order — which began the process of calling Internet access services “information services” rather than telecommunications services — is revisited or reversed or modified in some way, it doesn’t mean you have to go back to the old-style regulatory approach that governed telecommunications services for so many decades. That’s a straw man, I think, that network operators and advocates on the other side like to use, and you hear the talking points that advocates on our side would like to return to old-style telephone regulations, burdensome regulations that would involve things like requiring tariffing, and wholesale provisioning of capacity for competing ISPs, and other things. It’s indeed not an either/or in that situation. Title II certainly gives, particularly under Sections 201 and 202, a model for dealing with the facilities-based access providers, for dealing with the Transport layer, and governing how the Transport layer deals with the content that flows over the Transport layer.

    So I think I would strongly disagree with George’s statement that, if you reclassify, you would have to go back to things that were done prior to the reclassification of these services from telecom, to information services. If you go back to classifying them as telecom services, it doesn’t mean you have to create all those old-style telephone regulations again. It’s just not the case.

    Next: If redeclaration is step #1, what’s step #2?…

    SCOTT FULTON, Betanews: If you don’t have to revert to old-style, 1994 telephone regulations, what does one do instead? It seems like you’re implying that reclassification is a first step, and that there are several steps thereafter.

    MARKHAM ERICKSON, Executive Director, Open Internet Coalition: Yes, I think there’s at least another step after that. First, you’d have to reclassify. You would then have to forbear from some of the Title II provisions that you wouldn’t want to see applied to today’s Internet access providers, that may have applied to old-style telephone carriers. And then you’d have to develop rules that would interpret the provisions in Title II, or use those sections of Title II, and maybe parts of Title I under ancillary authority, to address the things that are in the Broadband Plan, whether it’s network neutrality or USF reform, or the privacy issues, the truth in billing provisions, the things that [FCC] Chairman Julius Genachowski [Wednesday] in his Senate testimony talked about wanting to move forward on.

    There’s not just one way to do it; there are several options ahead of them, and I don’t want to get into those yet, because we’re still working through those, and I’m sure that Chairman Genachowski’s team is working through those. But I think the talking points you’re seeing from George Ou are really setting up a non-existent straw man.

    SCOTT FULTON: You already answered my question on Tuesday with regard to the role of Congress. Can the Commission by itself do these things that you are requesting of it, without the intervention or the oversight of Congress?

    MARKHAM ERICKSON: Well, they’ll always have the oversight of Congress. The FCC, by law, is overseen by Congress, they have oversight hearings pretty regularly — [Wednesday] was one of those. But whether you need a Congressional enactment, a new piece of legislation, no, absolutely not. You don’t need that. The FCC, when they engaged in the first place to classify Internet access as information services in 2002, it [rendered] a declaratory ruling without any Congressional intervention. And they can simply revisit that decision on their own, and reverse that decision like they did…in 2002. Now, they have to do so in a well-reasoned way, with legal and factual arguments that would be able to survive review at the DC Circuit, or whatever circuit is looking at that. I think there are facts that are handed to them that they can utilize.

    At the same time, if Congress is going to work out a piece of legislation concurrently, that can happen too. I think the point to remember is that Congress doesn’t do anything very quickly. The ’96 Act took roughly ten years from theory to final passage. So the choice really for all of us is, if the FCC had the legal authority to move forward on the National Broadband Plan, should they do that, or should they wait for Congress, which potentially takes ten years? I think the answer is, they should not wait ten years, that they should move forward, and in the meantime, work with Congress as Congress works on a piece of legislation too.

    SCOTT FULTON: Without actually signing his name to any particular way of thinking, I noticed that Sen. Rockefeller [Wednesday] said that if it should become necessary for Congress to rewrite laws, and thereby help the Commission along…then he’s happy to start that process. Should Chairman Genachowski, in your mind, say, “Thanks, Jay, but no, thanks?”

    MARKHAM ERICKSON: No, I think what you saw Chairman Rockefeller say is both. He said, they’re willing to stand at the ready to draft legislation if it’s necessary. But he also said that, in his opinion, the FCC has all the tools they need to move forward without Congress. So I think he was giving the Chairman the green light to move forward unilaterally without Congress, but saying, if you get stuck, he’ll stand ready to move legislation. I actually think that you could go forward concurrently.

    SCOTT FULTON: You mentioned there were only 68 legislative session days before this term is out [67 as of Friday]. Ranking Member Kay Bailey Hutchison drew a line in the sand, warning the Chairman against trying a redeclaration, not necessarily saying what she’ll do as repercussion, but I’ll assume even though the Republican party may be in the super-minority today, it’s likely that after the next Congressional election it will not be…Since it takes years for Congress to get anything done, the Republicans could mount a very significant counter-offensive. We’ve heard the term “net neutrality” bandied about, lifting the spirits of the proponents of the Broadband Plan; I’m imagining how that will play against “big government.”

    MARKHAM ERICKSON: The way I look at it, I think in some ways, the Comcast decision was interesting in that, under the concept of ancillary authority as proposed by the Commission — and the Comcast court pointed this out — it was hard to see what the limits of the FCC’s authority would be. If the FCC were to take a much more conservative approach, by narrowly reclassifying broadband access facilities as telecommunications services, you’re talking about a narrow segment of industry, and…we would expect to see a very light touch regulation even on those providers, just to accomplish the goals of the National Broadband Plan and network neutrality. For those who are worried about the FCC’s larger reach into other segments of the Internet, and other things, I think the Comcast decision sort of solves that issue for you. I think the reclassification issue is actually the smaller government, more narrow approach to handling issues like network neutrality.

    SCOTT FULTON: Well, when you use “net neutrality” and “light touch” in the same sentence, there are a lot of people who would say it takes a lot more than a light touch — maybe more of a fiery touch — to be able to reach in and tell an Internet service provider, for example, you may not limit the use of an application on your network in a particular fashion, or you may not employ this type of network management technique. Inevitably, someone will call out interference.

    MARKHAM ERICKSON: You know the rules, at least as proposed, would provide network operators an extraordinary amount of flexibility to manage their networks without any second-guessing, without any sort of blacklists or whitelists about things they may or may not do. And I think that’s the right approach. I want to see, and I think most stakeholders want to see, the ISPs be able to manage their networks without a lot of second-guessing and without having to look over their shoulders and secure their networks and deal with congestion, and again I think that tends to be a straw man that isn’t based on, at least, the way I read the rules and what we’re interested in seeing.

    SCOTT FULTON: If the Commission goes forth with the plan as you see it, and tries reclassification, is there hope for the Commission to get back on that track before the current legislative session expires?

    MARKHAM ERICKSON: Absolutely.

    Copyright Betanews, Inc. 2010






  • Terms of ACTA draft agreement to be revealed, EU promises no ‘three strikes’

    By Scott M. Fulton, III, Betanews

    In a news release today from Wellington, New Zealand, the site of the latest round of worldwide negotiations over terms for the Anti-Counterfeiting Trade Agreement (ACTA), the European Union announced it has gotten its wish: Negotiators have unanimously agreed to reveal the terms of their latest draft to the general public, not necessarily for comments but certainly for general inspection, in an official release next Wednesday.

    That draft, the EU said, should contain no trace of a controversial provision compelling governments to impose “three strikes” legislation (also known as graduated response) against accused intellectual property infringers, similar to legislation still being pursued in France even after courts there declared such a scheme unconstitutional.

    The EU, at least, believes the current draft respects the terms of the 1994 Marrakesh Agreement on Trade-Related Aspects of Intellectual Property Rights (called TRIPS, even though the acronym could just as easily be “TRAPS”), which is one of the founding documents of the World Trade Organization. The TRIPS agreement, among other things, allows member nations to exclude from patentability such things as surgical methods and biological processes, though it does allow nations to permit patents on other processes — a permission which has always been assumed to include mathematical processes as well.

    “There is no proposal to oblige ACTA participants to require border authorities to search travellers’ baggage or their personal electronic devices for infringing materials,” reads today’s announcement. “In addition, ACTA will not address the cross-border transit of legitimate generic medicines.”

    But there’s no mention today of another of the agreement’s more controversial negotiating topics: a provision believed to have been proposed by US negotiators that would compel nations to limit the imposition of “safe harbor” from liability for copyright infringement, to ISPs that have taken proactive measures to thwart such infringement.

    US copyright law refers to the state of affairs when an independent party, perhaps inadvertently, causes copyright infringement to take place, as secondary liability. The provision the US is believed to have proposed would compel many nations, including the United States itself, to modify or completely overhaul their existing laws. As Electronic Frontier Foundation International Director Gwen Hinze pointed out last January, New Zealand would be one of those countries. There, Hinze wrote, “Intermediary liability exists only where intermediaries are found to authorize specific infringing activity.

    “Requiring other countries to harmonize with the US secondary liability standards via ACTA is dangerous for several reasons. First, it would change the existing relationships and balance of power between content providers, intermediaries and their users, with unpredictable consequences for citizens’ access to knowledge and Innovation policy. Second, it overrides other countries’ national sovereignty and the public policies reflected in their national liability standards,” Hinze continued. “Third, it will reduce flexibility and harm the ongoing development of these concepts in both the US, and in other countries.”

    Opponents of changes to safe harbor provisions, the EU among them, may look to the fact that New Zealand is the host country for this round of negotiations, for a hopeful sign that the purported US delegation proposal may have been rejected. The negotiating stances of individual member nations will not be revealed next Wednesday, the EU press statement noted, although the EU’s position on several matters has already been made crystal clear.

    Copyright Betanews, Inc. 2010






  • 2006: Google considered a PR campaign against content owners’ ‘foot-dragging’

    By Scott M. Fulton, III, Betanews

    In what may have, by now, become an exercise in the airing of corporate dirty laundry (or, in some cases, not even dirty), Viacom yesterday released more evidentiary documents from its court battle with Google. This second round of what was promised to be three public releases (though there could be more to come) includes confidential Google executive presentations from the period prior to its acquisition of YouTube, when the company was considering instead bolstering Google Video to become more competitive.

    Rather than being incriminating, the documents paint Google executives as cunning, shrewd, and eager to assume the mantle of the moral high ground. But they also reveal instances where the company considered pressing its high-ground position to its advantage, with prospective PR campaigns and likely incentives to the press that would encourage the viewpoint that media companies such as Viacom (parent of Paramount Pictures and MTV Networks) were behind the times, inflexible, and unwilling to open up their content vaults for fair use.

    One bullet-point recommendation for consideration, from a June 2006 presentation on strategic directions for Google Video, concerned the use of a PR campaign to coerce media companies, explicitly including Viacom (whose logo appeared on one slide along with others’), to make more “viral” video available for premium (paying) users. “Threaten a change in copyright policy as part of a PR campaign complaining about harm to users’ interests through content owner foot-dragging — use threat to get standard deal sign-up,” the suggestion read.

    A slide from a 2006 Google executives’ presentation showing they considered retaliating against content owners with a press campaign. From documents released April 15 by Viacom.


    Although a Viacom press statement yesterday accused Google of “explicitly advocating that Google use the threat of copyright theft to advance its business interests,” a closer read of the presentation in its entirety, and an understanding of the context in which it was made, suggests another interpretation: Google may have been considering the implementation of a new policy whereby content owners such as Viacom would have been more directly responsible for material that users may have uploaded, forcing them to more publicly prohibit the specific use of certain clips or segments of video. Imagine how Viacom might have looked if they had to post a public message saying, “You can’t watch clips from Iron Man yet,” on a big public display. Google would develop a “review tool” to help premium content partners scan the system for clips from Iron Man and other properties, but in the absence of action on the partners’ part, Google would compel the press to publish articles that paint partners as dinosaurs.

    Again, this is not what Google actually did — as we all know, Google acquired YouTube instead. But this examination of the strategy the company could have used to combat YouTube — a strategy which may very well have actually worked — demonstrates the kind of template that Google uses when framing a corporate strategy. This is a template we may very well see Google put to use today, as it considers whether to release the VP8 codec it acquired in the purchase of On2 Technologies, under an open source or royalty-free license.

    In a later slide from the 2006 presentation, Google executives consider the ramifications of three different copyright policy choices for Google Video (GV). One of them is the use of digital rights management technology for premium content. If it had taken that step, the slide suggested, GV’s content team would have been tasked with prompting partners including Viacom to make more viral content available, perhaps in shorter streams or promotional clips. Other slides, as well as other documents and presentations from the same period, make it clear Google was aware of partners’ concerns that Google was not in the business of promoting content, or featuring or spotlighting some shows for a limited time like a video rental store would…or like YouTube had started doing. But Google was also under the impression (possibly a very correct one) that if it threw away its organic advertising model in favor of a pre-scheduled promotional platform, it would fail to build an audience, and maybe even drive away its existing audience.

    So one alternative under consideration was the exact opposite of adding DRM: It considered letting clips happen, if you will. People would upload them, just as they do on YouTube. Under the right circumstances, though, those people could be doing the content owners a favor, especially if GV and those owners had a deal to present those clips anyway.

    But content owners tend to decide which clips they don’t want visible, after they’re already visible. In which case, a slide suggested, in the event of loosening content restrictions for clips, the company could “Support partners’ use of review tools,” and, “Reach out to non-partner content owners — actively promote review tool.”

    Problem is, beyond evangelism, no one could force content owners to use that review tool. So the plan would address that eventuality: “Increase staffing and/or resources to content acquisition, ops, and legal teams to handle complaints and potential litigation,” and, “Limit damage through public policy, investor relations, press and premium partner meetings.”

    The presentation also showed that Google carefully examined the ramifications of building a video system that forced content owners to explicitly “opt out” their own premium clips. A slide entitled “Potential results of changing copyright enforcement policies” conceded that, sure, traffic would improve as users found more and more stuff available, especially those choice “viral” clips. But that influx of users could muddy the waters for Google in determining which specific segments of interesting video could be developed for more refined tastes among small to moderate pockets of GV users. It was here that Google envisioned its real payoff: distinguishing itself from YouTube by providing not viral video, but very interesting video such as documentaries, symposiums, educational works, and stuff small groups of people would pay big money for. In the company’s metaphor, where paid premium content such as big movies would be the “head” and user-submitted videos of possums chasing squirrels would be the “tail,” this payoff segment was called the “torso.” Google reasoned that, if it opened up the floodgates for “tail” content that ended up including bits of “head” content, it would render itself less capable of generating user communities and social networks around “torso” content.

    Which, history revealed, was absolutely correct.

    In the eventuality of loosening the “tail,” Google executives reasoned they would need to mitigate the problem by “Modifying copyright protection through applying public pressure through increased collaboration with content owners and indirect pressure through press and public policy.” As a result, the executives predicted, “Some content owners sue Google” (again, very wise). Another potential problem would be certain smarter elements of the press detecting two conflicting messages from the company — one for loosening copyright for “tail” video content, and another for respecting copyright for all those literary copyright holders whose works were being digitized by Google’s ongoing project with the world’s major libraries.

    The bottom line could be affected, executives reasoned, as the company’s AdSense and other platform advertisers “Wish to avoid negative associations.” So contrary to Viacom’s public assertion that Google was advocating a thwarting of copyright policy to compel content owners to disgorge their clips before a maddening public horde, the evidence actually shows where Google executives logically weighed the negative ramifications of possible public policy changes.

    It also shows the beginning of the no-win scenario that eventually led Google to purchase YouTube rather than compete.


    Complete documents from the Viacom v. YouTube case are obtainable from this Viacom Web page.

    Copyright Betanews, Inc. 2010






  • With Silverlight 4 and Flash Catalyst, the RIA battle begins in earnest

    By Scott M. Fulton, III, Betanews


    Download Silverlight 4 RTM for Windows from Fileforum now.


    In recent years, most Web applications in widespread use have been developed with Web browsers as their platform. Here, one imagines Java advocates are already composing their complaint letters. But with Web resources bound to URLs, for most developers it’s made sense to utilize the functionality most commonly associated with URL-bound resources: HTML, JavaScript, and now the rapidly maturing technique built atop them, AJAX.
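    To make that browser-bound model concrete, here is a minimal sketch of the AJAX pattern described above, written in TypeScript for a browser page. The endpoint path and the Headline type are invented for the illustration and don't come from the article.

    ```typescript
    // Minimal AJAX pattern: the page stays put while data arrives from a URL.
    // The endpoint "/api/headlines" and the Headline shape are invented for this sketch.
    interface Headline {
      title: string;
      url: string;
    }

    function loadHeadlines(onDone: (items: Headline[]) => void): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/headlines"); // the resource is bound to a URL
      xhr.onload = () => {
        if (xhr.status === 200) {
          onDone(JSON.parse(xhr.responseText) as Headline[]);
        }
      };
      xhr.send();
    }

    // Usage: render the fetched items into an existing list element on the page.
    loadHeadlines((items) => {
      const list = document.getElementById("headlines");
      if (!list) return;
      list.innerHTML = items
        .map((h) => `<li><a href="${h.url}">${h.title}</a></li>`)
        .join("");
    });
    ```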

    The drive to move rich Internet applications (RIAs) off the browser got two big shots in the arm this week, with Adobe’s release of Creative Suite 5 on Monday, followed by Microsoft’s Tuesday formal release of Visual Studio 2010 and today’s formal release of Silverlight 4. While richer video is, of course, one component of both platforms, they’ve also developed new methodologies for designing simpler client-side Web apps that maintain data on more robust server-side applications.

    Just as information wants to be free (especially if you pay enough for it up front), Web apps by design are often constrained by the framework of the browser. Java developers always understood this, but only recently has Sun — and now Oracle — devoted serious effort to building a rich Internet application framework around Java, with JavaFX having exited beta in December 2008. Neither Java nor JavaScript nor XML, JavaFX is a declarative scripting language for building the components of an RIA front-end — its counterpart for Silverlight and .NET developers would be XAML.

    While new corporate parent Oracle has pledged both financial and emotional support for JavaFX, even its ardent supporters have commented that neither Sun nor Oracle has built a developers’ toolset for JavaFX that enables both rich and rapid app development. This is where both Microsoft and Adobe are racing to fill the app gap.

    Adobe Flash Catalyst converts a multi-layered graphic from Photoshop into a workable Web app front panel.  [Screenshot courtesy Adobe]

    Flash Catalyst is Adobe’s interaction development environment, designed to compete on a level more with Microsoft’s Expression toolset than Visual Studio. It’s for the designer who wants to lay out the tools with which the user operates the Web app, using the tool she most likely already uses for envisioning the Web app in the first place: Photoshop. Catalyst lets designers convert Photoshop layers into addressable graphic objects without coding. Expression has similar functionality, but it also relies on Photoshop to some extent, since Microsoft doesn’t have a competitive artistic design tool; this way, Adobe cuts off Microsoft at the pass.

    By addressing the need to transport envisioned elements easily to a fully operative state, Catalyst also bypasses the whole scripting route that JavaFX is still trying to evangelize Java developers to adopt with their hearts as well as their minds. Granted, there’s some underlying Flash scripting taking place in the background (just as Expression automatically creates the underlying XAML), but with Catalyst, the designer is shielded from that process entirely.

    Although one of Silverlight’s principal development tools was officially released Tuesday, Microsoft developers have already been actively using the Visual Studio 2010 Release Candidate (with its “Go-Live” licensing for those eager to deploy) since the early part of last year. The real surge forward for the Microsoft camp today has to do with the promotion of what had been called .NET RIA Services to the new WCF RIA Services (Windows Communication Foundation), acknowledging its usefulness to Silverlight as well.

    With this new WCF RIA Services support in Silverlight 4, released today, developers of so-called n-tier applications (distributed apps whose presentation, logic, and data layers run on separate tiers, often three of them) can engineer front-end client apps in Silverlight that don’t have to replicate the state of the application from the server just to pass control from one window to another. Rather, a new Silverlight 4 app can be developed more like Web pages, which are unaware of the context of other Web pages, and can request information about the running Web app from the server at startup, and over time as necessary.
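    Silverlight developers would express this against generated C# DomainContext classes; purely to illustrate the shape of the pattern, here is a sketch in TypeScript of a client that holds no authoritative application state of its own. The service URL and the Order type are hypothetical, not part of the WCF RIA Services API.

    ```typescript
    // Thin-client pattern: the client keeps no authoritative copy of application
    // state; it asks the server for the slice it needs, at startup and again
    // whenever a view requires fresh data.
    // The endpoint "/services/orders/pending" and the Order shape are hypothetical.
    interface Order {
      id: number;
      customer: string;
      total: number;
    }

    async function loadPendingOrders(): Promise<Order[]> {
      const response = await fetch("/services/orders/pending");
      if (!response.ok) {
        throw new Error(`Order query failed: ${response.status}`);
      }
      return (await response.json()) as Order[];
    }

    // A view renders whatever the query returns; moving to another view issues a
    // new query rather than handing over locally replicated state.
    async function showPendingOrders(): Promise<void> {
      const orders = await loadPendingOrders();
      for (const order of orders) {
        console.log(`#${order.id} ${order.customer}: $${order.total.toFixed(2)}`);
      }
    }

    showPendingOrders();
    ```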

    A sophisticated order management system appears within a Web browser framework (IE8) using Silverlight 4 and WCF RIA Services.  [Screenshot courtesy Microsoft.]

    In this example application from a Microsoft Hands-On Lab published in December 2009 (ZIP file available here), a sophisticated order management system can either run in its own window or, as shown here, within a Web browser. (Notice how this sample line-of-business app is asking the user’s permission to access the Clipboard. System Clipboard access is a new feature for Silverlight 4, though it comes with safeguards.) Silverlight 4 is handling the display component, but it’s also interfacing with WCF RIA Services so that the logical component of the app (written using a supported .NET language) can be crafted independently of the presentation layer.

    A multi-tier Web application using WCF RIA Services enables the entire Web client logic to be separated from both the presentation layer at one end, and the database layer at the other.

    The reason for doing that becomes clearer when you think cross-platform: With an n-tier application, you can separate the application’s core logic from the code used to display it. This way, you can craft multiple Silverlight-based clients for PCs, smartphones, or tablets, each of which may handle the layout differently from one another. The Web logic will be able to gather and manage data from the server as necessary, and format the results of queries so that each Silverlight client (or alternately, as this slide from a MIX 10 presentation suggests, each AJAX client) can lay out the presentation as it sees fit.
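    As a rough sketch of that separation (again in TypeScript rather than Silverlight’s C#, and with invented service and type names), the query-and-format logic can live in one shared module while each client contributes only its own layout:

    ```typescript
    // Shared logic tier: fetch and shape the data once; presentation code stays thin.
    // "/services/orders/summary" and OrderSummary are hypothetical names.
    interface OrderSummary {
      customer: string;
      total: number;
    }

    async function queryOrderSummaries(): Promise<OrderSummary[]> {
      const response = await fetch("/services/orders/summary");
      return (await response.json()) as OrderSummary[];
    }

    // Presentation tier A: a roomy desktop-style layout.
    function renderDesktop(rows: OrderSummary[]): string {
      return rows
        .map((r) => `${r.customer.padEnd(30)} $${r.total.toFixed(2)}`)
        .join("\n");
    }

    // Presentation tier B: a compact phone-style layout over the same data.
    function renderPhone(rows: OrderSummary[]): string {
      return rows.map((r) => `${r.customer}: $${r.total.toFixed(0)}`).join(" | ");
    }

    // Each client calls the same logic tier and supplies only its own layout.
    queryOrderSummaries().then((rows) => {
      console.log(renderDesktop(rows));
      console.log(renderPhone(rows));
    });
    ```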

    Both Adobe and Microsoft have made significant gains in the race to build the biggest and best RIA platform. That makes the job of catching up for Oracle and JavaFX even tougher.

    Copyright Betanews, Inc. 2010






  • Key senator gears Congress for a long fight to reform the FCC

    By Scott M. Fulton, III, Betanews

    A long-planned hearing on Capitol Hill to discuss the Federal Communications Commission’s Broadband Plan took on new meaning yesterday, a week after the DC Circuit Court ruled the Commission lacked the authority to implement net neutrality regulations. With a coalition of Internet business interests pleading with the FCC to declare itself the “cop-on-the-beat” for net neutrality under a different provision of US telecom law than it had been using, now Sen. Jay Rockefeller (D – WV), chairman of the Senate Commerce Committee, says the FCC may not need to take that step.

    In his remarks yesterday, Sen. Rockefeller told his committee he’s ready to begin the long, and undoubtedly arduous, process of changing the law to give the FCC the authority that the Broadband Plan assumed it had to begin with.

    “No doubt, this ruling adds to the complexities of the FCC’s task, but for me, two things are clear. First, in the near-term, I want the agency to use all of its existing authority to protect consumers and pursue the broad objectives of the broadband plan. Second, in the long-term, if there is a need to rewrite the law to provide consumers, the FCC, and industry with a new framework, I will take that task on,” stated Rockefeller.

    The FCC had been acting on what it thought was the mandate of Title I of the Telecommunications Act, last amended in 1996. Prior chairpersons of the Commission had concluded that it had ancillary authority under Title I, which covers information services, to regulate Internet practices. Title II covers telecommunications services, like telephone networks; and since the Clinton administration, all FCC leaders up to and including the current Julius Genachowski had distinguished Internet service from telecommunications service.

    Last week’s DC Circuit ruling stated that the FCC failed to prove its Title I case when ordering Comcast not to throttle BitTorrent traffic. Not waiting for the Commerce Committee chairman to make his case, Ranking Member Sen. Kay Bailey Hutchison (R – Texas) yesterday drew a line in the sand, warning the Commission that Republicans would take action if it were to suddenly jump tracks from I to II.

    “In my judgment, if the FCC were to take the action Chairman Genachowski and his colleagues appear to be considering, reclassifying broadband without a directive from Congress and a thorough analysis of the facts and the potential consequences to investment, the legitimacy of the agency would be seriously compromised,” remarked Sen. Hutchison yesterday. “I hope that we can take a step back to consider the consequences of such a decision and whether there are alternatives we can work together on to clarify the authority of the FCC while preserving an environment that encourages investment. I am confident we can find common ground, but that will not happen if the FCC takes this action.”

    For his part, Chairman Genachowski yesterday proceeded with his prepared remarks, emphasizing the Plan’s goals for building out broadband, and underscoring the US’ lagging success toward those goals compared to other countries. One of the Plan’s key goals concerns retooling the existing Universal Service Fund — created in 1996 to fund the expansion of telephone service to rural areas and small schools and libraries — for building out broadband as well. Once again, the FCC could only shift those priorities if it had existing authority, granted by Congress, to do so — something the Comcast ruling asserted it might not have.

    That fact forced the Chairman to add a footnote to his prepared remarks, though he lavishly embellished that footnote with every impending danger imaginable, including from terrorism and natural disaster: “Notwithstanding the decision last week in the Comcast case, I am confident that the Commission has the authority it needs to implement the broadband plan. Whatever flaws may have existed in the specific actions and reasoning before the court in that case, I believe that the Communications Act — as amended in 1996 — enables the Commission to, for example, reform universal service to connect everyone to broadband communications, including in rural areas and Native American communities; help connect schools and rural health clinics to broadband; take steps to ensure that we lead the world in mobile; promote competition; support robust use of broadband by small businesses to drive productivity, growth, job creation and ongoing innovation; protect and empower all consumers of broadband communications, including through transparency and disclosure to help make the market work; safeguard consumer privacy; work to increase broadband adoption in all communities and ensure fair access for people with disabilities; help protect broadband communications networks against cyber-attack and other disasters; and ensure that all broadband users can reach 911 in an emergency.”

    Apparently, the Chairman wrote his remarks without any idea beforehand of what Sen. Rockefeller would say in his. So he ended up being pre-empted by Rockefeller’s assertion that the Broadband Plan was long on goals, but short on methods.

    “The report has over 200 recommendations. But it takes no action. It is long on vision, but short on tactics,” the Commerce Committee chairman said. “So I am going to challenge the FCC. I am going to challenge the FCC to make the hard choices that will help bring broadband to every corner of this country. Putting ideas on paper is not enough. Just seeking comment on a slew of issues is not enough. It’s action that counts.”

    Copyright Betanews, Inc. 2010






  • Google may face legal challenges if it open-sources VP8 codec

    By Scott M. Fulton, III, Betanews

    Last February, at the time Google completed its purchase of On2 Technologies, the video technology patent holder and maker of the VPx series of video codecs, the Free Software Foundation posted an open letter urging Google to release the latest version, VP8, to the open source community. Though Google has been pretty vocal since then about what it has perceived as the bright prospects for On2 under its wing, the volume was turned down to low on Tuesday, immediately after the digital television news service NewTeeVee cited anonymous sources as saying Google intends to do just as FSF asked.

    Google declined official comment on the story to Betanews, but the tone of the spokesperson’s declination speaks volumes, especially from this characteristically forthcoming company: “We’re excited to be working with the On2 team to continue to improve the video experience on the Web, but we have nothing to announce at this time.”

    Theoretically, opening up the VP8 codec to the community would enable not only Google but other browser manufacturers, including Mozilla and Opera, to include the codec inside their browsers rather than as add-ons that users must download from other sources. Such a development could enable the originally intended use of the <VIDEO> element in HTML 5: a way for individuals to freely share Web video with some assurance that its viewers will be able to see and hear it.
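    From the page’s side, the HTML5 <VIDEO> element already lets a script ask the browser whether its built-in codecs can decode a given format before choosing a source. The sketch below is in TypeScript; canPlayType is a real HTML5 API, but the VP8 MIME string and the file names are placeholders, since no container or type string had been settled for an opened VP8 at the time.

    ```typescript
    // Probe the browser's built-in decoders, then pick a playable source.
    // canPlayType returns "", "maybe", or "probably".
    const probe = document.createElement("video");

    // The VP8 entry is a placeholder; the H.264 and Theora strings are the usual ones.
    const candidates = [
      { label: "VP8 (placeholder)", mime: 'video/webm; codecs="vp8"', src: "clip.webm" },
      { label: "H.264", mime: 'video/mp4; codecs="avc1.42E01E"', src: "clip.mp4" },
      { label: "Theora", mime: 'video/ogg; codecs="theora"', src: "clip.ogv" },
    ];

    const playable = candidates.find((c) => probe.canPlayType(c.mime) !== "");

    if (playable) {
      probe.src = playable.src; // decoded natively, no add-on required
      probe.controls = true;
      document.body.appendChild(probe);
      console.log(`Using ${playable.label}`);
    } else {
      console.log("No built-in codec available; fall back to a plug-in player.");
    }
    ```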

    Based on the information Betanews has already been receiving since last August, when the On2 buyout deal was first announced, it appears possible, at the very least, that if Google were to attempt to release a version of On2’s VP8 codec under an open source license, rights holders and patent owners could mount a legal challenge. Although On2 is a patent holder for three principal video compression technologies, all of which were intentionally presented as alternatives to other proprietary technologies such as H.264, codecs are developed around several bedrock technologies. Companies that either have claims to those technologies, or at least believe they do, could very well file suit if they believe Google was never licensed to give away methodologies they contend they have created, and thus own.

    Today, Betanews asked a video technology business source whether our theory held water — whether technology owners could legally challenge Google, or other users, if it attempts to offer a free license for technology without the owners’ consent or license. The source replied affirmatively. While Google may very well own rights to a proprietary version of VP8 for its own sale and licensing purposes, outside of On2’s own patents, if Google and other users are not licensed under applicable patents, the “patent-free” state of that codec could be challenged in court, Betanews was told.

    The key to the viability of whatever move Google makes, Betanews research has determined and our source is confirming, is the nature of the license under which the codec is offered. Rather than an open source license, which entitles users to also acquire the source code, Google may instead offer the codec royalty-free. You could use it, but you couldn’t take it apart. Also last February, the licensing body for H.264 and AVC, MPEG LA, pledged to extend the term of free licensing to 2015 to individuals who used H.264 codecs for which royalties were already paid, for producing freely distributed Internet videos. That move could potentially set an example for Google to do something similar, providing free Web users with a way to use VP8 in a manner that rights holders may not object to.

    But there’s already historical precedent for a company attempting to offer a royalty-free license for a codec whose underlying technologies it didn’t completely own. In 2005, Microsoft offered its WMV9 technologies as the royalty-free standard VC-1. As Microsoft soon discovered, WMV9 was not “patent-free” outside of Microsoft, and its underlying technologies were not royalty-free either. Today, Microsoft’s service agreement on VC-1 includes a notice saying, among other things, that AVC — one of the bedrock encoding technologies claimed by other rights holders — may be used in the VC-1 codec, under a license granted to Microsoft by MPEG LA. That license covers Microsoft when it, in turn, licenses the use of VC-1’s three essential encoding technologies, for non-commercial purposes.

    “The software may include H.264/MPEG-4 AVC and/or VC-1 decoding technology,” the agreement reads, prior to the addition of a paragraph the agreement itself says MPEG LA requires.

    In an e-mail interview with StreamingMedia.com last February, MPEG LA CEO Larry Horn explained why such clauses continue to exist in license agreements. The newly extended H.264 agreement, for example, charges royalties for all deployments of H.264, but refrains from charging royalties for its free use on the Internet.

    “Virtually all codecs are based on patented technology, and many of the essential patents may be the same as those that are essential to AVC/H.264,” Horn told reporter Jan Ozer. “Therefore, users should be aware that a license and payment of applicable royalties is likely required to use these technologies developed by others, too. MPEG LA would consider offering additional licenses that would make these rights conveniently available to the market under a single license as an alternative to negotiating separate licenses with individual patent holders.”

    Again that month, the principal developer of the free x264 encoder for H.264 video, Jason Garrett-Glaser, reminded readers of his own blog of the history of Microsoft’s attempt to offer a royalty-free codec: “A few years ago, Microsoft re-released the proprietary WMV9 as the open VC-1, which they claimed to be royalty-free. Only months later, dozens of companies had come out of the woodwork claiming patents on VC-1. Within a year, a VC-1 licensing company was set up, and the ‘patent-free’ was no more. Any assumption that VP8 is completely free of patents is likely a bit premature. Even if this does not immediately happen, many companies will not want to blindly include VP8 decoders in their software until they are confident that it isn’t infringing. Theora has been around for six years and there are still many companies (notably Nokia and Apple) who still refuse to include it! Of course this attitude may seem absurd, but one must understand who one is marketing to. One cannot get rid of businesspeople scared of patents by ignoring them.”

    Technically, a new “licensing company” was not set up, though Microsoft’s licensing arrangement did give users the impression that one sprang up overnight. But even with all that, Garrett-Glaser went on, there may be one completely unexpected reason why a Google attempt to offer VP8 under an open source license may fail miserably: You might not want it even if it is free.

    “VP8 is proprietary, and thus even if opened, would still have many of the problems of a proprietary format. There may be bugs in the format that were never uncovered because only one implementation was ever written (see RealVideo for an atrocious example of this),” he wrote. “And given the quality of On2’s source releases in the past, I don’t have much hope for the actual source code of VP8; it will likely have to be completely rewritten to get a top-quality free software implementation.”

    Copyright Betanews, Inc. 2010






  • Analyst roundtable reunion: The last remake of Palm

    By Scott M. Fulton, III, Betanews

    Though the news is relatively fresh that Palm Inc. has been negotiating with China’s Huawei Technologies about a possible buyout, the word from Investor’s Business Daily sources is that these negotiations have actually been ongoing for at least two months. That nothing has come of them since February may be the most important, and potentially distressing, news of all.

    In light of that realization, Palm is suddenly in need of yet another comprehensive makeover to save its flagging image. Suggestions from the field include relatively simple ones from Betanews contributor Carmi Levy — that it should keep its Pre Plus and Pixi Plus hardware, and focus on building up its applications base — and the completely opposite suggestion from widely respected industry analyst Dr. Gerry Purdy, who has published his viewpoints on mobile technologies in what’s now called the MobileTrax newsletter, since 1986.

    In a stunner of a report this morning, Dr. Purdy suggested that Palm’s next comeback include dumping its platform’s current form factors, in favor of one with a wider screen and keyboard, and with a very familiar brand:
    “Drop Pre. Bring back Treo. It’s a known brand. It represents everything good that Palm did for years to lead the SmartPhone market,” he wrote. “With this branding, we could all then say that the ‘Pre’ was the ‘Precursor’ to the return of Treo. Plus there’s a new definition for the three key service elements in Treo: 1) phone, 2) apps, and 3) services such as social networking.”

    Dr. Purdy went on to suggest that Palm partner with a major player in the field, one of Lenovo’s or Motorola’s or Nokia’s standing. Then it could wipe the slate clean for its renewed Treo brand, purge the webOS from its systems, and deploy a remodeled form of MeeGo, the Linux derivative formed from the merger of Intel’s Moblin with Nokia’s maemo. Using that as a launching pad, he suggested Palm then launch an apps store for MeeGo apps to compete against Apple — something which Intel has said MeeGo would permit individual vendors to do.

    “There are a lot of excellent assets at Palm. They have a great team (investors, board, management, staff & partners). They have a successful history,” the MobileTrax analyst concluded. “To many, they still represent the notion of ease-of-use that enabled the Treo to succeed so well in the market. These assets need to be preserved before they slip away.”

    The precedent for some of what Dr. Purdy suggests was set in September 2005. At that time, in its biggest attempt to date to remake itself and to rediscover relevance against the surge of BlackBerry devices, Palm found a partner in Microsoft. It would build Windows Mobile 5 devices under the Treo brand. Up to that point, Treo had been the vehicle for the Palm OS, whose developer PalmSource the company had just sold off to a Japanese browser maker called Access.

    The week of that announcement, I covered the story for Tom’s Hardware with an analysts’ roundup. Anyone who knows that old publication, as well as Betanews, knows all the participants. First to proclaim the imminent death of Palm OS as a result of the Microsoft deal was a certain Info-Tech Research analyst.

    Carmi Levy, September 27, 2005: The most obvious early casualty of the Microsoft/Palm and Intel/RIM announcements is clearly the Palm OS. PalmSource, which stumbled in delivering version 6, now finds itself fighting an uphill battle to convince its former owners — and the mobile/wireless market in general — that Palm OS will remain relevant in the middle of these powerful new hardware/software/carrier alliances. Frankly, I don’t see that happening. The market has already voted with its feet: Palm OS-based devices are in freefall, and the conditions are such that this trend will not be reversed.

    Palm OS as we know it is dead. PalmSource has announced plans to base future revisions on a Linux core. While this may help the firm carve out a low-end niche, its long-held hope of appealing to the enterprise market is now history. IT’s needs are being — and will be — nicely met by the remaining players.

    Speaking with Betanews today, Carmi essentially said Palm’s problem now is a magnification of Palm’s problem then. The hardware isn’t the tarnished part, but the brand.

    “Palm’s salvation will not be new and improved hardware because success in this market is no longer decided by who has the best phone. Palm could introduce the world’s best mobile phone tomorrow and it still wouldn’t change the company’s fate,” Levy told us. “Encouraging the company to build a successor to Apple’s App Store is like telling an on-the-ropes automaker like Chrysler to build a hybrid that delivers better mileage than Toyota’s Prius and a more engaging driving experience than a BMW. Palm can’t simply conjure up a magically successful online app store — indeed, its initial effort has been a dismal failure because developers won’t commit to a platform with a questionable future. Palm is out of options in this regard.”

    Disagreeing directly with Carmi in 2005 was none other than Dr. Gerry Purdy, who made the case that selling off PalmSource to Access was necessary for the Palm OS’ survival:

    Dr. Gerry Purdy, September 27, 2005: I don’t think that Palm is getting squeezed out. I do think that PalmSource will migrate itself to become a value added services layer on top of Linux which will become the dominant OS in the 3G phone market. What happened is that the OS has become less relevant in the wireless handheld world but value added services has become more valuable. It was actually very wise for PalmSource to acquire China MobileSoft as that gets them into the Linux game and then Access was smart to buy PalmSource because it allows their browser technology to expand outside of Japan into the Linux world as well.

    What does all this mean? I think you’ll see Palm become a major handset manufacturer in the wireless handheld space using Access as their supplier of Linux and the value added services (such as user interface, core applications, file management system, Web access browser, content delivery, etc.) for their broad 3G offerings that will work on multiple carriers.

    Next: Ross Rubin weighs in…and where was Joe?

    Warning us in 2005 that the problem before PalmSource was elevating Palm OS from merely “good enough” to outstanding, in the face of new competition on the Treo, was a certain Jupiter Research analyst:

    Joe Wilcox, September 27, 2005: I wouldn’t say the Treo is exceptional, but it crosses the “good enough” threshold for many people. Another distinguishing feature is the PalmSource software. It’s a different operating system than some of the Windows stuff out there — looks different, feels different. Now we have this Palm device running Windows Mobile software, and there are going to be a lot of devices out there running Windows Mobile software, and as we saw in the Windows space, as demonstrated [there], we may see greater difficulty to differentiate over time.

    Most of the devices that Palm ships run PalmSource software. So I think it would be way, way premature to write PalmSource off here. These two companies were one at one time, and they have a long-standing relationship. Just the fact that the Windows Mobile was announced for one device says something about the extent of the commitment. Yes, Palm is licensing Windows Mobile, but it’s toe-in-the-water stuff. PalmSource has a whole leg in the water, at least. PalmSource has to make its future clearer now. We just had the acquisition, and then right afterwards, we had the major partner going with a rival’s software. So it’s now up to PalmSource to really articulate a clear road map for its products, and to begin delivering on some new capabilities as soon as possible. At the least, it wants to hold on to its major licensee, and in the best-case scenario, extend its software to other devices.

    Writing for us a few weeks ago, Wilcox suggested the first layer of Palm’s problem is one of perception — a kind of self-fulfilling prophecy that keeps folks from investing in the platform because so many other analysts suggest it’s not worth investing in.

    “There could be more positive perceptions about Palm, if anyone really looked,” Wilcox wrote. “WebOS is simply one of the best smartphone operating systems ever developed. Palm marketing is aggressive and compelling. Then there are the carrier deals, which keep coming with the bad news. Today, Palm announced that AT&T will distribute Palm Pre and Pixi. That puts three of the four major US carriers in Palm’s palm. But who is going to buy a Palm smartphone with all the negative crap being said about the company’s future? Who wants to buy a product today from a company that might not be around tomorrow?”

    Foreseeing a convergence of mobile phone operating systems around Linux, or perhaps around an emerging brand in the Linux space, was NPD analyst Ross Rubin, now the company’s executive director of industry analysis for consumer technology. In 2005, Rubin suggested that handset brands start distinguishing themselves through applications and functionality, presenting themselves to consumers as something of unique value, rather than dumping themselves onto a collective platform as PC manufacturers did in the 1990s.

    Ross Rubin, September 27, 2005: Having Windows Mobile is probably of more interest to enterprises than consumers, but unless there are other benefits that come with that, such as greater stability. I think the difference is getting back to this set of capabilities that many of these operating systems can provide. Unlike in the PC world, where you really had Office driving a lot of Windows sales, because those are the applications people wanted to use, you haven’t really gotten that “killer app” for cell phones. To the extent that there are compelling applications, there are a number of ways that those could be brought to the platform. It’s not as simple as just having an application on top of an operating system. In the PC world, you have things like Brew, Java. New applications could be delivered over the air. So in some ways, it’s a more dynamic environment where the operating system and the application are not as strongly tied together in the real world as they are on the desktop.

    Carriers have a vested interest in having a broad portfolio of handsets, with different features. I think also that the nature of mobile phones is that they’re more personal than a lot of PC devices are, and again, why do we purchase PCs? It’s really to run different applications…I think one way of looking at it is, maybe the best comparison is not necessarily to the PC itself, which has a certain set of features driven by volume in the marketplace, that certain components tend to win out, but really more sort of like the Web. No two Web experiences are really identical. No two Web sites are really identical. I think in many ways, what carriers are looking to do is take that kind of diversity and make it available to consumers through different kinds of applications, and optimizing different kinds of handsets. Carriers certainly have an interest in keeping healthy competition alive among different handset manufacturers.

    Speaking with Betanews today, Rubin said much the same thing: He believes the task before Palm is to continue to distinguish itself, to lay out a clear set of functions and actions that define what a consumer can do with a Palm phone — to brand the function, not just the device.

    Responding to Dr. Purdy’s suggestion this morning, Rubin told Betanews, “Yes, Palm could do any number of things to improve the product; a larger screen device would help show off webOS, but the company instead chose to go for a smaller, more comfortable device with a Zen aesthetic resulting in greater differentiation. Palm’s enemy is not so much Apple, Google, or Microsoft as it is the clock. As [Palm CEO] Jon Rubinstein has noted, the company needs to reach the scale of large manufacturers to achieve ultimate profitability. It is launching now on multiple carriers, which will help deliver scale, but it is not happening fast enough. This is why so many of the rumored acquirers are large-scale manufacturers — either in the handset space such as Motorola, RIM, Nokia, or HTC, or primarily outside (Lenovo and Dell). Making changes such as merging webOS with MeeGo — itself a recently merged operating system with very low volume — would require a luxury of time and attention Palm doesn’t have at the moment.”

    And responding to my contention that perhaps Sprint may not have been the optimum carrier, from a consumer standpoint, on which to launch the new Pre in 2009, Rubin said, “There’s little use to Palm in looking back at Sprint in hindsight. Palm played the best hand available to it, and Sprint did support the Pre with extensive promotion. It now has its handsets on the top three of four carriers in the US as well as in Europe, and needs to educate and explain its advantages to the consumer while continuing to court the developer.”

    Copyright Betanews, Inc. 2010






  • After one economic pothole, Intel is wary of another

    By Scott M. Fulton, III, Betanews

    Intel CEO Paul Otellini

    What saved Intel’s neck during the worst part of the last economic downturn was the Atom processor, the heart of netbooks that started selling well as consumers’ budgets tightened. Now that the 2008-09 dip is over, and even businesses’ budget belts are loosening, the company’s attention returns to the server side of the equation.

    In Intel’s quarterly conference call yesterday evening (Betanews thanks Seeking Alpha for the transcript), CEO Paul Otellini pointed to cloud computing and virtualization as trends that are empowering a resurgence in business sales…and helping the company to overcome an apparent tapering off in consumers’ interest in netbooks.

    “There are a number of things going on. The move to cloud we think is very good. Not everything will go to cloud but the shift to cloud base services is good for Intel. The shift to virtualization is good for Intel,” Otellini told a UBS analyst. “If you plot out the growth in data traffic and network traffic, and the kinds of things modern servers are doing, that growth curve is faster than the refresh rate for old versus new equipment. We see a very robust scenario for servers going forward.”

    But that’s as much flavor or color as the CEO was willing to provide about “going forward.” Usually at the end of the first calendar quarter, Intel has no problem starting to provide limited guidance about the remainder of the year, especially the third quarter leading into the holiday season. Despite repeated requests from analysts on yesterday’s call, neither Otellini nor CFO Stacy Smith would provide anything remotely approaching the definition of a forecast going into the second half of the year.

    “I think we won’t talk about the second half at this point in time except to say we are putting in place sufficient capacity to handle…any demand scenario you could imagine,” Otellini told a Barclays Capital analyst. Referring to the company’s ongoing transition from the 45 nm to the 32 nm process, and how much its fabrication facilities should continue to press on assembling 45 nm parts to meet growing demand, he added, “We are assuming continued growth and units over the course of the year. We are going to be ramping 32nm as fast as possible. So the only question we have from a supply standpoint is how much 45 we keep on and what is our assembly test loading capabilities. We will put some bucks around those to make sure we have sufficient capacity.”

    It’s not that consumers don’t want mobile processors. In fact, CFO Smith conceded, Intel didn’t keep as many of its 32 nm Arrandale-series mobile processors on hand as it would have liked. What’s happening now appears to be a shift in consumer demand away from netbooks — the life raft of the downturn — back toward traditional notebook PCs. That’s bad news for Linux proponents who were looking for Intel Atom-based netbooks to be the launching ground for platforms such as MeeGo, the forthcoming merger of Intel’s Moblin with Nokia’s Maemo, as well as Google’s Chrome OS, the Linux extension of its browser technologies. But it’s good for Intel overall, Smith noted, because it actually helps Atom production costs come down, driving up margins for netbook sales even as those numbers subside. And, of course, it’s really good for Arrandale.

    “Because of the increases we saw over the course of the quarter in demand on the new mobile platform Arrandale I wasn’t as able to get as much inventory in place as I had hoped,” said Smith. “If you look at it between dollars and units, what you see is, units are up a bit more than dollars but still not as much as I would want as we kind of move into the second quarter, and based on the strength we are seeing and the ramp of these new products. If you deconstruct the inventory between the processes, what we saw was more than 100% of the increase in inventory and more than 100% of the increase in units was on 32nm. So we got a little bit in place there. Everything else was down. As I think about Q2 my hope is I can build some inventory into the second quarter in anticipation of a second half that is higher.”

    How much higher? Again, Intel won’t go there. But in some departments, you actually can’t get much higher: First quarter gross margin for Intel was a stunning 63.4%, contributing to a $2.4 billion net income quarter on $10.3 billion of revenue. This was despite a 19% drop in revenue from the Atom segment year-to-year, to $355 million. Calling the income picture for the quarter “288% annual growth” wouldn’t really be accurate; neither Intel nor anyone else really multiplies its income like that year after year. That incline is mostly due to coming out of the last economic pothole. As hardware analyst firm iSuppli tells us, global PC shipment numbers for Q1 2010 are likely to be 17.1% higher than for Q1 2009, but seasonality will level off that growth significantly through the spring.

    So this last quarter was not really a boom, as much as the deafening ringing sound left in one’s ears after the bust wears off. That’s why Intel’s not providing much guidance besides maintaining gross margin at around 64% throughout the year, peppered with a pinch of hope that, once the dust settles, businesses realize how old their client PCs have suddenly become.

    “The average fleet of notebooks is four years old out there,” remarked Otellini. “The average fleet of desktops is five years old. You are getting to the point where as CIO’s are feeling a bit better about their business it makes economic sense to swap these out just from an ongoing cost of ownership standpoint.”

    Copyright Betanews, Inc. 2010






  • Skype and colleagues to FCC: Declare yourself fit to regulate the net

    By Scott M. Fulton, III, Betanews

    Last week’s staggering defeat to Comcast in a landmark DC Circuit Court decision left the US Federal Communications Commission stripped of any “ancillary authority” it thought it had to regulate the practices of Internet service providers. As of now, it isn’t exactly clear just which government agency does have that authority.

    Rather than wait for Congress to make a decision on the matter — an event which may, arguably, never happen at all — a coalition of major Internet stakeholders, including Skype, Google, eBay, Amazon, Netflix, TiVo, and Facebook, is calling on the FCC to take action. Quite literally, they want the Commission to open a proceeding declaring its intention to fill the gap left by the court’s removal of FCC authority…with FCC authority.

    In other words, the FCC may not be best suited to regulate the Internet under current US law…but no other candidates exist.

    “We think that time is of the essence here,” stated Markham Erickson, Executive Director of the Open Internet Coalition, in a press conference Tuesday morning. “While we’re not opposed to Congress getting involved in trying to address what happened with the Comcast decision, at the same time, the FCC needs to move quickly to open a proceeding to classify high-speed Internet access services as telecommunications services. In fact, that’s been the norm at the FCC for most of the history of essential communications platforms — that they’re treated as telecommunications services. If the FCC were to do that, it would be a fairly straightforward process of reversing the 2002 [Brand X] cable modem order, and it would re-establish the FCC’s legal authority, allowing it to move forward on the Broadband Plan, and the network neutrality rulemaking.”

    What Erickson is asking for is a complete U-turn — for the FCC to effectively declare Internet communications the same, from a legal standpoint, as telephone communications. The FCC steered clear of that interpretation in 2002 when, under the leadership of then-Chairman Michael Powell, it declared the type of service delivered to customers via cable modem as an information service, distinct and different from a telecommunications service.

    As the 2002 declaration reads (PDF available here), “In this proceeding, as well as in a related proceeding concerning broadband access to the Internet over domestic wireline facilities, we seek to create a rational framework for the regulation of competing services that are provided via different technologies and network architectures. We recognize that residential high-speed access to the Internet is evolving over multiple electronic platforms, including wireline, cable, terrestrial wireless and satellite. By promoting development and deployment of multiple platforms, we promote competition in the provision of broadband capabilities, ensuring that public demands and needs can be met. We strive to develop an analytical approach that is, to the extent possible, consistent across multiple platforms. For the reasons discussed…we conclude that cable modem service, as it is currently offered, is properly classified as an interstate information service, not as a cable service, and that there is no separate offering of telecommunications service.”

    That declaration effectively freed the FCC from having to resolve the issue of how, or whether, broadband carriers must be forced to open their services up to multiple Internet access providers. The same laws that forced the Bell System to open up its long distance lines to MCI may have applied in compelling AT&T to offer Internet service from a menu of competitors. One of those competitors would have been a small firm called Brand X, whose name will forever grace the history books as the subject of the Supreme Court’s 2005 “Brand X Decision.” Overturning the appeals court, the nation’s highest court sided with the FCC, in a decision authored by Justice Clarence Thomas, with a dissent written by Justice Antonin Scalia.

    Scalia’s dissent was classic Scalia, complete with frequent alliteration, fluent vocabulary, and a pizza analogy. The point the justice made was that, from a consumer’s perspective, whether he receives service from a service provider or from a carrier, he receives service. The difference would be about as trivial as whether a pizza restaurant delivers food to customers’ doors, or hires a cab driver to do it instead.

    Since the delivery service provided by cable (the broad-band connection between the customer’s computer and the cable company’s computer-processing facilities) is downstream from the computer-processing facilities, there is no question that it merely serves as a conduit for the information services that have already been “assembled” by the cable company in its capacity as ISP. This is relevant because of the statutory distinction between an “information service” and “telecommunications.” The former involves the capability of getting, processing, and manipulating information…The latter, by contrast, involves no “change in the form or content of the information as sent and received.” …When cable-company-assembled information enters the cable for delivery to the subscriber, the information service is already complete. The information has been (as the statute requires) generated, acquired, stored, transformed, processed, retrieved, utilized, or made available. All that remains is for the information in its final, unaltered form, to be delivered (via telecommunications) to the subscriber.

    This reveals the insubstantiality of the fear invoked by both the Commission and the Court: the fear of what will happen to ISPs that do not provide the physical pathway to Internet access, yet still use telecommunications to acquire the pieces necessary to assemble the information that they pass back to their customers. According to this reduction…if cable-modem-service providers are deemed to provide “telecommunications service,” then so must all ISPs because they all “use” telecommunications in providing Internet functionality (by connecting to other parts of the Internet, including Internet backbone providers, for example). In terms of the pizzeria analogy, this is equivalent to saying that, if the pizzeria “offers” delivery, all restaurants “offer” delivery, because the ingredients of the food they serve their customers have come from other places; no matter how their customers get the food (whether by eating it at the restaurant, or by coming to pick it up themselves), they still consume a product for which delivery was a necessary “input.” This is nonsense. Concluding that delivery of the finished pizza constitutes an “offer” of delivery does not require the conclusion that the serving of prepared food includes an “offer” of delivery. And that analogy does not even do the point justice, since ” ‘telecommunications service’ ” is defined as “the offering of telecommunications for a fee directly to the public.”…The ISPs’ use of telecommunications in their processing of information is not offered directly to the public.

    What Erickson and his Coalition are requesting is for the FCC under Chairman Julius Genachowski to declare Internet service a “Title II” service under the existing Telecommunications Act, and effectively concede Justice Scalia was correct after all — a step which he says actually would not be unprecedented.

    Next: The FCC’s “spare tire”…


    The Open Internet Coalition’s Markham Erickson believes that the FCC can salvage its ability to execute the Broadband Plan proposed earlier this year by Chairman Julius Genachowski, if it can declare itself the regulator of merit for Internet service under a different legal theory than the one struck down last week by the DC Circuit Court, in a ruling favoring Comcast.

    “I would almost look at reclassification as sort of a spare tire that lets the Commission move forward on its agenda, and Congress can always make a comprehensive fix to the Telecom Act at the same time,” said Erickson in response to a question from Betanews. He also suggested, though, that Congress may not act at all, citing the fact that it took at least ten years for it to debate the last set of changes to the Telecommunications Act under the Clinton administration.

    Declaring the Internet a Title II service, Erickson suggested, would be “an elegant solution, in that it is a narrow approach that returns the FCC to narrowly regulating just the on-ramps to the Internet. At the same [time], I think, it protects the edge-based providers and the Internet as a whole from being put under regulation under a broader theory, and a more uncertain theory of ancillary authority under Title I [of the Telecommunications Act].

    “I actually think that the Comcast decision, in many ways, was a blessing,” he continued, “in that it’s really saying that the Commission needs to jettison the amorphous concept of ancillary authority, because it’s not clear exactly how far that extends into the Internet. If it refocuses on just the last-mile facilities of the Internet access provider, they would be on solid legal foundation. It’s also something that the Supreme Court mentioned in the Brand X Decision, that the FCC could of course revisit its decision and reverse its decision, and if they did so, they would be on solid legal foundation. It was Justice Scalia who dissented from affirming the FCC’s ruling in the cable modem order, saying quite clearly, the facilities [of] the Internet access provider are separate offerings that are telecommunications services, not information services.”

    Betanews asked Erickson, wouldn’t such a move by the FCC simply delay, or at least overlook, the inevitable necessity of Congress to make new law with regard to who should regulate the Internet, or parts of the Internet?

    “The Comcast court [DC Circuit] was clear that they were very skeptical of the use of Title I ancillary authority to regulate Internet access providers. I would make a distinction between regulating the Internet and regulating the last-mile facilities of the Internet access providers,” he responded. “But if the FCC were to classify these services as telecommunications services — which many of them were until they were reclassified as information services under Title I — under Title II, the Commission would have a solid legal foundation.”

    The DC Circuit order, Erickson noted, not only leaves the door open for the FCC to make that declaration, but suggests that it could still do so — not only leading the horse to water, but shoving its nose into the lake. “If the Commission revisits that [2002 Brand X] order, they would be on solid legal foundation. It’s not to say that Congress may not want to update the [Telecommunications] Act in and of itself. I don’t think it’s entirely necessary. I think that these are essential communications platforms, and telecommunications services under Title II have historically applied to essential communications platforms — that is, two-way communications where the facility provider that [is] allowing for those two-way communications to happen isn’t interfering in the communications.”

    Representing Skype’s interest in the affair (certainly a telecommunications service in the technical sense, and in some countries, the legal sense as well), its Senior Director of Government and Regulatory Affairs, Christopher Libertelli, told the press conference today that the bigger, traditional carriers such as Verizon and AT&T aren’t going to start regulating themselves — despite their public promises — in the absence of leadership from the FCC, or from somebody.

    “Carriers have long engaged in dialog around this idea of a voluntary code of conduct that would, I guess, substitute for a government policy,” Libertelli remarked. “And I think it’s interesting, because after the Comcast case, government has no policy in this space. It lacks subject matter jurisdiction, as the FCC lacks subject matter jurisdiction to enforce its Internet policy statement. So we should think about these efforts to do voluntary, industry-led enforcement mechanisms against this vacuum. The carriers know that it’s not sustainable for the chairman of the FCC to not have subject matter jurisdiction in this important area, and have no policy.

    “The key is enforceability,” he continued. “My job is to protect the Skype community. If you’re a Skype user, my job is to bring your concerns to the regulator should the carriers’ worst behavior block or degrade your conversation. So at some level, this whole Title I/Title II debate is about, where do consumers go? Title II is a mechanism that would provide consumers with a place to go to the FCC, to bring to regulators’ attention conduct that harms consumers. If this notion of a voluntary process lacks that essential enforceability feature, I think it’s going to fall short of establishing a real government policy in this area.”

    Copyright Betanews, Inc. 2010






  • After buying its own client, Twitter toys with sending ads to clients

    By Scott M. Fulton, III, Betanews

    In the history of anything whatsoever, timing is rarely, if ever, coincidental. More often these days, however, the strategy behind it looks confusing. Just days before it’s scheduled to hold its developers conference in San Francisco (tomorrow and Thursday), Twitter revealed that it is in the process of either acquiring or building applications that will compete directly with the Twitter clients these developers will be taught how to build.

    On Friday, Twitter revealed it was in the midst of purchasing Tweetie, believed to be the most popular Twitter client for Apple’s iPhone. That product will become “Twitter for iPhone.” That same day, the service released a Twitter client for BlackBerry; and it’s that second event that let developers know, as Arlo Guthrie once put it, that there’s a movement.

    This morning, we learned what that movement is towards: In a blog post, the company revealed its first genuine advertising platform, in the form of so-called Promoted Tweets. Taking the same general format as an ordinary tweet, these sponsored tweets will begin appearing individually (not collectively) at the top of results for Twitter searches.

    But perhaps more importantly, they will appear inside Twitter clients for mobile devices — not immediately, but rather after a first phase rollout on the Web. During that time, Twitter wants to learn how users interact with these entities, which will be real tweets, after all. It will be that interaction, founder Biz Stone says, which determines whether a Promoted Tweet remains visible to users for any length of time.

    “We strongly believe that Promoted Tweets should be useful to you,” Stone wrote this morning. “We’ll attempt to measure whether the Tweets resonate with users and stop showing Promoted Tweets that don’t resonate. Promoted Tweets will be clearly labeled as ‘promoted’ when an advertiser is paying, but in every other respect they will first exist as regular Tweets and will be organically sent to the timelines of those who follow a brand. Promoted Tweets will also retain all the functionality of a regular Tweet including replying, Retweeting, and favoriting. Only one Promoted Tweet will be displayed on the search results page.”

    That sets up the format for Twitter ads on the Web, which Stone promises will remain limited and, in some sense, community-defined. As for dedicated mobile clients, that’s to be determined: “Before we roll out more phases, we want to get a better understanding of the resonance of Promoted Tweets, user experience and advertiser value. Once this is done, we plan to allow Promoted Tweets to be shown by Twitter clients and other ecosystem partners and to expand beyond Twitter search, including displaying relevant Promoted Tweets in your timelines in a way that is useful to you.”

    It didn’t take long for developers to express their skepticism, especially on and about a medium whose entire premise is self-expression. Last Sunday evening, Twitter platform team leader Ryan Sarver found himself quelling developers’ uproar in a Usenet/Google Groups post. There, Sarver pointed to a chain of dots connecting elements of the company’s burgeoning ecosystem that perhaps developers, so intent upon their singular goals of producing Twitter clients and competing with one another, may be too wrapped up in themselves to notice.

    That chain of dots begins, curiously enough, with the iPhone: “We love the variety that developers have built around the Twitter experience and it’s a big part of the success we’ve seen. However when we dug in a little bit we realized that it was causing massive confusion among user’s who had an iPhone and were looking to use Twitter for the first time. They would head to the App Store, search for Twitter and would see results that included a lot of apps that had nothing to do with Twitter and a few that did, but a new user wouldn’t find what they were looking for and give up. That is a lost user for all of us. This means that we were missing out an opportunity to grow the user base which is beneficial for the health of the entire ecosystem.”

    The history of computing is replete with examples of growing markets with multiple players that all experience “shakeouts,” where players converge with or acquire one another, or else drop out. Sarver here appears to be outlining such a market model in “fast forward” mode, where everyone would be better off if we just had the shakeout now. Twitter would dutifully take the lead, as the one player left, should everyone else be happy to oblige. So the chain of dots runs like this: Too many developers lead Twitter users to confusion when searching in the App Store, because so many things are called Twitter. Or something like Twitter, such as “Tweetie.” Solution: Rename Tweetie to something with “Twitter” in it, and have it promoted to the top of Apple’s search results. Trim the fat off of the ecosystem. Result: Happier customers, and a swelling audience.

    The notion that you control your ecosystem by keeping it nice and trim resonates well with venture capitalist Mark Suster. In a blog post over the weekend, Suster wrote about the possibility that a non-controlled Twitter client could easily branch out and become other services’ clients as well: “Think about the creative tension. If a single Twitter client could amass a large volume of users (lets say 20M+) then the relationship with the consumer is divided between the client and Twitter itself. Ultimately to become defensible the client application would want to diversify its ‘stream’ so would start supporting Facebook, MySpace, and perhaps even IM products.

    “And aside from having great market power (the main reason for Twitter to own the client and the customer) advertising is one of the primary reasons that I believe Twitter needs to own the client applications,” Suster continued. “As people consume Twitter on mobile clients they are almost definitionally [sic] not doing so on Twitter.com. How can you offer up advertisements as Twitter if you don’t control the place where people consume their Tweets? Kind of obvious, huh?”

    Copyright Betanews, Inc. 2010






  • Mono’s de Icaza: Novell MonoTouch to forge ahead on iPhone OS despite 3.3.1

    By Scott M. Fulton, III, Betanews

    An amendment to the terms of Apple’s iPhone OS Developers’ Agreement, called Section 3.3.1 and uncovered last week, would expressly prohibit developers from building apps for iPhone, iPod Touch, or iPad unless they are created exclusively for that platform, using Apple’s tools, and linking to no APIs except Apple’s. That “clarification” threatens the existence of cross-platform support for the iPhone platform, not only from Adobe Flash (whose apps can be devised to run on iPhone) and Oracle Java (same story), but also from development tools whose apps don’t have to be jury-rigged to run on iPhone.

    Those include Unity3D, the 3D gaming platform originally for Mac OS that dropped Java in 2008 for Novell’s Mono; and MonoTouch, Novell’s extension of its .NET Framework-compatible platform for iPhone OS. In a notice on MonoTouch’s home page, the development team expressed optimism that Apple would find MonoTouch to be in compliance with the company’s new terms.

    “If Apple’s motives are technical, or are intended to ensure the use of the Apple toolchain, MonoTouch should have little difficulty staying compliant with the terms of the SDK. MonoTouch runs only on Mac OS X, and integrates tightly with XCode and the iPhone SDK,” the notice reads. “Applications built with MonoTouch are native applications indistinguishable from native applications, only expose Apple’s documented APIs and uses a rigorous test suite to ensure that we conform to the iPhoneOS ABIs and APIs.”

    But that optimism is based on hope that Apple will write a note back to the Mono team, which it evidently has not done yet.

    Even if MonoTouch ends up being in compliance with Apple’s terms from a legal standpoint, it probably stands in defiance of the spirit of Apple’s motives. In an e-mail message to a popular Mac tools developer whose work may be affected by the new rule, Apple CEO Steve Jobs made it clear he believes intermediate software (presumably including runtimes such as Java and Mono) creates one more layer of headaches for developers, separating them from the ability to properly use the hardware as its architects intended.

    Betanews asked the Mono team’s leader, Miguel de Icaza (also the co-developer of the GNOME graphical environment for Linux), whether he felt Apple could interpret the continuation of MonoTouch as a defiance of Apple’s intentions.

    “We believe that there might be some valid concerns from a UI perspective in some cases if you use cross-platform tools that isolate the developer from the underlying platform,” de Icaza told Betanews. “It seems that Steve Jobs alluded to that in an alleged e-mail exchange that is making the rounds around the net.

    “From that perspective, MonoTouch is in compliance: MonoTouch is not really a cross-platform tool in that it does not try to offer a layer on top of Apple APIs, so developers get full direct access, without a middleman to the native APIs. This has the unfortunate effect that code written for MonoTouch cannot be ported to other platforms, but it delivers native UI experiences which is what iPhone and iPad users have come to expect.”

    In other words, as de Icaza sees it, the extra steps that a developer must take to make an app built with MonoTouch, or ported to MonoTouch from .NET, work with iPhone, may only be taken by individuals who intentionally write an app for iPhone. That should be the modicum of respect paid to the platform that Jobs is looking for.

    A January 2009 Ars Technica article by Ryan Paul explains how Mono had been getting past Apple’s rules and regulations up to now: For iPhone, it uses ahead-of-time (AOT) compilation, pre-compiling the assemblies into native code up front rather than having a just-in-time (JIT) compiler translate them on the device, a necessity since the iPhone does not permit code to be generated at runtime.

    As the Mono Project explains, “AOT compilation works in two stages. The first stage consists of precompiling the assemblies. As of Mono 1.2, this is a manual process that individual deployments must do. The second stage is automatic, the Mono runtime will automatically load any precompiled code that you have generated.”
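
    As a rough illustration of those two stages, here is a minimal sketch in C of a host program embedding the Mono runtime in an “AOT only” mode, so that only precompiled native images are loaded and nothing is generated at runtime. This is not MonoTouch’s actual startup code; the assembly name, the build commands, and the reliance on mono_jit_set_aot_only are assumptions made for the example.

    /*
     * Hypothetical sketch: embed the Mono runtime and refuse to JIT,
     * relying entirely on ahead-of-time (AOT) compiled images.
     * The assembly name and commands below are invented for the example.
     *
     * Stage 1 (manual, assumed):  mono --aot=full MyApp.exe
     * Stage 2 (automatic): the runtime loads the precompiled code itself.
     */
    #include <mono/jit/jit.h>
    #include <mono/metadata/assembly.h>

    int main(int argc, char *argv[])
    {
        /* Disallow runtime code generation; AOT images only. */
        mono_jit_set_aot_only(1);

        /* Initialize the runtime and load the managed entry assembly. */
        MonoDomain *domain = mono_jit_init("MyApp.exe");
        MonoAssembly *assembly = mono_domain_assembly_open(domain, "MyApp.exe");
        if (!assembly)
            return 1;

        /* Run the assembly's Main(), then shut the runtime down. */
        int retval = mono_jit_exec(domain, assembly, argc, argv);
        mono_jit_cleanup(domain);
        return retval;
    }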

    Early in MonoTouch’s history, Novell representatives stated their intention was to bring mainstream developers who already work in C# (created by Microsoft), and who never would work in Objective-C (the object-oriented language Apple favors), into the iPhone development field. Training developers on a new language, they said, was an expensive process that could be avoided altogether through the use of what they have characterized as a cross-platform tool.

    But Jobs’ motivation here is clearly to capitalize on the iPhone platform’s size and traffic level, by forcing developers to pay attention to it directly if they are to play ball with Apple at all. Doesn’t that motivation clearly exclude MonoTouch by definition?

    “Our intention is to work with apple to get a clear understanding of their concerns, and we continue to believe that MonoTouch provides significant value to iPhone developers by liberating developers from manual memory management, but also liberating them from a whole class of bugs and problems that are endemic to C/C++/Objective-C programming and allow them to focus on innovating and building more powerful applications,” the Mono Project’s Miguel de Icaza told us.

    “There is a downside to this approach: It means that developers who want to target Windows 7 or Android with our upcoming MonoDroid have to rewrite all of their UI, and all of their native integration with the platform as we strive to provide C# access to whatever the underlying platform offers, and we do not abstract that code. As I said at my presentation at MIX, this means that developers would have to split their code in ‘Engine/Core’ and ‘UI’ code if they ever want to reuse any of their code across phones.”

    Copyright Betanews, Inc. 2010






  • Crossing swords over cross-platform: Apple vs. Adobe Flash, C#, and Mono

    By Scott M. Fulton, III, Betanews

    It should come as no surprise to anyone that Apple is not a cross-platform tools company, nor a supporter of cross-platform technologies that would threaten to nullify Apple’s baked-in advantages — only during the years Steve Jobs was not in charge had the company even considered opening up its platforms. So the strategy behind the company’s reinforcement of its iPhone OS 4.0 licensing terms, first discovered by Daring Fireball blogger John Gruber last Thursday, is both obvious and unchanged: to direct the course of iPhone/iPod Touch/iPad development traffic directly, exclusively, and entirely through Apple’s channel.

    States the newly added paragraph: “3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).”

    There are several shockers to this paragraph, assuming they are all to be read very literally (which, in the case of Apple, has never not been the case). The clincher, however, is the notion that anything you make to run on iPhone OS must be developed for iPhone OS, including Web apps. If you develop a Web app, it must be made for execution by iPhone’s WebKit, even though WebKit is an open source rendering engine. If you develop a stand-alone app, even though it may use a standard language such as C or C++, it must link to nothing else but iPhone’s APIs.

    So you may use some standard tools, just in a different, prescribed way. For developers, that’s like saying you can’t use standard tools.

    “The key is where they say, ‘Applications must be originally written in Objective-C, C, C++.’ Take a pause and think about what that ‘originally’ really means,” wrote music media developer Hank Williams (no relation) on Thursday. “Developers are not free to use any tools to help them. If there is some tool that converts some Pascal, or Ruby, or Java into Objective-C it is out of bounds, because then the code is not ‘originally’ written in C. This is akin to telling people what kind of desk people sit at when they write software for the iPhone. Or perhaps what kind of music they listen to. Or what kind of clothes they should be wearing. This is insane.”

    The iPhone development channel is Apple’s to direct as it will. Nevertheless, there’s considerable outrage over how the company’s efforts may counteract those of legitimate supporters who had been working to steer mainstream phone app and Web app development Apple’s direction.

    The keynote was sounded Friday, when Adobe Platform Evangelist Lee Brimelow, in a blog post that simultaneously made clear he was speaking for himself and bore the official Flash logo, flipped Apple the bird: “What they [Apple] are saying is that they won’t allow applications onto their marketplace solely because of what language was originally used to create them. This is a frightening move that has no rational defense other than wanting tyrannical control over developers and more importantly, wanting to use developers as pawns in their crusade against Adobe. This does not just affect Adobe but also other technologies like Unity3D…Speaking purely for myself, I would like to make it clear what is going through my mind at the moment. Go screw yourself Apple.”

    For his part, Gruber commented Thursday that he understands the reasoning behind the company’s move, and in a well-reasoned analysis, concluded that iPhone users may end up winning in the end: “I can see two arguments here. On the one side, this rule should be good for quality. Cross-platform software toolkits have never — ever — produced top-notch native apps for Apple platforms. Not for the classic Mac OS, not for Mac OS X, and not for iPhone OS. Such apps generally have been downright crummy. On the other hand, perhaps iPhone users will be missing out on good apps that would have been released if not for this rule, but won’t now. I don’t think iPhone OS users are going to miss the sort of apps these cross-platform toolkits produce, though. My opinion is that iPhone users will be well-served by this rule. The App Store is not lacking for quantity of titles.”

    In October 2009, Adobe Flash engineer Adrian Ludwig demonstrated a Flash application appearing in Apple’s App Store, an accommodation Apple will no longer make.


    Someone who knows from personal experience how Steve Jobs thinks on the matter is former Apple products division President (later founder of Be, Inc.) Jean-Louis Gassée. In a blog post Sunday evening that employed one of his trademark metaphors (you just have to read it for yourself), Gassée also made clear he understood exactly where Jobs was coming from, and that he doesn’t look so insane: “Steve Jobs has seen enough in his 34 years in the computer business to know, deeply, that he doesn’t want to be at the mercy of cross-platform tools that could erase Apple’s competitive advantage…Does anyone mind that Jobs won’t sacrifice the truly strategic differentiation of the iPhone platform on the altar of cross-platform compatibility? Customers and critics don’t. They love the end-result.”

    But if there’s anyone who absolutely knows 100% of the time where Steve Jobs is coming from and where he’s going, it’s Steve Jobs. Usually uncommunicative with the press, Jobs did take the time to respond by e-mail to Greg Slepak, whose company Tao Effect makes utility software for Mac. In an exchange that began with Jobs’ signaling his appreciation of John Gruber’s post, Slepak wrote to Jobs, “From a developer’s point of view, you’re limiting creativity itself. Gruber is wrong, there are plenty of [applications] written using cross-platform frameworks that are amazing, that he himself has praised. Mozilla’s Firefox just being one of them. I don’t think Apple has much to gain with 3.3.1, quite the opposite actually.”

    To which Jobs responded: “We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.”

    One of the developers whose key product falls under the category to which Jobs referred, and was thus the target of his criticism, is Miguel de Icaza, whose Mono platform from Novell extends a .NET Framework-like runtime, and the ability to write in C#, to both Linux and Macintosh. The MonoTouch package extends some of that capability to the iPhone platform. De Icaza provided some comments to Betanews late this morning, which we’ll present shortly.

    Copyright Betanews, Inc. 2010






  • The big change coming to Safari 5: Kernel-level multi-processing

    By Scott M. Fulton, III, Betanews

    Apple has been challenging Google on many fronts this week — first with its mobile platform, then with its advertising platform. Earlier today, its developers fired the first volley on the battle’s third front, releasing the first public code for the next WebKit rendering and processing kernel that will likely drive the Safari 5 browser.

    With Google Chrome using a reworked form of WebKit, the Apple team did something that perhaps any other free and open source developer would be publicly stoned for doing, but which Apple might just have the savvy to get away with: It openly one-upped another developer’s open contribution.

    “WebKit2 is designed from the ground up to support a split process model, where the Web content (JavaScript, HTML, layout, etc) lives in a separate process,” wrote Apple developer Anders Carlsson to WebKit’s public mailing list yesterday. “This model is similar to what Google Chrome offers, with the major difference being that we have built the process split model directly into the framework, allowing other clients to use it.”

    The “process split” model to which Carlsson refers is the architecture that enables processes spawned by the browser, including add-ons and Web apps, to be run as separate processes in the operating system while still being protected by the browser’s “sandbox.” Google’s Chromium team developed the first such model in working form for its Chrome browser.

    How the developers of the Chromium open source components of Google's Chrome browser perceive the components of their software stack.

    But it was the Chromium team that tried one-upping Apple first, by extracting just the WebKit rendering engine from its open source project files and replacing its JavaScript interpreter with V8. That may have been a smart move from a performance standpoint at the time. However, in implementing its innovative multi-process model, the Chromium team split the rendering code into two components: a single process host, and a multi-process-capable agent. The two components were designed to communicate with one another via proxy, as Chromium’s developers first explained: The renderer and rendering host jointly comprise, they said, Chromium’s “multi-process embedding layer. It proxies notifications and commands across the process boundary. You could imagine other multi-process browsers using this layer, and it should have dependencies on other browser services.”

    We could imagine it, certainly; but sharing open source concepts often involves doing something more than merely imagining. The WebKit team that originated the components Chromium split into parts has imagined something different: It foresees moving the user interface components into the multi-process realm, and then enabling APIs from other applications to communicate with those forked processes individually. That way, conceivably, a single new kernel can drive multiple browser tabs whose processes reside on different CPU cores.

    How the developers of the WebKit components of Apple’s Safari browser perceive the components of their software stack.

    “Notice that there is now a process boundary, and it sits below the API boundary,” reads a document published by Apple’s WebKit team yesterday. “Part of WebKit operates in the UI process, where the application logic also lives. The rest of WebKit, along with WebCore and the JS engine, lives in the Web process. The Web process is isolated from the UI process. This can deliver benefits in responsiveness, robustness, security (through the potential to sandbox the web process) and better use of multicore CPUs. There is a straightforward API that takes care of all the process management details for you.”

    Unlike the Chromium team, the WebKit team goes on, it has a responsibility to provide a framework for others to explore and use for their own purposes. So if it builds a multi-process framework, it must do so in such a way that other developers (even including Google) could make use of it. The clients of that framework, however, have no such responsibility. WebKit gently chided Google for having developed Chrome under strict secrecy (something I suppose Apple knows nothing about).

    “That was an understandable choice for Google — Chrome was developed as a secret project for many years, and is deeply invested in this approach,” reads WebKit’s wiki today. “Also, there are not any other significant API clients. There is Google Chrome, and then there is the closely related Chrome Frame. WebKit2 has a different goal: We want process management to be part of what is provided by WebKit itself, so that it is easy for any application to use. We would like chat clients, mail clients, Twitter clients, and all the creative applications that people build with WebKit to be able to take advantage of this technology. We believe this is fundamentally part of what a Web content engine should provide.”
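
    To make the split-process idea concrete, here is a minimal sketch in C using POSIX fork and pipes: a “UI” process proxies a command across a process boundary to a separate “content” process and reads back a reply. It is a generic illustration of the model being described, not WebKit2’s or Chromium’s actual code; the message format and the process roles are invented for the example.

    /*
     * Generic illustration of a split-process model: the parent acts as
     * the "UI" process, the child as the "web content" process, and a
     * pair of pipes stands in for the proxy layer that carries commands
     * and replies across the process boundary. Not the WebKit2 API.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int to_content[2], to_ui[2];            /* one pipe per direction */
        if (pipe(to_content) < 0 || pipe(to_ui) < 0)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {
            /* Content process: receive a command, pretend to parse and
             * lay out the page, then report back to the UI process. */
            char cmd[256];
            ssize_t n = read(to_content[0], cmd, sizeof(cmd) - 1);
            if (n > 0) {
                cmd[n] = '\0';
                char reply[300];
                snprintf(reply, sizeof(reply), "rendered: %s", cmd);
                write(to_ui[1], reply, strlen(reply));
            }
            _exit(0);
        }

        /* UI process: send a command across the boundary and wait for
         * the reply. A crash or hang in the content process would not
         * take the UI (or other tabs' processes) down with it. */
        const char *command = "load http://example.com/";
        write(to_content[1], command, strlen(command));

        char reply[300];
        ssize_t n = read(to_ui[0], reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("UI process received: %s\n", reply);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }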

    That element which distinguishes WebKit’s development philosophy from that of Chrome and Chromium (at least in Apple’s eyes) may very well draw a new contour around the type of program — or rather, the type of platform — that Safari may yet become. Rather than having a tab for Twitter and a tab for Facebook and a tab for iTunes, a future Safari for all platforms (Mac, iPhone, iPad, Windows) could include a kind of “embedded desktop,” or Web-top, where independent applications may reside. They might not appear to run under the context of the browser at all, and if you looked inside each of these processes, you might consider them independent too, capable of being accessed independently through API function calls. But a proxy/stub relationship may connect these processes to the WebKit2 core.

    That would go a long way towards solving the iPad’s single-tasking problem. The new architecture might also provide a kind of extended platform for something else Apple launched this week: its iAd revenue-sharing advertising system. Imagine, if you will, an advertisement that runs outside the context of the browser. It would not even have to run in the context of the Web page; instead, it could be delivered through one of the applications to which the Safari user subscribes. It might not even be something you can block, at least not through the ordinary means to which Firefox and IE users are already accustomed. It could even be a potential new revenue stream (or, more accurately, a river) for iTunes.

    While you’re imagining that…we’ll be right back.

    Copyright Betanews, Inc. 2010






  • Unfazed, FCC plods ahead with Broadband Plan, starts a flame war with Verizon

    By Scott M. Fulton, III, Betanews

    In a pair of blog posts since the DC Circuit Court of Appeals’ finding Tuesday that the Federal Communications Commission lacked the statutory authority to tell Comcast how to manage traffic on its broadband network, the FCC demonstrated it had officially joined the Internet era by making the dispute into a flame war.

    No, the Court did not revoke the FCC’s natural authority to regulate the Internet industry, the Commission stated yesterday. However, it may have removed the FCC’s epaulets and badge, along with its right to serve as what General Counsel Austin Schlick called “the cop-on-the-beat for 21st Century communications networks.”

    “Does the FCC still have a mission in the Internet area? Absolutely. The nation’s broadband networks represent the indispensable infrastructure for American competitiveness and prospects for future job creation, economic growth, and innovation,” Schlick wrote. “The Court did not adopt the view that the Commission lacks authority to protect the openness of the Internet.”

    Actually, it did. Specifically, the DC Court took the FCC to the proverbial woodshed for presuming that when Congress — or, more accurately, specific congresspersons — set policy guidelines rather than made legislation, the Commission could assume those guidelines to be a congressional mandate. The Court cited a 1976 case concerning the National Association of Regulatory Utility Commissioners (NARUC), where the same DC Court overturned the same FCC’s authority to regulate how utility companies could make use of the then-nascent digital capabilities of cable TV lines to report, for example, electricity consumption. The FCC could not, the Court found, regulate the use of a medium outside of its mandate for communications. The Court also cited three other significant cases, including some in which it found in the FCC’s favor, which are introduced in Betanews’ story yesterday.

    …Unlike the way it successfully employed policy statements in Southwestern Cable and Midwest Video I, the Commission does not rely on section 230(b) [of the Communications Act of 1934] or section 1 [of the portion of the Act that created the FCC] to argue that its regulation of an activity over which it concededly has no express statutory authority (here Comcast’s Internet management practices) is necessary to further its regulation of activities over which it does have express statutory authority (here, for example, Comcast’s management of its Title VI cable services). In this respect, this case is just like NARUC II. On the record before us, we see “no relationship whatsoever”…between the Order [regulating Comcast] and services subject to Commission regulation. Perhaps the Commission could use section 230(b) or section 1 to demonstrate such a connection, but that is not how it employs them here.

    Instead, the Commission maintains that congressional policy by itself creates “statutorily mandated responsibilities” sufficient to support the exercise of section 4(i) ancillary authority. Not only is this argument flatly inconsistent with [the four key cases establishing ancillary authority, see Betanews’ Thursday story], but if accepted it would virtually free the Commission from its congressional tether. As the Court explained in Midwest Video II, “without reference to the provisions of the Act” expressly granting regulatory authority, “the Commission’s [ancillary] jurisdiction…would be unbounded.”…Indeed, Commission counsel told us at oral argument that just as the Order seeks to make Comcast’s Internet service more “rapid” and “efficient”…the Commission could someday subject Comcast’s Internet service to pervasive rate regulation to ensure that the company provides the service at “reasonable charges”…Were we to accept that theory of ancillary authority, we see no reason why the Commission would have to stop there, for we can think of few examples of regulations that apply to Title II common carrier services, Title III broadcast services, or Title VI cable services that the Commission, relying on the broad policies articulated in section 230(b) and section 1, would be unable to impose upon Internet service providers. If in Midwest Video I the Commission “strain[ed] the outer limits of even the open-ended and pervasive jurisdiction that has evolved by decisions of the Commission and the courts”…and if in NARUC II and Midwest Video II it exceeded those limits, then here it seeks to shatter them entirely.

    This is easily the strongest language to date from the DC Court on the limits of the FCC’s regulatory powers. Essentially, the Court is saying that the FCC cannot accept Congress’ request for it to build a Broadband Plan for the nation (as well as to decide what “broadband” really means anyway) as justification for telling Comcast it can’t throttle BitTorrent downloads.

    Which is why yesterday afternoon’s response from the FCC’s Schlick is so astonishing. Schlick’s post flatly states that the Court’s ruling has only limited effect on the Broadband Plan, which he characterizes as a new kind of mandate. And just who granted the Commission that mandate? Why, Congress, of course, in its request for it to produce the Broadband Plan:

    In 2009, Congress directed the agency to develop a plan to ensure that every American has access to broadband. Just three weeks ago, the Commission released its National Broadband Plan. The Plan contains more than 200 recommendations for bringing high-speed service to underserved individuals and communities, and using broadband to promote American competitiveness, education, healthcare, public safety, and civic participation.

    The Comcast/BitTorrent opinion has no effect at all on most of the Plan. Many of the recommendations for the FCC itself involve matters over which the Commission has an “express statutory delegation of authority.” These include critical projects such as making spectrum available for broadband uses, improving the efficiency of wireless systems, bolstering the use of broadband in schools, improving coordination with Native American governments to promote broadband, collecting better broadband data, unleashing competition and innovation in smart video devices, and developing common standards for public safety networks.

    Next: After Comcast, the FCC sets its sights on Verizon…

    In a ceremonial flexing this morning of a muscle the Court said the FCC shouldn’t have, Commission Chief of Staff Edward Lazarus posted a blog entry taking Verizon CEO Ivan Seidenberg to task for comments he made last Tuesday in a speech to the Council on Foreign Relations. In recent months, the FCC has suggested the presence of a looming spectrum crisis, citing recent comments from attorneys and executives of Verizon’s rival AT&T, warning of an apocalyptic event some call the exaflood. In that terrible time, demand for bandwidth would become so huge that the Internet would literally run out of spectrum like a desert lake runs out of water.

    The Commission had been leveraging those warnings to make its case for potentially reclaiming bandwidth from US broadcasters and repurposing it for wireless Internet communications. For their part, broadcasters argue they may still make use of unused spectrum allocated to them, perhaps for becoming Internet service providers themselves.

    But in his speech Tuesday, Verizon’s Seidenberg said there was no such crisis on this, or potentially any, horizon. He used that proclamation as leverage for Verizon’s ongoing argument that technology problems are best resolved by market forces rather than by regulation.

    “Now, of course, if I took the self-serving approach, it would be okay, screw the broadcasters. Let’s get their spectrum and we can put it to use in our wireless and cellular business or broadband business,” said Seidenberg in response to a question from the audience. “My reaction is going to surprise you. I don’t think the FCC should tinker with this. I think the market’s going to settle this. So in the long term, if we can’t show that we have applications and services to utilize that spectrum better than the broadcasters, then the broadcasters will keep the spectrum.

    “Cable companies have bought spectrum over the last 10 or 15 years that’s been lying fallow. They haven’t been using it,” the Verizon CEO continued. “So here the FCC is out running around looking for new sources of spectrum, and we’ve got probably 150 MHz of spectrum sitting out there that people own that aren’t being built on. I don’t get that. This annoys me…Not to leave the broadcasters out of the debate, there are lots of issues that we have with retransmission and things of that nature we need to solve. But basically, confiscating the spectrum and repurposing for other things, I’m not sure I buy into the idea that that’s a good thing to do.”

    But is there a spectrum crisis looming? the questioner persisted. Sure, responded Seidenberg, if we decide to put the equivalent of a transmission tower in everyone’s back pocket: “If video takes off, could we have a spectrum shortage in five or seven years? Could be, but I think that technology will tend to solve these issues. And…I happen to think that we’ll advance fast enough that some of the broadcasters will probably think, let me cash out and let me go do something different. So I think the market will settle it. So I don’t think we’ll have a spectrum shortage the way this document [the Broadband Plan] suggests we will.”

    The FCC’s Lazarus saw Seidenberg as somewhat of a turncoat, abandoning what he had thought was a team effort to prove a spectrum crisis existed. Citing prior efforts by Verizon to promote the identification of new spectrum that could be allocated for broadband, Lazarus wrote, “The recent statements by Verizon’s CEO are rather baffling. The fact is, Verizon played a major role in building an overwhelming record in support of more mobile broadband spectrum, consistently expressing its official view that the country faces a looming spectrum crisis that could undermine the country’s global competitiveness…The National Broadband Plan record contains widespread agreement and a solid foundation of factual evidence on the need for the FCC to pursue policies that would free up 500 MHz for mobile broadband by 2020. We hope to work with Verizon and other companies across the communications sector on ways to achieve the important goal of ensuring that the United States has world-leading mobile broadband infrastructure.”

    In other words, hopefully Verizon will come back around and rejoin the Plan, writes Lazarus. In recent months, the FCC had used the spectrum crisis as a way of tying the Internet to the concept of the airwaves (which the FCC does regulate statutorily) as opposed to private pipelines (which it does not). If it could focus public attention on the airwaves and wireless as the home base of the Internet, then it could prove it has more than ancillary authority to serve as its “cop-on-the-beat.” But that argument may have already been choked off earlier this week.

    Any authority for the FCC to regulate the Internet directly will need to be mandated by Congress, and not with existing law but with new law. Any effort to produce that new law will inevitably confront opponents sounding the battle cry against “big government.” Market forces, unfortunately, will not resolve that looming crisis.

    Copyright Betanews, Inc. 2010






  • End of the road in sight for Windows on Itanium

    By Scott M. Fulton, III, Betanews

    It was perhaps one of the most drawn-out, painful launches in Intel’s long history: the introduction last February of Tukwila, the latest generation of its Itanium 64-bit processor architecture. Not everyone in the Itanium Solutions Alliance hung on for the five-year ride, with Unisys having been its most prominent drop-out last year, citing competitor HP’s dominance in the field. Microsoft held on for the entire stretch; but last week, the company announced it would not lend its support to whatever the generation after Tukwila might become.

    In a blog post last Friday, Windows Server Senior Technical Product Manager Dan Reger said that his company will continue to support existing Itanium architecture, including Tukwila, for another eight years. But Windows Server 2008 R2 will be the last version of the operating system to support IA-64.

    After a delay of officially two years, and unofficially longer, the dual- and quad-core 65 nm Itanium 9300 series was launched last February 8 to somewhat muted fanfare. The newest Itaniums ended up following, not leading, many of Intel’s latest process innovations, with the company’s 32 nm Xeon 5600 series, including a six-core model, premiering just last month. Itanium can no longer claim power savings as a selling factor, with the new Xeons also boasting 130W TDP at clock speeds superior to the new Itaniums’.

    Five years ago, the drive toward industry-wide adoption of IA-64 architecture was started by the Itanium Solutions Alliance, steered somewhat by Intel (of course) as well as HP, the architecture’s most prominent vendor. But Microsoft was also a very vocal charter member — at the time, seen as more influential than charter member Red Hat. The reason was that, from 2001 up until 2005, the factor keeping enterprises from investing in Itanium was the lack of a clear migration path. Windows compatibility was perceived as the substance of a bridge.

    Here’s how I covered the Itanium Solutions Alliance in a January 2006 article for Tom’s Hardware:

    The Alliance’s efforts go to the heart of what had been characterized as Itanium’s chief deficiency: not its architecture, which, after a rough start, actually has proven itself very capable. It is an altogether different platform, not because it’s truly 64-bit, but because it would have its developers embrace a concept called Explicitly Parallel Instruction Computing (EPIC). It is the industry’s first CISC-based multithreading architecture, based on a simple concept to explain but a difficult one to implement: the idea that when processes fork and parallelism begins, it’s because the code of the program tells it to. It’s this concept which most starkly distinguishes Itanium from x86 (x64) architecture, which actually has no parallelism principles of its own. The ability of multithreaded x86 processors to fork processes into separate cores is based mainly on their ability to ascertain for themselves when such forking is permissible. One of Intel’s most ambitious tests of this capability has been hyperthreading, which is a parallelism technique for single-core processors. But HT is an experiment that will probably come to an end as dual-core and multicore processors become mainstream. As they do, they will undoubtedly bring Intel’s version of 64-bit x86 architecture (EM64T) as well as AMD’s (AMD64) into the mainstream even in high-performance categories.

    I was wrong about one thing: Hyperthreading does live on in Intel’s current Core microarchitecture. Nevertheless, the fork in the road we saw in early 2006 has only grown wider today, and three major influences have changed the scenario for Itanium over that period:

    • Linux is finding a comfortable home among high-end servers, because it’s lightweight and has lower up-front costs (assuming businesses don’t buy premium support licenses), and Intel is embracing Linux more and more.
    • The hyperbolic growth in virtualization means that enterprises don’t have to run Windows Server just to enable their clients to run Windows 7. Exchange, Office Communications Server, and SharePoint remain important, but any more, Windows Server exists to support them rather than the other way around.
    • Heterogeneity in server design means businesses that require high-performance computing servers can now purchase Itanium in more limited quantity, leaving Windows Server to handle high-availability segments of the network.

    In some ways, many of the factors standing in the way of the Itanium Solutions Alliance’s goals five years ago have actually been removed through the natural course of the industry’s evolution. But that evolutionary course has not exactly favored Microsoft. So it’s noteworthy that, in announcing its “transition” away from Itanium support last Friday, Microsoft’s Reger (or at least, the blog post from Reger, whose opening had one tone and its closing another) took a parting shot at Itanium, aiming squarely at what the Alliance had touted as one of the architecture’s key features: scalability.

    “The natural evolution of the x86 64-bit (x64) architecture has led to the creation of processors and servers which deliver the scalability and reliability needed for today’s ‘mission-critical’ workloads,” Reger wrote. “Just this week, both Intel and AMD have released new high core-count processors, and servers with 8 or more x64 processors have now been announced by a full dozen server manufacturers. Such servers contain 64 to 96 processor cores, with more on the horizon. Windows Server 2008 R2 was designed to support the business-critical capabilities these processors and servers make available. It supports up to 256 logical processors (cores or hyper-threading units), so it’s ready for the ever-increasing number of cores.”

    Or to put it another way, six is greater than four. Of course, Reger’s comment (perhaps intentionally) ignores one important element: After Tukwila finally brought Intel’s QuickPath Interconnect to Itanium (the equivalent of AMD’s DirectConnect architecture, developed years earlier), and with Itanium’s brand of parallelism stated explicitly in the code itself rather than inferred by the processor as with x64, three dual-core IA-64 processors should show little performance difference from one six-core processor.

    In a May 2005 interview, HP’s representative to the Alliance, Stephen Howard, told me IA-64 architecture was noteworthy for having “very high-RAS features: reliability, availability, scalability. Those hooks are built into the chip architecture itself, and the operating systems can make use of those features, and the individual companies and software vendors can build on the most highly reliable system. So you get up into the mainframe replacement and high-end RISC replacement classes of servers, and anywhere that this kind of equipment is used is going to be an ideal spot for Itanium.”

    At present, Microsoft remains a member of the Alliance, and may conceivably continue to do so until the next edition of Windows Server, perhaps in two years’ time. But one of the Alliance’s goals, Howard told me five years ago, was to create a single point of contact for customers wanting to purchase Itanium hardware and software all in one stop. That isn’t exactly what happened, but perhaps Intel’s continued embrace of Linux, along with Red Hat’s continued membership in the Alliance, could actually promote that goal in Microsoft’s absence.

    Copyright Betanews, Inc. 2010






  • Obama vs. Congress on radio’s royalties exemption

    By Scott M. Fulton, III, Betanews

    Perhaps the word that best describes the atmosphere in Washington this year is “showdown,” with respect to every political issue imaginable — even those around which there’s technically no disagreement. The debate continues over whether terrestrial radio broadcasters (the ones with the big transmitters and the public airwaves) should begin paying the same performers’ royalties as Internet broadcasters like Last.fm and Pandora, even though it often seems drowned out by the noise over the just-passed health care reform act, and ongoing legislation on jobs protection and banking reform.

    Now, the US Commerce Dept. has come down squarely on the side of the recording industry and rights holders. In a letter sent April 1 to Senate Judiciary Committee Chairman Patrick Leahy (D – Vt.), Commerce Dept. General Counsel Cameron F. Kerry urged Congress to pass legislation that would apply Pandora’s royalties to radio stations.

    “At the national level, establishing a public performance right in sound recordings and eliminating the exemption for terrestrial broadcasters follows principles of US copyright law,” wrote Kerry, the brother of the Senate Foreign Relations Committee Chairman and former presidential candidate. “In the words of the Supreme Court, ‘The encouragement of individual effort by personal gain is the best way to advance public welfare through the talents of authors and inventors…’ Consistent with this historic rationale for copyright, providing fair compensation to America’s performers and record companies through a broad public performance right in sound recordings is a matter of fundamental fairness to performers. It would also provide a level playing field for all broadcasters to compete in the current environment of rapid technological change, including the Internet, satellite, and terrestrial broadcasters. In today’s digital music marketplace, where US performers and record labels are facing both unprecedented challenges and opportunities, the Department believes that providing such incentives for America’s performing artists and recording companies is more important than ever.”

    Kerry’s comments continue the policy set forth in June 2008 by his predecessor from the Bush administration, Lily Fu Claffee. Both Claffee’s letter two years ago and Kerry’s last week effectively preached to the choir, with then-House Internet Subcommittee Chairman Howard Berman (Claffee’s recipient) and Sen. Leahy both having co-authored this legislation several times already.

    But the choir is not the congregation, as a majority of House members from both parties have already signed their names to pledges sponsored by the National Association of Broadcasters opposing the legislation. If it’s to pass the House this term, it may need to be attached to a bill that’s popular enough to pass with mere perfunctory debate.

    The problem is, in this term of Congress, hardly any such bill exists.

    “NAB was aware this letter was coming, which is a position taken previously by the Bush Commerce Department,” reads the response from NAB Executive Vice President Dennis Wharton. “We’re disappointed the Commerce Dept. would embrace legislation that would kill jobs in the US, and send hundreds of millions of dollars to foreign record labels that have historically exploited artists whose careers were nurtured by American radio stations. The good news is that 260 members of the House of Representatives and 27 US Senators are standing with hometown radio stations and against the RIAA.”

    Wharton’s comments come at the start of a potential tidal wave of public sentiment, in response to the US Copyright Royalty Board’s call for public comments on royalty rates for both broadcast and Internet radio, for the term starting January 1, 2011 and ending December 31, 2015. The CRB is taking those comments by e-mail sent to [email protected].

    Two terms ago, then-Senate Majority Leader Bill Frist was notoriously successful at getting an unpopular Internet anti-gambling measure passed and signed by the President by attaching it as a rider to a bill that was certain to get Mr. Bush’s signature: one authorizing increased anti-terrorism security for US shipping ports. Perhaps in an effort to forestall a repeat of that maneuver, the NAB this morning pointed press sources, including Betanews, toward an op-ed article in TVNewsCheck by noted broadcasting strategy consultant Tom Wolzien.

    Ostensibly about broadcasters’ reactions to the FCC’s Broadband Plan, the article suggests that if broadcasters continue to be weakened by US policy and new legislation (evidently including the performers’ rights bill), citizens would start relying on broadband services on their phones for public safety information, rather than the radio system that works so well today. Or, put another way, weaken the broadcasters and the terrorists might win.

    Wolzien writes, “As the nation considers a broadband policy, it has the opportunity to solve the challenge of simultaneous emergency communication to a fractionalized populous. It is a need that we hope won’t come, but for which we know we must be prepared. The complementary strengths and weaknesses of broadband and broadcast must be considered, as though our lives depended on understanding those differences. They may.”

    Copyright Betanews, Inc. 2010






  • Security researcher: ‘Trivially easy’ to buy SSL certificate for domain you don’t own

    By Scott M. Fulton, III, Betanews

    Last week, Betanews reported on the discovery by two university researchers, presented at a recent security conference, that security companies often deal with governments that can compel certificate authorities to produce SSL security keys for them. Those keys can then be used to sign certificates as any other Web site, enabling a law enforcement authority — hypothetically speaking, of course — to spoof virtually any other site.

    Today, Betanews heard from world-renowned security expert Kurt Seifried, author of numerous books on Linux system administration, network security, and cryptography. In the May 2010 issue of Linux Magazine, Seifried reports on his own discovery, which goes one very critical step further: You don’t need to be a government, he found, to compel a certificate authority (CA) to issue an SSL certificate for a major Web mail service of your choice. You just need a valid credit card.

    “Brief summary: One way to get certificates for domains you don’t own: 1) Find a free Web mail provider. 2) Register an account such as ssladmin. 3) Go to RapidSSL.com and buy a certificate. When given the choice of what e-mail address to use, simply select ssladmin. 4) Go through certificate registration process (this takes about 20 minutes). 5) You will now have a secure Web certificate for that Web mail provider,” Seifried told Betanews this afternoon.

    In his Linux Magazine article, Seifried lists several other permutations of generic-sounding e-mail account names that CAs typically treat as belonging to whoever administers a domain, including the obvious postmaster, administrator, and root. In his own tests, Seifried says, it usually took only a half-hour to acquire a perfectly valid certificate for a major Web mail service.

    “The industry-accepted standard for confirming someone is who they say they are and that they control a domain is that ‘the CA takes reasonable measures to verify,’ which is very ambiguous at best and meaningless at worst,” reads Seifried’s article. “One CA proposed that customers could fax a signed letter on company letterhead as proof that they controlled a domain (Have they not heard of word processors and image editing programs? Or online fax services?). CAs want to sell as many certificates for as little money as they can; if this puts users at risk but doesn’t cost the CA anything, then there is no incentive to fix things.”

    We asked Seifried what the general user can do to protect himself against a possible authoritative spoof using a false certificate. We didn’t like the sound of his answer: “Nothing. User education hasn’t worked and won’t work…The only reason I know the difference is I investigated this a while back; I’ve been writing about how broken SSL is off and on for a decade now.”

    Seifried credits Mozilla Firefox for at least giving the user good visual clues as to the validity of a signed certificate — for instance, using the color-coded bars next to the HTTPS: address in the upper bar. But ask everyday folks what those colors mean, he said, and they wouldn’t be able to tell you.

    Are there further steps Mozilla, or any other browser maker, could take to make “Trust” more meaningful to the user, and less likely to be something else for him to ignore? “Well there would be one possibility, but it’ll never happen, and that would be to boot out all the CAs that don’t do a good job verifying domains/etc. and only have root CAs that do a good job,” Seifried responded.

    “Basically right now, when a CA checks ‘ownership’ of a domain, it checks one e-mail address, which is trivial to bypass especially with, say, a free Web mail provider,” he continued. “If it were to add more checks — i.e., the CA generates a random string (say an MD5 sum) and requires you to place 8987a978d987e987c978.html or whatever in your webroot at www.yourdomain.com to prove you have control over the Web server as well; and maybe a DNS check, like requiring you to create a DNS record of iugasdcviuoba.yourdomain.com to prove that you have control over the DNS — that would greatly help, because in that case, you either are a legit domain owner, or the attacker has such a degree of control over your domain that any checks won’t matter. The funny thing is, Google used to do this for some of its services like Google Analytics. Also making the e-mail check more stringent — i.e., only e-mail_address@the domain listed in WHOIS, or well-known and typically controlled e-mail address such as postmaster@, would also help greatly.

    “But then buying a certificate would take time and the verification process would fail more often (waiting for DNS propagation/etc.), so it’s very unlikely to happen. Once you get a certificate in the root CA store, you basically have a license to print money.”
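
    The checks Seifried proposes are simple enough to sketch. Below is a minimal, hypothetical Python illustration of that style of domain-control validation — issue a random token, then confirm the applicant can publish it both as a file in the site’s webroot and as a DNS hostname. The domain, filenames, and record names here are invented for illustration; this is not RapidSSL’s or any real CA’s actual procedure.

    import hashlib
    import secrets
    import socket
    import urllib.request

    def issue_challenge(domain: str) -> dict:
        """Generate a random token the applicant must publish to prove control of the domain."""
        token = hashlib.md5(secrets.token_bytes(16)).hexdigest()  # "an MD5 sum," as the article puts it
        return {
            "token": token,
            "http_url": f"http://www.{domain}/{token}.html",   # file the applicant must serve from the webroot
            "dns_host": f"{token[:16]}.{domain}",              # hostname the applicant must create in DNS
        }

    def verify_http(challenge: dict) -> bool:
        """Check that the challenge file is actually reachable on the Web server."""
        try:
            with urllib.request.urlopen(challenge["http_url"], timeout=10) as resp:
                return challenge["token"] in resp.read().decode("utf-8", "replace")
        except OSError:
            return False

    def verify_dns(challenge: dict) -> bool:
        """Check that the challenge hostname resolves, i.e. the applicant controls the DNS zone."""
        try:
            socket.gethostbyname(challenge["dns_host"])
            return True
        except OSError:
            return False

    if __name__ == "__main__":
        chal = issue_challenge("example.com")  # hypothetical applicant domain
        print("HTTP control:", verify_http(chal))
        print("DNS control: ", verify_dns(chal))

    Each extra check costs a round trip and a chance of transient failure — DNS propagation, a misconfigured webroot — which is exactly the friction Seifried says low-cost CAs have no incentive to accept.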

    Copyright Betanews, Inc. 2010






  • One more try to modernize US surveillance laws for the Internet age

    By Scott M. Fulton, III, Betanews

    You may think that your communications with other individuals over the Internet are protected from unreasonable use by US law enforcement without a subpoena and due process. The truth is, judges have been loosening the interpretation of a 1986 wiretapping law, almost pretending that it applies to present circumstances. But perhaps the greatest problem with the current Electronic Communications Privacy Act (ECPA) lies with its definitions, which at one point appear to be applicable (after several stretches of logic) to the Internet…and then, upon further review, do not.

    “‘Electronic communication’ means any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photoelectronic or photooptical system that affects interstate or foreign commerce,” paragraph 12 of section 2510 begins. Sounds fair enough, until you go on: “…but does not include (A) the radio portion of a cordless telephone communication that is transmitted between the cordless telephone handset and the base unit; (B) any wire or oral communication…”

    If you subscribe to the FCC’s emerging definition of the Internet, as a global device that consumes spectrum, then the exceptions listed here would appear to exclude your smartphone as a component of electronic communication. That exclusion could conceivably give a crafty law enforcement officer or prosecutor the means for issuing a subpoena for information from a wireless service provider, without just cause as determined by a judge.

    “‘Subpoena’ is Latin for ‘no judge has ever approved this,’” said Electronic Frontier Foundation senior staff attorney Kevin Bankston, during a news conference yesterday. “To me, that’s the distinction here. Subpoenas are issued without any judicial review, and we really need that check-and-balance, that critical protection.”

    The news conference was assembled by the Center for Democracy and Technology to announce the publication of a set of proposed (and revised) principles for members of Congress to consider. Congress will, perhaps this season, hold hearings (again) on the possible modernization of ECPA, to make it clear that the same protections that applied to wiretapped telephone communication apply to Internet conversations. CDT’s leaders have once again assembled a policy coalition (its last try at this was in 2008) to promote a set of four principles that it believes new law must follow.

    This time around, however, the Digital Due Process group has enlisted the support of Microsoft and Google, two names that have figured prominently in the debate over Internet users’ rights. (Facebook also figures prominently in this debate by its conspicuous absence from this coalition.)

    “All private communications content stored with a service provider should be protected just as if it were stored on a laptop, or printed out or stored in a file,” stated CDT Vice President for Public Policy Jim Dempsey yesterday, listing the first of the new group’s four proposed principles. “That is, it should be protected by the warrant standard issued by a judge, based upon a finding of probable cause to believe that a crime is being committed or has been committed, and that the information is relevant to that crime.

    “Currently, some e-mail stored online is protected by the warrant and some isn’t,” Dempsey continued. “And the rules as to what is protected and what isn’t protected are pretty obscure and completely unknown to the average citizen. For example, there’s the ‘180-day rule,’ which says that after 180 days at the very longest, all of your stored e-mail loses the protection of the warrant and is available to the government, with a subpoena issued without a judge, and without a finding of probable cause. So we would say, one uniform rule across the board.”

    The second principle would apply a copy of that uniform rule to GPS and location information retrieved from an individual’s smartphone or laptop. Third, a law enforcement body or government entity should be required to show just cause for requesting e-mail or information about the e-mail or other communication (e.g., the names of parties in the discussion). Fourth — and certainly not least importantly — the principles suggest new law should make clear that any subpoena issued under the existing Stored Communications Act should apply to an individual or an account belonging to an individual, and requests for information belonging to anything else (such as a company or group) must be approved by a judicial finding.

    “Most laypeople don’t realize this, but a subpoena is issued by the prosecutor, and often prosecutors hand it off to the FBI agents to fill in,” explained Dempsey to a reporter who asked for a summary of how the law works today under the 1986 provisions. “They may be served in the name of a grand jury, or they may be administrative subpoenas, and a number of agencies have administrative subpoena authority. Those are issued at the discretion of an executive branch official with no judicial review. The Supreme Court has said that you can issue a subpoena…not because you believe the law is being violated, but merely to assure yourself that the law is not being violated. The standard is relevance to an ongoing investigation, and relevance is the lowest and broadest of the standards for compulsory access. It would be incumbent upon a service provider to challenge a subpoena when it’s issued; often the subpoenas are issued with a delayed notice provision, meaning that the true party of interest, the customer, is not told about the subpoena in time to object to it.

    “So there really are no checks and balances there that are meaningful,” he continued. “A few service providers, in a few cases, have challenged subpoenas, or have pushed back. But that in and of itself is an expensive and unpredictable process.”

    Google’s representative on the new coalition made a familiar case for Google: that the public’s expectations for privacy rights have evolved faster than legislators have been able to keep up.

    “This coalition is [in favor of] a very important initiative to advance what the legal protections are that cover the data that people are uploading to online services, those provided by many of the coalition members,” said Google Senior Counsel Richard Salgado yesterday. “We’re seeing tremendous change in the volume of data that people are uploading to services, the sensitivity of that data, and how that data and those services play a role in the day-to-day lives of people. Very different than how things looked in 1986 when the Web…didn’t even exist…We’re so far from that now, that you can hardly recognize the world of 1986; and yet, we’ve got a statute that envisions that bygone era. What we want to do is adjust some of the legal thresholds in the statute in a way that would make them more consistent with what users expect as their privacy right over the data that they’ve provided to these companies, and that they should expect, and doing so with thresholds that are very familiar to judges, very familiar to prosecutors, and that won’t hinder the important work that government has to do.”

    For his company’s part, Microsoft Associate General Counsel Mike Hintze yesterday pointed out that cloud technologies are preparing to rewrite the definitions yet again, and that any legal framework based squarely on 2010 could very soon look like 1986.

    “ECPA…just hasn’t kept up with technological changes. It doesn’t reflect how people use online services and cloud services today. Therefore, a lot of the distinctions in the statute are illogical or unclear or inconsistent, which creates challenges in terms of compliance. It’s unclear what the standard is, it creates friction between companies and law enforcement, and it creates confusion for the customers.

    “More importantly than that, though, is the fact that, as more and more people embrace the benefits of cloud computing — and Microsoft…has invested huge amounts in cloud technologies, and believe there are enormous benefits to the economy and to individual users…as that technological reality permeates our society, and people start moving documents from their file drawers and their individual computers into the cloud, we just don’t believe that the balance between privacy and law enforcement should be fundamentally turned on its head,” Hintze continued. “The US Constitution protects data in your home on your own PC at a very high standard; and as people take advantage of cloud services, we don’t believe that that traditional balance of privacy vis-à-vis the state, should be fundamentally altered.”

    Next: Would revised surveillance law protect all personal data?


    Recently, content providers including Google and Microsoft have been racing to comply with dueling sets of governments’ provisions worldwide: one that mandates how long they must retain information about their customers, and another that mandates they must anonymize that data, or get rid of it, after a given period of time. But as university researchers including Harvard’s Christopher Soghoian demonstrated, anonymization with respect to single databases may be pointless, as engineers with only meager knowledge of how databases work could conceivably reconstruct personally identifiable data by linking records from multiple databases.
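
    The linkage problem is easy to demonstrate. Here is a toy Python sketch — every record, name, and field in it is invented — showing how two datasets that look harmless in isolation can be joined on shared quasi-identifiers (ZIP code, birth date, sex) to recover identities:

    # Hypothetical "anonymized" records: no names, supposedly safe to release.
    medical_records = [
        {"zip": "46240", "dob": "1975-03-02", "sex": "F", "diagnosis": "asthma"},
        {"zip": "46038", "dob": "1981-11-19", "sex": "M", "diagnosis": "diabetes"},
    ]

    # Hypothetical public records that do carry names (voter rolls, for instance).
    voter_rolls = [
        {"name": "J. Doe",   "zip": "46240", "dob": "1975-03-02", "sex": "F"},
        {"name": "R. Smith", "zip": "46038", "dob": "1981-11-19", "sex": "M"},
    ]

    def reidentify(records, rolls):
        """Join the two datasets on the quasi-identifier triple (zip, dob, sex)."""
        index = {(r["zip"], r["dob"], r["sex"]): r["name"] for r in rolls}
        for rec in records:
            key = (rec["zip"], rec["dob"], rec["sex"])
            if key in index:
                yield index[key], rec["diagnosis"]

    for name, diagnosis in reidentify(medical_records, voter_rolls):
        print(f"{name}: {diagnosis}")

    Nothing here requires more than a simple join, which is the researchers’ point: anonymizing any single database says little about what several databases reveal in combination.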

    This gets into the larger question of aggregate data — information that’s discoverable through manipulation. The principles proposed by Digital Due Process yesterday appear, on the surface, to apply to law enforcement requests for specific records from specific databases applicable to specific investigations. But what happens when data those agencies may already have reveals something they didn’t know they needed to know, once it’s all pieced together?

    We asked the CDT’s Jim Dempsey: “Our principle #4 addresses this issue: It says that when the government seeks aggregate data, it must get a court order; it cannot use a prosecutor’s subpoena,” he told Betanews. With regard to requests for personally-identifiable data (PID) versus non-PID — data that can be compiled to reveal PID — Dempsey said, “ECPA does not distinguish between personally identifiable and non-personally identifiable data. Even the current law covers data that is aggregate and supposedly not personally identifiable.”

    But even as the principles are currently written, could new law based on those principles effectively omit aggregate data, creating a loophole? For instance, could a law enforcement agency thwart the new rules by mining data collected in the course of other, unrelated investigations; in so doing, determine new connections between elements of data; and then characterize the resulting evidence as “plain sight” discoveries?

    “Nothing in current law or in our proposal limits the government’s use of data already collected,” responded CDT’s Dempsey. “If the government lawfully acquires data one day, it can use that data months or years later in another case. ECPA and the Fourth Amendment [of the US Constitution, pertaining to citizens’ protections against unreasonable searches and seizures] address forcing companies to disclose customer data; they do not address how long the government can keep the data.”

    In a statement issued yesterday, Sen. Patrick Leahy (D – Vt.), who currently chairs the Judiciary Committee, promised to hold a new set of hearings to consider the new group’s proposals…some of which have been considered before, including in committees headed or steered by Leahy.

    “I applaud the announcement today by Digital Due Process and the Center for Democracy & Technology that a coalition of privacy advocates, legal scholars, and major Internet and communication service providers have joined together to release a consensus set of proposals to modernize the Electronic Communications Privacy Act. I look forward to reviewing these ideas,” stated Sen. Leahy. “While the question of how best to balance privacy and security in the 21st century has no simple answer, what is clear is that our federal electronic privacy laws are woefully outdated. In the coming months, I plan to hold hearings on much-needed updates to the Electronic Communications Privacy Act.”

    Copyright Betanews, Inc. 2010






  • The end, finally, at last, hopefully? Jury finds Novell retained UNIX copyrights

    By Scott M. Fulton, III, Betanews

    As first reported this afternoon by Groklaw, the publication that made its name covering the ugliest chapter in the history of computing, a jury in Utah district court has found that the copyrights to UNIX were never transferred to the original Santa Cruz Operation by way of a 1995 asset purchase agreement.

    The decision may finally put to rest a 15-year-old argument over who, or what, has the rights to UNIX.

    Several changes in leadership ago, Novell changed its mind about continuing to develop “UnixWare,” and had thought it was entering into an agreement with SCO (again, several changes in leadership ago) that would enable the two to pursue a different UNIX strategy in cooperation with IBM. That cooperation absolutely never manifested itself, as it became tremendously unclear among all parties just what was agreed to — even to the extent that the instruments of the agreement itself were challenged.

    The legal tangle quite literally drove some of those involved mad. As for SCO, as of last October, it’s a skeleton operation in receivership that cut off official ties with its notorious former CEO, Darl McBride, and that devoted its only remaining resources towards driving its legal operation on autopilot. The August 2007 judgment in Novell’s favor — effectively ruling the same thing the jury decided today — would have actually been a blessing for SCO, enabling it to become a company that made products again, were it not for an Appeals Court ruling two years later stating the matter was too complex for the judge to have issued a summary judgment.

    Groklaw’s reporter on the scene this morning knew something was up when, from his post at a café across the street, he saw many suited people, including former CEO McBride, entering the courtroom. A half-hour later, the jury informed the judge it had reached a verdict. Moments later, each juror was polled: had the amended asset purchase agreement failed to transfer the UNIX copyrights from Novell to SCO? Each one answered in the affirmative.

    As the Salt Lake Tribune reports this afternoon, the court-appointed trustee for SCO, Edward Cahn, said the company he manages will continue to press its lawsuit against IBM, on the basis of contracts that SCO claims IBM violated. Cahn may not have any choice at this point, as the trustee of the closest thing the computing industry has ever seen to a zombie.

    Although Pamela Jones’ Groklaw report this afternoon began with the words “It’s over,” she conceded this doesn’t yet signify the official closing of the curtain: “Of course, it’s possible they might appeal. I’ve learned when it comes to SCO never to assume it’s over, just because it should be over…Even in this Novell trial, there are some issues the judge has yet to decide. This saga is not finished.”

    That doesn’t mean the leading Linux advocate isn’t celebrating: “And don’t you want to thank the jury? I know I do. Did I not tell you that juries can be trusted? I hope this helps some of you cynics.”

    Copyright Betanews, Inc. 2010


