Author: davidkirkpatrick

  • Look for the Fed to raise interest rates this fall

    The Federal Reserve has already borrowed $200 billion and parked it in the Treasury for just this move.

    From the second link:

    Most U.S. business economists expect the Federal Reserve to raise benchmark interest rates within six months by between a quarter and a half percentage point, according to a survey released on Monday.

A majority of economists in the National Association for Business Economics’ semiannual survey found that the Fed’s current stance of rates near zero percent is appropriate. A growing number, however, believe the U.S. central bank’s policies are too stimulative, according to a poll of 203 members taken February 4-22.

    “A majority believes that a rise in interest rates is both likely and appropriate in the next several months,” said NABE President Lynn Reaser.

  • The schizophrenia of Islamists

Here’s a MEMRI report on Hamas’ response to the Goldstone report on the Gaza War of 2008/2009.

    I bolded the final graf. Hit the first link for the entire report:

    Following Hamas’ June 2007 Gaza coup, Palestinian Authority President Mahmoud Abbas fired its ministers from the national unity government. Hamas, for its part, did not recognize the dismissal, nor does it recognize the legitimacy of Prime Minister Salam Fayyad’s government and of Mahmoud Abbas’s presidency. Instead, it regards Isma’il Haniya and his government as the legitimate representatives of the Palestinian people. Recently, it has instructed the media to stop calling it al-hukouma al-muqala (“the dismissed government”).

    Hamas’ view of its government as the legitimate representative of the Palestinian people was reflected in its recent response to the Goldstone report. The response, delivered to Curt Goering, head of the UN Office of the High Commissioner for Human Rights in Gaza, was submitted not in the name of Hamas but in the name of the PA. Signed by Hamas’ Minister of Justice, Muhammad Faraj Al-Ghoul, it bore the PA seal and the PA Ministry of Justice letterhead (see image below). In fact, Al-Ghoul explicitly stressed that the document was not a Hamas response to the Goldstone report but rather the official response of the Palestinian Justice Ministry. He emphasized that “the government is the one issuing the response, because it is the body handling the issue, rather than the resistance factions,”[1] thereby indicating that the Gaza government does not represent Hamas, but rather the entire Palestinian people.

    Apparently, Hamas’ goal in submitting this report is to improve its international status and to gain the UN’s recognition. The rationale is that by accepting the document, the UN would in effect be recognizing Hamas’ status as the official representative of the PA.

    The Hamas response, which was published in the movement’s magazine Al-Risala, contained an apology for rocket attacks that harmed Israeli civilians; later Hamas denied issuing an apology.

  • Carbon nanotubes open new area of energy research

Nanotechnology is revolutionizing how we see and deal with electricity, everything from storage to wiring. Now a team at MIT has discovered that carbon nanotubes can produce electricity in an entirely new way, opening a brand new area in energy research.

    From the final link:

    A team of scientists at MIT have discovered a previously unknown phenomenon that can cause powerful waves of energy to shoot through minuscule wires known as carbon nanotubes. The discovery could lead to a new way of producing electricity, the researchers say.

The phenomenon, described as thermopower waves, “opens up a new area of energy research, which is rare,” says Michael Strano, MIT’s Charles and Hilda Roddey Associate Professor of Chemical Engineering, who was the senior author of a paper describing the new findings, published on March 7. The lead author was Wonjoon Choi, a doctoral student in mechanical engineering.

    Like a collection of flotsam propelled along the surface by waves traveling across the ocean, it turns out that a thermal wave — a moving pulse of heat — traveling along a microscopic wire can drive electrons along, creating an electrical current.

The key ingredient in the recipe is carbon nanotubes — submicroscopic hollow tubes made of a chicken-wire-like lattice of carbon atoms. These tubes, just a few billionths of a meter (nanometers) in diameter, are part of a family of novel carbon molecules, including buckyballs and graphene sheets, that have been the subject of intensive worldwide research over the last two decades.

  • A realistic look at a close encounter of the third kind

    This post from J. Storrs Hall at Nanodot, the blog of the Foresight Institute, offers a dystopian, and realistic, view of how mankind’s first encounter with extraterrestrial intelligence will likely play out.

Hall’s premise is that the most likely first visitor will be a civilization commanding “full-fledged nanotech” and “hyperhuman AI.” Given that construct, we won’t be welcoming emissaries of an alien race; we’ll be facing its resource-seeking nanobots, which will likely be looking to loot our solar system of raw material and harness the sun with a Dyson sphere. Kind of makes you long for little green men, hostile or not, eh?

    From the first link:

Star travel is expensive; it costs on the order of a ship’s own mass in equivalent energy to get it up to relativistic speeds. Any culture capable of that will be at least a Kardashev Type I civilization, and most likely a Type II. And the reason they’ll be doing star travel is to work their way up towards Type III. Any sentient creatures that actually get here will be nanotech-based robots, not water-based organisms. They won’t have spacecraft, they’ll be spacecraft. They will likely not be interested in the carbon-poor mudballs of the inner solar system, but will reap abundant carbon from the outer planets and carbonaceous asteroids to build Dyson-sphere-like structures around the orbit of Mercury.

    We simply aren’t going to see less sophisticated visitors due to the starship paradox: send a starship out now with all Earth’s current technological resources behind it, and then wait and send one in 50 years with full nanotech.  The second one gets there first.

    For more background reading, here’s an explanation of the Kardashev scale cited above. And for a little reassurance Hall’s outcome isn’t all that inevitable read the comments at the linked post.

  • First time farce, second time tragedy

Read this whole piece on the Liz Cheney group Keep America Safe’s shameless attack on U.S. Justice Department attorneys who upheld American legal tradition and the Constitution by defending Guantanamo Bay detainees. I blogged on this topic earlier this week here.

    From the first link:

Interviewing Liz Cheney, Bill O’Reilly ran side-by-side photos of Deputy Solicitor General Neal Katyal and Salim Hamdan, Osama bin Laden’s driver whom Katyal successfully represented in the Supreme Court. (Neal Katyal, I should mention, is my Georgetown colleague, on leave to the SG’s office.) Some readers might remember Stephen Colbert’s hilarious 2006 interview with Katyal soon after the Hamdan decision. Colbert began, “You defended a detainee at Gitmo in front of the Supreme Court — for what reason? Why did you do it?” Neal replied: “A simple thing: he wanted a fair trial….” Colbert (cutting Katyal off): “Why do you hate our troops?” It brought gales of laughter from the audience. Watch the whole thing — it’s one of the few times that Colbert was actually upstaged by his guest.

    First time farce, second time tragedy. Colbert’s joke is Bill O’Reilly’s reality — the reality of a nauseating reprise of McCarthyism. No one is laughing now.

    (Hat tip: the Daily Dish)

  • Ahmadinejad joins the 9/11 truthers

    Like the rest of his crazy statements, this one is calculated for effect in Iran. Doesn’t make it any less nuts, though.

    From the link:

Perhaps concerned that his repeated suggestions that the Holocaust might not have happened have become less shocking over time, Iran’s president, Mahmoud Ahmadinejad, upped the ante on Saturday, telling intelligence officials in Tehran that the destruction of the World Trade Center on September 11, 2001, was staged.

    In remarks reported by IRNA, an official Iranian news agency, and translated by Reuters, Mr. Ahmadinejad said, “The September 11 incident was a big fabrication as a pretext for the campaign against terrorism and a prelude for staging an invasion against Afghanistan.” Mr. Ahmadinejad also reportedly described the attacks in New York as a “complicated intelligence scenario and act.” Conspiracy theorists in the Middle East have suggested that the attacks were not the work of Al Qaeda, but carried out by Israeli or American intelligence operatives.

  • Saturday video fun — 1969 IHOP ad

    Yep, I’m doubling down on the strange and unusual today. This gem from the mind of late-60s admen looks like the result of a little too much microdot with a splash of funny mushrooms thrown in for good measure. It’s weird, and it’s trippy, but does it really make you want to head down to the International House of Pancakes?

    (Hat tip: boing boing)

  • Saturday video fun — “Я очень рад, ведь я, наконец, возвращаюсь домой”

Apparently the title translates to “I am very glad, because I’m finally returning home.” This thing is really making the rounds and seems to have become the new RickRoll. Over 1.2 million views as of this posting.

    It’s weird and looks to be Soviet, and here’s the video … make of it what you will.

  • Small business loan relief courtesy of Congress

    Finally.

    From the link:

    Added incentives for banks to make Small Business Administration-backed loans will continue through the end of March, thanks to a fresh funding infusion authorized by Congress as part of Tuesday’s bill extending unemployment benefits.

    Since early last year, the SBA has waived its fees and offered banks guarantees of up to 90% on the small business loans the agency backs. Created as part of the Recovery Act, the deal sweeteners helped SBA-backed lending rebound from its near collapse in late 2008, in the wake of the financial crisis.

    Congress initially authorized the incentives to continue through September of this year, but the measures proved so popular that their funding was quickly exhausted. The SBA has been relying since late November on temporary extensions to keep the incentives running.

    The unemployment benefits extension bill — passed by the Senate and signed by President Obama late Tuesday after Sen. Jim Bunning, R-Ky., dropped his objection — allocates $60 million to fund the program’s subsidies for another month.

Just this issue alone illustrates how Bunning’s so-called “principled” roadblock tactic put real short-term hurt on Main Street. Over 100,000 federal employees missed a paycheck because of that asshat’s grandstanding. How would you like to make a mortgage or other bill payment late because one Senator wanted to make an inane point about federal spending? Particularly a Senator who offered no fiscal backbone during eight years of profligate federal spending under the previous administration, with zero attempt to pay for the outlay.

  • Multiple choice theories of everything

Back in January I blogged about “the most beautiful structure in mathematics,” the basis of a physics theory-of-everything proposed by Garrett Lisi. That theory is part of an article on “Knowing the mind of God” at New Scientist outlining seven different theories of everything.

From the last link, more on Lisi’s concept:

    E8

In 2007 the physicist (and sometime surfer) Garrett Lisi made headlines with a possible theory of everything.

    The fuss was triggered by a paper discussing E8, a complex eight-dimensional mathematical pattern with 248 points. Lisi showed that the various fundamental particles and forces known to physics could be placed on the points of the E8 pattern, and that many of their interactions then emerged naturally.

    Some physicists heavily criticised the paper, while others gave it a cautious welcome. In late 2008, Lisi was given a grant to continue his studies of E8.

    And here’s the article’s synopsis of string theory, one of the better known ideas out there:

    String theory

    This is probably the best known theory of everything, and the most heavily studied. It suggests that the fundamental particles we observe are not actually particles at all, but tiny strings that only “look” like particles to scientific instruments because they are so small.

    What’s more, the mathematics of string theory also rely on extra spatial dimensions, which humans could not experience directly.

    These are radical suggestions, but many theorists find the string approach elegant and have proposed numerous variations on the basic theme that seem to solve assorted cosmological conundrums. However, they have two major challenges to overcome if they are to persuade the rest of the scientific community that string theory is the best candidate for a ToE.

    First, string theorists have so far struggled to make new predictions that can be tested. So string theory remains just that: a theory.

    Secondly, there are just too many variants of the theory, any one of which could be correct – and little to choose between them. To resolve this, some physicists have proposed a more general framework called M-theory, which unifies many string theories.

But this has its own problems. Depending on how you set it up, M-theory can describe any of 10^500 universes. Some physicists argue that this is evidence that there are multiple universes, but others think it just means the theory is untestable.

  • Faking job references

    Providing a bogus job reference in the form of a friend or relative is nothing new, but I had no idea the concept has become so organized and commodified.

    From the link:

    But a niche business has cropped up that takes that a step further. Web sites that offer fake job reference services are available for any job seeker whose credentials and references don’t stand on their own. That’s bad news for hiring managers, according to Jeff Wizceb, a vice president with HR Plus, a division of AlliedBarton Security Services that provides background screening services.

“You basically sign up and create your own company that you want to have worked at or create a position at a legitimate company,” said Wizceb. “You plug in references, position, salary, all that information, and if an employer were to call the number you provided, these sites will pose as a reference and it would be basically this fake company that would ‘verify’ the information.”

  • Don’t call it a comeback — portfolio management

Long ago I included a short bit on portfolio management in a larger piece on product development (yep, all that is about as exciting to both write and read as it sounds). After some years in the biz-buzz wilderness, it looks like portfolio management is coming back into vogue.

    From the link:

    Portfolio management is poised to go “retro.” As organizations are preparing to come out of the recession, they are thinking more broadly about the types of investments that will be required to support business growth. As a result, some organizations are revisiting the idea of portfolio management as a way to organize and evaluate.

    When portfolio management was a hot topic in the middle part of the last decade, it was driven in part by some good management thinking from people like Peter Weill at MIT CISR and Dr. Howard Rubin and in part by some software tool vendors. At the time, most companies added some kind of portfolio thinking or at least dabbled with it. As my partner Jim Quick and I recently discussed, most of the fanfare was confined to the IT organization, but some of the more pioneering organizations actually drove portfolio thinking higher in the organization. In fact, this is where it belongs, where a complete view of a business investment can be calculated and evaluated.

    Many organizations likely found portfolio management too complex, too heavy on overhead, or too theoretical and, thus, they were unable to find value in it. So why would it come back into fashion now?

    Care to read my portfolio management bit from around ten years ago? Here it is:

    Portfolio Management

Portfolio management is defined as a dynamic process a company uses to regularly review its list of product development projects and allocate resources to them in a prioritized manner. The activities involved in portfolio management include reviewing the entire portfolio and comparing the individual projects against each other, making go/kill decisions on individual projects, developing a product strategy for the business and making the strategic resource allocation decisions. This approach is not unlike the concept of managing a portfolio of stocks, where you constantly evaluate the entire group to maximize your return and weed out the weak links.

Without a good portfolio management system in place, a company risks spreading its resources too thinly over a group of weak projects and failing to make effective go/kill decisions by allowing the weaker projects to progress through the development process. Done correctly, the management of the project portfolio should be a funnel that keeps the effective projects in the stream and weeds out the weaker projects on a regular basis. This gives the strong projects more resources to help maximize their effectiveness. The difficulty in portfolio management is that the process is dynamic, focuses on what might be, and compares projects at differing stages of completion.
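The funnel logic described above can be sketched in a few lines of Python. To be clear, the fit threshold, the scoring rule, and the per-stage costs here are illustrative assumptions of my own, not part of any formal portfolio method:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    expected_return: float  # forecast return multiple for the project
    strategic_fit: float    # 0-1 fit score assigned at portfolio review
    stage: int              # current development stage (1 = earliest)

def review_portfolio(projects, budget):
    """Rank active projects and fund them in priority order.

    Projects below a minimum strategic-fit threshold are killed (the
    funnel's weed-out step); survivors are funded best-first until the
    budget runs out, so strong projects get resources first.
    """
    survivors = [p for p in projects if p.strategic_fit >= 0.3]
    survivors.sort(key=lambda p: p.expected_return * p.strategic_fit,
                   reverse=True)
    funded, remaining = [], budget
    for p in survivors:
        cost = 100_000 * p.stage  # later stages cost more (illustrative)
        if cost <= remaining:
            funded.append(p.name)
            remaining -= cost
    return funded
```

Running a review with three hypothetical projects, a weak-fit project is culled outright and the rest are funded in score order until the budget is spent.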

To implement a portfolio management system, a company should approach the process as though it were designing a new product (the new management system) for an end user (the company). By doing this, the process is broken into steps with particular goals in mind to ensure that the result will be successful. The first step is to define the requirements for implementing a portfolio management system. This step involves learning about what portfolio management entails by researching literature on the subject and taking a look at what other firms are doing with portfolio management. This step also includes creating a task force within the company to act as the “project team” that develops the process.

    The second step is to design the portfolio management process itself. The first stage should provide a frame of reference to work from and should have defined problem areas that need to be addressed, like having a development process that is easily reviewable such as the Stage-Gate process. This step also involves creating a new product strategy and a portfolio review process. The result of this stage should be a portfolio management process that is down on paper and has been reviewed by the task force, users and top management within the company.

    The last step is to implement the process itself. A portfolio process manager should be chosen to cover the day-to-day management of the system and there should be training for the project team members on the new process. All new and existing projects should be put into the new system as quickly as possible and performance measures, or metrics, should be defined to allow for evaluation of the entire process.

    The result of the time and resources spent to develop a good portfolio management system is a more effective and accountable process of new product development.

If you’re into product management, or just a glutton for business punishment, head below the fold for my entire piece on product development. It was written for Office.com in the late 1990s or earliest 2000s, back when Office.com was owned by Winstar and buying Super Bowl ads. I don’t think this particular piece ever ran on the site (couldn’t find it at the Internet Archive), and since the old Office.com has been gone for eight years or so, I’ve made a HubPages site out of the article. Be warned, it is long.

New Product Development — From Idea to Market

    By David Kirkpatrick

    Overview

New product development is a vital part of any business. It doesn’t matter whether the product is for consumers or other businesses, whether it is a tangible object or a service. The constant change in markets and technology requires that companies take steps to meet new challenges. Developing new products and improving existing products is an important step in meeting this challenge. New product development can be just what it sounds like—the creation of a completely new product that fills a previously unaddressed niche in the economy. Product development also includes re-examining an existing product to maximize its market potential through adding features, a design change or maybe just tweaking the marketing.

    Fortunately, product innovation is not a completely hit or miss proposition. There are steps a company can take to improve the likelihood of a successful development process. There is no one “best” method for developing products, and what works for one segment of a particular industry may not work for another industry, or maybe not even for another segment of that industry. The mix of elements will be different for every product development project, but companies can look to a basic framework to help keep all the different elements on track.

    The goal of the product development process is to end up with the best possible product. One that is well suited for the intended audience and contains features that are needed and desired. No matter how great the new product may seem, if the market rejects it, it’s a failure. Taking the product development process seriously can go a long way toward making the end result a success.

    Outline

    I. What is New Product Development

    II. Categories of Products

    III. The Stage-Gate Process

IV. The Importance of Market Research

    V. Types of Research

    VI. Choosing the Research Firm

    VII. The Customer Experience

    VIII. Managing the Process

    IX. Portfolio Management

    X. Resources

    What is New Product Development

    The new product begins as an idea or a concept. Maybe even hastily scribbled on a napkin at lunch. Product development is the process that takes that idea through a series of stages until the concept emerges at the end of the process as a completed product ready for the market.

    New products drive progress and can be seen as the lifeblood of business. They can create new companies and entirely new industries. The automobile is an example of a product that has spawned a huge industry with constantly changing design of the core product, a wide range of accessories that reflect ongoing innovation and a huge host of industries, like petrochemicals, that were directly affected by the creation of that one product.

New product development can allow companies to reinforce or change their strategic direction, prevent them from becoming stagnant, and serve as a rallying point to create excitement and commitment. A strong new product can also help a company’s entire product line with a “pull-through” effect.

It is important for companies to remember that product innovation is not a static process. Abbie Griffin, editor of the Journal of Product Innovation Management, says that “there’s no silver bullet” in product development. There’s no perfect process that a company can find and use over and over again. Each project requires its own set of objectives and mix of elements to be successful. What worked with the last project may not work for the next development process.

    “Product development goes in cycles at every single company,” says Griffin. “It’s almost like there’s a reversion to the mean. You do well for a while, you get cocky, you forget how to do it and then you lose it.”

    Companies should remember that the goal of the entire process is to end up with a successful product. A product that improves on the competition or fills a new niche in the marketplace.

“I think that the best products are really for an individual–a clear, real person with real needs, aspirations, goals (and) values. And the more that products can be conceived and designed to address the needs of real people, the more they are going to resonate with, or excite consumers at the end of the day,” states Darrel Rhea, principal at Cheskin Research. “So I think that it’s incredibly important to have product developers, all the people involved in product development including the technical and engineering side of the equation, to have a deep appreciation for who they are designing for.”

    Noncontrollable factors in product development:

    Market potential, size and growth rate

    The availability of resources

    How competitive the market is

    The external environment surrounding the product

    Controllable factors in product development:

    The proficiency of marketing and technological activities

    Whether the end user is involved in decisions throughout the process

    The support of top management

    The new product strategy and process

    Some questions to ask about the development process:

    Are we increasing our research and development effectiveness?

    Are we improving the utilization of our manufacturing operations?

    How are we leveraging our marketing effectiveness?

    Are we effectively utilizing our human resources?

    Are we including the consumer in our development process?

    Questions to ask about the new product:

    Will this new product enhance our corporate image?

    Are we reinforcing or changing our strategic direction with this product?

    Will this development help us achieve a strategic advantage over a competitor?

    How will this product improve our financial return?

    Does this new product create excitement at our company?

    Categories of Products

    New products come in many guises, but almost all of them can fit into one of five categories:

    The breakthrough product is something that is entirely new. The first telephone is an example of a breakthrough product. So is the first tennis racket made of a graphite composite and not wood or metal. These are the most dramatic of new products and they are what most people think of when they hear the words product innovation.

The “new to us” product is one that is already on the market, but a company is developing its own version for the first time. There are many reasons a company may want to develop a product that already exists. It may be able to improve upon what is available, or make a comparable product at a cheaper price. Or possibly it is looking to grab a share of a profitable market segment. A danger of overusing this approach is being seen as a non-innovative copycat company within the industry.

    The next generation product falls under the new and improved umbrella. That is, taking a successful product and adding some new benefit, reducing the cost or enhancing the overall design to improve the product.

    Many companies use a simple line extension product for much of their product development. This approach may not be too exciting, but it can be very profitable by tweaking a good product to meet different segments of the market. Some examples would be the creation of an economy model of a product, or conversely a higher-end model with more features. This approach should be carefully researched to avoid the over-segmentation of the market with a large and confusing assortment of product choices.

The final product category is the three Rs–repackaging, repositioning and recycling. A new package design can invigorate and draw attention to an existing product. By repositioning a product, companies can open new and profitable markets to that product. The recycled product can find new life in a new role. A good example would be the old, mechanical push lawn mower. Gas and electric power mowers dominate the market, but there is an increasing demand for the simple and environmentally friendly push mower.

    The Stage-Gate Process

    The traditional product development process is referred to as sequential new product development and consists of five basic steps: idea generation, screening and evaluation, business analysis, development, testing and commercialization. Each step is overseen by a manager who makes the determination whether or not to proceed to the next step.

Robert G. Cooper created the Stage-Gate™ product development process, which refines the basic framework and provides the development team a blueprint for managing the process. The Stage-Gate process defines the cross-functional and parallel activities that each stage should engage in. Between the stages are gates, which control the process, serve as go/kill checkpoints for the project, and offer quality control of the process.

This process breaks the development cycle into a number of identifiable stages, usually four, five or six. Each stage includes the various activities that each functional area undertakes in parallel while working together as a team under the project team leader. The design is set up so that each stage gathers information to drive down uncertainty about the success of the project. Each successive stage is also more costly than the previous stage. The idea is to allow an increase in spending on the development of projects as the uncertainty goes down. Another aspect of the Stage-Gate process is a built-in level of flexibility to help accelerate the development process. Stages can overlap each other, or a go decision can be made on the next stage although the current stage is not yet complete. Stages can even be combined if that is found to be necessary.

    A generic example of a five-part Stage-Gate process:

    Stage 1: The Preliminary Investigation–This should be undertaken by a basic team from the technical and marketing functions to investigate the scope of the project. Some of the activities include the preliminary market, technical and business assessments.

    Stage 2: Detailed Investigation–The activities of this stage should lead to a business case. They include market research studies like user needs and wants, competitive analysis and concept testing. The activities also include in-depth business and financial analyses. This stage should involve the technical, marketing and manufacturing functions and should yield a defined product and a framework for the following stages.

Stage 3: Development–Here is where the product is designed and prototyped. At this stage some customer research is conducted to refine the design, but much of the testing involves the manufacturing requirements and processes. The marketing plan for the new product is also created. The entire cross-functional team, including the marketing, technical and manufacturing functions as well as the purchasing, sales, quality assurance and finance functions, should be in place by the development stage.

    Stage 4: Testing and Validation–This stage is where the proposed new product endures extensive testing of the production, the marketing and the product itself. This stage determines if the product is ready to actually go to the market.

Stage 5: Market Launch and Production–The full commercialization of the new product. The marketing plan is fully engaged and the project team now should take on the role of monitoring and making refining adjustments to the launched product as necessary.

Built into the Stage-Gate process is the gate between each stage. This gate is a go/kill decision point that determines whether the project continues. The gate also serves as a quality-control checkpoint and helps determine the resource commitment the project receives from the company. The gate meeting should be attended by the senior members of each of the functions represented on the development team. Gates have three aspects that are evaluated during the meeting. The input is what the previous stage delivered to the meeting. The criteria are the questions the meeting members use to make their go/kill and resource allocation decisions. The criteria involve both quantitative figures, like financial return forecasts, and qualitative data, like market attractiveness. Each criterion is judged on mandatory or desirable merits to determine the outcome of the meeting. The output of the gate meeting is the decision the participants end up with. If the decision is a “go,” the participants set a resource allocation level for the next stage and a deadline for the next set of deliverables.
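To make the gate mechanics concrete, here is a minimal sketch of a gate evaluation. The pass threshold and the way desirable scores map to a resource level are my own illustrative assumptions, not part of Cooper's method:

```python
def gate_decision(mandatory, desirable, threshold=0.6):
    """Evaluate one Stage-Gate checkpoint.

    mandatory: dict of criterion -> bool; any failure kills the project.
    desirable: dict of criterion -> score in [0, 1]; the average must
    clear `threshold` for a go decision.
    Returns ("go", resource_level) or ("kill", 0.0), where the resource
    level scales with how strongly the desirable criteria were met.
    """
    if not all(mandatory.values()):
        return ("kill", 0.0)          # a failed must-meet ends the project
    avg = sum(desirable.values()) / len(desirable)
    if avg < threshold:
        return ("kill", 0.0)          # desirable merits too weak overall
    return ("go", round(avg, 2))      # fund next stage at this level
```

With hypothetical criteria, a project passing all mandatory checks and averaging well on desirable ones gets a go with a proportional resource level; failing any mandatory check kills it outright.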

    Part of the flexibility of the Stage-Gate process is that decisions can be made with incomplete information: a provisional go decision can be made contingent on positive results occurring early in the next stage. The Stage-Gate process is a cross-functional approach that involves many different business function areas throughout the entire product innovation cycle.

    The traditional product development cycle does not involve market research until close to the end of the process. This view treats research as a means to “market” the new product rather than allowing market research to drive the actual development decisions. An important aspect of the Stage-Gate process is that the marketing function is involved in every step of the development cycle. Following is a breakdown of some specific marketing functions that should be employed at specific stages of the process.

    The Preliminary Market Assessment: This study is used at the onset of the development cycle to determine the market attractiveness and acceptance of the concept.

    User Needs and Wants: These face-to-face interviews and in-depth surveys provide important user information to the design team.

    Competitive Analysis: Just as it sounds, this study looks at the competition within the targeted marketplace.

    Concept Testing: This tool uses a prototype or representation of the proposed product to test market acceptance.

    Customer Reaction: This should be employed throughout the various steps of development to keep the project team focused on the end user.

    User Tests: This is the final test before the actual onset of marketing, where consumers use the product under real use conditions to confirm the market attractiveness and acceptance.

    Test Marketing: The new product is launched on a limited basis to test every element of the marketing and product before the full launch.

    Market Launch: The full release of the new product, backed up by a sound development process and the resources necessary for market success.

    The Importance of Market Research

    Traditionally product development has used market research at the end of the process to validate the new design or product. Market research is actually an important aspect of every stage of development. The earlier a company spends market research dollars in development, the more return it will see on those dollars. The problem with relying on validation research at the end of the process is that the research itself becomes a “disaster check.” The best result a disaster check can come back with is that you have a viable product that the market will embrace. More likely the research will uncover areas where the development team made incorrect assumptions about some aspect of the new product, perhaps a configuration issue or the tone of the brand, that created a weaker product than was possible. The worst-case scenario has the research uncovering an actual “disaster”: a product that is not suited for the target audience in the least. At that point the development process must start over, losing valuable time and wasting money on a problem that could have been averted by effective research early in the development process.

    Rhea explains the value of using research early in the development game, “If you take the time to really do serious planning of the process, if (you) take the time to focus your efforts, to aim before you fire, you’re going to have a much more efficient development process.” By utilizing research to help conceptualize the direction of the product development, companies can increase the return on their development investment. The best way to accomplish this is to really understand who the user of the new product is and bring that product to life for that user. By doing this the development team has a powerful shared vision and understanding of whom they are solving the problem for.

    “Ultimately research should help provide the fundamental material to create a vision for a brand or product,” says Rhea. “And that vision should be based on a really deep, intuitive clear understanding of human beings as real people, not as statistics and data.”

    Market research can also help the development team realize that they are not just creating a product; they are creating a customer experience, a relationship with the customer that goes beyond the product or service created at the end of the development process. This requires a deep understanding that the brand is the promise and relationship the customer has with the company.

    “Ultimately, in the market, it (product development) creates products that people just don’t buy, they buy into,” states Rhea. He goes on to cite the Body Shop, Patagonia and Nike as companies whose products consumers buy in part because of the philosophy, values and sensibilities of their brands.

    Types of Research

    Market research encompasses a wide variety of types of research. The two basic categories are quantitative and qualitative. Quantitative research includes the collection of demographic and psychographic information, like the age and gender of the target market, the hobbies of the target group, and how strongly they feel about various issues. Qualitative research goes more deeply into the actual problems that the customer may want solved by the new product.

    The trend in product development has been moving toward more qualitative tools. One of these is cultural ethnography, which borrows skills from visual anthropology to observe the consumer in various settings and develop deep, contextual insights about the end user from these observations. This process can allow the researcher to learn what is going on in the consumer’s life that the consumer may not want to tell the researcher or may not even be aware of themselves. Using this along with other tools allows a skilled researcher to define specific problems that can be solved through the product innovation process.

    An important aspect of research is to utilize the entire toolbox of research skills to define the consumer and the problems that need to be solved—using the correct methodology at the right time to solve the right problem. By not doing this, the development team risks ending up with a product that is less than optimal. Most companies will not have this highly specialized skill in-house in the marketing department. The solution is to hire a market research firm that can provide worthwhile market research to the development team.

    Choosing the Research Firm

    When searching for a market research firm to add to the development team, the company should be aware that not all market research companies will be able to provide the type of results that product development demands. A large portion of the market research field specializes in quantitative data collection, which is very worthwhile for many purposes, but these companies will not provide the skill sets that the development team requires. The company should seek out a research firm with the appropriate skills and experience for product development. This means finding someone who can help you understand your target consumer and who understands the product development process. Each area of the development team, like the engineer, the industrial designer or the advertiser, needs something different from the research. The effective market researcher will be able to translate the research into usable insight for each group and put that insight into each group’s language.

    Speed of development is often an issue, and good market research can help streamline the process. Christopher Ireland, CEO of Cheskin explains, “We think people can really get things developed faster if they just spend a little time up front listening to their customer. Because more often than not, what happens is they start developing something and halfway through they got to test it, they find that they’ve made a lot of mistakes and then they have to go back and reinvent for a while. And they lose a lot of time.”

    Research Firm Checklist

    ___Does the firm understand the product innovation process?

    ___Does the firm have experience with market research for product development?

    ___Can the firm provide usable information for each functional area in the language of that area?

    ___Is the research firm willing to be integrated into the development process as a member of the project team, or will they just provide reports and information to the team?

    The Customer Experience

    The current marketplace is a radically changed arena. The needs and concerns of customers from even five years ago may no longer be valid, which underscores how important understanding what the end user wants today is to product development. Companies that rely on the traditional methods of allowing technology or existing products to drive development may find themselves left behind. Research is finding that benefits, or perceived benefits, drive consumers to make purchase decisions. Learning what the customer finds beneficial can be a critical element in a new product design. Cheskin Research has developed the Cheskin Research Design Experience Model to pull together the elements of customers’ everyday interactions with products, built by observing customers as they experienced many hundreds of commercial designs.

    Stage one of this model is “life context.” This refers to the background of the consumer’s life—what the consumer thinks, feels and does. It also includes elements such as beliefs, attitudes and perceptions. Life context does not remain static, as new innovations such as the Internet change the consumer’s behavior. As the behavior changes, the needs of the customer change. This element should be reinvestigated for every new product or idea, as it is in a constant state of flux.

    Examples of research used to explore life context:

    Ethnographic Studies—observing people in their natural context.

    Expert Interviews—learning what experts in the field of interest have to say.

    Identity Studies—determining how and why people feel a certain way about products and companies.

    Customers must make a transition that Cheskin calls “involvement” before entering the second stage of “engagement.” Engagement is the customer’s first interaction with the new design. What makes this stage important is that the design itself should be interesting enough to engage the customer without the help of a specific brand, company or product category.

    The successful design will accomplish three tasks at this stage. The product will have a cognitive presence which triggers one of the five senses and causes customers to make a distinction between the new product and its competitors. The design will grab the customer’s interest through attraction, and the customer will receive communication about the product’s key attributes. This stage may last only a few seconds, but it is vital that the consumer completes the three tasks of cognitive presence, attraction and communication.

    Examples of research used to explore engagement:

    Physiological Response Studies—tracking the consumer’s physical response to the design elements.

    Visual Mapping Studies—tracking what design elements attract the consumer’s eye. This bridges physiological responses and more emotional responses.

    Communications Studies—these explain how the design elements impact the consumer’s perception of the product.

    The third stage is “experience.” At this stage the consumer actually uses the new product and continually assesses the quality of their experience with the product. The task is to create a product that “delivers”, that is, meets the customer’s expectation and then goes a little further to give the customer something extra. You want the customer to go beyond being merely satisfied with the new product to seeing it as something great.

    Examples of research used to examine experience:

    Attitudes and Usage Studies—track how people interact with products and provide insights about this usage.

    Usability Perception Tests—measure how consumers feel about the product’s functionality.

    In-Use Testing—or “beta testing,” in which a select group of consumers uses prototypes of the product and reports on the strengths and limitations they find.

    The final stage is “resolution.” This stage occurs when the product is no longer used by the consumer. It is preceded by a transition process that Cheskin calls “disengagement.” Marketers and designers have traditionally considered disengagement the end of their product’s interaction with the customer, but new environmental standards have changed this view. Product disposal has become an issue that companies must factor into the product development process. Companies must become aware of how the product is disposed of, whether it is returned to the manufacturer, recycled or just sent to the local landfill.

    Examples of research used to explore resolution:

    Customer Satisfaction Surveys—track customers’ ongoing reactions to products and companies.

    Point-Use Studies—track the strategies and processes that consumers use to dispose of products.

    These studies can tell companies whether their customers can move through the next transition of “integration,” which will allow the customer to weave their product perception back into life context and begin the cycle again.
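    The four stages and the named transitions between them can be captured as a simple cycle. This is just a restatement of the model described above as a data structure, not anything taken from the Cheskin document itself.

```python
# The Cheskin experience model as a cycle: four stages, with named
# transitions ("involvement", "disengagement", "integration") where the
# text names them. A toy walk-through of one full loop.
STAGES = ["life context", "engagement", "experience", "resolution"]
TRANSITIONS = {
    ("life context", "engagement"): "involvement",
    ("experience", "resolution"): "disengagement",
    ("resolution", "life context"): "integration",
}

def next_stage(current):
    """Return the stage that follows `current`, wrapping back to the start."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]

stage = "life context"
for _ in range(len(STAGES)):
    following = next_stage(stage)
    via = TRANSITIONS.get((stage, following))
    print(f"{stage} -> {following}" + (f" (via {via})" if via else ""))
    stage = following
```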

    The following questions taken from the Cheskin Research document, “A new perspective on design: focusing on customer experience”, can help you think about your company’s products in relation to this customer-based model.

    1. How deeply does your company seek to understand customers before engaging in design? What might you do to deepen this understanding?

    2. How might it benefit your company’s design process to start with a focus on customers and their concerns rather than on existing technology or products?

    3. Are there any model stages your company is not presently evaluating during the product development process or marketing efforts? If so, how might greater evaluation benefit your customers? Your products? Your company?

    4. Think about how customers experience your product—from initial exposure to disposal. Does anything in the design distract customers from addressing their concerns? Are there any unnecessary points of effort, induced awareness, inconvenience, or irritation?

    5. How might you improve customer’s experience of your product so that they will be favorably predisposed toward repurchasing it? Toward accepting your company’s other product offerings?

    Managing the Process

    The traditional view of product development has a single project team leader overseeing the individual project and each of the functional areas, like technical, marketing, manufacturing, finance and logistics. Many companies are now moving toward a team-based approach with each functional area represented by a leader. These are commonly called cross-functional or multi-functional teams and represent the corporate trend of reducing the hierarchical structure of the firm, or creating a “flat” organization. Although more companies are paying lip service to the idea of a flat organization than are actually implementing it, the use of cross-functional teams can be very effective in product innovation. The Stage-Gate process demands a cross-functional team with players from every function and a team leader to represent them. Although the team structure is dynamic, with members entering and leaving the team as necessary, the project retains a core group of members and a team leader who are involved and responsible through the entire development process.

    With the management of and responsibility for individual projects covered by the project team leader and the core members from the different functional areas, the management issue becomes more one of the entire scope of the company’s product development.

    Portfolio Management

    Portfolio management is defined as a dynamic process which a company uses to regularly review the list of product development projects and allocate resources to the projects in a prioritized manner. The activities involved in portfolio management include reviewing the entire portfolio and comparing the individual projects against each other, making go/kill decisions on individual projects, developing a product strategy for the business and making the strategic resource allocation decisions. This approach is not unlike the concept of managing a portfolio of stocks, where you constantly evaluate the entire group to maximize your return and weed out the weak links.

    Without a good portfolio management system in place a company risks spreading its resources too thinly over a group of weak projects and not making effective go/kill decisions by allowing the weaker projects to progress through the development process. Done correctly, the management of the project portfolio should be a funnel that keeps the effective projects in the stream and weeds out the weaker projects on a regular basis. This gives the strong projects more resources to help maximize their effectiveness. The difficulty in portfolio management is that the process is dynamic, focuses on what might be, and compares projects at differing stages of completion.
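    The funnel idea above can be sketched as a short prioritization routine: score the active projects, kill the weakest, and reallocate the freed budget to the survivors. The scores, the kill threshold and the proportional-allocation rule are hypothetical illustrations, not a prescription from the portfolio-management literature.

```python
# Sketch of one portfolio review pass: projects scoring below a threshold
# are killed and the whole budget is reallocated to the survivors in
# proportion to their (hypothetical) composite scores.

def review_portfolio(projects, budget, kill_below=5.0):
    """projects: dict of name -> composite score.
    Returns (allocations, killed): budget per surviving project, killed names."""
    survivors = {name: s for name, s in projects.items() if s >= kill_below}
    killed = sorted(set(projects) - set(survivors))
    total = sum(survivors.values())
    allocations = {name: round(budget * s / total, 2)
                   for name, s in survivors.items()}
    return allocations, killed

alloc, killed = review_portfolio(
    {"Project A": 8.0, "Project B": 4.0, "Project C": 6.0}, budget=100.0)
print(killed)  # ['Project B']
print(alloc)   # {'Project A': 57.14, 'Project C': 42.86}
```

    Killing the weak project concentrates the full budget on the strong ones, which is the “funnel” effect the text describes.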

    To implement a portfolio management system, a company should approach the process as though it were designing a new product (the new management system) for an end user (the company). By doing this the process is broken into steps with particular goals in mind to ensure that the result will be successful. The first step is to define the requirements for implementing a portfolio management system. This step involves learning about what portfolio management entails by researching literature on the subject and taking a look at what other firms are doing with portfolio management. This step also includes creating a task force within the company to act as the “project team” that develops the process.

    The second step is to design the portfolio management process itself. This step should provide a frame of reference to work from and should define problem areas that need to be addressed, like having a development process that is easily reviewable, such as the Stage-Gate process. This step also involves creating a new product strategy and a portfolio review process. The result of this step should be a portfolio management process that is down on paper and has been reviewed by the task force, users and top management within the company.

    The last step is to implement the process itself. A portfolio process manager should be chosen to cover the day-to-day management of the system, and there should be training for the project team members on the new process. All new and existing projects should be put into the new system as quickly as possible, and performance measures, or metrics, should be defined to allow for evaluation of the entire process.

    The result of the time and resources spent to develop a good portfolio management system is a more effective and accountable process of new product development.

    Resources

    Robert G. Cooper, Scott J. Edgett and Elko J. Kleinschmidt, Portfolio Management for New Products, Addison-Wesley, 1998

    Robert J. Thomas, New Product Development: Managing and Forecasting for Strategic Success, John Wiley & Sons, Inc., 1993

    Kim B. Clark and Steven C. Wheelwright, ed., The Product Development Challenge: Competing Through Speed, Quality and Creativity, A Harvard Business Review Book, 1994

    John A. Hall, Bringing New Products to Market: The Art and Science of Creating Winners, Amacom, 1991

    “A New Perspective on Design: Focusing on Customer Experience”, Volume One, Number One, Cheskin Research, 1998

    Product Development and Management Association www.pdma.org

    Journal of Product Innovation Management www-east.elsevier.com/pim/

  • Detecting malware on mobile devices

    Malware and other dark computer arts will become a problem for smartphones and other mobile devices. It’s definitely a matter of when, not if. This idea to combat the problem seems pretty ingenious. The solution involves checking the device’s RAM for usage patterns or anomalies that expose the presence of malware.

    From the link:

    Yesterday at the RSA Conference in San Francisco, a researcher presented a new way to detect malware on mobile devices. He says it can catch even unknown pests and can protect a device without draining its battery or taking up too much processing power.

    Experts agree that malware is coming to smart phones, and researchers have begun to identify ways to protect devices from malicious software. But traditional ways of protecting desktops against threats don’t translate well to smart phones, says Markus Jakobsson, a principal scientist at Xerox PARC and the person behind the new malware detection technology. He is also the founder of FatSkunk, which will market malware-detection software based on the research.

    Most antivirus software works behind the scenes, comparing new files to an enormous library of virus signatures. Mobile devices lack the processing power to scan for large numbers of signatures, Jakobsson says. Continual scanning also drains batteries. His approach relies on having a central server monitor a device’s memory for signs that it’s been infected, rather than looking for specific software.
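    The quote describes the approach only at a high level. As a toy illustration of the general idea, a server checking a device’s reported memory state and response timing rather than scanning for signatures, here is a heavily simplified sketch. It is not FatSkunk’s actual algorithm, and the digest, latency budget and “known-good” baseline are invented for the example.

```python
# Toy server-side attestation check (NOT the real FatSkunk technique):
# the device hashes a memory region on demand; the server verifies the
# digest against a known-good baseline and checks the response time,
# since hidden malware must spend extra effort covering its tracks.
import hashlib
import time

KNOWN_GOOD = hashlib.sha256(b"clean-memory-image").hexdigest()
MAX_LATENCY = 0.5  # seconds; hypothetical timing budget

def attest(device_memory, respond):
    start = time.monotonic()
    digest = respond(device_memory)  # device computes its own hash
    elapsed = time.monotonic() - start
    if digest != KNOWN_GOOD:
        return "infected: memory contents changed"
    if elapsed > MAX_LATENCY:
        return "suspicious: response took too long"
    return "clean"

honest_device = lambda mem: hashlib.sha256(mem).hexdigest()
print(attest(b"clean-memory-image", honest_device))    # clean
print(attest(b"clean-memory+malware", honest_device))  # infected: memory contents changed
```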

  • Silicon nanowires may improve solar costs

    Silicon photovoltaics offer incredible solar cell efficiency and now it looks like nanotechnology may offer a way to add low production cost to that mix. This type of headway and improvement is what will make solar a market-viable power option.

    The release:

    Trapping Sunlight with Silicon Nanowires

    MARCH 03, 2010

    Lynn Yarris

    This photovoltaic cell is comprised of 36 individual arrays of silicon nanowires featuring radial p-n junctions. The color dispersion demonstrates the excellent periodicity present over the entire substrate. (Photo courtesy of Peidong Yang)

    Solar cells made from silicon are projected to be a prominent factor in future renewable green energy equations, but so far the promise has far exceeded the reality. While there are now silicon photovoltaics that can convert sunlight into electricity at impressive 20 percent efficiencies, the cost of this solar power is prohibitive for large-scale use. Researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab), however, are developing a new approach that could substantially reduce these costs. The key to their success is a better way of trapping sunlight.

    “Through the fabrication of thin films from ordered arrays of vertical silicon nanowires we’ve been able to increase the light-trapping in our solar cells by a factor of 73,” says chemist Peidong Yang, who led this research. “Since the fabrication technique behind this extraordinary light-trapping enhancement is a relatively simple and scalable aqueous chemistry process, we believe our approach represents an economically viable path toward high-efficiency, low-cost thin-film solar cells.”

    Yang holds joint appointments with Berkeley Lab’s Materials Sciences Division and the University of California Berkeley’s Chemistry Department. He is a leading authority on semiconductor nanowires – one-dimensional strips of materials whose width measures only one-thousandth that of a human hair but whose length may stretch several microns.

    “Typical solar cells are made from very expensive ultrapure single crystal silicon wafers that require about 100 micrometers of thickness to absorb most of the solar light, whereas our radial geometry enables us to effectively trap light with nanowire arrays fabricated from silicon films that are only about eight micrometers thick,” he says. “Furthermore, our approach should in principle allow us to use metallurgical grade or “dirty” silicon rather than the ultrapure silicon crystals now required, which should cut costs even further.”

    Yang has described this research in a paper published in the journal NANO Letters, which he co-authored with Erik Garnett, a chemist who was then a member of Yang’s research group. The paper is titled “Light Trapping in Silicon Nanowire Solar Cells.”

    A radial p-n junction consists of a layer of n-type silicon forming a shell around a p-type silicon nanowire core. This geometry turns each individual nanowire into a photovoltaic cell.

    Generating Electricity from Sunlight

    At the heart of all solar cells are two separate layers of material, one with an abundance of electrons that functions as a negative pole, and one with an abundance of electron holes (positively-charged energy spaces) that functions as a positive pole. When photons from the sun are absorbed, their energy is used to create electron-hole pairs, which are then separated at the interface between the two layers and collected as electricity.

    Because of its superior photo-electronic properties, silicon remains the photovoltaic semiconductor of choice but rising demand has inflated the price of the raw material. Furthermore, because of the high-level of crystal purification required, even the fabrication of the simplest silicon-based solar cell is a complex, energy-intensive and costly process.

    Yang and his group are able to reduce both the quantity and the quality requirements for silicon by using vertical arrays of nanostructured radial p-n junctions rather than conventional planar p-n junctions. In a radial p-n junction, a layer of n-type silicon forms a shell around a p-type silicon nanowire core. As a result, photo-excited electrons and holes travel much shorter distances to electrodes, eliminating a charge-carrier bottleneck that often arises in a typical silicon solar cell. The radial geometry array also, as photocurrent and optical transmission measurements by Yang and Garrett revealed, greatly improves light trapping.

    “Since each individual nanowire in the array has a p-n junction, each acts as an individual solar cell,” Yang says. “By adjusting the length of the nanowires in our arrays, we can increase their light-trapping path length.”

    While the conversion efficiency of these solar nanowires was only about five to six percent, Yang says this efficiency was achieved with little effort put into surface passivation, antireflection, and other efficiency-increasing modifications.

    “With further improvements, most importantly in surface passivation, we think it is possible to push the efficiency to above 10 percent,” Yang says.

    Combining a 10 percent or better conversion efficiency with the greatly reduced quantities of starting silicon material and the ability to use metallurgical grade silicon should make the use of silicon nanowires an attractive candidate for large-scale development.

    As an added plus Yang says, “Our technique can be used in existing solar panel manufacturing processes.”

    This research was funded by the National Science Foundation’s Center of Integrated Nanomechanical Systems.

    Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research for DOE’s Office of Science and is managed by the University of California. Visit our website at http://www.lbl.gov.


    Peidong Yang (Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs)

    Additional Information

    For more about the research of Peidong Yang and his group, visit the Website at http://www.cchem.berkeley.edu/pdygrp/main.html

    For more about the Center of Integrated Nanomechanical Systems (COINS) visit the Website at http://mint.physics.berkeley.edu/coins/

  • National Broadband Plan seeks $25B

    The United States lags in broadband access, plus infrastructure investment of this nature is an investment in the future of the nation. An example of good government spending.

    From the link:

    Federal Communications Commission Chairman Julius Genachowski’s coming National Broadband Plan will propose up to $25 billion in new federal spending for high-speed Internet lines and a wireless network for police and firefighters as part of a broader plan that appears to be a win for wireless companies.

  • Nanotech and skin care

    Nanotechnology is changing diverse areas from electronics to medicine and even skin care. Here’s a release from the American Academy of Dermatology that just hit the inbox:

    Sizing Up Nanotechnology: How Nanosized Particles May Affect Skin Care Products

    MIAMI, March 4 /PRNewswire-USNewswire/ — The rapidly growing field of nanotechnology and its future use in cosmetic products holds both enormous potential and potential concern for consumers. Currently, major cosmetic manufacturers have imposed a voluntary ban on the use of nanoparticles in products while they await a ruling from the Food and Drug Administration (FDA) regarding the safety of this technology.  However, these manufacturers know that when ingredients in products such as sunscreens and anti-aging products are converted into nanosized particles, the end product displays unique properties that can benefit the skin in ways that otherwise could not be achieved using larger-sized particles.

    Speaking today at the 68th Annual Meeting of the American Academy of Dermatology (Academy), dermatologist Adnan Nasir, MD, PhD, FAAD, clinical assistant professor in the department of dermatology at the University of North Carolina in Chapel Hill, presented an overview of nanotechnology and how nanoparticles may eventually be used in cosmetic products.

    “Research in the area of nanotechnology has increased significantly over the years, and I think there will be considerable growth in this area in the near future,” said Dr. Nasir. “The challenge is that a standard has not been set yet to evaluate the safety and efficacy of topical products that contain nanosized particles.”

    Nanotechnology: On the Plus Side

    Products incorporating nanotechnology are being developed and manufactured at an ever-growing rate, especially among clothing manufacturers that incorporate nanomaterials into fabrics to enhance stain and wrinkle resistance, and water repellence.  However, Dr. Nasir explained that a substantial proportion of patents issued for nanotechnology-based discoveries are currently in the realm of cosmetic and consumer skin care products. In fact, the cosmetic industry leads all other industries in the number of patents for nanoparticles, which have the potential to enhance sunscreens, shampoos and conditioners, lipsticks, eye shadows, moisturizers, deodorants, after-shave products and perfumes.

    One example of how nanoparticles are being considered for use is to improve some of the undesirable properties of skin care products. Dr. Nasir explained that when certain ingredients are included in micrometer-sized particles, which are considerably larger than nanosized particles, the result is a product that can be cosmetically unappealing.

    For example, one common ingredient in broad-spectrum sunscreens, which protect the skin from both UVA and UVB rays, is avobenzone, which can make a sunscreen greasy and very noticeable when applied to the skin. Since titanium, another common sunscreen ingredient, requires an oily mixture to dissolve, a white residue can be apparent on the skin upon application. However, when these active ingredients in sunscreens are converted into nanoparticles, they can be suspended in less greasy formulations – which seem to vanish on the skin and do not leave a residue – while retaining their ability to block UVA and UVB light.

    “While widespread use of this technology is currently under evaluation, I think one of the main benefits of nanoparticles used in sunscreens will be that the particles can fit into all the nooks and crannies of the skin, packing more protection and more even coverage on the skin’s surface than microsized particles,” said Dr. Nasir. “Since sunscreen formulations using nanoparticles may be more cosmetically appealing and seem to vanish when applied, consumers may be more inclined to use them on a regular basis.”

    Nanotechnology also is generating excitement for its potential use in anti-aging products. When properly engineered, nanomaterials may be able to topically deliver retinoids, antioxidants and drugs such as botulinum toxin or growth factors for rejuvenation of the skin in the future.

    In anti-aging products, Dr. Nasir added, nanotechnology may allow active ingredients that would not normally penetrate the skin to be delivered to it. For example, vitamin C is an antioxidant that helps fight age-related skin damage and works best below the top layer of skin. In bulk form, vitamin C is not very stable and has difficulty penetrating the skin. However, in future formulations, nanotechnology may increase the stability of vitamin C and enhance its ability to penetrate the skin.

    “Since anti-aging products that contain nanoparticles of antioxidants will be harder to make, we expect that these products will cost more than products using traditional formulations,” said Dr. Nasir. “Once these products are determined to be safe, the consumer will have to decide if the increased costs are worth the added benefits.”

    Nanotechnology: Future Melanoma Treatment

    Researchers also are exploring the use of nanomaterials for the treatment of melanoma. In particular, gold fashioned into nanomaterials called nanoshells has shown promise as a treatment for melanoma in animal studies.

    According to Dr. Nasir, gold nanoshells can be engineered to absorb specific wavelengths of light. If the wavelength of light unique to a particular type of gold nanoshell is used on it, the particle generates heat. In one animal study conducted at MD Anderson Cancer Center in Houston, investigators joined gold nanoshells with a molecule that homes in on melanoma. When these gold nanoshells were injected into mice harboring melanoma, the nanoshells accumulated in the cancerous tissue. When the mice were illuminated with the proper wavelength of light, their tumors, laden with gold nanoshells, heated up and were effectively destroyed. The surrounding tissue, which lacked targeted gold nanoshells, was unharmed.

    “Nanotechnology holds promise for new non-invasive treatment methods, particularly for challenging dermatologic conditions, such as atopic dermatitis and ichthyosis,” said Dr. Nasir.

    Nanotechnology: More Consumer Information Needed

    Because the skin is the first point of contact and the first line of defense for newly manufactured nanomaterials, Dr. Nasir noted that many dermatologists have concerns about the potential health risks posed by nanotechnology. “Although nanotechnology is an exciting area that holds enormous potential,” said Dr. Nasir, “we anxiously await the FDA’s review of the safety of nanoparticles which will determine their future role in skin care products.”

    Headquartered in Schaumburg, Ill., the American Academy of Dermatology (Academy), founded in 1938, is the largest, most influential, and most representative of all dermatologic associations. With a membership of more than 16,000 physicians worldwide, the Academy is committed to: advancing the diagnosis and medical, surgical and cosmetic treatment of the skin, hair and nails; advocating high standards in clinical practice, education, and research in dermatology; and supporting and enhancing patient care for a lifetime of healthier skin, hair and nails. For more information, contact the Academy at 1-888-462-DERM (3376) or www.aad.org.

    Source: American Academy of Dermatology

    Web Site:  http://www.aad.org/

  • Latest Beige Book outlines slow recovery

    The quick recap — yep, things are getting better, and no, not very quickly. And all that snow in February didn’t help things. The coming unemployment report is expected to show the rate rising to 9.8 percent.

    From the link:

    Of the Fed’s 12 regions surveyed, nine showed improvement. The Richmond district, which includes Maryland, Virginia and the Carolinas, was hurt the most by the bad winter. That region reported economic activity had “slackened or remained soft across most sectors” because of the weather.

    The economic setbacks from the weather come at a fragile time: The economy is struggling to recover from the worst and longest recession since the 1930s.

    After a big growth spurt at the end of 2009, many economists believe the recovery lost steam in the first three months of this year. They predict it will grow at a pace of around 3 percent from January to March. That won’t be fast enough to drive down the unemployment rate, now at 9.7 percent.

    The jobs market “remained soft throughout the nation,” the Fed reported.

  • Microsoft wants to tax you …

    … to help pay for correcting its sieve-like OS and application coding. Now I’m not saying Microsoft is the only reason malware, phishing, botnets and other cybercrime go on out there, but its shoddy and ubiquitous products are to blame for a very large majority of it. And that statement comes from a Microsoft user and supporter.

    This internet usage tax idea from MS’s “trustworthy computing” veep is the height of stupidly ballsy statements. Maybe Microsoft should remunerate every computer user whose identity has been stolen, data compromised or computer files corrupted or lost due to yet another security fix that came a little too late.

    Taxing internet usage to fix a problem largely caused by a single entity? Not a good idea. Try again, Scott Charney.

    From the link:

    How will we ever get a leg up on hackers who are infecting computers worldwide? Microsoft’s (MSFT) security chief laid out several suggestions Tuesday, including a possible Internet usage tax to pay for the inspection and quarantine of machines.

    Today most hacked PCs run Microsoft’s Windows operating system, and the company has invested millions in trying to fight the problem.

    Microsoft recently used the U.S. court system to shut down the Waledac botnet, introducing a new tactic in the battle against hackers. Speaking at the RSA security conference in San Francisco, Microsoft Corporate Vice President for Trustworthy Computing Scott Charney said that the technology industry needs to think about more “social solutions.”

  • Wednesday video fun — amazing chalk art

    Here’s the title from the YouTube page, “Jamin’s Crazy Chalk Drawing #2 – Where The Wild Things Are.”

    And here’s the video …

    (Hat tip: wakooz)

  • Preserving digital knowledge

    This is a much larger issue than most people realize, and I blogged on this exact topic just over a month ago. The problem is that changing file formats and advancing hardware can render data unreadable. Not because the data file is corrupted (even though that’s a real issue as well), but because no device remains that can access the data on an archaic storage medium. I agree with the opening sentence of this release — it is one of the most pressing challenges of the information age.

    The release:

    Blue Ribbon Task Force Report: Preserving Our Digital Knowledge Base Must Be a Public Priority

    Dollars Won’t Do It Alone: Deluge of Digital Data Needs Economically Sustainable Plans

    February 26, 2010

    By Jan Zverina

    Addressing one of the most urgent societal challenges of the Information Age – ensuring that valued digital information will be accessible not just today, but in the future – requires solutions that are at least as much economic and social as technical, according to a new report by a Blue Ribbon Task Force.

    The Final Report from the Blue Ribbon Task Force on Sustainable Digital Preservation and Access, called “Sustainable Economics for a Digital Planet: Ensuring Long-term Access to Digital Information”, is the result of a two-year effort focusing on the critical economic challenges of preserving an ever-increasing amount of information in a world gone digital. The full report is available online
    at http://brtf.sdsc.edu/biblio/BRTF_Final_Report.pdf

    “The Data Deluge is here. Ensuring that our most valuable information is available both today and tomorrow is not just a matter of finding sufficient funds,” said Fran Berman, vice president for research at Rensselaer Polytechnic Institute and co-chair of the Task Force. “It’s about creating a ‘data economy’ in which those who care, those who will pay, and those who preserve are working in coordination.”

    The challenge in preserving valuable digital information – consisting of text, video, images, music, sensor data, etc. generated throughout all areas of our society – is real and growing at an exponential pace. A recent study by the International Data Corporation (IDC) found that a total of 3,892,179,868,480,350,000,000 (roughly 3.9 sextillion, or 3.9 trillion times a billion) new digital information bits were created in 2008. In the future, the digital universe is expected to double in size every 18 months, according to the IDC report.
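    For a sense of scale, here is a quick back-of-the-envelope check of those IDC figures. It assumes only the standard conversions (8 bits per byte, 10^18 bytes per exabyte); the projection function simply illustrates what the quoted 18-month doubling rate implies, not anything from the report itself.

    ```python
    # Sanity-check the IDC figures quoted above.
    BITS_2008 = 3_892_179_868_480_350_000_000  # new bits created in 2008, per IDC

    # 8 bits per byte, 1 exabyte = 10**18 bytes
    exabytes_2008 = BITS_2008 / 8 / 10**18
    print(f"2008 output: about {exabytes_2008:.0f} exabytes")  # about 487 exabytes

    # "Doubling every 18 months" means a growth factor of 2 ** (months / 18).
    def projected_exabytes(months_ahead: float) -> float:
        """Illustrative projection, assuming the 18-month doubling rate holds."""
        return exabytes_2008 * 2 ** (months_ahead / 18)

    print(f"Implied output three years on: about {projected_exabytes(36):,.0f} exabytes")
    ```

    At that rate, annual output quadruples every three years — which is why the report treats the problem as urgent.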

    While much has been written on the digital preservation issue as a technical challenge, the Blue Ribbon Task Force report focuses on the economic aspect; i.e. how stewards of valuable, digitally-based information can pay for preservation over the longer term. The report provides general principles and actions to support long-term economic sustainability; context-specific recommendations tailored to specific scenarios analyzed in the report; and an agenda for priority actions and next steps, organized according to the type of decision maker best suited to carry that action forward. Moreover, the report is intended to serve as a foundation for further study in this critical area.

    In addition to releasing its report, the Task Force earlier this month announced plans for a one-day symposium to provide a forum for discussion on economically sustainable digital preservation practices. The symposium, to be held April 1 in Washington D.C., will include a spectrum of national leaders from the Executive Office of the President of the United States, the Academy of Motion Picture Arts and Sciences, the Smithsonian Museum, Nature Magazine, Google, and other organizations for whom digital information is fundamental for success.

    Value, Incentives, and Roles & Responsibilities
    The report of the Blue Ribbon Task Force focuses on four distinct scenarios, each having ever-increasing amounts of preservation-worthy digital assets in which there is a public interest in long-term preservation: scholarly discourse, research data, commercially owned cultural content (such as digital movies and music), and collectively produced Web content (such as blogs).

    “Valuable digital information spans the spectrum from official e-documents to some YouTube videos. No one economic model will cost-effectively support them all, but all require cost-effective economic models,” said Berman, who was director of the San Diego Supercomputer Center at the University of California, San Diego, before joining Rensselaer last year.

    The report categorizes the economics of digital preservation into three “necessary conditions” closely aligned with the needs of stakeholders: recognizing the value of data and selecting materials for longer-term preservation; providing incentives for decision makers to preserve data directly or provide preservation services for others; and articulating the roles and responsibilities among those involved in the preservation process. The report further aligns those conditions with the basic economic principle of supply and demand, and warns that without well-articulated demand for access to preserved digital assets, there will be no supply of preservation services.

    “Addressing the issues of value, incentives, and roles and responsibilities helps us understand who benefits from long-term access to digital materials, who should be responsible for preservation, and who should pay for it,” said Brian Lavoie, research scientist at OCLC and Task Force co-chair. “Neglecting to account for any of these conditions significantly reduces the prospects of achieving sustainable digital preservation activities over the long run.”

    Task Force Recommendations
    The Blue Ribbon panel report cites several specific recommendations for decision makers and stakeholders to consider as they seek economically sustainable preservation practices for digital information. While the report covers these recommendations in detail, below is a summary listing key areas of priority for near-term action:

    Organizational Action

    • develop public-private partnerships, similar to ones formed by the Library of Congress
    • ensure that organizations have access to skilled personnel, from domain experts to legal and business specialists
    • create and sustain secure chains of stewardship between organizations over the long term
    • achieve economies of scale and scope wherever possible

    Technical Action

    • build capacity to support stewardship in all areas
    • lower the costs of preservation overall
    • determine the optimal level of technical curation needed to create a flexible strategy for all types of digital material

    Public Policy Action

    • modify copyright laws to enable digital preservation
    • create incentives and requirements for private entities to preserve on behalf of the public (financial incentives, handoff requirements)
    • sponsor public-private partnerships
    • clarify rights issues associated with Web-based materials

    Education and Public Outreach Action

    • promote education and training for 21st century digital preservation (domain-specific skills, curatorial best practices, core competencies in relevant science, technology, engineering, and mathematics knowledge)
    • raise awareness of the urgency to take timely preservation actions

    The report concluded that sustainable preservation strategies are not built all at once, nor are they static.

    “The environment in which digital preservation takes place can be very dynamic,” said OCLC’s Brian Lavoie. “Priorities change, policies change, stakeholders change. A key element of a robust sustainability strategy is to anticipate the effect of these changes and take steps to minimize the risk that long-term preservation goals will be impacted by short-term disruptions in resources, incentives, and other economic factors. If we can do this, we will have gone a long way toward ensuring that society’s valuable digital content does indeed survive.”

    About the Blue Ribbon Task Force on Sustainable Digital Preservation and Access
    The Blue Ribbon Task Force on Sustainable Digital Preservation and Access was launched in late 2007 by the National Science Foundation and The Andrew W. Mellon Foundation, in partnership with the Library of Congress, the Joint Information Systems Committee of the United Kingdom, the Council on Library and Information Resources, and the National Archives and Records Administration. The Task Force was commissioned to explore the economic sustainability challenge of digital preservation and access. An interim report discussing the economic context for preservation, “Sustaining the Digital Investment: Issues and Challenges of Economically Sustainable Digital Preservation,” is available at the Task Force website, http://brtf.sdsc.edu. Please visit the website for more information about the Task Force and its upcoming symposium, called A National Conversation on the Economic Sustainability of Digital Information, to take place April 1, 2010, in Washington, D.C. A similar symposium will be held in the United Kingdom on May 6, 2010, at the Wellcome Collection Conference Centre in London. Space is limited, so early registration is advised. More information is available at

    http://www.jisc.ac.uk/whatwedo/programmes/preservation/BRTFUKSymposium.aspx