Author: Serkadis

  • Climate Change Modelling Under Attack




    Allow me to make the problem crystal clear.  Mathematics can solve exactly a problem with two objects and one controlling nonlinear variable; we call it the two-body problem.  Until the introduction of my work in the recently accepted paper mentioned in my profile, no other exact tools properly existed.
    The climate is influenced by a number of nonlinear variables in a three-dimensional space, not least being the daily solar cycle.  If we could measure them properly and actually put together a convincing equation, it would be possible to estimate things on a continuing basis and to hope the result is meaningful.
    The problem is divergence.
    Everything about the climate predicts a self correcting system of processes that act and react to ensure that the net result is zero. We call it weather.
    This means that if you set out to prove divergence you need extraordinary proof.
    It has not been forthcoming for the CO2 hypothesis, and this recognition has been postponed by suppressing the data.
    The computer models are estimation systems that will be right or wrong often depending on the bias of the observer.
    Scientists’ use of computer models to predict climate change is under attack
    Washington Post Staff Writer 

    Tuesday, April 6, 2010

    The Washington Nationals will win 74 games this year. The Democrats will lose five Senate seats in November. The high Tuesday will be 86 degrees, but it will feel like 84.

    And, depending on how much greenhouse gas emissions increase, the world’s average temperature will rise between 2 and 11.5 degrees by 2100.

    The computer models used to predict climate change are far more sophisticated than the ones that forecast the weather, elections or sporting results. They are multilayered programs in which scientists try to replicate the physics behind things such as rainfall, ocean currents and the melting of sea ice. Then, they try to estimate how emissions from smokestacks and auto tailpipes might alter those patterns in the future, as the effects of warmer temperatures echo through these complex and interrelated systems.

    To check these programs’ accuracy, scientists plug in data from previous years to see if the model’s predictions match what really happened.
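One hedged way to picture this check is a toy "hindcast" score: run the model over past years and measure how far its output lands from what was observed. Everything below is invented for illustration (the anomaly series, the years); real validation compares full model output against observational datasets.

```python
# Toy sketch of a "hindcast" check: compare model output for past years
# against observations. All numbers here are invented for illustration,
# not real climate data.

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length series."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5

# Invented global temperature anomalies (deg C) for five past years.
model_output = [0.30, 0.35, 0.42, 0.40, 0.48]
observations = [0.28, 0.38, 0.40, 0.44, 0.47]

error = rmse(model_output, observations)
print(f"hindcast RMSE: {error:.3f} deg C")  # a small error means the model tracks the past
```

A modeler would repeat this over many variables and decades; a low error against the past is what earns a model any credibility about the future.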

    But these models still have the same caveat as other computer-generated futures. They are man-made, so their results are shaped by human judgment.

    This year, critics have harped on that fact, attacking models of climate change that have been used to illustrate what will happen if the United States and other countries do nothing to limit greenhouse gas emissions. Climate scientists have responded that their models are imperfect, but still provide invaluable glimpses of change to come.

    They have found themselves trying to persuade the public — now surrounded by computerized predictions of the future — to believe in these.

    If policymakers don’t heed the models, “you’re throwing away information. And if you throw away information, then you know less about the future than we actually do,” said Gavin Schmidt, a climate scientist at NASA’s Goddard Institute for Space Studies.

    “You can say, ‘You know what, I don’t trust the climate models, so I’m going to walk into the middle of the road with a blindfold on,’ ” Schmidt said. “But you know what, that’s not smart.”

    Climate scientists admit that some models overestimated how much the Earth would warm in the past decade. But they say this might just be natural variation in weather, not a disproof of their methods.

    As computers have become faster and cheaper, models both simple and sophisticated have proliferated across government, business and sports, appearing to offer precise answers to questions that used to be rhetorical.

    How many games will the Redskins win next season?

    The Web site Footballoutsiders.com, which uses computers to show fans hidden dimensions of pro football, uses a model with about 80 variables. It looks at a team’s third-down conversions, the experience of its coaches, even the age of its defensive backs.

    No crystal balls

    How much cleaner would the Chesapeake Bay be if it had twice as many oysters?

    The Environmental Protection Agency uses a model that divides the bay into 55,000 slices, and maps how pollution progresses through them, from upstream tributaries into the deeper waters of the Chesapeake. It could imagine thousands more oysters — which filter water as they feed — and watch cleaner water spread out via currents and tides.

    But, some of the time, these electronic futures haven’t come true.

    The Footballoutsiders.com site predicted the Redskins would win 7.8 games in 2009. The real-world team won four. The EPA’s Chesapeake Bay model has been criticized repeatedly for over-optimism, for creating a virtual bay that looked cleaner than the real one. Last month, another model’s prediction was busted: a Georgia Tech professor’s computer said Kansas would win the NCAA men’s basketball tournament. The Jayhawks lost in the second round.

    These and other models are only as smart as the scientists who build them — they rely on data that scientists have gathered about the real world, and the accuracy of estimates about how all the factors fit together (Is an experienced coach more or less important than young defensive backs?).

    They also depend on the computers running them. To accurately depict how individual clouds form and disappear, for instance, the computers that model climate change would need to be a million times faster. For now, the effects of clouds have to be estimated.

    But scientists say complexity doesn’t guarantee accuracy. The best test of a model is to check it against reality.

    “We’re never going to perfectly model reality. We would need a system as complicated as the world around us,” said Ken Fleischmann, a professor of information studies at the University of Maryland. He said scientists needed to make the uncertainties inherent in models clear: “You let people know: It’s a model. It’s not reality. We haven’t invented a crystal ball.”

    Scientists say they don’t need models to know that the world is warming: There is plenty of real-world evidence, gathered since the mid-1800s, to suggest that. “There’s no climate model in that conclusion,” said Christopher Field, of the Carnegie Institution for Science in California.

    There are more than a dozen such models running around the world: mega-computers whose job is creating a virtual Earth.

    These usually combine a weather simulation with other programs that mimic effects of rain and sun on the land, currents in the ocean, and emissions of greenhouse gases. First, these models imagine all the factors interacting within a “grid box” — an imaginary cube of land, water and sky that might be 60 miles long and 60 miles wide.

    Then, the computer imagines effects in one box spilling into the next, and so on.

    As the model runs, imaginary cold fronts sweep over virtual oceans, simulating weather at rates such as five years per day. In some cases, the models are re-run with different weather conditions, until a pattern emerges in global temperatures.
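A minimal sketch of the box-to-box coupling described above, assuming a cartoon one-dimensional row of grid boxes that do nothing but exchange heat with their neighbors (real climate models use 3-D grids with full physics for rain, currents and ice; this only illustrates how effects in one box spill into the next):

```python
# Cartoon of the "grid box" idea: a 1-D row of boxes, each with a
# temperature, where each step lets heat spill into neighboring boxes.

def step(temps, k=0.25):
    """One diffusion step: each interior box exchanges heat with its neighbors."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + k * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# One hot box in an otherwise uniform row.
boxes = [0.0] * 9
boxes[4] = 10.0

for _ in range(50):
    boxes = step(boxes)

# The initial spike has spread outward; the end boxes are held fixed at 0,
# so some heat "leaks" out of the row at the boundaries.
print([round(t, 2) for t in boxes])
```

Running the same toy repeatedly with slightly different starting temperatures, and looking for what stays the same across runs, is the miniature analogue of re-running a climate model until a pattern emerges.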

    The pattern is the point. It is man’s signature, a guide to what could happen in the real world. All the major climate models seem to show that greenhouse gases are causing warming, climate scientists say, although they don’t agree about how much. A 2007 United Nations report cited a range of estimates from 2 to 11.5 degrees over the next century.

    “It’s an educated, scientifically based guess,” said Michael Winton, an oceanographer at the National Oceanic and Atmospheric Administration. “But it’s a guess nonetheless.”

    Raining on their parade

    But Warren Meyer, a mechanical and aerospace engineer by training who blogs at www.climate-skeptic.com, said that climate models are highly flawed. He said the scientists who build them don’t know enough about solar cycles, ocean temperatures and other things that can nudge the earth’s temperature up or down. He said that because models produce results that sound impressively exact, they can give off an air of infallibility.

    But, Meyer said — if the model isn’t built correctly — its results can be both precise-sounding and wrong.

    “The hubris that can be associated with a model is amazing, because suddenly you take this sketchy understanding of a process, and you embody it in a model,” and it appears more trustworthy, Meyer said. “It’s almost like money laundering.”

    Last month, a Gallup poll provided the latest evidence of a public U-turn on climate change. Asked if the threat of global warming was “generally exaggerated,” 48 percent said yes. That was up 13 points from 2008, the highest level of skepticism since Gallup started asking the question in 1997.

    But scientists say that, during this time, they have only become more certain that their models work.

    Put in the conditions on Earth more than 20,000 years ago: they produce an Ice Age, NASA’s Schmidt said. Put in the conditions from 1991, when a volcanic eruption filled the earth’s atmosphere with a sun-shade of dust. The models produce cooling temperatures and shifts in wind patterns, Schmidt said, just like the real world did.

    If the models are as flawed as critics say, Schmidt said, “You have to ask yourself, ‘How come they work?’ ”
  • Chevrolet expanding into South Korea:

    Chevrolet, the most famous and best-selling General Motors brand, will expand into South Korea in 2011, the automaker said on Wednesday.

    Enthusiasts in South Korea are in for a treat: At the Busan International Motor Show where the announcement was made, Chevy showed a Camaro, which will be one of the products offered.

    Last year, Chevy accounted for 44 percent of GM’s global sales and saw a sales increase of 21 percent over the previous year. It’s also seeing an influx of global products, such as the Cruze sedan and the Spark small car.

    Chevrolet may be the most singular American brand–baseball, hot dogs, apple pie and all. But it’s also seeing an increased global presence. Last year, 3.3 million vehicles wearing the bow tie badge were sold in 130 markets around the world.

    South Koreans already know the 99-year-old Chevy brand and its iconic logo: research finds that half are aware of the brand, and more than 80 percent recognize the bow tie.

    “This is indicative of the positive brand image that already exists among consumers in Korea toward Chevrolet,” GM Daewoo CEO Mike Arcamone said in a statement. “We see tremendous upside with its introduction.”

    Chevrolet traces its roots to 1911 and is named after race-car driver Louis Chevrolet. There are different stories as to how the logo came to grace the sheetmetal of Chevys, but one version has it that GM founder Billy Durant saw it in a French hotel room and adapted it to represent his cars.


  • Bellwether Procter & Gamble Falling After Revenue And Outlook Light

    Major economic bellwether Procter & Gamble (PG) is falling slightly in pre-market activity after the company reported squishy earnings.

    EPS of $0.83 beat estimates by a couple of pennies, but revenue of $19.2 billion was well below estimates of $19.5 billion.

    The overall market remains up.


  • Negative Index Metamaterial Designed



    This will clearly be finding its way into solar cells.  It is efficient in operation and design, and even tunable, so an optimum range of incoming light can be accepted.
    It seems plausible that one day we will produce a solar panel able to absorb and largely consume light over a spectrum somewhat larger than the visible portion.  This technology promises to be part of it.
    I want to see more about what graphene can do in terms of converting that energy into electron flow.  We seem to be going there.
    The materials revolution continues.
    Novel negative-index metamaterial that responds to visible light designed
    April 22, 2010
    Arrays of coupled plasmonic coaxial waveguides offer a new approach by which to realize negative-index metamaterials that are remarkably insensitive to angle of incidence and polarization in the visible range. Credit: Caltech/Stanley Burgos
    A group of scientists led by researchers from the California Institute of Technology has engineered a type of artificial optical material—a metamaterial—with a particular three-dimensional structure such that light exhibits a negative index of refraction upon entering the material. In other words, this material bends light in the “wrong” direction from what normally would be expected, irrespective of the angle of the approaching light.
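The "wrong direction" can be made concrete with Snell's law. In the sketch below (illustrative index values, not the paper's measurements), a negative index puts the refracted ray on the same side of the surface normal as the incident ray:

```python
import math

# Snell's law: for light entering a medium of index n2 from a medium of
# index n1, sin(theta_t) = (n1 / n2) * sin(theta_i).
# A negative n2 yields a refraction angle of opposite sign -- the ray
# bends to the "wrong" side of the normal. Index values are illustrative.

def refraction_angle_deg(theta_i_deg, n1=1.0, n2=1.5):
    s = (n1 / n2) * math.sin(math.radians(theta_i_deg))
    return math.degrees(math.asin(s))

normal_glass = refraction_angle_deg(30, n2=1.5)    # ordinary material
negative_nim = refraction_angle_deg(30, n2=-1.5)   # negative-index material

print(f"ordinary n=+1.5: {normal_glass:+.2f} degrees")
print(f"NIM      n=-1.5: {negative_nim:+.2f} degrees")
```

The sign flip is the whole story: every ray crossing into such a material is bent back across the normal, which is what makes flat-lens "superlensing" conceivable.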
    This new type of negative-index metamaterial (NIM), described in an advance online publication in the journal Nature Materials, is simpler than previous NIMs—requiring only a single functional layer—and yet more versatile, in that it can handle light with any polarization over a broad range of incident angles. And it can do all of this in the blue part of the visible spectrum, making it “the first negative index metamaterial to operate at visible frequencies,” says graduate student Stanley Burgos, a researcher at the Light-Material Interactions in Energy Conversion Energy Frontier Research Center at Caltech and the paper’s first author.
    “By engineering a metamaterial with such properties, we are opening the door to such unusual—but potentially useful—phenomena as superlensing (high-resolution imaging past the diffraction limit), invisibility cloaking, and the synthesis of materials index-matched to air, for potential enhancement of light collection in solar cells,” says Harry Atwater, Howard Hughes Professor and professor of applied physics and materials science, director of Caltech’s Resnick Institute, founding member of the Kavli Nanoscience Institute, and leader of the research team.
    What makes this NIM unique, says Burgos, is its engineering. “The source of the negative-index response is fundamentally different from that of previous NIM designs,” he explains. Those previous efforts used multiple layers of “resonant elements” to refract the light in this unusual way, while this version is composed of a single layer of silver permeated with “coupled plasmonic waveguide elements.”
    Surface plasmons are light waves coupled to waves of electrons at the interface between a metal and a dielectric (a non-conducting material like air). Plasmonic waveguide elements route these coupled waves through the material. Not only is this material more feasible to fabricate than those previously used, Burgos says, it also allows for simple “tuning” of the negative-index response; by changing the materials used, or the geometry of the waveguide, the NIM can be tuned to respond to a different wavelength of light coming from nearly any angle with any polarization. “By carefully engineering the coupling between such waveguide elements, it was possible to develop a material with a nearly isotropic refractive index tuned to operate at visible frequencies.”
    This sort of functional flexibility is critical if the material is to be used in a wide variety of ways, says Atwater. “For practical applications, it is very important for a material’s response to be insensitive to both incidence angle and polarization,” he says. “Take eyeglasses, for example. In order for them to properly focus light reflected off an object on the back of your eye, they must be able to accept and focus light coming from a broad range of angles, independent of polarization. Said another way, their response must be nearly isotropic. Our metamaterial has the same capabilities in terms of its response to incident light.”
    This means the new metamaterial is particularly well suited to use in solar cells, Atwater adds. “The fact that our NIM design is tunable means we could potentially tune its index response to better match the solar spectrum, allowing for the development of broadband wide-angle metamaterials that could enhance light collection in solar cells,” he explains. “And the fact that the metamaterial has a wide-angle response is important because it means that it can ‘accept’ light from a broad range of angles. In the case of solar cells, this means more light collection and less reflected or ‘wasted’ light.”
    “This work stands out because, through careful engineering, greater simplicity has been achieved,” says Ares Rosakis, chair of the Division of Engineering and Applied Science at Caltech and Theodore von Kármán Professor of Aeronautics and Mechanical Engineering.
    More information: “A single-layer wide-angle negative index metamaterial at visible frequencies,” Nature Materials, April 2010.

    Provided by California Institute of Technology
  • Here Are The Questions You Should Be Asking About The Gulf Of Mexico Oil Spill

    (This is a guest post from Gail the Actuary at The Oil Drum; it is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.)

    We have all been reading about the blowout that led to huge fire and sinking of the Deepwater Horizon drilling rig in the Gulf of Mexico. Now there is news that there is a huge oil spill coming from the underground pipe where this occurred, and there is a possibility that it will be months before it can be stopped. What does this all mean? How could this happen?


    Figure 1. Forecast area to be covered by oil slick. Image by National Oceanic and Atmospheric Administration (NOAA). Click for larger image.

    Below the fold, I will tell you the story as I understand it. It seems to me that the great depth and attendant pressures, and the learning curve that goes with working within these new parameters, probably contributed to the initial leak and are contributing to the difficulties now being encountered in stopping it.

    This particular well was not an important one–one source said it had economic importance only because of its proximity to a platform which was already in the area. The issues are more the possible environmental damage and the political fallout that could come from the accident. Unfortunately, most of the “easy oil” is gone. The oil that remains all has some challenges–but the fact of the matter is that the world economy cannot run without oil. So there are no easy answers.

    1. Are these spills very common?

    Huge blowouts (explosions followed by fire while a well is being drilled) are uncommon in US waters. The last one was the Santa Barbara Union Oil blowout in 1969, a little over 40 years ago. The leak lasted 11 days, and the spill was estimated at 200,000 gallons (roughly 5,000 barrels of oil), so it was smaller than the current spill. But it was close to shore, and the oil damaged beaches as well as harming wildlife.

    Much more common are oil spills, typically occurring when a ship powered by oil, or a ship carrying oil, collides with another object. The biggest recent oil spill in US waters was the Exxon Valdez oil spill, which occurred in Alaska in 1989. This occurred when an oil tanker ran aground, and spilled 10.9 million gallons (250,000 barrels of oil). If the current spill is 1,000 barrels a day, the Exxon Valdez spill would be the equivalent of the spill continuing for eight months. No one expects the current spill to continue for that long.
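The comparison above is simple arithmetic worth making explicit (all figures come from the article itself):

```python
# Checking the article's comparison: Exxon Valdez spilled about 250,000
# barrels; the current leak is estimated at about 1,000 barrels per day.
# Figures are the article's, not independent estimates.

GALLONS_PER_BARREL = 42

exxon_valdez_barrels = 250_000
leak_rate_bpd = 1_000

daily_gallons = leak_rate_bpd * GALLONS_PER_BARREL   # 42,000 gallons/day
days_to_match = exxon_valdez_barrels / leak_rate_bpd
months_to_match = days_to_match / 30.4               # average month length

print(f"{days_to_match:.0f} days, roughly {months_to_match:.1f} months")
```

At 1,000 barrels a day, it would take about 250 days to equal the Exxon Valdez, which is where the article's "eight months" comes from.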

    An analysis from NOAA shows that there have been many oil spills from ships over the years. Modern double hull oil tankers are not as susceptible to spills, but the many small ships (especially low budget, unlicensed ships) carrying goods of all types can and do run aground, causing smaller spills. The US Coast Guard has regulations to try to prevent problems of this type.

    Besides spills, there are naturally occurring underground seeps that allow hydrocarbons to enter the water. In fact, it is these seeps that led to the discoveries of many of the oil deposits found at sea. National Geographic talks about huge underwater asphalt volcanos being discovered off of California, caused by underwater eruptions of hydrocarbons. These eruptions likely caused huge oil slicks.

    2. How did the blowout occur?

    The earliest oil wells in the Gulf of Mexico were in shallow waters near the coast. But as these wells have become depleted, it has been necessary to drill in ever-deeper waters. When one drills in deeper water, the challenges are greater–the pressures are greater, the temperature of the oil is higher, and the stresses on the metals involved are greater.

    The oil industry is creating ever-more technologically advanced equipment to deal with these issues, but the fact remains that it is virtually impossible to solve every new problem that may arise through computer simulations. If one tweaks one part of the equipment to make it stronger (to deal with the higher pressures, and greater temperature differential between the hot oil and the cold water), it can cause unforeseen problems with another system that interacts with it.

    Unfortunately, there is an element of trial and error whenever technology attempts to overcome new hurdles. These issues aren’t unique to oil and gas–they are just as much challenges to any new technology, including offshore wind and carbon capture and storage technology. While one would like to move smoothly from one technology to the next, in a short time frame, one really must test equipment in the real world. This means progress tends not to be as fast as one would like: it often is punctuated by setbacks when something that looks like it would work in computer simulations, doesn’t really work, or when some unforeseen combination of events takes place.

    We don’t yet know precisely what happened to cause the blowout–there will no doubt be months of investigations. The basic idea of what happened is that Transocean, under contract with BP, was attempting to drill a new well, not too far from existing wells in a deep water area of the Gulf of Mexico. The well was almost complete–in fact, the well seemed to be far enough along that the danger of blowout appeared to be very low. The casing had been cemented, and work was being done on getting a production pipe installed.

    Apparently, a pressure surge occurred that could not be controlled. While the equipment includes all kinds of controls and alarms, and a huge 450 ton device called a blowout preventer, somehow it was still not possible to control the hydrocarbon flow. At such high pressures, some of the natural gas separated from the oil within the hydrocarbon stream and ignited causing the explosion.

    Some of our readers have provided their ideas as to what might have happened. Rockman has suggested that the strength of the pipes (to withstand the underwater pressure) might have made it impossible for the shear rams in the blowout preventer to slam shut and cut off the pipe, as they were intended to do. Westexas has suggested that perhaps metallurgical failure at such great depths may have contributed to the accident. It is possible that there was some element of human error as well. Without a thorough investigation, it is impossible to know exactly what happened, and even then, there are likely to be gaps in our knowledge.

    3. What is being done to stop the leak?

    For the last several days, BP has been trying to use sub-sea robots, operating at 5,000 feet below the surface, to engage the blowout preventer and turn off the flow, which seems to amount to about 1,000 barrels (42,000 gallons) per day. With each day that passes, the chance of this working would seem to go down. If the blowout preventer didn’t activate properly originally, and hasn’t engaged during past attempts by robots, why would a new attempt work any better?

    There are two alternative approaches BP is using to cut off the flow. One approach is to drill a second well to intercept the first well, and inject a special heavy fluid to cut off the flow. Workers will then permanently seal the first well. This procedure is expected to take several months.

    The other approach is designing and fabricating an underwater collection device (dome) that would trap escaping oil near the sea floor and funnel it for collection. According to NOAA, this approach has been used successfully in shallower water but never at this depth (approximately 5,000 feet). NOAA reports construction of such a dome has already begun.

    Until one of these plans works, the approach is to try to control the oil that rises to the surface. According to one source:

    BP is throwing all the resources it has available at the spill, so the cost to the company may be substantial. It has deployed 32 spill response ships and five aircraft to spray up to 100,000 gallons of chemical dispersant on the slick and skim oil from the surface of the water and deploy floating barriers to trap the oil.

    Another approach that is being tried is burning the oil trapped on the surface. This approach would seem to work best when seas are calm.

    Even with these approaches, there is a significant chance the spill will reach shore, perhaps by this weekend. Even if it remains at sea, it can be damaging to marine life.

    4. How important are wells such as this one for oil production?

    In general, the world is running short of good places to drill for oil. There are a few places where oil still can be extracted inexpensively, but these are becoming fewer and fewer in number. What we seem to have left is expensive hard-to-extract oil, especially in this hemisphere.

    In the absence of new deep water wells in the Gulf of Mexico, Gulf oil production would likely be declining.


    Figure 2. Graph of Gulf of Mexico Production in Federal Offshore Region, produced by the EIA.

    It can be seen from Figure 2 that between 2003 and 2008, oil production in this region was declining, but in the last couple of years, as deep water wells have started coming on board, it has begun increasing again. If companies are successful in drilling more deep water wells, oil production in the Gulf of Mexico may grow again, perhaps to 2 or even 2.5 million barrels a day, before resuming its decline. This would not be a huge amount relative to world production of crude oil of 73 million barrels a day, but compared to the US’s crude oil production of 5.4 million barrels a day, this would be a substantial part. In the absence of deep water drilling, Gulf of Mexico production would likely continue to decline, as it did in the 2003 to 2008 period.
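Putting the production figures from the paragraph above side by side makes the point concrete (illustrative arithmetic only, using the article's numbers in million barrels per day):

```python
# All figures are the article's: million barrels per day of crude oil.

world_crude = 73.0
us_crude = 5.4
gulf_potential = 2.5   # upper end of the article's Gulf of Mexico estimate

share_of_world = gulf_potential / world_crude * 100
share_of_us = gulf_potential / us_crude * 100

print(f"Gulf deep water at 2.5 mbpd: {share_of_world:.1f}% of world crude output")
print(f"                             {share_of_us:.0f}% of US crude output")
```

A few percent of world supply, but nearly half of domestic crude production, which is why the deep water matters so much to the US in particular.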

    Governmental agencies like to talk about “liquids” as if they were all equivalent to crude oil, but they really aren’t. On a “liquids” basis, the US is said to produce 9.3 million barrels a day, including ethanol, natural gas liquids, and the expansion in volume that occurs when refineries combine US natural gas with crude oil imported from overseas as part of the refining process. (As mentioned above, crude oil production is only 5.4 million barrels a day.) Deep water oil is generally good-quality oil that can be refined to produce the products our economy needs, whereas these other products have lower energy value and generally lower prices. Losing high-quality oil would be much more of a blow than losing the lower-quality products that have been added to the reporting category, disguising our true shortage of high-quality crude.

    The world is at this point struggling with financial difficulties. We like to think that our current order of things, in which we can depend on imports from abroad, will continue indefinitely. But I do not think this will be the case. As more and more countries (Greece, Portugal, and Spain, to start with) struggle with their debts, oil exporters will have less and less willingness to sell oil to those with questionable credit. Many in this country think that the US is immune to this problem, but if it turns out that the US has difficulties as well, we may lose even more true oil than a comparison based on overstated “liquids” would seem to suggest. In that case, we would be very happy to have some home-produced oil, at least for a short time, while it lasts.

    5. The natural order of things.

    Most of us don’t take time to think about what the true natural order of the world is. If we go back 100,000 years, there were no cars, no superhighways, and no oil wells. There were also very many fewer people, no wind turbines, and no computers. There was no problem with ocean acidification. Fish were abundant in the seas. The world was very different then.

    The natural order of things keeps changing, on its own, without our intervention. One type of animal dies out, and another replaces it. Plants undergo natural selection, so as to adapt to changes in the environment.

    In the fossil fuel world, we know there have been changes as well, and there will continue to be. Where there is no cap rock over an oil deposit, hydrocarbons tend to migrate upward. When they do, microbes tend to break them down. Eventually, oil that is not under a tight cap rock tends to disappear, which is why we are having so much difficulty finding oil now.

    The oil that escapes as oil spills will also migrate upward to the water surface, just as it does when it migrates upward through oil seeps. In warm, sunny areas, like the area around the Gulf of Mexico, hydrocarbons that migrate upward will biodegrade fairly quickly, within a few years. Some residue may remain for longer. This will be sticky at first, but then turn to asphalt, before it too breaks down.

    It seems to me that as world oil supplies deplete, the world will tend back toward what I have described as the natural order of things. We won’t be able to support as many people on the earth. Highways will disappear, as governments no longer have funds to resurface them. Without roads, automobiles will no longer be useful. Cows and pigs will decline in numbers. Fish will return to the seas. Plant and animal life will change, to fill in the gap we left. We will really have to fight to avoid this natural rearrangement, and even then, we are not likely to be very successful.

    We have a large number of people who classify themselves as environmentalists. They have a very different view of the world, and of what is important for the long term. One of their concerns is that beaches not be despoiled by what looks like asphalt from oil spills. But these people seem to have little concern about the long stripes of asphalt being used for interstate highways. They are very concerned about the tens of thousands of birds that have been killed by oil spills, but they are not concerned (or not very much concerned) about the billions of fish that are removed from the oceans by fishermen every year. It seems to me that a major part of their concern is not really for the environment–it is for maintaining business as usual (BAU). Having pretty beaches, now. A nice place for their (many) children. Their plan seems to be for a light green BAU.

    6. Where should we be putting our energies now?

    If we lived in a world with plenty of energy, my vote would be with the environmentalists. If we don’t really need the oil, why not just close the industry down? No need to worry about asphalt on our beaches, or our fishermen getting big enough catches. If we need more oil, we can just use our large financial surpluses to buy more oil from abroad. With all the energy, we probably wouldn’t need to worry about enough jobs for the US population either.

    But if we are headed toward an energy-constrained world, it seems to me that we need to be thinking about our choices more clearly.

    Do we really have options for oil that are better? Can we count on world imports? Should we expect Brazil to do real-time experiments, to try to figure out how to extract its deep water oil, and then export it to us? Should we count on the Saudis, with their unaudited reserves and questionable “spare capacity” to keep up their production? Should we expect someone, somewhere, to find four or six new “Saudi Arabias” of additional oil over the next 20 years?

    If we can’t depend on imports, do we have more locally produced oil that we can ramp up? From an environmental point of view, would ramping up the oil sands in Canada be better? Or how about oil shale, out in the dry areas of the US West? Would we be willing to devote scarce water supplies in that area to ramping up oil production?

    It is easy to say that there should be more rules for the oil and gas industry, but there can be a downside to these rules as well. More rules will delay extraction, and will likely lead to a smaller amount of oil extracted, but at a higher price. There is also the question of whether the rules will really prevent oil spills. If the issue is really that new technology has to be tested live, no amount of rules will really fix the situation. There will always be accidents.

    If one is thinking about new rules, one should think about the Sarbanes-Oxley Act. It imposed substantial rules for US public companies after a number of major corporate accounting scandals. But how much good have these rules really been for preventing the financial crisis and aggressive behavior by banks? It seems to me that new rules are usually designed to prevent last year’s problem, not next year’s problem.

    If we don’t have oil, would we rather have coal, and a CO2 sequestration site underneath our homes, as technicians test whether their computer models are really correct about how well the CO2 will stay underground? (If it doesn’t stay underground, it could form a low-lying cloud and smother those in its way.)

    There are indirect implications of a loss of oil, too. A fisherman may have more fish to catch, if all oil spills are prevented. But if, in the process, the fisherman doesn’t have enough fuel for his boat, or his customers don’t have jobs and can’t buy the fish, he is not as well off as he might seem to be.

    Given all of the environmental concerns regarding oil and gas, I can understand why many people would decide that the best decision is to err in the direction of caution regarding future oil production. But if this is the route we take (and even if it isn’t), we need to be thinking about where this puts us relative to the natural order of things. Presumably, with less oil, the downslope in the direction of the natural order of things will be even quicker. While some may not object to this, it would seem to me that it would make the urgency of adapting to the new world order even greater than it would be otherwise. This would suggest that we should be putting our efforts into energy sources that are truly renewable with only local materials – small-scale wind, run-of-stream hydro, and solar of the type that might be used to heat water a bit, but not to create electricity. These would not allow us to maintain BAU, or a world very close to BAU.

    To me, there are no easy choices.


  • AutoblogGreen for 04.29.10

    Know what you want and need, or you may not get to buy a Nissan Leaf
    “We may tell the customer, ‘Look, you’d be better off buying an Altima or a Sentra because your driving patterns are not ideal for this car.’”
    Oklahoma is newest market for natural-gas burning Honda GX
    State number four.
    Report: Tesla’s Elon Musk says new SUV model due in 2013
    More to come after that.

    AutoblogGreen for 04.29.10 originally appeared on Autoblog on Thu, 29 Apr 2010 06:03:00 EST. Please see our terms for use of feeds.


  • Chrysler Sebring to be renamed Nassau, report says:

    The Chrysler Sebring will reportedly get a new name later this year: Nassau.

    The midsize sedan is due for a freshening, including an updated interior, along with its Dodge sibling, the Avenger. The Detroit Free Press, citing anonymous sources, says the new name will be Nassau.

    Chrysler is not commenting on the possible name change.

    “[We] definitely don’t have anything to announce about the possible name change later this year,” spokesman Rick Deneau said.

    The Nassau moniker should ring a bell with car fans. It was the name of a Hemi-powered, four-passenger luxury concept shown at the 2007 Detroit auto show. With striking looks and a prominent grille, the concept displayed a dash of panache – potentially for a future Chrysler. The concept rode on a 120-inch wheelbase and was meant to look more visually compact than a comparable Chrysler 300C, summoning the style of a shooting brake.

    Still, the Sebring refresh is more of an update, so look for the name change to be the extent of the Nassau genetics that make it onto the new sedan. The Nassau name was also used famously by Chrysler in the 1950s.

    Look for the new sedan to get Chrysler’s Pentastar V6 and a Fiat-developed dual-clutch transmission – dramatic upgrades to the powertrain.

    Automotive News also reported in March that the Sebring name will be dropped because the updates are so extensive, according to CEO Sergio Marchionne.

    The Chrysler Sebring is due for updates to the engine, transmission and interior this year. Look for a new name: Nassau.

    The Sebring and the Avenger are sorely in need of updates to increase their competitiveness in a midsize segment loaded with viable entries. Toyota and Honda have long ruled the sales charts, but the Chevrolet Malibu and the Ford Fusion are showing strength in the market as well, as American buyers again consider domestic brands.

    Chrysler’s midsize products have languished as the company endured ownership changes and bankruptcy. Marchionne has made it a priority to strengthen the company’s products in that area with quick changes rather than waiting for full redesigns which could take years.

    Last week, Fiat announced that the Sebring will be built in Turin, Italy, along with the Alfa Romeo Giulia, which is also a midsize sedan.


    Source: Car news, reviews and auto show stories

  • Facebook Like Button Seen by Billions Already

    Facebook created quite a splash with its announcements at the f8 developer conference. The new features are a bold move forward, but plenty of people were a little worried about the power Facebook was seemingly amassing. Facebook has released an overview of the conference and also several interesting numbers. The most interesting: more than 50,000 w… (read more)

  • Canada Will Get Much More Detailed Data in Google Maps

    Canada is a big place and, with much of it being pretty remote and inaccessible, mapping it can be an arduous task. Exploring it is even harder, though, which is why plenty of people probably appreciate the effort companies like Google put into providing accurate mapping data. Google is now taking things one step further and has announced t… (read more)

  • Citroen Nemo flips over in simple avoidance test

    First, America’s Consumer Reports showed the world that the Lexus GX 460’s stability control system was, well, lacking control. Now a British consumer magazine, Which?, has decided to one-up CR and do some testing of its own on a few small-yet-tall MPVs.

    With the recent recall of the Lexus GX 460, there was significant talk and even criticism regarding the relevance of the test and whether or not Toyota was simply being targeted due to their string of recent recalls. The folks from Which? magazine and the German Automobile Association decided to team up and test just how relevant and important stability control systems really are – regardless of manufacturer.

    The fodder
    Which? decided to test the Citroën Nemo Multispace, Fiat Qubo and Peugeot Bipper Teepee. All three vehicles are almost identical in their general shape and design, but only the Fiat Qubo even offered electronic stability control as an option – the others had none, with no ability to add it (in the United Kingdom), as Which? points out.

    The test (More after video)

    The results
    As the video clearly demonstrates, electronic stability control can play a huge role in aiding drivers during evasive maneuvers and avoidance situations. Which? says that in the United Kingdom, vehicles equipped with ESC are 25 percent less likely to be involved in fatal accidents, given the high fatality risk associated with a total loss of control and/or rolling the vehicle.

    “This test highlights the importance of stability control. Which? wants all cars to be fitted with stability control as standard, as research by the Department for Transport has shown ESC-equipped vehicles are involved in 25% fewer fatal road accidents. Such a vital safety feature shouldn’t be optional – it should be built in from the start,” said George Marshall, senior researcher at Which? magazine.

    Manufacturers take notice – respond quickly
    Just as Lexus responded by quickly choosing to test and address the issue with the stability control system on its GX 460, European auto manufacturers also responded to Which?’s evaluation. PSA, the parent company of both Peugeot and Citroën, announced that as a result of the testing it will make ESC a standard feature no later than the fall of 2011.

    References
    1. ‘Citroen Nemo MPV rolls over…’ view

       

    Source: Leftlane

  • Everything’s Looking Up Today (Though Two Big Red Spots Stand Out)

    Per Finviz’s map of the futures market, it’s obvious that folks are feeling more optimistic today — not surprising since the Athens Stock Exchange is shooting up 4%.

    But note there are a couple of bright red spots (not including the Nikkei, which had the day off for a holiday): copper and lumber, both highly cyclical and arguably leading indicators. Keep a particularly close eye on copper.

    Meanwhile, palladium is blowing up.



  • Greece Only Has One Answer: Leave The Euro, And Then Default

    (This guest post comes courtesy of the author’s blog)

        “There is no means of avoiding the final collapse of a boom brought about by credit expansion. The alternative is only whether the crisis should come sooner as the result of a voluntary abandonment of further credit expansion or later as a final and total catastrophe of the currency involved.”

        – Mises

    Nouriel Roubini calls Greece “the tip of the iceberg”.  PIMCO’s El-Erian says Greece is likely to default.  Greece is the news story that keeps on giving.  Unfortunately, the problems in Greece are persistent because the Euro has been a persistently flawed currency system.  Much like the gold standard, a single currency under multiple governments simply does not work in the best interest of all involved.  I have been racking my brain for several weeks now attempting to think of a good outcome in the Greek debt crisis.  As John Mauldin recently wrote, there really are no good solutions here.  I keep coming back to the same conclusion – Greece should leave the EMU and default on its debt.  It sounds quite extreme and could cause some very serious near-term panic, but in the long run I believe the Greeks must do what is in THEIR best interest, and that involves bringing back the drachma and taking control of their currency.  No country should ever relinquish the power to control its own currency and be its own banker.

    Martin Feldstein believes that default in Greece is inevitable and, like me, believes this has been essentially self-imposed by the handcuffs that come along with being a member of the EMU.  Feldstein writes:

    “Greece will default on its national debt. That default will be due in large part to its membership in the European Monetary Union. If it were not part of the euro system, Greece might not have gotten into its current predicament and, even if it had gotten into its current predicament, it could have avoided the need to default.

    There simply is no way around the arithmetic implied by the scale of deficit reduction and the accompanying economic decline: Greece’s default on its debt is inevitable.”

    By joining the EMU, Greece essentially handcuffed itself.  It is not its own banker and has no control over the issuance of the local currency.  As I mentioned above, the problems imposed on Greece by membership in the EMU are similar to what nations have gone through under the gold standard in the past.  A single currency under differing governments simply does not work equally for each nation, because those nations have different economies, different trade needs and differing monetary and fiscal policy needs.  Feldstein elaborates on the current errors in the EMU:

    “Greece might have been able to avoid that outcome if it were not in the eurozone. If Greece still had its own currency, the authorities could devalue it while tightening fiscal policy. A devalued currency would increase exports and would cause Greek households and firms to substitute domestic products for imported goods. The increased demand for Greek goods and services would raise Greece’s GDP, increasing tax revenue and reducing transfer payments. In short, fiscal consolidation would be both easier and less painful if Greece had its own monetary policy.

    Greece’s membership in the eurozone was also a principal cause of its current large budget deficit. Because Greece has not had its own currency for more than a decade, there has been no market signal to warn Greece that its debt was growing unacceptably large.

    If Greece had remained outside the eurozone and retained the drachma, the large increased supply of Greek bonds would cause the drachma to decline and the interest rate on the bonds to rise. But, because Greek euro bonds were regarded as a close substitute for other countries’ euro bonds, the interest rate on Greek bonds did not rise as Greece increased its borrowing – until the market began to fear a possible default.”

    While I believe default would be the most beneficial situation for Greece I am not so certain that they will be allowed to default.  The BIS shows that European banks have almost $190B in Greek exposure and would be unlikely to get more than about 30-40 cents on the dollar – that is an intolerable hit to already fragile European banks.  But I am certain that there is no pretty solution to the current problems.  After all, these are fundamental problems in the structure of the EMU.  It’s simply impossible to coordinate mutually beneficial monetary and fiscal policy under one currency when you have such differing governments and economies.  As a highly indebted trade deficit nation, Greece has substantially different needs than a country such as Germany.  By not controlling their own currencies these nations are unable to properly target the needs of their own economies and best service their citizenry.

    Greece might agree to a bailout package, but this would likely result in years of painful austerity.  In addition it would do nothing to solve the structural problems of their monetary system – the fact that they cannot control their own currency.  This greatly increases the likelihood of future problems.  The next recession is likely to result in one or more nations confronting similar issues.   In that case, we haven’t solved anything.  We’ve simply kicked the can down the road.  Lastly, a Greek bailout assumes that these same problems won’t be confronted in the near future with Ireland, Spain, Italy or Portugal.  If Greece gets the rumored bailout it’s only a matter of time before other countries demand assistance.  JP Morgan estimates that a bailout of Greece, Spain, Portugal and Ireland would cost €600B or 8% of Eurozone GDP.

    There really is no good solution here.  It was a mistake for so many nations with such vastly different economies to enter the EMU, and this crisis is a glaring example.  If I were the Greeks I would be looking into the cleanest and least damaging way to exit the EMU, bring back the drachma and ensure that a foreign central bank never controls my people’s money again.  These are problems that Greece will confront again if it does not confront them now.  But let’s be honest here.  We live in a bailout world.  It’s highly unlikely that Greece will be allowed to default, and that is perhaps the scariest scenario in all of this.  While a bailout would alleviate near-term fears, it does nothing but kick the can down the road.  In addition to years of a very weak and painful Greek economy, it will also set the table for future bailouts and inevitable future problems within the EMU.

    Get more market commentary at PragCap.com >


  • Toyota resumes sales of Lexus GX 460:

    Toyota has resumed sales of the 2010 Lexus GX 460 SUV, two weeks after it announced a recall of the 9,400 models already sold.

    The Japanese automaker stopped sales of the vehicle after Consumer Reports magazine declared the SUV a safety risk, warning readers that it was prone to oversteer in turns.

    Lexus tested the car in response to the warning and made changes to the vehicle’s electronic stability control software. New vehicles are being built with the new software and dealers will install software upgrades on the recalled vehicles.

    The Lexus GX 460 is built on the same platform as the Toyota 4Runner, which — along with the larger Toyota Sequoia and Land Cruiser SUVs — is currently being tested by Toyota to detect any similar concerns.

    A side view of the 2010 Lexus GX 460.

    Source: Car news, reviews and auto show stories

  • How Long Is The Fed’s “Extended Period”? It Can Be A Lot Longer Than You Expect

    Catherine Rampell at the NY Times Economix asks: How Long Is an ‘Extended Period’?

    Short answer: Longer than many analysts expect.

    First we can compare to the “considerable period” language in 2003:

  • June 25, 2003: Lowered Rate to 1%, Unemployment Rate peaked at 6.3%
  • August 12, 2003: “the Committee believes that policy accommodation can be maintained for a considerable period.” Unemployment rate at 6.1%
  • December 9, 2003: Last statement using the phrase “considerable period”. Unemployment rate at 5.7%
  • January 28, 2004: “the Committee believes that it can be patient in removing its policy accommodation.” Unemployment Rate 5.7%
  • May 4, 2004: “the Committee believes that policy accommodation can be removed at a pace that is likely to be measured.” Unemployment Rate 5.6%
  • June 30, 2004: FOMC raised the Fed Funds rate 25 bps. Unemployment Rate 5.6%
  • So “extended period” is probably 6+ months after the language changes – the next meeting is June 23rd and 24th, so the earliest rate hike would probably be in December (barring a significant pickup in inflation or rapid decline in unemployment).

    Click on graph for larger image in new window.

    This graph shows the effective Fed Funds rate (Source: Federal Reserve) and the unemployment rate (source: BLS)

    In the early ’90s, the Fed waited more than a year and a half after the unemployment rate peaked before raising rates. The unemployment rate had fallen from 7.8% to 6.6% before the Fed raised rates.

    Following the peak unemployment rate in 2003 of 6.3%, the Fed waited a year to raise rates. The unemployment rate had fallen to 5.6% in June 2004 before the Fed raised rates.

    Although there are other considerations, if we assume the unemployment rate peaked in October 2009 – and add 18 months – then the Fed would probably wait until early 2011 to raise rates (at the earliest). My guess is the Fed will probably wait until the unemployment rate is closer to 9% before removing the “extended period” language, and it is unlikely they will raise rates until the unemployment rate is below 8%. Last September I wrote: Fed Funds and Unemployment Rate. Here is an excerpt with an updated graph:
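    The timing logic above is simple date arithmetic: take the month the unemployment rate peaked and add the historical lag before the first rate hike. A minimal Python sketch (the peak dates and the 18-month lag come from the text; the `add_months` helper and variable names are my own illustration):

    ```python
    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Return the first of the month `months` calendar months after `d`."""
        total = d.year * 12 + (d.month - 1) + months
        return date(total // 12, total % 12 + 1, 1)

    # 2003 cycle from the timeline above: unemployment peaked June 2003 at 6.3%,
    # and the FOMC raised rates in June 2004 -- roughly a 12-month lag.
    peak_2003 = date(2003, 6, 1)
    assert add_months(peak_2003, 12) == date(2004, 6, 1)

    # Apply the longer early-'90s-style lag (18+ months) to the assumed
    # October 2009 unemployment peak to get the earliest plausible hike.
    peak_2009 = date(2009, 10, 1)
    earliest_hike = add_months(peak_2009, 18)
    print(earliest_hike)  # 2011-04-01
    ```

    The point of the sketch is that even the shortest historical lag pushes the first hike well past 2010, which is why the post treats "extended period" as lasting longer than many analysts expect.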


  • Gulf Oil Leak Is 5X Worse Than Initially Thought, As Desperate Officials Begin To Burn It

    Slick Oil Map

    Yesterday we showed you the gigantic oil slick that’s floating its way towards New Orleans.

    If it looks really big to you, that’s because it is really big.

    According to Bloomberg, the Coast Guard has upped its estimate of the size of the spill from 1,000 barrels per day to 5,000.

    The slick is expected to hit land today.

    Meanwhile the first test burns have begun and Obama has been briefed on the situation.

    More throughout the day as warranted.



  • Baidu Posts Spectacular Revenue Growth Spurred by Google’s China Move

    Everyone saw this one coming: Baidu, the leading Chinese search engine, has posted great financial results in the first quarter after Google abandoned its search operations locally and moved all search efforts to its Hong Kong service. Baidu has been growing in terms of revenue for quite a while now, but these latest numbers … (read more)

  • Mitch Wagner Asks About Ethics Of Downloading Media You Already Paid For

    A few weeks back, we linked to the NY Times’ Ethicist, Randy Cohen, explaining why it’s not unethical to download a digital copy of a book if you’d bought a hard copy of the book — even though it probably violates copyright law. That created quite a lot of anger from folks who felt that it was clearly an ethical violation as well. Mitch Wagner apparently missed that kerfuffle, as he’s written up a short blog post for Computerworld asking people their thoughts on the ethics of downloading media that you purchased legally:


    I recently got a hankering to re-read some of my favorite books. I already own them, in hardcover and paperback. But I’d like to re-read them as e-books. Do I need to buy the e-book versions, or can I download a pirated copy of the e-book for free?

    The argument that says it’s wrong is pretty simple, and clear-cut: When I bought the books, I bought individual copies of the books. All I own is that one copy. If I lost the copy, I wouldn’t be entitled to a free replacement. It wouldn’t be right for me to shoplift the book from the local Barnes & Noble. I’d have an obligation to buy a new copy, or borrow one legitimately, before re-reading the book.

    On the other hand: I already paid for these books legitimately. They’re my books. The shoplifting analogy is specious, because in that case, I’m depriving the rightful owner — the owner of the bookstore — of their copy of the book. If I download a copy of the e-book, nobody else is deprived of their copy.

    However, he goes on to make another point that also deserves some scrutiny:


    Every couple of years, TiVo hiccups and fails to record a favorite TV show. In that case, I have to decide whether to wait for the show to come out on DVD, or just download the episode from the BitTorrents.

    Now there will be people who claim that, because it likely infringes on copyright, doing so is automatically unethical. But morality isn’t determined by the law. In general, I’ve always argued that if the economics increase the overall market and opportunity, then there’s no moral issue to speak of — and it’s hard to see how someone downloading an episode their TiVo missed would harm the overall economy in any way. But, I’m guessing that some folks here will disagree…






  • Euro Ticks Up, Athens Stocks Surge: Did Obama Save The Euro?


    Things seem to be looking a bit up in euroland this morning. A bit.

    The euro is ticking up, and the Athens Stock Exchange is up 4%, obviously on hopes that the worst fears of Krugman and El-Erian won’t be realized.

    So, will the Greeks name their next temple after Obama?

    Yesterday he leaned heavily on Merkel, and it sounds as though she may be softening her stance a little.

    Of course, his pressure does nothing about the German parliament, which still needs to approve a bailout, but still.

    At this early hour on Thursday, things are looking a little brighter. If history is any guide, expect more shoes to drop throughout the day.



  • Yahoo Interested in Buying New Companies

    Yahoo has had a rough time for the past few years, and the company has only recently started showing some optimistic signs. Under CEO Carol Bartz, Yahoo has gotten a lot more focused and has shed a lot of dead weight. Spurred by great financial results in the first quarter, things are looking up, but there are still questions of whether Yahoo has what… (read more)

  • Report: Slow-selling Brilliance to exit European market

    A new report suggests China’s Brilliance Automotive’s run could be over in the European market. The news of Brilliance’s exit from the European market comes just months after the automaker’s European importer – HSO Motors – filed for bankruptcy. Brilliance has been handling its own European distribution since November.

    Citing poor sales, a Brilliance executive revealed to AutoWeek that the company will halt all European sales. Brilliance was one of just a handful of Chinese automakers that managed to crack into the European market.

    Brilliance’s presence in the European market seemed doomed from the start, with the company’s first European offering – the BS6 – receiving zero out of five stars in Germany’s ADAC crash test. Brilliance’s pricing scheme didn’t do the company any favors, either.

    “They didn’t want to lose any money per car,” former HSO boss Hans-Ulrich Sachs revealed. “We told them that this is the entry fee you have to pay to get established in Europe. They told us that we should make the investment to cover the shortfall; that we would have to subsidize the brand.”

    Conflicting reports
    Refuting AutoWeek’s report, one Brilliance insider told the Global Times that the Chinese automaker has no plans to exit the European market. “Brilliance will never pull out of Germany and Europe, even though it is confronted with bleak sales and thin profit margins,” the source said.

    However, Brilliance’s lineup barely conforms to Europe’s current Euro IV regulations, with the Chinese automaker admittedly struggling to comply with Euro V regs. “They abandoned Europe IV in less than a year and a half, and will put Europe V in place in the second half. We can hardly meet the new standards with domestic auto part suppliers. We have to use overseas ones, which will raise our costs,” the source added.

    Even if Brilliance does manage to limp along in the European market in the short-term, its long term future looks bleak. Those within BMW – Brilliance’s China partner – have revealed the automaker has stopped work on meeting the new Euro V standards, with sales well below expectations. Although reports vary, Brilliance has sold no more than 4,000 units since entering the European market in 2006, with some reports as low as 502 units.

    References
    1. ‘China automaker Brilliance…’ view
    2. ‘Brilliance’s future unclear…’ view

       

    Source: Leftlane