The first beams of 2010 circled the LHC earlier today.
Now, here comes the science …
I’ve done tons of blogging on graphene, and this news seems to be in direct competition with the graphene news I covered about a week ago. The issue is turning graphene into a semiconductor so the material can eventually replace silicon in electronic devices. The last link up there goes to a post outlining the concept of using nanoribbons of graphene; the middle link goes to research claiming a “nanomesh” is a superior method of turning the carbon nanomaterial into a semiconductor.
The release:
New graphene ‘nanomesh’ could change the future of electronics
Graphene, a one-atom-thick layer of a carbon lattice with a honeycomb structure, has great potential for use in radios, computers, phones and other electronic devices. But applications have been stymied because the semi-metallic graphene, which has a zero band gap, does not function effectively as a semiconductor to amplify or switch electronic signals.
While cutting graphene sheets into nanoscale ribbons can open up a larger band gap and improve function, ‘nanoribbon’ devices often have limited driving currents, and practical devices would require the production of dense arrays of ordered nanoribbons — a process that so far has not been achieved or clearly conceptualized.
But Yu Huang, a professor of materials science and engineering at the UCLA Henry Samueli School of Engineering and Applied Science, and her research team, in collaboration with UCLA chemistry professor Xiangfeng Duan, may have found a new solution to the challenges of graphene.
In research to be published in the March issue of Nature Nanotechnology (currently available online), Huang’s team reveals the creation of a new graphene nanostructure called graphene nanomesh, or GNM. The new structure is able to open up a band gap in a large sheet of graphene to create a highly uniform, continuous semiconducting thin film that may be processed using standard planar semiconductor processing methods.
“The nanomeshes are prepared by punching a high-density array of nanoscale holes into a single or a few layers of graphene using a self-assembled block copolymer thin film as the mask template,” said Huang.
The nanomesh can have variable periodicities, defined as the distance between the centers of two neighboring nanoholes. Neck widths, the shortest distance between the edges of two neighboring holes, can be as low as 5 nanometers.
This ability to control nanomesh periodicity and neck width is very important for controlling electronic properties because charge transport properties are highly dependent on the width and the number of critical current pathways.
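To make the periodicity/neck-width geometry above concrete, here is a minimal sketch assuming circular holes on a regular lattice, where the neck is roughly the center-to-center periodicity minus the hole diameter. The 27 nm / 22 nm figures are hypothetical illustrations; only the 5 nm neck width comes from the release.

```python
def neck_width(periodicity_nm, hole_diameter_nm):
    """Shortest graphene strip left between two neighboring nanoholes.

    Assumes circular holes on a regular array, so the neck width is
    approximately the center-to-center periodicity minus the hole diameter.
    """
    width = periodicity_nm - hole_diameter_nm
    if width <= 0:
        raise ValueError("holes would overlap or merge")
    return width

# Hypothetical example: a 27 nm periodicity with 22 nm holes leaves a
# 5 nm neck, the smallest neck width reported in the release.
print(neck_width(27, 22))  # 5
```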
Using such nanomesh as the semiconducting channel, Huang and her team have demonstrated room-temperature transistors that can support currents nearly 100 times greater than individual graphene nanoribbon devices, but with a comparable on-off ratio. The on-off ratio is the ratio between the currents when a device is switched on or switched off. This usually reveals how effectively a transistor can be switched off and on.
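The on-off ratio defined above is simple enough to sketch directly; the current values here are arbitrary illustrations, not measurements from the study.

```python
def on_off_ratio(i_on, i_off):
    """Ratio of the channel current in the on state to the off state.

    Higher ratios mean the transistor switches off more cleanly.
    """
    return i_on / i_off

# Illustrative: a device carrying 100 units of current when on and
# 1 unit when off has an on-off ratio of 100.
print(on_off_ratio(100, 1))  # 100.0
```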
The researchers have also shown that the on-off ratio can be tuned by varying the neck width.
“GNMs can address many of the critical challenges facing graphene, as well as bypass the most challenging assembly problems,” Huang said. “In conjunction with recent advances in the growth of graphene over a large-area substrate, this concept has the potential to enable a uniform, continuous semiconducting nanomesh thin film that can be used to fabricate integrated devices and circuits with desired device size and driving current.
“The concept of the GNM therefore points to a clear pathway towards practical application of graphene as a semiconductor material for future electronics. The unique structural and electronic characteristics of the GNMs may also open up exciting opportunities in highly sensitive biosensors and a new generation of spintronics, from magnetic sensing to storage,” she said.
###
The study was funded in part by Huang’s UCLA Henry Samueli School of Engineering and Applied Science Fellowship.
The UCLA Henry Samueli School of Engineering and Applied Science, established in 1945, offers 28 academic and professional degree programs, including an interdepartmental graduate degree program in biomedical engineering. Ranked among the top 10 engineering schools at public universities nationwide, the school is home to seven multimillion-dollar interdisciplinary research centers in wireless sensor systems, nanotechnology, nanomanufacturing and nanoelectronics, all funded by federal and private agencies.
For more news, visit the UCLA Newsroom and follow us on Twitter.
Another incremental step toward functional quantum computing. We don’t need quantum computing just yet, but we will.
The release:
UW-Madison physicists build basic quantum computing circuit
MADISON — Exerting delicate control over a pair of atoms within a mere seven-millionths-of-a-second window of opportunity, physicists at the University of Wisconsin-Madison created an atomic circuit that may help quantum computing become a reality.
Quantum computing represents a new paradigm in information processing that may complement classical computers. Much of the dizzying rate of increase in traditional computing power has come as transistors shrink and pack more tightly onto chips — a trend that cannot continue indefinitely.
“At some point in time you get to the limit where a single transistor that makes up an electronic circuit is one atom, and then you can no longer predict how the transistor will work with classical methods,” explains UW-Madison physics professor Mark Saffman. “You have to use the physics that describes atoms — quantum mechanics.”
At that point, he says, “you open up completely new possibilities for processing information. There are certain calculational problems… that can be solved exponentially faster on a quantum computer than on any foreseeable classical computer.”
With fellow physics professor Thad Walker, Saffman successfully used neutral atoms to create what is known as a controlled-NOT (CNOT) gate, a basic type of circuit that will be an essential element of any quantum computer. As described in the Jan. 8 issue of the journal Physical Review Letters, the work is the first demonstration of a quantum gate between two uncharged atoms.
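The CNOT gate described above can be sketched as the standard textbook 4x4 matrix acting on two-qubit basis states; this is the generic mathematical form of the gate, not the details of the UW-Madison neutral-atom implementation.

```python
# CNOT in the computational basis |control, target>: it flips the target
# qubit only when the control qubit is |1>.
CNOT = [
    [1, 0, 0, 0],  # |00> -> |00>
    [0, 1, 0, 0],  # |01> -> |01>
    [0, 0, 0, 1],  # |10> -> |11>
    [0, 0, 1, 0],  # |11> -> |10>
]

def apply(gate, state):
    """Multiply a 4x4 gate matrix by a 4-component state vector."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

# Control |1>, target |0>: the target flips, giving |11>.
print(apply(CNOT, [0, 0, 1, 0]))  # [0, 0, 0, 1]
# Control |0>, target |1>: nothing changes.
print(apply(CNOT, [0, 1, 0, 0]))  # [0, 1, 0, 0]
```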
The use of neutral atoms rather than charged ions or other materials distinguishes the achievement from previous work. “The current gold standard in experimental quantum computing has been set by trapped ions… People can run small programs now with up to eight ions in traps,” says Saffman.
However, to be useful for computing applications, systems must contain enough quantum bits, or qubits, to be capable of running long programs and handling more complex calculations. An ion-based system presents challenges for scaling up because ions are highly interactive with each other and their environment, making them difficult to control.
“Neutral atoms have the advantage that in their ground state they don’t talk to each other, so you can put more of them in a small region without having them interact with each other and cause problems,” Saffman says. “This is a step forward toward creating larger systems.”
The team used a combination of lasers, extreme cold (a fraction of a degree above absolute zero), and a powerful vacuum to immobilize two rubidium atoms within “optical traps.” They used another laser to excite the atoms to a high-energy state to create the CNOT quantum gate between the two atoms, also achieving a property called entanglement in which the states of the two atoms are linked such that measuring one provides information about the other.
Writing in the same journal issue, another team also entangled neutral atoms but without the CNOT gate. Creating the gate is advantageous because it allows more control over the states of the atoms, Saffman says, as well as demonstrating a fundamental aspect of an eventual quantum computer.
The Wisconsin group is now working toward arrays of up to 50 atoms to test the feasibility of scaling up their methods. They are also looking for ways to link qubits stored in atoms with qubits stored in light with an eye toward future communication applications, such as “quantum internets.”
###
This work was funded by grants from the National Science Foundation, the Army Research Office and the Intelligence Advanced Research Projects Agency.
If you like Napoleon Dynamite (from the same creators) or Flight of the Conchords (it stars Jemaine Clement), be sure to check out Gentlemen Broncos. It’s coming out Tuesday on DVD and I caught a pre-street screening tonight. Funny, quirky, a little bit stupid and totally worth seeing.
Ever wonder if your crazy tax deduction idea would pass muster with the IRS? Here’s a list of 14 “oddball” deductions that did just that.
And here are three samples from the link:
4. Cat food. A couple who owned a junkyard were allowed to write off the cost of cat food they set out to attract wild cats. The feral felines did more than just eat. They also took care of snakes and rats on the property, making the place safer for customers. When the case reached the Tax Court, IRS lawyers conceded that the cost was deductible.
And:
7. Breast augmentation. In an effort to get bigger tips, an exotic dancer with the stage name “Chesty Love” decided to get implants to make her a size 56-FF. The IRS challenged her deduction, saying the operation was cosmetic surgery. But a female Tax Court judge allowed this taxpayer to claim a depreciation deduction for her new, um, assets, equating them to a stage prop. Alas, the operation later proved to be a problem for Ms. Love. She tripped, rupturing one of her implants. That caused a severe infection, and the implants had to be removed.
And finally:
10. Free beer. In a novel promotion, a service-station owner gave his customers free beer in lieu of trading stamps. Proving that alcohol and gasoline do mix — for tax purposes — the Tax Court allowed the write-off as a business expense.
Shockingly (er, not really), neither side had a total grasp on the truth.
It’s worth hitting the link for the entire piece, but here’s the FactCheck.org summary:
- Sen. Lamar Alexander said premiums will go up for “millions” under the Senate bill and president’s plan, while President Barack Obama said families buying the same coverage they have now would pay much less. Both were misleading. The Congressional Budget Office said premiums for those in the group market wouldn’t change significantly, while the average premium for those who buy their own coverage would go up.
- Alexander also said “50 percent of doctors won’t see new [Medicaid] patients.” But a 2008 survey says only 28 percent refuse to take any new Medicaid patients.
- Sen. Harry Reid cited a poll that said 58 percent would be “angry or disappointed” if health care overhaul doesn’t pass. True, but respondents in the poll were also split 43-43 on whether they supported the legislation that is currently being proposed.
- Obama repeated an inflated claim we’ve covered before. He said insured families pay about $1,000 a year in their premiums to cover costs for the uninsured. That’s a disputed figure from an advocacy group. Other researchers put the figure at about $200.
- Sen. Tom Coburn said “the government is responsible for 60 percent” of U.S. health spending. But that dubious figure includes lost tax revenue due to charitable contributions to hospitals and other questionable items. The real figure is about 47 percent.
- Reid said “since 1981 reconciliation has been used 21 times. Most of it has been used by Republicans.” That’s true, but scholars say using it to pass health care legislation would be the most ambitious use to date of this filibuster-avoiding maneuver.
- Rep. Charles Boustany said the main GOP-backed bill would reduce premium costs by “up to about 10 percent.” According to CBO, that’s true for the small group market, which accounts for only 15 percent of premiums. But premiums in the large group market would stay the same or go down by as much as 3 percent.
I have solar and nanotechnology release two-fers pretty often since both of those technologies pump out a lot of news. This is a bit more rare — a Nintendo Wii two-fer. Here’s the first, and below you can find the second.
The release:
Wii™ video games may help stroke patients improve motor function
Abstract LB P4
Note: The abstract will be presented at 5:30 p.m. CT.

Study highlights:
- The use of virtual reality Wii™ game technology holds promise as a safe and feasible way to help patients recovering from stroke improve their motor function.
- Researchers said it’s too early to recommend it as standard stroke rehabilitative therapy.
American Stroke Association meeting report:
SAN ANTONIO, Feb. 25, 2010 — Virtual reality game technology using Wii™ may help recovering stroke patients improve their motor function, according to research presented as a late breaking poster at the American Stroke Association’s International Stroke Conference 2010. The study found the virtual reality gaming system was a safe and feasible strategy to improve motor function after stroke.
“This is the first randomized clinical study showing that virtual reality using Wii™ gaming technology is feasible and safe and is potentially effective in enhancing motor function following a stroke, but our study results need to be confirmed in a major clinical trial,” said Gustavo Saposnik, M.D., M.Sc., director of the Stroke Outcomes Research Unit at the Li Ka Shing Institute, St. Michael’s Hospital and lead investigator of the study carried out at the Toronto Rehabilitation Institute at the University of Toronto, Canada.
The pilot study focused on movements with survivors’ impaired arms to help both fine (small muscle) and gross (large muscle) motor function.
Twenty survivors (average age 61) of mild to moderate ischemic or hemorrhagic strokes were randomized to playing recreational games (cards or Jenga, a block stacking and balancing game) or Wii™ tennis and Wii™ Cooking Mama, which uses movements that simulate cutting a potato, peeling an onion, slicing meat and shredding cheese.
Both groups received an intensive program of eight sessions, about 60 minutes each over two weeks, initiated about two months following a stroke.
The study found no adverse effects in the Wii™ group, reflecting safety. There was only one reported side effect in the recreational therapy group: nausea or dizziness. The Wii™ group used the technology for about 364 minutes in total session time, reflecting its feasibility. The recreational therapy group’s total time was 388 minutes.
“The beauty of virtual reality is that it applies the concept of repetitive tasks, high-intensity tasks and task-specific activities, that activates special neurons (called ‘mirror neuron system’) involved in mechanisms of cortical reorganization (brain plasticity),” Saposnik said. “Effective rehabilitation calls for applying these principles.”
Researchers found significant motor improvement in speed and extent of recovery with the Wii™ technology.
“Basically, we found that patients in the Wii™ group achieved a better motor function, both fine and gross, manifested by improvement in speed and grip strength,” Saposnik said. “But it is too early to recommend this approach generally. A larger, randomized study is needed and is underway.”
Wii™ is a virtual reality video gaming system using wireless controllers that interact with the user. A motion detection system allows patients to see their actions on a television screen with nearly real-time sensory feedback.
Co-authors are Mark Bayley, M.D.; Muhammad Mamdani, Pharm.D.; Donna Cheung, O.T.; Kevin Thorpe, MMath; Judith Hall, M.Sc.; William McIlroy, Ph.D.; Jacqueline Willems; Robert Teasell, M.D.; and Leonardo G. Cohen, M.D.; for the Stroke Outcome Research Canada (SORCan) Working Group. Author disclosures are on the abstract.
The Effectiveness of Virtual Reality Using Wii Gaming Technology in Stroke Rehabilitation (EVREST) Study was funded by a grant from the Heart and Stroke Foundation (HSFO) and the Ontario Stroke System (OSS) in Canada.
Click here to download audio clips offering perspective on this research from American Stroke Association spokesperson, Pamela Duncan, Ph.D., PT, FAPTA, Professor and Bette Busch Maniscalico Research Fellow, Division of Physical Therapy, Department of Community and Family Medicine; Senior Fellow Duke Center for Clinical Health Policy Research, Duke University, Durham, N.C.
###
Statements and conclusions of study authors that are presented at American Heart Association/American Stroke Association scientific meetings are solely those of the study authors and do not necessarily reflect association policy or position. The association makes no representation or warranty as to their accuracy or reliability. The association receives funding primarily from individuals; foundations and corporations (including pharmaceutical, device manufacturers and other companies) also make donations and fund specific association programs and events. The association has strict policies to prevent these relationships from influencing science content. Revenues from pharmaceutical and device corporations are available at www.americanheart.org/corporatefunding.
I’ve blogged about the utility of Nintendo’s Wii gaming system before, and here’s new research showing that exergames — that is, video games that combine gaming with exercise — can stave off depression in older adults. I love the Wii and heartily recommend the Wii Fit Plus for all ages to have fun while engaging in light to moderate exercise. It’s great for improving your core strength, muscle tone and balance. (Here’s a link to Amazon for the Wii Fit Plus with Balance Board)
The release, from the second link:
Video games may help combat depression in older adults
IMAGE: Dilip V. Jeste, M.D., is a researcher at the University of California, San Diego.
Research at the Sam and Rose Stein Institute for Research on Aging at the University of California, San Diego School of Medicine suggests a novel route to improving the symptoms of subsyndromal depression (SSD) in seniors through the regular use of “exergames” – entertaining video games that combine game play with exercise. In a pilot study, the researchers found that use of exergames significantly improved mood and mental health-related quality of life in older adults with SSD.
The study, led by Dilip V. Jeste, MD, Distinguished Professor of psychiatry and neurosciences at UCSD School of Medicine, Estelle and Edgar Levi Chair in Aging, and director of the UC San Diego Sam and Rose Stein Institute for Research on Aging, appears in the March issue of the American Journal of Geriatric Psychiatry.
SSD is much more common than major depression in seniors, and is associated with substantial suffering, functional disability, and increased use of costly medical services. Physical activity can improve depression; however, fewer than five percent of older adults meet physical activity recommendations.
“Depression predicts nonadherence to physical activity, and that is a key barrier to most exercise programs,” Jeste said. “Older adults with depression may be at particular risk for diminished enjoyment of physical activity, and therefore, more likely to stop exercise programs prematurely.”
In the study, 19 participants with SSD ranging in age from 63 to 94 played an exergame on the Nintendo Wii video game system during 35-minute sessions, three times a week. After some initial instruction, they chose one of the five Nintendo Wii Sports games to play on their own – tennis, bowling, baseball, golf or boxing.
Using the Wii remote – a wireless device with motion-sensing capabilities – the seniors used their arm and body movements to simulate actions engaged in playing the actual sport, such as swinging the Wii remote like a tennis racket. The participants reported high satisfaction and rated the exergames on various attributes including enjoyment, mental effort, and physical limitations.
“The study suggests encouraging results from the use of the exergames,” Jeste said. “More than one-third of the participants had a 50-percent or greater reduction of depressive symptoms. Many had a significant improvement in their mental health-related quality of life and increased cognitive stimulation.”
Jeste said feedback revealed some participants started the study feeling nervous about how they would perform in the exergames and the technical aspects of game play. However, by the end of the study, most participants reported that learning and playing the videogames was satisfying and enjoyable.
“The participants thought the exergames were fun, they felt challenged to do better and saw progress in their game play,” Jeste said. “Having a high level of enjoyment and satisfaction, and a choice among activities, exergames may lead to sustained exercise in older adults.” He cautioned, however, that the findings were based on a small study, and needed to be replicated in larger samples using control groups. He also stressed that exergames carry potential risks of injury, and should be practiced with appropriate care.
###
Additional authors include Dori Rosenberg, Jennifer Reichstadt, Jacqueline Kerr and Greg Norman, UCSD Department of Family and Preventative Medicine; and Colin A. Depp, Ipsit V. Vahia and Barton W. Palmer, UCSD Department of Psychiatry.
The study was funded in part by grants from the National Institute of Mental Health, the UCSD Sam and Rose Stein Institute for Research on Aging, and the Department of Veterans Affairs.
Or so this news from Princeton purports.
The release:
SCIENTISTS FIND AN EQUATION FOR MATERIALS INNOVATION
Posted Feb. 25, 2010, by Chris Emery
Professor Emily Carter and graduate student Chen Huang developed a new way of predicting important properties of substances. The advance could speed the development of new materials and technologies. (Photo: Frank Wojciechowski)
Princeton engineers have made a breakthrough in an 80-year-old quandary in quantum physics, paving the way for the development of new materials that could make electronic devices smaller and cars more energy efficient.
By reworking a theory first proposed by physicists in the 1920s, the researchers discovered a new way to predict important characteristics of a new material before it’s been created. The new formula allows computers to model the properties of a material up to 100,000 times faster than previously possible and vastly expands the range of properties scientists can study.
“The equation scientists were using before was inefficient and consumed huge amounts of computing power, so we were limited to modeling only a few hundred atoms of a perfect material,” said Emily Carter, the engineering professor who led the project.
“But most materials aren’t perfect,” said Carter, the Arthur W. Marks ‘19 Professor of Mechanical and Aerospace Engineering and Applied and Computational Mathematics. “Important properties are actually determined by the flaws, but to understand those you need to look at thousands or tens of thousands of atoms so the defects are included. Using this new equation, we’ve been able to model up to a million atoms, so we get closer to the real properties of a substance.”
By offering a panoramic view of how substances behave in the real world, the theory gives scientists a tool for developing materials that can be used for designing new technologies. Car frames made from lighter, stronger metal alloys, for instance, might make vehicles more energy efficient, and smaller, faster electronic devices might be produced using nanowires with diameters tens of thousands of times smaller than that of a human hair.
Paul Madden, a chemistry professor and provost of The Queen’s College at Oxford University, who originally introduced Carter to this field of research, described the work as a “significant breakthrough” that could allow researchers to substantially expand the range of materials that can be studied in this manner. “This opens up a new class of material physics problems to realistic simulation,” he said.
The new theory traces its lineage to the Thomas-Fermi equation, a concept proposed by Llewellyn Hilleth Thomas and Nobel laureate Enrico Fermi in 1927. The equation was a simple means of relating two fundamental characteristics of atoms and molecules. They theorized that the energy electrons possess as a result of their motion — electron kinetic energy — could be calculated based on how the electrons are distributed in the material. Electrons that are confined to a small region have higher kinetic energy, for instance, while those spread over a large volume have lower energy.
Understanding this relationship is important because the distribution of electrons is easier to measure, while the energy of electrons is more useful in designing materials. Knowing the electron kinetic energy helps researchers determine the structure and other properties of a material, such as how it changes shape in response to physical stress. The catch was that Thomas and Fermi’s concept was based on a theoretical gas, in which the electrons are spread evenly throughout. It could not be used to predict properties of real materials, in which electron density is less uniform.
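The Thomas-Fermi relation described above can be sketched for the idealized uniform electron gas it was derived for. This is the classic 1927 functional (in atomic units), not Carter and Huang's new model; the density and volume values are illustrative only.

```python
from math import pi

# Thomas-Fermi kinetic-energy density (atomic units):
#   t(r) = C_F * rho(r)**(5/3),  with  C_F = (3/10) * (3*pi**2)**(2/3).
# The total kinetic energy is the integral of t over space; for a
# uniform gas that integral is just t times the volume.
C_F = 0.3 * (3 * pi ** 2) ** (2 / 3)

def tf_kinetic_energy_uniform(density, volume):
    """Thomas-Fermi kinetic energy of a uniform electron gas (hartree)."""
    return C_F * density ** (5 / 3) * volume

# Doubling the density in a fixed volume raises the TF kinetic energy by
# 2**(5/3) ~ 3.17 — denser, more confined electrons have more kinetic
# energy, exactly the trend the release describes.
e1 = tf_kinetic_energy_uniform(1.0, 1.0)
e2 = tf_kinetic_energy_uniform(2.0, 1.0)
print(round(e2 / e1, 2))  # 3.17
```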
The next major advance came in 1964, when another pair of scientists, Pierre Hohenberg and Walter Kohn, another Nobel laureate, proved that the concepts proposed by Thomas and Fermi could be applied to real materials. While they didn’t derive a final, working equation for directly relating electron kinetic energy to the distribution of electrons, Hohenberg and Kohn laid the formal groundwork that proved such an equation exists. Scientists have been searching for a working theory ever since.
Carter began working on the problem in 1996 and produced a significant advance with two postdoctoral researchers in 1999, building on Hohenberg and Kohn’s work. She has continued to whittle away at the problem since. “It would be wonderful if a perfect equation that explains all of this would just fall from the sky,” she said. “But that isn’t going to happen, so we’ve kept searching for a practical solution that helps us study materials.”
In the absence of a solution, researchers have been calculating the energy of each atom from scratch to determine the properties of a substance. The laborious method bogs down the most powerful computers if more than a few hundred atoms are being considered, severely limiting the amount of a material and type of phenomena that can be studied.
Carter knew that using the concepts introduced by Thomas and Fermi would be far more efficient, because it would avoid having to process information on the state of each and every electron.
As they worked on the problem, Carter and Chen Huang, a doctoral student in physics, concluded that the key to the puzzle was addressing a disparity observed in Carter’s earlier work. Carter and her group had developed an accurate working model for predicting the kinetic energy of electrons in simple metals. But when they tried to apply the same model to semiconductors — the conductive materials used in modern electronic devices — their predictions were no longer accurate.
“We needed to find out what we were missing that made the results so different between the semiconductors and metals,” Huang said. “Then we realized that metals and semiconductors respond differently to electrical fields. Our model was missing this.”
In the end, Huang said, the solution was a compromise. “By finding an equation that worked for these two types of materials, we found a model that works for a wide range of materials.”
Their new model, published online Jan. 26 in Physical Review B, a journal of the American Physical Society, provides a practical method for predicting the kinetic energy of electrons in semiconductors from only the electron density. The research was funded by the National Science Foundation.
Coupled with advances published last year by Carter and Linda Hung, a graduate student in applied and computational mathematics, the new model extends the range of elements and quantities of material that can be accurately simulated.
The researchers hope that by moving beyond the concepts introduced by Thomas and Fermi more than 80 years ago, their work will speed future innovations. “Before people could only look at small bits of materials and perfect crystals,” Carter said. “Now we can accurately apply quantum mechanics at scales of matter never possible before.”
I’ve written about the 1031 exchange — also known as a Starker exchange — in the past (here’s a link to a menu of my 1031 offerings) and it’s a great tax-deferred way to get out of an investment property you are no longer interested in owning.
Here’s my intro from the first link:
The Internal Revenue Code 1031 exchange, also known as a Starker exchange, is a powerful tool investment second home owners can use to sell their existing real estate and purchase new property with all capital gains taxes deferred as long as certain criteria are met. A 1031 exchange is considered a “like kind” exchange of property. This exchange can be tricky and should be conducted through the services of a Qualified Intermediary, also referred to as an Exchange Accommodator. This independent party helps accommodate both the sale and subsequent purchase transactions.

Before pursuing a 1031 exchange, remember this option is only available for investment property. If you’re not sure if your second home is considered investment real estate, check where it falls in the four tax categories for second homes. If you use your second home for no more than 14 days in a year, or 10% of the days rented if that number is greater, the IRS will consider your second home investment real estate.
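The 14-day / 10%-of-days-rented test above can be sketched as a simple check. This is a simplified illustration of the rule as stated in my intro, not tax advice, and the day counts in the examples are hypothetical.

```python
def is_investment_property(personal_use_days, days_rented):
    """Second-home test described above: personal use must not exceed
    the greater of 14 days or 10% of the days the home was rented."""
    limit = max(14, 0.10 * days_rented)
    return personal_use_days <= limit

# Rented 200 days, used personally 18 days: 18 <= max(14, 20),
# so it still counts as investment real estate.
print(is_investment_property(18, 200))  # True

# Rented 100 days, used personally 15 days: 15 > max(14, 10),
# so it fails the test.
print(is_investment_property(15, 100))  # False
```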
And here’s a link to a recent article at Forbes.com. The article provides a nice overview of whys and hows of a 1031 exchange, plus the comments provide additional insight into the process.
From the above link:
Because of the concentrated nature of a real estate investment, it is important for portfolio managers to have the flexibility to rebalance their portfolios and make tactical bets in either different property sectors or investment regions. A 1031 exchange encourages such rebalancing by allowing investors to move in and out of real estate exposures through the exchange of one property for another without the burden of immediately incurring capital gains taxes. By continually using 1031 exchanges when acquiring and unloading property, investors can defer the capital gains tax until it is time to liquidate some or all of the portfolio, there is a favorable change in the tax law, or they have accrued enough capital losses to offset the capital gains obligation.
Via KurzweilAI.net — Very interesting, and I can see where composers would be concerned, but I think Cope’s getting a little ahead/full of himself with the final quote about computers, humans and soul. Hopefully the quip was tongue-in-cheek that just didn’t translate to print.
Triumph of the Cyborg Composer
Culture & Society, Feb. 22, 2010

David Cope’s algorithmic compositions rival the beauty of music by human composers and have passed the musical equivalent of the Turing Test (listeners cannot determine which music is human-composed). They herald the future of a new kind of musical creation: armies of computers composing (or helping people compose) original scores, he believes.
But some — especially composers — are threatened by the ability of artificial creativity programs to quickly compose good works that audiences like.
Undeterred, Cope thinks humans are actually more robotic than machines. “The question,” Cope says, “isn’t whether computers have a soul, but whether humans have a soul.”
I stridently opposed restrictions on short-selling last April, but added this caveat:
I agree some regulation [ … kills me to write that] in the financial and public sector needs to come to pass, but this accomplishes nothing aside from cheap public relations. If the markets are so weak selling short is capable of breaking them, maybe they should be broken.
Not too sure this move by the SEC is the answer, but it does seem measured and could well fall under the “some financial regulation is necessary” rubric I created in the previous blog post. I don’t like the idea of the SEC stifling the open market, but given the amount of pure jacking around the market has endured over the last two years, curbing “spiraling sales sprees” is probably not that bad an idea. It’s tough to remain a market purist in the face of market failure and the reality of ongoing market tinkering.
From the second link:
Federal regulators on Wednesday imposed new curbs on the practice of short-selling, hoping to prevent spiraling sales sprees in a stock that can stoke market turmoil.
The Securities and Exchange Commission, divided along party lines, voted 3-2 at a public meeting to adopt new rules.
The rules put in place a so-called “circuit breaker” for stock prices, restricting any short-selling of a stock that has dropped 10 percent or more for the rest of that trading session and the next one.
Short-sellers bet against a stock, in a practice that is legal and widely used on Wall Street. They borrow a company’s shares, sell them and then buy them when the stock falls and return them to the lender — pocketing the difference in price.
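The two mechanics described above — the 10 percent circuit breaker and the short sale itself — can be sketched in a few lines of Python. The prices below are invented for illustration, and the trigger logic is only a reading of the rule as summarized in the article, not the SEC’s actual rule text:

```python
# Sketch of the circuit-breaker trigger and short-sale mechanics
# as summarized in the article. Prices are illustrative.

CIRCUIT_BREAKER_DROP = 0.10  # short sales restricted after a 10% drop

def circuit_breaker_tripped(prior_close, current_price):
    """True if the stock has fallen 10% or more from the prior close,
    which under the new rule restricts short sales for the rest of
    that session and the next one."""
    return current_price <= prior_close * (1 - CIRCUIT_BREAKER_DROP)

def short_sale_profit(sell_price, buyback_price, shares):
    """Profit on a short: sell borrowed shares high, buy back low,
    return the shares, pocket the difference."""
    return (sell_price - buyback_price) * shares

# A stock closes at $50.00, then trades down to $44.50 -- an 11% drop.
print(circuit_breaker_tripped(50.00, 44.50))  # True: shorting restricted

# Shorting 100 shares at $50.00 and covering at $44.50 nets $550.
print(short_sale_profit(50.00, 44.50, 100))   # 550.0
```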
The SEC move followed months of wrestling with the controversial issue. The SEC asked for public comment last April on several alternative approaches to restraining short-selling, and a bipartisan group of senators has been pushing the agency to act or face legislation.
The agency got more than 4,300 comments on the issue.
Kelly Kulick is the winner of the men’s Professional Bowlers Association Tour Tournament of Champions, and the first woman to win a title on the men’s PBA Tour. I don’t agree with Rick Reilly a lot of the time (and his writing ability continues its downhill slide), but he totally nailed this column.
Here’s the key sentence (and what qualifies as a paragraph in the Reilly world of column writing):
What Kulick just did is one of the single greatest female sporting achievements in history.
As an occasional art conservator, I always find new developments in the field interesting. I don’t do painting restoration, but this technique sounds like it’s fairly unobtrusive and gets the job done. Plus lasers are always cool.
Laser surgery technique gets new life in art restoration
IMAGE: Art conservationists cleaned the two angels on the left with traditional restoration methods. They cleaned the one on the right using an advanced laser technique, which produced better results.
A laser technique best known for its use to remove unwanted tattoos from the skin is finding a second life in preserving great sculptures, paintings and other works of art, according to an article in ACS’ monthly journal, Accounts of Chemical Research. The technique, called laser ablation, involves removing material from a solid surface by vaporizing the material with a laser beam.
Salvatore Siano and Renzo Salimbeni point out that laser cleaning of artworks actually began about 10 years before the better known medical and industrial applications of the technique. Doctors, for example, use laser ablation in medicine to remove unwanted tattoos from the skin. In industry, the technique can remove paints, coatings and other material without damaging the underlying surface.
In the article, the scientists note that laser ablation has had an important impact in preserving the world’s cultural heritage of great works of art. They describe the latest advances in laser cleaning of stone and metal statues and wall paintings, including masterpieces like Lorenzo Ghiberti’s Porta del Paradiso and Donatello’s David. They also discuss encouraging results of laser cleaning underwater for materials that could deteriorate if exposed to air.
###
ARTICLE FOR IMMEDIATE RELEASE “Advances in Laser Cleaning of Artwork and Objects of Historical Interest: The Optimized Pulse Duration Approach”
DOWNLOAD FULL TEXT ARTICLE http://pubs.acs.org/stoken/presspac/presspac/full/10.1021/ar900190f
This study fits in with the “wearable electronics” concept. For wearable electronics to be effective, you need comfortably wearable juice to power those devices. Looks like some interesting medical applications here as well.
The release:
An electrifying discovery: New material to harvest electricity from body movements
IMAGE: “Piezo-rubber,” super-thin films that harvest energy from motion, could be worn on the body or implanted to power cell phones, heart pacemakers, and other electronics in the future.
Scientists are reporting an advance toward scavenging energy from walking, breathing, and other natural body movements to power electronic devices like cell phones and heart pacemakers. In a study in ACS’ monthly journal, Nano Letters, they describe development of flexible, biocompatible rubber films for use in implantable or wearable energy harvesting systems. The material could be used, for instance, to harvest energy from the motion of the lungs during breathing and use it to run pacemakers without the need for batteries that must be surgically replaced every few years.
Michael McAlpine and colleagues point out that popular hand-held consumer electronic devices are using smaller and smaller amounts of electricity. That opens the possibility of supplementing battery power with electricity harvested from body movements. So-called “piezoelectric” materials are the obvious candidates, since they generate electricity when flexed or subjected to pressure. However, manufacturing piezoelectric materials requires temperatures of more than 1,000 degrees F., making it difficult to combine them with rubber.
The scientists describe a new manufacturing method that solves this problem. It enabled them to apply nano-sized ribbons of lead zirconate titanate (PZT) — each strand about 1/50,000th the width of a human hair — to ribbons of flexible silicone rubber. PZT is one of the most efficient piezoelectric materials developed to date and can convert 80 percent of mechanical energy into electricity. The combination resulted in a super-thin film they call ‘piezo-rubber’ that seems to be an excellent candidate for scavenging energy from body movements.
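To put the release’s 80 percent conversion figure in context, here is a back-of-envelope estimate in Python. The efficiency comes from the release; the mechanical power available from breathing and the pacemaker’s power draw are rough figures I am assuming for illustration:

```python
# Back-of-envelope harvesting estimate. The 80% conversion efficiency
# is from the release; the breathing-power and pacemaker-draw numbers
# below are assumptions for illustration only.

PZT_EFFICIENCY = 0.80  # release: PZT converts ~80% of mechanical energy

def harvested_power(mechanical_power_w, efficiency=PZT_EFFICIENCY):
    """Electrical power recovered from a given mechanical power input."""
    return mechanical_power_w * efficiency

# Assume breathing delivers ~1 W of usable mechanical power to the film,
# while a pacemaker draws on the order of tens of microwatts.
electrical_w = harvested_power(1.0)  # 0.8 W of electrical power
pacemaker_draw_w = 50e-6             # assumed ~50 microwatt draw

print(electrical_w)                      # 0.8
print(electrical_w >= pacemaker_draw_w)  # True: ample margin for a pacemaker
```

Even if the assumed breathing figure is off by a couple of orders of magnitude, the margin over a pacemaker’s draw suggests why the authors see battery-free implants as plausible.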
###
ARTICLE FOR IMMEDIATE RELEASE “Piezoelectric Ribbons Printed onto Rubber for Flexible Energy Conversion”
DOWNLOAD FULL TEXT ARTICLE http://pubs.acs.org/stoken/presspac/presspac/full/10.1021/nl903377u
From the link:
Today ESO has released a dramatic new image of NGC 346, the brightest star-forming region in our neighbouring galaxy, the Small Magellanic Cloud, 210 000 light-years away towards the constellation of Tucana (the Toucan). The light, wind and heat given off by massive stars have dispersed the glowing gas within and around this star cluster, forming a surrounding wispy nebular structure that looks like a cobweb. NGC 346, like other beautiful astronomical scenes, is a work in progress, and changes as the aeons pass. As yet more stars form from loose matter in the area, they will ignite, scattering leftover dust and gas, carving out great ripples and altering the face of this lustrous object.
NGC 346 spans approximately 200 light-years, a region of space about fifty times the distance between the Sun and its nearest stellar neighbours. Astronomers classify NGC 346 as an open cluster of stars, indicating that this stellar brood all originated from the same collapsed cloud of matter. The associated nebula containing this clutch of bright stars is known as an emission nebula, meaning that gas within it has been heated up by stars until the gas emits its own light, just like the neon gas used in electric store signs.
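The release’s “about fifty times” scale comparison checks out with a quick calculation. The roughly 4.2 light-year figure for the Sun’s nearest stellar neighbour (Proxima Centauri) is a rounded value I am supplying; it does not appear in the release:

```python
# Sanity-check the comparison: a 200 light-year cluster span versus
# the distance from the Sun to its nearest stellar neighbour.
span_ly = 200.0
nearest_neighbour_ly = 4.2  # approx. Sun-to-Proxima-Centauri distance

ratio = span_ly / nearest_neighbour_ly
print(ratio)  # ~47.6, i.e. about fifty times, matching the release
```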
Many stars in NGC 346 are relatively young in cosmic terms with their births dating back only a few million years or so (eso0834). Powerful winds thrown off by a massive star set off this recent round of star birth by compressing large amounts of matter, the first critical step towards igniting new stars. This cloud of material then collapses under its own gravity, until some regions become dense and hot enough to roar forth as a brilliantly shining, nuclear fusion-powered furnace — a star, illuminating the residual debris of gas and dust. In sufficiently congested regions like NGC 346, with high levels of recent star birth, the result is a glorious, glowing vista for our telescopes to capture.
NGC 346 is in the Small Magellanic Cloud, a dwarf galaxy some 210 000 light-years away from Earth and in close proximity to our home, the much larger Milky Way Galaxy. Like its sister the Large Magellanic Cloud, the Small Magellanic Cloud is visible with the unaided eye from the southern hemisphere and has served as an extragalactic laboratory for astronomers studying the dynamics of star formation.
This particular image was obtained using the Wide Field Imager (WFI) instrument at the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. Images like this help astronomers chronicle star birth and evolution, while offering glimpses of how stellar development influences the appearance of the cosmic environment over time.
More information
ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and VISTA, the largest survey telescope. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
… to prep for raising the interest rate.
Via KurzweilAI.net — This warning from Michael McConnell shouldn’t be dismissed as another Bush 43 administration official hoping to paint Obama as unprepared for security threats and attempting to preemptively pin any future attacks on the purported incompetence of the White House. McConnell served as NSA director under Clinton before his stint as Director of National Intelligence under Bush and then briefly under Obama. Cyberwarfare is one threat the U.S. faces where the overwhelming might of our military does not make a whit of difference.
U.S. Unprepared for ‘Cyber War’, Former Top Spy Official Says
BusinessWeek, Feb. 23, 2010
The U.S. isn’t prepared for a massive attack on its computer networks by another country and would lose, former Director of National Intelligence Michael McConnell told a Senate panel today.
That is, overhyped as a revolutionary game-changing technology that doesn’t even come close to expectations? Who knows. K.R. Sridhar is getting plenty of attention, and if the Bloom Box comes close to delivering on its promise, it may well become a truly revolutionary piece of technology. The skeptic in me keeps me from holding my breath in excitement. (And yes, that last sentence is dripping with snark even though it doesn’t come through in the writing.)
From the link:
The hot energy news for this week comes in the form of a small box called the Bloom box, whose inventor hopes that it will be in almost every US home in the next five to 10 years. K.R. Sridhar, founder of the Silicon Valley start-up called Bloom Energy, unveiled the device on “60 Minutes” to CBS reporter Leslie Stahl on Sunday evening. Although Sridhar made some impressive claims on the show, he left many of the details a secret. This Wednesday, the company will hold a “special event” in eBay’s town hall, with a countdown clock on its website suggesting it will be a momentous occasion – or at least generating hype.
As Sridhar explained to Stahl, the Bloom box is a new kind of fuel cell that produces electricity by combining oxygen in the air with any fuel source, such as natural gas, bio-gas, and solar energy. Sridhar said the chemical reaction is efficient and clean, creating energy without burning or combustion. He said that two Bloom boxes – each the size of a grapefruit – could wirelessly power a US home, fully replacing the power grid; one box could power a European home, and two or three Asian homes could share a single box. Although currently a commercial unit costs $700,000-$800,000 each, Sridhar hopes to manufacture home units that cost less than $3,000 in five to 10 years. He said he got the idea after designing a device for NASA that would generate oxygen on Mars, for a mission that was later canceled. The Bloom box works in the opposite way as the Mars box: instead of generating oxygen, it uses oxygen as one of the inputs.
Update — Here’s the latest on the Bloom box from PhysOrg.
Technically it’s a loan guarantee rather than a true investment, but this Department of Energy move shows just how serious the Obama administration is concerning alternate energy sources. There are a lot of exciting developments in solar power right now and government money in this amount only helps grease the wheels of innovation and private-sector investment.
From the first link:
The U.S. Department of Energy has announced a $1.37 billion conditional loan guarantee for the Ivanpah Solar Complex in the Mojave Desert. The project, managed by BrightSource Energy, will use mirrors to concentrate sunlight, creating high temperatures that can be used to generate electricity. The complex will include three power plants that together will produce about 400 megawatts of electricity.
Basically, the guarantees would cover the loans in the case of default. The money for the loans is expected to come from the Federal Financing Bank.
One of the biggest challenges that large solar developments face is getting financing, particularly because few such solar power plants have been built. The DOE guarantees help on this front.