Blog

  • Local Motors Rally Fighter: Jay Leno’s Garage

    Local Motors Rally Fighter

    I’m not really sure if anyone needs a high-performance off-road buggy for the street, but just in case, the Local Motors Rally Fighter is available for purchase for the measly sum of $99,000.00. Jay Leno recently met up with Rally Fighter creator Jay Rogers to discuss exactly what goes into making these amazing machines so outstanding.

    Source: JayLenosGarage.com

  • Samsung Galaxy Note 8 Reportedly Caught On Camera, Inherits Design Language Of Galaxy Note II

    Samsung’s Galaxy Note 8 is rumored for an official unveiling at Mobile World Congress next week in Barcelona, and this week it has been the subject of plenty of rumor and speculation ahead of release. Today, a new leak from Italy’s DDAY.it purports to show the Galaxy Note 8 in action in the hands of an actual user. The photos are far more convincing than the SamMobile brochure shot leaked earlier in the week, and they show a design reminiscent of Samsung’s most recent smartphones.

    The Galaxy Note 8 that’s apparently depicted in these photos looks essentially like a blown up Galaxy Note II, with a rounded rectangle shell framing an 8-inch display. Down at the bottom of the bezel you see a physical home button, framed on either side by a back button and what looks like a multitasking button as touch-sensitive keys. You can also see a front-facing camera, as well as a rear camera without a visible flash. There’s also an S-Pen holster integrated into the bottom right of the rear casing, as you can see in the first image above.

    The Galaxy Note 8 is supposedly going to arrive with 2GB of RAM on board, with the front camera pegged at 1.3 megapixels and the rear at 5 megapixels. The display is supposed to be around 1280 x 800, which, while not mind-blowingly dense, still beats the iPad mini with 189 ppi vs. 163 ppi for Apple’s smaller tablet. A report from earlier this week suggests it will have a 1.6GHz quad-core processor under the hood.
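
    The pixel-density comparison is easy to check: ppi is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch (the 8-inch diagonal is the rumored figure; the 7.9-inch, 1024 x 768 panel is the iPad mini's published spec):

    ```python
    import math

    def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
        """Pixels per inch: diagonal pixel count over diagonal screen size."""
        return math.hypot(width_px, height_px) / diagonal_inches

    print(round(ppi(1280, 800, 8.0)))  # rumored Galaxy Note 8 -> 189
    print(round(ppi(1024, 768, 7.9)))  # iPad mini -> 162 (Apple quotes 163)
    ```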

    A MWC unveiling makes sense for this device, since Samsung showed off the Galaxy Note 10.1 with S-Pen at the event in Barcelona last year. And this design, while different from the SamMobile leak from Tuesday, bears a striking similarity to that device, meaning one or the other likely represents a slightly different pre-production design.



  • Where Your Cubicle Came From

    People have been trying to reform the office, it seems, since the office first appeared. Who was A Christmas Carol’s Bob Cratchit but an early office reformer, whining for heat so that his ink didn’t freeze? Office reformers of today point to trends like hotelling, co-working, and (my personal favorite) working from home — worthy endeavors, all. Maybe not as worthy as central heat, but pretty good ideas nonetheless.

    But maybe we can learn a lesson from the humble cubicle. No one sets out to design the most hated office furniture of all time, unless perhaps you work for the Spanish Inquisition, and the cubicle is no exception. Originally intended to free office workers from their hierarchical, codified drudgery of an existence (can’t you just taste the irony?), the cubicle has become universally loathed.

    The Herman Miller furniture company introduced the innovation in 1964 as “the world’s first open-plan office system of reconfigurable components and a bold departure from the era’s fixed assumptions of what office furniture should be.” A team led by Robert Propst, the company’s head of research, and George Nelson, its director of design, realized the flexible workspace. Propst was the consummate inventor, with many patents to his name, while Nelson is almost single-handedly responsible for the look and feel of the modern office.

    The original cubicle, named the Action Office, was intended to give office workers more space than the so-called bullpen office, which assigned one worker to one smallish desk. With more work spread out before them, more space for filing, and desks that adjusted in height, the designers reasoned, individuals were bound to be more productive. The design — which included two desks, a couple of chairs, a small table, and some vertical filing stands — even accommodated working while standing up. It is a glory to behold (you can see a picture of it here, in the fifth slide), especially when you work in an office where people stack up books to create their own standing desks. The Action Office was a great idea.

    It was also a flop.

    It was too expensive and difficult to assemble, and the requisite square footage per employee made it poorly suited to large organizations. So Propst and the designers went back to the drawing board, producing in 1968 the Action Office II, which corrected the perceived deficiencies of the first version. Each employee got one desk, and the addition of low walls afforded some privacy and contained each worker. It also meant that more desks could be crammed closer together while still allowing neighbors to interact. . . . I’m not going to spell it out; you can see where this is going.

    Soon people were writing novels about veal-fattening pens, making videos of cubicle hurdles, inventing cubicle periscopes, and recommending that you not “prairie dog.” IDEO even designed Dilbert’s Ultimate Cubicle for Scott Adams.

    George Nelson, Herman Miller’s design director, was so disgusted that he left the project well before the Action Office II went to market. And even Robert Propst, before he died in 2000, bemoaned his contribution to what he called “monolithic insanity.”

    This short history isn’t meant to be totally depressing or enervating. After all, offices have changed radically, and often for the better, since 1964. But it should serve as a reminder that organizing comes with costs. It’s up to us to recognize what they are before we dive, willy-nilly, into any reform effort. If we don’t, we might end up with something that looks like this.

  • Tanzania’s pass/fail roller coaster

    A primary school teacher answering questions in her class. Picture: Neema Kambona/DFID

    You know that heart-stopping feeling when you crest the first peak of a big roller coaster as it goes into free fall? That feeling of dread is perhaps only equalled by the torture of opening your exam results – at the time it seems your whole life might depend on the hidden grades inside!

    In the UK last year, GCSE (Grade 10, for 16-year-olds) pass rates were finally reported as having ‘dropped’ for the first time ever – by an ‘underwhelming’ half of one percentage point – reversing a decades-long upward trend. Many have commented that exams, and the increasingly interwoven coursework, have become easier to pass – ‘grade inflation’ – potentially to allow more students to enter tertiary education. There were howls of protest and legal challenges this year over how the pass mark for English GCSEs was being adjusted and its effect on grades and students’ career prospects.

    Over the Christmas holidays, Tanzanians were shocked and bemused to receive the outcomes of the Primary School Leaving Examinations (PSLE), taken by students around 14–15 years old and usually considered necessary to enter secondary school. National pass rates (grades A–C) were reported as having plummeted from 57% in 2011 to 30% in 2012 – almost halved, not a half-percentage-point drop. In two rural western regions, 48 schools reportedly had no students pass at all. However, not all failing students face ruin. It appears that entry requirements into secondary school will be relaxed, as the government continues to expand access to secondary education (enrolment rates have tripled since 2005).

    Secondary school enrolment since 2000

    Exam results can be used for different purposes: to filter students for a limited intake into more advanced levels of education, or as an absolute measure of competence. Major changes in pass rates are not that unusual if one looks at Tanzanian results in past years, but this one does seem unexpectedly large and has left many people scratching their heads. If failing students are being sent en masse to secondary schools, is the problem merely being shunted up the system?

    Did the switch to automated marking of multiple-choice questions cause confusion, or did it prevent cheating? For an exam taken by close to a million students, the benefits of automation are clear: in previous years, teacher training colleges stopped lessons for weeks as trainee teachers were co-opted into marking by hand. Were the questions or curriculum made harder, or the grade boundaries adjusted? If you have any ideas please let me know; we are also discussing possible explanations with government colleagues.

    MDG 2: Achieve Universal Primary Education

    Over the past decade the emphasis in developing countries has evolved from education expansion – ‘bums on seats’ in pursuit of MDG 2 on access – to all children learning at school (or elsewhere). Clearly, examination pass rates are one measure of learning; as posted earlier, other approaches such as civil-society-led testing of children on basic literacy and numeracy skills now provide useful alternative measures, which demonstrate disturbingly low levels of ability among children in Africa and South Asia. We face a real challenge to determine how best to support Tanzania’s children to learn. Primary School Leaving Exam (PSLE) pass rates are one of the key indicators agreed to measure progress between the UK, other development partners and the government, but in this instance it appears our tape measure or stopwatch may have malfunctioned!

  • Backed By Y Combinator And Google Ventures, CircuitHub Aims To Be A One-Stop Shop For Electrical Part Info

    Say you’re building a gadget. You’ll probably need several widgets, gizmos and electronic thingymabobs. CircuitHub is now here to help. The startup launched today and is attempting to be the world’s first free online, collaborative parts library. Best of all, it works seamlessly with popular design programs.

    This tool is aimed squarely at makers. By offering a comprehensive and detailed parts library, CircuitHub hopes to be the main resource for finding electronic components. But CircuitHub will only be successful if it can build this massive database. The entire system is open for group collaboration. Spend a few minutes and add some parts to the database.

    Using Dropbox for cloud storage, CircuitHub integrates nicely with Altium, Eagle, OrCAD and Allegro. Use CircuitHub’s library with your design software. That’s the genius here. CircuitHub isn’t attempting to disrupt a maker’s workflow; the startup is trying to improve it.

    By sourcing the right part from the start, makers will experience less hassle when approaching manufacturing.

    “Kickstarter is the largest crowd-funding site where anyone can help fund ideas proposed by anyone else,” explained Andrew Seddon, CircuitHub’s co-founder, in a released statement. “The single biggest project and the highest funded category are both dominated by electronics. Yet 84 percent of the top physical product-based projects were severely delayed primarily due to problems with interfacing design data into and through factories. This problem is exactly what the CircuitHub library is designed to address.”

    CircuitHub is backed by Y Combinator with investments from Google Ventures and notable angel investors including Paul Buchheit (the inventor of Gmail), Matt Cutts (creator of Google SafeSearch), Alexis Ohanian (cofounder of Reddit), Harj Taggar (cofounder of Auctomatic), and Garry Tan (cofounder of Posterous), among others.

  • Weekly Radar: Managing expectations

    With a week to go in January, global stock markets are up 3.8 percent – gently nudging higher after the new year burst and with a continued evaporation of volatility gauges toward new 5-year lows. That’s all warranted by a reappraisal of the global economy as well as murmurs about longer-term strategic shifts back into under-owned and cheaper equities. But, as ever, you can never draw a straight line. If we were to get this sort of move every month this year, total returns for the year on the MSCI global index would be 50 percent – not impossible, I guess, but highly unlikely. So at some stage the market will pause, hesitate or even take a step back. Is now the time, just three weeks into the year?

    Well, lots of the much-feared headwinds have not materialized. The looming US budget ceiling showdown keeps getting put back – it’s now May, by the way, even if another mini-cliff of sorts is due in March – but you get the can-kicking picture already. The US earnings season looks fairly benign so far, even given the outsize reaction to Apple after hours on Wednesday. European sovereign funding worries have proven wide of the mark to date too, as money floods to Spain and even Portugal again. And Chinese data confirms a decent cyclical rebound there, at least from Q3’s trough. All seems like pretty smooth sailing – aside perhaps from the UK’s slightly perplexing decision to add rather than ease uncertainty about its economic future. So what can go wrong? Well, there’s still an event calendar to keep an eye on – next month’s Italian elections, for example. But even that’s stretching it as a major bogeyman, given the likely outcome.

    In truth, the biggest hurdle is most likely to be the hoary old problem of over-inflated expectations. Just look at the US economic surprise index – it’s tipped into negative territory for the first time since late last summer. Yet incoming US data has not been that bad this year; what the index really reflects is rising expectations. (The converse, incidentally, is true of the euro zone, where you could say the gloom has been overdone.) Without the fuel of positive “surprises” we’re depending more on a structural story to buoy equities, and that is a multi-year, glacial shift rather than necessarily a 2013 yarn. The start of the earnings season is also interesting with regard to expectations. With little over 10 percent of the S&P 500 reported by last Friday, the numbers showed 58 percent had beaten the street. That’s not bad at first glance, but a good bit lower than the 65 percent average of the past four quarters. On the other hand, it’s top-line corporate revenues that have supposedly been terrifying everyone, and it’s a different picture there. Of the 10 percent of firms out to date, 65 percent have reported Q4 revenues ahead of forecasts – far ahead of the 50 percent average of the past four quarters. Early days, but that’s relatively positive on the underlying economy at least.

    And the Apple story is yet another case in point. Its shares fell about 10% in after-hours trade on everything from a slight revenue miss to future guidance and market-share concerns – which says more about the scale of expectations built into this one, if spectacular, corporate story. Look at the actual numbers and you see that, in absolute terms, its supposedly worrying iPhone shipments were still up 29% over the year to a new record, and iPhone sales in greater China more than doubled. A tough crowd to please now, clearly, but again telling us more about expectations than underlying activity. For what it’s worth, Apple’s bottom-line earnings beat the street.

    And finally, the other big – structural rather than cyclical – story in play over the past 10 days has been the unwind of the euro safe-haven plays – hardly surprising given that now two of the three bailout countries (Ireland and Portugal) are back in the private markets again default-free and the one-time big worry (Spain) is drowning in foreign creditors all of a sudden. Bund, Treasury and Gilt yields of course have all been pushing higher, even though QE limits that move. But perhaps the biggest manifestation of the safe-haven exit has been the 3% Swiss franc retreat – who said the SNB couldn’t hold the line? Sterling’s slide too is as much to do with this as it is related to Cameron’s EU sideswipe. Watch out for others too – Nordic markets perhaps? London property? Gold is still higher on the year, but that just underlines the fact it was always more an inflation-hedge rather than haven from systemic shocks.       

    Next week turns macro again west of the Atlantic – with the FOMC, US December payrolls and US Q4 GDP easily the dominant releases for world markets. The European earnings season will keep people on their toes too, with the likes of Deutsche Bank reporting amid renewed jitters about European and German bank restructuring and regulatory pressures.

    GLOBAL DATA/EVENTS TO WATCH

    Europe Q4 earnings Mon: Ryanair

    EZ Dec M3 money/credit Mon

    Italy Jan consumer confidence Mon

    US Q4 earnings Mon: Caterpillar, BioGen, Yahoo

    US Dec durable goods orders Mon

    US 2-yr note auction Mon

    India rate decision Tues

    Europe Q4 earnings Tues: AMS, Sandvik

    France Jan consumer confidence Tues

    US Q4 earnings Tues: Ford, Pfizer, Amazon, Corning  

    US Jan consumer confidence Tues

    US 5-yr note auction Tues

    Europe Q4 earnings Weds: Nordea, Roche, Swedbank

    EZ Jan consumer/biz confidence Weds

    Italy govt bond auction Weds

    German 30-yr bund auction Weds

    US Q4 earnings Weds: Boeing, ConocoPhillips, Marathon

    US ADP Jan private sector jobs data Weds

    US Q4 GDP Weds

    FOMC rate decision Weds

    US 7-yr note auction Weds

    Japan Dec industrial production Thurs

    RBNZ rate decision Thurs

    Europe Q4 earnings Thurs: Deutsche Bank, Astrazeneca, Shell, BSkyB, Diageo, Ericsson, Infineon, LVMH, Novo Nordisk

    German Jan CPI/jobless Thurs

    UK Dec mortgage/credit data Thurs

    Egypt central bank meeting Thurs

    US Q4 earnings Thurs: Dow Chemical, Viacom, UPS, Colgate-Palmolive

    US Jan Chicago PMI Thurs

    Japan Dec jobless/spending Fri

    Europe Q4 earnings Fri: BBVA, BT, Electrolux

    Global Jan manufacturing PMIs Fri

    EZ Dec jobless, flash Jan CPI Fri

    US Q4 earnings Fri: Exxon, Chevron, Aon, Merck

    US Jan payrolls Fri

    US Jan UMich consumer confidence Fri

     

  • Skill, Luck, and the NHL’s Shortened Season

    Fans of the National Hockey League (NHL) were excited to see the season open this past weekend after a four-month lockout. Similar to the National Basketball Association (NBA), which went through a lockout last year, the NHL will play a reduced schedule. Last year, NBA teams played 66 games during the shortened season and the NHL will play 48 games this year (a full season is 82 games for both leagues). A closer examination of the truncated seasons offers some useful lessons for executives.

    These situations seem similar on the surface. The leagues play an identical number of games during the same time of year, and the sports put a similar number of players in the game at a time. Basketball has more scoring, but hockey games are longer. Playing a slate of games that is 60% or more of a full season seems like a comparable solution.

    But if the objective of the regular season is to determine which teams are best, there is a huge difference between the NBA and the NHL. The key is the relative contribution of skill and luck in determining results for a season. Of the professional sports leagues — which also include the National Football League, the Premier League (soccer), and Major League Baseball — skill plays the largest role in the NBA (PDF) and the smallest role in the NHL.

    Here’s the way to think about it. When a sport has lots of skill and little luck, you don’t need a large sample size to determine which team or individual is more skillful. Less than 1/6th of a minute of running against Usain Bolt, for example, would do the job completely. But when there’s lots of luck, you need a large sample size to ensure that luck evens out and skill shines through. Baseball has a lot of luck, but you can be reasonably confident that the better teams make the playoffs because the regular season includes 162 games (over 27,500 minutes).

    To make this concrete, we can calculate the number of games you need in the NBA and the NHL to get an equivalent contribution of skill and luck. Because basketball has lots of skill, you only need 10-15 games, or about 15% of the season, to obtain the same signal that you get from 70-75 hockey games, or nearly 90% of the season. Sixty-six games for the NBA is plenty to allow the top teams to climb to the surface. But 48 games in the NHL is way too few. The NHL season is simply too short to be confident that the more skillful teams will rise to the top. If there’s a year for a mediocre team to do unexpectedly well, this is it.
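
    The sample-size logic above can be sketched with a toy Monte Carlo. The model is entirely illustrative (not the cited research): each team gets a fixed "true skill", each game is won by whichever team draws the higher skill-plus-noise value, and a `luck` parameter scales the noise. We then ask how often the final standings rank team pairs the same way true skill does.

    ```python
    import random

    def standings_accuracy(n_games: int, luck: float, n_teams: int = 30,
                           trials: int = 300, seed: int = 42) -> float:
        """Fraction of team pairs whose end-of-season standings agree with
        true skill, averaged over simulated seasons. Each game's winner is
        the team with the higher (skill + noise) draw; `luck` scales the noise."""
        rng = random.Random(seed)
        n_pairs = n_teams * (n_teams - 1) / 2
        total = 0.0
        for _ in range(trials):
            # True skills, sorted so team j is genuinely better than team i when j > i
            skill = sorted(rng.gauss(0, 1) for _ in range(n_teams))
            wins = [0] * n_teams
            for _ in range(n_games):
                # One "night" of games: random pairings, every team plays once
                order = list(range(n_teams))
                rng.shuffle(order)
                for a, b in zip(order[::2], order[1::2]):
                    if skill[a] + rng.gauss(0, luck) > skill[b] + rng.gauss(0, luck):
                        wins[a] += 1
                    else:
                        wins[b] += 1
            concordant = 0.0
            for i in range(n_teams):
                for j in range(i + 1, n_teams):
                    if wins[j] > wins[i]:
                        concordant += 1    # better team finished higher
                    elif wins[j] == wins[i]:
                        concordant += 0.5  # tie: count as a coin flip
            total += concordant / n_pairs
        return total / trials

    # With the same amount of luck, a longer season sorts teams more reliably;
    # with the same season length, a lower-luck sport sorts them more reliably.
    print(standings_accuracy(12, luck=2.0))  # short, luck-heavy season
    print(standings_accuracy(82, luck=2.0))  # full, luck-heavy season
    print(standings_accuracy(12, luck=0.3))  # short, skill-heavy season
    ```

    Under this toy model, the short luck-heavy season is the one where mediocre teams most often finish above better ones — the NHL-in-2013 scenario described above.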

    What does all of this have to do with business? Employees within a company carry out a variety of tasks. Some are like basketball — almost all skill and very little luck. One example is Six Sigma, where a company seeks to have no more than 3.4 defects per million opportunities in its manufacturing. Others are similar to hockey, where there’s lots of luck. Strategy development is a case in point. A company can devise a wonderful strategy that fails as the result of bad luck. Michael Raynor, a consultant at Deloitte, shares the example of Sony’s strategy for the MiniDisc. He argues that Sony’s strategy was brilliant but that the product failed because of bad luck.

    As a business leader you need to do two things:

    1. Make a judgment about where a business activity lies on a continuum from all-luck and no-skill on one end to no-luck and all-skill on the other.

    Answering three questions can get you on your way.

    First, ask if you can easily assign a cause to the effect you see. When cause and effect are clear, skill tends to be prominent.

    Next, what is the rate of reversion to the mean? If reversion to the mean is slow, then skill is shaping results.

    Finally, ask: Where can we predict well? Where predictions are accurate, skill is present. If predictions are little better than random, then luck is carrying the results.

    Once you know where an activity lies on the continuum you can appropriately tailor feedback, remuneration, and incentives.

    Where skill is dominant, outcomes and skill are highly correlated and feedback is straightforward. Where luck plays a large role, feedback based on outcomes is insufficient. In these cases, you need to focus on the process. It’s hard to know a blackjack player’s skill based on her short-term outcomes but you can evaluate how she plays her cards and gain a sense of her ability.

    The principle behind thoughtful remuneration is easy enough: Pay for skill but not for luck. Companies have violated this principle in recent years, creating a sense of unfairness. One example is executive pay through stock options that are not indexed. In bull markets, CEOs haul in large sums as the beneficiaries of luck. In bear markets, superior CEOs fare poorly as a consequence of bad luck.

    2. Finally, align incentives to match the activity. On the skill end, incentives can be tied to output because skill and results are closely aligned. On the luck end, incentives should reflect a proper process, and you should consider only long-term results.

    The main lesson from the shortened season is that events that may appear similar on the surface can be quite different once you analyze them. In your company, some activities are more like basketball and others are more like hockey. Understanding those differences and tailoring feedback, remuneration, and incentives to match the task is the ultimate goal.

  • Toyota Highlights Made In America at Chicago Auto Show

    Toyota plans to highlight its “success stories,” its Made in America business plan and the Tundra that pulled the shuttle at the Chicago Auto Show. The focus is on the company’s bright future, and we couldn’t agree more.

    Toyota Highlights Made In America at Chicago

    This image from the 2009 Chicago Auto Show shows Toyota’s then-large exhibit. Can’t wait to see a larger one!

    The plan is to blend new products, concept vehicles and success stories as part of a 45,000-square-foot stand. While the stand in Detroit was impressive at 35,000 square feet, Toyota is going bigger in Chicago. What’s really interesting to someone who went to that show is that Toyota really didn’t have a strong presence in Detroit. Can this bigger stand draw more attention?

    With most of the “Big 3” automakers building big stands and putting all their effort into Detroit, Toyota seems to be taking a different route. For example, within the last few days, the company has announced an indoor ride course and a larger footprint. Why the focus on Chicago?

    “Chicago has long been considered one of the top drawing consumer auto shows in the country,” said Brent Marrero, auto show planner for Toyota Motor Sales, U.S.A., Inc. “This year, we wanted the exhibit to not only showcase our incredible products but also our heritage in the United States. We know that ‘made in America’ is especially important to people living in the Midwest, and we feel we have a great story to tell, with 70 percent of all our vehicles sold in the United States being built here as well.”

    Toyota states in its press release that it plans to have 50 vehicles on display, including the all-new RAV4, Avalon, Corolla Furia Concept, Fun-Vii and a special-edition SEMA/Kyle Busch Camry. Most of these cars have already been unveiled to the public.

    What is Toyota not saying? Yep, you guessed it: the company isn’t confirming that a 2014 Toyota Tundra will be at the show. While sources have told us that the truck will be there, unfortunately, we don’t have confirmation from Toyota.

    If it is unveiled, it will probably be at Toyota’s PR event at 9am on February 7th. I will be at the show and will probably post live updates via Facebook (dependent on cell phone coverage). For the latest, make sure you like our Facebook page.

    The post Toyota Highlights Made In America at Chicago Auto Show appeared first on Tundra Headquarters Blog.

  • Cambridge University To Open £25M Graphene R&D Centre With Backing from Nokia, Plastic Logic & Others

    Material scientists and nanotechnologists get very excited about the potential of graphene — a one-atom-thick sheet of bonded carbon atoms that is exceptionally strong, lightweight and flexible, and a better conductor than silicon — but they are not the only ones to see huge potential in it. Nokia, Plastic Logic, Philips, Dyson, and BAE Systems are among more than 20 industry partners who have pledged £13 million worth of support for a new graphene R&D centre to be established at Cambridge University. The Centre is also backed by more than £12 million of government funding.

    It’s unclear exactly what kind of support Nokia et al will be providing the centre, but we’ve reached out to Cambridge University for more information and will update this story with any response. Nokia already has an R&D lab in Cambridge, at the University’s Broers Building — one of a network of labs Nokia operates. According to its website, Nokia’s Cambridge lab studies “physical, chemical and biological phenomena and manipulation of matter at the nanoscale [which] enables generation of knowledge for enhancing human capabilities”.

    The new Cambridge Graphene Centre aims to develop graphene from a material with a lot of raw potential — researchers have already been looking at how graphene could improve battery capacity, and exploring its water-repelling properties —  to a point where it can “revolutionise flexible, wearable and transparent electronics”. Future industrial applications envisaged by the University are said to lie in the areas of “flexible electronics, energy, connectivity and optoelectronics”. So, hopefully, bendy, see-through wearable smartphones here we come — albeit, not tomorrow.

    One of the issues scientists have with graphene to-date is making (i.e. growing) large quantities of it — large enough to be useful for industrial applications. Graphene sheets are also difficult to manipulate and connect with other materials. So one of the projects the Centre will undertake will look specifically at the “manufacturability” of graphene and other, layered, 2D materials — focusing on a growth method called chemical vapour deposition that has been used to enable industrial scale production of other materials, such as diamond, carbon nanotubes and gallium nitride.

    As well as tackling graphene manufacturing, the Centre will investigate how graphene can be integrated into networked devices — “with the ultimate vision of creating an ‘Internet of things’” — and look into how it can improve the performance of super-capacitors and batteries. So potentially improving the longevity of consumer electronics devices, such as phones and MP3 players, but also aiming to provide “a more effective energy storage for electric vehicles [and] storage on the grid”.

    “Graphene’s potential is beyond doubt, but much more research is needed if we are to develop it to a point where it proves of benefit to society as a whole,” said Professor Sir Leszek Borysiewicz, Vice-Chancellor of the University of Cambridge, in a statement.

    Professor Andrea Ferrari, who will be the Centre’s Director, added in a statement: “We are now in the second phase of graphene research, following the award of the Nobel Prize to Geim and Novoselov. That means we are targeting applications and manufacturing processes, and broadening research to other two-dimensional materials and hybrid systems. The integration of these new materials could bring a new dimension to future technologies, creating faster, thinner, stronger, more flexible broadband devices.”

    The Cambridge Graphene Centre will start “activities” on February 1 this year, although the dedicated research facility isn’t slated to open until the end of the year. Its activities will be funded by a more than £12 million government grant, allocated to the University in December by the Engineering and Physical Sciences Research Council. A further £11 million of European Research Council funding will support activities with the Graphene Institute in Manchester and Lancaster University.

    Cambridge University’s full release follows below.

    A centre for research on graphene, a material which has the potential to revolutionise numerous industries, ranging from healthcare to electronics, is to be created at the University of Cambridge. The University has been a hub for graphene engineering from the very start and now aims to make this “wonder material” work in real-life applications.


    The Cambridge Graphene Centre will start its activities on February 1st 2013, with a dedicated facility due to open at the end of the year. Its objective is to take graphene to the next level, bridging the gap between academia and industry. It will also be a shared research facility with state-of-the-art equipment, which any scientist researching graphene will have the opportunity to use.

    The Centre’s activities will be funded by a Government grant worth more than £12 million, which was allocated to the University in December by the Engineering and Physical Sciences Research Council (EPSRC).  The rest of this money will support projects focusing both on how to manufacture high-quality graphene on an industrial scale, and on developing some of its potential applications.

    Graphene is a one-atom thick layer of graphite with remarkable properties. It is exceptionally strong, yet also lightweight and flexible, enables electrons to flow faster than silicon and functions as a transparent conductor. Researchers in industry and academia are keen to harness its potential to make significant technological advances. This work might lead to numerous new devices and applications which could then be commercialised by industry and help to boost economic growth.

    There is still much to be done before that early promise becomes reality. The first job for those working in the Cambridge Graphene Centre will be to find ways of manufacturing and optimising graphene films, dispersions and inks so that the material can be used to good effect.

    Professor Andrea Ferrari, who will be the Centre’s Director, said: “We are now in the second phase of graphene research, following the award of the Nobel Prize to Geim and Novoselov. That means we are targeting applications and manufacturing processes, and broadening research to other two-dimensional materials and hybrid systems. The integration of these new materials could bring a new dimension to future technologies, creating faster, thinner, stronger, more flexible broadband devices.”

    Professor Sir Leszek Borysiewicz, Vice-Chancellor of the University of Cambridge, said: “Graphene’s potential is beyond doubt, but much more research is needed if we are to develop it to a point where it proves of benefit to society as a whole. The pioneering work of Cambridge engineers and scientists in fields such as carbon nanotechnology and flexible electronics, coupled with our record working with industry and launching spin-out firms based on our research, means that we are in a unique position to take graphene to that next level.”

    Professor Bill Milne, who will be part of the Centre’s management group, said:  “Graphene has amazing fundamental properties but at the moment we cannot produce it in a perfect form over large areas. Our first aim is to look at ways of making graphene that ensure it is still useful at the end of the process. We have to find modes of production that are consistently effective – and there is still a lot of work to be done in this respect.”

    One such project, led by Dr Stephan Hofmann, a Reader and specialist in nanotechnology, will look specifically at the manufacturability of graphene and other layered 2D materials. At the moment, sheets of graphene that are just one atom thick are difficult to grow in a controllable manner, manipulate, or connect with other materials.

    Dr Hofmann’s research team will focus on a growth method called chemical vapour deposition (CVD), which has already opened up other materials, such as diamond, carbon nanotubes and gallium nitride, to industrial scale production.

    “The process technology will open up new horizons for nanomaterials, built layer by layer, which means that it could lead to an amazing range of future devices and applications,” Dr Hofmann said.

    The Government funding for the Centre is complemented by strong industrial support, worth an additional £13 million, from over 20 partners, including Nokia, Dyson, Plastic Logic, Philips and BAE Systems. A further £11 million of European Research Council funding will support activities with the Graphene Institute in Manchester and Lancaster University.

    The Centre’s work will focus on taking graphene from a state of raw potential to a point where it can revolutionise flexible, wearable and transparent electronics. It will target the manufacture of graphene on an industrial scale, and applications in the areas of flexible electronics, energy, connectivity and optoelectronics.

    Professor Yang Hao, of Queen Mary, University of London, will lead Centre activities targeting connectivity, so that graphene can be integrated into networked devices, with the ultimate vision of creating an “internet of things”.

    Professor Clare Grey, from Cambridge’s Department of Chemistry, will lead the activities targeting the use of graphene in super-capacitors and batteries for energy storage. The research could, ultimately, provide more effective energy storage for electric vehicles and for the grid, as well as boosting the energy storage possibilities of personal devices such as MP3 players and mobile phones.

  • Morning Advantage: One Entrepreneur Who Will Not Be Rushed

    A nineteen-year-old in Irvine, California, frustrated with the quality of videogame virtual-reality goggles, began tinkering with VR helmets he’d bought cheap from government auctions and hospital supply shops. Eventually, he succeeded in producing a glitch-free device that, with smooth perfection, can serve up a game’s visuals in any direction you look. Called Oculus Rift, it caused a sensation at this year’s Consumer Electronics Show.

    Reporters and gamers are clamoring for it. But Palmer Luckey, now 20, is taking his own sweet time to market — which he’s free to do, since his funding comes strings-free from Kickstarter. Right now, he’s releasing it only as a developers’ kit. “It would be irresponsible for me to say when we’ll have consumer products,” he tells Popular Mechanics, since the device still lacks sound — and who knows what else? If the developers say it needs some new functions, he insists (in an approach more reminiscent of the old mainframe makers than a 21st-century start-up), he won’t release it until he’s perfected those functions, too.

    WE’LL LEAVE THE LIGHTS ON FOR YOU

    Would Coworking Give Your Business an Edge? (Fast Company)

    Denizens of incubators well know the benefits of working in the same building with other aspiring entrepreneurs, who share their passions, frustrations, odd hours, advice, and companionship. In Deskmag’s Annual Global Coworking Survey, Lydia Dishman sees data suggesting that established businesses should try the approach, as the staffs of Gawker, Foursquare, Tumblr, and Vimeo did temporarily in the wake of Hurricane Sandy. Fully 71% of study participants reported a boost in creativity since joining a coworking space, and 62% said their standard of work had improved. Only 30% use the space during normal working hours, which Dishman sees as a sign of go-the-extra-mile engagement, but might also indicate a lot of people moonlighting.

    DISPATCHES FROM THE GLOBAL VILLAGE

    Where Two Parents Aren’t Better than One (Foreign Policy)

    The benefits of two-parent families in helping their children get ahead, as success in school translates into higher incomes in life, have been well documented in the developed world (though the statistics here are well worth a look — they’re pretty dramatic). But researchers are not finding that the same holds true in the developing world, where data show children from single-parent households do as well, and often even better. Why? Investigator W. Brad Wilcox offers up three reasons: extended families are more common, fathers are less engaged in any event, and the disparity of school quality is so great it matters far more than family structure.

    BONUS BITS:

    Food for Thought

    Why You Truly Never Leave High School (New York Magazine)

    Describing Difficult Stuff in Simple Words (The Guardian)

    How Humans Keep Time (Smithsonian)

  • Apple’s Tim Cook Sees “Huge Opportunity” For iPad In Mac Cannibalization

    Apple CEO Tim Cook responded to questions about the issue of potential cannibalization of Mac sales by iPad devices on today’s earnings call, a question made more timely by the fact that Mac sales were down considerably on the quarter. He reiterated that supply constraints are leading to fewer sales, but also tackled cannibalization as a broad topic, noting that there is opportunity there for the iPad in a couple of important ways.

    Cook reiterated that Apple “never fear[s] cannibalization,” since it’s always better to cannibalize your own products rather than have someone else do it to you. But then he went on to address the larger picture, talking about the PC market in general. “On iPad in particular we have the mother of all opportunities here, because the Windows market is much larger than the Mac market,” he said. “I’ve said in the past that I believe the tablet market would be larger than the PC market at some point and I still believe that.”

    Another point he made sure to bring up was the so-called “halo effect” that the iPhone has been shown to have, whereby first-time buyers of Apple devices who pick one up tend to then purchase other products. The iPad, too, has plenty of potential to trigger that phenomenon.

    “If someone buys an iPad mini or an iPad and it’s their first Apple product, we have great experience over the years knowing that there’s a great percentage they’ll buy another iPad product,” he said. “We’re very confident that that will happen and we’re seeing some evidence of that on the iPad as well, so I see cannibalization as a huge opportunity.”

    Cannibalization is something Apple has always embraced, but that’s because the replacement products tend to rack up far more sales than the ones they push to the periphery. The Mac may be on the decline, but as long as the iPad continues to shine, that’s likely of limited concern to Apple and its top brass.

  • Apple’s Tim Cook Says iPhone And iPad Supply Component Order Cut Rumors Don’t Tell The Whole Story

    Apple CEO Tim Cook took time on the company’s earnings call today to comment on a specific rumor, which is an extreme oddity for Apple’s top-tier executives. He prefaced it by saying he doesn’t want to make a habit of addressing rumors, but went on to comment on recent reports that iPad and iPhone part order volumes have been cut owing to weak demand.

    “I know there’s been lots of rumors about order cuts and so forth,” he said. “I would suggest it’s good to question the accuracy of any kind of rumor about build plans, and even if a particular data point were factual, it would be impossible to interpret that data point for what it means for our overall business.”

    Cook ended his discussion of the issue by summarizing that a “single data point is not a great proxy for what’s going on.” The intent was clearly to defuse the ability of supply chain reports to affect analyst outlooks on the company and subsequently its stock price, since the recent outburst of these kinds of stories coming from suppliers is likely a key component of recent stock price volatility.

  • Ebscer Launches Free Mileage Tracker App for BlackBerry 10

    Mileage Tracker for BlackBerry 10 is a financial app that keeps detailed information about fuel, efficiency, costs and the nature of the trip taken. It’s useful for users who want to keep track of their fuel efficiency, or for people who use a car for business and need to keep good books for reimbursement.

    Mileage Tracker has a great look for a BlackBerry 10 app with thin, legible fonts and deep purple accents.

    Mileage Tracker gives users the ability to separate data by billing period, project, or vehicle. Use the in-app payment system to upgrade to the pro version, which gives you full access to the .csv and .html exporting features.
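
    For readers curious what this kind of bookkeeping looks like under the hood, here is a minimal Python sketch of grouping trip records and exporting a summary as .csv. The field names and figures are invented for illustration — this is not Mileage Tracker’s actual schema or code, just the general shape of the task.

```python
import csv
import io
from collections import defaultdict

# Hypothetical trip records of the kind a mileage tracker might keep.
# (Field names are illustrative, not the app's real schema.)
trips = [
    {"project": "Client A", "miles": 120.0, "gallons": 4.0, "cost": 14.00},
    {"project": "Client A", "miles": 60.0,  "gallons": 2.5, "cost": 8.75},
    {"project": "Personal", "miles": 300.0, "gallons": 10.0, "cost": 35.00},
]

def summarize(trips):
    """Group trips by project; total miles, fuel and cost; compute mpg."""
    totals = defaultdict(lambda: {"miles": 0.0, "gallons": 0.0, "cost": 0.0})
    for t in trips:
        bucket = totals[t["project"]]
        bucket["miles"] += t["miles"]
        bucket["gallons"] += t["gallons"]
        bucket["cost"] += t["cost"]
    return {p: dict(v, mpg=v["miles"] / v["gallons"]) for p, v in totals.items()}

def to_csv(summary):
    """Render the per-project summary as a .csv string."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["project", "miles", "gallons", "cost", "mpg"])
    for project, v in sorted(summary.items()):
        writer.writerow([project, v["miles"], v["gallons"],
                         round(v["cost"], 2), round(v["mpg"], 2)])
    return out.getvalue()

print(to_csv(summarize(trips)))
```

    Grouping by a billing-period or vehicle field instead of `"project"` would give the other breakdowns the app advertises.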

    Click here to download Mileage Tracker for BlackBerry 10 free from BlackBerry World.

  • Right target, but missing the bull’s-eye for Alzheimer’s

    Alzheimer’s disease is the most common cause of late-life dementia. The disorder is thought to be caused by a protein known as amyloid-beta, or Abeta, which clumps together in the brain, forming plaques that are thought to destroy neurons. This destruction starts early, too, and can presage clinical signs of the disease by up to 20 years.
     
    For decades now, researchers have been trying, with limited success, to develop drugs that prevent this clumping. Such drugs require a “target” — a structure they can bind to, thereby preventing the toxic actions of Abeta.
     
    Now, a new study out of UCLA suggests that while researchers may have the right target in Abeta, they may be missing the bull’s-eye. Reporting in the Jan. 23 issue of the Journal of Molecular Biology, UCLA neurology professor David Teplow and colleagues focused on a particular segment of a toxic form of Abeta and discovered a unique hairpin-like structure that facilitates clumping.
     
    “Every 68 seconds, someone in this country is diagnosed with Alzheimer’s,” said Teplow, the study’s senior author and principal investigator of the NIH-sponsored Alzheimer’s Disease Research Center at UCLA. “Alzheimer’s disease is the only one of the top 10 causes of death in America that cannot be prevented, cured or even slowed down once it begins. Most of the drugs that have been developed have either failed or only provide modest improvement of the symptoms. So finding a better pathway for these potential therapeutics is critical.”
     
    The Abeta protein is composed of a sequence of amino acids, much like “a pearl necklace composed of 20 different combinations of different colors of pearl,” Teplow said. One form of Abeta, Abeta40, has 40 amino acids, while a second form, Abeta42, has two extra amino acids at one end.
     
    Abeta42 has long been thought to be the toxic form of Abeta, but until now, no one has understood how the simple addition of two amino acids made it so much more toxic than Abeta40.
     
    In his lab, Teplow and his colleagues used computer simulations in which they looked at the structure of the Abeta proteins in a virtual world. The researchers first created a virtual Abeta peptide that only contained the last 12 amino acids of the entire 42–amino-acid-long Abeta42 protein. Then, said Teplow, “we just let the molecule move around in a virtual world, letting the laws of physics determine how each atom of the peptide was attracted to or repulsed by other atoms.”
     
    By taking thousands of snapshots of the various molecular structures the peptides created, the researchers determined which structures formed more frequently than others. From those, they then physically created mutant Abeta peptides using chemical synthesis.
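
    The counting step the researchers describe can be illustrated with a toy sketch: classify each simulation frame by conformation, then tally how often each structure appears. The labels and sampling weights below are invented stand-ins for illustration only; a real study would classify snapshots from molecular-dynamics trajectories, not random draws.

```python
import random
from collections import Counter

# Toy stand-in for conformational sampling: draw labeled "snapshots"
# with assumed frequencies, then count which structure dominates.
random.seed(42)

CONFORMATIONS = ["hairpin_turn", "extended", "random_coil"]
WEIGHTS = [0.5, 0.2, 0.3]  # invented sampling frequencies

snapshots = random.choices(CONFORMATIONS, weights=WEIGHTS, k=10_000)

counts = Counter(snapshots)
most_common, n = counts.most_common(1)[0]
print(f"most frequently sampled structure: {most_common} "
      f"({n / len(snapshots):.1%} of snapshots)")
```

    The structures that turn up most often in such a tally are the ones worth synthesizing and testing physically — which is the step the UCLA team took next.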
     
    “We studied these mutant peptides and found that the structure that made Abeta42 Abeta42 was a hairpin-like turn at the very end of the peptide of the whole Abeta protein,” Teplow said.
     
    The hairpin turn structure was not previously known in the detail revealed by the researchers, “so we feel our experiments were novel,” he said. “Our lab is the first to show that it is this specific turn that accounts for the special ability of Abeta42 to aggregate into clumps that we think kills neurons. Abeta40, the Abeta protein with two less amino acids at the end of the protein, did not do the same thing.”
     
    The researchers said the work of the Teplow laboratory may present the most relevant target yet for the development of drugs to fight Alzheimer’s disease.
     
    Other authors on the study included Robin Roychaudhuri, Mingfeng Yang, Atul Deshpande, Gregory M. Cole and Sally Frautschy, all of UCLA, and Aleksey Lomakin and George B. Benedek of the Massachusetts Institute of Technology.
     
    Funding for the study was provided by grants from the State of California Alzheimer’s Disease Research Fund, a UCLA Faculty Research Grant, the National Institutes of Health (AG027818, NS038328) and the James Easton Consortium for Alzheimer’s Drug Discovery and Biomarkers.
     
    The Mary S. Easton Center for Alzheimer’s Disease Research at UCLA is part of the UCLA Department of Neurology, which encompasses more than 20 disease-related research programs, along with large clinical and teaching programs. These programs cover brain mapping and neuroimaging, movement disorders, Alzheimer’s disease, multiple sclerosis, neurogenetics, nerve and muscle disorders, epilepsy, neuro-oncology, neurotology, neuropsychology, headaches and migraines, neurorehabilitation, and neurovascular disorders. The department ranked first among its peers nationwide in National Institutes of Health funding (2002–09). 
     

  • Apple’s Q1 2013 Breaks iPhone And iPad Sales Records With 47.8M, 22.9M Units Sold Respectively

    Apple just released its earnings report for Q1 2013, ending in December of last year, with a solid hardware quarter overall. The iPhone dominated with 47.8 million units sold in the quarter, up quarterly and yearly, with Apple also breaking records with 22.9 million iPads sold.

    The iPhone 5 saw its first full quarter of availability this period, as well as a nice holiday sales boost. Analysts had suggested earlier this month that iPhone 5 production orders had been cut on signs of weak demand.

    22.9 million iPads sold is a solid increase from last quarter’s 14 million. It’s also a roughly 49 percent YOY increase, up from 15.4 million last year. The iPad missed predictions last quarter.

    Though Apple doesn’t break out specific numbers on various models, it’s fair to assume the iPad mini, which was available for the majority of the period, played a part in the increased sales along with the holiday spike. And let’s not forget, Apple also introduced an upgraded 4th-generation iPad with Lightning port alongside the little guy.

    However, the iPad mini has more to make up for, as its gross margin is significantly lower than other products.

    Apple sold 47.8 million iPhones over the three-month period, vs. 26.9 million last quarter and 37 million last year. That represents YOY growth of around 29 percent.
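
    For readers following along with the arithmetic, the underlying year-over-year growth is a one-line calculation over the reported unit numbers:

```python
def yoy_growth(current, year_ago):
    """Year-over-year growth, as a percentage of the year-ago figure."""
    return (current - year_ago) / year_ago * 100

# Unit sales in millions, from Apple's Q1 2013 report.
iphone = yoy_growth(47.8, 37.0)   # ~29%
ipad = yoy_growth(22.9, 15.4)     # ~49%
print(f"iPhone YOY: {iphone:.0f}%, iPad YOY: {ipad:.0f}%")
```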

    Analysts believe that the iPhone may have already saturated developed markets like the U.S. and the UK, which are Apple’s strongest regions, which explains the production cuts.

    However, Apple is rumored to be developing two versions of the next-gen iPhone, and one is said to be a budget model aimed at developing markets.

    In terms of iPods, the new family of colorful iPod products has managed to breathe a little life into a flagging business for Apple. The introduction of the iPhone has most certainly chomped into this segment of the business, but Apple still managed to sell 12.7 million, up from 5.3 million last quarter, representing a YOY loss of 18 percent.

  • Apple Sells 4.1M Macs In Q1 2013, Down 21% Year Over Year And 16% From Previous Quarter

    Apple’s Mac sales for its first fiscal quarter of 2013 were not very impressive, despite strong holiday consumer demand and new models. The company moved 4.1 million Macs in total, including notebooks like the new 13-inch Retina MacBook Pro released last quarter, and all-in-ones like its refreshed iMac line, which also made its first appearance during Apple’s fiscal Q1.

    Apple introduced a slew of new Macs during the past quarter, in fact, including brand new 21.5- and 27-inch iMacs with ultra-thin new cases, a brand new Mac mini with faster processors and quad-core options, as well as the new 13-inch Retina MacBook Pro, the second in Apple’s line of notebooks with HiDPI displays capable of smooth on-screen graphics rendering that makes digital graphics mostly indistinguishable from high-quality prints. Despite the new hardware, Mac sales dipped year over year after experiencing decent growth last holiday season.

    Mac sales missed the previous quarter’s 4.9 million Macs by 800,000 units, or 16 percent, and also fell short of 2011 holiday sales of Apple’s computer hardware by 1.1 million devices, missing last year’s sales by 21 percent. Macs may make up a relatively small portion of Apple’s overall sales, but the numbers this time around show that interest in the traditional computing form factor is dwindling fast.

    It should be noted that while Apple’s newest Macs were introduced during the quarter, many of them weren’t actually available for most of the reporting period, including the 27-inch iMac, which only began shipping out to customers in mid-December. Still, the 13-inch Retina MacBook Pro and revamped Mac mini were out and available around a month into the quarter. Supply issues might be to blame for the low numbers, but it could also be that more users are opting to buy iPads, since iPad sales were up year over year.

    Apple also had one less week in this reporting period versus the year ago quarter, with 13 weeks instead of 14, so that could account for some of the variance, but not all. And while unit sales dropped over 20 percent, revenue from those sales took a lesser hit – it declined 16 percent from fiscal Q1 2012.

  • Germany’s Green Energy Destabilizing Electric Grids

    Germany is phasing out its nuclear plants in favor of wind and solar energy backed up by coal power. The government’s transition to these intermittent green energy technologies is causing havoc with its electric grid and those of its neighbors, countries that are now building switches to cut off their connections with Germany at their borders. The intermittent power is destabilizing the electric grids, threatening blackouts, weakening voltage and damaging industrial equipment.

    The instability of the electric grid is just one of many issues that the German government is facing regarding its move to intermittent renewable technologies. As we have previously reported, residential electricity prices in Germany are some of the highest in Europe and are increasing dramatically (currently Germans pay 34 cents a kilowatt hour compared to an average of 12 cents in the United States). This year German electricity rates are about to increase by over 10 percent due mainly to a surcharge for using more renewable energy and a further 30 to 50 percent price increase is expected in the next ten years. These changes in the electricity generation market have caused about 800,000 German households to no longer be able to afford their energy bills.

    The Destabilization Problem

    More than one third of Germany’s wind turbines are located in the eastern part of the nation where this large concentration of generating capacity regularly overloads the region’s electricity grid, threatening blackouts. The situation tends to be particularly critical on public holidays when residents and companies consume significantly less electricity than usual with the wind blowing regardless of the demand and supplying electricity that isn’t needed. In some extreme cases, the region produces three to four times the total amount of electricity actually being consumed, placing a strain on the eastern German electric grid. System engineers have to intervene every other day to maintain network stability.

     Fluctuating output, wind and solar

    Source: http://www.spiegel.de/international/germany/bild-850419-389683.html

    To illustrate the problem that renewable energy instability can cause, here is an example. When the voltage from Germany’s electric grid weakened for just a millisecond at 3 a.m., the machines at Hydro Aluminum in Hamburg ground to a halt, production stopped, and the aluminum belts snagged, hitting machines and destroying a piece of the mill, with $12,300 in damage to the equipment. The voltage weakened two more times in the next three weeks, prompting the company to purchase its own battery-based emergency system for $185,000.

    These short interruptions to the German electric grid increased by 29 percent and the number of service failures increased 31 percent over a three-year period, with about half of those failures leading to production stoppages causing damages ranging from ten thousand to hundreds of thousands of euros. These power grid fluctuations in Germany are causing major damage to a number of industrial companies, which have responded by getting their own power generators and regulators to help minimize the risks. However, companies warn that they might be forced to leave if the government does not deal with the issues quickly.[i]

    To deal with the excess electricity, eastern Germany exports it to western Germany, Poland and the Czech Republic. In 2009, exports of electricity to these areas totaled 6.5 gigawatts on days with strong winds, an amount that will increase as wind capacity increases. While the eastern German region would like to channel its excess electricity to southern Germany and the industrial Rhineland area, it lacks infrastructure to do so. Because German energy laws stipulate that “green” power must always have priority on the grid, control centers cannot take wind farms off the grid when too much electricity is being generated. System operators also try to avoid shutting down their coal, gas and nuclear facilities because they rely on these power plants to produce a consistent level of baseload power at all times. Thus, they need to export the wind capacity that exceeds their demand.[ii]

    Eastern German wind energy exports

    Source: http://www.dw.de/wind-energy-surplus-threatens-eastern-german-power-grid/a-14933985

    Germany’s Plans for Additional Transmission Infrastructure

    The German Cabinet backed a plan to build three “power autobahns” stretching north to south to move growing supplies of renewable energy across the country. The plan involves laying about 1,740 miles of new transmission lines and upgrading 1,800 miles of existing cables by 2022, bringing wind power generated in the north to consumers in the south. This plan is a scaled down version of the recommendation made by Germany’s four main grid operators, who indicated that the country’s energy overhaul required about 2,400 miles of new cables and a fourth power-line corridor,[iii] costing $25 billion.[iv] The government also wants to cut the time it takes to develop power lines from 10 to 4 years, and most recently, there have been calls to nationalize the electrical grid.

    Long Lines, power grid Germany

    Source: http://www.spiegel.de/international/germany/bild-850419-359975.html

    In the meantime, Germany’s neighbors, Poland and the Czech Republic, are taking action over Germany’s use of their power grids, which it undertook without asking permission and without paying for. These countries are building huge switch-offs at their borders to block the import of green energy that is destabilizing their grids and threatening blackouts in their countries.[v] This action by Germany’s neighbors fragments the European electrical grid, turning Germany into an electrical island.

    Germany’s Renewable Program

    Germany is planning to get 80 percent of its energy from renewable sources by 2050 and phase out its nuclear program by 2022. Despite significant investment in wind and solar power, Germany still faces an energy shortfall because the renewable energy it invested in does not work in cold winter weather, when the sun does not shine and the wind does not blow. Further, the shift to renewable energy is taking a toll on family budgets.[vi] Germany increased a special tax levied on consumers to finance subsidies for green energy by almost 50 percent this year, increasing electricity prices by 10 percent. There are also growing concerns that price increases are hurting businesses, although the German response has been to shift much more of the burden onto some consumers than onto favored industries.

    Ironically, to back-up the wind and solar energy, German utilities are using coal because it is cheaper than natural gas in Europe. For the most part, natural gas is moved through pipelines in Europe, and tends to be used close to where it originates. It is priced regionally and often linked to the price of oil. Many European gas contracts were negotiated years ago with the Russian gas company, Gazprom, and remain high. For example, in the summer of 2012, natural gas prices in Europe were more than three times the gas price in the United States and definitely more expensive than coal. According to Bloomberg New Energy Finance, at the beginning of November 2012, utilities in Germany were set, on average, to lose €11.70 when they burned gas to make a megawatt of electricity, but to earn €14.22 per megawatt when they burned coal.[vii]

    Conclusion

    The high use of renewable energy in eastern Germany, driven by government green energy policies, is causing instability on its own electric grid as well as on those of neighboring countries, forcing industrial companies to purchase generators and emergency back-up systems rather than face replacing equipment damaged during service disruptions. Electricity bills are also expected to go up by 10 percent this year. With residential electricity prices in Germany already about three times higher than prices in the United States and increasing further, it is no wonder that 800,000 German households can’t afford their electricity bills.

    The German government recently cut its 2013 growth expectations to 0.4 percent from an earlier estimate of 1 percent. Germany was prospering in 2011 with growth at 3 percent, but it dropped to 0.7 percent in 2012. While the European economy as a whole and the switch to the Euro have affected Germany, one wonders how much the country’s energy program is contributing. Perhaps the United States should use the German experience as a warning regarding the right choice of energy policy.



    [i] Spiegel, Grid Instability Has Industry Scrambling for Solutions, August 16, 2012, http://www.spiegel.de/international/germany/instability-in-power-grid-comes-at-high-cost-for-german-industry-a-850419.html

    [ii] Wind energy surplus threatens eastern German power grid, March 26, 2011, http://www.dw.de/wind-energy-surplus-threatens-eastern-german-power-grid/a-14933985

    [iii] Bloomberg, Merkel Cabinet Backs Power-Line Plan to Absorb Renewables Growth, December 19, 2012, http://www.bloomberg.com/news/2012-12-19/merkel-cabinet-backs-power-line-plan-to-absorb-renewables-growth.html

    [v] International News, Poland and Czech Republic Ban Germany’s Green Energy, December 12, 2012, http://www.thegwpf.org/poland-czech-republic-ban-germanys-green-energy/

  • Should I leave a comment on TED.com? A commenting manifesto

    You’ve just watched a TED Talk, and now you have some thoughts — about the subject, about the speaker, about life.

    In the world of TED ideas, those reflections and reactions are some of our most important resources. Yet, for every 1,000 views on TED.com, only 1 viewer writes a comment in the space below the video. Perhaps the other 999 viewers had nothing to say? Somehow we doubt it.

    What can a great talk comment do? It can provide more information, suggest an argument to the contrary, explain a personal connection to the subject matter — among other things. (See some of our favorite comments of 2012.) It’s also a great way to help us understand the impact that individual talks are having: A video share tells us it interested you, but a comment can tell us why.

    If you’d like to start commenting, we’d love to hear from you! But before you hit submit, we’d like to let you in on a little-known secret: We do enforce the TED.com Terms of Use for comments. To guarantee that your comment will find a permanent home on the site, please keep in mind these three basic guidelines.

    1. Civility. All the bold font in the world couldn’t stress this one enough. If you have even a nagging suspicion that your comment will come off as nasty and sarcastic, it probably will. We understand that some talks inspire very strong emotions, but a polite, well-worded argument communicates more effectively than rudeness — any day.

    If your comment crosses the line, our moderation team will remove it and send you an email. Don’t be discouraged — just take a step back and try again. We’re not out to get you; we’re just trying to keep the discussion respectful. If you want more information, we’re always happy to talk to you about what did and didn’t work.

    2. Substance. Whether you’re lavishing praise or expressing your disagreement, the more specific the better. The best comments, both negative and positive, are those which add new levels of meaning to the talk. If possible, please try to limit the number of posts you leave on a single talk — a large number of comments from one person can be mistaken for spam.

    And, of course, comments should be about the talk itself. If a TED Talk has inspired you to discuss a different but related topic, you can start a TED Conversation and let others know by tagging the talk.

    3. Style. Natural writer or not, native English speaker or not, please take a moment to proofread. If you’re more comfortable leaving the comment in a different language, go right ahead! We want to make sure that comments reflect the very best of the commenter, and that others who read your comment know just what you’re saying.

    So, should you leave a comment on TED.com? Yes, please! We would love to hear what you have to say, and the same goes for our speakers. Our commenting system isn’t perfect — heck, no website’s is — but with your help, we can continue to build a thought-provoking collection of member-submitted ideas, critiques, and stories around each TED Talk.

    You can contact [email protected] with feedback and suggestions. Happy commenting!

  • Updating Your BlackBerry Apps: Why It Matters


    When you blog about BlackBerry devices, your family and friends automatically assume you’re a “tech guru” and know everything about every BlackBerry device ever created — not to mention how to program your grandparents’ VCR. You’re the person in the group who knows just a little more than everyone else, and now by default, you are the expert. That said, I do know a little something about BlackBerry apps.

    A friend recently asked a question that prompted this post: “When should I update my apps, and when should I just ignore the ‘spark’ prompting me to update?” I answered, “As soon as possible!” Here’s why:

    • Updates may improve app and device stability. If your app is experiencing periodic “crashes” or issues with a feature, the developer may be aware of it and may have fixed it in an updated release. Staying current may just solve the issue you’re having.
    • Updates may improve app and device security. Keeping their apps secure is one of a developer’s top priorities, since their reputation may depend on it. By keeping your app up to date, you get the fixes for any newly identified security issues as soon as they’re patched.
    • Updates may add exciting new features. App updates aren’t always “all work and no play” – updates may also contain new features you’ll find useful or even crucial to your usage.
    • Updates may improve app and device speed. There may be times where an app update simply increases the efficiency of an app or service, making the app run faster. Who wouldn’t benefit from some additional speed?
    • Updated apps may be the first step of a support call. If you have to connect with our support team, often the first thing they’ll ask you to do is update the app or your BlackBerry smartphone OS, so you might as well save that step before you call.

    One last thing before you go running for your BlackBerry device to start downloading updates: perform software updates over Wi-Fi whenever possible. Downloads will typically happen faster, and if you have a limit on your monthly data plan, you may save some room later in the month to download kitten pictures.

    Do you update your apps when you see the spark? Share your reasons why or why not in the comments below.

  • Gun Owner Saves Boy from Pit Bull Attack! Wait … Police Say His Actions Could Be ‘Criminal’?

    Tim Lynch

    Today’s Washington Post reports that a boy in a DC neighborhood was out riding a new bike that he received on Christmas. As he was riding through his neighborhood, he turned a corner and suddenly came upon three unattended pit bulls, who proceeded to maul him. Fortunately for this 11-year-old boy, a neighbor saw what was going on, ran into his house, got his handgun, and then returned and shot one of the pit bulls. A DC police officer, nearby on a bicycle, heard the shot, got to the scene, and then shot the other two pit bulls.

    The boy, unidentified by the newspaper, is traumatized.  His uncle describes his injuries as “horrific” – all three pit bulls had their teeth clenched in the boy’s extremities just before the neighbor and officer shot them. 

    Is the unidentified neighbor hailed as a hero? No – just the opposite. He apparently needs a lawyer, because he is reportedly under “investigation” for violating our capital city’s firearms laws. You see, he may have discharged his weapon beyond his property line. Talk about no good deed going unpunished.

    Every single day, Americans use guns to save lives, but we do not hear about these incidents on the evening news – mostly because the gun only has to be brandished and the bad guy takes flight. That’s just not considered “news.” Another reason is media bias, as the past few days illustrate. Yesterday, CNN gave full coverage to a gun crime in Houston. This story – a civilian using a gun to save an 11-year-old’s life – got only a few paragraphs back in the metro section of the newspaper.

    Whether or not the prosecutors file charges here, DC laws need to change – so residents don’t have to hope for good sense to prevail (while paying attorneys’ fees). House Republicans do have jurisdiction over DC affairs, so here’s an opportunity to send a bill to the president’s desk to get the needed changes in place.

    To draw more attention to how often Americans use guns in self-defense, Cato published this paper and created this self-defense map.