Blog

  • Innovate Faster or Innovate Better?

    The other week I met with the leader of a new growth business for a large Asian company. The meeting was miles away from the corporate headquarters. The leader proudly showed me around her office, pointing out how the open, energetic feel compared to the closed-door, corporate nature of headquarters. The young staff certainly seemed to be enjoying themselves in the lounge, which was well stocked with booze and snacks.

    “So, what does your corporate parent give you?” I asked.

    “Absolutely nothing,” the leader replied, not without some degree of pride. “Except funding, of course. Otherwise, they completely leave me alone.”

    I didn’t want to burst the leader’s bubble, but I gently explained to her that what she said was actually quite troubling. The reality is she is fighting a tough battle with one hand behind her back. And the odds are pretty high that she is going to lose.

    Yale School of Management Professor Dick Foster notes that a single firm cannot innovate faster than the market in which it participates. Why is that? Consider three key differences between a startup and an autonomous business formed by a large company:

    1. Talent. A startup chooses the very best talent it can find to tackle an opportunity. The autonomous business more often than not chooses key leaders from its parent company, even if they haven’t had relevant experience (a problem I described here).
    2. Funding. The startup receives a finite slug of funding to demonstrate its viability. It has to zig and zag to find success before it runs out of money. The autonomous business will typically draw funding via the annual budgeting processes. As long as it doesn’t fall on its face, it can chug along with its predetermined strategy. Since that strategy is likely to be wrong, the lack of adjustment is bad, not good. Too much capital can be a curse.
    3. Governance. The startup is governed by a Board of Directors that typically includes an eclectic mix of founders, financiers, and advisors. That Board probably meets at least monthly and is on call if any quick decisions need to be made. The autonomous business is controlled by the parent company. Reviews might happen quarterly. Just getting a meeting on a senior leader’s calendar can take weeks. Forget about quick decisions.

    The end result too frequently is the market speeds ahead of the autonomous organization. A large company just can’t innovate faster than the market.

    But a large company can innovate better than the market.

    There are some things that only large companies can do, because they have unique assets like technology, channel relationships, relationships with regulators, scale operations, and so on. In my recent HBR article, “The New Corporate Garage,” I profiled fourth-era corporations that created powerful growth businesses by combining these kinds of difficult-to-replicate assets with “just enough” entrepreneurial behaviors. And in “Two Routes to Resilience,” Clark Gilbert, Matt Eyring, and Richard Foster described how companies that have successfully transformed their business in the face of disruptive change have made smart use of a “capabilities exchange” that allows new growth businesses to selectively draw on unique enabling capabilities without being overly constrained by legacy business models and mindsets.

    If a company really wants pure unbridled entrepreneurialism, it should invest in a startup rather than creating a compromised organization that neither has complete freedom nor truly unique capabilities. If a company really wants to do something remarkable, it has to confront the very real tensions between operating a big business and supporting entrepreneurial behavior and between leveraging unique capabilities and being constrained by them. Those tensions will always make a company move more slowly than an unburdened startup. But mastering these tensions can allow companies to do what a startup cannot.

  • Sponsored post: Big data drives high performance for Cars.com

    Cars.com is an award-winning online destination for car shoppers, visited by approximately 12 million unique visitors each month. In addition to millions of listings, the website also offers a variety of comparison resources, pricing tools, expert automotive content and dealership reviews.

    To support its high volume of site traffic, Cars.com must maintain optimal website performance and loading speed — especially given the expectations of today’s online consumers.

    Uptime and site performance are priorities for the Cars.com application management team, which maintains a highly distributed technical environment. As part of their duties, members of the team are responsible for application data collection, monitoring and analysis. According to the application management team, up to eight man-hours per week were spent on manual data acquisition, diverting staff from their core focus.

    Splunk Enterprise was implemented to automate and standardize the collection, monitoring and analysis of application log files and to make this machine-generated big data available across the organization. As a result, real-time Splunk dashboards have provided increased visibility into site traffic and usage patterns at Cars.com, supporting both technical and business-driven analysis and decision making.

    Read more about how Splunk is helping Cars.com drive revenue generation and cost savings.
    Read the full analyst report: http://www.splunk.com/view/splunk-at-cars/SP-CAAAHF4?ac=exec_gigaom_cars_post
    Advertisement

  • Mistrust of government often deters older adults from HIV testing

    One out of every four people living with HIV/AIDS is 50 or older, yet these older individuals are far more likely to be diagnosed when they are already in the later stages of infection. Such late diagnoses put their health, and the health of others, at greater risk than would have been the case with earlier detection.
     
    According to the Centers for Disease Control and Prevention, 43 percent of HIV-positive people between the ages of 50 and 55, and 51 percent of those 65 or older, develop full-blown AIDS within a year of their diagnosis, and these older adults account for 35 percent of all AIDS-related deaths. And since many of them are not aware that they have HIV, they could be unknowingly infecting others.
     
    Various psychological barriers may be keeping this older at-risk population from getting tested. Among them are a general mistrust of the government — for example, the belief that the government is run by a few big interests looking out for themselves — and AIDS-related conspiracy theories, including, for example, the belief that the virus is man-made and was created to kill certain groups of people.
     
    Now, a team of UCLA-led researchers has demonstrated that government mistrust and conspiracy fears are deeply ingrained in this vulnerable group and that these concerns often — but in one surprising twist, not always — deter these individuals from getting tested for HIV. The findings are published Jan. 29 in the peer-reviewed journal The Gerontologist.
     
    “Our work suggests that general mistrust of the government may adversely impact people’s willingness to get tested for HIV/AIDS,” said Chandra Ford, an assistant professor of community health sciences at the UCLA Fielding School of Public Health and the study’s primary investigator. “HIV/AIDS is increasing among people 50 and older, but there’s not a lot of attention being paid to the HIV-prevention needs of these folks. Older adults are more likely to be diagnosed only after they’ve been sick, and as a result, they have worse prognoses than younger HIV-positive people do.
     
    “Also, the CDC recommends that anyone who’s in a high-risk category should be tested every single year,” she said. “These findings mean that the CDC recommendations are not being followed.”
     
    The researchers sought to test the association between mistrust of the government, belief in AIDS conspiracy theories and having been tested for HIV in the previous year. For the cross-sectional study, they worked with data from 226 participants ranging in age from 50 to 85. Participants were recruited from three types of public health venues that serve at-risk populations: STD clinics, needle-exchange sites and Latino health clinics.
     
    Of the participants, 46.5 percent were Hispanic, 25.2 percent were non-Hispanic blacks, 18.1 percent were non-Hispanic whites and 10.2 percent were of other races or ethnicities. The data were collected between August 2006 and May 2007.
     
    The researchers found that 72 percent of the participants did not trust the government, 30 percent reported a belief in AIDS conspiracy theories and 45 percent had not taken an HIV test in the prior 12 months. The more strongly participants mistrusted the government, the less likely they were to have been tested for HIV in the prior 12 months.
     
    Several of the findings surprised the researchers — for example, the fact that HIV testing rates among this population were not higher at the locations where the participants were recruited, given that these locations attract large numbers of people with HIV.
     
    “This finding is concerning because the venues all provide HIV testing and care right there,” Ford said.
     
    And there was an even bigger, perhaps counterintuitive surprise. The more strongly participants believed in AIDS conspiracy theories, the more likely they were to have been tested in the previous 12 months.
     
    “We believe they might be proactively testing because they believe it can help them avoid the threats to personal safety that are described in many AIDS conspiracies,” Ford said. “For instance, if I hold these conspiracy beliefs and a doctor tells me I tested negative, I might get tested again just to confirm that the result really is negative.”
     
    By contrast, individuals who reported mistrusting the government may not have been tested because the venues where they were recruited were, in fact, government entities, Ford said.
     
    The study has some weaknesses. For instance, the study design did not allow the researchers to determine whether the participants held their beliefs before or after being tested; thus, the researchers couldn’t tell what prompted their mistrust of the government or conspiracy beliefs. Also, it’s possible that the prevalence of these theories is higher in this group than it is in the general public and that some participants may have been afraid to tell the truth.
     
    The next step in the research is to study other groups of older adults to determine if these views are more widely held than just among the at-risk population the researchers studied.
     
    Steven P. Wallace, Sung-Jae Lee and William Cunningham, all of UCLA, and Peter A. Newman of the University of Toronto co-authored the study.
     
    The National Institute of Mental Health (5 RO1 MH069087, 5K01MH085503, R34MH089719); the UCLA Resource Centers for Minority Aging Research Center for Health Improvement of Minority Elderly (RCMAR/CHIME), under a grant from the National Institute on Aging (P30-AG02-1684); the UCLA AIDS Institute; the UCLA Center for AIDS Research (CFAR); the California Center for Population Research (5R24HD041022); and the National Institute on Drug Abuse (R01 DA030781) funded this study.
     
    The UCLA Fielding School of Public Health is dedicated to enhancing the public’s health by conducting innovative research; training future leaders and health professionals; translating research into policy and practice; and serving local, national and international communities.
     
    The Resource Centers for Minority Aging Research Center for Health Improvement of the Elderly (RCMAR/CHIME) is part of the effort to reduce health disparities between minority and non-minority older adults. It does so by increasing the number of researchers who focus on the health of minority elders; enhancing the diversity in the professional workforce by mentoring minority academic researchers for careers in minority elders health research; improving recruitment and retention methods used to enlist minority elders in studies so that research can accurately identify and work toward solutions to health disparities; and creating culturally sensitive health measures that assess the health status of minority elders with greater precision and increase the effectiveness of interventions designed to improve their health and well-being. A central coordinating center provides logistical support to the RCMAR centers, facilitates communication and collaboration, and oversees dissemination activities designed to reach the larger research and health professional communities, public policymakers and consumers. The coordinating center is also the national clearinghouse for measurement tools, instruments, publications, community activity, pilot research and other resources developed by RCMAR investigators.
     
    For more news, visit the UCLA Newsroom and follow us on Twitter.

  • Report: YouTube will start charging for premium content

    YouTube plans to launch paid subscriptions as early as this spring, according to unidentified sources cited in AdAge. The sources say that YouTube is asking media companies that have already gained a large YouTube following — like Machinima, Maker Studios and Fullscreen — to submit ideas for paid channels that would cost “somewhere between $1 and $5 a month.” A Google spokesperson confirmed to AdAge that YouTube is “looking at” subscriptions.

    YouTube would initially launch around 25 paid channels, according to the report, and “is also considering charging for content libraries and access to live events, a la pay-per-view, as well as self-help or financial advice shows.” It’s unclear if the channels would cost $1 to $5 apiece or would be lumped together as an inexpensive bundle.

    YouTube’s paid offerings might help the site compete against streaming services like Netflix, Hulu Plus and Amazon Instant Video, all of which are developing their own original content. But the subscriptions would also give content creators with large audiences a chance to pull in revenue beyond ads. And the opportunity to charge might provide brands that were previously wary to put video up on YouTube with the incentive to do so.

    Will viewers pay? They might if transactions are easy — Google could integrate Google Payments, for instance — and might also be interested in perks like downloads for offline viewing, or bundles of videos that previously had to be tracked down individually.

  • PNNL smart grid management technology licensed to Calico

    The Department of Energy’s Pacific Northwest National Laboratory and Calico Energy Services of Bellevue, Wash., today announced that Calico has licensed a portfolio of advanced energy management intellectual property developed by PNNL. The technology was licensed by Battelle, which manages PNNL for the Department of Energy.

    The technology was developed in response to the critical challenges facing electric utilities today, including the need to improve reliability, reduce costs and integrate renewable energy. It coordinates large numbers of smart grid assets, including demand response, distributed generation, and distributed energy storage, typically owned and controlled by customers, to form a virtual control system with the smooth, stable, predictable response required by utility operators.

    “PNNL’s technology represents a major leap forward in our nation’s ability to manage grid reliability, balance the ever-expanding complexities of our electricity distribution system, integrate renewables and engage consumers in energy savings programs,” said PNNL engineer Rob Pratt, who led the team that developed the licensed technology. “We look forward to seeing utilities and consumers benefit from this technology.”

    PNNL’s development of the technology was funded by DOE’s Office of Electricity Delivery and Energy Reliability and the American Recovery and Reinvestment Act.

    The innovative technology portfolio is based on a single, integrated smart grid model that uses an economic signal to automatically balance supply and demand at the lowest possible cost. Sophisticated algorithms enable a variety of intelligent devices within a distribution system to address electricity imbalances in real time using automated demand response and real-time bidding that is orders of magnitude faster than human operators. These devices include conventional and renewable generation, storage, and end-point controls such as thermostats, water heaters, and large load controllers.
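    To make the balancing idea concrete, here is a toy sketch of price-based clearing. It assumes a simple double auction in which devices bid the price at which they are willing to consume and resources offer the price at which they are willing to produce, and the market clears where cumulative supply meets cumulative demand. This is only an illustration of the general technique, not PNNL's actual patented algorithm:

```python
def clear_market(demand_bids, supply_offers):
    """Clear a toy double auction.

    demand_bids / supply_offers: lists of (price, quantity) tuples,
    e.g. ($/kWh, kW). Returns (clearing_price, cleared_quantity).
    """
    # Buyers willing to pay the most trade first; cheapest sellers trade first.
    demand = sorted(demand_bids, key=lambda b: b[0], reverse=True)
    supply = sorted(supply_offers, key=lambda o: o[0])

    qty, price = 0.0, None
    i = j = 0
    d_rem = demand[0][1] if demand else 0
    s_rem = supply[0][1] if supply else 0

    # Keep matching while the best remaining bid still meets the best offer.
    while i < len(demand) and j < len(supply) and demand[i][0] >= supply[j][0]:
        traded = min(d_rem, s_rem)
        qty += traded
        price = (demand[i][0] + supply[j][0]) / 2  # split the marginal difference
        d_rem -= traded
        s_rem -= traded
        if d_rem == 0:
            i += 1
            if i < len(demand):
                d_rem = demand[i][1]
        if s_rem == 0:
            j += 1
            if j < len(supply):
                s_rem = supply[j][1]
    return price, qty
```

    In a real transactive system each thermostat or water heater would recompute its bid continuously from its own state (temperature, comfort settings), which is what makes the automated approach so much faster than manual dispatch.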

    “PNNL’s patent portfolio is a breakthrough that allows an electric power system to virtually balance itself,” said Jesse Berst, founder and chief analyst at SmartGridNews.com. “The traditional method uses centralized manual dispatch to coordinate supply and demand. But manual methods will never keep up with our new systems, which will have hundreds of thousands of distributed resources scattered throughout. To manage that kind of complexity, you must distribute and automate the process, as PNNL has now made possible.”

    “PNNL’s technology will be commercialized into a module of our Energy Intelligence Suite™, or EIS, and will be an excellent complement to the energy management platform we deliver to our utility customers today,” said Mike Miller, president and CEO of Calico Energy Services. “EIS serves as a unified operations center that integrates disparate data, devices, software engines, and applications. It allows utilities to make informed decisions and to precisely control energy resources and grid assets. The capacity to leverage distributed automation provides a unique capability and adds substantial value to our solutions.”

    PNNL’s technology has already proven highly effective in real-world installations. For example, it was a key part of the Pacific Northwest GridWise™ Demonstration Project, which PNNL led on Washington state’s Olympic Peninsula from 2006 to 2007. A related version of the technology is also being used in the Battelle-led Pacific Northwest Smart Grid Demonstration Project, a large-scale demonstration project designed to help bring the nation’s electric transmission system into the information age. It has shown the ability to provide a market mechanism to reward electricity consumers, while reducing energy consumption where and when it is needed using real-time demand and pricing signals.

    For utility operations teams, the intelligence of PNNL’s technology – particularly its ability to provide automated demand management and price bidding – will also reduce administrative complexity while providing far faster control over loads.

    “At one time, the Soviet Union used a centrally planned economy, but the complexities of the modern world forced it to switch to a market-based approach,” Berst said. “In the same fashion, the electric power system is still trying to get by with centralized dispatch and control. Thanks to these breakthroughs from PNNL, it can now adopt a market-based approach that is far faster and more precise.”

  • 10 places where anyone can learn to code

    Teens, tweens and kids are often referred to as “digital natives.” Having grown up with the Internet, smartphones and tablets, they’re often extraordinarily adept at interacting with digital technology. But Mitch Resnick, who spoke at TEDxBeaconStreet in November, is skeptical of this descriptor. Sure, young people can text and chat and play games, he says, “but that doesn’t really make you fluent.”

    Fluency, Resnick proposes in today’s talk, comes not through interacting with new technologies, but through creating them. The former is like reading, while the latter is like writing. He means this figuratively — that creating new technologies, like writing a book, requires creative expression — but also literally: to make new computer programs, you actually must write the code.

    The point isn’t to create a generation of programmers, Resnick argues. Rather, it’s that coding is a gateway to broader learning. “When you learn to read, you can then read to learn. And it’s the same thing with coding: If you learn to code, you can code to learn,” he says. Learning to code means learning how to think creatively, reason systematically and work collaboratively. And these skills are applicable to any profession — as well as to expressing yourself in your personal life, too.

    In his talk, Resnick describes Scratch, the programming software that he and a research group at MIT Media Lab developed to allow people to easily create and share their own interactive games and animations. Below, find 10 more places you can learn to code, incorporating Resnick’s suggestions and our own.

    1. At Codecademy, you can take lessons on writing simple commands in JavaScript, HTML and CSS, Python and Ruby. (See this New York Times piece from last March, on Codecademy and other code-teaching sites, for a sense of the landscape.)
    2. One of many programs geared toward females who want to code, Girl Develop It is an international nonprofit that provides mentorship and instruction. “We are committed to making sure women of all ages, races, education levels, income, and upbringing can build confidence in their skill set to develop web and mobile applications,” their website reads. “By teaching women around the world from diverse backgrounds to learn software development, we can help women improve their careers and confidence in their everyday lives.”
    3. Stanford University’s Udacity is one of many sites that make college courses—including Introduction to Computer Science—available online for free. (See our post on free online courses for more ideas.)
    4. If college courses seem a little slow, consider Code Racer, a “multi-player live coding game.” Newbies can learn to build a website using HTML and CSS, while the more experienced can test their adeptness at coding.
    5. The Computer Clubhouse, which Resnick co-founded, works to “help young people from low-income communities learn to express themselves creatively with new technologies,” as he describes. According to Clubhouse estimates, more than 25,000 kids work with mentors through the program every year.
    6. Through CoderDojo’s volunteer-led sessions, young people can learn to code, go on tours of tech companies and hear guest speakers. (Know how to code? You can set up your own CoderDojo!)
    7. Code School offers online courses in a wide range of programming languages, design and web tools.
    8. Similarly, Treehouse (the parent site of Code Racer) provides online video courses and exercises to help you learn technology skills.
    9. Girls Who Code, geared specifically toward 13- to 17-year-old girls, pairs instruction and mentorship to “educate, inspire and equip” students to pursue their engineering and tech dreams. “Today, just 3.6% of Fortune 500 companies are led by women, and less than 10% of venture capital-backed companies have female founders. Yet females use the internet 17% more than their male counterparts,” the website notes.
    10. Through workshops for young girls of color, Black Girls Code aims to help address the “dearth of African-American women in science, technology, engineering and math professions,” founder Kimberly Bryant writes, and build “a new generation of coders, coders who will become builders of technological innovation and of their own futures.”
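    For a sense of what the first lesson at a site like Codecademy covers, here is the kind of beginner exercise such courses typically start with: a variable, a loop, and printed output. This is a hypothetical example in Python, not taken from any of the programs above:

```python
def greet(name, times=3):
    """Build a list of greeting lines, the sort of exercise a
    first programming lesson works up to."""
    return [f"Hello, {name}! ({i + 1} of {times})" for i in range(times)]

# Running the lesson: print each greeting on its own line.
for line in greet("Ada"):
    print(line)
```

    Small as it is, a snippet like this already exercises the habits Resnick describes: naming things, decomposing a task, and predicting what the machine will do before running it.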

  • Is Yahoo Poised For A Search Comeback?

    Yahoo released its earnings report for Q4 and the full year 2012. The report was better than many analysts had expected, helped significantly by better-than-expected search performance from the company, which has outsourced its search back-end to Bing.

    Here are the search highlights from the release:

    • GAAP search revenue was $482 million for the fourth quarter of 2012, a 4 percent increase compared to $465 million for the fourth quarter of 2011. GAAP search revenue was $1,886 million for the full year of 2012, a 2 percent increase compared to $1,853 million for the prior year.
    • Search revenue ex-TAC was $427 million for the fourth quarter of 2012, a 14 percent increase compared to $376 million for the fourth quarter of 2011. Search revenue ex-TAC was $1,611 million for the full year of 2012, a 9 percent increase compared to $1,478 million for the prior year.
    • Paid clicks, or the number of clicks on sponsored listings on Yahoo! Properties and Affiliate sites, increased approximately 11 percent compared to the fourth quarter of 2011 and increased approximately 8 percent compared to the third quarter of 2012.
    • Price-per-click increased approximately 1 percent compared to the fourth quarter of 2011 and decreased approximately 2 percent compared to the third quarter of 2012.
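    For readers unfamiliar with the jargon: traffic acquisition cost (TAC) is the slice of revenue paid out to distribution partners, so ex-TAC revenue is GAAP revenue minus TAC. A quick back-of-the-envelope check of the Q4 figures quoted above shows why ex-TAC revenue grew so much faster than GAAP revenue — the payout to partners shrank:

```python
# Q4 search revenue figures quoted above, in $ millions.
gaap_2012, ex_tac_2012 = 482, 427
gaap_2011, ex_tac_2011 = 465, 376

# TAC is the difference between GAAP and ex-TAC revenue.
tac_2012 = gaap_2012 - ex_tac_2012   # paid to partners in Q4 2012
tac_2011 = gaap_2011 - ex_tac_2011   # paid to partners in Q4 2011

# Year-over-year growth rates, matching the "4 percent" and
# "14 percent" figures in the release.
gaap_growth = (gaap_2012 - gaap_2011) / gaap_2011
ex_tac_growth = (ex_tac_2012 - ex_tac_2011) / ex_tac_2011

print(f"TAC fell from ${tac_2011}M to ${tac_2012}M year over year")
```

    In other words, Yahoo kept a noticeably larger share of each search dollar in Q4 2012 than a year earlier.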

    During a conference call following the report’s release, CEO Marissa Mayer, a major player in Google’s search efforts over the years, indicated that search is a big priority for the former king of search engines. Wired quotes her:

    “Overall in search, it’s a key area of investment for us,” Mayer said. “We need to invest in a lot of interface improvements. All of the innovations in search are going to happen at the user interface level moving forward and we need to invest in those features both on the desktop and on mobile and I think both ultimately will be key plays for us.”

    “We have a big investment we want to make and a big push on search. We have lost some share in recent years and we’d like to regain some of that share and we have some ideas as to how.”

    Despite rumors that have been whispered throughout the industry from time to time, there’s nothing here to suggest that Yahoo and Bing will be breaking their deal off prematurely, as Mayer seems much more concerned with the front end. It will be interesting to see what becomes of it.

    The company has already been pushing out a redesigned home page to users (though we’ve seen more complaints than praise).

    Can Yahoo make a comeback in search? What do you think?

  • That Cheap iPhone 5 Will Borrow Design Elements From The iPod Touch And iPod Classic [Report]

    A cheaper iPhone 5 for emerging markets has been hinted at for a while now. Apple has denied all such rumors, but a recent report from iLounge suggests otherwise.

    After revealing pretty much everything about Apple’s latest redesign for the next-generation iPad, iLounge’s Jeremy Horwitz now has the scoop on the cheap iPhone 5 that’s been rumored since early this year. He confirms a number of rumors about the device that we’ve been hearing for a while now, but there’s some new information to be had as well.

    First and foremost, the cheap iPhone 5 will indeed be made out of plastic. That being said, Horwitz says that it won’t just be an iPhone 5 made out of plastic. The new device will reportedly borrow design elements from several products of Apple’s past and present to create a pretty unique device.

    According to Horwitz, the cheap iPhone 5 will remain about the same size as its more expensive sibling. Specifically, the cheaper version will be only about a half-millimeter taller, wider and thicker. The display will still be 4 inches, just like the iPhone 5’s. Since most people won’t notice the size difference, the cheaper iPhone 5 will look almost identical to the current iPhone 5 from the front.

    The device’s design, however, will reportedly change once you start to look at the side, back and bottom of the device. For starters, the curves of the device will be more similar to that of the iPod classic in that the back and sides are flat, but connected by a curve. The volume buttons on the side will also shift from the circular buttons of the iPhone 5 to the elongated pill designs of the iPod Touch.

    Finally, the back of the device will feature a camera, microphone and rear flash that are almost identical to the layout of the iPhone 5. The bottom is also similar to the iPhone 5 in that it features a headphone port, Lightning port, bottom microphone and speaker in roughly the same positions.

    The cheap iPhone 5 isn’t the only iPhone coming out this year. The next iteration, dubbed the iPhone 5S, is also reportedly on the way later this year. Check here for more details.

  • Jamie-Lynn Sigler Engaged To Baseball Player

    Jamie-Lynn Sigler, who most of us will always remember fondly as Meadow Soprano, is engaged to ballplayer Cutter Dykstra.

    The couple, who have been together about a year, announced the happy news on Twitter, along with a photo of the new rock taking up residence on Sigler’s left hand. This will be the second marriage for the 31-year-old Sigler; she was formerly married to her manager, A.J. Discala. The two divorced in 2005.

    The pair were introduced by mutual friends last year and hit it off despite their 8-year age difference.

  • Foursquare Launches App Specifically for Businesses

    Foursquare has just unveiled their second app, and this one is targeted at business owners.

    Available today, Foursquare for Business is a standalone iOS app that allows business owners who manage a location on Foursquare to post updates, check analytics, and manage specials.

    As of now, the most useful aspect of the brand new app is the analytics. Business owners who have already claimed their location on Foursquare can view recent check-ins and tips – so they’ll know if Susan B just told other users that the spicy beef sandwich is to die for or if the Thai noodles are cold and icky. Business owners can also see stats for all-time check-ins, likes, and view “recent top customers.”

    Businesses can also use the new app to post photo updates, and then easily cross-post them to Twitter and Facebook. When a business posts an update on Foursquare, anyone and everyone who has ever checked in or even looked at their page will see the update in their stream – no subscription or “follow” necessary.

    Lastly, the app lets businesses manage their specials. Note that I said “manage.” Businesses still have to create the special on the web, but once it’s created they can manage it with the app. This is a limitation, but one that we expect Foursquare to deal with in the coming weeks. The app did just launch, of course.

    As of now, the app is only available for U.S. businesses.

    [h/t The Next Web]

  • Colorado Governor John Hickenlooper’s Entrepreneurial Approach to Leadership

    Colorado Governor John Hickenlooper got his start in public service thanks to a bartender. Hickenlooper had spent 15 years building a brew pub business up from scratch, eventually expanding it to three states. In 2003, after a conversation with customers about the plan to change the name of Mile High Stadium, one of his bartenders told him he should run for mayor of Denver. So he did, won, and pledged to bring his entrepreneurial approach to the public office.

    In 2010, he was elected governor and is now bringing that approach to this broader role. We caught up with Governor Hickenlooper at the World Economic Forum’s Annual Meeting in Davos to check in on how that approach has fared and hear what lessons he’s learned along the way. An edited transcript of our conversation appears below.

    Give our readers a sense of your experience in the private sector and how it shaped your approach to government.

    I actually came to Colorado as a geologist. Loved the job, but I got laid off. I was out of work for two years and had the idea of launching a brew pub — this was before they were a common concept, in 1986. I spent two years honing the idea and raising money and we opened in 1988. We were the first brew pub in the Rocky Mountain region and we opened in an area known as lower downtown that was a bunch of abandoned warehouses. Not a lot of people went down there. So one of the first things we did was collaborate with the other businesses in the area to try and raise awareness. We bought ads together, we had a brewers’ fest to bring people down and spend time in the neighborhood.

    In 2003, all my customers were complaining about the renaming of Mile High Stadium. And a bartender of mine said: “You should run for mayor.” So I talked to a lot of mayors around the country to understand the job — could you make a difference, was it fun, would I be good at it? And so I decided to run. Our whole campaign was about bringing an entrepreneurial style of government to the office.

    So what did that mean?

    We tried to hire a lot of people from business. That was the first thing. Most people, when they get elected, hire the people that helped get them elected because they think they’ll help cover their back. But they aren’t managers. They’ve never been trained. They don’t have the experience, so you have risk-averse people who grew up where there’s no benefit to taking a risk. There’s only downside. So we brought in a talented group of people from all walks of life — not just business — but not one was a political appointee. I had only met 10 percent of the team before the transition process happened, and we had a transition of 450 people.

    So you found success with that approach at the city level. How has it done at the state level? Have you had to change your approach?

    Oh man. In government they love to attack each other. People get so dug in with their positions and they end up with a self-interest that is about protecting that position no matter what. To get business done, you have to get people to expand their sense of self-interest and then get those self-interests to overlap. That’s when you get a transaction. Then you get progress.

    Have you been able to get people to come together to find commonality?

    Sometimes. Here’s an example. In Colorado we have a lot of shale gas and with hydraulic fracturing we can drop the price dramatically. But there’s been a lot of concern over safety and what’s in the fracking liquid. And the fracking companies weren’t disclosing. So I called the CEO of Halliburton and said you’re getting killed over here on this. You have to find a way to let people know what’s in there while still protecting your intellectual property. I told him even Coca-Cola puts its list of ingredients on the can. So he came over and we brought in the regional head of the Environmental Defense Fund and we got to see where there was some common ground. We ended up having a press conference where the regional head of the EDF and the regional head of Halliburton both claimed victory. That’s where transactions happen — defending your old turf isn’t going to bring about progress.

    You mentioned hiring before. What’s your approach there? What do you look for in a candidate?

    I really [want] to see how well can they bring people together. How well they empathize. How they listen to people. That’s the best way to persuade someone: listen to them. Geoff Smart, the author of a book called “Who,” helped us with our transition to the governor role. He really trained me on this — before you hire anyone for a really senior position, spend a lot of time and write down what skills you need in the role, what the objectives are, what kind of person you really want. Sounds obvious but if you talk to CEOs and ask them how much time they spend on this, it isn’t a lot. Writing a job description is one thing but really looking at the characteristics, the traits, the kind of experiences you want is so key.

    What’s one of the most important lessons you’ve learned as a leader, either in the public or private sector?

    The single biggest thing came from when I was running the brew pub. I learned how contagious your own mood is. That’s the single thing. If you’re in a bad mood, within a half hour, everyone on the staff is too. When you go through that door it’s show time. Kurt Vonnegut was a friend of my dad’s at Cornell and he said something once: “You have to be very careful who you pretend to be, because that’s who you’re going to become.” We had a couple of real tough years in the restaurant business, and just by forcing myself to be positive and optimistic and crack jokes, things went better. I was still working 60, 70 hours a week, but all of a sudden we started getting breaks.

  • Gears of War: Judgment Multiplayer Early Access Coming March 15

    Epic Games this week announced that some players who pre-order Gears of War: Judgment will get three days of early access to the game’s new “OverRun” multiplayer mode. Unfortunately, the deal only applies to gamers who order Judgment from GameStop, either online or in-store.

    Those who do pre-order with GameStop will be able to start playing “OverRun” at 3 am EST Saturday, March 16. The early access will run until the launch of the game on March 19. During that time, players can earn XP that will carry over to the full game. The pre-order also comes with a “Young Marcus” character skin and a Lambent weapon skin.

    For those who don’t want to order from GameStop, or aren’t sure if they want to buy another Gears title, a demo for the “OverRun” mode will be available through Xbox LIVE on March 19 when Judgment launches. “OverRun” is a new class-based multiplayer mode that pits COG players (soldiers, medics, engineers, and scouts) against Locust players who can take on the role of wretches, tickers, grenadiers, ragers, kantus, giant serapedes, and maulers.

  • Bradley Cooper Rumors: Is He Playing Lance?

    Since the news broke that beloved American cyclist Lance Armstrong was actually hiding a big doping secret for years, filmmakers have been clamoring to get the rights to tell his story, and along with that comes rumors that different big-name actors are attached to the project. The latest is Bradley Cooper.

    J.J. Abrams, who holds the film rights, was reportedly in talks with Cooper to play Armstrong; however, Cooper says he’s not close to the project and that the rumors are getting out of control.

    “Oh my god, that’s so nuts!” he said in an interview with Access Hollywood. “I was in Manchester, doing the BBC morning show… I had no idea what [the interviewer] was talking about. I didn’t even know that J.J. [Abrams] has the rights, I had no idea. I don’t know anything about it.”

    Cooper has been approached about the rumor several times, but for right now he’s attached to no fewer than three projects that will be coming out in 2013, so time is definitely a factor. He did say that, if given the opportunity, he wouldn’t turn the role down.

    “I would be interested in that. I think he’s fascinating. What a fascinating character,” he said.

  • Amazon Launches Elastic Transcoder In Beta

    Amazon announced the launch of Amazon Elastic Transcoder for Amazon Web Services for transcoding video files between different digital media formats. You can use the service, which Amazon says is highly scalable, to convert video files from their source format into versions that will play on smartphones, tablets and PCs.

    “For example, customers can use Amazon Elastic Transcoder to convert their large high resolution ‘master’ video files into smaller versions that are optimized for playback on websites, mobile devices, connected TV’s and other video platforms,” the company says. “Amazon Elastic Transcoder removes the need to manage infrastructure and transcoding software, providing scalability and performance by leveraging AWS services. The service manages all aspects of the transcoding process transparently and automatically. It also supports pre-defined transcoding presets that make it easy to transcode video for smartphones, tablets, web browsers and other devices. With Amazon Elastic Transcoder, customers can create enterprise, training, user-generated, broadcast, or other video content for their applications or websites.”

    On the product page, Amazon says:

    Amazon Elastic Transcoder manages all aspects of the transcoding process for you transparently and automatically. There’s no need to administer software, scale hardware, tune performance, or otherwise manage transcoding infrastructure. You simply create a transcoding “job” specifying the location of your source video and how you want it transcoded. Amazon Elastic Transcoder also provides transcoding presets for popular output formats, which means that you don’t need to guess about which settings work best on particular devices. All these features are available via service APIs and the AWS Management Console.


    There are no contracts or monthly commitments to use the service. You pay based on the minutes you need to transcode and the resolution of the content.

    Check out Elastic Transcoder here.
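    To make the “job” concept concrete, here is a minimal sketch of what a transcoding request might look like from code. It assumes the boto3 AWS SDK for Python; the pipeline ID, object keys, and preset ID are made-up placeholders, and the actual `create_job` call is left commented out since it requires AWS credentials and a configured pipeline.

```python
# Hypothetical sketch of an Elastic Transcoder job request.
# All IDs and keys below are illustrative placeholders, not real values.

def build_job_request(pipeline_id, input_key, output_key, preset_id):
    """Assemble the request body for an Elastic Transcoder create_job call."""
    return {
        "PipelineId": pipeline_id,
        "Input": {"Key": input_key},   # source file in the pipeline's input bucket
        "Outputs": [{
            "Key": output_key,         # destination key in the output bucket
            "PresetId": preset_id,     # preset chooses resolution/format for the target device
        }],
    }

request = build_job_request("1111111111111-abcde1", "masters/talk.mp4",
                            "web/talk-720p.mp4", "1351620000001-000010")

# With credentials configured, the request would be submitted like so:
# import boto3
# client = boto3.client("elastictranscoder", region_name="us-east-1")
# job = client.create_job(**request)
```

    The point of the preset ID is exactly what Amazon describes: you pick a target (say, a generic 720p web profile) rather than hand-tuning codec settings per device.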

  • Using a tweet to get the power back on faster

    Your power just died — what’s the first thing you do? No, not go get candles. If you’re like many of us, you probably grab your phone and tweet, or write a Facebook message, about how supremely annoying and inconvenient losing power is (ugh, you were in the middle of Downton Abbey!). But it turns out that bitching publicly on social media could actually be helpful to your local utility, if it’s using new big data software recently launched by GE.

    This week at Distributech — it’s like the CES for the power grid sector — GE is formally unveiling its big data analytics and visualization software, Grid IQ Insight. It pulls in data from everywhere — grid sensors, smart meters, weather reports — including public social media posts that consumers write about their electricity. The analytics can determine whether a post is relevant (“My power’s been out for an hour, PG&E sucks!”) and use the location data attached to the tweet to build a picture of an outage in a given area.

    The idea is to give utilities a better window into when outages occur before they start getting phone calls from angry customers. They’ll still get those, but if they see an explosion of social media messages coming from a certain neighborhood, they can potentially reach out to those customers first and let them know they’re working on the problem. It’s about better customer service and quicker fixes to power outages.

    During a demo on Monday night, GE execs showed a visualization of a neighborhood grid with tweets flowing into the system in real time. The social media messages were color-coded: red for negative, blue for neutral, and green for positive. “They’re usually red,” joked a GE exec.
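    The relevance-plus-location idea described above can be illustrated with a deliberately crude toy sketch — this is not GE’s actual system, just a keyword classifier and a per-neighborhood complaint counter to show the shape of the pipeline. All keyword lists, tweets, and neighborhood names are invented.

```python
# Toy illustration of the Grid IQ Insight concept: color-code posts by
# crude keyword sentiment, then count negative ("red") posts per area.
from collections import Counter

NEGATIVE = {"out", "outage", "sucks", "dark", "blackout"}
POSITIVE = {"back", "restored", "thanks"}

def classify(tweet_text):
    """Return 'red' (negative), 'green' (positive), or 'blue' (neutral)."""
    words = set(tweet_text.lower().split())
    if words & NEGATIVE:
        return "red"
    if words & POSITIVE:
        return "green"
    return "blue"

def outage_hotspots(tweets):
    """Count 'red' tweets per neighborhood; tweets are (text, area) pairs."""
    return Counter(area for text, area in tweets if classify(text) == "red")

tweets = [
    ("My power's been out for an hour, PG&E sucks!", "Mission"),
    ("Lights are back, thanks crew", "Mission"),
    ("Whole block is dark again", "Sunset"),
    ("Nice evening", "Sunset"),
]
hotspots = outage_hotspots(tweets)  # Counter({'Mission': 1, 'Sunset': 1})
```

    A real system would replace the keyword sets with trained sentiment models and use geotags rather than hand-labeled areas, but the aggregation step — cluster negative posts by location to spot an outage early — is the same.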

    The grid visualization can show all sorts of other data in real time, not just social media messages. Importantly, utility workers can see when connected grid systems like substations and transformers are having problems. The big data analytics platform uses Cassandra and MySQL databases under the hood.

    GE isn’t the first to do this. A startup called Space Time Insight has built a grid data visualization tool that organizations like the California Independent System Operator Corporation use to watch California’s grid in real time. For Distributech — or DTech as the industry likes to call it — Space Time Insight launched the latest version of its software and also announced Canadian utility Hydro One as a new customer.

    Developing tools to help utilities manage the massive influx of big data from the power grid is a hot space. (Make sure to check out our 13 energy data startups to watch in 2013). GE’s big data software is also part of its efforts to sell technology for the “Industrial Internet” — bringing digital technologies to sectors like transportation, aviation, locomotives, power generation, oil and gas development, and other industrial processes.

  • New From NAP 2013-01-29 10:45:20

    Prepublication Now Available

    Increasing renewable energy development, both within the United States and abroad, has rekindled interest in the potential for marine and hydrokinetic (MHK) resources to contribute to electricity generation. These resources derive from ocean tides, waves, and currents; temperature gradients in the ocean; and free-flowing rivers and streams. One measure of the interest in the possible use of these resources for electricity generation is the increasing number of permits that have been filed with the Federal Energy Regulatory Commission (FERC). As of December 2012, FERC had issued 4 licenses and 84 preliminary permits, up from virtually zero a decade ago. However, most of these permits are for developments along the Mississippi River, and the actual benefit realized from all MHK resources is extremely small. The first U.S. commercial grid-connected project, a tidal project in Maine with a capacity of less than 1 megawatt (MW), is currently delivering a fraction of that power to the grid and is due to be fully installed in 2013.

    As part of its assessment of MHK resources, DOE asked the National Research Council (NRC) to provide detailed evaluations. In response, the NRC formed the Committee on Marine Hydrokinetic Energy Technology Assessment. As directed in its statement of task (SOT), the committee first developed an interim report, released in June 2011, which focused on the wave and tidal resource assessments (Appendix B). The current report contains the committee’s evaluation of all five of the DOE resource categories as well as the committee’s comments on the overall MHK resource assessment process. This summary focuses on the committee’s overarching findings and conclusions regarding a conceptual framework for developing the resource assessments, the aggregation of results into a single number, and the consistency across and coordination between the individual resource assessments. Critiques of the individual resource assessment, further discussion of the practical MHK resource base, and overarching conclusions and recommendations are explained in An Evaluation of the U.S. Department of Energy’s Marine and Hydrokinetic Resource Assessment.

    [Read the full report]

    Topics: Energy and Energy Conservation | Earth Sciences

  • New From NAP 2013-01-29 10:45:01

    Prepublication Now Available

    The standard incandescent light bulb, which still works mainly as Thomas Edison invented it, converts more than 90% of the consumed electricity into heat. Given the availability of newer lighting technologies that convert a greater percentage of electricity into useful light, there is potential to decrease the amount of energy used for lighting in both commercial and residential applications. Although technologies such as compact fluorescent lamps (CFLs) have emerged in the past few decades and will help achieve the goal of increased energy efficiency, solid-state lighting (SSL) stands to play a large role in dramatically decreasing U.S. energy consumption for lighting. This report summarizes the current status of SSL technologies and products—light-emitting diodes (LEDs) and organic LEDs (OLEDs)—and evaluates barriers to their improved cost and performance.

    Assessment of Advanced Solid State Lighting also discusses factors involved in achieving widespread deployment and consumer acceptance of SSL products. These factors include the perceived quality of light emitted by SSL devices, ease of use and the useful lifetime of these devices, issues of initial high cost, and possible benefits of reduced energy consumption.

    [Read the full report]

    Topics: Energy and Energy Conservation | Engineering and Technology

  • Microsoft Office 15 and Office 365 Now Available For Windows 8

    Microsoft has often touted Windows 8 as an operating system for consumers. It has arguably been successful thus far, but Microsoft hasn’t forgotten its bread and butter – the enterprise and productivity markets. For those customers, Microsoft is finally launching the newest version of Office today.

    Compared to past launches of Office, this one is a little different. Microsoft will be launching two distinct versions of its productivity software for different consumers. The first is the traditional Office 15 that customers will pay a one-time fee for. The second is the cloud-based Office 365 that Microsoft has offered for some time now, but it now includes the updated Office 15 tools for Windows 8.

    The traditional Office 15 will come in multiple flavors depending on the user’s needs. The three versions available include Office Home & Student 2013, Office Home & Business 2013, and Office Professional 2013. What makes these stand out over previous versions of Office is that they’re built specifically for Windows 8 and its new touch controls. Working on a text document or spreadsheet with touch controls doesn’t sound super exciting, but at least it’s an option.

    Those wanting to get the newest versions of Office had better be prepared to break the bank, however, as Microsoft’s usual pricing is in full effect. Home & Student 2013 costs $139.99, Home & Business costs $219.99, and Professional costs $399.99.

    If you want a cheaper option, you’ll want to go for Office 365 Home Premium. With the service, users will have access to the entire software suite included in Office for $99.99 a year. With the annual subscription, you can also install Office on up to five devices. Students have it even better, as Microsoft offers special pricing of only $80 for a four-year subscription. University students, staff and faculty are eligible for the promotion.

    For a limited time, Microsoft is also offering one month of Office 365 Home Premium for free. The free trial nets you the entire Office software suite, 20GB of SkyDrive storage and 60 minutes of Skype calls. You can grab it here. Businesses can also try a limited trial of Office 365, with small businesses (50 or fewer employees) getting a 90-day trial and midsize businesses (50,000 or fewer) getting a 30-day trial. You can view all your options here.

    [h/t: Business Insider]

  • D.C. Circuit Court Chastises EPA for Biofuel Bias

    The D.C. Circuit ruled this past Friday that the EPA issued biofuel mandates with the explicit goal of helping its favored biofuel industry.  In so doing, EPA violated the Clean Air Act’s requirement that the agency make its mandates in an objective, scientific manner.

    In 2007, Congress created a renewable fuel standard (RFS) program mandating a certain level of production for biofuels to be used as transportation fuel.  The EPA was tasked with establishing the year-to-year level of production mandated.  To make sure the EPA did not set outrageous mandates for renewable fuel, Congress required that the EPA predict, based on Energy Information Administration estimates, how much biofuel would be required.

    Earlier this month, we noted that the EPA had utterly failed to match its predictions to actual production over the past three years.  Here’s the graph we made from government data on biofuel production:

    [Chart: EPA cellulosic biofuel predictions vs. actual production, from government data]

    It turns out that the EPA forecasting incompetence resulted from bias in its predictions process.  In responding to comments on its forecasts, EPA stated that its “intention is to balance [] uncertainty with the objective of promoting growth in the industry.”  The EPA went on to say that setting the biofuel mandate at a high level would “provide the appropriate economic conditions for the cellulosic biofuel industry to grow.”  It is estimated that EPA levied nearly $7 million in fines on U.S. refiners for failing to use non-existent biofuel in 2011.

    In the words of the D.C. Circuit, EPA’s actions were a case of “let[ting] its aspirations for a self-fulfilling prophecy divert it from a neutral methodology.”  The result was that fuel companies were hurt by the inevitable failure of biofuel producers to meet impossible projections.  The Court noted that the EPA essentially said, “Do a good job, cellulosic fuel producers.  If you fail, we’ll fine your customers.”

    The D.C. Circuit did a good job of policing EPA excess in this case, but only the most diehard optimist would expect that the EPA will refrain from playing favorites again in the future.  The only way to permanently deprive the EPA of the power to choose winners and losers is to take away its power to mandate levels of renewable fuel production.


  • Google Makes AdWords API Usage Free

    Google announced that AdWords API usage will be free starting March 1. The company is doing away with the preferred pricing model.

    There will be two levels of API access once the changes take effect: basic and standard. The former will be the default, allowing for up to 10,000 operations per day. The latter will be available to qualified developers with no daily limit on operations. Neither option comes with a charge.

    “If you have an approved AdWords API token and plan to execute fewer than 10,000 operations per day, there’s no action needed. You’re covered with basic access,” says AdWords product manager S. Srikanth Belwadi.

    “Based on your history with the AdWords API and details you’ve shared with us, you might be pre-qualified for standard access,” says Belwadi. “If so, we will contact you within the next week and let you know. Please keep your contact email address up-to-date in the My Client Center (MCC) account associated with your developer token. If you haven’t been contacted or if you haven’t applied for standard access by February 28th 2013, your token will only have basic access starting March 1st 2013.”

    If you think you need standard access, you can apply for it here.

    There’s an FAQ section in Google’s help center further discussing the changes here.
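    Since basic access comes with a hard cap of 10,000 operations per day, developers on that tier may want to budget their calls on the client side. Here is a minimal, hypothetical sketch of such a guard — the class and method names are illustrative and are not part of Google’s client libraries.

```python
# Hypothetical client-side guard for the AdWords API basic-access quota
# (10,000 operations per day, per the announcement).

BASIC_DAILY_LIMIT = 10_000

class QuotaGuard:
    """Track operations used today and refuse requests that would exceed the cap."""

    def __init__(self, daily_limit=BASIC_DAILY_LIMIT):
        self.daily_limit = daily_limit
        self.used = 0  # reset this counter at the start of each day

    def try_consume(self, operations):
        """Reserve `operations` against today's quota; False if it would overflow."""
        if self.used + operations > self.daily_limit:
            return False
        self.used += operations
        return True

guard = QuotaGuard()
assert guard.try_consume(9_500)       # fine: 9,500 of 10,000 used
assert not guard.try_consume(1_000)   # refused: would exceed the basic cap
assert guard.try_consume(500)         # exactly reaches the daily limit
```

    Anyone regularly bumping into a guard like this is the kind of developer Google expects to apply for standard access instead.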