Blog

  • Fran Warren Dies; Big Band Singer Was 87

    Fran Warren, a popular big band singer in the 40s and 50s, has died at the age of 87.

    Warren is reported by the Associated Press to have died of natural causes on March 4 at her home in Brookfield, Connecticut. The day was her 87th birthday.

    Born Frances Wolfe, Warren began singing for bands led by stars such as Randy Brooks, Charlie Barnet, and Claude Thornhill. Her first big hit was 1947’s “A Sunday Kind of Love,” which she recorded with Thornhill’s band.

    Warren went on to have a solo career during the 50s. Her biggest hit during that period was a duet version of the song “I Said My Pajamas (and Put on My Pray’rs),” which she sang with Tony Martin. She also starred in the movie Abbott and Costello Meet Captain Kidd and performed in musical comedies such as Finian’s Rainbow.

  • Microsoft attacks Samsung by bashing last year’s Galaxy phone [video]

    In yet another curious marketing decision, Microsoft (MSFT) has decided to get in on the Samsung (005930) bashing game by releasing an ad this week that knocks the Galaxy S III, Samsung’s former flagship smartphone that’s almost a year old and is about to be supplanted by the Galaxy S 4. In the video, a Microsoft representative approaches two Galaxy S III owners — who, the ad helpfully informs us, are “real people, not actors” — and shows them how the Nokia (NOK) Lumia 920 takes much better pictures than their current smartphones. At the end of the ad, the two real people decide to trade in the Galaxy S III for a Lumia 920 after viewing one picture taken with the device. The one plus side, however, is that this new ad doesn’t feature any breakdancing or beatboxing. The full video is posted below.


  • The Most Terrifying Mr. Rogers Video Ever Made

    Today is the birthday of famed children’s television host Fred Rogers, also known as Mr. Rogers. I just learned that his middle name was actually McFeely (according to Wikipedia and Google’s knowledge graph).

    Now, here’s possibly the most terrifying thing I’ve ever seen. Warning: you will be disturbed.

    Oh, what you find on YouTube.

  • Switching from Google to Microsoft, part 2 — Teething problems

    Second in a series. You know when you go somewhere on holiday and in a moment of fancy you think to yourself “I could live here”? But a small part of you knows deep down inside that the reality would be very different from the fantasy? That’s a bit like what my first experience of swapping from Google to Microsoft has been like so far.

    I’ve used Internet Explorer on and off over the years, but I’ve never used it for very long. The last time it was my main browser was in 2003, ten years ago. Similarly, I’ve used Outlook.com since it launched, but not as my main email provider. So in setting them up for daily use I’ve found it all quite odd. I’m adrift in a place where they do things differently. Not worse — well, not really — just differently.

    Trying to set up Outlook.com to send and receive emails from my Gmail account was an interesting experience. I went into Email Settings in Outlook.com, clicked “Your email accounts” and then clicked “Add a send-and-receive account”. I filled in my email address and password where prompted and clicked Next. And then some sort of alarm went off at Google HQ — with dire red alerts appearing on every Google page imaginable, including YouTube, telling me that someone had tried to access my account and advising me to change my password.

    I calmed Google down, told it that it was me accessing my account (Google for its part remained utterly unconvinced and demanded I sign a waiver), and then I went back into Outlook.com and tried again to set things up properly this time. The “Add a send-and-receive account” wizard didn’t want to be very wizardly, and just refused to do anything aside from telling me that I needed to enable “POP Download” in Gmail. POP was already enabled, so I decided to just go into Gmail settings and configure forwarding there, then set up a send-only account for Gmail in Outlook.com, which worked fine. And then, more out of curiosity than anything else, I went back to “Add a send-and-receive account” and added my second Gmail account. Everything worked fine this time. Very odd.

    Setting Up Internet Explorer

    Switching from Firefox to Chrome was incredibly easy. Switching from Chrome to IE was a bit more awkward. I don’t ever use the Modern UI version of Internet Explorer 10, because I rarely have just one site open, and the desktop version is better suited to my needs (it allows me to jump between open pages much more quickly). I imported my bookmarks/favorites from Chrome to IE10 without any real drama, but they were imported in an apparently random order, which took a while to organize because I have a hundred or more of them!

    I have a folder of bookmarks of favorite sites that I open every morning. IE had no problem opening these but for some reason decided to change the zoom on half of them. So some pages required a magnifying glass to read, while others I could make out very clearly from the other side of the room. I adjusted the zoom for all of the pages to 100 percent and now everything is more or less fine. I’ve no idea what that was all about, and it hasn’t done it again, so maybe Internet Explorer recognized me as the new boy and was welcoming me with a spot of hazing.

    I then finished off by customizing IE to make it more usable. As someone who likes to have a ridiculous number of tabs open at all times, I had to move the tab bar onto a separate row so I could actually see the tabs properly. I have to assume people who usually use Internet Explorer only have a maximum of two tabs open at a time, because with the giant address bar there’s not much room for any more.

    Similarly, because I have so many tabs open I don’t want thumbnails of all of them to display when I mouse over the IE button on the taskbar, so I went back into settings and killed that feature.

    Now that I’m using it all the time, IE’s page rendering looks really weird (Amazon in particular looks awful — the writing is very small, and even blown up to 125 percent it’s not great), but that’s something I guess I’ll get used to quickly enough. There’s nothing really wrong with it, I hasten to add, it’s just not what I’m used to.

    On to add-ons next. I have Adblock Plus installed on Firefox and Chrome, but it isn’t available for IE, so I’ll have to find a replacement, or just have ads. The majority of the add-ons I use for Firefox and Chrome aren’t available, so I’m going to try and just use the vanilla version of Internet Explorer for a while, and then seek out any add-ons I find I really can’t live without.

    Interestingly, while writing this IE10 has twice replaced Outlook.com with a message starting “This page can’t be displayed” and suggesting solutions for the problem, a problem that neither Firefox nor Chrome has. I really hope that’s not a sign of things to come, or this could be a very short-lived experiment.

    Photo Credit: Sam72/Shutterstock

  • It’s not Skynet yet: In machine learning, there’s still a role for humans

    If you’ve ever seen any of The Terminator films, you’re familiar with Skynet, the self-aware computing system at odds with humanity. But, even though a perception persists that machines can increasingly solve complex problems and process large amounts of data on their own, machine learning experts say humans still play a very important role.

    Human intervention is critical at multiple layers, from choosing which algorithms to apply, to feature creation, to crafting the entire structure within which a machine will learn, said Scott Brave, founder and CTO of Baynote, at GigaOM’s Structure: Data conference Wednesday.

    Down the road, he said, there will be more opportunities for machine-man collaboration, as data scientists observe what the machines may be learning and then add new inputs and ideas to the system.

    “A lot of times we forget that even though it’s big data, the amount of data that the machine has access to pales in comparison to the amount of data we’re absorbing and have access to,” he said. “We’re building intuitions and holistic pictures in our minds and we see these connections that the machine might not even have the possibility of seeing because it doesn’t have the right data.”

    Humans have a powerful role in figuring out the sources of data to give the machine and projecting their intuition, he added.
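
    Brave’s point about humans choosing the algorithms and creating the features is easy to picture in code. Below is a minimal, purely illustrative Python sketch (the data, feature functions, and threshold are invented for this example and have nothing to do with Baynote): the learning step only ever sees the numbers a person decided to compute.

```python
# Illustrative sketch of the human role in machine learning: a person decides
# which features to derive from raw records; the "machine" only sees the
# numbers that choice produces. Data and features here are invented examples.

raw_transactions = [
    {"amount": 42.0,  "hour": 14, "fraud": 0},
    {"amount": 980.0, "hour": 3,  "fraud": 1},
    {"amount": 13.5,  "hour": 11, "fraud": 0},
    {"amount": 770.0, "hour": 2,  "fraud": 1},
]

# Human judgment: someone hypothesizes that large amounts and odd hours matter.
def make_features(tx):
    return [
        tx["amount"] / 1000.0,           # scaled amount
        1.0 if tx["hour"] < 6 else 0.0,  # late-night flag
    ]

X = [make_features(tx) for tx in raw_transactions]
y = [tx["fraud"] for tx in raw_transactions]

# The machine part: a trivial scoring rule stands in for whatever algorithm a
# human selected; with richer data this is where model fitting would happen.
weights = [1.0, 1.0]
for features, label in zip(X, y):
    score = sum(w * f for w, f in zip(weights, features))
    print(features, "->", "fraud" if score > 0.8 else "ok", "(actual:", label, ")")
```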

    Still, Timothy Estes, founder and CEO of Digital Reasoning, pointed out that there are three key areas in which machine bests man – and, over time, they could give rise to some interesting social and cultural questions.

    Humans will never be able to consume the sheer amount of data machines can process (unless it’s with some “Ray Kurzweil-style” man and machine merging), humans weren’t designed to receive thousands of inputs at once, and we’re ill-equipped to create a unified model of knowledge across that scale of information and make judgements from it, Estes said.

    Recognizing that, he said, he predicts a social debate between adopting a “Google”-like model of artificial intelligence, in which the machine simply tells you what to do next, and a software model that assumes more human agency.

    “I believe we’re going to see that [debate] play out in the next decade between the software-centric model – a personal empowerment model – and a collective model,” he said. “And that’s the Skynet problem… you get a computer with intentionality that has access to data and the next thing you know you’re looking for a robot coming back from the future.”

    Check out the rest of our Structure:Data 2013 coverage here, and a video embed of the session follows below:



  • Sword of the Stars: The Pit Review (PC)

    I have about 25 more bullets for my assault rifle and I get the feeling that I will not find many more for a while, so I resolve to use my trusty Marine blade to take out the enemies that stalk me throughout the level and keep the ammo in reserve for a really big and scary threat.

    I sprint through the rooms I have already explored, remembering h… (read more)

  • Cloud Computing: A CFO’s Perspective

    Cloud computing and the technologies surrounding the platform have made a big impact on the modern business. Now, more organizations are looking for ways to leverage the cloud and see where it can create cost savings. Although many IT professionals will fight for a cloud model, in many cases the CFO needs to make a good recommendation as well.

    The IT infrastructure is an absolutely vital part of any company. In fact, IT is now at the top of the CFO’s agenda. According to Gartner’s The CFO’s Role in Technology Investments, 26 percent of IT investments require the direct authorization of the CFO and 42 percent of IT organizations now report to the CFO. This is why, in recent years, the IT department and the business organizations have become much closer in terms of the technologies the entire unit wants to deploy. Just like any new technology, the cloud can have very positive results for a company. However, these results only come about after thorough planning around cloud computing.

    In this whitepaper, HP takes a deeper look at the cloud – but directly from a CFO’s viewpoint. This means analyzing key benefits in moving towards a cloud platform. This includes:

    • Moving capex to opex.
    • Adding speed and flexibility.
    • Creating instant access to innovation.
    • Creating a better and more resilient environment.

    Download HP’s whitepaper to see how a CFO should view the cloud and where key benefits are located. Remember, there are a lot of uses for cloud computing. Many organizations can leverage a cloud model to reduce legacy systems or create a private infrastructure capable of agile growth. However, the key here is ensuring that the entire business entity can see the direct benefits of cloud computing. That means other IT departments, other business units, and of course – the CFO.
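
    As a back-of-the-envelope illustration of the “moving capex to opex” benefit listed above, here is a minimal Python sketch. Every figure in it is a hypothetical assumption chosen for the example; none of the numbers come from the HP whitepaper.

```python
# Hypothetical comparison of an upfront capital purchase versus a cloud
# subscription paid as an operating expense. All numbers are assumptions
# made up for illustration, not HP figures.

capex_purchase = 120_000      # assumed upfront hardware + buildout cost
capex_annual_ops = 15_000     # assumed yearly power, space, and admin cost
cloud_monthly = 4_500         # assumed monthly cloud subscription (opex)
years = 3                     # assumed refresh / comparison horizon

on_prem_total = capex_purchase + capex_annual_ops * years
cloud_total = cloud_monthly * 12 * years

print(f"On-premises over {years} years: ${on_prem_total:,}")
print(f"Cloud (opex) over {years} years:  ${cloud_total:,}")
# The point is less about which total is smaller and more that the cloud
# spend arrives as a predictable operating expense rather than a large
# upfront capital outlay.
```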

  • Nonprofits: Master "Medium Data" Before Tackling Big Data

    Every day humanity adds approximately 2.5 quintillion bytes of data to our collective store of knowledge. Looking over this treasure trove, scientists, financiers, and business leaders are justifiably giddy about the potential of Big Data. For the nonprofit community, Big Data also offers immense potential. But with our mere billions of data points we’re not quite ready for it. Instead, we need to get “medium data” right first.

    Big Data is the search for meaning in the haystacks of massive databases of transactions, sensor readings, and records. For nonprofits, medium data is a humbler but essential prerequisite: structured information about who you are, what you’re trying to do, and what’s happening.

    This may seem like a low bar but nonprofits face legitimate challenges in gathering, organizing, and using even basic data. First, most nonprofits are simply too cash-strapped to invest in cutting-edge information systems to track their activities, engage with their stakeholders and understand their context. Second, the diversity of organizations makes comparison difficult: how could we possibly compare the work of the University of Chicago to a homeless shelter in Albuquerque or to Greenpeace? Third, it isn’t easy to know which are the most effective programs for battling climate change or child slavery or homelessness. Finally, the unique economics of the nonprofit sector — the buyer (donor) is frequently a different person from the user (beneficiary) — interrupt the direct feedback loop that often drives innovation in business.

    These are not just abstract problems; they have very real consequences. Nonprofits need to be able to answer urgent questions like, “Who else is working on homelessness in my town?” or “Has anyone else ever tried this approach for reducing teen pregnancy?” or “What do the people in our job training program think of our work?” The lack of data makes day-to-day tactical decisions hard and long-term planning practically impossible. And who pays the price? The people, communities, and ecosystems that nonprofits serve.

    But years of effort to address these challenges have begun to reap rewards. According to research by the Hewlett Foundation (which I led when I worked there), there are now 371 platforms for gathering data about the nonprofit world: social indicators, capital flows, research about what works, and more. My own organization, GuideStar, now has about 1.4 billion individual pieces of data about the nonprofit world in the United States.

    There’s more good news. There’s a rising culture of transparency and accountability in the nonprofit community. Nonprofit leaders are feeling the pressure to give up blatantly non-transparent practices. And new tools from big players like Salesforce, boutique firms like Social Solutions, and open source platforms like Fluxx all make it easier and cheaper to run data-driven organizations.

    If nonprofits can embrace this new culture, build on these new platforms, and use these tools, medium data could transform the work of nonprofits. Organizations could radically increase their ability to learn quickly. Better data could enable more and better collaboration among nonprofits and with other sectors of the economy. And there is reason to believe that the highest-performing organizations will be rewarded with additional funding. Philanthropy will always be emotional and personal, but recent research shows that the right information systems can help direct billions of dollars to proven high-performers — and away from organizations unable or unwilling to prove their effectiveness.

    Of course, nonprofits cannot hope to magically reap the potential of medium data without some hard work. Here’s what nonprofit leaders need to keep in mind:

    1. Don’t freak out. Nonprofits often panic about data because they worry about revealing weaknesses or compromising their funding. Remember that data are meant to complement intuition and stories, not replace them. Information should inform, not decide.
    2. Focus on what nonprofits have in common. Nonprofits need to agree on — and then adopt — basic data standards. Every nonprofit may well be its own unique snowflake, but if we focus only on what makes us different, we’ll never reap the rewards of medium data. We all must be willing to share our stories in similar language, through shared formats, and on common platforms. As part of this, nonprofits need to support the central players that are building the core information infrastructure — whether the Foundation Center (data about foundations), the Global Impact Investing Network (data on impact investing), or my own organization, GuideStar (data about individual nonprofits).
    3. Default to openness. Medium data only works if we share. Most nonprofits are simply too small to have a critical mass of data on their own. But together, we have enough data to reap immense insight and impact. There will always be data an organization should not share — but we need to switch our default from opacity to openness. Instead of opting-in to transparency when nonprofits feel like it, we should opt-out only when necessary.

    It’s time for every nonprofit leader to step up to the challenge of data. Fear not: It doesn’t require terabytes of data and supercomputers. Medium data is simply organized storytelling — and if there’s one thing nonprofits do well, it’s tell stories about the need in our communities. Now is the time for us to tell an honest, open, shared story about ourselves.

    This post is part of an online debate about How Big Data Can Have a Social Impact, which we’re hosting in partnership with the Skoll World Forum on Social Entrepreneurship. You can view the entire debate here.

  • Levees and the National Flood Insurance Program: Improving Policies and Practices

    Prepublication Now Available

    The Federal Emergency Management Agency’s (FEMA) Federal Insurance and Mitigation Administration (FIMA) manages the National Flood Insurance Program (NFIP), which is a cornerstone in the U.S. strategy to assist communities to prepare for, mitigate against, and recover from flood disasters. The NFIP was established by Congress with passage of the National Flood Insurance Act in 1968 to help reduce future flood damages through NFIP community floodplain regulation that would control development in flood hazard areas, provide insurance for a premium to property owners, and reduce federal expenditures for disaster assistance. The flood insurance is available only to owners of insurable property located in communities that participate in the NFIP. Currently, the program has 5,555,915 policies in 21,881 communities across the United States.

    The NFIP defines the one percent annual chance flood (100-year or base flood) floodplain as a Special Flood Hazard Area (SFHA). The SFHA is delineated on FEMA’s Flood Insurance Rate Maps (FIRMs) using topographic, meteorologic, hydrologic, and hydraulic information. Property owners with a federally backed mortgage within an SFHA are required to purchase and retain flood insurance, known as the mandatory flood insurance purchase requirement (MPR). Levees and floodwalls, hereafter referred to as levees, have been part of flood management in the United States since the late 1700s because they are relatively easy to build and a reasonable infrastructure investment. A levee is a man-made structure, usually an earthen embankment, designed and constructed in accordance with sound engineering practices to contain, control, or divert the flow of water so as to provide protection from temporary flooding. A levee system is a flood protection system which consists of a levee, or levees, and associated structures, such as closure and drainage devices, which are constructed and operated in accordance with sound engineering practices.
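
    As a bit of context on the “one percent annual chance flood,” the following worked calculation (my own illustration, not taken from the report) shows why the purchase requirement matters over the life of a typical mortgage: a 1 percent annual chance compounds to a sizable probability over 30 years.

```python
# Illustrative arithmetic, not from the NRC report: the "100-year flood" has a
# 1% chance of being equaled or exceeded in any given year, which does not
# mean it happens only once per century.

annual_chance = 0.01   # one percent annual chance (base) flood
years = 30             # assumed mortgage term

p_at_least_one = 1 - (1 - annual_chance) ** years
print(f"P(at least one base flood in {years} years) = {p_at_least_one:.1%}")
# -> roughly 26%
```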

    Recognizing the need for improving the NFIP’s treatment of levees, FEMA officials approached the National Research Council’s (NRC) Water Science and Technology Board (WSTB) and requested this study. The NRC responded by forming the ad hoc Committee on Levees and the National Flood Insurance Program: Improving Policies and Practices, charged to examine current FEMA treatment of levees within the NFIP and provide advice on how those levee-related policies and activities could be improved. The study addressed four broad areas regarding how levees are considered in the NFIP: risk analysis, flood insurance, risk reduction, and risk communication. Specific issues within these areas include current risk analysis and mapping procedures behind accredited and non-accredited levees, flood insurance pricing and the mandatory flood insurance purchase requirement, mitigation options to reduce risk for communities with levees, flood risk communication efforts, and the concept of shared responsibility. The principal conclusions and recommendations are highlighted in this report.

    [Read the full report]


  • Adblock Plus Gets Self-Updating App To Circumvent Google Play Ban

    Last week, Google made everybody angry by announcing the retirement of Google Reader. While everybody was fuming about that, the company also started removing ad blockers from Google Play, including Adblock Plus. It didn’t take long, however, for the app to make its way back to Android.

    The Adblock Plus team introduced version 1.1 of its Android app today. This version brings with it automatic updates to get around the fact that it’s not welcome on Google Play anymore. The team took the opportunity to implement a number of other changes as well:

  • Implemented automatic updates
  • Added a dialog to help with the manual proxy configuration
  • Separated filtering and proxy activation settings to avoid loss of connectivity after manual configuration
  • Switched to the Holo user interface theme
  • Improved icon hiding
  • Implemented a workaround for a Chrome issue causing blank pages
  • Fixed an issue with URLs containing apostrophes

    It’s pretty obvious that Google didn’t like Adblock Plus because it prevented the company from earning ad revenue off of apps and mobile browsers. It will be interesting to see if Google does anything to combat its return.

    One of the key advantages of Adblock Plus is that it doesn’t require your device to be rooted, so I can imagine Google introducing a change in future versions of Android that blocks Adblock Plus and similar software on non-rooted devices.

    Even if Google were to do that, something would come along to bring ad blocking back to Android. Consumers have shown through ad blocking software that they simply don’t like the current form ads take. So instead of fighting ad blockers, perhaps Google should find a way to make mobile ads less obnoxious.

    [h/t: The Next Web]

  • Better Broadband Means Better Economy in Rural Areas

    Yesterday Telecompetitor mentioned a new report by the National Agricultural & Rural Development Policy Center (NARDeP), Rural Broadband Availability and Adoption: Evidence, Policy Changes and Options. Here’s the info in a nutshell in terms of the connection between broadband and economic vitality:

    • Broadband and economic health are linked in rural areas (potentially in a causal direction):
      • Low levels of adoption, providers, and broadband availability were associated with lower median household income, higher levels of poverty, and decreased numbers of firms and total employment in 2011
      • Increases in broadband adoption between 2008 and 2010 resulted in higher levels of median household income and total employment for non-metro counties
      • Broadband adoption thresholds have more impact on changes in economic health indicators between 2001 and 2010 than do broadband availability thresholds in non-metro counties

    And some of the metro-rural differences:

    • The broadband adoption gap between metro and non-metro areas remained at 13 percentage points in both 2003 and 2010; however, this gap increased among low-income, low-education, and elderly populations
    • The most rural (non-core) counties experienced significant improvements in broadband adoption between 2008 and 2011
    • Traditional factors – income, education, age, race, and non-metro location – played a role in adopting broadband for both 2003 and 2010; low levels of providers had a negative impact on adoption while higher levels of broadband availability had a positive impact

    These findings agree with Jack Geller’s findings on the issue. He often illustrates the point using Rogers’ adoption curve (from the diffusion of innovations theory). We’ve seen broadband adoption increase at a good clip over the last few years – and the remaining non-adopters are laggards. They have lower incomes and lower levels of education, they are older, and they are often minorities in rural areas.

    [Image: broadband adoption curve]
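
    To picture the curve Geller uses, here is a tiny illustrative plot of an S-shaped (logistic) adoption curve. The midpoint year and steepness are arbitrary assumptions, not NARDeP data; the late, flat tail is where the laggards discussed above sit.

```python
# Purely illustrative S-shaped (logistic) adoption curve in the spirit of
# Rogers' diffusion of innovations; parameters are arbitrary, not NARDeP data.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(2000, 2020, 200)
midpoint, steepness = 2008, 0.6   # assumed inflection year and slope

adoption = 1 / (1 + np.exp(-steepness * (years - midpoint)))

plt.plot(years, adoption * 100)
plt.xlabel("Year")
plt.ylabel("Cumulative adoption (%)")
plt.title("Illustrative adoption (S) curve")
plt.show()
```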

    Jack also points out that in some ways this is a demographic that will take care of itself, the older demographic more quickly than the young. I think the NARDeP research might indicate that it’s worth the effort and investment to reach out to these folks – especially if the increases in household income and employment seen from 2008-2010 could transfer to these laggards as well. The most difficult thing will be convincing the non-adopters. As the research indicates…

    When asked their primary reason for not using broadband 40% of rural residents in 2003 said they didn’t need it. By 2010 that number had climbed to 47%.

    The NARDeP also makes some policy recommendations…

    • Draw broadband infrastructure to less economically robust regions lacking it (via programs such as the FCC’s Connect America Fund)
    • Focus adoption programs on populations with lower levels of income and education as well as racial/ethnic minorities; involving community anchor institutions is particularly important
    • Build on diffusion factors such as trialability, observability, and compatibility to expose nonadopters to the technology
    • Though wireless deployment is helpful, many of the productivity gains and economic advantages of broadband are limited through this technology
    • Support improved data gathering related to price / affordability (including bundles) and service quality (speed)

    This could turn out to be good advice for Minnesota legislators as they think about the Office of Broadband Development.

  • Galaxy S 4 could help Samsung ‘unseat Apple as king of innovation’

    Expectations were high leading up to last week’s Galaxy S 4 unveiling and while no one was blown away by the handset’s design, a number of industry watchers believe Samsung (005930) delivered an impressive device that will carry the company’s momentum forward thanks to intriguing new software features. There are strong opinions on both sides of the argument regarding just how innovative Samsung’s next-generation flagship phone really is, but one recent report suggests the Galaxy S 4 could be the start of a shift that sees Samsung “unseat Apple (AAPL) as king of innovation.”


  • Vasona Networks Inks $22M

    Vasona Networks Inc., a developer of platforms for mobile network capacity and resource management, has closed a recent $12 million Series B round, bringing its total funding to $22 million. Bessemer Venture Partners led the round, with participation by New Venture Partners. Vasona Networks is based in Santa Clara, California, with research and development offices in Tel Aviv, Israel.

    PRESS RELEASE
    Vasona Networks, Inc.®, a provider of platforms for mobile network capacity and resource management, today announces total funds raised of $22 million, including a recent $12 million Series B round. The venture capital financing was led by Bessemer Venture Partners, with participation by New Venture Partners and a strategic investor, all of them participants in the company’s Series A round.
    The newest financing follows Vasona Networks’ success bringing to market its pioneering SmartAIR1000™ edge application controller, which is in deployments by mobile network operators around the world. The company will use the proceeds to accelerate its growth and expand field operations, while continuing investment in its research and development activities.
    “During the last few years, Vasona Networks has rapidly defined, developed and validated its platform, establishing a new category of solution for pressing mobile operator needs,” says Bob Goodman, a partner at Bessemer Venture Partners and member of Vasona Networks’ board of directors. “This new financing positions the company to fulfill and build on the strong demand it’s experiencing.”
    Founded in 2010, Vasona Networks recognized that bringing together certain mobile, media and IP networking technologies could address the deluge of growing application usage that mobile operators face today. Vasona Networks’ resulting flagship SmartAIR1000 edge application controller works with mobile traffic across all applications, at granularity of every cell in a network. It assesses and acts on congestion based on exactly where it is occurring and what is causing it. Bandwidth is allocated to each application in real time for the best overall subscriber experiences. This solution empowers mobile operators to more effectively use resources as they face expensive and complex upgrades to add network capacity.
    “Vasona Networks is succeeding across our operations including multidisciplinary engineering, launching the first edge application controller, deployments with top global operators, and substantial support from our venture investors,” said Biren Sood, CEO, Vasona Networks. “Our funding validates Vasona Networks’ leadership positioning, with participation including Bessemer Venture Partners, one of the most venerable venture capital firms, and New Venture Partners, a sophisticated investor in communications infrastructure markets.”
    For more information, visit www.vasonanetworks.com.
    About Vasona Networks
    Founded in 2010, Vasona Networks, Inc.® works with global mobile network operators to deliver better subscriber experiences. The company’s pioneering edge application controller, the SmartAIR1000™, takes a holistic approach to addressing mobile network data traffic congestion that occurs in each cell, monitoring every application demanding bandwidth. With this visibility, Vasona Networks’ RateControl™ technology allocates bandwidth by precise determination of user needs and experiences. The company has received investments from Bessemer Venture Partners and New Venture Partners. Vasona Networks is based in Santa Clara, California, with research and development offices in Tel Aviv, Israel. For more information, visit www.vasonanetworks.com.

    About Bessemer Venture Partners
    With $4.0 billion under management, Bessemer Venture Partners (BVP) is a global venture capital firm with offices in Silicon Valley, Cambridge, Mass., New York, Mumbai, Bangalore and Herzliya, Israel. BVP delivers a broad platform in venture capital spanning industries, geographies, and stages of company growth. From Staples to Skype, VeriSign to Yelp, LinkedIn to Pinterest, BVP has helped incubate and support companies that have anchored significant shifts in the economy. BVP also led the Series A financings of Broadsoft, Intucell and Flarion. More than 100 BVP-funded companies have gone public on exchanges in North America, Europe and Asia. See www.bvp.com or follow BVP on Twitter: @bessemervp


  • Custom Data Centers: Responsibilities of the Stakeholders

    This is the third article in a series on the DCK Executive Guide to Custom Data Centers.

    Like any large-scale project, commissioning a data center design, whether standard or custom, requires a clear understanding of responsibilities, and points of contact (POC) and/or project managers (PM) need to be carefully selected and agreed to by all involved parties. It is highly recommended that the POC or PM for the organization that is purchasing or leasing the data center be generally familiar with, and have some experience in, the operation and basic technologies of a data center. This is especially important for a custom design, and simply appointing an “all purpose” internal POC or PM without any specific data center experience should be avoided if at all possible. If such a qualified person is not available internally, consider utilizing a qualified independent consultant to act as the POC or PM, or at the very least as a trusted advisor. While they do not have to be an engineer, they do need to be able to fully understand what is being asked of the bidding data center design and build firms and the implications of their responses, questions or change requests as the designs are developed.

    Before delving into the details, let’s first clarify the general data center categories and terms: standard, build-to-suit, and, of course, custom design.

    Standard Data Center
    While there really is no such thing as a generic “standard” data center, it generally involves a design that follows common industry standards and best practices. This usually covers the layout of the rows of cabinets, typically capable of supporting a moderate power density, as well as selecting the tier level of infrastructure redundancy and a total facility size commensurate with your organization’s immediate and future growth expectations. This type of data center is readily available for lease or purchase (please see part 1 of this series, “Build vs Buy”) and is built using standard equipment and straightforward designs.

    Build-to-Suit Data Center
    The “Build to Suit” term and other similar marketing names such as “Turn Key” and “Move-In Ready” are used by some data center builders and providers in the industry. While the name sounds like, and would seem to imply, a completely custom design, it generally offers a somewhat lower level of customization within certain limits of a basic standard design. This should be given serious consideration, since in many cases it may meet some, most, or even all of your specialized requirements with a minimal cost impact. Also, by keeping within the basic framework of a standard design, it is less likely to face early obsolescence when a normal technology refresh occurs.

    Custom Design Data Center

    Like a custom built race car, designed and built for performance, a custom data center should represent a technically leading edge, tour de force design. In the case of a data center, the extreme performance is typically manifested in the form of higher flexibility, reliability, energy efficiency and power density, or some combination thereof.

    Hardly a week goes by without some headlines in the data center publications announcing a new custom built data center based on a radical new design, most commonly by a high profile firm in the Internet search, social media and cloud services arena, such as Google, Facebook, or Microsoft. It is important to understand that these are typically based on very large scale dedicated applications and may involve specialized custom built hardware for use in so called hyper-scale computing. As an example, Facebook and Google utilize unique custom built servers (each has their own different server design), which do not have standard enclosures and require special matching cabinets, as well as specialized power and cooling systems.

    This results in some technical and financial advantages, primarily related to lower cost per server and better overall data center energy efficiency. However, before embarking down the path of a highly customized data center design, it is important to understand that it requires a sufficiently large scale and IT architecture. It also may limit the general ability to support standardized racks and IT equipment. Let’s look at some emerging trends in custom data center designs.

    Hybrid and Multiple Tier Levels
    Tier levels generally refer to the level of redundancy and fault-tolerance resulting in a projected level of availability rating for a data center (1 lowest, 4 the highest).

    One area of customization that is becoming more popular is the incorporation of multiple tier levels of infrastructure redundancy within the data center. This can lower costs and may increase energy efficiency by creating a lower tier level (i.e. Tier 2) zone for less critical applications, while still providing a high level of redundancy (Tier 3-4) area for the most critical systems and applications.

    There are also those data center operators and owners that do not feel that they have to exactly follow all the requirements of the tier level system, but may prefer to use selected concepts and have a hybrid design. This gives them the flexibility to provide a greater level of redundancy in the electrical systems (i.e. a 2x[N+1] dual-path system, comparable to a tier 4 design), while using a less complex and lower-cost cooling system with only N+1 cooling components (for more details on tier levels please refer to the “Uptime” section in part 1, “Build vs Buy”).
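
    As a rough illustration of what those redundancy choices buy you, the short Python sketch below computes projected availability for a dual-path electrical design and for an N+1 arrangement. The component availability figures are invented for the example; they are not tier definitions or vendor numbers.

```python
# Illustrative availability math for redundant subsystems. All availability
# figures are assumptions made up for the example, not real measurements.
from math import comb

def parallel_availability(path_availability: float, paths: int) -> float:
    """System is up if at least one independent path is up."""
    return 1 - (1 - path_availability) ** paths

def n_plus_1_availability(unit_availability: float, needed: int) -> float:
    """N+1: `needed` + 1 units installed; system is up if at least
    `needed` independent units are up."""
    total = needed + 1
    return sum(
        comb(total, k) * unit_availability**k * (1 - unit_availability) ** (total - k)
        for k in range(needed, total + 1)
    )

single_path = 0.999   # assumed availability of one power path
chiller = 0.995       # assumed availability of one cooling unit

print(f"Single power path:      {single_path:.6f}")
print(f"Dual path (2N-style):   {parallel_availability(single_path, 2):.6f}")
print(f"N+1 cooling (4 needed): {n_plus_1_availability(chiller, 4):.6f}")
```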

    Of course, once you have begun to explore a custom design you may choose to mix the multiple and hybrid design schemes to match your organization’s various application and system requirements, which may also lower your CapEx and OpEx costs.

    There is also a growing trend to try to segregate hardware by environmental requirements. Systems such as tape backup equipment in particular require tight environmental control, yet do not require much actual cooling or power density. By isolating them from other hardware such as servers, you are able to properly support and maintain the reliability of more sensitive disk-based storage and tape library equipment by tightly controlling the temperature and humidity. This also improves the energy efficiency of the cooling system for other more robust hardware such as servers, or the new solid state storage systems, by allowing for raised temperatures and expanded humidity ranges (for more on this please refer to part 3, “Energy Efficiency”).

    Containerized Data Center
    The data center in a container is an alternative that is beginning to find some traction in the data center industry. These can be either an add-on to a traditional facility or the basis for an entire “data center” built primarily from containerized or modular prefabricated units. Some designs are based on a core power and cooling infrastructure meant to support weather-proof units that can be placed on a prepared slab and then connected to the core power and cooling systems. Other containers may require a warehouse-type building to shelter them, and again need to be connected to the core support systems.

    Although similar in concept, it is important to distinguish the difference between actual container units and some modular data center systems. It is important to note that containerized solutions or modular systems are not necessarily an inexpensive alternative to a traditional brick-and-mortar data center facility. They are typically best suited to very high density applications of tightly packed, mostly identical hardware, typically thousands of small servers or several hundred blade servers, configured to deliver hyper-scale computing. Their main attraction is for those large organizations that require the ability to respond quickly to rapid growth in computing power and also, to a certain degree, to minimize initial capital expense by being able to add containers or modules on an as-needed basis.

    Regardless, it is important to note that whether you consider a container or a modular system, they still have to be installed at a data center facility that will support and secure them and that the overall facility infrastructure must be pre-designed and pre-built for the total amount of utility power, generator back-up capacity, as well as power conditioning (i.e. UPS typically required for most containers), and in some, but not all cases, a centralized cooling plant.

    Containers can be part of a hybrid custom design, with a relatively standard traditional building serving as the core primary data center. However, the overall facility infrastructure has pre-allocated space, as well as power and cooling infrastructure, for containerized systems which can then be easily added as needed for rapid expansion.

    Open Compute Project
    There are also some resources for “non-standard” or leading edge “outside of the box” designs. One in particular is the Open Compute Project (OCP), which has published its highly energy efficient basic designs and specialized IT equipment specifications. While not every organization is an ideal candidate for all the elements disclosed in the OCP designs, some aspects of the designs can be chosen selectively and incorporated into a custom data center. Some data center providers offer to build a data center based on the OCP designs.

    You can download a complete PDF of this article series on DCK Executive Guide to Custom Data Centers courtesy of Digital Realty.

  • Watch Live: President Obama’s Middle East Trip

    This week, President Obama is making the first trip of his second term, visiting Israel, the West Bank, and Jordan. We will be posting regular updates from the road and livestreaming several of the President's events on whitehouse.gov/live.

    • Wednesday, March 20 (2:05 PM ET) — President Obama and Prime Minister Netanyahu hold a press conference at the Prime Minister’s Residence in Jerusalem
    • Thursday March 21 (11:00 AM ET) — President Obama delivers a speech at the Jerusalem Convention Center
    • Friday March 22 (11:45 AM ET) — President Obama and King Abdullah II of Jordan hold a press conference in Amman, Jordan


  • It’s not you, LinkedIn is down — no up, down, up

    When I signed onto group chat this morning, my colleagues bantered about problems accessing LinkedIn. They couldn’t. I navigated to the site easily enough, but got this message when trying to log in: “An Error occurred during authorization, please try again later”. The social network’s Twitter feed confirms there are problems, but information is contradictory.

    About two hours ago: “We’re aware that the site is currently down, and our team is working on it right now. Stay tuned”. An hour later: “The issues you may have experienced with our site earlier have been cleared. Thanks for your patience”. But they weren’t fixed. At 9:21 am EDT: “Our site is currently experiencing some issues. Our team is continuing their work on this. Stay tuned”.

    I can access the main site but can’t log in. So problems persist, and they’re intermittent. Colleague Mihaita Bamburic couldn’t log in before I started this post, but at 10:38 am posted to BetaNews group chat: “It works now. But then again it did before, then went down again”.

    The cloud is great until it storms, eh? LinkedIn isn’t my primary social network, but I know many, many people who depend on the cloud service for business purposes, which means many of our readers.

    The reaction on Twitter surprises me. “If #LinkedIn is down, how will I know when an Anonymous IT Professional from South Africa views my profile???” Michael Ratty tweets. He jests, right? Lori B. doesn’t. She replies: “As a recruiter, my day is officially unproductive if this is the case”.

    Helen Piña: “As a digital marketer, I selfishly feel good knowing a website can tragically go down on anyone, even #LinkedIn”. Scott Stratten: “LinkedIn is down. Suddenly my life is devoid of MLM pitches and group spam. I feel empty”.

    At 10:44 am, LinkedIn tweeted: “Our site is now back up and running. Thank you for your patience”. I successfully logged in. And you?

    Photo Credit: olly/Shutterstock

  • SDN can turn the network into a big data “curator,” claims Juniper

    Software-defined networking (SDN) will help application developers provide context for all the data their services generate and consume, according to Juniper Networks product management lead Jennifer Lin.

    SDN involves the abstraction of the network’s brains, as it were, from its hardware. This is analogous in some ways to server virtualization, in that it makes it much easier to build smarter systems on top of commodity hardware. Juniper’s take on this sees the network as four layers, namely forwarding, control, services and management — in Juniper’s vision, everything but the forwarding layer should be centralized.
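
    A toy sketch can make that control/forwarding split concrete. The class and method names below are invented for illustration and are not Juniper’s or Contrail’s APIs; the point is simply that one centralized control function programs many simple forwarding elements instead of each box being configured by hand.

```python
# Minimal sketch of SDN's control/forwarding separation. Names are
# illustrative only and do not correspond to any Juniper or Contrail API.

class Switch:
    """Forwarding layer: applies whatever rules the controller installs."""
    def __init__(self, name):
        self.name = name
        self.table = {}                       # destination prefix -> out port

    def install_rule(self, prefix, out_port):
        self.table[prefix] = out_port

    def forward(self, destination):
        for prefix, port in self.table.items():
            if destination.startswith(prefix):
                return port
        return None                           # no rule: drop or punt to controller


class Controller:
    """Centralized control layer: knows the topology and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, prefix, out_port):
        # One decision, pushed everywhere; no per-box manual configuration.
        for sw in self.switches:
            sw.install_rule(prefix, out_port)


switches = [Switch("edge-1"), Switch("edge-2")]
controller = Controller(switches)
controller.program_route("10.1.", out_port=3)
print(switches[1].forward("10.1.2.7"))        # -> 3
```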

    Lin, who joined Juniper through the company’s acquisition of SDN specialist Contrail Systems, said at GigaOM’s Structure:Data conference in New York on Wednesday that federating the control function and eliminating “manual error-prone processes” would help big data because:

    “We’re seeing a huge opportunity here to reposition the role of the network as a curator of big data and make sure that role is easily exposed through abstractions of the network. The role of the network is interesting because the network is the only thing that’s globally pervasive and … uniquely knows a lot of the contextual information that is required to drive insights back into the system.”

    Lin argued that this “rich context” would enable new types of business models such as collaborative data exchanges, without anyone needing to worry about the technology architectures involved. “The role of the network is changing quite heavily and the pace of innovation for hyperconnected data is really astonishing,” she added.

    Check out the rest of our Structure: Data 2013 coverage here, and a video embed of the session follows below:



  • Health Tracking Gets More Up-Close And Personal With Tiny Blood Monitor Implant


    I thought it was impressive that Withings now offers an affordable home scale that tracks your body fat percentage and heart rate, but scientists have developed a tiny Bluetooth-capable blood monitoring device that resides comfortably under the skin, according to the BBC this morning. It’s likely to go into testing with intensive care patients soon, but it’s an example of how intense home health monitoring could get over the course of the next few years.

    The device was created by a team of Swiss medical scientists, and is designed to be installed (that really is the most appropriate term here) in a patient’s abdomen, leg or arm skin, using only a needle. It can last for months, and reports back information about blood glucose and cholesterol levels, so as you might imagine it would be extremely useful for patients with chronic conditions like diabetes who are used to having to draw blood on a much more regular basis.

    It’s not a new idea, but the Swiss team’s design is unique in that it can track a number of different markers at once. In other words, it’s the ultimate quantified-self device for real internal cues. The immediate benefit of this tech is obviously for those with serious conditions, and that’s likely who will see the benefits in the immediate future. The team hopes to have it generally available to patients in need within the next four years.

    But beyond that, it’s easy to see similar unobtrusive sub-dermal implants gaining traction with the growing number of people who seem to want to keep close tabs on their bodies and health. Cholesterol levels and other indicators that can be found by this type of close monitoring will also probably become even more interesting to current advocates of the Quantified Self movement as the population ages.

    It may seem far-fetched to imagine a future where the general desire to know and track more information about ourselves in real-time extends to devices we wear beneath the skin, but ten years ago who could’ve predicted the rise of successful startups like Withings who have built a brand on home health tracking, or the advent of a device like the Basis wristband? Devices like this one might just be the next wave of health monitoring tech ripe for consumerization.

  • ‘Capture’ All of Your Hangout Antics with New Google+ App

    Google is making it easier for you to capture all of your Hangout shenanigans with the launch of a new app inside the popular Google+ feature.

    It’s called Hangouts Capture.

    “Google+ Hangouts bring people together, and when they do, all sorts of awesome can unfold,” says Google’s Jeremy Ng. “The challenge, oftentimes, is capturing your favorite moments as they happen, so today we’re introducing the new Hangouts Capture app. With it you can take pictures of your Hangouts-in-progress, including a number of features not available in the usual screenshot workarounds.”

    What the Capture app does is add a camera button to the bottom of your Google+ Hangout screen. When the app is open, you can snap unlimited screenshots with a single click. Of course, you could just snap screenshots the old fashioned way, but this really is much more efficient.

    Mainly because every “capture” you make is saved to a shared album which is only visible to the other people in the Hangout.

    Afterward, the photos will be accessible in your own photo albums, or by finding the original Hangout post. Google says that Hangout participants will always know that the Capture app is in use, and whenever someone actually snaps a photo.

    You should see the Capture option pop up soon, says Google.

  • Without human input augmentation, algorithms alone are making us dumber

    Are computer algorithms making you dumber? Yes, says Eric Berlow, founder of Vibrant Data Labs. Speaking at the GigaOM Structure: Data 2013 conference on Wednesday, Berlow offered several compelling examples of this phenomenon as well as an approach to augment algorithms with more human input.

    “There’s lots of content in the newspaper,” Berlow noted. “After viewing the most stories for a few weeks, I asked myself, where did all the news go?” Think back to the Presidential debates, Berlow said. If you focused solely on topics provided by news algorithms, you’d be reading nothing but stories about Big Bird and binders full of women.

    Amplifying crowd behavior is a start when it comes to managing societal data, but we need to flip the approach, Berlow said. “How do we harness and amplify our preferences to solve the world’s problems?”

    To do this, we need to find crowd-sourcing solutions that aren’t just the sum of parts, but are greater than the sum of parts. Photocity is a good example, according to Berlow. It takes user-submitted 2D camera images and creates 3D images from them, a product that didn’t exist until the crowd’s data was assimilated.

    This leads to one of the biggest challenges of our time with data: The personal data problem, where you are both the customer and the product. How can we spark a new personal data economy?

    Through the WeTheData.org project, Berlow offers a suggestion: finding how all of our personal data is interconnected. By gathering human input first on approximately 90 personal data challenges and mapping this complexity, the project determined that the top problems emerging are digital access, digital trust, data literacy, and platform openness.
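
    One way to picture “mapping this complexity” is as a graph of challenges, linked when they depend on each other, with the most connected nodes surfacing as the core problems. The sketch below is my own toy illustration with made-up links; it is not WeTheData’s actual method or data.

```python
# Toy illustration: treat each personal-data challenge as a node, link
# challenges that depend on each other, and rank nodes by how many other
# challenges they touch. The links below are invented for illustration only.
from collections import Counter

links = [
    ("digital access", "data literacy"),
    ("digital access", "platform openness"),
    ("digital trust", "platform openness"),
    ("digital trust", "data literacy"),
    ("digital trust", "health data sharing"),
    ("data literacy", "health data sharing"),
    ("platform openness", "civic data reuse"),
]

degree = Counter()
for a, b in links:
    degree[a] += 1
    degree[b] += 1

for challenge, connections in degree.most_common():
    print(f"{challenge}: linked to {connections} other challenges")
```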

    This may sound obvious in retrospect, Berlow said, but “so too was gravity after we discovered it.” A human panel surfacing these four issues shows that if we can solve them, many of the other 90 challenges will be improved as well. And now that the personal data economy problem is better defined, algorithms can be applied to focus on the biggest issues, not every single one.

    Check out the rest of our Structure:Data 2013 coverage here, and a video embed of the session follows below:

