The Open Networking Summit (ONS) is underway in Santa Clara this week, and software-defined networking (SDN) news is everywhere. ONS created a visual.ly infographic to depict the SDN revolution and LightReading interviewed Dan Pitt, executive director of the Open Networking Foundation, on his thoughts about the new SDN landscape. The Open Networking Summit conversation can be followed on Twitter hashtag #ONS2013.
LSI introduces ARM-based Axxia 4500 processors. LSI introduced its Axxia 4500 product family of communication processors, designed to accelerate network performance while supporting increasing traffic loads throughout the enterprise. Designed for software-defined networks as well as enterprise and data center networking applications, the Axxia 4500 is LSI’s first ARM technology-based communication processor family. It includes LSI acceleration engines and up to four ARM Cortex-A15 cores with a CoreLink CCN-504 coherent, QoS-aware interconnect, built in 28-nanometer process technology. It features up to 100 Gb/s of L2 switching to reduce board space and bill-of-materials costs. “As more business and personal information moves to the cloud, the need to access data quickly and securely from any location, at any time, is critical and is forcing customers to look at new ways to manage this traffic,” said Jim Anderson, senior vice president and general manager, Networking Solutions Group, LSI. “The networks that carry the data are being tasked to do more than ever before, and the Axxia 4500 communication processor family is purpose-built to deliver the high performance required by these demanding trends.”
Extreme Networks showcases SDN solutions, partnerships. Extreme Networks (EXTR) announced that at the Open Networking Summit (ONS) this week it will be highlighting Software Defined Networking (SDN) strategies, technology and multi-vendor interoperability demos with Big Switch Networks and NEC. “The SDN market is rapidly evolving and we ship Ethernet SDN solutions from the campus to the data center that leverage our hallmark performance as well as innovations via the ExtremeXOS operating system,” said David Ginsburg, CMO for Extreme Networks. “At ONS, we are furthering our Open Fabric approach to the edge and data center by showcasing unique, first-to-market performance capabilities, partnerships with leaders like Big Switch and NEC, and extensive interoperability tests and performance demonstrations based on OpenFlow.” Extreme Networks supports Automated Flow Management and hardware-based Quality of Service with traffic shaping for its SDN-enabled Ethernet switches. Its ExtremeXOS modular OS for Ethernet switches provides QoS profiles for SDN traffic, with support for bandwidth rate limiting and rate shaping with single- or dual-rate QoS policies and configurable drop policies.
Ixia IxNetwork OpenFlow test solution. Ixia (XXIA) announced that it now enables service providers to fully benefit from the reliability and scale of standards-based software-defined networking technologies, including OpenFlow. With the new v1.3 protocol, service providers will gain increased value from the greater scale, reliability and features available. Ixia IxNetwork is an off-the-shelf, standards-based OpenFlow test solution supporting OpenFlow v1.3. IxNetwork speeds application delivery across the network by allowing users to test and validate network infrastructure, capacity and scalability. “As the OpenFlow standard continues to develop and expand, it is critical to have test tools available for development and pre-deployment testing,” said Michael Haugh, senior market development manager, Ixia. “By providing the industry’s leading OpenFlow test solutions, being an active member of the ONF, and serving as the chair for the Testing-Interop Working Group, Ixia continues to contribute significant advances to the industry.”
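For readers curious what a test tool exercises at this level: every OpenFlow message begins with an 8-byte header carrying the wire version, message type, total length and a transaction ID, and v1.3 uses wire version 0x04. A minimal sketch in Node.js (the helper name is ours, not Ixia's) of building the HELLO message that peers exchange to negotiate a protocol version:

```javascript
// Sketch of the fixed OpenFlow message header: version, type,
// 16-bit length and 32-bit transaction id, all big-endian.
function ofpHeader(version, type, length, xid) {
  const buf = Buffer.alloc(8);
  buf.writeUInt8(version, 0);   // wire protocol version (0x04 = v1.3)
  buf.writeUInt8(type, 1);      // message type (0 = OFPT_HELLO)
  buf.writeUInt16BE(length, 2); // total message length in bytes
  buf.writeUInt32BE(xid, 4);    // transaction id, echoed in replies
  return buf;
}

// A bare HELLO is just the header; a tester exchanges these before
// sending flow mods to probe a switch's behavior at scale.
const hello = ofpHeader(0x04, 0, 8, 1);
console.log(hello.toString("hex")); // "0400000800000001"
```

A conformance tool builds on exactly this kind of framing, generating thousands of flow-table entries to measure a switch's capacity and failover behavior.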
Yesterday I attended the hearing session for HF985…
HF985 (Johnson) Telecommunications enforcement authority clarified, new requirements for tariffs added, proprietary information protected, criteria for certificates of authority specified, alternative regulation plans terminated, definitions added, technical corrections made, obsolete provisions removed, and conforming changes made.
The hearing was informational only. It was interesting to hear the different parties present the issues. I’m going to leave my notes pretty much as-is with the understanding that the video is here to fill in the blanks.
Bill summary:
A rewrite of Chapter 237. It’s an effort to modernize telecommunications policy – largely unchanged since the breakup of the Bell companies. There are 25 sections of the bill. Many are repealed:
1-7 – includes definitions; basic service is one unbundled single line (aka POTS)
8 – effective July 1, 2019 – Commerce duties are transferred to PUC.
2-3 – telcos can provide reduced rates to schools, government agencies et al.
9 – basic telecommunications services at reduced rates
11 – Telcos charging ICC fees will continue to file tariffs but may also create contracts with providers
12-16 – PUC’s power in terms of investigation, complaint investigation,
15 – PUC findings on wholesale service, establishment of rates and prices
16-17 – confidentiality, discrimination prohibited / information subject to protective order
18 – allows a LEC that has not been compensated by another provider to discontinue providing service
19 – telcos can’t slam (changing a user’s local exchange carrier without the user’s permission)
20 – Assessment of costs
21 – how advanced service providers will assess charges (for hard of hearing and low income services)
22 – sets fees to serve them
23 – no more third-party charges on end customer bills
24 – certification, registration & mapping – Advanced providers must register with Commission
25 – Related to AFORs (alternative forms of regulation). AFORs give greater flexibility in regulator-provider negotiations.
We will work on this in the interim and talk about it in 2014.
Testimony
Scott Bowler with Frontier
Serve 190,000 lines in MN; we support the bill
7.25 million lines in Minnesota
4.25 million are wireless;
1 million are cable or VoIP;
1.5 million are provided by ILECs
Right now only ILECs are regulated
There is a robust, competitive marketplace; the state law should reflect this.
This might be a deregulatory bill – but the industry has taken the lead when most of the providers are not beholden to regulation.
Other providers obtain wholesale services from ILECs. This has been done with ICC. The FCC regulates most of those responsibilities.
Other states have taken similar steps with no negative impact on customers.
QUESTIONS
So is Commerce going to give up regulation?
Yes.
How do we make sure there is transparency?
This won’t affect USF or other charges.
Brent Christensen – MN Telecom Alliance
Represents more than 70 independent providers
Consumers have a choice now. Less than half of this group has a landline and that’s indicative of the state.
The breakup of AT&T, the 1996 Telecom Act has brought about changes but Minnesota has not kept up. The industry has changed.
User charges that used to be controlled by PUC.
I have three numbers coming to my smartphone
– landline – regulated by FCC, PUC & Commerce
– Cell number –regulated by FCC
– Google Voice isn’t regulated at all
This bill maintains oversight of wholesale transactions; it maintains basic phone service for folks who need it.
We will continue to work with industry and consumer groups on this bill.
QUESTIONS
Who are major stakeholders?
Wireless providers, CLECs, cable, VOIP providers, Commerce, PUC.
How many states have made this change?
10 states have enacted legislation or are working on it. WI, IA, IL and IN are working or have worked on this. The FCC’s plans have spurred action.
Tariffs are related to service fees, which are regulated by the FCC.
Does this regulate in broadband areas too?
Broadband is not regulated at the state level; it’s regulated by the FCC. There’s a distinction between the broadband pipe and the VoIP services that run over broadband.
How does this relate to last mile broadband?
You’ve got broadband that comes to your home – over that you get a range of services such as TV, phone, and access to the Internet. Those three services are regulated very differently. We’re looking only at telephone services.
Peter Brickwedde – Department of Commerce (Department of Government Affairs)
The Department has been working on this for several years.
We’ve been working with MTA and hope that continues. We may have differences but work together.
Concerns:
The State does need to modernize. But that does not mean deregulate. We want to focus on consumer protections.
The FCC is not settled yet. The situation is still fluid. We will continue to monitor.
QUESTIONS
What are your concerns?
Basic service – narrowing the definition is a primary concern
Not having a regulatory entity at the state level to turn to is a concern
A venue and method for complaints are concerns.
Who is regulated and how?
We could put together info to share after the hearing. Otherwise I think folks have covered it:
State regulated traditional wired services and
Wireless is regulated at federal level.
Non-traditional services (cable) regulated at local and federal level.
There is a distinction between phone service and broadband. And the FCC is moving towards a broadband world of regulation.
When do you think things will be settled (at FCC)?
It is difficult to know. And it’s a good reason to really examine the landscape. A flexible regulation scheme that is not tied to technology will be helpful.
We expect movement later this year.
There seems to be some concern about pricing. Is that due to lack of oversight?
That is the broad concern. We are trying to highlight the possibility of cost going up and consumers having nowhere to go to complain.
Wouldn’t the PUC take the complaints?
The concern is that under the current process the PUC takes complaints and the Department of Commerce investigates. This bill would eliminate that review. The FCC is setting some rates – but the details will be the tricky part, and how does the State protect consumers?
And on price fixing – is there concern over mergers?
Those issues are addressed in the research notes – the Commission’s role will be addressed at the PUC.
The state would not be able to protect consumers in mergers as we did with the CenturyLink-Qwest merger.
Are there issues with losing state regulation?
Without Department of Commerce & PUC having jurisdiction a small business would not have a state-based recourse if they were having problems with their services.
Brent –
If someone has a complaint now, they file with the PUC. The Department of Commerce investigates complaints with the PUC. This bill would eliminate some of that now and by 2014. We’re no longer monopolies. We’re regulated by the consumer. They have choices. Businesses with 4 lines or more are already deregulated.
The reason for the July 1, 2019 date – PUC & Commerce would no longer regulate local service rates and access charges (long distance payments to local providers), but the FCC essentially takes these off the table anyway.
Call completion is a big issue for us – but it’s up to the provider who will work with the FCC or do a workaround.
Dan Lipshultz – Larson Baudette (represents CLECs – such as Integra)
– ILEC – incumbent local exchange carrier – traditional phone companies (such as US West)
– CLEC – competitive local exchange carrier – new providers (post 1996)
Our retail services are regulated by PUC.
Our clients believe the best protection for consumers is competition.
The first broadband service was DSL. It was invented by the Bell monopoly. The first providers were CLECs.
The consumer – think about your office. You have a phone and laptop. Both are hardwired and connected to Internet. Every business has a hardwired phone and computer. Your phone and broadband are coming from ILEC or cable. We rely on landline service for voice and broadband.
It’s the pipe that matters. We need a statute that supports open market, which means opening up the pipes. It will never make sense to rebuild infrastructure time and time again.
A large ILEC will try to keep competitors out. Every company wants to be a monopoly. To protect the consumer we need regulators to make sure the pipes are available to everyone.
“The era of PC dominance with Windows as the single platform will be replaced with a post-PC era where Windows is one of a variety of environments that IT will need to support”, Van Baker, Gartner research vice president, says. The days of Windows as the applications and device hub are over.
The implications are huge for businesses, which must adapt to something else, too. While native mobile apps are all the rage today, their future is uncertain. Gartner forecasts that by 2016, more than half of those deployed will be hybrid, and that’s good for any platform favoring HTML5, including Windows.
Post-PC is a Lie
Baker and I disagree on one important point, however. There is no post-PC era. It’s a fiction perpetrated by Apple cofounder Steve Jobs, who when alive wanted to sell more devices, and by analysts like Baker who want clients to buy more reports and services. We have entered the contextual cloud computing era, where context, not device or location, determines tech’s usage.
Context isn’t new in business computing, but it accelerates because of the cloud’s benefits. BlackBerries, laptops and PDAs — going back into the last century — were contextual devices around which employees commingled behavior and data. The trend simply accelerates as consumers purchase more devices and bring them to work. Employees have more choices, not just Windows PCs, making tech all the more personal.
In this rapidly evolving future, where the cloud makes content and data available anywhere, anytime and on anything, context is king. There is no post-PC; rather, the personal computer changes roles, becoming one of many platforms. During the transition, platform creators and developers emphasized apps, like they did for the PC — and many are native. But that’s not a sustainable approach given contextual demands — your professional or personal stuff available on anything. Developers make do by creating apps for each platform, which means too much investment in too many places to be strategically sound going forward.
As businesses support more devices, they should add them to existing applications and platforms, rather than supporting them separately, Gartner recommends. That’s actually good for Windows’ longevity, being the incumbent and because of development priorities Microsoft made for the current desktop and server platforms. Windows 8’s Modern UI is all about active content, and developers using HTML5, JavaScript and native code to write apps and to support connected services.
BYOD Jungle
“The BYOD trend and the increased pressure on organizations to deploy mobile applications to accommodate mobile work styles of employees will lead businesses to manage a portfolio of mobile application architectures”, Baker says, “and hybrid architectures will be especially well-suited to business-to-employee applications”.
HTML5’s role cannot be understated. “While hybrid apps will be the majority of enterprise mobile apps, web technologies like HTML5 will make up the most commonly used languages for building mobile applications by 2015”, David Mitchell Smith, Gartner vice president, predicts.
That’s good for cross-platform development, which is more sensible when supporting so many devices — everything from legacy Windows to mobile operating systems like Android, BlackBerry, iOS or Windows Phone.
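The hybrid pattern Gartner describes can be reduced to a small sketch: a shared HTML5/JavaScript core that calls a thin native bridge when the wrapper injects one (as in a WebView shell) and falls back to standard web APIs otherwise. The `nativeBridge` object, method names and return strings below are hypothetical, purely for illustration:

```javascript
// The shared JavaScript core: one codebase, with a platform-specific
// capability injected by whatever native shell hosts the app.
function makeCamera(nativeBridge) {
  // Prefer the native capability when the wrapper provides it...
  if (nativeBridge && typeof nativeBridge.takePhoto === "function") {
    return { source: "native", takePhoto: () => nativeBridge.takePhoto() };
  }
  // ...otherwise fall back to a pure-web implementation.
  return { source: "web", takePhoto: () => "photo-via-web-api" };
}

// The same core runs unchanged everywhere; only the bridge differs.
const webCamera = makeCamera(undefined);
const wrappedCamera = makeCamera({ takePhoto: () => "photo-via-ios-camera" });

console.log(webCamera.source);     // "web"
console.log(wrappedCamera.source); // "native"
```

This is the design choice behind the forecast: the HTML5 core carries the bulk of the investment across platforms, while the per-platform native layer stays small.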
Where Windows sits in the enterprise will greatly affect many IT developers’ choices about what to invest in and where. Those decisions aren’t easy because adoption of other devices is accelerating.
As I explained three years ago: “Mobile device-to-cloud competition’s shifting relevance bears striking similarities to the move from mainframes to PCs, and it is a long, ongoing trend. Microsoft’s newer problem is sudden and unexpected: Competing operating systems moving up from smartphones to PCs or PC-like devices”.
Out with the Old
Looking at historical trends, the pace is slow at first, reaching a crescendo where there is a dramatic shift from the old to the new within a short time span. Some older technologies continue for a time and then disappear, while many others remain in new niches. Some recent, and not-too-hard-to-grasp, examples:
Cottage industries and factories
Horse drawn carriages and trains
Trains and automobiles
Telegraphs and telephones
Mainframes and PCs
CDs and digital music downloads
We’re at the accelerating end now, where much changes fast in a short time. For example, between October and March surveys, Twitter usage dramatically shifted, according to Strategy Analytics. On laptops, usage dropped to 64 percent from 77 percent, while on smartphones and tablets it rose to 71 percent from 65 percent.
“The immediacy of Twitter communications requires devices which are close to hand at every waking moment,” David Mercer, Strategy Analytics vice president, says. “By definition this suggests mobile phones and tablets should be preferred devices for Tweeting and the survey evidence points clearly in this direction”.
Twitter, like many other cloud apps and services, is highly contextual and personal. Business mobile apps will be increasingly so as organizations establish sound policies about devices used for home and work purposes and maximize their benefits.
Again, the trend accelerates. Signs are everywhere. Many IT organizations may find their businesses on the short end of the curve and using BYOD as means of making do while catching up. But decisions they make in process, about Windows’ role and supporting and managing new platforms, matter now. It’s one reason to anticipate hybrid mobile apps, at least in the short term.
As for Windows, its changing role is inevitable now. The question to be answered: what will Windows’ role be in a one-of-many world?
Google is running a doodle on its homepage in India today, celebrating the 160th anniversary of India’s first passenger train journey.
On April 16, 1853, the first passenger train service ran between Bori Bunder in Mumbai and Thane, covering a distance of 34 kilometers (21 miles), hauled by three locomotives, Sahib, Sindh, and Sultan, according to T. Stanley Babu’s “A shining testimony of progress” (as cited on Wikipedia).
According to the article, “This was soon followed by opening of the first passenger railway line in North India between Allahabad and Kanpur on March 3, 1859.”
For an industry whose lot in life is to invent the future and challenge the status quo, technology’s giants are astonishingly stubborn when faced with change. And no two companies personify that more than Microsoft and Intel — the glimmer twins of the personal computer revolution. For decades the PC buying cycle left these two companies sitting on a mountain of cash higher than even the highest Himalayan peaks. I guess when you are sitting at such heights, it is hard to look down and recognize that the base is being chipped away.
To be sure, I am not saying that Microsoft and Intel are going to go away tomorrow. Their fiscal muscle is enough to put even Popeye to shame. And monopolies (even quasi-monopolies) take forever to fade.
But for the first time they are facing a challenge that is much more profound and broader than they have ever faced in their monopolistic lives: competition and changing tastes. How they deal with these changes is going to write the next chapter of their corporate history.
PC sales horror show
But let’s take a step back. The signs of crumbling came last week when research companies like IDC and Gartner shared data showing double-digit percentage declines in PC sales during the first quarter of 2013. To be sure, the first ninety days of the year are relatively slow for sales of consumer goods, considering that people go on a buying binge during the holiday season, but a 14 percent year-over-year decline is still not something to skim over. It was so bad that even Apple’s trend-defying PC sales are expected to head south.
Many media reports blamed the Windows 8 operating system for this debacle, but this is the fourth quarter in a row of sagging PC sales; we can’t blame the new operating system. Media and analysts continue to draw that correlation because it held in the past: a new Windows release meant big PC sales, almost like clockwork every three or four years. Except now it is no longer true, because our relationship with the PC (as we knew it) has changed.
The new personal
It has been just about six years since Apple’s iPhone launched and changed our expectations of computers and our relationship with technology. It became more intimate and personal than either Intel or Microsoft had imagined. It wasn’t that the companies were unaware of mobile phones, or that the iPhone was the first smartphone — Nokia and Palm had been selling them for quite a few years — but the iPhone and later Android phones became truly “personal.”
They made us spend less and less time on our PCs. They were always there, and even when the PC sat on the table, the phone in your hand was more fun and easy to use. And then three years ago came the iPad (and later other tablets) to take away even more of our attention from the PC. And when the iPad launched, I knew my PC was going to become less important. The iPad was my slate of imagination.
In the end, an increasing number of people are finding that they don’t need a whiz-bang PC anymore and they don’t need to upgrade because they can do a lot of things on their iPad or Kindle Fire or Samsung Android tablet.
The signs of this change were obvious to anyone who was paying attention. When Apple dropped “computer” from its name, the late (and then chief executive) Steve Jobs pointed out that it was a sign of the times and where the world was going. Here is what I wrote then:
Apple is making the phone do all things a computer does – surf, email, browse, iChat, music and watch videos. Nary a keyboard or mouse in sight, and everything running on OS-X. While I am not suggesting that this replaces our notebooks or desktops for crucial productivity tasks, the iPhone (if it lives up to its hype) is at least going to decrease our dependence on it.
The future is here
Six years later, the world has really changed for the twin gods of the PC. Unlike Apple and Google, which have hitched their wagons to wireless devices, Microsoft and Intel are still weighed down by the legacy of their past. It is hard for Microsoft to look beyond the profits from Windows and Office; it will always look at the future through the lens of those two products. I have been skeptical of Intel’s ability to come out ahead as well.
Intel, too, is so married to the idea of selling more expensive PC chips and server silicon that it doesn’t know how to readjust its focus and fiscal models for a world that wants lower-priced chips for a different and always-shifting market. The world has embraced these little pocket marvels with amazing speed, and that in turn has unleashed new cellphone economics. Mobile chips are getting faster and faster, and thanks to demand that far outstrips that for classic PC devices, they are getting cheaper.
The mobile phone market is so big that it has attracted all sorts of chip makers into the business: Qualcomm, MediaTek and Nvidia are some of the players in the mobile chip business that are relentlessly flooding the market with faster, cheaper and more powerful chips. They are being helped by ARM Holdings, which keeps beefing up its chip technology and expanding its possible uses by focusing on not making chips, but instead licensing chip designs to others like Qualcomm.
Intel has to react to these guys, not to Advanced Micro Devices, the perennial also-ran that was always weighed down by an anemic balance sheet and an inability to compete even when it had better chips. And as we all know, Qualcomm is no AMD. MediaTek knows how to play the mobile chip game better than anyone else. What does Intel have to show for its mobile efforts?
Change is hard
A lot of noise – press releases, product releases and a handful of devices. Sorry, but I remain resolute in my belief that the company’s DNA is making this transition to anywhere computing very difficult. That inability to change is reflected in the company’s current dilemma over the chief executive position. In an article this week, The New York Times detailed the likely replacements for outgoing CEO Paul Otellini.
Analysts say the two top contenders to be Intel’s next C.E.O. are Brian Krzanich and David Perlmutter, who are close to Intel’s core business. Mr. Krzanich, Intel’s chief operating officer, oversees its fabrication facilities. Mr. Perlmutter, the chief product officer, oversees chip design. Renee James, the head of Intel’s software group, is considered a more remote chance to run what has long been a hardware company. And Stacy Smith, Intel’s chief financial officer, is well liked inside and outside the company, but like Mr. Otellini, lacks an engineering background, which diminishes his prospects.
Regardless of who becomes the new Intel chief, the problem is that they were all weaned on the classic PC business, one that is changing with the rise of smartphones and tablets and lower power anywhere-computing devices.
That said (and as my wise colleague Kevin Tofel continues to remind me), Intel is doing relatively well with its Atom lineup of chips, and he feels it is one of the reasons why Windows RT on ARM devices is facing challenges.
Full Windows 8 tablets that run on Atom processors, priced the same as RT devices (and with similar battery life), should give Intel some hope. However, its addiction to the PC-style model and the hefty margins that come from being almost monopolistic will challenge Intel in the future. As I have written in the past, companies are defined by their corporate DNA, and that determines their outcome.
Microsoft too has similar challenges as it grapples with the idea of competition and a world it doesn’t and can’t control anymore. More on that another day, but in closing, I would like to repeat what I said at the start of this piece: the companies that spearhead the talk of disruption and innovation are the ones who are afraid to disrupt themselves.
Big data isn’t just about Hadoop distributions and analytics software — you also need servers to process it and disks on which to store it. On Tuesday, research firm IDC quantified the market for the latter aspect, predicting that the business of selling storage into big data deployments will be worth nearly $6 billion in 2016, up from just $379.9 million in 2011.
However, as a press release explaining the new report highlighted, defining “storage” for the purposes of big data is an exercise in subjectivity. There are systems for archiving data, systems for storing post-processed data, and systems — like the Hadoop Distributed File System — that put storage on the same servers that process the data. There are also storage systems designed for operational data, those designed for transactional data, and very likely something in between.
Presumably, these numbers don’t account for the amount of storage baked into analytic database appliances like those from Teradata and Netezza. And, although the report doesn’t appear to address it, there also will be a market for storing data in the cloud — both provider-side and user-side. Even here, there are a variety of options from Hadoop services to data warehouse services to software-as-a-service applications.
As we’ve said time and time again, though, trying to put a dollar value on the big data market is in many cases akin to herding cats (that we might also shear for fur and that might provide a valuable service killing off crop-damaging varmints). There are so many disparate facets and delivery models that touch so many different business uses and revenue sources that it’s difficult to capture big data, or any of its individual components, into a single market. But however IDC and other research firms define it, the only thing that matters in the end is probably the ever-rising revenue arrow.
Eighteen months ago I attended the first Open Networking Summit at Stanford’s campus. The event was billed as a place to learn what people were doing with the OpenFlow protocol as well as a primer on software defined networking. The event aimed to attract about 200 people, but around 600 signed up (half of those were shunted to the wait list).
Last night I attended the opening cocktail reception for a radically different ONS and had the chance to reflect on how rapidly the once-staid field of networking is changing. There were about 1,500 people registered, the limit of the venue. The event had moved to the Santa Clara Convention Center, and attendees were a fairly even mix of suits and engineers.
The biggest change was the exhibitor section. In 2011 the exhibitor hall was a narrow corridor at the Stanford conference center where a little more than a dozen students, companies and nonprofits had set up “booths” to showcase their ideas for OpenFlow; now there were a few rows of booths — most of which were quite professional.
In October 2011, I attended the show for one day and moderated a panel where I recall asking Dave Ward, who was then CTO and Chief Architect of the Platform Systems Division at Juniper, what he would do if Stuart Elby, the VP of digital architecture at Verizon, a Juniper customer, got so excited about the promise of OpenFlow and SDN that it stopped buying expensive Juniper gear.
Ward danced a bit but essentially said that Juniper had the features and expertise to pull networking gear together that Verizon would pay for. The subtext (and knowing Ward, it may have been directly stated) was that he wasn’t an idiot and he was well aware that the networking industry was shifting. But his company would figure it out.
Six months later, the same conference had grown to 700 people and had Google showing off its own networking coup — it had built a software defined network using OpenFlow that connected its data centers. And Ward was still on a panel I moderated, only now he was at Cisco: preaching the same ideas but now at a company with the resources to carry it through.
Fast forward to the opening of the summit this year on Tuesday, and I’m eager to see what awaits. All I can tell is that so far the industry has moved from the excitement of translating a new technology into a commercial endeavor — one that scored a $1.26 billion transaction when VMware purchased Nicira — to one where use cases are more common and vendor fighting has started capturing a bit of the event conversation.
Indeed, mixed among the many case studies I’ve heard so far is speculation about the vendor-led OpenDaylight project, which includes IBM, Cisco and VMware as strange bedfellows trying to build an open source controller for the software defined data center.
Just eighteen months removed from its inaugural event, software-defined networking has clearly learned to walk — if not run.
Earlier this year, Google launched a new design for its image search, and ever since, there has been a substantial amount of backlash from webmasters claiming that the changes have decreased the amount of traffic they get to their sites.
Webmasters complaining about changes made by Google is nothing new. Every time Google releases a major algorithm update like Penguin or Panda, the outcry is everywhere. But, like it or not, that’s Google trying to better its algorithm, and ultimately improve its search results. You could also argue that any traffic one site loses, another gains. Somebody wins.
The Image Search story is a bit different, however. This is not an algorithmic change designed to point users to higher quality images or more relevant image results. It’s a cosmetic change, and while some users may find the experience to be an upgrade, it’s clear that many webmasters have not welcomed the redesign.
We got over seventy comments about the changes on a previous article we published. Not many were positive. In fact, most were from webmasters talking about the traffic they lost almost instantly. Here are a few examples:
“55% dropped for websites with images…”
“My traffic has dropped to 1/5 of what it was before the new Google Images search roll out…”
“My traffic was cut by half overnight…”
“My image based website has lost 2/3 of the visitors after the change…”
“Google image traffic has dropped by 50-70% on my site…”
That was back in January. It doesn’t appear that things have gotten much better.
Define Media Group published some findings from a recent study on Monday (hat tip to Search Engine Land). According to the firm, you might as well spend your time in other areas of search engine optimization and online marketing, and not worry so much about optimizing for image search anymore.
“We analyzed the image search traffic of 87 domains and found a 63% decrease in image search referrals after Google’s new image search UI was released,” explains Shahzad Abbas. “Publishers that had previously benefitted the most from their image optimization efforts suffered the greatest losses after the image search update, experiencing declines nearing 80%.”
“In the eleven weeks after Google’s new image search was released, there has been no recovery – which means for image search, the significantly reduced traffic levels we’re seeing is the new normal,” he adds. “In the aftermath of the new image search experience, image SEO has been severely compromised, and we have no choice but to recommend deprioritizing image SEO when weighed against other search traffic initiatives.”
Of course, there’s always the chance that your images could turn up in universal search results on Google’s web results pages, but even then, personalized “Search Plus Your World” results tend to get the emphasis when applicable.
It’s all made even more interesting due to the fact that Google pitched the changes as good for webmasters, indicating that they would actually drive more traffic to sites.
“The domain name is now clickable, and we also added a new button to visit the page the image is hosted on,” wrote associate product manager Hongyi Li in the announcement. “This means that there are now four clickable targets to the source page instead of just two. In our tests, we’ve seen a net increase in the average click-through rate to the hosting website.”
“The source page will no longer load up in an iframe in the background of the image detail view,” Li added. “This speeds up the experience for users, reduces the load on the source website’s servers, and improves the accuracy of webmaster metrics such as pageviews. As usual, image search query data is available in Top Search Queries in Webmaster Tools.”
It’s possible that some sites are seeing more traffic from the Image Search changes, and just aren’t being as vocal, but there has been an overwhelming number of complaints since the redesign, and this new study is not doing anything to defend Google’s case.
Of course, Google is all about placing users first (even over webmasters), and they’ll continue to do what they think is best for them. From a user experience perspective, the changes aren’t bad. But that’s little consolation for those who now have to find other ways to get their content in front of an audience.
Apple’s next-generation “iPhone 5S” will reportedly feature an upgraded 12-megapixel camera with improved image capture capabilities in low light. The report comes from Tinhte.vn, which has on occasion accurately reported details of unannounced Apple devices in the past. The Vietnamese tech blog names Wonderful Saigon Electrics, the Vietnamese arm of a China-based mobile camera component supplier, as its source. No other details were provided, though Tinhte.vn notes that its source at Wonderful Saigon Electrics accurately stated that the iPhone 5 would include an 8-megapixel camera last year while rumors of an upgraded 10-megapixel module were circulating. Apple’s iPhone 5S is expected to launch this summer with an upgraded processor, a new camera and possibly new color options.
Yet again, third-party accessory makers have revealed an upcoming iPad design. Alibaba.com is awash in cases for a redesigned iPad. Nearly every case is for a device that has a thinner bezel and slimmer profile. In short, the next iPad will look like the iPad mini — except, you know, just not mini.
This is the standard story line for Apple devices. Months before a major product is released, accessory makers start pumping out cases. Most of the time these cases are rebadged and sold under a brand name. This process takes time, which is why the cases are available prior to the device launching.
This has happened for nearly every iDevice launch since the iPhone 4. Every iPad — full size or mini — was revealed prior to Apple’s announcement through case makers. And a good chunk of case makers display their wares on Alibaba.com.
Apple is part of this cycle, too. A vast accessory ecosystem is part of the iOS magic. A buyer knows that they can purchase the latest iPhone or iPad and customize it to their liking without any fuss. Even dime stores sell iPhone cases. It has often been reported that Apple releases the dimensions of upcoming devices to accessory makers months before the announcement so the device launches to a full assortment of items.
These factories have likely not seen the next iPad yet. Most of the images are physical mockups or renderings. But they know the device’s dimensions.
So again, act surprised when Apple reveals the next iPad in the coming months. Pretend like you hadn’t seen it before.
Hemlock Grove, the next original series from Netflix, comes to subscribers on Friday. All episodes of the first season will be made available at once, much like the format for House of Cards.
We’ve seen a number of trailers for Hemlock Grove so far, but today, Netflix put out the first red band trailer, with all of the sex, violence, and language fans of Eli Roth (the show’s director) would expect.
Leap Motion, the company making extremely accurate gesture detection hardware, has signed a deal to bundle and then integrate its motion-based controller into select HP products. This is a big win for Leap, which already has a deal with ASUS that will bundle the Leap Motion device in with its all in one computers as well as select ASUS notebooks this year.
Bundling is good, but integration is always better in the consumer world, since most consumers may not have any idea that they want gesture-based controls or even why. Leap’s system works like a Kinect with an exterior piece of hardware attached to the computer that detects hand motions with a high degree of accuracy — within 1/100th of a millimeter. As for why someone might want this on their machine, it’s an enabler for new types of computing experiences.
When the company raised an additional $30 million earlier this year, I wrote how excited I was at the potential for gesture-controls to change how we think of the PC by enabling new applications like molding clay, manipulating spreadsheets in 3-D or playing an instrument. From the post:
That’s a nice win in the computing space, but the real question for me is can a new UI change how we interact with computers, and perhaps help keep the PC relevant? David Holz, a co-founder and CTO of Leap, told me that he helped invent the product because he wanted to do things on his computer, like play an instrument or make a model, that were made far too complicated by the existing programs limited by drop-down menus necessitated by having a keyboard or mouse interface.
This deal with HP may help drive the adoption of more of those Leap-specific applications by helping deliver a larger audience for developers. Already Leap has sent out 12,000 units for free to developers to prime the pump for new applications, but now it needs to give those programmers an audience. As is always the case with a new user interface platform, it could be the most awesome experience since the touch screen, but if people don’t use it, the apps won’t arrive.
In the latest Webmaster Help video from Google, Matt Cutts responds to a question about Penguin’s effect on internal links that use the same anchor text. The exact question is:
Do internal website links with exact match keyword anchor text hurt a website? These links help our users navigate our website properly. Are too many internal links with the same anchor text likely to result in a ranking downgrade because of Penguin?
“My answer is typically not,” says Cutts. “Typically, internal website links will not cause you any sort of trouble. Now, the reason why I say ‘typically not’ rather than a hard ‘no’ is just because as soon as I say a hard ‘no’ there will be someone who has like five thousand links – all with the exact same anchor text on one page. But if you have a normal site, you know…a catalog site or whatever…. you’ve got breadcrumbs…you’ve got a normal template there…that’s just the way that people find their way around the site, and navigate, you should be totally fine.”
He continues. “You might end up, because of breadcrumbs or the internal structured navigation, with a bunch of links that all say the same thing, that point to one page, but as long as that’s all within the same domain, just on-site links, you know, that’s the sort of thing where, because of the nature of you having a template, and you have many pages, it’s kind of expected that you’ll have a lot of links that all have that same anchor text that point to a given page.”
Long story short, this isn’t an issue you should have to worry about. Like with everything else, just don’t abuse it and make it an issue.
A purported case designed to fit Apple’s fifth-generation iPad has been pictured, suggesting once again that a major redesign will be introduced when Apple takes the wraps off its new full-size iPad in the coming months. Following a leak this past January in which the rear shell from Apple’s upcoming full-size iPad was pictured, a number of subsequent reports have supported claims that Apple’s next iPad will take design cues from the iPad mini and scale them up to fit a 9.7-inch Retina display. Now, images of protective cases for Apple’s fifth-generation iPad obtained by Engadget again suggest an all-new design will debut on the “iPad 5.”
It looks like Samsung’s Galaxy S4, one of the most highly anticipated smartphones of the year, finally has a ship date on the nation’s leading smartphone carrier. AT&T on Tuesday made the Galaxy S4 available for preorder starting at $199.99 on a two-year agreement for the 16GB model. The phone will ship by April 30th according to the carrier’s website, and Engadget reports that handsets will be delivered to those who preorder by Friday, May 3rd. The Samsung Galaxy S4 features a 5-inch Super AMOLED display with full HD resolution, a quad-core Snapdragon processor, a 13-megapixel camera, 2GB of RAM, up to 64GB of internal storage and Android 4.2 Jelly Bean.
Back in November, news came out that Google acquired BufferBox, a locker service for people receiving packages from online retailers.
A Google spokesperson said at the time, “We want to remove as much friction as possible from the shopping experience, while helping consumers save time and money, and we think the BufferBox team has a lot of great ideas around how to do that.”
Well, it looks like they’re getting started. BufferBox co-founder Mike McCauley tweeted on Monday that there’s now a BufferBox pickup in a San Francisco Coffee Bar, which is reportedly the first U.S. location (hat tip: Drew Olanoff):
Dell has announced new additions to its networking portfolio, with new Active Fabric solutions for SDN-enabled designs, next-generation management software – Dell Active Fabric Manager, and the Dell Networking S5000 modular LAN/SAN switching platform.
“We’re challenging conventional wisdom with new products and solutions designed to accelerate our customers’ migration to virtualized and cloud data center environments,” said Tom Burns, vice president and general manager, Dell Networking. “We’re excited about these new offerings and their ability to simplify operations, boost performance and improve economics.”
Dell Active Fabric
Dell Active Fabric is an any-to-any multipath network architecture for virtualized data centers and private clouds. Active Fabric solutions flatten the traditional data center network architecture using high-density and low-latency, fixed-form factor 10/40GbE switches that can reach hyperscale proportions. As a logical extension to this, SDN (software defined networking) delivers a software abstraction layer that is designed to enable open programmability and make infrastructure flexible and adaptable to different customer environments. The Dell approach to SDN encompasses networking virtualization overlays (NVO), OpenFlow and legacy interface capabilities.
Dell Active Fabric supports NVOs using Microsoft, VMware and OpenStack hypervisors, OpenFlow-based controllers from vendors such as Big Switch Networks, and legacy programmatic interfaces including Telnet/CLI, TCL, REST, SNMP, Perl and Python scripting. Features include 10GbE and 40GbE L2/L3 multipath fabrics, OpenFlow support, and LAN/SAN convergence using iSCSI, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE). It is a purpose-built solution for virtualized, converged, and SDN environments.
“Dell Active Fabric networking technology is leading edge, providing 40G connectivity to our storage fabric, and 10G switching for east/west traffic,” said Kevin Dunn, vice president for Business Information Services – Infrastructure and Operations at First Command Financial Services. “It gives us the ability to move workloads across our Dell infrastructure quickly and easily, maximizing our hardware utilization and increasing the value of our investments.”
The Dell Active Fabric Manager is a software tool that automates the tasks associated with planning, designing, building and monitoring fabrics. It features a design wizard for simplifying the mapping process, automated provisioning, validation and configuration, and it enables easy integration by abstracting the fabric as a single entity.
Dell S5000 Switch
The Dell Networking S5000 is a 1U top-of-rack 10/40GbE switch equipped with native FC and FCoE capabilities. The S5000 accommodates four modules allowing customers to populate a single module and add as necessary instead of buying all four modules at once. It has a maximum of 64 x 10GbE ports, or 48 x Ethernet/FC ports with 16 x 10GbE ports. With its modularity and system design it can be easily upgraded in the future without sacrificing existing infrastructure equipment, and it has complete support for iSCSI, RoCE, NAS, FCoE and FC fabric services, all on the same platform.
“We’re interested in the S5000 as it’s the first switch we’ve seen with these capabilities in a 1U top-of-the-rack form factor,” said Maurizio Davini, CTO of the IT Center at the University of Pisa, Italy. “This is very important for our datacenter network configuration. We also like the modular aspect so we can scale as we need, while saving CapEx and OpEx in the meantime.”
“Networking professionals need solutions that solve real business problems today, and SDN, while very promising, is still very nascent,” said Bob Laliberte, Senior Analyst of IT Infrastructure and Networking at Enterprise Strategy Group. “Dell Networking is expanding its product line to introduce offerings into the market that tackle today’s immediate needs while enabling organizations to take advantage of future developments.”
Comedian and actor Patton Oswalt, a frequent Twitter user, took to the medium following Monday’s tragedy in Boston to offer some brief thoughts, as many other celebrities and ordinary people did.
“Look for the helpers. You’ll always find people who are helping.” — Fred Rogers, on what to do when scary things are on the news #boston
But Oswalt really opened up on Facebook, in a post that so far has over 250,000 likes and nearly 190,000 shares. Here’s what he said:
Boston. Fucking horrible.
I remember, when 9/11 went down, my reaction was, “Well, I’ve had it with humanity.”
But I was wrong. I don’t know what’s going to be revealed to be behind all of this mayhem. One human insect or a poisonous mass of broken sociopaths.
But here’s what I DO know. If it’s one person or a HUNDRED people, that number is not even a fraction of a fraction of a fraction of a percent of the population on this planet. You watch the videos of the carnage and there are people running TOWARDS the destruction to help out. (Thanks FAKE Gallery founder and owner Paul Kozlowski for pointing this out to me). This is a giant planet and we’re lucky to live on it but there are prices and penalties incurred for the daily miracle of existence. One of them is, every once in awhile, the wiring of a tiny sliver of the species gets snarled and they’re pointed towards darkness.
But the vast majority stands against that darkness and, like white blood cells attacking a virus, they dilute and weaken and eventually wash away the evil doers and, more importantly, the damage they wreak. This is beyond religion or creed or nation. We would not be here if humanity were inherently evil. We’d have eaten ourselves alive long ago.
So when you spot violence, or bigotry, or intolerance or fear or just garden-variety misogyny, hatred or ignorance, just look it in the eye and think, “The good outnumber you, and we always will.”
Over 10,000 people have responded to Oswalt’s words in the comments of the post alone. The reactions have been overwhelmingly positive.
One of the buildings at Digital Dallas, the Digital Realty Trust campus in Richardson, Texas.
Digital Realty is getting a boost from digital effects. RenderWurks, a provider of large-scale server farms that can be used for animation and computer generated visual effects, has signed a lease for Turn-Key Flex space and plans to deploy more than 10,000 servers at a Digital Realty data center in the Dallas market, the companies said Monday.
RenderWurks is a new company offering hosted digital effects, which says it can offer access to rendering farms at more affordable pricing than existing providers, most of which are based in the Los Angeles area. RenderWurks, which is a sister company of dedicated hosting provider TruSurv, will have a Los Angeles office that offers remote access to the server farm in Dallas.
In June RenderWurks will go live in the Digital Realty campus in Richardson, Texas with 5,280 servers in the Dallas facility, harnessing 42,200 CPU cores to bring more than 105 Terahertz of processing power to its clients. This project will make RenderWurks the largest continually available, publicly accessible dedicated render farm in the world, the company said. Primary and failover storage arrays totaling 420 usable terabytes in size will be located in the Dallas data center.
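The quoted figures hang together on a quick back-of-envelope check: 42,200 cores across 5,280 servers works out to roughly 8 cores per server, and the "more than 105 Terahertz" aggregate implies an average clock of about 2.5 GHz per core. A minimal sketch of that arithmetic, assuming a hypothetical 2.5 GHz per-core clock (the announcement does not state one):

```python
# Sanity-check the RenderWurks deployment figures quoted above.
servers = 5280
cores = 42200
assumed_clock_ghz = 2.5  # hypothetical per-core clock, not from the announcement

cores_per_server = cores / servers                 # ~8 cores per server
aggregate_thz = cores * assumed_clock_ghz / 1000   # sum of clocks, GHz -> THz

print(f"{cores_per_server:.2f} cores/server, {aggregate_thz:.1f} THz aggregate")
```

Note that "Terahertz of processing power" here is simply the sum of core clock speeds, a marketing-friendly metric rather than a measure of actual throughput like FLOPS.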
More than 10,000 Servers Soon
RenderWurks said it has signed two contracts that will fully utilize the new space through the remainder of 2013. The company plans to bring online another 5,280 servers before the end of the year, and will also double the storage capacity.
“RenderWurks is a pioneer in the server/rendering farm industry and we are pleased to welcome it to our portfolio,” said Andrew Schaap, vice president of sales at Digital Realty. “Our flexible solutions will allow RenderWurks to provide its customers with a solution that is both scalable and cost effective. It is exciting for us to be part of the early-stage growth of such a dynamic firm and industry.”
“Digital Realty’s state-of-the-art data center will make it possible for us to accommodate the rendering needs of our customers across a range of verticals – animation, computer-generated visual effects, engineering and architecture,” said Jeremey Poe, Marketing Manager for RenderWurks and TruSurv. “The way business is done today, we have to provide a solution that is flexible and will allow our customers to grow or shrink their services as their processing and data storage needs change on a per project basis.”
The TV remote control will not die. And that’s a good thing. Try as they might, startups have yet to provide a true remote control replacement. A dedicated remote is like a trusty pickup truck: It might not be the best looking vehicle but it gets the job done with little fuss. But even though dedicated remotes probably won’t be replaced, that doesn’t mean smartphone apps can’t supplement their existence.
Logitech’s Harmony brand has long turned out some of the very best universal remote controls. Its latest, the Harmony Ultimate, packs the standard affair of hardware including a multitude of buttons, touchscreens, and easy setup through Harmony’s web-based interface. However, Logitech also made this $349 system compatible with its Logitech Harmony smartphone apps, allowing smartphones to fill in when the remote control inevitably goes AWOL.
Or, if you just prefer to use a smartphone altogether, the company also just announced the $129 Logitech Harmony Smart Control, a system that makes the smartphone the primary controller (like the old Harmony Link) but also includes a small physical remote for backup.
Both systems are compatible with nearly every home entertainment device ever made including game systems and the Philips Hue lighting system. Using IR blasters and your home’s WiFi network, devices can be controlled from the remote or smartphone even when they’re packed away out of sight.
With the rise of the smartphone, many technology pundits put the venerable remote on death watch. But it’s still here. Many smart TVs can now be controlled through a smartphone, but most cable boxes and entertainment systems require extra hardware like the Harmony Smart Control or Griffin’s Beacon.
I’ve owned and tested about a dozen high-end universal remote controls starting with an original Harmony before the company was purchased by Logitech. I’ve also tried most of the iOS remotes but find using my smartphone (or tablet) clunky and not nearly as intuitive as a physical remote. A remote control, while often a mind-boggling mess of buttons, is still the best way to control a complex home entertainment system and mindlessly channel surf on lonely Saturday nights.
The Harmony Ultimate will hit stores in the U.S. and Europe this month for $349. The Harmony Smart Control will drop in May for $129.