Google Isn’t ‘Leveraging Its Dominance,’ It’s Fighting To Avoid Obsolescence

by Geoffrey Manne on March 12, 2012 · 1 comment

Six months may not seem a great deal of time in the general business world, but in the Internet space it’s a lifetime, as new websites, tools and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform that didn’t exist in early 2004 filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public more than fifty years after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, there have been many dynamic changes to the landscape of the Internet Search market. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was the release of Siri by Apple, introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its real potential lies in its ability to revolutionize the way we search the Internet, answer questions and consume information. As Eric Jackson of Forbes noted, in the future it may even be a “Google killer.”

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has certainly begun to change people’s expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back-end. And though Siri is still a new feature, its expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions – an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates.

As social networks become more integrated with the Internet at large, reviews from trusted contacts will continue to change the way that users search for information. As David Worlock put it in a post titled “Decline and Fall of the Google Empire,” “Facebook and its successors become the consumer research environment. Search by asking someone you know, or at least have a connection with, and get recommendations and references which take you right to the place where you buy.” The addition of social data to search results lends a layer of novel, trusted data to users’ results. Search Engine Land’s Danny Sullivan agreed, writing, “The new system will perhaps make life much easier for some people, allowing them to find both privately shared content from friends and family plus material from across the web through a single search, rather than having to search twice using two different systems.” It only makes sense, from a competition perspective, that Google followed suit and recently merged its social and search data in an effort to make search more relevant and personal.

Inevitably, a host of Google’s critics and competitors has cried foul. In fact, as Google has adapted and evolved from its original template to offer users not only links to URLs but also maps, flight information, product pages, videos and now social media inputs, it has met with a curious resistance at every turn. And, indeed, judged against a world in which Internet search is limited to “ten blue links,” with actual content – answers to questions – residing outside of Google’s purview, it has significantly expanded its reach and brought itself (and its large user base) into direct competition with a host of new entities.

But the worldview that judges these adaptations as unwarranted extensions of Google’s platform from its initial baseline, itself merely a function of the relatively limited technology and nascent consumer demand present at the firm’s inception, is dangerously crabbed. By challenging Google’s evolution as “leveraging its dominance” into new and distinct markets, rather than celebrating its efforts (and those of Apple, Bing and Facebook, for that matter) to offer richer, more-responsive and varied forms of information, this view denies the essential reality of technological evolution and exalts outdated technology and outmoded business practices.

And while Google’s forays into the protected realms of others’ business models grab the headlines, it is also feverishly working to adapt its core technology, as well, most recently (and ambitiously) with its “Google Knowledge Graph” project, aimed squarely at transforming the algorithmic guts of its core search function into something more intelligent and refined than its current word-based index permits. In concept, this is, in fact, no different than its efforts to bootstrap social network data into its current structure: Both are efforts to improve on the mechanical process built on Google’s PageRank technology to offer more relevant search results informed by a better understanding of the mercurial way people actually think.

Expanding consumer welfare requires that Google, like its ever-shifting roster of competitors, be able to keep up with the pace and the unanticipated twists and turns of innovation. As The Economist recently said, “Kodak was the Google of its day,” and the analogy is decidedly apt. Without the drive or ability to evolve and reinvent itself, its products and its business model, Kodak has fallen to its competitors in the marketplace. Revered as a powerhouse of technological innovation for most of its history, Kodak now faces bankruptcy because it failed to adapt to its own success. Having invented the digital camera, Kodak radically altered the very definition of its market. But by hewing to its own metaphorical ten blue links – traditional film – instead of understanding that consumer photography had come to mean something dramatically different, Kodak consigned itself to failure.

Like Kodak and every other technology company before it, Google must be willing and able to adapt and evolve; just as for Lewis Carroll’s Red Queen, “here it takes all the running you can do, to keep in the same place.” Neither consumers nor firms are well served by regulatory policy informed by nostalgia. Even more so than Kodak, Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters. If regulators force it to stop running, the market will simply pass it by.

[Cross posted at Forbes]


How scary was the White House’s cyber simulation for senators?

by Jerry Brito on March 9, 2012 · 0 comments

On Wednesday, administration and military officials simulated a cyber attack for a group of senators in an attempt to show a dire need for cybersecurity legislation. All 100 senators were invited to the simulation, which “demonstrated how the federal government would respond to an attack on the New York City electrical grid during a summer heat wave, according to Senate aides.” Around 30 senators attended. Some post-game reactions:

After the briefing, [Sen. Jay] Rockefeller spokesman Vincent Morris said: “We hope that seeing the catastrophic outcome of a power grid takedown by cyberterrorists encourages more senators to set aside Chamber of Commerce talking points and get on this bill.” [Sen. Mary] Landrieu said the simulation “just enhanced the view that I have about how important” cybersecurity is. She added: “The big takeaway is it’s urgent that we get this done now.”

So how catastrophic did the simulation get? How many casualties? What was the extent of the simulated damage? Did thousands die à la 9/11? A “cyber 9/11,” if you will? We’ll likely never know, because such a simulation will be classified.

Yet as policymakers consider the cost-benefit of cybersecurity legislation, I hope they’ll remember that we’ve already had many a blackout in New York City in real life and, well, they didn’t lead to catastrophic loss of life, panic or terror. As Sean Lawson has explained:

Continue reading →


Rebecca MacKinnon on Internet freedom


by Jerry Brito on March 6, 2012

On the podcast this week, Rebecca MacKinnon, a former CNN correspondent and now Senior Fellow at the New America Foundation, discusses her new book, “Consent of the Networked: The Worldwide Struggle for Internet Freedom.” MacKinnon begins by discussing “Net Freedom,” which she describes as a structure that respects rights, freedoms, and accountability. She discusses how some governments, like China, use coercion to make private companies act as subcontractors for censorship and manipulation. She goes on to discuss a project she launched called the Global Network Initiative, through which she urges companies like Google and Facebook to be more socially responsible. MacKinnon believes technology needs to be compatible with political freedoms, and she issues a call to action for Internet users to demand policies that are compatible with Internet freedoms.

Related Links

  • “Consent of the Networked: The Worldwide Struggle For Internet Freedom”, by MacKinnon
  • Global Network Initiative
  • “Handmaidens to Censorship”, The Wall Street Journal
  • “The Case Against Letting the U.N. Govern the Internet”, Time Techland

To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?


Courts, not FCC, Should Protect Free Speech against Mobile Service Shut-offs

by Berin Szoka on March 2, 2012 · 2 comments

Today, the FCC issued a Notice of Inquiry, responding to an emergency petition filed last August regarding the temporary shutdown of mobile services by officers of the San Francisco Bay Area Rapid Transit (BART) district. The petition asked the FCC to issue a declaratory ruling that the shutdown violated the Communications Act. The following statement can be attributed to Larry Downes, Senior Adjunct Fellow at TechFreedom, and Berin Szoka, President of TechFreedom:

What BART did clearly violated the First Amendment, and needlessly put passengers at risk by cutting off emergency services just when they were needed most. But we need a court to say so, not the FCC.

The FCC has no authority here. The state did not order the shutdown of the network, nor does the state run the network. BART police simply turned off equipment the district doesn’t own—a likely violation of its contractual obligations to the carriers. But BART did nothing that violated FCC rules governing network operators. To declare the local government an “agent” of the carriers would set an extremely dangerous precedent for an agency with a long track record of regulatory creep.

There are other compelling reasons to use the courts and not regulators to enforce free speech rights. Regulatory agencies move far too slowly. Here, it took the FCC six months just to open an inquiry! Worse, today’s Notice of Inquiry will lead, if anything, to more muddled rulings and regulations. These may unintentionally give cover to local authorities trying to parse them for exceptions and exclusions, or at least the pretense of operating within FCC guidelines.

It would have been far better to make clear to BART, either through negotiations or the courts, that their actions were unconstitutional and dangerous. Long before today’s action, BART adopted new policies that better respect First Amendment rights and common sense. But now the regulatory wheels have creaked into motion. Who knows where they’ll take us, or when?


No NSA monitoring in McCain cyber bill, seems better on privacy than Lieberman-Collins (UPDATED)

by Jerry Brito on March 1, 2012 · 1 comment

After the NSA’s aggressive pursuit of a greater role in civilian cybersecurity, and last week’s statement by Sen. John McCain criticizing the Lieberman-Collins bill for not including a role for the agency, some feared that the new G.O.P. cybersecurity bill would allow the military agency to gather information about U.S. citizens on U.S. soil. So, it’s refreshing to see that the bill introduced today–the SECURE IT Act of 2012–does not include NSA monitoring of Internet traffic, which would have been very troubling from a civil liberties perspective.

In fact, this new alternative goes further on privacy than the Lieberman-Collins bill. It limits the type of information ISPs and other critical infrastructure providers can share with law enforcement. Without such limits, “information sharing” could become a back door for government surveillance. With these limits in place, information sharing is certainly preferable to the more regulatory route taken by the Lieberman-Collins bill.

It seems to me that despite Sen. McCain’s stated preference for an NSA role, the G.O.P. alternative is looking to address the over-breadth of the Lieberman-Collins bill without introducing any new complications. The SECURE IT bill is also more in line with the approach taken by the House, so it would make reaching consensus easier.

I’ll be posting more here as I learn about the bill.

UPDATE 12:06 PM: A copy of the bill is now available. Find it after the break.

UPDATE 2:55 PM: Having now had an opportunity to take a look at the bill and not just the summary, it does appear to include a hole through which the NSA may be able to drive a freight train. While NSA monitoring of civilian networks is not mandated, information that is shared by private entities with federal cybersecurity centers “may be disclosed to and used by”

any Federal agency or department, component, officer, employee, or agent of the Federal government for a cybersecurity purpose, a national security purpose, or in order to prevent, investigate, or prosecute any of the offenses listed in section 2516 of title 18, United States Code …

That last bit limits law enforcement’s use of shared cyber threat information to serious crimes, but the “national security purpose” language potentially allows sharing with the NSA or any other agency, civilian or military, for any “national security” reason. That is troublingly broad and a blemish on this otherwise non-regulatory bill.

Information sharing with the NSA might be fine as long as it is not mandatory and the shared information is used only for cybersecurity purposes.

Cross posted from JerryBrito.com

Continue reading →


Keeping the NSA out of civilian cybersecurity: there’s a reason

by Jerry Brito on February 29, 2012 · 0 comments

Tomorrow Sen. John McCain, along with five other Republican senators, plans to unveil a cybersecurity bill to rival the Lieberman-Collins bill that Majority Leader Harry Reid has said he plans to bring to the Senate floor without an official markup by committee.

At a hearing earlier this month, Sen. McCain criticized the Lieberman-Collins bill for not giving the NSA authority over civilian networks. And as we’ve heard this week, the NSA has been aggressively seeking this authority–so aggressively in fact that the White House publicly rebuked Gen. Keith Alexander in the pages of the Washington Post. But as CDT’s Jim Dempsey explains in a blog post today,

The NSA’s claims are premised on the dual assumptions that the private sector is not actively defending its systems and that only the NSA has the skills and the technology to do effective cybersecurity. The first is demonstrably wrong. The Internet and telecommunications companies are already doing active defense (not to be confused with offensive measures). The Tier 1 providers have been doing active defense for years – stopping the threats before they do damage – and the companies have been steadily increasing the scope and intensity of their efforts.

The second assumption (that only the NSA has the necessary skills and insight) is very hard for an outsider to assess. But given the centrality of the Internet to commerce, democratic participation, health care, education and multiple other activities, it does not seem that we should continue to invest a disproportionate percentage of our cybersecurity resources in a military agency. Instead, we should be seeking to improve the civilian government and private sector capabilities.

The military, and especially the NSA, has great experience and useful intelligence that should be leveraged to protect civilian networks. But that assistance should be provided at arm’s length and without allowing the military to conduct surveillance on the private Internet. Military involvement in civilian security is as inappropriate in cyberspace as it is in the physical world.

As Gene Healy has explained, civilian law enforcement and security agencies “are trained to operate in an environment where constitutional rights apply and to use force only as a last resort”, while the military’s objectives are to defeat adversaries. The NSA’s warrantless wiretapping scandal speaks to this difference. “Accordingly, Americans going back at least to the Boston Massacre of 1770 have understood the importance of keeping the military out of domestic law enforcement.” The Senate Republicans would do well to leave NSA involvement in civilian networks out of a new cybersecurity bill.

And FYI: I will be presenting at a Cato Institute Capitol Hill briefing on cybersecurity on March 23rd along with Jim Harper and Ryan Radia. Full details and RSVP are here.

Cross posted from JerryBrito.com


Sen. Levin’s IPO Tax Would Hurt Small Start-ups, Discourage Investment, Slow Growth

by Berin Szoka on February 29, 2012 · 0 comments

Sen. Carl Levin wants Facebook to pay an extra $3 billion in taxes on its Initial Public Offering (IPO). The Senator claims the Facebook IPO illustrates why we need to close what he calls the “stock-option loophole.” (He explains that “Stock options grants are the only kind of compensation where the tax code allows companies to claim a higher expense for tax purposes than is shown on their books.”) He wants Facebook to pay its “fair share” and insists that “American taxpayers will have to make up for what Facebook’s tax deduction costs the Treasury.”

One could object, on principle, to Levin’s premise that tax deductions “cost” the Treasury money—as if the “national income” were all money that belonged to the government by default. One could also point out that Mark Zuckerberg will pay something like $2 billion in personal income taxes on money he’ll earn from this stock sale—and that California is counting on the $2.5 billion in tax revenue the IPO is supposed to bring to the state over five years.

But the broader point here is that Sen. Levin wants to increase taxes on IPOs—and any economist will tell you that taxing something will produce less of it. IPOs are the big pay-off that fuels early-stage investment in risky start-ups—you know, those little companies that drive innovation across the economy, but especially in Silicon Valley? So, while Sen. Levin singles out Facebook as an obvious success story, his IPO tax would really hurt countless small start-ups who struggle to attract investors as well as employees with the promise of large pay-offs in the future.

It’s especially ironic that Sen. Levin proposed his IPO tax just a day after GOP Majority Leader Eric Cantor introduced the “JOBS Act,” a compilation of assorted bipartisan proposals designed to promote job creation by helping small companies attract capital. That’s exactly where we should be heading: doing everything we can to encourage job creation by rewarding entrepreneurship. Sen. Levin would, in the name of fairness, do just the opposite—and, in the long run, almost certainly produce less revenue by slowing economic growth.

And just to underscore the drop-off in tech IPOs since the heyday of the dot-com “bubble” in the late 90s, check out the following Business Insider chart: Continue reading →


“Open Government,” or “Open Government Data”?

by Jim Harper on February 29, 2012 · 0 comments

Paying close attention to language can reveal what’s going on in the world around you.

Note the simple but important differences between the phrases “open government” and “open government data.” In the former, the adjective “open” modifies the noun “government.” Hearing the phrase, one would rightly expect a government that’s more open. In the latter, “open” and “government” modify the noun “data.” One would expect the data to be open, but the question of whether the government is open is left unanswered. The data might reveal something about government, making government open, or it may not.

David Robinson and Harlan Yu document an important parallel shift in policy focus through their paper: “The New Ambiguity of ‘Open Government.’”

Recent public policies have stretched the label “open government” to reach any public sector use of [open] technologies. Thus, “open government data” might refer to data that makes the government as a whole more open (that is, more transparent), but might equally well refer to politically neutral public sector disclosures that are easy to reuse, but that may have nothing to do with public accountability.

It’s a worthwhile formal articulation and reminder of a trend I’ve noted in passing once or twice.

There’s nothing wrong with open government data, but the heart of the government transparency effort is getting information about the functioning of government. I think in terms of a subject-matter trio—deliberations, management, and results—data about which makes for a more open, more transparent government. Everything else, while entirely welcome, is just open government data.


Time Warner Cable & Usage-Based Pricing, Take Two

by Adam Thierer on February 28, 2012 · 0 comments

Time Warner Cable (TWC) has announced it will once again attempt an experiment with usage-based pricing (UBP)
