It’s Google, stupid.

by henrycopeland
Tuesday, September 24th, 2002


If you are in the news business, forget how you manage and serve information. Don’t bother going to fancy content management summits. Instead, spend some time thinking about how readers acquire information.

Eager to test-drive the next content management system? Open a web browser. Type www.google.com. Voilà.

Serving over 5 billion searches a month, Google is by far the world’s biggest single information server, the global content management system. For premium, information-hungry readers, Google is, de facto, both the homepage and the preferred acquisition tool for most important information.

What does this mean for news publishers? Consider New York, where Google thrashes the city’s paper of record on its own front stoop.

The New York Times portrays itself as The City’s Leading Information Source. And as one discovers by crunching NYTimes.com’s own audience figures, the paper gets an average of 1.2 million visitors a day, or roughly 11 million total users a month.

These numbers pale when we consider that Google serves 12,195,400 searches a month for the words “New York.” And 68,400 for “World Trade Center.” And 91,200 for “Bloomberg.” And 144,400 for “NYSE.” And 630,700 for “Broadway.” And 752,300 for “Manhattan.” And 22,800 for “Pataki.” And 60,800 for “Empire State Building.”

You get the idea. Here’s the scary thing: the number of Google searches for “New York” has grown 62% since March. When was the last time the New York Times grew its web audience by more than 20% a year? (All Google figures gleaned from its old Adwords program.)

Here are some other Google search tallies for publishers to chew on. Google gets 11,260,800 searches a month for “London.” “Atlanta” gets 2,302,300 a month. “Los Angeles” gets 3,442,100 a month.

Now, Google goes for the news jugular. Google has been running an alpha version of its news scraper for months, putting relevant headlines atop search results. This week, its “news.google” page began serving up whole pages of relevant news scraped from 4,000 sources.
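To see the scrape-and-group idea in miniature, here is a minimal sketch: pull headlines from a handful of feeds and bucket them by query term. This is not Google’s pipeline; the feed URLs are placeholders and the snippet assumes the third-party feedparser package.

```python
# A toy news aggregator: fetch headlines from a few feeds and group them
# by query term. Feed URLs are hypothetical placeholders; feedparser is a
# third-party package (pip install feedparser).

import feedparser

FEEDS = [
    "https://example.com/nytimes.rss",    # placeholder feed URL
    "https://example.com/slashdot.rss",   # placeholder feed URL
]

def gather_headlines(feeds):
    """Collect (title, link) pairs from every feed in the list."""
    headlines = []
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            headlines.append((entry.get("title", ""), entry.get("link", "")))
    return headlines

def group_by_query(headlines, queries):
    """Bucket headlines under each query term they mention."""
    groups = {q: [] for q in queries}
    for title, link in headlines:
        for q in queries:
            if q.lower() in title.lower():
                groups[q].append((title, link))
    return groups

if __name__ == "__main__":
    grouped = group_by_query(gather_headlines(FEEDS), ["New York", "Broadway"])
    for query, items in grouped.items():
        print(query, "->", len(items), "headlines")
```

The real service obviously does far more (clustering, freshness, relevance ranking across 4,000 sources), but the skeleton is the same: scrape, match, group.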

Noting that the NYTimes URLs in News.Google include the word “partner,” Dave Winer suggests some special benefit will accrue to the paper. I don’t know what he’s thinking. Will Google skew its news judgement to send some extra visitors to the Times? My bet is that the partnership simply (and only) jumps visitors past the Times’ registration module.

In fact, News.Google shames NYTimes.com. Of the ten articles highlighted on the current news aggregation for “New York,” only two are from the New York Times. Only one of ten for the “New York City” search is from the Times.

Assuming Google’s content relevance and peer weighting algorithms continue to run the show, News.Google will boost well-networked bloggers as Google’s pool of highly referenced sites expands. The key thing to watch: when and how will Google expand the list of 4,000 news providers?
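To make “peer weighting” concrete, here is a minimal sketch of the link-based idea: a source earns weight when other weighted sources link to it. It is a toy PageRank-style calculation, not Google’s actual ranking, and the link graph below is invented purely for illustration.

```python
# Toy PageRank-style peer weighting: a source's weight comes from the
# weights of the sources that link to it. Purely illustrative; the link
# graph below is invented, not real citation data.

def peer_weights(links, iterations=20, damping=0.85):
    """links: dict mapping each source to the list of sources it links to."""
    sources = set(links) | {t for targets in links.values() for t in targets}
    n = len(sources)
    weight = {s: 1.0 / n for s in sources}
    for _ in range(iterations):
        new = {s: (1 - damping) / n for s in sources}
        for src, targets in links.items():
            if targets:
                share = damping * weight[src] / len(targets)
                for t in targets:
                    new[t] += share
        weight = new
    return weight

# Invented link graph among a few of the sources mentioned in this post.
graph = {
    "nytimes.com": ["slashdot.org"],
    "kuro5hin.org": ["slashdot.org", "scripting.com"],
    "slashdot.org": ["kuro5hin.org"],
    "scripting.com": ["nytimes.com", "slashdot.org"],
}

for source, w in sorted(peer_weights(graph).items(), key=lambda kv: -kv[1]):
    print(f"{source:15s} {w:.3f}")
```

The point of the toy: a weblog that lots of other well-linked sites cite can outrank a big masthead, which is exactly why an expanded source list favors well-networked bloggers.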

Kuro5hin and Slashdot are already included. (But no Metafilter?) Will Blogcritics or Instapundit or Scripting News be next? Will Drudge, the human headline squeegee, ever make the list?

The bottom of Google’s news page says: “This page was generated entirely by computer algorithms without human editors. No humans were harmed or even used in the creation of this page.”

No humans harmed… but more than a few corporations will drown as the river of news floods and erases its old banks.

Want the latest news and views about News.Google? Where better to check than the source itself: http://news.google.com/news?hl=en&lr=&ie=ISO-8859-1&q=google&sa=N&tab=wn

(9/26/02: Nick Denton, former CEO of headline-aggregating Moreover.com, examines a Google fumble in presenting news. And Leslie Walker writes: “the former editor in me feels humbled at how a computer is able to assemble on the fly an adequate version of what it takes a dozen or two humans to do at most major Web news sites.”)
