Seven Things Human Editors Do that Algorithms Don't (Yet)
A recommendation from the recommendation frontier: You may not want to fire your human editor just yet.
For the last year, I've been investigating the weird, wild, mostly hidden world of personalization for my book, The Filter Bubble. The "if you like this, you'll like that" mentality is sweeping the web — not just on product sites like Amazon and Netflix, but also on news and content sites like Google search (where users are increasingly likely to get different results depending on who they are) and Yahoo News. Even the New York Times and the Washington Post are getting in on the act, investing in startups that provide a "Daily Me" approach to the newspaper.
The business logic behind this race to personalize is quite simple: if you can draw on the vast amount of information users often unwittingly provide to deliver more personally relevant content, your visitors have a better experience and keep coming back. And of course, once you've got the code running, why suffer the overhead of expensive human editors? In theory, personalization can do a better job at a lower price: what's not to love?
In practice, a lot. While personalized feeds are taking off, they still fall short of good human editors in some important ways. Here are seven of them:
Anticipation. As it turns out, algorithms at sites like Technorati and Mediagazer are quite good at figuring out what the Internet is buzzing about right now, but they're quite bad at predicting what's going to be news tomorrow. Artificial intelligence simply isn't good enough yet to know that while there are only a few info-drips about "Obama Middle East speech" on Tuesday, by Thursday it's all anyone will be able to talk about.
Risk-taking. Chris Dixon, the co-founder of personalization site Hunch, calls this "the Chipotle problem." As it turns out, if you are designing a where-to-eat recommendation algorithm, it's hard to avoid sending most people to Chipotle most of the time. People like Chipotle, there are lots of them around, and while it never blows anyone's mind, it's a consistent three-to-four-star experience. Because of the way many personalization and recommendation algorithms are designed, they'll tend to be conservative in this way — those five-star experiences are harder to predict, and they sometimes end up one-star ones. Yet, of course, they're the experiences we remember.
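To make that conservatism concrete, here's a minimal sketch in Python. The rating data, the restaurant names, and the variance bonus are invented for illustration; this is not Hunch's actual algorithm, just one way risk aversion falls out when you rank by expected rating alone:

```python
# A minimal sketch of the "Chipotle problem" (illustrative data, not
# Hunch's actual algorithm): ranking restaurants by predicted mean
# rating alone favors safe, consistent options.
from statistics import mean, stdev

# Hypothetical rating histories (1-5 stars) for two restaurants.
ratings = {
    "Chipotle":         [4, 3, 4, 4, 3, 4],  # reliably fine, never amazing
    "Hole-in-the-wall": [5, 1, 5, 2, 5, 1],  # divisive, occasionally amazing
}

def safe_score(rs):
    # What many recommenders effectively optimize: the expected rating.
    return mean(rs)

def risk_taking_score(rs, bonus=1.0):
    # A crude exploration bonus: reward spread (standard deviation),
    # the way an editor might bet on a possible five-star experience.
    return mean(rs) + bonus * stdev(rs)

for name, rs in ratings.items():
    print(f"{name}: safe={safe_score(rs):.2f}  risky={risk_taking_score(rs):.2f}")
# Chipotle wins on the safe score (3.67 vs. 3.17); the exploration
# bonus flips the ranking (4.18 vs. 5.21).
```

Recommender research calls this the explore/exploit tradeoff; an uncertainty bonus like the one above is roughly how upper-confidence-bound bandit methods hedge against always serving the safe bet.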
The whole picture. A newspaper front page does a lot of informational heavy lifting. Not only do the headlines have to engage readers and sell copies, but most great front pages have a sense of representativeness — a sampling of "all the news that's fit to print" for that day. The front page is a map of the news world. That sense of the zoomed-out big picture is often missing from algorithmically tailored feeds: you get the pieces that are of most interest to you, but not how they all fit together.
Pairing. As any restaurateur worth his sea salt knows, it's not just the ingredients, it's how you blend them together. Great media does the same thing, bringing together complementary and contrasting pieces into a whole that's greater than its parts (think of great issues of your favorite magazine, or your favorite album). Even those of us with a wicked sweet tooth can't survive on dessert alone. Algorithms are pretty clumsy about this — they lack a sense of which flavors pair well together.
Social importance. A recent study found that stories about Apple got more play on Google News than stories about Afghanistan. Few of us would argue that Steve Jobs' latest health gossip is as important as a war being fought on our behalf, but that's a hard signal for algorithms to pick up on — after all, people click on the stories about Apple more. Maybe it's time for a Facebook "Important" button to go next to the "Like" button.
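The "Important" button idea is easy to prototype, at least in spirit. Here's a hedged sketch, again in Python; the story names, counts, and weighting are all made-up assumptions, just enough to show a score that blends click-through rate with an explicit importance vote:

```python
# A hedged sketch of the "Important" button idea: blend engagement
# (clicks) with an explicit importance vote when ranking stories.
# Story names, counts, and the weight are made-up assumptions.

def rank_score(clicks, important_votes, impressions, importance_weight=3.0):
    ctr = clicks / impressions                   # what people actually click
    importance = important_votes / impressions   # what they say matters
    return ctr + importance_weight * importance

stories = {
    "Apple health gossip":  dict(clicks=900, important_votes=20,  impressions=10_000),
    "Afghanistan dispatch": dict(clicks=300, important_votes=400, impressions=10_000),
}

for title, s in sorted(stories.items(), key=lambda kv: -rank_score(**kv[1])):
    print(f"{title}: {rank_score(**s):.3f}")
# On click-through rate alone, Apple wins (0.090 vs. 0.030); with the
# importance votes weighted in, the Afghanistan story ranks first
# (0.150 vs. 0.096).
```

Note where the editorial judgment sneaks back in: someone still has to decide how much an "Important" vote counts relative to a click.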
Mind-blowingness. While we're adding buttons to Facebook, what about an "it was a hard slog at first, but then it changed my life" button? So many of the media experiences that change our lives, that we remember 5 or 10 years later — the things that keep us coming back to our favorite periodicals and websites — are hardly the most clickable. They may not even be that shareable — James Joyce's Ulysses wouldn't fare very well if it had to compete for attention on Facebook with cat photos and celebrity gossip. Great human editors can see beyond the clicks; they sometimes pick pieces that may not be the most accessible but that stay with the readers who take the journey.
Trust. Even if we can't put our finger on it, most of us can feel when the preceding qualities are lacking. And it means that we don't trust these algorithms very much: sure, we'll glance at what Netflix recommends, but our trust extends no further than its last good suggestion. That trust is critical because it's what allows editorial institutions to push us out of our comfort zones — "you might not think you'd be interested in this fashion industry kingpin/new cooking fad/small country in Southeast Asia, but trust me, you will be." And that's how new interests are born.
To be clear, I don't think we can put the algorithm genie back in the bottle. The pull of algorithmic personalization is too strong — and after all, there's simply too much content for humans to sort through alone. But we need these algorithms to learn a bit more from their predecessors, and bring the best of 20th-century journalistic thinking into the 21st.
Eli Pariser is the author of The Filter Bubble and the President of the Board of MoveOn.org.