Wednesday, November 02, 2005

The economics of robot algorithms

My post HUAR: Humans United Against Robots has engendered some interesting followups, which caused it to be featured on tech.memeorandum, which no doubt has had a further network effect. Pete Cashmore started it with his Who Should Edit Web 2.0 post, and it was continued in Robots v. Mankind: Who's in charge? by Mitch Radcliffe. Dion Hinchcliffe took the discussion in another interesting direction, citing Cashmore but not me, in The Unintentional Vehicle for Secret Formulas.

It's that last one that has got me thinking. Back in the dim dark ages of 1991 when I first went to university at the age of 17¾, I foolishly enrolled in the Economics & Commerce course at the University of Melbourne. I soon began to question the basic economic theory being spouted at me by the soft-fleshed intellectuals standing at the podiums of the faculty's lecture theatres. After a while, I started to voice these concerns in the tutorials and sometimes even in the lectures. A typical exchange would go something like this:
Professor: blah blah blah... and we use these assumptions X and Y, and feed them into the graph like so. Thus the suppliers do J and K, leading to the market price fluctuating like so, and the demand curve bends this way...
Me (suffering cramps in my arm from holding it up so long): But J would never happen in the real world!
Professor: What do you mean, it wouldn't happen? Given assumptions X and Y, J is a natural consequence...
Me: No it's not! A real person would never act that way, it's stupid.
Professor: You have to wait until we go all the way through the model, you can't change things in the middle.
Me (subsiding in a fit of fuming frustration): ...

It got to the stage where I was accosting the professors in hallways after classes and trying to browbeat them into seeing the unreality of their pretty little economic models, few of which, it seemed to me, were at all relevant to the way the world actually worked. Needless to say, I failed economics.

I have retained this distrust of algorithmic solutions to the world's problems to this day. I remain convinced that humans have to be present during the operation of an algorithm, weeding out results that actively militate against reality, for the result set to be of any worth.

Hinchcliffe raises an interesting corollary point:
The Long Tail is the most famous example of Web 2.0-style monetization; the mass servicing of micromarkets has led to eBay and Amazon becoming worth billions. But it's the other big technique that is probably the one that is the shortest route to the biggest success. This is the development of secret algorithms that provide services so good that they are a powerful and ultimately irresistible draw to users in vast numbers. Then, flush with almost monopolistic power, one can make substantial financial withdrawals from the international bank of crowd wisdom. This seems to be the business model that will be the most successful in the large; certainly Google has proven it through its incredible success story. And it all lies in having a big secret that you don't share. It really is of some concern. I am, however, an optimist who believes it will work out in the end.

I certainly agree, as Hinchcliffe notes, that it appears worthwhile to keep such algorithms secret, following Anil Dash's dictum that economies are things that get gamed. However, PageRank, after a period of success, has already been gamed, and that may prove to be the weakness that causes Google to be toppled by the inevitable Next Big Thing in search, Wink notwithstanding. I am sure Memeorandum et al will get gamed as people realise that all you need to do is link to certain popular blogs a lot. You could game the front page of right now by including the words "kill", "death" and "explosion" in the first paragraph of your news story, even if the story is about knitting (hi Rich!).
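To see how trivially a static rule gets gamed, here is a toy sketch, purely illustrative and not any real aggregator's algorithm, of a ranker that scores stories by counting alarming keywords in the opening paragraph. The word list and scoring rule are my own invented assumptions:

```python
# A hypothetical keyword-based headline ranker. This is NOT Memeorandum's
# (or anyone's) actual algorithm -- just a toy static rule of the kind a
# human can reverse engineer and exploit.

ALARM_WORDS = {"kill", "death", "explosion"}

def score(first_paragraph: str) -> int:
    """Count occurrences of alarm words in the opening paragraph."""
    words = first_paragraph.lower().split()
    return sum(1 for w in words if w.strip(".,;!?") in ALARM_WORDS)

honest = "Local knitting circle meets to swap wool patterns."
gamed = ("This knitting pattern will kill the competition; death to "
         "boring scarves, and an explosion of colour awaits.")

# Once the knitting story stuffs in the keywords, it outranks real news.
assert score(gamed) > score(honest)
```

The point of the sketch is that any fixed, secret formula is just a target: once people infer which inputs move the score, the score stops measuring what it was meant to measure.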

My point is that robot algorithms exist in a lifeless vacuum, and humans will always find ways around them. Algorithms advance in static point releases, while humans upgrade continuously. No matter how much secrecy these companies use to shroud the actual variables and operands within their algorithms, humans will reverse engineer them eventually and game them anyway. We're just too damn sneaky. Viva the human rebellion!


Anonymous Pete Cashmore said...


You make the point even more clearly than I did: algorithms are things that get gamed. Always. Inevitably. So either you do away with the algorithm altogether, or you insert some collective intelligence into the mix: hence Wink and its human-annotated search results.

Patented technology puts you at an advantage, as Dion says, but if that technology is an algorithm it could turn out to be a liability. PageRank is only just holding up. Memeorandum, with its whitelist structure, may be harder to game - but the lack of spam results comes at the cost of breadth and depth. Only human minds can counterbalance the vulnerability of algorithms.

"Dion Hinchcliffe took the discussion in another interesting direction, citing Cashmore but not me."

The source frequently gets lost in the conversation. Fortunately, you remained the top post for this topic in Memeorandum - sometimes you can post something new and the trackbacks of others actually replace your post.

8:01 am, November 02, 2005  
