Tinfinger

Australian entrepreneur with FanFooty (alive) and Tinfinger (dead) on his CV. Working on new projects, podcasting weekly at the Coaches Box, and trying not to let microblogging take over this blog.

Tuesday, January 30, 2007

Kipple of the day: workgroups

Today marks the launch of the Media 2.0 Workgroup, following in the prestigious footsteps of such towering, industry-changing, juggernaut organisations as the Web 2.0 Workgroup. The members of such elite inner circles are as gods to us puny mortals, and through their shared workgroup activities they wield such fearsome collective power that entire countries are laid waste in their paths. They get so much WORK done, it's amazing! A little-known fact is that the real achievements on world peace are not being made in Davos, they're being nutted out over frappuccinos and mocha lattes by our glorious workgroup leaders.

Some may say that workgroups such as this are an exact facsimile of what used to be called "web rings", but I think that's a mealy-mouthed, small-minded viewpoint. Others may estimate the serious usefulness of such organisations to have a half-life of about 2 weeks, until the mailing list peters out into gossip and petty squabbling. How cynical those doubters must be, I feel sorry for their souls. Some may go so far as to accuse Chris Saad of Touchstone of starting the Media 2.0 Workgroup as part of an agenda to astroturf 2.0 bloggers into treating his pet subject of attention with a level of respect which it surely does not merit. To those naysayers, I say nay! Chris is a fellow Aussie, and I would never suspect him of such base motives (not that I'd know him from Adam).

No, such criticisms are beneath any self-respecting blogger, and are fit only for the lowly troll. Membership of a workgroup confers a saintly quality not matched by any earthly honour, save perhaps beatification by old Pope Benny himself. I, for one, welcome our new media 2.0 overlords.

Blackballing bastards.

Monday, January 29, 2007

"Wikipedia biography is doomed"... plus beta signup form!

Via Dave Winer, Mark Bernstein sings some sweet music today as Tai (flirting briefly with the name Tony, but now reverting like a Wikipedia edit) and I are putting the finishing touches on Tinfinger in preparation for the beta, which we envision happening some time this week (see signup form below).

Wikipedia biography is doomed, at least for living people. People who are passionately admired by some and detested by others are going to generate revert wars and nonsense. People who don’t, won’t — but nobody will notice.

That is exactly what Tinfinger is for. As our 5-second pitch goes: Tinfinger will be to the Who's Who what Wikipedia was to Encyclopedia Britannica. Through our fandom system - where only the most devoted fans of a person get top-level editing privileges over profiles, pictures, tags and similar biographical information for the objects of their devotion - the idea is that profile pages for people on Tinfinger will present a positive face.
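For the curious, the gating mechanism is simple enough to sketch. What follows is an illustrative Python toy, not the actual Tinfinger code - the fan score, the number of editor slots and the threshold are all invented for the example - but it shows the shape of the idea: rank the fans of a person, and only the most devoted few get to touch the profile.

# Illustrative sketch of a fandom-based editing gate; not the real Tinfinger
# code. The thresholds and field names below are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Fan:
    username: str
    fan_score: int = 0    # earned through activity around this person's profile

@dataclass
class Profile:
    person: str
    fans: dict = field(default_factory=dict)   # username -> Fan
    top_editor_slots: int = 5                   # invented number
    min_fan_score: int = 50                     # invented threshold

    def top_editors(self):
        # The most devoted fans, ranked by score, hold the editing privileges.
        ranked = sorted(self.fans.values(), key=lambda f: f.fan_score, reverse=True)
        return {f.username for f in ranked[:self.top_editor_slots]
                if f.fan_score >= self.min_fan_score}

    def can_edit(self, username: str) -> bool:
        return username in self.top_editors()

profile = Profile(person="John Howard")
profile.fans["devotee"] = Fan("devotee", fan_score=420)
profile.fans["driveby"] = Fan("driveby", fan_score=3)
print(profile.can_edit("devotee"))   # True
print(profile.can_edit("driveby"))   # False - not devoted enough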

Not that it's going to be all whitewashed niceness. Trolling will play a big part in Tinfinger. But that will have to wait for the full launch. In the meantime, here's the signup form if you're interested in the closed beta. Like I said, it should be open by the end of the week.

[Signup form: Name, E-mail]


HumEngadget versus RoboScoble

Engadget, under fire from Robert Scoble for being stingy with the outbound links, was defended by Ryan Block yesterday with some interesting PNOOMA estimates of the structural distribution of the problog network school of journalism:

Just for grins here's my totally unscientific breakdown of Engadget content:

60% news found on other tech sites, blogs, forums, etc. (non-MSM)
15% press releases / directly sourced news
10% MSM news (found there or editorial)
10% original feature content
4% insider info, tip-offs, etc.
1% announcements, contests, etc.

Jason Calacanis, founder of the Weblogs Inc empire of which Engadget is a major part, argues that Engadget has a "symbiotic relationship" with smaller sites. I'm not a regular Engadget reader, but I would agree to the extent that Engadget relies on tiny niche sites for most of its content - though I suspect Calacanis might be blowing sunshine by saying it's symbiotic where "parasitic" or even "inoculatory" might be more accurate. By "inoculatory" I mean that my guess is that a lot of readers skim Engadget's bald rewrites of other stories and rarely click on the link - note that with those rewrites the ONLY links from within the story are to other Engadget pages, and the external link only appears at the very end.

The Weblogs Inc/Gawker business model is not primary journalism, it is to be a human-edited aggregator/memetracker. It's not modeled on CNET, it's more like Techmeme and Digg. I think Scoble's frustration comes from the fact that the end result of Engadget's human editors is different to the robotic, algorithmic norm of Techmeme and the memeoclones. To put it simply, Scoble prefers robots. Where Techmeme will always put the originating link at the top of the tree, sometimes a Weblogs Inc editor will decide to omit relevant pieces of primary journalism because they don't think it would, as Block says, "benefit[...] our editorial".

Should a meme's outbound link history define its hierarchy? For those who think so, there's Techmeme. For those who prefer a more granular, subjective view, there's Weblogs Inc and Gawker. For those who prefer a bunch of monkeys throwing feces against a wall, there's Digg. As for Tinfinger, we'll see what you make of it real soon now.

Friday, January 26, 2007

Toecutter 2.0

Three posts today, I'm outdoing myself. As part of my self-appointed role as the Toecutter Of Web 2.0, I keep an eye on Nick Carr's blog, and spent a bit of time yesterday flaming him and his supporters in one of his regular Wikipedia-bashing sessions. The tennis match between me and censorware crusader Seth Finkelstein got so long that Nick promptly reprinted it as a separate post, with a few snide asides of his own inserted.

For those of you wondering what a toecutter is, Wikipedia won't help you. Neither will Google, surprisingly, unless you already know what you're looking for, as I did. This is a good toecutter definition:

The term toe cutter is Australian slang for a person who lives by torturing other criminals, then robbing them. As the name implies, the torture usually involves painful removal of the digits or in some cases the complete foot. Few victims ever inform, since their loss has been acquired illegally. An infamous toe cutter was "Jimmie the Pom". His gang operated in the Sydney area during the seventies. They preyed on fellow criminals, threatening bodily harm till they disclosed the whereabouts of their ill-gotten gains. Their modus operandi was to cut people's toes off with bolt cutters. By day, the leader of the extortionists ran a dress shop. He emigrated to Australia in 1967 and claimed to be a member of the notorious Kray Brothers Gang from East London, where he picked up the idea. His technique seemed to work because over the years it is reputed the Toe Cutter Gang were able to amass considerable loot from their fiendish toe fetish. Less adept copycats used blowtorches applied to the soles of the feet to achieve the same end. Tablillas were pillories used by the Spanish Inquisition and immobilised the toes when the victim was bound to the rack. Sharp wedges were hammered head-on into the toes one by one to obliterate the phalanx.

The most famous toecutter was Mark Brandon "Chopper" Read, from the movie of the same name. Now, I don't actually go around to Nick Carr's house and apply an arc welder to his extremities, but I have taken it upon myself to do so in a figurative manner. I snark the snarkers. My reward is not wads of drug money, as it was for Chopper and Jimmie the Pom, but the satisfaction of seeing a sniper get a taste of his own medicine. Well worth the effort.

They can rarely take it as well as they give it.

The business case for Google to nerf Wikipedia

One part of the recent Wikipedia rel=nofollow hullabaloo which didn't get enough play, in my opinion, was Shelley Powers' suggestion that Google remove Wikipedia from their search results entirely, and perhaps add it as a sidebar to every search. This would certainly cause much rejoicing amongst SEO professionals who fight with Wikipedia for top billing on almost every worthwhile search phrase, but what would the discussion be like if Matt Cutts and a bunch of Googlers sat round a table and jawed it over?

First, it must be said that Wikipedia is just the sort of content that Google wants to link to. Structured prose, highly subject-specific, stuffed with keywords, nice and lengthy, and arguably the best linklove in the business. I don't think removing it altogether is going to fly. As to whether they get relegated to a sidebar, I don't think that's out of the question.

Second, let's look at the bottom line and be a teensy bit cynical about the big G's motives for a second. Would pushing Wikipedia below the fold increase Google's revenues from AdSense? It would need to be tested, but I'm guessing it would boost revenues significantly. My guess is that users' eyes (and mouse arrow) would gravitate towards the trusted Wikipedia entry if it's above the fold in #1-3 result position, but if it's not there then the average user's attention would wander more frequently to the AdSense positions. Not by a lot, but given the volumes we're talking about here it would add a considerable amount to Google's revenues.

Third, would it hurt the user experience? Some argue that it's pretty ridiculous that Wikipedia is on the front page of Google results for every damn thing. It's getting to the stage where if you're looking for non-encyclopedic content, the Wikipedia result is actively getting in the way of your search experience. On the other hand, many people use Google as their Wikipedia search engine, so they might be discouraged by the shifting of the result to a less accessible place and stop using Google.

Last, does it violate Google's culture? It might be argued that it's a little bit evil, but there are arguments on the other side as well to say that it would be good for users. It is also, arguably, a perversion of search engine ethics since the only results you should really be tampering with are those from other search engines, for obvious reasons. Wikipedia is not a search engine. But is it a special case which requires a new rule? Google might even be helping Wikipedia if the campaign to add rel=nofollow to all Wikipedia links (which now has a Drupal module and a Wordpress module) gains traction.
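I haven't looked inside either module, but the mechanics are trivial; something along these lines (a Python illustration of the idea, not the actual Drupal or Wordpress code) is all it takes to tag every outbound Wikipedia link:

# Illustrative only: tack rel="nofollow" onto links that point at Wikipedia.
# The real Drupal/Wordpress modules will differ in the details.
import re

WIKI_LINK = re.compile(r'<a\s+([^>]*href="https?://[^"]*wikipedia\.org[^"]*"[^>]*)>',
                       re.IGNORECASE)

def nofollow_wikipedia(html: str) -> str:
    def add_attr(match):
        attrs = match.group(1)
        if 'rel=' in attrs.lower():
            return match.group(0)   # leave any existing rel attribute alone
        return '<a {} rel="nofollow">'.format(attrs)
    return WIKI_LINK.sub(add_attr, html)

print(nofollow_wikipedia('<a href="http://en.wikipedia.org/wiki/Toecutter">toecutter</a>'))
# -> <a href="http://en.wikipedia.org/wiki/Toecutter" rel="nofollow">toecutter</a>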

I would love to be a fly on the wall for any real discussion about nerfing Wikipedia (or IMdB or tv.com or other SEO category killers, as Everton Blair points out) on the Google campus. I'm sure they'd bring up other points, but the above is how I imagine it to be framed on the whole. What will they decide, if they ever do meet about this?

Thursday, January 25, 2007

Link-based search algorithms lose followers

Earlier in the week, Wikipedia's Jimmy Wales decided to reinstate the rule that all outbound links in Wikipedia would have the rel=nofollow attribute, meaning that search engines would not count them for the purposes of calculating a page's popularity. Now Dave Winer has announced that he wants to do the same sort of thing with Techmeme for his Scripting.com blog.

Actually, what Dave announced was that he was blocking Techmeme's crawler script, called Wazzup, via Scripting.com's robots.txt file. However, it came out eventually at the Scripting.com Wordpress annex that he didn't want to block himself from appearing at Techmeme at all, which is what the robots.txt change would have done, but wanted to stop his links being used as part of Techmeme's algorithm. He was quite fine with being indexed and his stories appearing as headlines, he just didn't want to be a secondary discussion link.

From a technical standpoint, there is no way to get exactly what Dave wants at the moment, because the rel=nofollow attribute is not search-engine-specific, as robots.txt can be. Maybe, if Dave can get some support for the idea, there will be a new "version" of robots.txt where the site can still be indexed but you can specify that links are to be discarded for the purposes of TechMeme or Google et al.
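To make the contrast concrete: robots.txt can single out one crawler by name, but it is all or nothing - it says "stay out", not "ignore my links". Assuming Techmeme's crawler announces itself as Wazzup (that's the script's name, though I haven't checked the user-agent string it actually sends), the blunt instrument Dave reached for looks like this:

User-agent: Wazzup
Disallow: /

That keeps Wazzup out entirely, headlines and all, which is exactly what Dave didn't want. The rel=nofollow attribute has the opposite shortcoming: it works per link, but it is honoured (or not) by every engine that sees it, with no way to say "discard this link for Techmeme but count it for Google".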

It all adds up to a trend of political and epistemological discontent with the power structure that link-based search algorithms have accreted. Andy Beal decided to champion a crusade to reduce Wikipedia's PageRank to zero by encouraging every blogger to add rel=nofollow to their links to Wikipedia. Wales' stated reasoning was to prevent spam by cutting the power source for SEO experts who were filling Wikipedia with decreasingly relevant links, while Dave's agenda appears to be more about subverting the hierarchy of patronage that Techmeme has become. I wonder what the next step will be in this evolving power struggle.

Sunday, January 21, 2007

Google vs Internet: Internet ftw

Following up on the hullabaloo over Google's non-existent plan to replace the Internet, let's remind ourselves about the scales we're talking about here.

The Internet: 106,875,138 servers, growing at an ever-accelerating rate that was last clocked at 1.63 million per month.

Google: 450,000 at last estimate, in July 2006. In April 2004 Rich Skrenta quoted the figure at 100,000, giving a monthly growth rate between estimates of just over 15,000... which is evidently where Rich gets his current figure of 500,000.

To be fair, Google's growth rate is currently outpacing that of the Internet, but at the moment it's less than 1/200th the size. To get to 1/120th the size of the Internet, Google will have to keep building data centres until... the year 2030.
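For anyone who wants to check my envelope, the sum is below - a Python back-of-the-envelope that assumes both growth rates stay flat (the 15,000-boxes-a-month figure for Google and the numbers above for the Internet), which of course they won't. Strictly I get the early 2030s rather than 2030 on the nose, but the point stands either way.

# Back-of-the-envelope only: assumes the growth rates quoted above stay flat.
internet_servers = 106875138
internet_growth = 1630000        # servers added per month
google_servers = 450000
google_growth = 15000            # boxes added per month

months = 0
while google_servers * 120 < internet_servers:   # still smaller than 1/120th
    google_servers += google_growth
    internet_servers += internet_growth
    months += 1

print(months, "months from now, i.e. around", 2007 + months // 12)
# -> 312 months from now, i.e. around 2033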

What was that about "a huge proxy server for the Internet"? Mmm-hmm. Google is having enough trouble providing resources to run its own applications, which of course is what all these data centres are actually for. The Internet can take care of itself.

Saturday, January 20, 2007

Cringely: Google rising, sky falling

Robert Cringely contradicts himself today by saying Google is going to save the Internet. Of course, two weeks ago he said the Internet would crash due to the mainstreaming of video downloading. Yet here he is today, citing that same phenomenon as the reason Google is aiming to become "the proxy server of the Internet". So which is it, Bob? Will the Internet crash or will Google ride in on its primary-coloured horse to rescue us?

In amongst Cringely's typical PNOOMA methodology, his main argument seems to be that Google's ever-increasing data centre assets will lead to its takeover of the ISP industry. This is wrong. Those data centres are not eating anyone's lunch, they're actually supporting an industry: the carrier-neutral data centres, the peering exchanges. I know a bit about this because I worked for a peering organisation called AusBONE during the first boom. I kept a close eye on companies like Equinix and Savvis at the time, particularly Equinix because I thought it had the best business model. Equinix is still going, despite going through some very tough times post-first-bust while trying to fund expansion, and after flirting with disaster at below US$3 per share in 2003 it has exploded to US$83 per share today.

I wouldn't be surprised to hear about Google either buying Equinix or more aggressively partnering with it. I like Equinix's tagline of "The Home Of The Internet", and it certainly converges with Google's goals. Google first entered Equinix's data centres in 2001, and it's still there making new peering arrangements with ISPs. All that super secret Google fibre that Cringely references is merely to (a) connect Google's distributed data centres with one another, and (b) connect those data centres to peering points - like those of Equinix - to pump data through to ISPs.

Where Cringely gets it most wrong is in thinking that Google is interested in killing the ISP business. That does not advance Google's goal of organising the world's information. However, Google is interested in gaining a powerful market position so that it can dictate the terms of engagement for the market - and it's not afraid to forego large sums of money to do so. In the case of the music industry, that meant negotiating huge cash settlements with copyright holders so that YouTube could eat the media giants' lunches. In the case of the ISP industry, that will mean that Google will provide ISPs with free or near-free data at peering points... as long as they embrace Net neutrality. If they cause trouble, like Ed Whitacre of AT&T, they can expect to sit across the table and have a peering contract containing a lot of zeroes shoved under their noses. Google doesn't want to have to charge those huge numbers to carriers because its users would suffer, it's merely a bargaining tactic to get what it really wants.

Call me an optimist, but I don't think Google is in the business of destroying industries just because it can. Let's face it, they could pretty much enter any industry and gut it at the moment with the backing of the revenue from their search business, but they're smarter than that. A decimation of ISPs' profit margins would actually hurt Google, because it would leave the company open to antitrust investigations. Cringely is getting ever less relevant, and he's backing the wrong horse here.

UPDATE: Mathew Ingram agrees with me.

Tuesday, January 16, 2007

A handful of stuff

1. Gareth Butler of the BBC left a comment on last October's entry Who is pmeme's Gareth Butler? saying he was not pmeme's Gareth Butler, but that he did write a book about politics. That just leaves the author of this article about freelancing as the only other candidate for the identity of Mr Butler. The pmeme site seems not to have changed or improved at all in the intervening ten weeks since I first wrote about it: the design hasn't changed, there are no new features, and the FAQ/about pages are still just placeholders. The scripts are evidently still running because it has fresh content, but it looks as dead as Findory at the moment.

2. Speaking of which, a couple more interesting posts on the sunsetting of Findory. Over at Yoick, Chris Saad tried to use Findory's demise as an argument for his project Touchstone by tying the two together as parts of the "attention economy". Yaron Galai over at Web X.0 makes an excellent point about this attention falderol:

I bet many people (me included) would love to sit around and consume content on Findory (or Digg, etc), but just don't have the time to do so... Findory required me to spend more of my scarce attention to use it. What I need is a service that has 'net positive attention emissions'... A service that saves me time rather than consumes more of it.

Again, Yaron has his own barrow to push with his venture outbrain, but he does a good job highlighting what I think is an adversarial relationship between the attention economy, if such a thing exists, and the "wisdom of crowds" technique. It's the difference between believing that you are the font of all wisdom, and admitting that thousands of other people can think better collectively than you. It's solipsism versus solidarity. Sociopathy versus socialisation. This tension is why the "personalised memetracker" of the kind that Matthew Chen is experimenting with is so problematic in a conceptual sense: how can you take the received wisdom of the mob and reconcile it with your own "special" biases and prejudices? Pramit Singh probes the same area and concludes that it was just a matter of advertising and user support, but I think the problem is more structural.

3. On Matthew, he was interviewed by Search Engine Journal and let slip a few new facts amongst all the typical handwaving and obfuscation that seems necessary to not let your competitors know what you're really up to :D . Evidently he has "several site sponsors in talks", and I wish him luck with them. It will be interesting to see if Matthew decides to give them featured blog post placement, as Gabe Rivera pioneered with Techmeme and Memeorandum, or whether the advertisers are more interested in straight display and/or rich media ads.

4. And finally, another Y Combinator clone called TechStars was launched by Brad Feld in Boulder, Colorado. One day, I'll do the same in .au. Ben Barren can play good cop and I'll be bad cop. That's the plan, anyway.

Monday, January 15, 2007

Findory's sleepers

Greg Linden has decided to discontinue working on Findory, his one-man personalised news site. This is strange to me on a couple of fronts, and I will try to list them here without speaking ill of the deadpool, or of Greg, who deserves nothing but congratulations on his hard work over the four years of the Findory project.

First, Greg announced just over a year ago that Findory was cashflow-positive. I don't know what Greg's balance sheet for the site looked like, but if Google's server farm numbers 500,000 and Topix.net's contains 500 boxes as Rich Skrenta says, then I don't think Findory was anywhere near those orders of magnitude. At a guess, I'd say five boxes, max? I don't know. In any case, I don't think Greg's burn rate, given that he's only one dude, was anything more than a slow smoulder.

Second, Greg didn't mention anything about problems with revenue streams, but I'd hazard a guess that if that side of things was all roses and sunshine, he wouldn't have made this decision. Findory's revenue was (reportedly) based on a personalised advertising engine that Greg launched back in May 2005, originally based on Google AdSense but evidently shifting more towards Amazon affiliate ads over time. That sort of thing was necessary because Findory's front page is a dog's breakfast in terms of spidering - something I'm finding with the Tinfinger top-level "Human" category. When you have 10 different stories about 10 different topics on a page which change every 24 hours, the Google spiders don't know what to make of it, and they decide different things on different page loads about what that page is about. When I went to the Findory front page today the five AdSense links were all to spyware removers, for instance.

I found one comment on Greg's blog post interesting:

I really like Findory. In fact I have read 2,565 articles through findory so far.

Although I must admit I wondered about your monetization strategy

thanks very much for the service

As it is now, Findory's front page contains four Amazon affiliate ads (which are just shots of book covers) above the fold on the left edge, and then one wide skyscraper from AdSense towards the bottom left and another Amazon banner at the foot of the page. With all respect to Greg, that doesn't seem like a sound ad placement strategy. I know Greg is an ex-Amazonian, but Amazon ads don't merit that sort of prominence.

To me, Findory always seemed like a fabulous product that could have done with some better marketing. In some ways that's true of many News 2.0 sites: Kevin Burton, Matthew Chen, Gabe Rivera, myself, even Rich Skrenta are technologists first, second and third, and marketers about 17th behind all their other skills. For one-(or two-)person shops that can be a killer, and so it seems to have been with Greg. I wish him luck in whatever else he puts his mind to in future - and the best of health too.

Monday, January 08, 2007

News 2.0 alpha algo mojo au-go-go

Today I had one of those moments of conceptual breakthrough that make this caper worthwhile, at least in this pre-launch alpha larval limbo stage we're in at Tinfinger. After reading about the widespread pooh-poohing of Daylife a few days ago, I had been worrying about several of the criticisms of Daylife and whether Tinfinger would itself stand accused of same when it officially launches. On one of those points, I think we're safe: the criticism, characterised best by Scott Karp, that Daylife's editorial algorithm offers nothing new or trustworthy.

Current providers in what has been called the News 2.0 sector have traditionally used one of three methods to sort their news items and figure out which ones should be given most prominence. The simplest is to use humans, either your users (Digg), paid editors (Netscape) or a combination of the two (Yahoo!). The second method, made most successful by Google News, is a fully automated set of algorithms giving rankings to various arbitrary concepts like relevance, recency, frequency of certain search terms, article length and so on. This ranking method can only be employed with an extremely imposing amount of processing power since the algorithms are very complex in their execution: last time I heard, Google updated their front pages only every 15 minutes. I don't know how much RAM Topix.net, Newsvine, Inform.com and Gather.com throw at their algos, but it must be in a similar range to GN.

The third method, popularised by Memeorandum and Techmeme and the other sites in the Gabe Rivera family, is based on hyperlinks between articles, creating clusters of linked stories. This cluster method has the advantage of being relatively simple in database terms, so that it can be operated by one-man-budget startups such as those of Gabe, Kevin Burton of Tailrank and Matthew Chen of Megite.
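To show why the cluster method is so cheap to run (and to be clear, this is not Gabe's actual algorithm, which he keeps to himself - just my own illustration), grouping stories by their hyperlinks needs little more than a union-find over the outbound URLs:

# Illustration only: clustering stories that link to one another.
# Not Techmeme's (or anyone else's) actual algorithm.
def cluster_by_links(stories):
    """stories maps a story URL to the set of URLs it links out to."""
    parent = {url: url for url in stories}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path compression
            u = parent[u]
        return u

    def union(a, b):
        parent[find(a)] = find(b)

    for url, links in stories.items():
        for target in links:
            if target in stories:           # only join stories we have indexed
                union(url, target)

    clusters = {}
    for url in stories:
        clusters.setdefault(find(url), []).append(url)
    return list(clusters.values())

stories = {
    "example.org/original": set(),
    "scripting.com/reaction": {"example.org/original"},
    "someblog.net/metoo": {"scripting.com/reaction"},
    "unrelated.com/story": set(),
}
print(cluster_by_links(stories))
# -> the three linked stories in one cluster, the unrelated one on its own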

The problem with both of the automated methods is that they tend to produce samey results, making it hard for the market followers to differentiate themselves. The ranking methodologists have the big G sitting there with all its toys, giving it enough scale to comfortably keep its lead, and the Memeoclones have to suffer through forever being compared to Gabe. My guess is that Daylife, despite the VC dollars, doesn't have enough money to spend on iron big enough to handle GN-scale processing either, but they look like they are using that method along with Newsvine, Gather.com, Inform.com and the other wannabes.

Thus with the market winners already decided, so it seems, the smart operators have decided to take a different tack. Topix has already chucked in its lot with local newspapers, and is doing quite well from all reports. Matthew Chen is doing some interesting things with personalisation and has a link up on Megite for licensing. I'm sure Kevin Burton is dreaming up something in that humungous brain of his, though from the looks of it Tailrank might be moving away slightly from the Memeo link model and towards ranking algos.

So, where does Tinfinger fit in with all this? That was the subject of my revelation today. The Tinfinger algorithm, which goes by the name of tinscore, has been getting its first real workout over the past fortnight or so as we added a few more blogs, and it will get even more meat to chew on once our site discovery script is unleashed. For the first time, I can see how the equations resolve into reality with a non-pre-alpha-sized database to work with. And the results, I am happy to say, are different. They identify many of the same topic clusters as the other News 2.0 sites, but the stories which are chosen to get top billing are often not the primary or original breaking news story. They are the more thoughtful, longer, in-depth feature articles. That's the way that the algo works, unlike the status quo: timeliness is not much of a factor, as long as the story appeared in the last 24 hours; links mean nothing; all sites are ranked equally. What matters is the prominence given to the names, their frequency in the story, the number of other people mentioned in the story, and the length of the article.
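To make that concrete, here is a toy version of the kind of scoring I mean. It is emphatically not the real tinscore - the actual weights and details stay under wraps - just a sketch built from the factors listed above: name prominence, name frequency, how many other people get a mention, and article length, with no links, no per-site weighting, and nothing about timeliness beyond the 24-hour window.

# A toy in the spirit of the description above; not the real tinscore.
def toy_score(article, name, all_names, headline_weight=5.0, mention_weight=1.0):
    """article: dict with 'headline', 'body' (plain text) and 'published_hours_ago'."""
    if article["published_hours_ago"] > 24:
        return 0.0                                   # outside the window, ignore

    headline, body = article["headline"], article["body"]

    prominence = headline_weight if name in headline else 0.0
    frequency = mention_weight * body.count(name)
    other_people = sum(1 for n in all_names if n != name and n in body)
    length_bonus = len(body.split()) / 500.0         # longer features score higher

    return prominence + frequency + other_people + length_bonus

article = {
    "headline": "John Howard and the long view",
    "body": "John Howard said... " * 300 + "Kevin Rudd also got a mention.",
    "published_hours_ago": 6,
}
print(toy_score(article, "John Howard", ["John Howard", "Kevin Rudd"]))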

In some ways, Tinfinger will be different because it's not News 2.0, it's Feature 2.0. Substantial MSM feature articles, expansively-argued opinion pieces and passionately wordy blog rants which would on other aggregators get shifted to the bottom of the pile (or ignored entirely) because they would be considered as secondary sources are instead pushed in front of the footlights in our system, and used as the basis for their own clusters with names as the connecting factor.

Because of this thought, despite having previously architected the system to harvest external links from all articles and waiting until now to incorporate them into the Tinfinger algorithm, I am seriously considering not including links in our algo at all. Adding them would defeat the purpose of having something distinctive. The algo's simplicity, its dumbness if you will, is what makes it unique. Hopefully it will prevent people throwing the same darts at Tinfinger on our launch day as were aimed at Daylife.

Saturday, January 06, 2007

My part in the Acer Ferrari laptop scandal

The inestimable Frank Arrigo links to me today in a most mysterious way. Apparently my Top 5 Aussie consumer tech blogs post way back in October was "Post Zero" for the scandal in .au territories. All shall be revealed tomorrow in the Australian Financial Review, evidently... though Josh Gliddon didn't contact me about it!

UPDATE: The article actually appeared today, and surprise surprise, it's not behind the AFR paywall! The piece, entitled Blogger takes free laptop in his stride, focuses mostly on Darren Rowse's bemusement, with quotes from Frank himself. Since Frank revealed my part in it already, I should say that I had no knowledge that any free schwag was involved, much less $3000 worth. All I was asked, by a Microsoft PR person whom I know very well, was to name the top 5 Australian consumer bloggers-who-were-not-also-journalists. Turns out that I didn't stick to the brief all that well since I have since found out that Stephen Withers, #5 on my list, is also a professional journalist. The other four are all worthy, however, despite Darren's protestation in the AFR that he doesn't "do reviews". (Another disclosure, if needed: I had not met or spoken to any of the five, except for William at last year's Influence event.) As someone has already pointed out via email to me, I created the list but didn't get a laptop, so boo for me! :D

Friday, January 05, 2007

A day in the life of Daylife

I've just come home from my grandmother's funeral so I missed the kerfuffle over Daylife. I consider Daylife to be a potential competitor, if tangentially, so it's an important issue for me.

Marc Hedlund, one of the names on the practically endless investor/advisor/friends list of Daylife founder Upendra Shardanand, quoted Shardanand's original five-second pitch for Daylife as being "IMdB for the news". IMdB has a database of hundreds of thousands of people in the movie/TV business, plus thousands more entries for the movies and shows, all cross-referenced. To that extent, you can see what Upendra was getting at with the "connection" feature, where each topic page includes links to related topics down the left side. However, one feature which Daylife hasn't got for its topics is permanent descriptive content, at which IMdB excels with its comprehensive CVs containing biographies, work histories, trivia and user comments. That kind of high-quality descriptive text is GOLD, JERRY, GOLD for Google spiders, they love that shizzle. Let me tell you, that's pretty fricken hard to build up, so I don't envy Upendra's job there. Unless he convinces his VCs to give him money to buy those databases off someone else, in which case good luck to him.

Speaking of VCs, Michael Arrington blasted Daylife even though he is an investor, causing a few raised eyebrows, but the reason is obvious: Michael has developed a sincere and public dislike for the doings of the New York Times group, and the NYT is the lead investor in Daylife... and most importantly, Daylife's structure has seemingly been designed to pander to the needs of the NYT and its fellow MSM broadcasters. Very few blogs appear in the results, it's mostly stuff you'd see on Google News. Take their John Howard page, which shows that Daylife tracked 72 mentions of his name in news sources on December 7 and only four blog mentions. Who wins out of Daylife? At the moment, it's not Daylife's investors - it's AP, Reuters and Getty Images. As with Newsvine, the VC moolah is not going mostly to fund new technology based on new understandings of the way media works - despite the manifesto - it's acting as outsourced R&D for the wire agencies... and Daylife is paying those agencies for the privilege!

Actually, on that manifesto, does it not strike you as a highly politically charged document? There is precious little focus on the user (or whatever you want to call the customer/consumer/reader/person). It seems to me to be a sop to the varied special interests in the Daylife investment ranks. To wit:

1. Provide a wide variety of perspectives, framings, and points of view – and help people triangulate stories and form their own perspective

Call this one the Jeff Jarvis Rule. Bloody hippies.

2. Surface the interconnections throughout the world we live in

Um... what? Someone was hitting the bong a bit too hard that day. The only things that should surface are things that had previously been submerged in liquid.

3. Provide more ways to engage with the news, and increase the utility of news and reporting

To... engage with news. And increase its utility. So nothing about increasing the quality of the news that users get, or broadening their choices. News is a nebulous Singular Thing, not to be fragmented. Yep, that's a sop to the NYT, alright.

4. Make the news ecosystem more transparent and self-correcting, for the benefit of all involved

All? Seems like that benefits the New York Times more than most, doesn't it, especially given their recent scandals?

5. Consider and serve all stakeholders - citizens, journalists, newsmakers, suppliers, the environment, shareholders and employees

This should never be a part of a manifesto by a startup. If you have a list of seven stakeholders, it should contain these seven only: users, users, users, users, users, users and users. And now, here's Gir to sing the Doom Song!

6. Develop new models for funding journalism

Aha, here's the Craig Newmark Rule. I knew it would get in here somewhere. He still feels the need to prove to all those journos he indirectly retrenched that he's not the Son of Satan. Good luck with that one, bro.

7. Enable a civil discourse that is pragmatic, solutions-oriented, and doesn't exaggerate divisions in favor of celebrating what unites us

Is Kofi Annan a secret investor? WTF? What sort of solutions could a news aggregator possibly provide for any division? Call this the Oh Fudge, Our Investors All Hate Each Other, Quick Write Something Telling Them Not To Scapegoat Us Rule. Well, I guess that one didn't work Mike! :D

Thursday, January 04, 2007

Too many snipers in the Web 2.0 clan

Pete Cashmore over at Mashable* pooh-poohs the use of the term Web 2.0 yet again today in what has become a rather boring and elitist trend, in my opinion. Michael Arrington has also been on the Web-2.0-is-passe bandwagon and of course any bandwagon that Mike gets on is bound to have a lot of passengers.

I think the chorus of snark at the Web 2.0 phrase rather misses the point. Who elected Michael or Pete - or Richard MacManus or Dave Winer or Doc Searls or whoever else wants to take a shot - as Final Arbiters Of What We Call Things? Most of the original pushback was in response to the perception that Tim O'Reilly was trying to marshal the phrase for his own purposes, but in doing so the naysayers have puffed themselves up as self-appointed Phrase Police.

If the people want to hear about Web 2.0, and they obviously do if Pete's story on Wikipedia search phrases is true, then who is qualified to try to re-educate the masses? Web 2.0 is supposed to be about bottom-up control over content, not elitist broadcasting. Web 2.0 is a perfectly useful phrase, and its fuzzy definition is actually good for the industry because it does not limit its development or pigeonhole it into something that can be marginalised.

I'll continue to use it, especially on Tinfinger, which now has a Web 2.0 news page which is slowly filling up with new names harvested from a batch of sites I fed in. If your name is on that page and you feel constricted by being lumped in with the 2.0 crowd, then tough luck because that's how the world perceives you. Embrace your inner Web 2.0, people!