digg bury recorder? don’t get too excited

more than a few people have pointed me towards a tool created by ajaxonomy that purports to record all bury data for any submission made to digg, asking for my comments. i would love to see a tool like this that works and captures 100% of the data, but here’s why you shouldn’t get too excited about this particular one. on the announcement page for version 0.2 of the tool, the site claims it is capturing 100% of the data, but if you read into how the tool works you’ll see that a more accurate statement would be ‘it captures 100% of all available data’, which is not much by any means.

this application gets the json feed used by digg spy. it does this using ajax (i.e. the xmlhttprequest object), which requires a server-side proxy due to cross-domain security restrictions. because of the way the json is returned from digg spy, it doesn’t assign the returned object to a variable, which forces us to use the aforementioned server-side proxy and an eval statement instead of dom manipulation. the application simply polls for updated data every 20 seconds, which makes sure we don’t miss any data without putting too much strain on the server.
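to make the mechanics concrete, here is a rough sketch of what a poll-and-record loop like the one described might look like. this is my own illustration, not ajaxonomy’s actual code: the proxy path, the feed item shape, and the function names are all assumptions.

```javascript
var seen = {};   // ids of events we have already recorded
var buries = []; // accumulated bury events

// Merge one batch of feed items, keeping only bury events we
// have not seen before. Returns how many new buries were added.
function recordBatch(items) {
  var added = 0;
  for (var i = 0; i < items.length; i++) {
    var item = items[i];
    if (item.action === 'bury' && !seen[item.id]) {
      seen[item.id] = true;
      buries.push(item);
      added++;
    }
  }
  return added;
}

// Poll the server-side proxy (needed because the browser cannot
// request digg.com directly from another domain). Because the
// digg spy response is a bare expression rather than an assignment
// to a named variable, the text has to be eval'd.
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/proxy/spy.php', true); // hypothetical proxy path
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      recordBatch(eval('(' + xhr.responseText + ')'));
    }
  };
  xhr.send(null);
}

// Re-poll every 20 seconds, as the announcement describes:
// setInterval(poll, 20000);
```

note that the dedup step is what keeps a 20-second polling interval honest: the feed overlaps between polls, so without tracking seen ids the same bury would be counted repeatedly.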

the simple problem here is that the tool relies on the json feed from digg spy, and yes, it does capture 100% of the data shown by that feed. the feed from digg spy, however, shows less than roughly 10% of the activity on digg (do some basic math in your head and you’ll see there is no way it could show more than that and remain even remotely readable). so what you essentially have is 100% of 10% of all data, which for all practical purposes is highly inaccurate, because even that 10% is not spread evenly over all stories.

that said, if someone has come up with a better, more accurate way to record this information, or thinks the above is incorrect, please have your say in the comments. as far as i know, digg only shows partial data and there is no way to get data for all the activity on the site.


6 thoughts on “digg bury recorder? don’t get too excited”

  1. Gerard

    I don’t know whether the information regarding Digg spy is incorrect. I do believe it does not show all activity. I have seen it neglect to record my own activity many times. I’ve even logged out thinking the Spy utility would then record my Digg activity. Nothing.
    I do know that Ajaxonomy’s tool doesn’t work. I tested it out over the course of three days using spam I found in the queue (there’s always spam in the wee hours of the morning), a few of my old submissions (undigging them allows me to bury them), and various browsers (Opera, Netscape, IE, and FF). Not one bury was recorded.

  2. David


    I wanted to let you know that we are looking into the statement about Digg Spy, as I didn’t know of this limitation (I had read that it captured 100% of data). The early version of the app missed buries from the feed, but the beta 0.2 version has just been released and it fixes many issues. The new url is http://www.ajaxonomy.com/buryrecorder/. Also, we found that there was some user error in some cases, so please make sure you enter the url correctly (or use the bookmarklet to avoid user error) or the app can’t work correctly.

    Your comments are helpful in continuing to make the tool work better.

  3. Gerard

    I noted your previous mention of the proper URL format on your Digg post several days ago. It does not work.

    Is Ajaxonomy’s Bury Recorder a violation of Digg’s TOS?

    I’m referring to Section 5, number 8:

    “with the exception of accessing RSS feeds, you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission.”

    If so, does Ajaxonomy have Digg’s permission?

  4. David

    As an update to my previous comment, you are correct that not all data is captured by the Digg Spy feed. However, after looking into it, the feed does capture all bury data for upcoming and popular stories. Once a story has been fully buried, the feed no longer captures that data, as it no longer matters.

    Looking at the database, it has captured about 30,000 unique buries every 24 hours (please note that this data capture started as of January 2nd, so older stories will not have any data). Since in a blog post last year (2007) Kevin Rose said that Digg was getting about 5,000 new stories a day, the 30,000 buries would match the probable buries of these upcoming and popular stories. An interesting theory on why some stories are getting buried is that there is an autobury performed by a Digg admin. If this theory were true, it would explain why some stories are removed with few or no recorded buries (which, depending on the content, could be a good thing, but it would be nice to know if it is happening).

    So, thank you for your time in letting me explain how the application works and showing that it still has value: buries really only matter on Upcoming and Popular stories, since once a story has been fully buried it can’t get on the home or popular pages again.

  5. Pingback: Digg’s Top Buried Stories Revealed

  6. Chris Lang


    If this tool is inaccurate, how can we gain insight into what does well on Digg and what gets buried for being unpopular or over-shouted?

    I would love it if buries came with a comment, I don’t care who buried it, I just want to know why.

    Got any suggestions?

