Heath |
01-04-2013 03:47 PM |
Library of Congress has archive of tweets, but no plan for its public display
http://www.washingtonpost.com/lifest...46f_story.html
Quote:
In the few minutes it will take you to read this story, some 3 million new tweets will have flitted across the publishing platform Twitter and ricocheted across the Internet. The Library of Congress is busy archiving the sprawling and frenetic Twitter canon (with some key exceptions) dating back to the site's 2006 launch. That means saving for posterity more than 170 billion tweets and counting, with an average of more than 400 million new tweets sent each day, according to Twitter.
But in the two years since the library announced this unprecedented acquisition project, few details have emerged about how its unwieldy corpus of 140-character bursts will be made available to the public.
That's because the library hasn't figured it out yet.
"People expect fully indexed, if not online searchable, databases, and that's very difficult to apply to massive digital databases in real time," said Deputy Librarian of Congress Robert Dizard Jr. "The technology for archival access has to catch up with the technology that has allowed for content creation and distribution on a massive scale. Twitter is focused on creating and distributing content; that's the model. Our focus is on collecting that data, archiving it, stabilizing it and providing access; a very different model."
Colorado-based data company Gnip is managing the transfer of tweets to the archive, which is populated by a fully automated system that processes tweets from across the globe. Each archived tweet comes with more than 50 fields of metadata (where the tweet originated, how many times it was retweeted, who follows the account that posted the tweet and so on), although content from links, photos and videos attached to tweets is not included. For security's sake, there are two copies of the complete collection.
But the library hasn?t started the daunting task of sorting or filtering its 133 terabytes of Twitter data, which it receives from Gnip in chronological bundles, in any meaningful way.
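The article's figures allow a rough back-of-the-envelope check: 133 terabytes spread over more than 170 billion tweets works out to under a kilobyte per archived tweet, metadata included. A minimal sketch (assuming decimal units, 1 TB = 10^12 bytes; the actual bundle format is not described in the article):

```python
# Rough average storage per archived tweet, using the figures quoted above.
# Assumes decimal terabytes; real per-tweet sizes will vary widely.

ARCHIVE_BYTES = 133e12   # 133 terabytes of Twitter data
TWEET_COUNT = 170e9      # more than 170 billion tweets

avg_bytes_per_tweet = ARCHIVE_BYTES / TWEET_COUNT
print(f"~{avg_bytes_per_tweet:.0f} bytes per tweet")  # ~782 bytes

# With only 140 characters of text but 50+ metadata fields per tweet,
# most of each record's footprint is metadata, not the message itself.
```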