
Most of my social services have shifted to streams: Facebook, Twitter, FourSquare, Scribd and the list goes on. Like email, these lists are in reverse chronological order and are designed to keep me up to date. But unlike email, the flux of data through these systems is one to two orders of magnitude higher: I'll receive 100-200 emails per day, but more than 1,000-2,000 tweets.

Processing this data is an enormous challenge for me. As a result, I've been wondering how the brain handles increasing volumes of data, and after some research I discovered that human data processing is a focal point for the neuroscience community.

There are two competing points of view on the issue: one holds that the brain processes data continuously and in parallel, like a high-speed computer; the other holds that the brain samples data and processes it discretely. Here are just a few of the studies: initial trials in 1985, hearing trials from 2005, multi-stimulus tests, and so on.

Drinking from a firehose

Glossing over the implications of the two points of view, it’s my experience that I simply can’t handle continuous streams of data. If I try, my productivity, effectiveness and depth of thinking all plummet to effectively zero. My brain just isn’t built to process data in this way (so I side with those who espouse the discrete theory).

A data stream is perfect for a computer but terrible for a human. Computers produce data streams naturally (stock tickers, for example) and consume them just as naturally (keystrokes from a keyboard). I would be terrible at either task: I could be slow and accurate or fast and inaccurate, and in either case I'd likely fall asleep from boredom.

Given this difference between humans and machines, the stream seems like a great solution for computers, but not for people. Handling the raw output, which is just the product of computers aggregating human thoughts at large scale, isn't for me. I need tools, analysis and frameworks for processing the data in a meaningful way. I need computers to pre-digest the data from high-velocity streams (or combinations of many streams) into something packaged and usable.
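To make that concrete, here is a minimal sketch of what I mean by pre-digestion, assuming nothing more than a crude relevance score of my own invention (the field names and the scoring rule are hypothetical, not any existing service's API): merge several raw streams, rank the items, and hand me only a short digest.

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str    # e.g. "twitter", "facebook"
    author: str
    text: str
    shares: int    # crude proxy for how much the crowd cares

def relevance(item, followed_authors):
    # Toy scoring rule: crowd interest, plus a boost for people I follow closely.
    score = item.shares
    if item.author in followed_authors:
        score += 50
    return score

def daily_digest(streams, followed_authors, size=20):
    # Merge many high-velocity streams and keep only the top `size` items.
    merged = [item for stream in streams for item in stream]
    ranked = sorted(merged, key=lambda i: relevance(i, followed_authors), reverse=True)
    return ranked[:size]
```

The particular scoring rule doesn't matter; the point is that the thousands of raw items never reach me, only the short ranked digest does.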

Predigestion

News delivery needs to change over the next 18 months. Let me forsake the stream for a filtered collection of articles that have passed through Digg's social filters and Netflix's personalization engine. With the best content skimmed from the web, I could remain comprehensive in my news analysis without having to read, parse and filter each proverbial keystroke myself.
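As a rough illustration of the combination I'm after (the weights and field names below are hypothetical, not how Digg or Netflix actually compute anything), a social filter and a personalization engine could each contribute a score that gets blended into a single ranking:

```python
def blended_score(article, my_interests, w_social=0.6, w_personal=0.4):
    # Crowd signal (Digg-style votes), squashed into [0, 1).
    social = article["votes"] / (1.0 + article["votes"])
    # Personal-taste signal: fraction of the article's topics I already care about.
    overlap = len(set(article["topics"]) & my_interests)
    personal = overlap / max(1, len(article["topics"]))
    return w_social * social + w_personal * personal

articles = [
    {"title": "Chip startup raises a new round", "votes": 840, "topics": ["hardware", "startups"]},
    {"title": "Local sports recap", "votes": 1200, "topics": ["sports"]},
]
my_interests = {"startups", "hardware", "venture"}

for a in sorted(articles, key=lambda a: blended_score(a, my_interests), reverse=True):
    print(a["title"])
```

Note that the sports story wins on raw votes but loses in the blend, which is exactly the behavior I want from a filter that knows me.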

If anyone knows of tools like this, please send them my way.
