Did We Miss The Boat With The Stream?

The stream is now the user interface design for many popular sites. Facebook uses it for the main news feed and your home page. Twitter has never used anything but the stream. FriendFeed used the stream, but also made more popular or recent items “bump” to the top of the stream. Digg was not really a stream, but it is now moving to a design that looks more like one. Essentially, any microblogging site uses the stream as its main user interface. As is the case with Digg, many other sites are trying to become more like Twitter and Facebook, so they are adding the stream as well.

However, if you look at the popularity of these sites and the type of content posted to them, you will notice some common issues. First, if you follow enough people, the stream moves very fast. This can happen once you reach 1,000 friends, or fewer if you follow people who post frequently. If you look at the type of content that gets posted, you do not notice any real differences. So, a link posted from this blog and one from CNN look basically the same. Photos and videos also look the same, and in some cases are actually just links to a photo-sharing service. Are these things really all the same? More importantly, should we really be treating them all in the same manner?
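
To make the sameness concrete, here is a minimal sketch of the flat shape a stream reduces every post to. The class and field names are hypothetical, not any site’s actual API:

```python
from dataclasses import dataclass

# A hypothetical, flattened stream item; every post type ends up here.
@dataclass
class StreamItem:
    author: str       # who posted it
    posted_at: float  # Unix timestamp
    text: str         # title or message
    url: str          # link, photo, or video, all collapsed into one field
    kind: str         # "link", "photo", or "video"; the UI renders all alike

# A link from a personal blog and one from CNN carry the same fields,
# so the stream renders them identically.
items = [
    StreamItem("someblog", 1277942400.0, "Did we miss the boat?",
               "http://example.com/post", "link"),
    StreamItem("CNN", 1277942460.0, "World Cup roundup",
               "http://example.com/news", "link"),
]
for item in items:
    print(f"{item.author}: {item.text} ({item.url})")
```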

Another problem is that most people are not very good at reading a large list of items. People like to see what is interesting and popular. They also like to see things that may be related to one another. I am not talking about personal customization, but about grouping related items. For example, over the past few weeks we have seen World Cup action. With a stream-based interface, you may see a World Cup item, then 20 items about technology, humor, world news, or religion, and at the end of that list another World Cup item. There are only two things these 20+ items have in common: time and your network.
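
As a rough sketch of what that grouping could look like, simple keyword matching can stand in for real topic extraction (the function and topic set below are illustrative assumptions):

```python
from collections import defaultdict

def group_by_topic(items, topics):
    """Group (timestamp, text) items by the topic keywords they contain.

    Keyword matching is a crude stand-in for real topic extraction.
    """
    groups = defaultdict(list)
    for timestamp, text in items:
        words = {w.strip(".,:!?").lower() for w in text.split()}
        matched = sorted(words & topics)
        key = " ".join(matched) if matched else "other"
        groups[key].append((timestamp, text))
    return dict(groups)

stream = [
    (1, "World Cup: Spain advances"),
    (2, "New JavaScript framework released"),
    (3, "Funny video of a cat"),
    (4, "World Cup upset in group play"),
]
# The two World Cup items end up side by side instead of 20 items apart.
print(group_by_topic(stream, {"world", "cup"}))
```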

Why is this a problem? People tend to be very visual. People also need time to process information. So, if the stream is going by rapidly, you can “miss” items simply because of your processing time. If your network is large enough, those missed items will likely reappear in your stream later. At the other extreme, if an item is popular enough you could see it several times in your stream, which obviously does not add any real value. In this case, we are not using the power of the social network except as a source of information. We are not even aggregating the information.
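
A minimal sketch of that missing aggregation, assuming each item is just a (person, url) pair: instead of showing a popular link once per friend who shares it, collapse the duplicates and keep the count as a popularity signal.

```python
from collections import Counter

def aggregate_shares(stream):
    """Collapse duplicate shares in a stream of (person, url) pairs.

    Returns (url, share_count) pairs, most shared first, so a popular
    item appears once with a count instead of several times.
    """
    counts = Counter(url for _person, url in stream)
    return counts.most_common()

stream = [
    ("alice", "http://example.com/worldcup"),
    ("bob",   "http://example.com/framework"),
    ("carol", "http://example.com/worldcup"),
    ("dave",  "http://example.com/worldcup"),
]
print(aggregate_shares(stream))
# [('http://example.com/worldcup', 3), ('http://example.com/framework', 1)]
```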

A stream of unrelated items, or items related only by their time of publishing, is a terrible way for people to consume information. However, it is a perfect way for programs to consume information. A program treats time as just another attribute of the item, like the person who published it and any other information it can infer. When we see something that is more appropriate for a program than for people, that something becomes a protocol or a specification. So, can we all agree that the stream, and the way that Facebook and Twitter post information, is the specification? And now that the specification is defined, can we start building things that people will actually like to use?
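
To restate the point in code: to a program, publication time is just one field among several, and any field can drive grouping. The item format below is a hypothetical sketch, not the actual Twitter or Facebook payload:

```python
from collections import defaultdict

# Hypothetical stream items; "published" is just one attribute of several.
items = [
    {"author": "alice", "published": 1278240000, "topic": "worldcup"},
    {"author": "bob",   "published": 1278240100, "topic": "tech"},
    {"author": "alice", "published": 1278240200, "topic": "worldcup"},
]

def index_by(items, key):
    """Group items on any attribute; 'published' gets no special status."""
    index = defaultdict(list)
    for item in items:
        index[item[key]].append(item)
    return dict(index)

print(index_by(items, "topic"))   # grouped the way a human might want
print(index_by(items, "author"))  # grouped per person, just as easily
```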

Just remember: streams are good for programs, but they suck for humans. The question now is, what can we build with the streams? Can we now build the boat so that we can navigate the stream and not drown in it?

13 thoughts on “Did We Miss The Boat With The Stream?”

  1. Google News addresses the issue in an interesting way, by grouping (or at least attempting to group) articles that pertain to the same topic.

    At times, I have noticed that Facebook groups items that appear to be similar.

  2. John

    Google News is a different type of site. This post was geared towards social-network-related sites. I know that Facebook has grouped items from the same service, but I have not seen other groupings of items yet. Maybe they are testing a new feature.

  3. The stream may not be the best way for humans to digest information, particularly if you’ve added too many people.

    FriendFeed solved some (maybe most) of this through lists, duplicate detection and the activity bounce. Sadly, development has stopped there and many have departed the service. (I’m still there!)

    In thinking about this, I wonder whether it’s a matter of building the right boat, or of creating a dam upstream so we can wade in the stream without being pulled under and away by a strong current.

    Put another way, maybe it’s a GIGO problem. Garbage In, Garbage Out. If you simply followed the ‘right’ people for you, would the stream become functional? I’m intensely interested in this idea because I believe Dunbar’s Number to be true and relevant online.

    Do you really need to follow 1,000 people to stay on top of things? I’m guessing there’s an 80/20 rule here: you could get 80% (probably more) of the information by following just 200 of those 1,000. The question is figuring out who they are!

    I imagined my own unfollow app. http://www.blindfiveyearold.com/unfollow-on-twitter

    But perhaps I’m fighting a losing battle against human nature.

    Instead maybe you DO build the boat. Perhaps it’s something like Fever (http://feedafever.com/) or Flume (http://nathanchase.com/2010/05/flume-a-web-application-concept-for-duplication-elimination/)

    I too hope to see more development around the stream, whether it’s creating tributaries, boats or dams.

  4. AJ

    Filtering (or creating a dam upstream) has been done, but it only helps with finding relevant things we already know about. I am starting to think that we are not really using the power of the networks that we are building in conjunction with the stream of items.

    The problem with the 80/20 rule is that the network is not static. You follow a bunch of people, but eventually some of those people will change segments. If you focus on the original 20% of “informants”, you will miss the rise of other informants. This is also why the network could be so powerful: we can use the aggregate information to our advantage to see what’s new as well.

    I will take a look at Fever and Flume in the near future to see how they work. Thanks for the pointers.

  5. I have been saying for I don’t know how long that this so-called river of news is, for the most part, a river of white noise that gets louder with every person you add. This was further exacerbated by the adoption of faux “real-time” display.

    As much as we might like to believe otherwise, or be conned into believing it, the human mind cannot sustain a constant stream of information. At some point it will shut down. No amount of illusory “re-wiring” of our brains will make it so.

  6. You mentioned the visual way we process information and the clustering of social streams by topic. My brother prompted me to explore a visual, image-based front end to a social aggregation and search tool, which I bolted onto Twitter streams with the help of my technical web-hacking sensei.

    You can see the tool in action here, imagebrowser.heroku.com.

  7. Steven

    Yeah, I keep seeing that on your blog 🙂

    My problem is that many sites seem to be moving towards the stream because it “worked” for Twitter and Facebook. I need to get my head around what would really work.

  8. Mark

    That is an interesting idea, but I am wondering if it is too visual. I think that is also why Techmeme stays so popular: they have nice groupings of content.

    I am curious to see how your site evolves though.

  9. Rob,

    Yes, you can’t have a static network. I’m always tweaking who is in my home feed on FriendFeed. Taking someone out and moving someone in and seeing what happens.

    Of course FriendFeed has Friend of a Friend, which ensures that I get to see new stuff from new people based on the filter of people I already follow.

    It would be interesting to have something that was better at telling me who was not contributing to my stream in a healthy way AND at identifying those who, based on my current network, behavior (views, clicks, comments, likes, shares), and semantics (topics, tags), I should add to make it richer. (A richer MrTweet.)

    I have this vision of dragging avatars of people in and out of my stream and having it dynamically change the timeline for the past hour. Perhaps it comes with an Items/Hour metric to determine the speed of your stream, broken down further into tags/topics (a rough sketch of such a metric appears after the comments below).

    [Searching Google] Funny, I’d have thought someone would have already created an app that told you the speed of your Tweet stream and compared it to others.

    I’ll be interested to hear your thoughts and ideas as you think through the problem.

  10. In the case of Twitter, the introduction of ‘annotations’, which allows more meta-context to be attached to each tweet, will provide for both machine and human curation.

    Other stream-like services will likely mirror this specification, allowing for API and context interoperability.

  11. There is a lot of news that people don’t care about, BUT people have a need to know the news that they do care about as soon as possible.

    This would make it seem that the two most important variables, then, are individual interest and time.

    I would argue that context and prioritization are important as well, though, due to the economics of attention. (Bernardo Huberman at HP Labs is exploring the concept of the Attention Economy.) There are things that people would always care to know as soon as possible, and there are things that people care about but do not need to know as soon as possible.

    I’m with you, Rob. The social network has the power to solve this problem. But it would seem that the Internet browser needs to be involved as well.

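A rough sketch of the Items/Hour metric imagined in comment 9 above, assuming hypothetical field names and a fixed measurement window:

```python
from collections import Counter

def items_per_hour(items, window_hours):
    """Compute the overall stream speed and a per-tag breakdown.

    items: (timestamp_in_hours, tag) pairs seen during the window.
    """
    overall = len(items) / window_hours
    per_tag = Counter(tag for _ts, tag in items)
    return overall, {tag: n / window_hours for tag, n in per_tag.items()}

stream = [(0.1, "worldcup"), (0.5, "tech"), (0.8, "worldcup"), (1.6, "tech")]
overall, by_tag = items_per_hour(stream, window_hours=2)
print(overall)  # 2.0 items per hour
print(by_tag)   # {'worldcup': 1.0, 'tech': 1.0}
```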