Given the Twitter.com redesign and the plethora of articles being posted about it, I wanted to take a different look at things. First, if you are looking for a good overview of what Twitter has changed, take a look at the TechCrunch coverage. Essentially, Twitter redesigned the main page to have two panes, and to include images, video and related tweets in the right-hand pane. Twitter.com is actually becoming a decent option when compared to the typical desktop clients. Given this new inclusion of media, many services built to fill gaps in Twitter will likely need to add some new features. This gives me the chance to revisit some questions I had about the stream-based interface:
A stream of unrelated items, or only related due to their time of publishing, is a terrible way for people to consume information. However, it is a perfect way for programs to consume information… When we see something that is more appropriate for a program rather than people, that something becomes a protocol or a specification. So, can we all agree that the stream, and the way that Facebook and Twitter post information, is the specification? Also, now that the specification is defined, can we start building things that people will like to use?
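If you take the "stream as specification" idea seriously, you can sketch what a single stream item looks like from a program's point of view. This is purely my own sketch, not an official Twitter or Facebook schema; every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class StreamItem:
    # Hypothetical fields -- not an official Twitter/Facebook schema.
    author: str
    timestamp: float              # items are related only by this
    text: str
    content_type: str = "text"    # "link", "photo", "video", ...
    related: list = field(default_factory=list)  # IDs of related items

# A program consumes the stream trivially: newest first, no other grouping.
stream = [
    StreamItem("blog", 1284999000.0, "My latest post", "link"),
    StreamItem("cnn", 1285000000.0, "Breaking news...", "link"),
]
stream.sort(key=lambda item: item.timestamp, reverse=True)
```

The point of the sketch is how little structure there is: time ordering is the only relationship the stream itself provides, which is exactly why it suits programs better than people.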
Does the new Twitter design really change this problem? No, but at least they have started addressing some of the issues around related information. A good analysis of the situation comes from Jeremiah Owyang and Chris Saad in their comparison of the new Twitter and Facebook. The major questions in this comparison are relevance, context and the platform. For the purposes of this post, I am going to ignore context except where relevance or the platform requires it.
So, what could be built on Twitter and Facebook? Facebook’s platform has spawned companies that are valued in the hundreds of millions of dollars. Mostly, these companies are in social gaming, but I want to focus on the streams of information. What can we build on the streams? First, you need to look at the platforms. Both Twitter and Facebook have solid development platforms. Facebook has become as large as it is mostly due to the breadth of available applications, and this is a fantastic endorsement of their platform. Twitter has a good platform, but theirs is not a plugin model. They have an API that other sites and services may use to pull information from Twitter. Facebook has this type of pull API as well, but the plugin architecture makes Facebook a destination and not just a source of information. If Twitter really wants to gain massive mainstream adoption, a plugin platform would be a huge addition and would fit nicely into their two-pane interface.
What about relevance? One of the more popular Twitter-related applications is Tweetmeme. Tweetmeme may be popular, but it may be in some danger with Twitter's new buttons, as it just surfaces popular links. If popularity is not good enough, we obviously need relevant links. There is one problem with relevance: it is a hard problem to solve, and not for the reasons you may think. Finding information relevant to a keyword, as a relevance filter does, is a fairly simple categorization task. Many sites have done this in the past. Is simple categorization enough? If you look at the social news sites like Digg, Mixx and Reddit, categorization is a major interface component. However, those sites have not seen the type of adoption that Facebook or Twitter have. Part of the problem is timing. Twitter surfaces interesting trends fairly quickly, while Digg may take a full day for a popular story to hit the front page, and even then it may be only one version of the story. Twitter's trend speed and keyword focus present interesting problems as well, because once something becomes a popular trend, the trend is quickly spammed.
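To see why keyword relevance really is a simple categorization task, here is a minimal sketch. The categories, keyword lists and sample text are all invented for illustration; a real filter would use far better text processing than splitting on whitespace:

```python
# Hypothetical categories mapped to keyword sets.
CATEGORIES = {
    "tech": {"twitter", "facebook", "api", "redesign"},
    "sports": {"game", "score", "team"},
}

def categorize(text):
    """Return the categories whose keywords appear in the text."""
    words = set(text.lower().split())
    return {name for name, keywords in CATEGORIES.items()
            if words & keywords}

print(categorize("The Twitter redesign adds a new API"))  # {'tech'}
```

A few lines of code gets you keyword relevance; it is everything beyond keywords, personal relevance in particular, that is hard.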
So far, we can see that keyword relevance has limited utility but great speed, while single-story popularity is slower but much harder to spam. What is the middle ground? My first thought was that Techmeme might be very similar to what is needed, and I think it is close, but besides its tech focus, it lacks personal relevance. This is where the problems really appear. As an example, take a look at this post on the Google News redesign earlier this summer:
Under the surface, there still appears to be a lot of implicit personalization based on past behavior, but, from what someone using it sees, the focus is entirely on customization. I can "edit personalization" and "add sections" to put categories on my page. And that is about the limit of my control and the limit of the explanations of why articles are appearing. People like to be in control. They like to understand why something happens, especially if they don't agree with it. And Google News offers very little control or explanation.
Personalization is normally the direction that most sites take, and I have seen that most people do not personalize a site unless it is ridiculously easy and beneficial. Simple personalization like a color theme is very popular. Creating your own "newspaper" where you select the various topics that you are interested in is not as popular. Basically, it is too much work for most people. Using past behavior is probably the holy grail for this type of personalization. The concept works in practice, but the problem becomes capturing that behavior, which brings you back to the development platforms for Facebook and Twitter. Unless you have a significant amount of user behavior to capture, you cannot really personalize based on that behavior.
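Once the behavior is captured, scoring against it is the easy part. A sketch, assuming you can log which topics a person clicks on (the topic names and click log are invented; the logging itself is the hard part the platform has to provide):

```python
from collections import Counter

# Hypothetical click log: each click is tagged with the story's topic.
clicks = ["tech", "tech", "politics", "tech", "sports"]
profile = Counter(clicks)  # implicit profile built from past behavior

def personal_score(story_topics):
    """Score a story by how often its topics were clicked before."""
    return sum(profile[topic] for topic in story_topics)

stories = {"New Twitter API": ["tech"], "Election recap": ["politics"]}
ranked = sorted(stories, key=lambda s: personal_score(stories[s]),
                reverse=True)
print(ranked)  # the tech story ranks first: three "tech" clicks vs one "politics"
```

Note that the user never configured anything here, which is exactly the appeal of implicit personalization over build-your-own-newspaper customization.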
There is one last concept that needs to be addressed in this relevance issue and that is the type of content we are dealing with. Once again I point you to my previous stream questions:
If you look at the type of content that gets posted, you do not notice any real differences. So, a link posted from this blog and one posted from CNN look basically the same. Photos and videos also look the same, and in some cases are actually just links to a photo sharing service. Are these things really all the same? More importantly, should we really be treating these things all in the same manner?
So, the type of the content matters, as does the source of the content. The question of the source gets into influence, which is a large enough topic to be its own post. The type of content is important when you look at the context of the information as well as the related information.
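Treating content types differently is straightforward once the type is actually captured. A sketch of type-based handling; the handler functions and their placeholder output are hypothetical, standing in for real rendering logic like Twitter's new right-hand pane:

```python
def render_link(item): return f"[link] {item}"
def render_photo(item): return f"[photo thumbnail] {item}"
def render_video(item): return f"[video player] {item}"

# Dispatch table: each content type gets its own treatment
# instead of rendering everything as an identical stream entry.
HANDLERS = {"link": render_link, "photo": render_photo, "video": render_video}

def render(content_type, item):
    handler = HANDLERS.get(content_type, lambda i: i)  # default: plain text
    return handler(item)

print(render("photo", "sunset.jpg"))  # [photo thumbnail] sunset.jpg
```

The hard part is upstream of this: getting the stream to carry a reliable content type at all, rather than a bare link to a photo sharing service.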
This has been a long and winding post, but let's look at the general summary of what you could build on top of the stream. There are several components listed in this post: popularity, keyword relevance, story relevance, trend speed, related stories, related topics, implicit personalization based on past behavior, content types and influence. Granted, this list is a bit long for just one application, but if you could have an application on Facebook or Twitter that listed stories related to a popular link, ranked by the influence of the source and your own past behavior, would you really need to go anywhere else for information?
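The components in that summary could, in principle, collapse into a single ranking function. A sketch of that idea; the weights, scores and story data are all invented, and tuning those weights against real behavior would be the actual product:

```python
def rank_score(story, profile):
    """Combine the components from the post; all weights are guesses."""
    return (
        1.0 * story["popularity"]           # normalized share/retweet count
        + 2.0 * story["keyword_relevance"]  # match against the reader's topics
        + 1.5 * story["source_influence"]   # influence of whoever posted it
        + 3.0 * sum(profile.get(t, 0) for t in story["topics"])  # past behavior
    )

profile = {"tech": 0.9, "sports": 0.1}  # hypothetical implicit profile
stories = [
    {"title": "Twitter redesign", "popularity": 0.8, "keyword_relevance": 0.9,
     "source_influence": 0.7, "topics": ["tech"]},
    {"title": "Big game recap", "popularity": 0.9, "keyword_relevance": 0.2,
     "source_influence": 0.5, "topics": ["sports"]},
]
best = max(stories, key=lambda s: rank_score(s, profile))
print(best["title"])  # Twitter redesign
```

Even this toy version shows why the stream alone is not enough: every term in that function depends on data the raw stream does not carry.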