Thoughts on technology and social web

October 6, 2009

Service federation in the cloud – PushButton or Google Wave?

Filed under: Real time web, Social Networking — Tags: , , — Ravikant Cherukuri @ 6:54 pm

PushButton technologies – PubSubHubbub (PuSH), Webhooks and Atom – provide a lighter-weight, web-friendly alternative to the heavier machinery that Google Wave uses, mainly for server-to-server federation. At a high level this seems true: PuSH uses HTTP POSTs to communicate, versus the persistent XMPP connections Wave relies on. That makes it more web friendly and easier to implement subscribers and publishers in a language of your choice.

Disclaimer: on one level, PubSubHubbub and Wave are solving different problems. But at a higher level both implement real time federation with pub-sub. My attempt here is to evaluate Wave as a vehicle for service federation. I actually think Wave is a great idea, but it is too heavy for the web as we have it today. The web is evolutionary; Wave is too drastic and revolutionary.

What makes wave compelling?

There are several interesting aspects. The first is the shell (the UX). The presentation is like e-mail but more real time; it takes a paradigm that everybody understands and then extends it. The second is extensibility. With gadgets and robots you can extend the Wave system, and with embedding you can extend the way waves are visualized and consumed. The third is federation. Federation makes it possible for different providers, each owning its own data, to communicate, and for their user bases to work with each other. This kind of federation is not new: FriendFeed interoperates with many sites like Flickr using SUP, and many IM networks interoperate too. But Wave federation tries to standardize this into XMPP-like universal federation, where you don't have to know or work with each provider individually to federate with them.

What makes Wave complex?

Wave tries to solve a bunch of problems to make real time, persistent communication possible. The protocol Wave uses to federate dictates that all federated partners maintain complete copies of objects, even though each object is owned by one of them. Operational transformation (OT) dictates that all the operations on the document are stored along with the latest copy of the document. When a user from a federated provider joins a wave, that provider gets a copy of the wave in terms of its operations. All these features give data an e-mail-like decentralization, and Wave builds real time collaborative document editing on top. Together they make the protocol incredibly complex.

The Wave protocol imposes heavy requirements on what a remote partner has to implement. The protocol's data model (which has to implement OT) requires a third party to represent state in terms of the operations on it. That is, you have to keep track of not only the current state of the wave but also the history of operations that produced it. This is not a pattern that services normally use. For large scale service interop, each partner has to maintain a full copy of the wave, which doesn't sound feasible at web scale. Imagine an enterprise Wave user joining a large public wave: suddenly that enterprise Wave server has to handle the resulting barrage of updates.
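To make the "state as operation history" requirement concrete, here is a minimal sketch (the class and operation encoding are invented for illustration; real Wave OT also transforms concurrent operations against each other, which this deliberately omits) of a document whose current text is just the replay of its stored operations:

```python
class OpLogDocument:
    """Document state kept as a full history of operations, as the
    federation protocol requires, rather than as a plain string."""

    def __init__(self):
        self.ops = []  # every operation ever applied, in order

    def apply(self, op):
        # op is ("insert", position, text) or ("delete", position, length)
        self.ops.append(op)

    @property
    def text(self):
        # The current state is derived by replaying the whole history.
        s = ""
        for kind, pos, arg in self.ops:
            if kind == "insert":
                s = s[:pos] + arg + s[pos:]
            elif kind == "delete":
                s = s[:pos] + s[pos + arg:]
        return s
```

A federating partner would have to persist something like `ops` for every wave it participates in, not just the final string, which is the storage burden the paragraph above describes.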

Could it be simpler?

Activity Streams is an effort by several people involved in social networking to standardize protocols for different networks to interoperate. The great thing about Activity Streams is that it defines the protocol for expressing activities without defining how providers should implement their services. This is how the web works today. We can't (and should not try to) anticipate how others will use a protocol or service. Just define the interface and leave the rest to them.

PuSH with Atom is a more web-friendly pub-sub protocol than the XEP-0060 that Wave uses. Simple HTTP POSTs to make and break subscriptions and to receive notifications make the interface feel very natural. Could a collaborative data sync algorithm like OT be implemented on top of PuSH, and is it even necessary to achieve federation? Since the data going over PuSH is Atom, if services could synchronize just by exchanging Atom items, the world would be much simpler.
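As a rough sketch of how light the subscriber side is, this builds the form-encoded body of a PuSH subscription request (the `hub.*` field names are from the PubSubHubbub spec; the function name and URLs are invented for illustration):

```python
from urllib.parse import urlencode

def build_subscribe_request(hub_url, topic_url, callback_url):
    """Build the form-encoded body a subscriber POSTs to the hub to
    start receiving notifications for a topic feed."""
    body = urlencode({
        "hub.mode": "subscribe",       # or "unsubscribe" to break it
        "hub.topic": topic_url,        # the feed being subscribed to
        "hub.callback": callback_url,  # where the hub POSTs new entries
        "hub.verify": "async",         # let the hub verify us asynchronously
    })
    return hub_url, body
```

In real use the body would be sent with an ordinary HTTP POST (e.g. `urllib.request`); the point is that the whole subscription interface is one form post, with no persistent connection to maintain.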

Without OT, if federating providers want to maintain copies of data actually owned by another provider, there is still the complexity of synchronization. In most cases, though, that level of sync is not required; it only matters when a fine-grained item is being edited simultaneously by multiple people. If the systems can identify conflicts and either overwrite items or prompt users to resolve them, that could be acceptable for most cases. The FeedSync protocol that Live Mesh uses works this way: your content is a feed of items, items are synchronized with a simple algorithm, and conflicts are flagged.
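A toy version of that simpler model, loosely inspired by FeedSync rather than taken from its spec (the real protocol also tracks per-item update history): items carry version numbers, the higher version wins, and a simultaneous edit is flagged instead of silently merged.

```python
def merge_items(local, remote):
    """Merge two copies of a feed. Items are {id: (version, payload)}.
    Higher version wins; equal versions with different payloads are
    flagged as conflicts for the user to resolve."""
    merged, conflicts = {}, []
    for item_id in set(local) | set(remote):
        l, r = local.get(item_id), remote.get(item_id)
        if l is None or (r is not None and r[0] > l[0]):
            merged[item_id] = r          # remote is newer (or only copy)
        elif r is None or l[0] > r[0]:
            merged[item_id] = l          # local is newer (or only copy)
        elif l == r:
            merged[item_id] = l          # identical, nothing to do
        else:
            merged[item_id] = l          # same version, different payloads:
            conflicts.append(item_id)    # keep local, surface the conflict
    return merged, conflicts
```

This is obviously cruder than OT, but for feeds of discrete items it covers the common case, and the conflict list is exactly the "prompt users to correct" path described above.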

Combining Activity Streams with PuSH and FeedSync could provide a much simpler, more web-like infrastructure for a universal federation model that is easy to build on and participate in. Federation is nothing but mashups with a business model. While the mashup space delights us with innovations every day, service federation moves at a snail's pace. A simpler model to federate at web scale (in size and diversity) would lead to seamless aggregation of your data from any service (that your friends might be using) into the services that you use.


September 7, 2009

pubsubhubbub and rss-cloud: changing the way you read the web

Filed under: Cloud Computing, Real time web, Social Networking — Ravikant Cherukuri @ 11:39 pm

Today WordPress announced support for RSSCloud; Pubsubhubbub was adopted by Blogger and Google Reader a few days back. These technologies are being adopted much faster than I expected. Much like AJAX a few years back, the buzz is building up. Anil Dash's posts on PushButton talk of the potential of these technologies. The basic idea is very simple: a RESTful pub-sub protocol that provides real time notifications for web content.

  • Subscribers register their interest in a publisher's content with the hub.
  • Publishers ping the hub when new content is posted.
  • The hub reacts to the ping by fetching content from the publisher and posting the content to the subscribers.
  • All content is in RSS/Atom format.
  • Communication to and from the hub happens over HTTP using webhooks.
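For example, the subscriber's half of the handshake can stay this small. This hypothetical handler answers the hub's verification GET by echoing `hub.challenge` back only when the subscription was really requested (the `hub.*` parameter names are from the PubSubHubbub spec; the rest is an illustrative sketch):

```python
def handle_verification(params, pending_topics):
    """Subscriber-side handler for the hub's verification request.
    Returns (http_status, response_body): echo the challenge with a 200
    to confirm, or refuse with a 404 if we never asked for this topic."""
    mode = params.get("hub.mode")
    topic = params.get("hub.topic")
    if mode in ("subscribe", "unsubscribe") and topic in pending_topics:
        return 200, params.get("hub.challenge", "")
    return 404, ""
```

That challenge/echo step is the whole anti-spoofing story for subscriptions, which is a good illustration of how web-native the design is: plain HTTP requests, no session state beyond a set of topics you remember asking for.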

This has the potential to change the web as we know it. It could bring Twitter-like real-time notifications to changes in all of the web's content. As an end user, this is exciting because it could finally bring a (much deserved) death to the browser refresh button, by channelling all the content you are interested in into streams that can be delivered to you in real time. PushButton, together with DiSO/Activity Streams, is great for social network federation, and Activity Streams' Atom format should work well with PushButton.

Hubs federating and chaining subscriptions with each other provide a distributed, decentralized pub-sub infrastructure. This has some definite advantages over XMPP. XMPP's XEP-0060 provides similar generic pub-sub functionality, but the REST-based approach is simpler and more web-savvy. Webhooks take the honors here, with their REST-based callbacks, as opposed to XMPP's connection-based approach.

Add bidirectional communication channels on web pages (using long poll today, or WebSockets in HTML5) to the mix, and we have a means to deliver notifications to end users. When I am on Facebook and I get a mail in my Gmail account, the notification could be delivered to me through Facebook.

Update: a few good posts comparing PubSubHubbub and RSSCloud –

How to publish and receive blog posts in Real-time
PubSubHubbub vs. rssCloud
PubSubHubbub = rssCloud + ping ?
RSSCloud Vs. PubSubHubbub: Why The Fat Pings Win
There’s a Reason RSSCloud Failed to Catch On
The Reason RSS Cloud Can Work Now
The Web At a New Crossroads

July 17, 2009

Real time roundup

Filed under: Cloud Computing, NextWeb, Real time web, Social Networking — Ravikant Cherukuri @ 6:50 pm

The real time web has been a favorite topic of mine for a while now. I work on one of the large IM systems and am very interested in these developments. The real time web started emerging as the platform for the next web in the last year or two. The main idea is that events happening on the web are brought to you as they happen; think of the coverage the Iranian election aftermath got on Twitter. The more obvious scenarios are already a reality, and there are several more subtle but equally impressive usages that many are working on. As with all technologies driven by guy-in-the-garage startups, it's tough to see where this is all leading (if anywhere). The real time web focuses on several aspects of the web:

  • filtered web streams
  • instant delivery of web posts
  • real time collaboration
  • responsive web applications
  • S2S data filtering

The lack of full duplex connectivity has been a downside of HTTP, which gave native applications a one-up. HTML5 is out to correct that with WebSockets; meanwhile, technologies like Comet and BOSH provide near real time connectivity over HTTP. So the technology seems to be in place for the real time web. But why does it matter? We already have near-real-time, polling-based push technologies like RSS/Atom. In my mind the answer is user experience: real time gives the most natural experience. Compare a walkie-talkie to a telephone. A walkie-talkie is near real time, with a lot of user-level synchronization that is just a quirk of the technology. A telephone conversation is a full duplex, real-life conversation. It makes a lot of difference.

More links to the real time web :

Introduction to the RealTimeWeb

The Real-Time Web – O’Reilly Broadcast

Is Real-time the Future of the Web?

Building Real Time Web Applications Using HTML 5 Web Sockets

How to Deal with the Real Time Web: Navigating the River

Real time search and filtered streams

For me the coolest part of microblogging is real time search. The significance of most information fades rapidly over time, so getting short spurts of information as and when it happens is super useful. Twitter's hash tags are a good example: you follow information rather than people. I find the following ways of tracking real-time information interesting.

  • Twitter hash tags. Put something like #iwant in a tweet and people can easily track your tweets, and might contact you if they are selling what you want. A hash tag attaches metadata to your tweet and makes it trackable by other users; #iwant and #ihave are a good example. There are third party sites that consume the Twitter firehose and build a real-time marketplace. Such mining verticals on real time data are fast emerging. Hugely popular Twitter games like Spymaster are another example.
  • Google's "e-mail me when a new item is found that matches my search criteria" feature. This is super useful: if you know what you are searching for, you can be the first to find information as it goes live. Google can deliver this to you within a few hours of the information getting posted on the web. This is relatively real time (compared to the others), but the push is still hours behind. When this becomes true real time, it will be awesome.
  • Aggregated real time search. There are several real time search engines out there that give you real time feed search across Twitter, Facebook, Identica and the like in one interface. Some have AJAX-powered interfaces, and some are true real time over XMPP (Collecta).
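Hashtag tracking itself is almost trivially simple, which is part of its appeal. A hypothetical sketch of filtering a stream of tweets down to one tag, the "follow information rather than people" idea in miniature:

```python
import re

HASHTAG = re.compile(r"#(\w+)")  # naive: word characters after '#'

def track(tweets, tag):
    """Return only the tweets that carry the given hashtag."""
    tag = tag.lower()
    return [
        t for t in tweets
        if tag in (m.lower() for m in HASHTAG.findall(t))
    ]
```

A real tracker would run this over a live stream instead of a list, but the matching logic is no deeper than this: the tag is self-describing metadata embedded in the message itself.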

The basic idea behind all this is subscribing to changes to data on the web, in real time, without tying yourself to specific web sites. Eventually, the bigger search engines like Google, Yahoo and Bing will come up with a tighter integration of real time into all of the web they index. With that, real time search will seamlessly integrate with normal web search as we know it.

Instant information delivery

This category of real time applications aims to deliver data of interest to you as soon as it is posted. Sample scenarios include:

  • Deliver blog posts and comments that you are interested in instantly. This builds on current RSS-based systems by bringing true real time to content delivery.
    • WordPress now lets you track other WordPress blogs, and comments on your own WordPress blog, over XMPP IM with clients like GTalk.
    • Tweet.IM delivers your Twitter feed over IM in real time using XMPP.
    • FriendFeed aggregates feeds from all your friends' blogs, social streams and comments, and has a feature that delivers these over XMPP IM as FriendFeed finds them.
    • Google Wave uses the XMPP protocol to sync wave content in real time.
    • There are a couple of startups, like iNezha, that provide real time updates for all the blogs you are interested in.
  • Enable efficient content aggregation using pub-sub protocols. Google's PubSubHubbub is a good example, and there have been several experiments by companies like Gnip and FriendFeed using XMPP for the same purpose.
  • IM systems are integrated with email systems (by the same vendor) today: you get an email, and the IM system presents you with a toast in real time.
  • Windows Live Messenger also supports alerts from third party providers: real time events from arbitrary providers with whom you have registered your interest. You can click on the Windows Live Alerts button on my blog and get alerts in your Messenger whenever I update my blog.

[More to come in a later post]

June 19, 2009

Protocols for the real-time web

Filed under: NextWeb, Social Networking — Ravikant Cherukuri @ 5:46 am

Today Collecta unveiled their real time search engine, one of several players in this fast-evolving space. Others include Twitter search, OneRiot, Tweetmeme, Facebook search, the rumored Google real-time search, and more. The interesting thing about Collecta is that they are true real time: you see updates reach you within seconds (of Collecta seeing them), because they use Jabber's XMPP protocol to push updates to the client. This is one of many techniques used for real time communication on the web, some invented as people needed them and others, like XMPP, that are standardized. What are these techniques and how do they stack up?

As early as ten years back, we started seeing applications that tried to bring you information as it changes or is created on the web. This evolved into RSS/Atom feeds: a polling-based pull that simulates push. Your browser or blog reader periodically polls your feeds for changes and updates you when they change. This evolved into feed aggregation services, where all your feeds are aggregated into a single feed that you poll from the client. That makes polling efficient on the client, but the service that owns the feed still bears the load. Consider this: for the web to be real time, the zillions of objects on the web, from product listings to blog posts to Wikipedia articles, have to be able to communicate with users in real time. The feed model just doesn't scale to this.

As web UI evolved we needed web applications to be more responsive, and so AJAX was born. AJAX (Asynchronous JavaScript And XML) enabled JavaScript to make XML-based calls to the web server and get data back into the current page without reloading it. This made web apps more responsive, but the model is still the same: JavaScript now uses XMLHttpRequest to poll feeds from the server.

Then consider real time collaboration scenarios like instant messaging and collaborative document editing. These need real time responses, because other users are watching the screen for them. Constant polling will either overwhelm the servers (with a short polling interval) or degrade the user experience (with a longer one). Long polling, or Comet, comes to the rescue here. The browser keeps a long-running connection open with the server so the server can send events to the browser as they happen. The basic technique is for the browser to make a request to which the server does not respond until it has data to send; once the browser gets data, it makes another request. Many web apps like Gmail, Facebook and Meebo use this technique to bring real time functionality to the web.
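In miniature, a long poll is just a blocking read with a timeout. In this sketch (all names invented) a `queue.Queue` stands in for the held HTTP connection, and a timer thread plays the server-side event source:

```python
import queue
import threading

def long_poll(events, timeout):
    """One long-poll request: block until the server has data or the
    request times out, then return so the client can issue the next
    request. Returns None on timeout (an empty response)."""
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None

# The "server": an event arrives 0.1 seconds after the poll starts.
events = queue.Queue()
threading.Timer(0.1, events.put, args=("new message",)).start()
```

The first `long_poll(events, timeout=2.0)` call blocks briefly and returns `"new message"`; a subsequent call with a short timeout returns `None`, which is exactly the "respond empty, client re-polls" cycle the paragraph describes.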

These techniques are also used by APIs that bring real time to the web by wrapping existing proprietary real-time protocols: Messenger Web Toolkit, Web AIM, the Yahoo Messenger SDK and so on. The Messenger Web Toolkit has a cool feature that lets you send non-IM messages, which can be used to build higher level collaboration applications.

The Jabber/XMPP protocol is an extensible protocol used for publish-subscribe (mainly in instant messaging). It is finding its way into many real time web scenarios, such as:

  • real time search (Collecta), Twitter, FriendFeed,
  • aggregators like PixelPipe that let you interact with your social networks via XMPP,
  • the WordPress firehose, for partners like search engines and market intelligence providers who want to ingest a real-time stream of new posts and comments the second they are published,
  • the Twitter firehose, where third parties can get the real-time stream of Twitter data to mine and search,
  • Google Wave, which extends XMPP to build a collaboration system.

XMPP also has JavaScript APIs for the web, like Strophe and xmpp4js, and a Comet-like technology called BOSH that takes care of firewall traversal by tunnelling XMPP over HTTP. Overall this is a fairly well designed and extensible protocol with a lot of good documentation and several reference implementations, and it is becoming a protocol of choice for the real time web.

There is also the WebSockets API in the HTML5 specification. HTML5 introduces the WebSocket interface, which defines a full-duplex communications channel that operates over a single socket and is exposed via a JavaScript interface in HTML5-compliant browsers. It traverses firewalls and proxies and provides bi-directional transport with streaming capability, without the long-poll overhead, and the JavaScript API is very simple. Comet can surely take advantage of this, and things become more straightforward without hidden iframes and arcane protocols. So can XMPP. It sounds like the holy grail of making the browser a two-way real-time medium.
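A flavor of how lightweight the protocol is: in the standardized WebSocket opening handshake, the server proves it speaks WebSocket by hashing the client's key with a GUID that is fixed by the specification and returning it in the Sec-WebSocket-Accept header. The computation is a few lines (this is a sketch of just that step, not a full server):

```python
import base64
import hashlib

# This GUID is a constant defined by the WebSocket specification.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept header value the server must
    return for the given client Sec-WebSocket-Key header."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

After this one HTTP-style exchange the connection is upgraded, and both sides simply stream frames, with none of the request/response churn of long polling.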

This space is changing fast, a lot of smart folks are figuring out how to make the web more responsive and real time, and the protocols keep evolving to accommodate that.

Update: there is a good article about XMPP's progress in 2009 at

June 14, 2009

Ubiquitous, rich and real-time – the future of communication

Filed under: Social Networking — Ravikant Cherukuri @ 11:47 pm

Communication models on the internet have been changing at a good pace over the last 20 years. That pace has picked up quite a bit lately, and a new model is emerging. Based on federation, content models and the real time web, the near future promises great innovations in this space. Faster machines, abundant bandwidth, cloud computing and a faster, more agile developer mindset will hasten these changes, and the gold rush to get there is on. Social streams, Google Wave, web integration of instant messaging, and OpenID/OAuth are all driving this change. Google Wave in particular is ambitious in reaching for the next paradigm; some of what I discuss below is already in the realm of what Google is going to release this fall.


Ranging from BBSes, newsgroups, e-mail, instant messaging and SMS to, lately, social streams (from myriad social networking sites), each of these services provides some niche and has its entrenched users. More users are using more than one of them, yet the integration between them is minimal and piecemeal. Aggregation services (like FriendFeed) are going to play a big role going forward. Unifying these means of communication so they interoperate seamlessly is a challenge not yet solved, though many companies have worked on it for decades. With the web (HTML/HTTP/JavaScript) being the point where different technologies and platforms merge and gel together, this is more possible than ever. The race to be the front-end where users consume all this data is on: Facebook, FriendFeed, Google and Microsoft are all looking to retain their users by bringing all of a user's data into their presentation realm. Federation with other services is imperative in this more nuanced version of walled gardens.

Being ubiquitous means being able to roam your identity across several technologies (this is becoming a reality with the widespread acceptance of OpenID/OAuth). But perhaps more subtly, it also means that your conversations roam with you. For example, you should be able to continue, on Twitter, a conversation you are having on a blog. You should be able to reply to a comment on your Facebook wall from an instant messaging conversation window. Taking this one step further, you should be able to take your conversation into the context of any web resource you are browsing. Being ubiquitous also means that on my cell phone I have just one app that gets me all my communications: one push interface that brings me information from all my channels, one UX for me to look at everything.


What else can make this communication bus more effective? Most information shared and exchanged in the older models (like e-mail) is text with some URLs and images. The tools to express yourself are evolving: I can share much more than text and links in my Facebook feed, including richer application-specific posts that my friends can easily interact with. Still, what I can share and how easily I can do it depends on a lot of factors. Being able to share from desktop applications and web sites to any of my streams at different venues (Hotmail inbox, Facebook news feed, Twitter feed and so on) easily and uniformly is the next big step. What I am sharing has to be decoupled from how I am sharing it; that is, the technology that knows about what I am sharing needs to interoperate with the transport technology that gets the shared information from one place to another. Compare the complexity and richness of today's (and tomorrow's) apps with the simplicity of SMTP. Being able to embed objects into conversations gives context and enhances the value of the communiqué to the users.

Another aspect of richness of communication is persistence and the conversation object model. Let me explain a bit. A lot of today's technologies are basic transports: they just get the information you share from you to your buddy, and most of the semantics of the conversation is lost. Today, with email, you can manually organize your mail into folders and make it easy on yourself. Gmail broke this paradigm by making mail searchable (which works very well for me). Google Wave promises to bring a wiki-like collaboration model to the mix, and IMHO this is a big step: having the information exchanged in a group conversation automatically available and organized for future reference is big. Most technologies today model the user and the buddies as entities you can interact with, but conversation content is not treated that well; it is mostly treated as blobs passed between users. If the semantics of the conversation were understood and included in the application's object model, a lot of possibilities would open up. The semantic web itself is progressing (with microformats and now Common Tag) toward empowering users to define semantic concepts within the context of their content and link them to the greater web. The same could be done with conversations: tools for the participants to organize the ideas and concepts under discussion would help.


In most of today's communication applications, real time is integrated as an afterthought, like bolting instant messaging onto e-mail. You still have two distinct apps; you just access them from one UX. This is often overlooked because a few minutes' delay (as in e-mail) doesn't sound that bad for most conversations, and we are used to technologies that poll and pull data for us. But real time, with protocols like XMPP, is the new gold standard for responsive, interactive collaboration. Being able to collaboratively edit a document (floor plans, blueprints, health records) and see each other's changes in real time, reliably, is vital. Just as important is the ability to preserve this data in a rich conversation repository that can be edited and enhanced later.

There are several products today that provide a complete stack for real time collaboration. The problem is that they are constrained to their domain and are not built for building upon; they are not built as platforms. By far, XMPP has evolved into the most extensible, standards-based real-time transport. It should be treated as TCP was for the network and HTTP was for the web. Application level protocols are being built on top of XMPP as extensions (Jingle is an XMPP extension for voice, used by GTalk). Rich and real-time should be built on such a standards-based stack to be able to scale and federate with the myriad technologies and social networks.

Real-time takes some of the effort out of following all the information you are interested in on the web. You can rest assured that if something happens, you will know, and you will know as soon as it happens. You can put the tired F5 key to rest. Real-time also takes some of the effort out of what services must do to keep you updated. Polling takes a lot of resources, especially at web scale: imagine 50 million people each following 100 objects on the web, each wanting to know the moment an object changes. There is an interesting presentation on how FriendFeed crawled Flickr 3 million times for 45,000 users, only 6,000 of whom were logged in.
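The back-of-the-envelope arithmetic for that scenario makes the point (the once-a-minute polling interval below is my assumption, not from the text):

```python
def polls_per_day(users, objects_per_user, poll_interval_seconds):
    """Daily request count if every user's reader checks every followed
    object once per interval, whether or not anything changed."""
    polls_per_object_per_day = 86_400 // poll_interval_seconds
    return users * objects_per_user * polls_per_object_per_day

# 50 million users, 100 followed objects each, polled once a minute:
daily = polls_per_day(50_000_000, 100, 60)  # 7.2 trillion requests/day
```

Nearly all of those requests return "nothing changed", which is exactly the waste that push-based delivery eliminates: the publisher sends one notification per actual change instead.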

April 30, 2009

Walled gardens

Filed under: Social Networking — Ravikant Cherukuri @ 1:58 am

Wikipedia says

“A walled garden, with regards to media content, refers to a closed set or exclusive set of information services provided for users (a method of creating a monopoly or securing an information system).”

In the context of social networks, these are the networks that want you to stay in-network by locking your information in-network. Social networks are a dime a dozen: some old, some new; some popular, some not so much. With innovation on steroids, how does a successful network retain its user base when what users look for changes ever faster and the want (need?) for new ways to communicate grows stronger by the day? I will move from MySpace to Facebook (and from Facebook to something else) in a minute if I am convinced it gives me more.

When is a walled garden a good thing? Not after the party moves outside. No wonder many social networks are racing to share and interoperate so their users can party inside their network. There is an increased capacity for small companies to successfully out-innovate (and outgrow) any established player, and at the same time there is very little brand loyalty. People go wherever it is hip to go today, or to whatever provides them the best service, and rightly so. Any revenue model that depends on users staying in a walled network will not work. So, what can one do to keep users in?

Innovate. A good strategy, but innovation within one company cannot consistently beat the distributed innovation that happens across the many startup garages and passionate minds. Distributed innovation will always win; it takes so little to start a service. Think YouTube, Facebook and the rest.

Acquire. Leave the innovation to the small, nimble guys and acquire them to build your service. Easier said than done; these rarely work, and the individual pieces are worth more than the whole. Sure, the founders will make money and the acquiring company will make some news, but the probability of success, especially guaranteed success, is low. Hotmail was a good acquisition for Microsoft, and YouTube might prove to be one for Google, but it is surely harder to pull off.

Partner. This is the model many networks are shooting for today: provide a platform for the small guys to innovate on. The small guy who used to reverse engineer the walled garden's protocols is the most prized asset today. Take advantage of distributed innovation and keep users in your network with the diversity of content. At the same time, an open API could hurt your bottom line, with fewer banner ads as people view your content via a third party client. Sure, but it's better than extinction. When you partner to share the innovation, you partner to share the revenue too. The hope is that you expand faster and live longer: win with scale. This definitely looks like a workable model.

I am a big fan of The Innovator's Dilemma. It explains why the leader in one generation of technology fails to lead the next, even if it invented the next generation technology; disk manufacturers are one of the examples. Today, with the faster pace of innovation and an easier path from idea to reality, this is truer than ever. The main reason for not making it in the next generation is that the leader is busy making money in the current setup and is reluctant to rock the boat.

Keep trying to disrupt yourself, or somebody else will. Be paranoid. Keep your boat rocking. Linear plans are for suckers (or the Bell companies of the 70s). In fact, if you are making any money today, somebody is already trying to disrupt you out of existence. If you succeed in convincing your users that your old service is a joke and that they should use your new service that flips the current paradigm, good: you still have your users. If somebody else convinces them, you are extinct.

Being open and interoperable is not an option; it's necessary for survival. At the same time, openness should be coupled with a business model that can make money in the new world. An API to expose your service and build upon is good, but compelling features that bring users to your service must go along with it.

April 22, 2009

Social networks on desktop (and phones)

Filed under: Social Networking — Ravikant Cherukuri @ 4:57 am

It's getting tedious going to all these social networks to check friends' statuses; if you are new to this, it's a major pain point and barrier. I am used to an instant messaging client running on my computer and popping up for conversations, and it looks like this paradigm could stretch to social networks as well. Recently I tried out different clients that do this for you, some in a limited and simple way and some aiming for a more complete integration. It's funny to see these tools getting attention now; we have kind of come full circle from the desktop to Web 2.0 and back 🙂
Below is the list of clients and the good and bad I found in them.

A neat, instant-messaging-like interface with support for a whole slew of networks. The interface is simple and usable.
Social Networks : Facebook/Twitter/MySpace/LinkedIn
Instant Messaging : Windows Live Messenger/ Yahoo Messenger/Gtalk/AIM/Skype
Email : Hotmail/Yahoo/Gmail/Pop/IMAP/AOL

The biggest downside is that your credentials for all these services are stored on the server side. They claim the credentials are stored in a way that cannot be compromised, but still, I am not a big fan of that: these credentials hold the keys to our digital lives.

Works for Twitter and Facebook. Simple interface and limited functionality, but it works for following your friends' status feeds.

Works for the Twitter and Seesmic networks; not sure about Facebook. Again, a simple interface with limited features. Maybe that is all we need.

Similar to Digsby: an IM-like interface. Supports Orkut/MySpace/Facebook.

This is the most interesting client I found, especially the FriendFeed edition. FriendFeed brings all the conversations your friends are having on the web to you, and AlertThingy gives you a desktop interface to that. Very powerful, but a bit too complicated for non-FriendFeed users to see all that in one place.

This falls into the Digsby category. 'Nuff said.

I have been using fring for a month now. It integrates IM very well: with a Gtalk account, everything is delivered to your phone. But the interface sucks. My cell phone is not a desktop; the experience should integrate into my cell phone rather than simulate a desktop IM experience. Just my 2 cents.

There is still a lot of scope for improvement here, and I am sure there will be a lot more innovation. Compared to the IM evolution, the SN (Social Network) space started with more open APIs, removing the barrier to entry. New clients are free to innovate with access to a lot of SNs and so can instantly become useful. Compare that with the IM networks, which did not evolve in that direction and as a result missed the bus on SN. Reminds me of the hard drive manufacturers in the Innovator's Dilemma.

April 7, 2009

Love the bar

Filed under: Social Networking — Ravikant Cherukuri @ 4:57 am

Have you seen the DiggBar? It looks like StumbleUpon and Reddit already had one, but it still looks cool, so I started using digg :). Messenger Web Toolkit has a web bar that lets you embed the Windows Live Messenger social coolness in your site. It's very well designed and looks great!

A tiny bit of Internet Duct Tape from my side, and I have a bookmarklet that lets you have the Messenger WebBar on any page you go to. Just add the link below to your Favorites Bar in IE or the Bookmarks toolbar in Firefox. Visit your favorite site, then click on the toolbar button. Voila: you have your social network that roams with you, a messenger web client accessible from anywhere for a quick chat. I still have a few more ideas to make this more useful… maybe later.
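For the curious, the general shape of such a bookmarklet is just a link whose href is a javascript: URL that injects a script tag into whatever page you are currently on. This is only a sketch; the webbar.js URL below is a placeholder for illustration, not the actual WebBar loader:

```html
<!-- Hypothetical bookmarklet sketch: the script URL is a placeholder, not the real one -->
<a href="javascript:(function(){var s=document.createElement('script');s.src='http://example.com/webbar.js';document.body.appendChild(s);})();">Messenger WebBar</a>
```

Because the injected script runs in the context of the page you are visiting, it can render the bar on top of any site, which is what makes the "roams with you" part work.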

Looks like my blog doesn’t like bookmarklet urls. So here is a page with bookmarklet link.

April 4, 2009

Micro formats and web conversations

Filed under: Social Networking — Ravikant Cherukuri @ 4:56 am

Conversations on the web happen via e-mail/IM/posts and comments/groups/social network feeds/proprietary web sites/SMS etc. The meaning of the data transmitted is understood by the sender and receiver. The medium is just a pipe for the users to communicate.

There are concepts in a conversation that are more semantic. For example, on IM, I give my address to a friend. The fact that this is an address is lost in the medium I used to transfer the data. If that were preserved, my friend could have beamed the address to his smart phone and the phone would have mapped the directions for him. Some things on the web today already work this way. For example, if I send you an email with a JPEG, your email reader understands that it's an image. If I send you a URL, it indicates to you that it's clickable. We take these for granted. Microformats extend this model to higher-level concepts.

Definition: “Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards.”

Take hCalendar, which can be used to represent a calendar event. For example (from the wiki), consider this event:

<div class="vevent">
  <h3 class="summary">XYZ Project Review</h3>
  <p class="description">Project XYZ Review Meeting</p>
  <p>To be held on <abbr class="dtstart" title="1998-03-12T08:30:00-05:00">12 March 1998 from 8:30am EST</abbr>
  until <abbr class="dtend" title="1998-03-12T09:30:00-05:00">9:30am EST</abbr></p>
  <p>Location: <span class="location">1CP Conference Room 4350</span></p>
  <small>Booked by: <span class="uid"></span> on
  <abbr class="dtstamp" title="19980309T231000Z">9 Mar 1998 6:00pm</abbr></small>
</div>

which looks like this in the browser:

XYZ Project Review

Project XYZ Review Meeting

To be held on 12 March 1998 from 8:30am EST until 9:30am EST

Location: 1CP Conference Room 4350

Booked by: on 9 Mar 1998 6:00pm


This is both human readable and machine readable. When you send this to a user over IM, the user can read it and import it into his primary calendar with a click. Better yet, the IM client can recognize that it's an event and show just the title and an icon to import, or tell you if you have any schedule conflicts. There are many such microformats already defined for us to use: hCard for people and organizations, VoteLinks for opinions, hReview for ratings and reviews, etc. And you could always define your own.
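To give a flavor of hCard, here is a minimal sketch in the same style as the hCalendar example above; the person, organization, and email address are made up for illustration:

```html
<!-- A minimal hCard sketch; the name, org, and email are illustrative -->
<div class="vcard">
  <span class="fn">Jane Doe</span>,
  <span class="org">Example Corp</span>,
  <a class="email" href="mailto:jane@example.com">jane@example.com</a>
</div>
```

A browser renders this as ordinary text and a mailto link, while a microformat-aware client can recognize the vcard class and offer to file it as a contact.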

One roadblock for microformat adoption in canned/server-generated content is that content on the web is traditionally treated as text, and today's authoring tools don't support microformats (at least none that I know of). The problem is much more manageable for user-generated content. When you are writing a blog, you have handy tools to add images and links to your post; in an IM window you have handy emoticons. Why not have shortcuts to add data in microformats to the conversation? These can be saved by the consumer of the content and imported into other applications. As the microformat itself is HTML, there is always a default rendering if the receiving application does not understand the format.

Some scenarios I can think of that will benefit from this.

  • Drag and drop an Outlook contact on an IM buddy. The buddy receives vCard-formatted data (that might look like a visiting card) that he could right-click on and choose to file.
  • Say you see a review for a Kindle that you want to share with your friend. You could send an hReview-formatted review to your friend (without typing it yourself, of course; your browser could help you there, maybe).
  • Share the location of a restaurant by sharing its coordinates in the geo microformat.
  • Share a listing from Craigslist with a buddy using hListing.
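As a sketch of the restaurant scenario, the geo microformat marks up a latitude/longitude pair with class names; the coordinates and restaurant name here are made up for illustration:

```html
<!-- Sketch of a geo-marked-up location; name and coordinates are illustrative -->
<div>
  Joe's Diner is at
  <span class="geo">
    <span class="latitude">47.6062</span>,
    <span class="longitude">-122.3321</span>
  </span>
</div>
```

A human reads the coordinates as plain text, while the receiving client could spot the geo class and offer a "map it" action.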

On the browser/IM client side, tools to save and mine microformats will make it easier for users to collect and share these. Visual Studio-style IntelliSense could make it as easy to express things in microformats as it is to use emoticons in IM.

Interesting reading :

Microformats – Part 0: Introduction to Microformats
Microformats – Part 1: Structured Data Chaos
Microformats – Part 2: The Fundamental Types
Microformats – Part 3: Introducing Operator
