I met a colleague there, Richard, and we got to talking about how the term Web 3.0, groaner that it is, is already becoming more popular.
Web 3.0 is often used to describe the Semantic Web, which has been around since Tim Berners-Lee started the ball rolling on it in 1999. I'd heard of the Semantic Web and was excited by it a few years ago, but since it was a theoretical concept, it didn't seem to get a lot of popular attention. Until the last few months, it seems.
Recently, I read an article in Business 2.0 (still one of the best magazines covering Internet topics) by Michael V. Copeland called "Weaving the [Semantic] Web". Since then, I've found a great, but very long, article on the topic by John Borland called "A Smarter Web" in MIT's Technology Review. Richard also forwarded me a recent conversation with Berners-Lee on the issue in ITWorld Canada.
I don't claim to understand all the complicated science that would enable the Semantic Web, but I do get the need for it and the possible advantages it promises. Copeland describes the current limitations of webpages and existing search technology:
Services like Google do a great job of sifting through all those webpages, but it's up to people to recognize the things they want when they see them in the results... The Web just isn't very smart yet; one webpage is the same as any other. It might have a higher Google ranking, but there's no distinction based on meaning. The semantic Web in the Berners-Lee vision acts more like a series of connected databases, where all information resides in a structured form. Within that structure is a layer of description that adds meaning that the computer can understand.
Borland expands on how the semantic web would work and the benefits:
[it] would provide a way to classify individual bits of online data such as pictures, text, or database entries but would define relationships between classification categories as well. Dictionaries and thesauruses called "ontologies" would translate between different ways of describing the same types of data, such as "post code" and "zip code." All this would help computers start to interpret Web content more efficiently. In this vision, the Web would take on aspects of a database, or a web of databases. Databases are good at providing simple answers to queries because their software understands the context of each entry. "One Main Street" is understood as an address, not just random text. Defining the context of online data just as clearly--labeling a cat as an animal, and a veterinarian as an animal doctor, for example--could result in a Web that computers could browse and understand much as humans do...
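To make Borland's description concrete, here is a toy sketch in Python (my own illustration, not real RDF or OWL tooling) of how facts stored as subject-predicate-object triples, plus a small "ontology" of relationships, let software draw the kind of conclusions he describes, like knowing a veterinarian can treat a cat because a cat is an animal:

```python
# Facts as subject-predicate-object triples, the basic unit of semantic data.
# The specific entities (Felix, Dr. Lee) are made up for illustration.
triples = {
    ("Felix", "is_a", "cat"),
    ("cat", "is_a", "animal"),
    ("Dr. Lee", "is_a", "veterinarian"),
    ("veterinarian", "treats", "animal"),
}

def is_a(thing, category):
    """Follow is_a links transitively: Felix -> cat -> animal."""
    direct = {o for (s, p, o) in triples if s == thing and p == "is_a"}
    return category in direct or any(is_a(d, category) for d in direct)

def can_treat(doctor, patient):
    """True if some 'treats' rule applies: vets treat animals,
    Dr. Lee is a vet, and Felix is (transitively) an animal."""
    for (s, p, o) in triples:
        if p == "treats" and is_a(doctor, s) and is_a(patient, o):
            return True
    return False

print(is_a("Felix", "animal"))        # True: a cat is an animal
print(can_treat("Dr. Lee", "Felix"))  # True: vets treat animals
```

A keyword search engine comparing raw text could never make that second inference; the point of the semantic layer is that the relationships themselves are machine-readable.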
With computers able to read webpages more effectively, they'll be able to automate tasks for us, such as finding the cheapest price on something or organizing an evening out with friends.
But even if it only means that a simple search query no longer returns a gazillion results and makes me scour page after page to find good information, the Semantic Web is a winner to me.