What has me writing this post is the thought that perhaps the “Actual web” is gradually evolving into the “Semantic web” all on its own, without needing any of the frameworks proposed in the original paper. It’s not the idealised version originally envisaged, but there are a number of contributing factors that I think support this premise:
- The independent growth of complementary technologies (e.g. Google’s dominant search capabilities, and web-scraping tools such as OpenKapow)
- The ubiquity of social websites and user generated content
- The growing phenomenon of the “mashup”
- The “open standards” being proposed (and adopted) by industry giants, including OpenSocial and APML
All of that leads to a situation where we have open languages for describing things on social sites (OpenSocial/APML) that can be “consumed” by mashup creators and “published” as web services that are accessible, and “understandable”, by any agent capable of reading XML.
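To make that last point concrete, here is a minimal sketch of what “understandable by any agent capable of reading XML” might look like in practice. The profile document below is purely illustrative, a simplified shape loosely inspired by APML/OpenSocial rather than either real schema, but it shows that an agent needs nothing more than an XML parser to pull out a user’s location and strongest interests.

```python
import xml.etree.ElementTree as ET

# Illustrative profile only: real APML/OpenSocial payloads have richer
# schemas. The point is that plain XML parsing is enough to "understand" it.
PROFILE_XML = """
<profile>
  <person>
    <displayName>Jane Doe</displayName>
    <location>Dublin, Ireland</location>
  </person>
  <interests>
    <concept key="weather" value="0.90"/>
    <concept key="stock charts" value="0.75"/>
  </interests>
</profile>
"""

def read_profile(xml_text):
    """Extract the user's location and interests, ranked by weight."""
    root = ET.fromstring(xml_text)
    location = root.findtext("person/location")
    interests = [
        (c.get("key"), float(c.get("value")))
        for c in root.findall("interests/concept")
    ]
    interests.sort(key=lambda kv: kv[1], reverse=True)
    return location, interests

if __name__ == "__main__":
    location, interests = read_profile(PROFILE_XML)
    print(location)         # "Dublin, Ireland"
    print(interests[0][0])  # the user's strongest interest, here "weather"
```

A mashup or agent that can do this much can feed the extracted location and interests straight into whatever web service it discovers next.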
So perhaps it’s the evolution of the agent that is lagging behind?
For example, a well-programmed agent asked about the “weather” by a user might search Google for “weather web service wsdl”, which yields the following URL in the very first search result: http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl. Any web-service-enabled “agent” can consume that WSDL and, in theory, check the weather at a given location (perhaps the location given in the user's social profile, obtained via APML/OpenSocial), which is presumably what a user expects when asking about the weather. A rough sketch of the discovery step is below.
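The sketch below covers only the first step: fetch the WSDL the search turned up and list the operations it advertises, using nothing but the standard library. The URL is the one cited above and may no longer be live, and the operation names it would print depend entirely on the WSDL itself, so treat both as assumptions rather than guaranteed output.

```python
import urllib.request
import xml.etree.ElementTree as ET

# WSDL cited in the post; the service may have moved or been retired since.
WSDL_URL = "http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl"
WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

def list_operations(wsdl_url):
    """Return the operation names a WSDL 1.1 document advertises."""
    with urllib.request.urlopen(wsdl_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return sorted({
        op.get("name")
        for port_type in root.iter(f"{WSDL_NS}portType")
        for op in port_type.iter(f"{WSDL_NS}operation")
    })

if __name__ == "__main__":
    for name in list_operations(WSDL_URL):
        print(name)  # whatever forecast operations the WSDL declares
```

From there, the agent would hand the WSDL to a SOAP client library and invoke one of the discovered operations with the latitude and longitude taken from the user's profile, rather than hard-coding any knowledge of the weather service.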
So is it just that nobody has built a good enough agent yet?
Scott Tattersall is lead developer of stock alerts, stock charts, and market sentiment for Zignals