
Artificial Intelligence vs Social Intelligence

As Craig Silverstein was talking yesterday, I thought of Marvin Minsky, the father and grand promoter of Artificial Intelligence at MIT. Not because Google uses very smart AI, but because they don't do much of it at all.

Folks on the East Coast (overgeneralizing for effect) grabbed the Minsky challenge to create a machine intelligence that mimics and surpasses human intelligence. After all, machines can perform certain functions much more quickly and accurately than humans. Minsky-ites were also influenced by MIT linguists like Chomsky, who pushed the concept of deep structures that could be handled, if properly understood, by machines.

Salton's great work and insights at Cornell, while not tied to AI, seemed to promise that machine understanding of keywords might make for successful searches. But the real magic in SMART (Salton's Magic Automatic Retriever of Text) was and is relevance feedback, in which human evaluation of results, whether explicit or implicit, increased both the precision and recall of a search.

The more intelligently the system could learn from the results marked as relevant, the better the subsequent result sets. Aggregating results across groups of users gave even better "good" results and seemed magical (at the time) compared to simple keyword searches.
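
To make the mechanism concrete: relevance feedback in the SMART tradition is usually described with the Rocchio formula, in which the query vector is pulled toward documents the user judged relevant and pushed away from documents judged not relevant. Here is a minimal sketch of that idea, assuming a toy term space; the weights, terms, and documents are invented for illustration, and this is not Salton's actual code.

```python
import numpy as np

def rocchio_feedback(query_vec, relevant_docs, nonrelevant_docs,
                     alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: shift the query vector toward the centroid
    of documents judged relevant and away from the centroid of documents
    judged not relevant. The alpha/beta/gamma weights here are illustrative."""
    q = alpha * query_vec
    if len(relevant_docs):
        q = q + beta * np.mean(relevant_docs, axis=0)
    if len(nonrelevant_docs):
        q = q - gamma * np.mean(nonrelevant_docs, axis=0)
    return np.clip(q, 0, None)  # negative term weights are usually dropped

# Toy term space: ["salton", "smart", "retrieval", "minsky", "ai"]
query = np.array([0.0, 1.0, 1.0, 0.0, 0.0])
relevant = np.array([[1.0, 1.0, 1.0, 0.0, 0.0]])     # marked relevant by the user
nonrelevant = np.array([[0.0, 0.0, 0.0, 1.0, 1.0]])  # marked not relevant
print(rocchio_feedback(query, relevant, nonrelevant))
```

Aggregating judgments from a group of users just means those centroids are computed over everyone's marked documents rather than one person's, which is one way to read the "even better" group results described above.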

Larry Page's insight in constructing PageRank was to see that goodness could also be indicated by the number of links pointing to a document, again capturing the human intelligence embedded in the act of linking. Not just human intelligence but social intelligence: what we call knowledge capital or, in a larger sense, social capital.
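
A toy version of that idea: treat each link as a vote and let the votes circulate until the scores settle. The sketch below is the standard power-iteration formulation with the conventional 0.85 damping factor; the four-page link graph is made up for illustration and is not meant to reflect how Google computes it at scale.

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration.
    links[i] is the list of pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        new_rank = np.full(n, (1.0 - damping) / n)
        for page, outlinks in enumerate(links):
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                new_rank += damping * rank[page] / n  # dangling page: spread its rank evenly
        rank = new_rank
    return rank

# Hypothetical four-page web: pages 0, 1, and 3 all link to page 2,
# so page 2 collects the most "votes" and ends up ranked highest.
links = [[2], [2], [3], [2]]
print(pagerank(links))
```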

Successful search came to rely less on modeling the brain, less on artificial intelligence, and more and more on social intelligence and on what we now see in Web 2.0 as social networking. Not that computing and engineering didn't matter, but the machine component is most effective as a magnifier, enhancer, and connector of the social intelligence supplied by humans.

Will AI ever be successful? Silverstein says “My guess is (it will be) about 300 years until computers are as good as, say, your local reference library in doing search.”

NOTE: I have not yet read John Battelle's The Search.

1 Comment

  1. Ian Parker

    I am a little bit more optimistic. The basis of searching is to get the meaning of words correct. In other words, we have to translate into a concept language. Looking at Google we can see that we are a long way from this.

    "El barco atravesó una cerradura" — the "lock" the ship passed through rendered as cerradura, a door lock, rather than esclusa, a canal lock.
    And "la estación de resorte" — "spring station" rendered with resorte, a coil spring, rather than primavera, the season. I did not know stations had elasticity.

    There are, however, a number of techniques which show considerable promise. If we employ LSA (Latent Semantic Analysis), which is basically Principal Component Analysis applied to text, we obtain word probabilities. A necessary (and sufficient) condition for good translation is simply that "esclusa" comes above "cerradura" in terms of probability and that "primavera" comes top of the possibilities for "spring". (A sketch of the LSA step appears after this comment.)

    This problem is a key one in AI. Speech recognition is at the point where individual phonemes are analyzed as well as, if not better than, by human listeners. Why, then, is it still so bad? Let us look at the translation paradigm again. Suppose we have to rank "caballero" and "noche": we may use LSA to do it, since the source words, "knight" and "night", sound the same in English.

    This problem, as we can see, is a central one. The same solution applies to translation, speech, and retrieval. The difficulty is that LSA demands a matrix multiplication to find every word, and spiders will need to operate on multicore servers.
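
For the curious, here is a minimal sketch of the SVD step underlying LSA, run on an invented term-document count matrix. It produces concept-space similarities rather than literal word probabilities, but the ranking idea in the comment (preferring esclusa over cerradura when the surrounding words are about ships and canals) works the same way: the neighbours of an ambiguous term in the reduced space indicate which sense is in play.

```python
import numpy as np

# Invented term-document count matrix; rows are terms, columns are documents.
# Documents 0-1 are about canals and shipping, documents 2-3 about door hardware.
terms = ["ship", "canal", "lock", "door", "key"]
counts = np.array([
    [2, 1, 0, 0],  # ship
    [1, 2, 0, 0],  # canal
    [1, 1, 2, 1],  # lock (ambiguous: occurs with both senses)
    [0, 0, 2, 1],  # door
    [0, 0, 1, 2],  # key
], dtype=float)

# LSA: a truncated SVD keeps only the k strongest "concept" dimensions.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
term_vectors = U[:, :k] * S[:k]  # each term as a point in concept space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "lock" co-occurs with both clusters, so its concept-space vector shows
# similarity to both the canal terms and the door-hardware terms.
for word in ["ship", "door"]:
    i, j = terms.index("lock"), terms.index(word)
    print(f"lock vs {word}: {cosine(term_vectors[i], term_vectors[j]):.2f}")
```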

