
Building Commonsense in AI

It is often argued that what makes humans the ultimate intelligent species is their innate capacity for commonsense reasoning. Humans use commonsense knowledge about the world around them to make appropriate decisions, and this has turned out to be a necessary ingredient for their survival.

AI researchers have long thought about building commonsense knowledge into AI. They argue that if an AI system possesses the necessary commonsense knowledge, it will be a truly intelligent machine.

We will discuss two major commonsense projects that exploit this idea:

  • Cyc tries to build a comprehensive ontology and knowledge base of everyday commonsense knowledge. This knowledge can be used by AI applications to perform human-like reasoning. Started in 1984, Cyc has come a long way. Today, OpenCyc 4.0 includes the entire Cyc ontology, containing 239,000 concepts and 2,093,000 facts, and can be browsed on the OpenCyc website - http://www.cyc.com/platform/opencyc/. OpenCyc is available for download from SourceForge under an OpenCyc License.

  • Never-Ending Language Learning (NELL) is a semantic machine learning system developed at Carnegie Mellon University that has been running 24/7 since the beginning of 2010. NELL continuously browses millions of web pages looking for connections between different concepts, trying to mimic the human learning process. NELL achieves this by performing two tasks each day:
    • Reading task: extract information from web text to further populate a growing knowledge base of structured facts and knowledge.
    • Learning task: learn to read better each day than the day before, as evidenced by its ability to go back to yesterday’s text sources and extract more information more accurately.
You can browse the facts NELL has learned at http://rtw.ml.cmu.edu/rtw/
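To get a feel for NELL's reading task, here is a toy Hearst-style pattern extractor that harvests candidate category facts from text. The pattern, the sample sentence, and the `extract_facts` helper are illustrative inventions for this sketch; NELL's real architecture couples many learners with consistency constraints across its whole knowledge base:

```python
import re

# Toy sketch of a "reading task": mine (instance, isa, category) triples
# from free text using a single Hearst-style lexical pattern.
# Illustrative only -- not NELL's actual extraction machinery.

PATTERNS = [
    # Matches phrases like "cities such as Pittsburgh and Boston"
    re.compile(r"(\w+) such as ((?:\w+(?:, | and )?)+)"),
]

def extract_facts(text):
    facts = set()
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            category = match.group(1)  # kept as-is; real systems normalize
            for instance in re.split(r", | and ", match.group(2)):
                if instance:
                    facts.add((instance, "isa", category))
    return facts

sample = "NELL reads sentences about cities such as Pittsburgh and Boston."
print(sorted(extract_facts(sample)))
```

Running the extractor over many pages and filtering the candidate triples against what is already believed is, very roughly, how a growing knowledge base of structured facts gets populated.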

Commonsense reasoning systems will be an essential element of question answering systems. You can build your own question answering system using either Cyc or NELL.
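As a rough sketch of that idea, here is a toy question answering loop over a hand-written triple store. The facts, the transitive `isa` rule, and the "Is X a Y?" template are assumptions for illustration; Cyc's CycL and NELL's knowledge base are far richer:

```python
import re

# Hand-written stand-in for a commonsense knowledge base of triples.
KB = {
    ("Pittsburgh", "isa", "city"),
    ("city", "isa", "populatedPlace"),
    ("populatedPlace", "isa", "geographicEntity"),
}

def isa(entity, category, kb=KB):
    """True if entity is (transitively) an instance of category."""
    if (entity, "isa", category) in kb:
        return True
    # Follow "isa" edges upward through the hierarchy.
    return any(isa(parent, category, kb)
               for (subj, rel, parent) in kb
               if subj == entity and rel == "isa")

def answer(question):
    # Hypothetical single question template: "Is X a Y?"
    m = re.match(r"Is (\w+) a (\w+)\?", question)
    if not m:
        return "I don't understand."
    return "Yes." if isa(m.group(1), m.group(2)) else "Unknown."

print(answer("Is Pittsburgh a geographicEntity?"))
```

Even this tiny example shows the payoff: the answer "Yes." is not stored anywhere as a single fact, it is inferred by chaining commonsense knowledge.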

After 31 years, Cyc has been used for a commercial purpose for the first time: a company called Lucid is using it to develop their personal assistant. Drawing on Cyc's vast repository of commonsense knowledge can make a personal assistant more accurate at answering questions than assistants devoid of commonsense knowledge.


