
Showing posts from June, 2017

Will AI evolve to be as bad as humans?

Will AI be a threat to humans? Will it take over our jobs? These questions are popping up in newspapers, blogs and journals everywhere. They are really important questions, and I want to discuss them, but I would like to take a slightly different route. In this article, I will lay out the major reasons why I think there is a possibility of AI faltering in its course. But first, there is the difficult job of defining AI systems. Many authors who predict an AI victory over humans opt to skip defining what they mean by AI, and that is understandable. I have been working on AI systems for more than three years now, understanding, building and putting them to use. But if you ask me to define an AI system, I am confounded. When it comes to calling something AI, there are some peculiar phenomena that I have observed: No longer AI: a system loses its 'AI-ness' as it becomes more and more familiar. The chess program that beat the then world champi…


For an introduction to Reinforcement Learning, its basic terminology, concepts and types, read Reinforcement Learning - Part 1 by following this link:

Q-Learning

Q-learning is an algorithm in reinforcement learning. It has its roots in model-based reinforcement learning, and its Q-function can be seen as a different kind of value function. The values are called Q-values and are denoted Q(s,a): the value of being in state s and taking action a. Mathematically,

                      Q(s,a) = R(s) + γ Σ_s' P(s,a,s') max_a' Q(s',a')

Q(s,a) can be read as the value of arriving in state s, taking action a, and proceeding optimally thereafter. The optimal value function and policy then follow directly from the Q-values:

                      V(s) = max_a Q(s,a)
                      π(s) = argmax_a Q(s,a)
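The update above can be sketched in code. Here is a minimal tabular Q-learning sketch in Python: instead of using the model P(s,a,s') explicitly, it replaces the expectation over next states with sampled transitions and nudges Q(s,a) toward r + γ max_a' Q(s',a'). The five-state chain environment, the function names and the hyperparameters are illustrative assumptions, not from the post.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: the expectation over P(s,a,s') in the Bellman
    equation is replaced by transitions sampled from step()."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def greedy(s):
        # argmax_a Q(s,a), breaking ties at random
        best = max(Q[s])
        return rng.choice([a for a, q in enumerate(Q[s]) if q == best])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration
            a = rng.randrange(n_actions) if rng.random() < epsilon else greedy(s)
            s2, r, done = step(s, a, rng)
            # move Q(s,a) toward r + gamma * max_a' Q(s',a')
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical chain MDP: states 0..4; action 1 moves right, action 0 left.
# Reaching state 4 yields reward 1 and ends the episode.
def chain_step(s, a, rng):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(n_states=5, n_actions=2, step=chain_step)
# pi(s) = argmax_a Q(s,a): the learned greedy policy for states 0..3
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
print(policy)
```

After enough episodes the greedy policy should prefer action 1 (move right) in every state, since the reward propagates back from state 4 via the max over next-state Q-values.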