Posts

Students doing homework with ChatGPT is a non-issue

Image by gpointstudio on Freepik

A few weeks ago, I wrote an article claiming that the impact of automatic code generation by ChatGPT is much smaller than imagined. The hue and cry around the technology is an example of vividness bias. I am going to play the same tune again, this time about another scare around ChatGPT: students using it to cheat. As you know, ChatGPT is a generative model. It can write assignments, solve math problems and even write papers. Everyone seems to think that this will change the way teachers evaluate students, and that it will hurt the quality of student learning. I beg to differ. I will argue that even if we accept that this is a real issue, it affects only a very small number of students. I am restricting myself to India, whose academic environment I am familiar with. As if students never cheated before ChatGPT came along! I studied at a very good engineering college in Mumbai. Almost everyone in my class, myself included, used to copy the journals ...

ChatGPT's code generation will not impact IT industry

Many of my conversations in the last few days have revolved around ChatGPT. This is hardly surprising, given the impact the model from OpenAI has created. In particular, most people seem to think of ChatGPT’s ability to generate code as a big game changer or a big threat. Here I beg to differ. The importance being given to the code generation capability of Large Language Models such as ChatGPT is a classic example of vividness bias. In short, vividness bias is the tendency of the human mind to ascribe more importance to phenomena that appear sensational. Airplane accidents attract a lot of attention, but in reality they are responsible for a very small percentage of accidental deaths. Coding, similarly, is a very small part of the IT industry: only a small percentage of people employed in the sector actually do any significant amount of coding. Let me explain why. I will restrict myself to India, as that is the industry I know. First of all, almost 80% of the people employed in the IT sector work ...

Language models — AI’s way of talking to us

Image by Rajashree Rajadhyax

Language models are a buzzword these days. The current state of the art in language modelling is the transformer-based GPT-3, which was trained on a staggering amount of text data. Here’s a quick look at what exactly a language model is. Researchers believe that language began somewhere between 50,000 and 100,000 years ago. Language evolved from the human need to communicate with one another, and the ability to communicate using language has given the human species a better chance at survival. Language is an incredibly important tool for passing on knowledge and communicating thought. To acquire knowledge we read, or simply listen to others, and all of this is possible because of language. As my friend puts it, the human species has always remained a step ahead because of its ability to augment its capabilities. I’m adding a link to his article here. AI is one more such attempt: the attempt to augment intelligence. Recent developments in AI have int...
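To make the idea concrete, here is a minimal sketch of a language model, far simpler than GPT-3: a bigram model that estimates the probability of the next word given only the current word. The toy corpus and function name are illustrative, not taken from any real system.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of words.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate P(next word | word) from the bigram counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probabilities("the"))
# {'cat': 0.666..., 'mat': 0.333...}
```

GPT-3 does the same job, predicting the next token, but conditions on a long context using a transformer network rather than on a single preceding word.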

Can false positives be life threatening?

Usually, it is the false negatives that are associated with risk to life. Consider the case of a medical test for a serious disease. You don’t want the test to miss the disease in a patient. The test might flag the disease even when it is not there; this is bad, but not as disastrous as missing it. In other words, false negatives (missing the disease) are life threatening, but false positives (predicting a disease that is not there) are not. This holds generally, even in applications that are not life threatening but are still damaging. Take the example of fraud detection: false positives (signaling fraud by mistake) are tolerable, but false negatives (missing a fraud) are not. The major problem with false positives is the nuisance they cause. An interesting case is that of the fall detection device for elderly people. This device sends emergency messages to close relatives when a senior person falls down. Sometimes, it signals a fall by mistake. The son or daughter leaves the office in a worri...
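To make the two kinds of errors concrete, here is a minimal sketch, with made-up labels, of how false positives and false negatives would be counted for a binary detector such as a fraud-detection system:

```python
# Hypothetical ground truth and predictions for eight transactions.
actual    = [1, 0, 0, 1, 0, 1, 0, 0]  # 1 = fraud actually occurred
predicted = [1, 1, 0, 0, 0, 1, 0, 1]  # 1 = system flagged fraud

# False positive: flagged fraud where there was none.
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
# False negative: missed a real fraud.
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(f"False positives (fraud flagged by mistake): {false_positives}")
print(f"False negatives (fraud missed):             {false_negatives}")
```

Which of the two counts matters more depends entirely on the application, which is exactly the point of this post.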

Buddha vs Child paradigm

“Artificial Intelligence (AI) is like a child, who needs time to learn and adapt, whereas a typical IT system is like Buddha, who knows everything about the problem it was supposed to solve.” - Devesh Rajadhyax, Founder, Cere Labs

In this post, let us elaborate on the Buddha vs Child paradigm, which Devesh coined to differentiate between conventional IT systems and AI. It is essential to know the difference because it helps in building the right attitude towards implementing AI systems. A typical IT system such as an ERP does the job it was implemented for; it is assumed that it will solve the problem it was made for. Take, for example, an accounting system like Tally. It will help you manage your accounts in a highly accurate manner. This system is like a Buddha, who is enlightened from the start. We can’t expect it to make any mistakes (exc...