Posts

Passing the mantle of 'Flagship AI use case'

The mantle of the ‘flagship AI use case’ seems to have passed to the two-month-old ChatGPT. For a long time, this title was held by the autonomous car. ChatGPT is a text generator: it takes one sequence of text (what you type) and generates another sequence of text (its response). Because of its training, this appears to be a conversation, as if you have asked a question and ChatGPT has replied. Within five days of its launch, ChatGPT acquired one million users, something that took Facebook ten months. Everyone is trying ChatGPT. Industry leaders, eminent professors, writers and celebrities are all asking it questions and coming away impressed. They then write about it, fueling a storm of popularity. Meanwhile the autonomous car, the outgoing flagship of AI, is losing its charm. For around 10 years, self-driving vehicles occupied a slide in every presentation on AI. They used to be one of the use cases cited in most business articles on AI or ...

Students doing homework with ChatGPT is a non-issue

Image by gpointstudio on Freepik. A few weeks ago, I wrote an article claiming that the impact of automatic code generation by ChatGPT is much smaller than imagined; the hue and cry around the technology is an example of vividness bias. I am going to play the same tune again, this time about another scare around ChatGPT: students using it to cheat. As you know, ChatGPT is a generative model. It can write assignments, solve math problems and even write papers. Everyone seems to think this will change the way teachers evaluate students, and also affect the quality of student learning. I beg to differ. I will argue that even if this is a real issue, it affects only a very small number of students. I am restricting myself to India, whose academic environment I am familiar with. As if students never cheated before ChatGPT came along! I studied at a very good engineering college in Mumbai. Almost everyone in my class, myself included, used to copy the journals ...

ChatGPT's code generation will not impact the IT industry

Many of my conversations in the last few days have revolved around ChatGPT. This is hardly surprising, given the impact the model from OpenAI has created. In particular, most people seem to see ChatGPT’s ability to generate code as a big game changer or a big threat. Here I beg to differ. The importance being given to the code generation capability of Large Language Models such as ChatGPT is a classic example of vividness bias. In short, vividness bias is the tendency of the human mind to ascribe more importance to phenomena that appear sensational. Airplane accidents attract a lot of attention, but in reality they are responsible for a very small percentage of accidental deaths. Similarly, coding is a very small part of the IT industry: only a small percentage of people employed in the IT sector actually do any significant amount of coding. Let me explain. I will restrict myself to India, as I know this industry. First of all, almost 80% of the people employed in the IT sector work ...

Language models — AI’s way of talking to us

Image by Rajashree Rajadhyax. Language models are a buzzword these days. The current state of the art in language modelling is the transformer-based GPT-3, which was trained on a staggering amount of text data. Here’s a quick look at what exactly a language model is. Researchers believe that language began somewhere between 50,000 and 100,000 years ago. It evolved from the human need to communicate, and the ability to communicate using language has given the human species a better chance at survival. Language is an incredibly important tool for passing on knowledge and communicating thought. To acquire knowledge we read or simply listen to others, and all of this is possible because of language. As my friend puts it, the human species has always stayed a step ahead because of its ability to augment its capabilities; I have added a link to his article here. AI is one more such attempt: the attempt to augment intelligence. Recent developments in AI have int...

Can false positives be life threatening?

Usually, it is false negatives that are associated with risk to life. Consider a medical test for a serious disease. You don’t want the test to miss the disease in a patient. The test might also flag the disease when it is not there; this is bad, but not as disastrous as missing it. In other words, false negatives (missing the disease) are life threatening, but false positives (predicting a disease that is not there) are not. This is generally true even in applications that are not life threatening but are still damaging. Take the example of fraud detection: false positives (signaling fraud by mistake) are tolerable, but false negatives (missing a fraud) are not. The major problem with false positives is the nuisance they cause. An interesting case is that of the fall detection device for elderly people. This device sends emergency messages to close relatives when a senior person falls down. Sometimes, it signals a fall by mistake. The son or daughter leaves the office in a worri...
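The false positive / false negative distinction above can be made concrete with a minimal sketch. The labels here are hypothetical, not taken from any real test: 1 means "disease present" and 0 means "disease absent".

```python
# Hypothetical test results: 1 = disease present, 0 = disease absent.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]  # ground truth for eight patients
predicted = [1, 1, 0, 1, 0, 0, 1, 0]  # what the test reported

# False positive: test flags a disease that is not there (a = 0, p = 1).
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

# False negative: test misses a real disease (a = 1, p = 0).
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print("false positives:", false_positives)  # the nuisance errors
print("false negatives:", false_negatives)  # the dangerous errors
```

In this toy data the two counts happen to be equal, but the costs are not: the false positive merely annoys, while the false negative lets the disease go untreated.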