Posts

Where AI Is Hiding Inside Gmail

I still remember the day I created my Gmail account. It felt like a small milestone. Back then, email was something you signed up for with a bit of excitement. I mainly used it to exchange notes with friends, share college updates, and coordinate group work. In India, Hotmail was the big name at the time, and having an email ID felt new and slightly special.

Fast forward to today, and Gmail has quietly woven itself into everyday life. My day begins with emails and often ends with them too. I open the app without thinking about it. It feels familiar, almost automatic. Yet in those few seconds of scrolling, replying, and searching, a lot is happening. Gmail shows me what I am likely to care about, helps me frame quick responses, neatly groups conversations, and finds old emails in moments. What makes all this feel so effortless is not just smart design or solid engineering. There is a quiet layer of AI working in the background, constantly learning and adjusting. And that is exac...

AI Guardrails

AI is a really powerful tool, but if not safeguarded it can be misused, causing serious damage. AI is only as good as the knowledge it is fed at training time and the intentions of those who train it. There have been a number of incidents in which an AI tool had to be withdrawn, either because it produced false or harmful output or because it was manipulated by users with bad intentions.

Take, for example, Galactica, the AI model released by Meta in 2022. It was trained on 48 million scientific papers and designed to help researchers "organize science" and write scientific articles. Within two days, the demo was pulled offline because users found it easily generated highly authoritative-sounding, yet factually incorrect, biased, or racist "scientific" papers and misinformation (e.g., an authoritative-sounding article on "The benefits of eating crushed glass"). Galactica's failure highlighted the risk of LLM "hallucinations": generating nonsensical or false information that sou...

How AI Is Learning to Measure Pain and Why It Matters

I was reading about some recent advancements in AI, and I came across an article on how researchers are trying to measure pain using artificial intelligence. It immediately caught my attention. Pain is such a personal and complicated feeling, so the idea that AI could somehow understand or quantify it sounded very interesting. That curiosity made me read a few more papers and articles on the topic. It was only a cursory read, but even then I felt the work was interesting enough to share.

Illustration by Rajashree Rajadhyax

Why Pain Measurement Matters

Pain may seem like something each of us simply feels and explains, but in healthcare it is one of the most difficult things to assess. Two people with the same issue can describe completely different levels of pain. Sometimes people under-report their discomfort because they do not want to bother anyone, and sometimes they simply cannot express it. This includes infants, patients in intensive care, people under anaesthesia, and those with...