AI Guardrails
AI is a powerful tool, but without safeguards it can be misused and cause serious damage. An AI system is only as good as the data it is trained on and the intentions of those who train it. A number of AI tools have had to be withdrawn because they proved vulnerable to manipulation or produced harmful output. Take, for example, Galactica, the AI chatbot released by Meta in 2022. It was trained on 48 million scientific papers and designed to help researchers "organize science" and write scientific articles. Within two days, the demo was pulled offline because users found it readily generated authoritative-sounding yet factually incorrect, biased, or racist "scientific" papers and misinformation (e.g., an authoritative-sounding article on "The benefits of eating crushed glass"). Galactica's failure highlighted the risk of LLM "hallucinations": generating nonsensical or false information that sounds plausible.