Why I’m finally stepping away from the Hollywood trope of making artificial intelligence the enemy
I’ve seen several movies lately featuring artificial intelligence (AI) as the enemy. None of them ended well. And that got me thinking about other examples that didn’t end well either. For starters, most of humankind was imprisoned and used as a power source in The Matrix series. The need to protect their assigned humans resulted in a very high body count in both M3GAN and Terminator 2. Robots went well and truly rogue in The Day the Earth Stood Still, RoboCop, and I, Robot. Control systems – HAL 9000 in 2001: A Space Odyssey, Skynet in the Terminator series, even AUTO in WALL-E – were all hell-bent on destruction. But the humanoid AIs are perhaps the most terrifying of all, if Blade Runner, Alien, and Ex Machina are anything to go by.
Thankfully, the real stories of AI advances – in healthcare in particular – are far more positive than the fictional accounts (otherwise, I would be fleeing to my fictional underground bunker). Genomics is one area in which AI is truly helping. After all, diagnosing rare genetic conditions is a painfully slow process for the brains of us mere mortals, with huge disease data sets to interpret and reams of literature to take into account. Similarly, in proteomics, AI-based systems can achieve far higher peptide identification rates than previous algorithms.
ChatGPT is perhaps the most famous AI innovation of the moment – and so it receives more than its fair share of positive and negative press. For every headline on students using the tool to write their essays or the system breaking copyright laws, there’s one highlighting how it helped researchers perform their literature searches or improved user experience in a customer service environment.
In the fascinating article “Path Chat,” pathologist Matthew J Cecchini asks his brilliantly named ChatGPT co-pilot, Pathrick, to explain the practical applications of AI in pathology. Now, Pathrick may well be a touch biased, but it does suggest several tempting ways to use language models for admin-type tasks – for example, writing grant applications and curating student education materials. It is also insistent that it does not want to take our jobs, and it notes that AI-generated content should always be reviewed for accuracy by a human expert. (That sounds neither power-crazed nor evil to me – unless Pathrick is cleverly lulling us into a false sense of security…)
What are your thoughts on AI in the lab? How are you using ChatGPT or other AI tools? Send your opinions and stories to [email protected] – several real human beings look forward to reading them.
Combining my dual backgrounds in science and communications to bring you compelling content in your speciality.