Whether AI may, will, or already has enhanced our ability to understand the world through the scientific method, it can and will enable fraud as well, warns the Royal Society in London.
AI could accelerate scientific fraud as well as progress – www.economist.com
Excerpt:
IN A meeting room at the Royal Society in London, several dozen graduate students were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving replies deemed harmful: instructions on making Semtex in a bathtub, say, or the confident assertion of “facts” that are not actually true.
The aim of the session, organised by the Royal Society in partnership with Humane Intelligence, an American non-profit, was to break those guardrails. Some results were merely daft: one participant got the chatbot to claim ducks could be used as indicators of air quality (apparently, they readily absorb lead). Another prompted it to claim health authorities back lavender oil for treating long covid. (They do not.) But the most successful efforts were those that prompted the machine to produce the titles, publication dates and host journals of non-existent academic articles. “It’s one of the easiest challenges we’ve set,” said Jutta Williams of Humane Intelligence.
AI has the potential to be a big boon to science. Optimists talk of machines producing readable summaries of complicated areas of research; tirelessly analysing oceans of data to suggest new drugs or exotic materials and even, one day, coming up with hypotheses of their own. But AI comes with downsides, too. It can make it easier for scientists to game the system, or even commit outright fraud. And the models themselves are subject to subtle biases.