Recent social media trends would leave one believing that OpenAI’s ChatGPT, which has attracted investment from Microsoft, can do almost anything: it responds to queries with a human touch and can even write research papers. It has been touted as a replacement for Google’s search engine, and even for authors, because of its ability to address specific questions with the required context. But there are also concerns about the implications of AI for society at large, as ChatGPT can create a scenario where it is hard to separate fabricated content from genuine text.
What makes it mildly scary?
So far, red flags have been raised about hackers using the content-generating platform to write malicious code that can be used for cybercrime. Students in New York have also been banned from using ChatGPT, since authorities are afraid they will use the AI to complete assignments.
Consequences for science
The natural language processing AI has also raised concerns about the future of scientific research, after it was revealed that researchers could not reliably tell apart content written by humans and abstracts generated by ChatGPT. Scientists were able to detect only 68 per cent of the content generated by the language model, which draws on human-generated data to come up with its own responses. The researchers mistook the remaining 32 per cent of the AI abstracts for those created by people, while in 14 per cent of cases they thought human-written abstracts were generated by ChatGPT.
Free from plagiarism, but what about accuracy?
These results, from a survey conducted by the University of Chicago, add to the speculation over the accuracy and integrity of research conducted using ChatGPT. At the same time, running the abstracts through an output detector found no plagiarism in the AI-generated content. But if AI is able to fool scientists, one slip-up can have far-reaching consequences for scientific research.
With its ability to chat in multiple languages, including Hindi, and to talk about almost any topic like a person, ChatGPT is a fascinating tool that was bound to go viral. But it still needs human supervision and improvements in accuracy before it can filter out fake information and contribute to research beyond the brainstorming stage.