Researchers from OpenAI, Stanford and Georgetown University are convinced that large language models like ChatGPT can be used to deliberately misinform social media users. According to their report, the risk stems from the fact that generative language models are now accessible to virtually anyone. The researchers worry that these tools could become effective weapons in the hands of propagandists, vice.com reports.

The researchers note that the emergence of AI tools effectively eliminates the need for "troll factories": neural networks can generate vast quantities of text, seed it across social networks, and amplify it through the media. Experts are concerned that authoritarian states such as Russia and China could invest in AI development and put this process into mass production.

The researchers argue that democratic governments should establish controls on access to AI hardware, particularly chips. One example is the export restrictions the United States has imposed on China. Limited access to advanced technology would slow authoritarian regimes' efforts to build up their own neural networks.

The researchers also propose restricting access to AI models and tracking the appearance of AI-generated content on social platforms.

As previously reported, Google is stepping up its fight against the spread of disinformation about Ukrainian refugees in Germany.
