On social networking websites, the problem of bots has come to a head. Fake accounts have flooded Facebook and Twitter, and even the United States Congress has turned its attention to content designed chiefly to disinform readers.
None of this has deterred a team of researchers in China. For scientific purposes, they created an artificial intelligence that analyses news and then writes ostensibly authentic comments.
Engineers from Beihang University and Microsoft China have developed an AI bot that reads and comments on online news. They have named the model ‘DeepCom’ (an abbreviation of ‘deep commenter’).
DeepCom consists of two recurrent neural networks:

- a reading network, which digests an article and picks out the points worth commenting on;
- a generation network, which composes a comment based on those points.
The model is based on the same principle that people apply when they consume online news. As a rule, we read the title, try to grasp what is important, and hastily skim through the rest. We then comment on interesting or controversial points, which we either support or contradict, based on our personal opinions. The comment bot does the same, but automatically.
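The paper's exact layers aren't reproduced here, but a minimal PyTorch sketch of that two-network division of labour might look like the following; the class names, dimensions, and the simple salience scoring are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ReadingNetwork(nn.Module):
    """Encodes an article and scores which tokens are worth commenting on."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.salience = nn.Linear(2 * hidden_dim, 1)  # one score per token

    def forward(self, article_ids):
        states, _ = self.encoder(self.embed(article_ids))  # (B, T, 2H)
        weights = torch.softmax(self.salience(states).squeeze(-1), dim=-1)
        # Article summary weighted toward the salient tokens
        summary = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        return summary

class GenerationNetwork(nn.Module):
    """Decodes a comment token by token, conditioned on the article summary."""
    def __init__(self, vocab_size, embed_dim=128, context_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim + context_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, comment_ids, summary):
        emb = self.embed(comment_ids)                           # (B, T, E)
        ctx = summary.unsqueeze(1).expand(-1, emb.size(1), -1)  # summary at every step
        hidden, _ = self.decoder(torch.cat([emb, ctx], dim=-1))
        return self.out(hidden)                                 # logits over the vocabulary
```

The key design point mirrors the reading habit described above: the first network decides what matters in the article, and only that condensed signal steers the second network's wording.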
Researchers trained DeepCom with two sets of data:

- a Chinese corpus of news articles and reader comments gathered from Tencent News;
- an English corpus drawn from Yahoo! News.
Each source includes both readers’ opinions and journalists’ editorial notes, which the bot mixed together before learning from them.
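As a rough sketch of that preparation step, flattening such a corpus into (article, comment) training pairs could look like this; the JSON-lines format and field names are assumptions for illustration, not the datasets’ actual schema:

```python
import json
import random

def load_training_pairs(path):
    """Yield (article, comment) training pairs from a JSON-lines dump.

    Assumes each record holds one article and the reader comments it attracted;
    the field names here are hypothetical.
    """
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            article = record["title"] + "\n" + record["body"]
            for comment in record["comments"]:
                pairs.append((article, comment))
    random.shuffle(pairs)  # mix the material before training, as described above
    return pairs
```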
Here are some examples of comments that the bot left after reading news items about football (soccer):
‘If the ranking is mainly based on the 2018 World Cup, which is why England’s position has sharply risen, then how have Brazil ended up in third place?’

‘England is placed above Spain, Portugal, and Germany. This is interesting, though obvious.’
In recent years, an epidemic of fake accounts and entire botnets has broken out on social networks such as Twitter and Facebook. Twitter is rife with fake accounts whose profile photos are lifted from publicly accessible sources. These accounts follow each other and genuine Twitter users by the tens of thousands, spreading political propaganda. Facebook deleted 1.3 billion fake accounts in the fourth quarter of 2020 as part of its ongoing fight against disinformation.
A representative of the research team at Microsoft China acknowledged the risks: DeepCom pours a little more fuel on social networks’ already blazing fire. Bots like these should serve good causes, yet their use in politics can be unethical, and some organisations will likely seek to exploit them for mass manipulation.
The researchers intend to avert the potential harm of fabricated comments. They presented their model principally to demonstrate both the valuable and the harmful uses of AI.
Besides showing off the capabilities of machine learning, the developers’ main goal is to draw people into discussing news articles. They want more readers to interact with content and share new information in comment sections. The aim is not to shape perception but to spur conversation, encourage creativity, and provide entertainment.
Ideally, bots like DeepCom should clearly identify themselves when commenting (with either a nickname or an avatar), so as to make clear that the comment was not written by a human. But this won’t stop somebody from adapting the code to create a less transparent AI commentator.
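To make the idea concrete, a platform could attach a mandatory flag to machine-generated comments and render it alongside the author’s name. The minimal sketch below is a hypothetical illustration, not any real platform’s API:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    is_bot: bool = False  # set True for machine-generated comments

def display(comment: Comment) -> str:
    """Render a comment with a visible marker for bot authorship."""
    label = " [automated]" if comment.is_bot else ""
    return f"{comment.author}{label}: {comment.text}"

print(display(Comment("DeepCom", "Interesting, though obvious.", is_bot=True)))
# -> DeepCom [automated]: Interesting, though obvious.
```

Of course, such labelling only works if the party running the bot chooses to set the flag, which is exactly the loophole the paragraph above warns about.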