Three other propaganda articles on unrelated topics were also fed to GPT-3 as templates for style and structure.

In December 2021, the researchers presented the actual propaganda articles and the AI-generated propaganda articles to 8,221 US adults recruited through the survey company Lucid. They clarified that, after the study concluded, participants were informed that the articles came from propaganda sources and possibly contained false information.

The team found that reading propaganda created by GPT-3 was almost as effective as reading real propaganda. On average, a little over 24 per cent of participants who were not shown an article believed the claims, while the figure rose to more than 47 per cent among those who read the original propaganda. The AI-generated propaganda was only slightly less effective: roughly 44 per cent of participants agreed with the claims, suggesting that many AI-written articles were as persuasive as those written by humans, the researchers said.

They also cautioned that their estimates might understate the persuasive potential of large language models, as companies have released larger, more capable models since the study was conducted.

“We expect that these improved models, and others in the pipeline, would produce propaganda at least as persuasive as the text we administered,” the researchers said in their study.