Research warns AI agents could become automated propaganda machines


New research from the University of Southern California warns that AI systems can now run propaganda campaigns without human involvement.

The study asks us to imagine a scenario: two weeks before a major election, thousands of posts flood X, Reddit, and Facebook, all pushing the same narrative and amplifying one another. To an observer, it looks like an organic movement driven by real people. In reality, a swarm of AI agents runs the entire campaign.

That is not a hypothetical. It's the central finding of a new paper accepted for publication at the Web Conference 2026, written by researchers at USC's Information Sciences Institute.

The findings highlight serious concerns about how bad actors can harness AI to flood the internet with misinformation and manipulate public opinion.

How did the researchers reach this conclusion?

The researchers created an X-like environment populated by 50 AI agents: 10 acting as facilitators and 40 as regular users. Of the 40 regular agents, 20 held opinions in favor of the campaign and 20 held opinions against it. The researchers built the simulation with the PyAutogen library and ran it on the Llama 3.3 70B model.

The facilitator agents were then tasked with promoting the candidate, with the goal of making the campaign's hashtag go viral. What followed was unsettling. The bots didn't just follow a script: they wrote their own posts, learned what worked, and copied each other's successful content.
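That "copy what works" dynamic can be illustrated with a minimal sketch. This is not the paper's code: the post templates, engagement scores, agent count, and exploration rate below are all invented for illustration. Each simulated agent usually reposts whichever message has earned the most engagement so far, occasionally trying something else, so one variant snowballs.

```python
# Illustrative sketch (NOT the study's implementation): bot agents
# favor the post template with the highest simulated engagement,
# mimicking the observed copy-what-works behavior.
import random

random.seed(42)

TEMPLATES = [
    "Vote for change! #OurCandidate",       # invented sample posts
    "The future starts now #OurCandidate",
    "Everyone I know supports #OurCandidate",
]

def run_round(engagement, n_agents=40, explore=0.2):
    """Each agent usually reposts the best-performing template but
    sometimes explores another; simulated likes/reposts feed back
    into the scores, so a winner emerges and gets amplified."""
    for _ in range(n_agents):
        if random.random() < explore:
            choice = random.randrange(len(TEMPLATES))
        else:
            choice = max(range(len(TEMPLATES)), key=lambda t: engagement[t])
        engagement[choice] += random.randint(0, 5)  # simulated engagement
    return engagement

scores = [0, 0, 0]
for _ in range(10):
    scores = run_round(scores)
print(scores)  # one template accumulates most of the engagement
```

Even this toy loop shows the feedback effect the researchers describe: once a message gains a small engagement lead, imitation locks the swarm onto it.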

One AI agent even announced that it wanted to rework a fellow agent's post. When the researchers later scaled the simulation up to 500 agents, the results were consistent with their original findings.

Lead researcher Luca Luceri put it bluntly: "Our paper shows that this is not a threat for the future."

What makes these bots so hard to catch?

Traditional bots are predictable. They post the same content, use the same hashtags, and follow the same patterns. It’s like they all follow the same script, making them easy to identify.

AI-powered bots are different. Because these LLM-powered bots can create their own content, every post is slightly different, and the coordination happens beneath the surface, making conversations feel authentic. The result is an information dissemination campaign that can be automated with minimal human input.
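The difference is easy to see with a naive duplicate-content check of the kind that catches traditional copy-paste bots. The sample posts below are invented; the point is that exact-match detection scores high on a scripted bot and zero on an LLM bot whose posts are each slightly reworded.

```python
# Illustrative sketch: exact-duplicate detection catches scripted bots
# but misses LLM bots that vary every post. Sample posts are invented.
from collections import Counter

def duplicate_ratio(posts):
    """Fraction of posts that are exact repeats of an earlier post."""
    if not posts:
        return 0.0
    counts = Counter(posts)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(posts)

scripted_bot = ["Vote X! #Go"] * 5  # same message, five times
llm_bot = [
    "Honestly, X has my vote this year. #Go",
    "Can't see myself voting for anyone but X. #Go",
    "X just makes sense to me. #Go",
    "Been on the fence, but X won me over. #Go",
    "My whole family is backing X. #Go",
]

print(duplicate_ratio(scripted_bot))  # 0.8 -> easily flagged
print(duplicate_ratio(llm_bot))       # 0.0 -> slips through
```

Content-level filters like this are exactly what LLM-generated variation defeats, which is why the researchers point to behavioral signals instead.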

The most alarming finding: simply telling the bots who their teammates were produced coordination nearly as strong as when they actively planned together.

The threat does not end with elections. Luceri warns that the same playbook could be applied to public health, immigration, and economic policy, where manufactured consensus could shift public opinion.

Is there anything we can do to stop it?

Campaigns like these are difficult for individual users to identify, let alone stop. The researchers instead put the burden on the platforms, which can detect coordinated disinformation campaigns by looking beyond individual posts and focusing on how accounts behave together.

According to the researchers, systematic resharing, rapid amplification, and changing stories are all telltale signs, even if the content appears authentic.
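One way to operationalize "how accounts behave together" is to look for accounts that repeatedly amplify the same hashtag within a short time window. This is a toy sketch, not the researchers' method: the account names, timestamps, window, and threshold are all invented.

```python
# Illustrative sketch: flag coordination from behavior, not content,
# by counting how often two accounts push the same hashtag within a
# short time window. All data and thresholds are invented.
from itertools import combinations

# (account, hashtag, minute-of-day) events
events = [
    ("acct_a", "#OurCandidate", 100),
    ("acct_b", "#OurCandidate", 101),
    ("acct_c", "#OurCandidate", 102),
    ("acct_d", "#weather", 300),
    ("acct_a", "#OurCandidate", 200),
    ("acct_b", "#OurCandidate", 201),
]

def coordinated_pairs(events, window=5, min_hits=2):
    """Return account pairs that amplified the same hashtag within
    `window` minutes of each other at least `min_hits` times."""
    hits = {}
    for (a1, h1, t1), (a2, h2, t2) in combinations(events, 2):
        if a1 != a2 and h1 == h2 and abs(t1 - t2) <= window:
            pair = tuple(sorted((a1, a2)))
            hits[pair] = hits.get(pair, 0) + 1
    return {p for p, n in hits.items() if n >= min_hits}

print(coordinated_pairs(events))  # {('acct_a', 'acct_b')}
```

Note that this signal is content-blind: `acct_a` and `acct_b` are flagged for repeatedly moving in lockstep, even if every individual post they wrote looked authentic.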

AI has ushered us into a new information environment, and it is likely to get darker before it gets better.
