AI can make humans less selfish, or at least fix our self-driving cars

As AI systems become embedded in society, a new question arises: can they improve our lives beyond technical and creative work? Can they help humanity make better decisions, make us less selfish, and encourage cooperation?
A recent study by researchers Arend Hintze and Christoph Adami examines exactly this question in their paper, “Encouraging Cooperation in a Public Goods Game Using Artificial Intelligence Agents,” published in npj Complexity.
The tragedy of the commons
The tragedy of the commons is an economic concept in which individuals sharing a limited pool of resources overuse and deplete it, leaving the entire group worse off. TedEd has a great video explaining this idea, which I recommend you watch. To test whether AI can improve human interaction, the researchers used a well-known cooperation experiment often described as a “public goods” game.
In this game, players can either contribute to a shared pool that benefits everyone or keep their tokens for themselves. Although the group does best when everyone contributes, each individual does better by keeping their own tokens while still enjoying a share of the pool. In practice, people tend to free-ride rather than act as part of the group. The researchers then introduced AI agents into the mix.
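The incentive structure is easy to make concrete with a toy payoff function. The multiplier and group size below are illustrative choices, not values from the study:

```python
# Toy public goods game: each of N players either contributes a token (1)
# or keeps it (0). Contributions are multiplied and split evenly, so
# defection pays more for the individual even though universal cooperation
# maximizes the group total.

def payoffs(contributions, multiplier=1.6):
    """Return each player's payoff given a list of 0/1 contributions."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # A player keeps any token not contributed, plus an equal cut of the pot.
    return [(1 - c) + share for c in contributions]

print(payoffs([1, 1, 1, 1]))  # full cooperation: everyone earns the same
print(payoffs([0, 1, 1, 1]))  # one free-rider out-earns the cooperators
print(payoffs([0, 0, 0, 0]))  # universal defection: everyone just keeps 1
```

With four players and a 1.6x multiplier, a lone free-rider earns 2.2 while the remaining cooperators drop to 1.2, which is exactly the pull toward selfishness the experiment measures.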
In the first scenario, AI agents were programmed to always cooperate. That sounds promising, but it didn’t change human behavior: people continued to pursue their own interests. Simply adding “good actors” to the game was not enough. In the second scenario, players could control the AI agents. As you can imagine, this backfired. Players kept their agents from cooperating while also choosing not to cooperate themselves, maximizing personal profit at the group’s expense.
The third scenario showed promising results. Here, the AI agents mimicked the behavior of the players they interacted with: if the human cooperated, the AI cooperated; if the human acted selfishly, the AI mirrored that choice. This created a dynamic feedback system in which human cooperation was rewarded with AI cooperation, and cooperation spread among the human players.
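To sketch why mirroring creates a feedback loop, consider a toy simulation in which each agent simply replays its paired human’s previous move, and each simulated human cooperates when a majority of the whole group cooperated last round. Both rules are illustrative assumptions, not the study’s actual protocol:

```python
# Minimal sketch of the mimicry dynamic. Agents copy their paired human's
# last move; simulated humans are majority-reactive. Illustrative only.

def simulate(initial_humans, rounds=6):
    n = len(initial_humans)
    humans = list(initial_humans)
    agents = [1] * n                    # assumption: agents open cooperatively
    trace = []
    for _ in range(rounds):
        everyone = humans + agents
        trace.append(sum(everyone) / len(everyone))   # cooperation rate
        agents = list(humans)           # mimic: replay last human moves
        majority = sum(everyone) >= len(everyone) / 2
        humans = [1 if majority else 0] * n
    return trace

# A mixed starting group gets pulled toward full cooperation.
print(simulate([1, 0, 1]))
```

Under these assumptions the cooperation rate climbs to 1.0 within a few rounds, because each act of human cooperation is echoed back by an agent, raising the payoff of cooperating in the next round.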
What does all this have to do with self-driving cars?

Although the study was a simplified model with limited direct real-world impact, the researchers say their findings could apply to several scenarios, including self-driving cars. Autonomous vehicles, for example, could be designed to reward cooperative driving rather than merely follow rigid rules. If enough self-driving cars adopted this behavior, it could create a feedback loop that benefits everyone on the road.
AI cannot magically eradicate selfishness. However, it can tilt the incentives enough to make cooperation the smart choice, especially in the case of electric vehicles (EVs). Findings published in the Journal of Transportation Research also propose an integrated system for routing and coordinating idle vehicles so that passengers receive basic services. Another study, published in the journal Robotics, proposes a collision-free tracking and virtual communication system between self-driving vehicles.
Such a system could also schedule the charging of self-driving electric vehicles to avoid long waits and stress on the power grid, as described in this paper. AI systems, including chatbots such as ChatGPT and Gemini, already learn from reward-based feedback to improve their performance, and similar approaches may help solve real-world robotaxi problems as these vehicles gradually enter the mainstream.
