
Using AI tools empowers and burdens users in online Q&A communities

New research from the University of Florida finds that generative AI tools like ChatGPT make users in online help forums more helpful, but that too much AI can backfire.

Whether you’ve searched for cooking tips on Reddit, troubleshot tech problems on community forums or asked questions on platforms like Quora, you’ve benefited from online help communities. These digital spaces rely on people around the world to contribute their knowledge for free, and they have become an essential resource for solving problems and learning new skills.

New research reveals that generative artificial intelligence tools like ChatGPT are having a double-edged effect on users in these communities, simultaneously making them more helpful while potentially overwhelming them to the point that their contributions decline.

Liangfei Qiu, PricewaterhouseCoopers ISOM Professor.

“On the positive side, AI helps users learn to write more organized and readable answers, leading to a noticeable increase in the number of responses,” explained Liangfei Qiu, study coauthor and PricewaterhouseCoopers Professor at the University of Florida Warrington College of Business. “However, when users rely too heavily on AI, the mental effort required to process and refine AI outputs can actually reduce participation. In other words, AI both empowers and burdens contributors: it enables more engagement and better readability, but too much reliance can slow people down.”

The study examined Stack Overflow, one of the world’s largest question-and-answer coding platforms for computer programmers, to investigate the impact of generative AI on both the quality and quantity of user contributions. Qiu and his coauthor Guohou Shan of Northeastern University’s D’Amore-McKim School of Business measured the impact of AI on users’ number of answers generated per day, answer length and readability. 

Specifically, they found that users who used AI tools to generate their responses contributed almost 17% more answers per day than those who didn’t use AI. Answers generated with AI were also about 23% shorter and easier to read.

However, when people relied too heavily on AI tools, their participation decreased. Qiu and Shan noted that the additional cognitive burden associated with heavier AI usage also diminished the quality of users’ answers.

For online help communities grappling with AI policies, this research offers valuable insight into how those policies might be updated. While some communities, including Stack Overflow, have banned AI tools, the findings suggest a more nuanced approach could serve them better.

Instead of banning AI entirely, the researchers suggest striking a balance: allowing AI use while promoting responsible, moderated adoption. This approach, they argue, would let users benefit from efficiency and learning opportunities without compromising content quality or overloading contributors.

“For platform leaders, the takeaway is clear: AI can boost participation if thoughtfully integrated, but its cognitive demands must be managed to sustain long-term user contributions,” Qiu said. 

The study, “Examining the Impact of Generative AI on Users’ Voluntary Knowledge Contribution: Evidence from a Natural Experiment on Stack Overflow,” is forthcoming in Information Systems Research.