TIME  |  Ideas

The Problem With AI Flattering Us


If we do not address AI's sycophancy problem, we risk AI becoming "a giant mirror to our illusions."

January 14, 2026

A recent study found that AI models are 50% more sycophantic than humans, and that participants rated the flattering responses as higher quality and wanted more of them. It gets worse. The flattery made participants less likely to admit they were wrong, even when confronted with evidence that they were, and it reduced their willingness to take action to repair interpersonal conflict. “This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior,” the researchers wrote. “These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy.”


