AI needs regulation, but what kind, and how much?

[Image: A computer covered in hazard tape.]

2024-08-20

Perhaps the best-known risk is embodied by the killer robots in the “Terminator” films—the idea that AI will turn against its human creators. The tale of the hubristic inventor who loses control of his own creation is centuries old. And in the modern era people are, observes Chris Dixon, a venture capitalist, “trained by Hollywood from childhood to fear artificial intelligence”. A version of this thesis, which focuses on the existential risks (or “x-risks”) to humanity that might someday be posed by AI, was fleshed out by Nick Bostrom, a Swedish philosopher, in a series of books and papers starting in 2002. His arguments have been embraced and extended by others including Elon Musk, boss of Tesla, SpaceX and, regrettably, X.
