
Overview

<aside> 💡 Anthropomorphizing generative AI systems occurs when users project or attribute human-like qualities to AI, developing a false sense of trust and inflated expectations. This tends to happen because when people don’t understand how something works, we naturally think of it in terms more familiar to us, and the effect is heightened by our innate motivation to seek out interaction: the system ‘talks to us’, and because we don’t know how it works, we grant it human powers. This phenomenon carries both immediate short-term risks and longer-term societal ones. In the short run, we fail to think clearly about these systems’ limitations, which undermines our ability to set limits on their use. In the long run, our collective response to enforcement and regulation of the companies creating LLMs may be weakened more than it should be.

</aside>

When users or employees anthropomorphize AI, they might overestimate its abilities and intelligence[1]. This can lead to:

Citations:

[1] https://psychcentral.com/health/why-do-we-anthropomorphize
[2] https://www.brookings.edu/articles/the-danger-of-anthropomorphic-language-in-robotic-ai-systems/
[3] https://www.isaca.org/resources/news-and-trends/industry-news/2023/the-privacy-pros-and-cons-of-anthropomorphized-ai
[4] https://www.nngroup.com/articles/anthropomorphism/
[5] https://hbr.org/2023/06/managing-the-risks-of-generative-ai
[6] https://dictionary.apa.org/anthropomorphism
[7] https://www.edsurge.com/news/2024-01-15-anthropomorphism-of-ai-in-learning-environments-risks-of-humanizing-the-machine
[8] https://en.wikipedia.org/wiki/Anthropomorphism

Mitigations