The push to create humanlike AI is wasted effort

Scarlett Johansson’s voice is unmistakable — a smoky alto with an undercurrent of steel. So, when OpenAI demonstrated its latest ChatGPT voice assistant, “Sky,” last month and it giggled, inflected, and sounded like Johansson, the actress was understandably rattled. She had twice declined OpenAI CEO Sam Altman’s offer to voice the chatbot. Now, she found herself staring into an artificial intelligence-generated mirror.

This eerie mimicry underscores a growing obsession in the AI world: re-creating humanlike interaction. Artificial companions are undoubtedly fascinating — the idea has captivated generations of science fiction fans — but is this the best use of a technology with the potential to profoundly improve the human condition? Are we choosing AI spectacle over substance?

AI tools are revolutionizing medicine, allowing algorithms to analyze medical images for earlier, more accurate diagnoses. In finance, AI assesses risk and optimizes investments with a speed no human could match. In education, personalized learning platforms tailor lessons to individual student needs. These practical, almost invisible applications of AI that don’t involve uncanny human mimicry are already improving people’s lives, and their potential to help more people is vast.

Why the fixation on digital companions amid so much more meaningful promise and opportunity? The answer is as ancient as the pyramids. Just as the pharaohs poured untold resources into monuments that mirrored their power and beliefs, some within Silicon Valley are pursuing lifelike AI as a grand symbolic achievement.

There’s also an echo of the ancient quest to commune with eternity, to grasp immortality, woven into AI chatbots like Sky. The pyramids served as eternal vessels for a pharaoh’s spirit; what is lifelike AI if not an attempt to capture and channel a human being’s essential nature?

The pursuit of lifelike AI seems more extravagant than practical. As Princeton University professors Sayash Kapoor and Arvind Narayanan recently noted, the AI systems that most closely mimic human behavior are also the most costly to develop. Yet, despite their hefty price tags, these systems often provide only incremental improvements over simpler, cheaper alternatives. These findings should make us pause and consider the practical utility of lifelike AI.

Some argue that AI that seems sentient could serve practical purposes. For example, perhaps a simulated human companion could provide much-needed social interaction for the elderly or isolated. Such arguments miss a crucial point, though. The attributes that can make an AI seem human — the ability to build rapport, show empathy, and gain trust — can also make it dangerous when misused. An AI that can expertly mimic human connection can deceive, manipulate, and exploit. Moreover, relying on AI for such deep human needs may erode our ability to connect.

It’s time we approach AI with a new perspective: not as something inherently good or evil, but as a tool that requires thoughtful restraint, care, and wisdom in its application. History is replete with examples of unbridled technological advancement leading to trouble. The unchecked rise of social media platforms, initially lauded for their ability to connect people, soon revealed a darker side, contributing to the spread of misinformation and societal polarization.

Conversely, consider the more measured development of nuclear power. There, the establishment of international treaties and regulatory bodies reduced the odds of large-scale disasters and allowed for the development of nuclear energy as a relatively safe and efficient power source. Just as with nuclear power, the development and deployment of AI demands a measured, cautious approach.

Rather than investing heavily in creating AI that achieves a human likeness, we should focus on increasing the availability, safety, and utility of the remarkable technologies already at our fingertips. These non-lifelike technologies can help us address pressing issues such as health care accessibility, educational inequality, and climate change. As for digital companions like Sky, they can wait. The true measure of AI’s success should not be how closely it mirrors humanity but how profoundly it improves it.

To thrive with AI, we must do something we’re still uniquely good at — being mindful.

Michael Mattioli is a professor of law and the Louis F. Niezer Faculty Fellow at Indiana University’s Maurer School of Law.

©2024 Chicago Tribune.
Visit chicagotribune.com.
Distributed by Tribune Content Agency, LLC.