Kashmir Hill reports for the New York Times:
Shneiderman, the computer science professor, calls the desire to make machines that seem human a “zombie idea” that won’t die. He first noticed ChatGPT’s use of first-person pronouns in 2023 when it said, “My apologies, but I won’t be able to help you with that request.” It should “clarify responsibility,” he wrote at the time, and suggested an alternative: “GPT-4 has been designed by OpenAI so that it does not respond to requests like this one.”
Margaret Mitchell, an A.I. researcher who formerly worked at Google, agrees. Mitchell is now the chief ethics scientist at Hugging Face, a platform for machine learning models, data sets and tools. “Artificial intelligence has the most promise of being beneficial when you focus on specific tasks, as opposed to trying to make an everything machine,” she said.
For those who don’t know that data and probability drive chatbots, let alone the technical details underneath, computers might as well be magic machines that think for themselves. Building the models to sound like humans probably doesn’t help.
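To make the “probability” part concrete, here’s a minimal, purely illustrative Python sketch of the idea: at each step, a language model picks its next word by sampling from a probability distribution learned from data. The words and weights below are invented for the example, not taken from any real model.

```python
import random

# Toy stand-in for a model's learned distribution over the next word.
# These words and probabilities are made up for illustration.
next_word_probs = {
    "help": 0.55,
    "assist": 0.30,
    "respond": 0.15,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one word, weighted by probability. Run it twice and you
# may get a different "answer" — there's no thinking, just dice.
print(random.choices(words, weights=weights, k=1)[0])
```

That’s the whole trick, scaled up enormously: weighted dice rolls over words, not a mind deciding what to say.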
My only hope is that people grow more wary of the words they enter into chatbots and more skeptical of the probabilistic output they get back. Every time my kids point out a generative AI voice, picture, or video, it feels like a win.