Can New York’s mayor speak Mandarin? No, but with AI he’s making robocalls in different languages

October 17, 2023 | Anthony Izaguirre, AP

New York City Mayor Eric Adams has been using artificial intelligence to make robocalls that contort his own voice into several languages he doesn’t actually speak, posing new ethical questions about the government’s use of the rapidly evolving technology.

The mayor told reporters about the robocalls on Monday and said they’ve gone out in languages such as Mandarin and Yiddish to promote city hiring events. They haven’t included any disclosure that he only speaks English or that the calls were generated using AI.

“People stop me on the street all the time and say, ‘I didn’t know you speak Mandarin, you know?’” said Adams, a Democrat. “The robocalls that we’re using, we’re using different languages to speak directly to the diversity of New Yorkers.”

The calls come as regulators struggle to get a handle on how best to ethically and legally navigate the use of artificial intelligence, where deepfake videos or audio can make it appear that anyone anywhere is doing anything a person on the other side of a computer screen wants them to do.

In New York, the watchdog group Surveillance Technology Oversight Project slammed Adams’ robocalls as an unethical use of artificial intelligence that is misleading to city residents.

“The mayor is making deep fakes of himself,” said Albert Fox Cahn, executive director of the organization. “This is deeply unethical, especially on the taxpayer’s dime. Using AI to convince New Yorkers that he speaks languages that he doesn’t is outright Orwellian. Yes, we need announcements in all of New Yorkers’ native languages, but the deep fakes are just a creepy vanity project.”

The growing use of artificial intelligence and deepfakes, especially in politics and election misinformation, has prompted calls and moves toward greater regulation from government and major media companies.

Google was the first big tech company to say it would impose new labels on deceptive AI-generated political advertisements that could fake a candidate’s voice or actions for election misinformation. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.

A bipartisan bill in the U.S. Senate would ban “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire. This month, two Democratic members of Congress sent a letter to the heads of Meta and X, formerly known as Twitter, to express concerns about AI-generated political ads on their social media platforms.

In recent weeks, a number of technology companies have shown off AI tools that can synthetically dub a person’s speech in another language in a way that makes it sound as if that person is speaking in that language.

In September, the music streaming service Spotify introduced an AI feature to translate a podcast into multiple languages in the podcaster’s voice. More recently, in October, the startup ElevenLabs introduced a voice translation tool that it said “can convert spoken content to another language in minutes, while preserving the voice of the original speaker.”

Adams defended himself against ethical questions about his use of artificial intelligence, saying his office is trying to reach New Yorkers through the languages they speak.

“I got one thing: I’ve got to run the city, and I have to be able to speak to people in the languages that they understand, and I’m happy to do so,” he said. “And so, to all, all I can say is a ‘ni hao.’”

