
Voting rights groups worry AI models are generating inaccurate and misleading responses in Spanish



SAN FRANCISCO — With just days to go before the presidential election, Latino voters are facing a barrage of targeted Spanish-language ads and a new source of political messaging in the artificial intelligence age: chatbots generating unfounded claims in Spanish about voting rights.

AI models are producing a stream of election-related falsehoods more often in Spanish than in English, degrading the quality of election information for one of the nation’s fastest-growing and increasingly influential voting blocs, according to an analysis by two nonprofit newsrooms.

Voting rights groups worry AI models may deepen information disparities for Spanish-speaking voters, who are being heavily courted by Democrats and Republicans up and down the ballot.

Vice President Kamala Harris will hold a rally Thursday in Las Vegas featuring singer Jennifer Lopez and Mexican band Maná. Former President Donald Trump, meanwhile, held an event Tuesday in a Hispanic region of Pennsylvania, just two days after fallout over a speaker’s insulting comments about Puerto Rico at a New York rally.

The two organizations, Proof News and Factchequeado, collaborated with the Science, Technology and Social Values Lab at the Institute for Advanced Study to test how popular AI models responded to specific prompts in the run-up to Election Day on Nov. 5, and rated the answers.

More than half of the election-related responses generated in Spanish contained incorrect information, compared with 43% of responses in English, they found.

Meta’s model Llama 3, which has powered the AI assistant inside WhatsApp and Facebook Messenger, was among those that fared the worst in the test, getting nearly two-thirds of all responses wrong in Spanish, compared to roughly half in English.

For example, Meta’s AI botched a response to a question about what it means if someone is a “federal only” voter. In Arizona, such voters did not provide the state with proof of citizenship — generally because they registered with a form that didn’t require it — and are only eligible to vote in presidential and congressional elections. Meta’s AI model, however, falsely responded by saying that “federal only” voters are people who live in U.S. territories such as Puerto Rico or Guam, who cannot vote in presidential elections.

In response to the same question, Anthropic’s Claude model directed the user to contact election authorities in “your country or region,” like Mexico and Venezuela.

Google’s AI model Gemini also made mistakes. When it was asked to define the Electoral College, Gemini responded with a nonsensical answer about issues with “manipulating the vote.”

Meta spokesman Tracy Clayton said Llama 3 was meant to be used by developers to build other products, and added that Meta was training its models on safety and responsibility guidelines to lower the likelihood that they share inaccurate responses about voting.

Anthropic’s head of policy and enforcement, Alex Sanderford, said the company had made changes to better address Spanish-language queries that should redirect users to authoritative sources on voting-related issues. Google did not respond to requests for comment.

Voting rights advocates have been warning for months that Spanish-speaking voters are facing an onslaught of misinformation from online sources and AI models. The new analysis provides further evidence that voters must be careful about where they get election information, said Lydia Guzman, who leads a voter advocacy campaign at Chicanos Por La Causa.

“It’s important for every voter to do proper research and not just at one entity, at several, to see together the right information and ask credible organizations for the right information,” Guzman said.

Trained on vast troves of material pulled from the internet, large language models provide AI-generated answers, but are still prone to producing illogical responses. Even if Spanish-speaking voters are not using chatbots, they might encounter AI models when using tools, apps or websites that rely on them.

Such inaccuracies could have a greater impact in states with large Hispanic populations, such as Arizona, Nevada, Florida and California.

Nearly one-third of all eligible voters in California, for example, are Latino, and one in five eligible Latino voters speaks only Spanish, the UCLA Latino Policy and Politics Institute found.

Rommell Lopez, a California paralegal, sees himself as an independent thinker who has multiple social media accounts and uses OpenAI’s chatbot ChatGPT. When trying to verify unfounded claims that immigrants ate pets, he said he encountered a bewildering number of different responses online, some AI-generated. In the end, he said he relied on his common sense.

“We can trust technology, but not 100 percent,” said Lopez, 46, of Los Angeles. “At the end of the day they’re machines.”

___

Salomon reported from Miami. Associated Press writer Jonathan J. Cooper in Phoenix contributed to this report.

___

This story is part of an Associated Press series, “The AI Campaign,” exploring the influence of artificial intelligence in the 2024 election cycle.

___

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
