Well, it didn’t take long. I cited the risks of AI in politics on Monday, and we have several examples already.
The New Hampshire attorney general’s office is investigating a fake robocall impersonating President Joe Biden, which urged recipients not to vote in the presidential primary. The call, which appears to be artificially generated, is considered an unlawful attempt at voter suppression. A complaint from a prominent New Hampshire Democrat prompted the investigation. The call encouraged voters to save their vote for the November election and provided a phone number belonging to a former New Hampshire Democratic Party chair. The Biden campaign is discussing additional actions to address the situation.
A faked AI audio clip of Manhattan Democratic boss Keith Wright talking negatively about Assemblymember Inez Dickens has caused a stir in Harlem politics. This is the first known instance of AI-generated audio being used for nefarious purposes in New York politics. While some recognized the audio as fake, it was believable enough to fool others. The clip was shared at a pivotal moment as Dickens announced she was not seeking reelection.
And the Washington Post reported that the problem also runs the other way. Politicians, such as former president Donald Trump, are invoking AI-generated content to dismiss allegations against them. Trump claimed that an ad featuring his public gaffes was created using AI, accusing the Lincoln Project of using AI in its commercials to make him look bad.
This ties directly to a KPMG survey: three in five customers are wary of AI, and trust is the biggest concern when implementing it. Customers are skeptical of AI systems’ fairness, security, and safety. Rushed AI implementations can degrade the quality of the customer experience, and businesses need to ensure they are not trading trust and customer satisfaction for cost savings.
The good news is that there are resources to leverage. Here’s one: a handbook has been released to help Singapore businesses adopt generative artificial intelligence (GenAI) and acquire the necessary skillsets. Developed in collaboration with SkillsFuture Singapore and AI Singapore, the guide aims to help local organizations, including small and midsize businesses (SMBs), integrate GenAI into their environments and manage the transition through training and reskilling. It highlights use cases, employee profiles, and the skillsets GenAI requires, and it is part of Singapore’s National AI Strategy 2.0.
Why do we care?
There is an urgent need for regulatory frameworks around this kind of content… or at least enforcement of existing fraud laws. The technology world should care because trust is a critical component of AI adoption. That means clearly communicating to customers how AI is being used, ensuring that AI decisions can be explained, and ensuring that AI systems are secure and respect user privacy. It also means ensuring AI isn’t used to harm society. Getting the implementation right is crucial for building trust and enabling successful adoption of AI, but it’s only one part of the picture.
Now, we should be encouraged that there are resources like Singapore’s handbook. But we’ll quickly devolve into a world without trust if AI lacks guardrails and bad actors use it to dismiss allegations or manipulate systems for their own gain.