AI in politics: Law expert urges transparency from political parties, more regulation

2:08 pm on 27 May 2023
A GPT-4 sign on a website displayed on a laptop screen and the OpenAI logo displayed on a phone screen, in an illustration photo taken in Poland on 14 March 2023.

Photo: JAKUB PORZYCKI

The use of artificial intelligence (AI) in political campaigning needs stronger regulation to help prevent voter manipulation, an Auckland law lecturer is warning.

The National Party admitted this week to using AI to generate images for its election campaign.

On Thursday, Privacy Commissioner Michael Webster released a list of expectations for the private and public sectors to apply when making use of generative AI.

But University of Auckland senior law lecturer and AI law expert Nikki Chamberlain told RNZ's Morning Report more was needed.

"I think we need to have transparency and we need to know where images and videos are coming from that are being created as part of marketing of political campaigns, because I think it goes to transparency - the voter not being manipulated," she said, "but I also think that we need to have laws that actually require this."

Chamberlain - who is also one of the editors of the third edition of the Privacy Law in New Zealand textbook - said there were provisions in the Privacy Act 2020 setting out how people's personal information could be used, shared and analysed, but it had some weaknesses.

"It isn't a high watermark like the General Data Protection Regulation in the EU. We've just seen over the past week that Facebook has been fined €1 million for a breach of the GDPR and that is a punitive fine. In the privacy act the maximum amount of our punitive fine is $NZ10,000 so the Privacy Act needs more enforcement teeth."

University of Auckland law lecturer Nikki Chamberlain Photo: Supplied

She said New Zealand also needed more specific regulation for the use of AI, "to make sure the voter isn't going to be manipulated and that there is transparency".

There were two main types of AI being used in political campaigning, she said: machine-learning algorithms of the kind seen in the Cambridge Analytica scandal, and generative AI such as ChatGPT and Midjourney.

The former was a concern because it could be used to target political advertisements at users, in an attempt to manipulate them into voting for a certain party or candidate.

"There was an application on Facebook, a quiz, and people took part ... their information and information of their friends got sucked into that and then it was used by Cambridge Analytica to essentially determine voter preferences," Chamberlain said.

"If you found that somebody was doing a lot of clicking on links related to crime for example, or particularly were interested in issues around crime, then you would draw on that. Perhaps they are fearful and then you would target advertisements to them around fear and around increasing police presence and coming down hard on crime."

Generative AI, on the other hand, takes a collection of data - a series of words or images - and uses it to create new data that closely resembles the data it was fed, she said. That means a person could request a particular kind of image or document, and the AI would produce what someone might expect to see, based on the previous examples the software has access to.

This is the kind of technology behind ChatGPT for the rapid creation of text documents, or programmes like Midjourney for the creation of images - which is what National used for its campaign adverts.

Chamberlain said on the face of it, the only issue with that was putting stock-image actors out of work, but there was potential for greater misuse - for example, with deepfake video technology.

"The more nefarious part of it is that if it's not kept in check, you could essentially have images of people saying things that they didn't say, and voters not knowing [that it's not real].

"It's easy to get sucked into a narrative that might not be accurate."

She said any campaign material using AI should make that clear for transparency - and the law needed updating to prevent more nefarious uses of the burgeoning technology.

"I think the biggest thing I would say is a recommendation to any political party who's using AI that they actually note on the advertisement that they are using it, so people know where the image has come from and that it's not real."

She also had a message for voters: "check your sources and go to multiple sources to see that information is accurate".

  • Artificial intelligence privacy warnings issued for companies, departments
  • Chris Luxon defends National's use of AI
  • New Zealand start-up launches AI powered investment platform
  • AI used across 'multiple departments' in camera surveillance
  • Are you unwittingly helping to train your AI replacement?