Keir Starmer swore at a staffer and it went viral. That, at least, is what a clip doing the rounds on X would have you believe. Would it surprise you to learn that it was an AI-generated recording?
The clip appeared over the weekend, and MPs across the political spectrum were quick to point out that it was fake, but by Monday it had attracted almost 1.5 million views.
This has put even more emphasis on the need for some control over how AI content can be generated, especially as the Hansard recordings of parliamentary proceedings offer bad actors an on-demand library of MPs' voices that can be made to say almost anything.
However, the best opportunity for such regulation, the Online Safety Bill, has been missed. The bill empowers the regulator Ofcom to ensure platforms keep users safe from harmful content, but it does little to require them to prevent disinformation arising in the first place, beyond ensuring platforms stand by their own policies.
That approach has clearly failed. X's policies state that users cannot share synthetic, manipulated or out-of-context media, yet the Starmer clip remains widely available.
There are, however, possible solutions.

One is the Digital Regulation Cooperation Forum, which, although a voluntary organisation with no statutory powers, brings together members who claim expertise in the field of AI.
The government has also passed the Elections Act, which will come into force in November. The act gives the Electoral Commission new powers to enforce digital imprints on campaign material, telling viewers who has paid for and produced an advert online.
Of course, governments ultimately need buy-in from Big Tech.
On this front, Google is leading the way, having launched a test of watermarked content in late August, while others are looking to beef up their existing disinformation teams.
All in all, time will tell whether AI is used for good or for ill.