Submitted by Alan Blackwell on Mon, 19/05/2025 - 09:50
I started this blog to ask what AI would be like if it had been invented in Africa. That was years before ChatGPT. Who could have imagined that Elon Musk’s Grok AI would become openly racist, programmed to promote the conspiracy theory that Musk’s fellow white South African immigrants were escaping “white genocide”, not just leaving post-apartheid Africa to get rich again in Donald Trump’s USA?
We’ve had problems with racist political language before. Nearly a century ago, in the early years of WW2, George Orwell wrote a small book about the problems that he later turned into “Newspeak” in 1984. The cover of my copy of Why I Write says “Political language is designed to make lies sound truthful and murder respectable”.
What has this got to do with AI? Orwell ends his book (p.118) by saying that defending the English language “has nothing to do with correct grammar and syntax”. Those are the things ChatGPT does well: the best autocorrect yet, able to fix mistakes and even write whole student essays. But Orwell says that nice-sounding language is not enough: “What is above all needed is to let the meaning choose the word, and not the other way about”.
That’s the problem with AI chatbots. They work backwards: trained as predictive text, they produce words, not meaning. The only meaning of a product like ChatGPT, or Musk’s Grok, is to make the person selling it richer. Just like politicians, ideologues and conspiracy theorists, they are driven by the size of their audience. These men and their machines will say anything, from flattering sycophancy to racist conspiracies, so long as you keep listening.
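For readers who wonder what “trained as predictive text” actually means, here is a toy sketch in Python (my own minimal illustration, nothing like the scale or sophistication of a real chatbot): a bigram model that counts which word follows which in Orwell’s own sentence, then generates text by always choosing the statistically most likely next word.

```python
# A toy next-word predictor: a minimal sketch of the "predictive text" idea.
# It is far simpler than any commercial chatbot, but it makes the point that
# choosing words by statistics requires no notion of meaning or truth.
from collections import Counter, defaultdict

corpus = ("political language is designed to make lies sound truthful "
          "and murder respectable and to give an appearance of solidity "
          "to pure wind").split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=8):
    """Repeatedly emit the statistically most likely next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("political"))
# Prints a fluent-looking sentence reassembled purely from word frequencies;
# nowhere does the program represent what any of these words mean.
```

Scaled up to billions of parameters the machinery is far more elaborate, but the training objective of a large language model is essentially this one: predict the next word.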
When the world has had enough bullshit, we might ask whether intelligence should measure what is meaningful, rather than being satisfied with more words.
Links:
Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats
How an embarrassing U-turn exposed a concerning truth about ChatGPT