
Submitted by Alan Blackwell on Thu, 10/04/2025 - 11:10
How would a 1990s AI engineer have designed a system to maximise inequality? And is this what we did?
For six years in the early 90s I worked as an AI Engineer. Yes, you read that correctly - AI Engineer, not AI Researcher. And yes, this was over 30 years ago. In those days, AI was not big news for the general public; it was a set of specialised tools used to solve business problems. As a technical team lead, my job was to understand what the client wanted, and build something to deliver that.
So I’ve been thinking - what if one of our clients had asked us to build them an AI system that would maximise financial inequality? What if our actual design brief was to make a few people rich at the expense of everyone else on the planet? I never did have a client like that, but I worked in some of the world’s largest companies, and for some of the world’s smartest people, so I can imagine how this might have gone.
- Step 1 would exploit what we called “hypertext” technology. Before the WWW, we used ideas from Vannevar Bush's Memex and Norman Meyrowitz's Intermedia, where the links were scholarly and rights stayed with the authors. We believed in the ideals of open source. If we wanted to maximise inequality, we could have exploited that openness to harvest other people’s work for free.
- Step 2 would be neural networks. We were familiar with these, but they had limited use, because what a network had learned couldn't be inspected or debugged. To maximise inequality, we could have put the free content into a neural network, so that it would be impossible to trace the original owners.
- Step 3 would be proprietary data centres. Back then, the greatest profits went to companies that centralised their data in mainframes. The machines were inefficient compared to the advanced microprocessors used for AI, but to maximise profits, we could have taken away personal computers, collecting the CPUs into mainframe-like server farms.
With these technical components, this looks like the basis for a feasible business plan, if the business model were to maximise inequality, and concentrate money and power. Of course, I haven’t discussed the brand of AI itself, which has always been a promise of magic, science fiction or snake oil, claiming to solve any “general” problem. Those claims have been made since the AI brand was created in the 1950s, before I was even born. AI engineers in the 1990s had to let the hype men do their work, but we never believed that stuff, and I still don’t.
Hypothetically, what engineering approach would I have taken 20 years ago, if a visionary investor had asked me to design the next generation of AI, for a business model intended to maximise inequality?
I’d create a walled-garden version of the world-wide web, controlled by an end-user licence agreement giving me free rights to everybody’s content. I’d create some kind of hierarchical (“deep”) neural network, concentrating all that free content in a form only I could understand and use. I’d transfer all that data, along with the trained network, from the public internet onto private servers, in remote facilities owned by me. And I’d sell the results back to the same people who gave me their data for free, advertising it as some kind of magic.
I never saw that specification, but whoever did write it 20 years ago, we seem to have got the kind of AI they asked for. Is this the kind of AI Africa needs? Nope. Could AI Engineers have built something else? Yup.
If this blog post is the first thing you've seen from me, you may like to check out my book: Moral Codes: Designing Alternatives to AI.