Please help me understand the purpose of generative AI
Much like a generative AI tool, I want your human input, please.
These are strong opinions, loosely held...
...a phrase I was recently introduced to. It is my new favourite thing to say. I am doing my best to mean it, too. So while I am currently choosing to opt out of generative AI tool training wherever I can (guide on how here), I'm happy to have my mind changed. Here are my current opinions, as of 22nd May 2025:
I know AI has been around for years, but I am talking specifically about the issue of training generative AI:
"Training" here means participating in machine learning, meaning a human teaching a machine, giving it information (data). That machine has been created (programmed) by a human or humans. The machine itself is the go between, it has no autonomy. So:
I don't have an issue with the concept of generative artificial intelligence.
I understand that it can be applied in ways that are theoretically beneficial to humanity as a whole. It is my hope that, in time, those theories will be proven correct.
I also understand that in its current form, it can effectively do a lot of admin and organisation for me so that I will have more time to do things I like, such as reading and writing.
My issue is that, at the time of writing, there is very little protocol for or regulation of how generative AI tools are made or used. And there is already evidence that the people creating and wielding the tools are willing to use them in ways that feel morally dubious at best (to me, based on my own moral ethos, but I hope also to a lot of you reading).
While I understand generative AI programmes are tools, I think it is dangerous to consider them tools in the same way that, say, a hammer is a tool.
A hammer on its own isn't hurting anyone. But if you put a hammer in the wrong hands it becomes a weapon. The speedy, unruly release of generative AI tools into the world feels like this to me:
Picture everyone you know. Everyone. That's people you love and admire, but also people you hate or dislike or secretly think of as stupid, people you trust and don't trust, the sagest person you know and your mate's naughty toddler ... now imagine putting all of them in an enclosed space, handing them each a hammer and creating a scarcity mindset. I think that, at the very least, some walls are going to get dented.
And just to really stretch that hammer analogy: even if you had never seen or heard of a hammer before, if you just stumbled across one out in the world you'd likely be able to figure out what it's for. Or, if you couldn't identify what it's actually been created to do, it wouldn't take much for you to learn that you probably shouldn't be throwing it around. Someone might get hurt – including you.
But the language used to describe these AI tools is often very technical, complex or vague. And the language used to make generative AI work – computer programming code – is understood by only a small minority of people worldwide. So it is difficult to identify how it works, what exactly it should be used for and when it might be dangerous.
And if you don't know how something works, you have to at least trust the person who made it. I do not trust the people who are leading the creation of these generative AI tools. I do not think they have created them with my best interests in mind. It is not the tool itself. It is the lack of regulation around the tools, and who is wielding them, that I object to.
Please feel free to (respectfully) poke holes in these opinions below. Oh, and also:
These are questions I either haven't tried to find answers to, or that I'm yet to find a clear answer to; please answer if you can:
Why did the creators of generative AI tools not work with different global governing bodies to help decide laws and regulations around these tools before they were released to the public?
Do the creators of these tools mind that anyone can use them, including people who do not have good intentions? E.g. a person using generative AI to manipulate another person, or creating a version of someone's likeness doing something compromising?
Do these creators think there should be limits to what individuals can use the tools for? If yes, is there something they can do to implement those limits?
Are there readily available, 100% accurate ways of detecting when something has been made with generative AI? If so, can those tools be made available for free?
What are each generative AI tool creator's intentions in making these tools? Is there a clear list of intentions available to the general public? If not, why not?
What commands does each generative AI tool have? As in, what has it been told to do or not to do?
Individually, are there things that generative AI tools have been coded to say or not to say?
If the learnings of generative AI tools are going to benefit humanity as a whole, why do companies need to compete by having their own separate tools? Shouldn't we all be crowdsourcing and sharing this information?
What are the names and histories of the people writing the code of these generative AI tools, and do they need any legal and/or psychological checks to be allowed to code them? If yes, which, and to what standard?
How many people are involved in building these tools, and do they come from different backgrounds, so that a wide variety of viewpoints are represented?
What are the ethics, policies and regulations in place at each company that builds generative AI tools?
What are the ethics, policies and regulations in place at each company that funds the building of generative AI tools?
Are people within politics or with a lot of social or economic influence allowed to fund generative AI tools? If yes, do they have any say in how the tools work?
Why is it hard to opt out of generative AI tool training? Why is it not opt in instead?
Why are all the explanations around how generative AI tools work very dense and technical?
Can the makers of these tools produce easily understandable and accessible instruction manuals on how to use and understand generative AI tools?
Is there any intention to create guides for non-technical people on how these tools work?
Do the people making these tools know about the environmental impact of making them? If so, are they offsetting the resources the tools use?
Do the people who are making these tools have a view on what happens to the people who currently do the jobs that generative AI can do instead?
If by using generative AI tools, the general public are helping to teach them and improve them, why do they sometimes have to pay to use them?
Bonus question: if we are doing work by training the tools, shouldn't we be getting paid?
What is the programming command that allows an AI tool to discern fact from opinion, right from wrong, true from false?
How does the programmer/creator/coder – the person who has made the tool – know how to tell the difference between fact and opinion, right and wrong, true and false?
Who is in charge of deciding when the AI tool is done learning? Who decides that what the AI generates is something that anyone can use safely?
Any and all answers welcome, if they relate to the questions/opinions above and aren’t outright offensive or in other ways problematic.
Resources
How to opt out of generative AI tool training: a guide by moi
The Social Dilemma: the problem beneath all other problems
"Technology’s promise to keep us connected has given rise to a host of unintended consequences that are catching up with us. "
"Together with our partners, we are dedicated to leading a comprehensive shift toward technology that strengthens our well-being, global democratic functioning, and shared information environment."
"Welcome to Opt Out, a semi-regular column in which we help you navigate your online privacy and show you how to say no to surveillance"
"Bots now use the internet as much as humans do. Harness traffic from crawlers, scrapers, and LLMs to grow your business."
To watch
Tristan Harris & Aza Raskin: The A.I. Dilemma
Henrik Kniberg: Generative AI in a Nutshell - how to survive and thrive in the age of AI. NB: this is an excellent explanation of how generative AI works, and how it can be used effectively, but it doesn't consider how the ethos of individual humans may impact its value
Laura Bates & Gemma Milne for Lighthouse: The New Age of Sexism
To read
An open letter to the UK government on AI and Chatbots
"What Are AI Crawler Bots?" on Botify.com
"AI has an environmental problem. Here’s what the world can do about that" from the UN environment programme.
"Artificial intelligence: ethics, governance and regulation" by Patrick Brione & Devyani Gajjar
"Global AI experts sound the alarm: Leading researchers co-author unique report warning of the malicious use of AI in the coming decade" by Stuart Roberts
The New Age of Sexism: How the AI Revolution is Reinventing Misogyny by Laura Bates
"This Is How Meta AI Staffers Deemed More Than 7 Million Books to Have No 'Economic Value'" by Keziah Weir