using chatgpt and other ai writing tools makes you unhireable. here’s why
AI isn’t just pie in the sky, it’s a cow pie in the sky.
A few years back, I read the specifications for 5G, the fifth-generation mobile network. They were ambitious, to say the least. Fast forward a bit. 5G starts to roll out, only it’s… well… anything but the actual specifications. It kinda sucks, actually, compared to the expectation. It’s like going to the store to buy a steak and all they’ve got for you is a box that pulses menacingly, labeled “meat product.”
Turns out that while there was a commonly understood version of what 5G would be, there were no requirements for what you could and could not call 5G. You could call AM radio 5G if you felt like it, I guess. So the big telecom corporations, swimming in profit, refused to invest in actually building 5G at the time and simply branded a few minor upgrades to 4G as 5G instead.
Why be honest when it’s easier to lie and put in the bare minimum, I guess?
The latest technological grift is here, and they call it AI. Like 5G, it’s not what you expect it to be, and they’re trying to make you treat it as if it’s the thing you were expecting instead of the thing it actually is, which is much, much dumber than you think.
we’ve been here before
Recognize it? An almost desperate desire to dazzle you by promoting things that couldn’t possibly work, insisting to you that this? This is the future! You’d be crazy to pass up on this deal. Use case? Uh… well, here’s a million things it can probably do. No, we won’t show you. We’ll just say it can definitely do this.
But then, of course, they start to realize that you’re not a mark. You’re too smart for that. You’re not dazzled, so eventually “this is the future” becomes “you’re too stupid not to see what’s right in front of you! You’ll get left behind!”
If the dazzling doesn’t work, they think, maybe the threats will.
The goal they had back then — and I actually stopped writing about it because NFTs died a quick death, but I guess I’ll do it now — remains the same: in order to succeed at conning you into giving them your hard-earned money for software that doesn’t even do what they promise, they will first attempt to dazzle you, and then threaten you, and then, eventually, just whine that this is the future. They want you to feel desperate enough to give them money based on a lie.
Of course, everyone figured it out eventually: an NFT is just an agreement between two people to pretend that one of them is in possession of something that cannot be duplicated. It was farcical from the outset: because of the way computers work, on a fundamental level, there is no such thing as an original, uncopyable file.
The file the artist sends you is saved from memory to storage, then when it is emailed, it is copied to the email server, and then, when it is downloaded by you, it’s copied to your computer — the file still exists in all of those locations unless it gets deleted; the image you claim to own is not “the original.” You were never going to be able to convince the entire world not to copy your jpg, because why would you? Remember the old “you wouldn’t download a car” meme? The response was “fuck you, I would if I could!”
Why would anyone pretend something that takes zero effort to copy — that, in fact, by the very nature of “how computers work at the most fundamental level” must be copied — could not be copied? Only computer illiterate people thought NFTs meant anything.
As it turns out, “would you join me in pretending this object cannot be copied?” is a sales pitch best left for people of less intelligence than the average moron.
As with big tech industry firings and NFTs, the new push, AI, appears to be a case of “we have to do it because others are doing it,” but I’ve been hearing people say the real reason is that Silicon Valley investing is at an all-time low, and these corporations are desperate.
Before we continue, hey, I could use some help with medical bills and groceries. If you want to support the other work I do on this blog, like this article about the biggest pitfall young writers face and how to get around it, then hey, hit up my tip jar.
I figure this kind of writing helps inexperienced writers the most — which means people who might not have the finances to afford my work if I kept it behind a paywall. That would help me, obviously — I could guarantee a certain minimum that would ensure my ability to continue writing these articles — but the people who need my help the most cannot afford it.
I, personally, can only do this with your support. I have to spend between $145 and up to an entire Nintendo Switch’s worth of my income on medical care every two weeks, so it’s either do this or get a second job, and as disabled as I am, a second job is really not an option. So when you send me a tip, you’re not just helping a disabled writer like me, you’re helping tons of students, disabled people, and others without access. Thank you.
paypal.me/stompsite
ko-fi.com/stompsite
@forgetamnesia on venmo
$docseuss on cashapp
what is ‘ai’
this is so dumb, but
Okay. Let me see if I can make this simple.
“AI” is a two-letter acronym for “artificial intelligence.” In science fiction stories, we think of this as equivalent to or greater than human intelligence. Artificial intelligences are portrayed as actual intelligences — they can think about a concept and reason on it.
Take that scene in 2001: A Space Odyssey where the artificial intelligence HAL chooses to spy on crew members, read their lips, and take action to preserve its own life. That requires a level of reasoning — of understanding people are doing secret things, of deciding to find out about it, and then making decisions based on that information.
Heck, in Mass Effect, they actually differentiate between “artificial intelligence” and something far, far lesser, called “virtual intelligence,” which can respond to natural language inquiries, but cannot reason on them.
So you’ll go up to a VI and say “what do you think about that?” and it will fire off a canned message that says “I am not capable of coming to an opinion on that.”
Let’s be clear — there is no “I,” no self, when it comes to a virtual intelligence in the Mass Effect series. When it says “I am not capable of coming to an opinion on that,” it’s because a human being (or, since this is Mass Effect, an alien) told the VI to play that message whenever it receives a query that treats it as if it is alive.
Much like a chatbot today (more on those later), a virtual intelligence in Mass Effect cannot actually come up with an opinion. In Mass Effect, they’re ‘honest’ about this (not that they can be honest any more than the letters in your alphabet soup can communicate with any honesty), but they do have a programmed response that plays whenever you hit the limits of their capabilities.
This is not a being that thinks of itself as an “I,” this is a computer program that plays a message that sounds like a person talking to you, because a real person recorded it. But it’s not alive enough to react like that. It’s… you know when you turn your oven on, and you set the temperature, and the oven gives you that temperature?
The oven isn’t thinking “oh! Hey! yes! My friend here wants me to hit a specific temperature! I will do that!” You’re just turning a knob and making the internal bits of the machine reach a specific point. The machine isn’t ‘thinking’ about hitting a temp because it can’t think.
So the oven will do what you want it to, but it’s not an artificial intelligence.
My electric kettle plays a happy little tune when it gets up to boiling temperature. That doesn’t mean it’s happy. But, as a human being, I may have a tendency to treat it as human — to think “hey, my machine is happy it did a good job!” Of course it’s not — there’s no dopamine release happening, there’s no brain, nothing. It’s a bunch of resistors and circuits that just do a thing.
That’s because this is a thing we, humans, do.
There’s a Twitter user called “Faces in things” that just… shows you faces in things. We see a few shapes that look like a mouth and eyes? Boom. Face.
We’ve talked about gestalts before — the idea that symbols can create new meanings, like how a colon : and a closing parenthesis ) can be combined to form… a smiley!
:)
A lot of people like to use “more than the sum of its parts” to refer to a thing they like but can’t defend from criticism. To them, it’s a way of saying “but I still liked it, I just can’t explain why.”
That’s not what “more than the sum of its parts” means. It’s referring to things like the smiley face above — where the human brain adds meaning to a set of symbols that mean nothing. That’s where the more part of “more than the sum” comes in. A new meaning was created in the chemical reaction between two parts.
Humans, as intelligent beings, are meaning generators. We see intent in things. We see meaning there. At some point, long ago, some ancient human saw a fire spread. Not knowing what they were seeing, but watching it destroy their small home, that human could only think this bad thing was the result of malice. The fire, they figured, must have done what it did intentionally. Of course it didn’t — fire is fire. It’s just an exothermic reaction. There is no intent. Things that can burn will burn if the exothermic reaction can be caused.
Out of this, you get gods. People have bad weather and there’s a famine. Why did this bad thing happen? Someone must have caused this — like all the misfortunes we know are caused. Before we had the scientific method, this is how most people seemed to see things working.
Some part of the human brain ascribes meaning to things, and that’s great for humans communicating with each other — understanding the other party’s intent is the entire basis of good communication — but it’s not so great for understanding things that aren’t human.
We call this process “anthropomorphizing.”
anthropos: from Greek, meaning ‘human’
morph: from Greek, meaning ‘shape’ or ‘form’
Something that is “anthropomorphic” is human-like. Something that has been “anthropomorphized” is something that has been given human attributes, like two windows on a house looking like eyes, giving the house the appearance of a face.
Don’t get me wrong — there’s a romantic quality to giving things life. I’m named Doc because I used to volunteer on a Boeing B-29 bomber called “Doc,” which is the only plane I’ve ever heard referred to as a “he,” instead of a she. Doc is a B-29, like all the rest — it has no gender, of course. But people form attachments, intent, connections to all sorts of things. They give it personality. To some extent, they anthropomorphize it.
In nature, we talk about “eye spots,” because it’s not just humans that anthropomorphize… it’s… well, just about anything that’s alive.
In Bangladesh, some people wear masks with eyes on the back of their heads because it reduces tiger attacks. When scientists studied lion behavior in Botswana, they found that painting eyes on the butts of animals lions like to prey upon meant lions would not attack, feeling they were being watched.
(so much for ‘as courageous as a lion,’ I guess)
Creatures see intent — even where there is none. It’s very easy to slip into the feeling that if something has the characteristics of life, it is alive. The more it reacts in a living-seeming way, the more likely that it is alive.
Which brings us to mirrors, and the mirror test.
In a mirror test, an animal is anesthetized and marked somewhere it cannot see. The animal wakes up and is given access to a mirror. If the creature sees the mirror and reaches for the mark on its own body, it is understood to be self-aware. While this isn’t the only way to determine self-awareness in a creature, it is a useful metaphor for people who use AI art, writing, and so on.
If I ask an AI chatbot a question like “are you sentient,” it’s going to dive into its database — trained on a corpus of material fed into it over time — and pull out a series of words related to the question being presented. Since there’s a lot of science fiction incorporated into that corpus, the AI-that-isn’t can be made to sound like an AI character from a science fiction story. It isn’t one, not really. There’s no thought behind it at all. You know why it sounds human?
Because of you.
There was a guy at Google fired over this — he was too stupid to realize that the AI was simply giving him results with strings of text related to the questions he was asking it. The AI was the mirror, reflecting his own thoughts and biases about AI thinking back at him, but he was so desperate to anthropomorphize it, he tried to suggest it was alive.
We’ve seen this plenty of times — someone says “oh, the AI is alive,” but it’s really just pulling from existing text on the matter. When an AI says “I’m sorry, I can’t do that,” it’s not actually sorry because it doesn’t have the capacity to be sorry. It is not repeating that phrase because it realizes you exist, that you didn’t get what you want, and it’s being sympathetic with you — which is what an actual intelligence would do — it’s repeating it because it’s been programmed to.
Imagine, if you will, a gigantic “Frequently Asked Questions” section of a website. Somewhere on that website is the question “Can I use this website to commit crimes?” and the answer is “I’m sorry, but no, you cannot use this website to commit crimes.” A human wrote that down on the website, writing to a generic person who may ask the question.
Now, you know how sites like Amazon have customer service features where you’re expected to search for something — like “can I use this website to commit crimes?” Well, the computer takes some of those words and phrases — like “commit crimes” — and looks for similar text in its FAQ section. It then sends you a link, and then follows up with a (human-written) question: “did this answer your question?” You can then click “yes it did” or “no it didn’t.”
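Here’s a minimal sketch of that kind of keyword matching in Python — a hypothetical toy, not any real site’s code, but the mechanism is this dumb:

```python
# A toy FAQ "search": score each canned question by shared words with the
# query, return the canned answer for the best match. The FAQ entries here
# are made up for illustration.

FAQ = {
    "Can I use this website to commit crimes?":
        "I'm sorry, but no, you cannot use this website to commit crimes.",
    "How do I reset my password?":
        "Click 'Forgot password' on the login page.",
}

def best_match(query: str) -> str:
    query_words = set(query.lower().split())
    # No understanding happens here: "commit crimes" matches because the
    # same strings appear in both places, nothing more.
    scores = {q: len(query_words & set(q.lower().split())) for q in FAQ}
    return FAQ[max(scores, key=scores.get)]

print(best_match("can I use this website to commit crimes?"))
```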
At no point did the website do any thinking. It doesn’t actually consider what you’ve said, nor can it apply what you’ve said in an interesting way. It is simply a machine that you put a series of characters into, and it is reflecting the most similar series of characters back at you.
You may go “that’s not what I’m asking,” and you may need to speak to a customer representative. I do it all the time — a human can understand my question, a website cannot. The website is simply reflecting, like a mirror. A human can actually consider the words I’m asking, rephrase the question, come up with ideas that might solve my problem in a way I wasn’t expecting, figure out that I’m using the wrong words to explain what I want, and so on. The computer cannot do that. It’s about as intelligent as Google going “did you mean Cool Corporation?” because it has more results about “Cool Corporation” than “Cook Corporation,” which is what I actually asked for.
What the tech industry calls AI is a slightly more sophisticated version of the robocaller on your phone, or a website that finds you a dozen useless answers to a question you already had.
“But what about all these stories about AI learning things it couldn’t possibly know?”
They’re fake.
No, really. They are. Every single time someone has said something like “oh wow, this AI learned a language it wasn’t trained on,” someone comes along and debunks it as the fraudulent bullshit it is.
Why might someone lie about their software’s capabilities? Easy: the same reason a used-car salesman tells you this car’s in perfect working shape while he tries to hide the flood damage. They’re overselling the software so you give them lots of money before you find out it doesn’t do what they say it does.
And that’s really a big part of the point I’m making here: it’s really funny to see people going “you’ll be able to tell a computer ‘make me a retro first person shooter’ and it will just do that.”
It can’t do that. It can’t even come close. Right now, we have AI drawing generators that still don’t understand human anatomy well enough to draw hands — that’s because when it draws, it’s just training itself based on other 2d images it sees.
As you may be aware, two dimensional drawings are generally of three dimensional objects.
So, when you see a hand rendered in 2D, you think of it as a 3D object. You know how a hand ought to look, how the fingers bend and flex, how the skin looks, how many fingers you expect to see, and so on (rarely more than five, occasionally fewer, and you likely know the average length of each finger — the middle finger is the longest, the thumb is offset, the pinky is the smallest, and so on).
The AI does not know that. Its entire world — and training data — is two dimensional. So it does what it always does and mathematically averages out all the images it’s been trained on and gives you a best guess, to which it applies a fuzzy layer of randomness, just so the pictures aren’t all the same.
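To be clear, real image generators are diffusion models, not literal pixel-averagers, so take this as a deliberately crude sketch of the “average everything out, then add fuzz” idea — random arrays standing in for training images:

```python
# A deliberately crude sketch of "average plus fuzz." Real generators are
# far more sophisticated, but the core limitation is the same: the math
# operates on 2D pixel statistics, not on any 3D concept of a hand.
import numpy as np

rng = np.random.default_rng()

# Stand-ins for training images tagged "hand": 64x64 grayscale arrays.
training_images = [rng.random((64, 64)) for _ in range(100)]

def generate(images, fuzz=0.1):
    mean = np.mean(images, axis=0)           # average everything it has seen
    noise = rng.normal(0, fuzz, mean.shape)  # randomness so outputs differ
    return np.clip(mean + noise, 0.0, 1.0)

fake_hand = generate(training_images)  # statistically hand-ish; never a hand
```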
But it can’t give you an actual hand, because it doesn’t have any concept of a real hand and how the hand should work. I’ve seen AI art that tries to convert an anime character into a real person — sometimes it layers on multiple sizes of eyes, from realistic to anime — on top of each other. A thinking being would go “no, no, eyes don’t work that way.” But a calculator doesn’t have any concept of an eye, not really. It just averages between a bunch of the 2d images tagged “eye” in its database.
It can’t understand “eyes are specific things in the real world, and in illustration, eyes are not always drawn realistically. I will try to draw a ‘realistic’ version of a cartoon character by using what I know about eyes to draw a realistic eye.” It’s much, much less sophisticated than that.
So, yes, the idea that these not-at-all-AI tools could build an entire game — fully three dimensional, real time, with all sorts of mechanics and tuning designed to make the game feel good for human behavior — is ludicrous, because it requires a level of understanding that is eons beyond what it can currently do. Right now, it doesn’t even understand “what eyes are” in order to be able to draw eyes.
You know what we use AI tools for? Stuff like “please randomize the placement of trees as I use the tree-painting tool to set down a bunch of trees to make something players will interpret as a forest.” It applies a randomness to the trees that approximates reality based on the values the people who built it applied to it. It’s not actually intelligently going “hmm I’ll do my best to make a realistic forest.” It’s just… it’s a computer replicating the randomness of spraypaint.
And that’s the whole point of it — there is no intelligence here. There is no machine that can conceive of you as a thinking being with desires that can be addressed. These machines don’t even have concepts, not really. If I tell a machine “show me Nic Cage dressed as Superman,” the machine may have images tagged with “Nic Cage” and it may have images tagged “Superman,” but where the thinking mind of an actual intelligence will put those ideas together and fill in the blank spots with things it knows — like an artist who has also memorized human anatomy — the AI is still gonna give me an imperfect S-Shield on Superman’s chest, it’s gonna mess up the fingers.
Maybe it’s not using the long-haired Nic Cage we saw from the 90s screen test, but still, there’s a stray strand of hair on his chest, because one of the source images had Superman with long hair and in that image, there was a strand of hair that was long enough to go from Superman’s head to his chest.
A human understands hair, short and long. If the image calls for short-haired Superman, then we’re not going to see a strand of hair hanging down because a human would go “that can’t be there because you’d only see a strand of hair if it was connected to hair on Superman’s head,” right?
A human would never think to draw a random disconnected bit of long hair that’s just floating there in front of Superman’s chest as if he had long hair, when the rest of the image is showing him with a much shorter haircut. A human would understand that looks ‘wrong.’ No matter how much you train the AI on 2D images, it’s never going to understand that, because the current technology called “AI” is, again, not actually intelligence. It’s only going to average between what it’s seen; it’s not going to go “hmm, that looks wrong. For hair to be here, that hair has to be long, so I need to draw long hair.”
The computer doesn’t understand hair and how hair works. It just got trained on some data where hair-like pixels are in a location and other data where it is not. So it’ll just kinda put them there at random, with no consideration for the entire image.
Sure, at first glance, if I squint, maybe it’ll look like Nic Cage dressed as Superman… but there’s no actual understanding there. If I commission an artist to draw that, they might send me a sketch, and I might go “hmm, can you change his expression to look more determined?”
The artist will know how humans appear determined and they’ll figure out a way to make Nic Cage look determined — the AI will just grab from all the ‘determined’ images it’s trained on and average them out into something that kinda has some of the aspects of determination in the eyebrows, but maybe the mouth is wide open and doesn’t look determined at all.
The AI cannot get this, because it is not intelligent, so it cannot really capture a sense of determination that’s anatomically correct. The drawing, the expression, will make no sense. And it can’t change what I’ve told it to get it more like I want, where a human being can understand my intent and adjust the drawing as necessary.
(there’s actually a plugin for the automatic1111 gui that’s supposed to use chatgpt to better interface natural language with the software’s output — a combination of both text and visual ‘AI’ tech — but it doesn’t work particularly well, because, again, these things aren’t actually intelligent and don’t understand intent)
the ai cannot lie, but it will not tell you the truth
Recently, a lawyer attempted to cite some cases in court. He asked ChatGPT, an “AI” chatbot, to find examples of cases to help him win a case.
In court cases, judges often look to established precedent to help make decisions in their case. It is of the utmost importance that those cases actually exist. If a lawyer were to present those cases as real when they are not? Well, a judge might get a bit mad about that.
Now, AI cannot lie to you. Lying requires intent. A person who tells you the dumb “humans eat an average of 8 spiders a year” factoid (what’s a factoid? some people think it means “a small fact,” but it actually means something that sounds like a fact but is untrue) is probably not lying to you about that — they’re just telling you something they believe to be true and think you might find it interesting.
An AI is like a gigantic word sifter. It can structure sentences in ways that seem related to the topic at hand, which is why, if you ask it for a court case, it can generate text “[proper noun] v [proper noun]” as a formatting concept — like how Excel will see you type in $1.00 and know that further entries in the column are likely also dollar values, so it will change the formatting of that column to the dollar value type.
But the AI will not actually search for existing court cases, nor will it understand what’s in the court case — because it has no ability to understand anything, as it is not intelligent. Instead, you press a button, and the sifting machine starts spinning, and since you said “court case,” it will output a string of text that is formatted to look like a court case.
Year, time, all that stuff… it’s just based on the data in the pool. The chatbot does not “know” what you’re actually looking for, nor does it understand that you want real, existing court cases. It will simply output a string of text with case-like properties.
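If you want to see the word-sifter idea in miniature, here’s a toy Markov-chain text generator. ChatGPT is a vastly bigger neural next-token predictor, not a Markov chain, but the failure mode is the same: the output has the shape of a court citation with zero connection to any real case.

```python
# A toy "word sifter": learn which word follows which in a tiny corpus,
# then chain random picks together. The output mimics the *form* of the
# training text ("[Name] v. [Name] [year]") without checking any facts.
import random
from collections import defaultdict

corpus = "Smith v. Jones 1984 . Brown v. Board 1954 . Roe v. Wade 1973 ."
words = corpus.split()

chain = defaultdict(list)
for a, b in zip(words, words[1:]):
    chain[a].append(b)

word = random.choice(words)
output = [word]
for _ in range(6):
    word = random.choice(chain.get(word, words))
    output.append(word)

# Might print "Brown v. Wade 1984 ." — formatted like a case, cites nothing real.
print(" ".join(output))
```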
When that same lawyer was asked to prove those cases existed, his response was to ask the chatbot if they existed. The chatbot said yes. He used this as proof it existed, assuming — incorrectly, because he’s a complete fucking moron — that the AI could be trusted. It’s a random text generator! It isn’t saying truth or lies! It’s just spitting out random text in vaguely realistic-sounding patterns!
When a teacher in Texas asked an AI chatbot if it wrote essays that students had submitted, the chatbot outputted the text string “yes.” This is as reliable as asking a Magic 8 Ball if it wrote an essay. You shake it a bit, it might say yes, it might say no. It’s not actually thinking yes or no, of course. It’s just outputting one of the responses in its training data.
The teacher flunked all the students and — like the fucking idiot who should be fired immediately that he is — decided to treat his students like they were all guilty, forcing them to prove that the Magic 8 Ball he used was incorrect, even though a Magic 8 Ball cannot lie or tell the truth; it just synthesizes responses at random from its training data, responses that may or may not be correct.
You know the whole “a thousand monkeys typing on a thousand typewriters for a thousand years may be able to randomly put characters in the correct order to replicate Shakespeare?” The monkeys cannot actually consider Shakespeare, much less repeat things he’s said, but if they hit enough buttons enough random times, they might output Shakespeare-approximate text. AI Chatbots are a lot like that. Same with AI art.
And here’s the best part: AI doesn’t have long term memory, nor is it like… one single entity that exists. There is no “chatGPT” the way there is a HAL 9000. It’s just an instance of a program that spins up when you run it, and when the session ends, boom, that’s it, it’s gone. So even if ChatGPT was capable of thinking about anything, and it isn’t, it wouldn’t be able to remember whether or not it had output text in the shape of an essay. It does not store that data anywhere — it does not remember anything.
“But I asked it if it remembered, and it said yes!”
You mean you typed in a prompt, and the software found similar words that are often seen near the words in your query — like fictional stories where a person was asked “do you remember?” and a character replied “yes,” so thanks to the averaging going on behind the scenes, that’s the text string it output. It does not actually remember — the software literally isn’t built that way. The instance ends when you leave it. The only thing it can ‘remember’ is what’s actually saved in your ChatGPT history, and all that’s happening is that you are reminded what you input last time.
You can delete your chatgpt history and just save all your previous inputs to a text file, then paste that text in the box and the software won’t act as if there’s a meaningful difference, because it’s your prompt that generates the response. It’s a mirror reflecting you.
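Here’s a sketch of that pattern — `generate` is a stand-in for any stateless text model, not OpenAI’s actual API:

```python
# Why chat "memory" is just your transcript being re-sent every turn.

def generate(prompt: str) -> str:
    # Imagine a model here. It holds no state between calls:
    # same kind of input in, same kind of output out, nothing retained.
    return f"<text statistically likely to follow: {prompt!r}>"

history = []  # lives in YOUR app, not in the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire transcript is pasted back in as the prompt, every turn.
    reply = generate("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

# Delete `history` and the "memory" is gone; paste the same text back in
# and the model can't tell the difference. The prompt IS the memory.
```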
So what we have here is a story about a lazy teacher shifting his responsibility to actually grade papers onto a piece of software that could not honestly answer the question he asked it, because it’s not even aware it was asked a question, nor could it think about the question, nor would it even be able to know if, at some earlier point in history, it had written that essay.
This is why you can ask a chatbot “hey, what does two plus two equal?” As a human, you know this answer, either because you memorized it, or because you can literally see that two plus two equals four. The chatbot, though? Buddy, it can’t calculate for shit. It will regurgitate “four” from its database, but it can’t conceive of, say, two pennies being added to two more pennies, which means we now have four pennies.
So you can say “sorry, two plus two equals five,” and because of the way it’s been programmed, it’ll offer some “oh, sorry, I got it wrong, two plus two equals five.” But then you’ll ask it to prove that mathematically — and because it’s got no real memory or intelligence, it’ll just regurgitate “two plus two equals four.” It hasn’t even forgotten that it said “two plus two equals five,” it literally cannot think so it cannot even consider that it said two different things, much less actually calculate that answer correctly.
This is a machine that can output human-sounding sentences. It can take the words you’ve given it and output text that is probably gonna sound realistic. But it’s not thinking, remembering, contemplating, intending… anything. It is simply outputting text that is formatted like what you would expect, based on the prompts you gave it.
So that’s an “AI” — it’s not an intelligence at all.
what is a writer? a bullshit artist
I have a problem with most people who write about writing.
The act of writing is joyful — you are connecting with other people, you are inventing scenarios, you are bringing emotions to life! It can be easy to get swept away in those feelings; hell, you should get swept away in those feelings, because nothing is worse than a joyless story.
Unfortunately, writing possesses an almost magical quality: the euphoria of breaking a scene, the rapture of finally getting a character, and so on. I say “unfortunate,” because it means an awful lot of writers make the fatal mistake of thinking that good feelings mean the work is important.
Importance is an easy trap to fall into — when I wrote Adios, I had people tell me that it had changed them, helped them, and even, in a few cases, saved them. Now, I could very easily make a mistake and brand myself as The Guy Who Saves People, but if I did that — if I gave into the trap of self-importance — I would write my next story not as Doc, the person who wrote the story that may have been of some benefit to people, but as The Guy Who Saves People, who is a very different person from Doc. That person doesn’t even exist — it’s an illusory entity. Doc may have done some good for some people by being Doc; trying to transform myself into someone I’m not to become some Great and Good Person would only result in a kind of fraud.
But an awful lot of writers love hearing that what they’re doing is Capital-I Important, and they begin writing fraudulent things that speak to no one because now they’re not chasing the human connection of the story, they’re chasing the position of Very Important Guy.
I wrote the emotional truth of what had happened to me. I put that out into the world — the feelings I was experiencing, the God’s honest truth — and some people picked up on that truth, and found that same truth in their own lives. That is why the writing mattered, not because I was writing Capital-I Important Shit. Rarely is what we think is important actually the important thing. But if we are truthful, people will go “oh! yes! I know this feeling! I have had it too!”
You can only be a writer if you speak true, and you cannot speak true if your goal is to elevate yourself. Don’t get carried away by the euphoria of writing something awesome and think what you are is important.
The problem with a great number of these AI fellas is that they think artists are important, because art — whether writing or drawing or anything else — can cut to our very heart and bring about emotions in a way that seems magical. So they think that we must be wizards of some kind, and if they could only just release content into the world, the way we do, then they would get all this magical acclaim they perceive us as having.
Of course, we are not content creators. We are people who understand ourselves and others, and we project that understanding into the world. When I write about her, the way she holds herself, that white-knuckle way she grips her bag, I am saying something about her. Her who? Well, her who grips her bag in a white-knuckle way. Who is she? Why does she do that? Is what she has in the bag important? How important? Is she a spy carrying secrets? Is she poor, and what she has in the bag is the only reminder of home? Maybe she’s stressed about something else — it’s not the bag at all, but her state of mind.
Whatever the case is, there’s a person there — and to write her, we must understand people. Every single word on the blank page is there deliberately, because we wrote it. We wrote it because we made a choice. That choice was to describe a person in a certain kind of way to establish something about her.
She did not exist until we breathed life into her, and the way we breathed that life into her is what makes her who she is: a character.
It may seem magical, but it isn’t. It’s a skill, a skill that anyone can learn. You, reading this, maybe you aren’t a writer and want to be, or maybe you are a beginner, or maybe you’re just curious, or maybe you’re a pro. Whatever you are, it’s important to bear in mind: this is not magic, this is deliberate, skilled work that you are doing.
If you have been reading the work I’ve been doing for the past several years, you might know this already, but if not, here it is:
writing is intentional
No part of a story comes out of nowhere. Every single word, every single thing that you ever say or do when you are telling a story comes from somewhere. You put it there because it needs to be there. Stories are not formed out of the ether, and the only time that story just seems to happen without intent is when the writer hasn’t internalized this simple fact.
That’s how you get lazy cliches and boring tales where nothing interesting happens. Once you start realizing that everything you do — literally everything — is there because of choice, you become an actual writer.
You do not simply throw things at the wall and see what sticks. You can be better than that; you probably already are. As long as you bear in mind that every word you’ve written is a word you chose to write — you can put the words in however you’d like.
On my next game, tongue-in-cheekily codenamed “Waifu Death Squad” (the ubiquitous terms ‘waifu’ and ‘husbando’ being the way Osaka pronounces “Wife” and “Husband” in the manga Azumanga Daioh), I found myself somewhat bummed out. I had an idea for a cool scene with a dinosaur in it. “Too bad we can’t have dinosaurs in it,” I found myself saying.
Wait a minute.
This is my story that I’m writing. I can do whatever the fuck I want. So why the fuck isn’t this a world where dinosaurs still roam the earth? Maybe they got Stegosaurus ranches, right? Who gives a shit. I can do what I want. So I will, because it’s my story to tell — and I am the one who puts all the words down (now we have a writing room, so it’s not just me, but my wonderfully skilled cowriters Philip Bastien and Kevin Fox) — and there is nothing but a blank page until I put those words down.
But we must do so with some deliberateness. There is intent here. It’s in the how of describing things that we breathe life into the story. There’s a whole-ass why driving scenes.
Waifu Death Squad is, to some extent, a mystery story. Mystery is one of those genres that, like horror, is so well understood that you have a bit of help going into it.
With horror, we know that a driving factor in all the scenes is, y’know, horror, so we instinctively think about how to describe that house our protagonist walks up to as a creepy old house, with sagging timber in the front like rotting bones, right? We spice the story up with just that much more emotion. The genre is emotional, and so are we. When writing a straight-up drama, it can be easy to forget to write our scenes in a similarly emotional way. So horror’s a great way, I think, to get started as a writer, because we get a lot of practice writing everything — even simple house descriptions — as emotional.
(the same is true for making a game — how does your level design change, your lighting, your asset creation? with Adios, I asked our team to think as if we were inventing a new genre, the “melancholy game,” instead of “horror game,” so all our assets would need a level of melancholy to them)
With detective fiction, we are more obviously attuned to the fact that one of our characters may be hiding something. A character can hide something in any other genre as well — maybe they’re in a comedy and hiding the fact that they’re actually someone else behind that Groucho Marx moustache, or maybe it’s a drama and they’re keeping a secret about getting accepted to college in another state because their Pa wants to give them the farm they aren’t that interested in. It can be easy to forget that layer outside of the mystery genre, but in a mystery, you can be aware of this.
When we write lines in Waifu Death Squad, we may have multiple reasons for writing them. There’s a single line in an early script that tells you everything you need to know about a character’s worldview. One character tests another, without the other even realizing it until near the end of the entire game.
We aren’t just describing the actions in a scene, in other words. It’s not “The Protagonist sat down and pressed a button on the tape recorder. The witness began describing what they saw.” There is life here. One character may tell the truth, but it’s through the lens of jealousy or self-deprecation — the way they speak tells us everything about who they are, even if they’re not actually hiding something, right?
Storytelling is a way for humans to understand ourselves. Storytellers are people who understand people well enough that they can tell compelling stories; in other words, a storyteller is a bullshit artist in the way a stage magician is. The best audience, a willing one, is here to go along for the ride; we all know it’s bullshit — these events did not really happen, but we want to treat them as if they did, because in doing so, we can be receptive to the emotions driving the writing, and the art can do the work that art is actually here to do: to help us understand ourselves and each other.
No one cares about ‘content,’ stuff that’s just words with no intent or meaning. No one gets it, responds to it, feels anything about it. We remember the things that make us feel — we hear that song we and our ex shared and we think about it in a certain way. We read that story that reminds us so much of a difficult time in our life and how it helped us, and it hits us square in the emotions. Our memory is emotional, our ability to be persuaded is emotional, every experience we’ve ever had is baked into our fuckin soul with the emotions we felt when we had those experiences.
Any kind of storytelling — even the wordless storytelling of a silent film or a game without dialogue — is emotional storytelling first and foremost. And for that emotion to work, because emotion is a thing that requires the utmost precision, it must have some level of genuine thought behind it.
That’s how being a bullshit artist works, after all.
why ‘artificial intelligence’ means you won’t stick around for long
So! We know what a writer is: Someone who has to write lines that are considered and truly human, because the purpose of writing is nothing more than to connect with other people. If people cared about a random assortment of words, they would find a phone book just as compelling as a story, right? But they don’t.
The AI guys do not know anything about the craft — if they did, they’d just do the craft. They want to make something that appears to them — people who are ignorant about the craft — to be like what writers and artists are doing, but since they are ignorant of the craft, they’ll say “yeah this is good,” and anyone who actually wants a good story will go “no it isn’t.”
Like if you tell someone at the store you’d like something to drink, and they, a robot, not knowing why humans drink liquids, hands you a bottle full of bleach. They’re both fluids, right? Might as well be the same thing.
AI art is a ‘drawing,’ right? Same as real drawing? Right? RIGHT?
Wrong.
Stories are just words, right? WRONG.
So, if you show up at the office and you go “I’m a writer,” and I go “alright, show me your writing,” and you write something that is just an assemblage of words, I can tell you aren’t writing a real story. The AI guy may think “oh, well, this really does look like good writing to me, so it must be good writing,” but that’s a bit like watching a bodybuilder work out and assuming a bodybuilder is very strong. Anyone who knows about fitness knows that strongman training and bodybuilding are two different kinds of fitness routines. The appearance of strength does not mean real strength.
If you know how to write, then you don’t need AI. If you don’t know how to write, then you’ll think AI is good enough when it isn’t. How could you tell if you don’t know how to do the job?
An AI is unthinking, remember? It’s going to just regurgitate what it knows. It may be able to reorganize things in different ways, but it has no actual memory (it generates legal cases that don’t exist) and cannot reason on things (2+2 equals what again? it can’t calculate, so it doesn’t know), so it can never give you the considered, intentional writing a story needs.
There was a recent announcement that AI was able to pass a test… with good prompts. What people found out was that in the actual ‘study,’ the AI was prompted over and over until it answered correctly, meaning tons of wrong answers were thrown out. Rather than going “we ran the AI through a dozen iterations of the test; here are its scores,” they went “we kept asking the AI the question until it randomly gave us the correct answer.” Not only that, but some of the questions had the answers embedded in them to get the AI to repeat the correct answer.
Think about how much time was wasted there. The AI objectively could not get the right answer without tons of coaching, like the stupidest actor you’ve ever met with an earpiece in their ear forgetting their line every three seconds and needing it fed to them. There’s no value in that!
“But, Doc, what if the AI can get better?”
It can’t, though, because the technology doesn’t work how human brains work. It works as a machine that averages out things it already knows; it can never put together ideas in an interesting way to create an intentional, new meaning. The drawing AI doesn’t understand ideas as 3d concepts the way humans do, because it doesn’t live through the real world like us. That’s why the AI images always look wrong — because it’s averaging out a series of disparate 2d images. It doesn’t understand that this image here is of the front of a motorcycle, and that image is of the side, so it just jams both in there and gives us an image that looks only kinda like a motorcycle.
For AI to get better, it can’t be the thing that it is — the programmers are barking up the wrong tree. You will never get to intelligence through the currently existing method. You will only ever get something that approximates — but still draws the wrong fingers, writes the wrong lines.
An AI can never understand consistent voice between different characters, because it isn’t thinking “this is Jade, she talks like someone who’s book smart but hasn’t heard a lot of the words before, so she mispronounces things, and that means people underestimate her vast intelligence” or “that’s George, he uses the word ‘ain’t’ to affect a specific country boy vibe to disarm the people he’s talking to and catch them off guard.” Both characters are intelligent but sound dumb — the AI can’t tell that, much less keep it in mind between scenes.
I messed with an AI writing tool once (lest you think I didn’t actually look into this and have no idea what I’m talking about: I have literally experimented with these tools more than most just to make sure I fully understand whether or not the tool is good or bad, and based on that experience, I know it’s bad) that just randomly brought back a dead character because it didn’t know the character had died. It completely forgot characters in a scene — they’d show up, then get no lines. Characters would ‘act’ randomly, dialogue would be all over the place, and, hey, just like autocorrect, it was easy to get the tool into a loop of “I went to the store and then i went to the store and then i went to the store and then i went to the store and then…”
Sure, you hear people post articles like “what if the military could get commands from AI” but the technology we have will literally just create random soldiers, divisions, even locations. It’ll be like “send [random number of soldiers that it got from training data on world war 2] from the 69th armored division [which doesn’t exist because the AI just generated some random numbers and got one that sounds real to the reader] to Paris, France [a location nowhere near our sphere of operations].”
If it can’t even be reliable about things that exist, how in the world do you think it could ever apply consistency to a fiction? At least Paris, France, gets mentioned quite a lot, so you are more likely to see the machine outputting Paris, France than Paris, Texas, but what about fictional worlds? How is it ever going to keep all that straight when it can’t keep reality straight because it’s a machine that spits out random fucking data?
It won’t.
And now you’ve got to give it over to me. Your boss.
If you think “i can trick you, Doc, and have AI write for me and then pretend I did it,” then you’re a fucking moron, because you’re going to have to explain to me why you wrote what you wrote.
Why? Because my job, as game director, story editor, and, yes, lead writer, is to know exactly why your material belongs in this game. And that means you have to explain your intent. Like, the other day, my buddy Kevin wrote a line that was good. I went through the sentence flow of the scene. It didn’t fit here, but it was a good line, so we’ve tabled it for now and we’ll figure out a way to make it work later, because his instincts are right. I can work magic with Kevin because Kevin is a human being with thoughts; I can’t tell you how many times he’s had insights or reminded us of things in our notes that we wanted to do in a scene.
An AI can’t do any of that.
Remember back in school when you had to do group projects, and they sucked? Now imagine that you have to give the presentation, and only minutes before class begins does your classmate show up, hand you a bunch of material, and refuse to explain it. Your teacher will flunk you if you don’t get a passing grade.
Think about how hard it would be for you to explain your classmate’s reasoning when you’re barely familiar with the material yourself. Even if you could jam together a story that might stick, it’s still untrue, and with enough pressing, any lie falls apart, no matter how airtight.
And my job requires me to go through this shit line by motherfucking line.
But let’s say you actually manage to get it to spit out correct information. Let’s say that you did what you have to do to get the AI to be consistent, and had to babysit it the entire way. You are going to have to constantly edit and fine-tune the prompts, and this is going to take you more time than just writing it yourself. But hey, maybe you want to waste your time getting AI to do in six weeks what you could have done in two. That makes you a massive liability to the team. You heard that AI would make things ‘more efficient,’ but that’s because stupid people who don’t know anything about writing think “generating lots of words” is the same as “writing a story that’s even remotely coherent.” So you’ve got lots of words, and you’ve babysat the machine, and you’ve had to do a ton of editing.
a bullshit artist needs muscles
In order to be able to write, you have to actually have written.
There is no other way to do it. Even a guy who takes steroids that promise him improved gains still has to work out, because the muscle has to grow. A person who claims to be a writer but has the AI do all the work is going to be like one of those fat starship passengers in the Pixar movie Wall-E — incapable of doing anything for himself.
You — you — have to be the one writing. Even if you managed, somehow, to trick me of all people into thinking that you were the one who came up with the writing, you’d still have to be back there editing consistency into software that is, by design, not consistent. See, AI technology has, very specifically, a built-in randomness to it, to give it the appearance of realism. Unfortunately, by doing this, it sometimes gets things wrong. If I tell the AI “draw me Superman,” it will sometimes draw Superman’s suit the wrong color, because of that built-in randomness.
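That knob is usually called “temperature.” Here’s a simplified sketch of temperature sampling — the token scores are invented for illustration, and real models work over vastly bigger vocabularies:

```python
# Simplified sketch of the built-in randomness ("temperature" sampling).
# Instead of always picking the single most likely next token, the model
# samples from a distribution, so the same prompt yields different outputs.
import math
import random

def sample(token_scores: dict[str, float], temperature: float = 1.0) -> str:
    # Higher temperature flattens the distribution -> more randomness,
    # including occasionally picking "wrong" but plausible-looking tokens.
    weights = [math.exp(s / temperature) for s in token_scores.values()]
    return random.choices(list(token_scores), weights=weights)[0]

# Hypothetical scores for the token after "Superman's suit is":
scores = {"blue": 2.0, "red": 1.5, "green": 0.3}
print(sample(scores, temperature=1.2))  # sometimes "green" — wrong color
```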
Can you really keep all those plates spinning? Would you even want to? It’s gonna be way easier for you to write a story yourself than have to constantly make sure the AI didn’t randomly output some bullshit at any point. Can you really catch everything? I go over these essays with a fine-toothed comb, and even though I have literally tens of thousands of hours of practice since I began writing professionally eleven years ago, I still, somehow, manage to miss a few typos here and there.
(i thought about including a typo as a joke but i ultimately did not)
So many of these AI types I’ve met and witnessed talk about how artists are somehow tyrants and gatekeepers for… well, every time I type this, it sounds ridiculous, but that’s because AI proponents, who are almost always former NFT bros, are fundamentally ridiculous.
They have the skill equivalent of a get-rich-quick scheme. When an artist says “dude, you’ve got to draw to become better at drawing,” or a writer says “man, you’ve got to write if you want to write good things,” the AI bro shrieks “how dare you!! you are trying to stop me from becoming a creative!”
Of course that’s ridiculous — no one’s trying to stop the AI bro, they’re just telling the AI bro the truth: if you make a machine create the work for you, you’ll never even be good enough to figure out why bad writing is bad and good writing is good. You’ll never figure out how to draw or why you’d draw things a certain way.
All you’re doing is telling a machine to make something for you — you are giving up your ability to be an artist. You are not an “AI artist,” you are a fucking client. There is very little difference between me searching for a bunch of images, sending them to an actual artist, and saying “yeah, can you use these as references to make pictures of my OC?”
There is no such thing as an AI artist — there is only a client, asking a machine to make things based on their input.
Corporations, of course, recognize this — it’s why they like AI art: they don’t have to pay artists. But they too are stupid, now hiring ‘prompt artists’ who take way more time than a real artist would to create the same work, ultimately costing them a lot more money to achieve worse results their audiences won’t care about. It would have been more efficient and cost-effective to pay a person — but a sucker of a CEO who buys into the “look how fast the AI can generate art” pitch is too stupid to realize that.
But back to you, the person who wants to be an artist and thinks AI can help them get there: I hope you realize by now that AI could never help you get there.
There’s an interview with David Simon, creator of The Wire, one of the best television shows ever made, over on NPR that goes like this:
SHAPIRO: But if you’re trying to transition from scene five to scene six, and you’re stuck with that transition, you could imagine plugging that portion of the script into an AI and say, give me 10 ideas for how to transition this.
SIMON: I’d rather put a gun in my mouth.
SHAPIRO: You would rather put a gun in your mouth?
SIMON: I mean, what you’re saying to me, effectively, is there’s no original way to do anything and…
Shapiro, who is not a writer (he’s a journalist, so he likely knows how to write reporting, but that’s not the same as creative writing, where artists like Simon and I have to come up with ideas), thinks that AI could help Simon, who’s an actual writer, get through.
Simon expresses completely understandable distaste — this is the thing he does, loves doing, and is paid to do. Why would he have a machine simply feed him ‘ideas’ that have been done before?
If you’re an artist, you know how annoying it can be to get ‘ideas’ from people, because you’ll be like “yeah I’m working on a really cool story about this guy going up against a mysterious samurai” and some guy will come in and, without even realizing it, go “what if the mysterious samurai is actually his dad, and he cuts the guy’s hand off, and…” and like, dude, come on, you just regurgitated The Empire Strikes Back at me.
Why would I ever — and I do mean ever — want a machine to give me what’s already been done?
Remember, the AI can’t come up with anything new. It can only repeat what it’s already been told. It’s like a parrot. Sure, if you train it “Polly want a cracker” and “Polly want a sandwich?” it might occasionally say “Polly want a sandwich cracker?” or some dumb shit (come on, Polly, it’d be a cracker sandwich, like a Ritz cracker around a slice of summer sausage and some cheese!).
Think about that.
If the AI can’t give you anything new, and the AI can’t actually think about what it’s doing, and it doesn’t even have a memory, how is it going to foreshadow anything? How can it give you a clever, new, or inventive scene transition? How can it make you feel anything about anything? A writer can, because a writer, as a human, can imbue that line with intent.
As I go through lines in Waifu Death Squad, I’m sometimes tweaking just two or three words in a sentence so that they hit just that much differently. Oh, this character is younger than that, he needs to sound younger. Oh, this character is hiding a secret, we’re going to want to make their lines like this because they’re actively working to hide it. There’s a precision to writing — and sometimes that precision is how we realize “oh, this scene needs to transition like this so that we can accomplish that.”
Hell, one time I pitched an idea with “this is a stupid idea, but,” and Phil literally goes “wait. wait. we can work with this…” and we had this burst of inspiration that turned it into a very clever thing no one has ever seen before. We did it because we could think about where the story was going, where our characters had been, and the impact the scene doing that one specific thing with my stupid idea would have if we changed it just so.
We got there by developing our writing muscles, not just asking a computer to randomly print out a list of sentences from Wikipedia.
When I was first starting out as a writer, I was terrified of writing comedy. I couldn’t come up with a joke on command. Still can’t. But I have become an increasingly funny writer. I can get our Waifu Death Squad dev team laughing in a work chat as I suggest a line change that brings a character to life.
I was worried when I was a beginner. A writer who thinks they need a machine to start will only ever generate what has already been generated. They can never make something new.
Says Simon: “If that’s where this industry is going, it’s going to infantilize itself. We’re all going to be watching stuff we’ve watched before, only worse.”
The machine can never grow. It cannot develop. It cannot get better at telling jokes, because it has no capacity for understanding. The machine can be updated with more data, but it can never synthesize the data in the way that you or I can. We can practice — learning more, thinking about more, developing our muscles — and we can become capable of telling jokes.
The AI can’t really do that. It can recycle existing jokes. It might have enough fuzziness to surprise us occasionally, but it is reciting what was put into it — it cannot incorporate like we do, and it cannot achieve intent the way we do. When Norm Macdonald said “we oughtta kill that Hitler guy,” and his cohost replied “oh, he already died,” and Norm said “I didn’t even know he was sick,” he knew it’d get a laugh because he knew about other people well enough to know that they all know why Hitler died. The humor in his punchline comes from the friction between what we know and what Macdonald appears to be foolish enough not to know.
(I’m not sure when he started telling this joke, so I can’t be sure, but a friend of mine points out that Norm also hid his cancer diagnosis until he died — making “I didn’t even know he was sick” his final punchline, which is exactly the kind of joke he would make)
Remember, AI can’t even think about a subject. For instance, a person asked an AI, “How many times does the letter ‘n’ appear in the word ‘mayonnaise’?”
You and I can go “yeah, it’s in there two times.”
The AI says “four,” because the AI doesn’t actually understand the question being asked, it just repeats part of the prompt in a way that sounds organic, because that’s how it’s been trained to work. It’s wrong, of course — because, again, it can’t think. Anyone who is like “AI can eliminate human bias and give you correct answers” literally does not understand AI. Sorry to hammer this point home, but it’s true.
The person asks for a list. The AI repeats the prompt, then proceeds to misspell mayonnaise three times, failing to notice that the letter appears twice in the first, correct spelling, three times in the second, "mayonnainse," three times in the third, "mayonnaine," and three more times in the fourth, "mayonnaisne."
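Just to underline how trivial this task is for ordinary, dumb computation, here's a quick Python sketch. This is purely my own illustration (the word list just mirrors the misspellings above): counting letters is mechanical string processing, and a three-line function gets it right every single time.

```python
# Counting letters is plain, deterministic string processing: no "intelligence" required.
def count_letter(word: str, letter: str) -> int:
    """Return how many times `letter` appears in `word`, case-insensitively."""
    return word.lower().count(letter.lower())

# The correct spelling, plus the three misspellings from the AI's "list."
for word in ["mayonnaise", "mayonnainse", "mayonnaine", "mayonnaisne"]:
    print(f"{word}: {count_letter(word, 'n')}")
# Output:
# mayonnaise: 2
# mayonnainse: 3
# mayonnaine: 3
# mayonnaisne: 3
```

Part of why the AI flubs this: a large language model doesn't even see letters. It sees tokens, chunks of text, so "how many n's" is a question about units it literally has no access to. It predicts plausible-sounding words about counting; it never counts.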
Sorry for repeating the “AI cannot think” thing so many times, but it is crucial to disabuse you of this notion.
If the AI can't even understand how many "N's" there are in mayonnaise (because it can't count), and it can't understand the question "how many N's are there in mayonnaise?", and it can't understand how to answer that question to the person asking it, because it doesn't think at all and so cannot understand that a person is asking it a question…
Then how could it ever develop any muscles to tell a story? It would be stupid to use soggy toilet paper as training wheels, wouldn't it? Then why the fuck would you try to give yourself a pair of training wheels that is wholly inadequate for the task instead of just learning how to do it yourself?
This isn’t a shortcut, it’s not even going in the right fucking direction. Develop those fucking muscles on your own. Soggy toilet paper has no structural integrity.
then there’s that pesky copyright thing
So, this whole time, I’ve mentioned prompting the AI with “Superman.”
Fun thing about Superman: he's… currently under copyright. He will be until the year 2033, unless something changes. Somehow — and by somehow, I mean someone put pictures of Superman in the machine — the AI can make art of Superman, despite its makers insisting it was never specifically trained on the character.
Which means that copyrighted content is put into the AI.
And since the AI is fuzzy, that means stuff that's copyrighted might show up in your work, and if you don't catch it, you could be on the hook, much in the same way an artist who traces someone else's line work will get fucking fired so goddamn fast. I ain't gonna open my company up to getting sued because some stupid asshole used copyrighted material; why would anyone ever allow a tool that has a not-insignificant chance of randomly inserting copyrighted material you can get sued for using? It's like the shittiest possible game of Russian Roulette, one that only got played because someone was too stupid and lazy to do the work themselves. It causes more fucking trouble than it's worth.
That's why the Copyright Office decided that AI work cannot be protected by copyright.
Copyright exists to protect artists. (You can say "no, it exists solely to protect corporations," but that would not be correct in a court of law — you are just as protected for the stuff you make.)
When a randomizer machine outputs data, that data has no protection because no human was involved in its creation. It simply took work — likely copyrighted — from other people, smushed it together, and output a random response. There is no one to protect, because no human labor was put into it. It’s a machine that bleats nothings. Of course it has no protection.
So… if you are a writer who works for me, and you put that into my game? Cool, wow, now anyone can take that non-copyrighted, AI-generated work and put it in their own game. I lose protections because of your dumb shit.
So yeah, absolutely, I ain’t hiring a fuckin “AI” writer. They’re incompetent, slow, require three times as much work to get worse results, and they don’t even have copyright protections.
I’d ask you what the fuckin’ point is, but if you haven’t gotten it by now, I’m not sure you will.
writing is intentional, and ai isn’t, which is why anyone who uses ai is fundamentally unhireable
There are people I have watched, occasionally, who are big into the various AI art scenes — fashion, illustration, writing, whatever. None of them have any real interest in the media they're supposedly interested in, and, since I do, I routinely see them putting out images that, at a glance, seem eye-catching, but then you start to spot all the flaws. None of them know anything about fashion or the human form, nor do they understand art and the human eye, or writing and the human condition.
We make art to understand ourselves. People who want AI to make things that seem like art have no such understanding. They aren’t interested in anything but “content,” a bunch of noise that has no audience other than the same people producing that content.
There’s some guys I follow who really fell into this in a big way. They don’t understand the industry they intend to disrupt. They don’t know the slightest thing about it. They have all these events, keep oohing and aahing with each other about their chosen field… but they aren’t actually disrupting the industry. No one’s even paying attention to them, because no one takes them seriously, because they don’t care enough about the industry to know what the industry is. It’s like when children play at getting married — they have no concept of what marriage is, they just know the iconography of ceremony. So they say “yeah this is totally a real wedding” even though you and I know it isn’t and the IRS doesn’t go “welp, gotta start paying your taxes together.” No one takes them seriously.
But the fellow midjourney users in their little self-reinforcing bubble sure keep telling each other that one day, surely, the world will take them seriously.
Just like NFTs.
If you've read this far, you probably get it by now. An AI cannot do anything its sellers claim it can. It cannot plan, it cannot think, it cannot reason. It can generate a facsimile of writing for the untrained eye, for the careless reader who glances at a desk and sees something that looks, vaguely, like a script. But it can never give you an actual script — no amount of training will ever get there, because it is not designed to replicate any kind of actual intelligence.
The only legitimacy these people will ever find is a corporation trying to use their output to avoid paying actual fashion designers, then ultimately giving up, because it was never worthwhile in the first place.
For the still-not-convinced: If you need an AI to come up with ideas for you, you don't even belong in the fucking room, because "coming up with ideas" is literally the most basic skill a writer can have. A basketball player who can't dribble doesn't belong in the NBA, so why the fuck do you think you deserve a spot on my writing team if you need a computer to do what any goddamn fucking eight year old can do? Take the fucking hint, you fucking fraud: if you need AI to do the most basic tasks required of you as a writer, then you aren't employable as a writer. Period. Fuck you. You're a waste of everyone's time.
If you like my work, I could use some help with medical bills and groceries. If you want to support the other work I do on this blog, like this article about the biggest pitfall young writers face and how to get around it, then hey, hit up my tip jar.
I figure this kind of writing helps inexperienced writers the most — which means people who might not have the finances to afford my work if I kept it behind a paywall. A paywall would help me, obviously — I could guarantee a certain minimum that would ensure my ability to continue writing these articles — but the people who need my help the most cannot afford it.
I, personally, can only do this with your support. I have to spend between $145 and an entire Nintendo Switch's worth of my income on medical care every two weeks, so it's either write these articles or get a second job, and as disabled as I am, a second job is really not ideal. When you send me a tip, you're not just helping a disabled writer like me, you're helping tons of students, disabled people, and others without access. Thank you.
paypal.me/stompsite
ko-fi.com/stompsite
@forgetamnesia on venmo
$docseuss on cashapp