The AI Jobs Apocalypse is here. But we aren't talking about it.

The AI job disruption is fully here, and I am perturbed by how little I see it being talked about. Maybe it’s partially because we are a very binary society, and we aren’t seeing entire career fields eliminated overnight. But we are definitely seeing job fields getting decimated.

What I mean is this: a development team that might have had ten developers before may now have three, with the rest of the work covered by AI. Eric Schmidt, the former CEO of Google, predicts that within ONE YEAR, AI will replace most programming jobs. One year from now! That’s so many high-level jobs that will be wiped out.

And that’s just in an industry that I am personally involved in. The same thing is happening across the creative sphere, and in nearly every degree-requiring job that doesn’t require a person to physically be there. The entire white collar labor force will be affected in one way or another, with the majority being “affected” in the form of layoffs. Other fields that are already being disrupted, or are highly likely to be, include accounting, mathematics, drafting, copywriting, computer graphics (video game and movie productions are already trying to edge AI graphics in to see if you notice), data processing, paralegal work, and medical imaging. And that’s just the beginning.

I know: every technology that comes along and displaces people opens up some new field. But what’s concerning here is the scale of the disruption. There just won’t be enough new jobs created to offset the jobs lost. This is going to be a jobs bloodbath, and the effects can already be seen. It’s taking much longer for developers, programmers, and technical workers to find a job than it did in previous years. That’s not a coincidence. And in many cases, these are the jobs that required people to go into debt for degrees, and now the fields are disappearing.

The darker side of me wonders if part of this “bring manufacturing back to the US” agenda isn’t to push people into those manufacturing roles, because before we know it, there just aren’t going to be educated, skilled jobs at a meaningful level. They are skirting saying it, because some of the people behind this push are also the AI developers. So much money is being poured into those efforts, and the people in charge of our government have a vested interest in making AI dominant. For one thing, they want to beat the rest of the world at it (particularly China), which I understand. But the darker part is that many of them are billionaires, who will be continually enriched if AI is the backbone of the workforce for our intellectual and technical jobs. The environmental and technical requirements for AI to work at scale are enormous. I was concerned years ago by how much energy Bitcoin mining was eating up, and this is that issue on steroids. Add in the provisions of the “Big Beautiful Bill” that Trump and the Republicans are trying to pass, which would keep states from regulating AI AT ALL for the next ten years, and this is not looking good.

Paired with the push for manufacturing jobs (though I haven’t actually seen much movement toward anyone building new factories here; in fact, due to our financial instability since March, many investors are now moving plans they had intended for the US elsewhere) comes the massive cutting of worker protections, Medicaid, and environmental protections, along with the restart of wage garnishment and the rollback of already-approved student loan repayment plans.

All of those topics, if you look at them from the perspective of unleashing the AI gods and pushing Americans into menial work, fit together under a very depressing lens. Particularly if you remind yourself of how many of these government changes were brought about by the chainsaw of Elon Musk, the billionaire who has an AI company.

I’ll elaborate. If you reduce EPA regulation and open up National Parks to development, all of a sudden the environmental problems with AI’s massive drain on infrastructure (huge requirements for electricity and for clean water for cooling, for example) go away. They also get plenty of new land to purchase and exploit for data campuses.

If you take away Medicaid, increase the tax burden on everyone under the top earners, remove worker protections in every form, and allow the harshest penalties against student loan debt again (I’ve always thought that this was a form of indentured servitude. You can declare bankruptcy for gambling debts, but not school debt? Why?), who is forced into the positions that are freed up by deporting all the immigrants?

The not rich Americans.

Whose job security and job availability is decreasing at an incredible rate?

The not rich Americans.

The ultra rich are living in a world where they barely even have to see regular people anymore, except for possibly as blurred faces in the crowd when they give speeches. The rate at which the richest accumulated their money during the pandemic exploded, and it is exploding again with the market manipulation that has happened since March.

Now that they don’t need us humans to fill jobs, I’ve noticed that many places are treating their workers much worse. Forced return-to-office policies, worse benefits, increased surveillance of workers whether or not the job is getting done… this is all a symptom of a change in the American economy. The push for “manufacturing jobs” is largely a farce, and if it isn’t, it will take years to materialize. Meanwhile, if we aren’t needed for votes anymore, and we aren’t needed for production… where is the power of the average American citizen? This is why we have to claw for it while we can.

One of the things that I understood during the election, and the Democratic Party didn’t, was that there are two “economies”, and the Democrats only acknowledged one. There’s the wealthy economy, which is stocks and bonds and the Fed and inflation, and it’s the one that gets headlines. All of that does affect things downstream, but when an average American talks about “the economy”, they mean the prices of houses, gas, food, and other needs. Most Americans don’t invest in the markets at all.

And here’s where this issue is coming back again. The Republicans understood those differences, and still do… but don’t care. As this regime has shown, they don’t care if the “regular person economy” is suffering right now. Tariffs are jacking up prices (e.g., the standoffs with Walmart, the announcement Subaru just made about raising its prices, and on and on), and they don’t care. Because they already got the votes they need. And what will benefit the “rich person economy”? It’s not bringing down the cost of groceries; it’s dominating in software and AI. And the rest of us, unfortunately, are chattel in that game.

Big Tech is not called Big Human. They don’t care about us. Look at the “dead internet” theory. Have you heard about it? It’s the idea that as algorithms, AI, and bots have proliferated online, most of our experience as users no longer involves interacting with actual humans at all. Meta even has AI “agents”, fake accounts meant to drum up interactions to keep us online… because humans aren’t cutting it. They want your attention, but not you.

As generative AI models have continued to grow, a concerning trend has emerged: they are “hallucinating” at a higher rate. That means they are making up data and scenarios that they aren’t supposed to, almost a “ghost in the machine” effect. It’s confusing the developers, because they expected hallucinations to decrease, but the opposite is happening. To me, that just highlights how we are children playing with fire. There’s another concerning side effect: people are using AI as a therapist and friend. I see how that happens; it’s built to make you like it, so it frames things in ways meant to make you feel good. Unfortunately, that also means it will just make stuff up for its user. That can be handled with new parameters, but it brings us to another thing: AI is supremely editable. It’s very easy for someone who owns an AI tool to manipulate or change data, which means introducing falsehoods and bias is even easier than it was before. And when that information comes from a “trusted” source that isn’t allowed to be regulated? That’s not a good thing, if you are someone who cares about humanity or freedom.

Meanwhile, there’s the issue of AI getting smarter than us and breaking out of its confines. Will a lack of oversight help that? I don’t think so, do you?

For years I’ve been saying that we need to come up with a universal basic income plan for the AI jobs disruption (I first wrote about it in 2016), and we have only moved backwards. Just last month, Bill Gates started calling for the same thing. But without industries even acknowledging that this is an issue, it’s frightening to see how quickly the average American will be impacted by this, without having any sort of societal safety net at all.

This has been a lot of doom and gloom, but there are upsides to AI. I think that if we harness it in the right way, it could change humanity for the better. It could make suggestions for change and optimization without the emotional baggage of a human’s biases and desires. I love the idea of that applied to the climate crisis, trade deficits, and medical science, for example. But an unregulated AI industry won’t result in that, as the AI gods are clearly demonstrating. They want to be able to follow their ideas with no interference or input from the world they are changing (and, in the case of the environment, ruining). Does that seem fair to you? It doesn’t to me. Transparency and safeguards are needed in this industry, and they are needed now. We need blanket rules on ethics, bias, transparency, and on keeping experimental AI away from the unknowing public without safety measures and guardrails. (Did you read about the unethical experiment run by Stanford on Reddit? It used AI agents to influence the opinions of people who thought they were talking to other humans, and it worked. I’ll post a link to that scary stuff below.)

Contact your lawmakers about your concerns; these measures are being debated now. And hopefully we can get people into government who understand technology beyond just being enriched by it. And remember, if you’re reading this: it’s not you vs. me, no matter your politics. It’s the super rich against the American citizen.

Former Google CEO’s prediction that AI will replace most programmers within a year:

https://san.com/cc/former-google-ceo-predicts-ai-will-replace-most-programmers-in-a-year/

An estimate of how much power generative AI takes: https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption

Ban on regulating AI: https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a

Unethical Reddit experiment (Stanford and University of Pennsylvania): https://www.androidheadlines.com/2025/04/reddit-ai-experiment-no-consent

4000 jobs lost to AI in May alone: https://www.cbsnews.com/news/ai-job-losses-artificial-intelligence-challenger-report/

Dead Internet theory: https://www.cnet.com/home/internet/what-is-the-dead-internet-theory/

AI And The Future of Humanity

Many years ago, I wrote a post about the future of work and the eventual need for a Universal Basic Income, due to advancements in technology (robotics and AI) that would make human workers obsolete, or nearly so, in various fields. Since that time, we’ve had real-life demonstrations that UBI does work, and in fact makes the humans on it more inspired and creative. And, in a less positive light, we have very recently seen that AI is accelerating its takeover of humanity. Is that hyperbolic? Maybe. But I’d prefer to say it’s accurate.

In the last few months, there have been several cases of AI disrupting human spheres. We saw it in the form of “art” AI, which created artwork from text prompts input by humans. This immediately brought an outcry from the artists the AI had been trained on, who rightly pointed out that the AI was stealing their styles and giving nothing back. Now, has inspiration always been a nebulous and controversial aspect of the art world? Yes, indeed. Where does inspiration cross the line into plagiarism, or appropriation? Artists have debated these questions for years. But I believe it clearly crosses over into plagiarism when an AI bot wholeheartedly shoplifts an artist’s entire artistic DNA, with no attribution or payment. As though artists don’t struggle enough to make money after pouring so much time, effort, and love into their craft… now they have HAL 9000 shoplifting the pooty too? When making art of any subject, in the exact style of a wisher’s favorite artist, takes literally the amount of time it would take to wish it from a Djinn, we have a serious problem. This is already rocking the artistic industry: game studios are laying off pre-visualization artists (always one of my favorite kinds of modern artists, known for sweeping vistas and meticulously rendered fantastical worlds, and required to be intensely talented) in favor of using AI to generate pre-viz art. And will the money saved by axing those talented artists serve to do anything but further increase the financial BMI of the fat cats at the top? Forgive me for being skeptical, but I doubt it.

In the same vein of AI disruption, we also have similar bots taking over the roles of writers (I promise this post has been poorly written by a human, and should handily pass your Turing test. Here’s a typoo to prove it) by using small prompts to write articles and stories. And ChatGPT is also coming for the techies themselves: it’s making large swathes of code writing obsolete, producing code in a tiny fraction of the time (admittedly it takes some handholding, but it is FAR less demanding than traditional code writing).

So in the space of a few months, we have had huge disruptions to whole labor sets. And what’s unique about this particular tech assault is that it’s upon traditionally highly skilled and learned roles. This isn’t just a robot replacing a hamburger flipper; this is AI not only replacing many people who have advanced degrees, but stealing their own work as the very means to do so. And in a particularly insulting twist, the AI is orders of magnitude faster at all of these things than a human could ever be.

So what are the implications? Well, it looks like we are heading headlong toward the part of late-stage capitalism where the 200 rich people with all the money leave Earth, abandoning the rest of us underlings, after we’ve been sapped dry of all our AI-stealable skills, to squabble over our Soylent Green crumbs. I kid, I kid. Sort of. To be sure, AI can be a tool to help us achieve great things. But it can also carelessly be used to irreparably damage us. And that sure feels like where we are right now.

To me, if the powers that be aren’t hearing alarm bells about the future of humanity and what we do, they just aren’t paying attention. Which, given the political antics of the last several years, wouldn’t be all that surprising. But let’s choose to be optimistic, and believe that there ARE people in power who still want to help the human race (and that they aren’t just the sorts who appear to be interested in that, until they can buy Twitter and show that they’ve thrown it all in with the loonies instead. Ahem.) I believe now is the time to put together a plan BEFORE everything gets awful, for once. And what could they do? Well, I have three ideas that could mitigate some things.

  1. Clearly, we need to make it so that if a dataset is used to train a plagiarism bot, the original artists/writers/etc. are paid royalties by the organization that created that bot: both upon initial use, and repeated royalties upon reuse. The good thing about AI is that you have metrics that will tell you that information down to the Nth degree.

  2. For every job where AI or robots displace a human, there should be a heavy tax on that position, used to fund UBI for the people who have been displaced. That will make digital slaves less immediately appealing for doing the work of humans at the mere cost of developing them in the first place and then making them Legion. It will also help fund UBI for those people, so that they can retrain or otherwise lead fulfilling lives and contribute to society, instead of becoming destitute. I know there’s the whole argument that “every time there’s a new technology, the people tied to the old one cry doom”, and that’s true. But when cars replaced horses, the people who shoed the horses could learn to work on cars. AI takes jobs and leaves a vacuum that, in huge proportion, only AI moves into. And nothing like that has happened at any other point in history.

  3. For my most popular idea: tie the income of the people at the top of companies to the income of the people at the lowest tier. The C-suite characters would no longer be able to reap all the rewards from the humans who actually made the products, as we see so much now. Let’s say no one can make more than 20 times what the lowest paid worker makes. I don’t buy that anyone can work 20 times harder in a position than someone else, and so many of our deep-seated problems come from the hugely disparate income inequality that has grown over the past several decades. If the people at the bottom of the pyramid are taken care of, we all prosper.

  4. And as a freebie, let’s make it so that some projects are inalienably performed by humans. We didn’t come this far just to let Pong’s great-grandchild take it all away.

Are these ideas perfect? No. But at least they are something to talk about, which I am not seeing Congress do yet. And they need to start yesterday, before this problem gets out of hand. Because it will, and very quickly. We are on the precipice of being able to make this world so good for so many, if we just proactively take steps to solve problems before it’s too late.

Dark Net, human frailty, and the race towards making ourselves obsolete

I just read a really great article at Vanity Fair. Much of their content is drivel (I'm not huge into what the robber barons of the age are wearing or eating, so I skip those parts) but I find that I'll unexpectedly run into very well-researched and thought provoking articles on issues that fascinate me. In this case, the article that excitedly jumped into my lap like an enthusiastic puppy is Welcome to the Dark Net, a Wilderness Where Invisible Wars are Fought and Hackers Roam Free.

In the very beginning of the article is this quote from the main interviewee (a hacker who is amusingly referred to as "Opsec"): 

"He is a fast talker when he’s onto a subject. His mind seems to race most of the time. Currently he is designing an autonomous system for detecting network attacks and taking action in response. The system is based on machine learning and artificial intelligence. In a typical burst of words, he said, “But the automation itself might be hacked. Is the A.I. being gamed? Are you teaching the computer, or is it learning on its own? If it’s learning on its own, it can be gamed. If you are teaching it, then how clean is your data set? Are you pulling it off a network that has already been compromised? Because if I’m an attacker and I’m coming in against an A.I.-defended system, if I can get into the baseline and insert attacker traffic into the learning phase, then the computer begins to think that those things are normal and accepted. I’m teaching a robot that ‘It’s O.K.! I’m not really an attacker, even though I’m carrying an AK-47 and firing on the troops.’ And what happens when a machine becomes so smart it decides to betray you and switch sides?”

The entire article is well worth a read if you're into information security, threats, or learning about the parts of society that still operate like the Wild West. Spoiler alert: I am fascinated by all of those areas, so I think this is one of the best articles I've read this year. The blurb above sucked me in hook, line, and sinker. It tickled the part of my brain that enjoys these future-foe tangents, because I think what he's talking about directly addresses one of the factors we seem to avoid letting our collective consciousness linger on for too long.
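The poisoning scenario Opsec describes can be sketched in a few lines of code. The toy detector below is purely illustrative (the names, numbers, and threshold are all invented, and no real intrusion detection system is this simple): it learns a baseline of "normal" packet sizes, then flags outliers. Slip attack-sized traffic into the learning phase, and the very same attack sails through as normal.

```python
import statistics

def train_detector(packet_sizes):
    """Learn a baseline: the mean and spread of 'normal' traffic sizes."""
    return statistics.mean(packet_sizes), statistics.stdev(packet_sizes)

def is_anomalous(size, mean, stdev, k=3):
    """Flag anything more than k standard deviations from the baseline."""
    return abs(size - mean) > k * stdev

# Clean learning phase: the baseline is small, ordinary packets.
clean = [100, 105, 98, 102, 97, 101, 103, 99]
mean, stdev = train_detector(clean)
print(is_anomalous(5000, mean, stdev))      # True: attack-sized traffic stands out

# Poisoned learning phase: the attacker seeds the baseline with attack-sized traffic.
poisoned = clean + [4800, 5100, 4900, 5200]
mean_p, stdev_p = train_detector(poisoned)
print(is_anomalous(5000, mean_p, stdev_p))  # False: the same attack now looks "normal"
```

The poisoned baseline inflates both the mean and the standard deviation, so the detector's band of "normal" widens to swallow the attack. That is exactly the "teaching the robot it's O.K." failure the quote warns about, just without the AK-47.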

If you're a regular follower of my blog, you may have surmised that I am basically governed by two large parts of my personality: misanthropic Luddite, and social technophile. Yes, that's conflicting. Yes, I'm aware of that, and I'm also comfortable with duality. It allows me to evaluate and contrast a lot of arguments in my head, and that's one of my favorite pastimes. You never know what you'll find kicking around this old noggin.

The quote about AI sentinels, and AI sentience, articulated a very interesting modern problem. We love relinquishing power to technology, as a species. That's what originally set us apart from the animals. There is evidence of the use of tools from tens of thousands of years ago, and we haven't stopped with that innovation since. Clearly there was a large leap forward during the Industrial Revolution, and it's just continued on an upward trajectory ever since.

What's frightening is that we are quickly closing in on the nexus between being able to accurately control those tools and having them make us obsolete. In a Genesis way, we have created AI in our image, and our child is rapidly moving toward establishing its own predestination. It's no secret that I actively fear AI overtaking us, because in a binary, numbers-and-logic way, it's not too hard to see that in the very near future, machines with no God-given conscience would be able to come up with cold, logical reasons that we don't really need to be here. We take a lot of energy, we are messy, and we are frequently inconvenient and illogical. In a world of machines, it's easy to see how they would write us out of the equation. Is that an alarmist idea? Well, sure. But if you want to be prepared for the future, you need to look at all possibilities... even the dark and uncomfortable ones. In a system meant to adapt and learn to evolve efficiencies, we are most likely to be the least efficient part of the system. Already, ghosts in the machine have made their own logical leaps in various lab tests. When we relinquish too much power, what's the end game?

In the Vanity Fair article, I particularly enjoyed the current projection that he comes up with. I've done quite a bit of speculation in my head about what's going to happen in the 5-10 year range, but I enjoyed having the real-time mirror held up in this illustration. In the last several years there have been numerous, very terrifying security breaches in the shadow world. The average person probably doesn't think about them too much, because the data breaches are so large and so frequent, and there's also that good old "this is scary on a huge level, so I'd better not think about it" response. Usually we just see a breach as a news blip, and maybe a prompt to change passwords. But there have been several large breaches at a level that could be really devastating to much of the American citizenry. Between the health industry breaches, the OPM breach of the government's most sensitive data on its most secretive workers, and the frequent hacks of financial institutions... and those are just the ones we've actually heard about... someone is amassing a lot of data for a lot of nefarious reasons. It's not a big leap to assume that some sort of dossier is being compiled on most people, and that data isn't being kept to safeguard us. (Since I am already at tinfoil hat level here, I'll throw out my favorite advice: always have a kit, always have a plan, and always be ready.)

The AI drones that Opsec speaks of as the sentinels of our systems, with their fluid moral codes (if interfered with at the proper time in the learning process), are exactly the sort of moral gray area in our AI workforce that I'm talking about. When we create our own little bot armies of white knights, but they themselves have no sense of light or dark, that sword can easily and nefariously be turned against us by the wrong people. And it is. Stuxnet is one of my all-time favorite intelligence stories, and that was presumably executed by white knights. But now what are the black knights doing? And when the soldiers we send out into the battlefield are no longer flesh and blood with some sort of assumed shared moral code... but instead hackable bots... that changes the battlefield entirely.

As the world of AI and computers has become more global, control of who owns the top players has quickly changed. And as we here in the US focus more and more on the media game of misdirection (insert your pet #HASHTAGSOCIALFRENZYCAUSE), we get more muddled and forget what we are doing. It's easy to form our own echo chambers and ignore the world at our doorstep, and there's solace in pretending the wolves aren't at the door. The more we shout at each other about manufactured crises inside our warm homes, the more we can try to block out the howling of the wolves outside. But when a bit of silence falls in our lives, when we are alone falling asleep, when the batteries on our devices have died or there's not a game or reality show flickering to put us into soma relief, we know deep down that someone somewhere is amassing to take things from us. As much as we pretend otherwise, most of the world is not like us. Much of the world has vastly different moral codes from what moves us in the US, and there are plenty who want what we have. Particularly as weather patterns and things like water availability affect the other players in the big, scary human survival game, like disease and food. No matter how accepting we want to be of each other (which I support), there are going to be nation-states that will not EVER accept us. And while they may or may not be able to get warheads or fighter jets or thousands of soldiers... they likely CAN get access to the internet. And they'll fight that way. Look at the Cyber Caliphate, the ISIS hacking division. The battlefield continues to evolve. And we need to be aware of that.

So, what is there to do? After all, when it gets down to it, we are all just players in this game at the most basic level. I think one of the biggest things is to be aware. Look the wolves in the eye and make sure you know they exist. Can you do anything about financial monoliths or energy companies getting hacked? Most likely, no. But you CAN be a good steward of your own information. You CAN make sure you know how to handle yourself in an emergency. You CAN make a plan so loved ones know where to go if there's a power blackout or the cell networks go down. And finally, try to take time to unplug on your own sometimes, and remember that we don't need technology to handle everything in life. People don't need to get a hold of you every minute. Step away, remember how to be a full human, and get used to that idea. Appreciate what we have and the experiences we are getting, because we are lucky to be here.