AI and the Future of Humanity

Many years ago, I wrote a post about the future of work and the eventual need for a Universal Basic Income, due to advancements in technology (robotics and AI) that would make human workers obsolete, or nearly so, in various fields. Since that time, we’ve seen real-life pilot programs demonstrating that UBI does work, and in fact makes the people on it more inspired and creative. And, in a less positive light, we have very recently seen AI accelerating its takeover of humanity. Is that hyperbolic? Maybe. But I’d prefer to say it’s accurate.

In the last few months, there have been several cases of AI disrupting human spheres. We saw it in the form of “art” AI, which creates artwork from text prompts input by humans. This immediately sparked an outcry from the artists whose work the AI had been trained on, who rightly pointed out that the AI was stealing their styles and giving nothing back. Now, has inspiration always been a nebulous and controversial aspect of the art world? Yes, indeed. Where does inspiration cross the line into plagiarism, or appropriation? These are questions artists have debated for years. But I believe it clearly crosses over into plagiarism when an AI bot wholeheartedly shoplifts an artist’s entire artistic DNA, with no attribution or payment. As though artists don’t struggle enough to make money after pouring so much time, effort, and love into their craft…now they have HAL 9000 shoplifting the pooty too? When making art of any subject a wisher chooses, in the exact style of that wisher’s favorite artist, literally takes no longer than wishing to a Djinn, we have a serious problem. This is already rocking the art industry: game studios are laying off pre-visualization artists (always one of my favorite kinds of modern artists, intensely talented people known for sweeping vistas and meticulously rendered fantastical worlds, among other things) in favor of using AI to generate pre-viz art. And will the money saved by axing those talented artists serve to do anything but further increase the financial BMI of the fat cats at the top? Forgive me for being skeptical, but I doubt it.

In the same vein of AI disruption, we also have similar bots taking over the roles of writers (I promise this post has been poorly written by a human, and should handily pass your Turing test. Here’s a typoo to prove it), using small prompts to write articles and stories. And ChatGPT is also coming for the techies themselves: it’s making large swathes of code writing obsolete, generating code in a tiny fraction of the time (admittedly it takes some handholding, but it is FAR less demanding than the traditional code writing process).

So in the space of a few months, we have had huge disruptions to entire labor markets. And what’s unique about this particular tech assault is that it’s aimed at traditionally highly skilled and learned roles. This isn’t just a robot replacing a hamburger flipper; these are AIs not only replacing many people who have advanced degrees, but stealing those people’s own work as the very means to do so. And in a particularly insulting twist, the AI is orders of magnitude faster at all of these things than a human could ever be.

So what are the implications? Well, it looks like we are heading headlong towards the part of late-stage capitalism where the 200 rich people with all the money leave Earth, abandoning the rest of us underlings, after we’ve been sapped dry of all our AI-stealable skills, to squabble over our Soylent Green crumbs. I kid, I kid. Sort of. To be sure, AI can be a tool to help us achieve great things. But it can also be used carelessly to damage us irreparably. And that sure feels like where we are right now.

To me, if the powers that be aren’t hearing alarm bells about the future of humanity and what we do, they just aren’t paying attention. Which, given the political antics of the last several years, wouldn’t be all that surprising. But let’s choose to be optimistic, and believe that there ARE people in power who still want to help the human race (and that they aren’t just the sorts who appear to be interested in that, right up until they can buy Twitter and show that they’ve chosen to throw it all in with the loonies instead. Ahem.) I believe now is the time to put together a plan and do something BEFORE everything gets awful, for once. And what could they do? Well, I have three ideas (plus a freebie) that could mitigate some things.

  1. Clearly, we need to make it so that if an artist’s or writer’s work is used in a dataset to train a plagiarism bot, the original artists/writers/etc. are paid royalties by the organization that created that bot. Both upon initial usage, and repeated royalties upon reuse. The good thing about AI is that you have metrics that can trace that information down to the Nth degree (a toy sketch of how that accounting could work follows this list).

  2. For every job where AI or robots displace a human, there should be a heavy tax on that position, used to fund UBI for the people who have been displaced. That would make digital slaves less immediately appealing as a way to do the work of humans for the mere cost of developing them once and then making them Legion. It would also fund UBI for the displaced, so that they can retrain or otherwise lead fulfilling lives and contribute to society, instead of becoming destitute. I know there’s the whole argument that “every time there’s a new technology, the people tied to the old one cry doom,” and that’s true. But when cars replaced horses, the people who shoed the horses could learn to work on cars. AI takes jobs and leaves a vacuum that only AI moves into, at least in a huge proportion of cases. And there simply hasn’t been an equivalent at any other point in history.

  3. For my most popular idea, tie the income of the people at the top of companies to the income of the people at the lowest tier. The C-suite characters would no longer be able to reap all the rewards by taking the profits from the humans who made the products, as we see so much now. Let’s say no one can make more than 20 times what the lowest-paid worker makes. I don’t buy that anyone can work 20 times harder in a position than someone else, and so many of our deep-seated problems come from the hugely disparate income inequality that has grown over the past several decades. If the people at the bottom of the pyramid are taken care of, we all prosper.

  4. And as a freebie, let’s make it so that some projects are inalienably reserved for humans. We didn’t come this far just to let Pong’s great-grandchild take it all away.
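
As a quick illustration of idea #1, here’s a minimal sketch of what that royalty accounting might look like. Everything in it is hypothetical: the log format, the flat rate, and the assumption that a generator can report which artists’ work influenced each output.

```python
# Hypothetical royalty attribution. Everything here is invented for
# illustration, including the log format and the per-use rate.
from collections import Counter

# Assumed usage log: each generated image records which artists' work
# in the training data shaped it (the "metrics" mentioned in idea #1).
usage_log = [
    {"output_id": 1, "artists": ["artist_a", "artist_b"]},
    {"output_id": 2, "artists": ["artist_a"]},
    {"output_id": 3, "artists": ["artist_b", "artist_c"]},
]

ROYALTY_PER_USE = 0.05  # assumed flat rate per generated output, in dollars

def compute_royalties(log, rate):
    """Split a flat per-output royalty evenly among the artists it drew on."""
    owed = Counter()
    for entry in log:
        share = rate / len(entry["artists"])
        for artist in entry["artists"]:
            owed[artist] += share
    return {artist: round(total, 4) for artist, total in owed.items()}

print(compute_royalties(usage_log, ROYALTY_PER_USE))
# {'artist_a': 0.075, 'artist_b': 0.05, 'artist_c': 0.025}
```

The specific numbers don’t matter; the point is that once the pipeline records attribution, splitting royalties (both initial and repeated) is trivial arithmetic.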

Are these ideas perfect? No. But at least they are something to talk about, which I am not seeing Congress do yet. And they need to start yesterday, before this problem gets out of hand. Because it will, and very quickly. We are on the precipice of being able to make this world so good for so many, if we just keep proactively taking steps to solve problems before it’s too late.

Dark Net, human frailty, and the race towards making ourselves obsolete

I just read a really great article at Vanity Fair. Much of their content is drivel (I'm not huge into what the robber barons of the age are wearing or eating, so I skip those parts), but I unexpectedly run into very well-researched, thought-provoking articles on issues that fascinate me. In this case, the article that excitedly jumped into my lap like an enthusiastic puppy is "Welcome to the Dark Net, a Wilderness Where Invisible Wars Are Fought and Hackers Roam Free."

Near the very beginning of the article is this passage about the main interviewee (a hacker who is amusingly referred to as "Opsec"):

"He is a fast talker when he’s onto a subject. His mind seems to race most of the time. Currently he is designing an autonomous system for detecting network attacks and taking action in response. The system is based on machine learning and artificial intelligence. In a typical burst of words, he said, “But the automation itself might be hacked. Is the A.I. being gamed? Are you teaching the computer, or is it learning on its own? If it’s learning on its own, it can be gamed. If you are teaching it, then how clean is your data set? Are you pulling it off a network that has already been compromised? Because if I’m an attacker and I’m coming in against an A.I.-defended system, if I can get into the baseline and insert attacker traffic into the learning phase, then the computer begins to think that those things are normal and accepted. I’m teaching a robot that ‘It’s O.K.! I’m not really an attacker, even though I’m carrying an AK-47 and firing on the troops.’ And what happens when a machine becomes so smart it decides to betray you and switch sides?”

The entire article is well worth a read if you're into information security, threats, or learning about the parts of society that still operate like the Wild West. Spoiler alert: I am fascinated by all those areas, so I think this is one of the best articles I've read this year. The blurb above sucked me in hook, line, and sinker. It tickled the part of my brain that enjoys these future-foe tangents, because I think what he's talking about directly addresses one of the threats we seem unwilling to let our collective consciousness linger on for too long.
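
To make the attack Opsec describes a little more concrete, here's a toy sketch of poisoning the learning phase of a naive anomaly detector. The detector, the traffic numbers, and the threshold are all invented for illustration; real systems are far more sophisticated, but the principle he outlines is the same.

```python
# Toy illustration of baseline poisoning: if attacker traffic sneaks into
# the learning phase, the detector learns that attacks are "normal."
import statistics

def train_baseline(samples):
    """Learn a simple mean/standard-deviation baseline from observed traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

normal_traffic = [100, 105, 98, 102, 99, 101, 103, 97]  # benign packet sizes
attack_traffic = 500                                    # clearly abnormal

# Clean learning phase: the attack stands out.
clean = train_baseline(normal_traffic)
print(is_anomalous(attack_traffic, clean))     # True

# Poisoned learning phase: the attacker inserts their own traffic into the
# baseline, teaching the system that values near 500 are normal and accepted.
poisoned = train_baseline(normal_traffic + [480, 500, 520, 510])
print(is_anomalous(attack_traffic, poisoned))  # False
```

The only difference between the two runs is what the system saw while it was learning. Control the baseline, and you control what counts as "normal."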

If you're a regular follower of my blog, you may have surmised that I am basically governed by two large parts of my personality: misanthropic Luddite and social technophile. Yes, that's conflicting. Yes, I'm aware of that, and I'm also comfortable with duality. It allows me to evaluate and contrast a lot of arguments in my head, and that's one of my favorite pastimes. You never know what you'll find kicking around this old noggin.

The quote about AI sentinels, and AI sentience, articulates a very interesting modern problem. As a species, we love relinquishing power to technology. That's what originally set us apart from the animals. There is evidence of tool use stretching back millions of years, and we haven't stopped innovating since. Clearly there was a large leap forward during the Industrial Revolution, and the trajectory has only continued upward ever since.

What's frightening is that we are quickly closing in on the nexus between being able to accurately control those tools and having them make us obsolete. In a Genesis way, we have created AI in our image, and our child is rapidly moving towards establishing its own destiny. It's no secret that I actively fear AI overtaking us, because in a binary, numbers-and-logic way, it's not too hard to see that in the very near future, machines with no God-given conscience could come up with cold logical reasons that we don't really need to be here. We take a lot of energy, we are messy, and we are frequently inconvenient and illogical. In a world of machines, it's easy to see how they would write us out of the equation. Is that an alarmist idea? Well, sure. But if you want to be prepared for the future, you need to look at all possibilities...even the dark and uncomfortable ones. In a system meant to adapt and learn to evolve efficiencies, we are most likely to be the least efficient part of the system. Already, ghosts in the machines have made their own logical leaps in various lab tests. When we relinquish too much power, what's the end game?

In the Vanity Fair article, I particularly enjoyed the current-day projection that he comes up with. I've done quite a bit of speculation in my head about what's going to happen in the 5-10 year range, but I enjoyed having a real-time mirror held up in this illustration. In the last several years there have been numerous, very frightening security breaches in the shadow world. The average person probably doesn't think about them too much, because the data breaches are so large and so frequent, and there's also that good old "this is scary on a huge level, so I'd better not think about it" response. Usually we just see them as a news blip, and maybe a prompt to change passwords. But what has happened is that there have been several large breaches on a level that could be truly devastating to much of the American citizenry. Between the health industry breaches, the OPM breach that exposed the government's most sensitive data on its most secretive workers, and the frequent hacks of financial institutions...and those are just the ones we've actually heard about...someone is amassing a lot of data for a lot of nefarious reasons. It's not a big leap to assume that some sort of dossier is being compiled on most people, and that data isn't being kept to safeguard us. (Since I am already at tinfoil-hat level here, I'll throw out my favorite advice: always have a kit, always have a plan, and always be ready.)

The AI drones that Opsec describes as the sentinels of these systems, with their fluid moral codes (if interfered with at the right point in the learning process), are exactly the sort of moral gray area in our AI workforce that I'm talking about. When we create our own little bot armies of white knights, but those knights themselves have no sense of light or dark, that sword can easily and nefariously be turned against us by the wrong people. And it is. Stuxnet is one of my all-time favorite intelligence stories, and that was presumably executed by white knights. But now what are the black knights doing? And when the soldiers we send out onto the battlefield are no longer flesh and blood with some sort of assumed shared moral code...but instead hackable bots...that changes the battlefield entirely.

As the world of AI and computers has become more global, control of who owns the top players has quickly shifted. And as we here in the US focus more and more on the media game of misdirection (insert your pet #HASHTAGSOCIALFRENZYCAUSE), we get more muddled and forget what we are doing. It's easy to form our own echo chambers and ignore the world at our doorstep, and there's solace in pretending the wolves aren't at the door. The more we shout at each other about manufactured crises inside our warm homes, the more we can try to block out the howling of the wolves outside. But when a bit of silence falls in our lives, when we are alone falling asleep, when the batteries on our devices have died or there's not a game or reality show flickering to put us into soma relief, we know deep down that someone, somewhere, is amassing to take things from us. As much as we pretend otherwise, most of the world is not like us. Much of the world has vastly different moral codes from what moves us in the US, and there are plenty who want what we have. Particularly as weather patterns and water availability affect the other big factors in the scary human survival game, like disease and food supply. No matter how accepting we want to be of each other (which I support), there are going to be nation-states that will not EVER accept us. And while they may or may not be able to get warheads or fighter jets or thousands of soldiers...they likely CAN get access to the internet. And they'll fight that way. Look at the Cyber Caliphate Army, ISIS's hacking division. The battlefield continues to evolve. And we need to be aware of that.

So, what is there to do? After all, when it gets down to it, we are all just players in this game at the most basic level. I think one of the biggest things is to be aware. Look the wolves in the eye and make sure you acknowledge their existence. Can you do anything about financial monoliths or energy companies getting hacked? Most likely, no. But you CAN be a good steward of your own information. You CAN make sure you know how to handle yourself in an emergency. You CAN make a plan so loved ones know where to go if there's a power blackout or the cell networks go down. And finally, try to take time to unplug on your own sometimes, and remember that we don't need technology to handle everything in life. People don't need to be able to reach you every minute. Step away, remember how to be a full human, and get used to that idea. Appreciate what we have and the experiences we are getting, because we are lucky to be here.