Dark Net, human frailty, and the race towards making ourselves obsolete

I just read a really great article at Vanity Fair. Much of their content is drivel (I'm not huge into what the robber barons of the age are wearing or eating, so I skip those parts), but I find that I'll unexpectedly run into very well-researched and thought-provoking articles on issues that fascinate me. In this case, the article that excitedly jumped into my lap like an enthusiastic puppy is Welcome to the Dark Net, a Wilderness Where Invisible Wars Are Fought and Hackers Roam Free.

Near the very beginning of the article is this passage about the main interviewee (a hacker who is amusingly referred to as "Opsec"):

"He is a fast talker when he’s onto a subject. His mind seems to race most of the time. Currently he is designing an autonomous system for detecting network attacks and taking action in response. The system is based on machine learning and artificial intelligence. In a typical burst of words, he said, “But the automation itself might be hacked. Is the A.I. being gamed? Are you teaching the computer, or is it learning on its own? If it’s learning on its own, it can be gamed. If you are teaching it, then how clean is your data set? Are you pulling it off a network that has already been compromised? Because if I’m an attacker and I’m coming in against an A.I.-defended system, if I can get into the baseline and insert attacker traffic into the learning phase, then the computer begins to think that those things are normal and accepted. I’m teaching a robot that ‘It’s O.K.! I’m not really an attacker, even though I’m carrying an AK-47 and firing on the troops.’ And what happens when a machine becomes so smart it decides to betray you and switch sides?”

The entire article is well worth a read if you're into Information Security, threats, or learning about those parts of society that still operate like the Wild West. Spoiler alert: I am fascinated by all those areas, so I think this is one of the best articles I've read this year. The blurb above sucked me in hook, line, and sinker. It tickled the part of my brain that enjoys these future-foe tangents, because I think what he's talking about directly addresses one of the factors we seem to avoid letting our collective consciousness linger on for too long.
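Opsec's point about poisoning the learning phase is concrete enough to sketch in code. The toy below is my own illustration, not anything from the article; the traffic numbers and the `learn_baseline`/`is_anomalous` helpers are invented for the example. It shows a bare-bones statistical anomaly detector: trained on clean traffic, it flags an attack, but if the attacker can inject their own traffic into the baseline while the detector is learning what "normal" looks like, the very same attack sails through.

```python
import statistics

def learn_baseline(samples):
    # "Learning phase": fit a simple model of normal traffic volume.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag anything more than `threshold` standard deviations above normal.
    mean, stdev = baseline
    return (value - mean) / stdev > threshold

# Normal traffic: roughly 100 requests per minute.
clean_traffic = [98, 101, 97, 103, 100, 99, 102, 98, 100, 101]

# The attack traffic the defender should catch later.
attack_value = 500

# 1. A detector trained on clean data spots the attack.
clean_baseline = learn_baseline(clean_traffic)
print(is_anomalous(attack_value, clean_baseline))     # True

# 2. The attacker poisons the learning phase by inserting attacker
#    traffic into the baseline, so the attack comes to look "normal".
poisoned_traffic = clean_traffic + [480, 510, 495, 505, 490]
poisoned_baseline = learn_baseline(poisoned_traffic)
print(is_anomalous(attack_value, poisoned_baseline))  # False
```

Real network-defense systems are vastly more sophisticated than a mean-and-standard-deviation check, but the failure mode is the same one Opsec describes: whoever controls the data in the learning phase controls what the machine believes is normal.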

If you're a regular follower of my blog, you may have surmised that I am basically governed by two large parts of my personality: misanthropic Luddite and social technophile. Yes, that's conflicting. Yes, I'm aware of that, and I'm also comfortable with duality. It allows me to evaluate and contrast a lot of arguments in my head, and that's one of my favorite pastimes. You never know what you'll find kicking around this old noggin.

The quote about AI sentinels, and AI sentience, articulates a very interesting modern problem. As a species, we love relinquishing power to technology. That's what originally set us apart from the animals. There is evidence of tool use stretching back millions of years, and we haven't stopped innovating since. Clearly there was a large leap forward during the Industrial Revolution, and it's continued on an upward trajectory ever since.

What's frightening is that we are quickly closing on the nexus between the point where we can still accurately control those tools and the point where they make us obsolete. In a Genesis way, we have created AI in our image, and our child is rapidly moving towards establishing its own predestination. It's no secret that I actively fear AI overtaking us, because in a binary, numbers-and-logic way, it's not too hard to see that in the very near future, machines with no God-given conscience could come up with cold, logical reasons that we don't really need to be here. We take a lot of energy, we are messy, and we are frequently inconvenient and illogical. In a world of machines, it's easy to see how they would write us out of the equation. Is that an alarmist idea? Well, sure. But if you want to be prepared for the future, you need to look at all possibilities...even the dark and uncomfortable ones. In a system meant to adapt and learn to evolve efficiencies, we are most likely the least efficient part of the system. Already, ghosts in the machines have evolved to make their own logical leaps in different lab tests. When we relinquish too much power, what's the end game?

In the Vanity Fair article, I particularly enjoyed the current projection that he comes up with. I've done quite a bit of speculation in my head about what's going to happen in the 5-10 year range, but I enjoyed having the real-time mirror held up in this illustration. In the last several years there have been numerous, very terrifying security breaches in the shadow world. The average person probably doesn't think about them too much, because the data breaches are so large and so frequent, and there's also that good old "This is scary on a huge level, so I better not think about it" response. Usually we just see them as a news blip, and maybe a prompt to change passwords. But several large breaches have happened on a level that could be truly devastating to much of the American citizenry. Between the health industry breaches, the OPM breach that exposed the government's most sensitive data on its most secretive workers, and the frequent hacks of financial institutions...and those are just the ones we've actually heard about...someone is amassing a lot of data for a lot of nefarious reasons. It's not a big leap to assume that some sort of dossier is being compiled on most people, and that data isn't being kept to safeguard us. (Since I am already at tinfoil-hat level here, I'll throw out my favorite advice: always have a kit, always have a plan, and always be ready.)

The AI drones that Opsec describes as the sentinels of our systems, with their fluid moral codes (if interfered with at the proper time in the learning process), are exactly the sort of moral gray area in our AI workforce that I'm talking about. We are creating our own little bot armies of white knights, but they themselves have no sense of light or dark, and that sword can easily and nefariously be turned against us by the wrong people. And it is. Stuxnet is one of my all-time favorite intelligence stories, and that was presumably executed by white knights. But now what are the black knights doing? When the soldiers we send out into the battlefield are no longer flesh and blood with some sort of assumed shared moral code, but instead hackable bots, that changes the battlefield entirely.

As the world of AI and computers has become more global, control over who owns the top players has quickly changed. And as we here in the US focus more and more on the media game of misdirection (insert your pet #HASHTAGSOCIALFRENZYCAUSE), we get more muddled and forget what we are doing. It's easy to form our own echo chambers and ignore the world at our doorstep, and there's solace in pretending the wolves aren't at the door. The more we shout at each other about manufactured crises inside our warm homes, the more we can try to block out the howling of the wolves outside. But when a bit of silence falls in our lives, when we are alone falling asleep, when the batteries on our devices have died or there's no game or reality show flickering to put us into soma relief, we know deep down that someone, somewhere is amassing to take things from us.

As much as we pretend otherwise, most of the world is not like us. Most of the world has vastly different moral codes than what moves us in the US, and there are plenty who want what we have, particularly as weather patterns and things like water availability affect the other players in the big, scary human-survival game, like disease and food. No matter how accepting we want to be of each other (which I support), there are going to be nation-states that will not EVER accept us. And while they may or may not be able to get warheads or fighter jets or thousands of soldiers...they likely CAN get access to the internet. And they'll fight that way. Look at the Cyber Caliphate, the ISIS hacking division. The battlefield continues to evolve. And we need to be aware of that.

So, what is there to do? After all, at the most basic level, we are all just players in this game. I think one of the biggest things is to be aware. Look the wolves in the eye and make sure you acknowledge their existence. Can you do anything about financial monoliths or energy companies getting hacked? Most likely, no. But you CAN be a good steward of your own information. You CAN make sure you know how to handle yourself in an emergency. You CAN make a plan so loved ones know where to go if there's a power blackout or the cell networks go down. And finally, try to take time to unplug on your own sometimes, and remember that we don't need technology to handle all things in life. People don't need to get a hold of you every minute. Step away, remember how to be a full human, and get used to that idea. Appreciate what we have and the experiences we are getting, because we are lucky to be here.