
In today’s blog, George Platsis looks at the evolving role of AI in cybersecurity and makes the argument for unfettered access to information.
There’s a lot of talk these days about online censorship, public manipulation, mass surveillance, and the role Big Tech plays in society. Okay, so what does any of this have to do with cybersecurity?
Well, quite a lot.
You see, in the future – and in the tech world, “the future” can easily be a few seconds away – cybersecurity practices may very well take a hard turn toward relying on artificial intelligence and machine learning (AI/ML) for the majority of our data protection needs. In fact, it’s already begun. Do a quick search and you’ll find plenty of evidence of it. And you’ll also find plenty of evidence of the bad guys loving and using AI/ML as well.
At the heart of the issue is the amount of data traffic we need to manage. There is no way a human alone, even with the assistance of technology, can manage and respond to every alert and event that comes through an operations center. It’s simply untenable for a human to discern which data packets are legitimate, which exfiltrate data surreptitiously, and which carry malicious payloads.
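To make that scale problem concrete, here is a minimal sketch of the kind of blunt triage a machine can do in milliseconds and a human never could: sweep a day of outbound-transfer events and keep only the statistical outliers for review. The field names, volumes, and threshold are all made-up assumptions for illustration, not any real product’s logic.

```python
import random
from statistics import mean, stdev

random.seed(7)

# Hypothetical data: one day of outbound-transfer events from 50 devices.
events = [{"device": f"host-{i % 50:02d}", "bytes_out": random.gauss(5_000, 1_200)}
          for i in range(100_000)]
for e in random.sample(events, 3):   # plant a few exfiltration-sized outliers
    e["bytes_out"] += 500_000

mu = mean(e["bytes_out"] for e in events)
sigma = stdev(e["bytes_out"] for e in events)

# One blunt statistical rule reads every event and keeps only the
# handful worth human eyes.
flagged = [e for e in events if (e["bytes_out"] - mu) / sigma > 8]
print(f"{len(events):,} events reduced to {len(flagged)} for human review")
for e in flagged:
    print(f'  {e["device"]}: {e["bytes_out"]:,.0f} bytes out')
```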
You see, all you need is a few kilobytes of well-written code to cause havoc. Stuxnet is a perfect example. Only 500 kilobytes. That’s 0.5 megabytes. Nothing. That’s roughly a 30-second MP3. Most pictures you take with your phone these days are a few megabytes each. When you consider that the average U.S. user eats up about 100MB per day in mobile data alone, you begin to appreciate the phrase “needle in a haystack” that much more.
And data production will only increase. For example, most people are unaware that many of their calls are now actually data calls. How do you think you get those crystal-clear, high-definition-sounding calls? They aren’t being placed over the reliable, yet old-fashioned, copper lines or analog mobile networks.
Everything that we do is data. Big data. MASSIVE data! That is why machines are needed to assist with our cybersecurity needs. And the bigger the enterprise, the harder the task gets. This is not some linear scaling issue, either.
So what’s the trend looking like? More of our lives and business are passing through digital sources, which means: more data produced, consumed, processed, and used.
And what does that mean? The current techniques of managing data will exhaust themselves, if they have not been exhausted already.
Enter AI/ML.
And what does AI/ML absolutely, positively need to operate? Algorithms. No algorithm, no AI/ML. Simple. Algorithms are like the air and water for AI/ML.
With Jack Dorsey, co-founder and CEO of Twitter, and Sheryl Sandberg, COO of Facebook, set to testify on Capitol Hill, an honest conversation will require a lot of talk about algorithms. And part of that conversation needs to dispense with the farce that algorithms are benign and unbiased. They’re not. And people are waking up to this fact, which is a good thing.
You see, an algorithm is a creation. Yes, an algorithm’s foundations are in science, but a great deal of artistry goes into its design, especially when you want the algorithm to give you a desired output. And that artistry comes from the developer, the person writing or tweaking that code.
I already sense what’s coming…
“You said desired output!”
Yes, I did. It’s because an algorithm, by definition, cannot exist without intent. An algorithm is a set of methodically designed rules to solve a specific problem.
The only instance in which an algorithm should be considered benign is when the task itself is benign.
For example: spotting odd data traffic patterns from a device can be considered a benign task; determining whether language is “offensive” is not. The latter is subject to the interpretation and bias of the coder no matter how well-defined the rules are, whereas the former is pretty clear-cut.
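To show why the former is clear-cut, here is a minimal sketch of a per-device traffic detector. The window size, warm-up count, and 4-sigma threshold are illustrative assumptions, but note the key property: once “odd” is given a stated statistical definition, the rule is objective and testable, with no room left for the coder’s opinion.

```python
from collections import deque
from statistics import mean, stdev

class TrafficBaseline:
    """Flag a device's traffic sample as odd when it sits more than
    `threshold` standard deviations above that device's own recent
    history. Window and threshold values are illustrative."""

    def __init__(self, window: int = 200, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, bytes_out: float) -> bool:
        anomalous = False
        if len(self.history) >= 30:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (bytes_out - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(bytes_out)
        return anomalous

# Simulated per-minute byte counts: a steady pattern, then a spike.
detector = TrafficBaseline()
samples = [5_000 + (minute % 7) * 300 for minute in range(100)] + [250_000]
for minute, sample in enumerate(samples):
    if detector.observe(float(sample)):
        print(f"minute {minute}: {sample:,} bytes out looks odd; flag for review")
```

Contrast that with a rule for “offensive” language: there is no equivalent of the baseline above to measure against, so someone’s judgment has to be encoded into the algorithm itself.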
One of the most horrifying things I heard at a conference a couple of years back was the need to “determine what the corporation’s role in monitoring the internet is” so that there “could [be] recalculat[ion of] the algorithms to focus” on specific social issues. What was even more horrifying to me was the loud applause this comment received.
People should be cautious in what they demand, because those who control the levers of power today may not tomorrow.
It is comments like that one that give “algorithms” a bad rap. And that bad rap is warranted when algorithms are used to shape social issues and ideas instead of performing benign tasks. People who live in open and free societies have a legitimate concern if a private corporation can de-platform them, deny them access to business opportunities, and suppress their voice.
And the only way private corporations have the ability to take such action is by relying on algorithms that are inherently biased to spot “behavior” that may violate their terms and conditions.
There are many downsides to algorithms being used like this. Honestly, too many to list here. Suffice it to say that AI/ML, when used for anything other than specific, surgical cybersecurity protection, may be looked at with skepticism. And that skepticism would be justified given today’s public discourse.
So we need to separate out the good and bad of algorithms. Algorithms used to protect your data while respecting your privacy: good. Algorithms used to manipulate information flows and access to the public square: bad.
And for those worried about misinformation and disinformation, perhaps I’m old-fashioned when I say this: let people say it all. I will consume as much of it as I want, and I will reach my own decisions and conclusions. I take it as a personal responsibility to be informed and to make sure I am not being hoodwinked. And I believe these are foundational principles of open and free societies.
In closing, I have often said that AI/ML are like nuclear weapons, but perhaps a better characterization is that they are like powerful chemicals. When used for very specific, narrowly-defined tasks, they can be helpful. Extremely helpful in some cases. But when they are used indiscriminately, or see widespread use, there are consequences. Sometimes those consequences are immediate, such as corrosion and burns. In other cases, they are like an undetected cancer, only discovered when it is too late.
That’s why we need to have an honest discussion about how algorithms are being used in AI/ML. If not, we’ll screw up two things: 1) a valuable tool for data protection, one that is sorely needed with all this data zipping all over the place, and 2) how we consume, process, and exchange information in a free and open society.
By George Platsis, SDI Cyber Risk Practice
September 4, 2018