Unless you have been living under a rock, you have probably noticed that a growing number of people are not too pleased with Facebook and Alphabet Inc., parent of Google and developer of the mobile operating system Android.
What started the recent frustrations were the revelations from an employee of Cambridge Analytica about how the company harvested information from 50 million Facebook users. I want to skip over the part about how the data was used, because that's the fog in this storm and the distraction is entering "funny cat video" territory.
Where we should focus our thinking is here: the data was harvested in the first place, into one central repository. That's the issue.
Ask yourself: why do we amass data? Let’s get our hands dirty to answer that question.
Raw data is usually amassed for one or more of these reasons:
1) To understand something,
2) To develop something, and
3) To sell something.
It's all pretty straightforward.
The only other reasons to amass data would be that you are a hoarder or doing something really creepy.
I'm going to go out on a limb here (insert Bugs Bunny-level sarcasm), but my guess is Facebook and Google were doing at least all three.
It’s actually quite transparent when you understand the business models these companies rely on. They need to “understand” you in order to “develop” something so that they can “sell” it to you.
Do you see the hook? In order to "understand" you, they need to entice you somehow, because this isn't your run-of-the-mill brick-and-mortar retail shop or professional services company that you hire for a specific job.
So how are you enticed? Simple: funny cat videos, "free" software and apps, and dopamine hits. In other words, exploiting a "vulnerability in human psychology," as the founding president of Facebook put it.
And of course, while you're using these services, these companies will ask you to help them "enhance the user experience" by telling them everything about you. And if that takes too much of your time, just click on this "I Accept" button and we'll scoop up whatever is lying around on your device.
Now, take all that data we’ve accumulated on you and start crunching it through algorithms and ever-more-powerful AI and we’re going to start predicting your every move. Why, we may even “understand” you so well that we will develop not only a product, but an idea, say a political idea, that we can sell you!
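That "crunching" can be sketched crudely. Here is a minimal, hypothetical illustration (the interaction log and category names are invented, and real systems use far more sophisticated models): even a simple frequency count over a user's history starts to "predict" what they'll engage with next.

```python
from collections import Counter

# Hypothetical interaction log for one user (invented for illustration).
interactions = [
    "cat_video", "news_politics", "cat_video", "shopping_shoes",
    "cat_video", "news_politics", "cat_video",
]

def predict_next(log):
    """Naive 'prediction': the category the user engaged with most often."""
    counts = Counter(log)
    return counts.most_common(1)[0][0]

print(predict_next(interactions))  # -> cat_video
```

Swap the frequency count for ever-more-powerful AI and the same data yields far sharper guesses, which is the whole point of accumulating it.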
Okay, okay. We may not be able to do that, something about election laws, so we’ll just sell the data to somebody who can.
Back to being serious: to anybody who has been sincerely following the digital evolution, information security, big data, artificial intelligence, and algorithms over the last 10-15 years, the Cambridge Analytica, Facebook, and Android revelations are worthy of no more than a yawn or shrug.
Here’s why: The system operated exactly as it was supposed to. And that is the scary part.
In our warp-speed attempts to improve the user experience and make money, we lost sight of a few things that matter to us: like protecting our data and even putting limits on what really needs to be gathered.
For example, if I were using certain Facebook features on Android devices – disclaimer: I don't and won't ever – and I have your number saved in my contact list, what consent have you given me to give your phone number to Facebook via Google? Chances are you never gave me consent, but even if you're not a Facebook user, Facebook may have your phone number, which is one more piece of data they can collate.
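That collation is easy to imagine in code. A minimal, hypothetical sketch (all names and numbers invented): if each app user uploads their contact list, merging those uploads yields a record per phone number, including numbers belonging to people who never signed up.

```python
# Hypothetical contact-list uploads from two app users (all data invented).
uploads = {
    "user_a": [("Charlie", "+1-555-0101"), ("Dana", "+1-555-0102")],
    "user_b": [("Chuck", "+1-555-0101"), ("Erin", "+1-555-0103")],
}

def collate(uploads):
    """Merge uploaded contact lists into one profile per phone number."""
    profiles = {}
    for uploader, contacts in uploads.items():
        for name, number in contacts:
            entry = profiles.setdefault(number, {"names": set(), "seen_by": set()})
            entry["names"].add(name)    # name variants for the same number
            entry["seen_by"].add(uploader)  # who exposed this number
    return profiles

profiles = collate(uploads)
# "+1-555-0101" now carries two name variants and two sources,
# even though its owner may never have used the service.
```

The owner of "+1-555-0101" gave nobody consent, yet the merged store already holds their number, two names for them, and a list of who knows them.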
Spend a few minutes and check out Google's Privacy Policy. No need to spend too much time on it, since most privacy policies take about 15-20 minutes to read. But just spend two minutes on the section "Information we collect" and see how you feel after reading it.
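That 15-20-minute figure is just word count divided by reading speed. A quick back-of-the-envelope check (the word count and reading speed below are assumptions for illustration, not measured values):

```python
def reading_time_minutes(word_count, words_per_minute=250):
    """Estimated reading time, assuming an average adult reading speed."""
    return word_count / words_per_minute

# Assuming a ~4,000-word privacy policy read at ~250 words per minute:
print(round(reading_time_minutes(4000)))  # -> 16
```

Sixteen minutes per policy, times every service you use, is exactly why most people click "I Accept" instead.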
In a simpler time, smaller organizations would have the living daylights sued out of them if client information leaked, especially if non-disclosure agreements were signed. Really, NDAs can be considered a type of non-digital information security protocol. But that close guard on information, a responsibility really, never transferred over to the digital world.
Don't be surprised if there's a sea change on the horizon regarding data security, particularly when it comes to personal data. Users and clients may begin to place pressure on the data collectors to explicitly state what they're doing with the data, how they're protecting it and – the key to me – what happens if your data makes it out of their vaults.
If we start to pressure for legislative and regulatory changes, along with more civil legal action – and I think we will – data collectors will not only see their liability exposure increase; I think you'll see business models change as well. That's why my advice is this: if you're collecting information on individuals and clients, step up your cybersecurity game. We've had a lot of carrot-approach talk over the last few years, but the general public is starting to get fed up and will want to see some stick treatment used on the data collectors.
Again, remember: the system worked exactly as it was designed to. Some don't like the fact that Bob had access to the data. But keep this in mind: it's quite possible that we haven't yet heard of an Alice, Joe, and Sally who had access to similar, or even more, data.
And my suspicion is that some people are having very sleepless nights these days because, while their technical cybersecurity measures may have been all right, these same people made it a policy to share and sell the data for their own interests – sharing with some, selling to others, and withholding it altogether from another group.
I don't believe the general public will be too pleased when they begin to realize they were not the consumer but rather the product. Which is all the more reason, if you're collecting data, to be sure you're taking the appropriate steps to protect it and not misuse it. It could end up being costlier than you initially thought.
By George Platsis, SDI Cyber Risk Practice
April 3, 2018