Unless you’ve been living under a rock these past few weeks, you’ve undoubtedly heard the rustlings, okay, more like the megaphone shouting, about some pretty major data issues at Facebook. You’ve likely wondered about your own information and what may have been done with it.

Who had access to your online content?  Shouldn’t you have been able to trust Facebook to keep your personal information safe? Why on earth, after all of this, would you dream of letting another online network have your information, no matter what they promise you?

First things first: anyone who is (understandably) concerned about what’s been going on needs to understand what actually happened on this two-billion-member social media platform. While not illegal, what Mark Zuckerberg’s empire did with your information, and that of millions of other people, leaves a sour taste in our mouths because it wasn’t right.

Second, what does this all mean for the future of ethics in online data collection? How can data be collected in a way that you’ll feel comfortable with? Finally, why does it matter? What does the future of data collection look like in the face of all this turmoil?

What happened

Before we get into the nitty-gritty of ethics and the future of online data collection, let’s get a handle on what actually occurred. Back in 2014, a researcher named Aleksandr Kogan put together a personality-quiz app for Facebook. About 270,000 users installed the app and, in the process, granted Kogan access to their personal information. Pretty standard.

The difference, however, was that when users granted Kogan’s app access to their personal information, the app also harvested the personal information of all of their Facebook friends. And Kogan saved it all in a private database.

Somehow (Kogan claims he’s a scapegoat in all of this), all of the information collected on over 50 million Facebook users landed in the hands of the notorious voter-profiling company Cambridge Analytica (CA). Kogan sold it to them, and CA then built a massive database of individualized psychographic profiles out of it. In 2016, this same company was hired by Donald Trump’s team to help with his online election campaign.

A purported key to Trump’s ascension was his very own custom database, called Project Alamo, where he stored (actually, stores: he still has it) detailed identity profiles of over 220 million Americans. Alamo contains a range of externally appropriated data points, including voter registration cards, gun ownership records, and credit card histories.

Oh, one more thing: it also contains internet account identity information, purchased from certified Facebook marketing partners, including, you guessed it, Cambridge Analytica.

The problem

It’s not uncommon for data analytics companies to get personal information from different sources. This information is used to target particular demographic segments in marketing campaigns, based on their interests. This in itself is not problematic.

As Hui Xiong, a professor and vice chair at Rutgers Business School in New Jersey, points out, big data collection is simply technology; the ethical challenge lies in the analysis of that data. CA’s use of the collected data in Trump’s campaign is a perfect example of how information can be used inappropriately.

Trump’s team used a digital persuasion-marketing model to devise a communications strategy intended to discourage likely Hillary supporters from voting in the election. One of his tactics, dark posting (non-public paid posts shown only to specific Facebook users), was designed to shrink the electorate rather than encourage a larger group to vote for Trump. These posts included anti-Hillary messaging targeted at what Congress is calling “ethnic affinities,” including the infamous “super predator” speech, which was sent to pro-Democratic, yet infrequent, black voters.

While nothing Trump’s campaign did to target specific members of the voting public has yet been deemed illegal, the scandal over CA’s acquisition of this Facebook data and its subsequent misuse (that’s right, it was a misuse, not a data breach; remember, users handed over the data, their own and their friends’, when they took the quiz) brings to light the ethics of how data that will inevitably be collected can, and should, be used.

The ethics of it all

So, now you know that something went wrong, but it’s tough to put your finger on exactly what, right? Facebook is guilty of a major breach of trust. And that’s what it’s really all about. The data that was mined from your account was ultimately used for someone else’s benefit, not yours.

Sure, technically nothing illegal happened, but just because you can do something doesn’t mean you should. Because Facebook failed to protect your data, your information slipped through the cracks and was ultimately used against you and others like you. Facebook had a responsibility to you and to its other users to operate in your best interests, and clearly, that did not happen.

In the United States, antitrust law exists to maintain fair competition and to prevent a company or other entity from engaging in anti-competitive conduct. The same idea should extend to data mining.

The fact that CA leveraged your Facebook data for Trump’s benefit (and suppressed potential votes for Trump’s competitor) is an indication that antitrust law has not kept up with the changing times. And that seems to be a common theme when it comes to online data analytics and ethical standards.

Increasingly, policies and protections are reactive, not proactive, in their protection of information and data. Facebook failed to consider how its platform could be misused to access information. It failed to think ahead, to see the potential shortcomings in its privacy protection policies; it failed to anticipate Trump.

In her book Weapons of Math Destruction, Cathy O’Neil articulates the growing need to limit the ways algorithms are allowed to influence our lives. Algorithms don’t do the human side of the work, the ethics of it all.

And because they don’t consider how data should be used, algorithms without the proper protections in place facilitate the weaponization of that data in ways that exploit psychological vulnerabilities and feed different users entirely different streams of information on online platforms.

How Braintrvst is different

There are a couple of things that can prevent the misuse of data. Getting ahead of the gaps in policy is crucial: being proactive about how data can be misused is necessary to develop a code of conduct that takes privacy protection into account.

We must consider what kinds of innovation are coming down the pipeline that could change the way data analytics is used. A proactive approach will allow companies like Braintrvst to ensure their protections remain relevant and don’t end up with stark gaps, like those that have come to light in antitrust law and in Facebook’s privacy policies.

Another key safeguard against unethical targeting is to always build algorithms that detect patterns in aggregate data, never in individual information.

Data profiles are perfectly acceptable, and helpful, for people looking to understand how best to reach specific groups. But the data must always be aggregated rather than individually targeted. This way, individual privacy can be protected.
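To make that concrete, here’s a minimal sketch of what aggregate-only analysis could look like, assuming a pandas DataFrame of user records. The column names (age_band, region, clicked_ad) and the minimum segment size are hypothetical, chosen purely for illustration; the point is the pattern: group into segments, report only segment-level statistics, and suppress any segment small enough to re-identify an individual.

```python
# A minimal sketch of aggregate-only analysis. Column names
# (age_band, region, clicked_ad) and the threshold are hypothetical.
import pandas as pd

MIN_SEGMENT_SIZE = 50  # suppress segments too small to stay anonymous


def aggregate_segments(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-segment click rates; never expose per-user rows."""
    grouped = (
        df.groupby(["age_band", "region"])
          .agg(users=("clicked_ad", "size"),
               click_rate=("clicked_ad", "mean"))
          .reset_index()
    )
    # Drop any segment small enough to re-identify individuals.
    return grouped[grouped["users"] >= MIN_SEGMENT_SIZE]


# Analysts only ever see segment-level statistics:
# segments = aggregate_segments(user_df)
```

The size threshold here is the crude version of what more rigorous approaches, like k-anonymity or differential privacy, formalize: no output should depend detectably on any single person.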

Finally, a code of ethical conduct must be developed. The fascinating thing about ethics is that, unlike law, it is based on foundational principles, and as such it can be developed from existing organizational documents, like a company’s mission or values statement. In this way, an ethical code of conduct can be more than simply a list of standards to which an organization adheres, since, as we’ve seen, those standards can all too quickly become obsolete.

Instead, an organization’s mission and values can inform a set of analytical strategies and tools through which an organization can consider emerging data analysis tools. Ultimately, an ethical code of conduct should centre around transparency, oversight, and above all, trust.

Ethical use of data starts at the top; it has to matter at every level of an organization. Proper protections must be in place right from the start, and people should always be free to choose which data they share, and with whom, since privacy means something different to everyone. Most importantly, privacy policies must be fluid, not static: a company should be prepared to keep evolving its policies as new issues emerge, no matter how much foresight went into its initial code of conduct.
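As a thought experiment, here’s a hypothetical sketch of what “choosing which data you share, and with whom” might look like at the code level. The field and recipient names are invented for illustration; the idea is simply that nothing leaves the platform without an explicit, user-granted permission.

```python
# A hypothetical per-field, per-recipient consent check.
# Field and recipient names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ConsentPolicy:
    # Maps each data field to the set of recipients the user approved.
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, data_field: str, recipient: str) -> bool:
        # Default is deny: a field never mentioned is never shared.
        return recipient in self.allowed.get(data_field, set())


# This user shares their email with the platform itself, but their
# friend list with no one at all.
policy = ConsentPolicy(allowed={
    "email": {"platform"},
    "friend_list": set(),
})

assert policy.permits("email", "platform")
assert not policy.permits("friend_list", "third_party_app")
```

The design choice worth noting is the default: sharing is opt-in per field and per recipient, the opposite of the friends-of-friends default that let Kogan’s quiz harvest data from people who never installed it.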

Privacy protects critical freedoms. As such, data collection must be thoughtful, deliberate, and consistently aware of the power it wields, or the gravity of what’s at stake will simply be lost in the rush of what’s possible.
