What Cambridge Analytica Teaches Us About Data & Ethics
Data is a term that has long conjured both confusion and fear among consumers and marketers alike. The dawn of social media and internet sharing means that oceans of data are now available to organisations for their campaigns and, until recently, not a great deal of concern had been raised.
Cambridge Analytica took the usage of data to new frontiers, asking not only how much data users are willing to give up, but also whether we can be manipulated with our own data. The answer was yes, but the moral implications of this triumph are only now being realised. Where most organisations use data simply to optimise their campaigns, Cambridge Analytica weaponised user data into commercial and political influence. The data world took a sharp breath and allowed itself a brief moment of self-reflection. Add to this the ever-looming deadline of GDPR and you have a data-rich environment that is incredibly hostile towards both users and organisations.
We Have Been Here Before
Cambridge Analytica represents what happens when data harvesting and strategy are taken to their most extreme levels. When organisations attempt to shape every aspect of user behaviour, they place the entire company’s future in jeopardy. Mass surveillance and manipulation have consistently resulted in scandal for the organisations behind them, and data-driven marketing is no exception.
Cambridge Analytica now finds itself in the same position the NSA did in 2013, following Edward Snowden’s revelations about its practices of mass surveillance, data harvesting and manipulation. The ethics of the organisation, or lack thereof, have become synonymous with shady data mining and the violation of user rights, meaning that any company caught in a similar situation is likely to be forever associated with “The Cambridge Analytica Scandal”.
What Does This Mean for Digital Marketing?
Digital marketing has found its roots more deeply embedded in data with every new platform and advance. From ad space to audience targeting, we are surrounded by user data, and we must tread carefully. Users are already nervous about sharing information, and the recent issues concerning Facebook are only likely to deepen that unease. Consumers are becoming smarter about how they give out data, who is using it and how to remove it. We are dangerously close to the point of no return, where users lose trust in digital advertising and the industry tears itself apart trying to regain it.
So, what can we do?
For companies operating in any digital capacity, the solution is clear. We, as agencies, must collaborate with clients to exercise a basic moral code in how data is gathered and what is done with it. We must enter into a genuine agreement with users, built on trust, transparency and user-based control. Data collection must be clearly labelled, users must be comprehensively and concisely informed of what their data will be used for, and removing personal data must be simple. Put simply, be GDPR-ready before GDPR becomes a requirement.
In short, we must be willing to show that we will police ourselves: that we can be trustworthy, and that our agenda is obvious from the beginning of the consumer-business relationship. We must ensure that our data practices are ethical from beginning to end, and that those practices can stand up to the scrutiny of both experts and everyday consumers.
So, what does Cambridge Analytica teach us about data and ethics? It shows that the line between smart data strategy and morally fraudulent practice is far blurrier than we would like to admit. If every organisation took a cold, hard look at how it manages user data, where would it stand? If you’re even slightly concerned you fall on the wrong side of that line, perhaps it’s time to rethink your data processes...