
Facebook and Google accused of manipulating us with “dark patterns”


By now, most of us have seen privacy notifications from popular web sites and services. These pop-ups appeared around the time that the General Data Protection Regulation (GDPR) went into effect, and they are intended to keep the service providers compliant with the rules of GDPR. The regulation requires that companies using your data are transparent about what they do with it and get your consent for each of these uses.

Facebook, Google and Microsoft are three tech companies that have been showing their users these pop-ups to ensure that they’re on the right side of European law. Now, privacy advocates have analysed these pop-ups and have reason to believe that the tech trio are playing subtle psychological tricks on users. They worry that these tech giants are guilty of using ‘dark patterns’ – design and language techniques that make it more likely that users will give up their privacy.

In a report called Deceived By Design, the Norwegian Consumer Council (Forbrukerrådet) calls out Facebook and Google for presenting their GDPR privacy options in manipulative ways that encourage users to give up their privacy. Microsoft is also guilty to a degree, although it performs better than the other two, the report said.

Tech companies use so-called dark patterns to do everything from making it difficult to close your account through to tricking you into clicking online ads (for examples, check out darkpatterns.org’s Hall of Shame).

In the case of GDPR privacy notifications, Facebook and Google used a combination of aggressive language and inappropriate default selections to keep users feeding them personal data, the report alleges.

A collection of privacy advocacy groups joined Forbrukerrådet in writing to the Chair of the European Data Protection Board, the EU body in charge of applying GDPR, to bring the report to its attention. Privacy International, BEUC (an umbrella group of 43 European consumer organizations), ANEC (a group promoting European consumer rights in standardization) and Consumers International are all worried that tech companies are making intentional design choices that leave users feeling in control of their privacy while psychological tricks push them in the opposite direction. From the report:

When dark patterns are employed, agency is taken away from users by nudging them toward making certain choices. In our opinion, this means that the idea of giving consumers better control of their personal data is circumvented.

The report focuses on one of the key principles of GDPR, known as data protection by design and default. This means that a service should be configured to protect privacy and transparency out of the box, making protection the default rather than something the user must work to enable. Users’ privacy must be protected even if they never touch the data collection options. As an example, the report states that when a user is choosing their privacy settings, the most privacy-friendly options should be the ones ticked by default.
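As a rough illustration of that principle (the settings object and field names below are hypothetical, not taken from the report or from any company’s code), a consent flow built around data protection by default would start every data-collection toggle in the off position, so that a user who takes no action gives nothing away:

    // Hypothetical sketch of data protection by default:
    // every data-collection toggle starts disabled, so doing
    // nothing is the most privacy-friendly outcome.
    interface ConsentSettings {
      thirdPartyAds: boolean;
      faceRecognition: boolean;
      locationHistory: boolean;
      voiceActivity: boolean;
    }

    // Privacy-friendly defaults: all collection is off until opted into.
    const defaultConsent: ConsentSettings = {
      thirdPartyAds: false,
      faceRecognition: false,
      locationHistory: false,
      voiceActivity: false,
    };

    // Only options the user has explicitly enabled override the defaults.
    function applyUserChoices(overrides: Partial<ConsentSettings>): ConsentSettings {
      return { ...defaultConsent, ...overrides };
    }

    // A user who dismisses the pop-up without reading it keeps everything off.
    console.log(applyUserChoices({}));

Under this model, ‘accept and continue’ would mean accepting the privacy-friendly defaults, not silently enabling data collection.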

Subverting data protection by default

Facebook’s GDPR pop-up failed the data protection by default test, according to the report. Users had to dig into a data settings screen to turn off ads based on data from third parties, whereas simply hitting ‘accept and continue’ turned that advertising delivery method on automatically.
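In code terms, the asymmetry the report describes might look something like the following sketch (the function names and steps are illustrative assumptions, not Facebook’s actual implementation): the prominent one-click path switches the data use on, while declining it costs several extra screens:

    // Hypothetical sketch of the dark pattern alleged in the report.
    type AdConsent = { adsFromThirdPartyData: boolean };

    // The highlighted one-click path quietly turns the data use on.
    function acceptAndContinue(): AdConsent {
      return { adsFromThirdPartyData: true };
    }

    // The opt-out path reaches the same result only after extra steps,
    // e.g. open data settings, find the option, review, confirm.
    function manageDataSettings(stepsTaken: string[]): AdConsent {
      console.log(`Opting out required ${stepsTaken.length} extra steps`);
      return { adsFromThirdPartyData: false };
    }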

Facebook was equally flawed in its choices around facial recognition, which it recently reintroduced in Europe after a six-year hiatus prompted by privacy concerns. The technology is turned on by default unless users actively switch it off, forcing them through four more clicks than those who just leave it as-is.

The report had specific comments about this practice of making users jump through hoops to select the most privacy-friendly option:

If the aim is to lead users in a certain direction, making the process toward the alternatives a long and arduous process can be an effective dark pattern.

Google fared slightly better here. While it forced users to access a privacy dashboard to manage their ad personalization settings, it turned off options to store location history, device information and voice activity by default, the report said.

The investigators also criticized Facebook for wording that strongly nudged users in a certain direction. If they selected ‘Manage Data Settings’ rather than simply leaving facial recognition on, Facebook’s messaging about the positive aspects of the technology – and the negative implications of turning it off – became more aggressive.

“If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you,” its GDPR pop-up messaging said. “If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged,” it continued.

The report argues that these messages imply that turning the technology off is somehow unethical, and notes that they say nothing about how else Facebook would use facial recognition technology.

Microsoft drew less heat from the investigators, who tabulated each tech provider’s transgressions in a comparison table in the report.

We would have liked to see Apple included, as the company has long differentiated itself on privacy, pointing out that it sells devices, not users’ data.

If nothing else, this report shows that privacy options deserve real attention. It’s worth taking a few minutes to read these GDPR notifications and think about what they’re asking before clicking through to your favourite service. If you already shrugged and clicked ‘accept and continue’, you can still go in and change your privacy settings later. Just watch for those dark patterns: forewarned is forearmed.
