Why Data Breaches and Privacy Concerns Don’t Matter to Monopolies

Ravie Lakshmanan
7 min read · Nov 17, 2018


2018 is already on track to become one of the worst years on record for data breaches. A total of 3,676 breaches compromising a collective 3.6 billion records were reported in just the first nine months of the year. What’s more, “seven breaches exposed 100 million or more records with the 10 largest breaches accounting for 84.5% of the records exposed.” Although the number of breaches and the number of exposed records were down 8 percent and 49 percent from last year, that does little to make the statistics any less staggering. It makes one wonder whether companies are even serious anymore about safeguarding the data users have entrusted to them.

The Big Four (Image: Recode, Scott Galloway)

If worries about online security sit at one end of the spectrum, privacy concerns and violations occupy the other. To sample a few:

  • Last month, Facebook removed 66 accounts, pages and apps linked to Russian firms that scraped user data from the social network, including some companies that built facial recognition software for the Russian government by pulling images of Russian citizens from Google Search and Yandex.
  • About 125 Android apps, installed by millions of users, were used to track those users’ behaviour within the apps so that a vast network of bots could be programmed to mimic their actions, all as part of a massive, sophisticated digital advertising fraud scheme that earned additional revenue “thanks to ads being viewed by bots.”
  • Companies like Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap were found to be offering their business customers uninstall trackers that exploit push notifications to detect if and when users uninstall their apps (a sketch of how this works follows this list). Their clients include Spotify, T-Mobile US and Yelp.
  • Singapore stoked privacy fears after the government unveiled plans to install surveillance cameras atop over 100,000 lampposts to help authorities pick out and recognize faces in crowds across the island-state.
  • China released a list of 169 people earlier this June who were banned from taking flights or trains under its new nationwide Social Credit reputation management system, which is on track to be deployed by 2020 to watch its 1.4 billion citizens in real time through a combination of drones, video surveillance and facial recognition, as well as by monitoring their brainwaves and their use of messaging apps like WeChat. The misdemeanours included failing to pay debts on time and behaving badly on flights, while cameras equipped with A.I. smarts have been used for everything from nabbing criminals to shaming jaywalkers to listing the names of people who don’t pay their debts, in addition to extensively tracking members of the Uighur Muslim minority using drones disguised as doves.
  • American consumer products manufacturer Clorox struck a deal with Kinsa, a tech startup that sells smart thermometers which pair with a smartphone app to measure temperature, to license its users’ zip code information and target ads in areas reporting higher rates of fever.
  • Restaurant waitlist and table reservation apps like Nowait (acquired by Yelp last year for US$ 40 million) and OpenTable were found to be mining the dining preferences of users, uniquely identified through their mobile numbers, such as which restaurants they visit and where they prefer to sit, for location-based marketing.
  • AdGuard research found earlier this year that Facebook’s tracking software (called Facebook Audience Network) was embedded in 41 percent of the apps studied and collected information about the device (OS, brand, model, screen resolution), carrier name, time zone, app information including the name of the current activity (such as transferring funds in a banking app) and IP address, in addition to a unique advertising ID on both Android and iOS. Another study, undertaken at Oxford University by analysing 959,000 Android apps, found that 88.44 percent of them sent data back to Google, 42.55 percent to Facebook, 33.88 percent to Twitter, 26.27 percent to Verizon, 22.75 percent to Microsoft and 17.91 percent to Amazon, all without user consent.
  • Wi-Fi equipped vehicles were found to beam drivers’ locations and their music, dining and coffee preferences back to carmakers hoping “to turn your car’s data into a revenue stream.” General Motors, whose Marketplace app lets drivers buy coffee and doughnuts, make restaurant and hotel reservations and even prepay for gasoline through the touchscreen dashboard, admitted to “using the location of your car to serve you.”
  • Data brokers like USDate were found to be auctioning the online dating profiles of users on sites like Match, Tinder, OkCupid and Plenty of Fish, including “usernames, email addresses, nationality, gender, age and detailed personal information about all of the people who had created the profiles, such as their sexual orientation, interests, profession, thorough physical characteristics and personality traits.” (Researchers purchased 1 million profiles for as little as US$ 153.)
  • Sidewalk Labs, an Alphabet (Google’s parent, that is) subsidiary focussed on urban innovation, walked into a privacy minefield after its plans to build a futuristic smart neighbourhood called Quayside on Toronto’s waterfront raised surveillance concerns over proposed data-collection practices that would allow third-party companies and developers to access identifiable information about Quayside residents once the project is built. (It is worth noting that Sidewalk Labs intends to expand the project to other cities, which means the data practices adopted now could affect a far larger number of people in the future.)
  • Google just this week announced plans to consolidate DeepMind Health, the maker of the AI-powered Streams app, into its newly formed Google Health unit as part of its ongoing efforts to streamline its fragmented health initiatives (Google Fit, health-oriented features in Google Search, G Suite for healthcare businesses, AI-based health research offerings, and the Alphabet subsidiaries DeepMind, Verily and Calico). DeepMind, which works closely with NHS hospitals in the U.K., has been accused of “trust demolition” for going back on its earlier promise to never connect identifiable health data to Google. (Streams came under fire back in 2016 when it was given access to intimate details of over 1.6 million patients from three NHS hospitals.)
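About that uninstall-tracking item above: the mechanics are worth spelling out, because they show how a feature built for messaging can double as surveillance. Reporting on these trackers describes them sending silent, data-only push notifications that the user never sees; if the push service rejects a device token as no longer registered, the vendor infers the app has been uninstalled. The sketch below illustrates that inference with made-up names (send_silent_push, PushTokenInvalid) and a simulated push gateway; it is a minimal illustration of the idea, not any vendor’s actual implementation.

```python
# Hypothetical sketch of uninstall detection via silent push notifications.
# All names are made up; the push gateway is simulated with a simple set.

from datetime import datetime, timezone

# Stand-in for the push service's knowledge of which device tokens are still
# registered; in reality this check happens on the push gateway's side.
_REGISTERED_TOKENS = {"device-a", "device-b"}


class PushTokenInvalid(Exception):
    """The push service no longer recognises this device token."""


def send_silent_push(device_token: str) -> None:
    """Send a data-only (invisible) notification; the user sees nothing.

    Simulated here: a real tracker would call its push gateway and treat an
    'unregistered token' error as the signal it is actually interested in.
    """
    if device_token not in _REGISTERED_TOKENS:
        raise PushTokenInvalid(device_token)


def check_uninstalls(device_tokens: list[str]) -> dict[str, str]:
    """Infer uninstalls: a token rejected by the push service implies
    the app has been removed from that device."""
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    status = {}
    for token in device_tokens:
        try:
            send_silent_push(token)
            status[token] = "installed"
        except PushTokenInvalid:
            status[token] = f"uninstalled (detected {today})"
    return status


if __name__ == "__main__":
    # "device-c" was uninstalled, so its token is no longer registered.
    print(check_uninstalls(["device-a", "device-b", "device-c"]))
```

The uncomfortable part is that no visible notification is ever involved; the delivery failure itself is the data point being harvested.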

And this isn’t counting the Google+ breach that affected as many as 500,000 accounts last month, or Facebook’s most recent breach that exposed data from 29 million users, or the Cambridge Analytica fiasco that put the social network under the spotlight earlier this year. Of course, there is no such thing as absolute privacy or security that safeguards user information from every sort of attack and breach. But the complete lack of accountability, transparency and trust is impossible to shrug off either. It didn’t have to be this way.

In a blog post after the breach came to light earlier this September, Facebook said that, apart from contact information, the stolen data included the following: “Username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches.”

“People’s privacy and security are important to us, and we are sorry this happened,” said Guy Rosen, VP of product management, in a call with journalists afterwards. But when, time and again, Facebook apologises and promises it won’t let this happen again, it only comes across as empty and half-hearted at best. Sorry is no longer good enough, because this, it bears repeating, is nothing short of an all-round privacy disaster.

Not only is users’ stolen personal data ripe for abuse in various forms (stalking, harassment, phishing attacks, you name it), there is also no recourse for claiming the information back. Are affected users expected to change their email addresses and phone numbers, now that they know their contact details are floating around the internet? As Slate’s Will Oremus notes, “If your password is stolen, you change your password. The damage is done and you move on. But if all your identifying personal information is stolen? You can’t change that. It could haunt you for the rest of your life.” The questions are many, but Facebook has so far remained silent on the matter.

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” CEO Mark Zuckerberg wrote in a lengthy blog post on March 21, at the height of the furore over Cambridge Analytica’s misuse of Facebook users’ data for political campaigns, adding “I want to thank all of you who continue to believe in our mission and work to build this community together. I know it takes longer to fix all these issues than we’d like, but I promise you we’ll work through this and build a better service over the long term.”

That Facebook continues to ask for our trust even as it keeps giving us more and more reasons not to trust it, repeatedly losing control over our data in one form or another while doing precious little to assuage the concerns, is a clear sign of two things: the odds of Facebook changing much (barring some form of stringent regulation) are slim to none, and it intends to bank fully on its ubiquity to keep users coming back no matter how egregious the violation.

It’s not just Facebook, mind you. Most of what we do, both online and offline, is increasingly used as fodder for targeted advertising by companies that offer their services for “free” in exchange for our attention and our voluntarily shared day-to-day minutiae, raising significant privacy concerns. And given the rise in the frequency and magnitude of data breaches over the last few years, an important question merits further attention: why aren’t corporations doing enough to protect their users’ personal information? Why is it acceptable and “far cheaper for companies of Facebook’s size, even with large financial penalties, to invest less in security and simply apologise and accept a fine when things go wrong?”

Is it because they can afford to lose our data, seeing no financial incentive in trying to secure it? Or is it because users, suffering from data-breach fatigue, are becoming desensitised to the whole idea of privacy in a digital world? The lack of consumer revolt and regulation, coupled with people’s complacent attitudes towards cybersecurity, could be one reason why most companies, particularly the Big Five (Microsoft included), suffer no lasting damage to their brand and reputation. But a large part of it also stems from the fact that there is no viable alternative, forcing most users to continue using the service or product irrespective of the personal costs involved. These companies are called monopolies for a reason. They are simply too big to fail, and even when they do fail, they can pick up right where they left off, at little or no cost to their bottom lines.
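To see why the “apologise and accept a fine” arithmetic can work out in a monopoly’s favour, here is a deliberately crude back-of-envelope comparison. Every figure in it is hypothetical, chosen only to show the shape of the incentive problem, not to reflect any real company’s security budget or any real regulator’s penalty:

```python
# Illustrative only: every number below is hypothetical, picked to show the
# shape of the incentive, not any real company's security spend or fine.

def expected_breach_cost(p_breach: float, fine: float, cleanup: float) -> float:
    """Expected annual cost of skimping on security and paying up when caught."""
    return p_breach * (fine + cleanup)

security_investment = 500e6   # hypothetical extra annual spend to prevent breaches
p_breach = 0.30               # hypothetical chance of a major breach in a given year
fine = 600e6                  # hypothetical regulatory penalty if one happens
cleanup = 300e6               # hypothetical incident response and PR costs

do_nothing = expected_breach_cost(p_breach, fine, cleanup)  # 0.30 * 900M = 270M

print(f"Expected cost of 'apologise and pay the fine': ${do_nothing / 1e6:.0f}M")
print(f"Cost of actually investing in prevention:      ${security_investment / 1e6:.0f}M")

# With these made-up numbers, shrugging and paying the occasional fine is the
# cheaper strategy, which is precisely the incentive problem described above.
# Only penalties large and likely enough to flip the inequality, or users who
# can credibly leave, change that calculus.
```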


Written by Ravie Lakshmanan

Computational journalist and cybersecurity reporter
