The Guardian newspaper recently reported some pretty shocking allegations about the anonymous messaging app Whisper.
In summary, the paper called out Whisper for tracking users (especially “newsworthy” ones), for recording the locations of users who had not opted in to the app’s location feature, and for sharing user data with third parties, including the U.S. government and military.
Whisper denied all accusations, but then radically changed its privacy policy and threatened the Guardian. Whisper editor-in-chief Neetzan Zimmerman said “the Guardian made a mistake posting that story and they will regret it.”
Wait, editor-in-chief? Why does a messaging app need a news organization? (It turns out Whisper collects and shares user data with various publications.)
The Guardian’s attack and Whisper’s defense mostly boil down to a disagreement over whether aggregated data, or data not associated with specific names or phone numbers, is really “personal data” at all, and therefore over whether any privacy violation took place.
The biggest conflict in technology today is between the companies that provide online products and services and their customers, and it is over privacy. The emotional, intuitive belief held by many is that personal data-harvesting companies are “evil,” exploitative or, at best, apathetic about the concerns of users.
But that view is clearly false. Technology companies don’t require incoming employees to pass a sociopath test; morally, the people who work at them are regular people.
The whole those-people-are-bad-people knee-jerk reaction won’t get us any closer to resolving the crisis. So I have an alternative view.
I believe the experience of spending all your time solving the problems of how to survive and succeed in the online application and service businesses engenders a set of beliefs that the public at large doesn’t share.
In other words, the fundamental problem with privacy is that the companies who make the choice to violate or not violate our privacy hold one set of beliefs in direct contradiction to the beliefs of their customers.
Here are the questions the industry and their customers have completely different answers for:
1. Who does user data belong to?
2. Can the company be trusted with that data?
3. Does the user agreement and privacy policy communicate the intended, actual or potential use of user data?
4. Is personal data without personally identifiable information like name, phone number or email address really personal data?
If you’re in the industry, the answers to these questions are 1) the companies in possession; 2) yes; 3) yes; and 4) no.
If you’re a typical end user, the answers are 1) the people who generated that data; 2) no; 3) no; and 4) yes.
And this disagreement is the fundamental problem that needs to be solved.
Let’s look at each of these four points of disagreement individually.
1. Who does user data belong to?
When a user thinks about the data an app collects—say, the user’s home address, current location, age, where they went to school, or whatever—they naturally assume that data belongs to them.
But when a company builds a site and tool to harvest such information in aggregate, they naturally think it’s theirs to do what they want with. That’s why the public is shocked by revelations about what the companies are actually doing with data and why the companies themselves are shocked by the public outrage.
The tendency is to blame the user for this discrepancy. (I’ll get to user agreements and privacy policies in a minute.)
2. Can the company be trusted with that data?
Everybody trusts themselves, but when it comes to personal data nobody should. Users shouldn’t trust their ability to create or remember passwords, for example. They shouldn’t trust their systems as being unhackable, because they’re not. And they shouldn’t trust themselves with other people’s data, either (say, data sent to them via email).
Likewise, companies shouldn’t trust themselves with user data and assume that just because their intentions are good that anything they do is OK. Companies can’t assume their data will never be hacked, stolen, compromised, subpoenaed or abused by employees.
3. Does the user agreement and privacy policy communicate the intended, actual or potential use of user data?
The biggest lie in technology is that end-user license agreements, or EULAs—as well as privacy policies—are read by users or have any effect on what users know about the services they use.
I would guess that less than 1 percent of users read them, yet everyone who uses such products clicks the button that says they’ve read and understood the documents.
Let’s be clear: This is just legal butt-covering that shields the company from future criminal charges or civil lawsuits.
Because nobody reads them, companies go ahead and load them with permissions to do every possible thing with user data, all buried in a muddy pile of legal mumbo jumbo.
They are nothing more than a crutch. Companies convince themselves that because the lawyers crafted an all-encompassing legal document, no obligation exists to communicate to users what they’re actually doing with their data.
4. Is personal data without personally identifiable information like name, phone number or email address really personal data?
Whisper is a perfect poster child for the gulf between what companies do and what users believe companies do.
Most Whisper users believe they’re invisible on the service, completely anonymous—and that their posts are ephemeral—here now, gone tomorrow.
Whisper’s argument that the service is anonymous rests on the fact that it doesn’t retain your name or phone number. But every other scrap of data it has gathered about you is stored in a database, and has been for the two years the service has existed.
They’re able to identify users by their locations and actions, both individually and as groups.
This is a case where the companies should know better. In olden times, if a company knew your age, gender and zip code they felt they knew everything they needed to sell you stuff.
Nowadays, location and other data, plus the content you post and the people you interact with can be combined to paint a very accurate picture of who you are.
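To make that concrete, here is a minimal sketch of why “non-identifying” fields combine into identifying ones. The records, field names and numbers below are invented for illustration; nothing here comes from Whisper’s actual data.

```python
# Toy sketch (hypothetical data): even with no name, phone number or
# email address, a handful of "non-personal" fields can single out a
# specific record.
from collections import Counter

# Hypothetical user records containing no personally identifiable information.
records = [
    {"age": 34, "gender": "F", "zip": "98101"},
    {"age": 34, "gender": "F", "zip": "98115"},
    {"age": 29, "gender": "M", "zip": "98101"},
    {"age": 34, "gender": "M", "zip": "98101"},
    {"age": 51, "gender": "F", "zip": "98101"},
]

def unique_fraction(rows, keys):
    """Fraction of rows whose combination of `keys` occurs exactly once,
    i.e. rows an observer could pick out of the dataset unambiguously."""
    combos = Counter(tuple(r[k] for k in keys) for r in rows)
    singled_out = sum(1 for r in rows
                      if combos[tuple(r[k] for k in keys)] == 1)
    return singled_out / len(rows)

print(unique_fraction(records, ["gender"]))                # 0.0: gender alone singles out no one
print(unique_fraction(records, ["age", "gender", "zip"]))  # 1.0: the combination singles out everyone
```

Each added field shrinks the crowd a record can hide in; fold in location history and posting times, and the fraction of uniquely identifiable users climbs toward 1.0 very quickly.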
In any event, companies tend to see such information as non-personal and people see it as personal.
I think everyone would agree that it’s a good idea for companies and users to be on the same page about user data.
My proposed solution: in addition to the user agreement or privacy policy, which from the user’s point of view is nothing but a wall of baloney that takes all control and power away from the user and hands it to the company, these companies should universally provide plain-language clarity about all four of these areas of fundamental misunderstanding, complete with honest examples.
For example, I would like to see companies require an “I have read and understand” button at the bottom of documents that tell the truth from their own point of view:
1. All data harvested by this app now belongs to us. We will use it for experimentation, advertising and for other purposes and we will share it with other companies and governments as we choose.
2. We cannot be trusted with your data, so keep that in mind as you share it with us.
3. Our user agreement is not about communicating with you, but about protecting ourselves. Note that when you agree to it, you are essentially writing a blank check for us to do whatever we want with your information.
4. Unless you’re a statistician or other specialist, it’s impossible for you to understand the amount of personal knowledge that can be gained about you from the disparate and seemingly innocuous scraps of data collected on you. It’s safe to assume, however, that the knowledge is deep and vast.
This is what should happen. Of course, it never will.
The bottom line is that personal data-harvesting companies aren’t evil. They’re just groups of humans trying to create a business for themselves. They’re not infallible. They’re not immoral. But they’re not moral, either.
The problem with privacy isn’t that companies are bad. It’s that companies have a completely different perspective on your personal data than you do.
The solution to the privacy problem is for companies to offer true, full disclosure and for users to base their decisions on which apps or services to use and what information to divulge accordingly.