The World vs the Web – The Problems Created By User-Generated Content

Google and Facebook recently came under fire when it was discovered that major brands were having their ads shown on apps that offered services used in online child abuse. It’s tempting to write this off as the tech giants being attacked for the actions of others, but the situation is much more complicated. The rise of user-generated content has given everyone on the web a voice. However, it also makes it extremely difficult for tech companies to prevent individuals from using their platforms (or the internet in general) for illicit purposes. Understanding the problem with user-generated content shows why tech giants are being held accountable for the actions of rogue users and what could be done to address the issue.

The recent situation involving the apps in question highlights many of the issues involved with user-generated content. The short version of the story is this: WhatsApp offers end-to-end encryption, which means no one except the sender and the recipient can read a message. This provides enhanced privacy, but it also makes the platform a natural choice for people who want to share illegal content, including images of child abuse. There are reportedly entire groups of individuals sharing content this way.
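To make the privacy trade-off concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library. This is only an illustration of the core idea, not how WhatsApp works; WhatsApp actually uses the far more sophisticated Signal protocol. The point is that the server relaying the message never holds the keys needed to read it.

```python
# A minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Illustrative only: WhatsApp's real implementation (the Signal protocol)
# adds key exchange, forward secrecy, and much more.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device.
# Private keys never leave the device; only public keys are shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"hello")  # this is all the server ever sees

# Only the recipient, holding the matching private key, can decrypt it.
recipient_box = Box(recipient_key, sender_key.public_key)
assert recipient_box.decrypt(ciphertext) == b"hello"
```

Because the relay server only ever handles ciphertext, no amount of pressure on the platform operator can reveal the message contents; that is precisely what makes the design attractive to both privacy-conscious users and criminals.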

WhatsApp itself doesn’t make it possible to search for groups on its platform, but there were third-party apps on the Google Play Store that did. People could use these apps to search for child abuse groups, and ads would be shown alongside the search results. Because these apps had been approved by Google, they were also eligible to serve ads through Facebook’s ad network. Several investigations showed that major advertisers were paying for ads in places where they never wanted their ads to appear.

After the predictable public outcry, Google quickly removed this type of WhatsApp group-search app from the Play Store. Similarly, Facebook made it impossible for its ads to be shown on these kinds of apps in the future, and it even refunded advertisers who were charged for ads shown on them.

This situation highlights several of the biggest problems with user-generated content. First, it’s clear that when the general public has access to a content-sharing platform, some users will use it to share things they shouldn’t. Second, as mobile app development becomes more common, there is a greater need to monitor the kinds of apps that are submitted to app stores. Apps offer an opportunity to earn ad revenue, so there is a growing trend of developers trying to sneak illicit apps past the review process. Even if an app eventually gets deleted, it’s an easy source of revenue until then. Beyond having their apps removed from the store, the developers who created them face little real punishment. This means the cycle is likely to begin anew, since these developers are always looking for new chinks in the tech giants’ armor to exploit.

The Paradox of Punishment for User-Generated Content

The fact that individual users are rarely punished for illegal things they do online is the source of the ongoing problem with user-generated content. Consider the following comparison: if someone were caught selling bootleg movies by the local police, they would be punished for breaking several laws. But if someone uploads a bootleg movie to YouTube, the worst that usually happens is that the channel gets deleted, and there’s nothing to stop the person from creating a new account with a different email address and doing the same thing all over again.

Whether it’s cyberbullying, copyright infringement, videos supporting terrorism or worse, it’s harder to hold people accountable for user-generated content online. The internet is powered by self-created email accounts and usernames, which makes it difficult to know who is posting content or to prevent them from reposting content that has been taken down. In some cases, the people responsible may even be in countries where it’s hard to prosecute them.

The paradox is that when institutions have tried to enforce law and order on the internet more strictly, it has backfired tremendously. In the early 2000s, the RIAA began suing people for illegally downloading music online. It won some court cases and forced some people to pay huge fines, but the public backlash was so severe that it now uses this tactic only in the most extreme cases. Though most people wouldn’t object if tech giants turned over everyone sharing images of child abuse, the situation would be very different if police started raiding the homes of teenagers for infringing copyright on YouTube.

Fixing the Problems with User-Generated Content

This leads us back to the conundrum from the start of this article. It may seem unfair to hold tech giants responsible when people use their technology in ways they didn’t intend, but there really isn’t any other choice. There is too much user-generated content being created for government agencies to monitor and approve every post. However, it’s unacceptable for major tech platforms to profit from illegal activity and only stop when it makes headlines. Tech companies need to account for the possibility that their platforms will be misused when they design them. It is essential that they take reasonable steps to proactively prevent abuse, not just respond after the fact.

WhatsApp needs to do a lot more to prevent its platform from being used illegally. It offers end-to-end encryption, which it knows is being used for illegal purposes, and the company acts as if the content isn’t its problem because it can’t see it. WhatsApp currently has over 100 human moderators and uses automated filters to identify illegal groups and images. But given how much content is shared through the platform, that is a woefully inadequate number of human moderators.
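For a sense of what those automated filters can and cannot do, here is a simplified, hypothetical sketch of hash-based image filtering. Production systems typically use perceptual hashes, such as Microsoft’s PhotoDNA, that survive resizing and re-encoding; the plain SHA-256 matching shown here only catches byte-for-byte identical files, and the blocklist is a made-up placeholder.

```python
# A simplified, hypothetical sketch of hash-based image filtering.
# Real systems use perceptual hashes (e.g., PhotoDNA) that tolerate
# re-encoding; exact SHA-256 matching is shown only for illustration.
import hashlib

# In practice, this would be a large database of hashes of known
# illegal images maintained by a clearinghouse organization.
# (The entry below is just the SHA-256 of an empty file, as a placeholder.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image matches a known-bad hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# Matches are escalated to human moderators rather than acted on
# automatically, which is why moderator headcount matters.
if is_flagged(b""):  # the empty input matches the placeholder hash above
    print("flagged for human review")
```

Crucially, a filter like this can only run on content the service can actually see; with end-to-end encryption in place, message contents are excluded entirely, which pushes even more of the burden onto human review of whatever remains visible.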

It’s even less excusable because Facebook owns WhatsApp, so it’s hard to argue that the company can’t afford to hire more human moderators; Facebook itself has more than 20,000 of them. For its part, Facebook acknowledges the issue with WhatsApp’s end-to-end encryption, but it threads the needle when explaining why it doesn’t do more to help law enforcement.

Earlier this year, Facebook’s global policy lead on security wrote a blog post laying out Facebook’s stance on the issue. In the May 2018 post she wrote, “Now that I’m at Facebook, which owns WhatsApp, I hear from government officials who question why we continue to enable end-to-end encryption when we know it’s being used by bad people to do bad things. That’s a fair question,” she said. “But there would be a clear trade-off without it: it would remove an important layer of security for the hundreds of millions of law-abiding people that rely on end-to-end encryption.”

Google needs to devote more resources to reviewing the apps in its store. It’s worth noting that these WhatsApp search apps were not approved for the Apple App Store, so this situation was hardly unavoidable. Facebook has other issues to work on regarding user-generated content, but in this particular case, its only mistake was trusting Google to have a rigorous review process for the apps in its store.

The past few years have seen the tech industry fight desperately to prevent government regulation. There are calls for tech companies to pay more taxes, hearings into potential bias on search engines, and growing concern about the use of end-to-end encryption to make evidence harder to obtain. Unless the tech industry makes major changes to the way it operates, it is very likely that new legislation will seek to fix the problems created by user-generated content and self-serve ad platforms.

For more in the World vs. the Web series, read this article about France’s new plan for digital tax in 2019.
