The Age of Social Media Regulation

What does the shift toward tighter internet rules mean for software developers?

By Terrance J. Mintner • December 13, 2019

Jon Tyson | Unsplash

The days of unfettered social media are disappearing before our very eyes. In recent months, governments around the world have been considering ways to restrict “harmful” or “misleading” online content.

In October, Facebook CEO Mark Zuckerberg appeared before the U.S. House Financial Services Committee and received what many media outlets termed a public “grilling.” He was asked about the future of Libra, Facebook’s digital currency, and his company’s efforts to curtail the spread of fake news, sexual abuse content and hate speech on his platform.

Congress members such as Alexandria Ocasio-Cortez often cut Zuckerberg off during his measured remarks and demanded a simple "yes or no" response. Their impatient, hostile tone suggests the aura of goodwill that once surrounded Facebook and other social media giants has all but evaporated.

Let’s juxtapose that hearing with broader public sentiment. What makes this a watershed moment is the high level of consensus among politicians and their constituents. Polls show most people favor government regulation of social media. Proposals include hefty fines for companies that fail to remove flagged content quickly and even jail time for executives who fail to impose tighter restrictions.

Some observers have gone so far as to call social media an unregulated “weapon.” The ability of sophisticated algorithms to spread inflammatory or misleading content to targeted audiences is a grave public danger, they contend, the same way guns (certainly in the U.S.) have made life unsafe.

Last August, a 21-year-old man opened fire in a crowded Walmart in El Paso, Texas. With a WASR-series semi-automatic rifle in hand, he killed 22 people before police could stop him.

Investigators believe he penned and published a white-nationalist manifesto against Mexican immigrants on 8chan immediately before carrying out the attack. As inspiration, he cited the mosque shootings in New Zealand last March.

In that episode, a 28-year-old Australian man carried out consecutive attacks against Muslim worshipers during Friday prayers at two mosques in Christchurch, New Zealand. Before he was apprehended, the man – a white supremacist politically aligned with the so-called “alt-right” – killed 50 people and injured 50 more.

The killer had streamed a live video on Facebook showing images of his first assault. It amassed 4,000 views before it was taken down, raising questions about Facebook’s sluggish response.

Then, last April, Islamic terrorists in Sri Lanka who may have been seeking revenge for the Christchurch attacks launched a coordinated assault on three Catholic churches and three luxury hotels in Colombo on Easter Sunday. They left over 300 people dead and injured more than 500.

In the immediate aftermath of that attack, Sri Lankan authorities made an unprecedented move: They shut down the main social networks – Facebook, WhatsApp, YouTube, and Snapchat – out of fear that more violence would ensue.

In the months since, countries have reacted to these events with legislation. Australia passed a law to punish social media companies that fail to "expeditiously" take down "abhorrent, violent material." New Zealand, Britain, Germany, Singapore, and India have already adopted or are considering similar measures.

But these violent episodes are just part of the larger unease dominating political discourse. Politicians and members of the public are currently embroiled in debates about how to control the spread of fake news and election meddling. Nevertheless, they all seem to agree on one thing: government intervention.

Last March, Zuckerberg penned an op-ed for The Washington Post in which he stated that his company has “a responsibility to keep people safe.” This would require a “more active role for governments and regulators.”

While Facebook is developing AI technology to identify and stamp out “harmful” content, the company is still at a huge disadvantage against a massive and unceasing flow of user-generated content.

According to Zephoria, a digital marketing firm, Facebook had over 2.38 billion monthly active users worldwide as of March 31, 2019. "Every 60 seconds on Facebook: 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded," it reported.

Given these staggering numbers, it’s no wonder that Zuckerberg hopes to enlist the support of government. “By updating the rules for the Internet, we can preserve what’s best about it – the freedom for people to express themselves and for entrepreneurs to build new things – while also protecting society from broader harms,” he wrote in the op-ed.

But free-speech advocates have been quick to pounce.

What does "harmful" mean? they ask. Social media is certainly full of negative content, but how can governments and companies achieve an adequate measure of control over it, especially for posts that fall into gray areas, i.e. posts that do not call for violence (a clear red flag) but espouse extremely hateful ideas? And the question of fake news remains: how can it be stemmed?

Some observers fear that authorities and the tech giants will play it safe by over-regulating. “When lawmakers create new rules that have never been tested by courts… and then tell platforms to enforce them, we can only expect that a broad swathe of perfectly legal speech is going to disappear,” Daphne Keller, director of Intermediary Liability at the Stanford Center for Internet and Society, recently warned.

Regardless of these and other criticisms, regulation will likely win out. This means that software developers will have no choice but to adapt. Can they turn constraining laws into expanding opportunities?

A promising area for developers is global cooperation. In his op-ed, Zuckerberg expressed the need for a “common global framework.” Instead of regulation that varies widely from one country to the next, such a framework “will ensure that the Internet does not get fractured” and that “entrepreneurs can build products that serve everyone.”

With cooperation as the new digital virtue, developers could build tools that pinpoint, and help iron out, the compliance gaps and infrastructural weaknesses that emerge when laws differ from one country to the next.

Another area where developers can contribute is AI. Facebook already uses artificial intelligence to automatically flag content that violates its terms of service. Again, however, there are many gray areas.

As companies weigh removing borderline or questionable content, transparency and accountability for their decisions will become vital. On this front, developers can design apps that help define and defend a company’s red lines.
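As a rough illustration of what "defining and defending red lines" could look like in code, here is a minimal sketch of a rule-based policy checker that records *which* rule each flagged post tripped, so every removal decision leaves an auditable trail. All rule names and keywords here are hypothetical, and real systems would rely on ML classifiers rather than naive keyword matching.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    rule_id: str          # stable identifier cited in any takedown notice
    description: str      # human-readable statement of the red line
    keywords: tuple       # naive substring matching, for illustration only

# Hypothetical policy catalog; a real one would be far richer.
RULES = [
    PolicyRule("VIOLENCE-1", "Explicit calls for violence", ("kill them", "attack the")),
    PolicyRule("SPAM-1", "Repeated promotional links", ("buy now", "click here")),
]

def review(post: str) -> list:
    """Return the IDs of every rule a post trips, so each decision is explainable."""
    text = post.lower()
    return [r.rule_id for r in RULES if any(k in text for k in r.keywords)]
```

Logging the triggering rule alongside each removal is one concrete way a platform could later justify, or be held to account for, a borderline call.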

In October, amid all the fury over Zuckerberg’s testimony, his company announced the launch of a service on its mobile app called Facebook News. The move appears intended to offset criticism that the company has not done enough to fact-check fake advertisements, especially political ads.

This means that Facebook, like traditional media outlets, will soon make editorial decisions about content and how best to target certain audiences. Media experts believe the rationale behind such decisions should be made public. To help with this, experts have proposed creating a “public interest API” (application programming interface) which would allow media watchdogs to monitor what algorithms Facebook uses in targeting consumers.
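No such "public interest API" exists yet, but a sketch can suggest the shape it might take: for each ad, a disclosure endpoint could return who paid for it and which audience attributes were targeted. The function name, field names, and sample data below are all hypothetical.

```python
def disclose_ad(ad_id: str, ads_db: dict) -> dict:
    """Return the public-interest disclosure record for one ad."""
    ad = ads_db[ad_id]
    return {
        "ad_id": ad_id,
        "sponsor": ad["sponsor"],                    # who paid for the ad
        "targeting": sorted(ad["targeting"]),        # audience attributes used
        "impressions_bucket": ad["impressions_bucket"],
    }

# Tiny in-memory stand-in for the platform's ad database.
ads_db = {
    "ad42": {
        "sponsor": "Acme PAC",
        "targeting": ["region:TX", "age:18-34"],
        "impressions_bucket": "10k-50k",
    },
}
```

A watchdog querying such an endpoint could aggregate disclosures across thousands of ads to see how targeting patterns differ by sponsor or region.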

By opening themselves to this kind of scrutiny, Facebook and other companies could take important steps toward regaining the public’s trust.

Lastly, developers can build tools that give users more control over how their personal data is used and stored. “Data portability” is the new buzzword. The idea is that consumers should have the right to move their personal information among providers, instead of passively allowing it to float freely about the internet.
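In practice, data portability starts with exporting a user's data in a provider-neutral format that another service can ingest. The sketch below uses plain JSON with a made-up schema tag; the format name and field layout are assumptions, not any existing standard.

```python
import json

FORMAT_TAG = "portable-profile/1.0"  # hypothetical schema identifier

def export_profile(profile: dict) -> str:
    """Serialize a user's data into a portable JSON document."""
    return json.dumps({"format": FORMAT_TAG, "profile": profile},
                      indent=2, sort_keys=True)

def import_profile(blob: str) -> dict:
    """Read a portable document back, verifying the schema tag first."""
    doc = json.loads(blob)
    if doc.get("format") != FORMAT_TAG:
        raise ValueError("unsupported export format")
    return doc["profile"]
```

The round trip matters: a provider that can only export, but whose exports no one else can import, has not really made the data portable.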

“Data tokenization” could become another area of focus. The idea is that personal bits of data could be marked as private keys on a public blockchain. Facebook and other companies could then issue these keys to users, allowing them to gain a better understanding of how their personal information is being used and who is interacting with it.
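Setting the blockchain component aside, the core of tokenization is replacing raw personal data with opaque tokens that only a vault the user controls can resolve. This is a minimal, assumption-laden sketch of that vault half; a production design would add encryption at rest, access logging, and revocation.

```python
import secrets

class TokenVault:
    """Map opaque tokens to raw personal data; only the vault holder can resolve them."""

    def __init__(self):
        self._store = {}  # token -> raw value, held only by the vault

    def tokenize(self, value: str) -> str:
        """Replace a sensitive value with a random, non-reversible token."""
        token = secrets.token_hex(16)
        self._store[token] = value
        return token

    def resolve(self, token: str) -> str:
        """Recover the original value; every call here is an auditable access."""
        return self._store[token]
```

Third parties would then see and exchange only tokens, while each `resolve` call gives the user a record of who touched the underlying data.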

To conclude, the political winds are blowing strongly in favor of regulation. Software developers would be wise to consider what value and innovation they can bring to a more tightly policed web.
