When regulating the Internet, one size does not fit all

This year appears to be the year that governments around the world have taken seriously the regulation of the World Wide Web.

Already much has been said about the EU, with its Copyright Directive and now also its Regulation on terrorist content. And about Australia, with its anti-encryption and "offensive content" laws. India has already passed several laws restricting the Internet, and new ones are under discussion. There are also Great Britain, Germany, South Korea, Singapore, Thailand, Cameroon, and so on and so forth. And of course, there are some very bad ideas pending in the US as well.

If we look at the "problem" all these laws are trying to solve, it boils down to this: "people are doing bad things on the Internet, so the Internet needs to be regulated." Unfortunately, it is this theory of regulation (clumsily applied, at that) that underlies most of today's proposals for regulating the Internet. It is problematic for a number of reasons, in part because such laws regulate the wrong side. Ideally, we should go after the people who do bad things, not the tools or services they use (or the tools and services that merely carry information about bad things).

There is, admittedly, a second theory behind many regulatory approaches: "Google and Facebook are big and bad, so whatever punishes them is good regulation." To me, this approach makes even less sense than the first, but it certainly drives much of the thinking, at least in the EU (and possibly in the US).

Combine these two theories of Internet regulation and you have a complete mess. They are more likely to take a sledgehammer to a large part of the web than to produce targeted approaches. In addition, there is now so much focus on Google and Facebook that many regulatory proposals are written solely for those two platforms and fail to account for the impact the laws can have on other Internet companies, many of which operate on completely different models. I have previously laid out my thoughts on which approach to regulation would really "hit" the tech giants while keeping the Internet open, but that approach would require a very big mindset shift (I still hope it happens).

In his article "A Regulatory Framework for the Internet," Ben Thompson offers a much more practical approach to regulation. He, like me, is skeptical of most attempts at regulation, but, recognizing that regulation is going to happen no matter how skeptical we are, Ben offers a framework for thinking about Internet regulation that can (hopefully) minimize the most negative consequences of the approaches in use today.

You should read his entire article to understand the train of thought, background, and nature of the proposed approach, but the key to Thompson's idea is the recognition that there are different types of Internet companies, and this is true not only of the companies themselves but also of the services they provide. Ben hopes that regulatory approaches can be more fine-grained and specialized. In that case, there would be much less collateral damage from trying to force square regulatory pegs into round Internet holes.

Thompson's next idea is to rethink the familiar concepts of "free as in beer" and "free as in speech" that every Internet user knows. Ben brings up a third, fairly well-known and long-debated concept called "free as in puppy": you get something for free, but you have to pay to maintain it going forward.

Most people in the West agree, at least in theory, with the idea that the Internet should keep speech free. China, on the other hand, is a cautionary tale of how technology can be used in the opposite direction. The question, however, is whether we should keep content free as in beer along with free as in speech.

In particular, Facebook and YouTube offer content that is both free as in speech and free as in beer at the same time: content can be created and distributed at no cost and without any liability. Would it be better if content that society considers problematic were "free as in puppy", that is, carried ongoing costs for the user, correlated with the potential moral costs to society?

In theory, this allows countries that believe there are certain problems on the Internet to focus their regulation more narrowly without harming the entire Internet:

Clear classification is critical to developing regulation that actually solves problems without adverse side effects. Australia, for example, has no need to worry about shared-hosting sites; rather, it is worried about Facebook and YouTube. Likewise, Europe wants to rein in the tech giants without burdening small online companies. From a theoretical standpoint, regulation should apply exactly where the failure occurs, but limiting regulation to only that place is a rather difficult task.

I admit that I am not entirely convinced by Ben Thompson's model, but it gave me something to think about. At the very least, regulating ad-supported platforms addresses some of the issues: regulation would focus on the application layer and on a specific type of service. This approach could potentially nudge us toward a "protocols, not platforms" world, since a more heavily regulated world of ad-supported platforms could push companies to explore non-ad-driven business models.

I still have a lot of doubts. Despite all the complaints about what Google and Facebook have done with the ad-supported model, we have to admit that this model has created some incredibly powerful services that have done amazing things for many, many people. Everyone is focused on the flaws, but we should not forget how much good we got from an Internet built on advertising. Can it be improved? Absolutely. But we should not treat all Internet advertising as the problem. If a regulatory approach is to exist, it must address not only the nature of the platform but also the specific, clearly articulated harm the regulation is trying to remedy. Then we can weigh the potential harms against the benefits and assess how effective the regulatory approach really is.

I'm still skeptical that most Internet regulation laws will work very well and eliminate real harm (let alone do so without banning a lot of good things), but since we are going to have plenty of discussion on this topic in the coming weeks, months, and years, we might as well start discussing the laws themselves.