The domestic legal landscape in media
by Philippe Barbe
25 Nov 2020
Part 3 of 20 in a series examining the interplay of Data Science, AI, the media and advertising.
Fireworks consist of two basic ingredients packed into a shell: gunpowder, and “stars,” which are also explosive but less so than the gunpowder, and which provide the vivid colors.
Search engine technology (see Part 2) is the gunpowder fueling the big fireworks in the media and advertising industries over the past 30 years, while the law has provided the stars.
Understanding stars is essential to knowing how fireworks produce what we see. Likewise, knowing the basics of the legal landscape for the media and advertising industries gives us a lot of insight into how and why these industries have evolved to date. In turn, this understanding will inform us on what data science may or may not be able to do in these industries.

Regulation and Deregulation

We sometimes forget that the media industry has been regulated for a long time.
In 1910, the first regulations organized the airwaves in response to the limited number of available frequencies. The first Radio Act was passed in 1927, and the FCC was created in 1934.
In 1949, the Fairness Doctrine was established, requiring broadcasters to present fair and balanced views on controversial issues. This reflected both the significance of radio and television in shaping opinion and the limited number of outlets. By enforcing this rule, the FCC tried to guarantee the pluralism of opinion needed in a democracy, mimicking limits on newspaper concentration.
The proliferation of radio and television outlets in the mid-1980s, coupled with the political climate of the time, led to the abandonment of the Fairness Doctrine in 1987, which opened the door to less balanced news. The personal attack rule, which gave a right of reply, was abandoned in 2000, as was the political editorial rule, which gave candidates opposed in broadcast editorials some visibility. These regulatory changes allowed media companies to segment their audiences along ideological lines, giving rise to blatantly partisan networks and stations.
Other landmark decisions impacted content. The wave of deregulation between the mid-1970s and the mid-1990s brought down guidelines on informational programming, dropped requirements aimed to ensure that programs of local interest were aired, and removed limits on commercial time and logging of political programs.
Dropping these regulations removed much of the limit on content that was the foundation of the Fairness Doctrine.
The consequence of this deregulation, combined with the explosion of outlets and digital technology lowering the cost of production of content, created an abundance of radio and television stations dedicated to chasing ever narrower audience segments.
This abundance sparked fierce competition that, despite the increase in the total number of ads, lowered the marginal revenue of commercial minutes, making the media industry less profitable and opening the door for Data Science (as we will see in future articles).

Copyright Regulation

A basic legal pillar regulating media and advertising remains… content protection… in the form of copyrights and intellectual property rights.
Protection of these rights is the subject of several international treaties, from the 1886 Berne Convention for the Protection of Literary and Artistic Works to, most recently, the Marrakesh VIP Treaty, adopted in 2013.
The US Constitution states: “Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land.” Most countries give ratified treaties a similar status, meaning that local laws must stay within the limits of the treaty.
There are several reasons why this is important for the media and advertising industries, some of which involve Data Science.
For example, new content created by an AI machine will be governed by those treaties too, which raises complex legal questions about what constitutes an original work. How complex will the algorithm have to be in order to generate original content? Data Science can help us with that (another topic we will discuss in a future article).
Another use case for Data Science: given the vast amount of content, particularly on the internet, how can one track copyrighted material?
This is challenging not only because of the amount of material posted, but also because, in order to know whether something is an illegal copy, one has to know whether it is a copy of an original work that has been copyrighted. This raises the problem of authenticating content and making machines aware of which content falls under copyright law and which does not.

Publishers and Distributors

Historically, the law has long distinguished between publishers (and authors), who bear legal responsibility for content, and distributors (bookstores, radio and TV manufacturers, and electronic stores), who do not.
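To make the copy-tracking problem concrete, here is a minimal sketch of near-duplicate text detection, one of the basic building blocks such systems rely on. All function names and the similarity threshold are illustrative assumptions, not any real platform's method.

```python
# Near-duplicate detection sketch: break texts into overlapping
# character "shingles" and compare the resulting sets.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of overlapping k-character shingles of a text."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets: |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def likely_copy(candidate: str, original: str, threshold: float = 0.8) -> bool:
    """Flag a candidate text as a probable copy of a registered original."""
    return jaccard(shingles(candidate), shingles(original)) >= threshold
```

In practice, a platform would compare each upload against millions of registered works, so the pairwise comparison above would be replaced by hashing-based indexing, but the underlying idea, measuring overlap against a registry of known originals, is the same.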
Interactive and connected computer systems created two interesting legal cases in 1991 and 1995 which had lasting impacts on both the media and advertising industries, and allowed Data Science to flourish in the internet part of the media/advertising ecosystem.
In the 1991 case, a user posted unfavorable views of Cubby on CompuServe, a pre-internet messaging platform. Cubby sued CompuServe, as a publisher, for defamation. CompuServe countered that it bore no responsibility, claiming to be a distributor, not a publisher. The court reasoned that because CompuServe did not edit content, given its sheer volume, it acted like a distributor rather than a publisher, and was therefore not liable.
In 1995, Stratton Oakmont similarly sued Prodigy, an information service that ran a bulletin board, over a user posting a defamatory comment about the firm. In this case Prodigy lost, with the court arguing that because Prodigy had an editorial policy, it was responsible for the content.

Section 230

The legislators who penned the Communications Decency Act of 1996 wanted to promote the development of the internet, and the two cases referenced above inspired representatives Chris Cox and Ron Wyden to write Section 230, asserting that: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Section 230 clarifies that an internet platform hosting user-generated content is not a publisher, and therefore has no legal responsibility for that content. The main limitation concerns copyright, where the international treaties prevail.
It quickly became clear that “anything goes” on the internet, and indeed everything was, and is, going on, from the most sophisticated scientific discourse to human trafficking. That it took until 2018 for a limitation on Section 230 to address child sex trafficking is a measure of the powerful forces at play and of the significance of Section 230.
Beyond liability protection, Section 230 gave internet companies a considerable competitive advantage over traditional edited media: no need to pay editors to curate content.
The impact of Section 230 went well beyond the US, not only because the technology allows content hosted in the US on servers owned by US companies to be delivered abroad, but also because legislators in other countries felt that the same measure would allow their domestic internet companies to grow as well.
The EU passed a similar directive in 2000, with the minor limitation that content providers had to act in good faith to enjoy immunity from content liability. Other countries passed similar laws, sometimes incorporating restrictions that in any case left a considerable competitive advantage to internet-based content distributors relative to their traditional media competitors.
This change in the definition of what a “publisher” is and is not, and what they are and are not responsible for, is a huge legal star in the fireworks surrounding the media and advertising world. (Perhaps newspapers should label themselves “paper platform services” and ask for the same immunity!)

Enter Data Science

Protected by the law, flooded with monetizable content, and serving millions of users who could find an infinite amount of content to consume, platforms needed tools to attract users and sell data about those users to advertisers.
Data Science has been embraced as the perfect tool to sift, analyze and leverage all that data sitting in servers and databases.
One result… recommendation engines have evolved from a simple presentation of messages in chronological order to highly sophisticated models that compare users, compare content, and predict what you may or may not like to see… initiating a de facto self-reinforcing optimization of content likability. What was a vast place for all to explore 5,000 years of accumulated knowledge has become, for some, a deep well reflecting their own psyche.
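The “compare users” idea at the heart of these engines can be sketched in a few lines. This is a minimal illustration of user-based collaborative filtering; the ratings, item names, and use of cosine similarity are assumptions for the example, not any platform's actual model.

```python
# Collaborative-filtering sketch: find the user most similar to you,
# then suggest items they liked that you have not seen.
import math

ratings = {  # user -> {item: score}; toy data for illustration
    "ana":  {"news": 5, "sports": 1, "cooking": 4},
    "ben":  {"news": 4, "sports": 2, "cooking": 5, "movies": 3},
    "carl": {"news": 1, "sports": 5, "movies": 4},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two users' rating vectors."""
    shared = u.keys() & v.keys()
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user: str, k: int = 1) -> list:
    """Recommend up to k unseen items from the most similar other user."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    unseen = ratings[nearest].keys() - ratings[user].keys()
    return sorted(unseen, key=lambda i: -ratings[nearest][i])[:k]
```

Because recommendations are drawn from users who already resemble you, each click sharpens the resemblance, which is precisely the self-reinforcing loop described above.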
But history goes on, and as we will see in the next article, powerful forces are now attempting to blunt the competitive advantage granted by Section 230 and give a chance to publishers, regardless of the medium.