What We Could Learn From China’s Regulation of Recommendation Algorithms
Read to the end for my recommendation.
China’s View on Recommendation Algorithms
In 2012, ByteDance (the company behind TikTok) launched an algorithmically driven app called “Toutiao” that offered users personalized streams of news and content. The app was launched in China and designed to be value-neutral, like the social media apps we use in the West. “Value-neutral” in the sense that the app selected content based solely on users’ stated and unstated preferences, without adhering to any political, cultural, or social values.
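Mechanically, a “value-neutral” recommender of this kind is easy to picture: rank candidate items purely by the user’s inferred engagement, with no editorial weighting at all. A minimal sketch in Python (the function and field names are mine, purely illustrative, not Toutiao’s actual system):

```python
from collections import defaultdict

def recommend(items, user_clicks, top_n=3):
    """Rank items purely by the user's inferred topic preferences.

    'Value-neutral': the only signal is past engagement; there is no
    editorial judgment about what the user *should* see.
    """
    # Infer preferences from click history (the "unstated" preferences).
    topic_weight = defaultdict(int)
    for click in user_clicks:
        topic_weight[click["topic"]] += 1

    # Score each candidate item by how much the user engaged with its
    # topic before; break ties by recency.
    ranked = sorted(
        items,
        key=lambda it: (topic_weight[it["topic"]], it["timestamp"]),
        reverse=True,
    )
    return ranked[:top_n]
```

Note what is absent: nothing in the ranking asks whether an item is true, useful, or in line with anyone’s values, which is precisely the property the CCP objected to.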
Toutiao quickly rose in popularity and became China’s most popular news app in 2016. Not surprisingly, the app’s value neutrality and laissez-faire approach to content moderation did not sit well with the Chinese Communist Party (CCP). The flow of information could hardly be controlled, which made it very difficult to censor content that opposed the CCP’s agenda.
In September 2017, the state-owned Chinese newspaper “People’s Daily” ran a series of three articles on three consecutive days that expressed the CCP’s displeasure with Toutiao’s algorithms. China hawks might dismiss these articles as pure propaganda advocating for censorship. Not me. I think the CCP’s assessment of recommendation algorithms is truthful and correct. As early as 2017, the CCP could foresee the dark side of letting nameless algorithms, with no values and no accountability, take charge of people’s news and entertainment consumption.
From the first part of the People’s Daily series, “Algorithm recommendation: Algorithms should not be allowed to determine content,” on why algorithms need human accountability:
“Smart news clients represented by Toutiao and Yidian Zixun, with their powerful algorithms and advanced data capture technologies, can accurately analyze and interpret users' reading habits and interests, thereby providing users with tailor-made news products, meeting personalized needs, and adapting to the trend of the times towards segmented reading.
However, behind the technological dividend, there are also places where the sun does not shine. The spread of pornographic and vulgar content is just one of the problems smart news platforms face in content distribution. Health claims that have not been scientifically verified, exaggerated advertisements, clickbait headlines designed purely to grab attention, overly emotional opinions and even useless information frequently appear in the recommendations on the homepages of smart platforms. For example, one netizen accidentally clicked on a message about wreaths, after which Toutiao continuously pushed information about funeral supplies, which was very disturbing. Some netizens have also summarized the inferior information into three categories: truth and falsehood are hard to tell apart, with good and bad mixed together; right and wrong are hard to distinguish, with confused value orientations; and it lacks depth, with content and opinions that are too superficial.”
(..)
“At any time, content push cannot be without an ‘editor-in-chief’, and no matter how good the communication channel is, it must have a ‘gatekeeper’. Even in the era of technology, algorithms cannot be allowed to completely determine the content.”
From the second part, “Don’t be trapped in an ‘information cocoon’ by algorithms,” on online echo chambers:
“With the help of algorithms, we can easily filter out information that we are not familiar with or do not agree with, and only see what we want to see and hear what we want to hear. Ultimately, our inherent biases and preferences are reinforced through constant repetition and self-justification. Once in such an "information cocoon", it is difficult to accept heterogeneous information and different viewpoints, and even a high wall that hinders communication between different groups and generations is erected.
It must be admitted that advanced technology and sophisticated algorithms may amplify certain negative effects. At the social level, if we all indulge in our own “comfort zone” and feel sorry for ourselves, we may further shrink the rational, open and inclusive public space, and thus lose the opportunity to reach consensus through disputes. For example: are shared bicycles a revolution in urban transportation or a burden on city management? Can brisk-walking groups occupy the roads at night? Is a mother’s suicide the fault of the family or the hospital? If the two sides of a debate block each other out, they may each intensify the contradictions in their own talk, solidify their cognition, and close themselves off. Even worse, it may devolve into emotional flame wars and factional pile-ons, creating artificial divisions that are not conducive to solving problems.
Therefore, to get out of the “information cocoon”, supervision needs to be further strengthened. For information platforms with powerful algorithmic and technical support, it is far from enough to merely “please” users. They must conscientiously implement relevant central policies and regulations, must not take chances or allow violence, pornography and other harmful information to spread, and must not fool netizens and the public in the name of sophisticated technology. In addition, the whole society must reach a consensus, attach importance to the scientific use of algorithms, and work together to clean up cyberspace.”
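The “cocooning” dynamic the article describes is, mechanically, a feedback loop: engagement updates the preference model, which narrows what is shown, which narrows future engagement. A toy simulation of that loop (all names and numbers are my own, purely illustrative):

```python
import random

def simulate_cocoon(topics, steps=200, seed=42):
    """Simulate a click-driven recommender feeding a single user.

    Each step: recommend a topic in proportion to its learned weight,
    then reinforce that topic (the user 'clicks' whatever is shown).
    Returns the dominant topic and its share of all impressions.
    """
    rng = random.Random(seed)
    weights = {t: 1.0 for t in topics}   # start out value-neutral
    shown = {t: 0 for t in topics}
    for _ in range(steps):
        # Sample a topic proportionally to its current weight.
        total = sum(weights.values())
        r, acc = rng.uniform(0, total), 0.0
        for topic, w in weights.items():
            acc += w
            if r <= acc:
                pick = topic
                break
        shown[pick] += 1
        weights[pick] += 1.0             # engagement reinforces the topic
    top = max(shown, key=shown.get)
    return top, shown[top] / steps
```

With engagement as the only objective, the weights tend to concentrate on whichever topic happens to get an early lead; in this sketch the “cocoon” is not a bug but a fixed point of the objective.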
From the third and final part of the People’s Daily series, “Beware of algorithms going the other way when it comes to innovation,” on copyright infringement and the challenge algorithms pose to originality and creativity:
“Once the virtue of moderation is lost, algorithms may go astray and even work against their purpose. A self-media author once lamented: the era of intelligent information platforms is coming, and those of us who make a living selling articles may become “brick movers” who depend on platforms to survive. Once a platform gains such a strong position, it may reprint original authors’ content without paying corresponding compensation, which will ultimately further extinguish original authors’ passion and love for their work.
This shows that the biggest problem brought by intelligent information platforms may not be infringement, but going in the opposite direction of innovation, and may even destroy the source of innovation from the root.
(..)
For ordinary authors, surviving on the platform means catering and pandering to it, losing the ability to think independently and observe deeply, thereby weakening the creativity of society as a whole. As one media entrepreneur put it, shooting funny videos wins more clicks than producing in-depth content. This trend is worrying. In fact, many so-called self-media accounts on Toutiao are already full of vulgar, shameless, and even rumor-laden information. Worse still, so-called algorithmic push and customized publishing have misled some local governments and departments into muddled accounting and wasted money, arousing heated discussion and dissatisfaction among netizens.
To prevent algorithms from going the wrong way, we need to improve relevant laws and regulations, strengthen penalties for infringement in law enforcement, and protect the rewards that original creators deserve; but more importantly, platform companies need to shoulder their corresponding social responsibilities.”
Suggesting in America that social media platforms have a social responsibility to moderate would be regarded as a crime against “freedom of speech”. The relevant decision-makers think it’s a human right to be drowned in false, misleading, and spammy content with catchy headlines and zero substance.
Despite growing cultural opposition to, and awareness of, the problems associated with social media (see, for example, the national debate sparked last year by Jonathan Haidt’s book The Anxious Generation), I don’t have high hopes for legal constraints on social media companies in the US for the foreseeable future. Not only is the US market-driven economy deeply entangled with Big Tech’s attention-harvesting business model, but the elected (and unelected) political leaders do not think “misinformation” is a real problem that needs to be addressed.
For a long time, Europe followed the United States in its determined march toward nihilism, chaos, and the death of common values. Then, in 2022, the EU adopted the Digital Services Act (DSA), which contains some potentially important provisions on transparency, moderation, and “recommender systems”. It remains to be seen whether the DSA will be vigorously enforced or whether its impact will be limited to paper exercises.