Special 02 Pt.2|Beyond TikTok Ban: States, Corporations, and Individuals in Global Digital Politics
To listen to this episode: Apple Podcasts | Spotify | 小宇宙

(Continue - for the first part of this text transcription, please go to the previous article: Special 02 Pt.1|Beyo... )
As a new force of governance, where does the political power of corporations come from?
Jasmine 40:35
Earlier, you mentioned that multinational corporations don’t want to get involved in politics, but I think we should put a question mark on that. First, it’s quite common for tech companies to be securitized or politicized. On the other hand, I feel like there’s also been a shift within the tech industry itself. We’ve already discussed TikTok’s objectives at length—its primary goal is quite clear: it’s a commercial enterprise aiming for profit. Its product is straightforward—it’s a consumer-facing app that generates revenue from its users. But other platforms, like X, or other tech enterprises such as SpaceX—or even Trump’s own social media platform—are different. They are businesses, yet at the same time, they are fulfilling certain political functions, sometimes even ones entrusted to them by the state.
But is that always the case? Take Elon Musk, for example. Is his power really something that has been delegated to him by the government? I don’t think so. Some tech companies are actively legitimizing and justifying their roles as political actors through their platforms and technologies, rather than merely inheriting authority from state institutions.
D1 41:44
I completely agree. I was just talking to a friend about this yesterday—how platforms have been increasingly assigned political roles, and it seems like this trend peaked after 2016. That was the moment when events like the Cambridge Analytica scandal, major elections, Brexit, and a whole series of political incidents thrust platforms into the center of both international and domestic politics. After that, I think both scholars and the general public became acutely aware that these platforms were no longer just tech companies or neutral intermediaries. They had effectively become new governing entities—political actors in their own right—actively or passively legitimizing their role in managing speech, shaping narratives, and overseeing various aspects of the internet ecosystem.
Alina 42:26
That’s a great point. What we’ve been discussing actually challenges and even redefines our understanding of what constitutes a political subject. It disrupts traditional notions of rights and duties, and of classical political philosophy concerning the origins of the state and the nature of power. For example, in traditional political theory, the idea is that in a state of nature, individuals come together and form a social contract, willingly surrendering a portion of their rights in order to establish a government that ensures security and stability. The government, in this framework, acts as the enforcer of (the remaining) rights. But what we’re seeing now is a shift—where private platforms, rather than governments, are increasingly taking on this enforcement role, shaping governance structures in ways that were never anticipated in classical theories of state power.
Jasmine 43:07
When you mentioned people surrendering their rights to the state, it reminded me of something that happened earlier this year—Trump suggesting that Elon Musk should be appointed as the head of the “Department of Government Efficiency”.
D1 43:22
Yeah, and immediately shut down the DEI (Diversity, Equity, and Inclusion) programs.
Jasmine
Elon Musk made a statement back in November saying “We are entrepreneurs, not politicians. We will serve as outside volunteers, not federal officials or employees. Unlike government commissions or advisory committees, we won’t just write reports or cut ribbons. We’ll cut costs.” They claim they don’t want the U.S. government to be controlled by a group of people who were not elected by the people. But there’s a contradiction here. Musk himself—and these entrepreneurs—are also not the people we elect. They’re not even federal employees; they’re external volunteers. It’s a very strategic position—one that allows them to advance when it benefits them and retreat when it doesn’t.
D1 44:04
Exactly. I think this is really just sugar-coating—a way of framing the issue. And I completely agree with what Alina just said: there’s been a lot of discussion around the idea that the relationship between platforms and users is, in many ways, contractual. When I sign up for a platform, I have to agree to its ‘terms of service’. That’s already a kind of contract between me and the platform. If I want to monetize my content, I have to sign another agreement specific to the platform’s monetization policies. In a way, users are ceding their rights—privacy rights, data rights, and so on—in exchange for access to the platform’s utilities, whether that’s information, social connectivity, or economic opportunities. From this perspective, one could argue that platforms are, to some extent, exercising rights that have been voluntarily transferred to them by the people, allowing them to take on certain political functions.
But, as we discussed earlier, platforms are not ‘elected’. And there’s another factor at play here: market saturation. If a particular digital feature can only be achieved through a specific platform, then there’s no real choice for users. They’re forced into an arrangement where their rights are transferred without genuine alternatives. But in a less saturated market, where users can easily switch between platforms, we could say that people are making choices—if they don’t like one platform, they move to another. This is why the user's freedom of choice is so crucial. It’s also why many regions have approached the governance of multinational tech companies through antitrust regulations. If a single company dominates the market, users don’t have a choice. And I think this is going to be one of the key directions for future platform governance—how to manage and regulate these companies through competition law, ensuring that users always have real alternatives.
American Cyber Refugees Dominate Global Discourses?
Alina 45:54
Yes, exactly. And this actually reminds me of something Jasmine mentioned at the end of the last episode—she expressed gratitude for two things, one of which was the fact that Xiaohongshu (RedNote), as a digital space, isn’t a fixed, zero-sum, physical space. In other words, the arrival of these so-called “American cyber-refugees” doesn’t mean that we (Chinese netizens) suddenly lose our own space to exist and express ourselves.
Jasmine 46:16
But in reality, the content review time for publishing our posts has also increased—because... well, data overload.
Alina
Yeah, this really got me thinking. As I was translating the script of the previous episode, I kept reflecting on this issue. Firstly, as we just mentioned, this phenomenon in some ways echoes the early Utopian visions of the internet—like John Perry Barlow’s Declaration of the Independence of Cyberspace, which envisioned the internet as a space free from government intervention, a realm beyond sovereign control. But of course, in reality, we’ve seen plenty of restrictions. And then, when we actually started publishing content about this episode on Xiaohongshu (RedNote)—even my friends who run accounts there, including myself—we all noticed something: our posts weren’t getting published. Many people started joking that it felt like the Americans had “taken all our traffic”. In a way, it speaks to a certain dynamic—foreign users, simply because they are new and different, seem to trigger the so-called algorithmic viral boost. This, combined with potential data overload, has made it harder for local voices to be heard.
Another thing I’ve been reflecting on is how this event once again highlights how powerful states like the U.S. can disrupt the local agendas of other places by imposing their own agenda onto global discourses, often sidelining local conversations. Before the arrival of American cyber refugees, the discussions on Xiaohongshu (RedNote) revolved around topics like online scams and the healthcare system in China. But once they showed up, most of the attention shifted to them. Whether it’s algorithmic visibility or public discourse, the focus was completely redirected.
And when we compare this to past events—like when Pakistan implemented similar bans, or even the online displacement of Russian users following the Russia-Ukraine conflict—the level of global media coverage and public discussion was nowhere near what we’ve seen with this influx of American users onto Xiaohongshu (RedNote).
D1 48:43
This issue ties back to something we mentioned earlier. In the era of traditional mass media, there were already discussions about which countries dominated the media agenda and which ones were often overlooked. My own background is in international journalism and global communication, and actually, back in the 1970s, UNESCO released a report called Many Voices, One World. At the time, the focus was on advocating for a more balanced media agenda—one where global information flows weren’t monopolized by a handful of powerful countries. What’s interesting is that even in the internet age, we’re still seeing echoes of that same dynamic. In this case, we’ve witnessed how quickly public attention gets reshaped, often gravitating toward discussions that are louder, more sensational, or perceived as more relevant. In a way, this reflects a continuation of traditional media’s agenda-setting power—just in a new form.
Are Algorithms Biased?
Alina 49:43
Right, and earlier we talked about how TikTok’s algorithm has, in some cases, amplified the voices of marginalized communities that were previously at the periphery of mainstream media. But what’s happening on Xiaohongshu (RedNote) now has made me wonder—do algorithms also have a built-in preference for foreign content? Of course, I know that every platform has its own internal logic, almost like a formula—how many likes you get, how much engagement you generate, how much exposure you receive—all of these factors determine whether your content gets pushed to a wider audience. In this case, there’s no doubt that foreign users coming onto Xiaohongshu (RedNote) created a sense of novelty. People found it intriguing, which fueled even more discussions and thus boosted their visibility further. But it raises an interesting question: can algorithms themselves be biased?
This is actually a huge topic in academia. Take search engines, for example—what they show us is shaped by how these models were trained and what kind of data they were fed. If that data reflects existing stereotypes, then the algorithm, instead of being a neutral force, might actually reinforce those biases. So while we often talk about technological advancements as a means of liberation or equalization, in reality, they can sometimes do the opposite—deepening stereotypes rather than erasing them.
D1 51:23
Yeah, I think this is something that both academia and users are increasingly noticing—it all follows the same logic. Algorithmic bias, at its core, comes from two main sources. On one hand, there’s the bias inherent in the training data itself—whatever historical patterns and imbalances exist in that data will inevitably be reflected in the algorithm’s outputs. On the other hand, bias can also stem from how the model is designed and how the platform applies it in practice. So in the context of this particular event, it raises an interesting question: Is Xiaohongshu (RedNote) actively favoring foreign users? Is it intentionally giving them more visibility? Most internet platforms have a built-in mechanism to boost new users, simply because they want to attract and retain them. It’s a common strategy—if you’re a brand-new user on Douyin or TikTok, for example, your first few videos are much more likely to get a boost in engagement. That’s part of how platforms hook new users and encourage them to stay. But the key question here is: Which new users are getting that extra exposure? Is there a difference based on where they’re from? That’s where bias can come into play—if certain groups of new users are being favored over others, then the system isn’t just about user retention; it’s actively shaping whose voices get heard.
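To make the mechanism D1 describes concrete, here is a minimal, purely hypothetical sketch of a feed-ranking score with a new-user boost. All weights, field names, and the regional multiplier are illustrative assumptions, not any real platform’s formula.

```python
# Hypothetical sketch of a feed-ranking score with a new-user boost.
# All weights and field names are illustrative, not any platform's real formula.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Post:
    likes: int
    comments: int
    shares: int
    author_age_days: int   # how long the author's account has existed
    author_region: str     # e.g. "domestic" or "foreign"

def rank_score(post: Post,
               new_user_boost: float = 1.5,
               regional_boost: Optional[Dict[str, float]] = None) -> float:
    """Base engagement score, multiplied by optional boosts."""
    base = post.likes + 2 * post.comments + 3 * post.shares
    score = float(base)
    # Common retention tactic: amplify brand-new accounts' early posts.
    if post.author_age_days < 7:
        score *= new_user_boost
    # If a boost is keyed to where the author is from, the ranking is no
    # longer neutral retention logic: it shapes whose voices get heard.
    if regional_boost:
        score *= regional_boost.get(post.author_region, 1.0)
    return score

old_local = Post(likes=100, comments=20, shares=10,
                 author_age_days=400, author_region="domestic")
new_foreign = Post(likes=100, comments=20, shares=10,
                   author_age_days=2, author_region="foreign")

print(rank_score(old_local))    # 170.0
print(rank_score(new_foreign))  # 255.0 -- same engagement, newcomer outranks
print(rank_score(new_foreign, regional_boost={"foreign": 2.0}))  # 510.0
```

The point of the sketch is the last line: a boost applied to all new users is plausibly just retention strategy, but once a multiplier depends on an author’s origin, the system is actively redistributing visibility between groups.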
Policy, Tools and Enforcers Can Also Be Biased
Jasmine 52:59
Earlier, both of you discussed the biases in technology and the politicization of companies, and I’d like to add a new perspective: policies themselves can also be biased. When we discuss international relations, this bias is often quite evident, with numerous examples. But now, in domestic politics as well, especially in the United States, there are many political, selective, and biased elements at play. I want to touch on three angles here.
First, let’s focus on the U.S. domestic situation, specifically on the topic we’ve been discussing all along—social media regulations. Trump’s policies have been biased in this regard. During his first term, he sought to repeal Section 230, which we’ve talked about before. Interestingly, after he created his own social media platform, Truth Social, he benefited from the very protection offered by that clause. As a result, he stopped opposing Section 230. But it’s anticipated that during his next term, if he’s re-elected, he will push to amend it again. Even the newly appointed chairman of the Federal Communications Commission, Brendan Carr, has suggested revising Section 230 to punish advertisers who leave platforms like X, which have fewer content moderation restrictions. Carr has also expressed strong disapproval of Facebook, Google, Apple, and Microsoft, accusing these tech giants of fostering a censorship cartel. This clearly highlights how the Trump 2.0 administration is inclined to favor X, while possibly applying biased policies toward other tech behemoths.
This kind of bias also shows up in his antitrust policies. On one hand, Trump himself is not a supporter of oligopolies. During his first presidency, he launched antitrust investigations into companies like Meta, Amazon, and Apple. However, the motives behind his antitrust actions were often far from pure, reflecting his personal interests or biases. For example, when CNN published negative reports about him, he obstructed their merger. On the other hand, he swiftly approved Disney’s acquisition of 21st Century Fox. This reveals a double standard in his approach to antitrust issues, applying a flexible set of rules depending on the situation. The question now is, how will he use antitrust laws in the future? Will he weaponize them to pressure certain corporate decisions? This remains uncertain. So, when it comes to domestic politics in the U.S., there’s definitely an element of political, selective, and biased decision-making.

Second, we can’t overlook China-U.S. relations, especially when it comes to the TikTok ban legislation. This is a very recent and targeted law aimed specifically at one company, ByteDance. Over the past couple of years, we’ve seen a series of legislative actions directed at individual Chinese companies. For example, the BIOSECURE Act targets Chinese companies like BGI and WuXi AppTec, while the annual defense bills have focused on companies like DJI.
The third aspect I want to highlight is the bias in the policy discourse and the selection of policies. For instance, the U.S. Congress now prioritizes data protection and privacy over freedom of speech as a regulatory approach. However, U.S. TikTok users feel that the government is overprotecting their personal data and privacy. Our guest in the previous episode mentioned that they simply don’t care about their personal data and are willing to give it to China.
Trump also politicized the TikTok ban issue using the separation of powers. In a document he submitted to the Supreme Court, he argued that parts of the Congressional ban infringed on the president's executive authority. Of course, some people think that Trump’s intervention in the ban was about taking credit or engaging in a power struggle with Congress. But I think here, politicization and power struggles seem to become part of Trump’s personal strategy and tool to try to influence the final outcome for TikTok.
D1 57:00
Yes, I think his flip-flopping attitude is fully reflected in how he deals with TikTok. As Jasmine mentioned earlier, TikTok has already done a lot of work in response to the series of bans and ongoing national security-related controversies it’s faced in the U.S. over this long period of time.
For example, TikTok has established a project in the U.S. called Project Texas, and a similar project in Europe called Project Clover. These initiatives center on several key changes. First, as mentioned earlier, TikTok decided to store U.S. user data solely in local data centers in the U.S. Before this, TikTok’s U.S. data was stored in Virginia with a backup in Singapore; since the controversies in 2020, it has shifted all U.S. user data to Oracle servers, a significant adjustment. Second, as we mentioned, TikTok’s content policy-making and review processes were also changed, with its security and risk teams relocated from Beijing to offices based solely in the U.S. Third, TikTok has opened itself up to compliance and transparency investigations. Whether through the Committee on Foreign Investment in the United States (CFIUS), Oracle, or other third-party auditors, TikTok has made clear that it will fully cooperate with any investigation into how it conducts reviews.
However, despite these adjustments, lawmakers are still not fully satisfied. Their focus isn’t on these technicalities; they aren’t particularly concerned with how TikTok has made these separations. Instead, they hope to implement a “ban if not sold” bill, aiming to control the company from the top down, through legislation. Therefore, I believe there is a clear bias in the expectations of what the company should do and how the government should intervene in the operations of such a business.
Nation-state, Corporate or Society: Digital Sovereignty for Whom?
Alina 58:58
Currently, there are many discussions on the issue of digital sovereignty and who holds it. Countries now hope to localize digital infrastructure and data storage centers; Apple’s iCloud for Chinese users, for example, is hosted in Guizhou. This combines traditional geographic concepts with what is usually considered a more abstract space, the ‘cloud’ or ‘cyberspace’. These states or political entities believe they can still claim sovereignty over these digital elements. However, there are also voices arguing that the data produced by companies like Facebook and Google, and digital services in general, belong not to us or to the state, but to the companies providing these services. In academia, there is much debate over ‘privacy by using’ versus ‘privacy by design’ in the context of digital sovereignty. Should digital power be in the hands of tech companies, with nations no longer having control over citizens’ data, security, and protection? Or should citizens reclaim their own digital sovereignty, meaning these things belong neither to the state nor to companies? I think, with the issue of TikTok, the tension and struggle between the state, society, and tech companies becomes very apparent.
D1 01:00:36
Yes, I think if you ask a user to what extent they are willing to give up some of their data in exchange for convenience in life, each country’s users might give different answers. A lot of this depends on how much they understand the mechanism, how their data is used, by whom, whether they have control over their data, and so on. I think this is closely related to a citizen’s digital literacy, as well as their concerns about privacy and the concept of private rights.
Jasmine 01:01:08
It also comes down to trust in both corporations and the state.
D1
Yes, yes. I think this is a crucial area of research, one focused on teasing apart these differences and distinctions. These questions also shouldn’t be overshadowed by, for example, the tensions between state and corporate power that we just discussed; they deserve attention in their own right.
The Securitization of the So-called ‘Universal Values’
Alina 01:01:36
And this also includes the question of what truly constitutes freedom in this new technological era. When considering issues such as freedom of speech—what our previous guest referred to as the “First Amendment right”—versus digital privacy and data security, which carries more weight? In situations where the state elevates these matters into a national security issue, what is actually the most important thing? For instance, our guest from the first episode, as well as the community they belong to, would certainly argue that their daily lives are already difficult and challenging enough. They do not want their struggles to be further entangled with abstract grand narratives about national security or other lofty political concerns.
D1 01:02:31
I think the concept of freedom of speech that we just mentioned is not only a universal value but also a fundamental human right as recognized in international human rights law. It is an obligation that both multinational corporations and sovereign states must actively uphold and protect. However, international human rights law also stipulates that freedom of speech can be restricted under certain specific circumstances—one of these being national security. The challenge, however, lies in how national security is defined and interpreted in practice, as this is ultimately determined by sovereign states. The TikTok case serves as a striking example of how national security is defined and leveraged in the real world.
Alina 01:03:04
Indeed, I think TikTok’s controversial journey in the U.S., from being an ordinary social media platform to being framed by the American government—through a series of speech acts—as a national security threat and a geopolitical tool, illustrates an attempt at securitization that has, at the domestic level, partly failed. According to Barry Buzan and Ole Wæver’s securitization theory, an issue must undergo a process of discursive and social construction, gradually being elevated from an ordinary concern to a matter of national security. This transformation then justifies the adoption of exceptional policies and actions. However, this process follows specific steps. First, it requires a series of speech acts—public statements that define an entity as an existential threat to national survival. The U.S. government has certainly been actively engaging in this stage. Second, for securitization to be successful, the audience—whether the general public or authoritative institutions—needs to accept this framing. Only then can the issue be fully constructed as a national security concern, leading to the implementation of exceptional measures. However, it is evident that in this case, many Americans are not fully buying into this securitization process. Due to pressing concerns such as economic conditions, social inequalities, and systemic issues—including the widely discussed problems of healthcare in the U.S., as seen in viral conversations on Xiaohongshu (RedNote)—many citizens prioritize other issues over the supposed security risks posed by TikTok. This public sentiment significantly weakens the effectiveness of the securitization attempt.
D1 01:05:08
Yes, I think that within many so-called elite perspectives, there is a belief that social media has trivialized or “entertained” serious news and political events. However, in reality, we are increasingly seeing that many political events are not simply being made entertaining but are inherently theatrical and dramatic in nature. In this sense, criticizing people for engaging with these events in a way that reflects their theatricality feels highly patronizing.
Alina 01:05:35
Exactly, and perhaps it is precisely because TikTok embodies these features of de-elitization and decentralization that we ultimately see such a discussion emerge. This directly aligns with what Overreactology aims to highlight—that things we often perceive as mundane, ordinary, apolitical, or even purely entertaining are, in reality, deeply intertwined with politics.
Jasmine 01:06:07
Yes, I think today’s episode marks a new step for Overreactology, as we’re trying to move towards more analytical content. As Alina mentioned earlier, we’ve usually focused on how macro geopolitical affairs affect individual lives at the micro level; in the first episode of this special season, for example, we examined how the TikTok ban directly impacts personal experiences. Now, in this second episode, we’ve moved into more abstract and analytical discussions. But I’d like to remind our listeners that even though we’re discussing topics that seem to be happening on an international scale, or very far away, they’re still deeply relevant to every individual. And before we wrap up, I’d like to give a little sneak peek at our second season, (Un)folding Conflicts, which is currently in production. We hope to share it with you all very soon.
Enjoying my work? Don’t forget to show your support and appreciation, and let me know you’re with me on this creative journey so we can keep the passion going!

- From the author