TikTok, owned by the Chinese parent company ByteDance, responded in 2022 to a hue and cry about surveillance of Americans by moving all U.S. user data onto U.S.-based servers operated by Oracle. That fig leaf, however, neither limited Beijing’s control nor allayed fears that the United States’ chief economic and military rival was collecting users’ personal information and flooding social media with Chinese propaganda. In an election year, regulating TikTok is catnip for politicians.
Technically, the House legislation would allow TikTok to keep operating, but only if it were sold to a company not under control of a “foreign adversary.” China hawk Rep. Mike Gallagher (R-Wis.) told NPR: “What we’re after is, it’s not a ban, it’s a forced separation. The TikTok user experience can continue and improve so long as ByteDance doesn’t own the company.” As a practical matter, however, there might be no buyer for TikTok.
Concerns about TikTok’s ownership are long-standing and legitimate. “Gallagher says classified and unclassified national security assessments show that the app is a threat to user privacy and that it’s been used to target journalists and interfere in elections,” NPR reported.
But singling out one social media company and not others, at least in part because of concern about pro-Chinese content, raises significant First Amendment issues. Content discrimination (e.g., “We don’t want you receiving pro-China messages”) is a cardinal sin under the First Amendment. Even if the move were justified simply on grounds of suspected Chinese spying or collection of personal data, the U.S. government would have a heavy burden to justify a ban if the Chinese owner did not sell the company.
At issue is not China’s First Amendment rights, but those of U.S. users. It is their speech rights that would be curtailed if TikTok disappeared. As Jennifer Huddleston of the Cato Institute wrote, “Under First Amendment precedents, the government will need to prove that forced divestment or otherwise banning of the app is both based on a compelling government interest and represents the least restrictive means of advancing that interest.” Lesser measures, such as full disclosure of data collection or an opt-out from it, could be proffered as alternatives to a forced sale or draconian ban.
David Greene of the Electronic Frontier Foundation told “PBS NewsHour”: “Now, if China does pose some particular threats, the U.S. can react to it. The question is whether forcing the sale or banning this platform from operating as it currently operates is the properly tailored way of addressing that threat.” Put simply, if the concern is about Chinese propaganda, the bill would likely not stand up to constitutional scrutiny. If it is about data privacy, the government would have to provide a detailed explanation proving a security threat (something it might want to avoid on national security grounds) and show there is no less onerous way to address the issue.
But let’s get real: There is a way to address privacy issues — not only for TikTok but for the entire social media environment. “What we do not have in the U.S. is comprehensive data privacy regulation that controls how much data companies can collect about their users in the first place, when — to the extent they can retain such data and how they can share such data,” Greene pointed out. “If companies, TikTok or anybody else, were not collecting and retaining and sharing so much data in the first place, you wouldn’t need to single out TikTok for such exceptional treatment.” He makes a persuasive case that if Congress were truly concerned about privacy it would look at “how TikTok and other social media companies retain user data, and … how data brokers then purchase and then redistribute that data to lots of actors, including governments and including our enemies.”
That brings us to the broader issues afflicting social media that extend well beyond TikTok. Congress has yet to pass even basic disclosure rules, such as the Honest Ads Act, which would apply to social media the advertising disclosure rules that already apply in other media.
We have no shortage of proposals to address many of social media’s ills. The Electronic Privacy Information Center, for example, advocates passage of “comprehensive data protection legislation to place strict limits on the collection, processing, use, and retention of personal data by social networks and other entities.” It further recommends that the Federal Trade Commission use “existing authority to rein in abusive data practices by social media companies, and … take swift action to prevent monopolistic behavior and promote competition in the social media market.”
Another approach would be to require transparency so independent experts can study the impact of social media. Rand Corp. has urged passage of legislation akin to the 2021 Platform Accountability and Transparency Act that “would require the National Science Foundation to establish a review process to approve social media researchers, who would have to be affiliated with academic institutions.” Then, approved researchers would gain “access to de-identified aggregate data from social media companies with greater than 50 million unique monthly users.” With hard data, Congress could then consider appropriate legislation on everything from algorithm regulation to lawsuits enforcing terms of service.
When it comes to artificial intelligence, yet another troubling aspect of social media, Yael Eisenstat, among other experts, recommends Congress “regulate AI with a combination of proactive measures to support a transparent industry that incentivizes pro-social behaviors, and responsive measures, to ensure accountability when AI tools exacerbate the spread of misinformation and cause hate-based or anti-democratic harms.” Simply labeling content so users can “differentiate between authentic content and artificially generated content” would be a basic step that does not involve content regulation.
Finally, ideas abound on curtailing, if not eliminating, the liability protection afforded to social media companies under Section 230 of the Communications Act. Some bills “have focused on procedural aspects of decisions to restrict content, such as by conditioning immunity on publishing terms of service or explaining decisions to restrict specific content,” the Congressional Research Service noted. Other proposals would allow specific types of lawsuits “brought under drug trafficking or nondiscrimination laws” or authorize lawsuits “if the site promoted the challenged content through a personalized algorithm.”
No doubt, many would find banning TikTok a feel-good measure to address widespread distrust of China. However, a constitutionally suspect effort, hastily passed without robust hearings and failing to address the whole panoply of destructive practices across platforms, constitutes irresponsible and, frankly, lazy legislating.
Instead of China-bashing or an endless stream of histrionic hearings in which lawmakers heap scorn on tech executives, it is long past time for Congress to do the hard work of tackling, in a systematic way, the problems that plague social media.