June 28, 2022

The Promise and the Worry in the Metaverse

By Mary Beth Altier and Mathis Bitton

Many people are talking about the wonderful things that can be achieved with the Metaverse. But if we think back to the advent of the Internet and social media, we saw tremendous opportunity then, too. We then saw those technologies being used by nefarious actors, including human traffickers, violent extremists, and even hostile states. In 2016, Russia weaponized the Internet to interfere in the U.S. presidential election, an event that captured the destructive potential of these platforms.

Reflecting upon our experience with the social giants of web2, how can we incentivize tech companies to think now about security, moderation, or governance in the Metaverse? Which risks should we be most careful to avoid?

In the early 2010s, tech companies were reluctant to police anything on their platforms. The response of the big players was, “We're not doing that. It's free speech, we're just going to let everything fly.” They were trying not to take any stance on what's acceptable and what's not, in part because the line is difficult to draw.

Yet their position has changed over time, and significantly so – among other examples, last year Twitter permanently banned President Trump, and Facebook, following a review by its Oversight Board, suspended his account for two years. How did this shift happen, and what can we learn from it moving forward?

What We Learned from Social Media

Three key groups influence the policies of social media companies. First are advertisers, who control these platforms’ revenue streams at the source. For instance, there was a major advertiser backlash against YouTube when ads for companies like Procter & Gamble, Toyota, and Anheuser-Busch ran next to ISIS beheading videos and other extremist content. The advertisers started pulling their money, and YouTube’s standards shifted – the private sector can incentivize itself to change, at least up to a point.

Second, NGOs, grassroots organizations, and voters can pressure the U.S. government to regulate social media content. As awareness of the dangers online grows, these efforts are bound to become more influential – even if Washington remains notoriously behind the curve on these matters.

Third, users themselves can decide to leave the platform. But they rarely do; once social media giants secure network effects at scale, people become more and more reluctant to leave a platform on which they have built relationships. In practice, companies like Facebook or YouTube can endure scandal after scandal before consumers begin to react strongly. The bar for any reaction is too high to be meaningful, and the reaction, when it does occur, is often too small to matter. However effective it may be in other sectors, user accountability does not seem to work particularly well for digital platforms.

More than anything, a shift in economic incentives underpinned tech companies’ change of mind on content moderation. At first, these platforms saw moderation as an extra cost requiring resources that could be used elsewhere. But they came to understand that, precisely because it requires large resources, embracing content moderation could make it impossible for smaller players to compete. How could small social media start-ups have a content moderation department? So long as extensive moderation remained the norm, large incumbents would retain their grip on the market.

Ultimately, we can learn three lessons from the trajectory of content moderation over the past ten years. First, with large-scale network effects, social media platforms can avoid user accountability. Second, economic incentives and regulation remain the only reliable ways to make tech companies change their behavior. Third, content moderation can come at the cost of competition and decentralization. The standards can be so demanding as to prevent smaller players from entering the market; and those that do enter can only cater to a smaller user base – namely, people who prefer platforms without content moderation. This last point is particularly important as we begin to think about the Metaverse, which could become more decentralized – as web3 enthusiasts hope – or replicate the market domination patterns of web2. Let’s examine these two scenarios in turn.

Two Mental Models for the Metaverse

In the first scenario, current players – Apple, Meta, and so on – control most of the Metaverse. Each of these companies has already invested billions of dollars in Metaverse-focused research and development, and that investment is bound to grow as adoption accelerates. If these firms export their dominance to the Metaverse, their content moderation policies will translate into the standards of tomorrow. In fact, last week, Meta, Microsoft, and 35 other companies announced the formation of the Metaverse Standards Forum, an open consortium dedicated to developing Metaverse interoperability protocols. Defining the rules from above, these companies could bring their decade-long experience of content moderation to the internet of the future. But this concentration of power would come at the cost of more bottom-up decision-making.

In an alternative scenario, web3 triumphs and decentralized ecosystems replace the giants of old. Overcoming the dominance of incumbents, new players turn the Metaverse into a mosaic of decentralized communities – each of which defines its own norms. While attractive, this more democratic model would make content moderation significantly harder: if thousands of decentralized organizations come up with standards that reflect their communities’ values, achieving interoperability seems downright impossible.

Each mental model therefore comes with trade-offs. Either tech giants retain control of the Metaverse and status quo moderation remains the norm, or web3 eats the world and the Internet becomes a Wild West devoid of clear standards, at least in the short run. The second model looks riskier, but it does offer the possibility of democratizing content moderation in the long run. As Roblox’s Head of Education Rebecca Kantar put it at a recent Brookings Institution panel, if the Metaverse is to promote healthy interactions, “profit cannot be the only motive.” Democratic decision-making is messy and often inefficient, but these flaws need not justify outsourcing power to expert-led companies with incentives of their own.

Ultimately, whether large companies succeed in capturing the Metaverse or not, those concerned about content moderation will have to balance these trade-offs between decentralization and efficiency, competition and safety, democracy and expertise.  


Mary Beth Altier directs the MS in Global Affairs concentration in Transnational Security as well as the Initiative on Emerging Threats. Her research interests focus on political violence, political behavior, international security, nationalism, and ethnic conflict. She translates those interests into her Metaverse focus: security, and creating safe and equitable communities. Mathis Bitton is a student of political theory at Yale and an associate at The Metaverse Collaborative at NYU SPS.

