Fight misinformation with IDV for tiered anonymity on social media, paper argues

The effects of social media on society are an ongoing conversation. Some governments are considering banning social media for children, with ensuing debates over privacy in the required age assurance. And with the rise of generative AI and deepfakes, tackling misinformation is a pressing challenge.
A new paper from Cambridge University, however, offers some guidance and perhaps inspiration for policymakers. In it the authors argue for the establishment of a three-tier anonymity framework on social media platforms utilizing limited identity verification to counter deepfakes and LLM-driven mass misinformation.
Authored by David Khachaturov, Roxanne Schnyder and Robert Mullins of the Department of Computer Science and Technology and the Institute of Criminology, University of Cambridge, the paper is currently on arXiv.
The framework proposes tiers determined by a given user’s “reach score” or influence. Tier 1 permits full pseudonymity for smaller accounts, preserving everyday privacy. Tier 2 would require private identity verification for accounts with “some influence,” reinstating real-world accountability at moderate reach.
Tier 3 would require per-post independent, machine learning-assisted fact checking and review for accounts that would traditionally be classed as sources of mass information.
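The tiering the paper describes amounts to a threshold mapping from an account's reach to an obligation. As an illustrative sketch only, the function below hard-codes hypothetical thresholds and a numeric reach score; the paper itself leaves the exact "reach score" definition and cutoffs to regulators.

```python
# Illustrative sketch of a three-tier anonymity scheme keyed to reach.
# The thresholds, field names, and the idea of a single numeric reach
# score are assumptions for demonstration, not taken from the paper.

def classify_tier(reach_score: float,
                  tier2_threshold: float = 10_000,
                  tier3_threshold: float = 1_000_000) -> dict:
    """Map an account's reach score to a tier and its identity obligation."""
    if reach_score < tier2_threshold:
        # Small accounts: everyday privacy is preserved.
        return {"tier": 1, "obligation": "full pseudonymity permitted"}
    if reach_score < tier3_threshold:
        # Accounts with "some influence": verified privately, shown pseudonymously.
        return {"tier": 2, "obligation": "private identity verification required"}
    # Mass-reach accounts: treated like traditional broadcasters.
    return {"tier": 3,
            "obligation": "per-post ML-assisted fact checking and review"}

print(classify_tier(500)["tier"])        # small account
print(classify_tier(5_000_000)["tier"])  # broadcast-scale account
```

The point of the sketch is that obligations scale as a step function of influence rather than applying uniformly to all users.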
Importantly, because the authors acknowledge that voluntary adoption of the protocols described above is unlikely, they offer a regulatory outline. This regulatory pathway adapts existing U.S. jurisprudence and recent EU and UK safety statutes.
The paper observes that online anonymity began as a shield for ordinary people, but with algorithmic amplification giving a single post the potential reach and influence of a TV broadcast, blanket anonymity becomes a public safety liability. “We therefore argue that identity obligations should scale with influence,” they conclude.
The European Union currently provides the strongest foundation for codifying tiered identity obligations, the authors say, with the Digital Services Act (DSA) introducing structural mechanisms that can be repurposed to scaffold a “reach-based verification regime.”
They point specifically to DSA Article 30's Know Your Business (KYB) customer requirement, which mandates identity verification for commercial users, as a conceptual shift: platform functionality is increasingly conditioned on user transparency.
For the UK, they identify the Online Safety Act 2023, along with the accompanying Categorization of Regulated Services Threshold Conditions Regulations 2024, which require digital platforms to provide tools enabling content filtering based on verification status. "This framework introduces a layered reputational infrastructure while preserving the right to anonymity, laying the conceptual groundwork for our proposed tiered identity regime," the paper says.
While the U.S. is the most challenging country for mandatory identity regulation due to First Amendment protections, among other shields, the authors note the possibility of indirect, incentive-based mechanisms of accountability based on bipartisan legislative proposals at the federal level.
At its core, the proposal is that social media platforms be regulated so that anonymity is calibrated to communicative reach, with the tiered system providing a layered approach. "Adopting this model would re-introduce the social friction that recommender systems have eroded," the paper argues.
The paper "Governments Should Mandate Tiered Anonymity On Social-Media Platforms to Counter Deepfakes and LLM-Driven Mass Misinformation" is available on arXiv.
Article Topics
age verification | digital identity | generative AI | identity verification | social media