
AI tools are being used to subject women in public life to online violence

2025/12/19 12:06

The era of AI-assisted online violence is no longer looming. It has arrived. And it is reshaping the threat landscape for women who work in the public sphere around the world.

Our newly published report commissioned by UN Women offers early, urgent evidence indicating that generative AI is already being used to silence and harass women whose voices are vital to the preservation of democracy.

This includes journalists exposing corruption, activists mobilising voters and human rights defenders working on the frontline of efforts to stall democratic backsliding.

Based on a global survey of women human rights defenders, activists, journalists and other public communicators from 119 countries, our research shows the extent to which generative AI is being weaponised to produce abusive content – in a multitude of forms – at scale.

We surveyed 641 women in five languages (Arabic, English, French, Portuguese and Spanish). The surveys were disseminated via the trusted networks of UN Women, Unesco, the International Center for Journalists and a panel of 22 expert advisers representing intergovernmental organisations, the legal fraternity, civil society organisations, industry and academia.

According to our analysis, 70% of respondents reported experiencing online violence in the course of their work, and nearly one in four of those women (24%) identified abuse that was generated or amplified by AI tools. In the report, we define online violence as any act involving digital tools which results in, or is likely to result in, physical, sexual, psychological, social, political or economic harm, or other infringements of rights and freedoms.

But the incidence is not evenly distributed across professions. Women who identify as writers or other public communicators, such as social media influencers, reported the highest exposure to AI-assisted online violence at 30.3%. Women human rights defenders and activists followed closely at 28.2%. Women journalists and media workers reported a still alarming 19.4% exposure rate.

Since the public launch of free, widely accessible generative AI tools such as ChatGPT at the end of 2022, the barriers to entry and cost of producing sexually explicit deepfake videos, gendered disinformation, and other forms of gender-based online violence have been significantly reduced. Meanwhile, the speed of distribution has intensified.

The result is a digital landscape in which harmful, misogynistic content can be generated rapidly by anyone with a smartphone and access to a generative AI chatbot. Social media algorithms, meanwhile, are tuned to boost the reach of hateful and abusive material, which then proliferates. And it can generate considerable personal, political and often financial gains for the perpetrators and facilitators, including technology companies.

Meanwhile, recent research highlights AI both as a driver of disinformation and as a potential solution, powering synthetic content detection systems and counter-measures. But there’s limited evidence of how effective these detection tools are.

Many jurisdictions also still lack clear legal frameworks that address deepfake abuse and other harms enabled by AI-generated media, such as financial scams and digital impersonation. This is especially the case when the attack is gendered, rather than purely political or financial. This is due to the inherently nuanced and often insidious nature of misogynistic hate speech, along with the evident indifference of lawmakers to women’s suffering.

Our findings underscore an urgent two-fold challenge. There’s a desperate need for stronger tools to identify, monitor, report and repel AI-assisted attacks. And legal and regulatory mechanisms must be established that require platforms and AI developers to prevent their technologies from being deployed to undermine women’s rights.

When online abuse leads to real-world attacks

We can’t treat these AI-related findings as isolated statistics. They exist amid broadening online violence against women in public life. They are also situated within a wider and deeply unsettling pattern – the vanishing boundary between online violence and offline harm.

Four in ten (40.9%) women we surveyed reported experiencing offline attacks, abuse or harassment that they linked to online violence. This includes physical assault, stalking, swatting and verbal harassment. The data confirms what survivors have been telling us for years: digital violence is not “virtual” at all. In fact, it is often only the first act in a cycle of escalating harm.

For women journalists, the trend is especially stark. In a comparable 2020 survey, 20% of respondents reported experiencing offline attacks associated with online violence. But five years later, that figure has more than doubled to 42%. This dangerous trajectory should be a wake-up call for news organisations, governments and big tech companies alike.

When online violence becomes a pathway to physical intimidation, the chilling effect extends far beyond individual targets. It becomes a structural threat to freedom of expression and democracy.

In the context of rising authoritarianism, where online violence and networked misogyny are typical features of the playbook for rolling back democracy, the role of politicians in perpetrating online violence cannot be ignored. In the 2020 Unesco-published survey of women journalists, 37% of respondents identified politicians and public office holders as the most common offenders.

The situation has only deteriorated since 2020, with the evolution of a continuum of violence against women in the public sphere. Offline abuse, such as politicians and public office holders targeting female journalists during media conferences, can trigger an escalation of online violence that, in turn, can exacerbate offline harm.

This cycle has been documented all over the world, in the stories of notable women journalists like Maria Ressa in the Philippines, Rana Ayyub in India and the assassinated Maltese investigative journalist Daphne Caruana Galizia. These women bravely spoke truth to power and were targeted by their respective governments – online and offline – as a result.

The evidence of abuse against women in public life we have uncovered during our research signals a need for more creative technological interventions employing the principles of “human rights by design”. These are safeguards recommended by a range of international organisations which build in protections for human rights at every stage of AI design. It also signals the need for stronger and more proactive legal and policy responses, greater platform accountability, political responsibility, and better safety and support systems for women in public life. – Rappler.com

The Conversation

Julie Posetti, Director of the Information Integrity Initiative, a project of TheNerve/Professor of Journalism, Chair of the Centre for Journalism and Democracy, City St George’s, University of London; Kaylee Williams, PhD Candidate, Journalism and Online Harm, Columbia University, and Lea Hellmueller, Associate Professor and Associate Dean of Research, City St George’s, University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

