From June 23 to June 27, 2025, the Technical Meeting on Public Communication in Emergencies: Tackling Misinformation and Retaining Public Trust in Disruptive Information Environments was held at the International Atomic Energy Agency (IAEA) in Vienna. The event aimed to gather expert advice, operational knowledge, and research insights on the challenges and opportunities posed by generative AI during nuclear emergencies, with a particular focus on the spread of disinformation. State regulators from 55 countries and 15 international experts participated in the meeting, sharing their research, case studies, and experiences.
IGP’s Contribution
Representing the IGP, I presented my research on disinformation during nuclear emergencies. Our study examined state-sponsored disinformation campaigns using propaganda models, focusing specifically on case studies involving the Zaporizhzhia Nuclear Power Plant (ZNPP) and Fukushima Daiichi Nuclear Power Plant (FNPP) incidents. I emphasized that before debating whether AI would significantly enhance disinformation or help counter it, it is essential to first understand how disinformation gains influence, rather than merely to identify what is false. According to our research, disinformation achieves influence by appearing credible and authoritative through the perceived legitimacy of its sources and institutions.
To validate this claim, we applied Jowett and O’Donnell’s Legitimating Source Model (LSM) to analyze the Fukushima disinformation case. Our research demonstrated how various propagandists cite each other’s false narratives to legitimize their own false claims, enhancing the perceived credibility of their information sources. I termed this process the “amplification mechanism.” I further argued that effectively countering disinformation requires targeting these amplification mechanisms rather than simply detecting false information and responding individually with factual data. Neutralizing the legitimacy-building processes or creating a stronger amplification network based on factual information is a more efficient approach to mitigating the impact of disinformation. I proposed that when developing policy recommendations, we should focus on how generative AI could contribute to building a stronger amplification network as a tool for emergency communications.
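To make the amplification mechanism concrete, here is a toy sketch. It is illustrative only, not the methodology of our study: it models mutually citing outlets as a directed citation graph and scores perceived credibility with a simple PageRank-style iteration. All outlet names, weights, and parameters are invented.

```python
# Toy model of the "amplification mechanism": outlets that cite each
# other's claims lend one another perceived credibility. All outlet
# names and weights are invented, for illustration only.

CITATIONS = {
    # citing outlet -> outlets whose claims it presents as "sources"
    "outlet_a": ["outlet_b", "outlet_c"],
    "outlet_b": ["outlet_a"],
    "outlet_c": ["outlet_a", "outlet_b"],
    "independent": [],  # cites no one inside the cluster
}

def perceived_credibility(citations, rounds=20, damping=0.85):
    """PageRank-style iteration: credibility flows along citation links,
    so a tight cluster of mutually citing outlets inflates its own
    perceived authority even with no external support for its claims."""
    nodes = list(citations)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(rounds):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for citing, cited in citations.items():
            for target in cited:
                # each citation transfers a share of the citer's score
                new[target] += damping * score[citing] / len(cited)
        score = new
    return score

for outlet, s in sorted(perceived_credibility(CITATIONS).items(),
                        key=lambda kv: -kv[1]):
    print(f"{outlet:12s} {s:.3f}")
```

The point of the toy is structural: removing edges inside the cluster (disrupting the legitimacy-building loop) deflates its scores, while adding a well-connected network of factual sources raises theirs. That is the intuition behind targeting amplification mechanisms rather than rebutting individual false claims one by one.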
Presentations From Other Participants
There were several interesting presentations on the application of generative AI in emergency communications. Among the researchers, the presentation by Peter Kaiser and Alice Yang on leveraging AI chatbot technology stood out. Yang’s research demonstrated that AI chatbots can serve as reliable, persuasive tools: they are available 24/7, scale easily, support multiple languages, and can alert the public to evolving concerns. They argued that enhancing both the functional aspects (ease of use, usefulness) and the emotional aspects (emotional connection, social presence) of AI chatbots can build public trust, creating a “domino effect” that leads to increased social media engagement and greater intentions to donate.
Among risk communication consultants, Philip Borremans offered a field-focused view of AI’s role. Through his Unified AI-Augmented Crisis Communication (UACC) framework, he addressed the limitations of traditional methods in today’s fast-moving, interconnected “polycrisis” (multiple simultaneous crises) and “permacrisis” (prolonged state of uncertainty and instability) contexts. He highlighted three key AI functions: rapidly identifying stakeholders for real-time, tailored messaging; mapping crisis interconnectivity to predict cascading effects and combat misinformation; and supporting responses with early warning systems, pre-generated templates, and real-time monitoring for swift, adaptive communication. Borremans emphasized that AI should augment, not replace, human communicators, enhancing trust, clarity, and resilience during emergencies.
Among regulators, ROSATOM (Russia) offered a notably realistic and operational perspective on emergency communication. Unlike other presentations that focused on using AI to deliver clear, reassuring information during crises, ROSATOM acknowledged that in a nuclear emergency—like their experience at Zaporizhzhia—quickly conveying complex, accurate information to a fearful public is extremely challenging, and AI alone cannot solve this. Instead, they argued that the most realistic approach is to regularly train staff through emergency exercises and drills to improve their ability to anticipate an opponent’s disinformation strategies.
To support this, ROSATOM developed the Simula platform—a cloud-based emergency communication simulator that replicates real information-crisis conditions. Simula integrates AI not to provide “correct answers and solutions,” but to challenge trainees in unpredictable scenarios. The AI is trained to create an information field around a facility, publish negative comments, pose tricky questions, initiate scenario variations, and distinguish facts from fakes. This allows staff to practice responding to misinformation and disinformation campaigns realistically, developing adaptive, anticipatory skills essential for managing complex emergencies.
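Simula itself is proprietary and was not shown in code, so the sketch below is only a minimal illustration of the general pattern ROSATOM described: an AI adversary that escalates a scenario with hostile comments and tricky questions rather than supplying answers. Every scenario string and function here is a hypothetical stand-in.

```python
# Minimal sketch of an adversarial training loop in the spirit of what
# ROSATOM described for Simula. The real platform is proprietary; every
# scenario, prompt, and function below is a hypothetical stand-in.
import random

SCENARIOS = [
    "rumor of a radiation leak exceeding regulatory limits",
    "viral photo claiming to show evacuation chaos at the plant",
    "fabricated 'expert' statement contradicting official dose readings",
]

def adversary_move(scenario: str, round_no: int) -> str:
    """Stands in for the AI red team: it escalates the scenario with
    negative comments and tricky questions instead of giving answers."""
    moves = [
        f"Hostile comment ({scenario}): 'Why is the operator hiding data?'",
        f"Tricky question ({scenario}): 'If it's safe, why are staff leaving?'",
        f"Scenario twist ({scenario}): a second, contradictory claim surfaces.",
    ]
    return moves[round_no % len(moves)]

def run_exercise(rounds: int = 3) -> None:
    scenario = random.choice(SCENARIOS)
    print(f"Exercise scenario: {scenario}\n")
    for i in range(rounds):
        print(f"[AI adversary] {adversary_move(scenario, i)}")
        # In a real drill the trainee drafts a response here and an
        # evaluator (human or model) scores timeliness, accuracy, and tone.
        response = input("[Trainee response] > ")
        print(f"  logged for after-action review: {response!r}\n")

if __name__ == "__main__":
    run_exercise()
```

The essential design choice is that the AI plays the opponent, not the oracle: trainees build anticipatory skill precisely because the system withholds “correct answers and solutions.”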
Policy Recommendations
The IAEA has not yet published official policy recommendations from this meeting, though it has said it plans to. Like other participants, I submitted my own recommendations based on our discussions and simulation exercises.
I am skeptical of the IAEA’s vision of developing AI detection systems to counter misinformation and disinformation. First, it is unclear what exactly they intend to “detect.” What is false and what is true? Who decides? Even if we detect it, what do we do with it? Remove it from the platform? Post counterarguments? I also doubt how reliably AI can detect disinformation automatically. AI systems are not designed to interpret broad, complex problems on their own; they work best when applied narrowly to specific, well-defined tasks. Disinformation is not such a task: it spans very diverse issues and serves political and socioeconomic goals. Finally, in democratic societies, “false information” is also deeply tied to freedom of expression. While misinformation and disinformation are certainly “not good and unhealthy” elements of a democratic society, we should not start from the premise that they are always clearly identifiable and inherently “bad and harmful.”
Throughout the meeting, I sensed an attitude among IAEA staff that treats misinformation and disinformation as inherently harmful problems to be eradicated, rather than as persistent features of the information environment to be accepted and managed. I think this perspective ultimately leads to the seemingly easiest and most convenient solution: handing responsibility for these messy, hard-to-solve disinformation issues over to AI systems that appear to be human-like machines.
So my submission did not mention AI detection systems at all; instead, it outlined two recommendations.
I acknowledged that AI does have a clear, practical role as a tool for emergency communication, especially in time-pressured and rapidly shifting public opinion environments. For example, chatbots can help staff respond quickly and consistently across multiple channels. But this only works with continuous human oversight and management to ensure accuracy, appropriateness, and ethical use. AI should be deployed thoughtfully—not as a replacement for human judgment, but to make human work more manageable and responsive. It should empower communicators to navigate complexity, not promise to simplify it away.
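As a minimal sketch of what continuous human oversight can mean in practice, the hypothetical pattern below keeps a human communicator between the model and the public: the AI only drafts, and nothing is published without sign-off. The draft_reply function is a stand-in for any text-generation backend; no specific vendor API is assumed.

```python
# Hypothetical human-in-the-loop pattern for emergency chatbot replies:
# the model only drafts; a human communicator approves or edits before
# anything is published. draft_reply() stands in for any generation
# backend; no specific vendor API is assumed.
from dataclasses import dataclass

@dataclass
class Draft:
    channel: str      # e.g. "web", "sms", "social"
    question: str
    text: str
    approved: bool = False

def draft_reply(question: str, channel: str) -> Draft:
    # Placeholder generation step; in practice this would call a model
    # constrained to pre-approved factual material.
    return Draft(channel=channel, question=question,
                 text=f"[DRAFT] Verified guidance regarding: {question}")

def human_review(draft: Draft, officer_edit: str | None = None) -> Draft:
    """Nothing is sent until a human signs off; an edit replaces the
    machine text, keeping accountability with people."""
    if officer_edit is not None:
        draft.text = officer_edit
    draft.approved = True
    return draft

def publish(draft: Draft) -> None:
    # Approval is enforced here, not left to workflow discipline.
    assert draft.approved, "refusing to send unreviewed AI output"
    print(f"-> {draft.channel}: {draft.text}")

if __name__ == "__main__":
    d = draft_reply("Is tap water safe near the plant?", "sms")
    d = human_review(d, officer_edit="Tap water meets safety limits; "
                                     "monitoring results are updated hourly.")
    publish(d)
```

The detail worth noting is that approval is enforced in code (publish refuses unreviewed drafts) rather than left to workflow discipline, which is what makes the augmentation, not substitution, principle operational.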
Ideally, I believe the IAEA should focus on building a fact-based information amplification mechanism by expanding its current communication channels to include more diverse participants. This would mean creating robust, credible channels that can quickly amplify accurate, trustworthy information to counter misinformation and disinformation during emergencies.
One way to operationalize this amplification mechanism would be to develop a non-state-led, networked governance structure. Rather than relying solely on top-down communication from the IAEA, this approach would bring together a broad coalition of actors—platform companies, academic researchers, civil society organizations, local authorities, and regulators—to collaborate voluntarily on improving the reach and credibility of accurate information. Such a multistakeholder network could function much like established models in cybersecurity governance, such as MAAWG’s approach to spam and phishing mitigation, which has successfully tackled cross-border, multi-platform threats through voluntary cooperation.
Of course, this approach is neither easy nor quick to implement. Building such a collaborative, trust-based network requires sustained commitment, careful coordination, and a willingness to move beyond traditional, centralized communication practices. Institutional credibility cannot be manufactured overnight. While challenging, this kind of long-term investment is essential if the goal is a truly resilient amplification mechanism that can out-amplify the networks spreading disinformation.
Yet while this ideal approach is necessary for the long term, I also want to offer a more realistic recommendation that the IAEA could implement immediately. This recommendation draws inspiration from what Russia’s ROSATOM has already demonstrated in its own emergency communication planning.
Rather than attempting to build new and complex multistakeholder governance structures, a more practical step is to maintain existing communication systems while systematically enhancing the capabilities of nuclear regulators and emergency communication staff. The key idea here is not substitution—AI does not replace human decision-makers or offer some fully automated, “smart” response system. Instead, AI should be used to augment human performance in environments defined by uncertainty, complexity, and rapidly evolving public opinion.
Specifically, this means investing in continuous training and scenario-based exercises that incorporate AI-driven simulations. Such systems can expose staff to a diverse range of realistic, adversarial, and fast-changing information environments. Rather than providing “correct” or prescriptive answers, these simulations can help human communicators practice anticipating disinformation campaigns, recognizing shifting narratives, and adapting quickly under pressure. In this way, they help communicators stay flexible, responsive, and ethically grounded, even when facing adversaries who deliberately exploit fear, confusion, and rapidly spreading misinformation and disinformation.
This recommendation is consistent with the augmentation perspective on AI integration that I have argued for in previous blog posts, including on military planning. As Professor Jon Lindsay notes, this approach also aligns with the U.S. military’s current AI integration strategies. In my view, Russia’s approach to emergency communication planning reflects a similar philosophy: use AI not to replace the human role but to enhance it, building the institutional and human capacity needed to respond effectively to complex crises.