Understanding Digital Harms in Conflict: Why It Matters More Than Ever
Digital technologies are already deeply embedded in every aspect of our lives, and their role in conflict is no exception. From targeted surveillance (think hundreds of thousands of CCTV cameras now in use in Afghanistan) to mass disinformation campaigns, the digital dimension of conflict is growing rapidly, and so is the potential for harm.
A recent report by Build Up for the UK Foreign, Commonwealth & Development Office (FCDO) outlines a framework for how peacebuilders, mediators, and policymakers can understand and respond to these "digital harms in conflict" (FCDO, 2025).
Drawing on years of experience and best practice, we’ve taken a look at the report to break it down for implementers and policymakers.
So … What is Digital Harm in Conflict?
Digital harm in conflict is defined as the production and distribution of information in digital spaces that deepens divisions between people or groups, often leading to social fragmentation or even violence. This includes deliberate tactics (e.g. doxxing, hate speech) and unintended consequences (e.g. algorithmic amplification of harmful content).
During the parliamentary elections in Iraq, we saw multiple instances of doxxing and cyber operations: fake posts targeting female politicians in an effort to discredit and undermine their campaigns, and allegations of corruption and fraud backed by falsified images. The information environment is full of such operations, and more. The report identifies five key digital affordances that enable harm:
Offensive cyber operations (e.g., hacking and leaking personal data).
Network control (e.g., internet shutdowns, mass surveillance).
Information deception & manipulation (e.g., disinformation and deepfakes).
Manipulative influence operations (e.g., coordinated bot activity).
Algorithmic amplification (e.g., social media algorithms favoring sensationalist content).
So What? Why It Matters
Digital harms directly undermine social cohesion, foster affective polarisation (group-based animosity), and can spark or prolong conflict. For instance, Myanmar's military has used Telegram and Facebook to target dissidents, while AI-driven surveillance in Gaza and communication blackouts in Ethiopia are stark examples of state-led digital oppression.
More recently, ARK posted about how Google’s Veo video-generation model was being weaponised to stoke societal tensions in Iraq just days after its release. This is a growing trend that violent extremist organisations (VEOs) will be all too willing to exploit. AI-generated content is becoming harder to detect and harder to stop. Developing a toolkit of coordinated responses is therefore vital to protecting both individuals and society at large.
Towards a Resilient Response
The report emphasises that addressing digital harms requires integrating them into traditional conflict analysis, not treating them as an afterthought. This includes:
Partnering with cybersecurity firms to trace threats.
Conducting narrative and platform analysis to track disinformation.
Promoting legal norms, platform accountability, and digital peacebuilding roles.
Understanding the interplay between technology design, online behaviour, and conflict dynamics is critical to effective mediation and sustainable peacebuilding in the digital age.