Author: Zineb Bhaby
Date: 23 September 2025
In today’s challenging funding landscape, humanitarian organizations face increasing pressure to do more with less. Resources are limited while needs continue to grow in both size and complexity. Traditional methods alone cannot address the scale and intricacy of contemporary crises, from climate-driven disasters to protracted conflicts affecting millions. This reality demands that we be innovative and leverage every available tool to strengthen our ability to serve affected communities.
Digital technology, including emerging Artificial Intelligence (AI) capabilities, provides real potential for the humanitarian sector, yet it remains mostly underutilized. While major humanitarian organizations have embraced digitalization in their internal operating systems, they continue operating with rather “primitive” technology for actual aid delivery. The question is not whether we can afford to explore these tools, but whether the communities we serve can afford for us not to. The ability to operate efficiently and make informed decisions can have a direct impact on how many people we can reach and how effectively we can help them.
Beyond the Hype: Promise Versus Readiness
New capabilities in data collection, processing, and utilization are transforming humanitarian work in concrete ways. AI can improve decision-making through uncovering patterns in large amounts of data, enhance targeting precision in resource-constrained environments, strengthen coordination mechanisms that currently operate in silos, create new pathways for meaningful engagement with local partners, and boost aid transparency through automated monitoring and feedback systems.
Beyond the AI hype lies a growing landscape of practical applications making measurable differences in humanitarian response. Advanced text analysis tools such as Gannet help organizations sift through vast amounts of crisis information to quickly identify where help is needed most [1]. The World Food Programme uses AI to combat global food insecurity [2], while the Red Cross and UN OCHA deploy predictive models to assess the impact of disasters before they strike [3]. Open-weight AI models are being adapted to support medical diagnosis in low-resource contexts [4]. UN Global Pulse’s DISHA initiative uses AI for automated damage assessment and socio-economic mapping, and the International Rescue Committee’s SignPostAI uses AI chatbots to deliver information to people facing crises in 20 countries [5].
AI has genuine potential to help us become better at planning complex operations, understanding multifaceted situations, modeling rapidly changing realities, improving communication across language barriers and coordinating between multiple actors. These capabilities could address persistent operational challenges that traditional methods struggle to resolve at the scale and speed required.
Yet a significant gap exists between some pioneering efforts and the sector’s broader technological readiness. While other sectors systematically integrate digital transformation into their core strategies, humanitarian organizations often approach technology through fragmented, project-based initiatives disconnected from their core mission. This gap reflects how humanitarian budgets often categorize technology as administrative overhead rather than recognizing it as a strategic enabler.
The Foundation Challenge: Infrastructure Before Intelligence
It feels like a moral contradiction in 2025 to advocate for technology spending while humanitarian needs go unmet. But perhaps this is an opportunity to reconsider our existing technological investments. Maybe the time has come to reevaluate digital transformation strategies that have been organization-centered rather than ecosystem-focused.
I am not advocating for buying general-purpose AI subscriptions. To best leverage AI capabilities, we need to build the data foundation: data systems that speak to each other. This means investing in common data infrastructure, standardized data formats, secure sharing protocols, and interoperable platforms. Without proper data infrastructure, AI becomes like installing a high-tech navigation system in a car with a broken engine: impressive technology that will not get you where you need to go. The path forward requires investing in systems, not simply subscribing to services. There is no single solution that fits all, but there is a critical need to engage with technology in meaningful ways.
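To make the idea of "data systems that speak to each other" concrete, consider what a shared data standard does in practice: every organization validates its records against the same agreed schema before exchanging them. The sketch below is a minimal, hypothetical illustration in plain Python; the field names and schema are invented examples for this post, not an actual sector standard.

```python
# Illustrative sketch: validating a record against a shared schema before
# exchange between organizations. Field names here are hypothetical examples.

REQUIRED_FIELDS = {
    "org_id": str,          # reporting organization identifier
    "location_code": str,   # shared place code (e.g. an admin-level p-code)
    "people_reached": int,  # count of people assisted
    "report_date": str,     # ISO 8601 date string
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return problems

record = {
    "org_id": "NGO-001",
    "location_code": "SO22",
    "people_reached": 1500,
    "report_date": "2025-09-01",
}
print(validate_record(record))  # prints [] — the record conforms
```

The point is not this particular code but the principle: once organizations agree on a common schema, any system can consume any partner's data, which is exactly the interoperability that fragmented, organization-centered tooling lacks.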
The current global race toward ever-larger AI models should not distract us from what is genuinely useful for humanitarian contexts. Smaller, specialized models often provide better solutions for our specific needs while requiring significantly less computational power and offering greater transparency and explainability. Common data ecosystems leveraging small models trained on curated humanitarian data can outperform massive general-purpose systems for specific tasks and can be deployed on basic technical infrastructure. They reduce environmental impact, offer better contextual accuracy and lower barriers to adoption across the humanitarian ecosystem.
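To illustrate how modest such a specialized model can be, here is a toy classifier, written in pure Python with no external libraries, that routes short crisis reports to a response sector. The categories and training snippets are invented for this post, and a real deployment would train on curated humanitarian data at far larger scale; but the sketch shows that a small, purpose-built model runs on any basic machine and is fully inspectable.

```python
# Toy sketch: a tiny naive Bayes text classifier that could run on basic
# infrastructure. Training examples and categories are invented illustrations.
import math
from collections import Counter, defaultdict

TRAINING = [
    ("water supply contaminated in camp", "wash"),
    ("boreholes damaged no clean water", "wash"),
    ("families need food rations urgently", "food"),
    ("crop failure drives food shortage", "food"),
    ("shelter destroyed by flooding", "shelter"),
    ("tents needed after homes collapsed", "shelter"),
]

def train(examples):
    """Count words per label and label frequencies."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
print(classify("no clean water in the camp", word_counts, label_counts))  # prints wash
```

Every decision this model makes can be traced back to visible word counts, which is the transparency and explainability advantage the paragraph above describes, in contrast to opaque general-purpose systems.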
Implementation Challenges: Navigating the Risks
While the opportunities are compelling, the path to responsible implementation demands constructive skepticism and continuous learning. The dangers of ad-hoc adoption are particularly acute in humanitarian contexts. When humanitarian workers use AI tools without clear ethical frameworks or guidance, they risk introducing harm to the populations we are meant to protect. Experimentation with freely available tools risks exposing personal data, while the use of general-purpose AI models without proper safeguards risks introducing bias in decision-making and outright false information due to a lack of context in the models’ training data.
Power asymmetries between aid providers and recipients create conditions where meaningful informed consent becomes challenging to obtain. When people are desperate for assistance, their ability to refuse data collection or algorithmic processing is severely compromised. Beyond consent issues, algorithmic bias could inadvertently reinforce existing inequalities when training data fails to represent marginalized groups adequately. These risks are magnified in conflict settings, where representative data is scarce and collecting it without proper safeguards could increase the risk of it being weaponized against the very people we want to protect.
Environmental considerations require attention as AI systems are energy-intensive. Subscribing to general-purpose AI services could significantly increase the humanitarian sector’s environmental footprint precisely when climate change drives many of the crises we respond to.
Emerging research also raises concerns about AI’s impact on complex thinking and problem-solving skills, a risk particularly relevant in a sector that depends on contextual judgment, cultural sensitivity and adaptive thinking.
Finally, the widespread availability of AI tools that can generate “deepfakes” is introducing new vulnerabilities that humanitarian organizations must be prepared to address. Given how fast technology is evolving, it is becoming increasingly difficult to distinguish authentic content from synthetic creations. In crisis settings where accurate information is critical for effective response, organizations may find themselves contending with AI-generated misinformation, fabricated reports, or deepfake videos designed to mislead both responders and affected populations. Real-world weaponization has already occurred, with deepfake videos in conflict situations spreading misinformation at scale.
Building AI-Ready Organizations
The challenges outlined above might suggest a cautious approach to AI adoption. Yet constructive engagement with these technologies requires us to see technology not as separate from our core humanitarian work but as an essential tool that can help us better uphold our principles of humanity, neutrality, impartiality and independence.
Creating responsible AI approaches starts with ethical frameworks that match the realities of our work. These must move beyond aspirational principles to practical guidelines that field teams can apply in time-sensitive situations. The ICRC’s Policy on Artificial Intelligence and NetHope’s Humanitarian AI Code of Conduct provide valuable starting points, but our ethical considerations must continuously evolve as technology and contexts change.
Field-centered approaches must guide implementation, ensuring technology serves those closest to affected communities. This requires distributing technological skills closer to affected populations while maintaining shared learning across contexts, with field staff helping define problems worth solving rather than having solutions imposed from headquarters.
Capacity strengthening must extend to local and national actors who might otherwise be excluded from technological advances. Collaborations pairing humanitarian principles with technical skills from various sectors, including local technology communities, offer more promising pathways than traditional provider relationships that fail to build lasting capacity.
The Path Forward: Collaboration and Localization
No single humanitarian actor can navigate these challenges alone. Meaningful AI engagement requires working in consortiums and sharing resources to build ecosystems that can leverage AI effectively. Current competitive approaches lead to duplicated efforts and isolated solutions that fail to address systemic challenges.
The discussions about technology cannot take place without essential conversations about power and representation in humanitarian action. As the sector addresses localization and decolonization, technology choices inevitably intersect with questions about who designs responses, who controls resources and who defines success. How we implement AI tools can either reinforce existing power dynamics or help transform them toward more equitable approaches.
Collective action across traditional boundaries must bring together local and international humanitarian organizations, recognizing our shared responsibility to serve affected populations and our common operational challenges.
Donors have a critical role in supporting foundational elements of capacity strengthening, data infrastructure and interoperability, rather than funding visible “shiny” applications. Sustainable resource models must evolve to support technology that serves humanitarian purposes over time. Current short-term funding approaches rarely accommodate the ongoing investment needed for responsible AI development.
Success requires continuous learning and improvement, adapting our approaches as we understand more about how technology can genuinely advance humanitarian goals. This means building feedback loops that capture lessons from implementation, sharing knowledge across organizations and maintaining the flexibility to adjust course when evidence suggests better approaches. Our commitment to constructive skepticism must include questioning who benefits from technological solutions and whether they genuinely serve humanitarian objectives or primarily satisfy donor preferences for innovation.
Conclusion: Technology in Service of Humanity
The question is not whether technology will affect humanitarian action, but whether humanitarian values will guide how that technology is developed and deployed. The choice to engage thoughtfully with AI tools may ultimately determine whether they strengthen or undermine our ability to uphold humanitarian principles and effectively serve communities affected by crises.
The technology exists. The need is urgent. What is required now is the commitment to meaningful engagement: investing in robust data infrastructure, building collaborative ecosystems that can leverage AI effectively and ensuring that technological advancement serves rather than supplants human judgment and local knowledge.
As we navigate this technological transition, our success will be measured not by the sophistication of our tools but by their impact on the lives of the people we serve. This requires building systems that communicate with each other, working in consortiums that share resources and learning and maintaining constructive skepticism that ensures innovation genuinely advances humanitarian objectives rather than simply following technological trends.

Zineb Bhaby is the AI Lead at Norwegian Refugee Council, harnessing technologies to serve displaced populations. She has a decade of experience applying data science to humanitarian challenges.