Humanitarian Digital Panorama: A Roadmap beyond 2022

Author: Giulio Coppi
Date: 11 April 2022

Navigating localisation, no-code tech, AI and the end of ‘humtech’

When NetHope, a consortium of international non-profits, ran a survey of its members four years ago, it showed that most were “tech-enabled” but far from reaching a digital transformation stage. Four years and some overall positive advances later (1), the non-profit sector is still trailing behind most others when it comes to digital transformation performance.

Unsurprisingly, most of this is linked to significant investment gaps in research and development (R&D) and innovation. Very few donors fund R&D activities, and the biggest one supporting R&D averaged no more than $8.2 million annually in total. However, a 2015 study by Deloitte highlighted that the humanitarian sector should invest around $75 million in R&D annually just to match the lowest investment average in the private sector.

Even the public sector, often criticised for its Leviathan-like slowness in evolving and innovating, has shown laudable capacity to accelerate its digital transformation journey. Faced first with the global fight against misinformation and then with the impact of the Covid-19 pandemic, local and national authorities demonstrated a strong capacity to rally, pivot, and rebound.

But that’s not all. On some topics, such as open public health data, the public sector has in a few cases leapfrogged from an “on demand, when available” model to almost daily updates with a notable level of granularity. A few countries today even make their data accessible through application programming interfaces (APIs), allowing any third party to pull their data in real time.
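
As a rough illustration of what such API access makes possible, the sketch below pulls a batch of records from a hypothetical open-data endpoint and keeps only the fields a dashboard would actually use. The URL and field names are placeholders, not a reference to any specific national portal.

```python
# Minimal sketch: pulling open public-health data from a hypothetical API.
# The endpoint URL and field names are illustrative placeholders only.
import json
from urllib.request import urlopen

API_URL = "https://example.gov/api/v1/covid19/daily"  # hypothetical endpoint


def fetch_daily_cases(region: str) -> list[dict]:
    """Request the latest daily records for a region and parse the JSON payload."""
    with urlopen(f"{API_URL}?region={region}") as response:
        payload = json.load(response)
    # Keep only the fields a downstream dashboard or model actually needs.
    return [
        {"date": row["date"], "new_cases": row["new_cases"]}
        for row in payload.get("records", [])
    ]


if __name__ == "__main__":
    for record in fetch_daily_cases("north")[:7]:
        print(record["date"], record["new_cases"])
```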

Unfortunately, the same cannot be said for the aid sector. Despite years of appeals to improve data sharing, coordination and transparency among aid actors, the State of Humanitarian Data 2022 by the Centre for Humanitarian Data (HDX), run by OCHA, estimated that only “69 percent or over two-thirds of relevant, complete crisis data is available across 27 humanitarian operations.” The result testifies to a positive trend and bears witness to the hard work done by HDX to improve data standards and sharing practices. That said, it is hard not to notice the missed opportunity for a sector that could not come together and adopt a fully integrated, data-driven, and transparent approach even when faced with the most aggressive threat to global operational continuity in decades.

More generally, development and humanitarian actors have decided to play it safe, preferring incremental, mission-driven approaches over investing in less certain adaptive and anticipatory strategies. This has also translated into a clear preference for top-down approaches led by traditional actors, further marginalising Global South organisations despite the clear potential shown by locally and nationally led innovation efforts, especially in low- and middle-income countries.

The Flavour of the Year

As humanitarian organisations struggled to maintain operational readiness in the pandemic mayhem, tech companies leaped in to propose an updated version of the broken promise of the “big data revolution” that was first whispered almost a decade ago. Heralding the downfall of direct human interaction and pitching the merciful boons of digital automation, artificial intelligence (AI) advocates have swept the aid sector across all horizontals and verticals.

Unlike other hyped technologies such as blockchain and drones, AI has actually managed to build and sustain momentum over the past five years as early projects grew to maturity. Hala Systems, created in 2015, today offers a range of services such as Sentry, a well-recognised early warning system that uses a multi-sensor network and AI to provide instantaneous situational awareness of threats in war zones. On the opposite side of the organisational spectrum, project ARiN is an AI system helping UNHCR pre-screen applications submitted to its talent pools, with the peculiar objective of removing bias from the selection and proactively factoring in ethical policy components such as gender and diversity considerations.

The growing number of documented use cases warranted a special place for AI in the UN Recommendations of the High-Level Panel on Digital Cooperation. The promise of bias-free, untiring, evidence-based processes proved appealing to both organisations and their funders, and the new technology showed enormous potential for collaboration with like-minded organisations. The coming year is hopefully the time to deliver on the hype and prove that AI deserves to become the latest standard tool in any humanitarian stack rather than just another fad.

But not everyone sees the AI success scenario as a positive outcome. For some, a ‘humanitarian AI’ might eventually succeed in keeping in check the looming risks and harms that arise not just from malicious uses but also from unintended consequences of legitimate applications. For others, even in that scenario, this technology could still fail the humanitarian ethics test. Existing AI-powered chatbots, for example, have been flagged as reproducing the same colonial power dynamics as the analogue system. In 2022, ‘AI for good’ needs to prove wrong those who define it as an ‘enchantment of technology’ where the only magic happening in the black box is the occultation of the unfair power dynamics already at play.

This is far from being the only obstacle facing AI systems in aid. Even if humanitarians managed to identify an ethically and morally sound way to run AI systems in humanitarian action, they would still have to solve the data quantity/quality gap, as massive datasets are constantly needed to ensure training and accuracy. Humanitarians cannot just buy data dumps on a marketplace as if they were start-ups or researchers. Humanitarian data collection, cleaning, and classification work is both costly and time-consuming, as it has to happen at often unpredictable times and locations, frequently through manual procedures carried out by non-specialised staff. As it happens, money and time are two of the top scarcities in the midst of a humanitarian crisis. The third is trust, which is hard to gain but very easy to lose when data breaches, leaks or abuses expose vulnerable people’s data to potential threats.
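
To make that cost concrete, here is a minimal sketch of what even a first automated cleaning pass over manually collected survey rows involves; the column names and the single validation rule are invented for illustration, and real datasets require far more than this.

```python
# Minimal sketch: a first cleaning pass over manually collected survey rows.
# Column names and the single validation rule are invented for illustration.
import csv
from io import StringIO

RAW = """household_id,family_size,water_source
HH-001,5,borehole
HH-002,,well
HH-003,four,borehole
"""


def clean_rows(raw_csv: str):
    """Split rows into usable records and records needing manual follow-up."""
    valid, needs_review = [], []
    for row in csv.DictReader(StringIO(raw_csv)):
        try:
            row["family_size"] = int(row["family_size"])
            valid.append(row)
        except ValueError:
            needs_review.append(row)  # e.g. blank fields or spelled-out numbers
    return valid, needs_review


valid, needs_review = clean_rows(RAW)
print(len(valid), "usable rows;", len(needs_review), "sent back for manual review")
```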

A Moment of Reckoning

A recent cyber-attack targeting servers hosting humanitarian data exposed information belonging to 515,000 people. Not just ordinary people, but 515,000 among the most vulnerable and at risk in the worst humanitarian crises of the world. Not just any servers, but those owned and controlled by the International Committee of the Red Cross. Not just a random digital crime against hosting services but, in the words of the ICRC, “a targeted, direct cyber-attack on ICRC servers, not the company that hosted them.”

This episode was yet another stark reality check on the myth of the “digital humanitarian”: some digital skills and good intentions are not enough in today’s world. Bringing digital into the humanitarian mix comes with responsibilities that extend to embracing the awareness of one’s own vulnerability. If the ICRC servers can be breached, any other humanitarian server can be. As stated by a former aid expert turned information security consultant, “The biggest frontier in the humanitarian sector is the weaponization of humanitarian data.”

Despite not being completely unexpected, the ICRC hack is likely to be remembered as the wake-up call and loss of innocence of a whole sector. Since 18 January 2022, no humanitarian actor handling data can honestly claim not to have imagined that something like this might happen.

Most influential donors spoke out and condemned the hack, with US State Department Spokesman Ned Price calling on other countries to “join the ICRC in raising the alarm about this breach”. In all this noise, the silence from the humanitarian world is deafening. Granted, as Michael Igoe noted, “for a humanitarian organization that adheres to principles of neutrality and impartiality in order to work in contested places, the middle of an international cybercrime dispute is not a good place to be.”

But there might be more to it. As with the 2020 hack of a fundraising database linked to dozens of NGOs, the lesson is the same: “If you know about it, it means you have to do something about it”. That ‘something’ might be unclear to some and too expensive for others. It is not unrealistic to imagine that, at this stage, many organisations are on the fence, torn between fatalism and voluntarism.

Regardless, the aftermath of the ICRC hack is likely to bring increased scrutiny of the ‘back office’ functions of humanitarian ICT systems, and possibly enhanced attention to the adoption of and adherence to cybersecurity rules and best practices. While this is a welcome, long-overdue development, potentially conducive to new partnership models between humanitarians and cybersecurity experts, it also comes with challenges.

Firstly, a drizzle of cybersecurity awareness courses for staff, two-factor authentication for company systems, and a couple of revisions of consent form templates won’t cut it. The aid sector needs to be deliberate in considering data about and from affected persons as a protected asset and treat it as such at all stages. In most cases, this might require a complete overhaul of existing tools, policies (where they exist) and processes. In short, a new digital governance model.
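
What treating data as a protected asset at all stages can mean in practice is sketched below: direct identifiers are replaced with a keyed hash before a record is ever stored or shared. The field names and salt handling are assumptions for illustration; a real deployment would sit inside a proper key-management and data-governance policy.

```python
# Minimal sketch: pseudonymising direct identifiers before records leave the
# data-collection step. Field names and salt handling are illustrative only;
# real deployments need proper key management and a governance policy.
import hashlib
import hmac

SECRET_SALT = b"keep-this-secret-and-outside-the-dataset"  # assumed placeholder


def pseudonymise(value: str) -> str:
    """Return a keyed hash so the raw identifier never travels with the record."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()


raw_record = {
    "name": "Amina Example",
    "phone": "+000 0000 0000",
    "shelter_type": "tent",
}

safe_record = {
    "person_ref": pseudonymise(raw_record["name"] + raw_record["phone"]),
    "shelter_type": raw_record["shelter_type"],  # operational field kept in clear
}
print(safe_record)
```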

Secondly, unfair funding requirements might worsen the inequality in access to digital between INGOs and local organisations. This is the most likely outcome if funding opportunities don’t evolve to actively support local and smaller organisations’ access to cybersecurity systems and policies, and instead blindly impose additional digital security requirements on all grantees. As a matter of fact, cybersecurity is often dumped at the bottom of the list when NGOs are called to invest more in life-saving response. It seems that, under current conditions, the only concrete option for most humanitarian organisations is to reduce the amount of data gathered and processed and improve policies, while relying on trusted third parties for cybersecurity.

Resizing Ambitions

Still recovering from the hubris of the digital humanitarian theories and the Big Data frenzy, most experts are finally comfortable saying in public that “less is more” might actually be the way forward. Simon Johnson, a data scientist with the British Red Cross and the Humanitarian Data Exchange, eloquently framed the concept of minimum viable data as far back as 2018. His empirical estimate suggested that only around 20 to 30 percent of the data collected was actually used in the end.
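
The same minimum viable data logic can be made operational with something as simple as a field-usage audit: compare what is collected against what dashboards and reports actually consume. The sketch below uses invented field names purely to illustrate the arithmetic behind an estimate like Johnson’s.

```python
# Minimal sketch: estimating how much of the collected data is actually used.
# Field names are invented for illustration only.
collected_fields = {
    "household_id", "gps_lat", "gps_lon", "head_of_household_name",
    "phone_number", "family_size", "shelter_type", "water_source",
    "income_band", "ethnicity",
}

# Fields actually referenced by dashboards, reports, or decision tools.
used_fields = {"household_id", "family_size", "shelter_type", "water_source"}

usage_ratio = len(used_fields & collected_fields) / len(collected_fields)
unused = sorted(collected_fields - used_fields)

print(f"Share of collected fields actually used: {usage_ratio:.0%}")
print("Candidates to stop collecting:", ", ".join(unused))
```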

Organisations will soon face a moral quagmire, called to comply with a set of irreconcilable accountability frameworks. On one side stand donors, with their obnoxious long-term requirements for archiving sensitive data regardless of its usefulness; on the other, GDPR-like regulations requiring that any storage of sensitive data rest on a legal basis, including at the very least a justifiable legitimate or vital interest.

The argument for collecting the least amount of data needed to make an effective decision has also spun off a new corollary: building the least amount of software needed to perform a function. Less than a year ago, Andrej Verity from OCHA stressed that core requirements in humanitarian response and coordination, namely flexibility, customisability and adaptability, are nowhere to be seen in traditional legacy humanitarian systems. Clunky, slow, expensive, and ineffective, legacy systems “plague international organizations that have invested time and money in building and maintaining them”.

In his paper, Verity highlights how the current trend is pivoting towards no-code (or at least low-code) platforms. While recognising that the phenomenon bears some marketing trademarks of a temporary fad, he identifies a few case studies with extremely positive outcomes. In particular, early reports recorded lower costs, a minimal learning curve, and ease of creation as some of the benefits.

No-code systems are also credited with some potential to strengthen the tech democratisation and localisation agendas by reducing some of the barriers local NGOs face in accessing technology. However, this argument seems to overlook more structural impediments to localisation, for example those related to resource hoarding by INGOs. All things considered, the lack of coding skills or budget for licences at the local level is a comparatively small obstacle and – alone – won’t be enough to change the paradigm.

A Roadmap Beyond 2022

As often happens, humanitarians are caught in a Catch-22. While controlling less data and developing fewer software systems might mitigate the risk of data exposure and promote more deliberate uses of technology, it might also discourage investment in transformative or scale-up oriented solutions. Ironically, piecemeal approaches and gaps in evidence-based decision-making were among the factors that triggered the Big Data craze around a decade ago.

A roadmap for 2022 and beyond needs to start by getting the basics right. The main challenge facing humanitarians is how to apply the lessons learned without going full circle or perpetuating technocolonialist behaviours.

As often happens, the only way to get unstuck from a Catch-22 is to recognise that rolling back decisions and resetting some structures is not only feasible but the only possible way out. Of course, this means no longer falling into the sunk cost fallacy, and recognising that the post-World Humanitarian Summit process failed to challenge power imbalances before launching ambitious plans to fix structural problems.

Just as no power shall be applied to people without them having a say in how that power is shaped and formalised, no tech shall be applied to communities without them being at the table where that tech is imagined, designed, and implemented. International humanitarian actors have an opportunity to become power brokers, using their influence and access privileges to enhance and open up opportunities for local actors to lead AI-related discussions on policy, ethics, design, and sustainability.

A false dichotomy has led many observers to point at local NGOs and dismiss them as incapable of seriously engaging with emerging technologies. As a matter of fact, both local and international NGOs are ill-equipped to engage with emerging technologies alone, as this falls outside their mandate. Fulfilling the localisation agenda thus requires questioning the traditional humanitarian roleplay by opening up to new actors such as research centres, civil society, and private or social impact entities. There is no excuse not to develop digital solutions locally, by funnelling resources to the active R&D ecosystem closest to the implementation area.

To do this, we might need to reject the whole concept of ‘humanitarian technology’ altogether. The idea that a special breed of technology is needed only where armed actors threaten civilians leaves aid actors cornered into choosing between unsustainable in-house development and wobbly reverse engineering and customisation of existing tools.

More importantly, this concept builds on the erasure of the discrimination, abuse, and violence experienced daily by minorities, vulnerable groups, and people with protection needs even in peacetime. Humanitarian organisations have a moral obligation to break away from this fictitious mould and promote the ‘do no (digital) harm’ principle as the ethical baseline for digital development, taught in computer science courses and implemented as part of quality assurance in digital product testing. Just as you don’t commercially distribute a new car model unless you reasonably expect it to be safe even in case of an accident, if you can’t deploy an AI system in wartime and trust it to be safe, you shouldn’t run it in peacetime either.

To do this, there is no need for new global partnerships or initiatives. We just need to read and listen to those in the Science, Technology, and Society (STS) field who have been paving the way for all of us for decades. “Seek and learn to recognize who and what, in the midst of the inferno, are not inferno, then make them endure, give them space.” This should be our shared roadmap for an ethical and effective use of technology in the aid sector.

Giulio Coppi is the Global Digital Specialist for Programmes at the Norwegian Refugee Council (NRC) and an IARAN Fellow focusing on Humanitarian Future and Foresight. He publishes on emerging technologies and humanitarianism.
