Devina Parihar

Humanitarian AI and Vulnerable Populations

Artificial Intelligence (AI) is rapidly changing the world, but not always for the better. Vulnerable populations bear the greatest risk of harm and unintended consequences from AI. As AI-driven systems surge worldwide, more and more of them are being developed and used in humanitarian aid and disaster response, which makes it all the more important to understand the implications of AI systems for vulnerable populations.



What is a vulnerable population?


The definition of a “vulnerable population” varies with the context in which it is used. The NCBI takes a broad, research-oriented approach to defining the term:


There are several definitions available for the term “vulnerable population”, the words simply imply the disadvantaged sub-segment of the community[1] requiring utmost care, specific ancillary considerations and augmented protections in research. The vulnerable individuals’ freedom and capability to protect one-self from intended or inherent risks is variably abbreviated, from decreased freewill to inability to make informed choices. Vulnerable communities need assiduous attention during designing studies with unique recruitment considerations and quality scrutiny measurements of overall safety and efficacy strategies ensuing research. Ethical dilemmas are widely prevalent in research involving these populations with regard to communications, data privacy and therapeutic deliberations.

The above definition touches on the ethical issues that commonly arise in research and in product or service development when vulnerable populations are involved. Concrete examples of vulnerable populations include groups that experience disadvantage, discrimination, or suffering related to race, sexual orientation, gender, ethnicity, religion, or mental and physical health.


An AI design framework grounded in human rights should be a necessity for all AI solutions - especially when vulnerable populations are involved. The considerations that go into such a framework vary with the context in which the AI is used. The rest of this article focuses on designing humanitarian AI solutions for vulnerable populations by posing a set of questions. The questions below are based on UN-OCHA’s Artificial Intelligence Principles for Vulnerable Populations - consider giving it a read-through for a more detailed understanding of how these principles were formed.


When designing AI for members of a vulnerable population,


First and foremost, ask yourself...

“Why are we using AI?”

“What are the implications of deploying an AI system in this environment?”

“Would such an AI system exacerbate the risks for vulnerable people?”


Question the purpose of AI within the given context, and have these discussions with your team as well as with the end-users of the system. Understand how the system could affect its users and conduct a thorough risk-benefit analysis. Put simply, AI may not be the best solution when working with vulnerable populations.
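
To make the risk-benefit step concrete, below is a minimal sketch, in Python, of one way a team might record risks and benefits in a structured, reviewable form. The field names and the 1-5 scoring scales are illustrative assumptions on our part, not part of any official framework.

# A minimal, illustrative structure for recording a risk-benefit
# analysis. The fields and 1-5 scales are assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class RiskBenefitEntry:
    description: str
    affected_group: str   # who bears this risk or receives this benefit
    severity: int         # 1 (minor) to 5 (severe), assigned by the team
    likelihood: int       # 1 (rare) to 5 (near-certain), assigned by the team

@dataclass
class RiskBenefitAnalysis:
    purpose: str          # the team's answer to "Why are we using AI?"
    risks: list = field(default_factory=list)
    benefits: list = field(default_factory=list)

    def highest_risks(self, top_n=3):
        # Surface the risks the team should discuss first.
        return sorted(self.risks,
                      key=lambda r: r.severity * r.likelihood,
                      reverse=True)[:top_n]

Keeping the analysis in a shared, versioned artifact like this makes it easier to revisit the “Why are we using AI?” question as the context changes.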


If it has been decided that an AI solution is best, ask yourself...

“Are we introducing initiatives for locals to be involved with AI?”

“Do we have a long-term engagement plan as well as an immediate engagement plan?”


If you are working with a vulnerable population, its members should feel empowered to understand and use the technology, and to voice their feelings and concerns about it.


Building trust between members of a vulnerable population, the design team, and the AI system is crucial. It is good practice to involve users in the design process of any system or service, and this holds especially true for members of a vulnerable population. A common way to increase engagement is to invest in capacity building (i.e., developing competencies and skills in an effective and sustainable manner). Capacity building empowers organizations, communities, and individuals to provide input when an AI system is deployed in their community. To maintain engagement throughout the development process, members of the community and of vulnerable populations should be included in design discussions from the beginning through post-deployment.


Remember - context is crucial, ask yourself...

“Are we designing in a socially aware manner?”

“Have we spent time thoroughly understanding the local culture and climate?”


AI systems, and technological solutions in general, do not work right out of the box. Designing AI goes beyond finding a “solution” - one also needs to understand the social and political climate and norms of the populations the solution involves. Designers who leave the research and ideation phases too quickly often run into the problem of “solutionism”:



Solutionism is dangerous, however, when the need to come up with solutions to help people precedes thorough testing, including analyses of the local intricacies of the situation, which may not require AI systems at all.

Designers can introduce bias into their solution by not taking the time to fully understand the problem and its context - for example, by unintentionally embedding Western norms into a system. A concrete example is X2AI’s psychotherapy chatbot, Karim, whose purpose was to chat with Syrian refugees about mental health. A major issue in developing Karim was the potential introduction of bias: Karim’s developers lived predominantly in the West and held Western perspectives and assumptions about mental health and psychotherapy. Such assumptions can lead to serious issues in testing and deployment if the system’s eventual environment is not fully understood.
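
One concrete way to surface this kind of bias before deployment is a disaggregated evaluation: measuring the system's error rate separately for each user group rather than in aggregate. The sketch below, in Python with made-up group labels and records, illustrates the idea; a real assessment would run over the system's actual evaluation data.

# A minimal sketch of a disaggregated evaluation. Group labels and
# records are illustrative; large gaps between groups are a signal to
# revisit training data and design assumptions before deployment.
from collections import defaultdict

def error_rate_by_group(records):
    # records: iterable of (group, prediction, ground_truth) tuples
    errors, totals = defaultdict(int), defaultdict(int)
    for group, prediction, truth in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(error_rate_by_group([
    ("group_a", "at_risk", "at_risk"),
    ("group_a", "safe", "at_risk"),   # a miss for group_a
    ("group_b", "at_risk", "at_risk"),
]))  # {'group_a': 0.5, 'group_b': 0.0}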


While checking for system gaps, ask yourself...

“Have we conducted an algorithmic impact assessment and a third-party audit?”


A third-party review and audit can bring explainability to what is, more often than not, a black-box system. Assessment and review can also help identify gaps and areas where biases, security risks, and data-quality issues are prevalent. Data quality is crucial in designing an AI system, and collecting quality data becomes even harder when gathering data about vulnerable populations. Datasets for such populations are often inadequate, as certain groups may not produce recorded data to begin with. Such gaps prevent a “complete picture” and introduce further biases, from which the unrecorded groups suffer the most. Security risks of AI systems are harmful to everyone, but especially to members of a vulnerable population: a breach of their data can lead to even greater risks of discrimination and harm. A third-party review can serve as a check on such security risks, which may arise at various stages of the development process, from data collection to storage.
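
As an illustration of the data-gap point above, one simple audit step is to compare each group's share of the dataset against its expected share of the population being served, and flag groups that are missing or badly underrepresented. This is a sketch only: the expected shares and the tolerance threshold below are assumptions a real audit would set with local domain experts.

# A minimal sketch of a data-coverage check. Expected shares and the
# tolerance are illustrative assumptions, not audited values.
from collections import Counter

def representation_gaps(dataset_groups, expected_shares, tolerance=0.5):
    counts = Counter(dataset_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if actual < expected * tolerance:   # well below expected share
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# A group expected at 30% of the population but absent from the data
# is flagged for follow-up collection or re-weighting.
print(representation_gaps(
    ["group_a"] * 70 + ["group_b"] * 30,
    {"group_a": 0.5, "group_b": 0.2, "group_c": 0.3},
))  # {'group_c': {'expected': 0.3, 'actual': 0.0}}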


These are just a few of the many questions to ask yourself when designing an AI solution, especially when working in a humanitarian context with vulnerable populations. To reduce the harm commonly inflicted on vulnerable groups, consider working through this non-exhaustive series of questions with your team and local community members.




