Is Artificial Intelligence fair? Addressing bias in algorithms

by Ayomide Owoyemi, clinician and digital health expert

2022-03-18

AI is now integral to our lives, often in ways we do not even notice. AI systems decide which products we see, what news and stories we get, whether we can access credit, the quality of our care and even the health outcomes we can expect. There are instances of AI systems causing harm, or delivering limited value, to people, especially minorities and marginalized groups. For example, a study of a UnitedHealth Group algorithm in the US showed that it identified roughly half as many Black patients as needing further care as it should have, compared with equally sick white patients (Obermeyer et al., 2019).

Fairness is essential to ensuring that AI systems deliver value for everyone without leaving anyone behind or causing harm.

The present challenges to inclusiveness in AI systems include biased data, reflecting the historical, measurement and aggregation biases inherent in the data used. Another challenge is inadequate data diversity, which stems from data collection methods and socio-economic barriers; women in developing countries, for example, have lower access to mobile phones, so data collected through phones underrepresents them. There are also challenges in development and deployment, including inadequate diversity among the people involved in processes such as labelling, engineering, and validation. Other failures include not considering where an algorithm will be deployed and used, so that the target population and real-world settings are not well accounted for. The opacity of these systems is a further obstacle to inclusivity.
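One way to make the deployment-mismatch problem concrete is a simple representativeness check. The sketch below, in Python, compares the demographic composition of a training dataset against the population where a model is to be deployed and flags underrepresented groups; the group labels, proportions, and threshold are all hypothetical, not drawn from any real system.

```python
# Minimal sketch of a representativeness check. All group labels,
# proportions, and the threshold below are hypothetical.

# Share of each group in the training data (e.g., from dataset metadata).
training_share = {"group_a": 0.72, "group_b": 0.18, "group_c": 0.10}

# Share of each group in the deployment population (e.g., from census data).
deployment_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

# Flag groups whose training share falls well below their deployment share.
UNDERREPRESENTATION_RATIO = 0.8  # arbitrary illustrative cut-off

for group, deployed in deployment_share.items():
    trained = training_share.get(group, 0.0)
    ratio = trained / deployed
    if ratio < UNDERREPRESENTATION_RATIO:
        print(f"{group}: {trained:.0%} of training data vs {deployed:.0%} "
              f"of deployment population (ratio {ratio:.2f}): underrepresented")
```

A check like this is only a first step, but it makes the question of whether a population was accounted for answerable before deployment rather than after harm occurs.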

Solving or mitigating these challenges requires concerted effort. Inclusive approaches need to ensure that designs enshrine values that address moral and socially equitable ideals. One example is design processes that consider all stakeholders across the algorithm's life cycle, from funders to developers to end users. Another useful approach is intentional data creation and aggregation to make up for the lack of diversity and the shortcomings of existing data. Improved governance is also essential, as it ensures that systems are held accountable, audits are carried out, and explainability is provided.
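To illustrate what an audit of this kind might measure, the sketch below computes a per-group false negative rate: how often patients who truly needed further care were not flagged by a model. This mirrors the kind of disparity reported by Obermeyer et al. (2019); the records and group labels are hypothetical, and a real audit would use actual model outputs and outcome data.

```python
from collections import defaultdict

# Minimal audit sketch: compare false negative rates across groups,
# i.e., how often truly high-need patients are NOT flagged for extra care.
# Each hypothetical record is (group, needed_care, was_flagged).
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

missed = defaultdict(int)  # needed care but was not flagged
needed = defaultdict(int)  # needed care in total

for group, needed_care, was_flagged in records:
    if needed_care:
        needed[group] += 1
        if not was_flagged:
            missed[group] += 1

for group in sorted(needed):
    print(f"{group}: false negative rate {missed[group] / needed[group]:.0%}")
```

A persistent gap between groups in a metric like this is exactly the kind of signal that accountability and audit processes are meant to surface.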

“Open-source approaches could help to reduce biases and increase the inclusivity of AI solutions.”

Open-source approaches combine the benefits of all of the solutions listed above: data creation, aggregation, algorithm development and governance. Open-source software (OSS) has greater potential to be inclusive and diverse because entry barriers are lower or, in some cases, non-existent. OSS is free to use and to change without restriction, so it can also help solve the problem of access: its outputs can be used and modified by individuals and groups who lack the resources to develop their own systems from scratch. In essence, OSS helps speed up adoption, reduces bias and increases inclusivity, creates room for more ethical applications, and is easily auditable. Another significant upside is that OSS can help produce better and more useful tools, as different communities actively compete to build tools and systems that can be applied across different settings and by different groups. OSS thus creates an incentive for more inclusive and diverse approaches.

To realize the full potential of AI systems in improving living conditions for everyone, we need to ensure that fairness practices are enshrined in every phase of their development. This will minimize harm, ensure inclusion and improve trust.

Bibliography

Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science. https://www.science.org/doi/10.1126/science.aax2342

How open-source software shapes AI policy. Brookings. https://www.brookings.edu/research/how-open-source-software-shapes-ai-policy/

External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Internal Medicine. https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2781307

About this opinion piece

This opinion piece is based on the discussions during the Gemlabs special edition Online Talk on “AI and Health. How to break the bias before it’s too late?”, which took place on March 9, 2022.

Ayomide Owoyemi is a clinician and digital health expert. He is also a PhD candidate in health informatics at the University of Illinois Chicago, focusing on AI/ML applications in healthcare. He has experience working as a clinician and public health physician in Nigeria and has built digital health products for Nigeria and other African countries. Ayomide led the team that built the first COVID-19 triage tool in Africa. He is a contributor to a Springer textbook on Artificial Intelligence in Medicine.
