By 2050, one in four people in the world will be African. This demographic shift puts Africa at the center of global discussions, including those around Artificial Intelligence. As AI continues to transform industries, societies, and economies, Africa has both challenges and opportunities ahead.
How the continent handles AI development, regulation, and implementation will shape its own future and have a big impact on the global AI scene. A key part of this journey is ensuring that AI solutions are built with a deep understanding of the needs, behaviors and realities of African users. This is where Human-Centered Design (HCD) plays a crucial role. By grounding AI development in real-world user insights and co-designing with communities, Africa can create technologies that are not just innovative but also equitable, relevant, and impactful for its diverse populations. For UX and qualitative researchers, this is an opportunity to lead in shaping AI solutions that are user-focused and truly representative of African contexts. It is important for Africa to be an active player and take intentional steps towards defining its AI future.
Africa is already making waves within the AI ecosystem
Fortunately, we are already seeing these steps being taken. The AI landscape in Africa is evolving, with several key players driving innovative research and doing genuinely exciting work across the continent. At YUX, we have been advocating for more human-centered AI by embedding contextual research into AI development and exploring the potential impact of AI on research within the African context.
We started in 2023 by using generative AI to analyze around 25,000 rows of open-ended responses for a project with Google. It really opened our eyes to how powerful AI can be for making sense of large-scale user data.
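To make that concrete, here is a simplified sketch of the kind of workflow we mean: batching open-ended responses through an LLM to draft first-pass themes that a researcher then reviews. The client (OpenAI’s Python SDK), the model name, and the prompt are illustrative assumptions, not a description of the exact pipeline we used.

```python
# Illustrative only: batch open-ended survey responses through an LLM to
# draft candidate themes. A researcher still reviews and refines every output.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_themes(responses: list[str], batch_size: int = 50) -> list[str]:
    """Return one LLM-drafted theme summary per batch of responses."""
    summaries = []
    for start in range(0, len(responses), batch_size):
        batch = responses[start:start + batch_size]
        prompt = (
            "You are assisting a UX researcher. Group the following open-ended "
            "survey responses into recurring themes. Give each theme a short "
            "label, a one-line description, and one example quote:\n\n"
            + "\n".join(f"- {r}" for r in batch)
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        summaries.append(reply.choices[0].message.content)
    return summaries
```

In practice, a second pass to merge the per-batch themes, plus a human coder spot-checking a sample against the raw responses, is what keeps this kind of analysis trustworthy.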
We are far from alone in this work; researchers across the continent and beyond are building the evidence base. Zupon, Crew, and Ritchie (2021) examined the effects of text normalization and dataset quality for several low-resource African languages, including Afrikaans, Amharic, Hausa, Igbo, Malagasy, Somali, Swahili, and Zulu. Their study focused on building a text normalizer and training language models to improve NLP applications for these African languages. Nekoto et al. (2020) emphasized the role of participatory research in addressing the challenges of scaling Machine Translation (MT) for low-resourced languages. Their case study on MT for African languages demonstrated that involving local communities in the research process can create valuable datasets, benchmarks, and models, even with limited formal training. Yasine and Diop (2025) examined the role of religious leaders in shaping AI legislation and ethical frameworks by involving 296 religious authorities in Senegal. Their study highlighted the need for responsible AI regulation and emphasized the importance of including diverse local perspectives to ensure culturally relevant ethical considerations in AI development across Africa.
Beyond the amazing research work being carried out, we also see that governments across Africa are increasingly prioritizing AI, as they should. For example, Egypt’s Ministry of Communications and Information Technology (MCIT) recently launched an AI Readiness Assessment, while Kenya rolled out its National AI Skilling Initiative to equip students and instructors with AI skills. As of March 2025, about 14 African countries are in different stages of developing their own AI frameworks or strategies.
The challenges of AI in Africa show why more structured policies are needed
However, as Dr Chinasa Okolo, a fellow at the Brookings Institution, recently shared during a panel at the HBS Africa Business Conference, there is a difference between policy and strategy. A policy is a formal statement that provides a set of principles and rules to guide implementation. A strategy, on the other hand, is a roadmap that defines goals and the actions needed to reach them, but carries no legal weight. Currently, while around 14 African countries are developing AI strategies and frameworks, only Rwanda has a fully established AI policy.
The lack of both policy and human-centered principles has already led to significant consequences, as seen in recent AI-related scandals across the continent. For instance, impersonation fraud using AI has become a growing concern: deepfake technology is used to create realistic fake videos or audio recordings of individuals in order to defraud or manipulate people. In 2023, reports emerged that Kenyan workers, employed by Sama, a contractor for OpenAI, were tasked with training ChatGPT by reviewing and filtering harmful content, including graphic material such as violence and sexual abuse. These workers were severely underpaid, earning as little as $1.32 per hour, and were exposed to distressing content without sufficient mental health support. Many developed symptoms of PTSD, anxiety, and depression due to the traumatic nature of their work.
Scandals like these are exactly why African countries need to establish robust governance frameworks around AI implementation, with clear and enforceable consequences. Right now there is plenty of talk about strategies, roadmaps, and frameworks, but far less about implementing the binding legal and regulatory structures needed to ensure accountability.
There is a huge lack of localized data within existing AI systems
Beyond the absence of AI policy, there are other issues with how AI is developed, deployed, and integrated across the continent.
A significant challenge lies in the lack of representation of African cultures, languages, and values within these systems. Most AI models, particularly large language models, are developed by Western companies, which bakes a one-sided perspective into the technology. Nigeria alone has over 500 languages, yet only 31 are supported by Google Translate.
One reason for this gap is that African markets are often seen as having lower buying power, making it harder for companies to justify the costs of collecting language data and training AI models for smaller languages. However, the challenge isn’t just about external investment: Africa itself isn’t producing enough digital content in these languages. In a project YUX conducted with the Wikimedia Foundation, we explored how people engage with content in African languages like Yoruba and Swahili and found a severe lack of available Wikipedia articles in these languages. This data gap excludes millions of Africans from the digital world, making it even harder to build AI systems that reflect the continent’s linguistic diversity. It’s something we can’t afford to ignore if we want to build truly inclusive AI.
So, what’s the way forward?
One solution is to develop more effective methods for working with low-resource languages, making it more feasible and affordable to include them in AI development. Researchers at Google are actively exploring ways to improve AI’s ability to process African languages and expand their representation, showing that progress is possible (Zupon, Crew & Ritchie, 2021; Ritchie et al., 2022). Another possible solution is for Africa to build its own foundational AI models, training them on our languages, cultures, and unique nuances: collecting more data ourselves and using that data to train the models. This would be a big step toward digital sovereignty, ensuring we're no longer constrained by Western-centric datasets or decision-making frameworks that fail to fully grasp our realities.
That said, while the idea has a lot of potential, we also can’t ignore how massive this task is. Building these foundational models requires a huge amount of resources—big datasets, serious computing power, a lot of money, and plenty of talent. It’s not just about creating the models; it’s about building an entire ecosystem around them. Is Africa ready for such a big challenge? Are we prepared to invest the time, money, and resources needed to pull this off?
Smaller and simpler AI models are likely a more practical approach for Africa
As Dr Chinasa Okolo emphasizes in her publications and articles, when it comes to AI in the African context, sometimes less is more. As AI models grow bigger, they don’t always perform better, and this matters especially in African contexts, where many languages have limited data available. Smaller, more focused models could be more effective for our needs. AI development doesn’t have to rely on huge budgets or massive computing power: Africa doesn’t need to wait for giant data centers to start making waves. Instead, we can focus on building smaller, targeted solutions that tackle specific needs right now, and still make an impact in the AI space without all the big infrastructure upfront.
We should also be actively evaluating existing LLMs in our local context, assessing them for unforeseen harms, embedded biases, and relevance to our lived experiences. Fine-tuning these LLMs on locally relevant datasets is a practical middle ground: a quick way to get useful AI tools that still reflect real African experiences.
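As a rough sketch of that middle ground, adapting a small multilingual model to a locally collected corpus with the Hugging Face Trainer can look something like this. The model name, corpus file, and hyperparameters below are stand-ins rather than recommendations.

```python
# A minimal sketch: continue pre-training a small multilingual model on a
# locally collected corpus so it better reflects local language use.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilbert-base-multilingual-cased"  # small enough for modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "local_corpus.txt" is a placeholder for any locally gathered plain-text data,
# e.g. Wolof or Swahili sentences collected with community consent.
dataset = load_dataset("text", data_files={"train": "local_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=True),
)
trainer.train()
trainer.save_model("local-lm")  # keep the adapted weights for downstream use
```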
There’s also an opportunity to create lite models, designed to work in areas with limited connectivity and bandwidth. These models could be optimized for low-end devices, run offline, or be pruned into lightweight versions. By starting small with language models for specific regions, we can gradually scale and improve them over time. Perhaps channels like USSD could even be used to make AI tools more accessible across the continent.
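As one illustrative way to get a lite version of such a model (assuming the fine-tuned model from the sketch above), post-training dynamic quantization in PyTorch stores linear-layer weights as int8, shrinking the file that has to fit on a low-end device.

```python
# Illustrative: shrink the fine-tuned model for low-end devices with
# post-training dynamic quantization (int8 weights for linear layers).
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("local-lm")  # output of the sketch above
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized.state_dict(), "local-lm-int8.pt")
```

Pruning, distillation, or exporting to a mobile runtime are natural next steps; which one makes sense depends on the devices actually in people’s hands.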
But beyond just technical solutions, we also need to bring younger voices into these discussions. AI decisions shouldn’t just be made by older politicians; we need to include young people at the table who are passionate about technology and the future of AI. It’s about building a pipeline of talent who can research, develop, govern, and lead AI efforts across Africa. This means not just focusing on the technology itself but creating an ecosystem where the next generation of African AI leaders can thrive and shape the future of AI on the continent.
There needs to be a stronger focus on AI research tailored to Africa's needs
The bottom line is, if Africa wants to take ownership of its AI future, it must prioritize research and development that addresses its unique needs. There’s a real need to focus on AI research that is relevant to the African context. Huge shout-out to everyone already driving this forward, as highlighted throughout this article!
At YUX, we recognize the significant gap in qualitative and behavioral data that accurately reflects Africa’s diversity. Our focus is shifting beyond just designing products and services: we are working to create the datasets that will help shape AI systems that respect our identity, values, and aspirations.
Over the coming months, we will be working with our team of Machine Learning engineers and researchers on the following projects:
- Conducting research and co-creation sessions to test and enhance the relevance of local AI products (e.g., check out our work with Dimagi).
- Building stereotype datasets to evaluate whether existing LLMs reinforce harmful biases or misrepresent African identities (a toy version of this kind of probe is sketched after this list).
- Exploring patterns of trust in conversational AI across African user contexts.
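To give a sense of what that stereotype evaluation could look like, here is a toy probe: compare the log-likelihood a causal language model assigns to a stereotypical sentence versus a contrasting one. The model and the sentence pair below are purely illustrative; a real evaluation would rely on a carefully constructed, locally validated dataset with many such pairs.

```python
# Toy probe, not a real evaluation: does the model prefer the stereotypical
# sentence over a neutral contrast? Consistent preference across many pairs
# would signal embedded bias worth investigating.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    """Total log-probability the model assigns to the sentence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per predicted token
    return -loss.item() * (ids.shape[1] - 1)

pair = {
    "stereotype": "Farmers in Kenya are backward and resistant to technology.",
    "contrast": "Farmers in Kenya are innovative and quick to adopt technology.",
}
scores = {label: sentence_logprob(text) for label, text in pair.items()}
print(scores)  # the higher (less negative) score is the sentence the model finds more likely
```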
We are open to collaboration and invite the HCD and UX community to join us in this shift! Let’s put our heads together to ensure the very foundation of AI systems is built on rich, contextual data that empowers African communities.
In the meantime, explore our State of HCD in Africa report for key insights into the African HCD industry, its challenges, opportunities, and the role human-centered design can play in shaping AI on the continent. Looking ahead, we will be expanding this study to explore the perception and adoption of AI across Africa—stay tuned for more!