Elizabeth

June 12, 2025

Let’s Talk About How People Use AI

In 2023, we worked on a project with Google that prompted our team to actively consider AI as a tool for addressing some of our research challenges. This was a year after the launch of ChatGPT, amid the cautious excitement about the possibilities the tool had created. The broader cultural narrative surrounding AI at the time centred on its role as a tool for productivity and its potential implications for the future of work and employment. Fast-forward to today, a little less than three years since OpenAI launched its general-purpose LLM, and the cultural narrative around AI has shifted. As generative AI tools become more capable and diverse in their functionality, the caution about what this means for the future of work has given way to broader scepticism about its impact on our lives, as more people attempt to make sense of this technology.

The unending stream of company press releases detailing AI-first strategies is a glaring example of how global organisations are changing their strategies and structures. The shift has also been apparent in our conversations with clients and colleagues over the past year. As an organisation concerned with how technology shapes the lives and futures of people in Africa, and with how African culture and realities can be centred in the development of technology, we’re often working with technology companies and within the constraints of technological products. Generative AI presents a complex reality: depending on who you ask, the tools are exciting, concerning, restructuring the world, or a mix of all of that and more. We recognise that generative AI is becoming an integral part of our reality, and the pertinent question for us is “what does this mean for Africans and our futures?”


Mixed Signals

In April, Harvard Business Review published an essay on how people are using AI in 2025, a follow-up to a similar article they published in 2024. The piece had a lot of interesting insights, but the following findings stood out because they spoke to signals we’ve been noticing:

  • The number one use case for generative AI in 2025 is therapy/companionship,
  • Generating ideas, the number one use case in 2024, moved five places down, and
  • Organising my life, which placed second on the list, is a new use case that only appeared this year.

In under three years of public adoption, these tools have progressed from being perceived merely as an opportunity to “supercharge” productivity to occupying a place in people’s personal lives, mental health and daily routines. An often-cited barrier to mental health care globally is accessibility, largely due to cost. According to the World Health Organisation, African governments allocate less than 50 US cents per capita to mental health, well below the US$2 recommendation for low-income countries. Given the macroeconomic state of countries on the continent, doctors are also emigrating for better employment opportunities, resulting in a shortage of medical professionals. Lagos, Africa’s most populous city, for instance, has a shortage of 33,000 doctors. It won’t be surprising if African governments, in attempting to address the continent’s healthcare challenges, prioritise physical over mental health due to resource constraints. Young Africans navigating a health system where mental health care is barely accessible and often expensive are turning to ChatGPT as a placeholder for friendship and therapy. It isn’t uncommon to see videos on TikTok of young people across Africa sharing how ChatGPT has taken on the role of friend and therapist for them. Finding a therapist is challenging enough; finding a culturally competent one can be a daunting task. The general-purpose LLMs freely available to the public are overwhelmingly trained on data from people outside the continent. Beyond the questions of attachment and safety these tools pose for this use case, there is an added layer of concern about the contextual relevance of the advice they give on sensitive topics.

On the other hand, people interact with a vast amount of information online, and for those who maintain an active online presence, it can be challenging to keep up and make sense of it all. In March this year, X (formerly Twitter) added a feature that allows users to tag Grok to explain Tweets. The nature of the Tweets people have asked Grok to clarify has been particularly interesting: the questions range from the light-hearted and mundane to the political and macroeconomic to the socio-cultural and philosophical. If the user behaviour on display on X is anything to go by, people are willing to ask AI questions about just about anything. But how much does it know about these people, their cultures and their contexts to give a well-considered answer?


Thinning Trust

The examples I’ve shared so far are visible to us because users are either interacting with generative AI in public or sharing their experiences with these tools publicly. What happens when people use AI to inform personal decisions they consider too sensitive to share? If the information is harmful, how do you trace the harm, and who bears the consequences?

In an article on AI Mental Models and Trust: The Promises and Perils of Interaction Design, Soojin Jeong and Anoop Sinha argue that three major factors influence how people develop their mental model of AI: cultural narratives, prior technology experiences and social cues. Zooming in on prior technology experiences, the four technologies shaping people’s mental models of AI are search engines, non-LLM chatbots, voice assistants and recommender systems. In the examples we’ve discussed so far, Grok as a contextualiser and LLMs as a placeholder for therapy and friendship, people are drawing on their mental models of how search engines and voice assistants work, respectively.

While people transfer existing models of use from one technology to another, their limited understanding of how generative AI works can make trust seem like an abstract concept. Jeong and Sinha argue that while expectation management and predictability have traditionally been framed in HCI as the mechanisms by which users develop trust in a system, predictability alone is insufficient for AI. The scope of the expectations, and how to manage them, isn’t entirely clear. With AI, bias is interwoven into the design, and these tools aren’t yet capable of articulating how much they may be failing users.

Let's consider a few examples of how using generative AI can be consequential for users, and the questions these raise about navigating trust in these systems.

  • Treating work done with the support of AI with suspicion: Last year, Paul Graham, investor and writer, shared a Tweet expressing concern that the word “delve” is a sign that ChatGPT wrote a text. There was pushback from people in countries where “delve” is a regular feature of everyday English. English is spoken globally; however, word choice and syntax can be geographically distinct. As more people have adopted AI to boost their productivity, some users now treat particular words (e.g. “delve”) and stylistic tics (e.g. overuse of em dashes and rhetorical devices) as markers of text written by AI or with its support. This can be consequential for non-native English speakers using these tools to refine their writing to sound professional, as well as for individuals from cultures where those words and constructions are considered standard. After all, many of the general-purpose LLMs out there primarily interact in American English.
  • Implications of AI for shaping the narrative about a people: The South African president and courts have consistently dismissed claims of a white genocide happening in the country, much to Elon Musk’s disapproval. Last month, Grok (X’s AI assistant) went on a tirade about white genocide in response to a question about the accuracy of a baseball fact. The xAI team attributed this to an “unauthorised modification” and promised to put future system changes through review. This raises many questions: Who are the individuals responsible for reviewing future system changes? How expansive is their understanding of the world? And if modifications to a tool that positions itself as a source of answers for the world were left unsupervised, can we trust the team behind it?

These examples illustrate that, in the absence of general-purpose LLMs designed to accommodate these nuances and possibilities, a de facto design, however crude, has taken shape. AI doesn’t suffer the consequences of its actions and inactions; people do. So how do we ensure that these systems have a decent understanding of people, given that they’re already shaping how people and their work are perceived?

Kitala Cultural AI Lab

At the start of this year, YUX launched a lab: the Kitala Cultural AI Lab. Many of the questions I’ve shared so far underpin its establishment. How do we design for both application and implication? The aim of our work at YUX is to ensure that African ideas, cultures, and realities are reflected in the design of digital products, and this spirit has always shaped our interactions with our clients. With the lab, we’re working to address the critical need for culturally relevant AI in Africa. Leveraging our network of design researchers across 10+ countries, alongside developers, our initial projects focus on three areas: dataset curation to benchmark and improve existing general-purpose LLMs, designing AI-powered tools that solve research problems in our context, and conducting behavioural research on the use of AI across the continent. The following projects are currently in progress at the lab:

  • A cultural stereotype evaluation dataset for Nigeria, Kenya and Senegal, using a quant-qual data collection approach to build a critical dataset of cultural stereotypes for evaluating and improving cultural adequacy and harm prevention in large language models,
  • A speech recognition tool to support the transcription and translation of interviews conducted in local languages, leveraging existing open-source models for low-resource languages and fine-tuning them to be useful for our research in the health sector. The current languages in focus are Wolof, Hausa and Yoruba (see the sketch after this list),
  • An in-depth qualitative study to understand the nuanced interplay of religion, mental health, and AI in Ghana and Senegal, with a survey accommodating Nigerian and Kenyan realities, and
  • A qualitative and quantitative study in Nigeria, Ghana, Kenya and Senegal exploring what generative AI means for people’s day-to-day lives and their mental models around trusting these tools.
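To make the speech-recognition project above more concrete, here is a minimal sketch of the kind of pipeline such a tool could be built on, using an open-source multilingual checkpoint via Hugging Face transformers. The model name, audio file name and language settings are illustrative assumptions for this sketch, not the lab’s actual stack or fine-tuning setup.

```python
# Minimal sketch: transcribing an interview recording with an open-source
# multilingual ASR model. The checkpoint (openai/whisper-small), file name
# and language choice are illustrative assumptions, not the lab's setup.
from transformers import pipeline

# Load a pretrained speech-recognition pipeline. In practice, a checkpoint
# fine-tuned on Wolof, Hausa or Yoruba field recordings would replace this.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,  # split long interviews into 30-second chunks
)

# Transcribe a local-language interview; generate_kwargs steers the model's
# language and task (transcribe, or translate into English).
result = asr(
    "interview_hausa_001.wav",  # hypothetical recording
    generate_kwargs={"language": "hausa", "task": "transcribe"},
)
print(result["text"])
```

In practice, the harder work sits around this snippet: collecting and cleaning field recordings, fine-tuning for the accents and code-switching common in interviews, and evaluating transcripts with researchers who speak the languages.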

The goal is to openly share datasets and findings from our studies. As a starting point for our conversation with the public, we have launched a survey to understand AI use in Nigeria, Ghana, Kenya, and Senegal.

UX of AI Study Context

Generative AI might just be the defining digital technology of the 2020s, and it is changing our understanding of what interfaces are. Google has a People + AI Guidebook sharing practical guidance for designing human-centred AI products. Yet we still don’t fully understand how these technologies reason; there is early research from Anthropic and Apple on the question, but it remains unclear why people should trust these systems and interfaces. If builders don’t fully understand what these systems can do, what are we asking of people (or users) when we ask them to actively use these systems or embed them in apps we already use daily?

This survey started as an attempt to explore what trust means to people in their interactions with natural language chat interfaces, and potentially to develop a trust scorecard or benchmark that reflects how people measure trust in these systems. However, we quickly hit a roadblock: trust is highly contextual and can look different from one use case or context to another. We also didn’t know which use case to focus on, so we decided to start with a survey to understand the landscape, determine the primary use cases people use general-purpose generative AI for, and gain a general sense of how they describe their trust level for different use cases. Drawing on lessons from our past Africa-wide studies, we decided to focus on a select group of countries: Ghana, Nigeria, Kenya and Senegal. That way, we can direct our efforts towards getting a sizable number of responses from each of these countries, rather than relying on a single country’s responses to be representative of the entire continent. The survey will run for four weeks, and the report will be out in July 2025. Our hope is to use the survey to prioritise use cases for exploration in our qualitative study on trust in generative AI across the continent, and to share our findings with the general public.
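Purely as an illustration of what a per-use-case trust scorecard could eventually look like once survey responses are in, the snippet below aggregates hypothetical Likert-style trust ratings by use case. The use cases, the 1-5 scale and the sample records are assumptions made for this sketch, not data or instruments from the study.

```python
# Illustrative sketch of a per-use-case trust scorecard. The field names,
# the 1-5 Likert scale and the sample records are hypothetical, not survey data.
from collections import defaultdict

# Each record: (country, use case, self-reported trust on a 1-5 scale).
responses = [
    ("Nigeria", "therapy/companionship", 4),
    ("Ghana", "organising my life", 3),
    ("Kenya", "therapy/companionship", 2),
    ("Senegal", "generating ideas", 5),
]

# Group ratings by use case, then report the mean as a simple trust score.
by_use_case = defaultdict(list)
for _, use_case, rating in responses:
    by_use_case[use_case].append(rating)

for use_case, ratings in sorted(by_use_case.items()):
    score = sum(ratings) / len(ratings)
    print(f"{use_case}: mean trust {score:.2f} (n={len(ratings)})")
```

A real scorecard would need more than a mean, for example segmenting by country and use case, checking sample sizes, and validating the scale itself, which is part of what the qualitative follow-up study is meant to inform.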

If you’ve read up to this point and live in any of the four countries, please fill out the survey. We’re also open to having conversations about this project or any of the other projects we’ve listed. Connect with YUX on LinkedIn or send an email to hello@yux.design.    

References

Jeong, Soojin, and Anoop Sinha. “AI Mental Models and Trust: The Promises and Perils of Interaction Design.” EPIC Proceedings (2024): 12–25. https://www.epicpeople.org/ai-mental-models-and-trust.

Gulati, Siddharth, Sonia Sousa, and David Lamas. “Design, Development and Evaluation of a Human-Computer Trust Scale.” Behaviour & Information Technology (2019). https://doi.org/10.1080/0144929X.2019.1656779.