About KITALA
Kitala.ai, powered by YUX Design, is a data annotation and AI evaluation platform dedicated to grounding large language models in local realities. By leveraging a network of annotators and community leaders across 22 African countries, it captures the linguistic nuances and social norms that standard crowdsourcing often misses. Kitala’s services include Stereotype & Bias Evaluation, Live LLM Testing, and the creation of Culturally Grounded Q&A datasets for high-stakes sectors like maternal health and inclusive finance. Ultimately, Kitala serves as an essential safety layer for AI deployment in Africa, ensuring models are accurate, safe, and truly reflective of the communities they serve.
What are we looking for?
We are seeking a highly organized and detail-oriented Quality Assurance Lead. You will be the operational engine behind our LLM testing projects. Your job is to manage our team of Quality Assurors (QAs), ensuring that every conversation between a participant and an AI is evaluated accurately, on time, and to the highest standards.
Your missions
QA Team Management: Act as the primary lead for the QA team, assigning daily tasks and conversation batches to specific QAs based on language and project requirements.
Workflow Coordination: Ensure all QA evaluations (ratings, summaries, and feedback) are completed within the project deadlines.
Rejection & Redo Oversight: Monitor sessions rejected by QAs. You will coordinate with the participant management team to ensure participants redo their sessions correctly and oversee the second round of QA for those corrected sessions.
Quality Calibration: Conduct spot-checks on the QAs' work to ensure consistent grading standards across the team.
Reporting: Maintain a clear dashboard of progress, flagging any bottlenecks in the evaluation process to the Project Manager.
Feedback Integration: Summarize common errors found by QAs to help improve the instructions and scenarios given to participants.
Qualifications
At least 3 years of experience in operations, quality control, or project coordination, ideally within the tech, data, or research sectors.
A strong understanding of how LLM conversations work and an eye for spotting biases, hallucinations, or culturally irrelevant responses.
You must be able to monitor hundreds of sessions and multiple QAs simultaneously without losing track of details.
Excellent written and spoken English. You need to be able to give clear, constructive feedback to the QA team.
Comfortable working with CRM tools, Google Sheets, and specialized annotation or testing platforms.
Prior experience as a researcher or designer is highly valued to understand the nuance of human-AI interaction.
Other stuff
Contract: Full-time
Women candidates are strongly encouraged to apply
Total work flexibility and autonomy - we don't care where you are, as long as you meet with your colleagues when needed, are super efficient, and grow your leadership within the company.
We offer competitive salaries plus great, non-material benefits - read more about it here. However, we're still a self-funded African SME, so don't apply if you're still somewhat attracted by big corporate or NGO money, cars and suits!
Are you interested?
Please send your CV and a few words of motivation (not a formal letter!) by clicking the Apply now button below. Good luck!