The Equiano Project is a new AI research lab based in Africa. Its mission is to cultivate technical expertise to advance AI safety research, study the economic impacts of AI, and develop policy frameworks. The lab will focus on AI alignment research, specifically in the context of low-resource natural language processing (NLP), policy, and economics.
website: https://www.equiano.institute
The Equiano Project has the following goals:
To train and cultivate AI Safety scholars in Africa
To develop indigenous interpretable models and policy frameworks, and to highlight the economic impacts of AI in Africa
To expand access to and collaboration on AI Safety Research in emerging markets, ensuring that diverse perspectives and voices are included in shaping AI policies and practices
To advance mechanistic interpretability techniques and evaluation specifically tailored to African AI models and data sets, enhancing transparency and accountability in AI systems deployed in the region
To empower Equiano Scholars with the skills and knowledge necessary for technical alignment and policy work, enabling them to contribute effectively to AI governance efforts
To investigate the potential productivity gains, automation opportunities, and economic impacts that AI can bring to various sectors in Africa, guiding policymakers and stakeholders in leveraging AI for sustainable development
Train and cultivate AI Safety scholars in Africa: The lab will offer a variety of training programs and research opportunities for African scholars. These programs will cover the fundamentals of AI safety, as well as the latest research in the field. The lab will also provide mentorship and support to help scholars develop their careers in AI safety.
Develop indigenous interpretable models, policy frameworks, and highlight the economic impacts of AI in Africa: The lab will develop AI models tailored to the needs of Africa. These models will be interpretable, meaning their behaviour can be understood by humans. The lab will also develop policy frameworks that promote the responsible development and use of AI in Africa, and conduct research on the economic impacts of AI on the continent.
Expand access to and collaboration on AI Safety Research in emerging markets: The lab will partner with local and international research institutions, universities, governments, and industry stakeholders. These partnerships will help to ensure that the lab's research is relevant to the needs of Africa and that it has a real impact on the continent. The lab will also host workshops and conferences to promote collaboration in the field of AI safety.
Advance mechanistic interpretability techniques and evaluation specifically tailored to African AI models and data sets, enhancing transparency and accountability in AI systems deployed in the region: The lab will develop new techniques for evaluating the interpretability of AI models. These techniques will be tailored to the needs of Africa, where data sets are often small and noisy.
Empower Equiano Scholars with the skills and knowledge necessary for technical alignment and policy work, enabling them to contribute effectively to AI governance efforts: The lab will provide training and mentorship to Equiano Scholars to help them develop the skills and knowledge necessary for technical alignment and policy work. The lab will also provide opportunities for Equiano Scholars to get involved in AI governance efforts.
Investigate the potential productivity gains, automation opportunities, and economic impacts that AI can bring to various sectors in Africa, guiding policymakers and stakeholders in leveraging AI for sustainable development: The lab will conduct research on the potential economic impacts of AI in Africa. This research will help policymakers and stakeholders to understand how AI can be used to promote sustainable development in Africa.
The funding for the Equiano Project will be used to cover the following costs:
Salaries for lab staff
Research expenses, such as publishing and data costs
Equipment and software
Conferences and workshops
Outreach and dissemination
Advisors
Tyna Eloundou: Tyna is a researcher at OpenAI and a member of the 2020 cohort of OpenAI research scholars. She has published research on undesired content detection in the real world and on the labor market impact potential of large language models. She has also worked on model safety and misuse, and on the systemic risks and economic impacts of AI, among other topics.
Cecil Abungu: Cecil conducts research on AI risk with a special focus on issues faced by the Global South. He works with the AI:FAR team on projects related to how AI could lead to extreme inequality and power concentration. To further his research and build his knowledge in longtermism and AI risk, Cecil received support from Open Philanthropy's early career funding for individuals interested in improving the long-term future.
Team
Joel Christoph: Joel is a Ph.D. Researcher in Economics at the European University Institute (EUI), with a complementary background in policy and political science from Tsinghua University and the Carnegie Endowment, and a former Research Fellow at Oxford's FHI. His extensive international exposure and experience in leadership roles, such as Vice-Curator of the World Economic Forum (WEF) Global Shapers, further underscore his ability to navigate complex policy landscapes and drive strategic economic initiatives in diverse contexts.
Jonas Kgomo: Jonas holds a BSc in Mathematics (Istanbul University) and an MSc in Computer Science (Sussex University), and has experience working at early-stage software companies. He was part of the Entrepreneur First 2022 cohort and previously launched a Progress Studies overlay journal focusing on the progress we make as a civilisation in technology, science and policy.
Claude Formaken: Claude is a Research Engineer at InstaDeep Ltd, an AI company, while pursuing a PhD in multi-agent reinforcement learning at the University of Cape Town, South Africa. Claude is both excited and concerned by the transformative impact advanced Artificial Intelligence (AI) will have on Africa and the world at large, and he works on AI Safety initiatives in Africa.
JJ Balisanyuka-Smith: JJ graduated with Honors in Cognitive Science and Math from Swarthmore College, PA. His research interests focus on Machine Learning Theory, AI Safety, and AI Compression. He worked on early research at Cohere AI. He is an alumnus of the Sutton Trust/Fulbright US program and regularly volunteers with the program.
We are a responsible innovation lab: we design frameworks that ensure humans are placed first in the design process of our R&D solutions. We think that, in very unlikely cases, this work could be misused: methods for identifying which individuals are discriminated against by technology could be exploited by malicious actors seeking to increase disparities.
We are currently not funded.