Using AI systems in global partnership – a project-cycle approach
A well-thought-out, shared project cycle helps partners align early, make technical decisions together, and plan for change throughout the collaboration. The boxes below focus on projects that use AI as a tool. For projects where the AI system is an output, see the section at the end.
Make major decisions, especially technical and design choices, after critical discussion with all partners. Co-design is costly, but investing in relationships and understanding partner contexts greatly improves a project’s chance of success.
Define the following key elements in a Memorandum of Understanding at the earliest possible point:
- Collaboratively define methods, required infrastructure, expected outputs, and the long-term vision. Proactively address knowledge management and intellectual property issues.
- Enable all partners to understand AI-related challenges. Ensure that everyone has access to high-quality information on the tools used, including their limitations and weaknesses. Create space for critical questions and concerns, and mitigate power imbalances linked to technical expertise. Maintain open channels to address expectations and concerns throughout the project.
- Ensure that each partner contributes to the technical work. Strive for co-funding (even if minimal) to strengthen structural participation.
- Clarify the role of internal oversight bodies (e.g., ethics committees or IP offices).
- Scrutinize the role of private companies and assess the costs of cooperation with tech providers. Consider whether public providers could offer computing power or resources.
Capacity building should take place throughout the project, not only at the end. Every partner should contribute to helping the others learn from and within the project.
- Proactively create opportunities for partners to exchange experiences with AI; reduce dependency on private providers through design choices (e.g., architecture, user interface, business models).
- Promote open datasets and, where possible, make available the parameters used to design the AI systems.
- Share methodological lessons and insights on developing/implementing AI systems in low-resource contexts.
Using AI as part of a research toolbox requires discipline-specific good practices.
- Make sure all partners have a good understanding of the weaknesses and limitations of the AI systems used.
- Ensure scientific integrity and transparency according to your discipline’s standards. Support less-resourced partners in reaching comparable quality; secure ethical review on all sides where available.
- Respect domestic regulations (especially on data and AI) and institution-specific rules across the entire data lifecycle. Measure and minimize the environmental impacts of AI system use.
AI reshapes relations within teams and institutions. Project teams have a responsibility to identify and support important changes directly connected to their research project, including:
- Examining how AI-driven methods reshape imbalances in access to and production of knowledge (including publication practices).
- Assessing impacts on funding practices in/for partner institutions.
Think as early as possible about when and how the project ends and what comes after.
- Secure financial and human resources to provide support for as long as possible. Decide how the cooperation should evolve post-grant.
- Assign responsibilities to identify and evaluate mid- and long-term changes induced by the project.
Is the goal of your project to design and create an AI system? Use the following points in addition to the boxes above:
- Identify all intended users (intermediary and final). Engage directly and concretely with them. Reflect on your own assumptions and blind spots, and confront them with real-world (not idealized) situations.
- Work with local partners and civil society to identify their needs, vulnerabilities, and constraints.
- Assess the project’s social, economic, and cultural acceptability with end users and affected groups. Define success in terms of local usefulness and effectiveness.
- Beware of “better-than-nothing” dynamics: people may accept poorly performing AI systems if no alternatives exist. In such cases, consent alone is not sufficient to justify use.
- Strengthen local sovereignty at individual, institutional, and systemic levels so partners/users can adapt, maintain, and govern the system after the project ends.
- Ensure sustainable integration into user environments (including behavioral incentives, financial support, and business models).
- Set up long-term support and secure funding for the post-project phase (technical updates, human support, communities of practice).
- The project team takes on responsibility for the system’s quality. To secure accountability, clarify who identifies, prevents, and mitigates which risks.
- Identify surveillance uses of the AI system, especially when powerful/governmental actors are involved. Consider possible regime change and its impact on the use case.
- Anticipate discriminatory effects from the users’ perspectives; ensure safety against errors, degradation, or misuse.
- Protect traditional local knowledge from being ignored (not considered by the AI system), but also from being used without consent by research teams and/or private companies.
- Anticipate new behaviors enabled by the tool, including effects on the paid work individuals do and on the consumption of scarce resources.
- Identify the potential loss of professional competencies among users (deskilling) and address it with local partners.