The Global Research Partnerships Alliance is a coalition of Swiss institutions strengthening collaborations with partners around the world to advance science for sustainable development.

Image: NASA

AI in global partnerships – how best to use AI systems

Artificial intelligence (AI) systems are increasingly shaping academic activities, and global research partnerships are no exception. How can you and your partners apply AI systems successfully? Here you will find resources, good practices, and case studies to help you navigate common challenges and make the most of AI systems when collaborating globally.

AI systems are reshaping research activities across all disciplines. When used responsibly, they can accelerate and broaden academic work: they support faster analysis of large datasets, improve access to literature and knowledge, and make collaborative work easier. AI systems are also significantly changing academic partnerships across geographical, cultural, and institutional boundaries. To make these partnerships more efficient, equitable, and impactful, our shared challenge is to address the pitfalls linked to the design and use of AI systems.

The European Union’s AI regulation defines an AI system as a machine-based system that generates outputs from the input it receives; outputs include predictions, recommendations, or content. Crucially, AI systems are broader than generative AI systems such as ChatGPT: they also include ranking systems, data-analysis tools, and predictive-analytics tools, and they often reshape the way we work and interact.

In research collaborations across all disciplines, AI systems typically play two roles. First, they are part of the researcher’s toolbox. Second, they can constitute the output of a research project: the research team designs and builds a system meant to be used by others beyond the project itself.

The use and development of AI systems often succeed or fail on basics:

  • Poorly digitized local knowledge: local practices and data may not be digitized, making them difficult to integrate into data-driven processes.
  • Administrative capacity: the ability to regulate consistently and to enforce those rules.
  • Dependence on private tech providers: AI systems are often controlled by private companies, which can exert pressure on developers and users.
  • Communication infrastructure: available, reliable networks with sustained bandwidth, especially outside capitals and major urban centers.
  • Electricity: stable power to run devices and servers.
  • User devices: sufficient access to suitable laptops and phones.
  • Computing access: affordable GPUs and cloud credits for training and deployment.

Good practices: plan budgets and timelines for electric power, devices, shared computing, and digitization; design for offline or low-bandwidth use from day one (see the sketch below); train people to identify ethical risks and to comply with applicable regulations.
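The "offline or low-bandwidth" advice can be made concrete with a small pattern: keep a local copy of everything fetched over the network, and serve that copy when the connection drops. Below is a minimal Python sketch under stated assumptions (the fetch() helper and the cache.db file are illustrative, not from the project):

```python
import sqlite3
from urllib.request import urlopen

# Illustrative local cache; "cache.db" is a placeholder filename.
DB = sqlite3.connect("cache.db")
DB.execute("CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body TEXT)")

def fetch(url: str, timeout: float = 5.0) -> str:
    """Try the network first; fall back to the last cached copy when offline."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8")
        # Refresh the cache whenever the network is available.
        DB.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (url, body))
        DB.commit()
        return body
    except OSError:  # covers URLError, timeouts, and other network failures
        row = DB.execute("SELECT body FROM cache WHERE url = ?", (url,)).fetchone()
        if row is None:
            raise  # never fetched and nothing cached: nothing to serve
        return row[0]  # possibly stale, but usable offline
```

The design choice is to treat the network as optional: one successful fetch makes every later lookup work offline, which also reduces costs on slow or metered connections.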

AI projects inherit all the potential pitfalls of digital projects while adding challenges of their own. Without careful observation and testing, they become costly, fragile, and prone to mismatch with real users and contexts.

  • High failure risk: poor fit with users, or solutions that only work in idealized settings.
  • Low resilience: when data, policies, or environments change, performance degrades.
  • Costly to plan: hidden work in data preparation, participation, and monitoring.
  • Age poorly: technologies evolve, so retraining and maintenance are unavoidable (see the sketch after this list).
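One way to make this maintenance burden visible is to monitor for data drift, that is, to check whether the data a deployed system sees still resembles the data it was trained on. The sketch below uses the population stability index as one common drift measure; the bin count and the 0.2 threshold are rules of thumb, not values from the article:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of a numeric feature; larger values mean more drift."""
    # Bin edges are fixed from the reference (training-time) sample.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)
    # Turn counts into proportions; clip to avoid log(0).
    exp_frac = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    obs_frac = np.clip(obs_counts / obs_counts.sum(), 1e-6, None)
    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))

# Synthetic example: field data has shifted relative to training data.
rng = np.random.default_rng(0)
train_sample = rng.normal(0.0, 1.0, 5_000)   # data the model was trained on
field_sample = rng.normal(0.4, 1.2, 5_000)   # data arriving after deployment
psi = population_stability_index(train_sample, field_sample)
# Common rule of thumb: PSI above ~0.2 suggests retraining is worth investigating.
print(f"PSI = {psi:.3f} -> {'investigate retraining' if psi > 0.2 else 'looks stable'}")
```

When the score crosses the threshold, that is a signal to budget for retraining, exactly the kind of hidden, recurring work the list above warns about.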

Here is a list of don’ts when using AI systems in research partnerships:

  • Tech for tech’s sake - no clear objective or integration into a broader vision
  • “Baby panda” projects - cute, but not scalable or sustainable
  • One-way transfer - importing “Northern” solutions to different contexts, reinforcing epistemic domination and colonial legacies
  • Context insensitivity - ignoring local context, undervaluing partners’ knowledge and expertise
  • Deficit assumptions about the partner - fewer resources ≠ less knowledge
  • Savior mindset - partners are not “beneficiaries” to be saved, but equal co-creators
  • Parachute collaborations - avoid short-term drop-ins; invest time in genuine, long-term relationships
  • End-only sharing - don’t wait until the end to share results; capacity-building and knowledge exchange should happen throughout

This project was led by Johan Rochel and Jean-Daniel Strub (ethix - Lab for innovation ethics) in collaboration with the GRP Alliance. Many thanks to those who took part in an interview: Yael Borofsky, Zenebe Uraguchi, Bublu Thakur-Weigold, David Svarin, Frederick Bruneault, Jan Göpel, Kebene Wodajo, Katharina Frei, Solomzi Makohliso. Many thanks also to the participants of the “AI in Global Research Partnerships” Conference held in Bern in June 2025 and organized by the GRP Alliance.