About ahead

Committed to standardizing guidelines for the ethical use of AI

Combining expertise for a bioethical perspective on the applications of AI in healthcare and biomedicine.

The objective of ahead is to create a transdisciplinary, diverse, and global community drawing on the expertise of professionals in biomedicine, ethics, AI development, software engineering, sociology, psychology, law, and gender studies, as well as relevant stakeholders, dedicated to the ongoing effort of tackling challenges and setting important standards in an ever-changing landscape.

We will create a methodology and a platform to evaluate AI in healthcare, making sure it meets legal, ethical, and technical standards. We will integrate various knowledge areas into an ethics approach that includes participatory methods and bioethical analysis. This will help combine different priorities and perspectives when assessing new AI applications in healthcare and biomedicine.


Our objectives

1. To build the ahead Transdisciplinary Community for AI in Biomedicine and Healthcare through a dedicated training programme.

2. To establish the ahead Observatory to continuously track the latest developments in AI within biomedicine and healthcare.

3. To build a Methodological Approach for the Transdisciplinary Evaluation of AI applications in biomedicine and healthcare.

4. To extend the existing testing and monitoring open-source platform OpenEBench (OEB) and customise it for transdisciplinary assessment of AI applications in biomedicine.


Our methodology

Leveraging the collective expertise of the community, this project aims to pioneer a comprehensive methodology for assessing AI-based systems that addresses all aspects of AI development, performance, and impact.

In alignment with the priorities outlined by the AI Act and the EU ALTAI principles, we propose the following outline for an assessment methodology, with three levels of evaluation, each with its own evaluation principles:

TECHNICAL LEVEL

The technical level assesses how well the AI model carries out its intended functions, ensuring reliable and consistent execution. This includes assessing model performance, using accuracy and other traditional metrics, as well as the computational workload, looking at execution speed, computational resources, and energy consumption.
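As a purely illustrative sketch of this kind of technical-level check (not the ahead or OpenEBench pipeline, and using a made-up toy model and dataset), the following Python example collects accuracy and F1 alongside a simple execution-time measurement; resource and energy profiling would require dedicated tooling and is only noted in a comment.

```python
# Illustrative sketch only: technical-level metrics for a hypothetical classifier,
# combining predictive performance with a simple workload measurement.
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a biomedical AI model and its evaluation data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model performance: accuracy and other traditional metrics, measured on held-out data.
start = time.perf_counter()
y_pred = model.predict(X_test)
elapsed = time.perf_counter() - start

print(f"accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(f"F1 score: {f1_score(y_test, y_pred):.3f}")

# Computational workload: execution speed; energy consumption and resource use
# would need hardware-level monitoring and are omitted from this sketch.
print(f"inference time for {len(X_test)} samples: {elapsed * 1000:.1f} ms")
```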

HUMAN INTERACTION LEVEL

The human interaction level pertains to assessing the functioning and practical use of the model in its intended settings. This includes assessing the presence of biases by examining accuracy in relation to patient demographics such as sex, gender, race, ethnicity, socio-economic status, and age. This level will also assess the usability of a model, ensuring that it is easy to use and understand in the real-world settings it is intended for.
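To make the bias check concrete, the short sketch below stratifies accuracy by one demographic attribute; the column names and values are assumptions for illustration, not part of the ahead methodology, and a real assessment would cover more attributes and add statistical and qualitative follow-up.

```python
# Illustrative sketch only: accuracy stratified by patient subgroup to surface
# possible performance gaps (one hypothetical attribute, made-up data).
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical evaluation results: one row per patient, with the model's
# prediction, the ground truth, and a demographic attribute of interest.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "sex":    ["F", "F", "F", "M", "M", "M", "F", "M"],
})

# Accuracy per subgroup; a large gap between groups flags potential bias.
per_group = results.groupby("sex").apply(
    lambda g: accuracy_score(g["y_true"], g["y_pred"])
)
print(per_group)
print("max gap between groups:", per_group.max() - per_group.min())
```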

SOCIETAL LEVEL

The societal level aims to assess the impact that the use of AI-based systems will have on society. In the health and biomedical field, this begins with looking at how an AI system will affect professional and patient relationships and experiences. In the professional domain, this includes examining the potential impact and possible integration issues of an AI-based system within biomedical professions, institutions, and industry. For patients, this includes the impact on patient experience and satisfaction. Finally, this level includes assessing the environmental impact of an AI-based system.
