Secondment: Miguel Vazquez

| Institution of origin | Barcelona Supercomputing Center |
|---|---|
| Host institution | University of Basel — Institute for Biomedical Ethics (IBMB Unibas) |
| Initial objective | Co‑design and prototype an AI‑assisted ethical evaluation system to support the AHEAD Observatory |

During the summer secondment at IBMB Unibas, Miguel Vazquez worked with host ethicists and the AHEAD consortium to design, implement and validate a configurable ethics evaluation system. The work focused on making ethical frameworks operational: enabling expert curation of the knowledge base the agents consult, and producing explainable, reproducible evaluations for healthcare AI use cases.
Key achievements
- Developed a prototype evaluation pipeline that runs agent-driven assessments for use cases across 10 ethical frameworks.
- Implemented a versioned corpora system so domain experts can assemble, edit and manage framework‑specific resources.
- Introduced prompt and agent versioning, enabling experts to customise agent behaviour and re-run assessments with alternate prompt configurations.
- Implemented support for multiple inference endpoints (OpenAI, Anthropic, Ollama, vLLM) so that model performance can be compared and the system tailored to smaller models (see the endpoint sketch after this list).
- Built an agent orchestration layer (LLM agents) that ingests use case descriptions, consults the selected framework corpus, performs structured evaluations, and emits explainable assessment outputs with run metadata (a rough sketch of this flow follows this list).
- Exposed functionality via a lightweight web interface and job API for creating and configuring evaluation jobs, inspecting runs and outputs, and editing documents and prompts.
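The prototype's source lives in the Rbbt-Workflows/Ethics repository linked under the deliverables below; the sketch here is not that code but a minimal Python illustration, with hypothetical class and function names, of the job flow the bullets describe: a job bundles a use case, a framework corpus version, a prompt version, and an inference endpoint, and each run records the metadata needed to reproduce it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Hypothetical names; the real system is the Rbbt Ethics workflow.

@dataclass
class EvaluationJob:
    use_case: str        # free-text description of the healthcare AI use case
    framework: str       # one of the 10 configured ethical frameworks
    corpus_version: str  # version tag of the curated framework corpus
    prompt_version: str  # version tag of the agent prompt set
    endpoint: str        # which inference endpoint to call

@dataclass
class EvaluationRun:
    job: EvaluationJob
    assessment: str      # structured, explainable output produced by the agent
    started_at: str
    metadata: dict = field(default_factory=dict)

def run_evaluation(job: EvaluationJob,
                   load_corpus: Callable[[str, str], str],
                   load_prompt: Callable[[str, str], str],
                   complete: Callable[[str, str], str]) -> EvaluationRun:
    """Consult the framework corpus, build the agent prompt, call the model,
    and return the assessment together with reproducibility metadata."""
    corpus = load_corpus(job.framework, job.corpus_version)
    prompt = load_prompt(job.framework, job.prompt_version)
    question = (
        f"{prompt}\n\n"
        f"Framework material:\n{corpus}\n\n"
        f"Use case to evaluate:\n{job.use_case}\n\n"
        "Return a structured assessment with an explanation for each point."
    )
    assessment = complete(job.endpoint, question)
    return EvaluationRun(
        job=job,
        assessment=assessment,
        started_at=datetime.now(timezone.utc).isoformat(),
        metadata={
            "framework": job.framework,
            "corpus_version": job.corpus_version,
            "prompt_version": job.prompt_version,
            "endpoint": job.endpoint,
        },
    )
```

Because the corpus version, prompt version and endpoint are all part of the job definition, an expert can edit a corpus or prompt and re-run the same use case to see how the assessment changes.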
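For the multi-endpoint support, one workable approach (not necessarily the one used in the prototype) relies on the fact that Ollama and vLLM both expose OpenAI-compatible HTTP APIs, so a single client can be pointed at any of them by switching the base URL. A hedged sketch, with registry entries and model names as illustrative placeholders:

```python
from openai import OpenAI  # pip install openai

# Hypothetical endpoint registry; base URLs are typical local defaults and
# may differ per deployment. A real OpenAI key is needed for the hosted API.
ENDPOINTS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3.1"},
    "vllm":   {"base_url": "http://localhost:8000/v1",  "model": "mistralai/Mistral-7B-Instruct-v0.3"},
}

def complete(endpoint: str, prompt: str, api_key: str = "not-needed-for-local") -> str:
    """Send the same prompt to whichever OpenAI-compatible endpoint is selected,
    so small local models can be compared against hosted ones."""
    cfg = ENDPOINTS[endpoint]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Anthropic's API is not OpenAI-compatible, so in a fuller version it would sit behind the same complete() signature through its own SDK; the point of the uniform interface is that the agents stay unchanged while endpoints and models are swapped for comparison.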
Main outcomes / deliverables
- Functional prototype of the Ethical Evaluation System (code + documentation), including:
  - Evaluation job and run management,
  - Framework corpora and document versioning,
  - Prompt versioning and editable agent prompts,
  - Explainable outputs for each evaluation run.
- A methodological blueprint explaining how to translate ethical frameworks into machine‑operable corpora and prompts for agent-based evaluation (a small illustrative example closes this report).
- Case studies and example runs demonstrating evaluations across the 10 frameworks and illustrating how expert edits to corpora/prompts affect outputs.
- Recommendations for integrating the prototype into the AHEAD Observatory (deployment options, curator workflows, validation pathways).
- Source code available at https://github.com/Rbbt-Workflows/Ethics.
- The web interface is currently available for internal use only.
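As a purely illustrative example of what the methodological blueprint covers (the format shown here is an assumption, not the prototype's actual schema), a curated framework document and its paired agent prompt might look like this:

```python
# Illustrative only: a versioned corpus document and agent prompt expressed as
# plain Python structures. The prototype's real storage format may differ.

corpus_document = {
    "framework": "Example beneficence framework",   # hypothetical framework name
    "document_id": "principle-nonmaleficence",
    "version": "v2",  # bumped whenever a curator edits the text
    "text": (
        "AI systems in healthcare should not expose patients to avoidable harm. "
        "Evaluators should check for known failure modes, monitoring plans, and "
        "escalation paths to a human clinician."
    ),
    "curator_notes": "Tightened wording on monitoring after expert review.",
}

agent_prompt = {
    "prompt_id": "structured-assessment",
    "version": "v3",
    "template": (
        "You are an ethics evaluation agent. Using only the framework material "
        "provided, assess the use case point by point. For each point, state the "
        "relevant framework passage, your judgement, and a short justification."
    ),
}
```

Because both the document and the prompt carry explicit version tags, each run can state exactly which curated text and which instructions produced a given assessment, which is what makes the evaluations reproducible and lets experts see how their edits to corpora and prompts change the outputs.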

