Amyotrophic Lateral Sclerosis (ALS) and Multiple Sclerosis (MS) are chronic diseases characterized by progressive or relapsing impairment of neurological functions (motor, sensory, visual, cognitive). Patients alternate periods in hospital with care at home, experiencing constant uncertainty about the timing of the acute phases of the disease and facing a considerable psychological and economic burden that also involves their caregivers. Clinicians, on the other hand, need tools able to support them in all phases of patient treatment, suggest personalized therapeutic decisions, and indicate urgently needed interventions.

The goal of iDPP@CLEF is to design and develop an evaluation infrastructure for AI algorithms able to:

  • better describe disease mechanisms;
  • stratify patients according to their phenotype, assessed throughout the disease evolution;
  • predict disease progression in a probabilistic, time-dependent fashion.


Overall, iDPP targets two kinds of activities: (a) preliminary and exploratory pilot tasks on disease progression prediction; (b) position papers on the explainability of the prediction algorithms. This mix will give participants the opportunity to gain hands-on experience with these data and to provide feedback on the task design, as well as to brainstorm on how to evaluate this kind of algorithm and, in particular, assess its explainability. This initial iteration focuses on ALS progression prediction; future iterations will extend to MS as well.

In particular, we offer the following activities:

Pilot Task 1 – Ranking Risk of Impairment

This task focuses on ranking patients based on the risk of impairment in specific domains. More specifically, we will use the ALSFRS-R scale to monitor speech, swallowing, handwriting, dressing/hygiene, walking, and respiratory ability over time, and will ask participants to rank patients based on the time-to-event risk of experiencing impairment in each specific domain.
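A ranking of patients by time-to-event risk of this kind is commonly evaluated with Harrell's concordance index (C-index). The following is a minimal sketch of that idea, not the official evaluation protocol of the lab; all data and variable names are illustrative.

```python
# Sketch: Harrell's concordance index for a risk-based patient ranking.
# A pair of patients is "comparable" when one is observed to reach
# impairment strictly before the other's observed time; the pair is
# concordant when the model assigns the earlier-impaired patient a
# higher risk score. Censored patients (event = 0) never anchor a pair.

def concordance_index(times, events, risk_scores):
    """times: observed time to impairment (or censoring) per patient.
    events: 1 if impairment was observed, 0 if censored.
    risk_scores: model output; higher means higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # censored patients cannot anchor a comparison
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

# Toy example: earlier impairment should get a higher predicted risk.
times = [6.0, 12.0, 24.0]   # months to observed impairment/censoring
events = [1, 1, 0]          # third patient censored (no impairment yet)
scores = [0.9, 0.6, 0.1]    # predicted risk per patient
print(concordance_index(times, events, scores))  # perfect ranking -> 1.0
```

A C-index of 0.5 corresponds to random ordering, 1.0 to a perfect ranking.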

Pilot Task 2 – Predicting Time of Impairment

This task refines Task 1 by asking participants to predict when specific impairments will occur (i.e., within the correct time window). In this regard, we assess model calibration, that is, the ability of the proposed algorithms to estimate a probability of an event that is close to the true probability within a specified time window.
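One simple way to inspect calibration of this kind is to bin patients by predicted probability and compare, per bin, the mean predicted probability of impairment within the window against the observed fraction of impaired patients. The sketch below illustrates that comparison; the binning scheme, variable names, and the lack of censoring handling are simplifying assumptions, not the lab's evaluation protocol.

```python
# Sketch: per-bin calibration check for the event
# "impairment occurs before window_end". Assumes complete follow-up
# through window_end (no censoring handling in this toy version).

def calibration_in_window(pred_probs, event_times, window_end, n_bins=2):
    """Return (mean predicted probability, observed event fraction)
    per probability bin; well-calibrated models give similar pairs."""
    bins = [[] for _ in range(n_bins)]
    for p, t in zip(pred_probs, event_times):
        idx = min(int(p * n_bins), n_bins - 1)  # which probability bin
        observed = 1.0 if t <= window_end else 0.0
        bins[idx].append((p, observed))
    results = []
    for b in bins:
        if b:  # skip empty bins
            mean_pred = sum(p for p, _ in b) / len(b)
            obs_frac = sum(o for _, o in b) / len(b)
            results.append((mean_pred, obs_frac))
    return results

# Toy data: predicted 12-month impairment probabilities vs observed times.
probs = [0.1, 0.2, 0.8, 0.9]
times = [30.0, 11.0, 10.0, 6.0]   # months to impairment
print(calibration_in_window(probs, times, window_end=12.0))
```

Large gaps between the two numbers in a pair indicate over- or under-confident predictions in that probability range.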

Position Papers Task 3 – Explainability of AI Algorithms

We call for proposals of visualization frameworks able to present the multivariate nature of the data and the model predictions in an explainable and, possibly, interactive way.


Organizers:

  • Adriano Chiò, University of Turin, Italy
  • Arianna Dagliati, University of Pavia, Italy
  • Barbara Di Camillo, University of Padua, Italy
  • Mamede Alves de Carvalho, University of Lisbon, Portugal
  • Nicola Ferro, University of Padua, Italy
  • Piero Fariselli, University of Turin, Italy
  • Sara C. Madeira, University of Lisbon, Portugal