The benchmarking of methods and the validation of datasets and analysis results are a recurrent need in all WPs of SOUND. The topic is closely related to quality assessment and quality control: in the latter, the focus is on whether a certain product is fit for use (“absolute quality”); in the former, on how different products compare to each other (“relative quality”).
SOUND partners have relevant experience and a strong track record in this area. Tavaré and Lynch (UCAM) led the WP “Benchmarking statistical methods and experimental protocols” in the EC FP7 project RADIANT, in which they collected the available benchmark/validation datasets for DNA variant calling in tumours. Huber (EMBL) contributed to the EC FP6 Coordination Action EMERALD [Beisvag, BioTechniques 2011], which produced the still widely used Bioconductor package arrayQualityMetrics.
This WP covers two main application areas: causative variant calling in rare inherited diseases (T11.2) and tumour genetics (T11.3-T11.5). As a foundation for these activities, in T11.1 we will develop a common set of criteria and a shared terminology, so that benchmarks performed by different groups are comparable.
Researchers regularly need to present absolute or relative measures of the quality of their data or algorithms as part of peer-reviewed publications. A well-known limitation of such measures is that they tend to be guided by what was economical to do and sufficient to pass peer review, which is not always what matters to a potential user of the data or method. We envisage that the standards developed in T11.1 may also serve beyond SOUND as quality reporting guidelines for researchers, reviewers, editors and consumers.
Calling genomic alterations from next-generation sequencing data is a critical early step in many genomic medicine research projects. Despite that central role, there is still considerable disagreement about best methods and error rate control: even after almost a decade of operation of TCGA and ICGC, identifying cancer-associated mutations and rearrangements in whole-genome sequencing data is acknowledged to be an open challenge, as stated e.g. by the ICGC-TCGA DREAM Mutation Calling challenge and other ongoing consortium-level benchmarking efforts.
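To illustrate the kind of “relative quality” comparison such benchmarking efforts perform, the following is a minimal sketch (not the consortium’s actual pipeline; the data and function name are illustrative) of scoring a variant call set against a truth set by exact match on chromosome, position, reference and alternate allele, and reporting precision, recall and F1. Real benchmarks additionally handle variant representation differences and confident-region filtering.

```python
def benchmark_calls(truth, calls):
    """Return (precision, recall, F1) for a call set versus a truth set.

    Both inputs are iterables of (chrom, pos, ref, alt) tuples.
    """
    truth_set, call_set = set(truth), set(calls)
    tp = len(truth_set & call_set)   # true positives: called and in truth
    fp = len(call_set - truth_set)   # false positives: called, not in truth
    fn = len(truth_set - call_set)   # false negatives: in truth, missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative (fabricated) truth and call sets:
truth = [("chr1", 100, "A", "T"), ("chr1", 250, "G", "C"), ("chr2", 40, "C", "A")]
calls = [("chr1", 100, "A", "T"), ("chr2", 40, "C", "A"), ("chr3", 7, "T", "G")]
precision, recall, f1 = benchmark_calls(truth, calls)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
# → precision=0.67 recall=0.67 F1=0.67
```

The same scoring function can be applied to several call sets against one truth set, which is the basic operation behind the head-to-head caller comparisons referred to above.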
The above examples also highlight the need for a tightly circumscribed mission for this WP. Benchmarking and validation in general could easily be the subject of a major consortium in their own right. With the resources of a single WP in SOUND, our first aim is to coordinate the benchmarking and validation efforts required by the other WPs, to achieve methodological consistency and to generate synergies by exchanging approaches. Our second aim is to disseminate this added value beyond the consortium.
- University of Cambridge (Lead Partner)
- European Molecular Biology Laboratory, Heidelberg
- ETH Zurich
- Technische Universität München
- Instituto de Engenharia Mecânica, Lisbon
- German Cancer Research Center, Heidelberg
- Hospital of the University of Zurich
- Technische Universität München, Klinikum Rechts der Isar