A key challenge in generating and sharing data across diverse research fields concerns uncertainty around the interpretation of data and the outcomes of data-intensive research. Uncertainty can mean very different things, however, in different contexts and even between different case studies. Our discussion proposes to address this issue from multiple perspectives, with reference to concrete cases drawn from biomedicine and the life sciences. We will touch on how levels of acceptable uncertainty differ widely across fields in medicine and biology (Rena Alcalay) and examine the uncertainty that arises when interpreting diagnostic information generated by algorithmic facial recognition. When algorithmic analysis is used to detect rare diseases, the issue of explainability presents challenges for data sharing across different actors such as geneticists, scientists, clinicians, and patients (Paul Trauttmansdorff). In international “mega” brain projects involving multiple national, institutional, and community actors, harmonising (meta-)data standards remains a key challenge (Anna-Lena Rüland). Narrative forms of data sharing, however, offer the potential to incorporate a range of points of view and epistemic assumptions, as does greater attention to the textual form in which data is communicated and circulated (Kim Hajek). With our case studies in mind, we also explore how experts can identify reliable expertise across diverse research fields. When it comes to evaluating data, we ask what qualities of the specific experts and their broader research contexts may count as markers of reliability (Richard Williams). By approaching uncertainty in data-intensive research from multiple perspectives, we hope to present a new framework for analysing the risk that uncertainty poses in data sharing. This framework enables researchers to take seriously the plurality of expertise involved in data sharing and the particular interests of those involved when managing uncertainty.
We are currently witnessing a paradigm shift in how international research is organized. For decades, the global science system was based on collaboration and openness. Recently, however, governments have begun to securitize national research systems and to increase restrictions on international research collaborations through export controls and security screenings of collaboration partners. This process is referred to as “research security.” In Germany, the Federal Ministry of Education and Research published a position paper on “research security in light of the Zeitenwende,” in which it stresses the need to balance “the freedom of science that we cherish with our security policy interests” (BMBF 2024). The ministry’s paper outlines eight broad policy proposals that are meant to guide the actors and institutions that make up Germany’s highly differentiated research ecosystem in implementing research security. Drawing on data collected for a research project on the implementation of research security policies in Germany, this presentation examines how the vague and poorly understood concept and practice of research security creates uncertainties for scientific managers, administrators, and researchers in Germany. It also outlines how implementers are currently trying to navigate and address these uncertainties.
Staunton et al. (2021) argue that the increasing reliance on repositories for data exchange complicates the ability to monitor research practices, environments, techniques, and technologies. This difficulty in tracking data exchanges undermines the quality and completeness of metadata, as well as the appropriateness of the data for potential applications (p. 115; see also Rajesh et al., 2021). I examine a specific aspect of this issue, which I call the Problem of Risk Variability. Across disciplines, norms for acceptable levels of uncertainty (or sigmas) when annotating metadata vary significantly. For example, genome-wide association studies (GWAS) require highly stringent significance levels to minimise false positives, given the sheer number of comparisons involved. However, when GWAS data are applied in diagnostic medicine, the acceptable level of uncertainty may be relaxed, especially when the goal is to initiate treatment. This discrepancy suggests that different scientific domains may prioritise distinct epistemic values when assessing whether a hypothesis is true (Davis-Stober et al., forthcoming). By highlighting these differences, I aim to illuminate how epistemic norms shape research practices and complicate the proper annotation of metadata across disciplines.
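To make the contrast concrete, the stringency of GWAS is often illustrated by the conventional genome-wide significance threshold, which can be motivated by a Bonferroni-style correction over roughly one million independent common variants; this worked figure is a standard illustration offered here for clarity, not a value drawn from the studies cited above:

\[ \alpha_{\text{genome-wide}} = \frac{\alpha}{m} \approx \frac{0.05}{10^{6}} = 5 \times 10^{-8}, \]

whereas a single confirmatory test in a clinical context may retain the familiar \( \alpha = 0.05 \), or an even more permissive threshold when the cost of withholding treatment is judged to outweigh that of a false positive.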
When sharing data, how can we reasonably judge what data is, and remains, reliable across diverse research fields? This raises the question of what types of expertise we may need, how we might acquire them, and the risks that their absence might cause. To judge expertise competently across fields, experts need what Collins and Evans call ‘external meta-expertise’: expertise regarding an expertise they personally lack. Collins and Evans offer ‘meta-criteria’ that allow us to judge the qualities of experts in different fields directly in order to judge their expertise indirectly. For example, an expert’s credentials, experience, and track record might count as evidence of reliable expertise. In response, our case studies show the need to shift from evaluating research participants to evaluating research processes. We often can, and should, do much more than judge the qualities of individual experts; we should foreground the qualities of the research processes within which individual experts are embedded. This shift in focus from the participants in research to its processes highlights the context-specific factors that condition the production of data, and how changes in context can change the usefulness of data as it travels across diverse research fields, independently of how reliable the individual experts producing the data might be.