GEOMAR Conference & Event Management

28–30 April 2026
DKFZ, Heidelberg
Europe/Berlin timezone

Pre-Conference Workshops

Workshops offer space for hands-on sessions, focused discussions, and collaborative exchanges on topics related to metadata, FAIR data, and research infrastructures. They provide an excellent opportunity to connect communities, explore emerging ideas, and build bridges between technical and scientific perspectives.

We are delighted to have received 12 successful contributions to our call for pre-conference workshops. Each workshop runs as either one 90-minute block (short) or two 90-minute blocks (extended). We are still in the process of scheduling them.

Short Workshops 

  1. Building connected data ecosystems - How to facilitate FAIR data workflows across tools and services
  2. Creating and inspecting Research Object Crates – The interactive way
  3. Creating RDF-compliant metadata templates with the AIMS Metadata Profile Service
  4. From chaos to clarity: Smart sample management with LinkAhead & O2A SAMPLES
  5. From shared challenges to shared action: Metadata harmonization in practice
  6. Improving NetCDF metadata quality: A collaborative approach for data producers
  7. Making Helmholtz data assets visible via the Helmholtz Knowledge Graph
  8. Make your own FAIR Digital Objects – The graphical way
  9. Semantics hidden in the dark – Make datasets shine (Practical integration of terminology services for FAIR data)

Extended Workshops

  1. Building confidence with research metadata at scale
  2. From ontology to ELN - Create your made-to-measure semantic metadata platform
  3. Semantic x-Lab: Bridging laboratory metadata and semantic knowledge discovery

 



Building connected data ecosystems - How to facilitate FAIR data workflows across tools and services 

SHORT WORKSHOP - DISCUSSION

Hosts: Rory Macneil, Tilo Mathes, Emanuel Söding

Most research centers maintain dedicated infrastructures to capture, curate, and store research data produced by their personnel. The employed solutions, however, are often run independently of one another and therefore lack connectivity, creating gaps in the data workflows. An integrated data ecosystem, in contrast, would manage information, provide workflows, and support data documentation as data is produced.
Lab and field notebooks are essential tools for documenting structured information during measurement campaigns or field and laboratory work. Modern Electronic Lab Notebooks (ELNs) and data collection tools offer advanced features to support this documentation process and can enrich records with additional metadata, such as instrumentation details, personnel involved, sample registration, and more. They are often positioned in sections of the data workflow where critical information is generated and possibly merged, and thus could operate as data workflow orchestrators. On the other hand, this task could also be assumed by other tools, depending on the architecture of the envisioned data ecosystem.

However, in practice, many centers and laboratories face significant barriers: ELNs and other services are not readily available, may require costly licenses, or don't integrate sufficiently into existing workflows and infrastructure. They also often lack institutional support or training opportunities. As a result, their use is not yet widespread.

This workshop builds on the results of a workshop held in summer 2025. We would like to discuss potential architecture models within research centers, and invite participants to explore the potential of ELNs and other tools within scientific workflows. Together, we'll discuss desirable features, briefly review a few existing solutions and their adoption challenges, and consider whether centrally provided ELN services across Helmholtz could be a sustainable way forward. The aim is to form a working group that develops interoperability standards for data ecosystems.

Tentative audience
Research Data Managers, Researchers, Research Infrastructure/service providers, Core facility providers

Maximum number of participants: 30


 



Creating and inspecting Research Object Crates – The interactive way

SHORT WORKSHOP - TRAINING

Host: Christopher Raquet

NovaCrate [1] is a web-based interactive editor for creating, editing, and visualizing Research Object Crates [2] (RO-Crates). Built for inspecting, validating, and manipulating RO-Crates, it helps users gain a deeper understanding of an RO-Crate's content and structure.

In our workshop, we aim to provide training in NovaCrate and RO-Crate. We also hope to deepen our understanding of the requirements of researchers, data stewards, and any other roles that may come into contact with RO-Crates or NovaCrate [1], in order to identify potential improvements.

During the hands-on session, we will guide and encourage participants to work together in small groups and package some prepared research data as an RO-Crate with the help of NovaCrate. To do so, participants will describe the research data with metadata created through NovaCrate [1]. Here we see a close connection to track topic No. 4, "From Harmonisation to Action(ability)". 
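For orientation, the metadata file at the heart of every crate, ro-crate-metadata.json, can also be assembled by hand. The sketch below (plain Python, independent of NovaCrate; the crate name and file path are made up for illustration) builds the two entities every crate needs, the metadata descriptor and the root dataset, plus one file entry:

```python
import json

def minimal_ro_crate(name, description, data_files):
    """Build a minimal RO-Crate 1.1 metadata document as a Python dict."""
    graph = [
        {   # metadata file descriptor: points at the root dataset
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # root data entity: describes the crate as a whole
            "@id": "./",
            "@type": "Dataset",
            "name": name,
            "description": description,
            "hasPart": [{"@id": f} for f in data_files],
        },
    ]
    # one file entity per contained data file
    graph += [{"@id": f, "@type": "File"} for f in data_files]
    return {"@context": "https://w3id.org/ro/crate/1.1/context", "@graph": graph}

crate = minimal_ro_crate(
    "Example measurement campaign",   # illustrative crate name
    "Demo crate for the workshop",    # illustrative description
    ["data/measurements.csv"],        # illustrative file path
)
print(json.dumps(crate, indent=2))
```

Interactive tools such as NovaCrate manage this JSON-LD document for you, so the boilerplate above stays hidden during editing.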

In this process, teams are encouraged to take notes on challenges, blockers, and ideas for improvement. At the end of the workshop, we will discuss the experience with the participants, guided by the notes the participants have taken.
The discussion will be centered around these questions:

  • In which scenarios are RO-Crates useful?
  • How to approach reuse or consumption of RO-Crates?
  • How can you incorporate RO-Crates into your research?

We hope for an interesting discussion that not only provides us with crucial input for the development of our services, but also offers our participants the opportunity for discourse on the applications of RO-Crates in their research area.

  [1]: https://novacrate.datamanager.kit.edu/
  [2]: https://www.researchobject.org/ro-crate/

Tentative audience:
Researchers, data stewards, and any other roles that may come in contact with RO-Crates

Maximum number of participants: 30


 



Creating RDF-compliant metadata templates with the AIMS Metadata Profile Service

SHORT WORKSHOP

Hosts: Kseniia Dukkart, Marc Fuhrmans, Moritz Kern, Jürgen Windeck

Generating FAIR research data and enabling its reuse is the overall goal of research data management. However, establishing machine-readable knowledge representation - the “I” in FAIR - as the foundation for FAIR data and metadata remains a major challenge for many research communities. We have developed an approach to create subject-specific, RDF-compliant metadata profiles (i.e., SHACL shapes) that enable precise and flexible documentation of research processes and data. Our modelling approach supports inheritance between profiles: communities can create and share modular profiles as building blocks, which others can adopt and extend, so that metadata remains community-specific and interoperable at the same time.

To facilitate the modelling process and make it accessible to users with limited ontology expertise, we have developed a web service that provides a graphical user interface for creating metadata profiles [1]. It allows users to add suitable terms from existing terminologies together with constraints on permitted value nodes (e.g. expected data types, classes, or node shapes) and attribute cardinalities. Based on those profiles, metadata forms can be automatically generated for entering profile-compliant metadata [2] as well as search interfaces to explore profile-based metadata via faceted search [3].
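To make the idea concrete, here is a minimal sketch of what such a profile constrains. It is not the AIMS service or SHACL itself: the profile is a plain dict mimicking SHACL-style property constraints (datatype, minimum and maximum cardinality), and the term names are invented for illustration:

```python
# Illustrative only: each entry mimics a SHACL property shape with a
# datatype constraint and min/max cardinality; term names are invented.
PROFILE = {
    "dct:title":   {"datatype": str, "min": 1, "max": 1},
    "dct:creator": {"datatype": str, "min": 1, "max": None},   # 1..n
    "ex:sampleTemperature": {"datatype": float, "min": 0, "max": 1},
}

def validate(record, profile):
    """Return violation messages for a record of {property: [values]}."""
    violations = []
    for prop, c in profile.items():
        values = record.get(prop, [])
        if len(values) < c["min"]:
            violations.append(f"{prop}: at least {c['min']} value(s) required")
        if c["max"] is not None and len(values) > c["max"]:
            violations.append(f"{prop}: at most {c['max']} value(s) allowed")
        violations += [
            f"{prop}: value {v!r} is not of type {c['datatype'].__name__}"
            for v in values if not isinstance(v, c["datatype"])
        ]
    return violations

record = {"dct:title": ["Core drilling log"], "dct:creator": []}
print(validate(record, PROFILE))
```

A real SHACL shape expresses the same information as RDF (sh:path, sh:datatype, sh:minCount, sh:maxCount), which is what the Metadata Profile Service generates behind its graphical interface.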

In this workshop, participants will learn how to use the AIMS editor to create and extend metadata profiles and discuss the challenges of creating RDF-compliant metadata for research data. We will also present the new user interface prototype and conduct a hands-on user test. By gathering feedback from metadata experts, data stewards, and domain experts, we aim to improve the current user interface and discuss how RDF-based metadata can be embedded into everyday research workflows.

[1] NFDI4ING Metadata Profile Service. https://profiles.nfdi4ing.de
[2] Shacl-form. https://github.com/ULB-Darmstadt/shacl-form
[3] RDF-Store. https://github.com/ULB-Darmstadt/rdf-store

Tentative audience:
Data Stewards, Metadata Experts, Research Domain Experts, Metadata infrastructure providers

Maximum number of participants: 30


 



From chaos to clarity: Smart sample management with LinkAhead & O2A SAMPLES

SHORT WORKSHOP - TUTORIAL

Hosts: Maren Rebke, Florian Spreckelsen

LinkAhead is a flexible open-source toolbox for research data that adapts easily when workflows or requirements change. It offers a clear web interface, programmatic access, and a semantic structure that can be extended for many different research contexts.
In this workshop we will introduce LinkAhead and demonstrate how it supports O2A SAMPLES, a sustainable and interoperable platform for transparent, FAIR-compliant, and AI-ready sample metadata. O2A SAMPLES enables reliable sample registration, storage tracking, and Nagoya documentation, and connects smoothly with Helmholtz infrastructures. With well-defined workflows, QR-based tracking, and fully documented procedures, it provides an efficient and collaborative approach to managing samples from field collection to digital archive. This unified framework strengthens reproducibility, accessibility, and discoverability, enabling efficient digitization and collaboration across the entire sample lifecycle.

In this workshop, participants will first get to know the open-source research data management software LinkAhead [1, 2], which is the basis of the O2A SAMPLES platform at AWI. We will introduce LinkAhead's data model and web interface, including hands-on examples of how to query for, insert, and edit data entries in LinkAhead. We will then continue with an introduction to the O2A SAMPLES platform and its sample and storage management workflows. Participants will learn how samples are registered and how to export and update their metadata. An outlook will be given on configuring and adapting the sample management module [3] to the participants' (or their institutions') needs.

[1] https://doi.org/10.3390/data4020083
[2] https://gitlab.com/linkahead/
[3] https://gitlab.com/linkahead/linkahead-sample-management

Tentative audience: 
Data stewards and data managers, researchers

Maximum number of participants: 20


 



From shared challenges to shared action: Metadata harmonization in practice

SHORT WORKSHOP - DISCUSSION

Host: Oonagh Brendike-Mannix

Metadata harmonisation is a collective action problem. In this workshop our goal is to bring together data stewards, infrastructure providers, and researchers to share practical experiences in improving metadata quality, and co-identify actionable next steps toward harmonized metadata practices.

The workshop builds on our analysis of provided metadata, on previous workshops, and on one-on-one counselling sessions.

Intended Outcomes:

The workshop will:

  • present a summary of insights gathered from community workshops and one-on-one provider counseling,
  • provide short provider case reflections illustrating practical harmonization efforts (successes and challenges),
  • facilitate an interactive group exchange on lessons learned, remaining obstacles, and community-identified priorities,
  • synthesize outcomes into a joint set of next steps for HMC and providers, and shared recommendations.

Expected results:

  • Shared understanding of practical paths to improve metadata in provider contexts,
  • A curated list of next steps and recommendations for provider networks and HMC,
  • Strengthened network of practitioners engaged in metadata harmonization.

Tentative audience:
Infrastructure providers, data stewards, repository managers, metadata curators, FAIR practitioners, and researchers interested in improving metadata practices and interoperability.

Maximum number of participants: 30


 



Improving NetCDF Metadata Quality: A Collaborative Approach for Data Producers

SHORT WORKSHOP - TRAINING

Hosts: Romy Fösig, Björn Saß

High-quality, interoperable NetCDF data are essential for FAIR Earth System Science, yet metadata completeness and consistency often remain a challenge for data producers. Building on the Helmholtz Metadata Guideline for NetCDF (HMG NetCDF) and its associated checker tool, this workshop offers a focused discussion and hands-on session to support practical adoption of the guidelines. Participants will jointly review key elements of the guideline, discuss open questions and implementation challenges, and test the prototype checker tool using their own NetCDF files.

The goal of the workshop is twofold: to improve participants’ datasets towards repository and portal ready metadata, and to gather concrete feedback to further refine both the guidelines and the checker tool.
The expected outcome is increased confidence in producing FAIR, interoperable NetCDF data and strengthened community alignment around standardized metadata practices.

Agenda (Main content):

  1. Introduction of the NetCDF metadata guidelines
  2. Discussion of the guidelines
  3. Bring your own NetCDF file: Improvement on the basis of the guidelines (hands-on)
  4. Test of the HMG NetCDF checker tool
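To give a flavour of what the hands-on part involves, the sketch below checks a file's global attributes against a list of recommended ones. The attribute list is an assumption (typical CF/ACDD-style recommendations), not the actual HMG NetCDF rule set, and the attributes are passed in as a plain dict so no NetCDF library is needed:

```python
# The attribute list is an assumption (typical CF/ACDD-style
# recommendations), not the actual HMG NetCDF rule set.
RECOMMENDED_GLOBAL_ATTRS = [
    "Conventions", "title", "summary", "keywords",
    "license", "creator_name", "institution",
]

def check_global_attrs(attrs):
    """Return recommended global attributes that are missing or empty.

    `attrs` is a plain dict of a file's global attributes, so this
    sketch runs without any NetCDF library installed.
    """
    return [a for a in RECOMMENDED_GLOBAL_ATTRS
            if not str(attrs.get(a, "")).strip()]

attrs = {"Conventions": "CF-1.8", "title": "Surface temperature", "license": ""}
print(check_global_attrs(attrs))
```

The HMG NetCDF checker tool performs this kind of completeness check, among others, directly on NetCDF files.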

Tentative audience: 
Researchers (data producers), data stewards/managers

Maximum number of participants: 15


 



Making Helmholtz Data Assets Visible via the Helmholtz Knowledge Graph

SHORT WORKSHOP - DISCUSSION

Hosts: Oonagh Brendike-Mannix, Volker Hofmann, Marco Nolden

This workshop aims to advance the Helmholtz Knowledge Graph (HKG) as a shared metadata backbone by identifying new data providers and sources, extending and refining the HKG data model, and jointly evaluating practical onboarding processes for data providers across Helmholtz.

The Helmholtz Knowledge Graph is a federated metadata infrastructure that makes digital assets—such as datasets, publications, software, and instruments—discoverable, comparable, and queryable across the Helmholtz Association. While the HKG already integrates metadata from multiple infrastructures, its continued value depends on active collaboration with data providers, domain experts, and metadata professionals.

The workshop provides a structured, interactive setting to work on three closely connected themes. First, participants will identify novel data providers and metadata sources, including domain-specific repositories, institutional services, and emerging infrastructures, that could meaningfully extend the coverage of the HKG. This includes discussing when data sources can be considered authoritative and how they may be used to validate, enrich, or contextualize other metadata in the graph.

Second, the workshop will explore metadata schemas and domain-specific structures that are currently not, or only partially, represented. Participants will review limitations of the existing HKG data model and discuss extensions that improve expressiveness for search, discovery, and cross-domain analysis.

Finally, participants will discuss how a structured onboarding process for data providers can be established, identifying challenges, best practices, and opportunities to better align technical pipelines with real-world metadata creation and maintenance.

Outcomes include a curated list of candidate data providers, shared criteria for authoritative metadata, concrete proposals for extending the HKG data model, and initial milestones for onboarding new data providers.

The workshop will be organized into parallel and successive discussion tables, followed by joint synthesis sessions to consolidate results across perspectives.


Tentative audience:
Data stewards, infrastructure providers, metadata specialists, and developers working with metadata infrastructures within Helmholtz and beyond.

Maximum number of participants: 20


 



Make your own FAIR Digital Objects – the graphical way

SHORT WORKSHOP - TRAINING

Host: Andreas Pfeil

To accelerate the adoption of FAIR Digital Objects (FDOs), their creation and usage need to be implemented in software. Our work targets the task of creating and maintaining FDO records. We introduce an application for building designs for FDO records in an intuitive, visual way, targeting non-experts and experts in the field alike. From a design, code and FDOs can be generated to automatically create FDO records from given information.

In this workshop, we aim to teach the skills to create FAIR Digital Objects at smaller and larger scales with minimal resources. We encourage participants to bring JSON-encoded metadata of the objects they would like to publish as FAIR DOs; for those who do not, we will provide examples to work with. We also hope to get feedback for the further development of the FAIR DO Designer and insights into deeper requirements of the target group. The workshop will have the following structure:

  • Introduction to the FAIR DO Designer (10 min)
  • Demonstration and guidance through the basic concepts (interactive, 20 min)
  • Working session, so participants can build their own FDOs (guided, 45 min)
  • Discussion and Feedback (15 min)
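To illustrate the kind of input and output involved, the sketch below maps JSON-encoded metadata onto a flat, FDO-record-like list of typed attribute/value pairs. Both the mapping and the attribute identifiers are hypothetical and do not reflect the actual FAIR DO Designer:

```python
import json

# Hypothetical mapping from local JSON metadata keys to FDO attribute
# identifiers; neither side reflects the actual FAIR DO Designer.
ATTRIBUTE_MAP = {
    "title":   "21.T11148/title",     # illustrative attribute PIDs
    "license": "21.T11148/license",
    "contact": "21.T11148/contact",
}

def to_fdo_record(metadata_json):
    """Turn JSON object metadata into a flat list of typed attributes."""
    meta = json.loads(metadata_json)
    return [{"type": ATTRIBUTE_MAP[k], "value": v}
            for k, v in meta.items() if k in ATTRIBUTE_MAP]

record = to_fdo_record('{"title": "Demo dataset", "license": "CC-BY-4.0"}')
print(record)
```

The FAIR DO Designer generates this kind of mapping code from a graphical design instead of requiring it to be written by hand.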

Tentative audience:
Data stewards, everyone publishing (meta-)data, research software developers

Maximum number of participants: 30


 



Semantics Hidden in the Dark – Make Datasets Shine (Practical Integration of Terminology Services for FAIR Data)

SHORT WORKSHOP

Hosts: Claudia Martens, Alexander Wolodkin, Anette Ganske 

Semantic technologies and terminology services are a cornerstone for implementing the FAIR principles, as they make the meaning of data explicit, machine-actionable, and reusable beyond their original context. While data may be technically accessible, a lack of shared semantics often limits interoperability and hinders reuse across disciplines, infrastructures, and research communities. Terminology services address this challenge by providing controlled concepts, semantic relationships, and persistent identifiers that enable consistent interpretation and integration of data.

This workshop focuses on the practical adoption of terminology services in research data infrastructures, moving beyond conceptual discussions toward concrete, transferable implementations. It presents key results of the BITS project (Blueprint for the Integration of Terminology Services in Earth System Science) and demonstrates, using the ESS TS (https://terminology.nfdi4earth.de) as an example, how terminology services can be embedded into research data infrastructures to improve discovery, interoperability, and semantic enrichment of datasets.

Designed as an interactive forum, the workshop combines short inputs, live demonstrations, and participatory elements to make the added value of semantics tangible for different stakeholder groups. Participants will engage with real-world implementations of terminology services integrated into repository interfaces (via API usage) and metadata pipelines, supported by interactive elements such as live polling, search challenges, and guided discussion.

By bringing together repository managers, researchers, and data stewards, the workshop fosters exchange between technical and conceptual perspectives and supports community-driven learning. Overall, the workshop aims to lower barriers to adopting terminology services, strengthen awareness of their strategic importance for FAIR data, and stimulate discussion on scalable and sustainable implementations across research infrastructures.
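As a minimal illustration of such semantic enrichment, the sketch below maps free-text dataset keywords to controlled concept IRIs. A local lookup table stands in for a live terminology-service API call, and the IRIs are invented, not real ESS TS entries:

```python
# A local lookup table stands in for a live terminology-service query;
# the concept IRIs are invented, not real ESS TS entries.
CONCEPTS = {
    "sea surface temperature": "https://example.org/concept/SST",
    "salinity": "https://example.org/concept/SAL",
}

def enrich_keywords(keywords):
    """Split keywords into (resolved IRI mapping, unresolved leftovers)."""
    resolved, unresolved = {}, []
    for kw in keywords:
        iri = CONCEPTS.get(kw.strip().lower())
        if iri:
            resolved[kw] = iri
        else:
            unresolved.append(kw)   # candidates for manual curation
    return resolved, unresolved

resolved, unresolved = enrich_keywords(["Sea Surface Temperature", "turbidity"])
print(resolved, unresolved)
```

In a repository integration, the lookup would be an API call to the terminology service, and unresolved keywords would be flagged for curation.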

Tentative audience:
Repository managers, metadata experts, researchers, data stewards, and all interested participants

Maximum number of participants: 35


 



Building Confidence with Research Metadata at Scale

EXTENDED WORKSHOP - TRAINING

Host: Sara El-Gebali

This hands-on, in-person workshop empowers research support professionals to move beyond static views of metadata and actively interrogate, assess, and act on DOI metadata at scale using the DataCite API. Using the DataCite metadata schema as a practical reference point, participants will work directly with real DOI metadata to explore which metadata elements most strongly influence discovery, reuse, trust, and decision-making. While DataCite is used as a reference implementation, the approaches and principles discussed are applicable to other PID-based and metadata-rich infrastructures.

Rather than inspecting records one by one, the workshop introduces the DataCite API as an accessible way to turn metadata into evidence. Participants will learn how to translate everyday institutional and research questions (Which records are missing licenses? How well are ORCIDs or RORs adopted? Which outputs are funded by a specific organisation?) into concrete, reproducible API queries.
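As a sketch of what such a query looks like, the snippet below composes a request URL for the DataCite REST API without sending anything. The /dois endpoint and the page[size] and client-id parameters follow the public API; the field name rights.rightsIdentifier inside the query string is our assumption and should be verified against the current API documentation:

```python
from urllib.parse import urlencode

BASE = "https://api.datacite.org/dois"

def build_query(query, client_id=None, page_size=25):
    """Compose a DataCite REST API request URL (nothing is sent here)."""
    params = {"query": query, "page[size]": page_size}
    if client_id:
        params["client-id"] = client_id   # restrict to one repository
    return BASE + "?" + urlencode(params)

# Which of a repository's records carry no license statement? The field
# name `rights.rightsIdentifier` is an assumption to verify against the
# current DataCite API documentation.
url = build_query("-rights.rightsIdentifier:*", client_id="example.repo")
print(url)
```

Pasting the resulting URL into a browser returns JSON that can be inspected directly, which is exactly the browser-based workflow the exercises use.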

The focus is not on software development, but on practical metadata literacy: understanding how metadata is structured, how it can be queried systematically, and how enriched metadata can be used to support curation, reporting, policy monitoring, remediation planning, and decision-making. Through guided, browser-based exercises, participants will explore high-impact metadata fields including licenses, subjects, affiliations, funders, descriptions, and relationships across repositories and disciplines. Assessing structured metadata in this way supports both FAIR maturity assessment and machine-actionable reuse, including AI-readiness.

By the end of the workshop, participants will be able to use API results to assess metadata completeness and quality, prioritise remediation efforts, and produce clear, defensible insights for management, policy, and strategic communication. They will gain confidence in explaining the value of specific metadata fields to researchers and decision-makers, and leave with reusable query patterns and a stronger understanding of metadata as shared research infrastructure rather than static documentation.

Tentative audience:
Repository managers, data stewards, PID infrastructure providers; absolute beginners to advanced users who work with DOI metadata but may never have used an API before.

Maximum number of participants: 30


 



From ontology to ELN - Create your made-to-measure semantic metadata platform

EXTENDED WORKSHOP - TRAINING

Host: Fabian Kirchner

This workshop will teach you how to use Herbie for setting up a bespoke semantic electronic laboratory notebook or research metadata platform which is customized to your concrete scientific needs.

We will start with an ontology of your scientific domain, pick a typical metadata record you might want to collect, and end with a set of (re)usable web forms for entering such a record in a fully semantically annotated way.

A typical and cumbersome approach would be creating spreadsheets and a set of transformation scripts to facilitate easy data entry for non-technical users. In the workshop you will get to know an alternative approach using Herbie: You will learn to write validation schemas in the standardized SHACL Shapes Constraint Language, upload these alongside your ontology to Herbie, and obtain a platform with easily usable web forms which automatically persist all entered data into a semantically annotated RDF knowledge graph.

After entering a few exemplary records, we will explore how you can query the created RDF knowledge graph using SPARQL to extract the data you need in downstream projects.
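To show what such a query does, the sketch below mimics a SPARQL SELECT over a toy in-memory graph; entity and predicate names are illustrative and not taken from Herbie:

```python
# Toy in-memory graph; entity and predicate names are illustrative.
TRIPLES = [
    ("ex:run1", "ex:usedSample", "ex:sampleA"),
    ("ex:run1", "ex:temperature", "293.15"),
    ("ex:run2", "ex:usedSample", "ex:sampleB"),
]

def match(pattern, triples):
    """Match a triple pattern; None plays the role of a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Equivalent SPARQL: SELECT ?run WHERE { ?run ex:usedSample ex:sampleA }
runs = [s for s, _, _ in match((None, "ex:usedSample", "ex:sampleA"), TRIPLES)]
print(runs)
```

Against the real knowledge graph, the same question is a one-line SPARQL query sent to Herbie's endpoint rather than a hand-rolled matcher.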

This workshop is intended for those who have an application ontology and want to start collecting (small) (meta)data that is properly semantically annotated. There are no restrictions on the domain. Herbie works best for data entered manually in an append-only fashion, as is typically done in laboratory notebooks.

We assume a basic understanding of RDF and OWL; in particular, you should be able to read RDF graphs in the Turtle format. You should bring your own computer with Python and Node.js installed to be able to run some development tools.

Tentative audience:
Data stewards, researchers with interest in technicalities of data management, developers

Maximum number of participants: 15


 



Semantic x-Lab: Bridging Laboratory Metadata and Semantic Knowledge Discovery

EXTENDED WORKSHOP - HACK SESSION

Hosts: Oliver Knodel, Manja Luzi-Helbing, Felix Mühlbauer, David Pape, Martin Voigt

The Semantic x-Lab project addresses a fundamental challenge in modern research data ecosystems: the fragmentation of laboratory metadata across heterogeneous systems and disciplinary silos. Funded within the Helmholtz Metadata Collaboration (HMC) and co-led by HZDR, GFZ, and GSI, the project aims to interlink ontology-based descriptions of workflows, instruments, resources, and experimental data to make them discoverable, interoperable, and semantically rich. By building a distributed knowledge graph through a user-centered co-design process with laboratory partners and large-scale facility stakeholders, Semantic x-Lab fosters cross-domain insights that were previously inaccessible due to isolated metadata landscapes.

Building on our 2025 Kick-Off Workshop where we introduced the project scope, collaborative use cases, and the foundational vision for FAIR semantic integration of lab information, this workshop will advance hands-on discussions on concrete integration strategies and community engagement practices. Participants will explore how semantic search interfaces, ontology alignment, and co-design methodologies can support FAIR metadata workflows across research domains.

The workshop aligns with key HMC Conference 2026 track topics by showcasing ontology-based harmonisation efforts, Human-Machine Collaboration in (Meta)data Acquisition (through discussions on digital tools and workflows), and Domain and Application-specific Ontologies (via real use cases from laboratory contexts). Based on this, we will develop and discuss exemplary knowledge graphs in groups during the workshop, both to introduce researchers to the field and to show infrastructure providers and central data stewards how knowledge graphs can support their work and the scientists they serve. We will also take these insights into account as our project progresses and incorporate them into further work.

This workshop invites researchers, data stewards, and infrastructure developers to contribute to shaping Semantic x-Lab’s next phases and to collectively envision semantic metadata as a cornerstone for future-ready, cross-disciplinary research discovery.

Tentative audience:
Researchers, knowledge graph developers, data stewards, research infrastructure providers

Maximum number of participants: 24
