Workshops offer space for hands-on sessions, focused discussions, and collaborative exchanges on topics related to metadata, FAIR data, and research infrastructures. They provide an excellent opportunity to connect communities, explore emerging ideas, and build bridges between technical and scientific perspectives.
We are delighted to have received 11 successful contributions to our call for pre-conference workshops. The workshops are either one 90-minute block (short) or two 90-minute blocks (extended). We are still in the process of scheduling them.
Short Workshops
- Building connected data ecosystems - How to facilitate FAIR data workflows across tools and services
This workshop explores how ELNs and related tools can be integrated into connected research data ecosystems to better orchestrate scientific workflows, while addressing current barriers to adoption and discussing suitable architecture models and possible centrally provided Helmholtz-wide services.
- Creating and inspecting Research Object Crates – The interactive way
A hands-on session with the research object crate editor NovaCrate to create your own research object crates, with the goal of exploring new use cases and uncovering collaboration potential.
- Creating RDF-compliant metadata templates with the AIMS Metadata Profile Service
Learn how to use the AIMS profile editor to build RDF/SHACL-based metadata profiles for discipline-specific needs.
- From chaos to clarity: Smart sample management with LinkAhead & O2A SAMPLES
FAIR sample management workflows in the research field Earth and Environment with O2A SAMPLES and LinkAhead.
- From shared challenges to shared action: Metadata harmonization in practice
This follow-up workshop builds on "Harmonizing Metadata Across Helmholtz Infrastructures: Community Perspectives & Next Steps" (19 February 2026) and presents aggregated insights from the workshop and consulting phase, highlighting practical pathways toward harmonized and interoperable metadata across Helmholtz infrastructures.
- Making Helmholtz data assets visible via the Helmholtz Knowledge Graph
The workshop extends the dialogue with our data providers as well as with those who would like to connect to the Helmholtz KG in the future.
- Make your own FAIR Digital Objects – The graphical way
In this hands-on session you'll be guided through the FAIR DO Designer to create your own FAIR Digital Objects, together with your own reusable FAIR DO design.
- Semantics hidden in the dark – Make datasets shine (Practical integration of terminology services for FAIR data)
An interactive workshop demonstrating how terminology services and semantic technologies can be practically integrated into research data infrastructures and workflows to enhance discoverability, interoperability, and FAIR data reuse across disciplines.
Extended Workshops
- Building confidence with research metadata at scale
This hands-on workshop enables research support professionals to move beyond static metadata records and systematically query, evaluate, and operationalize DOI metadata at scale using the DataCite API.
- From ontology to ELN - Create your made-to-measure semantic metadata platform
Learn to use Herbie to set up a bespoke semantic electronic laboratory notebook or research metadata platform, customized to your concrete scientific needs via ontologies and SHACL validation schemas.
- Semantic x-Lab: Bridging laboratory metadata and semantic knowledge discovery
This workshop explores how semantic knowledge graphs can bridge fragmented laboratory metadata, offering hands-on sessions to co-design knowledge graphs and discuss cross-domain integration strategies for researchers and infrastructure providers.
Building connected data ecosystems - How to facilitate FAIR data workflows across tools and services
SHORT WORKSHOP - DISCUSSION
Hosts: Rory Macneil, Tilo Mathes, Emanuel Söding
Most research centers maintain dedicated infrastructures to capture, curate, and store research data produced by their personnel. The solutions employed, however, are often run independently of one another and therefore lack connectivity, creating gaps in the data workflows. By contrast, an integrated data ecosystem would manage information, provide workflows, and support data documentation as data is produced.
Lab and field notebooks are essential tools for documenting structured information during measurement campaigns or field and laboratory work. Modern Electronic Lab Notebooks (ELNs) and data collection tools offer advanced features to support this documentation process and can enrich records with additional metadata—such as instrumentation details, personnel involved, sample registration, and more. They are often positioned at points in the data workflow where critical information is generated and possibly merged, and could thus operate as data workflow orchestrators. On the other hand, this task could also be assumed by other tools, depending on the architecture of the envisioned data ecosystem.
However, in practice, many centers and laboratories face significant barriers: ELNs and other services are not readily available, may require costly licenses, or don't integrate sufficiently into existing workflows and infrastructure. They also often lack institutional support or training opportunities. As a result, their use is not yet widespread.
This workshop builds on the results of a workshop held in summer 2025. We would like to discuss potential architecture models within research centers and invite participants to explore the potential of ELNs and other tools within scientific workflows. Together, we'll discuss desirable features, briefly review a few existing solutions and adoption challenges, and consider whether centrally provided ELN services across Helmholtz could be a sustainable way forward. The aim is to form a working group that develops interoperability standards for data ecosystems.
Tentative audience:
- Data ecosystem architects
- Tool or information providers in institutional RDM
- Data stewards interested in the digitalization of data workflows in their centers
- Lab / infrastructure providers who are interested in demonstrating the merit of their infrastructures
Participant prerequisites:
No specific technical expertise is required.
Maximum number of participants: 30
Creating and inspecting Research Object Crates – The interactive way
SHORT WORKSHOP - TRAINING
Host: Christopher Raquet
NovaCrate [1] is a web-based interactive editor for creating, editing, and visualizing Research Object Crates (RO-Crates) [2]. Built for inspecting, validating, and manipulating RO-Crates, it helps users gain a deeper understanding of an RO-Crate's content and structure.
In our workshop, we aim to provide training in NovaCrate and RO-Crate. We also hope to deepen our understanding of the requirements of researchers, data stewards, and other roles that come into contact with RO-Crates or NovaCrate, as input for potential improvements.
During the hands-on session, we will guide and encourage participants to work together in small groups and package prepared research data as an RO-Crate with the help of NovaCrate. To do so, participants will describe the research data with metadata created through NovaCrate. Here we see a close connection to track topic No. 4, "From Harmonisation to Action(ability)".
In this process, teams are encouraged to take notes on challenges, blockers, and ideas for improvement. At the end of the workshop, we will discuss the experience with the participants, guided by the notes the participants have taken.
The discussion will be centered around these questions:
- In which scenarios are RO-Crates useful?
- How to approach reuse or consumption of RO-Crates?
- How can you incorporate RO-Crates into your research?
We hope for an interesting discussion that not only provides us with crucial input for the development of our services, but also offers participants the opportunity for discourse on the applications of RO-Crates in their research area.
[1]: https://novacrate.datamanager.kit.edu/
[2]: https://www.researchobject.org/ro-crate/
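As a primer for the hands-on session, the heart of every RO-Crate is a single `ro-crate-metadata.json` file. The sketch below builds a minimal skeleton following the RO-Crate 1.1 conventions; the dataset name and the file `data/measurements.csv` are invented placeholders, and a real crate produced with NovaCrate will carry much richer metadata:

```python
import json

# Minimal RO-Crate metadata skeleton (RO-Crate 1.1 conventions).
# The dataset title and file name are invented placeholders.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {   # the metadata file descriptor pointing at the root dataset
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # the root dataset being packaged
            "@id": "./",
            "@type": "Dataset",
            "name": "Example measurement campaign",
            "hasPart": [{"@id": "data/measurements.csv"}],
        },
        {   # one data file, described with minimal metadata
            "@id": "data/measurements.csv",
            "@type": "File",
            "encodingFormat": "text/csv",
        },
    ],
}

# Serialize as it would be written into the crate's root directory.
metadata_json = json.dumps(crate, indent=2)
print(metadata_json.splitlines()[0])  # first line of the JSON document
```

Everything else in the workshop (validation, visualization, extension with contextual entities such as people and instruments) builds on this flat `@graph` of linked entities.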
Tentative audience:
Software developers, data stewards, researchers, anybody interested in taking part. Prior knowledge of RO-Crates is not required.
Participant prerequisites:
Interest in research object crates in any form, no experience or knowledge required. Please bring your own laptop for the hands-on session. A recent version of Firefox or Chrome is required.
Maximum number of participants: 30
Creating RDF-compliant metadata templates with the AIMS Metadata Profile Service
SHORT WORKSHOP
Hosts: Kseniia Dukkart, Marc Fuhrmans, Moritz Kern, Jürgen Windeck
Generating FAIR research data and enabling its reuse is the overall goal of research data management. However, establishing machine-readable knowledge representation - the “I” in FAIR - as the foundation for FAIR data and metadata remains a major challenge for many research communities. We have developed an approach to create subject-specific, RDF-compliant metadata profiles (i.e., SHACL shapes) that enable precise and flexible documentation of research processes and data. Our modelling approach supports inheritance between profiles: communities can create and share modular profiles as building blocks, which others can adopt and extend, so that metadata remains community-specific and interoperable at the same time.
To facilitate the modelling process and make it accessible to users with limited ontology expertise, we have developed a web service that provides a graphical user interface for creating metadata profiles [1]. It allows users to add suitable terms from existing terminologies together with constraints on permitted value nodes (e.g. expected data types, classes, or node shapes) and attribute cardinalities. Based on those profiles, metadata forms can be automatically generated for entering profile-compliant metadata [2] as well as search interfaces to explore profile-based metadata via faceted search [3].
In this workshop, participants will learn how to use the AIMS editor to create and extend metadata profiles and discuss the challenges of creating RDF-compliant metadata for research data. We will also present the new user interface prototype and conduct a hands-on user test. By gathering feedback from metadata experts, data stewards, and domain experts, we aim to improve the current user interface and discuss how RDF-based metadata can be embedded into everyday research workflows.
[1] NFDI4ING Metadata Profile Service. https://profiles.nfdi4ing.de
[2] Shacl-form. https://github.com/ULB-Darmstadt/shacl-form
[3] RDF-Store. https://github.com/ULB-Darmstadt/rdf-store
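To give a flavour of the kind of profile the editor produces, a minimal SHACL shape with a datatype constraint, a cardinality, and a node-kind constraint might look as follows. The `ex:` namespace and the choice of properties are invented for illustration; actual AIMS profiles are built from terms of existing terminologies and will differ:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ex:  <https://example.org/profiles#> .

# Hypothetical profile: every described dataset needs exactly one
# string-valued title and may reference at most one license by IRI.
ex:DatasetProfile
    a sh:NodeShape ;
    sh:targetClass ex:Dataset ;
    sh:property [
        sh:path dct:title ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] ;
    sh:property [
        sh:path dct:license ;
        sh:nodeKind sh:IRI ;
        sh:maxCount 1 ;
    ] .
```

Shapes of this form are what drives the automatically generated metadata entry forms [2] and faceted search interfaces [3].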
Tentative audience:
Metadata managers, data stewards, research domain experts, or (metadata) infrastructure providers.
Participant prerequisites:
- Bring your own notebook
- Basic, general knowledge of terminologies, metadata schemas, and RDF (experts of course highly welcome!)
Maximum number of participants: 30
From chaos to clarity: Smart sample management with LinkAhead & O2A SAMPLES
SHORT WORKSHOP - TUTORIAL
Hosts: Maren Rebke, Florian Spreckelsen
LinkAhead is a flexible open source toolbox for research data that adapts easily when workflows or requirements change. It offers a clear web interface, programmatic access and a semantic structure that can be extended for many different research contexts.
In this workshop we will introduce LinkAhead and demonstrate how it supports O2A SAMPLES, a sustainable and interoperable platform for transparent, FAIR-compliant, and AI-ready sample metadata. O2A SAMPLES enables reliable sample registration, storage tracking, and Nagoya documentation, and connects smoothly with Helmholtz infrastructures. With well-defined workflows, QR-based tracking, and fully documented procedures, it provides an efficient and collaborative approach to managing samples from field collection to digital archive. This unified framework strengthens reproducibility, accessibility, and discoverability, enabling efficient digitization and collaboration across the entire sample lifecycle.
In this workshop, participants will first get to know the open-source research data management software LinkAhead [1, 2], which is the basis of the O2A SAMPLES platform at AWI. We will introduce LinkAhead's data model and web interface, including hands-on examples of how to query for, insert, and edit data entries in LinkAhead. We will then continue with an introduction to the O2A SAMPLES platform with its sample and storage management workflows. Participants will learn how samples are registered, and how to export and update their metadata. An outlook will be given on configuring and adapting the sample management module [3] to the participants' (or their institutions') needs.
[1] https://doi.org/10.3390/data4020083
[2] https://gitlab.com/linkahead/
[3] https://gitlab.com/linkahead/linkahead-sample-management
Tentative audience:
The workshop on O2A SAMPLES targets researchers, data managers, and research infrastructures that handle large, diverse physical sample collections and need a FAIR, interoperable, and sustainable system for managing, sharing, and reusing sample metadata. People generally interested in the underlying flexible toolbox LinkAhead for research data management are also welcome to attend.
Participant prerequisites:
No prior knowledge required; your own laptop with a browser is strongly recommended to follow the examples.
Maximum number of participants: 20
From shared challenges to shared action: Metadata harmonization in practice
SHORT WORKSHOP - DISCUSSION
Host: Oonagh Brendike-Mannix
Metadata harmonisation is a collective action problem. In this workshop our goal is to bring together data stewards, infrastructure providers, and researchers to share practical experiences in improving metadata quality, and co-identify actionable next steps toward harmonized metadata practices.
The workshop builds on our analysis of the metadata provided, previous workshops, and one-on-one counselling sessions.
Intended Outcomes:
The workshop will:
- present a summary of insights gathered from community workshops and one-on-one provider counselling,
- provide short provider case reflections illustrating practical harmonization efforts (successes and challenges),
- facilitate an interactive group exchange on lessons learned, remaining obstacles, and community-identified priorities,
- synthesize outcomes into a joint set of next steps for HMC and providers, and shared recommendations.
Expected results:
- Shared understanding of practical paths to improve metadata in provider contexts,
- A curated list of next steps and recommendations for provider networks and HMC,
- Strengthened network of practitioners engaged in metadata harmonization.
Tentative audience:
The workshop is intended for representatives of Helmholtz data infrastructures, repository managers, metadata specialists, research data managers, and individuals responsible for metadata governance or interoperability. Others interested in the topic are welcome.
Participant prerequisites:
No specific technical expertise is required.
Maximum number of participants: 30
Making Helmholtz Data Assets Visible via the Helmholtz Knowledge Graph
SHORT WORKSHOP - DISCUSSION
Hosts: Oonagh Brendike-Mannix, Volker Hofmann, Marco Nolden
This workshop aims to advance the Helmholtz Knowledge Graph (HKG) as a shared metadata backbone by identifying new data providers and sources, extending and refining the HKG data model, and jointly evaluating practical onboarding processes for data providers across Helmholtz.
The Helmholtz Knowledge Graph is a federated metadata infrastructure that makes digital assets—such as datasets, publications, software, and instruments—discoverable, comparable, and queryable across the Helmholtz Association. While the HKG already integrates metadata from multiple infrastructures, its continued value depends on active collaboration with data providers, domain experts, and metadata professionals.
The workshop provides a structured, interactive setting to work on three closely connected themes. First, participants will identify novel data providers and metadata sources, including domain-specific repositories, institutional services, and emerging infrastructures, that could meaningfully extend the coverage of the HKG. This includes discussing when data sources can be considered authoritative and how they may be used to validate, enrich, or contextualize other metadata in the graph.
Second, the workshop will explore metadata schemas and domain-specific structures that are currently not, or only partially, represented. Participants will review limitations of the existing HKG data model and discuss extensions that improve expressiveness for search, discovery, and cross-domain analysis.
Finally, participants will discuss how a structured onboarding process for data providers can be established, identifying challenges, best practices, and opportunities to better align technical pipelines with real-world metadata creation and maintenance.
Outcomes include a curated list of candidate data providers, shared criteria for authoritative metadata, concrete proposals for extending the HKG data model, and initial milestones for onboarding new data providers.
The workshop will be organized into parallel and successive discussion tables, followed by joint synthesis sessions to consolidate results across perspectives.
Tentative audience:
The workshop targets developers, data engineers, and infrastructure providers who manage or develop data holdings (e.g., local or domain-specific knowledge graphs, data or information repositories, databases, etc.) and are interested in having them represented in the Helmholtz KG.
Participant prerequisites:
Participants should have a basic idea of the Helmholtz KG project.
There are no specific technical prerequisites.
Maximum number of participants: 20
Make your own FAIR Digital Objects – the graphical way
SHORT WORKSHOP - TRAINING
Host: Andreas Pfeil
To accelerate the adoption of FAIR Digital Objects (FDOs), their creation and usage need to be implemented in software. Our work targets the task of creating and maintaining FDO records. We introduce an application to build designs for FDO records in an intuitive and visual way, targeting non-experts and experts in the field alike. From such a design, code can be generated that automatically creates FDO records from given information.
In this workshop, we aim to teach the skills to create FAIR Digital Objects at smaller and larger scales with minimal resources. We encourage participants to bring JSON-encoded metadata of the objects they would like to publish as FAIR DOs; for those who do not, we will provide examples to work with. We also hope to get feedback for the further development of the FAIR DO Designer and insights into the deeper requirements of the target group. The workshop will have the following structure:
- Introduction to the FAIR DO Designer (10 min)
- Demonstration and guidance through the basic concepts (interactive, 20 min)
- Working session, so participants can build their own FDOs (guided, 45 min)
- Discussion and Feedback (15 min)
References:
- FAIR DO Designer Code Repository: https://github.com/kit-data-manager/fair-do-designer
- FAIR DO Designer Online Demonstrator: https://kit-data-manager.github.io/fair-do-designer/
Tentative audience:
Everyone who is interested in learning about and creating FAIR Digital Objects. For example Researchers, Data Stewards, Data Curators and Software Developers.
Participant prerequisites:
- Participants should be interested in creating FAIR DOs.
- Participants need a laptop with a modern web browser. We recommend having Firefox or Chrome installed as a fallback.
Maximum number of participants: 30
Semantics Hidden in the Dark – Make Datasets Shine (Practical Integration of Terminology Services for FAIR Data)
SHORT WORKSHOP
Hosts: Claudia Martens, Alexander Wolodkin, Anette Ganske
Semantic technologies and terminology services are a cornerstone for implementing the FAIR principles, as they make the meaning of data explicit, machine-actionable, and reusable beyond their original context. While data may be technically accessible, a lack of shared semantics often limits interoperability and hinders reuse across disciplines, infrastructures, and research communities. Terminology services address this challenge by providing controlled concepts, semantic relationships, and persistent identifiers that enable consistent interpretation and integration of data.
This workshop focuses on the practical adoption of terminology services in research data infrastructures, moving beyond conceptual discussions toward concrete, transferable implementations. It presents key results of the BITS project (Blueprint for the Integration of Terminology Services in Earth System Science) and demonstrates how terminology services, using the ESS TS (https://terminology.nfdi4earth.de) as an example, can be embedded into research data infrastructures to improve discovery, interoperability, and semantic enrichment of datasets.
Designed as an interactive forum, the workshop combines short inputs, live demonstrations, and participatory elements to make the added value of semantics tangible for different stakeholder groups. Participants will engage with real-world implementations of terminology services integrated into repository interfaces (via API usage) and metadata pipelines, supported by interactive elements such as live polling, search challenges, and guided discussion.
By bringing together repository managers, researchers, and data stewards, the workshop fosters exchange between technical and conceptual perspectives and supports community-driven learning. Overall, the workshop aims to lower barriers to adopting terminology services, strengthen awareness of their strategic importance for FAIR data, and stimulate discussion on scalable and sustainable implementations across research infrastructures.
Tentative audience:
The workshop addresses repository managers, data stewards, research software engineers, and researchers involved in metadata management or research data infrastructures who are interested in improving semantic interoperability and FAIR implementation.
Participant prerequisites:
No prior knowledge of semantic technologies or terminology services is required. Basic familiarity with research data management practices, data repositories, or metadata concepts is helpful but not mandatory.
Maximum number of participants: 35
Building Confidence with Research Metadata at Scale
EXTENDED WORKSHOP - TRAINING
Host: Sara El-Gebali
This hands-on, in-person workshop empowers research support professionals to move beyond static views of metadata and actively interrogate, assess, and act on DOI metadata at scale using the DataCite API. Using the DataCite metadata schema as a practical reference point, participants will work directly with real DOI metadata to explore which metadata elements most strongly influence discovery, reuse, trust, and decision-making. While DataCite is used as a reference implementation, the approaches and principles discussed are applicable to other PID-based and metadata-rich infrastructures.
Rather than inspecting records one by one, the workshop introduces the DataCite API as an accessible way to turn metadata into evidence. Participants will learn how to translate everyday institutional and research questions (such as which records are missing licenses, how well ORCIDs or RORs are adopted, or which outputs are funded by a specific organisation) into concrete, reproducible API queries.
The focus is not on software development, but on practical metadata literacy: understanding how metadata is structured, how it can be queried systematically, and how enriched metadata can be used to support curation, reporting, policy monitoring, remediation planning, and decision-making. Through guided, browser-based exercises, participants will explore high-impact metadata fields including licenses, subjects, affiliations, funders, descriptions, and relationships across repositories and disciplines. Assessing structured metadata in this way supports both FAIR maturity assessment and machine-actionable reuse, including AI-readiness.
By the end of the workshop, participants will be able to use API results to assess metadata completeness and quality, prioritise remediation efforts, and produce clear, defensible insights for management, policy, and strategic communication. They will gain confidence in explaining the value of specific metadata fields to researchers and decision-makers, and leave with reusable query patterns and a stronger understanding of metadata as shared research infrastructure rather than static documentation.
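As a small taste of the query patterns covered, the question "which of our DOIs carry no license?" can be phrased as a single request against the DataCite REST API. The sketch below only constructs the request URL; the repository identifier `example.repo` is an invented placeholder, and the exact query field syntax should be checked against the DataCite API documentation before relying on it:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen  # only needed for the live call

# Hypothetical example: list DOIs of one repository that lack license
# information. "example.repo" is a placeholder client id; verify the
# field names and query syntax against the DataCite API docs.
params = {
    "client-id": "example.repo",
    "query": "-rightsList.rightsIdentifier:*",  # records missing a license
    "page[size]": 5,
}
url = "https://api.datacite.org/dois?" + urlencode(params)
print(url)

# Uncomment to run the live query (requires internet access):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     for doi in data["data"]:
#         print(doi["id"], doi["attributes"].get("rightsList"))
```

Because the URL is plain text, the same pattern works from a browser address bar, which is how the guided, browser-based exercises approach it.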
Tentative audience:
This workshop is aimed at repository managers, data stewards, PID infrastructure providers, and analysts working with DOI metadata. It is particularly suitable for professionals who engage with metadata operationally but have limited or no experience working with APIs.
Participant prerequisites:
Participants should have basic familiarity with the DataCite Metadata Schema. No prior experience with APIs is required. The workshop is designed to accommodate both beginners and advanced participants. Participants should bring a laptop with internet access.
Maximum number of participants: 30
From ontology to ELN - Create your made-to-measure semantic metadata platform
EXTENDED WORKSHOP - TRAINING
Host: Fabian Kirchner
This workshop will teach you how to use Herbie for setting up a bespoke semantic electronic laboratory notebook or research metadata platform which is customized to your concrete scientific needs.
We will start with an ontology of your scientific domain, pick a typical metadata record you might want to collect, and end with a set of (re)usable web forms for entering such a record in a fully semantically annotated way.
A typical and cumbersome approach would be creating spreadsheets and a set of transformation scripts to facilitate easy data entry for non-technical users. In the workshop you will get to know an alternative approach using Herbie: You will learn to write validation schemas in the standardized SHACL Shapes Constraint Language, upload these alongside your ontology to Herbie, and obtain a platform with easily usable web forms which automatically persist all entered data into a semantically annotated RDF knowledge graph.
After entering a few exemplary records, we will explore how you can query the created RDF knowledge graph using SPARQL to extract the data you need in downstream projects.
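To give a flavour of that querying step, a SPARQL query over such an annotated knowledge graph might look like the following. The `ex:` vocabulary and the property names are invented placeholders; in practice the terms come from your own application ontology:

```sparql
PREFIX ex: <https://example.org/lab#>

# Hypothetical query: list all samples with their measured values,
# newest first. The ex: terms stand in for your ontology's own terms.
SELECT ?sample ?value ?date
WHERE {
    ?record a ex:MeasurementRecord ;
            ex:onSample ?sample ;
            ex:measuredValue ?value ;
            ex:recordedOn ?date .
}
ORDER BY DESC(?date)
```

Because the web forms persist every record into the same RDF graph, queries like this work across all entries without any export or transformation step.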
This workshop is intended for those who have an application ontology and want to start collecting (small) (meta)data that is properly semantically annotated. There are no restrictions on the domain. Herbie works best for data entered manually in an append-only approach, as is typically done in laboratory notebooks.
We assume a basic understanding of RDF and OWL; in particular, you should be able to read RDF graphs in the Turtle format. You should bring your own computer with Python and Node.js installed to be able to run some development tools.
Tentative audience:
This workshop is intended for those who have an application ontology (or plan to build/use one) and want to start collecting (small) (meta)data that is properly semantically annotated. In particular it will be interesting for data stewards, researchers with interest in the technicalities of data management, and developers of research software.
Participant prerequisites:
Basic understanding of RDF and OWL, in particular RDF graphs in the Turtle format. Your own computer with Python and Node.js installed.
Maximum number of participants: 15
Semantic x-Lab: Bridging Laboratory Metadata and Semantic Knowledge Discovery
EXTENDED WORKSHOP - HACK SESSION
Hosts: Oliver Knodel, Manja Luzi-Helbing, Felix Mühlbauer, David Pape, Martin Voigt
The Semantic x-Lab project addresses a fundamental challenge in modern research data ecosystems: the fragmentation of laboratory metadata across heterogeneous systems and disciplinary silos. Funded within the Helmholtz Metadata Collaboration (HMC) and co-led by HZDR, GFZ, and GSI, the project aims to interlink ontology-based descriptions of workflows, instruments, resources, and experimental data to make them discoverable, interoperable, and semantically rich. By building a distributed knowledge graph through a user-centered co-design process with laboratory partners and large-scale facility stakeholders, Semantic x-Lab fosters cross-domain insights that were previously inaccessible due to isolated metadata landscapes.
Building on our 2025 Kick-Off Workshop where we introduced the project scope, collaborative use cases, and the foundational vision for FAIR semantic integration of lab information, this workshop will advance hands-on discussions on concrete integration strategies and community engagement practices. Participants will explore how semantic search interfaces, ontology alignment, and co-design methodologies can support FAIR metadata workflows across research domains.
The workshop aligns with key HMC Conference 2026 track topics by showcasing ontology-based harmonisation efforts, Human-Machine Collaboration in (Meta)data Acquisition (through discussions on digital tools and workflows), and Domain- and Application-specific Ontologies (via real use cases from laboratory contexts). Based on this, we will develop and discuss exemplary knowledge graphs in groups during the workshop, in order to introduce researchers to the field, but also to show infrastructure providers and central data stewards how knowledge graphs can support their work and the scientists they serve. We will also take these insights into account as our project progresses and incorporate them into further work.
This workshop invites researchers, data stewards, and infrastructure developers to contribute to shaping Semantic x-Lab’s next phases and to collectively envision semantic metadata as a cornerstone for future-ready, cross-disciplinary research discovery.
Tentative audience:
The workshop is designed for researchers or lab managers, data stewards, knowledge graph developers, and data/community managers working with laboratory metadata or interested in semantic data integration, ontology alignment, and user-centered design of knowledge graphs. It particularly addresses those who want to explore how semantic technologies can enhance discoverability and interoperability in their research domains.
Participant prerequisites:
A notebook/laptop can be helpful. A basic understanding of metadata management, or an interest in lab workflows, documentation, and semantic technologies, is beneficial but not required.
Maximum number of participants: 24