Selected contributions
We are very grateful to all the participants who submitted contribution pitches. We received many strong submissions and are trying to accommodate as many as possible this year, eliminating the usual repetition of talks.
The organizing committee has selected the following contributions for the Symposium:
Workshops
- Building Communities That Work: A Hands-On Workshop on Fighting Noise in Developer Forums
  by Linus Gasser (EPFL)
  As research software engineers, we've all seen promising online communities deteriorate under waves of low-effort posts and automated spam. But what actually makes a community worth participating in? In this interactive workshop we will explore what we are really trying to achieve when we build technical communities. Then we'll dive into the trade-offs of different approaches: proof-of-personhood systems, web-of-trust networks, reputation scoring, and human curation. You'll leave with actionable insights you can apply to your own projects and collaborations. Come ready to share your own community horror stories and success stories!
- Connecting code, data & compute for collaborative research with Renku
  by Laura Kinkead (SDSC)
  Data science projects combine code and data from diverse locations: git repositories, cloud storage platforms, specialized data repositories, and more. Coordinating these resources requires technical knowledge and represents time-consuming overhead that diverts time from core analytical work. Even more critically, this fragmentation is a barrier to collaboration. When resources are not brought together in a repeatable, structured manner, team members struggle to replicate software environments or access necessary datasets, ultimately slowing project progress and hindering knowledge sharing. Renku is an open-source collaboration platform built by the Swiss Data Science Center that empowers teams to focus on analytical insights rather than resource management. Renku is used nationwide to create collaborative and reproducible data science projects in research and teaching. Join our workshop to learn how you can use Renku to connect your data, code, and compute in a unified and shareable workspace.
- Deploy a FAIR Python application in 30 minutes using Gradio
  by Simon Duerr (HES-SO Valais-Wallis)
  This workshop will introduce the differences between various frameworks for building UIs with Python, such as Streamlit, Marimo, and Gradio. It will then introduce the features of Gradio that make it ideal for quickly deploying FAIR web applications that are usable anywhere, conveniently packaged using Docker, extensible, expose a REST API, and add little overhead over the actual research code.
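  As a taste of why the overhead is small, here is a minimal Gradio sketch (editorial illustration, not workshop material; the `predict` function is a hypothetical stand-in for real research code):

  ```python
  import gradio as gr

  # Hypothetical stand-in for actual research code (illustrative assumption).
  def predict(name: str) -> str:
      return f"Hello, {name}!"

  # gr.Interface wraps a plain Python function in a web UI; when launched,
  # Gradio also exposes the function over an auto-generated REST API.
  demo = gr.Interface(fn=predict, inputs="text", outputs="text")

  if __name__ == "__main__":
      demo.launch()  # serves the app locally in the browser
  ```

  The research function itself stays untouched; the UI, API, and (via a Dockerfile) the packaging are layered around it.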
- Exploring Drug Repurposing for Autoimmune Diseases with DeepLife's Cell Blueprint
  by Constance Beyer (DeepLife)
  This workshop will provide participants with hands-on experience using DeepLife's Cell Blueprint software. The session will combine a live demo of the platform's features with a guided, interactive use case focusing on drug repurposing for autoimmune diseases. Participants will gain practical insights into leveraging cutting-edge tools for cellular network analysis and therapeutic discovery.
- RSEs and Data Stewards synergies finding
  by Moushumi Ulrich-Nath (Lib4RI)
  Research software engineers (RSEs) and data stewards (DS) share overlapping goals in advancing data-centric research, yet they often operate in silos, missing critical opportunities for collaboration that could significantly enhance both data and software practices. This workshop will bring these communities together to identify targeted project areas that leverage their combined expertise to address shared challenges. Participants will leave with actionable project concepts, new connections across communities, and clear next steps for further development through RSE and DS project teams. This workshop is ideal for those seeking to address specific data and software challenges through concrete, cross-disciplinary collaboration.
- Understand the value of Nix for stable development - The Fun Way
  by Gabriel Nuetzi (SDSC)
  We will explore how the Nix package manager can give you stable developer environments and packages and improve your development experience. Please make sure you have the requirements ready before the workshop.
Show-and-tell sessions
- AFFORD: a workflow for data stewards to make the data FAIRer
  by Gorka Fraga Gonzalez (UZH)
  For many researchers it may not be affordable to produce FAIR data. We propose a workflow to create a data index that helps data stewards and researchers curate the metadata and documentation needed to make their research data easier to find and share, effectively making them FAIRer. Developed by scientists without a software development background, the AFFORD workflow uses generalist open-source tools broadly used in scientific research (R, Quarto) as well as Git. It is intended to be maintainable by data stewards or researchers interested in data management, without requiring advanced programming skills. To this end, we provide a demo website and a skeleton repository with the necessary materials ready for reuse (see our preprint). Our goal is to help researchers produce FAIRer data and, as a bonus, to help them adopt a set of tools that are important for scientific reproducibility in general.
- Box-Framework: web application generator for Postgres databases
  by Andrea Minetti (WSL)
  We are developing an open-source tool called Box-Framework that enables rapid creation of web interfaces for PostgreSQL databases. The tool is under active development and is already used in approximately 15 applications. The two main use cases are:
  - Field campaigns: entering data directly into the database from the field.
  - Management applications: used for databases such as the national forest fire database and the Swiss forest protection database.
  The advantage of the tool is that it enables people with database knowledge but limited frontend expertise to quickly develop a productive web application.
- Building Secure, GPU-Accelerated Applications on HPC Infrastructure
  by Ahmad Alhineidi and Viktor Kovtun (UniBE)
  This show-and-tell presents a project demonstrating how High-Performance Computing (HPC) infrastructure can be leveraged to deploy secure, GPU-accelerated AI applications using the Open OnDemand web platform. By integrating language and speech models within an interactive, user-friendly interface, it enables researchers and students to run advanced NLP and text analysis tasks directly on the HPC cluster, with zero code and without deep technical expertise.
- Buildpacks as a reproducibility enabler
  by Samuel Gaist (Idiap)
  Creating a reusable and reproducible environment for other people to use can be quite challenging, especially for young researchers, some of whom may not have the computer science background that RSEs have. Docker images are the simplest way to do that, but they can be hard to build correctly and require additional non-trivial knowledge, making them a complicated tool to add to the tool belt of often-overwhelmed PhD students. This presentation shows how buildpacks can help achieve that goal in a simple fashion, so people can concentrate on the code for their research.
- From Paper to Digital Tools: An Overview of the Transition at Empa
  by Stefanie Hauser (Empa)
  The transformation from paper to digital tools at Empa brings several challenges: limited time due to shorter project cycles, a lack of clear incentives, and insufficient capabilities as staff are overwhelmed by numerous parallel topics. These constraints hinder the adoption of new digital solutions, despite their potential to improve efficiency and collaboration in the long term.
- From research code to impact: 5 years of RSE services for EPFL-ENAC
  by Charlie Weil (EPFL)
  A short version of my EnhanceR Seminar series talk of June 25th, presenting our EPFL team of RSEs, ENAC-IT4R: a technical research service to foster a collaborative, FAIR, and open research data & code ecosystem, and to strengthen scientific valorization.
- How EUROfusion Advanced Computing Hubs Leverage HPC to Accelerate Research and Engineering in Nuclear Fusion
  by Gilles Fourestey (EPFL)
  Within the framework of the EUROfusion consortium, the Advanced Computing Hub HPC centers of excellence actively engage in enhancing existing European fusion simulation codes. This effort is geared towards enabling researchers to fully harness the enhanced capabilities offered by the latest generations of supercomputers. These simulation codes are specifically designed for modeling plasmas within tokamaks and stellarators in order to accelerate the design of fusion experiments, such as ITER and JT-60SA, as well as the DEMO demonstration power plant.
- How Rust enables you to create a domain-specific language
  by Jusong Yu (PSI)
  We use Rust to develop a domain-specific language for workflow orchestration. The Rust ecosystem makes building such a language fairly easy, thanks to its many modern language features. In this show-and-tell, I'll give an overview of what this small language looks like and how Rust makes it easy to build.
- iLog: a digital inventory logbook integrated with openBIS
  by Simone Baffelli (Empa)
  During the ORD M1 project, we developed a prototype of a digital inventory logbook integrated with the openBIS ELN-LIMS. The logbook allows users to easily define complex setups composed of multiple sub-objects, edit their state, and track their modifications over time in a user-friendly and general manner. In this talk we present the design philosophy and the technical choices behind iLog and give a short live demo of the tool in use.
- Midap-tools: a Python package for post-processing and visualization of midap results
  by Lukas von Ziegler (ETHZ)
  Midap-tools is a Python package for post-processing and visualizing midap results. Midap is a machine-vision-powered imaging analysis pipeline developed by the SIS that:
  1. segments fluorescence microscopy images from microfluidic experiments to detect individual cells (e.g. E. coli bacteria) in each frame, and
  2. tracks and measures these individual cells over time.
  While midap performs these functions very well and creates invaluable data for researchers, its output contains a lot of complexity and can be challenging for experimentalists to work with. Midap-tools is a new tool that provides easy-to-use, high-level functions to process, analyze, and visualize outputs from entire microfluidic experiments across many samples and color channels. Its goal is to further bridge the gap between the researcher and the complexity of microfluidic data.
- NTSuisse: a web platform for high-resolution mass spectrometry (HRMS) data
  by Kai-Michael Kammer (Eawag)
  I would like to present NTSuisse, a web platform being developed at Eawag for the analysis and management of high-resolution mass spectrometry (HRMS) data. It is accessible to participating cantons, Swiss water suppliers, expert bodies, and the Swiss Federal Office for the Environment. Key features of the NTSuisse platform include user-friendly data upload and storage, centralized automatic processing, and target and suspect screening and quantification. The platform allows stakeholders to manage and analyze their own data independently and offers batch-wise data processing. It has been in development since 2023, with a release planned for the beginning of 2026. My colleague, Johannes Boog, will also attend and be able to answer questions. The show-and-tell will showcase the software and explain design decisions regarding the frontend and backend.
- PoC or Prod: What makes AI projects successful?
  by Roman Wixinger and Hannes Stählin (Ergon Informatik AG, ETHZ)
  AI prototypes are easy; reliable AI systems are rare. In this show-and-tell we share three fast checks that helped our teams turn proofs of concept into production services:
  - Business impact: An AI project needs to solve an actual problem, so there is a need for it beyond technical curiosity.
  - DevOps and reproducibility: Software engineering practices are key to ensuring security and sustainable development and operations.
  - Systematic evaluation: Data collection apps and synthetic benchmarks turn "looks good" into numbers and reveal how an AI solution actually performs.
  Throughout the talk, we draw on lessons from our own projects in industry and academia and leave participants with a practical checklist for their next AI project.
- Sustaining Scientific Workflows: The Case for Stable RSE Roles in AiiDA Development
  by Edan Bainglass and Ali Khosravi (PSI)
  We'll share our experience developing AiiDA, a workflow management tool that helps scientists run and reproduce complex computational workflows. Over the past 10 years, AiiDA has grown a lot, and so has the challenge of keeping it running smoothly. With PhDs and postdocs constantly rotating in and out, passing on knowledge and maintaining the software has become a real struggle. We'll talk about why stable, long-term RSE positions are essential to avoid burnout, lost expertise, and stalled progress. Using real examples from our recent work, we'll show how RSEs can make a big difference, not just in keeping the software alive, but in pushing it forward and helping researchers do better science.
- Take up the torch of an existing project
  by Diego Antolinos (UniNE)
  Being recruited onto a project that has already started brings a special challenge. I intend to speak for around 15 minutes about the Panda project (NRP 80 Pandemic Data) and bring forward some of the lessons I learned when joining a three-year project in journalism studies midway, as a temporarily contracted research engineer. What does it mean to analyse data that was collected by someone else? To navigate a research protocol decided without you while still trying to have a say in the project? To align your practices with the existing code base? Etc.
- Web app for a Tumor Board
  by Boris Simic (ETHZ)
  In collaboration with the HFR (hôpital fribourgeois), we developed a web application to display all relevant information on a screen at a tumor board (a session where oncologists decide on the treatment of cancer patients). The HFR provides different treatment plans based on experience and published evidence. During such a treatment plan, different bio samples are produced (e.g. blood, biopsy) and stored in a central biobank inside the hospital. Our application connects the patient information, the treatment decisions, and the bio samples in one database, and this information can afterwards be shared with researchers. We will show and tell the concept and the application we built to support cancer treatment at HFR.
Cancelled sessions
The following show-and-tell sessions have been tentatively announced, but the speakers had to withdraw. They are included here for completeness. These talks may appear later in EnhanceR Seminar sessions.
- CeDA's development of a web app for interactively exploring data on brain mechanisms related to the regulation of sleep
  by Rodrigo C. G. Pena (UniBas)
- Navigating through Research Software Engineering as an undergraduate
  by Sarans Chopra (EPFL)
- Style Transfer of non-parallel text data
  by Luca Marin (ETHZ)