Workshops at AMAM 2025
Title:
Mobile Body and Brain Imaging and the Brain-in-the-Loop Optimization (BILO) Concept
Duration: 1.5 hours
Organizers:
- Asghar Mahmoudi (TU Darmstadt)
- Dietmar Benz (ANT Neuro)
Contributors:
- Morteza Khosrotabar (TU Darmstadt)
- Krittika Choudhury (ANT Neuro)
Workshop Description / Motivation:
This workshop presents cutting-edge approaches for combining mobile EEG and brain-in-the-loop optimization (BILO) in the context of assistive robotics. The integration of brain activity with body movement analysis allows for adaptive and user-centered control of assistive devices such as exosuits.
Participants will be introduced to the core concepts of EEG-based evaluation and control, and will experience live demonstrations showcasing real-time EEG integration using ANT Neuro’s systems in collaboration with TU Darmstadt.
Workshop Goals:
- Present the BILO concept for EEG-based assistance evaluation and control
- Introduce the basics of mobile EEG technologies and integration workflows
- Demonstrate EEG-enabled real-time control of assistive systems
- Discuss future directions and open challenges in brain-body integration
Key Features:
- Live EEG demonstrations using ANT Neuro systems
- Examples of EEG-informed adaptation in robotic assistance
- Real-time signal streaming using LabStreamingLayer (LSL); a minimal streaming sketch follows this list
- Opportunities for audience discussion and collaboration
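To give a flavor of what LSL-based streaming involves, below is a minimal sketch using the open-source pylsl bindings; the stream name, channel count, and sampling rate are placeholder values, not the demo's actual configuration.

```python
# Minimal LSL outlet sketch using pylsl; all parameters are placeholders.
import time
import random

from pylsl import StreamInfo, StreamOutlet

# Describe a hypothetical 8-channel EEG stream at 250 Hz.
info = StreamInfo(name="DemoEEG", type="EEG", channel_count=8,
                  nominal_srate=250.0, channel_format="float32",
                  source_id="demo_eeg_001")
outlet = StreamOutlet(info)

# Push synthetic samples; a real setup would forward amplifier data instead.
while True:
    sample = [random.random() for _ in range(8)]
    outlet.push_sample(sample)
    time.sleep(1.0 / 250.0)
```

A consumer, such as an exosuit controller, would open a StreamInlet on the same stream to receive time-stamped samples in real time.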
Target Audience:
- Researchers in assistive robotics, human-robot interaction, and wearable systems
- Cognitive scientists and neuroengineers interested in BCI integration
- Biomedical and movement engineers
- Anyone curious about brain-informed movement control
Time | Activity |
---|---|
0:00 – 0:05 | Welcome & Introduction: Overview of the workshop theme and motivation (Asghar Mahmoudi) |
0:05 – 0:20 | Introduction to Mobile EEG Systems: Overview of state-of-the-art technology and integration with movement data (Krittika Choudhury) |
0:20 – 0:35 | Introduction to BILO for Assistive Systems: Concept and use cases in exosuit control (Morteza Khosrotabar) |
0:35 – 0:55 | Demo 1 – Real-Time EEG Acquisition & Monitoring: Hands-on demonstration of mobile EEG signal tracking (Dietmar Benz, Krittika Choudhury) |
0:55 – 1:15 | Demo 2 – EEG-Driven Exosuit Interaction (BILO in action): Live demonstration combining EEG inputs with assistive device control (Krittika Choudhury, Morteza Khosrotabar) |
1:15 – 1:30 | Audience Q&A and Open Discussion: (Moderated by Asghar Mahmoudi) |
Title:
WhiteBox – White-Box Modeling of Human Movement
Duration: 1.5–2 hours (10th of July, 15:30 – 17:30)
Chair:
- Constantin Rothkopf (TU Darmstadt)
Organizers:
- Dirk Balfanz (TU Darmstadt)
Contributors:
- Omid Mohseni – Modeling responses to backpack perturbations during walking
- Niteesh Midlagajni – Modeling Human Pouring Behavior using System Identification and Optimal Control Methods
- Matthias Schultheis – Inverse optimal control of human reaching movements
- Fabian Kessler – Modeling navigation behavior based on landmark-guided walking, gaze, and body orientation
Abstract / Motivation:
Understanding human movement requires models that not only reproduce observed behavior but also provide interpretable insight into the control and decision-making processes involved. The WhiteBox project explores how explicit, interpretable models (“white-box” models) can explain complex motor behavior by linking control, biomechanics, and cognitive factors.
This workshop presents modeling approaches across diverse human movement tasks, from upper-limb actions to whole-body locomotion, illustrating how white-box modeling can unify sensorimotor behavior across domains. The talks span applications in inverse optimal control, biomechanical modeling, and gaze and navigation behavior.
Workshop Goals:
- Present real-world examples of interpretable modeling in human movement
- Discuss methodological challenges and advantages of white-box approaches
- Stimulate interdisciplinary discussion on combining perception, control, and biomechanics in modeling
Abstracts:
Fabian Kessler: Modeling sensorimotor strategies for active learning and active perception during spatial navigation
As humans navigate, they actively learn the structure of their environment, constructing internal models that guide how and where they seek information. This process shapes active perception, allowing individuals to reduce uncertainty about their position through targeted eye, head, and body movements. Here, we investigated how people actively seek spatial information from landmarks while moving through a virtual environment using a head-mounted display. Participants followed predefined walking paths with varying landmark visibilities – some requiring active movements to view the landmarks. Most participants fixated on landmarks throughout the different walking paths, and those who did not showed greater variability in their endpoint locations. Critically, acquiring landmark information at key positions along the walking path predicted performance better than the total number of landmark fixations did. A computational model minimizing spatial uncertainty captured these patterns of fixations, while an alternative model not explicitly considering uncertainty did not. Taken together, our results suggest that humans actively coordinate their eye, head, and body movements to shape their spatial uncertainties, highlighting the intertwined roles of active learning and active perception in human spatial navigation.
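As a generic illustration of the uncertainty-minimization idea (a textbook Gaussian example, not the authors' specific model), a landmark fixation can be treated as an observation that shrinks positional uncertainty:

$$
\sigma_{\text{post}}^{2} = \left( \frac{1}{\sigma_{\text{prior}}^{2}} + \frac{1}{\sigma_{\text{obs}}^{2}} \right)^{-1} < \sigma_{\text{prior}}^{2},
$$

so a controller that chooses where to look in order to minimize expected posterior variance will favor fixations at exactly those path positions where landmark information is most informative.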
Niteesh Midlagajni: Modeling Human Pouring Behavior using System Identification and Optimal Control Methods
Optimal control methods under uncertainty have proven effective in modeling a variety of human sensorimotor behaviors. However, these algorithms require the specification of the underlying dynamics. In many naturalistic, sequential tasks, deriving these dynamics from first principles is non-trivial. Here, we propose using system identification methods to learn the dynamics directly from behavioral data, and then leveraging optimal control models to understand the task. We focus on the task of pouring: a highly practiced, everyday activity involving continuous visuomotor control. We designed a novel experimental setup combining mobile eye tracking, object tracking, and a custom-made digital scale, and collected data from participants performing the pouring task under two conditions: self-paced, where they poured naturally, and fast, where they were instructed to pour as quickly as possible. Participants were given no explicit instructions on the target fill level or time constraints in either condition. Using Sparse Identification of Nonlinear Dynamics (SINDy), we extract a low-dimensional dynamical model from the data and apply iterative Linear Quadratic Gaussian (iLQG) control to infer the cost function governing the behavior. The resulting cost function is highly interpretable: the pouring task can be described as reaching one’s preferred fill level while minimizing action cost and regulating flow rate to avoid spilling. Taken together, this approach offers an interpretable framework for analyzing naturalistic behavior using tools from optimal control theory.
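For readers unfamiliar with the tools named above, the sketch below shows the general SINDy fitting pattern with the open-source pysindy package; the synthetic trajectory, feature library, and threshold are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative SINDy fit with pysindy; data and settings are placeholders.
import numpy as np
import pysindy as ps

# Synthetic trajectory standing in for measured pouring kinematics.
t = np.linspace(0, 10, 1000)
x = np.column_stack([np.sin(t), np.cos(t)])

# Sparse regression over a polynomial feature library yields a compact
# dynamical model dx/dt = f(x).
model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.1),
    feature_library=ps.PolynomialLibrary(degree=2),
)
model.fit(x, t=t)
model.print()  # prints the identified governing equations
```

The identified model can then serve as the dynamics inside an iLQG solver to recover the cost function, as described in the abstract.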
Omid Mohseni: Modeling Balance Recovery Strategies Following Mediolateral Gyroscopic Moment Perturbations During Walking
Studying how individuals respond to controlled perturbations provides valuable insights into the sensorimotor control of balance. By applying moments to the upper body using an Angular Momentum Perturbator (AMP), we examined the balance recovery strategies used during laterally perturbed walking. We found that immediately following the perturbation, the hip strategy and adjustments in foot placement were the primary mechanisms for maintaining balance. Based on these observations, we employed a simple template model along with a bioinspired controller to replicate the observed balance recovery responses, thereby shedding light on the otherwise black-box workings of human motor control.
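For context, one widely used template-level rule for lateral foot placement (the extrapolated-center-of-mass heuristic, given here as generic background and not necessarily the controller used in this study) is:

$$
y_{\text{foot}} = y_{\text{CoM}} + \dot{y}_{\text{CoM}} \sqrt{l/g} + b,
$$

where \( l \) is the pendulum length, \( g \) gravitational acceleration, and \( b \) a stability margin; perturbation-induced changes in the mediolateral CoM state then map directly onto the foot-placement adjustments described above.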
Matthias Schultheis: Modeling human sensorimotor tasks using stochastic optimal control
Humans perform remarkably fast and precise movements despite the presence of noise in their sensors, muscles, and environments. In this talk, I will show how many everyday sensorimotor behaviors, such as arm reaching movements, can be modeled as stochastic optimal control systems. Specifically, we formulate these tasks as Linear-Quadratic-Gaussian (LQG) control problems extended to signal-dependent noise, where the variability scales with the magnitude of the signals involved. This framework links task goals, biomechanics, and uncertainty, enabling predictions of both average trajectories and trial-to-trial variability. Building on this forward model, I will describe an inverse approach that allows us to infer key properties, such as latent costs, beliefs, and motor noise characteristics, directly from behavioral data. Together, these methods offer a principled way towards learning interpretable (“white-box”) models from behavioral data.
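As generic background (standard notation, not necessarily that of the talk), a discrete-time LQG formulation with signal-dependent noise can be written as:

$$
x_{t+1} = A x_t + B u_t + \xi_t + \sum_i \varepsilon_{t,i}\, C_i u_t, \qquad \xi_t \sim \mathcal{N}(0, \Omega_\xi),\ \varepsilon_{t,i} \sim \mathcal{N}(0, 1),
$$
$$
J = \mathbb{E}\left[ \sum_{t=0}^{T} \left( x_t^\top Q_t\, x_t + u_t^\top R\, u_t \right) \right],
$$

where the multiplicative terms \( \varepsilon_{t,i} C_i u_t \) make motor variability grow with the magnitude of the control signal, which is what lets such models predict trial-to-trial variability as well as mean trajectories.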
Time | Activity |
---|---|
0:00 – 0:15 | Welcome & Introduction to WhiteBox (Constantin Rothkopf) |
0:15 – 0:30 | Talk 1: Modeling Human Pouring Behavior using System Identification and Optimal Control Methods (Niteesh Midlagajni) |
0:30 – 0:45 | Talk 2: Modeling human sensorimotor tasks using stochastic optimal control (Matthias Schultheis) |
0:45 – 1:00 | Talk 3: Modeling Balance Recovery Strategies Following Mediolateral Gyroscopic Moment Perturbations During Walking (Omid Mohseni) |
1:00 – 1:15 | Talk 4: Modeling sensorimotor strategies for active learning and active perception during spatial navigation (Fabian Kessler) |
1:15 – 1:30 | Panel Discussion: “What Makes a Model a White Box?” (All speakers, moderated by Constantin Rothkopf) |
1:30 – 1:50 | Open Q&A with Audience |
1:50 – 2:00 | Concluding Remarks (Constantin Rothkopf) |
Title:
2nd BioAct Workshop – Bioinspired Actuator Designs for Robotics: From Legged Locomotion to Assistive Devices
Duration: 2 hours
Organizers:
- Gregory Sawicki (Georgia Tech)
- Koh Hosoda (Kyoto University)
- Marc Murcia (TU Darmstadt)
Contributors:
- Koh Hosoda (Kyoto University)
- Greg Sawicki (Georgia Tech)
- Marc Murcia (TU Darmstadt)
- Dai Owaki (Tohoku University)
- Aida Rashty (TU Darmstadt)
Abstract / Motivation:
Bioinspired actuator designs are a cornerstone for advancing robotic mobility and assistive devices. Following the success of the first BioAct workshop at BioRob 2018 and the subsequent Springer book series, this second edition brings together leading researchers in bioinspired actuation, human–robot interaction, and mechanically intelligent systems.
The workshop explores how biological principles inform the design of actuators for legged robots and wearable assistance, including concepts such as hybrid actuators with variable impedance and muscle-inspired actuation. With a special focus on real-world applications and cross-domain translation from biomechanics to robotics, this session is designed to foster exchange and new collaboration.
Goals:
- Share state-of-the-art designs in bioinspired actuation
- Discuss actuator needs and trends in assistive technologies
- Bridge the gap between biomechanics and robotic design
- Explore future research opportunities and cross-lab synergies
Possible Follow-Up Outcomes:
- Initiate a collaborative paper or proposal
- Foster interdisciplinary collaborations on futuristic bioinspired robots and assistive systems
- Exchange dataset needs, actuator benchmarks, or design constraints
Time | Activity |
---|---|
0:00 – 0:05 | Welcome & Introduction: Overview of workshop goals and themes (Maziar Sharbafi) |
0:05 – 0:25 | Talk 1 – Biomechanics-Informed Specs for Exos and Prostheses (Greg Sawicki) |
0:25 – 0:45 | Talk 2 – Mechanically Intelligent Actuators in Legged Robotics (Koh Hosoda) |
0:45 – 1:00 | Talk 3 – Bioinspired hybrid actuation with insight from human hopping (Aida Rashty) |
1:00 – 1:15 | Talk 4 – Bioinspired hybrid actuation: EPA (electric-pneumatic actuator) and corresponding robotic findings (Marc Murcia) |
1:15 – 1:30 | Talk 5 – Jellyfish Cyborg: Soft-embodied Robot Recruiting Biological Actuators (Dai Owaki) |
1:30 – 1:50 | Panel Discussion – Design Priorities Across Applications (Moderated discussion with all speakers, Host: Maziar Sharbafi) |
1:50 – 2:00 | Open Q&A + Audience Discussion: Feedback, ideas, collaborations |
Title:
Central vs. Distributed Coordination in Locomotion: From CPGs to Reflex Control and Morphological Coupling
Duration: 2 hours
Organizers:
- Auke Ijspeert (EPFL)
- Maziar Sharbafi (TU Darmstadt)
Contributors:
- André Seyfarth (TU Darmstadt)
- Auke Ijspeert (EPFL)
- Poramate Manoonpong (VISTEC, Thailand / SDU, Denmark)
- David Remy (University of Stuttgart)
- Kotaro Yasui (Tohoku University)
- Maziar Sharbafi (TU Darmstadt)
Abstract / Motivation:
Locomotion in animals and robots emerges from a complex interplay between centralized neural control and distributed, often local, mechanisms, including reflexes, passive dynamics, and morphological coupling.
This workshop will explore the full spectrum of coordination strategies, from central pattern generators (CPGs) to mechanically mediated synchronization, offering insights into how different systems combine top-down and bottom-up control to achieve robust and adaptive movement.
We aim to foster cross-disciplinary dialogue between experts in robotics, neuroscience, and biomechanics, addressing both theoretical principles and practical implementations.
Workshop Goals:
- Compare and contrast central vs. distributed control approaches for locomotion
- Discuss how biological systems inspire coordination strategies in robots
- Explore the integration of CPGs, reflexes, and morphological computation
- Encourage interdisciplinary exchange and new collaborations
Possible Outcomes:
- Identify open questions for future research
- Foster interdisciplinary collaborations
- Initiate a follow-up publication, panel, or project proposal
Time | Activity |
---|---|
0:00 – 0:05 | Welcome & Introduction: Overview of the workshop theme and motivation (André Seyfarth) |
0:05 – 0:20 | Talk 1: Reverse-engineering feedforward and feedback control mechanisms of animal locomotion (Auke Ijspeert) |
0:20 – 0:35 | Talk 2: Neural Control with Reflexes and Local Feedback in Robotic Systems (Poramate Manoonpong) |
0:35 – 0:50 | Talk 3: Mechanical Coupling and Morphological Computation in Locomotion (David Remy) |
0:50 – 1:05 | Talk 4: Decoding the interplay between central and peripheral control for versatile locomotor repertoire in centipedes (Kotaro Yasui) |
1:05 – 1:20 | Talk 5: Concerted control, distributed control with central sensory feedback (Maziar Sharbafi) |
1:20 – 1:40 | Panel Discussion: Trade-Offs, Synergies, and Future Challenges (Moderated by André Seyfarth) |
1:40 – 1:55 | Audience Q&A and Open Discussion: Fostering interactive exchange with all participants |
1:55 – 2:00 | Concluding remarks (André Seyfarth) |
Title:
Movement Academy – A Bridge between Research and Practice
Duration: 1.5–2 hours
Organizers:
- André Seyfarth (TU Darmstadt)
Contributors:
- André Seyfarth (TU Darmstadt)
- Roger Russel (Feldenkrais Zentrum Heidelberg)
- Eisa Alokla (TU Darmstadt)
- Philipp Erben
Abstract / Motivation:
How can knowledge from movement science be translated into everyday practice and individual experience? The Movement Academy explores exactly that – bridging scientific research and embodied practice by bringing together perspectives from biomechanics, neuroscience, movement pedagogy, and somatic methods.
This workshop introduces the concept and activities of the Movement Academy, a collaborative initiative at TU Darmstadt involving movement scientists, educators, and practitioners. By creating a space for dialogue and exchange, the academy fosters interdisciplinary learning and development through hands-on engagement, case-based reflection, and scientific framing of movement practice.
The session will present examples of past and ongoing formats, insights from real-world applications, and invite participants into an open discussion about future directions and integration into education and training.
Workshop Goals:
- Present the concept, format, and purpose of the Movement Academy
- Share insights on how scientific knowledge and somatic experience can inform each other
- Foster cross-disciplinary conversation between researchers, therapists, coaches, and educators
- Explore how science-based movement practice can support health, adaptability, and rehabilitation
Target Audience:
- Researchers and students in movement science, biomechanics, neuroscience
- Movement educators, therapists, trainers, and coaches
- Professionals interested in embodied learning, rehabilitation, and health promotion
Time | Activity |
---|---|
0:00 – 0:10 | Introduction to the Movement Academy: History, Philosophy, and Goals (Philipp Erben) |
0:10 – 0:30 | Research Meets Somatic Practice: The value of experiential movement learning in a scientific context (André Seyfarth) |
0:30 – 0:50 | Embodied Exploration with the Feldenkrais Method: A guided experiential session combining perception and movement (Roger Russel) |
0:50 – 1:10 | Insights from the Academy: Demonstrating how movement practice can be examined with empirical scientific methods (Eisa Alokla) |
1:10 – 1:30 | Panel Discussion & Audience Q&A: How can we further integrate movement research and real-world practice? (All contributors) |
Title:
LokoAssist – Seamless Integration of Assistance Systems for the Natural Locomotion of Humans
Duration: 1.5 hours
Organizers:
- Mario Kupnik (TU Darmstadt)
- Oskar von Stryk (TU Darmstadt)
- Sebastian Wolf (Heidelberg University)
- Herta Flor (Heidelberg University)
Abstract / Motivation:
The LokoAssist research training group, funded by the German Research Foundation (DFG), explores how assistive devices – such as prostheses, orthoses, and exoskeletons – can be seamlessly integrated into the user's body schema to enable natural and intuitive human locomotion.
This workshop presents the interdisciplinary structure and research goals of the project, introduces its core scientific pillars, and highlights early achievements that address the central question: How can we enable intelligent, user-centered integration of assistive systems into human movement?
Workshop Goals:
- Present the LokoAssist concept for locomotion assistance
- Share interdisciplinary insights from engineering, biomechanics, neuroscience, and psychology
- Highlight technological and scientific progress across the project's research areas
- Foster dialogue on challenges and future directions in seamless assistance
Target Audience:
Researchers and practitioners in:
- Robotics & exoskeleton design
- Biomechanics & human locomotion
- Human-machine interaction
- Rehabilitation, prosthetics, and assistive tech development
- Neuroscience and cognitive modeling of motor control
Time | Activity |
---|---|
0:00 – 0:10 | Introduction to LokoAssist (André Seyfarth) |
0:10 – 0:25 | Mechatronic Systems: Adaptable actuators, sensing, and drive concepts for real-world assistance (Mario Kupnik) |
0:25 – 0:40 | Sensorimotor Movement Models: From undisturbed reference behavior to predictive ML-based support (Oskar von Stryk) |
0:40 – 0:55 | Assistance Scenarios and Evaluation: Human-in-the-loop experiments and multi-dimensional outcome measures (Sebastian Wolf) |
0:55 – 1:10 | User Perspective and Body Representation: Cognitive and perceptual integration of assistive devices into body schema (Herta Flor) |
1:10 – 1:30 | Panel Discussion & Audience Q&A: Open exchange on interdisciplinarity, translation, and future directions |
Tutorials at AMAM 2025
Title:
Model Predictive Control in Robotics: Foundations and Learning-Based Extensions
Duration: 1.5 – 2.5 hours (to be determined)
Instructors / Presenters:
- Maik Pfefferkorn (TU Darmstadt) – Instructor of “Model Predictive Control and Machine Learning” for control engineering students
- Prof. Rolf Findeisen (TU Darmstadt) – Head of the Control and Cyber-Physical Systems Laboratory
Target Audience:
- Graduate students, researchers, and professionals in robotics and control.
- Engineers with a basic knowledge of control who want to learn about MPC.
- Anyone interested in the intersection of machine learning, model-based control, and their applications.
Learning Goals:
- The participants can explain model predictive control as a solution approach to optimal control problems and formulate and solve model predictive controllers for simple problems.
- The participants can sketch the application of MPC in the field of robotics.
- The participants can describe and evaluate the use of machine learning methods in model predictive controllers and design and implement learning-supported model predictive controllers for simple problems.
- The participants can name and describe practical challenges and emerging research directions.
Optional Materials to Accompany Tutorial:
- Slide deck for the tutorial session
- Open-source code notebook or GitHub link
- List of MPC solvers (e.g., CasADi, acados, do-mpc, FORCES Pro); a minimal CasADi sketch follows this list
- List of references for further reading on learning-enhanced MPC
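To give a flavor of the hands-on part, here is a minimal receding-horizon MPC sketch built on CasADi's Opti stack (one of the solvers listed above); the double-integrator dynamics, horizon, bounds, and weights are illustrative choices, not the tutorial's actual example.

```python
# Minimal receding-horizon MPC for a double integrator using CasADi's Opti
# stack; dynamics, horizon, and cost weights are illustrative placeholders.
import casadi as ca
import numpy as np

dt, N = 0.1, 20                          # sampling time, prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])    # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])

def solve_mpc(x0):
    opti = ca.Opti()
    X = opti.variable(2, N + 1)          # predicted states over the horizon
    U = opti.variable(1, N)              # control inputs
    opti.subject_to(X[:, 0] == x0)       # initial-state constraint
    cost = 0
    for k in range(N):
        # Linear dynamics constraint and box constraint on the input.
        opti.subject_to(X[:, k + 1] == ca.mtimes(A, X[:, k]) + ca.mtimes(B, U[:, k]))
        opti.subject_to(opti.bounded(-1, U[:, k], 1))
        cost += ca.sumsqr(X[:, k]) + 0.1 * ca.sumsqr(U[:, k])
    opti.minimize(cost + ca.sumsqr(X[:, N]))   # stage costs + terminal cost
    opti.solver("ipopt")
    sol = opti.solve()
    return float(sol.value(U[0, 0]))     # apply only the first input

print(solve_mpc(np.array([1.0, 0.0])))
```

At each sampling instant only the first optimized input is applied and the problem is re-solved from the newly measured state, which is the defining receding-horizon idea covered in the session.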
Time | Topic | Contents |
---|---|---|
0:00 – 0:20 | Introduction | Autonomous (robotic) systems; feedback and model predictive control; machine learning; examples of MPC in robotics |
0:20 – 0:45 | Basics of model predictive control (MPC) | Dynamic systems and models; optimal control problems (OCPs); MPC as a solution approach to OCPs; repeated feasibility; stability; numerics of MPC; software tools |
0:45 – 0:55 | Live Demo / Code Walkthrough: MPC | Example implemented in Python or Matlab |
0:55 – 1:00 | Break | |
1:00 – 1:20 | MPC meets machine learning | Supervised machine learning; Gaussian process regression; neural networks; fusing MPC and ML: What can we learn? |
1:20 – 1:40 | MPC and reinforcement learning (RL) | Basics of RL (policy gradient methods); combining MPC and RL; teaser: some connections between MPC and RL |
1:40 – 1:50 | Live Demo / Code Walkthrough: RL and MPC | Example implemented in Python or Matlab |
1:50 – 2:00 | Outlook / Summary / Q&A | |
Title:
LocoMuJoCo: A Simulation Platform for Learning-Based Locomotion and Assistance
Duration: 1.5 – 2 hours
Instructors / Presenters:
- Davide Tateo (TU Darmstadt)
- Guoping Zhao (Southeast University)
- Nadine Drewing (TU Darmstadt)
- Jan Peters (TU Darmstadt)
Initiators/ Project Support:
- Jan Peters (TU Darmstadt)
- Maziar Sharbafi (TU Darmstadt)
- André Seyfarth (TU Darmstadt)
Abstract / Motivation:
LocoMuJoCo is an open-source simulation platform designed for learning robot motions from demonstrations. It allows using existing human locomotion datasets and retargeting them to specific robot embodiments. It can also be used to simulate musculoskeletal systems and imitate human locomotion. Built on top of MuJoCo, it enables accurate biomechanical modeling, real-time control, and seamless integration with learning algorithms.
This tutorial provides a hands-on introduction to LocoMuJoCo, showcasing its capabilities in locomotion modeling, learning-based control, and assistive device simulation. Attendees will see how the platform can be used to simulate complex neuromuscular systems and optimize assistive strategies using data-driven techniques.
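As a preview of the workflow, the sketch below follows the Gymnasium-style interface advertised in the LocoMuJoCo documentation; the environment and task name "HumanoidTorque.walk" is an illustrative assumption and should be checked against the current repository.

```python
# Minimal interaction loop with a LocoMuJoCo environment via its Gymnasium
# registration; the env_name string is an illustrative assumption.
import gymnasium as gym
import loco_mujoco  # noqa: F401  (importing registers the environments)

env = gym.make("LocoMujoco", env_name="HumanoidTorque.walk")
obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()   # placeholder random policy
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
    if terminated or truncated:
        obs, info = env.reset()
```

In practice, the random policy above would be replaced by a learned controller, e.g., one trained by imitation from the retargeted human locomotion datasets the platform provides.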
Target Audience:
- Robotics researchers and students working on locomotion or biomechanics
- Developers of assistive devices or prosthetics
- Scientists interested in simulating neuromechanical models with learning-based control
Learning Goals:
- Understand the structure and capabilities of LocoMuJoCo
- Learn how to model musculoskeletal systems and locomotion tasks
- See examples of integrating learning algorithms for motor control
- Explore how the platform can support assistive device research
Materials & Tools:
- LocoMuJoCo GitHub repository and documentation
- Example simulation models and learning scripts
- Recommended software setup (MuJoCo, Python, etc.)
Optional Outcomes:
- Introduce LocoMuJoCo to a broader research audience
- Collect feedback for future improvements and collaborative use
- Initiate a tutorial series or user group around LocoMuJoCo development and applications
Time | Topic | Presenter |
---|---|---|
0:00 – 0:10 | Welcome & Overview: Introduction to LocoMuJoCo and Tutorial Goals | Nadine Drewing |
0:10 – 0:30 | Platform Architecture & Features: Overview of LocoMuJoCo, Model Setup, API, Tools | Davide Tateo |
0:30 – 0:50 | Demonstration of Simulation: Demonstration, Examples, and Use Cases for Robot & Human Gaits | Davide Tateo |
0:50 – 1:00 | Modeling in LocoMuJoCo: Muscular Models, Conversion of Models for MuJoCo | Guoping Zhao |
1:00 – 1:15 | Learning-Based Control in LocoMuJoCo: RL/Optimization Approaches for Gait and Support | Guoping Zhao |
1:15 – 1:40 | Applications in Assistive Device Research: Simulating and Tuning Exosuits/Prostheses | Nadine Drewing |
1:40 – 2:00 | Interactive Q&A + Discussion: Audience questions and platform feedback | All Presenters |