MyoTalks
MyoTalks is an online podcast series organized by the MyoSuite team. It invites scientists and researchers from robotics, biomechanics, machine learning, neuroscience, sports sciences, rehabilitation and clinical sciences, human-computer interaction, ergonomics, and related fields to share insights and work towards understanding full-scale, end-to-end human embodied intelligence.


"What I cannot create, I do not understand" - Richard Feynman
Biomechanical User Simulations for HCI
April 17, 2025, 8:00 AM Eastern Time
Florian Fischer (University of Cambridge, Human-Computer Interaction)

In Human-Computer Interaction (HCI), interfaces and interaction techniques are typically validated and compared through application-specific user studies and well-established quantitative models that enable the prediction of summary statistics, e.g., the time it takes to complete a given task. Inspired by recent advances in neighbouring fields, we propose forward simulation of biomechanical user models as a complementary, highly powerful tool for HCI that can provide novel insights into how and why users move during interaction, as well as the interdependencies between user perception and control. In combination with state-of-the-art optimization and ML methods, this approach allows researchers and designers to predict movement trajectories and ergonomic variables, such as fatigue, prior to conducting user studies. In this talk, I will discuss the potential of biomechanical simulation for HCI and interface optimisation, as well as current limitations and challenges. In addition, I will present SIM2VR, a system that uses RL methods to simulate how users interact with a given VR application. This system, for the first time, enables training simulated users directly in the same VR application that real users interact with, which represents a major step towards automated biomechanical testing in XR.
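For readers curious what "forward simulation of biomechanical user models" looks like in practice, here is a minimal sketch of the general pattern: wrap an interactive task as a reinforcement-learning environment whose reward combines task success with an effort penalty, then train a policy to act as the simulated user. The toy one-dimensional pointing task and reward weights below are illustrative assumptions, not the SIM2VR system itself, and the sketch assumes gymnasium and stable-baselines3 are installed.

```python
# Hedged sketch of the "simulated user" pattern: an RL policy learns to perform an
# interaction task under a reward that trades off task success against effort.
# The toy 1-D pointing environment is an illustrative stand-in, not SIM2VR.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class ToyPointingTask(gym.Env):
    """A simulated user drives a cursor onto a target while minimizing effort."""
    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.cursor = float(self.np_random.uniform(-1.0, 1.0))
        self.target = float(self.np_random.uniform(-1.0, 1.0))
        return np.array([self.cursor, self.target], dtype=np.float32), {}

    def step(self, action):
        self.cursor = float(np.clip(self.cursor + 0.1 * action[0], -1.0, 1.0))
        error = abs(self.cursor - self.target)
        effort = float(action[0] ** 2)
        reward = -error - 0.1 * effort          # task accuracy plus an "ergonomic" effort cost
        terminated = error < 0.05               # target acquired
        obs = np.array([self.cursor, self.target], dtype=np.float32)
        return obs, reward, terminated, False, {}

# Train the simulated user; in SIM2VR the environment is the actual VR application
# coupled to a biomechanical user model rather than this toy task.
model = PPO("MlpPolicy", ToyPointingTask(), verbose=0)
model.learn(total_timesteps=5_000)
```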
Using Embodied AI to help answer "why" questions in systems neuroscience
April 25, 2025, 2:30–3:30 PM Eastern Time
Aran Nayebi (Carnegie Mellon University, Machine Learning & Neuroscience)

Deep neural networks trained on high-variation tasks ("goals") have had immense success as predictive models of the human and non-human primate visual pathways. More specifically, a positive relationship has been observed between model performance on ImageNet categorization and neural predictivity. Past a point, however, improved categorization performance on ImageNet does not yield improved neural predictivity, even between very different architectures. In this talk, I will present two case studies, one in rodents and one in primates, that demonstrate a more general correspondence between self-supervised learning of visual representations relevant to high-dimensional embodied control and increased gains in neural predictivity. In the first study, we develop the (currently) most precise model of the mouse visual system, and show that self-supervised, contrastive algorithms outperform supervised approaches in capturing neural response variance across visual areas. By "implanting" these visual networks into a biomechanically realistic rodent body to navigate to rewards in a novel maze environment, we observe that the artificial rodent with a contrastively optimized visual system obtains more reward across episodes than its supervised counterpart. The second case study examines mental simulations in primates, wherein we show that self-supervised video foundation models, which predict the future state of their environment in latent spaces that can support a wide range of sensorimotor tasks, align most closely with human error patterns and macaque frontal cortex neural dynamics. Taken together, our findings suggest that representations that are reusable for downstream embodied tasks may be a promising way forward to study the evolutionary constraints of neural circuits in multiple species.
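As a concrete reference point for the "self-supervised, contrastive algorithms" mentioned above, here is a minimal NumPy sketch of a generic InfoNCE/SimCLR-style contrastive objective. It is illustrative only: the embeddings are random placeholders, and the talk's models are full deep networks trained on natural images and video.

```python
# Hedged sketch: a generic InfoNCE / SimCLR-style contrastive loss in NumPy,
# illustrating the objective class referenced in the abstract (not the authors' models).
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N stimuli."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                  # (N, N) similarity matrix
    # Diagonal entries are positive pairs; all off-diagonal entries act as negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z_view1 = rng.normal(size=(8, 16))
z_view2 = z_view1 + 0.05 * rng.normal(size=(8, 16))   # two views of the same stimuli agree
print(info_nce_loss(z_view1, z_view2))
```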
Towards Personalizing Assistive Technology in the Real-world
Jul. 2, 2025, 1:30 PM Eastern Time
Patrick Slade (Harvard University, Bioengineering)

Hundreds of millions of people face mobility challenges. Wearable sensing and assistive devices offer the potential to help people overcome these challenges, but how best to improve mobility or health metrics is often unclear. We will discuss several case studies on developing and personalizing assistive technology for real-world use: a wearable system for tracking calories burned during exercise, a portable exoskeleton that personalizes assistance during real-world walking, and a navigation aid for people with impaired vision. These projects focus on developing low-cost, easily reproducible technology, working towards tools for underserved populations.
Wireless and Programmable Neurostimulator System for In Vivo Neural Microstimulation
Jul. 9, 2025, 1:30 PM Eastern Time
Alpaslan Ersöz (Carnegie Mellon University, Mechanical Engineering)

This talk presents the design and validation of a wireless, programmable, multi-channel neurostimulator system for in vivo neural microstimulation. The system integrates discrete analog circuitry and embedded firmware to enable charge-balanced current stimulation, programmable anodic biasing for enhanced charge injection, and voltage transient monitoring to ensure electrochemical safety. It also incorporates artifact suppression for concurrent neural recording. The device's compact design supports multiple stimulation modalities including amplitude and frequency modulation. Validation was performed through benchtop, in vitro, and in vivo experiments, demonstrating up to a ten-fold increase in charge injection capacity and successful artifact-free recording. The talk will highlight the system architecture, performance results, and implications for microstimulation in animal studies.
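A brief illustration of what "charge-balanced" stimulation means numerically: the cathodic and anodic phases of a biphasic pulse are sized so that the injected charge integrates to approximately zero. The pulse amplitudes and durations below are made-up values for illustration, not the specifications of the device described in the talk.

```python
# Hedged sketch: what "charge-balanced" stimulation means numerically.
# The pulse parameters are illustrative, not the talk's device specifications.
import numpy as np

dt = 1e-6                                 # 1 microsecond time step
cathodic = -100e-6 * np.ones(200)         # -100 uA for 200 us (cathodic phase)
anodic   =  +50e-6 * np.ones(400)         # +50 uA for 400 us (anodic phase)
pulse = np.concatenate([cathodic, anodic])

# Asymmetric amplitudes are balanced by asymmetric durations so net charge ~ 0.
net_charge = np.sum(pulse) * dt           # integral of current over time
print(f"net charge per pulse: {net_charge * 1e9:.3f} nC")
```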
Neural mechanisms of motivated movement
Jul. 23, 2025, 1:30 PM Eastern Time
Steven Chase (Carnegie Mellon University, Biomedical Engineering & Neuroscience Institute)

Movements are influenced by motivation. Consider a basketball player shooting a free throw. Depending on the stakes of the outcome of the shot, performance can vary greatly. Top athletes rise to the challenge, and perform better during a game than they do during practice. But when the stakes are inordinately high, like when the game is on the line, even skilled players can "choke under pressure", and under-perform right when it matters the most. Here I will explore the neural mechanisms that link motivation to changes in movement, by investigating how neural population activity in primary motor cortex changes as a function of reward. We find clear neural signatures of reward in motor cortex that can predict, on a trial-by-trial basis, whether choking under pressure is likely.
Sensing & Modulation of Neural Activity for Motor Restoration and Enhanced Assistive Interaction
Jul. 25, 2025, 1:30 PM Eastern Time
Nikhil Verma (Carnegie Mellon University, Mechanical Engineering)

Neurological conditions like stroke and spinal cord injury damage the corticospinal tract and disrupt communication between the brain and body, leading to paralysis. Addressing these deficits, particularly hand function in individuals with tetraplegia, is a critical clinical priority. In this talk, I will discuss complementary strategies in neural sensing and neuromodulation designed to address these motor deficits and enhance assistive interactions. Using high-density electromyography (HDEMG), we demonstrate that motor unit activity can be detected even in muscles clinically diagnosed as paralyzed. These subtle myoelectric signals can be decoded, providing possible control signals for assistive devices for individuals with severe motor paralysis. Complementing this sensing approach, electrical stimulation of the spinal cord delivered below the injury site can recruit spinal sensorimotor circuits that remain intact after the injury. Our studies highlight how both non-invasive (transcutaneous) and invasive (epidural) spinal stimulation effectively restore voluntary motor function after paralysis from stroke or spinal cord injury. Taken together, recent advances in sensing and modulating neural activity form a comprehensive and synergistic approach to restore motor function and improve interaction capabilities for individuals living with chronic paralysis.
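To make "decoding myoelectric signals for device control" concrete, here is a deliberately simple sketch of a classic surface-EMG pipeline (rectify, low-pass filter, normalize to obtain a control envelope). The HDEMG motor-unit decomposition described in the talk is a far more advanced technique; the sampling rate and synthetic signal below are assumptions, and SciPy is assumed to be available.

```python
# Hedged sketch: a basic EMG-envelope pipeline (rectify + low-pass filter), one common
# way to turn a myoelectric signal into a device control signal. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                    # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
emg = rng.normal(scale=0.1 + 0.5 * (t > 1.0), size=t.size)   # "contraction" after t = 1 s

rectified = np.abs(emg)
b, a = butter(4, 5.0 / (fs / 2), btype="low")  # 5 Hz low-pass for the envelope
envelope = filtfilt(b, a, rectified)

command = envelope / envelope.max()            # normalized control signal in [0, 1]
print(command[::1000])                         # coarse look: low before 1 s, high after
```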
Mapping representations for motor control and awareness in the cerebral cortex
Mar. 22, 2025, 2:30 PM Eastern Time
John Veillette (University of Chicago, Neuroscience)

Our bodies are the fundamental input-output interface between the brain and world, and the conscious experience of acting through this interface, the sense of agency (SoA), is among the most basic facets of self-awareness. A critical challenge in the cognitive sciences is to understand which types of motor control representations percolate into awareness as SoA – and where SoA may deviate from a veridical metric of control. In this talk, I will discuss research in which we manipulate participants' experience of agency while usurping control of their muscles using functional electrical stimulation. We find evidence that this experience of volition over the musculature can be decoded from human brain recordings. We then present a proof-of-concept study in which we map the representations of a simulated biomechanical controller (trained in MyoSuite) onto human functional magnetic resonance imaging recordings during a motor task, and we find that neural activity in the brain regions predicted by the MyoSuite model can also be used to "decode" SoA days later. We close by discussing challenges and opportunities for using neuromuscular control models to understand human brain activity in controlled and in naturalistic tasks, with a particular focus on the potential of personalized brain models.
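The "controller representations to fMRI" step described above follows the general logic of an encoding model: regress model-derived features onto per-voxel responses and examine where the predictions hold. The sketch below shows that generic recipe with random placeholder data and scikit-learn's RidgeCV; it is not the study's actual pipeline.

```python
# Hedged sketch of a generic "encoding model" analysis: regress features of a simulated
# controller onto per-voxel fMRI responses, then ask which voxels the model predicts well.
# Feature and BOLD matrices are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 300, 32, 500
X = rng.normal(size=(n_trs, n_features))                    # controller features per fMRI volume
W = rng.normal(size=(n_features, n_voxels)) * (rng.random(n_voxels) > 0.5)
Y = X @ W + rng.normal(scale=2.0, size=(n_trs, n_voxels))   # synthetic BOLD responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
enc = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
pred = enc.predict(X_te)

# Per-voxel prediction accuracy; well-predicted voxels define candidate brain regions.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print("voxels with r > 0.2:", int(np.sum(np.array(r) > 0.2)))
```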
MyoChallenge Round-Table 2: Achieving State-of-the-art Musculoskeletal Control
Oct 17, 2024 10:00 AM Eastern Time
Kaibo He and Pierre Schumacher


Kaibo He and Pierre Schumacher will give a 15-minute presentation on their research, scientific interests, and work. A second 15-minute presentation will cover how they designed their solution, including their approach to the challenge, the rationale behind their design, and why they believe it worked best. Finally, they will spend about 10 minutes discussing what motivated them to join the challenge and how it has advanced their own work.
MyoChallenge Round-Table 1: Achieving State-of-the-art Musculoskeletal Control
Wed., October 2, 2024, 2:00–3:00 AM Eastern Time
Alberto Chiappa and Jungnam Park


In this week's MyoTalks, we are excited to welcome past MyoChallenge winners—Alberto Chiappa and Jungnam Park—to share their experiences and strategies for excelling in the NeurIPS competition track. Achieving biological dexterity remains a key objective in robotics, biomechanics, and neural control of movement. Insights from a variety of scientific disciplines are essential to advance state-of-the-art techniques, and we are thrilled to host the MyoChallenge Round-Table to explore the knowledge our winners have brought from their respective fields toward advancing musculoskeletal control and achieving state-of-the-art performance.
MyoSuite/MyoChallenge: Towards Full-Scale Human Embodied Intelligence
Wed., Sep 11, 2024 10:00 AM - 11:00 AM EDT
Vittorio Caggiano, Chun Kwang Tan, Cheryl Wang



Abstract: Humans are embodied intelligent beings acting in the physical world. Achieving and understanding such intelligence has been the holy grail for the neuroscience, AI, and robotics communities. Nevertheless, a full-scale, end-to-end level of understanding has long been unattainable due to the problem's complexity, and researchers in neuroscience, AI, and robotics have each turned to simpler sub-problems within their fields. Here, we introduce MyoSuite, an embodied AI platform that simulates human intelligence, end-to-end and full-scale, by integrating machine learning, biomechanical muscle models, and neural control of movement. This platform enables the generation of physiologically realistic movements, such as dexterous manipulation, which holds significant potential for applications in prosthetics, rehabilitation, neuroscience, and humanoid robotics. We will also introduce SAR and MyoDex, our state-of-the-art models for human in-hand dexterity, and MyoChallenge-24, this year's NeurIPS competition track for building the best embodied human models.
Together with Theoretical and Computational Neuroscience Journal Club at Johns Hopkins University
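For readers who want to try the platform, the sketch below loads one MyoSuite musculoskeletal environment and drives it with random muscle activations, following the project's documented gym-style quick start. The environment ID and the reset/step return signatures may differ across MyoSuite versions, so treat this as a sketch rather than version-exact code.

```python
# Hedged quick-start sketch: load a MyoSuite environment and apply random muscle
# activations. Based on the project's documented gym-style API; details may vary by version.
import myosuite          # registers the Myo* environments on import
import gym               # newer MyoSuite releases route this through gymnasium

env = gym.make("myoElbowPose1D6MRandom-v0")   # example elbow-posing task ID from the docs
env.reset()
for _ in range(100):
    action = env.action_space.sample()              # random muscle activations
    obs, reward, done, *rest = env.step(action)     # tolerant of 4- or 5-tuple step returns
    if done:
        env.reset()
env.close()
```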
Sensing and stimulating the brain to restore neurological function
Aug. 1, 2025, 1:30 PM Eastern Time
Doug Weber (Carnegie Mellon University, Mechanical Engineering & Neuroscience)

Significant advances in materials and microelectronics over the last decade have enabled clinically relevant technologies that measure and regulate neural signaling in the brain, spinal cord, and peripheral nerves. These technologies provide new capabilities for studying basic mechanisms of information processing and control in the nervous system, while also creating new opportunities for restoring function lost to injury or disease. Neural sensors can also measure the activity of motor neurons to enable direct neural control over prosthetic limbs and assistive technologies. Conversely, these neural interface technologies can stimulate activity in sensory and motor neurons to reanimate paralyzed muscles. Although many of these applications rely currently on devices that must be implanted into the body for precise targeting, ultra-miniaturized devices can be injected through the skin or vascular system to access deep structures without open surgery. This talk will focus on efforts to develop wearable and injectable neural interfaces for restoring or improving motor function in people with paralysis due to stroke, spinal cord injury, ALS, and other neurological disorders.
Disturbance detection during locomotion and effective assistance for balance recovery in aging gait
Aug. 8, 2025, 1:30 PM Eastern Time
Maria Tagliaferri (Carnegie Mellon University, Mechanical Engineering)

Falls during daily ambulation are a leading cause of injury among older adults, often resulting from delayed physiological responses to balance disturbances such as slips and trips. Lower-limb exoskeletons hold promise for reducing fall risk by detecting and responding to these perturbations faster than the human user. However, a critical first step toward effective exoskeleton-based balance support is the development of real-time, onboard methods for perturbation detection. While whole-body angular momentum (WBAM) is a commonly used metric, it is suboptimal for exoskeleton applications due to its high computational demands and reliance on extensive parameter tuning. To address these limitations, our group is developing a novel perturbation detection framework based on lower-limb kinematics during walking. In parallel, we aim to bridge key knowledge gaps regarding when and how assistance should be applied to effectively enhance the user's balance recovery. Using a single-degree-of-freedom hip exoskeleton device developed in the lab, we are investigating human responses to a range of sagittal-plane perturbations to inform the design of control strategies that augment balance without interfering with natural movement. Specifically, we are implementing and evaluating both biomechanical model-based and neural network-based control architectures to understand their effect on recovery time, muscle activation, and metabolic cost in response to perturbations.
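As a toy illustration of kinematics-based perturbation detection, the sketch below flags samples where a measured hip-angle trajectory deviates far from its nominal, phase-aligned profile. The nominal profile, noise level, and threshold rule are all made-up assumptions; the group's actual framework is considerably more sophisticated.

```python
# Hedged sketch: flag a gait perturbation when lower-limb kinematics deviate strongly
# from the expected trajectory at the same gait phase. Illustrative only.
import numpy as np

phase = np.linspace(0, 2 * np.pi, 200)           # one gait cycle, phase-normalized
nominal_hip = 20 * np.sin(phase)                 # nominal hip angle profile (deg), made up

rng = np.random.default_rng(0)
measured = nominal_hip + rng.normal(scale=1.0, size=phase.size)
measured[120:140] += 15.0                        # simulated trip: sudden hip-angle error

deviation = np.abs(measured - nominal_hip)
baseline = measured[:100] - nominal_hip[:100]    # unperturbed portion for calibration
threshold = 6.0 * np.std(baseline)
detected = np.flatnonzero(deviation > threshold)
print("first detection at phase sample:", int(detected[0]))
```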
Learning from Demonstrations: from Generative Adversarial Training to Representation Learning
Aug. 13, 2025, 1:30 PM Eastern Time
Chenhao Li (ETH Zurich, AI Center)

This talk explores recent advances in learning from demonstrations (LfD), with a focus on motion priors and policy learning for robotics and embodied agents. We examine two primary methodological streams: generative adversarial training and feature-based representation learning. We compare the two methods and discuss the key challenges they present, including issues such as discriminator saturation, mode collapse, limited or noisy data, and sparse supervision. To address these challenges, we present a series of algorithmic innovations, including Wasserstein-based adversarial frameworks, constrained style mimicry, mutual information maximization, latent manifold representations via frequency-domain parameterization, and automatic reference generation. These techniques enable more robust, data-efficient learning and allow policies to generalize beyond the original demonstration data. Finally, we highlight the connections and differences between the two approaches, and how they can be leveraged together to advance motion learning in complex environments.
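To ground the "generative adversarial training" stream discussed above, here is a toy sketch of the core GAIL-style idea: a discriminator is trained to separate demonstration states from policy states, and its output is converted into a reward for the policy. The logistic-regression discriminator and Gaussian state distributions are stand-in assumptions, not any of the specific algorithms from the talk.

```python
# Hedged sketch of the core adversarial-imitation idea: a classifier separates demonstration
# states from policy states, and its output becomes a reward signal. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
demo_states   = rng.normal(loc=1.0, size=(500, 4))   # states visited by the expert
policy_states = rng.normal(loc=0.0, size=(500, 4))   # states visited by the current policy

X = np.vstack([demo_states, policy_states])
y = np.concatenate([np.ones(500), np.zeros(500)])    # 1 = demonstration, 0 = policy
disc = LogisticRegression().fit(X, y)

# Reward the policy for visiting states the discriminator mistakes for demonstrations.
d = disc.predict_proba(policy_states)[:, 1]
reward = np.log(d + 1e-8) - np.log(1.0 - d + 1e-8)   # one common GAIL-style reward shaping
print("mean imitation reward for current policy:", float(reward.mean()))
```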
Making 3D Human Digitization Affordable, Efficient, and Accessible
Aug. 15, 2025, 1:30 PM Eastern Time
YoungJoong Kwon (Emory University, Computer Science)

Human digitization—a process that digitally captures a subject's appearance, expressions, and movements—holds tremendous promise for transcending physical barriers and improving lives. Yet despite its potential, the technology remains largely inaccessible due to costly studio setups and reliance on specialized expertise. In this talk, I will discuss how these limitations can be addressed by presenting affordable, efficient, and user-friendly approaches to human digitization. First, I will focus on reconstruction from sparse observations, leveraging 3D priors and temporal information to compensate for limited camera inputs and reliably capture geometry even under occlusions. Second, I will introduce an efficient representation that reduces computational demands, using light fields for fast, high-quality synthesis. Finally, I will propose easy-to-interact representations that eliminate complex pipelines: by integrating generative models, a single reference image and minimal user input can drive the creation of novel poses and views without extensive test-time optimization. Lastly, I will explore potential applications of these approaches, highlighting how they might further improve lives through more affordable, efficient, and accessible human digitization solutions.
Predictive Principles of Motor Behavior
Aug. 22, 2025, 1:30 PM Eastern Time
Nidhi Seethapathi (MIT, Brain and Cognitive Sciences & EECS)

The best current robots still fall short of the efficiency and safety guarantees exhibited by biological systems. One way to understand this superior performance is to develop computational models that predict how animals select, execute, and learn everyday movements. Despite this need, most of our current computational and theoretical understanding is limited to simple tasks or explanatory models with limited predictive breadth. My talk will highlight the predictive principles of safe and efficient motor behavior we've uncovered recently: the cost functions, controller structures, and learning rules. These principles will provide a blueprint for engineering human-like performance in wearable and autonomous robots.
Learning, Hierarchies, and Reduced Order Models
Oct. 3, 2025, 1:30 PM Eastern Time
Steve Heim (Cornell University, Research Scientist)

With the advent of ever more powerful compute and learning pipelines that offer robust end-to-end performance, are hierarchical control frameworks with different levels of abstraction still useful? Hierarchical frameworks with reduced-order models (ROMs) have been commonplace in model-based control for robots, primarily to make long-horizon reasoning computationally tractable. I will discuss some of the other advantages of hierarchies, why we want ROMs and not simply latent spaces, and the importance of matching the time scale to each level of the hierarchy. In particular, I will show some results in learning for legged robots using ROMs with cyclic inductive bias, with both hand-designed and data-driven ROMs. I will also discuss using viability measures to estimate the intuitive notion of "how confident/safe is this action" and why this is only useful at the right level of abstraction.
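As a concrete example of a reduced-order model, the sketch below simulates the linear inverted pendulum (LIP), a standard textbook ROM for legged locomotion, and computes its capture point. It is offered only to make the term "ROM" tangible; the hand-designed and data-driven ROMs in the talk are not necessarily of this form.

```python
# Hedged sketch: the linear inverted pendulum (LIP), a classic reduced-order model for
# legged locomotion, with its capture point. Parameters are illustrative.
import numpy as np

g, z0 = 9.81, 0.9                 # gravity (m/s^2) and constant CoM height (m)
omega = np.sqrt(g / z0)
dt, steps = 0.01, 100

x, xdot, p = 0.0, 0.3, 0.0        # CoM position, CoM velocity, foot (pivot) position
for _ in range(steps):            # LIP dynamics: xddot = omega^2 * (x - p)
    xddot = omega**2 * (x - p)
    xdot += xddot * dt
    x += xdot * dt

# Capture point: where to place the foot to bring the LIP to rest.
capture_point = x + xdot / omega
print(f"CoM after {steps * dt:.1f} s: x = {x:.3f} m, capture point = {capture_point:.3f} m")
```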