Keynotes

Emel Demircan:
Human Movement Understanding and Robotics

Abstract: The goal of our research is to create a cyber-human framework that advances both robotics and biomechanics by deepening our scientific understanding of human motor performance as dictated by musculoskeletal physics and neural control, in order to assist clinicians in quantifying the characteristics of a subject's motion and designing effective motion training treatments. Current technologies do not permit detailed motion reconstruction in real time, which limits their use in clinical settings. This work will combine theory with software, hardware, and sensing technology to synthesize human motion with dynamic, actively controlled, subject-specific musculoskeletal models and to provide real-time visual feedback to a human subject. Robots are revolutionizing the field of prosthetics. With the application of machine learning, prosthetic rehabilitation can become more natural in performance and more sustainable for the human anatomy. In one study, the human activity of scooping various forms of materials was investigated. Optimal human postures were identified for different motion primitives in terms of muscular effort using musculoskeletal modeling. The robot was programmed using a deep convolutional neural network trained to identify the type of material using machine vision. Consequently, the activity can be performed efficiently, guided by human intuition, in a dynamic environment. The proposed research thus fully utilizes the mechanical potential of robots within the constraints of the human musculoskeletal system. In another study, a 3D musculoskeletal simulation with real-time body tracking was developed to simulate and analyze the operational space of human upper-limb motion. The simulation was created in Unity3D, while the musculoskeletal models and definitions were imported from OpenSim. We use the immersive VR CAVE system together with musculoskeletal models to advance real-time motion capture and feedback. More recently, we started developing a cyber-physical system that uses sensory information along with computer models of human running to characterize a person's locomotion and to provide multi-modal feedback signals that modify and improve human motion over time. Finally, we developed a simulation-based lower-limb assistive device to provide the scientific understanding of locomotion in people with balance disorders and in those at risk of injury due to heavy loads, which will facilitate the design and control of the next generation of robotic assistive devices.

Biography:
Dr. Emel Demircan is an Assistant Professor in the Department of Mechanical and Aerospace Engineering and the Department of Biomedical Engineering at California State University, Long Beach. Dr. Demircan obtained her Ph.D. in Mechanical Engineering from Stanford University in 2012. She was a postdoctoral scholar at Stanford from 2012 to 2014 and a visiting assistant professor at the University of Tokyo from 2014 to 2015. She was also a part-time scientist at the Lucile Salter Packard Children's Hospital Gait Analysis Lab at Stanford University. Dr. Demircan's research focuses on the application of dynamics and control theory to the simulation and analysis of biomechanical and robotic systems. Her research interests include experimental and computational approaches for the study of human movement, rehabilitation robotics, sports biomechanics, human motion synthesis, natural motion generation in humanoid robotics, and human motor control. In 2014, Dr. Demircan established the IEEE RAS Technical Committee on Human Movement Understanding. She actively collaborates with clinical, athletic, and industrial partners and is involved in professional and outreach activities within the IEEE Robotics and Automation Society (RAS).

Mehdi Benallegue:
Control, Efficiency, Anticipation, and Feedback in Anthropomorphic Motion

Abstract: Underactuation and feasibility constraints drastically limit the dynamics of anthropomorphic systems. This seems to contradict their versatility and their ability to generate low-powered, mostly passive-dynamics gaits. Indeed, such efficient locomotion often requires walkers to give up part of the controllability of their momenta, further reducing the reachable range of motions. However, these dynamics depend closely on the mechanical structure of the walker and come at the expense of much higher sensitivity to perturbations and a reduced scope for anticipation. Conversely, anticipatory planning requires stiffer actuation and more accurate tracking, leading to inefficient control. In this talk, we will discuss how the tradeoff between efficiency and robustness is actually a continuous choice, how stochastic metrics should drive the choice of the locomotion mode, and how proper modeling can take advantage of the feasibility constraints to produce high-quality feedback.

Biography:
Mehdi Benallegue received an engineering degree from the Institut National d'Informatique (INI), Algeria, in 2007, the M.Sc. degree from the University of Paris 7, Paris, France, in 2008, and the Ph.D. degree from the Université de Montpellier 2, France, in 2011. He was a postdoctoral researcher in a neurophysiology laboratory at the Collège de France and at LAAS-CNRS. He is currently a Research Associate with the Humanoid Research Group at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. His research interests include the estimation and control of humanoid robots, biomechanics, neuroscience, and computational geometry.