2016 Texas A&M Robotics Workshop

Texas A&M University, College Station, TX
April 1, 2016

2016 Texas A&M Robotics Workshop: Program

The workshop will be held in room 2005 of the Emerging Technologies Building (ETB) on the campus of Texas A&M University.
The program will start at 9:00 am and end at 5:00 pm.


9:30-10:30: Session 1
Session Chair: Jory Denny (Texas A&M)
Efficient Robot Motion Planning with Practical Performance Guarantee, Kostas Bekris, Rutgers University
Robotic Search of Transient Targets, Dezhen Song, Texas A&M
Recent Progress: Wearable Hand Orthoses and High-Resolution Contact Sensing, Matei Ciocarlie, Columbia University
10:30-11:00: Break
11:00-12:00: Session 2
Session Chair: Chinwe Ekenna (Texas A&M)
Design and Life Support Based on Unified Digital Human and Humanoid, Eiichi Yoshida, AIST
Computational Control Engines For Robotics Systems, Todd Murphey, Northwestern University
Compliant robotic systems based on variable stiffness actuation, Raffaella Carloni, University of Twente
12:00-2:00: Lunch
1:00-2:00: Lab/Campus Tours: We will leave from ETB around 1:00 pm and return in time for the afternoon sessions
2:00-3:00: Session 3
Session Chair: Sam Rodriguez (Texas A&M)
Humans and Machines of Like Mind: The Future of Team Performance, Julie Shah, MIT
Soft Robotics: how an octopus can teach us to build robots, Cecilia Laschi, Scuola Superiore Sant'Anna
Participatory Route Planning, Ming C. Lin, UNC, Chapel Hill
3:00-3:30: Break
3:30-4:30: Session 4
Session Chair: Shawna Thomas (Texas A&M)
Robust 3D Tracking of Unknown Objects, Hedvig Kjellström, KTH
High-Performance Provably Safe Autonomous Vehicles and Systems, Sertac Karaman, MIT
Human Motion Measurement and Analysis for Rehabilitation, Dana Kulic, University of Waterloo
4:30-5:00: Panel Discussion on the Future of Robotics - Visions & Challenges
Moderator: Nancy Amato, Texas A&M University
The Workshop Speakers
Mac Schwager, Stanford University
Dylan Shell, Texas A&M University

Abstracts and Biographies

"Efficient Robot Motion Planning with Practical Performance Guarantee"
Kostas Bekris
Rutgers University

This talk will summarize contributions that aim to identify the appropriate tradeoff between computational efficiency and performance guarantees for sampling-based motion planning algorithms in the context of different robot motion planning challenges. In particular, the conditions for achieving asymptotic optimality with these methods have recently been identified. We will highlight our results, which indicate that such planners are also probably near-optimal after finite computation time. Furthermore, sparse representations can be used to efficiently return near-optimal solutions. An important recent contribution has been the development of a variant that achieves asymptotic optimality for kinodynamic planning without access to a steering function. This result also has implications for high-dimensional planning under uncertainty. The talk will also describe the application of such methods to manipulation task planning and how efficient solutions can be achieved that are also probabilistically complete for object rearrangement problems.
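The sampling-based planners discussed above all build on the same primitive: grow a tree of collision-free states by extending toward random samples. As a point of reference, here is a generic textbook RRT in a 10x10 plane; it is not the speaker's asymptotically optimal or kinodynamic variants, and the `is_free` collision checker is a placeholder supplied by the caller.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, iters=2000):
    """Grow a tree of collision-free 2D states from start toward goal."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Goal-biased sampling: head straight for the goal 5% of the time.
        sample = goal if random.random() < 0.05 else (random.uniform(0, 10),
                                                      random.uniform(0, 10))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[near], sample)
        if d == 0.0:
            continue
        t = min(1.0, step / d)  # extend at most `step` toward the sample
        new = tuple(a + t * (b - a) for a, b in zip(nodes[near], sample))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:  # walk parent pointers back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None  # no path found within the iteration budget
```

The asymptotically optimal planners the talk covers extend this skeleton by rewiring the tree as better connections are found; the basic sample-extend-check loop is unchanged.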

Kostas Bekris is an Assistant Professor of Computer Science at Rutgers University. He completed his PhD degree in Computer Science at Rice University, Houston, TX, under the supervision of Prof. Lydia Kavraki. He was Assistant Professor at the University of Nevada, Reno between 2008 and 2012. He works in robotics and his interests include motion planning, especially for systems with complex dynamics and robot manipulators, as well as applications in smart environments and simulations. His research group has been supported by NSF, NASA (Early CAREER Faculty award), DHS, DoD and the NY/NJ Port Authority.

"Compliant robotic systems based on variable stiffness actuation"
Raffaella Carloni
University of Twente

Recent developments in the field of compliant robotics and the ever-increasing desire and need for human-robot interaction have caused a paradigm shift from robots designed to be as stiff as possible, to achieve large control bandwidth, to robots that are intrinsically compliant. This talk provides an overview of my current research on variable stiffness actuators and their application to more complex robotic systems. I will present a modeling method for generic compliant robotic manipulators that is based on graph theory and the port-Hamiltonian formalism and allows for a modular approach to the interconnection of rigid bodies with (variable) stiffness actuators by means of kinematic pairs. Within this framework, a description of the workspace stiffness is derived, together with a stiffness control scheme based on a geometry-based metric that also provides guidelines for the design of compliant manipulators. Finally, I will present the design and realization of a robotic arm with variable stiffness actuators, as proposed for the European SHERPA project, which focuses on smart collaboration between humans and ground-aerial robots to improve search and rescue activities.

Raffaella Carloni received the M.Sc. and Ph.D. degrees from the University of Bologna (Italy) and joined the University of Twente (The Netherlands) in February 2008, where she is currently an Associate Professor. She is a member of the IFAC Technical Committee on Robotics, an Associate Editor of the IEEE Transactions on Robotics, and a member of the IEEE Robotics and Automation Society Conference Editorial Board.
Her research activities focus on the modeling, mechanical design, and control of compliant robots. The work is characterized by a mechatronic approach to developing the fundamentals needed to realize robotic devices that can safely and reliably operate in close vicinity to humans or directly interact with them. Application scenarios of particular interest are: (1) bipedal robots and leg prostheses, (2) haptic teleoperation of aerial vehicles for inspection by contact, and (3) assistive robotic arms.

"Recent Progress: Wearable Hand Orthoses and High-Resolution Contact Sensing"
Matei Ciocarlie
Columbia University

In this talk I will describe recent progress on two projects related to robotic manipulation. The first one focuses on fully wearable robotic hand rehabilitation devices, which could extend training and improve quality of life for patients affected by hand impairments. We investigate the capability of single-actuator devices to assist whole-hand movement patterns through a network of exotendons. In experiments with stroke survivors, we measured the force levels needed to overcome various levels of spasticity and open the hand for grasping, and qualitatively demonstrated the ability to execute fingertip grasps. The second project focuses on a method for achieving high resolution tactile sensing. One traditional approach is to fabricate a large number of taxels, each delivering an individual, isolated response to a stimulus. In contrast, we propose a method where the sensor simply consists of a continuous volume of piezoresistive elastomer with a number of electrodes embedded inside. In addition to extracting rich location information from few wires, this approach lends itself to simple fabrication methods and makes no assumptions about the underlying geometry, simplifying future integration with robot fingers.

Matei Ciocarlie is an Assistant Professor at Columbia University's Mechanical Engineering Department, with secondary affiliations in the Computer Science Department and the Data Science Institute. His main interest is in reliable robotic performance in unstructured, human environments, focusing on areas such as novel robotic hand designs and control, autonomous and Human-in-the-Loop mobile manipulation, shared autonomy, teleoperation, and assistive robotics. Matei completed his Ph.D. at Columbia University; his doctoral dissertation, focused on reducing the computational complexity associated with dexterous robotic grasping, was the winner of the 2010 Robotdalen Scientific Award. Before joining the faculty at Columbia, Matei was a Research Scientist and then Group Manager at Willow Garage, Inc., and then a Senior Research Scientist at Google, Inc. In recognition of his work, Matei was awarded the 2013 IEEE Robotics and Automation Society Early Career Award, a 2015 Young Investigator Award by the Office of Naval Research, a 2016 CAREER Award by the National Science Foundation, and a 2016 Sloan Fellowship.

"High-performance Provably Safe Autonomous Vehicles and Systems"
Sertac Karaman
Massachusetts Institute of Technology

Autonomous vehicles hold the potential for tremendous impact. However, substantial societal and economic barriers remain before their widespread adoption. For instance, for societal acceptance they must guarantee safety, and for economic viability they must deliver high performance. In this talk, we present planning and control algorithms, both at the vehicle level and at the systems level, that guarantee safety and high performance. At the vehicle level, we introduce novel algorithms for agile mobility for a single high-performance autonomous vehicle. The new algorithms construct arbitrarily good solutions for stochastic optimal control problems using tensor decomposition methods; their running time scales linearly with dimension and polynomially with the rank of the optimal cost-to-go function. We demonstrate the new algorithms on a simulated perching problem. At the systems level, we present coordination algorithms for a fleet of autonomous vehicles passing through an intersection. The new algorithms extend existing algorithms for management of data networks to systems with dynamics, and they provide provable guarantees on performance and safety. At both levels, we find that pushing systems to their performance limits renders differential constraints active. As a result, the new algorithms must carefully handle, and in some cases exploit, the underlying dynamics.

Sertac Karaman is the Charles Stark Draper Assistant Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (since Fall 2012). He obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an Army Research Office Young Investigator Award in 2015, a National Science Foundation Faculty Career Development (CAREER) Award in 2014, the AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.

"Robust 3D Tracking of Unknown Objects"
Hedvig Kjellström
KTH Royal Institute of Technology

Visual tracking of unknown objects is an essential task in robotic perception, of importance to a wide range of applications. In the general scenario, the robot has no full 3D model of the object beforehand, just the partial view of the object visible in the first video frame. A tracker with this information only will inevitably lose track of the object after occlusions or large out-of-plane rotations. The way to overcome this is to incrementally learn the appearances of new views of the object. However, this bootstrapping approach is sensitive to drifting due to occasional inclusion of the background into the model.
We propose a method that exploits 3D point coherence between views to overcome the risk of learning the background, by only learning the appearances at the faces of an inscribed cuboid. This is closely related to the popular idea of 2D object tracking using bounding boxes, with the additional benefit of recovering the full 3D pose of the object as well as learning its full appearance from all viewpoints.
We show quantitatively that the use of an inscribed cuboid to guide the learning leads to significantly more robust tracking than with other state-of-the-art methods. We show that our tracker is able to cope with 360 degree out-of-plane rotation, large occlusion and fast motion.
This is joint work with Alessandro Pieropan, Niklas Bergström, and Masatoshi Ishikawa.

Hedvig Kjellström is a Professor of Computer Science and the head of the Computer Vision and Active Perception Lab (CVAP) at KTH in Stockholm, Sweden. She received an MSc in Engineering Physics and a PhD in Computer Science from KTH in 1997 and 2001, respectively. The topic of her doctoral thesis was 3D reconstruction of human motion in video. Between 2002 and 2006 she worked as a scientist at the Swedish Defence Research Agency, where she focused on Information Fusion and Sensor Fusion. In 2007 she returned to KTH, pursuing research in activity analysis in video. She is especially interested in the use of multimodality and context in video analysis of human non-verbal communicative behavior and activity, and has investigated this within the European projects PACO-PLUS and TOMSY.
In 2010, she was awarded the Koenderink Prize for fundamental contributions in Computer Vision for her ECCV 2000 article on human motion reconstruction, written together with Michael Black and David Fleet. She has written around 70 papers in the fields of Robotics, Computer Vision, Information Fusion, Machine Learning, Cognitive Science, Speech, and Human-Computer Interaction. She is presently mostly active within the Robotics area.

"Human Motion Measurement and Analysis for Rehabilitation"
Dana Kulić
University of Waterloo

In this talk, we will describe a system for on-line measurement and analysis of human movement that can be used to provide feedback to patients and clinicians during the performance of rehabilitation exercises. The system consists of wearable inertial measurement unit (IMU) sensors attached to the patient's limbs. The IMU data is processed to estimate joint positions. We will describe an approach to improve the accuracy of pose estimation via on-line learning of the dynamic model of the movement, customized to each patient. Next, the pose data is segmented into exercise segments, identifying the start and end of each motion repetition automatically. The pose and segmentation data is visualized in a user interface, allowing the patient to simultaneously view their own movement overlaid with an animation of the ideal movement. We will present results of user studies analyzing the system capabilities for gait measurement of stroke patients undergoing gait rehabilitation, and demonstrating the significant benefits of feedback with patients undergoing rehabilitation following hip and knee replacement surgery.
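The pose-estimation step above fuses inertial data streams that are individually insufficient: gyroscope integration drifts, while accelerometer tilt readings are noisy but drift-free. The talk's approach customizes a learned dynamic model per patient; as a much simpler point of reference, a standard complementary filter illustrates the basic fusion idea for a single tilt angle (the `alpha` weight, time step, and signal layout below are illustrative choices, not the system's actual parameters).

```python
import math

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates (rad/s) and accelerometer readings (ax, az)
    into a sequence of tilt-angle estimates, one per time step."""
    angle, estimates = 0.0, []
    for w, (ax, az) in zip(gyro, accel):
        accel_angle = math.atan2(ax, az)  # gravity-referenced tilt: noisy, no drift
        gyro_angle = angle + w * dt       # integrated rate: smooth, but drifts
        # Trust the gyro at short time scales, the accelerometer in the long run.
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates
```

With a stationary limb held at 45 degrees, the estimate converges to the accelerometer's tilt; when the limb moves, the gyro term dominates each step, which is the property that makes such filters usable for on-line movement feedback.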

Dana Kulić received the combined B.A.Sc. and M.Eng. degree in electro-mechanical engineering, and the Ph.D. degree in mechanical engineering, from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulić worked in the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulić was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulić is currently an Associate Professor in the Electrical and Computer Engineering Department at the University of Waterloo, Canada. In 2014, she was awarded the Ontario Early Researcher Award for her work on human motion analysis for rehabilitation and human-robot interaction. Dr. Kulić is the founding co-chair of the IEEE Robotics and Automation Society Technical Committee on Human Motion Understanding. Her research interests include human motion understanding, human-robot interaction, and mechatronics.

"Soft Robotics: how an octopus can teach us to build robots"
Cecilia Laschi
Scuola Superiore Sant'Anna

Look at an octopus with a roboticist’s eyes: its arms are soft and deformable, and they can bend in any direction, at any point along the arm; yet they can stiffen when needed, and they can grasp and pull objects with considerable strength. The octopus does not have a large brain, yet it can control this huge range of possible movements and motion parameters.
The octopus is undoubtedly a good model for soft robotics, and an extreme one, considering that it has no rigid structures of any kind. By understanding the secrets of the octopus's soft dexterity and by copying a few key principles, a soft-bodied, eight-arm robot that can crawl in water and grasp objects with stiffened arms has been developed and validated. The octopus-like robot is a good example of the feasibility of soft robots and related technologies.
Today, there is an important community of scientists studying Soft Robotics, which is not only facing the many interdisciplinary challenges for building soft robot components and systems, but is also focusing on helpful applications: a soft endoscope for biomedical applications, a soft arm for helping elderly people in the shower, and a ‘grown-up’ octopus robot helping humans in underwater explorations and operations.

Prof. Cecilia Laschi is Full Professor of Biorobotics at the BioRobotics Institute of the Scuola Superiore Sant'Anna in Pisa, Italy, where she serves as the Rector’s delegate to Research. She graduated in Computer Science at the University of Pisa in 1993 and received her Ph.D. in Robotics from the University of Genoa in 1998. In 2001-2002 she was a JSPS visiting researcher at Waseda University in Tokyo.
Her research interests are in the field of biorobotics, and she is currently working on soft robotics, humanoid robotics, and neurodevelopmental engineering. She has been, and currently is, involved in many national and EU-funded projects; she was the coordinator of the ICT-FET OCTOPUS Integrating Project, which led to one of the first soft robots, and she coordinates RoboSoft, the European Coordination Action on Soft Robotics. She has authored or co-authored more than 90 papers in ISI journals (over 200 in total). She is Chief Editor of the Specialty Section on Soft Robotics of Frontiers in Robotics and AI, and she is on the Editorial Boards of Bioinspiration & Biomimetics, Frontiers in Bionics and Biomimetics, Applied Bionics and Biomechanics, and Advanced Robotics. She is a member of the IEEE, the Engineering in Medicine and Biology Society, and the Robotics & Automation Society, where she served as an elected AdCom member.

"Self-Folding Origami"
Jyh-Ming Lien
George Mason University

In recent years, we have witnessed an acceleration in the development of self-folding origami, or self-folding machines, due to advances in robotics engineering and material science. These self-folding origami can fold themselves into a desired shape via micro-thick folding actuators or by reacting to various stimuli such as light, heat, and magnetic fields. Although the development is still in its early stages, there are already many applications, such as surgical instruments for minimally invasive surgery, where there is a need for very small devices that can be deployed inside the body to manipulate tissue.
Designing self-folding origami that can assume or approximate one or more target shapes requires careful foldability analysis. In this talk, I will describe recent progress made by my research group to advance foldability analysis. I will first introduce the basic ideas of motion planning and their application in computational origami. I will then discuss computational challenges faced in folding rigid origami, a class of origami whose entire surface remains rigid during folding except at crease lines. To address these challenges, I will present techniques that reuse computation and sample in a discrete domain. Finally, I will present preliminary results on the effort of making 3D shapes foldable.

Jyh-Ming Lien is an Associate Professor in the Department of Computer Science at George Mason University. He is the director of the Motion and Shape Computing (MASC) group and affiliated with the Autonomous Robotics Laboratory. He received his Ph.D. in Computer Science from Texas A&M University in 2006. Prior to joining George Mason in 2007, he was a postdoctoral researcher at UC Berkeley. His research goal is to develop efficient, robust and practical algorithms for representing, manipulating and analyzing massive geometric data of shape and motion. His research finds applications in the areas of computational geometry, computer graphics, GIS, visualization and robotics. His research has been supported by NSF, USGS, DOT, AFOSR, and Virginia Center for Innovative Technology. Images, videos, papers, and software about his work can be found at: https://masc.cs.gmu.edu/

"Participatory Route Planning"
Ming C. Lin
University of North Carolina, Chapel Hill

We present an approach to “participatory route planning”, a novel concept that takes advantage of mobile devices, such as cellular phones or embedded systems in cars, to form an interactive, participatory network of vehicles that plan their travel routes based on current traffic conditions and the existing routes planned by the network of participants, thereby enabling more informed travel decisions for each participating user. The premise of this approach is that a route, or plan, for a vehicle is also a prediction of where the car will travel. If routes are created for a sizable percentage of the total vehicle population, an estimate of the overall traffic pattern is attainable. Taking planned routes into account as predictions allows the entire traffic route planning system to better distribute vehicles and minimize traffic congestion. We present an approach that is suitable for realistic, city-scale scenarios, a prototype system that demonstrates feasibility, and experiments using a state-of-the-art microscopic traffic simulator.
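The core feedback loop described above, in which each planned route raises the predicted cost of the edges it uses for the planners that follow, can be illustrated with ordinary shortest-path search. This is a toy sketch of the idea, not the city-scale system from the talk; the linear congestion penalty `per_car` and the example graph are invented for illustration.

```python
import heapq
from collections import defaultdict

def plan_route(graph, src, dst, load, per_car=0.5):
    """Dijkstra over edge costs inflated by the number of already-planned
    routes using each edge (`load`); registers the new route's edges."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale priority-queue entry
        for v, base in graph[u]:
            # Planned routes act as traffic predictions: more plans on an
            # edge means a higher expected travel time for newcomers.
            cost = d + base * (1 + per_car * load[(u, v)])
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(pq, (cost, v))
    if dst not in dist:
        return None
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    path.reverse()
    for a, b in zip(path, path[1:]):  # this plan now predicts future traffic
        load[(a, b)] += 1
    return path
```

With two parallel roads from A to B, the first vehicle takes the faster direct road; its registered plan inflates that road's predicted cost, so a later vehicle is routed around it, spreading the load.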

Ming C. Lin is currently John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and nine best paper awards at international conferences. She is a Fellow of ACM and IEEE.
Her research interests include physically-based modeling, virtual environments, sound rendering, haptics, robotics, and geometric computing. She has (co-)authored more than 250 refereed publications in these areas and co-edited/authored four books. She has served on over 150 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is a member of IEEE CS Board of Governors and CRA-W Board of Directors, Chair of 2015 IEEE Computer Society (CS) Transactions Operation Committee, and a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014). She also has served on several Editorial Boards, steering committees and advisory boards of international conferences, government agencies, and industry.

"Computational Control Engines For Robotics Systems"
Todd D. Murphey
Northwestern University

Robotic applications require real-time control for high-dimensional, nonlinear/nonsmooth systems operating in an uncertain environment, often with limited actuation, poor quality sensors, and low bandwidth. Computational simulation tools have evolved in the last two decades to efficiently meet robotics needs, whereas computational control and estimation tools largely have not. This talk will focus on substantial progress toward bringing fully automated nonlinear control synthesis in software to robotics applications. The first part of this talk will focus on the role of integration schemes in real-time, low-bandwidth systems. The second part will focus on sequential action control (SAC), a control formulation with an analytic feedback solution for general affine nonlinear systems. SAC provides continuous-time control that is globally well-posed, inherits stability properties from classical linear and model-predictive techniques, and admits both control saturation and unilateral state constraints. SAC scales to systems with many degrees of freedom and for some examples can be executed on a mobile phone, indicating that real-time nonlinear control is feasible for many more systems than previously believed. Lastly, SAC can be algorithmically automated, potentially enabling a computational control engine for any robotic system.

Dr. Todd D. Murphey is an Associate Professor of Mechanical Engineering at Northwestern University. He received his B.S. degree in mathematics from the University of Arizona and the Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology. His laboratory is part of the Neuroscience and Robotics Laboratory, and his research interests include computational methods for mechanics and real-time optimal control, physical networks, and information theory in physical systems. Honors include the National Science Foundation CAREER award in 2006, membership in the 2014-2015 DARPA/IDA Defense Science Study Group, and Northwestern's Charles Deering McCormick Professorship of Teaching Excellence. He is a Senior Editor of the IEEE Transactions on Robotics.

"The Chimera of Robust Place Recognition"
José Neira
Universidad de Zaragoza

In this talk I will discuss the history of the place recognition, or loop closing, problem in SLAM. I will present some of our group's most recent algorithms and results in this field, and I will also explain why hoping for a fool-proof place recognition algorithm is a chimera: SLAM systems should instead accommodate possible failures in place recognition.

José Neira is a full professor of Computer Science at the Universidad de Zaragoza, Spain, where he teaches courses in computer programming, computer vision, compiler theory, robotics, and machine learning in several degree programs; he also lectures frequently as an invited speaker at universities, research centers, and conferences throughout the world. He has been an invited researcher at the Massachusetts Institute of Technology, the University of Oxford, Imperial College London, the Technical University of Munich, and the Instituto Superior Tecnico of Lisbon. José has published more than 50 books, journal papers, and conference papers on the subject of perception for intelligent robots. He is among the top 5% most cited researchers in robotics worldwide according to Google Scholar; his work has received more than 5,000 citations. José has served as an associate editor for the IEEE Transactions on Robotics, and has been an invited editor for Robotics and Autonomous Systems, the Journal of Field Robotics, Autonomous Robots, and the IEEE Transactions on Robotics. He has been involved in the organization of many scientific events, including Robotics: Science and Systems (RSS; the 2010 edition was organized at the University of Zaragoza), the IEEE International Conference on Robotics and Automation (ICRA), the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the International Joint Conference on Artificial Intelligence (IJCAI), and the AAAI Conference on Artificial Intelligence. He is also involved as an expert in the evaluation of the FP7 and H2020 Research and Innovation programs of the European Commission, as well as the European Research Council grant programs.

Mac Schwager
Stanford University

Mac Schwager is an assistant professor in the Aeronautics and Astronautics department at Stanford University. He obtained his BS degree in 2000 from Stanford University, his MS degree from MIT in 2005, and his PhD degree from MIT in 2009. He was a postdoctoral researcher working jointly in the GRASP lab at the University of Pennsylvania and CSAIL at MIT from 2010 to 2012. From 2012 to 2015 he was an assistant professor of Mechanical Engineering at Boston University. His research interests are in distributed algorithms for control, perception, and learning in groups of robots and animals. He received the NSF CAREER award in 2014.

"Humans and Machines of Like Mind: The Future of Team Performance"
Julie Shah

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. This is because people and machines don’t think the same way. In this talk, I share recent work that enables machines to learn from the best human team members how to extract information from human conversation and to participate in real time to improve human team decision-making. We conduct experiments in which intelligent agents built on these models assist people in classification and planning tasks. Our studies demonstrate statistically significant improvements in people’s performance on decision-making tasks when they are aided by the intelligent agents.

Julie Shah is an Associate Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork. In 2014, Shah was recognized with an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments," and by the MIT Technology Review TR35 list as one of the world’s top innovators under the age of 35. Her work on industrial human-robot collaboration was also recognized by the Technology Review as one of the 10 Breakthrough Technologies of 2013, and she has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, the International Symposium on Robotics, and the Human Factors and Ergonomics Society.

"Robotic Search of Transient Targets"
Dezhen Song
Texas A&M University

Searching for objects in physical space has long been one of the most important tasks for mobile robots. Transient targets are intermittently signal-emitting objects such as cellphone users, airplane black boxes, and unknown sensor networks. Searching for such targets is difficult because a target is found only if the signal emission and sensing range conditions are simultaneously satisfied. The problem is inherently stochastic, which makes traditional coverage-based searching techniques less effective. Considering a large search region, sparse target distribution, expected search time, multi-target signal correspondence, variable signal transmission power, and the efficient coordination of multiple robots, we report a series of algorithms developed over the last decade that handle cases ranging from single-target-single-robot to decentralized multi-target-multi-robot settings, with different sensing and communication constraints and explicit performance analyses. Extensive simulation and physical experiment results are also included in the talk.
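To see why intermittent emission defeats coverage-based search, consider a toy one-dimensional version of the problem: detection requires the robot to be within sensing range at a moment the target happens to emit, so a single sweep can pass right over a silent target. This Monte Carlo sketch estimates the expected search time; all parameters are invented for illustration and it is unrelated to the algorithms presented in the talk.

```python
import random

def expected_search_time(p_emit=0.2, sense_range=1.0, length=10.0,
                         speed=1.0, dt=0.1, trials=2000, seed=1):
    """Monte Carlo estimate of the mean time for a robot sweeping a 1-D
    corridor to detect a stationary target that emits a detectable signal
    only intermittently (with probability p_emit per time step)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        target = rng.uniform(0, length)
        x, t, direction = 0.0, 0.0, 1.0
        while True:
            # Detection needs BOTH conditions at once: in range AND emitting.
            if abs(x - target) <= sense_range and rng.random() < p_emit:
                break
            x += direction * speed * dt
            if x <= 0.0 or x >= length:  # bounce at corridor ends
                direction = -direction
            t += dt
        total += t
    return total / trials
```

Lowering the emission probability raises the expected search time even though the robot's coverage pattern is unchanged, which is the effect that makes coverage-based techniques less effective here.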

Dezhen Song is an Associate Professor in the Department of Computer Science and Engineering, Texas A&M University, College Station, TX, USA. Song received his Ph.D. from the University of California, Berkeley, in 2004, and his B.S. and M.S. degrees from Zhejiang University in 1995 and 1998, respectively. Song's primary research areas are perception, networked robots, visual navigation, automation, and stochastic modeling. Dr. Song received the Kayamori Best Paper Award at the 2005 IEEE International Conference on Robotics and Automation (with J. Yi and S. Ding), and the NSF Faculty Early Career Development (CAREER) Award in 2007. From 2008 to 2012 he was an Associate Editor of the IEEE Transactions on Robotics (T-RO), and from 2010 to 2014 an Associate Editor of the IEEE Transactions on Automation Science and Engineering (T-ASE). He is currently an Associate Editor for IEEE Robotics and Automation Letters (RA-L).

"Design and Life Support Based on Unified Digital Human and Humanoid"
Eiichi Yoshida
Intelligent Systems Research Institute (IS-AIST)

By approaching the interaction between humans, devices, and environments from various aspects, we aim at a framework unifying human simulation and robotics that is useful for product design and life support. The first axis is to develop a system for human-centered product design, based on understanding humans' recognition and behavior principles, by using a digital human that can model human shape, musculoskeletal structure, and motion, as well as interactions with environments. Another main research topic is the development of a humanoid robot that can reproduce various human behaviors to serve as an evaluator of products such as assistive devices, by estimating their mechanical effects in a quantitative manner, which is difficult with human measurement.

Eiichi Yoshida received M.E. and Ph.D. degrees in Precision Machinery Engineering from the Graduate School of Engineering, the University of Tokyo, in 1993 and 1996, respectively. In 1996 he joined the former Mechanical Engineering Laboratory, later reorganized into the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST/IS-CNRS/ST2I Joint French-Japanese Robotics Laboratory (JRL) at LAAS-CNRS, Toulouse, France, from 2004 to 2008. Since 2009 he has been Co-Director of CNRS-AIST JRL (Joint Robotics Laboratory), UMI3218/RL, and since 2015 he has served as Deputy Director of the Intelligent Systems Research Institute (IS-AIST), AIST, Tsukuba, Japan. His research interests include robot task and motion planning, human modeling, and humanoid robots.

"Panel: The Future of Robotics - Visions & Challenges"
