Full-day Workshop on

Learning and control for autonomous manipulation systems:
the role of dimensionality reduction 

Program

8.30-9.00 Welcome and introduction by the organizers

9.00-9.30 Tamar Flash ─ Principles underlying dimension reduction and compositionality in human movements and robotic implementations

9.30-10.00 Antonio Bicchi ─ Designing and Using Hands with More Synergies

10.00-10.20 Poster and demo spotlight presentations

10.20-10.50 Coffee Break (Posters and Demo Session)

   -Towards Generalizable Associative Skill Memories

   -The Closure Signature and the Stiffness Signature: a Functional Approach to Model Underactuated Compliant Robotic Hands

   -An Extended Passive Motion Paradigm for Human-like Posture & Movement Planning in Redundant Manipulators

   -Acquisition of Variable Impedance Skills by Robot Manipulator

10.50-11.20 Aude Billard ─ Learning the dimensions that do not matter is important to offer robustness in control

11.20-11.50 Dongheui Lee ─ Challenges in Kinesthetic and Teleoperation Teaching of Manipulation Skills

11.50-12.20 Jan Babic ─ Complementary sensorimotor control during physical human-robot collaboration

12.20-13.10 Lunch Break

13.10-13.40 Sergey Levine ─ Deep Robotic Learning and Dexterous Manipulation

13.40-14.10 Matthew Howard ─ Learnt Redundancy Resolution and Constraints in Grasping

14.10-14.30 Poster and demo spotlight presentations

14.30-15.00 Coffee Break (Posters and Demo Session)

    -Learning to Grasp with a Deep Network for 2D Context and Geometric Prototypes for 3D Structure

    -Semantic and Geometric Scene Understanding for Task-oriented Grasping of Novel Objects from a Single View

    -FCN-Based 6D Robotic Grasping for Arbitrary Placed Objects

    -A Robust Grasping Policy Based on Derivative-Free Optimization and Grasp-Quality Neural Networks Trained with Synthetic Point Clouds and Grasps from Dex-Net 2.0

15.00-15.30 Ken Goldberg ─ The New Wave in Robot Grasping

15.30-16.00 Oliver Kroemer ─ Learning Robust Manipulation Skills with Tactile Events

16.00-17.00 Panel session and conclusions

 

________________________________________________________________________________________________________

Jan Babic – Jožef Stefan Institute

Title: Complementary sensorimotor control during physical human-robot collaboration

Abstract: In the past, many studies on human motor control have investigated how individuals perform arm manipulation tasks. In robotics, these studies have frequently been used as a basis for controlling robots in human-robot collaboration scenarios. Nevertheless, a deeper understanding of how multiple human subjects collaborate with each other is needed to further enhance the performance of robots while they collaborate with humans. In this talk I will present our latest experimental study, in which we investigated how pairs of humans perform and co-adapt to each other while physically collaborating to carry out a joint task, and how we used such models to control robots in human-robot cooperative setups.

 

Antonio Bicchi – Centro E. Piaggio 

Title: Designing and Using Hands with More Synergies 

Abstract: To deal with the complexity of grasping and manipulation with human and robot hands, many researchers have been using hierarchically organized representations of reduced dimensions, e.g., principal components of observed hand postures ordered by their statistical relevance to a set of tasks, also known as postural synergies. Ideally, one would like to have tools that exploit this structure to design and control hands of increasing complexity, matching the needs of more advanced tasks. This could in principle be a solution to the minimalist robotics problem, i.e., finding the simplest possible system that solves a given task. However, much remains to be done in this direction. In this talk I will discuss a few preliminary steps taken toward designing and controlling hands with more synergies.
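
As a rough illustration of the reduced representation mentioned in the abstract, here is a minimal sketch (a hypothetical Python example, not the speaker's implementation; the data shapes and function names are assumptions) that extracts postural synergies as principal components of recorded hand joint-angle postures and reconstructs a posture from a few synergy activations:

    import numpy as np

    def postural_synergies(postures, n_synergies=2):
        # postures: (n_samples, n_joints) array of recorded hand joint angles.
        mean = postures.mean(axis=0)
        centered = postures - mean
        # PCA via SVD: rows of vt are the synergies, ordered by explained variance.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        explained = (s ** 2) / np.sum(s ** 2)
        return mean, vt[:n_synergies], explained[:n_synergies]

    def synthesize_posture(mean, synergies, activations):
        # Reconstruct a full hand posture from a few synergy activations.
        return mean + np.asarray(activations) @ synergies

    # Example with stand-in data: 200 recorded postures of a 20-DoF hand.
    data = np.random.rand(200, 20)
    mean, synergies, explained = postural_synergies(data, n_synergies=2)
    posture = synthesize_posture(mean, synergies, [0.5, -0.2])

In this view, commanding the hand in the low-dimensional space of synergy activations rather than joint by joint is what makes hands and controllers with only a few synergies tractable.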

 

Aude Billard – École Polytechnique Fédérale de Lausanne (EPFL) 

Title: Learning the dimensions that do not matter is important to offer robustness in control

Abstract: Most tasks entail redundancy. Redundancy is advantageous in that it offers multiple ways to solve the task, a flexibility required to adapt to perturbations. Task redundancy is conveyed through variability in control strategies. When manipulating objects with one or several arms, the dimension of the control can be very high. Machine learning techniques provide tools to embed a representation of the high-dimensional, non-linear manifold of feasible motion strategies that can ease control. Identifying task-relative constraints and learning how these constraints vary during task completion can also be used to model changes in impedance during manipulation. Task-relative constraints correspond to control directions in which the task shows no variability. This translates into impedance parameters, with low stiffness along directions of high variability and, conversely, high stiffness along directions of low variability, i.e., directions that are highly constrained.
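
A minimal sketch of the variability-to-stiffness idea described above (hypothetical Python, not the speaker's method; the names and linear scaling are assumptions): stiffness is set high along directions in which demonstrations vary little, i.e. the constrained directions, and low along directions that do not matter:

    import numpy as np

    def stiffness_from_variability(demos, k_min=10.0, k_max=500.0):
        # demos: (n_demos, n_steps, n_dims) end-effector trajectories.
        # Returns per-step, per-dimension stiffness gains.
        var = demos.var(axis=0)                    # variability across demonstrations
        var_norm = var / (var.max() + 1e-12)       # scale to [0, 1]
        return k_max - (k_max - k_min) * var_norm  # low variability -> high stiffness

    # Example with stand-in demonstrations: tightly constrained along x,
    # loosely constrained along z.
    demos = np.random.randn(5, 100, 3) * np.array([0.001, 0.05, 0.2])
    K = stiffness_from_variability(demos)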

 

Tamar Flash – The Weizmann Institute of Science

Title: Principles underlying dimension reduction and compositionality in human movements and robotic implementations

Abstract: In my talk I will discuss several recent research directions we have taken to explore the different principles underlying dimension reduction and compositionality in the control and construction of complex human upper limb and gait movements. Investigating motor compositionality, we have explored the nature of motor primitives underlying the construction of complex movements at different levels of the motor hierarchy. In particular, we have focused on inferring which motion primitives may be used in the generation of hand and center-of-mass (COM) trajectories during human upper limb and locomotion movements, respectively. I will also discuss motor coordination and the mapping between end-effector and joint motions during both arm and leg movements. The mathematical models we have used in these studies combine geometrical approaches with optimization models aimed at inferring motion invariants and at unraveling motor coordination and timing strategies. The usefulness of these approaches for humanoid robot research will be demonstrated.

 

Ken Goldberg – University of California, Berkeley

Title: The New Wave in Robot Grasping 

Abstract: Despite 50 years of research, robots are still remarkably clumsy. I will present what I see as three "waves" in methodology. The first wave, which is still dominant, uses analytic methods based on screw theory and assumes exact knowledge of pose, shape, and other properties (see the 2016 Handbook of Robotics). The relatively recent second wave is purely empirical: data-driven approaches that learn grasp strategies from many examples using techniques such as reinforcement learning and hyperparametric function approximation (deep learning). The "new" wave will be hybrid methods that combine analytic models to bootstrap and tune empirical models, where data and code are exchanged via the cloud using emerging advances in cloud computing, big data, deep learning, and the Internet of Things. This talk will present an overview and new results from my lab for applications in home decluttering, warehouse order fulfillment, and robot-assisted surgery.

 

Matthew Howard – King's College London

Title: Learnt Redundancy Resolution and Constraints in Grasping

Abstract: This talk will provide a survey of the problem of learning kinematic constraints, a multivariate regression problem that is not easily amenable to traditional machine learning approaches. Motivated by the ease with which humans are able to exploit constraints in everyday behaviour, such as grasping a credit card from a table, we examine how representations of the environment may be learnt in order to predict the outcome of actions. As will be shown, the formalism has deep links with dimensionality reduction, since the effect of constraints is, essentially, to reduce the number of degrees of freedom available to perform the task. Interestingly, the same formalism appears to describe a variety of other aspects of coordinated control, including redundancy resolution and task-prioritised control, of which a few examples will be provided.
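
As a small illustration of how constraints reduce the available degrees of freedom (a generic null-space projection sketch in Python, not the speaker's formalism; the matrices and names are assumptions):

    import numpy as np

    def nullspace_projector(A):
        # Projector onto the null space of a constraint matrix A (A @ qdot = 0).
        # Joint velocities passed through it satisfy the constraint exactly.
        return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

    # Example: a 2-dimensional constraint on a 7-DoF arm leaves 5 free DoF.
    A = np.random.randn(2, 7)
    N = nullspace_projector(A)
    qdot_free = N @ np.random.randn(7)      # constraint-consistent joint velocity
    print(np.allclose(A @ qdot_free, 0))    # True: the constraint is satisfied

In this picture, learning the constraint amounts to recovering such a projector from observed motion, after which redundancy resolution and task-prioritised control can be expressed in the same form.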

 

Dongheui Lee – Technical University of Munich

Title: Challenges in Kinesthetic and Teleoperation Teaching of Manipulation Skills  

Abstract: In this talk, I will present our recent activities on manipulation skill learning at the Dynamic Human Robot Interaction Lab at TUM. New action descriptors have been developed in order to represent skills more effectively.  We have investigated efficient teaching approaches for a human to transfer his/her manipulation skills to a humanoid robot. Different teaching modalities face different challenges. I will discuss some challenges in kinesthetic teaching and teleoperation teaching and present our recently developed control and learning algorithms in order to tackle these issues in a manipulation skill learning context. 

 

Sergey Levine – UC Berkeley 

Title: Deep Robotic Learning and Dexterous Manipulation

Abstract: Deep learning has produced widespread improvements in passive perception: from speech recognition to computer vision, tasks that involve detecting, classifying, or localizing in passive sensory input data benefit tremendously from the representational capacity of deep neural networks. However, does this extend to real-world decision making and control, at the degree of fidelity necessary for dexterous manipulation? In this talk, I will discuss a few recently developed algorithms that could be suitable for acquiring complex dexterous manipulation skills in challenging real-world settings. I will present experimental results for vision-based control with conventional parallel-jaw grippers for tasks such as grasping and basic object manipulation, as well as some of our experiments with deep learning for control of dexterous 5-finger hands.

 

Oliver Kroemer – University of Southern California

Title: Learning Robust Manipulation Skills with Tactile Events

Abstract: Contact states play an important role in manipulation tasks, as they determine which objects the robot's actions will affect. A change in the contact state, e.g., making, breaking, or slipping contacts, may correspond to a subgoal of the task or an error depending on the context. In this talk, I will discuss methods for learning to detect these types of contact events using tactile sensing. I will also explain how the robot can use this contact information to adapt its manipulation skills in order to perform tasks more robustly.
