https://sites.google.com/asu.edu/affordance-aware-imitation/project
ACRONYM,
Eppner et al., ICRA, 2021
Learning 6-dof task-oriented grasp detection via implicit estimation and visual affordance,
Chen et al., IROS, 2022
✅ Affordance prediction can directly be used by a motion planner
❌ Human annotation of affordances incurs significant costs
Learning Affordances
from Demonstrations?
Learning Grasp Affordances Through Human Demonstration,
de Granville et al., ICDL, 2006
Learning a dictionary of prototypical grasp-predicting parts from grasping experience,
Detry et al., ICRA, 2013
✅ Eliminates the need for expensive affordance labeling
❌ A motion planner is required to translate affordance predictions into grasp execution,
keeping affordance learning and policy construction/learning as distinct processes.
Human attention naturally gravitates towards object parts essential for task completion
Can you propose any strategies for discovering affordance cues?
Ask Yourself:
What distinguishes the two trajectories from each other?
Sample a trajectory from a category: $sample_a$ (anchor)
Sample a trajectory from the same category as $sample_a$: $sample_p$ (positive)
Sample a trajectory from a different category than $sample_a$: $sample_n$ (negative)
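A minimal sketch of this anchor/positive/negative sampling is shown below. The function and variable names (`sample_triplet`, `trajectories_by_category`) are illustrative assumptions, not from the original; the sketch assumes trajectories have already been grouped by category labels.

```python
import random

def sample_triplet(trajectories_by_category):
    """Sample an (anchor, positive, negative) trajectory triplet.

    trajectories_by_category: dict mapping a category label to a list of
    trajectories (each trajectory can be any object, e.g. an array of states).
    Assumes the anchor category contains at least two trajectories.
    """
    categories = list(trajectories_by_category.keys())
    anchor_cat = random.choice(categories)
    negative_cat = random.choice([c for c in categories if c != anchor_cat])

    # Anchor and positive come from the same category...
    sample_a, sample_p = random.sample(trajectories_by_category[anchor_cat], 2)
    # ...while the negative comes from a different category.
    sample_n = random.choice(trajectories_by_category[negative_cat])
    return sample_a, sample_p, sample_n
```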
Full Model:
Siamese + Coupled Triplet Loss
Ablation 1:
Siamese + Triplet Loss
Ablation 2:
Without Contrastive Learning
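For reference, a generic sketch of the Siamese + triplet loss setup used in Ablation 1 is given below in PyTorch. It uses a standard triplet margin loss over a shared (Siamese) trajectory encoder, not the coupled triplet loss of the full model, whose exact formulation is not given in these slides; the encoder architecture and tensor dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Shared (Siamese) encoder: maps a flattened trajectory to an embedding.

    The input and embedding sizes here are illustrative assumptions.
    """
    def __init__(self, traj_dim=256, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, traj):
        return self.net(traj)

encoder = TrajectoryEncoder()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# Batches of flattened trajectories (placeholder data for illustration).
sample_a = torch.randn(32, 256)   # anchor
sample_p = torch.randn(32, 256)   # same category as the anchor
sample_n = torch.randn(32, 256)   # different category than the anchor

# The same encoder weights embed all three samples (Siamese setup);
# the triplet loss pulls anchor/positive embeddings together and
# pushes the negative embedding away by at least the margin.
loss = triplet_loss(encoder(sample_a), encoder(sample_p), encoder(sample_n))
loss.backward()
```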