Self-Explanation Guided Robot Learning

By Yantian Zha

Learning to accomplish complex tasks may require tight coupling among cognitive functions or components at different levels, such as perception, acting, planning, and self-explaining. An agent may need a coupling between perception and acting to make decisions automatically, especially in time-critical situations. It may need collaboration between perception and planning to pursue plans that are optimal in the long run while also driving task-oriented perception. It may also need self-explaining components to monitor and improve the overall learning. In my research, I explore how cognitive functions or components at different levels, modeled by deep neural networks, can learn and adapt simultaneously. The first question I address is: can an intelligent agent leverage recognized plans or human demonstrations to improve its perception, and thereby its acting? To answer this question, I explore novel ways to learn to couple perception with acting or planning. As a cornerstone, I will explore how to learn shallow domain models for planning. Beyond this, more advanced cognitive learning agents should also reflect on what they have experienced so far, whether from their own behavior or from observing others. Humans, likewise, frequently monitor their learning and draw lessons from their own failures and from others' successes. To this end, I explore how to motivate cognitive agents to learn to self-explain their experiences, accomplishments, and failures in order to gain useful insights. By internally making sense of past experiences, an agent can guide and improve the learning of its other cognitive functions.