My long-term research goal is to build machines that can continually learn concepts (e.g., properties, relations, skills, rules, and algorithms) from their experiences and apply them for reasoning and planning in the physical world. The central theme of my research is to decompose the learning problem into learning a vocabulary of neuro-symbolic concepts. The symbolic part describes a concept's structure and how different concepts can be composed; the neural part handles grounding in perception and physics. I leverage these structures to make learning more data-efficient and more compositionally generalizable, and to make inference and planning faster.
How should we represent various types of concepts?
How can we capture the programmatic structures underlying these concepts (the Theory-Theory of Concepts)?
How can we efficiently learn these concepts from natural supervision (e.g., language and videos)?
How can we leverage the structures of these concepts to make inference and planning faster?
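The neuro-symbolic decomposition described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not an implementation of any particular system: the `Concept` class, `compose_and` helper, and the hand-written groundings are all illustrative assumptions. It shows the division of labor: a symbolic signature (name, arity, and a composition rule) paired with a grounding function that, in practice, would be a learned neural module.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: a "neuro-symbolic concept" pairs a symbolic
# signature (name, arity) with a grounding function that scores how
# well the concept applies to perceptual feature vectors. In a real
# system the grounding would be a learned neural network; here it is
# a hand-written stub for illustration.

@dataclass
class Concept:
    name: str                          # symbolic identifier, e.g. "red" or "left-of"
    arity: int                         # number of objects the concept relates
    grounding: Callable[[List[float]], float]  # neural module in practice

def compose_and(c1: Concept, c2: Concept) -> Concept:
    """Symbolic composition: conjunction of two unary concepts.

    The composed concept's structure is built symbolically, while its
    grounding is derived from the parts (here, a fuzzy AND via min).
    """
    assert c1.arity == 1 and c2.arity == 1
    return Concept(
        name=f"({c1.name} AND {c2.name})",
        arity=1,
        grounding=lambda x: min(c1.grounding(x), c2.grounding(x)),
    )

# Toy groundings over 2-D feature vectors [redness, size].
red = Concept("red", 1, lambda x: x[0])
large = Concept("large", 1, lambda x: x[1])

red_and_large = compose_and(red, large)
print(red_and_large.name)                    # (red AND large)
print(red_and_large.grounding([0.9, 0.2]))   # 0.2
```

Because composition happens at the symbolic level, new concepts like `(red AND large)` generalize to any object representation the component groundings accept, which is one way structure can buy compositional generalization.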