
My research focuses on sample-efficient reinforcement learning in relational domains, with an emphasis on building decision-making systems that exploit structured domain knowledge. Domain knowledge, instantiated as task hierarchies for planners and D-FOCI-based abstractions at the higher levels, can effectively guide lower-level policy learning; this integration is crucial for overcoming the combinatorial explosion inherent in large-scale relational environments. A central theme of my work is leveraging symmetries and shared relational structure across tasks and agents to learn policies that generalize to novel object configurations, task variations, and complex multiagent settings.

My recent work developed a framework that uses a planner as a centralized controller to efficiently learn policies that generalize to increasing numbers of agents and tasks in relational multiagent domains. Current research directions include learning lifted policies from demonstration data that contains privileged information, and few-shot concept learning augmented with LLM-based priors, both aimed at further advancing the goal of learning efficiently.