Adaptive Querying for Reward Learning from Human Feedback

Oregon State University
Kinova arm with optimal trajectory

The proposed Adaptive Feedback Selection (AFS) efficiently queries humans in different formats across the state space to learn a reward function and mitigate negative side effects (NSEs).

Abstract

Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety. Existing approaches typically consider a single querying (interaction) format when seeking human feedback and do not leverage multiple modes of user interaction with a robot. We examine how to learn a penalty function associated with unsafe behaviors using multiple forms of human feedback, by optimizing both the query state and the feedback format. Our proposed adaptive feedback selection is an iterative, two-phase approach that first selects critical states for querying and then uses information gain to select a feedback format for querying across the sampled critical states. The feedback format selection also accounts for the cost and the probability of receiving feedback in a given format.

Our experiments in simulation demonstrate the sample efficiency of our approach in learning to avoid undesirable behaviors. The results of our user study with a physical robot highlight the practicality and effectiveness of adaptive feedback selection in seeking informative, user-aligned feedback that accelerates learning.


Overview of the Solution Approach

Overview of Adaptive Feedback Selection
High-level AFS workflow.

The critical states \( \Omega \) for querying are selected by clustering the state space. A feedback format \( f^* \) that maximizes information gain is then selected for querying the user across \( \Omega \). The NSE model is iteratively refined based on the received feedback, and an updated policy is computed using a penalty function \( \hat{R}_N \) derived from the learned NSE model. A minimal code sketch of the two selection phases is given after the numbered list below.

1. Critical States Selection
  • Cluster the state space into $K$ clusters
  • Identify clusters $k \in \{1, \dots, K\}$ with high information gain for NSE discovery
    Information gain of a cluster $k$: $$ IG(k)^t \!=\! \frac{1}{\left|\Omega_k^{t-1}\right|}\; \sum_{s\in \Omega_k^{t-1}} D_{KL}\!\left(\hat{p}^{\,t}(s)\,\big\|\,\hat{q}^{\,t-1}(s)\right) $$
    $\hat{p}(s):$ true NSE distribution at state $s$
    $\hat{q}(s):$ learned NSE distribution at state $s$
  • Select the critical states $\Omega$ at random, weighting clusters by their information gain.
Critical States Selection Illustration
2. Feedback Format Selection
Feedback Format Selection Illustration
3. Stopping Criteria
Stopping Criteria Illustration
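
To make the two selection phases concrete, here is a minimal, self-contained sketch (not the paper's implementation). Cluster information gain is the mean KL divergence between the true NSE distribution $\hat{p}(s)$ and the previously learned distribution $\hat{q}(s)$ over each cluster's previously queried states; critical states are then sampled with clusters weighted by that gain, and the feedback format is scored by a simple trade-off among expected information gain, probability of receiving feedback, and querying cost. All function names, the scoring rule, and the toy numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over NSE severity classes."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def cluster_information_gain(p_hat, q_hat, cluster_states):
    """IG(k): mean KL between the 'true' NSE distribution p_hat(s) (in practice
    estimated from feedback) and the previously learned distribution q_hat(s),
    averaged over the states previously queried in cluster k."""
    return {k: float(np.mean([kl_divergence(p_hat[s], q_hat[s]) for s in states]))
            for k, states in cluster_states.items()}

def sample_critical_states(cluster_states, ig, n_queries):
    """Sample states to query, weighting clusters by their information gain."""
    keys = list(cluster_states)
    weights = np.array([ig[k] for k in keys])
    weights = weights / weights.sum()
    return [rng.choice(cluster_states[rng.choice(keys, p=weights)])
            for _ in range(n_queries)]

def select_feedback_format(formats, expected_ig, prob_feedback, cost):
    """Pick f*: an assumed trade-off of expected information gain, probability
    of receiving feedback in that format, and querying cost."""
    scores = {f: prob_feedback[f] * expected_ig[f] - cost[f] for f in formats}
    return max(scores, key=scores.get)

# --- Toy usage: 6 states in 2 clusters, binary (NSE / no-NSE) distributions ---
cluster_states = {0: ["s0", "s1", "s2"], 1: ["s3", "s4", "s5"]}
q_hat = {s: np.array([0.5, 0.5]) for c in cluster_states.values() for s in c}
p_hat = {s: (np.array([0.9, 0.1]) if s in cluster_states[0] else np.array([0.55, 0.45]))
         for c in cluster_states.values() for s in c}

ig = cluster_information_gain(p_hat, q_hat, cluster_states)
critical = sample_critical_states(cluster_states, ig, n_queries=3)
f_star = select_feedback_format(
    formats=["approval", "correction", "demonstration"],
    expected_ig={"approval": 0.2, "correction": 0.6, "demonstration": 0.9},
    prob_feedback={"approval": 0.9, "correction": 0.6, "demonstration": 0.3},
    cost={"approval": 0.1, "correction": 0.3, "demonstration": 0.6},
)
print(ig, critical, f_star)
```

Other scoring rules (e.g., dividing expected gain by cost) would fit the same description; the point is only that format selection weighs informativeness against the cost and likelihood of actually receiving feedback in that format.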

Experiments in Simulation

Primary Task: Reach the destination via the shortest path.

Baselines used:

  1. Naive Agent: Executes the main task policy without learning about NSEs (upper bound on NSE penalty).
  2. Oracle: Has full knowledge of the task reward $R_T$ and the NSE penalty $R_N$ (lower bound on NSE penalty).
  3. Reward Inference (RI): Learns the reward by first modeling the human's rationality coefficient $\beta$ (Ghosal et al., 2023).
  4. Cost-Sensitive: Chooses the feedback format with the lowest cost under the feedback preference model $D$ (see the sketch after this list).
  5. Most-Probable Feedback: Selects the feedback format most likely preferred by the human.
  6. Random Critical States: Uses AFS to learn about NSEs but samples critical states randomly rather than by cluster information gain.
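
For contrast with the joint criterion sketched above, the Cost-Sensitive and Most-Probable Feedback baselines reduce to single-criterion rules over the same kind of feedback-preference quantities. The values below are made up for illustration.

```python
# Hypothetical feedback-preference model values for three formats.
prob_feedback = {"approval": 0.5, "correction": 0.9, "demonstration": 0.3}
cost = {"approval": 0.1, "correction": 0.4, "demonstration": 0.7}

# Cost-Sensitive baseline: cheapest format, ignoring informativeness.
cost_sensitive_choice = min(cost, key=cost.get)                    # "approval"

# Most-Probable Feedback baseline: format the user is most likely to provide.
most_probable_choice = max(prob_feedback, key=prob_feedback.get)   # "correction"

print(cost_sensitive_choice, most_probable_choice)
```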

Domains

Domains used in Sim

Results

1. Average penalty across 100 trials

Average NSE penalty across 100 trials in Sim

2. Average penalty with different clustering algorithms and number of clusters ($K$)

Average penalty with different clustering methods

3. Average penalty incurred by AFS operating with and without the stopping criteria

Average penalty with different stopping criteria

Human Subjects Study in Simulation

Average penalty in the human subjects study in simulation

In this study, participants (N=12) interacted with a simulated autonomous agent in the Vase domain to provide feedback across multiple formats through a GUI. The agent learned to minimize negative side effects (NSEs) while completing the primary task. The figure above shows both the simulation setup and the learned performance, with results highlighting how adaptive querying reduces the average NSE penalty.


In-Person User Study with Kinova Gen3 7DoF Arm

In-person user study setup with Kinova Gen3 7DoF arm

Extending the simulation results to a real-world setting, this study evaluated human–robot interaction with a Kinova Gen3 7-DoF arm performing household-style object delivery tasks. Participants (N=30) provided feedback either via a GUI or by physically guiding the robot. The figure above shows the study setup.

Results from the in-person user study setup with Kinova Gen3 7DoF arm

Performance metrics show consistent NSE mitigation and strong user alignment with the adaptive feedback strategy. Subjective measures of trust, perceived competence (RoSAS), and workload (NASA-TLX) indicate that the adaptive approach maintained safety and efficiency without increasing user effort.