PhD Candidate in Computational Media at UCSC

Expressive Curiosity

The Case of the Curious Robot: Designing Socially Viable Curious Behavior in AI Agents

Project Overview

Curiosity is a critical driver of learning, but how should it be expressed in AI-driven robots? This research explores the social viability of curious behavior in non-human agents, investigating how different levels of curiosity impact human perceptions of a robot’s intelligence, competence, and social acceptability. Our findings contribute to the design of AI systems that balance exploratory learning with real-world social expectations.

Research Approach

Our study used a 4×3 mixed-factorial design, crossing four levels of robot curiosity (none, low, medium, high) with three priming conditions: participants were told the robot was either “curious,” “learning,” or “autonomous.” The study measured:

  • Expectation alignment: How well participants felt the robot’s behavior matched its described cognitive state.

  • Perceived competence and intelligence: Ratings of the robot’s capability as a worker and a thinking agent.

  • Emotional tone of feedback: Sentiment analysis of participants’ open-ended responses.
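The condition structure described above can be sketched as a quick enumeration. This is illustrative only: the factor labels come from the study description, but the variable names are mine, and the study's actual randomization and assignment procedure is not shown here.

```python
from itertools import product

# Factor labels taken from the study description; everything else is illustrative.
CURIOSITY_LEVELS = ["none", "low", "medium", "high"]
PRIMING = ["curious", "learning", "autonomous"]

# Fully crossing the two factors yields the 12 cells of the 4x3 design.
conditions = list(product(PRIMING, CURIOSITY_LEVELS))
print(len(conditions))  # 12
```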
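The sentiment-analysis measure can be illustrated with a minimal lexicon-based scorer. This is a toy sketch: the word lists and scoring rule below are hypothetical, and the study's actual sentiment-analysis tool is not specified in this summary.

```python
# Hypothetical mini-lexicon for illustration; a real pipeline would use an
# established sentiment lexicon or model.
POSITIVE = {"helpful", "smart", "curious", "engaging", "likable"}
NEGATIVE = {"distracted", "slow", "annoying", "unreliable", "disruptive"}

def sentiment_score(response: str) -> float:
    """Score a response in [-1, 1]: (positive - negative) / matched words."""
    words = [w.strip(".,!?").lower() for w in response.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The robot seemed smart but also disruptive and slow."))
```

Averaging such scores per condition is one simple way to compare the emotional tone of open-ended responses across curiosity levels.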

Iterative Design Process

The study was refined through multiple pilot tests, where we optimized animation fidelity to match the real-world constraints of the Kuri robot. We worked with professional animator and co-author Doug Dooley to ensure the robot’s gaze, movement pauses, and body orientation communicated varying degrees of curiosity effectively. Testing different levels of curiosity revealed key thresholds where curiosity shifted from being perceived as insightful to disruptive.

Key Findings

  1. Curiosity increases perceptions of intelligence but decreases perceptions of competence. Higher curiosity levels made the robot seem more thoughtful but less effective at its task.

  2. Expectation matching depends on priming. Participants who expected a “curious” robot felt its behaviors aligned better at higher curiosity levels, whereas those primed for “autonomous” robots preferred low curiosity.

  3. High curiosity can negatively impact social perceptions. At the highest level, the robot was rated as less likable and its behaviors generated more negative sentiment in open-ended responses.

  4. Behavioral design must align with cognitive expectations. Misalignment between expected cognition (curious vs. learning vs. autonomous) and actual behavior led to decreased trust and acceptance.

Implications & Future Directions

Our research highlights the importance of designing AI agents that not only learn effectively but also communicate their internal states in ways that align with human expectations. Moving forward, we will explore how different environments and task contexts influence acceptable levels of curiosity and how social cues can be adapted dynamically based on user feedback. Future work will also test these findings with physical robots to assess real-world applicability.

For more details, see our full research paper presented at IVA 2024.

A supplementary zine highlighting the qualitative findings of the study was presented as a digital artifact at IVA 2024.
Illustrations by Louis Riddick

The robot's curiosity level and its anticipated cognitive state (curious vs. learning vs. autonomous) affected how well the robot met participant expectations.

Robot curiosity behavior affected the emotional tone of participants' open-ended responses; a significant shift occurred at the highest level of curiosity.