ABSTRACT

Proactive AR agents promise context-aware assistance, but their interactions often rely on explicit voice prompts or responses, which can be disruptive or socially awkward. We introduce Sensible Agent, a framework designed for unobtrusive interaction with these proactive agents. Sensible Agent dynamically adapts both “what” assistance to offer and, crucially, “how” to deliver it, based on real-time multimodal context sensing. Informed by an expert workshop (n=12) and a data annotation study (n=40), the framework leverages egocentric cameras, multimodal sensing, and Large Multimodal Models (LMMs) to infer context and suggest appropriate actions delivered via minimally intrusive interaction modes. We demonstrate our prototype on an XR headset through a user study (n=10) in both AR and VR scenarios. Results indicate that Sensible Agent significantly reduces perceived intrusiveness and interaction effort compared to a voice-prompted baseline, while maintaining high usability.

Overview

We introduce Sensible Agent, a framework for unobtrusive interaction with a proactive AR agent. While the conventional approach requires users to use voice prompts to instruct agents, Sensible Agent proactively prompts the user based on context, toggles context-adaptive unobtrusive interactions, and suggests different types of queries based on the context.

Teaser Video

30-second teaser video for Sensible Agent.

Full Video

Full 5-minute video for Sensible Agent.


CITING

@inproceedings{leeSensibleAgentFramework2025,
  title = {Sensible {Agent}: {A} {Framework} for {Unobtrusive} {Interaction} with {Proactive} {AR} {Agents}},
  shorttitle = {{{SensibleAgent}}},
  booktitle = {Proceedings of the 38th {{Annual ACM Symposium}} on {{User Interface Software}} and {{Technology}}},
  author = {Lee, Geonsun and Xia, Min and Numan, Nels and Qian, Xun and Li, David and Chen, Yanhe and Kulshrestha, Achin and Chatterjee, Ishan and Zhang, Yinda and Manocha, Dinesh and Kim, David and Du, Ruofei},
  year = {2025},
  month = sep,
  series = {{{UIST}} '25},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3746059.3747748},
}