Abstract

Imagine a near-future smart home. Home-embedded visual AI sensors continuously monitor the resident, inferring her activities and internal states to enable higher-level services. Because these embedded sensors passively observe a freely moving person, high-confidence inferences occur only opportunistically: inference confidence depends heavily on how congruent her momentary conditions are with the conditions the AI models favor, e.g., facing the camera or being unobstructed. We envision a new strategy of AI-to-Human Actuation (AHA) that empowers sensory AIs with proactive actuation, so that they can induce the person's conditions to become more favorable to the AIs. In this light, we explore the initial feasibility and efficacy of AHA in the context of home-embedded visual AIs. We build a taxonomy of actuations that could be issued to home residents to benefit visual AIs. We deploy AHA in an actual home rich in sensors and interactive devices. In a study with 20 participants, we comprehensively examine their experiences with proactive actuation blended into their usual home routines, and we demonstrate substantially improved inferences by the actuation-empowered AIs over a passive sensing baseline. This paper sets forth an initial step toward interweaving human-targeted AIs with proactive actuation, yielding more chances for high-confidence inferences without making the models more sophisticated, and thereby improving robustness against unfavorable conditions.
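To make the core mechanism concrete, below is a minimal, hypothetical sketch (not from the paper) of a confidence-gated actuation loop: when the visual model's inference confidence falls below a threshold, the system issues an actuation (e.g., a smart-speaker prompt) intended to induce a more favorable sensing condition, then continues inferring. All names (`capture_frame`, `infer_activity`, `issue_actuation`) and the threshold and cooldown values are illustrative assumptions, not an API or parameters reported by the authors.

```python
# Hypothetical sketch of a confidence-gated AI-to-Human Actuation (AHA) loop.
# All functions and constants below are illustrative placeholders.
import random
import time

CONF_THRESHOLD = 0.8        # assumed minimum confidence to accept an inference
ACTUATION_COOLDOWN_S = 60.0  # assumed minimum gap between actuations, to avoid nagging


def capture_frame():
    """Placeholder: grab the latest frame from a home-embedded camera."""
    return object()  # stand-in for an image


def infer_activity(frame):
    """Placeholder: run the visual model; return (label, confidence in [0, 1])."""
    return "reading", random.random()


def issue_actuation(hint):
    """Placeholder: deliver an actuation, e.g., a smart-speaker or ambient-light cue."""
    print(f"[actuation] {hint}")


def aha_loop():
    last_actuation = 0.0
    while True:
        label, conf = infer_activity(capture_frame())
        if conf >= CONF_THRESHOLD:
            print(f"accepted inference: {label} ({conf:.2f})")
        elif time.time() - last_actuation > ACTUATION_COOLDOWN_S:
            # Low confidence: proactively induce a more favorable condition,
            # e.g., ask the resident to face the sensor or clear an occlusion.
            issue_actuation("Could you turn toward the living-room display for a moment?")
            last_actuation = time.time()
        time.sleep(1.0)
```

Under these assumptions, the loop trades a small amount of user attention for a higher chance that the next observation falls within the conditions the model handles well, rather than relying on a more sophisticated model.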
