Siyou Pei



Research


UI Mobility Control in XR: Switching UI Positionings between Static, Dynamic, and Self Entities

Siyou Pei, David Kim, Alex Olwal, Yang Zhang, and Ruofei Du (CHI 2024)

Extended reality (XR) has the potential for seamless user interface (UI) transitions across people, objects, and environments, but UI mobility remains an often-overlooked capability. UI mobility refers to switching a UI's positioning among static, dynamic, and self entities based on users' in-situ needs. In response, we facilitate UI mobility between entities with Finger Switches, a probing interaction technique that users found useful in our studies.



WheelPose: Data Synthesis Techniques to Improve Pose Estimation Performance on Wheelchair Users

William Huang, Sam Ghahremani, Siyou Pei, and Yang Zhang (CHI 2024)

Existing pose estimation models perform poorly on wheelchair users due to a lack of representation in training data. We present a data synthesis pipeline that addresses this disparity and improves pose estimation performance for wheelchair users. Our configurable pipeline generates synthetic data of wheelchair users from motion capture data and motion generation outputs simulated in the Unity game engine, and we validate it with real users and through model performance evaluations.



Embodied Exploration: Facilitating Remote Accessibility Assessment for Wheelchair Users with Virtual Reality

Siyou Pei, Alexander Chen, Chen Chen, Franklin Mingzhe Li, Megan Fozzard, Hao-yun Chi, Nadir Weibel, Patrick Carrington, Yang Zhang (ASSETS 2023)

Embodied Exploration is a virtual reality technique that enables wheelchair users to evaluate accessibility remotely. It delivers the experience of a physical visit while keeping the convenience of remote assessment. We validated the efficacy of Embodied Exploration against photo galleries and virtual tours through user studies. Furthermore, we presented key findings on user perception and usability, leading to design guidelines for future accessibility assessment tools.



ForceSight: Non-Contact Force Sensing with Laser Speckle Imaging

Siyou Pei, Pradyumna Chari, Xue Wang, Xiaoying Yang, Achuta Kadambi, Yang Zhang (UIST 2022)

Best Demo Honorable Mention

We present ForceSight, a non-contact force sensing approach using laser speckle imaging. Our key observation is that object surfaces deform in the presence of force. This deformation, though minute, manifests as observable and discernible laser speckle shifts, which we leverage to sense the applied force. To investigate the feasibility of our approach, we conducted studies on a wide variety of materials. We also demonstrated its applicability with several example applications.



Hand Interfaces: Using Hands to Imitate Objects in AR/VR for Expressive Interactions

Siyou Pei, Alexander Chen, Jaewook Lee, Yang Zhang (CHI 2022)

Best Paper Honorable Mention

We present a new interaction technique that lets users' hands become virtual objects by imitating the objects themselves. For example, a thumbs-up hand pose mimics a joystick. We created a wide array of interaction designs around this idea to demonstrate its applicability in object retrieval and interactive control tasks. Collectively, we call these interaction designs Hand Interfaces.



AURITUS: An Open-Source Optimization Toolkit for Training and Development of Human Movement Models and Filters Using Earables

Swapnil Sayan Saha, Sandeep Singh Sandha, Siyou Pei, Vivek Jain, Ziqi Wang, Yuchen Li, Ankur Sarker, Mani Srivastava (IMWUT 2021)

AURITUS is an extendable and open-source optimization toolkit designed to enhance and replicate earable applications. AURITUS handles data collection, pre-processing, and labeling tasks using graphical tools and provides a hardware-in-the-loop (HIL) optimizer and TinyML interface to develop lightweight and real-time machine-learning models for activity detection and filters for head-pose tracking.

 

Quick Question: Interrupting Users for Microtasks with Reinforcement Learning

Bo-Jhang Ho, Bharathan Balaji, Mehmet Koseoglu, Sandeep Sandha, Siyou Pei, Mani Srivastava (ICML 2021 Workshop on Human in the Loop Learning)

Human attention is a scarce resource in modern computing. Quick Question explores the use of reinforcement learning (RL) to schedule microtasks while minimizing user annoyance, and compares its performance with supervised learning. We model the problem as a Markov decision process and use the Advantage Actor-Critic algorithm to identify interruptible moments.

 


News

May 2024: Will present UI Mobility Control in XR at CHI 2024.

Feb 2024: Reviewed CHI 2024 Late-Breaking Work as a program committee member.

Sep-Dec 2023: Interned at Google in Los Angeles.

Oct 2023: Presented Embodied Exploration at ASSETS 2023, New York.

Oct 2023: Reviewed CHI 2024 Papers.

Jul 2023: Organized LACC 2023, a non-profit summer program, at UCLA.

May 2023: Reviewed UIST 2023 Papers.

Apr 2023: Reviewed ISMAR 2023 Papers.

Jan-Apr 2023: Interned at Google in San Francisco.

Mar 2023: Reviewed DIS 2023 Papers.

Jan 2023: Reviewed CHI 2023 Late-Breaking Work.

Nov 2022: Received UIST Best Demo Honorable Mention Award for ForceSight.

Oct 2022: Presented ForceSight at UIST '22, Bend, OR.

Oct 2022: Reviewed CHI 2023 Papers.

Aug 2022: Passed the Oral Qualifying Examination in the Department of Electrical & Computer Engineering and became a Ph.D. candidate.

May 2022: Reviewed UIST 2022 Papers.

May 2022: Presented Hand Interfaces at CHI '22, New Orleans, LA.

Apr 2022: Received the CHI Best Paper Honorable Mention Award for Hand Interfaces.

Apr 2022: Reviewed CHI 2022 Late-Breaking Work.

Mar 2022: Passed the Preliminary Exam in the Department of Electrical & Computer Engineering.

Dec 2021: Submitted my M.S. thesis and received the M.S. degree.

Nov 2021: Reviewed CHI 2022 Papers.

Feb 2021: Reviewed CHI 2021 Late-Breaking Work.

Sep 2019: Started the M.S.-Ph.D. program at the University of California, Los Angeles.






© Siyou Pei, Source Code.