IMUTube

apr 2020 - present

overview

Developed by the Georgia Tech Ubiquitous Computing Group and funded by Oracle, IMUTube is an automatic processing pipeline that extracts virtual streams of inertial measurement unit (IMU) data from YouTube videos for human activity recognition (HAR). The resulting large-scale labeled dataset can support HAR research such as detecting mental and physical health conditions and understanding human behaviors. The goal of this design project is to create a platform that lets researchers who may not have a programming background easily use this pipeline.

role

product designer

team

Hyeok Kwon | project manager
Jae Hyuk Kim | product designer
Yulai Cui | engineer
Sukriti Bhardwaj | engineer
Victor Guyard | engineer

publication

IMUTube

a little more context on IMUTube...

This ongoing machine learning project extracts wearable sensor-based data from YouTube videos. Collecting and processing data is extremely time-consuming and expensive, and this pipeline allows researchers studying human behaviors to easily access large-scale labeled data sets.

IMUTube processes target human activities from input videos into virtual IMU datasets that researchers can investigate further.

input video

3D reconstruction

virtual sensor

what was the problem we were trying to solve?

how might we make wearable sensor-based data more accessible to practitioners with a limited programming background?

what were my responsibilities?

Collaborate with the IMUTube researchers who developed the backend pipeline to understand the current system and define the scope of the project

Lead the experience design of the project by providing information architecture, user flows, and wireframes for the new platform

Collaborate with one other designer to create a design system and final UI designs

Conduct usability testing such as task analysis and cognitive walkthroughs with GT UbiComp Group

design implications
for key problem areas

problem #1: lack of accessibility for researchers without programming skills

While this pipeline can serve as a great resource for many researchers studying human behaviors, it requires programming skills to retrieve data because the video URLs and data parameters have to be hard-coded. The new platform will allow researchers without programming skills to easily navigate and access data.

problem #2: lack of flexibility in input options

Researchers need the flexibility to input multiple target activities and different data parameters for each activity, which is redundant and inefficient in the current system. The new platform will provide this flexibility by letting researchers configure parameters for multiple activities in parallel.

problem #3: lack of recoverability

The current system requires users to hard-code the parameters, and a single error forces them to repeat the entire process. We want the new platform to support error recovery so users can easily go back and fix their mistakes.

what is the main flow?

low-fidelity design iterations

Based on the user flow, I created low-fidelity designs with basic content and layout to test the overall user experience within the GT UbiComp Group. I tested the wireframes with a few researchers by conducting cognitive walkthroughs and made several iterations based on the results and feedback.

key takeaways
from initial user testing

Users who are not familiar with the IMU pipeline have a difficult time navigating without any instructions.

Users sometimes want to apply the same parameters to all target activities but want the flexibility to modify if needed.

Users need more information about the videos before selecting (title, length, preview).

Users want more operation visibility through a progress bar.

moving into high-fidelity

Incorporating feedback from the initial round of testing, I collaborated with another designer to translate the wireframes into high-fidelity designs grounded in our shared design system.

target activity search

Researchers are able to select multiple target activities.

activity parameters

Researchers define activity parameters for each target activity by selecting height and BMI range and placing sensors on the interactive 3D body model.

video parameters

Researchers enter video parameters such as human tracking per video, duration, resolution, and number of videos.

selecting videos

Researchers select videos for IMU processing for each target activity, and can preview videos before selecting them.

downloads

Downloaded data sets are organized by activity categories.

future work

more usability testing and iterations!

This project is still a work in progress. We want to conduct more usability testing with researchers who aren't familiar with the IMU pipeline but would be interested in using it, and to address edge cases.