Research
Shape n' Swarm: Hands-on, Shape-aware Generative Authoring with Swarm UI and LLMs
Matthew Jeung, Anup Sathya, Michael Qian, Steven Arellano, Luke Jimenez, Ken Nakagaki
Relevant Research Domains
Swarm User Interfaces, LLMs, Multi-modal Interaction
Motivation
This project explores what happens when hand-shaping and speech are combined into a tangible authoring method. Inspired by Radical Atoms and Perfect Red, this work pushes toward visions of fluid, tangible interaction. This is a passion project that I presented at ACM UIST 2025 in Korea!
TorqueLocomotion
Locomotion that stays flexible under user manipulation, learned with deep reinforcement learning.
Progress Snapshots
Thus far, I have built a MuJoCo simulation of our lab's flywheel hardware, including a simulated PID motor controller. Within this simulation, I have trained a PPO control policy that enables basic flipping, turning, and translational movement. (A minimal sketch of the PID loop follows the snapshots below.)
Early hardware testing for translational locomotion.
The policy learns to flip the flywheel from a horizontal to a vertical orientation (after 140k timesteps)!
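For a flavor of the control stack, here's a minimal sketch of a PID loop like the simulated motor controller. The gains, torque limit, and toy flywheel plant are illustrative placeholders, not our actual values:

```python
# Minimal PID sketch; gains, torque limit, and the toy plant below are
# illustrative placeholders, not the lab's actual controller.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, torque_limit: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.torque_limit = torque_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        torque = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the motor's torque limit.
        return max(-self.torque_limit, min(self.torque_limit, torque))

# Track a target flywheel velocity; the "plant" here is just a flywheel
# integrating angular acceleration (alpha = torque / inertia).
pid = PID(kp=0.8, ki=0.05, kd=0.001, torque_limit=2.0)
dt, inertia, velocity = 0.002, 0.01, 0.0
for _ in range(1000):
    torque = pid.step(setpoint=10.0, measurement=velocity, dt=dt)
    velocity += (torque / inertia) * dt
print(f"final velocity: {velocity:.2f} rad/s")
```

In the actual simulation, the same loop runs per timestep with MuJoCo's joint velocity as the measurement, and the PPO policy sits above it, outputting velocity setpoints rather than raw torques.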
Relevant Research Domains
Deep Reinforcement Learning, Self-Reconfiguring Hardware, Modular Robotics
Motivation
This project aims to advance research on self-reconfiguring, modular robotic systems such as M-Blocks. My vision is to let users assemble shapes from cube primitives that immediately know how to move, thanks to a shape-flexible control policy. Such a policy would also enable more flexibility in self-assembling robotic systems.
Skills Involved
Robotics Simulation with MuJoCo, Reinforcement Learning, Hardware Design (PCB design, 3D Modelling)
Next Steps
  • Improvements to RL policy.
  • Transition from MuJoCo simulation to hardware.
  • Exploration of multi-module locomotion. (Multiple modules attached together!)
Haptics Project [Under Review CHI 2026]
Hidden for now!
Role
I'm working as a co-lead author on a haptics paper. Very excited about the project, but it's hidden for now!
Relevant Research Domains
Haptics
Skills Involved
3D Modelling, Electronics (PCB design, microcontrollers), Programming, Simulation
Video Learning Project [Anonymized]
Developing an autoencoder architecture for robot learning.
Role
Working under a PhD student, I have mainly contributed to data-loading pipelines for parallelized GPU training, deployed RL agents for data collection, and visualized autoencoder outputs during training.
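As a rough sketch of what the parallelized data loading looks like (the dataset layout, tensor shapes, and hyperparameters are illustrative stand-ins, since the project is anonymized):

```python
# Sketch of a parallelized video data-loading pipeline for GPU training.
# Dataset layout, clip shapes, and hyperparameters are illustrative only.
import torch
from torch.utils.data import DataLoader, Dataset

class ClipDataset(Dataset):
    """Serves fixed-length video clips as (T, C, H, W) float tensors."""
    def __init__(self, num_clips: int = 1024, clip_len: int = 16):
        self.num_clips, self.clip_len = num_clips, clip_len

    def __len__(self):
        return self.num_clips

    def __getitem__(self, idx):
        # Real code would decode frames from disk; random data stands in here.
        return torch.rand(self.clip_len, 3, 64, 64)

if __name__ == "__main__":
    loader = DataLoader(
        ClipDataset(),
        batch_size=32,
        shuffle=True,
        num_workers=4,            # CPU workers load clips while the GPU trains
        pin_memory=True,          # page-locked memory speeds host-to-GPU copies
        persistent_workers=True,  # keep workers alive across epochs
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    for batch in loader:
        batch = batch.to(device, non_blocking=True)  # overlap copy and compute
        # ... autoencoder forward/backward pass would go here ...
        break
```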
Relevant Research Domains
Deep Learning, Computer Vision, Diffusion, Robotics
Skills Involved
PyTorch, CUDA
Motivation
Hidden for now!
Side Projects
GreenGuide
Sustainability feedback for grocery store items through a barcode-scanning glove.
Skills Involved
Electronics (Raspberry Pi, Soldering, etc.), 3D Modelling (Fusion 360), Software Development (Python, Flask, APIs)
Description
GreenGuide provides instant sustainability feedback for grocery items using a barcode-scanning glove, the OpenFoodFacts API, and an LLM agent.
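A minimal sketch of the barcode lookup step, using the public OpenFoodFacts API. Fields like ecoscore_grade vary by product and may be missing, and the error handling here is simplified:

```python
# Sketch of the barcode -> product lookup behind GreenGuide, using the
# public OpenFoodFacts API. Field availability varies by product, so the
# fallbacks here are illustrative.
import requests

def lookup_product(barcode: str) -> dict | None:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != 1:  # product not found
        return None
    product = data["product"]
    return {
        "name": product.get("product_name", "unknown"),
        "ecoscore": product.get("ecoscore_grade", "n/a"),
    }

# In GreenGuide, the barcode comes from the glove's scanner; the result is
# then handed to the LLM agent for spoken sustainability feedback.
print(lookup_product("737628064502"))
```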
1lb Combat Bot
A horizontal spinner bot designed completely from scratch, featuring a weapon belt system and an asymmetrical blade.
Demo
Skills Involved
3D Modelling (Fusion 360), Electronics (Vertiq drone motor, ESCs, drive motors)
Description
A competitive robot built for the 1lb combat weight class, featuring a custom weapon belt drive and an asymmetrical blade design. Built with one partner.
senseMEE
An ML-powered Spotify extension that autoqueues songs based on physical activity, environment, and weather.
Demo
Skills Involved
Mobile App Development (Swift, Spotify API, IMU data filtering), Machine Learning (scikit-learn)
Description
senseMEE uses machine learning to analyze physical activity, environment, and weather, autoqueuing songs on Spotify to match your context.
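A rough sketch of the kind of context classifier involved. The features, labels, and model choice here are illustrative placeholders, not senseMEE's exact pipeline:

```python
# Sketch of a context classifier: map IMU + environment features to an
# activity label that drives song selection. Features, labels, and the
# model are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: mean accel magnitude, accel variance, step rate, ambient temp (C)
X = rng.random((300, 4))
y = rng.integers(0, 3, size=300)  # 0 = resting, 1 = walking, 2 = running

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# The predicted label would then pick a matching set of tracks to queue
# through the Spotify API (e.g., upbeat songs for "running").
```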