
Projects

Here are examples of some things that I’ve worked on, big and small!


Inverse kinematics on a Panda arm

8 Feb 2024

With my friend Leo, I programmed some inverse kinematics on a Panda manipulator using some handy ROS packages. The arm's transform tree was calibrated using a camera mounted to the robot's wrist, as well as an ArUco marker on the table.
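
To give a flavor of the kind of call this boils down to, here's a minimal sketch using MoveIt's Python interface to solve IK and drive the Panda to a target pose. The "panda_arm" group name and the pose values are assumptions for illustration, not our exact setup.

```python
# Minimal sketch: IK-driven motion with MoveIt's Python API (ROS 1).
# Assumes a running MoveIt configuration for the Panda; the "panda_arm"
# group name and target pose below are illustrative.
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node("panda_ik_demo")
moveit_commander.roscpp_initialize([])

group = moveit_commander.MoveGroupCommander("panda_arm")

target = Pose()
target.position.x = 0.4      # meters, in the arm's base frame
target.position.y = 0.0
target.position.z = 0.4
target.orientation.w = 1.0   # identity orientation

# MoveIt solves the IK and plans a trajectory to the requested pose.
group.set_pose_target(target)
group.go(wait=True)
group.stop()
group.clear_pose_targets()
```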

Custom robotics simulator using Unity

28 Jan 2024

The above is a tool that I’m developing for my reforestation robot project at CMU. The Unity-based simulator has a complete bridge with ROS2, so sensor data from the simulator can be easily displayed in RViz, for example. I wrote custom scripts to simulate camera sensors, GNSS, and more. The terrain is based off of real-world elevation data pulled from the USGS’s National Map Downloader. I’ve designed this tool so that virtual worlds can be easily created for any location in the world; all you need are the GPS coordinates.

Source code (still in development).
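
To show what the ROS2 side of the bridge looks like from a user's perspective, here's a minimal rclpy sketch that subscribes to a simulated GNSS topic and prints fixes. The topic name "/sim/gnss" is an assumption for illustration; the simulator's actual topic names may differ.

```python
# Minimal sketch of consuming simulated sensor data over the ROS2 bridge.
# The topic name "/sim/gnss" is assumed for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import NavSatFix


class GnssEcho(Node):
    def __init__(self):
        super().__init__("gnss_echo")
        self.create_subscription(NavSatFix, "/sim/gnss", self.on_fix, 10)

    def on_fix(self, msg: NavSatFix):
        # Simulated fixes arrive as standard NavSatFix messages, so RViz and
        # the rest of the ROS tooling treat them like data from a real receiver.
        self.get_logger().info(f"lat={msg.latitude:.6f} lon={msg.longitude:.6f}")


def main():
    rclpy.init()
    rclpy.spin(GnssEcho())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```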

Realistic forest generation in the browser

26 Dec 2023

I’m fascinated with teaching computers to have “green thumbs”: modeling plant dynamics so that, eventually, machines can help us grow plants better than ever! See my full post here or play with the simulation above.

Scene classification using a retro bag-of-words technique

14 Sept 2023

Filter responses on a kitchen scene for bag-of-words classification

Life existed before CNNs! For my CS class at CMU, I wrote a bag-of-words scene classifier. It runs each image through a bank of Gaussian and related filters, clusters the filter responses into a dictionary of visual “words,” and then represents every image as a histogram of those words. New images are classified by comparing their histograms (with a spatial pyramid) against the training set. This technique used to be state-of-the-art. My classifier achieved a whopping 61% accuracy after I tuned the spatial pyramid parameters.
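
Here's a stripped-down sketch of the core pipeline: a small Gaussian-family filter bank, k-means visual words, and per-image word histograms. The filter choices and dictionary size are assumptions to keep the example short; it mirrors the general technique rather than my exact assignment code, which also adds the spatial pyramid on top of these histograms.

```python
# Minimal bag-of-words image representation: filter bank -> visual words ->
# word histogram. Filter sigmas and dictionary size are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans


def filter_responses(image: np.ndarray) -> np.ndarray:
    """Per-pixel responses from a few Gaussian-family filters, shape (pixels, filters)."""
    responses = [gaussian_filter(image, sigma) for sigma in (1, 2, 4)]
    responses += [gaussian_laplace(image, sigma) for sigma in (1, 2, 4)]
    return np.stack(responses, axis=-1).reshape(-1, len(responses))


def build_dictionary(images, n_words: int = 100) -> KMeans:
    """Cluster filter responses from training images into visual words."""
    samples = np.vstack([filter_responses(img) for img in images])
    return KMeans(n_clusters=n_words).fit(samples)


def word_histogram(image: np.ndarray, dictionary: KMeans) -> np.ndarray:
    """Represent an image as a normalized histogram of visual words."""
    words = dictionary.predict(filter_responses(image))
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()
```

Classification then reduces to comparing a test image's histogram to the training histograms, e.g. with nearest-neighbor matching.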

A self-driving car that drove us to the park

10 May 2023

I founded and led an autonomous driving group called Nova! Here’s our car taking us on a picnic.

Map-assisted state estimation with semantic segmentation

15 Dec 2022

The idea here is that, if the signal from a GNSS sensor is lost, or another localization sensor fails, we can use real-time perception data and offline map information to retain localization in an autonomous vehicle. Does it work as well as RTK? No! But it relies on computations that the AV stack is performing anyway, so it makes a great fallback safety system that can run in the background. And it reduced error from a simulated noisy GPS by 47%! Read more here.
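
One way to picture the core idea is as a particle-filter measurement update that scores each pose hypothesis by how well the live semantic segmentation agrees with labels looked up in the offline map. The sketch below is a simplified illustration under assumed data structures (a 2D label grid for the map and segmented points in the vehicle frame), not the actual stack from the project.

```python
# Simplified sketch: weight pose hypotheses by agreement between live
# semantic labels and an offline semantic map. The map grid, resolution,
# and resampling scheme are assumptions for illustration.
import numpy as np


def score_pose(pose, observed, semantic_map, resolution=0.5):
    """Fraction of observed (x, y, class) points whose class matches the map.

    pose: (x, y, yaw) hypothesis in map coordinates.
    observed: (N, 3) array of points in the vehicle frame plus a class id,
              e.g. projected from a segmented camera image.
    semantic_map: 2D integer grid of class labels with its origin at (0, 0).
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    # Transform observed points from the vehicle frame into the map frame.
    mx = x + c * observed[:, 0] - s * observed[:, 1]
    my = y + s * observed[:, 0] + c * observed[:, 1]
    col = np.clip((mx / resolution).astype(int), 0, semantic_map.shape[1] - 1)
    row = np.clip((my / resolution).astype(int), 0, semantic_map.shape[0] - 1)
    return np.mean(semantic_map[row, col] == observed[:, 2])


def update_particles(particles, observed, semantic_map):
    """One measurement update: reweight and resample pose particles."""
    weights = np.array([score_pose(p, observed, semantic_map) for p in particles])
    weights = weights + 1e-9          # avoid dividing by zero if nothing matches
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```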