Brian Zheng

Autonomy Software Developer & Mechatronics Engineer

Just a guy hungry for knowledge. I build intelligent autonomous systems through AI and robotics.

Projects

A selection of my recent work and engineering projects

Experience & Achievements

My professional journey and accomplishments

Autonomy Software Developer

Sept 2025 - Present

enVgo • Waterloo, ON

  • Increased a segmentation model's accuracy by 19% by improving dataset quality and streamlining scalable data generation, while also accelerating inference using TensorRT and CUDA on Jetson architectures
  • Created model-agnostic scripts in Dockerized containers to evaluate segmentation models by computing mIoU, precision, and F1 scores
  • Developed a real-time bird's-eye view by stitching 4 fisheye camera feeds via homography transformations in GStreamer/OpenGL, with GLSL shaders applying per-camera undistortion maps
  • Implemented a Structure-from-Motion (SfM) model using synchronized camera feeds to estimate vessel pose and reconstruct the 3D dock environment, enabling localization and motion planning for autonomous docking
  • Developed radar CAN/ROS drivers and applied RL and EKFs to filter noise and extract tracked point clouds
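The metric evaluation described above (mIoU, precision, F1) reduces to a per-class confusion-matrix computation over label masks. A minimal sketch in NumPy; the function and argument names are illustrative, not the actual evaluation scripts:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray):
    """Mean IoU, precision, and F1 for integer label masks of equal shape."""
    classes = np.union1d(np.unique(pred), np.unique(gt))
    ious, precisions, f1s = [], [], []
    for c in classes:
        p, g = pred == c, gt == c
        tp = np.logical_and(p, g).sum()   # true positives for class c
        fp = np.logical_and(p, ~g).sum()  # false positives
        fn = np.logical_and(~p, g).sum()  # false negatives
        union = tp + fp + fn
        if union == 0:
            continue  # class absent from both masks: skip, don't inflate
        ious.append(tp / union)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        f1s.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(ious)), float(np.mean(precisions)), float(np.mean(f1s))
```

Averaging only over classes present in either mask matches the usual semantic-segmentation convention.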

Director of Eve Autonomy and Perception Software

May 2025 - Present

WATonomous • University of Waterloo

  • Leading a team of 40+ members in the research and development of an autonomous Kia EV, incorporating AI/machine learning, sensor fusion, and complex path-planning algorithms
  • Developing ROS2 wrappers for LiDAR ground segmentation and computer-vision multi-object tracking algorithms such as ByteTrack
  • Developed a multi-modal 3D object detection and tracking pipeline by fusing LiDAR and camera data with RANSAC floor-plane segmentation and class-parameterized DBSCAN clustering of batched YOLOv8 detections
  • Enhanced object detection pipeline performance by optimizing the YOLOv8 model with TensorRT and CUDA GPU acceleration, maximizing hardware utilization and inference speed
  • Implemented ROS2 drivers and launch files for LiDAR and cameras, and calibrated camera intrinsics
  • Implemented a hybrid A* and BFS search algorithm for local path planning and a Pure Pursuit and PID controller for path tracking
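The Pure Pursuit controller in the last bullet steers toward a lookahead point using bicycle-model geometry. A minimal sketch of that steering law; the function name and wheelbase parameter are illustrative, not the team's actual code:

```python
import math

def pure_pursuit_steering(x, y, yaw, tx, ty, wheelbase):
    """Steering angle (rad) that drives a bicycle-model vehicle at pose
    (x, y, yaw) toward the lookahead target (tx, ty)."""
    # Angle between the vehicle heading and the line to the target
    alpha = math.atan2(ty - y, tx - x) - yaw
    ld = math.hypot(tx - x, ty - y)  # lookahead distance
    # Pure pursuit curvature kappa = 2*sin(alpha)/ld; steering = atan(L*kappa)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```

A target directly ahead yields zero steering; targets to the left or right yield positive or negative angles respectively.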

Robotic Systems Engineer

Jan 2025 - May 2025

Education & Certifications

Bachelor of Applied Science (BASc)

Mechatronics, Robotics, and Automation Engineering

University of Waterloo

Aug 2024 - May 2029

Certified SOLIDWORKS Associate (CSWA)

Issued by Dassault Systèmes


Jan 2025

Skills & Tech Stack

Technologies and tools I work with

Programming Languages

C/C++
Python
Java
JavaScript
HTML/CSS

AI/ML & Computer Vision

PyTorch
TensorRT
OpenCV
CUDA
MediaPipe

Robotics & Autonomy

ROS2
GStreamer
OpenGL
WebRTC
ArduPilot/PX4

Tools & Platforms

Docker
Git
SOLIDWORKS
AWS EC2
Linux
CVAT

Resume

Download my resume or view a summary below

Download Resume (PDF)

Last Updated: January 2025

About Me

I'm a Mechatronics Engineering student at the University of Waterloo, specializing in autonomy software development and perception systems. Currently, I'm working as an Autonomy Software Developer at enVgo, where I optimize AI models and develop real-time computer vision pipelines for autonomous vessels.

At WATonomous, I lead a team of 40+ members developing an autonomous Kia EV, focusing on sensor fusion, 3D object detection, and path planning. My work spans the full autonomy stack—from low-level sensor drivers to high-level AI inference and control algorithms.

I'm passionate about building intelligent systems that bridge the gap between perception, planning, and control. Whether it's optimizing YOLOv8 models with TensorRT, implementing Structure-from-Motion for 3D reconstruction, or developing real-time multi-camera stitching with GStreamer, I love tackling challenging problems in robotics and computer vision.


Core Interests & Specialties

  • 3D Object Detection & Tracking
  • Deep Learning & Model Optimization
  • Sensor Fusion (LiDAR, Camera, Radar)
  • Autonomous Navigation & Control
  • Real-Time Computer Vision Pipelines
  • GPU Acceleration (CUDA, TensorRT)

Currently: Working on autonomous systems at enVgo and leading perception development at WATonomous. Always excited to collaborate on robotics, computer vision, or AI projects!

Blog & Writing

Thoughts on projects, engineering, and technology

January 2025

Building a Morse Code Robot with LEGO EV3

A deep dive into creating an autonomous robot that communicates using Morse code, covering hardware design, sensor integration, and algorithmic implementation.

5 min read
December 2024

Improving LiDAR–Camera Fusion in ROS 2

Exploring techniques for better sensor fusion between LiDAR and camera data in ROS 2, including calibration methods and real-time processing optimizations.

8 min read
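As a taste of the fusion topic above: once extrinsics and intrinsics are calibrated, associating LiDAR points with pixels is a single projection. A hedged sketch assuming an Nx3 point array, a 4x4 LiDAR-to-camera transform, and a 3x3 pinhole intrinsic matrix (names are illustrative):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.
    T_cam_lidar: 4x4 extrinsic (LiDAR -> camera), K: 3x3 intrinsics."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # Nx4 homogeneous points
    cam = (T_cam_lidar @ homo.T).T[:, :3]              # Nx3 in the camera frame
    in_front = cam[:, 2] > 0                           # keep points ahead of the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide -> pixels
    return uv, in_front
```

Points behind the image plane are masked out rather than projected, which is the usual first filtering step before painting LiDAR depth onto detections.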

Get In Touch

Let's connect! I'm always open to discussing new projects, opportunities, or just having a conversation.