Available · Bristol, UK

Ruchita Vehale

Robotics & Computer Vision Engineer

Specialising in SLAM, machine learning, computer vision, and AI. MSc Robotics, University of Bristol.

6+
Publications
9.43
MTech GPA
8+
Systems Built
DRDO
Defence Research
Current Role
Associate Software Engineer · DeGould Ltd
Education
MSc Robotics · University of Bristol
Location
Bristol, United Kingdom
Focus
SLAM · CV · ML · Sensor Fusion
About

I'm a robotics engineer working at the intersection of machine learning, computer vision, and autonomous systems, across the full stack: from sensor calibration and SLAM pipelines to training and deploying ML models in real environments.

My MSc dissertation at Bristol was a dynamic visual SLAM system integrating YOLOv8, DeepSORT, and MiDaS with ORB-SLAM, achieving robust localisation and 3D mapping in environments with moving objects. I have also published six peer-reviewed papers (Wiley, RSC, SSRN) across antenna design, deep learning, and federated AI governance.

Currently an Associate Software Engineer at DeGould; previously Jr. Drone Systems Engineer at HVN Labs, Research Fellow at CMET, and Research Intern at HEMRL, DRDO, India's defence research organisation.

Visual SLAM · Deep Learning · Object Detection · Sensor Fusion · Autonomous Navigation · 3D Vision · ROS · MAVLink
Education
MSc Robotics
University of Bristol
2024 – 2025
MTech Electronics & Electrical
Savitribai Phule Pune University
2020 – 2022
Gold Medal · GPA 9.43/10
BE Electronics Engineering
AISSMS College of Engineering
2016 – 2020
GPA 8.31/10
Expertise

Domains I've built
depth in.

ML
Machine Learning
YOLOv8, CNNs, Reinforcement Learning, Federated Learning. Training to inference in production.
AI
Artificial Intelligence
Deep learning pipelines, real-time inference, AI governance architecture and auditing.
CV
Computer Vision
Object detection, multi-object tracking, depth estimation, stereo vision, image segmentation.
SL
SLAM & Navigation
ORB-SLAM with dynamic object filtering, Open3D point cloud processing, autonomous path planning.
AS
Autonomous Systems
GPS + LiDAR + camera fusion, MAVLink integration, coaxial drone R&D, autonomous landing.
SF
Sensor Fusion
Multi-modal perception: LiDAR, RGB-D, GPS, IMU, stereo cameras for robust real-world operation.
Engineering experience

Work that
shipped.

DeGould Ltd
Bristol, UK  ·  Oct 2025 – Present
Associate Software Engineer (R&D)
Oct 2025 – Present
Problem solved

Automotive manufacturers need fast, accurate, automated defect localisation across vehicle bodies. Manual inspection is slow and inconsistent. CV-based systems must handle complex 3D geometry and be precisely calibrated.

Designed and deployed image segmentation pipelines to precisely isolate defect regions across complex vehicle surfaces, improving detection accuracy and repeatability in production.

Built a Blender plugin automating 3D model processing workflows. A 2-day manual task compressed to under 1 hour (16× improvement), deployed in live inspection pipelines.

Implemented 3D model + 2D image hybrid fusion for pose estimation, significantly reducing reliance on expensive LiDAR hardware while maintaining accuracy across inspection stations.

Developed calibration and localisation algorithms across multiple inspection pipelines, reducing manual intervention and enabling scalable deployment.

Improved codebase architecture and version control workflows, enabling faster, more reliable production deployments across the R&D team.

16×
Workflow Speedup
3D+2D
Pose Fusion
↓LiDAR
Cost Reduction
Prod
Deployed
Stack
Python · Computer Vision · Segmentation · Blender API · Pose Estimation · 3D Fusion · Machine Learning · Git
HVN Labs, Future Space, BRL
Bristol, UK  ·  Apr 2025 – Aug 2025
2 roles
Jr. Drone Systems Engineer
Jun 2025 – Aug 2025
Problem solved

Autonomous drones need to land precisely in GPS-degraded, dynamic environments. Single-sensor approaches fail in practice. The system must fuse heterogeneous sensors in real-time on embedded hardware.

Designed and implemented a reverse landing system using GPS + Raspberry Pi camera fusion, enabling precise autonomous landings in dynamic conditions.

Developed stereo camera-based depth estimation to replace monocular approaches, providing sub-metre altitude control for low-altitude navigation.

Built LiDAR + camera sensor fusion pipeline for robust drone detection and accurate landing zone identification in cluttered environments.

Integrated MAVLink protocol and Wi-Fi communication for seamless ground-to-drone control and real-time telemetry transmission.

GPS+Cam+LiDAR
3-Sensor Fusion
MAVLink
Control Protocol
Real-time
RPi Processing
Deployed
HVN Labs
Stack
MAVLink · LiDAR · Raspberry Pi · Stereo Vision · GPS Fusion · OpenCV · Python
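The stereo depth estimation used for altitude control reduces to the standard disparity-to-depth relation Z = f·B/d. A minimal NumPy sketch of that conversion, with hypothetical focal length and baseline values (the actual camera parameters are not given here):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (metres).

    Z = f * B / d  -- the standard pinhole stereo relation.
    Zero or negative disparities (no stereo match) come back as inf.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical rig: 700 px focal length, 12 cm baseline.
depth = disparity_to_depth([[35.0, 0.0], [70.0, 14.0]],
                           focal_px=700.0, baseline_m=0.12)
```

With these numbers a 35 px disparity maps to 2.4 m, and the unmatched pixel stays at infinity; sub-metre altitude control then comes down to the precision of the disparity estimate near the ground.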
Technology Intern
Apr 2025 – May 2025
Problem solved

Building a coaxial drone system for synchronised lighting displays requires tight control system integration and reliable autonomous landing in GPS-noisy environments.

Contributed to coaxial drone R&D: assembly, control system tuning, and performance testing across varied flight conditions for synchronised display applications.

Developed a precision autonomous landing system using GPS and camera data fusion for reliable operation in dynamic, GPS-noisy environments.

R&D
Drone Build
Auto
Landing
Stack
Drone Assembly · Control Systems · GPS · Camera Fusion
CMET (Centre for Materials for Electronics Technology)
Pune, India  ·  Apr 2023 – Aug 2024
Research Fellow
Apr 2023 – Aug 2024
Problem solved

Designing compact, high-accuracy microwave antennas for 5G requires novel substrate materials and precise simulation. Existing designs were too large and imprecise for next-generation communication standards.

Optimised antenna designs using CST Microwave Studio, achieving 20% size reduction and 70% accuracy improvement for GPS, Wi-Fi, Bluetooth and 5G frequencies.

Operated 3D printers (FDM, Inkjet) and characterisation tools (XRD, VNA) for antenna prototyping and dielectric material testing.

Fabricated a biodegradable-ink strain sensor via 3D inkjet printing: a novel approach to sustainable flexible electronics with defence and wearable applications.

Research led to 3 journal publications in Wiley and RSC journals, including novel dielectric nanocomposite materials for microwave applications.

20%
Size Reduction
70%
Accuracy Gain
3
Publications
5G
Antenna R&D
Stack
CST Studio · 3D Printing · XRD/VNA · 5G · RF Engineering · FDM · Inkjet
HEMRL (High Energy Materials Research Laboratory), DRDO
Pune, India  ·  Nov 2021 – Aug 2022
Research Intern
Nov 2021 – Aug 2022
Problem solved

Defence applications require reliable object tracking and velocity estimation from video footage under variable lighting, without access to additional sensors or GPS, on constrained hardware.

Designed and deployed a MATLAB-based video tracking system using Gaussian Mixture Model and point tracking for moving object recognition in defence research contexts.

Achieved robust accuracy under variable lighting conditions through adaptive background subtraction and GMM model tuning.

Executed feature extraction and depth estimation using projective geometry across multiple camera planes for accurate 3D reconstruction.

Validated precise velocity measurement across 20 independent video datasets, with reliable real-world performance.

20
Video Datasets
DRDO
Defence R&D
Stack
MATLAB · GMM Tracking · Depth Estimation · Projective Geometry · Computer Vision
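The original pipeline was MATLAB with a per-pixel Gaussian mixture model; as an illustration of the core idea (adaptive background subtraction producing a moving-object mask), here is a deliberately simplified NumPy stand-in — an exponential running average rather than a full GMM, so the mixture details are omitted:

```python
import numpy as np

class RunningBackgroundSubtractor:
    """Simplified adaptive background subtraction (stand-in for a GMM).

    The background is an exponential moving average of past frames; a pixel
    is flagged as foreground when it deviates beyond a fixed threshold.
    Updating only where the scene looks static mimics the adaptive model
    tuning that handles variable lighting.
    """

    def __init__(self, learning_rate=0.05, threshold=25.0):
        self.alpha = learning_rate
        self.threshold = threshold
        self.background = None

    def apply(self, frame):
        frame = np.asarray(frame, dtype=np.float64)
        if self.background is None:
            self.background = frame.copy()           # first frame seeds the model
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Adapt the background only where no motion was detected.
        self.background[~mask] += self.alpha * (frame - self.background)[~mask]
        return mask

# Static 4x4 scene, then one bright "object" appears.
sub = RunningBackgroundSubtractor()
frame0 = np.zeros((4, 4))
sub.apply(frame0)
frame1 = frame0.copy()
frame1[1, 2] = 200.0
mask = sub.apply(frame1)
```

The mask flags exactly the moving pixel; a real system would then feed connected foreground blobs into the point tracker for velocity estimation.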
Applied work

Systems I've
built.

SLAM · 01
SLAM
Dynamic Visual SLAM with YOLOv8 + DeepSORT + MiDaS
Real-time SLAM that filters dynamic objects to prevent map corruption, enabling reliable localisation in real-world environments with people and vehicles.
↳ Robust localisation across monocular and RGB-D cameras in dynamic environments
YOLOv8 DeepSORT MiDaS ORB-SLAM
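The heart of the dynamic-object filtering is simple to state: feature points that land inside detector bounding boxes are excluded before they reach the SLAM back end. A minimal sketch with hypothetical keypoint and box data (in the real system the boxes would come from YOLOv8 + DeepSORT):

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_boxes):
    """Drop keypoints that fall inside any dynamic-object bounding box.

    keypoints     -- (N, 2) array of (x, y) pixel coordinates
    dynamic_boxes -- (M, 4) array of (x1, y1, x2, y2) boxes flagging
                     moving objects such as people and vehicles
    Returns the surviving keypoints for tracking and mapping.
    """
    kps = np.asarray(keypoints, dtype=np.float64)
    if len(dynamic_boxes) == 0:
        return kps
    boxes = np.asarray(dynamic_boxes, dtype=np.float64)
    x, y = kps[:, 0:1], kps[:, 1:2]                      # (N, 1) columns
    inside = ((x >= boxes[:, 0]) & (x <= boxes[:, 2]) &  # broadcasts to (N, M)
              (y >= boxes[:, 1]) & (y <= boxes[:, 3]))
    return kps[~inside.any(axis=1)]

kps = [(10, 10), (50, 50), (90, 20)]
boxes = [(40, 40, 60, 60)]            # one detected moving object
static = filter_dynamic_keypoints(kps, boxes)
```

Only the keypoint inside the box is rejected, so static scene structure still anchors localisation while the moving object no longer corrupts the map.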
Drone · 02
Drone
Autonomous Drone Landing via Multi-Sensor Fusion
Precision landing system fusing GPS, Raspberry Pi camera, LiDAR and stereo depth, deployed at HVN Labs. Handles environments where GPS alone is insufficient.
↳ Precise autonomous landings across varied real-world conditions, deployed at HVN Labs, Bristol
GPS Fusion LiDAR Raspberry Pi Stereo Vision
Machine Learning · 03
Machine Learning
Hybrid Apple Detection: YOLOv8 vs Traditional CV
Deep learning benchmark study: YOLOv8 trained for 1,500 epochs, evaluated against traditional CV methods. Published in SSRN Electronic Journal.
↳ 98.87% mAP, published in SSRN Electronic Journal
YOLOv8 Image Segmentation OpenCV Python
Machine Learning · 04
Machine Learning
AI Dungeon Maze: Supervised + Unsupervised + RL
All three ML paradigms integrated on a single D&D maze dataset, deployed as a playable Pygame environment.
↳ Full AI-driven game, all three ML paradigms working in concert
Reinforcement Learning Q-Learning Supervised Learning Scikit-learn
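The reinforcement-learning leg of a maze agent like this typically rests on the tabular Q-learning update, Q(s,a) ← Q(s,a) + α(r + γ·maxₐ′Q(s′,a′) − Q(s,a)). A self-contained sketch on a toy corridor (the layout and rewards here are illustrative, not the project's actual D&D dataset):

```python
import random

def train_q_maze(n_states, actions, step_fn, episodes=500,
                 alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""
    rng = random.Random(seed)
    q = [[0.0] * len(actions) for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (rng.randrange(len(actions)) if rng.random() < epsilon
                 else max(range(len(actions)), key=lambda i: q[s][i]))
            s2, r, done = step_fn(s, actions[a])
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])   # the Q-learning update
            s = s2
    return q

# Toy 1-D corridor: states 0..4, goal at 4; moving right is optimal.
def step(s, action):
    s2 = max(0, min(4, s + (1 if action == "right" else -1)))
    return s2, (1.0 if s2 == 4 else -0.01), s2 == 4

q = train_q_maze(5, ["left", "right"], step)
policy = [max(range(2), key=lambda i: q[s][i]) for s in range(4)]
```

After training, the greedy policy chooses "right" in every non-terminal state; in the full project this Q-table would drive the agent inside the Pygame maze.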
Robotics · 05
Robotics
Leader-Follower Robot with QTR-RC Sensor Fusion
PID-controlled multi-robot coordination with dynamic speed adaptation: 97.93% reduction in tracking errors. Research paper submitted.
↳ 97.93% reduction in tracking errors, research paper submitted
PID Control Sensor Fusion QTR-RC Embedded C
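The speed adaptation in a leader-follower setup is a classic PID loop on the measured gap. A minimal discrete-time sketch with a toy plant model — the gains and dynamics below are hypothetical, not the tuned values from the project:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Follower tries to hold a 0.5 m gap behind the leader.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
gap, target, dt = 1.5, 0.5, 0.05
for _ in range(400):
    u = pid.update(gap - target, dt)   # positive error -> close the gap faster
    gap -= u * dt                      # simplistic follower dynamics
```

The gap settles near the 0.5 m setpoint; on the real robots the controller output would map to wheel-speed commands informed by the QTR-RC sensor readings.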
Robotics · 06
Robotics
NAO Robot Language Tutor: HRI System
AI-powered humanoid robot tutor with synchronised speech, gestures and adaptive lesson flow. Real-time two-way conversation via Python sockets.
↳ Real-time speech-gesture synchronisation, interactive HRI demo
NAO Robot HRI Speech Recognition Python Sockets
Robotics · 07
Robotics
Magnet-Detecting Robot: Pololu 3Pi+
Autonomous robot with boundary detection, PID control, and BFS shortest-path planning for systematic magnet location.
↳ Accurate magnet localisation with provably optimal traversal path
Pololu 3Pi+ PID Control BFS Path Planning C++
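The shortest-path guarantee comes from breadth-first search over the arena grid: BFS expands cells in order of distance, so the first time the goal is dequeued the path is provably minimal in step count. A Python sketch on a hypothetical occupancy grid (the robot itself runs C++ on the 3Pi+):

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """BFS on a 4-connected occupancy grid.

    grid -- 2-D list, 0 = free cell, 1 = obstacle
    Returns the list of (row, col) cells on a shortest path, or None.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # first arrival => shortest path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_shortest_path(grid, (0, 0), (2, 0))
```

Here the only route around the wall takes six moves, and BFS returns exactly that seven-cell path; on the robot, each path cell becomes a waypoint for the PID drive controller.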
Machine Learning · 08
Machine Learning
CIFAR-10 Deep Learning Classifier
CNN-based image classification pipeline on CIFAR-10 with visual analysis dashboard. Data augmentation, batch norm, dropout.
↳ High-accuracy CNN model with visual analysis dashboard
Keras CNN TensorFlow NumPy
Problem
System design & approach
Engineering impact
Tech stack
Research

Published
work.

Technical stack

Skills &
tools.

Programming Languages
Python MATLAB C / C++ Embedded C Verilog / FPGA
Machine Learning & AI
YOLOv8 Keras / TensorFlow Scikit-learn PyTorch Reinforcement Learning Federated Learning
Computer Vision & SLAM
OpenCV DeepSORT ORB-SLAM MiDaS Open3D Stereo Vision
Tools & Platforms
ROS MAVLink Raspberry Pi Blender API AWS Git SolidWorks AutoCAD CST Studio NumPy / SciPy Pygame
Recognition

Leadership &
awards.

NCC Cadet Sergeant
3 Mah Air Squadron
Led 500 cadets · B & C Certificates · Best Cadet award
Gold Medal, Rifle Shooting
NCC National Camp
0.22 Rifle, precision marksmanship
Robotics Competitions
Robotics & Drone Club
Inter-college and national level
Harvard ML Course
Harvard University
Machine Learning & AI, ongoing
Multilingual
Languages
English · Hindi · Marathi · Japanese N4/N5
Contact

Let's build
something real.

I'm open to roles in Robotics, Machine Learning, Computer Vision, and AI. If you're working on autonomous systems, perception, or hard engineering problems — reach out directly.