Machine learning and computer vision engineer with a mechanical engineering background, expanding into AI agent development and emerging technology adoption. Builds end-to-end perception pipelines for autonomous robots and aerial platforms, and designs intelligent AI solutions for complex data retrieval and analysis. Demonstrated success deploying real-time detection, semantic segmentation, and multi-sensor fusion optimised for x64, ARM, and embedded GPUs, alongside conversational AI systems integrating LLMs, external APIs, and enterprise data sources.
• Developed and optimised advanced semantic segmentation models for off-road UGV perception, achieving high
classification accuracy and sustaining real-time performance in challenging terrains.
• Designed and trained custom object detection networks for plantation-specific targets, markedly improving
detection, tracking and classification in dynamic agricultural environments while remaining hardware-agnostic.
• Built an in-house annotation toolkit that enables pixel-level corrections and automated label-format conversion,
streamlining high-quality dataset preparation for vision pipelines.
• Engineered an Object Detection Module integrating a segmentation model with real-time object tracking, 3D
bounding box estimation, motion trajectory prediction, and speed calculation, enhancing situational awareness for
autonomous navigation.
• Performed multi-camera calibration for 4- and 6-camera Bird’s-Eye View (BEV) systems, achieving high-fidelity 360°
panoramic vision for UGV navigation and obstacle avoidance, with intrinsic and extrinsic parameter refinement.
• Conducted sentiment analysis on organisational online reviews, automating data collection, text preprocessing and
dashboard visualisation to surface actionable customer insight for management.
• Developed the Automated Loose-Fruitlet Collector, a vision-guided robotic system that detects loose fruitlets on the
ground, calculates 3D grasp points, and commands a 4-axis arm to autonomously pick and deposit each fruitlet,
significantly reducing manual harvesting effort.
• Architected and deployed a real-time drone vision pipeline that ingests live video from the aircraft, streams it to an
onboard edge computer, off-loads deep-learning inference to a remote GPU server over a secure peer-to-peer VPN,
and returns annotated frames to a custom graphical interface, achieving sub-150 ms end-to-end latency while keeping
onboard compute overhead minimal.
• Developed a semantic segmentation annotation refinement tool for correcting pixel errors in class labels. Integrated
COCO-to-YOLO format conversion, streamlining dataset preparation for different deep learning frameworks.
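The COCO-to-YOLO conversion mentioned above reduces to a simple coordinate transform: COCO stores boxes as top-left pixel coordinates plus width and height, while YOLO expects box centres and sizes normalised by image dimensions. A minimal sketch (function name and values are illustrative, not taken from the tool itself):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO box [x_min, y_min, w, h] (pixels) to
    YOLO format [x_center, y_center, w, h], normalised to [0, 1]."""
    x_min, y_min, w, h = bbox
    x_c = (x_min + w / 2) / img_w   # box centre, normalised by image width
    y_c = (y_min + h / 2) / img_h   # box centre, normalised by image height
    return [x_c, y_c, w / img_w, h / img_h]
```

For example, a 200×100 px box at (100, 50) in an 800×400 image maps to [0.25, 0.25, 0.25, 0.25].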
• Designed a stereo-temporal multi-camera perception system for depth and motion estimation in autonomous
navigation using LiDAR-guided learning. Modified a ResNet18 encoder to process 12-channel stereo-temporal input
from synchronized left-right camera pairs across consecutive frames.
• Developed a staged training strategy to resolve convergence instability in joint depth-motion learning. Stabilized
depth estimation first using a masked L1 loss over LiDAR ground truth, then introduced motion learning via a geometric
consistency loss with frozen encoder weights.
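The masked L1 loss above supervises depth only at pixels where the sparse LiDAR projection provides a valid measurement, so the many empty pixels contribute no gradient. A minimal sketch under that assumption (flat pixel lists for brevity; zero marks an invalid LiDAR pixel):

```python
def masked_l1_loss(pred_depth, lidar_depth):
    """Mean absolute depth error over valid LiDAR pixels only.

    pred_depth, lidar_depth: flat lists of per-pixel depths;
    a LiDAR value of 0 means no ground-truth return at that pixel.
    """
    diffs = [abs(p - g) for p, g in zip(pred_depth, lidar_depth) if g > 0]
    return sum(diffs) / len(diffs) if diffs else 0.0
```

Masking keeps the loss well-defined despite LiDAR sparsity, which is what lets the depth branch converge before motion learning is switched on.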
• Implemented a bounded inverse-depth parameterization and LiDAR-guided geometric consistency loss for 6-DoF
relative pose estimation, achieving meaningful depth maps and non-zero trajectory predictions on the KITTI
benchmark.
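A bounded inverse-depth parameterization of the kind described above typically squashes a raw network output through a sigmoid, then maps it linearly onto the inverse-depth interval, guaranteeing predictions stay within a plausible range. A minimal sketch, assuming illustrative KITTI-like bounds of 1–80 m (the actual bounds used are not stated above):

```python
import math

def bounded_inverse_depth(logit, d_min=1.0, d_max=80.0):
    """Map a raw network output to a depth in (d_min, d_max)
    via a sigmoid-bounded inverse-depth parameterization."""
    s = 1.0 / (1.0 + math.exp(-logit))                    # sigmoid in (0, 1)
    inv = 1.0 / d_max + s * (1.0 / d_min - 1.0 / d_max)   # inverse depth in (1/d_max, 1/d_min)
    return 1.0 / inv                                      # back to metric depth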
• Led end-to-end development of GASAD Assist, an AI data assistant on Microsoft Copilot Studio. Covered architecture
design, prompt engineering, Power Automate workflows, and deployment to Microsoft Teams and M365 Copilot.
Integrated JENDELA coverage and geocoding APIs. Designed a scalable system supporting nationwide data expansion
without configuration changes.
• Built a geospatial web application with an embedded AI chatbot that autonomously controls map interactions based
on user intent. Developed a two-way communication bridge between the application and Microsoft Copilot Studio,
enabling natural language-driven data visualization of telecommunications infrastructure and service complaints across
Malaysia.
• Conceptualized a privacy-first elderly monitoring system using 60 GHz mmWave radar for fall detection and
inactivity sensing without cameras. Designed the four-layer technology architecture (sensing, edge intelligence,
connectivity, service platform) with edge-first processing to minimize data exposure.
• Contributed to the MCMC Internal Working Group on developing a regulatory framework for online gaming services
under the Communications and Multimedia Act 1998. Supported policy assessment covering children’s safety,
consumer protection, monetization practices, and regulatory classification across licensing and industry codes.
SLAM-Based Navigation System with RTAB-Map and A* Algorithm | Mar 2023 – Jan 2024
• Developed a SLAM-based navigation system enabling a mobile robot to navigate constrained environments while
avoiding static obstacles. Integrated software and hardware components using Python for real-time motion planning.
Hydrogen Fuel Cell Car | May 2023 – Dec 2023
• Led the electrical and electronics department in designing circuits for a hydrogen fuel cell car system. Engineered a
control system in C++ that integrated the motor controller with the fuel cell.
Ro-ChI: Robot for Children Interaction | Mar 2022 – Jan 2023
• Led a team to design and develop a humanoid robot with 6-DoF arm movement and an interactive touch-screen
communication system. Programmed its functionality in C++ for seamless user interaction.