In this project, I led the design and development of a quadruped robot, which involved complex mechanical, electrical,
and software integration. The goal was to create a robust, four-legged autonomous robot capable of navigating dynamic
environments. Using CAD tools such as SolidWorks and PTC Creo, I designed the robot's structure, focusing on weight
distribution and mechanical strength. The control system relied on ROS and a custom motion-planning algorithm that
allowed the robot to adjust its gait in real time based on the environment. I implemented dynamic path planning to
ensure obstacle avoidance and smooth navigation. This project won a state-level robotics championship and significantly
advanced my understanding of autonomous navigation and robotic kinematics.
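The gait-adjustment idea can be illustrated with a short sketch. The following is a minimal trot-gait generator, not the robot's actual controller: the class name, phase offsets, and the way a roughness estimate scales stride length and step height are all illustrative assumptions.

```python
# Minimal sketch of real-time gait adjustment; names and constants are
# illustrative, not the project's actual controller.
import math

class GaitGenerator:
    """Trot-gait foot trajectories; diagonal leg pairs move in phase."""
    # Phase offsets per leg: front-left, front-right, rear-left, rear-right.
    PHASE_OFFSETS = [0.0, 0.5, 0.5, 0.0]

    def __init__(self, stride=0.08, step_height=0.04, frequency=1.5):
        self.stride = stride            # nominal step length (m), assumed value
        self.step_height = step_height  # nominal foot clearance (m), assumed value
        self.frequency = frequency      # gait cycles per second (Hz)

    def foot_target(self, leg, t, roughness):
        """Return an (x, z) foot offset for one leg at time t.

        roughness in [0, 1] comes from sensing; rough terrain gets shorter,
        higher steps for stability (an assumed heuristic).
        """
        stride = self.stride * (1.0 - 0.5 * roughness)
        height = self.step_height * (1.0 + roughness)
        phase = (t * self.frequency + self.PHASE_OFFSETS[leg]) % 1.0
        if phase < 0.5:                       # swing: foot travels through the air
            s = phase / 0.5
            x = stride * (s - 0.5)
            z = height * math.sin(math.pi * s)
        else:                                 # stance: foot stays on the ground
            s = (phase - 0.5) / 0.5
            x = stride * (0.5 - s)
            z = 0.0
        return x, z

gait = GaitGenerator()
print(gait.foot_target(leg=0, t=0.2, roughness=0.3))
```

Because diagonal leg pairs share a phase, two feet stay planted at any instant of the trot, which is what keeps the gait statically manageable while the stride adapts to the terrain.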
In this project, I developed a software module for a multi-robot system, focusing on coordination and communication
between robots. The system optimized task execution by distributing work across the robots, ensuring tasks were
completed efficiently with minimal redundancy. I used ROS to manage inter-robot communication and integrated sensors
for real-time data exchange, enabling the robots to adjust their paths based on the actions of other robots in the
system. This project demonstrated my ability to build complex systems that require real-time collaboration between
autonomous robots.
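To make the task-distribution idea concrete, here is a hedged sketch of a greedy allocation pass that assigns each task to the robot with the lowest travel cost, so no two robots duplicate work. The actual system ran over ROS topics; this strips that away to show only the allocation logic, and all names and the distance-based cost model are illustrative assumptions.

```python
# Greedy task allocation sketch: each task goes to the robot that can reach
# it most cheaply, measured from the robot's last assigned task.
import math

def assign_tasks(robot_positions, task_positions):
    """Return {robot_id: [task_ids]} with each task assigned exactly once."""
    assignments = {r: [] for r in robot_positions}
    current = dict(robot_positions)  # each robot's effective position
    for task_id, task_pos in task_positions.items():
        best = min(current, key=lambda r: math.dist(current[r], task_pos))
        assignments[best].append(task_id)
        current[best] = task_pos  # the robot will end up at the task location
    return assignments

robots = {"r1": (0.0, 0.0), "r2": (5.0, 5.0)}
tasks = {"t1": (1.0, 0.5), "t2": (4.5, 5.5), "t3": (0.5, 1.5)}
print(assign_tasks(robots, tasks))
```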
This project involved the development of dynamic path-planning algorithms for an autonomous robot to navigate
unpredictable environments. Using ROS and Gazebo for simulation, I implemented global path planning with Dijkstra's and
A* algorithms while employing local planners for real-time obstacle avoidance. The robot successfully navigated a test
environment while dynamically adjusting its path based on sensor input. This project not only refined my understanding
of planning algorithms but also helped me master ROS and tools such as RViz for visualization and Gazebo for simulation.
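The core of the global planner is the classic A* search. The sketch below is a self-contained, 4-connected grid version under illustrative assumptions (unit step costs, Manhattan heuristic); the project's planner operated on ROS costmaps rather than a toy grid.

```python
# A* on a 0/1 occupancy grid (1 = obstacle); returns a list of (row, col)
# cells from start to goal, or None if the goal is unreachable.
import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    open_set = [(heuristic(start), 0, start)]   # (f = g + h, g, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [cell]
            while cell in came_from:            # walk parents back to start
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_cost = cost + 1
                if new_cost < g.get(nxt, float("inf")):
                    g[nxt] = new_cost
                    came_from[nxt] = cell
                    heapq.heappush(open_set, (new_cost + heuristic(nxt), new_cost, nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

Replacing the heuristic with zero reduces the search to Dijkstra's algorithm, which is why the two global planners share most of their implementation.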
This project aimed to assist visually impaired individuals by developing an AI-driven image captioning system. I trained
a deep learning model using an Inception V3 CNN encoder for feature extraction, reaching 92% accuracy in
generating descriptive captions from real-time video feeds. To optimize real-time performance, I implemented Block
Static Expansion and multi-head attention, which improved both accuracy and response time. Additionally, I
developed Python scripts for seamless integration with mobile devices, enabling visually impaired users to capture and
process video in real time, with automatic caption generation and voice narration. This project highlighted my ability
to apply AI solutions for real-world accessibility challenges.
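A minimal sketch of the attention step, assuming a generic encoder-decoder setup: caption tokens attend over CNN region features via multi-head attention. The dimensions, vocabulary size, and module names below are illustrative, and the real model adds Block Static Expansion on top of this idea.

```python
# One decoding step: caption tokens query CNN region features through
# multi-head attention to produce next-token logits. Sizes are assumed.
import torch
import torch.nn as nn

class AttnCaptionStep(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=512, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, image_features):
        """tokens: (B, T) int ids; image_features: (B, N, d_model) CNN regions."""
        q = self.embed(tokens)                                   # queries from the caption so far
        ctx, _ = self.attn(q, image_features, image_features)    # attend over image regions
        return self.out(ctx)                                     # (B, T, vocab) next-token logits

model = AttnCaptionStep()
feats = torch.randn(2, 64, 512)           # e.g. an 8x8 grid of pooled CNN features
tokens = torch.randint(0, 10_000, (2, 5))
print(model(tokens, feats).shape)         # torch.Size([2, 5, 10000])
```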
This project involved the design and development of an autonomous robot capable of picking and placing color-coded
blocks in a designated area. I engineered a real-time color detection system using a Pi Camera, which was integrated
with the A* path planning algorithm to ensure accurate retrieval and placement of the blocks. The robot was equipped
with an MPU6050 IMU and encoders for precise navigation, and I implemented PID control for stable maneuvering.
Obstacle avoidance was achieved through the use of ultrasonic sensors, allowing the robot to navigate dynamically
changing environments. The project achieved an 80% success rate in goal placement, demonstrating effective robot
localization and task execution.
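The stable-maneuvering piece comes down to a textbook PID loop on heading error. This is a minimal sketch with illustrative gains, not the robot's tuned controller.

```python
# Generic PID controller; gains below are assumed, not the project's values.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and timestep dt."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: steer toward a target heading using IMU yaw feedback.
pid = PID(kp=2.0, ki=0.1, kd=0.3)
target_yaw, measured_yaw, dt = 1.57, 1.20, 0.02   # radians, radians, seconds
turn_rate = pid.update(target_yaw - measured_yaw, dt)
print(f"commanded turn rate: {turn_rate:.3f} rad/s")
```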
In this project, I conducted a comprehensive evaluation of 2D mapping algorithms, including Hector SLAM, Cartographer,
and GMapping. I mapped a 20,000 sq ft hostel floor to validate the study, providing actionable insights for optimal
mapping-algorithm selection.
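One way to quantify such a comparison is cell-wise agreement between the occupancy grids the algorithms produce. The sketch below is an illustrative metric under assumed conventions (occupancy probabilities in [0, 1], NaN for unknown cells), not necessarily the criteria used in the study.

```python
# Illustrative map-comparison metric: fraction of mutually known cells on
# which two occupancy grids agree after thresholding.
import numpy as np

def map_agreement(map_a, map_b, occupied_thresh=0.65):
    """Grids hold occupancy probabilities in [0, 1]; NaN marks unknown cells."""
    known = ~(np.isnan(map_a) | np.isnan(map_b))
    occ_a = map_a[known] > occupied_thresh
    occ_b = map_b[known] > occupied_thresh
    return float(np.mean(occ_a == occ_b))

# Synthetic stand-ins for two algorithms' maps of the same floor.
rng = np.random.default_rng(0)
gmapping_out = rng.random((100, 100))
cartographer_out = np.clip(gmapping_out + rng.normal(0, 0.1, (100, 100)), 0, 1)
print(f"agreement: {map_agreement(gmapping_out, cartographer_out):.2%}")
```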