2025-06-28 16:35:03 - Adil Khan

Project Title

SLAM Using RGB-D and Lidar data fusion

Project Area of Specialization: Robotics

Project Summary

Autonomous navigation in unknown and dynamic environments is a central problem in many robotic applications. Much of the reported literature focuses on a subset of this problem, i.e., Simultaneous Localization and Mapping (SLAM) in static environments.

A great deal of development has been done in SLAM, and different methodologies have been applied. Although these methodologies are effective in static environments, they fail in realistic scenarios where moving objects are present. In recent years, however, researchers have attempted to design SLAM algorithms for unstructured, uncertain, and dynamic environments. For an autonomous robot to navigate through an unknown environment, two problems need to be resolved:

(i) the SLAM problem, which builds and updates the environment map while localizing the robot with respect to that map, and (ii) Detection And Tracking of Moving Objects (DATMO) near the robot, together with estimation of their future behaviour.

Simultaneous Localization and Mapping (SLAM) plays a crucial role in the development of mobile robots. Map building is one of the most essential tasks for navigation in an unknown environment.

Our project aims at developing a low-cost, lightweight, and robust robot that uses the SLAM technique to generate a map and then localize itself with respect to that map. We will focus on obtaining a consistent map of static objects, discriminating between static and dynamic objects, and concurrently estimating and tracking moving features. By applying the ML-RANSAC algorithm to the observation data, we can track moving objects while localizing the robot and mapping stationary objects with an EKF filter.
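As a rough sketch of the "discriminate static from dynamic" step (not the full ML-RANSAC algorithm; function names, the 2D point representation, and the thresholds are illustrative assumptions), the idea can be shown as a RANSAC-style rigid-motion fit between two consecutive scans: correspondences consistent with the dominant rigid motion are treated as static scene points, while the remaining outliers become candidate moving objects.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src -> dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def split_static_dynamic(prev_pts, curr_pts, iters=200, thresh=0.05, rng=None):
    """RANSAC-style separation: inliers of the dominant rigid motion are treated
    as static scene points, outliers as candidate moving objects.
    prev_pts, curr_pts: (N, 2) corresponding points from two consecutive scans."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(prev_pts)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)     # minimal sample
        R, t = fit_rigid_2d(prev_pts[idx], curr_pts[idx])
        pred = prev_pts @ R.T + t
        inliers = np.linalg.norm(pred - curr_pts, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers, ~best_inliers                 # static mask, dynamic mask
```

The static points can then be fed to the EKF-based mapping step, while the dynamic mask seeds the moving-object tracker.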

SLAM has many applications indoors, outdoors, and in industrial, defence, and surveillance settings, as well as in sectors such as underground mining, underwater operations, and space. In defence, SLAM can be used to map unsecured areas or buildings where human activity would be dangerous. A SLAM system can also be embedded on a drone or quadcopter to map highly risky and unsecured areas. For countries like Pakistan, where security issues are a top concern, the SLAM technique would be useful for obtaining maps of operational areas.

In Pakistan, one of the basic tasks of the NDMA is search and rescue during disasters. The Pakistan Army could use SLAM-based mobile robots to find people trapped in dangerous places. With the information such robots collect, a better rescue plan can be made. Currently, most rescue robots are teleoperated, but giving them the ability to navigate and search by generating a map could help greatly in critical situations, as a group of robots can cover the same area faster, increasing the likelihood of rescuing people. SLAM could also be embedded in a fire-extinguisher robot for mapping and navigation in critical situations.

SLAM is also important for autonomous vehicles. Many automobile companies, such as Tesla, use SLAM techniques for vehicle navigation and mapping.

Project Objectives

The main objectives of this project will be:

  1. To solve problems related to dynamic environments.

  2. To apply machine learning to detect features.

  3. To distinguish between static and moving objects.

  4. To build a map of an unknown environment.

The objective of simultaneous localization and mapping (SLAM) is to build a map and to locate the robot in that map at the same time. We should clarify that for the SLAM problem, it does not matter if the robot moves autonomously or is controlled by a human. The important thing is to build the map and locate the robot correctly.

SLAM has two defining traits: estimating the location of the robot and building a map in various types of environments. The purpose is to provide an accurate perception model for SLAM based on Lidar/RGB-D sensors and a camera as perception input data. In addition, we mainly focus on the three paradigms of the SLAM problem, with their pros and cons.

Robots generally work in two categories of environment: static and dynamic. One of the most challenging problems in robotics is mapping the environment. For robots working in a fixed setting, this problem is typically avoided by simply anchoring the robot to the ground; since the robot does not move, there is no localization and mapping problem. This is how industrial robots, such as those performing service tasks on factory floors, sidestep the issue. On the other hand, mapping becomes an issue once the robot moves into a changing environment; it then needs a Simultaneous Localization and Mapping (SLAM) system that solves mapping and localization at the same time. One advantage of this technique is that mapping happens online, so the autonomous robot can detect environmental features, including numerous intricate landmarks and obstacles, as it encounters them. Moreover, autonomous robots can explore a new environment, make decisions based on the data gathered from it, and navigate along a trajectory by building the map as they go. The robot can estimate both its own location and the locations of obstacles.

The Simultaneous Localization and Mapping technique makes it possible for an intelligent mobile robot to execute the mapping and localization process at the same time. This increases the robot's efficiency in performing tasks while exploring a dynamic environment.

Project Implementation Method

This project proposes a SLAM localization and mapping scheme that integrates vision and Lidar, divided into a visual module, a Lidar module, and a fusion module. Feature extraction and PnP matching between features and the map are carried out in the visual front-end, while the Lidar front-end uses a correlation matching method for Lidar scan matching. The back-end uniformly adopts graph optimization. In the fusion module, the pose information from vision and Lidar is fused, and a three-dimensional map is built.
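A minimal sketch of the PnP step in the visual front-end, assuming ORB features and a set of map points with known 2D-3D correspondences in the current frame (the function and variable names are illustrative, not taken from the original design):

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def extract_features(frame_gray):
    """ORB feature extraction for the visual front-end."""
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    return keypoints, descriptors

def estimate_camera_pose(map_points_3d, matched_keypoints_2d, K):
    """Estimate the camera pose from 2D-3D matches between frame and map.

    map_points_3d        : (N, 3) map points in the world frame
    matched_keypoints_2d : (N, 2) their pixel locations in the current frame
    K                    : 3x3 camera intrinsic matrix
    Returns (R, t) mapping world coordinates into the camera frame, or None.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32),
        matched_keypoints_2d.astype(np.float32),
        K, None,
        iterationsCount=100, reprojectionError=3.0)
    if not ok or inliers is None or len(inliers) < 10:
        return None                      # PnP failed -> fall back to the Lidar pose
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> rotation matrix
    return R, tvec
```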

The pose estimates of the vision module and the Lidar module are input to the fusion module. The visual feature points are used for motion estimation after matching, and the Lidar path uses correlation matching for motion estimation. When vision and Lidar both localize successfully, the system outputs two poses at the same time and an EKF fusion is performed on them. When visual tracking fails, the Lidar pose is used to splice the point-cloud data from the depth camera into the 3D map. At the same time, feature detection and matching continue in subsequent frames to re-initialize the map points in the visual SLAM. If re-initialization succeeds, the fusion mode is used again; otherwise the Lidar localization results alone are used to build the three-dimensional map, as shown in the figure below.
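The EKF fusion of the two poses could look like the following minimal numpy sketch, assuming the state is the 2D pose [x, y, yaw], the Lidar pose acts as the prior, and the visual pose is folded in as a direct measurement; the covariance values in the example are placeholders only:

```python
import numpy as np

def wrap_angle(a):
    """Normalize an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_fuse_pose(lidar_pose, lidar_cov, visual_pose, visual_cov):
    """Fuse two 2D poses [x, y, yaw]; the Lidar pose is the prior and the
    visual pose the measurement. With H = I this is a standard Kalman update."""
    x = np.asarray(lidar_pose, dtype=float)
    P = np.asarray(lidar_cov, dtype=float)
    z = np.asarray(visual_pose, dtype=float)
    R = np.asarray(visual_cov, dtype=float)

    y = z - x
    y[2] = wrap_angle(y[2])              # innovation, keeping yaw wrapped
    S = P + R                            # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)             # Kalman gain
    x_fused = x + K @ y
    x_fused[2] = wrap_angle(x_fused[2])
    P_fused = (np.eye(3) - K) @ P
    return x_fused, P_fused

# Example with illustrative covariances (Lidar trusted slightly more):
# pose, cov = ekf_fuse_pose([1.0, 2.0, 0.10], np.diag([0.02, 0.02, 0.01]),
#                           [1.1, 1.9, 0.12], np.diag([0.05, 0.05, 0.02]))
```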

The pose acquired by visual SLAM is a 3D motion with six degrees of freedom. When fused with the 2D information acquired by the Lidar, its motion must be decomposed onto the 2D map. That is, the pose component in the XY plane of the world coordinate system is extracted from the 3D rotation matrix representing the camera pose. Since both the RGB-D camera and the Lidar are installed horizontally in the system described here, the pose in the ZX plane of the camera coordinate system is taken as the XY pose in the world coordinate system. The problem then becomes an extended Kalman filter fusion problem for 2D motion.
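One possible form of this decomposition, sketched under assumed conventions (the visual SLAM world frame taken as the initial camera frame with x right, y down, z forward; signs and axis names are illustrative, not from the original text):

```python
import numpy as np

def camera_pose_to_2d(R, p):
    """Project a 6-DoF camera pose onto the 2D map plane.

    Assumes a horizontally mounted camera whose Z-X plane coincides with the
    map X-Y plane, as stated above.
    R : 3x3 rotation of the camera body frame in the visual world frame
    p : camera position in that frame
    Returns [x, y, yaw] for the 2D EKF fusion step.
    """
    x_2d = p[2]                               # camera-frame z -> map x
    y_2d = p[0]                               # camera-frame x -> map y
    forward = R @ np.array([0.0, 0.0, 1.0])   # optical axis in the world frame
    yaw = np.arctan2(forward[0], forward[2])  # heading within the map plane
    return np.array([x_2d, y_2d, yaw])
```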

[Figure: Architecture of the vision and Lidar fusion SLAM system]

Benefits of the Project

Mapping is a crucial requirement for many applications in unknown environments, and SLAM is a technique for generating a map and localizing the robot with respect to that map.

SLAM is central to a range of indoor, outdoor, aerial, underwater, and defence applications for both manned and autonomous vehicles.

Examples:

In areas affected by natural or man-made disasters:

Disasters such as earthquakes, floods, and disease outbreaks, as well as other hazards, create risky areas where human activity is not possible. It is essential to have a map of these affected areas for continuous monitoring and for limiting the damage, so SLAM plays a vital role in such areas by providing maps for rescue operations.

Autonomous Vehicles:

One of the important benefits of SLAM is its use in autonomous vehicles. Autonomous navigation requires both a precise and robust mapping and localization solution, and SLAM is very well suited to this role. As the trend towards autonomous vehicles grows, SLAM is increasingly considered a vital solution for their mapping and localization.

Defence:

SLAM can be used in defence to map unsecured areas or buildings where human activity would be dangerous, making it useful for defence operations.

At home:

SLAM can be used in homes by vacuum cleaners, lawn mowers, and many other domestic robots.

Air:

SLAM can be used in the air for surveillance with unmanned aerial vehicles; the technique can be embedded on drones, quadcopters, and similar platforms for surveillance purposes.

Underwater:

SLAM can be used underwater for reef monitoring and for autonomous underwater vehicles.

Underground:

SLAM can be used underground for the exploration of mines, where GPS is not accessible.

Space:

SLAM can be used in space for planetary rovers and for terrain mapping to support localization.

Technical Details of Final Deliverable

Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene understanding, and it has been an active research area in robotics in recent years. Using SLAM, a robot can estimate its position at distinct points in time, which yields the robot's trajectory, while also generating a map of the environment. Several approaches have been investigated for applying SLAM in different environments. The purpose here is to provide an accurate, perceptive review of SLAM based on Lidar/RGB-D sensors and a camera as perception input, focusing mainly on the three paradigms of the SLAM problem with their pros and cons. The output of the project is a Simultaneous Localization and Mapping system that enables an intelligent mobile robot to perform mapping and localization at the same time, increasing the robot's task efficiency while exploring a dynamic environment.

To achieve our objective we will build a modular robot capable of basic movements and fitted with Lidar and RGB-D sensors. Observations from these sensors will be processed on an onboard Raspberry Pi board, with the OpenCV module used for image processing. The output of the Lidar and vision modules will be fed to the fusion module, where the visual feature points are used for motion estimation after matching and the Lidar is used for correlation matching based motion estimation. EKF fusion is then performed on the two poses. When visual tracking fails, the Lidar pose is used to splice the point-cloud data from the depth camera into a 3D map. The software for this process will use the Robot Operating System (ROS), and with ROS we will use Gazebo, a 3D dynamic simulator able to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments.
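As a rough illustration of how these pieces could be wired together in ROS (a skeleton only, with assumed topic names and the actual processing omitted), a node might subscribe to the Lidar scan and RGB-D image streams and hand them to the front-ends described above:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan, Image
from cv_bridge import CvBridge

class FusionSlamNode(object):
    """Skeleton node: receives Lidar scans and RGB-D frames, would run the
    Lidar and visual front-ends and the EKF fusion described in this proposal."""

    def __init__(self):
        self.bridge = CvBridge()
        self.latest_scan = None
        rospy.Subscriber('/scan', LaserScan, self.scan_callback, queue_size=1)
        rospy.Subscriber('/camera/rgb/image_raw', Image,
                         self.image_callback, queue_size=1)

    def scan_callback(self, scan_msg):
        # Lidar front-end: correlation scan matching would run here
        self.latest_scan = scan_msg

    def image_callback(self, img_msg):
        # Visual front-end: feature extraction + PnP, then EKF fusion
        frame = self.bridge.imgmsg_to_cv2(img_msg, desired_encoding='bgr8')
        # ... feature extraction, PnP, and fusion calls would go here ...

if __name__ == '__main__':
    rospy.init_node('rgbd_lidar_fusion_slam')
    FusionSlamNode()
    rospy.spin()
```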

Typical uses of Gazebo in this project will include simulating the robot together with its Lidar, RGB-D, and IMU sensors, and testing the SLAM and fusion algorithms in simulated indoor and outdoor environments before deployment on the hardware.

At the end, we will deliver a robot that uses the SLAM technique and is well equipped with sensors and cameras to achieve the objectives of the project. As the project consists of both hardware and software, the software technical details will also be provided.

Final Deliverable of the Project: HW/SW integrated system
Core Industry: Security
Other Industries: Manufacturing
Core Technology: Robotics
Other Technologies: Artificial Intelligence (AI)
Sustainable Development Goals: Industry, Innovation and Infrastructure

Required Resources
Item Name            Type            No. of Units    Per Unit Cost (in Rs)    Total (in Rs)
OpenCR               Equipment       1               28500                    28500
Lidar sensor         Equipment       1               16000                    16000
Raspberry Pi board   Equipment       1               8000                     8000
Kinect Xbox sensor   Equipment       1               6000                     6000
IMU sensor           Equipment       1               3500                     3500
Robot body           Equipment       1               8000                     8000
Miscellaneous        Miscellaneous   1               10000                    10000
Total (in Rs)                                                                 80000
