Localisation of Robots
2022-2023
Yiliin Cao, Jifeng Li,
Zhiyang Zhao, Yilin Yang, Yihan Lu

Project Background and Objectives
Background: Modern airports and railway stations have grown larger and more complex in layout, and demand for navigation robots is rising. However, existing navigation robots fall short in simultaneous localization and mapping (SLAM) and in safety.
Survey: A questionnaire survey found that most respondents consider navigation robots necessary and helpful. Respondents also suggested improving the accuracy of the robot's instructions, providing more human-like guidance, and simplifying the user experience: 63.46% of respondents emphasised the importance of accuracy, and 41.1% felt the interaction method needs improvement.
Key Technologies: This section introduces the roles of SLAM, LiDAR (Light Detection and Ranging), and ultrasonic and infrared sensors in robot navigation and obstacle avoidance.
Design Concepts
System Design: The system adopts an architecture in which one computer controls two Raspberry Pi boards. Raspberry Pi A runs Ubuntu and deploys the Robot Operating System (ROS) to control the camera and LiDAR for visual recognition and SLAM, and to transmit data. Raspberry Pi B runs the official Raspberry Pi OS to control the driver board and the remaining functions, such as infrared obstacle avoidance, ultrasonic warning and basic motion control.
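A minimal sketch of what a node on Raspberry Pi A might look like, assuming the standard ROS rplidar driver publishing LaserScan messages on the /scan topic (the node and topic names are illustrative, not the project's actual code):

    #!/usr/bin/env python3
    # Sketch: subscribe to the LiDAR scan on Raspberry Pi A and log the
    # nearest obstacle range. Assumes the rplidar driver publishes /scan.
    import rospy
    from sensor_msgs.msg import LaserScan

    def on_scan(msg):
        # msg.ranges holds one distance (metres) per laser beam
        nearest = min((r for r in msg.ranges
                       if msg.range_min < r < msg.range_max),
                      default=float("inf"))
        rospy.loginfo("nearest obstacle: %.2f m", nearest)

    if __name__ == "__main__":
        rospy.init_node("scan_monitor")
        rospy.Subscriber("/scan", LaserScan, on_scan)
        rospy.spin()  # hand control to ROS until shutdown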

Technology Selection: LiDAR is selected to implement SLAM because, compared with visual SLAM methods, it is fast, accurate and unaffected by lighting conditions. The infrared sensor is chosen for obstacle avoidance because of its high precision, strong sensitivity and immunity to ambient-light interference, and because it is easy to integrate into small devices.
Prototyping and Testing
Prototyping
Robot Assembly: The robot was assembled following the manufacturer's tutorial, with several improvements to device placement: the camera was replaced with a clearer one and the LiDAR with a more sensitive one, the positions of the LiDAR and Raspberry Pi were adjusted, and two Raspberry Pi boards were used to improve performance.

Hardware Selection: Raspberry Pi 4B boards serve as the motherboards, with remote control via VNC and SSH; the configuration and secure-connection methods are described.
Function Module Implementation: This includes installing the RPLIDAR A1 to achieve SLAM, using the camorama software and the YOLOv5s model for camera-based visual detection, and designing a basic motion and obstacle-avoidance system that covers infrared obstacle avoidance, ultrasonic warning, computer control and tracking functions.
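Remote control can also be scripted; a hedged sketch using the paramiko SSH library is shown below (the hostname, credentials and command are placeholders, and the project may well have used plain interactive SSH instead):

    # Sketch of scripted remote control over SSH via paramiko.
    # Hostname, username, password and command are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # trust on first use
    client.connect("raspberrypi.local", username="pi", password="raspberry")

    # Run a command on the Pi and read its output
    stdin, stdout, stderr = client.exec_command("vcgencmd measure_temp")
    print(stdout.read().decode())
    client.close()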
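The report does not reproduce the detection code itself; one common way to run the pretrained YOLOv5s model, shown here as a sketch (the image file name is a placeholder), is through torch.hub:

    # Sketch: object detection with pretrained YOLOv5s loaded via torch.hub.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # downloads weights on first run
    results = model("frame.jpg")           # run inference on one camera frame
    results.print()                        # class, confidence and box summary
    detections = results.pandas().xyxy[0]  # bounding boxes as a DataFrame
    print(detections[["name", "confidence"]])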

Testing
Basic Motion Module: Testing shows that the robot moves straight and turns according to commands entered on the computer. The initially low speed was fixed by modifying the code, as sketched below.
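A hedged sketch of the kind of change that raises the speed, assuming the motors are driven by PWM through RPi.GPIO (the pin number and duty-cycle values are placeholders, not the kit's actual wiring):

    # Sketch: raise motor speed by increasing the PWM duty cycle.
    # Pin and values are assumptions for a generic RPi.GPIO setup.
    import RPi.GPIO as GPIO

    MOTOR_PIN = 18                   # hypothetical PWM-capable GPIO pin
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOTOR_PIN, GPIO.OUT)

    pwm = GPIO.PWM(MOTOR_PIN, 1000)  # 1 kHz PWM signal
    pwm.start(30)                    # original, sluggish duty cycle
    pwm.ChangeDutyCycle(70)          # higher duty cycle, faster motion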
Obstacle Avoidance Module: The ultrasonic sensor detects distance and issues a warning, and the infrared sensor avoids obstacles automatically; testing verified the effectiveness of both (see the sketch below).
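A sketch of how the ultrasonic distance warning could work, assuming an HC-SR04-style sensor (the pins and warning threshold are placeholders): the echo pulse's travel time is converted to distance at roughly 343 m/s.

    # Sketch: ultrasonic range measurement and proximity warning.
    # HC-SR04-style sensor and pin numbers are assumptions.
    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24              # hypothetical GPIO pins
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    GPIO.output(TRIG, True)          # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo to start
        start = time.time()
    while GPIO.input(ECHO) == 1:     # wait for the echo to end
        end = time.time()

    distance_cm = (end - start) * 34300 / 2  # round trip at speed of sound
    if distance_cm < 20:
        print("Warning: obstacle %.1f cm ahead" % distance_cm)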

Camera Test: The camera image is clear, and target recognition works, although with noticeable latency.

SLAM Module: In an unfamiliar environment, the robot can generate a map by traversing the route two to three times; the more passes, the more accurate the map.
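This behaviour matches how grid mapping accumulates evidence: every pass adds observations of the same cells, so their occupancy estimates converge. A toy log-odds update (a simplified illustration, not the project's SLAM package) shows the effect:

    # Toy illustration of why repeated passes sharpen the map: each scan
    # adds log-odds evidence to a cell, pushing p(occupied) toward 0 or 1.
    import math

    def to_prob(log_odds):
        return 1.0 / (1.0 + math.exp(-log_odds))

    cell = 0.0          # log-odds 0 means p = 0.5, i.e. unknown
    HIT = 0.85          # evidence added when the laser hits this cell

    for scan_pass in range(1, 4):
        cell += HIT     # the same wall is re-observed on every pass
        print("pass %d: p(occupied) = %.2f" % (scan_pass, to_prob(cell)))
    # pass 1: 0.70, pass 2: 0.85, pass 3: 0.93; confidence grows each pass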
Discussion
User Feedback: Users reported problems such as complex operation, insufficient intelligence and an unstable connection.
Difficulties and Solutions: Problems such as system incompatibility were encountered with the Raspberry Pi; these were solved by using two Raspberry Pi boards, adding batteries and heat sinks, and changing the connection method. Camera calls could fail until the call path in the configuration file was determined. Assembly raised issues with the position and mounting of the LiDAR, which were addressed by raising its height, reinforcing the mounting, adjusting parameters and replacing the sensor.
Applications and Future Work: The robot can navigate, localise, map, avoid obstacles and perform visual detection in an unfamiliar environment. Future plans include upgrading the SLAM system to 3D LiDAR, integrating the two boards, upgrading the battery and adding a voice module, in order to improve performance, adapt to rescue scenarios and promote deployment in airports and similar venues.
Project Summary: The Intelligent Airport Navigation Robot project utilises LiDAR SLAM and multiple sensors to ensure safety and efficiency, helping to improve the airport navigation experience and reduce the need for manpower, although roll-out must consider the impact on employment. Whilst the project is positive in terms of equality, diversity and inclusion, the designers have taken responsibility for safety and will need to continue improving the technology.