Please use this identifier to cite or link to this item:
http://theses.ncl.ac.uk/jspui/handle/10443/6710
| Title: | Developing neural network navigation for service robots in dynamic environments |
| Authors: | Wu, Quan |
| Issue Date: | 2025 |
| Publisher: | Newcastle University |
| Abstract: | Learning to navigate autonomously in an unknown indoor environment whilst avoiding both static and dynamic obstacles is crucial for mobile robots, yet traditional navigation systems lack the capability for autonomous learning. This study examines the performance of a navigation system for a wheeled mobile robot based on a convolutional neural network (CNN). The evaluation was conducted in simulation using Webots, where the environment was designed with walls, floors, and objects to create boundaries and obstacles. LiDAR, compass, and bumper sensors received data from the simulated environment; these data were saved and converted into image files as input for the CNN. A 15-layer deep CNN was designed in the MATLAB Deep Network Designer app. Two networks were trained: one supervised and one unsupervised. In the supervised version, a user drove the robot to manually avoid obstacles whilst data were collected, which proved time-consuming. To obtain a more comprehensive data set efficiently, an unsupervised approach was adopted in which the robot started from random locations with random trajectories. Since both avoidance of and collision with obstacles occurred, two data sets were collected, labelled ‘good’ and ‘bad’, and both were used to train the network. Validation during training proved an unreliable indicator of network performance because a given input can correspond to multiple valid outputs. Therefore, all trained networks were tested in Webots using distance travelled and number of collisions as the performance metrics. The supervised network achieved a success rate of 99.24% whilst the unsupervised network achieved 95.43%, demonstrating that supervised learning in a simulated environment is a suitable way of training CNNs. The study then advanced to a dynamic environment by introducing several moving objects into the previous environment; these objects, equipped with their own sensors, could avoid both the robot and the static boxes. Initially, the method from the static environment was applied, using five consecutive scans instead of one to detect moving objects, but the trained network failed to make the robot avoid obstacles effectively. The algorithm was therefore improved in two subsequent versions: algorithm A increased the delay time between data captures to enhance the difference between scans, whereas algorithm B divided the original motion process into five steps with a consistent movement decision, enriching the distinctiveness and information content of the input data. The final collision rate decreased from the initial 17.21% to 6.78% with algorithm B. The final step involved constructing the physical robot and testing the trained network on it. A two-wheeled differential robot equipped with a LiDAR sensor was built to test the basic SLAM process in a static environment, an IMU module was included to determine the heading relative to north, and Super-Beacons were used to record the trajectory. The trained network enabled the real robot to navigate effectively, consistently avoiding obstacles and reaching the north side of the room. Future objectives include deploying the trained network in a dynamic environment to further enhance the robot's navigation capabilities. |
| Description: | Ph.D. Thesis. |
| URI: | http://hdl.handle.net/10443/6710 |
| Appears in Collections: | School of Engineering |
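
The abstract describes converting LiDAR, compass, and bumper readings into image files for CNN training, with unsupervised runs labelled ‘good’ or ‘bad’ according to collision outcome. The sketch below illustrates one plausible version of that pipeline; the grid size, the polar-to-pixel encoding, the file naming, and the `scan_to_image`/`label_sample` helpers are assumptions for illustration, not the thesis's actual format.

```python
"""Minimal sketch of the data pipeline summarised in the abstract: LiDAR,
compass, and bumper readings are logged per control step and converted into
image files used as CNN training input. Array shapes, encoding, and names
are assumptions; the thesis does not specify its exact image format here."""
import numpy as np
from PIL import Image

GRID = 64          # assumed side length of the scan image, in pixels
MAX_RANGE = 5.0    # assumed LiDAR maximum range, in metres

def scan_to_image(ranges, heading_rad):
    """Rasterise one 360-degree LiDAR scan into a GRID x GRID grayscale image.

    ranges      -- distances, one per beam, evenly spaced over 2*pi
    heading_rad -- compass heading, used to rotate the scan into a fixed
                   north-up frame so the network sees a consistent view
    """
    img = np.zeros((GRID, GRID), dtype=np.uint8)
    n = len(ranges)
    for i, r in enumerate(ranges):
        r = min(r, MAX_RANGE)
        angle = 2 * np.pi * i / n + heading_rad   # rotate to north-up
        # Convert the polar return to grid coordinates centred on the robot.
        x = int(GRID / 2 + (r / MAX_RANGE) * (GRID / 2 - 1) * np.cos(angle))
        y = int(GRID / 2 + (r / MAX_RANGE) * (GRID / 2 - 1) * np.sin(angle))
        img[y, x] = 255                           # mark the obstacle return
    return img

def label_sample(bumper_hit):
    """Unsupervised labelling rule from the abstract: runs that end in a
    bumper collision are 'bad', collision-free runs are 'good'."""
    return "bad" if bumper_hit else "good"

# Example: save one synthetic scan as a labelled training image.
ranges = np.random.uniform(0.3, MAX_RANGE, 180)
Image.fromarray(scan_to_image(ranges, heading_rad=0.0)).save(
    f"{label_sample(bumper_hit=False)}_scan_0000.png")
```

Rotating each scan by the compass heading gives the network a consistent, north-up view, which matches the stated navigation goal of reaching the north side of the room.

Algorithm B, as summarised in the abstract, splits each motion into five steps that share one movement decision and records a scan at each step, so moving obstacles appear as a coherent shift across the stacked input. Below is a minimal controller skeleton for that loop. It uses the real Webots Python controller API (`Robot`, `getDevice`, `step`), but the device names, the `predict` placeholder, and the wheel-speed command format are assumptions, not the thesis's implementation.

```python
"""Sketch of 'algorithm B': each motion is divided into five steps under a
consistent decision, and the five LiDAR scans gathered along the way form
one stacked network input. Runs as a Webots robot controller."""
from controller import Robot

STEPS_PER_DECISION = 5          # abstract: motion divided into five steps

robot = Robot()
timestep = int(robot.getBasicTimeStep())

lidar = robot.getDevice("lidar")             # assumed device name
lidar.enable(timestep)
left = robot.getDevice("left wheel motor")   # assumed device names
right = robot.getDevice("right wheel motor")
for m in (left, right):
    m.setPosition(float("inf"))              # velocity-control mode
    m.setVelocity(0.0)

def predict(stacked_scans):
    """Placeholder for the trained CNN (trained in MATLAB in the thesis).
    Returns a hypothetical (left, right) wheel-speed command."""
    return 3.0, 3.0                           # dummy: drive straight

scans = []
command = (0.0, 0.0)
while robot.step(timestep) != -1:
    scans.append(list(lidar.getRangeImage()))
    # Hold the current decision for five steps; the five scans recorded
    # under one consistent motion make a moving obstacle visible as a
    # coherent shift across the stack, enriching the input data.
    if len(scans) == STEPS_PER_DECISION:
        command = predict(scans)
        scans.clear()
    left.setVelocity(command[0])
    right.setVelocity(command[1])
```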
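Holding one decision across the five steps, rather than increasing only the inter-capture delay as in algorithm A, is what the abstract credits for the collision rate falling from 17.21% to 6.78%.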
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Wu Quan (170132055) ecopy.pdf | Thesis | 4.68 MB | Adobe PDF |
| dspacelicence.pdf | Licence | 43.82 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.