
Abstract

Robot navigation in crowded environments has recently benefited from advances in deep reinforcement learning (DRL). However, designing socially compliant robot behavior remains a challenge: avoiding collisions and predicting human behavior are crucial and difficult tasks while the robot navigates a congested social environment. To address this issue, this study proposes a dynamic warning zone that creates a circular sector around each human, sized according to the human's step length and speed. The warning zones are applied during the robot's training with deep reinforcement learning so that the robot properly accounts for human behavior and keeps a safe distance from people. In addition, a short-distance goal is established to help the robot reach the goal efficiently, through a reward function that penalizes the robot for moving away from the goal and rewards it for advancing towards it. The proposed model is applied to three state-of-the-art methods: collision avoidance with deep reinforcement learning (CADRL), long short-term memory (LSTM-RL), and social attention with reinforcement learning (SARL). The suggested method is tested in the Gazebo simulator and in the real world with the Robot Operating System (ROS) in three scenarios: a robot reaching a goal in free space, navigation among static obstacles, and navigation among humans. The experimental results demonstrate that the model outperforms previous methods and achieves safe navigation in an efficient time.
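The two ideas in the abstract, a warning zone that scales with a human's motion and a progress-based reward toward a short-distance goal, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the base radius, the sector's opening angle, and the linear scaling of radius with speed and step length are all assumptions.

```python
import math

def warning_zone(speed, step_length, base_radius=0.4, half_angle_deg=45.0):
    """Hypothetical dynamic warning zone: a circular sector around a human,
    oriented along the direction of motion. The sector radius grows with the
    human's speed (m/s) and step length (m), so faster pedestrians get a
    larger keep-out region."""
    radius = base_radius + speed * step_length
    half_angle = math.radians(half_angle_deg)  # sector half-opening angle
    return radius, half_angle

def progress_reward(prev_goal_dist, curr_goal_dist, scale=1.0):
    """Hypothetical short-distance-goal shaping: positive reward when the
    robot moves toward the goal, negative when it moves away."""
    return scale * (prev_goal_dist - curr_goal_dist)

# A stationary human keeps only the base radius; a walking human gets more.
r_still, _ = warning_zone(speed=0.0, step_length=0.6)
r_walk, _ = warning_zone(speed=1.2, step_length=0.6)
```

Under this sketch, `r_still` is 0.4 m while `r_walk` is 1.12 m, and a step that reduces the goal distance from 2.0 m to 1.5 m yields a reward of +0.5, while the reverse step yields -0.5.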
Keywords: Artificial intelligence, Robot, Robotics, Reinforcement learning, Reinforcement learning algorithms, Deep reinforcement learning, Dynamic obstacle avoidance, Dynamic warning zone, Short-distance goal, Autonomous robots, Mobile robot, Social robot, Robot control, Robot learning, Behavior-based robotics, Human–computer interaction, Pedestrian dynamics, Simulation, Computer science, Engineering
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article based on the underlying citation network (diachronically). | 21 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Top 10% |
