泡泡一分钟:Learning Motion Planning Policies in Uncertain Environments through Repeated Task Executions

Zhang Ning

link: https://pan.baidu.com/s/1TlSJn0fXuKEwZ9vts4xA6g
extraction code: jwsd
Copy this link and open it in the Baidu Netdisk mobile app for more convenient access.

Florence Tsang, Ryan A. Macdonald, and Stephen L. Smith

The ability to navigate uncertain environments from a start to a goal location is a necessity in many applications. While there are many reactive algorithms for online replanning, there has not been much investigation into leveraging past executions of the same navigation task to improve future executions. In this work, we first formalize this problem by introducing the Learned Reactive Planning Problem (LRPP). Second, we propose a method to capture these past executions and from that determine a motion policy to handle obstacles that the robot has seen before. Third, we show from our experiments that using this policy can significantly reduce the execution cost over just using reactive algorithms.
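The core idea above, reusing knowledge from past executions when a familiar obstacle configuration recurs and falling back to reactive replanning otherwise, can be illustrated with a minimal sketch. Note this is not the paper's actual LRPP algorithm; all names here (`PolicyMemory`, `reactive_plan`) are illustrative, BFS stands in for an arbitrary reactive planner, and the "policy" is simplified to a lookup of the cheapest path previously found for an observed obstacle set.

```python
# Hypothetical sketch: remember past executions keyed by the observed
# obstacle configuration, reuse the cheapest known path when that
# configuration recurs, otherwise replan reactively.
from collections import deque


def reactive_plan(grid, start, goal):
    """Plain BFS on a 0/1 grid as a stand-in for any reactive replanner."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable


class PolicyMemory:
    """Maps an observed obstacle set to the best path found for it so far."""

    def __init__(self):
        self.best = {}  # frozenset(obstacle cells) -> (cost, path)

    def record(self, obstacles, path):
        key = frozenset(obstacles)
        cost = len(path)
        if key not in self.best or cost < self.best[key][0]:
            self.best[key] = (cost, path)

    def execute(self, grid, obstacles, start, goal):
        key = frozenset(obstacles)
        if key in self.best:
            # Obstacle configuration seen before: reuse the learned path.
            return self.best[key][1]
        # Unfamiliar configuration: replan reactively and remember the result.
        path = reactive_plan(grid, start, goal)
        if path is not None:
            self.record(obstacles, path)
        return path
```

Repeated executions against the same obstacle set then return the stored path directly instead of replanning, which is the source of the cost reduction the abstract describes (the real method learns a richer policy than a path lookup).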


Origin www.cnblogs.com/feifanrensheng/p/11519225.html