Autonomous Navigation Method Based on RGB-D Camera for a Crop Phenotyping Robot
J Field Robotics (2024)
Abstract
Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB‐D camera, specifically designed for phenotyping robots in field environments. The PP‐LiteSeg semantic segmentation model was employed due to its real‐time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB‐D camera. Navigation feature points were extracted from these segmented areas, with their three‐dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real‐time deviation correction while the phenotyping robot is walking. Additionally, the method includes end‐of‐row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP‐LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
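The deviation computation summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics (fx, fy, cx, cy), the least-squares line fit in the ground plane, and all function names are assumptions introduced here. It back-projects segmented feature pixels with their depths into camera coordinates, then estimates the angle deviation α and lateral deviation d relative to the crop row.

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    # Back-project a pixel (u, v) with measured depth (in meters) into
    # 3D camera coordinates using a standard pinhole camera model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def navigation_deviations(points):
    # points: Nx3 array of 3D navigation feature points along the crop-row
    # centerline, in camera coordinates (x right, z forward).
    # Fit a line x = a*z + b in the ground (x-z) plane by least squares;
    # the slope gives the heading error, the intercept the lateral offset.
    z = points[:, 2]
    x = points[:, 0]
    a, b = np.polyfit(z, x, 1)
    alpha = np.degrees(np.arctan(a))  # angle deviation from the row direction
    d = b                             # lateral deviation at the camera origin
    return alpha, d

# Example: feature points lying on the line x = 0.1*z + 0.05
pts = np.array([[0.15, 0.0, 1.0],
                [0.25, 0.0, 2.0],
                [0.35, 0.0, 3.0]])
alpha, d = navigation_deviations(pts)  # alpha ≈ 5.71°, d ≈ 0.05 m
```

In the paper, α and d then feed the fuzzy controllers that issue steering corrections; the sketch stops at the deviation estimates themselves.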
Keywords
autonomous navigation, crop phenotyping robot, depth information, row spacing calculation, semantic segmentation