Abstract
Modern agricultural applications rely increasingly on deep learning solutions. However, training well-performing deep networks requires large amounts of annotated data, which may not be available and, in the case of 3D annotation, may not even be feasible for human annotators. In this work, we develop a deep learning approach to segment mushrooms and estimate their pose from 3D data, in the form of point clouds acquired by depth sensors. To circumvent the annotation problem, we create a synthetic dataset of mushroom scenes for which full 3D information, such as the pose of each mushroom, is known. The proposed network has a fully convolutional backbone that parses sparse 3D data and predicts pose information that implicitly defines both the instance segmentation and pose estimation tasks. We validate the effectiveness of the proposed implicit approach on a synthetic test set, and provide qualitative results on a small set of real point clouds acquired with depth sensors. Code is publicly available at this https URL.
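The abstract states that predicted pose information implicitly defines the instance segmentation. A common way to realize this idea, sketched below as an assumption rather than the paper's exact method, is center-offset voting: each 3D point predicts an offset to its instance's center (e.g., the mushroom cap center), and clustering the shifted points recovers per-instance labels without an explicit segmentation head.

```python
import numpy as np

def cluster_by_center_votes(points, offsets, radius=0.05):
    """Hypothetical sketch: derive instance labels from per-point pose votes.

    Each point's predicted center is point + offset; points whose votes land
    within `radius` of each other are grouped into one instance. This greedy
    grouping stands in for a proper clustering step (e.g., mean-shift/DBSCAN)
    and is illustrative only, not the paper's implementation.
    """
    centers = points + offsets          # where each point believes its instance center is
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue                    # already assigned to an instance
        # unassigned points voting for a center near this point's vote
        close = np.linalg.norm(centers - centers[i], axis=1) < radius
        mask = close & (labels == -1)
        labels[mask] = next_label
        next_label += 1
    return labels
```

With perfect offsets, all points of one mushroom vote for the same center, so the clusters coincide with the instances; in practice, network predictions are noisy and the clustering radius trades over- against under-segmentation.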
URL
https://arxiv.org/abs/2404.12144