Training a Vision-Based Autonomous Robot from Material Bending Analysis to Deformation Variable Prediction with an XR Approach
Chi-Fu Hsiao Dept. of Architecture, Tamkang University
Ching-Han Lee College of Future, National Yunlin University of Science & Technology
Chun-Yen Chen Dept. of Digital Media Design, National Yunlin University of Science & Technology
Yu-Cyuan Fang Dept. of Digital Media Design, National Yunlin University of Science & Technology
Teng-Wen Chang Dept. of Digital Media Design, National Yunlin University of Science & Technology
This paper proposes a “Human Aided Hand-Eye System (HAHES)” that assists an autonomous robot in sampling and correcting a “Digital Twin Model (DTM)”. HAHES combines the eye-to-hand and eye-in-hand relationships to build an online DTM dataset. Users can download the data and inspect the DTM through a “Human Wearable XR Device (HWD)”, then continuously update the DTM by back-testing the probing depth and the overlap between the physical and virtual models. This paper focuses on a flexible linear material as the experimental subject and compares several data augmentation approaches: 2D OpenCV homogeneous transformation, depth probing at the autonomous robot arm's nodes, and overlap judgement through the HWD. We then train an additive regression model on the back-tested DTM dataset and use a gradient boosting algorithm to infer an approximate 3D coordinate dataset from the 2D OpenCV dataset, shortening the elapsed time. Finally, this paper proposes a flexible mechanism for training a vision-based autonomous robot by combining different hand-eye relationships, HWD postures, and the DTM in a recursive workflow for further researchers.
Keywords: Digital Twin Model, Hand-Eye Relationship, Human Wearable XR Device, Homogeneous Transformation, Gradient Boosting