Smart Hand for Digital Twin Timber Work: Stepping Scanning and Its Camera Positioning by an Industrial Robot Arm

Chika Sukegawa Keio University
Arastoo Khajehee Keio University
Takuma Kawakami Takenaka Corporation
Syunsuke Someya Takenaka Corporation
Yuji Hirano Takenaka Corporation
Masako Shibuya Keio University
Koki Ito Keio University
Yoshiaki Watanabe Takenaka Corporation
Qiang Wang Takenaka Corporation
Tooru Inaba Takenaka Corporation
Alric Lee Keio University
Kensuke Hotta Keio University
Yasushi Ikeda Keio University

This paper explores a computer vision method for a robot arm in wood processing, named 'stepping scanning', for human-robot interaction within an Augmented Reality (AR) environment. With the aim of reducing the technical threshold required for a human user to perform advanced robotic processes, we explore how an industrial robot arm can collaborate with a human user to scan a timber block placed in an arbitrary position unknown to the computer system. As the placement of the timber block is not predetermined, the computer must first find where the material is; it must then obtain information about the material and form characteristics of the block and present them to the user. A digital twin model of the material is created with computer vision techniques as a base map for exchanging information and instructions with the human user. There are many examples of computer vision or machine vision applications in other industries, such as object detection in autonomous driving or image recognition in industrial robots (Klette 2014), yet there are few industrial use cases in the architectural construction field. Most construction work is carried out by a mix of humans and machines under the coordination of the former. In some situations, however, such as the processing of wood parts by a robot arm, machine vision can be used to its full potential, since most wood processing has already been taken over by electrically driven tools. The problem statement is therefore that, in the context of human-machine interaction, there are few definitions of the information that must be captured from physical space to operate a CNC machine in automated wood processing. The aim of this article is to construct a unique setup of a computer vision system for wood processing with an industrial robot arm. To achieve that aim, the following research questions were set: 1) What sort of information is needed for automated timber work with robot arms and machine vision?
2) What resolution of information is required to achieve the targeted accuracy? 3) How should these be considered in terms of practicality? Methodology: The system developed in this research has two major components: 1. real-time interaction of a human user with a physical-virtual robot through AR; 2. a multi-stage scanning process performed by the robot. The system uses a digital twin model of the physical robot placed in AR, superimposed on the physical one. It allows the user to see simulations of the robot's movement in place in AR and to issue material-scanning commands to the physical robot. Because the digital twin is placed within AR, the human user can assess the robot's movement in relation to real-world conditions. In addition, projection mapping cast from the end effector of the robot onto the material surface is used to present the material's properties in a visually intuitive manner. A Microsoft HoloLens and an RGB-D camera mounted on the robot were used as scanning tools. The scan stages were as follows: 1. a working-area scan using the HoloLens; 2. a 3D scan using a depth camera to recognize the approximate position of the timber; 3. an RGB scan to identify the material type; 4. an RGB scan based on the previous one to find the accurate orientation from surface indices; 5. a close, detailed 3D scan to find the accurate position of the material; 6. real-time interactive projection of potential milling points and other information onto the material, which the user can adjust through the AR user interface. All of these scan stages are used to gradually achieve an increasingly accurate representation of the material in the computer; the level of detail achievable at each stage corresponds to that required by the next.
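The coarse-to-fine staging described above can be illustrated with a minimal sketch. All stage functions and numeric values below are hypothetical stand-ins (the paper does not publish its implementation): each stage simulates a scan that refines the estimate of the timber's local origin to a tighter tolerance, and scanning stops once the target tolerance is reached.

```python
# Minimal sketch of the "stepping scanning" idea: run scan stages in order,
# each refining the pose estimate, until the tolerance target is met.
# Stage functions and values are illustrative placeholders, not the
# authors' implementation.

from dataclasses import dataclass


@dataclass
class PoseEstimate:
    x: float              # estimated position of the timber's local origin (mm)
    y: float
    tolerance_mm: float   # expected error bound of this estimate


def area_scan() -> PoseEstimate:
    # Stage 1: coarse working-area scan (e.g. via the HoloLens).
    return PoseEstimate(x=1500.0, y=800.0, tolerance_mm=100.0)


def depth_scan(prev: PoseEstimate) -> PoseEstimate:
    # Stage 2: depth-camera scan in the neighbourhood of the previous estimate.
    return PoseEstimate(prev.x + 12.0, prev.y - 8.0, tolerance_mm=20.0)


def detail_scan(prev: PoseEstimate) -> PoseEstimate:
    # Stage 5: close, detailed 3D scan for the final accurate position.
    return PoseEstimate(prev.x + 1.5, prev.y + 0.5, tolerance_mm=5.0)


def stepping_scan(target_tolerance_mm: float = 5.0) -> PoseEstimate:
    """Run refinement stages in sequence until the estimate is precise enough."""
    estimate = area_scan()
    for stage in (depth_scan, detail_scan):
        if estimate.tolerance_mm <= target_tolerance_mm:
            break
        estimate = stage(estimate)
    return estimate


if __name__ == "__main__":
    final = stepping_scan()
    print(f"origin estimate: ({final.x:.1f}, {final.y:.1f}) +/- {final.tolerance_mm} mm")
```

The key design point is that no stage is asked to deliver more precision than the next stage needs, which is what keeps the overall scan time low.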
The resulting system can operate on a given timber block with repeatable coordination accuracy of the timber's local origin point; the deviation between the milled result and the scanned timber is 5 mm to 10 mm. Through the projection mapping function the user can also instruct and interact with the robot to increase the accuracy to the desired degree, and verify that the robot's understanding of the material is correct. The contribution of this research is an effective scanning method for a semi-automated CNC robot arm as part of wood processing. The system reduces the time and resources usually required to create a meaningful digital information profile of the material, with an adaptive scanning resolution in parts of interest determined by the human user during the interactive process. In conclusion, the machine vision system developed in this paper creates an interactive workflow in which the robotic movement is visualized beforehand and a current understanding of the material is presented to the user in real time with gradually increasing accuracy, demonstrating the potential of machine vision in the AEC industry, not as a replacement for human workers but as a complementary counterpart. Reference: Reinhard Klette (2014). Concise Computer Vision. Springer. ISBN 978-1-4471-6320-6.

Keywords: Digital Twin, Stepping Scanning, Robot Arm, SDG 9
