Syst Comp Jpn, 34(3): 10–19, 2003. Published online in Wiley InterScience (DOI: 10.1002/scj.10203).

Conventionally, robot control algorithms are divided into two stages, namely, path or trajectory planning and path tracking (or path control). The authors developed a new vision-based interface that detects gestures in real time to enable interaction between the user and the computer. This interface consists of two elements: the "Motion Processor™," a new image acquisition device, and a method for extracting the region of interest (ROI). By illuminating the target object with near-infrared light and capturing the reflected light with image sensors, the Motion Processor eliminates the background and acquires shape, motion, and depth information for the target object alone. The authors also proposed a method that uses the acquired depth information to extract the ROI quickly and stably, and evaluation experiments on a PC showed that the region can be detected with high precision within 0.06 second. Combining the Motion Processor with this ROI extraction method enables real-time sensing of the shape and motion of a specific object in an image, allowing the interface to function as a computer eye that easily recognizes a target object. The authors applied this real-time vision-based interface to robotics and, together with speech recognition on a single PC, created a prototype pet-robot system that can respond to the user's gestures.

Purpose – This paper reports a few results of an ongoing research project that aims to explore ways to command an industrial robot using the human voice. This feature can be interesting in several industrial, laboratory, and clean-room applications where close cooperation between robots and humans is desirable.

Design/methodology/approach – A demonstration is presented using two industrial robots and a personal computer (PC) equipped with a sound board and a headset microphone. The demonstration was coded using Microsoft Visual Basic and C#.NET 2003 and associated with two simple robot applications: one capable of picking and placing objects and moving to predefined positions, and the other capable of performing a simple linear weld on a workpiece. The speech recognition grammar is specified using the grammar builder from the Microsoft Speech SDK 5.1. The paper also introduces the concepts of text-to-speech translation and voice recognition, and shows how these features can be used in applications built with the Microsoft .NET framework.

Findings – Two simple examples designed to operate with a state-of-the-art industrial robot manipulator are built to demonstrate the applicability to laboratory and industrial settings. The paper is very detailed in showing implementation aspects, enabling the reader to build immediately on the presented concepts and tools. In particular, the connection between the PC and the robot is explained in detail, since it was built using an RPC socket mechanism developed from scratch.

Practical implications – Finally, the paper discusses application to industrial cases where close cooperation between humans and robots is necessary.

Originality/value – The presented code and examples, along with the interesting and reliable results, indicate clearly that the technology is suitable for industrial use.

With the rapid growth of 3D printing and associated innovations, this fabrication technology is being used increasingly in academic research and industry. Robots can be fabricated from 3D-printed component parts that are simple, light, and cheap. The main objective of this work is to construct a robot manipulator based on 3D printing to meet the demand for low-cost, lightweight structures. The SolidWorks CAD model is used with LabVIEW to simulate the given trajectory, and real-time control with dynamic simulation is used to verify the kinematic calculations. Users of the simulation capabilities of the proposed methodology can preview the simulated motion, and can perceive and resolve discrepancies between the planned and simulated paths prior to execution of a task. Thus, a simulated system was proposed to assist users in programming the robot manipulator and to improve the positioning of its motion. The main advantage of the proposed scheme is that no extra sensors need to be mounted on the real robot manipulator to measure joint-space coordinates, simplifying the hardware. These outcomes can also be used to demonstrate the application of the proposed scheme in an undergraduate robotics course.
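The depth-based ROI extraction described in the Motion Processor abstract can be illustrated with simple depth thresholding. This is only a minimal sketch, not the authors' algorithm: the threshold values, the synthetic frame, and the bounding-box step are all assumptions for the example.

```python
import numpy as np

def extract_roi(depth, near=0.3, far=1.2):
    """Return (top, left, bottom, right) of pixels with depth in (near, far).

    depth: 2-D array of per-pixel distances in metres. Background pixels,
    which return little near-infrared light, are assumed to read 0.
    The thresholds are illustrative, not values from the paper.
    """
    mask = (depth > near) & (depth < far)
    if not mask.any():
        return None
    rows = mask.any(axis=1)          # which image rows contain the object
    cols = mask.any(axis=0)          # which image columns contain it
    top = int(np.argmax(rows))
    bottom = int(len(rows) - 1 - np.argmax(rows[::-1]))
    left = int(np.argmax(cols))
    right = int(len(cols) - 1 - np.argmax(cols[::-1]))
    return (top, left, bottom, right)

# Synthetic 120x160 depth frame with a hand-sized blob 0.8 m away
frame = np.zeros((120, 160))
frame[40:80, 60:100] = 0.8
print(extract_roi(frame))  # (40, 60, 79, 99)
```

Because the near-infrared illumination already suppresses the background, a fixed depth window like this is cheap enough to run per frame, which is consistent with the sub-0.06-second detection the abstract reports.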
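The voice-command paper's PC-to-robot link is a custom RPC socket mechanism written in Visual Basic and C#.NET; that code is not reproduced here. As a rough illustration of the idea, the following hypothetical Python sketch exchanges one line-based command over TCP. The port, message format, and command set are invented for the example.

```python
import socket
import threading

# Hypothetical protocol: the PC sends one recognized voice command
# (e.g. "PICK A"), the robot controller answers "OK" or "ERROR".
HOST, PORT = "127.0.0.1", 5705
ready = threading.Event()

def robot_stub():
    """Stand-in for the robot controller: accept one command, acknowledge it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # safe for the client to connect now
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            reply = "OK" if cmd in {"PICK A", "HOME", "WELD"} else "ERROR"
            conn.sendall((reply + "\n").encode())

def send_command(cmd):
    """Send one voice-derived command and return the controller's reply."""
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall((cmd + "\n").encode())
        return cli.recv(1024).decode().strip()

t = threading.Thread(target=robot_stub)
t.start()
print(send_command("PICK A"))  # OK
t.join()
```

In the paper's setup the recognized phrase would come from the Speech SDK grammar rather than a hard-coded string, but the transport idea, a plain socket carrying short commands and acknowledgements, is the same.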
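The trajectory-verification step in the 3D-printed-manipulator abstract amounts to checking planned waypoints against the arm's kinematics before execution. A minimal sketch, assuming a planar two-link arm with made-up link lengths (the paper's actual SolidWorks/LabVIEW model is not given here):

```python
import math

def fk_2link(theta1, theta2, l1=0.25, l2=0.20):
    """Planar two-link forward kinematics: joint angles (rad) -> tool tip (x, y).

    The link lengths are illustrative defaults; the paper's arm is not modelled.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Compare a planned waypoint with the kinematic result, the way the simulated
# preview is meant to catch discrepancies before running the physical arm.
planned = (0.45, 0.0)                 # arm fully stretched along +x
computed = fk_2link(0.0, 0.0)
ok = all(math.isclose(p, c, abs_tol=1e-9) for p, c in zip(planned, computed))
print(ok)  # True
```

A real verifier would sweep every waypoint of the trajectory and flag any point where the planned and computed poses diverge beyond a tolerance, which is exactly the discrepancy-resolution step the abstract describes.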