Approaches and Information



Team TRCC


Robot Design
 
         The robot designed and built for the Thailand Robot@Home competition in 2012 is called 'TRCC-422R'. The base of the robot has three omnidirectional wheels, as shown in Figure 1. With this design, the robot can move in any direction without steering. Each wheel is driven at a desired velocity calculated from the inverse kinematics equations. The sensors and the robot arm are attached to a column at the middle of the robot. The robot arm can slide up and down in order to manipulate objects at different heights.
        The robot structure is made of aluminum because this material is lightweight and strong. The lower part of the robot carries a notebook computer, electronics boards, and batteries. The robot moves at a maximum speed of 0.6 m/s, weighs 25 kg, and stands 1.5 m tall. The robot arm has 6 degrees of freedom and can lift a 1.5 kg payload.



Fig. 1. 'TRCC-422R' robot in the Thailand Robot@Home Championship League 2012.


Software
 

The software for this robot is implemented in the C# programming language with the EMGU library and covers human recognition, face recognition, speech recognition, object recognition, and self-localization and mapping.



1. Human Recognition
The Microsoft Kinect SDK is used for Kinect skeletal tracking [1]. The person's full body must initially be visible for the system to recognize him or her as a human. Thereafter, the microcontrollers drive the robot wheels to follow the person.
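
As a concrete illustration, below is a minimal C# sketch of this behavior using the Kinect for Windows SDK v1 skeleton stream. The control gains, the 1.2 m following distance, and the SendVelocity helper (standing in for the serial link to the microcontrollers) are illustrative assumptions, not the team's actual values.

using System;
using System.Linq;
using Microsoft.Kinect;

class PersonFollower
{
    Skeleton[] skeletons;

    public void Start()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .First(s => s.Status == KinectStatus.Connected);
        sensor.SkeletonStream.Enable();          // requires the full body in view
        sensor.SkeletonFrameReady += OnSkeletonFrame;
        sensor.Start();
    }

    void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            if (skeletons == null) skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
        }
        Skeleton person = skeletons.FirstOrDefault(
            s => s.TrackingState == SkeletonTrackingState.Tracked);
        if (person == null) return;

        // The skeleton position is in meters relative to the sensor:
        // X is the lateral offset, Z the distance. Simple proportional
        // control turns toward the person and keeps about 1.2 m away.
        double omega = -0.8 * person.Position.X;
        double v     =  0.5 * (person.Position.Z - 1.2);
        SendVelocity(v, omega);   // hypothetical serial command to the wheels
    }

    void SendVelocity(double v, double omega) { /* serial write omitted */ }
}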

2. Auditory Perception
Speech is recognized using the Microsoft Speech SDK. Before the recognition process starts, the vocabulary and sentence commands are added to the recognizer's grammar. Complex commands consisting of long sentences can then be recognized, and the robot responds with the appropriate behavior.
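
A minimal sketch of this grammar-based setup with the managed System.Speech API is shown below; the example phrases are placeholders, not the team's actual command set.

using System;
using System.Speech.Recognition;

class CommandListener
{
    public void Start()
    {
        var engine = new SpeechRecognitionEngine();

        // The sentence commands are added to a grammar before recognition.
        var commands = new Choices(
            "follow me",
            "bring me the bottle",
            "go to the kitchen");
        engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

        engine.SpeechRecognized += (sender, e) =>
        {
            // e.Result.Text holds the recognized sentence; the matching
            // robot behavior would be dispatched here.
            Console.WriteLine("Heard: " + e.Result.Text);
        };

        engine.SetInputToDefaultAudioDevice();
        engine.RecognizeAsync(RecognizeMode.Multiple);   // keep listening
    }
}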



3. Perception of Objects
Objects are recognized using SURF (Speeded-Up Robust Features) [2, 3] from the EMGU library. Images of each object are recorded from different viewpoints during training to create a data set. SURF features are extracted from the camera frame and matched against the data set. The computer then sends control signals to the microcontroller, and the robot uses its arm to grasp the target object according to the object's location in the frame. The object recognition user interface is shown in Figure 2.
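
The condensed sketch below follows the EMGU 2.x-era API of the cited wiki example [3]; class names differ between EMGU versions, and the thresholds (Hessian 500, uniqueness ratio 0.8, minimum match count 10) are illustrative.

using Emgu.CV;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static bool MatchObject(Image<Gray, byte> model, Image<Gray, byte> observed)
{
    var surf = new SURFDetector(500, false);   // Hessian threshold 500

    // SURF keypoints and descriptors for the trained model and the scene.
    VectorOfKeyPoint modelKp = surf.DetectKeyPointsRaw(model, null);
    Matrix<float> modelDesc  = surf.ComputeDescriptorsRaw(model, null, modelKp);
    VectorOfKeyPoint obsKp   = surf.DetectKeyPointsRaw(observed, null);
    Matrix<float> obsDesc    = surf.ComputeDescriptorsRaw(observed, null, obsKp);

    // k-nearest-neighbour matching against the model, followed by a
    // uniqueness (ratio) test to discard ambiguous matches.
    var matcher = new BruteForceMatcher<float>(DistanceType.L2);
    matcher.Add(modelDesc);
    var indices = new Matrix<int>(obsDesc.Rows, 2);
    using (var dist = new Matrix<float>(obsDesc.Rows, 2))
    {
        matcher.KnnMatch(obsDesc, indices, dist, 2, null);
        var mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DToolbox.VoteForUniqueness(dist, 0.8, mask);

        // Enough surviving matches means the object is in the frame.
        int good = 0;
        for (int i = 0; i < mask.Rows; i++)
            if (mask.Data[i, 0] != 0) good++;
        return good > 10;
    }
}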







Fig. 2. Object Recognition User Interface

4. Face Recognition
Faces are detected using Haar-like features [4, 5] from the EMGU library. Images of the users' faces are recorded during training to create a data set, and each detected face is then identified against that data set. The face recognition user interface is shown in Figure 3.
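
Below is a sketch of this pipeline under the EMGU 2.x-era API (EigenObjectRecognizer was removed in later versions); the cascade file name, the 100x100 face size, and the eigen-distance threshold are illustrative assumptions.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static string IdentifyFace(Image<Bgr, byte> frame,
                           Image<Gray, byte>[] trainedFaces, string[] labels)
{
    // Boosted cascade of Haar-like features for face detection [4, 5].
    var cascade = new CascadeClassifier("haarcascade_frontalface_default.xml");
    Image<Gray, byte> gray = frame.Convert<Gray, byte>();
    Rectangle[] faces = cascade.DetectMultiScale(
        gray, 1.1, 10, new Size(20, 20), Size.Empty);
    if (faces.Length == 0) return null;

    // Eigenface identification against the recorded user data set.
    var criteria   = new MCvTermCriteria(trainedFaces.Length, 0.001);
    var recognizer = new EigenObjectRecognizer(trainedFaces, labels,
                                               3000, ref criteria);
    Image<Gray, byte> face = gray.Copy(faces[0])
        .Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    return recognizer.Recognize(face);   // user label, or empty if unknown
}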




Fig. 3. Face Recognition User Interface

5. Self-Localization and Mapping
The laser scanner and an incremental encoder are used for pose estimation, locating the robot with a Particle Filter algorithm [6]. The Particle Filter finds the position of the robot in a two-dimensional map, as shown in Figure 4. As shown in Figure 5, the laser scanner is also tilted by a motor to build a three-dimensional map from the laser data, which the robot uses for obstacle avoidance.
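
The sketch below shows the generic structure of such a filter: predict each particle from the encoder odometry, weight it by how well the laser scan fits the map from its pose, then resample. The scan-likelihood function (the map-dependent part) and the noise parameters are placeholders, not the team's implementation.

using System;
using System.Linq;

struct Pose { public double X, Y, Theta; }

class ParticleFilter
{
    // Particles are assumed initialized around the start pose (omitted).
    Pose[] particles = new Pose[1000];
    double[] weights = new double[1000];
    Random rng = new Random();

    double Noise(double sigma)
    {
        // Rough zero-mean, Gaussian-like sample (sum of uniforms).
        return sigma * (rng.NextDouble() + rng.NextDouble() + rng.NextDouble() - 1.5);
    }

    // Prediction: move every particle by the odometry step plus noise.
    public void Predict(double dDist, double dTheta)
    {
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].Theta += dTheta + Noise(0.02);
            particles[i].X += (dDist + Noise(0.05)) * Math.Cos(particles[i].Theta);
            particles[i].Y += (dDist + Noise(0.05)) * Math.Sin(particles[i].Theta);
        }
    }

    // Correction: scanLikelihood scores how well the current laser scan
    // matches the 2-D map as seen from a candidate pose; particles are
    // then resampled in proportion to those weights.
    public void Correct(Func<Pose, double> scanLikelihood)
    {
        for (int i = 0; i < particles.Length; i++)
            weights[i] = scanLikelihood(particles[i]);
        double total = weights.Sum();

        var resampled = new Pose[particles.Length];
        for (int i = 0; i < particles.Length; i++)
        {
            double r = rng.NextDouble() * total, acc = 0;
            int j = 0;
            while (j < weights.Length - 1 && acc + weights[j] < r)
                acc += weights[j++];
            resampled[i] = particles[j];
        }
        particles = resampled;
    }
}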


                                             
Fig. 4. Robot self-localization

Fig. 5. Tilted laser scan for 3D mapping
  
Control Systems 

      The low-level robot behavior is controlled by a program running on an AVR ATmega1280 microcontroller. The microcontroller controls the movement of the robot and the robot arm, and communicates with the computer to perform the tasks. A diagram of the robot control system is shown in Figure 6.
      The robot movement is specified by a linear velocity and an angular velocity, which are converted to individual wheel speeds by the inverse kinematics equations of the three omnidirectional wheels.
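
For three omni wheels spaced 120 degrees apart, each wheel's speed is the projection of the commanded body velocity onto that wheel's rolling direction plus a rotation term, as in this sketch (the wheel mounting angles and base radius are illustrative):

using System;

static double[] WheelSpeeds(double vx, double vy, double omega)
{
    const double R = 0.20;   // distance from base center to each wheel, m
    double[] wheelAngle = { 0, 2 * Math.PI / 3, 4 * Math.PI / 3 };

    var v = new double[3];
    for (int i = 0; i < 3; i++)
        v[i] = -Math.Sin(wheelAngle[i]) * vx
             +  Math.Cos(wheelAngle[i]) * vy
             +  R * omega;   // contribution of body rotation
    return v;   // linear rim speed of each wheel, m/s
}

A pure rotation command (vx = vy = 0) drives all three wheels at the same speed, while a pure translation distributes the velocity according to each wheel's orientation.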
      The robot arm can move to any position in three-dimensional space within its reachable workspace. The trajectory of the end-effector to the target position is calculated from the inverse kinematics equations of the robot arm, and the microcontroller drives each servo motor to its target position.
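
The full 6-DOF arm solution is beyond a short listing; purely as an illustration of the idea, the following solves the classic two-link planar case with the law of cosines (the link lengths are illustrative, and this is not the team's actual solver).

using System;

// Returns { shoulderAngle, elbowAngle } in radians, or null if the
// target (x, y) lies outside the reachable workspace.
static double[] SolveTwoLink(double x, double y)
{
    const double L1 = 0.30, L2 = 0.25;   // illustrative link lengths, m

    double d2 = x * x + y * y;
    double cosElbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2);
    if (Math.Abs(cosElbow) > 1) return null;   // out of reach

    double elbow = Math.Acos(cosElbow);        // elbow-down solution
    double shoulder = Math.Atan2(y, x)
        - Math.Atan2(L2 * Math.Sin(elbow), L1 + L2 * Math.Cos(elbow));
    return new[] { shoulder, elbow };
}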
  
                   

Fig. 6. Diagram of robot control system



References
1. Skeletal Tracking, http://msdn.microsoft.com/en-us/library/hh973074.aspx, accessed 7 Feb 2013.

2. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features. In: ECCV 2006, pp. 404-417 (2006)

3. SURF feature detector in CSharp, http://www.emgu.com/wiki/index.php/SURF_feature_detector_in_CSharp, accessed 7 Feb 2013.

4. Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. In: CVPR 2001 (2001)

5. Lienhart, R., Maydt, J.: An Extended Set of Haar-like Features for Rapid Object Detection. In: ICIP 2002, pp. I:900-903 (2002)

6. Guo, R., Sun, F., Yuan, J.: ICP Based on Polar Point Matching with Application to Graph-SLAM. In: ICMA 2009, pp. 1122-1127 (2009)














