Team TRCC
Robot Design
The robot designed and built for the Thailand @Home competition in 2012 is called 'TRCC-422R'. The base of this robot has three omnidirectional wheels, as shown in Figure 1. With this design, the robot can move in any direction without steering. Each wheel is driven at the desired velocity calculated from the inverse kinematics equations. The sensors and the robot arm are mounted on a column at the middle of the robot. The arm can slide up and down in order to manipulate objects at different heights.
The robot structure is made of aluminum because this material is lightweight and strong. The lower part of the robot carries a notebook computer, electronics boards, and batteries. The robot can move at a maximum speed of 0.6 m/s, weighs 25 kg, and is 1.5 m tall. The robot arm has 6 degrees of freedom and can lift a 1.5 kg payload.
1. Human Recognition
The Microsoft Kinect SDK is used for skeletal tracking [1]. The person's full body must initially be visible for the system to recognize him or her as human. Thereafter, the microcontrollers drive the robot wheels to follow the person.
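As an illustration of the following behavior, the sketch below (in Python; the actual implementation uses the Kinect SDK on .NET) computes linear and angular velocity commands from a tracked torso position. The gains and desired following distance are assumed values, not the team's parameters.

```python
# Minimal person-following controller sketch (hypothetical interface).
# Assumes the skeletal tracker provides the torso joint position in
# camera coordinates: x (lateral offset, m) and z (distance, m).

FOLLOW_DISTANCE = 1.0   # desired gap to the person, m (assumed value)
K_LINEAR = 0.5          # proportional gains (assumed; tune on the robot)
K_ANGULAR = 1.2
MAX_LINEAR = 0.6        # robot's maximum speed from the specification, m/s

def follow_command(torso_x, torso_z):
    """Return (linear, angular) velocity commands from the torso position."""
    linear = K_LINEAR * (torso_z - FOLLOW_DISTANCE)
    angular = -K_ANGULAR * torso_x          # turn to keep the person centered
    linear = max(-MAX_LINEAR, min(MAX_LINEAR, linear))
    return linear, angular
```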
2. Auditory Perception
Speech is recognized using the Microsoft Speech SDK. Before recognition, the vocabulary and sentence commands are added to the recognizer. Complex commands consisting of long sentences can be recognized, and the robot then responds with the appropriate behavior.
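A minimal sketch of the command-to-behavior mapping is shown below. It is SDK-agnostic Python that assumes the recognizer returns a transcript string; the command phrases and behavior names are invented for illustration, not the team's actual grammar.

```python
# Sketch of grammar-style command matching (illustrative; not the
# Microsoft Speech SDK API). Commands are registered up front, and the
# recognizer's transcript is matched against them to pick a behavior.

COMMANDS = {
    "follow me": "FOLLOW",           # hypothetical phrases and behaviors
    "bring me the bottle": "FETCH_BOTTLE",
    "go to the kitchen": "GOTO_KITCHEN",
}

def handle_transcript(transcript):
    """Map a recognized sentence to a robot behavior, if registered."""
    key = transcript.strip().lower()
    return COMMANDS.get(key)   # None means the sentence was not in the grammar
```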
3. Perception of Objects
Objects are recognized using SURF (Speeded-Up Robust Features) [2, 3] from the EMGU library. Images of each object are recorded from different viewpoints during training to create the data set. SURF features are extracted from detected objects and matched against the image data set. The computer then sends control signals to the microcontroller, and the robot uses its arm to grasp the target object according to the object's location in the frame. The object recognition user interface is shown in Figure 2.
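A minimal Python sketch of the SURF matching step is given below, using the same OpenCV routines that EMGU wraps. It assumes an opencv-contrib build that still ships the nonfree xfeatures2d module (SURF is patented), and the ratio-test threshold is an assumed value.

```python
import cv2

# SURF matching sketch in OpenCV (the team uses EMGU, a .NET wrapper
# around the same OpenCV routines). Requires an opencv-contrib build
# with the nonfree xfeatures2d module enabled.

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def match_object(train_img, scene_img, ratio=0.75):
    """Count ratio-test matches between a training view and the scene."""
    _, train_desc = surf.detectAndCompute(train_img, None)
    scene_kp, scene_desc = surf.detectAndCompute(scene_img, None)
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(train_desc, scene_desc, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return good, scene_kp   # matched keypoints locate the object in the frame
```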
4. Face Recognition
Faces are recognized using Haar-like features [4, 5] from the EMGU library. Images of the users' faces are recorded during training to create the data set. Detected faces are identified against the user face data set. The face recognition user interface is shown in Figure 3.
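The detection step can be sketched with OpenCV's Haar cascade detector, which EMGU wraps; the frontal-face cascade file ships with OpenCV. The identification step against the user data set is a separate matching stage not shown here.

```python
import cv2

# Haar-cascade face detection sketch (EMGU wraps this same OpenCV
# detector). The frontal-face cascade file is bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```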
5. Self-Localization and Mapping
A laser scan sensor and an incremental encoder are used for pose estimation, locating the robot with a particle filter algorithm [6]. The particle filter finds the local position of the robot in a two-dimensional map, as shown in Figure 4. In Figure 5, the laser scans and motor rotations are used for obstacle avoidance with a three-dimensional map created from the laser data.
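A minimal particle filter sketch follows: predict each particle with encoder odometry plus noise, weight it by how well the laser scan matches the map, then resample. The scan_likelihood function is a placeholder for the team's 2-D map model, and the particle count and noise levels are assumed values.

```python
import numpy as np

# Minimal Monte Carlo localization sketch. Particles are rows of
# (x, y, theta); scan_likelihood(pose, scan) is a placeholder that
# scores a pose against the 2-D map.

N = 500  # particle count (assumed)

def predict(particles, d_trans, d_rot):
    """Move particles by noisy odometry increments from the encoders."""
    particles[:, 0] += d_trans * np.cos(particles[:, 2])
    particles[:, 1] += d_trans * np.sin(particles[:, 2])
    particles[:, 2] += d_rot
    noise = np.random.normal(0, [0.02, 0.02, 0.01], particles.shape)
    return particles + noise  # noise scales are assumed values

def update(particles, scan, scan_likelihood):
    """Weight particles by the laser scan, then resample."""
    w = np.array([scan_likelihood(p, scan) for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), len(particles), p=w)
    return particles[idx]
```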
The low-level robot behavior is controlled by the program on an AVR ATmega1280 microcontroller. The microcontroller controls the movement of the robot and the robot arm, and communicates with the computer to perform the tasks. The diagram of the robot control system is shown in Figure 6.
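The wire protocol between the computer and the microcontroller is not described in the text; the sketch below shows one hypothetical frame format (header byte, command id, two 16-bit velocities, checksum) sent over a pyserial link, purely as an illustration of that communication path.

```python
import struct
import serial  # pyserial

# Hypothetical PC-to-microcontroller frame; the real wire format is not
# given in the text, so this layout is only an illustration.

def send_velocity(port, linear_mm_s, angular_mrad_s):
    """Send a velocity command frame: header, command id, payload, checksum."""
    payload = struct.pack("<Bhh", 0x01, linear_mm_s, angular_mrad_s)
    frame = b"\xAA" + payload + bytes([sum(payload) & 0xFF])
    port.write(frame)

# Usage (assumed device and baud rate):
# port = serial.Serial("/dev/ttyUSB0", 115200)
# send_velocity(port, 300, 0)   # 0.3 m/s forward, no rotation
```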
The robot movement is specified by a linear velocity and an angular velocity, which are converted to wheel speeds using the inverse kinematics equations of the three-omnidirectional-wheel base.
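A sketch of this inverse kinematics step follows: the commanded body twist (vx, vy, omega) maps to individual wheel speeds. The wheel mounting angles and base radius are assumed values, not the robot's measured geometry.

```python
import math

# Inverse kinematics sketch for a three-omniwheel base: body velocity
# (vx, vy, omega) -> individual wheel linear speeds.

WHEEL_ANGLES = [math.radians(a) for a in (90, 210, 330)]  # assumed layout
BASE_RADIUS = 0.20   # wheel distance from the center, m (assumed)

def wheel_speeds(vx, vy, omega):
    """Return the linear speed of each wheel for the commanded twist."""
    return [-math.sin(a) * vx + math.cos(a) * vy + BASE_RADIUS * omega
            for a in WHEEL_ANGLES]
```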
The robot arm can move to any position in three-dimensional space within its reachable workspace. The trajectory of the end-effector to the target position is calculated from the inverse kinematics equations of the arm, and the microcontroller drives each servo motor to the target position.
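As a simplified illustration of the arm's inverse kinematics step (the actual arm has 6 degrees of freedom and its own link lengths), the sketch below solves the classic two-link planar case with assumed link lengths.

```python
import math

# Simplified two-link planar inverse kinematics, as an illustration of
# computing joint angles from a target position; not the real 6-DOF arm.

L1, L2 = 0.25, 0.20  # link lengths, m (assumed)

def ik_2link(x, y):
    """Return (shoulder, elbow) angles reaching (x, y), one of two solutions."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        return None                      # target outside the workspace
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```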
Fig. 6. Diagram of robot control system