Development and Investigation of Vision System for a Small-Sized Mobile Humanoid Robot in a Smart Environment
- Conference Article
4
- 10.1109/icra.2014.6907775
- May 1, 2014
Temporal asynchrony between the two cameras of a vision system is a common problem in practice. In tasks such as estimating fast-moving targets, the estimation error caused by even a tiny temporal asynchrony becomes non-negligible. This paper addresses asynchrony in the stereo vision system of a humanoid ping-pong robot and presents a real-time, accurate ball-trajectory estimation algorithm. In our approach, the complex ball motion model is simplified to a polynomial function of time t, owing to the limited observation interval and the real-time computation requirement. We then use the perspective projection camera model to re-project the ball's parametric function of time into image coordinates on both cameras. Assuming that the time gap between the two asynchronous cameras remains constant over a very short interval, we obtain both the time-gap value and the trajectory parameters of the ball by minimizing the errors between the observed ball images in each camera and their re-projections from the modeled function of time. Comprehensive experiments on a real ping-pong robot are carried out; the results show that our approach better suits the vision system of a humanoid ping-pong robot when accuracy and real-time performance are considered simultaneously.
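The joint estimation idea can be sketched in one dimension: search a grid of candidate time offsets and, for each, fit a single polynomial to the pooled observations from both cameras, keeping the offset with the smallest residual. The quadratic trajectory, grid range, and sample times below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def estimate_offset_and_trajectory(t_a, x_a, t_b, x_b, deg=2,
                                   deltas=np.linspace(-0.05, 0.05, 101)):
    """Grid-search the unknown camera-B clock offset; for each candidate,
    fit one polynomial of time to the pooled samples and keep the offset
    with the smallest squared residual."""
    best_resid, best_delta, best_coeffs = np.inf, None, None
    for d in deltas:
        t = np.concatenate([t_a, t_b + d])      # shift B onto A's clock
        x = np.concatenate([x_a, x_b])
        coeffs = np.polyfit(t, x, deg)          # least-squares polynomial fit
        resid = np.sum((np.polyval(coeffs, t) - x) ** 2)
        if resid < best_resid:
            best_resid, best_delta, best_coeffs = resid, d, coeffs
    return best_delta, best_coeffs

# Synthetic parabolic "ball" track x(t) = -4.9 t^2 + 3 t + 1; camera B's
# timestamps lag camera A's true time axis by 20 ms.
traj = lambda t: -4.9 * t ** 2 + 3.0 * t + 1.0
true_delta = 0.02
t_a = np.linspace(0.0, 0.3, 16)                 # A's timestamps (true time)
t_b = np.linspace(0.005, 0.305, 16)             # B's reported timestamps
x_a = traj(t_a)
x_b = traj(t_b + true_delta)                    # B actually observed later
delta, coeffs = estimate_offset_and_trajectory(t_a, x_a, t_b, x_b)
```

With noiseless data the residual drops to nearly zero only at the true offset, so both the offset and the polynomial coefficients are recovered.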
- Book Chapter
1
- 10.5772/12934
- Jan 8, 2011
Humanoid robots have an appearance similar to that of a human being, with a head, two arms, and two legs, and possess human-like intelligent abilities such as object recognition, tracking, voice identification, and obstacle avoidance. Since they try to simulate human structure and behavior, and are autonomous systems, humanoid robots are usually more complex than other kinds of robots. When stepping over an obstacle or detecting and localizing an object, it is critically important to obtain information about the obstacle/object that is as precise as possible, since the robot makes contact with the obstacle/object by calculating appropriate motion trajectories toward it. The vision system supplies most of this information, but the image sequence from the vision system of a humanoid robot is not static while the robot is walking, so problems arise from the ego-motion. Humanoid robots therefore need algorithms that can autonomously determine their actions and paths in unknown environments and compensate for the ego-motion using the vision system. The vision system is one of the most important sensors in a humanoid robot system and can supply much of the information a humanoid robot needs. However, the vision system indispensably requires a stabilization module that compensates for the robot's own ego-motion to enable more precise recognition. Over the years, a number of studies have addressed motion compensation for vision systems mounted on robots. Some use a single camera, but stereo vision, which can extract information about the depth of the environment, is more common. Robot motion can be estimated from stereo vision by a 3D rigid transform, using a 2D multi-scale tracker that projects 3D depth information onto the 2D feature space.
The scale-invariant feature transform (SIFT) (Hu et al., 2007), a local-feature-based algorithm that extracts features from images and estimates the transformation from their locations, and iterative closest point (ICP) (Milella & Siegwart, 2006), used to register digitized data from a rigid object against an idealized geometric model, have been the main methods for motion estimation with single or stereo cameras for video stabilization or autonomous navigation, and have been widely used in wheeled robots (Lienhart & Maydt, 2002)(Beveridge et al., 2001)(Morency & Gupta, 2003). Moreover, the optical-flow-based method, which estimates motion through a 3D normal flow constraint with a gradient-based error function, is widely used because of the simplicity of
- Conference Article
2
- 10.1109/arso.2008.4653607
- Aug 1, 2008
This paper realizes a humanoid robotic system that serves the customers at N tables and delivers the ordered meal to the corresponding customer. The proposed system includes four subsystems: a humanoid robot with 26 degrees of freedom, a wheeled vehicle with navigation ability, an N-table system, and a counter for order collection and task allocation. First, the orders from the customers at the N tables are transmitted to the counter via a Bluetooth module. After analysis, the corresponding task is assigned to the humanoid robot. Based on the lines on the ground, the line-follower system under the wheeled vehicle, the designed navigation strategy, and the communication between the humanoid robot and the wheeled vehicle, the humanoid robotic system walks, turns left or right, drives the vehicle, and reaches the neighborhood of the corresponding table. The ordered meal is then delivered to the customer, and the humanoid robot and wheeled vehicle return to the counter for the next task. Finally, experiments validate the usefulness of the proposed system.
- Conference Article
20
- 10.11499/sicep.2004.0_108_5
- Sep 15, 2005
- SICE Annual Conference Program and Abstracts
This research describes the development of a real-time machine vision system to guide a harvesting robotic manipulator for red Fuji apples. The machine vision system is composed of a color CCD video camera, which acquires Fuji apple images in the orchard, and a PC, which processes the acquired images. The machine vision system was able to recognize the fruit under different lighting conditions and could locate the fruit in less than one second.
- Research Article
10
- 10.1007/s00170-018-1739-x
- Mar 5, 2018
- The International Journal of Advanced Manufacturing Technology
In this paper, we propose an algorithm to determine optimal measurement configurations for self-calibrating a robotic visual inspection system with multiple point constraints. The algorithm aims to improve the calibration accuracy of the robotic visual inspection system. To do so, a pre-calibration of the system is needed to obtain the hand-eye and robot exterior relationships required to implement the inverse kinematic algorithm. Candidate measurement configurations with one point constraint can be obtained using the inverse kinematic algorithm, and DETMAX is then applied to select a given number of optimal measurement configurations from the candidates. Particle swarm optimization is used to optimize the positions of the multiple points one by one. To verify the efficiency of the proposed approach, an experimental evaluation is conducted on a robotic visual inspection system.
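The DETMAX-style selection step can be sketched as a greedy row exchange that maximizes the determinant of the information matrix X^T X over a candidate design. The candidate matrix and exchange schedule below are illustrative assumptions; the paper's actual pipeline also couples this with inverse kinematics and particle swarm optimization.

```python
import numpy as np

def detmax_select(candidates, k, n_iter=50, seed=0):
    """Greedy DETMAX-style exchange: pick k rows of `candidates`
    (one row per measurement configuration) that maximize the
    determinant of the information matrix X^T X."""
    rng = np.random.default_rng(seed)
    n = len(candidates)
    chosen = list(rng.choice(n, size=k, replace=False))

    def logdet(idx):
        X = candidates[idx]
        sign, ld = np.linalg.slogdet(X.T @ X)
        return ld if sign > 0 else -np.inf

    best = logdet(chosen)
    for _ in range(n_iter):
        improved = False
        for i in range(k):                  # try exchanging each chosen row
            for j in range(n):
                if j in chosen:
                    continue
                trial = chosen.copy()
                trial[i] = j
                ld = logdet(trial)
                if ld > best + 1e-12:       # accept first improving swap
                    chosen, best, improved = trial, ld, True
        if not improved:                    # local optimum reached
            break
    return sorted(chosen), best

# Six hypothetical 2-D configuration features; select the 3 most informative.
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.1, 0.1],
                       [2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
chosen, best = detmax_select(candidates, k=3)
```

On this toy design the exchange converges to the three spread-out rows, whose information matrix has the largest determinant (24).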
- Research Article
39
- 10.1016/j.jfranklin.2021.11.009
- Nov 22, 2021
- Journal of the Franklin Institute
A survey of learning-based control of robotic visual servoing systems
- Book Chapter
2
- 10.1016/b978-0-12-814411-4.00016-0
- Jan 1, 2020
- Neural Circuit and Cognitive Development
Chapter 16 - Development of the visual system
- Book Chapter
7
- 10.1016/b978-0-12-397267-5.00033-9
- Jan 1, 2013
- Comprehensive Developmental Neuroscience: Neural Circuit Development and Function in the Healthy and Diseased Brain
Chapter 14 - Development of the Visual System
- Research Article
1
- 10.1088/1757-899x/185/1/012021
- Mar 1, 2017
- IOP Conference Series: Materials Science and Engineering
A neuromorphic control system for a lightweight middle-size humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected through fixed synapses. A set of read-out units with adaptable synapses forms a linear combination of the neurons' outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization of the RNN parameters. Simulation results on normalized signals obtained from the robot model show that every instance of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and a variance of 0.74.
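The module structure described — a fixed, sparse, random recurrent core plus an adaptable linear read-out — can be sketched as a minimal echo-state-style network trained to reproduce a periodic signal. The reservoir size, spectral radius, and sinusoidal target below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                          # reservoir (core) size
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse, fixed
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9
w_in = rng.normal(0, 0.5, N)                     # fixed input weights

T = 1000
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))    # periodic target trajectory
x = np.zeros(N)
states = []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # teacher-forced core update
    states.append(x.copy())

S = np.array(states[100:])                       # drop washout transient
y = u[101:]                                      # one-step-ahead targets
w_out, *_ = np.linalg.lstsq(S, y, rcond=None)    # train the linear read-out
pred = S @ w_out
rms = np.sqrt(np.mean((pred - y) ** 2))          # read-out reproduction error
```

Only the read-out weights `w_out` are learned; the recurrent core stays fixed, mirroring the fixed-synapse / adaptable-read-out split in the abstract.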
- Research Article
- 10.3390/info16070550
- Jun 27, 2025
- Information
This paper presents the implementation of a vision system for a collaborative robot equipped with a web camera and a Python-based control algorithm for automated object-sorting tasks. The vision system aims to detect, classify, and manipulate objects within the robot's workspace using only 2D camera images. It was integrated with the Universal Robots UR5 cobot and designed for object sorting based on shape recognition. The software stack includes OpenCV for image processing, NumPy for numerical operations, and scikit-learn for multilayer perceptron (MLP) models. The paper outlines the calibration process, including lens distortion correction and camera-to-robot calibration in a hand-in-eye configuration to establish the spatial relationship between the camera and the cobot. Object localization relied on a virtual plane aligned with the robot's workspace. Object classification was conducted using contour similarity with Hu moments, SIFT-based descriptors with FLANN matching, and MLP-based neural models trained on preprocessed images. Performance evaluations covered accuracy metrics for the identification methods used (MLP classifier, contour similarity, and feature-descriptor matching) and the effectiveness of the vision system in controlling the cobot for sorting tasks. The evaluation focused on classification accuracy and sorting effectiveness, using sensitivity, specificity, precision, accuracy, and F1-score metrics. Results showed that neural-network-based methods outperformed traditional methods in all categories, while also offering a more straightforward implementation.
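The contour-similarity idea with Hu moments can be sketched without OpenCV by computing the first two Hu invariants directly from binary masks and comparing their log magnitudes, in the spirit of matchShapes. The shapes below and the truncation to two invariants are illustrative simplifications.

```python
import numpy as np

def hu_invariants(mask):
    """First two Hu moment invariants of a binary mask: translation-
    and scale-invariant shape descriptors built from central moments."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                                   # shape area (pixel count)
    x, y = xs - xs.mean(), ys - ys.mean()           # centered coordinates
    mu20, mu02, mu11 = np.sum(x * x), np.sum(y * y), np.sum(x * y)
    norm = m00 ** 2                # eta_pq = mu_pq / m00^((p+q)/2 + 1)
    n20, n02, n11 = mu20 / norm, mu02 / norm, mu11 / norm
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    return np.array([h1, h2])

def shape_distance(a, b, eps=1e-12):
    """Compare shapes via log-scaled invariants, matchShapes-style."""
    ha, hb = hu_invariants(a), hu_invariants(b)
    return np.sum(np.abs(np.log(ha + eps) - np.log(hb + eps)))

square = np.zeros((40, 40)); square[5:25, 5:25] = 1            # 20x20 square
big_square = np.zeros((80, 80)); big_square[10:70, 10:70] = 1  # 60x60 square
bar = np.zeros((40, 40)); bar[18:22, 2:38] = 1                 # 4x36 bar
```

The scaled square matches the square far more closely than the elongated bar does, which is exactly the invariance contour-similarity classification relies on.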
- Conference Article
5
- 10.1109/ihmsc.2011.49
- Aug 1, 2011
The vision system of an apple-harvesting robot was researched and designed to enable automatic apple harvesting. The vision system model is studied, and the system is designed from two aspects: hardware composition and software architecture. The VFW method was employed for real-time image acquisition. Recognition is developed using a combination of a region-growing algorithm and color characteristics. The preliminary orientation of the apple target was calculated by finding its centroid. Finally, the performance of the vision system was evaluated. The results showed that the developed vision system successfully achieved recognition and orientation of the apple target. It was concluded that this vision system is effective for automatic apple harvesting.
- Conference Article
6
- 10.1109/icpr.2002.1048437
- Dec 10, 2002
Most biological systems employ visually acquired information for their locomotion. Over evolutionary history, the visual systems of organisms have adapted to their environments, and as a consequence biological systems often display highly efficient visual skills. This reasoning has motivated the development of a specific visual system for navigation in an unusual environment: a sewer. The sewer environment exhibits two dominant features: the restricted geometry of its inner surfaces and absolute darkness. These features are exploited by the hybrid vision system of the autonomous robot, which consists of a crosshair laser projector and a camera. When a priori knowledge about the sewer geometry is taken into account, the orientation of the robot can be derived from a visual analysis of the regular laser pattern projected onto the sewer surface. Because the footprint image is acquired in an entirely dark environment, the camera records a mostly dark image containing the bright footprint. The analysis of such an image is very fast, and the robot's instantaneous orientation derived from it is sufficient to guide its navigation. It is concluded that proper exploitation of the environmental constraints has led to the development of this highly efficient visual system.
- Conference Article
6
- 10.1109/itme.2008.4744032
- Dec 1, 2008
Many researchers are now engaged in research on vision systems for humanoid soccer robots. Based on a deep analysis of the HSI color space, an improved algorithm is presented in this paper that dynamically judges the dominant feature of the current pixel according to an intensity (I-value) related formula. A fuzzy K-means clustering method is adopted for image segmentation after all similar pixels are found. This research is a good foundation for future work on more complex vision-based motion planning of humanoid soccer robots.
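The fuzzy clustering step can be sketched as fuzzy c-means on 1-D intensity (I) values, where each pixel receives a soft membership in every cluster rather than a hard label. The fuzziness exponent, iteration count, and synthetic intensities below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D intensities: alternate soft-membership and
    center updates until the cluster centers settle."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-9  # distances
        w = d ** (-2.0 / (m - 1.0))          # u_ik proportional to d^(-2/(m-1))
        u = w / w.sum(axis=1, keepdims=True)  # normalized soft memberships
        um = u ** m
        centers = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers)

# Two intensity populations (e.g. object pixels vs. background in the I channel)
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(0.2, 0.03, 300), rng.normal(0.8, 0.03, 300)])
centers = fuzzy_cmeans_1d(vals, c=2)
```

On well-separated intensity modes the centers converge to the two population means, giving the segmentation thresholds a crisp K-means would also find, but with graded memberships for ambiguous pixels.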
- Research Article
77
- 10.1016/s0044-8486(99)00183-0
- Jul 27, 1999
- Aquaculture
Development of red porgy Pagrus pagrus visual system in relation with changes in the digestive tract and larval feeding habits
- Conference Article
- 10.1109/ijcnn.1999.836159
- Jul 10, 1999
Examines the application of four competitive learning algorithms to the clustering of simple visual motion for use in the vision system of autonomous mobile robots. The arrangement and properties of the optical sensors used were loosely based on the visual apparatus of a jumping spider. It was found that competitive learning, and specifically frequency-sensitive competitive learning, is able to identify motion in an unsupervised manner. These learned visual representations can then be combined in subsequent processing stages for the development of active robotic vision systems. The unpredictability of a robot's operating environment and the inherent variations in the properties of physical sensors make the use of adaptive clustering techniques essential. Both simulated and empirical results involving a modest robot demonstrate that novel motion and stationary position can be expressed as a combination of basic learned motion vectors.
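Frequency-sensitive competitive learning can be sketched in a few lines: each unit's win count scales its effective distance, so frequently winning units yield to starved ones and no unit stays "dead". The 2-D "motion vector" clusters below are an illustrative stand-in for the paper's visual motion data.

```python
import numpy as np

def fscl(samples, n_units=2, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning: the win count scales
    each unit's distance, balancing how often units win."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.5, 0.01, size=(n_units, samples.shape[1]))  # prototypes
    counts = np.ones(n_units)
    for x in samples:
        d = counts * np.linalg.norm(w - x, axis=1)  # frequency-scaled distance
        k = int(np.argmin(d))                       # winner takes the update
        w[k] += lr * (x - w[k])                     # move winner toward sample
        counts[k] += 1
    return w

# Two synthetic motion clusters in 2-D (e.g. leftward vs. rightward flow)
rng = np.random.default_rng(2)
a = rng.normal([0.0, 0.0], 0.05, size=(200, 2))
b = rng.normal([1.0, 1.0], 0.05, size=(200, 2))
data = rng.permutation(np.concatenate([a, b]))
protos = fscl(data, n_units=2)
```

The count-weighted distance is what distinguishes this from plain competitive learning: without it, one prototype can capture both clusters while the other never wins.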