In this study, we described the scanning area limitation (SAL) characteristic of laser flying marking and defined the maximum markable time and the maximum marking offset (MMO) for analyzing the effect of SAL on the flying marking process. We introduced the maximum flying velocity (MFV) to evaluate the performance of a laser marking system and investigated the factors that strongly influence it, including the length of the marked graphic along the moving direction and the marking order of the graphic objects. Transverse and vertical directions of graphics entering the scanning area were considered, and three object scanning-path algorithms, all-entered-marking, first-entering-first-marking (FEFM), and rowed-FEFM, were analyzed and compared; MMO and MFV were calculated for each algorithm. Experimental MFV results for the different algorithms agreed well with the theoretical calculations and showed that the FEFM scanning-path algorithm in the transverse moving direction yields the best MFV performance.
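The basic constraint behind a maximum flying velocity can be illustrated with a simplified model (this is an assumption for illustration, not the paper's actual derivation): a graphic of length L along the moving direction must be fully marked while it still lies inside a scan field of length S, which bounds the line speed.

```python
def max_flying_velocity(scan_field_mm, graphic_len_mm, marking_time_s):
    """Illustrative upper bound on line speed for flying marking.

    Simplified model (an assumption, not the paper's exact formulas):
    the graphic (length L) must finish marking while still inside the
    scan field (length S), so the available travel distance is S - L
    and v_max = (S - L) / t_mark.
    """
    travel_mm = scan_field_mm - graphic_len_mm
    if travel_mm <= 0:
        raise ValueError("graphic longer than scan field: cannot be flying-marked")
    return travel_mm / marking_time_s

# e.g. a 100 mm scan field, a 30 mm graphic, 0.5 s of marking time
v = max_flying_velocity(100.0, 30.0, 0.5)  # 140.0 mm/s
```

The model also shows qualitatively why marking order matters: an algorithm that wastes field travel waiting for all objects to enter (all-entered-marking) leaves less of S available than one that starts on each object as it enters (FEFM).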
A laser flying marking system with a galvanometer scanner can serve as a production workhorse because the moving speed of the product on the line is not affected while it is being marked. Such a system, built around high-speed galvanometer scanners, was set up. Two kinds of marking output, vector style and matrix style, were realized in the system. Different motion-tracing methods, including a closed-loop feedback tracing mode and an open-loop computing tracing mode, were studied and implemented in the control software. Experimental results show that the closed-loop feedback motion-tracing method adapts better to variable-speed applications.
This paper focuses on an effective face recognition algorithm for poor-quality video data and its real-time implementation. As the fundamental step of our approach, a fast face detection method based on color information is presented. Instead of performing pixel-based color segmentation on each single face image, we incorporate color information into a face detection scheme based on spatio-temporal filtering of image sequences, which reduces the noise present in surveillance video. Our face recognition method is based on principal component analysis (PCA), which is effective and fast for surveillance video in comparison with feature-based methods. For the training set, a large database containing numerous face images of each subject, digitized at three head orientations, was set up. We use a separate eigenspace for each head orientation, so that the collection of images taken from each view has its own eigenspace. For real-time implementation, an automatic face detection and recognition system built on the TI digital signal processor (DSP) TMS320C6201 is described.
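A color-based face detection step of the kind described above can be sketched by thresholding the chrominance channels of a YCbCr conversion. The threshold ranges below are common illustrative skin-color values, not the ones used in the paper, and a real detector would add the spatio-temporal filtering across frames.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean mask of likely skin pixels in an H x W x 3 RGB image.

    Converts RGB to the Cb/Cr chrominance channels and thresholds them.
    The ranges [77, 127] for Cb and [133, 173] for Cr are illustrative
    assumptions commonly used in skin-color segmentation, not the
    paper's thresholds.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard ITU-R BT.601 chrominance conversion
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

On a frame, connected regions of the resulting mask would then serve as face candidates for the PCA recognition stage.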
This paper presents an improved version of the traditional eigenface method. Much of the previous work on eigenspace methods built only one eigenface space from the eigenfaces of different persons, using only one or very few faces per individual. The information in a single facial image is very limited, so traditional methods have difficulty coping with the differences among facial images caused by changes in age, emotion, illumination, and hairstyle. We took advantage of facial images of the same person obtained at different ages, under different conditions, and with different emotions. For every individual we constructed an eigenface subspace separately; that is, multiple eigenface spaces were constructed for one face database. Experiments showed that the improved algorithm is distortion-invariant to some extent.
This paper addressed the problem of matching two images: an object image and a novel deformed image. In general, the two images differ by rotation, translation, scaling, noise disturbance, or occlusion, and a common task is to develop matching algorithms that are insensitive to such distortions. Here we inherited and extended the spirit of the eigenspace approach, applying a three-layer BP neural network to find the image pattern that originates from the same scene as the object image pattern. To verify the feasibility and robustness of our algorithm, satellite images of real scenes were used in our experiments. The experiments showed that the framework produces feasible results.
This paper introduces principal component analysis into matching and correlation tracking, and presents a matching algorithm based on principal component analysis. This matching method tolerates some image distortion during image matching and visual tracking. Experimental results are presented in support of this claim.
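One way such PCA-based matching can work, sketched here under the assumption that candidate windows and the template are compared by their coefficients in a learned eigenspace rather than by raw pixel correlation:

```python
import numpy as np

def train_eigenspace(patches, k):
    """Learn a k-dimensional eigenspace from training patches (rows)."""
    mean = patches.mean(axis=0)
    _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, vt[:k]

def match(template, frame, patch_h, patch_w, mean, basis):
    """Slide a window over `frame` and return the top-left position whose
    eigenspace coefficients are closest to the template's.

    Comparing low-dimensional coefficients instead of raw pixels is what
    gives the method some tolerance to image distortion."""
    t = basis @ (template.ravel() - mean)
    best, best_d = None, np.inf
    H, W = frame.shape
    for y in range(H - patch_h + 1):
        for x in range(W - patch_w + 1):
            win = frame[y:y + patch_h, x:x + patch_w].ravel()
            d = np.linalg.norm(basis @ (win - mean) - t)
            if d < best_d:
                best, best_d = (y, x), d
    return best
```

For tracking, the same match would be rerun in a small search region around the previous position in each new frame.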
Based on a mathematical analysis of the optics of the laser scanning system, the mechanisms of graphic distortion in laser display were studied. By tracing the beam from the image plane back to the galvanometer, a relation between the image points on the screen and the scan angles was established. With this relation, the scanning linear distortion and the pincushion error can be rectified effectively in software.
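The origin of the linear distortion is that the angle-to-position mapping is nonlinear: for a single mirror at working distance d and a flat screen, x = d·tan θ, so equal angular steps stretch apart near the field edges. Software correction inverts this mapping to pre-distort the commanded angles. A minimal single-axis sketch under that flat-screen assumption (the paper's full relation, which also covers the two-axis pincushion coupling, is more involved):

```python
import math

def angle_for_position(x_mm, d_mm):
    """Scan angle (radians) that lands the beam at screen coordinate x,
    for one mirror at working distance d and a flat screen:
    x = d * tan(theta)  =>  theta = atan(x / d)."""
    return math.atan2(x_mm, d_mm)

def position_for_angle(theta, d_mm):
    """Forward mapping: where an (uncorrected) angle command lands."""
    return d_mm * math.tan(theta)
```

In a real two-mirror galvanometer head, the second axis also sees the first mirror's deflection; that coupling is what produces the pincushion error, and correcting it requires the full two-axis relation rather than an independent tangent per axis.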