Current Issue: April–June 2020, Issue 2 (5 Articles)
In this article, we present a method to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provides information about angular deviations of the tool axis during operation. Both position and angular deviations are obtained by reducing a matrix of LEDs in the image to a single rectangle in conical perspective that is treated by a photogrammetry method. This method computes the coordinates and orientation of the camera with respect to the fixed screen coordinate system. The image used consists of 5 × 5 lit LEDs, which are analyzed by the algorithm to determine a rectangle with known dimensions. The coordinates of the vertices of the rectangle in space are obtained by an inverse perspective computation from the image. The method gives a good approximation of the central point of the rectangle and provides the inclination of the workpiece with respect to the LCD screen reference coordinate system. A test of the method is designed with the assistance of a Coordinate Measurement Machine (CMM) to check the accuracy of the positioning method. The test shows good accuracy in the position measurement of the designed method. A high dispersion in the angular deviation is detected, although the orientation of the inclination is appropriate in almost every case. This is due to the small values of the angles, which make the trigonometric function approximations erratic. This method is a good starting point for compensating angular deviation in vision-based micromachine tools, which is the principal source of errors in these operations and represents the main share of the cost of machine element parts.
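The inverse perspective step described above amounts to recovering the camera pose from the four corners of a rectangle of known size. As a minimal sketch (not the paper's implementation), OpenCV's PnP solver can be used for this; the corner pixel coordinates, rectangle dimensions, and camera intrinsics below are hypothetical placeholders.

```python
# Minimal sketch: recover camera position and orientation from the four corners
# of a rectangle of known dimensions using a PnP solver. All numeric values
# (rectangle size, detected corners, intrinsics) are assumptions for illustration.
import cv2
import numpy as np

W, H = 40.0, 40.0  # rectangle dimensions on the LCD screen, in mm (assumed)
object_pts = np.array([[0, 0, 0], [W, 0, 0], [W, H, 0], [0, H, 0]], dtype=np.float32)

# Detected corner positions of the rectangle in the image (pixels, hypothetical)
image_pts = np.array([[312.4, 228.1], [498.7, 231.9],
                      [495.2, 417.6], [309.8, 413.0]], dtype=np.float32)

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)           # rotation: screen frame -> camera frame

# Camera position expressed in the screen (LED rectangle) coordinate system
cam_pos = -R.T @ tvec
print("camera position (mm):", cam_pos.ravel())
print("rotation (screen -> camera):\n", R)
```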
Ultrasound has been trialed in biometric recognition systems for many years, and at present different types of ultrasound fingerprint readers are being produced and integrated in portable devices. An important merit of ultrasound is its ability to image the internal structure of the hand, which can guarantee improved recognition rates and resistance to spoofing attacks. In addition, ambient factors such as changes in illumination, humidity, or temperature, as well as oil or ink stains on the skin, do not affect the ultrasound image. In this work, a palmprint recognition system based on ultrasound images is proposed and experimentally validated. The system uses a gel pad to obtain acoustic coupling between the ultrasound probe and the user's hand. The collected volumetric image is processed to extract 2D palmprints at various under-skin depths. Features are extracted from one of these 2D palmprints using a line-based procedure. The recognition performance of the proposed system was evaluated by performing both verification and identification experiments on a home-made database containing 281 samples collected from 32 different volunteers. An equal error rate of 0.38% and an identification rate of 100% were achieved. These results are very satisfactory, even if obtained with a relatively small database. A discussion of the causes of bad acquisitions is also presented, and a possible solution to further optimize the acquisition system is suggested.
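The equal error rate quoted above is the operating point where the false acceptance and false rejection rates coincide. The following sketch shows one common way to compute it from genuine and impostor match scores; the score distributions here are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: equal error rate (EER) from genuine and impostor similarity
# scores, as used in verification experiments. Scores below are synthetic.
import numpy as np

def compute_eer(genuine, impostor):
    """Return the EER and the threshold where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer, eer_thr = np.inf, None, None
    for t in thresholds:
        far = np.mean(impostor >= t)   # false acceptance rate
        frr = np.mean(genuine < t)     # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer, eer_thr = abs(far - frr), (far + frr) / 2, t
    return eer, eer_thr

rng = np.random.default_rng(0)
genuine = rng.normal(0.80, 0.05, 500)    # hypothetical genuine-pair scores
impostor = rng.normal(0.55, 0.08, 5000)  # hypothetical impostor-pair scores
eer, thr = compute_eer(genuine, impostor)
print(f"EER = {eer:.2%} at threshold {thr:.3f}")
```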
Shadow detection is a crucial task in high-resolution remote-sensing image processing. Various shadow detection methods have been explored during the last decades. These methods have improved detection accuracy but are still not robust enough to yield satisfactory results, because they fail to extract enough information from the original images. To take full advantage of the various features of shadows, a new method combining edge information with spectral and spatial information is proposed in this paper. Edges are among the most important characteristics of high-resolution remote-sensing images. Unfortunately, in shadow detection it is a high-risk strategy to decide strictly whether a pixel is an edge or not, because intensity values on shadow boundaries always lie between those in shadow and non-shadow areas. Therefore, a soft edge description model is developed to describe the degree to which each pixel belongs to the edges. The soft edge description is then incorporated into a fuzzy clustering procedure based on HMRF (Hidden Markov Random Fields), in which more appropriate spatial contextual information can be used. More concretely, the method consists of two components: the soft edge description model and an iterative shadow detection algorithm. Experiments on several remote-sensing images have shown that the proposed method can obtain more accurate shadow detection results.
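To make the idea of a "soft" edge description concrete, the sketch below assigns each pixel a degree of edge membership in [0, 1] by passing the gradient magnitude through a sigmoid, so that the intermediate intensities typical of shadow boundaries stay fuzzy rather than being forced into a hard edge/non-edge decision. This is an illustration of the concept only; the authors' model and parameters may differ.

```python
# Illustrative sketch: soft edge membership from gradient magnitude.
# The midpoint and steepness values are assumptions, not the paper's settings.
import cv2
import numpy as np

def soft_edge_map(gray, midpoint=30.0, steepness=0.2):
    """Return per-pixel edge membership in [0, 1] for a grayscale image."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    # Weak gradients map near 0, strong gradients near 1, and intermediate
    # values (typical of shadow boundaries) remain fuzzy.
    return 1.0 / (1.0 + np.exp(-steepness * (magnitude - midpoint)))

# Usage (image path is a placeholder):
# gray = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE).astype(np.float64)
# membership = soft_edge_map(gray)
```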
Due to requirements and necessities in digital image research, image matching is considered a key and challenging problem, especially for machine learning. Owing to its convenience and efficiency, the most widely applied algorithm for image feature point extraction and matching is Speeded-Up Robust Features (SURF). As an enhancement of the scale-invariant feature transform (SIFT), it improves the algorithm's effectiveness and makes its application feasible in present-day computer vision systems. In this work, the SURF algorithm is used to extract image features, and the RANSAC algorithm is incorporated to filter the matching points. The images were compared and verified in experiments using pertinent image enhancement methods. The idea of combining enhancement techniques with the SURF algorithm is put forward to obtain better quality and efficiency of feature-point matching; appropriate image enhancement methods are adopted for different feature images and are compared and verified by experiments. Results are presented on the effects of lighting on underexposed and overexposed images.
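A minimal sketch of the SURF-plus-RANSAC pipeline described above is shown below, using OpenCV. It assumes an OpenCV build with the non-free xfeatures2d module (where SURF lives), and the image paths and thresholds are placeholders, not the paper's settings.

```python
# Sketch: SURF feature extraction, descriptor matching with Lowe's ratio test,
# and RANSAC filtering of matches via homography estimation.
import cv2
import numpy as np

img1 = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
img2 = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors and keep matches passing Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

# RANSAC: estimate a homography and keep only the inlier matches
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f"{len(inliers)} inlier matches out of {len(good)}")
```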
Embedded video applications are now involved in sophisticated transportation systems like autonomous vehicles and driver assistance systems. As silicon capacity increases, the design productivity gap grows for currently available design tools. Hence, high-level synthesis (HLS) tools emerged in order to reduce that gap by shifting design efforts to higher abstraction levels. In this paper, we present ViPar as a tool for exploring different video processing architectures at a higher design level. First, we propose a parametrizable parallel architectural model dedicated to video applications. Second, targeting this architectural model, we developed the ViPar tool with two main features: (1) an empirical model was introduced to estimate the power consumption based on hardware utilization and operating frequency; in addition, we derived equations for estimating the hardware utilization and execution time of each design point during the space exploration process; (2) by defining the main characteristics of the parallel video architecture, such as the parallelism level, the number of input/output ports, the pixel distribution pattern, and so on, the ViPar tool can automatically generate the dedicated architecture for hardware implementation. In the experimental validation, we used ViPar to automatically generate an efficient hardware implementation of a Multiwindow Sum of Absolute Differences stereo matching algorithm on a Xilinx Zynq ZC706 board. We succeeded in increasing design productivity by converging rapidly to designs that fit our system constraints in terms of power consumption, hardware utilization, and frame execution time.
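As an illustration of what an empirical power model driven by hardware utilization and operating frequency might look like, the sketch below fits a simple linear model to measured design points. The coefficients, resource counts, and power figures are hypothetical; the paper derives its own equations.

```python
# Illustrative sketch only: a linear empirical power model fitted to sample
# design points, with power as a function of FPGA resource utilization and
# operating frequency. All sample data below are made up for demonstration.
import numpy as np

# Each row: [LUTs, FFs, BRAMs, DSPs, frequency_MHz]; target: measured power (W)
X = np.array([[21000,  34000,  45, 12, 100],
              [43000,  66000,  90, 24, 100],
              [43000,  66000,  90, 24, 142],
              [86000, 131000, 180, 48, 142],
              [64000,  99000, 135, 36, 125],
              [30000,  45000,  60, 16, 166]], dtype=float)
y = np.array([1.8, 2.6, 3.1, 4.9, 3.9, 2.9])

# Least-squares fit of power ~ c0 + c1*LUT + c2*FF + c3*BRAM + c4*DSP + c5*freq
A = np.hstack([np.ones((len(X), 1)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_power(luts, ffs, brams, dsps, freq_mhz):
    """Predict power (W) for an unexplored design point."""
    return float(coeffs @ np.array([1.0, luts, ffs, brams, dsps, freq_mhz]))

print(f"estimated power: {estimate_power(60000, 95000, 120, 32, 125):.2f} W")
```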