Current Issue: April - June | Volume: 2017 | Issue Number: 2 | Articles: 6
Face identification has been an active research area in recent years; however, the accuracy and dependability of such systems in real-life settings are still questionable. Earlier research on face identification demonstrated that LBP-based face recognition systems are preferred over alternatives and give adequate accuracy: LBP is robust against illumination changes and is considered a high-speed algorithm. Performance metrics for such systems are calculated from time delay and accuracy. This paper introduces an improved face recognition system built in C++ with the help of the OpenCV library. Accuracy can be increased if a filter, or a combination of filters, is applied to the images. The accuracy increases from 95.5% (without applying any filter) to 98.5% when applying a combination of the bilateral filter, histogram equalization, and the Tan and Triggs algorithm. Finally, the results show a degradation in accuracy and an increase in recognition time as the image database grows.
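The abstract does not list its parameter settings, so the following is only a minimal sketch of the described preprocessing chain (bilateral filter, histogram equalization, and a simplified Tan and Triggs normalization) using OpenCV in C++; all numeric values are illustrative assumptions, and the full Tan and Triggs method also includes a robust contrast-equalization step that is omitted here.

```cpp
#include <opencv2/opencv.hpp>

// Preprocess a grayscale face image before LBP-based recognition:
// bilateral filter -> histogram equalization -> simplified Tan & Triggs.
cv::Mat preprocessFace(const cv::Mat& grayFace) {
    CV_Assert(grayFace.type() == CV_8UC1);

    // 1) Bilateral filter: smooths noise while preserving edges.
    cv::Mat smoothed;
    cv::bilateralFilter(grayFace, smoothed, 9, 75, 75);

    // 2) Histogram equalization: spreads the intensity distribution.
    cv::Mat equalized;
    cv::equalizeHist(smoothed, equalized);

    // 3) Simplified Tan & Triggs normalization: gamma correction followed by
    //    a difference-of-Gaussians band-pass and min-max rescaling.
    cv::Mat f, gammaCorr, blurSmall, blurLarge, dog, out;
    equalized.convertTo(f, CV_32F, 1.0 / 255.0);
    cv::pow(f, 0.2, gammaCorr);                            // gamma = 0.2 (assumed)
    cv::GaussianBlur(gammaCorr, blurSmall, cv::Size(0, 0), 1.0);
    cv::GaussianBlur(gammaCorr, blurLarge, cv::Size(0, 0), 2.0);
    dog = blurSmall - blurLarge;
    cv::normalize(dog, out, 0, 255, cv::NORM_MINMAX, CV_8U);
    return out;   // ready to be fed to an LBP-based recognizer
}
```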
This paper proposes an approximate ℓ1-minimization algorithm with computationally efficient strategies to achieve real-time performance of sparse-model-based background subtraction. We use the conventional solutions of the ℓ1-minimization as a pre-processing step and convert the iterative optimization into simple linear addition and multiplication operations. We then implement a novel background subtraction method that compares the distribution of sparse coefficients between the current frame and the background model. The background model is formulated as a linear and sparse combination of atoms in a pre-learned dictionary. The influence of dynamic background diminishes after the process of sparse projection, which enhances the robustness of the implementation. The results of qualitative and quantitative evaluations demonstrate the higher efficiency and effectiveness of the proposed approach compared with those of other competing methods.
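As a rough illustration of the coefficient-comparison idea (not the paper's exact algorithm), the sketch below assumes a pre-learned dictionary D with one atom per column and a stored background code per image patch; the approximate sparse code is obtained with a single analysis-plus-soft-thresholding step, replacing the iterative ℓ1 solver with plain additions and multiplications as the abstract suggests. The function names and thresholds are assumptions.

```cpp
#include <opencv2/core.hpp>

// One-step approximate sparse code: soft-threshold D^T x.
// D and x are expected to be CV_32F; D is (signal dim) x (num atoms).
cv::Mat approxSparseCode(const cv::Mat& D, const cv::Mat& x, float lambda) {
    cv::Mat c = D.t() * x;                               // analysis coefficients
    cv::Mat code = cv::Mat::zeros(c.size(), c.type());
    for (int i = 0; i < c.rows; ++i) {                   // elementwise shrinkage
        float v = c.at<float>(i, 0);
        code.at<float>(i, 0) = (v >  lambda) ? v - lambda
                             : (v < -lambda) ? v + lambda : 0.0f;
    }
    return code;
}

// A patch is labelled foreground when its sparse code deviates strongly
// from the background model's code for the same patch.
bool isForeground(const cv::Mat& codeFrame, const cv::Mat& codeBackground,
                  double tau) {
    return cv::norm(codeFrame, codeBackground, cv::NORM_L2) > tau;
}
```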
Perceptual image quality assessment (IQA) adopts a computational model to assess image quality in a fashion that is consistent with the human visual system (HVS). From the point of view of the HVS, different image regions have different importance. Based on this fact, we propose a simple and effective image-decomposition-based method for image quality assessment. In our method, we first divide an image into two components: an edge component and a texture component. To separate the edge and texture components, we use the TV-flow-based nonlinear diffusion method rather than the classic TV regularization methods, for highly efficient computation. Unlike existing content-based IQA methods, we apply different measures to the different components to compute image quality. More specifically, luminance and contrast similarity are computed on the texture component, while structural similarity is computed on the edge component. After obtaining the local quality map, we use the texture component again as a weight function to derive a single quality score. Experimental results on five datasets show that, compared with previous approaches in the literature, the proposed method is more efficient and delivers higher prediction accuracy.
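A small sketch of the final pooling step described above, assuming the per-pixel quality map and the texture component have already been computed (the TV-flow decomposition itself is outside the scope of this snippet); the names and the use of the texture magnitude as the weight are illustrative assumptions.

```cpp
#include <opencv2/core.hpp>

// Pool a local quality map into a single score, using the texture component
// as the per-pixel weight, as the abstract describes.
double pooledQualityScore(const cv::Mat& qualityMap,   // CV_32F, per-pixel quality
                          const cv::Mat& textureComp)  // CV_32F, texture component
{
    cv::Mat w = cv::abs(textureComp);                   // weights from texture energy
    cv::Mat weighted = qualityMap.mul(w);               // elementwise product
    double num = cv::sum(weighted)[0];
    double den = cv::sum(w)[0] + 1e-12;                 // avoid division by zero
    return num / den;
}
```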
In this paper, a mixture of the generalized Cauchy distribution and the Rayleigh distribution that possesses a closed-form expression is proposed for modeling the heavy-tailed Rayleigh (HTR) distribution. This new approach is developed for analytically modeling the amplitude distribution of ultrasound images based on the HTR distribution. HTR, as a non-Gaussian distribution, is essentially the amplitude probability density function (PDF) of the complex isotropic symmetric α-stable (SαS) distribution, which appears in the envelope distribution of ultrasonic images. An analytic expression for the HTR distribution is an important consideration in signal processing with stable random variables. Furthermore, we introduce a mixture-ratio estimator based on the energy of the amplitude PDF which involves both the α and γ parameters. For a quantitative assessment, we compare the accuracy and computational complexity of the proposed mixture with other approximations of the HTR distribution through several numerical simulations on synthetic random samples. Experimental results obtained from the Kolmogorov-Smirnov (K-S) distance and Kullback-Leibler (K-L) divergence as goodness-of-fit tests on real ultrasound images favor the new mixture model.
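The abstract does not give the exact parameterization, but a two-component amplitude mixture of this kind can be written generically as the sketch below, where only the Rayleigh component is stated in its standard form and the generalized Cauchy component and mixture ratio ε are left abstract, since their precise definitions are specific to the paper.

```latex
f_{\mathrm{HTR}}(r) \;\approx\; \varepsilon\, f_{\mathrm{GC}}(r;\alpha,\gamma)
\;+\; (1-\varepsilon)\, f_{\mathrm{Ray}}(r;\sigma), \qquad r \ge 0,
\quad \text{with} \quad
f_{\mathrm{Ray}}(r;\sigma) = \frac{r}{\sigma^{2}}
\exp\!\left(-\frac{r^{2}}{2\sigma^{2}}\right),
```

where ε ∈ [0, 1] is the mixture ratio that the paper estimates from the energy of the amplitude PDF.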
Reversible watermarking is a kind of digital watermarking that is able to recover the original image exactly as well as extract the hidden message. Many algorithms have aimed at lower image distortion with higher embedding capacity. In reversible data hiding, efficient predictors play a crucial role. Recently, adaptive predictors using the least-squares approach have been proposed to overcome the limitations of fixed predictors. This paper proposes a novel reversible data hiding algorithm using a least-squares predictor obtained via the least absolute shrinkage and selection operator (LASSO). This predictor is dynamic in nature rather than fixed. Experimental results show that the proposed method outperforms previous methods, including several algorithms based on least-squares predictors.
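The following is an illustrative, textbook-style sketch of the kind of adaptive predictor the abstract describes: the weights used to predict a pixel from its causal neighbours are fitted by coordinate-descent LASSO on a small local training window. The regression setup and all names are assumptions, not the paper's exact algorithm.

```cpp
#include <cmath>
#include <vector>

static double softThreshold(double v, double lambda) {
    if (v >  lambda) return v - lambda;
    if (v < -lambda) return v + lambda;
    return 0.0;
}

// X: n x p matrix of neighbour contexts, y: n target pixel values.
// Returns the p LASSO weights after a fixed number of coordinate-descent sweeps.
std::vector<double> lassoWeights(const std::vector<std::vector<double>>& X,
                                 const std::vector<double>& y,
                                 double lambda, int sweeps = 100) {
    const size_t n = X.size(), p = X.empty() ? 0 : X[0].size();
    std::vector<double> w(p, 0.0), residual(y);          // residual = y - X*w (w = 0)

    for (int s = 0; s < sweeps; ++s) {
        for (size_t j = 0; j < p; ++j) {
            double rho = 0.0, norm2 = 0.0;
            for (size_t i = 0; i < n; ++i) {
                // Correlation of column j with the partial residual.
                rho   += X[i][j] * (residual[i] + X[i][j] * w[j]);
                norm2 += X[i][j] * X[i][j];
            }
            double wNew = (norm2 > 0.0) ? softThreshold(rho, lambda) / norm2 : 0.0;
            for (size_t i = 0; i < n; ++i)                // keep residual consistent
                residual[i] += X[i][j] * (w[j] - wNew);
            w[j] = wNew;
        }
    }
    return w;   // prediction for a new neighbour context c is the dot product c . w
}
```

The prediction error between the actual pixel and this predicted value is what a reversible data hiding scheme would then expand or shift to embed the message bits.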
A novel approach for positioning using smartphones and image processing techniques is developed. Using structure from motion, 3D reconstructions of given tracks are created and stored as sparse point clouds. Query images are later matched to these 3D models. The high computational cost of image matching and limited storage require compressing the point clouds without loss of positioning performance. In this work, localization is improved and memory and storage requirements are minimized. We assumed that both computational speed and storage requirements benefit from reducing the number of points with appropriate outlier detection. In particular, our hypothesis was that positioning accuracy is maintained while reducing outliers in a reconstructed model. To evaluate the hypothesis, three methods were compared: (i) density-based (Sotoodeh, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVI-5, 2006), (ii) connectivity-based (Wang et al., Comput Graph Forum 32(5):207–10, 2013), and (iii) our distance-based approach. In tenfold cross-validation applied to a pre-reconstructed reference 3D model, localization accuracy was measured. In each new model, the positions of the test images were identified and compared to the corresponding positions in the reference model. We observed that outlier removal has a positive impact on matching run-time and storage requirements, while there are no significant differences in localization error among the methods. This confirmed our initial hypothesis and enables mobile application of image-based positioning.
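For illustration only, the sketch below shows a generic distance-based outlier filter of the kind mentioned above (not necessarily the authors' exact criterion): a point is dropped when its mean distance to its k nearest neighbours is far above the average over the whole cloud. Brute-force neighbour search keeps the sketch short; a k-d tree would be used in practice, and k and the sigma multiplier are assumed values.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

std::vector<Point3> removeDistanceOutliers(const std::vector<Point3>& cloud,
                                           int k = 8, double nSigma = 2.0) {
    const size_t n = cloud.size();
    std::vector<double> meanDist(n, 0.0);

    // Mean distance of each point to its k nearest neighbours (brute force).
    for (size_t i = 0; i < n; ++i) {
        std::vector<double> d;
        d.reserve(n > 0 ? n - 1 : 0);
        for (size_t j = 0; j < n; ++j) {
            if (j == i) continue;
            const double dx = cloud[i].x - cloud[j].x,
                         dy = cloud[i].y - cloud[j].y,
                         dz = cloud[i].z - cloud[j].z;
            d.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        const int kk = std::min<int>(k, static_cast<int>(d.size()));
        std::partial_sort(d.begin(), d.begin() + kk, d.end());
        for (int m = 0; m < kk; ++m) meanDist[i] += d[m];
        if (kk > 0) meanDist[i] /= kk;
    }

    // Global statistics of the per-point mean neighbour distance.
    double mu = 0.0, var = 0.0;
    for (double v : meanDist) mu += v;
    mu /= std::max<size_t>(n, 1);
    for (double v : meanDist) var += (v - mu) * (v - mu);
    var /= std::max<size_t>(n, 1);
    const double threshold = mu + nSigma * std::sqrt(var);

    // Keep only points whose neighbourhood distance is within the threshold.
    std::vector<Point3> kept;
    for (size_t i = 0; i < n; ++i)
        if (meanDist[i] <= threshold) kept.push_back(cloud[i]);
    return kept;
}
```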