Current Issue: July - September | Volume: 2017 | Issue Number: 3 | Articles: 5
Accurate 3D measuring systems have thrived in the past few years. Most of them are based on laser scanners, because laser scanners can acquire 3D information directly and precisely in real time. However, compared with conventional cameras, this kind of equipment is usually expensive and not commonly available to consumers. Moreover, laser scanners easily interfere with other sensors of the same type. Computer vision-based 3D measuring techniques, on the other hand, use stereo matching to recover the cameras' relative positions and then estimate the 3D locations of points in the image. Because such systems need this additional estimation of the 3D information, real-time implementations often rely on heavy parallelism, which prevents deployment on mobile devices.
Inspired by structure-from-motion systems, we propose a system that reconstructs sparse feature points into a 3D point cloud from a monocular video sequence in order to achieve higher computational efficiency. The system tracks all detected feature points and records both their number and their moving distances. Only key frames are used to estimate the current camera position, which reduces the computational load and the influence of noise on the system. Furthermore, to avoid duplicate 3D points, the system reconstructs a 2D point only when that point shifts out of the camera's view boundary. Our experiments show that the system can run on tablets and achieves state-of-the-art accuracy with a denser point cloud at high speed.
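The abstract outlines keyframe-based feature tracking followed by pose estimation and triangulation. The authors' implementation is not given here; the following is a minimal Python/OpenCV sketch of the generic two-key-frame version of that pipeline (KLT tracking, essential-matrix pose recovery, triangulation). All function names, parameters, and thresholds are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of monocular two-view sparse reconstruction (not the authors' code).
# Assumes a calibrated camera with intrinsic matrix K and two key frames from a video.
import cv2
import numpy as np

def reconstruct_between_keyframes(frame_a, frame_b, K):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect feature points in the first key frame and track them with KLT optical flow.
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=500, qualityLevel=0.01, minDistance=7)
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    good = status.ravel() == 1
    pts_a, pts_b = pts_a[good].reshape(-1, 2), pts_b[good].reshape(-1, 2)

    # Relative camera pose between the two key frames from the essential matrix.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)

    # Triangulate the inlier tracks into 3D points (up to scale).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    keep = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P0, P1, pts_a[keep].T, pts_b[keep].T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 sparse point cloud
```

In a full system this step would be repeated for each new key frame and the resulting points merged into one cloud, which is where the paper's duplicate-point check (reconstruct a point only once it leaves the view) would apply.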
One of the most common artifacts in digital photography is motion blur. When an image is captured under dim light with a handheld camera, the photographer's hand shake causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From the viewpoint of signal processing, image deblurring can be reduced to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of the camera or a moving object can blur different parts of an image according to different kernel functions. An image degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single-image deblurring algorithm for nonuniform motion blur images blurred by moving objects. First, a uniform defocus map method is presented for measuring the amounts and directions of motion blur. The detected blurred regions are then used to estimate the point spread functions simultaneously. Finally, a fast deconvolution algorithm is used to restore the nonuniform blur image. We expect the proposed method to achieve satisfactory deblurring of a single nonuniform blur image.
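The abstract's final stage is "estimate a per-region point spread function, then deconvolve". The paper's defocus-map and fast deconvolution algorithms are not reproduced here; below is a minimal NumPy sketch, assuming a linear motion PSF described by a length and direction, that builds such a PSF and restores one region with frequency-domain Wiener deconvolution. The function names and the regularization constant are illustrative.

```python
# Illustrative sketch: linear motion PSF + Wiener deconvolution for one blurred region.
# A generic stand-in for "estimate PSF, then deconvolve", not the paper's algorithm.
import numpy as np

def motion_psf(length, angle_deg, size=31):
    """Linear motion blur kernel of a given length (pixels) and direction (degrees)."""
    psf = np.zeros((size, size))
    center = size // 2
    angle = np.deg2rad(angle_deg)
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * int(length) + 1):
        r = int(round(center + s * np.sin(angle)))
        c = int(round(center + s * np.cos(angle)))
        if 0 <= r < size and 0 <= c < size:
            psf[r, c] = 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, snr=0.01):
    """Frequency-domain Wiener filter: conj(H) / (|H|^2 + snr)."""
    # Pad the PSF to image size and move its center to the origin for circular convolution.
    pad = np.zeros_like(blurred, dtype=float)
    r, c = psf.shape
    pad[:r, :c] = psf
    pad = np.roll(pad, shift=(-(r // 2), -(c // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred.astype(float))
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(F_hat))
```

For a nonuniform image, this restoration would be applied per segmented region with that region's own PSF and the results blended, which is the role of the region detection described in the abstract.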
To reach higher coding efficiency than its predecessor, the state-of-the-art video compression standard High Efficiency Video Coding (HEVC) relies on many improved coding tools and sophisticated techniques. The new features achieve significant coding efficiency, but at the cost of huge implementation complexity. This complexity has increased the need of HEVC encoders for fast algorithms and hardware-friendly implementations. In fact, encoders have to make the different encoding decisions under the real-time encoding constraint while preserving coding efficiency. To reduce the encoding complexity, HEVC encoders therefore rely on look-ahead mechanisms and pre-processing solutions. In this context, we propose a gradient-based pre-processing stage. In particular, we investigate the Prewitt operator used to generate the gradient and propose approaches that enhance the gradient's performance in detecting HEVC intra modes. We also define different probability scenarios, based on the gradient information, to speed up the mode search process. Moreover, we propose a gradient-based estimation of texture complexity that we use for the coding unit decision. Results show that the proposed algorithm reduces encoding time by 42.8% with an increase in BD-rate of only 1.1%.
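As a rough illustration of how a Prewitt-gradient pre-processing stage can shortlist intra modes, the sketch below computes per-block gradient statistics and maps the dominant edge directions to candidate angular modes. It is not the paper's method: the angle-to-mode mapping here is a simplified uniform approximation of the HEVC angular-mode table, and the flatness threshold is an arbitrary illustrative value.

```python
# Illustrative sketch: Prewitt-gradient statistics used to shortlist HEVC intra modes
# for a block (Planar = 0, DC = 1, angular = 2..34). Simplified mapping, for illustration only.
import numpy as np
from scipy.ndimage import prewitt

def candidate_intra_modes(block, num_candidates=3, flat_threshold=8.0):
    gx = prewitt(block.astype(float), axis=1)   # horizontal gradient
    gy = prewitt(block.astype(float), axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)

    # A low overall gradient suggests a smooth block: favour Planar (0) and DC (1).
    if magnitude.mean() < flat_threshold:
        return [0, 1]

    # Edge direction is perpendicular to the gradient direction.
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 90.0) % 180.0

    # Histogram of edge directions, weighted by gradient magnitude.
    hist, _ = np.histogram(angle, bins=33, range=(0.0, 180.0), weights=magnitude)
    top_bins = np.argsort(hist)[::-1][:num_candidates]

    # Map each dominant direction bin to an angular mode index in [2, 34] (uniform mapping).
    return sorted({2 + int(b) for b in top_bins})
```

An encoder would then run rate-distortion optimization only over this reduced candidate list instead of all 35 intra modes, which is the source of the reported encoding-time savings.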
Single-image motion deblurring has been a very challenging problem in the field of image processing. Although many methods have been proposed to solve this problem, kernel accuracy remains an issue. To improve kernel accuracy, an effective structure selection method is used to select the salient structure of the blurred image. A novel kernel estimation method based on an L0-L2 norm is then proposed: the L0 norm guarantees a sparse kernel and eliminates the negative influence of fine details, while the L2 norm ensures the continuity of the kernel. Extensive experiments compare the proposed method with state-of-the-art methods. The results show that our method estimates a better kernel in less time than previous work, especially when the blur kernel is large.
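To make the "sparse but continuous kernel" idea concrete, the sketch below shows one generic kernel-estimation step: a closed-form L2-regularized (Tikhonov) solve in the Fourier domain on gradients of the salient structure, followed by hard thresholding of small entries to mimic L0 sparsity. This is a standard stand-in under those assumptions, not the paper's exact L0-L2 formulation; parameter values are illustrative.

```python
# Illustrative sketch of one kernel-estimation step (generic, not the paper's exact method).
import numpy as np

def estimate_kernel(salient, blurred, ksize=25, gamma=1e-2, sparsity=0.05):
    # Gradients of the selected salient structure and of the blurred image.
    def grads(img):
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    sx, sy = grads(salient.astype(float))
    bx, by = grads(blurred.astype(float))

    # L2-regularized least squares in the frequency domain:
    # k = F^-1( (conj(Sx)Bx + conj(Sy)By) / (|Sx|^2 + |Sy|^2 + gamma) )
    num = np.conj(np.fft.fft2(sx)) * np.fft.fft2(bx) + np.conj(np.fft.fft2(sy)) * np.fft.fft2(by)
    den = np.abs(np.fft.fft2(sx)) ** 2 + np.abs(np.fft.fft2(sy)) ** 2 + gamma
    k_full = np.real(np.fft.ifft2(num / den))

    # Crop a ksize window around the origin, enforce non-negativity, then keep only the
    # largest entries as an L0-style sparsity step, and renormalize.
    k = np.roll(k_full, (ksize // 2, ksize // 2), axis=(0, 1))[:ksize, :ksize]
    k = np.maximum(k, 0)
    k[k < sparsity * k.max()] = 0
    return k / k.sum() if k.sum() > 0 else k
```

In practice this step is iterated with a latent-image update and run coarse to fine over an image pyramid, which is what makes accurate estimation of large kernels feasible.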
This paper discusses a super-resolution (SR) system implemented on a mobile device. We used an Android device's camera to take successive shots and applied a classical multiple-image super-resolution technique that operates on a set of low-resolution (LR) images. Images taken from the mobile device pass through our proposed filtering scheme, in which images with noticeable blur are discarded so that outliers do not affect the resulting high-resolution (HR) image. The remaining subset of images is denoised with non-local means and then feature-matched against the first reference LR image. Successive images are aligned with respect to the first image via affine and perspective warping transformations. The LR images are then upsampled using bicubic interpolation. Finally, an L2-norm minimization approach, which essentially takes the pixel-wise mean of the aligned images, produces the final HR image.
Our study shows that the proposed method performs better than bicubic interpolation, which makes its implementation on a mobile device quite feasible. Our experiments also show that images captured in burst mode contain substantial differences that an SR algorithm can exploit to create an HR image.
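The pipeline above maps to a short sequence of standard operations. The sketch below is a minimal Python/OpenCV approximation of it (blur filtering by variance of the Laplacian, non-local means denoising, ORB matching with a perspective warp to the reference frame, bicubic upsampling, pixel-wise mean). Thresholds, match counts, and the homography-only alignment are assumptions for illustration, not the paper's exact parameters.

```python
# Minimal sketch of a multi-frame SR pipeline of the kind described in the abstract.
import cv2
import numpy as np

def super_resolve(frames, scale=2, blur_threshold=100.0):
    # 1. Discard frames with noticeable blur (low variance of the Laplacian).
    sharp = [f for f in frames
             if cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var() > blur_threshold]

    # 2. Non-local means denoising.
    denoised = [cv2.fastNlMeansDenoisingColored(f, None, 5, 5, 7, 21) for f in sharp]

    # 3. Align every frame to the first (reference) frame via ORB matches + homography.
    ref = denoised[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    aligned = [ref]
    for f in denoised[1:]:
        kp, des = orb.detectAndCompute(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), None)
        matches = sorted(bf.match(des, des_ref), key=lambda m: m.distance)[:200]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        aligned.append(cv2.warpPerspective(f, H, (ref.shape[1], ref.shape[0])))

    # 4. Bicubic upsampling, then the pixel-wise mean (the L2-norm minimizer) as the HR estimate.
    up = [cv2.resize(f, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) for f in aligned]
    return np.mean(np.stack(up).astype(np.float32), axis=0).astype(np.uint8)
```

The pixel-wise mean in the last step is the closed-form minimizer of the sum of squared differences to the aligned frames, which is why the abstract equates the L2-norm minimization with averaging.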