Current Issue: April–June 2015, Issue Number: 2, Articles: 4
Conventional end-to-end distortion models for video measure the overall distortion based on independent estimations of the source distortion and the channel distortion. However, they do not correlate well with perceptual characteristics, where there is a strong inter-relationship among the source distortion, the channel distortion, and the video content. Since most compressed videos are presented to human users, a perception-based end-to-end distortion model should be developed for error-resilient video coding. In this paper, we propose a structural similarity (SSIM)-based end-to-end distortion model to optimally estimate the content-dependent perceptual distortion due to quantization, error concealment, and error propagation. Experiments show that the proposed model yields better visual quality for H.264/AVC video coding over packet-switched networks.
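The abstract builds on the SSIM index. As a point of reference, the following is a minimal sketch of the standard SSIM computation between two equal-sized gray-scale patches (global statistics, 8-bit constants); the paper's distortion model layers quantization, concealment, and propagation terms on top of this base metric, which are not reproduced here.

```python
# Minimal sketch of the standard SSIM index between two flat lists of
# pixel intensities. Computed globally for brevity; practical SSIM uses
# local windows. Constants k1, k2 follow the common 8-bit convention.

def ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Global SSIM between two equal-length intensity lists."""
    n = len(x)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical patches score 1; distortion lowers the score toward 0.
ref = [52, 55, 61, 59, 79, 61, 76, 61]
noisy = [50, 57, 60, 62, 75, 63, 74, 58]
print(ssim(ref, ref))    # 1.0
print(ssim(ref, noisy))  # < 1.0
```

Because SSIM rewards preserved local structure rather than small pixel-wise error, optimizing a distortion model against it favors content-dependent perceptual quality, which is the motivation the abstract states.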
Detecting multiple salient objects in complex scenes is a challenging task. In this paper, we present a novel method to detect salient objects in images. The proposed method is based on the general ‘center-surround’ visual attention mechanism and the spatial frequency response of the human visual system (HVS). The saliency computation is performed in a statistical way. The method is modeled following three biologically inspired principles and computes saliency via two ‘scatter matrices’, which measure the variability within and between two classes, i.e., the center and surrounding regions, respectively. In order to detect multiple salient objects of different sizes in a scene, the saliency of a pixel is estimated via its saliency support region, defined as the most salient region centered at the pixel. Compliance with human perceptual characteristics enables the proposed method to detect salient objects in complex scenes and predict human fixations. Experimental results on three eye-tracking datasets verify the effectiveness of the method and show that it outperforms state-of-the-art methods on the visual saliency detection task.
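To make the scatter-matrix idea concrete, here is a minimal 1-D sketch: a center region is salient when the between-class scatter (separation of center and surround means) is large relative to the within-class scatter (spread inside each region). This is an illustration over a single intensity feature, not the paper's full formulation, which uses scatter matrices over feature vectors.

```python
# Illustrative 1-D center-surround saliency via the scatter criterion:
# saliency is high when the two regions are well separated (large
# between-class scatter) and internally homogeneous (small within-class
# scatter). Feature here is plain intensity, for brevity.

def scatter_saliency(center, surround):
    nc, ns = len(center), len(surround)
    n = nc + ns
    mc = sum(center) / nc
    ms = sum(surround) / ns
    m = (sum(center) + sum(surround)) / n
    # within-class scatter: pooled spread around each class mean
    sw = (sum((v - mc) ** 2 for v in center) +
          sum((v - ms) ** 2 for v in surround)) / n
    # between-class scatter: weighted separation of the class means
    sb = (nc * (mc - m) ** 2 + ns * (ms - m) ** 2) / n
    return sb / (sw + 1e-9)  # large ratio => center stands out

# A bright patch on a dark background scores far higher than a patch
# that blends in with its surround.
print(scatter_saliency([200, 210, 205], [20, 30, 25, 22]))
print(scatter_saliency([24, 26, 25], [20, 30, 25, 22]))
```

Scanning support regions of different sizes around each pixel and keeping the most salient one is how, per the abstract, objects of different scales can be handled.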
In this paper, an effective method for contrast enhancement, named brightness-preserving weighted dynamic range histogram equalization (BPWDRHE), is proposed. Although histogram equalization (HE) is a universal method, it is not suitable for consumer electronic products because it cannot preserve the overall brightness; consequently, the output images look unnatural and exhibit more visual artifacts. As an extension of the brightness-preserving bi-histogram equalization method, BPWDRHE uses the weighted within-class variance as the criterion for separating the original histogram. Unlike methods that split at the average or median gray level, the proposed method selects gray-scale break points based on the within-class variance, minimizing the total squared error of each sub-histogram and thus the brightness shift incurred when the sub-histograms are equalized independently. As a result, the contrast of both the overall image and local details is enhanced adequately. Experimental results are presented and compared with other brightness-preserving methods.
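The histogram-separation step can be sketched as follows: pick the gray-level break point that minimizes the total within-class variance of the two sub-histograms (equivalently, maximizes the between-class variance, the Otsu criterion) rather than splitting at the mean or median. This is a hedged illustration of the selection criterion only; BPWDRHE's weighting and the subsequent per-segment equalization are not reproduced.

```python
# Illustrative break-point selection by the within-class-variance
# criterion: minimizing within-class variance is equivalent to
# maximizing w0*w1*(m0 - m1)^2 over candidate thresholds (Otsu).

def break_point(hist):
    """hist[g] = pixel count at gray level g; returns the split level t
    such that levels <= t form the first sub-histogram."""
    total = sum(hist)
    grand = sum(g * h for g, h in enumerate(hist))
    best_t, best_between = 0, -1.0
    w0 = s0 = 0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        s0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = s0 / w0, (grand - s0) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_between:
            best_between, best_t = between, t
    return best_t

# Bimodal 8-level histogram: the split lands in the valley between
# the two modes, not at the mean or median gray level.
hist = [10, 40, 30, 5, 2, 25, 45, 15]
print(break_point(hist))  # 3
```

Equalizing the two sub-histograms independently within their own ranges then limits the global brightness shift, which is the failure mode of plain HE that the abstract highlights.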
Color identification of vehicles plays a significant role in crime detection. In this study, a novel scheme for the color identification of vehicles is proposed, using a locating algorithm for regions of interest (ROIs) together with color histogram features extracted from still images. A coarse-to-fine strategy was adopted to efficiently locate the ROIs for various vehicle types. Red patch labeling, geometrical-rule filtering, and a texture-based classifier were cascaded to locate the valid ROIs. A color space fusion together with a dimension reduction scheme was designed for color classification. Color histograms in the ROIs were extracted and classified by a trained classifier. Seven different classes of color were identified in this work. Experiments were conducted to show the performance of the proposed method. The average rates of ROI location and color classification were 98.45% and 88.18%, respectively. Moreover, the proposed method classified up to 18 frames per second.
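The histogram-based classification step can be illustrated with a minimal sketch: quantize each ROI pixel into coarse RGB bins, build a normalized histogram, and assign the label of the nearest class prototype by histogram intersection. The paper's color-space fusion, dimension reduction, and trained classifier are not reproduced here, and the prototype data and class names below are made-up examples.

```python
# Illustrative color classification by coarse RGB histograms and
# histogram intersection. Prototypes stand in for a trained classifier.

BINS = 4  # 4 x 4 x 4 = 64-bin coarse RGB histogram

def histogram(pixels):
    """Normalized 64-bin histogram over (r, g, b) tuples in 0..255."""
    h = [0.0] * (BINS ** 3)
    q = 256 // BINS
    for r, g, b in pixels:
        h[(r // q) * BINS * BINS + (g // q) * BINS + (b // q)] += 1
    return [v / len(pixels) for v in h]

def intersection(h1, h2):
    # Histogram intersection: 1.0 for identical normalized histograms.
    return sum(min(a, b) for a, b in zip(h1, h2))

def classify(pixels, prototypes):
    h = histogram(pixels)
    return max(prototypes, key=lambda name: intersection(h, prototypes[name]))

# Hypothetical prototypes built from labeled training patches.
prototypes = {
    "red":   histogram([(220, 30, 40), (200, 20, 30), (240, 50, 60)]),
    "white": histogram([(240, 240, 240), (250, 250, 250), (230, 235, 240)]),
}
print(classify([(210, 25, 35), (225, 40, 45)], prototypes))  # "red"
```

Coarse binning keeps the feature small and tolerant of illumination noise, in the same spirit as the dimension reduction the abstract describes.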