
Inventi Rapid - Image & Video Processing

Patent Watch

  • Method For Efficient Compression and Decoding of Single Sensor Color Image Data

    A method is described that greatly improves the efficiency and reduces the complexity of image compression when single-sensor color imagers are used for video acquisition. The method additionally allows this new image compression type to remain compatible with existing video processing tools, improving the workflow for film and television production.

  • Method for processing stereoscopic images and corresponding device

    The invention relates to a method for video processing of at least one image of a video sequence, said video sequence comprising a plurality of image pairs, each image pair comprising a first image and a second image, said first and second images being intended to form a stereoscopic image. In order to reduce display defects, the method comprises a step of generation of at least a third image by motion compensated temporal interpolation from at least two of said second images.
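
    As a rough illustration of this kind of motion compensated temporal interpolation, the following Python sketch builds an intermediate frame by block matching between two images and placing the averaged block halfway along its motion vector. The block size, search range, and all names are illustrative assumptions, not details taken from the patent.

      import numpy as np

      def motion_compensated_interpolation(img_a, img_b, block=8, search=4):
          """Create an intermediate frame between img_a and img_b: each block of
          img_a is matched in img_b and the averaged block is written halfway
          along its motion vector."""
          img_a = np.asarray(img_a, dtype=float)
          img_b = np.asarray(img_b, dtype=float)
          H, W = img_a.shape
          out = np.zeros((H, W))
          for by in range(0, H - block + 1, block):
              for bx in range(0, W - block + 1, block):
                  ref = img_a[by:by + block, bx:bx + block]
                  best, best_err = (0, 0), np.inf
                  for dy in range(-search, search + 1):        # brute-force search
                      for dx in range(-search, search + 1):
                          y, x = by + dy, bx + dx
                          if y < 0 or x < 0 or y + block > H or x + block > W:
                              continue
                          err = np.sum((img_b[y:y + block, x:x + block] - ref) ** 2)
                          if err < best_err:
                              best, best_err = (dy, dx), err
                  dy, dx = best
                  matched = img_b[by + dy:by + dy + block, bx + dx:bx + dx + block]
                  ty, tx = by + dy // 2, bx + dx // 2           # halfway position
                  out[ty:ty + block, tx:tx + block] = 0.5 * (ref + matched)
          return out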

  • HARDWARE PIXEL PROCESSING PIPELINE AND VIDEO PROCESSING INSTRUCTIONS

    A hardware pixel processing pipeline and a video processing instruction set accelerate image processing and/or video decompression. The pixel processing pipeline uses hardware components to more efficiently perform color space conversion and horizontal upscaling. Additionally, the pixel processing pipeline reduces the size of its output data to conserve bandwidth. A specialized video processing instruction set allows further acceleration of video processing or video decoding by allowing receipt of a single instruction to cause multiple addition operations or interpolation of multiple pairs of pixels in parallel.

  • VIDEO PROCESSING DEVICE WITH MEMORY OPTIMIZATION IN IMAGE POST-PROCESSING

    A video processing device is disclosed that includes a processor unit with a processor and a memory having a reorder buffer. The processor includes a reorder module, a frame rate conversion module, and post-processing function modules. The reorder, frame rate conversion, and post-processing modules access video frames stored in the reorder buffer and, while the video frames remain in the reorder buffer, respectively reorder them, adjust their frame rate, and perform image processing on them. A method implemented on such a video processing device is also disclosed. A computer-readable storage medium with instructions stored thereon for performing the method is also disclosed.

  • VIDEO DISPLAY DEVICE

    When a wide color gamut display shows video based on a signal that complies with a narrower color reproduction standard, a video processing circuit (2) reduces and corrects the signal values of the input video signal that represent colors within a color range to be corrected. That range lies within the color reproduction range of a liquid crystal panel (4) (an expanded color reproduction range wider than the sRGB standard color reproduction range): it covers a specified saturation range from the highest saturation down to middle saturation inside a specified hue range centered on the red hue, and a specified brightness range from the highest brightness down to middle brightness inside that range. The correction shifts the saturation and brightness of those colors into a predetermined middle color range between the expanded color reproduction range and the color reproduction range of the standard with which the input video signal complies. This makes full use of the wide color gamut display's ability to show highly saturated, vivid reds while eliminating the problem of glaring images in the part of the red color region near the highest brightness and saturation.

  • 3D VIDEO PROCESSING APPARATUS AND 3D VIDEO PROCESSING METHOD

    A 3D video processing apparatus according to an aspect of the present invention includes an offset value complementing unit which complements an offset value of the first picture, by assigning a value which is equal to or greater than the first offset value and equal to or smaller than the second offset value, the first offset value representing the smaller one of the offset value assigned to the second picture temporally preceding the first picture and the offset value assigned to the third picture temporally succeeding the first picture, and the second offset value representing the larger one of the offset value assigned to the second picture and the offset value assigned to the third picture.
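
    The complementing rule described here amounts to choosing a value inside the interval spanned by the offsets of the neighbouring pictures. A minimal Python sketch under that reading, assuming scalar per-picture offsets; the midpoint default and the names are illustrative, not from the patent.

      def complement_offset(prev_offset, next_offset, candidate=None):
          """Assign the missing offset of a picture so that it is not smaller than
          the smaller, and not larger than the larger, of its neighbours' offsets."""
          lo, hi = min(prev_offset, next_offset), max(prev_offset, next_offset)
          if candidate is None:
              candidate = (prev_offset + next_offset) / 2.0  # simple midpoint choice
          return max(lo, min(hi, candidate))                 # clamp into [lo, hi]

      print(complement_offset(4, 10))        # -> 7.0
      print(complement_offset(4, 10, 12))    # -> 10 (clamped to the larger neighbour)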

  • Video Recording Environment

    Solutions for providing an interactive and intuitive video environment. Some such solutions use a user supersystem as an interactive multimedia system, including various features relating to video capture and processing. In some cases, a "live video thumbnail" is provided as part of a media album application, for example, to entice users to capture video (e.g., a photo or video file) using components of the user supersystem. Other implementations facilitate video processing functionality, such as "best frame" selection and auto-cropping of video data.

  • VIDEO PROCESSING APPARATUS AND VIDEO DISPLAY APPARATUS

    The present invention provides a video processing apparatus and video display apparatus that are capable of reliably preventing the occurrence of motion blur or dynamic false contours. The video processing apparatus has: a subfield conversion unit (2) for converting an input image into light emission data for each of subfields; a motion vector detection unit (3) for detecting a motion vector using at least two input images that are temporally adjacent to each other; a first subfield regeneration unit (4) for collecting light emission data of the subfields of pixels that are located spatially forward by the number of pixels corresponding to the motion vector, and thereby spatially rearranging the light emission data for each of the subfields, in order to generate rearranged light emission data for each of the subfields; and an adjacent region detection unit (41) for detecting an adjacent region between a first image and a second image of the input image, wherein the first subfield regeneration unit (4) does not collect the light emission data outside the adjacent region.

  • Visual Prosthesis

    A visual prosthesis apparatus and a method for limiting power consumption in a visual prosthesis apparatus. The visual prosthesis apparatus comprises a camera for capturing a video image, a video processing unit associated with the camera, the video processing unit configured to convert the video image to stimulation patterns, and a retinal stimulation system configured to stop stimulating neural tissue in a subject's eye based on the stimulation patterns when an error is detected in a forward telemetry received from the video processing unit.

  • METHOD AND SYSTEM FOR BANDWIDTH REDUCTION THROUGH INTEGRATION OF MOTION ESTIMATION AND MACROBLOCK ENCODING

    Video data for a current frame and a plurality of reference frames may be loaded into a video codec in a video processing device from a memory used in the video processing device, and the loaded video data may be buffered in an internal buffer used during motion estimation. Motion estimation may be performed based on the loaded video data, and after completion of the motion estimation, macroblock encoding for the current frame may be performed based on the loaded video data and the motion estimation. The motion estimation may comprise coarse motion estimation and fine motion estimation, and motion vectors may be generated based on the motion estimation on a per-macroblock basis. The encoding may comprise macroblock encoding of a residual for the current frame, which may be determined based on the original video data accessed from the internal motion estimation buffer and on a prediction determined based on the generated motion vectors.

  • REAL TIME VIDEO PROCESS CONTROL USING GESTURES

    A method and apparatus for interaction with and control of a video capture device are described. In the described embodiments, video is presented at a display, the display having contact or proximity sensing capabilities. A gesture can be sensed at or near the display in accordance with the video presented on the display, the gesture being associated with a first video processing operation. The video is modified in accordance with the first video processing operation in real time.

  • METHOD AND APPARATUS FOR VIDEO RECORDING AND PLAYBACK

    According to one embodiment, a video display device includes a signal playback module configured to receive content to be played back by a source device, a signal control module configured to instruct the source device to stop and start playback of the content, and a bidirectional communication interface configured to connect the signal control module and the signal playback module such that a control signal can be transferred to and from the source device.

  • DEVICE AND METHOD FOR AUTOMATICALLY RECREATING A CONTENT PRESERVING AND COMPRESSION EFFICIENT LECTURE VIDEO

    A device and method for automatically recreating a content preserving and compression efficient lecture video is provided. The device comprises a computer based video recreating means (3) connected to a video receiving means (2) at an input side thereof and to a video reproducing means (4) at the output side thereof, wherein the video recreating means (3) is designed to split the video into visual and audio data, split the visual data into a plurality of scenes, classify each scene into a number of activity scenes, select activity scenes pre-determined to be critical for preserving the semantics of the lecture video and determine a key frame thereof, recreate the visual data by effecting a time based merger of the key frames of the selected activity scenes, recreate the audio data by removing voiceless data and noise therein, and recreate the lecture video by effecting a synchronized time based merger of the recreated visual and audio data.

  • Point to multi-point wireless video delivery

    Point to multi-point wireless video delivery. Among a group of receiver wireless communication devices (RXs), one is designated (e.g., as acknowledgment (ACK) leader). Media delivery operational parameters are selected based on the designated RX or based on all or a subset of the RXs. For simultaneous media delivery to multiple RXs, characteristics associated with the designated RX (or all, or a subset, of the RXs) govern the manner by which communications are made. Different respective RXs may be designated to serve in this role at different times. Wireless delivery of media (e.g., video signaling, audio signaling, etc.) to a group of RXs is effectuated in accordance with modified multicast signaling with a designated leader (e.g., ACK leader). Among a group of devices, a least successful receiving device that still receives media at an acceptable level may be chosen as the designated leader (e.g., ACK leader).

  • FRAME BUFFER COMPRESSION FOR VIDEO PROCESSING DEVICES

    For compressing a video signal, a local multiscale transform is applied to a frame of the video signal to obtain coefficient blocks. The coefficients of each block are distributed into coefficient groups associated with that block. A plurality of the coefficient groups associated with a block are processed. The processing of one of the groups comprises determining an exponent for encoding the coefficients of that group. Mantissas are determined for quantizing the coefficients of the plurality of groups in combination with the exponents respectively determined for these groups. Coding data including each exponent determined for a coefficient group and the mantissas quantizing the coefficients of the group in combination with this exponent are stored in an external frame buffer. The mantissas determined for quantizing the coefficients of one of the groups are represented in the coding data by a respective number of bits depending on the exponents determined for the plurality of coefficient groups.
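
    The exponent/mantissa scheme described above is, in effect, a block floating-point format: one shared exponent per coefficient group plus short per-coefficient mantissas. A minimal numpy sketch of that idea; the 6-bit mantissa width and the function names are illustrative assumptions, not values from the patent.

      import numpy as np

      def encode_group(coeffs, mantissa_bits=6):
          """Shared-exponent (block floating-point) coding of one coefficient group."""
          coeffs = np.asarray(coeffs, dtype=np.int64)
          peak = int(np.abs(coeffs).max())
          exponent = max(peak.bit_length() - mantissa_bits, 0)  # shift so mantissas fit
          mantissas = np.sign(coeffs) * (np.abs(coeffs) >> exponent)
          return exponent, mantissas

      def decode_group(exponent, mantissas):
          return mantissas << exponent                          # approximate reconstruction

      exponent, mantissas = encode_group([500, -37, 12, 3])
      print(exponent, decode_group(exponent, mantissas))        # 3 [496 -32 8 0]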

  • USER-INTERACTIVE DISPLAYS INCLUDING DYNAMIC VIDEO MOSAIC ELEMENTS WITH VIRTUAL ZOOM

    The present invention teaches a method of creating and presenting a user interface comprising a Dynamic Mosaic Extended Electronic Programming Guide (DMXEPG) using video, audio, special applications, and service dynamic metadata. The system enables television or digital radio service subscribers to select and display various programs including video, interactive TV applications, or any combination of audio or visual components grouped and presented in accordance with the dynamic program/show metadata, business rules and objectives of service providers, broadcasters, and/or personal subscriber choices, collectively referred to as mosaic element presentation criteria.

  • IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

    An image processing device includes: a plurality of encoding units that encode image data; a shared memory that stores reference image data which is used for encoding performed by each of the plurality of encoding units; and a control unit that secures an encoding unit from the plurality of encoding units, which is made to encode an intra-frame prediction encoded image and a forward prediction encoded image by priority, and that makes an encoding unit, which is not used to encode the intra-frame prediction encoded image or the forward prediction encoded image, encode a bidirectional prediction encoded image, using reference image data stored in the shared memory during a period where the secured encoding unit does not perform the encoding.

  • IMAGE PROCESSING DEVICE AND INFORMATION STORAGE MEDIUM

    An image processing device includes a normal light image acquisition section that acquires a normal light image including an object image that includes information within a wavelength band of white light, a special light image acquisition section that acquires a special light image including an object image that includes information within a specific wavelength band, an isolated point determination section that performs an isolated point determination process on a normal light processing target pixel based on a pixel value of the normal light processing target pixel, the normal light processing target pixel being a processing target pixel included in the normal light image, and a correction control section that controls a correction process performed on the special light image based on the isolated point determination process performed by the isolated point determination section.

  • IMAGE PROCESSING DEVICE, IMAGE FORMING DEVICE, AND COMPUTER-READABLE MEDIUM

    An image processing device includes a screen processing unit and a correction processing unit. The screen processing unit executes screen processing for an image to be processed. The correction processing unit performs correction processing of correcting a distortion of an output image from an original image, based on the amount of the distortion, for (i) the image before the screen processing and (ii) the image after the screen processing. The correction amount relating to the correction processing for the image after the screen processing is smaller than the correction amount relating to the correction processing for the image before the screen processing.

  • Method and System for Video and Image Coding Using Pattern Matching for Intra-Prediction

    A method and system are provided in which a device can determine, for a current block of pixels, a first intra-prediction based on reconstructed neighboring pixels and a second intra-prediction based on a pattern match with a reconstructed block of pixels. One of the two intra-predictions may be selected to generate a compressed bit stream comprising information of the current block of pixels. The intra-prediction selection may be performed on a block-by-block basis. An indication may be generated as to which of the two intra-predictions was selected for a particular block of pixels. When pattern matching is selected, a positional relationship of the reconstructed block of pixels with the matching pattern may be encoded and embedded into the compressed bit stream. The same device or another device may be operable to receive a compressed bit stream comprising intra-prediction selection and/or positional relationship information to reconstruct a current block of pixels.

  • IMAGE PROCESSING DEVICE AND SHAKE CALCULATION METHOD

    A method of calculating camera shake using first and second images obtained by continuous shooting includes: extracting first and second feature points located in positions symmetrical about a central point in the first image; searching for the first and second feature points in the second image; and calculating the camera shake based on coordinates of the first and second feature points extracted from the first image and coordinates of the first and second feature points searched for in the second image.
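
    A minimal Python sketch of the shake calculation described above, assuming the two feature points have already been extracted from the first image (symmetrically about the centre) and located again in the second image; only translation and in-plane rotation are estimated, and all names are illustrative.

      import math

      def camera_shake(p1_a, p2_a, p1_b, p2_b):
          """Estimate shake between two shots from one pair of feature points.
          p1_a, p2_a: the two (x, y) feature points in the first image.
          p1_b, p2_b: the same feature points found in the second image.
          Returns (dx, dy, rotation_in_radians)."""
          # Translation: displacement of the midpoint of the point pair.
          mid_a = ((p1_a[0] + p2_a[0]) / 2.0, (p1_a[1] + p2_a[1]) / 2.0)
          mid_b = ((p1_b[0] + p2_b[0]) / 2.0, (p1_b[1] + p2_b[1]) / 2.0)
          dx, dy = mid_b[0] - mid_a[0], mid_b[1] - mid_a[1]
          # Rotation: change in the angle of the line joining the two points.
          ang_a = math.atan2(p2_a[1] - p1_a[1], p2_a[0] - p1_a[0])
          ang_b = math.atan2(p2_b[1] - p1_b[1], p2_b[0] - p1_b[0])
          return dx, dy, ang_b - ang_a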

  • IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM, AND INTEGRATED CIRCUIT

    The grayscale of an input signal is converted without amplifying noise components thereof. A grayscale conversion portion performs grayscale conversion on an input signal IS to create a converted signal TS, a noise reduction degree determining portion determines a noise reduction degree NR that expresses a strength of noise reduction processing to be applied to the converted signal based on the input signal IS and the converted signal TS, and a noise reducing portion executes noise reduction processing on the converted signal TS based on the noise reduction degree NR. By doing this, it is possible to convert the grayscale of the input signal without enhancing the noise.
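
    A minimal numpy sketch of this pipeline, assuming a simple gamma curve as the grayscale conversion and a 3x3 box filter as the noise reduction; the rule that the reduction degree NR grows with the local gain TS/IS is one illustrative reading of the abstract, not the patent's exact formula.

      import numpy as np

      def convert_grayscale(IS, gamma=0.5):
          """Example tone conversion; IS is a float image with values in [0, 1]."""
          return np.power(IS, gamma)

      def noise_reduction_degree(IS, TS, eps=1e-6):
          """NR is larger where the conversion amplified the signal (and its noise)."""
          gain = TS / (IS + eps)
          return np.clip(gain - 1.0, 0.0, None)

      def reduce_noise(TS, NR):
          """Blend each pixel toward its 3x3 local mean, weighted by NR."""
          pad = np.pad(TS, 1, mode='edge')
          local_mean = sum(pad[dy:dy + TS.shape[0], dx:dx + TS.shape[1]]
                           for dy in range(3) for dx in range(3)) / 9.0
          w = np.clip(NR, 0.0, 1.0)
          return (1.0 - w) * TS + w * local_mean

      IS = np.random.rand(4, 4)
      TS = convert_grayscale(IS)
      OS = reduce_noise(TS, noise_reduction_degree(IS, TS))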

  • IMAGE PROCESSING DEVICE, DISPLAY DEVICE, SCREEN CONTROL SYSTEM, AND SCREEN CONTROL METHOD

    An image processing device in which a plurality of applications each having multiple functions can be installed includes a display device and an application control unit. When screen definition information acquired from a first application includes definition information of a second application corresponding to a screen change destination, the application control unit displays an application screen in which a display component for selecting a function of the second application is arranged, on the display device. When an input manipulation on the application screen by a user is received, the application control unit specifies a function of the second application based on a result of determination of an event type of the received input manipulation and displays a functional screen of the specified function of the second application on the display device as a result of the screen change operation.

  • VIDEO PROCESSING DEVICE

    A video processing device for converting an interlaced signal into a progressive signal includes an OSD mixer which mixes an OSD display, such as a caption or a telop, with the interlaced signal, a cinema detector which detects a pulldown pattern by comparing video images of different fields, a phase comparator which compares a timing of change in the OSD display and a timing of change in a cinema video image based on OSD mixing signals indicating OSD-mixed locations in the OSD mixer and on the pulldown pattern detected in the cinema detector, and an interpolated pixel generator which generates a new pixel between lines of the interlaced signal by an interpolation method based on a detection result in the cinema detector and on a comparison result in the phase comparator.

  • Image scaling method and apparatus

    A method and apparatus for down scaling image data is disclosed. One method controls a phase for an M/N filter, where N represents a number of input samples, and M represents a number of output samples. N is greater than M. Another method may switch between an M/N filter and a phase-controlled M/N filter.
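
    A minimal Python sketch of a phase-controlled M/N downscaler on one line of samples, assuming linear interpolation between neighbouring input samples; the abstract does not specify the filter kernel, and the names are illustrative.

      def downscale_mn(samples, M, N):
          """Produce roughly M output samples per N input samples (N > M).
          A phase accumulator tracks the fractional source position of every
          output sample; the phase selects the interpolation weights."""
          assert N > M
          out = []
          pos = 0.0              # source position of the next output sample
          step = N / M           # > 1, so the output is shorter than the input
          while int(pos) + 1 < len(samples):
              i = int(pos)
              phase = pos - i    # fractional phase in [0, 1)
              out.append((1.0 - phase) * samples[i] + phase * samples[i + 1])
              pos += step
          return out

      print(downscale_mn(list(range(10)), M=3, N=5))   # 6 output samples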

  • Shared block comparison architecture for image registration and video coding

    This disclosure describes an efficient architecture for an imaging device that supports image registration for still images and video coding of a video sequence. For image registration, the described architecture uses block-based comparisons of image blocks of a captured image relative to blocks of another reference image to support image registration on a block-by-block basis. For video coding, the described architecture uses block-based comparisons, e.g., to support motion estimation and motion compensation. According to this disclosure, a common block comparison engine is used on a shared basis for both block-based image registration and block-based video coding. In this way, a hardware unit designed for block-based comparisons may be implemented so as to work in both the image registration process for still images and the video coding process for coding a video sequence.

  • Spatial-temporal multi-resolution image sensor with adaptive frame rates for tracking movement in a region of interest

    A sensor includes an array of pixels to capture an image, the array of pixels arranged as a plurality of pixel groups, each of the pixel groups having two or more pixels coupled to a shared floating diffusion node for outputting merged image signals from the pixel groups, at least one inter-pixel switch to control transfer of electrical charge from a floating diffusion node for a first one of the pixel groups to a floating diffusion node for a second one of the pixel groups to temporarily store a portion of a previous image frame within the floating diffusion node for the second one of the pixel groups, and a motion comparator to compare an image signal from the first one of the pixel groups with an image signal from the second one of the pixel groups to detect motion between a current frame and the previous frame.

  • IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, AND IMAGE PROCESSING METHOD

    An image processing apparatus for an image forming apparatus including a line head array that forms an image by illuminating one or more illumination elements in correspondence with image data. The image processing apparatus includes a detection part configured to detect a linear image extending in a sub-scanning direction in the image data, and an adjustment part configured to adjust a density of the linear image so that the energy used in illuminating the one or more illumination elements for forming the linear image is reduced compared to the energy used in forming the linear image without adjusting the density of the linear image.

  • MOVING IMAGE PROCESSING APPARATUS, MOVING IMAGE PLAYBACK APPARATUS, MOVING IMAGE PROCESSING METHOD, MOVING IMAGE PLAYBACK METHOD, AND STORAGE MEDIUM

    A moving image processing apparatus comprises an image capturing unit (2) that acquires data of a moving image by capturing a plurality of continuous image frames, an audio data attaching unit (14) that attaches more than one kind of audio data to the data of the moving image acquired by the image capturing unit, and a playback information attaching unit (14) that attaches playback information to the data of the moving image to which the audio data is attached by the audio data attaching unit. The playback information indicates a playback mode corresponding to each of the more than one kind of audio data. The playback mode includes a first mode of playing back the data of the moving image by skipping some of the frames and a second mode of playing back the moving image without skipping.

  • IMAGE PROCESSING APPARATUS AND METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS

    An image processing apparatus for applying film grain effects on an image of received image data includes a generating unit configured to generate, on the basis of pixel values randomly read from grain data including a plurality of pixel values, a basic grain image having a certain size larger than the grain data; a resizing unit configured to resize the basic grain image generated by the generating unit to have the same size as the received image data; and a combining unit configured to combine the basic grain image resized by the resizing unit with the received image data.
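
    A minimal numpy sketch of the three units described above (generating, resizing, combining), assuming nearest-neighbour resizing and additive blending; the 256x256 basic grain size and the strength parameter are illustrative assumptions, not values from the patent.

      import numpy as np

      def make_basic_grain(grain_data, size=(256, 256)):
          """Build a basic grain image larger than grain_data from randomly read samples."""
          idx = np.random.randint(0, grain_data.size, size=size)
          return grain_data.ravel()[idx]

      def resize_nearest(img, out_shape):
          """Nearest-neighbour resize (stand-in for the resizing unit)."""
          ys = np.arange(out_shape[0]) * img.shape[0] // out_shape[0]
          xs = np.arange(out_shape[1]) * img.shape[1] // out_shape[1]
          return img[np.ix_(ys, xs)]

      def apply_film_grain(image, grain_data, strength=0.05):
          basic = make_basic_grain(grain_data)               # generating unit
          grain = resize_nearest(basic, image.shape)         # resizing unit
          return np.clip(image + strength * (grain - grain.mean()), 0.0, 1.0)  # combining

      frame = np.random.rand(128, 192)          # received image data in [0, 1]
      grain_data = np.random.randn(64, 64)      # small table of grain samples
      out = apply_film_grain(frame, grain_data)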

  • IMAGE PROCESSING APPARATUS, METHOD OF THE SAME, AND PROGRAM

    Provided is an image processing apparatus including: a frequency value calculation section that allocates each pixel of an input image to any of respective partial regions obtained by dividing the entirety of a possible range of a luminance value into units in a luminance direction on the basis of the luminance value thereof, and allocates one pixel of the input image to partial regions when calculating frequency values representing the number of pixels allocated to the partial regions with respect to the respective partial regions, to update the frequency values of the partial regions; a characteristic value calculation section that calculates a characteristic value representing a characteristic of the partial region; and a weighted product-sum section that performs edge-preserving smoothing on the input image by weighting and averaging the characteristic values in accordance with a distance in the luminance direction, using the calculated frequency value and the calculated characteristic value.

  • GEOMETRICAL IMAGE REPRESENTATION AND COMPRESSION

    A method and apparatus is disclosed herein for geometrical image representation and/or compression. In one embodiment, the method comprises creating a representation for image data that includes determining a geometric flow for image data and performing an image processing operation on data in the representation using the geometric flow.

  • IMAGE PROCESSING APPARATUS

    According to an embodiment, an image processing apparatus includes a generation unit. The generation unit generates a texture image by searching for a pixel area similar to a processed pixel area near a processing target pixel in the texture image from a neighboring area at a position corresponding to the processing target pixel in a sample texture image, and assigning the processing target pixel a pixel value near the similar pixel area in accordance with a positional relationship between the processed pixel area and the processing target pixel. The generation unit searches for the similar pixel area based on a similarity between a pixel value in a pixel area in the neighboring area and a pixel value in the processed pixel area, and a determination result indicating whether each pixel in the neighboring area expresses the same object as that expressed by the processing target pixel.

  • IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, IMAGE READING APPARATUS, AND IMAGE PROCESSING METHOD

    An image processing apparatus includes an image area extracting section for identifying and extracting, on the basis of inputted image data, an image area within the document where an image is present. The image area extracting section includes an image area detecting section for comparing a pixel value of each part of an image of the inputted image data with a threshold value so as to detect, as the image area, an area where a pixel value is larger than the threshold value. The image area extracting section further includes a judging section for judging a type of the inputted image data, and a threshold value changing section for changing the threshold value used in the image area detecting section to one suitable for the type of the inputted image data in accordance with the type judged by the judging section.

  • IMAGE PROCESSING DEVICE, PROGRAM, AND IMAGE PROCESSING METHOD

    There is provided an image processing device including a recognition unit configured to recognize a plurality of users being present in an input image captured by an imaging device, an information acquisition unit configured to acquire display information to be displayed in association with each user recognized by the recognition unit, and an output image generation unit configured to generate an output image by overlaying the display information acquired by the information acquisition unit on the input image. The output image generation unit may determine which of first display information associated with a first user and second display information associated with a second user is to be overlaid on a front side on the basis of a parameter corresponding to a distance of each user from the imaging device.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

    An image processing apparatus may include a feature quantity extraction unit configured to extract a feature quantity from an image, a setting unit configured to set setting information including a plurality of setting items used to perform processing upon the image so that a designated setting item can be changed among the setting items; and a recording unit configured to associate the setting information with a feature quantity of the image and record them.

  • IMAGE PROCESSING APPARATUS AND COMPUTER READABLE MEDIUM

    An image processing apparatus includes: a noise position information obtaining unit that obtains noise position information regarding positions of noise in an image read by a reading unit that optically reads a surface of a medium, an image generating unit that generates a noise eliminated image that is obtained by eliminating noise from the image, a pattern information obtaining unit that obtains pattern information indicating a pattern appearing on the surface of the medium from the noise eliminated image, and a pattern information registering unit that registers the pattern information obtained from areas set on the basis of the noise position information in the noise eliminated image.

  • VIDEO PROCESSING METHOD AND CIRCUIT USING THEREOF

    A video processing method for enlarging and enhancing the sharpness of input video data includes the following steps. First, N sets of pixel row data of the input video data are respectively buffered in N linear buffers, where N is a natural number. Next, I sets of enlarged pixel row data are generated by interpolation according to the buffered N sets of pixel row data in the N linear buffers and a currently inputted set of pixel row data, where I is a natural number greater than N. Then, I sets of smoothed and enlarged pixel row data are generated according to the buffered N sets of pixel row data in the N linear buffers and the (N+1)th set of pixel row data. Thereafter, I sets of sharpness-enhanced pixel row data are obtained according to the I sets of enlarged pixel row data and the I sets of smoothed and enlarged pixel row data.
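
    The combination of the enlarged and the smoothed-and-enlarged data amounts to an unsharp-mask style enhancement. A minimal numpy sketch for a single pixel row, assuming linear interpolation for the enlargement and a 3-tap smoothing kernel; both choices, and the gain parameter, are illustrative and not taken from the patent.

      import numpy as np

      def enlarge_row(row, factor=2):
          """Enlarge one pixel row by linear interpolation."""
          x = np.arange(len(row))
          xi = np.linspace(0, len(row) - 1, factor * len(row))
          return np.interp(xi, x, row)

      def smooth_row(row):
          """Simple [1 2 1]/4 smoothing used as the low-pass reference."""
          pad = np.pad(row, 1, mode='edge')
          return (pad[:-2] + 2 * pad[1:-1] + pad[2:]) / 4.0

      def enlarge_and_sharpen(row, factor=2, gain=1.0):
          enlarged = enlarge_row(row, factor)
          smoothed = smooth_row(enlarged)
          # Unsharp masking: add back the detail removed by the smoothing.
          return enlarged + gain * (enlarged - smoothed)

      print(enlarge_and_sharpen(np.array([0.0, 0.0, 1.0, 1.0])))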

  • GENERATING A RESPONSE TO VIDEO CONTENT REQUEST INCLUDING DYNAMICALLY PROCESSED VIDEO CONTENT

    In one embodiment, a video processing server including a memory capable of storing data and a processor is disclosed. The processor is configured for using the data such that the video processing server can receive a request redirected from a gateway for a video content, wherein the request is redirected by the gateway based on information contained in the request and wherein the information contained in the request includes control data used for an optimal delivery of the video content. The processor is further configured for using the data such that the video processing server can send the redirected request to a content provider identified in the request, receive the requested video content from the content provider, and generate a response to the request by modifying the video content based on the control data.

  • POWER AND COMPUTATIONAL LOAD MANAGEMENT TECHNIQUES IN VIDEO PROCESSING

    Techniques for managing power consumption and computational load on a processor during video processing and decoding are provided. One representative embodiment discloses a method of processing a data stream that includes video data. According to the method, one or more protocols used to create the data stream are identified. The various parsing and decoding operations required by the protocol are then identified and managed based on the available electrical power or available processing power. Another representative embodiment discloses a method of processing a data stream that includes video data. According to the method, one or more protocols used to create the data stream are identified. The various parsing and decoding operations required by the protocol are then identified and managed based on a visual quality of the video or a quality of experience.

  • VIDEO PROCESSING METHOD AND APPARATUS BASED ON MULTIPLE TEXTURE IMAGES USING VIDEO EXCITATION SIGNALS

    Disclosed herein is a video processing apparatus and method based on multiple texture images using video excitation signals. For this, an input video is divided into shot segments, and one is selected from a plurality of frames of each shot segment as a seed image. A plurality of texture points are detected from the seed image. The plurality of texture points are tracked from the plurality of frames of the shot segment and then spatio-temporal location transform variables for the respective texture points are calculated. A plurality of texture images are defined using texture points at which the spatio-temporal location transform variables correspond to one another. Each of the plurality of texture images is defined as a sum of a plurality of texture blocks that are outputs of texture synthesis filters that receive video excitation signals as inputs.

  • VIDEO PROCESSING METHOD AND APPARATUS BASED ON MULTIPLE TEXTURE IMAGES

    Disclosed herein is a video processing apparatus and method based on multiple texture images, which can process videos with optimal video quality at a low transfer rate. For this, an input video is divided into shot segments, and one is selected from a plurality of frames of each shot segment as a seed image. A plurality of texture points are detected from the seed image. The plurality of texture points are tracked from the plurality of frames of the shot segment and then spatio-temporal location transform variables for the respective texture points are calculated. A plurality of texture images are defined using texture points at which the spatio-temporal location transform variables correspond to one another.

  • DYNAMICALLY INSTALLING IMAGE PROCESSING

    Methods, computer-readable media, and systems are provided for dynamically installing an image processing filter. One method for dynamically installing an image processing filter includes starting to obtain image information by infrastructure of an image processing device and processing the obtained image information with an application. After starting to obtain image information, an operating system (OS) application programming interface (API) is received that allows just-in-time (JIT) bytecode to be executed as a filter while the obtained image information is processed.

  • Image processing apparatus, power-saving recovery control method, and computer program product

    An image processing apparatus including: a first storage unit; a second storage unit that has a higher storage capacity than that of the first storage unit and a longer start-up time than that of the first storage unit; and a control unit that, after shifting to a power-saving mode in which power consumption is reduced by shutting off power supply at least to the second storage unit, starts a recovery process from the power-saving mode upon occurrence of a recovery request from the power-saving mode to perform a processing operation using the second storage unit, starts the processing operation with the first storage unit as a data storing destination when the first storage unit is ready for use, and switches the data storing destination from the first storage unit to the second storage unit when the second storage unit is ready for use.

  • DATA PROCESSING APPARATUS AND IMAGE PROCESSING APPARATUS

    A data processing apparatus may include a plurality of buffer units that stores data, a data write control unit that writes input data to any one of the plurality of buffer units by exclusively controlling the plurality of buffer units, and a data read control unit that reads data to be output from any one of the plurality of buffer units by exclusively controlling the plurality of buffer units. The data write control unit may output a data write completion signal indicating that the writing of the data is completed when the writing of the input data is completed. The data read control unit may output a data read completion signal indicating that the reading of the data is completed when the reading of the data to be output is completed.

  • IMAGE PROCESSING DEVICE, CONTROL METHOD THEREFOR AND COMPUTER READABLE MEDIUM

    There is provided an image processing device including: a transmission unit that transfers data to an FTP server in a specified transmission mode; and a transmission mode specifying unit that initially specifies an active mode for the transmission mode for transferring the data to the FTP server, and if establishing a data transfer connection in an active mode fails, specifies a passive mode for the transmission mode for transferring the data to the FTP server.

  • ULTRASONIC DIAGNOSTIC APPARATUS AND ULTRASONIC IMAGE PROCESSING APPARATUS

    According to one embodiment, an edge information calculation unit calculates edge information based on a generated ultrasonic image. An edge filter unit generates a filtered image from the ultrasonic image by applying a filter having filter characteristics corresponding to the calculated edge information to the ultrasonic image. An edge enhancement unit generates an enhanced image from the filtered image by increasing the brightness value, of the filtered image, which corresponds to the edge information. A high brightness suppression unit generates a composite image of the enhanced image and the ultrasonic image in accordance with a compositing ratio corresponding to the brightness value of the enhanced image.

  • IMAGE PROCESSING DEVICE AND METHOD

    An image processing device reads a video consisting of a plurality of image frames from a storage device. A stable region along a Y-axis direction in each image frame is determined according to pixel information of an edge row of each image frame. The device then aligns all the image frames according to the stable region along the Y-axis direction in each image frame, and trims all the image frames by cutting image regions other than the stable region along the Y-axis direction, to reduce a shaking degree of each image frame along the Y-axis direction. The device further reduces a shaking degree of each image frame along an X-axis direction using a method similar to that used for the Y-axis direction. Finally, the device displays stable playback of the video consisting of the aligned and trimmed image frames.
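
    A minimal numpy sketch of the Y-axis part of this process, assuming the per-frame vertical offset is estimated by matching row-mean profiles against the first frame and that alignment and trimming are done by cropping every frame to the common stable band; the profile-matching criterion is an illustrative stand-in for the edge-row analysis in the abstract.

      import numpy as np

      def vertical_shift(ref, frame, max_shift=20):
          """Find the shift s that minimises the mismatch between ref[y] and frame[y + s]."""
          ref_prof, frm_prof = ref.mean(axis=1), frame.mean(axis=1)
          H = len(ref_prof)
          best_s, best_err = 0, np.inf
          for s in range(-max_shift, max_shift + 1):
              a = ref_prof[max(0, -s): H - max(0, s)]
              b = frm_prof[max(0, s): H + min(0, s)]
              err = np.mean((a - b) ** 2)
              if err < best_err:
                  best_s, best_err = s, err
          return best_s

      def stabilize_vertically(frames, max_shift=20):
          """Align every frame to the first one and trim all frames to the common band."""
          H = frames[0].shape[0]
          shifts = [vertical_shift(frames[0], f, max_shift) for f in frames]
          top = max(0, -min(shifts))          # rows lost at the top after alignment
          bottom = H - max(0, max(shifts))    # rows lost at the bottom
          return [f[top + s: bottom + s] for f, s in zip(frames, shifts)]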

  • Electronic Apparatus and Image Processing Method

    According to one embodiment, an electronic apparatus includes an anniversary setting module, an image setting module, an image extraction module, and an image display module. The anniversary setting module sets an input date to be an anniversary. The image setting module associates one still image of a plurality of still images with the anniversary, the associated still image being designated by a user. The image extraction module extracts still images from the plurality of still images when a present date is within a predetermined time period including the anniversary, the extracted still images being relevant to the associated still image. The image display module displays a moving picture using the associated still image and the extracted still images.

  • IMAGE PROCESSING METHOD AND APPARATUS ADJUSTING PROCESSING ORDER ACCORDING TO DIRECTIVITY

    An image processing apparatus may process a macro block by determining a processing order of the macro block based on a predicted directivity. The image processing apparatus may predict a directivity of the macro block based on neighboring pixel data in the macro block, determine the processing order of the macro block based on the predicted directivity, and process the macro block according to the determined processing order, thereby enhancing a data compression rate.
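
    A minimal Python sketch of this idea, assuming the directivity is predicted from already-available neighbouring pixel rows and columns and that the processing order is simply a choice between row-major and column-major scans; both assumptions are illustrative readings of the abstract rather than rules stated in it.

      import numpy as np

      def choose_scan_order(neighbors_top, neighbors_left):
          """Predict the macro block's directivity from neighbouring pixel data and
          pick the scan order that follows the direction of least variation."""
          horiz_activity = np.mean(np.abs(np.diff(neighbors_top)))   # variation along x
          vert_activity = np.mean(np.abs(np.diff(neighbors_left)))   # variation along y
          return 'row_major' if horiz_activity <= vert_activity else 'column_major'

      def scan_positions(block=16, order='row_major'):
          """Pixel visiting order used when processing the macro block."""
          if order == 'row_major':
              return [(y, x) for y in range(block) for x in range(block)]
          return [(y, x) for x in range(block) for y in range(block)]

      order = choose_scan_order(np.array([10, 10, 11, 10]), np.array([5, 40, 80, 120]))
      positions = scan_positions(block=4, order=order)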

  • Multi-Level Video Processing Within A Vehicular Communication Network

    A system for performing multi-level video processing within a vehicle includes a pre-processing module for determining an encoding mode and enabling one or more levels of encoding based on the encoding mode. The pre-processing module further receives a video stream from a camera attached to the vehicle via a vehicular communication network and encodes the video stream based on the encoding mode to produce a packet stream output. The system further includes a video decoder for receiving the packet stream output and decoding the packet stream output in accordance with the encoding mode to produce a decoded video output.

  • APPLYING NON-HOMOGENEOUS PROPERTIES TO MULTIPLE VIDEO PROCESSING UNITS (VPUs)

    A multiprocessor system includes a plurality of special purpose processors that perform different portions of a related processing task. A set of commands that causes each of the processors to perform the portions of the related task is distributed, and the set of commands includes a predicated execution command that precedes other commands within the set of commands. It is determined whether commands subsequent to the predicated execution command are intended to be executed by a first processor or a second processor based on information in the predicated execution command, and the set of commands includes all commands to be executed by each processor.

  • IN-VEHICLE IMAGE PROCESSING DEVICE AND TRAVEL AID DEVICE

    An in-vehicle information processing device capable of efficiently processing information by extracting an event that exists outside of a travel route but can possibly reach the travel route. The in-vehicle information processing device includes a map database, a traffic information database, a vehicle position determination mechanism, a driving assist information extraction mechanism for extracting driving assist information, and a driving assist information database for storing the driving assist information. An aid information extraction mechanism includes a travel route determination mechanism for determining a vehicle travel route based on a vehicle position. The driving assist information extraction mechanism extracts the driving assist information from map information and traffic information on the travel route.

  • Method for the Reduction of Biological Sampling Errors by Means of Image Processing

    The present invention relates to methods and devices for reducing biological sampling errors by means of image processing. Image processing techniques are used to determine the volume of sample added to a device, such as a diagnostic test, and to correct for user error in sampling techniques.

  • IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

    A low frequency component image L[i] and high frequency component-emphasized image SH[i] are generated from an image A[i]. Lossy compression processing is performed for the low frequency component image L[i] to generate a compressed image C[i] and store it in a memory. A compressed image C[i-1] is decoded to generate a decoded image L'[i-1]. The compressed image C[i] is decoded to generate a decoded image L'[i]. A difference image E[i] between the decoded image L'[i] and the low frequency component image L[i] is generated. The low frequency component image L[i], decoded image L'[i-1], and difference image E[i] are composited at a predetermined ratio to generate a composite image SL[i]. The high frequency component-emphasized image SH[i] and composite image SL[i] are output as subframe images of the i-th frame.

  • Method to measure local image similarity and its application in image processing

    A method for effectively performing local image similarity measurement is proposed. A system equipped with such a method for effectively performing an image processing task includes an image processor that performs an intermediate-results calculation procedure to calculate intermediate result values that are based upon corresponding pixels of a target patch and one or more similar patches. The image processor typically moves the target patch of the intermediate-results calculation to different locations in a raster order or some other organized order. The image processor then performs an intermediate-results combination procedure by calculating appropriate statistics of the intermediate result values to produce processed pixel values. A processor device typically controls the image processor to effectively perform the image processing tasks including, but not limited to, demosaicing and denoising.
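
    A minimal numpy sketch of the two procedures described above, applied to a denoising task: for each target patch (visited in raster order) the most similar patch in a small search window supplies an intermediate result, and the intermediate results are combined by accumulation and normalisation. The patch and search sizes, and the simple two-patch average, are illustrative assumptions.

      import numpy as np

      def denoise_with_similar_patches(img, patch=3, search=5):
          """Average each target patch with its most similar neighbour patch,
          then combine the overlapping intermediate results per pixel."""
          H, W = img.shape
          acc = np.zeros((H, W))
          cnt = np.zeros((H, W))
          for y in range(H - patch + 1):              # raster order of target patches
              for x in range(W - patch + 1):
                  tgt = img[y:y + patch, x:x + patch]
                  best, best_d = tgt, np.inf
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          yy, xx = y + dy, x + dx
                          if (dy, dx) == (0, 0) or yy < 0 or xx < 0 \
                                  or yy + patch > H or xx + patch > W:
                              continue
                          cand = img[yy:yy + patch, xx:xx + patch]
                          d = np.sum((cand - tgt) ** 2)
                          if d < best_d:
                              best, best_d = cand, d
                  intermediate = 0.5 * (tgt + best)   # intermediate result values
                  acc[y:y + patch, x:x + patch] += intermediate
                  cnt[y:y + patch, x:x + patch] += 1.0
          return acc / np.maximum(cnt, 1.0)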

  • Bioinspired System for Image Processing

    A method for digital image processing is bioinspired and includes an architecture that emulates the functions of photoreceptors, horizontal cells, bipolar cells and ganglion cells of a primate retina based on an image as input. The method detects edges and properties of the surfaces present in the digital image. The output is a data set that includes photoreceptor emulators that emulate photoreceptor cells and are connected to the data input. Each emulator includes a cellular base structure with a modulated data input, a calculation center to process the modulated data and an output of the data processed by the calculation center, and the emulators form a virtual retina in which each emulator is parameterized.

  • MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE DIAGNOSIS APPARATUS

    A medical image processing apparatus comprises: an acquiring part configured to acquire a morphological image that is formed by a first apparatus and shows the morphology of an organ of an object, and a functional image that is formed by a second apparatus different from the first apparatus and shows the state of the organ; a display; and a processor configured to cause the display to display a synthetic image based on the morphological image and the functional image.

  • IMAGE PROCESSING APPARATUS AND A DOCUMENT SCANNING SYSTEM HAVING THE SAME

    An image processing apparatus according to the present invention makes it possible to obtain a cutout image by directly specifying an area with a pointer such as a finger on the paper surface of a document such as a book. The present invention is useful as an image processing apparatus and as a document scanning system having the same, the image processing apparatus processing an image obtained by scanning the paper surface of such a document.

  • THREE-DIMENSIONAL VIDEO PROCESSING DEVICE FOR GENERATING BACKLIGHT CONTROL SIGNAL TO REDUCE CROSSTALK, AND RELATED THREE-DIMENSIONAL VIDEO SYSTEM USING BACKLIGHT CONTROL AND CONTROL CIRCUIT

    A three-dimensional (3D) video processing device capable of avoiding crosstalk between adjacent frames includes a video processing circuit and a control circuit. The video processing circuit is configured to generate a 3D video signal having a first frame timing. The 3D video signal is used to control a panel to update, to thereby display 3D video frames in accordance with a second frame timing which is a delayed version of the first frame timing. The control circuit is utilized for generating a backlight control signal. A switching timing of the backlight control signal is determined according to the second frame timing.

  • VIDEO PROCESSING DEVICE, VIDEO PROCESSING METHOD, COMPUTER PROGRAM, AND DISTRIBUTION METHOD

    The present invention aims to provide a video processing device that allows notifications to be made to viewers of a program without causing annoyance. A video processing device comprises a receiver receiving video information including a subject program and flag information indicating whether the video information includes type-1 guide information for making a notification when the subject program is in 3D; a determiner determining, according to received flag information, whether the user is to be notified of the type-1 guide information; and an output controller outputting a program from the received video information, outputting type-2 guide information for making the notification when the subject program is in 3D and the determiner determines that the user is not to be notified of the type-1 guide information, and preventing output of the type-2 guide information when the determiner determines that the user is to be notified of the type-1 guide information.

  • VIDEO PROCESSING APPARATUS AND VIDEO PROCESSING METHOD

    According to one embodiment, a video processing apparatus includes a content list display module, a content register and a controller. The content list display module is configured to display a list of contents capable of being acquired via a network. The content register is configured to register contents which are acquired via any of broadcasting, a recording medium and the network and on which any processing operation such as reproduction, recording or reservation is performed. The controller is configured to display the contents registered in the content register in a display form different from that of the other contents in the list displayed by the content list display module.

  • VIDEO PROCESSING SYSTEM WITH LAYERED VIDEO CODING FOR FAST CHANNEL CHANGE AND METHODS FOR USE THEREWITH

    A video processing system includes a video encoder that encodes a video stream into an independent video layer stream and a first dependent video layer stream based on motion vector data or grayscale and color data.

  • IMAGE PROCESSING APPARATUS AND METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS

    An image processing apparatus includes an image processing unit configured to perform image processing, a storage unit configured to be capable of storing an application program installed in the image processing apparatus, a first determination unit configured to determine whether the application program had ever been installed in the image processing apparatus, and a control unit configured to selectively control the image processing unit to be operable and control the image processing unit not to operate according to the determination by the first determination unit if an error has occurred in the storage unit.

  • MEDICAL IMAGE PROCESSING SYSTEM AND A MEDICAL IMAGE PROCESSING SERVER

    In a case where communication is interrupted as a result of unforeseen circumstances, the system is intended to allow confirmation of the results of processing executed while communication is interrupted. The medical image processing system includes a server comprising a processor and a screen generator. The processor executes an application for generating and editing a medical image based on instructions from a client. The screen generator generates and delivers the screen to the client upon receiving the processing results of the application. The server comprises a disconnect detector, a screen storage, and a screen storage controller. The disconnect detector detects communication interruption with the client. The screen storage controller causes the screen storage to store the screen generated in a first period from the time when the disconnect detector detects an interruption of communication. A screen playback part delivers the screen stored in the screen storage to the client.

  • IMAGE PROCESSING APPARATUS AND METHOD OF CONTROLLING THE SAME

    An image processing apparatus includes a living body detection unit configured to detect approaching of a living body based on a detection output depending on a distance to the living body, an operation unit configured to receive an operation command from a user, a history recording unit configured to record a history of a detection output of the living body detection unit and a history of an operation performed on the operation unit, and a determination unit configured to determine a threshold value of the detection output, the threshold value being used by the living body detection unit as a determination reference value in determining whether a living body is detected, the determination of the threshold value being made based on the history recorded in the history recording unit as to the detection output of the living body detection unit and as to the operation performed on the operation unit.

  • ULTRASOUND DIAGNOSTIC APPARATUS, CONTROL METHOD, AND IMAGE PROCESSING APPARATUS

    An ultrasound diagnostic apparatus according to an embodiment includes an extracting unit, a detecting unit, and a display controlling unit. The extracting unit extracts a cervical image region that is a region including the cervical region from an ultrasonic image of a fetus obtained by transmissions and receptions of ultrasonic waves. The detecting unit detects a dorsal body surface region that is a region related to a dorsal body surface of the fetus from the ultrasonic image. The display controlling unit controls a display device to display an enlarged image of the cervical image region in a region of the ultrasonic image other than the dorsal body surface region.

  • IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

    A face detection processing unit performs a face detection process by rotating an image in increments of a predetermined angle and acquires a rotation angle at which a face is detected. An angle correction unit acquires an angle between the face and a shoulder by pattern matching and corrects the rotation angle of the image. A human-image orientation identification unit identifies a correct orientation of a human image based on the rotation angle. An image-characteristic analysis unit analyzes a frequency distribution and a brightness distribution of a non-human image. A non-human image orientation identification unit identifies the correct orientation of a non-human image based on distribution characteristics of a frequency of brightness with respect to an axis in a predetermined direction. An image-data updating unit incorporates information regarding the correct orientation in the image data.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM AND IMAGING APPARATUS

    Provided is an image processing apparatus, including an extraction color region determination unit which performs a process of determining an extraction color region including at least a partial region in an image using color information of an input image signal, and an image processing unit which performs image processing on the extraction color region of the input image signal determined by the extraction color region determination unit and/or the remaining region of the input image signal excluding the extraction color region, to obtain an output image signal.

  • IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD, LEARNING APPARATUS AND LEARNING METHOD, PROGRAM, AND RECORDING MEDIUM

    A predictive signal processing unit calculates a pixel value of a luminance component of a pixel of interest by a calculation of a predictive coefficient for a luminance component and a luminance prediction tap. A predictive signal processing unit calculates a pixel value of a chrominance component of a pixel of interest by a calculation of a predictive coefficient for a chrominance component, which is higher in noise reduction effect than the predictive coefficient for the luminance component, and a chrominance prediction tap. For example, the present technology can be applied to an image processing apparatus.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, STORAGE MEDIUM, AND LEARNING APPARATUS

    A prediction calculation unit calculates a pixel value of a pixel of interest for each color component by a calculation of a learned predictive coefficient and a predictive tap, and outputs an output image including the pixel value of the pixel of interest of each color component. For example, the present technology can be applied to an image processing apparatus.

  • LEARNING APPARATUS AND METHOD, IMAGE PROCESSING APPARATUS AND METHOD, PROGRAM, AND RECORDING MEDIUM

    There is provided an image processing apparatus including a model-based processing unit that executes model-based processing for converting resolution and converting an image on the basis of a camera model and a predetermined model having aligning, with respect to a high-resolution image output one frame before, and a prediction operation unit that performs a prediction operation on a pixel value of a high-resolution image to be output, on the basis of parameters stored in advance, an observed low-resolution image that is an input low-resolution image, and an image obtained by executing the model-based processing.

  • IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

    An image of a prescribed frame of images of respective frames is set as a target image, and an area including a prescribed pattern is detected from the target image as a specific area. An image other than the target image is set as a non-target image, and the specific area in the non-target image is predicted. The images of the respective frames are encoded so that the specific area is encoded to have higher image quality than an area other than the specific area. In encoding, the images of the respective frames are encoded so that the specific area in the non-target image is not referred to from another frame.

  • IMAGE PROCESSING DEVICE AND METHOD

    An image processing device and method enable improvement in encoding efficiency. A plane approximation unit uses the pixel values themselves of a block to be processed to obtain the parameters of a function representing a plane approximating the pixel values. A plane generating unit obtains pixel values on a plane represented by the supplied plane parameters. A prediction encoding unit predicts the values of the plane parameters and obtains the difference between the prediction values and the actual plane parameter values, thereby reducing the data amount thereof. The entropy encoding unit further performs entropy encoding of the encoded plane parameters. The encoded plane parameters are supplied to the decoding side.
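
    As a loose illustration of plane approximation over a block (a sketch under assumed details, not the patent's encoder), the parameters of z = a*x + b*y + c can be fitted to the block's pixel values by least squares, after which only the differences from predicted parameters would be signalled:

        import numpy as np

        def fit_plane(block):
            # Fit z = a*x + b*y + c to the pixel values of a 2-D block by least squares.
            h, w = block.shape
            ys, xs = np.mgrid[0:h, 0:w]
            A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
            params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
            return params  # (a, b, c)

        def plane_residual(params, predicted_params):
            # Only the difference from a prediction (e.g. a neighbouring block's plane)
            # would be signalled, reducing the data amount.
            return np.asarray(params) - np.asarray(predicted_params)

        block = np.arange(16, dtype=float).reshape(4, 4)   # exact plane z = x + 4*y
        a, b, c = fit_plane(block)
        print(plane_residual((a, b, c), (1.0, 4.0, 0.0)))  # ~zero residual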

  • STEREOSCOPIC VIDEO PROCESSING APPARATUS, METHOD, AND PROGRAM

    The invention relates to a stereoscopic video processing apparatus, method, and program that can reduce visual fatigue on watching a stereoscopic video. A scene change detecting unit (61) accumulates, in an internal buffer or the like, frames corresponding to a stereoscopic video for three seconds extracted from a stereoscopic video signal, and specifies a scene change frame. A parallax adjusting unit (62) acquires a frame that precedes the specified scene change frame by three seconds and a frame that lags behind the scene change frame by three seconds, based on a frame number for the scene change frame specified by the scene change detecting unit (61), and calculates a parallax coefficient α by specifying a maximum parallax value in the two frames. Thus, the value of the maximum parallax of a parallax plane is adjusted by using the parallax coefficient.

  • VIDEO PROCESSING METHOD AND APPARATUS FOR USE WITH A SEQUENCE OF STEREOSCOPIC IMAGES

    To generate a warning that a stereoscopic image sequence has been synthesised from a 2D image sequence, a video processor correlates left-eye image data and right-eye image data to identify any sustained temporal offset between the left-eye and right-eye image data. A measure of sustained correlation between a measured spatial distribution of horizontal disparity and a spatial model can also be used to generate the warning.
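
    A minimal sketch of the underlying idea (the similarity score and offset range are assumptions, not the patent's method): correlate the left-eye sequence against time-shifted right-eye frames and flag a sustained non-zero best offset as evidence of 2D-to-3D synthesis:

        import numpy as np

        def best_temporal_offset(left, right, max_offset=5):
            # left, right: arrays of shape (frames, H, W), assumed longer than max_offset.
            # Returns the temporal shift of the right-eye sequence that best matches
            # the left-eye sequence (negative mean squared error used as similarity).
            scores = {}
            for d in range(-max_offset, max_offset + 1):
                if d >= 0:
                    l, r = left[d:], right[:len(right) - d]
                else:
                    l, r = left[:len(left) + d], right[-d:]
                if len(l) == 0:
                    continue
                scores[d] = -np.mean((l.astype(float) - r.astype(float)) ** 2)
            return max(scores, key=scores.get)  # sustained non-zero offset -> warning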

  • COLOR IMAGE OR VIDEO PROCESSING

    A method of color masking an image or video includes reading color values of an image sample of the image or video and a corresponding change of an attribute of the image sample. Based on the color values of the image sample, the change in the image sample attribute is mapped to a change in color components of the image sample that is equivalent to the change in the image sample attribute, yet reduces visibility of the change in the image sample attribute for the specific color values of the image sample.

  • PERFORMING VIDEO PROCESSING FOR FRAME SEQUENTIAL DISPLAY FOR ALTERNATELY DISPLAYING RIGHT AND LEFT IMAGES OF STEREOSCOPIC VIDEO SIGNALS

    The instant application describes a stereoscopic video processing system that includes an output image generator configured to generate interpolation frames in interpolation phases using the frames of the input video signal and the motion vector and output the frames of the input video signal or the interpolation frames as the frames of an output video signal, and an output controller configured to (i) determine whether frames of the input video signal include image regions with motion based on the motion vector, (ii) control the output image generator to output the interpolation frames upon determining the frames of the input video signal include the image regions with motion, and (iii) control the output image generator to output the frames of the input video signal upon determining the frames of the input video signal do not include image regions with motion.

  • VIDEO PROCESSING APPARATUS AND CONTROLLING METHOD FOR SAME

    A video processing apparatus that can be connected to a video playback apparatus includes an input unit configured to input video data from the video playback apparatus, an interpolation unit configured to generate interpolated frame image data of input video data, and an output unit configured to output the interpolated frame image data as interpolated video data, wherein the interpolation unit generates different interpolated frame image data according to a playback mode of the video playback apparatus.

  • Method and System for Power-Aware Motion Estimation for Video Processing

    Methods and systems for power-aware motion estimation for video processing are disclosed. Aspects of one method may include estimating motion for video data by block matching reduced resolution blocks of video data to generate an initial motion vector. The initial motion vector and motion from a previous frame may be used to generate a final motion vector for the block for the present frame using an iterative algorithm. The motion estimation may be dynamically enabled and/or disabled based on content of the video data, available power to a mobile terminal, and/or a user input. The iterations used to generate the final motion vector may be based on content of the video data, available power to a mobile terminal, and/or a user input.
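
    A rough sketch of the reduced-resolution block-matching step (block size, search range, and pooling are illustrative assumptions; the iterative refinement and power-aware control are not reproduced):

        import numpy as np

        def downscale(frame, factor=2):
            # Average-pool the frame to reduce the block-matching search cost.
            h, w = frame.shape
            h, w = h - h % factor, w - w % factor
            return frame[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

        def initial_motion_vector(cur, ref, top, left, size=8, search=4):
            # Exhaustive block matching on downscaled frames gives a coarse vector
            # that later iterations could refine on the full-resolution frames.
            cur_s, ref_s = downscale(cur), downscale(ref)
            block = cur_s[top:top + size, left:left + size]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + size > ref_s.shape[0] or x + size > ref_s.shape[1]:
                        continue
                    cost = np.abs(block - ref_s[y:y + size, x:x + size]).sum()
                    if cost < best:
                        best, best_mv = cost, (dy, dx)
            return best_mv  # in downscaled coordinates; scale up for the full frame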

  • VIDEO PROCESSOR AND VIDEO PROCESSING METHOD

    According to one embodiment, a video processor includes: a server function module configured to function as a server to provide data through a network; a renderer function module configured to function as a renderer to control displaying of data provided by another server in the network; a memory configured to store therein data that can be provided to the renderer function module; a receiver configured to receive, from an external device, identification information used to identify a server providing corresponding data; a determination module configured to determine whether the server identified by using the identification information is the server of the video processor; and a display controller configured to control displaying of the data stored in the memory if it is determined that the server identified by using the identification information is the server of the video processor.

  • VIDEO PROCESSING APPARATUS AND VIDEO PROCESSING METHOD

    One embodiment provides a video processing apparatus including: a sensor module configured to recognize a presence of a person; a video processor configured to output demo video relating to a function of the video processing apparatus; and a controller configured to control the video processor so that it starts outputting the demo video when the sensor module recognizes the presence of the person.

  • Video Processing Apparatus and Method for Extending the Vertical Blanking Interval

    A video processing apparatus is provided. A first scaling module receives original images according to an original pixel clock and performs adjustments on the original images according to a first scaling ratio to generate first scaled images. A frame buffer buffers the first scaled images. A controller controls the frame buffer to receive the first scaled images according to a first pixel clock and output the first scaled images according to a second pixel clock. A second scaling module receives the first scaled images and performs adjustments on the first scaled images according to a second scaling ratio to generate second scaled images. A length of a vertical blanking interval of the second scaled images is longer than a length of a vertical blanking interval of the original images.

  • EXPANDABLE MULTI-CORE TELECOMMUNICATION AND VIDEO PROCESSING APPARATUS

    An expandable multi-core telecommunication and video processing apparatus includes a primary wireless telecommunications device having a microprocessor that can be programmed for running a wide range of software applications and includes a primary, or main, viewer touch screen interface and a plurality of ports for receiving one or more video core processors. Each video core processor is removably connectable to a port located along a surface of the primary telecommunications device for permitting a plurality of individual videos which can be interfaced by a user. The individual videos displayed by each connected video core processor can act in concert with, or independently of, the main, or primary, touch screen interface which is located on a front surface of the primary telecommunications device. The primary telecommunications device further includes a detachable storage bay for retaining video core processors when not connected with the primary telecommunications device. The primary telecommunications device and the video core processors are, preferably, each connectable to a docking station, which can download data from either a video core processor or the primary telecommunications device. The docking station can be connected to a personal computer.

  • 3-D VIDEO PROCESSING DEVICE AND 3-D VIDEO PROCESSING METHOD

    A 3D image processing device capable of processing a 3D image signal is provided, the 3D image signal including a left eye image signal and a right eye image signal and realizing a 3D display. The 3D image processing device includes: a parallax detecting unit operable to receive the left eye image signal and the right eye image signal, detect a parallax of an object within the 3D image, and output the detected parallax as parallax information; and an image processing unit operable to perform, based on the parallax information, a predetermined image processing to an image region of an object of at least one of the left eye image signal and the right eye image signal, the object having a parallax within a predetermined range.

  • IMAGE PROCESSING APPARATUS HAVING TOUCH PANEL

    An image processing apparatus includes an operation panel as an example of a touch panel and a display device, as well as a CPU as an example of a processing unit for performing processing based on a contact. The CPU includes a first identifying unit for identifying a file to be processed, a second identifying unit for identifying an operation to be executed, a determination unit for determining whether or not the combination of the file and operation as identified is appropriate, and a display unit for displaying a determination result. In the case where one of the identifying units previously detects a corresponding gesture to identify the file or the operation, and when a gesture corresponding to the other identifying unit is detected next, then the determination result is displayed on the display device before identification of the file or the operation is completed by the gesture.

  • IMAGE PROCESSING APPARATUS, COMMUNICATION METHOD THEREFOR, AND RECORDING MEDIUM

    An image processing apparatus configured to support a power-saving mode, which allows achieving low power consumption while keeping an idle connection without communication between the image processing apparatus and a communication device in a network environment, comprises: a first judgment portion which judges whether or not the communication device supports the power-saving mode; and a communicator which establishes a connection to the communication device at a first communication rate if the first judgment portion judges that the communication device does not support the power-saving mode, and at a second communication rate which is faster than the first communication rate if the first judgment portion judges that the communication device supports the power-saving mode.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM

    An image processing apparatus, an image processing method, and a computer program, visually remove low-frequency noise contained in image data. Image data containing low-frequency noise is input from an input terminal. A window unit designates a window made up of a pixel of interest, and its surrounding pixels. A pixel selector selects a selected pixel to be compared with the pixel of interest from the window, and a pixel value determination unit determines a new pixel value of the pixel of interest on the basis of the pixel values of the selected pixel and pixel of interest. New image data is generated by substituting the pixel value of the pixel of interest by the new pixel value.

  • IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

    A deblocking filter 24 performs filtering of decoded image data obtained by decoding image data encoded for each block, so as to remove block distortion. If at least one of block sizes on adjacent sides of two adjacent blocks is extended larger than a predetermined block size, a filter setting unit 41 sets the tap length to an extended length to increase the strength of distortion removal, or sets a filtering object pixel range to an extended range. When a macroblock having an extended size is used, the degree of smoothing is increased, and pixels including those distant from the block boundary are subjected to filtering. Consequently, even when various block sizes are employed or when blocks of extended sizes are used, images of high image quality can be achieved.

  • IMAGE PROCESSING METHOD, ENCODING DEVICE, DECODING DEVICE, AND IMAGE PROCESSING APPARATUS

    There is provided an image processing method that includes: separating an image taken at a predetermined frame rate into a first frame and at least one second frame other than the first frame at intervals of 1/n, where n is an integer of 2 or larger; calculating a low-frequency-component difference between the separated at least one second frame and first frame; performing signal processing designated by a user on the first frame; decompressing, using a low-frequency component in the first frame being subjected to the signal processing and the low-frequency-component difference, a low-frequency component in the at least one second frame being approximately subjected to the signal processing; and decompressing, using the decompressed low-frequency component in the at least one second frame and a high-frequency component therein, the at least one second frame being approximately subjected to the signal processing.

  • IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

    Disclosed herein is an image processing apparatus, including: a band information acquisition unit configured to acquire band information of each of a plurality of blocks acquired by dividing a screen on the basis of input image data; and a domain separation unit configured to separate a screen into a plurality of types of domains on the basis of the band information of each of the plurality of blocks acquired by the band information acquisition unit. The apparatus further includes a processing force computation block configured to obtain a processing force for each of the plurality of types of screen domains obtained by the domain separation unit; and an image processing block configured to execute predetermined image processing on the input image data for each of the plurality of types of screen domains separated by the domain separation unit with the processing force obtained by the processing force computation block.

  • IMAGE PROCESSING DEVICE IDENTIFYING REGION IN IMAGE AS ONE OF UNIFORM REGION AND NONUNIFORM REGION

    An image processing device includes a processor, and a memory storing computer-readable instructions therein. The computer-readable instructions, when executed by the processor, cause the image processing device to perform: generating edge image data by using the original image data; calculating characteristic values for a plurality of determination regions; and identifying a determination region as a nonuniform region when the characteristic value of the determination region satisfies a prescribed criterion, and the determination region as a uniform region when the characteristic value of the determination region does not satisfy the prescribed criterion. Each of the plurality of determination regions corresponds to one of the characteristic values, represents a part of the edge image, and includes a plurality of pixels, the plurality of determination regions being different from one another, each of the characteristic values characterizing the edge strength of the corresponding determination region.
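
    A hedged sketch of the uniform/nonuniform decision (the edge operator, region size, and threshold are assumptions, not taken from the patent): compute an edge image, take a per-region edge-strength statistic, and compare it with a criterion:

        import numpy as np

        def edge_image(gray):
            # Simple gradient-magnitude edge map (the patent does not prescribe this operator).
            gy, gx = np.gradient(gray.astype(float))
            return np.hypot(gx, gy)

        def classify_regions(gray, region_size=32, threshold=8.0):
            # Label a region "nonuniform" when its mean edge strength exceeds the criterion.
            edges = edge_image(gray)
            labels = {}
            for top in range(0, gray.shape[0] - region_size + 1, region_size):
                for left in range(0, gray.shape[1] - region_size + 1, region_size):
                    strength = edges[top:top + region_size, left:left + region_size].mean()
                    labels[(top, left)] = "nonuniform" if strength > threshold else "uniform"
            return labels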

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND RECORDING MEDIUM

    A CPU performs a process to obtain image data, and defines a predetermined range in the obtained image data. Further, the CPU extracts outlines of images falling within the predetermined range, and selects a closed line among the extracted outlines. Then, an image processing unit and the CPU convert an image surrounded by the selected closed line into a painterly rendering image.

  • IMAGE PROCESSING DEVICE IDENTIFYING ATTRIBUTE OF REGION INCLUDED IN IMAGE

    An image processing device performs: preparing image data representing an image, the image including a target region consisting of a plurality of target pixels, each of the plurality of target pixels having a pixel value; classifying each of a plurality of target pixels as one of an object pixel and a background pixel other than the object pixel, the object pixel constituting an object represented in the target region; determining whether or not the target region satisfies a first condition related to a relationship between the object pixel and the background pixel to make a first determination result; and judging whether or not the target region is a letter region representing at least one letter based on the first determination result.

  • IMAGE PROCESSING DEVICE DETERMINING ATTRIBUTES OF REGIONS

    An image processing device includes a processor; and a memory storing computer-readable instructions therein. The computer-readable instructions, when executed by the processor, cause the image processing device to perform: preparing image data representing an image; identifying a first region in the image and a second region disposed inside of the first region; determining an attribute of the first region to be one of a plurality of attributes; and determining, when the attribute of the first region is determined to be the first type attribute, an attribute of the second region by using the attribute of the first region. The plurality of attributes includes a first type attribute indicating one of photo and drawing.

  • IMAGE PROCESSING APPARATUS AND METHOD

    Provided is an image processing apparatus which includes a histogram generating unit that generates a histogram representing an appearance frequency distribution of a pixel value of an input image, and a quantization table generating unit that generates a quantization table including table information used to perform transform of a bit depth of the pixel value of the input image and table information used to allocate an effective pixel in which an appearance frequency in the histogram generated by the histogram generating unit is not zero to an index value after bit depth transform so that effective pixels are allocated to index values as equally as possible.
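
    One way to read the "allocate effective values as equally as possible" step, sketched with assumed details (8-bit input, 4-bit output index):

        import numpy as np

        def build_quantization_table(image, out_bits=4):
            # Map only the pixel values that actually occur ("effective" values)
            # onto 2**out_bits indices as evenly as possible.
            hist = np.bincount(image.ravel(), minlength=256)
            effective = np.flatnonzero(hist)          # values with non-zero frequency
            n_indices = 2 ** out_bits
            table = np.zeros(256, dtype=np.uint8)
            # Split the effective values into n_indices nearly equal groups.
            for idx, group in enumerate(np.array_split(effective, n_indices)):
                table[group] = idx
            return table

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        indices = build_quantization_table(img)[img]  # bit-depth-reduced image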

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE RECORDING DEVICE

    An image processing apparatus includes a distance information calculator that calculates distance information corresponding to a distance to an imaging object at each of portions in an image; a feature data calculator that calculates feature data at each portion in the image; a feature data distribution calculator that calculates a distribution of the feature data in each of regions that are classified according to the distance information in the image; a reliability determining unit that determines the reliability of the distribution of the feature data in each of the regions; and a discrimination criterion generator that generates, for each of the regions, a discrimination criterion for discriminating a specific region in the image based on a determination result of the reliability and the distribution of the feature data in each of the regions.

  • IMAGE PROCESSING APPARATUS FOR FUNDUS IMAGE, IMAGE PROCESSING METHOD FOR FUNDUS IMAGE, AND PROGRAM MEDIUM

    An image processing apparatus includes a selection unit configured to select either first color tone conversion processing or second color tone conversion processing having a ratio of the red wavelength component set lower than either the blue or green wavelength components with respect to the first color tone conversion processing, and a color tone conversion unit configured to convert a color of a fundus image by the selected color tone conversion processing.

  • METHOD AND SYSTEM FOR VIDEO PROCESSING TO DETERMINE DIGITAL PULSE RECOGNITION TONES

    In one aspect, the present disclosure relates to a method for isolating a broadcast digital pulse recognition tone of a beacon light source in a digital video sequence. In some embodiments, the method includes receiving a digital video sequence of a scene, the digital video sequence including a sequence of frames and the scene including both modulated illumination broadcast by a beacon light source and un-modulated illumination, calculating a background value of the digital video sequence, the background value including a portion of the digital video sequence corresponding to the un-modulated illumination of the scene, subtracting the background value of the digital video sequence to obtain an isolated digital video sequence of the modulated illumination of the scene, calculating a frequency content of a frame of the isolated digital video sequence, and determining a particular tone broadcast by the beacon light source based on the frequency content.
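
    A simplified sketch of the isolate-and-analyze flow (the frame rate and spatial averaging are assumptions made for illustration): subtract a temporal background from each pixel, then inspect the temporal frequency content:

        import numpy as np

        def dominant_tone(frames, frame_rate=120.0):
            # frames: array of shape (N, H, W) containing the scene with a modulated beacon.
            frames = frames.astype(float)
            background = frames.mean(axis=0)          # un-modulated part of the scene
            isolated = frames - background            # keep only the modulated light
            signal = isolated.mean(axis=(1, 2))       # one sample per frame
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / frame_rate)
            spectrum[0] = 0.0                         # ignore any residual DC
            return freqs[np.argmax(spectrum)]         # tone broadcast by the beacon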

  • 3D VIDEO PROCESSING UNIT

    The 3D video processing unit combines video feeds from two unsynchronized video sources, such as left and right video cameras, in real-time, to generate a 3D image for display on a video monitor. The processing unit can also optionally receive video data from a third video source and use that data to generate a background image visible on all or a selected portion or portions of the video monitor. An alpha data generator inspects the video data held within respective buffer circuits associated with the left and right channels and generates an alpha data value for each pixel. These alpha data values are used within an alpha blending mixer to control whether a pixel is displayed or suppressed. Synchronization of the unsynchronized video sources occurs within the processing unit after alpha data values have been generated for each of the left and right channels.

  • Apparatus, System and Method for Recording a Multi-View Video and Processing Pictures, and Decoding Method

    An apparatus, a system, and a method for recording a multi-view video and processing images, and a decoding method are disclosed. The apparatus for recording a multi-view video and processing images includes a video recording unit, a collecting unit, a selecting unit, and an encoding unit, which are connected in sequence. The video recording unit is configured to record a video including recording a multi-view video, and output 3D video data. The collecting unit is configured to collect 3D video data output by the video recording unit. The selecting unit is configured to select at least one channel of data among the 3D video data. The encoding unit is configured to encode data including encoding the 3D video data selected by the selecting unit.

  • Method and terminal for video processing

    The disclosure provides a method and a terminal for video processing. The method includes: when a real-time video image receiving terminal plays a real-time image picture, a shortcut for image pre-capture is set; if a user is interested in the picture, the user can click the shortcut for image pre-capture; when receiving an image pre-capture instruction, the terminal suspends the playing of the real-time picture but plays the pictures of a period before the moment of playing the real-time picture, and then the user can perform image capture on the pictures played back. With the disclosure, when a user captures an image, the video pictures being played in the terminal are pre-stored, and the video pictures of a period before the current time point are presented to the user by slow playback; thus the user can capture the image with ease and does not miss the image that the user wants to capture due to a slow response.

  • VIDEO PROCESSING DEVICE

    A video processing device, which can output stereoscopic video information that enables stereoscopic viewing to a video display device, includes an obtaining unit that obtains the stereoscopic video information, a superimposing unit that superimposes additional video information on the stereoscopic video information, and a transmitting unit that transmits parallax information of the additional video information to the video display device, with the parallax information being associated with the stereoscopic video information on which the additional video information is superimposed.

  • INFORMATION PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, USER DEVICE, CONTROL METHOD, AND STORAGE MEDIUM

    A mediation service accepts a coordination instruction for coordinating a web application server with a coordination device from a web browser, generates a script to be authenticated by an authentication method corresponding to the server, and transmits the generated script to the coordination destination service providing system indicated by the coordination instruction. The web browser transmits authentication information or an authentication token, which is obtained in response to an input operation on an authentication information input screen displayed by execution of the script, to the coordination device. Then, the coordination device receives and saves the authentication information or the authentication token.

  • IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF FOR SELECTING AND DISPLAYING RELATED IMAGE CONTENT OF PRIMARY IMAGE CONTENT

    There is provided an image processing apparatus which comprises a receiver which receives primary image content; a communicator which communicates with at least one supply source which supplies related image content of the primary image content; a signal processor which processes and outputs the primary image content; and a controller which controls the communicator to request the supply source to supply the related image content, and controls the signal processor to process and play the related image content supplied from the supply source in response to the request if a user selects a key to play the related image content of the primary image content while the primary image content received by the receiver is processed by the signal processor.

  • MEMORY CONTROL DEVICE, MEMORY CONTROL METHOD, DATA PROCESSING DEVICE, AND IMAGE PROCESSING SYSTEM

    A memory control device that transfers data from an external memory to a data processing unit having plural processing mechanisms, includes an absolute address storage unit that stores an absolute address serving as a common reference value in a given data transfer period; a differential address storage unit that stores plural differential addresses therein; a differential address selection unit that selects any one of the plurality of differential addresses in a given order; a memory address generation unit that combines any differential address selected by the differential address selection unit with the absolute address to generate a memory address; and a data transfer unit that inputs the memory address generated by the memory address generation unit to the external memory, reads the data from the memory address, and transfers the data to the data processing unit.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

    An image processing apparatus includes a display control unit displaying a segment map, in which segment-representing images representing video segments obtained by dividing content into video segments, which are collections of one or more temporally continuous frames, are arranged on a display apparatus, and a reproduction control unit controlling reproduction of the video segments corresponding to the segment-representing image according to a reproduction operation of a user requesting reproduction with respect to the segment-representing image, in which, when, during reproduction of a video segment of content, a reproduction operation is performed with respect to a segment-representing image of another video segment of the content, the reproduction control unit reproduces the other video segment corresponding to the segment-representing image for which the reproduction operation was performed, while still maintaining the reproduction of the video segment being reproduced.

  • IMAGE PROCESSING APPARATUS

    [Problem] An object of the present invention is to provide an image processing apparatus used to aid acquisition of an optimal low-resolution image set for super-resolution processing. [Means for solving the problem] The image processing apparatus of the present invention comprises a processing unit for computing displacement amounts between a basis image and each reference image, a processing unit for generating a plurality of deformed images based on the displacement amounts, the basis image and a plurality of reference images, a processing unit for setting a threshold of a parameter used at the time of image information selection, a processing unit for selecting image information used in the super-resolution processing from the reference image by using the threshold of the parameter, a processing unit for generating composed images and weighted images based on the basis image, the displacement amounts and the image information, a processing unit for generating high-resolution grid images by dividing the composed image by the weighted image, a processing unit for generating simplified interpolation images based on high-resolution grid images, a processing unit for generating an image characteristic amount, a processing unit for displaying the image characteristic amount, and a control unit that respectively controls a processing concerning the image input, a processing concerning the basis image selection, a processing concerning the reference image selection and a processing concerning the threshold setting of the parameter as necessary.

  • VIDEO PROCESSING DEVICE FOR EMBEDDING AUTHORED METADATA AND METHODS FOR USE THEREWITH

    A video processing device includes a metadata authoring device that generates time-coded metadata in response to content recognition data and in accordance with at least one time stamp of a video signal. A metadata association device generates a processed video signal from the video signal, wherein the processed video signal includes the time-coded metadata.

  • Aerial Survey Video Processing

    An aerial survey video processing apparatus for analyzing aerial survey video. The apparatus includes a feature tracking section adapted to associate identified features with items in a list of features being tracked, based on a predicted location of the features being tracked. The tracking section updates the list of features being tracked with the location of the associated identified features.

  • VIDEO PROCESSING DEVICE FOR EMBEDDING TIME-CODED METADATA AND METHODS FOR USE THEREWITH

    A video processing device includes a content analyzer that receives a video signal and generates content recognition data based on the video signal, wherein the content recognition data is associated with at least one timestamp included in the video signal. A metadata search device generates time-coded metadata in response to content recognition data and in accordance with the at least one time stamp. A metadata association device generates a processed video signal from the video signal, wherein the processed video signal includes the time-coded metadata.

  • VISUAL QUALITY MEASURE FOR REAL-TIME VIDEO PROCESSING

    A measure of visual quality of processed images relative to unprocessed images is generated in real-time. The measure of visual quality closely correlates with a human's actual perception of the processed image relative to the original image. The measure of visual quality is computed based on a measure of discrepancy (e.g., mean square errors) between the processed and unprocessed images and the variance of each image in the pixel domain or the transform domain may be determined. If the processed image is unavailable, a prediction of the processed image may be used in place of the processed image. The prediction of a processed image may involve predicting the variance values for processed image blocks. The visual quality measure may be used in a feedback loop to improve processing or encoding.
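
    A hedged reading of such a measure (the abstract does not give the formula; the weighting below is only one plausible combination of MSE and variance):

        import numpy as np

        def visual_quality(original, processed):
            # Higher is better. Combines the discrepancy (MSE) with the per-image
            # variances; the exact weighting here is an assumption for illustration.
            o, p = original.astype(float), processed.astype(float)
            mse = np.mean((o - p) ** 2)
            return 1.0 / (1.0 + mse / (o.var() + p.var() + 1e-6))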

  • ELECTRONIC APPARATUS AND VIDEO PROCESSING METHOD

    According to one embodiment, an electronic apparatus includes a block feature amount calculator, a local contrast correction curve generator and a contrast correction module. The block feature amount calculator divides pixels in a video frame into blocks, and calculates feature amounts corresponding to the blocks using luminance values of pixels in each of the blocks. The local contrast correction curve generator generates local contrast correction curves corresponding to the blocks using the feature amounts. The contrast correction module generates a corrected video frame using the local contrast correction curves.
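
    A loose sketch of per-block contrast correction (the gamma-style curve is an assumption; the patent only states that curves are derived from block feature amounts):

        import numpy as np

        def local_contrast_correct(gray, block=32):
            # Derive a gamma-style correction curve per block from the block's mean
            # luminance (the feature amount assumed here) and apply it locally.
            out = gray.astype(float) / 255.0
            for top in range(0, gray.shape[0], block):
                for left in range(0, gray.shape[1], block):
                    tile = out[top:top + block, left:left + block]
                    mean = float(np.clip(tile.mean(), 1e-3, 0.999))
                    gamma = np.log(0.5) / np.log(mean)   # maps the block mean to 0.5
                    out[top:top + block, left:left + block] = tile ** gamma
            return (out * 255).astype(np.uint8)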

  • VIDEO PROCESSING METHOD, VIDEO PROCESSING CIRCUIT, LIQUID CRYSTAL DISPLAY, AND ELECTRONIC APPARATUS

    A video processing circuit detects a risk boundary, which is a portion of the boundary between a dark pixel and a bright pixel in an image represented by a video signal Vid-in and is determined by a tilt azimuth of liquid crystal molecules, from the boundary, and corrects a video signal corresponding to at least one of the dark pixel and the bright pixel which is contiguous to the detected risk boundary in at least one field of a plurality of fields constituting one frame such that a period in which the risk boundary is present in one frame period is shortened.

  • POWER SUPPLY CONTROL DEVICE, IMAGE PROCESSING APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING POWER SUPPLY CONTROL PROGRAM, AND IMAGE PROCESSING CONTROL DRIVER

    Provided is a power supply control device including a first power supply which is a power supply source of an operating unit and a main control unit, a second power supply which supplies minimum necessary power to create a power-saving state, a switching unit that switches to a power supply source selected from the first and second power supplies, a receiving unit that receives an external request signal, a determining unit that determines whether the external request signal is a switching request signal or a recovery request signal, a switching controller that switches the power supply source to the first power supply when a power-saving state is created, and the external request signal is the switching request signal, and a recovery unit that recovers at least the main control unit when the recovery request signal is received within a predetermined period after the power supply source is switched.

  • VIEWING IMAGES FOR REVIEW IN MERGED IMAGE FORM AND WITH PARAMETER-BASED IMAGE PROCESSING

    A method involves on-line viewing of a first article through a linking node for virtual merging on another structure. A particular application of the invention is directed to on-line apparel shopping involving a matching scheme using codes provided with images to be merged. For example, on-line viewing of one article, such as clothing, on another structure, includes creating an item from image-data corresponding to an article selected by an on-line viewer from an on-line viewer site with an image of a structure selected by the on-line viewer, and indicating whether the article and the structure satisfy a category-matching criterion. In certain embodiments, the articles are represented by (search) parameters that permit organizational advantages.

  • IMAGE PROCESSING ARCHITECTURES AND METHODS

    Cell phones and other portable devices are equipped with a variety of technologies by which existing functionality is improved, and new functionality is provided. Some aspects relate to imaging architectures, in which a cell phone's image sensor is one in a chain of stages that successively act on instructions/data, to capture and later process imagery. Other aspects relate to distribution of processing tasks between the device and remote resources ("the cloud"). Elemental image processing, such as filtering and edge detection--and even some simpler template matching operations--may be performed on the cell phone. Other operations are referred out to remote service providers. The remote service providers can be identified using techniques such as a reverse auction, through which they compete for processing tasks. Other aspects of the disclosed technologies relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Still others concern metadata generation, processing, and representation. A great number of other features and arrangements are also detailed.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING CONTROL DRIVER, AND IMAGE PROCESSING METHOD

    Provided is an image processing apparatus including a mode switching unit that selectively switches a mode between a rapid heating mode and a heat accumulating mode in a fixing unit, a receiving unit that receives an image formation request received from the outside, an extracting unit that extracts mode switching determination information at the earliest from the image formation request received by the receiving unit, a selecting unit that selects the mode based on the mode switching determination information extracted by the extracting unit, specifications of the fixing device, and a current temperature, and a switching control unit that controls the mode switching unit based on the mode selected by the selecting unit to switch the mode to the rapid heating mode or the heat accumulating mode.

  • IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM

    An image processing apparatus obtains an image signal, which is captured by an image capturing apparatus, and in which respective pixels correspond to light fluxes of different combinations of pupil regions, where the light fluxes have passed through in an imaging optical system of the image capturing apparatus, and incident directions. The image processing apparatus sets a focal length corresponding to an object to be focused, and generates a reconstructed image focused on the object of the set focal length from the image signal. The image processing apparatus generates a moving image by concatenating a plurality of reconstructed images generated in association with a plurality of different focal lengths, and outputs the moving image in association with the image signals.

  • METHOD FOR PROCESSING EDGES IN AN IMAGE AND IMAGE PROCESSING APPARATUS

    Method and apparatus for processing edges in an image are provided. The method in an embodiment includes the following steps. With respect to a cross-shaped pattern centered at a target pixel of an input image, a first-direction gradient along a first direction and a second-direction gradient along a second direction are calculated. According to the first-direction and second-direction gradients, it is determined whether to compensate the target pixel based on pixel values of a first plurality of pixels along the second direction or pixel values of a second plurality of pixels along the first direction within the cross-shaped pattern, or to output a pixel value of the target pixel.
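
    A minimal sketch of the directional decision (the comparison rule and two-tap averaging are assumptions for illustration):

        import numpy as np

        def compensate_pixel(img, y, x):
            # Compare vertical and horizontal gradients within a cross-shaped neighbourhood
            # and average along the smoother direction; otherwise keep the original value.
            up, down = float(img[y - 1, x]), float(img[y + 1, x])
            left, right = float(img[y, x - 1]), float(img[y, x + 1])
            grad_vertical = abs(up - down)
            grad_horizontal = abs(left - right)
            if grad_vertical < grad_horizontal:
                return 0.5 * (up + down)       # smoother vertically: use vertical neighbours
            if grad_horizontal < grad_vertical:
                return 0.5 * (left + right)    # smoother horizontally: use horizontal neighbours
            return float(img[y, x])            # no clear direction: output the pixel as-is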

  • Content Adaptive Video Processing

    In some embodiments, both video quality and processing speed may be traded off on the fly automatically. Thus different methods and parameters may be invoked to achieve a dynamically varying balance between speed and quality.

  • CASCADING MULTIPLE VIDEO TRANSCODERS IN A VIDEO PROCESSING SYSTEM

    A system (and a method) is disclosed for a video processing system with enhanced entropy coding performance. The system includes an entropy decoder configured to divide decoding of an input video stream into arithmetic decoding and syntax decoding. The entropy decoder includes an arithmetic decoding module, a syntax decoding module, a memory management module and a memory buffer connecting the two decoding modules. The arithmetic decoding module is configured to decode the input video stream into multiple bins of decoded input video stream and the syntax decoding module is configured to decode the bins of the arithmetically decoded input video stream into one or more syntax elements. The memory management module uses the memory buffer to accelerate the coding performance of arithmetic decoding and syntax decoding. The system also includes a corresponding entropy encoder configured to encode a video stream with improved coding performance.

  • VIDEO PROCESSING CIRCUIT, VIDEO PROCESSING METHOD, LIQUID CRYSTAL DISPLAY DEVICE, AND ELECTRONIC APPARATUS

    A video processing circuit includes a boundary detection unit which detects, in a normally black mode, a boundary between a first pixel in which an application voltage designated by a video signal Vid-in is lower than a first voltage and a second pixel in which the application voltage exceeds a second voltage that is higher than the first voltage; and a correction unit which corrects the video signal designating the application voltage to a liquid crystal element corresponding to the first pixel which comes into contact with a boundary detected by the boundary detection unit, so that a correction voltage which is higher than the application voltage is designated in a part of one frame period, and a correction voltage which is lower than the application voltage is designated in the other periods of the one frame period.

  • 3D VIDEO PROCESSING

    A method, an apparatus, and a non-transitory computer readable medium for performing 2D to 3D conversion are presented. A 2D input source is extracted into left and right 3D images. Motion vectors are calculated for the left and right 3D images. Frame rate conversion is performed on the left 3D image and the right 3D image, using the respective calculated motion vectors, to produce motion compensated left and right 3D images. The left and right 3D images and the motion compensated left and right 3D images are reordered for display.

  • VIDEO PROCESSING SYSTEM WITH LAYERED VIDEO CODING FOR FAST CHANNEL CHANGE AND METHODS FOR USE THEREWITH

    A video processing system includes a video encoder that encodes a video stream into an independent video layer stream and a first dependent video layer stream based on a motion vector data or grayscale and color data.

  • INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, STORAGE MEDIUM, AND IMAGE PROCESSING APPARATUS

    A user credential sharing mechanism is provided which can suitably implement a single sign-on function while preventing illicit accesses caused by accidental matches of authentication data in a mixed environment containing both environments suitable for use of a single sign-on function and unsuitable environments. To accomplish this, when an information processing apparatus of this invention receives, from a user, an access request instruction to an external apparatus connected to be able to communicate with the information processing apparatus, if an authentication protocol related to user credentials generated at the time of a login operation is one that can limit a security domain, the apparatus accesses the external apparatus using the user credentials, and if that authentication protocol is one that cannot limit a security domain, the apparatus prompts the user to input an account accessible to the external apparatus.

  • IMAGE PROCESSING APPARATUS AND METHOD, AND COMPUTER PROGRAM PRODUCT

    A control unit, method and computer program product cooperate to provide a controllable depth of display of at least a part of a graphical user interface. Moreover, the control unit includes a control circuit that controls a depth display of an icon, which may be a user-selectable icon, as part of the graphical user interface. The control circuit increases the depth of display of the icon when an object is detected as approaching the display. In this way, a user is provided with visual feedback when the user is interacting with the graphical user interface.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

    There is provided an image processing apparatus including a vector detection unit which detects flow vectors of pixels in an inputted image, a vector-coherency calculation unit which calculates vector coherency based on the flow vectors detected by the vector detection unit, a deformation-characteristic computation unit which computes a deformation characteristic by using the vector coherency calculated by the vector-coherency calculation unit, the deformation characteristic being used for deforming a tap shape of a filter used for each of the pixels, and a painterly conversion unit which converts the inputted image based on the deformation characteristic computed by the deformation-characteristic computation unit.

  • Automatic Adaptation to Image Processing Pipeline

    Techniques are disclosed relating to generating generic labels, translating generic labels to image pipeline-specific labels, and automatically adjusting images. In one embodiment, generic labels may be generated. Generic algorithm parameters may be generated based on training a regression algorithm with the generic labels. The generic labels may be translated to pipeline-specific labels, which may be usable to automatically adjust an image.

  • STEREOSCOPIC VIDEO PROCESSOR, RECORDING MEDIUM FOR STEREOSCOPIC VIDEO PROCESSING PROGRAM, STEREOSCOPIC IMAGING DEVICE AND STEREOSCOPIC VIDEO PROCESSING METHOD

    A stereoscopic video reproduction device creates and records a stereoscopic video file having, as attached information of the stereoscopic video in advance, information required at the time of adjusting a parallax amount of the stereoscopic video such that binocular fusion is possible regardless of a screen size of a stereoscopic display. The parallax amount of each feature point is calculated and at least the maximum parallax amount on a distant view side is acquired. Further, the largest GOP maximum parallax amount is acquired and the GOP maximum display size is acquired in which binocular fusion is possible at the time of displaying the stereoscopic image by the right and left viewpoint images on the stereoscopic display based on this GOP maximum parallax amount. Together with the stereoscopic video, the acquired GOP maximum display size and GOP maximum parallax amount are recorded in a 3D video file as attached information.
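
    A hedged illustration of why a maximum display size follows from a maximum distant-view parallax (the 65 mm interocular distance and the divergence-avoidance rule are common stereoscopy assumptions, not details of the recorded file format):

        def max_display_width_mm(max_parallax_px, image_width_px, interocular_mm=65.0):
            # For distant-view (uncrossed) parallax, the on-screen separation should not
            # exceed the viewer's interocular distance, or the eyes would have to diverge:
            # max_parallax_px / image_width_px * display_width_mm <= interocular_mm
            return interocular_mm * image_width_px / max_parallax_px

        # Example: 30 px of distant-view parallax in a 1920 px wide image allows a
        # display up to 65 * 1920 / 30 = 4160 mm wide.
        print(max_display_width_mm(30, 1920))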

  • VIDEO PROCESSING SYSTEM FOR SCRAMBLING LAYERED VIDEO STREAMS AND METHODS FOR USE THEREWITH

    A video processing system includes a video encoder that encodes a video stream into an independent video layer stream and a first dependent video layer stream that requires the independent video layer for decoding. A scrambling module scrambles the independent video layer stream to produce a scrambled independent video layer stream and leaves the first dependent video layer stream unscrambled.

  • SYSTEM AND METHOD FOR REAL-TIME VIDEO PROCESSING FOR ALARM MONITORING SYSTEMS

    A system and method for real-time video processing for alarm monitoring systems are disclosed. A particular embodiment includes: receiving alert data from a video image analysis module, the alert data being generated from analysis of a video feed, the alert data being in a first format; translating the alert data to a second format; and causing the alert data in the second format to be communicated to an alarm monitoring system compatible with the second format.

  • VIDEO PROCESSING APPARATUS AND METHOD FOR MANAGING TRACKING OBJECT

    A video processing apparatus includes a first detection unit configured to detect that a tracking target moving in a video has split into a plurality of objects, and a determination unit configured to, when the first detection unit detects that the tracking target has split into the plurality of objects, determine a number of objects included in the tracking target before splitting of the tracking target based on a number of the plurality of objects after splitting of the tracking target.

  • IMAGE PROCESSING APPARATUS THAT OPERATES ACCORDING TO SECURITY POLICIES, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM

    An image processing apparatus which is capable of restraining operation that does not comply with security policies even in a case where security policies are changed through setting of user modes. The security policies are set in advance in the image processing apparatus. The image processing apparatus has a UI operation unit that enables operation on the image processing apparatus. When settings of the image processing apparatus are changed via the UI operation unit, it is verified whether or not the changed settings match the security policies. Operation of the image processing apparatus is restrained until it is verified that the changed settings match the security policies.

  • IMAGE PROCESSING APPARATUS, SERVER APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

    An image processing apparatus includes an update unit configured to, based on an update file related to update of firmware transferred from an information processing apparatus that communicates with the image processing apparatus, update the firmware, a character string creation unit configured to obtain apparatus information of the image processing apparatus and create a character string based on the obtained apparatus information, and an instruction unit configured to, in a case where the update unit succeeds with the update of the firmware, instruct the information processing apparatus to access a server apparatus that communicates with the information processing apparatus using location information of the server apparatus to which the character string created by the character string creation unit is attached.

  • IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

    An image processing apparatus includes: an acquisition section configured to acquire positional information specified by a user on an input image for specifying area selection; and a selected-area calculation section configured to calculate, out of a calculation result of a transformation matrix between an object plane, which is a perspective projection plane formed by an object in the input image, and an isothetic plane, an area in the object plane as the area selected by the user, using the transformation matrix of the object plane corresponding to the positional information specified by the user.

  • IMAGE PROCESSING DEVICE, IMAGE DISPLAY APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM MEDIUM

    According to an embodiment, an image processing device includes an obtaining unit, a specifying unit, a first setting unit, a second setting unit, and a processor. The obtaining unit is configured to obtain a three-dimensional image. The specifying unit is configured to, according to an inputting operation performed by a user, specify three-dimensional coordinate values in the three-dimensional image. The first setting unit is configured to set a designated area which indicates an area including the three-dimensional coordinate values. The second setting unit is configured to set a masking area indicating an area that masks the designated area when the three-dimensional image is displayed. The processor is configured to perform image processing with respect to the three-dimensional image in such a way that the masking area is displayed in a more transparent manner as compared to the display of the remaining area.

  • SHOT IMAGE PROCESSING SYSTEM, SHOT IMAGE PROCESSING METHOD, MOBILE TERMINAL, AND INFORMATION PROCESSING APPARATUS

    A shot image processing system (100) includes a mobile terminal (1) that shoots an image of a conversion target region containing a character and/or an image and displays the shot image containing the conversion target region on a display unit, and a server (2) that receives the shot image from the mobile terminal (1). The server (2) determines a specifying method for specifying a location of the conversion target region in the received shot image and transmits the determined specifying method to the mobile terminal (1). The mobile terminal (1) specifies the location of the conversion target region in the shot image based on the specifying method received from the server (2), converts the conversion target region specified in the shot image into a prescribed format, and displays a converted image obtained by the conversion on the display unit (16).

  • IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE FORMING APPARATUS

    An image processing device that is implemented on a computer is provided. The image processing device includes: an enhancement direction determining unit configured to calculate the spatial frequency of input image data and determine the frequency component at which the distribution amount of the spatial frequency is maximized; and an enhancement processing unit configured to perform an enhancement process by applying, to the input image data, an enhancement amount that varies according to the distribution amount of the spatial frequency, based on the determined frequency component.
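
    A minimal sketch of the idea, assuming a radially-binned power spectrum and an unsharp-mask style enhancement whose amount scales with the dominant band's energy share; the patented mapping from distribution amount to enhancement amount is not specified in the abstract:

    ```python
    # Hypothetical sketch: find the spatial-frequency band with the largest
    # energy and scale the enhancement (sharpening) amount with that energy.

    import numpy as np

    def box_blur(img):
        # Crude 3x3 mean filter built from shifts, to keep the sketch dependency-free.
        acc = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return acc / 9.0

    def dominant_band_fraction(img, n_bands=8):
        # Radially-binned power spectrum; share of energy in the strongest non-DC band.
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        r = np.hypot(yy, xx)
        edges = np.linspace(0.0, r.max() + 1e-6, n_bands + 1)
        energy = [spec[(r >= lo) & (r < hi)].sum() for lo, hi in zip(edges[:-1], edges[1:])]
        return max(energy[1:]) / spec.sum()

    def enhance(img, gain_scale=2.0):
        # Enhancement amount grows with the distribution amount of the dominant band.
        amount = gain_scale * dominant_band_fraction(img)
        return img + amount * (img - box_blur(img))

    img = np.random.rand(64, 64)
    print(enhance(img).shape)
    ```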

  • VIDEO PROCESSING SYSTEM AND TRANSCODER FOR USE WITH LAYERED VIDEO CODING AND METHODS FOR USE THEREWITH

    A video processing system includes a video transcoder that receives an independent video layer stream and a first dependent video layer stream that requires the independent video layer stream for decoding, the video transcoder generating a transcoded video signal based on at least one of the independent video layer stream and the first dependent video layer stream.

  • VIDEO PROCESSING SYSTEM WITH SHARED/CONFIGURABLE IN-LOOP FILTER DATA BUFFER ARCHITECTURE AND RELATED VIDEO PROCESSING METHOD THEREOF

    A video processing system includes a data buffer and a storage controller. The data buffer is shared between a plurality of in-loop filters, wherein not all of the in-loop filters comply with the same video standard. The storage controller controls data access of the data buffer, wherein, for each in-loop filter granted access to the data buffer, the data buffer stores partial data of a picture processed by that in-loop filter. Another video processing system includes a storage device and a storage controller. The storage controller adaptively determines a size of a storage space according to a tile partition setting of a picture to be processed by an in-loop filter, and controls the storage device to allocate the storage space to serve as a data buffer for storing data of the in-loop filter.
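
    A minimal sketch of the adaptive-sizing idea only, assuming one stripe of filter lines is kept per horizontal tile boundary; the actual sizing rule is not given in the abstract:

    ```python
    # Hypothetical sketch: size the in-loop-filter line buffer from the picture's
    # tile partition instead of always allocating the worst case.

    def line_buffer_bytes(pic_width, tile_rows, filter_lines=4, bytes_per_sample=1):
        # One stripe of 'filter_lines' rows is kept per horizontal tile boundary
        # (illustrative sizing rule, not the patent's).
        return (tile_rows - 1) * pic_width * filter_lines * bytes_per_sample

    print(line_buffer_bytes(3840, tile_rows=1))   # 0     - no internal boundaries
    print(line_buffer_bytes(3840, tile_rows=4))   # 46080
    ```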

  • VIDEO PROCESSING AND SIGNAL ROUTING APPARATUS FOR PROVIDING PICTURE IN A PICTURE CAPABILITIES ON AN ELECTRONIC GAMING MACHINE

    A gaming system used in a wager-based electronic gaming machine is described. The gaming system is configured to provide picture in a picture capabilities on the electronic gaming machine. In one embodiment, the gaming system can include a first gaming device and a second gaming device, where the first gaming device controls the second gaming device. The first gaming device can be configured to receive data and/or communicate with an electronic gaming machine (EGM) controller, a value input device, and a value output device. The second gaming device can be configured to receive touchscreen data from a touchscreen display, first video data from the first gaming device, and second video data from the EGM controller. Under control of the first gaming device, the first video data and second video data can be output in various sizes and locations on the touchscreen display.

  • VIDEO PROCESSING SYSTEM AND VIDEO PROCESSING METHOD, VIDEO PROCESSING APPARATUS, CONTROL METHOD OF THE APPARATUS, AND STORAGE MEDIUM STORING CONTROL PROGRAM OF THE APPARATUS

    A system of this invention is a video processing system for detecting a change of a capturing target based on a video whose capturing range changes. This video processing system includes a capturing unit that captures the video whose capturing range changes; a feature extractor that extracts a frame feature of each frame from the captured video; a feature storage that stores, for each frame, the frame feature extracted by the feature extractor; a frame searcher that, by comparing the frame feature of a newly captured frame with the frame features stored in the feature storage, searches for a stored frame whose capturing range matches that of the newly captured frame; and a change detector that detects a change of the capturing target based on a difference between the frame feature of the newly captured frame and the frame feature of the frame found by the frame searcher. With this arrangement, it is possible to detect a change of a capturing target even if the capturing range of the capturing apparatus changes every moment.
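
    A minimal sketch, using downscaled grayscale thumbnails as the frame feature, global correlation as the capturing-range match, and a local block difference as the change signal; all features and thresholds are illustrative, not the system's:

    ```python
    # Hypothetical sketch: store a thumbnail feature per frame, find the stored
    # frame whose capturing range best matches a new frame, and flag a change
    # when a local difference within that matching view is large.

    import numpy as np
    import cv2

    def thumb(frame_bgr, size=(64, 36)):
        g = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.resize(g, size).astype(np.float32) / 255.0

    class ChangeDetector:
        """Feature storage + frame searcher + change detector (illustrative)."""
        def __init__(self, match_thresh=0.8, change_thresh=0.25, block=8):
            self.features = []                 # feature storage, one entry per frame
            self.match_thresh = match_thresh
            self.change_thresh = change_thresh
            self.block = block

        def process(self, frame_bgr):
            feat = thumb(frame_bgr)
            changed = False
            if self.features:
                # Frame searcher: stored frame whose global layout (capturing
                # range) correlates best with the new frame.
                sims = [float(np.corrcoef(f.ravel(), feat.ravel())[0, 1]) for f in self.features]
                i = int(np.argmax(sims))
                if sims[i] >= self.match_thresh:
                    # Change detector: a large local difference inside a
                    # globally matching view indicates the target changed.
                    diff = np.abs(self.features[i] - feat)
                    b = self.block
                    d = diff[:diff.shape[0] // b * b, :diff.shape[1] // b * b]
                    block_means = d.reshape(-1, b, d.shape[1] // b, b).mean(axis=(1, 3))
                    changed = float(block_means.max()) > self.change_thresh
            self.features.append(feat)
            return changed
    ```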

  • VIDEO PROCESSING SYSTEM, VIDEO CONTENT MONITORING METHOD, VIDEO PROCESSING APPARATUS, CONTROL METHOD OF THE APPARATUS, AND STORAGE MEDIUM STORING CONTROL PROGRAM OF THE APPARATUS

    A system of this invention is a video processing system for determining details of a browsable video content. This video processing system includes a video fragment download unit that downloads data of video fragments in a determination target video content via a network, and a first video content determination unit that determines the details of the video content based on the downloaded data of the video fragments. With this arrangement, it is possible to determine the details of a browsable video content while reducing the amount of data to be downloaded.
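
    A minimal sketch, assuming the fragments are fetched with HTTP Range requests and handed to a placeholder determination function; the URL, fragment sizes, and probe points are illustrative, not part of the described system:

    ```python
    # Hypothetical sketch: download only a few byte ranges ("fragments") of a
    # video and decide on its details from those, instead of fetching the
    # whole content.

    import requests

    def fetch_fragment(url, start, length):
        # HTTP Range request: ask the server for just this slice of the file.
        headers = {"Range": f"bytes={start}-{start + length - 1}"}
        return requests.get(url, headers=headers, timeout=10).content

    def determine_details(url, probe_points=(0.0, 0.5, 0.9), total_size=10_000_000,
                          fragment_len=256_000, classify=None):
        fragments = [fetch_fragment(url, int(p * total_size), fragment_len)
                     for p in probe_points]
        # 'classify' stands in for the first video content determination unit,
        # e.g. a decoder plus a frame-level classifier.
        return classify(fragments) if classify else None
    ```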