Current Issue: October-December 2023, Issue No. 4 (4 Articles)
Versatile Video Coding (VVC) introduces many new coding tools, such as the quadtree with nested multi-type tree (QTMT) partition structure, which greatly improve coding efficiency. However, these tools also raise computational complexity considerably, which limits the use of VVC in real-time scenarios. To address the high complexity of VVC intra coding, we propose a low-complexity partition algorithm based on edge features. First, the Laplacian of Gaussian (LoG) operator is used to extract edges in the coding frame, and the edges are classified as vertical or horizontal. Then, the coding unit (CU) is divided into four equal sub-blocks in the horizontal and vertical directions to compute feature values for the horizontal and vertical edges, respectively. Based on these feature values, unnecessary partition modes are skipped in advance. Finally, for CUs without edges, the partition process is terminated early according to the depth information of neighboring CUs. Experimental results show that, compared with VTM-13.0, the proposed algorithm saves 54.08% of the encoding time on average while the Bjøntegaard delta bit rate (BD-BR) increases by only 1.61%.
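The LoG-based edge analysis described in this abstract can be illustrated with a small sketch. This is not the paper's implementation: the kernel size, the edge threshold, and the per-sub-block feature (here a simple edge-pixel count) are all assumptions made for illustration.

```python
import numpy as np

def log_kernel(size=5, sigma=1.0):
    """Laplacian-of-Gaussian (LoG) kernel, normalized to zero sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero sum: no response on flat regions

def convolve2d(img, k):
    """Same-size filtering with edge-replication padding (naive loops
    for clarity; the LoG kernel is symmetric, so correlation equals
    convolution here)."""
    p = k.shape[0] // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def classify_edges(img, thresh=10.0):
    """Split LoG edge pixels into horizontal and vertical edges by
    comparing the finite-difference gradient components."""
    edges = np.abs(convolve2d(img, log_kernel())) > thresh
    gy, gx = np.gradient(img.astype(float))
    horiz = edges & (np.abs(gy) >= np.abs(gx))  # vertical gradient -> edge runs horizontally
    vert = edges & (np.abs(gx) > np.abs(gy))
    return horiz, vert

def subblock_features(mask, axis=0):
    """Edge-pixel counts in four equal sub-blocks of a CU along one axis
    (a stand-in for the paper's horizontal/vertical feature values)."""
    return [int(p.sum()) for p in np.array_split(mask, 4, axis=axis)]
```

If, say, the vertical-edge features of a CU are all near zero, vertical binary/ternary splits could be skipped early; a CU with no edge pixels at all would fall through to the neighbor-depth early-termination step.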
Information security and copyright protection of video are becoming increasingly pressing concerns. Current video watermarking algorithms lack robustness against compression and can noticeably degrade the visual quality of the video. To solve this problem, this paper proposes a video watermarking algorithm based on H.264/AVC. The algorithm combines the non-zero quantized coefficient count and an energy factor to select suitable chroma sub-blocks, and then applies an optimized modulation to embed the watermark into their quantized DCT coefficients so as to minimize the number of modifications to the sub-blocks. Invisibility and robustness experiments show Structural Similarity Index values above 0.99 and False Bit Rates below 0.03. The results indicate that the algorithm achieves good invisibility and anti-compression performance, with clear advantages over other similar methods.
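The coefficient-level embedding described here can be illustrated with a toy parity scheme. This is only a sketch of the general idea, not the paper's optimized modulation (which selects chroma sub-blocks via non-zero coefficients and an energy factor and minimizes the modifications made); the coefficient position used below is an arbitrary choice.

```python
import numpy as np

def embed_bit(qblock, bit, pos=(0, 1)):
    """Embed one watermark bit by forcing the parity of one quantized
    DCT coefficient (toy modulation for illustration only)."""
    out = qblock.copy()
    c = int(out[pos])
    if abs(c) % 2 != bit:
        c += 1 if c >= 0 else -1  # +/-1 is the smallest possible change
    out[pos] = c
    return out

def extract_bit(qblock, pos=(0, 1)):
    """Recover the watermark bit from the coefficient's parity."""
    return abs(int(qblock[pos])) % 2
```

Extraction is blind (no original video needed); surviving requantization under recompression is precisely what the real selection and modulation steps are designed for.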
Video tutorials are a popular means of learning software applications, yet their design and effectiveness have received little attention. This study investigated the effectiveness of video tutorials for software training and examined whether two multimedia design principles, signaling and practice type, contribute to task performance, mental effort, and self-efficacy. The participants were 114 undergraduate students from a nursing department. A 2 (no signals vs. signals) × 2 (video practice vs. video practice video) mixed factorial design was used to test the main study hypotheses. The analysis revealed unique contributions of signaling and practice type to task performance and self-efficacy. Contrary to expectations, however, no combined effect of signaling and practice type was found. The paper concludes with a discussion of the findings and their implications for future research.
Artificial intelligence plays a significant role in traffic-accident detection. Traffic accidents involve a cascade of inadvertent events, which makes them challenging for traditional detection approaches. For instance, Convolutional Neural Network (CNN)-based approaches cannot analyze temporal relationships among objects, while Recurrent Neural Network (RNN)-based approaches suffer from low processing speeds and cannot detect traffic accidents across multiple frames simultaneously. Furthermore, these networks fail to suppress background interference in the input video frames. This paper proposes a framework that first subtracts the background using You Only Look Once (YOLOv5)-based detection, adaptively reducing background interference during object detection. A CNN encoder and a Transformer decoder are then combined into an end-to-end model that extracts spatial and temporal features across time points, enabling parallel analysis of the input video frames. The framework was evaluated on the Car Crash Dataset through a series of comparison and ablation experiments. Benchmarked against three accident-detection models, the proposed framework achieved a superior accuracy of approximately 96%. The ablation experiments indicate that removing background subtraction from the framework decreased all evaluation metrics by approximately 3%.
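The role of the background-subtraction stage can be illustrated independently of YOLOv5 with a minimal running-average model. This is a stand-in sketch, not the paper's adaptive YOLOv5-based method; the learning rate and threshold below are arbitrary.

```python
import numpy as np

def foreground_masks(frames, alpha=0.05, thresh=25.0):
    """Exponential running-average background model: each frame is
    compared against the current background estimate, which is then
    updated, so slow scene changes are absorbed while sudden ones
    (e.g. a crash-like event) stand out as foreground."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames:
        diff = np.abs(f.astype(float) - bg)
        masks.append(diff > thresh)          # foreground where change is large
        bg = (1 - alpha) * bg + alpha * f    # adapt the background estimate
    return masks
```

In the paper's pipeline, the background-cleaned frames feed a CNN encoder whose features a Transformer decoder attends over across time, so all frames of a clip can be analyzed in parallel rather than sequentially as in an RNN.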