Current Issue: October-December 2021, Issue 4 (5 Articles)
The historical bottleneck for truly high-scale integrated photonics is the light emitter. The lack of monolithically integrable light sources increases costs and reduces scalability. Quantum phenomena found in embedded Si particles at the nanometer scale offer a way of overcoming the inability of bulk Si to emit light. Integrable light sources based on Si nanoparticles can be obtained with different CMOS (Complementary Metal Oxide Semiconductor)-compatible materials and techniques. Such materials, in combination with Si3N4 photonic elements, allow for integrated Si photonics in which photodetectors can also be included directly in standard Si wafers, taking advantage of the visible-range emission of the embedded Si nanocrystals/nanoparticles. We present advances and perspectives on the seamless monolithic integration of CMOS-compatible visible light emitters, photonic elements, and photodetectors, which is shown to be viable and promising well within the limits imposed by standard fabrication methods.
This paper describes the use of spherical wave expansion (SWE) to model the embedded element patterns of the LOFAR low-band array. The goal is to reduce the amount of data needed to store the embedded element patterns. The coefficients are calculated using the Moore-Penrose pseudoinverse. The Fast Fourier Transform (FFT) is used to interpolate the coefficients in the frequency domain. It turns out that the embedded element patterns can be described by only 41.8% of the data needed to describe them directly when sampled at the Nyquist rate. The presented results show that a frequency resolution of 1 MHz is needed for proper interpolation of the spherical wave coefficients over the 80 MHz operating frequency band of the LOFAR low-band array. It is also shown that the error due to interpolation using the FFT is less than the error due to linear or cubic spline interpolation.
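As a rough illustration of the two numerical steps this abstract names, the following Python sketch fits spherical wave coefficients by least squares via the Moore-Penrose pseudoinverse and then interpolates them over frequency with a zero-padded FFT. The mode matrix, the random pattern samples, and all sizes are illustrative stand-ins, not LOFAR data or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_modes, n_freqs = 200, 40, 81   # pattern samples, SWE modes, 1 MHz grid

# A[i, j]: value of spherical wave mode j in sampling direction i
# (in practice built from spherical vector wave functions).
A = rng.standard_normal((n_dirs, n_modes)) + 1j * rng.standard_normal((n_dirs, n_modes))

# E[:, k]: embedded element pattern sampled at frequency index k (stand-in data).
E = rng.standard_normal((n_dirs, n_freqs)) + 1j * rng.standard_normal((n_dirs, n_freqs))

# Least-squares fit of the coefficients at all frequencies at once:
# q = pinv(A) @ E, using the Moore-Penrose pseudoinverse.
q = np.linalg.pinv(A) @ E                # shape (n_modes, n_freqs)

# FFT-based band-limited interpolation of each coefficient over frequency:
# zero-pad the spectrum to refine the frequency grid by a factor of 4.
upsample = 4
Q = np.fft.fft(q, axis=1)
Q_pad = np.zeros((n_modes, n_freqs * upsample), dtype=complex)
half = n_freqs // 2
Q_pad[:, :half + 1] = Q[:, :half + 1]    # non-negative frequencies
Q_pad[:, -half:] = Q[:, -half:]          # negative frequencies
q_fine = np.fft.ifft(Q_pad, axis=1) * upsample
```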
Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Action recognition performance has improved, but owing to model complexity, limitations to real-time operation persist. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. The spatial feature maps are weighted-averaged by frame change, transformed into spatiotemporal features, and fed into a multilayer perceptron, which has relatively lower complexity than other HAR models; thus, the method is well suited to a single embedded system connected to CCTV. Evaluation of action recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than an HAR model using long short-term memory with a small number of video frames, and the fast data processing speed confirmed the possibility of real-time operation. In addition, the performance of the proposed weighted-mean-based HAR model was verified by testing it on a Jetson Nano to confirm the possibility of using it in low-cost GPU-based embedded systems.
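A minimal Python sketch of the weighted-mean idea described above: per-frame CNN feature maps are averaged with weights derived from the frame change rate, producing a single spatiotemporal feature for an MLP. The CNN is stubbed out with random arrays, and all names and shapes are hypothetical, not the authors' exact design.

```python
import numpy as np

def frame_change_weights(frames):
    """Weight each frame by the mean absolute pixel change from its predecessor."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    diffs = np.concatenate([[diffs.mean()], diffs])   # pad the first frame
    return diffs / diffs.sum()                        # normalize to sum to 1

def spatiotemporal_feature(feature_maps, frames):
    """Collapse per-frame feature maps (T, H, W, C) into one (H, W, C) map."""
    w = frame_change_weights(frames)                  # (T,)
    return np.tensordot(w, feature_maps, axes=(0, 0))  # weighted average over time

# Random stand-ins for 16 video frames and their CNN feature maps.
T = 16
frames = np.random.randint(0, 256, (T, 112, 112, 3), dtype=np.uint8)
feature_maps = np.random.rand(T, 7, 7, 256).astype(np.float32)

feat = spatiotemporal_feature(feature_maps, frames)   # (7, 7, 256)
mlp_input = feat.reshape(-1)                          # flattened vector for the MLP
```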
Aiming at the problem that embedded platforms cannot meet the real-time detection requirements of multisource images, this paper proposes MNYOLO (MobileNet-YOLOv4-tiny), a lightweight target detection network suitable for embedded platforms that uses depthwise separable convolution instead of standard convolution to reduce the number of model parameters and calculations. At the same time, the visible-light target detection model is used as the pretraining model for the infrared target detection model, which is fine-tuned on an infrared target dataset collected in the field. On this basis, a decision-level fusion detection model is obtained that exploits the complementary information of the infrared and visible-light bands. The experimental results show that the decision-level fusion target detection model has a clear advantage in detection accuracy over the single-band target detection models while meeting real-time requirements, verifying the effectiveness of the proposed algorithm.
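The parameter saving from the substitution this abstract describes can be made concrete with a short PyTorch sketch, assuming illustrative layer sizes rather than those of MNYOLO: a standard convolution is replaced by a depthwise convolution followed by a pointwise (1x1) convolution.

```python
import torch.nn as nn

c_in, c_out, k = 128, 256, 3   # hypothetical channel counts and kernel size

standard = nn.Conv2d(c_in, c_out, k, padding=1)

depthwise_separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in),  # depthwise: one filter per channel
    nn.Conv2d(c_in, c_out, 1),                         # pointwise: 1x1 channel mixing
)

n_std = sum(p.numel() for p in standard.parameters())
n_dws = sum(p.numel() for p in depthwise_separable.parameters())
print(f"standard: {n_std}, depthwise separable: {n_dws}, ratio: {n_dws / n_std:.2f}")
# For these sizes the separable version needs about 12% of the parameters.
```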
In the field of autonomous navigation robots, autonomous positioning is one of the most difficult challenges. Simultaneous localization and mapping (SLAM) technology can incrementally construct a map of the robot's moving path in an unknown environment while estimating the position of the robot in that map, providing an effective solution for fully autonomous robot navigation. A camera can obtain two-dimensional digital images of the real three-dimensional world. These images contain very rich colour and texture information and highly recognizable features, which provide indispensable information for robots to understand and recognize the environment while autonomously exploring it. Therefore, more and more researchers use cameras to solve SLAM problems, an approach known as visual SLAM. Visual SLAM must process a large amount of image data collected by the camera, which places high performance requirements on computing hardware, and thus its application on embedded mobile platforms is greatly limited. This paper presents a parallelization method for embedded hardware equipped with an embedded GPU, using CUDA, a parallel computing platform, to accelerate the visual front end of the visual SLAM algorithm. Extensive experiments verify the effectiveness of the method. The results show that the presented method effectively improves the operating efficiency of the visual SLAM algorithm while preserving its original accuracy.
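To illustrate the kind of per-pixel front-end work that can be offloaded to an embedded GPU with CUDA, here is a generic Python sketch using Numba's CUDA bindings: a grayscale-conversion kernel, a typical first step of a visual-SLAM front end. This is a hypothetical example of the general pattern, not the paper's implementation, and it requires a CUDA-capable GPU to run.

```python
import numpy as np
from numba import cuda

@cuda.jit
def rgb_to_gray(rgb, gray):
    x, y = cuda.grid(2)                       # one thread per pixel
    if x < gray.shape[0] and y < gray.shape[1]:
        r, g, b = rgb[x, y, 0], rgb[x, y, 1], rgb[x, y, 2]
        gray[x, y] = 0.299 * r + 0.587 * g + 0.114 * b

frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float32)
d_frame = cuda.to_device(frame)               # copy the frame to GPU memory
d_gray = cuda.device_array((480, 640), dtype=np.float32)

threads = (16, 16)                            # thread block shape
blocks = ((480 + 15) // 16, (640 + 15) // 16) # grid covering the whole image
rgb_to_gray[blocks, threads](d_frame, d_gray) # launch the kernel
gray = d_gray.copy_to_host()
```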