Current Issue: July - September | Volume: 2019 | Issue Number: 3 | Articles: 5
The Convolutional Neural Network (CNN) has been used in many fields and has achieved remarkable results in tasks such as image classification, face detection, and speech recognition. Compared to GPUs (graphics processing units) and ASICs, an FPGA (field-programmable gate array)-based CNN accelerator has great advantages due to its low power consumption and reconfigurability. However, the FPGA's extremely limited resources and the CNN's huge number of parameters and high computational complexity pose great challenges to the design. Based on the ZYNQ heterogeneous platform, and balancing resource and bandwidth constraints with the roofline model, the CNN accelerator we designed can accelerate both standard convolution and depthwise separable convolution with a high hardware resource utilization rate. The accelerator can handle network layers of different scales through parameter configuration, maximizes bandwidth, and achieves a fully pipelined design by using a data-stream interface and a ping-pong on-chip cache. The experimental results show that the accelerator designed in this paper achieves 17.11 GOPS for 32-bit floating point while also accelerating depthwise separable convolution, which gives it obvious advantages compared with other designs.
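As a rough illustration of why depthwise separable convolution is attractive on a resource-limited FPGA, the sketch below counts multiply-accumulate operations for a standard convolution versus its depthwise-plus-pointwise factorization. The layer dimensions are illustrative assumptions, not values from the paper.

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution producing an h x w x c_out map."""
    return h * w * c_out * c_in * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for depthwise (k x k per input channel) + pointwise (1 x 1) convolution."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 112 x 112 feature map, 32 -> 64 channels, 3 x 3 kernel
std = standard_conv_macs(112, 112, 32, 64, 3)
sep = depthwise_separable_macs(112, 112, 32, 64, 3)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, ratio: {sep / std:.3f}")
# The ratio equals 1/c_out + 1/k**2, here about 0.127 -- roughly an 8x reduction.
```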
Binary tree topology generally fails to attract network-on-chip (NoC) implementations due to its low bisection bandwidth. Fat trees have been proposed to alleviate this issue by using increasingly thicker links to connect switches towards the root node. This scheme is very efficient in interconnection networks such as computer networks, which use generic switches for interconnection. In an NoC context, especially for field-programmable gate arrays (FPGAs), fat trees require more complex switches as we move higher in the hierarchy. This restricts the maximum clock frequency at which the network can operate and offsets the higher bandwidth achieved through using fatter links. In this paper, we discuss the implementation of a binary tree-based NoC that achieves better bandwidth by varying the clock frequency between the switches as we move higher in the hierarchy. This scheme enables a simpler switch architecture, thus supporting a higher maximum frequency of operation. The effect of this architecture on bandwidth and resource requirements is compared with other FPGA-based NoCs for different network sizes and traffic patterns.
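A back-of-the-envelope way to see the idea: per-link bandwidth is link width times clock frequency, so doubling the frequency at each level toward the root matches the bandwidth a fat tree gets by doubling link width, while keeping every switch's datapath narrow. The sketch below uses illustrative widths and clock rates that are our own assumptions, not figures from the paper.

```python
def fat_tree_link_bw(level, base_width=32, clock_mhz=100):
    """Fat tree: link width doubles toward the root, clock stays fixed (Mbit/s)."""
    return (base_width * 2 ** level) * clock_mhz

def clock_scaled_link_bw(level, width=32, base_clock_mhz=100):
    """Clock-scaled binary tree: width stays fixed, clock doubles toward the root (Mbit/s)."""
    return width * (base_clock_mhz * 2 ** level)

for level in range(4):  # level 0 = leaf switches, increasing toward the root
    ft = fat_tree_link_bw(level)
    cs = clock_scaled_link_bw(level)
    print(f"level {level}: fat tree {ft} Mbit/s vs clock-scaled {cs} Mbit/s")
```

Clock rates cannot double indefinitely in real hardware, of course; the paper's point is that a simple fixed-width switch closes timing at a higher maximum frequency than a wide fat-tree switch, recovering bandwidth that the complex switch would otherwise forfeit.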
The field of nanosatellites is constantly evolving and growing at a very fast pace. This creates a growing demand for more advanced and reliable EDAC (error detection and correction) systems capable of protecting all memory aspects of satellites. The Hamming code was identified as a suitable EDAC scheme for the prevention of single-event effects on board a nanosatellite in LEO. In this paper, three variations of Hamming codes are tested in both MATLAB and VHDL. The most effective version was the Hamming [16, 11, 4]₂ code, which guarantees single-error correction and double-error detection. All of the developed Hamming codes are suited to FPGA implementation, for which they are tested thoroughly using simulation software and optimized.
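For reference, a [16, 11, 4]₂ code is the classic Hamming(15, 11) code extended with one overall parity bit, which is what yields the single-error-correction, double-error-detection (SEC-DED) property. The following is our own minimal Python sketch of the encode/decode logic, offered only to make the mechanism concrete; the paper's implementations are in MATLAB and VHDL.

```python
def hamming16_11_encode(data):
    """Encode 11 data bits as a [16, 11, 4] extended Hamming codeword (SEC-DED)."""
    assert len(data) == 11
    code, src = [0] * 16, iter(data)
    for pos in range(1, 16):
        if pos & (pos - 1):                  # not a power of two: data position
            code[pos] = next(src)
    for p in (1, 2, 4, 8):                   # Hamming parity bits
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[0] = sum(code) % 2                  # overall parity extends distance to 4
    return code

def hamming16_11_decode(code):
    """Return (data, status); corrects 1-bit errors, detects 2-bit errors."""
    code = list(code)
    syndrome = 0
    for i in range(1, 16):
        if code[i]:
            syndrome ^= i
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        status = "no error"
    elif overall == 1:                       # odd overall parity: single error
        code[syndrome if syndrome else 0] ^= 1
        status = "corrected single error"
    else:                                    # even parity, nonzero syndrome
        status = "detected double error"
    data = [code[i] for i in range(1, 16) if i & (i - 1)]
    return data, status

# Self-check: inject a single-bit error and confirm it is corrected
msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
cw = hamming16_11_encode(msg)
cw[6] ^= 1
decoded, status = hamming16_11_decode(cw)
assert decoded == msg and status == "corrected single error"
```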
Multiple-input multiple-output (MIMO) wireless technology in combination with orthogonal frequency-division multiplexing (MIMO-OFDM) is an attractive technique for next-generation wireless systems. However, the performance of wireless links is severely degraded by various channel impairments, which cause decoding failures and lead to packet loss at the receiver. One technique to cope with this problem is the rateless space-time block code (RSTBC). This paper presents experimental results on the performance of a 2 × 2 MIMO-OFDM system with RSTBC, as measured in a testbed implemented with a field-programmable gate array (FPGA). The average bit error rate (BER) performance of the proposed scheme is evaluated experimentally, and the results agree closely with simulation and with the analytical upper bound. It is shown that RSTBC can be implemented in real-world scenarios and can guarantee the reliability of loss-prone wireless channels.
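The rateless construction itself is specific to the paper, but the fixed-rate building block that space-time block coding rests on is the classic Alamouti code. Below is our own numpy illustration of Alamouti encoding and linear combining over a 2 × 2 flat-fading channel, not the paper's testbed code; the symbol values and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two symbol periods from two antennas
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Alamouti encoding: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)
X = np.array([[s1, -np.conj(s2)],
              [s2,  np.conj(s1)]])          # rows: tx antennas, cols: time slots

H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)  # rx x tx
noise = 0.01 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
R = H @ X + noise                           # rows: rx antennas, cols: time slots

# Linear combining recovers s1, s2 scaled by the total channel gain ||H||_F^2
gain = np.sum(np.abs(H) ** 2)
s1_hat = (np.conj(H[:, 0]) @ R[:, 0] + H[:, 1] @ np.conj(R[:, 1])) / gain
s2_hat = (np.conj(H[:, 1]) @ R[:, 0] - H[:, 0] @ np.conj(R[:, 1])) / gain
print(abs(s1_hat - s1), abs(s2_hat - s2))   # both small at this noise level
```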
As outlined in 3GPP Release 16, 5G satellite access is important for future 5G network development. A terrestrial-satellite network integrated with 5G has the characteristics of low delay, high bandwidth, and ubiquitous coverage. A few researchers have proposed integration schemes for such a network; however, these schemes do not consider the possibility of optimizing the delay characteristic by changing the computing mode of the 5G satellite network. We propose a 5G satellite edge computing framework (5GsatEC), which aims to reduce delay and expand network coverage. This framework consists of embedded hardware platforms and edge computing microservices on board the satellites. To increase the flexibility of the framework in complex scenarios, we unify the resource management of the central processing unit (CPU), graphics processing unit (GPU), and field-programmable gate array (FPGA), and we divide the services into three types: system services, basic services, and user services. To verify the performance of the framework, we carried out a series of experiments. The results show that 5GsatEC has broader coverage than the ground 5G network, as well as lower delay, a lower packet loss rate, and lower bandwidth consumption than the 5G satellite network.
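The abstract gives only the outline of the unified resource model. Purely as an illustration of the idea, a descriptor that treats CPU, GPU, and FPGA capacity uniformly and tags microservices with the three service tiers named in the paper could be sketched as follows; every name and field here is our own assumption, not 5GsatEC's API.

```python
from dataclasses import dataclass
from enum import Enum

class ResourceKind(Enum):
    CPU = "cpu"
    GPU = "gpu"
    FPGA = "fpga"

class ServiceTier(Enum):          # the three service types named in the paper
    SYSTEM = "system"             # platform management, always resident
    BASIC = "basic"               # shared building blocks (e.g., routing, codecs)
    USER = "user"                 # tenant workloads, scheduled on demand

@dataclass
class ResourceSlice:
    kind: ResourceKind
    capacity: float               # normalized share of the device, 0.0-1.0

@dataclass
class Microservice:
    name: str
    tier: ServiceTier
    demands: list[ResourceSlice]  # heterogeneous demands scheduled as one unit

# A hypothetical user service pairing a CPU share with an FPGA partial region
svc = Microservice(
    name="image-downlink-compress",
    tier=ServiceTier.USER,
    demands=[ResourceSlice(ResourceKind.CPU, 0.25),
             ResourceSlice(ResourceKind.FPGA, 0.5)],
)
```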