Current Issue: January – March 2020 | Issue Number: 1 | Articles: 5
Financial innovation by means of Fintech firms is one of the most disruptive business model innovations of recent years. In the financial advisory sector specifically, worldwide assets under management of artificial intelligence (AI)-based investment firms, or robo-advisors, currently amount to US$975.5 B. Since 2008, robo-advisors have evolved from passive advising to active data-driven investment management, requiring AI models capable of predicting financial asset prices in time to switch positions. In this research, an artificial neural network modelling framework is designed specifically for use as an active data-driven robo-advisor, owing to its ability to forecast, from today's copper prices, changes in prices five days ahead using input data that can be fed into the model automatically. The model, tested on data from the two most volatile periods of returns in the recent history of copper prices (May 2006 to September 2008 and September 2008 to September 2010), showed that the method can predict in-sample and out-of-sample prices, and consequently changes in prices, with high accuracy. Additionally, with a 24-day window of out-of-sample data, a trading simulation exercise was performed, consisting of staying long if the model predicts a rise in price or switching to a short position if the model predicts a decrease in price, and comparing the results with the passive strategies, buy-and-hold or sell-and-hold. The results obtained seem promising in terms of both statistical and trading metrics. Our contribution is twofold: 1) we propose a set of input variables, grounded in financial theory, that can be collected and fed automatically by the algorithm; 2) we generate predictions five days in advance that can be used to reposition the portfolio in active investment strategies.
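The long/short switching rule described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the convention that the prediction is +1 for an expected rise and -1 for an expected fall are assumptions.

```python
def simulate_long_short(prices, predictions):
    """Stay long when the model predicts a rise, switch to short when it
    predicts a fall; return the strategy's gross return alongside the
    passive buy-and-hold benchmark for comparison."""
    buy_and_hold = prices[-1] / prices[0]
    strategy_return = 1.0
    for t in range(len(prices) - 1):
        daily = prices[t + 1] / prices[t]
        # predictions[t] is the forecast direction for day t+1: +1 up, -1 down.
        # A short position earns the inverse of the daily price relative.
        strategy_return *= daily if predictions[t] == 1 else 1.0 / daily
    return strategy_return, buy_and_hold
```

With perfect directional forecasts the active strategy dominates both passive strategies, since it captures the magnitude of every move regardless of its sign.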
As artificial intelligence (AI)- or deep-learning-based technologies become more popular, the main research interest in the field is not only their accuracy but also their efficiency, e.g., the ability to give immediate results on users' inputs. To achieve this, there have been many attempts to embed deep learning technology in intelligent sensors. However, there are still many obstacles to embedding a deep network in sensors with limited resources. Most importantly, there is an apparent trade-off between the complexity of a network and its processing time, and finding a structure with a better trade-off curve is vital for successful applications in intelligent sensors. In this paper, we propose two strategies for designing a compact deep network that maintains the required level of performance even after minimizing the computations. The first strategy is to automatically determine the number of parameters of a network by utilizing group sparsity and knowledge distillation (KD) in the training process. By doing so, KD can compensate for the possible losses in accuracy caused by enforcing sparsity. Nevertheless, a problem in applying the first strategy is the lack of a clear way to balance the accuracy improvement due to KD against the parameter reduction from sparse regularization. To handle this balancing problem, we propose a second strategy: a feedback control mechanism based on proportional control theory. The feedback control logic determines the amount of emphasis to be put on network sparsity during training and is driven by the comparative accuracy losses of the teacher and student models during training. A surprising fact here is that this control scheme not only determines an appropriate trade-off point, but also improves the trade-off curve itself. The results of experiments on the CIFAR-10, CIFAR-100, and ImageNet32×32 datasets show that the proposed method is effective in building a compact network while preventing performance degradation due to sparsity regularization much better than other baselines.
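The proportional feedback idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's control law; the gain `k_p`, the use of teacher/student losses as the error signal, and the clipping at zero are assumptions.

```python
def update_sparsity_weight(lam, teacher_loss, student_loss, k_p=0.1, lam_min=0.0):
    """Proportional control of the sparsity-regularization weight: when the
    (sparse) student's loss drifts above the teacher's, reduce the pressure
    toward sparsity; when the student keeps up, allow more sparsity."""
    error = teacher_loss - student_loss  # positive when the student matches or beats the teacher
    lam = lam + k_p * error
    return max(lam_min, lam)  # the regularization weight must stay non-negative
```

Called once per epoch, this adjusts the sparsity penalty so that the accuracy gap to the teacher, rather than a hand-tuned constant, decides how aggressively parameters are pruned.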
Electric energy consumption prediction (EECP) is an essential and complex task in intelligent power management systems, and it plays a significant role in drawing up national energy development policy. This study therefore proposes an electric energy consumption prediction model, named EECP-CBL, that combines a Convolutional Neural Network (CNN) and Bi-directional Long Short-Term Memory (Bi-LSTM). In this framework, two CNNs in the first module extract the important information from several variables in the individual household electric power consumption (IHEPC) dataset. A Bi-LSTM module with two Bi-LSTM layers then uses this information, together with the trends of the time series in both directions (forward and backward states), to make predictions. The values obtained in the Bi-LSTM module are passed to a final module, consisting of two fully connected layers, that produces the prediction of future electric energy consumption. Experiments were conducted to compare the prediction performance of the proposed model with state-of-the-art models on several variants of the IHEPC dataset. The experimental results indicate that the EECP-CBL framework outperforms state-of-the-art approaches on several performance metrics for electric energy consumption prediction over real-time, short-term, medium-term, and long-term time spans.
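A CNN + Bi-LSTM stack like the one described above consumes supervised windows of shape (samples, timesteps, features). The windowing step, common to all such forecasting pipelines, can be sketched as follows; the function name, the window length, and the choice to predict the first variable are illustrative assumptions, not details from the paper.

```python
import numpy as np

def make_windows(series, timesteps, horizon=1):
    """Slice a multivariate series of shape (T, features) into supervised
    samples of shape (samples, timesteps, features), with the target taken
    `horizon` steps after each window ends."""
    T = len(series)
    X, y = [], []
    for start in range(T - timesteps - horizon + 1):
        X.append(series[start:start + timesteps])
        # Predict the first variable (e.g., global active power) `horizon` steps ahead.
        y.append(series[start + timesteps + horizon - 1, 0])
    return np.asarray(X), np.asarray(y)
```

Varying `timesteps` and `horizon` is what turns the same dataset into the real-time, short-term, medium-term, and long-term evaluation settings the abstract mentions.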
The motivation for this research paper is the application of two novel models to the prediction of a crude oil index. The first model is a generic deep belief network and the second is an adaptive neuro-fuzzy inference system. A second contribution of this paper is the use of an extensive number of inputs, including mixed and autoregressive inputs. Both proposed methodologies have been used in the past on different problems, such as face recognition and the prediction of chromosome anomalies, yielding better-than-usual results. For comparison purposes, the statistical and empirical forecasting accuracy of the models is benchmarked against traditional strategies such as a naïve strategy, a moving average convergence divergence (MACD) model, and an autoregressive moving average (ARMA) model. As it turns out, the proposed novel techniques produce higher statistical and empirical results, outperforming the other, linear models. To our knowledge, this is the first such study to report results of this quality in forecasting oil markets.
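The MACD benchmark mentioned above is a standard construction: the difference between a fast and a slow exponential moving average, with a signal line that is itself an EMA of that difference. A minimal sketch, assuming the conventional 12/26/9 periods (the abstract does not state the parameters used):

```python
def ema(values, n):
    """Exponential moving average with smoothing factor 2 / (n + 1)."""
    alpha = 2.0 / (n + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line = fast EMA - slow EMA; signal line = EMA of the MACD line.
    A common trading rule goes long while the MACD line is above the signal."""
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return macd_line, ema(macd_line, signal)
```

On a sustained uptrend the fast EMA sits above the slow one, so the MACD line is positive, which is what makes it a sensible directional benchmark for the oil index forecasts.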
Accurate bathymetric modeling is required for safe maritime navigation in shallow waters, as well as for other marine operations. Traditionally, bathymetric modeling is carried out using linear models such as the Stumpf method. Linear methods derive bathymetry from the strong linear correlation between the grey values of the visible bands of satellite imagery and the water depth, since the energy of these visible bands received at the satellite sensor is inversely proportional to the depth of the water. However, when the seafloor topography is not homogeneous, this linear method fails. The current state of the art is represented by artificial neural network (ANN) models, which were developed using a non-linear, static modeling function. However, more accurate modeling can be achieved using a highly non-linear, dynamic modeling function. This paper investigates a highly non-linear wavelet network model for accurate satellite-based bathymetric modeling, with a dynamic non-linear wavelet activation function that has proven to be a valuable modeling method for many applications. Freely available Level-1C imagery from the Sentinel-2A satellite was employed to develop and validate the proposed wavelet network model; the top-of-atmosphere spectral reflectance values of the multispectral bands were used to establish it. It is shown that the root-mean-squared (RMS) error of the developed wavelet network model was about 1.82 m, and the correlation between the wavelet network depth estimates and 'truth' nautical chart depths was about 95%, on average. To further evaluate the proposed model, a comparison was made among the developed highly non-linear wavelet network method, the Stumpf log-ratio method, and the ANN method. It is concluded that the developed, highly non-linear wavelet network model is superior to the Stumpf log-ratio method by about 37% and outperforms the ANN model by about 21%, on average, on the basis of the RMS errors. The accuracy of the wavelet-network-derived bathymetry was also evaluated against the International Hydrographic Organization (IHO) standards for all survey orders. It is shown that this accuracy does not meet the IHO standards for any survey order; however, the wavelet network model can still be employed as an accurate and powerful tool for survey planning when conducting hydrographic surveys of new, shallow-water areas.
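The Stumpf log-ratio baseline against which the wavelet network is compared has a simple closed form: depth is a linear function of the ratio of log-scaled blue and green water reflectances. A sketch of that baseline and of the RMS-error metric used in the comparison; the constants `m1`, `m0`, and the scaling factor `n` are tunable values fit to reference soundings (the values below are placeholders, not the paper's).

```python
import math

def stumpf_depth(r_blue, r_green, m1, m0, n=1000.0):
    """Stumpf log-ratio bathymetry: depth = m1 * ln(n*Rb)/ln(n*Rg) - m0,
    where m1 (gain) and m0 (offset) are calibrated against known depths."""
    return m1 * (math.log(n * r_blue) / math.log(n * r_green)) - m0

def rms_error(estimates, truths):
    """Root-mean-squared error of depth estimates against chart ('truth') depths."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(estimates))
```

Because the log ratio equals 1 wherever the two bands reflect equally, the method collapses to a constant depth there, which illustrates why it degrades over heterogeneous seafloor as the abstract notes.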