Current Issue: Volume 5 (2011), Issue 1-2, 4 Articles
Background
A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs.
Method
Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors.
Result
The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with a 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration.
Conclusion
The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method on this new training set.
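The cycle-length step lends itself to a short illustration. Below is a minimal Python sketch of estimating the cardiac cycle length from the autocorrelation of a heart sound envelope, assuming a Hilbert-transform envelope detector and simple peak picking; the function name estimate_cycle_length, the thresholds, and the synthetic test signal are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def estimate_cycle_length(pcg, fs):
    """Estimate the cardiac cycle length (seconds) of a phonocardiogram
    `pcg` sampled at `fs` Hz, from the autocorrelation of its envelope.
    Hypothetical sketch of the cycle-extraction step; the envelope
    detector and peak-picking rules here are assumptions."""
    # Envelope detection via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(pcg))
    envelope -= envelope.mean()

    # Autocorrelation of the envelope; keep non-negative lags only.
    acf = np.correlate(envelope, envelope, mode="full")
    acf = acf[len(acf) // 2:]
    acf /= acf[0]  # normalize so acf[0] == 1

    # The first dominant peak beyond a physiological minimum lag
    # (~0.4 s, i.e. below ~150 bpm) marks one cardiac cycle.
    min_lag = int(0.4 * fs)
    peaks, _ = find_peaks(acf[min_lag:], height=0.2)
    if len(peaks) == 0:
        raise ValueError("no periodicity found in envelope")
    return (min_lag + peaks[0]) / fs

# Example: synthetic 'heart sound' with 40 Hz bursts every 0.8 s (75 bpm).
fs = 1000
t = np.arange(0, 6, 1 / fs)
gate = (np.mod(t, 0.8) < 0.1).astype(float)
pcg = gate * np.sin(2 * np.pi * 40 * t)
print(f"estimated cycle length: {estimate_cycle_length(pcg, fs):.2f} s")
```

Once the cycle length is known, equal numbers of cycles can be cut from recordings with different heart rates before wavelet feature extraction.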
Background
The evaluation, verification, and comparison of different numerical heart models are difficult without a commonly available database that could be utilized as a reference. Our aim was to compile an exemplary dataset.
Methods
The following methods were employed: Magnetic Resonance Imaging (MRI) of the heart and torso, Body Surface Potential Maps (BSPM), and MagnetoCardioGraphy (MCG) maps. The latter two were recorded simultaneously from the same individuals a few hours after the MRI sessions.
Results
A training dataset is made publicly available; datasets for blind testing will remain undisclosed.
Conclusions
While the MRI data may provide a common input that can be applied to different numerical heart models, the verification and comparison of different models can be performed by comparing the measured biosignals with forward-calculated signals from the models.
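The forward-versus-measured comparison described in the conclusions could, for instance, use standard map-comparison metrics. The sketch below computes a relative RMS error and an overall correlation between a measured map and a forward-calculated one; these particular metrics and the random placeholder data are assumptions for illustration, not part of the dataset's specification.

```python
import numpy as np

def relative_rms_error(measured, simulated):
    """Relative RMS difference between a measured biosignal map and a
    forward-calculated one (arrays of shape [channels, samples])."""
    return np.linalg.norm(measured - simulated) / np.linalg.norm(measured)

def correlation(measured, simulated):
    """Overall Pearson correlation between the two maps."""
    return np.corrcoef(measured.ravel(), simulated.ravel())[0, 1]

# Hypothetical usage: compare a 64-channel BSPM recording against a
# model's forward solution for the same time window (placeholder data).
rng = np.random.default_rng(0)
bspm_measured = rng.standard_normal((64, 500))
bspm_forward = bspm_measured + 0.1 * rng.standard_normal((64, 500))
print(f"rel. RMS error: {relative_rms_error(bspm_measured, bspm_forward):.3f}")
print(f"correlation:    {correlation(bspm_measured, bspm_forward):.3f}")
```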
Background
Allometry, in general biology, measures the relative growth of a part in relation to the whole living organism. Using reported clinical data, we apply this concept to evaluating the probability of ventricular fibrillation based on electrocardiographic ST-segment deviation values.
Methods
Data collected in previous reports were used to fit an allometric model in order to estimate ventricular fibrillation probability. Patients presenting with death, myocardial infarction, or unstable angina were included to calculate this probability as VFp = d·(ST)^β for three different ST deviations. The coefficients d and β were obtained as the best fit to the clinical data over observational periods of 1, 6, 12, and 48 months from the occurrence of the first reported chest pain accompanied by ST deviation.
Results
By applying the above equation in log-log representation, the fitting procedure produced the following overall coefficients: average β = 0.46, with a maximum of 0.62 and a minimum of 0.42; average d = 1.28, with a maximum of 1.79 and a minimum of 0.92. For a 2 mm ST deviation, the full range of predicted ventricular fibrillation probability extended from about 13% at 1 month up to 86% at 4 years after the original cardiac event.
Conclusions
These results appear at least preliminarily acceptable but still call for full clinical testing. The model seems promising, especially if other parameters were taken into account, such as blood cardiac enzyme concentrations, ischemic or infarcted epicardial areas, or ejection fraction. Considering these results and the few references found in the literature, it is concluded that the allometric model shows good practical predictive value to aid medical decisions.
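The log-log fitting procedure is straightforward to illustrate. The Python sketch below fits log(VFp) = log(d) + β·log(ST) by linear least squares and then predicts VFp for a 2 mm ST deviation; the data points are made-up placeholders, not the clinical values used in the study.

```python
import numpy as np

# Hypothetical (ST deviation in mm, observed VF probability) pairs;
# the actual values come from the clinical reports cited in the paper.
st_mm = np.array([1.0, 2.0, 3.0])
vf_prob = np.array([0.10, 0.14, 0.17])

# Fit log(VFp) = log(d) + beta * log(ST), i.e. VFp = d * ST**beta.
# np.polyfit returns [slope, intercept] for a degree-1 fit.
beta, log_d = np.polyfit(np.log(st_mm), np.log(vf_prob), 1)
d = np.exp(log_d)
print(f"d = {d:.2f}, beta = {beta:.2f}")

# Predict the VF probability for a 2 mm ST deviation.
print(f"VFp(2 mm) = {d * 2.0 ** beta:.2%}")
```

Because an allometric model is a power law, it is linear in log-log coordinates, which is why an ordinary least-squares fit on the logarithms recovers both coefficients at once.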
Background
Real-time forecasting of epidemics, especially forecasting based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting.
Methods
A discrete-time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied, as a case study, to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and assuming linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions.
Results
The quality of the forecasts made before the epidemic peak appears to depend largely on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size improved greatly at and after the epidemic peak, with all observed data points falling within the uncertainty bounds.
Conclusions
Real-time forecasting using the discrete-time stochastic model, with its simple computation of the uncertainty bounds, was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics, and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance.
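To make the forecasting idea concrete, here is a minimal sketch of a discrete-time branching process forecast. It propagates weekly incidence as I[t+1] ~ Poisson(R · I[t]), capturing demographic stochasticity, and obtains uncertainty bounds by Monte Carlo simulation rather than by the paper's analytical chains of conditional offspring distributions; the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def forecast_incidence(i0, R, weeks, n_sims=10000, seed=0):
    """Forecast weekly incidence with a Poisson branching process:
    I[t+1] ~ Poisson(R * I[t]).  A simplified stand-in for the paper's
    discrete-time stochastic model; bounds here come from simulation."""
    rng = np.random.default_rng(seed)
    paths = np.zeros((n_sims, weeks + 1), dtype=np.int64)
    paths[:, 0] = i0
    for t in range(weeks):
        # Demographic stochasticity: each week's cases seed the next.
        paths[:, t + 1] = rng.poisson(R * paths[:, t])
    return paths

# Hypothetical parameters: 50 cases this week, reproduction number 1.4.
paths = forecast_incidence(i0=50, R=1.4, weeks=6)
median = np.median(paths, axis=0)
lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
for week in range(paths.shape[1]):
    print(f"week {week}: median {median[week]:.0f} "
          f"(95% bounds {lo[week]:.0f}-{hi[week]:.0f})")
```

In this simplified setting the two parameters would be the initial incidence and the reproduction number; the observed trajectory can then be checked against the simulated percentile bounds, analogous to the paper's finding that observed points fell within the uncertainty bounds after the peak.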