Frequency: Quarterly | E-ISSN: 2230-8121 | P-ISSN: 2249-1295 | Abstracted/Indexed in: Ulrich's International Periodical Directory, Google Scholar, SCIRUS, Genamics JournalSeek
Published quarterly in print and online, "Inventi Impact: Biomedical Engineering" publishes high-quality unpublished research as well as high-impact pre-published research and reviews catering to the needs of researchers and professionals. This multidisciplinary journal covers all recent advances in biomedical technology, instrumentation, and administration. Papers are invited on theoretical and practical problems associated with the development of medical technology; the introduction of new engineering methods into public health, hospitals, and patient care; the improvement of diagnosis and therapy; biomedical information storage and retrieval; etc.
Background: Intensity inhomogeneity occurs in many medical images, especially in vessel images. Overcoming the difficulty due to image inhomogeneity is crucial for the segmentation of vessel images.
Methods: This paper proposes a localized hybrid level-set method for the segmentation of 3D vessel images. The proposed method integrates both local region information and boundary information for vessel segmentation, which is essential for the accurate extraction of tiny vessel structures. The local intensity information is first embedded into a region-based contour model and then incorporated into the level-set formulation of the geodesic active contour model. Compared with methods based on a preset global threshold, the use of automatically calculated local thresholds enables the extraction of local image information, which is essential for the segmentation of vessel images.
Results: Experiments carried out on the segmentation of 3D vessel images demonstrate the strengths of using locally specified dynamic thresholds in our level-set method. Furthermore, both qualitative comparisons and quantitative validations have been performed to evaluate the effectiveness of the proposed model.
Conclusions: Experimental results and validations demonstrate that the proposed model can achieve more promising segmentation results than the original hybrid method does.
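The contrast between a preset global threshold and locally calculated thresholds can be illustrated with a minimal sketch (not the authors' level-set implementation); here the local threshold is simply the mean intensity in a sliding window, a common illustrative choice, applied to a synthetic 3D volume with an intensity gradient.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_thresholds(image, window=7):
    """Locally specified thresholds: mean intensity in a sliding window.

    Only an illustrative stand-in for the automatically calculated local
    thresholds used inside the level-set formulation; the paper's exact
    local statistic is not reproduced here.
    """
    return uniform_filter(image.astype(float), size=window)

def threshold_map(image, window=7, global_threshold=None):
    """Compare a preset global threshold with locally adaptive ones."""
    if global_threshold is not None:
        return image > global_threshold              # one value for the whole volume
    return image > local_thresholds(image, window)   # one value per voxel

# Synthetic 3D volume whose background brightness drifts along one axis,
# mimicking the intensity inhomogeneity discussed above.
volume = np.random.rand(32, 32, 32) + np.linspace(0, 1, 32)[None, None, :]
mask_global = threshold_map(volume, global_threshold=1.0)
mask_local = threshold_map(volume, window=7)
print(mask_global.mean(), mask_local.mean())
```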
Image segmentation is an important task in areas ranging from image processing to image analysis. One of the simplest methods for image segmentation is thresholding. However, many thresholding methods are based on a bi-level thresholding procedure. These methods can be extended to multi-level thresholding, but they become computationally expensive because a large number of iterations is required to compute the optimum threshold values. To overcome this disadvantage, a new method based on a Shrinking Search Space (3S) algorithm is proposed in this paper. The method is applied to statistical bi-level thresholding approaches, including Entropy, Cross-entropy, Covariance, and Divergent Based Thresholding (DBT), to achieve multi-level thresholding, and is used for intracranial segmentation from brain MRI images. The paper demonstrates that the impact of the proposed 3S technique on the DBT method is more significant than on the other bi-level thresholding approaches. Comparing the results of the proposed approach against those of the Fuzzy C-Means (FCM) clustering method demonstrates better segmentation performance, improving the similarity index from 0.58 in FCM to 0.68 in the 3S method. The method also has a lower computational complexity, with a processing time of around 0.37 s compared with 157 s for FCM. In addition, the FCM approach does not always guarantee convergence, whilst the 3S technique always converges to the optimum result.
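For reference, the sketch below shows a plain exhaustive bi-level maximum-entropy threshold search over a gray-level histogram, the kind of procedure the 3S algorithm builds on; the shrinking-search-space step and the DBT criterion themselves are not reproduced and would replace the brute-force scan over candidate thresholds.

```python
import numpy as np

def entropy_bilevel_threshold(image, levels=256):
    """Exhaustive bi-level maximum-entropy (Kapur-style) threshold.

    The 3S algorithm would shrink the range of candidate thresholds
    instead of scanning all of them; that refinement is omitted here.
    """
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Toy bimodal 8-bit image: the threshold should fall between the two modes.
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 10, 5000)]).clip(0, 255)
print(entropy_bilevel_threshold(img))
```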
Background: Currently there are no standard models with which to evaluate the biomechanical performance of calcified tissue adhesives in vivo. We present, herein, a pre-clinical murine distal femoral bone model for evaluating tissue adhesives intended for use in both osseous and osteochondral tissue reconstruction.
In metabolomics data, like other -omics data, normalization is an important part of the data processing. The goal of normalization is to reduce the variation from non-biological sources (such as instrument batch effects), while maintaining the biological variation. Many normalization techniques make adjustments to each sample. One common method is to adjust each sample by its Total Ion Current (TIC), i.e. for each feature in the sample, divide its intensity value by the total for the sample. Because many of the assumptions of these methods are dubious in metabolomics data sets, we compare these methods to two methods that make adjustments separately for each metabolite, rather than for each sample. These two methods are the following: 1) for each metabolite, divide its value by the median level in bridge samples (BRDG); 2) for each metabolite, divide its value by the median across the experimental samples (MED). These methods were assessed by comparing the correlation of the normalized values to the values from targeted assays for a subset of metabolites in a large human plasma data set. The BRDG and MED normalization techniques greatly outperformed the other methods, which often performed worse than performing no normalization at all.
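A minimal numpy sketch of the three adjustments compared above; the feature-matrix layout (samples in rows, metabolites in columns) and the bridge-sample mask are assumptions made for illustration, not details taken from the study.

```python
import numpy as np

def tic_normalize(X):
    """Per-sample adjustment: divide each sample (row) by its Total Ion Current."""
    return X / X.sum(axis=1, keepdims=True)

def med_normalize(X):
    """Per-metabolite adjustment (MED): divide each metabolite (column)
    by its median across the experimental samples."""
    return X / np.median(X, axis=0, keepdims=True)

def brdg_normalize(X, is_bridge):
    """Per-metabolite adjustment (BRDG): divide each metabolite by its
    median level in the bridge samples."""
    return X / np.median(X[is_bridge], axis=0, keepdims=True)

# Toy example: 6 samples x 4 metabolites, first two samples treated as bridge samples.
X = np.random.lognormal(mean=2.0, sigma=0.5, size=(6, 4))
is_bridge = np.array([True, True, False, False, False, False])
print(tic_normalize(X).shape, med_normalize(X).shape, brdg_normalize(X, is_bridge).shape)
```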
Background
A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs.
Method
An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors.
Result
The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration.
Conclusion
The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and then further evaluating the method on this new training set.
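A hedged sketch of the cycle-length step described above: an amplitude envelope is taken via the Hilbert transform and the cardiac cycle length is read off as the first prominent peak of the envelope's autocorrelation. Function names, parameter values, and the simulated signal are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def cardiac_cycle_length(pcg, fs):
    """Estimate the cardiac cycle length (in samples) of a heart-sound signal.

    Steps: amplitude envelope -> autocorrelation -> first prominent peak
    after zero lag. No labelling of individual S1/S2 sounds is needed.
    """
    envelope = np.abs(hilbert(pcg))
    envelope = envelope - envelope.mean()
    acf = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]
    acf /= acf[0]                                    # normalize so acf[0] == 1
    # Restrict to physiologically plausible heart rates (30-200 bpm).
    lo, hi = int(fs * 60 / 200), int(fs * 60 / 30)
    peaks, _ = find_peaks(acf[lo:hi], prominence=0.1)
    return lo + peaks[0] if len(peaks) else None

# Toy example: simulated sound bursts repeating at about 1.2 Hz, fs = 1000 Hz.
fs = 1000
t = np.arange(0, 5, 1 / fs)
pcg = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
print(cardiac_cycle_length(pcg, fs))                 # roughly fs / 1.2 samples
```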
The stage of a tumor is sometimes hard to predict, especially early in its development. The size and complexity of its observations are the major problems that lead to false diagnoses. Even experienced doctors can make a mistake, causing terrible consequences for the patient. We propose a mathematical tool for the diagnosis of breast cancer. The aim is to help specialists make a decision on the likelihood of a patient's condition given the series of available observations. This may increase the patient's chances of recovery. With a multivariate-observation hidden Markov model, we describe the evolution of the disease by taking the geometric properties of the tumor as observable variables. The latent variable corresponds to the type of tumor: malignant or benign. The analysis of the covariance matrix makes it possible to delineate the zones of occurrence for each group belonging to a type of tumor. It is therefore possible to summarize the properties that characterize each of the tumor categories using the parameters of the model. These parameters highlight the differences between the types of tumors.
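A minimal sketch of a two-state Gaussian hidden Markov model over multivariate tumor-geometry observations, using the hmmlearn library as a stand-in; the feature choices and data below are hypothetical, and the authors' exact observation distributions and training scheme are not reproduced.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical observation sequences: rows are exams over time, columns are
# geometric tumor features (e.g. radius, perimeter, area) - illustrative only.
rng = np.random.default_rng(0)
benign = rng.normal(loc=[12.0, 78.0, 460.0], scale=[1.5, 9.0, 90.0], size=(40, 3))
malignant = rng.normal(loc=[17.5, 115.0, 980.0], scale=[3.0, 20.0, 350.0], size=(40, 3))
X = np.vstack([benign, malignant])
lengths = [40, 40]                       # two independent observation sequences

# Two hidden states, intended to capture the benign / malignant distinction,
# each emitting a full-covariance multivariate Gaussian.
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
model.fit(X, lengths)

# Decode the most likely hidden-state sequence for a new series of observations.
states = model.predict(malignant[:10])
print(states)
print(model.means_)                      # per-state feature means
print(model.covars_)                     # per-state covariance matrices
```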
Background: The pupillary light reflex characterizes the direct and consensual response of the eye to the perceived brightness of a stimulus. It has been used as an indicator of both neurological and optic nerve pathologies. As with other eye reflexes, this reflex constitutes an almost instantaneous movement and is linked to activation of the same midbrain area. The latency of the pupillary light reflex is around 200 ms, although the literature also indicates that the fastest eye reflexes last 20 ms. Therefore, a system with sufficiently high spatial and temporal resolutions is required for accurate assessment. In this study, we analyzed the pupillary light reflex to determine whether any small discrepancy exists between the direct and consensual responses, and to ascertain whether any other eye reflex occurs before the pupillary light reflex.
Methods: We constructed a binocular video-oculography system with two high-speed cameras that simultaneously focused on both eyes. This was then employed to assess the direct and consensual responses of each eye using our own algorithm, based on the Circular Hough Transform, to detect and track the pupil. Time parameters describing the pupillary light reflex were obtained from the time variation of the pupil radius. Eight healthy subjects (4 women, 4 men, aged 24–45) participated in this experiment.
Results: Our system, which has a resolution of 15 microns and 4 ms, obtained time parameters describing the pupillary light reflex that were similar to those reported in previous studies, with no significant differences between direct and consensual reflexes. Moreover, it revealed an incomplete reflex blink and an upward eye movement at around 100 ms that may correspond to Bell's phenomenon.
Conclusions: Direct and consensual pupillary responses do not show any significant temporal differences. The system and method described here could prove useful for further assessment of pupillary and blink reflexes. The resolution obtained revealed the existence, reported here, of an early incomplete blink and an upward eye movement.
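A hedged OpenCV sketch of pupil detection with the Circular Hough Transform mentioned in the Methods; the preprocessing steps and parameter values are illustrative assumptions rather than the authors' algorithm, which additionally tracks the pupil across high-speed frames to build the radius-versus-time curve.

```python
import cv2
import numpy as np

def detect_pupil(frame_gray, min_r=10, max_r=60):
    """Return (x, y, radius) of the most likely pupil circle, or None.

    Illustrative parameters only; a real high-speed setup would also track
    the circle across frames and reject blinks.
    """
    blurred = cv2.medianBlur(frame_gray, 5)          # suppress sensor noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=80, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None
    x, y, r = circles[0][0]                          # strongest candidate
    return float(x), float(y), float(r)

# Toy example: a synthetic dark "pupil" on a bright background.
frame = np.full((240, 320), 200, dtype=np.uint8)
cv2.circle(frame, (160, 120), 30, 0, thickness=-1)
print(detect_pupil(frame))                           # roughly (160, 120, 30)
```

Applying this per frame and plotting the returned radius against time would yield the radius time-variation from which the reflex latency and amplitude parameters are measured.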
Spirometers are important devices for following up patients with respiratory diseases. They are mainly located only at hospitals, with all the disadvantages that this can entail. This limits their use and, consequently, the supervision of patients. Research efforts focus on providing digital alternatives to spirometers. Although less accurate, the authors claim they are cheaper and usable by many more people worldwide at any given time and place. In order to further popularize the use of spirometers, we are interested in also providing user-friendly lung-capacity metrics instead of the traditional spirometry ones. The main objective, which is also the main contribution of this research, is to obtain a person's lung age by analyzing the properties of their exhalation by means of a machine-learning method. To perform this study, 188 samples of blowing sounds were used, taken from 91 males (48.4%) and 97 females (51.6%) aged between 17 and 67. A total of 42 spirometer and frequency-like features, including gender, were used. Traditional machine-learning algorithms used in voice recognition were applied to the most significant features. We found that the best classification algorithm was the Quadratic Linear Discriminant algorithm when no distinction was made between genders. By splitting the corpus into age groups of 5 consecutive years, accuracy, sensitivity, and specificity of, respectively, 94.69%, 94.45%, and 99.45% were found. Features in the audio of users' expiration that allowed them to be classified by their corresponding 5-year lung-age group were successfully detected. Our methodology can become a reliable tool for use with mobile devices to detect lung abnormalities or diseases.
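As a rough illustration of the classification setup described above, the sketch below trains scikit-learn's quadratic discriminant analysis (taken here as a stand-in interpretation of the "Quadratic Linear Discriminant algorithm") on 5-year lung-age groups. The feature matrix, labels, selected-feature count, and cross-validation protocol are all placeholders, not the study's data or exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 188 recordings, 10 "most significant" exhalation features
# retained after feature selection (placeholder values, not the study corpus).
rng = np.random.default_rng(0)
X = rng.normal(size=(188, 10))
age = rng.integers(17, 68, size=188)
y = (age // 5) * 5                        # 5-year lung-age group labels

# Small regularization keeps per-class covariance estimates well conditioned.
clf = make_pipeline(StandardScaler(),
                    QuadraticDiscriminantAnalysis(reg_param=0.1))
scores = cross_val_score(clf, X, y, cv=5)  # the paper's evaluation protocol may differ
print(scores.mean())
```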
Models that describe the formation of trace element status in the human organism are essential for correcting micromineral (trace element) deficiency. Direct assessment of trace element retention in the body is difficult because of the many internal mechanisms involved. Trace element retention is determined by the amount and the ratio of incoming and excreted substance. The concentration of trace elements in drinking water characterizes the intake, whereas the element concentration in urine characterizes the excretion. This system can be interpreted as three interrelated elements that are in equilibrium. Since many relationships in the system are not known, the use of standard mathematical models is difficult. An artificial neural network is well suited for constructing such a model because it can implicitly take into account all dependencies in the system and can process inaccurate and incomplete data. We created several neural network models to describe the retention of trace elements in the human body. On the basis of these models, we can calculate the microelement levels in the body from the trace element levels in drinking water and urine. These results can be used in health care to provide the population with safe drinking water.
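A minimal sketch of the kind of mapping described above, using a small scikit-learn feed-forward network that takes the element concentration in drinking water and in urine as inputs and returns an estimated retention level. The network size, the synthetic training rule, and the data are hypothetical illustrations, not the authors' models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data for one trace element: inputs are the concentration
# in drinking water and in urine; the target is a retention level in the body.
rng = np.random.default_rng(1)
water = rng.uniform(0.01, 1.0, size=300)
urine = rng.uniform(0.01, 1.0, size=300)
retention = 0.6 * water - 0.4 * urine + rng.normal(0, 0.02, size=300)  # toy rule

X = np.column_stack([water, urine])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X, retention)

# Estimate retention for a new water/urine measurement pair.
print(model.predict([[0.3, 0.5]]))
```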
The pulse wave contains human physiological and pathological information. Different people exhibit different characteristics, so determining the characteristic points of the pulse wave is meaningful for assessing human physiological health. It is common to extract the characteristic values of the pulse wave signal using a small-scale wavelet transform and then determine the locations of the characteristic points from the modulus maxima and modulus minima. Before determining the characteristic values by detecting modulus maxima and modulus minima, we need to determine every period of the pulse wave. This paper presents a new, more effective adaptive threshold determination method. It can accurately determine every period of the pulse wave, after which the characteristic values are extracted from the modulus maxima and modulus minima within each period. The method presented in this paper advances research on using the pulse wave for healthy living.
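A hedged sketch of the two stages described above: period detection on a synthetic pulse-wave-like signal using a simple amplitude threshold (a placeholder for the paper's adaptive rule), followed by a small-scale continuous wavelet transform whose modulus maxima and minima serve as candidate characteristic points. The wavelet, scale, and threshold fraction are illustrative choices.

```python
import numpy as np
import pywt
from scipy.signal import argrelextrema, find_peaks

# Synthetic pulse-wave-like signal: a periodic waveform at ~1.2 Hz, fs = 200 Hz.
fs = 200
t = np.arange(0, 10, 1 / fs)
pulse = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 3 + 0.02 * np.random.randn(len(t))

# 1) Period detection: peaks above a fraction of the signal amplitude.
#    The fixed fraction is a stand-in for the paper's adaptive threshold rule.
threshold = 0.5 * pulse.max()
peaks, _ = find_peaks(pulse, height=threshold, distance=int(0.4 * fs))
periods = np.diff(peaks) / fs                     # seconds per cardiac cycle

# 2) Small-scale wavelet transform, then modulus maxima / minima as
#    candidate characteristic points inside each detected period.
coeffs, _ = pywt.cwt(pulse, scales=[4], wavelet="gaus1")
w = coeffs[0]
maxima = argrelextrema(w, np.greater)[0]
minima = argrelextrema(w, np.less)[0]
print(len(peaks), periods.round(2)[:3], len(maxima), len(minima))
```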