KEYWORDS: Image segmentation, Medical imaging, Data modeling, Convolutional neural networks, Image processing algorithms and systems, Human-computer interaction, Evolutionary algorithms, Data integration, Computer simulations, Cancer
Deep learning-based segmentation algorithms for medical images require massive training datasets with accurate annotations, which are costly to produce because manual labeling from scratch takes considerable human effort. Interactive image segmentation is therefore important and may greatly improve the efficiency and accuracy of medical image labeling. Some interactive segmentation methods (e.g., Deep Extreme Cut and DeepGrow) can improve labeling with minimal interactive input. However, these methods use only the initial manual input and cannot exploit existing segmentation results (such as annotations produced by non-professionals or by conventional segmentation algorithms). In this paper, an interactive segmentation method is proposed that uses both existing segmentation results and human interactive input to refine the segmentation progressively. In this framework, the user only needs to click on the foreground or background of the target in the medical image; the algorithm adaptively learns the correlation between the clicks and the image and automatically completes the segmentation of the target. The main contributions of this paper are: (1) we adapted a convolutional neural network that takes medical image data and the user's clicks as input to achieve more accurate segmentation of medical images; (2) we designed an iterative training strategy so that the model can handle a varying number of input clicks; (3) we designed an algorithm based on false-positive and false-negative regions to simulate the user's clicks, providing sufficient training data. With the proposed method, users can easily extract a region of interest or modify segmentation results with a few clicks. Experimental results on six medical image segmentation tasks show that the proposed method achieves more accurate segmentation with at most five clicks.
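The click-simulation idea in contribution (3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `simulate_clicks` and the rule "sample from whichever error region is larger, then treat the clicked pixel as corrected" are assumptions made for the example.

```python
import numpy as np

def simulate_clicks(pred, gt, n_clicks=5, rng=None):
    """Simulate user clicks from the false-negative (missed foreground)
    and false-positive (spurious foreground) regions between an existing
    segmentation `pred` and the ground truth `gt` (hypothetical sketch).

    Returns a list of (row, col, label), where label 1 marks a
    foreground click and label 0 a background click.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pred = pred.copy()
    clicks = []
    for _ in range(n_clicks):
        fn = np.argwhere((gt == 1) & (pred == 0))  # missed foreground
        fp = np.argwhere((gt == 0) & (pred == 1))  # spurious foreground
        if len(fn) == 0 and len(fp) == 0:
            break  # segmentation already matches ground truth
        if len(fn) >= len(fp):
            r, c = fn[rng.integers(len(fn))]
            label = 1
        else:
            r, c = fp[rng.integers(len(fp))]
            label = 0
        clicks.append((int(r), int(c), label))
        # assume the user's correction fixes that pixel before the next click
        pred[r, c] = label
    return clicks
```

In training, each simulated click would be rendered into an extra input channel for the network alongside the image and the existing segmentation.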
A hybrid neural network system for the recognition of handwritten characters using SOFM, BP, and fuzzy networks is presented. The horizontal and vertical projections of the preprocessed character and four-directional edge projections are used as feature vectors. To improve recognition performance, the GAT algorithm is applied. With the hybrid neural network system, the recognition rate improves noticeably.
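The projection features described above are simple to compute. The sketch below, with the assumed helper name `projection_features`, shows the horizontal/vertical part; the four-directional edge projections would be computed the same way after directional edge filtering.

```python
import numpy as np

def projection_features(img):
    """Concatenate the horizontal and vertical projections of a
    binarized character image into one feature vector (sketch)."""
    b = (np.asarray(img) > 0).astype(float)
    # row sums (horizontal projection) followed by column sums (vertical)
    return np.concatenate([b.sum(axis=1), b.sum(axis=0)])
```

For an H x W character image this yields an H + W dimensional vector that feeds the SOFM/BP/fuzzy classifiers.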
Based on statistical theory and the pulse compression technique, a statistical method for reducing the range sidelobe (RSL) of random binary phase codes (RBPC) is presented, which differs from methods for decreasing the RSL of pseudorandom binary phase codes. Theoretical analysis and computer simulation show that the peak RSL can be suppressed below -30 dB, which effectively preserves the good electronic counter-countermeasure capability of RBPC radar. In addition, the maximum loss in the mainlobe-to-sidelobe ratio (MSR) caused by target Doppler is discussed. An approach to realizing the RSL reduction with digital signal processors is also given.
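The quantity being suppressed can be illustrated with a minimal numpy sketch. This is not the paper's statistical reduction method; it only computes the matched-filter (autocorrelation) output of a random binary phase code and its peak range-sidelobe level, the baseline the method improves on. The function name `peak_sidelobe_db` is an assumption for the example.

```python
import numpy as np

def peak_sidelobe_db(code):
    """Peak range-sidelobe level of a binary phase code after pulse
    compression (matched filtering), in dB relative to the mainlobe."""
    x = np.asarray(code, dtype=float)
    acf = np.correlate(x, x, mode="full")   # matched-filter output
    main = acf[len(x) - 1]                  # zero-lag mainlobe, equals len(x)
    side = np.delete(acf, len(x) - 1)       # all nonzero lags
    return 20 * np.log10(np.max(np.abs(side)) / main)

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1024)
psl = peak_sidelobe_db(code)
```

For a single random 1024-chip code the peak sidelobe typically sits near -20 dB; statistical averaging over many pulses, as the abstract describes, is what pushes it below -30 dB.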
The radial basis function network (RBFN) is analyzed, and a fuzzy radial basis function network (FRBFN) that is better suited to radar target recognition is proposed in this paper. Both networks are used as classifiers. The FRBFN uses a fuzzy clustering method to determine the structure of the net. The generalization properties of the two networks are discussed; theoretical analysis and experiments show that the FRBFN generalizes better. Doppler echoes of targets obtained from an operational surveillance radar are used in the experiments. The experimental results show that the classification rate of the FRBFN is higher than that of the RBFN. The proposed network is promising for radar target recognition.
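One plausible reading of "fuzzy clustering determines the structure of the net" is that fuzzy c-means picks the RBF centers. The sketch below assumes that reading; the function names and the choice of plain fuzzy c-means with Gaussian hidden units are illustrative, not the paper's exact construction.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, rng=None):
    """Plain fuzzy c-means: returns (centers, memberships).
    The centers would serve as RBF centers in an FRBFN (assumption)."""
    if rng is None:
        rng = np.random.default_rng(0)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))  # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

def rbf_features(X, centers, sigma=1.0):
    """Gaussian RBF hidden-layer activations for each sample."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

A linear output layer trained on `rbf_features(X, centers)` would then complete the classifier; the soft cluster memberships are what distinguish this structure selection from hard k-means center placement in a conventional RBFN.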