We compare axial 2D U-Nets with their 3D counterparts for pixel- and voxel-wise segmentation of five abdominal organs in CT scans. For each organ, two competing CNNs are trained and evaluated by five-fold cross-validation on 80 3D images. In a two-step approach, the relevant region containing the organ is first extracted using detected bounding boxes and then passed as input to the organ-specific U-Net. In addition, a random regression forest approach for the automatic detection of bounding boxes is summarized from our previous work. The results show that the 2D U-Net is mostly on par with the 3D U-Net or even outperforms it; for the kidneys in particular, it is significantly better suited in this study.
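The two-step concept described above — crop the CT volume to a detected bounding box, then feed the region to an organ-specific network — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the bounding-box layout `(z0, z1, y0, y1, x0, x1)` and the function names are assumptions:

```python
import numpy as np

def crop_to_bbox(volume, bbox):
    """Crop a 3D CT volume to a detected bounding box.
    bbox = (z0, z1, y0, y1, x0, x1) -- hypothetical layout."""
    z0, z1, y0, y1, x0, x1 = bbox
    return volume[z0:z1, y0:y1, x0:x1]

def axial_slices(volume):
    """Yield axial 2D slices, e.g. as inputs to a slice-wise 2D U-Net."""
    for z in range(volume.shape[0]):
        yield volume[z]

# Dummy CT volume (depth, height, width); a real pipeline would load DICOM/NIfTI data.
volume = np.zeros((120, 512, 512), dtype=np.float32)
roi = crop_to_bbox(volume, (20, 80, 100, 300, 120, 340))
slices = list(axial_slices(roi))
print(roi.shape, len(slices))  # (60, 200, 220) 60
```

A 3D U-Net would instead consume `roi` as a single volumetric input, which is the core difference the study evaluates.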
Multispectral person detection aims at automatically localizing humans in images that consist of multiple spectral bands. Usually, the visual-optical (VIS) and the thermal infrared (IR) spectra are combined to achieve higher robustness for person detection, especially in insufficiently illuminated scenes. This paper focuses on analyzing existing detection approaches for their generalization ability. Generalization is a key property of machine learning based detection algorithms that are supposed to perform well across different datasets. Inspired by recent literature on person detection in the VIS spectrum, we perform a cross-validation study to empirically determine the most promising dataset for training a well-generalizing detector. To this end, we pick one reference Deep Convolutional Neural Network (DCNN) architecture as well as three different multispectral datasets. The Region Proposal Network (RPN) originally introduced for object detection within the popular Faster R-CNN is chosen as the reference DCNN. The reason for this choice is that a stand-alone RPN is able to serve as a competitive detector for two-class problems such as person detection; furthermore, all current state-of-the-art approaches initially apply an RPN followed by individual classifiers. The three considered datasets are the KAIST Multispectral Pedestrian Benchmark, including recently published improved annotations for training and testing, the Tokyo Multi-spectral Semantic Segmentation dataset, and the OSU Color-Thermal dataset, including recently released annotations. The experimental results show that the KAIST Multispectral Pedestrian Benchmark with its improved annotations provides the best basis for training a DCNN with good generalization ability compared to the other two multispectral datasets. On average, this detection model achieves a log-average Miss Rate (MR) of 29.74% evaluated on the reasonable test subsets of the three analyzed datasets.
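The log-average Miss Rate (MR) reported above is the standard metric for pedestrian detection benchmarks. A minimal sketch of how it is typically computed — miss rates sampled at nine false-positives-per-image (FPPI) values log-spaced in [10⁻², 10⁰] and averaged in log space; the function name and interpolation details are assumptions following common benchmark practice, not the paper's exact evaluation code:

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Geometric mean of the miss rate sampled at nine reference
    FPPI points log-spaced between 1e-2 and 1e0.
    fppi must be sorted in increasing order for np.interp."""
    ref_fppi = np.logspace(-2, 0, 9)
    mr_at_ref = np.interp(ref_fppi, fppi, miss_rate)
    mr_at_ref = np.maximum(mr_at_ref, 1e-10)  # avoid log(0)
    return float(np.exp(np.mean(np.log(mr_at_ref))))

# Toy curve: a detector with a constant 30% miss rate yields MR = 0.3.
fppi = np.linspace(1e-2, 1.0, 50)
mr = np.full(50, 0.3)
print(round(log_average_miss_rate(fppi, mr), 2))  # 0.3
```

Averaging in log space weights the low-FPPI regime (few false alarms per image) more heavily than a plain arithmetic mean would, which is why the metric is preferred for safety-relevant detection tasks.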