
(Brain Tumor Segmentation, Part Two) Application of Cross-Modality Deep Feature Learning in Brain Tumor Segmentation

2022-09-23 09:23:35 · Xiao Yang, who doesn't want to type code

Title: Cross-Modality Deep Feature Learning for Brain Tumor Segmentation

Abstract

Medical image data for brain tumor segmentation is relatively limited in scale, but it carries rich information in its modality attributes. In this paper, a novel cross-modal deep feature learning framework is proposed to segment brain tumors from multimodal MRI data and exploit this multimodal information. The core idea is to mine the rich patterns of multimodal data to compensate for the limited data scale. The proposed framework includes two learning processes: a cross-modal feature transformation (CMFT) process and a cross-modal feature fusion (CMFF) process, which learn rich feature representations by transferring knowledge across modalities and by fusing knowledge from different modalities, respectively.

The goal of the network is to segment three nested regions: the whole tumor region, the tumor core region, and the enhancing tumor region.
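These three regions follow the nested definition used in BraTS-style brain tumor benchmarks. Below is a minimal sketch of how they can be derived from a single label map, assuming the common BraTS label convention (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor); the function name and convention are illustrative, not taken from the paper:

```python
import numpy as np

def brats_regions(label_map: np.ndarray):
    """Derive the three nested evaluation regions from a BraTS-style label map.

    Assumes the usual convention: 0 = background, 1 = necrotic/non-enhancing
    core, 2 = peritumoral edema, 4 = enhancing tumor.
    """
    whole_tumor = np.isin(label_map, [1, 2, 4])   # all tumor tissue
    tumor_core  = np.isin(label_map, [1, 4])      # whole tumor minus edema
    enhancing   = (label_map == 4)                # enhancing tumor only
    return whole_tumor, tumor_core, enhancing
```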

Due to the complexity of brain tumor data, directly concatenating multimodal data to form the network input, as in previous work, is neither the best way to fully exploit the basic knowledge in each modality nor an effective strategy for fusing multimodal knowledge.

Contributions of this paper:

1) Two modules are proposed: a cross-modal feature transformation process and a cross-modal feature fusion process.

2) A novel idea is proposed to learn useful feature representations through knowledge transfer across different modalities of data. To achieve this, a generative adversarial network (GAN) based learning scheme is constructed, which realizes the cross-modal feature transformation process without any manual annotation.

3) A feature fusion network is established that reuses the features learned in the feature transformation process, and a new fusion branch is proposed that uses the single-modality prediction results to guide the feature fusion process.

Method

Method overview

Specifically, in the cross-modal feature transformation (CMFT) process, a generative adversarial learning scheme is adopted to learn useful features that facilitate knowledge transfer across modalities. This enables the network to mine, from each modality, intrinsic patterns that are helpful for the brain tumor segmentation task.

In the cross-modal feature fusion (CMFF) process, a new deep neural network architecture is constructed that uses the deep features obtained in the cross-modal feature transformation process to deeply fuse the features captured from the different modalities and predict the brain tumor regions. This differs from the common strategies in which 1) the fusion is simply realized at the input stage, i.e., the multi-modality image data are concatenated to form the network input, or 2) the fusion is realized at the output stage, i.e., the segmentation results of the different modalities are integrated. The overall framework is shown below:

For brevity, only two modalities of data are used to illustrate the learning framework.
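As a point of reference for the contrast drawn above, here is a minimal sketch of the two common fusion baselines (input-stage concatenation and output-stage integration). The tensors, shapes, and averaging choice are purely illustrative, not taken from the paper:

```python
import torch

# Toy tensors standing in for two MRI modalities, e.g. T1 and FLAIR,
# with shape (batch, channel, depth, height, width). Purely illustrative.
t1    = torch.randn(1, 1, 16, 64, 64)
flair = torch.randn(1, 1, 16, 64, 64)

# 1) Input-stage fusion: concatenate modalities along the channel axis
#    and feed a single shared segmentation network.
input_fused = torch.cat([t1, flair], dim=1)               # (1, 2, 16, 64, 64)

# 2) Output-stage fusion: run a separate network per modality and merge
#    the per-modality predictions, e.g. by averaging probability maps.
prob_t1    = torch.sigmoid(torch.randn(1, 3, 16, 64, 64))  # stand-in outputs
prob_flair = torch.sigmoid(torch.randn(1, 3, 16, 64, 64))
output_fused = 0.5 * (prob_t1 + prob_flair)
```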

As shown in the figure above, in the cross-modal feature transformation (CMFT) process, the authors build two generators and two discriminators to transfer knowledge between the two modalities. Here, each generator is used to generate one modality's data from the other modality's data, and each discriminator is used to distinguish the generated data from real data.
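The two-generator/two-discriminator setup described here is an image-to-image translation scheme between modalities. The sketch below shows one plausible form of its generator objective, with adversarial terms plus a cycle-consistency term so that no manual annotation is needed; the exact losses and weights used in the paper may differ:

```python
import torch
import torch.nn.functional as F

def cmft_step(G_ab, G_ba, D_a, D_b, real_a, real_b, lam_cyc=10.0):
    """One illustrative generator update for the CMFT process: translate
    between modalities A and B with adversarial losses plus a
    cycle-consistency term. This is a sketch, not the paper's exact objective."""
    fake_b = G_ab(real_a)          # translate modality A -> B
    fake_a = G_ba(real_b)          # translate modality B -> A
    rec_a  = G_ba(fake_b)          # A -> B -> A reconstruction
    rec_b  = G_ab(fake_a)          # B -> A -> B reconstruction

    # Generators try to make the discriminators label the translations as real.
    logits_b = D_b(fake_b)
    logits_a = D_a(fake_a)
    adv = F.binary_cross_entropy_with_logits(logits_b, torch.ones_like(logits_b)) \
        + F.binary_cross_entropy_with_logits(logits_a, torch.ones_like(logits_a))

    # Cycle consistency keeps the translations content-preserving.
    cyc = F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b)
    return adv + lam_cyc * cyc
```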

In the cross-modal feature fusion (CMFF) process, the authors use a generator to predict the brain tumor segmentation regions from each modality and fuse the deep features learned in the process to obtain the final segmentation result. In the fusion branch, the authors design a new fusion scheme that uses the unimodal prediction results to guide feature fusion, producing a stronger feature representation that assists in segmenting the desired brain tumor regions.

Generator Architecture

The structure diagram of the generator is shown below:

This structure is also the architecture of the single-modality feature learning branch; the only difference between the two network branches is the final output layer. In the figure, the generator's output is drawn with a solid line, while the single-modality feature learning output is drawn with a dashed line. The deep features are taken from the last two convolutional layers, and the output of the single-modality feature learning branch is passed on to the cross-modal feature fusion process.

The generator adopts the U-Net architecture.
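As a rough illustration of such a U-Net generator, the sketch below uses an encoder-decoder with skip connections; the number of levels, channel widths, and normalization choices are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions, each followed by normalization and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Minimal U-Net-style encoder-decoder with skip connections.
    Only the final 1x1x1 output head would differ between the generator
    and the single-modality feature learning branch."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bott = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2  = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1  = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out  = nn.Conv3d(base, out_ch, 1)   # output head (generated image or logits)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b  = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```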

Discriminator Architecture

The specific structure of the discriminator is as follows:
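Since the figure is not reproduced here, the sketch below is only a stand-in: a plain strided convolutional (PatchGAN-style) discriminator that outputs patch-level real/fake logits. Its depth and channel widths are assumptions, not the paper's exact configuration:

```python
import torch.nn as nn

class ConvDiscriminator(nn.Module):
    """Stand-in convolutional discriminator: strided 3D convolutions that
    downsample the input and emit patch-level real/fake logits."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm3d(base * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.InstanceNorm3d(base * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base * 4, 1, 4, padding=1),   # patch-level logits
        )

    def forward(self, x):
        return self.net(x)
```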

Cross-Modal Feature Fusion

To realize the cross-modal feature fusion process, the authors establish a novel cross-modal feature fusion network for brain tumor segmentation. The newly designed fusion branch uses the single-modality prediction results to guide the feature fusion process, which not only conveniently transfers the features learned in the feature transformation process but also learns powerful fused features for segmenting the desired brain tumor regions. Its schematic diagram is as follows:
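The schematic is not reproduced here, so the following is only a hypothetical sketch of the guidance idea: each modality's prediction gates its own deep features before the gated features are fused into the final segmentation. The gating mechanism, layer sizes, and class names are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class GuidedFusionHead(nn.Module):
    """Hypothetical prediction-guided fusion: each modality's segmentation
    probabilities gate that modality's deep features, and the gated features
    are concatenated and convolved into the final prediction."""
    def __init__(self, feat_ch=16, num_modalities=2, out_ch=3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(feat_ch * num_modalities, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, out_ch, 1),
        )

    def forward(self, feats, preds):
        # feats: list of per-modality feature maps, each (B, feat_ch, D, H, W)
        # preds: list of per-modality segmentation logits, each (B, C, D, H, W)
        gated = [f * torch.sigmoid(p).max(dim=1, keepdim=True).values
                 for f, p in zip(feats, preds)]
        return self.fuse(torch.cat(gated, dim=1))
```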

Summary

This paper proposes a cross-modal deep feature learning framework for segmenting brain tumor regions from multimodal magnetic resonance scans. The authors propose to mine rich patterns across multimodal data, thereby bridging the gap in data scale. The learning framework consists of a cross-modal feature transformation (CMFT) process and a cross-modal feature fusion (CMFF) process. By constructing a generative adversarial network-based learning scheme to implement the cross-modal feature transformation process, the method is able to learn useful feature representations from knowledge transfer across modalities without any manual annotation. The cross-modal feature fusion process then transfers the features learned in the feature transformation process into a new fusion branch that guides a strong feature fusion process.
