Efficient Fusion of Medical Images Based on CNN
Menoufia Journal of Electronic Engineering Research
Article 13, Volume 30, Issue 2, July 2021, Pages 79-83
Document Type: Original Article
DOI: 10.21608/mjeer.2021.195522
Authors
Randa Ali 1; Fathi El-Sayed 2; Walid El-Shafai 3; Taha Elsayed Taha 2
1 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
2 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
3 Department of Electronics and Electrical Communication Engineering, Menoufia University, Menouf, Menoufia, Egypt
Abstract
Fusion of images has an important role in medical image diagnosis. Medical imaging systems have some limitations due to degradations and noise. Hence, it is necessary to apply techniques such as image fusion to integrate information from different image modalities into the fusion result. With this strategy, the image features become more defined, which in turn helps achieve better diagnosis. This paper presents an image fusion technique for different kinds of images to enhance the quality of brain tumor images. This technique depends on a convolutional neural network (CNN) with more layers to obtain high-quality images for better diagnosis. The learning process of the CNN helps measure the activity level and fusion ratio. The obtained fusion results are compared with those of traditional fusion methods. Different quality evaluation metrics are used in the assessment of the proposed technique.
Highlights
This paper introduces a complete optimized fusion system that achieves image fusion by learning a CNN. The obtained results are good compared with those of other techniques. In addition, the processing time is low. So, the proposed technique is a recommended solution for fusing images of different kinds, such as medical images of different modalities.
Keywords
Image fusion; PCA; CNN; DWT; DT-CWT
Full Text
Medical imaging is a trend of recent decades that can help specialists achieve better diagnosis. Magnetic Resonance (MR) and Computed Tomography (CT) images have different characteristics and limitations. For example, MR images contribute much more detail about soft tissues, while CT images give information about bone structures [1]. The main limitation of each modality alone is the absence of information about both soft tissues and dense structures. That is why the fusion of registered MR and CT images is necessary for information integration. Fusion of medical images helps in disease diagnosis, so the fusion should be performed carefully. The resolution and details of medical images should be high [4]. The most important characteristics of the organ of interest should be represented in the fused images, and our rule is to select the best fusion technique.

In the convolutional layer, each output feature map is computed as

yj = max(0, Σi (xi ∗ kij) + bj)

where xi is the i-th input feature map, yj is the j-th output feature map of the convolutional layer, kij is the convolutional kernel between xi and yj, and bj is the bias. The symbol ∗ indicates the convolution operation, and max(0, ·) is the ReLU activation function used in the CNN, a non-linear function [8]. A convolution kernel is a small matrix used for edge detection, sharpening, blurring, and embossing; this is done by convolving the image with the kernel. Another type of feature extraction is pooling, which is used to decrease the size of the image by combining neighboring pixels into a single pixel. Average pooling and max-pooling are the pooling operations performed in the CNN. In the fourth step, consistency verification is implemented through pixel-wise weighted averaging to get the fused image with the final decision map. As the input and output data of the fully-connected layers have fixed dimensions, we reshape the parameters after the conversion to feed the fully-connected layer from the convolutional layer.
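As an illustrative sketch only (not the paper's implementation), the per-layer computation yj = ReLU(Σi xi ∗ kij + bj) and 2×2 max-pooling described above can be written in NumPy as follows; the function names are ours:

```python
import numpy as np

def conv2d_valid(x, k):
    """2-D 'valid' convolution of a single-channel map x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    kf = np.flipud(np.fliplr(k))  # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf)
    return out

def conv_layer(xs, kernels, biases):
    """y_j = ReLU(sum_i x_i * k_ij + b_j); kernels is indexed [i][j]."""
    ys = []
    for j, b in enumerate(biases):
        acc = sum(conv2d_valid(x, kernels[i][j]) for i, x in enumerate(xs))
        ys.append(np.maximum(0.0, acc + b))  # ReLU activation
    return ys

def max_pool2(x):
    """2x2 max-pooling: keep the largest value in each 2x2 block."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

For instance, a 3×3 kernel with a single central 1 reproduces the interior of the input map, and max-pooling halves each spatial dimension.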
The network contains max-pooling and convolutional layers and can treat input images of different sizes to get the dense predictions [11]. The output score map shows the focus property of the patch pairs of the input images to the network. To compare the similarity of the patches, we use a Siamese model [12] for its advantages over other models, including the 2-channel and pseudo-Siamese models. The Siamese model is easier to train in image fusion, leading to easy convergence.

3.2 Fusion Model

In this paper, the threshold of the input image is set to 0.01 × W × H, where W and H are the width and height of every input image, respectively.

6 Comparison with Other Techniques

7 Quantitative Evaluations
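The fusion step above, pixel-wise weighted averaging driven by the network's score map, together with the 0.01 × W × H area threshold, can be sketched as follows. This is a simplified illustration under our own naming, and the small-region removal of consistency verification is omitted:

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, score):
    """Pixel-wise weighted average of two registered source images.

    `score` stands in for the network's output map (values in [0, 1]),
    playing the role of the final decision map.
    """
    w = np.clip(score, 0.0, 1.0)
    return w * img_a + (1.0 - w) * img_b

def area_threshold(h, w, ratio=0.01):
    """Minimum region area used in consistency verification: 0.01 * W * H."""
    return ratio * w * h
```

A score of 1 at a pixel copies that pixel from the first source image, 0 copies it from the second, and intermediate scores blend the two; regions of the decision map smaller than the area threshold would be discarded before this averaging.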
References
1- H. M. El-Hoseny, S. M. El-Rabaie, W. A. El-Rahman, F. E. Abd El-Samie, "Medical image fusion techniques based on combined discrete transform domains", National Radio Science Conference (NRSC), IEEE, pp. 471-480, 2017.
2- K. Parmar, R. Kher, "A comparative analysis of multimodality medical image fusion methods", Sixth Asia Modelling Symposium, IEEE, pp. 93-97, 2012.
3- F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E. Abd El-Samie, "Curvelet fusion of MR and CT images", Progress In Electromagnetics Research C, vol. 3, pp. 215-224, 2008.
4- Y. L. Ping, L. B. Sheng, Z. D. Hua, "Novel image fusion algorithm with novel performance evaluation method", Systems Engineering and Electronics, pp. 509-513, 2007.
5- Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, pp. 2278-2324, 1998.
6- https://www.quora.com/What-is-a-receptive-field-in-a-convolutional-neural-network, accessed 20-12-2019.
7- Y. Liu, X. Chen, H. Peng, Z. Wang, "Multi-focus image fusion with a deep convolutional neural network", Information Fusion, vol. 36, pp. 191-207, 2017.
8- V. Nair, G. Hinton, "Rectified linear units improve restricted Boltzmann machines", 27th International Conference on Machine Learning, pp. 807-814, 2010.
9- S. Li, J. Kwok, Y. Wang, "Multi-focus image fusion using artificial neural networks", Pattern Recognition Letters, vol. 23, no. 8, pp. 985-997, 2002.
10- J. Long, E. Shelhamer, T. Darrell, "Fully convolutional networks for semantic segmentation", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
11- P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, Y. LeCun, "OverFeat: integrated recognition, localization and detection using convolutional networks", arXiv preprint, vol. 4, pp. 1-16, 2014.
12- S. Zagoruyko, N. Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE Conference on Computer Vision and Pattern Recognition, pp. 4353-4361, 2015.
13- S. S. Bedi, "Contrast enhancement for PCA fusion of medical images", Journal of Global Research in Computer Science, pp. 25-29, 2013.
14- https://www.quora.com/What-is-spatial-domain-in-image-processing, accessed 20-1-2020.