Efficiency vs. Accuracy: A Comparative Analysis of Lightweight MobileNetV2 and VGG16 for Brain Tumor MRI Classification Using Deep Feature Extraction

Authors

  • Raja Anan Nasution, Computer Science, Faculty of Engineering and Computer Science, Universitas Potensi Utama
  • Mhd. Furqan, Department of Computer Science, Faculty of Science and Technology, Universitas Islam Negeri Sumatera Utara, Indonesia
  • Rika Rosnelly, Computer Science, Faculty of Engineering and Computer Science, Universitas Potensi Utama

DOI:

https://doi.org/10.15408/jti.v19i1.45002

Keywords:

MRI Classification, Brain Tumor, CNN, MobileNetV2, VGG16, PCA

Abstract

Brain tumor detection using magnetic resonance imaging (MRI) is a crucial task for early diagnosis and treatment planning, requiring models that are not only accurate but also computationally efficient. This study presents a comparative analysis of two Convolutional Neural Network (CNN) architectures, MobileNetV2 and VGG16, combined with Principal Component Analysis (PCA) for deep feature dimensionality reduction. The dataset consists of 253 brain MRI images (155 tumor and 98 non-tumor), which were preprocessed and divided into training and testing sets using an 80:20 stratified split. Experimental results show that MobileNetV2 with PCA achieves an accuracy of 86.27%, with a precision of 87.50% and a recall of 90.32% for the tumor class, demonstrating balanced performance in classifying tumor and non-tumor images. VGG16 with the same PCA configuration achieves an accuracy of 64.71%, with a recall of 100% for the tumor class but only 10% for the non-tumor class. These findings suggest that extreme dimensionality reduction affects deep feature representations differently depending on the original feature structure. The results show that MobileNetV2 provides a better balance between accuracy and feature compactness under aggressive dimensionality reduction, making it more suitable for resource-constrained medical image classification scenarios.
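The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic vectors stand in for the CNN deep features (1280 dimensions, matching MobileNetV2's pooled embedding size), PCA is implemented via SVD, and a simple nearest-centroid rule stands in for whichever classifier head the paper uses. The target dimensionality, cluster parameters, and helper names are all illustrative assumptions.

```python
# Sketch of the paper's pipeline on synthetic data: "deep features" for
# 253 images (155 tumor, 98 non-tumor, matching the dataset sizes in the
# abstract) are split 80:20 with stratification, reduced with PCA, and
# classified. Dimensions and the classifier are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_tumor, n_non, dim = 155, 98, 1280  # 1280 = MobileNetV2 pooled feature size
X = np.vstack([
    rng.normal(0.5, 1.0, (n_tumor, dim)),   # synthetic tumor cluster
    rng.normal(-0.5, 1.0, (n_non, dim)),    # synthetic non-tumor cluster
])
y = np.array([1] * n_tumor + [0] * n_non)   # 1 = tumor, 0 = non-tumor

def stratified_split(y, test_frac=0.2, seed=0):
    """80:20 split that preserves the class ratio, as in the paper."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        rng.shuffle(idx)
        n_test = int(round(len(idx) * test_frac))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

def pca_fit(X, n_components):
    """PCA via SVD of the mean-centred data; returns (mean, components)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

tr, te = stratified_split(y)
mean, comps = pca_fit(X[tr], n_components=50)  # aggressive reduction: 1280 -> 50
Z_tr = (X[tr] - mean) @ comps.T
Z_te = (X[te] - mean) @ comps.T

# Nearest-centroid classification on the reduced features.
centroids = np.array([Z_tr[y[tr] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z_te[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
acc = (pred == y[te]).mean()
print(f"reduced dim: {Z_te.shape[1]}, test accuracy: {acc:.2%}")
```

Note that fitting PCA on the training split only (as above) avoids leaking test-set statistics into the projection; the 20% test split here contains 31 tumor and 20 non-tumor images, consistent with per-class recalls reported in the abstract.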

Published

2026-04-28

How to Cite

Efficiency vs. Accuracy: A Comparative Analysis of Lightweight MobileNetV2 and VGG16 for Brain Tumor MRI Classification Using Deep Feature Extraction. (2026). JURNAL TEKNIK INFORMATIKA, 19(1), 183-191. https://doi.org/10.15408/jti.v19i1.45002