Optimized Detection of Red Devil Fish in Low-Quality Underwater Images from Lake Toba Using a Hybrid CNN and Transfer Learning Approach
DOI:
https://doi.org/10.56313/jictas.v4i1.429

Keywords:
CLAHE, Convolutional Neural Network, Edge AI, Red Devil Fish, Transfer Learning

Abstract
The detection of freshwater fish in turbid underwater environments presents significant challenges due to poor image quality caused by low lighting, suspended particles, and visual noise. This study proposes an optimized detection model for Amphilophus labiatus (Red Devil fish) in the murky waters of Lake Toba, Indonesia, using a hybrid Convolutional Neural Network (CNN) integrated with transfer learning and visual enhancement techniques. The proposed architecture combines MobileNetV2 and ResNet50 backbones with CLAHE (Contrast Limited Adaptive Histogram Equalization) and median filtering to improve image clarity and feature extraction. A custom dataset comprising 3,500 annotated underwater images was used to train and evaluate the model. The hybrid model achieved a detection accuracy of 96.1%, a precision of 95.6%, a recall of 94.8%, and a mean Average Precision (mAP@0.5) of 0.941, outperforming baseline models such as YOLOv5 and Faster R-CNN. Visual diagnostics and Grad-CAM attention maps confirm the model's ability to focus on key anatomical features under varying image conditions. The architecture is optimized for real-time deployment on edge-AI devices, supporting conservation efforts and biodiversity monitoring in freshwater ecosystems.
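The preprocessing stage described above (median filtering for noise suppression, then CLAHE for local contrast) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the tile size, clip limit, and processing order are assumptions, and full CLAHE additionally interpolates bilinearly between neighboring tile mappings, which is omitted here for brevity (production code would typically use OpenCV's `cv2.createCLAHE` and `cv2.medianBlur` instead).

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter via padded sliding windows (uint8 grayscale)."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(np.uint8)

def clipped_equalize(tile, clip_limit=40):
    """Clipped histogram equalization for one tile (the core CLAHE step):
    clip the histogram, redistribute the excess, then build a LUT from the CDF."""
    hist = np.bincount(tile.ravel(), minlength=256)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute clipped mass
    cdf = hist.cumsum()
    lut = np.round((cdf - cdf.min()) / max(cdf[-1] - cdf.min(), 1) * 255)
    return lut.astype(np.uint8)[tile]

def simple_clahe(img, tiles=8, clip_limit=40):
    """Tile-wise clipped equalization over a tiles x tiles grid.
    (Real CLAHE also interpolates between tile LUTs to avoid seams.)"""
    out = img.copy()
    th, tw = img.shape[0] // tiles, img.shape[1] // tiles
    for i in range(tiles):
        for j in range(tiles):
            r, c = i * th, j * tw
            out[r:r + th, c:c + tw] = clipped_equalize(
                img[r:r + th, c:c + tw], clip_limit)
    return out

# Denoise first, then boost local contrast -- the order used in the abstract.
frame = np.random.default_rng(0).integers(0, 120, (256, 256), dtype=np.uint8)
enhanced = simple_clahe(median_filter3(frame))
```

The order matters: filtering before equalization prevents CLAHE from amplifying the very sensor noise and suspended-particle speckle the median filter is meant to remove.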
Copyright (c) 2025 Journal of ICT Applications and System

This work is licensed under a Creative Commons Attribution 4.0 International License.

