Detection of Vulgarity in Anime Character: Implementation of Detection Transformer

Amalia Suciati, Dian Kartika Sari, Andi Prademon Yunus, Nuuraan Rizqy Amaliah

Abstract


Vulgar and pornographic content has become a widespread issue on the internet, appearing in various fields, including anime. Vulgar and pornographic content in anime is not limited to the sexuality genre; anime from general genres such as action and adventure also contains vulgar visuals. The main focus of this research is the implementation of the Detection Transformer (DETR) object detection method to identify vulgar parts of anime characters, particularly female characters. DETR is a deep learning model designed for object detection tasks, adapting the attention mechanism of Transformers. The dataset consists of 800 images taken from popular anime, selected by viewership ranking, which were augmented to a total of 1,689 images. The research involved training models with different backbones, specifically ResNet-50 and ResNet-101, each with dilated convolution applied at different stages. The results show that the DETR model with a ResNet-50 backbone and dilated convolution at stage 5 outperformed the other backbone and dilation configurations, achieving a mean Average Precision of 0.479 and of 0.875. Another finding is that while dilated convolution improves small-object detection by enlarging the receptive field, applying it in early stages tends to reduce spatial detail and harms performance on medium and large objects. However, the primary focus of this research is not solely on achieving the highest performance but on exploring the potential of transformer-based models, such as DETR, for detecting vulgar content in anime. DETR benefits from its ability to capture spatial context through self-attention mechanisms, offering potential for further development with larger datasets, more complex architectures, or training at larger data scales.
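The abstract's finding about dilated convolution can be made concrete with a short sketch (illustrative only, not from the paper): a convolution with kernel size k and dilation rate d covers an effective kernel of k + (k − 1)(d − 1) input positions, so dilating a late backbone stage widens the receptive field without adding parameters. This mirrors DETR's "DC5" variant, which applies dilation in ResNet stage 5.

```python
# Illustrative sketch (not from the paper): how dilation enlarges the
# receptive field of a convolutional backbone.

def effective_kernel(k: int, d: int) -> int:
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride, dilation) conv layers."""
    rf, jump = 1, 1  # start from a single input pixel
    for k, s, d in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf

# A plain 3x3 conv vs. the same conv with dilation 2: the dilated version
# covers a 5x5 input region with the same nine weights.
print(effective_kernel(3, 1))  # 3
print(effective_kernel(3, 2))  # 5

# Three stacked 3x3 convs (stride 1): dilating only the last layer widens
# the receptive field from 7 to 9 with no extra parameters.
print(receptive_field([(3, 1, 1)] * 3))                    # 7
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 2)]))  # 9
```

The trade-off noted in the abstract follows from the same arithmetic: dilating an early stage inflates the receptive field of every subsequent layer, diluting the fine spatial detail that medium and large objects rely on.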


Keywords


anime; detection transformer; object detection; transformer; vulgarity.

Full Text:

PDF

References


D. Brou, “Searching for Freedom: An Investigation of Form in Japanese Storytelling and Animation,” OpenSIUC, Fall 2023.

“From the Holy Land to the Homeland: The Impact of Anime Broadcasts on Economic Growth,” Waseda Institute of Political Economy, Apr. 2024.

G. Liu, “Influence of Digital Media Technology on Animation Design,” J. Phys. Conf. Ser., vol. 1533, no. 4, 2020, doi: 10.1088/1742-6596/1533/4/042032.

F. W. Paulus, F. Nouri, S. Ohmann, E. Möhler, and C. Popow, “The impact of Internet pornography on children and adolescents: A systematic review,” Encephale, vol. 50, pp. 649–662, 2024, doi: 10.1016/j.encep.2023.12.004.

Z. Achmad, S. Mardliyah, and H. Pramitha, “The Importance of Parental Control of Teenagers in Watching Anime with Pornographic Content on the Internet,” vol. 138, no. IcoCSPA 2017, pp. 81–84, 2019, doi: 10.2991/icocspa-17.2018.22.

C. Massaccesi, “Anime: A Critical Introduction, by Rayna Denison,” Alphav. J. Film Screen Media, no. 17, pp. 259–264, 2019, doi: 10.33178/alpha.17.25.

I. H. Sarker, “Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions,” SN Comput. Sci., vol. 2, no. 6, pp. 1–20, 2021, doi: 10.1007/s42979-021-00815-1.

N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-End Object Detection with Transformers,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 12346 LNCS, pp. 213–229, 2020, doi: 10.1007/978-3-030-58452-8_13.

H. Zhao, S. Zhang, X. Peng, and Z. Lu, “Improved object detection method for autonomous driving based on DETR,” 2017.

Y. Zhao et al., “DETRs Beat YOLOs on Real-time Object Detection,” 2023, doi: 10.1109/CVPR52733.2024.01605.

X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable Transformers for End-to-End Object Detection,” ICLR 2021 - 9th Int. Conf. Learn. Represent., pp. 1–16, 2021.

A. Biró, K. T. Jánosi-Rancz, L. Szilágyi, A. I. Cuesta-Vargas, J. Martín-Martín, and S. M. Szilágyi, “Visual Object Detection with DETR to Support Video-Diagnosis Using Conference Tools,” Appl. Sci., vol. 12, no. 12, Jun. 2022, doi: 10.3390/app12125977.

E. Suherman, B. Rahman, D. Hindarto, and H. Santoso, “Implementation of ResNet-50 on End-to-End Object Detection (DETR) on Objects,” SinkrOn, vol. 8, no. 2, pp. 1085–1096, 2023, doi: 10.33395/sinkron.v8i2.12378.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention Is All You Need,” Int. Conf. Inf. Knowl. Manag. Proc., no. Nips, pp. 4752–4758, 2023, doi: 10.1145/3583780.3615497.

S. Narimani, S. Roth Hoff, K. Dæhli Kurz, K. I. Gjesdal, J. Geisler, and E. Grøvik, “Comparative analysis of deep learning architectures for breast region segmentation with a novel breast boundary proposal,” Sci. Rep., vol. 15, no. 1, pp. 1–11, 2025, doi: 10.1038/s41598-025-92863-3.

P. N. Malang, M. Sarosa, P. N. Malang, and P. N. Malang, “Comparison of Faster R-CNN ResNet-50 and ResNet-101 Methods for Recycling Waste Detection,” Int. J. Comput. Appl. Technol. Res., no. December, 2023, doi: 10.7753/ijcatr1212.1006.

R. Padilla, S. L. Netto, and E. A. B. Da Silva, “A Survey on Performance Metrics for Object-Detection Algorithms,” Int. Conf. Syst. Signals, Image Process., vol. 2020-July, no. July, pp. 237–242, 2020, doi: 10.1109/IWSSIP48289.2020.9145130.

R. Padilla, W. L. Passos, T. L. B. Dias, S. L. Netto, and E. A. B. Da Silva, “A comparative analysis of object detection metrics with a companion open-source toolkit,” Electron., vol. 10, no. 3, pp. 1–28, 2021, doi: 10.3390/electronics10030279.

J. Contreras, M. Ceberio, and V. Kreinovich, “Why dilated convolutional neural networks: A proof of their optimality,” Entropy, vol. 23, no. 6, 2021, doi: 10.3390/e23060767.

B. Liu, F. He, S. Du, J. Li, and W. Liu, “An advanced YOLOv3 method for small object detection,” J. Intell. Fuzzy Syst., vol. 45, no. 4, pp. 5807–5819, 2023, doi: 10.3233/JIFS-224530.




DOI: https://doi.org/10.15408/jti.v18i1.46064



Copyright (c) 2025 Amalia Suciati, Dian Kartika Sari, Andi Prademon Yunus, Nuuraan Rizqy Amaliah

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

3rd Floor, Dept. of Informatics, Faculty of Science and Technology, UIN Syarif Hidayatullah Jakarta
Jl. Ir. H. Juanda No.95, Cempaka Putih, Ciputat Timur.
Kota Tangerang Selatan, Banten 15412
Tlp/Fax: +62 21 74019 25/ +62 749 3315
Handphone: +62 8128947537
E-mail: jurnal-ti@apps.uinjkt.ac.id


Jurnal Teknik Informatika by Prodi Teknik Informatika Universitas Islam Negeri Syarif Hidayatullah Jakarta is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at http://journal.uinjkt.ac.id/index.php/ti.
