ISSN: 2252-8938
Int J Artif Intell, Vol. 14, No. 4, August 2025: 3311-3323
[6] M. G. J. Meijers and D. I. Han, “The 3D food printing pyramid of gastronomy: a structured approach towards a future research
agenda,” International Journal of Gastronomy and Food Science, vol. 37, 2024, doi: 10.1016/j.ijgfs.2024.100969.
[7] M. Waseem, A. U. Tahir, and Y. Majeed, “Printing the future of food: the physics perspective on 3D food printing,” Food
Physics, vol. 1, p. 100003, 2024, doi: 10.1016/j.foodp.2023.100003.
[8] T. Pereira, S. Barroso, and M. M. Gil, “Food texture design by 3D printing: a review,” Foods, vol. 10, no. 2, pp. 1–26, 2021,
doi: 10.3390/foods10020320.
[9] A. O. Agunbiade et al., “Potentials of 3D extrusion-based printing in resolving food processing challenges: a perspective review,”
Journal of Food Process Engineering, vol. 45, no. 4, pp. 1–31, 2022, doi: 10.1111/jfpe.13996.
[10] T. Sivarupan et al., “A review on the progress and challenges of binder jet 3D printing of sand moulds for advanced casting,”
Additive Manufacturing, vol. 40, 2021, doi: 10.1016/j.addma.2021.101889.
[11] W. L. Ng, G. L. Goh, G. D. Goh, J. S. J. Ten, and W. Y. Yeong, “Progress and opportunities for machine learning in materials and
processes of additive manufacturing,” Advanced Materials, vol. 36, no. 34, 2024, doi: 10.1002/adma.202310006.
[12] L. Zhu, P. Spachos, E. Pensini, and K. N. Plataniotis, “Deep learning and machine vision for food processing: a survey,” Current
Research in Food Science, vol. 4, pp. 233–249, 2021, doi: 10.1016/j.crfs.2021.03.009.
[13] H. Qassim, D. Feinzimer, and A. Verma, “Residual squeeze VGG16,” arXiv preprint, pp. 1–11, 2017.
[14] T. Carvalho, E. R. S. De Rezende, M. T. P. Alves, F. K. C. Balieiro, and R. B. Sovat, “Exposing computer generated images by
eye’s region classification via transfer learning of VGG19 CNN,” 2017 16th IEEE International Conference on Machine
Learning and Applications (ICMLA), Cancun, Mexico, 2017, pp. 866–870, doi: 10.1109/ICMLA.2017.00-47.
[15] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: inverted residuals and linear bottlenecks,” 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 4510–4520, doi:
10.1109/CVPR.2018.00474.
[16] S. Kalvankar, H. Pandit, and P. Parwate, “Galaxy morphology classification using EfficientNet architectures,” arXiv preprint,
pp. 1–13, 2021.
[17] C. Wang et al., “Pulmonary image classification based on Inception-v3 transfer learning model,” IEEE Access, vol. 7,
pp. 146533–146541, 2019, doi: 10.1109/ACCESS.2019.2946000.
[18] I. Z. Mukti and D. Biswas, “Transfer learning based plant diseases detection using ResNet50,” 2019 4th International
Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 2019, pp. 1–6, doi:
10.1109/EICT48899.2019.9068805.
[19] A. Hatamizadeh, H. Yin, G. Heinrich, J. Kautz, and P. Molchanov, “Global context vision transformers,” Proceedings of Machine
Learning Research, vol. 202, pp. 12633–12646, 2023.
[20] R. Li, C. Xiao, Y. Huang, H. Hassan, and B. Huang, “Deep learning applications in computed tomography images for pulmonary
nodule detection and diagnosis: a review,” Diagnostics, vol. 12, no. 2, 2022, doi: 10.3390/diagnostics12020298.
[21] J. Maurício, I. Domingues, and J. Bernardino, “Comparing vision transformers and convolutional neural networks for image
classification: a literature review,” Applied Sciences, vol. 13, no. 9, 2023, doi: 10.3390/app13095521.
[22] X. Fu et al., “Crop pest image recognition based on the improved ViT method,” Information Processing in Agriculture, vol. 11,
no. 2, pp. 249–259, 2024, doi: 10.1016/j.inpa.2023.02.007.
[23] F. Baumann and D. Roller, “Vision based error detection for 3D printing processes,” MATEC Web of Conferences, vol. 59,
pp. 3–9, 2016, doi: 10.1051/matecconf/20165906003.
[24] S. M. Rachmawati, M. A. Paramartha Putra, T. Jun, D. S. Kim, and J. M. Lee, “Fine-tuned CNN with data augmentation for 3D
printer fault detection,” 2022 13th International Conference on Information and Communication Technology Convergence
(ICTC), Jeju Island, Republic of Korea, 2022, pp. 902–905, doi: 10.1109/ICTC55196.2022.9952484.
[25] C. Mawardi, A. Buono, K. Priandana, and H. Herianto, “Performance analysis of ResNet50 and Inception-v3 image classification
for defect detection in 3D food printing,” International Journal on Advanced Science, Engineering and Information Technology,
vol. 14, no. 2, pp. 798–804, 2024, doi: 10.18517/ijaseit.14.2.19863.
[26] K. Paraskevoudis, P. Karayannis, and E. P. Koumoulos, “Real-time 3D printing remote defect detection (stringing) with computer
vision and artificial intelligence,” Processes, vol. 8, no. 11, pp. 1–15, 2020, doi: 10.3390/pr8111464.
[27] H. Baumgartl, J. Tomas, R. Buettner, and M. Merkel, “A deep learning-based model for defect detection in laser-powder bed
fusion using in-situ thermographic monitoring,” Progress in Additive Manufacturing, vol. 5, no. 3, pp. 277–285, 2020,
doi: 10.1007/s40964-019-00108-3.
[28] K. Prabha et al., “Recent development, challenges, and prospects of extrusion technology,” Future Foods, vol. 3, Jun. 2021, doi:
10.1016/j.fufo.2021.100019.
[29] M. Shahbazi and H. Jäger, “Current status in the utilization of biobased polymers for 3D printing process: a systematic review of
the materials, processes, and challenges,” ACS Applied Bio Materials, vol. 4, no. 1, pp. 325–369, 2021,
doi: 10.1021/acsabm.0c01379.
[30] M. Salvi, U. R. Acharya, F. Molinari, and K. M. Meiburger, “The impact of pre- and post-image processing techniques on deep
learning frameworks: a comprehensive review for digital pathology image analysis,” Computers in Biology and Medicine,
vol. 128, 2021, doi: 10.1016/j.compbiomed.2020.104129.
[31] G. Ghiasi, X. Gu, Y. Cui, and T. Lin, “Scaling open-vocabulary image segmentation with image-level labels,” in Computer Vision
– ECCV 2022, Cham, Springer, 2022, pp. 540–557, doi: 10.1007/978-3-031-20059-5_31.
[32] N. Mohammad, A. M. Muad, R. Ahmad, and M. Y. P. M. Yusof, “Accuracy of advanced deep learning with TensorFlow and
Keras for classifying teeth developmental stages in digital panoramic imaging,” BMC Medical Imaging, vol. 22, no. 1, pp. 1–13,
2022, doi: 10.1186/s12880-022-00794-6.
[33] K. Alomar, H. I. Aysel, and X. Cai, “Data augmentation in classification and segmentation: a survey and new strategies,” Journal
of Imaging, vol. 9, no. 2, 2023, doi: 10.3390/jimaging9020046.
[34] K. S. R. Sekhar, T. R. Babu, G. Prathibha, K. Vijay, and L. C. Ming, “Dermoscopic image classification using CNN with
handcrafted features,” Journal of King Saud University – Science, vol. 33, no. 6, pp. 1–9, Sep. 2021, doi:
10.1016/j.jksus.2021.101550.
[35] N. A. M. Roslan, N. M. Diah, Z. Ibrahim, Y. Munarko, and A. E. Minarno, “Automatic plant recognition using convolutional
neural network on Malaysian medicinal herbs: the value of data augmentation,” International Journal of Advances in Intelligent
Informatics, vol. 9, no. 1, pp. 136–147, 2023, doi: 10.26555/ijain.v9i1.1076.
[36] Q. Zhang, Q. Yang, X. Zhang, Q. Bao, J. Su, and X. Liu, “Waste image classification based on transfer learning and convolutional
neural network,” Waste Management, vol. 135, pp. 150–157, 2021, doi: 10.1016/j.wasman.2021.08.038.
[37] R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra, “Grad-CAM: why did you say that?,” arXiv
preprint, pp. 1–4, 2017.