Recent Trends in Artificial Intelligence - 2025

gerogepatton 3 views 14 slides Sep 03, 2025

About This Presentation

The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi-monthly, open-access, peer-reviewed journal that publishes articles contributing new results in all areas of artificial intelligence and its applications. It is an international journal intended for p...


Slide Content

Recent Trends in Artificial
Intelligence - 2025


International Journal of Artificial
Intelligence & Applications (IJAIA)


http://www.airccse.org/journal/ijaia/ijaia.html



ISSN: 0975-900X (Online); 0976-2191 (Print)


Contact Us: [email protected]

AUTOMATIC ESTIMATION OF REGION OF
INTEREST AREA IN DERMATOLOGICAL
IMAGES USING DEEP LEARNING AND
PIXEL-BASED METHODS: A CASE STUDY ON
WOUND AREA ASSESSMENT

R-D. Berndt¹, C. Takenga¹, P. Preik¹, T. Siripurapu¹, T. Fuchsluger², C. Lutter³, A. Arnold⁴, S. Lutze⁴


¹INFOKOM – Informations- und Kommunikationsgesellschaft mbH, Nonnenhofer Straße 4a, 17033 Neubrandenburg, Germany
²Clinic and Polyclinic of Ophthalmology, Medical University of Rostock, Doberaner Straße 140, 18057 Rostock, Germany
³Clinic and Polyclinic for Orthopaedics, Medical University of Rostock, Doberaner Straße 142, 18057 Rostock, Germany
⁴Clinic and Polyclinic for Skin Diseases, Medical University of Greifswald, Ferdinand-Sauerbruch-Str. 1, 17475 Greifswald, Germany

ABSTRACT

Accurate wound area estimation is essential for effective dermatological assessment and
treatment monitoring. However, manual measurement is time-consuming and error-prone,
highlighting the need for automated, reliable methods. This paper aims to develop and evaluate
two complementary techniques for estimating the Region of Interest (ROI) in dermatological
images: a novel deep learning approach using the Segment Anything Model (SAM) and a simple
pixel-based thresholding method. SAM segments both the wound and a reference object
automatically or through prompt-based queries, without requiring additional supervised
classification. The pixel-based method offers a lightweight alternative for resource-limited
settings. Both techniques generate binary masks and calculate real-world areas using a pixel-to-centimeter scale. Evaluation on 40 images shows that SAM outperforms the pixel-based method,
achieving an average relative error of 4.63% versus 9.5% and ≤5% error in 62.5% of cases
compared to 27.5%. The proposed methods are not limited to wound area estimation but can be
extended to inflammation area detection in rheumatoid arthritis and ophthalmology, providing a
scalable framework for ROI estimation in medical imaging.
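The pixel-to-centimeter conversion the abstract describes can be sketched as follows. This is an illustrative reconstruction with hypothetical masks, not the authors' code, and the global threshold is only a stand-in for the paper's pixel-based method, whose exact rule is not given here. Given a binary mask of the wound and a mask of a reference object of known physical size, the real-world area follows from a simple scale factor:

```python
import numpy as np

def threshold_mask(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Lightweight pixel-based segmentation: global intensity threshold.
    (Illustrative stand-in for the paper's pixel-based method.)"""
    return (gray < thresh).astype(np.uint8)

def mask_area_cm2(mask: np.ndarray, ref_mask: np.ndarray, ref_area_cm2: float) -> float:
    """Convert a binary ROI mask to a real-world area using a reference
    object of known physical size visible in the same image."""
    ref_pixels = int(ref_mask.sum())
    if ref_pixels == 0:
        raise ValueError("reference object not found in image")
    cm2_per_pixel = ref_area_cm2 / ref_pixels  # pixel-to-centimeter scale
    return float(mask.sum()) * cm2_per_pixel

# Toy example: a 10x10-pixel wound and a 5x5-pixel reference sticker of 1 cm².
wound = np.zeros((100, 100), dtype=np.uint8)
wound[10:20, 10:20] = 1          # 100 wound pixels
ref = np.zeros((100, 100), dtype=np.uint8)
ref[50:55, 50:55] = 1            # 25 pixels cover 1 cm²
print(mask_area_cm2(wound, ref, 1.0))  # → 4.0
```

The same area computation applies whether the mask comes from SAM or from thresholding; only the segmentation step differs.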

KEYWORDS

Region of Interest (ROI) Detection, Wound Area Estimation, Pixel-Based Measurement,
Segment Anything Model (SAM), Artificial Intelligence in Dermatology


For More Details: https://aircconline.com/ijaia/V16N4/16425ijaia01.pdf

Volume Link: https://www.airccse.org/journal/ijaia/current2025.html

REFERENCES

[1] P. Foltynski, A. Ciechanowska, and P. Ladyzynski, “Wound surface area measurement methods,”
Biocybernetics and Biomedical Engineering, vol. 41, no. 4, pp. 1454–1465, Oct.–Dec. 2021, doi:
10.1016/j.bbe.2021.09.009.
[2] D. K. Lee et al., "The accuracy of manual wound measurement techniques," Wound Repair and
Regeneration, vol. 25, no. 6, pp. 789–795, 2017.
[3] H. R. Patel and J. B. Collins, "Pixel-based techniques for wound area estimation," Journal of Digital
Imaging, vol. 31, no. 3, pp. 291–298, 2019.
[4] X. Y. Zhang et al., "Deep learning in medical image analysis: A review," Journal of Healthcare
Engineering, vol. 2019, pp. 1–9, 2019.
[5] R. A. Williams and T. S. John, "Deep learning for wound analysis: A study of CNN-based models for
wound size estimation," Medical Image Analysis, vol. 45, pp. 59–67, 2020.
[6] H. Carrión, M. Jafari, M. D. Bagood, H. Y. Yang, R. R. Isseroff, and M. Gomez, “Automatic wound
detection and size estimation using deep learning algorithms,” PLoS Computational Biology, vol. 18,
no. 3, p. e1009852, Mar. 2022, doi: 10.1371/journal.pcbi.1009852.
[7] R. E. Carrión et al., “Automatic wound detection and size estimation using deep learning for monitoring
wound healing,” PLOS ONE, vol. 17, no. 3, p. e0264574, 2022, doi: 10.1371/journal.pone.0264574.
[8] R. Chairat et al., “Detect-and-segment: A deep learning approach to automate wound detection and
segmentation,” Computers in Biology and Medicine, vol. 145, p. 105429, 2022, doi:
10.1016/j.compbiomed.2022.105429.
[9] C. W. Chang et al., “Deep learning approach based on superpixel segmentation assisted labeling for
automatic pressure ulcer diagnosis,” PLOS ONE, vol. 17, no. 2, p. e0264139, 2022, doi:
10.1371/journal.pone.0264139.
[10] P. Foltynski and P. Ladyzynski, “Evaluation of two digital wound area measurement methods using
artificial intelligence,” Electronics, vol. 13, no. 12, p. 2390, 2022, doi: 10.3390/electronics13122390.
[11] D. Y. T. Chino et al., “Segmenting skin ulcers and measuring the wound area using deep
convolutional networks,” Computer Methods and Programs in Biomedicine, vol. 191, Jul. 2020, doi:
10.1016/j.cmpb.2020.105376.
[12] C. Liu, X. Fan, Z. Guo, et al., “Wound area measurement with 3D transformation and smartphone
images,” BMC Bioinformatics, vol. 20, p. 724, 2019, doi: 10.1186/s12859-019-3308-1.
[13] F. Ferreira et al., “Experimental study on wound area measurement with mobile devices,” Sensors,
vol. 21, no. 17, p. 5762, 2021, doi: 10.3390/s21175762.
[14] T. J. Liu, H. Wang, M. Christian, and C.-W. Chang, “Automatic segmentation and measurement of
pressure injuries using deep learning models and a LiDAR camera,” Scientific Reports, vol. 13, no. 1,
Jan. 2023, doi: 10.1038/s41598-022-26812-9.
[15] M. C. Alonso, H. T. Mohammed, R. D. J. Fraser, and J. L. Ramírez-GarcíaLuna, “Comparison of
wound surface area measurements obtained using clinically validated artificial intelligence-based
technology versus manual methods and the effect of measurement method on debridement code
reimbursement cost,” Wounds: A Compendium of Clinical Research and Practice, vol. 35, no. 10, pp.
E331–E338, Oct. 2023, doi: 10.25270/wnds/2303.
[16] K. Löwenstein et al., “Virtually objective quantification of in vitro wound healing scratch assays with
the Segment Anything Model,” arXiv preprint, arXiv:2407.02187, 2024. [Online]. Available:
https://arxiv.org/abs/2407.02187
[17] Labellerr, “Enhancing wound image segmentation with Labellerr,” Labellerr Blog, 2023. [Online].
Available: https://www.labellerr.com/blog/enhancing-wound-image-segmentation/
[18] I. Morales-Ivorra, J. Narváez, C. Gómez-Vaquero, C. Moragues, J. M. Nolla, J. A. Narváez, and M.
A. Marín-López, “Assessment of inflammation in patients with rheumatoid arthritis using
thermography and machine learning: a fast and automated technique,” RMD Open, vol. 8, no. 2, p.
e002458, Jul. 2022, doi: 10.1136/rmdopen-2022-002458.
[19] U. Snekhalatha, M. Anburajan, V. Sowmiya, B. Venkatraman, and M. Menaka, “Automated hand
thermal image segmentation and feature extraction in the evaluation of rheumatoid arthritis,”
Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine,
vol. 229, no. 4, pp. 319–331, Apr. 2015, doi: 10.1177/0954411915580809.
[20] A. N. Wilson, K. A. Gupta, B. H. Koduru, A. Kumar, A. Jha, and L. R. Cenkeramaddi, “Recent
advances in thermal imaging and its applications using machine learning: A review,” IEEE Sensors
Journal, vol. 23, no. 4, pp. 3395–3407, Feb. 2023, doi: 10.1109/JSEN.2023.3234335.

[21] A. Alshehri and D. AlSaeed, “Breast cancer detection in thermography using convolutional neural
networks (CNNs) with deep attention mechanisms,” Applied Sciences, vol. 12, no. 24, p. 12922,
2022, doi: 10.3390/app122412922.
[22] S. J. Mambou, P. Maresova, O. Krejcar, A. Selamat, and K. Kuca, “Breast cancer detection using
infrared thermal imaging and a deep learning model,” Sensors, vol. 18, no. 9, p. 2799, Aug. 2018,
doi: 10.3390/s18092799.
[23] Y. Qu, Y. Meng, H. Fan, and R. X. Xu, “Low-cost thermal imaging with machine learning for
noninvasive diagnosis and therapeutic monitoring of pneumonia,” Infrared Physics & Technology,
vol. 123, p. 104201, Jun. 2022, doi: 10.1016/j.infrared.2022.104201.
[24] R. Gulias-Cañizo, M. E. Rodríguez-Malagón, L. Botello-González, V. Belden-Reyes, F. Amparo, and
M. Garza-Leon, “Applications of infrared thermography in ophthalmology,” Life, vol. 13, no. 3, p.
723, 2023, doi: 10.3390/life13030723.
[25] J. Wang, Y. Tian, T. Zhou, D. Tong, J. Ma, and J. Li, “A survey of artificial intelligence in rheumatoid
arthritis,” Rheumatology and Immunology Research, vol. 4, no. 2, pp. 69–77, Jul. 2023, doi:
10.2478/rir-2023-0011.
[26] I. Morales-Ivorra, D. Taverner, O. Codina, S. Castell, P. Fischer, D. Onken, P. Martínez-Osuna, C.
Battioui, and M. A. Marín-López, “External validation of the machine learning-based thermographic
indices for rheumatoid arthritis: A prospective longitudinal study,” Diagnostics, vol. 14, no. 13, p.
1394, Jun. 2024, doi: 10.3390/diagnostics14131394.
[27] V. Shenoy et al., “Deepwound: Automated postoperative wound assessment and surgical site
surveillance through convolutional neural networks,” arXiv preprint, arXiv:1807.04355, 2018.
[Online]. Available: https://arxiv.org/abs/1807.04355
[28] D. M. Anisuzzaman et al., “A mobile app for wound localization using deep learning,” arXiv preprint,
arXiv:2009.07133, 2020. [Online]. Available: https://arxiv.org/abs/2009.07133
[29] C. W. Chang et al., “A superpixel-driven deep learning approach for the analysis of dermatological
wounds,” Computer Methods and Programs in Biomedicine, vol. 178, p. 105079, 2019, doi:
10.1016/j.cmpb.2019.105079.
[30] H. Nejati et al., “Fine-grained wound tissue analysis using deep neural network,” arXiv preprint,
arXiv:1802.10426, 2018. [Online]. Available: https://arxiv.org/abs/1802.10426
[31] I. Ballester, M. Gall, T. Münzer, et al., “Depth-based interactive assistive system for dementia care,”
Journal of Ambient Intelligence and Humanized Computing, vol. 15, pp. 3901–3912, 2024, doi:
10.1007/s12652-024-04865-0.

AUTHORS

Rolf-Dietrich Berndt is a German engineer and Managing Director of Infokom
GmbH, an ICT company in Neubrandenburg, Germany. He specializes in eHealth
innovation, focusing on telemedicine, mobile health (mHealth), and digital chronic care
solutions. He is a certified expert in data security and privacy and holds ISO 27001
ISMS certification. Berndt has led the development of secure, interoperable platforms
such as mSkin® for teledermatology and Mobil Diab® for diabetes management,
applied in both European and African healthcare contexts. His work emphasizes
accessible, privacy-compliant technologies linking primary care, specialists, and hospitals.

Claude Takenga holds a BSc and MSc in Radio Engineering and Telecommunication,
and a PhD in Electrical Engineering from Hannover University. He works in the
Research and Development department at Infokom, focusing on innovation in digital
health technologies. His research spans AI-based medical imaging, mobile health
solutions, and eHealth systems for chronic disease management. He has led
international health projects across Europe and sub-Saharan Africa, promoting scalable,
digital diagnostics and telemedicine tools for underserved regions.

Petra Preik holds a Diploma in Informatics Engineering from the Fachhochschule
Neubrandenburg. She works in software development at Infokom, focusing on digital
health applications. Her expertise includes the design and implementation of secure, interoperable healthcare software systems and telemedicine solutions. With practical experience in clinical IT environments, she contributes to the development of user-friendly tools that support healthcare professionals and enhance patient care.

Tripura Siripurapu works in the Research and Development department at Infokom,
with a primary focus on Artificial Intelligence and Deep Learning for medical
imaging. Her work involves developing advanced machine learning algorithms for
image segmentation, feature extraction, and diagnostic support in digital health
applications. She contributes to projects aimed at enhancing the accuracy and
efficiency of clinical image analysis.

Thomas A. Fuchsluger (Univ.-Prof. Dr. med. Dr. rer. nat.) is a renowned specialist-scientist with dual doctorates in medicine and natural sciences. He serves as the Chair
of Ophthalmology at the University of Rostock. His clinical and research expertise
includes corneal transplantation, ocular surface diseases, and regenerative therapies.
Prof. Fuchsluger has published extensively in peer-reviewed journals and leads
multiple interdisciplinary projects at the intersection of ophthalmology, tissue
engineering, and digital health. His work contributes significantly to advancing
personalized eye care and clinical innovation in ophthalmic surgery.

PREDICTION OF DIABETES FROM
ELECTRONIC HEALTH RECORDS

Philip de Melo

Department of Nursing and Allied Health, Norfolk State University, 700 Park Avenue, Norfolk, VA 23504, USA

ABSTRACT

Electronic Health Records (EHRs) encompass patients’ diagnoses, hospitalizations, and
medication histories, offering a wealth of data. Although EHR-based research, particularly in
decision prediction, has made significant strides, challenges remain due to the inherently
sparse and irregular nature of EHR data, which limits their direct application in time-series
analysis. Physicians treating individuals with chronic illnesses must anticipate the progression
of their patients' conditions, as accurate forecasts enable more informed and timely treatment
decisions. The strength of prediction lies in prevention—intervening early is often more
effective than attempting to reverse damage later. In this study, we present a data-driven
model designed to deliver accurate and efficient predictions of disease trajectories using
electronic health records (EHRs) from Veterans Affairs hospitals. Prediction of disease
progression represents a fundamental challenge. EHRs contain vast volumes of frequently
updated, high-dimensional, and irregularly spaced data in various formats, including
numerical, textual, image, and video data. To address this complexity, we propose a new
approach for predicting the progression of diabetes. This method has the potential to improve
early intervention, prevent further health deterioration, and ultimately extend patients' lives.
The method is based on PM GenAI, a novel approach that significantly improves classification and regression results. Compared to traditional techniques such as ARIMA, LSTM, and random forests (RF), it shows a significant improvement in disease-progression evaluation. The method is demonstrated on diabetes data.
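PM GenAI itself is not described in this abstract, but the sparse-and-irregular-sampling problem it highlights can be illustrated with a minimal sketch (hypothetical data and function names): forward-filling irregular EHR observations onto a regular daily grid, a common preprocessing step before time-series models such as ARIMA, LSTM, or random forests can be applied.

```python
from datetime import date, timedelta

def resample_daily(obs: dict[date, float], start: date, end: date) -> list[float]:
    """Forward-fill irregular observations onto a daily grid: each day
    carries the most recent measurement (None before the first one)."""
    series, last = [], None
    d = start
    while d <= end:
        if d in obs:
            last = obs[d]
        series.append(last)
        d += timedelta(days=1)
    return series

# Hypothetical HbA1c readings recorded on irregular visit dates.
obs = {date(2024, 1, 1): 6.8, date(2024, 1, 4): 7.1}
print(resample_daily(obs, date(2024, 1, 1), date(2024, 1, 5)))
# → [6.8, 6.8, 6.8, 7.1, 7.1]
```

Forward-filling is only one of several imputation choices; the gaps it hides are precisely what makes naive time-series modeling of EHR data difficult.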


For More Details: https://aircconline.com/ijaia/V16N4/16425ijaia02.pdf

Volume Link: https://www.airccse.org/journal/ijaia/current2025.html

REFERENCES

[1] Berwick D. M., Nolan T. W., Whittington J. (2008). The triple aim: care, health, and cost. Health Aff. 27, 759–769. 10.1377/hlthaff.27.3.759
[2] Buck D., Baylis A., Dougall D., Robertson R. (2018). A Vision for Population Health: Towards a Healthier Future. London: The King's Fund.
[3] de Melo P. (2025). Accurate Classification of Diabetes using Optimized Deep Learning Algorithm, Journal of Diabetes Mellitus (accepted for publication).
[4] de Melo P. (2024). Public Health Informatics and Technology. AAAS Press, ISBN-13 9798893729535.
[5] Eurostat (2023). Self-Perceived Health Statistics Available online at:
https://ec.europa.eu/eurostat/statistics
explained/index.php?title=Selfperceived_health_statistics&oldid=509628
[6] Holman H. R. (2020). The relation of the chronic disease epidemic to the health care crisis. ACR Open Rheumatol. 2, 167–173. 10.1002/acr2.11114
[7] Li Y., Mamouei M., Salimi-Khorshidi G., Rao S., Hassaine A., Canoy D., et al. (2022). Hi-BEHRT: hierarchical transformer-based model for accurate prediction of clinical events using multimodal longitudinal electronic health records. IEEE J. Biomed. Health Inf. 27, 1106–1117. 10.1109/JBHI.2022.3224727
[8] Main C., Haig M., Kanavos P. (2022). The Promise of Population Health Management in England:
From Theory to Implementation. London: The London School of Economics and Political Science.
[9] Pham T., Tran T., Phung D., Venkatesh S. (2016). “DeepCare: a deep dynamic memory model for predictive medicine,” in Advances in Knowledge Discovery and Data Mining: 20th Pacific-Asia Conference, PAKDD 2016, Auckland, New Zealand, April 19–22, 2016, Proceedings, Part II (Auckland: Springer), 30–41.
[10] Rasmy L., Xiang Y., Xie Z., Tao C., Zhi D. (2021). Med-bert: pretrained contextualized embeddings
on large-scale structured electronic health records for disease prediction. NPJ Dig. Med. 4, 86.
10.1038/s41746-021-00455-y
[11] Rupp M., Peter O., Pattipaka T. (2023). Exbehrt: Extended transformer for electronic health records to
predict disease subtypes & progressions. arXiv [preprint]. 10.1007/978-3-031-39539-0_7
[12] World Health Organization (2023). Population Health Management in Primary Health Care: a
Proactive Approach to Improve Health and Well-Being: Primary Health Care Policy Paper Series.
No. WHO/EURO: 2023-7497-47264-69316. World Health Organization. Regional Office for Europe.
[13] Wornow M., Xu Y., Thapa R., Patel B., Steinberg E., Fleming S., et al. (2023). The shaky foundations
of large language models and foundation models for electronic health records. npj Dig. Med. 6, 135.
10.1038/s41746-023-00879-8
[14] Y. Si, J. Du, Z. Li, X. Jiang, T. Miller, F. Wang, W. J. Zheng, and K. Roberts, “Deep representation
learning of patient data from electronic health records (EHR): a systematic review,” J Biomed Inform,
2020.
[15] Y. Xu, S. Biswal, S. R. Deshpande, K. O. Maher, and J. Sunl, “RAIM: recurrent attentive and
intensive model of multimodal patient monitoring data,” Proceedings of the 24th ACM SIGKDD
International Conference on Knowledge Discovery & Data Mining, ACM, New York, NY, USA,
2018, pp. 2565–2573.
[16] S. Yang, X. W. Zheng, C. Ji and X. C. Chen, “Multi-layer Representation Learning and Its Application to Electronic Health Records,” Neural Process Lett., 2021, 53(2): 1417–1433.
[17] H. Song, D. Rajan, J. J. Thiagarajan, and A. Spanias, “Attend and diagnose: clinical time series
analysis using attention models,” Proceedings of the 32nd AAAI Conference on Artificial
Intelligence, AAAI 2018, 2018, pp. 4091–4098.
[18] Y. Si and K. Roberts, “Deep patient representation of clinical notes via multi-task learning for mortality prediction,” AMIA Jt Summits Transl Sci Proc, 2019, pp. 779–788.
[19] L. Liu, H. Li, Z. Hu, H. Shi, Z. Wang, J. Tang, and M. Zhang, “Learning hierarchical representations of electronic health records for clinical outcome prediction,” AMIA Annu Symp Proc, 2019, pp. 597–606.
[20] S. Barbieri, J. Kemp, O. Perez-Concha, S. Kotwal, M. Gallagher, A. Ritchie, and L. Jorm,
“Benchmarking deep learning architectures for predicting readmission to the icu and describing
patients-at-risk,” Scientific Reports, 2020, 10 ( 1 ): 1111.
[21] F. Yuan, S. Chen, K. Liang and L. Xu, “Research on the coordination mechanism of traditional Chinese medicine medical record data standardization and characteristic protection under big data environment,” Shandong People's Publishing House.
[22] X. M. Yu and H. Wang, “Intelligent data mining-frequent patterns for uncertain data,” Tsinghua University Press, 2018.
[23] X. W. Zheng, X. M. Yu, Y. Q. Yin, T. T. Li and X. Y. Yan, “Three-dimensional feature maps and
convolutional neural network-based emotion recognition,” International Journal of Intelligent
Systems 36 ( 2021 ): 6312–6336.
[24] X. W. Zheng, M. Zhang, T. Li, C. Ji and B. Hu, “A novel consciousness emotion recognition method
using ERP components and MMSE,” J Neural Eng. 2021 Apr 18; 18 ( 4 ).
[25] Y. Q. Yin, X. W. Zheng, B. Hu, Y. Zhang and X. C. Cui, “EEG emotion recognition using fusion
model of graph convolutional neural networks and LSTM,” Appl. Soft Comput. 100 ( 2021 ):
106954.
[26] Y. Jiang, Y. Zheng, S. Hou, Y. Chang, and J. C. Gee, “Multimodal image alignment via linear mapping between feature modalities,” J Healthc Eng, 2017, pp. 1–6.
[27] Y. Yuan, G. Xun, Q. Suo, K. Jia, and A. Zhang, “Wave2Vec: deep representation learning for clinical temporal data,” Neurocomputing, 2019, pp. 31–42.
[28] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun, “Doctor AI: predicting clinical events via recurrent neural networks,” JMLR Workshop Conf Proc, 2016, 56: 301–318.
[29] K. Cho, B. V. Merrienboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine
translation: encoder-decoder approaches,” Computer Science, 2014, pp. 103–111.
[30] S. Hochreiter, and J. Schmidhuber, “Long short-term memory,” Neural Computation, 1997, 9 ( 8 ):
1735–1780.
[31] D. W. Bates, S. Saria, L. Ohno-Machado, A. Shah, and G. Escobar. “Big data in health care: using
analytics to identify and manage high-risk and high-cost patients,” Health Aff, 2014, 33 ( 7 ): 1123–
1131.
[32] R. Miotto, W. Fei, and W. Shuang, “Deep learning for healthcare: review, opportunities and
challenges,” Briefings in Bioinformatics, 2017, 19 ( 6 ).
[33] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, and J. Gao, “Dipole: diagnosis prediction in healthcare via
attention-based bidirectional recurrent neural networks,” Proceedings of the 23rd ACM SIGKDD
international conference on knowledge discovery and data mining, 2017, pp. 1903–1911.
[34] C. Chao, X. Cao, L. Jian, J. Bo, and W. Fei, “An RNN architecture with dynamic temporal matching for personalized predictions of Parkinson's disease,” Proceedings of the 2017 SIAM International Conference on Data Mining, 2017, pp. 198–206.
[35] B. Jin, C. Che, Z. Liu, S. Zhang, X. Yin, and X. P. Wei, “Predicting the risk of heart failure with EHR
sequential data modeling,” IEEE Access, 2018, pp. 9256–9261.
[36] J. Zhang, K. Kowsari, J. H. Harrison, J. M. Lobo, and L. E. Barnes, “Patient2Vec: a personalized
interpretable deep representation of the longitudinal electronic health record,” IEEE Access, 2018, pp.
65333–65346.
[37] S. Rendle, “Factorization machines,” The 10th IEEE International Conference on Data Mining,
Sydney, Australia, 14–17 December 2010.
[38] T. Shen, J. Jia, T. S. Chua, W. Hall, and B. Chen, “PEIA: personality and emotion integrated attentive
model for music recommendation on social media platforms,” Proceedings of the AAAI Conference
on Artificial Intelligence, 2020, 34 ( 01 ): 206–213.

BIO-INSPIRED ARCHITECTURE FOR
PARSIMONIOUS CONVERSATIONAL
INTELLIGENCE : THE S-AI-GPT
FRAMEWORK

Said Slaoui

University Mohammed V, Rabat, Morocco

ABSTRACT

S-AI-GPT, a conversational artificial intelligence system, is based on the principles of Sparse
Artificial Intelligence (S-AI) developed by the author. S-AI-GPT provides a modular and bio-
inspired solution to the structural limitations of monolithic GPT-based language models,
particularly in terms of excessive resource consumption, low interpretability, and limited
contextual adaptability. This proposal is part of a broader effort to design sustainable,
explainable, and adaptive AI systems grounded in cognitive principles. The sparse activation of
specialized GPT agents, coordinated by a central GPT-MetaAgent, and a cognitive framework
modeled after the functional modularity of the human brain form the foundation of the system.
These agents are activated only when relevant, based on task decomposition and contextual cues.
Their orchestration is handled through an internal symbolic pipeline, designed for transparency
and modular control. The rationale for the paradigm shift is explained in this article along with
relevant literature reviews, the modular system architecture, and the agent-based decomposition
and orchestration logic that form the basis of S-AI-GPT. Each component is introduced through a
conceptual analysis, highlighting its function and integration within the overall architecture. In doing so, the article lays the groundwork for enhancements, grounded in artificial hormonal signaling and cognitive memory subsystems, to be presented in later articles. This is the first paper in a three-part series, with subsequent work addressing
personalization, affective regulation, and experimental validation.
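As a rough illustration of the sparse-activation idea the abstract describes (a toy sketch with invented agent names and trigger keywords, not the S-AI-GPT implementation), a MetaAgent can dispatch a query only to the specialist agents whose triggers match it, leaving all other agents dormant:

```python
from typing import Callable

# Hypothetical registry of specialist agents: trigger keywords + handler.
AGENTS: dict[str, tuple[set[str], Callable[[str], str]]] = {
    "math":   ({"sum", "integral", "equation"}, lambda q: f"[math agent] {q}"),
    "travel": ({"flight", "hotel", "visa"},     lambda q: f"[travel agent] {q}"),
}

def meta_agent(query: str) -> list[str]:
    """Sparse activation: invoke only the agents whose trigger keywords
    appear in the query; unmatched agents consume no resources."""
    tokens = set(query.lower().split())
    return [handler(query)
            for triggers, handler in AGENTS.values()
            if triggers & tokens]

print(meta_agent("book a flight to Rabat"))  # only the travel agent fires
```

The actual system performs task decomposition and symbolic orchestration rather than keyword matching; the point of the sketch is only that most agents stay inactive for any given query.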

KEYWORDS

Sparse Artificial Intelligence, GPT-MetaAgent, GPT-Specialized Agents, GPT-Gland Agents,
Hormonal Engine

For More Details: https://aircconline.com/ijaia/V16N4/16425ijaia03.pdf

Volume Link: https://www.airccse.org/journal/ijaia/current2025.html

REFERENCES

[1]. J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, “Neural module networks,” Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), vol. 2016, pp. 39–48, 2016, doi:
https://doi.org/10.1109/CVPR.2016.13.
[2]. T. B. Brown et al., “Language models are few-shot learners,” arXiv preprint, arXiv:2005.14165,
2020. [Online]. Available: https://arxiv.org/abs/2005.14165.
[3]. L. Cañamero and J. Fredslund, “I show you how I like you – Can you read it in my face?” IEEE
Trans. Syst., Man, Cybern. A, vol. 31, no. 5, pp. 454–459, 2001, doi:
https://doi.org/10.1109/3468.952719.
[4]. G. Chen, S. Liu, H. Wu, Q. Zhou, and X. Chen, “AutoAgents: A framework for automatic agent
generation,” in Proc. Int. Joint Conf. Artif. Intell. (IJCAI-24), 2024, doi:
https://doi.org/10.24963/ijcai.2024/3.
[5]. F. A. Gers and J. Schmidhuber, “Recurrent nets that time and space the gradient,” Neural
Computation, vol. 12, no. 7, pp. 1789–1804, 2000, doi:
https://doi.org/10.1162/089976600300015840.
[6]. A. Goyal, J. Binas, Y. Bengio, and C. Pal, “Coordination and learning in modular multi-agent
systems,” in Adv. Neural Inf. Process. Syst. (NeurIPS), 2021. [Online]. Available:
https://proceedings.neurips.cc/paper/2021/hash/73a427b34802887d4d06cb69a7b09e92.
[7]. D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, “Neuroscience-inspired artificial
intelligence,” Neuron, vol. 95, no. 2, pp. 245–258, 2017, doi:
https://doi.org/10.1016/j.neuron.2017.06.011.
[8]. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,”
Science, vol. 313, no. 5786, pp. 504–507, 2006, doi: https://doi.org/10.1126/science.1127647.
[9]. V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no.
7540, pp. 529–533, 2015, doi: https://doi.org/10.1038/nature14236.
[10]. G. Montero Albacete, A. López, and A. García-Serrano, “Fattybot: Hormonal chatbot,” Information,
vol. 15, no. 8, p. 457, 2024, doi: https://doi.org/10.3390/info15080457.
[11]. M. Moutoussis and R. J. Dolan, “How computation connects affect,” Trends Cogn. Sci., vol. 19, no.
4, pp. 157–163, 2015, doi: https://doi.org/10.1016/j.tics.2015.01.002.
[12]. R. W. Picard, Affective Computing, MIT Press, 1997. [Online]. Available:
https://affect.media.mit.edu/pdfs/97.picard.pdf.
[13]. C. Qu, S. Zhang, Y. Li, and J. Ma, “Tool learning with LLMs: A survey,” arXiv preprint,
arXiv:2405.17935, 2024. [Online]. Available: https://arxiv.org/abs/2405.17935.
[14]. T. B. Richards, “AutoGPT [Computer software],” GitHub, 2023. [Online]. Available:
https://github.com/Significant-Gravitas/AutoGPT.
[15]. C. Rosenbaum, T. Klinger, and M. Riemer, “Routing networks for multi-task learning,” in Int. Conf.
Learn. Representations (ICLR), 2019. [Online]. Available:
https://openreview.net/forum?id=ry8dv3R9YQ.
[16]. T. Schick and H. Schütze, “Toolformer: Language models can teach themselves to use tools,” arXiv
preprint, arXiv:2302.04761, 2023. [Online]. Available: https://arxiv.org/abs/2302.04761.
[17]. J. Schmidhuber, “Curiosity and boredom in neural controllers,” in Proc. Int. Conf. Simulation of
Adaptive Behavior, pp. 424–429, 1991. [Online]. Available:
https://link.springer.com/chapter/10.1007/978-1-4471-1990-4_38.
[18]. N. Shazeer et al., “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,”
arXiv preprint, arXiv:1701.06538, 2017. [Online]. Available: https://arxiv.org/abs/1701.06538.
[19]. Y. Shen, K. Zhang, Y. Wang, and X. Liu, “HuggingGPT: Solving AI tasks with ChatGPT and its
friends,” arXiv preprint, arXiv:2303.17580, 2 023. [Online]. Available:
https://arxiv.org/abs/2303.17580.
[20]. S. Singh, S. Bansal, A. El Saddik, and M. Saini, “From ChatGPT to DeepSeek AI: Revisiting
monolithic and adaptive AI models,” arXiv preprint, arXiv:2504.03219, 2025. [Online]. Available:
https://arxiv.org/abs/2504.03219.
[21]. S. Slaoui, “S-AI: Sparse Artificial Intelligence System with MetaAgent,” Int. J. Fundam. Mod. Res.
(IJFMR), vol. 1, no. 2, pp. 1–18, 2025. [Online]. Available:
https://www.ijfmr.com/papers/2025/2/42035.pdf.
[22]. Y. Talebirad and A. Nadiri, “Multi-agent collaboration: Harnessing LLM agents,” arXiv preprint,
arXiv:2306.03314, 2023. [Online]. Available: https://arxiv.org/abs/2306.03314.

[23]. TechTarget, “Mixture-of-experts models explained: What you need to know,” SearchEnterpriseAI,
2024. [Online]. Available: https://www.techtarget.com/searchenterpriseai/feature/Mixture-of-experts-models-explained-What-you-need-to-know.
[24]. H. Vicci, “Emotional intelligence in AI: Review and evaluation,” SSRN Working Paper, 2024, doi:
https://doi.org/10.2139/ssrn.4818285.

AUTHORS

Said Slaoui is a professor at Mohammed V University in Rabat, Morocco. He
graduated in Computer Science from University Pierre and Marie Curie, Paris VI (in collaboration with IBM France), 1986. He has over 40 years of experience in
the fields of AI and Big Data, with research focused on modular architectures,
symbolic reasoning, and computational frugality. His recent work introduces the
Sparse Artificial Intelligence (S-AI) framework, which integrates bio-inspired
signaling and agent-based orchestration. He has published numerous scientific
papers in international journals and conferences, and actively contributes to the
development of sustainable and explainable AI systems.

BIO-INSPIRED HORMONAL MODULATION
AND ADAPTIVE ORCHESTRATION IN S-AI-
GPT

Said Slaoui

University Mohammed V, Rabat, Morocco

ABSTRACT

This second article delves into the bio-inspired regulatory mechanisms and memory architectures
integrated into the S-AI-GPT system. Building upon the modular design introduced in Article 1,
we explore how artificial hormonal signaling enables dynamic orchestration of agents and
emotional coherence. The system relies on a triadic hormonal regulation layer composed of the
Hormonal Engine, the GPT-Gland Agents, and the GPT-MemoryGland, working in concert to
adjust activation thresholds, modulate emotional tone, and support context-aware responsiveness.
A dedicated section addresses the integration of memory components, including Dynamic
Contextual Memory (DCM), the Memory Agent for personalization, and a bio-inspired memory
system based on neuronal mini-structures and artificial engrams. These memory structures
interact strategically with hormonal dynamics to maintain both adaptability and persistence. We
also examine the autonomy, lifecycle, and coordination of GPT-Specialized Agents across
conversational and business contexts. The orchestrator, GPT-MetaAgent, supervises the entire
system by integrating semantic cues, user profiles, and hormonal feedback. This approach paves
the way for a resource-efficient, interpretable, and cognitively enriched conversational AI.
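One way to picture the threshold-adjustment role of the hormonal layer (a toy sketch with invented hormone names and coefficients, not the paper's Hormonal Engine) is a base activation threshold for an agent that is scaled by current hormone levels:

```python
def effective_threshold(base: float, hormones: dict[str, float]) -> float:
    """Hormone levels (0..1) act as multiplicative modulators: 'urgency'
    lowers the activation threshold, 'fatigue' raises it.
    (Hypothetical hormone names and weights, for illustration only.)"""
    urgency = hormones.get("urgency", 0.0)
    fatigue = hormones.get("fatigue", 0.0)
    return base * (1.0 - 0.5 * urgency) * (1.0 + 0.5 * fatigue)

base = 0.6
print(effective_threshold(base, {"urgency": 1.0}))  # → 0.3, easier to activate
print(effective_threshold(base, {"fatigue": 1.0}))  # higher, harder to activate
```

The described system couples such modulation to memory components and the GPT-MetaAgent's feedback loop; the sketch captures only the "hormones adjust activation thresholds" relationship stated in the abstract.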

KEYWORDS

Sparse Artificial Intelligence, GPT-MetaAgent, GPT-Specialized Agents, GPT-Gland Agents,
Hormonal Engine.

For More Details: https://aircconline.com/ijaia/V16N4/16425ijaia04.pdf

Volume Link: https://www.airccse.org/journal/ijaia/current2025.html

REFERENCES

[1]. M. Moutoussis and R. J. Dolan, “How computation connects affect,” Trends Cogn. Sci., vol. 19, no.
4, pp. 157–163, 2015.
[2]. R. W. Picard, Affective Computing, MIT Press, 1997.
[3]. C. Qu, S. Zhang, Y. Li, and J. Ma, “Tool learning with LLMs: A survey,” arXiv preprint,
arXiv:2405.17935, 2024.
[4]. T. Schick and H. Schütze, “Toolformer: Language models can teach themselves to use tools,” arXiv
preprint, arXiv:2302.04761, 2023.
[5]. F. A. Gers and J. Schmidhuber, “Recurrent nets that time and space the gradient,” Neural
Computation, vol. 12, no. 7, pp. 1789–1804, 2000.
[6]. Y. Shen, K. Zhang, Y. Wang, and X. Liu, “HuggingGPT: Solving AI tasks with ChatGPT and its
friends,” arXiv preprint, arXiv:2303.17580, 2023.
[7]. S. Slaoui, “S-AI: Sparse Artificial Intelligence System with MetaAgent,” Int. J. Fundam. Mod. Res.
(IJFMR), vol. 1, no. 2, pp. 1–18, 2025.
[8]. Y. Talebirad and A. Nadiri, “Multi-agent collaboration: Harnessing LLM agents,” arXiv preprint,
arXiv:2306.03314, 2023.
[9]. T. B. Richards, AutoGPT [Computer software], GitHub Repository, 2023. [Online]. Available:
https://github.com/Torantulino/Auto-GPT
[10]. C. Rosenbaum, T. Klinger, and M. Riemer, “Routing networks for multi-task learning,” in Proc. 7th
Int. Conf. Learning Representations (ICLR), New Orleans, LA, USA, 2019.
[11]. TechTarget, “Mixture-of-experts models explained: What you need to know,” SearchEnterpriseAI,
2024. [Online]. Available: https://www.techtarget.com/searchenterpriseai/definition/mixture-of-experts
[12]. J. Schmidhuber, “Curiosity and boredom in neural controllers,” in Proc. Int. Conf. Simulation of
Adaptive Behavior, pp. 424–429, 1991.
[13]. H. Vicci, “Emotional intelligence in AI: Review and evaluation,” SSRN Working Paper, 2024.
[Online]. Available: https://ssrn.com/abstract=4768910
[14]. S. Slaoui, “Bio-Inspired Architecture for Parsimonious Conversational Intelligence: The S-AI-GPT
Framework,” Int. J. Artif. Intell. & Applications (IJAIA), vol. 16, no. 4, 2025.
[15]. A. Goyal, J. Binas, Y. Bengio, and C. Pal, “Coordination and learning in modular multi-agent
systems,” in Adv. Neural Inf. Process. Syst. (NeurIPS), 2021.
[16]. N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. Hinton, and J. Dean, “Outrageously
large neural networks: The sparsely-gated mixture-of-experts layer,” arXiv preprint,
arXiv:1701.06538, 2017.
[17]. S. Singh, S. Bansal, A. El Saddik, and M. Saini, “From ChatGPT to DeepSeek AI: Revisiting
monolithic and adaptive AI models,” arXiv preprint, arXiv:2504.03219, 2025. [Online]. Available:
https://arxiv.org/abs/2504.03219
[18]. G. Montero Albacete, A. López, and A. García-Serrano, “Fattybot: Hormonal chatbot,” Information,
vol. 15, no. 8, p. 457, 2024.
[19]. L. Cañamero and J. Fredslund, “I show you how I like you – Can you read it in my face?” IEEE
Trans. Syst., Man, and Cybern., Part A, vol. 31, no. 5, pp. 454–459, 2001.
[20]. D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, “Neuroscience-inspired artificial
intelligence,” Neuron, vol. 95, no. 2, pp. 245–258, 2017.
[21]. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,”
Science, vol. 313, no. 5786, pp. 504–507, 2006.
[22]. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M.
Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King,
D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep
reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[23]. J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, “Neural module networks,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), pp. 39–48, 2016.
[24]. T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G.
Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D.
M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C.
Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot
learners,” arXiv preprint, arXiv:2005.14165, 2020. [Online]. Available:
https://arxiv.org/abs/2005.14165
[25]. G. Chen, S. Liu, H. Wu, Q. Zhou, and X. Chen, “AutoAgents: A framework for automatic agent
generation,” in Proc. Int. Joint Conf. Artif. Intell. (IJCAI-24), Jeju, Korea, 2024. [Online]. Available:
https://arxiv.org/abs/2405.06758
[26]. M. Minsky, The Society of Mind, Simon & Schuster, 1986.

AUTHORS

Said Slaoui is a professor at Mohammed V University in Rabat, Morocco. He
graduated in Computer Science from University Pierre and Marie Curie, Paris VI
(in collaboration with IBM France) in 1986. He has over 40 years of experience in
the fields of AI and Big Data, with research focused on modular architectures,
symbolic reasoning, and computational frugality. His recent work introduces the
Sparse Artificial Intelligence (S-AI) framework, which integrates bio-inspired
signaling and agent-based orchestration. He has published numerous scientific
papers in international journals and conferences, and actively contributes to the
development of sustainable and explainable AI systems.