REFERENCES
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, “Neural module networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 39–48, doi: https://doi.org/10.1109/CVPR.2016.13.
[2] T. B. Brown et al., “Language models are few-shot learners,” arXiv preprint, arXiv:2005.14165,
2020. [Online]. Available: https://arxiv.org/abs/2005.14165.
[3] L. Cañamero and J. Fredslund, “I show you how I like you – Can you read it in my face?” IEEE Trans. Syst., Man, Cybern. A, vol. 31, no. 5, pp. 454–459, 2001, doi: https://doi.org/10.1109/3468.952719.
[4] G. Chen et al., “AutoAgents: A framework for automatic agent generation,” in Proc. Int. Joint Conf. Artif. Intell. (IJCAI-24), 2024, doi: https://doi.org/10.24963/ijcai.2024/3.
[5] F. A. Gers and J. Schmidhuber, “Recurrent nets that time and space the gradient,” Neural Computation, vol. 12, no. 7, pp. 1789–1804, 2000, doi: https://doi.org/10.1162/089976600300015840.
[6] A. Goyal, J. Binas, Y. Bengio, and C. Pal, “Coordination and learning in modular multi-agent
systems,” in Adv. Neural Inf. Process. Syst. (NeurIPS), 2021. [Online]. Available:
https://proceedings.neurips.cc/paper/2021/hash/73a427b34802887d4d06cb69a7b09e92.
[7] D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, “Neuroscience-inspired artificial intelligence,” Neuron, vol. 95, no. 2, pp. 245–258, 2017, doi: https://doi.org/10.1016/j.neuron.2017.06.011.
[8] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,”
Science, vol. 313, no. 5786, pp. 504–507, 2006, doi: https://doi.org/10.1126/science.1127647.
[9] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no.
7540, pp. 529–533, 2015, doi: https://doi.org/10.1038/nature14236.
[10] G. Montero Albacete, A. López, and A. García-Serrano, “Fattybot: Hormonal chatbot,” Information,
vol. 15, no. 8, p. 457, 2024, doi: https://doi.org/10.3390/info15080457.
[11] M. Moutoussis and R. J. Dolan, “How computation connects affect,” Trends Cogn. Sci., vol. 19, no.
4, pp. 157–163, 2015, doi: https://doi.org/10.1016/j.tics.2015.01.002.
[12] R. W. Picard, Affective Computing, MIT Press, 1997. [Online]. Available: https://affect.media.mit.edu/pdfs/97.picard.pdf.
[13] C. Qu et al., “Tool learning with LLMs: A survey,” arXiv preprint, arXiv:2405.17935, 2024. [Online]. Available: https://arxiv.org/abs/2405.17935.
[14] T. B. Richards, “AutoGPT [Computer software],” GitHub, 2023. [Online]. Available:
https://github.com/Significant-Gravitas/AutoGPT.
[15] C. Rosenbaum, T. Klinger, and M. Riemer, “Routing networks for multi-task learning,” in Int. Conf. Learn. Representations (ICLR), 2019. [Online]. Available: https://openreview.net/forum?id=ry8dv3R9YQ.
[16] T. Schick et al., “Toolformer: Language models can teach themselves to use tools,” arXiv preprint, arXiv:2302.04761, 2023. [Online]. Available: https://arxiv.org/abs/2302.04761.
[17] J. Schmidhuber, “Curiosity and boredom in neural controllers,” in Proc. Int. Conf. Simulation of Adaptive Behavior, 1991, pp. 424–429. [Online]. Available: https://link.springer.com/chapter/10.1007/978-1-4471-1990-4_38.
[18] N. Shazeer et al., “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,”
arXiv preprint, arXiv:1701.06538, 2017. [Online]. Available: https://arxiv.org/abs/1701.06538.
[19] Y. Shen et al., “HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face,” arXiv preprint, arXiv:2303.17580, 2023. [Online]. Available: https://arxiv.org/abs/2303.17580.
[20] S. Singh, S. Bansal, A. El Saddik, and M. Saini, “From ChatGPT to DeepSeek AI: Revisiting
monolithic and adaptive AI models,” arXiv preprint, arXiv:2504.03219, 2025. [Online]. Available:
https://arxiv.org/abs/2504.03219.
[21] S. Slaoui, “S-AI: Sparse Artificial Intelligence System with MetaAgent,” Int. J. Fundam. Mod. Res. (IJFMR), vol. 1, no. 2, pp. 1–18, 2025. [Online]. Available: https://www.ijfmr.com/papers/2025/2/42035.pdf.
[22] Y. Talebirad and A. Nadiri, “Multi-agent collaboration: Harnessing the power of intelligent LLM agents,” arXiv preprint, arXiv:2306.03314, 2023. [Online]. Available: https://arxiv.org/abs/2306.03314.