<event name> Additional reading material on XAI

Hu, Yuzheng, Pingbang Hu, Han Zhao, and Jiaqi W. Ma. "Most Influential Subset Selection: Challenges, Promises, and Beyond." arXiv, 8 January 2025. https://doi.org/10.48550/arXiv.2409.18153.
Guo, Han, Nazneen Fatema Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. "FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging." arXiv, 9 September 2021. https://doi.org/10.48550/arXiv.2012.15781.
Brophy, Jonathan, Zayd Hammoudeh, and Daniel Lowd. "Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees." Journal of Machine Learning Research 24, no. 154 (2023): 1-48.
Basu, Samyadeep, Philip Pope, and Soheil Feizi. "Influence Functions in Deep Learning Are Fragile." arXiv, 10 February 2021. https://doi.org/10.48550/arXiv.2006.14651.
Bae, Juhan, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger Grosse. "If Influence Functions Are the Answer, Then What Is the Question?" arXiv, 12 September 2022. https://doi.org/10.48550/arXiv.2209.05364.
Sahakyan, M., Z. Aung, and T. Rahwan. "Explainable Artificial Intelligence for Tabular Data: A Survey." IEEE Access 9 (2021): 135392-135422. https://doi.org/10.1109/ACCESS.2021.3116481.
Borisov, Vadim, et al. "Deep Neural Networks and Tabular Data: A Survey." IEEE Transactions on Neural Networks and Learning Systems (2022).
Ren, Weijieying, et al. "Deep Learning within Tabular Data: Foundations, Challenges, Advances and Future Directions." arXiv preprint arXiv:2501.03540 (2025).
Epifano, Jacob R., Ravi P. Ramachandran, Aaron J. Masino, and Ghulam Rasool. "Revisiting the Fragility of Influence Functions." Neural Networks 162 (2023): 581-588. https://doi.org/10.1016/j.neunet.2023.03.029.