Comparative Analysis of Neural-Symbolic Systems for Explainable Artificial Intelligence

Authors

  • Dostoevsky Tolstoy Pushkin, Neural-Symbolic AI & Explainable Machine Reasoning Scientist, France

Keywords

Neural-symbolic systems, Explainable AI, Interpretability, Deep learning, Symbolic reasoning, Hybrid models

Abstract

The pursuit of explainable artificial intelligence (XAI) has given rise to neural-symbolic systems, which combine the learning capabilities of neural networks with the logical reasoning power of symbolic systems. This paper presents a comparative analysis of key neural-symbolic frameworks used in XAI, assessing them across dimensions such as interpretability, scalability, and performance. Through a structured literature review and tabular evaluation, the study highlights the trade-offs and potential of different approaches, offering insights for future research toward transparent, trustworthy AI.
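
The hybrid idea the abstract describes can be illustrated with a minimal sketch, assuming a toy setup: the feature names, rules, and fixed random weights below are hypothetical stand-ins, not drawn from any framework evaluated in the paper. A neural component scores candidate classes, and a symbolic component both constrains the prediction to rule-consistent classes and supplies a human-readable explanation.

    # Minimal neural-symbolic sketch (illustrative only; all names and
    # rules are hypothetical, not from any surveyed framework).
    import numpy as np

    # Neural component: fixed random weights stand in for a trained
    # network mapping 4 features to scores over 3 classes.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))

    def neural_scores(x):
        """Softmax class scores for feature vector x."""
        z = W @ x
        e = np.exp(z - z.max())
        return e / e.sum()

    # Symbolic component: human-readable rules over named features.
    # Each rule returns (holds, explanation).
    FEATURES = ["has_wings", "lays_eggs", "has_fur", "barks"]
    CLASSES = ["bird", "mammal", "dog"]
    RULES = {
        "bird":   lambda f: (f["has_wings"] and f["lays_eggs"],
                             "wings and eggs imply bird"),
        "mammal": lambda f: (f["has_fur"], "fur implies mammal"),
        "dog":    lambda f: (f["barks"], "barking implies dog"),
    }

    def hybrid_predict(x):
        """Rank classes by neural score; keep the best rule-consistent one."""
        f = dict(zip(FEATURES, map(bool, x)))
        scores = neural_scores(np.asarray(x, dtype=float))
        for cls in sorted(CLASSES, key=lambda c: -scores[CLASSES.index(c)]):
            holds, why = RULES[cls](f)
            if holds:
                return cls, why
        return None, "no rule satisfied; prediction withheld"

    label, explanation = hybrid_predict([1, 1, 0, 0])
    print(label, "-", explanation)

In deployed neural-symbolic systems the logical component is often integrated during training (for example, as differentiable constraints) rather than as a post-hoc filter; the filter form is used here only to keep the sketch short.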

Published

2025-08-25

How to Cite

Dostoevsky Tolstoy Pushkin. (2025). Comparative Analysis of Neural-Symbolic Systems for Explainable Artificial Intelligence. International Journal of Artificial Intelligence, 6(4), 17–21. https://ijai.in/index.php/home/article/view/IJAI.06.04.003