Science, Technology, Engineering and Mathematics.
Open Access

EXPLAINABLE HIERARCHICAL RL FOR TRANSPARENT DECISION-MAKING IN DIGITAL ADVERTISING ECOSYSTEMS


Volume 2, Issue 2, Pp 42-49, 2025

DOI: https://doi.org/10.61784/adsj3024

Author(s)

Miguel Torres

Affiliation(s)

Department of Computer Science, University of Arizona, Tucson, USA.

Corresponding Author

Miguel Torres

ABSTRACT

The growing demand for transparency in digital advertising decision-making has become a critical concern for industry practitioners and regulators alike. Traditional advertising allocation strategies often rely on black-box algorithms that lack sufficient explainability, posing significant challenges in environments where user privacy and regulatory compliance are paramount. This paper proposes a novel Explainable Hierarchical Reinforcement Learning (EHRL) framework specifically designed for transparent decision-making in digital advertising ecosystems. The framework integrates option-critic architectures with deep Q-networks and incorporates sophisticated state representation mechanisms to achieve both efficient and interpretable advertising strategies. Our approach utilizes a three-tier hierarchical structure that mirrors natural advertising decision-making processes, from high-level strategic planning to tactical execution. Experimental results on large-scale real-world advertising datasets demonstrate that the proposed EHRL framework significantly improves decision transparency and explainability while maintaining competitive performance. Compared to traditional Deep Q-Network (DQN) approaches, EHRL achieves a 12.3% improvement in click-through rate prediction accuracy, an 8.7% increase in user satisfaction scores, and a 34.5% enhancement in human comprehensibility of decision explanations.
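The three-tier structure described above (strategic option selection on top of tactical action selection, with an inspectable decision trace) can be illustrated with a minimal tabular sketch. This is not the paper's implementation: the state buckets, the `simulated_reward` proxy, and names such as `Q_high` and `explain` are illustrative assumptions, and tabular Q-values stand in for the deep networks used in the actual EHRL framework.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # coarse user-context buckets (hypothetical)
N_OPTIONS = 3   # high-level strategies, e.g. awareness / conversion / retention
N_ACTIONS = 5   # tactical bid levels available within an option

# Tabular stand-ins for the strategic option-value and tactical Q-functions
Q_high = np.zeros((N_STATES, N_OPTIONS))
Q_low = np.zeros((N_OPTIONS, N_STATES, N_ACTIONS))

def simulated_reward(state, option, action):
    # Toy click-through proxy: each (state, option) pair has a preferred bid level
    best = (state + option) % N_ACTIONS
    return 1.0 - 0.2 * abs(action - best) + rng.normal(0, 0.05)

def select(q_row, eps):
    # Epsilon-greedy choice over a row of Q-values
    return int(rng.integers(len(q_row))) if rng.random() < eps else int(np.argmax(q_row))

alpha, eps = 0.1, 0.2
for episode in range(3000):
    s = int(rng.integers(N_STATES))
    o = select(Q_high[s], eps)        # strategic tier picks an option
    a = select(Q_low[o, s], eps)      # tactical tier picks an action within it
    r = simulated_reward(s, o, a)
    Q_low[o, s, a] += alpha * (r - Q_low[o, s, a])  # one-step bandit updates
    Q_high[s, o] += alpha * (r - Q_high[s, o])

def explain(state):
    # Decision trace: which option was chosen, which action, and the
    # option values that justify the strategic choice
    o = int(np.argmax(Q_high[state]))
    a = int(np.argmax(Q_low[o, state]))
    return {"state": state, "option": o, "action": a,
            "option_values": Q_high[state].round(2).tolist()}

print(explain(2))
```

The point of the sketch is the explanation interface: because the hierarchy factors the decision into a named strategic option and a tactical action, the trace returned by `explain` exposes both levels, which is the kind of human-readable justification the abstract's comprehensibility results refer to.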

KEYWORDS

Explainable artificial intelligence; Hierarchical reinforcement learning; Digital advertising; Option-critic architecture; Deep Q-networks; Transparent decision-making; User experience optimization

CITE THIS PAPER

Miguel Torres. Explainable hierarchical RL for transparent decision-making in digital advertising ecosystems. AI and Data Science Journal. 2025, 2(2): 42-49. DOI: https://doi.org/10.61784/adsj3024.

REFERENCES

[1] Doerr S, Lautermann C. Beyond direct stakeholders: The extensive scope of societal Corporate Digital Responsibility (CDR). Organizational Dynamics, 2024, 53(2): 101057.

[2] Jin J, Xing S, Ji E, et al. XGate: Explainable Reinforcement Learning for Transparent and Trustworthy API Traffic Management in IoT Sensor Networks. Sensors (Basel, Switzerland), 2025, 25(7): 2183.

[3] Zhang H, Ge Y, Zhao X, Wang J. Hierarchical Deep Reinforcement Learning for Multi-Objective Integrated Circuit Physical Layout Optimization with Congestion-Aware Reward Shaping. IEEE Access, 2025.

[4] Sun T, Yang J, Li J, et al. Enhancing auto insurance risk evaluation with transformer and SHAP. IEEE Access, 2024.

[5] Gupta A, Garg P, Narooka P, et al. Applications of Machine Learning in Marketing: Personalization, Targeting, and Customer Engagement. In International Conference on Sustainable Computing and Intelligent Systems (pp. 145-156). Singapore: Springer Nature Singapore, 2024.

[6] Grochowski M, Jablonowska A, Lagioia F, et al. Algorithmic transparency and explainability for EU consumer protection: unwrapping the regulatory premises. Critical Analysis of Law, 2021, 8: 43.

[7] Malgieri G. Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 2019, 35(5): 105327.

[8] Alexander C B. The general data protection regulation and California consumer privacy act: The economic impact and future of data privacy regulations. Loyola Consumer Law Review, 2019, 32: 199.

[9] Indriani D, Haris A, Nurdin M. Digital Marketing and Consumer Engagement: A Systematic Review. Amkop Management Accounting Review (AMAR), 2023, 3(2): 75-89.

[10] Jain A, Khan S. Optimizing cost per click for digital advertising campaigns. arXiv preprint arXiv:2108.00747, 2021.

[11] Stogiannos N, Malik R, Kumar A, et al. Black box no more: a scoping review of AI governance frameworks to guide procurement and adoption of AI in medical imaging and radiotherapy in the UK. The British Journal of Radiology, 2023, 96(1152): 20221157.

[12] Maslowska E, Malthouse E C, Hollebeek L D. The role of recommender systems in fostering consumers' long-term platform engagement. Journal of Service Management, 2022, 33(4/5): 721-732.

[13] Ji E, Wang Y, Xing S, et al. Hierarchical Reinforcement Learning for Energy-Efficient API Traffic Optimization in Large-Scale Advertising Systems. IEEE Access, 2025.

[14] Vouros G A. Explainable deep reinforcement learning: state of the art and challenges. ACM Computing Surveys, 2022, 55(5): 1-39.

[15] Kumar S, Datta S, Singh V, et al. Applications, challenges, and future directions of human-in-the-loop learning. IEEE Access, 2024, 12: 75735-75760.

[16] Ahmed I, Jeon G, Piccialli F. From artificial intelligence to explainable artificial intelligence in industry 4.0: a survey on what, how, and where. IEEE Transactions on Industrial Informatics, 2022, 18(8): 5031-5042.

[17] Kamath U, Liu J. Explainable artificial intelligence: An introduction to interpretable machine learning. Springer, 2021.

[18] Islam S R, Eberle W, Ghafoor S K, et al. Explainable artificial intelligence approaches: A survey. arXiv preprint arXiv:2101.09429, 2021.

[19] Radulovic N. Post-hoc Explainable AI for Black Box Models on Tabular Data. Doctoral dissertation, Institut Polytechnique de Paris, 2023.

[20] Boppiniti S T. Evolution of Reinforcement Learning: From Q-Learning to Deep. Available at SSRN 5061696, 2021.

[21] Jang B, Kim M, Harerimana G, et al. Q-learning algorithms: A comprehensive classification and applications. IEEE Access, 2019, 7: 133653-133667.

[22] Chinnaraju A. Explainable AI (XAI) for trustworthy and transparent decision-making: A theoretical framework for AI interpretability. World Journal of Advanced Engineering Technology and Sciences, 2025, 14(3): 170-207.

[23] Chen S, Liu Y, Zhang Q, et al. Multi-Distance Spatial-Temporal Graph Neural Network for Anomaly Detection in Blockchain Transactions. Advanced Intelligent Systems, 2025, 2400898.

[24] Zhang Q, Chen S, Liu W. Balanced Knowledge Transfer in MTTL-ClinicalBERT: A Symmetrical Multi-Task Learning Framework for Clinical Text Classification. Symmetry, 2025, 17(6): 823.

[25] Shao Z, Wang X, Ji E, et al. GNN-EADD: Graph Neural Network-based E-commerce Anomaly Detection via Dual-stage Learning. IEEE Access, 2025.

[26] Li P, Ren S, Zhang Q, et al. Think4SCND: Reinforcement Learning with Thinking Model for Dynamic Supply Chain Network Design. IEEE Access, 2024.

[27] Ren S, Jin J, Niu G, et al. ARCS: Adaptive Reinforcement Learning Framework for Automated Cybersecurity Incident Response Strategy Optimization. Applied Sciences, 2025, 15(2): 951.

[28] Cao J, Zheng W, Ge Y, et al. DriftShield: Autonomous fraud detection via actor-critic reinforcement learning with dynamic feature reweighting. IEEE Open Journal of the Computer Society, 2025.

[29] Wang J, Liu J, Zheng W, et al. Temporal Heterogeneous Graph Contrastive Learning for Fraud Detection in Credit Card Transactions. IEEE Access, 2025.

[30] Mai N T, Cao W, Liu W. Interpretable Knowledge Tracing via Transformer-Bayesian Hybrid Networks: Learning Temporal Dependencies and Causal Structures in Educational Data. Applied Sciences, 2025, 15(17): 9605.

[31] Cao W, Mai N T, Liu W. Adaptive knowledge assessment via symmetric hierarchical Bayesian neural networks with graph symmetry-aware concept dependencies. Symmetry, 2025, 17(8): 1332.

[32] Mai N T, Cao W, Wang Y. The global belonging support framework: Enhancing equity and access for international graduate students. Journal of International Students, 2025, 15(9): 141-160.

[33] Tan Y, Wu B, Cao J, et al. LLaMA-UTP: Knowledge-Guided Expert Mixture for Analyzing Uncertain Tax Positions. IEEE Access, 2025.

[34] Mohseni S, Zarei N, Ragan E D. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 2021, 11(3-4): 1-45.

[35] Patel A, Mishra A. Intelligent bargaining agents in digital marketplaces: a fusion of reinforcement learning and game-theoretic principles. International Journal of Advanced Artificial Intelligence Research, 2025, 2(03): 6-12.

All published work is licensed under a Creative Commons Attribution 4.0 International License.