Science, Technology, Engineering and Mathematics.
Open Access

HIERARCHICAL DEEP REINFORCEMENT LEARNING FRAMEWORK FOR ADAPTIVE CPU SCHEDULING IN HYBRID TRANSACTIONAL-ANALYTICAL DATABASES


Volume 2, Issue 2, Pp 20-27, 2025

DOI: https://doi.org/10.61784/adsj3021

Author(s)

Nur Aisyah1, Mehdi Benali2*

Affiliation(s)

1University of Malaya, Kuala Lumpur, Malaysia.

2Mohammed V University, Rabat, Morocco.

Corresponding Author

Mehdi Benali

ABSTRACT

Hybrid Transactional-Analytical Processing (HTAP) databases face significant challenges in CPU resource allocation due to the conflicting requirements of Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) workloads. Traditional static scheduling approaches fail to adapt to dynamic workload patterns, leading to suboptimal performance and inefficient resource utilization. The diverse characteristics of transactional and analytical queries require sophisticated scheduling strategies that can balance latency-sensitive transaction processing with throughput-oriented analytical operations. This study proposes a Hierarchical Deep Reinforcement Learning (HDRL) framework for adaptive CPU scheduling in HTAP database systems. The framework employs a two-level architecture in which a high-level agent manages workload prioritization between OLTP and OLAP components, while low-level agents optimize resource allocation within each processing type. Deep Q-Networks (DQN) and Actor-Critic algorithms enable dynamic adaptation to changing workload patterns and system conditions. Experimental evaluation using industry-standard benchmarks demonstrates that the proposed framework achieves a 34% improvement in overall system throughput while reducing OLTP query latency by 28% compared to traditional scheduling methods. The hierarchical approach successfully balances competing workload demands and adapts to varying system conditions, resulting in enhanced resource utilization efficiency and improved Quality of Service (QoS) guarantees across both transactional and analytical processing requirements.
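The two-level structure described in the abstract can be illustrated with a minimal sketch: a high-level agent learns what fraction of CPU cores to dedicate to the OLTP partition, and a low-level routine allocates cores within each partition. This is a toy tabular Q-learning stand-in for the paper's DQN-based design, not the authors' implementation; all names (`HighLevelAgent`, `split_cores`, the demand values) and the reward shape are illustrative assumptions.

```python
import random

# Hypothetical sketch of a two-level HTAP CPU scheduler (not the paper's code).
ACTIONS = [0.3, 0.5, 0.7]  # candidate OLTP core shares the top-level agent can pick

class HighLevelAgent:
    """Tabular Q-learning stand-in for the paper's DQN-based top-level agent."""
    def __init__(self, n_states=3, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {(s, a): 0.0 for s in range(n_states) for a in range(len(ACTIONS))}
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        # Epsilon-greedy action selection over the discrete share choices.
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(s2, a2)] for a2 in range(len(ACTIONS)))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def split_cores(total_cores, oltp_share):
    """Low-level allocation: partition cores, guaranteeing at least one per side."""
    oltp = max(1, min(total_cores - 1, round(total_cores * oltp_share)))
    return oltp, total_cores - oltp

def reward(oltp_cores, olap_cores, oltp_demand, olap_demand):
    """Toy reward: penalize unmet core demand on either side."""
    return -(max(0, oltp_demand - oltp_cores) + max(0, olap_demand - olap_cores))

random.seed(0)
agent = HighLevelAgent()
state = 0  # e.g. a coarse bucket of the observed OLTP:OLAP request ratio
for step in range(500):
    a = agent.act(state)
    oltp, olap = split_cores(16, ACTIONS[a])
    r = reward(oltp, olap, oltp_demand=10, olap_demand=6)  # OLTP-heavy phase
    agent.update(state, a, r, state)

best = max(range(len(ACTIONS)), key=lambda a: agent.q[(state, a)])
print("learned OLTP share:", ACTIONS[best])
```

Under the simulated OLTP-heavy demand, the high-level agent converges to the largest OLTP share; a real HDRL system would replace the tabular Q-values with neural function approximation and the static demands with live workload telemetry.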

KEYWORDS

Hierarchical reinforcement learning; CPU scheduling; HTAP databases; Deep Q-Networks; Adaptive resource management; OLTP-OLAP optimization; Database performance; Workload balancing

CITE THIS PAPER

Nur Aisyah, Mehdi Benali. Hierarchical deep reinforcement learning framework for adaptive CPU scheduling in hybrid transactional-analytical databases. AI and Data Science Journal. 2025, 2(2): 20-27. DOI: https://doi.org/10.61784/adsj3021.

REFERENCES

[1] Boroumand A, Ghose S, Oliveira G F, et al. Enabling high-performance and energy-efficient hybrid transactional/analytical databases with hardware/software cooperation. arXiv preprint arXiv:2204.11275, 2022.

[2] Dritsas E, Trigka M. A Survey on Database Systems in the Big Data Era: Architectures, Performance, and Open Challenges. IEEE Access, 2025.

[3] Raza A, Chrysogelos P, Anadiotis A C, et al. Adaptive HTAP through elastic resource scheduling. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 2020: 2043-2054.

[4] Gheibi O, Weyns D, Quin F. Applying machine learning in self-adaptive systems: A systematic literature review. ACM Transactions on Autonomous and Adaptive Systems (TAAS), 2021, 15(3): 1-37.

[5] Shyalika C, Silva T, Karunananda A. Reinforcement learning in dynamic task scheduling: A review. SN Computer Science, 2020, 1(6): 306.

[6] Pérez-Dattari R, Celemin C, Ruiz-del-Solar J, et al. Continuous control for high-dimensional state spaces: An interactive learning approach. In 2019 IEEE International Conference on Robotics and Automation (ICRA), 2019: 7611-7617.

[7] Xing S, Wang Y, Liu W. Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning. Symmetry, 2025, 17(7): 1109.

[8] Eisen M, Zhang C, Chamon L, et al. Learning optimal resource allocations in wireless systems. IEEE Transactions on Signal Processing, 2019, 67(10): 2775-2790.

[9] Shanker A, Ahmad N. Optimizing Network Performance with Load Balancing Techniques in Heterogeneous Environments, 2024.

[10] Fernández-Cerero D, Troyano J A, Jakóbik A, et al. Machine learning regression to boost scheduling performance in hyper-scale cloud-computing data centres. Journal of King Saud University-Computer and Information Sciences, 2022, 34(6): 3191-3203.

[11] Srikanth G U, Geetha R. Effectiveness review of the machine learning algorithms for scheduling in cloud environment. Archives of Computational Methods in Engineering, 2023, 30(6): 3769-3789.

[12] Jalali Khalil Abadi Z, Mansouri N, Javidi M M. Deep reinforcement learning-based scheduling in distributed systems: a critical review. Knowledge and Information Systems, 2024, 66(10): 5709-5782.

[13] Munikoti S, Agarwal D, Das L, et al. Challenges and opportunities in deep reinforcement learning with graph neural networks: A comprehensive review of algorithms and applications. IEEE Transactions on Neural Networks and Learning Systems, 2023, 35(11): 15051-15071.

[14] Cao W, Mai N, Liu W. Adaptive Knowledge Assessment via Symmetric Hierarchical Bayesian Neural Networks with Graph Symmetry-Aware Concept Dependencies. Symmetry, 2025.

[15] Zheng W, Liu W. Symmetry-Aware Transformers for Asymmetric Causal Discovery in Financial Time Series. Symmetry, 2025.

[16] Ghafari R, Kabutarkhani F H, Mansouri N. Task scheduling algorithms for energy optimization in cloud environment: a comprehensive review. Cluster Computing, 2022, 25(2): 1035-1093.

[17] Hutsebaut-Buysse M, Mets K, Latré S. Hierarchical reinforcement learning: A survey and open research challenges. Machine Learning and Knowledge Extraction, 2022, 4(1): 172-221.

[18] Wang Y, Xing S. AI-Driven CPU Resource Management in Cloud Operating Systems. Journal of Computer and Communications, 2025, 13(6): 135-149.

[19] Pateria S, Subagdja B, Tan A H, et al. Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 2021, 54(5): 1-35.

[20] Xing S, Wang Y. Proactive Data Placement in Heterogeneous Storage Systems via Predictive Multi-Objective Reinforcement Learning. IEEE Access, 2025.

[21] Malakar K D, Roy S, Kumar M. Database Management System: Foundations and Practices. In Geospatial Technologies in Coastal Ecologies Monitoring and Management Cham: Springer Nature Switzerland, 2025: 191-255.

[22] Hu X, Guo L, Wang J, et al. Computational fluid dynamics and machine learning integration for evaluating solar thermal collector efficiency-Based parameter analysis. Scientific Reports, 2025, 15(1): 24528.

[23] Mai N, Cao W. Personalized Learning and Adaptive Systems: AI-Driven Educational Innovation and Student Outcome Enhancement. International Journal of Education and Humanities, 2025.

[24] Colley D. Development of a Dynamic Design Framework for Relational Database Performance Optimisation. Doctoral dissertation, Staffordshire University, 2025.

[25] LuoLe Zhou, ZuChang Zhong, XiaoMin Liang, et al. The dual effects of a country’s overseas patent network layout on its export: scale-up or quality improvement. Social Science and Management, 2025, 2(2): 12-29. https://doi.org/10.61784/ssm3046.

[26] XiaoBo Yu, LiFei He, XiaoDong Yu, et al. The generative logic of junior high school students' educational sense of gain from the perspective of "psychological-institutional dual-dimensional fairness". Journal of Language, Culture and Education Studies, 2025, 2(1): 39-44. https://doi.org/10.61784/jlces3015.

[27] Jiang B, Wu B, Cao J, et al. Interpretable Fair Value Hierarchy Classification via Hybrid Transformer-GNN Architecture. IEEE Access, 2025.

[28] XiaoBo Yu, LiFei He, XiaoDong Yu, et al. The formation mechanism and enhancement path of junior high school students’ academic gain under the background of “Double Reduction”. Educational Research and Human Development, 2025, 2(2): 30-35. https://doi.org/10.61784/erhd3041.

[29] Ji E, Wang Y, Xing S, et al. Hierarchical Reinforcement Learning for Energy-Efficient API Traffic Optimization in Large-Scale Advertising Systems. IEEE Access, 2025.

[30] Canese L, Cardarilli G C, Di Nunzio L, et al. Multi-agent reinforcement learning: A review of challenges and applications. Applied Sciences, 2021, 11(11): 4948.

All published work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © 2017 - 2025 Science, Technology, Engineering and Mathematics.   All Rights Reserved.