Schroer, Karsten (ORCID: 0000-0002-5443-1696), Ahadi, Ramin (ORCID: 0000-0002-8447-5008), Ketter, Wolfgang (ORCID: 0000-0001-9008-142X) and Lee, Thomas Y. (2025). Data-driven planning of large-scale electric vehicle charging hubs using deep reinforcement learning. Transportation Research Part C: Emerging Technologies, 177, pp. 1-27. Elsevier. ISSN 0968-090X
PDF: 1-s2.0-S0968090X25001305-main.pdf (3MB). Available under a Creative Commons Attribution license.
Abstract
[Article no.: 105126] We consider the problem of planning large-scale service systems, specifically electric vehicle (EV) charging hubs (EVCHs). EVCHs are locally concentrated clusters of charging infrastructure, e.g., in large parking lots, and are often integrated with on-site generation, storage, and adjacent building infrastructure. Planning such complex operational systems over a multi-year investment horizon is a high-dimensional, dynamic, and stochastic decision problem. Such planning problems typically rely on mathematical optimization frameworks that face computational challenges (e.g., NP-hardness) limiting scalability to practical system sizes. As a result, simplifying assumptions related to, for example, temporal granularity, operational detail, system size, decision horizon, or stochasticity are required to achieve tractability. Modern reinforcement learning (RL) approaches, combined with fine-grained data-driven simulation frameworks, also known as Digital Twins (DTs), may circumvent these shortcomings. We develop a scalable soft actor-critic (SAC) reinforcement learning method that learns near-optimal EVCH configurations against a minimum-cost objective. Our method uses a highly detailed DT of the EVCH environment that is bootstrapped with unique real-world sensor data from parking lots, charging stations, office buildings, and solar generation facilities, along with microscopic simulations of practical parking and charging policies. In extensive computational experiments, we provide empirical evidence that the proposed SAC RL algorithm converges close to the global optimum (4%–15% gap), outperforming popular alternative RL approaches such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradients (DDPG). We also demonstrate the superior scalability of our method to real-world problem sizes of up to 1000 charging spots.
Finally, we run scenario analyses that explore the impact of user preferences and operational choices on planning decisions, thus providing actionable and novel policy guidance for EVCH planners and operators.
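The abstract's SAC method itself is not reproduced here, but the entropy-regularized value that distinguishes soft actor-critic from plain actor-critic can be sketched for a discrete toy policy. This is a minimal illustration only; the function name, Q-estimates, and action probabilities below are illustrative assumptions, not the paper's code or data.

```python
import numpy as np

def soft_state_value(q_values, policy_probs, alpha):
    """Soft value of one state: V(s) = E_{a~pi}[Q(s,a)] + alpha * H(pi).

    SAC maximizes expected return plus an entropy bonus weighted by the
    temperature alpha, which encourages exploratory stochastic policies.
    """
    q_values = np.asarray(q_values, dtype=float)
    policy_probs = np.asarray(policy_probs, dtype=float)
    entropy = -np.sum(policy_probs * np.log(policy_probs))
    return float(np.sum(policy_probs * q_values) + alpha * entropy)

# Illustrative example: two hypothetical "expand capacity" actions for one
# EVCH planning state, with made-up critic estimates and policy weights.
q = [10.0, 8.0]    # critic's Q-estimates (illustrative)
pi = [0.7, 0.3]    # current stochastic policy
print(soft_state_value(q, pi, alpha=0.2))
```

With alpha = 0 this reduces to the ordinary expected Q-value under the policy; a larger alpha rewards keeping the policy stochastic during training.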
| Item Type: | Article |
| Creators: | Schroer, Karsten; Ahadi, Ramin; Ketter, Wolfgang; Lee, Thomas Y. |
| URN: | urn:nbn:de:hbz:38-803571 |
| Identification Number (DOI): | 10.1016/j.trc.2025.105126 |
| Journal or Publication Title: | Transportation Research Part C: Emerging Technologies |
| Volume: | 177 |
| Page Range: | pp. 1-27 |
| Number of Pages: | 27 |
| Date: | August 2025 |
| Publisher: | Elsevier |
| ISSN: | 0968-090X |
| Language: | English |
| Faculty: | Faculty of Management, Economics and Social Sciences |
| Divisions: | Faculty of Management, Economics and Social Sciences > Business Administration > Information Systems > Chair for Information Systems and Systems Development |
| Subjects: | Data processing; Computer science; Economics |
| Uncontrolled Keywords: | Digital twin; Reinforcement learning; Asset planning; Electric vehicle charging hubs |
| Open Access Funders: | Publikationsfonds UzK |
| Refereed: | Yes |
| URI: | http://kups.ub.uni-koeln.de/id/eprint/80357 |