Semi-Markov Reinforcement Learning for City-Scale EV Ride-Hailing with Feasibility-Guaranteed Actions
An Nguyen, Hoang Nguyen, Phuong Le, Hung Pham, Cuong Do, et al.
TLDR
A semi-Markov RL approach with feasibility-guaranteed actions optimizes city-scale EV ride-hailing, reaching $1.22M net profit with zero feeder-limit violations in NYC taxi simulations.
Key contributions
- Formulates city-scale EV ride-hailing as a hex-grid semi-MDP with mixed actions.
- Guarantees action feasibility using a time-limited MILP projection at each decision step.
- Robustifies a Soft Actor-Critic (SAC) agent against demand uncertainty via a Wasserstein-1 ambiguity set with a graph-aligned Mahalanobis ground metric.
- Achieves $1.22M profit and zero feeder-limit violations in NYC taxi simulations.
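To make the feasibility guarantee concrete, here is a minimal sketch of projecting the actor's charging intentions onto physical limits. The paper solves a time-limited rolling MILP over the full mixed action space (state-of-charge, port, and feeder constraints); this toy version handles only the continuous power component with proportional scaling, and all names and limits are hypothetical.

```python
import numpy as np

def project_charging_powers(intent_kw, port_max_kw, feeder_cap_kw):
    """Project intended charging powers at one station onto hard limits.

    intent_kw:     proposed per-EV charging power from the actor (kW).
    port_max_kw:   per-port hardware limit (kW).
    feeder_cap_kw: total power the station's feeder can supply (kW).

    Simplified stand-in for the paper's MILP projection: clip each port
    to its limit, then scale all ports down proportionally if the shared
    feeder cap binds. The result is always physically feasible.
    """
    p = np.clip(np.asarray(intent_kw, dtype=float), 0.0, port_max_kw)
    total = p.sum()
    if total > feeder_cap_kw:  # feeder limit binds: scale down proportionally
        p *= feeder_cap_kw / total
    return p

# Three EVs request more than the 180 kW feeder can deliver.
powers = project_charging_powers([80.0, 120.0, 50.0],
                                 port_max_kw=100.0, feeder_cap_kw=180.0)
# Each port stays within 100 kW and the total never exceeds 180 kW.
```

Because the projection runs at every decision step, infeasible intentions never reach the environment, during training or deployment.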
Why it matters
This paper addresses a critical challenge in sustainable urban mobility: efficiently managing large EV ride-hailing fleets. By ensuring physical feasibility and robustness to uncertainty, it offers a practical solution to maximize profits while preventing infrastructure overload. This could significantly improve the viability and adoption of EV ride-hailing services.
Original Abstract
We study city-scale control of electric-vehicle (EV) ride-hailing fleets where dispatch, repositioning, and charging decisions must respect charger and feeder limits under uncertain, spatially correlated demand and travel times. We formulate the problem as a hex-grid semi-Markov decision process (semi-MDP) with mixed actions (discrete actions for serving, repositioning, and charging, together with continuous charging power) and variable action durations. To guarantee physical feasibility during both training and deployment, the policy learns over high-level intentions produced by a masked, temperature-annealed actor. These intentions are projected at every decision step through a time-limited rolling mixed-integer linear program (MILP) that strictly enforces state-of-charge, port, and feeder constraints. To mitigate distributional shifts, we optimize a Soft Actor-Critic (SAC) agent against a Wasserstein-1 ambiguity set with a graph-aligned Mahalanobis ground metric that captures spatial correlations. The robust backup uses the Kantorovich-Rubinstein dual, a projected subgradient inner loop, and a primal-dual risk-budget update. Our architecture combines a two-layer Graph Convolutional Network (GCN) encoder, twin critics, and a value network that drives the adversary. Experiments on a large-scale EV fleet simulator built from NYC taxi data show that PD-RSAC achieves the highest net profit, reaching $1.22M, compared with $0.58M–$0.70M for strong heuristic, single-agent RL, and multi-agent RL baselines, including Greedy, SAC, MAPPO, and MADDPG, while maintaining zero feeder-limit violations.
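The Kantorovich-Rubinstein dual mentioned in the abstract turns the worst-case expectation over a Wasserstein-1 ball into a one-dimensional search over a dual multiplier. The toy sketch below illustrates that dual on a 1-D state space: a grid search stands in for the paper's projected-subgradient inner loop, the absolute difference stands in for the graph-aligned Mahalanobis ground metric, and the value function and sample points are hypothetical.

```python
import numpy as np

def robust_value_kr_dual(V_grid, xs, samples, eps, lambdas):
    """Worst-case expected value over a Wasserstein-1 ball of radius eps
    around the empirical distribution of `samples`, via the
    Kantorovich-Rubinstein dual:
        sup_{lam >= 0}  -lam*eps + E_P[ min_{x'} ( V(x') + lam*d(x, x') ) ].
    Toy 1-D version: the inner minimization is a grid search over `xs`
    and d(x, x') = |x - x'|.
    """
    best = -np.inf
    for lam in lambdas:
        # Inner adversarial perturbation of each sample, then empirical mean.
        inner = np.mean([np.min(V_grid + lam * np.abs(xs - x)) for x in samples])
        best = max(best, -lam * eps + inner)
    return best

xs = np.linspace(-3.0, 5.0, 801)       # candidate perturbed states
V = lambda x: -(x - 1.0) ** 2          # toy value function (hypothetical)
samples = np.array([0.0, 1.0, 2.0])    # nominal next-state samples
nominal = np.mean(V(samples))
robust = robust_value_kr_dual(V(xs), xs, samples, eps=0.5,
                              lambdas=np.linspace(0.0, 10.0, 101))
# robust <= nominal: the adversary can only lower the expected value.
```

In the paper this robust backup replaces the plain expectation in the SAC critic target, with a primal-dual update adapting the risk budget eps; the sketch only shows the dual evaluation itself.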