SwarmDrive: Semantic V2V Coordination for Latency-Constrained Cooperative Autonomous Driving


Cloud-hosted LLM inference for autonomous driving adds round-trip delay and depends on stable connectivity, while purely local edge models struggle under occlusion. We present SwarmDrive, a semantic Vehicle-to-Vehicle (V2V) coordination framework in which nearby vehicles run local Small Language Models (SLMs), share compact intent distributions only when uncertainty is high, and fuse them through event-triggered consensus. We evaluate SwarmDrive in a 5-seed executable study built around one occluded intersection case, combining matched operating-point comparisons with robustness sweeps. In that setting, SwarmDrive under its 6G communication setting ("Swarm 6G") raises success from 68.9% to 94.1% over a single local SLM while reducing latency from a 510 ms cloud reference to 151.4 ms. However, increasing the number of participating vehicles raises communication overhead and packet loss. We also run swarm-size, packet-loss, and entropy-threshold sweeps, showing that the cooperative gain holds across ablations and is best balanced near an active swarm size of 4 vehicles and an entropy trigger threshold of 0.65 in the current prototype. These results show that semantic edge cooperation can work under tight latency constraints in the targeted intersection case, but they are not a deployment-grade validation of a real 6G stack.

Original article: arXiv cs.AI

Opening excerpt (first ~120 words)

Computer Science > Robotics. arXiv:2604.22852 (cs). Submitted on 22 Apr 2026. Title: SwarmDrive: Semantic V2V Coordination for Latency-Constrained Cooperative Autonomous Driving. Authors: Anjie Qiu, Donglin Wang, Zexin Fang, Sanket Partani, Hans D. Schotten. Abstract: Cloud-hosted LLM inference for autonomous driving adds round-trip delay and depends on stable connectivity, while purely local edge models struggle under occlusion.

Excerpt limited to ~120 words for fair-use compliance. The full article is at arXiv cs.AI.

