ZENODO
Software, 2026
License: CC BY
Data sources: ZENODO

NS3 Simulation of CERN SPS Robot behaviour under communication drop and recovery (Comparison Congestion Control TCP Reno, Cubic, Westwood and Vegas)

Authors: Marin, Raul; Forkel, David; Marin Garces, Josep; Cervera, Enric; Matheson, Eloise; Di Castro, Mario


Abstract

This record provides ns-3 simulation code and complete result artifacts for a comparative evaluation of TCP Reno, TCP CUBIC, TCP Westwood(+), and TCP Vegas in a supervised teleoperation networking scenario inspired by the CERN SPS tunnel. The goal is to study protocol behavior under time-varying WiFi connectivity (coverage loss and reconnection) and heterogeneous cross-traffic, with emphasis on loss, delay/jitter, and post-reconnection recovery (buffer draining and burstiness).

Scenario overview

The simulated topology models a robot inside the tunnel and a remote operator on the surface, using four ns-3 nodes to explicitly represent the robot's internal network stack:
- n1 (Robot-PC): application sender for uplink sensing streams and receiver for downlink control.
- n2 (Robot-NIC, WiFi STA): robot wireless network interface.
- n3 (AP/Router): tunnel access point.
- n4 (Control Station / GroundStation): uplink data sink and downlink control source.

Links are configured as follows:
- n1–n2: internal robot P2P Ethernet, 100 Mbps, 1 ms delay, large DropTail buffer.
- n2–n3: IEEE 802.11n WiFi, configured with reduced transmit power / receiver gain to emulate limited tunnel coverage; large WiFi MAC queue (MaxSize and MaxDelay configured in the script).
- n3–n4: wired P2P backhaul, 50 Mbps, 100 ms delay, DropTail buffer of 100 packets.

A simple mobility pattern drives coverage degradation and reconnection, producing a realistic failure mode observed in real SPS trials: during low-connectivity intervals, packets accumulate in buffers; upon reconnection, the buffered packets are released in bursts, yielding multi-second tail delays (on the order of ~6 s), consistent with measurements from SPS experiments.

Traffic configuration (flows)

The simulation generates multiple concurrent application-level CBR-like flows:
- UDP camera (n1 → n4): 8 Mbps, 1024 B packets (unresponsive to congestion).
- TCP LiDAR 3D (n1 → n4): 32 Mbps, 1484 B packets.
- TCP LiDAR 2D (n1 → n4): 600 kbps, 15000 B packets.
- TCP semantic data (n1 → n4): 5 kbps, 512 B packets.
- TCP command/control (n4 → n1): 83.68 kbps, 1024 B packets.

All TCP flows are executed under each congestion-control variant (Reno, CUBIC, Westwood(+), Vegas), while the UDP camera stream is kept identical across runs.

TCP variants included
- TCP Reno (loss-based AIMD baseline)
- TCP CUBIC (the default in typical Linux/Ubuntu deployments; aggressive window growth suited to higher-BDP paths)
- TCP Westwood(+) (bandwidth estimation from the ACK arrival rate)
- TCP Vegas (delay-based congestion avoidance)

Contents of this archive

The record includes:
- ns-3 C++ simulation scripts for each TCP variant (or a single script with clearly marked lines to switch the TcpL4Protocol::SocketType).
- Reproducibility artifacts for each configuration, including: PCAP captures (per link/interface, as configured), FlowMonitor outputs (XML and/or serialized outputs), logs and ASCII traces (throughput/delay/cwnd and other debug traces, as generated by the scripts), and any post-processed summaries used to build the tables and figures in the associated study.

How to reproduce
1. Install ns-3 and build it in a standard environment (Linux recommended).
2. Copy the provided script(s) into your ns-3 scratch/ directory (or the appropriate directory used by your workflow).
3. Compile and run the desired variant(s) (Reno/CUBIC/Westwood/Vegas), following the instructions in the included README and/or the script header comments.
4. The simulation writes FlowMonitor, PCAP, and log files into the configured output directories (see the scripts for exact paths).

Notes on interpretation

This dataset is intended to support research on supervised teleoperation under intermittent connectivity. It highlights that post-reconnection buffer draining can dominate the application experience (multi-second tail delays) even when TCP loss is low and control delivery remains reliable. These behaviors motivate congestion-control extensions above layer 4 (e.g., application-layer regulation of message publication rate and quality/resolution).
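The multi-second tail delays described above follow from simple queueing arithmetic: traffic that keeps arriving while the WiFi link is degraded accumulates in the large MAC queue, and after reconnection the backlog drains only at the rate by which the link's service capacity exceeds the ongoing offered load. A minimal back-of-the-envelope sketch of that effect follows; the outage length and the effective post-reconnection WiFi goodput are assumed illustrative values, not parameters taken from the simulation scripts:

```python
# Back-of-the-envelope estimate of post-reconnection tail delay caused by
# buffer draining. Illustrative only: the outage length and effective WiFi
# goodput below are assumptions, not values from the record.

def drain_time_s(offered_bps: float, service_bps: float, outage_s: float) -> float:
    """Time to drain the backlog built up during a connectivity outage.

    The backlog grows at offered_bps for outage_s seconds; after
    reconnection it drains at (service_bps - offered_bps), assuming
    service_bps > offered_bps (otherwise the queue never empties).
    """
    if service_bps <= offered_bps:
        raise ValueError("queue cannot drain: offered load >= service rate")
    backlog_bits = offered_bps * outage_s
    return backlog_bits / (service_bps - offered_bps)

# Aggregate uplink offered load from the record's flow list:
# 8 Mbps camera + 32 Mbps LiDAR 3D + 600 kbps LiDAR 2D + 5 kbps semantic.
offered = 8e6 + 32e6 + 600e3 + 5e3  # ~40.6 Mbps

# Assumed (hypothetical) values: a 1 s coverage gap and ~48 Mbps effective
# 802.11n goodput once the robot reconnects.
print(f"drain time ~ {drain_time_s(offered, 48e6, 1.0):.1f} s")
```

With these assumed numbers, a one-second coverage gap takes roughly 5–6 s to drain, the same order of magnitude as the ~6 s tails reported above; because the drain rate is the small difference between capacity and offered load, even short outages can dominate end-to-end delay.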
License and citation

Please cite this Zenodo record in any publication using these scripts or results (DOI assigned by Zenodo). A BibTeX entry can be added to the record once the DOI is minted.

Zenodo DOI: 10.5281/zenodo.18171879
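As noted under "Contents of this archive", a single script can cover all four variants by switching the ns-3 `ns3::TcpL4Protocol::SocketType` attribute. A small helper that builds the corresponding run commands is sketched below; the script name `sps-teleop` and the exact TypeId spellings are assumptions to verify against the included README for your ns-3 version (older releases spell Westwood `ns3::TcpWestwood` rather than `ns3::TcpWestwoodPlus`, and plain-Reno behavior is provided by `ns3::TcpNewReno`):

```python
# Hypothetical helper: map each TCP variant studied in this record to an
# ns-3 congestion-control TypeId and build the matching `./ns3 run` command
# (ns-3.36+ CLI). The script name "sps-teleop" and the TypeId spellings are
# assumptions to check against the archive's README.

TCP_VARIANTS = {
    "reno": "ns3::TcpNewReno",           # loss-based AIMD baseline
    "cubic": "ns3::TcpCubic",
    "westwood": "ns3::TcpWestwoodPlus",  # "ns3::TcpWestwood" on older releases
    "vegas": "ns3::TcpVegas",            # delay-based
}

def run_command(variant: str, script: str = "scratch/sps-teleop") -> str:
    """Build a run command that selects the congestion-control variant
    by overriding the TcpL4Protocol::SocketType attribute default."""
    type_id = TCP_VARIANTS[variant.lower()]
    return (f'./ns3 run "{script} '
            f'--ns3::TcpL4Protocol::SocketType={type_id}"')

for variant in TCP_VARIANTS:
    print(run_command(variant))
```

Passing the attribute on the command line like this only works if the script parses arguments through ns-3's `CommandLine`; otherwise the TypeId must be set in code via `Config::SetDefault`, as the record's note about "clearly marked lines to switch the TcpL4Protocol::SocketType" suggests.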
