Project: shortcut-ebpf
Description: Evaluation of Fast Recovery Methods in IP Networks (Netfilter vs. eBPF)
=== 1. Overview ===
The objective of this project is to implement and evaluate fast failover routing (SquareOne) in IP networks. It compares the throughput, latency, and packet loss of two distinct dataplane approaches:
1. A user-space approach using Netfilter (nftables + NFQUEUE).
2. A high-performance, in-kernel approach using eBPF (XDP).
The project runs inside a Mininet emulation environment. It supports both a simple hardcoded diamond topology (4 routers, 4 hosts) and complex real-world topologies imported from the Internet Topology Zoo (e.g. Oteglobe.gml).
=== 2. Architecture & How It Works ===
--- The Control Plane ---
A Python-based orchestrator uses NetworkX to parse topologies, assign IP subnets, and compute primary and backup routing paths for every node. It then spins up a Mininet network, configures link constraints (1000 Mbps bandwidth, 2 ms delay), and automatically generates and injects the routing state into the chosen dataplane.
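The idea behind the backup computation can be sketched in a few lines of NetworkX. This is a minimal illustration, not the actual code in `src/`; it uses the hardcoded diamond topology as an example:

    import networkx as nx

    # Diamond topology: r1 can reach r4 via r2 or via r3.
    g = nx.Graph()
    g.add_edges_from([("r1", "r2"), ("r2", "r4"), ("r1", "r3"), ("r3", "r4")])

    def primary_and_backup(g, src, dst):
        # Primary path: shortest path on the intact topology.
        primary = nx.shortest_path(g, src, dst)
        # Backup path: recompute with the primary first hop removed,
        # so the two paths already diverge at the source router.
        pruned = g.copy()
        pruned.remove_edge(primary[0], primary[1])
        try:
            backup = nx.shortest_path(pruned, src, dst)
        except nx.NetworkXNoPath:
            backup = None
        return primary, backup

    print(primary_and_backup(g, "r1", "r4"))
    # e.g. (['r1', 'r2', 'r4'], ['r1', 'r3', 'r4'])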
--- Dataplane A: Netfilter (Baseline) ---
- Uses standard Linux `ip route` tables with differing metrics: the backup route is installed with a higher metric than the primary, so it only takes effect once the primary route is withdrawn.
- Uses `iptables` PREROUTING rules to detect routing loops (e.g., when a packet arrives on the same interface it is destined to leave from).
- Punts looped packets to a user-space Python daemon (`listener.py`) via NFQUEUE.
- The daemon deletes the failing primary route from the kernel, forcing the system to fall back to the backup route.
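The core of this recovery loop fits in a short NFQUEUE handler. The following is a minimal sketch of the mechanism, not the actual `listener.py`: it uses the `netfilterqueue` PyPI package, assumes queue number 0, and deletes a host route for brevity (the real daemon would withdraw the covering prefix route):

    import socket
    import subprocess
    from netfilterqueue import NetfilterQueue

    def on_looped_packet(pkt):
        payload = pkt.get_payload()
        dst = socket.inet_ntoa(payload[16:20])  # IPv4 destination address
        # Withdraw the failing primary route; the backup route for the
        # same destination (higher metric) then takes over in the kernel FIB.
        subprocess.run(["ip", "route", "del", dst], check=False)
        pkt.accept()  # re-inject the packet, now forwarded via the backup

    nfq = NetfilterQueue()
    nfq.bind(0, on_looped_packet)  # must match the iptables --queue-num
    try:
        nfq.run()
    finally:
        nfq.unbind()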
--- Dataplane B: eBPF / XDP (High Performance & Embedded-Ready) ---
- eBPF XDP Program (`router.bpf.c`): A high-performance packet processor that intercepts packets at the lowest level of the network stack. It performs LPM (Longest Prefix Match) lookups, MAC address rewriting, IP checksum updates, and redirecting (XDP_REDIRECT) entirely in the kernel.
- Physical Link Monitor (`link_monitor.c`): A lightweight C daemon that runs on each router. It listens to kernel Netlink events (RTM_NEWLINK) to instantly detect when a physical interface goes down (IFF_UP flag lost) and updates an eBPF `link_state_map`.
- Fast Failover: When the XDP program sees that the primary interface is marked as down in the `link_state_map`, it instantly rewrites the packet for the backup path, achieving sub-millisecond failover without user-space routing daemon intervention.
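The link monitor itself is written in C to keep the footprint small, but the Netlink mechanism it builds on is easy to illustrate. Below is a rough Python equivalent using the `pyroute2` package; the map name in the comment mirrors the description above and is illustrative only:

    from pyroute2 import IPRoute

    IFF_UP = 0x1  # interface "up" flag in the netlink link message

    with IPRoute() as ipr:
        ipr.bind()  # subscribe to kernel netlink broadcast events
        while True:
            for msg in ipr.get():  # blocks until events arrive
                if msg['event'] != 'RTM_NEWLINK':
                    continue
                if msg['flags'] & IFF_UP:
                    continue
                ifindex = msg['index']
                name = msg.get_attr('IFLA_IFNAME')
                # The C daemon writes 1 into link_state_map[ifindex] here,
                # which the XDP program consults on every packet.
                print(f"{name} (ifindex {ifindex}) went down")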
--- Embedded-Friendly Deployment ---
To ensure the eBPF dataplane can run on stripped-down embedded hardware:
- The architecture is "Compile Once, Run Everywhere" (CO-RE).
- The eBPF C code is compiled into a standalone ELF object file (`router.bpf.o`).
- The Python orchestrator dynamically discovers network attributes (MACs, ifindexes) and generates standard Linux bash scripts using `bpftool map update`.
- These raw bash scripts and the compiled `.o` file (along with the compiled `link_monitor` binary) are all that is needed to deploy the routing logic on a target device. No Python, Clang, or LLVM is required on the routers themselves.
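For illustration, a generated deployment script reduces to a handful of standard `bpftool` invocations. The pin paths, interface name, and little-endian key/value byte layout below are examples, not the exact output of the orchestrator:
$ bpftool prog load router.bpf.o /sys/fs/bpf/router pinmaps /sys/fs/bpf/maps
$ bpftool net attach xdp pinned /sys/fs/bpf/router dev r1-eth1
$ bpftool map update pinned /sys/fs/bpf/maps/link_state_map key 2 0 0 0 value 0 0 0 0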
=== 3. Prerequisites & Dependencies ===
To run the evaluation framework, your host system must have:
- Mininet
- `iperf3` (for bandwidth testing)
- `clang` and `llvm` (to compile the eBPF dataplane)
- `bpftool` and `libbpf-dev` (for eBPF map and program management)
- `libnetfilter-queue-dev` (for the Netfilter baseline)
- Python 3.10+ and `uv` (for Python dependency management)
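On Debian/Ubuntu hosts, most of these can be installed in one step (exact package names may vary between releases; on some Ubuntu versions `bpftool` ships in `linux-tools-*` instead):
$ sudo apt install mininet iperf3 clang llvm bpftool libbpf-dev libnetfilter-queue-dev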
=== 4. Setup Instructions ===
1. Clone the repository and navigate to the project root.
2. Add the InternetTopologyZoo dataset:
$ git clone https://github.com/mroughan/InternetTopologyZoo.git
3. Install Python dependencies using `uv`:
$ uv sync
4. (Optional) Build the eBPF objects manually to ensure compilation works:
$ make -C src/dataplane/ebpf
=== 5. Running the Experiments ===
The evaluation harness (`run_eval.py`) fully automates each experiment: it spins up the network, starts background flows, runs the primary traffic generators, triggers a link failure midway through the test, and gathers the results.
Run an automated evaluation suite (eBPF dataplane, bandwidth test, 3 runs):
$ sudo uv run run_eval.py --runs 3 --test-type bandwidth --dataplane ebpf
Run the evaluation on a large Topology Zoo network:
$ sudo uv run run_eval.py --runs 1 --test-type both --dataplane ebpf --zoo InternetTopologyZoo/gml/Oteglobe.gml
Compare against the Netfilter baseline:
$ sudo uv run run_eval.py --runs 1 --test-type bandwidth --dataplane netfilter
Drop into the Mininet CLI for manual testing (skip automated tests):
$ sudo uv run src/main.py --dataplane ebpf --test-type both
=== 6. Results & Output ===
All test data, logs, and compiled scripts are saved in the auto-generated `results/` directory in the project root:
- Raw Per-Run Logs: `results/run_<id>/iperf_client_*.log` and `results/run_<id>/ping_*.log`
- Generated Deployment Scripts: `results/ebpf_setup_*.sh`
- Aggregated Bandwidth Plot: `results/plot_bandwidth_<dataplane>.pdf`
- Aggregated Latency Plot: `results/plot_latency_<dataplane>.pdf`
The Python plotting module automatically parses the output logs across all runs, calculates averages, and generates publication-ready PDF plots showing throughput drops and recovery times.
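The parsing logic is specific to this project's log format, but the aggregation step amounts to averaging per-second samples across runs, roughly as follows. This sketch assumes the iperf3 clients were run with --json; the real module in `src/` may parse differently:

    import glob
    import json
    import matplotlib.pyplot as plt

    series = []
    for path in sorted(glob.glob("results/run_*/iperf_client_*.log")):
        with open(path) as f:
            data = json.load(f)
        # Per-second throughput in Mbit/s for this run.
        series.append([iv["sum"]["bits_per_second"] / 1e6
                       for iv in data["intervals"]])

    n = min(len(run) for run in series)  # clip all runs to a common length
    avg = [sum(run[t] for run in series) / len(series) for t in range(n)]

    plt.plot(range(n), avg)
    plt.xlabel("Time (s)")
    plt.ylabel("Throughput (Mbit/s)")
    plt.savefig("results/plot_bandwidth_example.pdf")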