Artifact Evaluation for Synthetic Benchmark in ROS 2 Scheduling

This document provides instructions for evaluating the artifact associated with our paper on comparing end-to-end latencies in ROS 2 scheduling.

System Requirements

  • Operating System: Linux (Ubuntu 20.04 or later recommended)
  • RAM: Minimum 8 GB
  • CPU: Minimum 4 cores
  • Disk Space: Minimum 10 GB free space
  • Docker: Ensure Docker is installed and running
  • VS Code: Ensure Visual Studio Code is installed with the Remote - Containers extension

Packaged Artifact

The artifact is packaged as a Docker container. You can find the Dockerfile and configuration files in the .devcontainer directory.

Setup Instructions

Using the Packaged Artifact

  1. Clone the Repository:

    git clone https://github.com/tu-dortmund-ls12-rt/Periodic-ROS2.git
    cd Periodic-ROS2
  2. Open in VS Code: Open the repository in Visual Studio Code. You should see a prompt to reopen the folder in a container. Click on "Reopen in Container".

  3. Build and Run the Container: The container will automatically build and set up the environment. This may take a few minutes.

  4. Run the Evaluation: Once the container is ready, open a terminal in VS Code and run:

    python3 evaluation.py --num_task_sets 1000

Setting Up on a Different Machine

If you prefer to set up the artifact on a different machine without using the Docker container, follow these steps:

  1. Install Dependencies: Ensure you have Python 3, pip, and the required libraries installed:

    sudo apt-get update
    sudo apt-get install -y python3-pip
    pip3 install matplotlib tabulate scipy
  2. Clone the Repository:

    git clone https://github.com/tu-dortmund-ls12-rt/Periodic-ROS2.git
    cd Periodic-ROS2
  3. Run the Evaluation:

    python3 evaluation.py --num_task_sets 1000

Reproducing Results

The evaluation script reproduces the results presented in the paper: it generates synthetic task sets, calculates response times and end-to-end latencies, and produces visualizations similar to those in the paper. Depending on the hardware, the evaluation may take up to 10 minutes to complete. While the script runs, new windows with plots will open; close each window so the evaluation can continue until the script finishes.
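For orientation only, the standalone sketch below illustrates the general pattern such an evaluation follows: randomly generating periodic task sets, running a classic fixed-priority response-time analysis, and aggregating a latency metric over many task sets. All function names, the task-set generator, and the naive end-to-end bound are illustrative assumptions; they are not the code or the analysis used in evaluation.py.

# Illustrative sketch only -- NOT the code from evaluation.py.
# Generates random periodic task sets, runs a response-time analysis,
# and aggregates a latency metric over many task sets.
import random
import math
import statistics

def generate_task_set(n_tasks, total_utilization=0.7):
    """Generate a random periodic task set as (period, WCET) pairs, RM-ordered."""
    # Split the utilization randomly (simplified; papers often use UUniFast).
    shares = [random.random() for _ in range(n_tasks)]
    scale = total_utilization / sum(shares)
    tasks = []
    for share in shares:
        period = random.choice([10, 20, 50, 100, 200])
        wcet = max(1e-3, share * scale * period)
        tasks.append((period, wcet))
    # Rate-monotonic priority order: shorter period = higher priority.
    return sorted(tasks, key=lambda t: t[0])

def response_time(tasks, index):
    """Fixed-priority response-time analysis for the task at `index`."""
    period, wcet = tasks[index]
    higher = tasks[:index]
    r = wcet
    while True:
        interference = sum(math.ceil(r / p) * c for p, c in higher)
        r_next = wcet + interference
        if r_next > period:
            return None  # unschedulable under this analysis
        if abs(r_next - r) < 1e-9:
            return r_next
        r = r_next

def end_to_end_latency(tasks):
    """Naive end-to-end bound: sum of the per-task response times."""
    latencies = [response_time(tasks, i) for i in range(len(tasks))]
    if any(l is None for l in latencies):
        return None
    return sum(latencies)

if __name__ == "__main__":
    random.seed(0)
    results = []
    for _ in range(100):  # analogue of --num_task_sets
        latency = end_to_end_latency(generate_task_set(5))
        if latency is not None:
            results.append(latency)
    print(f"analyzed {len(results)} schedulable task sets, "
          f"mean end-to-end bound: {statistics.mean(results):.2f}")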

Figures and Outputs

  • evaluation_plot.png: Histograms of normalized reduction in end-to-end latency.
  • Terminal: Statistical values of the rate-monotonic (RM) and ROS 2 default end-to-end latencies.
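These outputs are produced by evaluation.py itself. As a hedged sketch of how such a histogram and summary table could be generated with the listed dependencies (matplotlib, tabulate, scipy), the snippet below uses synthetic placeholder data; the reduction metric, bin count, and labels are assumptions, not the script's actual implementation.

# Illustrative sketch only; evaluation.py produces these outputs itself.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from tabulate import tabulate

rng = np.random.default_rng(0)
# Placeholder data: normalized reduction in end-to-end latency per task set,
# e.g. (latency_default - latency_rm) / latency_default.
reduction = rng.beta(2, 5, size=1000)

plt.hist(reduction, bins=30)
plt.xlabel("normalized reduction in end-to-end latency")
plt.ylabel("number of task sets")
plt.savefig("evaluation_plot.png")

summary = [
    ["mean", np.mean(reduction)],
    ["median", np.median(reduction)],
    ["std", np.std(reduction)],
    ["skewness", stats.skew(reduction)],
]
print(tabulate(summary, headers=["statistic", "value"]))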

Simplified Experiments

For a quicker evaluation, pass a smaller value to the --num_task_sets parameter when running the evaluation.py script. By default, the script generates 1000 task sets; reducing this number to 100 or 10 speeds up the evaluation accordingly.

# Change the number of task sets from 1000 to a smaller number, e.g., 100
python3 evaluation.py --num_task_sets 100
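
For reference, the snippet below shows how such a flag is typically exposed with argparse; the actual argument handling in evaluation.py may differ.

# Illustrative sketch of a typical --num_task_sets flag definition;
# not necessarily how evaluation.py parses its arguments.
import argparse

parser = argparse.ArgumentParser(description="Synthetic ROS 2 scheduling evaluation")
parser.add_argument("--num_task_sets", type=int, default=1000,
                    help="number of synthetic task sets to generate")
args = parser.parse_args()
print(f"evaluating {args.num_task_sets} task sets")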
