jacobkingery edited this page Dec 18, 2015 · 13 revisions

Our code consists largely of object-oriented ROS nodes and helper scripts that simplify running the code with multiple agents. The two main types of ROS nodes we created were omniscient and agent. There is one omniscient node running that facilitates communication among the agents. There is one agent node running per agent; each receives packets of information from the omniscient node and acts based on the contents of those packets.

Below is a class/interaction diagram showing our code structure:

About the Nodes

omniscient knows the positions of all agents by subscribing to the topics /robot[n]/STAR_pose/continuous. Based on the agents' locations, it publishes information packets to the topics /robot[n]/packet. Each packet contains:

  • three constants k_a, k_b, and k_c that play into our governing equations
  • centroid, the point each agent converges upon
  • the radius R around the centroid at which the agents stop converging
  • sensing_radius, each agent's artificially set sensing range
  • an array others containing the locations of all agents within the sensing range
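To make the packet contents concrete, here is a minimal sketch of how the omniscient node might assemble one. The function name, the dict-based packet, and the default gain values are hypothetical (the real node publishes a ROS message); the field names mirror the list above.

```python
import math

def build_packet(positions, n, centroid, R, sensing_radius,
                 k_a=1.0, k_b=1.0, k_c=1.0):
    """Hypothetical packet assembly for robot n.

    positions: dict mapping robot name -> (x, y), as gathered from
    the /robot[n]/STAR_pose/continuous topics.
    """
    me = positions[n]
    # Only neighbors inside the artificial sensing range go into `others`
    others = [
        pos for name, pos in positions.items()
        if name != n
        and math.hypot(pos[0] - me[0], pos[1] - me[1]) <= sensing_radius
    ]
    return {
        'k_a': k_a, 'k_b': k_b, 'k_c': k_c,
        'centroid': centroid,
        'R': R,
        'sensing_radius': sensing_radius,
        'others': others,
    }
```

In the real system this dict would be serialized into a ROS message and published to /robot[n]/packet.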

Each agent instance subscribes to its own topic /robot[n]/packet. With the information in the packet, the agent calculates its destination point, while continuing to publish its pose to /robot[n]/STAR_pose/continuous.
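The agent-side update can be sketched as follows. The governing equations themselves are not reproduced here, so the step-toward-centroid rule below is a hypothetical stand-in; what it illustrates is the flow the text describes: read the packet fields, compute a destination, and hold position once within R of the centroid.

```python
import math

def compute_destination(position, packet):
    """Hypothetical destination calculation from one packet.

    position: the agent's current (x, y).
    packet: fields as published by the omniscient node.
    """
    cx, cy = packet['centroid']
    x, y = position
    dist = math.hypot(cx - x, cy - y)
    if dist <= packet['R']:
        # Inside the stop radius: quit converging, hold position
        return position
    # Step toward the centroid, scaled by the gain k_a
    # (the real equations also use k_b, k_c, and `others`)
    k_a = packet['k_a']
    return (x + k_a * (cx - x), y + k_a * (cy - y))
```

In the real node this runs inside the packet subscriber's callback, and the result drives the velocity commands sent to the Neato.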

Rationale: Our Definition of Swarms

We could have written a much simpler single-node system that, given the location of each agent, handled all the calculations across all the bots and simply published velocities to each agent. However, we were inspired by this Harvard example of swarm robotics, in which thousands of bots form configurations based only on the behavior of their immediate neighbors. This drove our decision to have agent nodes that receive limited information and determine their behavior independently.

omniscient exists to facilitate communication among the agents. Although omniscient publishes /robot[n]/packet with a sensing_radius, we do not actually interface with the Neato's lidar. With omniscient, we artificially set how much each agent can "see," mimicking the nearest-neighbor knowledge demonstrated in the Harvard example.