Feature/put the planner together #232

Open · wants to merge 14 commits into main
Conversation

@aknh9189 (Collaborator) commented on Nov 6, 2023

Big PR with the first pass of the sampling-based planner. I expect this will be a longer review, but wanted to get it into a usable state before field trials.

A couple of initial questions and comments:

  • What tests do you think are reasonable for the components? Are tests of random code (e.g. the rollouts) ok if a fixed seed is provided, and exact conditions are tested?
  • A lot of the "magic" happens in the details of the scoring functions and culling functions (e.g. constraints, objectives). Are there ways to make these small decisions more transparent, so the algorithm's behavior is more easily explained?
  • There's a lot of duplicated setup for the tests. This may be worth trying to combine in a test fixture.
  • I've left printouts in, but will trim and lint as this gets closer to a mergeable state. I want to add the option to dump particles to disk for easy visualization (I think this will be important for tuning in bigger graphs), but will do this in a later PR.
  • We should rename this planner.

@@ -416,13 +416,13 @@ planning::BeliefUpdater<RobotBelief> make_belief_updater(const planning::RoadMap
 planning::BeliefUpdater<RobotBelief> make_belief_updater(const planning::RoadMap &road_map,
     const double max_sensor_range_m,
     const EkfSlam &ekf,
-    const std::vector<int> &present_beacons,
+    const std::optional<std::vector<int>> &present_beacons,
ewfuentes (Owner):

Hmm... I'm not sure I agree with this change. The intention behind this function is that it makes a belief updater given a particular configuration of the world defined by the present beacons list. If we are saying that the list of present beacons is optional, it's not clear what that should mean.

I see that it's currently only used in the unit tests. If you want to represent all beacons being present, you can pass in ekf_slam.estimate().beacon_ids. If you want to represent all beacons being absent, you can pass in {}.
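
For concreteness, the two cases described above would look roughly like this at the call site (a sketch; argument order taken from the test usage later in this PR):

// Sketch: all beacons present vs. all beacons absent, using the PR's names.
const auto updater_all_present = make_belief_updater(
    road_map, max_sensor_range_m, ekf_slam,
    ekf_slam.estimate().beacon_ids, TransformType::INFORMATION);
const auto updater_all_absent = make_belief_updater(
    road_map, max_sensor_range_m, ekf_slam,
    std::vector<int>{}, TransformType::INFORMATION);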

const GoalScoreFunctionType& goal_score_function,
const PlanningArgs& planning_args);

std::ostream& operator<<(std::ostream& os, const Candidate& candidate);
ewfuentes (Owner):

I think this interface looks reasonable. I'm not suggesting that this needs to happen as part of the PR, but in general, I like to separate the generic planning bits from the problem specific bits. Looking at the interface, I think this could be achieved by templating on the belief type. Again, no need to make any changes, just some context for you.
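
As a rough illustration of that separation (hypothetical names, not code from this PR), the generic planning types could be parameterized on the belief:

#include <functional>
#include <vector>

// Hypothetical sketch: the generic planner carries an opaque Belief type,
// and all problem-specific knowledge lives in the injected functions.
template <typename Belief>
struct Candidate {
    Belief belief;
    std::vector<int> path_history;
};

template <typename Belief>
using StepCandidateFunc = std::function<Candidate<Belief>(const Candidate<Belief> &)>;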



struct PlanningArgs {
    unsigned int max_iterations;
ewfuentes (Owner):

Is there a reason that you're using unsigned int and float instead of int and double? I tend to default to int since it's what the google style guide suggests (https://google.github.io/styleguide/cppguide.html#Integer_Types). We're not bound by it, although the default linter rules are heavily influenced by it.

.edge_cost = (neighbor_pos - map.point(parent.path_history.back())).norm(),
});
}
int action_index = rand() % successors.size();
ewfuentes (Owner):

I prefer to use the random bit generator defined in the standard library (std::mt19937) and the associated distributions instead of calling rand() directly.

I would then either pass in a seed or make the random bit generator an InOut<> argument. Debugging is way easier if you can reduce the number of uncontrolled sources of randomness.

Here is an example of taking in a seed and using it to sample an (x, y) position.

std::mt19937 gen(config.seed);

Here is an example of taking it in as a parameter:

InOut<std::mt19937> gen) {

https://en.cppreference.com/w/cpp/numeric/random/mersenne_twister_engine
https://en.cppreference.com/w/cpp/numeric/random
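
Putting those pieces together, a minimal sketch of what the rand() call above could become (none of this is code from the PR; the helper name is illustrative):

#include <random>
#include <vector>

// Sketch: a seeded engine plus a uniform distribution replaces
// rand() % successors.size().
template <typename Successor>
int sample_action_index(const std::vector<Successor> &successors, std::mt19937 &gen) {
    std::uniform_int_distribution<int> index_dist(0, static_cast<int>(successors.size()) - 1);
    return index_dist(gen);
}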

ewfuentes (Owner):

also const if you don't expect to change the value.

StepCandidateFunc make_random_step_candidate_function(const planning::RoadMap &map,
                                                      const planning::BeliefUpdater<RobotBelief> &belief_updater) {
    return [&map, &belief_updater](const Candidate &parent) -> Candidate {
        std::vector<planning::Successor<int>> successors;
        for (const auto &[neighbor_idx, neighbor_pos] : map.neighbors(parent.path_history.back())) {
ewfuentes (Owner):

Nit: instead of creating the successors and then selecting one, consider picking one first and then only make the one you need.
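
Sketched out (assuming map.neighbors() returns an indexable range of (index, position) pairs and reusing the std::mt19937 suggested above; the Successor field names are guesses, not from the PR):

#include <random>

// Sketch: sample a neighbor first, then construct only the chosen successor.
planning::Successor<int> sample_successor(const planning::RoadMap &map, const Candidate &parent,
                                          std::mt19937 &gen) {
    const auto &neighbors = map.neighbors(parent.path_history.back());
    std::uniform_int_distribution<int> dist(0, static_cast<int>(neighbors.size()) - 1);
    const auto &[neighbor_idx, neighbor_pos] = neighbors[dist(gen)];
    return {.state = neighbor_idx,
            .edge_cost = (neighbor_pos - map.point(parent.path_history.back())).norm()};
}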

const auto &[road_map, ekf_slam, potential] = create_grid_environment(ekf_config, 0.5);


Candidate candidate = {
ewfuentes (Owner):

const! here and throughout.


std::cout << "from initial candidate: " << candidate << std::endl;
for (const auto &candidate : candidates) {
std::cout << "\t candidate: " << candidate << std::endl;
ewfuentes (Owner):

Is there some property that you expect all candidates to have? Is there some property that you expect at least one candidate to have?

Comment on lines +171 to +200
Candidate candidate = {
    .belief = ekf_slam.estimate().robot_belief(),
    .path_history = {road_map.START_IDX},
};

RollOutArgs roll_out_args = {
    .num_roll_outs = 1000,
};

constexpr double max_sensor_range_m = 3.0;

planning::BeliefUpdater<RobotBelief> belief_updater =
    make_belief_updater(road_map,
                        max_sensor_range_m,
                        ekf_slam,
                        std::nullopt,
                        TransformType::INFORMATION);

const StepCandidateFunc step_func = make_random_step_candidate_function(road_map, belief_updater);
const TerminateRolloutFunc terminate_rollout_func = make_max_steps_or_goal_terminate_func(10, road_map);
const RolloutFunctionType rollout_function = make_rollout_function(step_func, terminate_rollout_func, roll_out_args);
auto candidates = rollout_function(candidate);

const ScoringFunc scoring_func = make_goal_distance_scoring_function(road_map, 10, 20);

CullingArgs culling_args = {
    .max_num_survivors = 100,
    .entropy_proxy = 0.2,
    .reseed_percentage = 0.1,
};
ewfuentes (Owner):

This feels like a lot of machinery that isn't strictly necessary to test the culling function. It also means that if you change how candidates are generated, you may need to revisit this test. Consider generating candidates in a way that is independent of the make_rollout_function implementation.

Feel free to disagree if you think I'm missing something.


std::vector<bool> used(scored_candidates.size(), false);

CHECK(cullingArgs.entropy_proxy + cullingArgs.reseed_percentage <= 1.0); // can't have more than 100% of the population
ewfuentes (Owner):

This check is only a function of the culling arguments. Consider moving it outside of the lambda.
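
Concretely, something like this (the factory name and lambda signature are assumptions, not from the PR):

// Sketch: validate the culling arguments once, when the function is built,
// instead of on every invocation of the lambda.
CullingFunc make_culling_function(const CullingArgs &culling_args) {
    CHECK(culling_args.entropy_proxy + culling_args.reseed_percentage <= 1.0);
    return [culling_args](const std::vector<std::pair<double, Candidate>> &scored_candidates)
               -> std::vector<Candidate> {
        std::vector<Candidate> survivors;
        // ... selection logic only; no per-call argument validation needed
        return survivors;
    };
}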

Comment on lines +64 to +84
int num_unused = scored_candidates.size();
for (unsigned int i = 0; i < num_random_survivors; i++) {
    int unused_index = rand() % num_unused;
    int index = 0;
    for (unsigned int j = 0; j < scored_candidates.size(); j++) {
        if (!used[j]) {
            if (index == unused_index) {
                index = j;
                break;
            }
            index++;
        }
    }

    used[index] = true;
    survivors.push_back(scored_candidates[index].second);
    num_unused--;
    if (num_unused == 0) {
        break;
    }
}
ewfuentes (Owner):

Consider using std::sample
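
For reference, a minimal sketch of what that could look like (assuming scored_candidates is a vector of (score, candidate) pairs, as the .second access above suggests, and reusing a std::mt19937 gen):

#include <algorithm>
#include <iterator>
#include <random>
#include <utility>
#include <vector>

// Sketch: std::sample draws num_random_survivors distinct elements in one
// call, replacing the manual used[] bookkeeping above.
std::vector<std::pair<double, Candidate>> picked;
std::sample(scored_candidates.begin(), scored_candidates.end(),
            std::back_inserter(picked), num_random_survivors, gen);
for (const auto &[score, candidate] : picked) {
    survivors.push_back(candidate);
}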

@ewfuentes (Owner) commented:

> A couple of initial questions and comments:
>
> What tests do you think are reasonable for the components? Are tests of random code (e.g. the rollouts) ok if a fixed seed is provided, and exact conditions are tested?

In general, we should prefer to eliminate sources of randomness from test code. For example, we should prefer to test with known inputs, rather than testing with inputs that change on each invocation. If this isn't possible, then we should fix the seed and check for properties that must be true for all executions. We shouldn't, for example, fix a seed and then require a specific result if that result is a function of the seed. We should only require a specific result if it is meant to be invariant with respect to the seed.

The reason for this is that even if the seed is fixed, the implementation can change in ways that are still correct, yet produce different results. Ideally, we wouldn't want our tests misfiring in these situations.
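
As a sketch of the second pattern (the test framework and helper names are assumptions, not from this PR): fix the seed, but assert only properties that must hold for any seed.

// Sketch: the seed is fixed for reproducibility, but the assertion would hold
// for any seed, so a correct implementation change won't break the test.
TEST(RolloutFunctionTest, RolloutsRespectStepLimit) {
    const auto candidates = rollout_function(initial_candidate);  // seeded internally
    for (const auto &candidate : candidates) {
        EXPECT_LE(candidate.path_history.size(), kMaxSteps + 1);
    }
}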

> A lot of the "magic" happens in the details of the scoring functions and culling functions (e.g. constraints, objectives). Are there ways to make these small decisions more transparent, so the algorithm's behavior is more easily explained?

I think defining a contract in the header is a reasonable first step. This can be done in comments in the header or in the type system if possible.
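
For example, a contract spelled out in the header might read like this (wording hypothetical):

#include <functional>

// Contract: returns a finite score; higher is better. The score must be
// non-increasing in the candidate's distance to the goal, and the function
// must not mutate shared state.
using ScoringFunc = std::function<double(const Candidate &)>;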

> There's a lot of duplicated setup for the tests. This may be worth trying to combine in a test fixture.

I don't think that duplicated code is inherently bad. One of the more frustrating experiences is making a change to fix a unit test and then realizing that it causes a different test to fail. If you truly believe that all tests require similar inputs, I would first reach for defining a function that constructs those inputs for you (e.g. create_grid_environment()).

> I've left printouts in, but will trim and lint as this gets closer to a mergeable state. I want to add the option to dump particles to disk for easy visualization (I think this will be important for tuning in bigger graphs), but will do this in a later PR.

There are tons of options for serialization. If you can flatten your type, CSVs are always an option. If there is some structure you'd like to preserve, protobufs are an easy option.
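
If the flat route works, a minimal sketch of a CSV dump might look like this (the helper name is illustrative; field names taken from the Candidate struct above):

#include <fstream>
#include <string>
#include <vector>

// Sketch: flatten each candidate's path history into (candidate_idx, node_idx)
// rows for easy plotting.
void dump_particles_csv(const std::vector<Candidate> &candidates, const std::string &path) {
    std::ofstream out(path);
    out << "candidate_idx,node_idx\n";
    for (size_t i = 0; i < candidates.size(); ++i) {
        for (const int node_idx : candidates[i].path_history) {
            out << i << "," << node_idx << "\n";
        }
    }
}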

> We should rename this planner.

I'm happy to take suggestions :)
