tf::MessageFilter CPU usage / clean up problem #132

Open
conqualifaction opened this issue Oct 14, 2016 · 1 comment

@conqualifaction

The following code causes additional CPU usage once "testMethod" has been called. The problem seems to be connected to the creation of the MessageFilter; this can be verified by removing the line that creates it.

The additional CPU usage is about 4% on my system.
I could not find any way to clean up that avoids this (e.g. calling clear(), unsubscribe(), etc. has no effect).

Tested on ROS Indigo.

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <tf/transform_listener.h>
#include <tf/message_filter.h>
#include <message_filters/subscriber.h>
#include <sensor_msgs/Image.h>

void testMethod(ros::NodeHandle node)
{
  tf::TransformListener tf_listener(node, ros::Duration(40));
  message_filters::Subscriber<sensor_msgs::Image> img_subscriber(node, "/topic", 400);
  tf::MessageFilter<sensor_msgs::Image> msg_filter(img_subscriber, tf_listener, "/frame", 400);
  // All three objects go out of scope here, yet the extra CPU usage remains.
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "test_node");
  ros::NodeHandle node;

  testMethod(node);

  ROS_INFO("Going to spin...");
  ros::spin();
  return 0;
}
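For completeness, this is roughly what I tried for cleanup before the objects go out of scope: a minimal sketch (hypothetical testMethodWithCleanup, same includes as the example above) using the clear() and unsubscribe() calls mentioned earlier. Neither removed the extra CPU usage here.

// Sketch of the attempted cleanup; neither call changed the residual CPU usage.
void testMethodWithCleanup(ros::NodeHandle node)
{
  tf::TransformListener tf_listener(node, ros::Duration(40));
  message_filters::Subscriber<sensor_msgs::Image> img_subscriber(node, "/topic", 400);
  tf::MessageFilter<sensor_msgs::Image> msg_filter(img_subscriber, tf_listener, "/frame", 400);

  msg_filter.clear();            // drop any messages queued in the filter
  img_subscriber.unsubscribe();  // stop the underlying ROS subscription
}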
@tfoote

tfoote commented Mar 10, 2017

I tried running the code and don't see any noticeable load.

Checking its state while spinning, I see only the default publications and services:

$ rosnode info test_node
--------------------------------------------------------------------------------
Node [/test_node]
Publications: 
 * /rosout [rosgraph_msgs/Log]

Subscriptions: None

Services: 
 * /test_node/get_loggers
 * /test_node/set_logger_level


contacting node http://snowman:44688/ ...
Pid: 17454
Connections:
 * topic: /rosout
    * to: /rosout
    * direction: outbound
    * transport: TCPROS

Can you provide more context, or an example of the rest of the system, that triggers this behavior? Might you be forcing this program into and out of swap, or doing something else that creates load due to memory exhaustion?
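To help narrow it down, a minimal sketch of the A/B test described in the first comment, gating the filter creation behind a hypothetical private parameter (~create_filter, name assumed here) so CPU usage can be compared with and without the MessageFilter in the same binary:

// Sketch: toggle the MessageFilter creation via a (hypothetical) private parameter.
// Run with _create_filter:=false to skip the filter and compare CPU usage.
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/message_filter.h>
#include <message_filters/subscriber.h>
#include <sensor_msgs/Image.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "test_node");
  ros::NodeHandle node;
  ros::NodeHandle pnh("~");

  bool create_filter = true;
  pnh.param("create_filter", create_filter, true);

  if (create_filter)
  {
    tf::TransformListener tf_listener(node, ros::Duration(40));
    message_filters::Subscriber<sensor_msgs::Image> img_subscriber(node, "/topic", 400);
    tf::MessageFilter<sensor_msgs::Image> msg_filter(img_subscriber, tf_listener, "/frame", 400);
    // Objects are destroyed at the end of this scope, as in the original testMethod().
  }

  ROS_INFO("Going to spin...");
  ros::spin();
  return 0;
}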
