
ROS Free Compile and Run using WebCam #426

Open
mh-ih opened this issue Feb 27, 2024 · 5 comments
mh-ih commented Feb 27, 2024

I have built open_vins on Ubuntu 20.04.6 using the installation guide (ROS-free). But now I don't know how to compile and run it with my webcam, without ROS. Could anyone please help me?
Thanks.


Userpc1010 commented Feb 28, 2024

You need to install it as a regular library to begin with, but if you have ROS installed, it will probably build the ROS version. You can solve this problem like this:

cmake_minimum_required(VERSION 3.3)
project(ov_msckf)                                     #This needs to be done for each project in the OpenVINS directory

# Include libraries (if we don't have opencv 3, then fallback to opencv 4)
# The OpenCV version needs to match the one used by cv_bridge otherwise you will get a segmentation fault!
find_package(Eigen3 REQUIRED)
find_package(OpenCV 3 QUIET)
if (NOT OpenCV_FOUND)
    find_package(OpenCV 4 REQUIRED)
endif ()
find_package(Boost REQUIRED COMPONENTS system filesystem thread date_time)
find_package(Ceres REQUIRED)
message(STATUS "OPENCV: " ${OpenCV_VERSION} " | BOOST: " ${Boost_VERSION} " | CERES: " ${Ceres_VERSION})

# If we will compile with aruco support
option(ENABLE_ARUCO_TAGS "Enable or disable aruco tag (disable if no contrib modules)" ON)
if (NOT ENABLE_ARUCO_TAGS)
    add_definitions(-DENABLE_ARUCO_TAGS=0)
    message(WARNING "DISABLING ARUCOTAG TRACKING!")
else ()
    add_definitions(-DENABLE_ARUCO_TAGS=1)
endif ()

# We need c++14 for ROS2, thus just require it for everybody
# NOTE: To future self, hope this isn't an issue...
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

# Enable compile optimizations
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3 -fsee -fomit-frame-pointer -fno-signed-zeros -fno-math-errno -funroll-loops")

# Enable debug flags (use if you want to debug in gdb)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g3 -Wall -Wuninitialized -fno-omit-frame-pointer")

# Find our ROS version!  
# NOTE: Default to using the ROS1 package if both are in our environment
# NOTE: https://github.com/romainreignier/share_ros1_ros2_lib_demo
find_package(catkin QUIET COMPONENTS roscpp)
find_package(ament_cmake QUIET)
#if (catkin_FOUND)                                                               Comment out this code!!!!!
#    message(STATUS "ROS *1* version found, building ROS1.cmake")
#    include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/ROS1.cmake)
#elseif (ament_cmake_FOUND)
#    message(STATUS "ROS *2* version found, building ROS2.cmake")
#    include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/ROS2.cmake)
#else ()
    message(STATUS "No ROS versions found, building ROS1.cmake")
    include(${CMAKE_CURRENT_SOURCE_DIR}/cmake/ROS1.cmake)
#endif ()

This way you will get a version without ROS even if it is present on the system. Then install it as a regular CMake library: `cmake .. && make -j4 && sudo make install`.

Connect it to your project; I do this using qmake in the .pro file:

INCLUDEPATH += /usr/include/boost

LIBS += -lboost_system -lboost_thread -lboost_chrono -lboost_filesystem

INCLUDEPATH += /usr/local/include/eigen3

unix:!macx: LIBS += -L$$PWD/../../../../usr/local/lib/ -lov_core_lib

INCLUDEPATH += $$PWD/../../../../usr/local/include/open_vins
DEPENDPATH += $$PWD/../../../../usr/local/include/open_vins

unix:!macx: LIBS += -L$$PWD/../../../../usr/local/lib/ -lov_msckf_lib

INCLUDEPATH += $$PWD/../../../../usr/local/include/open_vins
DEPENDPATH += $$PWD/../../../../usr/local/include/open_vins

unix:!macx: LIBS += -L$$PWD/../../../../usr/local/lib/ -lov_init_lib

INCLUDEPATH += $$PWD/../../../../usr/local/include/open_vins
DEPENDPATH += $$PWD/../../../../usr/local/include/open_vins

unix:!macx: LIBS += -L$$PWD/../../../../usr/local/lib/ -lceres

INCLUDEPATH += $$PWD/../../../../usr/local/include
DEPENDPATH += $$PWD/../../../../usr/local/include

unix:!macx: PRE_TARGETDEPS += $$PWD/../../../../usr/local/lib/libceres.a

Next you can create a class to work with OpenVINS:

#ifndef OPENVINS_H
#define OPENVINS_H

#include "core/VioManager.h"
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <opencv4/opencv2/opencv.hpp>
#include <QObject>
#include <QTimer>

typedef struct Data //Data you will most likely need
{
 double timestamp;

 double Vel_X = 0.0;
 double Vel_Y = 0.0;
 double Vel_Z = 0.0;

 double X = 0.0;
 double Y = 0.0;
 double Z = 0.0;

 double W = -0.5;
 double I = 0.5;
 double J = -0.5;
 double K = 0.5;
} OpenVins_Data;

class OpenVins : public QObject
{
  Q_OBJECT

public:
    OpenVins(std::string config_path, QObject *parent = nullptr);
    ~OpenVins();

public slots:

    void grab_imu (rs2_vector v_gyro_data, double v_gyro_timestamp,rs2_vector v_accel_data, double v_accel_timestamp);

    void grab_stereo_camera(cv::Mat im, cv::Mat imRight, double timestamp, uint16_t width_img, uint16_t height_img);

    void input_track (double timestamp);

    void openvins_timestamp (double * data);

    void futures_image_update ();

signals:

    void output_track (OpenVins_Data data);

    void future_image (cv::Mat img_history);

public :

    std::vector<Eigen::Vector3d> output_features_MSCKF ();

    std::vector<Eigen::Vector3d> output_features_SLAM ();

    std::vector<Eigen::Vector3d> output_features_ARUCO ();

private:

    cv::Mat frame_to_mat(const rs2::frame& f);

    std::shared_ptr<ov_msckf::VioManager> sys;

    // Verbosity
    std::string verbosity = "INFO";

    // Create our VIO system
    ov_msckf::VioManagerOptions params;

    Eigen::Matrix<double, 13, 1> state_plus;

    Eigen::Matrix<double, 12, 12> cov_plus;

    OpenVins_Data data;

    QTimer * timer;

    double last_timestamp_image = 0.0;
    double last_timestamp_gyro = 0.0;
    double last_timestamp_acc = 0.0;

    bool initOK = false;
};

#endif // OPENVINS_H


#include "openvins.h"
#include "state/State.h"
#include "state/Propagator.h"


OpenVins::OpenVins(std::string config_path, QObject *parent): QObject(parent)
{
    timer = new QTimer(this);

    connect(timer, SIGNAL(timeout()), this, SLOT(futures_image_update()));

    timer->start(33);

    state_plus = Eigen::Matrix<double, 13, 1>::Zero();
    cov_plus = Eigen::Matrix<double, 12, 12>::Zero();

    // Load the config
    auto parser = std::make_shared<ov_core::YamlParser>(config_path);

    parser->parse_config("verbosity", verbosity);
    ov_core::Printer::setPrintLevel(verbosity);

    params.print_and_load(parser);

    params.use_multi_threading_pubs = true;
    params.use_multi_threading_subs = true;

    sys = std::make_shared<ov_msckf::VioManager>(params);

    // Ensure we read in all parameters required
    if (!parser->successful()) {
      PRINT_ERROR(RED "unable to parse all parameters, please fix\n" RESET);
      std::exit(EXIT_FAILURE);
    }

}

OpenVins::~OpenVins()
{
  timer->stop();
  timer->deleteLater();
}

void OpenVins::grab_imu(rs2_vector v_gyro_data, double v_gyro_timestamp,rs2_vector v_accel_data, double v_accel_timestamp)
{
  if((std::abs(v_accel_timestamp-last_timestamp_acc)<0.001) && (std::abs(v_gyro_timestamp-last_timestamp_gyro)<0.001)) return;

  last_timestamp_acc = v_accel_timestamp;   last_timestamp_gyro = v_gyro_timestamp;

  ov_core::ImuData message_imu;

  message_imu.timestamp=v_accel_timestamp; //*1e-9;
  Eigen::Matrix<double, 3, 1> wm = {v_gyro_data.x,v_gyro_data.y,v_gyro_data.z};
  Eigen::Matrix<double, 3, 1> am = {v_accel_data.x,v_accel_data.y,v_accel_data.z};
  message_imu.wm = wm;
  message_imu.am = am;

  sys->feed_measurement_imu(message_imu);
}

void OpenVins::grab_stereo_camera(cv::Mat im, cv::Mat imRight, double timestamp, uint16_t width_img, uint16_t height_img)
{  
    if(std::abs(timestamp-last_timestamp_image)<0.001) return;

    last_timestamp_image = timestamp;

//    cv::imshow("img_l", im);
//    cv::imshow("img_r", imRight);

    // Else lets track this image
    ov_core::CameraData message;
    message.timestamp = timestamp;
    message.sensor_ids.push_back(0);
    message.sensor_ids.push_back(1);
    message.images.push_back(im);
    message.images.push_back(imRight);
    message.masks.push_back(cv::Mat::zeros(cv::Size(width_img, height_img), CV_8UC1));
    message.masks.push_back(cv::Mat::zeros(cv::Size(width_img, height_img), CV_8UC1));

    sys->feed_measurement_camera(message);
}

void OpenVins::input_track(double timestamp) //Updating the current state in the system
{
  if(sys->initialized()){

     // Get fast propagate state at the desired timestamp
     std::shared_ptr<ov_msckf::State> state = sys->get_state();
     Eigen::Matrix<double, 13, 1> state_plus_ = Eigen::Matrix<double, 13, 1>::Zero();
     Eigen::Matrix<double, 12, 12> cov_plus_ = Eigen::Matrix<double, 12, 12>::Zero();
     if (sys->get_propagator()->fast_state_propagate(state, timestamp, state_plus_, cov_plus_))
     {
        initOK = true;

        state_plus = state_plus_; cov_plus = cov_plus_;

//       std::cout<<std::endl;

//       // The POSE component (orientation and position)
//       std::cout<<"quat: "<<std::endl;
//       std::cout<<"x: "<< state_plus(0) <<std::endl;
//       std::cout<<"y: "<< state_plus(1) <<std::endl;
//       std::cout<<"z: "<< state_plus(2) <<std::endl;
//       std::cout<<"w: "<< state_plus(3) <<std::endl;
//       std::cout<<"pose: "<<std::endl;
//       std::cout<<"x: "<< state_plus(4) <<std::endl;
//       std::cout<<"y: "<< state_plus(5) <<std::endl;
//       std::cout<<"z: "<< state_plus(6) <<std::endl;

//       // The TWIST component (angular and linear velocities)
//       std::cout<<"linear velocity: "<<std::endl;
//       std::cout<<"linear x: "<< state_plus(7) <<std::endl; // vel in local frame
//       std::cout<<"linear y: "<< state_plus(8) <<std::endl; // vel in local frame
//       std::cout<<"linear z: "<< state_plus(9) <<std::endl; // vel in local frame
//       std::cout<<"angular velocity: "<<std::endl;
//       std::cout<<"x: "<< state_plus(10) <<std::endl; // we do not estimate this...
//       std::cout<<"y: "<< state_plus(11) <<std::endl; // we do not estimate this...
//       std::cout<<"z: "<< state_plus(12) <<std::endl; // we do not estimate this...

//       std::cout<<std::endl;
   }

   data.Vel_X = state_plus(7);
   data.Vel_Y = state_plus(8);
   data.Vel_Z = state_plus(9);

   data.X = state_plus(4);
   data.Y = state_plus(5);
   data.Z = state_plus(6);

   data.W = state_plus(3);
   data.I = state_plus(0);
   data.J = state_plus(1);
   data.K = state_plus(2);

   data.timestamp = timestamp;

   emit output_track(data);

  }
}

std::vector<Eigen::Vector3d> OpenVins::output_features_MSCKF() //Getting a point cloud
{
 std::vector<Eigen::Vector3d> feats_msckf = sys->get_good_features_MSCKF();

 return feats_msckf;
}

std::vector<Eigen::Vector3d> OpenVins::output_features_SLAM()
{
 std::vector<Eigen::Vector3d> feats_slam = sys->get_features_SLAM();

 return feats_slam;
}

std::vector<Eigen::Vector3d> OpenVins::output_features_ARUCO()
{
  std::vector<Eigen::Vector3d> feats_aruco = sys->get_features_ARUCO();

  return feats_aruco;
}

void OpenVins::openvins_timestamp(double * data)
{
  *data = sys->get_state()->_timestamp; // write through the pointer, not to the local pointer copy
}

cv::Mat OpenVins::frame_to_mat(const rs2::frame &f)
{
    using namespace cv;
    using namespace rs2;

    auto vf = f.as<video_frame>();
    const int w = vf.get_width();
    const int h = vf.get_height();

    if (f.get_profile().format() == RS2_FORMAT_BGR8)
    {
        return Mat(Size(w, h), CV_8UC3, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_Y16)
    {
        return Mat(Size(w, h), CV_16UC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_RGB8)
    {
        auto r_rgb = Mat(Size(w, h), CV_8UC3, (void*)f.get_data(), Mat::AUTO_STEP);
        Mat r_bgr;
        cvtColor(r_rgb, r_bgr, COLOR_RGB2BGR);
        return r_bgr;
    }
    else if (f.get_profile().format() == RS2_FORMAT_Z16)
    {
        return Mat(Size(w, h), CV_16UC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_Y8)
    {
        return Mat(Size(w, h), CV_8UC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_DISPARITY32)
    {
        return Mat(Size(w, h), CV_32FC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }

    throw std::runtime_error("Frame format is not supported yet!");
}

void OpenVins::futures_image_update()  //Sending a stereo image with descriptors to the screen
{
 if(initOK){
  // Get our image of history tracks
  cv::Mat img_history = sys->get_historical_viz_image();

  if (img_history.empty()) return;

  cv::Mat result (120, 480, CV_8UC3, cv::Scalar(10, 100, 150));

  cv::resize(img_history, result, result.size(), 0, 0, cv::INTER_CUBIC);

  emit future_image(result);
 }
}

Create an instance of the class in a separate thread:

std::string config_path = "/home/sencis/build-UGV-Desktop-Debug/UGV_Core/estimator_config.yaml";

 thread_openvins = new QThread(this);
 openvins = new OpenVins(config_path, nullptr);
 openvins->moveToThread(thread_openvins);
 thread_openvins->start();

The current configuration works for stereo, so you just have to adapt the camera input for mono mode.
You can see all of this in ROS1Visualizer.cpp and run_simulation.cpp; I think this can serve as a starting point.
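Adapting grab_stereo_camera above to a single camera mostly means pushing one sensor id, one image, and one mask into the same ov_core::CameraData structure. A sketch (feed_mono is a hypothetical helper name; the one-camera config value is an assumption you should check against your estimator_config.yaml):

```cpp
// Sketch: feed a single grayscale frame to OpenVINS in mono mode.
// Assumes the config was loaded for one camera (max_cameras: 1).
void feed_mono(const std::shared_ptr<ov_msckf::VioManager>& sys,
               const cv::Mat& im, double timestamp)
{
    ov_core::CameraData message;
    message.timestamp = timestamp;
    message.sensor_ids.push_back(0);                             // single camera, id 0
    message.images.push_back(im);
    message.masks.push_back(cv::Mat::zeros(im.size(), CV_8UC1)); // empty mask
    sys->feed_measurement_camera(message);
}
```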


mh-ih commented Mar 2, 2024

Thanks. Honestly, I am new to OpenVINS. I will do what you wrote and give you feedback.


mh-ih commented Mar 3, 2024

It seems that you are reading the images with a RealSense camera, but I don't have one and want to read input images from my webcam. Can you please explain how to do it?


Userpc1010 commented Mar 3, 2024

Perhaps I will leave this to you as homework)). There are examples on the Internet of how to capture images from a camera using OpenCV, and how to use mono mode can be seen in ROS1Visualizer.cpp. You will also need an IMU synchronized with the camera, and this is a problem: regular webcams do not have an IMU, and proper synchronization means the IMU should be sampled at the moment the camera shutter fires, i.e., it must be implemented in hardware between the sensors to work properly. This is what gives the camera images and the inertial data consistent timestamps, which the filter needs in order to integrate them and compute motion. But perhaps you have a camera with an ExtTrig output from the shutter, so you can use that interrupt as a read command to the IMU, or simply a RealSense camera that will do the job for you.
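That said, if you only have software timestamps, you can at least associate each frame with the closest IMU sample by timestamp. This is a pure-logic sketch and does not replace the hardware synchronization described above:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Index of the IMU sample whose timestamp is closest to the frame timestamp.
// Assumes imu_times is non-empty; a driver queue usually delivers it sorted.
std::size_t nearest_imu(const std::vector<double>& imu_times, double frame_time) {
    std::size_t best = 0;
    double best_dt = std::abs(imu_times[0] - frame_time);
    for (std::size_t i = 1; i < imu_times.size(); ++i) {
        double dt = std::abs(imu_times[i] - frame_time);
        if (dt < best_dt) { best_dt = dt; best = i; }
    }
    return best;
}
```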

@goldbattle goldbattle added user-platform User has trouble running on their own platform. compile User has trouble compiling on their own platform. labels Mar 25, 2024
goldbattle (Member) commented

Are you trying to run it with your webcam only? This is a visual-inertial project, so you also need an IMU. If you just wish to use the feature tracking, then you should be able to just build ov_core and use the webcam example.

cd ov_core
mkdir build && cd build
cmake -DENABLE_ROS=OFF ..
make -j4
./test_webcam
