Notes on getting the Kinect/ASUS Xtion data published in ROS, and maybe doing stuff with it.
What was previously known as openni_kinect is now split into two packages: openni_camera and openni_launch.
Install them:
$ sudo apt-get install ros-groovy-openni-camera ros-groovy-openni-launch
Then, you can run the process which will publish the camera's data. (Of course, roscore should be running.)
$ roslaunch openni_launch openni.launch
Check which topics are published by running rostopic list and, more interestingly, look at the streams using:
$ rosrun image_view image_view image:=/camera/rgb/image_color
$ rosrun image_view image_view image:=/camera/depth/image
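To check that frames are actually arriving (independent of any viewer), rostopic hz reports the publish rate of a topic; the Kinect/Xtion streams should typically come in at around 30 Hz:
$ rostopic hz /camera/rgb/image_color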
The ASUS Xtion PRO Live (model 601), which is mounted on the METRA robots, fails to work with ROS out of the box. The solution we found on the internet is the following:
Edit the file /etc/openni/GlobalDefaults.ini (make a backup first).
Find the line UsbInterface and uncomment it to force the use of 'BULK' endpoints: UsbInterface=2
We already tried this on one of the robots and it works.
This error may occur when executing the roslaunch openni... command.
The ASUS Xtion has firmware problems with USB 3.0 interfaces. Before trying to patch the firmware (on Windows), just plug it into some other USB port. Modern laptops have both USB 3.0 and USB 2.0 ports, so you might be lucky and hit a 2.0 one which just works.
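To see which bus the camera actually landed on (assuming a standard Linux setup with usbutils installed), lsusb -t prints the USB topology along with the negotiated speed of each device:
$ lsusb -t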
Another possible solution, reported to work for some, is to unload the following kernel modules:
$ sudo modprobe -r gspca_kinect
$ sudo modprobe -r gspca_main
Let's get going!
I'll use $WORKSPACE to refer to your catkin workspace folder.
I don't know all of the ROS terminology yet, so I may use the wrong words for things.
First, creating a package:
$ cd $WORKSPACE/src
$ catkin_create_pkg vision roscpp
$ cd $WORKSPACE
$ catkin_make
There can be many "nodes" (apps/mains/...) in one package. A node may be written in C++ or Python.
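For orientation: after the catkin_create_pkg call above, the package folder should look roughly like this (the exact set of generated files varies a bit between catkin versions):

$WORKSPACE/src/vision/
    CMakeLists.txt   (build rules, edited below)
    package.xml      (package metadata and dependencies)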
Write your code. Here's some minimal code, in a file we'll call test_ir.cpp just for giggles; place it into the $WORKSPACE/src/vision folder:
#include <iostream>
#include <ros/ros.h>

int main(int argc, char* argv[])
{
    // Register this process with the ROS master under the given node name.
    ros::init(argc, argv, "Foo_bar_baz_quux");
    std::cout << "Oh hai there!" << std::endl;
    return 0;
}
Add this thingy to the CMakeLists.txt file of your package, so that catkin_make knows what to do.
The previous catkin_create_pkg step created a template CMakeLists.txt file in $WORKSPACE/src/vision
which you can have a look at, uncommenting the things you want. For our minimal project (which has no
headers of its own, so the include-directory entries stay out) that results in:
cmake_minimum_required(VERSION 2.8.3)
project(vision)

# Find catkin and the ROS client library we depend on.
find_package(catkin REQUIRED COMPONENTS roscpp)
catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

# Build our node and link it against the catkin/ROS libraries.
add_executable(test_ir_node test_ir.cpp)
target_link_libraries(test_ir_node
  ${catkin_LIBRARIES}
)

install(TARGETS test_ir_node
  ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
  LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
  RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
Then, you need to compile your package by running catkin_make. But be wary, you'll get a (descriptive) error
message if you run it in the wrong folder. Run it from $WORKSPACE.
You might need to source $WORKSPACE/devel/setup.bash before proceeding.
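Spelled out as commands, that is:
$ cd $WORKSPACE
$ catkin_make
$ source devel/setup.bash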
If it all succeeds, you can run the test_ir_node executable/node in the vision package by typing:
$ rosrun vision test_ir_node
It should greet you warmly.
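Once the hello-world node runs, a natural next step is a node that actually listens to the camera data from openni.launch. Below is a minimal sketch; the node name image_probe and the callback are made up for illustration, and the topic is the same /camera/rgb/image_color we viewed with image_view. It needs its own add_executable and target_link_libraries lines in CMakeLists.txt, analogous to test_ir_node.

#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Called once for every image the camera driver publishes.
void onImage(const sensor_msgs::Image::ConstPtr& msg)
{
    ROS_INFO("Got a %ux%u image with encoding '%s'",
             msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char* argv[])
{
    ros::init(argc, argv, "image_probe");
    ros::NodeHandle nh;

    // Queue size 1: we only care about the most recent frame.
    ros::Subscriber sub = nh.subscribe("/camera/rgb/image_color", 1, onImage);

    // Hand control to ROS; the callback fires until Ctrl-C.
    ros::spin();
    return 0;
}

Build with catkin_make as before and run it (while openni.launch is up) with rosrun vision image_probe, assuming you added it to the vision package.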
To get people detection running, clone and build the following packages:
- git clone http://git.mech.kuleuven.be/robotics/orocos_kinematics_dynamics.git and do rosmake
- git clone https://github.com/ipa320/cob_perception_common.git, check out the groovy branch (git checkout groovy_dev) and do rosmake
- git clone https://github.com/ipa320/cob_people_perception.git, check out the groovy branch (git checkout groovy_dev) and do rosmake
Then:
- Use roslaunch openni_launch openni.launch to start the Kinect.
- Use roslaunch cob_people_detection people_detection.launch to start the people detection.
To view the registered point cloud in rviz:
- roslaunch openni_launch openni.launch
- rosrun rqt_reconfigure rqt_reconfigure, select /camera/driver from the drop-down menu and enable the depth_registration checkbox.
- rosrun rviz rviz
- Set the Fixed Frame (top left of the rviz window) to camera_depth_optical_frame.
- Add a PointCloud2 display and set its topic to /camera/depth_registered/points. Set Color Transformer to RGB8. You should see a colored 3D point cloud of your scene.
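If you'd rather consume that registered cloud from code than look at it in rviz, the same subscriber pattern applies. This is a minimal sketch along the lines of the nodes above (node name cloud_probe invented for illustration, topic as configured in rviz):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Called for every registered point cloud the driver publishes.
void onCloud(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
    // An organized cloud has width x height points, point_step bytes each.
    ROS_INFO("Cloud: %ux%u points, %u bytes per point",
             msg->width, msg->height, msg->point_step);
}

int main(int argc, char* argv[])
{
    ros::init(argc, argv, "cloud_probe");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth_registered/points", 1, onCloud);
    ros::spin();
    return 0;
}

For real processing you'd typically convert the message with the pcl_ros / pcl_conversions packages rather than poking at the raw byte layout yourself.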