Tuesday, July 8, 2014

ROS Hydro SLAM with Kinect and iRobot Create

I have recently been trying to get into mapping with robotics, and as a consequence I've been trying to get a SLAM (Simultaneous Localization And Mapping) algorithm running. Before digging into the mechanics and mathematics that make SLAM work, I wanted to get a known SLAM algorithm running so I could see it in action for myself. With a laptop, Xbox Kinect, iRobot Create, and ROS, I knew I had all the tools I needed to get SLAM running.

The first step was to install ROS. Since Hydro is the newest version and has many drastic changes, I decided to go with it. In hindsight, that may not have been the best choice, but it's too late now! I am also using Ubuntu 12.04, since Ubuntu is the officially supported OS and, at the time of writing, 12.04 is the LTS release that ROS supports.
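For reference, installing Hydro on 12.04 (precise) boils down to something like the following. This is from memory of the ROS wiki's install page, so double-check against the wiki before copying:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'
wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get install ros-hydro-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/hydro/setup.bash" >> ~/.bashrc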

Once ROS was installed, I had to get drivers for the iRobot Create and the Kinect, which were little adventures on their own with ROS Hydro and Ubuntu 12.04. See my posts on the Create. For the Kinect, I used the freenect stack, since openni (the most commonly used ROS-Kinect interface) apparently has some issues with the Hydro/Ubuntu 12.04 combination that I just didn't want to deal with.

Now that the sensor data is accessible, a SLAM algorithm can be selected. I went with gmapping, since it seemed to be the most widely used SLAM package on ROS. Gmapping requires odometry data, a laser scanner, and the position of the laser scanner relative to the base (create).

To fake a laser scanner with the Kinect, see my post here.
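With the create node and the fake laser scanner running, you can sanity-check gmapping's inputs before going any further. This assumes the create node publishes odometry on /odom and the fake scanner publishes on /scan, which is how my setup is configured:

rostopic info /odom
rostopic info /scan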

The odometry data and the odometry frame are published by the create node. The odom frame is a standard frame from the tf package. The tf package allows the user to relate each part's coordinate frame to the other coordinate frames on the robot. The primary coordinate frame that most everything needs to operate in relation to is the base frame (base_link), which is provided by our create. Tf wasn't the easiest thing for me to understand, and at the time of writing this, I still don't totally understand it.
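One thing that helped tf click for me: you can ask tf for the transform between any two frames straight from the command line. For example, to watch the create's pose in the odom frame update as the robot drives around:

rosrun tf tf_echo odom base_link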

To see each of your robot's coordinate frames, use:
rosrun tf view_frames
evince frames.pdf
tf view_frames will generate a PDF that displays your robot's tf tree. Each node in the tree is a separate coordinate frame, and data from any frame in the tree can be transformed into any other frame in the same tree.

So, gmapping needs to access the laser scanner's data with respect to the base_link frame. If you view your tf tree at this point, you will notice that the kinect's frames are not connected to the create's. To link the two, use:

rosrun tf static_transform_publisher -0.115 0 0.226 0 0 0 base_link camera_link 100 __name:=base_to_kinect

This command creates a static transform between the kinect and the base (the __name:=base_to_kinect remapping just gives the publisher node a readable name). The first three numbers are the x, y, z offset (in meters) of the kinect from the center of the create, followed by its yaw, pitch, and roll (all zero here); the final argument is the publish period in milliseconds. If you view your tf tree after running this command, the kinect's camera frames should now hang off of base_link.


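You can also confirm the new link directly, without regenerating the pdf; this should echo back the offset that was passed to static_transform_publisher:

rosrun tf tf_echo base_link camera_link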
Now that all of the frames are linked, gmapping can be run:

rosrun gmapping slam_gmapping tf_static:=tf

The tf_static:=tf argument remaps gmapping's subscription from the /tf_static topic to the /tf topic. In ROS Hydro the tf package has been deprecated in favor of tf2, which publishes a /tf and a /tf_static topic instead of just a single /tf topic. So, for gmapping to find the base frame to link the map frame to, we have to redirect its /tf_static subscription to the /tf topic.
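If you want to see why the remap is needed on your setup, listing the tf topics and gmapping's subscriptions makes it clear (this assumes the node is running under its default name, slam_gmapping):

rostopic list | grep tf
rosnode info /slam_gmapping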

Once gmapping is running, you should be able to open up Rviz and see the data.

rosrun rviz rviz

Just be sure to add the map data from the /map topic to the display (and set Rviz's fixed frame to map).
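Once the map looks reasonable, it can be saved to disk with map_server (install ros-hydro-map-server if you don't have it; my_map is just an example filename). This writes a my_map.pgm image and a my_map.yaml metadata file:

rosrun map_server map_saver -f my_map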


Monday, July 7, 2014

Faking a Laser Scanner in ROS Hydro using Kinect

I've been searching for a way to fake a laser scanner using a kinect in ROS Hydro. The traditional method seems to be the pointcloud_to_laserscan package, but unfortunately it hasn't been confirmed to work with ROS Hydro. I am also using the freenect_stack instead of the openni packages for interfacing with the kinect, since I've had some trouble with openni and apparently there are known problems with it on Ubuntu 12.04, which I am using.

The solution I'm using right now is the depthimage_to_laserscan package. It seems to work well, and it also works with freenect_stack just fine. To install and run these packages, I used the following steps:

Install and run freenect_stack:
sudo apt-get install ros-hydro-freenect-stack
roslaunch freenect_launch freenect-xyz.launch

Be sure to note that the freenect page says:
If you are using Ubuntu 12.04, you need to blacklist the kernel module that gets loaded by default for the Kinect:
sudo modprobe -r gspca_kinect
echo 'blacklist gspca_kinect' | sudo tee -a /etc/modprobe.d/blacklist.conf
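To verify that the driver is actually publishing depth images before moving on:

rostopic hz /camera/depth/image_raw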

Install and run depthimage_to_laserscan:
sudo apt-get install ros-hydro-depthimage-to-laserscan
rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

This will create the /scan topic and publish LaserScan messages to it. To confirm data is being published, just use "rostopic echo /scan".
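To see the scans visually instead of as raw text, Rviz works here too; add a LaserScan display on the /scan topic and set the fixed frame to the scan's frame_id (camera_depth_frame is the depthimage_to_laserscan default, but yours may differ):

rosrun rviz rviz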