The first step was to install ROS. Since Hydro is the newest version and has many drastic changes, I decided to go with it. In hindsight, that may not have been the best choice, but it's too late now! I'm also using Ubuntu 12.04, since Ubuntu is the officially supported OS and, at the time, 12.04 was the LTS release that ROS supported.
Once ROS was installed, I had to get drivers for the iRobot Create and the Kinect, which were little adventures on their own with ROS Hydro and Ubuntu 12.04. See my posts on the Create. For the Kinect, I used the freenect stack, since it appears that openni (the most commonly used ROS-Kinect interface) has some issues with the Hydro/Ubuntu 12.04 combination that I just didn't want to deal with.
Now that the sensor data is accessible, a SLAM algorithm can be selected. I went with gmapping, since it seemed to be the most widely used SLAM package on ROS. Gmapping requires odometry data, a laser scanner, and the position of the laser scanner relative to the base (the Create).
To fake a laser scanner with the Kinect, see my post here.
The odometry data and the odometry frame are published by the create node. The odom_frame is a standard frame from the tf package. The tf package lets you relate each part's coordinate frame to the other coordinate frames on the robot. The primary coordinate frame that almost everything needs to operate in relation to is the base_frame, which is provided by our Create. Tf wasn't the easiest thing for me to understand, and at the time of writing this, I still don't totally understand it.
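To get an intuition for what tf is doing, here's a sketch of the underlying math (not the tf API itself) in 2D: chaining a parent-to-child pose with a child-to-grandchild pose gives the pose of the grandchild in the parent frame. The example numbers (1 m of driving, a sensor 0.2 m ahead of the base center) are made up for illustration.

```python
import math

def compose(parent, child):
    """Compose two planar poses (x, y, theta): 'child' is expressed in
    the 'parent' frame; the result is the child in the parent's parent frame."""
    px, py, pth = parent
    cx, cy, cth = child
    return (px + cx * math.cos(pth) - cy * math.sin(pth),
            py + cx * math.sin(pth) + cy * math.cos(pth),
            pth + cth)

# odom -> base_link: say the Create has driven 1 m forward, then turned 90 degrees
odom_to_base = (1.0, 0.0, math.pi / 2)
# base_link -> laser: a sensor mounted 0.2 m ahead of the base center
base_to_laser = (0.2, 0.0, 0.0)

# Chaining the two gives odom -> laser, which is effectively the lookup a
# consumer like gmapping asks tf to perform for every scan.
x, y, theta = compose(odom_to_base, base_to_laser)
print(x, y, theta)  # laser sits at roughly (1.0, 0.2), facing 90 degrees
```

Tf maintains a whole tree of these relationships (in 3D, with timestamps) and does the chaining for you.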
To see each of your robot's coordinate frames, use:
rosrun tf view_frames
evince frames.pdf
So, gmapping needs to access the laser scanner's data with respect to the base_link frame. If you view your tf tree at this point, you will notice that the kinect's frames are not connected to the create's. To link the two, use:
rosrun tf static_transform_publisher -0.115 0 0.226 0 0 0 base_link camera_link 100 __name:=base_to_kinect
This command publishes a static transform between the Kinect and the base. The first three numbers are the x, y, z offset of the Kinect relative to the center of the Create (in meters), the next three are the yaw, pitch, and roll (all zero here), and the last argument is the publish period in milliseconds. If you view your tf tree after running the previous command, you should get something that looks like this:
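Since the rotation in this transform is zero, converting a point from the Kinect's frame to the base frame is just a translation by the offsets above. Here's a quick sanity check of what that transform means (the obstacle coordinates are made up for illustration):

```python
# camera_link origin expressed in base_link, in meters -- the same
# -0.115 0 0.226 passed to static_transform_publisher above.
offset = (-0.115, 0.0, 0.226)

def camera_to_base(point):
    """Translate an (x, y, z) point from camera_link into base_link.
    Valid only because the rotation part of the transform is zero."""
    return tuple(p + o for p, o in zip(point, offset))

# An obstacle seen 1.5 m straight ahead of the Kinect...
obstacle_in_camera = (1.5, 0.0, 0.0)
# ...is roughly (1.385, 0.0, 0.226) in the base frame.
print(camera_to_base(obstacle_in_camera))
```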
Now that all of the frames are linked, gmapping can be run:
rosrun gmapping slam_gmapping tf_static:=tf
The tf_static:=tf argument remaps gmapping's subscription from the /tf_static topic to the /tf topic. In ROS Hydro the old tf package has been deprecated in favor of tf2, which publishes a /tf and a /tf_static topic instead of just a single /tf topic. So, for gmapping to find the base_frame to link the map frame to, we have to redirect it to the /tf topic.
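If you'd rather not type the remapping every time, the same thing can live in a launch file. This is a minimal sketch; the node name is my choice, and any gmapping parameters you need would go inside the node tag as well:

```xml
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <!-- Redirect gmapping's tf_static subscription to the /tf topic -->
    <remap from="tf_static" to="tf"/>
  </node>
</launch>
```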
Once gmapping is running, you should be able to open up Rviz and see the data.
rosrun rviz rviz
Just be sure to add a Map display subscribed to the /map topic.