T1ll13 robot step 2.1

Whilst I await the very slow delivery of a 3D Printer (yes, I decided to go ahead and buy one and give it a go), I've switched back to the software side of things.

I decided that I needed to do some work with the XBox Kinect and ROS.

After a bit of googling around, I see that because I've decided to use the ROS "Kinetic" version rather than the "Indigo" version that everyone else has used previously (that's the older version, btw), I'll mostly be figuring this out for myself. Who knows, I might even help some other people out along the way.

I got a bit distracted, as it looked like I needed to set up the robot simulator software.

Apparently I need to install MoveIt! - so, time to fire up the Raspi3, drop to a Terminal and type:

$ sudo apt-get install ros-kinetic-moveit
$ source /opt/ros/kinetic/setup.bash

(and then... boom! 2hrs of power cuts hit my area, probably something to do with the snow, etc...)

http://docs.ros.org/kinetic/api/moveit_tutorials/html/

...and then I figured out that this was a misdirection. This wasn't going to help me with the XBox Kinect (unless I really missed something obvious here?).

A quick wander back to the ROS Wiki..... and I see there's a reference to needing OpenCV3? (red herring! it's not needed)



...and then I realised: I've been distracted by work (and work in my personal time) and I've completely missed the point of what I was trying to do!

If I go back to the setup of the XBox Kinect that I originally performed here, I notice that I actually explained it to myself previously:

TERM2: Start the node for the Kinect.  A ROS node is a bit like an "IoT module": it does its own thing, gathering/capturing data, and if things are subscribed to it, it publishes that out to whoever is listening.  For us here, that's the roscore/master.   The /topic sub/pub concept shouldn't be new, we've been doing it with Message Queues (MQTT) for years now.... (there's a little sketch of this after the output below)
---------------------------------------------------------------------------------------------------
$ roslaunch freenect_launch freenect.launch

started roslaunch server http://rpi3ubuntu:46776/
summary
parameters
 * /camera/.....
nodes
  /camera/
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
....
[ INFO] Starting a 3s RGB and Depth stream flush.
[ INFO] Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A0033333335A'
---------------------------------------------------------------------------------------------------
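
To make that MQTT comparison concrete, here's a toy rospy publisher (purely a sketch of mine: the "talker" node and /chatter topic are made-up names, nothing Kinect-specific). Anything subscribed, even just a "rostopic echo /chatter" in another terminal, will hear it:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# toy publisher, MQTT-style: publish to a topic; whoever subscribes hears it
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rate = rospy.Rate(1)  # publish once a second
while not rospy.is_shutdown():
    pub.publish(String(data='hello from the talker'))
    rate.sleep()
---------------------------------------------------------------------------------------------------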

TERM3: Now that TERM2 above should be publishing topics, we can run a command to see them.  We can then look at the RGB image from the Kinect with image_view:
---------------------------------------------------------------------------------------------------
$ rostopic list
/camera/depth/camera_info
....
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_color
....

$ rosrun image_view image_view image:=/camera/rgb/image_color
libEGL warning: DRI2: failed to authenticate
init done
[ INFO] Using transport "raw"
---------------------------------------------------------------------------------------------------

All I need to do is make a serverNode that starts freenect_launch, and a clientNode that subscribes to specific topics published by /camera/xxx, extracts some of those values and publishes them, so that the serverNode picks the values up and acts accordingly.
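
As a very rough first stab, the clientNode half of that could look something like this rospy sketch (the node name, the /t1ll13/extracted_value topic and the "just republish the image height" placeholder are all mine, not anything official):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# clientNode sketch: subscribe to a /camera/xxx topic, extract a value,
# re-publish it so a serverNode can pick it up and act accordingly
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import Float32

def on_image(msg):
    # placeholder: a real version would work out something useful here
    # (an object's position/distance) rather than just the image height
    pub.publish(Float32(data=float(msg.height)))

rospy.init_node('client_node')
pub = rospy.Publisher('/t1ll13/extracted_value', Float32, queue_size=1)
rospy.Subscriber('/camera/rgb/image_color', Image, on_image)
rospy.spin()
---------------------------------------------------------------------------------------------------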

There's the first mission then..... detect an object, work out the 2D/3D distance, etc., and then trigger the motors to track the object with the XBox Kinect so as to keep it 'in vision' (does the XBox Kinect have motors inside to do that? If not, I'll rig up a platform with some micro-servos to do it).

UPDATE:
okay, so I was over-complicating things a bit.

The XBox Kinect publishes to the topics listed when you run:
$ rostopic list

Then, to see the values that are being published, you can use:
$ rostopic echo <topic you want to know more about>

Running:
$ rostopic echo /camera/rgb/camera_info

gives this output:
----
header:
  seq: 251
  stamp:
    secs: 1513072954
    nsecs: 801882376
  frame_id: camera_rgb_optical_frame
height: 480
width: 640
distortion_model: plumb_bob
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K:
R:
P:
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
----

After checking this info about Topics, we can find out a bit more about the structure of the above:

$ rostopic type /camera/rgb/camera_info
sensor_msgs/CameraInfo

$ rosmsg show sensor_msgs/CameraInfo
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
uint32 height
uint32 width
string distortion_model
float64[] D
float64[9] K
float64[9] R
float64[12] P
uint32 binning_x
uint32 binning_y
sensor_msgs/RegionOfInterest roi
  uint32 x_offset
  uint32 y_offset
  uint32 height
  uint32 width
  bool do_rectify
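
And knowing the structure means those same fields can be read straight from code too. A minimal sketch (the camera_info_peek node name is just one I made up):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# print the CameraInfo fields we just inspected with rosmsg show
import rospy
from sensor_msgs.msg import CameraInfo

def on_info(msg):
    rospy.loginfo("resolution %dx%d, distortion model '%s'",
                  msg.width, msg.height, msg.distortion_model)

rospy.init_node('camera_info_peek')
rospy.Subscriber('/camera/rgb/camera_info', CameraInfo, on_info)
rospy.spin()
---------------------------------------------------------------------------------------------------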


well....that's a bit more like it!..... time to investigate further.....

Looks like I'm getting closer to what I was looking for:

http://sdk.rethinkrobotics.com/wiki/Kinect_basics

(and I now see that my earlier reference to MoveIt! wasn't as crazy as I thought)


I see that if I run:
$ rosrun image_view image_view image:=/camera/rgb/image_color

I am presented with this image (moving the mouse over the colours shows the different RGB values):


If I then run:
$ rosrun image_view disparity_view image:=/camera/depth/disparity

I am presented with the depth from the XBox Kinect represented as colours.  As you can see, my hand is closer to the camera, so it shows up in red:


okay, so now I think I can start to make progress on capturing the published topic data and doing something useful with it via code..... we'll see....
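
For instance, a first "do something useful" could be reading the depth image and reporting the distance at the centre pixel, via cv_bridge (which ships with ROS, so still nothing extra needed from OpenCV3). I'm assuming the freenect /camera/depth/image_raw topic here, with 16-bit depth values in millimetres:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# sketch: report the Kinect depth reading at the centre of the image
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg)  # numpy array, one reading per pixel
    centre = depth[msg.height // 2, msg.width // 2]
    rospy.loginfo("depth at centre pixel: %s mm", centre)

rospy.init_node('depth_peek')
rospy.Subscriber('/camera/depth/image_raw', Image, on_depth)
rospy.spin()
---------------------------------------------------------------------------------------------------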

and I found a Youtube video too:

As pointed out in the video, I also need to do:
$ sudo apt install ros-kinetic-depthimage-to-laserscan


.....and then I broke it all!  (I foolishly did something in relation to OpenCV3 and gmapping, and now I just get tons of errors.  Great, time to unpick it all)

I'm so glad I documented the steps back here: https://tonyisageek.blogspot.co.uk/2017/11/m1ll13-robot-step1.html - time to wipe the SD Card, re-install everything and get back to where I was before I broke everything.  Hey, it's all part and parcel of the experience, isn't it :-)


UPDATE2:
So, a few hours later, after a (re)fresh install onto the SD Card, with everything set back up again and the same tests as above re-run, I moved on to running the rviz software:

$ rviz
(remember to run the source command in new terminal windows first)

Then I load the .rviz file downloaded from the YouTube video above, and there we have it: a weird view from the XBox Kinect using LaserScan:




The order of running things across the 3 different terminals was important:
TERM1: $ roscore &
TERM2: $ roslaunch freenect_launch freenect.launch
TERM3: $ roslaunch depthimage_to_laserscan.launch (this is the launch file downloaded from the YouTube video)
TERM1: $ rviz


This then shows the LaserScan (the white line) where the scan is "hitting a surface", which is good for working out what obstacles are in the way, etc...
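
Which means obstacle detection from code can be as simple as watching that scan topic. A quick sketch (I'm assuming the node publishes on /scan, its usual default, and 0.5m is just an arbitrary "too close" threshold I picked):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# sketch: warn whenever the nearest LaserScan reading gets too close
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # keep only readings inside the sensor's valid range
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid and min(valid) < 0.5:
        rospy.logwarn("obstacle %.2fm ahead!", min(valid))

rospy.init_node('obstacle_watch')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()
---------------------------------------------------------------------------------------------------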





...and this is how the Terminator T1ll13 is going to "view" the world.....


Right, that's me done for now.... time to figure out how to get that little lot all working from ROS code in Python or C++... (note to self: I did NOT need to do anything extra with OpenCV3)
