Monday, 18 December 2017

Using ROS to control GPIO pins on a Raspberry Pi 3

As the title says, it was time for me to document the "simple" task of using ROS (Robot Operating System) to turn an LED on/off. We can do a LOT more than that using the GPIO pins, but it's a baseline to start from.

Because I've installed ROS onto the Raspberry Pi itself, I can create ROS nodes directly on the RPi.
So let's get on with it and create a simple demo to blink an LED using ROS topics from the RPi.

First step: download and build "wiringPi":
$ git clone git://git.drogon.net/wiringPi
$ cd wiringPi
$ sudo ./build

The interesting thing is that everyone decides for themselves how they're going to refer to the GPIO pins on the RPi... I just keep track of the physical pin numbers (as those don't change!)
And here is the GPIO pin layout in relation to wiringPi:
https://projects.drogon.net/raspberry-pi/wiringpi/pins/
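Which numbering scheme you end up using in code depends on which wiringPi setup call you make. A tiny sketch (the pin values here are just examples, not my actual wiring):

#include <wiringPi.h>

int main(void)
{
    // Pick ONE of these, depending on how you want to refer to the pins:
    wiringPiSetup();        // wiringPi numbering (the scheme on the chart above)
    // wiringPiSetupGpio(); // Broadcom GPIO (BCM) numbering
    // wiringPiSetupPhys(); // physical header pin numbers (1-40)

    pinMode(1, OUTPUT);     // wiringPi pin 1 = physical pin 12 = BCM GPIO 18
    digitalWrite(1, HIGH);  // LED on
    return 0;
}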

Now that we've installed the library we're going to be using, let's switch to creating a ROS package for the LED blink demo.
I'd already created a workspace previously, as I was using the ROS ROBOTICS PROJECTS PDF as a walkthrough for the USB web cam sample (which worked great from an Ubuntu VMware VM, but fails on a real RPi...).
Previously, I was using the RPi to connect to the servo controller and had already used quite a few of the GPIOs, so I just decided to unplug one and re-purpose it for this example.
I plugged into physical PINs 6 and 10, so that is actually GPIO 12 and 20...

As I had already created a workspace I could just re-use it, but I'll re-document it here just in case I have to rebuild the SD card (again!)

$ mkdir -p ~/ros_project_dependencies_ws/src
$ cd ~/ros_project_dependencies_ws/
$ catkin_make
$ source devel/setup.bash
(also remember to add that source command to the end of the ~/.bashrc file)

$ cd ~/ros_project_dependencies_ws/src
$ catkin_create_pkg ros_wiring_examples roscpp std_msgs
$ cd ros_wiring_examples
$ mkdir src

Now, inside that src folder, create a file called blink.cpp with the following content:
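Roughly, the node looks like this (a minimal sketch rather than an exact listing; LED_WIRING_PIN is an assumption here, so set it to whichever wiringPi pin your LED is actually wired to):

#include <ros/ros.h>
#include <std_msgs/Bool.h>
#include <wiringPi.h>

// Assumption: the LED is on wiringPi pin 1 (physical pin 12) -- change to match your wiring
const int LED_WIRING_PIN = 1;

// Called every time something publishes to /led_blink
void blinkCallback(const std_msgs::Bool::ConstPtr& msg)
{
  if (msg->data) {
    digitalWrite(LED_WIRING_PIN, HIGH);
    ROS_INFO("LED ON");
  } else {
    digitalWrite(LED_WIRING_PIN, LOW);
    ROS_INFO("LED OFF");
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "blink_led");

  // wiringPi setup -- this is why the node needs to be run as root on the RPi
  wiringPiSetup();
  pinMode(LED_WIRING_PIN, OUTPUT);

  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("led_blink", 10, blinkCallback);
  ros::spin();
  return 0;
}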

That code will SUBSCRIBE to a topic called /led_blink which is a Boolean type.  If the value is TRUE, the LED will turn on, otherwise it will be off.

Navigate back up to the ros_wiring_examples folder and edit the CMakeLists.txt file to look like so:
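A minimal sketch of the bits that matter (building blink_led from src/blink.cpp and linking it against the wiringPi library):

cmake_minimum_required(VERSION 2.8.3)
project(ros_wiring_examples)

find_package(catkin REQUIRED COMPONENTS roscpp std_msgs)

include_directories(${catkin_INCLUDE_DIRS})

## Build the blink_led node from src/blink.cpp and link against wiringPi
add_executable(blink_led src/blink.cpp)
target_link_libraries(blink_led ${catkin_LIBRARIES} wiringPi)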

Now, we need to re-run the catkin_make command, so:

$ cd ~/ros_project_dependencies_ws/
$ catkin_make
$ source devel/setup.bash

That's it, we're done.  We can now run and test this.  One thing to remember: on the RPi, when you are interacting with the GPIO, you need to be the root user.

TERM1:
$ roscore


TERM2:
$ sudo -s
$ cd ~/ros_project_dependencies_ws/build/ros_wiring_examples
NOTE that the folder is under build/ and not src/
$ ./blink_led
(this will execute the compiled blink_led application built from the blink.cpp above)

If we take a look at the RPi, we see the LED is "off":



TERM3:  (sending a "1" will turn on and a "0" will turn off)
$ rostopic pub /led_blink std_msgs/Bool 1


Now, if we look again, we see the LED is "on":


$ rostopic pub /led_blink std_msgs/Bool 0


and if we take a look at the blink_led app output, we can see that the console log shows the behaviour from above:



yay! we can now turn an LED on and off by publishing a value to a rostopic.... now....that's the baseline that we can build upwards from!  Controlling servos and sending signals to perform all sorts of actions can now happen!....

3D Printer arrives

After much hassle and delays and patience.....the 3D Printer arrived.

Well, yes, it arrived on a Sunday afternoon.  Whilst I was out.  At Cheddar Gorge, because no-one is going to deliver anything on a Sunday afternoon, are they? It's safe to go out and get some cheese....

nope.  I came back home and found this box on the doorstep.  Yes, it was very wet and soaking up the water.  I pushed it into the porch....
I smiled and thought of positive things, which as it turns out was the right thing to do. The box had a box inside the box, so the 3D printer was well protected.

I did the classic "sweep everything off the dining table" manoeuvre and set about putting it together:

I stuck the SD Card in the side and selected a file that ended in .gcode - I had absolutely no idea what it was, but I just wanted to test everything was okay:

After running through the warm up and levelling the print bed, it looked like it was time to go:

Initially I didn't think anything was happening....

..and then it started to print the base layer:

 ...after some time it had grown quite a bit:

I was still none the wiser on what was actually being printed:

It was now 30% through:

...and still none the wiser?!

hang on a minute, it is starting to take on a shape:

ah!ha!!!

It was pretty impressive to watch:

..mesmerising, in fact:

and there we are, all done.  First print with PLA.  "A-Okay"  :-)




I was impressed that the hand actually had the same lines as in a proper hand (easily pleased, aren't I!)

Okay, that was the first Sunday evening session sorted....now Monday morning has arrived and it was time to move it into the "office" (upstairs).  Oh look, there is a 3D Printer sized gap on the corner of the desk - I wonder how that happened? :-D

Yep....that can sit there printing away, whilst I'm "busy working":

pan out further and you realise there is more going on at "the desk" than just 3D printing:

right, time to order all those servos, 10 kilos of ABS filament and braided fishing line...ready for the next 3 weeks of printing for the INMOOV robot!

Tuesday, 12 December 2017

T1ll13 robot step 2.1

Whilst I await a very slow delivery of a 3D printer (yes, I decided to go ahead and buy one and give it a go), I decided to switch back to the software side of things.

I decided that I need to do some work with the XBox Kinect and ROS.

After a bit of googling around, I see that because I've decided to use the ROS "Kinetic" version and not the older "Indigo" version that everyone else has previously used, I'd be figuring this out for myself... and who knows, I might even help some other people out along the way.

I got a bit distracted, and it looked like I needed to set up the robot simulator software.

Apparently I need to install MoveIt! - so, time to fire up the Raspi3, drop to a Terminal and type:

$ sudo apt-get install ros-kinetic-moveit
$ source /opt/ros/kinetic/setup.bash

(and then !boom! 2hrs of power-cuts just hit my area, probably something to do with the snow, etc...)

http://docs.ros.org/kinetic/api/moveit_tutorials/html/

and then I figured out that this was a misdirection.  This wasn't going to help me with the XBox Kinect (unless I really missed something obvious here?).

A quick wander back to the ROS Wiki..... and I see there is a reference to needing OpenCV3?  (red herring! not needed)



...and then I realised, I've been distracted by work/work in my personal time and I've completely missed the point of what I was trying to do!

If I go back to the setup of the XBox Kinect that I originally performed here, I notice that I actually explained it to myself previously:

TERM2: Start the node for the Kinect.  A ROS node is a bit like an "IoT module": it'll do its own thing, gathering/capturing data, and if things are subscribed to it, it'll publish this out to whoever is listening.  For us here, that is the roscore/master.   The /topic sub/pub concept shouldn't be a new one, we've been doing it with Message Queues (MQTT) for years now....
---------------------------------------------------------------------------------------------------
$ roslaunch freenect_launch freenect.launch

started roslaunch server http://rpi3ubuntu:46776/
summary
parameters
 * /camera/.....
nodes
  /camera/
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
....
[ INFO] Starting a 3s RGB and Depth stream flush.
[ INFO] Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A0033333335A'
---------------------------------------------------------------------------------------------------

TERM3: Now that TERM2 above should be publishing topics, we can run a command to list them.  We can then look at the RGB image from the Kinect with image_view:
---------------------------------------------------------------------------------------------------
$ rostopic list
/camera/depth/camera_info
....
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_color
....

$ rosrun image_view image_view image:=/camera/rgb/image_color
libEGL warning: DRI2: failed to authenticate
init done
[ INFO] Using transport "raw"
---------------------------------------------------------------------------------------------------

All I need to do is make a serverNode that starts freenect_launch, and a clientNode that subscribes to specific topics published under /camera/xxx, extracts out some of those values and publishes them so that the serverNode picks the values up and acts accordingly.

There's the first mission then.....detect an object, work out the 2D/3D distance, etc.. and then trigger the motors to track the object with the XBox Kinect to keep the object 'in vision' (does the XBox Kinect have motors inside to do that? if not, I'll rig up a platform with some micro-servos to do that)

UPDATE:
okay, so I was being a bit over-complex/dumb.

The XBox Kinect publishes to the topics listed when you run:
$ rostopic list

Then, to see the values that are being published you can use
$ rostopic echo <topic you want to know more about>

Running:
$ rostopic echo /camera/rgb/camera_info

gives output like this:
----
header:
  seq: 251
  stamp:
    secs: 1513072954
    nsecs: 801882376
  frame_id: camera_rgb_optical_frame
height: 480
width: 640
distortion_model: plumb_bob
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K:
R:
P:
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
----

After checking this info about Topics, we can find out a bit more about the structure of the above.

$ rostopic type /camera/rgb/camera_info
sensor_msgs/CameraInfo

$ rosmsg show sensor_msgs/CameraInfo
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
uint32 height
uint32 width
string distortion_model
float64[] D
float64[9] K
float64[9] R
float64[12] P
uint32 binning_x
uint32 binning_y
sensor_msgs/RegionOfInterest roi
  uint32 x_offset
  uint32 y_offset
  uint32 height
  uint32 width
  bool do_rectify


well....that's a bit more like it!..... time to investigate further.....
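As a first stab at that clientNode idea from earlier, here's a minimal roscpp sketch that subscribes to /camera/rgb/camera_info and pulls a couple of those fields back out (the node name is just made up for this example):

#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>

// Print the resolution and distortion model each time the Kinect publishes its camera info
void cameraInfoCallback(const sensor_msgs::CameraInfo::ConstPtr& msg)
{
  ROS_INFO("Kinect RGB camera: %ux%u, distortion model '%s'",
           msg->width, msg->height, msg->distortion_model.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "kinect_info_listener");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("/camera/rgb/camera_info", 1, cameraInfoCallback);
  ros::spin();
  return 0;
}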

Looks like I'm getting closer to what I was looking for:

http://sdk.rethinkrobotics.com/wiki/Kinect_basics

(and I now see that my earlier reference to MoveIt! wasn't as crazy as I thought)


I see that if I run:
$ rosrun image_view image_view image:=/camera/rgb/image_color

I am presented with this image (moving the mouse over the colours shows the different RGB values):


If I then run:
$ rosrun image_view disparity_view image:=/camera/depth/disparity

I am presented with the depth from the XBox Kinect represented in colours.  As you can see, my hand has moved closer, so it shows up in red:


okay, so now I think I can start to make progress on capturing the published topic data and doing something useful with it via code..... we'll see....

and I found a Youtube video too:

as pointed out in the video, I also need to do:
$ sudo apt install ros-kinetic-depthimage-to-laserscan


.....and then I broke it all!  (foolishly did something in relation to opencv3 and gmapping, now just get tons of errors.  great unpicking time)

I'm so glad I documented the steps back here: https://tonyisageek.blogspot.co.uk/2017/11/m1ll13-robot-step1.html - time to wipe the SD Card, re-install everything and get back to where I was before I broke everything.  Hey, it's all part and parcel of the experience, isn't it :-)


UPDATE2:
So, a few hours later, with a fresh install onto the SD card, everything set back up again and the same tests as above re-run, I then moved on to running the rviz software:

$ rviz
(remember to run the source command in new terminal windows first)

Then load the .rviz file downloaded from the YouTube video above and there we have it, a weird view of the XBox Kinect using LaserScan:




The order of running in 3 different terminals was important.
TERM1: $ roscore &
TERM2: $ roslaunch freenect_launch freenect.launch
TERM3: $ roslaunch depthimage_to_laserscan.launch (this is the file downloaded from the YouTube video)
TERM1: $ rviz


This then shows the LaserScan (white line) where the scan is "hitting a surface", which is good for working out obstacles in the way, etc...
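As a rough idea of how that could feed into obstacle detection later on, here's a minimal sketch of a node that subscribes to the laser scan and reports the closest thing it can see (assuming the default /scan topic from depthimage_to_laserscan):

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <cmath>
#include <limits>

// Report the closest valid range in each scan -- i.e. the nearest obstacle
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  float closest = std::numeric_limits<float>::infinity();
  for (size_t i = 0; i < scan->ranges.size(); ++i) {
    float r = scan->ranges[i];
    if (std::isfinite(r) && r >= scan->range_min && r <= scan->range_max && r < closest) {
      closest = r;
    }
  }
  ROS_INFO("Closest obstacle: %.2f m", closest);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "obstacle_watcher");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("/scan", 1, scanCallback);
  ros::spin();
  return 0;
}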





...and this is how the Terminator T1ll13 is going to "view" the world.....


Right, that's me done for now.... time to figure out how to get that little lot all working from ROS code in Python or C++... (note to self: I did NOT need to do anything extra with OpenCV3)

Monday, 4 December 2017

BabyX

Baby Driver is a good movie, but this is not what this article is about....

I extracted the bits that I thought were eye-opening from the following article:
https://www.bloomberg.com/news/features/2017-09-07/this-startup-is-making-virtual-people-who-look-and-act-impossibly-real

Soul Machines wants to produce the first wave of likeable, believable virtual assistants that work as customer service agents and breathe life into hunks of plastic such as Amazon.com’s Echo and Google Inc.’s Home. https://www.soulmachines.com/
....

....
Mark Sagar’s approach on this front may be his most radical contribution to the field. Behind the exquisite faces he builds are unprecedented biological models and simulations. When BabyX smiles, it’s because her simulated brain has responded to stimuli by releasing a cocktail of virtual dopamine, endorphins, and serotonin into her system. This is part of Sagar’s larger quest, using AI to reverse-engineer how humans work. He wants to get to the roots of emotion, desire, and thought and impart the lessons to computers and robots, making them more like us.

“Since my 20s, I’ve had these thoughts of can a computer become intelligent, can it have consciousness, burning in my mind,” he says. “We want to build a system that not only learns for itself but that is motivated to learn and motivated to interact with the world. And so I set out with this crazy goal of trying to build a computational model of human consciousness.”

Here’s what should really freak you out: He’s getting there a lot quicker than anybody would have thought. Since last year, BabyX has, among other things, sprouted a body and learned to play the piano. They grow up so fast.
....
Feeling he’d solved the riddles of the face, Sagar dreamed bigger. He’d kept an eye on advancements in AI technology and saw an opportunity to marry it with his art. In 2011 he left the film business and returned to academia to see if he could go beyond replicating emotions and expressions. He wanted to get to the heart of what caused them. He wanted to start modeling humans from the inside out.
....
Sagar clicked again, and the tissue of the brain and eyes vanished to reveal an intricate picture of the neurons and synapses within BabyX’s brain—a supercomplex highway of fine lines and nodules that glowed with varying degrees of intensity as BabyX did her thing. This layer of engineering owes its existence to the years Sagar’s team spent studying and synthesizing the latest research into how the brain works. The basal ganglia connect to the amygdala, which connects to the thalamus, and so on, with their respective functions (tactile processing, reward processing, memory formation) likewise laid out. In other words, the Auckland team has built what may be the most detailed map of the human brain in existence and has used it to run a remarkable set of simulations.

BabyX isn’t just an intimate picture; she’s more like a live circuit board. Virtual hits of serotonin, oxytocin, and other chemicals can be pumped into the simulation, activating virtual neuroreceptors. You can watch in real time as BabyX’s virtual brain releases virtual dopamine, lighting up certain regions and producing a smile on her facial layer. All the parts work together through an operating system called Brain Language, which Sagar and his team invented. Since we first spoke last year, his goals haven’t gotten any more modest. “We want to know what makes us tick, what drives social learning, what is the nature of free will, what gives rise to curiosity and how does it manifest itself in the world,” he says. “There are these fantastic questions about the nature of human beings that we can try and answer now because the technology has improved so much.”
....
AND NOW FOR THE COOL/CREEPY BIT (that I absolutely love!):
....
Sagar’s software allows him to place a virtual pane of glass in front of BabyX. Onto this glass, he can project anything, including an internet browser. This means Sagar can present a piano keyboard from a site such as Virtual Piano or a drawing pad from Sketch.IO in front of BabyX to see what happens. It turns out she does what any other child would: She tries to smack her hands against the keyboard or scratch out a shabby drawing.

What compels BabyX to hit the keys? Well, when one of her hands nudges against a piano key, it produces a sound that the software turns into a waveform and feeds into her biological simulation. The software then triggers a signal within BabyX’s auditory system, mimicking the hairs that would vibrate in a real baby’s cochlea. Separately, the system sets off virtual touch receptors in her fingers and releases a dose of digital dopamine in her simulated brain. “The first time this happens, it’s a huge novelty because the baby has not had this reaction before when it touched something,” Sagar says. “We are simulating the feeling of discovery. That changes the plasticity of the sensory motor neurons, which allows for learning to happen at that moment.

Does the baby get bored of the piano like your non-Mozart baby? Yes, indeed. As she bangs away at the keys, the amount of dopamine being simulated within the brain receptors decreases, and BabyX starts to ignore the keyboard.
....
Sagar remains sanguine about the lessons AI can learn from us and vice versa. “We’re searching for the basis of things like cooperation, which is the most powerful force in human nature,” he says. As he sees it, an intelligent robot that he’s taught cooperation will be easier for humans to work with and relate to and less likely to enslave us or harvest our bodies for energy. “If we are really going to take advantage of AI, we’re going to need to learn to cooperate with the machines,” he says. “The future is a movie. We can make it dystopian or utopian.” Let’s all pray for a heartwarming comedy.
....

https://www.bloomberg.com/news/features/2017-09-07/this-startup-is-making-virtual-people-who-look-and-act-impossibly-real




Friday, 24 November 2017

IBM partner with MIT

https://www.digitaltrends.com/computing/ibm-and-mit-ai-partnership/



"This work is undeniably promising, but it’s a simple evolution of the hardware we have today. Another, more dramatic option is the use of a quantum computer to explore the potential of an A.I. Such research is still in its earliest conceptual stages, but the enormous computational power of a large-scale universal quantum computer seems likely to inspire a major leap in our understanding.

MIT’s lab will have access to IBM Q, the company’s flagship quantum project. Recently updated to a 20-qubit processor, an even more impressive 50-qubit version on the horizon – hardware that will surely be a real gamechanger when it’s possible to use it to its full potential. This avenue of research is set to be a two-way street. Machine learning will be used to help advance research into quantum hardware, and the results will help scientists push the boundaries of machine learning.

.....

The MIT-IBM Watson A.I. Lab will be the setting for these discussions. It’s clear that A.I. is bursting with potential, but that brings about its own challenges. Individuals and organizations working in the field are sure to want to use their talents to break new ground. Both MIT and IBM want to facilitate that important work – but they want to make sure that it’s carried out with the proper caution."



Sunday, 12 November 2017

InMoov finger

Today, I decided to get on with putting together the InMoov 3D printed finger.  I wanted to see how good the 3D printing was and how the construction was done for one of the fingers.  Seeing as I've got to do all of them, I wanted to see how they are put together and how they can be controlled by a servo.

Well, there's the basics all laid out:

After a bit of digging around, I found some acetone (nail varnish remover to you & me!); why we'll need that will become clear shortly....
....as the 3D printed finger parts are ABS, you can use the acetone with a small brush to melt the parts together - there are the finger parts on the right, melted together and with the blue joint material threaded through to hold it all together - works pretty well:

For now, time to attach to the base unit:

So, this was the original HobbyKing servo I was going to use.  Note those 2 extra circular discs were meant to be used, but they don't fit the servo, so I opted not to use them yet (that will change).

Now it was time to thread the tendons through the finger.  I mis-used an LED to help push them through the last part of the finger - hey, whatever works, right? :-)

...and there we have it, 50% done...
 ..and there's the other 50% threaded through:

Now to hook up the servo to digital pin 3 of the Arduino (just easier and quicker to test with the Arduino).
 Quick bit of code:
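Something along these lines (a sketch from memory; the sweep angles and delays are arbitrary, the only fixed detail is the servo signal wire on digital pin 3):

#include <Servo.h>

Servo fingerServo;          // the servo pulling the finger tendons

void setup() {
  fingerServo.attach(3);    // signal wire on digital pin 3
}

void loop() {
  fingerServo.write(10);    // relax - finger straightens
  delay(1000);
  fingerServo.write(170);   // pull the tendon - finger curls
  delay(1000);
}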
 Verify and compiled:
 Downloaded:

....and there we go, we can now flip the finger when we need to.  I've left off the end of the finger-tip for a good reason.  You'll see the tendons are tied off there and you then melt the finger tip over the top, but it makes it permanent...and, well, I don't want to do that just yet.

Of course there are challenges.  Wouldn't be fun if there wasn't some.

Here's a quick video:


As you can see, it kinda works, but I need a better pull/Robring mechanism (basically a dished outer edge to the white plastic circle on the servo, so that the tendon can move further back) - with a better mechanism the finger will be able to pull back straighter than it does right now.

Hey, it's all learning....small steps.  Now, time to get those .stl files over to a 3d printer and get a whole hand and arm printed up ready for the next phase.  Until then, I'll switch back to the code side of things and see if I can get the Kinect hooked up for vision and get it to react by moving servos etc...