Sunday, 12 November 2017

InMoov finger

Today, I decided to get on with putting together the InMoov 3D-printed finger.  I wanted to see how good the 3D printing was and how the construction is done for one of the fingers.  Seeing as I've got to do all of them, I wanted to see how they are put together and how they can be controlled by a servo.

Well, there's the basics all laid out:

After a bit of digging around, I found some acetone (nail varnish remover to you & me!), why we'll need that will become clear shortly....
....as the 3D-printed finger parts are ABS, you can use the acetone with a small brush to melt the parts together.  There are the finger parts on the right, melted together and with the blue joint material threaded through to hold it all in place - works pretty well:

For now, time to attach to the base unit:

So, this was the original HobbyKing servo I was going to use.  Note that those 2 extra circular discs were meant to be used, but they don't fit the servo, so I opted not to use them yet (that will change).

Now it was time to thread the tendons through the finger.  I misused an LED to help push the tendon through the last part of the finger - hey, whatever works, right? :-)

...and there we have it, 50% done...
 ..and there's the other 50% threaded through:

Now to hook up the servo to digital pin 3 of the Arduino (it's just easier and quicker to test with the Arduino).
Quick bit of code:
Verified and compiled:
Downloaded:

....and there we go, we can now flip the finger when we need to.  I've left off the finger-tip for a good reason: the tendons are tied off there and you then melt the finger-tip over the top of them, but that makes it permanent... and, well, I don't want to do that just yet.

Of course there are challenges.  Wouldn't be fun if there weren't some.

Here's a quick video:


As you can see, it kinda works, but I need a better pull/Robring mechanism (basically a dished outer edge to the white plastic circle on the servo, so that the tendon can move further back).  With a better mechanism, the finger will be able to pull back straighter than it does right now.

Hey, it's all learning.... small steps.  Now, time to get those .stl files over to a 3D printer and get a whole hand and arm printed up ready for the next phase.  Until then, I'll switch back to the code side of things and see if I can get the Kinect hooked up for vision and get it to react by moving servos etc...

M1ll13 robot step 2


As explained here:  http://www.cs.bham.ac.uk/internal/courses/int-robot/2015/notes/concepts.php

ROS is a message-passing framework which allows you to create fully-fledged robotic systems quickly and easily. Rather than writing a single, large program to control the robot, we run a number of smaller, process-specific programs (e.g. to analyse images from a camera) and run them side-by-side, passing messages between them to share data and other variables.

The core component in ROS is called a Node:

Each Node performs a particular process, such as processing sensor data or controlling a laser scanner. You can think of them as code modules. However, rather than calling a Java method to get the robot to do something, we publish messages.
Messages are published on topics, which are like separate channels.  Nodes subscribe to these topics:
Whilst one Node publishes messages on a certain topic (for example, robot movement commands), another Node may subscribe to this topic and act upon the messages it receives (you could think of it in terms of following someone on Twitter). Nodes can publish on any topic, or subscribe to any topic, or do both; and multiple nodes can each subscribe to the same topic.
Nodes can find each other using a program called roscore:

This acts a bit like a router in a network, and ensures that messages on each topic are being passed between nodes correctly.
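To make that concrete, here's a minimal sketch of a publisher and a subscriber in Python (rospy) - the node and topic names are purely illustrative, nothing to do with M1ll13 yet:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# talker.py (illustrative sketch) - publishes a String on the /chatter topic once a second
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz
while not rospy.is_shutdown():
    pub.publish("hello from talker")
    rate.sleep()
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# listener.py (illustrative sketch) - subscribes to /chatter and logs whatever arrives
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("heard: %s" % msg.data)

rospy.init_node('listener')
rospy.Subscriber('/chatter', String, callback)
rospy.spin()
---------------------------------------------------------------------------------------------------
With roscore running, each of those runs in its own terminal and the listener logs whatever the talker publishes.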


I decided to make a Service to control the Servos.  "Hang on", I hear you say, "where did this concept of a Service come from?"....Well, let's take a look at what the ROS documentation says:

The publish / subscribe model is a very flexible communication paradigm, but its many-to-many one-way transport is not appropriate for RPC request / reply interactions, which are often required in a distributed system. 
Request / reply is done via a Service, which is defined by a pair of messages: one for the request and one for the reply. A providing ROS node offers a service under a string name, and a client calls the service by sending the request message and awaiting the reply. Client libraries usually present this interaction to the programmer as if it were a remote procedure call (RPC).
Services are defined using srv files, which are compiled into source code by a ROS client library.


A client can make a persistent connection to a service, which enables higher performance at the cost of less robustness to service provider changes.

I set about making a Service to specifically control the Servos (if this is the wrong thing to do, I'll no doubt find out shortly!).  Why did I choose to make this a service?  Well...I wanted a response to come back and tell me that the servo movement was valid and possibly what the status of the servo is.
I applied the following logic:
On the RPi3 I have Geany installed to edit/create the required Python files.  As I'm still using the default folders from the initial walkthrough, I'll be in the ~/catkin_ws folder for the following.
As is required for a service, I created the MoveServo.srv file in the srv folder.  This contains a definition of the parameters to pass in and out of the service.
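I'm not pasting my actual file here, but the shape of it is something like this (the exact field names in my MoveServo.srv may differ - request fields go above the '---', response fields below it):
---------------------------------------------------------------------------------------------------
int32 servo_id
int32 start_pos
int32 end_pos
---
bool success
string status
---------------------------------------------------------------------------------------------------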

Then, within the /scripts folder, I created the servo_control_server.py file.  This is the node that offers the /move_servo service and will, well, move the servos!
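The real file isn't pasted here (it'll go up on GitHub), but a stripped-down sketch of the idea - assuming the MoveServo fields guessed above and the Adafruit_PCA9685 Python library - looks something like this:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_control_server.py (sketch, not the final file) - offers the /move_servo service
# and drives the PCA9685 over I2C
import rospy
import Adafruit_PCA9685
from beginner_tutorials.srv import MoveServo, MoveServoResponse

pwm = Adafruit_PCA9685.PCA9685()   # PCA9685 on the default I2C address 0x40
pwm.set_pwm_freq(60)               # 60 Hz is typical for analogue servos

def handle_move_servo(req):
    # the positions are PCA9685 pulse ticks (e.g. 150..600), not literal degrees
    pwm.set_pwm(req.servo_id, 0, req.start_pos)
    rospy.sleep(1.0)
    pwm.set_pwm(req.servo_id, 0, req.end_pos)
    return MoveServoResponse(success=True, status="servo %d moved" % req.servo_id)

def servo_control_server():
    rospy.init_node('servo_control_server')
    rospy.Service('/move_servo', MoveServo, handle_move_servo)
    print("Ready to move servos.")
    rospy.spin()

if __name__ == "__main__":
    servo_control_server()
---------------------------------------------------------------------------------------------------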

Now, we need to create the servo_control_client.py file.  This sends a request to the /move_servo service, passing a few parameters ([which servo], [start position], [end position]), and receives a response back confirming the action was successful, i.e. the servo moved as expected.  As you can see, the code is very simple.
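Again, not the actual file, but a rough sketch of the client side, matching the field names guessed above:
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_control_client.py (sketch) - calls the /move_servo service with values from the command line
import sys
import rospy
from beginner_tutorials.srv import MoveServo

def move_servo_client(servo_id, start_pos, end_pos):
    rospy.wait_for_service('/move_servo')
    try:
        move_servo = rospy.ServiceProxy('/move_servo', MoveServo)
        resp = move_servo(servo_id, start_pos, end_pos)
        return resp.success
    except rospy.ServiceException as e:
        print("Service call failed: %s" % e)
        return False

if __name__ == "__main__":
    servo_id, start_pos, end_pos = [int(x) for x in sys.argv[1:4]]
    print("Moved OK: %s" % move_servo_client(servo_id, start_pos, end_pos))
---------------------------------------------------------------------------------------------------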

(Yes, I'll shortly upload the code to a github repo. so you can see the wonder for yourself)

Now that the Python server/client code has been written, how do we make ROS aware of it and test it?  One thing to note already is that I've hooked up the RPi3 to an Adafruit PCA9685, which allows me to control 16 servos via the RPi3 I2C interface (here's an image of it being used with an Arduino, which may possibly also happen in the future).

As for servos, I have a couple of H-King 15298 servos, as they are needed for the variant of the InMoov robot that we'll be making.

Here's the initial setup.  As you can see, I'm using an external power source for the PCA9685 (4xAAA batteries), as I read that the power draw of the servos can trip the RPi3.  It's a good idea to power servos from their own supply anyway (as I've previously done with ArcTuRus).
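Before wiring it into ROS, it's worth a quick standalone test of the PCA9685.  This isn't my exact test app, just a minimal sketch using the Adafruit_PCA9685 Python library to sweep channel 0 between two pulse lengths:
---------------------------------------------------------------------------------------------------
# quick standalone PCA9685 test - sweep the servo on channel 0 back and forth
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(60)               # 60 Hz is typical for analogue servos

SERVO_MIN = 150   # pulse length in ticks (out of 4096) - one end of travel
SERVO_MAX = 600   # the other end of travel

for _ in range(5):
    pwm.set_pwm(0, 0, SERVO_MIN)
    time.sleep(1)
    pwm.set_pwm(0, 0, SERVO_MAX)
    time.sleep(1)
---------------------------------------------------------------------------------------------------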

Now that we have the hardware ready, let's get ROS all set up so we can test this out.
$ cd ~/catkin_ws
$ catkin_make
# this builds the workspace and generates the code needed for our .srv file
# as mentioned earlier, I built on the tutorial code and have already made
# the modifications to the CMakeLists.txt and package.xml files,
# so I'm not going to define them again here
$ source ./devel/setup.bash

Now, we need 3 Terminal windows to be opened:

TERM1: $ roscore

TERM2: $ rosrun beginner_tutorials servo_control_server.py
Ready to move servos.

TERM3: $ rosrun beginner_tutorials servo_control_client.py 0 150 600

The client calls /move_servo passing 0 150 600, which triggers the code in the Service node to drive servo[0], moving it from position 150 to position 600 (with the PCA9685 these values are PWM pulse lengths rather than literal degrees).

(If you have to change anything in the .py files, just remember that you need to re-run $ catkin_make)

Of course, we have a little video of that doing its thing (albeit this video is from me writing a test app, before I merged it into servo_control_server.py):


Yay!  Well, there we have a server Service that controls the servos and a client app that sends a request to move specific servos.  "Hang on", you say (btw - you say that a lot), "I could have just written a single app to do that.  I didn't need to use ROS.  Why do I need to add complexity for no perceived value?".  I am so glad you say that......


Let's now go one step further.  I happen to have a Sharp GP2Y0A21YK IR sensor knocking about.  I wire it up to the RPi3 and write a little bit of code that just detects something breaking the IR beam.  You're meant to use the IR sensor to calculate the distance to the object from the voltage value it returns, but for now all we really care about is: "Did we detect something in the way?  If so, we need to trigger a reaction on a servo....."  That sounds like a more realistic scenario.
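A rough sketch of that yes/no check, assuming the sensor output is wired to GPIO17 (a hypothetical pin choice) and read as a plain digital level - a proper distance reading would need an ADC such as an MCP3008, since the GP2Y0A21YK output is analogue:
---------------------------------------------------------------------------------------------------
# crude yes/no object detection from the Sharp IR sensor (sketch)
import time
import RPi.GPIO as GPIO

IR_PIN = 17  # hypothetical wiring choice

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(IR_PIN):
            print("Something detected in the way!")
        time.sleep(0.1)
finally:
    GPIO.cleanup()
---------------------------------------------------------------------------------------------------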

As you can see below, this time we are going to use the pub/sub node concept:
We have a topic /servo_chatter, listened to and handled by the servo_listener.py node.
We have a servo_talker.py client app that monitors the IR sensor; if an object is detected, it publishes a message to the topic /servo_chatter.
From the command line, we can also run a command to manually publish to the topic /servo_chatter (demonstrating the many-to-many concept) - we could have multiple external sensors that call the same topic, just passing different parameters.  As we do not care about a response, this setup is valid.
Within the servo_listener.py code, we can then call the /move_servo Service (see the sketch below).
As shown previously in the bottom right of this image:
We now have servo_listener.py calling the /move_servo service, which then performs an action on the servos.  Cool, huh!  Now you start to see why this setup makes sense.
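Here's a rough sketch of those two nodes - again, assuming a plain std_msgs/String message on /servo_chatter and the MoveServo fields guessed earlier (the real files will be in the GitHub repo):
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_talker.py (sketch) - watches the IR input and publishes to /servo_chatter
import rospy
from std_msgs.msg import String

def object_detected():
    # hypothetical helper - e.g. the GPIO check sketched earlier
    return False

rospy.init_node('servo_talker')
pub = rospy.Publisher('/servo_chatter', String, queue_size=10)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    if object_detected():
        pub.publish("object_detected")
    rate.sleep()
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
#!/usr/bin/env python
# servo_listener.py (sketch) - reacts to /servo_chatter messages by calling the /move_servo service
import rospy
from std_msgs.msg import String
from beginner_tutorials.srv import MoveServo

def callback(msg):
    rospy.wait_for_service('/move_servo')
    move_servo = rospy.ServiceProxy('/move_servo', MoveServo)
    move_servo(0, 150, 600)   # flip servo 0 in response

rospy.init_node('servo_listener')
rospy.Subscriber('/servo_chatter', String, callback)
rospy.spin()
---------------------------------------------------------------------------------------------------
The manual publish mentioned above is just a one-off rostopic pub to /servo_chatter from the shell - no extra code needed.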


Now, let's see all that in action:


Now that we have the basics worked out, we can use this as the foundation framework to build upwards from.  Obviously, there are ROS packages that we can re-use to do further things; for instance, I am going to look into whether I can use this package to perform the REST API calls to the IBM Cloud Watson services..... for Conversation/Speech-to-Text/Text-to-Speech/Visual Recognition and Language Translation....



Right, now it's time to start making the InMoov finger....it's a start!

Saturday, 11 November 2017

Saturday status

Making some great progress with ROS, RPi3, PCA9685 and a couple of HK 15298s and some Python code...
(and yes, Jelena is just "resting" in the background)

I'll write up the notes from the center of the desk a little later.  Small steps, but certainly moving in the right direction!

Friday, 10 November 2017

Aging isn't a disease

A great article on what I see as the "Next step....."

https://sdtimes.com/ibm-expands-ai-research-support-aging-population/


“For the first time ever, there are more people over 65 than under 5,” Keohane said. “There is essentially a shortage of care providers. If you look at this demographic shift across the globe, this is why a company like IBM is interested in looking at aging. What does the aging demographic shift mean for our clients? We’re in every industry and every one will be impacted.”

By combining IBM’s IoT and health care research with their Watson machine learning, Keohane says that the effect this shift will have can be broken down in such a way that it will benefit everyone from the elderly in need of improved care to businesses, now more aware of their customer base.


“Aging isn’t a disease,” Keohane said. “We’re all doing it. But it does have impact on health. So could we surround ourselves with emerging technology in the home, while assuring the privacy and security that comes with health care and design something that will help someone understand how well they’re aging in place?”


It's a very interesting read and definitely an area that I see receiving more and more focus between now and 2020 and beyond.  (Also, a great big nod towards why I'm figuring out how to build M1ll13, my own personal care robot of the future.)

Tuesday, 7 November 2017

Digital Music tangent

Sometimes we just consume.  Sometimes we create.  Sometimes we listen.

Sometimes we combine all of the above.


If you've ever listened to Faithless 2.0 and thought, "Hey, I can do that" (no, it's not as easy as it looks/sounds), and if you have time & insomnia, then yes, you can do something creative and fun and potentially get the party jumping.... or maybe you're gifted like my mate Leigh, and can just do it:


For us mere mortals you need some tools:



and you might need a dose of Omnisphere (my mate Andy swears by it!)

Get creating.....

Some of these scenes look familiar:

M1ll13 robot step 1

After a couple of long nights, a lack of appetite and a fair amount of headaches, I finally have something meaningful set up as a framework/platform to build on for the new robot called M1ll13 - it's a new (Millennial) species, so it needs a new type of naming structure.

(this is not M1ll13 - just a stock photo)

I originally went the RPi3 route of installing Raspbian/Jessie and then attempting to build upwards from there to install ROS, as explained here:
http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Kinetic%20on%20the%20Raspberry%20Pi

All seemed to work okay until it got to the rosbag install, which would fail with no sensible way of recovering.... trust me, I tried.

I even went back a version to Indigo, but still had no joy:
http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi
....that just gave me issues at another point further down the line.

Not being one to admit defeat, I adopted my usual approach (to work & life) and that was to find another way to the goal...just imagine running water.  Running water will always find a way to get around what it needs to in order to keep moving.

With that in mind, I wondered if I could do what I had just done on my Mac.  I set up a VMware Fusion VM, installed Ubuntu Linux and installed ROS (as defined here).  That went off without a hitch, so I was thinking... I wonder if I could install Ubuntu onto a Raspberry Pi 3?  Not something I've done before.... but time to give it a go.

After a bit of searching, I found this site: http://phillw.net/isos/pi2/
and more specifically, I downloaded this image:
http://phillw.net/isos/pi2/ubuntu-mate-16.04-desktop-armhf-raspberry-pi.img.xz

After a quick session with Etcher and a 32GB SD card.... the image was burnt and ready to go into the Raspberry Pi 3.  It booted.  Why was I surprised?  After a quick setup and a couple of reboots, I was ready to see if I could get something working.  After the usual 'sudo apt-get update/upgrade' it was time to get on with the ROS stuff.
From a Terminal session, it was time to enter:
---------------------------------------------------------------------------------------------------
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116
sudo apt-get update
sudo apt-get install -y ros-kinetic-desktop-full
sudo rosdep init
rosdep update
# Create ROS workspace
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
source /opt/ros/kinetic/setup.bash
sudo apt-get install -y python-rosinstall
sudo apt-get install -y python-roslaunch
---------------------------------------------------------------------------------------------------

Initially the ros-kinetic-desktop package above did not have the word "full" on the end, but after having the last command "python-roslaunch" fail, I found some suggestions to do it as above.  The "python-roslaunch" install still fails with unmet dependency errors, but it didn't seem to be an issue at this point.

Now it was time to plug the Xbox 360 Kinect into the RPi3.  I noticed that the Kinect has an odd cable end.... not USB.  Typical.  I looked online; Amazon could get me one by the weekend, as could eBay, but that was, like, 4 days away!  After hunting around the house for 30 minutes, I found the original charger for the Kinect that included a USB connector - result!  It was a bit picky about which of the USB sockets I put it into on the RPi3, but I found the top left to work okay - the Kinect green light came on!  (Also, running lsusb told me it was detected: Microsoft Corp. Xbox NUI Camera/Motor/Audio.)

Back to that open Terminal from earlier, we have to download the software to get ROS to work with the Kinect:
---------------------------------------------------------------------------------------------------
# search for packages
sudo apt-cache search ros-kinetic | grep "search term"
# Install kinect packages
sudo apt-get install -y freenect
sudo apt-get install -y ros-kinetic-freenect-camera ros-kinetic-freenect-launch
sudo apt-get install -y ros-kinetic-freenect-stack ros-kinetic-libfreenect
---------------------------------------------------------------------------------------------------


Now it was time to open up multiple Terminal windows.....

TERM1: This is the "master" - think of it as the main brain that has to be running to process all the events that are happening (a bit like a web server):
---------------------------------------------------------------------------------------------------
$ roscore
....
started roslaunch server http://rpi3ubuntu:37735/
ros_comm version 1.12.7
summary
parameters
 * /rosdistro: kinetic
 * /rosversion: 1.12.7
nodes
auto-starting new master
process[master]: started with pid [2592]
ROS_MASTER_URI=http://rpi3ubuntu:11311/

setting /run_id to c2dbad48-c3d7-11e7-9e7d-b827eb8fbf84
process[rosout-1]: started with pid [2605]
started core service [/rosout]
---------------------------------------------------------------------------------------------------

TERM2: Start the Node for the Kinect.  A ROS node is a bit like an "IoT module": it'll do its own thing, gathering/capturing data, and if things are subscribed to it, it'll publish this out to whoever is listening.  For us here, that is the roscore/master.  The topic pub/sub concept shouldn't be new - we've been doing it with message queues (MQTT) for years now....
---------------------------------------------------------------------------------------------------
$ roslaunch freenect_launch freenect.launch

started roslaunch server http://rpi3ubuntu:46776/
summary
parameters
 * /camera/.....
nodes
  /camera/
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
....
[ INFO] Starting a 3s RGB and Depth stream flush.
[ INFO] Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A0033333335A'
---------------------------------------------------------------------------------------------------

TERM3: Now that TERM2 should be publishing topics, we can run a command to list them.  We can then look at the RGB image from the Kinect with image_view:
---------------------------------------------------------------------------------------------------
$ rostopic list
/camera/depth/camera_info
....
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_color
....

$ rosrun image_view image_view image:=/camera/rgb/image_color
libEGL warning: DRI2: failed to authenticate
init done
[ INFO] Using transport "raw"
---------------------------------------------------------------------------------------------------
This will pop up a window that shows you what the Xbox 360 Kinect is currently "seeing",
as shown here (and the purpose of ALL THIS WRITING WAS JUST TO SHOW THIS PHOTO!):


Yes, that is the re-purposed iMac G5 running Ubuntu with the Arduino IDE and an Arduino UNO plugged in, ready for some servo action later in the week.  Some random Sharp IR sensors are waiting to be used too, along with a 15-servo HAT for the RPi.

You know it's going to be a fun couple of evenings when you can see a soldering iron sitting on the corner of the desk.

So, there's the RPi3 on the bottom left, plugged into the Xbox Kinect (on top of the iMac G5), and the big monitor on the right is plugged into the RPi3, showing the 3 terminals described earlier and the output image of the Xbox Kinect, with yours truly trying to take a decent photo.... (and yes, that is a Watson T-shirt I'm wearing).

If anyone is interested, I ran up "System Monitor" and the 4 CPU cores are running at 35-50% and memory is at 365MB out of 1GB.  I am streaming the image at quite a large size though - something to keep an eye on.

Phew, that was step 1..... now it's time to do some further basic ROS testing and to create some new nodes and make sure that ROS is working correctly.

What ROS nodes will I be creating?..... Well, obviously there will be some STT/TTS/NLU nodes and, now that I have the Xbox Kinect working, a Visual Recognition node too.

And for the truly eagle-eyed, yes, that is an AIY Projects Voice Kit box on the top right - I haven't gotten around to making it yet (maybe by the end of the week), but in essence it's the same thing as what I'm going to be doing anyway, just without pushing a button or using Google....

UPDATE: To get I2C to work, you need to modify /boot/config.txt by hand - because we're using Ubuntu, we can't use raspi-config.
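The change itself is the standard Raspberry Pi firmware option (add dtparam=i2c_arm=on to /boot/config.txt and reboot).  After that, a quick sanity check in Python that the PCA9685 shows up on the bus - assuming python-smbus is installed and the board is at its default address 0x40 on bus 1:
---------------------------------------------------------------------------------------------------
# quick I2C sanity check (sketch) - is the PCA9685 answering at 0x40?
import smbus

BUS = 1          # /dev/i2c-1 on a Raspberry Pi 3
ADDRESS = 0x40   # PCA9685 default address

bus = smbus.SMBus(BUS)
try:
    bus.read_byte(ADDRESS)
    print("PCA9685 responded at 0x%02x - I2C is working" % ADDRESS)
except IOError:
    print("Nothing at 0x%02x - check the wiring and /boot/config.txt" % ADDRESS)
---------------------------------------------------------------------------------------------------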