Using an RPi3 B+ with Docker Containers

Okay... so I've been using Docker containers since the 32-bit version on really old hardware - I'll see if I can take a photo of the old faithful "indestructible" Panasonic laptop (it's on the shelf behind me), which must be from quite some time ago... I believe I was using it to install Node.js apps into containers so that I could run many different versions of Node.js and not screw up my proper OS, because believe it or not, sometimes there are breaking changes when you are forced to upgrade external libraries... and y'know, you might not be in a position to have the bandwidth to refactor for those changes...

I'm pretty sure I last used that Panasonic back in 2018, maybe dipping into 2019 - just.  Anyway, it was more of an amusing "prop" than anything else; it was always fun on a GWR train to have the little wifi aerial springing about out of the top as I ran a scan of all the mobile phones / devices in the carriage that had wifi or bluetooth enabled insecurely... and left amusing "gifts" for the unsecured.  Anyway, I've not been on a train for years and don't plan to any time soon.

At the time, we were also doing work stuff that involved using Docker containers for our deployed app... which I then broke into many apps... and we would have each app within its own container... I also then split them out so they could run stand-alone; what I believe the young'uns refer to as "MicroServices" - or as I like to call them, .dll's running on a server.  Then came K8s, the good old Kubernetes cluster from Google (fun fact: its predecessor was called Borg and Kubernetes itself was codenamed Project 7, as in 7-of-9 from Star Trek), and this allowed the management and scaling of those Docker containers...

Thankfully, due to me being old skool and building out the design of the app(s) to be horizontally and vertically scalable - like good old WAS (WebSphere Application Server) Java apps used to be designed - y'know, no sticky sessions, no binding or bottlenecks into a single component, blah blah blah... yeah, those app(s) were built to be modular and stand-alone, but could also scale independently depending upon the load.  The tooling was fit for purpose.  Nice.  Of course, it was made to be nice & complex, so that it was a right b'stard to set up properly: you needed to attend training courses to figure out the basics, then had to build it out 20 times to figure out where you'd messed up, and eventually, after many wars & battle scars, you got it smoothed out... and then some clever d!ck built out a CI/CD pipeline tool that reads a .yaml file (basically your long-earned list of scripts/steps/procedures) and does it all auto-magically - and 90% of people have no clue what it's doing or why, and most don't even care, so long as they remember to use the secrets file in the right way.

Right, what has this got to do with Raspberry Pis?  Well, whilst I was typing up that little lot, in the background my RPi 3 was running an install script.  It has finished now, so the history lesson is over and the future lesson can begin... although it will move into the history category in a moment... let's try not to think too much about what time is, how it works and whether the past is real or not - I'm pretty sure there is a YouTube video by Sabine about that.


So, grab yourself an RPi 3 B+ (or whatever you have to hand - I'm only using this device because there is kind of a chip shortage on, and whilst I do have 2 x RPi4's... they are "busy"); the old kit will have to do.

Do the normal install / setup, blah de blah de blah...

$ cd Downloads

$ curl -sSL https://get.docker.com > docker.sh

Now, cos I'm me, I've piped that output to a docker.sh file on my device so that I can take a look at it and verify it hasn't been fiddled with - as if I would know?!  Okay, the truth is I'm SSH'd into the RPi and I cannot find the pipe symbol on the bluetooth keyboard that I am using; I've tried everything / everywhere, cannot find it... so I pretend there is some mystical reason, but it's just because I cannot type the... oh bugger, I cannot even show the character... you know, imagine the command had the pipe character followed by "sh" after the .com - that's what most people would do.  That tells it to execute the script once downloaded.  For us though:

$ sudo chmod +x docker.sh

$ ./docker.sh

This can take a little while - long enough to take a photo, upload it to a NAS, copy it down to your laptop, paste it into the text above - write the text above, be nostalgic about the DaisEY project... sigh... and then it has finished. sweet.
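Oh, and if you did want to actually have that promised nosey at the script before running it (rather than just pretending, like me), it's easy enough:

$ less docker.sh

$ sha256sum docker.sh

The first lets you scroll through what the script is about to do to your machine; the second gives you a checksum you can note down and compare against if you ever pull the script again.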

Now a few permission tweaks, otherwise we'd have to run everything as the big bad naughty root user account and no-one wants to do that.  Basically, add the default user you use (could be 'pi') to the docker group, so it can do stuff(s):

$ sudo usermod -aG docker tony

You do actually have to log out for this change to come into effect (affect / effect? someone will correct my grammar - I always forget which one to use; there probably is some simple rule).  Anyway, you may have to type this twice:

$ logout

and then SSH back in to the RPi, then type the following to see the new group added to the list:

$ groups
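As a small aside, if logging out and back in feels like a faff, running the following starts a shell in your current session with the new group membership applied (a proper logout/login is still the cleaner option):

$ newgrp docker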

There is an easy-peasy-lemon-squeezy test to see if docker is now running okay:

$ docker run hello-world

Of course there is... of course... and of course that is what it is called.  Every tutorial online will then tell you that at this point you will see output like this:

Hello from Docker! This message shows that your installation appears to be working correctly.

Twaddle.  It won't.  It will actually do this:

----------------------------------------------------------------------------------

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9b157615502d: Pull complete
Digest: sha256:faa03e786c97f07ef34423fccceeec2398ec8a5759259f94d99078f264e9d7af
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm32v7)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

----------------------------------------------------------------------------------

Now, I bolded that middle chunk of text above to show that the actual output appears in the middle, AFTER it downloads the container image and then runs it.  If you do it again, you just get the "Hello from Docker!" output and the "I took these steps" text afterwards, without the pull.

And this is where my inherent distrust of the Docker repository starts itching and little red circling lights start flashing in the back / side of my head... I don't trust other people.  Full stop.  I just don't.  I am expected to trust that the container that was just downloaded is all safe and fine and dandy - it could friggin' contain ANYTHING inside of it.  Okay, a casual connection into it and a nosey around may look okay, but y'know, even I know how to write malware and embed it and execute it without you knowing - and there are proper experts out there who do that as a full-time job.  I don't trust the containers.  There, it is out of my system now.  Yes, I do know you can build your own containers and deploy them to the repository and download/use them - I've done that before.  Yes, it does give you an extra maintenance overhead, but heck, I'd rather have that overhead than some dodgy malicious code doing naughty things that it is not supposed to, all in the name of "being convenient".
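If, like me, you're the suspicious type, a couple of small habits narrow the problem a little (they don't solve it): pin the image to the exact digest you vetted rather than trusting ':latest', and have a nosey at the image metadata and layer history before running anything.  Using the digest from the hello-world output above as the example:

$ docker pull hello-world@sha256:faa03e786c97f07ef34423fccceeec2398ec8a5759259f94d99078f264e9d7af

$ docker image inspect hello-world

$ docker history hello-world

Pinning by digest only guarantees you get the same bytes every time - it says nothing about whether those bytes are benign, which is rather the point of the rant above.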

Okay, so what have we proven?  Well, we've proven we have the PLUMBING in place now to run Docker Containers on the Raspberry Pi device.

Question: why would you want to do that?

Answer: (and you don't need chatGPT to answer this, but it would be interesting to see the result?!) It is so that I can now run Node-RED, InfluxDB and Grafana within containers on the RPi - I'm not installing them onto the RPi itself; all the software that is needed is contained within, well, the container - that is where the code will execute.  The theory is that what is running inside the container stays inside the container - it cannot reach out and fiddle with the RPi that it is running on.  Although we can expose the app running inside the container, such as Grafana running on port 3000, and that can then be accessed over the network by any device that can reach the RPi.  On a similar note, the InfluxDB container running on port 8086 can be accessed by the Grafana container and the two can talk merrily away to each other.

The benefit: if I want to install a newer version of InfluxDB or Grafana, I can create a new container and re-point my app(s) to it (or even just use the same TCP/IP address & port and switch one off and the other on), then if all is working okay I can retire the old version; if any issues arise, stop the iffy container, reinstate the old container and investigate further.  It also means you can copy the container and its content to another location too... (lots of caveats with that one, but it is feasible).  It also means, as stated above, you can "build" a container using scripts in a CI/CD pipeline, where you pull down the base image, then connect to your source-control repo (eurgh... I will not say GitHub, you cannot force me*) and then copy the files into the container and do all the permissions settings, blah de blah de blah... etc...
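As a rough sketch of that swap-over idea (the container names and the version tag here are just made-up examples, not gospel): pull the newer image, stop the old container, start the new one on the same port, and keep the old one around until you're happy:

$ sudo docker pull grafana/grafana:9.3.2

$ sudo docker stop grafana

$ sudo docker run -d --name=grafana-new -p 3000:3000 -v grafana_data:/var/lib/grafana grafana/grafana:9.3.2

If the new one misbehaves, stop grafana-new, start grafana again, and you're back where you were while you investigate.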

Right, now let's install InfluxDB as a container and then do the same for Grafana - in fact let's set it up as an IoT Server with all the goodies on it all running in Docker containers.  There's a YouTube video for that - of course there is:
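While you hunt that video down, here's roughly the shape of it as plain docker commands - a sketch only: the image tags, volume names and data paths are assumptions, and on a 32-bit Pi OS you'll want to check which tags actually have arm32 builds (InfluxDB 2.x generally doesn't, hence the 1.8 tag below):

$ sudo docker network create iot

$ sudo docker run -d --name=influxdb --network=iot -p 8086:8086 -v influxdb_data:/var/lib/influxdb influxdb:1.8

$ sudo docker run -d --name=grafana --network=iot -p 3000:3000 -v grafana_data:/var/lib/grafana grafana/grafana:latest

$ sudo docker run -d --name=nodered --network=iot -p 1880:1880 -v nodered_data:/data nodered/node-red:latest

Because they all share the 'iot' network, Grafana can reach InfluxDB by container name at http://influxdb:8086 - that's the "talking merrily away to each other" bit from earlier - while the ports published with -p are what you and the rest of the network see.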



How to manage those containers on the RPi?

Use PORTAINER

$ sudo docker pull portainer/portainer-ce:latest


Once downloaded we need to run it with the following command:

$ sudo docker run -d -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
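A quick sanity-check that it is actually up and running before heading to the browser:

$ sudo docker ps --filter name=portainer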


Using the web interface: open a web browser to the RPi's IP address on port 9000.

Create the admin account, select [Docker] then the [Connect] button.

I'll do a followup article on how to use Portainer - after I've installed the other Containers!



-----------------------------------------------------------------------------------------------------------------------



*I have a beef about the usage of GitHub and I have commented on it before, so I'll keep the rant short.  I was baffled by the younger IT coders who were sold into the GitHub repo world.  "It is soooo convenient, you should be using it - it makes life so much easier, we need to use it everywhere on every project.  I just don't understand what your problem is?" - I can hear that being said to me by at least 10, maybe 12 different people over the years.  I believe that a few weeks ago the reason for my concerns was actually brought to light.

In fact, here is a great article to support my theory:

https://codewithandrea.com/articles/dart-flutter-chatgpt/

So what? What is my problem?

Well... how do you muppets think that openAI/chatGPT tooling was "trained"?  It didn't scan StackOverflow (well, if it did, then we're all safe), it scanned the "good code" that you all nicely placed into GitHub repositories, so it could work out patterns and consistent ways to put the syntax into the right place and the right order - I mean, just how many different ways can you write CRUD code? Not many...

I believe I looked into this back in 2017-ish, when I was pondering "Project O", which was a way to get machine learning algorithms to write code for me: I wanted a chatbot front-end to ask me questions and then use template/boiler-plate code to make and deploy the app.  I even built a prototype.  Now, this all sounds a bit 1990s/2000s and 4GL-ish, but this takes it a bit further.  The whitepapers I was reading (and still have somewhere) were from Microsoft Research and a few others, and the conclusion at the time was that there were not enough variants of code to train the models.  They needed a lot more code. A lot.  Welcome the GitHub repo explosion.  Oooooh, looks like chatGPT has just proven my point.  Also, Microsoft purchased GitHub... so the recommendation from that whitepaper was executed on.

Will it put coders / developers out of jobs? Probably not. Well, not straight away.  What it will do is influence the dumb.  The CTOs.  The people who are in IT and are f-ing clueless; the ones who should have been car sales people but somehow ended up in the IT world.  They will see this as an opportunity to belittle "the art of coding" again - like they did back in the late 1990s/early 2000s when they had the power & influence to outsource all the programming jobs to "cheaper resources" (hate that term).  They treated "the art of coding" the same way Victorian milling factories treated their workers: this isn't a skilled art form, they can ramp up the people, and production is just a matter of how many boxes the person wants to buy... the conveyor belt of coded applications.  Again, the theory works, in a slim set of scenarios.  However, every company wants to tweak / customise / personalise the applications to suit "the way they work", "the way they get things done"... and that will still require skills.

Can I see a future where chatGPT will write the boiler-plate code? YES. I know, I've already done that and proven it!

Can I see a future where the reduced workforce of coders will take that boilerplate code and modify it / enhance it? YES.

The bigger question is: where & who will that workforce be? Remember you get what you pay for and if you are happy to continually buy cheap boots then the fake-IT-salesmen will be okay... but we'll end up with an even dumber nation in the IT industry.  That covers a lot of countries.


Do I really care? I mean, I'll be out of the IT industry in the next 5-10 years, so what's my problem?  It is watching the slow downfall of "the art of coding": the art of being able to create something that can help someone, the art of starting with a blank canvas and making something that makes someone else's life better - the appreciation of engineering, architecture and syntax, all blended together to make a "thing".  For a fair price.  Artists historically have never really been well paid (I'm ignoring the modern-day nonsense), but our society pays an MP £400,000 to go and sit in a fake jungle and eat animal parts for the nation to chuckle over; that same nation is getting dumber and dumber and more and more reliant on "others" to do and maintain the very things that they use every day and that keep them alive every day... a time will come where no-one will know "how things work", they will just know "how to use them"... maybe that time is sooner than I think...   I blame the iPhone.


UPDATE:

I found this article to be rather amusing.  Even StackOverflow are telling people to STOP posting code spewed out by the algorithm.  Yes, it might "look okay"... but if you didn't write it, how do you know it is okay?  Would you trust it when it was posted by some time-pressured people who just want to get the stuff done and get paid?  They don't know you're going to use that code in the way you are going to... i.e. to keep a plane in the air?!  Okay, bit of an exaggeration, but you get what I mean... anyway, ENJOY: https://www.theregister.com/2022/12/12/chatgpt_has_mastered_the_confidence/

 


