Monday, 9 November 2020

Rock Pi X

 okay, so I've been a Raspberry Pi fan since v1 or v2 and have done quite a bit with them over the past 9-10 years.

However, I just stumbled across this Pi-sized SBC (Single Board Computer):

It has one major difference: instead of an ARM processor, it has an x86 processor.  Yes, an x86.

It also sounds like it will have an NPU attachment soon that will allow for AI/ML additions.  Now, that sounds interesting.

This sounds like a nice little device that will allow me to build and compile C code on my main machine and execute it both there and on this little device, without cross-compiling.

Okay, so that's not a huge selling point on its own, but it does start to open up some options - with the added bonus that existing x86-based software will just run on this board, no emulators or simulators needed.
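To make that concrete, here's the sort of trivial test I have in mind - just a sketch, and the ARM cross-compiler name below is only an example of a typical toolchain:

    /* hello.c - trivial test program */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from the Rock Pi X\n");
        return 0;
    }

    /* For an ARM board I'd normally cross-compile on the desktop with
     * something like (toolchain name is just an example):
     *     arm-linux-gnueabihf-gcc -o hello hello.c
     * On an x86 board, the exact same build I run on my main machine works:
     *     gcc -o hello hello.c && ./hello
     */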

Check out HERE for more info.


Of course I've just ordered one....will see how it works in a few weeks time....

Monday, 2 November 2020

Geeky UI

Fancy changing your UI on Linux or Mac for some futuristic, sci-fi-esque screen....

that is also fully functional!

Then check out eDEX-UI.

Features:

  • Fully featured terminal emulator with tabs, colors, mouse events, and support for curses and curses-like applications.
  • Real-time system (CPU, RAM, swap, processes) and network (GeoIP, active connections, transfer rates) monitoring.
  • Full support for touch-enabled displays, including an on-screen keyboard.
  • Directory viewer that follows the CWD (current working directory) of the terminal.
  • Advanced customization using themes, on-screen keyboard layouts, CSS injections. See the wiki for more info.
  • Optional sound effects made by a talented sound designer for maximum hollywood hacking vibe.


Your inner geek knows you want to download this.

Saturday, 24 October 2020

Who needs all the GPUs for ML training?

So.... some great salesperson at Nvidia took a look in the warehouse one day and said, "how come we have so many GPUs on the shelf? who ordered all these? was it the same person who ordered all the oil drums in Half-Life 2?... how the hell am I going to sell all these?"

...and thus, the AI/ML training requirement using GPUs was born.

okay, so that could be a total fabrication of the truth (we'll never know) and I'm sure I'll get 1001 comments telling me the real history (psst: I really don't care, and if you keep reading you'll figure out why).


I was at work the other day and the subject of training ML models came up.  Some very clever Data Scientists were telling me that training the models I wanted would require exclusively using Cloud servers, as they have the GPU scalability required to do the training in a decent amount of time.  I wasn't convinced.

Also, we cannot use the "Cloud" (i.e. someone else's servers); we're offline and OnPrem.  Yeah, the 1990s are back in trend again.  I asked for a spec of what we would need.  I fell off my chair when the equivalent of a Cray supercomputer spec was placed before me.  Really? REALLY? Nope.  I don't believe it.  I'm willing to accept that running the ML training on OnPrem kit is doable and that the sacrifice is "time": it's just going to take 7 days to execute, but we'll get the model, and then it's just a case of deploying it and running against it.

This bugged me.  I also wanted to perform the ML training on the deployed hardware that we have.  Basically, a ruggedised server with 16 CPUs, 128GB RAM and 3TB of HDD.  Not bad for something the size of a laptop that fits into a rucksack.  I want to run the ML training on that piece of kit.  I was laughed at by the Data Scientists.  It's a bit unfair on them as they report to me, so technically I'm the boss.  So I got bossy.  I said, "we're going to make this happen.  I don't know how, but we are."  One thing they've learnt, working with me over the past year or so, is.... I usually figure it out and we all bask in the glory.

As I do... I keep many fingers in many different pies, I observe a lot of information and keep threads on what might be useful, and I rarely disregard information that is not relevant today, as it might come in handy later.  (Yes, my brain often complains that the HDD is full and I need to delete files.)

So, it was with great glee that I read this news article:

https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/simple-neural-network-upgrade-boosts-ai-performance


Whilst casually reading through this article, it started to justify quite a lot of the debates I've been having with IT people in general, i.e. modern-day developers relying on libraries and frameworks.  Whilst this is great for acceleration and re-use, it can start to bloat your code and execution if you're only using, let's say, 2 functions out of the 3000 in a library.  If the library is not written well, you end up loading all that code into memory just for the 2 function calls you're making... kiss goodbye to a big chunk of RAM for no good reason, but hey, RAM and HDD are cheap nowadays (still no excuse for bloat).

Anyway, this links back to the GPU niggle that I've had for a while.  Why do we need a Cray supercomputer to do some "simple" training?  Going through the above article, as it states, by "fixing" something at the core framework level, a huge performance increase was made and everyone benefits.  Awesome.

No, I'm not one of those people who has to make everything from scratch; I appreciate the benefits of frameworks & libraries, but you have to balance out what they offer against what you really want to achieve and whether you are willing to compromise or not.  Time is usually the main reason.  You need to put something together fast (because that is what IT has now become: make it quick, fail fast, show me a PoC working, that's great, now put it into production..... errrr, it's a PoC, it wasn't built for production purposes... you know the thread of that discussion).
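Back to the bloat point for a second.  Here's a toy C sketch of the idea - nothing to do with any particular ML framework, just an illustration of how you can at least tell the toolchain to throw away code you never call:

    /* bloat_demo.c - only two of these helpers are ever called from main().
     * Built normally, the unused one still ends up in the binary; built with
     * per-function sections, the linker is allowed to throw it away.
     */
    #include <stdio.h>

    int used_one(int x)     { return x + 1; }
    int used_two(int x)     { return x * 2; }
    int unused_heavy(int x) { return x * x * x; }   /* stand-in for the other 2,998 */

    int main(void)
    {
        printf("%d\n", used_two(used_one(20)));
        return 0;
    }

    /* Normal build keeps everything:
     *     gcc -O2 bloat_demo.c -o demo
     * Per-function sections plus linker garbage collection drops the dead weight:
     *     gcc -O2 -ffunction-sections -fdata-sections -Wl,--gc-sections bloat_demo.c -o demo
     * The same trick helps with statically linked libraries, provided they were
     * built the same way; with a dynamic library the whole thing gets mapped in
     * regardless of how little of it you call.
     */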

Anyway, whilst reading the above article, my inner ego was getting very smug as I was going to use this as a reference in future debates to justify why I make certain decisions in the work place.

Then, as if the universe was listening to my thoughts, this article fell into my lap:

https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/powerful-ai-can-now-be-trained-on-a-single-computer

Well, look at that!  "Necessity is the mother of invention" - totally agree!  A student didn't have access to the Cray supercomputer style kit to do the ML training, so he looked for alternative ways to achieve what was needed and approached the problem in a different way.

Okay, he ended up using a "beefy" laptop with 36 CPUs, 1 GPU (yes, 1.. ONE...) and it performed exceedingly well!  There... I now have my reference to back up my desire to do the same thing on the kit I have to hand.  I will make it work and now I'm inspired even more to do so.

Here's a link to the GITHUB repo for the SAMPLE FACTORY: https://github.com/alex-petrenko/sample-factory

Basically, it's an asynchronous reinforcement learning tool; it's not going to be useful for supervised or unsupervised learning, but it's a start.  A very good start.

In a nutshell, the simple answer (as with most answers, it all depends on the question being asked) is that a lot of the problems encountered come down to people using Python and reading/writing/processing huge datasets from disk rather than caching them in memory.
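The boring, old-school fix is to pay the I/O cost once and keep the data hot.  A minimal C sketch of the idea (the file name is a placeholder and the train_on call is just a stand-in for the real work):

    /* cache_once.c - load the dataset into RAM once, then iterate over it
     * repeatedly, instead of re-reading the file on every training pass.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("dataset.bin", "rb");          /* placeholder file name */
        if (!f) { perror("fopen"); return 1; }

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        unsigned char *data = malloc(size);            /* the in-memory cache */
        if (!data || fread(data, 1, size, f) != (size_t)size) {
            fprintf(stderr, "failed to load dataset\n");
            return 1;
        }
        fclose(f);                                     /* disk is done with */

        for (int epoch = 0; epoch < 10; epoch++) {
            /* train_on(data, size);  - every pass hits RAM, not the disk */
        }

        free(data);
        return 0;
    }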


The Data Science "lazy" answer of "you need hundreds of GPUs and millions to do that" is the same style of answer as "I'll just add these 10 frameworks & libraries to my code, RAM is cheap, we can just up it from 64GB to 128GB with the click of a button"..... no no no no...  These are the people who have never written code in C.  They have never worked on limited-capacity devices, where 1MB (yes, MB not GB!) of RAM was a luxury and 640KB of that 1MB was taken by the OS, so every line of your code had to be justified and every variable that took up memory had to be efficiently cleaned up in order to keep your code from crashing the device.  That mindset is becoming part of a dying breed and it's a shame, as that mindset is what got us all to where we are today.

I want to get to tomorrow though and to do that, sometimes you have to go back to yesterday and look at how things were done and apply the techniques in a new way to accelerate the future.

I'm looking forward to next week.....

Unlike the salesperson from Nvidia.... who, hopefully, hasn't asked for 1,000,000 GPUs to be made so they can make a killing out of AI/ML training.

 

Thursday, 22 October 2020

HuskyLens - AI vision for Arduino/Raspberry Pi

I had a PIXYMON a few years back (it was very expensive at the time) and whilst it did do the job, once the power was removed it "forgot" everything you'd trained it on, so it was a bit of a chore.  I still have it someplace.....  but now, the future has arrived and it's better (of course).


Purchase a HUSKYLENS yourself from DF ROBOT (or via Amazon)

What is it?


HuskyLens is an easy-to-use AI Machine Vision Sensor. It is equipped with multiple functions, such as face recognition, object tracking, object recognition, line tracking, color recognition, and tag (QR code) recognition.

  • HuskyLens adopts the new-generation specialized AI chip Kendryte K210. This chip is 1,000 times faster than the STM32H743 when running neural network algorithms.  With that level of performance, it is capable of capturing even fast-moving objects.

  • With the HuskyLens, your projects have new ways to interact with you or the environment, such as interactive gesture control, autonomous robots, smart access control and interactive toys.  There are so many new applications for you to explore.

Compatible with Arduino, RPi, etc.

Through the UART / I2C port, HuskyLens can connect to popular main control boards like Arduino, micro:bit, Raspberry Pi and LattePanda, helping you make very creative projects without having to deal with complex algorithms.

Check out the tutorials and specs here:  https://wiki.dfrobot.com/HUSKYLENS_V1.0_SKU_SEN0305_SEN0336


Why am I now interested (again)?  Well, apart from the obvious "it's AI stuff", I do have a real use-case: HERONS!  I lost a load of fish from my garden pond recently (as I did a few years back) and I'm about to re-stock with about 20 new fish this weekend..... so I now want to set up a HERON-ALERTER!

Detect movement, track, recognise - if it's a Heron, deploy the Radio controlled TANK!
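The control loop I have in mind is dead simple.  This is a sketch only - every function in it is a hypothetical stub that I'd still have to wire up to the HuskyLens and the tank for real:

    /* heron_alerter.c - sketch of the detect -> classify -> respond loop.
     * Every function below is a hypothetical stub for illustration only;
     * the real versions would talk to the HuskyLens and the RC tank.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool motion_detected(void) { return false; }  /* e.g. PIR sensor or frame differencing  */
    static bool target_is_heron(void) { return false; }  /* e.g. check the HuskyLens learned-object ID */
    static void deploy_tank(void)     { }                /* drive the RC tank towards the pond      */

    int main(void)
    {
        for (;;) {
            if (motion_detected() && target_is_heron()) {
                printf("Heron detected - deploying tank!\n");
                deploy_tank();
            }
            sleep(1);   /* poll once a second */
        }
    }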

Yes, I'll probably end up adding one to the TANK also, so it'll start to become autonomous......

Monday, 5 October 2020

Orquestra, a new Quantum development platform

Well.....to be able to use it, you need to be able to code for it...

https://quantumzeitgeist.com/boston-based-quantum-startup-zapata-releases-orquestra-a-new-software-platform/


Despite skeptics having doubts about the 'realness' of the emerging field of quantum computing, many strides forward are being made even now. IBM recently released an ambitious hardware-based roadmap, and Zapata, a Boston-based quantum computing startup, announced the commercial release of Orquestra.

It is an advanced software platform used to create repeatable quantum and quantum-based workflows and algorithms and can be used across industries and cases. The process involves a quantum engine that systematically groups together information and resources even when they are spread across both quantum and classical devices.

Orquestra is designed for quantum use cases such as writing, manipulating, and optimising quantum circuits as well as running these across various devices. These devices include quantum computers, simulators, and HPC resources. Orquestra provides the following functions:

  • Extensive quantum algorithm repositories supplying optimised open-source and exclusive algorithms.
  • The ability for users to combine code from various quantum libraries within its workflow management system.
  • Running and benchmarking across many different quantum and classical backends.


I'm still on the fence with this one.  Whilst I think it's great having quantum computers - they sound really CyberPunk and cool and futuristic - I've not seen a real-life useful use for them yet (ignoring security encryption for a moment), and therefore I'm also struggling with the coding/programming of one to do something useful.  I can imagine how I would use one, for instance to understand someone speaking in real time - determining the words, context and meaning of what they are saying - as you could do all that work in parallel and follow multiple potential paths at once.  But, I don't know.  Maybe it won't be used for Machine Learning (ML) or Artificial Intelligence (AI) stuff at all, but for managing the autonomous part of self-driving cars? Or some geeky backend auto-filing system for documents or something.

We were all thinking, "Space travel", but hmmmm..... we'll see :-)


Thursday, 30 July 2020

Conversational design tools and resources

There are a ton of tools out there, everyone has their "go to" tools, and the occasional new one pops up when you talk to someone else, but here's a website that collects them all together, categorises them and lets you see the new ones that come onto the scene.

Nifty reference site:  https://cui.tools/

CUI Tools is a directory of the best conversational design tools and resources to help you master your next voice or text-based bot project.

Made by Olivier Heitz.