Jetson Orin Nano - the AI super computer

 https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/

I saw one of these about a year ago & was going to order one, but they were expensive and mostly out of stock, so I thought I would wait a while. Then there was a shortage, so I thought I would wait a bit longer. Alas, time has zoomed by, and now I have finally had the opportunity to purchase one. I did purchase it from Amazon; it had to come from the US, so it was delayed by a couple of weeks, but import tax was included in the price, so it didn't add much. Talking of which: basically £250 - which is amazing.


What does this device offer?


What is the first thing to do once unpackaged?  Well, it is a pretty basic box: the device on one side and the power cables on the other. No instructions, nothing.  PERFECT! :-)

Geeks only allowed.


So, time to click the [Download SDK] button, which redirects you to the good old JetPack page (I remember doing this for the Jetson Orin cube that I was working with a couple of years back):

https://developer.nvidia.com/embedded/jetpack

which then leads onto here:

https://docs.nvidia.com/jetson/archives/r38.4/DeveloperGuide/index.html

which then leads to here:

https://docs.nvidia.com/jetson/archives/r38.4/DeveloperGuide/IN/QuickStart.html#preparing-a-jetson-developer-kit-for-use


And then, just like a numpty, I got an old USB stick in prep for installing onto USB, fired up Disks and, stupidly - as the USB registered as 240GB and my local HDD is about 240GB too - I deleted the first partition, thinking it was the USB, and then realised what I'd done.

I think I deleted the boot partition of my laptop.  I am now frantically backing up the HDD to a 2TB USB and then I will do the infamous "restart", where the laptop doesn't recover.  We shall see!


AND THEN IT ALL CHANGED - lol


Yes, I did fry the laptop.  Oh well, it was overdue a clean-slate refresh! I got a lovely message when I turned it back on:


A few hours later and, phew, I'm back to a working laptop.  A little bit snappier and more agile. Now, what were all those "little things" I had set up before? Like increasing the swapfile to 8GB, etc., etc.
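For my own runbook, the swapfile one goes roughly like this - a minimal sketch, assuming the standard Ubuntu /swapfile (adjust the path/size to taste):

sudo swapoff /swapfile             # stop using the current swapfile
sudo fallocate -l 8G /swapfile     # grow it to 8GB
sudo chmod 600 /swapfile
sudo mkswap /swapfile              # re-initialise it as swap
sudo swapon /swapfile              # switch it back on
grep swap /etc/fstab               # confirm it is re-enabled on boot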


And yes, I went backwards to Ubuntu 22, rather than the 25 I was previously on - just in case it was really needed for the NVIDIA SDK Manager. We'll see.




Back to the Jetson:

Just an image to show the size difference, Jetson on the left and RPi5 on the right:


I tried multiple SD Card burns - a whole variety, about 6 of them, from 32GB to 256GB cards.  They all "seemed" to burn okay, but I was getting nothing from the Jetson when plugging them in.

I took it back out of the case I had spent ages getting right and removed the camera. I even moved the plug to a different socket, as I'd heard that it was picky about power fluctuations.


I had borrowed a DisplayPort-to-HDMI adapter and tried connecting to 4, yes 4, different monitors, using a variety of HDMI and even VGA (via another adapter) outputs. Nothing, just a blank screen.



I genuinely thought I had fried the Jetson, or it was already fried.  I was about to give up when I found a really random mention of shorting out a couple of pins on the back - something to do with recovery mode (shorting the FC REC pin to GND forces the module into USB recovery mode, so a host PC can flash it).


I then decided to go the full-on NVIDIA SDK install route - and it actually detected the device.  Okay, that was progress. It went through its whole thing about building an OS to deploy, and because I had picked JetPack 6.2.2 it attempted to build out all of the additional software - which it failed on for some reason - so I stripped the choices right back to a basic install & deploy to the Jetson.  This appeared to work; switching from the GUI in the SDK Manager to the Terminal, I could see it was doing okay - it even reached a "Yes, completed".  It looked to be successful: it had built an Ubuntu Linux image and deployed it to the SD Card on the Jetson.  Woohoo!  Unplug time, remove the jumper shorting out the 2 pins on the back.  Blank screen.  Sigh.



I have a trillion and one screenshots to insert here about the steps above, just in case I need to do this again - and I am SURE I will.


Out of frustration, I even built / put together a double-bed ottoman bed as a distraction! Yes, that does all work, lifting up and down and all that.  Yes, it does have a nice mattress on top now and is very comfortable.



Why am I so sure I will need the instructions on how to do the installs again?

Well, in my frustration, and also for sanity reasons, I hit Amazon Prime kind of hard - lol. Yes, I ordered a new DisplayPort-to-VGA adapter cable (why not go direct and go for the least technical?!), a new power supply plug, and a new 1TB NVMe SSD - which I will install onto when it arrives, using the newfound steps identified above!

Then, hopefully, fingers crossed, I shall have the Jetson actually working.  I watched a YouTube video of a lady doing an RPi5 + AI HAT bake-off against the Jetson, and whilst the Jetson obviously won, she did have a mini rant about the Jetson being very user-unfriendly!  I thought she was kidding. Nope. I can categorically agree with her!  This is way more of a technical chore than it needs to be.

Which baffles me: all the other YouTube videos are either very well edited, or I'm being dumb, as they just press, click & voila, it's all magically working - absolutely no problems or issues at all.  I think the wonders of editing are coming into play, or, as I say, it's time to hang up my hat and step away.  However, I'm too stubborn to quit. I've gone up the steep learning mountain, I will get this device working, and it will be amazing, I am sure.  It needs to be, as it will be the BRAIN of my new side-venture ;-)


Cherry-pick what you need: https://www.waveshare.com/wiki/JETSON-NANO-DEV-KIT

Basically: install NVIDIA SDK Manager onto another laptop, then get it to build the Linux image and deploy the Jetson-specific software - it might time out every so often, but keep trying.


UPDATE:

Now, to get a local LLM running the simple & easy way, we can do the following:

Don't forget to follow the steps to add your user to the docker group, so you can run docker commands directly:

https://docs.docker.com/engine/install/linux-postinstall/
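For the record, the steps on that page boil down to these (log out and back in afterwards for the group change to stick):

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker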


Instead of:

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

you need to replace it with the following (on JetPack, GPU access in containers goes through the NVIDIA container runtime, hence --runtime=nvidia rather than --gpus=all):

docker run -d -p 3000:8080 --runtime=nvidia -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always -e ENV=dev ghcr.io/open-webui/open-webui:ollama


Yes, that does say 03:00am - it was a sad & lonely Valentine's Day/night yesterday; what else is an uber-geek going to be doing? :-)

On the first run of the container, open-webui needs to download about 30 files; this can take a random amount of time, but it does get there.
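You can watch it grinding through those downloads with:

docker logs -f open-webui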

Then, on another machine/laptop, put in the IP:port and create an admin account.  Then go to the admin settings and you can pull down an LLM model:


Then you can pick this as the default and away you go.
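If you'd rather pull a model from the command line, the :ollama image bundles the ollama binary, so something like this should work inside the container (llama3.2 is just an example model tag):

docker exec -it open-webui ollama pull llama3.2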

Remember to set ENV=dev, otherwise you cannot use the API or the docs.
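For the API itself, this is the sort of call that should then work - a sketch assuming Open WebUI's OpenAI-compatible endpoint, with <jetson-ip> and <api-key> as placeholders (the key comes from the account settings):

curl http://<jetson-ip>:3000/api/chat/completions \
  -H "Authorization: Bearer <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "hello"}]}'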


First access needs to load the model into RAM, which takes about 10 secs, but then it spews out tokens rather fast, and subsequent chats (until you hit the timeout, est. 5 mins, but it is configurable) are faster:
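That ~5 minute unload is Ollama's keep-alive. I believe it can be raised with an environment variable on the same docker run, e.g.:

docker run -d -p 3000:8080 --runtime=nvidia -e OLLAMA_KEEP_ALIVE=30m -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always -e ENV=dev ghcr.io/open-webui/open-webui:ollama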


Ha - I say that, and then it takes about 20-30 seconds to "think", but it does still stream quite fast.  I wonder if the Docker container is actually using the GPU? I need to experiment & check jtop when I am sitting next to the Jetson.
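Two checks for when I'm next at the machine - jtop comes from the jetson-stats package, and ollama ps reports whether a loaded model is sitting on the CPU or the GPU:

sudo pip3 install jetson-stats   # then reboot (or restart the jtop service)
jtop
docker exec -it open-webui ollama ps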


It's 5am now. I should probably attempt to get some sleep, before it starts to snow :-D


I can confirm that when I checked, it did look like it was CPU-only.  Hmmm... will need to re-think this.



UPDATE:

Now to set up the Waveshare IMX219-83 dual camera - a bit fiddly to swap over the cables to the right ones and fit them, and I had to check here to find where to plug in the SDA/SCL cables:


https://www.waveshare.com/wiki/IMX219-83_Stereo_Camera

https://www.waveshare.com/imx219-83-Stereo-camera.htm

I then found that you needed to run jetson-io.py to configure the drivers to use the right pins, etc.:

sudo /opt/nvidia/jetson-io/jetson-io.py

There are a bunch of text-based screens that have values to change.  I believed from the article above that the SDA was plugged into PIN 3 and SCL into PIN 5.  I was a bit dubious, as the output from jetson-io.py had no mention of PIN 5, just PIN 2 and 3.  It saved the configuration and needs a reboot; however, I'm downloading the nanoowl jetson-container, so will need to leave that for a bit!
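Side note for the runbook: jetson-io also ships a non-interactive helper that can list and apply the header configurations directly - the exact module name to pass to -n comes from the -l output (the name below is a guess, so verify it first):

sudo /opt/nvidia/jetson-io/config-by-hardware.py -l
sudo /opt/nvidia/jetson-io/config-by-hardware.py -n "Camera IMX219 Dual"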


Hmmm... I note the reference to the 40-pin header (top of the Jetson) and the 22-pin headers (around the back) - it seems like the 22-pin headers have settings for PIN 2+3 for the IMX219 Dual, but nothing happens when I set them up. sudo ls /dev/video* gives me nothing, which is annoying, as I swear blind I had an output from that at some point a few days ago.

.....Thinking......

I did find this: https://nvidia-jetson.piveral.com/jetson-orin-nano/how-to-enable-dual-imx219-camera-connection-on-jetson-orin-nano/

Then I went here and watched this video up until 1:38 - then I realised I'm a dumb a$$

https://jetsonhacks.com/2025/03/04/jetson-orin-nano-super-with-the-raspberry-pi-v2-and-v3-cameras/

I had connected the joining cables up the wrong way around on the Jetson board.  This is why it had worked for the other cameras before - when I refitted it inside the case, it felt twisted and wrong, so, blah blah blah...

Okay, I now have a Raspberry Pi Camera Module 3 fitted, and when I type:

ls /dev/video*

I get a response of:

/dev/video0

Couldn't get the camera to actually show anything; however, it did prove that it was connected.  (Worth noting: JetPack's stock drivers target the IMX219 sensor in the Camera Module 2, whereas the Module 3 uses an IMX708, which I believe has no out-of-the-box driver - that would explain detected-but-no-image.)

I shut down, unplugged, and put the IMX219 dual cameras back in. Sure enough, on boot-up the little red light came on for Camera 0.  Looking positive.  Just need to switch the drivers back over using jetson-io.py and it "should" be back in business.

YES!

I get a response of:

/dev/video0 /dev/video1

gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=30 ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! xvimagesink

shows an image of my ceiling! The cameras are pointing upwards!

Replacing with sensor-id=1 does the same thing.  Phew!!! Woo-hoo! Well, that was fun & totally proves that USER ERROR is a real thing. lol.
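While I'm here, a handy variant for the runbook: grabbing a single JPEG snapshot instead of opening a window, using nvjpegenc (the Jetson hardware JPEG encoder). Untested beyond a quick fiddle, so treat it as a sketch:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080' ! \
  nvvidconv ! nvjpegenc ! filesink location=cam0.jpg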

BTW - the connector cables for the SDA/SCL appear to be red herrings? Oh no - they are for the onboard axis sensor (an IMU), i.e. movement: if the camera moves about, you can get the details.  Not needed for now, so I will leave them unconnected.





Talking of jetson-containers, that is where I wanted to test & use the CSI camera.

To set that up, I did the following:

git clone https://github.com/dusty-nv/jetson-containers.git
cd jetson-containers
bash jetson-containers

However, that just outputs "invalid command".  Well, no surprises there :-)

Then running:

bash jetson-containers run --workdir /opt/nanoowl $(autotag nanoowl)

I see an error: autotag : command not found

Simple fix.  Guess who didn't run the obvious:

./install.sh

This downloads all the Python modules and libraries that are needed.  Seems obvious now, but it is kind of a missing step in EVERY web page that I visited.

Now, a re-run of:

bash jetson-containers run --workdir /opt/nanoowl $(autotag nanoowl)

and success - a docker run is executing as it should; it is pulling down the nanoowl image.

I hope the 64GB test SD Card that I'm using has enough free space?!?! And that is why I create these articles for myself. When it fails due to things like lack of space, I can swap in the 256GB SD Card and repeat the steps; alternatively, I can remove the SD Card altogether and boot to the 1TB NVMe SSD that I have inside the Jetson Orin Nano (now we're talking!), but I'll do that right at the end, once I've used the SD Card for trial & error.  Not my first rodeo, y'know ;-)

Why all this effort for nanoowl?  Well, because it then allows you to do "smart stuff" like this:

The model it uses will dynamically identify the objects that are "seen" and give a textual description. I need to test it running, but apparently it is very fast and quite accurate - it also allows custom models to be trained.  That is the fun part I want to investigate.

https://github.com/NVIDIA-AI-IOT/nanoowl
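For future me, a sketch of driving it from Python, based on my reading of the repo's README - treat the exact class and call signature as assumptions to verify against the repo:

import PIL.Image
from nanoowl.owl_predictor import OwlPredictor

# load the TensorRT-optimised image encoder engine built during setup
predictor = OwlPredictor(
    "google/owlvit-base-patch32",
    image_encoder_engine="../../data/owl_image_encoder_patch32.engine",
)

image = PIL.Image.open("test.jpg")   # any test image
text = ["an owl", "a beak", "a wing"]

# encode the text prompts once, then detect them in the image
output = predictor.predict(
    image=image,
    text=text,
    text_encodings=predictor.encode_text(text),
    threshold=0.1,
)
print(output)   # boxes, labels and scores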

It does prediction, as shown above - but it also does TREE prediction: it detects the first object and then analyses INSIDE that object. So if it were to see an OWL, for instance, it would then identify the beak and the wing - you get the idea:


And yes, it will do the normal analysis, like this - you can see the dynamic text output and the TREE concept; the usage of the brackets [ ] indicates this:


Whilst waiting for the nanoowl to download, I stumbled over this great little GitHub repo:

https://github.com/hirankrit/stereo


Now, this is why the usage of Docker containers is really beneficial. I was setting up this Jetson Orin Nano to test out replicating a piece of work from a work colleague, who made their solution as a native installation; it required a LOT of juggling, installing, down-grading, up-grading, re-installing, standing on one leg, pulling a finger, etc. to get their code to run, very specifically.  That it does run is amazing; however, it uses pytorch, tensorrt, transformers, etc., so it would probably clash if I were to attempt a native install of nanoowl - hence the beauty of jetson-containers.

They decouple this Python versioning hell - which I thought we had got away from after the Windows 95 .dll versioning hell? I guess history does repeat itself in a cycle.

I could work to help make the colleague's work fit inside a Docker container, and that would help solve the problem, but right now that is time & energy I don't want to spend.

I could just get "yet another" 256GB SD Card - which I do actually have lying around - and do a fresh SDK Manager installation, skipping the installing/setup of what I did to get their code to run. Darn it, I did it on the NVMe SSD too, so I'll have to wipe that & start again - but no sweat; again, that is why this type of article exists: I can just treat it as a runbook to repeat :-)

YEP - hit a "no space left on device" when attempting to extract a layer. Looks like it should be on the SSD, ideally.

Okay - repeated the above onto the SSD!

I run the bash jetson-containers run command, it executes, and it drops me into the running container itself.

right, where was I about 4 hours ago?!?!

Sigh. Well, that was a bust - it's a 30GB Docker container, and it appears to be isolated. It needs some time & effort to get working, which I will do, just not at this moment in time.  I'm drained.

I'll see if I can just get the basics worked out with the camera and some python code.


Of course, a good old cup of tea & a biscuit gave enough time to ponder what could be done.  Well, we have the docker run command output, so let's copy it and tweak it!

I removed the --network=host and added in -p 7860:7860.

Then, when inside the container, attempting to run the pip install of aiohttp fails - but I see why: it is referring to a URL that does not resolve.  The .dev site has been replaced by .io.

So, as stated here, a quick change and voila, it downloads:

pip install --index-url https://pypi.jetson-ai-lab.io/jp6/cu126 aiohttp


Now, let's see if that demo can run again?

Within the container if I run:

ls /dev/video*

I do indeed get the cameras available, so that is great news.

Well, blow me down & call me George.

python3 tree_demo.py --camera 0 --resolution 640x480 ../../data/owl_image_encoder_patch32.engine

downloaded some stuff and then executed!

It states that it is now listening on 0.0.0.0:7860

Going to the host Jetson and putting in localhost:7860, I see the request hit the container debug output,

and I see..... a big empty space in the browser, BUT I do see it say NanoOWL, the words Camera Image, and the [a face[an eye, a nose]] text.

That just might be a browser rendering issue - but it is looking more positive than before!?! Accessing from Chromium on a different laptop, I see:


Okay, so this could now be manageable! Let's put the debugging hat back on.

Y'know, I spend 95% of my time setting things up and figuring out how to get them working, rather than 95% making stuff, like I used to do!  Once I get a stable setup, I'm going to switch that around.


I found a comment:

"Looks like it’s USB camera only."

CRY! CRY! CRY! if that is true?!?!

which led me to this comment:

"The tree_demo.py program just uses OpenCV to interface to a USB camera and does not support a CSI camera.

One thing you can try is to use the --csi2webcam option of jetson-containers, as explained in this tutorial."

Okay, so maybe less crying.

Looks like a change to the jetson-containers command; a different parameter is used: --csi2webcam

jetson-containers run --csi2webcam --workdir /opt/nanoowl $(autotag nanoowl)

Let's go back around the loop again.


That command seemed to be valid-ish. The output shows:

CSI to Webcam conversion enabled.

CSI capture resolution: 1640x1232@30

CSI Output resolution: 1280x720@30

[Error] v4l2loopback-dkms is not installed.

It states to:

sudo apt update && sudo apt install v4l2loopback-dkms

At this point, what is there to lose?!

Okay, well, I didn't tweak the docker run command this time - the pip install, as updated above, worked okay - so let's see if it made any difference.

python3 tree_demo.py --camera 0 --resolution 640x480 ../../data/owl_image_encoder_patch32.engine

It says it is listening on the port.

and....... nope :-( Same as before.  Sigh.

Although I did notice in the debug output that it assigned video2 and video3 as the V4L2 (loopback) devices.


Right. I resorted to using Claude GPT to investigate further.

It turns out that it doesn't like the python3-opencv that is being used, as it is not compiled with GStreamer support.
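A quick way to check is to grep the OpenCV build information:

python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer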

Hmmmm.... well, I have options to "fix this", but if I do, it will most likely break the current code that I need to run from my work colleague.

I might do this on one of the SD Cards - just to test it out and see.  The command line works fine; it is just the Python build of OpenCV that's the problem.  As I say, I can get streaming video of both cameras working fine from the command line, but I want to do the same as NanoOWL above from Python code - a sketch of what I'm aiming for is below.
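A minimal sketch of that, assuming an OpenCV build that does have GStreamer support - the pipeline mirrors the gst-launch-1.0 test from earlier, just ending in an appsink so OpenCV can read the frames:

import cv2

# same Argus camera source as the command-line test, converted to BGR for OpenCV
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("pipeline failed to open - is this OpenCV built with GStreamer?")

ok, frame = cap.read()           # grab a single frame to prove it works
if ok:
    cv2.imwrite("frame0.jpg", frame)
cap.release()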

That's a task for later in the week!


This was a journey.  A long journey.  A tiring journey.  But, I am stubborn :-D



