I'm a geek, doing geeky things, and I often write short articles to remind me of what I've done and to potentially help other geeks out if they have similar interests...
Sometimes we have to go back, in order to go forward
The amusing thing is that this morning I was watching a video about Natural Language Processing (yeugh, yes, with Python), but it was using TensorFlow. Anyway, the essence of the "new" video was basically doing the same as that chap doing the piping in Unix! So many of the little problems raised in the above video still have not been solved, even with all the "advances" we've made in technology.
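For context, the Unix piping I mean is the classic word-frequency one-liner (tr -sc 'A-Za-z' '\n' | sort | uniq -c | sort -rn). Here's the same idea sketched in a few lines of Python; the sample sentence is just mine for illustration:

```python
# The classic Unix word-frequency pipeline, the Python way:
# split on non-letters, lowercase, count, sort by count descending.
import re
from collections import Counter

text = "To be, or not to be, that is the question"
words = re.findall(r"[A-Za-z]+", text.lower())
for word, count in Counter(words).most_common(3):
    print(count, word)
```

Decades on, this is still the heart of an awful lot of "NLP": tokenise, count, rank.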
There are also a few phrases in the above video that have given me food for thought in relation to "How do we build a Quantum computer operating system?".
Well, this is a turn up for the books and something long overdue for the general public: find out more here: https://opal.withgoogle.com/landing/

Now, you could just go off and make a gold-rush effort, and I highly encourage you to do so - THIS is a game-changer. It really is. As this is "my universe", I'm just going to pull it back to me (yes, very narcissistic, I know, but I don't care). Here are a couple of articles that I just want to call attention to:

https://tonyisageek.blogspot.com/2020/02/a-week-off-worksort-ofproject-o.html
https://tonyisageek.blogspot.com/2020/12/project-o-my-machine-programming-project.html

The basic premise was to use NLP (natural language processing) so that you, as a non-technical person, could describe what you want an application to be and do. Then, through the power of automation, it would generate a starter-for-10 application from templated code and deploy it onto a Cloud environment, so that you could then see it / touch it / run i...
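To make the "describe it, get a starter app" premise concrete, here's a toy sketch of the idea: match keywords in a plain-English description onto a code template and fill in the blanks. Everything here (the templates, the keywords, the names) is invented purely for illustration; the real Project O / Opal machinery is far more involved.

```python
# Toy sketch of NLP-driven app generation: pick a starter template
# based on keywords in the description, then fill in the app name.
# Templates and keywords below are made up for illustration only.

TEMPLATES = {
    "api": 'print("Starting REST API: {name}")',
    "web": 'print("Serving web app: {name}")',
}

def generate_starter(description: str, name: str) -> str:
    """Pick a template based on keywords found in the description."""
    text = description.lower()
    for keyword, template in TEMPLATES.items():
        if keyword in text:
            return template.format(name=name)
    return f'print("Generic app: {name}")'

code = generate_starter("I want a simple api that lists my books", "booklist")
print(code)
```

The real trick, of course, is the NLP that turns free-form intent into template choices and parameters - that's the part tools like Opal now do for you.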
Been absent a while; I've had many things to be focused on. However, this recent little nugget needed to be documented & shared, mainly because I did this on my personal laptop & I need to recreate it somewhere else, and this mechanism just makes that easier - it might help someone else out too.

Right, so what am I talking about? About a year ago I was doing some new stuff with LLMs and RAG (ingesting your own documents as the data to use, rather than the LLM training data) and it was okay-ish; it did the job. Zoom forward a year and obviously things have moved on, quite a bit. The RAG tools & code have improved significantly. It still takes time to ingest, though - I haven't found a way to speed that part up. Well, I'm focused on offline/airgapped/on-premise solutions; it could probably be faster using a Cloud SaaS offering, but that is of no interest to me, so I'll accept the time it takes.

What are the steps involved? Get a bunch of documents, upload them to be...
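The ingest-then-retrieve shape of those steps can be sketched in plain Python. This is a deliberately naive, stdlib-only sketch - a real offline RAG stack would use embeddings and a vector store, and the sample documents here are just made up for illustration:

```python
# Minimal offline RAG-style retrieval sketch (stdlib only):
# "ingest" = chunk the documents; "retrieve" = find the chunk that
# best matches the question, here by simple word overlap.
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks (the ingest step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q = Counter(question.lower().split())
    return max(chunks, key=lambda c: sum((q & Counter(c.lower().split())).values()))

docs = ["The backup job runs nightly at 2am and writes to the NAS share.",
        "To rebuild the index, stop the service and delete the cache folder."]
chunks = [c for d in docs for c in chunk(d)]
print(retrieve("when does the backup job run", chunks))
```

The retrieved chunk is what gets stuffed into the LLM prompt as context - that's the whole RAG party trick. The slow part in real tooling is the embedding during ingest, which is exactly the bit I haven't managed to speed up offline.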
During the past, well, let's think, 10+ years, I've been involved in Machine Learning projects (okay, they call it "AI" now, but that's an argument for a different time) that require lots of data to be ingested / used to train the Machine Learning model.

For instance, if it is text, we need lots of snippets of text that can then be categorised, so that the model "knows" that the 10 words grouped together in that context have a "meaning", and that meaning is labelled - this helps with questioning later on. It also helps to extract the entities and relationships between the words to give more context. Here's a simple example using the spaCy tooling: https://www.labellerr.com/blog/image-annotation-services-and-data-labeling-for-ai-models/

If it is imagery, we need lots of images with the segmented parts identified, usually with a bounding box to identify the elements / objects inside the image, so, again, this can b...
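To show what that labelled text data actually looks like, here's a tiny example in the character-span (start, end, label) format that spaCy-style NER training uses. The sentence and its entity labels are invented by me for illustration:

```python
# Labelled training data in the (start, end, label) character-span
# format used for NER-style training (as in spaCy). The sentence and
# the entity labels are invented for illustration only.
TRAIN_DATA = [
    ("Tony moved the project to IBM Cloud last year",
     {"entities": [(0, 4, "PERSON"), (26, 35, "ORG")]}),
]

def extract_entities(example):
    """Pull the labelled surface strings back out of the span offsets."""
    text, annotations = example
    return [(text[start:end], label)
            for start, end, label in annotations["entities"]]

print(extract_entities(TRAIN_DATA[0]))
```

Multiply that by thousands of examples and you have a training set; bounding boxes on images are the same idea, just with pixel coordinates instead of character offsets.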