Who needs all the GPUs for ML training?
So... some great salesperson at Nvidia took a look in the warehouse one day and said, "How come we have so many GPUs on the shelf? Who ordered all these? Was it the same person who ordered all the oil drums in Half-Life 2? ...How the hell am I going to sell all these?" ...and thus, the AI/ML training requirement using GPUs was born.

Okay, so that could be a total fabrication of the truth (we'll never know), and I'm sure I'll get 1001 comments telling me the real history (psst: I really don't care, and if you keep reading you'll figure out why).

I was at work the other day and the subject of training ML models came up. Some very clever Data Scientists were telling me that to train/learn the things I wanted to do would require exclusively using Cloud servers, as they have the GPU scalability required to do the training in a decent amount of time. I wasn't convinced. Also, we cannot use the "Cloud" (i.e. someone else's servers), we're...