
Talk about compute and community and where things are at.

I recently came across tinygrad, a small but powerful neural network framework that has an OpenCL backend target and can run the LLaMA model. I’ve been looking out for rusticl workloads, and this seemed like a good one: I could jump on the AI train and run an LLM in my house! I got it going on my Radeon 6700XT with the latest rusticl, using radeonsi with the LLVM backend, and I could slowly interrogate the model with a question and it would respond. I’ve no idea how performant it is vs ROCm yet, which seems to be where tinygrad is more directed, but I may get to that next week.

While I was there, though, I decided to give the Mesa ACO compiler backend a go. It’s been tied into radeonsi recently, and I’d done some hacks before to get compute kernels running. I reproduced those hacks on the modern code and gave it a run. tinygrad comes with a benchmark script called benchmark_train_efficientnet, so I started playing with it to see what low-hanging fruit I could find in an LLVM vs ACO shootout.
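If you want to try the same path, the sketch below shows roughly how a tinygrad workload ends up on rusticl (a minimal sketch, not my exact session; the matmul is just a stand-in workload). RUSTICL_ENABLE=radeonsi is Mesa’s knob for exposing radeonsi through rusticl, and GPU=1 selects tinygrad’s OpenCL backend.

```python
# Minimal sketch: push a tinygrad workload through rusticl's OpenCL backend.
# Both env vars must be set before tinygrad loads the OpenCL library,
# so set them before importing tinygrad.
import os
os.environ["RUSTICL_ENABLE"] = "radeonsi"  # expose radeonsi via Mesa's rusticl
os.environ["GPU"] = "1"                    # select tinygrad's OpenCL backend

from tinygrad.tensor import Tensor

# Stand-in workload: one big matmul, compiled to an OpenCL kernel and
# executed on the GPU through rusticl.
a = Tensor.randn(1024, 1024)
b = Tensor.randn(1024, 1024)
print((a @ b).sum().numpy())
```

The EfficientNet benchmark runs the same way from a tinygrad checkout, something like GPU=1 RUSTICL_ENABLE=radeonsi python3 examples/benchmark_train_efficientnet.py (the exact path depends on the tinygrad tree you have).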
