This talk, given by David Patterson (a legend in computer architecture and one of the people who helped create RISC-V at UC Berkeley), is an excellent (and accessible) introduction.
Yeah, I’m working in embedded ML, and it’s an insanely exciting time. It seems like every day brings more microcontrollers and single-board computers with dedicated AI accelerators, many of them RISC-V. One of the next steps (in my opinion) is finding a good way to program them that doesn’t involve C/C++ (very fast, but so painful to do AI with) or Python (slow unless it’s wrapping underlying C code, and unsuitable for microcontrollers). In fact, that’s exactly what I’m working on right now as a side project.
What’s also cool is that RISC-V promises to be the one instruction set architecture to rule them all. So instead of having PCs on x86, phones and microcontrollers on ARM, and then all sorts of custom architectures like DSPs (digital signal processors), NPUs, etc., we could just have RISC-V with a bunch of open standard extensions. Want vector instructions? Well, here’s a ratified open standard (the “V” extension). Want packed SIMD? There’s a draft “P” extension working through the same process.
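For a taste of what programming against the ratified “V” extension looks like, here’s a minimal sketch in C using the RVV intrinsics (names follow the v1.0 intrinsics spec with the `__riscv_` prefix; older toolchains shipped unprefixed names, so treat exact spellings as toolchain-dependent):

```c
// Element-wise float add using the RISC-V Vector ("V") C intrinsics.
// Build (assuming a recent clang with RVV support):
//   clang --target=riscv64 -march=rv64gcv -O2 -c vadd.c
#include <riscv_vector.h>
#include <stddef.h>

void vadd_f32(const float *a, const float *b, float *out, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);            // lanes handled this pass
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl); // load a[0..vl)
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl); // load b[0..vl)
        vfloat32m1_t vc = __riscv_vfadd_vv_f32m1(va, vb, vl);
        __riscv_vse32_v_f32m1(out, vc, vl);             // store out[0..vl)
        a += vl; b += vl; out += vl; n -= vl;
    }
}
```

The neat part is that this is vector-length agnostic: `vsetvl` asks the hardware how many elements it can process per iteration, so the same binary runs on a chip with 128-bit vector registers or 4096-bit ones, with no per-width code paths like SSE vs. AVX.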
And all these standards will make it so much easier for the compiler people to provide support for new chips. A day not too long from now, I imagine it will become almost trivial to compile tons of scientific, numerical, and AI workloads straight onto RISC-V vector instructions. Currently, we’re stuck using GPUs for everything that needs parallelization, even though they’re far from the easiest or best-suited devices for many of our computational needs.
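To make “almost trivial” concrete: with a vector-enabled RISC-V toolchain you can already lean on auto-vectorization for plain scalar loops, no intrinsics or GPU kernels required. A sketch (flags assume a recent clang with RVV support):

```c
#include <stddef.h>

// Plain scalar SAXPY. Compiled with, e.g.:
//   clang --target=riscv64 -march=rv64gcv -O3 -S saxpy.c
// the loop can be auto-vectorized; look for vsetvli / vle32 / vfmacc
// instructions in the generated assembly.
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```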
As computing advances, we can just create and ratify new open standards. Tired of floating-point numbers? You could create a proposal for a standard posit extension today if you wanted to, then fork LLVM or GCC or something to provide the software support as well. In fact, someone has already implemented an open-source RISC-V chip with posit arithmetic and made a fork of LLVM to support it. You could fire it up on an FPGA right now if you wanted.
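If posits are unfamiliar: they’re a tapered number format where a run-length-encoded “regime” field trades precision for dynamic range, so you get more precision near 1.0 and a wider range at the extremes than a same-width float. Here’s a from-scratch decoder for the simplest variant, 8-bit posits with zero exponent bits (es = 0), purely as a software sketch of the format, not how a posit-enabled core would do it in silicon:

```c
// Decode an 8-bit posit (es = 0) into a float.
// Build: cc posit8.c -lm && ./a.out
#include <stdint.h>
#include <stdio.h>
#include <math.h>

float posit8_to_float(uint8_t p) {
    if (p == 0x00) return 0.0f;
    if (p == 0x80) return NAN;                      // NaR, "not a real"
    int sign = (p & 0x80) ? -1 : 1;
    uint8_t bits = (sign < 0) ? (uint8_t)-p : p;    // negate via two's complement

    // Regime: the run of identical bits after the sign bit encodes the scale.
    int first = (bits >> 6) & 1;
    int i = 6, run = 0;
    while (i >= 0 && ((bits >> i) & 1) == first) { run++; i--; }
    int k = first ? run - 1 : -run;                 // regime value

    // With es = 0 there are no exponent bits; everything after the
    // terminating regime bit is fraction, with an implicit leading 1.
    int frac_bits = (i > 0) ? i : 0;
    uint8_t frac = (i > 0) ? (uint8_t)(bits & ((1u << i) - 1)) : 0;

    float f = 1.0f + (float)frac / (float)(1u << frac_bits);
    return sign * ldexpf(f, k);                     // f * 2^k
}

int main(void) {
    // e.g. 0x40 -> 1, 0x50 -> 1.5, 0x60 -> 2, 0xC0 -> -1
    for (int p = 0; p < 256; p += 16)
        printf("posit8 0x%02X -> %g\n", p, posit8_to_float((uint8_t)p));
    return 0;
}
```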
guess I gotta get familiar with RISC-V then
Thank you so much for that. I haven’t touched hardware in a long time, but it’s exciting to see how much impact it’s already had on ML.
Also, the bit about a 63,000x improvement over Python is going to be something I bring up in a conversation, I can just see it.
I got confused seeing my university’s YouTube channel open up; I thought I’d clicked on a recording for one of my classes lol
If anyone else is from UBC, we’re over at !UBC@lemmy.ca