Tired of hearing about AI bullshit.
Sounds a lot like an FPGA.
How much is this startup paying for this PR?
i love how being a software company that also designs hardware is now referred to as “going Apple’s way” because everything has to tie back to apple somehow
They’ve got 15% market share… but it’s sexy!
Hmm, I wonder if this will go anywhere. Isn’t it similar to what a graphics card does, calculating specific instructions in an efficient way? But where does it stop being a CPU? When it only supports specific languages?
Idk I think RISC might be the future more than this. 99% less power consumption is only going to make the neural nets a bit better because they need an exponential increase in processing power for linear gains.
This isn’t an AI coprocessor. It sounds more like an FPGA that connects a bunch of standalone ALUs, and also has compiler support for common languages:
The compiler generates a representation of the data flow, places the instructions with an efficient network on chip. A RISC-V core configures the fabric and then shuts down to leave the tiles running, although the fabric can reconfigure itself as a general purpose processor that can run C, C++ or Rust as well as edge AI frameworks and potentially transformer frameworks.
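If it helps picture it, here’s a purely hypothetical bare-metal sketch of that “RISC-V core configures the fabric and then shuts down” flow. Every address, register name, and struct below is invented for illustration; none of it is the vendor’s actual interface:

```c
#include <stdint.h>

/* All of this is made up: a pretend MMIO map for a tile fabric. */
#define FABRIC_BASE  0x40000000u   /* hypothetical base address of the fabric */
#define TILE_STRIDE  0x100u        /* hypothetical per-tile register spacing  */

typedef struct {
    volatile uint32_t opcode;  /* which ALU operation this tile performs    */
    volatile uint32_t route;   /* which neighbor the result is forwarded to */
    volatile uint32_t enable;  /* tile starts firing once operands arrive   */
} tile_cfg_t;

/* The placement (opcode + route per tile) would come from the compiler. */
static void load_dataflow_graph(const uint32_t (*graph)[2], int n_tiles)
{
    for (int i = 0; i < n_tiles; i++) {
        tile_cfg_t *t = (tile_cfg_t *)(uintptr_t)
                        (FABRIC_BASE + (uint32_t)i * TILE_STRIDE);
        t->opcode = graph[i][0];
        t->route  = graph[i][1];
        t->enable = 1;
    }
}

void boot(const uint32_t (*graph)[2], int n_tiles)
{
    load_dataflow_graph(graph, n_tiles);
    /* Control core's job is done: sleep until the fabric raises an interrupt. */
    for (;;)
        __asm__ volatile ("wfi");
}
```

The point is just the shape of it: the compiler decides opcode placement and NoC routing offline, so at runtime the control core does nothing but write configs and go to sleep.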
The “the fabric can reconfigure itself” part is interesting too, maybe that’s why they’re not calling it an FPGA
But FPGAs can reconfigure themselves.
After being configured by the user? Didn’t know that
There are a few ways to do it on Xilinx/AMD parts. For Zynqs, the processor actually programs the programmable logic, so you just need bit files in the OS file system and you’re good to go. For any part there’s also partial reconfiguration, where small regions can be programmed with alternate partial bitstreams without reconfiguring the whole device. There are a bunch of conditions that have to be met, and I don’t have any experience with that style of design, but yep, self-reconfiguration, at least on Xilinx parts, is definitely a thing.
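For the Zynq case specifically, the flow the Xilinx wiki documents is: drop the bitstream into /lib/firmware, then poke the fpga_manager sysfs node. Rough C sketch of that (note the firmware/flags attributes come from Xilinx’s kernel tree, not mainline, and the bitstream name here is made up):

```c
/* Full PL reconfiguration from Linux userspace on a Zynq, via the
 * fpga_manager sysfs interface as documented on the Xilinx wiki.
 * Assumes "my_design.bit.bin" (hypothetical) is already in /lib/firmware. */
#include <stdio.h>

int main(void)
{
    /* 0 = full reconfiguration (setting the low bit would mean partial). */
    FILE *f = fopen("/sys/class/fpga_manager/fpga0/flags", "w");
    if (f) { fputs("0\n", f); fclose(f); }

    /* Writing a firmware file name kicks off programming of the PL. */
    f = fopen("/sys/class/fpga_manager/fpga0/firmware", "w");
    if (!f) { perror("fpga_manager"); return 1; }
    fputs("my_design.bit.bin\n", f);
    fclose(f);

    /* The state attribute should now read "operating". */
    char state[64] = "";
    f = fopen("/sys/class/fpga_manager/fpga0/state", "r");
    if (f) { if (fgets(state, sizeof state, f)) {} fclose(f); }
    printf("fpga0 state: %s", state);
    return 0;
}
```

Mainline kernels do the same job through device tree overlays instead, and partial reconfiguration is the same dance with the partial flag set and a partial bitstream.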
Why just those three? Why not any language that can target LLVM?
Or is that just shorthand for the languages they’ve created hardware-specific bindings for so far?
🫠
Efficiency isn’t the problem, AI is.