Quilter, which has raised more than $40 million from investors including Benchmark, Index Ventures, and Coatue, used its physics-driven AI to automate the design of a two-board computer system that booted successfully on its first attempt, requiring no costly revisions. The project, internally dubbed “Project Speedrun,” required just 38.5 hours of human labor compared to the 428 hours that professional PCB designers quoted for the same task.
I like that it’s basing its behaviors on the laws of physics. No messy human language, no opinions, slants, or agendas. AI just isn’t ready (or, possibly, will never be ready) to handle that shit. Plus, it’s not stealing work created by humans under the guise of “training”.
I assume it still relies on datacenters, which are themselves ethically questionable. Still, this seems to be the “flavor” of AI that I hate the least.
“Language models don’t apply to us because this is not a language problem,” Nesterenko explained. “If you ask it to actually create a blueprint, it has no training data for that. It has no context for that…” Instead, Quilter built what Nesterenko describes as a “game” where the AI agent makes sequential decisions — place this component here, route this trace there — and receives feedback based on whether the resulting design satisfies electromagnetic, thermal, and manufacturing constraints… The approach mirrors DeepMind’s progression with its Go-playing systems.
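For intuition, here's a minimal sketch of what that kind of layout "game" could look like, assuming a toy grid board and made-up constraint checks (cell overlap, adjacency heating, total wirelength) standing in for real electromagnetic, thermal, and manufacturing solvers; none of these names or weights come from Quilter. A real system would learn a placement policy (policy gradients, MCTS, and so on, as in the Go lineage) rather than the plain random search used here to keep the sketch self-contained:

```python
import random

# Toy "PCB layout game": an agent places parts on a grid and gets one
# scalar reward from physics-style constraint checks. All names, weights,
# and constraints below are illustrative assumptions, not Quilter's.

GRID = 10     # abstract board modeled as a 10x10 grid
N_PARTS = 5   # number of components to place, wired in a simple chain

def overlap_penalty(placements):
    # manufacturing-style check: two parts on the same cell is illegal
    return len(placements) - len(set(placements))

def thermal_penalty(placements):
    # thermal-style check: penalize parts sitting directly next to each other
    pen = 0
    for i, (x1, y1) in enumerate(placements):
        for (x2, y2) in placements[i + 1:]:
            if abs(x1 - x2) + abs(y1 - y2) == 1:
                pen += 1
    return pen

def wirelength_penalty(placements):
    # electrical proxy: total Manhattan distance along the netlist chain
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in zip(placements, placements[1:]))

def score(placements):
    # the feedback step: fold every constraint into a single reward
    return -(10 * overlap_penalty(placements)
             + 3 * thermal_penalty(placements)
             + wirelength_penalty(placements))

def random_episode(rng):
    # one play-through of the game: place each part somewhere on the grid
    return [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(N_PARTS)]

rng = random.Random(0)
best = max((random_episode(rng) for _ in range(5000)), key=score)
print("best layout:", best, "score:", score(best))
```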
This is kind of interesting and cool, and it’s not a hallucinating LLM. I’ve designed a couple of simple circuit boards, and running traces can be sort of zen, but it is tedious and would be maddening as a job, so I can only imagine what the process must be like on complex projects from scratch. There’s definitely some hype coming from the company that gives me pause, but this seems like an actually useful task for a machine learning algorithm.
As someone who used to work on “expert models”, I’m excited that not everyone has abandoned them for “what if we just had a model that knows everything (which doesn’t exist) and costs a billion dollars to run”.
I was going to ask how this is different from a reinforcement learning algorithm, but then they called out DeepMind’s AlphaGo.
Yeah…
But you know how people are already comparing vibe coding to 40K, where “priests” pray to computers and hope that if they do the exact same thing they’ll get the same result they want?
If we start walking down this road where even the bot doesn’t understand why what it did was better…
Serious unintended consequences are going to be inevitable.
Like, I swear nobody knows the paperclip story anymore.
Instrumental convergence posits that an intelligent agent with seemingly harmless but unbounded goals can act in surprisingly harmful ways. For example, a sufficiently intelligent program with the sole, unconstrained goal of solving a complex mathematics problem like the Riemann hypothesis could attempt to turn the Earth (and in principle other celestial bodies) into additional computing infrastructure to succeed in its calculations.
https://en.wikipedia.org/wiki/Instrumental_convergence
I mean, we can make a very, very solid argument that many of our current problems are caused by high-level stock trading being done by algorithms whose only instruction is “make numbers go up”.
This shit ain’t even hypothetical anymore; it’s just that instead of “make as many paperclips as possible” we told it “make more money than you did yesterday”.
Which is why we’re burning down the planet to make billionaires even more money
I can’t wait for hardware companies to let go of their designers prematurely in the pursuit of AI everything, only for there to be a bug in a major board and no one left to troubleshoot it, stranding customers with a broken board, no revision on the horizon, and no recourse.
Playing games against the laws of physics, so games against reality. This is similar to how humans develop. So, IMO, this approach will go way beyond fabricating computer boards.
I may be hallucinating now, but I swear I remember nearly a decade ago there was a paper or some articles about how CG PCBs were using nonstandard electrical tricks to minimize space or something. The designs purposefully had arcs or short circuits or something. Maybe it was a temperature thing? I did more than a cursory search and couldn’t find much, but I vividly remember having conversations about it. Anyone remember anything like that?
I seem to remember a story about how something - a neural net, or an early reinforcement learning experiment - ended up accidentally exploiting a physics bug in a chip to achieve a result that should have gone through the chip’s expected circuitry instead.
It was specific to that one particular chip, and swapping it out for another supposedly identical chip caused the calculation, or simulation, or whatever that was running on the larger system, to fail.
That is, it wasn’t supposed to be exploiting physics glitches but that’s what happened.
… I think I found it. It happened all the way back in the 1990s if this story is to be believed: https://www.damninteresting.com/on-the-origin-of-circuits/
Yes! Thank you for the link! I can’t guarantee it, but this seems like the exact thing we had been chatting about. The age puts it in the right timeframe to have made the rounds while still being tech-relevant around the time of our discussion.
There was a story about a researcher using evolutionary algorithms to build more efficient systems on FPGAs. One of the weird shortcuts involved a system that normally needed a clock circuit, but none was available, so the evolved design used a dead-end circuit that would give an electric pulse when used, acting as a makeshift clock. The big problem was that the better efficiency often exploited quirks of the specific board, so his next step was to start testing the results on multiple FPGAs and using the overall fitness to get past those quirks/shortcuts (toy sketch of that fix below).
Pretty sure this was before 2010. Found a possible link from 2001.
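For anyone curious, here's a toy sketch of that multi-board fix, with everything made up: the "circuit" is just a bitstring, and each simulated board deterministically flips a few bits to stand in for chip-specific analog quirks. Averaging fitness across several boards keeps evolution from latching onto any one chip's glitch (the real experiment, of course, ran on physical FPGAs):

```python
import random

# Toy evolutionary run: evolve a bitstring "circuit" toward a target
# behavior, scoring it on several simulated boards that each have a fixed
# quirk. Entirely illustrative; not the original researcher's code.

GENOME_LEN = 32
TARGET = [i % 2 for i in range(GENOME_LEN)]  # stand-in for correct behavior

def fitness_on_board(genome, quirk_rng):
    # each board deterministically flips a few bits -- a crude stand-in
    # for analog quirks that differ from chip to chip
    observed = genome[:]
    for _ in range(3):
        observed[quirk_rng.randrange(GENOME_LEN)] ^= 1
    return sum(o == t for o, t in zip(observed, TARGET))

def fitness(genome, board_seeds):
    # the fix: overall fitness is the average across all boards,
    # so a genome can't win by exploiting one board's quirk
    return sum(fitness_on_board(genome, random.Random(seed))
               for seed in board_seeds) / len(board_seeds)

def mutate(genome, rng):
    child = genome[:]
    child[rng.randrange(GENOME_LEN)] ^= 1
    return child

rng = random.Random(42)
boards = [1, 2, 3, 4]  # four "chips", each with its own quirk seed
pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(20)]
for _ in range(200):
    pop.sort(key=lambda g: fitness(g, boards), reverse=True)
    pop = pop[:10] + [mutate(rng.choice(pop[:10]), rng) for _ in range(10)]
print("best average fitness:", fitness(pop[0], boards))
```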
Yes, thank you! My timing was wrong (I’m getting old lol), but this was the exact thing being discussed. Glad other people were able to find the info.
CG = computer-generated?
Yeah. I didn’t call it AI because I’m not sure of the exact method of generation. It may have been AI, or maybe some other generation method.
That middle step — the layout — creates a persistent bottleneck. For a board of moderate complexity, the process typically consumes four to eight weeks. For sophisticated systems like computers or automotive electronics, timelines stretch to three months or longer.
Imagine being the poor soul who connects circuits together in some CAD program for eight weeks straight. I figure I would have pulled all my hair out by the end of the first week.