

There are really compelling open-source models like Zonos coming out; ElevenLabs will need to figure out how to thread the needle to keep everyone happy while other solutions eat into the pie.
We didn’t stop trying to make faster, safer, and more fuel-efficient cars after the Model T, even though it could get us from point A to point B just fine. We didn’t stop pushing for digital access to published content, even though we have physical libraries. Just because something satisfies a use case doesn’t mean we should stop advancing technology.
I’ve never been a Twitterrific user, even back when everyone was on it. The UI never really made sense to me because it always felt lacking. But maybe that minimalistic approach is their entire schtick, and their former users who enjoyed that style would buy it right up.
On a similar thought, I think if Tapbots made a similar aggregator, I’d buy it in a heartbeat.
Apple TV’s hardware is so much more capable than other platforms’ that they’ve just been coasting along for the last several generations of “Apple TV 4K”. Our Gen 1, now over 7 years old, is still super capable, and the only reason we picked up a Gen 3 is so we could get the Thread radio in a centralized location. As an Apple user, I’m extremely glad there’s going to be a new competitor in the space, which will hopefully push Apple further along the innovation path.
Ask it for a second opinion on medical conditions.
Sounds insane, but they are leaps and bounds better than blindly Googling and self-prescribing for every condition under the sun when the symptoms only vaguely match.
Once the LLM helps you narrow in on a couple of possible conditions based on the symptoms, then you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.
They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with them.
If you can serve content locally without a tunnel (i.e., no CGNAT or port blocking by your ISP), you can configure your server to respond only to Cloudflare’s IP ranges and your intranet IP range; slap on the Cloudflare origin certificate for your domain and trust it for local traffic; enable the orange cloud; and tada: access from anywhere without a VPN. Traffic is encrypted externally between user <> Cloudflare and Cloudflare <> your service, encrypted internally between user <> service, and only someone on the intranet, or coming in via Cloudflare, can access it. You can still put Zero Trust SSO on your subdomain so Cloudflare authenticates all users before proxying the actual request.
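To make the allow-list part concrete, here’s a rough Python sketch of the idea: pull Cloudflare’s published edge ranges and accept a connection only if it comes from Cloudflare or from your LAN. The intranet range is a placeholder, and in practice you’d usually express this directly in your reverse proxy or firewall config rather than in application code.

```python
import ipaddress

import requests

# Cloudflare publishes its edge IP ranges as plain-text lists at these URLs.
CF_RANGE_URLS = [
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
]
INTRANET = ipaddress.ip_network("192.168.1.0/24")  # placeholder: your LAN range


def load_cloudflare_ranges():
    """Fetch and parse Cloudflare's published CIDR ranges."""
    ranges = []
    for url in CF_RANGE_URLS:
        for line in requests.get(url, timeout=10).text.splitlines():
            if line.strip():
                ranges.append(ipaddress.ip_network(line.strip()))
    return ranges


CF_RANGES = load_cloudflare_ranges()


def is_allowed(client_ip: str) -> bool:
    """Allow only traffic arriving from Cloudflare's edge or from the intranet."""
    addr = ipaddress.ip_address(client_ip)
    return addr in INTRANET or any(addr in net for net in CF_RANGES)


print(is_allowed("192.168.1.42"))   # True: intranet address
print(is_allowed("203.0.113.10"))   # False: neither intranet nor Cloudflare
```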
It is pretty clear that you have less of an aversion to Seagate than my experience has given me. Stats can be twisted to tell anything, and my read of what I’m seeing tells me to steer away from Seagate; your interpretation can most certainly differ.
AFR is a percentage: 1 failed drive out of a pool of 10 means 10%, and 5 out of 100 means 5%. So, regarding your point that they don’t have many WD drives: if they don’t have many WD drives, each failure weighs even more heavily on the chart, which makes the data even more impactful, not less. The data also shows the average across all manufacturers, and you can clearly see Seagate sitting consistently above that average quarter over quarter. The failure rate is annualized, so the age of the drives is also factored into the consideration.
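For what it’s worth, the annualization roughly works like this (Backblaze derives AFR from drive-days rather than a raw drive count; the numbers below are made up for illustration):

```python
# Roughly Backblaze's approach: annualized failure rate from drive-days,
# with made-up example numbers (not taken from any actual report).
def afr_percent(drive_failures: int, drive_days: int) -> float:
    drive_years = drive_days / 365
    return drive_failures / drive_years * 100


# 1,000 drives running a full year with 15 failures -> 1.5% AFR.
print(afr_percent(drive_failures=15, drive_days=1_000 * 365))

# The same 15 failures in a fleet only deployed for ~3 months annualizes much higher.
print(afr_percent(drive_failures=15, drive_days=1_000 * 91))
```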
When there’s a clear trend of a higher failure rate expressed as a percentage, I’m not going to volunteer my data, NAS or otherwise, as tribute out of brand loyalty to a manufacturer that’s gone downhill from the decades past.
Way back when SSDs were prohibitively expensive for poor student me, they came up with the Momentus XT; I don’t know if it was the first hybrid HDD/SSD, but it was my first foray into flash storage. I had the earlier version, whose controller was set up such that should the flash memory die, I’d still have access to the HDD.
It, was, glorious…
I hear you. The brand is really not what we remember it to be.
WD has been treating me well, but my most recent batch has been HGST He10s from server part deals a couple of years back, so I can’t comment on the more recent drives.
Just don’t buy Seagate. Their drives consistently have the highest annualized failure rate in Backblaze reports ( https://www.backblaze.com/blog/wp-content/uploads/2024/05/6-AFR-by-Manufacturer.png ), which is consistent with my experience with a small anecdotal sample of roughly 30 drives. Failures also have a ripple effect, where the failed drive adds more work to the other drives (array rebuild after replacement), thereby increasing their risk of failing, too.
Yep! Give Granite a try. I think it would be perfect for this use case, both in terms of being able to answer your queries and doing so quickly, without a GPU, just using a modern CPU. I was getting above 30 tokens per second on my 10th-gen i5, which kind of blew my mind.
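If you want to sanity-check the tokens-per-second number on your own hardware, here’s a rough sketch against a local Ollama install using its /api/generate endpoint; the model tag and prompt are just placeholders, and it assumes the model is already pulled.

```python
import requests

# Ask a local Ollama instance (default port 11434) for a completion and
# compute tokens/second from the timing fields in its response.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "granite3.1-moe:3b",  # placeholder tag; use whatever you have pulled
        "prompt": "Explain what a mixture-of-experts model is in two sentences.",
        "stream": False,
    },
    timeout=300,
)
data = resp.json()

# eval_count = tokens generated; eval_duration = generation time in nanoseconds.
tokens_per_second = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{tokens_per_second:.1f} tokens/s")
```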
Thinking models like R1 will be better at things like troubleshooting a faulty furnace or diagnosing user problems, so there’s benefit in pushing those envelopes. However, if all you need is to give basic instructions, have it infer your intent, and finally perform the desired tasks, then smaller mixture-of-experts models should be passable even without a GPU.
The DeepSeek referred to here seems to be V3, not R1. While the linked article didn’t seem to have info on parameter size, the fact that they state it’s a sparse MoE architecture suggests it should run pretty quickly (compared to other models with a similar parameter count), so that’s cool.
Depending on what you want to do with it and what your expectations are, the smaller distilled versions could work on a CPU, but they will most likely need extra help on top, just like other similarly sized models.
This being a reasoning model, you might get more well-thought-out results out of it, but at the end of the day, a smaller parameter space (easiest to think of as ‘less vocabulary’) means smaller capabilities.
If you just want something to chat back and forth with very quickly on a CPU, try IBM’s granite3.1-moe:3b, which is very fast even on a modern CPU but doesn’t really excel at complex problems without additional support (i.e., RAG or tool use).
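To illustrate what that extra support can look like, here’s a minimal, hand-rolled RAG sketch: naive keyword retrieval over a couple of local notes, with the best match passed as context to Ollama’s /api/chat endpoint. The notes, the scoring, and the model tag are placeholder assumptions, not a recommended setup.

```python
import requests

# Tiny "knowledge base" standing in for your real documents.
notes = {
    "furnace": "Furnace error code E3 usually means the flame sensor needs cleaning.",
    "router": "The router drops Wi-Fi when channel 6 is congested; switch to channel 11.",
}


def retrieve(question: str) -> str:
    """Naive keyword retrieval: return the note sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(notes.values(), key=lambda note: len(q_words & set(note.lower().split())))


question = "Why does my furnace show error code E3?"
context = retrieve(question)

# Hand the retrieved note to the small model so it doesn't have to "know" the answer itself.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "granite3.1-moe:3b",
        "messages": [
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["message"]["content"])
```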
The 8B tag is the distilled Llama 3.1 model, which should be great for general writing. 7B is distilled Qwen 2.5 Math, and 14B is distilled Qwen 2.5 (general purpose but good at coding). They have the entire table called out on their Hugging Face page, which is handy for knowing which one to use for specific purposes.
The full model is 671B parameters and unfortunately not going to work on most consumer hardware, so it is still tethered to the cloud for most people.
Also, it being a model made in China, some degree of censorship is mandated. So depending on the use case, this may be a point of consideration, too.
Overall, it’s super cool to see something at this level be generally available, especially with all the technical details out in the open. Hopefully we’ll see more models with this level of capability become available so there are even more choices and more competition.
Shipping address appears to be US only. Oh well.
More than once I’ve heard the joking saying that ‘everything causes cancer in the state of California’ (regardless of whether it bears the warning label or not). I think that while the intention may be good, the equivalent of notification fatigue is at play here, and the label might not be delivering its intended benefit/value.
I wonder if it’s more because they’re hitting capacity limits as a result of the physical limitations of the on-package memory design: the physical distance could result in unbalanced performance, because some of the memory simply doesn’t have physical space where it could deliver consistent performance, thus limiting capacity. That’s the idea that crossed my mind, anyway.
So less of an “it’ll be more performant” thing and more of a “this was great, but we have to backpedal because we need more memory than the space could permit” kind of thing?
Works very well on vanilla docker compose as well. Annotate containers with labels and it will wire itself up automagically. It’s wonderful.