I use Arch in WSL, BTW. This is not a joke, it's actually quite nice.
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to code a script to count it. It's actually possible for an AI to get this answer consistently correct these days.
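Something like the sketch below is all it would take, which is why a tool-using model can get it right consistently. This is a minimal example; the word and letter are hypothetical stand-ins, since I don't know what the original question was actually counting:

```python
# Hypothetical stand-in: count how often a letter appears in a word,
# rather than asking the model to eyeball it from its tokens.
word = "strawberry"  # placeholder input
letter = "r"         # placeholder thing to count

print(word.count(letter))  # deterministic answer: 3
```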
Personally, if I can't go from human readable data to a complete model then I don't consider it open source. I understand these companies want to keep the magic sauce that's printing them money, but all the open source marketing is inherently dishonest. They should be clear that the architecture and the product they are selling are separate, much like proprietary software just lists all the open source software it used as a footnote in its about screen.
Godot does have a dedicated mechanism for mesh instancing (MultiMesh, if I remember correctly), and I think per-instance variations were possible as well, like differently colored triangles maybe? https://docs.godotengine.org/en/stable/tutorials/performance/vertex_animation/animating_thousands_of_fish.html
The way I understand it, the users didn't necessarily realize McAfee was responsible, just that a bunch of SQLite files appeared in temp, so they might not connect the dots here anyway. Or even know McAfee is installed, considering their shady practices.
I do think we're machines, as I said previously; I don't think there's much more to it than physical attributes, but those attributes let us have this discussion. That's remarkable in its own right, and I don't see why it needs to be more, but again, all personal opinion.
I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don't know what you actually mean.
Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that's a whole other kettle of fish). I'd consider all "thinking things" machines, but if a machine always responds to input in the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.
I’m still working on this definition, again just a personal viewpoint.
Not sure if this is the right answer, since I'm not familiar with that ecosystem: they have comparisons on their site.
It's a thing. https://en.m.wikipedia.org/wiki/Busy_waiting
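A minimal sketch of what that looks like in practice (Python here, just for illustration): the waiting thread burns CPU by polling a shared flag in a tight loop instead of blocking on a proper synchronization primitive.

```python
import threading
import time

# Shared flag the busy-waiting thread keeps polling.
flag_set = False

def busy_wait():
    # Busy waiting: spin in a tight loop, burning CPU, until the flag flips,
    # instead of blocking on something like threading.Event.wait().
    while not flag_set:
        pass
    print("busy-waiter finally saw the flag")

t = threading.Thread(target=busy_wait)
t.start()

time.sleep(1)    # pretend the producer is doing real work
flag_set = True  # the spinning thread notices this on a later iteration
t.join()
```

The point of the article is that the while loop above eats a full core doing nothing useful; an event or condition variable would put the thread to sleep until it's actually needed.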
I feel like it's difficult to quantify for jobs where you're being paid to think. Even when I'm goofing off, the problem I need to solve for the day is still lingering in the back of my head somewhere. Actively squinting at it doesn't seem to make things go any faster, and when I do return to work it's usually to mash out reams of code after letting it stew. But yes, the actual amount of time I'm fulfilling my job description is… less than my working hours.
Not an answer to the question, but in case performance is the goal, Torchaudio has it here
Yes, I forgot the exact details, apologies.
You can change those to /dev/disk/by-uuid/XYZ (“ls -an” that directory to see the symlinks to your current drives)
Basically just look for things like root=/dev/sda2 in the kernel command line. You can get it at runtime by running "cat /proc/cmdline". Having /dev/sda etc. in your fstab might also be a problem.
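If it helps, here's a rough sketch (a hypothetical convenience script, nothing official) that builds the device-to-UUID mapping from the /dev/disk/by-uuid symlinks and flags any /dev/sdX references in the kernel cmdline and fstab:

```python
#!/usr/bin/env python3
# Hypothetical helper: map /dev/sdX device paths to their UUIDs and flag
# any /dev/sd* references in the kernel command line and /etc/fstab.
import os
import re

BY_UUID = "/dev/disk/by-uuid"

# Build e.g. {"/dev/sda2": "1234-ABCD", ...} from the by-uuid symlinks.
uuid_for = {}
for uuid in os.listdir(BY_UUID):
    target = os.path.realpath(os.path.join(BY_UUID, uuid))
    uuid_for[target] = uuid

def check(label, text):
    for dev in re.findall(r"/dev/sd[a-z]+[0-9]*", text):
        hint = uuid_for.get(dev, "unknown (device not present?)")
        print(f"{label}: {dev} -> UUID={hint}")

with open("/proc/cmdline") as f:
    check("cmdline", f.read())

with open("/etc/fstab") as f:
    check("fstab", f.read())
```

From there it's just a matter of replacing e.g. root=/dev/sda2 with root=UUID=<that uuid> in the grub config, and using the same UUID= style in fstab.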
Yes, if you have multiple drives, some buggy BIOSes may not enumerate them in the same order every time. Most modern distros use UUIDs by default, but when manually setting up a bootloader it is easy to succumb to the temptation of the much simpler device paths, as the UUIDs are a pain. If you're not sure how to change the kernel parameters, most likely you're good on that front actually; it's in your grub config, as others have mentioned. I'll leave this comment around in case some poor soul who did it manually comes across the thread.
Depending on whether you wrote the kernel cmdline yourself, I imagine this might happen when using /dev/sdN style device paths? The BIOS might change things up every now and then for fun, so using partition UUIDs would be a better way if so.
Ah, even then it could just be a consequence of training samples usually being chronological (most often the expected resolution for conflicting instructions is "whatever you heard last", with some exceptions when explicitly stated), so it learns to think that way. I did find the pattern also applies to GPT trained on long articles, where you'd expect it not to, so I wanted to explain why that might be.
Or, to explain it better: most training samples will be cut off at the top, so the network sort of learns to ignore the beginning a bit.
Surprisingly, just setting the systemd flag in the WSL settings worked, though for a long time I simply didn't use systemd.
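For anyone who'd rather do it by hand than through the settings UI, I believe the equivalent is flipping it in /etc/wsl.conf inside the distro and then restarting WSL with "wsl --shutdown" (needs a reasonably recent WSL):

```ini
# /etc/wsl.conf — boot this distro with systemd as PID 1
[boot]
systemd=true
```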