

I think it’s not that easy. From what I understand, the payment providers enforce that for the whole store; otherwise they don’t want to be involved. Quite shitty, but they have enough weight to pull shit like that off.
If your client(s) accept irregularly changing remote certs (i.e. they don’t do cert pinning), it should work. If both Cloudflare and you use the same CA, it would likely work even with cert pinning. Certainly possible, but it increases the complexity of the overall setup.
Possible, true. But then the setup also becomes more complicated. In addition, you end up with different certs for local and remote access, which could cause issues with clients if they try to enforce cert pinning, for example.
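For illustration, here’s a minimal sketch of what such a pinning check looks like on the client side. The host name and fingerprint are made up, and real clients usually pin through their HTTP stack’s configuration rather than by hand:

```python
import hashlib
import ssl

# Hypothetical values, purely for illustration.
HOST = "service.example.com"
PORT = 443
PINNED_SHA256 = "d4c9d902..."  # fingerprint recorded when the pin was set

def current_fingerprint(host: str, port: int) -> str:
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# A pinning client refuses to connect when the cert changes -- which is
# exactly what happens if it sees Cloudflare's edge cert remotely but your
# own cert locally.
if current_fingerprint(HOST, PORT) != PINNED_SHA256:
    raise ssl.SSLError("certificate does not match the pinned fingerprint")
```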
Cloudflare tunnel likely terminates TLS at the edge. So if you bypass it, you don’t have HTTPS. Not a problem locally, but it destroys the portability of the URL (because at home you’d need http and outside you’d need https). Might as well use different hosts then.
For me the desire to put up with the effort of cooking something came when I bought a Ninja Speedi… because the effort pretty much reduces to throwing the ingredients together. Pick something to cook (potatoes, vegetables, pasta, rice, …) and throw it in the bottom. Put the divider in and put the thing to fry on top (meat, fries, veggie patty, whatever). A bit of water in the bottom, timer to 12 mins, temp to 180°C and hit start. 16 or so minutes later you have your meal. It starts by heating the water to produce steam and then turns on the recirculating heat for the configured time, so your food gets steamed and fried at the same time. Not having to juggle different pots and pans at the same time made cooking much more pleasant.
I think you won’t regret it. If the container startup installs stuff, you might lock yourself out when the remote server has issues, when your network has issues, or when the package you install changes due to an update.
With everything baked into an image, you have reproducible results. If you build a new image and it doesn’t work anymore, you can immediately switch back to the old one and figure out the issue without pressure.
So you would expect the devs to include a filter list of known bad packages for the various potential source stores they have no influence over? How would you distribute that? Bundled with Discover, in which case the package maintainers of the different distributions have to roll out new versions with the updated list? Or as a list maintained on some server the KDE team has to provide, which Discover updates automatically on startup? What if you don’t agree with their decision to block something? What if the list gets abused? What should companies do that want that list customized?
it doesn’t matter
Hehe.
Anyway, I am also completely on Zigbee. While I like the concept of Matter over Thread, I wouldn’t want to switch, because it would start with a network too small to cover a good distance, and if I start replacing Zigbee devices, I effectively sabotage that network as well. So my only move would be to replace all Zigbee devices with Matter/Thread ones at once. And that seems insane. So I hope I keep getting new Zigbee devices for a while.
The idiomatic way would be to build your own image. That’s exactly the strength of the layering of container images.
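As a rough sketch of what that looks like (image name and package are placeholders; this assumes the Docker SDK for Python, `pip install docker`, though a plain `docker build` with the same Dockerfile does the job too):

```python
import io

import docker  # Docker SDK for Python: pip install docker

# Hypothetical Dockerfile: extend an upstream image with a package baked in
# at build time instead of installing it on every container start.
dockerfile = io.BytesIO(b"""
FROM upstream/app:1.2.3
RUN apt-get update && apt-get install -y --no-install-recommends some-package
""")

client = docker.from_env()

# The result is an immutable, tagged layer on top of the upstream image.
image, build_log = client.images.build(fileobj=dockerfile, tag="app-custom:1")
```

Tag each build (date or version number), and a rollback is just pointing the container back at the previous tag.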
“No sex!!!” … “You don’t even give us grand kids 😭😭”
I think EA was still worse. At least in my perception.
I think EA actually bought studios just to get the IP and immediately get rid of the employees. I also think they tried to milk a few of the IPs before letting them go downhill.
MS, from what I can tell, gave studios quite a lot of freedom to do what they do best. I don’t think they intentionally wanted to fuck over studios; rather, they sacrificed them.
Don’t get me wrong: that’s still bad. But there’s a difference between fucking studios over with intent and reacting badly to changed circumstances.
The thread is about snap and why it’s worse than flatpak.
“So I have this ultra portable gaming device…”
The British?! Pro-colonialism?! That can’t be. /s
I imagine it’s rather licensing. If they have to provide the software at some point, they can’t use components they aren’t allowed to distribute. And I agree that this will impact development costs. But with the law in place, this is not an unexpected cost but one that can be factored in. It might be that some live services are then no longer viable… but I don’t care. There are more games than anyone could play, and games are cancelled or never even started all the time for various reasons. One more or less is just noise.
Same for the “online-only design” argument. The moment they decide it’s not viable anymore and they want to shut it down: what does it matter to them what players do with it? As long as they offer the service themselves, no one is bugging them. (Although I would absolutely be in favor of also getting self-hosting options right from the start, I am realist enough to accept that this would indeed lower the economic feasibility of some projects.)
Do you want to know more?
The preinstalled apps are not a feature of KDE (or GNOME, XFCE, etc.). Actually, they are all structured in a very modular way where you can use or omit individual components. Firefox and LibreOffice are even completely independent of them; they merely add compatibility layers to make the integration more seamless.
What you experienced is attributable to the distribution you chose. The distributions are the ones who decide which components to bundle and preinstall. That is also the reason so many distributions exist in the first place: different teams/devs have different visions of what the desktop should look and feel like after install.