tl;dr: he says “x86 took over the server market” because it was the same architecture developers at companies had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.

Now this, among other points he made, is a very good point on how and why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he kind of lost touch with reality on this one…

He’s comparing two very different situations, or more precisely two different eras. Developers aren’t tied to the underlying hardware like they used to be. The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of what gets developed is, or will be, written in those languages, so the CPU architecture becomes irrelevant.
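
To put it concretely (a rough Node/TypeScript sketch, nothing more): the exact same script runs unchanged on an x86 or an ARM box, and the architecture only shows up at all if you go looking for it.

```typescript
import * as os from 'node:os';

// The identical script runs unchanged on x86_64 and ARM servers;
// the CPU architecture only matters if you explicitly ask for it.
console.log(`Running on ${os.arch()} with ${os.cpus().length} cores, Node ${process.version}`);
```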

Obviously very big companies such as Google, Microsoft and Amazon are more than happy to pay the small “tax” of making sure JavaScript runs fine on ARM rather than keep paying the big bucks they pay for x86…

What are your thoughts?

  • jet@hackertalks.com · 1 year ago

    He has a strong opinion, but he hasn’t lost the plot. It’s very reasonable to say you need to develop on the architecture you want to deploy to if you want to be efficient, so most companies are going to deploy to the architecture they have locally.

    But you’re quoting comments from 2019. Nowadays lots of Mac developers develop directly on ARM. So by his own argument, those Mac developers would be more comfortable deploying to an ARM-based architecture because they’re already running on an ARM-based architecture.

    So broadly I agree with him, or at least his comments from 2019: you’re going to need local developer environments before you’re going to get efficient server software.

  • pastermil@sh.itjust.works · 1 year ago

    As someone dealing with enterprise software for a living, what he’s saying absolutely makes sense, and I deal mostly in web applications (where I never really have to worry about the low-level stuff).

    Just because the top layer seems to be the same doesn’t mean the underlying ones are. There’s a reason why perfect bug compatibility is a thing (or maybe was, in the RHEL ecosystem?).

    Things that look like slam dunks in theory are never such in practice. Weird bugs pop up from time to time; and believe me, they will!

    It might be rare; you may only see it once or twice in a project. But when it happens, you’re gonna want to be ready, or people will question your ability to do your job.

    • thelastknowngod@lemm.ee · 1 year ago

      The cross-compiling point makes sense, but since this is a 4.5-year-old message, the state of ARM in the cloud has changed. Developers now actually do have ARM-based machines, thanks to Apple. AWS has Graviton2 instances now, and they are a lot cheaper than similarly specced x86_64 instances. ARM is a viable option to consider.
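
      If it helps, here’s a rough sketch using the AWS SDK for JavaScript v3 (the region and result limit are arbitrary placeholders) that asks EC2 which instance types are arm64, Graviton included:

      ```typescript
      import { EC2Client, DescribeInstanceTypesCommand } from '@aws-sdk/client-ec2';

      // List arm64 (Graviton and friends) instance types in one region.
      // Assumes an ESM/TS setup where top-level await is available.
      const client = new EC2Client({ region: 'us-east-1' });

      const resp = await client.send(
        new DescribeInstanceTypesCommand({
          Filters: [{ Name: 'processor-info.supported-architecture', Values: ['arm64'] }],
          MaxResults: 20,
        })
      );

      for (const t of resp.InstanceTypes ?? []) {
        console.log(t.InstanceType, t.VCpuInfo?.DefaultVCpus, 'vCPUs');
      }
      ```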

  • bobtreehugger@awful.systems · 1 year ago

    It’s tough to debug issues when you can’t run on the same hardware directly.

    There’s a reason that arm support in open source software has exploded in the past few years, and it’s because of apple silicon.

    I’ll agree that it’s easier now, with most developers using higher level runtimes, but someone’s got to get those runtimes working, and it’s much easier to develop if you have a laptop running that hardware.

  • umami_wasabi@lemmy.ml · 1 year ago

    He was sort of right, back in 2019. Even then, IBM PowerPC mainframes were still thriving.

    Now, new languages with reasonably mature cross-compilation are here. Major cloud providers have ARM-based machines ready, some even designed to their own needs.

    ARM is in the datacenter market and is becoming a trend.

    The only thing I worry about is that ARM implementations are too fragmented. AWS Graviton might behave differently than Ampere Altra, despite both implementing the ARM ISA.
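
    One small mitigation (a rough sketch, assuming a Linux host; the exact /proc/cpuinfo fields vary by kernel and vendor) is to log whatever CPU identification the OS exposes, so “works on Graviton, breaks on Altra” surprises are at least traceable:

    ```typescript
    import * as os from 'node:os';
    import { readFileSync } from 'node:fs';

    // Log whatever CPU identification is available on this host.
    console.log('arch:', os.arch());
    console.log('model:', os.cpus()[0]?.model || '(not reported)');

    try {
      // Linux-only; field names like "CPU implementer" / "CPU part" vary by vendor.
      const lines = readFileSync('/proc/cpuinfo', 'utf8')
        .split('\n')
        .filter((l) => /^(CPU implementer|CPU part|model name|Hardware)/i.test(l));
      console.log([...new Set(lines)].join('\n'));
    } catch {
      // /proc/cpuinfo not available (not Linux); skip.
    }
    ```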

  • Windex007@lemmy.world · 1 year ago

    The luxury you have of not knowing a thing about enterprise-grade servers, because your world is JavaScript, was made possible, and continues to be made possible, by people working on layers that do require familiarity with the underlying hardware.

    • jcg@halubilo.social · 1 year ago

      Right, whenever someone like Linus talks about developers he’s probably not referring to your run-of-the-mill code monkey making simple web apps.

  • phx@lemmy.ca · 1 year ago

    x86- and AMD64-based stuff is fairly standard in terms of a motherboard with a BIOS/UEFI and peripheral buses. ARM has for a long time been kind of a mess in this regard, and there are several varieties of ARM architecture that don’t play nicely with code compiled for others.

    Don’t get me wrong. ARM can be great for certain types of workloads. It’s typically more efficient than x86 at lower power, and better at various types of math. That’s why we DO see ARM offered for certain stuff like Lambda functions, but you probably won’t be running full VM environments on it.
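
    Lambda is actually a good example of how small that switch is. A hedged sketch with the AWS CDK v2 (the stack, function, and asset names are made up): the architecture is a single property and nothing else about the definition changes.

    ```typescript
    import { Stack, StackProps } from 'aws-cdk-lib';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import { Construct } from 'constructs';

    export class ArmLambdaStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // Opting this function into ARM (Graviton2) is one property;
        // the handler code and the rest of the definition stay the same.
        new lambda.Function(this, 'HelloFn', {
          runtime: lambda.Runtime.NODEJS_18_X,
          architecture: lambda.Architecture.ARM_64,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('lambda'), // hypothetical ./lambda directory
        });
      }
    }
    ```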

    Lastly: notice how it’s been hard to find certain varieties of Pi and various other ARM-based hardware? There are shortages all over the place, but in general Intel and AMD have been able to keep up with demand for their CPUs.

    Yes, devs aren’t tied to hardware, but there are efficiencies of scale to consider.

  • INeedMana@lemmy.world · 1 year ago

    From what I learned at university:
    The CISC instruction set (x86) was developed to address the technical reality of its time: costly CPU operations and fast reads from storage. Not long after that the situation changed, and storage reads became slow in comparison to computing time (putting it simply, it’s faster to read an archive and unpack it than to read the unpacked thing). But in the meantime the PC boom had happened. In a way, backward compatibility and market inertia locked us into an instruction set that is not the best optimized for our current tech, despite the fact that RISC (for example ARM) was conceived earlier.

    In a way, software (compilers and interpreters too) is like a muscle: the more and wider it’s used, the better it becomes. You can be writing in Python, but if your interpreter has missed optimization opportunities, your code will run faster on an architecture with a better-optimized interpreter available.
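
    A crude way to see this for yourself (a rough sketch in Node/TypeScript; the same idea applies to a Python interpreter): run the identical script on an x86 box and an ARM box and compare. The gap you measure comes mostly from how well the runtime is tuned for each architecture, not from your code.

    ```typescript
    // Identical workload on both machines; only the runtime/architecture differs.
    function work(n: number): number {
      let acc = 0;
      for (let i = 1; i <= n; i++) {
        acc += Math.sqrt(i) * Math.sin(i);
      }
      return acc;
    }

    const start = process.hrtime.bigint();
    const result = work(50_000_000);
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`result=${result.toFixed(3)} elapsed=${ms.toFixed(1)} ms`);
    ```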

    From personal observations:
    The biggest cost of software is not writing something super efficient. It’s maintainability (readability and debugging), ease of use (onboarding/training time) and versatility (“let’s add the feature that is missing to what we have, instead of reinventing the wheel and maintaining two toolsets”).

    New languages are not created because they can do something faster than assembler (they can’t, btw). If assembly code is written as optimally as possible, high-level languages can at best be as fast. Writing such assembly is a problem behind the keyboard, not a technical limitation. The only thing high-level languages do better is how much time it takes a human to work with them.
    I would not be surprised to learn that a bigger part of the big bucks you mention goes not into optimization but rather into “how can we work around that difference so the high-level interface stays the same as for the more widely used x86?”

    In the end it all boils down to machine code; it’s the only thing that really exists when it comes to executing code. If your “human to bits translator” produces unoptimized binaries, it doesn’t matter how high-level the language you wrote in was.
    And somewhere along the way we’ve arrived at a level where not even a few behemoths like Google or Microsoft throwing money into research (not that I believe they are doing so when it comes to optimization) would be enough.
    It’s field use that from time to time provides a use case that helps find an edge case where an optimization can be made.
    To purposefully find it? Dumping your datacenter in liquid nitrogen might be cheaper and probably more predictable.

    So yeah, I mostly agree with him.
    Maybe the times have changed a little: the things that gave RISC its biggest kick were smartphones, then single-board computers, so not that long ago. The improvements are always bigger at the beginning.
    But the fact that some companies are trying to get RISC back into userland means, in my opinion, that the computer world has only started to heal itself after the effects of the PC boom.

  • Wooki@lemmy.world · 1 year ago

    JavaScript and TS are scripting languages that have little to nothing to do with threading.

  • kornel@programming.dev · 1 year ago

    I’ve got an ARM Mac. I’ve got ARM VPSes from Hetzner, and I’m compiling native code for the server.

    It’s definitely easier to develop, build, and test on the same architecture, than to deal with cross-compilation and emulation.

    So I think Linus is right.

  • Skull giver@popplesburger.hilciferous.nl · 1 year ago

    Have you used ARM servers? They’re a massive pain to work with because they just need that one little extra step every time. Oops, this Docker image doesn’t do aarch64, gotta build it yourself. Oops, this package isn’t available, gotta compile it yourself. Oops, this tool doesn’t work, gotta find an alternative or run it through the much slower qemu layer.

    The M1 was the first usable ARM development machine for the mainstream, and at launch it was plagued with tons of “how do I develop on this” problems. Apple provided x64 compatibility as a workaround, because basically every piece of software you’d want to run was built for another platform. Things are moving forward, but I haven’t heard of any companies announcing how their lives improved by switching to Graviton. Maybe if Apple released a 200-core M2 server it would start to make sense to use ARM, but knowing Apple they’d probably force you to run macOS.

    Linux was released in 1991, not 1960. There were tons of programming languages out there. BASIC ran on basically anything, as did C++. Pascal and Fortran are still used to write high demand applications to this day. Nobody was stuck with C.

    Also, when you actually need performance, Javascript needs to go. Java and dotnet have the same cross platform advantages with much higher speeds. When those become too slow for you (not that hard, they both have huge overhead), you get into the realm of C++ and Rust. After that, you can go one step further, and write your code in C or Fortran (Fortran is especially good at number crunching, beating C at many tasks).

    For a while, developers were stuck with compiling stuff for their servers. Then Java came out. Java did what you say Javascript does: write once, run anywhere. Since the late nineties, server architecture does not strictly matter. You can take most .jar files and serve them from your server, your Power9 box, your Android phone, it’ll all just work after downloading a runtime.

    Nothing changed, really. The minority of developers running on ARM will usually still deploy to amd64. Unlike in the past, ARM cores on desktop are faster than ARM cores on the server. There’s no benefit to running ARM servers. Running slow software like PHP and Javascript becomes especially problematic on slower hardware, so for those cross platform runtimes, you’re still better off running on amd64. That’s part of the reason why companies like Oracle are handing out free ARM VPS products with tons of free RAM, to convince people to try their ARM product for real.

    Maybe Graviton will take off, who knows. People said the same thing about Power9 and they’re saying great stuff about RISC-V too. For now, I don’t see much change.