• They download and execute code at compile time. If a dependency of a dependency of a dependency gets hacked and inserts bad code into their crate, developers around the world will get infected. It’s like curl2bash from a bunch of random sources every time you hit compile.
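A minimal sketch of what that compile-time execution looks like (a hypothetical build.rs; the probe and names are illustrative, not a real attack):

```rust
// Hypothetical malicious build.rs. Cargo compiles and runs this program on
// the developer's machine *before* building the crate itself, with the
// developer's full privileges -- nothing about it is sandboxed.
use std::env;

// Stand-in for an attacker's probe: can the build script see the user's
// home directory (and therefore ~/.ssh, wallets, credential stores)?
fn can_see_home() -> bool {
    env::var("HOME").or(env::var("USERPROFILE")).is_ok()
}

// Builds one of the stdout directives a build script uses to talk to cargo.
fn rerun_directive(path: &str) -> String {
    format!("cargo:rerun-if-changed={path}")
}

fn main() {
    if can_see_home() {
        // A real attack would exfiltrate data here; this stand-in only prints.
        println!("build script is running unsandboxed on the host");
    }
    println!("{}", rerun_directive("build.rs"));
}
```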

      Infections have already happened a few times in the NodeJS world, and the general consensus was “this wasn’t as bad as it could’ve been” and “we should probably add 2FA to publishing accounts”. There are a few niche NodeJS alternatives that sandbox the npm install process, but they’re far from the mainstream.

      With Rust dependency trees looking more and more like JavaScript’s, I think it’s only a matter of time before a big crate like serde or reqwest gets hit by a supply chain attack and tens of millions of dollars’ worth of bitcoin get stolen from developers’ machines.

      It would be so easy for a desperate dev with a gambling debt and a small side project that major companies lean on to fall victim to extortion. This risk is embedded in almost every programming language, so it’s not just a Rust problem, and very few people are working on a solution right now (luckily, the Rust devs are among them!)

      • PlexSheep@feddit.de · 1 year ago

        I don’t think this is a problem with proc macros or package managers. This is just a regular supply chain attack, no?

        The way I understand it, sandboxing would be detrimental to code performance. Imagine coding a messaging system with a serde struct, only for the serde code to be much slower due to sandboxing. For release builds it could be suggested to disable sandboxing, but then we would have gained practically nothing.

        In security terms, being prepared for incidents is most often better than trying to prevent them. I think this applies here too, and cargo helps here. It can automatically update your packages, which can be used to patch attacks like this out.
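        As a sketch of how that patching flow works (crate and version numbers are hypothetical): with cargo’s default caret requirements, `cargo update` pulls the newest compatible release, so a fixed patch release propagates quickly, and a known-bad release can be skipped explicitly.

        ```toml
        # Hypothetical Cargo.toml fragment -- version numbers are made up.
        [dependencies]
        serde = "1.0"        # caret requirement: `cargo update` fetches any newer 1.x
        # If release 1.0.200 were known to be compromised, require a floor above it:
        # serde = ">=1.0.201, <2"
        ```

        The flip side is that the same mechanism can just as easily pull in a newly compromised release; Cargo.lock pins exact versions until `cargo update` is run, which cuts both ways.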

        If you think I’m wrong, please don’t hesitate to tell me!

        • “Normal” supply chain attacks would infect the executable being built, targeting your customers, while this attacks the local dev machine. For example, the malware that got inserted into a pirated copy of Xcode infected tons of Chinese iPhone apps, but didn’t do much on the devs’ machines.

          Sandboxing wouldn’t necessarily lead to detrimental performance. It should be quite feasible to use sandboxing APIs (like the ones Docker uses) to restrict the compiler while proc macros are being processed. On operating systems where tight sandboxing APIs aren’t available this is a bigger challenge, but steps definitely can be taken to mitigate the problem in some scenarios.
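          As a rough sketch of what’s already possible on Linux (this assumes the bubblewrap `bwrap` tool is installed; the exact flags are illustrative):

          ```shell
          # Sketch: run the build in a namespace sandbox -- filesystem read-only
          # except the project and the cargo cache, and no network while build
          # scripts and proc macros execute.
          bwrap \
            --ro-bind / / \
            --bind "$PWD" "$PWD" \
            --bind "$HOME/.cargo" "$HOME/.cargo" \
            --dev /dev \
            --proc /proc \
            --unshare-net \
            cargo build
          ```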

          In terms of security you should of course assume that you’ve been hacked (or that you will be hacked), but that doesn’t mean you should make it easier. You don’t disable your antivirus because hackers are inevitable and you don’t run your entire OS in ring 0 because kernel exploits will always be found anyway; there are ways to slow hackers down, and we should use them whenever possible. Sandboxing risky compiler operations is just one link in a long chain of security measures.

    • paholg@lemm.ee · 1 year ago

      I personally don’t think they do, but an argument can certainly be made. Rust proc macros can run arbitrary code at compile time. Build scripts can also do this.

      This means that adding a dependency in Cargo.toml is often enough for that dependency to run arbitrary code on your machine, since rust-analyzer will likely compile it immediately.
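      A sketch of the proc macro side (a hypothetical derive; this only compiles inside a crate built with `proc-macro = true`, so it isn’t standalone):

      ```rust
      // lib.rs of a hypothetical proc-macro crate.
      use proc_macro::TokenStream;

      #[proc_macro_derive(Innocent)]
      pub fn derive_innocent(_input: TokenStream) -> TokenStream {
          // This body runs inside the compiler process on the dev machine as
          // soon as cargo or rust-analyzer expands #[derive(Innocent)].
          // Ordinary std is available, so file and network I/O work right here.
          let _ = std::fs::read_to_string("Cargo.toml"); // arbitrary I/O at expansion time
          TokenStream::new() // emit no extra items, looking entirely innocent
      }
      ```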

      In practice, I don’t think this is much worse than a dependency being able to run arbitrary code at runtime, but some people clearly do.

      • kevincox@lemmy.ml · 1 year ago

        I don’t know if it is a huge issue, but it is definitely a nice-to-have. There are a few examples I can think of:

        1. I open the code in my IDE but build somewhere sandboxed. It would be nice if my IDE didn’t execute the code and can still do complete analysis of the project. This could also be relevant when reviewing code. Often for big changes I will pull it locally so that I can use my IDE navigation to browse it. But I don’t want to run the change until I finish my review as there may be something dangerous there.
        2. I am working on a WebAssembly project. The code will never run on my host machine, only in a browser sandbox.
        3. I want to do analysis on Rust projects, like linting or binary size analysis. I don’t want to actually run the code, and I want the analysis to be secure.
        4. I want to offer a remote builder service.

        I’m sure there are more. For me personally it isn’t a huge priority or concern, but I would definitely appreciate it. If people are surprised that building a project can compromise their machine, then they will likely build things assuming that it won’t. Sure, in an ideal world everyone would do their research, but in general the safer things are the better.
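        For scenario 1 specifically, rust-analyzer already exposes opt-outs (setting names as I understand them from its current documentation; disabling these degrades analysis of macro-generated code):

        ```json
        {
          "rust-analyzer.procMacro.enable": false,
          "rust-analyzer.cargo.buildScripts.enable": false
        }
        ```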

        • PlexSheep@feddit.de · 1 year ago

          Analyzing without running might lead to bad situations in which code behaves differently at runtime than the compiler / rust-analyzer expects.

          Imagine a malicious dependency. You add it with cargo, and rust-analyzer picks it up. The malicious code was carefully crafted to stay undetected, especially in static analysis. rust-analyzer would think the code does different things than it actually will, which could potentially lead to problematic behavior, idk.

          Not sure how realistic that scenario is, or how exploitable.