• BB_C@programming.dev · 9 points · 8 months ago

    Forgot to mention, and this is tangentially related to my comments from yesterday:

    A paper from 2020 showed that Cranelift was an order of magnitude faster than LLVM, while producing code that was approximately twice as slow on some benchmarks. Cranelift was still slower than the paper’s authors’ custom copy-and-patch JIT compiler, however.

    Cranelift is itself written in Rust, making it possible to use as a benchmark to compare itself to LLVM. A full debug build of Cranelift itself using the Cranelift backend took 29.6 seconds on my computer, compared to 37.5 with LLVM (a reduction in wall-clock time of 20%).

    Notes:

    • It’s easy to gloss over the “order of magnitude” part in the presence of concrete and current numbers mentioned later.
    • It’s actually “orders of magnitude” faster.

    But the numbers only show a 20% speed increase!

    The inattentive reader will be left with the impression that Cranelift compiles 20% faster for a 2x slowdown. Some comments below the article confirm that.

    What the article author missed (again) is that the biggest Cranelift wins come when used in release/optimized/multi-pass mode. I mention multi-pass because the author should have noticed that the (relatively old) 2020 research paper he linked to tested Cranelift twice, with one mode having the single-pass tag attached to it.

    Any Rust user knows that slow builds (sometimes boringly so) are actually release builds. These are the builds where the slowness of LLVM optimizing passes is felt. And these are the builds where Cranelift shines, and is indeed orders of magnitude faster than LLVM.

    The fact that Cranelift manages to build non-optimized binaries 20% faster than LLVM is actually impressively good for Cranelift, or impressively bad for LLVM, however you want to look at it.

    And that is the problem with researchers/authors with no direct field expertise. They can easily miss some very relevant subtleties, leading readers to draw grossly wrong conclusions.

    • sugar_in_your_tea@sh.itjust.works · 1 point · 8 months ago

      Yeah, I’m no compiler engineer, but testing both release and debug builds is the minimum I’d do. That doesn’t even get into classes of optimizations, like loop unrolling, binary size, macros, or function inlining, which I also expect to be in a compiler comparison.

  • BB_C@programming.dev · 9 points · 8 months ago

    Users can now use Cranelift as the code-generation backend for debug builds of projects written in Rust

    Didn’t read the rest. But this is clearly inaccurate, as most Rustaceans probably already know.

    Cranelift can be used in release builds. The performance is not competitive with LLVM. But some projects are completely useless (too slow) when built with the debug profile. So, some of us use a special release profile where Cranelift backend is used, and debug symbols are not stripped. This way, one can enjoy a quicker edit/compile/debug cycle with usable, if not the best, performance in built binaries.

    • Giooschi@lemmy.world · 3 points · 8 months ago

      Another option is to compile dependencies with LLVM and optimizations, which will likely be done only once in the first clean build, and then compile your main binary with Cranelift, thus getting the juicy fast compile times without having to worry about the slow dependencies.
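      A minimal sketch of that split, assuming the unstable `codegen-backend` cargo feature (profile override support is nightly-only, so details may vary by toolchain):

      ```toml
      # Cargo.toml
      cargo-features = ["codegen-backend"]

      [profile.dev]
      # Workspace crates: fast rebuilds via Cranelift
      codegen-backend = "cranelift"

      [profile.dev.package."*"]
      # Dependencies: built once with LLVM, optimized
      codegen-backend = "llvm"
      opt-level = 3
      ```

      After the first clean build, incremental rebuilds only touch the workspace crates, so the LLVM cost for dependencies is paid once.
      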

      • BB_C@programming.dev · 1 point · 8 months ago

        Yes. And to complete the pro tips, the choice of linker can be very relevant. Using mold would come recommended nowadays.
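        One common way to wire mold in on Linux (assumes clang and mold are installed; the target triple is illustrative):

        ```toml
        # .cargo/config.toml
        [target.x86_64-unknown-linux-gnu]
        linker = "clang"
        rustflags = ["-C", "link-arg=-fuse-ld=mold"]
        ```

        Alternatively, `mold -run cargo build` wraps a single invocation without any config changes.
        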

    • flying_sheep@lemmy.ml · 1 point · 8 months ago

      So that “special release build” is the build you do debugging with. Shouldn’t you just modify the otherwise useless debug profile and turn on all the optimizations necessary to make it usable?

      • BB_C@programming.dev · 2 points · 8 months ago

        Well, obviously that will depend on which defaults (and how many?!) a developer is going to change.

        https://doc.rust-lang.org/cargo/reference/profiles.html#default-profiles

        And the debug (dev) profile has its uses. It’s just not necessarily the best for typical day-to-day development in many projects.

        I actually use two steps of profile inheritance, with -cl (for Cranelift) inheriting from a special release-dev profile. A developer does not have to be limited in how they modify or construct their profiles.
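        A sketch of that two-step inheritance (the profile names follow the convention described above; the exact settings are an assumption):

        ```toml
        cargo-features = ["codegen-backend"]

        [profile.release-dev]
        inherits = "release"
        lto = "off"
        debug = "full"

        [profile.release-dev-cl]
        inherits = "release-dev"
        codegen-backend = "cranelift"
        ```

        This keeps `release-dev` usable with the default LLVM backend, while `release-dev-cl` layers only the backend switch on top.
        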

      • BB_C@programming.dev · 4 points · 8 months ago

        I read the rest of the article, and it appears to have been partially written before support for codegen backends landed in cargo.

        The latest progress report from bjorn3 includes additional details on how to configure Cargo to use the new backend by default, without an elaborate command-line dance.

        That “latest progress report” has the relevant info ;)

        So, basically, you would add this to the top of Cargo.toml:

        cargo-features = ["codegen-backend"]
        

        Then add a custom profile, for example:

        [profile.release-dev-cl]
        inherits = "release"
        lto = "off"
        debug = "full"
        codegen-backend = "cranelift"
        

        Then build with:

        cargo build --profile release-dev-cl