Yeah, I got that.
I’m asking what would be the benefit of not using a single error enum for all failure reasons?
> Later: short summary of the conclusion of what the committee didn’t do (read 307 minutes)
Fixed that for you.
If you read the post, you will see it explicitly stated and explained how the committee, or rather a few bureaucratic heads, are blocking any chance of delivering any workable addition that can provide “safety”.
This was always clear to anyone who knows how these people operate. It was always clear to me, and I have zero care or interest in the subject matter (readers may find that comment more agreeable today 🙂).
Now, from my point of view, the stalling and fake promises are kind of a necessity, because “Safe C++” is an impossibility. It will have to be either safe or C++, not both, and probably neither if one of the non-laughable solutions ever gets endorsed (so not Bjarne’s “profiles” 😁), as the serious proposals effectively add a non-C++, supposedly safe, layer, but it would still not be safe enough.
The author passionately thinks otherwise, and believes that real progress could have been made if it weren’t for the bureaucratic heads’ continued blocking and stalling tactics against any serious proposal.
> I’ll wait for the conclusion of what the C++ committee does
🤣 🤣 🤣 🤣
> naysayer
🙂
for multi-threaded workloads there aren’t many options
Anyone who actually writes Rust code knows about tracing, my friend.

We also have the ever-useful `#[track_caller]`/`Location::caller()`.

And needless to say, `dbg!()` also exists, which is better than manual printing for quick debugging.

So there is a range of options sitting between simple printing and having to resort to gdb/lldb (possibly with rr).
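A tiny sketch of that middle ground, for illustration (the `fetch_config` helper is made up, not from any real crate):

```rust
// `#[track_caller]` makes `Location::caller()` report the call site of
// `fetch_config`, not the line inside it.
#[track_caller]
fn fetch_config(key: &str) -> String {
    let caller = std::panic::Location::caller();
    println!("fetch_config({key:?}) called from {caller}");
    format!("value-for-{key}")
}

fn main() {
    // `dbg!` prints file, line, the expression, and its value to stderr,
    // and passes the value through, so it can be dropped into expressions.
    let value = dbg!(fetch_config("timeout"));
    assert!(!value.is_empty());
}
```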
But yes, skipping debugging symbols was a bad suggestion.
> It’s quite simple. Just remove the permalink field! If you are calculating it then no need to store it in the struct.
This is inefficient. It should be the other way around. Remove `base_url` and `rel_permalink`, and store `permalink` and the `rel_permalink` offset. That way, you can get a zero-cost `&str` for any of the three.
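A minimal sketch of what I mean, with made-up field and method names (not the actual struct under discussion):

```rust
struct Page {
    /// Full permalink, e.g. "https://example.org/blog/post/".
    permalink: String,
    /// Byte offset where the relative part ("/blog/post/") starts.
    rel_offset: usize,
}

impl Page {
    fn permalink(&self) -> &str {
        &self.permalink
    }

    fn base_url(&self) -> &str {
        // Everything before the relative part, borrowed for free.
        &self.permalink[..self.rel_offset]
    }

    fn rel_permalink(&self) -> &str {
        &self.permalink[self.rel_offset..]
    }
}
```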
With all due respect to the author and his wild experiments, that title does not match the linker-only focus of the content.
So not only did the post end up with two (IMHO) bad recommendations (disabling debug info, building non-relocatable binaries with musl), but it also didn’t mention other important factors like `codegen-units` and `codegen-backend`. Since, you know, code generation is the other big contributor to the cycle time (the primary contributor even, in many cases). There are also other relevant options like `lto` and `opt-level`.
Let’s assume that `opt-level` shouldn’t be changed from the defaults without good reason.
With `codegen-units`, it’s not the default that is the problem, but the fact that some projects set it to 1 (for performance optimization reasons), but without adding a separate profile for development release builds (let’s call it `release-dev`).
Same goes for `lto`, where you can have it set to `"full"` in your optimized profile, and completely `"off"` in `release-dev`.
And finally, with `codegen-backend`, you can enjoy what is probably the biggest speed-up in the cycle by using `cranelift` in your `release-dev` profile.
And of course you are not limited to just two release profiles. I usually use 3-4 myself. Profile inheritance makes this easy.
You can also set or not set some of those selectively for your dependencies. For example, not using `cranelift` for dependencies can render the runtime performance delta negligible in some projects.
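A rough sketch of what such a setup could look like in `Cargo.toml` (the `release-dev` name is just my convention, and the `codegen-backend` profile option is still unstable, so it needs a nightly toolchain with the cranelift component installed):

```toml
# Unstable feature gate required for `codegen-backend` in profiles (nightly only).
cargo-features = ["codegen-backend"]

# The fully optimized profile: slow to build, fast at runtime.
[profile.release]
lto = "full"
codegen-units = 1

# A faster "development release" profile that inherits from `release`.
[profile.release-dev]
inherits = "release"
lto = "off"
codegen-units = 16
codegen-backend = "cranelift"

# Keep LLVM for dependencies so the runtime performance delta stays small.
[profile.release-dev.package."*"]
codegen-backend = "llvm"
```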
Using the parallel rustc front-end might become an interesting option too, but it’s not ready yet.
Another meme answer: `nu`.
I never actually used `nu` for anything. But I’ve been thinking (unironically) that `nu`, with its built-in `from json` and `to json`, could be interesting.
The use-case I had in mind is not games or anything like that, but some system or dev tools that traditionally utilized shell scripts but are moving towards better languages like Python. So I thought: a single binary that embeds `nu`, but also has a lot of sub-commands that implement a lot of sub-tasks in Rust directly, and with JSON used as an exchange format, the combination could be interesting.
Now that I think about it more, this could work in both directions, with the main execution being in `nu` (what I had in mind) or in Rust.
`nu` even has an LSP server, so the development experience should theoretically be good.
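As a rough illustration of the JSON-exchange idea, a hypothetical Rust sub-command could just read JSON on stdin and write JSON on stdout, so a `nu` script can pipe through it with `from json`/`to json` (names and shape are made up here, and it assumes `serde`/`serde_json` as dependencies):

```rust
use std::io::{self, Read};

use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Request {
    paths: Vec<String>,
}

#[derive(Serialize)]
struct Response {
    processed: usize,
}

fn main() -> io::Result<()> {
    // Read the whole JSON request from stdin.
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;
    let req: Request = serde_json::from_str(&input).expect("invalid JSON request");

    // Do the actual sub-task in Rust; here we just count the inputs.
    let resp = Response { processed: req.paths.len() };

    // Write the JSON response to stdout for the next stage of the pipeline.
    println!("{}", serde_json::to_string(&resp).expect("serialization failed"));
    Ok(())
}
```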
> `Cow` does not work when you are actually required to return a reference
What does that even mean? Can you provide a practical example?
(I’m assuming you’re familiar with `Deref` and autoref/autoderef behaviors in Rust.)
`Option` not an option? `Cow`s? `LazyLock` static?

Is this going to be re-posted every month?
Anyway, I’ve come to know since then that the proposal was not a part of a damage control campaign, but rather a single person’s attempt at proposing a theoretical real solution. He misguidedly thought that there was actually an interest in some real solutions. There wasn’t, and there isn’t.
The empire is continuing with the strategy of scamming people into believing that they will produce, at some unspecified point, complete magical-mushroom guidelines and real, specified and implemented, profiles.
The proposal is destined to become perma-vaporware. The dreamy guidelines are going to be perma-WIP, the magical profiles are going to be perma-vapordocs (as in they will never actually exist, not even in theoretical form), and the bureaucracy checks will continue to be cashed.
So not only was there no concrete strike back, it wasn’t even the empire that did it.
Keep (Neo)Vim out of this.
sublemmy
Lemmy communities. Mbin/kbin magazines.
Actually, I may have been too finicky about this myself.
Since I often write my own wrapping serialization code for use with non-serde formats, I didn’t realize that `chrono::DateTime<chrono_tz::Tz>` wasn’t serde-serializable, even with the `serde` feature enabled for both crates. That’s where the biggest problem probably lies.
In the example, using `chrono_tz::Tz`, and only converting to-be-serialized values to `FixedOffset`, would probably put better focus on where the limitations/issues actually lie.
> Like do you really not see this as something that shouldn’t be mentioned in a comparison between these crates? You must recognize the difference between what you’re doing and just plopping a `Zoned` in your struct, deriving `Serialize` and `Deserialize`, and then just letting the library do the right thing for you.
If that’s how it was framed in the comparison, it would have been fine. But my original objection was regarding the `Local`+`FixedOffset` example which, IMVHO, toys, if ever so slightly, with disingenuity (no offense or aggression intended, I’m a fan).
> I think you also glossed over some of my other points. How do you write your serialization code using Chrono? Does it work with both chrono-tz and tzfile?
Something like this?
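Roughly like this sketch, anyway; `ZonedWire` and the helpers are names I’m making up here, assuming chrono, chrono-tz, and serde with its derive feature:

```rust
use chrono::{DateTime, TimeZone};
use chrono_tz::Tz;
use serde::{Deserialize, Serialize};

/// What actually goes on the wire: a Unix timestamp plus an IANA tz name.
#[derive(Serialize, Deserialize)]
struct ZonedWire {
    timestamp: i64,
    timezone: String,
}

fn to_wire(dt: &DateTime<Tz>) -> ZonedWire {
    ZonedWire {
        timestamp: dt.timestamp(),
        timezone: dt.timezone().name().to_string(),
    }
}

fn from_wire(w: &ZonedWire) -> Option<DateTime<Tz>> {
    let tz: Tz = w.timezone.parse().ok()?;
    // A UTC timestamp maps to exactly one local instant, so `single()` succeeds.
    tz.timestamp_opt(w.timestamp, 0).single()
}
```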
It could support tzfile too over the wire if it starts to expose tz names in a future version.
Why is the full presentation non-ephemerally stored instead of `(timestamp, timezone)`?
Is the use-case strictly limited to checking the validity of a future date that was generated with assumptions based on current tzdata info? That’s valid, but quite niche I would argue.
And one can adjust the wrapper to have `(timestamp, timezone, assumed_offset_at_ts)`. But yes, it can be argued that it’s not idiomatic/automatic/antecedently obvious anymore.
I think you misunderstood me.
What I meant is, someone who wants to serialize zoned datetime info using chrono can basically send a timestamp and a timezone name on the wire, e.g. `(1721599162, "America/New_York")`.
It’s not built-in support. It’s not a single human-readable string containing all the needed info that is sent on the wire. But it does provide the needed info to get correct results on the other side. And it’s the obvious thing to do, and it’s doable with trivial wrappers. No `Local`+`FixedOffset` usage required. And wrong results are not inevitable.
You would do well on a CoC board.
Friendly advice: if you hang out on microblog platforms, especially Mastodon, do it less. The echo-chamber discourse there is not good for your sanity. This is general advice, really, not just for you.