Using smart pointers doesn’t eliminate the memory safety issue; it merely addresses one aspect of it. Even with smart pointers, nothing prevents you from passing references around and using them after they’re freed.
Cool, that was an informative read!
If we were willing to leak memory, then we could write […]
Box::leak(Box::new(0))
In this example, you could have just made a constant with value 0 and returned a reference to that. It would also have a 'static lifetime and there would be no leaking.
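Something like this, as a minimal sketch:

// Leaks: allocates on the heap and never frees the allocation.
fn leaked_zero() -> &'static mut i32 {
    Box::leak(Box::new(0))
}

// No leak: a static already has a 'static lifetime, no allocation needed.
static ZERO: i32 = 0;

fn zero_ref() -> &'static i32 {
    &ZERO
}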
Why does nobody seem to be talking about this?
My guess is that the overlap in use cases between Rust and C# isn’t very large. Many places where Rust is making inroads (kernel and low-level libraries) are places where C# would be automatically disqualified because of the requirements for a runtime and garbage collection.
Nah, more senior devs often advocate for the quick fix too, for the simple reason that the economics of a “proper” fix simply don’t add up, especially when you don’t know how much value such a fix would bring anyway. If you’re always looking to create “proper” solutions in the hope that someone will thank you for it in the future, it most likely means your priorities aren’t where they should be, and it’s very unlikely anyone will thank you for it.
I even wrote a blog on this topic: https://arendjr.nl/blog/2023/04/mvp-the-most-valuable-programmer/
Sorry, I don’t understand. Do you mean there have to be 6 digits of Pi in there, or the sixth character must be π? I’m down either way.
Sure! The other day someone called it emergent architecture. I guess it goes by multiple names :)
I would suggest indirection is one of the forms of abstraction? Making abstractions to “defer” architecture seems quite counter-intuitive to me, since the thing you’re deferring is implementation. But the architectural part — the abstraction — gets front-loaded. The idea that an abstraction can truly abstract the “how” of user persistence is very much the kind of fallacy I warned against when it comes to leaky abstractions. Do you want to use async methods for persistence or not? Do you want to persist in batches or not? How will you be notified of completion or errors? The answers depend very much on the actual architectural implementation, and if you had created an upfront abstraction, you may very well find it won’t suffice if any of these variables changes. So no, I don’t think that really defers architecture at all; it merely solidifies your current assumptions about what your architecture will probably look like in the future. If any of these assumptions turn out wrong or simply undesirable later, you’ve made things harder for yourself. So if that’s the risk anyway, it’s fine to just stick with a simple concrete solution when you can.
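To make that concrete, here’s a hypothetical Rust sketch (the trait and type names are all mine, purely for illustration). Even the signatures of the abstraction already commit to answers for those questions:

struct User {
    id: u64,
    name: String,
}

struct SaveError;

// Synchronous, one-user-at-a-time persistence. Errors surface
// immediately at the call site.
trait UserStore {
    fn save(&mut self, user: &User) -> Result<(), SaveError>;
}

// Asynchronous, batched persistence. Completion and errors arrive
// later, so every caller needs differently shaped code.
trait AsyncUserStore {
    async fn save_batch(&mut self, users: &[User]) -> Result<(), SaveError>;
}

Neither trait can stand in for the other; picking one is itself an architectural decision, which is exactly the front-loading I mean.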
I do agree pure functions are harder to do in impure languages, but it’s not as bad as your example of C. Rust is very much an impure language, but its borrow-checker helps you enforce purity constraints. In fact, using pure functions there is a great way to avoid getting into fights with the borrow-checker. If you use TypeScript you can use Immer to aid you, which is also included in the Redux Toolkit.
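As a trivial sketch of what I mean in Rust: a pure function only needs shared borrows and returns fresh values, so there are no aliasing conflicts for the borrow-checker to object to:

// Pure: borrows its input immutably and returns a new value, so it
// never competes with other borrows of `values`.
fn normalized(values: &[f64]) -> Vec<f64> {
    let max = values.iter().copied().fold(f64::NEG_INFINITY, f64::max);
    values.iter().map(|v| v / max).collect()
}

fn main() {
    let values = vec![1.0, 2.0, 4.0];
    let scaled = normalized(&values);
    // `values` is still untouched and usable here.
    println!("{values:?} -> {scaled:?}");
}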
Can’t it be both? :)
Ah, fair :) then yeah, I’d say it’s pretty aligned with emerging architecture, just that I’m trying to define the values (and in the next post, technical guidelines based on those values) to (hopefully!) help you make the right decisions as you’re working on an emerging architecture.
I assume you’re referring to this blog series: https://medium.com/prospa-technology/emerging-vs-intentional-architecture-385071ae5d75? I wasn’t aware of it, and it seems to have some insightful observations! There’s definitely some overlap, but by the looks of it, I think I will diverge quite a bit with my next post. I think I’m pretty aligned on the “One-Way Decisions” vs “Two-Way Decisions” part. A One-Way decision in my mind would be: which programming language or framework do we use? Do we use REST or GraphQL?
But it doesn’t really go into how to deal with Two-Way decisions, apart from saying to trust your developers. And I think it kinda glosses over the fact that things that initially appear to be Two-Way decisions may actually be closer to One-Way decisions if you continue to build on them. So where that blog still focuses quite a bit on the process, I think I want to shift the focus a bit more to the technical side (so far I’ve mostly laid down the values that inform the technical direction), especially when it comes to Two-Way decisions. I wasn’t thinking about covering One-Way decisions much, but rather about how to shape everyday coding to be more in alignment with Post/Emerging architecture, so that you can avoid the Two-Way decisions that in retrospect aren’t as Two-Way as you’d hoped.
Hope that makes sense :D
Thanks! This mirrors quite a few experiences I’ve had over the years. And for what it’s worth, I think the way you’re handling it is not bad at all.
As someone else mentioned in the comments on Mastodon, one of the hardest things about mentoring is articulating the lessons you may not even realize you’ve learned. I don’t think anyone can be blamed for failing to teach or convince someone else, since people are simply too different for you to be able to teach and convince them all. As you say, you have to pick your battles, and as long as you let your teammates do their work respectfully in their own way, that alone is a great achievement!
JSON patch is a dangerous thing to use over a network. It lets you change things at specific array indices without knowing whether the same thing is still at that index by the time the server processes your request. That’s a recipe for race conditions.
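For example, take a hypothetical patch like this (assuming a document with an items array):

[
  { "op": "replace", "path": "/items/2", "value": "updated" }
]

If another client inserts or removes an entry before the server applies the patch, /items/2 suddenly refers to a different element, and the replace silently hits the wrong one.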
It’s limited to JS runtimes, but this discussion might be of use: https://github.com/biomejs/biome/discussions/2467. I think you may find Boa fits your criteria, except for the JIT part.
Yeah, sorting is definitely a common use case, but note it also didn’t improve every sorting use case. Anyway, even if I’m a bit skeptical, I trust the Rust team not to take these decisions lightly.
But the thing that led to my original question was: if the compiler itself uses the std sorting internally, there’s additional reason to hope it might have transitive performance benefits. So even if compiling the Rust compiler with this PR was actually slower, compiling again with the resulting compiler could be faster, since the resulting compiler benefits from faster sorting. So yeah, fingers crossed 🤞
Yeah, it was the first line of the linked PR:
This PR replaces the sort implementations with tailor-made ones that strike a balance of run-time, compile-time and binary-size, yielding run-time and compile-time improvements.
It was also repeated a few paragraphs later that the motivation for the changes was both runtime and compile-time improvements. So I’m a little bit bummed to hear the compile-time impact apparently wasn’t as good as the authors hoped. I’m not even sure I fully endorse the tradeoff, because the gains, while major, seem to only affect very select use cases, whereas the regressions seem to affect everyone and hurt in an area that is already perceived as a pain point. But oh well, the total regression is still minor, so I guess we’ll live with it.
The post mentioned that the introduction of these new algorithms brings compile-time improvements too, so how should I see this? I assumed it meant that compiling applications that use sorting would speed up, but that seems like a meaningless improvement if overall compilation times have regressed. Or do you mean compiling the compiler has become slower?
Does the Rust compiler use the std sort algorithms, or does it already use specialized ones? If the former, it would be a great side effect if the compiler itself received additional speedups because of this.
From what I understood while skimming the stable sort analysis (https://github.com/Voultapher/sort-research-rs/blob/main/writeup/driftsort_introduction/text.md), it lost out against driftsort.
For a little bit I thought this library might be a subtle joke. That, combined with compress() and decompress() not taking any arguments and not having a return value, made me think we were being played. Not to mention the library appears to be plain C rather than C++… surely the author should know the difference?
Then I saw how the interface actually works:
// interface for the library user, implement these in your program:
unsigned int SPR_in(); // Return next byte from input or value > 255 on EOF.
void SPR_out(unsigned char); // Output byte.
This seems extremely poorly thought out. Calling into global functions for input and output means the library will be a pain to use in any program that has to (de)compress more than a single input.
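For contrast, a hypothetical sketch (in Rust, since that’s where I’m most at home) of what threading the I/O through explicitly could look like, so that multiple streams can be processed independently:

// Pass the I/O explicitly instead of relying on global callbacks.
trait ByteSource {
    fn next_byte(&mut self) -> Option<u8>; // None signals EOF.
}

trait ByteSink {
    fn write_byte(&mut self, byte: u8);
}

fn compress(input: &mut dyn ByteSource, output: &mut dyn ByteSink) {
    // Real compression logic would go here; this only copies bytes to
    // show the shape of the interface.
    while let Some(byte) = input.next_byte() {
        output.write_byte(byte);
    }
}

Each call gets its own state, so a program can handle as many inputs as it wants.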
I use EndeavourOS and really enjoy it. It’s effectively Arch but without the fuss: you get a GUI installer, a few steps to set things up, and you’re good to go. I tend to upgrade once a week, checking the forums first to see nothing too bad broke. That’s basically all the maintenance there is.
When I do a new install on a new device, I just clone a repo I keep with the most important config files. Then I copy them to where they belong. There’s really not much more to it.