Doesn’t prolog already “not work half the time”? (Disclaimer: I haven’t used it.)
The article is more about the behavior of members of the C++ committee than about the language. (It also has quite a few tangents.)
I understand what you’re saying, but I want to do whatever I can to promote the shift in attitudes that’s already happening across the industry.
And being late or never delivering out of fear of shipping buggy code is even worse.
From a business perspective, yes, usually true. But shipping buggy software can also harm your company’s reputation. I doubt this has been researched enough to be quantifiable yet, but it’s easy to think of companies that were well known for shipping bugs (Microsoft, CD Projekt Red) and eventually suffered for it in one way or another. In both of those cases, you’re probably right; Windows was good enough in the 90s to dominate the desktop market, and Cyberpunk 2077 was enough of a technical marvel (for those who had the hardware to experience it) that it probably bolstered the studio’s reputation more than harmed it. But could Microsoft have weathered the transition to mobile OSes better if it hadn’t left so many consumers yearning for more reliable software? And is Microsoft not partly to blame for the general public simply expecting computers to be flaky and unreliable?
Imagine if OSes in the 90s crashed as rarely as desktop OSes today. Imagine if desktop OSes today crashed as rarely as mobile OSes today. Imagine if mobile OSes crashed rarely enough that the average consumer never experienced it. Wouldn’t that be a better state of things overall?
I care about types not just because I like having stronger confidence in my own software, but because, as a user, bugs are really annoying, and yes, I’m confident that stronger type systems could have caught bugs I’ve seen in the wild as a user.
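To make that concrete, here’s the kind of thing I mean (a minimal sketch with hypothetical names, not a bug from any particular product):

```rust
// A possibly-absent value must be an Option; the compiler refuses to let
// you use it without handling the None case, ruling out the classic
// "forgot to check for null" bug at compile time.
struct User {
    email: Option<String>,
}

fn notify(user: &User) {
    // Writing `user.email.len()` here would be a type error; we are
    // forced to acknowledge that the address may be missing.
    match &user.email {
        Some(addr) => println!("sending mail to {addr}"),
        None => println!("no email on file; skipping"),
    }
}

fn main() {
    notify(&User { email: None });
    notify(&User {
        email: Some("someone@example.com".to_string()),
    });
}
```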
This article somehow links to both the Reference and the Ferrocene spec, but still concludes that an official non-Ferrocene spec is necessary.
Why doesn’t the Ferrocene spec accomplish what the author wants? He states:
In other words, without a clear and authoritative specification, Rust cannot be used to achieve EAL5.
What? Why can’t the Ferrocene spec (and compiler) be used? Do Ferrocene and TÜV SÜD not count as “some group of experts”?
(Regarding the author’s opening paragraphs, the Reference does make the same distinction about drop scopes for variables versus temporaries, though I can see why he finds the Ferrocene spec clearer. But that doesn’t demonstrate that the Reference is useless as a stand-in for a specification.)
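For anyone who hasn’t read that part of the Reference, here’s a minimal sketch of the distinction both documents describe: a named variable is dropped at the end of its enclosing block, while a temporary is (in the common case) dropped at the end of the enclosing statement:

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _named = Noisy("named variable");
    // The temporary created here is dropped at the end of this statement:
    println!("temporary says {}", Noisy("temporary").0);
    println!("end of main");
    // Output order:
    //   temporary says temporary
    //   dropping temporary
    //   end of main
    //   dropping named variable
}
```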
That’s actually not how any language has ever been developed, though it’s easy to get that impression from how much the C and C++ communities emphasize their formal specifications.
But in fact, both languages were in production use for over a decade before they had a formal spec. And languages with formal specifications are actually a tiny minority of programming languages.
There is indeed a caveat in the introduction to the Reference that there may be statements in it that are specific to rustc. However, the authors strive to keep statements about the implementation separate from statements about the language.
The main reason there’s not yet an “official” spec is that creating one takes enormous time and money, which are always limited resources. (Note that both C and C++ had no formal standard for over a decade after their initial release.) The Reference is “good enough” to make a formal spec not strictly necessary, and the existence of Ferrocene makes it even less necessary, since anyone who absolutely needs a spec can use Ferrocene.
You can say the Rust implementation is wrong if it doesn’t conform to the Reference. That is not the same as “you personally disagree with the behavior.”
Rust’s guarantees about the behavior of safe code are far stronger than anything C or C++ provides, with or without a formal spec.
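A small illustration of what I mean (my own sketch, not something from the article):

```rust
fn main() {
    let v = vec![1, 2, 3];
    let i = 10;

    // Checked access: absence of a value is an ordinary, defined outcome.
    match v.get(i) {
        Some(x) => println!("v[{i}] = {x}"),
        None => println!("index {i} is out of bounds; handled safely"),
    }

    // Even the unchecked-looking form is defined: `v[i]` is guaranteed to
    // panic with a bounds error here, never to read adjacent memory.
    // The equivalent C index into a 3-element array is undefined behavior.
}
```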
But do they actually have autonomy, given that random companies can use .io and .ai? Or did the British Indian Ocean Territory and Anguilla approve all such uses of those domains?
Obviously this isn’t specific to Rust, but frankly it’s bizarre to me that top-level domains were tied to country codes in the first place. Languages might have made sense, but a major feature of the internet is that it’s less beholden to political boundaries than most of the physical world is.
rm -rf is the only version that makes sense, since the only reason to delete and re-clone is to recover from an unexpected .git/ state, and git rm won’t remove that.
The second one!
The bit in Big Hero 6 with the video records of Tadashi inventing Baymax is about as close to this as I’ve ever seen in a sci-fi action movie.
Sorry, why would you be “boned” if you have UTC time? Are you thinking of the case where the desired behavior is to preserve the local time, rather than the absolute time?
I’m not totally clear on why signals are used here in the first place. Arguably most C code doesn’t “need” to use signals in complex ways, either.
The trope will be “old” once the mainstream view is no longer that C-style memory management is “good enough”.
That said, this particular vulnerability was primarily due to how signals work, which I understand to be kind of unavoidably terrible in any language.
Why do you think most early adopters use Windows exclusively?
Indeed, I had no idea there are multiple languages referred to as “APL”.
I feel like most people defending C++ resort to “people shouldn’t use those features that way”. 😅
As far as I can tell, pointer arithmetic was not originally part of PASCAL; it’s just included as an extension in many implementations, but not all. Delphi, the most common modern dialect, only has optional pointer arithmetic, and only in certain regions of the code, kind of like unsafe in Rust. There are also optional bounds checks in many (possibly most) dialects. And in any case, there are other ways in which C is unsafe.
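For anyone unfamiliar with the Rust side of that analogy, a minimal sketch (my own, not from any of the linked material):

```rust
fn main() {
    let data = [10i32, 20, 30];
    let p = data.as_ptr();

    // Raw pointer arithmetic and dereferencing compile only inside an
    // `unsafe` block; the programmer, not the compiler, vouches for bounds.
    let second = unsafe { *p.add(1) };
    assert_eq!(second, 20);

    // Outside `unsafe`, the safe equivalent is bounds-checked indexing.
    assert_eq!(data[1], 20);
}
```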
True, but AFAIK they all sucked really bad.
That’s pure assumption and, as far as I can tell, not actually true. PASCAL was a strong contender. No language was competitive with handwritten assembly for several decades after C’s invention, and there’s no fundamental reason why PASCAL couldn’t benefit from intense compiler optimizations just as C has.
Here are some papers from before C “won”, a more recent article about how PASCAL “lost”, and a forum thread about what using PASCAL was actually like. None of them indicate a strong performance advantage for C.
How marvelously creative. What an abomination.