As someone who knows very little about Scheme or Arabic, what are some aspects of this language that might be novel or interesting to someone with a background in mainstream languages?
Hey, I like checked exceptions too! I honestly think it’s one of Java’s best features, but it’s hindered by the fact that try-catch is so verbose, libraries aren’t always sensible about what exceptions they throw, and methods aren’t exception-polymorphic for stuff like the Stream API. Which is to say, checked exceptions are a pain, but that’s the fault of the rest of the language around them and not the checked exceptions per se.
That texture healing looks super nice. Is that something fonts can just do or does it require special editor support?
I might buy more from Epic if their launcher weren’t So. Freaking. Slow. Even claiming the free game is such a chore that I can’t be bothered to do it. It takes several minutes to load, responds sluggishly, and lags everything else on my computer the whole time it’s running. The only game I play from them anymore is Celeste because I can start it without ever going through the launcher.
…What are they actually launching though? I mean I love the payment scheme but I can’t get excited over this without an actual good product being sold.
Do people actually use Epic? I wasn’t much of a gamer before and didn’t care for Steam, and my first real exposure to PC gaming was when Epic started their weekly giveaway of free games. I made an account, discovered some cool titles, and could have been a happy customer if only their launcher weren’t so ridiculously slow. Now I can barely even stand opening the launcher to collect the free game, let alone trying to browse for games to buy.
The one case where I prefer video is when I know next to nothing about the topic and the other choice is mediocre to low-quality writing. Most people aren’t great technical writers, and it’s easy to skip over steps, either because the writer assumes too much prior knowledge or simply because it takes effort to put that information in. Videos are the opposite: it takes effort to cut stuff out, so you usually get all the steps, which is what I need when I don’t know anything.
If I have the option of a well-written, step-by-step tutorial though, or if I already know the topic and have a vague idea of what I’m looking for, then text is much better for being able to search/skim/go back and forth at my own pace.
I guess it depends on what you mean by using monads, but you can have a monadic result type without introducing a concrete monad abstraction that it implements.
At a library level, couldn’t you have an opaque sum type where the only thing you can do with it is call a match method that requires a function pointer for each possible variant of the sum type? It’d be pretty cursed to use, but at least it wouldn’t require compiler plugins.
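The thread doesn’t name a language, so purely as an illustration, here’s roughly what that shape might look like sketched in Rust (where you’d normally just expose the enum directly); the `ParseResult` name and its variants are made up for the example:

```rust
// Illustrative sketch only: an "opaque sum type" whose variants are hidden,
// consumable only through a match-style method that takes one function per variant.
pub struct ParseResult {
    inner: Inner, // variants stay private to the library
}

enum Inner {
    Ok(i64),
    Err(String),
}

impl ParseResult {
    pub fn ok(value: i64) -> Self {
        ParseResult { inner: Inner::Ok(value) }
    }

    pub fn err(message: impl Into<String>) -> Self {
        ParseResult { inner: Inner::Err(message.into()) }
    }

    /// The only way to consume the result: supply a handler for every variant.
    pub fn match_on<T>(self, on_ok: impl FnOnce(i64) -> T, on_err: impl FnOnce(String) -> T) -> T {
        match self.inner {
            Inner::Ok(v) => on_ok(v),
            Inner::Err(e) => on_err(e),
        }
    }
}

fn main() {
    let description = ParseResult::ok(42).match_on(
        |v| format!("parsed {v}"),
        |e| format!("failed: {e}"),
    );
    println!("{description}");
}
```

Having to go through `match_on` for every access is exactly the “cursed to use” part, but nothing about it needs special compiler support.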
Really? I would argue that pocket calculators are AI.
The behavior is defined; the behavior is whatever the processor does when you read memory from address 0.
If that were true, there would be no problem. Unfortunately, what actually happens is that compilers use the undefined behavior as license to mangle your program, in the name of optimization, far beyond what mere variation in processor behavior could cause. In the kernel bug, the issue wasn’t that the null pointer dereference was undefined per se; the real issue was that the subsequent null check got optimized out because of the earlier undefined behavior.
No idea how hard it would be, but it would be nice to have code blocks with syntax highlighting like on GitHub, so you could write something like
```python
def f(x):
    return x
```
and get
lesswrong.com: I remain unconvinced by the central AI doom and Effective Altruism stuff, but the peripheral posts on rationality, math, short-form sci-fi stories, musings on random topics, etc. have been massively influential on me.
Do you care about modeling the cells? If not, you could represent each row with just a number. When X plays, add 1 to all the rows that include the position they played, and when O plays, subtract 1. If any row reaches +3 or -3, that player wins.
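Reading “rows” as all eight winning lines, a rough sketch of that counter idea might look like this (the names and the `play` API are just illustrative, and move validation is omitted):

```rust
// Each winning line (3 rows, 3 columns, 2 diagonals) is tracked as a single counter.
// X adds 1 to every line containing the played position, O subtracts 1;
// a counter hitting +3 or -3 means that player has the whole line.
const LINES: [[usize; 3]; 8] = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
    [0, 4, 8], [2, 4, 6],            // diagonals
];

#[derive(Clone, Copy, PartialEq, Debug)]
enum Player { X, O }

#[derive(Default)]
struct Board {
    line_counts: [i8; 8],
}

impl Board {
    /// Record a move at `pos` (0..9) and return the winner, if any.
    fn play(&mut self, player: Player, pos: usize) -> Option<Player> {
        let delta: i8 = if player == Player::X { 1 } else { -1 };
        for (i, line) in LINES.iter().enumerate() {
            if line.contains(&pos) {
                self.line_counts[i] += delta;
                if self.line_counts[i] == 3 {
                    return Some(Player::X);
                }
                if self.line_counts[i] == -3 {
                    return Some(Player::O);
                }
            }
        }
        None
    }
}

fn main() {
    let mut board = Board::default();
    let moves = [(Player::X, 0), (Player::O, 3), (Player::X, 1), (Player::O, 4), (Player::X, 2)];
    for &(player, pos) in &moves {
        if let Some(winner) = board.play(player, pos) {
            println!("{winner:?} wins");
        }
    }
}
```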
As for rotation/reflection invariance, that seems more like a math problem than a Rust problem.
I’m not sure this blog post makes the right comparison. Based on my admittedly limited experience, OCaml modules seem more comparable to Java classes than to packages. They’re both bundles of functions and data, except the module contains data types instead of being the data type itself. Classes have basically all the features of strong modules: separate compilation, signatures (interfaces), functors (generics), namespacing, and access control. These examples of OCaml modules are all things that would be implemented as a class in Java.
From this perspective, rather than Java lacking strong modules, it actually has them in the form of classes. It’s OCaml which lacks (or doesn’t need) an additional package system on top of its modules.
My main point is that PRQL makes no distinction. Unless you inspected that SQL output and already knew about the difference between WHERE and HAVING, you would have no idea, because in PRQL they’re both just “filter”.
Hmm, I have to disagree here. PRQL has no distinction in keyword, but it does have a distinction in where the filter goes relative to the aggregation. Given that the literal distinction being made is whether the filter happens before or after the aggregation, PRQL’s position-based distinction seems a lot clearer than SQL’s keyword-based one. Instead of seeing two different keywords, remembering that one happens before the aggregation and the other after, and then deducing the performance impacts from that, you just immediately see that one filter comes before the aggregation and the other after, and deduce the performance impacts from that.
As far as removing arbitrary SQL features goes, I agree that that’s its main advantage. However, I think either the developers or else the users of PRQL will discover that far fewer of SQL’s complexities are arbitrary than you might first assume.
That’s fair, I was just thinking of things that frustrate me with SQL, but I admittedly haven’t thought too hard about why things are that way.
What are the implications of WHERE vs HAVING? I thought the only real difference was that one happens before the aggregation and the other happens after, and all the other implications stem from that fact. PRQL’s simplification, rather than obscuring that distinction, seems like a clearer and more reasonable way to express it.
I don’t know if PRQL supports all SQL features, but I think it could while being less complex than SQL by removing arbitrary SQL complications like using different keywords for WHERE vs HAVING, only allowing column aliases in certain places, needing to recompute a transformation to use it in multiple clauses, forcing queries to be written in SELECT… FROM… WHERE… order, etc.
Why would you need to know the eccentricities of SQL? Shouldn’t it be enough to just know PRQL? The generated SQL should have the same semantics as the PRQL source, unless the transpiler is buggy.
It’s a Substack thing, not added by the author.
Going by the example in the GitHub repo, it looks like a right-to-left Lisp with Arabic keywords. Does that fully describe the language, or is there more to it than that?
I’d be interested in hearing about the parts that are more influenced by Arabic than Scheme. Are there any beyond the keyword language and writing direction? Like a new keyword that does something useful but has no equivalent in Scheme because the concept isn’t easily expressed by an English keyword?