The article you posted is from 2023 and PERA was basically dropped. However, this article talks about PREVAIL, which would prevent patents from being challenged except by the people who were sued by the patent-holder, and it’s still relevant.
while (true) { print money; }
Someone’s never heard of Bitcoin
ACLU, is this really that high a priority in the list of rights we need to fight for right now?
You say this like the ACLU isn’t doing a ton of other things at the same time. Here are their 2024 plans, for example. See also https://www.aclu.org/news
Besides that, these laws are being passed now, and they’re being passed by people who have no clue what they’re talking about. It wouldn’t make sense for the ACLU to wait until the laws pass and then challenge them in court instead of lobbying to prevent them from being passed in the first place.
wouldn’t these arguments fall apart under the lens of slander?
If you disseminate a deepfake with slanderous intent then your actions are likely already illegal under existing laws, yes, and that’s exactly the point. The ACLU is opposing new laws that are over-broad. There are gaps in the laws, and we should fill those gaps, but not at the expense of infringing upon free speech.
What makes sourcehut better?
From a self-hosting perspective, it looks like much more of a pain to get it set up and to keep it updated. There aren’t even official Docker images or builds. (There’s this and the forks of it, but it’s unofficial and explicitly says it’s not recommended for prod use.)
Yes, but only in very limited circumstances. If you:

1. Create a private repo and push commit A
2. Fork it (the fork is private too) and push commit B to the fork
3. Make the original (upstream) repo public
4. Push commit C to the fork

then commits A and B are publicly visible, but commit C is not.
If a public repository is made private, its public forks are split off into a new network.
Modifying the above situation to start with a public repo:

1. Create a public repo and push commit A
2. Fork it and push commit B to the fork
3. Delete the fork

Commit B remains visible.
A version of this where step 3 is to take the fork private isn’t feasible because you can’t take a fork private - you have to duplicate the repo. And duplicated repos aren’t part of the same repository network in the way that forks are, so the same situation wouldn’t apply.
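If you want to poke at this yourself, a quick way to test reachability is just hitting the commit URL on the upstream repo. A rough sketch (the repo name and SHA here are made-up placeholders, not real examples):

```python
# Rough check: a commit pushed only to a fork can still be viewable through
# the upstream repo's /commit/<sha> URL while both are in the same network.
# UPSTREAM and SHA are hypothetical placeholders.
import urllib.request, urllib.error

UPSTREAM = "someuser/somerepo"
SHA = "0123456789abcdef0123456789abcdef01234567"

url = f"https://github.com/{UPSTREAM}/commit/{SHA}"
try:
    with urllib.request.urlopen(url) as resp:
        print(url, "->", resp.status)  # 200 means publicly viewable
except urllib.error.HTTPError as e:
    print(url, "->", e.code)  # 404 means not reachable via the upstream
```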
Misleading title.
The title literally spells out the concern, which is that code that is in a private or deleted repository is, in some circumstances, visible publicly.
What title would you propose?
If my thing was public in the past, and I took it private, the old public code is still public.
The “Accessing Private Repo Data” section covers a situation where code that has always been private becomes publicly visible.
The models I’m talking about that a Pi 5 can run have billions of parameters, though. For example, Mistral 7B (here’s a guide to running it on the Pi 5) has roughly 7 billion parameters. Quantized to 4 bits per parameter, it only takes up 3.5 GB of RAM, so it easily fits in the 8 GB model’s memory. If you have a GPU with 8+ GB of VRAM (most cards from the past few years do - the 1070, 2060 Super, and 3050, plus everything above them in their respective generations, hit that mark), you have enough VRAM and more than enough speed to run Q4 versions of the 13B models (roughly 13 billion parameters), and if you have one with 24 GB of VRAM, like the 3090, you can run Q4 versions of the 30B models.
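If you want to sanity-check those numbers, the back-of-the-envelope math is just parameters times bits per parameter (weights only; real usage adds overhead for the KV cache and activations):

```python
# Weights-only memory estimate for Q4 quantization, matching the figures above.
def quantized_size_gb(params_billions: float, bits_per_param: int = 4) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for params in (7, 13, 30):
    print(f"{params}B @ Q4 ~ {quantized_size_gb(params):.1f} GB")
# 7B  -> 3.5 GB  (fits the 8 GB Pi 5)
# 13B -> 6.5 GB  (fits an 8 GB GPU)
# 30B -> 15.0 GB (needs something like a 24 GB 3090)
```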
Apple Silicon Macs can also competently run inference for these models - for them, the limiting factor is system RAM rather than VRAM, since the memory is unified. And it’s not like you’ll need a Mac, as even Microsoft is investing in ARM CPUs with dedicated AI chips.
I don’t see how LLMs will get into the households any time soon. It’s not economical.
I can run an LLM on my phone, on my tablet, on my laptop, on my desktop, or on my server. Heck, I could run a small model on the Raspberry Pi 5 if I wanted. And none of those devices have dedicated chips for AI.
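For a sense of how little code that takes, here’s a minimal sketch using llama-cpp-python (pip install llama-cpp-python); the model filename is a placeholder for whatever Q4 GGUF you grab:

```python
# Minimal local inference sketch; the .gguf path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```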
The problem with LLMs is that they require immense compute power.
Not really, particularly if you’re talking about running smaller models. Running an LLM on your GPU and sending it queries isn’t going to use more energy than gaming on that same GPU for the same amount of time.
I disagree, unless you mean nautical piracy. The difference is that people are being swindled into paying them for a service that’s less effective than they represent it as being, whereas with piracy the only “loss” anyone suffers is speculative at best. What they’re doing is more like fraud, honestly. Unfortunately that speculative loss’s value is codified into law and the fraud is probably permitted as long as they have some fine print somewhere covering their asses.
Thanks for that! I recommend that anyone who wants to minimize risk follow their instructions for self-hosting:
Is the source code available and can I run my own copy locally?
Yes! The source code is available on Github. It’s a simple static HTML application and you can clone and run it by opening the index.html file in your browser. When run locally it should work when your computer is completely offline. The latest commits in the git repository are signed with my public code signing key.
Generally people don’t memorize private keys, but this is applicable when generating pass phrases to protect private keys that are stored locally.
Leaving this here in case anyone wants to use this method: https://www.eff.org/dice
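If you’d rather roll the dice in software (physical dice are kind of the point of the EFF method, so treat this as a toy sketch), something like this works, assuming you’ve downloaded their eff_large_wordlist.txt:

```python
# Toy diceware generator: five simulated d6 rolls index one word; six words total.
import secrets

with open("eff_large_wordlist.txt") as f:  # lines look like "11111<TAB>abacus"
    words = dict(line.split() for line in f if line.strip())

def roll() -> str:
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

print(" ".join(words[roll()] for _ in range(6)))
```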
I don’t know for sure, but that’s the scale I would expect (billions) and the number came from https://www.seroundtable.com/google-goo-gl-urls-to-404-37758.html
the database even for hundreds of thousands of entries shouldn’t be huge
Hundreds of thousands of entries would be negligible (at 1,000 bytes average per entry, 500k entries would be half a gigabyte), but the issue is that a full archive would be around 36 billion entries, which would put it at roughly 36 TB - and probably smaller in practice, since the average link is likely much shorter than 1,000 characters.
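Spelled out, with the 1,000-byte average as an explicit assumption:

```python
# Storage estimates in decimal units; avg_entry_bytes is an assumption.
avg_entry_bytes = 1000

print(500_000 * avg_entry_bytes / 1e9, "GB")          # 0.5 GB for 500k entries
print(36_000_000_000 * avg_entry_bytes / 1e12, "TB")  # 36.0 TB for 36 billion
```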
It sounds like they want a representative sample, which isn’t something I’d be confident in my ability to help them with directly, so I’d advise them to first scan for a person who’s very experienced in statistical sampling and to then work with that person to determine a strategy that will meet their goals.
If they weren’t on board with that plan, then I’d see if they were willing to share their target sample size. If I couldn’t get a count out of them, I’d assume they’d be contacting 1% of the population (80 million people). I’d also let them know that being representative and selecting for traits that will make encounters go smoothly are conflicting goals, so I’m prioritizing representation and they can figure out the “please don’t pull a shotgun out, human!” trait on their own. Depending on all that, I’d recommend an approach that accounted for as much of the following as possible.
Traction control and other related features are a bigger deal than AWD, in my opinion. In the past five years I’ve had AWD engage maybe twice.
Also, you can replace just two tires at a time instead of all four, depending on the specific vehicle and how big the difference will be between the tires you’re keeping and the ones you’re replacing. You only need to replace all four if the difference is enough to cause issues.
There are a ton of crossover SUVs with FWD, though. Here are a few:

- Honda CR-V
- Toyota RAV4
- Nissan Rogue
Just so you know, Elon’s AI is “Grok,” which is unaffiliated with Groq, the AI platform used by Groqbook.
Here’s a Gizmodo article about Groq. The notable thing about Groq is that it uses specialized “LPU” hardware in order to return results faster. It also exposes an OpenAI-compatible API, so developers can use Groq with their choice of available models (as far as I can tell that includes anything you could run with llama.cpp, though you may have to convert a model yourself if nobody’s already made it available for Groq).
That said, since Groqbook uses Llama3 via Groq, you could edit your quote to replace “Elon Musk” with “Mark Zuckerberg” and it wouldn’t change much.
(To be clear, I don’t think Groqbook is made by anyone officially associated with Groq or that either is associated with Meta, but I also didn’t check.)
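For anyone wanting to try it, since the API is OpenAI-compatible you can point the standard openai client at Groq’s endpoint. Rough sketch - the base URL and model name are from memory, so double-check Groq’s docs:

```python
# Sketch of calling Groq through its OpenAI-compatible API.
# The base_url and model name are assumptions; verify against Groq's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)
resp = client.chat.completions.create(
    model="llama3-8b-8192",  # a Llama 3 model as Groq served it at the time
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```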
Doesn’t their API also require you to allow-list IPs, making it basically useless for dynamic DNS?
From https://www.namecheap.com/support/api/intro/ under “Whitelisting IP.”
How do you define “intelligence,” precisely?
Is my dog intelligent? What about a horse or dolphin? Macaws or chimpanzees?
Human brains do a number of different things behind the scenes, and some of those things look an awful lot like AI. Do you consider each of them to be intelligence, or is part of intelligence not enough to call it intelligence?
If you don’t consider it sufficient to say that part of intelligence is itself “intelligence,” then can you at least understand that some people do apply metonymy when saying the word “intelligence?”
If I convinced you to consider it or if you already did, then can you clarify:
The thing with machine learning is that it is inexplicable, much like parts of the human brain are inexplicable. Algorithms can be explained and understood, but machine learning - and its efficacy with problem spaces as they get larger and it’s fed more and more data - isn’t truly understood even by the people who work deeply with it. These capabilities allow it to solve problems that are otherwise very difficult to solve algorithmically, similar to how we solve problems. Unless you think you have a deeper understanding than they do, how can you, as you claim, understand machine learning and its capabilities well enough to say that it is not at least similar to a part of intelligence?
Your comment makes no sense.