• 0 Posts
  • 273 Comments
Joined 1 year ago
Cake day: July 14th, 2023





  • ACLU, is this really that high a priority in the list of rights we need to fight for right now?

    You say this like the ACLU isn’t doing a ton of other things at the same time. Here are their 2024 plans, for example. See also https://www.aclu.org/news

    Besides that, these laws are being passed now, and they’re being passed by people who have no clue what they’re talking about. It wouldn’t make sense for the ACLU to wait until the laws are on the books to challenge them when it can lobby to prevent them from being passed in the first place.

    wouldn’t these arguments fall apart under the lens of slander?

    If you disseminate a deepfake with slanderous intent then your actions are likely already illegal under existing laws, yes, and that’s exactly the point. The ACLU is opposing new laws that are over-broad. There are gaps in the laws, and we should fill those gaps, but not at the expense of infringing upon free speech.



  • Yes, but only in very limited circumstances. If you:

    1. fork a private repo with commit A into another private repo
    2. add commit B in your fork
    3. someone makes the original repo public
    4. add commit C to the still-private fork

    then commits A and B are publicly visible, but commit C is not.

    Per the linked Github docs:

    If a public repository is made private, its public forks are split off into a new network.

    Modifying the above situation to start with a public repo:

    1. fork a public repository that has commit A
    2. make commit B in your fork
    3. delete your fork

    Commit B remains visible.

    A version of this where step 3 is to take the fork private isn’t feasible because you can’t take a fork private - you have to duplicate the repo. And duplicated repos aren’t part of the same repository network in the way that forks are, so the same situation wouldn’t apply.
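
    To make “Commit B remains visible” concrete, here’s a minimal sketch against GitHub’s REST API; the repo name and SHA are placeholders, but the commits endpoint itself is documented, and because forks share a repository network, a SHA that only ever existed in the (now deleted) fork can still resolve through the upstream repo.

    # Minimal sketch: fetch a commit by SHA through the upstream repo.
    # "octocat/example" and the SHA are placeholders, not real values.
    import requests

    UPSTREAM = "octocat/example"  # the original public repo
    SHA_OF_B = "0123456789abcdef0123456789abcdef01234567"  # commit made in the fork

    resp = requests.get(f"https://api.github.com/repos/{UPSTREAM}/commits/{SHA_OF_B}")
    print(resp.status_code)  # 200 if the commit is still reachable in the network
    if resp.ok:
        print(resp.json()["commit"]["message"])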



  • The models I’m talking about that a Pi 5 can run have billions of parameters, though. For example, Mistral 7B (here’s a guide to running it on the Pi 5) has roughly 7 billion parameters. Quantizing each parameter to 4 bits brings it down to about 3.5 GB, so it easily fits in the 8 GB model’s memory. If you have a GPU with 8+ GB of VRAM (most cards from the past few years have 8 GB or more - the 1070, 2060 Super, and 3050, and every better card in those generations, hit that mark), you have enough VRAM and more than enough speed to run Q4 versions of the 13B models (roughly 13 billion parameters), and if you have a card with 24 GB of VRAM, like the 3090, then you can run Q4 versions of the 30B models.
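
    The arithmetic behind those sizes is simple enough to sketch. This is only a back-of-the-envelope estimate (parameters times bits per parameter); it ignores the context/KV cache and runtime overhead, so real usage runs a bit higher.

    # Rough memory footprint of a quantized model: parameters * bits per parameter.
    def quantized_size_gb(params_billion: float, bits_per_param: int = 4) -> float:
        bytes_total = params_billion * 1e9 * bits_per_param / 8
        return bytes_total / 1e9

    for size in (7, 13, 30):
        print(f"{size}B at 4-bit: ~{quantized_size_gb(size):.1f} GB")
    # 7B  -> ~3.5 GB (fits an 8 GB Pi 5 or an 8 GB card)
    # 13B -> ~6.5 GB
    # 30B -> ~15.0 GB (comfortable on a 24 GB card like the 3090)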

    Apple Silicon Macs can also competently run inference for these models - for them, the limiting factor is system RAM rather than VRAM. And it’s not like you’ll need a Mac, as even Microsoft is investing in ARM CPUs with dedicated AI chips.


  • I don’t see how LLMs will get into the households any time soon. It’s not economical.

    I can run an LLM on my phone, on my tablet, on my laptop, on my desktop, or on my server. Heck, I could run a small model on the Raspberry Pi 5 if I wanted. And none of those devices have dedicated chips for AI.
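
    To make that concrete, here’s a minimal sketch of local inference using the llama-cpp-python bindings; the model path is a placeholder for whatever quantized GGUF file you’ve downloaded.

    # Minimal local-inference sketch with llama-cpp-python.
    # Assumes a quantized GGUF model has already been downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=2048,      # context window
        n_gpu_layers=0,  # 0 = CPU only; raise this if a GPU is available
    )

    out = llm("Q: Name three uses for a Raspberry Pi. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"].strip())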

    The problem with LLMs is that they require immense compute power.

    Not really, particularly if you’re talking about smaller models. Running an LLM on your GPU and sending it queries isn’t going to use more energy than gaming on that same GPU for the same amount of time would.







  • It sounds like they want a representative sample, which isn’t something I’d be confident in my ability to help them with directly, so I’d advise them to first scan for a person who’s very experienced in statistical sampling and to then work with that person to determine a strategy that will meet their goals.

    If they weren’t on board with that plan, then I’d see if they were willing to share their target sample size. If I couldn’t get a number out of them, I’d assume they were contacting 1% of the population (80 million people). I’d also let them know that being representative and selecting for traits that make encounters go smoothly are conflicting goals, so I’d prioritize representation and let them figure out the “please don’t pull a shotgun out, human!” trait on their own. Depending on all that, I’d recommend an approach that accounted for as much of the following as possible (there’s a rough sampling sketch after the list).

    • gender (male, female, non-binary)
    • race
    • culture and sub-culture (so this would include everything from religion to music to hobbies)
    • profession
    • age, broken down into micro-generations
    • mix of neurotypical and neurodivergent
    • different varieties of neurodivergence
    • range of intelligences
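
    If it helps to make “representative” concrete, proportional stratified sampling over traits like the ones above looks roughly like this sketch; the population list and its fields are hypothetical stand-ins, not anything real.

    # Minimal sketch of proportional stratified sampling.
    # `population` is a hypothetical list of dicts describing people.
    import random
    from collections import defaultdict

    def stratified_sample(population, stratum_of, total):
        strata = defaultdict(list)
        for person in population:
            strata[stratum_of(person)].append(person)
        picked = []
        for members in strata.values():
            # each stratum contributes in proportion to its share of the population
            k = round(total * len(members) / len(population))
            picked.extend(random.sample(members, min(k, len(members))))
        return picked

    # e.g. stratify on a few of the traits above:
    # sample = stratified_sample(people, lambda p: (p["gender"], p["age_band"], p["profession"]), 80_000_000)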

  • Traction control and other related features are a bigger deal than AWD in my opinion. In the past five years I’ve had AWD engage maybe twice.

    Also, you can often replace just two tires at a time instead of all four, depending on the specific vehicle and how big the difference would be between the tires you’re keeping and the ones you’re replacing. You only need to replace all four if the difference is enough to cause issues.

    There are a ton of crossover SUVs with FWD, though. Here are a few:

    • Honda CR-V
    • Toyota RAV4
    • Lexus RX 350
    • Toyota Highlander
    • Hyundai Tucson
    • Hyundai Palisade
    • Kia Telluride
    • Nissan Kicks
    • Nissan Rogue
    • Nissan Murano

  • Just so you know, Elon’s AI is “Grok,” which is unaffiliated with Groq, the AI platform used by Groqbook.

    Here’s a Gizmodo article about Groq. The notable thing about Groq is that it uses specialized “LPU” hardware in order to return results faster. It also exposes an OpenAI-compatible API, so developers can use Groq with their choice of available models (as far as I can tell that includes anything you could run with llama.cpp, though you may have to convert a model yourself if nobody’s already made it available for Groq).
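
    As a rough illustration of what “OpenAI-compatible” means, here’s a minimal sketch using the openai Python client pointed at Groq. The base URL and model name are written from memory and may need checking, and the API key is a placeholder.

    # Minimal sketch: the OpenAI Python client talking to Groq's
    # OpenAI-compatible endpoint. Base URL and model name are assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",  # Groq's endpoint (assumed)
        api_key="YOUR_GROQ_API_KEY",                # placeholder
    )

    resp = client.chat.completions.create(
        model="llama3-8b-8192",  # a Llama 3 model Groq has hosted (assumed)
        messages=[{"role": "user", "content": "Outline a short chapter about LPUs."}],
    )
    print(resp.choices[0].message.content)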

    That said, since Groqbook uses Llama3 via Groq, you could edit your quote to replace “Elon Musk” with “Mark Zuckerberg” and it wouldn’t change much.

    (To be clear, I don’t think Groqbook is made by anyone officially associated with Groq or that either is associated with Meta, but I also didn’t check.)



  • How do you define “intelligence,” precisely?

    Is my dog intelligent? What about a horse or dolphin? Macaws or chimpanzees?

    Human brains do a number of different things behind the scenes, and some of those things look an awful lot like AI. Do you consider each of them to be intelligence, or is part of intelligence not enough to call it intelligence?

    If you don’t consider it sufficient to say that part of intelligence is itself “intelligence,” then can you at least understand that some people do apply metonymy when saying the word “intelligence?”

    If I convinced you to consider it or if you already did, then can you clarify:

    The thing with machine learning is that it’s inexplicable, much like parts of the human brain are inexplicable. Traditional algorithms can be explained and understood, but machine learning - and why its efficacy keeps improving as problem spaces grow and it’s fed more and more data - isn’t truly understood, even by the people who work deeply with it. Those capabilities let these systems solve problems that are otherwise very difficult to solve algorithmically - similar to how we solve problems. Unless you think you have a deeper understanding than those researchers do, how can you, as you claim, understand machine learning and its capabilities well enough to say that it is not at least similar to a part of intelligence?