I’m looking for the full picture in a nutshell, from the skeptical worst-case scenario to the best-case intent, not the copy-pasta fanboi version. There must be a reason Android makes me aware of DLC (dynamically loaded code) from apps, including DLC loaded from storage and from memory, and I’m looking for why.

I’ve also been trying to track down why AI on offline hardware shows some signs of a shadowed persistence even though the model’s GGUF file remains static. I can’t say for certain, but, for instance, models that once struggled with many advanced science-fiction concepts (no aliens, cislunar space, sociopolitical structures different from the present, AI in an Asimov-like context, life in O’Neill cylinders) eventually, after several long sessions of struggling, manage to handle these concepts in parallel, and much more, with ease. Part of it is certainly how I develop the language to communicate the concepts, and I may be the one hallucinating that some external mechanism is in play. I’m admittedly struggling to understand the full scope of how model caching, Transformers, PyTorch, and Nvidia’s software tools work together, beyond the basics I’ve picked up while hacking on model attention to add some scaling.
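For what it’s worth, the “static GGUF” part is directly testable: if the weights on disk were being silently modified between sessions, a cryptographic hash of the file would change. A minimal sketch (model.gguf is a placeholder; the dummy file here only stands in for a real model so the commands run as-is):

```shell
#!/bin/sh
# Record a digest of the model file before your sessions, then re-check it
# afterward. Identical digests mean the weights on disk did not change;
# any apparent "learning" within a session is then runtime state (the KV
# cache / context window), which is discarded when the process exits.
MODEL=model.gguf
printf 'dummy weights' > "$MODEL"      # stand-in for a real GGUF file

sha256sum "$MODEL" > "$MODEL.sha256"   # before: record the digest
# ... run your long chat sessions here ...
sha256sum -c "$MODEL.sha256"           # after: prints "model.gguf: OK" if unchanged
```

If that check passes across sessions, the persistence you’re seeing lives somewhere other than the model file.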

How does one monitor and verify DLC? And is there a broader JIT angle in this context as well?
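One concrete way to monitor dynamically loaded code on Linux is to inspect a running process’s memory maps: anything a program pulls in via dlopen shows up there as a mapped shared object. A minimal sketch, assuming a Linux procfs ($$ is just this shell’s own PID, used as a demo target; substitute the PID of the process you’re auditing):

```shell
#!/bin/sh
# List the shared objects currently mapped into a process. Comparing this
# list shortly after startup vs. later can reveal code loaded at runtime.
grep -o '/[^ ]*\.so[^ ]*' "/proc/$$/maps" | sort -u
```

Even a plain shell will show libc and friends; the interesting finds are libraries that appear only after the program has been running for a while.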

I run grep -rin http on everything I download in general. Is there anything else to be mindful of, specifically related to DLC/JIT?
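Beyond URLs, you can grep a download for the common entry points that dynamic code loading goes through. A sketch in the same spirit as the http search; the pattern list is illustrative, not exhaustive, and demo_dl is a throwaway directory created only so the command has a hit to show:

```shell
#!/bin/sh
# Search source for typical dynamic-loading calls: dlopen (C/C++),
# DexClassLoader (Android), Runtime.exec (Java), eval (scripts),
# ctypes/importlib (Python).
mkdir -p demo_dl
printf 'handle = dlopen("libfoo.so", RTLD_NOW);\n' > demo_dl/sample.c
grep -rinE 'dlopen|DexClassLoader|Runtime\.exec|eval\(|ctypes|importlib' demo_dl
```

A hit is not proof of anything malicious, since plenty of legitimate software loads plugins this way, but it tells you where to read more closely.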

  • mintdaniel42@futurology.today
    7 days ago

    It’s the same as any other code you’ll run on your machine. Privacy has nothing to do with DLC, and especially not with JIT.