It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
Once configured, Tor Hidden Services also just work (though you may need fresh bridges in countries where ISPs block Tor). You don't have to trust any specific third party in this case.
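For reference, a minimal hidden service stanza in torrc looks roughly like this (the directory and forwarded port are placeholder examples):

```
# /etc/tor/torrc - expose a local SSH server as a hidden service
HiddenServiceDir /var/lib/tor/my_hidden_service/
HiddenServicePort 22 127.0.0.1:22
```

After restarting tor, the generated .onion address shows up in the hostname file inside HiddenServiceDir.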
If the config prompt is the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) found that system/user roles don't matter much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.
CVEs are constantly found in complex software; that's why security updates are important. If not these, it would have been others a couple of weeks or months later. And government users can't exactly opt out of security updates, even if they come with feature regressions.
You also shouldn’t keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
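As a back-of-the-envelope sketch (the bandwidth figures are rough assumptions, not measurements): generation speed is roughly capped by memory bandwidth divided by the bytes read per token, which for a dense model is about the model size.

```bash
# Rough upper bound on tokens/s; all numbers are illustrative assumptions
model_gb=5                 # e.g. a ~7B model at Q5 quantisation
for bw in 500 80 7; do     # ~VRAM, ~dual-channel DDR5, ~NVMe swap, in GB/s
  echo "${bw} GB/s -> ~$((bw / model_gb)) tok/s at best"
done
```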
8x7B is Mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don't see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
I have been using llama.cpp, whisper.cpp, and Stable Diffusion for a long while (most often the first one). My "hub" is a collection of bash scripts and a running ssh server.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
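If it helps anyone, the fiddly part is usually the build options and, on some consumer cards, overriding the reported GPU target. Something along these lines, though the exact option names have changed between llama.cpp versions and the override value depends on your card, so treat it as a sketch:

```bash
# Build llama.cpp with HIP/ROCm support (option names vary across versions)
cmake -B build -DGGML_HIP=ON && cmake --build build -j

# Many consumer cards need a GFX version override, e.g. a lot of RDNA2 cards:
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./build/bin/llama-cli -m model.gguf -p "Hello"
```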
My use isn’t intensive enough to warrant measuring energy costs.
I see!
And it was a stable OS version, not a beta or something? That's the worst kind of bug. Hopefully manufacturers start formally verifying hardware and firmware as standard practice in the future.
Other than what I said in the other reply:
I live in the USA, so getting one would be problematic, but I hear it's perhaps not entirely impossible for me.
Looks like it has a US release? If you’re unsure or getting a European version, double-check it’s compatible with American wireless network frequencies &c. Specific operators might also have their own shenanigans.
Do you know how it compares to e.g. Fairphone?
Nope, never tried Fairphone.
Very solid, I think (except water protection, but my previous OnePlus also didn’t have good water protection anyway; and I’m careful enough).
I don't tend to use the glyphs or the default launcher (and therefore its special widgets, which only work there; but being able to keep apps in folders on my main screen while hiding them from the app menu matters more to me than a handful of widgets, so Neo Launcher it is).
A recent OS update added configurable swap (up to 8GB), calling it "RAM booster". I don't use it, but if you want to run a local LLM (or rather an SLM), you could try making use of it? As long as you figure out how to make the model use main RAM and not the swap.
I like the battery life (or maybe it’s just because it’s the first phone where I started charging at 20% and stopping at 80% semi-consistently).
Termux still works, despite newer Android versions becoming more hostile to apps that execute binaries they didn't ship with.
One thing I miss from OnePlus is the ability to deny some apps network access entirely. (I think it was removed in later versions of Oxygen OS?)
Also was a OnePlus user - now switched to Nothing Phone (2).
I don’t focus on recommendations specifically. My typical process is:
Skip some of these if irrelevant or if you don’t care enough. Spend extra time if you care a lot.
It works well enough for every new phone (that market changes fast, so you start anew every time); it worked for the first PC I decided to assemble with zero prior knowledge, for the mechanical keyboard and the vertical mouse, and for pretty much every piece of tech I buy.
And I’d say it’s reasonable to use Reddit without an account even if you disagree with what the platform owners are doing. The data is still valuable for such use cases.
You’re welcome!
As far as I understand, all of them can be made to work locally (especially if your local model is served via an OpenAI-compatible API, e.g. see llama.cpp's server binary), with varying degrees of effort required.
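For example, llama.cpp's server exposes an OpenAI-compatible endpoint you can point those tools at (paths and the model file are placeholders; the binary has been called server or llama-server depending on the version):

```bash
# Serve a local model over an OpenAI-compatible HTTP API
./llama-server -m ./models/model.gguf --port 8080

# Point the tool at http://localhost:8080/v1, or test by hand:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```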
Never ran RAG, so unfortunately no. But there are quite a few projects doing the necessary handling already - I'd expect them to have manuals.
I’m using local models. Why pay somebody else or hand them my data?
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
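Something along these lines (file names are examples, and the binary is called main or whisper-cli depending on the version):

```bash
# whisper.cpp expects 16 kHz mono WAV input
ffmpeg -i meeting.mp3 -ar 16000 -ac 1 meeting.wav

# Transcribe to meeting.txt
./main -m models/ggml-base.en.bin -f meeting.wav -otxt -of meeting
```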
The underlying Whisper models are MIT-licensed.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
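Roughly like this (model path and prompt are placeholders; older llama.cpp versions call the binary main instead of llama-cli):

```bash
# Ask a local model to summarise the transcript;
# long transcripts may need a bigger context window, e.g. -c 8192
./llama-cli -m ./models/model.gguf -n 512 \
  -p "Summarise the following meeting transcript:
$(cat meeting.txt)"
```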
You can also write a small bash/python script to make the process a bit more automatic.
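A minimal sketch of such a wrapper, assuming the binaries above are on your PATH and using placeholder file names:

```bash
#!/usr/bin/env bash
# summarise_recording.sh <audio-file>: transcribe locally, then summarise.
set -euo pipefail

audio="$1"
wav="${audio%.*}.16k.wav"
base="${audio%.*}"

ffmpeg -y -i "$audio" -ar 16000 -ac 1 "$wav"
whisper-cli -m models/ggml-base.en.bin -f "$wav" -otxt -of "$base"
llama-cli -m models/model.gguf -n 512 \
  -p "Summarise the following transcript:
$(cat "$base.txt")"
```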