Well, I’ve solved it! I now have a web interface (accessible via VPN, although, in principle, I could expose it to the internet) that allows fast, full-text search of all my old emails. Here is the recipe:
1. Run notmuch setup & notmuch new. This creates a new folder in your maildir directory containing full-text search info.
2. Install netviel with python3 -m pip install netviel and then run it via python3 -m netviel.
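For concreteness, here is the whole recipe as shell commands. This is a minimal sketch, assuming your mail already lives in a maildir (notmuch setup asks interactively where that is):

# Index the maildir (notmuch setup prompts for your mail path and other settings)
notmuch setup
notmuch new           # builds the .notmuch full-text index inside the maildir

# Install and start the netviel web UI
python3 -m pip install netviel
python3 -m netviel    # serves the search interface on 127.0.0.1 only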
That’s it! This lets you search locally. I actually did a few more steps because I wanted to containerize this thing so I could run it on my NAS. I’d be happy to go into detail about that too, if you’re interested. One hiccup was that, for some reason, netviel binds to 127.0.0.1 instead of 0.0.0.0, and there is no way to change that without modifying the project yourself. But I found a workaround for my Docker container: socat bound to 0.0.0.0 redirects requests to netviel, so requests from other computers appear local to netviel.
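The workaround boils down to a single socat process running alongside netviel. A rough sketch, where both ports are assumptions (8080 as the externally visible port, 5000 as the local port netviel reports on startup; adjust to your setup):

# Listen on all interfaces and forward to netviel, which only binds to localhost,
# so connections from other machines look local to it.
socat TCP-LISTEN:8080,fork,reuseaddr,bind=0.0.0.0 TCP:127.0.0.1:5000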
Anyway, that makes it all sound more complicated than it is. I am super-pleased to have solved this problem at last!
I got his meaning. Isn’t that the purpose of language, in the end?
Thanks for this! I’m going to try to get this set up. It sounds perfect.
Yes, I’m coming to similar conclusions myself. To be fair, encryption is a configurable option with Mailpiler. But, yes, it is all digested and stored in a MySQL database, which is definitely more opaque than plaintext in the filesystem. I might try the mutt + notmuch solution described by @marty_relaxes@discuss.technics.de below. Sounds like it might be a challenge to set up but would work great forever after. I’ll need to figure out how to convert my mbox files to maildir, but Google suggests there are tools for that. Good luck to you, let us know what you ultimately figure out! I’ve been working on this off-and-on for a few months now without figuring out a good solution!
Edit: I guess, if you want fast full-text search, a database will have to enter the equation somewhere, though.
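On the mbox-to-maildir conversion mentioned above: one option that needs nothing beyond Python’s standard-library mailbox module is sketched below. The paths are placeholders for wherever your Takeout export and target maildir actually live:

# Hypothetical paths; adjust to your Takeout mbox and maildir locations.
python3 - <<'EOF'
import mailbox, os

src = mailbox.mbox(os.path.expanduser("~/takeout/all-mail.mbox"))
dst = mailbox.Maildir(os.path.expanduser("~/mail"), create=True)

for msg in src:   # copies every message; simple, but slow for very large archives
    dst.add(msg)
EOF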
Alas, no! Things seemed to be going well: I got >90k messages imported from my Google Takeout mbox file before the import was interrupted (not mailpiler’s fault). At this point, I logged into the “auditor” account and was able to see my emails and search them. But then I resumed the import. By the end of today, the import was finished (~150k messages total). When I logged in with the auditor account, I got a “No search results” error and there was nothing I could do about it. This is actually what happened last time I tried mailpiler, too, now that I recall. All seemed fine, but it seems the database got corrupted somewhere along the way… So now it’s useless. I might try it one more time over the next few days. I’ll keep y’all posted.
Does mutt have search capabilities? Is it optimized such that it would be effective with large mailboxes? Thanks!
I am currently working on this. Finally got the Docker working and am importing my 15GB mbox as we speak! I’ll post back here about how it works out.
What browser are you using?
Even better, so it mutates into superior data!
brick it 4 times
I’d be impressed if the battery lasted long enough for that!
I just spent an hour trying to get this installed in a Proxmox VM. No dice. After install, it just boots to the GRUB rescue prompt. Oh well, seems like a cool idea.
And farts.
I was with you until you did potatoes dirty.
Fun fact: Brussels sprouts taste better now because the bitterness was intentionally selectively bred out of them in the '90s. They were, apparently, only bitter for a period of time because the varieties that were easiest to mechanically harvest happened to be bitter; before mechanical harvesting, less bitter varieties were more popular.
https://www.mentalfloss.com/posts/do-brussels-sprouts-taste-better-now-yes-here-s-why-01ghed9q8dr8
Awesome! You too.
Let me know how it works out for you! I’m happy to be able to share this. I was very pleased with myself but had no one to tell haha. I actually have several copies of this set up with each Gluetun instance connected to different countries. Then, changing country is as easy as changing your tailnet exit node!
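For anyone wondering what that switch looks like in practice, on a recent Tailscale client it is a single command (the node name here matches the TS_HOSTNAME from the compose file below; older clients use tailscale up --exit-node=... instead):

# Send this machine's traffic through the chosen exit node
tailscale set --exit-node=airvpn-exit-node

# Stop using an exit node
tailscale set --exit-node=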
I have solved this problem! The trick is to use two Docker containers: a Gluetun container that holds the WireGuard connection to the commercial VPN provider, and a Tailscale container that shares Gluetun’s network stack and advertises itself as an exit node.
Here is an example docker-compose.yml:
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    # line above must be uncommented to allow external containers to connect.
    # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-container-to-gluetun.md#external-container-to-gluetun
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ./gluetun:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=airvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=xxx
      - WIREGUARD_PRESHARED_KEY=xxx
      - WIREGUARD_ADDRESSES=xxx
      - WIREGUARD_MTU=1320
      - SERVER_COUNTRIES=United States
      # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup
      # Timezone for accurate log times
      - TZ=America/New_York
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
  tailscale:
    container_name: tailscale
    cap_add:
      - NET_ADMIN
      - NET_RAW
    volumes:
      - ./tailscale/var/lib:/var/lib
      - ./tailscale/state:/state
      - /dev/net/tun:/dev/net/tun
    network_mode: "service:gluetun"
    restart: unless-stopped
    environment:
      - TS_HOSTNAME=airvpn-exit-node
      - TS_AUTHKEY=xxxxxxxx
      - TS_EXTRA_ARGS=--login-server=https://example.com --advertise-exit-node
      - TS_NO_LOGS_NO_SUPPORT=true
      - TS_STATE_DIR=/state
    image: tailscale/tailscale
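To round this out, bringing the pair up and sanity-checking it looks roughly like this. Treat the grep pattern as a guess at Gluetun’s log wording; the point is just to confirm the tunnel’s egress IP before enabling the exit node:

docker compose up -d

# Check that Gluetun established the VPN and note the public IP it reports
docker logs gluetun 2>&1 | grep -i "public ip"

After that, the advertised exit node typically still has to be approved in the Tailscale admin console (or on your Headscale server, given the --login-server flag above) before clients can select it.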
I have found Nginx Proxy Manager to be a huge time-saver for configuring nginx and certbot.
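In case it saves someone a search, the quickest way to try it is the project’s single container. This is a sketch from memory, so double-check the image tag and ports against the Nginx Proxy Manager docs before relying on it:

# Ports: 80/443 for proxied traffic, 81 for the admin web UI
docker run -d \
  --name nginx-proxy-manager \
  -p 80:80 -p 81:81 -p 443:443 \
  -v "$PWD/data":/data \
  -v "$PWD/letsencrypt":/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest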
How well do they self-clean? How often do you need to clean it manually?