There’s also a list here, though last updated in 2020: https://distributedcomputing.info/projects.html
Most of those projects remain active in some form.
For those of us using screen readers, this is a way bigger deal. Honestly, I probably shouldn’t use a Bluetooth headset and a Bluetooth keyboard for my banking. We focus so much on SSL/HTTPS and Wi-Fi security, but I wonder how much effort goes into wireless keyboard security? Not nearly as much, I’d bet.
Problem was that I usually only discovered the issue when I went to read the book lol
I never did that, my connection was too slow to want to take up someone’s DCC slot for like a day to get an entire movie. Remember all the frustrating idiots who would share .lit files, but forget to remove the DRM from them?
Ah, good to know. Back in my day, when we had to walk a hundred miles to school in the snow, uphill both ways, IRC was the only place to get ebooks. I’m guessing it’s just the old users clinging on now.
Man, I’m getting flashbacks to my days running OmenServe on Undernet. I had no idea people were still doing this! How does the content compare to places like Anna’s Archive these days?
Also, if you don’t feel comfortable building bookworm from source yourself, and you feel like you can trust me, here’s a build of the latest bookworm code from GitHub for 64-bit Windows: https://www.sendspace.com/pro/dl/rd388d
If you use Bookworm and use the built-in support for espeak, you can get up to 600 words per minute or so. Dectalk can go well over 900 words per minute. As far as I know, cocoa tops out at around 500 words per minute. So all of the options except Piper should be fine for you.
> No, Mistral 7B can’t describe or work with images.

Thanks for answering!
It really depends on your use case. If you want something that sounds pretty okay, and is decently fast, Piper fits the bill. However, this is just a command-line TTS system; you’ll need to build all the supporting infrastructure yourself if you want it to read audiobooks (see the sketch below). https://github.com/rhasspy/piper
An extension for the free and open source NVDA screen reader to use piper lives here: https://github.com/mush42/piper-nvda
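To give you an idea of the kind of glue code you’d end up writing, here’s a bare-bones sketch that pipes a text file through the piper binary. The binary path and voice model filename are placeholders, not part of any official install; grab whichever voice you want from the repo’s releases.

```python
# Minimal sketch: feed a text file to the piper CLI and get a wav back.
# PIPER_BIN and VOICE_MODEL are placeholder paths; adjust for your setup.
import subprocess
from pathlib import Path

PIPER_BIN = "./piper"                      # assumed location of the piper binary
VOICE_MODEL = "en_US-lessac-medium.onnx"   # assumed voice model file

def speak_file(text_path: str, wav_path: str) -> None:
    """Render a plain-text file to a wav file via piper's stdin interface."""
    text = Path(text_path).read_text(encoding="utf-8")
    # piper reads text on stdin and writes audio to --output_file
    subprocess.run(
        [PIPER_BIN, "--model", VOICE_MODEL, "--output_file", wav_path],
        input=text.encode("utf-8"),
        check=True,
    )

if __name__ == "__main__":
    speak_file("chapter1.txt", "chapter1.wav")
```

That’s just the synthesis step; a real audiobook reader would still need chapter splitting, resume points, playback, and so on.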
If you want something that can run in realtime, though sounds somewhat robotic, you want Dectalk. This repo comes with libraries and DLLs, as well as several sample applications. Note, however, that the licensing status of this code is…uh…dubious, to say the least. Dectalk was abandonware for years, and the developer leaked the source code on a mailing list in the 2000s. However, ownership of the code was recently re-established, and Dectalk is now a commercial product once again. But the new owners haven’t come after the repo yet: https://github.com/dectalk/dectalk
If you want a robotic but realtime voice that’s fully FOSS with known licensing status, you want espeak-ng: https://github.com/espeak-ng/espeak-ng
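One nice thing about espeak-ng is how easy it is to script. Here’s a quick sketch; the -s (speed, in words per minute) and -w (write to wav) flags are standard espeak-ng options, but the wrapper function itself is just something I made up for this example.

```python
# Sketch: drive espeak-ng from Python at a screen-reader-style reading rate.
# -s sets speed in words per minute; -w writes a wav instead of playing aloud.
import subprocess

def say(text: str, wpm: int = 400, wav_out: str | None = None) -> None:
    """Speak text with espeak-ng, optionally writing a wav file."""
    cmd = ["espeak-ng", "-s", str(wpm)]
    if wav_out:
        cmd += ["-w", wav_out]
    cmd.append(text)
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    say("This is roughly four hundred words per minute.", wpm=400)
```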
If you want a fully fledged software application to read things to you, but don’t need a screen reader and don’t want to build scripts yourself, you want bookworm: https://github.com/blindpandas/bookworm
Note, however, that you should build bookworm from source. While the author accepts pull requests, because of his circumstances, he’s no longer able to build new releases: https://github.com/blindpandas/bookworm/discussions/224
If you are okay with using closed-source freeware, Balabolka is another way to go to get a full text to speech reader: https://www.cross-plus-a.com/balabolka.htm
Can Mistral describe images yet? Not sure if it’s multi-modal or not. If it could that would be a super useful feature for those of us over on rblind.com. And/or is the code available somewhere for us to hack in something like openrouter and spin up a copy?
Personally I find myself renting GPU time and running Goliath 120B. Smaller models could do what I’m doing if I spent more time optimizing my prompts. But every day I’m doing different tasks, and Goliath 120B will just handle whatever I throw at it, no matter how sloppy I am. I’ve also been playing with LLAVA and Hermes vision models to describe images to me. However, when I really need alt-text for an image I can’t see, I still find myself resorting to GPT-4; the open source options just aren’t as accurate or detailed.
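If you want to try this yourself, here’s roughly what asking a vision model for alt-text looks like against an OpenAI-compatible endpoint (the style of API OpenRouter and most local LLAVA servers expose). The endpoint URL and model name below are placeholders for whatever you’re actually running, not specific recommendations.

```python
# Rough sketch: request alt-text from a vision-capable model over an
# OpenAI-compatible chat completions API. API_URL and MODEL are placeholders.
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
MODEL = "llava-example"                                # placeholder model name

def describe_image(image_url: str) -> str:
    """Ask the model for a short alt-text style description of an image."""
    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt-text for this image, for a blind reader."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(describe_image("https://example.com/photo.jpg"))
```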
Apparently! I don’t hide my data in any way, and constantly get ads in languages I don’t speak. Usually French, but sometimes Hindi or Chinese. And as a blind person myself, I’m not sure that my well paid full time job working in large enterprise and big tech accessibility is altruism deserving of thanks haha.
I assume it’s because I live in Canada, and big American data just assumes all Canadians speak French. I regularly get French ads on English websites.
I don’t block anything. I work in accessibility, so it’s important to me to know what the experiences are like for my fellow users with disabilities. I also don’t want to recommend sites or apps that are riddled with inaccessible ads; I’d rather not give them traffic at all. Yet even though I let them track me, I still get ads in a language I don’t speak for cars I can’t drive. What’re they doing with all that data?
A couple reasons, I think:
AI dubbing: this makes it way easier for YouTube to add secondary dubbed tracks to videos in multiple languages. Based on the Google push to add AI into everything, including creating AI-related OKRs, that’s probably a primary driver. Support for multiple audio tracks is just needed infrastructure for AI dubbing.
Audio description: Google is fighting enough antitrust-related legal battles right now. The fact that YouTube doesn’t support audio description for those of us who are blind has been an issue for a long time, and now that basically every other video streaming service supports it, I suspect they’re starting to feel increased pressure to get on board. Once again, support for multiple audio tracks is needed infrastructure for offering audio description.
Surprised nobody has mentioned my two favourites:
Most of the other stuff I listen to is either industry-specific or fandom/hobby-specific.
I run the RBlind.com Lemmy instance at Accuris Hosting. Decent virtual machines, easy IPv6 support, and everything works fine. Prices are a bit on the high end, but it’s worth it to me to use a provider located in my country, where I understand all of the associated laws and can pay in my own currency via my local bank. Also, I’d rather not give money to big tech if I can help it, and support local business instead. This isn’t sponsored or anything; I’m just a mostly contented customer.
Also, of course, the fact that the control panel is screen-reader accessible is super important to me, though I doubt anyone else cares. Unfortunately, that’s not yet the case with most of the larger cloud providers like AWS. And if they do deploy an inaccessible update, the company is small enough that I can send an email and get an answer from a human who has actually read what I wrote, rather than a corporate AI.
Thanks! I didn’t realize there was an announcement on Lemmy, or I would have searched. Unfortunately, screenshots are kind of the only way to share posts on Discord, because you can’t link someone to a Discord message on a server they’re not a member of, so I can’t blame you for a screenshot there. However, it is possible to add alt-text to images you post on Lemmy. :-)
Yes and no. I left during the API drama and the blackout. First, moving communities wholesale just never works. Community archives don’t migrate, the affordances are different from site to site, etc. That’s why we (speaking for all the folks who run the ourblind.com set of communities) run a Reddit, a Discord, and of course the rblind.com Lemmy. The members and culture are wildly different between the three. And that’s fine. Though because of moderation issues, these days all posts to /r/blind need approval, and sometimes approval can take a day or more. However, Reddit’s decision to exempt the accessibility-focused clients (Luna and Dystopia) that most blind folks use meant that a lot of blind people preferred to stay on Reddit, especially those who just consume content from other communities.
Second, creating a home for a new community, and doing it properly, takes a lot of time and effort! It’s taken us over a year to get the server infrastructure for rblind.com to a place I’m happy with. We had almost a week of downtime a while back, and until recently email delivery was extremely dodgy. While those things are fixed now, we’re still in the process of creating a custom (more accessible) theme for our Lemmy. So even over a year later, I would still consider the rblind.com Lemmy to be in an alpha state. Signups are more than welcome, but we’re not actively working to push people over from elsewhere. Despite that, we’ve got a couple active daily users (mostly in off-site communities), folks make regular posts to our main community from Mastodon, and we’ve got a couple hundred registered users. It took the Reddit about five years to really take off, and even the Discord took a couple years before it started popping. So I’m happy for Lemmy to slowly build at its own pace, into whatever it decides to become, without trying to make it a clone of Reddit or something else, or forcing the existing communities to move over.
As well, of course, if Reddit does decide to cut off the accessible clients, or do something else that makes it completely screen reader inaccessible, our Lemmy means that no single service can hold our community hostage. Unlike when the API stuff happened, now we wouldn’t be in the position of racing to find a new home. We’ve got somewhere that’s mostly built and ready for people to move in when they need it.