For a long time I’ve thought it would be cool to upload my consciousness into a machine and be able to talk to a version of myself that didn’t have emotions and cravings.
It might tell me that being around my parents has consistently had a negative effect on my mood for years now, even if I don’t see it. Or that I don’t really love X, I just like having sex with her. Maybe it could determine that Y makes me uncomfortable, but has had an overall positive effect on my life. It could mirror myself back to me in a highly objective way.
Of course this is still science fiction, but @TheOtherJake@beehaw.org has pointed out to me that it’s now just a little bit closer to being a reality.
With Private GPT, I could set up my own localized AI.
https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e
https://github.com/imartinez/privateGPT
I could feed this AI with information that I wasn’t comfortable showing to anyone else. I’ve been keeping diaries for most of my adult life. Once PrivateGPT was set up with its base language model, I could feed it my diaries, and then have a chat with myself.
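From what I understand, PrivateGPT doesn’t really “learn” the diaries so much as index them locally and retrieve the relevant passages for a local LLM to answer from, all without anything leaving the machine. This isn’t the project’s actual code, just a minimal sketch of that retrieval idea, assuming sentence-transformers is installed and the diaries are plain text files in a hypothetical `diaries/` folder:

```python
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

# Minimal sketch of the local-retrieval idea PrivateGPT implements;
# not the project's actual code. Paths and model name are just examples.
model = SentenceTransformer("all-MiniLM-L6-v2")   # small embedding model that runs locally

# Load diary entries (here: one entry per .txt file).
entries = [p.read_text() for p in Path("diaries").glob("*.txt")]
entry_vecs = model.encode(entries, convert_to_tensor=True)

question = "How do I usually feel after visiting my parents?"
q_vec = model.encode(question, convert_to_tensor=True)

# Find the diary passages most similar to the question; PrivateGPT would then
# hand passages like these to a local LLM to phrase an answer.
hits = util.semantic_search(q_vec, entry_vecs, top_k=3)[0]
for hit in hits:
    print(entries[hit["corpus_id"]][:200])
```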
I realize PrivateGPT is not sentient, but this is still exciting, and my mind is kinda blown right now.
Edit 1: Guys, this isn’t about me creating a therapist-in-a-box to solve any particular emotional problem. It’s just an interesting idea about using a pattern recognition tool on myself and having it create summaries of things I’ve said. Lighten up.
Edit 2: It was anticlimactic. This thing basically spits out word salad no matter what I ask it, even if the question has a correct answer, like a specific date.
Mate, maybe you should just go to a therapist. That’s their job, you don’t need an AI for this.
Pretty much. This is far beyond what an LLM can do as well.
It might tell me that
IMHO an AI won’t be able to fix or cure all those feelings. You should see a therapist for this.
“I like having sex with her” would be objectively quantifiable
Again, I don’t think feelings are quantifiable; this is the main problem with AI.
Chat GPT can already be a pretty good tool for self-reflection. The way its model works, it tends to reflect you more than anything else, so it can be used as a reasonably effective “rubber duck” that can actually talk back. I wouldn’t recommend it as a general therapeutic tool though, it’s extremely difficult to get it to take initiative so the entire process has to be driven by you and your own motivation.
Also… Have you ever watched Black Mirror? This is pretty much the episode Be Right Back. It doesn’t end well.
It doesn’t end well.
Certainly true for the majority of Black Mirror episodes 😅
And the show is just phenomenal. I can’t think of any other show in recent years (off the top of my head) where I’m just in near constant awe of the writers, apart from Bluey. Watching either, my wife and I will often turn to the other at the end of an episode and go: “It’s just so fucking good”.
The short story MMAcevedo, written in the form of a wiki entry, seems apropos to this conversation, especially the fictional upload subject’s opinions on it:
Acevedo indicated that being uploaded had been the greatest mistake of his life
This is great. Thanks.
I hadn’t bookmarked a story in a LONG time, especially one I’ve read through from start to finish.
I think there’s an (understandable) urge from the technically minded to strive for rationality not only above all, but to the exclusion of all else. There is nothing objectively better about strict objectivity without relying on circular logic (or, indeed, arguing that subjective happiness is perfectible through objectivity).
I am by no means saying that you should not pursue your desire, but I would like to suggest that removing a fundamental human facet like emotions isn’t necessarily the utopian outlook you might think it is.
I knew there was a reason I saved all those IRC chat logs!
Unfortunately this setup will only get you a very rudimentary match to your writing style, copying only from text you’ve already written. New subjects or topics you didn’t feed it won’t show up. What you’d get is a machine that would be a caricature of you. A mimic.
It’s not until the AI can actually identify the topics you prompt it with, and make decisions based on what your views are and how they relate to the topic, that you’ll have an interesting copy of yourself. For example, if you were to ask it for something new you should cook today, PrivateGPT would only list things you’ve already stated you liked. It would not be able to recognize the style of food and the flavors and then make a guess at something else that fits that same taste.
Yeah, so the AI would STILL be very favorable about having sex with X, for example, because it’s trained on your writing/speaking/whatever.
“What do I feel about this?”
“Well, an average of what you’ve always felt about it, roughly…”
Well, sort of. If you never talked about dating, for instance, and you then started talking to the AI about dating, it may not put two and two together to get that it relates to sex. It wouldn’t be able to infer anything about the topic, as it only knows what the statistically most likely next word is.
That’s what I feel like most people don’t get. Even uploading years and years of your own text will only match your writing style and the very specific things you’ve said about specific topics. That’s why the writers’ strike is kind of dumb. This form of AI won’t invent new stories, just rehash old ones.
…oh…now I see why they are on strike.
…oh…now I see why they are on strike.
😆
A regular adult human has about 600 trillion synapses (connections between neurons), so just recording an index of those edges takes something like 4.3 PB (yep, petabytes). That’s not even counting what they do, just the index (because a 32-bit int isn’t enough). And, just in case you don’t know, toddlers have an even higher connection count for faster learning, until the brain decides “oh, these connections aren’t really needed,” disconnects them, and saves energy. It is really not within our reach yet to simulate a self-aware artificial creature, because most animals we know to be self-aware have high synapse counts.
And yes, we are attempting that for various reasons.
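If I’m reading that right, the 4.3 PB figure comes from needing a 64-bit index per synapse, since 32 bits only covers about 4.3 billion items. A quick back-of-the-envelope check (my assumption: one 64-bit neuron index per synapse):

```python
synapses = 600e12            # ~600 trillion synapses in an adult human brain (figure from the comment above)
bytes_per_index = 8          # 32 bits addresses only ~4.3 billion neurons, so use a 64-bit index
total_bytes = synapses * bytes_per_index
print(f"{total_bytes / 1024**5:.1f} PiB")   # ~4.3 PiB, roughly the figure quoted above
```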
I can see this as a movie.
You probably have. Someone else mentioned an episode of Black Mirror.
The AI would still need to understand feelings, at least in principle, in order to interpret your actions which are based on feelings. Even “I like having sex with her” is a feeling. A purely rational mind would probably reprimand you for using contraception because what is the point of sex if not making offspring?
I would think that “I like having sex with her” would be objectively quantifiable based on how many times it was mentioned versus other mentions of the person in question.
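Something like this naive counting would be a first pass (the entries, the name, and the “positive” keywords here are all made up for illustration; a real pass would run over the actual diaries):

```python
import re

# Purely illustrative: count mentions of a person and how many of them sit
# next to "positive" keywords. Entries, name, and keyword list are hypothetical.
entries = [
    "Spent the evening with X. Great night.",
    "Argued with X about money again.",
    "X came over; we had a great time, the conversation less so.",
]
name = "X"
mentions = sum(len(re.findall(rf"\b{name}\b", e)) for e in entries)
positive = sum(len(re.findall(rf"\b{name}\b.*\b(great|happy|love)\b", e, re.I)) for e in entries)
print(f"{positive}/{mentions} mentions of {name} appear in a positive context")
```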
At that point you could search your diary entries yourself to analyse the way you talk about her. Assuming of course you’re honest with your diary and yourself and not glossing over things you don’t want to realise - in which case do you really need an AI to tell you?
Those were just generic examples. More specifically, I tend to write in my journal when I have a problem I’m trying to work out, not when things are moving along smoothly. So I would expect the chatbot to be heavily biased that way. It would still be good for recognizing patterns, assigning them a weight, and giving me responses based on that data. At least that’s my understanding of how a GPT works.
Yeh, I get that it’s just an example. But wouldn’t it be like that for anything you could ask it? It can only work with what you’re giving it and that data could be heavily influenced by you not wanting to see something. Or exaggerating. Or forgetting. A human looking at your diaries might be able to put themselves in your situation and understand, based on their own experience with the human condition, how you were probably feeling in a situation the diary entry is describing and interpret the entry accordingly, maybe even especially when considering other, seemingly conflicting entries. But they’re using “outside” information which an AI doesn’t have.
Don’t get me wrong, I’m not saying what you’re imagining is completely impossible - I’m trying to imagine how it might work and why it might not. Maybe one way to develop such an AI would be to feed it diaries of historical people whose entries we can interpret with decent confidence in hindsight (surely those must exist?). Ask the AI to create a characterisation of the person and see how well it matches the views of human historians.
I am so very much not an expert on AI and I hate most of what has come of the recent surge. And remember that we’re not talking about actual intelligence, these are just very advanced text parsing and phrase correlating machines. But it does of course seem tempting to ask a machine with no secondary motives to just fucking tell me the harsh truth so now I’m thinking about it too.
LLMs don’t think. They return an output based on the input you gave them like an extremely complex switch or if-else statement (sort of). We’re a long way off from truly digitizing ourselves even in carbon copy manner like this.
I’ve found that learning about and practicing DBT has offered me more of a skill to do this myself. I know what you mean about wishing you could see outside the frame of your emotions and past. In DBT, we have something called the “emotion mind” and the “reasonable mind.” But we need both in order to make decisions. Rationality is great, but emotion provides direction, desire, goals, and a “why” for everything we do. The idea is that when you use emotion and reason together, you can use your “wise mind” which can help you see outside your experiences and gain perspective in new areas. I think I know what you mean because I also crave further neutral 3rd party understanding on my past too, and use ChatGPT a lot for that myself. Thought I would just throw in a couple more cents if you hadn’t heard of the concept. :)
I’ve thought for a while that one of the main things that AI, at least current-generation AI, lacks is what I call statefulness.
And what I mean by that is every human being has a set of information that they believe to be true and ideas and ideals that they try to uphold even if there is a negative cost attached to that.
For instance many guys will stand up to someone being loud and abusive in a bar even though they know that they could ultimately end up getting their ass kicked or get into a fight or get kicked out of the bar for doing so, and the likelihood of receiving any kind of reward from any external source is slim to none.
Current generation AI has none of that. Its only incentive is to arrange the words available to it into a pattern that pleases the script it was given.
Every AI of the current generation needs another AI attached to it that reminds it of what it knows to be true, the state of its current beliefs, and the internal societal rules that it follows.
Rules such as, “don’t make up information, verify the facts that you share with other people before sharing them, and keep these important parts of the current prompt in mind as you are generating your answer, don’t go off script”.
This secondary morality AI or conscience AI is something that the current generation lacks, and I hope the next generation begins to develop such a thing, because otherwise AI is only a very fancy toy.