As long as the AI doesn't suggest violence.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
What clickbait. Of course people are picking a free resource with zero friction over a $120-an-hour, half-day event.
Funny, I was just reading comments in another thread about people with mental health problems proclaiming how terrific it is. Especially concerning is how they had found value in the recommendations LLMs make and were "trying those out." One of the commenters described themselves as "neuro diverse" and was acting upon "advice" from generated LLM responses.
And for something like depression, this is deeply bad advice. I feel somewhat qualified to weigh in on it as somebody who has struggled severely with depression and managed to get through it with the support of a very capable therapist. There's a tremendous amount of depth and context to somebody's mental condition, and understanding it requires deliberate probing, not stringing together words until they form sentences that mimic human interaction.
Let's not forget that an LLM will not be able to raise alarm bells, read medical records, write prescriptions or work with other medical professionals. Another thing people often forget is that LLMs have maximum token lengths and cannot, by definition, keep a detailed "memory" of everything that's been discussed.
It's effectively self-treatment with more steps.
Also worth noting that:
1. AI is arguably a surveillance technology that's built on decades of our patterns
2. Large AI companies like OpenAI are signing contracts with the Department of Defense
If I were a US citizen, I would be avoiding discussing my personal life with AI like the plague.
> LLM will not be able to raise alarm bells
This is effectively the "benefit" LLM therapy would provide if it worked. The reality is that it doesn't, but it serves as a proof of concept that there is a need for anonymous therapy. Therapy in the USA is only for people with socially acceptable illnesses. People rightfully live in fear of being labeled untreatable, a danger to self and others, and then at best dropped from therapy and at worst institutionalized.
Yep, almost nobody wants to be committed to a psych ward without consent.
Almost like questioning an AI is free while a therapist costs a LOT of money.
Also talking to ChatGPT, if done anonymously, won’t ruin your career.
(Thinking of AD military, where they tell you help is available but in reality it will and maybe should cost you your security clearance.)
Maybe because it's cheaper, easier, and you're not judged by another person.