niva

joined 2 years ago
[–] [email protected] 9 points 4 weeks ago

I want this as well, but it will never happen.

[–] [email protected] 16 points 3 months ago (4 children)

This puzzles me. Why do these Meta employees care about LGBTQ people? How can anyone work for Meta and have a conscience?

[–] [email protected] 5 points 3 months ago (2 children)

At the very least there should be a law forcing every health insurer to cover all costs needed to ensure survival and long-term health. And what is needed for survival and long-term health should be defined by the doctor, not by the insurance company!!! Honestly, no idea why this isn't the law in every rich country in 2025!

[–] [email protected] 24 points 3 months ago* (last edited 3 months ago)

The CEO got killed, but everything is still working as intended. For everyone who was worried, I can bring relief: UnitedHealthcare is still working well.

[–] [email protected] 2 points 3 months ago

That was pretty funny!

[–] [email protected] 1 points 4 months ago

Why was that so funny? I mean, it was very funny, but I don't know why ;)

[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (1 children)

Does the A in 18A stand for ångström? Can they even produce anything below 10 nm?

[–] [email protected] 1 points 4 months ago

Why Canada?

[–] [email protected] 22 points 5 months ago (5 children)

A Linux client is not even on the roadmap?

[–] [email protected] 8 points 1 year ago (3 children)

Yes, I think you are right. And I think it's borderline a mental illness if you can't stop lashing out. As I understand it, she somehow thinks that by bashing trans women she is doing something good for women. Trans women are somehow taking away her womanhood, or something like that. I have read something like this from Rowling several times, but I have no clue how trans women could do that. But Rowling is obsessed with it, for whatever reason.


First of all, the take that LLMs are just parrots that can't think for themselves is dumb. They do think, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than just doing the work yourself from the start. That is something I often hear from programmers. It might be true, for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it is quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have, and how they can access and work with this knowledge, with a neural network of only a few billion parameters.

One major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn. It takes a shit ton of data and a lot of time and computing power for them to learn. And more importantly, they don't learn from interactions, they learn from static data.

This is similar to what the company DeepMind did with their chess and go engines (also neural networks). They trained those engines on a shit ton of games played by humans, and the engines became really good that way. But the second generation of their NN game engines did not look at any games played before. They only knew the rules of chess/go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors that had needed all those human games to learn from.
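
To make the self-play part concrete, here is a toy sketch of the idea in Python. Everything in it is made up by me for illustration: I swapped in the much simpler game Nim for chess/go so it fits in a comment, and choose/train/value are my own names, not from any real DeepMind code.

```python
import random

# Toy self-play learner for Nim: a pile of 10 stones, each turn you take
# 1-3 stones, whoever takes the last stone wins. The agent knows only the
# rules and learns entirely from games against itself, no human games.

N = 10
ACTIONS = [1, 2, 3]
value = {}  # pile size -> estimated win chance for the player to move

def choose(pile, explore=0.1):
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < explore:
        return random.choice(legal)  # occasionally try a random move
    # Otherwise leave the opponent the worst position we know of.
    return min(legal, key=lambda a: value.get(pile - a, 0.5))

def self_play_game():
    pile, states = N, []
    while pile > 0:
        states.append(pile)
        pile -= choose(pile)
    return states  # the player who made the last move won

def train(games=20000, lr=0.1):
    for _ in range(games):
        states = self_play_game()
        outcome = 1.0  # last mover won; players alternate going backwards
        for s in reversed(states):
            old = value.get(s, 0.5)
            value[s] = old + lr * (outcome - old)
            outcome = 1.0 - outcome

train()
print({s: round(v, 2) for s, v in sorted(value.items())})
# Piles of 4 and 8 should come out with clearly low values: the agent
# discovered the losing positions of Nim purely from self-play.
```

The shape of the loop is the whole point: play, score, update, repeat, with zero outside data.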

So that is my take! The next step is when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds: how do you evaluate the winner of such a game, for example? But it can be done.
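
And here, again purely hypothetical, is what the skeleton of that asking-and-answering game could look like. ask, answer and judge are stand-ins I invented (the judge being the hard part I just mentioned), not any real LLM API:

```python
import random

# Skeleton of a self-play "asking and answering" game. The two knowledge
# dicts stand in for two model instances; judge() stands in for the open
# problem of scoring the game.

def ask(knowledge):
    # The asker plays a question it already knows the answer to.
    return random.choice(list(knowledge.items()))

def answer(knowledge, question):
    # The answerer must either answer or admit uncertainty.
    return knowledge.get(question, "I don't know")

def judge(reference, given):
    if given == reference:
        return "answerer wins"
    if given == "I don't know":
        return "draw"  # admitting ignorance scores better than hallucinating
    return "asker wins"

asker = {"capital of France?": "Paris", "2 + 2?": "4"}
answerer = {"capital of France?": "Paris"}

for _ in range(3):
    question, reference = ask(asker)
    result = judge(reference, answer(answerer, question))
    print(question, "->", result)
    # In a real system this result would become the training signal,
    # like the game outcomes in the Nim example above.
```

Note the draw for "I don't know": one scoring rule like that already pushes against the hallucination flaw from above.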

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them. And they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!
