Kuinox

joined 2 years ago
[–] [email protected] 4 points 14 hours ago* (last edited 14 hours ago) (1 children)

You need to be cautious because it's the victim's claim, and they want money.
If the redditor pinpointed the correct location:
https://www.google.fr/maps/place/YOTEL+San+Francisco/@37.7801757,-122.4121171,3a,15y,31.47h,94.68t/data=!3m7!1e1!3m5!1sxT5zDp6EdMhuMMaSj24REQ!2e0!6shttps:%2F%2Fstreetviewpixels-pa.googleapis.com%2Fv1%2Fthumbnail%3Fcb_client%3Dmaps_sv.tactile%26w%3D900%26h%3D600%26pitch%3D-4.6841735984350095%26panoid%3DxT5zDp6EdMhuMMaSj24REQ%26yaw%3D31.472150637642635!7i16384!8i8192!4m9!3m8!1s0x80858084c80122a3:0x5346081f2c1518cc!5m2!4m1!1i2!8m2!3d37.7803603!4d-122.4120372!16s%2Fg%2F11dxc1t6wq?entry=ttu&g_ep=EgoyMDI1MDYxNy4wIKXMDSoASAFQAw%3D%3D

"PASSENGER LOADING ONLY" indicate to me that you can drop off someone.
So the claim that the car stopped illegally looks wrong.

I'm defending Waymo for the sole reason that they are more respectful of other road users than 95% of other drivers. Human taxi drivers are often way too dangerous.

[–] [email protected] 10 points 22 hours ago* (last edited 22 hours ago) (4 children)

The bike infrastructure where the crash occurred is even worse:
https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fa-bicyclist-is-suing-waymo-in-federal-court-after-its-v0-ggrvl22j2i5f1.jpeg%3Fwidth%3D1206%26format%3Dpjpg%26auto%3Dwebp%26s%3Ded7bbaad39e733d68c3ae2dfb3ce284068a74d3b

The fault shouldn't be on Waymo.
Almost no cars have a feature that alerts the user to incoming traffic. Why should Waymo be blamed if its feature allegedly didn't work?
Car users must pay attention when opening the door; the same problem would have happened with a taxi.

The main problem here is the shitty bike infrastructure.

[–] [email protected] 1 points 1 week ago

I've been re-reading my response, and my bad: I meant "artificial neurons were inspired by neurons", not that they behave like them; they have little in common.

If you were to train an LLM on nothing but math and texts about math, then asked it an art question, it would respond somewhat nonsensically with math.

If you asked a human who speaks German and nothing else a question in English, they would also respond in German (saying that they can't understand you).
LLMs sometimes (not often enough) do respond that they don't know.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (2 children)

that suggests to me either you know how it was trained, or are making assumptions about how it was trained.

The general way it is trained is known; the specifics and techniques are not. But the public does know how one of the flagship models was trained: the training process of DeepSeek R1 was documented in their research paper: https://arxiv.org/pdf/2501.12948

I read a good chunk of it when it was released.

And as for the idea that nobody would train an LLM to do addition in this manner, I invite you to glance at the Wikipedia article on addition.

LLMs have multiple ways to do addition; I'll showcase two as examples. I asked ChatGPT 4.1 to solve a big addition. Here is its output:

 I need to add these two large numbers carefully.

First, I'll write them one under the other, aligned at the right:

         21,500,996,531,314,339
+ 96,357,764,557,683,165,397,721
--------------------------------


Working from right to left:

Units place: 9 + 1 = 10. Write 0, carry 1.
Tens place: 3 + 2 + 1(carry) = 6. Write 6, no carry.
Hundreds place: 3 + 7 = 10. Write 0, carry 1.
Thousands place: 4 + 7 + 1(carry) = 12. Write 2, carry 1.
Ten thousands place: 1 + 9 + 1(carry) = 11. Write 1, carry 1.
Hundred thousands place: 3 + 3 + 1(carry) = 7. Write 7, no carry.
Millions place: 1 + 5 = 6. Write 6, no carry.
Ten millions place: 5 + 6 = 11. Write 1, carry 1.
Hundred millions place: 3 + 1 + 1(carry) = 5. Write 5, no carry.
Billions place: 5 + 3 = 8. Write 8, no carry.
Ten billions place: 9 + 8 = 17. Write 7, carry 1.
Hundred billions place: 9 + 6 + 1(carry) = 16. Write 6, carry 1.
Trillions place: 0 + 7 + 1(carry) = 8. Write 8, no carry.
Ten trillions place: 0 + 5 = 5. Write 5, no carry.
Hundred trillions place: 5 + 5 = 10. Write 0, carry 1.
Quadrillions place: 1 + 4 + 1(carry) = 6. Write 6, no carry.
Ten quadrillions place: 2 + 6 = 8. Write 8, no carry.

For the remaining places, I just copy the digits from the second number:
7, 7, 5, 3, 6, 9

Therefore, the answer is:
96,357,786,056,867,851,761,260

You can notice that the whole reasoning is correct, but it wrote the wrong response. I can expand more on this if you want (I do some research on it in my free time).
This reasoning of decomposing the addition was of course learned from the training data.
Now, the trigonometry used to calculate additions that I talked about earlier is not for writing a "reasoning", but for when it tries to write the correct response directly. It was created by backpropagation trying to find a local minimum that can solve additions, in order to more accurately predict the next token.
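To make that second way concrete, here is a minimal toy sketch of the "Clock algorithm" idea described in that paper: a number is encoded as points on circles with periods 2, 5, 10 and 100, and a + b is computed by rotating a's encoding by b's angle on each circle, then reading out the closest number. This is my simplification, not the model's actual learned circuit: the periods come from the paper, but the nearest-neighbour readout and the 0-99 range are assumptions for illustration (the paper's real representation is a helix that also has a linear component, and the readout is done by the model's own layers).

```python
import numpy as np

# Periods from the paper; the 0-99 range and the readout below are my simplification.
PERIODS = [2, 5, 10, 100]

def encode(n: int) -> np.ndarray:
    """Encode n as one (cos, sin) point per circle."""
    return np.array([[np.cos(2 * np.pi * n / T), np.sin(2 * np.pi * n / T)]
                     for T in PERIODS])

def rotate_add(enc_a: np.ndarray, b: int) -> np.ndarray:
    """Rotate a's point on every circle by b's angle: yields the encoding of a + b."""
    out = np.empty_like(enc_a)
    for i, T in enumerate(PERIODS):
        theta = 2 * np.pi * b / T
        cos_a, sin_a = enc_a[i]
        # cos(a+b) = cos a cos b - sin a sin b ; sin(a+b) = sin a cos b + cos a sin b
        out[i] = [cos_a * np.cos(theta) - sin_a * np.sin(theta),
                  sin_a * np.cos(theta) + cos_a * np.sin(theta)]
    return out

def decode(enc: np.ndarray) -> int:
    """Nearest-neighbour readout over 0..99 (a stand-in for the model's unembedding)."""
    candidates = np.array([encode(n) for n in range(100)])
    return int(np.argmin(np.linalg.norm(candidates - enc, axis=(1, 2))))

print(decode(rotate_add(encode(36), 59)))  # -> 95, with no digit-by-digit carrying
```

The point is only that the sum can come out of pure geometry (rotations), not out of a statistical lookup or the written-out carrying shown above.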

so I would point out that technically LLMs have “tensors” not “neurons”.
I get that tensors are designed to behave like neurons, and this is just me being pedantic. I know what you mean when you say neurons, just wanted to clarify and be consistent. No shade intended.

Artificial neurons were made to behave like neurons: https://en.wikipedia.org/wiki/Artificial_neuron
And the terminology used is neurons; cf. the paper I sent earlier about how they do additions: https://arxiv.org/pdf/2502.00873

[–] [email protected] 1 points 1 week ago (4 children)

I don’t think you can disconnect how an LLM was trained from how it operates

You can; heck, the example I gave shows exactly this:

If you train an LLM to use trigonometry to solve addition problems, I think you will find the LLM will do trigonometry to solve addition problems.

It was not trained to do trigonometry to solve addition problems; it was trained to respond to additions. Trigonometry is how the statistical part, the backpropagation, found a way to make the neurons solve additions.

In general if that is how the LLM is coming to its next token, then the training data must be really heavily weighted in that manner.

You are mixing things up: the way LLMs are trained does not impose anything about how the neurons get organised to get a better score at inference.

[–] [email protected] 2 points 1 week ago (6 children)

You are mistaking how LLMs are trained for how they work.
Just because they were trained with statistics doesn't mean they compute, or think, using statistics.
For example, to do additions, internally LLMs do trigonometry: https://arxiv.org/abs/2502.00873
They probably do use statistics for tons of stuff internally, but humans do too: guessing, bias, tendency, preferences.
Anthropic researchers found that their LLMs have "features" for concepts.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (8 children)

So you think you need words to be able to think? Are monkeys, birds, and human babies unable to think, then?

[–] [email protected] 2 points 1 week ago (10 children)

Is the argument that LLMs are thinking because they make guesses

No, it's that you can't root the argument that they don't think in the fact that they make stuff up, because humans do too. You could root it in the amount of things they guess wrong, but that's extremely hard to measure.
Again, I'm not claiming that they think, but that we don't know until one or the other is proven.
Right now, thinking that either one is true is a belief.

[–] [email protected] 1 points 1 week ago

How did you conclude that from these 2 messages?

[–] [email protected] 2 points 1 week ago

Consciousness may be an illusion born from the ability of self-reflection.
Also, like I showed before, you may act before consciously taking the decision to do it.
https://en.m.wikipedia.org/wiki/Neuroscience_of_free_will
These studies, along with the one presented by CGP Grey, indicate that maybe we do stuff and then come up with a reasonable explanation afterwards.

[–] [email protected] 3 points 1 week ago (8 children)

And the brain is made out of neurons that send electric signals between them and operate muscles.
That doesn't explain how the brain thinks.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (12 children)

I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.

That is of course a big problem. They try to guess too much stuff, but it's also why it kinda works. Symbolic AIs have the opposite problem: they are rarely useful because they can't guess stuff; they are rooted in hard logic and cannot come up with a reasonable guess.
Now, humans also try to guess stuff and sometimes get it wrong; it's required in order to produce results from our thinking and not be stuck in a state where we don't have enough data to do anything, like a symbolic AI.

Now, this is becoming a spectrum: humans are somewhere in the middle between LLMs and symbolic AIs.
LLMs are not completely unable to say what they know and don't know; they are just extremely bad at it from our POV.

The problem with "does it think" is that it doesn't give any quantity or quality.

 

The hoax came through here a few days ago, so I'm posting this.

108
Hotel Hallway (lemmynsfw.com)
 

Hello, I have been looking for a quick tool changer for an Ender 3 Max Neo; I think it has the same head as an Ender 3 Neo.
I tried this tool changer, but sadly it's not compatible with my Ender 3 Max Neo, since the printer head is different.
