this post was submitted on 08 Jul 2024
862 points (100.0% liked)

Science Memes

top 50 comments
[–] [email protected] 65 points 9 months ago (3 children)

The meme would work just the same with the "machine learning" label replaced with "human cognition."

[–] [email protected] 85 points 9 months ago (2 children)

Have to say that I love how this idea congealed into "popular fact" as soon as people's paychecks started relying on massive investor buy-in to LLMs.

I have a hard time believing that anyone truly convinced that humans operate as stochastic parrots or statistical analysis engines has any significant experience interacting with other human beings.

Less dismissively, are there any studies that actually support this concept?

[–] [email protected] 51 points 9 months ago* (last edited 9 months ago) (2 children)

Speaking as someone whose professional life depends on an understanding of human thoughts, feelings and sensations, I can't help but have an opinion on this.

To offer an illustrative example:

When I'm writing feedback for my students, which is a repetitive task with individual elements, it's original and different every time.

And yet, anyone reading it would soon learn to recognise my style, the same way they could learn to recognise someone else's, or the way many people have already learned to spot text written by AI.

I think it's fair to say that this is because we do have a similar system for creating text especially in response to a given prompt, just like these things called AI. This is why people who read a lot develop their writing skills and style.

But, and this is really significant, that's not all I have. There's so much more than that going on in a person.

So you're both right, in a way, I'd say. This is how humans develop their individual style of expression: through data collection and stochastic methods, happening outside of awareness. As you suggest, though, just because humans can do this doesn't mean the two structures are the same.

[–] [email protected] 20 points 9 months ago (3 children)

Idk. There’s something going on in how humans learn which is probably fundamentally different from current ML models.

Sure, humans learn from observing their environments, but they generally don’t need millions of examples to figure something out. They’ve got some kind of heuristics or other ways of learning things that lets them understand many things after seeing them just a few times or even once.

Most of the progress in ML models in recent years has come from the discovery that you can get massive improvements with current models by just feeding them more and more data. Essentially brute force. But there’s a limit to that, either because there might be a theoretical point where the gains stop, or because of the more practical issue of only having so much data and compute to throw at them.

There’s almost certainly going to need to be some kind of breakthrough before we’re able to get meaningfully further than we are now, let alone match human cognition.

At least, that’s how I understand it from the classes I took in grad school. I’m not an expert by any means.

[–] [email protected] 13 points 9 months ago

I would say that what humans do to learn has elements of some machine learning approaches (a Naive Bayes classifier comes to mind) on an unconscious level, but humans have a wild mix of different approaches to learning; even a single human employs many ways of capturing knowledge. Also, the imperfect and messy ways that humans capture and store knowledge are a critical feature of humanness.
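Since a Naive Bayes classifier gets name-checked above, here is a minimal sketch of the idea (a toy illustration with made-up data, not anything from the thread): "learning" is literally counting word frequencies per class, and classification multiplies per-word likelihoods.

```python
from collections import Counter

# Toy training set (made up for illustration).
docs = [("buy cheap pills now", "spam"),
        ("cheap pills cheap", "spam"),
        ("meeting notes attached", "ham"),
        ("see notes from the meeting", "ham")]

# "Training" is just counting words per class.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in docs:
    counts[label].update(text.split())

def score(text, label):
    total = sum(counts[label].values())
    vocab = len({w for c in counts.values() for w in c})
    p = 1.0
    for word in text.split():
        # Laplace smoothing so an unseen word doesn't zero the product.
        p *= (counts[label][word] + 1) / (total + vocab)
    return p

def classify(text):
    return max(counts, key=lambda label: score(text, label))

print(classify("cheap pills"))  # dominated by spam-frequent words
```

This omits class priors (both classes have two documents here, so they cancel), which is part of why it is only a sketch of the technique, not a full implementation.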

[–] [email protected] 7 points 9 months ago (2 children)

The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a singular event. Then going forward it's locked into that state with no additional learning.

A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains as well as a feedback loop within the mind itself. This creates an incredibly unique system that has never yet been replicated by computers. It makes our brains a dynamic engine as opposed to the static and locked state of an LLM.

[–] [email protected] 14 points 9 months ago (3 children)

Contemporary LLMs are static. LLMs are not static by definition.

[–] [email protected] 18 points 9 months ago (4 children)

I'd love to hear about any studies explaining the mechanism of human cognition.

Right now it's looking pretty neural-net-like to me. That's kind of where we got the idea for neural nets from in the first place.

[–] [email protected] 12 points 9 months ago

It's not specifically related, but biological neurons and artificial neurons are quite different in how they function. Neural nets are a crude approximation of the biological version. Doesn't mean they can't solve similar problems or achieve similar levels of cognition, just that about the only similarity they have is "network of input/output things".
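For concreteness, here is everything an artificial "neuron" computes (a toy sketch with made-up weights): a weighted sum pushed through a squashing function, nothing more.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed by a sigmoid.

    Biological neurons integrate spikes over time, adapt their chemistry,
    and come in many synapse types; this is just arithmetic.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example inputs/weights chosen arbitrarily for illustration.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```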

[–] [email protected] 9 points 9 months ago (3 children)

At every step of modern computing people have thought that the human brain looks like the latest new thing. This is no different.

[–] [email protected] 25 points 9 months ago* (last edited 9 months ago) (1 children)
[–] [email protected] 15 points 9 months ago (2 children)

My dude it's math all the way down. Brains are not magic.

[–] [email protected] 15 points 9 months ago (2 children)

There's a lot we understand about the brain, but there is so much more we don't understand about the brain and "awareness" in general. It may not be magic, but it certainly isn't 100% understood.

[–] [email protected] 8 points 9 months ago (2 children)

We don’t need to understand cognition, nor for it to work the same as machine learning models, to say it’s essentially a statistical model.

It’s enough to say that cognition is a black box process that takes sensory inputs to grow and learn, producing outputs like muscle commands.

[–] [email protected] 7 points 9 months ago (1 children)

I'm not saying we understand the brain perfectly, but everything we learn about it will follow logic and math.

[–] [email protected] 11 points 9 months ago (1 children)

Not necessarily; there are a number of modern philosophers and physicists who posit that "experience" is incalculable, and further that it's directly tied to the collapse of the wave function in quantum mechanics (the Penrose-Hameroff "Orch OR" theory). I'm not saying they're right, but Penrose won a Nobel Prize in Physics and he says it can't be explained by math.

[–] [email protected] 56 points 9 months ago (6 children)

Bayesian purist cope and seethe.

Most machine learning is closer to universal function approximation via autodifferentiation. Backpropagation just lets you create numerical models with insane parameter dimensionality.

[–] [email protected] 47 points 9 months ago

I like your funny words, magic man.

[–] [email protected] 11 points 9 months ago (1 children)
[–] [email protected] 23 points 9 months ago* (last edited 9 months ago)

Universal function approximation - neural networks.

Auto-differentiation - algorithmic calculation of partial derivatives (aka gradients)

Backpropagation - when training a neural network (or most ML models, actually), you compute the difference between the model's predictions and the true labels, and that difference is sent backwards through the network as gradients (of the loss function)

Parameter dimensionality - the number of learnable weights in the network ("neurons"), i.e., the sizes of the weight matrices.
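The terms above can be boiled down to a toy example (my own sketch, not from any commenter, with the gradient derived by hand to stand in for what autodiff frameworks compute automatically): fit y = w·x by repeatedly stepping the single parameter against the gradient of a squared loss.

```python
# Made-up data that roughly follows y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0    # the single "parameter"
lr = 0.05  # learning rate
for _ in range(200):
    # loss = sum((w*x - y)^2), so d(loss)/dw = sum(2*(w*x - y)*x);
    # autodiff computes exactly this derivative mechanically.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # one gradient-descent step

print(round(w, 2))  # close to the true slope of the data
```

Real networks do the same thing with billions of parameters instead of one, which is where the "insane parameter dimensionality" comes in.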

If that's your argument, it's worse than statistics, imo. At least statistics has solid theorems and proofs (albeit under very controlled assumptions about distributions). All DL has right now is a bunch of papers, published most often by large tech companies, which may or may not work for the problem you're working on.

The universal function approximation theorem is pretty dope tho. I'm not saying ML isn't interesting, some of it is, but most of it is meh. It's fine.

[–] [email protected] 8 points 9 months ago (3 children)

A monad is just a monoid in the category of endofunctors, after all.

[–] [email protected] 8 points 9 months ago

pee pee poo poo wee wee

[–] [email protected] 38 points 9 months ago

Eh. Even heat is a statistical phenomenon, at some reference frame or another. I've developed model-dependent apathy.

[–] [email protected] 35 points 9 months ago (2 children)
[–] [email protected] 32 points 9 months ago* (last edited 9 months ago) (2 children)
[–] [email protected] 38 points 9 months ago (6 children)

But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

This whole "we can't explain how it works" is bullshit from software engineers too lazy to unwind the emergent behavior caused by their code.

[–] [email protected] 35 points 9 months ago (1 children)

I agree with your first paragraph, but unwinding that emergent behavior really can be impossible. It's not just a matter of taking spaghetti code and deciphering it; ML usually works by generating weights in something like a decision tree, neural network, or statistical model.

Assigning any sort of human logic to why particular weights ended up where they are is educated guesswork at best.

[–] [email protected] 16 points 9 months ago (3 children)

You know what we do in engineering when we need to understand a system a lot of the time? We instrument it.

Please explain why this can't be instrumented. Please explain why the trace data could not be analyzed offline at different timescales as a way to start understanding what is happening in the models.

I'm fucking embarrassed for CS lately.
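Instrumenting really is easy at small scale; here is a toy sketch (a made-up two-layer net, not a real tracing setup) that records every intermediate activation. The catch the replies point at is that the trace is just numbers, and interpreting billions of them is the hard part.

```python
import math

# Made-up weights for a tiny two-neuron hidden layer plus one output.
weights = [[0.3, -0.7], [1.2, 0.4]]  # hidden layer, two neurons
out_w = [0.5, -1.1]                  # output layer

def forward(x, trace):
    """Run the net, appending every activation to `trace`."""
    hidden = []
    for i, (w0, w1) in enumerate(weights):
        a = math.tanh(w0 * x[0] + w1 * x[1])
        trace.append(("hidden", i, a))  # instrumentation point
        hidden.append(a)
    y = sum(a * w for a, w in zip(hidden, out_w))
    trace.append(("output", 0, y))
    return y

trace = []
forward([1.0, 2.0], trace)
print(trace)  # every activation captured -- but what do they *mean*?
```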

[–] [email protected] 17 points 9 months ago

It's not always as simple as measuring an observable system or simulating the parameters the best you can. Lots of parameters + lots of variables = we have a good idea how it should go, we can get close, but don't actually know. That's part of why emergent behavior and chaos theory are so difficult, even in theoretically closed systems.

[–] [email protected] 17 points 9 months ago

... but they just said that it can. You check it, and you will receive gibberish. Congrats, your value is .67845278462 and if you change that by .000000001 in either direction things break. Tell me why it ended up at that number. The numbers, what do they mean?

[–] [email protected] 17 points 9 months ago (2 children)

That field is called Explainable AI, and the answer is that it costs money, and the only reason AI is being used is to cut costs.

[–] [email protected] 18 points 9 months ago* (last edited 9 months ago) (7 children)

But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

Well sure, but as someone else said even heat is statistics. Saying "ML is just statistics" is so reductionist as to be meaningless. Heat is just statistics. Biology is just physics. Forests are just trees.

[–] [email protected] 14 points 9 months ago (11 children)

It's totally statistics, but that second paragraph really isn't how it works at all. You don't "code" neural networks the way you code up a website or a game. There's no "if (userAskedForThis) {DoThis()}". All the coding you do for neural networks is to define a model and a training process, but that's it; before training, the behavior is completely random.

The neural network engineer isn't directly coding up behavior. They're architecting the model (random weights by default), setting up an environment (training and evaluation datasets, tweaking some training parameters), and letting the model's weights be trained or "fit" to the data. Its behavior isn't designed; the virtual environment it evolved in was. Bigger, cleaner datasets, model architectures suited to the data, and an appropriate number of training iterations (epochs) can improve results, but they'll never be perfect, just an approximation.
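The point about not coding behavior directly can be shown with a toy perceptron (my own sketch, not from the thread): the programmer writes only the model shape and the update rule; the AND function emerges from fitting to data, not from any if/then rules.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Before training: random weights, junk behavior.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron update rule: nudge weights toward the target.
for _ in range(20):  # "epochs"
    for x, target in data:
        err = target - predict(x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

print([predict(x) for x, _ in data])  # AND emerges: [0, 0, 0, 1]
```

Note that nothing in the code says "output 1 only when both inputs are 1"; that behavior is entirely a product of the data and the training loop.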

[–] [email protected] 20 points 9 months ago

This is exactly how I explain AI (i.e. what the current AI buzzword refers to) to common folk.

And what that means in terms of use cases:
When you indiscriminately take human outputs (knowledge? opinions? excrement?) as input, an average is just a shitty approximation of pleb opinion.

[–] [email protected] 17 points 9 months ago (1 children)
[–] [email protected] 12 points 9 months ago

AND stolen data

[–] [email protected] 15 points 9 months ago (1 children)
[–] [email protected] 8 points 9 months ago

But it's fitting to millions of sets in hundreds of dimensions.

[–] [email protected] 9 points 9 months ago (3 children)

Well, lots of people blinded by hype here. Obv it is not simply a statistical machine, but imo it is something worse: some approximation machinery that happens to work, but gobbles up energy as a cost. Something only possible because we are not charging companies enough for electricity, smh.

[–] [email protected] 9 points 9 months ago (6 children)

We're in the "computers take up entire rooms in a university to do basic calculations" stage of modern AI development. It will improve, but only if we let it develop.

[–] [email protected] 9 points 9 months ago (3 children)

I think saying machine learning is just statistics is a bit misleading. There’s not much statistics going on in deep learning. It’s mostly just “eh, this seems to work I dunno let’s keep doing it and see what happens”.

[–] [email protected] 19 points 9 months ago (2 children)

It’s mostly just “eh, this seems to work I dunno let’s keep doing it and see what happens”.

Yeah, no.

[–] [email protected] 8 points 9 months ago (2 children)

Neural nets, including LLMs, have almost nothing to do with statistics. There are many different methods in Machine Learning. Many of them are applied statistics, but neural nets are not. If you have any ideas about how statistics are at the bottom of LLMs, you are probably thinking about some other ML technique. One that has nothing to do with LLMs.

[–] [email protected] 13 points 9 months ago (11 children)
[–] [email protected] 8 points 9 months ago (1 children)

Software developer here: the more I learn about neural networks, the more they seem like very convoluted statistics. They're also just a simplified form of neurons, so I advise against over-humanization, even if they're called "neurons" and/or Alex.

[–] [email protected] 7 points 9 months ago (1 children)

iTs JusT iF/tHEn 🥴🥴🥴

[–] [email protected] 7 points 9 months ago (3 children)

I wouldn't say it is statistics; statistics is much more precise in its calculation of uncertainties. AI depends more on calculus, i.e. automated differentiation, which is also cool but not statistics.
