this post was submitted on 10 May 2024
340 points (100.0% liked)

Technology

top 17 comments
[–] [email protected] 75 points 1 year ago (2 children)

This is exactly what I'm talking about when I argue with people who insist that an LLM is super complex and totally is a thinking machine just like us.

It's nowhere near the complexity of the human brain. We are several orders of magnitude more complex than the largest LLMs, and our complexity changes with each pulse of thought.

The brain is amazing. This is such a cool image.

[–] [email protected] 12 points 1 year ago

I think of LLMs like digital bugs, doing their thing, basically programmed.

They're just programmed with virtual life experience instead of a traditional programmer.

[–] [email protected] 7 points 1 year ago (1 children)

I agree, but it isn't so clear cut. Where is the cutoff on complexity required? As it stands, both our brains and the most complex AI are pretty much black boxes. It's impossible to say this system we know vanishingly little about is/isn't fundamentally the same as this system we know vanishingly little about, just on a different scale. The first AGI will likely still have most people saying the same things about it: "it isn't complex enough to approach a human brain." But it doesn't need to equal a brain to still be intelligent.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

but it isn’t so clear cut

It's demonstrably several orders of magnitude less complex. That's mathematically clear cut.
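For what it's worth, the back-of-envelope numbers look roughly like this. Both figures below are assumptions on my part (commonly cited ballpark estimates, not numbers from the article): ~100 trillion synapses in a human brain, ~1 trillion parameters in the largest LLMs.

```python
import math

# Rough, commonly cited estimates -- assumptions, not figures from the article.
HUMAN_SYNAPSES = 1e14        # ~100 trillion synaptic connections
LARGEST_LLM_PARAMS = 1e12    # ~1 trillion parameters

ratio = HUMAN_SYNAPSES / LARGEST_LLM_PARAMS
orders_of_magnitude = math.log10(ratio)

print(f"ratio: {ratio:.0f}x")                             # 100x
print(f"orders of magnitude: {orders_of_magnitude:.0f}")  # 2
```

And that's counting each synapse as a single number, which undersells it: a synapse is a dynamic chemical machine, not a static weight.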

Where is the cutoff on complexity required?

Philosophical question without an answer - We do know that it's nowhere near the complexity of the brain.

both our brains and most complex AI are pretty much black boxes.

There are many things we cannot directly interrogate which we can still describe.

It’s impossible to say this system we know vanishingly little about is/isn’t fundamentally the same as this system we know vanishingly little about, just on a different scale

It's entirely possible to say that, because we know the fundamental structures of each, even if we haven't mapped the entirety of either's complexity. We know they're fundamentally different - their basic behaviors are fundamentally different. That's what fundamentals are.

The first AGI will likely still have most people saying the same things about it, “it isn’t complex enough to approach a human brain.”

Speculation but entirely possible. We're nowhere near that though. There's nothing even approaching intelligence in LLMs. We've never seen emergent behavior or evidence of an id or ego. There's no ongoing thought processes, no rationality - because that's not what an LLM is. An LLM is a static model of raw text inputs and the statistical association thereof. Any "knowledge" encoded in an LLM exists entirely in the encoding - It cannot and will not ever generate anything that wasn't programmed into it.

It's possible that an LLM might represent a single, tiny, module of AGI in the future. But that module will be no more the AGI itself than you are your cerebellum.

But it doesn’t need to equal a brain to still be intelligent.

First thing I think we agree on.

[–] [email protected] 31 points 1 year ago (1 children)

The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons. It incorporates a colossal 1.4 petabytes of data.

Assuming this means the total data of the map is 1.4 petabytes. Crazy to think that mapping of the entire brain will probably happen within the next century.
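Dividing the figures quoted above gives a sense of just how raw the data is. This is a rough sketch using only the article's numbers (1.4 PB, 150 million synapses, 57,000 cells), with decimal units assumed:

```python
# Back-of-envelope from the figures quoted in the article.
PETABYTE = 1e15            # bytes, decimal convention
map_bytes = 1.4 * PETABYTE
synapses = 150e6
cells = 57_000

bytes_per_synapse = map_bytes / synapses   # ~9.3 MB per synapse
bytes_per_cell = map_bytes / cells         # ~25 GB per cell

print(f"{bytes_per_synapse / 1e6:.1f} MB per synapse")
print(f"{bytes_per_cell / 1e9:.1f} GB per cell")
```

Most of that is presumably raw electron-microscope imagery rather than the wiring diagram itself, which would compress far smaller.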

[–] [email protected] 26 points 1 year ago

If one millionth of the brain is 1.4 petabytes, the whole brain would take 1.4 zettabytes of storage, roughly 4% of all the digital data on Earth.
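The scaling there is straightforward, taking "one-millionth of a whole brain" literally and using decimal units (1 ZB = 10^6 PB):

```python
sample_pb = 1.4          # data for 1 mm^3, from the article
brain_fraction = 1e-6    # the sample is ~one-millionth of a brain

whole_brain_pb = sample_pb / brain_fraction   # 1.4 million PB
whole_brain_zb = whole_brain_pb / 1e6         # 1 ZB = 1e6 PB

print(f"{whole_brain_zb:.1f} zettabytes")     # 1.4 zettabytes
```

The "4% of all digital data" part depends on which year's estimate you compare against; against the oft-cited ~33 ZB global-datasphere figure for 2018, 1.4 ZB works out to roughly 4%.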

[–] [email protected] 20 points 1 year ago (1 children)

There is an eerie resemblance between the smallest neurons and the largest structures in the universe - galaxy filaments.

[–] [email protected] 1 points 1 year ago

I mean realistically we could just be a manifested thought of some higher being who took too big a toke of some 5-D Weed

[–] [email protected] 13 points 1 year ago

Aha! This is why I can't think straight! Spaghetti!

[–] [email protected] 9 points 1 year ago (1 children)

That cable management is horrendous. Pull them out.

[–] [email protected] 2 points 1 year ago

But it's the spaghetti cabling that makes it work and highly robust.

[–] [email protected] 8 points 1 year ago (2 children)
[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

Humbling? That’s going on in my head. I’m that complicated! Or at least the “hardware” I run on is. I think having a brain that beautifully complex is more empowering than anything! I wonder what new discoveries will stem from this.

[–] [email protected] 8 points 1 year ago

¿Por qué no los dos? (Why not both?)

I can see both sides:

Super humbling because nature's complexity can provide data storage and retrieval capacity several orders of magnitude greater than the best we can do right now.

Also super exciting because look at what every brain on the planet is composed of, and how it functions, in a freakin' square millimeter!

Crazy stuff. Wild.

[–] [email protected] 2 points 1 year ago

There’s a whole universe in there eh?

[–] [email protected] 7 points 1 year ago

Noam Chomsky said “we don’t know what happens when you cram 10^5 neurons* into a space the size of a basketball” - but what little we know is astonishing & a marvel

*whatever the number is

[–] [email protected] 6 points 1 year ago

I thought this was a close up of a fuzzy sweater and was like: "cool ". Read the title. "Oh, fuck, yeah."