this post was submitted on 01 Jun 2024
121 points (100.0% liked)

all 37 comments
[–] [email protected] 61 points 11 months ago* (last edited 11 months ago) (1 children)

Know what uses less? No LLMs

[–] [email protected] 14 points 11 months ago

Yay, I'm doing my part!

[–] [email protected] 39 points 11 months ago (5 children)

We invented multi-bit models to get more accuracy, since neural networks are based on human brains, which are 1-bit models themselves. A 2-bit neuron is four times as capable as a 1-bit neuron but only double the size and power requirements. This whole thing sounds like BS to me. But then again, maybe complexity is more efficient than per-unit capability, since that's the tradeoff.
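
To make the arithmetic in that claim concrete (a rough back-of-envelope sketch, not anything from the article): a b-bit weight can take 2^b distinct values, so representational capacity grows exponentially with bit width while storage, and roughly power, grow only linearly.

```python
# Back-of-envelope: distinct values per weight vs. storage cost.
# Capacity grows exponentially with bit width; memory grows linearly.
for bits in (1, 2, 4, 8):
    states = 2 ** bits  # values a weight of this width can represent
    print(f"{bits}-bit weight: {states} states, {bits}x the storage of 1-bit")
```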

[–] [email protected] 28 points 11 months ago (1 children)

Human brains aren't 1-bit models; far from it, actually. I'm not an expert, but I know that neurons in the brain encode different signal strengths in their firing frequency.

[–] [email protected] 2 points 11 months ago (2 children)
[–] [email protected] 24 points 11 months ago (1 children)

Human brains aren't digital. They're very analog.

[–] [email protected] 3 points 11 months ago (3 children)

Neuronal firing is often understood as a fundamentally binary process, because a neuron either fires an action potential or it does not. This is often referred to as the "all-or-none" principle. Once the membrane potential of a neuron reaches a certain threshold, an action potential will fire. If this threshold is not reached, it won't fire. There's no such thing as a "partial" action potential; it's a binary, all-or-none process.

Frequency modulation: even though an individual neuron's action potential can be considered binary, neurons encode the intensity of a stimulus in the frequency of action potentials. A stronger stimulus causes the neuron to fire action potentials more rapidly. Again: binary in nature, not analog.
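
As a toy illustration of the rate-coding idea described here (a sketch only: the 200 Hz ceiling and the per-millisecond coin-flip firing are assumptions, not biophysics), each spike is all-or-none, but intensity rides on how often spikes occur:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_train(stimulus: float, duration_ms: int = 1000) -> np.ndarray:
    """Each millisecond the neuron fires (1) or doesn't (0): every spike
    is all-or-none, but a stronger stimulus raises the firing *rate*."""
    rate_hz = min(200.0, 200.0 * stimulus)  # assumed saturating max rate
    return (rng.random(duration_ms) < rate_hz / 1000.0).astype(int)

for s in (0.1, 0.5, 1.0):
    print(f"stimulus {s:.1f} -> {spike_train(s).sum()} spikes/s")
```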

[–] [email protected] 9 points 11 months ago

Neuronal firing is often understood as a fundamentally binary process, because a neuron either fires an action potential or it does not. This is often referred to as the “all-or-none” principle.

Isn't this true of standard multi-bit neural networks too? This seems to be what a nonlinear activation function achieves: translating the input values into an all-or-nothing activation.

The characteristic of a 1-bit model is not that its activations are recorded in a single bit but that its weights are. There are no gradations of connection weights: they are just on or off. As far as I know, that's different from both standard neural nets and from how the brain works.
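
A minimal sketch of the distinction being drawn here, with made-up numbers (a hard threshold stands in for the usual smooth nonlinearity): in a standard net only the activation is thresholded, while in a 1-bit net the weights themselves carry no gradations.

```python
import numpy as np

x = np.array([0.3, -1.2, 0.8])            # input activations

# Standard net: continuous weights; the nonlinearity turns the
# weighted sum into an all-or-nothing *activation*.
w_full = np.array([0.71, -0.05, 0.42])
fires_full = (w_full @ x) > 0.0

# 1-bit net: the *weights* are restricted to {-1, +1}; connection
# strength itself has no gradations.
w_1bit = np.sign(w_full)
fires_1bit = (w_1bit @ x) > 0.0

print(fires_full, fires_1bit)
```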

[–] [email protected] 3 points 11 months ago

So what you're saying is that they're discrete in time and pulse-modulated, which can encode far more information than how NNs work on a processor.

[–] [email protected] 1 points 11 months ago

We really don't know jack shit, but we know more than enough to know that firing rate is hugely important.

[–] [email protected] 10 points 11 months ago* (last edited 11 months ago) (1 children)

The network architecture seems to create a virtualized hyperdimensional network on top of the actual network nodes, so the node precision really doesn't matter much as long as quantization occurs in pretraining.

If it's post-training quantization, it's degrading the precision of an already-encoded network, which is sometimes acceptable but always lossy. But done at the pretraining stage, it actually seems to be a net improvement over higher-precision weights, even if you throw efficiency concerns out the window.

You can see this in the perplexity graphs in the BitNet b1.58 paper.
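
For reference, a rough sketch of the ternary weight quantization the BitNet b1.58 paper describes (the absmean scheme, as I understand it; note that in BitNet this runs inside training rather than as a post-training pass, and the epsilon guard here is my own addition):

```python
import numpy as np

def absmean_ternary(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map weights to {-1, 0, +1}: scale by the mean absolute value,
    round, then clip into the ternary range."""
    gamma = np.mean(np.abs(w)) + 1e-8  # guard against all-zero weights
    return np.clip(np.round(w / gamma), -1, 1), gamma

w = np.random.default_rng(0).normal(size=(4, 4))
w_q, gamma = absmean_ternary(w)
print(w_q)  # gamma is kept around to undo the scaling downstream
```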

[–] [email protected] 6 points 11 months ago (1 children)

None of those words are in the Bible.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

No, but some alarmingly similar ideas are in the heretical stuff actually.

[–] [email protected] 4 points 11 months ago

We need to scale fusion

[–] [email protected] 3 points 11 months ago

Multi-bit models exist because that's how computers work, but there's been a lot of work on using, e.g., fixed-point instead of floating-point arithmetic for things like FPGAs, or shorter integer types, and the results are often more than good enough.
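
A minimal sketch of the fixed-point idea mentioned here (using an assumed Q8.8 format, i.e. 8 fractional bits; not tied to any particular FPGA toolchain): values are stored as scaled integers, and a multiply needs one shift to drop the doubled scale factor.

```python
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # Q8.8: value = integer / 256

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # (a*S) * (b*S) = a*b*S^2, so shift right once to get back to S.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(0.25)
print(fixed_mul(a, b) / SCALE)  # 0.375, computed with integer ops only
```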

[–] [email protected] 22 points 11 months ago (3 children)

Making AI more efficient will just mean more AI.

[–] [email protected] 33 points 11 months ago

Generative AI is great if used as a tool instead of a solution.

[–] [email protected] 10 points 11 months ago

Since I find AIs useful, that sounds fine to me.

[–] [email protected] 3 points 11 months ago
[–] [email protected] 11 points 11 months ago

Smaller and speedier means larger token windows and greater variety of models.

Not less energy.