ndru

joined 2 years ago
[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

I’ve previously argued that current-gen “AI” models built on transformers are just fancy predictive text, but as I’ve watched the models continue to grow in complexity, it does seem like something emergent that could be described as a type of intelligence is happening.

These current transformer models don’t possess any concept of truth and, as far as I understand it, that limitation is fundamental to their nature. It makes their applications far more limited than the hype train suggests, but it shouldn’t obscure quite how incredible they are at what they can do. A big enough statistical graph holds an unimaginably complex conceptual space.

They feel like a dream-state intelligence - a freewheeling conceptual synthesis where locally the concepts are consistent, while globally rules and logic are as flexible as they need to be to make everything make sense.

Some of the latest image and video transformers, in particular, are just mind-blowing, in a way that I think either deserves to be credited with a level of intelligence or should make us question more deeply what we mean by intelligence.

I find dreams to be a fascinating place. It often excites people to think that animals also dream, and I find it just as exciting that code running on silicon might be starting to share some of that nature of free-association conceptual generation.

Are we near AGI? Maybe. I don’t think a transformer model is about to spring into awareness, but maybe we’re only a few breakthroughs away from a technology which will pull all these pieces of domain-specific AI together into a working general intelligence.

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Squeezing a metal cylinder out my chute sounds a lot less pleasant than just pooping poop.

[–] [email protected] 3 points 1 month ago

A long list of refurb and land work, but my main focus the past couple weeks has been getting a kitchen garden growing in my new place. It’s been a few years since I’ve had my hands in the soil and it feels great to be growing again. Just a few beds for salad and greens right now, with a few new fruit trees, canes and bushes. I’d love to get my hands on a rotovator/cultivator to get some bigger bits of land in cultivation, but there’s limited cash and a long list of expenses.

[–] [email protected] 9 points 1 month ago

The warning is specifically for trans folk.

The government's website issued guidance for transgender travelers, saying that U.S. ESTA and visa application forms require travelers to declare their sex, which should reflect their biological sex at birth. Travelers with an "X" marker on their passport or whose gender differs from the one assigned at birth are advised to contact the U.S. Embassy in Dublin for further information on specific entry requirements.

[–] [email protected] 13 points 1 month ago (1 children)

Ah yes, one of my favourite quotes by Orreleeise: “Overcomine challenges and oeeence ine teisge and rivively renence verover re rescience”

[–] [email protected] 4 points 2 months ago

So unrealistic

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago)

I read a series of super interesting posts a few months back where someone was exploring the dimensional concept space in LLMs. The jumping-off point was the discovery of weird glitch tokens that would break GPTs, sending them into a tailspin of nonsense, but the author presented a really interesting deep dive into how concepts are clustered dimensionally, with some fascinating examples, explained, for me at least, in a very accessible manner. I don’t know if being able to identify those conceptual clusters of weights means we’re anywhere close to being able to manually tune them, but the series is well worth a read for the curious. There’s also a YouTube series which really dives into the nitty-gritty of LLMs, much of which goes over my head, but it helped me understand at least the outlines of how the magic happens.

(Excuse any confused terminology here, my knowledge level is interested amateur!)

Posts on glitch tokens, exploring how an LLM encodes concepts in multidimensional space: https://www.lesswrong.com/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology

The YouTube series is by 3Blue1Brown - https://m.youtube.com/@3blue1brown

This one is particularly relevant - https://m.youtube.com/watch?v=9-Jl0dxWQs8
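For anyone who wants to poke at this themselves, here’s a minimal sketch of the kind of exploration the posts describe - pulling a model’s token-embedding matrix and checking which tokens sit near each other in concept space. A rough sketch only, interested-amateur grade: it assumes the standard Hugging Face “gpt2” checkpoint, and “ SolidGoldMagikarp” is one of the glitch tokens from the linked posts.

```python
# Minimal sketch: find which tokens sit closest together in GPT-2's
# embedding space. Assumes the standard Hugging Face "gpt2" checkpoint;
# " SolidGoldMagikarp" is a glitch token discussed in the linked posts.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

# Token embedding matrix: one 768-dim vector per vocabulary entry,
# normalised so a plain dot product gives cosine similarity.
emb = model.wte.weight.detach()
emb = emb / emb.norm(dim=1, keepdim=True)

def neighbours(word, k=5):
    token_id = tokenizer.encode(word)[0]  # first token of the input
    sims = emb @ emb[token_id]            # similarity to every vocab token
    top = sims.topk(k + 1).indices[1:]    # drop the token itself
    return [tokenizer.decode([int(i)]) for i in top]

print(neighbours(" cat"))                # expect other animal-ish tokens
print(neighbours(" SolidGoldMagikarp"))  # a glitch token's odd neighbourhood
```

As I understand it from the posts, the glitch tokens stand out because they sit near the centroid of the whole embedding cloud rather than in any sensible cluster, which fits the theory that they were almost never seen during training.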

[–] [email protected] 4 points 9 months ago

I’ve never heard of Macs running embedded systems - I think that would be a pretty crazy waste of money - but Mac OS Server was a thing for years. My college campus was all Mac in the G4 iMac days, running Mac OS Server to administer the network. As far as I understand it, it was really solid and capable, but I guess it didn’t really fit Apple’s focus as their market moved from industry professionals to consumers, and they killed it.

[–] [email protected] 5 points 10 months ago

I’ve never not haven't neither

[–] [email protected] 4 points 11 months ago (1 children)

Oh noooo, the coal-exists-because-of-evolutionary-lag theory is one of my favourites. Continents colliding and creating wet tropical basins is cool too, but it’s not such a good story to tell.

[–] [email protected] 4 points 11 months ago (1 children)

Well, not until you brought it up.


From a site I've inherited which is full of things like this (and lots of other very !important things). Send help.


Reminders popping up every day from months ago, but I’ll definitely do them tomorrow.
