AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits
nothing to do with actual capabilities.. just the ability to make piles and piles of money.
The same way these capitalists evaluate human beings.
Guess we're never getting AGI then, there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.
AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .COM situation.
The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.
LLMs have their uses, but companies are pushing them into every area at the moment to see what sticks.
Not the person you replied to, but I think you're both "right". The ridiculous hype bubble (I'll call it that for sure) put "AI" everywhere, and most of those are useless gimmicks.
But there's also already uses that offer things I'd call novel and useful enough to have some staying power, which also means they'll be iterated on and improved to whatever degree there is useful stuff there.
(And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)
I think people misunderstand a bubble. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn't mean AI isn't useful, just that most of the chaff will disappear.
To each his own, but I use Copilot and the ChatGPT app productively on a daily basis. The Copilot integration with our SharePoint files is extremely helpful. I'm able to surface data that would not show up in a standard search of file names and indexed content.
To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.
LLMs clearly have utility, but is it enough to turn them into a profitable business line?
That's an Onion level of capitalism
The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.
Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.
If we ever get it, it won't be through LLMs.
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
There are already a few papers about diminishing returns in LLMs.
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically, it formalizes the proof that any black-box algorithm that is trained on a finite universe of human outputs to prompts, and is capable of taking in any finite input and producing an output that seems plausibly human-like, amounts to an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
What is your brain doing if not statistical text prediction?
The show Westworld portrayed this pretty well. The idea of jumping from text prediction to conscience doesn't seem that unlikely. It's basically text prediction in a loop with some exterior inputs to interact with.
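A toy sketch of that "text prediction in a loop" idea, with everything made up for illustration (the predictor here is a canned stub, nothing like a real model, so only the loop structure is the point):

```python
# Toy sketch: "text prediction in a loop with exterior inputs".
# predict_next() is a stand-in for a real language model; here it
# just echoes a canned continuation so the loop structure is visible.

def predict_next(context: str) -> str:
    """Pretend next-token predictor (real models score a whole vocabulary)."""
    canned = {"ping": "pong", "pong": "ping"}
    last_word = context.split()[-1]
    return canned.get(last_word, "...")

def agent_loop(seed: str, exterior_inputs: list) -> str:
    """Feed outside observations into a growing transcript, predicting after each."""
    transcript = seed
    for observation in exterior_inputs:
        transcript += " " + observation               # exterior input enters the loop
        transcript += " " + predict_next(transcript)  # model "responds"
    return transcript

print(agent_loop("start", ["ping", "pong"]))
```

The loop itself is trivial; whether looping a predictor gets you anywhere near consciousness is exactly what's being argued in this thread.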
How to tell me you're stuck in your own head and terminally online without telling me you're stuck in your own head and terminally online.
Why so rude?
Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion, to use as a weapon against a person you don't even know?
I will actually read it, probably making me the only one of the two of us who does.
If it's convincing, I may change my mind. I'm not a radical, unlike many other people, and my opinions are subject to change.
Funny to me how defensive you got so quickly, accusing me of not reading the linked paper before even reading it yourself.
The reason OP was so rude is that your very premise of "what is the brain doing if not statistical text prediction" is completely wrong and you don't even consider it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.
The brain uses words to describe thoughts, the words are not actually the thoughts themselves.
https://advances.massgeneral.org/neuro/journal.aspx?id=1096
Think about small children who haven't learned language yet: do those brains still do "statistical text prediction" despite not having words to predict?
What about dogs and cats and other "less intelligent" creatures? They don't use any words, but we can still teach them to understand ideas. You don't need to utter a single word, not even a sound, to train a dog to sit. Are they doing "statistical text prediction"?
Read other replies I gave on your same subject. I don't want to repeat myself.
But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thought. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don't understand, simply because their beautifully plastic brains heard the same phrase in the same context.
Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language, or maybe not just language but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (though I think a dog doesn't need to have intelligent consciousness for it to be bad to hit one). But I'm highly doubtful that dogs could develop a chain of thought that affects itself without external inputs, and that seems like a pretty important part of the experience of consciousness.
The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, of it just being the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don't even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take form from the words that it learns.
To give an easier example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand, and the press will only ever be able to print the letters that have been loaded into it.
The user I replied to not only had read the article but also kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and they have put me on the way to being convinced of that, at least regarding the limitations the article describes.
Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that; we don't only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn't mean we can't think of things we haven't learned about.
The article I linked talks about the separation between the formation of thoughts and those thoughts being translated into words for linguistics.
The fact that you "don't even know why how the brain creates an articulated spoken word is even relevant here" speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates, and the implications of there being thoughts behind the words rather than just guessing which word comes next based on other words whose meanings are irrelevant.
I can listen to a song long enough to learn the words, that doesn't mean I know what the song is about.
It's a basic argument about generative complexity. I found the article some years ago while trying to find an earlier one (I don't think by the same author) that argued along the same complexity lines, essentially saying that if we worked the way AI folks think we do, we'd need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching while generating (we don't have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the "adjust learning rate" level, but mechanisms that change the resulting coding, thereby creating different such contexts; at least that's where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.
As to "rudeness": make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to go on a denial spree... because if they a) were actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including "my retirement savings are in AI stock"), they c) would've come across the general argument themselves during their technological research. Or come up with it themselves; I've also seen examples of that. If you have a good intuition about complexity (and many programmers do), it's not an unlikely shower thought to have. Not as fleshed out as in the article, of course.
Human brains also do processing of audio, video, self-learning, feelings, and much more that is definitely not statistical text prediction. There are even people without an "inner monologue" who function just fine.
Some research does use LLMs in combination with other AI to get better results overall, but LLMs purely on their own aren't going to work.
What is your brain doing if not statistical text prediction?
Um, something wrong with your brain buddy? Because that's definitely not at all how mine works.
Then why did you just express yourself in such a statistically predictable manner?
You saw other people using that kind of language while being derogatory to someone they don't like on the internet. You saw yourself in the same context, and your brain statistically chose the set of words it has seen the most in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained on your same echo chamber.
Have you ever debated someone from the polar opposite end of the political spectrum and complained that "they just repeat the same propaganda"? Doesn't that sound like statistical prediction to you? Those are very simple cases; there can be more complex ones, but our simplest behaviors are the ones that define the basics of what we are made of.
If you had at least given me a more complex expression, you might have had an argument (as humans our process can be far more complex and hide a little of what we actually seem to be doing). But in instances like this one, when one person (you) responds with such an obvious statistical prediction of what needs to be said in a particular context, you just made my case. Thanks.
conscience
ok buddy
I just tried Google Gemini and it would not stop making shit up, it was really disappointing.
Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind
His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, and the people who disagree have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.
All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then; the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, most everything since has been just adding much more data, generalizing models, and other tweaks.
What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.
So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.
a series of switches is not ever going to create a sentient being
Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?
Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.
This is correct, and I don't think many serious people disagree with it.
If we ever get it, it won’t be through LLMs.
Well... it depends. LLMs alone, no. But the researchers working on solving the ARC-AGI challenge are using LLMs as a basis. The one that won this year is open source (all entries are, if they're eligible for the prize, and they need to run on the private data set) and was based on Mixtral. The "trick" is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows. The key to generality is to keep learning after you've been trained, to try to solve something you've not been prepared for.
Even OpenAI's o1 and o3 do that, and so does the one Google released recently. They still rely heavily on an LLM, but they do more.
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
I'm not sure whether it's already proven, or even provable, but I think this is generally agreed: deep learning alone can fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. The approaches aimed at generalizing all seem to do more than that, adding search, or program synthesis, or whatever.
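A minimal illustration of that "do more than fit the curve" point: spend extra compute at test time by taking several candidate answers and keeping only one a verifier accepts. Everything here is a made-up toy (the "model" is a list of arithmetic guesses), only the search-plus-verify shape is the point:

```python
# Toy "extra compute at test time": instead of trusting a single model
# output, generate several candidates and select with a cheap verifier.

def toy_model_candidates(task: int) -> list:
    """Stand-in for sampling a model several times: a spread of guesses,
    most of them wrong on purpose."""
    return [task * 2 + offset for offset in (-1, 1, 0, 2)]

def verifier(task: int, answer: int) -> bool:
    """A check the searcher can trust (here: exact arithmetic)."""
    return answer == task * 2

def solve_with_search(task: int):
    """Return the first candidate the verifier accepts, or None."""
    for candidate in toy_model_candidates(task):
        if verifier(task, candidate):
            return candidate
    return None

print(solve_with_search(21))  # -> 42
```

The raw "model" is unreliable on its own, but search plus verification recovers a correct answer; that is roughly the structure of the test-time-compute approaches described above, in miniature.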
I mean, human intelligence is ultimately also "just" something.
And 10 years ago people would often refer to the "Turing test" and imitation games when discussing what is artificial intelligence and what is not.
My complaint about what's now called AI is that it's as similar to intelligence as skin cells grown in the shape of a d*ck are to a real d*ck with all its complexity. Or as a life-size toy building is to a real building.
But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.
I'm not sure that not bullshitting should be a strict criterion of AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.
The LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning.
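To make that concrete, here's roughly what a model actually "sees". The vocabulary below is a made-up toy (real tokenizers work on subwords and have tens of thousands of entries), but the point stands: only integers go in.

```python
# Toy tokenizer: to the model, words are just integer IDs.
# This five-word vocabulary is invented purely for illustration.
vocab = {"the": 0, "cat": 1, "sat": 2, "lie": 3, "truth": 4}

def encode(text: str) -> list:
    """Map each word to its integer ID, which is all the model receives."""
    return [vocab[word] for word in text.split()]

print(encode("the cat sat"))  # [0, 1, 2]
# Nothing in [0, 1, 2] carries the meaning of "cat"; any structure the
# model exploits comes from statistics over sequences of such IDs.
```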
Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Isn't there not a single thing in the universe that can't be broken down into a mathematical equation in physics or chemistry? I'm curious how different the process is between a more advanced LLM or AGI model processing data and a severe savant memorizing libraries of books using their homemade mathematical algorithms. I know it's a leap and I could be wrong, but I thought I'd heard that some of the rainmaker-tier savants actually process every experience in a mathematical language.
Like I said in the beginning, this is straight-up bong-rips philosophy, and I haven't looked up any of the stuff I brought up.
I will say, though, I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that there is a niche it will stay useful within. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform any more independently than a 3-year-old.
It's impossible to disprove statements that are inherently unscientific.
We taught sand to do math
And now we're teaching it to dream
All the stupid fucks can think to do with it
Is sell more cars
Cars, and snake oil, and propaganda
"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.
This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.
Except that it isn't AGI.
But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved.
The real motivation is to not be beholden to Microsoft
That's not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It's reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.
Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?
So it'd take a 100M users to sign up for the $200/mo plan. All it'd take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
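For what it's worth, the back-of-the-envelope math behind that subscriber count (all assumptions mine: the $200/month plan, and a made-up profit margin, since the $100B contractual figure is profit, not revenue):

```python
# Rough subscriber math for the $100B figure at $200/month.
price_per_month = 200
target_dollars = 100e9  # the reported $100 billion threshold

revenue_per_user_year = price_per_month * 12            # $2,400/user/year
users_for_revenue = target_dollars / revenue_per_user_year

print(f"{users_for_revenue:,.0f} users for $100B/yr in revenue")
# ~41.7 million users covers $100B in *revenue*; since the threshold is
# profit, costs push the required count higher. At a hypothetical ~40%
# margin it is indeed on the order of 100M subscribers.
```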
Why does OpenAI "have" everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there... They have a definition of AGI... Yet they release none of it...
Some people even claim they already have a secret AGI, or that ChatGPT 5 will surely be it. I can see how that increases the company's value, and why you'd better not tell the truth. But with all the other things, it's just silly not to share anything.
Either they're even more greedy than the Metas and Googles out there, or all the articles and "leaks" are just unsubstantiated hype.
Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.
Because they don't have all the things they claim to have, or they come with significant caveats. These things are publicized to fuel the hype that attracts investor money, which is pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.
They don't have AGI. AGI also won't happen for a large number of years to come.
What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That's it.
AGI is a completely different beast from the current flavor of LLMs.
Does anyone have a real link to the non-stalkerware version of:
https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition
It's the only place with the reference this article claims to cite but doesn't quote.