The moment we learned I needed silver to conjure holy water, we started melting down every piece of cutlery and every chandelier.
webghost0101
Sorry, what is Musk in control of? He won capitalism in a way (with a lucky spawn), but he's not actually in charge of anything valuable or dangerous, is he?
Now Facebook has been involved in so many privacy scandals: uploading your contacts from email and phones, reading WhatsApp messages, collecting medical information from other apps. They must have the largest database of human behaviour in the world.
That + AI is really fucking scary…
You mean analog cables like those shown in the picture?
The only (consumer) digital audio cable I own is a USB headset, and apparently gold-plated USB is so stupid they're not even trying to sell those.
I am not convinced the meme author didn't know what they were doing, though; this could be ragebait.
Grab 'em, billionaire. /s
Is that a trick, or something we should learn to make common practice of?
A major issue with the way Lemmy is set up is that links to other instances simply don't work, because I can't log in there.
“I have seen”
I am sorry to be a sceptic, kind internet stranger, but may I also ask what you mean by “studied them a long time”?
Are these conclusions from cross-referencing reports, or would you say you were involved in official studies of the phenomenon?
For audio these do make some sense… for any others like HDMI, though…
Why not both?
“IT enthusiasts of Lemmy, how would you tackle x or y problem?”
Could lead to some interesting answers, but I agree that follow-up questions and expectations of actually getting the problem fixed should go somewhere else.
Well, there are two things.
First there is speed, for which they do indeed rely on many thousands of super-high-end industrial Nvidia GPUs. And since the $10 billion investment from Microsoft, they have likely expanded that capacity. I've read somewhere that ChatGPT costs about $700,000 a day to keep running.
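To put that daily figure in perspective, here is a quick back-of-the-envelope sketch. Only the $700,000/day number comes from the comment above; the GPU count is a made-up round number purely for illustration:

```python
# Back-of-the-envelope: what $700,000/day could mean per GPU.
# The fleet size below is hypothetical, not a real OpenAI figure.
daily_cost = 700_000    # reported estimate, USD per day
assumed_gpus = 10_000   # hypothetical number of GPUs

hourly_rate = daily_cost / assumed_gpus / 24  # USD per GPU-hour

print(f"${hourly_rate:.2f} per GPU-hour")  # ≈ $2.92
```

Even with generous assumptions, that lands in the same ballpark as cloud GPU rental pricing, which is why the number is at least plausible.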
There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.
For that quality of output they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality and more creative outputs.
I don't think GPT-4 is the biggest model out there, but it does appear to be the best that is available.
I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.
Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don't mind waiting a week for a single character to be generated.
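A rough sense of why bigger models don't fit on consumer cards: the weights alone have to sit in VRAM, and that scales with parameter count times bytes per weight. A minimal sketch, using LLaMA 7B as the example (GPT-4's size is not public, so it's left out); the numbers ignore activations, KV cache, and framework overhead, which all add more on top:

```python
# Rough VRAM needed just to hold a model's weights.
def weight_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Memory for the weights alone, in GiB."""
    return params_billions * 1e9 * bytes_per_weight / 1024**3

# LLaMA 7B in fp16 (2 bytes/weight) vs. 4-bit quantized (0.5 bytes/weight):
print(f"{weight_memory_gb(7, 2):.1f} GB")    # fp16: ~13.0 GB
print(f"{weight_memory_gb(7, 0.5):.1f} GB")  # 4-bit: ~3.3 GB
```

This is why quantization matters so much for home setups: fp16 7B already overflows an 8 GB card, while the 4-bit version fits comfortably.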
I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna and Orca are much more impressive if you judge them on quality versus power requirements.
KoboldAI is a program to run local LLMs; some seem on par with GPT-3, but normally you're gonna need a very beefy system to slowly run them.
The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5. Exponential growth ftw. I have yet to see something as good and fast as ChatGPT.
Yes. “That isn’t exactly free” indeed.
We give power to greed, and it has corrupted our institutions. We differentiate far too little between personal ownership (to maintain and survive) and private ownership (to profit and expand) when it comes to taxes.
Any system where people will die from lack of resources should be abolished.