coolin

joined 2 years ago
[–] [email protected] 1 points 2 years ago

Hello, kids! Pirates are very bad! Never use qBittorrent to download copyrighted material, and certainly do NOT connect it to a VPN to avoid getting caught. And you should NEVER download illegal material over an HTTPS connection, because it's fully encrypted and you won't get caught!

[–] [email protected] 9 points 2 years ago

This is another reminder that the anomalous magnetic moment of the muon was recalculated by two different groups using higher-precision lattice QCD techniques, and the result wasn't found to be significantly different from the Brookhaven/Fermilab measurement, undercutting the supposed "discrepancy". More work needs to be done to check for errors in both the original and the newer calculations, but it seems quite likely to me that this will ultimately confirm the standard model exactly as we know it, providing no new insight and no new force particle.

My hunch is that unknown particles like dark matter rely on a relatively simple extension of the standard model (e.g. supersymmetry, axions, etc.), and that the new physics out there combining gravity and QM is something completely different from what we are currently working on, which can't be observed with current colliders or any other experiments on Earth.

So we will probably continue finding nothing interesting for quite some time, until we can get a large ML model crunching every possible model to check for fit against the data, and hopefully derive some better insight from there.
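To be concrete about what "crunching every possible model to check for fit" could even mean, here's a toy sketch: scan a family of candidate models and rank them by chi-squared fit against measured data. Everything here (the data points, the candidate model family, the names) is invented for illustration, and a real analysis would be vastly more involved:

```python
# Toy sketch of brute-force model comparison: rank candidate models by
# chi-squared goodness of fit against some measurements. All data and
# candidate models below are made up for illustration.

def chi_squared(predict, data):
    """Sum of squared residuals, weighted by measurement uncertainty."""
    return sum(((predict(x) - y) / err) ** 2 for x, y, err in data)

# Fake "measurements": (x, observed value, uncertainty)
data = [(1.0, 2.1, 0.1), (2.0, 4.0, 0.1), (3.0, 5.8, 0.1)]

# A crude "model space": linear models with different slopes.
candidates = {f"slope={s:.1f}": (lambda x, s=s: s * x) for s in (1.8, 2.0, 2.2)}

# Rank candidates from best fit (lowest chi-squared) to worst.
ranked = sorted(candidates, key=lambda name: chi_squared(candidates[name], data))
best = ranked[0]
```

The hand-wavy hope in the comment is that an ML system could search a far larger and weirder model space than this kind of exhaustive scan ever could.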

Though I'm not an expert and I'm talking out of my ass so take this all with a grain of salt.

[–] [email protected] 10 points 2 years ago

Sam Altman: We are moving our headquarters to Japan

[–] [email protected] 4 points 2 years ago (1 children)

For the love of God please stop posting the same story about AI model collapse. This paper has been out since May, been discussed multiple times, and the scenario it presents is highly unrealistic.

Training on the whole internet is known to produce shit model output, requiring humans to produce their own high quality datasets to feed to these models to yield high quality results. That is why we have techniques like fine-tuning, LoRAs and RLHF as well as countless datasets to feed to models.

Yes, if a model for some reason was trained on raw internet output for several iterations, it would collapse and produce garbage. But the current frontier approach for datasets is for strong LLMs (e.g. GPT-4) to produce high quality datasets and for new LLMs to train on those. This has been shown to work with Phi-1 (really good at writing Python code, trained on high quality textbook-level content generated with GPT-3.5) and Orca/OpenOrca (a GPT-3.5 level model trained on millions of examples from GPT-4 and GPT-3.5). Additionally, GPT-4 has itself likely been trained on synthetic data, and future iterations will train on more and more.

Notably, by selecting a narrow range of outputs, instead of the whole range, we are able to avoid model collapse and in fact produce even better outputs.
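The "select a narrow range of outputs" idea boils down to curation: generate lots of synthetic samples, score them, and only keep the top slice for the next round of training. Here's a minimal sketch; the scoring function and samples are stand-ins I made up (real pipelines use learned reward models or classifier filters, not string length):

```python
# Minimal sketch of curating synthetic training data: keep only the
# highest-scoring fraction of generated samples. The score function and
# samples below are hypothetical stand-ins for illustration.

def select_top_fraction(samples, score_fn, keep=0.1):
    """Return the highest-scoring `keep` fraction of samples."""
    ranked = sorted(samples, key=score_fn, reverse=True)
    cutoff = max(1, int(len(ranked) * keep))
    return ranked[:cutoff]

# Toy example: pretend "quality" is just answer length.
synthetic = ["ok", "pretty good answer", "x", "a long, detailed, high quality answer"]
curated = select_top_fraction(synthetic, score_fn=len, keep=0.5)
```

Training only on the curated slice is what keeps the distribution from degenerating over successive generations, at least in the optimistic reading.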

[–] [email protected] 21 points 2 years ago* (last edited 2 years ago)

We have no moat and neither does OpenAI is the leaked document you're talking about

It's a pretty interesting read. Time will tell if it's right, but given the speed of stackable advancements I'm seeing in the open source community, I think it could be. If open source figures out scalable distributed training, I think it's Joever for AI companies.

[–] [email protected] 5 points 2 years ago

Shit, anyone working for less than $20 packing boxes is getting scammed, cause I know for a fact several places offer more than that. It just goes to show the importance of having a union to bargain for higher wages.

[–] [email protected] 9 points 2 years ago (1 children)

I don't know what type of chatbots these companies are using, but I've literally never had a good experience with them, and that makes no sense considering how advanced even something like OpenOrca 13B (GPT-3.5 level) is, which can run on a single graphics card in some company server room. Most of the ones I've talked to are from some random AI startup and have cookie-cutter preprogrammed text responses that feel less like LLMs and more like a flow chart with a rudimentary classifier to select an appropriate response. We have LLMs that can handle the more complex human tasks of figuring out problems and suggesting solutions, and that can query a company database to respond correctly, but we don't use them.
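For a sense of what that "flow chart plus rudimentary classifier" pattern looks like in practice, here's a sketch. The intents, keywords, and canned responses are all invented for illustration, but this keyword-matching structure is roughly the level of sophistication being complained about:

```python
# Sketch of a flow-chart style support bot: match keywords to an
# intent, then return a canned response. All intents and responses
# here are hypothetical examples.

CANNED = {
    "billing": "Please check the Billing page for invoice questions.",
    "shipping": "Your order status is available under My Orders.",
    "fallback": "Sorry, I didn't understand. Connecting you to an agent.",
}

KEYWORDS = {
    "billing": {"invoice", "charge", "refund"},
    "shipping": {"delivery", "shipping", "track"},
}

def classify(message):
    """Pick the first intent whose keyword set overlaps the message."""
    words = set(message.lower().split())
    for intent, kws in KEYWORDS.items():
        if words & kws:
            return intent
    return "fallback"

def respond(message):
    return CANNED[classify(message)]
```

Anything off-script falls straight through to the fallback, which is exactly the brittle behavior an actual LLM with database access could avoid.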

[–] [email protected] 6 points 2 years ago

This makes sense for any other company, but OpenAI is still technically a non-profit in control of the OpenAI corporation, the part that is actually a business and can raise capital. Considering Altman claims literal trillions in wealth would be generated by future GPT versions, I don't think the non-profit would ever sell the company part for a measly few billion.

[–] [email protected] 3 points 2 years ago

Lmao Twitter is not that hard to create. Literally look at the Mastodon code base and "transform" it and you're already most of the way there.

[–] [email protected] 1 points 2 years ago

I used to be on GrapheneOS, but the drama with the developer, plus mainly not being able to put my university ID in the wallet, forced me back to stock Android.

Besides Android, I use Google Play Store, YouTube, and Maps. For YouTube I've technically degoogled, using Invidious and NewPipe, but that's obviously still using Google services.

I really wish that digital payment didn't rely on two proprietary services (Google Wallet and Apple Wallet). It would be so much easier for phone companies to ship privacy-friendly versions of Android if there were a FOSS alternative directly integrated into AOSP. I also wish apps didn't have to use Google Services Framework just to function, it seems stupid af. I don't think this will ever improve, so I'll probably end up on a true Linux phone whenever those catch up (2030 YEAR OF THE LINUX PHONE???)

We also need open collaboration on mapping. There are OpenStreetMap and Overture Maps from the Linux Foundation, but those aren't really there yet, unfortunately.

[–] [email protected] 6 points 2 years ago (1 children)

I really hate the state of the Supreme Court atm. Looking back, it wasn't a legitimate institution from the beginning, but the current 6-3 court shows how flawed it is, being out of line with public opinion in loads of different cases and effectively legislating from the bench via judicial review.

The only reason it has gotten this bad, though, is because Congress has abdicated its responsibilities as a legislative body and left more and more to executive orders and court decisions. The entire debate around the Dobbs decision could have been avoided if Dems had codified abortion into law, and this one could have been avoided too if our Congress actually went to work legislating a solution to the ongoing student loan and college affordability crisis.

I think we need supreme court reform. I'm particularly partial to the idea of having a rotating bench pulled randomly from the lower courts each term, with each party in Congress getting a certain amount of strike outs to take people off that they don't want, similar to the way jurors are selected. I also think the people should be able to overrule the court via referendum, because ultimately we should decide what the constitution says.

I just can't see this happening though, at least for multiple decades until the younger people today get into political power.
