TauZero

joined 2 years ago
[–] [email protected] 0 points 1 year ago

The credit companies do not insure against fraud; they simply take the money out of the merchant's account and put it back into yours. Now it's the merchant who has no recourse if they have already shipped the product. So the only difference between CC and crypto is who is typically left holding an empty bag in case of theft: the payer or the payee. Certainly not the banks!

I'd argue that in terms of assigning responsibility, it seems fairer to expect you, the customer, to keep your digital wallet secure from thieves than to expect the merchant to guess, every time, whether the visitor to their online store happens to be using a stolen credit card.

[–] [email protected] 2 points 1 year ago

Question: do the Japanese actually care about privacy? I know I do, but if you were to ask a Japanese person why their country uses cash, would they say "We have considered a system of payment cards and decided against it for privacy reasons", or would they just shrug and say "I dunno, I'm not in charge of payment systems, I use what I have"?

[–] [email protected] 1 points 1 year ago

A funny culprit I found during my own investigation was the GFCI bathroom outlet, which draws an impressive 4 W. The status light plus whatever trickle current it needs to do its job thus dwarfs the standby power of any other electronic device.

[–] [email protected] 3 points 1 year ago

That's how I found out that my desktop speakers consume power even with the physical button off and the status light dark. The power brick stays warm indefinitely; it feels like a good 20 W! I have to unplug the thing now when not in use. A normal power brick would be under 1 W, of course.

[–] [email protected] 2 points 1 year ago

Some notes for my own use. As I understand it, there are three layers of "AI" involved:

The 1st is a "transformer", a type of neural network invented in 2017, which led to the greatly successful "generative pre-trained transformers" of recent years like GPT-4 and ChatGPT. The one used here is a toy model, with only a single hidden layer ("MLP" = "multilayer perceptron") of 512 nodes (also referred to as "neurons" or "dimensionality"). The model is trained on the dataset called "Pile", a collection of 886GB text from all kinds of sources. The dataset is "tokenized" (pre-processed) into 100 billion tokens by converting words or word fragments into numbers for easier calculation. You can see an example of what the text data looks like here. The transformer learns from this data.

In the paper, the researchers do cajole the transformer into generating text to help understand its workings. I am not quite sure yet whether every transformer is automatically a generator, like ChatGPT, or whether it needs something extra done to it. I would have enjoyed seeing more sample text that the toy model can generate! It looks surprisingly capable despite having only 512 nodes in the hidden layer. There is probably a way to download the model and execute it locally. Would it have been possible to add the generative model as a JavaScript toy to supplement the visualizer?

The main transformer they use is "model A", and they also trained a twin transformer, "model B", on the same text but with a different random initialization seed, to see whether the two would develop equivalent semantic features (they did).

The 2nd AI is an "autoencoder", a different type of neural network that is good at converting the data fed to it into a "more efficient representation", like a lossy compressor/zip archiver (or maybe in this case "decompressor" would be the more apt term). Encoding is also called "changing the dimensionality" of the data. The researchers trained/tuned this 2nd AI to decompose the 1st AI's neuron activations into a number of semantic features, in a way that both captures a good chunk of the model's information content and keeps the features sensible to humans. The target number of features is tunable anywhere from 512 (1-to-1) to 131072 (1-to-256). The number they found most useful in this case was 4096.
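The "decompressor" direction can be sketched like this: 512 neuron activations go in, a larger (ideally sparse) set of feature activations comes out, and a decoder maps the features back. This is only an illustration of the idea with random weights, using the 4096-feature setting from the paper; the real autoencoder is trained with a reconstruction-plus-sparsity objective that is not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_features = 512, 4096   # a 1-to-8 expansion; the paper tries 512 up to 131072

W_enc = rng.standard_normal((n_neurons, n_features)) * 0.02
W_dec = rng.standard_normal((n_features, n_neurons)) * 0.02
b_enc = np.zeros(n_features)

def encode(acts):
    """Map raw neuron activations to feature activations (ReLU keeps them non-negative)."""
    return np.maximum(0, acts @ W_enc + b_enc)

def decode(feats):
    """Reconstruct the original 512 activations from the features."""
    return feats @ W_dec

acts = rng.standard_normal(n_neurons)
feats = encode(acts)
recon = decode(feats)
print(feats.shape, recon.shape)   # (4096,) (512,)
```

Training pushes `recon` to match `acts` while keeping most entries of `feats` at zero, which is what makes the individual features human-interpretable.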

The 3rd AI is a "large language model" nicknamed Claude, similar to GPT-4, which Anthropic has developed for its own use. They told it to annotate and interpret the features found by the 2nd AI. They had one researcher slowly annotate 412 features manually for comparison; Claude did as well as or better than the human, so they let it finish all the rest on its own. These are the descriptions shown in the visualization in the OP link.
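As I understand the annotation step, it amounts to showing the LLM the text snippets where a feature fires most strongly and asking for a one-line description. A hypothetical sketch of just the prompt construction (the snippets below are made up, and the actual API call to the annotating model is omitted):

```python
def build_annotation_prompt(top_snippets):
    """Assemble a prompt asking an LLM to describe what a feature responds to."""
    examples = "\n".join(f"- {s}" for s in top_snippets)
    return (
        "The following text excerpts all strongly activate one feature of a "
        "language model. Describe in one sentence what the feature responds to:\n"
        + examples
    )

# Made-up snippets for a hypothetical "legal language" feature:
prompt = build_annotation_prompt([
    "the court ruled that the statute",
    "pursuant to section 4 of the act",
    "the defendant's appeal was denied",
])
print(prompt.splitlines()[0])
```

The returned one-sentence descriptions are then what you see attached to each feature in the visualizer.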

Pretty cool how they use one AI to disassemble another AI and then use a 3rd AI to describe it in human terms!

[–] [email protected] 2 points 1 year ago

Am I the only one for whom "open"subtitles.org hasn't worked in years? I literally cannot find the download button, like in those okboomer memes. Never used the API. Switched to subscene.com and haven't had problems since.

[–] [email protected] 1 points 1 year ago

Oh, I was well aware what community I was in 😁. I hate cars and exclusively ride bikes myself, and here I was making a joke about how I managed to get !fuck_cars, of all places, to downvote me for not watching Fox News. All because my groupthink is not exactly identical to their groupthink (I am not the grandparent commenter, btw).

The secret is that karma does not matter anywhere! However, as long as the comment sorting algorithm works the way it does, I will keep believing that the downvote button is for non-constructive contributions, not for disagreement. Burying discussion is not constructive, but that's what the algorithm does. Maybe this is a hopeless task, but I wish that after a conversation I had learned something new, or taught someone something, not just made myself feel better.

[–] [email protected] 1 points 1 year ago (1 children)

The joke is that the orbit was clearly originally reported in kilometers, but the article editor "helpfully" converted it to miles and reported miles as the default, and it makes no sense now because the same "miles" figure corresponds to two different "kilometers" figures.

[–] [email protected] 8 points 1 year ago

The show For All Mankind did a good take on the problem, IMO. Being gay wasn't illegal per se, but gay people could not be employed at NASA. They still joined, but they kept their orientation hidden. Then the security forces used the justification that gay people keeping secrets were vulnerable to blackmail both to go on witch hunts to root them out and to defend the employment ban in the first place. It was a circular argument through and through; the base reason has always been prejudice. It didn't help that in the show there were real Soviet spies running around trying to find gay employees to extort for NASA rocket secrets.

[–] [email protected] 2 points 1 year ago (2 children)

They said the same thing to justify banning gays from working for the government or serving in the military.

[–] [email protected] 2 points 1 year ago (1 children)

only pertain to hiring of individuals

Not true. Title II of the Civil Rights Act (1964) prohibits discrimination in public accommodations (such as hotels, restaurants, and other establishments that serve the public), as affirmed to be enforceable by the Supreme Court in, for example, Heart of Atlanta Motel, Inc. v. United States (1964).

[–] [email protected] 1 points 1 year ago

That's the Supreme Court for ya! Their judgments do tend to meander and sometimes flip over the years, especially recently. You are probably referring to the Masterpiece Cakeshop (2018) decision being different from the civil-rights-era cases, like, say, Newman v. Piggie Park Enterprises, Inc. (1968), where the defendant, who did not want to serve black customers at his BBQ restaurants, unsuccessfully argued that "the Civil Rights Act violated his freedom of religion as his religious beliefs compel him to oppose any integration of the races whatever." It is still enlightening to read the actual court decisions and the justifications used to arrive at one conclusion or another, and especially their explanations of how the current case differs from all the cases decided before. After a while, though, it does start to look as if you could argue for any point of view whatsoever if you argued hard enough.
