[-] [email protected] 2 points 21 minutes ago

Well let's hear some suggestions then.

[-] [email protected] 2 points 1 hour ago

I think they should be given a pass - same way women-only gyms get one. That hypothetical person of color has a million places they can go and just one where they’re not welcome. Seems like a fair trade to me.

[-] [email protected] 2 points 2 hours ago

Stay strong, brother. Out of solidarity, I’m going to go label a few more things with my new white paint marker - purely out of spite.

[-] [email protected] 1 point 2 hours ago

I mean - it’s certainly possible, but you’d still be risking that 500k prize if you got caught.

And most people seem to tap out because of loneliness or starvation, so if you were going to cheat, you’d pretty much have to smuggle in either food or a better way of getting it - like a decent fishing rod and proper lures.

[-] [email protected] 2 points 2 hours ago

I've put things in my ass for no points. 1000 points sure sounds worth it.

[-] [email protected] 7 points 2 hours ago

They do regular health check-ins with the contestants, and if you’re not losing weight but there’s no footage of you catching food, they’re going to figure out pretty quickly that something’s up.

On top of that, the locations are chosen so that just hiking out to you with food would be a survival challenge in itself - and coming in by boat would almost certainly be noticed.

Interestingly, I've just been binge-watching the show for the first time. I'm on season 5 currently.

[-] [email protected] 6 points 3 hours ago

Two
spaces
before
you
press
enter.

[-] [email protected] 3 points 3 hours ago* (last edited 3 hours ago)

Mine is complaining that I'm way too excited about my new white paint marker and number 64 rubber bands. I just don't get women.

[-] [email protected] 6 points 3 hours ago

What are you suggesting, exactly? Do you have an actual solution to offer, or do you just want to be a smartass?

[-] [email protected] 17 points 5 hours ago* (last edited 3 hours ago)

Not sure what the article is getting at, but there’s a thing called “weaponized empathy” - or “concern trolling” - which is a bad-faith argumentation tactic where you pretend to be worried about someone, when in reality you’re just using that as a cover for judgment or hostility.

It can also be used more broadly. Think of how often “think of the children” gets trotted out as a justification to invade people’s privacy, when the supposed concern for kids’ wellbeing is really just an excuse.

[-] [email protected] 28 points 11 hours ago

When people have sex, they usually do it in private, without any witnesses. Whatever happens during that time is often difficult to prove afterward, since it typically comes down to one person’s word against the other’s. Unless there’s clear physical evidence of assault, it can be extremely hard to establish that something was done against someone’s will. Most reasonable people would agree that “she said so” alone doesn’t amount to proof - and isn’t, by itself, a valid basis for sending someone to prison.

[-] [email protected] 75 points 11 hours ago

"If we just trusted women"

We don't trust people based on their gender. We trust them based on credibility and evidence. If there's even the tiniest amount of doubt, then it's better to let the guilty walk free rather than put an innocent person in jail. And I'm speaking broadly here - not about Trump specifically.

289
submitted 5 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
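To make that last point concrete, here's a minimal sketch of the "continue a prompt with plausible text" idea - a toy bigram model in Python. This is my own deliberately crude stand-in, nothing like a real LLM (which uses a neural network over tokens, not a word-count table), but it shows the same basic principle: the model only knows which words tend to follow which, and it samples a statistically plausible continuation.

```python
import random
from collections import defaultdict

# Tiny "training corpus" - the only thing the model ever sees.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which (the "patterns in language").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_prompt(word, n=5, seed=0):
    """Continue a one-word prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: this word never appeared mid-corpus
            break
        out.append(rng.choice(candidates))  # plausible, not "true"
    return " ".join(out)

print(continue_prompt("the"))
```

Note that nothing here checks whether the output is factually correct - only whether each word is a plausible continuation of the previous one. That's the side effect described above, just at toy scale.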

100
submitted 1 week ago by [email protected] to c/[email protected]

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.


Perspectivist

0 post score
0 comment score
joined 1 week ago