FermiEstimate

joined 1 year ago
[–] [email protected] 15 points 9 months ago (5 children)

You're forgetting mass shooters, i.e., the people who don't care if they're identified or if they're getting a good price. Safe to say they're not worried about their credit rating if the plan is to take on a SWAT team in 20 minutes.

[–] [email protected] 2 points 9 months ago (2 children)

Is it actually good against tanks now? I always liked it, but it still hurt to finally get a shot off just for the tank to shrug off the hit.

[–] [email protected] 30 points 9 months ago (3 children)

American Rounds

What, was the Circus of Values brand too expensive to license?

[–] [email protected] 12 points 9 months ago

Oh, hey, I've run into this in the wild--the Kalendar AI people keep ineptly trying to start a conversation to sell some kind of kiosk software by referencing factoids they scraped from our latest press release. They've clearly spent more effort on evading spam filters and rotating domains than they have on anything else, but they helpfully use "human" names ending in "Kai," so creating a wildcard filter wasn't too hard.

Credit where it's due: I'd never heard of Kalendar or the software company that hired them, but this experience has told me everything I need to know about both of them. If you don't sweat the details and rate sentiment change using absolute value, that's kind of impressive.

[–] [email protected] 13 points 10 months ago (1 children)

> Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”

We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of "exchanges" acceptable for telling someone their loved one is in hell? Like, is 20 in "Hey, not cool" territory, but it's all good once you get to 50? 40?

> Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”

> “They had already fully established that he wasn't in heaven,” Rohrer said.

Always a good sign when your best defense of the horrible thing your chatbot says is that it's in context.

[–] [email protected] 9 points 10 months ago

I'm just going to pretend that's one of the researchers from Where Oaken Hearts Do Gather.

[–] [email protected] 30 points 10 months ago (1 children)

> I conclude that scheming is a disturbingly plausible outcome of using baseline machine learning methods to train goal-directed AIs sophisticated enough to scheme (my subjective probability on such an outcome, given these conditions, is ~25%).

Out: vibes and guesswork

In: "subjective probability"

[–] [email protected] 3 points 10 months ago

I felt the exact same way about the conversation you mentioned. I really liked the idea of the quest, but the way they handled it just utterly drained all the stakes. And as you noted, it's weird to see a misstep like this after they nailed it once in Sumeru.

[–] [email protected] 15 points 10 months ago

"We're all in grave danger! What? Well no, we can't give specifics unless we risk not getting paid. Signed, Anonymous"

I mean, I wasn't exactly expecting the Einstein-Szilard letter 2.0 when I clicked that link, but this is pathetic.

[–] [email protected] 47 points 10 months ago (11 children)

lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren't fit for purpose, but he's 100% certain someone else will somehow solve this problem:

> So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?
>
> I think solving the AI hallucination problem — I think that’ll be fixed.
>
> But I guess my question is by who? Is it by you, or is it somewhere down the stack?
>
> It’s someone down the stack.
>
> Okay.
>
> I think either from the chip level or from the LLM itself.

[–] [email protected] 5 points 11 months ago (1 children)

You get medals and requisition points from playing, and requisition points let you unlock new stratagems, which include everything from weapons to orbital bombardments. Medals get you new weapons, cosmetics, etc. You also find samples you can collect on missions, and these unlock permanent upgrades for stratagems. There are player levels, but these just unlock new titles once you get past the basics.

The battle pass equivalent is Warbonds, which include new weapons, armor, cosmetics, etc. Unlike most games, warbonds don't expire and you can find enough premium currency while playing to get them without too much trouble.

On the whole, new warbond weapons tend to be different rather than obvious upgrades. The default assault rifle you get stays perfectly viable throughout the game.

[–] [email protected] 19 points 11 months ago (4 children)

> All I really know is shoot bug and if you aren’t getting friendly fired to hell and back you’re playing wrong

You've pretty much got it down, though you also shoot terminators.
