Eyron

joined 2 years ago
[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

> They also fired all their park workers during covid and gave themselves 10 million bonuses while their workers were surviving on food stamps. Some workers had even signed non compete clauses so they literally could not use their talents elsewhere to feed themselves.

There are plenty of things to hate Disney for, especially as they approach super-monopoly status, ruin nearly every franchise they touch, and have trouble telling what's good or not. As a company, Disney's morals and decisions grow more concerning every month. Disney is basically a disaster in progress.

However, this specific complaint seems off: it's the wrong scale. Many companies were in the wrong during COVID, but it's hard to look at these numbers and blame the layoffs on $10M in bonuses. The scales are just too different.

Disney laid off 32,000 park workers. At a measly 40 hours per week at their "minimum wage" (formerly $15/hr, now $24/hr), that's $83.2 million PER MONTH: $998M a year. A $10M "bonus" is about 1% of that, and even smaller next to the $6.4B in park revenue they lost.
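For anyone who wants to check that math, here's a quick sketch of where those figures come from (using only the worker count, hours, and the old $15/hr rate mentioned above):

```python
# Rough payroll math behind the figures above: 32,000 workers at 40 hr/week
# and the former $15/hr "minimum wage". All inputs come from the comment itself.
workers = 32_000
hours_per_week = 40
wage = 15  # dollars per hour

weekly = workers * hours_per_week * wage   # $19.2M per week
monthly = weekly * 52 / 12                 # ~$83.2M per month
yearly = weekly * 52                       # ~$998.4M per year
bonus = 10_000_000

print(f"monthly payroll: ${monthly / 1e6:.1f}M")
print(f"yearly payroll:  ${yearly / 1e6:.1f}M")
print(f"$10M bonus vs yearly payroll: {bonus / yearly:.1%}")
```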

The former CEO "gave up" their salary ($3M) and "bonus" ($45M in 2019), the executive staff took 20-30% pay cuts, and there were a few other items. The CEO did get "$10M" in stock awards, but stock awards don't get you off food stamps: those stocks become worthless if the company posts bad financials, which would hurt more than just the execs.

The $1.5B dividend payout in April 2020 looks much worse. Abigail Disney ranted about it on Twitter (now X). Her rant is at the appropriate scale: Disney paid out billions before choosing to save millions. The execs got quite a bit of that dividend payout. That's the greed.

[–] [email protected] 2 points 3 weeks ago

Did you purposely miss the first and last questions: Which laptop is the good value?

I never said people need to run LLMs. I said Apple dominates high-end laptops and that I wanted a good high-end alternative to compare to the high-end MacBooks.

Instead of just complaining about Apple, can you do what I asked? The best cheaper laptop alternative that checks the non-LLM boxes I mentioned:

If you want good cooling, good power (CPU and GPU), good screen, good keyboard, good battery, good WiFi, etc., the options get limited quickly.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Is there a particular model you're thinking of? Not just the line. I usually find that Windows laptops don't have enough cooling or make other sacrifices. If you want good cooling, good power (CPU and GPU), good screen, good keyboard, good battery, good WiFi, etc., the options get limited quickly.

Even the RAM cost misses some of the picture. Apple Silicon's RAM is available to the GPU and can run local LLMs and other machine learning models. Pre-AI-hype Macs from 2021 (maybe 2020) already had this hardware. Compare that to PC laptops from the same era. Even in this era, try getting Apple's 200-400GB/s RAM performance on a PC laptop.
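To see why that bandwidth number matters for local LLMs, here's a rough back-of-the-envelope sketch; the 7B parameter count and 4-bit quantization are assumptions for illustration, not anything from the thread:

```python
# Decoding a local LLM is usually memory-bandwidth bound, so tokens/sec is
# roughly bandwidth divided by the bytes read per token (~ the model size).
# The model size and quantization below are illustrative assumptions.
model_params = 7e9        # assumed 7B-parameter model
bytes_per_param = 0.5     # assumed 4-bit quantization
model_bytes = model_params * bytes_per_param  # ~3.5 GB

for bandwidth_gb_s in (100, 200, 400):  # 200-400 GB/s is the Apple figure above
    tokens_per_sec = bandwidth_gb_s * 1e9 / model_bytes
    print(f"{bandwidth_gb_s} GB/s -> ~{tokens_per_sec:.0f} tokens/s upper bound")
```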

PC desktop hardware is the most flexible option for any budget and is cost-effective for most budgets. For laptops, Apple dominates their price points, even pre-Apple-silicon.

The OS becomes the final nail in the coffin. Linux is great, but a lot of software still only supports Windows and macOS, and Linux support for the latest hardware can be hit or miss (my three-year-old, 12th-gen ThinkPad only recently started running well). If the choice is between macOS and Windows 11, is there much of a choice? Does that change if a company wants to buy, manage, and support it? Which model should we be looking at? It's about time to replace my ThinkPad.

[–] [email protected] 1 points 3 months ago

Technically, it might be faster, but that's not usually the reason. Email servers generally have to do a lot of work to confirm email messages are not spam. That work usually takes significantly longer than any potential DNS savings. In fact, that spam checking is probably the reason you see the secondary domains used.

When the main domain is used for many purposes (servers, users, printers, vendor communications, accounting communications, and so forth), it leaves a lot of room for misuse. Many pre-ransomware viruses would just send out thousands of emails per hour, and a mass-mailing server can drag down the domain's reputation all by itself. There are just so many ways to tarnish the reputation of your email server or your email domain.

Many spam analysis systems group the subdomains and the domain together: the subdomains contribute to the domain's score, and the domain's score contributes to the subdomains' scores. To send a lot of email successfully, both your servers and your domains need a very strong reputation, and any marks against that reputation might keep messages from reaching users. When that much email needs to be controlled, it's hard to get everyone in the organization to follow the email rules (especially when the problems aren't users, but viruses/hackers) and easy to just register a new, more strictly controlled domain.

Some of the recent changes in email policies/tech might change the game, but old habits die hard. Separate domains still generally get delivered more reliably, have potential security benefits, and can often work around IT or policy restrictions. They might phase out, but they might not. The benefit usually outweighs the slight disadvantage that 99% of people won't even see.
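As a rough illustration of the "separate, strictly controlled sending domain" idea, here's a minimal sketch that checks which reputation-related DNS records a sending (sub)domain publishes. It assumes the third-party dnspython package, and mail.example.com is just a hypothetical sending subdomain:

```python
# Minimal sketch: look up the SPF and DMARC TXT records a sending (sub)domain
# publishes. Requires the third-party "dnspython" package; the domain name
# below is a placeholder.
import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

sending_domain = "mail.example.com"  # hypothetical dedicated sending subdomain

spf = [r for r in txt_records(sending_domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{sending_domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```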

tl;dr

Better controlled email reputation.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

You should probably read and know the actual law rather than just getting it close. You're probably referring to 18 USC 922(d)(10), which covers any felony, not just shooting someone. That's one of 11 listed conditions in that subsection, and it assumes the first requirement, (a)(1), is met: the transaction isn't interstate or foreign. There's a lot more to it than just "as long as you don't have good evidence they're going to go shoot someone."

Even after the sale, ownership is still illegal under section (g)-- it just isn't the seller's fault anymore.

This is basic information that should be known to any gun safety advocate. "Responsible" gun owners must know those laws, plus others backward and forward. One small slip-up is a felony, jail, and permanent loss of gun ownership/use. Are they really supposed to listen to those who can't even talk about current law correctly?

The law can be better, but you won't do yourself any favors by misrepresenting it.

[–] [email protected] 6 points 6 months ago (2 children)

> Voyager - if I didn’t love Voyager Janeway would kick my ass.

No need for threats. Voyager is good.

Blink twice if you need help.

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago)

It seems you are mixing up the concepts of voting systems and candidate selection. Neither FPP nor FPTP should sound scary. As a voting system, FPP works well enough more often than many want to admit. The name just describes it in more detail: First Preference Plurality.

Every voting system is as bottom-up or top-down as the candidate selection process. The voting system itself doesn't really affect whether it is top down or bottom up. Requiring approval/voting from the current rulers would be top-down. Only requiring ten signatures on a community petition is more bottom up.

Voting systems don't care about the candidate selection process. Some require precoordination for a "party," but that could also be a party of 1. A party of 1 might not get as much representation as one with more people, but that's also the case for every voting system that selects the same number of candidates.

Voting systems don't even need to be used for representation. If a group of friends is voting on where to eat, one problem might be selecting the places to vote on, but that's before the vote. In the vote itself, FPP might have 70% prefer pizza over Indian food, yet Indian food can still win because the pizza vote splits across several pizza places. Having more candidates often leads to minority rule/choice, and that's not great for food choices or community representation.
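Here's a toy illustration of that vote-splitting effect; all the counts are made up:

```python
# Toy first-preference-plurality count: 70% of the group prefers pizza overall,
# but the pizza vote splits across three places, so Indian food wins the
# plurality. All numbers are made up for illustration.
from collections import Counter

first_choices = Counter({
    "Pizza Place A": 25,
    "Pizza Place B": 25,
    "Pizza Place C": 20,   # 70 pizza votes in total
    "Indian": 30,
})

winner, votes = first_choices.most_common(1)[0]
pizza_total = sum(v for k, v in first_choices.items() if k.startswith("Pizza"))

print(f"Winner under FPP: {winner} with {votes} of 100 votes")
print(f"Voters whose first choice was some pizza place: {pizza_total} of 100")
```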

[–] [email protected] 6 points 7 months ago* (last edited 7 months ago) (6 children)

That many steps? WindowsKey+Break > Change computer name.

If you're okay with three steps: on Windows 10 and newer, you can right-click the Start menu and generally open System from there. Just about any version also supports right-clicking "My Computer" or "This PC" and selecting Properties.

[–] [email protected] 42 points 7 months ago* (last edited 7 months ago)

Do you remember the Internet Explorer days? This, unfortunately, is still much better.

Pretty good reason to switch to Firefox now. Nearly everything will work, unlike in the Internet Explorer days.

  • Firefox User
[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

Do you use Android? AI was the last thing on their minds for AOSP until OpenAI got popular. They've been refining the UIs, improving security/permissions, catching up on features, bringing WearOS and Android TV up to par, and making Google Assistant incompetent. Don't take my word for it; look at v15, v14, v13, and v12: you'll rarely see any AI features before OpenAI's popularity. As an example of the benefits: Google and Samsung collaborating on WearOS allowed more custom apps and integrations for nearly all users. Still, there was a major drop in battery life and in compatibility with non-Android devices compared to Tizen.

There are plenty of other things to complain about with their Android development. Will they continue to change or kill things like they do all their other products? Did WearOS need to require Android OSes and exclude iOS? Do Advertising APIs belong in the base OS? Should vendors be allowed to lock down their devices as much as they do? Should so many features be limited to Pixel devices? Can we get Google Assistant to say "Sorry, something went wrong. When you're ready: give it another try" less often instead of encouraging stupidity? (It's probably not going to work if you try again).

Google does a lot of things wrong, even in Android, but AI on Android isn't one of them yet. Most other commercially developed operating systems are proprietary rather than open to users and OEMs. The collaboration leaves much to be desired, but Android is unfortunately one of the best examples of large-scale development of more open and libre/free systems. A better solution than trying to break Android up is taking/forking Android and making it better than Google seems capable of.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

> I’m fully aware how rirs allocate ipv6. The smallest allocation is a /64, that’s 65535 /64’s. There are 2^32 /32’s available, and a /20 is the minimum allocatable now. These aren’t /8’s from IPv4, let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.
>
> /48s are basically pop level allocations, few end users will be getting them. In fact comcast which used to give me /48s is down to /60 now.
>
> I’ll repeat, we aren’t running out any time soon, even with default allocations in the /3 currently existing for ipv6.

Sorry, but your reply suggests otherwise.

The RIRs currently never allocate a /64 or a /56; a /48 is their smallest allocation. For example, of the ~800,000 /32's ARIN has, only ~47k are "fragmented" (smaller than /32) and <4,000 are /48s. If /32s were the average, we'd be fine, but in our infinite wisdom, we assign larger subnets (like Comcast's 2601::/20 and 2603:2000::/20).

> These aren’t /8’s from IPv4. let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.

Taking into account the RIR allocations noted above, the closer equivalent to a /8 is the 1.048M /20s available. Yes, that's more than the 256 8-bit class-A blocks, but does 1 million really sound like the scale you were talking about ("enough addresses in ipv6 to address every known atom on earth")?

The situation for /48s is better, but still not as significant as one would think. Cloudflare is an extreme example: they have 6,639 IPv4 /24 blocks but 670,550 IPv6 /48 blocks. Comparable allocation units in theory, but that grows from needing about 13 bits of network space in IPv4 to about 19-20 bits in IPv6: roughly 6-7 extra bits of usage from availability alone.
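Quick check of the bit math, using the Cloudflare block counts above:

```python
# How many bits of "network space" those block counts consume
# (block counts are the Cloudflare figures quoted above).
import math

ipv4_slash24_blocks = 6_639
ipv6_slash48_blocks = 670_550

bits_v4 = math.log2(ipv4_slash24_blocks)   # ~12.7 bits
bits_v6 = math.log2(ipv6_slash48_blocks)   # ~19.4 bits

print(f"IPv4 /24 blocks: ~{bits_v4:.1f} bits of network space")
print(f"IPv6 /48 blocks: ~{bits_v6:.1f} bits of network space")
print(f"Growth: ~{bits_v6 - bits_v4:.1f} extra bits from availability alone")
```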

That sort of increase of networks is likely-- especially in high-density data centers where one server is likely to have multiple IPv6 networks assigned to it. What do you think the assignments will look like as we expand to extra-terrestrial objects like satellites, moons, planets, and other spacecraft?

> I’ll repeat, we aren’t running out any time soon

Soon vs. never. The OP I replied to said "never", and your post implied much the same: that these numbers are far too big for humans to imagine or ever reach. The IPv6 address space is large enough for that, yes, but our allocations still aren't. The number of bits we're actually allocating (which is the metric that matters for running out) is significantly smaller than most think. In the post above, you're suggesting 56-64 bits, but the reality is currently 20-32 bits: 1M-4B allocations.

If everyone keeps treating IPv6 as infinite, the current allocation sizes would still take longer than IPv4 to run out, but it isn't an unfathomable number like the number of atoms on Earth. 281T /48s is a saner scale, likely enough for our planet, but the RIRs seem to avoid allocating blocks that small.
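For scale, the raw counts behind those numbers (counting over the whole 128-bit space, so ignoring reserved ranges and the 2000::/3 global-unicast limit; real routable counts are smaller):

```python
# Raw block counts over the entire 128-bit space. Reserved ranges and the
# 2000::/3 global-unicast restriction are ignored, so these are upper bounds.
for prefix_len in (20, 32, 48):
    print(f"IPv6 /{prefix_len} blocks: {2 ** prefix_len:,}")

# IPv4 for comparison:
print(f"IPv4 /8 blocks:  {2 ** 8:,}")
print(f"IPv4 /24 blocks: {2 ** 24:,}")
```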

IPv4-style policy shifts could happen: requirements for address blocks rise, allocation sizes shrink, older holders keep their /20 blocks (instead of 8-bit class-A blocks), and newer organizations are limited to /48s or smaller with proper justification. The longer we keep giving away /20s and /32s like candy, the more likely the allocations run out sooner rather than never. My initial message tried to convey that it depends on how fast we grow and how far our network growth goals reach:

> 30 years? Optimistically, including interstellar networks and continued exponential growth in IP-connected devices? Yes.
>
> . . .
>
> Realistically, it’s probably more than 100 years away, maybe outside our lifetimes

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (3 children)

That wasn't what I said. 2^56 was NOT a reference to bits, but to how many IPs we could assign every visible star, if it weren't for subnet limitations. IPv6 isn't classless like IPv4. There will be a lot of wasted/unused/unroutable addresses due to the reserved 64 bits.

The problem isn't the number of addresses, but the number of allocations. Our smallest allocation today, out of a 128-bit address, is a /48. Allocation-wise, we effectively have 48 bits of allocations, not 128. For IPv6 to run out, we only need to assign 48 bits' worth of networks, compared with the 24 bits' worth for IPv4. Go read up on how ARIN/RIPE/APNIC allocate IPs. It's pretty wasteful.
