Though, I seriously doubt it's a legitimate study. Standards dictate you'd do it with people's consent and inform them what's up. You'd get scolded by your professor if you did it like this. There are studies done without explicit consent, but that's university-level stuff, and I suppose you'd have to file a request with the ethics committee and have someone review the study design. I'd say if it is a "study", it's probably illegitimate and done by someone without much academic background. Or they don't abide by the same standards all students do, for specific reasons.
Hmmh. I have uBlock and LocalCDN installed in my browser because I'm more worried about all the Google and Metas out there. Most of the news articles linked here are on websites with like 3 different trackers. And Google and Meta definitely have enough info about everyone to correlate minor details.
I must say I'm not super worried about my IP leaking into the Fediverse. Pictures in a direct message are yet another thing, though. But generally speaking, we have a trade-off here between privacy and spreading information across a distributed network. It's not a good thing, but I think the benefits outweigh the downsides.
Yeah, I can see how exploration for further things could be the case.
I just wonder, do people also install browser extensions to cache all the Google Fonts, jsDelivr URLs etc.? Or do they just give away the same data to every link on this link aggregator platform, and it only registers once it becomes very obvious, as with this weird thing?
Location would be possible. For me it's a few hundred kilometers off, but usually the GeoIP databases are more accurate.
PieFed doesn't do much image caching or proxying. It only keeps thumbnails around. Once you open a post with more than a thumbnail in it (a full picture), your IP is revealed to the image host.
Sure, back when I was young enough to do really stupid "pranks", we tried to vandalize Wikipedia once or twice. You get banned and retry one day later. That's kind of how it works with IP bans. But it gets rid of the 99% of people who aren't super persistent, and that's enough. It's also why they do it even if it's not "perfect". Our school had one static IP for the entire computer room, so over there Wikipedia wouldn't accept edits for a whole week or two, until the ban properly expired.
Yeah, I heard it's different with some providers in North America. But then again, it's not very straightforward to track which IPs belong to which provider and in which timespans they get reassigned, and then match that to other info.
I mean, for most users worldwide the IP changes every 24 hours or so, maybe every few days. So I doubt it's of great value unless you have access to another big database of current logins to match it against. And if you already have that database, I don't see the value of recording the IP again. The only added info is that the user uses Lemmy, assuming there isn't any identifier in the image URL.
I wonder what the use case is for gathering IP addresses of random internet connections.
I use Kanidm and configured everything with OAuth2. That was the easiest and most straightforward option I could find. But I don't think they bothered implementing full LDAP. Other platforms I tried are Authentik, Authelia, Keycloak, Zitadel... They're all a bit heavier and have other/more features, but there wasn't one I really fell in love with.
I'd say use something like zeroconf(?) for local computer names. Or give them names in your DNS forwarder (router), hosts file or SSH config. Along with shell autocompletion, that might do the job. I use scp, rsync, and I have an NFS share on the NAS and some bookmarks in GNOME's file manager, so I just click on that or type in scp or rsync with the target computer's name.
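As a sketch of the hosts-file and SSH-config route (the hostname `nas`, the address and the username here are all made up):

```
# /etc/hosts: a static name for a machine on the LAN
# 192.168.1.42   nas

# ~/.ssh/config: a short alias so ssh/scp/rsync and tab completion just work
Host nas
    HostName 192.168.1.42
    User alice
```

With that in place, something like `scp backup.tar.gz nas:/srv/backups/` or `rsync -av ~/photos/ nas:photos/` resolves the name without typing the IP. And with zeroconf (Avahi on Linux) running on both ends, `nas.local` usually works even without any of these entries.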
Hmmh, I had another look at the numbers, and the 19.5 / 21 / 28 Gbps per-pin memory speeds of the 3090 / 4090 / 5090 seem to be roughly in the same ballpark, though the 5090's wider 512-bit bus does push total bandwidth up considerably. AMD desktop graphics cards are similar. I thought a 5090 would do more. But don't the AI (datacenter) cards that are designed for AI workloads and have 80GB of VRAM reach something like 2 or 3 TB/s? I mean, running large LLMs at home is kind of a niche, and I'm not sure what kind of requirements people have for this. But at this price point, an AMD Epyc processor with its many memory channels could maybe(?) do a similar job. I'm really not sure what the target audience is.
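The per-pin Gbps figure alone is misleading, because total bandwidth is the per-pin data rate times the bus width. A quick back-of-the-envelope sketch (the specs below are the commonly published ones; double-check them against the vendor datasheets):

```python
# Total memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

# Publicly listed specs (assumed here, verify against the datasheets):
cards = {
    "RTX 3090 (384-bit GDDR6X @ 19.5 Gbps)": (384, 19.5),
    "RTX 4090 (384-bit GDDR6X @ 21 Gbps)":   (384, 21.0),
    "RTX 5090 (512-bit GDDR7  @ 28 Gbps)":   (512, 28.0),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gbs(bus, rate):.0f} GB/s")
# ~936, ~1008 and ~1792 GB/s respectively: the 5090 nearly doubles the 3090,
# but HBM datacenter cards at 2-3+ TB/s are still well ahead.
```

So while the per-pin rates look similar, the wider bus is what separates these cards, and the HBM stacks on datacenter parts widen the gap again.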
And I'm also curious about the alternative approaches to language models. Afaik we're not there yet with diffusion models, and it might take some time until we get a freely available state-of-the-art model at that size. I guess cutting down on the memory-speed requirements would make things easier for a lot of use cases.
Is erotic roleplay 'unethical'? Because we've got a lot of services for that.