evenwicht

joined 1 year ago

Before sharing my email address with some person or some org, I do an MX DNS lookup on the domain portion of their email address. That check is usually reliable: if the result is not of the form *.mail.protection.outlook.com, then that recipient is not using Microsoft’s mail server.

But sometimes I get stung by an exception. The MX lookup for one recipient yielded barracudanetworks.com, so I trusted them with email. But then they sent me an email and I saw a header like this:

Received: from *.outbound.protection.outlook.com (*.outbound.protection.outlook.com…

Is there any practical way to more thoroughly check whether an email address leads to traffic routing through Microsoft (or Google)?
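In case it helps frame the question, here is the kind of check I mean, extended with a couple of extra lookups that sometimes catch the outsourced cases (the domain is hypothetical, and none of these are conclusive on their own):

  dig +short MX example.com                    # the basic check described above
  dig +short TXT example.com | grep -i spf     # SPF often names the real outbound relay,
                                               # e.g. include:spf.protection.outlook.com (Microsoft)
                                               # or include:_spf.google.com (Google)
  dig +short CNAME autodiscover.example.com    # Exchange Online tenants commonly point this
                                               # at autodiscover.outlook.com

A domain that fronts its MX with a filtering service (like the Barracuda case above) can still hand mail to Microsoft behind the scenes, which is exactly what the SPF record or the Received headers may reveal.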

 

cross-posted from: https://lemmy.sdf.org/post/36402193

The knee-jerk answer when an app pushes designed obsolescence by advancing the min Android API required is always “for security reasons…” It’s never substantiated. It’s always an off-the-cuff snap answer, and usually it does not even come from the developers. It comes from those loyal to the app and those who perhaps like being forced to chase the shiny with new phone upgrades.

Banks, for example, don’t even make excuses. They simply neglect the problem and let people assume that some critical security vuln emerged that directly impacts their app.

But do they immediately cut off access attempts on the server side that come from older apps? No. They lick their finger, stick it in the air, and say: feels like time for a new version.

It’s bullshit. And the pushover masses just accept the ongoing excuse that the platform version must have become compromised by some significant threat -- without realising that the newer version bears more of the worst kinds of bugs: unknown bugs, which cannot be controlled for.

Banks don’t have to explain it because countless boot-licking customers will just play along. After all, these are people willing to dance for Google and feed Google their data in the first place.

But what about FOSS projects? When a FOSS project advances the API version, they are not part of the shitty capitalist regime of being as non-transparent as possible for business reasons. A FOSS project /could/ be transparent and say: we are advancing from version X to Y because vuln Z is directly relevant to our app and we cannot change our app in a way that counters the vuln.

The blame-culture side-effect of capitalism

Security analysis is not free. For banks and their suppliers, it is cheaper to bump up the minimum Android API than to investigate whether doing so is really necessary.

It parallels the pharmaceutical industry, where it would cost more to test meds for an accurate expiry date. So they don’t bother; they just set an excessively conservative, very early expiration date.

Android version pushing is ultimately a consequence of capitalist blame-culture. Managers within an organisation simply do not want to be blamed for anything because it’s bad for their personal profit. Shedding responsibility is the name of the game. And outsourcing is the strategy. They just need to be able to point the blame away from themselves if something goes wrong.

Blindly chasing the bleeding-edge latest versions of software is actually security-ignorant¹ but upper management does not know any better. In the event of a compromise, managers know they can simply shrug and say “we used the latest versions” knowing that upper managers, shareholders, and customers are largely deceived into believing “the latest is the greatest”.

¹ Well informed infosec folks know that it’s better to deal with the devil you know (known bugs) than it is to blindly take a new unproven version that is rich in unknown bugs. Most people are ignorant about this.

Research needed

I speak from general principles in the infosec discipline, but AFAIK there is no concrete research specifically in the context of the onslaught of premature obsolescence by Android app developers. It would be useful to have some direct research on this, because e-waste is a problem and credible science is a precursor to action.

 

Very disturbing that this instance vanished spontaneously, like so many small instances do. We don’t even have read access to the content.

The rumor is they are looking to come back on a platform I had not heard of (PieFed). It’s unclear why that could not be explored in parallel with running slrpnk.net.

Update

More answers in this clip:

slrpnk down until mid-July, due to hardware failure

(I ripped that pic off of programming.dev, a centralised Cloudflare site that I would never send people in the free world to, which is why it is a copy.)

[–] [email protected] 1 points 1 month ago

I have not tried much of anything yet. I just got a cheap laptop with a BD drive, which came with Windows and VLC. I popped in a Blu-ray disc from the library and it could not handle it: something about not having an AACS decoder, or something like that. I haven’t spent any time on it yet, but ultimately I would install Debian and try to liberate the drive to read BDs.
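For when you get to the Debian step, this is a rough sketch of what Blu-ray playback in VLC typically needs, assuming VLC built against libbluray/libaacs (the package names are the ones I believe Debian ships; the KEYDB.cfg key database is something you have to source yourself, and the path is the libaacs default):

  sudo apt install vlc libbluray2 libbluray-bdj libaacs0
  mkdir -p ~/.config/aacs
  cp KEYDB.cfg ~/.config/aacs/    # AACS key database; libaacs looks here by default

Without a usable key database, VLC will keep giving the same missing-AACS complaint on encrypted discs.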

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (2 children)

thanks!

Though I should mention that my original motivation with MakeMKV was to rip Blu-ray discs, which involves complications beyond DVD. But the DVD guide will still be quite useful.

[–] [email protected] 1 points 2 months ago

Fun suggestion. It could be useful as a side hack if congestion becomes an issue, but I doubt it will come to that. They have what seems to be a high-end switch with 20 or so ports and internal fans.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

The event is ~2-3 hours. If someone needs the full Debian set (80 GB!), I don’t think it would transfer over USB 2 in that timeframe. USB 2 sticks may be rare, but at this event there are some people with old laptops that have no USB 3 sockets. A lot of people plug into Ethernet. And the switch looks somewhat more serious than a 4-port SOHO unit: it has 20+ ports with fans, so I don’t get the impression Ethernet congestion would be an issue.
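For a rough sense of why USB 2 is marginal here, some back-of-envelope numbers under assumed stick write speeds:

  # 80 GB ≈ 80,000 MB
  # at  5 MB/s (a cheap old stick):  80,000 / 5  = 16,000 s ≈ 4.4 h  -- misses a 2-3 h window
  # at 10 MB/s (a decent stick):     80,000 / 10 =  8,000 s ≈ 2.2 h  -- barely fits, no slack
  # the USB 2 bus tops out around 35-40 MB/s in practice, but few old sticks sustain that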

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

I think they could do the job. I’ve never admin’d an NFS server, so I figure there’s a notable learning curve there. SAMBA, well, maybe. I’ve used it before. I’m leaning toward ProFTPd at the moment, but if that gives me any friction I guess I’ll consider SAMBA. Perhaps I’ll go into overachiever mode and have both SAMBA and ProFTPd pointing to the same directory.
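If SAMBA does end up in the mix, the share stanza is small. A minimal sketch (the share name and path are hypothetical; guest write access is only sensible on a trusted LAN like this meetup):

  # /etc/samba/smb.conf
  [lanshare]
     path = /srv/share
     read only = no        # attendees can upload as well as download
     guest ok = yes
     browseable = yes

Depending on distro defaults, true guest access may also need map to guest = Bad User in the [global] section.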

[–] [email protected] 1 points 2 months ago (1 children)

Two possible issues with that, w.r.t. my use case:

  • not in the official Debian repos -- not a show stopper, but it definitely counts against it given the installation and maintenance burdens across migrations
  • apparently read-only access for users. This is fine in simple cases where I would just be sharing with others, but a complete solution enables users to share with others on the same server by uploading. Otherwise everyone with a file to share must run rejetto hfs.

Nonetheless, I appreciate the suggestion. It could be handy in some situations.

[–] [email protected] 1 points 2 months ago

oh, sorry. Indeed. I answered from the notifications page w/out context. Glad to know Filezilla will work for that!

[–] [email protected] 1 points 2 months ago (2 children)

I use filezilla but AFAIK it’s just a client not a server.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (4 children)

Indeed, I noticed openssh-sftp-server was automatically installed with Debian 12. Guess I’ll look into that first. It might be interesting if people could choose between FTP and mounting with SSHFS.

(edit) found this guide

Thanks for mentioning it. It encouraged me to look closer at it and I believe it’s well suited for my needs.
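For the SSHFS side, the client end is a one-liner. A quick sketch (host name and paths are hypothetical):

  mkdir -p ~/lanshare
  sshfs guest@fileserver.lan:/srv/share ~/lanshare   # mount the shared dir over SSH
  ls ~/lanshare                                      # then browse/copy like a local filesystem
  fusermount -u ~/lanshare                           # unmount when done (fusermount3 on newer setups)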

 

There is a periodic meeting of Linux users in my area where everyone brings laptops and connects to a LAN. If I want to share files with them, what are decent options? Is FTP still the best option, or has anything more interesting emerged in the past couple of decades? I would rather not maintain a webpage, so web servers are out. It’s mainly so people can fetch Linux ISO images and perhaps upload what they have as well.

(update) options on the table:

  • ProFTPd
  • OpenSSH SFTP server (built into SSHd)
  • SAMBA
  • WebDAV file server - maybe worth a look if other options don’t pan out; but I imagine it most likely does not support users uploading

I started looking at OpenSSH, but it’s very basic. I can specify a chroot dir that everyone lands in, but it’s impossible to give users write permission in that directory, so there must be a subdir with write perms. Seems a bit hokey: it forces people to chdir right away. I think ProFTPd won’t have that limitation.
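For what it’s worth, here is a minimal sketch of the chroot-plus-writable-subdir pattern in sshd_config (the group name and paths are hypothetical). Recent OpenSSH releases let internal-sftp take a start directory, which softens the forced chdir somewhat:

  # /etc/ssh/sshd_config
  Match Group lanshare
      ChrootDirectory /srv/share           # must be root-owned and not group/world-writable
      ForceCommand internal-sftp -d /pub   # start users inside the writable subdir
      AllowTcpForwarding no
      X11Forwarding no

  # /srv/share stays root-owned; /srv/share/pub is root:lanshare with group write permission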

[–] [email protected] 1 points 2 months ago

Well it’s still the same problem. I mean, it’s likely piracy to copy the public lib’s disc to begin with, even if just for a moment. From there, if I want to share it w/others I still need to be able to exit the library with the data before they close. So it’d still be a matter of transcoding as a distinctly separate step.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (2 children)

What’s the point of spending a day compressing something that I only need to watch once?

If I pop into the public library and start a ripping process using Handbrake, the library will close for the day before the job is complete for a single title. I could check out the media, but there are trade-offs:

  • no one else can access the disc while you have it out
  • some libraries charge a fee for media check-outs
  • privacy (I avoid Netflix & the like to prevent making a record in a DB of everything I do; checking out a movie still ends up in a DB)
  • libraries tend to have limits on the number of media discs you can have out at a given moment
  • checking out a dozen DVDs will take a dozen days to transcode, which becomes a race condition with the due date
  • probably a notable cost in electricity, at least on my old hardware

[–] [email protected] 2 points 2 months ago

Wow, thanks for the research and effort! I will be taking your approach for sure.

 

Translating the Debian install instructions to Tor network use, we have:

  # run as root; the tor:// / tor+https:// sources also need the apt-transport-tor package
  torsocks wget https://apt.benthetechguy.net/benthetechguy-archive-keyring.gpg -O /usr/share/keyrings/benthetechguy-archive-keyring.gpg
  echo "deb [signed-by=/usr/share/keyrings/benthetechguy-archive-keyring.gpg] tor://apt.benthetechguy.net/debian bookworm non-free" > /etc/apt/sources.list.d/benthetechguy.list
  apt update
  apt install makemkv

apt update yields:

Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Err:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Connection failed [IP: 127.0.0.1 9050]

Turns out apt.benthetechguy.net is jailed in Cloudflare. And apparently the code is not developed out in the open -- there is no public code repo or even a bug tracker. Even the forums are a bit exclusive (registration on a particular host is required and disposable email addresses are refused). There is no makemkv IRC channel (according to netsplit.de).

There is a blurb somewhere that the author is looking to get MakeMKV into the official Debian repos and is looking for a sponsor (someone with a Debian account). But I wonder if this project would even qualify for the non-free category. Debian does not just take any non-free software; that section is more for drivers and the like.

Alternatives?


The reason I looked into #makemkv was that Handbrake essentially forces users into a long, CPU-intensive transcoding process. It cannot simply rip the bits as they are. MakeMKV spares us the transcoding step at rip time. But getting it is a shit show.
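As a partial alternative for plain DVDs (this does nothing for Blu-ray AACS), ffmpeg can remux with stream copy and skip transcoding entirely. A sketch, assuming you already have a readable/decrypted VOB on disk (file names are hypothetical):

  ffmpeg -i title1.vob -map 0:v -map 0:a -map 0:s? -c copy title1.mkv   # copy video/audio/subs as-is: minutes, not hours of encoding

The output is as large as the source, but that is the point when the goal is just to get the bits off the disc quickly.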

 

It’s back online with not a peep from the admin as to what happened. Logins work only via the web UI, but it just gives a non-stop stream of “401 the access token is invalid” popups.

 

Love the irony, and the simultaneous foreshadowed embarrassment, of Elon denying availability and service as a way to be more efficient.

The irony

Cloudflare enables web admins to run extremely bloated sites. Admins of Cloudflared websites have no incentive to produce lean or efficient websites because Cloudflare does the heavy lifting for free (but at the cost of reduced availability to marginalized communities: Tor, VPN, and CGNAT users, etc.). So they litter their websites with images and take little care to choose lean file formats or appropriate resolutions. Cloudflare is the #1 cause of web inefficiency.

Cloudflare also pushes countless graphical CAPTCHAs with reckless disregard, which needlessly wastes resources and substantially increases traffic bloat -- all to attack bots (and, as a side effect, text-based users) who do not fetch images and are thus the leanest consumers of web content.

The embarrassment

This is a perfect foreshadowing of what we will see from this department. “Efficiency” will be achieved by killing off service and reducing availability. Certain demographics of people will lose service in the name of “efficiency”.

It’s worth noting that DOGE is not using Cloudflare’s default configuration. They have outright proactively blacklisted Tor IPs to ensure hard-and-fast fully denied service to that demographic of people. Perhaps their PR person would try to spin this as: avoiding CAPTCHAs is efficient :)

The other embarrassment is that they are using Cloudflare for just a single tiny image. They don’t even have the competence to avoid CF in the normal state and switch it on only at peak-traffic moments.

The microblog discussion

Microblog chatter here.

 

A lot of gov services use the same shitty social networks. But it’s just a bit extra disgusting when the FCC uses them, along with the not-so-social platforms. It’s an embarrassment.

The FCC privacy policy starts with:

“The FCC is committed to protecting the privacy of its visitors.”

Fuck no they aren’t. And we would expect the FCC, in particular, to be well aware of which platforms would make that privacy claim a true statement.

In particular:

  • MS GitHub (98 repositories; and maybe a bit strange that they are hosting UK stuff there).

  • MS LinkedIn: “Visit our LinkedIn profile for information on job openings, internships, upcoming events, consumer advice, and news about telecommunications.” ← At least it’s openly readable to non-members. But I clicked APPLY on an arbitrary job listing (which had no contact info) and I was ignored, probably for not having a LinkedIn account. Which is obviously an injustice. Anyone should be able to access government job listings without licking Microsoft’s boots.

  • Facebook: “Keep informed and engaged about consumer alerts, Commission actions and events.” ← Non-Facebook members cannot even view their page. And they are relying on it for engagement and consumer alerts.

  • Twitter: “Follow @FCC for updates on upcoming meetings, helpful consumer information, Commission blog postings, and breaking FCC and telecommunications news with links to in-depth coverage.” ← At least it’s openly readable to non-members. But it is despicable that non-Twitter users cannot engage with the FCC. It’s an assault on free speech in the microblogging context. If you don’t lick Elon’s boots and give Twitter a mobile phone number (which they have been caught abusing; that was before Twitter contractors were caught spying on accounts, which came before Twitter was breached [twice, in fact]), you cannot microblog to your government.

  • YouTube: “Playback recorded webcasts of FCC events and view tutorials, press conferences, speeches and public service announcements on the FCC's YouTube channel.” ← One of the most atrocious abuses of public resources, because YouTube is no longer open access. You cannot be on Tor, you cannot use Invidious. Due to recent extreme protectionism by Google, you are subject to surveillance advertising tied to your personal IP address.

Public money finances the FCC to make whatever videos the FCC produces. Since we already paid for the videos, they should be self-hosted by the FCC, not conditional upon entry into a paid-for-by-advertising walled garden with Google as a gatekeeper. It should be illegal to do that -- and we would expect the FCC to drive a just law in that regard. We would also expect the FCC to have the competency to either stand up their own PeerTube instance or simply put the videos on their website. People should be fighting that shit for sure.

What a shitty example they set for how government agencies should implement comms.

 

spontaneously. No warning to users.

Although I wonder how an instance admin would warn users.

 

I am certain this community is not instantly popular. Even in the fediverse most people are fine with walled-gardens (we know this from Lemmy World). So just pointing out it’s interesting that 35 bots instantly monitor new communities as soon as they are created.

(BTW: “metapost” in the subject means it’s an off-topic post about the community itself. It’s not a reference to that shitty corp that has hijacked a generic word for commercial exploitation)
