evenwicht

joined 10 months ago
[–] [email protected] 1 points 14 hours ago (1 children)

I use FileZilla, but AFAIK it’s just a client, not a server.

[–] [email protected] 1 points 16 hours ago* (last edited 14 hours ago) (3 children)

Indeed, I noticed openssh-sftp-server was automatically installed with Debian 12. Guess I’ll look into that first. Might be interesting if ppl could choose between FTP and mounting with SSHFS.

(edit) found this guide

Thanks for mentioning it. It encouraged me to look closer at it and I believe it’s well suited for my needs.
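
In case it helps for the LAN scenario, here’s a rough sketch (untested) of how a guest laptop might mount the share over SFTP with SSHFS. The address 192.168.1.50, the user guest, and the path /srv/isos are just placeholders:

  # on a guest laptop: mount the server's ISO directory read-only over SFTP
  sudo apt install sshfs
  mkdir -p ~/lanshare
  sshfs guest@192.168.1.50:/srv/isos ~/lanshare -o ro,reconnect
  # ...browse or copy ISOs...
  fusermount -u ~/lanshare   # unmount when done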

 

There is a periodic meeting of Linux users in my area where everyone brings laptops and connects to a LAN. Just wondering: if I want to share files with them, what are decent options? Is FTP still the best option or has anything more interesting emerged in the past couple of decades? Guess I would not want to maintain a webpage, so web servers are nixed. It’s mainly so ppl can fetch Linux ISO images and perhaps upload what they have as well.

[–] [email protected] 1 points 1 week ago

Well it’s still the same problem. I mean, it’s likely piracy to copy the public lib’s disc to begin with, even if just for a moment. From there, if I want to share it w/others I still need to be able to exit the library with the data before they close. So it’d still be a matter of transcoding as a distinctly separate step.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (2 children)

What’s the point of spending a day compressing something that I only need to watch once?

If I pop into the public library and start a ripping process using Handbrake, the library will close for the day before the job is complete for a single title. I could check-out the media, but there are trade-offs:

  • no one else can access the disc while you have it out
  • some libraries charge a fee for media check-outs
  • privacy (I avoid netflix & the like to prevent making a record in a DB of everything I do; checking out a movie still gets into a DB)
  • libraries tend to have limits on the number of media discs you can have out at a given moment
  • checking out a dozen DVDs will take a dozen days to transcode, which becomes a race condition with the due date
  • probably a notable cost in electricity, at least on my old hardware

[–] [email protected] 2 points 1 week ago

Wow, thanks for the research and effort! I will be taking your approach for sure.

[–] [email protected] 10 points 2 weeks ago (6 children)

I’ll have a brief look but I doubt ffmpeg would know about DVD CSS encryption.

 

Translating the Debian install instructions to tor network use, we have:

  torsocks wget https://apt.benthetechguy.net/benthetechguy-archive-keyring.gpg -O /usr/share/keyrings/benthetechguy-archive-keyring.gpg
  echo "deb [signed-by=/usr/share/keyrings/benthetechguy-archive-keyring.gpg] tor://apt.benthetechguy.net/debian bookworm non-free" > /etc/apt/sources.list.d/benthetechguy.list
  apt update
  apt install makemkv
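
(One prerequisite worth noting: the tor+https:// scheme is provided by apt’s Tor transport, so that package needs to be installed first if it isn’t already.)

  apt install apt-transport-tor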

apt update yields:

Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Ign:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
Err:9 tor+https://apt.benthetechguy.net/debian bookworm InRelease
  Connection failed [IP: 127.0.0.1 9050]

Turns out apt.benthetechguy.net is jailed in Cloudflare. And apparently the code is not developed out in the open -- there is no public code repo or even a bug tracker. Even the forums are a bit exclusive (registration on a particular host is required and disposable email addresses are refused). There is no makemkv IRC channel (according to netsplit.de).

There is a blurb somewhere that the author wants to get MakeMKV into the official Debian repos and is looking for a sponsor (someone with a Debian account). But I wonder if this project would even qualify for the non-free category. Debian does not just take any non-free software... it's more for drivers and the like.

Alternatives?


The reason I looked into #makemkv was that Handbrake essentially forces users into a long CPU-intensive transcoding process. It cannot simply rip the bits as they are. MakeMKV spares us from transcoding as part of the rip. But getting it is a shit show.
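
One possible libre alternative worth investigating (untested on my end, so treat it as a sketch): dvdbackup from the Debian repos copies the VOBs as-is, and with libdvdcss installed via libdvd-pkg it should be able to read CSS-scrambled discs:

  apt install dvdbackup libdvd-pkg     # libdvd-pkg builds and installs libdvdcss
  dpkg-reconfigure libdvd-pkg
  dvdbackup -M -i /dev/sr0 -o ~/rips   # mirror the whole disc, no transcoding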

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

“Renamed” seems like an understatement. I heard Digital Services was completely hollowed out. If you dump the people and change the name, what’s left? The chairs and keyboards?

cc @[email protected]

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

Blocking Tor is useless for DDoS protection because there are not enough exit nodes to impact a US federal website more than a fly on the windscreen of a 16-wheel tractor-trailer. Such an attempt would bring down Tor itself before the DOGE admins even noticed.

[–] [email protected] 2 points 2 months ago (1 children)

Does your block screen look different than the attached snapshot?

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (3 children)

That’s fair. I don’t really think it’s cloudflares fault though.

First of all you have to separate Cloudflare’s pre-emptive attack on Tor from that of other targets (VPN, CGNAT). The difference is that the Cloudflare patron is given control over whether to block Tor but not the others.

Non-Tor blocks

Cloudflare is of course at fault. CF made the decision to recklessly block whole groups of people based on the crude criterion of IP reputation associated with one member of the whole group. It would be like if someone were spotted shoplifting as they ran out the door, and security only got a glimpse of red hair. The store then refuses service to all people with red hair to make sure the one baddy gets blocked. It’s discriminatory collective punishment as a consequence of sloppy analysis.

Since it’s a feature that websites use to protect against bad actors and robots.

It’s an anti-feature because it’s a blunt tool cheaply created by a clumsy tech giant that has the power to bully and write off the disempowered people it marginalizes as acceptable collateral damage.

Tor blocks

Cloudflare defaults to harassing Tor visitors with CAPTCHAs which are usually broken (because the CAPTCHA service CF hires is itself Tor-hostile, but CF is happy because CF profits from the uncompensated labor of the CAPTCHA solutions). The CF patron can whitelist Tor or blacklist Tor (in addition to the default shit show). DOGE proactively chose to blacklist the Tor community.

Defaults are important. Read about “the power of defaults” and how Google paid billions to Mozilla just to be the default search engine in the browser. The money speaks to that importance. CF is 100% responsible for the default state of the sites it fronts. Cloudflare (and CF alone) decides what the default setting is.

No one forces anyone to use cloudflare.

Exactly why someone using Cloudflare rightfully gets the blame for their shitty choice to use CF. Most particularly when it is a tax-funded service. At least in the private sector we have the option of walking: I will not use a CF website (even if Tor is whitelisted), so they lose my business. But when public money is spent on CF, which denies service to whole demographics of people who are entitled to the gov service, it’s an injustice, because you cannot boycott gov services (you cannot get a tax refund if you are excluded).

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

I wonder how that can best be expressed without overly cluttering the forum. The purpose of the forum is to track that, so it would be useful if someone would post lists of significant or essential public resources that are in walled gardens. Maybe one thread for all of North Korea, another for Russia, ... Venezuela, etc. But note as well: if Tor is blocked but the site is not in a fiefdom (walled garden), then [email protected] is the best place to post it.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (2 children)

You confuse bandwidth and resources.

Bandwidth is a resource. Citations needed for claims to the contrary.

Bots are often the most impactful clients of any site, because serving an image costs virtually nothing.

Nonsense. Text compresses extremely well. Images and media do not in the slightest approach the leanness of text.

Try using the web through a 2400 baud modem. Or try using a mobile connection with a small monthly quota of like 3 GB and no other access. You will disable images in your browser settings in no time.

Generating a dynamic page is WAY more resource intensive.

Bots and humans both trigger dynamic processing, but bots and users of text-based clients do so to a lesser extent, because the bandwidth-heavy media is usually not fetched as a consequence and JavaScript is typically not fetched or executed in the first place.

 

Love the irony and simultaneous foreshadowed embarrassment of Elon denying availability and service as a way to be more efficient.

The irony

Cloudflare enables web admins to run extremely bloated sites. Admins of Cloudflared websites have no incentive to produce lean or efficient websites because Cloudflare does the heavy lifting for free (but at the cost of reduced availability to marginalized communities like Tor, VPN, and CGNAT users). So they litter their websites with images and take little care to choose lean file formats or appropriate resolutions. Cloudflare is the #1 cause of web inefficiency.

Cloudflare also pushes countless graphical CAPTCHAs with reckless disregard, which needlessly wastes resources and substantially increases traffic bloat -- all to attack bots (and, as a side-effect, text-based users), who do not fetch images and are thus the leanest consumers of web content.

The embarrassment

This is a perfect foreshadowing of what we will see from this department. “Efficiency” will be achieved by killing off service and reducing availability. Certain demographics of people will lose service in the name of “efficiency”.

It’s worth noting that DOGE is not using Cloudflare’s default configuration. They have proactively blacklisted Tor IPs outright to ensure hard-and-fast denial of service to that demographic of people. Perhaps their PR person would try to spin this as CAPTCHA avoidance being efficient :)

The other embarrassment is that they are using Cloudflare for just a single tiny image. They don’t even have enough competence to avoid CF in the normal state and switch it on only during peak traffic moments.

The microblog discussion

Microblog chatter here.

 

A lot of gov services use the same shitty social networks. But it’s just a bit extra disgusting when the FCC uses them, along with the not-so-social platforms. It’s an embarrassment.

The FCC privacy policy starts with:

“The FCC is committed to protecting the privacy of its visitors.”

Fuck no they aren’t. And we expect the FCC in particular to be well aware of the platforms that would make their privacy claim a true statement.

In particular:

  • MS Github (98 repositories; maybe a bit strange that they are hosting UK stuff there).

  • MS LinkedIn: “Visit our LinkedIn profile for information on job openings, internships, upcoming events, consumer advice, and news about telecommunications.” ← At least it’s openly readable to non-members. But I clicked APPLY on an arbitrary job listing (which had no contact info) and I was ignored, probably for not having a LinkedIn account. Which is obviously an injustice. Anyone should be able to access government job listings without licking Microsoft’s boots.

  • Facebook: “Keep informed and engaged about consumer alerts, Commission actions and events.” ←Non-Facebook members cannot even view their page. And they are relying on it for engagement and consumer alerts.

  • Twitter: “Follow @FCC for updates on upcoming meetings, helpful consumer information, Commission blog postings, and breaking FCC and telecommunications news with links to in-depth coverage.” ← At least it’s openly readable to non-members. But it’s despicable that non-Twitter users cannot engage with the FCC. It’s an assault on free speech in the microblogging context. If you don’t lick Elon’s boots and give Twitter a mobile phone number (which they have been caught abusing; that came before Twitter contractors were caught spying on accounts, which itself came before Twitter was breached [twice, in fact]), you cannot microblog to your government.

  • YouTube: “Playback recorded webcasts of FCC events and view tutorials, press conferences, speeches and public service announcements on the FCC's YouTube channel.” ← One of the most atrocious abuses of public resources, because YouTube is no longer open access. You cannot be on Tor, and you cannot use Invidious. Due to recent extreme protectionism by Google, you are subject to surveillance advertising tied to your personal IP address.

Public money finances the FCC to make whatever videos the FCC produces. Since we already paid for the videos, they should be self-hosted by the FCC, not conditional upon entry into a paid-for-by-advertising walled garden with Google as a gatekeeper. It should be illegal to do that -- and we would expect the FCC to drive a just law in that regard. We would also expect the FCC to have the competence to either stand up their own PeerTube instance or simply put the videos on their website. People should be fighting that shit for sure.

What a shitty example they set for how government agencies should implement comms.

 

I am certain this community is not instantly popular. Even in the fediverse most people are fine with walled-gardens (we know this from Lemmy World). So just pointing out it’s interesting that 35 bots instantly monitor new communities as soon as they are created.

(BTW: “metapost” in the subject means it’s an off-topic post about the community itself. It’s not a reference to that shitty corp that has hijacked a generic word for commercial exploitation)

 

Facebook is used to make announcements to RUC students. The internal RUC website (outside of Facebook) is littered with FB references.

There are social events that are officially school-sanctioned which appear exclusively on Facebook.

Some might say “fair enough” because social events are non-essential and purely for entertainment. However, RUC has organized all the coursework around group projects. A culture of social bonding is considered important enough to justify having school-sanctioned parties on campus. The organisers have gone as far as to strategically divide student parties and to discourage intermingling across the parties so that students form more bonds with the peers they work with academically. Social bonding is an integral component of the study program.

Announcing these social events exclusively on Facebook creates an irresistible temptation for non-Facebook users to join. Students face an ultimatum: either become a serf of Facebook, or accept social isolation. It also destroys any hope that existing FB users who want to break away from a Facebook addiction can actually do so. Students without Facebook accounts are naturally in the dark. Facebook non-patrons may be able to catch ad-hoc hallway chatter about school events, but that is a reckless approach.

When the official class schedule is incorrectly published, students who discover the error in advance announce it on Facebook. Facebook then stands as the only source of information for schedule corrections, causing Facebook non-patrons to either miss class or show up for a class that doesn't exist.

Unofficial student-led seminars and workshops are sometimes announced exclusively on Facebook. These workshops are optional but academic nonetheless.

Sometimes information exists on the school website and is duplicated on Facebook. The information becomes deeply buried on the poorly organized school website because the maintainers pay more attention to the Facebook publication, which they assume everyone is reading. Specifically, the study abroad program has two versions of the document that lists all the foreign schools for which there is an exchange program. One version is obsolete, showing schools that no longer participate. Both versions appear in different parts of the website. The schedule of study abroad workshops is so buried that a student relying on the school website is unlikely to know the workshops even exist. Removing the Facebook distraction would perhaps mitigate the website neglect.

RUC does not instruct students to establish Facebook accounts. There is simply a silent expectation that students are already Facebook serfs. Some of the above mentioned problems can come as a surprise because Facebook excludes non-members from even viewing the content, so non-patrons don’t even have a way to see what kind of information they are missing. There is an immense undercurrent of pressure for RUC students to become addicted loyal patrons of Facebook's corporate walled-garden.

 

MAFF (a shit-show, unsustained)

Firefox used to have an in-house format called MAFF (Mozilla Archive File Format), which boiled down to a zip file containing the HTML and a tree of media. I saved several web pages that way. It worked well. Then Mozilla dropped the ball and completely abandoned their own format. WTF. They did not even give people a MAFF→MHTML conversion tool. They just abandoned people, failing to grasp the meaning and purpose of archival. Firefox today has no replacement. No MHTML. The choices are:

  • HTML only
  • HTML complete (not a single file, but a tree of files)

MHTML (shit-show due to non-portable browser-dependency)

Chromium-based browsers can save a whole web page to a single MHTML file. Seems like a good move, but if you then open a Chromium-generated MHTML file in Firefox, you just get an ASCII text dump of the contents, which resembles raw email headers and MIME parts, mostly encoded (probably base64). So that’s a show-stopper.

The exceptionally portable approach: a plugin adds a right-click option called “Save Page WE” (available in both Firefox and Chromium). That extension produces an MHTML file that both Chromium and Firefox can open.

PDF (lossy)

Saving or printing a web page to PDF mostly guarantees that the content and its presentation can reasonably be reproduced well into the future. The problem is that PDF inherently forces the content into a fixed width that matches a physical paper geometry (A4, US Letter, etc). So you lose some data: you lose the information needed to re-render it on different devices with different widths. You might save to A4 and later need to print to US Letter, which gets a bit sloppy and messy.
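
For what it’s worth, the print-to-PDF step doesn’t strictly need a GUI. A headless Chromium can do it from a script (sketch; assumes a Chromium-based browser is installed and the URL is just an example):

  chromium --headless --print-to-pdf=webpage.pdf 'https://example.com/article'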

PDF+MHTML hybrid

First use Firefox with the “Save Page WE” plugin to produce an MHTML file. But relying on this alone is foolish considering how unstable the HTML specs still are, even today in 2024, with a duopoly of browser makers doing whatever the fuck they want and abusing their power. So you should also print the webpage to a PDF file. The PDF ensures you have a reliable way to reproduce the content in the future. Then embed the MHTML file in the PDF (because PDF is a container format). Use this command:

$ pdfattach webpage.pdf webpage.mhtml webpage_with_HTML.pdf

The PDF will just work as you expect a PDF to, but you also have the option to extract the MHTML file with pdfdetach -saveall webpage_with_HTML.pdf if the need arises to re-render the content on a different device.

The downside is duplication. Every image has one copy stored in the MHTML file and another copy stored separately in the PDF next to it. So it’s shitty from a storage-space standpoint. The other downside is plugin dependency. Mozilla has proven that browser extensions are unsustainable when they kicked some of them out of their protectionist official repository and made it painful for the exiled projects to reach their users. There is also the mere fact that plugins are less likely to be maintained than a browser’s built-in functions.

We need to evolve

What we need is a way to save the webpage as a sprawled out tree of files the way Firefox does, then a way to stuff that whole tree of files into a PDF, while also producing a PDF vector graphic that references those other embedded images. I think it’s theoretically possible but no tool exists like this. PDF has no concept of directories AFAIK, so the HTML tree would likely have to be flattened before stuffing into the PDF.
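
Until such a tool exists, a crude approximation can be scripted with poppler-utils: flatten the tree’s paths and attach each file to the PDF snapshot one at a time. Rough sketch, untested; it assumes the saved tree lives in ./page/ next to webpage.pdf:

  pdf=webpage.pdf
  find page -type f | while read -r f; do
    flat=$(printf '%s' "$f" | tr '/' '_')    # PDF attachments have no directories
    cp "$f" "/tmp/$flat"
    pdfattach "$pdf" "/tmp/$flat" "$pdf.new" && mv "$pdf.new" "$pdf"
  done
  pdfdetach -list "$pdf"                     # verify everything was embedded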

Are there other approaches I have overlooked? I’m not up to speed on all the e-reader formats, but I think they are made for variable widths. So saving a webpage to an e-reader format of some kind might be more sensible than PDF, if possible.

(update) The goals

  1. Capture the webpage as a static snapshot in time which requires no network to render. Must have a simple and stable format whereby future viewers are unlikely to change their treatment of the archive. PDF comes close to this.
  2. Record the raw original web content in a non-lossy way. This is to enable us to re-render the content on different devices with different widths. Future-proofing the raw content is likely impossible because we cannot stop the unstable web standards from changing, but capturing a timestamp and the web browser’s user-agent string would facilitate installation of the original browser. A snapshot of the audio, video, and code (JavaScript) which makes the page dynamic is also needed, both for forensic purposes (suitable for court) and for being able to faithfully reproduce the dynamic elements if needed. This is to faithfully capture what is more of an application than a document. wget -m possibly satisfies this, but it is perhaps tricky to capture 3rd-party JS without recursing too far on other links (a wget sketch follows below).
  3. A raw, code-free (thus partially lossy) snapshot for offline rendering is also needed if goal 1 leads to a width-constrained format. Save Page WE and WebScrapBook apparently satisfy this.

PDF satisfies goal 1; wget satisfies goal 2; MAFF/MHTML satisfies goal 3. There is likely no single format that does all of the above, AFAIK. But I still need to explore these suggestions.
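
For goal 2, something along these lines might be a starting point: grab one page with all of its requisites (including third-party JS and CSS) without crawling off into the rest of the web. Untested sketch; the URL is a placeholder:

  wget --page-requisites --span-hosts --convert-links --adjust-extension \
       'https://example.com/article'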

 

I recall an inspirational story where a woman tried many dating sites and they all lacked the filters and features she needed to find the right guy. So she wrote a scraper bot to harvest profiles and software that narrowed down the selection and proposed a candidate. She ended up marrying him.

It’s a great story. I have no link ATM, and a search came up dry, but I did find this story:

https://www.ted.com/talks/amy_webb_how_i_hacked_online_dating/transcript?subtitle=en

I can’t watch videos right now. It could even be the right story but I can’t verify.

I wonder if she made a version 2.0 which would periodically scrape new profiles and check whether her husband re-appears on a dating site, which could then alert her about the anomaly.

Anyway, the point of this new community is to showcase beneficial bots and demonstrate the need to get people off the flawed idea that all bots are malicious. We need more advocacy for beneficial bots.

 

I’ve noticed this problem on infosec.pub as well. If I edit a post and submit, the form is accepted but then the edits are simply scrapped. When I re-review my msg, the edits did not stick. This is a very old Lemmy bug, I think going back over a year, but it’s bizarre how it’s non-reproducible. Some instances never have this problem, but sdf and infosec trigger this bug unpredictably.

0.19.3 is currently the best Lemmy version, but it still has this bug (just as 0.19.5 does). A good remedy would be to install an alternative front end, like Alexandrite.

 

A home insurance policy offers a discount to AAA members. The discount is the same amount as the cost of membership. I so rarely use a car or motorcycle that I would not benefit significantly from a roadside assistance plan. I cycle. But there are other discounts for AAA membership, like restaurant discounts. So my knee-jerk thought was: this is a no-brainer… I’m getting some benefits for free, in effect, so it just makes sense to get the membership.

Then I dug into AAA a bit more. The wiki shows beneficial and harmful things AAA has done. From the wiki, these points stand out to me:

AAA blamed pedestrians for safety problems

“As summarized by historian Peter Norton, "[AAA] and other members of motordom were crafting a new kind of traffic safety effort[. ...] It claimed that pedestrians were just as responsible as motorists for injuries and accidents. It ignored claims defending the historic rights of pedestrians to the streets—in the new motor age, historic precedents were obsolete.”

AAA fights gasoline tax

“Skyrocketing gas prices led AAA to testify before three Congressional committees regarding increased gasoline prices in 2000, and to lobby to prevent Congress from repealing parts of the federal gasoline tax, which would have reduced Highway Trust Fund revenue without guaranteeing consumers any relief from high gas prices.”

AAA fights mass transit

“Despite its work promoting environmental responsibility in the automotive and transportation arenas, AAA's lobbying positions have sometimes been perceived to be hostile to mass transit and environmental interests. In 2006, the Automobile Club of Southern California worked against Prop. 87. The proposition would have established a "$4 billion program to reduce petroleum consumption (in California) by 25 percent, with research and production incentives for alternative energy, alternative energy vehicles, energy efficient technologies, and for education and training."”

(edit) AAA fights for more roads and fought against the Clean Air Act

Daniel Becker, director of Sierra Club's global warming and energy program, described AAA as "a lobbyist for more roads, more pollution, and more gas guzzling."[86] He observed that among other lobbying activities, AAA issued a press release critical of the Clean Air Act, stating that it would "threaten the personal mobility of millions of Americans and jeopardize needed funds for new highway construction and safety improvements."[86] "AAA spokespeople have criticized open-space measures and opposed U.S. EPA restrictions on smog, soot, and tailpipe emissions."[87] "The club spent years battling stricter vehicle-emissions standards in Maryland, whose air, because of emissions and pollution from states upwind, is among the nation's worst."[88] As of 2017, AAA continues to lobby against public transportation projects.

Even though the roadside assistance is useless to me, the AAA membership comes with 2 more memberships. So I could give memberships to 2 family members and they would benefit. But it seems I need to drop this idea. AAA seems to be doing more harm than good overall.

AAA is a federation

It’s interesting to realize that AAA is not a single org. It is a federation of many clubs. Some states have more than one AAA club. This complicates the decision a bit, because who is to say that specific club X in state Y spent money fighting the gas tax or fighting mass transit? Is it fair to say all clubs feed money to the top where federal lobbying happens?

(edit) And doesn’t it seem foolish to oppose mass transit even from the selfish car driver’s standpoint? If you drive a car, other cars are in your way slowing you down and also increasing your chances of simultaneously occupying the same space (a crash). Surely you would benefit from others switching from cars to public transport to give you more road space. It seems to me the anti-mass-transit move is AAA looking after its own interest in having more members paying dues.

Will AAA go the direction of the NRA?

Most people know the NRA today as an evil anti-gun-control, anti-safety, right-wing org. It was not always that way. The NRA used to be a genuine force for good. It used to truly advocate for gun safety. Then it became hyper-politicized and perversely fought for gun owner rights to the extreme extent of opposing gun safety. I wonder if AAA might take the same extreme direction as the NRA, as urban planners increasingly come to their senses and realize cars are not good for us. Instead of being a force of safety, AAA will likely evolve into an anti-safety org in the face of safer-than-cars means of transport. (Maybe someone should start a counter-org called “Safer than Cars Alliance” or “Better than Cars Alliance”.)

I also noticed most AAA clubs’ websites block Tor. So the lack of respect for privacy just made my decision to nix them even easier.

16
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

This is what my fetchmail log looks like today (UIDs and domains obfuscated):

fetchmail: starting fetchmail 6.4.37 daemon
fetchmail: Server certificate verification error: self-signed certificate in certificate chain
fetchmail: Missing trust anchor certificate: /C=US/O=Let's Encrypt/CN=R3
fetchmail: This could mean that the root CA's signing certificate is not in the trusted CA certificate location, or that c_rehash needs to be run on the certificate directory. For details, please see the documentation of --sslcertpath and --sslcertfile in the manual page. See README.SSL for details.
fetchmail: OpenSSL reported: error:0A000086:SSL routines::certificate verify failed
fetchmail: server4.com: SSL connection failed.
fetchmail: socket error while fetching from [email protected]@server4.com
fetchmail: Query status=2 (SOCKET)
fetchmail: Server certificate verification error: self-signed certificate in certificate chain
fetchmail: Missing trust anchor certificate: /C=US/O=Let's Encrypt/CN=R3
fetchmail: This could mean that the root CA's signing certificate is not in the trusted CA certificate location, or that c_rehash needs to be run on the certificate directory. For details, please see the documentation of --sslcertpath and --sslcertfile in the manual page. See README.SSL for details.
fetchmail: OpenSSL reported: error:0A000086:SSL routines::certificate verify failed
fetchmail: server3.com: SSL connection failed.
fetchmail: socket error while fetching from [email protected]@server3.com
fetchmail: Server certificate verification error: self-signed certificate in certificate chain
fetchmail: Missing trust anchor certificate: /C=US/O=Let's Encrypt/CN=R3
fetchmail: This could mean that the root CA's signing certificate is not in the trusted CA certificate location, or that c_rehash needs to be run on the certificate directory. For details, please see the documentation of --sslcertpath and --sslcertfile in the manual page. See README.SSL for details.
fetchmail: OpenSSL reported: error:0A000086:SSL routines::certificate verify failed
fetchmail: server2.com: SSL connection failed.
fetchmail: socket error while fetching from [email protected]@server2.com
fetchmail: Query status=2 (SOCKET)
fetchmail: Server certificate verification error: self-signed certificate in certificate chain
fetchmail: Missing trust anchor certificate: /C=US/O=Let's Encrypt/CN=R3
fetchmail: This could mean that the root CA's signing certificate is not in the trusted CA certificate location, or that c_rehash needs to be run on the certificate directory. For details, please see the documentation of --sslcertpath and --sslcertfile in the manual page. See README.SSL for details.
fetchmail: OpenSSL reported: error:0A000086:SSL routines::certificate verify failed
fetchmail: server1.com: SSL connection failed.
fetchmail: socket error while fetching from [email protected]@server1.com
fetchmail: Query status=2 (SOCKET)

In principle I should be able to report the exit node somewhere. But I don’t even know how I can determine which exit node is the culprit. Running nyx just shows some of the circuits (guard, middle, exit) but I seem to have no way of associating those circuits with fetchmail’s traffic.

Anyone know how to track which exit node is used for various sessions? I could of course pin an exit node to a domain, then I would know it, but that loses the benefit of random selection.
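
One approach that might work (untested), assuming ControlPort 9051 is enabled in torrc with password or cookie auth: ask the control port which circuit each stream is attached to, and which relays make up that circuit. The last hop in the circuit’s path is the exit.

  # "controlpassword" is a placeholder for whatever HashedControlPassword was generated from
  printf 'AUTHENTICATE "controlpassword"\r\nGETINFO stream-status\r\nGETINFO circuit-status\r\nQUIT\r\n' \
    | nc 127.0.0.1 9051
  # stream-status lines look like:  StreamID Status CircuitID Target
  # circuit-status lines look like: CircuitID Status Path
  # match fetchmail's target (e.g. server1.com:993) to a CircuitID, then read that
  # circuit's path -- the final fingerprint~nickname in the path is the exit node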

 

And if you try to visit the archive¹, that’s also fucked.

Not sure who these people are... maybe they are actually watchdogs in opposition to open data.

¹ https://web.archive.org/web/20240925081816/https://www.opendatawatch.com/
