xtremeownage

joined 2 years ago
[–] [email protected] 14 points 2 years ago (1 children)

Anti-DDOS, eh?

You lost me there. There is no self-hosted anti-DDoS solution that is going to be effective.... Because any decent DDoS attack can easily overwhelm your WAN connection entirely. (And potentially even your ISP's upstreams.)
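Back-of-the-envelope math makes the point (the numbers below are hypothetical, not from the thread): by the time traffic reaches anything you self-host, the link is already saturated.

```python
# Illustration with assumed numbers: even a modest volumetric DDoS dwarfs a
# typical home WAN link, so on-premises filtering cannot help -- the pipe is
# already full before your firewall ever sees a packet.

def saturation_ratio(attack_gbps: float, wan_gbps: float) -> float:
    """How many times over the WAN link capacity the attack traffic is."""
    return attack_gbps / wan_gbps

# A small 50 Gbps attack against a 1 Gbps fiber connection:
print(saturation_ratio(50, 1))  # 50.0 -- 50x the link capacity
```

Mitigation has to happen upstream (ISP or a scrubbing service), where there is enough aggregate bandwidth to absorb the flood.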

[–] [email protected] 6 points 2 years ago (1 children)

It could end up being a shart.

[–] [email protected] 1 points 2 years ago (3 children)

Not a clue.

Maybe they like the pretty dashboard pihole has.

[–] [email protected] 1 points 2 years ago

I use vlans to work with it.

[–] [email protected] 1 points 2 years ago (5 children)

unbound as a DNS filter and resolver

It's.... worked as a recursive resolver, with filtering/blacklist features, for years now?
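For anyone unfamiliar with the filtering side: unbound blocks domains via `local-zone` directives with the `always_nxdomain` type. Here is a minimal sketch (the helper function, domain names, and output path are illustrative assumptions) of turning a plain blocklist into an unbound config snippet:

```python
# Hypothetical helper: convert a plain list of ad/tracker domains into an
# unbound(8) config snippet. Each blocked name becomes a local-zone that
# unbound answers with NXDOMAIN -- the usual "filtering resolver" setup.

def to_unbound_blocklist(domains):
    lines = ["server:"]
    for d in domains:
        lines.append(f'    local-zone: "{d}." always_nxdomain')
    return "\n".join(lines)

snippet = to_unbound_blocklist(["ads.example.com", "tracker.example.net"])
print(snippet)
```

Drop the output into a file under unbound's config directory (path varies by distro) and reload; unbound then resolves everything else recursively as normal.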

[–] [email protected] 0 points 2 years ago

Hmm... I need to go get some lemonade from panera...

Sounds good

[–] [email protected] 3 points 2 years ago

I saw it through one of the apps which scrapes reddit comments for archival.

Reddit quit making those stats public a while back, sadly

[–] [email protected] 80 points 2 years ago (10 children)

Don't make the same mistake reddit did, by assuming active users = engagement.

Look at reddit's stats: active users didn't drop very drastically when everyone left. However, engagement/comments dropped drastically.

[–] [email protected] 30 points 2 years ago

Sorry.... watching a sponsored video for World of Tanks for the 10th time, or SimpliSafe, or whatever other garbage is there, isn't going to make me want to purchase it.

I value my time... If I didn't use SponsorBlock, I'd still skip right past it.... This just does it for me.

[–] [email protected] 22 points 2 years ago

Well, I use plex, because I have used plex for a decade, and it just works.

That being said, if I were to use an alternative, Jellyfin is quite fantastic. I actually have a pod running it, just in the event that plex pulls a stupid move, causing me to lose faith in its platform.

I also like the plex interface more than Jellyfin's, and have grown accustomed to it.

Also, Kodi while powerful and extensible... just feels like a bear compared to Jellyfin.

[–] [email protected] 25 points 2 years ago (1 children)

You don't see porn on the front page of lemmy either, if you use the "subscribed" view instead of treating "ALL" as if it doesn't contain everything.

[–] [email protected] 3 points 2 years ago

High uptime doesn't mean anything.

SSDs are rated by how much data can be written to them, as flash has finite write endurance.
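That write-endurance rating is usually expressed as TBW (terabytes written). A rough lifetime estimate from it (the drive rating and daily write volume below are illustrative assumptions, not figures from the thread):

```python
# Rough SSD endurance estimate from a drive's TBW rating: power-on hours say
# nothing about flash wear -- total bytes written is what consumes the drive.

def years_of_life(tbw_rating_tb: float, gb_written_per_day: float) -> float:
    """Years until the rated write endurance is exhausted, at a steady rate."""
    return (tbw_rating_tb * 1000) / gb_written_per_day / 365

# A 1 TB consumer drive rated for 600 TBW, absorbing 100 GB/day of writes:
print(round(years_of_life(600, 100), 1))  # 16.4 years
```

SMART attributes (total LBAs written, percentage used) let you check actual consumption against the rating.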

1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Sorry for the ~30 seconds of downtime earlier, however, we are now updated to version 0.18.4.

Base Lemmy Changes:

https://github.com/LemmyNet/lemmy/compare/0.18.3...0.18.4

Lemmy UI Changes:

https://github.com/LemmyNet/lemmy-ui/compare/0.18.3...0.18.4

Official patch notes: https://join-lemmy.org/news/2023-08-08_-_Lemmy_Release_v0.18.4

Lemmy

  • Fix fetch instance software version from nodeinfo (#3772)
  • Correct logic to meet join-lemmy requirement, don’t have closed signups. Allows Open and Applications. (#3761)
  • Fix ordering when doing a comment_parent type list_comments (#3823)

Lemmy-UI

  • Mark post as read when clicking “Expand here” on the preview image on the post listing page (#1600) (#1978)
  • Update translation submodule (#2023)
  • Fix comment insertion from context views. Fixes #2030 (#2031)
  • Fix password autocomplete (#2033)
  • Fix suggested title " " spaces (#2037)
  • Expanded the RegEx to check if the title contains new line characters. Should fix issue #1962 (#1965)
  • ES-Lint tweak (#2001)
  • Upgrading deps, running prettier. (#1987)
  • Fix document title of admin settings being overwritten by tagline and emoji forms (#2003)
  • Use proper modifier key in markdown text input on macOS (#1995)
 

So, last month, my kubernetes cluster decided to literally eat shit while I was out on a work conference.

When I returned, I decided to try something a tad different, by rolling out proxmox to all of my servers.

Well, I am a huge fan of hyper-converged, and clustered architectures for my home network / lab, so, I decided to give ceph another try.

I have previously used it in the past with relative success with Kubernetes (via rook/ceph), and currently leverage longhorn.

Cluster Details

  1. Kube01 - Optiplex SFF
  • i7-8700 / 32G DDR4
  • 1T Samsung 980 NVMe
  • 128G KIOXIA NVMe (Boot disk)
  • 512G Sata SSD
  • 10G via ConnectX-3
  2. Kube02 - R730XD
  • 2x E5-2697a v4 (32c / 64t)
  • 256G DDR4
  • 128T of spinning disk.
  • 2x 1T 970 evo
  • 2x 1T 970 evo plus
  • A few more NVMes, and Sata
  • Nvidia Tesla P4 GPU.
  • 2x Google Coral TPU
  • 10G intel networking
  3. Kube05 - HP z240
  • i5-6500 / 28G ram
  • 2T Samsung 970 Evo plus NVMe
  • 512G Samsung boot NVMe
  • 10G via ConnectX-3
  4. Kube06 - Optiplex Micro
  • i7-6700 / 16G DDR4
  • Liteon 256G Sata SSD (boot)
  • 1T Samsung 980

Attempt number one.

I installed and configured ceph, using Kube01, and Kube05.

I used a mixture of 5x 970 evo / 970 evo plus / 980 NVMe drives, and expected it to work pretty decently.

It didn't. The IO was so bad, it was causing my servers to crash.

I ended up removing ceph, and using LVM / ZFS for the time being.

Here are some benchmarks I found online:

https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit#gid=0

https://www.proxmox.com/images/download/pve/docs/Proxmox-VE_Ceph-Benchmark-202009-rev2.pdf

The TL;DR after lots of research: don't use consumer SSDs. Only use enterprise SSDs.
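The reason, in short: Ceph flushes every journal write to stable storage. Enterprise drives with power-loss protection (capacitor-backed cache) can acknowledge those flushes instantly; consumer drives must actually drain their cache each time, so their synchronous-write IOPS collapse. A minimal probe of that metric (the file path and write count are assumptions; run it against a file on the device under test):

```python
# Minimal probe of synchronous-write IOPS -- the metric where consumer SSDs
# fall apart under Ceph. O_DSYNC forces each write to stable storage before
# the call returns, mimicking the OSD journal's behavior.
import os
import time

def sync_write_iops(path: str, writes: int = 200, block: int = 4096) -> float:
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.pwrite(fd, buf, 0)   # rewrite the same block, synchronously
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return writes / elapsed

print(f"{sync_write_iops('/tmp/syncprobe.bin'):.0f} sync write IOPS")
```

On a PLP drive you will typically see tens of thousands of sync IOPS; consumer NVMe can drop to the hundreds, which matches the crash-inducing latency from attempt one.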

Attempt / Experiment Number 2.

I ended up ordering 5x 1T Samsung PM863a enterprise sata drives.

After reinstalling ceph, I put three of the drives into kube05, and one more into kube01 (no ports / power for adding more than a single sata disk...).

Then, I put the cluster together. At first, performance wasn't great.... (but was still 10x the performance of the first attempt!). After updating the crush map to set the failure domain to OSD rather than host, though, performance picked up quite dramatically.

This- is due to the current imbalance of storage/host. Kube05 has 3T of drives, Kube01 has 1T. No storage elsewhere.
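Why the failure-domain change mattered can be sketched out (using the post's actual OSD layout; the helper itself is a simplified illustration, not CRUSH): with a replicated size of 3 and failure domain "host", every replica must land on a different host, which is impossible with OSDs on only two hosts. Dropping the failure domain to "osd" only requires three distinct OSDs.

```python
# Simplified placement-feasibility check for a replicated Ceph pool.
# Real CRUSH is far more involved; this just shows the constraint that
# the failure domain imposes on where replicas may land.

def can_place(replicas: int, osds_per_host: dict, failure_domain: str) -> bool:
    if failure_domain == "host":
        # Each replica needs its own host with at least one OSD.
        return sum(1 for n in osds_per_host.values() if n > 0) >= replicas
    # failure_domain == "osd": replicas may share a host, just not an OSD.
    return sum(osds_per_host.values()) >= replicas

layout = {"kube05": 3, "kube01": 1}   # 3x + 1x PM863a, per the post
print(can_place(3, layout, "host"))   # False -- only two hosts have OSDs
print(can_place(3, layout, "osd"))    # True  -- four OSDs available
```

The tradeoff: with the failure domain at OSD, two replicas of the same object can sit on one host, so losing kube05 could take a PG below min_size. Redistributing OSDs across hosts (attempt #3) restores the ability to use host-level failure domains.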

BUT.... since this was a very successful test, and it was able to deliver enough IOPS to run my I/O-heavy kubernetes workloads... I decided to take it up another step.

A few notes-

Can you guess which drive is the Samsung 980, and which drives are enterprise SATA SSDs? (Look at the latency column.)

Future - Attempt #3

The next goal, is to properly distribute OSDs.

Since I am maxed out on the number of 2.5" SATA drives I can deploy... I picked up some NVMe.

5x 1T Samsung PM963 M.2 NVMe.

I picked up a pair of dual-slot half-height bifurcation cards for Kube02. This will allow me to place four of these into it, with dedicated bandwidth to the CPU.

The remaining one, will be placed inside of Kube01, to replace the 1T samsung 980 NVMe.

This should give me a pretty decent distribution of data, and with all enterprise drives, it should deliver pretty acceptable performance.

More to come....

 

Nothing fancy or dramatic. Just- tuning the idle.

 

Very interesting youtube channel. The fellow takes a car, puts a weird engine in it, and then tweaks and tweaks to maximize hp and fuel economy.

Currently working on a Renault with a lawnmower engine. Previously, he had a Saturn with a 3-cyl Kubota diesel. Got 80+ mpg.

236
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Since my doctor recommended that I put more fiber in my diet, I decided to comply.

So.... in a few hours, I will be running a few OS2 runs across my house, with 10G LR SFP+ modules.

Both runs will be from my rack to the office. One run will be dedicated to the incoming WAN connection (coupled with the existing fiber that.... I don't want to re-terminate). The other will replace the 10G copper run already in place, to save 10 or 20w of energy.

This was sparked by a 10GBase-T module overheating and becoming very intermittent earlier this week, causing a bunch of issues. After replacing the module, links came back up and started working normally.... but... yea, I need to replace the 10G copper links.

With only twinax and fiber 10G links plugged into my 8-port aggregation switch, it is only pulling around 5 watts, which is outstanding, given a single 10GBase-T module uses more than that.
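For the curious, the savings are easy to ballpark (the wattage midpoint and electricity price below are assumptions, not measurements):

```python
# Rough annual savings from retiring 10GBase-T modules in favor of SFP+
# fiber/DAC. Wattage saved and electricity price are assumed values.

def annual_cost(watts: float, price_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost of a constant load, in the same currency."""
    return watts / 1000 * 24 * 365 * price_per_kwh

saved_watts = 15   # midpoint of the 10-20w estimate above
print(f"${annual_cost(saved_watts):.2f}/yr")  # $15.77/yr
```

Not huge on its own, but it adds up across always-on links, and cooler-running optics are also far less prone to the intermittent-link behavior described above.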

Edit,

Also, I ordered the wrong modules. BUT... the hard part of running the fiber is done!

 

Surprisingly, I guess this didn't exist. Well, if you like talking automotive technology, it does now.

Turbocharged

 

Here is one of my projects from a few years back.

Shoving a 1,000hp engine, into a long-bed pickup truck.

It was a fun project, especially since nobody would ever look at it and think, hey, that might be fast. Nope, it was just an ugly, long-bed, single-cab pickup truck.

Completely gutted on the interior with nothing but a seat, steering wheel, shifter, and gauges.

Sadly, I have retired this project due to it not being very practical for anything.

Coming soon, whenever I get off of my ass: I will be putting this powerplant into a 1987 4x4 Suburban... Don't hold your breath too much; since COVID occurred, I have not been driving much, and this project has not been high on my list of priorities.

 

Just sharing the latest experiment from Garage54 to get some posts flowing.

If you have not seen these guys, they do some really interesting experiments and projects.
