Molecular0079

joined 2 years ago
[–] [email protected] 6 points 8 months ago (1 child)

Yeah, it's amazing how upvoted the previous comment is. Just a bunch of idiots jumping on the web-hate bandwagon when even basic media players like Kodi have a tough time playing back video on the Pi.

It just isn't a very optimized device for video playback. The Pi 5 is actually a step backwards as well, providing only H.265 hardware decode, which the web doesn't even use.

[–] [email protected] 2 points 8 months ago

My issue with skins is that they're completely immersion-breaking. You have Homelander and Gaia running around in Call of Duty now. It's comical and just destroys my enjoyment of the game.

The skins get worse and worse because, to keep the money machine going, they have to make more and more "unique" skins, which just destroys the cohesion of the world they've built.

[–] [email protected] 13 points 8 months ago (6 children)

This. It all boils down to value for money. Five dollars for a cosmetic skin is bullshit. Five dollars or more for DLC with meaningful content is okay.

[–] [email protected] 1 point 9 months ago* (last edited 9 months ago) (1 child)

Thanks! So VRR works out of the box for you, or did you have to tweak things to get it working? The answers on the Amazon page are conflicting: the manufacturer says VRR is not supported, but some users say it works. Don't know who to believe.

[–] [email protected] 1 point 9 months ago* (last edited 9 months ago) (1 child)

I have the Arctis 7 as well, and the default EQ sounds just fine, although I do prefer Bass Boost. You can run the software inside a Windows VM, pass through the USB dongle, and configure all your settings there. They get saved to the headset and keep working in Linux without any native software.
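If you're using libvirt for the VM, passing the dongle through is mostly a matter of handing it the device by USB ID. A rough sketch, assuming a VM named win10; the product ID here is a placeholder, so check lsusb for your dongle's actual IDs:

    <!-- dongle.xml: SteelSeries devices use vendor ID 0x1038; replace
         the product ID with whatever lsusb reports for your dongle -->
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1038'/>
        <product id='0x12ad'/>
      </source>
    </hostdev>

Then attach it with virsh attach-device win10 dongle.xml --persistent and the dongle shows up inside Windows for the SteelSeries software to configure.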

[–] [email protected] 2 points 9 months ago

Usually yes, but it doesn't apply to BG3. The Vulkan renderer has been terribly broken ever since Patch 3.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago)

They borked the Vulkan renderer somewhere around Patch 3, I think? It used to be so performant, but now it only runs at 40-60 fps on my Nvidia 3090, compared to the DX11 renderer, which can do 80-120 T_T

[–] [email protected] 3 points 9 months ago

Valve should totally hook her up with one of those dev accounts that have access to games, so she doesn't have to pay for them herself or rely on getting them gifted. She's doing valuable work for the ecosystem.

[–] [email protected] 1 point 9 months ago (3 children)

    Here's the adapter I use: https://www.youtube.com/watch?v=-b6w814wXvc

Did you copy the wrong link? That's just a random YouTube video.

Good to hear that some adapters do work, though. The lack of HDMI 2.1 basically prevented me from ever considering AMD, but if there are converters that work, that certainly opens up my options.

[–] [email protected] 1 point 9 months ago (5 children)

Do they support VRR, though? Last I heard, that was still an issue with these converters.

[–] [email protected] 1 point 10 months ago

I have a 3090 and just swapped over to the beta 555 drivers and KWin with the explicit sync patches applied (the patches will be available out of the box in KWin 6.1). Honestly, the Wayland experience is basically flawless now in most cases. The only bug I'm experiencing is that Steam shows some corruption in its web views on startup until I resize the window, but that's a minor quibble in exchange for getting Wayland. I expect most of the remaining minor issues to be hammered out quickly.

Honestly, I've had genuinely bad experiences with AMD. I hated my unstable Vega 64, which would crash almost every day, and I was much happier when I finally ditched that card for my 3090. My laptop has a Radeon 680M, and that would regularly have hard system hangs, broken video acceleration, etc.

Besides that, I also think being part of the AMD ecosystem is difficult at times. FSR sucks compared to DLSS, raytracing is sub-par, and there's no Ray Reconstruction equivalent. From a compute perspective, ROCm is unstable: even running something as simple as Darktable with ROCm would leave half of each of my photos rendered incorrectly. Blender with OptiX is much faster than Blender with AMD HIP. If you want to do AI, forget AMD; the ecosystem has basically standardized on CUDA.

And yeah, the lack of HDMI 2.1 means no 4K 120Hz VRR on a wide variety of displays. Everyone says "why not DisplayPort," but it's sometimes tough to find a DP-capable monitor with the right specs and size. For example, try finding an equivalent of an LG C2 that has DP. There's only one, it's by Asus, and it costs $600 more.

[–] [email protected] 1 point 10 months ago (7 children)

A lot of displays don't support DP, unfortunately. I have an LG C2, which is perfect for desktop use and one of the more affordable OLED screens out there, and it does not support DP. The PC monitor equivalent that uses the same panel is made by Asus, but that one has a $600 markup.

 

I've been trying to migrate my services over to rootless Podman containers for a while now, and I keep running into weird issues that make me go back to rootful. This past weekend I almost had it all working, until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers saw requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud's brute-force protection. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue, right?

If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, and only the proxy container connects to all of them to serve outside requests. This way the services are isolated from each other, and the only ingress from the outside is via the proxy container. I can also easily run duplicate instances of the same service without worrying about port collisions, etc. Not being able to see the real client IP really sucks in this situation.
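For completeness, the workaround most often suggested on that issue is to publish the proxy's ports through the slirp4netns port handler, which preserves the source address at the cost of some throughput. A minimal sketch (container name and image are placeholders):

    # rootless: use the slirp4netns port handler instead of the default
    # rootlesskit one so forwarded connections keep the real client IP
    podman run -d --name proxy \
      --network slirp4netns:port_handler=slirp4netns \
      -p 80:80 -p 443:443 \
      docker.io/jc21/nginx-proxy-manager:latest

The catch is that this puts the proxy on slirp4netns networking instead of the per-service bridge networks described above, which is why it doesn't solve things for my setup as-is.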

 

On one of my machines, I am completely unable to log out. The behavior is slightly different depending on whether I am in Wayland or X11.

Wayland

  1. Clicking log out and then OK in the logout window brings me back to the desktop.
  2. Doing this again does the same thing.
  3. Clicking log out a third time does nothing.

X11

  1. Clicking log out will lead me to a black screen with just my mouse cursor.

In my journalctl logs, I see:

Apr 03 21:52:46 arch-nas systemd[1]: Stopping User Runtime Directory /run/user/972...
Apr 03 21:52:46 arch-nas systemd[1]: run-user-972.mount: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: user-runtime-dir@972.service: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: Stopped User Runtime Directory /run/user/972.
Apr 03 21:52:46 arch-nas systemd[1]: Removed slice User Slice of UID 972.
Apr 03 21:52:46 arch-nas systemd[1]: user-972.slice: Consumed 1.564s CPU time.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:48 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:54 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.LogoutPrompt.
Apr 03 21:52:54 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.LogoutPrompt@0.service.
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: kf.windowsystem: static bool KX11Extras::compositingActive() may only be used on X11
Apr 03 21:52:54 arch-nas plasmashell[5079]: qt.qpa.wayland: eglSwapBuffers failed with 0x300d, surface: 0x0
Apr 03 21:52:55 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.Shutdown.
Apr 03 21:52:55 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.Shutdown@0.service.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target plasma-workspace-wayland.target.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace.
Apr 03 21:52:55 arch-nas systemd[4500]: Requested transaction contradicts existing jobs: Transaction for  is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: graphical-session.target: Failed to enqueue stop job, ignoring: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace Core.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Startup of XDG autostart applications.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Session services which should run early before the graphical session is brought up.
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:[email protected]: Main process exited, code=exited, status=1/FAILURE
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:[email protected]: Failed with result 'exit-code'.

I've filed an upstream bug for this, but I was wondering if anyone else here is experiencing the same issue.

 

Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection, or an update broke the boot process and left it stuck in the BIOS or bootloader.

I am looking for a separate device that will not only let me access the NAS as if I had another keyboard, mouse, and monitor attached, but also let me power-cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

 

I mean, come on, this has to be a joke, right? XD

 

cross-posted from: https://lemmy.world/post/4930979

Bcachefs is making progress toward inclusion in the kernel. My dream of a Linux-native, RAID5-capable filesystem is getting closer to reality.

 


Patch 2 seems to have drastically slowed down the Vulkan renderer. Before, I was able to get 80-110 fps in the Druid Grove, but now I am only getting 50. DX11 seems fine, but I prefer Vulkan since I am on Linux.

Arch Linux, Kernel 6.4.12

Ryzen 3900x

Nvidia 3090 w/ 535.104.05 drivers

Latest Proton Experimental

 

I am using one of the official Nextcloud docker-compose files to set up an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network. Whenever I go to the Overview tab in the Admin settings, I see this security warning:

    The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.

I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming Nextcloud is complaining because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only reachable via SWAG. Maybe I am wrong, though. I wanted to double-check and see if there was anything else I needed to do to secure my instance.

SOLVED: Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had it set to SWAG's recommended value of noindex, nofollow, nosnippet, noarchive, but Nextcloud expects exactly noindex, nofollow.
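For anyone hitting the same warning, the fix amounts to changing the header value in SWAG's nginx config. The exact file depends on how your SWAG confs are organized, but the directive itself is just:

    # set exactly the value Nextcloud's admin check looks for
    add_header X-Robots-Tag "noindex, nofollow" always;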

 

cross-posted from: https://lemmy.world/post/3989163

I've been messing around with Podman on Arch and porting my self-hosted services over to it. However, it's been finicky, and I am wondering if anybody here could help me out with a few things.

  1. Some of my containers aren't getting properly started by podman-restart.service on reboot. I realized they were the ones that depend on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang the boot if I disconnect it, but Podman doesn't seem to like this. If I remove the systemd options, the containers boot up automatically as expected, but then I risk boot hangs if the drive ever gets disconnected. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device timeout, to no avail.

When it attempts to start the container, I see this in journalctl:

Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1
  2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-*.scope to time out. Any idea what's causing this? The delay gets pretty annoying, especially on Arch, since I am constantly restarting for updates.

All the containers are started using docker-compose with podman-docker, if that's relevant.

Any help appreciated!

EDIT: So it seems like Podman really doesn't like systemd automounts. Switching to nofail,x-systemd.before=podman-restart.service seems like a decent workaround, if anyone's interested.
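For the concrete version, the resulting fstab entry looks roughly like this (UUID and mount point are placeholders for my drive):

    # /etc/fstab: a plain mount instead of an automount; nofail keeps a
    # missing drive from hanging the boot, and x-systemd.before orders
    # the mount ahead of podman-restart.service starting the containers
    UUID=XXXX-XXXX  /external  btrfs  defaults,nofail,x-systemd.before=podman-restart.service  0 0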

 

While experimenting with ProtonVPN's WireGuard configs, I realized that my real IPv6 address was leaking while IPv4 traffic was correctly going through the tunnel. How do I prevent this from happening?

I've already tried adding ::/0 to the AllowedIPs option, and IPv6 is listed as disabled in the NetworkManager profile.
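For anyone else hitting this, one blunt stopgap (assuming the provider's tunnel is IPv4-only) is to disable IPv6 on the host entirely so nothing can route around the tunnel:

    # turn off IPv6 system-wide; put the same keys in a file under
    # /etc/sysctl.d/ to make the change survive reboots
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

Not elegant, but it guarantees the real IPv6 address can't leak while the tunnel is up.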

 

Linux Unplugged had a pretty good discussion, IMHO, about some of the more nuanced details behind the Red Hat drama that I haven't seen covered as much elsewhere. The final opinion on Red Hat I leave as an exercise to the listener.

view more: next ›