Lemmling

joined 2 years ago
 

cross-posted from: https://lemmy.world/post/27031457

CrowdSec "Community"

CrowdSec "Community" offering only gets worse and worse!

First, they raised a paywall around querying details on IP addresses that triggered Alerts. Only 30 queries per week for the "Community".

Now, they have extended that paywall to cover the whole Alerts feature! Only 500 alerts per month for the "Community"!

Enshittification meets cybersecurity!

[–] [email protected] 1 points 6 days ago

He is struggling in the Ferrari.

[–] [email protected] 1 points 2 weeks ago

Glad to hear that 🤠

[–] [email protected] 29 points 2 weeks ago (7 children)

I thought that was very normal.

[–] [email protected] 1 points 2 weeks ago

I prefer native music apps because, in my experience, they offer a smoother experience than PWAs. I don’t mind using PWAs, but the Music Assistant PWA sometimes annoys me with delays when loading items.

[–] [email protected] 4 points 2 weeks ago

I had issues with the HACS version once and also switched to Docker. That way, I could add a health check to restart the container in case something breaks.

[–] [email protected] 8 points 2 weeks ago (2 children)

You can check out music streaming servers such as Navidrome (Jellyfin also works) for that, together with a compatible client such as Symphonium. It is a closed-source app, but the most feature-complete. I use Tempo, but it has no Android Auto support, I believe.

[–] [email protected] 3 points 2 weeks ago (3 children)

Yeah, I did that eventually. Now I have to repeat this on multiple devices.

[–] [email protected] 4 points 2 weeks ago (5 children)

Yes, I plan to do that too. I tried LibreWolf; I wish they added an option to import bookmarks from Firefox. They currently only support importing bookmarks from non-Mozilla-based browsers.

[–] [email protected] 11 points 2 weeks ago

Glad to see Music Assistant is getting updates. It is the only way to keep my Sonos speakers working as the Sonos app hardly ever works for me.

[–] [email protected] 5 points 1 month ago

I used them in parallel for a while before switching to AdGuard. The key features that mattered to me were support for upstream DNS servers via DoH, detailed query logs, and wildcard domain rewriting. A better-looking UI is also a plus.

[–] [email protected] 5 points 1 month ago (1 children)

Good news! Hope they implement detailed query logs and support for upstream DoH DNS servers next.

 
[–] [email protected] 2 points 2 months ago (1 children)

Nice flower indeed.

 

Dear fellow selfhosters,

If you use immich and have a digital camera that shoots JPG+RAW, you must have noticed the duplicate images taking up your screen space. I recently found out that immich has a neat feature called stacking, where you can group images in the timeline. I wrote a very simple Python script to search for and stack the JPG and RAW images in my instance and thought I would share it with the community. Make sure you edit the search parameters and API key, and read the whole script before running it.
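The grouping trick the script relies on is that a JPG+RAW pair from the camera shares one file stem and differs only in suffix, so `pathlib.Path.stem` works as the grouping key. A quick standalone illustration (the file names here are made up, not from my library):

```python
from pathlib import Path
from collections import defaultdict

# A JPG+RAW pair shares one stem, so grouping by Path.stem
# collects both files of a pair under the same key.
names = ["DSCF0001.JPG", "DSCF0001.RAF", "DSCF0002.JPG"]
groups = defaultdict(list)
for name in names:
    groups[Path(name).stem].append(name)

print(groups["DSCF0001"])  # -> ['DSCF0001.JPG', 'DSCF0001.RAF']
print(groups["DSCF0002"])  # -> ['DSCF0002.JPG'] (no RAW partner, won't be stacked)
```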

For advanced immich stacking use this https://github.com/tenekev/immich-auto-stack

NOTE: I did not know this project existed before I wrote the script :)

Happy Holidays!

Immich version: v1.123.0

import json
import requests
from pathlib import Path
from collections import defaultdict

# ---------------------------------
# Configuration & Constants
# ---------------------------------
API_KEY = "API_KEY"
BASE_URL = "https://immich.local.website.tld"  # no trailing slash; the endpoint URLs below add "/api/..."
RAW_FILE_EXT = ".RAF"
HEADERS = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "x-api-key": API_KEY
}
STACKS_URL = f"{BASE_URL}/api/stacks"
SEARCH_URL = f"{BASE_URL}/api/search/metadata"
ASSETS_URL = f"{BASE_URL}/api/assets"  # For checking if an asset is already stacked

# ---------------------------------
# 1. CREATE SEARCH PAYLOAD
# ---------------------------------
def create_search_payload(page: int) -> str:
    """
    Build the JSON payload to send with the search request.
    Modify search settings for your camera
    """
    payload = {
        "make": "FUJIFILM",
        "size": 1000,
        "page": page,
        "model": "X-S20",
        "takenAfter": "2024-12-20T00:00:00.000Z"
    }
    return json.dumps(payload)

# ---------------------------------
# 2. FETCH SEARCH RESULTS
# ---------------------------------
def fetch_search_results(page: int) -> dict:
    """
    Send a POST request to the search metadata endpoint 
    and return the parsed JSON response.
    """
    payload = create_search_payload(page)
    response = requests.post(SEARCH_URL, headers=HEADERS, data=payload)
    response.raise_for_status()  # raises an exception if the request fails
    return response.json()

# ---------------------------------
# 3. PROCESS SEARCH RESULTS
# ---------------------------------
def process_search_results(search_results: dict, assets: defaultdict) -> None:
    """
    Parse the items in the search results and store them in the assets dict.
    The key is the file stem (without suffix), and the value is a list of items.
    """
    for item in search_results["assets"]["items"]:
        original_file_name = Path(item["originalFileName"])
        assets[original_file_name.stem].append(item)

# ---------------------------------
# 4a. HELPER: Check if a single asset is already stacked
# ---------------------------------
def is_asset_stacked(asset_id: str) -> bool:
    """
    Perform a GET request on /api/assets/:id to determine if 
    that asset is already part of a stack.

    Returns True if 'stack' is present (and not None) in the response.
    """
    url = f"{ASSETS_URL}/{asset_id}"
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    data = response.json()

    # If the 'stack' key exists and is not None, the asset is stacked
    return bool(data.get("stack"))

# ---------------------------------
# 4b. STACK IMAGES
# ---------------------------------
def stack_images(image: str, items: list) -> None:
    """
    For each image group (stem), determine if it should be stacked. 
    1) Check if any item in the group is already stacked. If yes, skip.
    2) Reverse the item order if needed, based on suffix, so the first item is a JPG, which becomes the primary image in the immich stack.
    3) If the group meets the criteria, send a POST request to stack them.
    """
    ids = [item["id"] for item in items]
    name_suffixes = [Path(item["originalFileName"]).suffix.upper() for item in items]

    # Skip stacking if any asset is already stacked
    if any(is_asset_stacked(asset_id) for asset_id in ids):
        print(f"Skipping '{image}' because one or more assets are already stacked.")
        return

    # If the first suffix is RAW_FILE_EXT, reverse the order
    if name_suffixes and name_suffixes[0] == RAW_FILE_EXT:
        ids.reverse()
        name_suffixes.reverse()

    # Sanity check: a group should contain at most one RAW file
    # (the original `count(...) >= 1` assert was trivially true)
    if RAW_FILE_EXT in name_suffixes:
        assert name_suffixes.count(RAW_FILE_EXT) == 1, f"Multiple {RAW_FILE_EXT} files for '{image}'"

    # Stack if more than one file and the first is .JPG
    if len(name_suffixes) > 1 and name_suffixes[0] == ".JPG":
        payload = json.dumps({"assetIds": ids})
        response = requests.post(STACKS_URL, headers=HEADERS, data=payload)
        print(f"{response.status_code}: {image} - Stacked {len(ids)} images")

# ---------------------------------
# 5. MAIN LOGIC
# ---------------------------------
def main():
    assets = defaultdict(list)
    page = 1

    # Paginate until no nextPage
    while True:
        search_results = fetch_search_results(page)
        items_on_page = search_results["assets"]["items"]
        print(f"Page {page} - Retrieved {len(items_on_page)} items")

        # Store items by grouping them by file stem
        process_search_results(search_results, assets)

        next_page = search_results["assets"]["nextPage"]
        page += 1
        if next_page is None:
            break

    # Process each group to optionally stack images
    for image, items in assets.items():
        stack_images(image, items)

if __name__ == "__main__":
    main()
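One caveat with the script above: `is_asset_stacked()` issues a separate GET per asset, so a large library means many HTTP round trips over fresh connections. A `requests.Session` pools the connection and keeps the headers in one place. A minimal sketch of the same check using a session (the URL and key are the same placeholders as in the script, not real values):

```python
import requests

API_KEY = "API_KEY"  # placeholder, as in the script above
BASE_URL = "https://immich.local.website.tld"

# One Session reuses the underlying TCP/TLS connection across requests
# and applies the default headers to every call.
session = requests.Session()
session.headers.update({
    "Content-Type": "application/json",
    "Accept": "application/json",
    "x-api-key": API_KEY,
})

def is_asset_stacked(asset_id: str) -> bool:
    # Same check as in the script, but over the pooled connection
    response = session.get(f"{BASE_URL}/api/assets/{asset_id}")
    response.raise_for_status()
    return bool(response.json().get("stack"))
```

The rest of the script can use this `session` for the search and stack POSTs too; only the `headers=HEADERS` arguments go away.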
 

I am getting the following error after upgrading yt-dlp:

ERROR: [Piracy] This website is no longer supported since it has been determined to be primarily used for piracy. DO NOT open issues for it

Does anyone know of any forks that still work?

 

Hello everyone 👋 I am new to Lemmy
