Fediverse

33072 readers
115 users here now

A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).

If you want help with moderating your own community, head over to [email protected]!

Rules

Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration)

founded 2 years ago
MODERATORS
1
212
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 
 

[email protected] is not a place to air your grievances about "free speech", disruptive users, moderation, etc.

If you have problems with users: File complaints to the mods or just block them.

If you have problems with mods: File complaints with admins of the instance or just migrate to an alternative community.

If you have problems with an entire instance: Just leave it.

2
 
 

This community was essentially unmoderated for a while, and I was recently approached to take over moderation duties here. I don't intend to change any existing rules; I'll simply enforce them and work through what has piled up in the moderation queue.

The discussion under the recent post about spam accounts turned into a flamewar about US domestic politics, which has literally nothing to do with the Fediverse.

With dozens of comments, I don't have the bandwidth to sift through them individually, so I've locked the thread. The PSA about spam accounts still stands, which is why I didn't remove the post. The accounts involved in that flamewar get a pass this time; consider this a warning. Further trolling about US political parties will result in bans.

3
4
 
 

So I saw a post just now that I thought was a bit dumb, and I was shocked to see 13 boosts and only 1 reply, so I thought to myself: why is nobody criticizing this? Then I clicked on the post on its original instance, and there were 10+ replies criticizing it. Why is that? Is it the fault of my instance, or is it just some Mastodon BS?

Side info: yes, the people who replied are federated, and I can look up their profiles on my instance.

5
 
 

cross-posted from: https://lemmy.blahaj.zone/post/25142642

And if you're not sure why so many people on Fosstodon are considering moving, there are some links in the reply.

6
7
8
9
78
submitted 2 days ago* (last edited 9 hours ago) by [email protected] to c/[email protected]
 
 

This is a followup to my introduction of BlogOnLemmy, a simple blog frontend. If you haven't seen it, no need; I'll explain how it works and how you can run your own BlogOnLemmy for free.

Leveraging the Federation

Having a platform to connect your content to like-minded people is invaluable. The Fediverse achieves this in a platform-agnostic way, so in theory it shouldn't matter which platform we use. But platforms have different userbases that interact with posts in different ways. I've always preferred the forum variety, where communities form and discussion is encouraged.

My posts are shared as original content on Lemmy, and that's who they're meant for. Choosing a traditional blog style makes for a more palatable platform for a wider audience, and in this way also promotes Lemmy.

Constraints

Starting off, I did not want the upkeep of another federated instance. Not every new thing deployed on the Fediverse needs to stand on its own or be built from the ground up as an ActivityPub-compatible service. It can instead use existing infrastructure, already federated, already primed for interconnectivity. Taking it one step further means not having a back-end at all, a 'dumb' website as it were. Posts are made, edited, and cross-posted on Lemmy.

The world of CSS and JavaScript on the other hand - how websites are styled and made feature-rich - is littered with libraries. Treated like black boxes, they're often used for just a few functions, with the rest clogging up our internet experience. Even jQuery, which is used by over 74% of all websites, is already 23kB in its smallest form. I'm not aiming for the smallest possible footprint*, but rather showing that a modern web browser provides an underused toolset of natively supported functionality; something the first webdevs would have given their left kidney for.

Lastly, to improve maintainability and simplicity, one page is enough for a blog, provided that its content can be altered dynamically.

*See optimization

How it's made

(Graphviz diagram: how a request flows from the URL to the Lemmy API to rendered HTML)

1. URL: Category/post

Even before the browser completely loads the page, we can take a look at the URL. Within our constraints, only two types of additions are available to us: the anchor and GET parameters. When an anchor, or '#', is present, the browser scrolls to a specific place on the page after loading. We can hijack this behavior and use it to load predefined categories, like '#blog' or '#linkdumps'. For posts, '#/post/3139396' looks nicer than '?post=3139396', but anchors are rarely search-engine compatible. So I'm extracting the GET parameter to load an individual post.

JavaScript that runs before the page has finished loading should be swift and simple, like coloring the filters or setting Dark/Light mode, so it doesn't delay the site.
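As a sketch, that pre-load routing could look like the following (the function and default names are mine, not the blog's actual code):

```javascript
// Decide what to load from the URL alone: a GET parameter selects a
// single post, an anchor selects a category, otherwise show the default.
function parseRoute(urlString) {
  const url = new URL(urlString);
  // '?post=3139396' -> load an individual post (search-engine friendly)
  const post = url.searchParams.get('post');
  if (post) return { type: 'post', id: post };
  // '#blog' or '#linkdumps' -> hijacked anchor loads a category
  if (url.hash) return { type: 'category', name: url.hash.slice(1) };
  return { type: 'category', name: 'blog' }; // default view
}
```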

2. API -> Lemmy

A simple 'Fetch' is all that's required. Lemmy's API is already extensive, because it's used by different frontends and apps that make an individual's experience unique. When selecting a category, we request all the posts made by me in one or more Lemmy communities. A post or permalink uses the same post_id as on the Lemmy instance. Pretty straightforward.
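A minimal sketch of that request, assuming Lemmy's v3 HTTP API endpoint /api/v3/post/list; the instance, community, and helper names here are illustrative, not the blog's actual code:

```javascript
// Build the API URL for "posts in a community", using Lemmy's v3 API.
function buildPostListUrl(instance, community, limit = 10) {
  const url = new URL(`https://${instance}/api/v3/post/list`);
  url.searchParams.set('community_name', community);
  url.searchParams.set('sort', 'New');
  url.searchParams.set('limit', String(limit));
  return url.toString();
}

// A simple fetch, run asynchronously so the page keeps rendering
// while the posts load. Each returned post carries its body in Markdown.
async function loadPosts(instance, community) {
  const response = await fetch(buildPostListUrl(instance, community));
  const data = await response.json();
  return data.posts;
}
```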

3. Markdown -> HTML

When we get a reply from the Lemmy instance, the posts are formatted in Markdown, just as they are when you submit them. But browsers render HTML, a different markup language. This is where the only code not written by me steps in: a Markdown-to-HTML layer called snarkdown. It's very efficient and probably the smallest footprint possible for what it is, around 1kB.
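To illustrate what such a conversion layer does, here is a toy Markdown-to-HTML converter; this is not snarkdown itself, which covers far more of the syntax in roughly the same amount of code:

```javascript
// Toy Markdown-to-HTML conversion: headings, links, bold, italics.
// Real libraries like snarkdown handle much more (lists, code, quotes).
function miniMarkdown(md) {
  return md
    // headings: '# Title' -> <h1>Title</h1> (longest prefix first)
    .replace(/^### (.*)$/gm, '<h3>$1</h3>')
    .replace(/^## (.*)$/gm, '<h2>$1</h2>')
    .replace(/^# (.*)$/gm, '<h1>$1</h1>')
    // links: [text](url) -> <a href="url">text</a>
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>')
    // emphasis: **bold** before *italic* so the pairs don't collide
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>');
}
```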

Optimization

When my blog launched, I was using a Cloudflare proxy for no-hassle HTTPS handling, caching, and CDN. Within the EU, I'm aiming for sub-100ms* to be faster than the blink of an eye. With a free tier of Cloudflare we can expect a variance between 150 and 600ms at best, and intercontinental caching can take seconds.

Nginx and OpenLiteSpeed are regarded as the fastest webservers out there. I often use Apache for testing, but for deployment I prefer Nginx's speed and reliability. I could sidetrack here and write another 1000 words about optimizing static content and TLS handling in Nginx, but that's a story for another time.

* For the website, API calls are made asynchronously while the page is loaded and are not counted

Mythical 14kB, or less?

All data transferred on the internet is split up into manageable chunks, or frames. Their size, the Maximum Transmission Unit (MTU), is defined by IEEE 802.3-2022 1.4.207 with a maximum of 1518 bytes*. They usually carry 1460 bytes of actual application data, the Maximum Segment Size (MSS).

RFC 6928, followed by most server operating systems, proposes 10x MSS as the initial Congestion Window for the first reply. In other words, the server 'tests' your network by sending 10 frames at once. If your device acknowledges each frame, the server knows it can double the Congestion Window on every subsequent reply until some frames are dropped. This is called TCP Slow Start, defined in RFC 5681.

10 frames of 1460 bytes contain 14.6kB of usable data. Or at least, they used to; the modern web changed with the use of encryption. The Initial Congestion Window, in my use case, includes 2 TLS frames, and each remaining frame loses an extra 29 bytes, reducing our window to 11.4kB. If we manage to fit the website within this first Slow Start routine, we avoid an extra round trip in the TCP/IP protocol, speeding up the website by as much as your latency to the server. Min-maxing TCP traffic is the name of the game.

* Can vary with MTU settings of your network or interface, but around 1500 (+ 14 bytes for headers) is the widely accepted default
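The arithmetic above, sketched out with the numbers from the text:

```javascript
// Usable data in the initial congestion window (RFC 6928: 10 x MSS),
// before and after TLS, using the figures quoted in the text.
const MSS = 1460;        // Maximum Segment Size: payload bytes per frame
const INIT_FRAMES = 10;  // initial congestion window, in frames
const TLS_FRAMES = 2;    // frames consumed by the TLS handshake here
const TLS_OVERHEAD = 29; // extra bytes TLS takes from each remaining frame

const plainWindow = INIT_FRAMES * MSS;                               // 14600 B ~ 14.6 kB
const tlsWindow = (INIT_FRAMES - TLS_FRAMES) * (MSS - TLS_OVERHEAD); // 11448 B ~ 11.4 kB
```

Anything over that ~11.4kB budget costs a second round trip before the page arrives.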

10kB vs 15kB with TCP Slow Start
Visualizes two raw web requests, 10.7kB vs 13.3kB with TCP Slow Start
- Above Blue: Request Starts
- Between Green: TLS Handshake
- Inside Red: Initial Congestion Window

Icons

Icons are tricky, because describing pixel positions takes up a considerable amount of data. Instead, SVGs are commonplace: they create complex shapes programmatically, significantly reducing their footprint. Feathericons is a FOSS icon library providing a beautiful SVG-rendered solution for my navbar. For the favicon, or website icon, I coded it manually with the same font as the blog itself. But after different browsers took liberties with rendering the font and spacing, I converted it to a path-traced design, describing each shape individually and making sure it renders consistently.

Regular vs. Inline vs. Minified

If we sum up the filesizes, we're looking at around 50kB of data. Luckily servers compress* our code, and are pretty good at it, leaving only 15kB to be transferred; just above our 11kB threshold. By making the code unreadable for humans with minifying scripts, we can reduce the final size even more. Only... the files that make up this blog are split up. Common guidelines recommend this to prevent one big file clogging up load times. For us, that means splitting our precious 11kB across multiple round trips: the opposite of our goal. Inline code blocks to the rescue, with the added bonus that the entire site is now compressed as one file, making the compression more efficient and ending optimization at a neat 10.7kB.

* The Web uses Gzip. A more performant choice today is Brotli, which I compiled for use on my server

In Practice

All good in theory; now let's see the effect in practice. I've deployed the blog 4 times, and each version was measured for total download time over 20 requests. In the first graph, we notice the impact of not staying inside the Initial Congestion Window: only the second scenario is delayed by a second round trip when loading the first page.

Scenarios 1 and 3 have separate files, so separate requests are made. This prioritizes displaying the website, i.e. the first file, but neglects potentially usable space inside the init_cwnd. As the second graph shows, it ends up almost doubling their respective total load times.

The final version is the only one transferring all the data in one round trip, and is the one deployed on the main site. With total download times as low as 51ms, around 150ms as a soft upper limit, and 85ms average in Europe. Unfortunately, that means worldwide tests show load times of 700ms, so I'll eventually implement a CDN.

Speedtest 4 scenarios

  1. Regular (14.46kB): no minification, separate files
    - https://dev3.martijn.sh/
  2. Inline (13.29kB): no minification, one file
    - https://dev1.martijn.sh/
  3. Regular Minified (10.98kB): but still using separate files
    - https://dev2.martijn.sh/
  4. Inline Minified (10.69kB): one page as small as possible
    - https://martijn.sh/
    - https://martijn.sh/

I'll be leaving up dev versions until there's a significant update to the site

Content Delivery Network

Speeds like this can only be achieved when you're close to my server, which is in London. For my Eurobros that means blazing fast response times. For anyone else, cdn.martijn.sh points to Cloudflare's CDN and git.martijn.sh to GitHub's CDN. These services allow us to distribute our blog to servers across the globe, so requesting clients always choose the closest server available.

GitHub Pages

An easy and free way of serving a static webpage. Fork the BlogOnLemmy repository and name it 'GitHub-Username'.github.io. Your website is now available as username.github.io and even supports the use of custom domain names. Mine is served at git.martijn.sh.

While testing its load times worldwide, I got response times as low as 64ms, with 250ms on the high end. Unsurprisingly, they deliver the page slightly faster globally than Cloudflare does, because they optimize for static content.

Extra features

  • Following the Light or Dark mode of the user's device is a courtesy more than anything else. Added to this is a selectable permanent setting: my way of highlighting the overuse of cookies and localStorage, by giving the user the choice to store data for a website built from the ground up not to use any.
  • A memorable and interactable canvas to give a personal touch to the about me section.
  • Collapsed articles with a 'Read More'-Button.
  • 'Load More'-Button loads the next 10 posts, so the page is as long as you want it to be

Webmentions

Essential for blogging in the current year, Webmentions keep websites up to date when links to them are created or edited. Fortunately, Lemmy has us covered: when posts are made, the originating instance sends a Webmention to the hosts of any links mentioned in the post.

To stay within scope, I'll be using webmention.io for now, which lets us get notified when we're linked somewhere else by adding just a single line of HTML to our code.

Notes

  • Enabling HTTP2 or 3 did not speed up load times, in fact with protocol negotiations and TLS they added one more packet to the Initial Congestion Window.
  • For now, the apex domain will be pointing directly to my server, but more testing is required in choosing a CDN.
  • Editing this site for personal use requires knowledge of HTML and JS for now, but I might create a script to individualize blogs easier.

Edit: GitHub | ./Martijn.sh > Blog

10
11
 
 

cross-posted from: https://lemmy.world/post/28633828

Finally, some simple UI: you click on the browser extension icon and get taken to a webpage that shows you all the videos ranked by cosine similarity (I just now realized it has UUID when it should have shortuuid). Linked below is the webpage:

https://github.com/solidheron/peertube_recomendation_algorythm/blob/main/example1.PNG

Above is just an example with my recommendations; luckily the links change color if you've already seen them. Of course, the video at the top is the video I watched the longest.

Other than that, I've been cleaning up backend stuff and ignoring minor errors that pop up. It should now more accurately capture watch time on PeerTube videos and doesn't just say you watched an hour of a video you didn't care about. I'm probably adding a bunch of code that will need to get cleaned up.

My opinion has shifted a bit on this simple algo: the videos I get tend to be random and require me to find videos independently to get some decent suggestions. Also, there's a Linux pipeline.

I do have a software engineering problem: view time is the only input to the algorithm, while the like, dislike, and finished status of a video are also available. I've decided to go with cosine similarity for likes and dislikes, added on top of the time engagement. If you like a video, all of that video's tokens get +1 in length; a dislike gives them -1. I thought it was a good solution because it doesn't rely on converting a like into time. I wouldn't know how to treat a like as a multiplier on the time-engagement vector, or whether a dislike would be negative or just zero. Generally, adding a like cosine vector to the time-engagement vector means both are (sorta) normalized and both can contribute to a video recommendation.

It seems like the cosine recommendations will need some processing.
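A sketch of that +1/-1 adjustment on top of the watch-time weights; function and variable names are illustrative, not the extension's actual code:

```javascript
// Apply a like (+1) or dislike (-1) to every token of a video,
// on top of whatever watch-time weight the tokens already carry.
function applyReaction(vector, tokens, liked) {
  const delta = liked ? 1 : -1;
  for (const token of tokens) {
    vector[token] = (vector[token] || 0) + delta;
  }
  return vector;
}
```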

12
 
 

This scoring system evaluates how decentralized and self-hostable a platform is, based on four core metrics.

πŸ“Š Scoring Metrics (Total: 100 Points)

| Metric | Weight | Description |
|---|---|---|
| Top Provider User Share | 30 | Measures how many users are on the largest instance. Full points if <20%; 0 if >80%. |
| Top Provider Content Share | 30 | Measures how much content is hosted by the largest instance. Full points if <20%; 0 if >80%. |
| Ease of Self-Hosting: Server | 20 | Technical ease of running your own backend. Full points for a simple setup with good docs. |
| Ease of Self-Hosting: User Interface | 20 | Availability and usability of clients. Full points for accessible, FOSS, multi-platform clients. |

πŸ“‹ Example Breakdown (Estimates)

| Platform | Score | Visualization |
|---|---|---|
| 📧 Email | 95 | 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 |
| 🐹 Lemmy | 79 | 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 |
| 🐘 Mastodon | 74 | 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 |
| 🟣 PeerTube | 94 | 🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩🟩 |
| 🖼 Pixelfed | 42 | 🟧🟧🟧🟧🟧🟧🟧🟧 |
| 🔵 Bluesky | 14 | 🟥🟥🟥 |
| 🟠 Reddit | 3 | 🟥 |

πŸ“§ Email

  • Top Provider User Share: Google β‰ˆ 17% β†’ Score: 30/30
  • Top Provider Content Share: Google handles β‰ˆ 17% of mail β†’ Score: 30/30
  • Self-Hosting: Server: Easy (Can leverage hundreds of email hosting options) β†’ Score: 16/20
  • Self-Hosting: Client: Easy (Thunderbird, K-9, etc.) β†’ Score: 19/20

Total: 95/100


🐹 Lemmy

  • Top Provider User Share: lemmy.world β‰ˆ 37% β†’ Score: 21.5/30
  • Top Provider Content Share: lemmy.world hosts β‰ˆ 37% content β†’ Score: 21.5/30
  • Self-Hosting: Server: Easy (Docker, low resource) β†’ Score: 18/20
  • Self-Hosting: Client: Good FOSS apps, web UI β†’ Score: 18/20

Total: 79/100


🐘 Mastodon

  • Top Provider User Share: mastodon.social β‰ˆ 40% β†’ Score: 20/30
  • Top Provider Content Share: mastodon.social β‰ˆ 45–50% content β†’ Score: 20/30
  • Self-Hosting: Server: Docker setup, moderate difficulty β†’ Score: 15/20
  • Self-Hosting: Client: Strong ecosystem (Tusky, web, etc.) β†’ Score: 19/20

Total: 74/100


🟣 PeerTube

  • Top Provider User Share: wirtube.de β‰ˆ 14% β†’ Score: 30/30
  • Top Provider Content Share: Approximately 14% β†’ Score: 30/30
  • Self-Hosting: Server: Docker, active community, moderate resources β†’ Score: 16/20
  • Self-Hosting: Client: Web-first UI, FOSS, some mobile options β†’ Score: 18/20

Total: 94/100


πŸ–Ό Pixelfed

  • Top Provider User Share: pixelfed.social β‰ˆ 71% β†’ Score: 4.5/30
  • Top Provider Content Share: Approximately 71% β†’ Score: 4.5/30
  • Self-Hosting: Server: Laravel-based, Docker available, some config needed β†’ Score: 15/20
  • Self-Hosting: Client: Web UI, FOSS, mobile apps in progress β†’ Score: 18/20

Total: 42/100


πŸ”΅ Bluesky

  • Top Provider User Share: bsky.social β‰ˆ 99% β†’ Score: 0/30
  • Top Provider Content Share: Nearly all content on bsky.social β†’ Score: 0/30
  • Self-Hosting: Server: PDS hosting possible but very niche and poorly documented β†’ Score: 4/20
  • Self-Hosting: Client: Mostly official client; some 3rd party β†’ Score: 10/20

Total: 14/100


🟠 Reddit

  • Top Provider User Share: Reddit hosts 100% of user accounts β†’ Score: 0/30
  • Top Provider Content Share: Reddit hosts all user-generated content β†’ Score: 0/30
  • Self-Hosting: Server: Not self-hostable (proprietary platform) β†’ Score: 0/20
  • Self-Hosting: Client: Some unofficial clients available β†’ Score: 3/20

Total: 3/100


How Scores are Calculated

πŸ§‘β€πŸ€β€πŸ§‘ How User/Content Share Scores Work

This measures how many users are on the largest provider (or instance).

  • No provider > 20%: If no provider has more than 20%, it gets full 30 points.
  • Between 20% and 80%: Anything in between is scored on a linear scale.
  • > 80%: If a provider has more than 80%, it gets 0 points.

πŸ“Š Formula:

Score = 30 Γ— (1 - (TopProviderShare - 20) / 60)
…but only if TopProviderShare is between 20% and 80%.
If below 20%, full 30. If above 80%, zero.

πŸ“Œ Example:

If one provider has 40% of all users:
β†’ Score = 30 Γ— (1 - (40 - 20) / 60) = 30 Γ— (1 - 0.43) = 17.1 points

πŸ–₯️ How Ease of Self-Hosting Scores Work

These scores measure how easy it is for individuals or communities to run their own servers or use clients.

This looks at how technically easy it is to run your own backend (e.g., email server, Mastodon server) or User Interface (e.g., web-interface or mobile-app)

  • Very Easy: One-command or setup wizard, great documentation β†’ 18–20 points
  • Moderate: Docker or manual setup, some config, active community support β†’ 13–17 points
  • Hard: Complex setup, needs regular updates or custom config, poor documentation β†’ 6–12 points
  • Very Hard or Proprietary: Little to no self-hosting support, undocumented β†’ 0–5 points

πŸ“š Sources

Footnotes

This is a work in progress and may contain mistakes. If you have ideas or suggestions for improvement, feel free to let me know.

Source: https://github.com/NoBadDays/decentralization-score/blob/main/decentralization_score_2025.04.md

13
 
 

I'm wondering if anyone has made a fediverse-like system (i.e., multiple instances talking to each other) for Discord?

I know Matrix exists, but it only has rooms, rather than servers with channels, etc...

14
12
Info on Mastodon (social.growyourown.services)
submitted 4 days ago by [email protected] to c/[email protected]
15
16
17
 
 

cross-posted from: https://lemmy.world/post/28546756

So I’ve completed the cosine similarity function, which means the script is now recommending videos in a raw way. Below is just a ranking of videos that match my watch history (all three are most likely videos I’ve already watched):

2: {shortUUID: "saKY2TWfwNYgPUQFkE4xsi", similarity: 0.4955}
3: {shortUUID: "kk7x8GAs7gNvkzaPs6EPiU", similarity: 0.4099}
4: {shortUUID: "uXeAyVfX1WEzqSPsDxtH3p", similarity: 0.2829}

Getting to this point made me realize: there’s no such thing as a simple algorithmβ€”just simple ways to collect data. The code currently has issues with collecting data properly, so that’s something that needs fixing. Hopefully, once the data collection in this script is improved, it can be reused for future Fediverse algorithms.

There are countless ways to process the data. Cosine similarity is a simple concept and easy to implement in code, but it has a flaw: content you’ve already watched tends to rank higher than anything new. So a basic "pick the highest cosine similarity" approach probably isn’t ideal. It either needs logic to remove already-watched videos, or to bias toward videos lower down in the ranking. But filtering out watched videos isn’t perfect eitherβ€”people do like to rewatch things.

The algorithm currently just looks at how much time you spent watching unique segments of a video, then assigns a value in seconds to all the words in the title, description, and tags, and sums that over all videos.
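A sketch of that approach (tokenize the text, weight each token by seconds watched, then compare videos by cosine similarity); all names are illustrative, not the script's actual code:

```javascript
// Split title/description/tags into lowercase word tokens.
function tokenize(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

// Add `seconds` of watch time to every token of a video's text,
// accumulating a sparse { token: seconds } vector.
function addWatchTime(vector, videoText, seconds) {
  for (const token of tokenize(videoText)) {
    vector[token] = (vector[token] || 0) + seconds;
  }
  return vector;
}

// Cosine similarity between two sparse token-weight vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (const [token, weight] of Object.entries(a)) {
    normA += weight * weight;
    if (token in b) dot += weight * b[token];
  }
  for (const weight of Object.values(b)) normB += weight * weight;
  if (!normA || !normB) return 0;
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Since weights come from watch time, a video sharing tokens with your history scores high, which is exactly why already-watched content floats to the top.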

The algorithm is actually okay—subjectively, it’s better than just sorting by date. I picked a few videos at random from the top 300 ranked by cosine similarity, and there was content interesting enough to watch for more than 30 seconds, and some that was just too weird for me. Here are a few examples:

Some of these links are across different instances because no single PeerTube instance has all the videos. I loaded metadata for over 6,000 videos across five instances during testing.

The question is: should the algorithm be scoped to a single instance (only looking at content on the user’s home instance), or should it recommend from any instance and take you there?

A funny thing to note: there might be a Linux pipeline in this algo.

18
19
 
 

A revolution is supposed to change things. Looking at things today, the only revolutionary idea left is to make society reflect the best of us instead of the worst. Most people prefer kindness and love. But lacking these values allows others to thrive in our world. They spend their time deceiving and exploiting the rest of us, people trying to enjoy life and the things that bring joy and love. We can't do that while spending all our time dealing with the sad, creepy weirdos ruining everyone's lives. So they've been able to shape the world. The only revolution left is to build something to undo what they've done. We need a force for love in the world.

This begins with anyone who thinks it's silly to expect love to play a major role in society and our future. They have to question who taught them how the world works. They have to wonder why they think that way – because the power of love is not a revolutionary concept. Something else convinced them.

Education is designed by politicians also responsible for war; news comes from corporations whose purpose is the exploitation of anyone and anything possible. No wonder people think a more loving world seems like fantasy. Everything seems designed to make us think so.

The internet makes it undeniable that knowledge and tech are fueling hate, greed, ignorance in every heart, every family, community, country. What's not so obvious is how to teach people what's wrong: that knowledge should not be controlled by politicians and the rich.

If we want a revolution to actually change things, it means we need to liberate knowledge from politicians and the rich. A goal like that depends on people understanding why people don't understand it. So instead, we could hope the state of the world's enough to convince people what's wrong.

To literally free knowledge, we have to free the people responsible for it, every individual and group, all the research universities, all focused globally on the same goal: to save the future. What's more loving than that?

The key to a revolution based on love in the world is to build something free of the people who disagree, the hateful, greedy, ignorant, whatever. That's possible with the internet, where we can work together to organize ourselves, our knowledge, our resources.

The first, most important step in the only revolution we have left is to create our own democratic corporation. The only way we can confront the multinationals exploiting us is with our own. The concept of democorporation coordinates all people and groups worldwide, with anyone free to share their knowledge of how to build this future. It begins with whichever individuals, corporations, and institutions of knowledge don't require liberating to help. They can help free the others; they can set the foundation so Democorporation can challenge the multinational corporations pillaging the planet and threatening the future.

Democorporation can only begin as a social network, because that's how people can best support it. Participation, data, and advertising can help with funding. But more important is that it be democratic. People need this network to vote and express how to build their future. With online users, volunteers, donors, employees, and investors all expressing their perspectives, it offers the most balanced democracy and leadership possible.

When we have a social network that we own, uniting the world in our own democracy, we achieve the goal of any great revolution: we establish our own republic. Interepublic has the benefit of a corporation being able to limit, exclude and fire people who don't want it to succeed. That overcomes the problem of real world countries: we're all stuck with people who want our governments to fail, who want others to suffer. As antidotes to the hate in society, Interepublic and Democorporation become outlets of love for the world. That's what we're missing, and it's all we need to change history.

This revolution is global resistance against everything dividing us and everyone exploiting us. It is the β€œrebellion of people coming together”. Interepublic and Democorporation use the internet to create leverage that's never been available before. But only if people agree love is necessary to fix what's wrong. Nothing else will bring us together.

That sounds good until you remember there is no planning how to capture love. You just express it and hope your love is reciprocated...but this isn't a teenager working up the courage to call his first love. This is the world, and all these ideas and plans that express my love only work if people understand the love I've already poured into it.

To make this revolution truly new, truly revolutionary, it begins with something as intimate and personal as love, one stranger to another. So when I profess my love for the world, I risk the worst kind of heartbreak imaginable. Maybe that risk is proof enough? To trust who I am, my motivation, I have to be honest even if it's humiliating for me. That's how to explain what it took for me to do this. No one happy with the world would.

Because here's the thing...I do not want to do this. I think it's inhuman and inhumane to be put in a position like this. It is a constant fight against myself, doing what's right while ruining my life. I'm losing because the world keeps getting worse, so I feel sick with guilt and torture more ideas out of myself.

And I can't describe this inner conflict without describing the kind of sad life that makes someone do this. If I loved myself enough, I would never be in this position. It is a living nightmare. Doing this means I love the world more than myself... too bad it feels like such an abusive relationship.

Think about it: someone's not gonna spend their life on this, decades of trial and error, if they've experienced the best of the world, love, family. My life began in stress. Now that I accomplished my goal, I'm left psychologically devastated by it. I'm in a place of responsibility no one should be. And worst of all it seems to piss people off that I even tried?

Your reaction decides if this love is reciprocated. If it is we create a love story like no other. And if not, I can hope failure and tragedy might do a better job of finding help than if I'm alive. A win-win for the world, just not for me...what's more loving than that?

For me personally, my love for the world would be reciprocated by freeing me from this stress and responsibility. Maybe the most revolutionary thing here is that I want nothing to do with politics or business. This is a first step in an ongoing process to remove myself from this insanely stressful situation, and it's quite elaborate.

The most genius thing I did was to create a story/fantasy/metaphor/game that lets me help without being directly involved. Two birds, one stone. If I can make it entertaining, I can earn money and raise attention. Four birds, one stone.

To put this in context, I've set up a political-economic-societal plan, but I also imagined a metaphorical story to promote knowledge the way religions promote belief. It's a modern mythology, and it's for people who can't understand the liberation of knowledge. The goals for the real world and the fantasy story are the same: a search for a more loving world. And they begin the same way: with your choice of what happens to me.

Our world is built on the same choice we make whether to help others or not. If people form this bond with me and help me survive what's coming, those bonds, the knot of love and connection form the foundation for this revolution and the loving world it would create.

Love would be the seed for everything that grows from here, so Interepublic and Democorporation are literally born and grow shaped by love. And the world gains what it's missing most, a force to fight for the best of us against the worst.

20
8
Help (lemmynsfw.com)
submitted 5 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]
 
 

So, I'm IP banned from lemmy.world? Or is this Cloudflare or something locking me out? How do I proceed?

I have wanted to leave .world for a while, probably in favour of dbzero, but I would still at least like to delete my account and/or download some data beforehand?

I don't think I did anything wrong, and believe it is a Cloudflare thing, but how will I contact the mods if I can't open their front page to find their emails? Anyway, any help is appreciated.

Also, sorry if this is the wrong community, but it's the only one I know of that might be able to help.

Edit 1: I can access the instance if I use a VPN, but I still don't know what to do. This kind of confirms it is Cloudflare, but how can I get off their "naughty list"?

Edit the last: it seems to have solved itself after some time. I just used this instance for a while, and now it's working again.

21
22
46
submitted 1 week ago* (last edited 6 days ago) by [email protected] to c/[email protected]
 
 

Strava is an absolute nightmare to use. My feed is absolutely chock full of ads and dog-walkers. Don't get me wrong, I'm very happy they're taking a 0.2 mile walk around their block and logging their progress, but I don't need to see it. Nike, TrainerRoad, Zwift, Peloton all have giant ads every time their users upload an activity. And I don't understand it because it's not an ad-supported network. Like I would happily pay to have all this shit hidden. It would be extremely simple for Strava to fix this, which would just be to provide me with a simple filter for what type of activities I'd like to see. The fact that they haven't done so, a long time ago, leads me to believe that they simply don't want to, for whatever reason. Plus they've already begun to enshittify by breaking integrations with third parties.

Are there any good options for this?

E: to be clear, I'm asking about the social aspect of Strava.

23
 
 

What client do you use to interact with lemmy (or the fediverse in general)?

24
 
 

cross-posted from: https://lemmy.world/post/28461880

So I spent the last several days making the collection of watch time on both videos and livestreams more robust, and making it work across multiple PeerTube instances. I'm sure the structure still has gaps that let janky data get in.

If you want to try it, here's the link: https://github.com/solidheron/peertube_recomendation_algorythm/ (btw, it's a browser extension).

So now I have two parts left that I know of: first, creating the user_recomendation_vector, and second, the function that gets recommendations based on that vector. I settled on a cosine similarity vector since it's easy to implement, can be run in the browser with only data collected on the user's device, and doesn't require sharing anything outside the PeerTube API. user_recomendation_vector should have two parts: AOLR (algorithm of last resort), which will be the words in the title, tags, and description, tokenized with a float value; and recomended_standard, which will be based on what category either programs or people decide a video belongs to, along with an associated float value to make it a vector.

I do have open questions: whether engagement is important, whether short videos should get a multiplier if they're completed, how much a like is worth, and how important reaching the end of a video is.

I should add that I have made a complementary video_description_vector that's stored in the browser; all its vector dimensions are 1.

25
1089
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]