LWD

joined 1 year ago
[–] [email protected] 1 points 1 hour ago

Same thing here. If I don't see a source code repository, it's an obvious no-go. Especially because I'm pretty sure there are other, similar projects that already exist in the open source space.

Accountability is an important thing for these kinds of apps. Unfortunately, if you don't have a reputation online, that's about as bad as having a bad reputation. No offense intended, it's just inevitable.

[–] [email protected] 1 points 2 hours ago

The post got removed a year later, by Reddit filters?

wtf

[–] [email protected] 1 points 2 hours ago

True. I don't know much about their software, though. They've released so much stuff over a short amount of time, I'm having a hard time keeping track

[–] [email protected] 1 points 1 day ago

Heh. OpenAI already accused them of training off their ChatGPT output. Hilarious if true (because fuck OpenAI), but impressive if false.

[–] [email protected] 1 points 1 day ago (2 children)

It's actually easier than that - the model gets loaded into open-source software that is usually made by American companies. It doesn't connect to stuff... because it can't. It's like an MP4 or JPEG file.
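A toy illustration of that point (purely illustrative - a tiny NumPy "model," not DeepSeek's actual file format): the model is just an inert file of numbers that local software loads and evaluates, with no networking code anywhere.

```python
import numpy as np

# "Train" a tiny linear model and save its weights to disk -
# the way a real model ships as one big file of numbers.
weights = np.array([[2.0, -1.0],
                    [0.5,  3.0]])
np.save("toy_model.npy", weights)

# Running the model is pure local computation:
# read the file, multiply matrices. No sockets, no telemetry.
loaded = np.load("toy_model.npy")

def run_model(x):
    return loaded @ x

print(run_model(np.array([1.0, 1.0])))  # -> [1.  3.5]
```

The weights file can no more "phone home" than a JPEG can; any network activity comes from the app wrapped around it.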

[–] [email protected] 42 points 1 day ago (2 children)

TDI (they deserved it) unless Feddit admins pop in with some extravagant response

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago)

From my own fractured understanding, this is indeed true, but the "DeepSeek" everybody is excited about, which performs as well as OpenAI's best products but faster, is a prebuilt flagship model called R1. (Benchmarks here.)

The training data will never see the light of day. It would be an archive of every ebook under the sun, every scraped website, just copyright infringement as far as the eye can see. That would be the source they would have to release to be open source, and I doubt they would.

But DeepSeek does have the code for "distilling" other companies' more complex models into something smaller and faster (and a bit worse) - but, of course, the input models are themselves not open source, because those models (like Facebook's restrictive Llama model) were also trained on stolen data. (I've downloaded a couple of these distillations just to mess around with them. It feels like having a dumber, slower ChatGPT in a terminal.)
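The distillation idea in that paragraph can be sketched in a few lines (a generic toy version with NumPy - the teacher, student, and training loop here are all made up for illustration, not DeepSeek's actual pipeline): a small "student" model is trained to imitate a bigger "teacher" model's softened output probabilities, ending up smaller, faster, and a bit worse.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Pretend "teacher": a fixed two-layer network (the big, complex model).
w1 = rng.normal(size=(10, 32)) * 0.3
w2 = rng.normal(size=(32, 4))

def teacher(x):
    return np.tanh(x @ w1) @ w2

# "Student": a plain linear model - far fewer parameters.
student = np.zeros((10, 4))

# Distillation: fit the student to the teacher's temperature-softened
# output distribution, not to any original training labels.
x = rng.normal(size=(512, 10))
soft_targets = softmax(teacher(x), temperature=2.0)
lr = 0.5
for _ in range(300):
    probs = softmax(x @ student, temperature=2.0)
    # Gradient of cross-entropy on soft targets w.r.t. the logits.
    student -= lr * x.T @ (probs - soft_targets) / len(x)

# The student now mostly agrees with the teacher on fresh inputs.
x_test = rng.normal(size=(500, 10))
agreement = np.mean(teacher(x_test).argmax(1) == (x_test @ student).argmax(1))
print(f"student agrees with teacher on {agreement:.0%} of inputs")
```

The student never sees the teacher's training data - only its outputs - which is exactly why distilling someone else's model doesn't make the result meaningfully "open."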

Theoretically, you could train a model using DeepSeek's open source code and ethically sourced input data, but that would be quite the task. Most people just add an extra layer of training data and call it a day. Here's one such example (I hate it.) I can't even imagine how much data you would have to create yourself in order to train one of these things from scratch. George RR Martin himself probably couldn't train an AI to speak in a comprehensible manner by feeding it his life's work.

[–] [email protected] 7 points 2 days ago (8 children)

You (and Ed, who I very much respect) are correct: DeepSeek software is open source*. But from the jump, their app and official server instance were plagued with security holes - most likely accidental ones, since they were harmful to DeepSeek itself! - and naturally their app sends data to China because China is where the app is from.

I do find it pretty funny that they were also sending data to servers in the US though. This isn't a China issue, it's a privacy/design issue, and even after they resolve the security holes they still receive your data. Same as OpenAI, same as every other AI company.

* DeepSeek releases genuinely open source code for everything except its models, which already exceeds industry standards. The models can be downloaded and used without restriction, and this is considered "Open" according to the OSI, but most other people would say it's not. I don't think it's open either. But again, they have gone above and beyond industry standards, and that is why "Open"AI is angry at them.

[–] [email protected] 1 points 2 days ago

Anonym Private Audiences is currently in closed beta, supporting early-use cases where privacy matters most.

Wow that really is private! So private we can't even see what it's up to.

Differential "privacy," based on what I've learned, seems to be a joke. The only thing it does effectively is hide the fact you've disabled it, if you choose to disable it. But if other people disable it, it becomes easier to identify you. The best move is to not participate, which should encourage other people to also not participate...

And if you're one of the unlucky few people still using it, its developers basically need to choose where on a sliding scale from "anonymous" to "useful" they want to start collecting your data. And they have every incentive to push towards "useful" and away from "anonymous."
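That sliding scale is literally a single parameter in differential privacy: epsilon. A generic sketch of the Laplace mechanism (nothing to do with Anonym's actual system - the query and numbers are invented): small epsilon means heavy noise (anonymous but useless), large epsilon means light noise (useful but identifying), and the developer picks the value.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise of scale sensitivity/epsilon.
    Smaller epsilon -> more noise -> more privacy, less utility."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# A count query over a dataset: "how many users clicked the ad?"
true_count = 1000   # one person joining or leaving changes this by at most 1,
sensitivity = 1.0   # so the query's sensitivity is 1

for epsilon in (0.01, 0.1, 1.0, 10.0):
    noisy = [laplace_mechanism(true_count, sensitivity, epsilon)
             for _ in range(1000)]
    avg_error = np.mean([abs(n - true_count) for n in noisy])
    print(f"epsilon={epsilon:>5}: average error ~ {avg_error:.1f}")
```

Nothing in the math stops a company from quietly cranking epsilon up until the "private" data is useful again - which is the whole problem.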

It operates separately, and is not integrated with, our flagship Firefox browser.

Doubt...

[–] [email protected] 2 points 3 days ago

Well, they do promise it's secure! Which means... Nothing, absolutely nothing in terms of privacy.

[–] [email protected] 2 points 3 days ago

You wouldn't even guess this would be on the Keet homepage, but the developers can't help themselves. They just see dollar signs.

As your app grows, Holepunch lets you evolve into a business without compromises. With Bitcoin Lightning and USDt micropayments built-in, it's easy to implement and use powerful paid features in apps. Peers control their own data, including how it’s bought and sold.

"Peers control their own data"

I really hate how "sovereignty" has become a dogwhistle for "sell your data to us." And they make it as easy as possible to sell yourself out, irreversibly, for mere pennies. Maybe that's the fantasy: since "code is law" in Cryptoland, get somebody to sign over their identity with code.

 

This article is in German. Link found in a popular, censored r/privacy Reddit post, a common occurrence.

Machine-translated article below:

Switzerland has an international reputation as a safe haven for data – outside the EU, with political stability and a modernized data protection law. But this reputation is deceptive once you take a closer look at the Intelligence Act (NDG). Since 2017 it has allowed the Federal Intelligence Service (NDB) far-reaching interventions: cable reconnaissance, state trojans, data retention, and exchanges with foreign secret services are all possible – sometimes even without concrete suspicion. Particularly explosive: in the run-up to the 2016 vote, the Federal Council assured that no nationwide surveillance was planned and that only data traffic abroad would be affected. In fact, it later became known that domestic traffic is also recorded. Terms such as »filtering« or »monitoring« have never been clearly defined politically – a breeding ground for opacity and loss of trust.

Approval and control mechanisms exist, but their effectiveness is limited. Legally legitimized access to large amounts of data raises serious questions: How much surveillance can a democracy take? Where does security end, where does control begin? And what does this mean for companies that advertise their services based in Switzerland as particularly safe?

Popular Swiss providers like Threema or ProtonVPN are also fundamentally subject to Swiss law – and thus to the NDG. This means that in certain cases, state access can be legally possible here too. Both companies advertise end-to-end encryption or a no-log policy, but technical security alone does not protect against legal access powers. Trust is good – but a critical look at the legal framework remains essential.

Yes, Swiss laws also allow official access to existing data. Switzerland is not a data protection paradise – even if it is often portrayed or advertised that way. At first glance the location seems trustworthy, but the NDG allows extensive, sometimes suspicionless monitoring. The reality of government access options contrasts sharply with the image many providers and users paint. Those hoping for real digital sovereignty should not be blinded by the myth of the safe Swiss data haven.

At the same time, things look no better in many other countries – often significantly worse. In the United States, for example, laws like the Patriot Act, the CLOUD Act, or FISA §702 (here is an overview) allow extensive access to data, including from providers operating outside the USA. The United Kingdom and France also have legal bases for suspicionless mass surveillance.

Germany does a little better in comparison – above all thanks to the anchoring of fundamental rights in the Basic Law, the independent case law of the Federal Constitutional Court, and a lively public debate about data protection. But here, too, not everything is rosy: the use of state trojans (Quellen-TKÜ), the often opaque cooperation between secret services, and the recurring political pressure for data retention (which has repeatedly failed in court) show that fundamental rights are under constant pressure in Germany as well. Nowhere is there absolute security – but how transparently and critically a society deals with surveillance makes the decisive difference.

 

Found on Reddit's r/privacy, where either moderators or Automod have pulled the plug on it.

 

Redact is a relatively popular tool for cleaning up people's post or message history on platforms like Slack or Discord. Recently I found out about some questionable statements made by Dan Saltman, better known as Redact's creator.

Most recent behavior

From two censored r/privacy posts, where we find the CEO pretending to know which tweets a customer deleted

The Redact dev recently recontextualized tweets of the streamer Hasan, but then walked it back, stating Hasan wasn't a customer like the first tweet implied. I didn't see that before, and the OP really concerned me. I don't know if I could trust them enough to recommend them - have they been trustworthy in the past? And are there any alternatives that just work?

3 months ago

From this r/privacy comment

I don't trust that platform or the guy who runs it, Dan Saltman. He recently had multiple public meltdowns. At one point, he threatened to dox Twitch employees until he could get the CEO's attention. Then he doxxed someone's name and location on a public stream, and posted a picture of them as a minor.

4 months ago

From this r/privacy post

In what appears to be a now-deleted stream, Saltman threatens to dox people multiple times. He mentions Dan Clancy, the CEO of Twitch, and threatens to dox Clancy's employees.

Did you know that they hide, by the way? Because I have a list of all the employees in Trust and Safety, and half of them hide. Sometimes... there are people... and you can't get to them. no matter what level of insane targeting you do to them. Then you have to start going to the people that they care about, and then they start caring. but I'm guessing that Dan Clancy will care if his employees that are involved with trust and safety start getting named for being antisemitic people... they are responsible. I will set up a fucking website for every single one of these motherfuckers. And that's how you make change... you make change by making the person feel the pressure of what they've done. Not the company, but the man. That's how you make change. That's how we will make change.

He also seems to have threatened doxxing if they delete messages in a particular Slack channel (one he wasn't a part of).

This guy in red. I'm not going to identify him by name. and again, if anything happens to that Slack [chat], I will identify people.

This is especially notable because Slack is one of the services Saltman's app supports.

Based on this behavior, I feel very uncomfortable using or recommending Redact.

 

(This article should be fully accessible if you have a free account. Otherwise, https://archive.is/AM0Th)

 

Your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize....

 

Found through, and title from, Nullagent. The thread is definitely worth checking out.

https://partyon.xyz/@nullagent/114332265416001848

 

Proton CEO Andy Yen gave a surprisingly sharp interview to the Swiss magazine "watson" (source in German: https://www.watson.ch/digital/wirtschaft/517198902-proton-schweiz-chef-andy-yen-zum-ausbau-der-staatlichen-ueberwachung). He warned that Proton might leave Switzerland if new surveillance laws are passed, which aligns with the company’s strong pro-privacy stance. So far, nothing unexpected.

However, Yen’s remarks about Swiss officials - describing them as lifelong bureaucrats, all lazy and incompetent - came across as arrogant and out of place, almost like something you’d expect from a capitalism-praising Trump supporter. He was also quoted in the interview saying that the US works better (so are they considering moving there?).

The interview left me speechless, and I’m certain I won’t be considering Proton for any of my future projects.

Source

 

Mod Carrotcypher links to their own personal blog and then pins their own post.
carrotcypher linking to themself

Opsec101's homepage, made by Carrotcypher:
the opsec101 homepage

And here's a look at rule 3!
rule title: "DON'T ENGAGE IN SELF-PROMOTION."

Call me picky, but I think the moderators should follow the rules they write and enforce.

 

Found as yet another censored r/privacy post!

Understand that the decision to fire the chief and his deputy may in fact be the most dangerous decision Trump has made so far. Timothy Haugh, like his last two predecessors, was restricting the access and control that Peter Thiel had, through his company Palantir, over the CIA/NSA to commit domestic surveillance. Palantir is the second-biggest defense contractor for the CIA/NSA, along with providing day-to-day operations for both agencies. The goal of Palantir is, and always has been, domestic surveillance. Palantir is an intelligence corporation which provides advanced analysis, SIGINT, OSINT, criminal and threat awareness, and kill-chain efficiencies to all levels of US, UK, and corporate agencies.

(comment source)
