There may be some combination of this and political partisanship going on. This isn't the only thread where one moderator is suppressing criticism of big tech and big government. I might need to take advantage of that community for recording some stuff; thank you for pointing it out.
LWD
I have many negative things to say about AI (and China) but like I stated earlier, this is in no way unique to DeepSeek.
Hmm. FOSStodon team:
The moderators are the unsung heroes of Fosstodon. They’re the people who work every single report we receive, and take appropriate action to keep Fosstodon a friendly and inclusive place for all our members.
CarrotCypher
Role: Moderator
MODERATOR OF
r/privacy
r/Pareidolia
r/opensource
r/OSINT
r/tails
… and 51 more ⇒
r/privacy moderators also censored this post with the same reason:
IRS nears deal with ICE to share addresses of suspected undocumented immigrants
Really makes you think.
Another deleted comment
Note: it seems you are not allowed on this subreddit to express an opinion containing doubt about the security of WhatsApp - it will be removed by the mods. As such, you cannot read the replies here and form a judgement about what the consensus is.
carrotcypher (mod) 1 point 3 days, 1 hour ago
Or, you know, obvious astroturfing as an excuse to promote alternatives is against the rules.
OP's post history is illuminating.
On this particular article, "DeepSeek can be used to create malware" is unsurprising. Know what else can be used to create malware? Microsoft Visual Studio. Too complicated? Forums on the internet. Sam Altman's OpenAI, which they allege was used to train DeepSeek.
This isn't a breach. Nothing is getting breached here. A more honest title might be "I can use DeepSeek to help me code malware!" but this is not surprising, novel or unique to DeepSeek. See: OpenAI above. Also see: all the ways people have gotten OpenAI to simply tell them how to commit crimes with the right phrasing.
Between the papers, the source code, and the fully downloadable models that punch well above their weight class, it's the closest thing to actual open-source AI we've seen so far.
I would say the models don't really count as open source, but Facebook and OpenAI made their own perverted definition of "open source", so while this technically meets that standard, I'm mostly impressed that it exceeds it.
Vijil is some shitty AI auditing startup that appears to have only "reviewed" this one product.
So when it says Deepseek is bad, the answer to "compared to what?" is "lol, idk, hopefully not the things my customers want to use."
And when it says Deepseek isn't private, it has absolutely nothing to do with a fully offline model, but just the responses to some of its synthetic tests.
These benchmarks are, effectively, useless.
Pebble was from a time when enshittification wasn't as terrible as it is today, and died (post acquisition) before it could really be implemented in its products. Eric Migicovsky is an odd duck in that regard. Between this and Beeper, privacy has always been "not great, not malicious (yet)", and before enshittification could set in under his watch, the company got bought out by a bigger one with a truly lousy CEO.
Under his watch. Heh.
Pebble was possibly one of the last great tech innovations before AI, with its desperate attempt to sell our stolen data back to us in a thoroughly butchered format. Which means it pains me to read:
Upgrades to the hardware will include a speaker alongside the microphone, which Migicovsky teases will be used for talking with AI assistants (ChatGPT being one example).
Personal home labs might be able to go much further with this, I hope.
Considering how popular this product originally was with hackers and open source enthusiasts, I really hope the hardware has as much longevity as its predecessor. And considering that was closed source and still got so much mileage, I have a feeling this one will be better simply by virtue of how open source works.
I did upload a screenshot with the link, but I guess it's inaccessible... Here it is in full resolution.