lily33

joined 2 years ago
[–] [email protected] 4 points 3 days ago* (last edited 3 days ago) (2 children)

Any accessibility service will also see the "hidden links", and while a blind person with a screen reader will notice if they wander off into generated pages, it will still waste their time. Especially if they don't know about such a "feature", they'll be very confused.

Also, I don't know about you, but I absolutely have a use for crawling X, Google maps, Reddit, YouTube, and getting information from there without interacting with the service myself.

[–] [email protected] 5 points 3 days ago (1 children)

I would love to think so. But the word "verified" suggests more.

[–] [email protected] 10 points 3 days ago

That makes me think: perhaps you could set it to exec("stuff"), or to True...

[–] [email protected] 12 points 3 days ago (8 children)

while allowing legitimate users and verified crawlers to browse normally.

What is a "verified crawler" though? What I worry about is, is it only big companies like Google that are allowed to have them now?

[–] [email protected] 11 points 1 week ago (3 children)

I agree that it's difficult to enforce such a requirement on individuals. That said, I don't agree that nobody cares for the content they post. If they have "something cool they made with AI generation" - then it's not a big deal to have to mark it as AI-generated.

[–] [email protected] 14 points 3 weeks ago* (last edited 3 weeks ago)

Finally, we don't have to worry about spelling anymore; it's all written the way it's read.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

An intelligence service monitors social media. They may as well have said, "The sky is blue."

More interesting is,

Sharing as a force multiplier

-- OpenAI

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (1 children)

Do you know of a provider that is actually private? The few privacy policies I checked all had something like "We may keep some of your data for some time for anti-abuse or other reasons"...

[–] [email protected] 10 points 1 month ago

Too bad that's based on macros. A full preprocessor could require that all keywords and names in each scope form a prefix code, and then allow us to freely concatenate them.
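The prefix-code idea can be sketched out. If no name in scope is a prefix of another name, then at each position in a concatenated string at most one name can match, so splitting is deterministic without any separator characters. A minimal, hypothetical illustration (the vocabulary and function name are invented for the example):

```python
def split_tokens(text, vocab):
    """Split a concatenated string into tokens, assuming `vocab`
    forms a prefix code (no token is a prefix of another), so at
    each position at most one token can match."""
    out = []
    i = 0
    while i < len(text):
        for tok in vocab:
            if text.startswith(tok, i):
                out.append(tok)
                i += len(tok)
                break
        else:
            raise ValueError(f"cannot tokenize at position {i}")
    return out

# No element of this vocabulary is a prefix of another,
# so the concatenation decodes unambiguously:
vocab = {"if", "for", "xs", "do"}
print(split_tokens("ifxsdofor", vocab))  # ['if', 'xs', 'do', 'for']
```

This is the same property that makes Huffman codes uniquely decodable; a preprocessor enforcing it on identifiers could likewise drop the whitespace between them.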

[–] [email protected] 7 points 1 month ago (2 children)

Aren't USAID grants public?

[–] [email protected] 15 points 1 month ago

Yes, OpenAI wishes everyone else had to have authorization to do model training...

Fortunately, their ToS don't matter all that much; it's easy to use their models through a third party without ever touching them.

 

This is a meta-question about the community - but seeing how many posts here are made by L4sBot, I think it's important to know how it chooses the articles to post.

I've tried to find information about it, but I couldn't find much.

 

I'm not a lawyer, but my understanding of a license is that it gives me permission to use/distribute something that's otherwise legally protected. For instance, software code is protected by copyright, and FOSS licenses give me the right to distribute it under some conditions.

However, LLMs are produced by a computer and aren't covered by copyright. So I was hoping someone with a better understanding of the law could answer some questions for me:

  1. Is there some legal framework that protects AI models, so that I'd need a license to distribute them? How about using them, since many licenses restrict use as well?

  2. If the answer to the above is no: By mentioning, following and normalizing LLM licenses, are we essentially helping establish the principle that we do need permission from companies to use their models, and that they have the right to restrict us?
