unbanshee

joined 7 months ago
[–] [email protected] 2 points 6 days ago

I switched a few months ago, and I've honestly been so impressed with how far Blender has come since the last time I tried it (more than 10 years ago, probably).

I don't work in creative industry anymore and I haven't had a ton of time to noodle around and actually try out the tools I've seen demo'd, but it was mindblowing discovering how many different software suites I had used to do stuff that Blender has been incorporating into their one package.

Maya? Obviously does most of that. ZBrush? Yep, pretty comparable. Marvelous Designer? Holy shit, yep. ToonBoom? Also that.

By far the worst part has just been trying to retrain hotkey muscle memory and learn minor (but fundamental) differences, and that's not as small a thing as a lot of people make it out to be - it does add a lot of cognitive noise and you really can't just hop in and flow right from the get go (depending on what you're doing).

Absolutely worth it to get away from Adobe though, and not having to bounce between programs while working on a model is very, very pleasant.

36
Carney to defund Canada (noraloreto.substack.com)
[–] [email protected] 1 point 1 week ago

The challenge is convincing c-suite to greenlight the work.

[–] [email protected] 5 points 2 weeks ago

Don't worry, I'm pretty sure that appeasement has historically gone really well.

[–] [email protected] 4 points 2 weeks ago (4 children)

Mine's a cabinet minister 🤢

Like I'm emailing anyway but it's gonna be swiftly ignored.

[–] [email protected] 5 points 3 weeks ago

Hell yes, I love my buddies.

[–] [email protected] 4 points 1 month ago (1 children)

I know this is an anarchist instance. It's part of the reason I assumed that anti-capitalism would be a given and I didn't need to bang the drum about it before stating my arguments. I am anti-capitalist.

It seems like your faith is much higher than mine that people are vetting the AI tools they use, or that they exclusively use their own works as training material.

From what I can tell, our stable diffusion art communities make no distinction between training sets, nor do they require that shared images be trained on public-domain or user-owned data only. Given that, I don't think it's completely unreasonable that people are equating stable diffusion users with users generating their content on the big models that were indiscriminately fed the entire internet. There's no way to easily tell.

And outside of capitalism and industry, there are interesting philosophical discussions that need to be had around generative AI that I don't see enough. Here are a few of the topics I think need to be examined more, both by human society at large, and by AI-art communities especially:

  • What does "good artists borrow, great artists steal" mean when the artist in question is modulating their output by inhuman means - parsing millions of images in ways that are a physical impossibility for a person? I think that's worth interrogating.

  • What say do living artists get in who uses their work in training sets, and how should that be respected? Is ignorance of publicly-stated wishes an acceptable excuse? How should this be moderated?

  • How do we assign value (cultural, economic, personal, sentimental, or any other) to creative works? I think arguably that both human-created and generative AI art are the product of thousands of years of human creative output, but they're vastly different in terms of the skill, types of knowledge, and time required to create one piece.

And it worries me that a lot of people seem pretty inclined to dismiss criticism of AI use as frivolous or reactionary, or couch it as a base refusal to adapt or learn new technologies. Especially when the people driving policy around the largest implementations of that technology are the ones who are the least principled in its deployment.

I know that this is a small community. I know that the proportion of people here who use custom stable diffusion models is almost definitely much higher than many other forums on the internet.

But I worry that if we don't have this kind of discussion here, where people are (maybe, optimistically/flatteringly) more judicious in their use of AI than elsewhere - if we don't have clear, principled guidelines, then the prevailing attitudes are ultimately going to wind up being those of Microsoft, Google, OpenAI, or fucking Grok.

For now though, unless I know that someone is using models trained on their own work, or at least public-domain works, I feel like I'm crossing a picket line, and I don't like that.

[–] [email protected] 12 points 1 month ago (8 children)

Sorry, what exactly do I need to tone down?

Pretty sure this is the first time I've ever commented on the issue here or elsewhere on Lemmy.

I see anti-AI sentiment all over the fediverse, but nothing in the original post that would indicate that these users are exclusively targeting db0 communities, just that the admins here have chosen to address it; and I agree it's a good way to handle the situation.

I think there are good and valuable use cases for AI, including generative AI.

But I also think a lot of the costs are hidden because the tools are free and easy to access, and because those costs are often so abstract and wide-ranging as to be difficult to observe, quantify, and attribute to an emerging technology. So I think there are a lot of really valid reasons to question casual use of those tools, because they do not exist outside of capitalism.

The point of my earlier post wasn't meant to be that all use of AI is bad or that somebody using it to make a meme or art of their big-titty anime waifu is directly putting artists out of work, but I also don't think that those things are entirely separable, either.

And since I was replying to a user whose comment made a blanket claim implying that casual use of generative AI is trivial, well... no, I don't think it is.

I've done all sorts of art in my life. Sometimes as a job. And it's personally pretty disheartening to see comments like "it just looks like AI, human-made art doesn't look like that" because yes, it sometimes does, even if the poster has never seen human-made art like that.

But I've also spent the last few years watching dozens of friends and former coworkers lose their careers and their livelihoods en masse for no reason other than naked greed.

I think that making art more accessible through AI can be a really cool and pretty liberating thing for a lot of people, but as it's being employed by the big corporate players, it does have big serious negative externalities for working artists and for cultural products writ large, and I think that's worth bringing up.

[–] [email protected] 18 points 1 month ago (15 children)

I mean...

I can imagine how artists struggling to make ends meet might be angry that work they'd spent years learning and honing their skills to produce was and is being crawled by tools made by a bunch of silver-spoon-chomping techbros who are marketing their products to businesses that employ artists as a way to employ fewer artists, and pay peanuts to those they do hire to wrangle prompts and fix AI mistakes instead of actually getting to make art.

And I can imagine how frustrating it is to see people minimize that struggle when it often benefits oligarchs and C-suite ghouls.

[–] [email protected] 6 points 1 month ago

The Migrant Rights Network is currently hosting a petition drive against this bill, you can find it here: https://migrantrights.ca/equalitynotexclusion/

[–] [email protected] 2 points 1 month ago

I took two years of Norwegian in university, and in my first-ever class, the prof, a lovely woman originally from Sweden, brought us cookies.

One girl didn't make it to the second class because she could literally not say 'småkaker' without bursting into laughter.

 

Private clinics in Canada are selling access to personal health data without patients’ knowledge, according to a new study that says clients in the pharmaceutical industry are paying millions for this information.

“This is not how patients want their data to be used,” lead author Dr. Sheryl Spithoff told CTV’s Your Morning on Monday. “Patients are generally fine with sharing their data if it’s going to be used for research and health system improvement... but they’re very reluctant to have their data shared or held with for-profit companies.”

Spithoff is a family physician and scientist at Women’s College Hospital in Toronto, and an assistant professor at the University of Toronto. Published in the journal JAMA Network Open early this month, the new study focused on two unnamed health data companies that each had access to between one and two million patient records.

“The entities involved in the primary care medical record industry in Canada—chains of for-profit primary care clinics, physicians, commercial data brokers, and pharmaceutical companies—work together to convert patient medical records into commercial assets,” the study explains. “These assets are largely used to further the interests of the pharmaceutical companies.”

Spithoff’s research uncovered two models for how patient data is sold. In one, private clinics sell health information to a third-party commercial data broker that removes personal information before running analytics for the pharmaceutical industry. In another, private clinics are actually owned by a health data company that uses patient information to develop algorithms for pharmaceutical companies in order to identify and target patients with drug interventions.

In both cases, data is typically used without patients’ knowledge or consent.

“According to a data broker employee, no one sought consent from patients to access and use their records,” the study claims. “Instead, companies appeared to seek out physician consent to access patient records.”

Such practices, the study adds, “could potentially generate hundreds of millions of dollars in revenue.”

Spithoff says the study identified a number of risks with the monetization and sharing of patient data.

“One is that this is likely to give the pharmaceutical industry increased control over medical practices, so we’re likely to see more of a focus on expensive new on-patent drugs,” Spithoff said. “We’re also very concerned about how the data are being used—anytime data are being shared, there are privacy risks to patients.”

A physician interviewed for the study told researchers that patient data is “snatched away.”

“It’s the patient’s data, but how is it that these companies can even own the data?” the physician said, according to the study. “I don’t see how it should even be legal to provide this information.”

[–] [email protected] 5 points 1 month ago (2 children)

BC has strata corporations, which are kiiiiind of the same thing based on my shitty surface-level understanding of HOAs.

And basically all multi-family housing has a strata.

 

I'd make a crack about how well it's going down south, but the destruction is the point.
