Yeah, but legally, is this even a valid argument? Sure, technically there’s probably something like 0.0001% of the average person being used in any given result of an AI-generated image. I don’t think that gives anyone explicit rights to that portion, however.
That’s like arguing that a photographer who happened to capture you in a random public photo that became super famous now owes you royalties for being in that image, even though you are literally just a random fucking person.
You can argue about consent all you want, but at the end of the day, if you’re posting images of yourself online, you are consenting to other people looking at them at a minimum, and arguably implicitly consenting to other people being able to use those images, because you can’t stop people from doing that, short of copyright, which isn’t very strict in most cases.
And now, being used to generate depictions of rape and CSAM.
I don’t see how this is even relevant unless the person in question is a minor, a victim, or becoming a victim. Otherwise it’s no different than me editing an image of someone to make it look like they got shot in the face. Is that shitty? Sure. But I don’t know of any laws that prevent you from doing that, unless it’s explicitly tied to something like blackmail, extortion, or harassment.
The fundamental problem here is that you’re in an extremely uphill position to even begin the argument of “well, it’s trained on people, so therefore it uses the likeness of those people.”
Does a facial structure recognition model use the likeness of other people? Even though it can detect any person that meets the requirements established by its training data? There is no suitable method to even begin to break down where a person’s likeness begins and where it ends. It’s simply an impossible task.
Yeah, but legally, is this even a valid argument?
Personally, “legal” is only what the law allows the wealthy to do, and the punishments it provides for the working class.
Morally, that’s what you’re doing when you use AI to generate CSAM. It’s the same reason we ban all pre-existing CSAM as well: you are victimizing the person every single time.
I don’t see how this is even relevant unless the person in question is a minor, a victim, or becoming a victim.
It makes them a victim.
But I don’t know of any laws that prevent you from doing that, unless it’s explicitly tied to something like blackmail, extortion, or harassment.
The law exists to protect the ruling class while not binding them, and to bind the working class without protecting them.
Does a facial structure recognition model use the likeness of other people?
Yes.
Even though it can detect any person that meets the requirements established by its training data? There is no suitable method to even begin to break down where a person’s likeness begins and where it ends. It’s simply an impossible task.
Exactly. So, without consent, it shouldn’t be used. Periodt.
Personally, “legal” is only what the law allows the wealthy to do, and the punishments it provides for the working class.
If you have schizophrenia, sure. Legal is what the law defines as okay. Whether or not people actually get charged for it is another thing. The question is “do you have the legal right to do it or not?”
Morally, that’s what you’re doing when you use AI to generate CSAM. It’s the same reason we ban all pre-existing CSAM as well: you are victimizing the person every single time.
Legally, the reasoning behind this is that it’s just extremely illegal; there are almost no circumstances, if not zero, where it would be okay or reasonable, and so the moral framework tends to be developed around that. I don’t necessarily agree that it’s always victimization, because there are select instances where it just doesn’t really make sense to consider it that; there are specific acts you can commit that would make it victimization. However, I subscribe to the philosophy that it is “abusive” material and therefore innately wrong, like blackmail. I find that framing to be a bit stricter and more conducive to that sort of definition.
It makes them a victim.
At one point in time, yes; perpetually, in some capacity, they will exist as having been a victim, or as having been victimized at one point. I also don’t really consider it healthy or productive to engage in a “once a victim, always a victim” mentality, because I think it sets a questionable precedent for mental health care. Semantically, someone who was a victim once is still a victim of that specific event, but it’s a temporally bounded victimization. I just think people are getting a little loose with the usage of that word lately.
I’m still not sure how it makes that person a victim unless it meets one of the criteria I laid out, in which case it very explicitly becomes an abusive work. Otherwise it’s questionable how you would even attribute victimization to the victim in question, because there is no explicit victim to even consider. I guess you could consider everybody even remotely, tangentially relevant to be a victim, but that opens a massive black hole of logical reasoning that can’t trivially be closed.
To propose a hypothetical: let’s say there is a person we will call Bob. Bob has created a depiction of “abuse” so horrendous that even laying your eyes upon the work will forever ruin you. We will define the work in question as a piece of art depicting no person in particular, arguably barely resembling a person at all; the specifics are left to the reader. You could hypothetically argue in this instance that even viewing the work is capable of making someone a “victim” of it, however you want to work that one out.
The problem here is that Bob hasn’t created this work in complete isolation, because he’s just a person: he interacts with people, has a family, friends, acquaintances. He’s a normal person, aside from the horrors beyond human comprehension he has created. Therefore, in some capacity, the people in his life must have influenced the work he put into that piece. Are the people who know or knew Bob victims of this work as well, regardless of whether they have seen it? Does the very act of being socially connected to Bob make them a victim of the work? For the purposes of the hypothetical, we’ll assume they haven’t seen it, and that he has only shown it to people he doesn’t personally know.
I would argue, and I think most people would agree with me, that there is no explicit tie between the work Bob has created and the people he knows personally. Therefore, it would be a stretch to argue that those people, merely because they were tangentially relevant to Bob, are now victims, even though they have not been affected by it. Could it influence them in some external way, possibly causing some sort of external reaction? Yeah, but that’s a different story. We’re not worried about that.
This is essentially the problem we have with AI. There is no explicit resemblance to any given person (unless one is deliberately specified, which I have already explicitly excluded), nor to anyone the image was inherently based on via training (which I have also, somewhat explicitly, excluded as well). There are two fundamental questions here that need to be answered. First, how are these people being victimized? By posting images publicly on the internet? It seems like they consented to people at least being aware of them, if not to some degree manipulating images of them, because there is nothing to stop that from happening (as everyone already knows from NFTs). Second, how are we defining these victims? What mechanism do we use to determine the identity of these people? Otherwise we’re just schizophrenically handwaving the term around, calling people victims when we have no explicit way of determining that. You cannot begin to call someone a victim if it isn’t even known whether they were victimized or not. You’re setting an impossible precedent here.
Even if you can answer those two questions in a decidedly explicit manner, it’s still questionable whether that would even matter, because now you would have to demonstrate some form of explicit victimization and damage resulting from that victimization. Otherwise you’re just making the argument of “it’s mine because I said so.”
The law exists to protect the ruling class while not binding them, and to bind the working class without protecting them.
Again, if you’re schizo, sure.
Yes.
On a loosely defined basis, yeah, in some capacity it uses the likeness of that person, but to what degree? How significantly? If the woman in the Mona Lisa were 4% some lady the artist saw three times a week because of his habits and routine, would that suddenly entitle her to some part of that piece in particular? What about the rest of it? You’re running down an endless corridor of unfalsifiable and falsifiable statements. There is no clear answer here.
Exactly. So, without consent, it shouldn’t be used. Periodt.
You need to explicitly define “consent” and “use,” because without defining those, it’s literally impossible to even begin determining the end position here.