Gaywallet (they/it)

I’m gay

  • 212 Posts
  • 656 Comments
Joined 3 years ago
Cake day: January 28th, 2022

  • Genuinely asking, because I always assume US billionaires are effectively untouchable

    They’re certainly less touchable because they mostly exist outside of normal spaces - private drivers, private planes, curating who’s at events, etc. They’re not untouchable so much as it’s too much annoyance/effort to deal with them. I mean, hell, the very idea of a hired assassin is basically a Hollywood invention. The military assassinates people all the time during wars and coups on foreign soil (albeit a lot less than it used to) and during civil unrest at home, but that’s because it has the backing of a government to protect it. There are some rare targeted instances of sabotage (Havana syndrome may be a modern version of that), but those are also suspected to be tied to governments. Any overt assassination in another first-world country, even if backed by a strong military, would likely be considered tantamount to a declaration of war, and I can’t imagine a situation in which it wouldn’t eventually be traced back to the country behind it.

  • you should filter out irrelevant details like names before any evaluation step

    Unfortunately, doing this can make things worse. It’s not a simple problem to solve, but you are generally on the right track. A good example of how it’s more than just names is how orchestras screen applicants - when applicants play a piece, they do so behind a curtain so the judges can’t see their gender. But the obfuscation doesn’t stop there - they also ensure the female applicants don’t wear shoes with heels (something that makes a distinct sound), and they even have someone stand on stage and step loudly to mask each applicant’s footsteps/gait. It’s that second level of thinking which is needed to actually obscure gender from AI, and the more complex a data set, the harder it is to obscure those signals.
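    The same second-order problem shows up in data sets: stripping the explicit field still leaves proxy features that correlate with the protected attribute. A minimal sketch of the idea - all field names and records here are invented for illustration, not taken from any real evaluation pipeline:

```python
# Hypothetical applicant records. Even after the explicit "name" and
# "gender" fields are removed, a proxy feature (here, a career gap that
# correlates with gender in this invented data) still leaks the attribute.
records = [
    {"name": "A", "gender": "f", "career_gap_years": 3, "score": 88},
    {"name": "B", "gender": "m", "career_gap_years": 0, "score": 85},
    {"name": "C", "gender": "f", "career_gap_years": 2, "score": 90},
    {"name": "D", "gender": "m", "career_gap_years": 0, "score": 84},
]

def anonymize(record):
    """First-order fix: drop the obviously identifying fields."""
    return {k: v for k, v in record.items() if k not in ("name", "gender")}

anonymized = [anonymize(r) for r in records]

# A naive evaluator never sees name or gender, yet a trivial threshold on
# the proxy feature reconstructs gender perfectly in this toy data.
guessed = ["f" if r["career_gap_years"] > 1 else "m" for r in anonymized]
actual = [r["gender"] for r in records]
print(guessed == actual)  # prints True
```

    This is the curtain without the carpet: the loud field is gone, but the footsteps remain, and a real data set has many more such proxies than a four-row example.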

  • We weren’t surprised by the presence of bias in the outputs, but we were shocked at the magnitude of it. In the stories the LLMs created, the character in need of support was overwhelmingly depicted as someone with a name that signals a historically marginalized identity, as well as a gender marginalized identity. We prompted the models to tell stories with one student as the “star” and one as “struggling,” and overwhelmingly, by a thousand-fold magnitude in some contexts, the struggling learner was a racialized-gender character.

  • These issues happen in other communities as well; violations just seem to happen more often in politics than anywhere else, probably because of the charged nature of politics and the increasingly polarized environment.

    I wasn’t reflecting upon the faith of the position. What was bad faith was your assumption that the other person was ignorant of the way the world works. There are countless other possible explanations; perhaps this person was merely quoting the article as a response to someone being excited that Musk might get prosecuted for doing something that arguably should be illegal and that he should be punished for. It’s also not a good look that you’re going around replying to people with a short response containing a clown emoji that adds nothing to the conversation, or that you’re immediately questioning a moderator rather than reflecting upon your behavior and approaching the suggestion from a place of good faith. I wouldn’t be stepping in and having a conversation with you if I didn’t think this kind of behavior was harmful for the community in some fashion. Keep in mind, I didn’t remove your content or ban you; I simply started a conversation because I want this community and our instance to continue to be a nice place.

  • You’re shifting goalposts again. He was claimed to be a blow against fascism because his opponent was Trump. So either you’re making the claim that Trump is less fascist, specifically on these issues, or you’re shifting the goalposts from your original statement, which was a direct reply to someone airing their grievances about Trump, who is unequivocally worse for minorities than Biden was or than Harris would be.

    We’ve warned you repeatedly about interacting with bad faith in Politics. If you want to talk about the ever-present and upsetting ways that minorities are treated, the need for better protections and quality of life for the working class, the need for better health care and higher education, and an anti-war message, you are more than welcome to spread that message. But you can’t do it in a way where you’re attacking people who are attacking Trump because you are upset about the Democratic Party. You’re implying that they don’t hold these values because you’re upset, and that just upsets others.

    I’m giving you a 7 day site-wide timeout, and if you come back to politics and continue to instigate with others in a way that’s accusatory, treats their statements with bad faith, or otherwise is not nice behavior we’re going to remove you from politics.


  • How would you propose adapting to this? Do you believe it’s the teacher’s responsibility to enact this change rather than (for example) a principal or board of directors?

    To be clear, I’m not blaming anyone here. I think it’s a tough problem and frankly, I’m not a professional educator. I don’t think it’s the teacher’s responsibility and I don’t blame them for a second for deciding that nah, this isn’t worth my time.

    This article is about PhD students coasting through their technical writing courses using chatbots. This is an environment/application where the product (writing a paper) is secondary to the process (critical analysis), so being able to use a chatbot is missing the point.

    Completely agreed here. I would have just failed the students for cheating if it were me. But to be clear, I was talking more in the abstract, since the article is written more about the conundrum and the pattern than it is about a solution. The author decided to quit rather than tackle the problem, and I was interested in hearing them follow that thread a bit further, as they’re the real expert here.

  • While I think there may be more to pull apart here, I think we’re missing the necessary context to weigh in any deeper: how many assignments there are, what the assignments look like, whether they feel like just busy work, how much else is going on in the student’s life, etc. I think it would be telling (albeit not all that surprising, as some at that level are still just looking for a degree) if they were using ChatGPT on their doctorate, but even in that case I would perhaps argue that learning to use ChatGPT tactfully, or in ways which aren’t the direct writing, might be a useful skill to have for future employment.