• 0 Posts
  • 28 Comments
Joined 11 months ago
Cake day: August 8th, 2023






  • I’m a very strong supporter of free speech. But free speech absolutism, where you go out of your way to make sure all voices are heard, is not what free speech is about. It’s about the government not interfering. Just as people have the right to own a gun but Walmart has the right to kick you out for bringing one in, rammy.site users have the right to say whatever they want, and other instances have the right to defederate.

    If a teacher goes against the curriculum and teaches children that black people are all out to get them, I sure as hell hope the school would step in and stop or remove them.

    That’s not a violation of free speech, but in your opinion above it would be.




  • Free speech absolutism is harmful. By remaining federated with them, you’re participating in distributing their content and giving them a platform. People do have a choice of what they want to see; they can choose to be part of another instance without morals. I would hope that a programming instance, of all places, would understand the consequences of propaganda, given how many programmers work in data collection and targeted advertising. If you show an ad to 1000 people and one of them buys the product, the ad worked. It’s no different for disinformation campaigns.

    It’s not like they’re just sharing differing opinions or saying awful shit; they’re taking things out of context or making things up (or posting articles that make things up), and it’s very easy to prove if you do a tiny bit of googling. One article listed off a bunch of supposedly failed climate predictions, with sources attached to look credible. If you actually checked the sources, though, they were all wrong: some of the predictions were made by real people (just not by the claimed academic institutions), while others were straight-up made up.

    I hope the admins make the right decision here. Protecting free speech doesn’t mean allowing people to say whatever they want on your platform. It means allowing them to say it on their platform without being fined or put in jail.





  • Both. Everyone is afraid of AI taking over, but it’s just a tool. Human augmentation is way more likely to lead there. In the meantime, Stephen Hawking lived quite a while only able to speak through augmentation. Like any other technology, it will at the very least be researched out of fear that someone else will get there first, so we might as well embrace it.





  • Closer, and I hope I’m not just being a pedantic jerk, but there is no code being generated either. To use the correct terminology, the weights of the nodes are what change. Nodes are roughly analogous to neurons in a brain, and weights are roughly the strength of the connection between one neuron (node) and another. Real brains are way more complex.

    The weights of the nodes do contain information, but it’s not human-readable at all; we don’t actually have a way of understanding how they work, just a rough idea of why. It’s sort of like how your brain contains the information for catching a ball: it performs the equivalent of calculus to do so, but there is no calculator in your brain doing the math. Or maybe a better analogy: a bouncy ball contains the information required to bounce when you drop it, but we can’t read that information out of the ball, we can only model it.

    But I’m just rambling at this point; your point is clear and valid lol
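
    To make the “no code is generated” point concrete, here’s a toy sketch in plain numpy. The weight values are made up for illustration and real models are stored very differently, but the point stands: the entire “knowledge” lives in the arrays of numbers, and the code around them never changes during training.

    ```python
    import numpy as np

    # A "trained model" is essentially just arrays of numbers like these
    # (values made up here), plus a small amount of fixed math below.
    W1 = np.array([[0.21, -1.30,  0.55],
                   [0.98,  0.07, -0.44]])      # 2 inputs -> 3 hidden nodes
    W2 = np.array([[1.10], [-0.73], [0.35]])   # 3 hidden nodes -> 1 output

    def predict(x):
        hidden = np.maximum(0, x @ W1)            # each hidden node sums its weighted inputs (ReLU)
        return 1 / (1 + np.exp(-(hidden @ W2)))   # squash the output to a 0..1 "confidence"

    print(predict(np.array([0.5, -1.0])))
    ```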


  • Sort of, but there’s no database at all, just a bunch of numbers and math. It’s almost like controlled evolution, breeding plants to select for desirable traits, except that’s actually another field of computing called genetic algorithms. Neural networks are a pile of math trained on data: you show it a cat, it says whether or not it thinks it’s a cat, you tell it if it’s right or wrong, and it adjusts its math accordingly. Do this with a million cats and non-cats and it becomes better than humans at identifying cats. LLMs are just that, but with word prediction and trillions of words of training data. It’s impressive in its own right.
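
    Here’s a stripped-down sketch of that “you tell it if it’s right or wrong, then it adjusts its math” loop. It uses a single made-up “node” and fake numeric features instead of actual cat images, so it’s nothing like a real vision model, just the shape of the idea:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=3)   # start with random numbers: the model "knows" nothing

    def predict(features):
        # features stand in for measurements of an image; real networks learn those too
        return 1 / (1 + np.exp(-features @ weights))   # 0 = "not a cat", 1 = "cat"

    def train_step(features, is_cat, lr=0.1):
        global weights
        error = predict(features) - is_cat   # how wrong was the guess?
        weights -= lr * error * features     # nudge the numbers to be a bit less wrong

    # Tiny fake "dataset" of (features, label) pairs; real training uses millions of images.
    examples = [(np.array([1.0, 0.2, 0.8]), 1),
                (np.array([0.1, 0.9, 0.0]), 0)]
    for _ in range(1000):
        for features, is_cat in examples:
            train_step(features, is_cat)

    print(predict(np.array([1.0, 0.2, 0.8])))   # should now be close to 1
    ```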


  • I don’t disagree, but I do want to point out that your understanding of how chatgpt works is flawed. There is no database or query going on. It’s a giant neural network model that was trained on all that data you mentioned. The model is effectively predicting what the next word should be based on the previous words, nothing else. Each individual word is selected this way.

    It doesn’t change any of your arguments or conclusions, but I wanted to point it out, because if someone wrote a chatbot like chatgpt using databases and traditional programming, I would be floored and incredibly impressed.
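
    A toy illustration of “each individual word is selected this way”: the hard-coded probability table below is a stand-in for the neural network (i.e. exactly the part chatgpt does not do with a lookup), but the pick-one-word-at-a-time loop is the same idea.

    ```python
    import random

    # Made-up "probabilities" for which word tends to follow the previous one.
    # A real LLM computes these from the entire preceding text with billions of
    # learned weights; only the generation loop below is the same idea.
    next_word_probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(prompt, max_words=5):
        words = prompt.split()
        for _ in range(max_words):
            options = next_word_probs.get(words[-1])
            if not options:
                break   # nothing plausible to say next
            # pick exactly one next word, weighted by its probability, then repeat
            words.append(random.choices(list(options), weights=list(options.values()))[0])
        return " ".join(words)

    print(generate("the"))   # e.g. "the cat sat down"
    ```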