I mean, the whole point of declaring this era post-truth is that these people have basically opted out of consensus reality.
Why don’t they just hire a wizard to cast an anti-tiktok spell over all of Australia instead? It would be just as workable and I know a guy who swears he can do it for cheaper than whatever server costs they’re gonna try and push.
Okay apparently it was my turn to subject myself to this nonsense and it’s pretty obvious what the problem is. As far as citations go I’m gonna go ahead and fall back to “watching how a human toddler learns about the world” which is something I’m sure most AI researchers probably don’t have experience with as it does usually involve interacting with a woman at some point.
In the real examples that he provides, the system isn’t “picking up the wrong goal” as an agent somehow. Instead it’s seeing the wrong pattern: learning “I get a pat on the head for getting to the bottom-right-est corner of the level” rather than “I get a pat on the head when I touch the coin.” These are totally equivalent in the training data, so it’s not surprising that it goes with the simpler option, the one that doesn’t require recognizing “coin” as anything relevant. This failure state is entirely within the realm of existing machine learning techniques and models, because identifying patterns in large amounts of data is exactly the kind of thing they’re known to be very good at. But there isn’t any kind of instrumental goal-formation happening here so much as the system recognizing that it should reproduce games where it moves in certain ways.
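The “totally equivalent in the training data” point can be made concrete with a toy sketch (my own illustration, not from the post): in a CoinRun-style training set where the coin always sits at the far right of the level, “touched the coin” and “reached the rightmost tile” label every training episode identically, and only a test level with the coin moved elsewhere can tell them apart.

```python
# Each episode is (final_x, level_width, coin_x). In training, the coin
# always sits at the rightmost tile, so the two hypotheses below agree.
train_episodes = [(9, 10, 9), (14, 15, 14), (19, 20, 19)]
test_episode = (9, 10, 4)  # coin moved away from the right edge

def touched_coin(final_x, width, coin_x):
    return final_x == coin_x

def reached_right_edge(final_x, width, coin_x):
    return final_x == width - 1

# Indistinguishable on all of the training data...
assert all(touched_coin(*e) == reached_right_edge(*e) for e in train_episodes)

# ...but they disagree the moment the spurious correlation breaks.
print(touched_coin(*test_episode), reached_right_edge(*test_episode))
# prints: False True
```

Nothing here requires the model to “want” anything; the simpler predicate just fits the data equally well.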
This is also a failure state that’s common in humans learning about the world, so it’s easy to see why people think we’re on the right track. We had to teach my little one the difference between “Daddy doesn’t like music” and “Daddy doesn’t like having the Blaze and the Monster Machines theme song shout/sung at him when I’m trying to talk to Mama.” The difference comes in the fact that even as a toddler there’s enough metacognition and actual thought going on that you can help guide them in the right direction, rather than needing to feed them a whole mess of additional examples and rebuild the underlying pattern.
And the extension of this kind of pattern misrecognition into sci-fi end of the world nonsense is still unwarranted anthropomorphism. Like, we’re trying to use evidence that it’s too dumb to learn the rules of a video game as evidence that it’s going to start engaging in advanced metacognition and secrecy.
That’s the goal. The reality is that it doesn’t actually reproduce the skills it imitates well enough to actually give capital access to them, but it does a good enough job imitating them that they’re willing to give it a chance.
I mean a lot of the services that companies are using are cloud-hosted, meaning that especially if you have branch offices or a lot of remote workers a normal firewall in the datacenter introduces an unnecessary bottleneck. Putting the logical edge of your organization’s network in the cloud too makes sense from a performance perspective in that case, and then turning the actual firewalls into SaaS seems much less absurd.
Brief overlapping thoughts between parenting and AI nonsense, presented without editing.
The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of “thinking” (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.
Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include. Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn’t actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least I imagine doing so would actually work out to be less efficient than raising an actual human being and training them in the desired tasks.
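A back-of-the-envelope comparison makes the fire-hose-versus-trickle gap concrete. The figures below are rough assumptions of mine for illustration (reading speed, video resolution), but the orders of magnitude are the point:

```python
# Rough, hand-wavy estimates -- every number here is an assumption
# chosen for illustration, not a measurement.

# Reading text: ~250 words/min at ~5 bytes/word for a fast adult reader.
text_bytes_per_sec = 250 / 60 * 5

# Raw sensory input, dominated by vision: uncompressed 1080p RGB at 30 fps.
video_bytes_per_sec = 1920 * 1080 * 3 * 30

ratio = video_bytes_per_sec / text_bytes_per_sec
print(f"raw vision carries roughly {ratio:,.0f}x the bytes of read text")
```

And that’s before adding audio, touch, or any of the other senses, or accounting for the point that the text stream is already a lossy human summary rather than reality itself.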
Human children are also not immune to “hallucinations” in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason a way to interact with the world based off of very little actual information about it. This feeds into both versions of the above problem, since the difference between reality and lies about reality cannot be meaningfully discerned from text alone and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a “Happy Birthday” sticker doesn’t immediately make it their (or anyone else’s) birthday.
Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped fun house mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.
IDK. Felt thoughtful, might try to organize it on morewrite later.
This is what the AI-is-useful-actually argument obscures. There are parts of this technology that can do legitimately cool things! Machine learning identifying patterns in massive volumes of data that would otherwise be impractical to analyze is really cool and has a lot of utility. But once you start calling it “Medical AI” then people start acting like they can turn their human brains off. “AI” as a marketing term is not a tool that can help human experts focus their own analysis or enable otherwise-unfeasible kinds of statistical analysis. Will Smith didn’t get into gunfights with humanoid iMacs because they were identifying types of bread too effectively. The whole point is that it’s supposed to completely replace the role of a person in the relevant situations.
I mean, considering only the relationships between words and symbols in the complete absence of context and real-world referents is a good description of how a certain brand of tech dunce thinks.
I’m glad I’m not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of a wannabe like we keep getting makes sense (I too would prefer if the levers of power were wielded by someone halfway competent who listens to and cares about people around them) but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and also some pretty strong reasons why the road there led through Hitler and Wilson.
Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel like I should clarify that not only does the current crop of dolts not have it, but that there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it’s the “because I’m your Dad and I said so” for adults. Learning things is hard and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.
There are three kinds of programmers. From smallest to largest: those smart enough to write good math-intensive libraries, those dumb enough to think they can, and those smart enough to just use what the first kind made.
You’ve got to make sure you’re not over-specializing. I’d recommend trying to roll your own time zone library next.
First and foremost, the dunce is incapable of valuing knowledge that they don’t personally understand or agree with. If they don’t know something, then that thing clearly isn’t worth knowing.
There is a corollary to this that I’ve seen as well, and it dovetails with the way so many of these guys get obsessed with IQ. Anything they can’t immediately understand must be nonsense not worth knowing. Anything they can understand (or think they understand) that you don’t is clearly an arcane secret of the universe that they can only grasp because of their innate superiority. I think that this is the combination that explains how so many of these dunces believe themselves to be the ubermensch who must exercise authoritarian power over the rest of us for the good of everyone.
See also the commenter(s) on this thread who insist that their lack of reading comprehension is evidence that they’re clearly correct and are in no way part of the problem.
A lot of the spamming at the SC2 tournament level is about staying warmed up so that when you get into a micro-intensive battle later on where all of those actions might count (splitting your marines to protect from AoE while target-firing the suicide bombing banelings, for example) you can do it. Doesn’t make it look less ridiculous, especially in the first couple of minutes before the commentary has anything to really talk about so they try to act like stealing 5 minerals at that stage could somehow decide the game. But there is a slightly more reasonable logic to it than just speed running an RSI to look cool.
The original StarCraft also offers a lot of opportunities to use your “extra” APM to optimize around the godawful AI pathing and other “quirks” of the engine. It’s not as bad as, say, DotA in terms of “this was a limitation of the original engine that is now a major cornerstone of playing the game well and if you complain about it you’re just bad,” but it’s definitely up there. As the game goes on you’ll usually see players start playing slightly fast and loose with, say, optimizing the mining at their new base, because at that point in the game splitting your focus that much is more detrimental even if you can move that fast.
I definitely ended up as an occasional spectator and campaign player for all that, though. Especially now that I’m starting to have creaky old man wrists of my own.
Unfortunately it doesn’t look like he was properly banned, just booted out of his session for having suspiciously-high APM. Now, the true eSports nerds among us will already know that high APM is a staple of high-level play in some games but is also an easy way to check for certain types of cheaters. Because of the association with skill in e.g. StarCraft it also became a very easily gamable metric if for some reason you wanted to feel like you knew what you were doing or show off for your friends and strangers online. For example, certain key bindings let you perform some actions as fast as your keyboard’s refresh rate allows by holding down a key or abusing the scroll wheel on your mouse. This can send your measured APM through the roof for a time. My gut says this is what Elon was doing that triggered the anticheat program, rather than him actively gaming or actually cheating.
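For a sense of scale (the specific numbers here are my own illustrative assumptions): an APM counter just tallies input events per minute, so a single held key firing at a typical OS key-repeat rate can dwarf even pro-level real play.

```python
# Toy APM counter: tally timestamped input events over a window.
def apm(event_timestamps, window_seconds=60):
    return len(event_timestamps) * 60 / window_seconds

# Pro StarCraft play peaks somewhere around 300-400 real actions per minute:
pro_events = [i * 0.15 for i in range(400)]  # one action every 150 ms
print(apm(pro_events, 60))  # 400.0

# Holding a key at a ~30 Hz keyboard repeat rate for the same minute:
held_key_events = [i / 30 for i in range(1800)]
print(apm(held_key_events, 60))  # 1800.0
```

A number like that second one is trivially distinguishable from human play, which is presumably why it makes a cheap anticheat tripwire.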
Please note that the hard-won knowledge of my misspent youth has no bearing on how pathetic it is for the richest man in the world to be doing the same kind of begging for clout that I did at 14, especially since I’m pretty sure 14-year-old me was frankly better at it.
On one hand giving these people the veneer of science is actively going to undermine public confidence in “science” as a whole and directly make the world a worse place.
On the other hand, money.
I got bounced back to Casey Newton’s recent master class in critihype and found something new that stuck in my craw.
Occasionally, they get an entire sector wrong — see the excess of enthusiasm for cleantech in the 2000s, or the crypto blow-up of the past few years.
In aggregate, though, and on average, they’re usually right.
First off, please note that this describes two of the most recent tech bubbles and doesn’t provide any recent counterexamples of a seemingly-ridiculous new gimmick that actually stuck around past the initial bubble. Effectively this says: yes, they’re 0 for 2 in the last 20 years, but this time they can’t all be wrong!
But more than that I think there’s an underlying error in acting like “the tech sector” is a healthy and competitive market in the first place. They may not directly coordinate or operate in absolute lockstep, but the main drivers of crypto, generative AI, the metaverse, SaaS, and so much of the current enshittifying and dead-ending tech industry come back to a relatively small circle of people who all live in the same moneyed Silicon Valley cultural and informational bubble. We can even identify the ideological underpinnings of these decisions in the TESCREAL bundle, effective altruism and accelerationism, and “dark enlightenment” tech-fascism. This is not a ruthlessly competitive market that ferrets out weakness. It’s more like a shared cult of personality that selects for whatever makes the guys on top feel good about themselves. The question isn’t “how can all these different groups be wrong without someone undercutting them,” it’s “how can these few dozen guys who share an ideology and information bubble keep making the exact same mistakes as one another,” and the answer should be to question why anyone expects anything else!
To his frequent “no, people really are this stupid” refrain I would like to add an argument. If it didn’t work on enough people to be profitable, the business model wouldn’t have persisted and been replicated and refined into the dominant model of online advertising, and/or online advertising would never have been able to become the primary monetization framework for online content. Like, it’s fucked how much of the existing Internet is effectively subsidized by exploiting people who don’t know better, and I don’t think people are really okay with this as much as the system is sufficiently obfuscated that we don’t have to notice or think about it.
Economics: the famously apolitical field that examines the distribution and creation of wealth, also a famously apolitical concept.
Ironically this whole exchange is an example of just how cooked American political discourse is. The culture war is so all-consuming that anything outside of that gets largely excised from political action entirely. Then when someone from outside the US tries to point out that basically unrestricted corporate looting and blatant violations of various human rights could be regulated or otherwise countered by political processes, people act like they’re speaking Martian.
At best you end up with the Gunther Hermann story from Deus Ex, forced into retirement and made disposable when last generation’s top-of-the-line becomes this generation’s unusable trash.
I mean, doesn’t somebody still need to validate that those keys only get to people over 18? Either you have a decentralized authority that’s more easily corrupted or subverted or else you have the same privacy concerns at certificate issuance rather than at time of site access.