  • I am factually correct, I am not here to “debate,” I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another, and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the subsystems that are part of the interaction individually; you have to assign it to the system as a whole. When you do a partial trace on each subsystem individually to get a reduced density matrix for the two systems, if they are perfectly entangled, then you end up with a density matrix without coherence terms and thus without interference effects.

    This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is, if you think what I have described here is not entanglement but a superposition of states then you don’t know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.

    Let’s say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to R probability of being 0 and -R probability of being 1. In this state, if Bob were to apply a second Hadamard gate, then it undoes the first Hadamard gate and so it would have a probability of 1 of being a value of 1 due to interference effects.

    However, if an eavesdropper, let’s call them Eve, measures the qubit in transit, because R and -R are equal distances from the origin, it would have an equal chance of being 0 or 1. Let’s say it’s 1. From their point of view, they would then update their probability distribution to be a probability of 1 of being the value 1 and send it off to Bob. When Bob applies the second Hadamard gate, it would then have a probability of R for being 0 and a probability of -R for being 1, and thus what should’ve been deterministic is now random noise for Bob.
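    This back-and-forth can be checked numerically with a two-entry amplitude vector (a toy calculation, not a full simulator; all variable names are my own):

```python
import numpy as np

R = 1 / np.sqrt(2)
H = np.array([[R, R],
              [R, -R]])            # Hadamard gate

qubit = np.array([0.0, 1.0])       # amplitude 1 for the value 1

# Alice applies a Hadamard: amplitudes become (R, -R)
in_transit = H @ qubit

# Without Eve: Bob's second Hadamard undoes the first
no_eve = H @ in_transit
probs_no_eve = np.abs(no_eve) ** 2        # [0, 1]: deterministic for Bob

# With Eve: she measures in transit and, say, sees 1.
# From her view the amplitudes collapse to (0, 1) before resending.
after_eve = np.array([0.0, 1.0])
with_eve = H @ after_eve
probs_with_eve = np.abs(with_eve) ** 2    # [0.5, 0.5]: random noise for Bob

print(probs_no_eve, probs_with_eve)
```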

    Yet, this description only works from Eve’s point of view. From Alice and Bob’s point of view, neither of them measured the qubit in transit, so when Bob receives it, it is still probabilistic with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

    Because when Eve interacts with the qubit, from Alice and Bob’s perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle’s state, there has to be some correlation between something in Eve’s brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

    Eve and the qubit taken together would have a probability of R for the qubit being 0 and Eve knowing the qubit is 0, and a probability of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this probability distribution is no longer applicable to the qubit.

    He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
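    That partial trace is easy to sketch with numpy (basis ordering and names are my own): build the entangled Eve-qubit state, form its density matrix, trace out Eve, and see the coherences vanish.

```python
import numpy as np

R = 1 / np.sqrt(2)

# Entangled state: R * |qubit=0, Eve sees 0>  -  R * |qubit=1, Eve sees 1>
# Basis ordering: |00>, |01>, |10>, |11> (qubit first, Eve second)
psi = np.array([R, 0.0, 0.0, -R])

rho = np.outer(psi, psi.conj())        # density matrix of the whole system

# Partial trace over Eve to get the qubit's reduced density matrix
rho4 = rho.reshape(2, 2, 2, 2)         # indices: (qubit, eve, qubit', eve')
rho_qubit = np.einsum('ajbj->ab', rho4)  # sum over Eve's index

print(rho_qubit)
# diagonal: 0.5, 0.5  -- a classical probability distribution
# off-diagonal (coherences): 0 -- no interference effects possible
```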

    Bob simply cannot explain why his qubit loses its interference effects when Eve measures it without taking entanglement into account, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve’s perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously across different perspectives is known as an objective collapse model, and such models make different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

    Eve explains the loss of coherence by her reducing the state vector upon seeing a definite outcome for the qubit. Bob explains it by Eve becoming entangled with the qubit, which leads to decoherence: doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whose coherence terms are zero.


  • Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it just doesn’t hold up if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so things can have a -100% chance of occurring, or even a 100i% chance of occurring. You interpret what these probabilities mean in physical reality based on how far they are from zero (the further from zero, the more probable), but the negative signs allow for things to cancel out in ways that would not occur in normal probability theory. These interference effects are unique to quantum mechanics and are its hallmark.

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.

    Ironically, it was Schrödinger himself who spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning it makes all the same predictions as traditional quantum mechanics; it is a mathematically equivalent theory. What is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity. There were discrete “gaps” between interactions. He criticized it, saying “I do not believe that the electron hops about like a flea,” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it “jumps” into being a particle randomly, which is sometimes called the “collapse of the wave function.” This made even less sense because suddenly there is a special role for measurement. Take the cat example: why doesn’t the cat’s observation of the wave cause it to “collapse,” while the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of the theory.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to believe particles can actually be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem. But there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, this isn’t a problem. If I flip a coin, look at the outcome, and see heads, I would say it has a 100% chance of being heads because I saw it as heads; but if I covered it up and asked you, having not seen it, you would assign a 50% probability to heads or tails. If you believe the wave function represents a physical entity, then you could set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

    There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there’s four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
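    The counting here is trivial to verify:

```python
# One probability amplitude per possible outcome: 2**n outcomes for n bits
counts = {n: 2 ** n for n in range(1, 5)}
print(counts)  # {1: 2, 2: 4, 3: 8, 4: 16}

# The growth is exponential, so the list quickly becomes astronomically long:
# 300 bits already need more amplitudes than there are atoms in the
# observable universe (roughly 10**80)
print(2 ** 300 > 10 ** 80)  # True
```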

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at it. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • What is it then? If you say it’s a wave, well, that wave lives in Hilbert space, which is infinite-dimensional, not in spacetime, which is four-dimensional. So what does it mean to say the wave is “going through” the slit if it doesn’t exist in spacetime? Personally, I think all the confusion around QM stems from trying to objectify a probability distribution, which is what people do when they claim it turns into a literal wave.

    To be honest, I think it’s cheating. People are used to physics being continuous, but in quantum mechanics it is discrete. Schrödinger showed that if you take any operator and compute a derivative, you can “fill in the gaps” in between interactions, but this is purely metaphysical. You never see these “in between” gaps. It’s just a nice little mathematical trick and nothing more. Even Schrödinger later abandoned this idea and admitted, in his book Nature and the Greeks and Science and Humanism, that trying to fill in the gaps between interactions just leads to confusion.

    What’s even more problematic about this viewpoint is that Schrödinger’s wave equation is the result of a very particular mathematical formalism. It is not actually needed to make correct predictions. Heisenberg developed what is known as matrix mechanics, whereby you evolve the observables themselves rather than the state vector. Every time there is an interaction, you apply a discrete change to the observables. You always get the right statistical predictions, and yet you don’t need the wave function at all.

    The wave function is purely a result of a particular mathematical formalism, and there is no reason to assign it ontological reality. Even so, if you have ever worked with quantum mechanics, it is quite apparent that the wave function is just a function for picking probability amplitudes out of a state vector, and the state vector is merely a list of, well, probability amplitudes. Quantum mechanics is probabilistic, so we assign things a list of probabilities. Treating a list of probabilities as if it has ontological existence doesn’t even make any sense, and it baffles me that it is so popular to do so.

    This is why Hilbert space is infinite-dimensional. If I have a single qubit, there are two possible outcomes, 0 and 1. If I have two qubits, there are four possible outcomes, 00, 01, 10, and 11. If I have three qubits, there are eight possible outcomes, 000, 001, 010, 011, 100, 101, 110, and 111. If I assigned a probability amplitude to each event occurring, then the degrees of freedom would grow exponentially as I include more qubits in my system. The number of degrees of freedom is unbounded.

    This is exactly how Hilbert space works. Interpreting it as a physical infinite-dimensional space through which waves really propagate just makes absolutely no sense!


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since this is the Copenhagen interpretation, which says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality; it specifically says we cannot make those claims from physics, and thus treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to take it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something absolute and point-of-view independent, so most academics struggle to comprehend what it even means to say that reality is relative all the way down. Interpreting quantum mechanics as a theory of nature at face value is thus actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the modern day.

    Without that, you cannot actually predict any of these correlations ahead of time. You just have to assume that the particles “know” how to correlate with one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story about how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to formulate the maths in such a way that the hidden variables are Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics: it must do so in a way that is also compatible with special relativity.



  • Personally, I think there is a much bigger issue with the quantum internet that is often not discussed, and it’s not just noise.

    Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a superadvanced quantum computer to break the encryption, and it has almost no overhead. The other is truly unbreakable even given an infinite amount of time, but it has so much overhead that it will cut your bandwidth in half.

    Which would you pick?

    In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn’t worth it.

    All quantum “internet” algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we make it perfectly noise-free and entirely solve the noise problem, there would still be no practical reason at all to adopt the quantum internet.


  • The problem with one-time pads is that they’re also the most inefficient cipher. If we switched to them for internet communication (ceteris paribus), it would basically cut internet bandwidth in half overnight. What’s more, the one-time pad is a symmetric cipher, and symmetric ciphers cannot be broken by quantum computers; ciphers like AES256 are still considered quantum-computer-proof. This means you would be cutting internet bandwidth in half for purely theoretical benefits that people wouldn’t notice in practice. The only people I could imagine finding this interesting are overly paranoid governments, as there are no practical benefits.
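    For concreteness, here is a one-time pad in miniature and where the overhead comes from (a minimal XOR sketch; the message is just an example):

```python
import secrets

message = b"attack at dawn"

# One-time pad: the key must be exactly as long as the message
# and must never be reused
key = secrets.token_bytes(len(message))

ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))

assert decrypted == message

# Total bytes that have to move for one message: ciphertext plus key.
# That is twice the message length, which is where the halved
# bandwidth figure comes from.
print(len(ciphertext) + len(key), 2 * len(message))
```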

    It also really isn’t a selling point for quantum key distribution that it can reliably detect an eavesdropper. Modern cryptography does not care about detecting eavesdroppers. When two people exchange keys with a Diffie-Hellman key exchange, eavesdroppers are allowed to eavesdrop all they wish, but they cannot make sense of the data in transit. The problem with quantum key distribution is that it is worse than this: it cannot prevent an eavesdropper from seeing the transmitted key, it just discards the key if they do. This seems to me like it would make it a bit harder to scale, although not impossible, because anyone can deny service just by observing the packets of data in transit.

    Although, the bigger issue that nobody seems to talk about is that quantum key distribution, just like the Diffie-Hellman algorithm, is susceptible to a man-in-the-middle attack. Yes, it prevents an eavesdropper between two nodes, but if the eavesdropper sets themselves up as a third node, pretending to be the other party when queried from either end, they can trivially defeat quantum key distribution. Diffie-Hellman is also susceptible to this, so that is not surprising.

    What is surprising is that with Diffie-Hellman (or more commonly its elliptic curve brethren), we solve this using digital signatures, which are part of public key infrastructure. With quantum mechanics, however, the only equivalent to digital signatures relies on the No-cloning Theorem. The No-cloning Theorem says that if I gave you a qubit and you don’t know how it was prepared, nothing you can do to it can tell you its quantum state; knowing the state requires knowledge of how it was prepared. You can use the fact that only a single person can be aware of its quantum state as a form of digital signature.

    The thing is, however, that the No-cloning Theorem only holds for a single qubit. If I prepared a million qubits all the same way and handed them to you, you could derive their quantum state by doing different measurements on each qubit. So even though you could use this for digital signatures, those digital signatures would have to be disposable: if you made too many copies of them, they could be reverse-engineered. This presents a problem for using them as part of public key infrastructure, as public key infrastructure requires those keys to be, well, public, meaning anyone can take a copy, and so infinite copyability is a requirement.
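    The many-copies point can be illustrated with a toy tomography sketch: one measurement of one copy tells you almost nothing, but the statistics over a million identically prepared copies pin the state down. This is a simplified single-parameter case with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 1.0  # "unknown" preparation angle of the qubit
state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # real amplitudes

n_copies = 1_000_000
p1 = state[1] ** 2  # true probability of measuring 1

# Measure each identically prepared copy in the computational basis
ones = rng.binomial(n_copies, p1)
p1_est = ones / n_copies

# Recover the preparation angle from the measurement statistics alone
theta_est = 2 * np.arcsin(np.sqrt(p1_est))

print(theta, theta_est)  # the estimate converges as the copies grow
```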

    This makes quantum key distribution reliable only if you combine it with quantum digital signatures, but when you do that, it is no longer possible to scale it to some sort of “quantum internet.” It, again, might be something an overly paranoid government could find useful internally as part of its own small-scale intranet, but it would just be too impractical, without any noticeable benefits, for anyone outside of that. Again, all this is for purely theoretical benefits, not anything you’d notice in the real world, as things like AES256 are already considered uncrackable in practice.


  • Entanglement plays a key role.

    Any time you talk about “measurement,” this is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers gives rise to interference effects. When the eavesdropper observes a definite outcome, you no longer need to treat it as probabilistic; you can therefore reduce the state vector by updating your probabilities to simply 100% for the outcome you saw. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.

    It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can exhibit interference with later interactions. For example, if you applied a second Hadamard gate, the qubit would return to its original state due to interference. If instead you had a qubit prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), then applying a Hadamard gate would not return it to an original state but would just give you a random output.

    Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. An obvious example is the double-slit experiment: you get distinctly different patterns on the screen depending on whether or not the photons can interfere with themselves. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not to the other half, and which half is which is not revealed until after the transmission is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don’t know which qubits they need to apply it to until all the qubits are transmitted and this is revealed.

    That means that, at random, half the qubits they receive they just read as-is, and the other half they rely on interference effects to move back into their original state. Anyone who intercepts this by measuring the qubits in transit causes them to decohere, and thus when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than the gates actually cancelling out. The recipient receiving random noise when they should be getting definite values is how you detect an eavesdropper.
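    The scheme described above (essentially the BB84 idea, with the Hadamard standing in for the basis choice) can be sketched as a small simulation; the roughly 25% error rate under eavesdropping is what reveals Eve. The structure and names are my own simplification:

```python
import numpy as np

rng = np.random.default_rng(42)
R = 1 / np.sqrt(2)
H = np.array([[R, R], [R, -R]])  # Hadamard gate

def measure(state):
    """Collapse a 2-amplitude state vector to an outcome, 0 or 1."""
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

def run(n_qubits, eavesdrop):
    errors = 0
    for _ in range(n_qubits):
        bit = rng.integers(2)
        hadamard = rng.integers(2)  # sender randomly applies H to half
        state = np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])
        if hadamard:
            state = H @ state
        if eavesdrop:               # Eve measures in transit and resends
            seen = measure(state)
            state = np.array([1.0, 0.0]) if seen == 0 else np.array([0.0, 1.0])
        if hadamard:                # recipient later learns which qubits to
            state = H @ state       # undo, relying on interference effects
        errors += measure(state) != bit
    return errors / n_qubits

print(run(10_000, eavesdrop=False))  # 0.0: deterministic, no eavesdropper
print(run(10_000, eavesdrop=True))   # ~0.25: random noise reveals Eve
```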

    What does this have to do with entanglement? If we just talk about “measuring a state,” then quantum mechanics would be a rather paradoxical and inconsistent theory. The eavesdropper measured the state, updated the probability distribution to 100%, and thus destroyed its interference effects; but the non-eavesdroppers did not measure the state, so for them it should still be probabilistic, and at face value this seems to imply it should still exhibit interference effects from their perspective.

    A popular way to get around this is to claim that the act of measurement is something “special” which always destroys the quantum probabilities and forces it into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and from the non-eavesdroppers’ perspective, they only describe it still as probabilistic due to their ignorance of the outcome. At that point, it would have a definite value, but they just don’t know what it is.

    However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no “special” interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a “special” interaction, you get different statistical predictions than quantum mechanics actually makes.

    This is because in quantum mechanics, every interaction grows the scale of entanglement, so the interference effects never go away, they just spread out. If you introduce a “special” interaction such as a measurement that forces things into a definite value for all observers, then you are inherently suggesting there is a limit to this scale of entanglement: some cut-off point beyond which interference effects can no longer be scaled. Because we can detect whether a system exhibits interference effects (that’s what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily differ from quantum mechanics in its numerical predictions.

    The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, then from the perspective of the non-eavesdroppers, the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign a state vector to the eavesdropper and the qubit separately, only to them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper; they receive only the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit’s state is still probabilistic, but it is a probability distribution with only terms between 0% and 100%, that is to say, no negatives or imaginary components, and thus it cannot exhibit interference effects.

    Quantum key distribution does indeed rely on entanglement as you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics and not implicitly abandoning quantum mechanics for an objective collapse theory) without taking into account entanglement. As I started with, the reduction of the wave function, which is a first-person description of an interaction (when there are 2 systems interacting and one is an observer describing the second), leads to decoherence. The third-person description of an interaction (when there are 3 systems and one is on the “outside” describing the other two systems interacting) is entanglement, and this also leads to decoherence.

    You even say that “measurement changes the state”, but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.


  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

    Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness,” then we cannot say we don’t understand it. Don’t understand what? If you have not defined it, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo because it doesn’t mean anything.

    In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc, I can at least have an idea of what is being talked about. But when people talk about “consciousness” it just becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

    I have never been convinced by panpsychism, IIT, idealism, dualism, or any of these philosophies or models, because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness vaguely, so that you can’t pin down what it is. That vagueness makes people think we need some sort of special theory of consciousness, but if you can’t pin down what consciousness is, then we don’t need a theory of it at all, as there is simply nothing of meaning being discussed.

    They cannot justify themselves in a vacuum. Take IIT, for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue is that IIT has not given me a reason why I should care about its quantifying what it quantifies. There is a reason, of course; it is implicit. The implicit reason is that what IIT quantifies is supposed to be the same as the “special” consciousness that supposedly needs some sort of “special” explanation (i.e., the “hard problem”), but this implicit reason requires you to not treat IIT in a vacuum.


  • Bruh. We literally don’t even know what consciousness is.

    You are starting from the premise that there is this thing out there called “consciousness” that needs some sort of unique “explanation.” You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don’t think this is what you mean by that.

    We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantum wave collapse in our brains

    There is no such thing as “wave function collapse.” The state vector is just a list of probability amplitudes, and you reduce that list to a definite outcome because you observed what the outcome is. If I flip a coin and it has a 50% chance of landing heads and a 50% chance of landing tails, and it lands on tails, I reduce the probability distribution to 100% for tails. There is no “collapse” going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics, but it has never made any sense.

    So maybe Roger Penrose just wasted his retirement on this passion project?

    Depends on whether or not he is enjoying himself. If he’s having fun, then it isn’t a waste.


  • It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.
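    The epistemic point can be sketched in a few lines of NumPy (the specific amplitudes here are just illustrative): the continuous, even complex-valued, quantities describe our knowledge of the qubit, while every actual measurement returns a discrete 0 or 1.

```python
import numpy as np

# A qubit's state is a pair of complex probability amplitudes,
# e.g. the (|0> - |1>)/sqrt(2) superposition.
amps = np.array([1, -1]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes
# of the amplitudes.
probs = np.abs(amps) ** 2   # [0.5, 0.5]

# But every actual measurement yields a discrete 0 or 1, never
# anything in between; the continuum lives only in the description.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10, p=probs)
print(outcomes)
```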


  • The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind, nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, then they could not look outwardly at the outside world. We cannot observe our own brains, as they exist only to build models of reality; if our brains had a model of themselves, they would have no room left over to model the outside world.

    We can only assign an object to be what is “sensing” our thoughts through reflection. Reflection is ultimately still building models of the outside world but the outside world contains a piece of ourselves in a reflection, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces in the reflection upon a still lake, we would never assign an entity to ourselves at all.

    We assign an entity onto ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion (“I think therefore I am” is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves, as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what “we” are, but there will always be a gap between what we really are and the reflection of what we are.

    Precisely what is “sensing your thoughts” is yourself derived through reflection which inherently derives from observation of the natural world. Without reflection, it is meaningless to even ask the question as to what is “behind” it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.




  • Why are you isolating a single algorithm? There are tons of algorithms that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced. There is a lot more in the literature than in the popular consciousness.

    The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

    If they would offer speed benefits, then why wouldn’t you want the chip that offers those benefits in your phone? Of course, in practical terms, we likely will not have this due to the difficulty and expense of quantum chips, and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that if consumers somehow could have access to technology in their phone that would offer performance benefits to their software, they wouldn’t want it.

    That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

    It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.

    I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.


  • Uh… one of those algorithms in your list is literally for speeding up linear algebra. Do you think just because it sounds technical it’s “businessy”? All modern technology is technical; that’s what technology is. It would be like someone saying, “GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?” Many of the algorithms in that list offer potential speedups for linear algebra operations, which are the basis of both graphics and AI, and one of them is even specifically for machine learning. That’s huge for regular consumers… assuming the technology could ever progress to come to regular consumers.


  • bunchberry@lemmy.world to Science Memes@mander.xyz · Crystals

    OrchOR makes far too many wild claims for evidence for it to come easily. Even if we discover quantum effects in the brain (in the sense of scalable interference effects, which have absolutely not been demonstrated), that would just demonstrate there are quantum effects in the brain. OrchOR is filled with assumptions that go far beyond this and would be nowhere near justified. One of them is its reliance on gravity-induced collapse, which is nonrelativistic, meaning it cannot reproduce the predictions of quantum field theory, our best theory of the natural world.

    A theory is ultimately not just a list of facts but a collection of facts under a single philosophical interpretation of how they relate to one another. This is more of a philosophical issue, but even if OrchOR proved there is gravitationally induced collapse and that there are quantum effects in the brain, we would still just take these two facts separately. OrchOR tries to unify them under a bizarre philosophical interpretation called the Penrose–Lucas argument: because humans can believe things that are not proven, human consciousness must be noncomputable, and because human consciousness is noncomputable, it must be reducible to something whose outcome cannot be algorithmically predicted, which would be true of an objective collapse model. Ergo, wave function collapse causes consciousness.

    Again, even if they proved there are scalable quantum interference effects in the brain, and even if they proved there is gravitationally induced collapse, that alone does not demonstrate OrchOR unless you actually think the Penrose–Lucas argument makes sense. They would just be two facts we would take separately: a fact that there is gravitationally induced collapse, and a fact that there are scalable quantum interference effects in the brain. There would be no reason to adopt any of OrchOR’s claims about “consciousness.”

    But even then, there is still no strong evidence that the brain in any way makes use of quantum interference effects, only loose hints that it may or may not be possible with microtubules, and there is definitely no evidence of gravitationally induced collapse.


  • A person who would state they fully understand quantum mechanics is the last person i would trust to have any understanding of it.

    I find this sentiment can devolve into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics cannot be made sense of, and it logically follows that people who speak in a way that does not make sense, and who have no expertise in the subject and so do not even claim to make sense, are the more reliable sources.

    It’s really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate what they have to offer. To me, the joy of a mystery is not to revel in it, but to search for solutions to it, and I will say the academic literature is filled with pretty good accounts of QM these days. It’s been around for a century; a lot of ideas are very developed.

    I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are “looking,” which is simply not the case. You end up with very bizarre and misleading results from this, for example, in the part where you land on the quantum moon and have to look at a picture of it so it does not disappear while your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon and your ship is still interacting with the fog, so there is no reason it should hop somewhere else.

    Now quantum science isn’t exactly philosophy, ive always been interested in philosophy but its by studying quantum mechanics, inspired by that game that i learned about the mechanic of emerging properties. I think on a video about the dual slit experiment.

    The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe the path the particle takes through the slits, the interference pattern disappears. Yet, you can also trivially prove in a few lines of calculation that if the particle interacts with even a single other particle as it passes through the two slits, that interaction alone also destroys the interference effects.

    You model this by computing what is called a density matrix for both the particle going through the two slits and the particle it interacts with, and then you perform what is called a partial trace, whereby you “trace out” the particle it interacts with, giving you a reduced density matrix of only the particle passing through the two slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.

    If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.
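    The partial-trace calculation described above can be sketched in a few lines of NumPy; the “environment” qubit here is a hypothetical stand-in for whatever single particle the interfering particle interacts with:

```python
import numpy as np

# A "path" qubit in an equal superposition (|0> + |1>)/sqrt(2):
# its density matrix has nonzero off-diagonal (coherence) terms,
# which is what produces interference.
psi = np.array([1, 1]) / np.sqrt(2)
rho_alone = np.outer(psi, psi.conj())
print(rho_alone)       # off-diagonal terms are 0.5

# Now let it interact with one environment qubit so the environment
# records which path was taken: (|0>|e0> + |1>|e1>)/sqrt(2).
e0 = np.array([1, 0])
e1 = np.array([0, 1])
joint = (np.kron([1, 0], e0) + np.kron([0, 1], e1)) / np.sqrt(2)
rho_joint = np.outer(joint, joint.conj())

# Partial trace over the environment: reshape the 4x4 matrix into
# indices [path, env, path', env'] and sum over env = env'.
rho_reduced = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_reduced)     # off-diagonals are now 0: coherence is gone
```

The reduced density matrix comes out diagonal, so the path qubit can no longer interfere with itself, even though nothing “looked” at it.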

    At that point i did not yet know that emergence was already a known topic in philosophy, not just quantum science, because i still tried to avoid external influences but it really was the breakthrough I needed and i have gained many new insights from this knowledge since.

    Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn’t be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.

    My views when it comes to philosophy are pretty fringe as most academics believe the human brain can transcend reality and I reject this notion, and I find most philosophy falls right into place if you reject this notion. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don’t entirely not engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.


  • Understanding the nature of consciousness is one of the hardest problems in science. Some scientists have suggested that quantum mechanics, and in particular quantum entanglement, is the key to unraveling the phenomenon.

    The argument for this has never been convincing. People like Roger Penrose have argued that because people can believe things without proof, consciousness must be “special” in the sense that it can do something uncomputable, so it must not be reducible to classical physics. This argument is just bizarre: humans believe things without proof because they don’t operate on proof but on confidence levels. They believe things that seem right to them based on their past experiences. Even AI operates on confidence levels and can say things that are false.

    I have never seen a convincing argument that there really is something unique about human cognition that requires introducing anything quantum, or even anything supernatural, as many philosophers in academia are fond of arguing these days.

    Entanglement means the two-photon state is not a classical combination of two photon states. Instead, measuring or interacting with one of the photons instantly affects the same property of the second photon, no matter how far away it is.

    This is just patently false. Entanglement is just a statistical correlation, but one over quantum probabilities rather than classical probabilities (the two can be distinguished by doing a trace over a density matrix). If you have two entangled particles, let’s say two electrons with a 50% chance of both being spin up and a 50% chance of both being spin down, then the only possibilities are ⇑⇑ or ⇓⇓. That means they are statistically correlated: measuring one tells you the value of the other.

    Now, let’s say, while they are still entangled, you flip the second one. If it were true that altering one instantly affects the other, then flipping the second should not break the correlation: if the first was going to be ⇑ and the second was going to be ⇑, and you flip the second prior to measuring it, it would affect the first one too, so they would both become ⇓⇓.

    Yet, this is not what happens in practice. In practice, if you flip the second one prior to measuring it, you find the statistical correlation changes to the two possibilities ⇑⇓ and ⇓⇑. This is exactly what you would expect classically. If I give you two envelopes, each containing a card that is randomly either face up or face down, but correlated so that both cards face the same direction, and you flip the card in one envelope before opening it, you would expect the two cards to then face opposite directions rather than the same.
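    This flip argument can be checked numerically; a minimal sketch with NumPy, assuming the ⇑⇑/⇓⇓ pair is the Bell state (|⇑⇑⟩ + |⇓⇓⟩)/√2:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): measuring one qubit tells you
# the value of the other.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Flip (Pauli-X) applied to the SECOND qubit only.
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
flipped = np.kron(I, X) @ bell

# Measurement probabilities for the four outcomes 00, 01, 10, 11.
probs = np.abs(flipped) ** 2
print(probs)   # [0, 0.5, 0.5, 0]: the correlation becomes 01/10
```

The flip changes the correlation from same-direction to opposite-direction, exactly as in the envelope analogy; nothing about the first qubit is “instantly affected.”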

    There is no actual evidence that measuring one particle in an entangled pair affects the other particle. These effects only exist if you make certain metaphysical assumptions that go beyond quantum mechanics. If you presume objective collapse or hidden variables, for example, then you have to posit such effects. But traditional quantum mechanics is not an objective collapse theory or a hidden variable theory.

    Entanglement has been demonstrated for a system whose members are over 1,000 km apart. Nothing like it exists in classical physics; it is purely a quantum phenomenon. Here entanglement would raise the possibility of much faster signaling along the sections of myelin that encase segments of the axon’s length.

    No. There is literally a theorem in quantum mechanics, the no-signaling theorem, which proves such a thing is impossible.
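    A minimal numerical illustration of the idea behind the theorem (a sketch, not a proof): whatever local operation one party applies to their half of an entangled pair, the other party’s reduced density matrix, and hence their measurement statistics, is unchanged, so no message can be sent this way.

```python
import numpy as np

# Shared entangled pair: the Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

def bobs_state(rho4):
    # Partial trace over Alice's (first) qubit: reshape the 4x4
    # matrix to [a, b, a', b'] and sum over a = a'.
    return np.trace(rho4.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# Some operation Alice might apply locally, e.g. a Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))
rho_after = U @ rho @ U.conj().T

# Bob's local statistics are identical before and after.
print(np.allclose(bobs_state(rho), bobs_state(rho_after)))   # True
```

Bob’s half stays the maximally mixed state (50/50 for either outcome) no matter what Alice does locally, which is why entanglement cannot carry a signal.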


  • Quantum mechanics explains a range of phenomena that cannot be understood using the intuitions formed by everyday experience. Recall the Schrödinger’s cat thought experiment, in which a cat exists in a superposition of states, both dead and alive. In our daily lives there seems to be no such uncertainty—a cat is either dead or alive. But the equations of quantum mechanics tell us that at any moment the world is composed of many such coexisting states, a tension that has long troubled physicists.

    No, this is a specific philosophical interpretation of quantum mechanics. It requires treating the wave function as a literal autonomous entity that actually describes the object. This is a philosophical choice and is not demanded by the theory itself.

    The idea that two fundamental scientific mysteries—the origin of consciousness and the collapse of what is called the wave function in quantum mechanics—are related, triggered enormous excitement.

    The “origin of consciousness” is not a “scientific mystery.” How the brain works is indeed a scientific mystery, but “consciousness” is just something philosophers cooked up: the claim that everything we perceive is an illusion (called “consciousness”) created by the mammalian brain, opposed to some “true reality” that is entirely invisible, beyond the veil of this illusion, with no possibility of ever being observed.

    People like David Chalmers rightfully pointed out that if you believe this, then it seems like a mystery as to how this invisible “true reality” can “give rise to” the reality we actually experience and are immersed in every day. But these philosophers have simply failed to provide a compelling argument as to why the reality we perceive is an illusion created by the brain in the first place.

    Chalmers doesn’t even bother to justify it; he just cites Thomas Nagel, who says that experience is “conscious” and “subjective” because true reality is absolute (point-of-view independent) while the reality we experience is relative (point-of-view dependent), and therefore it cannot be objective reality as it exists but must be a product of the mammalian brain. Yet, if the modern sciences have shown us anything, it is that reality is absolutely not absolute but is relative to its core.

    Penrose’s argument is even more bizarre: he claims that because we can believe things that cannot be mathematically proven, our brains can do things which are not computable, and thus there must be some relationship between the brain and the outcomes of measurements in quantum mechanics, which no computation can predict beforehand. But the argument does not hold. Humans can believe things that can’t be proven because humans operate on confidence levels. If you see enough examples to be reasonably confident the next will follow the same pattern, you can believe it. This is just called induction, and nothing prevents you from putting it into a computer.

    According to Penrose, when this system collapses into either 0 or 1, a flicker of conscious experience is created, described by a single classical bit.

    Penrose, like most philosophers, never convincingly justifies the claim that experience is “conscious.”

    However, per Penrose’s proposal, qubits participating in an entangled state share a conscious experience. When one of them assumes a definite state, we could use this to establish a communication channel capable of transmitting information faster than the speed of light, a violation of special relativity.

    Here he completely goes off the rails and proposes something that goes against the scientific consensus for no clear reason. Why does his “theory” even need faster-than-light communication? How does proposing superluminal signaling help explain “consciousness”? All it does is make the theory trivially false since it cannot reproduce the predictions of experiments.

    In our view, the entanglement of hundreds of qubits, if not thousands or more, is essential to adequately describe the phenomenal richness of any one subjective experience: the colors, motions, textures, smells, sounds, bodily sensations, emotions, thoughts, shards of memories and so on that constitute the feeling of life itself.

    Now the authors themselves are claiming experience is “subjective” yet do not justify it. Like all sophists on this topic, they begin from the premise that we do not perceive reality as it is but as some subjective illusion, and rarely even try to justify it. That aside, they are also abusing terminology. Colors, motions, textures, smells, etc., are not experiences but abstract categories. We can talk about the experience of the color red, but we can also talk about the experience of a rainbow, or an amusement park. Are amusement parks “subjective experiences”? No, an amusement park is an abstract category.

    Abstract categories are normative constructs used to identify something within an experience, but they are not experiences themselves. You have an experience, and then you interpret that experience to be something. This process of interpretation and identification is not the same as the experience itself. Reality just is what it is. It is not blue or red, it is not a rainbow or an amusement park; it just is. These are socially constructed labels we apply to it.

    Sophists love to demarcate the objects of “qualia,” like red or green or whatever, as somehow “special” over any other category of objects, such as trees, rocks, rainbows, amusement parks, atoms, Higgs bosons, etc. Yet, they can never tell you why; they just insist they are special… somehow. All abstract categories are socially constructed norms used to identify aspects of reality. They are all shared concepts precisely because they are socially constructed: we are all taught to identify them in the same way. We are all shown something red and told “this is red.” Two people may be physically different, and thus this “red” has different impacts on them, but no matter how different those impacts are, they both learn to associate their real experience with the same word, and thus it becomes shared.

    This is true for everything. Red, dogs, trees, cats, atoms, etc. There is no demarcation between them.

    In an article published in the open-access journal Entropy, we and our colleagues turned the Penrose hypothesis on its head, suggesting that an experience is created whenever a system goes into a quantum superposition rather than when it collapses. According to our proposal, any system entering a state with one or more entangled superimposed qubits will experience a moment of consciousness.

    This is what passes for “science” these days. Metaphysical realism has really poisoned people’s minds.

    The definitiveness of any conscious experience naturally arises within the many-worlds interpretation of quantum mechanics.

    Another piece of sophistry that originates from some physicists simply disliking the Born rule, declaring it mathematically ugly, and so trying to invent some underlying story from which it can be derived that would be more mathematically beautiful. However, this underlying story is not derived from anything we can observe, so there is no possible way to agree upon what it even is. There are dozens of proposals and no way to choose between them. There simply is not “the” many-worlds interpretation; there are many many-worlds interpretations.

    To make these esoteric ideas concrete, we propose three experiments that would increasingly shape our thinking on these matters.

    All the experiments proposed deal with observing the behavior of living organisms, which is irrelevant to the topic at hand.