STOP! I know what you’re going to say. “Information cannot be transmitted faster than the speed of causality - even using entanglement, there are major issues you cannot overcome!”
Right? Guess what? I agree with you 100%.
Now that we have *that* out of the way, let’s talk about Probability Leak Analysis [PLA].
Conventional information requires very specific symbols, sequenced so as to have meaning to whoever or whatever receives them. That very fact makes it impossible to set a quantum state such that the observer of the entangled particle can view it as having been intentionally collapsed by a sender. However - what if we could dissolve the information into a probability cloud, allowing the probable information to leak through the entanglement of two particles so that no collapse occurs except on the receiver’s end? This would sidestep the issues behind the mainstream objection to FTL communication: first, you eliminate the “whodunit?” problem of wave-function collapse, and second, the information is altered in such a way that the transmitted idea only ever becomes a statistical possibility.
So - how might Quantum PLA work?
First, remember that each quantum property is paired with a conjugate property. The classic example is position and momentum: this is the Heisenberg Uncertainty Principle. Recall that the relationship is such that if a particle’s position is known exactly, its momentum becomes completely uncertain. The same sort of relationship exists between spin and polarity. Next, it is important to know that wave functions can be partially collapsed.
When Bob is not inducing a randomly polarized magnetic field across his PLA array, the probability distribution for the particles in his array is spread as evenly and smoothly across the polarity probability space as this entangled system allows. This is mirrored on Alice’s side by a similar uncertainty pattern when she collapses her PLA array. The cumulative effect is what we want to watch for, as monitoring any single particle would likely yield nonsense.
Now, we’re ready.
The setup would be simple. Bob has an array of particles entangled with Alice’s array; the more particles you have, the higher your chance of correctly determining the intended information. Each array is divided into two sub-structures [a 50/50 split]. One sub-structure [sub-array A] is fitted with an electromagnet, and sub-array B is left naked. The electromagnet is rigged so that *when* it is activated, its polarity is completely determined by a quantum random number generator.
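To make the layout concrete, here is a minimal sketch of the two arrays as data structures. All names (`PLAArray`, `qrng_polarity`, the array size) are hypothetical and mine; this models the bookkeeping only, not any physics:

```python
import random

class PLAArray:
    """One side's entangled particle array, split 50/50 into two sub-arrays."""
    def __init__(self, size):
        half = size // 2
        self.sub_a = list(range(half))         # indices fitted with the electromagnet
        self.sub_b = list(range(half, size))   # indices left naked (control/parity)

def qrng_polarity():
    """Stand-in for the quantum random number generator driving the field."""
    return random.choice([1, -1])

bob = PLAArray(1000)     # Bob's array, entangled pairwise with Alice's
alice = PLAArray(1000)
print(len(bob.sub_a), len(bob.sub_b))  # 500 500
```

The 50/50 split matters later: sub-array B never sees the field, so it serves as the statistical baseline that sub-array A is compared against.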
Bob's array must be set up so that the polarizing field is randomly set to 1 or -1; this will not work if you consistently apply one or the other. When Bob applies the field and the field's presence is certain (again, it does not matter whether it's 1 or -1), the spin's probability distribution widens due to the constriction of Bob's system to either 1 or -1. When Alice conducts her next regular collapse interval, the distribution of her spin states will likely appear much more dispersed than the control group's.
When Bob desires to leak the probability of a ‘1’ bit, he activates the electromagnet; when he leaks a ‘0’, he does absolutely nothing.
When Bob sends the probability of a binary '1', the polarity of his PLA system is randomly induced. It doesn't matter whether the field is 1 or -1, but it cannot consistently be one or the other; the field must randomly alternate between the two. When Bob's PLA array is influenced by the induced field, each particle's polarity is certain to be either 1 or -1, but highly unlikely to be anywhere between the two. This means the entangled twin on Alice's side has a spin state whose probability distribution allows, potentially, anything. The distribution plot would be a very uniform spread across possibilities.
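A quick Monte Carlo toy of the statistical signature being described. To be clear, this simulates the *assumption* of the scheme (that field-on intervals spread Alice's spin outcomes uniformly while field-off intervals cluster tightly), not actual quantum mechanics; the distributions and parameters are mine:

```python
import random
import statistics

def collapse_spins(n, field_on):
    """Toy model of one collapse interval on Alice's side.

    Assumed per the text: field on -> outcomes spread uniformly across
    the range; field off -> outcomes cluster tightly around a baseline.
    """
    if field_on:
        return [random.uniform(-1, 1) for _ in range(n)]  # wide, uniform spread
    return [random.gauss(0, 0.1) for _ in range(n)]       # tight baseline cluster

random.seed(0)
on = collapse_spins(500, field_on=True)
off = collapse_spins(500, field_on=False)
# The field-on interval shows clearly higher dispersion:
print(statistics.stdev(on) > statistics.stdev(off))  # True
```

The point of the toy is only that dispersion, aggregated over many particles, is a distinguishable statistic even when each individual outcome is noise.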
Alice will not be looking at polarity; she’s going to be collapsing spin. Assuming we’re able to reset this system, a hypothetical method of knowing when a new bit has been sent would be to collapse, re-initialize, and collapse again. A statistical pattern would begin to emerge showing the variance between the times polarity is collapsed and the times it is not, and observing the timing between these sudden shifts would give Alice a clue as to how often a new PLA-bit is sent. And that’s basically it. It won’t be right 100% of the time - which is why I’d recommend that the arrays consist of many particles.
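Alice's collapse/re-initialize loop might look like the sketch below: one dispersion measurement per interval, with sudden jumps marking where a bit was leaked. Again, the generator encodes the scheme's assumption rather than real physics, and the threshold value is an arbitrary choice of mine:

```python
import random
import statistics

def dispersion_series(bits, n=500):
    """Toy stream: one dispersion measurement per collapse interval,
    assuming Bob's field widens the spread during a '1' interval."""
    series = []
    for b in bits:
        samples = ([random.uniform(-1, 1) for _ in range(n)] if b
                   else [random.gauss(0, 0.1) for _ in range(n)])
        series.append(statistics.stdev(samples))
    return series

random.seed(2)
series = dispersion_series([0, 0, 1, 0, 1, 1, 0])
# Intervals whose dispersion jumps past the (arbitrary) threshold:
shifts = [i for i, s in enumerate(series) if s > 0.3]
print(shifts)  # [2, 4, 5]
```

The spacing between entries of `shifts` is what gives Alice her timing estimate for how often a new PLA-bit arrives.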
Obviously, sub-array A is where you watch for sudden shifts in the collapse outcomes, whereas sub-array B functions as a sort of parity/control. The distribution of deviations would then be pitted against a probability assessment of the most likely binary words, superimposed over a regularity interval to determine the timing and spacing between anomalies - whether they are positive or negative. Both are equally significant, and both still indicate a most likely '1'.
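Putting the decoding step together: compare sub-array A's dispersion against the naked control B each interval, and call a '1' whenever the ratio deviates strongly in *either* direction (both positive and negative deviations count, per the above). This is a sketch under the same toy assumptions as before; the ratio threshold and generators are hypothetical:

```python
import random
import statistics

def interval_bit(sub_a, sub_b, ratio_threshold=2.0):
    """Decode one interval: '1' when sub-array A's dispersion deviates
    strongly from control sub-array B, in either direction."""
    ratio = statistics.stdev(sub_a) / statistics.stdev(sub_b)
    return 1 if (ratio > ratio_threshold or ratio < 1 / ratio_threshold) else 0

def simulate_interval(bit, n=500):
    """Toy generator, assuming Bob's field widens sub-array A's spread
    for a '1' while the naked control B stays tightly clustered."""
    sub_b = [random.gauss(0, 0.1) for _ in range(n)]
    if bit == 1:
        sub_a = [random.uniform(-1, 1) for _ in range(n)]
    else:
        sub_a = [random.gauss(0, 0.1) for _ in range(n)]
    return sub_a, sub_b

random.seed(1)
sent = [1, 0, 1, 1, 0]
received = [interval_bit(*simulate_interval(b)) for b in sent]
print(received == sent)  # True
```

Even in this idealized toy the decoding is statistical rather than certain, which is why the text recommends large arrays: more particles per interval means tighter dispersion estimates and fewer mis-read bits.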