Alex Drozd

Revisiting Robert Heinlein: Methuselah's Children

One of Heinlein’s early novels, Methuselah’s Children, introduces his “Future History,” a series of interrelated books and stories set in the centuries to come. It’s in this novel that his recurring character, Lazarus Long, first appears. Yet another of Heinlein’s old-man literary alter egos with a proclivity for lecturing young folk, Lazarus Long drives many of the key events of Methuselah’s Children; namely, by convincing his secret society, the Howard Family, to flee their persecution on Earth and reach for the stars.
Through a few centuries of selective breeding, the Howard Family has developed into a sub-species of humanity with an expanded lifespan and a slow aging process. Many members of the Family are over a century old, and yet they blend into ordinary human society by faking their own deaths once their lack of aging begins to arouse suspicion, then moving elsewhere in the world, adopting a new persona, and repeating the process. Some members of the society, though, decide they trust the rest of humanity enough to expose themselves—all in the hope that the Family need no longer live in secrecy—and from there the persecution begins.
From the very beginning, Methuselah’s Children challenges the limits of our system of ethics. The rationale that Bork Vanning—the novel’s antagonist—provides to justify the persecution of the Howard Family is one that might give pause to those who subscribe to a utilitarian morality. Vanning’s argument is that if the Family holds the secret to longer life—a false assumption, yet one they are not given the chance to disprove—then they have a moral duty to share such longevity with the rest of humanity, as withholding such information is akin to letting the rest of humankind die a premature death. And, since the Family refuses to share this information, Vanning argues that the ethical thing to do is to force this information out of them—after all, wouldn’t it be ethical to force a cure out of a doctor who was withholding it from a few hundred people in danger of dying from some deadly disease? Well, in this case, extorting the information from the Family would prevent the premature deaths of a few billion people: the entire population of Earth.
This is an ethical scenario that seems to blur the lines between act and rule utilitarianism. Act utilitarians believe that whatever benefits the greater number of people is always the moral choice, e.g., in the case of Dostoevsky’s Crime and Punishment, kill a rich old lady and take her money to give back to the world. Rule utilitarians believe likewise, but with the added caveat that what benefits society in the long run must also be taken into account when considering such scenarios, i.e., allowing innocents to come to harm in every instance where a greater number of people could derive benefit from the act isn’t conducive to developing a stable society. It isn’t hard to see that rule utilitarianism is an answer to the hypothetical moral dilemmas contrived by opponents of utilitarianism, and yet Heinlein’s scenario presents a challenge to rule utilitarianism itself: from the perspective of Bork Vanning, persecuting and torturing members of the Howard Family until their secret to long life is revealed would have long-term consequences that benefit society. By doing so, the lifespans of all humankind would be at least tripled, and the premature deaths of all members of humanity, until the end of time, prevented.
But of course, as is revealed to the reader, the Family is innocent of withholding such information. Their longevity is simply in their genes and their persecution is as unjust as any other in history. As a rule utilitarian myself, I find Heinlein’s ethical dilemma quite bothersome, and Bork Vanning’s reasoning annoyingly hard to refute while maintaining consistency.
In order to escape this violent persecution, Lazarus and the Family flee Earth on a stolen rocket and head for the nearest habitable exoplanet. Through the use of some hand-waving physics, the Family is able to travel at nearly the cosmic speed limit, undergoing significant time dilation on the way. They arrive on a planet inhabited by a tall race of humanoid aliens, the Jockaira. Friendly and eager to help their human visitors, this species is happy to integrate the Family into their society, but it is then discovered that the Jockaira are not the true rulers of the planet. Instead, they are the equivalent of a domesticated species of animal, ruled by a master-race hidden inside the Jockaira’s mysterious temples. Heinlein’s depiction of this race reveals some excellent world-building: the Jockaira, though obviously intelligent, have limited technology and are unable to explain how much of it works, while also being baffled by some of the more individualistic concepts of humanity, such as privacy. Yet, unable to tame the Family as they have tamed the Jockaira, the master-race of the planet lifts the Family back into their rocket—while never revealing themselves—and the Family then departs yet again to look elsewhere for a world on which they can build their new society.
The second world they arrive at is inhabited by another race of humanoid aliens, though this species is hive-minded. Nicknamed the Little People, their world is incredibly minimalist, and many members of the Family begin to grow uncomfortable around a race that so desperately wants to consume them into its single-minded organism. The Little People prove themselves masters of genetic manipulation, and after they alter a human infant to better fit the Little People’s world and absorb some Family members into their hive mind, the majority of the Family choose to leave and return to Earth.
Upon their arrival back home, the remaining members of the Family discover that humanity has found the secret to long life on their own—more than two centuries have passed due to the time dilation the Family underwent on their interstellar voyage—and that the persecution the Family left behind is no longer waiting for them. It’s a great twist, one that fits the context of the story and also makes for a happy ending—a somewhat rare combo in science fiction—but it also gives Lazarus Long his chance to pontificate on the meaning of life:

“Yes, maybe it’s just one colossal big joke, with no point to it . . . But I can tell you this . . . whatever the answers are, here’s one monkey that’s going to keep on climbing, and looking around him to see what he can see, as long as the tree holds out.”

Revisiting Orson Scott Card's Children of the Mind

Orson Scott Card’s Ender Saga may be one of the most varied book series written to date. The first in the series, Ender’s Game, is a young-adult novel, while its sequel, Speaker for the Dead, explores a mixture of more adult-driven hard sci-fi and philosophical fiction. These two books are some of Card’s most praised works, a duo that made him the first author to win both the Hugo and Nebula Awards for Best Novel in consecutive years—for two books in the same series, no less. I found their praise well-deserved, and when I picked up the third entry in the Ender Saga, Xenocide, I couldn’t help but notice the cover celebrated that it was a nominee for the Nebula. “Why not a winner?” I wondered.
Children of the Mind, the fourth entry in the Ender Saga, almost exclusively deals with the characters introduced in the deus ex machina ending of Xenocide, and readers who found it difficult to suspend disbelief for Xenocide’s ending may have trouble getting through Children of the Mind’s beginning. The story mostly follows Peter and Young Valentine, childhood versions of Ender’s siblings pulled from his mind and manifested in human form. This was brought about by Jane—a supercomputer who possesses an aiúa, Card’s version of the soul—discovering that everything in the universe is connected by philotes, the most basic form of matter. Jane then learns how to harness the energy of philotes in order to achieve faster-than-light travel, and in doing so takes Ender outside the universe, where his thoughts are accidentally turned into creations: Peter and Young Valentine.
The story of Children of the Mind revolves around the fictional planet of Lusitania, where the previous two entries of the series took place. Enraged that Lusitania has disobeyed galactic law—by refusing to turn over its citizens who are guilty of interfering with the planet’s native species—the Starways Congress then learns that anyone who leaves the planet could spread a deadly infection to the rest of the human race. In order to protect the rest of humankind from potential extinction, the congress orders the complete and total destruction of Lusitania by the Molecular Disruption Device, the same weapon Ender used to commit xenocide against his enemies in the first book of the series. This is a common theme throughout the book, the sins of Ender’s past coming back to haunt him, his friends and family facing the same total destruction he dealt the buggers as a child.
In an attempt to save Lusitania from destruction, Peter travels the galaxy—making use of Jane’s ability to instantaneously move him from planet to planet—trying to find a way to convince Starways Congress to abandon its plan. He is accompanied by Wang-Mu, a character from the previous book, and with further help from Jane, they deduce that much of the Starways Congress’s motivation to destroy Lusitania stems from a minority of its Japanese members. Their beliefs are influenced by a remote philosopher’s interpretation of human history, in which global events are depicted as the struggle between edge nations and center nations. The philosopher believes Japan is an edge nation, always fighting for its place in the world and struggling to preserve its culture, while nations like China and Egypt are center nations, whose cultures seem to swallow up even their invaders. The philosopher interprets the bombing of Hiroshima and Nagasaki as Japan’s punishment from the gods for seeking to spread an empire and trying to imitate the center nations of the world. He compares these bombings to Ender’s destruction of the buggers, and believes Lusitania is guilty of the same crimes. Therefore, because Lusitania has overstepped its bounds—like the buggers and Japan once did—it must be obliterated so that the natural hierarchy of edge and center nations can be preserved.
Another part of the book focuses on Young Valentine’s search for the civilization that may have created the pequeninos, the native species of Lusitania. One of the more interesting scenes in the book depicts Young Valentine and her companion, Miro, having an argument over their feelings for one another. Young Valentine is Ender in essence, as she is made from part of his aiúa, and therefore Miro’s love for her is really a love for Ender. Card uses this scene to raise the question of whether the soul is gendered. The question provides amusing food for thought in itself, but it also provides additional perspective on the debate between dualism and monism. After all, if the soul is immaterial and independent of the body, then romantic love should be possible from one individual towards any other, regardless of gender. In a way, the existence of sexual orientation and preference is evidence for a set of limits to the soul, as love—a communication from soul to soul in the immaterialist view of the world—is dictated by material agents, such as the hormones produced in the brain.
However, it is evident that Card doesn’t share this interpretation, as he argues against a materialist interpretation of reality throughout the book—and his solution to faster-than-light travel certainly demonstrates his belief in things outside the realm of nature. Yet, the question of what dictates a person’s choices is one his characters struggle with throughout the book. As they overcome their challenges and face more hardships than most human beings could handle, they constantly question to what extent they are controlled by their nature or their nurture—and whether or not the will of their aiúa has any influence on their decisions at all.

The Realist Interpretation of Quantum Mechanics

Before the quantum revolution, the scientific depiction of the natural world was a deterministic one, i.e., once all the initial parameters of a physical system were known, the evolution of the system could be predicted with exact precision. It was this ability to make exact predictions derived from empirical knowledge that made up the backbone of science, with the field of physics painting this deterministic picture of the world on the fundamental level. From describing the motions of the stars, to the behavior of the atoms which made up our bodies and the materials around us, physics had an advantage over the other sciences, such as biology and chemistry, in that its precision was unmatched, e.g., the speed with which an object would hit the ground could be calculated exactly, while how the human body would respond to a certain chemical couldn’t always be precisely predicted. Even in statistical physics–thermodynamics–where the components of a system were too numerous to treat individually, a deterministic view was suggested. Though an ensemble of particles may approach the innumerable, nothing in the nature of thermodynamic theory suggested that the trajectories of these particles were fundamentally unknowable; it was simply a practical matter to treat the system statistically rather than to treat each molecule individually, though, in principle, each molecule could be isolated and its properties measured. It was this line of reasoning that would inspire the realist position following the quantum revolution.
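To make the classical promise concrete (an illustration of my own, not an example from the text): an object dropped from rest through a height h strikes the ground at a speed fixed entirely by the initial conditions,

\[
v = \sqrt{2gh}, \qquad \text{e.g. } h = 20\ \text{m},\ g = 9.8\ \text{m/s}^{2} \;\Rightarrow\; v = \sqrt{2 \times 9.8 \times 20}\ \text{m/s} \approx 19.8\ \text{m/s},
\]

with no probabilities appearing anywhere in the calculation.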
But once it was shown that Niels Bohr’s model of the atom was incorrect, as was Schrödinger’s model of the electron as a continuous stream of charge distributed around the atom, physical models of theories began to lose precedence in physics.1 Mathematical formalism began to take the stage in the atomic realm–this being so because no physical model seemed able to describe what was being measured by experiment when it came to sub-atomic particles. Electrons, when treated probabilistically, were now shown to obey wave equations, and their characteristics, within certain limits, could be measured. This treatment introduced limits that were at odds with previously established principles in physics, and much debate has gone into what the wave equation of a particle actually represents physically. What quantum theory suggested was that the location of a particle could not be predicted beyond the realm of probability (as a matter of principle, not just practicality), and that measurable quantities, such as position and momentum, could not be simultaneously known with arbitrary precision, i.e., knowledge of one forbade knowledge of the other. This concept was mathematically formulated in Heisenberg’s uncertainty principle, originally published in the German physics journal Zeitschrift für Physik in 1927–and it’s been a thorn in the philosopher’s side ever since.
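For reference, the wave equation in question is Schrödinger’s, and the probabilistic reading attaches to the square of the wave function (a standard textbook statement rather than anything quoted from the sources cited here):

\[
i\hbar\,\frac{\partial\psi}{\partial t} \;=\; -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi,
\qquad
P(\mathbf{r},t) \;=\; |\psi(\mathbf{r},t)|^{2},
\]

where |ψ|² gives only the probability density for finding the particle at a given location, not a definite trajectory.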
To the classical physicist (and to the aforementioned philosophy of determinism), these ideas were anathema. It was one thing to say that it was impractical to measure a certain property of a particle, e.g., the trajectory of a specific air molecule, but it was another thing to say that, in principle, a particle’s property couldn’t be measured–that nature was imposing limits as to what it revealed about itself on the fundamental level. If particles’ trajectories were fundamentally random, and the uncertainty principle was a fundamental law of nature, a deterministic view of the universe was now an anachronism. In response to this new, stochastic view of the universe, Einstein made his famous “God does not play dice” statement,2 illustrating his view that the true trajectory of a particle was not a matter of uncertainty, but that it depended on the initial conditions of the system, and that if those conditions were known, its trajectory could be predicted and described precisely.
Yet, despite these classical and philosophical objections, quantum theory has remained supreme in its respective realm. Its predictions about the fundamental indeterminism of our universe on the atomic scale have been experimentally verified, and though we may not like it, it seems probability governs our world–not simple, linear cause and effect as was previously thought. Even the phenomenon of quantum entanglement, a consequence of the mathematical formalism of quantum mechanics that Einstein singled out as paradoxical, has been physically demonstrated.3 Today, most physicists have capitulated to the inherent, counter-intuitive realities of nature that the Copenhagen and other non-deterministic interpretations of quantum mechanics suggest. It is widely accepted that measuring a quantum mechanical system affects the system, that “true” particle trajectories do not exist, that matter and light particles are also waves, and that an electron can be in two places at once. These phenomena are both what we observe and what the math tells us, and therefore, the physics community must roll with it.
But this hasn’t stopped a small minority of physicists from clinging to a deterministic universe; this interpretation is known as the realist position, or the realist interpretation of quantum mechanics. The view is not a denial of the realities of quantum theory as evidenced by its numerous experimental confirmations–this can’t be emphasized enough–but an insistence that the picture of the quantum realm is not complete, all of which hinges on the grounds that quantum mechanics, though useful and consistent, has yet to provide a physical model of the universe–or at least one that makes even a bit of sense.
Quantum mechanics is a theory without a clear publication date or founder. The formation of the theory consists of the aggregated works of many early twentieth-century physicists, such as Niels Bohr, Enrico Fermi, Erwin Schrödinger, Wolfgang Pauli, and Richard Feynman.4 Even Albert Einstein, already noted as a later opponent of the theory, couldn’t help but contribute to its formation. His work on the photoelectric effect, for which he received the Nobel Prize in Physics in 1921, helped establish the modern understanding of discrete electron energy levels in the atom–central to what quantum mechanics describes and is used for–and the relationship between the energy and frequency of light.5
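The relationship referred to here is the textbook photoelectric relation (standard form, not drawn from the cited source):

\[
E_{\text{photon}} = h\nu, \qquad K_{\max} = h\nu - \phi,
\]

where ν is the frequency of the light, φ is the work function of the metal, and K_max is the maximum kinetic energy of an ejected electron; no electrons are ejected unless hν exceeds φ, no matter how intense the light.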
Another one of the early physicists who helped construct quantum theory was Louis de Broglie. His initial work was on the theoretical development of matter waves, presented in his 1924 PhD thesis. In this brave and groundbreaking doctoral defense, de Broglie predicted that all matter had an associated wavelength, this wavelength becoming more salient as the scale of the matter involved decreased, i.e., it wouldn’t be obvious for cars and baseballs, but it would be for sub-atomic particles. This prediction was confirmed by the Davisson-Germer electron diffraction experiments at Bell Laboratories–a serendipitous discovery–and de Broglie was awarded the Nobel Prize in 1929 for his insight into the wave-particle duality exhibited, not only by light, but by matter as well.
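De Broglie’s relation, with a pair of illustrative numbers of my own choosing to show the scale dependence described above:

\[
\lambda = \frac{h}{p} = \frac{h}{mv}: \qquad \text{a 0.145 kg baseball at 40 m/s has } \lambda \approx 1\times10^{-34}\ \text{m, while an electron at } 10^{6}\ \text{m/s has } \lambda \approx 7\times10^{-10}\ \text{m},
\]

the former hopelessly unobservable, the latter comparable to the spacing of atoms in a crystal–which is why electron diffraction could be seen at all.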
If de Broglie’s ideas about the wave-particle duality of all matter were true, they posed a challenge not just for physics, but for the philosophy of science as well. If an electron has a wavelength, then where is the electron, or better, where is the wave? The answer isn’t clear because waves are spread out over a range of space. In order to define a wavelength, one loses the ability to define a position and vice versa. Yet, an electron still can have a defined position as demonstrated by experiments which reveal its particle-like nature; particles aren’t spread out in space. It was from these considerations that Werner Heisenberg developed the famous and already mentioned uncertainty principle. To define a position, an experimentalist must forfeit information about its wavelength (momentum) or vice versa. It was the development of this principle that marked the downfall of determinism in science.
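Stated in its modern textbook form (Heisenberg’s 1927 paper wrote it slightly differently), the principle reads

\[
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
\]

so the more sharply a particle’s position is pinned down, the more poorly defined its momentum–and hence its wavelength–becomes, and vice versa.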
Yet, de Broglie did not originally believe that the probabilistic wave treatment of matter warranted an indeterministic interpretation of the universe. In 1927, around the same time his matter-wave theory was being experimentally confirmed, he proposed the pilot-wave theory, a suggestion that the wave equation in quantum mechanics could be interpreted deterministically. Though he eventually abandoned this interpretation to follow a more mainstream one, the theoretical physicist David Bohm would later continue his work, and the pilot-wave theory would also become known as the de Broglie-Bohm theory.6
In 1952, while employed at Princeton, Bohm published a paper which espoused his realist interpretation of quantum mechanics. In the paper, he suggested the idea that quantum theory was incomplete and that “hidden variables” were not taken into account in its formulation. These hidden variables would explain why the theory was so far probabilistic, and if they were taken into account, the predictive capabilities of the theory would become exact. That is, he believed there were more parameters to consider in the wave equation, and quantum theory had so far failed to predict exact results because not all of the pertinent variables were accounted for. (This is analogous to trying to measure the total kinetic energy of the Earth while only considering its linear kinetic energy and not its rotational energy. You won’t get a precise answer until you account for both.)
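The analogy in the parenthesis, written out (my own illustration):

\[
E_{\text{kin}} = \tfrac{1}{2}Mv^{2} + \tfrac{1}{2}I\omega^{2},
\]

where leaving out the rotational term ½Iω² gives an answer that is systematically short, just as, on Bohm’s view, leaving out the hidden variables leaves quantum predictions merely statistical.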
Bohm suggested introducing a “quantum mechanical potential energy” to begin a new mathematical treatment of the theory. The double-slit experiment, in which a single electron passes through a single slit exhibiting particle-like properties, while passing through a double-slit exhibiting wave-like properties, could be explained by postulating that the quantum mechanical potential energy of the system was changed when the second slit was opened or closed. The realist’s goal was to then discover the hidden variables and physical phenomena that would induce this change in the said potential energy of the system. In particular, Bohm pointed out that an expansion of the theory in this direction might be needed to solve the problems of quantum mechanics on the nuclear scale, where its laws broke down, and that developments in the direction of a more complete formulation of the theory would expand its domain.7
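In the standard presentation of the de Broglie-Bohm theory (summarized from the usual textbook treatment of Bohm’s 1952 papers rather than from the essay’s sources), the wave function is written in polar form and the classical potential picks up an extra term:

\[
\psi = R\,e^{iS/\hbar}, \qquad \mathbf{v} = \frac{\nabla S}{m}, \qquad Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R},
\]

where each particle follows a definite trajectory with velocity v, and Q–the quantum potential–depends on the shape of the whole wave function rather than on its intensity, so opening or closing the second slit changes Q, and with it the trajectories, even though each electron passes through only one slit.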
Bohm also expressed his view that though quantum mechanics was useful, consistent, and elegant, it did not justify the abandonment of determinism–the philosophy behind every field of science, not just physics. To Bohm, nothing in the theory suggested that the Copenhagen–mainstream–interpretation revealed the physical reality of nature, but rather that the theory was still developing, and that, in addition to all the theoretical complications, the instruments used in the experimental verification of the theory were bound to interfere with the precision of the measurements. After all, this was the first time in history that objects of such small size were being precisely measured for their exact location and properties. Renouncing a deterministic world view that the rest of science reinforced didn’t seem justified simply because a practical theory which suggested otherwise had been developed. Bohm, like Einstein, was sure a more complete and physically-sensible theory would one day supplant it.
In fact, Einstein didn’t wait for the future; even after having already developed his groundbreaking theory of relativity and winning the Nobel Prize for the photoelectric effect–it’s widely thought he won it for the former, not the latter–Einstein continued his work in theoretical physics, his eyes set on bringing absolute determinacy back into science. In 1935, Einstein, along with his colleagues Boris Podolsky and Nathan Rosen, published a paper arguing that the quantum mechanical description of reality by the wave function is incomplete.8 In the mathematics of quantum mechanics, Einstein and his colleagues found a paradox, one that predicted the phenomenon of two or more particles becoming “entangled.” This meant that two or more particles would sometimes need to be described by a single quantum state, even when the respective particles were separated by a distance far larger than is usually dealt with on the quantum scale–so large that a light signal could not carry information about the shared state from one particle to the other in time for them to respond accordingly, and the transmission of information is limited by the finite speed of light.
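The simplest example of such a shared state is the two-particle spin singlet (a standard illustration; the original EPR paper was framed in terms of positions and momenta, with the spin version due to Bohm):

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|\uparrow\rangle_{1}|\downarrow\rangle_{2} - |\downarrow\rangle_{1}|\uparrow\rangle_{2}\bigr),
\]

for which a spin measurement on particle 1 immediately fixes the outcome of the same measurement on particle 2, however far apart the two have drifted.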
This meant that, for entanglement to occur, action at a distance was required, a concept regarded as untenable in most fields of physics–and one that bothered the ancient Greek philosophers as well. It suggested that the physical system in question was non-local, and that for action at a distance to occur, the principle of locality must be violated. The importance of this principle rests in the assumption that in order for information to be transmitted between two objects, something must do the transmitting. Be it a particle, a field, or a wave, the information must be physically communicated somehow.
In 1964, the physicist John Stewart Bell proposed a theorem addressing the possibility of such non-local quantum effects and quantum entanglement. Bell’s result showed that no theory of local hidden variables could reproduce all the predictions of quantum mechanics: postulating such variables would not merely add to the theory, but contradict it.9 Going into the technical details of Bell’s Theorem is beyond the scope of this article, but its predictions concerning the non-locality of the quantum world were experimentally proven to be true–though proving the nonexistence of hidden variables outright would mean proving a negative, something beyond the capabilities of science, at least in its current philosophical form. Quantum entanglement was experimentally verified, proving Einstein and his colleagues wrong on this point and making the physical paradox they predicted a reality.10
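Without going into the derivation, the constraint that Bell-type experiments test can at least be written down (this is the CHSH form, a standard later variant rather than Bell’s original 1964 inequality):

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics)},
\]

where E(a,b) is the measured correlation between outcomes at detector settings a and b. Experiments on entangled pairs report values of |S| above 2, in line with the quantum prediction and incompatible with any local hidden-variable account.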
Today, there are an appreciable number of physicists who subscribe to the realist interpretation, an esoteric view in the already esoteric discipline of quantum physics. Dr. Emilio Santos of the University of Cantabria is one of the leading physicists to hold this view. Yet to be convinced that Bell’s Theorem refutes the possibility of Bohm’s hidden variables, Dr. Santos posits that the apparent stochasticity of the quantum universe is due to the interference of measuring apparatuses with their systems in quantum mechanical experiments, as well as the presence of vacuum fluctuations in space-time.1
His conception of the uncertainty principle stems from the unavoidable reality that, in a quantum system, the researcher must examine a microscopic object–which obeys quantum-mechanical laws–while using a macroscopic measuring tool–which obeys Newtonian laws.3 So far, no known theory can link the two realms together. To work around this difficulty, Niels Bohr, one of the first developers of quantum mechanics, proposed the correspondence principle: as we go from the quantum world to the classical or macroscopic one, taking the limit of Planck’s constant as it approaches zero, quantum laws transition into classical ones.1,3 However, it is philosophically contradictory to claim that some aspects of our universe are deterministic and others are not, as determinism implies that all components of a system have predictable, causal trajectories. It seems odd to claim that predictable systems are built on unpredictable foundations. Though he does not state this explicitly in his papers, it’s apparent that Dr. Santos doesn’t subscribe to Bohr’s correspondence principle, and he believes the radically different natures of the experimental system and the measuring apparatus are more to blame.
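One way to picture that limit (an illustration of my own, reusing the de Broglie relation from earlier): the wavelength attached to any moving body is

\[
\lambda = \frac{h}{mv} \;\longrightarrow\; 0 \quad \text{as } h \to 0,
\]

so as Planck’s constant is taken to zero, interference effects vanish and sharply defined classical trajectories are recovered.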
In addition to the apparatus problem, Dr. Santos holds it much more likely that the ostensible indeterminacy of quantum mechanics arises from vacuum fluctuations.1 He attributes the apparent probabilistic nature of quantum theory to the inherent difficulties in practically measuring particles on such small scales, where the space they inhabit itself affects the system. Even vacuums participate in quantum mechanical activity, and because there are no discontinuities in ordinary space, no system can be truly isolated or claim to be local. To Dr. Santos, while non-locality must be accepted, this does not preclude a realist interpretation of quantum theory, as it does not prove inherent, natural limits to the knowledge we may possess of any physical system; it simply suggests that the systems we study are full of too much background “noise” to precisely measure any individual particle–in the same way there’s too much noise in a crowded room to precisely record any one particular conversation. Dr. Santos suggests that, until a physical model is proposed or an advancement in the mathematical formalism of the theory suggests a realist interpretation, quantum mechanics is incomplete. He says, “I do not propose any modification of that core, but claim that the rest of the quantum formalism is dispensable.”1
It would be interesting to note the technological implications the realist interpretation would have for the modern field of quantum computing. Ordinary computers make use of binary, reducing all stored data to a collection of ones and zeroes arranged in a particular order for each datum. Relatively speaking, computers are limited by all their processes simply being a collection of yes or no, on or off, binary statements, which the computer has to read all the way through in order to perform any command.
Quantum computing would overhaul this limitation of binary by taking advantage of the wide range of quantum phenomena available to us. Instead of a computer going step by step through a collection of yes or no statements, the processors could take advantage of quantum entanglement and perform a number of different computational processes simultaneously–something impossible in binary. The fundamental indeterminacy of quantum mechanics makes these wild processes possible. Instead of electric transistors converting circuit data into binary–current flowing here or there–quantum computer chips exploit the fact that, however counterintuitive, an electron can be in several places at once, in effect exploring several different circuit pathways and contributing to several different computations simultaneously. While no quantum computer of worthy application has been developed, such devices do exist, and it’s only a matter of time until their capabilities supplant those of digital computers. Already, quantum computing data is stored in something called a “qubit,” the quantum mechanical datum to replace the binary computing “bit.” So far, quantum computers can only handle a measly 16 qubits, but most developers in the field are confident an expansion of quantum computing capabilities is on the horizon.
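To make the qubit idea slightly more concrete, here is a minimal state-vector sketch in plain Python with NumPy–a toy simulation of my own, not code for any real quantum-computing hardware or library. It prepares two qubits in an entangled Bell state and samples measurements, showing the perfectly correlated outcomes that ordinary bits cannot reproduce.

```python
import numpy as np

# Toy two-qubit state-vector simulation (illustrative only).
# Basis ordering: index = 2*q0 + q1, i.e. |00>, |01>, |10>, |11>.

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)      # Hadamard gate: puts a qubit into superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],            # controlled-NOT: flips qubit 1 when qubit 0 is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                            # start in |00>
state = CNOT @ np.kron(H, I) @ state      # Bell state (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2                # Born rule: probabilities of the four outcomes
samples = np.random.choice(4, size=10, p=probs)
print([format(int(s), "02b") for s in samples])  # only '00' and '11' ever appear
```

Each run of the last line prints ten measurement results, and the two simulated qubits always agree–the kind of correlation the entangled processors described above are meant to exploit.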
It is somewhat unclear what a deterministic revolution in quantum theory would mean for quantum computing. It would all depend on what exactly the hidden variables and their physical reality represented. Would the discovery of the hidden variables reveal that, in actuality, an electron cannot be in two places at once? This is unlikely, as experiment has revealed such a phenomenon to occur, but then again, what if the hidden variables revealed that our measurements did indeed influence the physical systems being measured beyond the limits of forgivable scientific error, and that our measurements effected the so-far paradoxical results of electrons–and the rest of matter–having almost phantom-like properties? Alas, because the realist interpretation of quantum mechanics is not the focus of many researchers in quantum theory, its implications with respect to quantum computing have not been fully considered. It could either expand or kill the field. Maybe the reason quantum computers cannot deal with more than 16 qubits is that we are asking nature to do something that is fundamentally against its mechanics, despite the fact that our tentative mathematical theories suggest it is possible.
But technological considerations aren’t the only ones to be had concerning the realist interpretation of quantum mechanics. The philosophical and religious implications of the realist interpretation versus the more mainstream, Copenhagen interpretation are quite profound, and the debate between determinism and uncertainty in quantum mechanics has inspired many philosophers to consider what each interpretation means for the limits of human free will. If the laws of nature are completely deterministic, and every event in the history of the universe can be traced back through particle trajectories to the big bang, then it follows, through inductive reasoning, that all events, even human thoughts, wants, and actions, are simply the reactions of atoms and molecules to physical laws, leaving no room for unnatural agents to participate in the system. In this view of the universe, one doesn’t make choice A instead of B because one is a free agent in a universe of otherwise natural laws; one makes that choice because the information about those two choices induced a certain chemical reaction in the mind of the chooser (the mind is made of atoms as well), and in the same way a rock falls under the influence of gravity, the matter that composes the human mind reacts under the influence of causal particle mechanics.
But if the universe is indeterministic, as suggested by the mainstream interpretations of quantum mechanics, it means human choices aren’t predetermined, and this indeterminacy ostensibly leaves room for human influence. Yet, it remains to be shown how this position can be maintained. Even if all human decisions and actions were not determined at the moment of the big bang, and all the events in the universe could be reduced to the unpredictable nature of stochastic particles, this leaves nowhere for a non-natural influence–free will–to come into the picture. Human choices are still the result of particle trajectories, whether or not those trajectories can be determined, and whether or not the trajectories of those particles are linear or non-linear. Until some unnatural agent is introduced into the complex but natural configuration of the human mind–unnatural in that it would be outside the laws of nature–the position that humans have free will cannot be maintained without appealing to notions of the supernatural. And nature does not approach the supernatural as its systems approach complexity, even the complexity of the human mind. To claim otherwise is to claim that the molecules which make up the brain follow different physical laws than the rest of the molecules in the universe. And if you disagree, I can’t blame you; it’s not like you had a choice in the matter anyways.
But philosophical debate aside, as one of the most successful and useful theories in all of theoretical physics, quantum mechanics does seem to suggest the indeterministic realities of nature. We get our understanding of semiconductors, the materials that power your smartphone, through quantum mechanics, and we can’t discard the probabilistic elements without discarding our understanding of the theory altogether. In physics, where experiment is king, and in science, where nature is under no obligation to make sense to us, it seems stubborn to ignore the continuing theoretical and experimental verification of the probabilistic nature of the universe. Yet, the idea that this limit is one of practicality, not principle, is a hard one to overcome. Human science has reduced every other aspect of the universe down to the simple but fascinating level of causal mechanics; it is tempting to say that quantum mechanics will one day reach this point as well.
References
1 E. Santos, Foundations of Science, 20, 357-386 (2015) or arXiv:1203.5688 [quant-ph].
2 W. Hermanns, Einstein and the Poet: In Search of the Cosmic Man, Branden Books; 1st Ed. (2013), p. 58. 

3 V. Singh, Materialism and Immaterialism in India and the West: Varying Vistas (New Delhi, 2010), p. 833-851 or arXiv:0805.1779 [quant-ph]. 

4 J. Mehra, H. Rechenberg, The Historical Development of Quantum Theory (New York, 1982). 

5 “Albert Einstein – Facts.” N.p., 2017. Web. 24 Feb. 2017. 

6 F. David Peat, Infinite Potential: The Life and Times of David Bohm (1997), p. 125-133. 

7 D. Bohm, Phys. Rev. 85, 166 (1952). 

8 A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935). 

9 H. P. Stapp, Nuovo Cim B 29, 270 (1975). 

10 A. Witze, “75 Years of Entanglement.” Science News. N.p., 2017. Web. 24 Feb. 2017. 


Revisiting Isaac Asimov's Foundation Trilogy

Originally serialized in Astounding Science Fiction during the 1940s, Isaac Asimov’s Foundation series is one of his most widely known works. Yet, many of the people I’ve come across have never heard of Foundation. Instead, they are familiar with his robot stories. Perhaps the Hollywood adaptation of I, Robot is the reason for this, a loose adaptation which led those unfamiliar with his work to look up the Three Laws of Robotics rather than the principles of psychohistory.
Foundation and its two sequels are some of Asimov’s richest novels where intellectual value is concerned. The classic trilogy addresses sociology, history, futurism, and the limits of humankind’s intellectual ability on a galaxy-wide scale. While Asimov later came back to the Foundation series to add several polarizing prequels and sequels, the original trilogy is worth considering on its own, mostly because all three of the books could so easily pass as one single work.
The story begins in an almost portal fantasy-like fashion—a genre in which someone from our world is thrust into an unfamiliar universe—except Gaal Dornick, the first character to appear in Foundation, isn’t from our world at all, but originates from a small and fictional planet some twenty-thousand years in the future. Yet, despite our lack of familiarity with Gaal’s world, we relate to the awe he feels when he first arrives on Trantor, the planet-wide city at the heart of the Galactic Empire. Though Gaal is from a world whose technology would seem like magic to us today, he is blown away by what he sees on Trantor, a world on the forefront of human advancement and civilization. Asimov demonstrates a level of story-telling expertise with this opening scene, and the stage is set perfectly for what is to come. We soon learn that the big and beautiful Galactic Empire we have just been shown is on the brink of destruction.
To resolve the conflict, Asimov’s characters introduce the concept of psychohistory. With the galaxy heading for an age of darkness following the collapse of its empire, the mathematician Hari Seldon—who was possibly Asimov’s literary alter-ego—devises a way to quantify sociological trends to make probabilistic predictions of the future. Though his theory of psychohistory is limited in its ability to predict the actions of individuals, Seldon is able to spot the influential socio-political events that will shape the course of galactic history. Pushing his theory to the limit, Seldon produces the optimal solution to the problem: a set of events that, if carefully orchestrated in the right sequence, will reduce the time of darkness in the galaxy from thirty thousand years to only a thousand.
Accused of being disloyal to the Empire, Seldon is banished to Terminus. There, he creates his Foundation, a society of individuals dedicated to ensuring his plan for the future is carried out, even hundreds of years after his death. Soon, the collapse of the galactic empire begins and the Seldon Plan goes into action.
The concept of psychohistory raises a set of questions we should ask ourselves as a species. So far, in the course of human history, the limits of human inquiry and science have proven transient; what was impossible one hundred years ago is not what is impossible today. Assuming our abilities in both the natural and social sciences continue to grow, it can only be expected that we might someday develop the means to quantify historical and sociological trends in order to predict, if not the future, at least the general course of history to come. In a way, we already seem to be on course for this. Some statisticians have been reasonably successful in predicting the outcomes of sports games and political elections; imagining they might go from being right two times out of three to nine times out of ten isn’t difficult. And, once they’ve reached that level of probabilistic certainty, it isn’t difficult to imagine them going even further given more time.
What does our species do with such an ability should we ever develop it? What would the consequences be? The question is examined in Asimov’s Foundation sequels: Foundation and Empire and Second Foundation. Inevitably, Asimov was pushed by his editor John Campbell to introduce a source of conflict into the story—good guys who always win because they can control the future don’t make for the most engaging plot—and the Mule is introduced in the series’ second book. A mutant with the ability to control the emotions and minds of those around him, the Mule is unaccounted for in Hari Seldon’s original plan. From there, the sequence of events the Foundation was meant to bring about is derailed.
But Hari Seldon, having the foresight to expect events outside of his theory’s statistical certainty, devised a contingency plan and a way to strengthen the abilities of the Foundation itself: the Second Foundation. This, it turns out, was where the real meat of his plan lay. If his progeny were to know the future, the information which revealed it would have to remain private; otherwise, the very act of knowing the future might tempt the agents in play to change their minds about what they were going to do. Behind the scenes, the Second Foundation ensures the Seldon Plan is carried out, even influencing the first Foundation when they have to.
Much of Foundation and Empire is about the Mule’s search for the Second Foundation, hoping to take control of the galaxy and the course of history for himself. One of the most captivating scenes in the book is when the Mule verbally acknowledges to a man under the control of his powers that the man’s actions are only the product of his mental manipulation. In response, the man voices that he is aware he is enslaved, but continues to profess his loyalty and his wish to fulfill the Mule’s orders. It’s a powerful scene, and one that reveals a lot about Asimov’s materialistic view of the natural world. The Mule is able to remove a person’s free will simply by tweaking their neural activity. But the scene also speaks to the bigger theme of Foundation: the control of humankind. While Hari Seldon’s Foundation is only capable of controlling cities, societies, and planets, the Mule is capable of controlling every individual he approaches, and the two sides employ these means of control to fight for mastery of the human race.
Asimov’s attitude to such a power—the ability to manipulate the minds of individuals—is revealed by the tone of the book. Foundation and Empire is arguably his darkest work, and while its twist at the end is disappointingly predictable, the choices his characters make to prevent the Mule from discovering the Second Foundation are excitingly desperate. Perhaps this speaks to a sentiment and fear an intellectually inclined individual such as Asimov had: you may manipulate the masses, but do not dare to manipulate me. I, for one, struggle to find the distinction between the two forms of manipulation. Either way, in effect, someone is manipulated in the mind; it’s only the extent of that manipulation which differs between the Foundation and the Mule.
Asimov confronted this issue more directly in the third book of the trilogy, Second Foundation. This book exhibits an obvious shift in tone from its predecessor, and the ability to control individual minds is no longer associated with evil. Hari Seldon’s Second Foundation, the focus of the final entry in the trilogy, controls the course of events by the subtle mental manipulation of certain key individuals in galactic politics—albeit to a much less invasive extent than the Mule. In the first half of the novel, the Second Foundation and the Mule go head to head, with the Second Foundation winning out. The second half of the novel concerns itself with the Second Foundation’s struggle to preserve their secrecy from the first Foundation, a society that either wants to put all its trust in the Second Foundation, or doesn’t want to be controlled by the Second Foundation at all—and would rather do the controlling itself.
To me, this is the most philosophically interesting portion of the trilogy, and the one most open to interpretation. Asimov was a known humanist and atheist, and his opinion on religion is already on display in the first Foundation book, in which a church that worships the “galactic spirit” is spawned as a means of controlling a portion of the galaxy’s population and effecting the next event in the Seldon Plan. Now at the end of the trilogy, the characters of the two Foundations are presented with a dilemma. If the first Foundation is aware of the presence of the Second Foundation, then the first Foundation may become complacent. Feeling that the Second Foundation will ensure everything turns out the way it needs to, the first Foundation may lose its motivation to carry out the Seldon Plan, in turn preventing the Second Foundation from carrying it out as well—for the Second Foundation’s influence is much more subtle and in need of a front-end force to do the meat of the work. Perhaps Asimov is appealing to humanism here, for the same could be said of humankind. If we humans believe there is a god who will make everything right in the end, we will be less motivated to make things right ourselves. Faith, instead of providing solace, could be encouraging us to accept this broken world as it is because a better one awaits us. It allows us to accept the mass suffering in this world, not out of callousness or indifference, but out of patience. If we just wait, things will be better. Much like the first Foundation putting too much faith in the Second Foundation, humankind could be putting too much faith in its deities.
Yet, it could also be interpreted another way. If we interpret Second Foundation by strong analogy, then we accept that God exists in said analogy—after all, the Second Foundation exists in Asimov’s universe. Perhaps, in the same way the Second Foundation can’t reveal themselves, God can’t reveal himself because it invalidates the need for faith, the need for humankind to take it upon themselves to ask for forgiveness. If God revealed himself, and by doing so influenced all worldly things, he would be disrupting his own plan. There would be no need for free will and no need for man.
Either way, in the story itself, the Second Foundation is limited by the very function they are supposed to serve. To work in the shadows, they must give up the effectiveness of working in the light. And though the trilogy ends with the Second Foundation preserving their secrecy and setting the Seldon Plan back on track, it ends on an unsettling note for the reader willing to think about the implications.
Perhaps, in the chaos that is our world, a select few should hold all the power in secrecy and ensure society and order are preserved. Plato certainly appealed to this line of reasoning in The Republic, criticizing democracy and suggesting that a benevolent philosopher-ruler should govern all humankind. The question is, who should that benevolent ruler be? How could Hari Seldon be sure someone like the Mule wouldn’t take control of the Second Foundation?