Vladislav Pedder. Processual Pessimism. On the Nature of Cosmic Suffering and Human Nothingness
To begin with, contemporary cosmology proposes several competing hypotheses about the origin of the Universe, yet in all of them the question inevitably arises concerning the source of the primary energy that gives rise to being. The most widespread hypotheses are those in which our Universe arises from the “remnants” of a preceding one – whether as a singularity of a pre-universe, a flare at the point of gravitational collapse, or a tunneling effect within a multiverse. There are even concepts linking the birth of new universes to the interiors of black holes; the idea of “cosmological natural selection” (Lee Smolin) has been developed through formal analogies with biology. Yet even here a “parent” Universe is required, within which a black hole capable of generating another Universe first appears. All of this stems from the fact that we have not yet succeeded in unifying quantum mechanics and gravity, nor resolved the fundamental questions concerning the nature of space, time, and energy. But even if we were to uncover all the secrets of the Universe – what then? Would that solve the fundamental problem of our situation?

Modern science increasingly turns to bold hypotheses such as the multiverse, attempting to explain the emergence of being through the existence of countless worlds or prior cosmic entities. Progress demands a willingness to consider the most daring ideas, even when they initially appear speculative. Perhaps the many-worlds interpretation, or some other multiverse hypothesis, will prove closer to the true nature of reality. Yet there also exists a more economical explanation, which will be discussed below. Whichever approach proves more accurate, for us this Universe remains the only reality accessible within the limits of our lives: we will most likely never gain direct knowledge of any other. And although we are confined to this reality, that limitation does not nullify the fruitfulness of bold conjectures.

We must return to the question of why anything exists at all – the question “Why is there something rather than nothing?”, posed since the time of Parmenides and taken up by Leibniz, Wittgenstein, and, of course, Heidegger, who called it “the fundamental question of metaphysics.” Among all cosmological hypotheses, one in particular is especially compelling to me – the most radical and, paradoxically, the most internally consistent. Contemporary cosmology allows for the possibility that the Universe could have arisen without any external cause. This is one of the working hypotheses of modern theoretical physics16; it describes the emergence of spacetime from absolute Nothingness. Of course, this is neither the only nor the final account of the origin of the Universe. Its appeal lies not in “solving” the metaphysical riddle of being, but in showing how stable, differentiable configurations can arise from physical non-structure without recourse to transcendental explanations.

This absence of structure is understood as a pre-cosmic state – absolute NOTHING, in which space and time themselves do not yet exist. These ideas were developed by the cosmologist Alexander Vilenkin, who showed that, in the case of a closed universe with zero total energy, “nothing prevents such a universe from spontaneously arising from nothing.” Much later, the physicist Lawrence Krauss gave the hypothesis popular form. In his book A Universe from Nothing, he describes the birth of the Universe, understanding Nothingness as a vacuum in which quantum fluctuations arise. Critics rightly noted that Krauss’s “Nothing” is a conceptual substitution that says nothing about the origin of the vacuum itself. Krauss sidestepped this question, but the implied answer could be formulated as follows: the vacuum, like the quantum fluctuations within it, emerged from absolute Nothingness. He maintains that observational evidence is consistent with “a universe that could have, and plausibly did, arise from a deeper nothing – including the absence of space itself…”. Both authors emphasize the impersonal, anti-anthropological character of such a scenario: the world’s emergence from nothing presupposes no external design and no purpose behind its creation.

But how can something arise from absolute nothing? The key lies in the property of Nothingness itself: it is paradoxically unstable. Contrary to the intuitive image of emptiness as something absolutely static and eternal, quantum physics shows that a state of complete absence is not rest but a tense indeterminacy. “Nothing” cannot remain nothing, because the very notion of “remaining” already presupposes time – and there is no time there. This follows mathematically from the fact that, in the absence of spacetime, there are no constraints capable of holding non-being in its “zero” state. And here a profound irony becomes apparent: the energetically most favorable state is not emptiness, but existence. A universe with a zero energy balance (where the positive energy of matter is offset by negative gravitational energy) is physically “simpler” than absolute Nothingness, because it resolves the fundamental contradiction of non-being.

Yet the Universe that comes into being is not in equilibrium. It is born in a state of extremely low entropy – ordered, non-equilibrium, saturated with free energy. From that moment on, its irreversible movement begins toward the very stable state that is physically more favorable than non-being: toward maximum entropy, toward heat death, toward absolute equilibrium. The Universe seems to be “completing” its transition out of nothingness by approaching the most stable configuration. It cannot return to non-being – thermodynamics forbids it. All that remains is to move forward, dissipating energy, increasing disorder, and drawing nearer to a state in which nothing further happens, where everything is balanced, still, and inert.

Thus, to reiterate – “Absolute Nothing” is understood as a state of radical absence of space and time, not merely as a vacuum with quantum fluctuations within a given geometry, as in Krauss’s formulation. In such a state there are no classical fields, particles, or “arrow of time,” yet quantum-cosmological methods allow one to define its wave function. Using the equations of quantum field theory, it can be shown that even from this “zero” configuration there exists a nonzero probability of transition to a state with a finite geometry. You, like me, will probably wonder how physical laws can be applied to “Nothing” if there is nothing in “Nothing.” This is indeed intriguing, and Vilenkin himself addresses the question in Chapter Seventeen of his book Many Worlds in One: The Search for Other Universes, which grew out of his 1982 article “Creation of Universes from Nothing”:

“This means that there is simply no space and time, they are, in a precise sense, unreal – ‘immaterial’, they are pure ‘nothing’; they are simply a manifestation of the uncertainty principle, a foam of probabilities that space-time has one metric or another, topology, number of dimensions, etc. The concept of a universe materializing out of nothing boggles the mind… yet the state of ‘nothing’ cannot be identified with absolute nothingness. The tunneling is described by the laws of quantum mechanics, and thus ‘nothing’ should be subjected to these laws. The laws must have existed, even though there was no universe.”

The answer, of course, is not a bad one – it refers us back to Plato and his distinction between the world of ideas and the world of things – but it explains nothing. And how are we to reconcile the statements that “the state of ‘nothing’ cannot be identified with absolute nothingness,” yet at the same time it is “pure ‘nothing’”? Further reading clarifies this somewhat. Vilenkin suggests that in complete “Nothing” only the laws of physics exist, but he cannot give a definite answer to the question of where they come from, proposing toward the end of the book that everything comes from God. Presumably, the question of the existence of physical laws in nothing did not trouble him greatly. There is no problem in the fact that Vilenkin saw a divine origin for “nothing,” since, as we will see later, he is not the only one who has approached the hypothesis of a universe from “nothing.” But for now, let us return to the tunneling of geometry. It follows that, if Vilenkin’s conclusions are true or close to true, the geometry of spacetime itself can “tunnel”17 from zero size through a potential barrier, in a manner analogous to how a particle with nonzero amplitude penetrates a classical potential barrier.18 Alexander Vilenkin fully formalized and described how a closed universe could arise via quantum tunneling from literally “nothing” into de Sitter space19, after which inflationary expansion begins. From the perspective of the wave function, this corresponds to a boundary condition imposed at zero geometry (the so-called “tunneling wave function”). After tunneling, a finite-size “bubble” appears. If this bubble exceeds a critical scale, it does not collapse but inflates to large dimensions, entering an inflationary phase of expansion. Modern studies, including those incorporating quantum gravity (for example, loop quantum cosmology), continue to develop this idea of universe tunneling at zero scale factor.
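For readers who want the claim in formulas, here is a minimal sketch of the standard minisuperspace estimate behind this scenario – a textbook-style reconstruction, not a calculation taken from Vilenkin’s book, with numerical factors of order unity suppressed and units ħ = c = 1:

```latex
% Minisuperspace Wheeler-DeWitt equation for a closed FRW universe with
% vacuum energy density \rho_v (schematic; factors of order unity dropped):
\[
  \left[ -\frac{d^{2}}{da^{2}} + U(a) \right] \psi(a) = 0,
  \qquad
  U(a) \propto a^{2}\!\left( 1 - \frac{a^{2}}{a_{0}^{2}} \right),
  \qquad
  a_{0} = \sqrt{\frac{3}{8\pi G \rho_v}} .
\]
% The barrier U(a) separates a = 0 ("nothing") from a = a_0, the smallest
% de Sitter universe. The WKB amplitude for tunneling through it gives a
% nucleation probability
\[
  P \;\sim\; \exp\!\left( -2\!\int_{0}^{a_{0}}\!\sqrt{U(a)}\,da \right)
    \;\sim\; \exp\!\left( -\frac{3}{8 G^{2} \rho_v} \right),
\]
% exponentially small but nonzero -- the "nonzero probability of transition
% to a state with a finite geometry" referred to in the text.
```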

The tunneling scenario is complemented by an earlier hypothesis of a universe with zero total energy. According to this hypothesis, the positive energy of matter (mass, fields, kinetic energy) is exactly balanced by the negative energy of the gravitational field. As early as 1973, Edward Tryon suggested that our universe is a large-scale fluctuation of the quantum vacuum, with its total energy equal to zero because the energy of matter is precisely offset by gravitational potential energy.

If the positive energy of matter and the negative energy of curvature exactly compensate each other, then the “appearance” of the universe requires no external energy source. As Stephen Hawking noted, the creation of mass gives rise to exactly as much “negative” gravitational energy as the positive energy invested, so that the total energy remains zero.
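A back-of-the-envelope version of this balance may help. The following is a Newtonian estimate of my own, not Tryon’s or Hawking’s calculation, with geometric factors such as 3/5 dropped:

```latex
\[
  E_{\mathrm{total}}
  \;\approx\;
  \underbrace{M c^{2}}_{\text{matter}}
  \;-\;
  \underbrace{\frac{G M^{2}}{R}}_{\text{gravitational binding}}
  \;\approx\; 0
  \quad\Longrightarrow\quad
  R \;\approx\; \frac{G M}{c^{2}} .
\]
% Taking a rough mass of the observable universe, M ~ 10^{53} kg, gives
% R ~ 10^{26} m -- the order of the Hubble radius. The coincidence proves
% nothing by itself, but it shows that the zero-energy balance is at least
% dimensionally plausible.
```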

The zero-total-energy hypothesis assumes that positive contributions (energy of mass, fields, and kinetic energy) are counterbalanced by negative contributions associated with the gravitational field. This does not contradict the fundamental laws of nature. The mass–energy equivalence law remains valid: mass is still equivalent to energy, and the creation of mass does not imply an arbitrary appearance of positive energy outside the equations. The law of local conservation of energy and momentum, as formulated in general relativity, holds: no spontaneous loss or creation of energy–momentum occurs in any local region of spacetime. Einstein’s equations are also not violated – the balance of positive and negative contributions emerges as a solution to these equations under the chosen boundary conditions.

However, it should be noted that the energy of the gravitational field in general relativity cannot be unambiguously localized; its evaluation uses global constructions and special definitions (for example, ADM energy for asymptotically flat spaces) or relies on specific boundary conditions. Therefore, the statement that “the total energy of the universe is zero” is correct only within a particular model and chosen method of calculation, and this should not be forgotten when approaching the issue critically.

The combination of the tunneling mechanism and the zero-energy concept provides a mathematically consistent scenario. A quantum transition from a state in which space and time are absent generates a finite volume of spacetime filled with matter and radiation. In this emergent configuration, the positive energy of matter is automatically balanced by the negative gravitational energy, so the total energy remains zero. Thus, the birth of the universe is described not as “taking energy from nowhere” but through calculations of amplitudes and energy contributions. This process can be schematically reduced to three stages: (1) a quantum tunnel from “nothing” creates a small fragment of the universe; (2) particles and fields (positive energy) materialize within it, while the geometry contributes negative gravitational energy; (3) with successful compensation, a stable universe of zero total energy emerges.

Quantum amplitudes provide only a nonzero, but generally very small, probability of nucleation20. Most “attempts” at universe creation are either reversible or generate unstable configurations that immediately collapse back. However, statistically, even a single successful realization is sufficient: the emergence of at least a few “bubble” universes is guaranteed. Among them, our universe corresponds to a “fortunate” case – it has grown stably and evolves into the cosmos we know. Such a qualitative selection (“the anthropic principle” in the broad sense) means that we observe precisely the universe in which complex structures and observers could arise, even though it is extremely improbable among the infinite set of fluctuations.

According to quantum field theory, the vacuum is the ground state of quantized fields with the minimal possible energy, but it is not completely empty. Due to Heisenberg’s uncertainty principle, short-lived energy fluctuations occur as virtual particle–antiparticle pairs continuously appear and annihilate. Usually these pairs quickly vanish, “returning” their energy to the vacuum without disturbing the overall balance. Occasionally, however, an exceptional fluctuation occurs, with very high local energy and order. Such a fluctuation can give rise to a stable region – the embryo of a future universe. If the bubble exceeds a critical radius, it no longer collapses and begins to expand exponentially: its own space expands autonomously, engulfing more of the surrounding vacuum and initiating inflation. Ultimately, the entire observable cosmos forms from a statistically extremely rare but allowed quantum anomaly.
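The critical-radius claim can be made quantitative with the classic thin-wall energy balance – a heuristic sketch (the fully relativistic Coleman–De Luccia treatment changes the coefficients, not the threshold behavior). A bubble pays a surface cost σ per unit area and gains an energy density Δρ per unit volume:

```latex
\[
  E(R) \;=\; 4\pi \sigma R^{2} \;-\; \tfrac{4}{3}\pi\, \Delta\rho\, R^{3},
  \qquad
  \left.\frac{dE}{dR}\right|_{R_{c}} = 0
  \;\Longrightarrow\;
  R_{c} \;=\; \frac{2\sigma}{\Delta\rho} .
\]
% Below R_c the surface term dominates and the bubble shrinks back;
% above R_c further growth lowers the energy, so the bubble expands
% without limit -- the "no longer collapses" regime described above.
```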

Thus, assuming the formal definition of “nothing” as zero geometry, quantum mechanics allows constructing a self-consistent picture: from “absolute nothing,” a bubble of spacetime arises with positive matter energies compensated by negative gravitational energy, summing to zero. Multiple fluctuations and statistical selection explain why we find ourselves in the one stable universe where observers could emerge.

The already existing universe then evolves according to the laws of thermodynamics. According to the second law, any isolated physical body (or system) tends toward the most probable macroscopic equilibrium state – maximum entropy. The directional tendency of entropy toward equilibrium is the universe’s fundamental “task,” with interesting implications. For instance, the concept of time is closely linked to entropy. We perceive the arrow of time: events unfold in one direction rather than the reverse. Many philosophers and physicists believe this phenomenon arises from the entropy gradient. Current understanding holds that the universe’s entropy was extremely low at the beginning and has been continuously increasing ever since. The standard interpretation is that “earlier moments in time are simply moments of lower entropy.” In this way, the direction of time can be “eliminated” as a fundamental property: it coincides with the direction of increasing entropy. If entropy were somehow to decrease (practically impossible), our perception of time could reverse. Alternatively, some approaches in quantum gravity suggest that time itself may be emergent or unnecessary at a fundamental level.
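For reference, the formal statement compressed into that paragraph is Boltzmann’s definition of entropy together with the second law:

```latex
\[
  S = k_{B} \ln W ,
  \qquad
  \frac{dS}{dt} \;\geq\; 0 \quad \text{(isolated system)},
\]
% where W is the number of microstates compatible with the macrostate.
% The "elimination" of time's direction then amounts to the identification
\[
  t_{1} < t_{2} \quad \Longleftrightarrow \quad S(t_{1}) \leq S(t_{2}),
\]
% i.e., "earlier" is defined as "lower entropy" rather than taken as a
% primitive property of the world.
```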

A similar “primary” role is attributed to information in relation to matter. John Wheeler, in his “It from bit” hypothesis, asserts that the “bit” is primary and that matter can be seen as emerging from sufficiently rich information. By this logic, everything we consider material (vacuum, fields, particles) is wrapped in informational structures, making matter effectively materialized information. Here information is defined in the Shannon sense, as a measure of a system’s uncertainty: Shannon entropy equals the amount of uncertainty in a message. Landauer’s principle, formulated in 1961 by Rolf Landauer, states that in any computational system, regardless of its physical realization, the erasure of 1 bit of information releases a minimum amount of heat. In simpler terms, when information is erased, the entropy of the system (or environment) increases, consistent with the second law of thermodynamics. Energetically inert, non-self-sustaining structures exemplify this process especially clearly.
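Landauer’s bound is worth seeing in numbers. The following is the standard textbook arithmetic at room temperature (T = 300 K), not a figure from the book:

```latex
\[
  E_{\min} \;=\; k_{B} T \ln 2
  \;\approx\; \left(1.38 \times 10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right)
              \times 300\,\mathrm{K} \times 0.693
  \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J},
\]
% with a matching entropy increase of at least k_B ln 2 in the environment
% per erased bit -- tiny per bit, but strictly nonzero, which is all the
% second-law bookkeeping requires.
```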

A note on terminology: the distinction between “living” and “non-living” systems here marks a difference among systems with common roots. To avoid speculative conflation of the two, I will use the neutral term “structures,” emphasizing only the qualitative differences in their organization: physical systems inherently tend toward the decay of order. According to the second law, every physical body “ages”: metals rust, chemical compounds break down, hot bodies cool and distribute their heat evenly. Over time, all closed systems approach thermodynamic equilibrium – a state of maximum entropy and maximum uncertainty. Even atoms and molecules are not eternal: many nuclei are radioactive and decay spontaneously, releasing energy and increasing the number of accessible states of the system (energy gradients are leveled). On an astrophysical scale this is manifested in the life cycles of stars: first stable structures form (a star, for example), then, after fuel exhaustion, decay – a supernova or collapse into a black hole – brings a final increase in the universe’s entropy. These processes underscore the inevitable loss of information in any physical system over time.

Highly ordered informational structures – what we call “living” – are collections of physical objects that maintain themselves only through continuous exchange of matter and energy with the environment. Evolution has selected mechanisms that allow living beings to resist increasing entropy: replication of DNA and regeneration systems, for example, maintain the informational stability of the species. Yet evolution proceeds through random “failures” – mutations, manifestations of entropic chaos in the genetic code. Mutations are inevitable disruptions in the transmission of hereditary information, eventually leading to organismal death; yet precisely through this destructive process new variants arise that are temporarily resilient to further degradation. Selection preserves these randomly emergent organisms. Thus life as a whole is an arena of constant struggle against entropy, in which new informational structures arise within destructive processes. For highly ordered informational structures, the destruction of information is compounded by the tragedy of the struggle to preserve it: individual carriers are sacrificed so that information can be stored and transmitted (replication), and in this struggle for survival even the fittest bearers of interest accumulate irretrievable losses of information at the scale of the organism.

Bearers of interest (Russian: носители интереса; Norwegian: interessebærere) – a philosophical concept introduced by the Norwegian philosopher Peter Wessel Zapffe in his work On the Tragic (1941). For Zapffe, a bearer of interest was a human whose reflective consciousness recognizes the tragic nature of their own position in an indifferent world; creating a new bearer is equivalent to imposing upon them an inevitable burden, making reproduction morally impermissible. In contemporary philosophical tradition, especially within antinatalism, the term acquires an expanded meaning, encompassing all systems capable of potential suffering.

Unlike biological antinatalism, which focuses on the suffering of all living beings, or ecological antinatalism, concerned with reducing anthropogenic impact on the planet, the sentiocentric approach (from the Latin sentio, meaning “to feel”) shifts the focus to any sentient phenomenon. Its subject is not life as such (bios), but the capacity for feeling – that is, possessing interests that may be violated. Thus, the category of bearers of interest includes not only humans and animals but any actual or hypothetical entities capable of subjective experience and, consequently, of suffering, including advanced forms of artificial intelligence (AGI).

Hereditary information in DNA is susceptible to damage – from radiation, free radicals, replication errors, etc. – and these disruptions manifest at the molecular, cellular, and systemic levels. DNA repair mechanisms21 are not perfect, and defects accumulate with age. This is most clearly visible in the case of telomeres, the terminal repeats of chromosomes: with each cell division telomeres shorten, and when shortening reaches a critical threshold, programmed cell death (apoptosis) is triggered. In other words, the biological “safeguard” of genetic information is gradually exhausted, producing aging and the death of cellular populations. The organism’s neuronal systems are also subject to informational decay. With age the brain exhibits neuronal death and loss of synaptic connections; cellular homeostasis and mitochondrial function become disrupted. Mutations and protein aggregates accumulate in parallel, and the effectiveness of DNA repair mechanisms declines. Because memory and cognitive functions are implemented through vast networks of neurons, the loss of even a portion of the information in neuronal connections degrades the performance of the entire system. Just as a hardware device loses function when its connections are severed, so biological neural networks degrade: network disconnection leads to the loss of reproducible information. Taken together, these processes – the accumulation of molecular “failures,” the loss of structures and connections – constitute the physiological side of aging and disease, in which the organism’s informational organization disintegrates.
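As a purely illustrative toy model (my own sketch, with assumed numbers rather than measured ones), the telomere “safeguard” can be pictured as a division counter that is exhausted after a bounded number of cell divisions:

```python
# Toy illustration (not a biological model): telomeres as a division counter.
# All numbers below are illustrative assumptions, not measured values.
INITIAL_TELOMERE_BP = 10_000    # assumed starting telomere length, base pairs
LOSS_PER_DIVISION_BP = 150      # assumed loss per cell division
APOPTOSIS_THRESHOLD_BP = 3_000  # assumed critical length triggering apoptosis

def divisions_until_apoptosis(initial: int, loss: int, threshold: int) -> int:
    """Count divisions before telomere length falls below the threshold."""
    length, divisions = initial, 0
    while length - loss >= threshold:
        length -= loss
        divisions += 1
    return divisions

if __name__ == "__main__":
    n = divisions_until_apoptosis(
        INITIAL_TELOMERE_BP, LOSS_PER_DIVISION_BP, APOPTOSIS_THRESHOLD_BP
    )
    # With these assumed numbers the "safeguard" is exhausted after 46
    # divisions -- the same order as the empirical Hayflick limit (~50).
    print(f"divisions before apoptosis: {n}")
```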

In all these examples – from inorganic structures to highly ordered informational architectures – the dynamics of informational decay play the decisive role. Fundamental physical limits tie the quantity of stored information to a system’s energy and size. Thus the Bekenstein22 bound shows that the maximum amount of data in a given region of space is determined by its energy and dimensions, demonstrating the inextricable connection of information with gravity and the structure of space. Indeed, atomic and molecular structures possess well-defined symmetries and energy levels (which give us the periodic table of the elements), while larger-scale organizations – star clusters, galaxies – form under the influence of gravity and energy exchange. Along the evolutionary trajectory the amount of information has steadily increased – from elementary particles to deoxyribonucleic acid and neural networks. Yet at each link in this chain that pyramid of order has been maintained at the expense of energy and accompanied by an inevitable counterpart: entropic destruction.
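Stated explicitly, the Bekenstein bound caps the information that can fit in a region of total energy E and radius R:

```latex
\[
  I \;\leq\; \frac{2\pi E R}{\hbar c \ln 2} \quad \text{bits}.
\]
% Example: for E = mc^2 with m = 1 kg confined within R = 1 m, the bound
% is roughly 2.6 x 10^{43} bits -- enormous, yet finite: storable
% information is capped by energy and size, exactly as the text claims.
```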

Organisms are too complex to resist the pressure of entropy indefinitely; therefore the eventual decay of an individual organism is unavoidable. Life merely postpones the fatal outcome by copying information into new carriers. This may be understood as the necessary consequence of the irreversible increase of entropy in nature. Every system, from atoms to brains, contains a temporal gradient of order, and the end of that structure is always associated with its informational decay.

Among relatively recent research on the role of entropy in information, one may recall the “second law of infodynamics,” though it should be noted that the idea remains highly speculative. It was proposed in 2022 by M. Vopson and S. Lepadatu of the University of Portsmouth. Their argument rests on a combined information-theoretic and empirical approach. Methodologically, they begin by explicitly separating the total entropy of the physical subsystem under consideration into a component associated with thermodynamic microstates and a component interpreted as informational, that is, Shannon entropy.

The authors analyze operations on digital media and algorithmic transformations (copying, compression, filtering, error correction), measuring symbol distributions before and after such operations and calculating the change in the Shannon entropy of those distributions. They then consider biological molecular sequences and replication processes: statistical analysis of DNA and RNA sequences, as well as population dynamics, allows assessment of how replication with error correction and natural selection affect the statistical uncertainty of genetic information in populations. These measurements serve to illustrate that, in the practice of many information-relevant subsystems, there are tendencies toward local decreases in Shannon entropy as a result of operations intended to preserve, order, or compress representations of information. The key step in their work is to compare the directional changes of Shannon entropy with the energetic and thermodynamic accounting of those operations. Vopson and Lepadatu apply principles related to Landauer’s principle and the bounds linking information to energy and system size to show that local decreases in Shannon entropy can be accompanied by an equivalent or greater increase of thermodynamic entropy in the environment as a result of work expended to order the system and dissipate heat. Their calculations again confirm that, when all energy flows are accounted for, total entropy does not decrease; consequently, local trends toward reduced informational uncertainty do not contradict the classical second law of thermodynamics.
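To make the measurement procedure concrete, here is a minimal sketch in Python – my illustration of the idea, not the authors’ code: compute the Shannon entropy of a symbol distribution before and after an “ordering” operation such as idealized error correction.

```python
# Minimal illustration (not Vopson & Lepadatu's code): Shannon entropy of
# a symbol distribution before and after an idealized "ordering" operation.
from collections import Counter
from math import log2
import random

def shannon_entropy(seq: str) -> float:
    """Shannon entropy of the empirical symbol distribution, in bits/symbol."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * log2(n / c) for c in counts.values())

random.seed(0)
template = "A" * 10_000                      # a maximally ordered "message"
noisy = "".join(                             # copy with ~10% random symbol errors
    random.choice("ACGT") if random.random() < 0.1 else ch for ch in template
)
corrected = template                         # idealized, perfect error correction

print(f"H(noisy)     = {shannon_entropy(noisy):.3f} bits/symbol")
print(f"H(corrected) = {shannon_entropy(corrected):.3f} bits/symbol")
# Correction lowers the local Shannon entropy, but the work it costs
# dissipates heat (Landauer), so total thermodynamic entropy still rises --
# the central bookkeeping point of the paper.
```

Run as written, the script reports roughly 0.5 bits/symbol for the noisy copy and exactly 0 for the corrected one: the local informational uncertainty falls, while the thermodynamic cost of the correction is paid to the environment.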
