Things Fall Apart

May 29, 2016 | Author: cute_guddy | Category: Topics, Books - Fiction
Name- Krishna Roll1


An Introduction to Entropy (The Shape of the Universe)

Entropy is for many people one of the most confusing topics that they hit in introductory physics. Most introductory physics terms, like energy, velocity, and temperature, refer to things that we are familiar with in our day-to-day life and have built some intuition for. Few people start taking physics with any intuition for entropy, however. Many popular science books and articles discuss entropy, though, usually equating it with "disorder" and noting that according to the laws of physics disorder, and therefore entropy, must always increase. Such accounts rarely, in my experience, make any attempt to define what is meant by disorder or how you would measure whether it is increasing or decreasing. Entropy is a well-defined quantity in physics, however, and the definition is fairly simple. The statement that entropy always increases can be derived from simple arguments, but it has dramatic consequences. In particular, this statement explains why many of the processes that we see occurring in the world are irreversible.

I'm going to start with a general discussion of the notion of reversibility, leading up to the definition of entropy and the meaning of the law saying it must always increase. I'll end by talking about some of the implications of this law. The discussion throughout the paper presents ideas in a very general way that assumes no math or physics background. There is a (slightly) more mathematical appendix that discusses a technical issue in the definition of entropy and explains how the concept of entropy is used in defining temperature.

Introduction: Time-Reversal

Your kitchen floor is covered with bits of egg shell, yolk, and egg white in a large puddle by the counter.
A moment later the yolk and white run together into one place, the bits of shell fall in around them and form a smooth surface surrounding all the liquid, and finally the whole mess soars up onto your counter, ending up as an intact egg. You leave a cool cup of water on the table and a few minutes later the water has gotten warmer but several cold ice cubes have coalesced out of it. A mass of dandelion spores flying about in the wind all land in nearly perfect unison on the same plant, attaching themselves to it in a spherical ball. If someone showed you a videotape of any of these events you would recognize instantly that the tape was being played backwards. These are just a few examples of the many processes in the world that happen in one direction only. Eggs break, but

they never spontaneously re-form. Ice and warm water combine into cool water, but water never just happens to split into cold ice and warm liquid. Why is that? Why do things happen in the direction they do?

At first glance the question might seem silly. All of these events are governed by physical laws that tell them what to do. For example, objects fall towards the ground because of gravity, which seems to be a clear example of the laws of physics making things happen in one way only. Looked at more closely, however, this example isn't quite so simple. If I play you a videotape of an object falling you will notice that as it falls it picks up speed, going faster and faster towards the ground. Now suppose I play you the same tape in reverse. You will see an object moving up, slowing down the higher up it gets. Either version of the tape looks like a plausible scene. In fact gravity is one of the clearest examples of a time-reversible law.

As another example, consider a billiards game being played on a table with no friction, so the balls never slow down. If I play a tape of the balls normally you will see balls bouncing off each other and the walls. If I play the tape backwards you will see the same thing. There's no way to tell which version is correct. Notice, however, that in the billiards example I had to specify that the table had no friction. A real billiard ball moving along a table will gradually slow to a stop. If I played you a tape of a billiard ball that started out at rest and then began moving, with nothing else touching it, you would know that the tape was being played backwards. Friction, then, seems to be a good candidate for a physical effect that is not time-reversible.

To examine that claim, let's consider more carefully what friction is. What happens to the billiard ball as it is rolling? In fact the same thing happens as in the frictionless example; it experiences constant collisions.
In this case the collisions are not just with other balls but also with the air molecules surrounding it. More importantly, because neither the table nor the ball has a perfectly smooth surface there are constant collisions between their molecules. The net result of all of these collisions is that the ball loses energy, and eventually comes to a stop. Where did the energy go? It went into all the air and table molecules that the ball collided with. They in turn scattered it out ever further into the atmosphere and down into the floor. So now let's look once again at the backwards tape of the billiard ball. We see it starting out at rest and gradually picking up speed. What causes that to happen? If we zoom in close enough we see that air molecules from all over the room happen to converge on the exact right spot to knock the ball in one direction. Moreover the molecules of the table, which are continually vibrating and moving, all happen to push against the ball in the same direction as the air molecules. Looked at closely enough,

there is nothing in this scene that violates the laws of physics. Rather what we see is the playing out of what appears to be a massive coincidence. In fact it turns out that in classical physics all of the fundamental laws of nature are perfectly time-reversible. (The question of time-reversibility in quantum mechanics is somewhat subtler, and I'm not going to discuss it in this paper.) All of the processes that we see occurring in one direction only do so because it would require strange coincidences for them to occur the other way. This statistical tendency for processes to occur in a particular direction is described by a rule called the second law of thermodynamics. Note that this "law" isn't a fundamental law of physics, but rather a statement of statistical probabilities. In the next section I'll describe how this law is formulated, i.e. how to know which processes will occur in one direction only. I'll do so by defining a quantity, entropy, that can never decrease in any physical process.

Entropy and the Second Law of Thermodynamics

Entropy is a relationship between macroscopic and microscopic quantities. To illustrate what I mean by that statement I'm going to consider a simple example, namely a sealed room full of air. A full, microscopic description of the state of the room at any given time would include the position and velocity of every air molecule in it. In practical terms it would be essentially impossible to measure those quantities for every molecule in a room. Instead, what we typically observe are macroscopic quantities that arise as averages over large numbers of molecules. These macroscopic quantities include, for example, the temperature and density of the air. We might observe these quantities varying from one place to another in the room, but in general the smallest region in which we will be able to measure variations will still be large enough to contain an enormous number of molecules.
(Just to give a feeling for the kind of numbers involved, a typical room at normal Earth temperatures and pressures will contain about 10^27, or a billion billion billion, air molecules.) If I were able to measure the complete, microscopic state of the air molecules then I would know all the information there is to know about the macroscopic state. For example, if I knew the position of every molecule in the room I could calculate the average density in any macroscopic region. The reverse is not true, however. If I know the average density of the air in each cubic centimeter that tells me only how many molecules are in each of these regions, but it tells me nothing about where exactly the individual molecules within each such region are. Thus for any particular macro state there are many possible corresponding microstates. Roughly speaking, entropy is defined for any particular macro state as the number of corresponding microstates.
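This many-to-one relation between microstates and macrostates is easy to demonstrate in a short simulation. A microstate here is the exact position of every molecule; the macrostate is only the coarse-grained count per region, which is all a density-measuring device reports. The following is a sketch, with toy values for the molecule number and region count (far below the 10^27 of a real room):

```python
import random
from collections import Counter

random.seed(1)

N_MOLECULES = 10_000   # toy number; a real room holds ~10^27
N_BINS = 10            # coarse-grained regions the "device" can resolve

def random_microstate():
    """A microstate here is just the position of every molecule in [0, 1)."""
    return [random.random() for _ in range(N_MOLECULES)]

def macrostate(micro):
    """The macrostate is only the molecule count in each coarse region."""
    counts = Counter(int(x * N_BINS) for x in micro)
    return tuple(counts.get(b, 0) for b in range(N_BINS))

# Two independently generated microstates...
m1, m2 = random_microstate(), random_microstate()
print(m1 == m2)        # False: the microscopic details differ completely
print(macrostate(m1))
print(macrostate(m2))  # ...but the coarse counts look very similar
```

Knowing the two macrostates (counts near 1,000 per region) tells you nothing about which of the vast number of compatible microstates the system is actually in, which is exactly the asymmetry the paragraph above describes.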


To recap: The microstate of a system consists of a complete description of the state of every constituent of the system. In the case of the air this means the position and velocity of all the molecules. (Going further to the level of atoms or particles wouldn't change the arguments here in any important way.) The macro state of a system consists of a description of a few macroscopically measurable quantities such as average density and temperature. For any macro state of the system there are in general many different possible microstates. Roughly speaking, the entropy of a system in a particular macro state is defined to be the number of possible microstates that the system might be in. (In the appendix I'll discuss how to make this definition more explicit.)

This definition should be clearer with the aid of a few examples. The first is a simplified example where the microstates consist of balls sitting in one of two bins. For the second example I will return to the room full of air and talk about the entropy of different patterns of density.

Example 1: Balls in Bins

Imagine I have ten thousand tiny balls, each labeled with a different number. I put all of the balls into one of two bins, Bin A and Bin B. In this case the microstate of the system consists of the position of each of the numbered balls. The macro state, i.e. what I actually observe, consists simply of how many balls are in each bin. For each macro state I can easily figure out how many possible microstates there are. For example, if I know that all of the balls are in Bin A then the microstate is determined uniquely. If I know that all but one of the balls are in Bin A then there are ten thousand possible microstates for the system, corresponding to the ten thousand different balls that might be in Bin B. In general, the entropy (possible number of microstates) goes up very rapidly as you get closer to having an even split between the two bins. For that particular case, i.e.
five thousand balls in each bin, there are roughly 10^3008 possible microstates.

Now let's allow the two bins to mix. I open up a large window connecting the two bins and start blowing giant fans that send all the balls constantly flying in different directions. Every minute I shut the window and turn off the fans, weigh each bin to figure out how many balls are in it, and then open the window, turn on the fans, and start again. If I set the experiment up carefully so the fans are blowing equally in both directions then eventually each ball will have a 50/50 chance of being in either bin, more or less independently of where all the other balls are. In other words, every microstate will be equally likely. Take a moment to convince yourself that those two statements are equivalent. If you've followed me so far, I hope it will be clear to you that not all macro states will be equally likely. The state with one ball in Bin B and all the rest in Bin A will be ten thousand times more likely than the state with all the balls

in Bin A. In fact the numbers involved are so huge (see the previous paragraph) that it is almost certain that once the balls have had time to mix I will find almost exactly half of them in each bin every time I look. In other words, entropy will tend to increase. If I start in a state with most of the balls in one bin (low entropy) the system will tend to move to a state where they are evenly distributed (high entropy). On the other hand if I start with high entropy, it's very unlikely that the entropy will spontaneously decrease.

Example 2: Back to Air Density

The previous example illustrates the definition of entropy well, but the setting is a bit artificial. Let's return to the example of a room filled with air and see how the same logic applies. Imagine I am going around the room with a handheld device that measures the local air density. In fact, this situation is almost exactly the same as the previous example. The balls are now air molecules and the bins are regions of the smallest size that my device can measure. In this case there are many bins instead of just two, but the same result will apply. The macro state in which there are roughly equal numbers of air molecules in each part of the room has much higher entropy than any macro state in which there are large density differences from one part of the room to another. In other words if you start with all the air bunched up in one corner of the room it will tend to flow out until it more or less evenly fills the space. If you see a videotape in which the air starts out uniform and then all moves into one corner of the room you can bet that it's being played backwards.

The second law of thermodynamics states that the total entropy of the universe can never decrease. Note that the entropy of a particular system can decrease, provided there is a corresponding increase in one or more other systems.
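The microstate counts in the balls-in-bins example can be checked directly. The number of microstates with k balls in Bin B is the binomial coefficient C(10000, k), and working with its logarithm keeps the astronomically large numbers manageable. A minimal sketch (the function name is my own; the numbers come from the example):

```python
from math import lgamma, log

N = 10_000  # total balls, as in the example

def log10_microstates(k):
    """log10 of C(N, k): the number of microstates with k balls in Bin B."""
    return (lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)) / log(10)

print(round(log10_microstates(0)))       # 0 -> all in Bin A: exactly 1 microstate
print(round(log10_microstates(1)))       # 4 -> one ball in Bin B: 10^4 = 10,000 microstates
print(round(log10_microstates(N // 2)))  # ~3008 -> even split: roughly 10^3008 microstates
```

The even split does not just beat the all-in-Bin-A state; it beats it by thousands of orders of magnitude, which is why the mixed macrostate is the one you actually observe.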
In the example of air density the entropy in the corner of the room that initially had all the air decreases as the air expands, but the total entropy in the room increases. Likewise the growth of an individual human involves building up many complex, low-entropy structures, but this building is done by breaking down complex molecules, plants, and other systems in the environment. Once again total entropy increases. In fact life, with all its digestive, heat-generating processes, is a very efficient entropy-producing engine. A reversible process is one in which the total entropy of all systems involved remains constant, whereas any process in which total entropy increases is irreversible.

Conclusion

The second law of thermodynamics is a statistical law. There is nothing in the basic laws of physics that forbids a bunch of dandelion spores from converging on a plant stem and sticking to it. There are, however, many different ways that randomly moving molecules can knock spores off a dandelion, whereas it requires a very special

and unlikely combination of air movements to push the spores back on. This statistical tendency appears to be a fixed law of physics because the number of microscopic components (particles, atoms, molecules) making up any macroscopic system is so enormous that it would be inconceivable for the laws of thermodynamics to be violated by coincidence. All of this means that a videotape of the universe being played backwards would appear to be following all the usual laws of physics, but in a very strange and unlikely way.

So far as we know entropy will continue increasing until someday the universe is filled with nothing but weak, uniform radiation. Fortunately this scenario, known as "the heat death of the universe," will not take place until after a length of time that makes the current age of the universe seem minuscule by comparison. For now we get to enjoy our complicated, low-entropy state. For that, it seems like having a few eggs break is a small price to pay.
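The statistical one-way behavior described above can be watched directly in a simulation of the fan-mixing experiment from Example 1. The sketch below uses invented parameters (500 random moves per "minute," 200 minutes): starting from the low-entropy state with every ball in Bin A, the entropy, measured as the log of the number of microstates, climbs to its maximum and stays there.

```python
import random
from math import lgamma

random.seed(0)

N = 10_000                  # balls, as in Example 1
in_b = [False] * N          # low-entropy start: every ball in Bin A

def log_microstates(k):
    """Natural log of C(N, k), the multiplicity of the macrostate with k balls in Bin B."""
    return lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)

entropies = []
for step in range(200):
    for _ in range(500):              # the "fans": move randomly chosen balls
        i = random.randrange(N)
        in_b[i] = not in_b[i]
    entropies.append(log_microstates(sum(in_b)))

print(entropies[0] < entropies[-1])   # entropy has gone up
print(abs(sum(in_b) - N // 2) < 500)  # the split is close to 50/50
```

Nothing in the update rule prefers one direction; each move is as reversible as a molecular collision. The entropy rises anyway, simply because the near-even macrostates correspond to overwhelmingly more microstates.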


The Arrow of Time

The Second Law of Thermodynamics
Order out of Chaos

"This is the way the world ends
Not with a bang but a whimper." (T. S. Eliot)

Thermodynamics is the branch of theoretical physics which deals with the laws of heat motion, and the conversion of heat into other types of energy. The word is derived from the Greek words thermo ("heat") and dynamics ("force"). It is based upon two fundamental principles originally derived from experiments, but which are now regarded as axioms. The first principle is the law of the conservation of energy, which assumes the form of the law of the equivalence of heat and work. The second principle states that heat cannot of itself pass from a cooler body to a hotter body without changes in any other bodies.

The science of thermodynamics was a product of the industrial revolution. At the beginning of the 19th century, it was discovered that energy can be transformed in different ways, but can never be created or destroyed. This is the first law of thermodynamics—one of the fundamental laws of physics. Then, in 1850, Rudolf Clausius discovered the second law of thermodynamics. This states that "entropy" (roughly, the ratio of a body's heat content to its temperature) always increases in any transformation of energy, for example, in a steam engine.

Entropy is generally understood to signify an inherent tendency towards disorganization. Every family is well aware that a house, without some conscious intervention, tends to pass from a state of order to disorder, especially when young children are around. Iron rusts, wood rots, dead flesh decays, the water in the bath gets cold. In other words, there appears to be a general tendency towards decay. According to the second law, atoms, when left to themselves, will mix and randomize themselves as much as possible. Rust occurs because the iron atoms tend to mingle with oxygen in the surrounding air to form iron oxide.
The fast-moving molecules on the surface of the bath water collide with the slower-moving molecules in the cold air and transfer their energy to them. This is a limited law, however, which has no bearing on systems consisting of a small number of particles (microsystems) or on systems with an infinitely large number of particles (the universe). Yet there have been repeated attempts to extend its application well beyond its proper sphere, leading to all kinds of false philosophical conclusions.
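The one-way character of heat flow described above can be expressed with Clausius's entropy: when a quantity of heat Q leaves a body at temperature T_hot and enters one at T_cold, the total entropy change is Q/T_cold − Q/T_hot, which is positive exactly when heat runs downhill from hot to cold. A small sketch (the temperatures and heat quantity are invented for illustration):

```python
def entropy_change(q, t_hot, t_cold):
    """Total entropy change (J/K) when heat q (J) flows from a body at
    t_hot to a body at t_cold (temperatures in kelvin)."""
    return q / t_cold - q / t_hot

# Bath water at 315 K losing 1000 J to room air at 293 K:
print(entropy_change(1000, 315, 293) > 0)   # hot -> cold: total entropy rises

# The reverse flow (heat leaving the 293 K air and warming the 315 K bath)
# would require total entropy to fall, which is what the second law forbids:
print(entropy_change(1000, 293, 315) < 0)
```

The bath cooling down is therefore not a separate empirical fact but a direct consequence of the sign of this expression.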

In the middle of the last century, R. Clausius and W. Thomson, the authors of the second principle of thermodynamics, attempted to apply the second law to the universe as a whole, and arrived at a completely false theory, known as the "thermal death" theory of the end of the universe. The law was redefined in 1877 by Ludwig Boltzmann, who attempted to derive the second law of thermodynamics from the atomic theory of matter, which was then gaining ground. In Boltzmann's version, entropy appears as a function of the probability of a given state of matter: the more probable the state, the higher its entropy. In this version, all systems tend towards a state of equilibrium (a state in which there is no net flow of energy). Thus, if a hot object is placed next to a cold one, energy (heat) will flow from the hot to the cold, until they reach equilibrium, i.e., they both have the same temperature.

Boltzmann was the first to deal with the problems of the transition from the microscopic (small-scale) to the macroscopic (large-scale) level in physics. He attempted to reconcile the new theories of thermodynamics with the classical physics of trajectories. Following Maxwell's example, he tried to resolve the problems through the theory of probability. This represented a radical break with the old Newtonian methods of mechanistic determinism. Boltzmann realized that the irreversible increase in entropy could be seen as the expression of growing molecular disorder. His principle of order implies that the most probable state available to a system is one in which a multiplicity of events taking place simultaneously within the system cancel each other out statistically. While molecules move randomly, on average, at any given moment, the same number will be moving in one direction as in another.

There is a contradiction between energy and entropy. The unstable equilibrium between the two is determined by temperature.
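Boltzmann's identification of equilibrium with the most probable state can be illustrated numerically. For two identical bodies of heat capacity C, the entropy of each is C·ln T up to a constant; holding the total energy fixed and scanning over the possible ways of sharing it shows that the equal-temperature split maximizes total entropy, which is why heat flows until the temperatures match. A sketch with invented values for C and the starting temperatures:

```python
from math import log

C = 1000.0                        # heat capacity of each body, J/K (illustrative)
T_HOT, T_COLD = 370.0, 290.0      # initial temperatures, K (illustrative)
E_TOTAL = C * (T_HOT + T_COLD)    # total thermal energy, held fixed

def total_entropy(t1):
    """Entropy of the pair (up to a constant) when body 1 is at t1;
    body 2 takes whatever energy is left over."""
    t2 = E_TOTAL / C - t1
    return C * (log(t1) + log(t2))

# Scan every way of sharing the fixed energy, in 1 K steps:
best_t1 = max(range(200, 461), key=total_entropy)
print(best_t1)   # 330, the common equilibrium temperature (370 + 290) / 2
```

No final state other than the equal-temperature one comes close, so "heat flows until equilibrium" is just "the system ends up in its most probable macrostate."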
At low temperatures, energy dominates and we see the emergence of ordered (low-entropy), low-energy states, as in crystals, where molecules are locked in a certain position relative to other molecules. However, at high temperatures, entropy prevails, and is expressed in molecular disorder. The structure of the crystal is disrupted, and we get the transition, first to a liquid, then to a gaseous state. The second law states that the entropy of an isolated system always increases, and that when two systems are joined together, the entropy of the combined system is greater than the sum of the entropies of the individual systems.

However, the second law of thermodynamics is not like other laws of physics, such as Newton's law of gravity, precisely because it is not always applicable. Originally derived from a particular sphere of classical mechanics, the second law is limited by the fact that Boltzmann took no account of such forces as electromagnetism or even gravity, allowing only for

atomic collisions. This gives such a restricted picture of physical processes that it cannot be taken as generally applicable, although it does apply to limited systems, like boilers. The second law is not true of all circumstances. Brownian motion contradicts it, for example. As a general law of the universe in its classical form, it is simply not true.

It has been claimed that the second law means that the universe as a whole must tend inexorably towards a state of maximum entropy. By analogy with a closed system, the entire universe must eventually end up in a state of equilibrium, with the same temperature everywhere. The stars will run out of fuel. All life will cease. The universe will slowly peter out in a featureless expanse of nothingness. It will suffer a "heat-death."

This bleak view of the universe is in direct contradiction to everything we know about its past evolution, or see at present. The very notion that matter tends to some absolute state of equilibrium runs counter to nature itself. It is a lifeless, abstract view of the universe. At present, the universe is very far from being in any sort of equilibrium, and there is not the slightest indication either that such a state ever existed in the past, or will do so in the future. Moreover, if the tendency towards increasing entropy is permanent and linear, it is not clear why the universe has not long ago ended up in a tepid soup of undifferentiated particles. This is yet another example of what happens when attempts are made to extend scientific theories beyond the limits where they have a clearly proven application.

The limitations of the principles of thermodynamics were already shown in the last century in a polemic between Lord Kelvin, the celebrated British physicist, and geologists, concerning the age of the earth. The predictions made by Lord Kelvin on the basis of thermodynamics ran counter to all that was known from geological and biological evolution.
The theory postulated that the earth must have been molten just 20 million years ago. A vast accumulation of evidence proved the geologists right, and Lord Kelvin wrong. In 1928, Sir James Jeans, the English scientist and idealist, revived the old arguments about the "heat death" of the universe, adding in elements taken from Einstein's relativity theory. Since matter and energy are equivalent, he claimed, the universe must finally end up in the complete conversion of matter into energy: "The second law of thermodynamics," he prophesied darkly, "compels materials in the universe (sic!) to move ever in the same direction along the same road which ends only in death and annihilation." (46) Similar pessimistic scenarios have been put forward more recently. In the words of one recently published book:


"The universe of the very far future would thus be an inconceivably dilute soup of photons, neutrinos, and a dwindling number of electrons and positrons, all slowly moving farther and farther apart. As far as we know, no further basic physical processes would ever happen. No significant event would occur to interrupt the bleak sterility of a universe that has run its course yet still faces eternal life—perhaps eternal death would be a better description.

"This dismal image of cold, dark, featureless near-nothingness is the closest that modern cosmology comes to the 'heat death' of nineteenth-century physics." (47)

What conclusion must we draw from all this? If all life, indeed all matter, not just on earth, but throughout the universe, is doomed, then why bother about anything? The unwarranted extension of the second law beyond its actual scope of application has given rise to all manner of false and nihilistic philosophical conclusions. Thus, Bertrand Russell, the British philosopher, could write the following lines in his book Why I Am Not a Christian:

"All the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and…the whole temple of man's achievement must inevitably be buried beneath the debris of a universe in ruins—all these things, if not quite beyond dispute, are yet so nearly certain that no philosophy which rejects them can hope to stand. Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built." (48)

Order Out of Chaos

In recent years, this pessimistic interpretation of the second law has been challenged by a startling new theory. The Belgian Nobel Prize winner Ilya Prigogine and his collaborators have pioneered an entirely different interpretation of the classical theories of thermodynamics.
There are some parallels between Boltzmann's theories and those of Darwin. In both, a large number of random fluctuations lead to a point of irreversible change, one in the form of biological evolution, the other in that of the dissipation of energy and evolution towards disorder. In thermodynamics, time implies degradation and death. The question arises: how does this fit in with the phenomenon of life, with its inherent tendency towards organization and ever-increasing complexity? The law states that things, if left to themselves, tend towards increased entropy. In the 1960s, Ilya Prigogine and others realized that in the real world atoms and molecules are almost never "left to themselves." Everything affects everything else. Atoms and molecules are almost always exposed to the flow of energy and material from the

outside, which, if it is strong enough, can partially reverse the apparently inexorable process of disorder posited in the second law of thermodynamics. In fact, nature shows numerous instances not only of disorganization and decay, but also of the opposite processes—spontaneous self-organization and growth. Wood rots, but trees grow. According to Prigogine, self-organizing structures occur everywhere in nature. Likewise, M. Waldrop concluded:

"A laser is a self-organizing system in which particles of light, photons, can spontaneously group themselves into a single powerful beam that has every photon moving in lockstep. A hurricane is a self-organizing system powered by the steady stream of energy coming in from the sun, which drives the winds and draws rainwater from the oceans. A living cell—although much too complicated to analyze mathematically—is a self-organizing system that survives by taking in energy in the form of food and excreting energy in the form of heat and waste." (49)

Everywhere in nature we see patterns. Some are orderly, some disorderly. There is decay, but there is also growth. There is life, but there is also death. And, in fact, these conflicting tendencies are bound up together. They are inseparable. The second law asserts that all of nature is on a one-way ticket to disorder and decay. Yet this does not square with the general patterns we observe in nature. The very concept of "entropy," outside the strict limits of thermodynamics, is a problematic one.

"Thoughtful physicists concerned with the workings of thermodynamics realize how disturbing is the question of, as one put it, 'how a purposeless flow of energy can wash life and consciousness into the world.' Compounding the trouble is the slippery notion of entropy, reasonably well-defined for thermodynamic purposes in terms of heat and temperature, but devilishly hard to pin down as a measure of disorder.
Physicists have trouble enough measuring the degree of order in water, forming crystalline structures in the transition to ice, energy bleeding away all the while. But thermodynamic entropy fails miserably as a measure of the changing degree of form and formlessness in the creation of amino acids, of microorganisms, of self-reproducing plants and animals, of complex information systems like the brain. Certainly these evolving islands of order must obey the second law. The important laws, the creative laws, lie elsewhere." (50)

The process of nuclear fusion is an example, not of decay, but of the building-up of the universe. This was pointed out in 1931 by H. T. Poggio, who warned the prophets of thermodynamic gloom against unwarranted attempts to extrapolate a law which applies in certain limited situations on earth to the whole universe. "Let us not be too sure that the universe is like a watch that is always running down. There may be a rewinding." (51)

The second law contains two fundamental elements—one negative and one positive. The first says that certain processes are impossible (heat flows from a hot source to a cold one, never spontaneously the reverse); the second, which flows from the first, states that the growth of entropy is an inevitable feature of all isolated systems. In an isolated system all non-equilibrium situations produce evolution towards the same kind of equilibrium state.

Traditional thermodynamics saw in entropy only a movement towards disorder. This, however, refers only to simple, isolated systems (e.g., a steam engine). Prigogine's new interpretation of Boltzmann's theories is far wider, and radically different. Chemical reactions take place as a result of collisions between molecules. Normally, the collision does not bring about a change of state; the molecules merely exchange energy. Occasionally, however, a collision produces changes in the molecules involved (a "reactive collision"). These reactions can be speeded up by catalysts. In living organisms, these catalysts are specific proteins, called enzymes. There is every reason to believe that this process played a decisive role in the emergence of life on earth.

What appear to be chaotic, merely random movements of molecules at a certain point reach a critical stage where quantity suddenly becomes transformed into quality. And this is an essential property of all forms of matter, not only organic, but also inorganic. "Remarkably, the perception of oriented time increases as the level of biological organization increases and probably reaches its culminating point in human consciousness." (52)

Every living organism combines order and activity. By contrast, a crystal in a state of equilibrium is structured, but inert. In nature, equilibrium is not normal but, to quote Prigogine, "a rare and precarious state." Non-equilibrium is the rule. In simple isolated systems like a crystal, equilibrium can be maintained for a long time, even indefinitely.
But matters change when we deal with complex processes, like living things. A living cell cannot be kept in a state of equilibrium, or it would die. The processes governing the emergence of life are not simple and linear, but dialectical, involving sudden leaps, where quantity is transformed into quality.

"Classical" chemical reactions are seen as largely random processes. The molecules involved are evenly distributed in space, and their spread is distributed "normally," i.e., in a Gaussian curve. These kinds of reactions fit into Boltzmann's conception, wherein all side-chains of the reaction will fade out and the reaction will end up in a stable state, an immobile equilibrium. However, in recent decades chemical reactions have been discovered that deviate from this ideal and simplified picture. They are known under the common name of "chemical clocks." The most famous examples are the Belousov-Zhabotinsky reaction and the Brussels model devised by Ilya Prigogine.
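The Brussels model just mentioned can be sketched numerically. It is usually written as the pair of rate equations dX/dt = A + X²Y − (B+1)X, dY/dt = BX − X²Y for two chemical concentrations X and Y; when B > 1 + A², the steady state is unstable and the concentrations settle into the sustained oscillation of a chemical clock. The sketch below uses the standard textbook parameters A = 1, B = 3; the step size and run time are my own choices, and the crude Euler integration is for illustration only:

```python
# Euler integration of the Brusselator rate equations.
A, B = 1.0, 3.0          # B > 1 + A**2, so the steady state is unstable
dt, steps = 0.001, 50_000

x, y = 1.1, 3.0          # start near the (unstable) steady state X=A, Y=B/A
xs = []
for _ in range(steps):
    dx = A + x * x * y - (B + 1) * x
    dy = B * x - x * x * y
    x += dx * dt
    y += dy * dt
    xs.append(x)

late = xs[len(xs) // 2:]            # second half of the run: on the limit cycle
print(max(late) - min(late) > 1.0)  # sustained swings, not a settled equilibrium
```

Instead of damping out, the small initial disturbance grows into a regular, large-amplitude cycle: the "all blue, then all red" rhythm described in the text, arising from nothing but the reaction rates themselves.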

Linear thermodynamics describes the stable, predictable behavior of systems that tend towards the minimum level of activity possible. However, when the thermodynamic forces acting on a system reach the point where the linear region is exceeded, stability can no longer be assumed. Turbulence arises. For a long time turbulence was regarded as a synonym for disorder or chaos. But it has now been discovered that what appears to be merely chaotic disorder on the macroscopic (large-scale) level is, in fact, highly organized on the microscopic (small-scale) level. Today, the study of chemical instabilities has become common. Of special interest is the research done in Brussels under the guidance of Ilya Prigogine. The study of what happens beyond the critical point where chemical instability commences is of enormous interest from the standpoint of dialectics. Of particular importance is the phenomenon of the "chemical clock." The Brussels model (nicknamed the "Brusselator" by American scientists) describes the behavior of gas molecules. Suppose there are two types of molecules, "red" and "blue," in a state of chaotic, totally random motion. One would expect that, at a given moment, there would be an irregular distribution of molecules, producing a "violet" colour, with occasional flashes of red or blue. But beyond the critical point in a chemical clock, this does not occur. The system is all blue, then all red, and these changes occur at regular intervals. "Such a degree of order stemming from the activity of billions of molecules seems incredible," say Prigogine and Stengers, "and indeed, if chemical clocks had not been observed, no one would believe that such a process is possible. To change colour all at once, molecules must have a way to 'communicate.' The system has to act as a whole. We will return repeatedly to this key word, communicate, which is of obvious importance in so many fields, from chemistry to neurophysiology.
Dissipative structures introduce probably one of the simplest physical mechanisms for communication." The phenomena of the "chemical clock" show how in nature order can arise spontaneously out of chaos at a certain point. This is an important observation, especially in relation to the way in which life arises from inorganic matter. "'Order through fluctuations' models introduce an unstable world where small causes can have large effects, but this world is not arbitrary. On the contrary, the reasons for the amplification of a small event are a legitimate matter for rational inquiry." In classical theory, chemical reactions take place in a statistically ordered manner. Normally, there is an average concentration of molecules, with an even distribution. In reality, however, local concentrations appear which can organize themselves. This result is entirely unexpected from the standpoint of the traditional theory. These focal points of what Prigogine calls "self-organization" can consolidate themselves to the point where they affect the whole system. What were previously thought of as marginal phenomena turn out to be absolutely decisive. The traditional view was to regard irreversible processes as a nuisance, caused by friction and other sources of heat loss in engines. But the situation has changed. Without irreversible processes, life would not be possible. The old view of irreversibility as a subjective phenomenon (a result of ignorance) is being strongly challenged. According to Prigogine, irreversibility exists on all levels, both microscopic and macroscopic. For him, the second law leads to a new concept of matter. In a state of non-equilibrium, order emerges. "Non-equilibrium brings order out of chaos."
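The "chemical clock" behaviour of the Brusselator discussed earlier can be sketched numerically. The sketch below is an illustration only: the rate equations are the standard two-variable Brusselator, and the parameter values (a = 1, b = 3) and the Euler step size are choices made for this example, not values from the text. For b > 1 + a² the steady state (x, y) = (a, b/a) is unstable and the concentrations oscillate.

```python
# Euler integration of the Brusselator rate equations:
#   dx/dt = a + x^2 y - (b + 1) x
#   dy/dt = b x - x^2 y
def brusselator(a=1.0, b=3.0, dt=0.005, steps=40000):
    x, y = 1.0, 1.0
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x, y = x + dx * dt, y + dy * dt
        xs.append(x)
    return xs

xs = brusselator()
late = xs[-8000:]  # discard the initial transient, look at late times
# Sustained oscillation: x still swings over a wide range long after the start
print(max(late) - min(late) > 1.0)
```

The point of the sketch is the same as Prigogine's observation: the system does not settle into an immobile equilibrium but switches back and forth at regular intervals.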


The Function and Energetics of Entropy

The function of entropy - its rationale or reason for existence - is to protect the conservation of energy and causality. The 1st law of thermodynamics (energy conservation) is protected by the second law (entropy) in that it is the function of the second law to create dimensional conservation domains in which energy can be used, transformed, and conserved simultaneously. For symmetric free energy (light), this domain is space, created by the entropic drive of free energy, the "intrinsic motion" of light, gauged by "velocity c", the electromagnetic constant. For asymmetric bound energy (matter), this domain is historic space-time, created by gravity and the entropic drive of bound energy, the "intrinsic motion" of matter's time dimension, gauged by "velocity T", the one-way motion of time. (Time is also ultimately gauged by "c", as the duration required by light to travel a given distance.) The historic component of space-time is the conservation domain of information and matter's "causal matrix". History is the temporal analog of space, created by the intrinsic motion of time, partially visible in our great telescopes, and forming with gravitation the "causal matrix" of space-time, upholding the effect of the "Universal Present Moment" for all bound energy forms in the Cosmos. Gravity is an entropy (and symmetry) conservation/conversion force, creating time from space and vice versa, the latter in conversions of bound to free energy, as exemplified by our Sun. Gravity and time induce each other in an endless cycle: gravity creates time from space (by the annihilation of space and the extraction of a metrically equivalent temporal residue); the intrinsic motion of time pulls space toward the center of mass and the beginning of the time line and the historic conservation domain of information.
It is the connection between space and time, and the temporal (entropic) pull on space caused by time's intrinsic motion, that creates the spatial flow of gravity. Gravity is the spatial consequence of the intrinsic motion of time. Gravitation creates time and space-time, and as a negative form of spatial entropy, causes the contraction and warming of space, rather than the reverse, as in the case of the intrinsic motion of light. The creation of dimensional conservation domains by the intrinsic, entropic motions c, T, and G (light, time, gravity), where energy can be used and transformed yet simultaneously conserved, is the connection between the 1st and 2nd laws of thermodynamics. Consequently, wherever we find a form of energy, we will find an associated form of entropy (in entropy's primordial form, intrinsic dimensional motion). Entropy is the guarantee made by the 2nd law of thermodynamics to the first law that energy will be conserved while it is being transformed and used. The "teeth" of this guarantee are the effectively "infinite" velocities of both light and time, ensuring that lost heat and opportunity cannot be recovered. Hence the "pure" or primordial forms of entropy not only establish the dimensional conservation domains of free and bound energy, but also protect and maintain their borders against violations of energy or causality by fast "space ship" or "time machine". Likewise, possible gravitational or inertial loopholes or breaches (such as "wormholes") in the metric fabric of spacetime are closed by the "event horizon" and central "singularity" of black holes - where g = c and time stands still. In the black hole, gravity takes over all energetic and entropic functions formerly provided by the electromagnetic metric.

-Gm: The Negentropic Energy of Gravitation

The effect of gravity is essentially to convert the spatial entropy drive of free energy (the intrinsic motion of light, as gauged by "velocity c") to the historical entropy drive of bound energy (the intrinsic motion of time, as gauged by "velocity T"), and vice versa. Gravity decelerates (or accelerates) the spatial expansion of the Cosmos in consequence. The increase in the historical conservation domain of matter (matter's causal information field - historic space-time) is funded by a decrease in the spatial conservation domain of light. The process is reversed by the gravitational conversion of mass to light, as in the stars. Gravity pays the entropy-"interest" on the symmetry debt of matter by the creation of bound energy's time dimension. As we saw earlier, the gravitational conversion of space and the drive of spatial entropy (S) (the intrinsic motion of light) to time and the drive of historical entropy (T) (the intrinsic motion of time) can be represented by the "concept equations":

-Gm(S) = (T)m
-Gm(S) - (T)m = 0

Time is the universal driver of entropy, whether implicitly in free energy (as "frequency"), or explicitly in bound energy.
The intrinsic motion of light is caused by the symmetric, "wavelength", or spatial component of an electromagnetic wave "fleeing" the asymmetric, "frequency", or temporal component, which is, however, an embedded characteristic of light's own nature: frequency multiplied by wavelength = c. The intrinsic motion of light suppresses the time dimension to an implicit condition, maintaining the metric symmetry of space. (Light has no time dimension: light's "clock" is stopped - as discovered by Einstein.) When this process is reversed, time and gravitation are created (space is pulled by time into history, where space self-annihilates, creating more time). The intrinsic motion of light is ultimately caused by an embedded but implicit (as "frequency") temporal entropy drive. All forms of entropy serve the conservation of energy, causality, and symmetry. It should be no surprise that both spatial and temporal entropy are intrinsic to electromagnetic energy, ready to serve its free and bound expressions, massless light and massive particles. It is furthermore satisfying to our sense of the economy of nature to discover that gravitation arises as the mediating force between these two primordial forms of entropy, converting one to the other, with gravity itself conserving entropy, symmetry, causality, and energy. When light's entropy drive (implicit time) is converted to matter's entropy drive (explicit time), the very same temporal component of the electromagnetic wave is involved, simply switching from an implicit to an explicit condition. Space is gravitationally annihilated, leaving a metrically equivalent temporal residue. Time and gravity induce each other endlessly. We can think of this continuous process as the result of either: 1) the intrinsic motion of the time charge, dragging space after it to the point-like center of mass, where space self-annihilates as it tries to squeeze into the time line, exposing a new and metrically equivalent temporal residue (whose intrinsic motion continues the entropic cycle); or 2) the insatiable entropy-energy debt of matter, sucking in more space to pay its temporal expenses, which are endless because (instead of simply paying off the entropy debt) time is being used to create a continuously expanding history (the causal information domain of bound energy, historic space-time). But these explanations are essentially the same, as both depend upon the intrinsic motion of time and the metric equivalence of the dimensions. Time is the active principle of the gravitational "location" charge.

"Spatio-Temporal" Entropy and the Third Law of Thermodynamics

The 3rd law of thermodynamics, due to Nernst (1906), states that at absolute zero (on the Kelvin scale) the entropy of a perfect crystalline system is zero.
At first there seems to be something strange here, since we are used to thinking of entropy increasing as things cool off, and here is something very cold with no entropy at all. However, at absolute zero, a cup of tea (for example) has no heat at all and hence no (thermal) entropy. Its frozen crystal lattice is maximally ordered and quiescent; information entropy (the decay of information) is also at a minimum. It does, however, have temporal and gravitational entropy associated with its rest mass energy, which is unaffected by (low) temperature (time and gravity continue to flow), and which will eventually be expressed through either the gravitational release of bound energy (culminating in the "quantum radiance" of a black hole), or radioactive decay (culminating in "proton decay"). The third law therefore addresses "tertiary" spatio-temporal entropy ("work" and thermal entropy associated with material systems), but not primordial temporal or gravitational expressions of entropy; nor does it affect the entropy associated with velocity c and the expansion of the Universe. (See: "Spatial vs Temporal Entropy".) As the tea cools down, the entropy of the tea plus environment is increased, because radiant heat escapes from the tea to infinity, contributing to the expansion and cooling of the Universe. It is this escape of radiant or free energy which ultimately allows the tea to cool, and which actually requires an expanding Universe to be effective. The Universe is the grandest example of a closed expanding system in which T falls while Q (total heat) remains constant (by the conservation of energy), thus constantly increasing the entropy (S) of the Cosmos (dS = dQ/T). Thermal entropy is gradually reversed in a collapsing Universe; temporal entropy, however, marches on. The Universe continues to age even as it collapses - time is always one-way (and the same way), and as the product of gravitation, time is "quite at home" in the contracting phase of the "Big Crunch". The "Big Crunch" terminates in a cosmic-sized black hole, which cannot sustain itself because it uses up all the space with which it creates its gravitational binding energy. Hence the "Big Crunch" will "flash over" to a new "Big Bang", because light, unlike matter, can create its own conservation/entropy domain (space) from nothing, or rather from its own nature via its own intrinsic motion. (Due to proton decay in the interior, black holes contain nothing but gravitationally bound photons, poised to escape if ever the gravitational bonds relax.) In a cyclic, closed Universe, assuming no energy slips through the cusps, the cosmic entropy tally (temporal vs. spatial) is reset to zero at every beginning. (See also: "The Connection Between Inflation and the 'Big Crunch'".)

The Conservation of Information

It is generally believed that our universe begins as a quantum fluctuation within the "multiverse", containing no net charge and no net energy - something like a scaled-up version of Dirac-Heisenberg virtual particle-antiparticle pair creation in the "vacuum" of space-time. While the details of the process remain highly speculative, conservation principles argue for the general validity of such a conception.
"No net charge" is achieved by the equal admixture of matter and antimatter, and "no net energy" is the consequence of the negative energy input of gravity - gravity is united with the other forces in the cosmic beginning. Scale or magnitude is due to initial conditions in the multiverse, as are the other life-friendly parameters of the physical constants of our cosmos - all a random choice among countless possibilities. "Inflation" from the state of a super-cooled "false vacuum" may also be involved. George Gamow referred to the primordial substance or energy state as "Ylem", which we might conceive of today as a "soup" composed of equal parts of leptoquarks and antileptoquarks, whose positive energy is balanced by the negative energy of gravity in the initiating stage of the "Big Bang" or "Creation Event". Regardless of the details, given such a balanced initial condition, the universe contains at its beginning no net information, since it is symmetric in all respects, and information by definition is asymmetric. The Universe has the potential for information, but only if the initial symmetry of the universe can be broken and its energy converted into asymmetric particles (for example, matter in the absence of antimatter). It is thought that the initial symmetric energy state was broken by the asymmetric decay of electrically neutral leptoquarks vs. antileptoquarks, a decay mediated by the weak force and resulting in an excess of approximately one part per ten billion of leptoquarks, which subsequently decayed through a "cascade" of hyperons to the familiar protons and neutrons of the modern-day universe. Because the Universe begins in a state of no net information, the Universe is not constrained to conserve some initial component of information in any ultimate sense; likewise, no initial limit is imposed (other than available energy) upon the amount of information the Universe may accumulate. But when all charges cancel, as for example in matter-antimatter annihilations, proton decay, or the "quantum radiance" of black holes, the information contained in these charges (and in their combinations and permutations) is canceled also. Nevertheless, due to the temporal/causal nature of matter, information is conserved in the historical realm of space-time, and this is permanent (if space-time is permanent). With the expansion of history, information becomes attenuated as matter's causal matrix is diluted; the information does not disappear, it simply becomes increasingly harder to trace and resolve as its influence spreads into an expanding system of causal networks. This expanding causal web (expanding historically at the metric equivalent of "velocity c") also means the information becomes harder - in fact impossible - to destroy. Because black holes induce proton decay in their interiors, and also annihilate space, black holes destroy matter and matter's information content.
In complete fulfillment of Noether's theorem, black holes gravitationally return matter and information to its original state as the completely symmetric free energy of Hawking's "quantum radiance". But only a cosmic-sized black hole (as in the "Big Crunch") is large enough to encompass the entire space-time mesh of matter's historic domain of Information, the "causal matrix" of matter which is the source of the reality of today and the "universal present moment". Hence in the absence of a "Big Crunch", information is permanently stored in the historical domain of space-time, regardless of what happens to its physical origins.


HEAT DEATH OF THE UNIVERSE

How will it all end? The heat death of the universe is a suggested fate of the universe: its final thermodynamic state, in which it has diminished to a state of no thermodynamic free energy to sustain motion or life. In the language of physics, it has reached maximum entropy. The hypothesis of heat death stems from the 1850s ideas of William Thomson, 1st Baron Kelvin, who extrapolated the theory of heat as mechanical energy loss in nature, as embodied in the first two laws of thermodynamics, to the processes of the universe.

William Thomson (Lord Kelvin) originated the idea of universal heat death in 1852.

Maximum entropy

Equivalently, the heat death of the universe occurs when the universe has reached a state of maximum entropy. This happens when all available energy (such as that in a hot source) has moved to places of lower temperature (such as a cold sink). Once this has happened, no more work can be extracted from the universe: since heat no longer flows, no further work can be obtained from heat transfer. The same kind of equilibrium state will eventually be reached by all other forms of energy (mechanical, electrical, etc.).


Since no more work can be extracted from the universe at that point, it is effectively dead, especially for the purposes of humankind. This concept is quite different from what is commonly referred to as cold death. Cold death occurs when the universe continues to expand forever. Because of this expansion, the universe continues to cool down; eventually it will be too cold to support any life, and it will end in a whimper. The opposite of cold death is not "heat death" but the Big Crunch. The Big Crunch occurs when the universe has enough matter density to contract back on itself, eventually shrinking to a point. This shrinking causes the temperature to rise, resulting in a very hot end of the universe.
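The link between maximum entropy and "no more work" can be made concrete with the Carnot bound, which ties the work extractable from a quantity of heat to the temperature difference it flows across; the bound vanishes when hot and cold sources reach the same temperature. The numbers below are arbitrary examples chosen for this sketch.

```python
def carnot_max_work(q_hot, t_hot, t_cold):
    """Upper bound on the work extractable from heat q_hot drawn from a
    reservoir at t_hot while rejecting waste heat to one at t_cold (kelvin)."""
    return q_hot * (1.0 - t_cold / t_hot)

# A temperature difference makes work available...
print(carnot_max_work(100.0, 400.0, 300.0))  # 25.0 units of work from 100 of heat
# ...but at equilibrium (t_hot == t_cold) no work can be extracted at all:
print(carnot_max_work(100.0, 300.0, 300.0))  # 0.0
```

At maximum entropy every pair of reservoirs sits at the same temperature, so the second case applies everywhere: the universe still contains energy, but none of it is available as work.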


Appendix

This appendix discusses a couple of issues that are somewhat more technical than the ideas discussed in the main part of the paper. In the text I said that the entropy of a system in a given macrostate is roughly given by the number of corresponding microstates. In the first part of the appendix I give the actual definition of entropy, which is proportional to the logarithm of the number of microstates, and explain why that definition is more useful. The only math required for this section is a familiarity with the definition and properties of logarithms. The second part of the appendix relates entropy to a more familiar quantity, temperature. I explain in that section how temperature is defined using the idea of entropy, to explain why energy flows from certain systems (i.e. "hot" ones) to others ("cold" ones). The basic ideas of this section should be accessible with no math, but the precise definition of temperature involves a derivative and thus requires some knowledge of introductory calculus.

A. Entropy is Actually the Logarithm of the Number of Microstates

If a system is in a macrostate for which there are N possible microstates, the entropy is not simply defined as N, but rather as kB log(N). The number kB, called "Boltzmann's constant," is simply a proportionality factor that sets the units of entropy. I'll ignore it for the rest of this appendix. Why is there a logarithm in the definition, though? In one sense it doesn't matter one way or another. Whether you take the logarithm or not, it will still be true that states with higher entropy are more likely to occur, so entropy will tend to increase. The logarithm is convenient, however, because it makes entropy an "extensive" quantity. All that means is that if I have two systems with entropy S1 and S2, then the combined system consisting of both of them has entropy S1 + S2. (Entropy is usually denoted by S; I don't know why.) Many quantities in physics are extensive, such as mass.
If I have a 2 kg weight and a 3 kg weight, then the two of them together have a mass of 5 kg. To see why the logarithm accomplishes this, let's consider two systems whose macrostates correspond to N1 and N2 possible microstates, respectively. How many microstates are possible for the combined system? I would encourage you to stop and try to answer this question for yourself before reading on. The answer is that the combined system has N1 × N2 possible microstates. Say for example that the first system has three possible microstates and the second one has two (N1 = 3, N2 = 2). We can list all the possible states for the combined system in a grid:

1) System 1 in state 1, System 2 in state 1
2) System 1 in state 1, System 2 in state 2
3) System 1 in state 2, System 2 in state 1
4) System 1 in state 2, System 2 in state 2
5) System 1 in state 3, System 2 in state 1
6) System 1 in state 3, System 2 in state 2

The total number of possibilities is N1 × N2, or 6. This same process works for any numbers N1 and N2. Say we label the state of the combined system such that (2,4) means the first system is in its second possible microstate and the second one is in its fourth. The possible microstates for the combined system are:

(1,1) (1,2) (1,3) ... (1,N2)
(2,1) (2,2) (2,3) ... (2,N2)
...
(N1,1) (N1,2) ... (N1,N2)
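This counting, and the additivity the logarithm buys us, can be checked with a short script. It simply pairs every microstate of system 1 with every microstate of system 2 and compares log(N1 × N2) with log(N1) + log(N2); the choice of N1 = 3 and N2 = 2 matches the grid example above.

```python
import math
from itertools import product

n1, n2 = 3, 2
states_1 = range(1, n1 + 1)
states_2 = range(1, n2 + 1)

# Every pairing (i, j) of a microstate of system 1 with one of system 2
combined = list(product(states_1, states_2))
print(len(combined))  # 6, i.e. N1 * N2

# With S = log(N), entropies add while microstate counts multiply
s1, s2 = math.log(n1), math.log(n2)
s_combined = math.log(n1 * n2)
print(abs(s_combined - (s1 + s2)) < 1e-12)  # True
```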

You should be able to convince yourself that there are N1 times N2 entries in the table above. Thus if entropy were defined as S = N, the combined system would have entropy S = S1 × S2. With the logarithm, though, S = log(N1 × N2) = log(N1) + log(N2) = S1 + S2. There is another subtlety in the definition of entropy that I glossed over in the main body of the paper. Consider the example of the room full of air, and recall that the microstate of that system is given by the exact position and velocity of every air molecule in the room. There are actually an infinite number of possible microstates for any given macrostate of the room. This problem arises because position is a continuous quantity, meaning any particular molecule can be at any one of an infinite number of possible positions. This issue can be dealt with rigorously by using integrals over appropriately defined quantities instead of simply counting states. I'm not going to go into the details of that formalism, which is mathematically a bit complicated but conceptually equivalent to counting states.

B. Temperature

As I noted at the beginning of the paper, temperature is a quantity we have experience with in our daily lives. We know what hot and cold things feel like. Somewhat more rigorously, we can say that when a hot thing and a cold thing are put in contact, energy tends to flow from the hot one to the cold one until the two are at the same temperature. To give a fully rigorous definition of temperature, however, requires the concept of entropy. In particular, the reason that energy tends to flow from hot things to cold things is that such a flow increases the entropy of the system as a whole.


To see how this works I will first note that for most systems the entropy increases as the energy increases. Roughly speaking this occurs because a system with a lot of energy can have a lot of different microstates, depending on how that energy is divided among all its particles. Now suppose I put two systems in contact with each other in such a way that they can exchange energy. For example, if they are touching each other then collisions between molecules at the boundary can transfer energy between the two systems. As the molecules jiggle around more or less randomly some energy will go each way between the systems. As each system gains or loses energy its entropy will tend to go up or down. For a system in a particular state you can generally quantify how much its entropy will increase (or decrease) as you add (or subtract) energy. Let's suppose that of our two systems the entropy of system A depends much more strongly on energy than the entropy of system B. That means that if system B loses energy to system A the total entropy of the combined system will go up. Since the combined system will tend over time to evolve into its highest entropy state, on average system B will tend to lose more energy to system A than the other way around. Note that it doesn't matter whether system A or B has more energy; it simply matters which one will gain or lose more entropy by gaining or losing energy. I should emphasize here that there is no physical law requiring energy to flow from system B to system A. All the entropy is telling us is that of all the possible interactions that can go on between the systems there are more of them that will involve energy transfer from B to A than the other way around, so statistically that is what will tend to happen. Finally, I can formulate the preceding ideas mathematically and thus come to a definition of temperature. For each system I can define a quantity (dS/dE), the rate of change of entropy with respect to energy. 
This quantity, often denoted by the Greek letter β (beta), determines how energy will tend to flow between systems; it will tend to flow from systems with small β to systems with large β. For historical reasons temperature is defined in the opposite way: energy tends to flow from systems with high temperature to systems with low temperature. So the temperature T of a system is defined as T = 1/β = 1/(dS/dE). To recap: temperature is a measure of how much the entropy of a system changes in response to an increase or decrease in energy. When two systems are put in contact, energy will tend to flow between them in the direction that increases entropy, which means it will flow from the hotter system to the colder one.
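The argument can be sketched with a toy model. Give each system an entropy function S(E); the form S = C·ln(E) below is a hypothetical choice made for illustration (for which β = dS/dE = C/E, so T = 1/β = E/C, with C playing the role of a heat capacity), and then repeatedly move a small packet of energy in whichever direction raises the total entropy:

```python
import math

def entropy(energy, c):
    # Toy entropy S(E) = C * ln(E), chosen for illustration only
    return c * math.log(energy)

def simulate(e_a, c_a, e_b, c_b, de=0.01, steps=100000):
    """Greedily transfer small energy packets in whichever direction
    raises the total entropy, until no transfer does."""
    for _ in range(steps):
        s_now = entropy(e_a, c_a) + entropy(e_b, c_b)
        s_a_to_b = entropy(e_a - de, c_a) + entropy(e_b + de, c_b)
        s_b_to_a = entropy(e_a + de, c_a) + entropy(e_b - de, c_b)
        if s_a_to_b > s_now and s_a_to_b >= s_b_to_a:
            e_a, e_b = e_a - de, e_b + de
        elif s_b_to_a > s_now:
            e_a, e_b = e_a + de, e_b - de
        else:
            break  # no transfer raises entropy: equilibrium reached
    return e_a, e_b

# System A starts hotter: T = E/C gives T_a = 10, T_b = 2.
e_a, e_b = simulate(10.0, 1.0, 2.0, 1.0)
# Energy flows from hot A to cold B until the temperatures (E/C) match.
print(round(e_a, 1), round(e_b, 1))  # both end up near 6.0
```

Note that the simulation never mentions temperature directly: it only maximizes entropy, and equal temperatures emerge as the stopping condition, which is exactly the definition given above.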

