The Structure of the Physical Universe Vol 1-3 by Dewey B Larson


From: http://www.reciprocalsystem.com/nbm/index.htm

Nothing But Motion

DEWEY B. LARSON

Volume I of a revised and enlarged edition of THE STRUCTURE OF THE PHYSICAL UNIVERSE

Contents

Preface
1. Background
2. A Universe of Motion
3. Reference Systems
4. Radiation
5. Gravitation
6. The Reciprocal Relation
7. High Speed Motion
8. Motion in Time
9. Rotational Combinations
10. Atoms
11. Sub-atomic Particles
12. Basic Mathematical Relations
13. Physical Constants
14. Cosmic Elements
15. Cosmic Ray Decay
16. Cosmic Atom Building
17. Some Speculations
18. Simple Compounds
19. Complex Compounds
20. Chain Compounds
21. Ring Compounds
References

Preface

Nearly twenty years have passed since the first edition of this work was published. As I pointed out in the preface of that first edition, my findings indicate the necessity for a drastic change in the accepted concept of the fundamental relationship that underlies the whole structure of physical theory: the relation between space and time. The physical universe, I find, is not a universe of matter existing in a framework provided by space and time, as seen by conventional science, but a universe of motion, in which space and time are simply the two reciprocal aspects of motion, and have no other significance. What I have done, in brief, is to determine the properties that space and time must necessarily possess in a universe composed entirely of motion, and to express them in the form of a set of postulates. I have then shown that development of the consequences of these postulates by logical and mathematical processes, without making any further assumptions or introducing anything from experience, defines, in detail, a complete theoretical universe that coincides in all respects with the observed physical universe.

Nothing of this nature has ever been developed before. No previous theory has come anywhere near covering the full range of phenomena accessible to observation with existing facilities, to say nothing of dealing with the currently inaccessible, and as yet observationally unknown, phenomena that must also come within the scope of a complete theory of the universe. Conventional scientific theories accept certain features of the observed physical universe as given, and then make assumptions on which to base conclusions as to the properties of these observed phenomena. The new theoretical system, on the other hand, has no empirical content. It bases all of its conclusions solely on the postulated properties of space and time. The theoretical deductions from these postulates provide for the existence of the various physical entities and phenomena (matter, radiation, electrical and magnetic phenomena, gravitation, etc.) as well as establishing the relations between these entities. Since all conclusions are derived from the same premises, the theoretical system is a completely integrated structure, contrasting sharply with the currently accepted body of physical theory, which, as described by Richard Feynman, is "a multitude of different parts and pieces that do not fit together very well."

The last twenty years have added a time dimension to this already unique situation. The acid test of any theory is whether it is still tenable after the empirical knowledge of the subject is enlarged by new discoveries. As Harlow Shapley once pointed out, facts are the principal enemies of theories. Few theories that attempt to cover any more than a severely limited field are able to survive the relentless march of discovery for very long without major changes or complete reconstruction. But no substantive changes have been made in the postulates of this new system of theory in the nearly twenty years since the original publication, years in which tremendous strides have been made in the enlargement of empirical knowledge in many physical areas. Because the postulates, and whatever can be derived from them by logical and mathematical processes without introducing anything from observation or other external sources, constitute the entire system of theory, this absence of substantive change in the postulates means that there has been no change anywhere in the theoretical structure. It has been necessary, of course, to extend the theory by developing more of the details, in order to account for some of the new discoveries, but in most cases the nature of the required extension was practically obvious as soon as the new phenomena or relationships were identified. Indeed, some of the new discoveries, such as the existence of exploding galaxies and the general nature of the products thereof, were actually anticipated in the first published description of the theory, along with many phenomena and relations that are still awaiting empirical verification. Thus the new theoretical system is ahead of observation and experiment in a number of significant respects.

The scientific community is naturally reluctant to change its views to the degree required by my findings, or even to open its journals to discussion of such a departure from orthodox thought. It has been a slow and difficult task to get any significant amount of consideration of the new structure of theory. However, those who do examine this new theoretical structure carefully can hardly avoid being impressed by the logical and consistent nature of the theoretical development. As a consequence, many of the individuals who have made an effort to understand and evaluate the new system have not only recognized it as a major addition to scientific knowledge, but have developed an active personal interest in helping to bring it to the attention of others. In order to facilitate this task an organization was formed some years ago with the specific objective of promoting understanding and eventual acceptance of the new theoretical system, the Reciprocal System of physical theory, as we are calling it. Through the efforts of this organization, the New Science Advocates, Inc., and its individual members, lectures on the new theory have been given at colleges and universities throughout the United States and Canada. The NSA also publishes a newsletter, and has been instrumental in making publication of this present volume possible.

At the annual conference of this organization at the University of Mississippi in August 1977 I gave an account of the origin and early development of the Reciprocal System of theory. It has been suggested by some of those who heard this presentation that certain parts of it ought to be included in this present volume in order to bring out the fact that the central idea of the new system of theory, the general reciprocal relation between space and time, is not a product of a fertile imagination, but a conclusion reached as the result of an exhaustive and detailed analysis of the available empirical data in a number of the most basic physical fields. The validity of such a relation is determined by its consequences, rather than by its antecedents, but many persons may be more inclined to take the time to examine those consequences if they are assured that the relation in question is the product of a systematic inductive process, rather than something extracted out of thin air. The following paragraphs from my conference address should serve this purpose.

Many of those who come in contact with this system of theory are surprised to find us talking of "progress" in connection with it. Some evidently look upon the theory as a construction, which should be complete before it is offered for inspection. Others apparently believe that it originated as some kind of a revelation, and that all I had to do was to write it down. Before I undertake to discuss the progress that has been made in the past twenty years, it is therefore appropriate to explain just what kind of a thing the theory actually is, and why progress is essential. Perhaps the best way of doing this will be to tell you something about how it originated.

I have always been very much interested in the theoretical aspect of scientific research, and quite early in life I developed a habit of spending much of my spare time on theoretical investigations of one kind or another. Eventually I concluded that these efforts would be more likely to be productive if I directed most of them toward some specific goal, and I decided to undertake the task of devising a method whereby the magnitudes of certain physical properties could be calculated from their chemical composition. Many investigators had tackled this problem previously, but the most that had ever been accomplished was to devise some mathematical expressions whereby the effect of temperature and pressure on these properties can be evaluated if certain arbitrary "constants" are assigned to each of the various substances. The goal of a purely theoretical derivation, one which requires no arbitrary assignment of numerical constants, has eluded all of these efforts.

It may have been somewhat presumptuous on my part to select such an objective, but, after all, if anyone wants to try to accomplish something new, he must aim at something that others have not done. Furthermore, I did have one significant advantage over my predecessors, in that I was not a professional physicist or chemist. Most people would probably consider this a serious disadvantage, if not a definite disqualification. But those who have studied the subject in depth are agreed that revolutionary new discoveries in science seldom come from the professionals in the particular fields involved. They are almost always the work of individuals who might be considered amateurs, although they are more accurately described by Dr. James B. Conant as "uncommitted investigators." The uncommitted investigator, says Dr. Conant, is one who does the investigation entirely on his own initiative, without any direction by or responsibility to anyone else, and free from any requirement that the work must produce results.

Research is, in some respects, like fishing. If you make your living as a fisherman, you must fish where you know that there are fish, even though you also know that those fish are only small ones. No one but the amateur can take the risk of going into completely unknown areas in search of a big prize. Similarly, the professional scientist cannot afford to spend twenty or thirty of the productive years of his life in pursuit of some goal that involves a break with the accepted thought of his profession. But we uncommitted investigators are primarily interested in the fishing, and while we like to make a catch, this is merely an extra dividend. It is not essential as it is for those who depend on the catch for their livelihood. We are the only ones who can afford to take the risks of fishing in unknown waters. As Dr. Conant puts it: "Few will deny that it is relatively easy in science to fill in the details of a new area, once the frontier has been crossed. The crucial event is turning the unexpected corner. This is not given to most of us to do... By definition the unexpected corner cannot be turned by any operation that is planned... If you want advances in the basic theories of physics and chemistry in the future comparable to those of the last two centuries, then it would seem essential that there continue to be people in a position to turn unexpected corners. Such a man I have ventured to call the uncommitted investigator."

As might be expected, the task that I had undertaken was a long and difficult one, but after about twenty years I had arrived at some interesting mathematical expressions in several areas, one of the most intriguing of which was an expression for the inter-atomic distance in the solid state in terms of three variables clearly related to the properties portrayed by the periodic table of the elements.

But a mathematical expression, however accurate it may be, has only a limited value in itself. Before we can make full use of the relationship that it expresses, we must know something as to its meaning. So my next objective was to find out why the mathematics took this particular form. I studied these expressions from all angles, analyzing the different terms, and investigating all of the hypotheses as to their origin that I could think of. This was a rather discouraging phase of the project, as for a long time I seemed to be merely spinning my wheels and getting nowhere. On several occasions I decided to abandon the entire project, but in each case, after several months of inactivity, I thought of some other possibility that seemed worth investigating, and I returned to the task. Eventually it occurred to me that, when expressed in one particular form, the mathematical relation that I had formulated for the inter-atomic distance would have a simple and logical explanation if I merely assumed that there is a general reciprocal relation between space and time.

My first reaction to this thought was the same as that of a great many others. The idea of the reciprocal of space, I said to myself, is absurd. One might as well talk of the reciprocal of a pail of water, or the reciprocal of a fencepost. But on further consideration I could see that the idea is not so absurd after all. The only relation between space and time of which we have any actual knowledge is motion, and in motion space and time do have a reciprocal relation. If one airplane travels twice as fast as another, it makes no difference whether we say that it travels twice as far in the same time, or that it travels the same distance in half the time. This is not necessarily a general reciprocal relation, but the fact that it is a reciprocal relation gives the idea of a general relation a considerable degree of plausibility.

So I took the next step, and started considering what the consequences of a reciprocal relation of this nature might be. Much to my surprise, it was immediately obvious that such a relation leads directly to simple and logical answers to no less than a half dozen problems of long standing in widely separated physical fields. Those of you who have never had occasion to study the foundations of physical theory in depth probably do not realize what an extraordinary result this actually is. Every theory of present-day physical science has been formulated to apply specifically to some one physical field, and not a single one of these theories can provide answers to major questions in any other field. They may help to provide these answers, but in no case can any of them arrive at such an answer unassisted. Yet here in the reciprocal postulate we find a theory of the relation between space and time that leads directly, without any assistance from any other theoretical assumptions or from empirical facts, to simple and logical answers to many different problems in many different fields. This is something completely unprecedented. A theory based on the reciprocal relation accomplishes on a wholesale scale what no other theory can do at all.

To illustrate what I am talking about, let us consider the recession of the distant galaxies. As most of you know, astronomical observations indicate that the most distant galaxies are receding from the earth at speeds which approach the speed of light. No conventional physical theory can explain this recession. Indeed, even if you put all of the theories of conventional physics together, you still have no explanation of this phenomenon. In order to arrive at any such explanation the astronomers have to make some assumption, or assumptions, specifically applicable to the recession itself. The current favorite, the Big Bang theory, assumes a gigantic explosion at some hypothetical singular point in the past in which the entire contents of the universe were thrown out into space at their present high speeds. The rival Steady State theory assumes the continual creation of new matter, which in some unspecified way creates a pressure that pushes the galaxies apart at the speeds now observed. But the reciprocal postulate, an assumption that was made to account for the magnitudes of the inter-atomic distances in the solid state, gives us an explanation of the galactic recession without the necessity of making any assumptions about that recession or about the galaxies that are receding. It is not even necessary to arrive at any conclusion as to what a galaxy is. Obviously it must be something (or its existence could not be recognized), and as long as it is something, the reciprocal relation tells us that it must be moving outward away from our location at the speed of light, because the location which it occupies is so moving. On the basis of this relation, the spatial separation between any two physical locations, the "elapsed distance," as we may call it, is increasing at the same rate as the elapsed time.
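
(To state the reciprocity invoked in this illustration compactly: writing $v$ for speed, $s$ for space, and $t$ for time, a minimal formalization, with the notation chosen merely for convenience, is

\[
v = \frac{\Delta s}{\Delta t}, \qquad
2v = \frac{2\,\Delta s}{\Delta t} = \frac{\Delta s}{\Delta t/2},
\]

so that a doubled speed can be read indifferently as twice the space in the same time or the same space in half the time. In the same notation, the statement that the elapsed distance between two locations increases at the same rate as the elapsed time is simply

\[
\frac{ds}{dt} = c \quad\Longrightarrow\quad s(t) = s_0 + c\,t,
\]

where $c$ is the speed of light.)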

Of course, any new answer to a major question that is provided by a new theory leaves some subsidiary questions that require further consideration, but the road to the resolution of these subsidiary issues is clear once the primary problem is overcome. The explanation of the recession, the reason why the most distant galaxies recede with the speed of light, leaves us with the question as to why the closer galaxies have lower recession speeds, but the answer to this question is obvious, since we know that gravitation exerts a retarding effect which is greater at the shorter distances.

Another example of the many major issues of long standing that are resolved almost automatically by the reciprocal postulate is the mechanism of the propagation of electromagnetic radiation. Here, again, no conventional physical theory is able to give us an explanation. As in the case of the galactic recession, it is necessary to make some assumption about the radiation itself before any kind of a theory can be formulated, and in this instance conventional thinking has not even been able to produce an acceptable hypothesis. Newton's assumption of light corpuscles traveling in the manner of bullets from a gun, and the rival hypothesis of waves in a hypothetical ether, were both eventually rejected. There is a rather general impression that Einstein supplied an explanation, but Einstein himself makes no such claim. In one of his books he points out what a difficult problem this actually is, and he concludes with this statement: "Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement."

So, as matters now stand, conventional science has no explanation at all for this fundamental physical phenomenon. But here, too, the reciprocal postulate gives us a simple and logical explanation. It is, in fact, the same explanation that accounts for the recession of the distant galaxies. Here, again, there is no need to make any assumption about the photon itself. It is not even necessary to know what a photon is. As long as it is something, it is carried outward at the speed of light by the motion of the spatial location which it occupies.

No more than a minimum amount of consideration was required in order to see that the answers to a number of other physical problems of long standing similarly emerged easily and naturally on application of the reciprocal postulate. This was clearly something that had to be followed up. No investigator who arrived at this point could stop without going on to see just how far the consequences of the reciprocal relation would extend. The results of that further investigation constitute what we now know as the Reciprocal System of theory. As I have already said, it is not a construction, and not a revelation. Now you can see just what it is. It is nothing more nor less than the total of the consequences that result if there is a general reciprocal relation between space and time.

As matters now stand, the details of the new theoretical system, so far as they have been developed, can be found only in my publications and those of my associates, but the system of theory is not coextensive with what has thus far been written about it. In reality, it consists of any and all of the consequences that follow when we adopt the hypothesis of a general reciprocal relation between space and time. A general recognition of this point would go a long way toward meeting some of our communication problems. Certainly no one should have any objection to an investigation of the consequences of such a hypothesis. Indeed, anyone who is genuinely interested in the advancement of science, and who realizes the unprecedented scope of these consequences, can hardly avoid wanting to find out just how far they actually extend. As a German reviewer expressed it: "Only a careful investigation of all of the author's deliberations can show whether or not he is right. The official schools of natural philosophy should not shun this (considerable, to be sure) effort. After all, we are concerned here with questions of fundamental significance."

Yet, as all of you undoubtedly know, the scientific community, particularly that segment of the community that we are accustomed to call the Establishment, is very reluctant to permit general discussion of the theory in the journals and in scientific meetings. They are not contending that the conclusions we have reached are wrong; they are simply trying to ignore them, and hope that they will eventually go away. This is, of course, a thoroughly unscientific attitude, but since it exists we have to deal with it, and for this purpose it will be helpful to have some idea of the thinking that underlies the opposition. There are some individuals who simply do not want their thinking disturbed, and are not open to any kind of an argument. William James, in one of his books, reports a conversation that he had with a prominent scientist concerning what we now call ESP. This man, says James, contended that even if ESP is a reality, scientists should band together to keep that fact from becoming known, since the existence of any such thing would cause havoc in the fundamental thought of science. Some individuals no doubt feel the same way about the Reciprocal System, and so far as these persons are concerned there is not much that we can do. There is no argument that can counter an arbitrary refusal to consider what we have to offer.

In most cases, however, the opposition is based on a misunderstanding of our position. The issue between the supporters of rival scientific theories normally is: Which is the better theory? The basic question involved is which theory agrees more closely with the observations and measurements in the physical areas to which the theories apply, but since all such theories are specifically constructed to fit the observations, the decision usually has to rest to a large degree on preferences and prejudices of a philosophical or other non-scientific nature. Most of those who encounter the Reciprocal System of theory for the first time take it for granted that we are simply raising another issue, or several issues, of the same kind. The astronomers, for instance, are under the impression that we are contending that the outward progression of the natural reference system is a better explanation of the recession of the distant galaxies than the Big Bang. But this is not our contention at all. We have found that we need to postulate a general reciprocal relation between space and time in order to explain certain fundamental physical phenomena that cannot be explained by any conventional physical theory. But once we have postulated this relationship it supplies simple and logical answers for the major problems that arise in all physical areas. Thus our contention is not that we have a better assortment of theories to replace the Big Bang and other specialized theories of limited scope, but that we have a general theory that applies to all physical fields. These theories of limited applicability are therefore totally unnecessary.

While this present volume is described as the first unit of a "revised and enlarged" edition, the revisions are actually few and far between. As stated earlier, there have been no substantive changes in the postulates since they were originally formulated. Inasmuch as the entire structure of theory has been derived from these postulates by deducing their logical and mathematical consequences, the development of theory in this new edition is essentially the same as in the original, the only significant difference being in a few places where points that were originally somewhat vague have been clarified, or where more direct lines of development have been substituted for the earlier derivations. However, many problems are encountered in getting an unconventional work of this kind into print, and in order to make the original publication possible at all it was necessary to limit the scope of the work, both as to the number of subjects covered and as to the extent to which the details of each subject were developed. For this reason the purpose of this new edition is not only to bring the theoretical structure up to date by incorporating all of the advances that have been made in the last twenty years, but also to present the portions of the original results (approximately half of the total) that had to be omitted from the first edition.

Because of this large increase in the size of the work, the new edition will be issued in several volumes. This first volume is self-contained. It develops the basic laws and principles applicable to physical phenomena in general, and defines the entire chain of deductions leading from the fundamental postulates to each of the conclusions that are reached in the various physical areas that are covered. The subsequent volumes will apply the same basic laws and principles to a variety of other physical phenomena. It has seemed advisable to change the order of presentation to some extent, and as a result a substantial amount of the material omitted from the first edition has been included in this volume, whereas some subjects, such as electric and magnetic phenomena, that were discussed rather early in the first edition have been deferred to the later volumes. For the benefit of those who do not have access to the first edition (which is out of print) and wish to examine what the Reciprocal System of theory has to say about these deferred items before the subsequent volumes are published, I will say that brief discussions of some of these subjects are contained in my 1965 publication, New Light on Space and Time, and some further astronomical information, with particular reference to the recently discovered compact astronomical objects, can be found in Quasars and Pulsars, published in 1971.

It will not be feasible to acknowledge all of the many individual contributions that have been made toward developing the details of the theoretical system and bringing it to the attention of the scientific community. However, I will say that I am particularly indebted to the founders of the New Science Advocates, Dr. Douglas S. Cramer, Dr. Paul F. de Lespinasse, and Dr. George W. Hancock; to Dr. Frank A. Anderson, the current President of the NSA, who did the copy editing for this volume, along with his many other contributions; and to the past and present members of the NSA Executive Board: Steven Berline, Ronald F. Blackburn, Frances Boldereff, James N. Brown, Jr., Lawrence Denslow, Donald T. Elkins, Rainer Huck, Todd Kelso, Richard L. Long, Frank H. Meyer, William J. Mitchell, Harold Norris, Carla Rueckert, Ronald W. Satz, George Windolph, and Hans F. Wuenscher.

D. B. Larson

CHAPTER 1

Background

To the man of the Stone Age the world in which he lived was a world of spirits. Powerful gods hurled shafts of lightning, threw waves against the shore, and sent winter storms howling down out of the north. Lesser beings held sway in the forests, among the rocks, and in the flowing streams. Malevolent demons, often in league with the mighty rulers of the elements, threatened the human race from all directions, and only the intervention of an assortment of benevolent, but capricious, deities made man's continued existence possible at all.

This hypothesis that material phenomena are direct results of the actions of superhuman beings was the first attempt to define the fundamental nature of the physical universe: the first general physical concept. The scientific community currently regards it as a juvenile and rather ridiculous attempt at an explanation of nature, but actually it was plausible enough to remain essentially unchallenged for thousands of years. In fact, it is still accepted, in whole or in part, by a very substantial proportion of the population of the world. Such widespread acceptance is not as inexplicable as it may seem to the scientifically trained mind; it has been achieved only because the "spirit" concept does have some genuine strong points. Its structure is logical. If one accepts the premises he cannot legitimately contest the conclusions. Of course, these premises are entirely ad hoc, but so are many of the assumptions of modern science. The individual who accepts the idea of a "nuclear force" without demur is hardly in a position to be very critical of those who believe in the existence of "evil spirits." A special merit of this physical theory based on the "spirit" concept is that it is a comprehensive theory; it encounters no difficulties in assimilating new discoveries, since all that is necessary is to postulate some new demon or deity. Indeed, it can even deal with discoveries not yet made, simply by providing a "god of the unknown."

But even though a theory may have some good features, or may have led to some significant accomplishments, this does not necessarily mean that it is correct, nor that it is adequate to meet current requirements. Some three or four thousand years ago it began to be realized by the more advanced thinkers that the "spirit" concept had some very serious weaknesses. The nature of these weaknesses is now well understood, and no extended discussion of them is necessary. The essential point to be recognized is that at a particular stage in history the prevailing concept of the fundamental nature of the universe was subjected to critical scrutiny, and found to be deficient. It was therefore replaced by a new general physical concept. This was no minor undertaking. The "spirit" concept was well entrenched in the current pattern of thinking, and it had powerful support from the "Establishment," which is always opposed to major innovations. In most of the world as it then existed such a break with accepted thought would have been impossible, but for some reason an atmosphere favorable to critical thinking prevailed for a time in Greece and neighboring areas, and this profound alteration of the basic concept of the universe was accomplished there.

The revolution in thought came slowly and gradually. Anaxagoras, who is sometimes called the first scientist, still attributed Mind to all objects, inanimate as well as animate. If a rock fell from a cliff, his explanation was that this action was dictated by the Mind of the rock. Even Aristotle retained the "spirit" concept to some degree. His view of the fall of the rock was that this was merely one manifestation of a general tendency of objects to seek their "natural place," and he explained the acceleration during the fall as a result of the fact "that the falling body moved more jubilantly every moment because it found itself nearer home."1 Ultimately, however, these vestiges of the "spirit" concept disappeared, and a new general concept emerged, one that has been the basis of all scientific work ever since.

According to this new concept, we live in a universe of matter: one that consists of material "things" existing in a setting provided by space and time. With the benefit of this conceptual foundation, three thousand years of effort by generation after generation of scientists have produced an immense systematic body of knowledge about the physical universe, an achievement which, it is safe to say, is unparalleled elsewhere in human life.

In view of this spectacular record of success, which has enabled the "matter" concept to dominate the organized thinking of mankind ever since the days of the ancient Greeks, it may seem inconsistent to suggest that this concept is not adequate to meet present-day needs, but the ultimate fate of any scientific concept or theory is determined not by what it has done but by what, if anything, it now fails to do. The graveyard of science is full of theories that were highly successful in their day, and contributed materially to the advance of scientific knowledge while they enjoyed general acceptance: the caloric theory, the phlogiston theory, the Ptolemaic theory of astronomy, the "billiard ball" theory of the atom, and so on. It is appropriate, therefore, that we should, from time to time, subject all of our basic scientific ideas to a searching and critical examination for the purpose of determining whether or not these ideas, which have served us well in the past, are still adequate to meet the more exacting demands of the present.

Once we subject the concept of a universe of matter to a critical scrutiny of this kind it is immediately obvious, not only that this concept is no longer adequate for its purpose, but that modern discoveries have completely demolished its foundations. If we live in a world of material "things" existing in a framework provided by space and time, as envisioned in the concept of a universe of matter, then matter in some form is the underlying feature of the universe: that which persists through the various physical processes. This is the essence of the concept. For many centuries the atom was accepted as the ultimate unit, but when particles smaller (or at least less complex) than atoms were discovered, and it was found that under appropriate conditions atoms would disintegrate and emit such particles in the process, the sub-atomic particles took over the role of the ultimate building blocks. But we now find that these particles are not permanent building blocks either. For instance, the neutron, one of the constituents from which the atom is currently supposed to be constructed, spontaneously separates into a proton, an electron, and a neutrino. Here, then, one of the "elementary particles," the supposedly basic and unchangeable units of matter, transforms itself into other presumably basic and unchangeable units.

In order to save the concept of a universe of matter, strenuous efforts are now being made to explain events of this kind by postulating still smaller "elementary particles" from which the known sub-atomic particles could be constructed. At the moment, the theorists are having a happy time constructing theoretical "quarks" or other hypothetical sub-particles, and endowing these products of the imagination with an assortment of properties such as "charm," "color," and so on, to enable them to fit the experimental data. But this descent to a lower stratum of physical structure could not be accomplished, even in the realm of pure hypothesis, without taking another significant step away from reality.

At the time the atomic theory was originally proposed by Democritus and his contemporaries, the atoms of which they conceived all physical structures to be composed were entirely hypothetical, but subsequent observations and experiments have revealed the existence of units of matter that have exactly the properties that are attributed to the atoms by the atomic theory. As matters now stand, therefore, this theory can legitimately claim to represent reality. But there are no observed particles that have all of the properties that are required in order to qualify as constituents of the observed atoms. The theorists have therefore resorted to the highly questionable expedient of assuming, entirely ad hoc, that the observed sub-atomic particles (that is, particles less complex than atoms) are the atomic constituents, but have different properties when they are in the atoms than those they are found to have wherever they can be observed independently. This is a radical departure from the standard scientific practice of building theories on solid factual foundations, and its legitimacy is doubtful, to say the least, but the architects of the "quark" theories are going a great deal farther, as they are cutting loose from objective reality altogether, and building entirely on assumptions. Unlike the hypothetical "constituents" of the atoms, which are observed sub-atomic particles with hypothetical sets of properties instead of the observed properties, the quarks are hypothetical particles with hypothetical properties.

The unreliability of conclusions reached by means of such forced and artificial constructions should be obvious, but it is not actually necessary to pass judgment on this basis, because irrespective of how far the subdividing of matter into smaller and smaller particles is carried, the theory of "elementary particles" of matter cannot account for the observed existence of processes whereby matter is converted into non-matter, and vice versa. This interconvertibility is positive and direct proof that the "matter" concept is wrong; that the physical universe is not a universe of matter. There clearly must be some entity more basic than matter, some common denominator underlying both matter and non-material phenomena.

Such a finding, which makes conventional thinking about physical fundamentals obsolete, is no more welcome today than the "matter" concept was in the world of antiquity. Old habits of thought, like old shoes, are comfortable, and the automatic reaction to any suggestion of a major change in basic ideas is resistance, if not outright resentment. But if scientific progress is to continue, it is essential not only to generate new ideas to meet new problems, but also to be equally diligent in discarding old ideas that have outlived their usefulness.

There is no actual need for any additional evidence to confirm the conclusion that the currently accepted concept of a universe of matter is erroneous. The observed interconvertibility of matter and non-matter is in itself a complete and conclusive refutation of the assertion that matter is basic. But when the inescapable finality of the answer that we get from this interconvertibility forces recognition of the complete collapse of the concept of a universe of matter, and we can no longer accept it as valid, it is easy to see that this concept has many other shortcomings that should have alerted the scientific community to question its validity long ago. The most obvious weakness of the concept is that the theories that are based upon it have not been able to keep abreast of progress in the experimental and observational fields. Major new physical discoveries almost invariably come as surprises, "unexpected and even unimagined surprises,"2 in the words of Richard Schlegel. They were not anticipated on theoretical grounds, and cannot be accommodated to existing theory without some substantial modification of previous ideas. Indeed, it is doubtful whether any modification of existing theory will be adequate to deal with some of the more recalcitrant phenomena now under investigation.

The current situation in particle physics, for instance, is admittedly chaotic. The outlook might be different if the new information that is rapidly accumulating in this field were gradually clearing up the situation, but in fact it merely seems to deepen the existing crisis. If anything in this area of confusion is clear by this time it is that the "elementary particles" are not elementary. But the basic concept of a universe of matter requires the existence of some kind of an elementary unit of matter. If the particles that are now known are not elementary units, as is generally conceded, then, since no experimental confirmation is available for the hypothesis of sub-particles, the whole theory of the structure of matter, as it now stands, is left without visible support.

Another prime example of the inability of present-day theories based on the "matter" concept to cope with new knowledge of the universe is provided by some of the recent discoveries in astronomy. Here the problem is an almost total lack of any theoretical framework to which the newly observed phenomena can be related. A book published a few years ago that was designed to present all of the significant information then available about the astronomical objects known as quasars contains the following statement, which is still almost as appropriate as when it was written: "It will be seen from the discussion in the later chapters that there are so many conflicting ideas concerning theory and interpretation of the observations that at least 95 percent of them must indeed be wrong. But at present no one knows which 95 percent."3 After three thousand years of study and investigation on the basis of theories founded on the "matter" concept we are entitled to something more than this. Nature has a habit of confronting us with the unexpected, and it is not very reasonable to expect the currently prevailing structure of theory to give us an immediate and full account of all details of a new area, but we should at least be able to place the new phenomena in their proper places with respect to the general framework, and to account for their major aspects without difficulty.

The inability of present-day theories to keep up with experimental and observational progress along the outer boundaries of science is the most obvious and easily visible sign of their inadequacies, but it is equally significant that some of the most basic physical phenomena are still without any plausible explanations. This embarrassing weakness of the current theoretical structure is widely recognized, and is the subject of comment from time to time. For instance, a press report of the annual meeting of the American Physical Society in New York in February 1969 contains this statement: "A number of very distinguished physicists who spoke reminded us of long-standing mysteries, some of them problems so old that they are becoming forgotten—pockets of resistance left far behind the advancing frontiers of physics."4

Gravitation is a good example. It is unquestionably fundamental, but conventional theory cannot explain it. As has been said, it "may well be the most fundamental and least understood of the interactions."5 When a book or an article on this subject appears, we almost invariably find the phenomenon characterized, either in the title or in the introductory paragraphs, as a "mystery," an "enigma," or a "riddle." "But what is gravity, really? What causes it? Where does it come from? How did it get started? The scientist has no answers... in a fundamental sense, it is still as mysterious and inexplicable as it ever was, and it seems destined to remain so." (Dean E. Wooldridge)6

Electromagnetic radiation, another of the fundamental physical phenomena, confronts us with a different, but equally disturbing, problem. Here there are two conflicting explanations of the phenomenon, each of which fits the observed facts in certain areas but fails in others: a paradox which, as James B. Conant observed, "once seemed intolerable," although scientists have now "learned to live with it."7 This, too, is a "deep mystery,"8 as Richard Feynman calls it, at the very base of the theoretical structure. There is a widespread impression that Einstein solved the problem of the mechanism of the propagation of radiation and gave a definitive explanation of the phenomenon. It may be helpful, therefore, to note just what Einstein did have to say on this subject, not only as a matter of clarifying the present status of the radiation problem itself, but to illustrate the point made by P. W. Bridgman when he observed that many of the ideas and opinions to which the ordinary scientist subscribes "have not been thought through carefully but are held in the comfortable belief... that some one must have examined them at some time."9 In one of his books Einstein points out that the radiation problem is an extremely difficult one, and he concludes that: "Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement."10

Here, in this statement, Einstein reveals (unintentionally) just what is wrong with the prevailing basic physical theories, and why a revision of the fundamental concepts of those theories is necessary. Far too many difficult problems have been evaded by simply assuming an answer and "taking it for granted." This point is all the more significant because the shortcomings of the "matter" concept and the theories that it has generated are by no means confined to the instances where no plausible explanations of the observed phenomena have been produced. In many other cases where explanations of one kind or another have actually been formulated, the validity of these explanations is completely dependent on ad hoc assumptions that conflict with observed facts.

The nuclear theory of the atom is typical. Inasmuch as it is now clear that the atom is not an indivisible unit, the concept of a universe of matter demands that it be constructed of "elementary" material units of some kind. Since the observed sub-atomic particles are the only known candidates for this role it has been taken for granted, as mentioned earlier, that the atom is a composite of sub-atomic particles. Consideration of the various possible combinations has led to the hypothesis that is now generally accepted: an atom in which there is a nucleus composed of protons and neutrons, surrounded by some kind of an arrangement of electrons.

But if we undertake a critical examination of this hypothesis it is immediately apparent that there are direct conflicts with known physical facts. Protons are positively charged, and charges of the same sign repel each other. According to the established laws of physics, therefore, a nucleus composed wholly or partly of protons would immediately disintegrate. This is a cold, hard physical fact, and there is not the slightest evidence that it is subject to abrogation or modification under any circumstances or conditions. Furthermore, the neutron is observed to be unstable, with a lifetime of only about 15 minutes, and hence this particle fails to meet one of the most essential requirements of a constituent of a stable atom: the requirement of stability. The status of the electron as an atomic constituent is even more dubious. The properties which it must have to play such a role are altogether different from the properties of the observed electron. Indeed, as Herbert Dingle points out, we can deal with the electron as a constituent of the atom only if we ascribe to it "properties not possessed by any imaginable objects at all."11

A fundamental tenet of science is that the facts of observation and experiment are the scientific court of last resort; they pronounce the final verdict irrespective of whatever weight may be given to other considerations. As expressed by Richard Feynman: "If it (a proposed new law or theory) disagrees with experiment it is wrong. In that simple statement is the key to science.... That is all there is to it."12

The situation with respect to the nuclear theory is perfectly clear. The hypothesis of an atomic nucleus composed of protons and neutrons is in direct conflict with the observed properties of electric charges and the observed behavior of the neutron, while the conflicts between the atomic version of the electron and physical reality are numerous and very serious. According to the established principles of science, and following the rule that Feynman laid down in the foregoing quotation, the nuclear theory should have been discarded summarily years ago. But here we see the power of the currently accepted fundamental physical concept. The concept of a universe of matter demands a "building block" theory of the atom: a theory in which the atom (since it is not an indivisible building block itself) is a "thing" composed of "parts" which, in turn, are "things" of a lower order. In the absence of any way of reconciling such a theory with existing physical knowledge, either the basic physical concept or standard scientific procedures and tests of validity had to be sacrificed. Since abandonment of the existing basic concept of the nature of the universe is essentially unthinkable in the ordinary course of theory construction, sound scientific procedure naturally lost the decision. The conflicts between the nuclear theory and observation were arbitrarily eliminated by means of a set of ad hoc assumptions. In order to prevent the break-up of the hypothetical nucleus by reason of the repulsion between the positive charges of the individual protons it was simply assumed that there is a "nuclear force" of attraction, which counterbalances the known force of repulsion. And in order to build a stable atom out of unstable particles it was assumed (again purely ad hoc) that the neutron, for some unknown reason, is stable within the nucleus.
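
(Two order-of-magnitude notes on the conflicts just cited, using standard textbook values rather than anything asserted in the text above. By Coulomb's law, two protons of charge $e = 1.60\times10^{-19}$ C at a representative nuclear separation of $r \approx 10^{-15}$ m repel with a force

\[
F = \frac{1}{4\pi\varepsilon_0}\,\frac{e^{2}}{r^{2}}
\approx \frac{\left(8.99\times10^{9}\ \mathrm{N\,m^{2}/C^{2}}\right)\left(1.60\times10^{-19}\ \mathrm{C}\right)^{2}}{\left(10^{-15}\ \mathrm{m}\right)^{2}}
\approx 2.3\times10^{2}\ \mathrm{N},
\]

an enormous force acting on a particle of only about $1.7\times10^{-27}$ kg, so the assumed "nuclear force" must be credited with at least this strength at nuclear distances. The instability of the neutron referred to above is the observed decay of the free neutron,

\[
n \;\longrightarrow\; p + e^{-} + \bar{\nu}, \qquad \tau \approx 15\ \mathrm{minutes},
\]

in which, by the modern convention, the emitted neutral particle is written as an antineutrino.)
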
The more difficult problem of inventing some way of justifying the electron as an atomic constituent is currently being handled by assuming that the atomic electron is an entity that transcends reality. It is unrelated to anything that has ever been observed, and is itself

not capable of being observed: an ―abstract thing, no longer intuitable in terms of the familiar aspects of everyday experience,‖13 as Henry Margenau describes it. What the theorists‖ commitment to the ―matter‖ concept has done in this instance is to force them to invent the equivalent of the demons that their primitive ancestors called upon when similarly faced with something that they were unable to explain. The mysterious ―nuclear force‖ might just as well be called the ―god of the nucleus.‖ Like an ancient god, it was designed for one particular purpose; it has no other functions; and there is no independent confirmation of its existence. In effect, the assumptions that have been made in an effort to justify retention of the ―matter‖ concept have involved a partial return to the earlier ―spirit‖ concept of the nature of the universe. Since it is now clear that the concept of a universe of matter is not valid, one may well ask: How has it been possible for physical science to make such a remarkable record of achievement on the basis of an erroneous fundamental concept? The answer is that only a relatively small part of current physical theory is actually derived from the general physical principles based on that fundamental concept. ―A scientific theory,‖ explains R. B. Braithwaite, ―is a deductive system in which observable consequences logically follow from the conjunction of observed facts with the set of the fundamental hypotheses of the system. ―14 But modern physical theory is not one deductive system of the kind described by Braithwaite; it is a composite made up of a great many such systems. As expressed by Richard Feynman: Today our theories of physics, the laws of physics, are a multitude of different parts and pieces that do not fit together very well. We do not have one structure from which all is deduced.15 One of the principal reasons for this lack of unity is that modern physical theory is a hybrid structure, derived from two totally different sources. The small-scale theories applicable to individual phenomena, which constitute the great majority of the ―parts and pieces,‖ are empirical generalizations derived by inductive reasoning from factual premises. At one time it was rather confidently believed that the accumulation of empirically derived knowledge then existing, the inductive science commonly associated with the name of Newton, would eventually be expanded to encompass the whole of the universe. But when observation and experiment began to penetrate what we may call the far-out regions, the realms of the very small, the very large, and the very fast, Newtonian science was unable to keep pace. As a consequence, the construction of basic physical theory fell into the hands of a school of scientists who contend that inductive methods are incapable of arriving at general physical principles. ―The axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention,‖16 was Einstein’s dictum. The result of the ascendancy of this ―inventive‖ school of science has been to split physical science into two separate parts. As matters now stand, the subsidiary principles, those that govern individual physical phenomena and the low-level interactions, are products of induction from factual premises. The general principles, those that apply to large scale phenomena or to the universe as a whole, are, as Einstein describes them, ―pure inventions of the human mind.‖ Where the observations are accurate, and the

generalizations are justified, the inductively derived laws and theories are correct, at least within certain limits. The fact that they constitute by far the greater part of the current structure of physical thought therefore explains why physical science has been so successful in practice. But where empirical data is inadequate or unavailable, present-day science relies on deductions from the currently accepted general principles, the products of pure invention, and this is where physical theory has gone astray. Nature does not agree with these ―free inventions of the human mind.‖ This disagreement with nature should not come as a surprise. Any careful consideration of the situation will show that ―free invention‖ is inherently incapable of arriving at the correct answers to problems of long standing. Such problems do not continue to exist because of a lack of competence on the part of those who are trying to solve them, or because of a lack of adequate methods of dealing with them. They exist because some essential piece or pieces of information are missing. Without this essential information the correct answer cannot be obtained (except by an extremely unlikely accident). This rules out inductive methods, which build upon empirical information. Invention is no more capable of arriving at the correct result without the essential information than induction, but it is not subject to the same limitations. It can, and does, arrive at some result. General acceptance of a theory that is almost certain to be wrong is, in itself, a serious impediment to scientific progress, but the detrimental effect is compounded by the ability of these inventive theories to evade contradictions and inconsistencies by further invention. Because of the almost unlimited opportunity to escape from difficulties by making further ad hoc assumptions, it is ordinarily very difficult to disprove an invented theory. But the definite proof that the physical universe is not a universe of matter now automatically invalidates all theories, such as the nuclear theory of the atom, that are dependent on this ―matter‖ concept. The essential piece of information that has been missing, we now find, is the true nature of the basic entity of which the universe is composed. The issue as to the inadequacy of present-day basic physical theory does not normally arise in the ordinary course of scientific activity because that activity is primarily directed toward making the best possible use of the tools that are available. But when the question is actually raised there is not much doubt as to how it has to be answered. The answer that we get from P. A. M. Dirac is this: The present stage of physical theory is merely a steppingstone toward the better stages we shall have in the future. One can be quite sure that there will be better stages simply because of the difficulties that occur in the physics of today.17 Dirac admits that he and his fellow physicists have no idea as to the direction from which the change will come. As he says, ―there will have to be some new development that is quite unexpected, that we cannot even make a guess about.‖ He recognizes that this new development must be one of major significance. ―It is fairly certain that there will have to be drastic changes in our fundamental ideas before these problems can be solved‖17 he concludes. The finding of this present work is that ―drastic changes in our fundamental

ideas" will indeed be required. We must change our basic physical concept: our concept of the nature of the universe in which we live. Unfortunately, however, a new basic concept is never easy to grasp, regardless of how simple it may be, and how clearly it is presented, because the human mind refuses to look at such a concept in any simple and direct manner, and insists on placing it within the context of previously existing patterns of thought, where anything that is new and different is incongruous at best, and more often than not is definitely absurd. As Butterfield states the case:

Of all forms of mental activity, the most difficult to induce even in the minds of the young, who may be presumed not to have lost their flexibility, is the art of handling the same bundle of data as before, but placing them in a new system of relations with one another by giving them a different framework.18

In the process of education and development, each human individual has put together a conceptual framework which represents the world as he sees it, and the normal method of assimilating a new experience is to fit it into its proper place in this general conceptual framework. If the fit is accomplished without difficulty we are ready to accept the experience as valid. If a reported experience, or a sensory experience of our own, is somewhat outside the limits of our complex of beliefs, but not definitely in conflict, we are inclined to view it skeptically but tolerantly, pending further clarification. But if a purported experience flatly contradicts a fundamental belief of long standing, the immediate reaction is to dismiss it summarily. Some such semi-automatic system for discriminating between genuine items of information and the many false and misleading items that are included in the continuous stream of messages coming in through the various senses is essential in our daily life, even for mere survival. But this policy of using agreement with past experience as the criterion of validity has the disadvantage of limiting the human race to a very narrow and parochial view of the world, and one of the most difficult tasks of science has been, and to some extent continues to be, overcoming the errors that are thus introduced into thinking about physical matters. Only a few of those who give any serious consideration to the subject still believe that the earth is flat, and the idea that this small planet of ours is the center of all of the significant activities of the universe no longer commands any strong support, but it took centuries of effort by the most advanced thinkers to gain general acceptance of the present-day view that, in these instances, things are not what our ordinary experience would lead us to believe. Some very substantial advances in scientific methods and equipment in recent years have enabled investigators to penetrate a number of far-out regions that were previously inaccessible. Here again it has been demonstrated, as in the question with respect to the shape of the earth, that experience within the relatively limited range of our day-to-day activities is not a reliable guide to what exists or is taking place in distant regions. In application to these far-out phenomena the scientific community therefore rejects the "experience" criterion, and opens the door to a wide variety of hypotheses and concepts that are in direct conflict with normal experience: such things as events occurring without specific causes, magnitudes that are inherently incapable of measurement beyond a

certain limiting degree of precision, inapplicability of some of the established laws of physics to certain unusual phenomena, events that defy the ordinary rules of logic, quantities whose true magnitudes are dependent on the location and movement of the observer, and so on. Many of these departures from "common sense" thinking, including almost all of those that are specifically mentioned in this paragraph, are rather ill-advised in the light of the facts that have been disclosed by this present work, but this merely emphasizes the extent to which scientists are now willing to go in postulating deviations from everyday experience. Strangely enough, this extreme flexibility in the experience area coexists with an equally extreme rigidity in the realm of ideas. The general situation here is the same as in the case of experience. Some kind of semi-automatic screening of the new ideas that are brought to our attention is necessary if we are to have any chance to develop a coherent and meaningful understanding of what is going on in the world about us, rather than being overwhelmed by a mass of erroneous or irrelevant material. So, just as purported new experiences are measured against past experience, the new concepts and theories that are proposed are compared with the existing structure of scientific thought and judged accordingly. But just as the "agreement with previous experience" criterion breaks down when experiment or observation enters new fields, so the "agreement with orthodox theory" criterion breaks down when it is applied to proposals for revision of the currently accepted theoretical fundamentals. When agreement with the existing theoretical structure is set up as the criterion by which the validity of new ideas is to be judged, any new thought that involves a significant modification of previous theory is automatically branded as unacceptable. Whatever merits it may actually have, it is, in effect, wrong by definition. Obviously, a strict and undeviating application of this "agreement" criterion cannot be justified, as it would bar all major new ideas. A new basic concept cannot be fitted into the existing conceptual framework, as that framework is itself constructed of other basic concepts, and a conflict is inevitable. As in the case of experience, it is necessary to recognize that there is an area in which this criterion is not legitimately applicable. In principle, therefore, practically everyone concedes that a new theory cannot be expected to agree with the theory that it proposes to replace, or with anything derived directly or indirectly from that previous theory. In spite of the nearly unanimous agreement on this point as a matter of principle, a new idea seldom gets the benefit of it in actual practice. In part this is due to the difficulties that are experienced in trying to determine just what features of current thought are actually affected by the theory replacement. This is not always clear on first consideration, and the general tendency is to overestimate the effect that the proposed change will have on prevailing ideas. In any event, the principal obstacle that stands in the way of a proposal for changing a scientific theory or concept is that the human mind is so constituted that it does not want to change its ideas, particularly if they are ideas of long standing.
This is not so serious in the realm of experience, because the innovation that is required here generally takes the form of an assertion that "things are different" in the particular new area that is under consideration. Such an assertion does not involve a

flat repudiation of previous experience; it merely contends that there is a hitherto unknown limit beyond which the usual experience is no longer applicable. This is the explanation for the almost incredible latitude that the theorists are currently being allowed in the "experience" area. The scientist is prepared to accept the assertion that the rules of the game are different in a new field that is being investigated, even where the new rules involve such highly improbable features as events that happen without causes and objects that change their locations discontinuously. On the other hand, a proposal for modification of an accepted concept or theory calls for an actual change in thinking, something that the human mind almost automatically resists, and generally resents. Here the scientist usually reacts like any layman; he promptly rejects any intimation that the rules which he has already set up, and which he has been using with confidence, are wrong. He is horrified at the mere suggestion that the many difficulties that he is experiencing in dealing with the "parts" of the atom, and the absurdities or near absurdities that he has had to introduce into his theory of atomic structure are all due to the fact that the atom is not constructed of "parts." Inasmuch as the new theoretical system presented in this volume and those that are to follow not only requires some drastic reconstruction of fundamental physical theory, but goes still deeper and replaces the basic concept of the nature of the universe, upon which all physical theory is constructed, the conflicts with previous ideas are numerous and severe. If appraised in the customary manner by comparison with the existing body of thought many of the conclusions that are reached herein must necessarily be judged as little short of outrageous. But there is practically unanimous agreement among those who are in the front rank of scientific investigators that some drastic change in theoretical fundamentals is inevitable. As Dirac said in the statement previously quoted, "There will have to be some new development that is quite unexpected, that we cannot even make a guess about." The need to abandon a basic concept, the concept of a universe of matter that has guided physical thinking for three thousand years, is an "unexpected development," just the kind of thing that Dirac predicted. Such a basic change is a very important step, and it should not be lightly taken, but nothing less drastic will suffice. Sound theory cannot be built on an unsound foundation. Logical reasoning and skillful mathematical manipulation cannot compensate for errors in the premises to which they are applied. On the contrary, the better the reasoning the more certain it is to arrive at the wrong results if it starts from the wrong premises.

CHAPTER 2

A Universe of Motion

The thesis of this present work is that the universe in which we live is not a universe of matter, but a universe of motion, one in which the basic reality is motion, and all physical entities and phenomena, including matter, are merely manifestations of motion. The atom, on this basis, is simply a combination of motions. Radiation is motion, gravitation is motion, an electric charge is motion, and so on.

The concept of a universe of motion is by no means a new idea. As a theoretical proposition it has some very obvious merits that have commended it to thoughtful investigators from the very beginning of systematic science. Descartes' idea that matter might be merely a series of vortexes in the ether is probably the best-known speculation of this nature, but other scientists and philosophers, including such prominent figures as Eddington and Hobbes, have devoted much time to a study of similar possibilities, and this activity is still continuing in a limited way. But none of the previous attempts to use the concept of a universe of motion as the basis for physical theory has advanced much, if any, beyond the speculative stage. The reason why they failed to produce any significant results has now been disclosed by the findings of the investigation upon which this present work is based. The inability of previous investigators to achieve a successful application of the "motion" concept, we find, was due to the fact that they did not use this concept in its pure form. Instead, they invariably employed a hybrid structure, which retained elements of the previously accepted "matter" concept. "All things have but one universal cause, which is motion,"19 says Hobbes. But the assertion that all things are caused by motion is something quite different from saying that they are motions. The simple concept of a universe of motion, without additions or modifications–the concept utilized in this present work–is that of a universe which is composed entirely of motion. The significant difference between these two viewpoints lies in the role that they assign to space and time. In a universe of matter it is necessary to have a background or setting in which the matter exists and undergoes physical processes, and it is assumed that space and time provide the necessary setting for physical action. Many differences of opinion have arisen with respect to the details, particularly with respect to space–whether or not space is absolute and immovable, whether such a thing as empty space is possible, whether or not space and time are interconnected, and so on–but throughout all of the development of thought on the subject the basic concept of space as a setting for the action of the universe has remained intact. As summarized by J. D. North:

Most people would accept the following: Space is that in which material objects are situated and through which they move. It is a background for objects of which it is independent. Any measure of the distances between objects within it may be regarded as a measure of the distances between its corresponding parts.20

Einstein is generally credited with having accomplished a profound alteration of the scientific viewpoint with respect to space, but what he actually did was merely to introduce some new ideas as to the kind of setting that exists. His "space" is still a setting, not only for matter but also for the various "fields" that he envisions. A field, he says, is "something physically real in the space around it."21 Physical events still take place in Einstein's space just as they did in Newton's space or in Democritus' space. Time has always been more elusive than space, and it has been extremely difficult to formulate any clear-cut concept of its essential nature. It has been taken for granted, however, that time, too, is part of the setting in which physical events take place; that is, physical phenomena exist in space and in time.
On this basis it has been hard to specify just wherein time differs from space. In fact the distinction between the two has become

increasingly blurred and uncertain in recent years, and as matters now stand, time is generally regarded as a sort of quasi-space, the boundary between space and time being indefinite and dependent upon the circumstances under which it is observed. The modern physicist has thus added another dimension to the spatial setting, and instead of visualizing physical phenomena as being located in three-dimensional space, he places them in a four-dimensional space-time setting. In all of this ebb and flow of scientific thought the one unchanging element has been the concept of the setting. Space and time, as currently conceived, are the stage on which the drama of the universe unfolds–"a vast world-room, a perfection of emptiness, within which all the world show plays itself away forever."22 This view of the nature of space and time, to which all have subscribed, scientist and layman alike, is pure assumption. No one, so far as the history of science reveals, has ever made any systematic examination of the available evidence to determine whether or not the assumption is justified. Newton made no attempt to analyze the basic concepts. He tells us specifically, "I do not define time, space, place and motion, as being well known to all." Later generations of scientists have challenged some of Newton's conclusions, but they have brushed this question aside in an equally casual and carefree manner. Richard Tolman, for example, begins his discussion of relativity with this statement: "We shall assume without examination . . . the unidirectional, one-valued, one-dimensional character of the time continuum."23 Such an uncritical acceptance of an unsubstantiated assumption "without examination" is, of course, thoroughly unscientific, but it is quite understandable as a consequence of the basic concept of a universe of matter to which science has been committed. Matter, in such a universe, must have a setting in which to exist. Space and time are obviously the most logical candidates for this assignment. They cannot be examined directly. We cannot put time under a microscope, or subject space to a mathematical analysis by a computer. Nor does the definition of matter itself give us any clue as to the nature of space and time. The net effect of accepting the concept of a universe of matter has therefore been to force science into the position of having to take the appearances which space and time present to the casual observer as indications of the true nature of these entities. In a universe of motion, one in which everything physical is a manifestation of motion, this uncertainty does not exist, as a specific definition of space and time is implicit in the definition of motion. It should be understood in this connection that the term "motion," as used herein, refers to motion as customarily defined for scientific and engineering purposes; that is, motion is a relation between space and time, and is measured as speed or velocity. In its simplest form, the "equation of motion," which expresses this definition in mathematical symbols, is v = s/t. The definition as stated, the standard scientific definition, we may call it, is not the only way in which motion can be defined. But it is the only definition that has any relevance to the development in this work. The basic postulate of the work is that the physical universe is composed entirely of motion as thus defined. What we are undertaking to do is to describe the consequences that necessarily follow in a universe composed of this

kind of motion. Whether or not one might prefer to define motion in some other way, and what the consequences of such a definition might be, has no bearing on the present undertaking. Obviously, the equation of motion, which defines motion in terms of space and time, likewise defines space and time in terms of motion. It tells us that in motion space and time are the two reciprocal aspects of that motion, and nothing else. In a universe of matter, the fact that space and time have this significance in motion would not preclude them from having some other significance in a different connection, but when it is specified that motion is the sole constituent of the physical universe, space and time cannot have any significance anywhere in that universe other than that which they have as aspects of motion. Under these circumstances, the equation of motion is a complete definition of the role of space and time in the physical universe. We thus arrive at the conclusion that space and time are simply the two reciprocal aspects of motion and have no other significance. On this basis, space is not the Euclidean container for physical phenomena that is most commonly visualized by the layman; neither is it the modified version of this concept which makes it subject to distortion by various forces and highly dependent on the location and movement of the observer, as seen by the modern physicist. In fact, it is not even a physical entity in its own right at all; it is simply and solely an aspect of motion. Time is not an order of succession, or a dimension of quasi-space; neither is it a physical entity in its own right. It, too, is simply and solely an aspect of motion, similar in all respects to space, except that it is the reciprocal aspect. The simplest way of defining the status of space and time in a universe of motion is to say that space is the numerator in the expression s/t, which is the speed or velocity, the measure of motion, and time is the denominator. If there is no fraction, there is no numerator or denominator; if there is no motion, there is no space or time. Space does not exist alone, nor does time exist alone; neither exists at all except in association with the other as motion. We can, of course, focus our attention on the space aspect and deal with it as if the time aspect, the denominator of the fraction, remains constant (or we can deal with time as if space remains constant). This is the familiar process known as abstraction, one of the useful tools of scientific inquiry. But any results obtained in this manner are valid only where the time (or space) aspect does, in fact, remain constant, or where the proper adjustment is made for whatever changes in this factor do take place. The reason for the failure of previous efforts to construct a workable theory on the basis of the "motion" concept is now evident. Previous investigators have not realized that the "setting" concept is a creature of the "matter" concept; that it exists only because that basic concept envisions material "things" existing in a space-time setting. In attempting to construct a theoretical system on the basis of the concept of a universe of motion while still retaining the "setting" concept of space and time, these theorists have tried to combine two incompatible elements, and failure was inevitable.
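For explicitness, the reciprocal relation just described can be collected in symbols. This is only a restatement of the definitions given above, not an additional assumption:

\[
v = \frac{s}{t}, \qquad s = v\,t, \qquad t = \frac{s}{v}
\]

Holding the time aspect constant at \(t_0\), the abstraction described in the preceding paragraph, gives \(s = v\,t_0\); as stated there, the result is valid only so long as \(t = t_0\) does, in fact, hold.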
When the true situation is recognized it becomes clear that what is needed is to discard the "setting" concept of space and time along with the general concept of a universe of matter, to which it is intimately related, and to use the concept of space and time that is in harmony with the idea of a universe of motion.

In the discussion that follows we will postulate that the physical universe is composed entirely of discrete units of motion, and we will make certain assumptions as to the characteristics of that motion. We will then proceed to show that the mere existence of motions with properties as postulated, without the aid of any supplementary or auxiliary assumptions, and without bringing in anything from experience, necessarily leads to a vast number and variety of consequences which, in total, constitute a complete theoretical universe. Construction of a fully integrated theory of this nature, one which derives the existence and the properties of the various physical entities from a single set of premises, has long been recognized as the ultimate goal of theoretical science. The question now being raised is whether that goal is actually attainable. Some scientists are still optimistic. "Of course, we all try to discover the universal law," says Eugene P. Wigner, "and some of us believe that it will be discovered one day."24 But there is also an influential school of thought which contends that a valid, generally applicable, physical theory is impossible, and that the best we can hope for is a "model" or series of models that will represent physical reality approximately and incompletely. Sir James Jeans expresses this point of view in the following words:

The most we can aspire to is a model or picture which shall explain and account for some of the observed properties of matter; where this fails, we must supplement it with some other model or picture, which will in its turn fail with other properties of matter, and so on.25

When we inquire into the reasons for this surprisingly pessimistic view of the potentialities of the theoretical approach to nature, in which so many present-day theorists concur, we find that it has not resulted from any new discoveries concerning the limitations of human knowledge, or any greater philosophical insight into the nature of physical reality; it is purely a reaction to long years of frustration. The theorists have been unable to find the kind of accurate theory of general applicability for which they have been searching, and so they have finally convinced themselves that their search was meaningless; that there is no such theory. But they simply gave up too soon. Our findings now show that when the basic errors of prevailing thought are corrected the road to a complete and comprehensive theory is wide open. It is essential to understand that this new theoretical development deals entirely with the theoretical entities and phenomena, the consequences of the basic postulates, not with the aspects of the physical universe revealed by observation. When we make certain deductions with respect to the constituents of the universe on the basis of theoretical assumptions as to the fundamental nature of that universe, the entities and phenomena thus deduced are wholly theoretical; they are the constituents of a purely theoretical universe. Later in the presentation we will show that the theoretical universe thus derived from the postulates corresponds item by item with the observed physical universe, justifying the assertion that each theoretical feature is a true and accurate representation of the corresponding feature of the actual universe in which we live.
In view of this one-to-one correspondence, the names that we will attach to the theoretical features will be those that apply to the corresponding physical features, but the development of theory will be concerned exclusively with the theoretical entities and phenomena.

For example, the "matter" that enters into the theoretical development is not physical matter; it is theoretical matter. Of course, the exact correspondence between the theoretical and observed universes that will be demonstrated in the course of this development means that the theoretical matter is a correct representation of the actual physical matter, but it is important to realize that what we are dealing with in the development of theory is the theoretical entity, not the physical entity. The significance of this point is that physical "matter," "radiation," and other physical items cannot be defined with precision and certainty, as there can be no assurance that our observations give us the complete picture. The "matter" that enters into Newton's law of gravitation, for example, is not a theoretically defined entity; it is the matter that is actually encountered in the physical world: an entity whose real nature is still a subject of considerable controversy. But we do know exactly what we are dealing with when we talk about theoretical matter. Here there is no uncertainty whatever. Theoretical matter is just what the postulates require it to be–no more, no less. The same is true of all of the other items that enter into the theoretical development. Although physical observations have not yet given us a definitive answer to the question as to the structure of the basic unit of physical matter, the physical atom–indeed, there is an almost continuous revision of the prevailing ideas on the subject, as new facts are revealed by experiment–we know exactly what the structure of the theoretical atom is, because both the existence and the properties of that atom are consequences that we derive by logical processes from our basic postulates. Inasmuch as the theoretical premises are explicitly defined, and their consequences are developed by sound logical and mathematical processes, the conclusions that are reached with respect to matter, its structure and properties, and all other features of the theoretical universe are unequivocal. Of course, there is always a possibility that some error may have been made in the chain of deductions, particularly if the chain in question is a very long one, but aside from this possibility, which is at a minimum in the early stages of the development, there is no doubt as to the true nature and characteristics of any entity or phenomenon that emerges from that development. Such certainty is impossible in the case of any theory which contains empirical elements. Theories of this kind, a category that includes all existing physical theories, are never permanent; they are always subject to change by experimental discovery. The currently popular theory of the structure of the atom, for example, has undergone a long series of changes since Rutherford and Bohr first formulated it, and there is no assurance that the modifications are at an end. On the contrary, a general recognition of the weakness of the theory as it now stands has stimulated an intensive search for ways and means of bringing it into a closer correspondence with reality, and the current literature is full of proposals for revision. When a theory includes an empirical component, as all current physical theories do, any increase in observational or experimental knowledge about this component alters the sense of the theory, even if the wording remains the same.
For instance, as pointed out earlier, some of the recently discovered phenomena in the sub-atomic region, in which matter is converted to energy, and vice versa, have drastically altered the status of conventional atomic theory. The basic concept of a universe of material "things," to

which physical science has subscribed for thousands of years, requires the atom to be made up of elementary units of matter. The present theory of an atom constructed of protons, neutrons, and electrons is based on the assumption that these are the "elementary particles"; that is, the indivisible and unchangeable basic units of matter. The experimental finding that these particles are not only interconvertible, but also subject to creation from non-matter and transformation into non-matter, has changed what was formerly a plausible (even if somewhat fanciful) theory into a theory that is internally inconsistent. In the light of present knowledge, an atom simply cannot be constructed of "elementary particles" of matter. Some of the leading theorists have already recognized this fact, and are casting about for something that can replace the elementary particle as the basic unit. Heisenberg suggests energy:

Energy . . . is the fundamental substance of which the world is made. Matter originates when the substance energy is converted into the form of an elementary particle.26

But he admits that he has no idea as to how energy can be thus converted into matter. This "must in some way be determined by a fundamental law," he says. Heisenberg's hypothesis is a step in the right direction, in that he abandons the fruitless search for the "indivisible particle," and recognizes that there must be something more basic than matter. He is quite critical of the continuing attempt to invest the purely hypothetical "quark" with a semblance of reality:

I am afraid that the quark hypothesis is not really taken seriously today by its proponents. Questions dealing with the statistics of quarks, the forces that keep them together, the reason why the quarks are never seen as free particles, the creation of pairs of quarks inside an elementary particle, are all left more or less undefined.27

But the hypothesis that makes energy the fundamental entity cannot stand up under critical scrutiny. Its fatal defect is that energy is a scalar quantity, and simply does not have the flexibility that is required in order to explain the enormous variety of physical phenomena. By going one step farther and identifying motion as the basic entity this inadequacy is overcome, as motion can be vectorial, and the addition of directional characteristics to the positive and negative magnitudes that are the sole properties of the scalar quantities opens the door to the great proliferation of phenomena that characterizes the physical universe. It should also be recognized that a theory of the composite type, one that has both theoretical and empirical components, is always subject to revision or modification; it may be altered essentially at will. The theory of atomic structure, for instance, is simply a theory of the atom–nothing else–and when it is changed, as it was when the hypothetical constituents of the hypothetical nucleus were changed from protons and electrons to protons and neutrons, no other area of physical theory is significantly affected. Even when it is found expedient to postulate that the atom or one of its hypothetical constituents does not conform to the established laws of physical science, it is not usually postulated that these laws are wrong; merely that they are not applicable in the particular case. This fact that the revision affects only a very limited area gives the theory

constructors practically a free hand in making alterations, and they make full use of the latitude thus allowed. Susceptibility to both voluntary and involuntary changes is unavoidable as long as the development of theory is still in the stage where complex concepts such as "matter" must be considered unanalyzable, and hence it has come to be regarded as a characteristic of all theories. The first point to be emphasized, therefore, in beginning a description of the new system of theory based on the concept of a universe of motion, the Reciprocal System, as it is called, is that this is not a composite theory of the usual type; it is a purely theoretical structure which includes nothing of an empirical nature. Because all of the conclusions reached in the theoretical development are derived entirely from the basic postulates by logical and mathematical processes the theoretical system is completely inflexible, a point that should be clearly understood before any attempt is made to follow the development of the details of the theory in the following pages. It is not subject to any change or adjustment (other than correction of any errors that may have been made, and extension of the theory into areas not previously covered). Once the postulates have been set forth, the entire character of the resulting theoretical universe has been implicitly defined, down to the minutest detail. Just because the motion of which the universe is constructed, according to the postulates, has the particular properties that have been postulated, matter, radiation, gravitation, electrical and magnetic phenomena, and so on, must exist, and their physical behavior must follow certain specific patterns. In addition to being an inflexible, purely theoretical product that arrives at definite and certain conclusions which are in full agreement with observation, or at least are not inconsistent with any definitely established facts, the Reciprocal System of theory is one of general applicability. It is the first thing of its kind ever formulated: the first that derives the phenomena and relations of all subdivisions of physical activity from the same basic premises. For the first time in scientific history there is available a theoretical system that satisfies the criterion laid down by Richard Schlegel in this statement:

In a significant sense, the ideal of science is a single set of principles, or perhaps a set of mathematical equations, from which all the vast process and structure of nature could be deduced.28

No previous theory has covered more than a small fraction of the total field, and the present-day structure of physical thought is made up of a host of separate theories, loosely related, and at many points actually conflicting. Each of these separate theories has its own set of basic assumptions, from which it seeks to derive relations specifically applicable to certain kinds of phenomena. Relativity theory has one set of assumptions, and is applicable to one kind of phenomena. The kinetic theory has an altogether different set of assumptions, which it applies to a different set of phenomena. The nuclear theory of the atom has still another set of assumptions, and has a field of applicability all its own, and so on. Again quoting Richard Feynman:

Instead of having the ability to tell you what the law of physics is, I have to talk about the things that are common to the various laws; we do not understand the connection between them.15

Furthermore, each of these many theories not only requires the formulation of a special set of basic assumptions tailored to fit the particular situation, but also finds it necessary to introduce a number of observed entities and phenomena into the theoretical structure, taking their existence for granted, and accepting them as "given," so far as the theory is concerned. The Reciprocal System now replaces this multitude of separate theories and subsidiary assumptions with a fully integrated structure of theory derived in its entirety from a single set of basic premises. The status of this system as a general physical theory is not a matter of opinion; it is an objective fact that can easily be verified by an examination of the theoretical development. Such an examination will disclose that the development leads to detailed conclusions in all major physical fields, and that these conclusions are derived deductively from the postulates of the system, without the aid of any supplementary or subsidiary assumptions, and without introducing anything from experience. The new theoretical structure not only covers the field to which the conventional physical theories are applicable; it also gives us answers to the basic physical questions with which the theories based on the "matter" concept have been unable to cope, and it extends the scope of physical theory to the point where it is capable of dealing with those recent experimental and observational discoveries in the far-out regions of science that have been so baffling to those who are trying to understand them in the context of previously existing ideas. Of course, the theoretical development has not yet been carried to the point where it accounts for every detail of the physical universe. That point will not be reached for a long time, if ever. But it has been carried far enough to make it clear that the probability of being unable to deal with the remaining items is negligible, and that the Reciprocal System is, in fact, a general physical theory. The crucial importance of this status as a general physical theory lies in the further fact that it is impossible to construct a wrong general physical theory. At first glance this statement may seem absurd. It may seem almost self-evident that if validity is not required there should be no serious obstacle to constructing some kind of a theory of any subject. But even without any detailed consideration of the factors that are involved in the case of a general physical theory, a review of experience will show that this offhand opinion is incorrect. Construction of a general physical theory has been a prime goal of science for three thousand years, and an immense amount of time and effort has been devoted to the task, with no success whatever. The failure has not been a matter of arriving at the wrong answers; the theorists have not been able to formulate any single theory that would give them any answers, right or wrong, to more than a mere handful of the millions of questions that a general physical theory must answer. A long period of failure to find the correct theory is understandable, since the field that must be covered by a general theory is so immense and so extremely complicated, but thousands of years of inability to construct any general theory are explainable only on the basis that there is a reason why a wrong theory cannot be constructed. This reason is easily understood if the essential nature of the task is carefully examined.
Construction of a general physical theory is analogous to the task of deciphering a very long message in code. If a coded message is short–a few words or a sentence–alternative

interpretations are possible, any or all of which may be wrong, but if the message is a very long one–a whole book in code would be an appropriate analogy to the subject matter of a general physical theory–there is only one way to make any kind of sense out of every paragraph, and that is to find the key to the cipher. If, and when, the message is finally decoded, and every paragraph is intelligible, it is evident that the key to the cipher has been discovered. The possibility that there might be an alternative key, a different set of meanings for the various symbols utilized, that would give every one of the thousands of sentences in the message a different significance, intelligible but wrong, is preposterous. It can therefore be definitely stated that a wrong key to the cipher is impossible. The correct general theory of the universe is the key to the code of nature. As in the case of the cipher, a wrong theory can provide plausible answers in a very limited field, but only the correct theory can be a general theory; one that is capable of producing explanations for the existence and characteristics of all of the immense number of physical phenomena. Thus a wrong general theory, like a wrong key to a cipher, is impossible. The verification of the validity of the theoretical structure as a whole that is provided by the demonstration that it is a general physical theory does not eliminate the need for checking each of the conclusions of the theory individually. It is not unlikely that those persons who carry out the process of development of the details of the theory will make some mistakes. But the fact that the individual conclusions have been derived by extension of a correct general structure of theory creates a strong presumption of their validity, a presumption that cannot be overcome by anything other than definite and conclusive contrary evidence. Hence, as conclusions are reached in the course of the development, it is not necessary to supply positive proof that they are correct, or to argue that the case in favor of their validity is superior to that of any competitor. All that is required is to show that these conclusions are not inconsistent with any definitely established facts. Recognition of this point is essential for a full understanding of the presentation in the pages that follow. Many persons will no doubt take the stand that they find the arguments in favor of certain of the currently accepted ideas more persuasive than those in favor of the conclusions derived from the Reciprocal System. Indeed, some such reactions are inevitable, since there will be a strong tendency to view these conclusions in the context of present-day thought, based on the no longer tenable concept of a universe of matter. But these opinions are irrelevant. Where it can be shown that the conclusions are legitimately derived from the postulates of the system, they participate in the proof of the validity of the structure of theory as a whole, a proof that has been established by two independent means: (1) by showing that this is a general physical theory, and that a wrong general physical theory is impossible, and (2) by showing that none of the authentic deductions from the postulates of the theory is inconsistent with any positively established information from observation or experiment. This second method of verification is analogous to the manner in which we would go about verifying the accuracy of an aerial map. 
The traditional method of map making involves first a series of explorations, then a critical evaluation of the reports submitted by the explorers, and finally the construction of the map on the basis of those reports that

the geographers consider most reliable. Similarly, in the scientific field, explorations are carried out by experiment and observation; reports of the findings and conclusions based on these findings are submitted; these reports are evaluated by the scientific community; and those that are judged to be authentic are added to the scientific map, the accepted body of factual and theoretical knowledge. But this traditional method of map making is not the only way in which a geographic map can be prepared. We may, for instance, devise some photographic system whereby we can secure a representation of an entire area in one operation by a single process. In either case, whether we are offered a map of the traditional kind or a photographic map, we will want to make some tests to satisfy ourselves that the map is accurate before we use it for any important purposes, but because of the difference in the manner in which the maps were produced, the nature of these tests will be altogether different in the two cases. In checking a map of the traditional type we have no option but to verify each significant feature of the map individually, because aside from a relatively small amount of interrelation, each feature is independent. Verification of the position shown for a mountain in one part of the map does not in any way guarantee the accuracy of the position shown for a river in another part of the map. The only way in which the position shown for the river can be verified is to compare what we see on the map with such other information as may be available. Since these collateral data are often scanty, or even entirely lacking, particularly along the frontiers of knowledge, the verification of a map of this kind in either the geographic or the scientific field is primarily a matter of judgment, and the final conclusion cannot be more than tentative at best. In the case of a photographic map, on the other hand, each test that is made is a test of the validity of the process, and any verification of an individual feature is merely incidental. If there is even one place where an item that can definitely be seen on the map is in conflict with something that is positively known to be a fact, this is enough to show that the process is not accurate, and it provides sufficient justification for discarding the map in its entirety. But if no such conflict is found, the fact that every test is a test of the process means that each additional test that is made without finding a discrepancy reduces the mathematical probability that any conflict exists anywhere on the map. By making a suitably large number and variety of such tests the remaining uncertainty can be reduced to the point where it is negligible, thereby definitely establishing the accuracy of the map as a whole. The entire operation of verifying a map of this kind is a purely objective process in which features that can definitely be seen on the map are compared with facts that have been definitely established by other means. One important precaution must be observed in the verification process: a great deal of care must be exercised to make certain of the authenticity of the supposed facts that are utilized for the comparisons. There is no justification for basing conclusions on anything that falls short of positive knowledge. In testing the accuracy of an aerial map we realize that we cannot justify rejecting the map because the location of a lake indicated on the map conflicts with the location that we think the lake occupies.
In this case it is clear that unless we actually know just where the lake is, we have no legitimate basis on which to dispute the location shown on the map. We also realize that there is no need to pay any attention to items of this kind: those about which we are uncertain. There are hundreds,

perhaps thousands, of map features about which we do have positive knowledge, far more than enough for purposes of comparison, so that we need not give any consideration to features about which there is any degree of uncertainty. Because the Reciprocal System of theory is a fully integrated structure derived entirely by one process–deduction from a single set of premises–it is capable of verification in the same manner as an aerial map. It has already passed such a test; that is, the theoretical deductions have been compared with the observed facts in thousands of individual cases distributed over all major fields of physical science without encountering a single definite inconsistency. These deductions disagree with many currently accepted ideas, to be sure, but in all of these cases it can be shown that the current views are not positive knowledge. They are either conclusions based on inadequate data, or they are assumptions, extrapolations, or interpretations. As in the analogous case of the aerial map, conflicts with such items, with what scientists think, are meaningless. The only conflicts that are relevant to the test of the validity of the theoretical system are conflicts with what scientists know. Thus, while recognition of human fallibility prevents asserting that every conclusion purported to be reached by application of this theory is authentic and therefore correct, it can be asserted that the Reciprocal System of theory is capable of producing the right answers if it is properly applied, and to the extent that the development of the consequences of the postulates of the theory has been correctly carried out, the theoretical structure thus derived is a true and accurate representation of the actual physical universe.
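The map-testing argument of this chapter has a simple mathematical core, which can be stated as a minimal formalization (the independence assumption and the symbol p are supplied here for illustration; the text itself assigns no numbers). Suppose that if the derivation process were invalid, each independent comparison of a deduction with a positively known fact would expose a conflict with probability at least p > 0. Then the probability that n such comparisons all pass is

\[
P(\text{all $n$ tests pass} \mid \text{process invalid}) \le (1-p)^n \longrightarrow 0 \quad \text{as } n \to \infty,
\]

which is the sense in which each additional successful test reduces the mathematical probability that a conflict exists anywhere, and in which thousands of passed comparisons can make the remaining uncertainty negligible.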

CHAPTER 3

Reference Systems

As indicated in the preceding chapter, the concept of a universe of motion has to be elaborated to some extent before it is possible to develop a theoretical structure that will describe that universe in detail. The additions to the basic concept must take the form of assumptions–or postulates, a term more commonly applied to the fundamental assumptions of a theory–because even though the additional specifications (the physical specifications, at least) obviously do apply to the particular universe of motion in which we live, there does not appear to be adequate justification for contending that they necessarily apply to any possible universe of motion. It has already been mentioned that we are postulating a universe composed of discrete units of motion. But this does not mean that the motion proceeds in a series of jumps. This basic motion is a progression in which the familiar progression of time is accompanied by a similar progression of space. Completion of one unit of the progression is followed immediately by initiation of another, without interruption. As an analogy, we may consider a chain. Although the chain exists only in discrete units, or links, it is a continuous structure, not a mere juxtaposition of separate units. Whether or not the continuity is a matter of logical necessity is a philosophical question that does not need to concern us at this time. There are reasons to believe that it is, in fact,

a necessity, but if not, we will introduce it into our definition of motion. In any event, it is part of the system. The extensive use of the term "progression" in application to the basic motions with which we are dealing in the initial portions of this work is intended to emphasize this characteristic. Another assumption that will be made is that the universe is three-dimensional. In this connection, it should be realized that all of the supplementary assumptions that were added to the basic concept of a universe of motion in order to define the essential properties of that universe were no more than tentative at the start of the investigation that ultimately led to the development of the Reciprocal System of theory. Some such supplementary assumptions were clearly required, but neither the number of assumptions that would have to be made, nor the nature of the individual assumptions, was clearly indicated by existing knowledge of the physical universe. The only feasible course of action was to initiate the investigation on the basis of those assumptions which seemed to have the greatest probability of being correct. If any wrong assumptions were made, or if some further assumptions were required, the theoretical development would, of course, encounter insurmountable difficulties very quickly, and it would then be necessary to go back and modify the postulates, and try again. Fortunately, the original postulates passed this test, and the only change that has been made was to drop some of the original assumptions that were found to be deducible from the others and therefore superfluous. No further physical postulates are required, but it is necessary to make some assumptions as to the mathematical behavior of the universe. Here our observations of the existing universe do not give us guidance of as definite a character as was available in the case of the physical properties, but there is a set of mathematical principles which, until very recent times, was generally regarded as almost self-evident. The main body of scientific opinion is now committed to the belief that the true mathematical structure of the universe is much more complex, but the assumption that it conforms to the older set of principles is the simplest assumption that can be made. Following the rule laid down by William of Occam, this assumption was therefore made for the purpose of the initial investigation. No modifications have since been found necessary. The complete set of assumptions that constitutes the fundamental postulates of the theory of a universe of motion can be expressed as follows:

First Fundamental Postulate: The physical universe is composed entirely of one component, motion, existing in three dimensions, in discrete units, and with two reciprocal aspects, space and time.

Second Fundamental Postulate: The physical universe conforms to the relations of ordinary commutative mathematics, its primary magnitudes are absolute, and its geometry is Euclidean.

Postulates are justified by their consequences, not by their antecedents, and as long as they are rational and mutually consistent, there is not much that can be said about them, either favorably or adversely. It should be of interest, however, to note that the concept of a universe composed entirely of motion is the only new idea that is involved in the postulates that define the Reciprocal System. There are other ideas which, on the basis of current thinking, could be considered unorthodox, but these are by no means new. For

example, the postulates include the assumption that the geometry of the universe is Euclidean. This is in direct conflict with present-day physical theory, which assumes a non-Euclidean geometry, but it certainly cannot be regarded as an innovation. On the contrary, the physical validity of Euclidean geometry was accepted without question for thousands of years, and there is little doubt but that non-Euclidean geometry would still be nothing but a mathematical curiosity had it not been for the fact that the development of physical theory encountered some serious difficulties which the theorists were unable to surmount within the limitations established by Euclidean geometry, absolute magnitudes, etc. Motion is measured as speed (or velocity, in a context that we will consider later). Inasmuch as the quantity of space involved in one unit of motion is the minimum quantity that takes part in any physical activity, because less than one unit of motion does not exist, this is the unit of space. Similarly, the quantity of time involved in the one unit of motion is the unit of time. Each unit of motion, then, consists of one unit of space in association with one unit of time; that is, the basic motion of the universe is motion at unit speed. Cosmologists often begin their analyses of large-scale physical processes with a consideration of a hypothetical "empty" universe, one in which no matter exists in the postulated space-time setting. But an empty universe of motion is an impossibility. Without motion there would be no universe. The most primitive condition, the situation which prevails when the universe of motion exists, but nothing at all is happening in that universe, is a condition in which units of motion exist independently, with no interaction. In this condition all speed is unity, one unit of space per unit of time, and since all units of motion are alike–they have no property but speed, and that is unity for all–the entire universe is a featureless uniformity. In order that there may be physical phenomena that can be observed or measured there must be some deviation from this one-to-one relation, and since it is the deviation that is observable, the amount of the deviation is a measure of the magnitude of the phenomenon. Thus all physical activity, all change that occurs in the system of motions that constitutes the universe, extends from unity, not from zero. The units of space, time, and motion (speed) that form the background for physical activity are simply scalar magnitudes. As matters now stand, we have no geometric means of representation that will express all three magnitudes coincidentally. But if we assume that the time progression continues at a uniform rate, and we measure this progression by some independent device (a clock), then we can represent the corresponding spatial magnitude by a one-dimensional geometric figure: a line. The length of this line represents the amount of space corresponding to a given time magnitude. Where this time magnitude is unity, the length of the line also represents the speed, the space per unit time. In present-day scientific practice, the datum from which all speed measurements are made, the point identified with the mathematical zero, is some stationary point in the reference system. But, as has been explained, the reference datum for physical magnitudes in a universe of motion is not zero speed but unit speed.
The natural datum is therefore continually moving outward (in the direction of greater magnitudes) from the conventional zero datum, and the true speeds that are effective in the basic physical
interactions can be correctly measured only in terms of deviation upward or downward from unity. From the natural standpoint a motion at unit speed is no effective motion at all. Expressing this in another way, we may say that the natural system of reference, the reference system to which the physical universe actually conforms, is moving outward at unit speed with respect to any stationary spatial reference system.

Any identifiable portion of such a stationary reference system is called a location in that system. While less-than-unit quantities of space do not exist, points within the units can be identified. A spatial location may therefore be of any size, from a point to the amount of space occupied by a galaxy, depending on the context in which the term is used. To distinguish locations in the natural moving system of reference from locations in the stationary reference systems, we will use the term absolute location in application to the natural system. In the context of a fixed reference system an absolute location appears as a point (or some finite spatial magnitude) moving along a straight line.

We are so accustomed to referring motion to a stationary reference system that it seems almost self-evident that an object that has no independent motion, and is not subject to any external force, must remain stationary with respect to some spatial coordinate system. Of course, it is recognized that what seems to be motionless in the context of our ordinary experience is actually moving in terms of the solar system as a reference; what seems to be stationary in the solar system is moving if we use the Galaxy as a reference datum, and so on. Current scientific theory also contends that motion cannot be specified in any absolute manner, and can only be stated in relative terms. However, all previous thought on the subject, irrespective of how it views the details, has made the assumption that the initial point of a motion is some fixed spatial location that can be identified as the spatial zero. But nature is not required to conform to human opinions and beliefs, and in this case does not do so. As indicated in the preceding paragraphs, the natural system of reference in a universe of motion is not a stationary system but a moving system.

Inasmuch as each unit of the basic motion involves one unit of space and one unit of time, it follows that continuation of the motion through an interval during which time is progressing involves a continued increase, or progression, of both space and time. If an absolute spatial location X is in coincidence with spatial location x at time t, then at time t + n this absolute location X will be found at spatial location x + n. As seen in the context of a stationary spatial system of reference, each absolute location is moving outward from its point of reference at a constant unit speed.

Because of this motion of the natural reference system with respect to the stationary systems, an object that has no independent motion, and is not subject to any external force, does not remain stationary in any system of fixed spatial coordinates. It remains at the same absolute location, and therefore moves outward at unit speed from its initial location, and from any object that occupies such a location.

Thus far we have been considering the progression of the natural moving reference system in the context of a one-dimensional stationary reference system. Since we have postulated that the universe is three-dimensional, we may also represent the progression
in a three-dimensional stationary reference system. Because the progression is scalar, what this accomplishes is merely to place the one-dimensional system that has been discussed in the preceding paragraphs into a certain position in the three-dimensional coordinate system. The outward movement of the natural system with respect to the fixed point continues in the same one-dimensional manner.

The scalar nature of the progression of the natural reference system is very significant. A unit of the basic motion has no inherent direction; it is simply a unit of space in association with a unit of time. In quantitative terms it is a unit scalar magnitude: a unit of speed. Scalar motion plays only a very minor role in everyday life, and little attention is ordinarily paid to it. But our finding that the basic motion of the physical universe is inherently scalar changes this picture drastically. The properties of scalar motion now become extremely important.

To illustrate the primary difference between scalar motion and the vectorial motion of our ordinary experience let us consider two cases which involve a moving object X between two points A and B on the surface of a balloon. In the first case, let us assume that the size of the balloon is maintained constant, and that the object X is something capable of independent motion, a crawling insect, perhaps. The motion of X is then vectorial. It has a specific direction in the context of a stationary spatial reference system, and if that direction is BA–that is, X is moving away from B–the distance XA decreases and the distance XB increases. In the second case, we will assume that X is a fixed spot on the balloon surface, and that its motion is due to expansion of the balloon. Here the motion of X is scalar. It is simply outward away from all other points on the balloon surface, and has no specific direction. In this case the motion away from B does not decrease the distance XA. Both XB and XA increase. The motion of the natural reference system relative to any fixed spatial system of reference is motion of this character. It has a positive scalar magnitude, but no inherent direction.

In order to place the one-dimensional progression of an absolute location in a three-dimensional coordinate system it is necessary to define a reference point and a direction. In the subsequent discussion we will be dealing largely with scalar motions that originate at specific points in the fixed coordinate system. The reference point for each of these motions is the point of origin. It follows that the motions can be represented in the conventional fixed system of reference only by the use of multiple reference points. This was brought out in the first edition of this work in the form of a statement that photons (which, as will be shown later, are objects without independent motion, and therefore remain in their absolute locations of origin) ―travel outward in all directions from various points of emission.‖ However, experience has indicated that further elaboration of this point is necessary in order to avoid misunderstandings. The principal stumbling block seems to be a widespread impression that there must be some kind of a conceptually identifiable universal reference system to which the motions of photons and other objects that remain in the same absolute locations can be related.
The expression ―natural reference system‖ probably contributes to this impression, but the fact that a natural reference system exists does not necessarily imply that it must be related in any direct way to the conventional three-dimensional stationary frame of reference.

It is true that the expanding balloon analogy suggests something of this kind, but an examination of this analogy will show that it is strictly applicable only to a situation in which all existing objects are stationary in the natural system of reference, and are therefore moving outward at unit speed. In this situation, any location can be taken as the reference point, and all other locations move outward from that point; that is, all locations move outward away from all other locations. But just as soon as moving objects (entities that are stationary, or moving with low speeds, in the fixed reference system, and are therefore moving with high speeds relative to the natural system of reference–emitters of photons, for example) are introduced into the situation, this simple representation is no longer possible, and multiple reference points become necessary.

In order to apply the balloon analogy to a gravitationally bound physical system it is necessary to visualize a large number of expanding balloons, centered on the various reference points and interpenetrating each other. Absolute locations are defined only in a scalar sense (represented one-dimensionally). They move outward, each from its own reference point, regardless of where those reference points may be located in the three-dimensional spatial coordinate system.

In the case of the photons, each emitting object becomes a point of reference, and since the motions are scalar and have no inherent direction, the direction of motion of each photon, as seen in the reference system, is determined entirely by chance. Each of the emitting objects, wherever it may be in the stationary reference system, and whatever its motions may be relative to that system, becomes the reference point for the scalar photon motion; that is, it is the center of an expanding sphere of radiation.

The finding that the natural system of reference in a universe of motion is a moving system rather than a stationary system, our first deduction from the postulates that define such a universe, is a very significant discovery. Heretofore only one so-called ―universal force,‖ the force of gravitation, has been known. Later in the discussion it will be seen that the customary term ―universal‖ is somewhat too broad in application to gravitation, but this phenomenon (the nature of which will be examined later) affects all units and aggregates of matter within the observational range under all circumstances. While not actually universal, it can appropriately be called a ―general‖ force.

In a universe of motion a force is necessarily a motion, or an aspect of motion. Since we will be working mainly in terms of motion for the present, it will be desirable at this point to establish the relation between the force and motion concepts. For this purpose, let us consider a situation in which an object is moving in one direction with a certain velocity, and is simultaneously moving in the opposite direction with an equal velocity. The net change of position of the object is zero. Instead of looking at the situation in terms of two opposing motions, we may find it convenient to say that the object is motionless, and that this condition has resulted from a conflict of two forces tending to produce motion in opposite directions. On this basis we define force as that which will produce motion if not prevented from so doing by other forces. The quantitative aspects of this relation will be considered later.
The limitations to which a derived concept of this kind is subject will also have consideration in connection with subjects to be covered in the pages that follow. The essential point to be noted here is that ―force‖ is merely a special way of looking at motion.
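Since this derived force concept is quantitative, a small numerical illustration may help fix it. The following Python sketch is an editorial addition, not part of the original text; the function name and the numbers in it are invented for the example. It simply treats each ―force‖ as the motion it would produce if unopposed, in natural units, and takes the net as the algebraic sum.

```python
# A minimal sketch (editorial illustration only) of force defined as "that
# which will produce motion if not prevented from so doing by other forces".
# Each tendency is represented directly as the motion it would produce,
# with opposing directions carrying opposite signs.

def net_motion(tendencies):
    """Algebraic sum of the motions that the individual forces would produce."""
    return sum(tendencies)

# An object moving with equal speeds in opposite directions: the net change
# of position is zero, and we may describe it as motionless under two
# conflicting forces.
print(net_motion([+1, -1]))   # 0

# If the tendencies are unequal, the net motion follows the larger one.
print(net_motion([+1, -3]))   # -2 (net inward, in the sign convention used here)
```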

It has long been realized that while gravitation has been the only known general force, there are many physical phenomena that are not capable of satisfactory explanation on the basis of only one such force. For example, Gold and Hoyle make this comment:

Attempts to explain both the expansion of the universe and the condensation of galaxies must be very largely contradictory so long as gravitation is the only force field under consideration. For if the expansive kinetic energy of matter is adequate to give universal expansion against the gravitational field it is adequate to prevent local condensation under gravity, and vice versa. That is why, essentially, the formation of galaxies is passed over with little comment in most systems of cosmology.29

Karl K. Darrow made the same point in a different connection, emphasizing that gravitation alone is not sufficient in many applications. There must also be what he called an ―antagonist,‖ an ―essential and powerful force,‖ as he described it.

May we now assume that the ultimate particles of the world act on each other by gravity alone, with motion as the sole antagonist to keep the universe from gathering into a single clump? The answer to this question is a forthright and irrevocable No!30

The globular star clusters provide an example illustrating Darrow's point. Like the formation of galaxies, the problem of accounting for the existence of these clusters is customarily ―passed over with little comment‖ by the astronomers, but a discussion of the subject occasionally creeps into the astronomical literature. A rather candid article by E. Finlay-Freundlich which appeared in a publication of the Royal Astronomical Society some years ago admitted that ―the main problem presented by the globular clusters is their very existence as finite systems.‖ Many efforts have been made to explain these clusters on the basis of motions acting in opposition to gravitation, but as this author concedes, there is no evidence of the existence of motions that would be adequate to establish an equilibrium, and he asserts that ―their structure must be determined solely by the gravitational field set up by the stars which constitute such a cluster.‖ This being the case, the only answer he was able to visualize was that the clusters ―have not yet reached the final state of equilibrium,‖ a conclusion that is clearly in conflict with the many observational indications that these clusters are relatively stable long-lived objects. The following judgment that Finlay-Freundlich expressed with respect to the results obtained by his predecessors is equally applicable to the situation as it stands today:

All attempts to explain the existence of isolated globular clusters in the vicinity of the galaxy have hitherto failed.31

But now we find that there is a second ―general force‖ that has not hitherto been recognized, just the kind of an ―antagonist‖ to gravitation that is necessary to explain all of these otherwise inexplicable phenomena. Just as gravitation moves all units and aggregates of matter inward toward each other, so the progression of the natural reference system with respect to the stationary reference systems in common use moves material units and aggregates, as we see them in the context of a stationary reference system, outward away from each other. The net movement of each object, as observed, is
determined by the relative magnitudes of the opposing general motions (forces), together with whatever additional motions may be present. In each of the three illustrative cases cited, the outward progression of the natural reference system provides the missing piece in the physical puzzle. But these cases are not unique; they are only especially dramatic highlights of a clarification of the entire physical picture that is accomplished by the introduction of this new concept of a moving natural reference system. We will find it in the forefront of almost every subject that is discussed in the pages that follow.

It should be recognized, however, that the outward motions that are imparted to physical objects by reason of the progression of the natural reference system are, in a sense, fictitious. They appear to exist only because the physical objects are referred to a spatial reference system that is assumed to be stationary, whereas it is, in fact, moving. But in another sense, these motions are not entirely fictitious, inasmuch as the attribution of motion to entities that are not actually moving takes place only at the expense of denying motion to other entities that are, in fact, moving. These other entities that are stationary relative to the fixed spatial coordinate system are participating in the motion of that coordinate system relative to the natural system. The motion therefore exists, but it is attributed to the wrong entities.

One of the first essentials for an understanding of the system of motions that constitutes the physical universe is to relate the basic motions to the natural reference system, and thereby eliminate the confusion that has been introduced by the use of a fixed reference system. When this is done it can be seen that the units of motion involved in the progression of the natural reference system have no actual physical significance. They are merely units of a reference system in which the fictitious motion of the absolute locations can be represented.

Obviously, the spatial aspect of these fictitious units of motion is equally fictitious, and this leads to an answer to the question as to the relation of the ―space‖ represented by a stationary three-dimensional reference system, extension space, as we may call it, to the space of the universe of motion. On the basis of the explanation given in the preceding pages, if a number of objects without independent motion (such as photons) originate simultaneously from a source that is stationary with respect to a fixed reference system, they are carried outward from the location of origin at unit speed by the motion of the natural reference system relative to the stationary system. The direction of motion of each of these objects, as seen in the context of the stationary system of reference, is determined entirely by chance, and the motions are therefore distributed over all directions. The location of origin is then the center of an expanding sphere, the surface of which contains the locations that the moving objects occupy after a period of time corresponding to the spatial progression represented by the radius of the sphere. Any point within this sphere can be defined by the direction of motion and the duration of the progression; that is, by polar coordinates.

The sphere generated by the motion of the natural reference system relative to the point of origin has no actual physical significance. It is a fictitious result of relating the natural reference system to an arbitrary fixed system of reference.
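The geometry just described is easy to make concrete. The sketch below is an editorial illustration rather than anything from the original text; the function names and the choice of units are assumptions made for the example. It models photons leaving a stationary source: each takes a chance direction, each is carried outward at unit speed by the progression (so a location coinciding with the origin at time zero is found at distance t after t units of time), and all of them therefore lie on the surface of a sphere whose radius equals the elapsed time.

```python
# A minimal sketch (editorial, hypothetical conventions) of the expanding
# sphere of radiation described above. Directions are determined entirely
# by chance; the outward progression contributes one unit of space per
# unit of time along each such direction.

import math
import random

def random_direction():
    """A uniformly random direction in three dimensions (a unit vector)."""
    z = random.uniform(-1.0, 1.0)             # cosine of the polar angle
    phi = random.uniform(0.0, 2.0 * math.pi)  # azimuthal angle
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def position_at(direction, t):
    """Position after t units of time, carried outward at unit speed."""
    dx, dy, dz = direction
    return (dx * t, dy * t, dz * t)

t = 7.0
for _ in range(3):
    x, y, z = position_at(random_direction(), t)
    print(round(math.sqrt(x * x + y * y + z * z), 6))
# prints 7.0 three times: whatever direction chance assigns, every photon
# lies on the sphere of radius t around the point of origin
```

The pair of quantities used here, a direction and a duration, is just the polar-coordinate description mentioned in the text.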
The sphere does, however, define a reference frame that is well adapted to representing the motions of ordinary human experience. Any such sphere can be expanded indefinitely, and the reference system thus defined is therefore coextensive
with all other stationary spatial reference systems. Position in any one such system can be expressed in terms of any other merely by a change of coordinates.

The volume generated in this manner is identical with the entity that is called ―space‖ in previous physical theories. It is the spatial constituent of a universe of matter. As brought out in the foregoing explanation, this entity, extension space, as we have called it, is neither a void, as contended by one of the earlier schools of thought, nor an actual physical entity, as seen by an opposing school. In terms of a universe of motion it is simply a reference system.

An appropriate analogy is the coordinate system on a sheet of graph paper. The original lines on this paper, generally lightly printed in color, have no significance so far as the subject matter of the graph is concerned. But if we draw some lines on this sheet that are relevant to the subject matter, then the printed coordinate system facilitates our assessment of the interrelations between the quantities represented by those lines. Similarly, extension space, per se, has no physical significance. It is merely a reference system, like the colored lines on the graph paper, that facilitates cognition of the relations between the significant entities and phenomena: the motions and their various aspects.

The true ―space‖ that enters into physical phenomena is the spatial aspect of motion. As brought out earlier, it has no independent existence. Nor does time. Each exists only in association with the other as motion. We can, however, isolate the spatial aspect of a particular motion, or type of motion, and deal with it on a theoretical basis as if it were independent, providing that the rate of change of time remains constant, or the appropriate correction is applied for whatever deviation from a constant rate actually does take place. This ability to abstract the spatial aspect and treat it independently is the factor that enables us to relate the spatial aspect of translational motion to the reference system that we recognize as extension space.

It may be of interest to note that this clarification of the nature of extension space gives us a partial answer to the long-standing question as to whether this space, which in the context of a universe of matter is ―space‖ in general, is finite or infinite. As a reference system it is potentially infinite, just as ―number‖ is potentially infinite. But it does not necessarily follow that the number of units of space participating in motions that actually have physical significance is infinite. A complete answer to the question is therefore not available at this stage of the development. The remaining issue will have further consideration later.

The finding that extension space is merely a reference system also disposes of the issue with respect to ―curvature,‖ or other kinds of distortion, of space, and it rules out any participation of extension space in physical action. Such concepts as those involved in Einstein's assertion that ―space has the physical property of transmitting electromagnetic waves‖ are wholly incorrect. No reference system can have any physical effects, nor can any physical action affect a reference system. Such a system is merely a construct: a device whereby physical actions and their results can be represented in usable form.

Extension space, the ―container‖ visualized by most individuals when they think of space, is capable of representing only translational motion, and its spatial aspect, not physical space in general. But the spatial aspect of any motion has the same relation to the physical phenomena in which it is involved as the spatial aspect of translational motion that we can follow by means of its representation in the coordinate system. For example, the space involved in rotation is physical space, but it can be defined in the conventional reference system only with the aid of an auxiliary scalar quantity: the number of revolutions. By itself, that reference system cannot distinguish between one revolution and n revolutions. Nor is it able to represent vibrational motion. As will be found later in the development, even its capability of representing translational motion is subject to some significant limitations.

Regardless of whether motion is translatory, vibratory, or rotational, its spatial aspect is ―space,‖ from the physical standpoint. And whenever a physical process involves space in general, rather than merely the spatial aspect of translational motion, all components of the total space must be taken into account. The full implications of this statement will not become apparent until we are ready to begin consideration of electrical phenomena, but it obviously rules out the possibility of a universal reference system to which all spatial magnitudes can be related. Furthermore, every motion, and therefore every physical object (a manifestation of motion) has a location in three-dimensional time as well as in three-dimensional space, and no spatial reference system is capable of representing both locations.

It may be somewhat disconcerting to many readers to be told that we are dealing with a universe that transcends the stationary three-dimensional spatial reference system in which popular opinion places it: a universe that involves three-dimensional time, scalar motion, a moving reference system, and so on. But it should be realized that this complexity is not peculiar to the Reciprocal System. No physical theory that enjoys any substantial degree of acceptance today portrays the universe as capable of being accurately represented in its entirety within any kind of a spatial reference system. Indeed, the present-day ―official‖ school of physical theory says that the basic entities of the universe are not ―objectively real‖ at all; they are phantoms which can ―only be symbolized by partial differential equations in an abstract multidimensional space‖32 (Werner Heisenberg).

Prior to the latter part of the nineteenth century there was no problem in this area. It was assumed, without question, that space and time were clearly recognizable entities, that all spatial locations could be defined in terms of an absolute spatial reference system, and that time could be defined in terms of a universal uniform flow. But the experimental demonstration of the constant speed of light by Michelson and Morley threw this situation into confusion, from which it has never fully emerged. The prevailing scientific opinion at the moment is that time is not an independent entity, but is a sort of quasi-space, existing in one dimension that is joined in some manner to the three dimensions of space to form a four-dimensional continuum. Inasmuch as this creates as many problems as it solves, it has been further assumed that this continuum is distorted by the presence of matter.
These assumptions, which are basic to relativity theory, the currently accepted doctrine, leave the conventional spatial reference system in
a very curious position. Einstein says that his theory requires us to free ourselves ―from the idea that co-ordinates must have an immediate metrical meaning.‖33 He defines this expression ―a metrical meaning‖ as the existence of a specific relationship between differences of coordinates and measurable lengths and times. Just what kind of a meaning the coordinates can have if they do not represent measurable magnitudes is rather difficult to understand. The truth is that the differences in coordinates, which, according to Einstein, have no metrical meaning, are the spatial magnitudes that enter into almost all of our physical calculations. Even in astronomy, where it might be presumed that any inaccuracy would be very serious, in view of the great magnitudes involved, we get this report from Hannes Alfven:

The general theory (of relativity) has not been applied to celestial mechanics on an appreciable scale. The simpler Newtonian theory is still employed almost exclusively to calculate the motions of celestial bodies.34

Our theoretical development now demonstrates that the differences in coordinates do have ―metrical meaning,‖ and that wherever we are dealing with vectorial motions, or with scalar motions that can be referred to identifiable reference points, these coordinate positions accurately represent the spatial aspects of the translational motions that are involved. This explains why the hypothesis of an absolute spatial reference system for the universe as a whole was so successful for such a long time. The exceptions are exceptional in ordinary practice. The existence of multiple reference points has had no significant impact except in the case of gravitation, and the use of the force concept has sidestepped the gravitational issue. Only in recent years have the observations penetrated into regions outside the boundaries of the conventional reference systems. But we now have to deal with the consequences of this enlargement of the scope of our observations.

In the course of this present work it has been found that the problems introduced into physical science by the extension of experimental and observational knowledge are directly due to the fact that some of the newly discovered phenomena transcend the reference systems into which current science is trying to place them. As we will see later, this is particularly true where variations in time magnitudes are involved, inasmuch as conventional spatial reference systems assume a fixed and unchanging progression of time. In order to get the true picture it is necessary to realize that no single reference system is capable of representing the whole of physical reality.

The universe, as seen in the context of the Reciprocal System of theory, is much more complex than is generally realized, but the simple Newtonian universe was abandoned by science long ago, and the modifications of the Newtonian view that we now find necessary are actually less drastic than those required by the currently popular physical theories. Of course, in the final analysis this makes no difference. Scientific thought will have to conform to the way in which the universe actually behaves, irrespective of personal preferences, but it is significant that all of the phenomena of a universe of motion, as they emerge from the development of the Reciprocal System, are rational, clearly defined, and ―objectively real.‖

CHAPTER 4

Radiation

The basic postulate of the Reciprocal System of theory asserts the existence of motion. In itself, without qualification, this would permit the existence of any conceivable kind of motion, but the additional assumptions included in the postulates act as limitations on the types of motion that are possible. The net result of the basic postulates plus the limitations is therefore to assert the existence of any kind of motion that is not excluded by the limiting assumptions. We may express this point concisely by saying that in the theoretical universe of motion anything that can exist does exist.

The further fact that these permissible theoretical phenomena coincide item by item with the observed phenomena of the actual physical universe is something that will have to be demonstrated step by step as the development proceeds.

Some objections have been raised to the foregoing conclusion that what can exist does exist, on the ground that actuality does not necessarily follow from possibility. But no one is contending that actual existence is a necessary consequence of possible existence, as a general proposition. What is contended is that this is true, for special reasons, in the physical universe. Philosophers explain this as being the result of a ―principle of nature.‖ David Hawkins, for instance, tells us that ―the principle of plenitude . . . says that all things possible in nature are actualized.‖35 What the present development has done is to explain why nature follows such a principle. Our finding is that the basic physical entities are scalar motions, and that the existence of different observable entities and phenomena is due to the fact that these motions necessarily assume specific directions when they appear in the context of a three-dimensional frame of reference. Inasmuch as the directions are determined by chance, there is a finite probability corresponding to every possible direction, and thus every possibility becomes an actuality.

It should be noted that this is exactly the same principle that was applied in Chapter 3 to explain why an expanding sphere of radiation emanates from each radiation source (a conclusion that is not challenged by anyone). In this case, too, scalar motions exist, each of which takes one of certain permissible directions (limited by the translational character of the motions), and these motions are distributed over all of the directions.

Inasmuch as it has been postulated that motion, as defined earlier, is the sole constituent of the physical universe, we are committed to the proposition that every physical entity or phenomenon is a manifestation of motion. The determination of what entities, phenomena, and processes can exist in the theoretical universe therefore reduces to a matter of ascertaining what kinds of motion and combinations of motions can exist in such a universe, and what changes can take place in these motions. Similarly, in relating the theoretical universe to the observed physical universe, the question as to what any observed entity or phenomenon is never arises. We always know what it is. It is a motion, a combination of motions, or a relation between motions. The only question that is ever at issue is what kinds of motions are involved.

There has been a sharp difference of opinion among those interested in the philosophical
aspects of science as to whether the process of enlarging scientific understanding is ―discovery‖ or ―invention.‖ This is related to the question as to the origin of the fundamental principles of science that was discussed in Chapter 1, but it is a broader issue that applies to all scientific knowledge, and involves the inherent nature of that knowledge. The specific point at issue is clearly stated by R. B. Lindsay in these words:

Application of the term ―discovery‖ implies that there is an external world ―out there‖ wholly independent of the observer and with built-in regularities and laws waiting to be uncovered and revealed. They have always been there and presumably always will be; our task is by diligent search to find out what they are. On the other hand, the term ―invention‖ implies that the physicist uses not only his observations but his imaginative powers to construct points of view that identify with experience.36

The ―discovery‖ concept, says Lindsay, implies that the acquisition of scientific knowledge is cumulative, and that ultimately our understanding of the physical world should be essentially complete. On the contrary, the ―point of view of invention‖ means that the process of creating new experience and the construction of new ideas to cope with this experience go hand in hand.‖ On this basis, ―the whole activity is open-ended‖; there is no place for the idea of completeness.

The Reciprocal System now provides a definitive answer to this question. It not only establishes scientific investigation as a process of discovery, but reduces that discovery to deduction and verification of the deductions. All of the information that is necessary in order to arrive at a full description of any theoretically possible entity or phenomenon is implicit in the postulates. A full development of the consequences of the postulates therefore defines a complete theoretical universe.

As will be seen in the pages that follow, the physical processes of the universe include a continuing series of interchanges between vectorial motions and scalar motions. In all of these interchanges causality is maintained; no motions of either type occur except as a result of previously existing motions. The concept of events occurring without cause, which enters into some of the interpretations of the theories included in the current structure of physics, is therefore foreign to the Reciprocal System. But the universe of motion is not deterministic in the strict Laplacian sense, because the directions of the motions are continually being redetermined by chance processes. The description of the physical universe that emerges from development of the consequences of the postulates of the Reciprocal System therefore identifies the general classes of entities and phenomena that exist in the universe, and the relations between them, rather than specifying the exact result of every interaction, as a similarly complete theory would do if it were deterministic.

In beginning our examination of these physical entities and phenomena, the first point to be noted is that the postulates require the existence of real units of motion, units that are similar to the units of motion involved in the progression of the natural reference system, except that they actually exist, rather than being fictitious results of relating motion to an arbitrary reference system.
These independent units of motion, as we will call them, are superimposed on the moving reference background in much the same manner as that in which matter is supposed to exist in the basic space of previous physical theory. Since they are units of the
same kind, however, these independent units are interrelated with the units of the background motion, rather than being separate and distinct from it, in the manner in which matter is presumed to be distinct from the space-time background in the theories based on the ―matter‖ concept. As we will see shortly, some of the independent motions have components that are coincident with the background motion, and these components are not effective from the physical standpoint; that is, their effective physical magnitude is zero.

A point of considerable significance is that while the postulates permit the existence of these independent motions, and, on the basis of the principle previously stated, they must therefore exist in the universe of motion defined by the postulates, those postulates do not provide any mechanism for originating independent motions. It follows that the independent motions now existing either originated coincidentally with the universe itself, or else were originated subsequently by some outside agency. Likewise, the postulates provide no mechanism for terminating the existence of these independent motions. Consequently, the number of effective units of such motion now existing can neither be increased nor diminished by any process within the physical system.

This inability to alter the existing number of effective units of independent motion is the basis for what we may call the general conservation law, and the various subsidiary conservation laws applying to specific physical phenomena. It suggests, but does not necessarily require, a limitation of the independent units of motion to a finite number. The issue as to the finiteness of the universe does not enter into any of the phenomena that will be examined in this present volume, but it will come up in connection with some of the subjects that will be taken up later, and it will be given further consideration then.

The Reciprocal System of theory deals only with the physical universe as it now exists, and reaches no conclusions as to how that universe came into being, nor as to its ultimate fate. The theoretical system is therefore completely neutral on the question of creation. It is compatible with either the hypothesis of creation by some agency, or the hypothesis that the universe has always existed. Continuous creation of matter by action of the physical mechanism itself, as postulated by the Steady State theory of cosmology, is ruled out, and there is nothing in that mechanism that will allow the universe to arrive at any kind of termination of its own accord. The question of creation or termination by action of an outside agency is beyond the scope of the theoretical development.

Turning now to the question as to what kinds of motion are possible at the basic level, we note that scalar magnitudes may be either positive (outward, as represented in a spatial reference system), or negative (inward). But as we observe motion in the context of a fixed reference system, the outward progression of the natural reference system is always present, and thus every motion includes a one-unit outward component. The discrete unit postulate prevents any addition to an effective unit, and independent outward motion is therefore impossible. All independent motion must have net inward or negative magnitude. Furthermore, it must be continuous and uniform at this stage of the development, because no mechanism is yet available whereby discontinuity or variability can be produced.
Since the outward progression always exists, independent continuous negative motion is not possible by itself, but it can exist in combination with the ever-present outward progression.

The result of such a combination of unit negative and unit positive motion is zero motion relative to a stationary coordinate system.

Another possibility is simple harmonic motion, in which the scalar direction of movement reverses at each end of a unit of space, or time. In such motion, each unit of space is associated with a unit of time, as in unidirectional translational motion, but in the context of a stationary three-dimensional spatial reference system the motion oscillates back and forth over a single unit of space (or time) for a certain period of time (or space).

At first glance, it might appear that the reversals of scalar direction at each end of the basic unit are inadmissible in view of the absence of any mechanism for accomplishing a reversal. However, the changes of scalar direction in simple harmonic motion are actually continuous and uniform, as can be seen from the fact that such motion is a projection of circular motion on a diameter. The net effective speed varies continuously and uniformly from +1 at the midpoint of the forward movement to zero at the positive end of the path of motion, and then to -1 at the midpoint of the reverse movement and zero at the negative end of the path. The continuity and uniformity requirements are met both by a continuous, uniform change of direction, and by a continuous, uniform change of magnitude.

As pointed out earlier, the theoretical structure that we are developing from the fundamental postulates is a description of what can exist in the theoretical universe of motion defined by those postulates. The question as to whether a certain feature of this theoretical universe does or does not correspond to anything in the actual physical universe is a separate issue that is explored in a subsequent step in the project, to be started shortly, in which the theoretical universe is compared item by item with the observed universe. At the moment, therefore, we are not concerned with whether or not simple harmonic motion does exist in the actual physical universe, why it exists, if it does, or how it manifests itself. All that we need to know for present purposes is that inasmuch as this kind of motion is continuous, and is not excluded by the postulates, it is one of the kinds of motion that exists in the theoretical universe of motion, under the most basic conditions.

Under these conditions simple harmonic motion is confined to individual units. When the motion has progressed for one full unit, the discrete unit postulate specifies that a boundary exists. There is no discontinuity, but at this boundary one unit terminates and another begins. Whatever processes may have been under way in the first unit cannot carry over into the next. They cannot be divided between two totally independent units. Consequently, a continuous change from positive to negative can occur only within one unit of either space or time.

As explained in Chapter 3, motion, as herein defined, is a continuous process–a progression–not a succession of jumps. There is progression even within the units, simply because these are units of progression, or scalar motion. For this reason, specific points within the unit–the midpoint, for example–can be identified, even though they do not exist independently. The same is true of the chain used as an analogy in the preceding discussion. Although the chain exists only in discrete units, or links, we can distinguish various portions of a link.
For instance, if we utilize the chain as a means of measurement, we can measure 10-1/2 links, even though half of a link would not qualify as part of the chain. Because of this capability of identifying the different portions of the unit, we see the vibrating unit as
following a definite path. In defining this path we will need to give some detailed consideration to the matter of direction.

In the first edition the word ―direction‖ was used in four different senses. Exception was taken to this practice by a number of readers, who suggested that it would be helpful if ―direction‖ were employed with only one significance, and different names were attached to the other three concepts. When considered purely from a technical standpoint, the earlier terminology is not open to legitimate criticism, as using words in more than one sense is unavoidable in the English language. However, anything that can be done to facilitate understanding of the presentation is worth serious consideration. Unfortunately, there is no suitable substitute for ―direction‖ in most of these applications.

Some of the objections to the previous terminology were based on the ground that scalar quantities, by definition, have no direction, and that using the term ―direction‖ in application to these quantities, as well as to vectorial quantities, is contradictory and leads to confusion. There is merit in this objection, to be sure, in any application where we deal with scalar quantities merely as positive and negative magnitudes. But as soon as we view the scalar motions in the context of a fixed spatial reference system, and begin talking about ―outward‖ and ―inward,‖ as we must do in this work, we are referring not to the scalar magnitudes themselves, but to the representation of these magnitudes in a stationary spatial reference system, a representation that is necessarily directional. The use of directional language in this case therefore appears to be unavoidable.

There are likewise some compelling reasons for continuing to use the term ―direction in time‖ in application to the temporal property analogous to the spatial property of direction. We could, of course, coin a new word for this purpose, and there would no doubt be certain advantages in so doing. But there are also some very definite advantages to be gained by utilizing the word ―direction‖ with reference to time as well as with reference to space. Because of the symmetry of space and time, the property of time that corresponds to the familiar property of space that we call ―direction‖ has exactly the same characteristics as the spatial property, and by using the term ―direction in time,‖ or ―temporal direction,‖ as a name for this property we convey an immediate understanding of its nature and characteristics that would otherwise require a great deal of discussion and explanation. All that is then necessary is to keep in mind that although direction in time is like direction in space, it is not direction in space.

Actually, it should not be difficult to get away from the habit of always interpreting ―direction‖ as meaning ―direction in space‖ when we are dealing with motion. We already recognize that there is no spatial connotation attached to the term when it is used elsewhere. We habitually use ―direction‖ and directional terms of one kind or another in speaking of scalar quantities, or even in connection with items that cannot be expressed in physical imagery at all. We speak of wages and prices as moving in the same direction, temperature as going up or down, a change in the direction of our thinking, and so on. Here we realize that we are using the word ―direction‖ without any spatial significance.
There should be no serious obstacle in the way of a similar conception of the meaning of ―direction in time.‖ In this edition the term ―direction‖ will not be used in referring to the deviations upward or
downward from unit speed. In the other senses in which the term was originally used it seems essential to continue utilizing directional language, but as an alternative to the further limitations on the use of the term ―direction‖ that have been suggested we will use qualifying adjectives wherever the meaning of the term is not obvious from the context. On this basis vectorial direction is a specific direction that can be fully represented in a stationary coordinate system. Scalar direction is simply outward or inward, the spatial representation of positive or negative scalar magnitudes respectively. Wherever the term ―direction‖ is used without qualification it will refer to vectorial direction. If there is any question as to whether the direction (scalar or vectorial) under consideration is a direction in space or a direction in time, this information will also be given.

Vectorial motion is motion with an inherent vectorial direction. Scalar motion is inward or outward motion that has no inherent vectorial direction, but is given a direction by the factors involved in its relation to the reference system. This imputed vectorial direction is independent of the scalar direction except to the extent that the same factors may, in some instances, affect both.

As an analogy, we may consider a motor car. The motion of this car has a direction in three-dimensional space, a vectorial direction, while at the same time it has a scalar direction, in that it is moving either forward or backward. As a general proposition, the vectorial direction of this vehicle is independent of its scalar direction. The car can run forward in any vectorial direction, or backward in any direction. If the car is symmetrically constructed so that the front and back are indistinguishable, we cannot tell from direct observation whether it is moving forward or backward. The same is true in the case of the simple scalar motions. For example, we will find in the pages that follow that the scalar direction of a falling object is inward, whereas the scalar direction of a beam of light is outward. If the two happen to traverse the same path in the same vectorial direction, as they may very well do, there is nothing observable that will distinguish between the inward and outward motion. In the usual situation the scalar direction of an observed motion must be determined from collateral information independently of the observed vectorial direction.

The magnitude of a simple harmonic motion, like that of any other motion, is a speed, the relation between the number of units of space and the number of units of time participating in the motion. The basic relation, one unit of space per unit of time, always remains the same, but by reason of the directional reversals, which result in traversing the same unit of one component repeatedly, the speed of a simple harmonic motion, as it appears in a fixed reference system, is 1/x (or x/1). This means that each advance of one unit in space (or time) is followed by a series of reversals of scalar direction that increase the number of units of time (or space) to x, before another advance in space (or time) takes place. At this point the scalar direction remains constant for one unit, after which another series of reversals takes place. Ordinarily the vectorial direction reverses in unison with the scalar direction, but each end of a unit is the reference point for the position of the next unit in the reference system, and conformity with the scalar reversals is therefore not mandatory.
Consequently, in order to maintain continuity in the relation of the vectorial motion to the fixed reference system the
vectorial direction continues the regular reversals at the points where the scalar motion advances to a new unit of space (or time). The relation between the scalar and vectorial directions is illustrated in the following tabulation, which represents two sections of a 1/3 simple harmonic motion. The vectorial directions are expressed in terms of the way the movement would appear from some point not in the line of motion.

Unit Number          1        2        3        4        5        6
Scalar direction     inward   outward  inward   inward   outward  inward
Vectorial direction  right    left     right    left     right    left
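For readers who find a generative statement of the pattern helpful, the short Python sketch below reproduces the tabulation above. It is an editorial illustration, not part of the original text; the function name and the right/left labels are conventions adopted for the example. The rule it encodes is the one just stated: the vectorial direction reverses every unit without exception, while the scalar direction reverses within each block of x units but holds constant for one unit wherever the motion advances to a new unit of space (or time).

```python
# A minimal sketch (editorial) that generates the direction pattern
# tabulated above for a 1/x simple harmonic motion.

def harmonic_directions(x: int, n_units: int):
    rows = []
    scalar = "inward"
    vectorial = "right"
    for unit in range(1, n_units + 1):
        rows.append((unit, scalar, vectorial))
        # the vectorial direction reverses every unit, without exception
        vectorial = "left" if vectorial == "right" else "right"
        # the scalar direction also reverses, except at a block boundary,
        # where it remains constant for one unit while the motion advances
        # one unit of space (or time)
        if unit % x != 0:
            scalar = "outward" if scalar == "inward" else "inward"
    return rows

for unit, s, v in harmonic_directions(x=3, n_units=6):
    print(unit, s, v)
# 1 inward right / 2 outward left / 3 inward right /
# 4 inward left / 5 outward right / 6 inward left
```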

The simple harmonic motion thus remains permanently in a fixed position in the dimension of motion, as seen in the context of a stationary reference system; that is, it is an oscillatory, or vibratory, motion. An alternative to this pattern of reversals will be discussed in Chapter 8.

Like all other absolute locations, the absolute location occupied by the vibrating unit, the unit of simple harmonic motion, is carried outward by the progression of the natural reference system, and since the linear motion of the vibrating unit has no component in the dimensions perpendicular to the line of oscillation, the outward progression at unit speed takes place in one of these free dimensions. Inasmuch as this outward progression is continuous within the unit as well as from one unit of the reference frame to the next, the combination of a vibratory motion and a linear motion perpendicular to the line of vibration results in a path which has the form of a sine curve. Because of the dimensional relationship between the oscillation and the linear progression, there is a corresponding relation between the vectorial directions of these two components of the total motion, as seen in the context of a stationary reference system, but this relation is fixed only between these two components. The position of the plane of vibration in the stationary spatial system of reference is determined by chance, or by the characteristics of the originating object.

Although the basic one-to-one space-time ratio is maintained in the simple harmonic motion, and the only change that takes place is from positive to negative and vice versa, the net effect, from the standpoint of a fixed system of reference, is to confine one component–either space or time–to a single unit, while the other component extends to n units. The motion can thus be measured in terms of the number of oscillations per unit of time, a frequency, although it is apparent from the foregoing explanation that it is actually a speed. The conventional measurement in terms of frequency is possible only because the magnitude of the space (or time) term remains constant at unity.

Here, in this oscillating unit, the first manifestation of independent motion (that is, motion that is separate and distinct from the outward motion of the natural reference system) that has emerged from the theoretical development is the first physical object. In the motion of
this object we also have the first instance of ―something moving.‖ Up to this time we have been considering only the basic motions, relations between space and time that do not involve movement of any ―thing.‖

Experience in presenting the theory to college audiences has indicated that many persons are unable to conceive of the existence of motion without something moving, and are inclined to argue that this is impossible. It should be realized, however, that we are definitely committed to this concept just as soon as we postulate a universe composed entirely of motion. In such a universe, ―things‖ are combinations of motions, and motion is thus logically prior to ―things.‖ The concept of a universe of motion is generally conceded to be reasonable and rational. The long list of prominent and not-so-prominent scientists and philosophers who have essayed to explore the implications of such a concept is sufficient confirmation of this point. It follows that unless some definite and positive conflict with reason or experience is encountered, the necessary consequences of that concept must also be presumed to be reasonable and rational, even though some of them may conflict with long-standing beliefs of some kind.

There is no mathematical obstacle to this unfamiliar type of motion. We have defined motion, for purposes of a theory of a universe of motion, by means of the relation expressed in the equation of motion: v = s/t. This equation does not require the existence of any moving object. Even where the motion actually is motion of something, that ―something‖ does not enter into any of the terms of the equation, the mathematical representation of the motion. The only purpose that it serves is to identify the particular motion under consideration. But identification is also possible where there is nothing moving. If, for example, we say that the motion we are talking about is the motion of atom A, we are identifying a particular motion, and distinguishing it from all other motions, but if we refer to the motion which constitutes atom A, we are identifying this motion (or combination of motions) on an equally definite basis, even though this is not motion of anything.

A careful consideration of the points brought out in the foregoing discussion will make it clear that the objections that have been raised to the concept of motion without anything moving are not based on logical grounds. They stem from the fact that the idea of simple motion of this kind, merely a relation between space and time, is new and unfamiliar. None of us likes to discard familiar ideas of long standing and replace them with something new and different, but this is part of the price that we pay for progress.

This will be an appropriate time to emphasize that combinations or other modifications of existing motions can only be accomplished by adding or removing units of motion. As stated in Chapter 2, neither space nor time exists independently. Each exists only in association with the other as motion. Consequently, a speed 1/a cannot be changed to a speed 1/b by adding b-a units of time. Such a change can only be accomplished by superimposing a new motion on the motion that is to be altered.
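A compact way to see that the equation of motion carries no reference to a moving object is to represent a motion as nothing but its space-time ratio. The sketch below is an editorial illustration; the use of Python fractions and the names chosen are assumptions made for the example, not anything in the original text.

```python
# A minimal sketch (editorial) of the equation of motion v = s/t in natural
# units. A motion is fully represented by the ratio of its space units to
# its time units; no term for a moving "something" appears anywhere.

from fractions import Fraction

def speed(space_units: int, time_units: int) -> Fraction:
    """v = s/t, the defining relation of a motion."""
    return Fraction(space_units, time_units)

v = speed(1, 3)
print(v)   # 1/3: one unit of space per three units of time

# Note that, per the text, a speed 1/a is not converted to 1/b by editing
# b - a extra units of time into this ratio; space and time have no
# independent existence. A change of speed requires superimposing a further
# motion on the existing one.
```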
In carrying out the two different operations that were involved in the investigation from which the results reported herein were derived, it would have been possible to perform them separately; first developing the theoretical structure as far as circumstances would permit, and then comparing this structure with the observed features of the physical universe. In
practice, however, it was more convenient to identify the various theoretical features with the corresponding physical features as the work progressed, so that the correlations would serve as a running check on the accuracy of the theoretical conclusions. Furthermore, this policy eliminated the need for the separate system of terminology that otherwise would have been required for referring to the various features of the theoretical universe during the process of the theoretical development.

Much the same considerations apply to the presentation of the results, and we will therefore identify each theoretical feature as it emerges from the development, and will refer to it by the name that is customarily applied to the corresponding physical feature. It should be emphasized, however, that this hand-in-hand method of presentation is purely an aid to understanding. It does not alter the fact that the theoretical universe is being developed entirely by deduction from the postulates. No empirical information is being introduced into the theoretical structure at any point. All of the theoretical features are purely theoretical, with no empirical content whatever. The agreement between theory and observation that we will find as we go along is not a result of basing the theoretical conclusions on appropriate empirical premises; it comes about because the theoretical system is a true and accurate representation of the actual physical situation.

Identification of the theoretical unit of simple harmonic motion that we have been considering presents no problem. It is obvious that each of these units is a photon. The process of emission and movement of the photons is radiation. The space-time ratio of the vibration is the frequency of the radiation, and the unit speed of the outward progression is the speed of radiation, more familiarly known as the speed of light.

When considered merely as vibrating units, there is no distinction between one photon and another except in the speed of vibration, or frequency. The unit level, where speed 1/n changes to n/1, cannot be identified in any directly observable way. We will find, however, that there is a significant difference between the manner in which the photons of vibrational speed 1/n enter into combinations of motions and the corresponding behavior of photons of vibrational speed greater than unity. This difference will be examined in detail in the chapters that follow.

One of the things that we can expect a correct theory of the structure of the universe to do is to clear up the discrepancies and ―paradoxes‖ of previously existing scientific thought. Here, in the explanation of the nature of radiation that emerges from the development of theory, we find this expectation dramatically fulfilled. In conventional thinking the concepts of ―wave‖ and ―particle‖ are mutually exclusive, and the empirical discovery that radiation acts in some respects as a wave phenomenon, and in other respects as an assembly of particles, has confronted physical science with a very disturbing paradox. Almost at the outset of our development of the consequences of the postulates that define a universe of motion we find that in such a universe there is a very simple explanation. The photon acts as a particle in emission and absorption because it has the distinctive feature of a particle: it is a discrete unit.
In transmission it behaves as a wave because the combination of its own inherent vibratory motion with the translatory motion of the progression of the natural reference system causes it to follow a wave-like path. In this case the problem that seemed impossible to solve while radiation was looked upon as a single entity loses all of its difficult features as soon as it is recognized as a combination of two different things.

Another difficult problem with respect to radiation has been to explain how it can be propagated through space without some kind of a medium. This problem has never been solved other than by what was described by R. H. Dicke as a "semantic trick"; that is, by assuming, entirely ad hoc, that space has the properties of a medium. In Dicke's words: "One suspects that, with empty space having so many properties, all that had been accomplished in destroying the ether was a semantic trick. The ether had been renamed the vacuum."5

Einstein did not challenge this conclusion expressed by Dicke. On the contrary, he "freely admitted" not only that his theory still employed a medium, but also that this medium is indistinguishable, other than semantically, from the "ether" of previous theories. The following statements from his works are typical:

"We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether."37

"We shall say: our space has the physical property of transmitting waves, and so omit the use of a word (ether) we have decided to avoid."38

Thus the relativity theory does not resolve the problem. There is no evidence to support Einstein's assumption that space has the properties of a medium, or that it has any physical properties at all. The fact that no method of propagating radiation through space without a medium has ever been conceived is therefore still unreconciled with the absence of any evidence of the existence of a medium.

In the theoretical universe of the Reciprocal System the problem does not arise, since the photon remains in the same absolute location in which it originates, as any object without independent motion must do. With respect to the natural reference system it does not move at all, and the movement that is observed in the context of a stationary reference system is a movement of the natural reference system relative to the stationary system, not a movement of the photon itself.

In both the propagation question and the wave-particle issue the resolution of the problem is accomplished in the same manner. Instead of explaining why the seemingly complicated phenomena are complex and perplexing, the Reciprocal System of theory removes the complexity and reduces the phenomena to simple terms. As other long-standing problems are examined in the course of the subsequent development we will find that this conceptual simplicity is a general characteristic of the new theoretical structure.
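As a rough illustration of the two-component explanation of the wave-like path described above, the following sketch may be helpful. It is not part of the original development; the numerical values are arbitrary examples chosen from the visible range.

```python
# The photon's observed wave-like path as the combination of two
# motions: uniform translatory progression along x and an inherent
# simple harmonic vibration along y.
import math

c = 3.0e8        # speed of the outward progression (speed of light), m/s
f = 5.0e14       # vibrational frequency, Hz (visible light, for example)
A = 1.0          # transverse amplitude, arbitrary units

def position(t):
    """Photon position at time t; the two components together trace a
    sine-shaped path through the stationary reference system."""
    x = c * t                                  # translatory progression
    y = A * math.sin(2.0 * math.pi * f * t)    # inherent vibration
    return x, y

# One vibration period corresponds to one wavelength of the path,
# reproducing the familiar relation wavelength = c / f:
wavelength = c / f    # about 6e-7 m for the frequency chosen here
```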

CHAPTER 5

Gravitation

Another type of motion that is permitted by the postulates, and therefore exists in the theoretical universe, is rotation. Before rotational motion can take place, however, there must exist some physical object (independent motion) that can rotate. This is purely a matter of geometry. We are still in the stage of the development where we are dealing only with scalar motions, and a single scalar motion cannot produce the directional characteristics of rotation. Like the sine curve of the photon, rotation requires a combination of motions: a compound motion, we may say. Thus, while motion is possible without anything moving, rotation is not possible unless some physical object is available to be rotated. The photon of radiation is such an independent motion, or physical object, and it is evident, from the limitations that apply to the kinds of motion that are possible at this stage of the development, that it is the only primary unit that meets the requirement. Simple rotation is therefore rotation of the photon.

In our ordinary experience rotation is a vectorial motion, and its direction (a vectorial direction) is relative to a fixed spatial system of reference. In the absence of other motion, an object rotating vectorially remains stationary in the fixed system. However, any motion of a photon is a scalar motion, inasmuch as the mechanism required for the production of vectorial motion is not yet available at this stage of the development. A scalar motion has an inherent scalar direction (inward or outward), and it is given a vectorial direction by the manner in which the scalar motion appears in the fixed coordinate system.

As brought out in Chapter 4, the net scalar direction of independent motion is inward. The significance of the term "net" in this statement is that a compound motion may include an outward component, providing that the magnitude of the inward component of that motion is great enough to give the motion as a whole the inward direction. Since the vectorial direction that this inward motion assumes in a fixed reference system is independent of the scalar direction, the motion can take any vectorial direction that is permitted by the geometry of three-dimensional space. One such possibility is rotation.

The special characteristic of rotation that distinguishes it from the simple harmonic motion previously considered is that in rotation the changes in vectorial direction are continuous and uniform, so that the motion is always forward, rather than oscillating back and forth. Consequently, there is no reason for any change in scalar direction, and the motion continues in the inward direction irrespective of the vectorial changes. Scalar rotation thus differs from inherently vectorial rotation in that it involves a translational inward movement as well as the purely rotational movement. A rolling motion is a good analogy, although the mechanism is different. The rolling motion is one motion, not a rotation plus a translational motion. It is the rotation that carries the rolling object forward translationally. Similarly, the scalar rotation is only one motion, even though it has a translational effect that is absent in the case of vectorial rotation.

To illustrate the essential difference between rotation and simple harmonic motion, let us return to the automobile analogy. If the car is on a very narrow road, analogous to the one-dimensional path of vibration of the photon, and it runs forward while moving north, then when it reverses its vectorial direction and moves south it also reverses its scalar direction and runs backward. But if the car is on a circular track and starts moving forward, it continues moving forward regardless of the changes in vectorial direction that are taking place.
The vectorial direction of the inward translational movement of the rotating photon, like the vectorial direction of the non-rotating photon, is a result of viewing the motion in the context of an arbitrary reference system, rather than an inherent property of the motion itself. It is therefore determined entirely by chance. However, the non-rotating photon remains in the same absolute location permanently, unless acted upon by some outside agency, and the direction determined at the time of emission is therefore also permanent. The rotating photon, on the other hand, is continually moving from one absolute location to another as it travels back along the line of the progression of the natural reference system, and each time it enters a new absolute location the vectorial direction is redetermined by the chance process. Inasmuch as all directions are equally probable, the motion is distributed uniformly among all of them in the long run. A rotating photon therefore moves inward toward all space-time locations other than the one that it happens to occupy momentarily. Coincidentally, it continues to move outward by reason of the progression of the reference system, but the net motion of the observable aggregates of rotating photons in our immediate environment is inward. The determination of the vectorial direction corresponding to "outward" automatically determines the vectorial direction of "inward" in each case, inasmuch as one is the reverse of the other.

Some of the readers of the first edition found the concept of "inward motion" rather difficult. This was probably due to looking at the situation on the basis of a single reference point. "Outward" from such a point is easily visualized, whereas "inward" has no meaning under the circumstances. But the non-rotating photon does not merely move outward from the point of emission; it moves outward from all locations, in the manner of a spot on the surface of an expanding balloon. Similarly, the rotating photon moves inward toward all locations, in the manner of a spot on the surface of a contracting balloon. The outward motion is simply the spatial representation of an increasing scalar magnitude, whereas the inward motion is the spatial representation of a decreasing scalar magnitude. If that decreasing magnitude reaches zero, it continues as an increasing negative magnitude; that is, if the object which was moving inward toward a certain location eventually arrives at that location, it continues in motion beyond that point (providing that nothing intervenes).

Since space and time locations cannot be identified by observation, neither inward nor outward motion can be recognized as such. It is possible, however, to observe the changes in relations between the moving objects and other physical structures. The photons of radiation, for instance, are observed to be moving outward from the emitting objects. Similarly, each of the rotating photons in the local environment is moving toward all other rotating photons, by reason of the inward motion in space in which all participate, and the change of relative position in space can be observed. This second class of identifiable objects in the theoretical universe thus manifests itself to observation as a number of individual units, which continually move inward toward each other. Here, again, the identification of the physical counterparts of the theoretical phenomena is a simple matter. The inward motion in all directions of space is gravitation, and the rotating photons are the physical objects that gravitate; that is, atoms and particles. Collectively, the atoms and particles constitute matter.
As in the case of radiation, the new theoretical development leads to a very simple explanation of a hitherto unexplained phenomenon. Previous investigators in this area have arrived at a reasonably good understanding of the physical effects of gravitation, but they are completely at sea as to how it originates, and how the apparent gravitational effect is propagated. Our finding is that these previous investigators have misunderstood the nature of the gravitational phenomenon.

Except at extreme distances, each unit or aggregate of matter in the observed physical universe continually moves toward all others, unless restrained in some way. It has therefore been taken for granted that each particle of matter is exerting a force of attraction on the others. However, when we examine the characteristics of that presumed force we find that it is something of a very peculiar nature, totally unlike the forces of ordinary experience. As nearly as we can determine from observation, the gravitational "force" acts instantaneously, without an intervening medium, and in such a manner that it cannot be screened off or modified in any way. These observed characteristics are so difficult to explain theoretically that the theorists have given up the search for an explanation, and are now taking the stand that the observations must, for some unknown reason, be wrong.

Even though all practical gravitational calculations, including those at astronomical distances, are carried out on the basis of instantaneous action, without introducing any inconsistencies, and even though the concept of a force which is wholly dependent upon position in space being propagated through space is self-contradictory, the theorists take the stand that since they are unable to devise a theory to account for instantaneous action, the gravitational force must be propagated at a finite velocity, all evidence to the contrary notwithstanding. And even though there is not the slightest evidence of the existence of any medium in space, or of any medium-like properties of space, the theorists also insist that since they are unable to devise a theory without a medium, or something that has the properties of a medium, such an entity must exist, in spite of the negative evidence. There are many places in accepted scientific thought where the necessity of facing up to clear evidence from observation or experiment is avoided by the use of one or more of the evasive devices that the modern theorists have invented for the purpose, but this gravitational situation is probably the only major instance in which the empirical evidence is openly and categorically defied.

While the total lack of any explanation of the gravitational phenomenon that is consistent with the observations has undoubtedly been the primary cause of the flagrantly unscientific attitude that has been taken here, an erroneous belief concerning the nature of electromagnetic radiation has been a significant contributing factor. The enormous extension of the known range of radiation frequencies in modern times has been accomplished mainly through the generation of these additional frequencies by electrical means, and it has come to be believed that there is a unique connection between radiation and electrical processes; that radiation is the carrier by means of which electrical and magnetic effects are propagated. From this it is only a short step to the conclusion that there must also be gravitational waves, carriers of gravitational energy. "Such (gravitational) waves resemble electromagnetic waves," says Joseph Weber, who has been carrying on an extensive search for these hypothetical waves for many years.
The theoretical development in the preceding pages shows that this presumed analogy does not represent the reality of the universe of motion. In that universe radiation and gravitation are phenomena of a totally different order. But it is worth noting that radical differences between these two types of phenomena are also apparent in the information that is available from empirical sources. That information is simply ignored in current practice because it conflicts with the popular theories of the moment.

Radiation is a process whereby energy is transferred from a material aggregate at some particular location in space (or time) to other spatial (or temporal) locations. Each photon has a definite frequency of vibration and a corresponding energy content; hence these photons are essentially traveling units of energy. The emitting agency loses a specific amount of energy whenever a photon leaves. This energy travels through the intervening space (or time) until the photon encounters a unit of matter with which it is able to interact, whereupon the energy is transferred, wholly or in part, to this matter. At either end of the path the energy is recognizable as such, and is readily interchangeable with other types of energy. The radiant energy of the impinging photon may, for instance, be converted into kinetic energy (heat), or into electrical energy (the photoelectric effect), or into chemical energy (photochemical action). Similarly, any of these other types of energy, which may exist at the point of emission of the radiation, can be converted into radiation by appropriate processes.

The gravitational situation is entirely different. Gravitational energy is not interchangeable with other forms of energy. At any specific location with respect to other masses, a mass unit possesses a definite amount of gravitational (potential) energy, and it is impossible to increase or decrease this energy content by conversion from or to other forms of energy. It is true that a change in location results in a release or absorption of energy, but the gravitational energy which the mass possesses at point A cannot be converted to any other type of energy at point A, nor can the gravitational energy at A be transferred unchanged to any other point B (except along equipotential lines). The only energy that makes its appearance in any other form at point B is that portion of the gravitational energy which the mass possessed at point A that it can no longer have at point B: a fixed amount determined entirely by the difference in location.

Radiant energy remains constant while traveling in space, but can vary almost without limit at any specific location. The behavior of gravitation is the exact opposite. The gravitational effect remains constant at any specific location, but varies if the mass moves from one location to another, unless the movement is along an equipotential line.

Energy is defined as the capability of doing work. Kinetic energy, for example, qualifies under this definition, and any kind of energy that can be freely converted to kinetic energy likewise qualifies. But gravitational energy is not capable of doing work as a general proposition. It will do one thing, and that thing only: it will move masses inward toward each other. If this motion is permitted to take place, the gravitational energy decreases, and the decrement makes its appearance as kinetic energy, which can then be utilized in the normal manner. But unless gravitation is allowed to do this one thing which it is capable of doing, the gravitational energy is completely unavailable.
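The point about the decrement can be made concrete with a worked example in conventional Newtonian notation. The symbols G, M, m and the distances dA, dB are the familiar quantities of current gravitational practice, used here purely for illustration; they are not entities of the theoretical development. If two masses are permitted to move inward from separation dA to a smaller separation dB, the kinetic energy that appears is

E(kinetic) = Ep(A) - Ep(B) = GMm (1/dB - 1/dA)

a quantity fixed entirely by the two locations. No other interchange is possible; at any given separation the gravitational energy has one definite value, which is why it cannot be converted in place.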
Unless that inward motion takes place, the gravitational energy cannot do anything itself, nor can it be converted to any form of energy that can do something. The mass itself can theoretically be converted to kinetic energy, but this internal energy equivalent of the mass is something totally different from the gravitational energy. It is entirely independent of position with reference to other masses. Gravitational, or potential, energy, on the other hand, is purely energy of position; that is, for any two specific masses, the mutual potential energy is determined solely by their spatial separation. But energy of position in space cannot be propagated in space; the concept of transmitting this energy from one spatial position to another is totally incompatible with the fact that the magnitude of the energy is determined by the spatial location. Propagation of gravitation is therefore inherently impossible. The gravitational action is necessarily instantaneous, as Newton's Law indicates, and as has always been assumed for purposes of calculation.

It is particularly significant, therefore, that the theoretical characteristics of gravitation, as derived from the postulates of the Reciprocal System, are in full agreement with the empirical observations, strange as these observations may seem. In the theoretical universe of motion gravitation is not an action of one aggregate of matter on another, as it appears to be. It is simply an inward motion of the material units, an inherent property of the atoms and particles of matter. The same motion that makes an atom an atom also causes it to gravitate. Each atom and each aggregate is pursuing its own course independently of all others, but because each unit is moving inward in space, it is moving toward all other units, and this gives the appearance of a mutual interaction. These theoretical inward motions, totally independent of each other, necessarily have just the kind of characteristics that are observed in gravitation. The change in the relative position of two objects due to the independent motions of each occurs instantaneously, and there is nothing propagated from one to the other through a medium, or in any other way. Whatever exists, or occurs, in the intervening space can have no effect on the results of the independent motions.

One of the questions that is frequently asked is how this finding that the gravitational motion of each aggregate is completely independent of all others can be reconciled with the observed fact that the direction of the (apparent) mutual gravitational force between two objects changes if either object moves. On the face of it, there appears to be a necessity for some kind of an interaction. The explanation is that the gravitational motion of an object never changes, either in amount or in direction. It is always directed from the location of the gravitating unit toward all other space and time locations. But we cannot observe the motion of an object inward in space; we can only observe its motion relative to other objects whose presence we can detect. The motion of each object therefore appears to be directed toward the other objects, but, in fact, it is directed toward all locations in space and time, irrespective of whether or not they happen to be occupied. Whatever changes appear to take place in the gravitational phenomena by reason of a change of position of any of the gravitating masses are not changes in the gravitational motions (or forces) but changes in our ability to detect those motions.

Let us assume a mass unit X occupying location a, and moving gravitationally toward locations b and c. If these locations are not occupied, then we cannot detect this motion at all. If location b is occupied by mass unit Y,
then we see X moving toward Y; that is, we can now observe the motion of X toward location b, but its motion toward location c is still unobservable. The observable gravitational motion of Y is toward X, and has the direction ba.

Now if we assume that Y moves to location c, what happens? The essence of the theory is that the motion of X is not changed at all; it is entirely independent of the position of object Y. But we are now able to observe the motion of X toward c, because there is a physical object at that location, whereas we are no longer able to observe the motion of X toward location b, even though that motion exists just as definitely as before. The direction of the gravitational motion (or force) of X thus appears to have changed, but what has actually happened is that some previously unobservable motion has become observable, while some previously observable motion has become unobservable. The same is true of the motion of object Y. It now appears to be moving in the direction ca rather than in the direction ba, but here again there has been no actual change, other than the change in the position of Y. Gravitationally, Y is moving in all directions at all times, irrespective of whether or not that motion is observable.

The foregoing explanation has been presented in terms of individual mass units, rather than aggregates, as the basic question with respect to the effect of variable mass on the gravitational motion has not yet been considered. The discussion will be extended to multiple units in the next chapter.

As emphasized in Chapter 3, the identification of a second general force, or motion, to which all matter is subject, provides the much needed "antagonist" to gravitation, and makes it possible to explain many phenomena that have never been satisfactorily explained on the basis of only one general force. It is the interaction of these two general forces that determines the course of major physical events. The controlling factor is the distance intervening between the objects that are involved. Inasmuch as the progression of space and time is merely a manifestation of the movement of the natural reference system with respect to the conventional stationary system of reference, the space progression originates everywhere, and its magnitude is always the same: one unit of space per unit of time. Gravitation, on the other hand, originates at the specific locations which the gravitating objects happen to occupy. Its effects are therefore distributed over a volume of extension space, the size of which varies with the distance from the material object. In three-dimensional space, the fraction of the inward motion directed toward a unit area at distance d from the object is inversely proportional to the total area at that distance; that is, to the surface of a sphere of radius d. The effective portion of the total inward motion is thus inversely proportional to d². This is the inverse square law to which gravitation conforms, according to empirical findings.

The net resultant of these two general motions in each specific case depends on their relative magnitudes. At the shorter distances gravitation predominates, and in the realm of ordinary experience all aggregates of matter are subject to net gravitational motions (or forces). But since the progression of the natural reference system is constant at unit speed, while the opposing gravitational motion is attenuated by distance in accordance with the inverse square law, it follows that at some specific distance, the gravitational limit of the aggregate of matter under consideration, the motions reach equality. Beyond this point the net movement is outward, increasing toward the speed of light as the gravitational effect continues to decrease.
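The balance just described can be put in simple numerical form. The following sketch is illustrative only; the constant k, which stands in for the mass-dependent magnitude of the gravitational motion, is a hypothetical number chosen for the example.

```python
# Net scalar motion as the resultant of the constant unit outward
# progression and an inward gravitational motion attenuated with
# distance in accordance with the inverse square law.

def net_motion(d, k):
    """Net outward motion at distance d: one unit of outward progression
    less an inward gravitational component proportional to 1/d**2."""
    return 1.0 - k / d ** 2

k = 100.0                        # hypothetical gravitational magnitude
gravitational_limit = k ** 0.5   # net_motion(d) = 0 where d = sqrt(k)

for d in (2.0, 5.0, 10.0, 20.0, 100.0):
    net = net_motion(d, k)
    status = "inward" if net < 0 else "equilibrium" if net == 0 else "outward"
    print(f"d = {d:>5}: net = {net:+.2f} ({status})")

# Inside the gravitational limit (d = 10 here) the net motion is inward;
# beyond it the net motion is outward, approaching unit speed (the speed
# of light) as the gravitational term becomes negligible.
```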
As a rough analogy, we may visualize a moving belt traveling outward from a central location and carrying an assortment of cubes and balls. The outward travel of the belt represents the progression of the natural reference system. The cubes are analogous to the photons of radiation. Having no independent mobility of their own, they must necessarily remain permanently at whatever locations on the belt they occupy initially, and they therefore move outward from the point of origin at the full speed of the belt. The balls, however, can be caused to rotate, and if the rotation is in the direction opposite to the travel of the belt, and the rotational speed is high enough, the balls will move inward instead of outward. These balls represent the atoms of matter, and the inward motion opposite to the direction of the travel of the belt is analogous to gravitation.

We could include the distance factor in the analogy by introducing some means of varying the speed of rotation of the balls with the distance from the central point. Under this arrangement the closer balls would still move inward, but at some point farther out there would be an equilibrium, and beyond this point the balls would move outward. The analogy is incomplete, particularly in that the mechanism whereby the rotation of the balls causes them to move inward translationally is not the same as that which causes the inward motion of the atoms. Nevertheless, it does show quite clearly that under appropriate conditions a rotational motion can cause a translational displacement, and it gives us a good picture of the general relations between the progression of the natural reference system, gravitational motion, and the travel of the photons of radiation.

All aggregates of matter smaller than the largest existing units are under the gravitational control of larger aggregates; that is, they are within the gravitational limits of those larger units. Consequently, they are not able to continue the outward movement that would take place in the absence of the larger bodies. The largest existing aggregates are not limited in this manner, and on the basis of the principles that have been stated, any two such aggregates that are outside their mutual gravitational limits recede from each other at speeds increasing with the distance.

In the observed physical universe, the largest aggregates of matter are galaxies. According to the foregoing theoretical findings, the distant galaxies should be receding from the earth at extremely high speeds, increasing with distance up to the speed of light, which will be reached where the gravitational effect is reduced to a negligible level. Until quite recently, this theoretical conclusion would have been received with extreme skepticism, as it conflicts with what was then the accepted thinking, and there was no way in which it could be subjected to a test. But recent astronomical advances have changed this situation. Present-day instruments are able to reach out to distances so great that the effect of gravitation is minimal, and the observations with this improved equipment show that the galaxies are behaving in exactly the manner predicted by the new theory.

In the meantime, however, the astronomers have been trying to account for this galactic recession in some manner consistent with present astronomical views, and they have devised an explanation in which they assume, entirely ad hoc, that there was an enormous explosion at some singular point in the past history of the universe which hurled the galaxies out into space at their present fantastically high speeds.
If one were called upon to decide which is the better explanation: one which rests upon an ad hoc assumption of an event far out of the range of known physical phenomena, or one which finds the recession to be an immediate and direct consequence of the fundamental nature of the universe, there can hardly be any question as to the decision. But, in reality, this question does not arise, as the case in favor of the theory of a universe of motion is not based on the contention that it provides better explanations of physical phenomena, a contention that would have to depend, in most instances, on conformity with nonscientific criteria, but on the objective and genuinely scientific contention that it is a fully integrated system of theory which is not inconsistent with any established fact in any physical area.

Another significant effect of the existence of a gravitational limit, within which there is a net inward motion, and outside of which there is a net outward progression, is that it reconciles the seemingly uniform distribution of matter in the universe with Newton's Law of Gravitation and Euclidean geometry. One of the strong arguments that has been advanced against the existence of a gravitational force of the inverse square type operating in a Euclidean universe is that on such a basis, "The stellar universe ought to be a finite island in the infinite ocean of space,"39 as Einstein stated the case. Observations indicate that there is no such concentration. So far as we can tell, the galaxies are distributed uniformly, or nearly uniformly, throughout the immense region now accessible to observation, and this is currently taken as a definite indication that the geometry of the universe is non-Euclidean.

From the points brought out in the preceding pages, it is now clear that the flaw in this argument is that it rests on the assumption that there is a net gravitational force effective throughout space. Our findings are that this assumption is incorrect, and that there is a net gravitational force only within the gravitational limit of the particular mass under consideration. On this basis it is only the matter within the gravitational limit that should agglomerate into a single unit, and this is exactly what occurs. Each major galaxy is a "finite island in the ocean of space," within its gravitational limit. The existing situation is thus entirely consistent with inverse square gravitation operating in a Euclidean universe, as the Reciprocal System requires.

The atoms, particles, and larger aggregates of matter within the gravitational limit of each galaxy constitute a gravitationally bound system. Each of these constituent units is subject to the same two general forces as the galaxies, but in addition is subject to the (apparent) gravitational attraction of neighboring masses, and to that of the entire mass within the gravitational limits acting as a whole. Under the combined influence of all of these forces, each aggregate assumes an equilibrium position in the three-dimensional reference system that we are calling extension space, or a net motion capable of representation in that system. So far as the bound system is concerned, the coordinate reference system, extension space, is the equivalent of Newton's absolute space. It can be generalized to include other gravitationally bound systems by taking into account the relative motion of the systems.

Any or all of the aggregates or individual units that constitute a gravitationally bound system may acquire motions relative to the fixed reference system.
Since these motions are relative to the defined spatial coordinate system, the direction of motion in each instance is an inherent property of the motion, rather than being merely a matter of chance, as in the case of the coordinate representation of the scalar motions. These motions with inherent vectorial directions are vectorial motions: the motions of our ordinary experience. They are so familiar that it is customary to generalize their characteristics, and to assume that these are the characteristics of all motion. Inasmuch as these familiar vectorial motions have inherent directions, and are always motions of something, it is taken for granted that these are essential features of motion; that all motions must necessarily have these same properties. But our investigation of the fundamental properties of motion reveals that this assumption is in error. Motion, as it exists in a universe composed entirely of motion, is merely a relation between space and time, and in its simpler forms it is not motion of anything, nor does it have an inherent direction. Vectorial motion is a special kind of motion; a phenomenon of the gravitationally bound systems.

The net resultant of the scalar motions of any object, the progression of the reference system and the various gravitational motions, has a vectorial direction when viewed in the context of a stationary reference system, even though that direction is not an inherent property of the motion. The observed motion of such an object, which is the net resultant of all of its motions, both scalar and vectorial, thus appears to be simply a vectorial motion, and is so interpreted in current practice. One of the prerequisites for a clear understanding of basic physical phenomena is a recognition of the composite nature of the observed motions. It is not possible to get a true picture of activity in a gravitationally bound system unless it is realized that an object such as a photon or a neutrino, which is traveling at the speed of light with respect to the conventional frame of reference, does so because it has no capability of independent motion at all, and is at rest in its own natural system of reference. Similarly, the behavior of atoms of matter can be clearly understood only in the light of a realization that they are motionless, or moving at low speeds, relative to the conventional reference system, because they possess inherent motions at high speed which counterbalance the motion of the natural reference system that would otherwise carry them outward at the speed of the photon or the neutrino.

It is also essential to recognize that the scalar motion of the photons can be accommodated within the spatial reference system only by the use of multiple reference points. Photons are continually being emitted from matter by a process that we will not be prepared to discuss until a later stage of the theoretical development. The motion of the photons emitted from any material object is outward from that object, not from the instantaneous position in some reference system which that object happens to occupy at the moment of emission. As brought out in Chapter 3, the extension space of our ordinary experience is "absolute space" for vectorial motion, and for scalar motion viewed from one point of reference. But every other reference point has its own "absolute space," and there is no criterion by which one of these can be singled out as more basic than another. Thus the location at which a photon originates cannot be placed in the context of any general reference system for scalar motion.
That location itself is the reference point for the photon emission, and if we choose to view it in relation to some reference system with respect to which it is moving, then that relative motion, whatever it may be, is a component of the motion of the emitted photons. Looking at the situation from the standpoint of the photon, we may say that at the moment of emission this photon is participating in all of the motions of the emitting object: the outward progression of the natural reference system, the inward motion of gravitation, and all of the vectorial motions to which the material object is subject. No mechanism exists whereby the photon can eliminate any of these motions, and the outward motion of the absolute location of the emission, to which the photon becomes subject on separation from the material unit, is superimposed on the previously existing motions. This, again, means that the emitting object defines the reference point for the motion of the photon. In a gravitationally bound system each aggregate and individual unit of matter is the center of a sphere of radiation.

This point has been a source of difficulty for some readers of the first edition, and further consideration by means of a specific example is probably in order. Let us take some location A as a reference point. All photons originating from a physical object at A move outward at unit speed in the manner portrayed by the balloon analogy. Gravitating objects move inward in opposition to the progression, and can therefore be represented by positions somewhere along the lines of the outward movement. Here, then, we have the kind of situation that most persons are looking for: something that they can visualize in the context of the familiar fixed spatial coordinate system. But now let us take a look at one of these gravitating objects, which we will call B. For convenience, let us assume that B is moving gravitationally with respect to A at a rate which is just equal to the outward progression of the natural reference system, so that B remains stationary with respect to object A in the fixed reference system. This is the condition that prevails at the gravitational limit.

What happens to the photons emitted from B? If the expanding system centered at A is conceived as a universal system of reference, as so many readers have evidently taken it to be, then these photons must be detached from B in a manner which will enable them to be carried along by the progression in a direction outward from A. But the natural reference system moves outward from all locations; it moves outward from B in exactly the same manner as it does from A. There is no way in which one can be assigned any status different from that of the other. The photons originating at B therefore move outward from B, not from A. This would make no difference if B were itself moving outward from A at unit speed, as in that case outward from B would also be outward from A, but where B is stationary with respect to A in a fixed coordinate system, the only way in which the motions of the photons can be represented in that system is by means of two separate reference points. Thus there is a sphere of radiation centered at A, and another sphere centered at B. Where the spheres overlap, the photons may make contact, even though all are moving outward from their respective points of origin.

It has been suggested that the theoretical conclusion that the unit outward motion of the photon is added to the motion imparted to the photon by the emitting object conflicts with the empirically established principle that the speed of radiation is independent of the speed of the source, but this is not true. The explanation lies in some aspects of the measurement of speed that have not been recognized. This matter will be discussed in detail in Chapter 7.

CHAPTER 6

The Reciprocal Relation

Inasmuch as the fundamental postulates define a universe composed entirely of units of motion, and define space and time in terms of that motion, these postulates preclude space and time from having any significance other than that which they have in motion, and at the same time require that they always have that significance; that is, throughout the universe space and time are reciprocally related. This general reciprocal relation that necessarily exists in a universe composed entirely of motion has a far-reaching and decisive effect on physical structures and processes. In recognition of its crucial role, the name "Reciprocal" has been applied to the system of theory based on the "motion" concept of the nature of the universe.

The reason for calling it a "system of theory" rather than merely a "theory" is that its subdivisions are coextensive with other physical theories. One of these subdivisions covers the same ground as relativity, another parallels the nuclear theory of the atom, still another deals with the same physical area as the kinetic theory, and so on. It is appropriate, therefore, to call these subdivisions "theories," and to refer to the entire new theoretical structure as the Reciprocal System of theory, even though it is actually a single fully integrated entity.

The reciprocal postulate provides a good example of the manner in which a change in the basic concept of the nature of the universe alters the way in which we apprehend specific physical phenomena. In the context of a universe of matter existing in a space-time framework, the idea of space as the reciprocal of time is simply preposterous, too absurd to be given serious consideration. Most of those who encounter the idea of "the reciprocal of space" for the first time find it wholly inconceivable. But these persons are not taking the postulates of the new theory at their face value, and recognizing that the assertion that "space is an aspect of motion" means exactly what it says. They are accustomed to regarding space as some kind of a container, and they are interpreting this assertion as if it said that "container space is an aspect of motion," thus inserting their own concept of space into a statement which rejects all such previous ideas and defines a new and different concept. The result of mixing such incongruous and conflicting concepts cannot be otherwise than meaningless.

When the new ideas are viewed in the proper context, the strangeness disappears. In a universe in which everything that exists is a form of motion, and the magnitude of that motion, measured as speed or velocity, is the only significant physical quantity, the existence of the reciprocal relation is practically self-evident. Motion is defined as the relation of space to time. Its mathematical expression is the quotient of the two quantities. An increase in space therefore has exactly the same effect on the speed, the mathematical measure of the motion, as a decrease in time, and vice versa. In comparing one airplane with another, it makes no difference whether we say that plane A travels twice as far in the same time, or that it travels a certain distance in half the time.

Inasmuch as the postulates deal with space and time in precisely the same manner, aside from the reciprocal relation between the two, the behavior characteristics of the two entities, or properties, as they are called, are identical. This statement may seem incredible at first sight, as space and time manifest themselves to our observation in very different guises. We know time only as a progression, a continual moving forward, whereas space appears to us as an entity that "stays put." But when we subject the apparent differences to a critical examination, they fail to stand up under the scrutiny.

The most conspicuous property of space is that it is three-dimensional. On the other hand, it is generally believed that the observational evidence shows time to be one-dimensional. We have a subjective impression of a unidirectional "flow" of time from the past, to the present, and on into the future. The mathematical representation of time in the equations of motion seems to confirm this view, inasmuch as the quantity t in v = s/t and related equations is scalar, not vectorial, as v and s are, or can be.

Notwithstanding its general and unquestioning acceptance, this conclusion as to the one-dimensionality of time is entirely unjustified. The point that is being overlooked is that "direction," in the context of the physical processes which are represented by vectorial equations in present-day physics, always means "direction in space." In the equation v = s/t, for example, the spatial displacement s is a vector quantity because it has a direction in space. It follows that the velocity v also has a direction in space, and thus what we have here is a space velocity equation. In this equation the term t is necessarily scalar because it has no direction in space.

It is quite true that this result would automatically follow if time were one-dimensional, but the one-dimensionality is by no means a necessary condition. Quite the contrary, time is scalar in this space velocity equation (and in all of the other familiar vectorial equations of modern physics; equations that are vectorial because they involve direction in space) irrespective of its dimensions, because no matter how many dimensions it may have, time has no direction in space. If time is multi-dimensional, as our theoretical development finds it to be, then it has a property that corresponds to the spatial property that we call "direction." But whatever we call this temporal property, whether we call it "direction in time," as we are doing for reasons previously explained, or give it some altogether different name, it is a temporal property, not a spatial property, and it does not give time any direction in space. Regardless of its dimensions, time cannot be a vector quantity in any equation such as those of present-day physics, in which the property which qualifies a quantity as vectorial is that of having a direction in space.

The existing confusion in this area is no doubt due, at least in part, to the fact that the terms "dimension" and "dimensional" are currently used with two different meanings. We speak of space as three-dimensional, and we also speak of a cube as three-dimensional. In the first of these expressions we mean that space has a certain property that we designate as dimensionality, and that the magnitude applying to this property is three. In other words, our statement means that there are three dimensions of space. But when we say that a cube is three-dimensional, the significance of the statement is quite different. Here we do not mean that there are three dimensions of "cubism," or whatever we may call it. We mean that the cube exists in space and extends into three dimensions of that space.

There is a rather general tendency to interpret any postulate of multi-dimensional time in this latter significance; that is, to take it as meaning that time extends into n dimensions of space, or some kind of a quasi-space. But this is a concept that makes little sense under any conditions, and it certainly is not the meaning of the term "three-dimensional time" as used in this work. When we here speak of time as three-dimensional we will be employing the term in the same significance as when we speak of space as three-dimensional; that is, we mean that time has a property, which we call dimensionality, and the magnitude of that property is three. Here, again, we mean that there are three dimensions of the property in question: three dimensions of time.

There is nothing in the role which time plays in the equations of motion in space to indicate specifically that it has more than one dimension. But a careful consideration along the lines indicated in the foregoing paragraphs does show that the present-day assumption that we know time to be one-dimensional is completely unfounded. Thus there is no empirical evidence that is inconsistent with the assertion of the Reciprocal System that time is three-dimensional.

Perhaps it might be well to point out that the additional dimensions of time have no metaphysical significance. The postulates of a universe of motion define a purely physical universe, and all of the entities and phenomena of that universe, as determined by a development of the necessary consequences of the postulates, are purely physical. The three dimensions of time have the same physical significance as the three dimensions of space.

As soon as we take into account the effect of gravitation on the motion of material aggregates, the second of the observed differences, the progression of time, which contrasts sharply with the apparent immobility of extension space, is likewise seen to be a consequence of the conditions of observation, rather than an indication of any actual dissimilarity. The behavior of those objects that are partially free from the gravitational attraction of our galaxy, the very distant galaxies, shows conclusively that the immobility of extension space, as we observe it, is not a reflection of an inherent property of space in general, but is a result of the fact that in the region accessible to detailed observation gravitation moves objects toward each other, offsetting the effects of the outward progression. The pattern of the recession of the distant galaxies demonstrates that when the gravitational effect is eliminated there is a progression of space similar to the observed progression of time. Just as "now" continually moves forward relative to any initial point in the temporal reference system, so "here," in the absence of gravitation, continually moves forward relative to any initial point in the spatial reference system.

Little additional information about either space or time is available from empirical sources. The only items on which there is general agreement are that space is homogeneous and isotropic, and that time progresses uniformly. Other properties that are sometimes attributed to either time or space are merely assumptions or hypotheses. Infinite extent or infinite divisibility, for instance, are hypothetical, not the results of observation. Likewise, the assertions as to spatial and temporal properties that are made in the relativity theories are, as Einstein says, "free inventions of the human mind," not items that have been derived from experience.

In testing the validity of the conclusion that all properties of either space or time are properties of both space and time, such assumptions and hypotheses must be disregarded, since it is only conflicts with definitely established facts that are conclusive. The significance of a conflict with a questionable assertion cannot be other than questionable. "Homogeneous" with respect to space is equivalent to "uniform" with respect to time, and because the observations thus far available tell us nothing at all about the dimensions of time, there is nothing in these observations that is inconsistent with the assertion that time, like space, is isotropic. In spite of the general belief, among scientists and laymen alike, that there is a great difference between space and time, any critical examination along the foregoing lines shows that the apparent differences are not real, and that there is actually no observational evidence that is inconsistent with the theoretical conclusion that the properties of space and of time are identical.

As brought out in Chapter 4, deviations from unit speed, the basic one-to-one space-time ratio, are accomplished by means of reversals of the direction of the progression of either space or time. As a result of these reversals, one component traverses the same path in the reference system repeatedly, while the other component continues progressing unidirectionally in the normal manner. Thus the deviation from the normal rate of progression may take place either in space or in time, but not in both coincidentally. The space-time ratio, or speed, is either 1/n (less than unity, unity being the speed of light) or n/1 (greater than unity).

Inasmuch as everything physical in a universe of motion is a motion, that is, a relation between space and time, measured as speed, and, as we have just seen, the properties of space and those of time are identical, aside from the reciprocal relationship, it follows that every physical entity or phenomenon has a reciprocal. There exists another entity or phenomenon that is an exact duplicate, except that space and time are interchanged. For example, let us consider an object rotating with speed 1/n and moving translationally with speed 1/n. The reciprocal relation tells us that there must necessarily exist, somewhere in the universe, an object identical in all respects, except that its rotational and translational speeds are both n/1 instead of 1/n.

In addition to the complete inversions, there are also structures of an intermediate type in which one or more components of a complex combination of motions are inverted, while the remaining components are unchanged. In the example under consideration, the translational speed may become n/1 while the rotational speed remains at 1/n, or vice versa. Once the normal (1/n) combination has been identified, it follows that both the completely inverted (n/1) combination and the various intermediate structures exist in the appropriate environment. The general nature of that environment in each case is also indicated, inasmuch as change of position in time cannot be represented in a spatial reference system, and each of these speed combinations has some special characteristics when viewed in relation to the conventional reference systems. The various physical entities and phenomena that involve motion of these several inverse types will be examined at appropriate points in the pages that follow.
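As a concrete illustration of the inversions just described, the following is a minimal sketch only; the value n = 4 and the component names are arbitrary choices, not part of the development.

```python
# The reciprocal relation: inverting a motion interchanges space and
# time, so a speed of 1/n becomes n/1.
from fractions import Fraction

def invert(speed: Fraction) -> Fraction:
    """Return the space-time inverse of a speed (a space/time ratio)."""
    return Fraction(speed.denominator, speed.numerator)

# A normal combination: rotation at 1/n and translation at 1/n (n = 4).
normal = {"rotation": Fraction(1, 4), "translation": Fraction(1, 4)}

# Complete inversion: every component becomes n/1.
complete = {name: invert(s) for name, s in normal.items()}

# Intermediate type: only the translational component is inverted.
intermediate = {**normal, "translation": invert(normal["translation"])}

print(complete)      # {'rotation': Fraction(4, 1), 'translation': Fraction(4, 1)}
print(intermediate)  # {'rotation': Fraction(1, 4), 'translation': Fraction(4, 1)}
```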
The essential point that needs to be recognized at this time, because of its relevance to the subject matter now under consideration, is the existence of inverse forms of all of the normal (1/n) motions and combinations of motions.

This is a far-reaching discovery of great significance. In fact the new and more accurate picture of the physical universe that is derived from the ―motion‖ concept differs from previous ideas mainly by reason of the widening of our horizons that results from recognition of the inverse phenomena. Our direct physical contacts are limited to phenomena of the same type as those that enter into our own physical makeup: the direct phenomena, we may call them, although the distinction between direct and inverse is merely a matter of the way in which we see them, not anything that is inherent in the phenomena themselves. In recent years the development of powerful and sophisticated instruments has enabled us to penetrate areas that are far beyond the range of our unaided senses, and in these new areas the relatively simple and understandable relations that govern events within our normal experience are no longer valid. Newton's laws of motion, which are so dependable in everyday life, break down in application to motion at speeds approaching that of light; events at the atomic level resist all attempts at explanation by means of established physical principles, and so on. The scientific reaction to this state of affairs has been to conclude that the relatively simple and straightforward physical laws that have been found to apply to events within our ordinary experience are not universally valid, but are merely approximations to some more complex relations of general applicability. The simplicity of Newton's laws of motion, for instance, is explained on the ground that some of the terms of the more complicated general law are reduced to negligible values at low velocities, and may therefore be disregarded in application to the phenomena of everyday life. Development of the consequences of the postulates of the Reciprocal System arrives at a totally different answer. We find that the inverse phenomena that necessarily exist in a universe of motion play no significant role in the events of our everyday experience, but as we extend our observations into the realms of the very large, the very small, and the very fast, we move into the range in which these inverse phenomena replace or modify those which we, from our particular position in the universe, regard as the direct phenomena. On this basis, the difficulties that have been experienced in attempting to use the established physical laws and relations of the world of ordinary experience in the far-out regions are very simply explained. These laws and relations apply specifically to the world of immediate sense perception, phenomena of the direct space-time orientation, and they fail in application to any situation in which the events under consideration involve phenomena of the inverse type in any significant degree. They do not fail because they are wrong, or because they are incomplete; they fail because they are misapplied. No law–physical or otherwise-can be expected to produce the correct results in an area to which it has no relevance. The inverse phenomena are governed by laws distinct from, although related to, those of the direct phenomena, and where those phenomena exist they can be understood and successfully handled only by using the laws and relations of the inverse sector. This explains the ability of the Reciprocal System of theory to deal successfully with the recently discovered phenomena of the far-out regions, which have been so resistant to previous theoretical treatment. 
It is now apparent that the unfamiliar and surprising aspects of these phenomena are not due to aspects of the normal physical relations that come into play only under extreme conditions, as previous theorists have assumed; they are due to the total or partial replacement of the phenomena of the direct type by the related, but different, phenomena of the inverse type. In order to obtain the correct answers to problems in these remote areas, the unfamiliar phenomena that are involved must be viewed in their true light as the inverse of the phenomena of the directly observable region, not in the customary way as extensions of those direct phenomena into the regions under consideration. By identifying and utilizing this correct treatment the Reciprocal System is not only able to arrive at the right answers in the far-out areas, but to accomplish this task without disturbing the previously established laws and principles that apply to the phenomena of the direct type.

In order to keep the explanation of the basic elements of the theory as simple and understandable as possible, the previous discussion has been limited to what we have called the direct view of the universe, in which space is the more familiar of the two basic entities, and plays the leading role. At this time it is necessary to recognize that because of the general nature of the reciprocal relation between space and time, every statement that has been made with respect to space in the preceding chapters is equally applicable to time in the appropriate context. As we have seen in the case of space and time individually, however, the way in which the inverse phenomenon manifests itself to our observation may be quite different from the way in which we see its direct counterpart.

Locations in time cannot be represented in a spatial reference system, but, with the same limitations that apply to the representation of spatial locations, they can be represented in a stationary three-dimensional temporal reference system analogous to the three-dimensional spatial reference system that we call extension space. Since neither space nor time exists independently, every physical entity (a motion or a combination of motions) occupies both a space location and a time location. The location as a whole, the location in the physical universe, we may say, can therefore be completely defined only in terms of two reference systems.

In the context of a stationary spatial reference system the motion of an absolute location, a location in the natural reference system, as indicated by observation of an object without independent motion, such as a photon or a galaxy at the observational limit, is linearly outward. Similarly, the motion of an absolute location with respect to a stationary temporal reference system is linearly outward in time. Inasmuch as the gravitational motion of ordinary matter is effective in space only, the atoms and particles of this matter, which are stationary with respect to the spatial reference system, or moving only at low velocities, remain in the same absolute locations in time indefinitely, unless subjected to some external force. Their motion in three-dimensional time is therefore linearly outward at unit speed, and the time location that we observe, the time registered on a clock, is not the location in any temporal reference system, but simply the stage of progression. Since the progression of the natural reference system proceeds at unit speed, always and everywhere, clock time, if properly measured, is the same everywhere.
As we will see later in the development, the current hypotheses which require repudiation of the existence of absolute time and the concept of simultaneity of distant events are erroneous products of reasoning from premises in which clock time is incorrectly identified as time in general.

The best way to get a clear picture of the relation of clock time to time in general is to consider the analogous situation in space. Let us assume that a photon A is emitted from some material object X in the direction Y. This photon then travels at unit speed in a straight line XY which can be represented in the conventional fixed spatial reference system. The line of progression of time has the same relation to time in general (three-dimensional time) as the line XY has to space in general (three-dimensional space). It is a one-dimensional line of travel in a three-dimensional continuum; not something separate and distinct from that continuum, but a specific part of it.

Now let us further assume that we have a device whereby we can measure the rate of increase of the spatial distance XA, and let us call this device a "space clock." Inasmuch as all photons travel at the same speed, this one space clock will suffice for the measurement of the distance traversed by any photon, irrespective of its location or direction of movement, as long as we are interested only in the scalar magnitude. But this measurement is valid only for objects such as photons, which travel at unit speed. If we introduce an object that travels at some speed other than unity, the measurement that we get from the space clock will not correctly represent the space traversed by that object. Nor will the space clock registration be valid for the relative separation of moving objects, even if they are traveling at unit speed. In order to arrive at the true amount of space entering into such motions we must either measure that space individually, or we must apply an appropriate correction to the measurement by the space clock.

Because objects at rest in the stationary spatial reference system, or moving at low velocities with respect to it, are moving at unit speed relative to any stationary temporal reference system, a clock which measures the time progression in any one process provides an accurate measurement of the time elapsed in any low-speed physical process, just as the space clock in our analogy measured the space traversed by any photon. Here again, however, if an object moves at a speed, or a relative speed, differing from unity, so that its movement in time is not the same as that of the progression of the natural reference system, then the clock time does not correctly represent the actual time involved in the motion under consideration. As in the analogy, the true quantity, the net total time, must be obtained either by a separate measurement (which is usually impractical) or by determining the magnitude of the adjustment that must be applied to the clock time to convert it to total time. In application to motion in space, the total time, like the clock registration, is a scalar quantity.

Some readers of the previous edition have found it difficult to accept the idea that time can be three-dimensional, because this makes time a vector quantity, and presumably leads to situations in which we are called upon to divide one vector quantity by another. But such situations are non-existent. If we are dealing with spatial relations, time is scalar because it has no spatial direction. If we are dealing with temporal relations, space is scalar because it has no temporal direction. Either space or time can be vectorial in appropriate circumstances.
However, as explained earlier in this chapter, the deviation from the normal scalar progression at unit speed may take place either in space or in time, but not in both coincidentally. Consequently, there is no physical situation in which both space and time are vectorial.
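Returning for a moment to the space clock analogy, the correction it calls for can be written compactly. The following is only a restatement of the text in symbols; the symbol b for the speed of the object is ours:

```latex
% Space clock: registers one unit of space per unit of time,
% valid only for photons, which move at unit speed.
s_{\text{clock}} = t, \qquad
s_{\text{true}} = b\,t = b \cdot s_{\text{clock}} \quad (\text{object at speed } b \neq 1)
```

Analogously, a clock registers only the progression component of time, so wherever an object's motion in time deviates from the normal progression, the net total time is the clock registration plus an adjustment that must be determined separately.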

Similarly, scalar rotation and its gravitational (translational) effect take place either in space or in time, but not in both. If the speed of the rotation is less than unity, time continues progressing at the normal unit rate, but because of the directional changes during rotation space progresses only one unit while time is progressing n units. Thus the change in position relative to the natural unit datum, both in the rotation and in its gravitational effect, takes place in space. Conversely, if the speed of the rotation is greater than unity, the rotation and its gravitational effect take place in time.

An important result of the fact that rotation at greater-than-unit speeds produces an inward motion (gravitation) in time is that a rotational motion or combination of motions with a net speed greater than unity cannot exist in a spatial reference system for more than one (dimensionally variable) unit of time. As pointed out in Chapter 3, the spatial systems of reference, to which the human race is limited because it is subject to gravitation in space, are not capable of representing deviations from the normal rate of time progression. In certain special situations, to be considered later, in which the normal direction of vectorial motion is reversed, the change of position in time manifests itself as a distortion of the spatial position. Otherwise, an object moving normally with a speed greater than unity is coincident with the reference system for only one unit of time. During the next unit, while the spatial reference system is moving outward in time at the unit rate of the normal progression, gravitation is carrying the rotating unit inward in time. It therefore moves away from the reference system and disappears. This point will be very significant in our consideration of the high-speed rotational systems in Chapter 15.

Recognition of the fact that each effective unit of rotational motion (mass) occupies a location in time as well as a location in space now enables us to determine the effect of mass concentration on the gravitational motion. Because of the continuation of the progression of time while gravitation is moving the atoms of matter inward in space, the aggregates of matter that are eventually formed in space consist of a large number of mass units that are contiguous in space, but widely dispersed in time. One of the results of this situation is that the magnitude of the gravitational motion (or force) is a function not only of the distance between objects, but also of the effective number of units of rotational motion, measured as mass, that each object possesses. This motion is distributed over all space-time directions, rather than merely over all space directions, and since an aggregate of n mass units occupies n time locations, the total number of space-time locations is also n, even though all mass units of each object are nearly coincident spatially. The total gravitational motion of any mass unit toward that aggregate is thus n times that toward a single mass unit at the same distance. It then follows that the gravitational motion (or force) is proportional to the product of the (apparently) interacting masses.

It can now be seen that the comments in Chapter 5, with respect to the apparent change of direction of the gravitational motions (or forces) when the apparently interacting masses change their relative positions, are applicable to multi-unit aggregates as well as to the individual mass units considered in the original discussion.
The gravitational motion always takes place toward all space-time locations whether or not those locations are occupied by objects that enable us to detect the motion.
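The counting argument of the preceding paragraphs can be summarized in symbols. This is only a restatement of the text's reasoning; the symbol g(d) for the distance dependence discussed in Chapter 5 is ours, and nothing new is derived here:

```latex
% Gravitational motion of one mass unit toward another at distance d: g(d)
% Toward an aggregate of n units (occupying n space-time locations): n\,g(d)
% Hence, between aggregates of m_1 and m_2 mass units:
F \;\propto\; m_1\, m_2\, g(d)
```

This is the familiar product-of-masses form of the gravitational relation, obtained here from the count of occupied space-time locations rather than from any assumption of an interaction.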

A point that should be noted in this connection is that two objects are in effective contact if they occupy adjoining locations in either space or time, regardless of the extent of their separation in the other aspect of motion. This statement may seem to conflict with the empirical observation that contact can be made only if the two objects are in the same place at the same clock time. However, the inability to make contact when the objects reach a common spatial location in a fixed reference system at different clock times is not due to the lack of coincidence in time, but to the progression of space that takes place in connection with the progression of time which is registered by the clock. Because of this space progression, the location that has the same spatial coordinates in the stationary reference system is not the same spatial location that it was at an earlier time.

Scientific history shows that physical problems of long standing are usually the result of errors in the prevailing basic concepts, and that significant conceptual modifications are a prerequisite for their solution. We will find, as we proceed with the theoretical development, that the reciprocal relation between space and time which necessarily exists in a universe of motion is just the kind of a conceptual alteration that is needed to clear up the existing physical situation: one which makes drastic changes where such changes are required, but leaves the empirically determined relations of our everyday experience essentially untouched.

CHAPTER 7

High Speed Motion

As brought out in Chapter 3, the "space" of our ordinary experience, extension space, as we have called it, is simply a reference system, and it has no real physical significance. But the relationships that are represented in this reference system do have physical meaning. For example, if the distance between object A and object B in extension space is x, then if A moves a distance x in the direction AB while B remains stationary with respect to the reference system, the two objects will come in contact. The contact has observable physical results, and the fact that it occurs at the coordinate position reached by object A after a movement defined in terms of the coordinates from a specific initial position in the coordinate system demonstrates that the relation represented by the difference between coordinates has a definite physical meaning. Einstein calls this a "metrical" meaning; that is, a connection between the coordinate differences and "measurable lengths and times."

To most of those who have not made any critical study of the logical basis of so-called "modern physics" it probably seems obvious that this kind of a meaning exists, and it is safe to say that comparatively few of those who now accept Einstein's relativity theory because it is the orthodox doctrine in its field realize that his theory denies the existence of such a meaning. But any analysis of the logical structure of the theory will show that this is true, and Einstein's own statement on the subject, previously quoted, leaves no doubt on this score.

This is a prime example of a strange feature of the present situation in science. The members of the scientific community have accepted the basic theories of "modern physics" as correct, and are quick to do battle on their behalf if they are challenged, yet at the same time the majority are totally unwilling to accept some of the aspects of those theories that the originators of the theories claim are essential features of the theoretical structures. How many of the supporters of modern atomic theory, for example, are willing to accept Heisenberg's assertion that atoms do not "exist objectively in the same sense as stones or trees exist"?40 Probably about as many as are willing to accept Einstein's assertion that coordinate differences have no metrical meaning.

At any rate, the present general acceptance of the relativity theory as a whole, regardless of the widespread disagreement with some of its component parts, makes it advisable to point out just where the conclusions reached in this area by development of the consequences of the postulates of the Reciprocal System differ from the assertions of relativity theory. This chapter will therefore be devoted to a consideration of the status of the relativity concept, including the extent to which the new findings are in agreement with it. Chapter 8 will then present the full explanation of motion at high speeds, as it is derived from the new theoretical development.

It is worth noting in this connection that Einstein himself was aware of "the eternally problematical character" of his concepts, and in undertaking the critical examination of his theory in this chapter we are following his own recommendation, expressed in these words:

In the interests of science it is necessary over and over again to engage in the critique of these fundamental concepts, in order that they may not unconsciously rule us. This becomes evident especially in those situations involving development of ideas in which the consistent use of the traditional fundamental concepts leads us to paradoxes difficult to resolve.41

In spite of all of the confusion and controversy that have surrounded the subject, the factors that are involved are essentially simple, and they can be brought out clearly by consideration of a correspondingly simple situation, which, for convenient reference, we will call the "two-photon case."

Let us assume that a photon X originates at location O in a fixed reference system, and moves linearly in space at unit velocity, the velocity of light (as all photons do). In one unit of time it will have reached point x in the coordinate system, one spatial unit distant from O. This is a simple matter of fact that results entirely from the behavior of photon X, and is totally independent of what may be done by or to any other object. Similarly, if another photon Y leaves point O simultaneously with X, and travels at the same velocity, but in the opposite direction, this photon will reach point y, one unit of space distant from O, at the end of one unit of elapsed time. This, too, is entirely a matter of the behavior of the moving photon Y, and is independent of what happens to photon X or to any other physical object. At the end of one unit of time, as currently measured, X and Y are thus separated by two units of space (distance) in the coordinate system of reference.

In current practice time is measured by some repetitive physical process. This process, or the device in which it takes place, is called a clock. The progression of time thus measured is the standard time magnitude which, on the basis of current understanding, enters into physical relations. Speed, or velocity, the measure of motion, is defined as distance (space) per unit time.
In terms of the accepted reference systems, this means distance between coordinate locations divided by clock registration. In the two-photon case, the increase in coordinate separation during the one unit of elapsed time is two units of space.
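Written out, the standard determination just described is simply the following; the numbers are those of the two-photon case itself, and nothing has been added:

```latex
v_{\text{rel}} \;=\; \frac{\Delta x}{\Delta t_{\text{clock}}}
\;=\; \frac{2 \text{ units of space}}{1 \text{ unit of time}}
\;=\; 2 \text{ natural units} \;=\; 2c
```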

The relative velocity of the two photons, determined in the standard manner, is then two natural units; that is, twice the velocity of light, the velocity at which each of the two objects is moving.

In 1887, an experiment by Michelson and Morley compared the velocity of light traveling over round-trip paths in different directions relative to the direction of the earth's motion. The investigators found no difference in the velocities, although the accuracy of the experiment was far greater than would be required to reveal the expected difference had it been present. This experiment, together with others which have confirmed the original findings, makes it necessary to conclude that the velocity of light in a vacuum is constant irrespective of the reference system. The determination of velocity in the standard manner, dividing distance traveled by elapsed time, therefore arrives at the wrong answer at high velocities.

As expressed by Capek, the initial impact of this discovery was "shattering." It seemed to undermine the whole structure of theoretical knowledge that had been erected by centuries of effort. The following statement by Sir James Jeans, written only a few decades after the event, shows what a blow it was to the physicists of that day:

For more than two centuries this system of laws (Newton's) was believed to give a perfectly consistent and exact description of the processes of nature. Then, as the nineteenth century was approaching its close, certain experiments, commencing with the famous Michelson-Morley experiment, showed that the whole scheme was meaningless and self-contradictory.42

After nearly two decades of confusion, Albert Einstein published his special theory of relativity, which proposed a theoretical explanation of the discrepancy. Contradictions and uncertainties have surrounded this theory from its inception, and there has been continued controversy over its interpretation in specific applications, and over the nature and adequacy of the various explanations that have been offered in attempts to resolve the "paradoxes" and other inconsistencies. But the mathematical successes of the theory have been impressive, and even though the mathematics antedated the theory, and are not uniquely connected with it, these mathematical successes, in conjunction with the absence of any serious competitor, and the strong desire of the physicists to have something to work with, have been sufficient to secure general acceptance.

Now that a new theory has appeared, however, the defects in the relativity theory acquire a new significance, as the arguments which justify using a theory in spite of contradictions and inconsistencies, if it is the only thing that is available, are no longer valid when a new theory free from such defects makes its appearance. In making the more rigorous appraisal of the theory that is now required, it should be recognized at the outset that a theory is not valid unless it is correct both mathematically and conceptually. Mathematical evidence alone is not sufficient, as mathematical agreement is no guarantee of conceptual validity. What this means is that if we devise a theoretical explanation of a certain physical phenomenon, and then formulate a mathematical expression to represent the relations pictured by the theory, or do the same thing in reverse manner, first formulating the mathematical expression on an empirical basis, and then finding an explanation that fits it, the mere fact that this mathematical expression yields results that agree with the corresponding experimental values does not assure us that the theoretical explanation is correct, even if the agreement is complete and exact. As a matter of principle, this statement is not even open to question, yet in a surprisingly large number of instances in current practice, including the relativity theory, mathematical agreement is accepted as complete proof.

Most of the defects of the relativity theory as a conceptual scheme have been explored in depth in the literature. A comprehensive review of the situation at this time is therefore unnecessary, but it will be appropriate to examine one of the long-standing "paradoxes" which is sufficient in itself to prove that the theory is conceptually incorrect. Naturally, the adherents of the theory have done their best to "resolve" the paradox, and save the theory, and in their desperate efforts they have managed to muddy the waters to such an extent that the conclusive nature of the case against the theory is not generally recognized. The significance of this kind of a discrepancy lies in the fact that when a theory makes certain assertions of a general nature, if any one case can be found where these assertions are not valid, this invalidates the generality of the assertions, and thus invalidates the theory as a whole.

The inconsistency of this nature that we will consider here is what is known as the "clock paradox." It is frequently confused with the "twin paradox," in which one of a set of twins stays home while the other goes on a long journey at a very high speed. According to the theory, time progresses more slowly for the traveling twin, and he returns home still a young man, while his brother has reached old age. The clock paradox, which replaces the twins with two identical clocks, is somewhat simpler, as it evades the question as to the relation between clock registration and physical processes.

In the usual statement of the paradox, it is assumed that a clock B is accelerated relative to another identical clock A, and that subsequently, after a period of time at a constant relative velocity, the acceleration is reversed, and the clocks return to their original locations. According to the principles of special relativity, clock B, the moving clock, has been running more slowly than clock A, the stationary clock, and hence the time interval registered by B is less than that registered by A. But the special theory also tells us that we cannot distinguish between the motion of clock B relative to clock A and the motion of clock A relative to clock B. Thus it is equally correct to say that A is the moving clock and B is the stationary clock, in which case the interval registered by clock A is less than that registered by clock B. Each clock therefore registers both more and less than the other.

Here we have a situation in which a straightforward application of the special relativity theory leads to a conclusion that is manifestly absurd. This paradox, which stands squarely in the way of any claim that relativity theory is conceptually valid, has never been resolved except by means which contradict the basic assumptions of the relativity theory itself. Richard Schlegel brings this fact out very clearly in a discussion of the paradox in his book Time and the Physical World.
"Acceptance of a preferred coordinate system" is necessary in order to resolve the contradiction, he points out, but "such an assumption brings a profound modification to special relativity theory; for the assumption contradicts the principle that between any two relatively moving systems the effects of motion are the same, from either system to the other."43 G. J. Whitrow summarizes the situation in this way: "The crucial argument of those who support Einstein (in the clock paradox controversy) automatically undermines Einstein's own position."44 The theory based primarily on the postulate that all motion is relative contains an internal contradiction which cannot be removed except by some argument relying on the assumption that some motion is not relative.

All of the efforts that have been made by the professional relativists to explain away this paradox depend, directly or indirectly, on abandoning the general applicability of the relativity principle, and identifying the acceleration of clock B as something more than an acceleration relative to clock A. Moller, for example, tells us that the acceleration of clock B is "relative to the fixed stars."45 Authors such as Tolman, who speaks of the "lack of symmetry between the treatment given to the clock A, which was at no time subjected to any force, and that given to clock B which was subjected to . . . forces . . . when the relative motion of the clocks was changed,"46 are simply saying the same thing in a more roundabout way. But if motion is purely relative, as the special theory contends, then a force applied to clock B cannot produce anything more than a relative motion (it cannot produce a kind of motion that does not exist), and the effect on clock A must therefore be the same as that on clock B. Introduction of a preferred coordinate system such as that defined by the average positions of the fixed stars gets around this difficulty, but only at the cost of destroying the foundations of the theory, as the special theory is built on the postulate that no such preferred coordinate system exists.

The impossibility of resolving the contradiction inherent in the clock paradox by appeal to acceleration can be demonstrated in yet another way, as the acceleration can be eliminated without altering the contradiction that constitutes the paradox. No exhaustive search has been made to ascertain whether this streamlined version, which we may call the "simplified clock paradox," has been given any consideration previously, but at any rate it does not appear in the most accessible discussions of the subject. This is quite surprising, as it is a rather obvious way of tightening the paradox to the point where there is little, if any, room for an attempt at evasion.

In this simplified clock paradox we will merely assume that the two clocks are in uniform motion relative to each other. The question as to how this motion originated does not enter into the situation. Perhaps they have always been in relative motion. Or, if they were accelerated, they may have been accelerated equally. At any rate, for purposes of the inquiry, we are dealing only with the clocks in uniform relative motion. But here again, we encounter the same paradox. According to the relativity theory, each clock can be regarded either as stationary, in which case it is the faster, or as moving, in which case it is the slower. Again each clock registers both more and less than the other.

There are those who claim that the paradox has been resolved experimentally.
In the published report of one recent experiment bearing on the subject, the flat assertion is made that "These results provide an unambiguous empirical resolution of the famous clock paradox."47 This claim is, in itself, a good illustration of the lack of precision in current thinking in this area, as the clock paradox is a logical contradiction. It refers to a specific situation in which a strict application of the special theory results in an absurdity.

Obviously, a logical inconsistency cannot be "resolved" by empirical means. What the investigators have accomplished in this instance is simply to provide a further verification of some of the mathematical aspects of the theory, which play no part in the clock paradox.

This one clearly established logical inconsistency is sufficient in itself, even without the many items of evidence available for corroboration, to show that the special theory of relativity is incorrect in at least some significant segment of its conceptual aspects. It may be a useful theory; it may be a "good" theory from some viewpoint; it may indeed have been the best theory available prior to the development of the Reciprocal System; but this inconsistency demonstrates conclusively that it is not the correct theory.

The question then arises: In the face of these facts, why are present-day scientists so thoroughly convinced of the validity of the special theory? Why do front-rank scientists make categorical assertions such as the following from Heisenberg?

The theory . . . has meanwhile become an axiomatic foundation of all modern physics, confirmed by a large number of experiments. It has become a permanent property of exact science just as has classical mechanics or the theory of heat.48

The answer to our question can be extracted from this quotation. "The theory," says Heisenberg, has been "confirmed by a large number of experiments." But these experiments have confirmed only the mathematical aspects of the theory. They tell us only that special relativity is mathematically correct, and that it therefore could be valid. The almost indecent haste to proclaim the validity of theories on the strength of mathematical confirmation alone is one of the excesses of modern scientific practice which, like the over-indulgence in ad hoc assumptions, has covered up the errors introduced by the concept of a universe of matter, and has prevented recognition of the need for a basic change.

Like any other theory, special relativity cannot be confirmed as a theory unless its conceptual aspects are validated. Indeed, the conceptual aspects are the theory itself, as the mathematics, which are embodied in the Lorentz equations, were in existence before Einstein formulated the theory. However, establishment of conceptual validity is much more difficult than confirmation of mathematical validity, and it is virtually impossible in a limited field such as that covered by relativity, because there is too much opportunity for alternatives that are mathematically equivalent. It is attainable only where collateral information is available from many sources, so that the alternatives can be excluded. Furthermore, consideration of the known alternatives is not conclusive. There is a general tendency to assume that where no satisfactory alternatives have thus far been found, there is no acceptable alternative. This gives rise to a great many erroneous assertions that are given credence because they are modeled after valid mathematical statements, and have a superficial air of authenticity. For example, let us consider the following two statements:

A. As a mathematical problem there is virtually only one possible solution (the Lorentz transformation) if the velocity of light is to be the same for all. (Sir George Thomson)49

B. There was and there is now no understanding of it (the Michelson-Morley experiment) except through giving up the idea of absolute time and of absolute length and making the two interdependent concepts. (R. A. Millikan)50

The logical structure of both of these statements (including the implied assertions) is the same, and can be expressed as follows:

1. A solution for the problem under consideration has been obtained.
2. Long and intensive study has failed to produce any alternative solution.
3. The original solution must therefore be correct.

In the case of statement A, this logic is irrefutable. It would, in fact, be valid even without any such search for alternatives. Since the original solution yields the correct answers, any other valid solution would necessarily have to be mathematically equivalent to the first, and from a mathematical standpoint equivalent statements are merely different ways of expressing the same thing. As soon as we obtain a mathematically correct answer to a problem, we have the mathematically correct answer.

Statement B is an application of the same logic to a conceptual rather than a mathematical solution, but here the logic is completely invalid, as in this case alternative solutions are different solutions, not merely different ways of expressing the same solution. Finding an explanation which fits the observed facts does not, in this case, guarantee that we have the correct explanation. We must have additional confirmation from other sources before conceptual validity can be established. Furthermore, the need for this additional evidence still exists as strongly as ever even if the theory in question is the best explanation that science has thus far been able to devise, as it is, or at least should be, obvious that we can never be sure that we have exhausted the possible alternatives.

The theorists do not like to admit this. When they have devoted long years to the study and investigation of a problem, and the situation still remains as described by Millikan (that is, only one explanation judged to be reasonably acceptable has been found), there is a strong temptation to assume that no other possible explanation exists, and to regard the available theory as necessarily correct, even where, as in the case of the special theory of relativity, there may be specific evidence to the contrary. Otherwise, if they do not make such an assumption, they must admit, tacitly if not explicitly, that their abilities have thus far been unequal to the task of finding the alternatives. Few human beings, in or out of the scientific field, relish making this kind of an admission.

Here, then, is the reason why the serious shortcomings of the special theory are currently looked upon so charitably. Nothing more acceptable has been available (although there are alternatives to Einstein's interpretation of the Lorentz equations that are equally consistent with the available information), and the physicists are not willing to concede that they could have overlooked the correct answer. But the facts are clear. No new valid conceptual information has been added to the previously existing body of knowledge by the special theory. It is nothing more than an erroneous hypothesis: a conspicuous addition to the historical record cited by Jeans:

The history of theoretical physics is a record of the clothing of mathematical formulae, which were right, or very nearly right, with physical interpretations, which were often very badly wrong.51

"As an emergency measure," say Toulmin and Goodfield, "physicists have resorted to mathematical fudges of an arbitrary kind."52 Here is the truth of the matter. The Lorentz equations are simply fudge factors: mathematical devices for reconciling discordant results.

In the two-photon case that we are considering, if the speed of light is constant irrespective of the reference system, as established empirically by the Michelson-Morley experiment, then the speed of photon X relative to photon Y is unity. But when this speed is measured in the standard way (assuming that this might be physically possible), dividing the coordinate distance xy by the elapsed clock time, the relative speed is two natural units (2c in the conventional system of units) rather than one unit. Here, then, is a glaring discrepancy. Two different measurements of what is apparently the same thing, the relative speed, give us altogether different results.

Both the nature of the problem and the nature of the mathematical answer provided by the Lorentz equations can be brought out clearly by consideration of a simple analogy. Let us assume a situation in which the property of direction exists, but is not recognized. Then let us assume that two independent methods are available for measuring motion, one of which measures the speed, and the other measures the rate at which the distance from a specified reference point is changing. In the absence of any recognition of the existence of direction, it will be presumed that both methods measure the same quantity, and the difference between the results will constitute an unexpected and unexplained discrepancy, similar to that brought to light by the Michelson-Morley experiment.

An analogy is not an accurate representation. If it were, it would not be an analogy. But to the extent that the analogy parallels the phenomenon under consideration it provides an insight into aspects of the phenomenon that cannot, in many cases, be directly apprehended. In the circumstances of the analogy, it is evident that a fudge factor applicable to the general situation is impossible, but that under some special conditions, such as uniform linear motion following a course at a constant angle to the line of reference, the mathematical relation between the two measurements is constant. A fudge factor embodying this constant relation, the cosine of the angle of deviation, would therefore bring the discordant measurements into mathematical coincidence.

It is also evident that we can apply the fudge factor anywhere in the mathematical relation. We can say that measurement 1 understates the true magnitude by this amount, or that measurement 2 overstates it by the same amount, or we can divide the discrepancy between the two in some proportion, or we can say that there is some unknown factor that affects one and not the other. Any of these explanations is mathematically correct, and if a theory based on any one of them is proposed, it will be "confirmed" by experiment in the same manner that special relativity and many other products of present-day physics are currently being "confirmed." But only the last alternative listed is conceptually correct. This is the only one that describes the situation as it actually exists.
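The analogy can be made concrete with a few lines of arithmetic. The sketch below is illustrative only; the variable names and numbers are invented for the example, and the two "methods" are the two measurements described in the preceding paragraphs, under the analogy's assumption of a constant angle:

```python
import math

# Toy version of the analogy: motion at a true speed s along a course
# at a constant angle theta to the line of reference. Method 1 measures
# the speed itself; method 2 measures only the rate at which the
# distance from the reference point changes.
s = 10.0                      # true speed, arbitrary units
theta = math.radians(60)      # constant angle to the line of reference

speed_measurement = s                        # method 1: the speed itself
radial_measurement = s * math.cos(theta)     # method 2: radial rate only

# With direction unrecognized, the two results simply disagree
# (10.0 versus approximately 5.0):
print(speed_measurement, radial_measurement)

# A constant "fudge factor" (the cosine of the angle of deviation)
# reconciles them mathematically without identifying the real cause:
fudge = math.cos(theta)
print(abs(speed_measurement * fudge - radial_measurement) < 1e-12)  # True
```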
When we compare these results of the assumptions made for purposes of the analogy with the observed physical situation in high-speed motion we find a complete correspondence. Here, too, mathematical coincidence can be attained by a set of fudge factors, the Lorentz equations, in a special set of circumstances only. As in the analogy, such fudge factors are applicable only where the motion is constant both in speed and in direction. They apply only to uniform translational motion.

This close parallel between the observed physical situation and the analogy strongly suggests that the underlying cause of the measurement discrepancy is the same in both cases; that in the physical universe, as well as under the circumstances assumed for purposes of the analogy, one of the factors that enters into the measurement of the magnitudes involved has not been taken into consideration. This is exactly the answer to the problem that emerges from the development of the Reciprocal System of theory. We find from this theory that the conventional stationary three-dimensional spatial frame of reference correctly represents locations in extension space, and that, contrary to Einstein's assertion, the distance between coordinates in this reference system correctly represents the spatial magnitudes entering into the equations of motion. However, this theoretical development also reveals that time magnitudes in general can only be represented by a similar three-dimensional frame of reference, and that the time registered on a clock is merely the one-dimensional path of the time progression in this three-dimensional reference frame.

Inasmuch as gravitation operates in space in our material sector of the universe, the progression of time continues unchecked, and the change of position in time represented by the clock registration is a component of the time magnitude of any motion. In everyday life, no other component of any consequence is present, and for most purposes the clock registration can be taken as a measurement of the total time involved in a motion. But where another significant component is present, we are confronted with the same kind of a situation that was portrayed by the analogy. In uniform translational motion the mathematical relation between the clock time and the total time is a constant function of the speed, and it is therefore possible to formulate a fudge factor that will take care of the discrepancy. In the general situation where there is no such constant relationship, this is not possible, and the Lorentz equations cannot be extended to motion in general. Correct results in the general situation can be obtained only if the true scalar magnitude of the time that is involved is substituted for clock time in the equations of motion.

This explanation should enable a clear understanding of the position of the Reciprocal System with respect to the validity of the Lorentz equations. Inasmuch as no method of measuring total time is currently available, there is a substantial amount of convenience in being able to arrive at the correct numerical results in certain applications by using a mathematical fudge factor. In so doing, we are making use of an incorrect magnitude that we are able to measure in lieu of the correct magnitude that we cannot measure. The Reciprocal System agrees that when we need to use fudge factors in this manner, the Lorentz equations are the correct fudge factors for the purpose.
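For reference, the equations in question are the familiar Lorentz transformation, quoted here in conventional notation; this is standard textbook material, not a result of the present development:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```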
These equations simply accomplish a mathematical reconciliation of the equations of motion with the constant speed of light, and since this constant speed, which was accepted by Lorentz as an empirically established fact, is deduced from the postulates of the Reciprocal System, the mathematical treatment is based on the same premises in both cases, and necessarily arrives at the same results. To this extent, therefore, the new system of theory is in accord with current thinking. As P. W. Bridgman once pointed out, many physicists regard "the content of the special theory of relativity as coextensive with the content of the Lorentz equations."53 K. Feyerabend gives us a similar report:

It must be admitted, however, that contemporary physicists hardly ever use Einstein's original interpretation of the special theory of relativity. For them the theory of relativity consists of two elements: (1) the Lorentz transformations; and (2) mass-energy equivalence.54

For those who share this view, the results obtained from the Reciprocal System of theory in this area make no change at all in the existing physical picture. These individuals should find it easy to accommodate themselves to the new viewpoint. Those who still take their stand with Einstein will have to face the fact that the new results show, just as the clock paradox does, that Einstein's interpretation of the mathematics of high-speed motion is incorrect. Indeed, the mere appearance of a new and different explanation of a rational character is a crushing blow to the relativity theory, as the case in its favor is argued very largely on the basis that there is no such alternative. As Einstein says, "if the velocity of light is the same in all C.S. (coordinate systems), then moving rods must change their length, moving clocks must change their rhythm . . . there is no other way."55 The statement by Millikan quoted earlier is equally positive on this score.

The status of an assertion of this kind, a contention that there is no alternative to a given conclusion, is always precarious because, unlike most propositions based on other grounds, which can be supported even in the face of some adverse evidence, this contention that there is no alternative is immediately and utterly demolished when an alternative is produced. Furthermore, the use of the "no alternative" argument constitutes a tacit admission that there is something dubious about the explanation that is being offered; something that would preclude its acceptance if there were any reasonable alternative.

Einstein's contribution, in the form of the special theory, can be accurately evaluated only if it is realized that this, too, is a fudge, a conceptual fudge, we might call it. As he explains in the statement that has been our principal target in this chapter, what he has done is to eliminate the "metrical meaning" of spatial coordinates; that is, he takes care of the discrepancy between the two measurements by arbitrarily decreeing that one of them shall be disregarded. This may have served a certain purpose in the past by enabling the scientific community to avoid the embarrassment of having to admit inability to find any explanation for the high-speed discrepancy, but the time has now come to look at the situation squarely and to recognize that the relativity concept is erroneous.

It is not always appreciated that the mathematical fudge accomplished by the use of the Lorentz equations works in both directions. If the velocity is not directly determined by the change in coordinate position during a given time interval, it follows that the change in coordinate position is not directly determined by the velocity. Recognition of this point

will clear up any question as to a possible conflict between the conclusions of Chapter 5 and the constant speed of light.

In closing this discussion of the high-speed problem, it is appropriate to point out that the identification of the missing factor in the motion equations, the additional time component that becomes significant at high speeds, does not merely provide a new and better explanation of the existing discrepancy. It eliminates that discrepancy, restoring the "metrical meaning" of the coordinate distances in a way that makes them entirely consistent with the constant speed of light.

CHAPTER 8

Motion in Time

The starting point for an examination of the nature of motion in time is a recognition of the status of unit speed as the natural datum, the zero level of physical activity. We are able to deal with speeds measured from some arbitrary zero in our everyday life because these are not primary quantities; they are merely speed differences. For example, where the speed limit is 50 miles per hour, this does not mean that an automobile is prohibited from moving at any faster rate. It merely means that the difference between the speed of the vehicle and the speed of the portion of the earth's surface over which the vehicle is traveling must not exceed 50 miles per hour. The car and the earth's surface are jointly moving at higher speeds in several different directions, but these are of no concern to us for ordinary purposes. We deal only with the differences, and the datum from which measurement is made has no special significance.

In current practice we regard a greater rate of change in vehicle location relative to the local frame of reference as being the result of a greater speed, that quantity being measured from zero. We could equally well measure from some arbitrary non-zero level, as we do in the common systems of temperature measurement, or we could even measure the inverse of speed from some selected datum level, and attribute the greater rate of change of position to less "inverse speed." In dealing with the basic phenomena of the universe, however, we are dealing with absolute speeds, not merely speed differences, and for this purpose it is necessary to recognize that the datum level of the natural system of reference is unity, not zero.

Since motion exists only in units, according to the postulates that define a universe of motion, and each unit of motion consists of one unit of space in association with one unit of time, all motion takes place at unit speed, from the standpoint of the individual units. This speed may, however, be either positive or negative, and by a sequence of reversals of the progression of either time or space, while the other component continues progressing unidirectionally, an effective scalar speed of 1/n, or n/1, is produced. In Chapter 4 we considered the case in which the vectorial direction of the motion reversed at each end of a one-unit path, the result being a vibrational motion. Alternatively, the vectorial direction may reverse in unison with the scalar direction. In this case space (or time) progresses one unit in the context of a fixed reference system while time (or space) progresses n units. Here the result is a translatory motion at a speed of 1/n (or n/1) units.
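The reversal mechanism just described is easy to simulate. The sketch below is a toy model of the bookkeeping only; the function name and the sign convention are ours, not part of the theory:

```python
def net_progression(n, sequences=4):
    """Toy bookkeeping for the reversal pattern described above.

    Each sequence is n units long (n odd here; the text explains how
    even values are handled by pairing sequences of n - 1 and n + 1
    units). The scalar direction alternates inward, outward, inward,
    ..., beginning and ending inward, so each sequence nets one inward
    unit while time advances n units.
    """
    space = 0
    time = 0
    for _ in range(sequences):
        for step in range(n):
            direction = -1 if step % 2 == 0 else +1   # -1 = inward, +1 = outward
            space += direction
            time += 1
    return space, time

# For n = 3 (the 1/3 example discussed below): net space of 4 inward
# units in 12 units of time, i.e. an effective scalar speed of 1/3.
print(net_progression(3))   # (-4, 12)
```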

The scalar situation is the same in both cases. A regular pattern of reversals results in a space-time ratio of 1/n or n/1. In the example shown in the tabulation in Chapter 4, where the space-time ratio is 1/3, there is a one-unit inward motion followed by an outward unit and a second inward unit. The net inward motion in the three-unit sequence is one unit. A continuous succession of similar 3-unit sequences then follows. As indicated in the accompanying tabulation, the scalar direction of the last unit of each sequence is inward. (A sequence involving an even value of n alternates sequences of n - 1 and n + 1 units. For instance, instead of two four-unit sequences, in which the last unit of each sequence would be outward, there is a three-unit sequence and a five-unit sequence.) The scalar direction of the first unit of each new sequence is also inward. Thus there is no reversal of scalar direction at the point where the new sequence begins.

DIRECTION

Unit Number      1        2         3        4        5         6

Vibration
  Scalar         inward   outward   inward   inward   outward   inward
  Vectorial      right    left      right    left     right     left

Translation
  Scalar         inward   outward   inward   inward   outward   inward
  Vectorial      forward  backward  forward  forward  backward  forward

In the vibrational situation the vectorial direction continues the regular succession of reversals even at the points where the scalar direction does not reverse, but in the translational situation the reversals of vectorial direction conform to those of the scalar direction. Consequently, the path of vibration remains in a fixed location in the dimension of the oscillation, whereas the path of translation moves forward at the scalar space-time ratio 1/n (or n/1). This is the pattern followed by certain scalar motions that will be discussed later, and by all vectorial motions: motions of material units and aggregates.

When the progression within a unit of motion reaches the end of the unit it either reverses or does not reverse. There is no intermediate possibility. It follows that what appears to be a continuous unidirectional motion at speed 1/n is, in fact, an intermittent motion in which space progresses at the normal rate of one unit of space per unit of time for a fraction 1/n of the total number of space units involved, and has a net resultant of zero, in the context of the fixed reference system, during the remainder of the motion. If the speed is 1/n (one unit of space per n units of time), space progresses only one unit instead of the n units it would progress unidirectionally. The result of motion at the 1/n speed is therefore to cause a change of spatial position relative to the location that would have been reached at the normal rate of progression.

Motion at less than unit speed, then, is motion in space. This is a well-known fact. But because of the uncritical acceptance of Einstein's dictum that speeds in excess of that of light are impossible, and a failure to recognize the reciprocal relation between space and time, it has not heretofore been realized that the inverse of this kind of motion is also a physical reality. Where the speed is n/1, there is a reversal of the time component that results in a change of position in time relative to that which would take place at the normal rate of time progression, the elapsed time registered on a clock. Motion at speeds greater than unity is therefore motion in time.

The existence of motion in time is one of the most significant consequences of the status of the physical universe as a universe of motion. Conventional physical science, which recognizes only motion in space, has been able to deal reasonably well with those phenomena that involve spatial motion only. But it has not been able to clarify the physical fundamentals, a task for which an understanding of the role of time is essential, and it is encountering a growing number of problems as observation and experiment are extended into the areas where motion in time is an important factor.

Furthermore, the number and scope of these problems have been greatly increased by the use of zero speed, rather than unit speed, as the reference datum for measurement purposes. While motion at speeds of 1/n (speeds less than unity) is motion in space only, when viewed relative to the natural (moving) reference system, it is motion in both space and time relative to the conventional systems that utilize the zero datum. It should be understood that the motions we are now discussing are independent motions (physical phenomena), not the fictitious motion introduced by the use of a stationary reference system. The term "progression" is here being utilized merely to emphasize the continuing nature of these motions, and their space and time aspects.

During the one unit of motion (progression) at the normal unit speed that occurs periodically when the average speed is 1/n, the spatial component of this motion, which is an inherent property of the motion independent of the progression of the natural reference system, is accompanied by a similar progression of time that is likewise independent of the progression of the reference system, the time aspect of which is measured by a clock. Thus, during every unit of clock time, the independent motion at speed 1/n involves a change of position in three-dimensional time amounting to 1/n units. As brought out in the preliminary discussion of this subject in Chapter 6, the value of n at the speeds of our ordinary experience is so large that the quantity 1/n is negligible, and the clock time can be taken as equivalent to the total time involved in motion. At higher speeds, however, the value of 1/n becomes significant, and the total time involved in motion at these high speeds includes this additional component. It is this heretofore unrecognized time component that is responsible for the discrepancies that present-day science tries to handle by means of fudge factors.

In the two-photon case considered in Chapter 7, the value of 1/n is 1/1 for both photons. A unit of the motion of photon X involves one unit of space and one unit of time. The time involved in this unit of motion (the time OX) can be measured by means of the registration on a clock, which is merely the temporal equivalent of a yardstick. The same clock can also be used to measure the time magnitude involved in the motion of photon Y (the time OY), but this use of the same temporal "yardstick" does not mean that the time interval OY through which Y moves is the same interval through which X moves, the interval OX, any more than using the same yardstick to measure the space traversed by Y would make it the same space that is traversed by X.
The truth is that at the end of one unit of the time involved in the progression of the natural reference system (also measured by a clock), X and Y are separated by two units of total time (the time OX and the time OY), as well as by two units of space (distance). The relative speed is the increase in spatial separation, two units, divided by the increase in temporal separation, two units, or 2/2 = 1. If an object with a lower speed v is substituted for one of the photons, so that the separation in space at the end of one unit of clock time is 1 + v instead of 2, the separation in time is also 1 + v, and the relative speed is (1 + v)/(1 + v) = 1. Any process that measures the true speed, rather than the space traversed during a given interval of standard clock time (the time of the progression of the natural reference system), thus arrives at unity for the speed of light irrespective of the system of reference.

When the correct time magnitudes are introduced into the equations of motion there is no longer any need for fudge factors. The measured coordinate differences and the measured constant speed of light are then fully compatible, and there is no need to deprive the spatial coordinates of their "metrical meaning." Unfortunately, however, no means of measuring total time, except in certain special applications, are available at present. Perhaps some feasible method of measurement may be developed in the future, but in the meantime it will be necessary to continue on the present basis of applying a correction to the clock registration, in those areas where this is feasible. Under these circumstances we can consider that we are using correction factors instead of fudge factors. There is no longer an unexplained discrepancy that needs to be fudged out of existence. What we now find is that our calculations involve a time component that we are unable to measure. In lieu of the measurements that we are unable to make, we find it possible, in certain special cases, to apply correction factors that compensate for the difference between clock time and total time. A full explanation of the derivation of these correction factors, the Lorentz equations, is available in the scientific literature, and will not be repeated here.

This conforms with a general policy that will be followed throughout this work. As explained in Chapter 1, most existing physical theories have been constructed by building up from empirical foundations. The Reciprocal System of theory is constructed in the opposite manner. While the empirically based theories start with the observed details and work toward the general principles, the Reciprocal System starts with a set of general postulates and works toward the details. At some point each of the branches of the theoretical development will meet the corresponding element of empirical theory. Where this occurs in the course of the present work, and there is agreement, as there is in the case of the Lorentz equations, the task of this presentation is complete. No purpose would be served by duplicating material that is already available in full detail. Most of the other well-established relationships of physical science are similarly incorporated into the new system of theory, with or without minor modifications, as the development of the theoretical structure proceeds, not because of the weight of observational evidence supporting these relations, or because anyone happens to approve of them, or because they have previously been accepted by the scientific world, but because the conclusions expressed by these relations are the same conclusions that are reached by development of the new theoretical system.
After such a relation has thus been taken into the system, it is, of course, part of the system, and can be used in the same manner as any other part of the theoretical structure.
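The arithmetic of the foregoing paragraphs can be summarized in a short computational sketch. The following Python fragment is purely illustrative (the function name is editorial, not part of the theory); it simply restates the point made above: when the temporal separation is taken as the total time, unit for unit with the space traversed, the computed relative speed is unity for any v.

```python
def measured_relative_speed(v, clock_time=1.0):
    """Two objects leave a common point in opposite directions: a photon
    at unit speed and an object at speed v (natural units, c = 1).
    Computing speed as spatial separation divided by TOTAL temporal
    separation, as described in the text, always yields unity."""
    spatial_separation = (1.0 + v) * clock_time   # 1 + v units of space
    temporal_separation = (1.0 + v) * clock_time  # 1 + v units of total time
    return spatial_separation / temporal_separation

for v in (1.0, 0.5, 0.1):
    print(v, measured_relative_speed(v))  # 1.0 in every case
```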

The existence of speeds greater than unity (the speed of light), the speeds that result in change of position in time, conflicts with current scientific opinion, which accepts Einstein's conclusion that the speed of light is an absolute limit that cannot be exceeded. Our development shows, however, that at one point where Einstein had to make an arbitrary choice between alternatives, he made the wrong choice, and the speed limitation was introduced through this error. It does not exist in fact.

Like the special theory of relativity, the theory from which the speed limitation is derived is an attempt to provide an explanation for an empirical observation. According to Newton's second law of motion, which can be expressed as a = F/m, if a constant force is applied to the acceleration of a constant mass it should produce an acceleration that is also constant. But a series of experiments showed that where a presumably constant electrical force is applied to a light particle, such as an electron, in such a manner that very high speeds are produced, the acceleration does not remain constant, but decreases at a rate which indicates that it would reach zero at the speed of light. The true relation, according to the experimental results, is not Newton's law, a = F/m, but a = √(1 - v²/c²) F/m. In the system of notation used in this work, which utilizes natural, rather than arbitrary, units of measurement, the speed of light, designated as c in current practice, is unity, and the variable speed (or velocity), v, is expressed in terms of this natural unit. On this basis the empirically derived equation becomes a = √(1 - v²) F/m.

There is nothing in the data derived from experiment to tell us the meaning of the term 1 - v² in this expression; whether the force decreases at higher speeds, or the mass increases, or whether the velocity term represents the effect of some factor not related to either force or mass. Einstein apparently considered only the first two of these alternatives. While it is difficult to reconstruct the pattern of his thinking, it appears that he assumed that the effective force would decrease only if the electric charges that produced the force decreased in magnitude. Since all electric charges are alike, so far as we know, whereas the primary mass concentrations seem to be extremely variable, he chose the mass alternative as being the more likely, and assumed for purposes of his theory that the mass increases with the velocity at the rate indicated by the experiments. On this basis, the mass becomes infinite at the speed of light.

The results obtained from development of the consequences of the postulates of the Reciprocal System now show that Einstein guessed wrong. The new information developed theoretically (which will be discussed in detail later) reveals that an electric charge is inherently incapable of producing a speed in excess of unity, and that the decrease in the acceleration at high speeds is actually due to a decrease in the force exerted by the charges, not to any change in the magnitude of either the mass or the charge. As explained earlier, force is merely a concept by which we visualize the resultant of oppositely directed motions as a conflict of tendencies to cause motion rather than as a conflict of the motions themselves.
This method of approach facilitates mathematical treatment of the subject, and is unquestionably a convenience, but whenever a physical situation is represented by some derived concept of this kind there is always a hazard that the correspondence may not be complete, and that the conclusions reached through the medium of the derived concept may therefore be in error. This is what has happened in the case we are now considering.

If the assumption that a force applied to the acceleration of a mass remains constant in the absence of any external influences is viewed only from the standpoint of the force concept, it appears entirely logical. It seems quite reasonable that a tendency to cause motion would remain constant unless subjected to some kind of a modifier. But when we look at the situation in its true light as a combination of motions, rather than through the medium of an artificial representation by means of the force concept, it is immediately apparent that there is no such thing as a constant force. Any force must decrease as the speed of the motion from which it originates is approached. The progression of the natural reference system, for instance, is motion at unit speed. It therefore exerts unit force. If the force–that is, the effect–of the progression is applied to overcoming a resistance to motion (the inertia of a mass) it will ultimately bring the mass up to the speed of the progression itself: unit speed. But a tendency to impart unit speed to an object that is already moving at high speed is not equivalent to a tendency to impart unit speed to a body at rest. In the limiting condition, where an object is already moving at unit speed, the force due to the progression of the reference system has no effect at all, and its magnitude is zero.

Thus, the full effect of any force is attained only when the force is exerted on a body at rest, and the effective component in application to an object in motion is a function of the difference between the speed of that object and the speed that manifests itself as a force. The specific form of the mathematical function, √(1 - v²) rather than merely 1 - v, is related to some of the properties of compound motions that will be discussed later. Ordinary terrestrial speeds are so low that the corresponding reduction in the effective force is negligible, and at these speeds forces can be considered constant. As the speed of the moving object increases, the effective force decreases, approaching a limit of zero when the object is moving at the speed corresponding to the applied force–unity in the case of the progression of the natural reference system. As we will find in a later stage of the development, an electric charge is inherently a motion at unit speed, like the gravitational motion and the progression of the natural reference system, and it, too, exerts zero force on an object moving at unit speed.

As an analogy, we may consider the case of a container full of water, which is started spinning rapidly. The movement of the container walls exerts a force tending to give the liquid a rotational motion, and under the influence of this force the water gradually acquires a rotational speed. But as that speed approaches the speed of the container the effect of the "constant" force drops off, and the container speed constitutes a limit beyond which the water speed cannot be raised by this means. The force vanishes, we may say. But the fact that we cannot accelerate the liquid any farther by this means does not bar us from giving it a higher speed in some other way. The limitation is on the capability of the process, not on the speed at which the water can rotate.

The mathematics of the equation of motion applicable to the acceleration phenomenon remain the same in the Reciprocal System as in Einstein's theory.
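The neutrality of the mathematics between the two interpretations is easily exhibited numerically. The sketch below is illustrative only (natural units, c = 1); it computes the empirically derived relation a = √(1 - v²) F/m, the expression on which the increasing-mass and decreasing-force interpretations necessarily agree.

```python
from math import sqrt

def acceleration(force, mass, v):
    """Empirical acceleration relation in natural units (c = 1):
    a = sqrt(1 - v**2) * F / m.  Mathematically neutral between the
    two interpretations discussed in the text."""
    return sqrt(1.0 - v ** 2) * force / mass

for v in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"v = {v:5}: a = {acceleration(1.0, 1.0, v):.4f}")
# The acceleration produced by a nominally constant force falls
# toward zero as v approaches unity.
```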
It makes no difference mathematically whether the mass is increased by a given amount, or the effective force is decreased by the same amount. The effect on the observed quantity, the acceleration, is identical. The wealth of experimental evidence that demonstrates the validity of these mathematics therefore confirms the results derived from the Reciprocal System to exactly the same degree that it confirms Einstein's theory. All that this evidence does in either case is to show that the theory is mathematically correct. But mathematical validity is only one of the requirements that a theory must meet in order to be a correct representation of the physical facts. It must also be conceptually valid; that is, the meaning attached to the mathematical terms and relations must be correct.

One of the significant aspects of Einstein's theory of acceleration at high speeds is that it explains nothing; it merely makes assertions. Einstein gives us an ex cathedra pronouncement to the effect that the velocity term represents an increase in the mass, without any attempt at an explanation as to why the mass increases with the velocity, why this hypothetical mass increment does not alter the structure of the moving atom or particle, as any other mass increment does, why the velocity term has this particular mathematical form, or why there should be a speed limitation of any kind. Of course, this lack of a conceptual background is a general characteristic of the basic theories of present-day physics, the "free inventions of the human mind," as Einstein described them, and the theory of mass increase is not unusual in this respect. But the arbitrary character of the theory contrasts sharply with the full explanation provided by the Reciprocal System. This new system of theory produces simple and logical answers for all questions, similar to those enumerated above, that arise in connection with the explanation that it supplies. Furthermore, none of these answers is, in any respect, ad hoc. All are derived entirely by deduction from the assumptions as to the nature of space and time that constitute the basic premises of the new theoretical system.

Both the Reciprocal System and Einstein's theory recognize that there is a limit of some kind at unit speed. Einstein says that this is a limit on the magnitude of speed, because on the basis of his theory the mass reaches infinity at unit speed, and it is impossible to accelerate an infinite mass. The Reciprocal System, on the other hand, says that the limit is on the capability of the process. A speed in excess of unity cannot be produced by electromagnetic means. This does not preclude acceleration to higher speeds by other processes, such as the sudden release of large quantities of energy in explosive events, and according to this new theoretical viewpoint there is no definite limit to speed magnitudes. Indeed, the general reciprocal relation between space and time requires that speeds in excess of unity be just as plentiful, and cover just as wide a range, in the universe as a whole, as speeds less than unity. The apparent predominance of low-speed phenomena is merely a result of observing the universe from a location far over on the low-speed side of the neutral axis.

One of the reasons why Einstein's assertion as to the existence of a limiting speed was so readily accepted is an alleged absence of any observational evidence of speeds in excess of that of light. Our new theoretical development indicates, however, that there is actually no lack of evidence. The difficulty is that the scientific community currently holds a mistaken belief as to the nature of the change of position that is produced by such a motion.
We observe that a motion at a speed less than that of light causes a change of location in space, the rate of change varying with the speed (or velocity, if the motion is other than linear). It is currently taken for granted that a speed in excess of that of light would result in a still greater rate of change of spatial location, and the absence of any clearly authenticated evidence of such higher rates of location change is interpreted as proof of the existence of a speed limitation. But in a universe of motion an increment of speed above unity (the speed of light) does not cause a change of location in space. In such a universe there is complete symmetry between space and time, and since unit speed is the neutral level, the excess speed above unity causes a change of location in three-dimensional time rather than in three-dimensional space.

From this it can be seen that the search for "tachyons," hypothetical particles that move with a spatial velocity greater than unity, will continue to be fruitless. Speeds above unity cannot be detected by measurements of the rate of change of coordinate positions in space. We can detect them only by means of a direct speed measurement, or by some collateral effects. There are many observable effects of the required nature, but their status as evidence of speeds greater than that of light is denied by present-day physicists on the ground that it conflicts with Einstein's assumption of an increase in mass at high speeds. In other words, the observations are required to conform to the theory, rather than requiring the theory to meet the standard test of science: conformity with observation and measurement.

The current treatment of the abnormal redshifts of the quasars is a glaring example of this unscientific distortion of the observations to fit the theory. We have adequate grounds to conclude that these are Doppler shifts, and are due to the speeds at which these objects are receding from the earth. Until very recently there was no problem in this connection. There was general agreement as to the nature of the redshifts, and as to the existence of a linear relation between the redshift and the speed. This happy state of affairs was ended when quasars were found with redshifts exceeding 1.00. On the basis of the previously accepted theory, a 1.00 redshift indicates a recession speed equal to the speed of light. The newly discovered redshifts in the range above 1.00 therefore constitute a direct measurement of quasar motions at speeds greater than that of light. But the present-day scientific community is unwilling to challenge Einstein, even on the basis of direct evidence, so the mathematics of the special theory of relativity have been invoked as a means of saving the speed limitation.

No consideration seems to have been given to the fact that the situation to which the mathematical relations of special relativity apply does not exist in the case of the Doppler shift. As brought out in Chapter 7, and as Einstein has explained very clearly in his works, the Lorentz equations, which express those mathematics, are designed to reconcile the results of direct measurements of speed, as in the Michelson-Morley experiment, with the measured changes of coordinate position in a spatial reference system. As everyone, including Einstein, has recognized, it is the direct speed measurement that arrives at the correct numerical magnitude. (Indeed, Einstein postulated the validity of the speed measurement as a basic principle of nature.) Like the result of the Michelson-Morley experiment, the Doppler shift is a direct measurement, simply a counting operation, and it is not in any way connected with a measurement of spatial coordinates.
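The contrast between the two readings of the redshift can be made explicit. In the sketch below (an editorial illustration only), the direct reading takes z = v in natural units, as in the previously accepted linear relation; the second function is the standard relativistic Doppler formula from the literature, invoked, as described above, to keep the inferred speed below unity.

```python
def recession_speed_direct(z):
    """Direct reading of the Doppler shift: z = v in natural units,
    so a redshift above 1.00 measures a speed above that of light."""
    return z

def recession_speed_relativistic(z):
    """Standard relativistic reading: 1 + z = sqrt((1 + v)/(1 - v)),
    which confines the inferred speed to values below unity."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

for z in (0.5, 1.0, 2.0, 3.5):
    print(f"z = {z}: direct v = {z}, relativistic v = "
          f"{recession_speed_relativistic(z):.3f}")
```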
Thus there is no excuse for applying the relativity mathematics to the redshift measurements. Inasmuch as the "time dilatation" aspect of the Lorentz equations is being applied to some other phenomena that do not seem to have any connection with spatial coordinates, it may be desirable to anticipate the subsequent development of theory to the extent of stating that the discussion in Chapter 15 will show that those "dilatation" phenomena that appear to involve time only, such as the extended lifetime of fast-moving unstable particles, are, in fact, consequences of the variation of the relation between coordinate spatial location (location in the fixed reference system) and absolute spatial location (location in the natural moving system) with the speed of the objects occupying these locations. The Doppler effect, on the other hand, is independent of the spatial reference system.

The way in which motion in time manifests itself to observation depends on the nature of the phenomenon in which it is observed. Large redshifts are confined to high-speed astronomical objects, and a detailed examination of the effect of motion in time on the Doppler shift will be deferred to Volume II, where it will be relevant to the explanation of the quasars. At this time we will take a look at another of the observable effects of motion in time that is not currently recognized as such by the scientific community: its effect in distorting the scale of the spatial reference system.

It was emphasized in Chapter 3 that the conventional spatial reference systems are not capable of representing more than one variable–space–and that because there are two basic variables–space and time–in the physical universe we are able to use the spatial reference systems only on the basis of an assumption that the rate of change of time remains constant. We further saw, earlier in the present chapter, that at all speeds of unity or less time does, in fact, progress at a constant rate, and all variability is in space. It follows that if the correct values of the total time are used in all applications, the conventional spatial coordinate systems are capable of accurately representing all motions at speeds of 1/n. But the scale of the spatial coordinate system is related to the rate of change of time, and the accuracy of the coordinate representation depends on the absence of any change in time other than the continuing progression at the normal rate of registration on a clock. At speeds in excess of unity, space is the entity that progresses at the fixed normal rate, and time is variable. Consequently, the excess speed above unity distorts the spatial coordinate system.

In a spatial reference system the coordinate difference between two points A and B represents the space traversed by any object moving from A to B at the reference speed. If that reference speed is changed, the distance corresponding to the coordinate difference AB is changed accordingly. This is true irrespective of the nature of the process utilized for measurement of the distance. It might be assumed, for instance, that by using something similar to a yardstick, which compares space directly with space, the measurement of the coordinate distance would be independent of the reference speed. But this is not correct, as the length of the yardstick, the distance between its two ends, is related to the reference speed in the same manner as the distance between any other two points. If the coordinate difference between A and B is x when the reference speed has the normal unit value, then it becomes 2x if the reference speed is doubled.
Thus, if we want to represent motions at twice the speed of light in one of the standard spatial coordinate systems that assume time to be progressing normally, all distances involved in these motions must be reduced by one half. Any other speed greater than unity requires a corresponding modification of the distance scale.
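The scale adjustment described in this paragraph can be put into a one-line rule. The sketch below is an editorial reading of the text only: at speeds of unity or less the coordinate scale is normal, while a speed v greater than unity compresses the represented distances by the factor 1/v (one half at twice the speed of light).

```python
def represented_distance(true_distance, speed):
    """Distance as it must be drawn in a conventional spatial
    reference system for motion at the given speed (natural units).
    Above unit speed the scale is compressed by the factor 1/speed."""
    scale = 1.0 if speed <= 1.0 else 1.0 / speed
    return true_distance * scale

print(represented_distance(10.0, 0.5))  # 10.0: normal scale
print(represented_distance(10.0, 2.0))  # 5.0:  reduced by one half
```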

The existence of motion at greater-than-unit speeds has no direct relevance to the familiar phenomena of everyday life, but it is important in all of the less accessible areas, those that we have called the far-out regions. Most of the consequences that apply in the realm of the very large, the astronomical domain, have no significance in relation to the subjects being discussed at this early stage of the theoretical development, but the general nature of the effects produced by greater-than-unit speeds is most clearly illustrated by those astronomical phenomena in which such speeds can be observed on a major scale. A brief examination of a typical high-speed astronomical object will therefore help to clarify the factors involved in the high-speed situation.

In the preceding pages we deduced from theoretical premises that speeds in excess of the speed of light can be produced by processes that involve large concentrations of energy, such as explosions. Further theoretical development (in Volume II) will show that both stars and galaxies do, in fact, undergo explosions at certain specific stages of their existence. The explosion of a star is energetic enough to accelerate some portions of the stellar mass to speeds above unity, while other portions acquire speeds below this level. The low-speed material is thrown off into space in the form of an expanding cloud of debris in which the particles of matter retain their normal dimensions but are separated by an increasing amount of empty space. The high-speed material is similarly ejected in the form of an expanding cloud, but because of the distortion of the scale of the reference system by the greater-than-unit speeds, the distances between the particles decrease rather than increase. To emphasize the analogy with the cloud of material expanding into space, we may say that the particles expanding into time are separated by an increasing amount of empty time.

The expansion in each case takes place from the situation that existed at the time of the explosion, not from some arbitrary zero datum. The star was originally stationary, or moving at low speed, in the conventional spatial reference system, and was stationary in time in the moving system of reference defined by a clock. As a result of the explosion, the matter ejected at low speeds moves outward in space and remains in the original condition in time. The matter ejected at high speeds moves outward in time and remains in its original condition in space. Since we see only the spatial result of all motions, we see the low-speed material in its true form as an expanding cloud, whereas we see the high-speed material as an object remaining stationary in the original spatial location.

Because of the empty space that is introduced between the particles of the outward-moving explosion product, the diameter of the expanding cloud is considerably larger than that of the original star. The empty time introduced between the particles of the inward-moving explosion product conforms to the general reciprocal relation, and inverts this result. The observed aggregate, a white dwarf star, is also an expanding object, but its expansion into time is equivalent to a contraction in space, and as we see it in its spatial aspect, its diameter is substantially less than that of the original star. It thus appears to observation as an object of very high density.
The white dwarf is one member of a class of extremely compact astronomical objects discovered in recent years that is today challenging the basic principles of conventional physics. Some of these objects, such as the quasars, are still without any plausible explanation. Others, including the white dwarfs, have been tied in to current physical theory by means of ad hoc assumptions, but since the assumptions made to explain each of these objects are not applicable to the others, the astronomers are supplied with a whole assortment of theories to explain the same phenomenon: extremely high densities. It is therefore significant that the explanation of the high density of the white dwarf stars derived from the postulates of the Reciprocal System of theory is applicable to all of the other compact objects. As will be shown in the detailed discussion, all of these extremely compact astronomical objects are explosion products, and their high density is in all cases due to the same cause: motion at speeds in excess of that of light. This is only a very brief account of a complex phenomenon that will be examined in full detail later, but it is a striking illustration of how the inverse phenomena predicted by the reciprocal relation can always be found somewhere in the universe, even if they involve such seemingly bizarre concepts as empty time, or high speed motion of objects stationary in space.

Another place where the inability of the conventional spatial reference systems to represent changes in temporal location, other than by distortion of the spatial representation, prevents them from showing the physical situation in its true light is the region inside unit distance. Here the motion in time is not due to a speed greater than unity, but to the fact that, because of the discrete nature of the natural units, less than unit space (or time) does not exist. To illustrate just what is involved here, let us consider an atom A in motion toward another atom B. According to current ideas, atom A will continue to move in the direction AB until the atoms, or the force fields surrounding them, if such fields exist, are in contact. The postulates of the Reciprocal System specify, however, that space exists only in units. It follows that when atom A reaches point X, one unit of space distant from B, it cannot move any closer to B in space. But it is free to change its position in time relative to the time location occupied by atom B, and since further movement in space is not possible, the momentum of the atom causes the motion to continue in the only way that is open to it.

The spatial reference system is incapable of representing any deviation of time from the normal rate of progression, and this added motion in time therefore distorts the spatial position of the moving atom A in the same manner as the speeds in excess of unity that we considered earlier. When the separation in time between the two atoms has increased to n units, space remaining unchanged (by means of continued reversals of direction), the equivalent spatial separation, the quantity that is determined by the conventional methods of measurement, is 1/n units. Thus, while atom A cannot move to a position less than one unit of space distant from atom B, it can move to the equivalent of a closer position by moving outward in time. Because of this capability of motion in time in the region inside unit distance it is possible for the measured length, area, or volume of a physical object to be a fraction of a natural unit, even though the actual one, two, or three-dimensional space cannot be less than one unit in any case.

It was brought out in Chapter 6 that the atoms of a material aggregate, which are contiguous in space, are widely separated in time.
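Before proceeding, the relation between separation in time and its measured spatial equivalent, stated above, can be put in computational form. The following fragment is illustrative only: n units of time separation inside unit distance are the equivalent of 1/n units of space, which is what conventional measurement actually reports.

```python
def equivalent_spatial_separation(time_units):
    """Inside unit distance an atom cannot come closer than one
    natural unit of space, but n units of separation in time are
    the equivalent of a spatial separation of 1/n units."""
    return 1.0 / time_units

for n in (1, 2, 4, 10):
    print(n, equivalent_spatial_separation(n))
# Measured lengths below one natural unit are thus equivalents
# produced by motion in time, not actual sub-unit distances.
```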
Now we are examining a situation in which a change of position in the spatial coordinate system results from a separation in time, and we will want to know just how these time separations differ. The explanation is that the individual atoms of an aggregate such as a gas, in which the atoms are separated by more than unit distance, are also separated by various distances in time, but these atoms are all at the same stage of the time progression. The motion of these atoms meets the requirement for accurate representation in the conventional spatial coordinate systems; that is, it maintains the fixed time progression on which the reference system is based. On the other hand, the motion in time that takes place inside unit distance involves a deviation from the normal time progression.

A spatial analogy may be helpful in getting a clear view of this situation. Let us consider the individual units (stars) of a galaxy. Regardless of how widely these stars are separated, or how much they move around within the galaxy, they maintain their status as constituents of the galaxy because they are all receding at the same speed (the internal motions being negligible compared to the recession speed). They are at the same stage of the galactic recession. But if one of these stars acquires a spatial motion that modifies its recession speed significantly, it moves away from the galaxy, either temporarily or permanently. Thereafter, the position of this star can no longer be represented in a map of the galaxy, except by some special convention.

The separations in time discussed in Chapter 6 are analogous to the separations in space within the galaxies. The material aggregates that we are now discussing retain their identities just as the galaxies do, because their individual components are progressing in time at the same rates. But just as individual stars may acquire spatial speeds which cause them to move away from the galaxies, so the individual atoms of the material aggregates may acquire motions in time which cause them to move away from the normal path of the time progression. Inside unit distance this deviation is temporary and quite limited in extent. In the white dwarf stars the deviations are more extensive, but still temporary. In the astronomical discussions in Volume II we will consider phenomena in which the magnitude of the deviation is sufficient to carry the aggregates that are involved completely out of the range of the spatial coordinate systems.

So far as the inter-atomic distance is concerned, it is not material whether this is an actual spatial separation or merely the equivalent of such a separation, but the fact that the movement of the atoms changes from a motion in space to a motion in time at the unit level has some important consequences from other standpoints. For instance, the spatial direction AB in which atom A was originally moving no longer has any significance now that the motion is taking place inside unit distance, inasmuch as the motion in time which replaces the previous motion in space has no spatial direction. It does have what we choose to call a direction in time, but this temporal direction has no relation at all to the spatial direction of the previous motion. No matter what the spatial direction of the motion of the atom may have been before unit distance was reached, the temporal direction of the motion after it makes the transition to motion in time is determined purely by chance.

Any kind of action originating in the region where all motion is in time is also subject to significant modifications if it reaches the unit boundary and enters the region of space motion. For example, the connection between motion in space and motion in time is scalar, again because there is no relation between direction in space and direction in time.
Consequently, only one dimension of a two-dimensional or three-dimensional motion can be transmitted across the boundary. This point has an important bearing on some of the phenomena that will be discussed later.

Another significant fact is that the effective direction of the basic scalar motions, gravitation and the progression of the natural reference system, reverses at the unit level. Outside unit space the progression of the reference system carries all objects outward in space away from each other. Inside unit space only time can progress unidirectionally, and since an increase in time, with space remaining constant, is equivalent to a decrease in space, the progression of the reference system in this region, the time region, as we will call it, moves all objects to locations which, in effect, are closer together. The gravitational motion necessarily opposes the progression, and hence the direction of this motion also reverses at the unit boundary. As it is ordinarily observed in the region outside unit distance, gravitation is an inward motion, moving objects closer together. In the time region it acts in the outward direction, moving material objects farther apart.

On first consideration, it may seem illogical for the same force to act in opposite directions in different regions, but from the natural standpoint these are not different directions. As emphasized in Chapter 3, the natural datum is unity, not zero, and the progression of the natural reference system therefore always acts in the same natural direction: away from unity. In the region outside unit distance away from unity is also away from zero, but in the time region away from unity is toward zero. Gravitation likewise has the same natural direction in both regions: toward unity.

It is this reversal of coordinate direction at the unit level that enables the atoms to take up equilibrium positions and form solid and liquid aggregates. No such equilibrium can be established where the progression of the natural reference system is outward, because in this case the effect of any change in the distance between atoms resulting from an unbalance of forces is to accentuate the unbalance. If the inward-directed gravitational motion exceeds the outward-directed progression, a net inward motion takes place, making the gravitational motion still greater. Conversely, if the gravitational motion is the smaller, the resulting net motion is outward, which still further reduces the already inadequate gravitational motion. Under these conditions there can be no equilibrium. In the time region, however, the effect of a change in relative position opposes the unbalanced force which caused the change. If the gravitational motion (outward in this region) is the greater, an outward net motion takes place, reducing the gravitational motion and ultimately bringing it into equality with the constant inward progression of the reference system. Similarly, if the progression is the greater, the net movement is inward, and this increases the gravitational motion until equilibrium is reached.

The equilibrium that must necessarily be established between the atoms of matter inside unit distance in a universe of motion obviously corresponds to the observed inter-atomic equilibrium that prevails in solids and, with certain modifications, in liquids. Here, then, is the explanation of solid and liquid cohesion that we derive from the Reciprocal System of theory, the first comprehensive and completely self-consistent theory of this phenomenon that has ever been formulated.
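The difference between the two regions can be exhibited with a toy numerical model. The sketch below is an editorial illustration only: the 1/s² form of the gravitational term and the step factor are assumed purely for the sake of the demonstration. What it shows is the qualitative point of the text: outside unit distance any unbalance grows, while in the time region, where both effective directions are reversed, the net motion opposes the unbalance and an equilibrium separation is reached.

```python
def net_outward_motion(s, region):
    """Toy force balance: gravitation taken as 1/s**2 (assumed form,
    for illustration only) against the unit-speed progression of the
    natural reference system.  In the time region both effective
    directions are reversed."""
    gravitation = 1.0 / s ** 2
    progression = 1.0
    if region == "outside":
        return progression - gravitation  # outward progression, inward gravitation
    return gravitation - progression      # time region: directions reversed

def simulate(region, s0, steps=200, k=0.02):
    """Iterate the toy balance from an initial separation s0."""
    s = s0
    for _ in range(steps):
        s += k * net_outward_motion(s, region)
    return s

print(simulate("outside", 1.1))      # moves ever farther from the balance point
print(simulate("time region", 1.1))  # settles toward the equilibrium at s = 1
```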
The mere fact that it is far superior in all respects to the currently accepted electrical theory of matter is not, in itself, very significant, inasmuch as the electrical hypothesis is definitely one of the less successful segments of present-day physical theory, but a comparison of the two theories should nevertheless be of interest from the standpoint of demonstrating how great an advance the new theoretical system actually accomplishes in this particular field. A detailed comparison will therefore be presented later, after some further groundwork has been laid.

CHAPTER 9

Rotational Combinations

One of the principal difficulties that is encountered in explaining the Reciprocal System of theory, or portions thereof, is a general tendency on the part of readers or listeners to assume that the author or speaker, whoever he may be, does not actually mean what he says. No previous major theory is purely theoretical; every one takes certain empirical information as a given element in the premises of the theory. The conventional theory of matter, for example, takes the existence of matter as given. It then assumes that this matter is composed of "elementary particles," which it attempts to identify with observed material particles. On the basis of this assumption, together with the empirical information introduced into the theory, it then attempts to explain the observed range of structural characteristics. Inasmuch as all previous theories of major scope have been constructed on this pattern, there is a general impression that physical theories must be so constructed, and it is therefore assumed that when reference is made to the fact that the Reciprocal System utilizes no empirical data of any kind, this statement must have some meaning other than its literal significance.

The theoretical development in the preceding chapters should dispose of this misapprehension so far as the qualitative aspect of the universe is concerned. While the task is still only in the early stages, enough of the basic features of the physical universe–radiation, matter, gravitation, etc.–have been derived by deduction from the postulates, without the aid of further assumptions, or of empirical information, to demonstrate that a purely theoretical qualitative development is, in fact, feasible. But a complete account of a theoretical universe must necessarily include the quantitative aspects of physical phenomena as well as the qualitative aspects. Here is another place where the way in which the development of theory has taken place is mistakenly regarded as the way in which this development must take place. The theoretical products of the Newtonian era, the so-called "classical" physics, were capable of being expressed in simple mathematical terms. But some deviations from the classical laws have been encountered in the far-out regions that have been reached by observation and experiment in recent years, and the physicists have not been able to account for these deviations without employing extremely complex mathematical processes, together with conceptual artifices of a rather dubious character, such as Einstein's "rubber yardstick," or fudge factor. In the light of the points brought out in the preceding chapter it is now evident that the difficulties are due to a misunderstanding of the basic nature of the far-out phenomena, but since the modern theorists have not realized this, they have concluded that the true relationships of the universe are extremely complex, and that they cannot be expressed by anything other than very complex mathematics.

The general acceptance of this view of the situation has led a large segment of the scientific community, particularly the theoretical physicists, to the further conclusion that any treatment of the subject matter by means of simple mathematics is necessarily wrong, and can safely be dismissed without examination. Indeed, many of these individuals go a step farther, and characterize such a treatment as "non-mathematical." This attitude is, of course, preposterous, and it cannot be defended, but it is nevertheless so widespread that it constitutes a serious obstacle in the way of a full appreciation of the merits of any simple mathematical treatment. In beginning the quantitative development of the Reciprocal System of theory it is therefore necessary to emphasize that simplicity is a virtue, not a defect. It is so recognized, in principle, by scientists in general, including those who are now contending that the universe is fundamentally complex, or even, as expressed by P. W. Bridgman, that it "is not intrinsically reasonable or understandable."56 In its entirety, the universe is indeed complex, extremely so, but as the initial steps in the development of the Reciprocal System in the preceding pages have already begun to demonstrate from a qualitative standpoint, it is actually a complex aggregate of interrelated simple elements.

The principal advantage of mathematical treatment of physical subject matter is the precision with which knowledge of a mathematical character can be developed and expressed. This is offset to a considerable degree, however, by the fact that mathematical knowledge of physical phenomena is incomplete, and from the physical standpoint, ambiguous. No mathematical statement of a physical relation is complete in itself. As Bridgman frequently pointed out, it must be accompanied by a "text" that tells us what the mathematics mean, and how they are to be applied. There is no definite and fixed relation between this text and the mathematics; that is, every mathematical statement of a physical relation is capable of different interpretations. The importance of this point in the present connection lies in the fact that the Reciprocal System makes relatively few changes in the mathematical aspects of current physical theory. The changes that it calls for are primarily conceptual. They require different interpretations of the mathematics, changes in the text, as Bridgman would say. Such changes, modifications of our ideas as to what the mathematics mean, obviously cannot be represented by alterations in the mathematical expressions. These expressions will have to stand as they are.

Many readers of the first edition have asked that the new ideas be "put in mathematical form." But what these individuals really mean is that they want the theory put into some different mathematical form. They are, in effect, demanding that we change the mathematics and leave the concepts alone. This we cannot do. The errors in current physical thought are primarily conceptual, not mathematical, and the corrections have to be made where the errors are, not somewhere else. There is nothing extraordinary about the close correlation between the mathematical aspects of the Reciprocal System and those of current theory.
The conventional mathematical relations were, for the most part, derived empirically, and any correct theory of a more general nature must necessarily arrive at these same mathematics. But there is no guarantee that the prevailing interpretation of these mathematical results is correct. On the contrary, as Jeans pointed out in the statement previously quoted, the physical interpretations of correct mathematical formulae have often been "badly wrong." Correction of the errors that have been made in the interpretation of the mathematical expressions often has very significant consequences, not so much in the particular area to which such an expression is directly applicable, but in collateral areas. The interpretation is usually tailored to fit the immediate physical situation reasonably well, but if it is not correct it becomes an impediment to progress in related areas. If it does not actually lead to erroneous conclusions, such as the limitation on speed that Einstein derived from a wrong interpretation of the mathematics of acceleration at high speeds, it at least misses all of the significant collateral implications of the true explanation.

For example, the mathematical statement of the recession of the distant galaxies merely tells us that these galaxies are receding at speeds directly proportional to their distances. The currently popular interpretation of this mathematical relation assumes that the recession is an ordinary vectorial motion. The problem in accounting for it then becomes a matter of identifying (or inventing) a force of sufficient magnitude to produce the extremely high speeds of the most distant objects. The accepted hypothesis is that they were produced by a gigantic explosion of the entire contents of the universe at some unique stage of its history.

The Reciprocal System is in agreement with the mathematical aspects of current theory. It arrives theoretically at the conclusion that the distant galaxies must recede at speeds proportional to their respective distances: the same conclusion that present-day astronomy derives empirically. But the new theoretical system says that this recession is not a vectorial motion imparted to the galaxies by some powerful force. It is a scalar outward motion that results from viewing the galaxies in the context of a stationary spatial frame of reference rather than in the natural moving system of reference to which all physical objects actually conform. So far as the recession phenomenon itself is concerned, it makes little difference, aside from the implications for cosmology, which interpretation of the mathematical relation between speed and distance is accepted, but on the basis of the currently popular hypothesis, this relation has no further significance, whereas on the basis of the explanation derived from the postulates of the Reciprocal System, the same forces that apply to the distant galaxies are applicable to all atoms and aggregates of matter, producing effects which vary with the relative magnitudes of the different forces involved. On the basis of this new information, the mathematical relation which applies to the galaxies is one of far-reaching importance.

This present chapter will initiate a demonstration that the very complex mathematical relations that are encountered in many physical areas are the result of permutations and combinations of simple basic elements, rather than a reflection of a complex fundamental reality. The process whereby the compound unit of motion that we call an atom is produced by applying a rotational motion to a previously existing vibrational motion, the photon, is typical of the manner in which the complex phenomena of the universe are built up from simple foundations.
We start with a uniform linear, or translational, motion at unit speed. Then by directional reversals we produce a simple harmonic motion, or vibration. Next the vibrating unit is caused to rotate. The addition of this motion of a different type alters the behavior of the unit–gives it different properties, as we say–and puts it into a new physical category. All of the more complex physical entities with which we will deal in the subsequent pages are similarly built up by compounding the simpler motions.

The first phase of this mathematical development is a striking example of the way in which a few very simple mathematical premises quickly proliferate into a large number and variety of mathematical consequences. The development will begin with nothing more than the series of cardinal numbers and the geometry of three dimensions. By subjecting these to simple mathematical processes, the applicability of which to the physical universe of motion is specified in the fundamental postulates, the combinations of rotational motions that can exist in the theoretical universe will be ascertained. It will then be shown that these rotational combinations that theoretically can exist can be individually identified with the atoms of the chemical elements and the sub-atomic particles that are observed to exist in the physical universe. A unique group of numbers representing the different rotational components will be derived for each of these combinations. The set of numbers applying to each element or type of particle theoretically determines the properties of that substance, inasmuch as these properties, like all other quantitative features of a universe of motion, are functions of the magnitudes of the motions that constitute the material substances. It will be shown in this and the following chapter that this theoretical assertion is valid for some of the simpler properties, including those which depend upon the position of the element in the periodic table. The application of these numerical factors to other properties will be discussed from time to time as consideration of these other properties is undertaken later in the development.

One preliminary step that will have to be taken is to revise present measurement procedures and units in order to accommodate them to the natural moving system of reference. Because of the status of unity as the natural reference datum, a deviation of n-1 units downward from unity to a speed 1/n has the same natural magnitude as a deviation of n-1 units upward to a speed n/1, even though, when measured from zero speed in the conventional manner, the changes are wholly disproportionate. When n is 4, for example, the upward change is from 1 to 4, an increase of 3 units, whereas the downward change is from 1 to ¼, a decrease of only ¾ unit. In order to reflect the fact that these deviations are actually equal in magnitude from the natural standpoint, the basis on which the fundamental processes of the universe take place, it is necessary to set up a new system of speed measurement, in which we express the magnitude of the speed in terms of the deviation, upward or downward, from unit speed, instead of measuring from some zero in the conventional manner.

Inasmuch as the units in which speeds are measured on this basis are not commensurable with those of speed as measured from zero, it would lead to complete confusion if the units of the new system were called units of speed. For this reason, when reference is made to speed in terms of its natural magnitude in any of the publications dealing with the Reciprocal System of theory, it is not called speed. Instead, the term "speed displacement" is used, the units of this displacement being natural units of deviation from unity.
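The n = 4 example can be generalized in a few lines. The sketch below is an editorial illustration of the rule just stated (the natural magnitude of a speed 1/n or n/1 is n - 1 units); it uses Python's Fraction type to read off the deviation from unity.

```python
from fractions import Fraction

def speed_displacement(speed):
    """Natural measure of a speed: its deviation from unit speed,
    in units.  A speed 1/n and a speed n/1 both deviate from unity
    by n - 1 natural units."""
    s = Fraction(speed)
    return (s.denominator if s < 1 else s.numerator) - 1

print(speed_displacement(Fraction(1, 4)))  # 3 units below unity
print(speed_displacement(4))               # 3 units above unity
# Measured from zero, the changes 1 -> 1/4 (a decrease of 3/4) and
# 1 -> 4 (an increase of 3) look wholly disproportionate; measured
# from the natural datum, both are 3 units.
```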

In practice, the term "speed displacement" is usually shortened to "displacement," and this has led to some criticism of the terminology on the ground that "displacement" already has other scientific meanings. But it is highly desirable, as an aid to understanding, that the idea of a deviation from a norm should be clearly indicated in the language that is used, and there are not many English words that meet the requirements. Under the circumstances, "displacement" appears to be the best choice. The sense in which this term is used will almost always be indicated by the context in which it appears, and in the few cases where there might be some question, the possibility of confusion can be avoided by employing the full name, "speed displacement."

Another reason for the use of a distinctive term in designating natural speed magnitudes is that this is necessary in order to make the addition of speeds meaningful. Conventional physics claims that it recognizes speed as a scalar quantity, but in actual practice gives it no more than a quasi-scalar status. True scalar quantities are additive. If we have five gallons of gasoline in one container and ten gallons in another, the total, the quantity in which we are most interested, is fifteen gallons. The corresponding sum of two speeds of the same object–rotational and translational, for example–has no meaning at all in current physical thought. In the universe of motion described by the Reciprocal System of theory, however, the scalar total of all of the speeds of an object is one of the most important properties of that object. Thus, even though speed has the same basic significance in the Reciprocal System as in conventional theory–that is, it is a measure of the magnitude of motion–the manner in which speed enters into physical phenomena is so different in the two systems that it would be inappropriate to express it in the same units of measurement in both cases, even if this were not ruled out for other reasons.

It would, of course, be somewhat simpler if we could say "speed" whenever we mean speed, and not have to use two different terms for the same thing. But the meaning of whatever is said should be clear in all cases if it is kept in mind that whenever reference is made to "displacement," this means "speed," but not speed as ordinarily measured. It is speed measured in different quantities, and from a different reference datum.

A decrease in speed from 1/1 to 1/n involves a positive displacement of n-1 units; that is, an addition of n-1 units of motion in which time is unidirectional while the space direction alternates, thus, in effect, adding n-1 units of time to the original speed 1/1. Similarly, an increase in speed from 1/1 to n/1 involves a negative displacement, an addition of n-1 units of motion in which space is unidirectional while the time direction alternates, thus, in effect, adding n-1 units of space to the original speed 1/1. In the first edition of this work the displacements here designated positive and negative were called "time displacement" and "space displacement" respectively, to emphasize the fact that the positive displacement represents an increased amount of time in association with one unit of space, while the reverse is true in negative displacement.
Experience has shown, however, that the original terminology tends to be confusing, particularly in that it is frequently interpreted as indicating addition of independent quantities of time or space to the phenomena under consideration, whereas, in fact, it is the speed that is being increased or decreased. As pointed out in Chapter 2, in a universe of motion there is no such thing as physical space or time independent of motion. We can abstract the space aspect of motion mentally, and imagine it existing independently, as a reference system (extension space) or otherwise, but we cannot add or subtract space or time in actual practice except by superimposing a new motion on the motion we wish to alter.

If we were dealing with speed measured from the mathematical zero it would be logical to apply the term positive to an addition to the speed, but where we measure from unity the values increase in both directions, and there is no reason why one increase should be considered any more "positive" than the other. The choice has therefore been made on a convenience basis, and the "positive" designation has been applied to the displacements on the low-speed side of the unit speed datum because these are the displacements of the material system of phenomena. We will find, as we proceed, that the displacements toward higher speeds, where they occur at all in the material sector, do so mainly as negative modifications of the predominantly low-speed motion combinations.

Inasmuch as the units of positive displacement and of negative displacement are simply units of deviation from the natural speed datum, they are additive algebraically. Thus, if there exists a motion in time with a negative speed displacement of n-1 units (equivalent to n units of speed in conventional terms) we can reduce the speed to zero, relative to the natural datum, by adding a motion with a positive speed displacement of n-1 units. Addition of further positive displacement will result in a net speed below unity; that is, a motion in space. But there is no way by which we can alter either the time aspect or the space aspect of the motion independently. The variable in a universe of motion is speed, and the variation occurs only in displacement units. The change in terminology has been made in the hope that it will contribute toward a full realization that what we are dealing with are units of speed, even though, for technical reasons, we cannot call it speed.

In the case of radiation, there is no inherent upper limit to the speed displacement (conventionally measured as frequency), but in actual practice a limit is imposed by the capabilities of the processes that produce the radiation, examination of which will be deferred until after further groundwork has been laid. The range of radiation frequencies is so wide that, except near 1/1, where the steps from n to n + 1 are relatively large, the frequency spectrum is practically continuous. The rotational situation is very different. In contrast to the almost unlimited number of possible vibrational frequencies, the maximum number of units of rotational displacement that can participate in any one combination of rotations is relatively small, for reasons which will appear in the course of the discussion. Furthermore, probability considerations dictate the distribution of the total number of rotational displacement units among the different rotations in each individual case, so that in general there is only one stable combination among the various mathematically possible ways of distributing a given total rotational displacement. This limits the possible rotational combinations that we identify as material atoms and particles to a relatively small series, the successive members of which differ initially by one displacement unit, and at a later stage by two of the single displacement units.

With this understanding of the fundamentals, let us now proceed to an examination of the general characteristics of the combinations of rotational motions.
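Before taking up the rotational combinations, the algebraic additivity of displacements described above can be fixed in mind with a short sketch. It is illustrative only; the sign convention is the one adopted in the text, positive for the low-speed side and negative for the high-speed side.

```python
def net_speed(displacements):
    """Sum displacements algebraically and express the result as a
    speed: net positive displacement d gives 1/(d + 1); net negative
    displacement d gives (d + 1)/1; zero net gives unit speed."""
    net = sum(displacements)
    n = abs(net) + 1
    return f"1/{n}" if net >= 0 else f"{n}/1"

# A motion in time with negative displacement 3 (speed 4/1) is reduced
# to unit speed (zero relative to the natural datum) by adding a
# positive displacement of 3; further positive displacement yields a
# net speed below unity, i.e. a motion in space.
print(net_speed([-3]))      # 4/1
print(net_speed([-3, 3]))   # 1/1
print(net_speed([-3, 5]))   # 1/3
```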
The existence of different rotational patterns is clear from the start, as the rotation can not only take place at different speeds (displacements), but, in a three-dimensional universe, can also take place independently in the different dimensions. As we will see in our investigation, however, some restrictions are imposed by geometry. The photon cannot rotate around the line of vibration as an axis. Such a rotation would be indistinguishable from no rotation at all. But it can rotate around either or both of the two axes perpendicular to the line of vibration and to each other. One such rotation of the one-dimensional photon generates a two-dimensional figure: a disk. Rotation of the disk around the second available axis then generates a three-dimensional figure: a sphere. This exhausts the available dimensions, and no further rotation of the same nature can take place. The basic rotation of the atom or particle is therefore two-dimensional, and, as brought out in Chapter 5, it is in the inward scalar direction. But after the two-dimensional rotation is in existence it is possible to give the entire combination of vibrational and rotational motions a rotation around the third axis, which is also inward from the scalar standpoint, but is opposed to the two-dimensional rotation vectorially. This reverse rotation is optional, as the basic rotation is distributed over all three dimensions, and nothing further is required for stability. A rotating system therefore consists of a photon rotating two-dimensionally, with or without a reverse rotation in the third dimension.

Although the two dimensions of the basic rotation have been treated separately for descriptive purposes, first generating a disk by one rotation, and then a sphere by the second, it should be understood that there are not two one-dimensional rotations; there is one two-dimensional rotation. This distinction has a significant bearing on the properties of the rotational combinations. The combined magnitude of two one-dimensional rotations of n displacement units each is 2n. The magnitude of a two-dimensional rotation in which the displacement is n in each dimension is n². It is not essential that all of the rotations be effective in the physical sense. Unless there is effective rotation in at least one dimension it is meaningless to speak of rotation, as such motion cannot be distinguished from translation. But if there is effective rotation–that is, rotation with a speed differing from unity–in at least one dimension, there can be rotation at unit speed (zero displacement) in the other dimension or dimensions.

The vibrational speed displacement of the basic photon may be either negative (greater than unity) or positive (less than unity). Let us consider the case of a photon with a negative displacement, to which we propose to add a unit of rotational displacement (rotate the photon). Inasmuch as the individual units of vibrational displacement are discrete (that is, they are not tied together in any way), the one applied unit of rotational motion results in rotation of only one of the vibrational units. Because of the lack of any connection between the vibrational units there is no force resisting separation. When the one unit starts moving inward by reason of the rotation it therefore moves away from the remainder of the photon, which continues to be carried outward by the progression of the natural reference system. Irrespective of the number of vibrational units in the photon to which the rotational displacement was added, the compound motion produced by this addition thus contains only the vibrational units that are being rotated.
The remaining vibrational units of the original photon continue as a photon of lower displacement. When a compound motion of this type, rotation of a vibration, is formed, the inward motion due to the rotation replaces the outward motion of the progression of the reference
system. Thus the components of the compound motion are not subject to oppositely directed motions in the manner of the multi-unit rotating photons, and these components do not separate spontaneously.

However, the vibrational displacement of the photon now under consideration is negative. If the rotational displacement applied to this photon is also negative, the displacement units, being units of the same scalar nature, are additive in the same manner as the vibrational units of a photon. Like the photon units, they are easily separated when even a relatively small force is applied, and the rotational displacement is therefore readily transferred from the original photon to some other object, under appropriate conditions. For this reason, combinations of negative vibrational and negative rotational displacements are inherently unstable. On the other hand, if the applied rotational displacement is positive, equal numbers of the positive and negative displacement units neutralize each other. In this case the combination has no net displacement. A motion that does have a net displacement cannot be extracted from such a combination without the intervention of some outside agency. It is simple enough to separate one negative unit from an aggregate of n negative units, but getting one negative unit out of nothing at all is not so easily accomplished. A combination of a negative vibration and a positive rotation (or vice versa) is therefore inherently stable.

All that has been said about additions to a photon with negative displacement applies with equal force, but in the inverse manner, to the addition of rotation to a photon with positive displacement. We therefore arrive at the conclusion that in order to produce stable combinations photons oscillating in time (negative displacement) must be rotated in space (positive displacement), whereas photons oscillating in space must be rotated in time. This alternation of positive and negative displacements is a general requirement for stability of compound motions, and it will play an important part in the theoretical development in the subsequent pages. It should be understood, however, that stability is dependent on the environment. Any combination will break up if the environmental conditions are sufficiently unfavorable. Conversely, there are situations, to be examined later, in which environmental influences create conditions that confer stability on combinations that are normally unstable.

The combinations in which the net rotation is in space (positive displacement) can be identified with the relatively stable atoms and particles of our local environment, and constitute what we will call the material system. For the present we will confine the discussion to the members of this material system, and will leave the inverse type of combination, the cosmic system, as we will call it, for later consideration.

Inasmuch as the oscillating photon is being rotated in two dimensions (the basic positive rotation), one unit of two-dimensional positive displacement is required to neutralize the negative vibrational displacement of the photon, and reduce the net total displacement to zero. Because of its lack of any effective deviation from unit speed (the reference datum) this combination of motions has no observable physical properties, and for that reason it was somewhat facetiously called "the rotational equivalent of nothing" in the first edition. But this understates the significance of the combination.
While it has no effective net total magnitude, its rotational component does have a direction. The idea of a motion that has direction but no magnitude sounds something like a physicist's version of the Cheshire cat, but the zero effective magnitude is a property of the structure as a whole,
while the rotational direction of the two-dimensional motion, which makes the addition of further positive rotational displacement possible, is a property of one component of the total structure. Thus, even though this combination of motions can do nothing itself, it does constitute a base from which something (a material particle) can be constructed that cannot be formed directly from a linear type of motion. We will therefore call it the rotational base.

There are actually two of the rotational bases. The one we have been discussing is the base of the material system. The structures of the cosmic system are constructed from a different base; one that is just the inverse of the material base. In this inverse combination the photon is oscillating in space (positive displacement) and rotating in time (negative displacement).

Successive additions of positive displacement to the rotational base produce the combinations of motions that we identify as the sub-atomic particles and the atoms of the chemical elements. The next two chapters will describe the structures of the individual combinations. Before beginning this description, however, it will be in order to make some general comments about the implications of the theoretical conclusion that the atoms and particles of matter are systems of rotational motions.

One of the most significant results of the new concept of the structure of atoms and particles that has been developed from the postulates of the Reciprocal System is that it is no longer necessary to invoke the aid of spirits or demons or their modern equivalents–mysterious hypothetical forces of a purely ad hoc nature–to explain how the parts of the atom hold together. There is nothing to explain, because the atom has no separate parts. It is one integral unit, and the special and distinctive characteristics of each kind of atom are not due to the way in which separate "parts" are put together, but are due to the nature and magnitude of the several distinct motions of which each atom is composed.

At the same time, this explanation of the structure of the atom tells us why such a unit can expel particles, or disintegrate into smaller units, even though it has no separate parts; how it can act, in some respects, as if it were an aggregate of sub-atomic units, even though it is actually a single integral entity. Such a structure can obviously part with some of its motion, or absorb additional units of motion, without in any way altering the fact that it is a single entity, not a collection of parts. When the pitcher throws a curve ball, it is still a single unit–it is a baseball–even though it now has both a translational motion and a rotational motion, which it did not have while still in his hand. We do not have to worry about what kind of a force holds the rotational "part," the translational "part," and the horsehide-covered "nucleus" together.

There has been a general impression that if we can get particles out of an atom, then there must be particles in atoms; that is, the atom must be constructed of particles. This conclusion seems so natural and logical that it has survived what would ordinarily be a fatal blow: the discovery that the particles which emanate from the atom in the process of radioactivity and otherwise are not the constituents of the atom; that is, they do not have the properties which are required of the constituents. Furthermore, it is now clear that a great variety of particles that cannot be regarded as constituents of normal atoms can be
produced from these atoms by appropriate processes. The whole situation is now in a state of confusion. As Heisenberg commented: "Wrong questions and wrong pictures creep automatically into particle physics and lead to developments that do not fit the real situation in nature."27

It is now apparent that all of this confusion has resulted from the wholly gratuitous, but rarely questioned, assumption that the sub-atomic particles have the characteristics of "parts"; that is, they exist as particles in the structure of the atom, they require something that has the nature of a "force" to keep them in position, and so on. When we substitute motions for parts, in accordance with the findings of the Reciprocal System, the entire situation automatically clears up. Atoms are compound motions, sub-atomic particles are less complex motions of the same general nature, and photons are simple motions. An atom, even though it is a single unitary structure without separate parts, can eject some of its motion, or transfer it to some other structure. If the motion which separates from the atom is translational it reappears as translational motion of some other unit; if it is simple linear vibration it reappears as radiation; if it is a rotational motion of less than atomic complexity it reappears as a sub-atomic particle; if it is a complex rotational motion it reappears as a smaller atom. In any of these cases, the status of the original atom changes according to the nature and magnitude of the motion that is lost.

The explanation of the observed interconvertibility of the various physical entities is now obvious. All of these entities are forms of motion or combinations of different forms, hence any of them can be changed into some other form or combination of motion by appropriate means. Motion is the common denominator of the physical universe.
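The stability rule developed earlier in this chapter lends itself to a compact restatement. The short Python sketch below is offered only as an illustrative aid–the function name and the sign convention are introduced here for the purpose, and form no part of the theoretical development:

# Illustrative restatement of the stability rule of this chapter: a compound
# motion is inherently stable only when its vibrational and rotational
# displacements alternate in sign. Convention (ours, for illustration only):
# -1 = negative displacement (motion in time), +1 = positive (motion in space).

def is_stable(vibration_sign, rotation_sign):
    # Like signs are additive and readily separated; opposite signs can only
    # be pulled apart by an outside agency, so the combination is stable.
    return vibration_sign != rotation_sign

# Material system: photon oscillating in time, rotated in space -- stable.
assert is_stable(-1, +1)
# Cosmic system: photon oscillating in space, rotated in time -- stable.
assert is_stable(+1, -1)
# Negative vibration with negative rotation -- inherently unstable.
assert not is_stable(-1, -1)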

CHAPTER 10

Atoms

In some respects, the combinations of motions with greater rotational displacement, those which constitute the atoms of the chemical elements, are less complicated than those with the least displacement, the sub-atomic particles, and it will therefore be convenient to discuss the structure of these larger units first.

Geometrical considerations indicate that two photons can rotate around the same central point without interference if the rotational speeds are the same, thus forming a double unit. The nature of this combination can be illustrated by two cardboard disks interpenetrated along a common diameter C. The diameter A perpendicular to C in disk a represents one linear oscillation, and the disk a is the figure generated by a one-dimensional rotation of this oscillation around an axis B perpendicular to both A and C. Rotation of a second linear oscillation, represented by the diameter B, around axis A generates the disk b. It is then evident that disk a may be given a second rotation around axis A, and disk b may be given a second rotation around axis B without interference at any point, as long as the rotational speeds are equal.

The validity of the mathematical principles of probability is covered in the fundamental postulates by specifically including them in the definition of "ordinary commutative mathematics," as that term is used in the postulates. The most significant of these principles, so far as the atomic structures are concerned, are that small numbers are more probable than large numbers, and symmetrical combinations are more probable than asymmetrical combinations of the same total magnitude. For a given number of units of net rotational displacement the double rotating system results in lower individual displacement values, and the probability principles give them precedence over single units in which the individual displacements are higher. All rotating combinations with sufficient net total displacement to enable forming double units therefore do so.

To facilitate a description of these objects we will utilize a notation in the form a-b-c, where c is the speed displacement of the one-dimensional reverse rotation, and a and b are the displacements in the two dimensions of the basic two-dimensional rotation. Later in the development we will find that the one-dimensional rotation is connected with electrical phenomena, and the two-dimensional rotation is similarly connected with magnetic phenomena. In dealing with the atomic and particle rotations it will be convenient to use the terms "electric" and "magnetic" instead of "one-dimensional" and "two-dimensional" respectively, except in those cases where it is desired to lay special emphasis on the number of dimensions involved. It should be understood, however, that designation of these rotations as electric and magnetic does not indicate the presence of any electric or magnetic forces in the structures now being described. This terminology has been adopted because it not only serves our present purposes, but also sets the stage for the introduction of electric and magnetic phenomena in a later phase of the development.

Where the displacements in the two magnetic dimensions are unequal, the rotation is distributed in the form of a spheroid. In such cases the rotation which is effective in two dimensions of the spheroid will be called the principal magnetic rotation, and the other the subordinate magnetic rotation. When it is desired to distinguish between the larger and the smaller magnetic rotational displacements, the terms primary and secondary will be used. Where motion in time occurs in the material structures now being discussed, the negative displacement values of this motion will be distinguished by placing them in parentheses. All values not so identified refer to positive displacement (motion in space).

Some questions now arise as to the units in which the displacements should be expressed. As will quickly be seen when we start to identify the individual structures, the natural unit of displacement is not a convenient unit in application to the double rotating systems. The smallest change that can take place in these systems involves two natural units. As stated in Chapter 9, probability considerations dictate the distribution of the total displacement of a combination among the different dimensions of rotation. The possible rotating combinations therefore constitute a series, successive members of which differ by two of the natural one-dimensional units of displacement. Since we will not encounter single units in these atomic structures, it will simplify our calculations if we work with double units rather than the single natural units.
We will therefore define the unit of electric displacement in the atomic structures as the equivalent of two natural one-dimensional displacement units.

On this basis, the position of each element in the series of combinations, as determined by its net total equivalent electric displacement, is its atomic number. For reasons that will be brought out later, half of the unit of atomic number has been taken as the unit of atomic weight.

At the unit level dimensional differences have no numerical effect; that is, 1³ = 1² = 1. But where the rotation extends to greater displacement values a two-dimensional displacement n is equivalent to n² one-dimensional units. If we let n represent the number of units of electric displacement, as defined above, the corresponding number of natural (single) units is 2n, and the natural unit equivalent of a magnetic (two-dimensional) displacement n is 4n². Inasmuch as we have defined the electric displacement unit as two natural units, it then follows that a magnetic displacement n is equivalent to 2n² electric displacement units. This means that the unit of magnetic displacement, the increment between successive values of the two-dimensional rotational displacement, is not a specific magnitude in terms of total displacement. Where the total displacement is the significant factor, as in the position in the series of elements, the magnetic displacement value must be converted to equivalent electric displacement units by means of the 2n² relation. For some other purposes, however, the displacement value in terms of magnetic units does have significance in its own right, as we will see in the pages that follow.

In order to qualify as an atom–a double rotating system–a rotational combination must have at least one effective magnetic displacement unit in each system, or, expressing the same requirement in a different way, it must have at least one effective displacement unit in each of the magnetic dimensions of the combination structure. One positive magnetic (double) displacement unit is required to neutralize the two single negative displacement units of the basic photons; that is, to bring the total scalar speed of the combination as a whole down to zero (on the natural basis). This one positive unit is not part of the effective rotation. Thus, where there is no rotation in the electric dimension, the smallest combination of motions that can qualify as an atom is 2-1-0. This combination can be identified as the element helium, atomic number 2. Helium is a member of a family of elements known as the inert gases, a name that has been applied because of their reluctance to enter into chemical combinations. The structural feature that is responsible for this chemical behavior is the absence of any effective rotation in the electric dimension.

The next element of this type has one additional unit of magnetic displacement. Since the probability factors operate to keep the eccentricity at a minimum, the resulting combination is 2-2-0, rather than 3-1-0. The succeeding increments of displacement similarly go alternately to the principal and subordinate rotations. Helium, 2-1-0, already has one effective displacement unit in each magnetic dimension, and the increase to 2-2-0 involves a second unit in one dimension. As previously indicated, the electric equivalent of n magnetic units is 2n². Unlike the addition of another electric unit, the addition of a magnetic unit is not a simple process of going from 1 to 2. In the case of the electric displacement there is first a single unit, then another single unit for a total of two, another bringing the total to three, and so on. But 2 × 1² = 2, and 2 × 2² = 8. In order to
increase the total electric equivalent of the magnetic displacement from 2 to 8 it would be necessary to add the equivalent of 6 units of electric displacement, and there is no such thing as a magnetic equivalent of 6 electric units. The same situation arises in the subsequent additions, and the increase in magnetic displacement must therefore take place in full 2n² equivalents. Thus the succession of inert gas elements is not 2, 10, 16, 26, 36, 50, 64, as it would be if 2(n+1)² replaced 2n² in the same manner that n+1 replaces n in the electric series, but 2, 10, 18, 36, 54, 86, 118. For reasons which will be developed later, element 118 is unstable, and disintegrates if formed. The preceding six members of this series constitute the inert gas family of elements.

The number of mathematically possible combinations of rotations is greatly increased when electric displacement is added to these magnetic combinations, but the combinations that can actually exist as elements are limited by probability considerations, as noted in Chapter 9. The magnetic displacement is numerically less than the equivalent electric displacement, and is more probable for this reason. Its status as the essential basic rotation also gives it precedence over the electric rotation. Any increment of displacement consequently adds to the magnetic rotation if possible, rather than to the rotation in the electric dimension. This means that the role of the electric displacement is confined to filling in the intervals between the inert gas elements.

On this basis, if all rotational displacement in the material system were positive, the series of elements would start at the lowest possible magnetic combination, helium, and the electric displacement would increase step by step until it reached a total of 2n² units, at which point the relative probabilities would result in a conversion of these 2n² electric units into one additional unit of magnetic displacement, whereupon the building up of the electric displacement would be resumed. This behavior is modified, however, by the fact that electric displacement in ordinary matter, unlike magnetic displacement, may be negative instead of positive. The restrictions on the kinds of motions that can be combined do not apply to minor components of a system of motions of the same type, such as rotations. The net effective rotation of a material atom must be in space in order to give rise to those properties which are characteristic of ordinary matter. It necessarily follows that the magnetic displacement, which is the major component of the total, must be positive. But as long as the larger component is positive, the system as a whole can meet the requirement that the net rotation be in space (positive displacement) even if the smaller component, the electric displacement, is negative.

It is possible, therefore, to increase the net positive displacement a given amount either by direct addition of the required number of positive electric units, or by adding a magnetic unit and then adjusting to the desired intermediate level by adding the appropriate number of negative units. Which of these alternatives will actually prevail is affected to a considerable degree by the conditions that exist in the atomic environment, but in the absence of any bias due to these conditions, the determining factor is the size of the electric displacement, lower displacement values being more probable than higher values.
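The 2n² progression derived above is easily verified numerically. The following Python sketch is an illustrative calculation only–the variable names are introduced here for convenience–stepping through the magnetic combinations 2-1, 2-2, 3-2, 3-3, 4-3, 4-4, 5-4 and accumulating the electric equivalents:

# Each step to a new inert gas adds one magnetic unit, whose electric
# equivalent is 2n², where n is the smaller value of the resulting magnetic
# pair (2-2 and 3-2 each add 2 x 2² = 8; 3-3 and 4-3 each add 18; and so on).
magnetic_pairs = [(2, 1), (2, 2), (3, 2), (3, 3), (4, 3), (4, 4), (5, 4)]

series = [2]                            # helium, 2-1-0, atomic number 2
for a, b in magnetic_pairs[1:]:
    series.append(series[-1] + 2 * min(a, b) ** 2)

print(series)                           # [2, 10, 18, 36, 54, 86, 118]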
In the first half of each group intermediate between two inert gas elements, the electric displacement is minimized if the increase in atomic number (equivalent electric displacement) is accomplished by direct
addition of positive displacement. When n² units have been added, the probabilities are nearly equal, and as the atomic number increases still further, the alternate arrangement becomes more probable. In the latter half of each group, therefore, the increase in atomic number is normally attained by adding one unit of magnetic displacement, and then reducing to the required net total by adding negative electric displacement, eliminating successive units of the latter to move up the atomic series.

By reason of the availability of negative electric displacement as a component of the atomic rotation, an element with a net displacement less than that of helium becomes possible. Adding one negative electric displacement unit to helium produces this element, 2-1-(1), which we identify as hydrogen, thereby, in effect, subtracting one positive electric unit from the equivalent of two units (above the rotational base) that helium possesses. Hydrogen is the first in the ascending series of elements, and we may therefore give it the atomic number 1. The atomic number of any other material element is its net equivalent electric displacement.

Above helium, 2-1-0, we find lithium, 2-1-1, beryllium, 2-1-2, boron, 2-1-3, and carbon, 2-1-4. Since this is an 8-element group, the probabilities are nearly even at this point, and carbon can also exist as 2-2-(4). The elements that follow move up the atomic series by reducing the negative displacements: nitrogen, 2-2-(3), oxygen, 2-2-(2), fluorine, 2-2-(1), and finally the next inert gas, neon, 2-2-0. Another similar 8-element group follows, adding a second magnetic unit in the other magnetic dimension. This carries the series up to another inert gas element, argon, 3-2-0. Table 1 shows the normal displacements of the elements up to, and including, argon.

TABLE 1
THE ELEMENTS OF THE LOWER GROUPS

Displacements       Element        Atomic Number
2-1-(1)             Hydrogen             1
2-1-0               Helium               2
2-1-1               Lithium              3
2-1-2               Beryllium            4
2-1-3               Boron                5
2-1-4, 2-2-(4)      Carbon               6
2-2-(3)             Nitrogen             7
2-2-(2)             Oxygen               8
2-2-(1)             Fluorine             9
2-2-0               Neon                10
2-2-1               Sodium              11
2-2-2               Magnesium           12
2-2-3               Aluminum            13
2-2-4, 3-2-(4)      Silicon             14
3-2-(3)             Phosphorus          15
3-2-(2)             Sulfur              16
3-2-(1)             Chlorine            17
3-2-0               Argon               18
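The entries of Table 1 (and of the tables that follow) can be checked mechanically. In the Python sketch below, offered only as an illustrative aid, the atomic number of an element in the a-b-c notation is computed as the inert-gas total reached by the magnetic rotation a-b, plus the electric displacement c, negative values of c being those shown in parentheses; the min() bookkeeping encodes the same 2n² rule used above:

INERT_GAS_PAIRS = [(2, 1), (2, 2), (3, 2), (3, 3), (4, 3), (4, 4), (5, 4)]

def magnetic_base(a, b):
    # Net equivalent electric displacement of the magnetic rotation a-b:
    # 2 for helium's 2-1, plus 2n² for each subsequent magnetic step.
    z = 2
    for pair in INERT_GAS_PAIRS[1:INERT_GAS_PAIRS.index((a, b)) + 1]:
        z += 2 * min(pair) ** 2
    return z

def atomic_number(a, b, c):
    # c is the electric displacement; negative when shown in parentheses.
    return magnetic_base(a, b) + c

assert atomic_number(2, 1, -1) == 1     # hydrogen, 2-1-(1)
assert atomic_number(2, 1, 4) == 6      # carbon, 2-1-4
assert atomic_number(2, 2, -4) == 6     # carbon's alternate form, 2-2-(4)
assert atomic_number(3, 2, 0) == 18     # argon, 3-2-0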

At element 18, argon, the magnetic displacement has reached a level of two units above the rotational base in each of the magnetic dimensions. In order to increase the rotation in either dimension by an additional unit, a total of 2 × 3², or 18, units of electric displacement is required. This results in a group of 18 elements, which reaches the midpoint at cobalt, 3-2-9, and terminates at krypton, 3-3-0. A second 18-element group follows, as indicated in Table 2.

TABLE 2
THE INTERMEDIATE ELEMENTS

Displacements       Element        Atomic Number
3-2-1               Potassium           19
3-2-2               Calcium             20
3-2-3               Scandium            21
3-2-4               Titanium            22
3-2-5               Vanadium            23
3-2-6               Chromium            24
3-2-7               Manganese           25
3-2-8               Iron                26
3-2-9, 3-3-(9)      Cobalt              27
3-3-(8)             Nickel              28
3-3-(7)             Copper              29
3-3-(6)             Zinc                30
3-3-(5)             Gallium             31
3-3-(4)             Germanium           32
3-3-(3)             Arsenic             33
3-3-(2)             Selenium            34
3-3-(1)             Bromine             35
3-3-0               Krypton             36
3-3-1               Rubidium            37
3-3-2               Strontium           38
3-3-3               Yttrium             39
3-3-4               Zirconium           40
3-3-5               Niobium             41
3-3-6               Molybdenum          42
3-3-7               Technetium          43
3-3-8               Ruthenium           44
3-3-9, 4-3-(9)      Rhodium             45
4-3-(8)             Palladium           46
4-3-(7)             Silver              47
4-3-(6)             Cadmium             48
4-3-(5)             Indium              49
4-3-(4)             Tin                 50
4-3-(3)             Antimony            51
4-3-(2)             Tellurium           52
4-3-(1)             Iodine              53
4-3-0               Xenon               54
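The same atomic_number function from the sketch above confirms the dual notations of the midpoint elements of Table 2:

# Midpoint elements carry two equivalent rotations (see Table 2).
assert atomic_number(3, 2, 9) == atomic_number(3, 3, -9) == 27    # cobalt
assert atomic_number(3, 3, 9) == atomic_number(4, 3, -9) == 45    # rhodium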

The final two groups of elements, Table 3, contain 2 × 4², or 32, members each. The heaviest elements of the last group have not yet been observed, as they are highly radioactive, and consequently unstable in the terrestrial environment. In fact, uranium, element number 92, is the heaviest that exists naturally on earth in any substantial quantities. As we will see later, however, there are other conditions under which the elements are stable all the way up to number 117.

TABLE 3
THE ELEMENTS OF THE HIGHER GROUPS

Displacements       Element           Atomic Number
4-3-1               Cesium                 55
4-3-2               Barium                 56
4-3-3               Lanthanum              57
4-3-4               Cerium                 58
4-3-5               Praseodymium           59
4-3-6               Neodymium              60
4-3-7               Promethium             61
4-3-8               Samarium               62
4-3-9               Europium               63
4-3-10              Gadolinium             64
4-3-11              Terbium                65
4-3-12              Dysprosium             66
4-3-13              Holmium                67
4-3-14              Erbium                 68
4-3-15              Thulium                69
4-3-16, 4-4-(16)    Ytterbium              70
4-4-(15)            Lutetium               71
4-4-(14)            Hafnium                72
4-4-(13)            Tantalum               73
4-4-(12)            Tungsten               74
4-4-(11)            Rhenium                75
4-4-(10)            Osmium                 76
4-4-(9)             Iridium                77
4-4-(8)             Platinum               78
4-4-(7)             Gold                   79
4-4-(6)             Mercury                80
4-4-(5)             Thallium               81
4-4-(4)             Lead                   82
4-4-(3)             Bismuth                83
4-4-(2)             Polonium               84
4-4-(1)             Astatine               85
4-4-0               Radon                  86
4-4-1               Francium               87
4-4-2               Radium                 88
4-4-3               Actinium               89
4-4-4               Thorium                90
4-4-5               Protactinium           91
4-4-6               Uranium                92
4-4-7               Neptunium              93
4-4-8               Plutonium              94
4-4-9               Americium              95
4-4-10              Curium                 96
4-4-11              Berkelium              97
4-4-12              Californium            98
4-4-13              Einsteinium            99
4-4-14              Fermium               100
4-4-15              Mendelevium           101
4-4-16, 5-4-(16)    Nobelium              102
5-4-(15)            Lawrencium            103
5-4-(14)            Rutherfordium         104
5-4-(13)            Hahnium               105
5-4-(12)                                  106
5-4-(11)                                  107
5-4-(10)                                  108
5-4-(9)                                   109
5-4-(8)                                   110
5-4-(7)                                   111
5-4-(6)                                   112
5-4-(5)                                   113
5-4-(4)                                   114
5-4-(3)                                   115
5-4-(2)                                   116
5-4-(1)                                   117
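The construction rule stated earlier–direct addition of positive electric displacement in the first half of each group, and the next magnetic unit with diminishing negative electric displacement in the second half–suffices to generate all of the normal displacements of Tables 1 to 3. The Python sketch below is again illustrative only; it reproduces the first-listed form for each element, with hydrogen handled as the special case below helium:

INERT_GASES = [(2, (2, 1)), (10, (2, 2)), (18, (3, 2)), (36, (3, 3)),
               (54, (4, 3)), (86, (4, 4)), (118, (5, 4))]

def displacements(z):
    # Normal a-b-c displacements of element z, for 1 <= z <= 117.
    if z == 1:
        return (2, 1, -1)           # hydrogen, one negative unit below helium
    for (z_lo, pair_lo), (z_hi, pair_hi) in zip(INERT_GASES, INERT_GASES[1:]):
        if z_lo <= z < z_hi:
            if z - z_lo <= (z_hi - z_lo) // 2:      # first half of the group
                return pair_lo + (z - z_lo,)        # add positive electric units
            return pair_hi + (z - z_hi,)            # second half: negative units
    return None

assert displacements(11) == (2, 2, 1)       # sodium, 2-2-1
assert displacements(17) == (3, 2, -1)      # chlorine, 3-2-(1)
assert displacements(92) == (4, 4, 6)       # uranium, 4-4-6
assert displacements(105) == (5, 4, -13)    # element 105, 5-4-(13)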

For convenience in the subsequent discussion these groups of elements will be identified by the magnetic n value, with the first and second groups in each pair being designated A and B respectively. Thus the sodium group, which is the second of the 8-element groups (n = 2), will be called Group 2B.

At this point it will be appropriate to refer back to this statement that was made in Chapter 9:

The (mathematical) development will begin with nothing more than the series of cardinal numbers and the geometry of three dimensions. By subjecting these to simple mathematical processes, the applicability of which to the physical universe of motion is specified in the fundamental postulates, the combinations of rotational motions that can exist in the theoretical universe will be ascertained, and it will be shown that these rotational combinations that theoretically can exist can be individually identified with the atoms of the chemical elements and the sub-atomic particles that are observed to exist in the physical universe. A unique group of numbers representing the different rotational components will be derived for each of these combinations.

A review of the manner in which the figures presented in Tables 1 to 3 were derived will show that this commitment, so far as it applies to the elements, has been carried out in full. This is a very significant accomplishment. Both the existence of a series of theoretical elements identical with the observed series of chemical elements, and the numerical values which theoretically characterize each individual element have been derived from the general
properties of mathematics and geometry, without making any supplementary assumptions or introducing any numerical values specifically applicable to matter. The possibility that the agreement between the series of elements thus derived and the known chemical elements could be accidental is negligible, and this derivation is, in itself, a conclusive proof that the atoms of matter are combinations of motions, as asserted by the Reciprocal System of theory.

But this is only the beginning of a vast process of mathematical development. The numerical values at which we have arrived, the atomic numbers and the three displacement values for each element, now provide us with the basis from which we can derive the quantitative relations in the areas that we will examine. The behavior characteristics, or properties, of the elements are functions of their respective displacements. Some are related to the total net effective displacement (equal to the atomic number in the combinations thus far discussed), some are related to the electric displacement, others to the magnetic displacement, while still others follow a more complex pattern. For instance, valence, or chemical combining power, is determined by either the electric displacement or one of the magnetic displacements, while the inter-atomic distance is affected by both the electric and magnetic displacements, but in different ways. The manner in which the magnitudes of these properties for specific elements and compounds can be calculated from the displacement values has been determined for many properties and for many classes of substances. These subjects will be considered individually in the chapters that follow.

One of the most significant advances toward an understanding of the relations between the structures of the different chemical elements and their properties was the development of the periodic table by Mendeleeff in 1869. In this diagram the elements are arranged horizontally in periods and vertically in groups, the order within the period being that of the atomic number (approximately defined in the original work by the atomic weights). When the elements are correctly assigned in the periods, those in the vertical groups are quite similar in their properties. On comparing the periodic table with the rotational characteristics of the elements, as tabulated in this chapter, it is evident that the horizontal periods reflect the magnetic rotational displacement, while the vertical groups represent the electric rotational displacement. In revising the table to take advantage of the additional information derived from the Reciprocal System of theory we may therefore replace the usual group and period numbering by the more meaningful displacement values. When this is done it is apparent that a further revision of the tabular arrangement is required in order to put all of the elements into their proper positions.

Mendeleeff's original table included nine vertical groups, beginning with the inert gases, Group 0, and ending with a group in which the three elements iron, cobalt, and nickel, and the corresponding elements in the higher periods, were all assigned to a single vertical position. In the more modern versions of the table the number of vertical groups has been expanded to avoid splitting each of the longer periods into two sub-periods, as was done by Mendeleeff.
One of the most popular of these revised versions utilizes 18 vertical groups, and puts 15 elements of each of the last two periods into one of these 18 positions in order to accommodate the full number of elements. In the light of the new information now available, it can be seen that Mendeleeff based his
arrangement on the relations existing in the 8-element rotational groups, 2A and 2B in the notation used in this work, and forced the elements of the larger groups into conformity with this 8-element pattern. The modern revisers have made a partial correction by setting up their tables on the basis of the 18-element rotational groups, 3A and 3B, leaving blank spaces where the 8-element groups have no counterparts of the 18-element values. But these tables still retain a part of the original distortion, as they force the members of the 32-element groups into the 18-element pattern.

To construct a complete and accurate table, it is only necessary to carry the revision procedure one step farther, and set up the table on the basis of the largest of the magnetic groups, the 32-element groups 4A and 4B. For some purposes a simple extension of the current versions of the table to the full 32 position width necessary to accommodate Groups 4A and 4B is probably all that is needed. On the other hand, the useful chemical information displayed by the table is confined mainly to the elements with electric displacements below 10, and separating the central elements of the two upper groups from the main portion of the table, as in the conventional arrangements, has considerable merit. The particular elements that are thus separated on the basis of the electric displacement are not the same ones that are treated separately in the conventional tables, but the general effect is much the same. When the table is thus divided into two sections, it also appears that there are some advantages to be gained by a vertical, rather than a horizontal, arrangement, and the revised table, Fig. 1, has been set up on this basis.

The new concept of "divisions," which is emphasized in this table, will be explained in Chapter 18. Inasmuch as carbon and silicon play both positive and negative roles rather freely, they have each been assigned to two positions in the table, but hydrogen, which is usually shown in two positions in the conventional tables, is necessarily negative on the basis of the principles that have been developed in this work and is only shown in one position. The aspects of its chemical behavior that have led to its classification with the electropositive elements will also be explained in Chapter 18.

Figure 1
The Revised Periodic Table of the Elements

                 Magnetic Displacement                      Electric              Magnetic
Div.   2-1    2-2    3-2    3-3    4-3    4-4             Displacement   Div.   4-3      4-4

 I     3 Li   11 Na  19 K   37 Rb  55 Cs  87 Fr             1
       4 Be   12 Mg  20 Ca  38 Sr  56 Ba  88 Ra             2
       5 B    13 Al  21 Sc  39 Y   57 La  89 Ac             3     10     II    64 Gd    96 Cm
 II    6 C    14 Si  22 Ti  40 Zr  58 Ce  90 Th             4     11           65 Tb    97 Bk
                     23 V   41 Nb  59 Pr  91 Pa             5     12           66 Dy    98 Cf
                     24 Cr  42 Mo  60 Nd  92 U              6     13           67 Ho    99 Es
                     25 Mn  43 Tc  61 Pm  93 Np             7     14           68 Er   100 Fm
                     26 Fe  44 Ru  62 Sm  94 Pu             8     15           69 Tm   101 Md
                     27 Co  45 Rh  63 Eu  95 Am             9     16           70 Yb   102 No
                     28 Ni  46 Pd  78 Pt  110              (8)   (15)    III   71 Lu   103 Lr
                     29 Cu  47 Ag  79 Au  111              (7)   (14)          72 Hf   104 Rf
                     30 Zn  48 Cd  80 Hg  112              (6)   (13)          73 Ta   105 Ha
                     31 Ga  49 In  81 Tl  113              (5)   (12)          74 W    106
                                                           (4)   (11)          75 Re   107
                                                           (3)   (10)          76 Os   108
                                                           (2)    (9)          77 Ir   109

Div.   2-1    2-2    3-2    3-3    4-3    4-4    5-4      Electric Displacement

 III          6 C    14 Si  32 Ge  50 Sn  82 Pb  114       (4)
              7 N    15 P   33 As  51 Sb  83 Bi  115       (3)
              8 O    16 S   34 Se  52 Te  84 Po  116       (2)
       1 H    9 F    17 Cl  35 Br  53 I   85 At  117       (1)
 IV    2 He   10 Ne  18 Ar  36 Kr  54 Xe  86 Rn             0
In the original construction of the periodic table the known properties of certain elements were combined with the atomic number sequence to establish the existence of the relations between the elements of the various periods and groups, and thereby to predict previously undetermined properties, and even the existence of some previously unknown elements. The table thus added significantly to the chemical knowledge of the time. In this work, however,
the revised table is not being presented as an addition to the information contained in the preceding pages, but merely as a convenient graphic method of expressing some portions of that information. Everything that can be learned from the table has already been set forth in more detailed form, verbally and mathematically, in this and the earlier chapters. Some of the implications of this information, such as its application to the property of valence, will have further consideration later.

CHAPTER 11

Sub-Atomic Particles

While the series of elements contains no combinations of motions with net positive displacement less than that of hydrogen, 2-1-(1), this does not mean that such combinations are non-existent. It merely means that they do not have sufficient speed displacement to form two complete rotating systems, and consequently do not have the properties which distinguish the rotational combinations that we call atoms. These less complex combinations of motion can be identified as the sub-atomic particles. As is evident from the foregoing, these particles are not constituents of atoms, as current scientific thought assumes them to be. They are structures of the same general nature as the atoms of the elements, but their net total displacement is below the minimum necessary to form the complete atomic structure. They may be characterized as incomplete atoms.

The term "sub-atomic" is currently applied to these particles with the implication that the particles are, or can be, building blocks from which atoms are constructed. Our new findings make this sense of the term obsolete, but the name is still appropriate in the sense of a system of motions of a lower degree of complexity than atoms. It will therefore be retained in this work, and applied in this modified sense. The term "elementary particle" must be discarded. There are no "elementary" particles in the sense of basic units from which other structures can be formed. A particle is smaller and less complex than an atom, but it is by no means elementary. The elementary unit is the unit of motion.

The theoretical characteristics of the sub-atomic particles, as derived from the postulates of the Reciprocal System, have been given considerable additional study since the date of the last previous publication in which they were discussed, and there has been a significant increase in the amount of information that is available with respect to these objects, including the theoretical discovery of some particles that are more complex than those described in the first edition. Furthermore, we are now in a position to examine the structure and behavior of the cosmic sub-atomic particles in greater depth (in the later chapters). In order to facilitate the presentation of this increased volume of information, a new system of representing the dimensional distribution of the rotation has been adopted.

This means, of course, that we are now using one system of notation for the rotation of the elements, and a different system to represent the rotations of the same nature when we are dealing with the particles. At first glance, this may seem to be introducing an unnecessary
complication, but the truth is that as long as we want to take advantage of the convenience of using the double displacement unit in dealing with the elements, while we must use the single unit in dealing with the particles, we are necessarily employing two different systems, whether they look alike or not. In fact, lack of recognition of this difference has led to some of the confusion that we now wish to avoid. It appears, therefore, that as long as two different systems of notation are necessary for convenient handling of the data, we might as well set up a system for the particles in a manner that will best serve our purposes, including being distinctive enough to avoid confusion.

The new notation used in this edition will indicate the displacements in the different dimensions, as in the first edition, and will express them in single units, as before, but it will show only effective displacements, and will include a letter symbol that will specifically designate the rotational base of the particle. It is necessary to take the initial non-effective rotational unit into consideration in dealing with the elements because of the characteristics of the mathematical processes that we will utilize. This is not true in the case of the sub-atomic particles, and as long as the atomic (double) notation cannot be used in any event, we will show only the effective displacements, and will precede them with either M or C to indicate whether the rotational base of the combination is material or cosmic. This will have the added advantage of clearly indicating that the rotational values in any particular case are being expressed in the new notation.

This change in the symbolic representation of the rotations, and the other modifications of terminology that we are making in this edition, may introduce some difficulties for those who have already become accustomed to the manner of presentation in the earlier works. It seems advisable, however, to take advantage of any opportunities for improvement in this respect that may be recognized in the present early stage of the theoretical development, as improvements of this nature will become progressively less feasible as time goes on and existing practices become resistant to change.

On the new basis, the material rotational base is M 0-0-0. To this base may be added a unit of positive electric displacement, producing the positron, M 0-0-1, or a unit of negative electric displacement, in which case the result is the electron, M 0-0-(1). The electron is a unique particle. It is the only structure constructed on a material base, and therefore stable in the local environment, that has an effective negative displacement. This is possible because the total rotational displacement of the electron is the sum of the initial positive magnetic unit required to neutralize the negative photon displacement (not shown in the structural notation) and the negative electric unit. As a two-dimensional motion, the magnetic unit is the major component of the total rotation, even though its numerical magnitude is no greater than that of the one-dimensional electric rotation. The electron thus complies with the requirement that the net total rotation of a material particle must be positive.

As brought out earlier, adding motion with negative displacement has the effect of adding more space to the existing physical situation, whatever it may be, and the electron is therefore, in effect, a rotating unit of space.
We will see later that this fact plays an important part in many physical phenomena. One immediate, and very noticeable, result is that electrons are plentiful in the material environment, whereas positrons are extremely rare. On the basis of the same considerations that apply to the electron, we can regard the
positron as essentially a rotating unit of time. As such, it is readily absorbed into the material system of combinations, the constituents of which are predominantly time structures; that is, rotational motions with net positive displacement (speed = 1/t). The opportunities for utilizing the negative displacement of the electrons in these structures, on the contrary, are very limited.

If the addition to the rotational base is a magnetic unit rather than an electric unit, the result could be expressed as M 1-0-0. It now appears, however, that the notation M ½-½-0 is preferable. Of course, half units do not exist, but a unit of two-dimensional rotation obviously occupies both dimensions. To recognize this fact we will have to credit one half to each. The ½-½ notation also ties in better with the way in which this system of motions enters into further combinations. We will call this M ½-½-0 particle the massless neutron, for reasons which will appear shortly.

At the unit level in a single rotating system, the magnetic and electric units are numerically equal; that is, 1² = 1. Addition of a unit of negative electric displacement to the M ½-½-0 combination of motions, the massless neutron, therefore produces a combination with a net total displacement of zero. This combination, M ½-½-(1), can be identified as the neutrino.

In the preceding chapter, the property of the atoms of matter known as atomic weight, or mass, was identified with the net positive three-dimensional rotational displacement (speed) of the atoms. This property will be discussed in more detail in the next chapter, but at this time we will note that the same relationship also applies to the sub-atomic particles; that is, these particles have mass to the extent that they have net positive rotational displacement in three dimensions. None of the particles thus far considered meets this requirement. The electron and the positron have effective rotation in one dimension; the massless neutron in two. The neutrino has no net displacement at all. The sub-atomic rotational combinations thus far identified are therefore massless particles.

By combination with other motions, however, the displacement in one or two dimensions can attain the status of a component of a three-dimensional displacement. For instance, a particle may acquire a charge, which is a motion of a kind that will be examined later in the development, and when this happens, the entire displacement, both of the charge and of the original particle, will then manifest itself as mass. Or a particle may combine with other motions in such a way that the displacement of the massless particle becomes a component of the three-dimensional displacement of the combination structure.

Addition of a unit of positive, instead of negative, electric displacement to the massless neutron would produce M ½-½-1, but the net total displacement of this combination is 2, which is sufficient to form a complete double rotating system, an atom, and the greater probability of the double structure precludes the existence of the M ½-½-1 combination, other than momentarily. The same probability considerations likewise exclude the two-unit magnetic structure M 1-1-0, and its positive derivative M 1-1-1, which have net displacements of 2 and 3 respectively. However, the negative derivative, M 1-1-(1), formed in practice by the addition of a neutrino, M ½-½-(1), to a massless neutron, M ½-½-0, can exist as a particle, as its net
total displacement is only one unit; not enough to make the double structure mandatory. This particle can be identified as the proton.

Here we have an illustration of the way in which particles that are individually massless, because they have no three-dimensional rotation, combine to produce a particle with an effective mass. The massless neutron rotates only two-dimensionally, while the neutrino has no net rotation. But by adding the two, a combination with effective rotation in all three dimensions is produced. The resulting particle, the proton, M 1-1-(1), has one unit of mass.

At the present, rather early, stage of the theoretical development it is not possible to make a precise evaluation of the probability factors and other influences that determine whether or not a theoretically feasible rotational combination will actually be able to exist under a given set of circumstances. The information now available indicates, however, that any material type combination with a net displacement less than 2 should be capable of existing as a particle in the local environment. In actual practice none of the single system combinations identified in the preceding paragraphs has been observed, and there is considerable doubt as to whether there is any way whereby they can be observed, other than through indirect processes which enable us to infer their existence. The neutrino, for example, is "observed" only by means of the products of certain events in which this particle is presumed to participate. The electron, the positron, and the proton have been observed only in the charged state, not in the uncharged condition, which constitutes the basic state of all of the rotational combinations thus far discussed. Nevertheless, there is sufficient evidence to indicate that all of these uncharged structures do, in fact, exist, and play significant roles in physical processes. This evidence will be forthcoming as we continue the theoretical development.

In the previous publications, the M ½-½-0 combination (1-1-0 in the notation utilized in those works) was identified as the neutron, but it was noted that in some physical processes, such as cosmic ray decay, the magnetic displacement that could be expected to be ejected in the form of neutrons is actually transferred in massless form. Since the observed neutron is a particle of unit atomic weight, it was at that time concluded that in these particular instances the neutrons act as combinations of neutrinos and positrons, both massless particles. On this basis, the neutron plays a dual role, massless under some conditions, but possessing unit mass under other circumstances.

Further investigation, centering mainly on the secondary mass of the sub-atomic particles, which will be discussed in Chapter 13, has now disclosed that the observed neutron is not the single effective magnetic rotation with net displacements M ½-½-0, but a more complex particle of the same net displacement, and that the single magnetic displacement is massless. It is no longer necessary to conclude that the same particle acts in two different ways. Instead, there are two different particles. The explanation is that the new findings have revealed the existence of a type of structure intermediate between the single rotating systems of the massless particles and the complete double systems of the atoms. In these intermediate structures there are two rotating systems, as in the atoms of the elements, but only one of these systems has a net effective displacement.
The rotation in this system is that of the proton, M 1-1-(1). In the second
system there is a neutrino type rotation. The massless rotations of the second system can be either those of the material neutrino, M ½-½-(1), or those of the cosmic neutrino, C ½-½-1. With the material neutrino rotation the combined displacements are M 1½-1½-(2). This combination is the mass one isotope of hydrogen, a structure identical with that of the normal mass two atom (deuterium), M 2-2-(2), or M 2-1-(1) in the atomic notation, except that it has one less unit of magnetic rotation, and therefore one less unit of mass.

When the cosmic neutrino rotation is added to the proton, the combined displacements are M 2-2-0, the same net total as that of the single magnetic rotation. This theoretical particle, the compound neutron, as we will call it, can be identified as the observed neutron.

The identification of the separate rotations of these intermediate type structures with the rotations of the neutrinos and protons should not be interpreted as meaning that neutrinos and protons actually exist as such in the combination structures. What is meant is that one of the component rotations that constitutes the compound neutron, for instance, is the same kind of a rotation as that which constitutes a proton when it exists separately.

Inasmuch as the net total displacement of the compound neutron is identical with that of the massless neutron, those aspects of the behavior of the particles–properties, as they are called–which are dependent on the net total displacement are the same for both. Likewise, those properties that are dependent on total magnetic displacement, or total electric displacement, are identical. But there are other properties that are related to those features of the particle structure in which the two neutrons differ. The compound neutron has an effective unit of three-dimensional displacement in the rotating system with the proton type rotation, and it therefore has one unit of mass. The massless neutron, on the other hand, has no effective three-dimensional displacement, and therefore no mass. The two neutrons also differ in that, although it is (or at least, as we will see in Chapter 17, may be) a still unobserved particle, the massless neutron is theoretically stable in the material environment, whereas the life of the compound neutron is short because of the "foreign" nature of the rotation in the second system. After about 15 minutes, on the average, the compound neutron ejects the second rotating system in the form of a cosmic neutrino, and the particle reverts to the proton status.

The structures of the sub-atomic particles of the material system may now be summarized as follows:

Massless particles
    M  0-0-0                        rotational base
    M  0-0-1                        positron
    M  0-0-(1)                      electron
    M  ½-½-0                        massless neutron
    M  ½-½-(1)                      neutrino

Particles with mass
    M+ 0-0-1                        charged positron
    M- 0-0-(1)                      charged electron
    M  1-1-(1)                      proton
    M+ 1-1-(1)                      charged proton
    M  1-1-(1)  C (½)-(½)-1         compound neutron
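The tabulation above can be restated numerically. In the Python sketch that follows–an illustration only, with data structures of our own choosing–the net total displacement of a particle is simply the sum of its three effective displacements (half units included), and the mass criterion of this chapter is read as effective rotation in all three dimensions combined with a positive net total:

# Effective displacements (a, b, c) in the new particle notation; the charged
# states and the two-system compound neutron are omitted for simplicity.
PARTICLES = {
    "rotational base":  (0, 0, 0),
    "positron":         (0, 0, 1),
    "electron":         (0, 0, -1),
    "massless neutron": (0.5, 0.5, 0),
    "neutrino":         (0.5, 0.5, -1),
    "proton":           (1, 1, -1),
}

def net_displacement(d):
    return sum(d)

def has_mass(d):
    # Mass requires net positive rotational displacement in three dimensions:
    # every dimension effective, and a positive net total.
    return all(x != 0 for x in d) and net_displacement(d) > 0

assert net_displacement(PARTICLES["neutrino"]) == 0    # no net displacement
assert not has_mass(PARTICLES["massless neutron"])     # effective in two dims only
assert not has_mass(PARTICLES["electron"])             # effective in one dim only
assert has_mass(PARTICLES["proton"])                   # one unit of mass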

CHAPTER 12

Basic Mathematical Relations

It was pointed out in the introductory chapters that when we postulate a universe composed entirely of motion, every entity or phenomenon that exists in this universe is either a motion, a combination of motions, or a relation between motions. The discussion thus far has been addressed mainly to an examination of the primary features of the possible motions, and certain of the combinations of these motions. At this point it will be advisable to consider some of the basic kinds of relations that exist between motions.

Inasmuch as motion in general is defined as a relation between space and time, expressed symbolically by s/t, all of the different kinds of motions, and the relations between motions, can be expressed in space-time terms. Such an analysis into space and time components will be particularly helpful in putting the various physical relationships into the proper perspective, and our first objective in the field we are now entering will therefore be to establish the space-time equivalents of the various quantities that constitute the so-called "mechanical" system. Consideration of the analogous quantities of the electrical system will be deferred until we are ready to begin an examination of electrical phenomena.

One set of these mechanical quantities is customarily expressed in velocity terms, and it presents no problems. One-dimensional velocity is, by definition, s/t. It follows that two-dimensional and three-dimensional velocity is s²/t² and s³/t³ respectively. Acceleration, the time rate of change of one-dimensional velocity, is s/t².

In addition to these quantities which express motion as velocities (or speeds), there is also a set of quantities which are fundamentally based on resistance to movement, although in some applications this basic significance is obscured by other factors. The objects which resist movement are atoms and particles of matter: three-dimensional combinations of motions. In a universe of motion, where nothing exists but motion, the only thing that can resist change of motion is motion. The particular motion that resists any change in the motion of an atom is the inherent motion of the atom itself, the motion that makes it an atom. Furthermore, only a three-dimensional motion, or motion that is automatically distributed over three dimensions, is able to offer effective resistance, as any vacant dimension permits motion to take place without hindrance. The magnitude of the resistance can be expressed in terms of the quantity required to eliminate the effective existing motion; that is, to reduce this motion to unity in the conventional reference system. This is the inverse of the motion of the atom, s³/t³, and the
resistance to motion, or inertia, is therefore t³/s³. In more general application, inertia is known as mass.

Inasmuch as current physical theory recognizes gravitation and inertia as phenomena of a quite different character, the equivalence of gravitational and inertial mass, which has been experimentally demonstrated to the almost incredible accuracy of less than one part in 10¹¹, is regarded as very significant, although there is considerable difference of opinion as to what that significance actually is. As expressed by Clifford M. Will, "the theoretical interpretation of the Eötvös experiment (which demonstrates the equivalence) has varied."57 Will asserts that it is now believed that the results of this experiment rule out all non-metric theories of gravitation (he defines metric theories as those "in which gravitation can be treated as being synonymous with the curvature of space and time"). After the theorists have arrived at such a far-reaching conclusion on the basis of what Will admits is no more than a "conjecture," it comes as something of an anticlimax when the Reciprocal System reveals that nothing of an esoteric nature is involved. Gravitation is a motion, but it can manifest itself either directly as motion or inversely as resistance to another motion.

Multiplying mass, t³/s³, by velocity, s/t, we obtain momentum, t²/s², the reciprocal of two-dimensional velocity. Another multiplication by velocity, s/t, gives us energy, t/s. Energy, then, is the reciprocal of velocity. When one-dimensional motion is not restrained by opposing motion (force) it manifests itself as velocity; when it is so restrained it manifests itself as potential energy. Kinetic energy is merely "energy in transit," so to speak. It is a measure of the energy that has been used to produce the velocity of a mass (½mv² = ½ t³/s³ × s²/t² = ½ t/s), and can be extracted for other use by terminating the motion (velocity).

This explanation of the nature of energy should be of some assistance to those who are still having some difficulty with the concept of scalar motion. Both speed and energy are scalar measures of motion. But on our side of the unit speed boundary, the low-speed side, where all motion is in space, speed can be represented in our conventional spatial system of reference because it causes a change of position, inward or outward, in space, whereas energy cannot be so represented. On the high-speed side of the boundary, the relations are inverted. There all motion is in time, and the measure of that motion, the energy, t/s, the inverse of speed, s/t, can be represented in a stationary temporal reference system, whereas speed is neither inward nor outward from the time standpoint, and cannot be represented in the temporal coordinate system. Here is the reason for the purely scalar nature of any increment of speed beyond the unit level, such as those discussed in Chapter 8. The added speed does have a direction, but it is a direction in time, and it has no vectorial effect in a spatial system of reference. We will find this very significant when we undertake an examination of some of the recently discovered high-speed astronomical objects in Volume II.

Force, which is defined as the product of mass and acceleration, becomes t³/s³ × s/t² = t/s². Acceleration and force are thus inverse quantities, in the sense in which that term is generally used in this work; that is, they are identical except that space and time are

interchanged. They are not inverse in the mathematical sense, as their product is not equal to unity. One special type of force that is of particular interest is the gravitational force, that which the aggregates of matter appear to exert on each other by reason of there motions inward in space. In this case, the mathematical expression F = kmm'/d² by which the force is ordinarily calculated is quite different from the general force equation F= ma. When taken at their face value, these two expressions are clearly irreconcilable. If gravitational force is actually a force, even a force of the ―as if,‖ variety, it cannot be proportional to the product of two masses (that is, to m2) when force in general is proportional to the first power of the mass. There is an obvious contradiction here. Most of the other common quantities of the mechanical system can be reduced to spacetime terms without any complications. For example: Impulse, the product of force and time, has the same dimensions as momentum. Ft= t/s² x t = t²/s² Both work and torque are the products of force and distance, and have the same dimensions as energy. Fs= t/s² x s= t/s Pressure is force per unit area. F/s² = t/s x I/s² = t/s4 Density is mass per unit volume. m/s³ = t³/s³ x 1/s³ = t³/s6 Viscosity is mass per unit length per unit time. m x 1/s x 1/t = t³/s³ x 1/s x 1/t = t²/s4 Surface tension is force per unit length. F/s = t/s² x 1/s = t/s³ Power is work per unit time. W/t = t/s x 1/t = 1/s All of the established relations in the field of mechanics have the same dimensional consistency on the basis of these space-time dimensions as in the conventional forms, since the mass terms in the equations are, in all cases, balanced by derivatives of mass on the opposite side of the equation. The numerical values in these equations likewise retain the same relationships, as all that we have done, from this standpoint, is to change the size of the unit in which the quantity of mass is expressed. What has been accomplished, then, is to express mass in terms of the components of motion. Since mechanics deals

only with space, time, and mass, it follows that, so far as mechanics is concerned, by reducing mass to motion we have confirmed the validity of the basic postulate that the physical universe is composed entirely of motion. This is a very significant point. The concept of the nature of the physical universe on which conventional physics is based, the concept of a universe of matter existing in a framework provided by space and time, identifies matter as a fundamental quantity. The results of this present work now show that, in the physical field that is the most completely developed and understood, the fundamental entity is motion, not matter. Furthermore, it is now possible to see why the common denominator of the universe has to be motion; why it could not be anything else. It has to be something to which all of the mechanical quantities can be reduced (and all other physical quantities as well, but for the present we are examining the mechanical relations). The only entity that meets these requirements is the simple relationship between space and time that we are defining as motion. Motion is the common denominator of the field of mechanics. It still remains to be established that motion is the common denominator of the entire universe, but the demonstration that all of the quantities with which mechanics deals, including mass , can be reduced to motion creates a strong presumption that when the more complex phenomena in other fields are equally well understood they will also be found to be reducible to motion. The development of theory in the subsequent pages of this and the volumes to follow will show that this logical expectation is realized, and that all physical phenomena and entities can, in fact, be reduced to motion. The application of the Reciprocal System of theory to mechanics throws a significant light on the relation of this theoretical system to conventional scientific thought. It was asserted in Chapter 6 that the concept of a universe of motion, on which the new theoretical system is based, is ―just the kind of a conceptual alteration that is needed to clear up the existing physical situation: one which makes drastic changes where such changes are required, but leaves the empirically determined relations of our everyday experience essentially untouched.‖ Here, in application to a field in which the entire body of knowledge is a network of ―empirically determined relations,‖ the validity of this assertion is dramatically demonstrated. The only change that is found to be necessary in mechanics is to recognize the fact that mass is reducible to motion. Otherwise, the entire structure of mechanical theory is incorporated into the Reciprocal System just as it stands. As will be shown in the pages that follow, the same is true in other fields to the extent that the prevailing ideas in those fields are, like the principles of mechanics, solidly based on empirically determined facts. But where the prevailing ideas are based on assumptions–"free inventions of the human mind,‖ in Einstein’s words–the development of the theory of a universe of motion now shows that most of these invented ideas are erroneous, in part if not in their entirety. The Reciprocal System diverges from current scientific thought only in those respects where current theory has been led astray by erroneous assumptions. As indicated earlier, the phenomena involved are mainly those not accessible to direct apprehension, primarily the phenomena of the very small, the very large, and the very fast. 
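Since these space-time reductions amount to nothing more than adding exponents, they can be checked mechanically. The following minimal Python sketch (an illustrative aid, not part of the original text; the tuple representation is simply a bookkeeping convenience) encodes each quantity as its net (space, time) exponents and verifies the reductions listed above:

    # Each quantity is a pair (a, b) representing s^a t^b.
    def mul(*quantities):
        return tuple(sum(exps) for exps in zip(*quantities))

    SPEED = (1, -1)                   # s/t
    ACCELERATION = (1, -2)            # s/t²
    MASS = (-3, 3)                    # t³/s³, the inverse of s³/t³

    FORCE = mul(MASS, ACCELERATION)
    assert FORCE == (-2, 1)           # t/s²

    ENERGY = mul(MASS, SPEED, SPEED)  # ½mv², apart from the numerical factor
    assert ENERGY == (-1, 1)          # t/s

    MOMENTUM = mul(MASS, SPEED)
    assert MOMENTUM == (-2, 2)        # t²/s²

    IMPULSE = mul(FORCE, (0, 1))      # force x time
    assert IMPULSE == MOMENTUM

    PRESSURE = mul(FORCE, (-2, 0))    # force per unit area
    assert PRESSURE == (-4, 1)        # t/s⁴

    POWER = mul(ENERGY, (0, -1))      # work per unit time
    assert POWER == (-1, 0)           # 1/s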
In all of the space-time expressions of physical quantities that were derived in the preceding pages of this chapter, the dimensions of the denominator of the fraction are either equal to or greater than the dimensions of the numerator. This is another result of the discrete unit postulate, which prevents any interactions from being carried beyond the unit level. Addition of speed displacement to motion in space reduces the speeds; the atomic rotation can take place only in the negative scalar direction, and so on. The same principle applies to the dimensions of physical quantities, and the dimensions of the numerator of the space-time expression of any real physical quantity cannot be greater than those of the denominator. Purely mathematical relations that violate this principle can, of course, be constructed, but according to the theoretical findings they have no real physical significance. For example, the reciprocal of viscosity is known as fluidity, and in certain applications it is more convenient for purposes of calculation to work with fluidity values rather than viscosity values. But the space-time expression for fluidity is s⁴/t², and on the basis of the principle just stated, we must conclude that viscosity is the quantity that has a real physical existence.

The most notable of the quantities excluded by this dimensional principle is "action." This is the product of energy, t/s, and time t, and in space-time terms it is t²/s. Thus it is not admissible as a real physical quantity. In view of the prominent place which it occupies in some physical areas, this conclusion that it has no actual physical significance may come as quite a surprise, but the explanation can be seen if we examine the most familiar of the conventional applications of action: its use in the expression of Planck's constant. The equation connecting the energy of radiation with the frequency is E = hν, where h is Planck's constant. In order to be dimensionally consistent with the other quantities in the equation this constant must be expressed in terms of action. It is clear, however, from the explanation of the nature of the photon of radiation that was developed in Chapter 4, that the so-called "frequency" is actually a speed. It can be expressed as a frequency only because the space that is involved is always a unit magnitude. In reality, the space dimension belongs with the frequency, not with the Planck constant. When it is thus transferred, the remaining dimensions of the constant are t²/s², which are the dimensions of momentum, and are the reversing dimensions that are required to convert speed s/t to energy t/s. In space-time terms, the equation for the energy of radiation is t/s = t²/s² x s/t.

Similar situations have developed in other cases where dimensions have been improperly assigned in current practice. The energy of rotation, for instance, is commonly expressed as ½Iω², where I is the moment of inertia, and ω is the angular velocity. The moment of inertia is the product of the mass and the square of the distance: I = ms² = t³/s³ x s² = t³/s. This result shows that the moment of inertia is an artificial construct without physical significance. The important part that it plays in the expression for rotational energy may seem inconsistent with this conclusion, but again the explanation is that the space magnitude has been improperly assigned. It belongs with the velocity term, not with the mass term. When it is so transferred, the moment of inertia is eliminated, and the rotational energy equation reverts to the normal kinetic form E = ½mv². The equation in its usual form is merely a mathematical convenience, and does not reflect the actual physical situation.

In addition to the kinds of relations that have been discussed so far in this chapter, where the relations themselves are familiar, and only the analysis into space and time components is new, there are other types of physical relations that are peculiar to the universe of motion. At this time we will want to examine two of these: the limitations on unidirectional motion, and the relations between motion in space and motion in time.

The translational and vibrational speeds with which we have been mainly concerned thus far are speeds attained by means of directional reversals, and their magnitudes are not subject to any limits other than those arising from the finite capabilities of the originating processes. Rotation, however, is unidirectional from the scalar standpoint, and unidirectional magnitudes are limited by the discrete unit postulate. On the basis of this postulate, the maximum possible one-dimensional unidirectional speed is one net displacement unit. However, the atom rotates in the inward scalar direction, and inward motion necessarily takes place in opposition to the omnipresent outward motion of the natural reference system. Two inward displacement units are therefore required in order to reach the limit of one net unit. These two units extend from unity in the positive scalar direction (the positive zero, in terms of the natural system) to unity in the negative scalar direction (the negative zero), and they constitute the maximum for any one-dimensional unidirectional motion. In three-dimensional space (or time) there can be two displacement units in each of the three dimensions, and the maximum three-dimensional unidirectional displacement is therefore 2³, or 8, units.

There have been some suggestions that the number of possible directions (and consequently displacements) in three-dimensional space ought to be 3 x 2 = 6 rather than 2³ = 8. It should therefore be emphasized that we are not dealing with three individual dimensions of motion; we are dealing with three-dimensional motion. The possible directions in a three-dimensional continuum can be visualized by regarding a two-unit cube as being an assemblage of eight one-unit cubes. The diagonals from the center of the assemblage to the opposite corner of each of the cubes then define the eight possible directions.

An important consequence of the fact that there are eight displacement units between the zero point of the positive motion and the end of the second unit, which is the zero from the negative standpoint, is that in any physical situation involving rotation, or other three-dimensional motion, there are eight displacement units between positive and negative magnitudes. A positive displacement x from the positive datum is physically equivalent to a negative displacement 8 - x from the negative datum. This is a principle that will have a wide field of application in the pages that follow.

The key factor in the relation between motion in space and motion in time is the previously mentioned fact that in the context of a spatial reference system all motion in time is scalar, and in the context of a temporal reference system all motion in space is scalar. The regions of motion in time and motion in space therefore meet in what is essentially no more than a point contact. It follows that of all of the possible directions that a motion in time can take, only one of these time directions brings the motion in time into contact with the region of motion in space. Only in this one direction can an effect be transmitted across the regional boundary. Inasmuch as all possible directions are equally probable, in the absence of any factors that would establish a preference, the ratio of the transmitted effect to the total magnitude of the motion is numerically equal to the reciprocal of the total number of possible directions.

As can be seen from the foregoing explanation, the transmission ratio depends on the nature of the motion, particularly on the number of dimensions involved. However, the value with which we will be most concerned is that applicable to the basic properties of matter. This is the relation that was called the inter-regional ratio in the first edition, and it appears advisable to retain this name, although the more extensive information now available shows the relation is not as general as the name might indicate.

On the basis of the theoretical considerations discussed in the preceding paragraphs, there are 4 possible orientations of each of the two two-dimensional rotations of the atoms, and 8 possible orientations of the one-dimensional rotations, making a total of 4 x 4 x 8 = 128 different positions that a unit displacement of the scalar translational motion of the atom (the inward scalar effect of the rotation) can take in three-dimensional time. In addition, each of the rotating systems of the atom has an initial unit of vibrational displacement with three possible orientations, one in each dimension. For the two-dimensional basic rotation this means nine possible positions, of which two are occupied. Thus, for each of the 128 possible rotational positions there is an additional 2/9 vibrational position which any given displacement unit may occupy. The inter-regional ratio is then 128 x (1 + 2/9) = 156.44.

It is this inter-regional ratio that accounts for the small "size" of atoms when the dimensions of these objects are measured on the assumption that they are in contact in the solid state. According to the theory developed in the foregoing pages, there can be no physical distance less than one natural unit, which, as we will see in the next chapter, is 4.56 x 10⁻⁶ cm. But because the inter-atomic equilibrium is established in the region inside this unit, the measured inter-atomic distance is reduced by the inter-regional ratio, and this measured value is therefore in the neighborhood of 10⁻⁸ cm.

The inversion of space and time at the unit level also has an important effect on the dimensions of inter-regional relations. Inside unit space no changes in space magnitudes can take place, since less than unit space does not exist. However, as pointed out earlier, the motion in time, which can take place inside the space unit, is equivalent to a motion in space because of the inverse relation between space and time. An increase in the time aspect of a motion in this inside region (the time region, where space remains constant at unity) from 1 to t is equivalent to a decrease in the space aspect from 1 to 1/t. Where the time is t, the speed in this region is the equivalent space 1/t divided by time t, or 1/t². In the region outside unit space, the speed corresponding to one unit of space and time t is 1/t. Now we find that in the time region it is 1/t².
The time region speed, and all quantities derived therefrom (which means all of the physical phenomena of the inside region, inasmuch as all of these phenomena are manifestations of motion), are therefore second power expressions of the corresponding quantities of the outside region. This is an important principle that must be taken into account in any relation involving both regions. The intra-regional relations may be equivalent; that is, the expression a = bc is the mathematical equivalent of the expression a² = b²c². But if we measure the quantity a in the outside region, it is essential that the equation be expressed in the correct regional form: a = b²c².

Although the difficulties which the Reciprocal System of theory does not encounter do not enter into the development of thought in these pages, and, strictly speaking, have no real place in the discussion, it may be of interest, while we are considering some of the factors that enter into the phenomena of very small dimensions, to point out that the theory of a universe of motion is free from the problem of infinities that plagues all conventional theories in this physical area. Richard Feynman gives us a candid assessment of the existing theoretical situation:

We really do not know exactly what it is that we are assuming that gives us the difficulty producing infinities. A nice problem! However, it turns out that it is possible to sweep the infinities under the rug, by a certain crude skill, and temporarily we are able to keep on calculating.... We have all these nice principles and known facts, but we are in some kind of trouble: either we get the infinities, or we do not get enough of a description–we are missing some parts.58

The Reciprocal System is free of these problems because it is a fully quantized system of theory. Every physical phenomenon, this theory tells us, is a manifestation of motion, and every motion involves at least one unit of space and one unit of time. For convenience, we may identify a "point" within a unit of space or a unit of time, but such a point has no independent existence. Nothing less than one unit of either space or time exists in the universe of motion.
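The principal numerical results of this chapter are easy to verify. The short Python sketch below (an illustrative check, not part of the original text; the natural unit of space is taken from the value quoted above and derived in the next chapter) reproduces the inter-regional ratio and the resulting order of magnitude of the measured inter-atomic distance:

    # Orientations of the atomic rotations in three-dimensional time:
    rotational_positions = 4 * 4 * 8            # two 2-D rotations, one 1-D rotation
    assert rotational_positions == 128

    # Each rotational position carries an additional 2/9 vibrational position:
    inter_regional_ratio = rotational_positions * (1 + 2/9)
    print(inter_regional_ratio)                 # 156.444...

    # Measured inter-atomic distance: one natural unit of space reduced
    # by the inter-regional ratio.
    natural_unit_of_space = 4.558816e-6         # cm
    print(natural_unit_of_space / inter_regional_ratio)   # ~2.91e-8 cm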

CHAPTER 13

Physical Constants

Because motion and its components, space and time, exist only in units, the derivatives of motion, dimensional variations of the basic relation between space and time, such as acceleration, force, etc., also exist only in natural units. A natural unit of force, for example, is a natural unit of time divided by a two-dimensional natural unit of space. It then follows that where a relation of the kind discussed in Chapter 12 is correctly stated, it is valid as a quantitative relation between units without any arbitrary "constant." The expression F = ma, for example, tells us that one natural unit of force applied to one natural unit of mass will produce an acceleration of one natural unit. When all quantities are expressed in natural units there are no numerical constants in equations of this kind aside from what we may call structural factors: geometrical factors such as the number of effective dimensions, numerical factors such as the second and third powers of the quantities entering into the relations, and so on.
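This unit consistency can be illustrated numerically. The following sketch (an illustration only, not part of the original text) uses the natural unit magnitudes tabulated later in this chapter and confirms that one natural unit of force acting on one natural unit of mass produces exactly one natural unit of acceleration:

    # Natural units of space and time (derived later in this chapter):
    unit_space = 4.558816e-6       # cm
    unit_time = 1.520655e-16       # sec

    unit_acceleration = unit_space / unit_time**2    # s/t²
    unit_mass = unit_time**3 / unit_space**3         # t³/s³, sec³/cm³
    unit_force = unit_time / unit_space**2           # t/s², sec/cm²

    # F = ma holds between the units themselves, with no constant:
    assert abs(unit_force / unit_mass / unit_acceleration - 1) < 1e-12
    print(unit_acceleration)       # ~1.9715e26 cm/sec²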

There has been a great deal of speculation as to the nature and origin of the "fundamental constants" of present-day physics. An article in the Sept. 4, 1976 issue of Science News, for example, contends that we are confronted with a dilemma, inasmuch as there are only two ways of looking at these constants, neither of which is really acceptable. We must either, the article says, "swallow them ad hoc" without justification for "their necessity, their constancy, or their values," or we must accept the Machian hypothesis that they are, in some unknown way, determined by the contents of the universe as a whole. The development of the Reciprocal System of theory has now resolved this dilemma in the same way that it handled a number of the long-standing problems considered in the earlier pages; that is, by exposing it as fictitious. When all quantities are expressed in the proper units–the natural units of which the universe of motion is constructed–the "fundamental constants" reduce to unity and vanish.

A preliminary step that has to be taken before we can compare the mathematical results derived from the new theory with the numerical values obtained by measurement is to ascertain the conversion ratios by which the values in the natural system can be converted to the conventional system of units in which the measurements are reported. Inasmuch as the conventional units are arbitrary, there is no way in which the conversion factors can be calculated theoretically. It is necessary to utilize a measurement of some specific physical quantity for each independent conventional unit. Any physical quantity which involves the item in question, and can be clearly identified, will theoretically serve the purpose, but for maximum accuracy certain basic phenomena that are relatively simple, and have been carefully studied observationally, are clearly preferable.

There is no question as to where we should obtain the value of the natural unit of speed, or velocity. The speed of radiation, measured as the speed of light in a vacuum, 2.99793 x 10¹⁰ cm/sec, is an accurately measured quantity that is definitely identified as the natural unit by the theoretical development.

There are some uncertainties with respect to the other conversion factors, both as to the accuracy of the experimental values from which they have to be calculated, and as to whether all of the minor factors that enter into the theoretical situation have been fully taken into account. Some improvement has been made in both respects since the first edition was published, and the principal discrepancies that existed in the original findings have been eliminated, or at least greatly minimized. No significant changes were required in the values of the basic natural units, but some of the details of the manner in which these units enter into the determination of the "constants" and other physical magnitudes have been clarified in the course of extending the development of the theoretical structure.

One of the problems in this connection is that of arriving at a decision as to which of the reported measured values should be used in the calculations. Ordinarily it would be assumed that the more recent results are the more accurate, but an examination of these recent values and the methods by which they have been obtained indicates that this is not necessarily true.
Apparently the "consistent" values listed in the up-to-date tabulations involve some adjustments of the raw data to conform with current theoretical ideas as to the relations that should exist between the various individual values. For purposes of this present work the unadjusted data are preferable.

The principal question at this point concerns the experimental values of Avogadro's number, as only three conversion constants are required for present purposes, and there are no significant differences in the measurements of the quantities that will be used in calculating two of these constants. The more recent values reported for Avogadro's number are somewhat lower than those reported earlier, but the correlation with the gravitational constant, which will be discussed shortly, favors some of the earlier results. The value adopted for use in evaluating the conversion constant for mass, 6.02486 x 10²³ g-mol⁻¹, has therefore been taken from a 1957 tabulation by Cohen, Crowe and DuMond.59

In any event, it should be understood that wherever the results obtained in this work are expressed in the arbitrary units of a conventional system, they are accurate only to the degree of accuracy of the experimental values of the quantities used in determining the conversion constants. Any future change in these values resulting from improvement of experimental techniques will involve a corresponding change in the values calculated from theoretical premises. However, this degree of uncertainty does not apply to any results that are stated in natural units, or in conventional terms such as units of atomic number that are equivalent to natural units.

As in the first edition, the natural unit of time has been calculated from the Rydberg fundamental frequency. A question has arisen here because this frequency varies with the mass of the emitting atom. The original calculation was based on the value applicable to hydrogen, but this has been questioned, as the prevailing opinion regards the value applicable to infinite mass as the fundamental magnitude. A definitive answer to this question will not be available until the theory of the variation in the frequency has been worked out, but in the meantime a review of the situation indicates that we should stay with the hydrogen value. From the theoretical viewpoint it would seem that the unit value would come from an atom of unit magnitude, rather than from an infinite number of atoms. Also, even though the difference is small, the value thus derived seems to be more consistent with the general pattern of measured magnitudes than the alternative.

From the manner in which the Rydberg frequency appears in the mathematics of radiation, particularly in such simple relations as the Balmer series of spectral lines, it is evident that this frequency is another physical manifestation of a natural unit, similar in this respect to the speed of light. It is customarily expressed in cycles per second on the assumption that it is a function of time only. From the explanation previously given, it is apparent that the frequency of radiation is actually a velocity. The cycle is an oscillating motion over a spatial or temporal path, and it is possible to use the cycle as a unit only because that path is constant. The true unit is one unit of space per unit of time (or the inverse of this quantity). This is the equivalent of one half-cycle per unit of time rather than one full cycle, as a full cycle involves one unit of space in each direction. For present purposes the measured value of the Rydberg frequency should therefore be expressed as 6.576115 x 10¹⁵ half-cycles per second. The natural unit of time is the reciprocal of this figure, or 1.520655 x 10⁻¹⁶ seconds.
Multiplying the unit of time by the natural unit of speed, we obtain the value of the natural unit of space, 4.558816 x 10⁻⁶ centimeters.
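These two derivations are simple enough to check directly. A brief Python sketch (illustrative only, not part of the original text):

    # Rydberg fundamental frequency for hydrogen, expressed as half-cycles:
    rydberg_half_cycles = 6.576115e15          # half-cycles per second

    unit_time = 1 / rydberg_half_cycles        # natural unit of time
    print(unit_time)                           # ~1.520655e-16 sec

    speed_of_light = 2.997930e10               # cm/sec, the natural unit of speed
    unit_space = speed_of_light * unit_time    # natural unit of space
    print(unit_space)                          # ~4.558816e-6 cm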

By combining these two natural units as required, the natural units of all of the quantities of the velocity group can be calculated. Those of the inverse quantities, the energy group, can also be calculated in the same centimeter-second terms, but this gives us expressions such as 3.711381 x 10⁻³² sec³/cm³, which is the natural unit of mass. This value has no practical use because the inverse relations between the quantities of the velocity group and those of the energy group have not hitherto been recognized. In setting up the conventional system of units it has been assumed that mass is another fundamental quantity for which an additional arbitrary unit is necessary. The ratio of the velocity-based unit of mass to this arbitrary unit, the gram, can be derived from any clearly defined physical relation involving mass that has been accurately measured in conventional units. As indicated earlier, the measurement selected for this purpose is that of Avogadro's constant. This constant is the number of molecules per gram molecular weight, or in application to atoms, the number of atoms per gram atomic weight. The reported value is 6.02486 x 10²³. The reciprocal of this number, 1.65979 x 10⁻²⁴, in grams, is therefore the mass equivalent of unit atomic weight, the unit of inertial mass, as we will call it.

With the addition of the value of the natural unit of inertial mass to the values previously derived for the natural units of space and time, we now have all of the information required for calculation of the natural units of the other primary quantities of the mechanical system. The mechanical units can be summarized as follows:

Natural Units of Primary Quantities

                            Space-time Units               Conventional Units
s      space                4.558816 x 10⁻⁶ cm             4.558816 x 10⁻⁶ cm
t      time                 1.520655 x 10⁻¹⁶ sec           1.520655 x 10⁻¹⁶ sec
s/t    speed                2.997930 x 10¹⁰ cm/sec         2.997930 x 10¹⁰ cm/sec
s/t²   acceleration         1.971473 x 10²⁶ cm/sec²        1.971473 x 10²⁶ cm/sec²
t/s    energy               3.335635 x 10⁻¹¹ sec/cm        1.49175 x 10⁻³ ergs
t/s²   force                7.316889 x 10⁻⁶ sec/cm²        3.27223 x 10² dynes
t/s⁴   pressure             3.520646 x 10⁵ sec/cm⁴         1.57449 x 10¹³ dynes/cm²
t²/s²  momentum             1.112646 x 10⁻²¹ sec²/cm²      4.97593 x 10⁻¹⁴ g-cm/sec
t³/s³  inertial mass        3.711381 x 10⁻³² sec³/cm³      1.65979 x 10⁻²⁴ g
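The relation between the two columns of this tabulation can be verified numerically; the divisor 2.236055 x 10⁻⁸ used here is the conversion factor explained in the next paragraph. A Python sketch (illustrative only, not part of the original text):

    # Space-time natural units of the energy group, from the tabulation above:
    energy = 3.335635e-11      # sec/cm
    force = 7.316889e-6        # sec/cm²
    momentum = 1.112646e-21    # sec²/cm²
    mass = 3.711381e-32        # sec³/cm³

    # Dividing each energy-group value by this factor yields the cgs value:
    factor = 2.236055e-8
    print(energy / factor)     # ~1.49175e-3 ergs
    print(force / factor)      # ~3.27223e2 dynes
    print(momentum / factor)   # ~4.97593e-14 g-cm/sec
    print(mass / factor)       # ~1.65979e-24 g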

The values given in the first column of this tabulation are those derived by applying the natural units of space and time to the space-time expressions for each physical quantity. In the case of the quantities of the speed or velocity type, these are also the values applicable in the conventional systems of measurement. However, mass is regarded as an independent fundamental variable in the conventional systems, and a mass term is introduced into each of the quantities of the energy type. Momentum, for example, is not treated as t²/s², but as the product of mass and velocity, which, in space-time terms, is t³/s³ x s/t. The use of an arbitrary unit of mass then introduces a numerical factor. Thus, in order to arrive at the values of the natural units in terms of the cgs system of measurement, each of the values given for the energy group in the first column of the tabulation must be divided by this factor: 2.236055 x 10⁻⁸.

As we saw in Chapter 10, the masses of the atoms of matter can be expressed in terms of units of equivalent electric displacement. The minimum quantity of displacement is one atomic weight unit. It is therefore evident that this displacement unit is some kind of a natural unit of mass. In the first edition it was identified as the natural unit of mass in general. The continuing theoretical development has revealed, however, that this atomic weight unit, the unit of inertial mass, is actually a composite that includes not only a unit of what we will now call primary mass, the basic mass quantity, but also a unit of secondary mass. The concept of secondary mass was introduced in the first edition, without being developed very far. A considerably more detailed treatment is now available.

The inward motion in space which gives rise to the primary mass does not take place from an initial level occupying a fixed location in a stationary frame of reference. Instead, the initial level itself is in motion in the region inside unit space. Since mass is an expression of the inward motion that is effective in the context of a stationary reference system, the primary mass is modified by the mass equivalent of the motion of the initial level. While the previous deductions with respect to the essential features of the secondary mass component have been confirmed in the subsequent studies, a few of the details take on a somewhat different appearance when viewed in the light of the more complete information now available.

The recent findings indicate that although the primary mass is a function of the net total effective positive rotational displacement, the movement of the initial level that is responsible for the existence of the secondary mass depends on the magnitudes of the displacements in the different dimensions separately. The scalar directions of the motions inside unit distance play an important part in determining these magnitudes. Outside unit distance, the scalar direction of the rotational motion is inward because it must oppose the outward motion of the natural reference system. However, as we saw in Chapter 10, the magnitude of that inward motion depends to some extent on whether the displacement in the electric dimension is positive or negative. Inside unit space there is still more variability, as the motion in this region is in time, and there is no fixed relation between direction in time and direction in space. (The rotational motion of which the material atom or particle is constructed is motion in space, but inside one spatial unit the translational motion of the atom is in time.) Because of this directional freedom in the time region, the secondary mass may be either positive or negative. Furthermore, the directions of the individual displacement units are independent of each other, and the net total secondary mass of a complex atom may be relatively small because of the presence of nearly equal numbers of positive and negative secondary mass components.

This directional variability introduces a number of complications into the secondary mass pattern of the elements. The complete pattern has not yet been identified, but a substantial amount of information is now available with respect to the values applying to sub-atomic particles and the elements of low atomic number.

The magnitudes of the natural units applicable to physical quantities are independent of the sector or region of the universe in which the phenomena to which they relate are located.
As explained in Chapter 12, however, only a fraction of any physical effect can be transmitted across a regional boundary, and the measured value beyond that boundary is substantially less than the original unit. This is the principal reason for the great disparity between the magnitudes of the primary and secondary mass. A unit of mass in the region inside unit distance is inherently just as large as a unit of mass in the region outside unit distance. But when both are measured in terms of their effect in the outside region, the inside, or secondary, mass is reduced by the inter-regional ratio.

In this chapter we are dealing with some very small quantities, and for greater accuracy we will extend the previously calculated value of the inter-regional ratio to two more decimal places, making it 156.4444. The reciprocal of this ratio, 0.00639205, is the fraction of a time region unit that is effective outside unit distance. It is therefore the unit of secondary mass applicable to the basic two-dimensional rotation of the atom or particle. The unit of inertial mass is one such secondary unit plus one unit of primary mass, or a total of 1.00639205.

An analysis of the secondary mass relations enables us to compute the mass of each of the sub-atomic particles, a magnitude that is of interest not only as one more item of information about the physical universe, but also because of the light that it throws on the structure of the individual particle. Here we must take into account not only the two-dimensional component of the secondary mass, the magnetic component, as we will call it, following our usual terminology, but also the other components that may be involved in the secondary mass.

One of these is the component due to the electric rotation, if any. Inasmuch as this electric rotation, the rotation in the third dimension, is not an independent motion, but a reverse rotation of the pre-existing two-dimensional rotating system, or systems, it adds neither primary mass nor the magnetic unit that is the principal component of the secondary mass. It contributes only the mass equivalent of a unit of one-dimensional rotation. In this case, the 1/9 factor representing the possible positions of the basic photon applies directly against the basic 1/128 relation. We then have for the unit of electric mass: 1/9 x 1/128 = 0.00086806.

This value applies specifically where the motion around the electric axis is a rotation of a two-dimensional displacement distributed over all three dimensions, as in a double rotating system. Where only one two-dimensional rotation is involved, the electric mass is 2/3 of the full unit, or 0.00057870. When two of the two-dimensional rotations (four dimensions in all) are consolidated to form a double rotating system (three dimensions), the two 0.00057870 mass units become one 0.00086806 unit.

Another secondary mass component that may be present is the mass due to an electric charge. Like all other phenomena in a universe of motion, a charge is a motion, an additional motion of the atom or particle. We are not ready to discuss charges in detail at this stage of the presentation, so for the present we will merely note that on the basis of the restrictions on combinations of motions defined in Chapter 9, the charge, as a motion of the rotating particle or atom, must have a displacement opposite to that of the rotation in order to be stable. This means that the motion that constitutes the charge is on the far side of another regional boundary–another unit level–and it is subject to two successive inter-regional transmission factors.
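The component magnitudes just derived follow directly from the inter-regional ratio, as this Python sketch (an illustrative check, not part of the original text) confirms:

    # Units of secondary mass derived from the inter-regional ratio:
    inter_regional_ratio = 128 * (1 + 2/9)         # 156.4444
    magnetic_unit = 1 / inter_regional_ratio       # two-dimensional component
    inertial_unit = 1 + magnetic_unit              # primary + magnetic

    electric_unit_3d = (1/9) * (1/128)             # double rotating system
    electric_unit_2d = (2/3) * electric_unit_3d    # single two-dimensional rotation

    print(f"{magnetic_unit:.8f}")      # 0.00639205
    print(f"{inertial_unit:.8f}")      # 1.00639205
    print(f"{electric_unit_3d:.8f}")   # 0.00086806
    print(f"{electric_unit_2d:.8f}")   # 0.00057870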

The relation between the time region and the third region, in which the motion of the charge takes place, is similar to that between the time region and the region outside unit space. The inter-regional ratio is the same, except that because the electric charge is one-dimensional the factor 1 + 1/9 has to be substituted for the factor 1 + 2/9 that appears in the inter-regional ratio previously calculated. This makes the inter-regional ratio applicable to the relation with the third region 128 x (1 + 1/9) = 142.2222. The mass of unit charge is the reciprocal of the product of the two inter-regional ratios, 156.4444 and 142.2222, and amounts to 0.00004494.

The charge applicable to electrons and positrons deviates from this normal value because these particles have effective rotations in only one dimension, leaving the other two dimensions open. In some way, the exact nature of which is not yet clear, the motion of the charge is able to take place in these two dimensions of the time region instead of in the normal manner. Since this is on the opposite side of the unit boundary, the direction of the effect is reversed, making the mass increment due to the charge negative, as well as reducing its magnitude by one third. The effective mass of a charge applied to an electron or positron is therefore -2/3 x 0.00004494 = -0.00002996.

We may now apply the calculated values of the several mass components, as given in the foregoing paragraphs, to a determination of the masses of the sub-atomic particles described in Chapter 11. For convenience, these values will be recapitulated as follows:

p   primary mass                 1.00000000
m   magnetic mass                0.00639205
    gravitational mass           1.00639205
E   electric mass (3 dim.)       0.00086806
e   electric mass (2 dim.)       0.00057870
C   mass of normal charge        0.00004494
c   mass of electron charge     -0.00002996
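The two charge components can be reproduced in the same way (again an illustrative check, not part of the original text):

    # Mass effects of electric charges, from the two inter-regional ratios:
    ratio_time_region = 128 * (1 + 2/9)     # 156.4444, two-dimensional factor
    ratio_third_region = 128 * (1 + 1/9)    # 142.2222, one-dimensional factor

    normal_charge = 1 / (ratio_time_region * ratio_third_region)
    print(f"{normal_charge:.8f}")           # 0.00004494 (C)

    # Reversed in direction and reduced by one third for electrons and positrons:
    electron_charge = -2/3 * normal_charge
    print(f"{electron_charge:.8f}")         # -0.00002996 (c)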

These are the masses of the various components on the natural scale. The measured values are reported in terms of a scale based on an arbitrarily assumed mass for some atom or isotope that is taken as a standard. For a number of years there were two such scales in common use: the chemical scale, based on the atomic weight of oxygen as 16, and the physical scale, which assigned the 16 value to the O¹⁶ isotope. More recently, a scale based on an atomic weight of 12 for the C¹² isotope has found favor, and most of the values given in the current literature are expressed in terms of the C¹² scale. In the light of the findings of this work the shift away from the O¹⁶ scale is unfortunate, as the theoretical development indicates that the O¹⁶ isotope has a mass of exactly 16 on the natural scale, and the physical scale (O¹⁶ = 16) is therefore coincident with the natural scale. It will, of course, be necessary to use the natural scale for our purposes. The observed values quoted for comparison with the theoretical masses will therefore be stated in terms of the equivalent O¹⁶ physical scale.

Here again we face the same issue that was encountered early in this chapter in connection with the selection of an empirical value of Avogadro's number as a basis for calculating the unit of mass: the question as to whether we should regard the most recent determination as the most accurate. It would appear that the arguments that led to the acceptance of the 1957 value of Avogadro's number are also applicable to the particle masses, particularly since the agreement between the calculated and observed masses of the electron and proton is quite satisfactory on this basis. The empirical values cited in the paragraphs that follow have therefore been taken from the 1957 compilation by Cohen, Crowe and DuMond.59

Since mass is three-dimensional, an independent one-dimensional or two-dimensional rotation has no mass. Nevertheless, when such a rotation becomes a component of a three-dimensional rotation, it contributes to the mass equivalent of that rotation. The amount that a rotation which is massless when independent will add to the mass of a particle or atom when it joins that combination of motions constitutes what we will call potential mass.

In the case of the particles with no effective two-dimensional rotational displacement, the electron and the positron, the appropriate unit of electric mass, 0.00057870, is the entire mass of the particle, and even that mass is only potential, rather than actual, as long as the particle is in the basic uncharged condition. When a charge is added, the effect of the charge is distributed over all three dimensions by the chance process that governs the directions of the motion of the charge in the time region. Thus the charged particle has effective motion in all three dimensions, irrespective of the number of dimensions of rotation. This not only makes the mass of the charge itself an effective quantity, but, as indicated in Chapter 11, it also raises the potential mass of the rotation of the particles to the effective status. The net effective mass of the electron or the positron is then the rotational value 0.00057870 less the mass of the charge, 0.00002996, or 0.00054874. The observed value is 0.00054877.

The massless neutron, the M ½-½-0 combination, has no effective rotation in the third dimension, but no rotation from the natural standpoint is rotation at unit speed from the standpoint of a fixed reference system. This rotational combination therefore has an initial unit of electric rotation, with a potential mass of 0.00057870, in addition to the mass of the two-dimensional basic rotation, 1.00639205, making the total potential mass of this particle 1.00697075. In this connection, it should be noted that the electron and positron also have rotation at unit speed (no rotation, in terms of the natural system) in the two inactive dimensions, but these rotations involve no mass, as they are independent, and are not rotating anything. The initial unit of rotation in the third dimension of the massless neutron, on the other hand, is a reverse rotation of the two-dimensional structure, and it therefore adds an electric mass unit.

The neutrino, M ½-½-(1), has the same unit positive displacement in the magnetic dimensions as the massless neutron, but it has neither primary nor magnetic mass because these are functions of the net total displacement, and that quantity is zero for the neutrino. But since the electric mass is independent of the basic rotation, and has its own initial unit, the neutrino has the same potential mass as the uncharged electron or positron, 0.00057870.

The potential mass of both the massless neutron and the neutrino is actualized when the rotations of these particles are joined to produce a three-dimensional rotation. The mass of the resulting particle is then 1.00754945. As indicated in Chapter 11, this particle is the proton. As it is observed, however, the proton is positively charged, and in this condition the foregoing figure is increased by the mass of a unit charge, 0.00004494. The resulting mass of the theoretical charged proton is 1.00759439. The mass of the observed proton has been measured as 1.007600.

Consolidation of two protons results in the formation of a double rotating system. As stated earlier, this substitutes one three-dimensional electric unit of mass for two of the two-dimensional units, reducing the combined mass by 0.00028935. The mass of the product, the deuterium atom (H²), is the sum of two (uncharged) proton masses less this amount, or 2.014810. The corresponding observed value is 2.014735.

Inasmuch as the proton already has a three-dimensional status, addition of another neutrino alters only the electric mass. The material neutrino adds the normal two-dimensional electric unit, 0.00057870, making the total for the product, the mass one isotope of hydrogen, 1.00812815. The measured value is reported as 1.008142.

The successive additions of neutrinos to the massless neutron that eventually produce the mass one isotope of hydrogen should be given special attention, as the considerations which will be discussed in Chapter 17 indicate that this addition process plays a very significant part in the overall cyclic mechanism of the universe. The following tabulation shows how the mass of the hydrogen isotope is built up step by step.

Step by Step Building Process for the Hydrogen Isotope

M ½-½-0        massless neutron    primary mass     1.00000000
                                   magnetic mass    0.00639205
                                   electric mass    0.00057870
                                                    1.00697075*
M ½-½-(1)      neutrino            electric mass    0.00057870*
M 1-1-(1)      proton                               1.00754945
M ½-½-(1)      neutrino            electric mass    0.00057870*
M 1½-1½-(2)    hydrogen (H¹)                        1.00812815

* potential mass
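The additions in this tabulation are simple sums, which the following Python sketch (illustrative only, not part of the original text) reproduces:

    # Mass components from the recapitulation earlier in this chapter:
    p = 1.00000000    # primary mass
    m = 0.00639205    # magnetic mass
    e = 0.00057870    # electric mass, 2-dim.

    massless_neutron = p + m + e       # 1.00697075 (potential)
    proton = massless_neutron + e      # + neutrino = 1.00754945
    hydrogen_1 = proton + e            # + another neutrino = 1.00812815

    print(f"{massless_neutron:.8f} {proton:.8f} {hydrogen_1:.8f}")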

Neutrinos are plentiful in the local environment. The requirement for production of new matter in the form of hydrogen by the addition process is therefore a continuing supply of massless neutrons. In Chapter 15 we will find that there is in operation a gigantic process that furnishes just such a supply.

Addition of a cosmic neutrino, the rotational displacements of which are on the opposite side of the unit boundary, to the proton involves an additional initial electric unit, as both the rotation in time and the rotation in space must start from unity. Also, the spatial effect of the cosmic neutrino rotation is three-dimensional, since the spatial direction of motion in time is indeterminate. The total addition of mass to the proton in the production of the compound neutron is then 0.00144676, and the resulting mass of the particle is 1.00899621. It has been measured as 1.008982.

The following is a summary of the particle masses and the mass components from which these masses are built up. The empirical values from the 1957 compilation are given for comparison. As noted earlier, the correlation is quite satisfactory for the electron and the proton, as it is within the estimated range of experimental error. The divergence in the case of the heavier particles is not large, but it exceeds the estimated error. Whether the source of this discrepancy is in the theoretical development or in the experimental determinations remains to be ascertained.

Mass Composition     Particle              Calculated       Observed
e + c                charged electron      0.00054874       0.00054876
e + c                charged positron      0.00054874       0.00054876
e                    electron              0.00057870*      massless
e                    positron              0.00057870*      massless
e                    neutrino              0.00057870*      massless
p + m + e            massless neutron      1.00697075*      massless
p + m + 2e           proton                1.00754945       unobserved
p + m + 2e + C       charged proton        1.00759439       1.007593
p + m + 3e           hydrogen (H¹)         1.00812815       1.008142
p + m + 3e + E       compound neutron      1.00899621       1.008982

* potential mass
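All of the calculated values in this summary follow from the same component arithmetic, as this Python sketch (illustrative only, not part of the original text) shows:

    # Mass components from the recapitulation earlier in this chapter:
    p, m = 1.00000000, 0.00639205       # primary, magnetic
    E, e = 0.00086806, 0.00057870       # electric (3 dim.), electric (2 dim.)
    C, c = 0.00004494, -0.00002996      # normal charge, electron charge

    particles = {
        "charged electron": e + c,            # 0.00054874
        "neutrino":         e,                # 0.00057870 (potential)
        "massless neutron": p + m + e,        # 1.00697075 (potential)
        "proton":           p + m + 2*e,      # 1.00754945
        "charged proton":   p + m + 2*e + C,  # 1.00759439
        "hydrogen (H1)":    p + m + 3*e,      # 1.00812815
        "compound neutron": p + m + 3*e + E,  # 1.00899621
    }
    for name, mass in particles.items():
        print(f"{name:18s} {mass:.8f}")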

In the first edition the relation between the natural unit of mass and the arbitrary unit in the cgs system was identified in terms of the gravitational constant. It has recently been pointed out by Todd Kelso and Steven Berline that the relation thus established cannot be converted to a different system of units such as the SI (mks) system. This made it evident that the interpretation of the gravitational phenomenon on which the previous determination was based was, in some way, erroneous. An analysis of the situation was therefore carried out in order to locate the point of error.

The invalidation of the interpretation of the gravitational equation has no effect on any other feature of the theoretical results that have been obtained from the Reciprocal System, as described in this volume. Its sole result has been to leave this system of theory without any connection between the gravitational equation and the theoretical structure. Once the situation is viewed in this light, it is immediately apparent that the lack of connection between the equation and physical theory is not peculiar to the Reciprocal System. Conventional theory does not identify the connection either. The physics textbooks find it necessary to admit this fact in statements such as the following: "It should be noted that Newton's law of universal gravitation is not a defining equation like Newton's second principle of mechanics and cannot be derived from defining equations. It represents an observed relation." This is a theoretical discrepancy that conventional physics has not been able to resolve. But it is an isolated discrepancy, and it has been swept under the rug by assigning fictitious dimensions to the gravitational constant.

It follows from this that the error lies in some interpretation of that "observed relation" that has been common to both conventional theory and the Reciprocal System. Evidently the developers of both systems of theory have misunderstood the true nature of the phenomenon. Here, again, recognition of the source of the difficulty points the way to the resolution of the problem. As brought out in the earlier chapters, one mass does not actually exert a force on another–each is pursuing its own course independently of all others–but the results of the inward motions of two masses are similar to those that would follow if the masses did attract each other. These results can therefore be represented in terms of an attractive force, on an "as if" basis. But in order to do this we must put the "as if" forces on the same footing as real forces. A force can only be exerted against a resistance. Hence, when we attribute a force to the motion of one mass we cannot also attribute a force to the motion of the other. We must attribute a resistance to the second mass. Thus, an "as if" force, a gravitational force, is exerted against an "as if" inertial resistance.

In the previous discussion we identified gravitation as three-dimensional motion, s³/t³, and inertia as three-dimensional resistance to motion, t³/s³. The product of the gravitational motion and the inertial resistance therefore does not have the dimensions of mass to the second power, as the conventional expression of the gravitational equation indicates; it is dimensionless. This is a situation in which the ability to reduce all physical quantities to space-time terms is very helpful. It will also be convenient to examine the dimensional situation independently before taking up the question of the numerical values. The gravitational equation, as expressed in current practice, is assigned dimensions as follows:

(dynes cm² g⁻²) x g² x cm⁻² = dynes          (13-1)

Reducing equation 13-1 to space-time terms in accordance with the relations established in Chapter 12 (in which dynes, as g-cm/sec², are t³/s³ x s x 1/t² = t/s²), we have

(t/s² x s² x s⁶/t⁶) x t⁶/s⁶ x 1/s² = t/s²          (13-2)

In the light of the new understanding of the mm' term as the dimensionless product of gravitational and inertial mass, it is now evident that the s⁶/t⁶ dimensions belong with mm' rather than with the gravitational constant. When they are so applied, the resulting dimensions of mm' cancel out, as the true theoretical dimensions do. We can therefore replace them with the correct dimensions.

As pointed out in the first edition, there are also two other errors in the customary assignment of dimensions to this equation. The distance term is actually dimensionless. It is the ratio of 1/n² to 1/1². The dimensions that are mistakenly assigned to this term belong to a term whose existence has not been recognized because it has unit value, and therefore does not enter into the numerical calculation. In order to put the "as if" gravitational interaction on the same basis as a real interaction, we have to express it in terms of the action of a force on a resistance, not as the action of a mass on a resistance. And since the dimensions of the mass term cancel, so that the gravitational mass enters the equation only as a dimensionless number, the force of gravitation has to be expressed in actual force terms; that is, as t/s². The correct dimensional form of the equation is then

(s³/t³ x t³/s³) x t/s² = t/s²          (13-3)

Turning now to the numerical magnitudes, we note that while the dimensions of the mm' term cancel out, the magnitudes do not. Every unit of mass is both a unit of s³/t³ and a unit of t³/s³, each in its proper context. Since the units are independent, the effective magnitude of the "as if" action of m units of gravitation against m' units of inertial resistance is mm'. However, expressing both of the mass terms in conventional units introduces a numerical error, as only the inertial mass term is counterbalanced by a conventional mass magnitude on the other side of the equation. To compensate for this error a corresponding inverse factor must be introduced into the gravitational constant. There is no error if the gravitational mass is expressed in natural units, as the value 1 does not require any counterbalancing term. The relation between the natural and conventional units therefore determines the magnitude of the necessary correction factor.

One gram is 6.02486 x 10²³ units of inertial mass (t³/s³). The reciprocal of this number is 1.65979 x 10⁻²⁴. But only one sixth of the total number of mass units is effective in the gravitational interaction, because this "as if" interaction takes place in only one dimension, and in only one of the two directions in this dimension. The total number of s³/t³ units corresponding to an effective mass of one gram is therefore 9.95874 x 10⁻²⁴. Expressing this mass as one unit overstates the numerical value, and a correction of this magnitude must therefore be included as a component of the gravitational constant.

A small additional correction is required because of the effect of the secondary mass. Gravitation and inertia are inversely related relative to the primary mass; that is, the primary mass is p/(p + s) units of gravitational mass and also p/(p + s) units of inertial mass, where p and s are the primary and secondary masses respectively. The product of a unit of gravitational mass and a unit of inertial mass is therefore 1/(1 + s)² units of primary mass. Where the result is expressed in terms of inertial mass, another 1 + s factor is introduced. The total effect of the secondary mass is then the introduction of a factor of 1.019299. Applying this factor to the value 9.95874 x 10⁻²⁴, we obtain 1.015093 x 10⁻²³.

Replacing the 1/s² distance term by a t/s² force term has the effect of introducing a time dimension, which must be expressed in natural units to avoid creating a numerical unbalance. The numerical value of the natural unit of time, 1.520655 x 10⁻¹⁶, offsets in part the errors in the mass term. The net correction to be made is 1.015093 x 10⁻²³ divided by the natural unit of time, and amounts to 6.67537 x 10⁻⁸. This is the gravitational constant in the cgs system of units.

Looking now at the question of conversion to a different system of units, the issue that initiated the restudy of the situation, we find that a change from cgs to mks units in the conventional form of the equation (13-1) results in a change of 10⁻⁶ in the mass term, 10⁻⁴ in the distance term, and 10⁻⁵ in the force term. A change of 10⁻³ in the gravitational constant is then required for a balance. In the theoretical equation (13-3) the net effect of a change in the system of units is confined to the relation of the natural and conventional units of mass. As can be seen from the explanation that has been given, the gravitational constant is proportional to the ratio of these units. Changing the conventional unit from grams to kilograms alters this ratio by 10⁻³. The gravitational constant is then changed by the same amount. This agrees with the result obtained from equation 13-1.

Those who are familiar with the first edition will have noticed that the values of the natural unit of inertial mass and related quantities, as given earlier in this chapter, are larger than the values given in the original publication.
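The numerical derivation just described can be followed step by step. A Python sketch (an illustrative check, not part of the original text; all figures are taken from the text above):

    # Derivation of the cgs gravitational constant as described above:
    avogadro = 6.02486e23                  # inertial mass units per gram
    gram_per_unit = 1 / avogadro           # 1.65979e-24 g

    # One dimension and one of the two directions: a factor of 6.
    base_correction = 6 * gram_per_unit    # 9.95874e-24

    # Secondary mass correction; 1.019299 equals (1 + 0.00639205)³,
    # matching the 1/(1 + s)² and 1 + s factors described in the text.
    corrected = base_correction * 1.019299      # 1.015093e-23

    unit_time = 1.520655e-16               # natural unit of time, sec
    G = corrected / unit_time
    print(G)                               # ~6.67537e-8 dyne-cm²/g² (cgs)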
At the time of the original investigation it seemed clear that a factor of 1/3 entered into the mass situation in some way, and there appeared to be sufficient justification for applying this factor to the size of the basic unit. As brought out in the preceding paragraphs, we now find that the 1/3 factor is a result of the one-dimensional nature of the "as if" gravitational interaction. This factor has therefore been eliminated from the mass units. As a result, the natural unit of inertial mass, as defined in this edition, is three times the value given in the first edition (with a small adjustment to reflect the results of the continuing studies of the details of the phenomena involved). The use of these larger units has no effect on the physical relations involving inertial mass, as the expressions of these relations are balanced equations in which the mass terms are in equilibrium with terms representing quantities derived from mass.

CHAPTER 14

Cosmic Elements

As pointed out in Chapter 6, the inversion of space and time in physical phenomena that is possible by reason of the reciprocal relation between the two entities may apply to only one of the constituent motions of a complex physical entity or phenomenon, or it may apply to the entire structure. We have already examined some of the effects of inversion of single motion components, such as translational motion in time, negative displacement in the electric dimension of the atomic rotation, etc. Now we are ready to take a look at the consequences of complete inversions.

It has already been noted that the rotational combinations which constitute the atoms and sub-atomic particles of the material system are photons vibrating in time and rotating in space, and that they are paralleled by a similar system of combinations in which the photons are vibrating in space and rotating in time. The point to be emphasized at this juncture is that the inverse system, the cosmic system of atoms and sub-atomic particles, is identical with the material system in every respect, except for the space-time inversion. Corresponding to carbon, 2-1-4, there is cosmic carbon, (2)-(1)-(4). Corresponding to the neutrino, M ½-½-(1), there is a cosmic neutrino, C (½)-(½)-1, and so on. Furthermore, this identity applies with equal force to all of the entities and phenomena of the physical universe. Since everything that exists in the material sector of the universe is a manifestation of motion, every item is exactly duplicated in the cosmic sector with space and time interchanged. The detailed description of the material sector of the universe that we are deriving item by item through development of the consequences of the basic postulates of the Reciprocal System of theory is therefore equally applicable to the cosmic sector. Thus, even though the cosmic sector is almost entirely unobservable, we have just as exact and just as detailed knowledge of that sector (aside from information about specific individuals of the various classes of objects) as we do of the material sector.

It should be noted, however, that our knowledge of the material sector is knowledge of how the phenomena of that sector appear to observation from a point within that sector; that is, a location in a gravitationally bound system. What we know about the cosmic sector through application of the reciprocal relation is knowledge of the same kind, information as to how the phenomena of the cosmic sector appear to observation from a location within that sector; a location in a system that is gravitationally bound in time. Such knowledge has no direct significance from our standpoint, as we cannot make observations from such a base, but it does provide a basis from which we can determine how the phenomena of the cosmic sector, and the phenomena originating in that sector, theoretically should appear to our observation.

One of the most perplexing questions of present-day physics is: Where is the antimatter? Considerations of symmetry applied to the current theories of the structure of matter indicate that there should be "anti" forms of the elements of which ordinary matter is constituted, and that the "antimatter" composed of those "antielements" ought to be equally as abundant in the universe as a whole as ordinary matter. "Antistars" and "antigalaxies" should theoretically be as plentiful as ordinary stars and ordinary galaxies. But there is no hard evidence of the existence of any such objects. It has been suggested, to be sure, that some of the observed galaxies may be composed of antimatter. Alfven, for example, says that there is a "distinct possibility that antiworlds may actually be neighbors of ours, astronomically speaking. It cannot be excluded that the Andromeda nebula, the closest galaxy to ours, or even stars within our own galaxy, are composed of antimatter."60 But this is pure speculation, in the absence of any demonstrated means of distinguishing the radiation produced by a galaxy of the hypothetical antimatter from that produced by a galaxy of ordinary matter. So the question remains: Where is the antimatter?

The Reciprocal System now provides the answer. This new structure of theory agrees that antimatter (actually reciprocal matter: cosmic matter, as we are calling it) exists, and that it is equally as abundant in the physical universe as ordinary matter. But it tells us that the galaxies of cosmic matter are not localized in space; they are localized in three-dimensional time. The progression of time to which we are subject carries us through this three-dimensional time in a manner analogous to a linear motion through three-dimensional space. Only a very small fraction of the total number of objects occupying positions in the spatial reference system would be encountered in the course of a one-dimensional spatial motion of this kind, and the same is true of the number of cosmic objects that are encountered in our progression through time, as compared with the total number of such objects occupying positions in a three-dimensional temporal reference system. Furthermore, gravitation in the cosmic sector acts in time, rather than in space, and the atoms of which a cosmic aggregate is composed are contiguous in time, but widely dispersed in space. Thus, even the relatively small number of cosmic aggregates that we do encounter in our movement through time are not encountered as spatial aggregates; they are encountered as individual atoms widely dispersed in space. We cannot recognize a cosmic star or galaxy because we observe it only one atom at a time. Radiation from the cosmic aggregate is similarly dispersed. Such radiation is continually reaching us, but as we observe it, this radiation originates from individual, widely scattered atoms, rather than from localized aggregates, and it is therefore isotropic from our viewpoint. This radiation can no doubt be equated with the "blackbody radiation" currently attributed to the remnants of the "Big Bang."

All of the somewhat sensational suggestions as to the existence of observable stars and galaxies of antimatter, and the possible consequences of interaction between these aggregates and bodies composed of ordinary matter, are thus without foundation. The antimatter-fueled generators which supply the energy for space travel in science fiction will have to remain on the science fiction shelves.

The difference between a cosmic star and a white dwarf star should be noted particularly. Both are on the time side of the dividing line so far as the translational speed is concerned; that is, both are composed of matter that is moving faster than the speed of light. But the white dwarf is otherwise no different from the ordinary star of the material sector. The space-time relationship is inverted only in the translational motion of its components. In the cosmic star, on the contrary, all of the space-time relations are the inverse of those of the ordinary material star; not only the translational motion, but also the vibrational and rotational motions of its constituent atoms, and, what is especially significant in the present connection, the effect of gravitation. Consequently, the white dwarf is an aggregate in space, and we see it as such, whereas the cosmic star is an aggregate in time, and we cannot recognize it as an aggregate.

Even those contacts which do take place between matter and the individual particles of cosmic matter (antimatter) that enter the local environment do not have the kind of results that are anticipated on the basis of current theory. In present-day thought the essential difference between matter and antimatter is conceived as a charge reversal. An atom is thought to consist of a positively charged nucleus surrounded by negatively charged electrons. It is then assumed that the antiatom has the reverse structure: a negatively charged nucleus surrounded by positively charged electrons (positrons). The further assumption then follows that an effective contact between any particle and its antiparticle would result in cancellation of all charges and reduction of both particles to radiant energy.

This is a typical example of the results of the compartmental nature of present-day physical theory, which permits an assumption to be used in one field of application, and a direct contradiction of that assumption to be applied in another field, both under the banner of "modern physics." Where the accepted theory requires that opposite charges neutralize each other on close approach, it is assumed that they do so. Where this does not fit the theory, as in the electrical explanation of the structure of matter, it is cheerfully assumed that the charges accommodate their behavior to the requirements of the theory, and take up stable relative positions instead of destroying each other. In the present instance, both of these contradictory assumptions are employed at the same time. The stable charges that somehow have no effect on each other are "annihilated" by other charges, presumably identical in nature.

Our findings are that wherever electric charges actually do exist, opposite charges destroy each other on contact. It does not follow, however, that charge neutralization is equivalent to annihilation. In actual practice, only one of the reactions between particles and what are presumed to be antiparticles follows the theoretical scenario of annihilation. The electron and positron do, in fact, annihilate each other on contact, with the production of oppositely directed photons. The antiparticle of the proton, in the accepted sense of the term–a particle equivalent to the proton in all observable respects except that it is negatively charged–has been detected, but contact of this antiproton with a proton does not result in annihilation of the particles into radiant energy. "Here the situation is not as straightforward as in the annihilation of an electron-positron pair,"61 report Boorse and Motz. And indeed it is not. The interaction of these particles produces an assortment of transient and stable particles not essentially different from those which appear in other high-energy interactions. As these authors say, "different kinds of mesons are released" in the process. In the light of our new findings it is evident that these are not annihilation reactions; they are cosmic atom building reactions. We will examine the nature and characteristics of such reactions in Chapter 16.

Detection of the antineutron has also been reported, but the evidence for this is indirect, and it is rather difficult to reconcile the various ideas as to just what an antineutron would be with the concept of charge reversal as the essential difference between particle and antiparticle. On the basis of the charge reversal hypothesis, the neutral particles should have no "anti" forms. Indeed, those who contend that "every particle has its antiparticle" justify this statement by asserting that each neutral particle is its own antiparticle. This would rule out the existence of a distinct antineutron, in the currently accepted sense of the term. In any event, this problem with respect to the neutral particles is another item that, like the lack of annihilation in the "annihilation reactions," emphasizes the inadequacy of the conventional theory of atomic structure in application to the "antimatter" phenomena.

In a universe of motion the atom is not an electrical structure. As has been brought out in detail in the earlier pages, it is a combination of rotational and vibrational motions. In the structures of the material type the speed of the rotational motions is less than unity (the speed of light), while the speed of the vibrational motion is greater than unity. In the structures of the cosmic type these relations are reversed. Here the speed of the vibrational motion is less than unity and the speed of the rotational motion is greater than unity. The true "antiparticle" of a material particle or atom is a combination of motions in which the positive rotational displacements and negative vibrational displacements of the material structure are replaced by negative rotational displacements and positive vibrational displacements of equal magnitude.

In one of the reactions currently attributed to mutual annihilation of antiparticles, the neutralization of displacements is actually accomplished, and in this case, the combination of electrons and positrons, the particles are actually annihilated; that is, they are converted to radiant energy and their existence as particles of the rotational class is terminated. But there are, in reality, two different processes involved in this reaction. First, the oppositely directed charges cancel each other, leaving both particles in the uncharged condition. Subsequently, their rotations, M 0-0-1 and M 0-0-(1), combine to 0-0-0, which is no effective rotation at all. In the vernacular, we might describe this second process as straightening out the rotational motion. There is a short interval between the two processes, and the effects attributed to "positronium," a hypothetical short-lived combination of an electron and a positron, probably originate during this interval.
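The displacement arithmetic in this electron-positron case is simple enough to state explicitly. The following minimal Python sketch (not part of the original text; the tuple representation of the M notation is illustrative only) shows the component-wise combination described above:

    # Illustrative only: the rotational displacements quoted above combine
    # component-wise. The electron M 0-0-(1) and positron M 0-0-1 sum to
    # 0-0-0, no effective rotation, which the text equates with conversion
    # to radiant energy.

    def combine(a, b):
        """Component-wise sum of two displacement triples (m1, m2, electric)."""
        return tuple(x + y for x, y in zip(a, b))

    electron = (0, 0, -1)   # M 0-0-(1); parentheses denote negative displacement
    positron = (0, 0, 1)    # M 0-0-1
    print(combine(electron, positron))  # (0, 0, 0) -> no effective rotation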
The extent to which annihilation can actually take place in contacts between antiparticles other than the electron and positron is still an open question. If the observed antiproton is actually the true antiparticle of the proton–that is, a cosmic proton–the results of the observed contacts of these particles indicate rather definitely that annihilation is confined to the one-dimensional particles. If the observed antiproton is merely a material proton with a negative charge, a possibility that cannot be ruled out at the present stage of the investigation, the observed results of the interactions are not relevant to the question, but the situation is still unfavorable for annihilation, as the obstacles in the way of securing simultaneous contact between the corresponding motions obviously increase with the complexity of the rotational combination, and it is very doubtful if the necessary coincident contacts can be obtained in different dimensions. It therefore appears that the intriguing possibility of energy production by contact between matter and antimatter is not only ruled out as a large-scale process by the impossibility of concentrating antimatter in space, as previously indicated, but is also unlikely even as a single atom process.

Inasmuch as our present objective is to examine those phenomena of the cosmic sector of the universe that are accessible to our observation, the observed antiparticles, which are products of high-energy processes in the material sector, are pertinent only to the extent that they throw some light on the kind of behavior that can be expected from the cosmic objects that do enter our field of observation. As indicated earlier, some of these incoming objects make themselves known as a result of chance encounters during our progress through three-dimensional time. Additionally, there are processes, to be described later, which result in the ejection of substantial quantities of matter from each sector into the other. The portion of the material sector within our observational range is therefore subject to a continual inflow of cosmic matter. The incoming particles of this matter can be identified as the cosmic rays.

As they appear to observation, the cosmic rays are particles entering the local frame of reference from all directions and at extremely high speeds, together with a variety of secondary particles produced in events initiated by the primary particles. The secondaries include some common sub-atomic particles of the material system, such as electrons and neutrinos, and also a number of transient particles of extremely short lifetime, from 10⁻⁶ seconds downward, that were unknown prior to the discovery of the cosmic rays, but have since been produced by high-energy processes in the particle accelerators.

In current thought, the primaries are regarded as ordinary material atoms. The evidence in favor of this conclusion may be summarized as follows:

1. Sub-atomic particles are excluded, as they are all incapable, for one reason or another, of producing the observed effects. This means that, unless they belong to an otherwise unknown class of particle, the primary cosmic rays must be atoms.

2. The masses of the atoms that constitute the primaries cannot be determined at the present stage of instrumentation and techniques, but it is possible to determine the charges on the individual particles, and on the assumption that they are fully ionized, this indicates the atomic numbers. The distribution of the elements in the incoming cosmic rays, on this basis, approximates the estimated distribution in the observed universe as a whole.

In the absence of any known alternative, this amount of evidence has been sufficient to secure general acceptance of the conclusion that the primaries are atoms of ordinary material elements.
When the issue as to its validity is raised, however, as it must be when an alternative appears, it is clear that there are many counter-indications in the empirical data. The most serious items are the following:

1. The speeds and energies of the primaries are too high to be compatible with production by ordinary physical processes. No known process, or even a plausible speculative process, based on conventional physics, is capable of producing energies that extend up to the vicinity of 10²⁰ eV. As expressed in the Encyclopedia Britannica, "how to explain the acquisition of such energies is a disturbing physical and cosmological problem."

2. With the exception of some of the relatively low energy rays that are thought to originate in the sun, most of the primaries have energies in the range which indicates speeds in the neighborhood of the speed of light. Inasmuch as some decrease in speed has undoubtedly taken place before the observations, it is quite probable on the basis of the observational evidence (that is, disregarding any purely theoretical limitation) that the rays originally entering the local environment were traveling at the full speed of light. This is another indication of an extraordinary origin.

3. While the distribution of elements deduced from the cosmic ray charges approximates the estimated distribution in the observed universe as a whole, there are some very significant differences. For example, the proportion of iron atoms in the cosmic rays is 50 times that in average matter. Lithium has been reported to be as much as 1000 times as abundant (although some of the lithium may be a decay product). The cosmic rays therefore cannot be merely ordinary matter drawn from the common pool and accelerated to high speeds by some unknown process. They must have originated from some unusual kind of source. These anomalies in the "charge spectrum" of the cosmic rays are given little attention in current physical thought, probably because they have no known explanation, but the significance that such deviations from the normal abundance would have, if confirmed, was clearly recognized at the time when the first indications of these deviations were observed. For instance, Hooper and Scharff (1958) made this comment: "An excess of heavy nuclei would suggest the necessity of reconsidering our fundamental ideas on the origin of the primary radiation."62

4. All of the major products of the primary rays have extremely short lifetimes. If they do not undergo collisions before their lifetimes have elapsed, they decay in flight to particles of lower mass and equal or longer lifetime. There is much available evidence to indicate that this is also true of the primaries. For example, in some of the observed events a transient particle leaves the scene of the event in a continuation of the line of travel of the primary, and carries the bulk of the original energy. The straightforward interpretation of such events is that they represent processes in which the primary decays to the transient particle and continues on its way. The existence of a substantial number of high-energy pions in the incoming stream of particles is another item of evidence pointing in the same direction, as similar, but earlier, decays of primaries will produce pions with very high energies. It has been estimated that as much as 15 percent of the incoming high-energy particles are pions.
The conclusion that can logically be drawn from the observations is that the primaries are of the same general nature as the known transient particles, and that the entire cosmic ray phenomenon is a single process taking place in a succession of decay events–a process in which an atom with some strange and unusual properties is converted first into other similar, but less massive, particles, and then finally into products that are compatible with the local environment.

The considerations summarized in the foregoing paragraphs indicate that the current explanation of the nature of the primary cosmic rays is not correct. They point to the conclusion that these primaries are not atoms of material elements, as now believed, but atoms of a special kind which have characteristics similar to those of the transient particles, and are produced under some unusual conditions that lead to entry into the local frame of reference at the full speed of light. Since we now find from the theoretical development that there is a continuing inflow of cosmic atoms, which are atoms of a special kind that, according to the theory, enter our environment at the speed of light, and are subject to rapid decay in the manner of the observed transient particles, the identity of the theoretical and observed phenomena is almost self-evident.

An outstanding characteristic of the results obtained from development of the consequences of the postulates of the Reciprocal System of theory–one that we have had occasion to mention several times in the preceding pages–is the way in which they resolve long-standing and seemingly extremely difficult questions in a surprisingly simple manner. Nowhere is this more evident than in the case of the cosmic rays, where the finding that these incoming particles are atoms from the high-energy sector of the universe clears up the many previously intractable issues in this area with remarkable ease. The basic questions, What are the cosmic rays? and Where do they come from?, are answered automatically by the theoretical discovery of a sector of the universe in which objects with the observed properties of the cosmic rays are indigenous. The particular properties that characterize the constituents of the cosmic rays, and distinguish them from the constituents of aggregates of ordinary matter, are naturally the ones that are the most difficult to explain on the basis of current theories which try to fit them into the material system of phenomena, but these explanations are practically obvious once the existence of the cosmic (high energy) sector is recognized.

The energy questions are the central problems. As stated by W. F. G. Swann, "no piece of matter can, under ordinary circumstances, contain, in any form, enough energy to provide cosmic ray energies for its particles."63 But this is only one phase of the energy problem. The total energy involved is also far too large.

If cosmic rays move in straight lines, as does starlight, and have the same energy density as starlight, then the power supplies to each will have to be the same. There seems no conceivable way to find this much energy for cosmic radiation. (Leverett Davis, Jr.)64

Here again we meet the "There is no other way" contention that is being used to justify so many of the otherwise untenable theories and assertions of present-day science, and again the development of the Reciprocal System demonstrates that there is a "conceivable way."
But because the cosmic ray physicists have been confined within the limited horizons of conventional basic ideas, they have not been able to account for the observed energies on any straightforward basis. They have therefore been forced to invent exotic hypothetical mechanisms for acceleration of the cosmic rays from the relatively low energies that are available in the material sector to the high levels that are actually observed, and equally far-fetched "storage" processes to avoid the difficulty cited by Davis. The existence of another half of the universe, in which the prevailing speeds are greater than the speed of light, and the energies of the mass units are correspondingly great, disposes of both aspects of the energy issue.

There are observable explosion processes in the material sector (which will be examined in detail in Volume II) that result in the acceleration of large quantities of matter to speeds in excess of the speed of light. The most energetic portions of these high-speed explosion products are ejected into the cosmic sector, the sector of motion in time. From the general reciprocal relation between space and time we can deduce that these same processes are operative in the cosmic sector, and that they result in the ejection of large quantities of cosmic matter into the material sector. This is the matter that we observe in the form of the cosmic rays.

The characteristics of these interchange processes, as they will be developed in Volume II, explain why the distribution of the elements in the cosmic rays differs from the estimated average distribution in the observed physical universe. It will be shown that the proportion of heavier elements in matter increases with the age of the matter, and it will be further shown that the matter ejected from one sector of the universe into the other consists principally of the oldest (or most advanced) matter in the originating sector. Thus the cosmic rays are not representative of cosmic matter in general; they are representative of the cosmic matter that corresponds to the oldest matter in the material sector. The isotropic distribution of the incoming rays is likewise a necessary result of entry from the region of motion in time. Both the spatial location of entry, and the direction of motion of the particle after entry, are determined by chance, as the contact of the space and time motions is purely scalar.

The identification of the cosmic rays as atoms of the cosmic elements was clear from the beginning of the development of the Reciprocal System. As stated earlier, the available evidence indicates that these so-called "rays" must be atoms. On the other hand, their observed properties are quite different from those of the atoms of ordinary matter. The natural conclusion from these facts would be that the atoms of the cosmic rays are atoms of some different kind. Conventional science cannot accept this answer because it has no place for the kind of an atom that is indicated. The physicists have therefore been forced to conclude that the cosmic rays are ordinary atoms that, for some unknown reason, have unusual properties. In contrast, the basic postulates of the Reciprocal System require the existence of a type of atom, the inverse of the material atom, that has just the kind of characteristics, when observed in the material sector, that are found in the cosmic rays.
It should be noted in this connection that the concept of antimatter, the conventional alternative to the reciprocal matter required by the postulates of the Reciprocal System, cannot be applied to the cosmic rays, because the interaction of matter and antimatter is theoretically supposed to result in annihilation of both substances, rather than the particle production and other phenomena that are actually observed in the cosmic ray interactions.

Although only a limited amount of time could be allotted to the cosmic rays in the early stages of the development of the Reciprocal System, because of the large number of physical areas that had to be given some study in order to confirm the status of the theory as one of general application, the first edition did include an account of the nature and origin of the primary rays, an explanation of the kind of modifications that these particles must undergo in the material environment, and a general description of this modification, or decay, process. In the meantime there has been substantial progress, both experimentally and theoretically, and it is now possible to expand the previous presentation very materially.

The extension of theory in the cosmic ray area that has taken place in the twenty years since the publication of the first edition provides a good illustration of what is involved in the development of the theoretical system from the fundamental postulates. The basic facts–the identity of the cosmic rays, their place of origin, the reason for their enormous energies, etc.–were almost self-evident once the reciprocal relation between space and time was recognized. But it cannot be expected that such an understanding of the basic facts will immediately clear up the entire multitude of questions that arise in the course of developing the details of the theoretical structure. The answers to these questions are available. They can be derived from the fundamentals of the system of theory. But they do not emerge automatically.

Where a theory is developed entirely by deduction from a single set of premises, as is true of the Reciprocal System, there should not be many cases in which wrong answers are reached, if the theoretical foundations are solid, and due care is exercised in the logical development. Only a very few of the conclusions stated in the first edition of this work have been invalidated by the twenty years of additional study that have followed. But it is altogether unrealistic to expect that the first exploration of a physical field by means of a totally new method of approach will accurately identify all of the significant features of the phenomena in that field. It is a virtual certainty that many of the original conclusions will be incomplete. Here, again, the Reciprocal System is no exception.

The explanation of cosmic ray decay that will be given in the next chapter is, in all essential respects, the same explanation that was presented in the first edition. However, the development of the theoretical structure in the intervening years has brought to light many necessary consequences of the postulates of the Reciprocal System that have a significant bearing on the decay process and contribute to a more complete understanding of the decay events. These new items of information include such things as the existence of a transition zone, the two-dimensional nature of the motion in that zone, the existence of the massless form of the neutron, and the nature of the limitation on the lifetimes of the cosmic particles. With the benefit of all of this additional theoretical knowledge, and a substantial increase in the amount of available empirical information, it will be possible to define the decay sequence more accurately.
Nevertheless, the presentation in Chapter 15 is not a new explanation of the phenomenon; it is the same explanation in more complete form.

CHAPTER 15

Cosmic Ray Decay

On the basis of the information developed in Chapter 14 we may describe the cosmic rays in general terms as cosmic atoms and particles which enter the material environment at the speed of light, at random spatial locations, and with random directions. Here, then, are the contents of the cosmic sector of the universe as they appear, very fleetingly, to our observation. We will now examine what happens to these objects after they arrive.

In the earliest observed stages the cosmic particles are known as the primary cosmic rays. As many observers have pointed out, there is no assurance that these are the original rays, as the decay process may have already begun before the primary rays are observed. The theoretical development indicates that this is, indeed, true, as the primaries contain a considerable percentage of particles that are clearly decay products rather than normal constituents of the original rays. In the subsequent discussion we will follow the general practice, and will refer to the observed incoming particles as the primary rays, but it should be understood that this does not imply that the observed primaries are identical with the particles that originally crossed the boundary into the material sector.

Since the cosmic rays enter the material sector from a region in which the prevailing speeds are greater than unity, these particles make their entry at the speed of light. It is the decrease from a speed greater than unity to a speed less than unity which constitutes entry into the material sector, but the dividing line between the cosmic sector and the material sector is unit speed in all three scalar dimensions. The speed of the primaries therefore remains at or near unity in the observable dimension even after the speed, in total, has decreased to some extent. This accounts for the previously noted fact that the observed speeds of the incoming particles are mainly close to the speed of light.

Inasmuch as these speeds, and the corresponding kinetic energies, are greatly in excess of the normal speeds and energies of the material sector, transfer of the excess kinetic energy to the environment begins immediately on entry. Gravitational and electromagnetic forces, to which the cosmic atom is subject as soon as it crosses the boundary, accomplish part of the energy reduction. Contact with material particles is also an important factor, and a further loss occurs in connection with the reduction of the internal energy that must also take place.

The cosmic atoms of maximum energy content (kinetic equivalent) are those of the most abundant cosmic elements: c-hydrogen and c-helium. The principal constituents of the cosmic rays, the cosmic elements of low atomic number, are therefore not only entering the material frame of reference at speeds which are far too high to be compatible with the material environment, but are also entering in the form of structures whose internal energy (rotational displacement) content is also much too great. These elements must lose rotational energy, as well as kinetic energy, before they can assume forms that will merge with the material phenomena. The required loss of rotational energy from the atomic structures is accomplished by ejection of particles of an appropriate nature. A readjustment of some kind in the atomic motions is required at very short intervals, and the probability principles insure that the direction of the rearrangements is toward greater stability. In the material environment this means a reduction of the excess rotational energy.

At the present stage of the theoretical development it appears that the limitation of the lifetimes of the cosmic elements to extremely short intervals is due to the fact that the rotation in the cosmic structure takes place at a speed greater than unity, and this structure therefore moves inward in time, rather than in space. Consequently, it can exist in a stationary spatial frame of reference for only one unit of time. If it is moving translationally at a speed above unity in all scalar dimensions, as is true of most of the cosmic atoms encountered by chance in our passage through time, it moves away from the line of the time progression of the material sector, and disappears. But this option is not available to cosmic atoms that have dropped below unit speed, and instead, they separate into two or more particles, each of which then has its own appropriate lifetime.

The natural unit of time, in application to macroscopic physical phenomena, was evaluated in Chapter 13 as 1.521 x 10⁻¹⁶ seconds. Some of the observed particles have lifetimes in this neighborhood, but others range all the way from about 10⁻¹⁶ seconds to about 10⁻²⁴ seconds. As will be brought out later, the magnitude of the deviation from unit time has been correlated with the dimensions of the spatial motion of the particles, but the exact nature of the modifying factor has not yet been identified, and for the present we will treat it as a modifier of the unit of time, similar to the inter-regional ratio that modifies the unit of space in application to the time region.

The limiting lifetime to which the foregoing comments apply is the limit at zero speed. At higher speeds, the lifetime, as measured by a conventional clock, increases in accordance with the relations expressed in the Lorentz equations, which, as noted earlier, are equally as applicable in the Reciprocal System of theory as in conventional physics. The explanation of this longer life that we deduce from theory is that the particle can remain intact in the spatial reference system as long as it remains in the same unit of time. But an object moving at the speed of light remains in the same unit of time (in the natural system, which is controlling) permanently, and such an object can exist indefinitely in any system of reference. The decrease in life at the lower speeds follows the mathematical pattern derived by Lorentz.

From the foregoing it is evident that the primary cosmic rays, moving at the speed of light, did not necessarily enter the material sector in our immediate vicinity. The rays that we observe may have entered anywhere in interstellar, or even in intergalactic, space.

In general, as pointed out in the first edition, the successive steps of the decay process which the cosmic atoms undergo after their entry consist of ejections of rotational displacement in the form of massless particles, which continue until the residual cosmic element reaches a status in which it can be transformed into a material structure. Of course, nothing physical can be transformed into something different. Only in the world of magic is that possible. Addition or removal of some constituent can alter a physical entity, but it can be transformed only into some other form of the same thing, as the term itself implies.
In the case of the elements the transformation is made possible by the specific relation between the space and time zero points.

As explained in Chapter 12, the difference between a positive speed displacement x and the corresponding negative speed displacement 8-x (or 4-x in the case of two-dimensional motion) is simply a matter of the orientation of the motion with respect to these space and time zero points. The rotational motions of material atoms and particles are all oriented on the basis of the spatial (positive) zero, because, as noted earlier, it is this orientation that enables the rotational combination to remain in a fixed spatial reference system. Similarly, the cosmic atoms and particles are oriented on the basis of the temporal (negative) zero, and are therefore capable of remaining permanently within a fixed temporal reference system, whereas they have only a transient existence in a spatial system. The only difference between a motion with a positive speed displacement x and one with a negative speed displacement 8-x (or 4-x) is in this orientation of the scalar direction. Either can therefore be converted to the other by a directional inversion. For example, if the negative magnetic displacements of the cosmic helium atoms, (2)-(1)-0, are replaced by the 4-x positive values, this inverts the scalar directions of the rotations without altering the nature or magnitude of either of the rotational components. The product, an atom of the material element argon, 2-3-0 (or 3-2-0 in our usual notation), is therefore the same physical object as the cosmic helium atom. It is merely moving in a different scalar direction. Conversion of cosmic helium into argon is nothing more than a change to another form of the same thing, and thus it is a physical possibility that can be accomplished under the right conditions and by the appropriate processes.

Every atom of either the cosmic or the material type in which the speed displacements do not exceed 3 in either of the magnetic dimensions or 7 in the electric dimension has an equivalent oppositely directed structure. This is illustrated in the following table of equivalents of cosmic and material elements of the inert gas series, the elements with no effective displacement in the electric dimension.

Cosmic System               Material System
c-helium     (2)-(1)-0      2-3-0    argon
c-neon       (2)-(2)-0      2-2-0    neon
c-argon      (3)-(2)-0      1-2-0    helium
c-krypton    (3)-(3)-0      1-1-0    2 neutrons
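The inversion rule stated above is mechanical enough to express as a short sketch. The following Python fragment is illustrative only: it applies the 4-x replacement to the magnetic dimensions as described in the text, and the stated 8-x rule to a nonzero electric displacement (an assumption on my part, since all of the tabulated examples have zero electric displacement):

    # Sketch of the scalar-direction inversion described above: a cosmic
    # magnetic displacement x maps to the material value 4-x; a nonzero
    # electric displacement would map to 8-x per the rule quoted in the text.

    def cosmic_to_material(magnetic1, magnetic2, electric):
        """Convert a cosmic rotational-displacement triple to its material
        equivalent, per the inversion rule stated in the text."""
        m1 = 4 - magnetic1
        m2 = 4 - magnetic2
        e = 0 if electric == 0 else 8 - electric  # zero stays zero in the table
        return (m1, m2, e)

    # The inert-gas equivalents from the table above:
    print(cosmic_to_material(2, 1, 0))  # c-helium  -> (2, 3, 0)  argon
    print(cosmic_to_material(2, 2, 0))  # c-neon    -> (2, 2, 0)  neon
    print(cosmic_to_material(3, 2, 0))  # c-argon   -> (1, 2, 0)  helium
    print(cosmic_to_material(3, 3, 0))  # c-krypton -> (1, 1, 0)  two neutrons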

It does not follow that a direct conversion of an atom of such an element to the equivalent inverse structure is always possible. On the contrary, it is seldom possible. For instance, in order to convert the cosmic helium atom directly to argon the rotations in the two magnetic dimensions would have to be inverted simultaneously, and at the same time the approximately 40 mass units required by the argon atom would have to be obtained from somewhere. The c-helium atom cannot meet these requirements, so at the end of the appropriate unit of time when it must do something, it does what it can do; that is, it ejects a massless particle. This carries away some positive rotational displacement, and moves the residual cosmic atom up the series of elements toward a higher cosmic atomic number, the equivalent of a lower material atomic number. This process continues until the residual cosmic atom is c-krypton, each rotating system of which is equivalent to a neutron. Here the transformation requirements can be met, as the inversion of each rotation involves only a single effective unit, and no provision for addition of mass is necessary, since the product of the conversion is a massless neutron.

The scalar directions of the c-krypton motions therefore invert, and two massless neutrons take their places in the material system. The question as to what then happens to these particles will be discussed in Chapter 17.

The general nature of the cosmic ray decay process, as described in the foregoing paragraphs, was clear from the start of the investigation of the role of the cosmic rays in the theoretical universe of the Reciprocal System. It was therefore evident that the ejections during this decay process must consist of positive rotational displacement in order that the cosmic atoms would be modified in the direction of greater stability in the material environment and ultimately built up to the level where conversion is possible. In the first edition these ejections were discussed in terms of neutrons and neutron equivalents, although it was noted that, in the terrestrial environment at least, they must be massless. Transfer of mass in these events is impossible, as the cosmic atoms have no actual mass. The mass indicated by their behavior in the observed reactions is merely the mass equivalent of the cosmic (inverse) mass that these atoms of the cosmic elements actually do possess. What these atoms must eject is positive magnetic rotational displacement, and this can only take place through the medium of massless particles. The conclusion reached in the earlier study was that in these ejection events the carrier particles must be pairs of neutrinos and positrons (jointly equivalent to neutrons rotationally, but massless) rather than neutrons of the observed type. The more recent finding that the neutron exists in a massless form now resolves this difficulty, as it is now evident that the ejected particles are massless neutrons.

The progress that has been made in both the observational and the theoretical fields has also enabled defining the decay path more accurately and in more detail than was possible in the first edition. Inasmuch as all features of the cosmic sector of the universe are identical with the corresponding features of the material sector, except that space and time are interchanged, the matter accelerated to high speeds by cosmic explosions of astronomical magnitude includes all of the components of cosmic matter: sub-atomic particles and atoms of all of the elements. But in order to be accelerated all the way to unity in three dimensions, a particle must offer a full unit of resistance in all three dimensions. Consequently, the only particles that are able to accelerate up to the escape speeds are the double rotating systems, the atoms. The unit particle in the interchange between the cosmic and material sectors is the atom of unit atomic number, the mass two isotope of hydrogen (deuterium). The mass one isotope of hydrogen does not qualify as a full-sized unit, but it lacks only the equivalent of a cosmic massless neutron, and this can be provided by ejection of a massless neutron of the material type. When subjected to a powerful explosive acceleration the H¹ atom therefore ejects such a particle and assumes the H² status.

The sub-atomic particles are not capable of being accelerated to the escape speed. They are all either inherently massless, or easily separated into massless components, and when they reach their limiting speeds they take the massless forms and thereby terminate the acceleration.
The total absence of sub-atomic particles in the cosmic rays that results from this inability to reach the escape speed is not currently recognized because the singly charged particles are mistakenly identified as protons, and the cosmic atoms in the decay sequence–mesons, in the conventional terminology–are accorded a somewhat indefinite kind of a sub-atomic status. But the absence of electrons is a conspicuous and puzzling feature of the cosmic ray phenomenon, and it imposes some severe constraints on theories which try to account for the origin of the rays.

An effect so gross as to exclude completely high-energy electrons from the spectrum at the earth should, it would seem, be accounted for unambiguously by any successful theory for the origin of the cosmic radiation. (T. M. Donahue)65

The unambiguous explanation is now available. No sub-atomic particles are present in the original cosmic rays because these particles are not capable of accelerating to the high inverse speeds necessary for entry into the material sector.

The cosmic property of inverse mass is observed in the material sector as a mass of inverse magnitude. Where a material atom has a mass of Z units on the atomic number scale, the corresponding cosmic atom has an inverse mass of Z units, which is observed in the material sector as if it were a mass of 1/Z units. The masses of the particles with which we are now concerned are conventionally expressed in terms of million electron volts (MeV). One atomic mass unit (amu) is equivalent to 931.152 MeV. The atomic number equivalent is twice this amount, or 1862.30 MeV. The primary rotational mass of an element of atomic number Z is then 1862.30 Z MeV, and that of a cosmic element of atomic number Z is 1862.30/Z MeV. Where the atomic mass m is expressed in terms of atomic weight, this becomes 3724.61/m MeV.

As matters now stand, neither the theoretical calculations nor the observations of the masses of the cosmic elements above hydrogen in the cosmic atomic series are sufficiently accurate to justify taking the secondary mass into consideration. The theoretical discussion of the masses of these elements will therefore be confined to the primary mass only, disregarding the small modification due to the secondary mass effect. For the same reasons, both the calculated and observed values in the comparisons that follow will be stated in terms of the nearest whole number of MeV. An exception has been made in the case of hydrogen, because the secondary mass of this element under normal conditions is relatively large, and the probability that it will be altered by changes in environmental conditions is relatively small. Since the mass of a material H² atom is 1.007405 on the atomic number scale, the mass of a cosmic H² atom is the reciprocal of this figure, or 0.99265 units. This is equivalent to 1848.61 MeV.

At this point it will be necessary to recognize that the combinations of motions that constitute the atoms of the elements, both material and cosmic, are capable of acquiring additional motion components of a different kind, each unit of which alters the mass of the atom by one atomic weight unit. It will be convenient to defer the detailed consideration of this new type of motion, which we will call a gravitational charge, until we are ready to discuss the entire class of motions to which it belongs, but for present purposes we need to note that each material element of atomic number Z exists in a number of different forms, or isotopes, each of which has atomic weight 2Z+G, where G is the number of gravitational charges. The normal mass of the corresponding cosmic isotopes is the reciprocal of 2Z+G, but when the cosmic atoms enter the material environment they are able to add gravitational charges of the material (positive) type to the cosmic combinations of motions (including the gravitational charges of the cosmic (negative) type, if any). Each such material type charge adds one atomic weight unit, or 931.15 MeV, to the isotopic mass of the cosmic atom.

In the first edition it was recognized that the incoming cosmic rays would consist primarily of c-hydrogen, but at that time there were no observational indications of any cosmic ray particles in the hydrogen mass range, and the extension of the theoretical development to the questions of scalar motion in two dimensions and the lifetimes of the cosmic atoms had not yet been undertaken. The exact theoretical status of the incoming c-hydrogen atoms was therefore still uncertain. Inasmuch as the "mesons" then known were mainly cosmic elements of the inert gas series, it was concluded that the original c-hydrogen atoms must be stripped of their one-dimensional rotation and reduced to the two-dimensional (inert gas) condition almost immediately on crossing the speed boundary. In the meantime, however, the investigators have been able to extend their observations to earlier portions of the decay path, and they have recently discovered a short-lived particle with a mass that is reported as 3695 MeV. Identification of this 3695 "psi" particle as a "cosmic deuteron with two material isotopic charges"66 by Ronald W. Satz was the crucial theoretical advance that opened the door to a clarification of the status of cosmic hydrogen. This now enables us to close the gap, and trace the progress of the cosmic atom from its entry into the material sector in the form of cosmic hydrogen (c-H²) all the way to its final transformation into material particles.

For reasons which will be explained in Volume II, the cosmic atom has an effective translational motion in two of the three scalar dimensions at the neutral point where it enters the material half of the universe. The terrestrial environment, into which the observable cosmic atoms enter, is favorable for the acquisition of gravitational charges of the material type. Each of the two dimensions of motion therefore adds such a charge. The two charges acquired by the c-H² atom add 1862.30 MeV to the 1848.61 MeV mass equivalent of the cosmic mass, bringing the total mass of this, the first of the theoretical cosmic ray particles, to 3710.91 MeV. The mass of the newly discovered psi particle is reported as 3695 MeV. In view of the many uncertainties involved in the observations, this can be regarded as consistent with the theoretical value.

As mentioned earlier, the particle lifetimes are correlated with the dimensions of the spatial motions that the particles acquire, the translational motion and the gravitational charges. While the theoretical situation has not yet been clarified, we find empirically that the life of a particle with two dimensions of scalar motion in space and no gravitational charge is about 10⁻¹⁶ seconds, approximately the natural unit of time. Each dimension of motion modifies the unit of time applicable to the particle life by approximately 10⁻⁸, while each gravitational charge modifies the unit by about 10⁻². On this basis, the following approximate lifetimes are applicable:

Dimensions   Charges   Life (sec)
3            0         10⁻²⁴
2            2         10⁻²⁰
2            0         10⁻¹⁶
1            1         10⁻¹⁰
1            0         10⁻⁸
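The tabulated values follow a simple pattern, and a minimal sketch makes the pattern explicit. The text itself stresses that the exact modifying factor has not been identified, so the model below is only the order-of-magnitude rule implied by the table, nothing more:

    # Rough lifetime model implied by the table above: starting from the
    # ~1.521e-16 s natural unit of time for two-dimensional uncharged motion,
    # each scalar dimension added or removed shifts the life by ~10^8, and
    # each gravitational charge by ~10^-2. A sketch, not part of the text.

    NATURAL_TIME = 1.521e-16  # seconds (value from Chapter 13)

    def approx_lifetime(dimensions, charges):
        """Order-of-magnitude lifetime for the given dimensions of scalar
        motion in space and number of gravitational charges."""
        return NATURAL_TIME * 10.0 ** (-8 * (dimensions - 2) - 2 * charges)

    for dims, chg in [(3, 0), (2, 2), (2, 0), (1, 1), (1, 0)]:
        print(f"{dims} dims, {chg} charges: ~{approx_lifetime(dims, chg):.0e} s")

The five printed values reproduce the five rows of the table.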

The reported lifetime of the 3695 psi particle is in the neighborhood of 10⁻²⁰ seconds, which agrees with the theoretical determination of the dimensions of motion on which the mass calculation is based.

The general decay pattern defined in the preceding pages indicates that c-H² should undergo an ejection of positive rotational displacement, converting it to c-He³. From the expression 3724.61/m, we obtain 1242 MeV as the rotational mass of c-He³, to which we add the mass of two gravitational charges for a total of 3104 MeV. The observed 3695 particle decays to another psi particle with a reported mass of 3105 MeV, and a life of about 10⁻²⁰ seconds. This second particle can clearly be identified with the c-He³ atom. Thus the observed masses, the lifetimes, and the decay pattern all confirm the basic identification of the c-hydrogen particle by Satz.

Another decay of the same kind would produce c-He⁴, and it is probable that some particles of this composition are occasionally formed. Indeed, any cosmic atom between c-hydrogen and c-krypton may appear in the cosmic ray products. But the probabilities favor certain specific cosmic elements, and these are the products that constitute the normal decay sequence we are now examining.

The speeds of the cosmic rays and their decay products decrease rapidly in the material environment, and by the time the decay of c-He³ is due the additional energy loss in the decay process is usually sufficient to drop the cosmic residue into the speed range below unity. The consequent elimination of the motion in the second scalar dimension results in a double decay which adds two atomic weight units to the cosmic atom. The product is c-Li⁵. Further increases in the inverse mass of the residual cosmic atom by successive additions of single atomic weight units would be possible, but the probabilities favor larger steps as the material equivalent of a cosmic unit increment continues decreasing. The one unit increment in each of the two steps from c-He³ to c-Li⁵ is therefore followed by a series of increments that are uniformly one atomic weight unit larger in each successive decay, except for the step between c-N¹⁴ and c-Ne²⁰, where the increase over the size of the previous increment is two units. On this basis, the two 1-unit increments that produce c-Li⁵ are followed by a 2-unit increment to c-Be⁷, a 3-unit increment to c-B¹⁰, a 4-unit increment to c-N¹⁴, and a 6-unit increment to c-Ne²⁰.

These decay products are not capable of retaining both of the gravitational charges of their precursors, but they keep one of the charges, and all of the cosmic elements identified as members of this section of the decay sequence have masses which include a 931.15 MeV gravitational increment, as well as the basic mass equivalent of the cosmic element, 1862.30/Z MeV. The indicated life of a cosmic atom with one gravitational charge, after dropping into the range of one-dimensional motion, is about 10⁻¹⁰ seconds. These theoretical masses and lifetimes are in agreement with the observed properties of the class of transient cosmic ray particles known as hyperons, as indicated in the following tabulation:

                       Mass (MeV)
Element   Particle   Calculated   Observed   Lifetime (sec)
c-Li⁵     omega        1676         1673     1.30 x 10⁻¹⁰
c-B¹⁰     xi           1304         1321     1.67 x 10⁻¹⁰
c-N¹⁴     sigma        1197         1197     1.48 x 10⁻¹⁰
c-Ne²⁰    lambda       1117         1116     2.52 x 10⁻¹⁰

The masses given are those of the negatively charged particles. Positive electric charges and other variable factors introduce a "fine structure" into the numerical values of the properties of the particles that has not yet been studied in the context of the Reciprocal System.

The observed decay pattern is in agreement with the theory, so far as its general direction is concerned; that is, all of the members of the series decay in such a manner that the eventual result is c-neon. It is still uncertain, however, whether the decay always passes through all of the stages identified with the normal sequence, or whether this sequence is subject to modification, either by omission of one or more of the steps, or by a variation in the size of the ejections of time displacement. The c-Be⁷ atom, mass 1463 MeV, for instance, is not listed in the tabulation, as its identification with an observed particle of mass 1470 MeV is rather uncertain. This does not preclude its definite identification as a decay product eventually. It may be noted in this connection that the omega particle (c-Li⁵) was found only as a result of an intensive search stimulated by a theoretical prediction. However, the fact that the last three members of this hyperon series (which were the first to be discovered and are still the best known) are separated by only one decay step suggests that there is little, if any, deviation from the normal sequence in those cases where the full range of decay from c-He to c-Ne is involved.

When we examine the properties of gravitational charges at a later stage of the theoretical development we will find that the stability of these charges is a function of the atomic number. The mathematical expression of this relation which we will derive from theory indicates that the stability limit for a double gravitational charge in the terrestrial environment falls between the material equivalents of c-He³ and c-Li⁵. This accounts for the previously mentioned fact that c-Li⁵ and the elements above it in the cosmic series are incapable of retaining two gravitational charges. But the center of the zone of stability for these elements is closer to the +1 isotope (one gravitational charge) than to the zero isotope (the basic rotation), and for this reason they are all singly (gravitationally) charged, as indicated in the preceding discussion. From c-Si²⁷ upward in the cosmic series, the center of the zone of stability is closer to the zero isotope, and these elements carry no gravitational charges.

Without the gravitational charge, the mass of c-Si²⁷, the decay product resulting from a 7-unit addition to c-Ne²⁰, is 137.95 MeV, and the low speed lifetime is about 10⁻⁸ seconds. The corresponding observed particle is the pion, with measured mass 139.57 MeV, and lifetime 2.602 x 10⁻⁸ seconds. Pions are frequently reported as products of observed cosmic ray events initiated by primaries. As we will see in the next chapter, such production is quite feasible where there is a violent contact of some kind, with the release of a large amount of energy, but direct production of pions in decay is not consistent with the decay pattern as derived from theory. The apparent direct production is, however, understandable when the relative lifetimes of the pion and the earlier decay products are taken into consideration.
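The mass figures quoted throughout this chapter all follow from the 3724.61/m rule plus 931.15 MeV per material gravitational charge, so they can be verified mechanically. The following short Python sketch reproduces the chapter's values (secondary mass neglected, as the text specifies):

    # Sketch of the mass rule used in this chapter: a cosmic atom of atomic
    # weight m has a basic mass equivalent of 3724.61/m MeV, and each
    # material gravitational charge adds 931.15 MeV.

    UNIT_MEV = 3724.61    # mass equivalent of unit atomic weight, inverted
    CHARGE_MEV = 931.15   # one material gravitational charge

    def cosmic_mass(atomic_weight, charges):
        """Primary mass (MeV) of a cosmic atom carrying the given number
        of material gravitational charges (secondary mass neglected)."""
        return UNIT_MEV / atomic_weight + CHARGE_MEV * charges

    print(round(cosmic_mass(3, 2)))      # c-He3 + 2 charges: 3104 (psi, obs. 3105)
    print(round(cosmic_mass(5, 1)))      # c-Li5 + 1 charge: 1676 (omega, obs. 1673)
    print(round(cosmic_mass(20, 1)))     # c-Ne20 + 1 charge: 1117 (lambda, obs. 1116)
    print(round(cosmic_mass(27, 0), 2))  # c-Si27: 137.95 (pion, obs. 139.57)
    print(round(cosmic_mass(35, 0), 2))  # c-Ar35: 106.42 (muon, obs. 105.66)

The c-H² case is the stated exception: its secondary mass is not neglected, so its basic value is the quoted 1848.61 MeV rather than 3724.61/2.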

There is no reason to believe that normal decay in flight will result in any change of direction. Ejection of massless particles will take care of the conservation requirements without the necessity of directional modification. Because the entire decay process up to the production of the pion occupies only a very short time compared to the lifetime of the pion itself, it is unlikely that the usual methods of observation will be able to distinguish between a pion and a cosmic particle undergoing a complete decay to the pion status in flight. In the kind of situation mentioned in Chapter 14, for instance, where a pion apparently leaves the scene of an event in a continuation of the direction of motion of the primary, and carries the bulk of the original energy, leading to the conclusion that the primary decayed directly to the pion, there is nothing in the observations that is inconsistent with the theoretical conclusion that, during a short interval at the beginning of the motion attributed to the pion, the cosmic particle was actually going through the preceding steps in the decay sequence.

The next event in this decay sequence, the decay of the pion, involves an 8-unit increment to c-Ar35. Again the zero isotope is the stable form. This leads to a mass of 106.42 MeV and a theoretical life equal to that of the pion. The observed particle is the muon, with mass 105.66 MeV, formed by decay of the pion, as required by the theory. Both the decay to c-Si27 (the pion) and the subsequent decay to c-Ar35 (the muon) continue the same pattern of a uniform one-unit increase in the cosmic mass increment in each succeeding event that was followed in the earlier decay steps. But inasmuch as c-argon is equivalent to helium, which, from the material standpoint, is only one step away from the neutron that is the end product of the decay process, the following ejection of positive displacement carries the cosmic atom to the final cosmic structure, c-krypton. Each of the two rotating systems of the c-Kr atom is rotationally equivalent to a neutron, and converts to that particle. Since c-Kr is massless (that is, its observed mass is merely the mass equivalent of the inverse mass of the cosmic sector) the conversion products are massless neutrons, or their equivalents, pairs of neutrinos and positrons. Some of the aspects of this conversion process will be given further consideration in Chapter 17. Unlike the decay events, which involve changes in the atomic structure, and therefore do not take place until they must, the conversion of the c-krypton rotations to massless neutrons is merely a change in scalar direction to conform with the new environment, and it takes place as soon as it can do so. Consequently, the c-krypton atom, as such, has a zero lifetime. As soon as the particle ejection from c-argon takes place, the conversion to massless neutrons begins. In view of the non-appearance of c-krypton, the apparent lifetime of c-argon, the muon, is the sum of its own proper lifetime and the conversion time. The value reported from observation is 2.20 x 10⁻⁶ seconds. A theoretical explanation of this value is not yet available, but it is probably significant that the difference between this and the life of an uncharged particle moving in one dimension, about 10⁻⁸ seconds, is approximately that associated with a gravitational charge.
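The 137.95 and 106.42 MeV figures quoted for these two decay products are the rotational term alone, with the gravitational increment omitted; a brief sketch of the arithmetic (Python, for illustration only, assuming as before that Z is half the atomic weight so that 1862.30/Z = 3724.61/A):

    # Gravitationally uncharged cosmic elements: mass is the rotational
    # term alone, 3724.61/A MeV, where A is the cosmic atomic weight.
    def uncharged_mass(atomic_weight):
        return 3724.61 / atomic_weight

    print(f"c-Si27 (pion): {uncharged_mass(27):6.2f} MeV vs. 139.57 observed")
    print(f"c-Ar35 (muon): {uncharged_mass(35):6.2f} MeV vs. 105.66 observed")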
The absence of the c-krypton atom from the decay process is not due to any abnormal instability of this cosmic atom itself, but to the preference for the alternate scalar direction that prevails in the material environment.

In the reverse process, where the directional preference favors the c-krypton atom over the neutron alternate, it plays a prominent part, as we will see in Chapter 16. In those cases where the incoming cosmic atom is not in the normal decay sequence, it ejects enough positive displacement in one or two decay events to reach one of the positions in that sequence, after which it follows the normal path in the same manner as the products of the decay of cosmic hydrogen. However, these heavier elements are beyond the stability limit for two gravitational charges in a low energy environment, and consequently they do not form structures analogous to the psi particles. This has the effect of increasing the probability that some of the decay products that normally carry one gravitational charge will occasionally be found in the uncharged condition. The one allowable charge would result in an asymmetrical structure during the time that the speed of these particles is in the two-dimensional range, and if they are observed at this stage they are likely to be uncharged (gravitationally). The uncharged lifetime for a particle moving two-dimensionally is approximately one natural unit of time, or about 10⁻¹⁶ seconds. Such a life is the most definite indication that an observed particle is in this early stage of the decay process. For example, the eta particle, with observed mass 549 MeV and a life of 0.25 x 10⁻¹⁶ seconds, is probably a gravitationally uncharged c-Be7 atom, which theoretically has a mass of 532 MeV. A more questionable identification equates the rho particle with c-Li5. The theoretical mass in this case is 745 MeV, and the observed values range from 750 to 770, the more recent measurements being the higher. The rho lifetime has been reported as about 10⁻²³ seconds, but this is too short to be a decay time. It is evidently a fragmentation time, a concept which will be explained in connection with the discussion of particle production in the accelerators. Both c-Li5 and c-Be7 are in the normal decay sequence, a fact which lends some support to the foregoing identifications. The reported observations of particles that are outside the normal decay sequence will be given some further consideration in the next chapter. If the incoming cosmic atom is above c-krypton in the cosmic atomic series, so that it cannot enter the normal decay sequence in the manner of the elements of lower atomic number, it must nevertheless separate into parts at the end of the appropriate unit of time, and since it cannot eject massless neutrons as the lighter atoms do, it fragments into smaller units, which then follow the normal decay path.
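Both identifications suggested above rest on the same uncharged-mass relation; a short sketch, for illustration:

    # Gravitationally uncharged decay-sequence atoms observed at the
    # two-dimensional stage (lifetimes of the order of 1e-16 seconds).
    for name, a, observed in [("eta -> c-Be7", 7, "549"),
                              ("rho -> c-Li5", 5, "750-770")]:
        print(f"{name}: theoretical {3724.61 / a:4.0f} MeV, observed {observed}")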

CHAPTER 16

Cosmic Atom Building

In essence, the cosmic ray decay is a process whereby high energy combinations of motions that are unstable at speeds less than that of light are converted, in a series of steps, to low energy structures that are stable at the lower speeds. A requirement that must be met in order to make the process feasible is the existence of a low energy environment that can serve as a sink for the energy that must be withdrawn from the cosmic structures.

Where a high energy environment is created, either fortuitously or deliberately, the decay process is reversed, and cosmic elements of lower atomic number are produced from cosmic elements of higher atomic number, or from material particles, kinetic energy being absorbed from the environment to meet the additional energy requirements. The first step in the reverse process is the inverse of the last step in the decay process: a neutron equivalent is converted into one of the rotating systems of a cosmic krypton atom by inversion of the orientation with respect to the space-time zero points. It is convenient, from a practical standpoint, to work with electrically charged particles. The standard technique in the production of transient particles therefore is to use protons, or hydrogen atoms which fragment to protons, as the "raw material" for cosmic atom building. In the high energy environment that is created in the production apparatus, the particle accelerators, the proton, M 1-1-(1), ejects an electron, M 0-0-(1), and then separates into two massless neutrons, M ½-½-0, each of which converts to a half c-Kr atom (that is, one of the rotating systems of that atom) by directional inversion. These half c-Kr atoms cannot add displacement and become muons, because they are unable to dispose of the proton mass, which persists as a gravitational charge (half of the normal size, as the proton has only one rotating system). They remain as particles of a distinct type, each with half of the 52 MeV c-Kr mass and half of the 931 MeV mass of a normal gravitational charge, the total being 492 MeV. They can be identified as K mesons, or kaons, the observed mass of which is 494 MeV.

As can be seen from the foregoing, the initial production of transient (cosmic) particles in the accelerators is always accompanied by a copious production of kaons. Each of the subsequent steps in the cosmic atom building process that requires the addition of mass, such as the production of c-neon (the lambda particle) from c-silicon (the pion), and the production of the psi-3105 particle from one of the heaviest of the hyperons, is similar to the initial cosmic particle production, except that the proton mass is added to the product as a gravitational charge instead of forming a kaon. Where kaons appear in connection with the production of these particles, they are the result of secondary processes. Furthermore, kaons are not produced in the decay processes, either in the cosmic rays or in the accelerators, because the decay takes place on a massless basis. A few kaons appear in the cosmic ray decay events, but they are not decay products. They are produced in collisions of cosmic rays with material atoms under conditions such that a temporary excess of energy is created, in miniature equivalents of the particle accelerators, we may say. If the reverse process, the atom building process, is carried beyond c-hydrogen, the final particle vanishes into the cosmic sector. Otherwise the cosmic atom building which takes place in the material sector is eventually succeeded by a decay that follows the normal path back to the point of reconversion into massless neutrons. Where the excess kinetic energy in the environment is too great to permit the decay to proceed to completion, the production and decay processes arrive at an equilibrium consistent with the existing energy level.
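The kaon figure quoted here is simply half of each constituent: one of the two c-Kr rotating systems plus a half-sized gravitational charge. A minimal sketch of the arithmetic (the 51.73 MeV value for the full c-Kr rotational mass is given later in this chapter):

    C_KR_FULL = 51.73    # MeV, rotational mass of a complete c-Kr atom
    GRAV      = 931.15   # MeV, one full gravitational charge

    kaon = C_KR_FULL / 2 + GRAV / 2
    print(f"kaon: {kaon:.1f} MeV calculated (492 as rounded in the text) vs. 494 observed")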

In such a high-energy environment, the life of a particle may be terminated by a fragmentation process before the unit time limitation takes effect. This is simply a process of breaking the particle into two or more separate parts. The degree of fragmentation depends on the energy of the disruptive forces, and at the lower energy levels the products of fragmentation of any transient particle are mainly pions. At higher energies kaons appear, and in the fragmentation of hyperons the mass of the gravitational charges may come off in the form of neutrons or protons. Corresponding to fragmentation is the inverse process of consolidation, in which particles of smaller mass join to form particles of larger mass. Thus a particle with a mass measured as 1020 MeV has been observed to fragment into two kaons. The 36 MeV excess mass goes into kinetic energy. Under appropriate conditions, the two kaons may consolidate to form a 1020 MeV particle, utilizing 36 MeV of kinetic energy to supply the necessary addition to the mass of the two smaller particles.

The essential difference between the two pairs of processes (building and decay on the one hand, fragmentation and consolidation on the other) is that building and decay proceed from higher to lower cosmic atomic number, and vice versa, whereas fragmentation and consolidation proceed from greater to less equivalent mass per particle, and vice versa. The decay process as a whole is a conversion from cosmic status to material status, and the atom building in the particle accelerators is a partial and temporary reversal of this process, but fragmentation and consolidation are merely changes in the state of the atomic constituents, a process that is common in both sectors. The change in cosmic atomic number due to fragmentation may be either upward or downward, in contrast to the decay process, which always results in an increase in the cosmic atomic number. This difference is a consequence of the manner in which the mass of the gravitational charges enters into the respective processes. For example, the decay of c-Si, the pion, is in the direction of c-Kr. On the other hand, the kaon, a gravitationally charged half c-Kr atom, cannot decay into any other cosmic particle, as it is at the end of the line so far as decay is concerned, but it can fragment into any combination of particles whose combined mass does not exceed the 492 MeV kaon mass. Fragmentation into pions reverses the direction of the decay. If the maximum conversion to pions (mass 138 MeV each) takes place, three pions are produced. Frequently, a larger part of the total energy goes into the kinetic energy of the products, and the production of pions decreases to two. The existence of both 2-pion and 3-pion events has been given a great deal of attention because of the bearing that they have on various hypotheses as to the laws that govern particle transformations. The present study indicates, however, that if the basic requirement, an excess energy environment, is met, so that conversion of the kaon to the material status is prevented, there are no restrictions on the fragmentation reactions, other than those considerations that are applicable to matter and energy in general in the material sector of the universe. The study of the transient particles, which had its origin in the observation of the cosmic rays, is now carried on mainly in the accelerators. It is assumed that the same particles and the same processes are involved, and that the details thereof can be more conveniently ascertained where the conditions are subject to control.
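The energy bookkeeping in these fragmentation and consolidation events is direct subtraction; a sketch using the figures quoted above:

    KAON, PION = 492, 138                 # MeV, the masses used in the text

    # Fragmentation of the 1020 MeV particle into two kaons:
    print("excess to kinetic energy:", 1020 - 2 * KAON, "MeV")    # 36 MeV

    # Maximum conversion of one kaon into pions:
    n = KAON // PION                      # largest pion count within 492 MeV
    print(n, "pions, with", KAON - n * PION, "MeV left as kinetic energy")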

This is true, to a degree, of course, but the situation in the accelerators is much more complex than that to which the incoming cosmic rays are subject. The atom building process does not merely invert the decay process. The actual inverse of the cosmic ray decay is a situation in which material elements enter a cosmic (high energy) environment and eject negative displacement in order to build up into structures that can ultimately convert to the cosmic status. The cosmic entities initially produced in this process are sub-atomic particles. The accelerators, however, produce the cosmic elements that are closest to conversion to the material status (c-Kr, etc.), and then drive them back up the decay path by creating temporary energy concentrations in the material (low energy) environment. Because of the uneven character of these concentrations of energy, cosmic atom building in the accelerators is accompanied by numerous events of the inverse (decay) character, and by various fragmentation and consolidation processes that involve neither building nor decay. Many of the phenomena observed in the accelerator experiments are therefore peculiar to the kind of environment existing in the accelerators, and are not encountered in either the cosmic ray decay or in normal cosmic atom building. It should also be kept in mind that the actual observations of these events, the "raw" data, have little meaning in themselves. In order to acquire any real significance they must be interpreted in the light of some kind of a theory as to what is happening, and in such areas as particle physics the final conclusion is often ten percent fact and ninety percent interpretation. The theoretical findings of this work are in agreement with the experimental results, and they also agree with the conclusions of the experimenters in most cases, but it can hardly be expected that the agreement will be complete where there are so many uncertainties in the interpretation of the experimental results.

The sequence of events in cosmic atom building in the accelerators has been observed experimentally in the so-called "resonance" experiments. These involve accelerating two streams of particles, stable or transient, to extremely high speeds and allowing them to collide. The relation of the amount of interaction, the "cross-section," to the energy involved is not constant, but shows peaks or "resonances" at certain fairly well-defined values. This result is interpreted as indicating the production of very short-lived particles (indicated lifetime about 10⁻²³ seconds) at the energies of the resonance peaks. This interpretation is confirmed in this work by the agreement of the sequences of resonance particles with the theoretical results of the cosmic atom building process. Because of the difference in the nature of the processes, the sequence of elements in cosmic atom building is not the inverse of the decay sequence, although most of the decay products above c-He are included. As brought out in Chapter 15, the decay process is essentially a matter of ejecting positive rotational displacement. There is also a decrease in equivalent mass, but the mass loss is a secondary effect. The primary objective of the process is to get rid of the excess rotational energy. In the atom building process in a high energy environment the necessary energy is readily available, and the essential task is to provide the required mass.
This is supplied in the form of c-Kr atoms, mass 51.73 MeV each. The full sequence of cosmic atoms in the building process therefore consists of a series of elements, the successive members of which differ by 52 MeV.

Aside from the lower end of the series, where two of the 52 MeV units are required per cosmic atomic weight unit, the only significant deviations from this pattern in the experimental results are that c-B9 is absent, while c-Ne (a member of the decay sequence) and c-O appear in lieu of, or in addition to, c-F. The complete atom building sequence is shown in Table 4.

TABLE 4
COSMIC ATOM BUILDING SEQUENCE

Atomic                  Atomic
Number     Element      Mass      51.73 n
  36       *c-Kr          52         52
  18       *c-Ar         103        103
  12        c-Mg         155        155
 (10)      *c-Ne         186
   9        c-F          207        207
  (8)       c-O          232
   7       *c-N          266        259
   6        c-C          310        310
   5       *c-B10        372        362
  4-½       c-B9         414        414
   4        c-Be8        466        466
  3-½      *c-Be7        532        517
                                    569
   3        c-Li6        621        621
                                    672
  2-½      *c-Li5        745        724

* member of decay sequence
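Both numerical columns of Table 4 can be regenerated from quantities already given: the element masses are 3724.61/A, and the increment column is simply n x 51.73. A sketch, for illustration (small rounding differences from the printed values are to be expected):

    # Rotational masses of the building-sequence elements, 3724.61/A MeV.
    elements = [("c-Kr", 72), ("c-Ar", 36), ("c-Mg", 24), ("c-Ne", 20),
                ("c-F", 18), ("c-O", 16), ("c-N", 14), ("c-C", 12),
                ("c-B10", 10), ("c-B9", 9), ("c-Be8", 8), ("c-Be7", 7),
                ("c-Li6", 6), ("c-Li5", 5)]
    for name, a in elements:
        print(f"{name:6s} {3724.61 / a:4.0f} MeV")

    # Cumulative c-Kr increments, n = 1 through 14.
    print([round(n * 51.73) for n in range(1, 15)])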

Most of the reported experimental results omit many of the steps in the full sequence. Whether this means that double or triple jumps are being made, or whether the intermediate stages have been missed by the investigators, is not yet clear. However, the most complete set of results, the "sigma" series, is close enough to the theoretical sequence to suggest that the build-up does, in fact, proceed step by step as indicated in Table 4. Regardless of any deviations from the normal sequence that may take place earlier, the first phase of the atom building process always terminates at c-Li5 (the omega particle, mass 1676 MeV) because, as is evident from the description of the steps in the cosmic ray decay, the motion must enter a second dimension in order to accomplish any further decrease in the cosmic atomic number. This requires a relatively large increase in energy, from 1676 to 3104 MeV. In the decay process there is no alternative, and the big drop in energy must take place, but in the reverse process the addition of energy in smaller amounts is made possible by reason of the ability of the cosmic atom to retain additional gravitational charges in an excess energy environment. The doubly (gravitationally) charged cosmic element of lowest energy within the atom building range is c-Kr, the first atom that can be formed from conversion of material particles.

The energy difference between doubly charged c-Kr and the last singly charged product, c-Li5, is substantial (238 MeV), and all of the cosmic atom building series theoretically include doubly charged c-Kr as well as singly charged c-Li5. There are, in fact, some intermediate stages. All but the last small increment of the mass required for the second charge is added in the form of c-Kr atoms (52 MeV each), as in building up the rotational mass, and this addition is accomplished in four steps. Similar inter-stages are possible between c-Be7 and c-Li6, and also between c-Li6 and c-Li5, where two c-Kr mass increments are required between the cosmic elements. Beyond doubly charged c-Kr, the regular sequence is again followed, with some omissions or deviations which, as mentioned earlier, may or may not represent the true course of events. At doubly charged c-Li5, mass 2607 MeV, the atom building process again reaches the one-dimensional limit, and a third charge is added in the same manner as the second, inaugurating a new series of resonances which extends to the neighborhood of the 3104 MeV required for the production of the first of the particles that have scalar motion in two dimensions.

Table 5 compares the theoretical and observed values of the masses of the particles included in the several series of resonances that have been reported. The agreement is probably about as close as can be expected in view of the difficulties involved in making the measurements. In more than a third of the total number of cases the measured mass is within 10 MeV of the theoretical value. It is also worth noting that in the only case where enough measurements are available to provide a good average value for an individual cosmic element, the 11 measurements on c-Li5, the agreement between this average and the theoretical mass is exact. All of the singly charged transient particles moving in only one dimension are stable against decay for about 10⁻¹⁰ seconds. However, they are extremely vulnerable to fragmentation under conditions such as those that prevail in the accelerators, and only the particles of lowest mass escape fragmentation long enough to decay. The lifetime of the heavier particles is limited by fragmentation to the absolute minimum, which appears to be the unit of time corresponding to three scalar dimensions of motion, or about 10⁻²⁴ seconds.
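The inter-stage masses referred to here are successive 51.73 MeV additions to the 1676 MeV omega, terminating at the doubly charged c-Kr value; a sketch of the arithmetic, for illustration:

    C_KR, GRAV = 51.73, 931.15

    omega = round(3724.61 / 5 + GRAV)          # singly charged c-Li5: 1676 MeV
    stages = [round(omega + k * C_KR) for k in range(5)]
    print(stages)                              # [1676, 1728, 1779, 1831, 1883]
                                               # Table 5 prints the last as 1882
    double_kr = round(C_KR + 2 * GRAV)         # doubly charged c-Kr: 1914
    print(double_kr, double_kr - omega)        # 1914, and the 238 MeV step cited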

TABLE 5
"BARYON RESONANCES"

c-Atomic                Grav.   Inter-           Mass
number    Element       charge  stage   Theor.   Obs.**       Obs.***

Sigma Series
  7       *c-N            1             1197     1190
  4        c-Be8          1             1397     1385
  3-½     *c-Be7          1             1463     1480
  3        c-Li6          1             1552
                                  a     1604     1620
  2-½     *c-Li5          1             1676     1670         1690
                                  a     1728     1750
                                  b     1779     1765
                                  c     1831                  1840
                                  d     1882                  1880
 36       *c-Kr           2             1914     1915
 18       *c-Ar           2             1965     1940
 12        c-Mg           2             2017     2000
 10       *c-Ne           2             2048     2030
  9        c-F            2             2069     2070
  8        c-O            2             2095     2080
  7       *c-N            2             2128     2100
  5       *c-B            2             2234     2250
  3        c-Li6          2             2483     2455
  2-½     *c-Li5          2             2607     2620
 10       *c-Ne           3             2979     3000

Lambda Series
 10       *c-Ne           1             1117     1115
  4        c-Be8          1             1397     1405
  3        c-Li6          1             1552     1520
  2-½     *c-Li5          1             1676     1670
                                  a     1728     1690
                                  b     1779     1750
                                  c     1831     1815
                                  d     1882     1830         1870-1860
 12        c-Mg           2             2017                  2020-2010
  8        c-O            2             2095     2100         2110
  4        c-Be8          2             2328     2350
  2-½     *c-Li5          2             2607     2585

Xi Series
  5       *c-B            1             1303     1320
  3        c-Li6          1             1552     1530
  2-½     *c-Li5          1             1676     1630
                                  c     1831     1820
 36       *c-Kr           2             1914     1940
 10       *c-Ne           2             2048     2030
  5       *c-B            2             2234     2250
  3        c-Li6          2             2483     2500

N Series
  3-½     *c-Be7          1             1463     1470
  3        c-Li6          1             1552     1535         1520
  2-½     *c-Li5          1             1676     1670         1688
                                  a     1728     1700
                                  b     1779     1780
                                  d     1882     1860
 14       *c-Si           2             1995     1990
 10       *c-Ne           2             2048                  2040
  8        c-O            2             2095                  2100
  6        c-C            2             2172                  2175
  5       *c-B            2             2234     2190, 2220
  2-½     *c-Li5          2             2607     2650
 10       *c-Ne           3             2979     3030

Delta Series
                                        1241     1236
  2-½     *c-Li5          1             1676     1670         1690
                                  d     1882     1890
 36       *c-Kr           2             1914     1910
 18       *c-Ar           2             1965     1950         1960
  6        c-C            2             2172                  2160
  3-½     *c-Be7          2             2394     2420
 36       *c-Kr           3             2845     2850

* Decay sequence
** Well-established resonances
*** Less certain resonances

In the tabulations of particle data in the current scientific literature, the information with respect to the series of resonances thus far discussed is presented under the heading of "Baryon Resonances." A further classification of "Meson Resonances" gives similar information concerning particles that were observed by a variety of other techniques. These are, of course, entities of the same nature (cosmic elements in the decay range), and largely the same elements, but because of the wide variations in the conditions under which they were produced the meson list includes a number of additional elements. Indeed, it includes all of the elements of the regular atom building sequence (with c-Ne and c-O substituted for c-F, as previously noted), and one additional isotope, c-C11. The masses derived from the experiments are compared with the theoretical masses of the cosmic elements in Table 6. The names currently applied to the observed particles have no significance, and have been omitted. In preparing this table, the observed particles were first assigned to the corresponding cosmic elements, an assignment that could be made without ambiguity, as the maximum experimental deviations from the theoretical masses are, in all but a very few instances, considerably less than the mass differences between the successive elements or isotopes. On the assumption that the deviations of the reported values from the true masses of the particles are due to causes whose effects are randomly related to the true masses, the individual values were averaged for comparison with the theoretical masses. The close correlation between the two sets of values not only confirms the status of these observed particles as cosmic elements, but also validates the assumption of random deviations, on which the averaging was based. Presumably these deviations are, in part, due to inaccuracies in obtaining and processing the experimental data, but they may also include a random distribution of differences of a real character: more of the "fine structure" which, as previously noted, has not yet been studied in the context of the Reciprocal System. The averaged values are shown in parentheses.

Where only single measurements are available, the deviations from the theoretical values are naturally greater, but they are generally within the same range as those of the individual values that enter into the averages. Longer lived decay products such as c-Ne and c-N are not usually classified with the resonances, but they have been included in the table to show the complete picture. The gaps still remaining in the table will no doubt be filled as further experimental work is done. Indeed, many of these gaps, particularly in the upper portion of the mass range, can be filled immediately, simply by consolidating Tables 5 and 6. The difference between these two sets of resonances is only in the experimental procedures by which the reported values were derived.

TABLE 6
"MESON RESONANCES"

c-Atomic                 Grav.   Inter-           Mass
number    Element        charge  stage   Theor.   Obs.      Individual Values

  3        c-Li6           0             621
                                   a     673      700
  2-½     *c-Li5           0             745      (760)     750, 770
                                   a     797      784
                                   d     952      (951)     940, 953-958
 36       *c-Kr            1             983      (986)     970, 990, 997
 18       *c-Ar            1             1034     (1031)    1020, 1033, 1040
 12        c-Mg            1             1086     (1090)    1080, 1100
 10       *c-Ne            1             1117     1116
  8        c-O             1             1164     (1165)    1150, 1170-1175
  7       *c-N             1             1197     1197
  6        c-C12           1             1241     (1240)    1237, 1242
  5-½      c-C11           1             1270     (1274)    1265, 1270, 1286
  5       *c-B10           1             1303     1310
  4-½      c-B9            1             1345
  4        c-Be8           1             1397
  3-½     *c-Be7           1             1463     (1455)    1440, 1470
                                   a     1515     1516
  3        c-Li6           1             1552     1540
                                   a     1604     (1623)    1600, 1645
  2-½     *c-Li5           1             1676     (1674)    1660, 1664-1680, 1690
                                   b     1779     (1773)    1760, 1765-1795
                                   c     1831     (1840)    1830, 1850
 36       *c-Kr            2             1914     1930
  8        c-O             2             2095     2100
  5       *c-B10           2             2234     2200
  4-½      c-B9            2             2276     2275
  4        c-Be8           2             2328     2360
  3-½     *c-Be7           2             2394     2375
 36       *c-Kr            3             2845     2800
 36        ½ c-Kr (kaon)   1-½           1423     (1427)    1416, 1421, 1430, 1440

* Decay sequence
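The assignment-and-averaging procedure described before the table is mechanical enough to state explicitly. A sketch, for illustration, using the individual measurements from the last line of Table 6 (nearest-value assignment, followed by a simple mean):

    # Assign each measurement to the nearest theoretical mass, then average
    # the measurements falling on each theoretical element.
    theoretical = {"c-Be7 (2 charges)": 2394, "kaon + full charge": 1423}
    measured = [1416, 1421, 1430, 1440]

    groups = {}
    for m in measured:
        name = min(theoretical, key=lambda k: abs(theoretical[k] - m))
        groups.setdefault(name, []).append(m)

    for name, vals in groups.items():
        print(name, "average", round(sum(vals) / len(vals)),
              "theoretical", theoretical[name])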

All of the transient particles, irrespective of the category to which they are currently assigned, are cosmic elements or isotopes, with or without gravitational charges of the material type. The absence of singly (gravitationally) charged particles corresponding to c-B9 from the list of observed resonances is rather conspicuous, particularly since the similar particle of twice this atomic weight, c-F18, is also missing, as noted earlier. The reason for this anomaly is still unknown. The last particle listed in Table 6 is a kaon, one of the two rotating systems of a c-Kr atom, with a full gravitational charge in addition to the half-sized charge that it normally carries. This particle has the same relation to the normal kaon that the atoms of the doubly charged series in Tables 5 and 6 bear to the corresponding singly charged atoms.

In the first edition it was suggested that some of the cosmic ray particles entering the material sector might be cosmic chemical compounds rather than single atoms. In the light of the more complete information now available with respect to the details of the inter-regional transfer of matter, this possibility must now be excluded, but short-lived associations between cosmic and material particles, and perhaps, in some cases, between cosmic particles, are feasible, and evidence of some such associations has been obtained. For example, the lambda particle (c-neon) is reported to participate in a number of combinations with material elements, called hyperfragments, which disintegrate after a brief existence. The current view is that the lambda, which is assumed to be a sub-atomic particle, replaces one of the "nucleons" in the material atom. However, we find (1) that the material atom is not composed of particles, (2) that there are no nucleons, and (3) that these transient particles are full-sized atoms, not sub-atomic particles. The hyperfragment therefore cannot be anything more than a temporary association between a material atom and a cosmic atom.

The new findings as to the nature of the transient particles, and their production and decay, do not negate the results of the vast amount of work that has been done toward determining the behavior characteristics of these particles. As stated earlier in this chapter, these theoretical findings are generally consistent not only with the actual experimental results, but also with the experimenters' ideas as to what the raw data (the various "tracks," electrical measurements, counter readings, etc.) signify with respect to the existence and behavior of the different transient particles. But what appears to be an immense amount of experimental data actually contributes very little toward an explanation of the nature of these particles, and their place in the physical universe; it merely serves to define the problem. As expressed by V. F. Weisskopf, in a review of the situation, "The present theoretical activities are attempts to get something from almost nothing." Much of the information derived from observation is ambiguous, and some of it is definitely misleading. The experimentally established facts obviously have a bearing on the problem, but they are too limited in their scope to warn the investigators that they cannot be fitted into the pattern to which scientists are accustomed. For instance, in the world of ordinary matter, a particle mass less than that of the lightest isotope of hydrogen indicates that the particle belongs to the sub-atomic class.
But when the effective masses of the transient particles, as determined by experiment, are interpreted according to this familiar pattern, they give a totally false account of the nature of these entities.

Thus, while the determination of the particle masses adds to the total amount of factual information available, its practical effect is to lead the investigators away from the truth rather than toward it. The following statements by Weisskopf in his review indicate that he suspected that some such misinterpretation of the empirical data is responsible for the confusion that currently surrounds the subject:

"We are exploring unknown modes of behavior of matter under completely novel conditions.... It is questionable whether our present understanding of high-energy phenomena is commensurate to the intellectual effort directed at their interpretation."67

Availability of a general physical theory which enables us to deduce the nature and characteristics of the transient particles in full detail from theoretical premises, rather than having to depend on physical observation of a very limited scope, now opens the door to a complete understanding. The foregoing pages have provided an account of what the transient particles are, where the particles of natural origin (the cosmic rays) come from, what happens to them after they arrive, and how they are related to the transient particles produced in the accelerators. The aspects of these particles that have been so difficult to explain on the basis of conventional theory (their multiplicity, their extremely short lifetimes, the high speed and great energies of the natural particles, and so on) are automatically accounted for when their origin and general nature is understood. Another significant point is that, on the basis of the new theoretical explanation, the cosmic rays have a definite and essential place in the mechanism of the universe. One of the serious weaknesses of conventional physical theory is that it is unable to find roles for a number of the recently discovered phenomena, such as the cosmic rays, the quasars, the galactic recession, etc., that are commensurate with the magnitude of the phenomena, and is forced to treat them as products of exceptional or abnormal circumstances. In view of the wide extent of the phenomena in question, and their far-reaching consequences, such characterization is clearly inappropriate. The theoretical finding that these are stages of the cosmic cycle through which all matter eventually passes now eliminates this inconsistency, and identifies each of these phenomena with a significant phase of the normal activity of the universe. The existence of a hitherto unknown second half of the universe is the key to an understanding of all of these currently misinterpreted phenomena, and the most interesting feature of the cosmic rays is that they give us a fleeting glimpse of the entities of which the physical objects of that second half, the cosmic sector, are constructed.

CHAPTER 17

Some Speculations

The Reciprocal System of theory consists of the fundamental postulates, together with everything that is implicit in the postulates; that is, everything that can legitimately be derived from those postulates by logical and mathematical processes without introducing anything from any other source.

It is the theory as thus defined that can claim to be a true and accurate representation of the observed physical universe, on the grounds specified in the earlier pages. The conclusions stated in this and related publications by the present author and others are the results of the efforts that have thus far been made to develop the consequences of the postulates in detail. However, the findings that have emerged from the early phases of this theoretical development call for some drastic modifications of the prevailing conceptions of the nature of some of the basic physical entities and phenomena. Such conceptual changes are not easily made, and the persistence of previous habits of thought makes it difficult, not only for the readers of these works, but also for the investigators themselves, to grasp the full implications of the new ideas when they first make their appearance. The existence of scalar motion in more than one dimension, which plays an important part in the subject matter of the two preceding chapters, is a good example. It is now clear that such motion is a necessary and unavoidable consequence of the basic postulates, and there is no inherent obstacle that would stand in the way of a complete and detailed understanding of its nature and effects if it could be considered in isolation, without interference from previously existing ideas and beliefs. But this is not humanly possible. The minds into which this idea enters are accustomed to thinking along very different lines, and inertia of thought is similar to inertia of matter, in that it can be fully overcome only over a period of time.

Even the simple concept of motion that is inherently scalar, and not merely a vectorial motion whose directional aspects are being disregarded, involves a conceptual change of no small magnitude, and the first edition of this work did not go beyond this point, except in specifying that the increase in the speed of recession of the galaxies is linear beyond the gravitational limit, a tacit assertion that the increment is scalar. Subsequent studies of high energy astronomical phenomena carried the development of thought on the subject a step farther, as they led to the conclusion that the quasars are moving in two dimensions. However, it took additional time to achieve a recognition of the fact that unit scalar speed in three dimensions constitutes the line of demarcation between the region of motion in space and the region of motion in time, and the first publication in which this point was brought out specifically was Quasars and Pulsars (1971). Now we further find that the same considerations also apply to the incoming cosmic particles. At the moment, it appears that the full scope of the subject has been covered, but past experience does not encourage a positive statement to that effect. This experience demonstrates how difficult it is to attain a comprehensive understanding of the various aspects of any new item of information that is derived from the basic postulates, and it explains why identification of the source from which the correct answers can be obtained does not automatically give us all of those answers; why the results obtained by application of the Reciprocal System of theory, like the products of all other research into previously unknown physical areas, necessarily differ in the degree of certainty that can be ascribed to them, particularly in the relatively early stages of an investigation.
Many are established beyond a reasonable doubt; others can best be characterized as "work in progress"; still others are little, if any, more than speculations. However, because of the extremely critical scrutiny to which a theory based on a new and radically different fundamental concept is customarily (and properly) subjected, publication of the results of the theoretical development described in this work has, in general, been limited to those items which have been given long and careful examination, and can be considered as having a very high degree of probability of being correct.

Almost thirty years of study and investigation went into the project before the first edition of this work was published. The additions and modifications in this new edition are the result of another twenty years of review and extension of the original findings by the author and others. Inasmuch as the results of this development are conclusions about one universe derived in their entirety from one set of basic premises, every advance that is made in the understanding of phenomena in one physical field throws some light on outstanding questions in other fields. A review such as that required for the preparation of this new edition has the benefit of all of the advances that have been made subsequent to the last previous systematic study of each area, and a considerable amount of clarification of the subject matter previously examined, and extension of the development into new subject areas, was accomplished almost automatically during the revision of the text. Where it is evident that the new theoretical conclusions thus derived are firm enough to meet the criteria that were applied to the original publication, they have been included in this new edition. But in general, any new ideas of major consequence that have emerged from this rather rapid review have been held over for further study, in order to be sure that they receive adequate consideration before publication.

In one particular case, however, there seems to be sufficient justification for making an exception to this general policy. In the preceding pages, the discussion of the decay of the cosmic elements after entry into the material environment was carried to the point where the decay was complete, and it was noted that the ultimate result would necessarily be conversion of the cosmic elements into forms that would be compatible with the new environment. Since hydrogen is the predominant constituent of the material sector of the universe, this element must ultimately be produced from the decay products, but just how the transition is accomplished has not been clear theoretically, and empirical information bearing on the subject is practically non-existent. It would be a significant advance toward completion of the basic theoretical structure if this gap could be closed. Consideration of the question during preparation of the text of the new edition has uncovered some interesting possibilities in this connection, and a discussion of these ideas in the present work seems to be warranted, even though it must be admitted that they are still speculative, or at least no more than "work in progress." The first of these tentative new conclusions is that the muon neutrino is not a neutrino. As the theoretical development now stands, there is no place for any neutrinos other than the electron neutrino and its cosmic analog, the electron antineutrino, as it is currently known. Of course, the door is not completely closed.
Earlier in this volume it was asserted that sufficient evidence is now available to demonstrate that the physical universe is, in fact, a universe of motion, and that a correct development of the consequences of the postulates that define such a universe will produce an accurate representation of the existing physical universe. It is not contended, however, that the present author and his associates are infallible, and that the conclusions which they have reached by these means are always correct.

It is conceivable that further theoretical clarification may change some aspects of our existing view of the neutrino situation, but the theory as it now stands has no place for muon neutrinos. As brought out in the previous pages, however, the theory does require the production of a different massless particle in the processes in which the "muon neutrino" now appears, and the logical conclusion is that the particle now called the muon neutrino is the particle required by the theory: the massless neutron. From the observational standpoint this changes nothing but the name, as these two massless particles cannot be distinguished by any currently known means. On the theoretical side, the observed particle fits in very well with the theoretical deductions as to the behavior of the massless neutron. This particle should theoretically be produced in every decay event, whereas the neutrino should appear only in the last step, where separation of the residual cosmic atom into two massless particles takes place. This is in accord with observation, as the "muon neutrino" appears in both the pion decay and the muon decay, whereas the electron neutrino appears only in the decay of the muon. Empirical confirmation of the theoretical production of massless neutrons in the earlier decay events has not yet been obtained, but this is understandable.

The reported products of the decay of a positive muon are also in agreement with the massless neutron hypothesis. These products are currently considered to be a positron, which, according to our findings, is M 0-0-1, an electron neutrino, M ½-½-(1), and a "muon antineutrino," which we now identify as a massless neutron, M ½-½-0. The positron and the electron neutrino are jointly equivalent to a second massless neutron. Their appearance as two particles rather than one is probably due to the fact that they are the products of the final conversion of the residual cosmic atom, in which the electric and magnetic rotations are oppositely directed, rather than merely discrete particles ejected from the cosmic atom. It is claimed that muons also exist with negative charges, and that these decay into the antiparticles of the decay products of the positive muon: an electron, an electron antineutrino, and a "muon neutrino." These asserted products are the equivalent of two cosmic massless neutrons. The production of such particles, or of cosmic particles of any kind other than the members of the regular decay sequence, as the result of a decay process is rather difficult to reconcile with the theoretical principles that have been developed. Theoretical considerations indicate that there is no such thing as an "antimeson," and that the negatively charged muon is identical with the positively charged muon, except for the difference in the charge. On this basis, the decay products should differ only in that an electron replaces the positron. Inasmuch as two of the decay particles in each case are unobservable, there appears to be a rather strong probability that their identification in current physical thought comes from the ninety percent of interpretation rather than from the ten percent of observation that enters into the reported results. However, it is the existence of some unresolved questions of this kind that has made it necessary to characterize the contents of this chapter as somewhat speculative.
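The equivalence asserted here, that the positron and the electron neutrino jointly amount to a massless neutron, can be checked by adding the displacements term by term. A sketch, using the text's M a-b-c notation with parenthesized (negative) values entered as negatives:

    from fractions import Fraction as F

    HALF = F(1, 2)
    positron          = (0, 0, 1)          # M 0-0-1
    electron_neutrino = (HALF, HALF, -1)   # M ½-½-(1)
    massless_neutron  = (HALF, HALF, 0)    # M ½-½-0

    combined = tuple(a + b for a, b in zip(positron, electron_neutrino))
    print(combined == massless_neutron)    # True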
On the basis of the theoretical decay pattern, the incoming cosmic atoms are eventually converted into massless neutrons and their equivalents. The problem then becomes: What happens to these particles?

There are no experimental or observational guideposts along this route; we will have to depend entirely on theoretical deductions. The massless neutron already has a material type structure, that is, a negative vibration and a positive rotation, and no conversion process is required. Likewise, no decay or fragmentation process is possible, because this particle has only one rotational displacement unit. Progress toward the hydrogen goal must therefore take place by means of addition processes. Addition of a massless neutron to a positron, a proton, a compound neutron, or a second massless neutron would produce a particle in which there is a single rotating system of displacement 2 (on the particle scale). As indicated in Chapter 11, it appears that such a particle, if it exists at all, is unstable, and in the absence of any means of transferring one of the units of displacement to a second rotating system, the unstable particle will decay back to particles of the original types. Such additions will therefore accomplish nothing.

The additions that are actually possible constitute a regular series. The decay product, the massless neutron, M ½-½-0, can combine with an electron, M 0-0-(1), to form a neutrino, M ½-½-(1). Another massless neutron added to the neutrino produces a proton, M 1-1-(1). As has been indicated, addition of a massless neutron to the proton is not feasible, but a neutrino can be added, and this produces the mass one hydrogen isotope, M 1½-1½-(2). So far as the rotational displacement is concerned, we now have a clear and consistent picture. By addition of the supply of massless neutrons resulting from the decay of the cosmic rays to electrons and neutrinos, particles that are plentiful in the material environment, hydrogen, the basic element of the material system, is produced. But there is still one important factor to be accounted for. There is no problem in the addition of the massless neutron to the electron, but in adding to the neutrino to produce the proton, a unit of mass must be provided. The question that must be answered before this hypothetical hydrogen building process can be considered a reality is: Where does the required mass come from?

It appears, on the basis of the recent extensions of the theory, that the answer to this question can be found in a hitherto unrecognized property of particles with two-dimensional rotation. As explained in Chapter 12, mass is t³/s³, the reciprocal of three-dimensional speed, whereas energy is t/s, the reciprocal of one-dimensional speed. Obviously, there is an intermediate quantity, the reciprocal of two-dimensional speed, t²/s². This has been recognized as momentum, or impulse, but it has been regarded as a derivative of mass. Indeed, momentum is customarily defined as the product of mass and velocity. What has not been recognized is that the reciprocal of two-dimensional speed can exist in its own right, independent of mass, and that a two-dimensional massless particle can have what we may call internal momentum, t²/s², just as a three-dimensional atom has mass, t³/s³. The internal energy of an atom, the energy equivalent of its mass, is equal to the product of its mass and the square of unit speed, t³/s³ x s²/t² = t/s. This is the relation discovered by Einstein, and expressed as E = mc². In order to provide the unit mass required in the addition of a massless neutron to a neutrino to form a proton, a unit quantity of energy, t/s, must be provided.
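These reciprocal-speed quantities can be handled mechanically by tracking the exponents of t and s; a sketch, for illustration, representing t^p/s^q by the pair (p, q):

    # Multiplying quantities adds their space-time exponents, so the
    # dimensional statements in the text can be checked directly.
    def mul(x, y):
        return (x[0] + y[0], x[1] + y[1])

    mass     = (3, 3)     # t3/s3
    momentum = (2, 2)     # t2/s2
    speed    = (-1, -1)   # s/t
    speed_sq = (-2, -2)   # s2/t2

    print(mul(mass, speed_sq))    # (1, 1), i.e. t/s: energy, E = mc2
    print(mul(momentum, speed))   # (1, 1), i.e. t/s: the Mv product below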

The kinetic energy of a particle with internal momentum M is the product of this momentum and the speed: Mv = t²/s² x s/t = t/s. Inasmuch as the massless neutron has unit magnetic displacement, and therefore unit momentum, and, being massless, it moves with unit speed (the speed of light), its kinetic energy is unity. Thus the kinetic energy of the massless neutron is equal to the energy requirement for the production of a unit of mass, and by coming to rest in the stationary frame of reference the massless neutron can provide the energy as well as the rotational displacement necessary to produce the proton by combination with a neutrino. Here, then, is what appears, on initial consideration at least, to be a complete and consistent theoretical explanation of the transition from decay product to material atom. There is, of course, no observational confirmation of the hypothetical processes, and such confirmation may be hard to get. The conclusions that have been reached will therefore have to rest entirely on their theoretical foundations for the time being. It is worth noting that, on the basis of these conclusions, the hydrogen produced from the decay products originates somewhat uniformly throughout the extension space of the material sector, inasmuch as the neutrino population must be fairly uniformly distributed. This is in agreement with other deductions that were discussed in the first edition, and will be given further consideration in Volume II of this work.

The standing of the conclusions that have just been outlined is considerably strengthened by the fact that the two lines of theoretical development meet at this point. As stated earlier, the inflow of cosmic matter into the material sector is counterbalanced by an ejection of matter from the material sector into the cosmic sector in the form of high-speed explosion products. These are the two crucial phases of the great cycle which constitutes the continuing activity of the universe. But the slow process of growth and development that the arriving matter undergoes before it is ready to participate in the events which will eject it back into the cosmic sector, and complete the cycle, is an equally important, even though less spectacular, aspect of the cycle. Consequently, one of the major tasks involved in developing a theoretical account of the physical universe from the basic postulates of the Reciprocal System is to trace the evolutionary path of the new matter, and of the aggregates into which that matter gathers. Our first concern, however, must be to identify the participants in physical activity, and to define their principal properties, as these are items of information that will be required before the events in which these entities participate can be accurately evaluated. Now that we have arrived, at least tentatively, at the hydrogen stage, we will defer further consideration of the evolution of matter to Volume II, and will return to our examination of the individual material units and their primary combinations.
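The chain of additions described in this chapter, from decay product to hydrogen, can be summarized in the same displacement notation; a sketch, for illustration:

    from fractions import Fraction as F

    H = F(1, 2)
    def add(x, y):
        return tuple(a + b for a, b in zip(x, y))

    massless_neutron = (H, H, 0)     # M ½-½-0
    electron         = (0, 0, -1)    # M 0-0-(1)

    neutrino = add(massless_neutron, electron)    # M ½-½-(1)
    proton   = add(neutrino, massless_neutron)    # M 1-1-(1)
    hydrogen = add(proton, neutrino)              # M 1½-1½-(2), the H1 isotope
    print(neutrino, proton, hydrogen)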

CHAPTER 18

Simple Compounds

In the preceding chapters we have determined the specific combinations of simple rotations that are stable in the material sector of the universe, and we have identified each of these combinations, within the experimental range, with an observed sub-atomic particle or atom of an element.

We have then shown that an exact duplicate of this system of material rotational combinations, with space and time interchanged, exists in the cosmic sector, and we have identified all of the observed particles that do not belong to the material system as atoms or particles of the cosmic system. To the extent that observational or experimental data are available, therefore, we have established agreement between the theoretical and observed structures. So far as these data extend, there are no loose ends; all of the observed entities have been identified theoretically, and while not all of the theoretical entities have been observed, there are adequate theoretical explanations for this. The number of observed particles is increased substantially by a commonly accepted convention which regards particles of the same kind, but with different electric charges, as different particles. No consideration has been given to the effects of electric charges in the present discussion, as the existence of such charges has no bearing on the basic structure of the units. These charges may play a significant part in determining whether or not certain kinds of reactions take place under certain circumstances, and may have a major influence on the details of those reactions, just as the presence or absence of concentrations of kinetic energy may have a material effect on the course of events. But the electric charge is not part of the basic structure of the atom or sub-atomic particle. As will be brought out when we take up consideration of electrical phenomena, it is a temporary appendage that can be attached or removed with relative ease. The electrically charged atom or particle is therefore a modified form of the original rotational combination rather than a distinctly different type of structure.

Our examination of the basic structures is not yet complete, however, as there are some associations of specific numbers of specific elements that are resistant to dissociation, and therefore act in the manner of single units in processes of low or moderate energy. These associations, or molecules, play a very important part in physical activity, and in order to complete our survey of the units of which material aggregates are composed we will now develop the theory of the structure of molecules, and will determine what kinds of molecules are theoretically possible. The concept of the molecule originated from a study of the behavior of gases, and as originally formulated it was essentially empirical. The molecule, on this basis, is the independent unit in a gas aggregate. But this definition cannot be applied to a solid, as the independent unit in a solid is generally the individual atom, or a small group of atoms, and in this case the molecule has no physical identity. In order to make the molecule concept more generally applicable, therefore, it has been redefined on a theoretical rather than an empirical basis, and as now conceived, a molecule is the smallest unit of a substance which can (theoretically) exist independently and retain all of the properties of the substance. The atoms of a molecule are held together by inter-atomic forces, the nature and magnitude of which will be examined in detail later.
The strength of these forces determines whether or not the molecule will break up under whatever disruptive forces it may be subjected to, and the manner in which certain atoms are joined in a molecule may have an effect on the magnitude of the inter-atomic forces, but the determination of what atoms can combine with what other atoms, and in what proportions, is governed by an entirely different set of factors.

In current theory, the factors responsible for the inter-atomic force, or "bond," are presumed to have a double function, not only determining the strength of the cohesive force, but also determining what combinations can take place. The results of the present investigation indicate, however, that the force which determines the equilibrium distance between any two atoms is identical in origin and in general character regardless of the kind of atoms involved, and regardless of whether or not those atoms can, or do, take part in the formation of a molecule. Experience has indicated that it is advisable to lay more emphasis on the independence of these two aspects of the interrelations between atoms, and for this purpose the plan of presentation employed in the first edition will be modified in some respects. As already mentioned, the information that will be developed with respect to the molecular structure will be presented before any discussion of inter-atomic forces is undertaken. Furthermore, present indications are that whatever advantages there may be in using the familiar term "bond" in describing the various molecular structures are outweighed by the fact that the term "bond" almost inevitably implies the notion of a force of some kind. Inasmuch as the different molecular "bonds" merely reflect different relative orientations of the rotations of the interacting atoms, and have no force implications, we will abandon the use of the term "bond" in this sense, and will substitute "orientation" for present purposes. The term "bond" will be used in a different sense in a later chapter, where it will actually relate to a force.

The existence of molecules, either combinations of specific numbers of like atoms, or chemical compounds, which are combinations of unlike atoms, is due to the limitations on the establishment of inter-atomic equilibrium that are imposed by the presence of motion in time in the electric dimension of the atoms of certain elements. Those elements whose atoms rotate entirely in space (positive displacement in all rotational dimensions), or which are able to attain the all-positive status by reorientation on the 8-x basis, are not subject to any such limitations. An atom of an element of this kind can establish an equilibrium with any other such atoms in any proportions, except to the extent that the physical properties of the elements involved (such as the melting points) or conditions in the environment (such as the temperature) interfere. Material aggregates of this kind are called mixtures. In some cases, where the mixture is homogeneous and the composition is uniform, the term alloy is applied. There is a class of intermetallic compounds, in which these positive constituents are combined in definite proportions. CuZn and Cu5Zn8 are compounds of this class. But the combinations of copper and zinc are not limited to specific ratios of this kind in the way in which the composition of true chemical compounds is restricted. The commercially important alloys of these two metals extend through the entire range from a brass with 90 percent copper and 10 percent zinc to a solder with 50 percent of each constituent, and the possible alloys extend over a still wider range. The intermetallic compounds are merely those alloys whose proportions are especially favorable from a geometric standpoint.
A typical comment in a chemistry textbook is that "The theory of the bonding forces involved in these intermetallic compounds is very complex and is not, as yet, very well understood." The reason is that there are no "bonding forces" in these substances in the same sense in which that term is ordinarily used in application to the true chemical compounds.

As has been stated, negative rotation in the electric dimension of an atom is admissible because the requirement that the net total rotational displacement must be positive (in the material sector) can be met as long as the magnetic rotation is positive. In the time region inside unit distance, however, the electric and magnetic rotations act independently. Here the presence of a randomly oriented electric rotation in time makes it impossible to maintain a fixed inter-atomic equilibrium. Any relation of space to time is motion, and motion destroys the equilibrium. But an equilibrium can be established in certain cases if both of the interacting atoms are specifically oriented along the line of interaction in such a manner that the negative displacement in the electric dimension of one atom is counterbalanced by an equal positive displacement in one of the dimensions of the second atom, so that the magnitude of the resulting relative motion is zero with respect to the natural datum. Or a multi-atom group equilibrium may be established where the total negative displacements of the atoms with electric rotation in time are exactly equal to the total effective positive displacements of the atoms with which the interaction is taking place. In these cases there is an equilibrium because the net total of the positive and negative displacement involved is zero.

Alternatively, the equilibrium may be based on a total of 8 or 16 units, since, as we have found, there are 8 displacement units between one zero point and the next. A negative displacement x may be counterbalanced by a positive displacement 8-x, the net total being 8, which is the next zero point, the equivalent of the original zero. As an analogy, we may consider a circle, the circumference of which is marked off into 8 equal divisions. Any point on this circle can be described in either of two ways: as x units clockwise from zero, or as 8-x units counter-clockwise from 8. A distance of 8 units clockwise from zero is equivalent to zero. Thus a balance between x and 8-x, with the midpoint at 8, is equivalent to a balance between x and -x, with the midpoint at zero. The situation in the inter-atomic space-time equilibrium is similar. As long as the relative displacement of the two interacting motions, the total of the individual values, amounts to the equivalent of any one of the zero points, the system is in equilibrium.

Because of the specific requirements for the establishment of equilibrium, the components of combinations of this kind, molecules of chemical compounds, exist in definite proportions, each n atoms of one component being associated with a specific number of atoms of the other component or components. In addition to the constant proportions of their components, compounds also differ from mixtures or alloys in that their properties are not necessarily similar to those of the components, as is generally true in the all-positive combinations, but may be of an altogether different nature, as the resultant of a space-time equilibrium of the required character may differ widely from any of the effective rotational values of the individual elements. The rotational displacement in the dimension of interaction determines the combining power, or valence, of an element.
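The balance rule just described lends itself to a simple check. The sketch below is illustrative only (the function name and the sample displacements are ours, not from the text); it treats an inter-atomic equilibrium as possible when the displacement magnitudes of the two components either match directly or total 8 or 16 units, the zero-point equivalents:

```python
def in_equilibrium(neg, pos):
    """Test the zero-point balance described in the text.

    neg, pos: displacement magnitudes of the negative and positive
    components. Balance occurs when the two meet at a zero point:
    either directly (pos == neg, midpoint at zero) or on a combined
    total of 8 or 16 units, the next zero-point equivalents.
    """
    return pos == neg or (pos + neg) in (8, 16)

# A negative displacement x balanced by a positive displacement 8 - x:
x = 3
print(in_equilibrium(x, 8 - x))  # True: 3 + 5 = 8, the next zero point
print(in_equilibrium(x, x))      # True: direct balance at zero
print(in_equilibrium(3, 2))      # False: nets to motion, not equilibrium
```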
Since the negative displacement is the foreign component of the material molecule that has to be counterbalanced by an appropriate positive displacement to make the compound possible, the negative valence of an element is the number of units of effective negative displacement that an atom of that element possesses. It follows that, with some possible exceptions that will be considered later, there is only one value of the negative valence for any element. The positive valence of an atom in any particular orientation is the number of units of negative displacement which it is able to neutralize when oriented in that manner. Each element therefore has a number of possible positive valences, depending on its rotational displacements and the various ways in which they can be oriented. The occurrence of these alternate orientations is largely dependent upon the position of the element within the rotational group, and in preparation for the ensuing discussion of this subject it will be advisable, for convenient reference, to set up a classification according to position.

Within each of the rotational groups the minimum electric displacement for the elements in the first half of the group is positive, whereas for those in the latter half of the group it is negative. We will therefore apply the terms electropositive and electronegative to the respective halves. It should be understood, however, that this distinction is based on the principle that the most probable orientation in the electric dimension considered independently is that which results in the minimum displacement. Because of the molecular situation as a whole, an electronegative element often acts in an electropositive capacity–indeed, nearly all of them take the positive role in chemical compounds under some conditions, and many do so under all conditions–but this does not affect the classification that has been defined.

There are also important differences between the behavior of the first four members of each series of positive or negative elements and that of the elements with higher rotational displacements. We will therefore divide each of these series into a lower division and an upper division, so that those elements with similar general characteristics can be treated together. This classification will be based on the magnitude of the displacement, the lower division in each case including the elements with displacements from 1 to 4 inclusive, and the upper division comprising those with displacements of 4 or more. The elements with displacement 4 belong to both divisions, as they are capable of acting either as the highest members of the lower divisions or as the lowest members of the upper divisions. It should be recognized that in the electronegative series the members of the lower divisions have the higher net total positive displacement (higher atomic number). For convenience, these divisions within each rotational group will be numbered in the order of increasing atomic number as follows:

Division I: Lower electropositive
Division II: Upper electropositive
Division III: Upper electronegative
Division IV: Lower electronegative

These are the divisions which were indicated in the revised periodic table in Chapter 10. As will be seen from the points developed in the subsequent discussion, the division to which an element belongs has an important bearing on its chemical behavior. Including this divisional assignment in the table therefore adds substantially to the amount of information that is represented.

Where the normal displacement x exceeds 4, the equivalent displacement 8-x is numerically less than x, and therefore more probable, other things being equal. One effect of this probability relation is to give the 8-x positive valence preference over the negative valence in Division III, and thereby to limit the negative components of chemical compounds to the elements of Division IV, except in one case where a Division III element acquires the Division IV status for reasons that will be discussed later.

When the positive component of a compound is an element from Division I, the normal positive displacement of this element is in equilibrium with the negative displacement of the Division IV element. In this case both components are oriented in accordance with their normal displacements. The same is true if either or both of the components is double or multiple. We will therefore call this the normal orientation. The corresponding normal valences are the positive valence (x) and the negative valence (-x). It is theoretically possible for any Division I element to form a compound with any Division IV element on the basis of the appropriate normal valences, and all such compounds should be stable under favorable conditions, but whether or not any specific compound of this type will be stable under the normal terrestrial conditions is determined by probability considerations.

An exact evaluation of these probabilities has not yet been attempted, but it is apparent that one of the most important factors in the situation is the general principle that a low displacement is more probable than a high displacement. If we check the theoretically possible normal valence compounds against the compounds listed in a chemical handbook, we will find nearly all of the low positive-low negative combinations in this list of common compounds. The low positive-high negative, and the high positive-low negative combinations are much less fully represented, while we will find the high positive-high negative combinations rather scarce. The geometrical symmetry of the resulting crystal structure is the other major determinant. A binary compound of two valence four elements (RX), for example, is more probable than a compound of a valence four and a valence three element (R3X4). The effect of both of these probability factors is accentuated in Division II, where the displacements corresponding to the normal valence have the relatively high values of 5 or more. Consequently, this valence is utilized only to a limited extent in this division, and is generally replaced by one of the alternative valences.

Inasmuch as the basic requirement for the formation of a chemical compound is the neutralization of the negative electric displacement, the alternative positive valences are simply the results of the various ways in which the atomic rotation can be oriented to attain an effective positive displacement that will serve the purpose. Since each type of valence corresponds to a particular orientation, the subsequent discussion will be carried on in terms of valence, the existence of a corresponding orientation in each case being understood. The predominant Division III valence is based on balancing the 8-x displacement (positive because of the zero point reversal) against the displacement of the negative component. The resulting relative displacement is 8, which, as explained earlier, is the equivalent of zero.

We will call this the neutral valence. This valence also plays a prominent part in the purely Division IV compounds.

The higher Division III members of Groups 4A and 4B are unable to utilize the 8-x neutral valence because for these elements the values of 8-x are less than zero, and therefore meaningless. Instead, these elements form compounds on the basis of the next higher equivalent of zero displacement. Between the 8-unit level and this next zero equivalent there are two effective initial units of motion, as well as an 8-unit increment. The total effective displacement at this point is therefore 18, and the secondary neutral valence is 18-x. A typical series of compounds utilizing this valence, the oxides of the Division III elements of Group 4A, consists of HfO2, Ta2O5, WO3, Re2O7, and OsO4.

Symmetry considerations favor balancing two electric displacements to arrive at the necessary space-time equilibrium, where conditions permit, but where the all-electric orientation encounters difficulties, it is possible for one of the magnetic rotations to take the positive role in the inter-atomic equilibrium. The magnetic valences, which apply in these magnetic-electric orientations, are the most common basis of combination in Division II, where the positive valences are high, and the neutral valences are excluded because the 8-x displacement is negative. They also make their appearance in the other three divisions where probability considerations permit.

Each element has two magnetic rotations and therefore has two possible first order magnetic valences. In alternate groups the two rotations are equal, where no environmental influences are operative, and on this basis the number of magnetic valences should be reduced to one in half of the groups. As we saw in our original consideration of the atomic rotation in Chapter 10, however, any element can rotate with an addition of positive electric rotational displacement to the appropriate magnetic rotation, or with an addition of negative electric rotational displacement to the next higher magnetic rotation. Because of this flexibility, the limitation of the elements of alternate groups to a single magnetic valence actually applies only to the elements of Division I. Here this restriction has no real significance, as the elements of this division make little use of the magnetic valence in any event, because of the high probability of the low positive valences.

To distinguish between the two magnetic valences, we will call the larger one the primary magnetic valence, and the smaller one the secondary magnetic valence. Neither of these valences has any inherent probability advantage over the other, but the geometrical considerations previously mentioned do have a significant effect. For instance, where the magnetic valence can be either two or three, a combination with a valence three negative element takes the form R3X2 if the magnetic valence is two, and the form RX if the alternate valence prevails. The latter results in the more symmetrical, and hence more probable, structure. Conversely, if the negative element has valence four, the R2X structure developed on the basis of a magnetic valence of two is more symmetrical than the R4X3 structure that results if the magnetic valence is three, and it therefore takes precedence. Many of the theoretically possible magnetic valence compounds that are on the borderline of stability, and do not make their appearance as independent units, are stable when joined with some other valence combination.
For example, there are three theoretically possible first order valence oxides of carbon: CO2 (positive electric valence), CO (primary magnetic valence), and C2O (secondary magnetic valence). The first two are common compounds. C2O is not. But there is another well-known compound, C3O2, which is obviously the combination CO·C2O. As we will see later, this ability of the less stable combinations to participate in complex structures plays an important role in compound formation.

The first order valences of the elements, the valences that have been discussed thus far, are summarized in Table 7. The great majority of the true chemical compounds of all classes are formed on the basis of these valences.
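Both the carbon oxide series just listed and the 18-x oxide series quoted earlier reduce to the same valence-balance arithmetic. The following sketch is illustrative only (the helper name is ours, and the displacement values x = 14 through 10 for Hf through Os are inferred from the 18-x valences stated in the text); it generates each formula from the positive valence and oxygen's negative valence 2:

```python
from math import gcd

def oxide_formula(symbol, pos_valence, o_valence=2):
    """Balance R(+v) against O(-2): n_R * v = n_O * 2, in lowest terms."""
    g = gcd(pos_valence, o_valence)
    n_r, n_o = o_valence // g, pos_valence // g
    unit = lambda s, n: s + (str(n) if n > 1 else "")
    return unit(symbol, n_r) + unit("O", n_o)

# Secondary neutral valence 18 - x for the higher Division III members
# of Group 4A reproduces the series given in the text:
for sym, x in [("Hf", 14), ("Ta", 13), ("W", 12), ("Re", 11), ("Os", 10)]:
    print(oxide_formula(sym, 18 - x))  # HfO2 Ta2O5 WO3 Re2O7 OsO4

# The same balance yields the three first order oxides of carbon:
for v in (4, 2, 1):  # positive electric, primary and secondary magnetic
    print(oxide_formula("C", v))       # CO2 CO C2O
```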

TABLE 7
FIRST ORDER VALENCES

Group  Division  Magnetic Valences     Elements                               Electric Valences
                 Primary   Secondary                                          Normal    Neutral (*Sec.)          Negative

1B     IV        1         1           H                                                                         1
1B     0         2         1           He
2A     I         2         1           Li Be B C                              1 2 3 4
2A     IV        2         1           C N O F                                          4 5 6 7                  4 3 2 1
2A     0         2         2           Ne
2B     I         2         2           Na Mg Al Si                            1 2 3 4
2B     IV        3         2           Si P S Cl                                        4 5 6 7                  4 3 2 1
2B     0         3         3           Ar
3A     I         3         2           K Ca Sc Ti                             1 2 3 4
3A     II        3         2           V Cr Mn Fe Co                          5 6 7 8
3A     III       3         2           Ni Cu Zn Ga Ge                                   1 2 3 4
3A     IV        3         2           As Se Br                                         5 6 7                    3 2 1
3A     0         3         3           Kr
3B     I         3         3           Rb Sr Y Zr                             1 2 3 4
3B     II        4         3           Nb Mo Tc Ru Rh                         5 6 7 8
3B     III       4         3           Pd Ag Cd In Sn                                   1 2 3 4
3B     IV        4         3           Sb Te I                                          5 6 7                    3 2 1
3B     0         4         3           Xe
4A     I         4         3           Cs Ba La Ce                            1 2 3 4
4A     II        4         3           Pr Nd Pm Sm Eu Gd Tb Dy Ho Er Tm Yb    5 6 7 8
4A     III       4         3           Lu Hf Ta W Re Os Ir Pt Au Hg Tl Pb               4* 5* 6* 7* 8*  1 2 3 4
4A     IV        4         3           Bi Po At                                         5 6 7                    3 2 1
4A     0         4         4           Rn
4B     I         4         4           Fr Ra Ac Th                            1 2 3 4
4B     II        5         4           Pa U Np Pu Am Cm Bk Cf Es Fm Md No     5 6 7 8
4B     III       5         4           Lr Rf Ha                                         4* 5*
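For convenience in the examples that follow, part of Table 7 can be carried as plain data. The sketch below is of our own devising and covers only two rows; the field layout simply mirrors the table's columns:

```python
# A fragment of Table 7 encoded for lookup: (group, division,
# primary magnetic valence, secondary magnetic valence, elements,
# electric valences by type). Only two rows are shown here.
TABLE7 = [
    ("2B", "I",  2, 2, ["Na", "Mg", "Al", "Si"],
     {"normal": [1, 2, 3, 4]}),
    ("2B", "IV", 3, 2, ["Si", "P", "S", "Cl"],
     {"neutral": [4, 5, 6, 7], "negative": [4, 3, 2, 1]}),
]

def first_order_valences(symbol):
    """Collect every first order valence listed for an element."""
    found = {}
    for group, div, pri, sec, elems, electric in TABLE7:
        if symbol in elems:
            i = elems.index(symbol)
            found[div] = {"magnetic": (pri, sec),
                          **{kind: v[i] for kind, v in electric.items()}}
    return found

# Silicon sits on the borderline and appears in both divisions:
print(first_order_valences("Si"))
```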

There is also an alternate type of inter-atomic orientation that gives rise to what we may call second order valences. As has been emphasized in the previous discussion, an equilibrium between positive and negative rotational displacements can take place only where the net resultant is zero, or the equivalent of zero, because any value of the space-time ratio other than unity (zero displacement) constitutes motion, and makes fixed equilibrium positions impossible. In the most probable condition, the initial level from which each rotation extends is the same zero point, or, where the nature of the orientation requires different zero points, the closest combination that is possible under the circumstances. This arrangement, the basis of the first order valences, is clearly the most probable, but it is not the only possibility. Inasmuch as the separation between natural zero points (unit speed levels) is two linear units (or eight three-dimensional units) it is possible to establish an equilibrium in which the initial level of the positive rotation (the positive zero) is separated from the initial level of the negative rotation (the negative zero) by two linear units. The effect of this separation on the valence is illustrated in Fig. 2.

Fig. 2: the three valence diagrams, (a), (b), and (c), described in the following paragraph.

The basis of the first order valences is shown in (a). Here the normal positive valence V balances an equal negative valence V at an equilibrium point represented by the double line. In (b) the initial level of the positive rotation has been offset to the next zero point, two units distant from the point of equilibrium. These two units, being on the positive side of the equilibrium point, add to the effective positive displacement, and the positive valence therefore increases to V+2; that is, V+2 negative valence units are counterbalanced. In (c) it is the initial level of the negative rotation that has been offset from the point of equilibrium.

Here the two intervening units add to the effective negative displacement, and the positive valence decreases to V-2, as the V units of positive displacement are now able to balance only V-2 negative valence units.

By reason of the availability of the zero point modifications illustrated in Fig. 2(b), each of the positive first order valences corresponds to a second order valence, an enhanced valence, as we will call it, that is two units greater in the case of the direct valences (x+2), and two units less for the inverse valences: 8-(x+2) = 6-x. Compounds based on enhanced normal valences are relatively uncommon, as the normal valence itself has a high degree of probability, and the enhanced valence is not only inherently less probable, but also has a higher effective displacement in any specific application, which decreases the relative probability still further. The probability factors are more favorable for the enhanced neutral valence, as in this case the effective displacement is less than that of the corresponding first order valences. The compounds of this type are therefore more numerous, and they include such well-known substances as SO2 and PCl3. An interesting application of this valence is found in ozone, which is an oxide of oxygen, analogous to SO2.

It should theoretically be possible for valences to be diminished by orientation in the manner shown in Fig. 2(c), but it is doubtful if any stable compounds are actually formed on the basis of diminished electric valences. The reason for their absence is not yet understood. The magnetic valences are both enhanced and diminished. Either the primary or the secondary valence may be modified, but since enhancement is in the direction of lower probability (higher numerical value) the number of common compounds based on the enhanced magnetic valences is relatively small. Diminishing the valence improves the probability, and the diminished valence compounds are therefore more plentiful in the rotational groups in which they are possible (those with primary magnetic valences above two), although the list is still very modest compared to the immense number of compounds based on the first order valences.

As indicated earlier, one component of any true chemical compound must have a negative displacement of four or less, as it is only through the establishment of an equilibrium between such a negative displacement and an appropriate positive displacement that the compound comes into existence. The elements with the required negative displacement are those which comprise Division IV, and it follows that every compound must include at least one Division IV element, or an element which has acquired Division IV status by valence enhancement. If there is only one such component, the positive-negative orientation is fixed, as the Division IV element is necessarily the negative component. Where both components are from Division IV, however, one normally negative element must reorient itself to act in a positive capacity, and a question arises as to which retains its negative status.

The answer to this question hinges on the relative negativity of the elements concerned. Obviously a small displacement is more negative than a large one, since it is farther away from the neutral point where positive and negative displacements of equal magnitude are equivalent. Within any one group the order of negativity is therefore the same as the displacement sequence.
In Group 2B, for instance, the most negative element is chlorine, followed by sulfur, phosphorus, and silicon, in that order. This means that the negative component in any Division IV chlorine-sulfur combination is chlorine, and the product is a compound such as SCl2, not ClS or Cl2S. On the other hand, the compound P2S3 is in order, as phosphorus is normally positive to sulfur. Where the electric displacements are equal, the element with the smaller magnetic displacement is the more negative, as the effect of a greater magnetic displacement is to dilute the negative electric rotation by distributing it over a larger total displacement. We therefore find ClF3 and IBr3, but not FCl3 or BrI3. The magnitude of the variation in negativity due to the difference in magnetic displacement is considerably less than that resulting from inequality of electric displacement, and the latter is therefore the controlling factor except where the electric displacements are the same in both components.

On the foregoing basis, all elements of Divisions I, II, and III are positive to Division IV elements. The displacement 4 elements on the borderline between Divisions III and IV belong to the higher division when combined with elements of lower displacement, and when elements lower in the negative series acquire valences of 4 or more through enhancement or reorientation they also assume Division III properties and become positive to the other Division IV elements. Thus chlorine, which is negative to oxygen in the purely Division IV compound OCl2, is the positive component in Cl2O7. Similarly, the normal relations of phosphorus and sulfur, as they exist in P2S3, are reversed in S3P4, where sulfur has the valence 4.

Hydrogen, like the displacement 4 members of the higher groups, is a borderline element, and because of its position is able to assume either positive or negative characteristics. It is therefore positive to all purely negative elements (Division IV below valence 4), but negative to all strictly positive elements (Divisions I and II), and to the elements of Division III. Because of its lower magnetic displacement, it is also negative to the higher borderline elements: carbon, silicon, etc. The fact that hydrogen is negative to carbon is particularly significant in view of the importance of the carbon-hydrogen combination in the organic compounds. Another point that should be noted here is that when hydrogen acts in a positive capacity, it does so as a Division III element, not as a member of Division I. Its +1 valence is therefore magnetic. This is why hydrogen was assigned only to the negative position in the revised periodic table, rather than giving it two positions, as has been customary.

The variation in negativity with the size of the magnetic displacement has the effect of extending the Division III behavior into Division IV to a limited extent in the higher groups. Lead, for example, has practically no Division IV characteristics, and bismuth has less than its counterparts in the lower groups. At the lower end of the atomic series this situation is reversed, and the Division IV characteristics extend into Division III, as an alternative to the normal positive behavior of some of the elements of that division. Silicon, for instance, not only forms combinations such as MnSi and CoSi3, which, on the basis of the information currently available, appear to be intermetallic compounds similar to those of the higher Division III elements, but also combinations such as Mg2Si and CaSi2, which are probably true compounds analogous to Be2C and CaC2. Carbon carries this trend still farther and forms carbides with a wide variety of positive components.
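The negativity rules just stated amount to a two-level comparison: electric displacement first, magnetic displacement as the tie-breaker. The sketch below illustrates this; the numeric pairs are our own illustrative assignments chosen to be consistent with the examples in the text, not values taken from it:

```python
# Relative negativity: the smaller electric displacement is the more
# negative; on a tie, the smaller magnetic displacement wins. Pairs
# below are illustrative (electric displacement, magnetic size).
DISPLACEMENTS = {
    "F": (1, 1), "Cl": (1, 2), "Br": (1, 3), "I": (1, 4),
    "O": (2, 1), "S": (2, 2), "P": (3, 2), "Si": (4, 2),
}

def negative_component(a, b):
    """Return the element that takes the negative role in a pairing."""
    return min(a, b, key=lambda e: DISPLACEMENTS[e])

print(negative_component("S", "Cl"))   # Cl: hence SCl2, not Cl2S
print(negative_component("Cl", "F"))   # F:  hence ClF3, not FCl3
print(negative_component("I", "Br"))   # Br: hence IBr3, not BrI3
print(negative_component("P", "S"))    # S:  hence P2S3
```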

In the 2A group, the Division IV characteristics extend to the fifth element, boron. This is the only case in which the fifth element of a series has Division IV properties, and the behavior of boron in compound formation is correspondingly unique. In its Division I capacity, as the positive component in compounds such as B2O3, boron is entirely normal. But its first order negative valence would be -5. Formation of compounds based on this -5 valence conflicts with the previously stated limitation of the negative valence to a maximum of four units. Boron therefore shifts to an enhanced negative valence, adding two positive units to its first order value of -5, with a resultant of -3. The direct combinations of boron with positive elements have such structures as FeB and Cu3B2. However, many of the borides have complex structures in which the effective valences are not as clearly indicated. This raises a question as to whether boron may be an exception to the rule limiting the maximum negative valence to -4, and may utilize both the -5 and -3 valences. This issue will be considered in the next chapter.
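The second order valences described in this chapter are all two-unit zero-point offsets, and boron's shift from -5 to -3 is the same step applied on the negative side. A minimal sketch of that arithmetic (the function name is ours):

```python
def enhance(valence):
    """Offset the positive initial level by two units (Fig. 2b)."""
    return valence + 2

# Direct valence x becomes x + 2:
print(enhance(4))      # 6
# Inverse (neutral) valence 8-x becomes 8-(x+2) = 6-x, e.g. x = 2:
x = 2
print(8 - enhance(x))  # 4, i.e. 6 - x
# Boron: two positive units added to the first order value -5 give -3:
print(enhance(-5))     # -3
```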

CHAPTER 19

Complex Compounds

The discussion in the preceding chapter had direct reference only to compounds of the type RmXn, in which m positive atoms are combined with n negative atoms, but the principles therein developed are applicable to all combinations of atoms. Our next objective will be to apply these principles to an examination of some of the more complex situations.

Any atom in one of the simple compounds may be replaced by another atom of the same valence number and type. Thus any or all of the four chlorine atoms in CCl4 may be replaced by equivalent negative atoms, producing a whole family of compounds such as CCl3Br, CCl2F2, CClI3, CF4, etc. Or we may replace n of the valence one chlorine atoms by one atom of negative valence n, obtaining such compounds as COCl2, COS, CSTe, and so on. Replacements of the same kind can be made in the positive component, producing compounds like SnCl4. Simple replacement by an atom of a different valence type is not possible. Copper, for instance, has the same numerical valence as sodium, but the sodium atoms in a compound such as Na2O are not replaceable by copper atoms. There is a compound Cu2O, but the neutral valence structure of this compound is very different from the normal valence structure of Na2O. Similarly, if we exchange a positive hydrogen (magnetic valence) atom for one of the sodium (normal valence) atoms in Na2O, the process is not one of simple replacement. Instead of NaHO, we obtain NaOH, a compound of a totally different character.

A factor which plays an important part in the building of complex molecular structures is the existence of major differences in the magnitudes of the rotational forces in the various inter-atomic combinations. Let us consider the compound KCN, for example. Nitrogen is the negative element in this compound, and the positive-negative combinations are K-N and C-N. When we compute the inter-atomic distances, by means of the relations that will be developed later, we find that the values in natural units are .904 for K-N and .483 for C-N.

As stated in Chapter 18, the term "bond" is not being used in this work in any way connected with the subject matter of that chapter: the combining power or valence. The term "valence bond," or any derivative such as "covalent bond," has no place in the theoretical structure of the Reciprocal System. However, use of the word "bond" is convenient in referring to the cohesion between specific atoms, atomic groups, or molecules, and in the subsequent discussion it will be employed in this restricted sense. On this basis we may say that the force of cohesion, or "bond strength," is considerably greater for the C-N bond than for the K-N bond, as is indicated by the difference in the inter-atomic distances.

It has usually been assumed that this force of cohesion is an indication of the strength of the inter-atomic forces, but in reality the relation is inverse. As explained in Chapter 8, the gravitational forces exerted by the atoms, the forces due to the atomic rotation, are forces of repulsion in the time region, and the cohesion is therefore greater when the rotational forces are weaker. The short C-N distance, and the corresponding strength of this bond, are the results of inactive force dimensions in this combination which reduce the effective repulsive force, and require the atoms to move closer together to establish equilibrium with the constant force of the progression of the reference system. Because of its greater strength, the C-N bond remains intact through many processes which disrupt or modify the K-N bond, and the general behavior of the compound KCN is that of a K-CN combination rather than that of a group of independent atoms such as we find in K2O.

Groups like CN which have relatively high bond strengths and are therefore able to maintain their identity while changes are taking place elsewhere in the compounds in which they exist are called radicals. Inasmuch as the special properties of these radicals are due to the differences between their bond strengths and those of the other bonds within the compounds, the extent to which any particular group acts as a radical depends on the magnitude of these differences. Where the inter-atomic forces are very weak, and the bond is correspondingly strong, as in the C-N combination, the radical is very resistant to separation, and acts as a single atom in most respects. At the other extreme, where the differences between the various inter-atomic forces in the molecule are small, the boundary line between radicals and non-radical atomic groupings is rather vague.

The stronger radicals are definite structural groups. NH4 is, to a large degree, structurally interchangeable with the sodium atom, OH can substitute for I in the CdI2 crystal without changing the structure, and so on. The weakest radicals, those with the smallest margins of bond strength, crystallize in structures in which the radical, as such, plays no part, and the structural units are the individual atoms. The perovskite (CaTiO3) structure is a familiar example. Here each atom is structurally independent, and hence this type of arrangement is available for a compound like KMgF3 in which there definitely are no radicals, as well as for a compound such as KIO3 which contains a borderline group.
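Since the relation between distance and cohesion is inverse, whether a group acts as a radical can be framed as a comparison of inter-atomic distances: a group holds together when its internal bond is markedly shorter, and hence stronger, than the bonds tying it into the rest of the compound. The sketch below is a rough illustration under that assumption; the margin figure is ours, not a value from the text:

```python
def acts_as_radical(internal_dist, external_dist, margin=0.25):
    """Crude radical criterion: the group's internal inter-atomic
    distance (natural units) is shorter than its external distance
    by a wide margin. The margin of 0.25 is purely illustrative."""
    return external_dist - internal_dist > margin

# KCN: the C-N distance (.483) is far shorter than K-N (.904), so the
# CN group persists and the compound behaves as K-CN.
print(acts_as_radical(0.483, 0.904))  # True
```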
From a structural standpoint the IO3 group in KIO3 is not a radical, although it acts as a radical in some other physical phenomena, and is commonly recognized as one.

From a thermal standpoint, for example, the IO3 group is definitely a radical at low temperatures, the entire group acting as a unit. But unlike the strong radicals such as OH and CN, which maintain this single unit status under all ordinary conditions, IO3 separates into two thermal units at higher temperatures. Other radical groups are still less resistant to the thermal forces. The CrO3 group, for example, acts as a single thermal unit at the lower temperatures, but in the upper part of the solid temperature range all four atoms are thermally independent. The thermal behavior of chemical compounds, including the examples mentioned, will be discussed in a subsequent volume.

In order to take the place of single atoms in the three-dimensional inorganic structures, the radicals must have three-dimensional force distributions, and where some of the interatomic forces are inherently two-dimensional, as is true in some of the lower group elements, for reasons that will be explained later, the three-dimensional distribution must be achieved by the geometrical arrangement. The typical inorganic radical therefore consists of a group of satellite atoms clustered three-dimensionally around one or more central atoms. Inasmuch as the satellite atoms are between the central atom and the opposite component of the compound, the effective valence of the radical must have the same sign as that of the satellite atoms. This limitation on the net valence means that the great majority of these inorganic radicals are negative, as hydrogen is the only element that has a two-dimensional force distribution when acting in a positive capacity. The most important hydrogen radical of this class is ammonium, NH4, in which hydrogen has the magnetic valence 1 and nitrogen the negative valence 3, for a net group valence of +1. The phosphonium radical is similar, but less common. A variation of NH4 is the tetramethylammonium radical N(CH3)4, in which the hydrogen atoms are replaced by positive CH3 groups.

The theoretically possible number of negative radicals is very large, but the effect of probability factors limits the number of those actually existing to a small fraction of the number that could theoretically be constructed. Other things being equal, those groups with the smallest net displacement are the most probable, so we find the BO2 radical (net valence -1) commonly, and BO3 (-3) less frequently, but not BO4 (-5), BO5 (-7), or the other higher members of this series. Geometrical considerations also enter into the situation, the most probable combinations, where other features are equal, being those in which the forces can be disposed most symmetrically.

The status of the binary radicals such as OH, SH, and CN is ambiguous on the basis of the criteria developed thus far, since there is no distinction between central and satellite atoms in their structures, but these groups can be included with the inorganic radicals because they are able to enter into the three-dimensional inorganic geometric arrangements. Another special class of radicals combines positive and negative valences of the same element. Thus there is the azide radical N3, in which one nitrogen atom with the neutral valence +5 is combined with two negative nitrogen atoms, valence -3 each, for a group total of -1. Similarly, a carbon atom with the primary magnetic valence +2 joins with a negative carbon atom, valence -4, to form the carbide radical, C2, with a net valence of -2.
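The net valence of any such radical is just the signed sum over its members, as this short sketch (the helper name is ours; the valence assignments are those stated in the text) confirms:

```python
def net_valence(members):
    """Signed sum of (valence, count) pairs for the atoms in a group."""
    return sum(valence * count for valence, count in members)

radicals = {
    "NH4": [(-3, 1), (+1, 4)],  # N at -3; four H at the magnetic +1
    "N3":  [(+5, 1), (-3, 2)],  # one N at neutral +5, two N at -3
    "C2":  [(+2, 1), (-4, 1)],  # C at primary magnetic +2, C at -4
}
for name, members in radicals.items():
    print(name, net_valence(members))  # NH4 +1, N3 -1, C2 -2
```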

The common boride radicals, the combination boron structures mentioned in Chapter 18, are B2, B4, and B6. The best known B4 compounds are all direct combinations with valence 4 elements of Division I. It can therefore be concluded that the net valence of the B4 combination is -4. Similarly, the role of B6 in such compounds as CaB6 and BaB6 indicates that the net valence of the B6 radical is -2. The status of B2 is not as clearly indicated, but it also appears to have a net valence of -2; that is, it is simply half of the B4 combination. This net valence of -2 could be produced either by a combination of the -3 negative valence with the secondary magnetic valence, +1, or by a combination of the -5 negative valence with the positive valence +3. The same two alternatives are available for B4. The combination of +1 and -3 valences is also feasible for the radical B6, and on the basis of these values the valences of all of the boride radicals constitute a consistent system, as shown by the following tabulation:

            B2       B4        B6
Positive    B+1      2 B+1     4 B+1
Negative    B-3      2 B-3     2 B-3
Net         -2       -4        -2

On the other hand, the B6 radical cannot be produced by a combination of +3 and -5 valences, and in order to utilize the -5 valence it would be necessary to substitute valence +2 in the positive position. The -3 negative valence thus leads to a more consistent set of combinations, as well as being consistent with the boron valence in the direct combinations of boron with positive elements. At least for the present, therefore, it will have to be concluded that the weight of the evidence favors a single negative valence (-3) for boron.

The general principles of compound formation developed for the simpler combinations apply with equal force to compounds containing radicals of the inorganic class. The basic requirement is that the group valence of the radical be in equilibrium with an equal and opposite valence. A negative radical such as SO4 therefore joins the necessary number of positive atoms to form a compound on the order of K2SO4. The positive NH4 radical similarly joins with a negative atom to produce a compound like NH4Cl. Or both components may be radicals, as in (NH4)2SO4.

One new factor introduced by the grouping is that the relative negativity of the atoms within the group no longer has any significance. The azide group, N3, for instance, is negative, and cannot be anything but negative. In the compound ClN3, then, the chlorine atom is necessarily positive, even though chlorine is negative to nitrogen in direct Division IV combinations such as NCl3.

In the magnetic valence compounds the negative electric displacement is in equilibrium with one of the magnetic displacements of the positive component. This leaves the positive electric displacement free to exert a directional influence on other molecules or atoms. In its general aspects, this directional effect is similar to the orienting influence of the space-time equilibrium that is required in order to enable atoms of negative elements to join with other atoms in compounds. In both cases there are certain relative positions of the interacting atoms or molecules that permit a closer approach, which results in a greater cohesive force. Neither of these orienting agencies contributes anything to the cohesive forces; they simply hold the participants in the positions in which the stronger forces are generated. Without the directional restrictions imposed by these orienting influences, the relative positions would be random, and the greater cohesive forces would not develop.

Since all magnetic valence compounds have free electric displacements, they all have strong combining tendencies, forming what we may call molecular compounds; that is, compounds in which the constituents are molecules instead of the individual atoms or radicals of the atomic compounds. Inasmuch as the free electric displacements are all positive, there is no valence equilibrium involved, and the molecular compounds can be of almost any character, but geometrical and symmetry considerations favor associations with units of the same kind, or with closely related units. Double molecules of a compound are not readily recognized in the solid or liquid states, but in spite of the obstacles to recognition there are many well-known combinations such as FeO·Fe2O3, C2O·CO, etc. Water and ammonia, both magnetic valence compounds, are particularly versatile in forming combinations of this type, and join with a great variety of substances to form hydrates and ammoniates.

There is only one free electric displacement in any binary magnetic valence combination, and the orienting effect is therefore exerted in only one direction. When the active molecular orientation effects, as we will call them, of a pair of molecules such as FeO and Fe2O3 are directed toward each other, the system is closed, and the resulting Fe3O4 association has no further combining tendencies. Even where several H2O molecules combine with the same base molecule, as is very common, the association is between the base molecule and each H2O molecule individually.

A different situation develops where a two-dimensional molecule is formed on the basis of a magnetic valence. Here the intermolecular distance may be reduced to the point where three molecules are within a single natural unit of space, in which case each molecule exerts an orienting effect not only upon its immediate neighbor in the active direction, but also upon the next molecule beyond it. Limitation of the effective inter-atomic forces to two dimensions in this class of compounds contributes to the extension of the magnetic orientation effects in two separate ways. First, it reduces the inter-atomic distance by one third, since there is no effective rotational force in the third dimension. In the compound lithium chloride, for example, the distance between lithium and chlorine atoms on a three-dimensional basis would be 1.321 natural units. By reason of the two-dimensional orientation, this drops to .881 units. Then, the distance between molecules 1 and 3 is further reduced by the geometric effect illustrated in Fig. 3. In an aggregate in which the structural units are arranged three-dimensionally, as in (a), molecule 2 interposes its full diameter between molecules 1 and 3. Where the inter-atomic distance is x, the distance between the centers of molecules 1 and 3 is then 4x. But if the structural units are arranged two-dimensionally, as in (b), this distance is reduced to 2y, where y is the distance between adjacent central atoms. In the case of lithium chloride, this reduction is not sufficient to enable any interaction between molecules 1 and 3, as the 2y distance is 1.398, and no effect is exerted where this distance exceeds unity. But there are other compounds, particularly those of carbon and nitrogen, in which the 2y distance is, or can be, less than unity. The C-C distance, for example, ranges from .406 to .528.

Figure 3: (a) three-dimensional arrangement of the structural units; (b) two-dimensional arrangement.

With some aid from the geometric arrangement in the case of the greater distances, a large number of carbon compounds based on the magnetic orientation are within the range where the orienting effects of the free electric displacement extend to the third molecule.

These two-dimensional magnetic valence molecules with very short inter-atomic distances are actually stable structures with their negative electric rotations fully counterbalanced by appropriate positive magnetic rotations, and they are therefore capable of independent existence in the manner of the other molecules that we have considered. Because of their strong combining tendencies, however, most of them do not actually lead an independent life more than momentarily if there are other molecules present with which they can combine, and in recognition of the fact that they are normally constituents of molecular compounds rather than molecules in their own right we will hereafter refer to them as magnetic neutral groups.

While there are many atomic combinations with inter-atomic distances less than one half natural unit, or so close to this figure that they can be brought within it by structural modifications, the number of such combinations that can form magnetic neutral groups is limited by various factors such as probability, valence, relative negativity, etc. Thus the combinations CN and OH are excluded because they have active valences; that is, they are negative radicals, not neutral groups. NH2 is excluded by a probability situation that will be discussed later; OH2 is excluded because hydrogen is strongly positive to oxygen, and so on. Furthermore, the binary valence two combinations are subject to an additional restriction. Its exact nature is not yet clear, but its effect is to put CO at the limit of stability, so that combinations such as NO and CS are excluded. The practical effect of these several restrictions, together with the limitations on the inter-atomic distance, is to confine the magnetic neutral groups, aside from CO, almost entirely to combinations of carbon, nitrogen, and boron with valence one negative atoms or radicals.

In the subsequent discussion we will find it convenient to use a diagram which identifies the orientation effects that are exerted by the various structural units, and thus shows how the different types of molecular compounds are held in combining positions; that is, positions in which the inter-group cohesive forces are maximized. In the diagram we will represent valence effects by double lines, as in CH3=OH, while the primary molecular orientation effect will be represented by single lines, as in CH-CH. The secondary molecular effects exerted on the third group in line will then be shown by connecting lines, with arrows to indicate the direction of the orienting effect.

Diagram: four CH groups in line, pairs 1-2 and 3-4 each joined by a primary orientation effect, with secondary effects (arrows) extending from group 1 to group 3 and from group 4 to group 2.

As this diagram indicates, there is a primary orientation effect between CH groups 1 and 2, and between groups 3 and 4. Because these effects are unidirectional, and paired, there is no interaction between groups 2 and 3. If the CH groups were three-dimensional, like the FeO and Fe2O3 molecules previously mentioned, there would be no combination between the 1-2 pair and the 3-4 pair, and the result would be two CH-CH molecules. But because group 3 is within one unit of distance of group 1, the orienting effect of the free electric displacement of group 1, which acts at short range against group 2, also acts against group 3 at longer range, as shown in the diagram. Similarly, the 4-3 effect acts at long range against group 2. Thus the 1-2 and 3-4 pairs are held in the combining position by the secondary orientation effects in spite of the lack of any primary effect between groups 2 and 3.

The relation of these orienting influences to the cohesion between the constituents of the atomic or molecular compound can be compared to the effect of a reduced temperature on a saturated liquid. The result of the lower temperature is solidification, and in the solid there is an additional cohesive force between the atoms that did not exist in the liquid, but this new force is not supplied by the temperature. What the change in the temperature actually accomplished was to create the necessary conditions under which the atoms could assume the relative positions in which the inter-atomic forces of cohesion are operative. Similarly, the orienting effects of the valence equilibrium and the free rotational displacement of the magnetic neutral groups do not provide the forces that hold the molecules together; they merely create the conditions which allow the stronger cohesive forces to operate.

When the atoms or neutral groups are subjected to the orienting effects that permit them to establish equilibrium at one of the shorter inter-atomic or inter-group distances, it is the point of equilibrium between the rotational forces and the oppositely directed force due to the progression of the natural reference system that determines the magnitude of the cohesive forces. An important consequence is that the cohesive force between any two specific magnetic neutral groups is the same regardless of whether the orientation results from the short range primary effect, or the long range secondary effect, of the free electric displacements. In the preceding diagram, the magnitude of the cohesive force between groups 2 and 3 is identical with that of the 1-2 and 3-4 forces. It is simply the cohesive force between two CH groups. As we will see later, particularly in Chapter 21, this point is quite significant in connection with the attempts that are being made to draw conclusions concerning the molecular structure from the magnitudes of the inter-group forces.

As the diagram indicates by the arrows at the two ends of the four-group combination, the 2-1 and 3-4 secondary orientation effects are not satisfied, and they are capable of extension to any other atom or group that comes within range. Such a combination of neutral groups is therefore open to further combination in both directions. The system is not closed by the addition of more groups of the same character, since this still leaves active secondary orientation effects at each end of the combined structure.
The unique combining power that results from this continuation of the secondary effects gives rise to an extremely large and complex variety of chemical compounds. There is almost no limit on the number of groups that can be joined. As long as each end of the molecule is a magnetic neutral group with an active secondary effect, there are still two active ends no matter how many groups are added. The necessary closure to form a compound without further combining tendencies can be attained in one of two ways. Enough of these magnetic neutral groups may combine to permit the ends of the chain to swing around and join, satisfying the unbalanced secondary effects, and creating a ring compound. Or, alternatively, the end groups may attach themselves to atoms or radicals which do not have the orienting effects of the magnetic groups. Such additions close the system and form a chain compound.

Both the chain and ring structures are known as organic compounds, a name surviving from the early days of chemistry, when it was believed that natural products were composed of substances of a nature totally different from that of the constituents of inorganic matter. As used herein, the term "organic" will refer to all compounds with the characteristic two-dimensional magnetic valence structure, rather than being defined as usual to cover only carbon compounds with certain exceptions. The excluded carbon compounds are practically the same under both definitions, and the only significant difference is that in this work a few additional compounds, such as the hydronitrogens, which have the same type of structure as the organic carbon compounds, are included in the organic classification.

The valence equilibrium must be maintained in the chain compounds, and the addition of a positive radical or atom at one end of the chain must be balanced by the addition of a negative unit with the same net valence at the other end. This equilibrium question does not arise in connection with the ring compounds, as all of the structural units in the ring are either magnetic neutral groups or neutral associations of atoms or groups with active valences. Here the complete valence balance is achieved within the groups or associations.

In order to join the two-dimensional magnetic group structures, any radicals which are to occupy the end positions must also be two-dimensional. The inherently three-dimensional inorganic radicals such as NO3, SO4, etc., do not qualify. The two-atom and three-atom radicals like OH, CN, and NO2 are arranged three-dimensionally in the inorganic compounds, but they are not necessarily limited to this kind of an arrangement, and they can be disposed two-dimensionally. These radicals are therefore available for the two-dimensional compounds.

The two-dimensional structure also reverses the requirement with respect to the net valence of the radicals. The external contacts of the two-dimensional groups are made primarily by the central atoms, and instead of having the same direction as that of the satellite atoms, the net group valence conforms to the valence of the central atom. These groups, the organic radicals, are therefore opposite in valence to their counterparts among the inorganic radicals. Corresponding to the positive ammonium radical NH4 is the negative amine radical NH2; the negative radical CN-, in which carbon has the magnetic valence 2, has an organic analog in the positive radical CN+, in which carbon has the normal valence 4; and so on. Furthermore, the combinations of carbon and the valence one negative elements, including hydrogen, which are inherently two-dimensional, and are therefore precluded from acting as inorganic radicals, are fully compatible with the two-dimensional neutral groups. Since there are a large number of such combinations, the great majority of the organic radicals are structures of this type.

From the foregoing it can be seen that the organic compounds are subject to exactly the same valence considerations as the inorganic compounds. They are, in fact, atomic associations of identically the same general nature. The only difference is that the very short inter-atomic distances in the magnetic valence compounds of the lower group elements permit the existence of secondary orientation effects that enable these compounds to unite into complex structures. This unification of the whole realm of chemical compounds is an example of the kind of simplification that results when the true reason for a physical phenomenon is ascertained. As we saw in Chapter 18, the formation of chemical compounds takes place because the atoms of the purely electronegative elements (Division IV) cannot establish a stable relationship with atoms of other elements except under certain special conditions in which their negative displacement (motion in time) is counterbalanced by an appropriate positive displacement of the elements with which they are interacting. These requirements are equally as applicable to carbon and the other lower elements as to the constituents of the inorganic compounds. All chemical compounds are governed by the same general principles.

The clarification of the nature of the organic compounds will, of course, require some modification of existing chemical ideas. The concept of an electronic origin of the cohesive forces must be abandoned. Electrons are independent physical entities. They are not constituents of atoms, and they are not available to generate cohesive forces, even if they were capable of so doing. (It should be noted that the foregoing statement does not assert that there are no electrons in the atoms. That is an entirely different issue which will be given consideration when we are ready to begin a discussion of electrical phenomena.) The concepts of "double bonds" and "triple bonds" will also have to be discarded, along with the curious idea of "resonance," in which a system alternating between two possible states is supposed to acquire an additional energy component by reason of the alternation. Some of the theoretical concepts that are untenable in the light of the new findings, such as the "double bonds," have been quite useful in practice, and for this reason many chemists will no doubt find it difficult to believe that these ideas are actually wrong. As explained in the introductory discussion, however, much of the progress that has been made in the scientific field has been made with the help of theories that are now known to be wrong, and have been discarded. The reason for this is that none of these theories was entirely wrong. In order to gain any substantial degree of acceptance a theory must be correct in at least some respects, and, as experience has demonstrated in many cases, these valid features can contribute materially to an understanding of the phenomena to which they relate, even though other portions of the theory are totally incorrect.
The necessity of parting with cherished ideas of long standing will be less distressing if it is realized that the "double bonds" and associated concepts that must now be abandoned are not tangible physical entities; they are merely inventions by which certain empirical relations of a mathematical nature are clothed in descriptive language for more convenient manipulation. Linus Pauling brings this out clearly in the following statements:

The structural elements that are used in classical structure theory, the carbon-carbon single bond, the carbon-carbon double bond, the carbon-hydrogen bond, and so on, also are idealizations, having no existence in reality.... It is true that chemists, after long experience in the use of classical structure theory, have come to talk about, and probably to think about, the carbon-carbon double bond and other structural units of the theory as though they were real. Reflection leads us to recognize, however, that they are not real, but are theoretical constructs in the same way as the individual Kekule structures for benzene.68

When a correct theory appears it must include the valid features of the previous incorrect theory. But the identity of these features as they appear in the context of the different theories is often obscured by the fact that they are expressed in different language. In the case we are now considering, current chemical theory says that the cohesion in organic compounds is due to electronic forces. Development of the Reciprocal System of theory now leads to the conclusion that there are no electrons in the atomic structures, and consequently there are no electronic forces. At first glance, then, it would appear that the new findings repudiate the entire previous structure of thought. On closer examination, however, it can be seen that the electrons, as such, actually play no part in most of the explanations of physical and chemical phenomena that are presumably derived from the electronic theory. The theoretical development actually uses only the numerical values.

For example, the conclusions that are drawn from the positions of the elements in the periodic table are currently expressed in terms of the number of electrons. Carbon has a valence of four in its "saturated" condition because it has four electrons in its atomic structure, so the electronic theory says. It is clear from the empirical evidence that there actually are four units of some kind in the carbon atom, whereas the sodium atom has only one unit of this kind. But the empirical observations give us nothing but the numbers 4 and 1; they tell us nothing at all about the nature of the units to which the numerical values apply. The conclusion that these units are electrons is pure assumption, and the identification with electrons plays no part in the application of the theory. The maximum valence of carbon is four, not four electrons.

Moseley's Law, which relates the frequencies of the characteristic x-rays of the elements to their atomic numbers, is another example. It is currently accepted as "definite proof" of the existence of specific numbers of electrons in the atoms of these elements. Conclusions of the same kind are drawn from the optical spectra. In a publication of the National Bureau of Standards entitled Atomic Energy Levels we find this positive statement: "Each chemical element can emit as many atomic spectra as it has electrons." But, in fact, the empirical evidence in both cases contributes nothing but numbers. Here, again, the observations tell us that certain specific numbers of units are involved, but they give us no indication as to the nature of these units. So far as we can tell from the empirical information, they can be any kind of units, without restriction. Thus, when we discard the electronic theory in application to these phenomena we are not making any profound change; we are merely altering the language in which our understanding of the phenomena is expressed.
Instead of saying that there are 11 electrons in sodium, one of which is in a particular "configuration," we say, on the basis of our theoretical findings, that the total number of effective speed displacement units in the rotational motions of the sodium atom is 11, and that only one of these applies to the electric (one-dimensional) rotation. Carbon has 6 total displacement units in its rotational motions, with 4 in the electric dimension. It follows that in those properties which are related to the total effective speed displacement (the net total quantity of motion in the atom) the number applicable to sodium is 11, and that applicable to carbon is 6, while in those properties which are determined by the displacement in the electric dimension individually the respective numbers are 1 for sodium and 4 for carbon.

It is an equally simple matter to translate the formation of "ionic compounds" from the language of the electronic theory to the language of the Reciprocal System. The electronic theory says that stability is attained by conforming to the "electronic configuration" of one of the inert gas elements, and that potassium and chlorine, for example, accomplish this by transferring one electron from potassium to chlorine, thus bringing both to the status of the inert gas element argon. The Reciprocal System says that chlorine has a negative rotational speed displacement of one unit (a unit motion in time) in its electric dimension, and that it can enter into a chemical combination only by means of a relative orientation in which that negative displacement is balanced at a zero point by an appropriate positive displacement. Potassium has a positive displacement of one unit, and the combination of this one positive unit and the negative unit of chlorine produces the required net total of zero.

So far as the "ionic compounds" are concerned, the Reciprocal System changes practically nothing but the language, as the foregoing example shows. But when the language change is made, it becomes evident that the same theory that applies to this one restricted class of compounds applies to all of the true chemical compounds. On this basis there is no need for the profusion of subsidiary theories that have been formulated in order to deal with those classes of compounds to which the basic "ionic" explanation is not applicable. Instead of calling upon the multitude of different "bonds" (the ionic bond, the ion-dipole bond, the covalent bond, the hydrogen bond, the three-electron bond, and the numerous "hybrid" bonds) that are required in order to adapt the electronic theory to the many types of compounds, the Reciprocal System applies the same theoretical principles to all compounds.

In these cases that we have considered, the translation from electronic language to the language of the Reciprocal System leads to a significant clarification of the mechanism of the processes that are involved. Whatever value there may be in the electronic theory is not lost when that theory is abandoned; it is carried over into the theoretical structure of the Reciprocal System in different language.
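The zero-balance requirement described above lends itself to a simple arithmetical check. The following sketch is illustrative only; it is not part of Larson's text, and the displacement values are simply those quoted above for the elements mentioned:

```python
# Illustrative only: electric-dimension displacements quoted in the text.
electric_displacement = {"K": +1, "Na": +1, "Cl": -1, "O": -2}

def can_combine(components):
    """A combination is possible when the positive displacement balances
    the negative displacement at a net total of zero."""
    return sum(electric_displacement[e] * n for e, n in components) == 0

print(can_combine([("K", 1), ("Cl", 1)]))   # True: +1 and -1 reach zero, as in KCl
print(can_combine([("Na", 1), ("O", 1)]))   # False: +1 does not offset -2
print(can_combine([("Na", 2), ("O", 1)]))   # True: two +1 units balance the -2
```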

CHAPTER 20

Chain Compounds

In undertaking a general survey of such an extended field as that of the structure of the organic compounds it is obviously essential to use some kind of a classification system to group the compounds of similar characteristics together, so that we may avoid the necessity of dealing with so many individual substances. The distinction between chain and ring compounds has already been mentioned. The chemical properties of the chain compounds are determined primarily by the nature of the positive and negative radicals or atoms, and it will therefore be convenient to set up two separate classifications for these compounds, one on the basis of the positive component, and the other on the basis of the negative component. In general, the classifications utilized in this work will conform to the commonly recognized groupings, but the defining criteria will not necessarily be the same, and this will result in some divergence in certain cases.

The first positive classification that we will consider comprises those compounds whose positive components contain valence four carbon atoms. These are called paraffins. This name originally referred only to hydrocarbons, but as used herein it will apply to all chain compounds with valence four carbon at the positive end of the molecule. The term "saturated compound" is commonly used with essentially the same significance so far as the chain compounds are concerned, but its application is usually extended to the cyclic compounds as well. To avoid confusion it will not be used in this work, since the cyclic compounds cannot be considered saturated on the basis of the criteria that we are setting up.

The paraffin hydrocarbon, or alkane, chain is a linking of CH2 neutral groups with a CH3 positive radical at one end of the chain, and a negative hydrogen atom at the other. The cohesion between this hydrogen atom and the adjacent CH2 group is very strong, and for most purposes it will be convenient to regard the CH2 • H combination as a negative CH3 radical. On this basis, the paraffin hydrocarbon chain is CH3 • (CH2)n • CH3.

If a valence two carbon atom is substituted for the valence four carbon atom of the paraffins, the result is an olefin, a chain which is identical with that of the paraffins except that it has the primary magnetic valence radical CH instead of the normal valence radical CH3 in the positive position. The general formula for the olefin hydrocarbons, or alkenes, is CH • (CH2)n • CH3. In the usual version of this formula one of the CH2 groups is placed outside of the CH group, but this is obviously incompatible with the structural principles developed in the preceding pages.

On first consideration it might appear that the chemical evidence is favorable to the conventional CH2 • CH sequence. When we remove all of the internal magnetic neutral groups we come down to CH • CH3 as the theoretical structure of ethylene, the first of the olefins, whereas it is generally agreed that the chemical behavior of this compound is more in harmony with the structure CH2 • CH2. This apparent contradiction is explained by the nature of the CH3 negative radical. As has been pointed out, this radical is actually CH2 • H. For most purposes the combination may be treated as a single unit, but if we express the ethylene formula in full form as CH • CH2 • H it can be seen that the association between the CH and H structural units is closer than that between CH2 and H. It is true that the CH2 group is between CH and H when the ethylene molecule is intact, but CH and H are partners in a valence equilibrium, whereas the intervening CH2 group is neutral. Consequently, if the molecule is sufficiently disturbed by chemical or other means, the CH and H units join and the compound enters the subsequent reaction as two methylene (CH2) molecules. This is not an unusual situation. Many observers have commented that the reacting molecule under such circumstances is not necessarily the same as the static molecule.

A valence one carbon atom in the positive position produces an acetylene. Both the olefin and acetylene classifications, as herein defined, should be understood as including all compounds with the specified positive components, not merely the hydrocarbons. In the acetylenes, as in the olefins, the currently accepted molecular formulas must be revised to put the positive valence component at the end of the chain. We also find that the valence one orientation of a lone carbon atom is more stable if it is joined to a neutral group in which carbon has the same valence, rather than to one in which the carbon valence is +2. The independent carbon atom that constitutes the positive component of the acetylenes is therefore followed by a CH neutral group. The remainder of the acetylene hydrocarbon, or alkyne, molecule is identical with the corresponding portion of a molecule of either of the other two hydrocarbon chains, and the general formula is C • CH • (CH2)n • CH3. Acetylene itself is similar to ethylene in that the true structure is C • CH • H, with a valence equilibrium between the single C and H atoms which causes them to combine if the molecule breaks up. The compound therefore acts chemically as two CH units.

Addition of CH2 neutral groups to the straight chain hydrocarbons does not necessarily take place in the existing chain. The incoming groups may instead be inserted between the positive and negative components of any of the neutral groups, enlarging that group from CH2 to CH • CH2 • H, which we may write as CH • CH3, or CHCH3, as previously indicated. Further additions may then be made in the same manner as they are made in the principal chain, lengthening the neutral group indefinitely. Such a lengthened group is known as a branch of the principal chain, and structures of this kind are called branched chain compounds. No branching of the CH3 radical is possible, since addition of a CH2 group results in CH2 • CH2 • H, or CH2 • CH3, which merely extends the straight chain. A CH2 group may be added to the CH olefin radical, however, as the product in this case is CCH3, which is not equivalent to an extension of the chain. This CCH3 group may then be lengthened in the usual manner to C • CH2 • CH3, and so on.

Under the accepted systems of nomenclature the branched chain compounds are named as derivatives of the straight chain compounds, the chain position being indicated by number, as in 2-methyl butane, 2,3-dimethyl hexane, etc. The added possibility of a modification of the positive radical in the olefins introduces an extra variation into the system which is taken into account by setting up several basic classifications: 1-olefins, 2-olefins, 3-olefins, and so on. Branching is handled in the same manner as in the paraffins, and the compounds have names such as 2-ethyl-1-hexene, 3,4-dimethyl-2-pentene, etc.
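Since the three chains differ only in their positive radicals, the molecular formulas they generate can be checked mechanically. The following sketch is an illustration, not part of the original text; the group symbols follow the notation above, and the helper functions are hypothetical:

```python
def atoms(group):
    """Count the C and H atoms in a group symbol such as 'CH2', 'CH3', 'CH', or 'C'."""
    c = group.count("C")
    h = 0
    if "H" in group:
        tail = group.split("H", 1)[1]
        h = int(tail) if tail.isdigit() else 1
    return c, h

def formula(groups):
    c = sum(atoms(g)[0] for g in groups)
    h = sum(atoms(g)[1] for g in groups)
    return f"C{c}H{h}"

n = 3  # interior CH2 neutral groups; any n gives the same family pattern
print(formula(["CH3"] + ["CH2"] * n + ["CH3"]))      # C5H12: paraffin, CnH2n+2
print(formula(["CH"] + ["CH2"] * n + ["CH3"]))       # C5H10: olefin, CnH2n
print(formula(["C", "CH"] + ["CH2"] * n + ["CH3"]))  # C6H10: acetylene, CnH2n-2
```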

The names applied to the paraffins under this current system are equally applicable to these compounds on the basis of the structural relations developed in this work. However, the current ideas as to the structure of the olefins and acetylenes, and the system of nomenclature that has been applied to them, are products of the electronic theory of compound formation. The results of our theoretical development show that certain modifications of the previously accepted structural arrangements are required, as has been noted, and the nature of these modifications is such that changes in the names applied to some of the compounds would also be appropriate. On this new basis no special system of names is required for the olefins, as the paraffin system can be applied to the olefins as well. The only difference between the two is in the branching of the olefin radical, and this can be handled by utilizing the 1-alkyl term, available but not used in the paraffin compounds. On this basis 1-pentene, CH • (CH2)3 • CH3, will become simply pentene, while 2-pentene, CCH3 • (CH2)2 • CH3, becomes 1-methyl butene, and 3-pentene, (C • CH2 • CH3) • CH2 • CH3, becomes 1-ethyl propene. The paraffin names are also applicable to the acetylenes in the same manner. 1-pentyne, C • CH • (CH2)2 • CH3, becomes pentyne; 2-pentyne, C • CCH3 • CH2 • CH3, becomes 2-methyl butyne, and so on. Such a revision of the nomenclature is not only desirable from the standpoint of more accurately reflecting the true structure of the molecules, and for the sake of uniformity, but also accomplishes a substantial amount of simplification.

The information derived from theory will likewise require some modification of the conventional methods of representing the molecular structure of the organic compounds. The so-called "extended" formulas, based on concepts such as electrons and double bonds that have no place in the molecule as we find it, must be discarded. But for most purposes the exact arrangement of the individual atoms is immaterial. The structural unit is the group rather than the atom, and the positions of the groups determine the nature and magnitude of the structure-dependent properties of the compound. The notation that has been used thus far, the "condensed" structural formula which shows only the composition and sequence of the groups, is therefore adequate for most normal applications.

The usual arrangement of these condensed formulas is not entirely satisfactory, as it does not recognize the existence of positive and negative valences, and therefore fails to distinguish between groups of the same composition but opposite valence. The CH3 end groups in the paraffin molecule, for example, are currently regarded as identical. Since the opposing valences play a very important part in the molecular structure it is desirable that the formula should definitely indicate the positive and negative components of the compound. This can be accomplished without any serious dislocation of familiar patterns by identifying the positive and negative components of the compound as a whole with the left and right ends of the formula respectively, as is common practice in the inorganic division.
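As a simple illustration of this left-to-right convention (a sketch only; the representation is not Larson's), a chain can be held as an ordered sequence of groups, so that the two CH3 end groups are distinguished by position rather than composition:

```python
# Hypothetical representation: positive component first, negative component last.
pentane = ["CH3", "CH2", "CH2", "CH2", "CH3"]

positive_radical = pentane[0]    # CH3 with the +4 carbon valence
negative_radical = pentane[-1]   # actually CH2 • H: a CH2 neutral group plus negative H

print(" • ".join(pentane))       # CH3 • CH2 • CH2 • CH2 • CH3
print("positive:", positive_radical, "| negative:", negative_radical)
```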
It would be logical to extend this policy to the individual components of the molecules, and that probably should be done some day as a matter of consistency, but some compromise with logic and consistency seems advisable in this present work in order to avoid creating further complications for the readers, who already have many unavoidable departures from conventional practice to contend with. The familiar expressions for such primary units as NH2 and OH will therefore be retained, together with expansions such as NH • CH2 • CH3, O • CH2 • CH3, etc., even though this reverses the regular positive to negative order in most of the negative radicals. Continued use of CH3 rather than CH2 • H to represent the negative methyl radical is also a departure from consistent practice, but in this case the condensed form is not only more familiar but also more convenient. The full CH2 • H representation will therefore be used only where, as in the discussion of the structure of the ethylene molecule, it is necessary to stress the true nature of the radical. In the case of the analogous CH2 negative radical there is no significant advantage to be gained by use of the condensed expression, and this radical, which is a combination of a CH neutral group and a negative hydrogen atom, will be shown in its true form as CH • H.

For a correct representation of the molecular structure it is essential that the neutral groups be clearly identified. Where there are methyl substitutions, the identification can be accomplished by omitting the dividing mark between the components of the neutral group; e.g., CH3 • CHCH3 • CH2 • CHCH3 • CH3, 2,4-dimethyl pentane. Longer neutral groups can be identified by parentheses, the positive-negative order being preserved within the group. The formula of 3-propyl pentane on this basis is CH3 • CH2 • (CH • CH2 • CH2 • CH3) • CH2 • CH3. If further subdivision within the neutral groups is necessary, the distinction between main and subgroupings can be indicated by brackets or other suitable symbols. Where a valence two negative component is involved and the chain is double, the customary expression such as (CH3 • CH2)2 • O is appropriate if the chains are equal. Unequal chains can be represented by treating the valence two component and one of the branches as a negative radical in this manner: CH3 • CH2 • CH2 • (O • CH2 • CH3), or the two branches can be shown on separate lines, as
CH3 • CH2 • CH2
              • O
CH3 • CH2

In order to facilitate the presentation of the new principles of molecular structure that have been developed from the postulates of the Reciprocal System the revised structural formulas as described in the foregoing paragraphs will be used throughout this work. In designating positions in the chain we will number from the positive end, rather than following the Geneva system, which regards the two ends as interchangeable. The different numbering is necessary for clarity, in view of the modifications that have been made, not only in the order of the groups but also, in some cases, in the group composition. However, this revised numbering will be used only for purposes of the discussion, and the accepted names of the compounds will be retained, to avoid unnecessary confusion. A complete overhaul of the organic nomenclature will be advisable sooner or later.

The somewhat minor modifications of current structural ideas that are required in the olefins and acetylenes become more significant in the diolefins, a class of compounds in which a pair of CH neutral groups with the acetylene carbon valence (one) is inserted into the olefin chain, a valence two structure. The C5 compounds of this class are known as pentadienes. If the CH groups replace the CH2 groups in the third and fourth positions of pentene the result is CH • CH2 • CH • CH • CH3. Instead of using the same numbering system that is applied to the other hydrocarbon families, the diolefins are numbered according to the locations of the hypothetical "double bonds," and this compound is called 1,3-pentadiene. Since the CH3 group at the negative end of the pentene molecule is actually CH2 • H, the CH2 portion is open to replacement by CH. The incoming CH groups may therefore occupy the fourth and fifth positions, producing CH • CH2 • CH2 • CH • CH • H, now called 1,4-pentadiene. Another possible structure involves removing the hydrogen atom from the CH positive radical, and splitting the molecule into two chains. If the chains are equal, we have C(CH • CH3)2, which we may also represent as
    CH • CH3
C
    CH • CH3

This is 2,3-pentadiene. A variation of this structure removes the CH2 group from one of the CH3 combinations. This reduces the compound to a C4 status, but it can be brought back up to a pentadiene by inserting the CH2 group in the other branch, which produces what is called 1,2-pentadiene:
    CH • CH2 • CH3
C
    CH • H

One of the most important of the diolefins, from the industrial standpoint, is isoprene, another C5 compound, currently called 2-methyl-1,3-butadiene. The structure is the same as that of 1,4-pentadiene, except that the CH2 group next to the first of the CH neutral groups is moved out of the chain and attached to the CH group as a branch: CH • CH2 • CCH3 • CH • H.

Nitrogen, which is next to carbon in the atomic series, is also the next most prolific in the formation of compounds. Some of the "carbon" compounds, such as urea, one of the first organic compounds to be synthesized, actually contain more nitrogen than carbon, but the positive component in these compounds is carbon, and the lengthening of the chain takes place primarily by the addition of carbon groups. There are other compounds, however, in which nitrogen takes the positive role both in the compound as a whole and in the neutral groups.

Corresponding to the hydrocarbons are the hydronitrogens. The positive nitrogen radical in these compounds is NH2+, in which nitrogen has the enhanced neutral valence three. A combination of this radical with the negative amine group is hydrazine, NH2 • NH2. Inserting one NH neutral group we obtain triazane, NH2 • NH • NH2. Another similar addition produces tetrazane, NH2 • NH • NH • NH2. Just how far this addition process can be carried is uncertain, as the theoretical limits have not been established, and the hydronitrogens have not been given the same exhaustive study as the corresponding carbon compounds. A nitrogen series corresponding to the acetylenes has a lone nitrogen atom with the secondary magnetic valence one as the positive component. The parent compound of this series is diimide, N • NH2. One added NH neutral group results in triazene, N • NH • NH2, and by a second addition we obtain tetrazene, N • NH • NH • NH2. Here again, the ultimate length of the chain is uncertain.

All of the neutral groups in these nitrogen compounds have the composition NH, in which nitrogen has the secondary magnetic valence one. A neutral group NH2 based on the primary magnetic valence is theoretically possible, but this group is identical with the amine radical except for the rotational orientation, and the orientation is subject to change in accordance with the relative probabilities. The amine radical is a more probable structure, and it prevents the existence of the NH2 neutral group. The NH2+ radical is also a much less probable structure than the amine radical, in which nitrogen has its normal negative valence, but this positive radical is not in competition with the amine group. Wherever a number of NH2 units exist in close proximity the interatomic forces tend toward combination, and in order that such combination may take place some groups must be reoriented so that they may act as the positive components of the compounds. The NH2+ radical has the most probable of the positive orientations, and it therefore takes over the positive role in NH2 • NH2 and similar combinations, a position that is not open to the amine radical. The NH2 neutral group has no such protected status.

Beyond carbon and nitrogen the ability to form compounds of the molecular type drops sharply, but the corresponding elements in the higher groups do participate in a few compounds of this nature. Silicon forms a series of hydrides analogous to the paraffin hydrocarbons, with the composition SiH3 • (SiH2)n • H, and also some compounds intermediate between the silicon and carbon chains. Typical examples of the latter are SiH3 • CH2 • SiH2 • H, and Si(CH3)3 • CH2 • SiH2 • CH2 • SiH2 • H. Germanium forms a series of hydrides, known as germanes, which are similar to the silicon hydrides, or silanes, and have the composition GeH3 • (GeH2)n • H. Only a few members of this series are known. An unstable tin hydride, SnH3 • SnH2 • H, has also been reported.

It could be expected that the higher valence three elements would form a limited number of compounds similar to the hydronitrogens, but the known compounds of this type are still scarce. Among those that have been reported are diphosphine, PH2 • PH2, and cacodyl, As(CH3)2 • As(CH3)2. Since the minimum magnetic valence of phosphorus and arsenic is two, these compounds cannot have the hydrazine structure NH2 • NH • H, and are probably PH • PH2 • H and AsCH3 • As(CH3)2 • CH3. As pointed out in connection with ethylene and acetylene, the chemical behavior of such compounds is explained by the tendency of the positive and negative components of the compound as a whole, such as PH and H in diphosphine, to join when the compound is disturbed during a chemical reaction.

Another series of compounds of the molecular class, but not related to either carbon or nitrogen, is based on boron. Because it acts as a Division IV element in these two-dimensional compounds, boron takes the valence five, rather than the normal valence three which it has in a compound such as B2O3, where it acts as an element of Division I. The valence one radical on the valence five basis would be BH4, or an equivalent, but such a radical would be three-dimensional, and not capable of joining a two-dimensional chain. The positive radical in the boron chain is therefore the valence two combination BH3.
As in the hydrocarbons, the negative component of the molecule as a whole is hydrogen, and because of the valence of the positive radical two negative hydrogen atoms are required. Here again, the association between the hydrogen atoms and the adjacent BH neutral group is close, as in the hydrocarbons, and the combination could be regarded as a valence two negative BH3 radical. For present purposes, however, it appears advisable to show it in its true form as BH • H2.

The magnetic neutral groups of the boron compounds can be formed on the basis of either the primary or the secondary magnetic valence, which produce BH2 and BH respectively. Because it minimizes the number of hydrogen atoms at the negative end of the molecule, the negative radical BH • H2 takes precedence over BH2 • H2 even where the interior groups are BH2 combinations. This presence of a BH neutral group at the negative end of the compound, together with some other factors that apparently favor BH over BH2, has the effect of making the BH structures more stable than those in which the neutral groups are BH2.

The basic hydride of boron is diborane, BH3 • BH • H2. Addition of BH neutral groups produces a series of compounds with the composition BH3 • (BH)n • H2, the best known of which are hexaborane, in which n is 5, and decaborane, in which n is 9. Substitution of a pair of BH2 groups for two of the BH groups results in a series which has the composition BH3 • (BH2)2 • (BH)n • H2. Beyond tetraborane, the first member of this series (n=1), these compounds, as indicated in the preceding paragraph, are less stable than the corresponding compounds of the all-BH series. In all of these boron compounds replacement of hydrogen atoms by other valence one atoms or radicals is possible in the same manner as in the hydrocarbons, but to a much more limited extent.

As noted earlier, the extension of Division IV characteristics into Division III, which gives rise to the two-dimensional combining tendencies of boron, does not apply to the corresponding elements of the higher groups to any substantial degree, and they do not duplicate the boron series of compounds. There is an unstable hydride of aluminum, Al2H6, and a compound Ga2H6 called digallane, both of which may be structurally similar to diborane, but there is little, if any, lengthening of these compounds by means of magnetic neutral groups.

From the overall chemical standpoint, the molecular compounds formed by positive elements other than carbon are not of much concern, and they are given little or no attention in any but specialized textbooks. They are important in the present connection, however, because they serve to confirm the theoretical conclusions that were reached with respect to the structure of the carbon compounds. The nitrogen and boron compounds are not only constructed in accordance with the general pattern deduced from theory, and followed by the carbon compounds (that is, a chain of magnetic neutral groups with a positive radical at one end and a negative radical at the other), but also support the theoretical conclusions with respect to the structural details, inasmuch as they are like the carbon compounds in those respects in which the theory finds these elements to be alike, whereas they differ from the carbon compounds in those respects in which there are theoretical differences. For example, all three of these elements form both valence two (CH2, etc.) and valence one (CH, etc.) magnetic neutral groups (with the exception of NH2, the absence of which has been explained), because these magnetic valences are properties of the group of elements (2A) to which all three belong. On the other hand, the radicals in the end positions are unlike, because the electric valences, which apply to these radicals, are properties of each of the three elements individually, and they are all different.
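The boron-hydride compositions just given can be tallied mechanically, and they reproduce the accepted formulas. The sketch below is illustrative only; the series definitions are those stated above, and the helper functions are hypothetical:

```python
def bh_series(n):
    """BH3 • (BH)n • H2 -> (boron count, hydrogen count)."""
    return (1 + n, 3 + n + 2)

def bh2_series(n):
    """BH3 • (BH2)2 • (BH)n • H2 -> (boron count, hydrogen count)."""
    return (3 + n, 3 + 4 + n + 2)

print(bh_series(1))   # (2, 6):   diborane, B2H6
print(bh_series(5))   # (6, 10):  hexaborane, B6H10
print(bh_series(9))   # (10, 14): decaborane, B10H14
print(bh2_series(1))  # (4, 10):  tetraborane, B4H10
```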

The second system of classification of the organic chain compounds, that based on the nature of the negative components, is not an alternate but a parallel system. A compound classified as an alcohol because of the nature of its negative component also belongs to one of the categories set up on the basis of the identity of the positive component. The previous discussion was confined mainly to the hydrocarbons to simplify the presentation, but all of the statements that were made with reference to compounds in which the negative component is hydrogen, alone or in combination with CH2 as a negative CH3 radical, are equally applicable to those in which the hydrogen has been replaced by an equivalent negative atom or group. Thus we have paraffinic alcohols, olefinic (unsaturated) alcohols, and so on.

The primary requirement for the one-for-one substitutions is that the valence of the substituent must conform to the hydrogen valence both in magnitude and in sign. This requirement has been obscured to a large extent by current structural theories, which do not recognize the existence of positive and negative valence in organic compounds, but some of the hydrogen atoms in these compounds are positive and others are negative, and this determines what substitutions can take place. Hydrogen in combination with carbon is negative, and may be replaced by any of the halogens or by negative radicals. Hydrogen combined with oxygen is positive, and can therefore be replaced only by positive elements and radicals. Thus from acetic acid, CH3 • CO • OH, we obtain by substitution CH2Cl • CO • OH, chloroacetic acid, but CH3 • CO • ONa, or Na • (O • CO • CH3), sodium acetate.

A hydrogen atom acting alone may be either positive or negative, depending on its environment. The hydrogen atom at the end of a hydrocarbon chain is negative, and may be replaced by a halogen. CH3 • CH2 • H, ethane, becomes CH3 • CH2 • Cl, ethyl chloride. The lone hydrogen atom in formic acid, H • CO • OH, is positive, and a halogen cannot replace it. The normal valence alkali elements cannot replace this lone magnetic valence hydrogen atom either, and an incoming positive atom goes to the OH radical. The hydrogen in N-H combinations is also resistant to monatomic substitutions, but replacement by radicals of the proper valence is readily accomplished.

Elements with higher valences substitute quite freely for either carbon or hydrogen in the positive and negative radicals, but enter into the magnetic neutral groups mainly as constituents of the common valence one radicals: OH, NH2, etc. Except in the direct carbon-oxygen combination CO, a single atom of valence two or three in a neutral group is necessarily a constituent of an extended radical such as (O • CH2 • CH3).

In beginning a consideration of the principal families of substituted compounds, we will look first at the alcohols. This alcohol classification is one of several which result from the addition of oxygen to the hydrocarbons in different ways. Here an OH radical is directly attached to a hydrocarbon group, replacing a negative hydrogen atom. It is not essential, however, that this OH group replace the particular atom that constitutes the negative component of the compound as a whole. The chemical behavior of the normal alcohols, in which the OH radical is at the end of the chain, as in ethyl alcohol, CH3 • CH2 • OH, is closely paralleled if OH is substituted for a hydrogen atom in one of the neutral groups, as in secondary butyl alcohol, CH3 • CH2 • CHOH • CH3.
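The sign rule stated above can be put in a compact form. The sketch below is an illustration only; the valence assignments are those given in the text, and the function itself is hypothetical:

```python
# Valences assumed from the text: hydrogen bound to carbon is negative,
# hydrogen bound to oxygen is positive.
valence = {"Cl": -1, "Br": -1, "Na": +1, "K": +1,
           "H(on C)": -1, "H(on O)": +1}

def can_substitute(substituent, hydrogen):
    """Substitution requires agreement in both magnitude and sign."""
    return valence[substituent] == valence[hydrogen]

print(can_substitute("Cl", "H(on C)"))  # True:  acetic acid -> chloroacetic acid
print(can_substitute("Na", "H(on C)"))  # False: Na cannot take a negative position
print(can_substitute("Na", "H(on O)"))  # True:  acetic acid -> sodium acetate
```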
If the substitution takes place in the positive radical the result is somewhat different. Such a substitution is more readily made if oxygen is first introduced at the more favorable negative end of the compound, and the product of a double OH substitution is a dibasic alcohol, or glycol, the most familiar compound being ethylene glycol, CH2OH • CH2 • OH.

Earlier in this chapter it was noted that the paraffin hydrocarbons are not actually the symmetrical structures that they appear to be. There is a combination of one carbon atom and three hydrogen atoms at each end of the molecule, but one end of the chain is necessarily positive, which means that the CH3 group at this end is a radical in which carbon has the +4 valence, while the other end is necessarily negative, and this, as previously explained, means that the CH3 group in this position is actually a close association of a negative hydrogen atom with a CH2 neutral group in which carbon has its +2 valence. Where the true molecular structure is important, as in understanding the chemical behavior of ethylene, it is essential to recognize that CH3 in the negative position is, in fact, CH2 • H. As indicated in the formula given for ethylene glycol, this same asymmetry also exists in the other seemingly symmetrical compounds. The CH2OH group in the positive position in the glycols has a +4 carbon valence and a group valence of +1. In the negative position, the carbon valence is +2, and the true structure is CH2 • OH. The chemistry textbooks contain statements such as this: "Theoretically the simplest glycol should be dihydroxy methane, CH2(OH)2." The foregoing explanation of the glycol structure shows why this compound would not be a glycol, and why no such compound has been found.

An oxygen atom added to a hydrocarbon may replace the two hydrogen atoms of a CH2 neutral group rather than forming an OH radical. The resulting group CO is very close to the point of not being able to act as a magnetic neutral group at all, and it is greatly restricted as to its position in the molecule. Straight chains of CO groups similar to the CH2 chains are not possible. This explains why carbon monoxide occurs as a separate compound, whereas methylene does not. In order to enable the CO group to join an organic combination some assistance from the geometric arrangement is necessary (a point which will be discussed further in connection with our examination of the ring compounds), and in the chain compounds this can be accomplished most readily at the negative end of the molecule. In the usual arrangement, therefore, a single CO neutral group is joined directly to the negative atom or radical.

If the negative component is the radical OH, the resulting compound contains the combination CO • OH, and is an acid. Acetic acid, CH3 • CO • OH, and acrylic acid, CH • CH2 • CO • OH, are representative paraffinic and olefinic (unsaturated) acids respectively. Here again, a shift of the carbon valence to +4 produces a positive radical of the same composition, and enables formation of dibasic acids, such as oxalic acid, COOH • CO • OH, maleic acid, COOH • CH • CH • CO • OH, etc.

Modification of the acid structure by substituting an alkyl group for the hydroxyl hydrogen results in another prolific family of compounds, the esters. Ethyl acetate, CH3 • CO • (O • CH2 • CH3), and diethyl oxalate, CO(O • CH2 • CH3) • CO • (O • CH2 • CH3), are typical of the mono and di esters respectively. A similar substitution in an alcohol produces an ether. This compound may be considered as a radical of the composition O • (CH2)n • CH3 in combination with an alkyl group.
If we now substitute a second radical of the same kind for one of the hydrogen atoms in the adjacent hydrocarbon group we obtain an acetal. Another such replacement results in an orthoester. By successive substitutions in ethyl alcohol, CH3 • CH2 • OH, for instance, we produce methyl ethyl ether, CH3 • CH2 • (O • CH3), dimethyl acetal, CH3 • CH • (O • CH3)2, and trimethyl orthoacetate, CH3 • C • (O • CH3)3. Elimination of a water molecule from two acid molecules produces an anhydride, such as acetic anhydride, (CH3 • CO)2 • O. No new structural features are involved in these compounds.

If the CO neutral group is joined directly to the negative hydrogen atom at the end of the hydrocarbon chain the compound is an aldehyde. Acetaldehyde, CH3 • CO • H, is the most familiar member of this family. The aldehyde radical is usually expressed as CHO (to avoid confusion with the OH radical, the textbooks say), but this does not reflect the true status of the CO combination as a neutral group. It may be worth noting that the CHO representation also does not explain, as the CO • H formula does, why one of the most prominent features of the aldehydes is that they are good reducing agents. Like the other organic families that have been discussed thus far, the aldehydes form dibasic, as well as monobasic, compounds. The simplest dibasic aldehyde is glyoxal, COH • CO • H.

As in such structures as COOH • CO • OH, the conversion of the negative radical to a positive radical involves a valence shift, but in the acids the change is in the carbon valence, which goes from +2 in CO • OH to +4 in COOH, while in the aldehydes the change is in the hydrogen valence, which goes from -1 in CO • H to +1 in COH. These are the most basic valence changes in organic reactions, and their concurrent accomplishment is an essential element in a wide variety of chemical reactions. For instance, in the addition reactions that convert olefinic compounds to the paraffin status, such as adding HBr to acrylic acid, the carbon valence in the positive radical increases two units from +2 to +4. At the same time, the hydrogen atom that had a +1 valence in HBr decreases that valence by two units to the -1 level in the addition product CH2Br • CH2 • CO • OH.

There are no obstacles in the way of a change of valence. This is merely a matter of reorientation, a change of rotational direction, and each atom is free to reorient itself to conform to its environment. But the positive-negative balance in the compound must be maintained, and the change from positive to negative, or vice versa, in the hydrogen valence is one of the most common ways of compensating for an increase or decrease in the carbon valence.

Because of the close association between the negative hydrogen atom of the hydrocarbons and the adjoining CH2 group, the CO neutral group is able to occupy a position adjoining the CH2 • H combination as an alternate to the aldehyde position next to the hydrogen atom. In this more remote position it is near the limit of stability, and this makes association with the positive radical more probable than participation in the negative combination CO • CH2 • H. For this reason, the monobasic compounds in this family, the ketones, have oxygen in the positive radical, COCH3, rather than in the negative radical as usual. The first member of the family, dimethyl ketone, or acetone, has the structure COCH3 • CH2 • H. The corresponding dibasic compound is dimethyl diketone, COCH3 • CO • CH2 • H.

The monobasic ketone structure can be verified by comparing the results of simple addition reactions of the ketones with those of the aldehydes, the isomeric compounds in which the CO group is neutral.
The addition of hydrogen to the aldehydes proceeds in this manner:

CH3 • CH2 • CO • H + H2 = CH3 • CH2 • CH2 • OH

The final product, propyl alcohol, is a normal chain compound with a CH3 radical in the positive position, just as in the aldehyde itself. Only the negative end of the molecule has been altered. If the CO group in the corresponding ketone, methyl ethyl ketone, or 2-butanone, had the same status as in the aldehyde (that is, if the compound were CH3 • CH2 • CO • CH3), we would expect essentially the same result. We would expect the CH3 positive radical to remain intact, and the product to be a primary, or perhaps a secondary, alcohol. But since the CO group in the ketone is part of a radical in which the carbon valence is four, and the compound is actually COCH3 • CH2 • CH3, both CH3 groups are negative. Addition of a hydrogen atom to the neutral group CH2 produces a third negative CH3 group. Inasmuch as no positive CH radical is present, hydrogenation results in a tertiary alcohol, in which the CH3 groups are negative, as in the original ketone:

COCH3 • CH2 • CH3 + H2 = C(CH3)3 • OH

In the organic chain compounds thus far discussed, lengthening of the chain is accomplished mainly by the addition of CH2 neutral groups and, in some cases, CH • CH pairs. Introduction of oxygen produces a neutral group CHOH, and substitution of this group for CH2 originates additional families of compounds. These include such important substances as the hydroxy acids, the polyhydroxy alcohols, and the saccharides. The hydroxy acids may be either monobasic, like lactic acid, CH3 • CHOH • CO • OH, or dibasic, similar to tartaric acid, COOH • (CHOH)2 • CO • OH. In both cases the chains can be extended by adding more CHOH groups, although addition of CH2 is also possible, as in malic acid, COOH • CHOH • CH2 • CO • OH. The polyhydroxy alcohols are extensions of the glycol chain with CHOH neutral groups. The general formula is CH2OH • (CHOH)n • CH2 • OH. The saccharides result from conversion of the CH3 radicals in the aldehydes and ketones to CH2OH and addition of CHOH neutral groups. The products derived from the aldehydes are aldoses, the general formula for which is CH2OH • (CHOH)n • CO • H. Those derived from the ketones are ketoses, and have the structure (CO • CH2 • OH) • (CHOH)n • CH2 • OH.

When nitrogen is introduced into an aldehyde or ketone, replacing the carbon-oxygen combination with a triple combination of nitrogen, hydrogen, and oxygen in the form of the valence two oxime radical NH • O, the nature of the addition products again shows the same relation to the structures of the two oxo derivatives that we noted in the case of hydrogen addition. Adding NH to the aldehyde alters only the negative radical, which expands from CO • H to CH • NH • O. Propionaldehyde, CH3 • CH2 • CO • H, for example, becomes propionaldehyde oxime, CH3 • CH2 • (CH • NH • O). On the other hand, addition of NH to the ketones requires a molecular rearrangement to bring both CH3 groups, which are negative, into combination with positive carbon in the positive radical. Adding NH to acetone, COCH3 • CH3, produces dimethyl ketoxime, C(CH3)2 • NH • O. As indicated in these formulas, it is necessary to change the expression for the oxime radical from the conventional NOH to NH • O to show the true composition.
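The two hydrogenation reactions given above can be checked by a simple atom count. The sketch below is illustrative only; the reactions are the two just written, and the parsing helper is hypothetical, handling only C, H, and O:

```python
from collections import Counter

def count(groups):
    """Tally atoms over group symbols such as 'CH3', 'CO', 'OH', 'H2'."""
    totals = Counter()
    for g in groups:
        i = 0
        while i < len(g):
            el = g[i]; i += 1
            digits = ""
            while i < len(g) and g[i].isdigit():
                digits += g[i]; i += 1
            totals[el] += int(digits) if digits else 1
    return totals

# Aldehyde: CH3·CH2·CO·H + H2 -> CH3·CH2·CH2·OH (propyl alcohol)
assert count(["CH3", "CH2", "CO", "H", "H2"]) == count(["CH3", "CH2", "CH2", "OH"])

# Ketone: COCH3·CH2·CH3 + H2 -> C(CH3)3·OH (a tertiary alcohol)
assert count(["COCH3", "CH2", "CH3", "H2"]) == count(["C", "CH3", "CH3", "CH3", "OH"])

print("both hydrogenations balance")
```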

Another way in which nitrogen may be introduced into the hydrocarbons is by substituting the NH2 amine group for negative hydrogen. Further substitutions are then possible for the positive hydrogen atoms in NH2, giving rise to a great variety of structures. The compounds in which the NH2 radical remains intact are primary amines, those with NH and one positive substitution are secondary amines, and those in which both hydrogen atoms have been replaced, leaving only the lone nitrogen atom from the original amine group, are tertiary amines. Since the amine replacements are positive, these compounds may have more than one olefinic branch, as in diallylamine, (CH • CH2 • CH2)2 • NH, a type of structure not found in the hydrocarbons, where all hydrogen atoms are negative, and can be replaced only by negative substituents. Diamines have the usual double structure, with CH2NH2 in the positive position and the normal amine combination CH2 • NH2 at the negative end of the molecule.

Like the hydroxyl group OH, which attaches to CH to form the neutral group CHOH, the amine group joins with CH to form a neutral group CHNH2. This group is more restricted as to its position in the chains than CHOH, which substitutes quite freely for CH2, but it has a special importance in that it is an essential component of the amino acids, which, in turn, are the principal building blocks of the proteins, the basic constituents of living matter. In the monoacids the CHNH2 group in effect extends the acid radical from CO • OH to CHNH2 • CO • OH. Further lengthening of the chain takes place by addition of hydrocarbon neutral groups, or CHOH, rather than CHNH2. Thus d-alanine, CH3 • CHNH2 • CO • OH, lengthens to l-leucine, CH3 • CHCH3 • CH2 • CHNH2 • CO • OH. These two compounds are members of one sub-group of the amino acids in which the positive radical is CH3. A second sub-group utilizes the carboxyl radical COOH in the positive position. The simplest compound of this type is d-aspartic acid, COOH • CH2 • CHNH2 • CO • OH. The third of the sub-groups, the diamino acids, has amine radicals in both the positive and negative positions, as in d-lysine, CH2NH2 • (CH2)3 • CHNH2 • CO • OH.

Another combination containing nitrogen is the cyanide, or nitrile, radical. In the normal radical CN nitrogen has the negative valence three and carbon has the primary magnetic valence two, the net group valence being -1. The positive and negative roles are reversed in the radical NC, in which nitrogen has the enhanced neutral valence three. In this orientation nitrogen has Division III properties, and is positive to carbon rather than negative as usual. Since the negative valence of carbon is four, the net valence of the radical NC is -1, identical with the valence of CN. The NC compounds, the isocyanides, therefore have the same composition as the cyanides, but different properties. The CN+ radical makes its appearance in such compounds as cyanoacetic acid, CN • CH2 • CO • OH. Here nitrogen is negative, as in the CN- radical, but carbon has the normal positive valence four, and the net group valence is therefore +1. Cyanogen, CN • CN, is a combination of the +1 and -1 radicals. Compounds with the CO • CN combination in the negative position are not generally regarded as constituting a separate family, and are named as members of the normal cyanides.
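The valence bookkeeping for these radicals reduces to simple sums. The figures below are the assignments stated in the text; the check itself is only an illustration:

```python
# Net group valence = sum of the member valences given in the text.
radicals = {
    "CN (cyanide)":      [+2, -3],  # C +2, N -3 -> net -1
    "NC (isocyanide)":   [+3, -4],  # N +3, C -4 -> net -1
    "CN+ (cyanoacetic)": [+4, -3],  # C +4, N -3 -> net +1
}

for name, valences in radicals.items():
    print(name, "->", sum(valences))

# Cyanogen, CN·CN, combines the +1 and -1 radicals for a net of zero.
print("cyanogen:", (+4 - 3) + (+2 - 3))
```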

Introduction of the CO neutral group in conjunction with NH2 produces an amide, a structure which is open to an unusually wide variety of additions and substitutions. If we start with acetamide, CH3 • CO • NH2, we may add CH2 groups in the normal manner to form propionamide, CH3 • CH2 • CO • NH2, and the higher homologs, or we may substitute positive radicals for the amine hydrogen, obtaining compounds like N-ethyl acetamide, CH3 • CO • (NH • CH2 • CH3). The NH combination, which has a net valence of -2, can take the place of oxygen in the CO group of the amide, forming a CNH neutral group which has similar properties. Such a replacement in acetamide gives us acetamidine, CH3 • CNH • NH2. If the neutral CO group in acetamide is replaced by the positive CO radical we obtain aminoacetone, COCH3 • CH2 • NH2. Further replacement of carbon by nitrogen then changes the radical COCH3 to CONH2, and produces a whole new series: urea, CONH2 • NH2, and its derivatives. Another CO group changes the monobasic carbamide, urea, to a dibasic compound, oxamide, CONH2 • CO • NH2.

A negative combination of oxygen and nitrogen that can be substituted for hydrogen is the nitro group, NO2. This results in a family known as the nitroparaffins. 1-nitropropane, CH3 • (CH2)2 • NO2, is typical. The NO2 group in these nitroparaffins is a combination of positive nitrogen (valence +3) with negative oxygen (-2 each). An isomeric family of compounds, the alkyl nitrites, substitutes a group ONO, in which one oxygen atom with the enhanced neutral valence +4 and a nitrogen atom with its normal -3 valence form a valence one positive radical ON. A further combination with negative oxygen then produces a valence one negative radical ONO. The CO • NO2 combination, like CO • CO, is outside the magnetic neutral limits under ordinary conditions, and there is no CO • NO2 series of compounds corresponding to those based on CO • NH2.

In the quaternary ammonium compounds nitrogen has its neutral valence five, as in the inorganic nitrates, and joins with the equivalent of five valence one negative atoms or radicals to form compounds ranging from simple combinations such as tetramethylammonium hydroxide, N(CH3)4 • OH, to some very complex, and biologically important, compounds such as lecithin. The quaternary ammonium portion of the lecithin molecule also exists separately as choline, N(CH3)3OH • CH2 • CH2 • OH.

Addition of oxygen to the cyanide and isocyanide radicals produces the radicals OCN and ONC, which form the basis of the cyanates and isocyanates. A comparison of the cyanides and cyanates provides a good illustration of the way in which the various pertinent factors enter into the construction of chemical compounds. Each element has several possible rotational orientations which it can assume to form chemical combinations, and in each of these orientations it has an effective speed displacement, or valence, which determines the status that the element can assume in a compound, and the ratio in which it combines with the other components. Some orientations are inherently more probable than others, but the type of combination that will be the most stable cannot be determined solely on the basis of this probability, since other factors also enter into the situation. The limitation imposed on direct combinations by the relative negativity of the constituents is one such factor. The greater relative probability of low net group valences in the radicals is another. Replacement capacity is likewise a significant factor. A valence one radical is not only an inherently more probable structure than one of higher valence; it also has an ability to replace hydrogen atoms quite freely, while radicals of higher valence can accomplish such replacements only with some difficulty. In an environment favorable to these replacements the valence one radical therefore takes precedence, if such a radical can be formed. In any particular instance where there are two or more possible ways of constructing a valence one radical, the combined influence of all effective factors determines which of the possible combinations has the greatest over-all probability, and consequently the greatest stability. Where the margin of one structure over another is small, both may exist under appropriate conditions; where it is large, only the more stable compound can exist.

In the cyanides the net total of all factors affecting the combination of carbon and nitrogen favors carbon valence +2 and nitrogen valence -3. An alternate with carbon -4 and nitrogen +3 is close enough to be stable. When oxygen, with valence -2, is added to either of these radicals the positive valence must increase by two units if the addition product is to be a valence one substitute for negative hydrogen. This is possible in both cases, as both carbon and nitrogen have the required higher valences. Carbon steps up from the primary magnetic valence +2 in CN to the normal valence +4 in OCN. Nitrogen goes from the enhanced neutral valence +3 in NC to the neutral valence +5 in ONC. The negative valences are unchanged: nitrogen has -3 in both CN and OCN, carbon has -4 in NC and ONC.

The participation of elements of the higher rotational groups in chemical compounds involves no new structural features. Because of factors such as the higher magnetic valences, the greater inter-atomic distances, and the prevalence of three-dimensional force distributions in the higher rotational groups, these elements are excluded from many of the types of combinations and structures in which the elements of Group 2A participate. But to the extent to which these elements can occupy positions in such combinations and structures, they do so on the same basis as the analogous Group 2A elements. The descriptions of the various types of combinations and structures in the preceding pages therefore apply to the compounds of these higher group elements as well as to those of the elements that were specifically mentioned.

Sulfur comes the nearest to duplicating the lower group structures. The corresponding Group 2A element, oxygen, uses its negative valence almost exclusively, and to the extent that its somewhat greater inter-atomic distances will permit, sulfur, which has the same -2 valence, duplicates the oxygen compounds. Corresponding to the alcohols, acids, ethers, amides, etc., which have been discussed in the preceding pages, there are thioalcohols, thioacids, thioethers, thioamides, etc., that are identical except that sulfur substitutes for oxygen. The inter-atomic distance C-S is greater than the C-O distance, and this makes the sulfur compounds somewhat less stable than their oxygen analogs, limiting the total number of these compounds rather severely. One significant point is that the C-S distance will not permit the formation of CS neutral groups, or the replacement of neutral CO by CS. This eliminates the possibility of families of sulfur compounds similar to the oxygen families whose negative radicals are CO • OH, CO • NH2, CO • OCH3, and so on.
There are thioacids, but the radical is not CS • OH, or CS • SH; it is CO • SH. Where the formula of a compound, as written in accordance with current practice, appears to indicate the presence of a CS group in a neutral position, this is actually a valence two combination that forms part of the positive radical. Thus thioacetamide and thiourea, commonly represented as CH3 • CS • NH2 and NH2 • CS • NH2, are actually CSCH3 • NH2 and CSNH2 • NH2. Neither CSOH nor CSSH is barred from acting as a valence one positive radical, a position in which the inter-atomic distance is not a controlling factor, but both are limited in their stability. CSOH tends to rearrange to the more probable form COSH, while CSSH is vulnerable to loss of a CS2 molecule. For example, xanthic acid, CSSH • (O • CH2 • CH3), spontaneously separates into CS2 and ethyl alcohol.

Oxidation of the sulfides provides another example of the displacement of the valences by addition of a strongly negative element. In methyl sulfide, (CH3)2S, sulfur has its normal negative valence, -2. Because it is positive to oxygen, oxidation forces it into the positive position in the compound, with a +4 valence, and the CH3 groups, which can take either +1 or -1, shift to the negative. The product is methyl sulfoxide, SO(CH2 • H)2. An additional oxygen atom is accommodated by a further shift in the sulfur valence to its maximum value +6 (the neutral valence). The new compound that is formed is methyl sulfone, SO2(CH2 • H)2.

The single element radicals, such as N3 (N+5 • N-3 • N-3) and C2 (C+2 • C-4), conform to the same pattern of behavior as the other radicals. These particular combinations form azides and carbides respectively. The latter, since they contain no element other than carbon and hydrogen, have been named as a hydrocarbon family, although from a structural standpoint the introduction of the C2 radical into a normal hydrocarbon is the equivalent of the substitution of any other radical, and the resulting compounds should logically be called carbides. The carbide structure is quite evident in such compounds as (CH • CH2)2 • C2, which is divinylacetylene, or 1,5-hexadien-3-yne. The valence balance here is the same as in the binary carbides: CaC2, etc. As indicated earlier, however, probability considerations favor valence one radicals, where such radicals are possible, and in the hydrocarbons the C2 combination generally joins with a positive hydrogen atom to form the valence one radical C2H, structurally analogous to OH. The compounds utilizing this radical may be either olefinic (example: vinylacetylene, CH • CH2 • C2H) or acetylenic (example: butadiyne, C • CH • C2H). Magnetic neutral groups can be added in the usual manner, forming compounds such as 1,5-hexadiyne, C • CH • CH2 • CH2 • C2H. This compound, also known as dipropargyl, is isomeric with benzene, and attracted a great deal of attention in the early days of structural chemistry when the "benzene problem" was the center of attention.

A simple carbide, H • C2H, is the initial product of the action of water on calcium carbide, but since hydrogen is negative to carbon a direct combination of this kind between carbon and positive hydrogen is unstable, and the hydrogen carbide promptly changes to acetylene, in which the hydrogen atoms are negative. The valence changes in this series of reactions are interesting. In the original calcium carbide the valences are Ca +2, C +2, C -4. The reaction with water substitutes two +1 hydrogen atoms for the calcium. The relative negativity of carbon and hydrogen then forces hydrogen into the negative position, and since the total negative valence on this basis is only two units, carbon has to take its +1 valence to reach an equilibrium.
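The valence sequence described in this paragraph can be verified step by step; each stage must balance to zero. The sketch below is illustrative only, using the valences just quoted:

```python
stages = {
    "CaC2, calcium carbide":   [+2, +2, -4],      # Ca +2, C +2, C -4
    "H·C2H, hydrogen carbide": [+1, +2, -4, +1],  # positive H; unstable
    "C2H2, acetylene":         [+1, +1, -1, -1],  # C +1 each, H -1 each
}

for name, valences in stages.items():
    print(name, "->", sum(valences))  # each stage sums to zero
```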

Although the three-dimensional inorganic radicals of the SO4 type are not able to substitute freely for hydrogen in organic compounds in the manner of the organic radicals, it is possible for organic chains to replace the atoms that are joined to these three-dimensional radicals in the inorganic compounds. In other words, there is no room for a three-dimensional component in a two-dimensional structure, but a two-dimensional combination can occupy a position in a three-dimensional structure. Typical compounds are ethyl sulfate, (CH3 • CH2)2 • SO4, and methyl phosphate, (CH3)3 • PO4.

Compounds of the metals with organic radicals are usually grouped in a separate category as metal-organic, or organometallic, but they are classified as organic in this work, inasmuch as they have the regular organic structure. A compound such as ethyl sodium, Na • CH2 • CH3, has exactly the same structure as the corresponding paraffin hydrocarbon, propane, CH3 • CH2 • CH3. A compound such as diphenyl tin has exactly the same structure as diphenyl methane, one of the aromatic ring compounds that we will examine in Chapter 21. No separate consideration needs to be given, therefore, to either the organometallic compounds, or those compounds which have both organic and inorganic components, in this discussion of molecular structure.

The number and diversity of the chain compounds can be increased enormously by additional branching, by combinations of the various substituents that have been discussed, and by the use of some less common substituents, but all such compounds follow the same structural principles that have been outlined for the most common organic chain families. There are some additional ways in which structural variations can occur, and to complete the molecular picture a few comments on these items are advisable, but since they are equally applicable to the ring compounds it will be appropriate to defer this discussion until after we have examined the ring structures.
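A quick balance check for these mixed structures, illustrative only, using the radical valences that the formulas imply (SO4 at -2, PO4 at -3, each alkyl chain contributing +1, and the negative CH3 of ethyl sodium at -1):

```python
checks = {
    "(CH3·CH2)2·SO4, ethyl sulfate": 2 * (+1) + (-2),
    "(CH3)3·PO4, methyl phosphate":  3 * (+1) + (-3),
    "Na·CH2·CH3, ethyl sodium":      (+1) + (-1),
}

for name, net in checks.items():
    print(name, "->", net)  # every structure balances to zero
```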

CHAPTER 21

Ring Compounds

The second major classification of the organic compounds is that of the ring compounds. These ring structures are again divided into three sub-classes. In two of these, the positive components of the magnetic neutral groups of the rings are carbon atoms: the cyclic, or alicyclic, compounds, in which the predominant carbon valence is two, and the aromatic compounds, in which this valence is one. In the third class, the heterocyclic compounds, one or more of the carbon atoms in the ring is replaced by an atom of some other element. All of these classes are further subdivided into mononuclear and polynuclear divisions, the basic structure of the latter being formed by a condensation or fusion of two or more rings.

It should be understood that the classifications are not mutually exclusive. A compound may consist of a ring joined to one or more chains; a chain compound may have one paraffinic and one olefinic branch; a cyclic ring may be joined to an aromatic ring; and so on.

As in the chain compounds, a parallel classification divides the ring compounds into families characterized by the nature of the negative components: hydrocarbons, alcohols, amines, etc. The normal cyclic hydrocarbon, a cyclane, or cycloparaffin, is a simple ring of CH2 neutral groups. The general formula can be expressed as -(CH2)n-. Beginning with cyclopropane (n = 3), normal cyclanes have been prepared with all values of n up to more than 30. The neutral groups in these rings are identical with the CH2 neutral groups in the chain compounds, and they may be expanded in the same manner by CH2 additions. Corresponding to the branched chain compounds we therefore have branched rings such as ethylcyclohexane, -(CH2)5 • (CH • CH2 • CH3)-, and 1-methyl-2-ethylcyclopentane, -CHCH3 • (CH • CH2 • CH3) • (CH2)3-.

In the notation used herein, the neutral groups will be clearly identified by parentheses or other means, and the positive-negative order will be preserved within these groups as in the neutral groups of the chain compounds. To identify the substance as a ring compound, and to show that the end positions in the straight line formula have no such special significance as they do in the chain compounds, dashes will be used at each end of the ring formula, as in the examples given. If two or more rings are present, or if a portion of the compound is outside the ring, the positions of the dashes will so indicate. While any group could be taken as the starting point in expressing the formula of a single ring, the order of the usual numbering system will be followed as far as possible, to minimize the deviations from familiar practice. The branch names such as 1-methyl-2-ethyl are then clearly indicated by the formula.

Replacement of all of the valence two groups in the cyclic ring by valence one groups, where such replacement is possible, converts the cyclic compound into an aromatic. In general, however, the distinctive aromatic characteristics do not appear unless the replacement is complete, and the intermediate structures in which CH or its equivalent has been substituted for CH2 in only part of the ring positions will be included in the cyclic classification. Since the presence of the remaining CH2 groups is the principal determinant of the molecular properties, the predominant carbon valence, in the sense in which that term is used in defining the classes of ring compounds, is two, even where there are more CH than CH2 groups in the molecule.

As mentioned earlier, the probabilities favor association of like forces in the molecular compounds. The CH2 groups have sufficient latitude in their geometric arrangement to be able to compensate for substantial variations, and single CH2 groups can therefore fit into the molecular structure without difficulty, but the CH groups have very little geometric leeway, and for that reason they nearly always exist in pairs. This does not mean that the individual group is positively barred from existing separately, and in some of the more complex structures single CH groups can be found, but in the simple rings the pairs are so much more probable than the odd numbers of groups that the latter are excluded.

The first two-group substitution in the cyclanes produces the cyclenes, or cycloolefins. A typical compound is cyclohexene, -(CH2)4 • (CH)2-. The designations cycloparaffin and cycloolefin are not appropriate, in view of the findings of this work, as the cycloparaffins contain no carbon atoms with the characteristic paraffin valence, and it is the substitution
of two acetylene valence groups into the CH2 rings that forms the cycloolefins. The names cyclane and cyclene are therefore preferable. Substitution of two more CH groups into the ring produces the cyclodienes. The existence of two CH • CH pairs in these compounds introduces a new factor, in that the positions of the pairs within the ring may vary. No question of this kind arises in connection with cyclopentadiene, -(CH)4 • CH2-, the first compound in this series, but in cyclohexadiene two different arrangements are possible: -(CH)4 • (CH2)2-, which is known as 1,3-cyclohexadiene, and -(CH)2 • CH2 • (CH)2 • CH2-, which is 1,4-cyclohexadiene.

Negative hydrogen atoms in the cyclic compounds may be replaced by equivalent atoms or groups in the same manner as those in the magnetic neutral groups of the chain compounds. The resulting products, such as cyclohexyl chloride, -(CH2)5 • CHCl-, cyclohexanol, -(CH2)5 • CHOH-, cyclohexylamine, -(CH2)5 • CHNH2-, etc., have properties quite similar to those of the equivalent chain compounds: chlorides, alcohols, amines, and so on.

There are no atomic groups in the normal cyclic rings which have an amount of freedom of geometric arrangement comparable to that of the radicals at the two ends of the aliphatic chains, and the substituents which are limited to the radicals in the chains do not appear at all in the cyclic compounds unless a branch becomes long enough to put the end group beyond the range of the forces originating in the ring. In this case the structure is in effect a combination chain and ring compound. Because of this geometric restriction, the range of substituents in the normal types of cyclic compounds is considerably narrower than in the chains. In addition to those already mentioned, Cl, OH, and NH2, the primary list includes the remaining halogens, oxygen, CN, and CO • OH.

The compounds formed by direct substitution of oxygen for the two hydrogen atoms of the CH2 group are named as ketones, but they do not have the ketone structure, as the resulting CO group is part of the ring and is a magnetic neutral group. One substitution produces cyclohexanone, -(CH2)5 • CO-. A second results in a compound such as 1,3-cyclohexanedione, -CO • CH2 • CO • (CH2)3-. The CO substitution can extend all the way to cyclohexane hexone, -(CO)6-, in which no hydrogen remains. It is also possible to make the oxygen substitution by means of a valence one combination instead of the full valence two replacement, in which case we obtain a compound such as cyclohexyl methyl ether, -(CH2)5 • (CH • OCH3)-.

Additional families of compounds are produced both by secondary substitutions, which result in structures on the order of cyclohexyl acetate, -(CH2)5 • CH(O • CO • CH3)-, and by parallel substitutions in two or more neutral groups. An example of the type of structure that is produced by the multiple substitutions is 1,2,3-cyclopropanetricarboxylic acid, -(CH • CO • OH)3-. The naturally occurring compounds of this cyclic class are highly branched rings, beginning with such substances as menthol, -CHCH3 • CH2 • CHOH • (CH • CHCH3 • CH3) • (CH2)2-, and extending to very complex structures, but they follow the same general structural patterns as the simpler cyclic compounds, and will not require additional discussion in the present connection.
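As an aside, the relation between this group notation and the empirical formulas can be illustrated with a short sketch. The group lists below are written out by hand rather than parsed from the dash-and-bullet notation, and only the element symbols occurring in the examples are handled; this is a convenience for illustration, not a general parser.

```python
import re
from collections import Counter

# A minimal sketch of how the group notation used here maps onto an
# empirical formula: each magnetic neutral group or radical is a
# string such as "CH2" or "CHOH", and the atoms are simply counted.

def formula(groups):
    """Count atoms in a list of groups such as ['CH2']*5 + ['CHOH']."""
    atoms = Counter()
    for group in groups:
        for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", group):
            if symbol:
                atoms[symbol] += int(count) if count else 1
    return "".join(f"{el}{n if n > 1 else ''}"
                   for el, n in sorted(atoms.items()))

print(formula(["CH2"] * 6))               # cyclohexane: C6H12
print(formula(["CH2"] * 4 + ["CH"] * 2))  # cyclohexene: C6H10
print(formula(["CH2"] * 5 + ["CHOH"]))    # cyclohexanol: C6H12O
```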

As mentioned earlier, the CH2 groups have a considerable degree of structural latitude because of their three-atom composition. The angle between the effective lines of force varies from about 120 degrees in cyclopropane to less than 15 degrees in the largest cyclic rings thus far studied. The two-atom groups such as CH do not have this structural freedom, and are restricted to a narrow range in the vicinity of 60 degrees. The theoretically exact limits have not yet been determined, but the difficulties involved in the preparation of derivatives of cyclooctatetraene, -(CH)8, indicate that this compound is at the extreme limit of stability. This would suggest a maximum deviation of about 15 degrees from the 60 degree angle of the six-member ring. The atoms of which the molecular compounds are composed have a limited range in which they can assume positions above or below the central plane of the molecule. The actual angles between the effective lines of force will therefore deviate slightly from the figures given above, which are based on positions in the central plane, but this does not affect the point which is being made, which is that the cyclic ring is very flexible, whereas the aromatic ring is practically rigid. As long as there is even one CH2 group in the ring it has the cyclic flexibility. Cyclopentadiene can exist in spite of the rigidity of the portion of the ring occupied by the four CH groups because the CH2 group that completes the structure is able to accommodate itself to the position necessary for closing the ring. But when all of the three-atom groups have been replaced by two-atom groups or single atoms the ring assumes the aromatic rigidity. Cyclobutadiene, for example, would consist of four CH groups only, and the maximum deviation of the CH lines of force, somewhere in the neighborhood of 75 degrees, is far short of the 90 degrees that would be required for closure of the cyclobutadiene ring. All attempts to produce such a compound have therefore failed. The properties of the various ring compounds are dependent to a considerable degree on this question as to whether the members of the rings are restricted to certain definite positions, or have a substantial range of variability within which they can adjust to the requirements for combination. In view of this natural line of demarcation, the aromatic classification, as used in this work, is limited to the rigid structures, specifically to those compounds composed entirely of valence one CH groups or their monovalent substitution products, except for such connecting carbon atoms as may be present. Because of the limitations on the atomic positions, the aromatic compounds, with the exception of cyclooctatetraene, are confined to the six-member rings, the valence one equivalents of cyclohexane and its derivatives, and there are no aromatic analogs of cyclobutane, cycloheptane, etc. The structural rigidity therefore limits the compound forming versatility of the aromatic rings to a substantial degree, but this is more than offset by other effects of the same factor. The locations in the chain compounds which are open to the greatest variety of combinations are the ends of the chain and its longer branches, if any. In the aromatic rings every ring location has, to some degree, the properties of an end. 
Also, because of the rigidity of the ring, the maximum intergroup distance 1-3 in the ring is about ten percent less than the distance between the equivalent groups in the aliphatic chain, after making an allowance for the small amount of flexibility that does exist. This
brings some additional combinations of elements within the limit of effectiveness of the free electric displacements, and in these rings we find not only groups such as COH, CCl, CNH2, etc., which are the valence one equivalents of the combinations that make up the cyclic rings and the interior portions of the chain compounds, but also other combinations such as CNO2 and CSH, which are just beyond the magnetic neutral limits in the non-aromatic structures. The number of available combinations in which the neutral group CO accompanies the negative radical is similarly increased.

Secondary substitutions extend the length and diversity of the magnetic neutral groups of the ring, and produce a wide variety of single branch compounds on the order of isobutyl benzene, -(CH)5 • (C • CH2 • CHCH3 • CH3)-, and N-ethyl aniline, -(CH)5 • (C • NH • CH2 • CH3)-, but the principal field for variability in the mononuclear aromatics lies in their capability of multiple branching. The aromatic rings not only have a greater variety of available substituents than any other type of molecular compound, but also a larger number of locations where these substituents may be introduced. This versatility is compounded by the fact that in the rings, as in the chains, the order of sequence of the groups has a definite effect on the properties of the compound. The behavior of 1,2-dichlorobenzene, -(CCl)2 • (CH)4-, for instance, is in many respects quite different from that of 1,4-dichlorobenzene, -CCl • (CH)2 • CCl • (CH)2-.

A significant feature of the aromatic rings is their ability to utilize larger numbers of the less versatile substituents. For example, the limitation of such groups as NO2 to the negative radical in the chains means that only one such group can exist in any chain compound, unless a branch becomes so long that the compound is in effect a union of two chains. In the aromatic ring this limitation is removed, and compounds with three or four of the highly reactive nitro groups in the six-member ring are common. The list includes such well-known substances as picric acid (2,4,6-trinitrophenol), -COH • CNO2 • CH • CNO2 • CH • CNO2-, and TNT (2,4,6-trinitrotoluene), -CCH3 • CNO2 • CH • CNO2 • CH • CNO2-.

Since there is only one hydrogen atom in the CH group, the direct substitutions in the aromatic rings are limited to valence one negative components. In order to establish a valence equilibrium with a bivalent atom or radical, two of the aromatic rings are required. These bivalent atoms or groups therefore constitute a means whereby two rings can be joined. Diphenyl ether, for example, has the structure -(CH)5 • C-O-C • (CH)5-, in which the oxygen atom is not a member of either ring but participates in the valence equilibrium. The bivalent negative radical NH similarly produces diphenylamine, -(CH)5 • C-NH-C • (CH)5-. Each of these rings is a very stable structure with a minimum of eleven constituent atoms, and a possibility of considerable enlargement by substitution. This method of joining rings is therefore a readily available process whereby stable molecules of large size may be constructed. Further additions and substitutions may be made not only in the rings and their branches, but in the connecting link as well. Thus the addition of two CH2 groups to diphenyl ether produces dibenzyl ether, -(CH)5 • CCH2 • O • CH2C • (CH)5-.

According to the definition of an aromatic compound, these multiple ring structures are not purely aromatic, as the connecting links do not qualify. This is a situation which we will encounter regardless of the manner in which the various organic classifications are set up, as the more complex compounds are primarily combinations of the different basic types of structure. Ordinarily a compound is classified as a ring structure if it contains a ring of any kind, even though the ring may be only a minor appendage on a long chain, and it is considered as an aromatic if there is at least one aromatic ring present.

In the multiple ring compounds the combination (CH)5 • C, which is a benzene ring less one hydrogen atom, acts as a monovalent positive radical, the phenyl radical, and the simple substituted compounds can be named either as derivatives of benzene or as phenyl compounds; i.e., chlorobenzene or phenyl chloride. The net positive valence one is the valence condition in which the ring is left when a hydrogen atom is removed, but this net valence is due entirely to the +1 valence of the lone carbon atom from which the hydrogen atom was detached, all other groups being neutral, and it does not necessarily follow that the carbon valence will remain at +1. As emphasized earlier, valence is simply a matter of rotational orientation, and when acting alone any atom can assume any one of its possible valences, provided that there are no specific obstacles in the environment. The lone carbon atom is therefore free to accommodate itself to different environments by reorientation on the basis of any of its alternate valences: +2, +4, or -4.

If two phenyl radicals are brought together, the inter-atomic forces will tend to establish an equilibrium. A valence balance is a prerequisite for a force equilibrium, and the carbon atoms will therefore reorient themselves to balance the valences. There are two possible ways of accomplishing this result. Since carbon has only one negative valence, -4, one carbon atom takes this valence, and a second must assume the +4 valence in order to arrive at an equilibrium. In a direct combination of two phenyl groups these valence changes can be made in the two independent carbon atoms, without modifying the neutral groups in any way, and this is therefore the most probable structure in such compounds as biphenyl, -(CH)5 • C-C • (CH)5-. A similar balanced pair of positive and negative valence 3 nitrogen atoms may be introduced, in combination with the valence 4 carbon atoms, to form azobenzene, -(CH)5 • C-N-N-C • (CH)5-.

The alternative is to make both valence changes in the same phenyl group, giving the lone carbon atom the -4 valence and increasing the valence of the carbon atom in an adjacent neutral group from +1 to +4. The product is a ring in which there are four CH neutral groups, a CH group with a net valence of +3, and a single carbon atom with the -4 valence. By this means the phenyl group is changed from a univalent positive radical, C • (CH)5, to a univalent negative radical, (CH)4 • CH • C. Like the methyl group, which can act either as a positive radical CH3 with valence +1, or as a negative radical CH2 • H with valence -1, the phenyl group is able to combine with substances of either valence type, taking the negative valence in combination with a positive component, and the positive valence when combining with a negative atom or group.
It is negative in all of the phenyl compounds of the metal-organic class, and not only forms compounds such as phenyl copper, Cu-C • (CH)5-, and diphenyl zinc, Zn(-C • (CH)5-)2, but also combination phenyl-halide structures like phenyl tin trichloride, SnCl3-C • (CH)5-.
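The two valence configurations of the phenyl radical described above can be checked by simple addition. In this sketch the group contents and valences follow the text; the list layout is only illustrative.

```python
# Net valence of the two phenyl configurations described above,
# expressed as explicit sums over (group, valence) pairs.

def net(groups):
    return sum(v for _, v in groups)

# Positive phenyl: five neutral CH groups plus a lone C at +1.
positive_phenyl = [("CH", 0)] * 5 + [("C", +1)]

# Negative phenyl: four neutral CH groups, one CH radical at +3
# (its carbon raised to +4), and the lone C reoriented to -4.
negative_phenyl = [("CH", 0)] * 4 + [("CH", +3), ("C", -4)]

print(net(positive_phenyl))                          # +1
print(net(negative_phenyl))                          # -1
print(net(positive_phenyl) + net(negative_phenyl))   # 0: a balanced pair
```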

In combination with the CH3 radical the phenyl group is positive. Either radical can take either valence, but the methyl group probabilities are nearly equal, while the positive valence is more probable in the phenyl group, since it involves no change in the benzene ring other than the removal of a hydrogen atom. The combination -(CH)5 • CCH3- is therefore toluene, with positive phenyl and negative methyl (carbon valence two), rather than phenyl methane, which would have negative phenyl and positive methyl (carbon valence four). This option is not available in combination with other hydrocarbon radicals, or with carbon itself, and in such compounds the phenyl radical replaces hydrogen, and is negative. An additional phenyl substitution in toluene, for example, reduces the CH3 radical to CH2. This group cannot have the -2 net valence that would be necessary for combination with positive phenyl radicals, and both of the phenyl groups assume the negative status in the resulting compound, diphenyl methane. The olefinic and acetylenic benzenes likewise have this type of structure, in which the phenyl radical is negative. Styrene, for instance, is not vinyl benzene, -(CH)5 • C-CH2 • CH, as that combination would contain two positive components and no negative. It is phenyl ethylene, CH • CH2-C • (CH)5-, in which CH is positive and the phenyl group is negative.

An interesting phenyl compound is phenyl acetylene, the conventional formula for which is C6H5 • C • CH. On the basis of our finding that hydrogen is negative to carbon, the hydrogen atom in the acetylene CH would have to be negative. But this is not true, as it can be replaced by sodium. It seems evident, then, that this is phenyl carbide, -(CH)5 • CC2H, a compound similar to butadiyne, which we have already identified as a carbide, C • CH • C2H. As noted previously, the relative negativity of carbon and hydrogen has no meaning with reference to the carbide radical, which has a net negative valence, and cannot be other than negative regardless of what element or group it combines with. According to the textbooks, the phenyl compound is identified as an acetylene because "it undergoes the typical acetylene reactions." But so does any other carbide. The acetylene lamp was a "carbide" lamp to the cyclists of an earlier day.

Like the phenyl radical, the cyclic radicals can accommodate themselves to either the positive or the negative position in the molecule. These radicals, too, are positive in the monosubstituted compounds. A methyl substitution produces hexahydrotoluene, not cyclohexyl methane. But if there are two cyclic substitutions in a methyl group they are both negative, and dicyclohexyl methane is a reality.

At this point it will be desirable to examine the effects of the various modifications of the ring structure on the cohesion of the molecule. We may take the benzene ring as the basic aromatic structure. Textbooks and monographs on the aromatic compounds typically contain a chapter, or at least a lengthy section, on the "benzene problem."69 The problem, in essence, is that all of the evidence derived from observation and experiment indicates that the inter-atomic forces and distances between any two of the six CH groups in the ring are identical, but no theory of the chemical "bond" has been able to account for the structure of the benzene molecule without utilizing two or more different kinds of bonds.
The currently favored "solution" of the problem is to sweep it under the rug by postulating that the structure alternates, or "resonates," between the different bond arrangements.

The development of the Reciprocal System of theory now shows that the forces between the groups in the benzene ring are, in fact, identical. As has been emphasized throughout the preceding discussion, however, the existence and nature of chemical compounds is not determined by the cohesive forces between the atoms of the different elements, but by the directional relationships which the atomic rotations must assume in order to permit elements with electric rotation in time to establish stable force equilibria in space. The findings of this theoretical development agree that the orienting effects which enable CH groups to combine into the benzene ring are of two different types, a short range effect and a long range effect, but they also reveal that the nature of the orienting influences has no bearing on the magnitude of the inter-atomic forces, and this explains why no difference in these forces can be detected experimentally. The forces between any two of the CH neutral groups in the ring are identical.

Inasmuch as the orienting factors cause the atoms to align their rotations in certain specific relative directions, they are, in a sense, forces, but in order to distinguish them from the actual cohesive forces that hold the atoms, groups, and molecules together in the positions determined by these orienting factors, we are using the term "effects" rather than "forces" in application to the orientation, even though this introduces an element of awkwardness into the presentation. The nature of these effects, as they apply to the benzene ring, can be illustrated by an orientation diagram of the kind previously introduced.
[Orientation diagram of the benzene ring]
The pairs of CH groups, 1-2, 3-4, and 5-6, in the diagram, are held in the combining positions by the orienting effects of a directional character that are exerted by all magnetic groups or compounds. Alternate groups, 1-3, 2-4, etc., are within unit distance, and therefore within the effective range of these orienting effects. The primary effect of group 1, for instance, is directed toward group 2, but group 3 is also within unit distance, and consequently there is a long range 1-3 secondary effect as well as a short range 1-2 primary effect. Because of the directional nature of these orienting effects there is no 2-3 primary effect, but the pairs 1-2 and 3-4 are held in position by the 1-3 and 4-2 secondary effects.

If we replace one of the hydrogen atoms with some negative substituent, the orientation situation is unchanged. The new neutral group, or that portion of it which is within the range of the ring forces if the group is a long one, takes over the functions of the CH group without alteration. However, removal of a hydrogen atom and conversion of the benzene molecule into a positive phenyl radical changes the orientation pattern to
[Orientation diagram of the positive phenyl radical]
The secondary effect 3-5 has now been eliminated, as the lone carbon atom does not have the free electric rotation characteristic of the magnetic groups or compounds, but the
remaining orientation effects are still adequate to hold the structure together. The further valence change that is necessary if the phenyl radical is to assume a negative valence similarly eliminates the 4-2 secondary effect, as group 4 is no longer magnetic. However, the two carbon atoms and one hydrogen atom combine into a radical CCH, with a net valence of -1. This radical has no orienting effect on its neighbors, but the adjoining magnetic neutral groups do exert their effect on it. The orientation pattern is
[Orientation diagram of the negative phenyl radical]
As previously explained, the carbon atoms in the CCH combination have valences +4 and -4. If we remove the hydrogen atom from this group we obtain a ring in which four CH neutral groups are combined with two individual carbon atoms. This structure is neutral and is capable of existing as an independent compound, but, like the methylene molecule, it does not actually do so, because it has a strong tendency to form a double ring. The four CH groups which are attached to the C-C combination can be duplicated on the opposite side of the C-C line of action, forming another similar ring which utilizes the same pair of carbon atoms as part of its ring structure. The fact that the effects originating from the free electric rotations are exerted on the carbon atoms by the CH groups on one side does not in any way interfere with the existence of similar effects on the other side. The orientation relations in the second ring are identical with those of the first. Neither ring can now recapture a hydrogen atom and become a phenyl radical, because the presence of the other ring prevents the approach of the free hydrogen atoms. The double ring compound therefore has a high degree of stability. This compound is naphthalene, -(CH)4 • C=C • (CH)4-, a condensed ring aromatic hydrocarbon.

When used in the formula of a compound in this work, the double mark between two carbon atoms is a symbol indicating the condensed ring type of structure, in which the rings are joined at two positions rather than at a single position as in compounds such as biphenyl. It has no implications of the kind associated with the "double bonds" of the electronic theory.

A third ring added in the same manner produces anthracene. Further similar additions in line result in a series of compounds: naphthacene, pentacene, and so on. But it is not necessary that the additions be made in line, and each of these compounds is accompanied by others which have the same composition, but different structures. For instance, the four ring compounds of the naphthacene composition, C18H12, include chrysene, naphthanthracene, 3,4-benzophenanthrene, and triphenylene. Pyrene has the same four rings, but a more compact structure, and a composition C16H10.

The structural behavior of the condensed rings is essentially the same as that of the single benzene rings. They join to form compounds such as binaphthyl and bianthryl; they act as radicals (naphthyl, anthryl, phenanthryl, etc.); they attach more rings by substitution for hydrogen to produce compounds such as triphenyl anthracene; and they form a great variety of compounds by utilizing the other negative substituents available to the aromatic rings. Many interesting and important compounds are included in this category,
but no new structural features are involved, and they are therefore outside the scope of the present discussion.

The two CH groups of the middle ring of the anthracene structure are not necessary for stability, and they can be eliminated. The resulting compound is biphenylene, -(CH)4 • CC=CC • (CH)4-. A structure with only one CH group in the middle ring, intermediate between anthracene and biphenylene, is ruled out by the low probability of the continued existence of a single CH group, but a similar compound can be formed by putting a CH2 group in the intermediate position, as the CH2 groups are not restricted to pairs. The new compound is fluorene. Another CH2 group in the opposite position restores the anthracene structure with a cyclic middle ring. This compound is dihydroanthracene. As previously mentioned, a ring with even one CH2 group deviates substantially from the typical aromatic behavior, and any such ring is classified with the cyclic structures, but this effect is confined to the specific ring, and any adjacent aromatic rings retain their aromatic character. Such compounds as fluorene and dihydroanthracene should therefore be regarded as combination cyclic-aromatic structures. These compounds occur in large numbers and in great variety, but the principles of combination are the same as in the purely aromatic compounds, and do not need to be repeated.

Since the cyclic compounds are less stable than the corresponding aromatics, the combination structures do not cover as large a field as the aromatic compounds, but a very stable structure such as that of naphthalene does extend through the entire substitution range. Beginning with the purely aromatic compound, successive pairs of hydrogen atoms can be added all the way to the purely cyclic compound, decahydronaphthalene. The reduction in the variety of combination structures, due to the fact that the cohesive force in the cyclic ring is weaker than that in the aromatic ring, is offset to some extent by the ability of the CH2 groups to form rings of various sizes. 1,2,3,4-tetrahydronaphthalene, for instance, can drop one of its CH2 groups, forming indane, -(CH)4 • C=C • (CH2)3-. Because of the CH2 flexibility, the cyclic ring in this compound is still able to close even if two of the remaining CH2 groups are replaced by CH. This produces indene, -(CH)4 • C=C • (CH)2 • CH2-.

Polynuclear cyclic compounds are formed in the same manner as the polynuclear aromatic and combination structures, but not in as great a number or variety. Corresponding to biphenyl and its substitution products are dicyclopentyl, dicyclohexyl, etc., and their derivatives; triphenyl methane has a cyclic equivalent in tricyclohexyl methane; the cyclic analog of naphthalene is bicyclodecane, and so on.

The last major division of the ring compounds is the heterocyclic class, in which are placed all compounds in which any of the carbon atoms in the cyclic or aromatic rings are replaced by other elements. The principal reason for setting up a special classification for these compounds is that most of the substitutions of other elements for carbon require valence changes of one kind or another, unlike the substitutions for hydrogen, which normally involve no valence modifications, except in those cases where two valence one hydrogen atoms are replaced by one valence two substituent.

Some of the heterocyclic substitutions are of this two for one character, and in those cases the normal cyclic or aromatic structure is not altered. For example, if we begin with quinone, -(CH)2 • CO • (CH)2 • CO-, an aromatic carbon compound, and replace two of the CH groups with NH neutral groups, we obtain uracil, -NH • CO • NH • CH • CH • CO-. One more similar pair replacement removes the last of the hydrocarbon groups and results in urazine, -NH • CO • NH • NH • CO • NH-. In the compound cyclohexane hexone, previously mentioned, all of the hydrogen has been replaced, and in borazole, -BH • NH • BH • NH • BH • NH-, all carbon is eliminated. All of these heterocyclic compounds are composed entirely of two-member magnetic neutral groups, and therefore have the benzene structure: six groups arranged in a rigid aromatic ring.

More commonly, however, the heterocyclic substituent is a single atom or a radical, and such a substitution requires a valence change in some other part of the ring to maintain the valence equilibrium. Substitutions therefore often take place in balanced pairs. In pyrone, -(CH)2 • CO • (CH)2 • O-, for example, the CO combination is not a neutral group, but a radical with valence +2 which balances the -2 valence of the oxygen atom. The CH2 radical, in which carbon also has its normal valence +4, has the same function in pyran, -(CH)2 • CH2 • (CH)2 • O-. Substitution of two nitrogen atoms with the balanced valences of +3 and -3 in the aromatic ring produces a diazine. If the nitrogen atoms are in the 1,2 positions the compound is pyridazine, -N • N • (CH)4-. The properties of the 1,3 and 1,4 compounds are sufficiently different from those of pyridazine that they have been given distinctive names: pyrimidine and pyrazine, respectively.

Since the positive and negative radicals in a ring have no fixed positions similar to the two ends of the chains, it is not possible to indicate their status by their positions, as we do in the formulas we are using for the chain compounds. Some appropriate method of identification probably should be devised, in order to make the formula as representative of the actual structure as possible, but this is not necessary for the purposes of the present work, and can be left for later consideration. The following orientation diagrams for pyrone and pyridazine are typical of those for heterocyclic compounds with single atom or radical substitutions:
[Orientation diagrams of pyrone and pyridazine]
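The balanced-pair substitutions just described amount to a requirement that the ring valences sum to zero. A minimal sketch, using the assignments given above (the list layout is only a convenient illustration):

```python
# Zero-sum valence check for the heterocyclic rings described above.

def net(ring):
    """Net valence of a ring given (group, valence) pairs."""
    return sum(v for _, v in ring)

# Pyrone: the CO radical (+2) balances the oxygen atom (-2);
# the four CH groups are magnetic neutral groups.
pyrone = [("CH", 0)] * 4 + [("CO", +2), ("O", -2)]

# Pyridazine: a balanced +3/-3 nitrogen pair with four neutral CH groups.
pyridazine = [("N", +3), ("N", -3)] + [("CH", 0)] * 4

for name, ring in [("pyrone", pyrone), ("pyridazine", pyridazine)]:
    print(f"{name}: net valence {net(ring):+d}")  # both print +0
```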
If the valence equilibrium is not achieved in this manner by means of a pair of substitutions, a valence change in one of the neutral groups is necessary. A single nitrogen atom substituted into the ring requires a +3 valence elsewhere in the structure to counterbalance the negative nitrogen valence. This is readily accomplished by a shift of one of the carbon valences to +4. The reconstructed ring then consists of a nitrogen atom, valence -3, a CH radical, valence +3, and four CH neutral groups. This compound is
pyridine, -(CH)5 • N-. Hydrogenation can be carried out by steps, through intermediate compounds, all the way to the corresponding cyclic structure, piperidine, -(CH2)5 • NH-.

When oxygen, or another valence two negative component, is introduced into the aromatic ring, the necessary valence balance may be attained by a simultaneous replacement of one of the CH neutral groups by a CH2 radical, as already noted in the case of pyran. Or the required balance can be achieved without introduction of additional hydrogen if the carbon valences in two of the CH groups are stepped up to the +2 level (the primary magnetic valence), forming two CH radicals, each with valence +1. This leaves an unstable odd number of CH neutral groups in the six-member ring, but there is sufficient flexibility in the structure to enable a ring closure on a five-member basis, and stability is restored by ejecting a neutral group. The resulting compound is furan, -(CH)4 • O-, a five-member ring with one oxygen atom, two CH neutral groups, and two CH valence one positive radicals. Substituting sulfur instead of oxygen produces thiophene, -(CH)4 • S-, while inserting the negative radical NH into the same position produces pyrrole, -(CH)4 • NH-. Each of these furan type compounds also exists in the cyclic dihydro and tetrahydro forms. The furan orientation pattern is
[Orientation diagram of furan]
The essential feature of all of these five-member rings of the furan class is a valence equilibrium in which three of the five components participate, the two remaining components being the neutral groups that furnish the ring-forming capability. In furan the equilibrium combination is C+1-O-2-C+1. Formation of a similar combination with nitrogen in the negative position requires that some element or radical positive to nitrogen take the positive position, and in the heterocyclic division nitrogen itself commonly accepts this role. The most probable valence under these conditions is +3, as in hydrazine. The two nitrogen valences, +3 and -3, are then in equilibrium, and in this case the fifth component of the five-member ring must be a neutral group. Since it is a single group, it is the cyclic group CH2, and the neutral trio is N+3-N-3-CH2°. The compound is isopyrazole, -N • CH • CH • CH2 • N-. An alternate group arrangement produces isoimidazole, -N • CH2 • N • CH • CH-. A variation of this structure moves a hydrogen atom from the CH2 group to the positive nitrogen, which changes the neutral combination to NH+2-N-3-CH+1. The compounds formed on this basis are pyrazole, -N • (CH)3 • NH-, and imidazole, -N • CH • NH • CH • CH-.

From these basic heterocyclic types a great variety of condensed systems, such as coumarone (benzofuran), indole (benzopyrrole), quinoline (benzopyridine), etc., can be formed by combination with other rings. Both the single rings and the condensed systems are then open to further enlargement by all of the processes of addition and substitution previously discussed, and a very substantial proportion of the known organic compounds belong to this class. From a structural standpoint, however, the basic principles involved in the formation of all of these compounds are those that have been covered in the preceding discussion.
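The three-component equilibria of the furan class lend themselves to the same zero-sum check as the six-member rings. The valence trios below are those identified above; the dictionary layout is ours.

```python
# The three-component valence equilibria of the furan class, with
# the values given in the text. Each trio should sum to zero; the
# remaining ring positions are occupied by neutral groups.

trios = {
    "furan":       [("C", +1), ("O", -2), ("C", +1)],
    "isopyrazole": [("N", +3), ("N", -3), ("CH2", 0)],
    "pyrazole":    [("NH", +2), ("N", -3), ("CH", +1)],
}

for name, combo in trios.items():
    total = sum(v for _, v in combo)
    print(f"{name}: {total:+d}")  # each trio sums to +0
```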

In the foregoing pages we have encountered several kinds of isomerism, the existence of different compounds with the same composition. Some, such as the cyanides and isocyanides, differ only in valence; some, such as the straight chain and branched paraffins, differ in the position of the neutral groups; and some, such as the aldehydes and the ketones, differ in the assignment of the atoms of the constituent elements to the structural groups. Most of these isomers that we have examined thus far are distinct stable compounds. There are also some isomeric systems in which the two forms of a substance convert so readily from one to the other that they establish an equilibrium which varies in accordance with the conditions to which the compound is subject. This form of isomerism is known as tautomerism.

One of the familiar examples of tautomerism is that between the "keto" and "enol" forms of certain substances. Ethyl acetoacetate, COCH3 • CH2 • CO • (O • CH2 • CH3), is the keto form of a compound that also exists in the enol form as the ethyl ester of hydroxycrotonic acid, COH • CHCH3 • CO • (O • CH2 • CH3). The compound freely changes from one form to the other to meet changing physical and chemical conditions. This is another example of counterbalancing carbon and hydrogen valence changes, and it is an indication of the ease with which such changes can be made. In the radical COCH3 the carbon valence is +4, and all hydrogen is negative. The transition to the enol form involves a drop in the carbon valence to +2, and one hydrogen atom shifts from -1 to +1 to maintain the balance. The CH2 group in the radical is then superfluous, and it moves to the adjacent neutral group. The remainder of the molecule is unchanged.

The development of the Reciprocal System of theory has not yet been extended to a study of tautomerism. Nor has it been applied to those kinds of isomerism which depend on the geometrical arrangement of the component parts of the molecules, such as optical isomerism. These aspects of the general subject of molecular structure will therefore have to be left for later treatment.

This chapter is the last of the four that have been devoted to an examination of the structure of chemical compounds. In closing the discussion it will be appropriate to point out just how the presentation in these chapters fits into the general plan of the work, as defined in Chapter 2. The usual discussion of molecular structure, as we find it in the textbooks, starts with the empirical observation that certain chemical compounds (sodium chloride, benzene, water, ethyl alcohol, etc.) exist, and have certain properties, including different molecular structures. The theoretical treatment then attempts to devise plausible explanations for the existence of these observed compounds, their structures, and other properties. The present work, on the other hand, is entirely deductive. By developing the necessary consequences of the fundamental postulates of the Reciprocal System, we find that in a universe of motion matter must exist; it must exist in the form of a series of elements; and those elements must have the capability of combining in certain specific ways to form chemical compounds. In this and the preceding chapters, the most important of the possible types of molecular structures have been derived from theory, and specific compounds have been characterized by composition and structure. The second objective of the work is to identify these theoretical combinations with the observed chemical compounds.
For example, we deduce purely from theory that there must exist a compound in the form of a chain of three groups of atoms, in which the first
group contains three atoms of element number one and one atom of element number six, and has a net group combining power, or valence, of +1. The second group has two atoms of element number one and one of number six, and is neutral; that is, its net valence is zero. The third group has one atom of element number one and one of element number eight, and a valence of -1. This theoretical composition and structure are in full agreement with the composition of the observed compound known as ethyl alcohol, and with the structure of that compound as deduced from physical and chemical observation and measurement. We are thus entitled to conclude that ethyl alcohol is the chemical compound existing in the physical universe that corresponds to the compound which must exist in the theoretical universe of the Reciprocal System. In other words, we have identified the theoretical compound as ethyl alcohol.

The great majority of the identifications cited in the preceding pages are unequivocal (almost self-evident, we may say), and this agreement establishes the validity of both the theoretical development and the empirical determination of the molecular structures. Where there are discrepancies, some of them, such as the one involved in the structure of ethylene, are quite easily explained. However, as the size and complexity of the molecules increase, the number and variety of the possible modifications of the theoretical structure also increase, in even greater proportion, and the observable differences between the various modifications decrease. The validity of the identifications is therefore less certain than in the case of the smaller and simpler molecules, but this does not mean that there is any additional uncertainty with respect to the existence of the more complex theoretical compounds. It merely means that the available empirical information is not adequate to permit a definite decision as to which of the observed compounds corresponds to a particular theoretical structure. It can be expected, therefore, that further investigation will clear up most of these questions.

The discussion of chemical compounds in this and the preceding three chapters completes the description of the primary physical entities, the actors in the drama of the physical universe. In the next volume we will begin applying the theoretical findings to an examination of the drama itself: the action in which these entities are involved.
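The ethyl alcohol identification can be spelled out numerically. In this sketch the element numbers, group compositions, and group valences are those deduced above; the data structures themselves are, of course, only an illustration.

```python
# The theoretical three-group chain identified above as ethyl alcohol.
# Element numbers follow the text: 1 = hydrogen, 6 = carbon, 8 = oxygen.

ELEMENTS = {1: "H", 6: "C", 8: "O"}

# (element number -> atom count) per group, with the group valence.
groups = [
    ({1: 3, 6: 1}, +1),   # CH3, the positive radical
    ({1: 2, 6: 1},  0),   # CH2, a magnetic neutral group
    ({1: 1, 8: 1}, -1),   # OH, the negative radical
]

composition = {}
for atoms, valence in groups:
    for z, n in atoms.items():
        composition[ELEMENTS[z]] = composition.get(ELEMENTS[z], 0) + n

print(composition)               # {'H': 6, 'C': 2, 'O': 1}, i.e. C2H6O
print(sum(v for _, v in groups))  # net valence 0: a stable compound
```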

Nothing but Motion
Dewey B. Larson

References

1. Butterfield, Herbert, The Origins of Modern Science, Revised Edition, The Free Press, New York, 1965, page 18.
2. Schlegel, Richard, Completeness in Science, Appleton-Century-Crofts, New York, 1967, page 152.
3. Burbidge and Burbidge, Quasi-Stellar Objects, W. H. Freeman & Co., San Francisco, 1967, page vii.
4. New Scientist, Feb. 13, 1969.

5. Dicke, R. H., American Scientist, March 1959.
6. Wooldridge, Dean E., The Machinery of Life, McGraw-Hill Book Co., New York, 1966, page 4.
7. Conant, James B., Modern Science and Modern Man, Columbia University Press, 1952, page 47.
8. Feynman, Richard, The Character of Physical Law, The M.I.T. Press, 1967, page 145.
9. Bridgman, P. W., The Nature of Physical Theory, Princeton University Press, 1936, page 134.
10. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938, page 159.
11. Dingle, Herbert, A Century of Science, Hutchinson's Publications, London, 1951, page 315.
12. Feynman, Richard, op. cit., page 156.
13. Margenau, Henry, Quantum Theory, Vol. 1, edited by D. R. Bates, Academic Press, New York, 1961, page 6.
14. Braithwaite, R. B., Scientific Explanation, Cambridge University Press, 1953, page 22.
15. Feynman, Richard, op. cit., page 30.
16. Einstein, Albert, The Structure of Scientific Thought, E. H. Madden, editor, Houghton Mifflin Co., Boston, 1960, page 82.
17. Dirac, P. A. M., Scientific American, May 1963.
18. Butterfield, Herbert, op. cit., page 13.
19. Hobbes, Thomas, The Metaphysical System of Hobbes, M. W. Calkins, editor, The Open Court Publishing Co., La Salle, Ill., 1948, page 22.
20. North, J. D., The Measure of the Universe, The Clarendon Press, Oxford, 1965, page 367.
21. Einstein, Albert, Relativity, Henry Holt & Co., New York, 1921, page 74.
22. Hocking, William E., Preface to Philosophy, The Macmillan Co., New York, 1946, page 425.
23. Tolman, Richard C., The Theory of the Relativity of Motion, University of California Press, 1917, page 27.

24. Wigner, Eugene P., Symmetries and Reflections, Indiana University Press, 1967, page 30.
25. Jeans, Sir James, The Universe Around Us, Cambridge University Press, 1947, page 113.
26. Heisenberg, Werner, On Modern Physics, Clarkson N. Potter, New York, 1961, page 16.
27. Heisenberg, Werner, Physics Today, March 1976.
28. Schlegel, Richard, op. cit., page 18.
29. Gold and Hoyle, Paris Symposium on Radio Astronomy, paper 104, Stanford University Press, 1959.
30. Darrow, Karl K., Scientific Monthly, March 1942.
31. Finlay-Freundlich, E., Monthly Notices of the Royal Astronomical Society, 105, 237.
32. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New York, 1952, page 38.
33. Einstein, Albert, Albert Einstein: Philosopher-Scientist, Paul Schilpp, editor, The Library of Living Philosophers, Evanston, Ill., 1949, page 67.
34. Alfven, Hannes, Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966, page 92.
35. Hawkins, David, The Language of Nature, W. H. Freeman & Co., San Francisco, 1964, page 183.
36. Lindsay, R. B., Physics Today, Dec. 1967.
37. Einstein, Albert, Sidelights on Relativity, E. P. Dutton & Co., New York, 1922, page 23.
38. Einstein and Infeld, op. cit., page 185.
39. Einstein, Albert, Relativity, op. cit., page 126.
40. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, page 129.
41. Einstein, Albert, Foreword to Concepts of Space, by Max Jammer, Harvard University Press, 1954.
42. Jeans, Sir James, op. cit., page 78.
43. Schlegel, Richard, Time and the Physical World, Michigan State University Press, 1961, page 160.

44. Whitrow, G. J., The Natural Philosophy of Time, Thomas Nelson & Sons, London, 1961, page 218.
45. Moller, C., The Theory of Relativity, The Clarendon Press, Oxford, 1952, page 49.
46. Tolman, Richard C., Relativity, Thermodynamics and Cosmology, The Clarendon Press, Oxford, 1934, page 195.
47. Science, July 14, 1972.
48. Heisenberg, Werner, Philosophic Problems of Nuclear Science, op. cit., page 12.
49. Thomson, Sir George, The Inspiration of Science, Oxford University Press, London, 1961, page 66.
50. Millikan, Robert A., Time and Its Mysteries, Collier Books, New York, 1962, page 24.
51. Jeans, Sir James, Physics and Philosophy, The Macmillan Co., New York, 1945, page 190.
52. Toulmin and Goodfield, The Architecture of Matter, Harper & Row, New York, 1962, page 298.
53. Bridgman, P. W., A Sophisticate's Primer of Relativity, Wesleyan University Press, 1962, page 10.
54. Feyerabend, P. K., Philosophy of Science, The Delaware Seminar, Vol. 2 (1962-1963), Bernard Baumrin, editor, Interscience Publishers, New York, 1963, page 17.
55. Einstein and Infeld, op. cit., page 195.
56. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955, page 186.
57. Will, Clifford M., Scientific American, Nov. 1974.
58. Feynman, Richard, op. cit., pages 156, 166.
59. Cohen, Crowe and Du Mond, The Fundamental Constants of Physics, Interscience Publishers, New York, 1957.
60. Alfven, Hannes, Scientific American, Apr. 1967.
61. Boorse and Motz, The World of the Atom, Vol. 2, Basic Books, New York, 1966, page 1457.
62. Hooper and Scharff, The Cosmic Radiation, John Wiley & Sons, New York, 1958, page 57.
63. Swann, W. F. G., Journal of the Franklin Institute, May 1962.

64. Davis, Leverett, Jr., Nuovo Cimento Suppl., 10th Ser., Vol. 13, No. 1 (1959).
65. Donahue, T. M., Physical Review, 2nd Ser., Vol. 84, No. 5 (1951), page 972.
66. Satz, Ronald W., Reciprocity, May 1975.
67. Weisskopf, V. F., Comments on Nuclear and Particle Physics, Jan.-Feb. 1969.
68. Pauling, Linus, The Nature of the Chemical Bond, Cornell University Press, 1960, page 217.
69. See, for instance, Badger, G. M., The Structures and Reactions of the Aromatic Compounds, Cambridge University Press, 1954, Chapter 1.
70. Weisskopf, V. F., Lectures in Theoretical Physics, Vol. III, Brittin, J. Downs, and B. Downs, editors, Interscience Publishers, New York, 1961, page 80.

DEWEY B. LARSON: THE COLLECTED WORKS

Dewey B. Larson (1898-1990) was an American engineer and the originator of the Reciprocal System of Theory, a comprehensive theoretical framework capable of explaining all physical phenomena from subatomic particles to galactic clusters. In this general physical theory space and time are simply the two reciprocal aspects of the sole constituent of the universe: motion. For more background information on the origin of Larson's discoveries, see the Interview with D. B. Larson taped at Salt Lake City in 1984. This site covers the entire scope of Larson's scientific writings, including his exploration of economics and metaphysics.

Physical Science

The Structure of the Physical Universe: The original groundbreaking publication wherein the Reciprocal System of Physical Theory was presented for the first time.

Nothing but Motion: The first volume of the revised edition of The Structure of the Physical Universe, developing the basic principles and relations.

Basic Properties of Matter: The second volume of the revised edition of The Structure of the Physical Universe, applying the theory to the structure and behavior of matter, electricity and magnetism.

The Universe of Motion: The third volume of the revised edition of The Structure of the Physical Universe, applying the theory to astronomy.

The Case Against the Nuclear Atom: "A rude and outspoken book."

Beyond Newton: "...Recommended to anyone who thinks the subject of gravitation and general relativity was opened and closed by Einstein."

New Light on Space and Time: A bird's eye view of the theory and its ramifications.

The Neglected Facts of Science: Explores the implications for physical science of the observed existence of scalar motion.

Quasars and Pulsars: Explains the most violent phenomena in the universe.

The Liquid State Papers: A series of privately circulated papers on the liquid state of matter.

The Collected Essays of Dewey B. Larson: Larson's articles in Reciprocity and other publications, as well as unpublished essays.

The Dewey B. Larson Correspondence: Larson's scientific correspondence, providing many informative sidelights on the development of the theory and the personality of its author.

The Dewey B. Larson Lectures: Transcripts and digitized recordings of Larson's lectures.

Metaphysics

Beyond Space and Time: A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science

The Road to Full Employment: The scientific answer to the number one economic problem.

The Road to Permanent Prosperity: A theoretical explanation of the business cycle and the means to overcome it.

From: http://www.reciprocalsystem.com/bpm/index.htm

Basic Properties of Matter
DEWEY B. LARSON
Volume II of a revised and enlarged edition of THE STRUCTURE OF THE PHYSICAL UNIVERSE

Preface
1. Solid Cohesion
2. Inter-atomic Distances
3. Distances in Compounds
4. Compressibility
5. Heat
6. Specific Heat Patterns
7. Temperature Relations
8. Thermal Expansion
9. Electric Currents
10. Electrical Resistance
11. Thermoelectric Properties
12. Scalar Motion
13. Electric Charges
14. The Basic Forces

15. Electrical Storage
16. Induction of Charges
17. Ionization
18. The Retreat From Reality
19. Magnetostatics
20. Magnetic Quantities and Units
21. Electromagnetism
22. Charges in Motion
23. Magnetic Materials
24. Isotopes
25. Radioactivity
26. Atom Building
27. Mass and Energy
References

Preface

This volume is the second in a series in which I am undertaking to develop the consequences that necessarily follow if it is postulated that the physical universe is composed entirely of motion. The characteristics of the basic motion were defined in Nothing But Motion, the first volume of the series, in the form of seven assumptions as to the nature and interrelation of space and time. In the subsequent development, the necessary consequences of these assumptions have been derived by logical and mathematical processes, without the introduction of any supplementary or subsidiary assumptions, and without introducing anything from experience. Coincidentally with this theoretical development, it has been shown that the conclusions thus reached are consistent with the relevant data from observation and experiment, wherever a comparison can be made. This justifies the assertion that, to the extent to which the
development has been carried, the theoretical results constitute a true and accurate picture of the actual physical universe.

In a theoretical development of this nature, starting from a postulate as to the fundamental nature of the universe, the first results of the deductive process necessarily take the form of conclusions of a basic character: the structure of matter, the nature of electromagnetic radiation, etc. Inasmuch as these are items that cannot be apprehended directly, it has been possible for previous investigators to formulate theories of an ad hoc nature in each individual field to fit the limited, and mainly indirect, information that is available. The best that a correct theory can do in any one of these individual areas is to arrive at results that also agree with the available empirical information. It is not possible, therefore, to grasp the full significance of the new development unless it is recognized that the new theoretical system, the Reciprocal System, as we call it, is one of general application, one that reaches all of its conclusions in all physical fields by deduction from the same set of basic premises.

Experience has indicated that it is difficult for most individuals to get a broad enough view of the fundamentals of the many different branches of physical science for a full appreciation of the unitary character of this new system. However, as the deductive development is continued, it gradually extends down into the more familiar areas, where the empirical information is more readily available, and less subject to arbitrary adjustment or interpretation to fit the prevailing theories. Thus the farther the development of this new general physical theory is carried, the more evident its validity becomes. This is particularly true where, as in the subject matter treated in this present volume, the theoretical deductions provide both explanations and numerical values in areas where neither is available from conventional sources.

There has been an interval of eight years between the publication of Volume I and the first complete edition of this second volume in the series. Inasmuch as the investigation whose results are here being reported is an ongoing activity, a great deal of new information has been accumulated in the meantime. Some of this extends or clarifies portions of the subject matter of the first volume, and since the new findings have been taken into account in dealing with the topics covered in this volume, it has been necessary to discuss the relevant aspects of these findings in this volume, even though some of them may seem out of place. If, and when, a revision of the first volume is undertaken, this material will be transferred to Volume I.

The first 11 chapters of this volume were published in the form of reproductions of the manuscript pages in 1980. Publication of the first complete edition has been made possible through the efforts of a group of members of the International Society of Unified Science, including Rainer Huck, who handled the financing, Phil Porter, who arranged for the printing, Eden Muir, who prepared the illustrations, and Jan Sammer, who was in charge of the project.

D. B. Larson
December 1987

CHAPTER 1

Solid Cohesion

The consequences of the reversal of direction (in the context of a fixed reference system) that takes place at unit distance were explained in a general way in Chapter 8 of Volume I. As brought out there, the most significant of these consequences is that the establishment of an equilibrium between gravitation and the progression of the natural reference system becomes possible.

There is a location outside unit distance where the magnitudes of these two motions are equal: the distance that we are calling the gravitational limit. But this point of equality is not a point of equilibrium. On the contrary, it is a point of instability. If there is even a slight unbalance of forces one way or the other, the resulting motion accentuates the unbalance. A small inward movement, for instance, strengthens the inward force of gravitation, and thereby causes still further movement in the same direction. Similarly, if a small outward movement occurs, this weakens the gravitational force and causes further outward movement. Thus, even though the inward and outward motions are equal at the gravitational limit, this is actually nothing but a point of demarcation between inward and outward motion. It is not a point of equilibrium.

In the region inside unit distance, on the contrary, the effect of any change in position opposes the unbalanced forces that produced the change. If there is an excess gravitational force, an outward motion occurs which weakens gravitation and eliminates the unbalance. If the gravitational force is not adequate to maintain a balance, an inward motion takes place. This increases the gravitational effect and restores the equilibrium. Unless there is some intervention by external forces, atoms move gravitationally until they eventually come within unit distance of other atoms. Equilibrium is then established at positions within this inside region: the time region, as we have called it.

The condition in which a number of atoms occupy equilibrium positions of this kind in an aggregate is known as the solid state of matter. The distance between such positions is the inter-atomic distance, a distinctive feature of each particular material substance that we will examine in detail in the following chapter. Displacement of the equilibrium in either direction can be accomplished only by the application of a force of some kind, and a solid structure resists either an inward force, a compression, or an outward force, a tension. To the extent that resistance to tension operates to prevent separation of the atoms of a solid, it is commonly known as the force of cohesion.

The conclusions with respect to the nature and origin of atomic cohesion that have been reached in this work replace a familiar theory, based on altogether different premises. This previously accepted hypothesis, the electrical theory of matter, has already had some consideration in the preceding volume, but since the new explanation of the nature of the cohesive force is basic to the present development, some more extensive comparisons of the two conflicting viewpoints will be in order before we proceed to develop the new theoretical structure in greater detail.

The electrical, or electronic, theory postulates that the atoms of solid matter are electrically charged, and that their cohesion is due to the attraction between unlike charges. The principal support for the theory comes from the behavior of ionic compounds in solution. A certain proportion of the molecules of such compounds split up, or dissociate, into oppositely charged components which are then called ions. The presence of the charges can be explained in either of two ways: (1) the charges were present, but undetectable, in the undissolved material, or (2) they were created in the solution process. The adherents of the electrical theory base it on explanation (1). At the time this explanation was originally formulated, electric charges were thought to be relatively permanent entities, and the conclusion with respect to their role in the solution process was therefore quite in keeping with contemporary scientific thought. In the meantime, however, it has been found that electric charges are easily created and easily destroyed, and are no more than a transient feature of matter. This cuts the ground from under the main support of the electrical theory, but the theory has persisted because of the lack of any available alternative.

Obviously some kind of a force must hold the solid aggregate together. Outside of the forces known to result directly from observable motion, there are only three kinds of force of which there has heretofore been any definite observational knowledge: gravitational, electric, and magnetic. The so-called "forces" which play various roles in present-day atomic physics are purely hypothetical. Of the three known forces, the only one that appears to be strong enough to account for the cohesion of solids is the electric force. The general tendency in scientific circles has therefore been to take the stand that cohesion must result from the operation of electrical forces, notwithstanding the lack of any corroboration of the conclusions reached on the basis of the solution process, and the existence of strong evidence against the validity of those conclusions.

One of the serious objections to this electrical theory of cohesion is that it is not actually a theory, but a patchwork collection of theories. A number of different explanations are advanced for what is, to all appearances, the same problem. In its basic form, the theory is applicable only to a restricted class of substances, the so-called "ionic" compounds. But the great majority of compounds are "non-ionic." Where the hypothetical ions are clearly non-existent, an electrical force between ions cannot be called upon to explain the cohesion, so, as one of the general chemistry texts on the author's shelves puts it, "A different theory was required to account for the formation of these compounds." But this "different theory," based on the weird concept of electrons "shared" by the interacting atoms, is still not adequate to deal with all of the non-ionic compounds, and a variety of additional explanations are called upon to fill the gaps. In current chemical parlance the necessity of admitting that each of these different explanations is actually another theory of cohesion is avoided by calling them different types of "bonds" between the atoms. The hypothetical bonds are then described in terms of interaction of electrons, so that the theories are united in language, even though widely divergent in content. As noted in Chapter 19, Vol. I, a half dozen or so different types of bonds have been postulated, together with "hybrid" bonds which combine features of the general types.

Even with all of this latitude for additional assumptions and hypotheses, some substances, notably the metals, cannot be accommodated within the theory by any expedient thus far devised. The metals admittedly do not contain oppositely charged components, if they contain any charged components at all, yet they are subject to cohesive forces that are indistinguishable from those of the ionic compounds. As one prominent physicist, V. F. Weisskopf, found it necessary to admit in the course of a lecture, "I must warn you I do not understand why metals hold together." Weisskopf points out that scientists cannot even agree as to the manner in which the theory should be applied. Physicists give us one answer, he says, chemists another, but "neither of these answers is adequate to explain what a chemical bond is."1

This is a significant point. The fact that the cohesion of metals is clearly due to something other than the attraction between unlike charges logically leads to a rather strong presumption that atomic cohesion in general is non-electrical. As long as some non-electrical explanation of the cohesion of metals has to be found, it is reasonable to expect that this explanation will be found applicable to other substances as well. Experience in dealing with the cohesion of metals thus definitely foreshadows the kind of conclusions that have been reached in the development of the Reciprocal System of theory.

It should also be noted that the electrical theory is wholly ad hoc. Aside from what little support it can derive from extrapolation to the solid state of the conditions existing in solutions, there is no independent confirmation of any of the principal assumptions of the theory. No observational indication of the existence of electrical charges in ordinary matter can be detected, even in the most strongly ionic compounds. The existence of electrons as constituents of atoms is purely hypothetical. The assumption that the reluctance of the inert gases to enter into chemical compounds is an indication that their structure is a particularly stable one is wholly gratuitous. And even the originators of the idea of "sharing" electrons make no attempt to provide any meaningful explanation of what this means, or how it could be accomplished, if there actually were any electrons in the atomic structure. These are the assumptions on which the theory is based, and they are entirely without empirical support. Nor is there any solid basis for what little theoretical foundation the theory may claim, inasmuch as its theoretical ties are to the nuclear theory of atomic structure, which is itself entirely ad hoc.

But these points, serious as they are, can only be regarded as supplementary evidence, as there is one fatal weakness of the electrical theory that would demolish it even if nothing else of an adverse nature were known. This is our knowledge of the behavior of positive and negative electric charges when they are brought into close proximity. Such charges do not establish an equilibrium of the kind postulated in the theory; they destroy each other. There is no evidence which would indicate that the result of such contact is any different in a solid aggregate, nor is there even any plausible theory as to why any different outcome could be expected, or how it could be accomplished.

It is worth noting in this connection that while current physical theory portrays positive and negative charges as existing in a state of congenial companionship in the nuclear theory of the atom and in the electrical theory of matter, it turns around and gives us explanations of the behavior of antimatter in which these charges display the same violent antagonism that they demonstrate in actual observation. This is the kind of inconsistency that inevitably results when recalcitrant problems are "solved" by ad hoc assumptions that involve departures from established physical laws and principles.

In the context of the present situation, in which the electrical theory is challenged by a new development, all of these deficiencies and contradictions that are inherent in the electrical theory become very significant. But the positive evidence in favor of the new theory is even more conclusive than the negative evidence against its predecessor. First, and probably the most important, is the fact that we are not replacing the electrical theory of matter with another "theory of matter." The Reciprocal System is a complete general theory of the physical universe. It contains no hypotheses other than those relating to the nature of space and time, and it produces an explanation of the cohesion of solids in the same way that it derives logical and consistent explanations of other physical phenomena: simply by developing the consequences of the basic postulates. We therefore do not have to call upon any additional force of a hypothetical nature to account for the cohesion. The two forces that determine the course of events in the region outside unit distance also account for the existence of the inter-atomic equilibrium inside this distance.

It is significant that the new theory identifies both of these forces. One of the major defects of the electrical theory of cohesion is that it provides only one force, the hypothetical electrical force of attraction, whereas two forces are required to explain the observed situation. Originally it was assumed that the atoms are impenetrable, and that the electrical forces merely hold them in contact. Present-day knowledge of compressibility and other properties of solids has demolished this hypothesis, and it is now evident that there must be what Karl Darrow called an "antagonist," in the statement quoted in Volume I, to counter the attractive force, whatever it may be, and produce an equilibrium. Physicists have heretofore been unable to find any such force, but the development of the Reciprocal System has now revealed the existence of a powerful and omnipresent force hitherto unknown to science. Here is the missing ingredient in the physical situation, the force that not only explains the cohesion of solid matter but, as we saw in Volume I, supplies the answers to such seemingly far removed problems as the structure of star clusters and the recession of the galaxies.

One point that should be specifically noted is that it is this hitherto unknown force, the force due to the progression of the natural reference system, that holds the solid aggregate together, not gravitation, which acts in the opposite direction in the time region. The prevailing opinion that the force of gravitation is too weak to account for the cohesion is therefore irrelevant, whether it is correct or not.

Inasmuch as the new theoretical system applies the same general principles to an understanding of all of the inter-atomic and inter-molecular equilibria, it explains the cohesion of all substances by the same physical mechanism. It is no longer necessary to have one theory for ionic substances, several more for those that are non-ionic, and to leave the metals out in the cold without any applicable theory.
The theoretical findings with respect to the nature of chemical combinations and the structure of molecules that were outlined in the preceding volume have made a major contribution to this simplification of the cohesion picture, as they have eliminated the need for different kinds of cohesive forces, or "bonds." All that is now required of a theory of cohesion is that it supply an explanation of the inter-atomic equilibrium, and this is provided, for all solid substances under all conditions, by balancing the outward motion (force) of gravitation against the inward motion (force) of the progression of the natural reference system. Because of the asymmetry of the rotational patterns of the atoms of many elements, and the consequent anisotropy of the force distributions, the equilibrium locations vary not only between substances, but also between different orientations of the same substance. Such variations, however, affect only the magnitudes of the various properties of the atoms. The essential character of the inter-atomic equilibrium is always the same.

As indicated in the original discussion of gravitation, even though the various aggregates of matter do not actually exert gravitational forces on each other, the observable results of their gravitational motions are identical with those that would be produced if such forces did exist. The same is true of the results of the progression of the natural reference system. There is a considerable element of convenience in expressing these results in terms of force, on an "as if" basis, and this practice has already been followed to some extent in the previous volume. Now that we are ready to begin a quantitative evaluation of the inter-atomic relations, however, it is desirable to make it clear that the force concept is being used only for convenience. Although the quantitative discussion that follows, like the earlier qualitative discussion, will be carried on in terms of forces, what we will actually be dealing with are the inward and outward motions of each individual atom.

While the items that have been mentioned add up to a very impressive case in favor of the new theory of cohesion, the strongest confirmation of its validity comes from its ability to locate the point of equilibrium; that is, to give us specific values of the inter-atomic distances. As will be demonstrated in Chapter 2, we are already able, by means of the newly established relations, to calculate the possible values of the inter-atomic distance for most of the simpler substances, and there do not appear to be any serious obstacles in the way of extending the calculations to more complex substances whenever the necessary time and effort can be applied to the task. Furthermore, this ability to determine the location of the point of equilibrium is not limited to the simple situation where only the two basic forces are involved. Chapters 4 and 5 will show that the same general principles can also be applied to an evaluation of the changes in the equilibrium distance that result from the application of heat or pressure to the solid aggregate.

Although, as stated in Volume I, the true magnitude of a unit of space is the same everywhere, the effective magnitude of a spatial unit in the time region is reduced by the inter-regional ratio. It is convenient to regard this reduced value, 1/156.44 of the natural unit, as the time region unit of space. The effective portion of a time region phenomenon may extend into one or more additional units, in which case the measured distance will exceed the time region unit, or the original single unit may not be fully effective, in which case the measured distance will be less than the time region unit.
Thus the inter-atomic equilibrium may be reached either inside or outside the time region unit of distance, depending on where the outward rotational forces reach equality with the inward force of the progression of the natural reference system. Extension of the inter-atomic distance beyond one time region unit does not take the equilibrium system out of the time region, as the boundary of that region is at one full-sized natural unit of distance, not at one time region unit. So far as the inter-atomic force equilibrium is concerned, therefore, the time region unit of distance does not represent any kind of a critical magnitude. As we saw in our examination of the composition of the magnetic neutral groups, however, the natural unit as it exists in the time region (the time region unit) is a critical magnitude from the orientation standpoint.

An explanation of this difference can be derived from a consideration of the difference in the inherent nature of the two phenomena. Where the inter-atomic distance is less than one time region unit, the rotational forces are acting against the inward force of the progression of the reference system during only a portion of the unit progression. Similarly, where the inter-atomic distance is greater than one time region unit, the unit inward force is acting against only a portion of the greater-than-unit outward rotational forces. The variations in distance thus reflect differences in the magnitudes of the rotational forces. But the orientation effect has no magnitude. It either exists, or does not exist. As we have noted in the previous discussion, particularly in connection with the structure of the benzene molecule, this effect, if it exists, is the same regardless of whether it acts at short range or at long range. The essential requirement that it must meet is that it must be continuously effective. Otherwise, the orientation is destroyed during the off period. Where the rotational forces extend beyond one time region unit, so that the unit orientation effect is coincident with only a portion of the total rotational forces, the orienting effect is not continuous, and no orientation takes place.

In this chapter we are dealing mainly with what we are calling "rotational forces." These are, of course, the same "as if" forces due to the scalar aspect of the atomic rotation that were called "gravitational" in some other contexts, the choice of language depending on whether it is the origin or the effect of the force that is being emphasized in the discussion. For a quantitative evaluation of the rotational forces we may use the general force equation, provided that we replace the usual terms of the equation with the appropriate time region terms.

As explained in introducing the concept of the time region in Chapter 8 of Vol. I, equivalent space 1/t replaces space in the time region, and velocity is therefore 1/t². Energy, the one-dimensional equivalent of mass, takes the place of mass in the time region expression of the force equation, because the three rotations of the atom act separately, rather than jointly, in this region; it is the reciprocal of the velocity expression, or t². Acceleration is velocity divided by time: 1/t³. The time region equivalent of the equation F = ma is therefore F = Ea = t² × 1/t³ = 1/t in each dimension.

At this point we will need to take note of the nature of the increments of speed displacement in the time region. In the outside region additions to the displacement proceed by units: first one unit, then another similar unit, yet another, and so on, the total up to any specific point being n units. There is no term with the value n. This value appears only as a total. The additions in the time region follow a different mathematical pattern, because in this case only one of the components of motion progresses, the other remaining fixed at the unit value. Here the displacement is 1/x, and the sequence is 1/1, 1/2, 1/3 ... 1/n. The quantity 1/n is the final term, not the total.
To obtain the total that corresponds to n in the outside region it is necessary to integrate the quantity 1/x from x = 1 to x = n. The result is ln n, the natural logarithm of n.
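The difference between the two kinds of total is easy to exhibit numerically. The following sketch (Python, included only as an illustration of the arithmetic; the text itself contains no computation) compares the discrete summation of the 1/x series with the integral that, as explained below, is the appropriate total for a continuous quantity:

```python
import math

# Discrete total of the 1/x series: 1/1 + 1/2 + ... + 1/n.
# Continuous total: the integral of 1/x from 1 to n, which is ln n.
for n in (3, 10, 100):
    summed = sum(1.0 / x for x in range(1, n + 1))
    integrated = math.log(n)
    print(f"n = {n:3d}   sum = {summed:.4f}   ln n = {integrated:.4f}")
```

The two totals differ by an amount that approaches a constant (about 0.577) as n increases, so the choice between them is not a matter of indifference; the paragraph that follows explains why the integral is the appropriate form here.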

Many readers of the first edition have asked why this total should be an integral rather than a summation. The answer is that we are dealing with a continuous quantity. As pointed out in the introductory chapters of the preceding volume, the motion of which the universe is constructed does not proceed in a succession of jumps. Even though it exists only in units, it is a continuous progression. A unit of this motion is a specific portion of this continuity. A series of units is a more extended segment of that continuity, and its magnitude is an integral. In dealing with the basic individual units of motion in the outside region it is possible to use the summation process, but only because in this case the sum is the same as the integral. To get the total of the 1/x series we must integrate.

To evaluate the rotational force we integrate the quantity 1/t from unity, the physical datum or zero level, to t:

F = ln t (1-1)

If the quantity ln t is below unity in any dimension there is no effective outward force in that dimension, but the natural logarithm exceeds unity for all values of t above 2, and the atoms of all elements have a rotational displacement of 2 (equivalent to t = 3) or more in at least one dimension. Consequently, all have effective rotational forces.

The force computed from equation 1-1 is the inherent rotational force of the individual atom; that is, the one-dimensional force which it exerts against a single unit of force. The force between two (apparently) interacting atoms is

F = ln tA ln tB (1-2)

For a two-dimensional magnetic rotation this becomes

F = ln² tA ln² tB (1-3)

As we found in Chapter 12, Vol. I, the equivalent of distance s in the time region is s², and the gravitational force in this region therefore varies inversely as the fourth power of the distance rather than the square. Applying this factor to the expression for the force of the two-dimensional rotation, together with the inter-regional ratio, the ratio of effective to total force derived in the same chapter, we obtain the effective force of the magnetic rotation of the atom:

Fm = (0.006392)⁴ s⁻⁴ ln² tA ln² tB (1-4)

The distance factor does not apply to the force due to the progression of the natural reference system, as this force is omnipresent, and unlike the rotational force is not altered as the objects to which it is applied change their relative positions. At the point of equilibrium, therefore, the rotational force is equal to the unit force of the progression. Substituting unity for Fm in equation 1-4, and solving for the equilibrium distance, we obtain

s0 = 0.006392 ln½ tA ln½ tB (1-5)

The inter-atomic distances for those elements which have no electric rotation, the inert gas series, may be calculated directly from this equation. In the elements, however, tA = tB in most cases, and it will be convenient to express the equation in the simplified form:

s0 = 0.006392 ln t (1-6)

The values thus calculated are in the neighborhood of 10⁻⁸ cm, and for convenience this quantity has been taken as a unit in which to express the inter-atomic and inter-molecular distances. When converted from natural units to this conventional unit, the Angstrom unit, symbol Å, equation 1-6 becomes

s0 = 2.914 ln t Å (1-7)

In applying this equation we encounter another of the questions with respect to terminology that inevitably arise in a basically new treatment of any subject. The significance of the quantity t as used in the foregoing discussion and in the equations is obvious from the context: it is the magnitude of the effective rotation. But the question is: What shall we call it? The basic quantity with which we are dealing, the rotational speed displacement, does not enter into the equations directly. The mathematical structure of these equations requires us to enter them with values that include the initial unit which constitutes the natural zero datum. Furthermore, each double vibrational unit rotates independently, and when the rotation extends to a second such unit the increment in the value of t is only one half unit per added unit of displacement. Under these circumstances, where the relation of the term t to the displacement is variable, it seems advisable to give this term a distinctive name, and we will therefore call it the specific rotation.

As brought out in the discussion of the general characteristics of the atomic rotation in Chapter 10, Vol. I, the two magnetic displacements may be unequal, and in this event the speed distribution takes the form of a spheroid with the principal rotation effective in two dimensions and the subordinate rotation in one. The average effective value of the specific rotation under these conditions is (t₁²t₂)⅓. In this case we are dealing with the properties of a single entity, and the mathematical situation seems clear. But it is not so evident how we should arrive at the effective specific rotation where there is an interaction between two atoms whose individual rotations are different. As matters now stand it appears that the geometric mean of the two specific rotations is the correct quantity, and the values tabulated in Chapters 2 and 3 have been calculated on this basis.

It should be noted, however, that this conclusion as to the mathematics of the combination is still somewhat tentative, and if further study shows that it must be modified in some, or all, applications, the calculated values will be subject to corresponding modifications. Any changes will be small in most cases, but they will be substantial where there is a large difference between the two components. The absence of major discrepancies between the calculated and observed distances in combinations of atoms with much different dimensions therefore gives some significant support to the use of the geometric mean pending further theoretical clarification.

The inter-atomic distances of four of the five inert gas elements for which experimental data are available follow the regular pattern. The values calculated for these elements are compared with the experimental distances in Table 1.

Table 1: Distances - Inert Gas Elements

Atomic                    Specific    Distance
Number    Element         Rotation    Calculated   Observed
10        Neon            3-3         3.20         3.20
18        Argon           4-3         3.76         3.84
36        Krypton         4-4         4.04         4.02
54        Xenon           4½-4½       4.38         4.41
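Equation 1-7, together with the (t₁²t₂)⅓ rule for unequal magnetic displacements, is sufficient to reproduce the calculated column of Table 1. A minimal sketch in Python (the natural-unit conversion in the comment uses the Volume I value of the natural unit of space, 4.558816×10⁻⁶ cm, and is included only as a cross-check on the 2.914 constant):

```python
import math

# Effective specific magnetic rotation: t itself when the two rotations are
# equal, (t1^2 * t2)^(1/3) when the principal rotation is effective in two
# dimensions and the subordinate rotation in one.
def effective_t(t1, t2):
    return (t1 * t1 * t2) ** (1.0 / 3.0)

# Equation 1-7: s0 = 2.914 ln t, in Angstrom units.  Cross-check on the
# constant: 0.006392 natural units x 4.558816e-6 cm = 2.914e-8 cm = 2.914 A.
def s0(t1, t2):
    return 2.914 * math.log(effective_t(t1, t2))

for element, t1, t2 in (("Neon", 3, 3), ("Argon", 4, 3),
                        ("Krypton", 4, 4), ("Xenon", 4.5, 4.5)):
    print(f"{element:<8} {s0(t1, t2):.2f}")
# Output: 3.20, 3.76, 4.04, 4.38 -- the calculated column of Table 1.
```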

Helium, which also belongs to the inert gas series, has some special characteristics due to its low rotational displacement, and will be discussed in connection with other elements affected by the same factors. The reason for the appearance of the 4½ value in the xenon rotation will also be explained shortly.

The calculated distances are those which would prevail in the absence of compression and thermal expansion. A few of the experimental data have been extrapolated to this zero base by the investigators, but most of them are the actual observed values at atmospheric pressure and at temperatures which depend on the properties of the substances under examination. These values are not exactly comparable to the calculated distances. In general, however, the expansion and compression up to the temperature and pressure of observation are small. A comparison of the values in the last two columns of Table 1 and the similar tables in Chapters 2 and 3 therefore gives a good picture of the extent of agreement between the theoretical figures and the experimental results.

Another point about the distance correlations that needs to be taken into account is that there is a substantial amount of variation in the experimental results. If we were to take the closest of these measured values as the basis for comparison, the correlation would be very much better. One relatively recent determination of the xenon distance, for example, arrives at a value of 4.34, almost identical with the calculated distance. There are also reported values for the argon distance that agree more closely with the theoretical result. However, a general policy of using the closest values would introduce a bias that would tend to make the correlation look more favorable than the situation actually warrants. It has therefore been considered advisable to use empirical data from a recognized selection of preferred values. Except for those values identified by asterisks, all of the experimental distances shown in the tables are taken from the extensive compilation by Wyckoff.2 Of course, the use of these values selected on the basis of indirect criteria introduces a bias in the unfavorable direction, since, if the theoretical results are correct, every experimental error shows up as a discrepancy, but even with this negative bias the agreement between theory and observation is close enough to show that the theoretical determination of the inter-atomic distance is correct in principle, and to demonstrate that, with the exception of a relatively small number of uncertain cases, it is also correct in the detailed application.

Turning now to the elements which have electric as well as magnetic displacement, we note again that the electric rotation is one-dimensional and opposes the magnetic rotation. We may therefore obtain an expression for the effect of the electric rotational force on the magnetically rotating photon by inverting the one-dimensional force term of equation 1-2:

Fe = 1/(ln t′A ln t′B) (1-8)

Inasmuch as the electric rotation is not an independent motion of the basic photon, but a rotation of the magnetically rotating structure in the reverse direction, combining the electric rotational force of equation 1-8 with the magnetic rotational force of equation 1-4 modifies the rotational terms (the functions of t) only, and leaves the remainder of equation 1-4 unchanged.

F = (0.006392)⁴ (ln² tA ln² tB) / (s⁴ ln t′A ln t′B) (1-9)

Here again the effective rotational (outward) and natural reference system progression (inward) forces are necessarily equal at the equilibrium point. Since the force of the progression of the natural reference system is unity, we substitute this value for F in equation 1-9 and solve for s0, the equilibrium distance, as before.

s0 = 0.006392 (ln½ tA ln½ tB) / (ln¼ t′A ln¼ t′B) (1-10)

Again simplifying for application to the elements, where A is generally equal to B:

s0 = 0.006392 ln t / ln½ t′ (1-11)

In Angstrom units this becomes

s0 = 2.914 ln t / ln½ t′ Å (1-12)
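As a numerical check on equation 1-12, the same calculation can be carried out with the electric term included. In the sketch below (Python, illustrative only), scandium and titanium are used because their specific rotations appear in Table 2 of the next chapter; the computed values agree with the tabulated figures to within about half of one percent:

```python
import math

# Equation 1-12: s0 = 2.914 ln t / ln^(1/2) t', in Angstrom units, for the
# case where the two atoms have the same magnetic (t) and electric (t')
# specific rotations.
def s0(t1, t2, t_prime):
    t_eff = (t1 * t1 * t2) ** (1.0 / 3.0)   # unequal magnetic rotations
    return 2.914 * math.log(t_eff) / math.sqrt(math.log(t_prime))

print(f"Scandium  {s0(4, 3, 4):.2f}")   # 3.19; Table 2 lists 3.18
print(f"Titanium  {s0(4, 3, 5):.2f}")   # 2.96; Table 2 lists 2.95
```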

As already noted, when the rotation is extended to a second (double) vibrational unit, to vibration two, we may say, each added displacement unit adds only one half unit to the specific rotation. Inasmuch as 8 electric displacement units distributed three-dimensionally bring the rotation to a new zero point, and cause the rotational motion to revert to the translational status, the change to vibration two in the electric dimension must take place before the displacement reaches 8. Specific rotation 8 (displacement 7) is therefore followed by 8½, 9, 9½, etc. But the first effective rotational displacement unit is necessarily one-dimensional, and the linear equivalent of the 8-unit limit is 2 units. Thus this first unit has already reached the one-dimensional limit. The succeeding displacement units have the option of continuing on the one-dimensional basis and extending the rotation to vibration two rather than extending it into additional dimensions. The change to vibration two therefore may take place immediately after the first displacement unit. In this case specific rotation 2 (displacement 1) is followed by 2½, 3, 3½, etc. The lower value is commonly found where it first becomes possible; that is, displacement 2 normally corresponds to rotation 2½ rather than 3. The next element may take the intermediate value 3½, but beyond this point the higher vibration one value normally prevails.

In the first edition it was indicated that the one or two vibrational displacement units being rotated did not necessarily constitute the entire vibrational component of the basic photon, inasmuch as these one or two units are capable of being rotated independently of the remaining vibrational units, if any. Further consideration now leads to the conclusion that one or two units of a multi-unit photon frequency can, in fact, be set in rotation independently, as previously indicated, and that the original photon may have had an excess of vibrational units, but that in such an event the rotating portion of the photon begins moving inward, whereas the non-rotating portion continues moving outward by reason of the progression of the natural reference system. The two portions therefore separate, and the rotating portion retains no non-rotating vibrational component.

The general pattern of the magnetic rotational values is the same as that of the electric values. The tendency to substitute specific rotation 2½ for 3 applies to the magnetic as well as to the electric rotation, and in the lower group combinations (both elements and compounds) that follow the regular electropositive pattern the specific magnetic rotations are usually 2½-2½ or 3-2½, rather than 3-3. But the upper limit for specific magnetic rotation on a vibration one basis is 4 (three displacement units) instead of 8, as the two-dimensional rotation reaches the upper zero level at 4 displacement units in each dimension. Rotation 4½ therefore follows rotation 4 in the regular sequence, as we saw in the values given for xenon in Table 1. It is possible to reach rotation 5 in one dimension, however, without bringing the magnetic rotation as a whole up to the 5 level, and 5-4 or 5-4½ rotation occurs in some elements either in lieu of, or in combination with, the 4½-4 or 4½-4½ rotation.
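The counting rule described above is compact enough to express directly. A minimal sketch (Python, illustrative only; the switch point is left as a free parameter because, as the text notes, the displacement at which the rotation extends to vibration two varies from group to group):

```python
# Specific rotation as a function of speed displacement: an initial unit (the
# natural zero) plus one full unit per displacement unit up to the point where
# the rotation extends to vibration two, then one half unit per added unit.
def specific_rotation(displacement, switch_point):
    full_units = min(displacement, switch_point)
    half_units = max(0, displacement - switch_point)
    return 1 + full_units + 0.5 * half_units

# Latest possible switch (after displacement 7): 8 is followed by 8½, 9, ...
print([specific_rotation(d, 7) for d in range(6, 10)])   # [7, 8, 8.5, 9.0]

# Earliest possible switch (after displacement 1): 2 is followed by 2½, 3, 3½.
print([specific_rotation(d, 1) for d in range(1, 5)])    # [2, 2.5, 3.0, 3.5]
```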

CHAPTER 2

Inter-atomic Distances

As equation 1-10 indicates, the distance between any two atoms in a solid aggregate is a function of the specific rotations of the atoms. Since each atom is capable of assuming any one of several different relative orientations of its rotational motions, it follows that there are a number of possible specific rotations for each combination of atoms. This number of possible alternatives is still further increased by two additional factors that were discussed earlier. The atom has the option, as we noted in Chapter 10, Vol. I, of rotating with the normal magnetic displacement and a positive electric displacement, or with the next higher magnetic displacement and a negative electric increment. And in either case, the effective quantity, the specific rotation, may be modified by extension of the motion to a second vibrating unit, as brought out in Chapter 1. It is possible that each of these many variations of the magnitude of the specific rotation, and the corresponding values of the inter-atomic distances, may actually be realized under appropriate conditions, but in any particular set of circumstances certain combinations of rotations are more probable than the others, and in ordinary practice the number of different values of the distance between the same two atoms is relatively small, except in certain special cases.

As matters now stand, therefore, we are able to calculate from theoretical premises a small set of possible inter-atomic distances for each element or compound. Ultimately it will no doubt be advisable to evaluate the probability relations in detail so that the results of the calculations will be as specific as possible, but it has not been feasible to undertake this full treatment of the probability relationships in this present work. In an investigation of so large a field as the structure of the physical universe there must not only be some selection of the subjects that are to be covered, but also some decisions as to the extent to which that coverage will be carried. A comprehensive treatment of the probability relations wherever they enter into physical situations could be quite helpful, but the amount of time and effort required to carry out such a project will undoubtedly be enormous, and its contribution to the major objectives of this present undertaking is not sufficient to justify allocating so much of the available resources to it. Similar decisions as to how far to carry the investigation in certain areas have had to be made from time to time throughout the course of the work in order to limit it to a finite size.

It might be well to point out in this connection that it will never be possible to calculate a unique inter-atomic distance for every element or combination of elements, even when the probability relations have been definitely established, as in many cases the choice from among the alternatives is not only a matter of relative probability, but also of the history of the particular specimen. Where two or more alternative forms are stable within the range of physical conditions under which the empirical examination is being made, the treatment to which the specimen has previously been subjected plays an important part in the determination of the structure.

It does not follow, however, that we are totally precluded from arriving at definite values for the inter-atomic distances. Even though no quantitative evaluation of the relative probabilities of the various alternatives is yet available, the nature of the major factors involved in their determination can be deduced theoretically, and this qualitative information is sufficient in most cases to exclude all but a very few of the total number of possible variations of the specific rotations. Furthermore, there are some series relations by means of which the range of variability can be still further narrowed. These series patterns will be more evident when we examine the distances in compounds in the next chapter, and they will be given more detailed consideration at that point.

The first thing that needs to be emphasized as we begin our analysis of the factors that determine the inter-atomic distance is that we are not dealing with the sizes of atoms; what we are undertaking to do is to evaluate the distance between the equilibrium positions that the atoms occupy under specified conditions. In Chapter 1 we examined the general nature of the atomic equilibrium. In this and the following chapter we will see how the various factors involved in the relations between the rotations of the (apparently) interacting atoms affect the point of equilibrium, and we will arrive at values of the inter-atomic distances under static conditions. Then in Chapters 5 and 6 we will develop the quantitative relations that will enable us to determine just what changes take place in these equilibrium distances when external forces in the form of pressure and temperature are applied.

As we have seen in the preceding volume, all atoms and aggregates of matter are subject to two opposing forces of a general nature: gravitation and the progression of the natural reference system.
These are the primary forces (or motions) that determine the course of physical events. Outside the gravitational limits of the largest aggregates, the outward motion due to the progression of the natural reference system exceeds the inward motion of gravitation, and these aggregates, the major galaxies, move outward from each other at speeds increasing with distance. Inside the gravitational limits the gravitational motion is the greater, and all atoms and aggregates move inward. Ultimately, if nothing intervenes, this inward motion carries each atom within unit distance of another, and the directional reversal that takes place at the unit boundary then results in the establishment of an equilibrium between the motions of the two atoms. The inter-atomic distance is the distance between the atomic centers in this equilibrium condition. It is not, as currently assumed, an indication of the sizes of the atoms.

The current theory which regards the inter-atomic distance as a measure of "size" is, in many respects, quite similar to the electronic "bond" theory of molecular structure. Like the electronic theory, it is based on an erroneous assumption - in this case, the assumption that the atoms are in contact in the solid state - and like the electronic theory, it fits only a relatively small number of substances in its simple form, so that it is necessary to call upon a profusion of supplementary and subsidiary hypotheses to explain the deviations of the observed distances from what are presumed to be the primary values. As the textbooks point out, even in the metals, which are the simplest structures from the standpoint of the theory, there are many difficult problems, including the awkward fact that the presumed "size" is variable, depending on the nature of the crystal structure. Some further aspects of this situation will be considered in Chapter 3.

The resemblance between these two erroneous theories is not confined to the lack of adequate foundations and to the nature of the difficulties that they encounter. It also extends to the resolution of these difficulties, as the same principles that were derived from the postulates of the Reciprocal System to account for the formation of molecules of chemical compounds, when applied in a somewhat different way, are the general considerations that govern the magnitude of the inter-atomic distance in both elements and compounds. Indeed, all aggregates of electronegative elements are molecular in their composition, rather than atomic, as the molecular requirement that the negative electric displacement of an atom of such an element must be counterbalanced by an equivalent positive displacement in order to arrive at a stable equilibrium in space applies with equal force to a combination with a like atom. As we saw in our examination of the structural situation, electropositive elements are not subject to this restriction, but in many cases the molecular (balanced orientation) type of structure takes precedence over the electropositive structure by reason of collateral factors that affect the relative probability.

Because of this fact that the distances follow the structural pattern, the various ways of orienting the atomic rotations that were discussed in Chapter 18, Vol. I, with a few modifications due to the special conditions that exist in the elemental aggregates, determine the manner in which the atoms of an element are able to combine with each other, and the effective values of the specific rotations in these combinations. In the electropositive elements the specific rotations are based, in the first instance, on the rotational displacements as listed in Chapter 10, Vol. I.
Where the inter-atomic orientation is the normal positive arrangement, the displacements as listed are translated directly into specific rotations by addition of the initial unit and reduction of the incremental values where the rotation extends to vibration two. Except for the elements of Group 2A, which, as already noted, are subject to some special considerations because of their low magnetic displacements, the elements of Division I all follow the regular electropositive pattern of specific rotations. The only irregularities are in the electric rotations of the second and third elements of each group, where the point of transition to vibration two varies between groups. The inter-atomic distances in this division are listed in Table 2.

The regular electropositive pattern is also applicable in Division II, and a number of the Division II elements of Group 3A crystallize on this basis, with inter-atomic distances determined in the same manner as in Division I. As noted in Volume I, however, the Division II elements generally favor the magnetic type of orientation in chemical compounds because the normal positive orientation becomes less probable as the displacement increases. The same probability considerations operate against the positive orientation in the elements of this division, but instead of employing the magnetic orientation as the alternate, these elements utilize a type of orientation that is available only where all rotations of each participant in a combination are identical with those of the other. This arrangement reverses the effective directions of the rotations of alternate atoms. The resulting relative rotation is a combination of x and 8-x (or 4-x), as in the neutral orientation, and the effective specific rotations are 10 for vibration one and 5 for vibration two. A combination value 5-10 is also common.

Table 2: Distances - Division I

Group   Atomic   Element     Specific Rotation        Distance
        Number               Magnetic     Electric    Calc.   Obs.
2B      11       Sodium      3-2½, 3-3    2           3.70    3.71
        12       Magnesium   3-2½         2½          3.17    3.21
        13       Aluminum    3-2½         3           2.83    2.86
3A      19       Potassium   4-3          2           4.49    4.50
        20       Calcium     4-3          2½          4.00    3.98
        21       Scandium    4-3          4           3.18    3.20
        22       Titanium    4-3          5           2.95    2.92
3B      37       Rubidium    4-4          2           4.85    4.87
        38       Strontium   4-4          2½          4.32    4.28
        39       Yttrium     4-4          3½          3.64    3.63
        40       Zirconium   4-4          5           3.18    3.23
4A      55       Cesium      4½-4½        2           5.23    5.24
        56       Barium      5-4½         3           4.36    4.34
        57       Lanthanum   4½-4½        4           3.70    3.74
        58       Cerium      5-4½         5           3.61    3.63
4B      89       Actinium    4½-5         4           3.79    3.76*
        90       Thorium     4½-5         5           3.52    3.56

This reverse type of structure makes its appearance in the body-centered cubic crystal forms of chromium and iron, which coexist with the regular positive hexagonal or face-centered cubic structures. Vanadium and niobium, the first Division II elements of their respective groups, combine the positive and reverse orientations. Beyond niobium the positive orientation does not appear in the common Division II forms of the elements, the structures to which the present discussion is limited, and all elements take the reverse orientation, except europium and ytterbium, which combine it with a unit specific rotation; that is, no electric rotational displacement at all, as in the inert gas elements.

On the basis of the considerations discussed in Chapter 1, the average effective specific rotation for such rotational combinations has been taken as the geometric mean of the two components. Where the orientations are the same, and the only difference is in the magnitude, as in the 5-10 combination, and in the combinations of magnetic rotations that we will encounter later, the equilibrium is reached in the normal manner. If two different electric rotations are involved, the two-atom pairs cannot attain spatial equilibrium individually, but they establish a group equilibrium similar to that which is achieved where n atoms of valence one each combine with one atom of valence n.

The Division II distances are shown in Table 3.

Table 3: Distances - Division II

Group   Atomic   Element        Specific Rotation      Distance
        Number                  Magnetic    Electric   Calc.   Obs.
3A      23       Vanadium       4-3         6-10       2.62    2.62
        24       Chromium       4-3         7          2.68    2.72
                                4-3         10         2.46    2.49
        25       Manganese      4-3         8          2.59    2.58
        26       Iron           4-3         8½         2.56    2.57
                                4-3         10         2.46    2.48
        27       Cobalt         4-3         9          2.52    2.51
        28       Nickel         4-3         9½         2.49    2.49
3B      41       Niobium        4-4         6-10       2.83    2.85
        42       Molybdenum     4-4½        10         2.72    2.72
        43       Technetium     4-4½        10         2.73    2.73*
        44       Ruthenium      4-4½        10         2.73    2.70
                                4-4         10         2.66    2.69
        45       Rhodium        4-4½        10         2.73    2.76
        46       Palladium      4-4½        10         2.73    2.74
4A      59       Praseodymium   5-4½        5          3.61    3.64
        60       Neodymium      5-4½        5          3.61    3.65
        62       Samarium       5-4½        5          3.61    3.62*
        63       Europium       4½-5        1-5        3.96    3.96
        64       Gadolinium     5-4½        5          3.61    3.62
        65       Terbium        5-4½        5          3.61    3.59
        66       Dysprosium     5-4½        5          3.61    3.58
        67       Holmium        4½-5        5          3.52    3.56
        68       Erbium         4½-5        5          3.52    3.53
        69       Thulium        4½-5        5          3.52    3.52
        70       Ytterbium      4½-4½       1-5        3.86    3.87
        71       Lutetium       4½-5        5          3.52    3.50*
4B      91       Protactinium   4½-5        5-10       3.22    3.24*
        92       Uranium        4½-4½       10         2.87    2.85
        93       Neptunium      4½-4½       5          3.43    3.46*
        94       Plutonium      4½-4½       5-10       3.14    3.15*
        95       Americium      4½-4½       5          3.43    3.46*
        96       Curium         4½-4½       5-10       3.14    3.10*
        97       Berkelium      4½-4½       5          3.43    3.40*
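The geometric-mean rule stated above can be checked against the combination entries in Table 3. A short sketch (Python, illustrative only; vanadium and protactinium are chosen because their rows involve the 6-10 and 5-10 electric combinations) reproduces the tabulated values to within about one percent:

```python
import math

# Average effective specific rotation for a combination such as 6-10 or 5-10:
# the geometric mean of the two components, as stated in the text.
def s0(t1, t2, elec1, elec2):
    t_eff = (t1 * t1 * t2) ** (1.0 / 3.0)        # unequal magnetic rotations
    t_prime = math.sqrt(elec1 * elec2)           # geometric mean of electric pair
    return 2.914 * math.log(t_eff) / math.sqrt(math.log(t_prime))

print(f"Vanadium      {s0(4, 3, 6, 10):.2f}")    # 2.63; Table 3 lists 2.62
print(f"Protactinium  {s0(4.5, 5, 5, 10):.2f}")  # 3.21; Table 3 lists 3.22
```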

Because of the greater probability of the electropositive types of combinations, the characteristics of Division II carry over into the first elements of Division III, and these elements, nickel, palladium, and lutetium, are included in the table. Some similar modifications of the normal division boundaries have already been noted in connection with other subjects.

The net total rotation of the material atom is a motion with positive displacement - that is, a speed less than unity - and as such it normally results in a change of position in space. Inside unit space, however, all motion is in time. The orientation of the atom for the purpose of the space-time equilibrium therefore exists in the three dimensions of time. As we saw in our examination of the inter-regional situation in Chapter 12, Volume I, each of these dimensions contacts the space of the region outside unit distance individually. To the extent that the motion in a dimension of time acts along the line of this contact it is a motion in equivalent space. Otherwise it has no spatial effect beyond the unit boundary. Because of the independence of the three dimensions of motion in time, the relative orientation of the electric rotation of any combination of atoms may be the same in all spatial dimensions, or there may be two or three different orientations.

In most of the elements that have been discussed thus far the orientation is the same in all spatial dimensions, and in the exceptions the alternate rotations are symmetrically distributed in the solid structure. The force system of an aggregate of such elements is isotropic. It follows that any aggregate of atoms of these elements has a structure in which the constituents are arranged in one of the geometrical patterns possible for equal forces: an isometric crystal. All of the electropositive elements (Divisions I and II) crystallize in isometric forms, and, except for a few which apparently have quite complex structures, each of the crystal forms of these elements belongs to one or another of three types: the face-centered cube, the body-centered cube, or the hexagonal close-packed structure.

We now turn to the other major subdivision of the elements, the electronegative class, those whose normal electric displacement is negative. Here the force system is not necessarily isotropic, since the most probable arrangement in one or two dimensions may be the negative orientation, a direct combination of two negative electric displacements, similar to the all-positive combinations. It is not possible to have negative orientation in all three dimensions, and wherever it does exist in one or two dimensions the rotational forces of the atoms are necessarily anisotropic.

The controlling factor is the requirement that the net total rotational displacement of a material atom as a whole must be positive. Negative orientation in all three dimensions is obviously incompatible with this requirement, but if the negative displacement is restricted to one dimension the aggregate has fixed atomic positions in two dimensions, with a fixed average position in the third because of the positive displacement of the atom as a whole. This results in a crystal structure that is essentially equivalent to one with fixed positions in all dimensions. Such crystals are not usually isometric, as the inter-atomic distance in the odd dimension is generally different from that of the other two. Where the distances in all dimensions do happen to coincide, we will find on further investigation that the space symmetry is not an indication of force symmetry.

If the negative displacement is very small, as in the lower Division IV elements, it is possible to have negative orientation in two dimensions if the positive displacement in the third dimension exceeds the sum of these two negative components, so that the net result is still positive. Here the relative positions of the atoms are fixed in one dimension only, but the average positions in the other two dimensions are constant by reason of the net positive displacement of the atoms. An aggregate of such atoms retains most of the external characteristics of a crystal, but when the internal structure is examined the atoms appear to be distributed at random, rather than in the orderly arrangement of the crystal. In reality there is just as much order as in the crystalline structure, but part of the order is in time rather than in space. This form of matter can be identified as the glassy, or vitreous, form, to distinguish it from the crystalline form. The term "state" is frequently used in this connection instead of "form", but the physical state of matter has an altogether different meaning based on other criteria, and it seems advisable to confine the use of this term to the one application. Both glasses and crystals are in the solid state.

In beginning a consideration of the structures of the individual electronegative elements, we will start with Division III. The general situation in this division is similar to that in Division II, but the negativity of the normal electric displacement introduces a new factor into the determination of the orientation pattern, as the most probable orientation of an electronegative element may not be capable of existing in all three dimensions. As stated earlier, where two or more different orientations are possible in a given set of circumstances the relative probability is the deciding factor. Low displacements are more probable than high displacements. Simple orientations are more probable than combinations. Positive electric orientation is more probable than negative.

In Division I all of these factors operate in the same direction. The positive orientation is simple, and it also has the lowest displacement value. All structures in this division are therefore formed on the basis of the positive orientation. In Division II the margin of probability is narrow. Here the positive displacement x is greater than the inverse displacement 8-x, and this operates against the greater inherent probability of a simple positive structure. As a result, both the positive and reverse types of structure are found in this division, together with a combination of the two.

In Division III the negative orientation has a status somewhat similar to that of the positive orientation in Division II. As a simple orientation, it has a relatively high probability. But it is limited to one dimension. The regular Division III structures of Groups 3A and 3B are therefore anisotropic, with the reverse orientation in the other two dimensions. A combination of these two types of orientation is also possible, and in copper and silver, the first Division III elements of their respective groups, the crystals formed on the basis of this combination orientation have cubic symmetry. As in Division II, the elements of Division III in Groups 4A and 4B crystallize entirely on the basis of the reverse orientation. Table 4 lists what may be considered as the regular inter-atomic distances of the elements of Division III.

Although the probability of the negative orientation is greater in Division IV than in Division III, because of the smaller displacement values, this type of structure seldom appears in the crystals of the lower division. The reason is that where this orientation exists in the elements of the lower displacements, it exists in two dimensions, and this produces a glassy or vitreous aggregate rather than a crystal. The reverse orientation is not subject to any restrictive factor of this nature, but it is less probable at the lower displacements, and except in Group 4A, where it continues to predominate, this orientation appears less frequently as the displacement decreases. Where it does exist it is increasingly likely to combine with some other type of orientation. As a result of these limitations that are applicable to the inherently more probable types of orientation, many of the Division IV structures are formed on the basis of the secondary positive orientation, a combination of two 8-x displacements.

Table 4: Distances - Division III

Group  Atomic  Element     Specific Rotation     Distance
       Number              Magnetic   Electric   Calc.  Obs.
3A     29      Copper      4-3        8-10       2.53   2.55
       30      Zinc        4-4        7          2.90   2.91
                           4-4        10         2.66   2.66
       31      Gallium     4-3        6          2.79   2.80
                           4-3        10         2.46   2.44
3B     47      Silver      4-5        8-10       2.87   2.88
       48      Cadmium     5-4        7          3.20   3.26*
                           5-4        10         2.94   2.97
       49      Indium      5-4        6          3.33   3.37
                           5-4        6-10       3.21   3.24
4A     72      Hafnium     4-4½       5          3.26   3.32
       73      Tantalum    4½-4½      10         2.87   2.86
       74      Tungsten    4-4½       10         2.73   2.74
       75      Rhenium     4-4½       10         2.73   2.77*
       76      Osmium      4-4½       10         2.73   2.73
       77      Iridium     4-4½       10         2.73   2.71
       78      Platinum    4-4½       10         2.73   2.77
       79      Gold        4½-4½      10         2.87   2.88
       80      Mercury     4-4½       5-10       2.98   3.00
       81      Thallium    4½-4½      5          3.43   3.47
                           4½-4½      5          3.43   3.45

The secondary positive orientation is not possible in the electropositive divisions, as 8-x is negative in these divisions, and like the negative orientation itself, an 8-x negative combination would be confined to a subordinate role in one or two dimensions of an asymmetric structure. Such a crystal structure cannot compete with the high probability of
the symmetrical electropositive crystals, and therefore does not exist. In the electronegative divisions, however, the 8-x displacement is positive, and there are no limitations on it, aside from those arising from the high displacement values. The effective displacement of this secondary positive orientation is even greater than might be expected from the magnitude of the quantity 8-x, as the change of zero points for the two oppositely directed motions is also oppositely directed, and the new zero points are 16 displacement units apart. The resultant relative displacement is 16-2x, and the corresponding specific rotation is 18-2x. In Division IV the numerical values of the latter expression range from 10 to 16, and because of the low probability of such high rotations, the secondary positive orientation is limited to one or one and one-half dimensions in spite of its positive character. In Division III the 8-x displacements are lower, but in this case they are too low. A two-unit separation of the zero points (16 displacement units) cannot be maintained unless the effective displacement is at least 8 (one full three-dimensional unit). The secondary positive orientation is therefore confined to Division IV. A special type of structure is possible only for those electronegative elements which have a rotational displacement of four units in the electric dimension. These elements are on the borderline between Divisions III and IV, where the secondary positive and reverse orientations are about equally probable. Under similar conditions other elements crystallize in hexagonal or tetragonal structures, utilizing the different orientations in the different dimensions. For these displacement 4 elements, however, the two orientations produce the same specific rotation: 10. The inter-atomic distance in these crystals is therefore the same in all dimensions, and the crystals are isometric, even though the rotational forces in the different dimensions are not of the same character. The molecular arrangement in this crystal pattern, the diamond structure, shows the true nature of the rotational forces. Outwardly this crystal cannot be distinguished from the isotropic cubic crystals, but the analogous body-centered cubic structure has an atom at each corner of the cube as well as one in the center, whereas the diamond structure leaves alternate corners open to accommodate the abnormal projection of forces in the secondary positive dimension. In those of the lower elements of Division IV that are beyond the range of the inverse type of orientation, there is no available alternative for combination with the secondary positive orientation. The crystals of these elements therefore have no effective electric rotation in the remaining dimensions, and the relative specific rotation in these dimensions is unity, as in all dimensions of the inert gas elements. The most common distances in the aggregates of the Division IV elements are shown in Table 5. Up to this point, no consideration has been given to the elements of atomic number below 10, as the rotational forces of these elements are subject to certain special influences which make it desirable to discuss them separately. One cause of deviation from the normal behavior is the small size of the rotational groups. In the larger groups the four divisions are distinct, and, except for some overlapping, each has its own characteristic force combinations, as we have seen in the preceding paragraphs. 
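The displacement arithmetic behind the secondary positive orientation described above lends itself to a short illustration. The following Python sketch (an addition for clarity, not part of the original text) tabulates the relative displacement 16-2x and the specific rotation 18-2x for the Division IV electric displacements x = 1 through 4:

    # Secondary positive orientation: the two oppositely directed 8-x motions
    # place the new zero points 16 displacement units apart, so the resultant
    # relative displacement is 16-2x and the specific rotation is 18-2x.
    for x in range(1, 5):                      # Division IV displacements 1..4
        print(x, 16 - 2 * x, 18 - 2 * x)
    # Prints specific rotations 16, 14, 12, 10 -- the range 10 to 16 cited above.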
In an 8-element group, however, the second series of four elements, which would normally constitute Division III, is actually in the Division IV position. As a result, these four elements have, to a certain extent, the properties of both divisions. Similarly, the Division I elements of these groups may, in some
cases, act as if they were members of Division III. A second influence that affects the forces and the crystal structures of the lower group elements is the inactivity of the rotational forces in certain dimensions that was mentioned earlier. A specific rotation of two units produces no effect in the positive direction. The reason for this is revealed by equation 1-1. By applying this equation we find that the effective rotational force (ln t) for t = 2 is 0.693, which is less than the opposing space-time force 1.00. The net effective force of specific rotation 2 is therefore below the minimum value for action in the positive direction. In order to produce an active force the specific rotation must be high enough to make ln t greater than unity. This is accomplished at rotation 3.
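The threshold involved here can be verified directly. A minimal sketch, assuming only what the text states: that equation 1-1 identifies the effective rotational force with the natural logarithm of the specific rotation.

    import math

    # Per equation 1-1, the effective rotational force is ln t; it must exceed
    # the opposing space-time force of 1.00 to act in the positive direction.
    for t in (2, 3):
        force = math.log(t)
        print(t, round(force, 3), "active" if force > 1.0 else "inactive")
    # ln 2 = 0.693 < 1.00 (no positive effect); ln 3 = 1.099 > 1.00 (active).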

Table 5: Distances - Division IV

Group  Atomic  Element     Specific Rotation     Distance
       Number              Magnetic   Electric   Calc.  Obs.
2B     14      Silicon     3-3        5-10       2.31   2.35
       15      Phosphorus  3-4        10         2.19   2.2
                           3-4        1          3.46   3.48*
       16      Sulfur      3-3        16         2.11   2.07
                           3-3        1          3.21   3.27*
       17      Chlorine    3-3        1-16       1.92   1.82
3A     32      Germanium   4-3        12         2.48   2.52
                           4-3        10         2.46   2.43
       33      Arsenic     4-3        14         2.37   2.44*
                           4-3        10         2.46   2.51
       34      Selenium    4-3        16         2.32   2.32
                           3-4        1          3.46   3.46
       35      Bromine     4½-4       10         2.25   2.27
                           3-4        1          3.46   3.30
3B     50      Tin         5-4        12         2.80   2.80
                           5-4        5-10       3.22   3.17
                           5-4        10         2.94   3.02
       51      Antimony    5-4½       14         2.83   2.87
                           5-4        4-10       3.34   3.36*
       52      Tellurium   5-4        16         2.82   2.86
                           5-4½       1-10       3.71   3.74
       53      Iodine      5-4        1-16       2.68   2.70
                           5-4        1          3.54   3.54
                                                 4.46   4.41*
4A     82      Lead        4½-4½      5          3.43   3.49
                           4½-4½      5          3.43   3.47*
       83      Bismuth     4½-4½      5-10       3.14   3.10
       84      Polonium    4½-4½      5          3.43   3.40*

The specific magnetic rotation of the 1B group, which includes only the two elements hydrogen and helium, and of the 2A group of eight elements beginning with lithium, combines the values 3 and 2. Where the value 2 applies to the subordinate rotation (3-2), one dimension is inactive; where it applies to the principal rotation (2-3), two dimensions are inactive. This reduces the force exerted by each atom to 2/3 of the normal amount in the case of one inactive dimension, and to 1/3 for two inactive dimensions. The inter-atomic distance is proportional to the square root of the product of the two forces involved. Thus the reduction in distance is also 1/3 per inactive dimension. Since the electric rotation is not a basic motion, but a reverse rotation of the magnetic rotational system, the limitations to which the basic rotation is subject are not applicable. The electric rotation merely modifies the magnetic rotation, and the low value of the force integral for specific rotation 2 makes itself apparent by an inter-atomic distance which is greater than that which would prevail if there were no electric displacement at all (unit specific rotation). Theoretical values of the inter-atomic distances of the lower group elements are compared with measured values in Table 6.
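The distance reduction follows directly from the square-root relation just stated. A small sketch of the arithmetic, for illustration only:

    import math

    # Each inactive dimension cuts an atom's force to (3 - n)/3 of normal.
    # For two like atoms the distance goes as the square root of the product
    # of their forces, so the distance factor equals the force factor itself.
    for inactive in (1, 2):
        force = (3 - inactive) / 3
        distance_factor = math.sqrt(force * force)
        print(inactive, round(force, 3), round(distance_factor, 3))
    # One inactive dimension: distance factor 2/3; two: 1/3 -- a reduction
    # of 1/3 per inactive dimension, as stated above.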

Table 6: Distances - Lower Group Elements

Group  Atomic  Element        Specific Rotation     Distance
       Number                 Magnetic   Electric   Calc.   Obs.
1B     1       Hydrogen       3(1)       10         0.70    0.74*
       2       Helium         3(1)       1          1.07    1.09
2A     3       Lithium        2½-2½      2          3.05    3.03
       4       Beryllium      3(2)       2½         2.282   2.28
       5       Boron          3(2)       5          1.68    1.74*
                              3-3        10         2.11    2.03*
       6       C (diamond)    3(2)       5-10       1.54    1.54
               C (graphite)   3(2)       10         1.41    1.42
                              3-3        1          3.21    3.40
       7       Nitrogen       3(1½)      10         1.06    1.06
                              3-3        1          3.21    3.44*
       8       Oxygen         3(1½)      10         1.06    1.15*
                              3-3        1          3.21    3.20*
       9       Fluorine       3(2)       10         1.41    1.44*

The figures in parentheses in column 4 of this table indicate the effective number of dimensions. Thus the notation 3(1) shown for hydrogen means that this element has a specific magnetic rotation of 3, effective in only one dimension. Except where the crystals are isometric, there is still much uncertainty in the distance measurements on these lower group elements, and many other values have been reported in addition to those included in the table. This situation will be discussed at length in Chapter 3, where we will have the benefit of measurements of the distances between like atoms that are constituents of chemical compounds.

As indicated in the introductory paragraphs of this chapter, we are not yet in a position where we can determine specifically just what the inter-atomic distance will be for any given element under a given set of conditions. The theoretical considerations that have been discussed actually do lead to specific values in many cases, but in other instances there is an uncertainty as to which of two or more theoretically possible rotational arrangements corresponds to the observed crystal structure. Continuing progress is being made in both the experimental and the theoretical fields, and it can be expected that these uncertainties will gradually diminish toward the irreducible minimum that was mentioned earlier. In the course of this process there will necessarily be some changes in the identifications of the observed inter-atomic distances with the theoretically possible structures. A comparison of Tables 1 to 6 with the corresponding tabulations of the first edition should therefore be of interest as an indication of the nature and magnitude of the changes that have taken place in our view of this interatomic distance situation in the last twenty years, and by extension, an indication of the amount of change that can be expected in the future. Such a comparison shows that the modifications of the original conclusions that now appear to be required, in the light of the additional information that has been made available, are confined almost entirely to those which have resulted from a better theoretical understanding of the behavior of the specific magnetic rotation above an effective value of 4. Few changes are required in either the magnetic or electric values in those rotational combinations where the specific magnetic rotation is 4-4 or less. One of the puzzling features of the rotational situation as it appeared at the time of the original publication was the apparent retrograde progression of the specific magnetic rotation in Groups 4A and 4B. It was recognized at that time that both the 4½ and 5 values of the specific rotation correspond to the same displacement, 4, the difference being that in the case of the 4½ value the rotation extends to two units of vibration, and the last increment of specific rotation in this case is only half size. The next half unit increment, if such an increment were possible, would bring the 4½ rotation back to the 5 value. It would therefore appear that the sequence of specific rotations beyond 4½-4 should be 4½-4½, 5-4½, 5-5, and so on. But the tendency is in the opposite direction. Instead of moving toward higher values as the atomic number increases, there is actually a decreasing trend. This was already evident at the time of publication of the first edition, as the low inter-atomic distances of the series of elements from Tungsten to Platinum could not be accounted for unless the specific magnetic rotation dropped back to 4-4½ from the higher levels of the preceding elements of the 4A group. This decreasing trend has become even more prominent as distances have become available for additional elements of Group 4B, as some of these values indicate specific magnetic rotations of 4-4, or possibly even 4-3½. As it happens, the continuation of the trend toward lower values in the more recent data has had the effect of clarifying the situation. It is now evident that the 5-5 specific rotation is not reached within the accessible portion of Groups 4A and 4B. 
(Considerations that will be discussed later show that the specific rotation of 5-5 would be unstable.) The lower values in the 4A and 4B groups do not result from a decrease in the magnetic displacement, but from a shift of the existing displacement units from vibration one to vibration two, a process which reduces the specific rotation of the units by one half. On a vibration one basis,
rotational displacements 4-3 correspond to specific rotations 5-4. Conversion of successive units of displacement to vibration two, without change in the number of displacement units, results in a series of specific rotations, 5-4, 4½-4, 4-4½, 4-4, and so on. A similar series with one additional displacement unit goes through the values 5-4½, 4½-5, 4½-4½, 4½-4, and then follows the same route as the series with the lower displacement. The modifications that have been made in the theoretical rotational values applicable to the elements of these two highest rotational groups since the publication of the first edition are the result of a review of the situation in the light of this new understanding of the trend of the specific rotation. The general pattern in group 4A is now seen to be that of the series from 5-4½ to 4-4½, with a return to 4½-4½ in the lower electronegative elements. So far as can be determined at this time, Group 4B follows the same pattern one step farther advanced; that is, it begins with 4½-5 rather than 5-4½. The difference in the inter-atomic distance corresponding to one of the steps in this conversion process is relatively small, and in view of the substantial variation in the experimental values it has not appeared advisable to take into account the possibility of combinations such as 4½-5 specific rotation of one atom of a pair and 4½-4½ in the other. It seems clear that such combinations do exist in some of the lower group elements, Sodium, for example, and they probably play some part in the higher groups. Most of the reported distances for Holmium and Erbium, for instance, agree more closely with a combination of 5-4½ and 4½-5 than with either individually. However, all of these values are theoretically possible, and the only question at issue in this and many other similar cases is which theoretical value corresponds to the observed distance. Definitive answers to identification questions of this kind will have to wait until the theoretical probabilities are specifically evaluated, or the experimental uncertainties are resolved. Many questions concerning alternate crystal structures will also have to wait for more information from theory or experiment, particularly where crystal forms that exist only at high temperatures or pressures are involved. There is, however, a large body of information already available in this area, and it can be tied into the theoretical picture as soon as someone has the time and the inclination to undertake the task.

Chapter 3

Distances in Compounds

Thus far in the discussion of the inter-atomic distances we have been dealing with aggregates composed of like atoms. The same general principles apply to aggregates of unlike atoms, but the existence of differences between the components of such systems introduces some new factors that we will now want to examine. The matters to be considered in this chapter have no relevance to direct combinations of electropositive elements (aggregates of which are mixtures or alloys, rather than chemical compounds). As noted in Chapter 18, Vol. I, the proportions in which such elements can
combine may be determined, or limited, by geometrical considerations, but aside from such effects, unlike atoms of this kind can combine on the same basis as like atoms. Here the forces are identical in character and concurrent, the type of combination that we have called the positive orientation. The resultant specific electric rotation, according to the principles previously set forth, is (t1t2)½, the geometric mean of the two constituents. If the two elements have different magnetic rotations, the resultant is also the geometric mean of the individual rotations, as the magnetic rotations always have positive displacements, and these combine in the same manner as the positive electric displacements. The effective electric and magnetic specific rotations thus derived can then be entered in the applicable force and distance equations from Chapter 1. Combinations of unlike positive atoms may also take place on the basis of the reverse orientation, the alternate type of structure that is available to the elemental aggregates. Where the electric rotations of the components differ, the resultant specific rotation of the two-atom combination will not be the required neutral 5 or 10, but a second pair of atoms inversely oriented to the first results in a four-atom group that has the necessary rotational balance. As brought out in Volume I, the simplest type of combination in chemical compounds is based on the normal orientation, in which Division I electropositive elements are joined with Division IV electronegative elements on the basis of numerically equal displacements. The resultant effective specific magnetic rotation can be calculated in the same manner as in the all-positive structures, but, as we saw in our consideration of the inter-atomic distances of the elements, where an equilibrium is established between positive and negative electric rotations, the resultant is the sum of the two individual values, rather than the mean.
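The two combining rules just stated are easy to confuse, so a brief sketch may help. The function names and the sample values here are illustrative assumptions, not taken from the text:

    import math

    def concurrent(t1, t2):
        """Positive orientation: like-directed rotations combine as the
        geometric mean, (t1*t2)**0.5."""
        return math.sqrt(t1 * t2)

    def normal(t1, t2):
        """Normal orientation: a positive-negative equilibrium combines as
        the sum of the two individual values."""
        return t1 + t2

    print(concurrent(4, 9))   # 6.0 -- geometric mean of two positive rotations
    print(normal(2, 2))       # 4   -- compare the resultant electric value 4
                              # shown for the valence-one compounds in Table 7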

Table 7: Distances - NaCl Type Compounds

Compound   Specific Rotation         Distance
           Magnetic        Elec.   Calc.  Obs.
LiH        3(2)   3(2)     3       2.04   2.04
LiF        3(2)   3(2)     3       2.04   2.01
LiCl       3(2)   3½-3½    4       2.57   2.57
LiBr       3(2)   4-4      4       2.77   2.75
LiI        3(2)   5-4      4       2.96   3.00
NaF        3-2½   3(2)     4       2.26   2.31
NaCl       3-2½   3½-3½    4       2.77   2.81
NaBr       3-2½   4-4      4       2.94   2.98
NaI        3-3    5-4      4       3.21   3.23
MgO        3-3    3(2)             2.15   2.10
MgS        3-3    3½-3½            2.60   2.59
MgSe       3-3    4-4              2.76   2.72
KF         4-3    3(2)     4       2.63   2.67
KCl        4-3    3½-3½    4       3.11   3.14
KBr        4-3    4-4      4       3.30   3.29
KI         4-3    5-4      4       3.47   3.52
CaO        4-3    3(2)             2.38   2.40
CaS        4-3    3½-3½            2.81   2.84
CaSe       4-3    4-4              2.98   2.95
CaTe       4-3    5-4              3.13   3.17
ScN        4-3    3(2)     7       2.22   2.22
TiC        4-3    3(2)             2.12   2.16
RbF        4-4    3(2)     4       2.77   2.82
RbCl       4-4    3½-3½    4       3.24   3.27
RbBr       4-4    4-4      4       3.43   3.43
RbI        4-4    5-4      4       3.61   3.66
SrO        4-4    3(2)             2.51   2.57
SrS        4-4    3½-3½            2.92   2.93
SrSe       4-4    4-4              3.10   3.11
SrTe       4-4    5-4              3.26   3.24
CsF        5-4    3(2)     4       2.96   3.00
CsCl       5-4    4-3      4       3.47   3.51
BaO        5-4½   3(2)             2.72   2.76
BaS        5-4½   4-3              3.17   3.17
BaSe       5-4½   4-4              3.30   3.31
BaTe       5-4½   5-4              3.47   3.49
LaN        5-4    3(2)     6       2.61   2.63
LaP        5-4    4-3              2.99   3.01
LaAs       5-4    4-4      7       3.04   3.06
LaSb       5-4    5-4      7       3.20   3.24
LaBi       5-4    5-4½     7       3.24   3.28

When this arrangement unites one electropositive atom with each electronegative atom the resulting structure is usually a simple cube with the atoms of each element occupying alternate corners of the cube. This is called the Sodium Chloride structure, after the most familiar member of the family of compounds crystallizing in this form. Table 7 gives the inter-atomic distances of a number of common NaCl type crystals. From this tabulation it can be seen that the special rotational characteristics which certain of the elements possess in the elemental aggregates carry over into their compounds. The second element in each group shows the same preference for rotation on the basis of vibration two that we encountered in examining the structures of the elements. Here, again, this preference extends to some of the following elements, and in such series of compounds as CaO, ScN, TiC, one component keeps the vibration two status throughout the series, and the resulting effective rotations are 5½, 7, 8½, rather than 6, 8, 10. The elements of the lower groups have inactive force dimensions in the compounds just as in the elemental structures previously examined. If the active dimensions are not the same in both components, the full rotational force of the more active component is effective in its excess dimensions, the effective rotation in an inactive dimension being unity. For example, the value of ln t for magnetic rotation 3 is 1.099 in three dimensions, or 0.7324 in two dimensions. If this two-dimensional rotation is combined with a three-dimensional magnetic rotation x, the resultant value of ln t is (0.7324x)½, the geometric mean of the individual values, in the two shared dimensions, and x in the third. The average value for all three dimensions is (0.7324x²)¹/³. This dimensional inactivity in the lower groups plays only a minor role in the structures of the elements, as can be seen from the fact that it did not need any attention until almost the end of Table 8.
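The dimensional averaging in this paragraph can be made concrete in a few lines of Python. This is a sketch only; the value chosen for the partner rotation x is an arbitrary assumption for illustration:

    import math

    ln3_3d = math.log(3)          # 1.099: ln t for magnetic rotation 3, 3 dims
    ln3_2d = ln3_3d * 2 / 3       # 0.7324: the same rotation active in 2 dims
    print(round(ln3_2d, 4))

    x = math.log(4)               # hypothetical partner value, 3 dims active
    two_dim = math.sqrt(ln3_2d * x)        # geometric mean in the shared dims
    average = (ln3_2d * x**2) ** (1 / 3)   # (0.7324 x^2)^(1/3) over all dims
    print(round(two_dim, 4), round(average, 4))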

Table 8: Distances - CaF2 Type Compounds

Compound  Specific Rotation         Distance
          Magnetic        Elec.   Calc.  Obs.
Na2O      3-2½   3(2)     3½      2.39   2.40
Na2S      3-2½   4-3      4       2.83   2.83
Na2Se     3-2½   4-4      4       2.94   2.95
Na2Te     3-2½   5-4½     4       3.13   3.17
Mg2Si     3-3    4-3      5       2.73   2.77
Mg2Ge     3-3    4-4      5½      2.76   2.76
Mg2Sn     3-3    5-4      5½      2.90   2.93
Mg2Pb     3-3    5-4½     5½      2.94   2.96
K2O       4-3    3(2)     3½      2.79   2.79
K2S       4-3    4-3      4       3.17   3.20
K2Se      4-3    4-4      4       3.30   3.33
K2Te      4-3    5-4½     4       3.51   3.53
CaF2      4-3    3(2)     5½      2.38   2.36
Rb2O      4-4    3(2)     3½      2.94   2.92
Rb2S      4-4    4-3      4       3.30   3.31
SrF2      4-4    3(2)     5½      2.50   2.50
SrCl2     4-4    4-3      5½      2.98   3.03
BaF2      5-4    3(2)     5½      2.68   2.68
BaCl2     5-4½   4-3      5½      3.17   3.18*

The compounds of lithium with valence one negative elements follow the regular pattern, and were included in Table 7, but the compounds with valence two elements are irregular, and they have therefore been omitted from Table 8. As we will see in Chapter 6, the irregularity is due to the fact that the two lithium atoms in a molecule of the CaF2 type act as a radical rather than as independent constituents of the molecule. These two normal orientation tables, 7 and 8, provide an impressive confirmation of the validity of the theoretical findings. One of the problems in dealing with the inter-atomic distances of the elements is that because of the relatively small total number of elements, the number to which any particular magnetic rotational combination is applicable is quite small, and consequently it is rather difficult to establish a prima facie case for the authenticity of the rotational values. But this is not true of the normal type compounds, as they are more numerous and less variable. There are two elements in these tables, sulfur and chlorine, that have different magnetic rotations under different conditions. These elements have 4-3 rotation in the CaF2 type crystals, and in the NaCl type combinations with elements of Group 4A. In the other compounds of the NaCl type they take the 3½-3½ rotations. There are also two more elements, each of which, according to the information now available, deviates from its normal rotations in one of the listed compounds. Otherwise, all of the elements entering into the 60 compounds in the two tables have the same specific magnetic rotations in every compound in which they participate. Furthermore, when the inherent differences between the elemental and compound aggregates are taken into account, there is also agreement between these rotations in the compounds and the specific rotations of the same elements in the elemental aggregates. The most common difference of this kind is a result of the fact that the Division IV element in a compound has a purely negative role. For this reason it takes the magnetic rotation of the next higher group. In the elemental aggregates half of the atoms are reoriented to act in a positive capacity. Consequently, they tend to retain the normal rotation of the group to which they actually belong. For example, the Division IV elements of Group 3A, germanium, arsenic, selenium, and bromine, have the normal specific rotation of their group, 4-3, in the crystals of the elements, but in the compounds they take the 4-4 specific rotation of Group 3B, acting as negative members of that group. Another difference between the two classes of structures is that those elements of the higher
groups that have the option of extending their rotation to a second vibrational unit are less likely to do so if they are combining with an element which is rotating entirely on the basis of vibration one. Aside from these deviations due to known causes, the values of the specific magnetic rotation determined for the elements in Chapter 2 are also generally applicable to the compounds. This equivalence does not apply to the specific electric rotations, as they are determined by the way in which the rotations of the constituents of each aggregate are oriented relative to each other, a relation that is different in the two classes of structures. This applicability of the same equations and, in general, the same numerical values, to the calculation of distances in both elements and compounds contrasts sharply with the conventional theory that regards the inter-atomic distance as being determined by the "sizes" of the atoms. The sodium atom, or "ion", in the NaCl crystal, for example, is asserted to have a radius only about 60 percent as large as the radius of the atom in the elemental aggregate. If this atom takes part in a compound which cannot be included in the "ionic" class, current theory gives it still a different "size": what is called a "covalent" radius. The need for assuming any extraordinary changeability in the size of what, so far as we can tell, continues to be the same object, is now eliminated by the finding that the variations in the inter-atomic distance have nothing to do with the sizes of the atoms, but merely indicate differences in the location of the equilibrium between the inward and the outward forces to which the atoms are subject. Another type of orientation that forms a relatively simple binary compound is the rotational combination that we found in the diamond structure. As in the elements, this is an equilibrium between an atom of a Division IV element and one of Division III, the requirement being that t1 + t2 = 8. Obviously, the only elements that can meet this requirement by themselves are those whose negative rotational displacement (valence) is 4, but any Division IV element can establish an equilibrium of this kind with an appropriate Division III element. Closely associated with this cubic diamond-like Zinc Sulfide class of crystals is a hexagonal structure based on the same orientation, and containing the same equal proportions of the two constituents. Since these controlling factors are identical in the two forms, the crystals of the hexagonal Zinc Oxide class have the same inter-atomic distances as the corresponding Zinc Sulfide structures. In such instances, where the inter-atomic forces are the same, there is little or no probability advantage of one type of crystal over the other, and either may be formed under appropriate conditions. Table 9 lists the inter-atomic distances for some common crystals of these two classes.

Table 9: Distances - Diamond Type Compounds

Compound  Specific Rotation        Distance
          Magnetic       Elec.   Calc.  Obs.
ZnS (Cubic) Class
AlP       3-4   3½-3½    10      2.32   2.35
AlAs      3-4   4-4      10      2.62   2.43
AlSb      3-4   5-4½     10      2.62   2.66
SiC       3-4   3(2)     10      1.94   1.93*
CuCl      3-4   3½-3½    10      2.32   2.35
CuBr      3-4   4-4      10      2.46   2.46
CuI       3-4   5-4      10      2.59   2.62
ZnS       3-4   3½-3½    10      2.32   2.36
ZnSe      3-4   4-4      10      2.46   2.45
ZnTe      3-4   5-4½     10      2.62   2.63*
GaP       3-4   3½-3½    10      2.32   2.36
GaAs      3-4   4-4      10      2.46   2.43
GaSb      3-4   5-4½     10      2.62   2.65
AgI       4-4   5-4      10      2.80   2.81
CdS       4-4   3½-3½    10      2.51   2.52
CdTe      4-4   5-4      10      2.80   2.78
InP       4-4   3½-3½    10      2.51   2.54
InAs      4-4   4-4      10      2.66   2.62
InSb      4-4   5-4      10      2.80   2.80
ZnO (Hexagonal) Class
AlN       3-4   3(2)     10      1.94   1.90
ZnO       3-4   3(2)     10      1.94   1.95
ZnS       3-4   3½-3½    10      2.32   2.33
GaN       3-4   3(2)     10      1.94   1.94
AgI       4-4   5-4      10      2.80   2.81
CdS       4-4   3½-3½    10      2.51   2.51
CdSe      4-4   4-4      10      2.66   2.63
InN       4-4   3(2)     10      2.15   2.13

The comments that were made about the consistency of the specific rotation values in Tables 7 and 8 are applicable to the values in Table 9 as well. Most of the elements participating in the compounds of this table have the same specific rotations as in the previous tabulations, and where there are exceptions, the deviations are of a regular and predictable nature. A feature of Table 9 is the appearance of one of the normally electropositive elements of Group 2B, aluminum, in the role of a Division III element. Beryllium and magnesium also form ZnS type compounds, but like the lithium compounds previously mentioned they are irregular, probably for the same reason, and have not been included in the tabulation. The Division III behavior of these normally Division I elements is a result of the small size of the lower groups, which puts their Division I elements into the same positions with respect to the electronegative zero point as the Division III elements of the larger groups. This relationship is indicated in the following tabulation, where the asterisks identify those elements that are normally in Division I.

Division III        Division IV
Be*  B*             C   N   O   F
Mg*  Al*            Si  P   S   Cl
Zn   Ga             Ge  As  Se  Br

None of the orientations thus far considered is applicable to compounds of the Division II elements. The normal orientation does not exist above a specific rotation of 5, as the higher value would put the relative rotation above the limiting value 10. The Zinc Oxide and Zinc Sulfide types of combination are electronegative structures, and the reverse orientation of the Division II elemental structures is not available for compounds with negative elements. The Division II elements therefore form their compounds on the basis of the magnetic orientation. This type of structure is theoretically available for any element, but its use is limited by probability considerations. It is utilized in many of the compounds of Divisions III and IV, especially in the higher rotational groups, but rarely appears in Division I combinations because of the very high probability of the normal orientation in this division. Since the magnetic rotation is distributed over all three dimensions, its effective component is not altered by a change in position, and has the same value in the magnetic orientations as in the corresponding compounds based on the electric orientations. In order to establish the magnetic type of equilibrium, however, the axis of the negative electric rotation has to be parallel to that of one of the magnetic rotations, and it is therefore perpendicular to the axis of the positive electric rotation. Consequently, the latter takes no part in the normal inter-atomic force equilibrium, and it constitutes an additional orienting influence, the effects of which were discussed in Volume I. In these compounds of the magnetic type the displacement of the negative component (-x) is balanced by a numerically equal positive displacement (x). Thus the magnetic orientation is somewhat similar to the normal orientation. However, the magnetic rotation is opposite in vectorial direction to the electric rotation, and the resultant relative rotation effective in the dimension of combination is therefore one of the neutral values 10, 5, or a combination of these two, rather than the 2x of the normal orientation. Compounds based on the magnetic orientation occur in a variety of crystal forms, the nature of which depends on the degree of force symmetry and the number of atoms of each kind in the equilibrium system. In some cases there is enough symmetry to make isometric structures of the NaCl, CaF2, and similar types possible. Other crystals are asymmetric. A common arrangement for the binary compounds is the Nickel Arsenide structure, a hexagonal crystal in which the positive atoms occupy the face positions and the negative atoms are in the central positions, spaced alternately 1/4 and 3/4 along the c axis. Table 10 shows the inter-atomic distances calculated for some NiAs and NaCl type crystals of binary magnetic orientation compounds of Group 3A. Almost all of the NiAs type compounds that have been examined in the course of this present work take the vibration one value of the specific electric rotation: 10. The magnetic orientation compounds with the NaCl structure are quite evenly divided between the 10 rotation and the combination 5-10 in the 3A group, but utilize the 5-10 rotation almost exclusively in the higher groups.
In order to show as wide a variety of the features of these magnetic type compounds as is possible in the limited amount of space that can be allotted to them, Table 10 has been restricted to Group 3A compounds, and the following Table 11 gives the data for a representative sample of the compounds of the rare earth elements (from Group 4A), together with a selection of compounds from Group 4B, in which the identical values of the inter-atomic distance in the combinations of the elements of this group with the
Division IV elements of Group 2A are emphasized.

Table 10: Distances - Binary Magnetic Orientation Compounds

Compound  Specific Rotation          Distance
          Magnetic         Elec.   Calc.  Obs.
NiAs (Hexagonal) Class - Group 3A
VS        4-3     3½-3½    10      2.42   2.42
VSe       4-3     4-4      10      2.56   2.55
CrS       4-3     3½-3½    10      2.42   2.44
CrSe      4-3     4-4      10      2.56   2.54
CrSb      4-3     5-4½     10      2.73   2.74
CrTe      4-3     5-4½     10      2.73   2.77
MnAs      4-3     4-4      10      2.56   2.58
MnSb      4-3     5-4½     10      2.73   2.78
FeS       4-3     3½-3½    10      2.42   2.45
FeSe      4-3     4-4      10      2.56   2.55
FeSb      4-3     5-4      10      2.69   2.67
FeTe      3-4     5-4      10      2.59   2.61
CoS       3-4     3½-3½    10      2.32   2.33
CoSe      3-4     4-4      10      2.46   2.46
CoSb      3-4     5-4      10      2.59   2.58
CoTe      3-4     5-4      10      2.59   2.62
NiS       3½-3½   3½-3½    10      2.37   2.38
NiAs      3½-3½   4-3      10      2.42   2.43
NiTe      3½-3½   5-4      10      2.64   2.64
NaCl (Cubic) Class - Group 3A
VN        4-3     3(2)     10      2.04   2.06
VO        4-3     3(2)     10      2.04   2.05
CrN       4-3     3(2)     10      2.04   2.07
MnO       3½-3½   3(2)     5-10    2.18   2.22
MnS       3½-3½   3½-3½    5-10    2.59   2.61
MnSe      3½-3½   4-4      5-10    2.75   2.72
FeO       3-4     3(2)     5-10    2.12   2.16
CoO       3-4     3(2)     5-10    2.12   2.12

Thus far the calculation of equilibrium distances has been carried out by crystal types as a matter of convenience in identifying the effect of various atomic characteristics on the crystal form and dimensions. It is apparent from the points brought out in the discussion, however, that identification of the crystal type is not always essential to the determination of the inter-atomic distance. For example, let us consider the series of compounds NaBr, Na2Se, and Na3As. From the relations that have been established in the preceding pages we may conclude that these Division I compounds are formed on the basis of the normal orientation. We therefore apply the known value of the relative specific electric rotation of a normal orientation Sodium compound, 4, and the known values of the normal specific magnetic rotations of Sodium and the Group 3B elements, 3-3½ and 4-4 respectively, to
equation 1-10, from which we ascertain that the most probable inter-atomic distance in all three compounds is 2.95, irrespective of the crystal structure. (Measured values are 2.97, 2.95, and 2.94 respectively.)

Table 11: Distances - Binary Magnetic Orientation Compounds

Compound  Specific Rotation          Distance
          Magnetic         Elec.   Calc.  Obs.
CeN       5-4     3(2)     5-10    2.52   2.50
CeP       5-4     4-3      5-10    2.94   2.95
CeS       5-4     3½-3½    5-10    2.89   2.89*
CeAs      5-4     4-4      5-10    3.06   3.03
CeSb      5-4     5-4      5-10    3.22   3.20
CeBi      5-4     5-4      5-10    3.22   3.24
PrN       5-4     3(2)     5-10    2.52   2.58
PrP       5-4     4-3      5-10    2.94   2.93
PrAs      4½-4    4-4      5-10    2.98   3.00
PrSb      4½-4    5-4      5-10    3.14   3.17
NdN       5-4     3(2)     5-10    2.52   2.57
NdP       5-4     4-3      5-10    2.94   2.91
NdAs      4½-4    4-4      5-10    2.98   2.98
NdSb      4½-4    5-4      5-10    3.14   3.15
EuS       5-4     4-3      5-10    2.94   2.98
EuSe      5-4     4-4      5-10    3.06   3.08
EuTe      5-4     5-4½     5-10    3.26   3.28
GdN       5-4     3(2)     5-10    2.52   2.50*
YbSe      4½-4    4-4      5-10    2.98   2.93
YbTe      4½-4    5-4      5-10    3.14   3.17
ThS       4½-4½   3½-3½    5-10    2.85   2.84
ThP       4½-4½   4-3      5-10    2.91   2.91
UC        4½-4½   3(2)     5-10    2.47   2.50*
UN        4½-4½   3(2)     5-10    2.47   2.44*
UO        4½-4½   3(2)     5-10    2.47   2.46*
NpN       4½-4½   3(2)     5-10    2.47   2.45*
PuC       4½-4½   3(2)     5-10    2.47   2.46*
PuN       4½-4½   3(2)     5-10    2.47   2.45*
PuO       4½-4½   3(2)     5-10    2.47   2.48*
AmO       4½-4½   3(2)     5-10    2.47   2.48*

The possible inter-atomic distances in the more complex compounds can be calculated in a similar manner, without the necessity of analyzing the great variety of geometrical structures in which these compounds crystallize. The usefulness of this procedure in application to compounds in general is limited, at the present stage of the theoretical development, because we are not normally able to define the specific rotations from theoretical premises as definitely as in the foregoing illustration. It is of considerable value, however, in dealing with the lower electronegative elements, whose specific electric rotations are confined to the neutral values, and whose variability in the magnetic dimensions is only in the number of
inactive dimensions (that is, dimensions in which the specific rotation is 2). The elements involved are those of Groups 1B and 2A: hydrogen, carbon, nitrogen, oxygen, and fluorine, together with boron, one of the normally electropositive elements of Group 2A. The other two positive elements of this group, lithium and beryllium, are also two-dimensional under most conditions, but they take the positive orientation, and have much greater inter-atomic distances. Table 12 gives the theoretically possible inter-atomic distances of these lower group elements, with some examples of the measured values corresponding to the calculated distances.

Table 12: Distances - Lower Negative Elements

Specific Rotation            Distance
Magnetic         Elec.     n.u.   Å
3(1)   3(1)      10        .241   .70
3(1)   3(1½)     10        .317   .92
3(1½)  3(1½)     10        .363   1.06
3(1)   3(2)      10        .406   1.18
3(1½)  3(2)      10        .445   1.30
3(2)   3(2)      10        .483   1.41
3(2)   3(2)      5-10      .528   1.54

Calc.  Comb.  Example        Obs.
.70    H-H    H2             .74
.92    H-F    HF             .92
       H-C    Benzene        .94
1.06   H-N    Hydrazine      1.04
       H-C    Ethylene       1.06
       H-O    Formic acid    1.06
       C-N    NaCN           1.09
       N-N    N2             1.09
       C-O    COS            1.10
1.18   C-O    CO2            1.15
       C-N    Cyanogen       1.16
       H-B    B2H6           1.17
       N-N    CuN3           1.17
       N-O    N2O            1.19
       C-C    Acetylene      1.20
1.30   H-B    B2H6           1.27
       C-O    CaCO3          1.29
       B-F    BF3            1.30
       C-N    Oxamide        1.31
       C-F    CF3Cl          1.32
       C-C    Ethylene       1.34
1.41   C-C    Benzene        1.39
       N-O    HNO3           1.41
       C-C    Graphite       1.42
       C-N    dl-Alanine     1.42
       C-O    Methyl ether   1.42
       C-F    CH3F           1.42
1.54   C-C    Diamond        1.54
       C-C    Propane        1.54
       B-C    B(CH3)3        1.56
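The paired n.u. and Ångström columns above imply a single conversion factor, which can be recovered directly from the tabulated rows. A quick check, using only the data in Table 12:

    # Ratio of the Angstrom column to the n.u. column for each base distance.
    rows = [(0.241, 0.70), (0.317, 0.92), (0.363, 1.06), (0.406, 1.18),
            (0.445, 1.30), (0.483, 1.41), (0.528, 1.54)]
    for nu, ang in rows:
        print(f"{nu:.3f} n.u. = {ang:.2f} A  ->  {ang / nu:.3f} A per n.u.")
    # Every row gives close to 2.91 Angstrom per natural unit of distance
    # in the time region.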

The experimental results are not all in agreement with the theory. On the contrary, they are widely scattered. The measured C-C distances, for example, cover almost the entire range from 1.18, the minimum for this combination, to the maximum 1.54. However, the basic compounds of each class do agree with the theoretical values. The paraffin hydrocarbons, benzene, ethylene, and acetylene, have C-C distances approximating the theoretical 1.54,
1.41, 1.30, and 1.18 respectively. All C-H distances are close to the theoretical 0.92 and 1.06, and so on. It can reasonably be concluded, therefore, that the significant deviations from the theoretical values are due to special factors that apply to the less regular structures. A detailed investigation of the reasons for these deviations is beyond the scope of this present work. However, there are two rather obvious causes that are worth mentioning. One is that forces exerted by adjacent atoms may modify the normal result of a two-atom interaction. An interesting point in this connection is that the effect, where it occurs, is inverse; that is, it increases the atomic separation, rather than decreasing it as might be expected. The natural reference system always progresses at unit speed, irrespective of the positions of the structures to which it applies, and consequently the inward force due to this progression always remains the same. Any interaction with a third atom introduces an additional rotational (outward) force, and therefore moves the point of equilibrium outward. This is illustrated in the measured distances in the polynuclear derivatives of benzene. The lowest C-C distances in these compounds, 1.38 and 1.39, are found along the outer edges of the molecular structures, while the corresponding distances in the interiors of the compounds, where the influence of adjoining atoms is at a maximum, characteristically range from 1.41 to 1.43. Another reason for discrepancies is that in many instances the measurement and the theoretical calculation do not apply to the same quantity. The calculation gives us the distance between structural units, whereas the measurements apply to the distances between specific atoms. Where the atoms are the structural units, as in the compounds of the NaCl class, or where the inter-group distance is the same as the inter-atomic distance, as in the normal paraffins, there is no problem, but exact agreement cannot be expected otherwise. Again we can use benzene as an example. The C-C distance in benzene is generally reported as 1.39, whereas the corresponding theoretical distance, as indicated in Table 12, is 1.41. But, according to the theory, benzene is not a ring of carbon atoms with hydrogen atoms attached; it is a ring of CH neutral groups, and the 1.41 neutral value applies to the distance between these neutral groups, the structural units of the molecule. Since the hydrogen atoms are known to be outside the carbon atoms, if these atoms are coplanar it follows that the distance between the effective centers of the CH groups must be somewhat greater than the distance between the carbon atoms of these groups. The 1.39 measurement between the carbon atoms is therefore entirely consistent with the theoretical distance calculations. The same kind of deviation from the results of the (apparent) direct interaction between two individual atoms occurs on a larger scale where there is a group of atoms that is acting structurally as a radical. Many of the properties of molecules composed in part, or entirely, of radicals or neutral groups are not determined directly by the characteristics of the atoms, but by the characteristics of the groups. The NH4 radical, for example, has the same specific rotations, when acting as a group, as the rubidium atom, and it can be substituted in the NaCl type crystals of the rubidium halides without altering the volume. Consequently, the inter-atomic distances have no direct significance in compounds containing these groups.
It is theoretically feasible to locate the effective centers of the various groups, and to measure the inter-group distances that correspond to those calculated from theory, but this task has not yet been undertaken, and it will not be possible at this time to present a comparison between theoretical and experimental distances in compounds containing radicals
comparable to the comparisons in Tables 1 to 12. Some preliminary results have been obtained, however, on the relation between the theoretical distances and the density in complex compounds. There are a number of factors, not yet investigated in detail, that have some influence on the density of solid matter, and for that reason the conclusions thus far derived from theory are somewhat tentative, and the correlations between theory and observation are only approximate. Nevertheless, certain aspects of these tentative results are significant, and are of enough interest to justify giving them some attention. If we divide the molecular mass, in terms of atomic weight units, by the density, we arrive at the molecular volume in terms of the units entering into the density measurement. For present purposes it will be convenient to convert this quantity to natural units of volume. The applicable conversion factor is the cube of the time region unit of distance divided by the unit of atomic weight. In the cgs system of units it has the numerical value 14.908. In Table 13 the average volumes per volumetric group of a representative number of inorganic compounds containing radicals (V), as calculated from the measured densities, are compared with the cubes of the inter-group distances (S0³), as calculated on the theoretical basis previously described.

Table 13: Molecular Volume

Compound   m       d      n  V      S0³    c  ab1    ab2
NaNO3      85.01   2.261  2  1.261  1.241  4  3-3    4-5
KNO3       101.10  2.109  2  1.608  1.565  4  4-3    4-5
Ca(NO3)2   164.10  2.36   3  1.554  1.565  4  4-3    4-5
RbNO3      147.49  3.11   2  1.590  1.63   4  4-4    4-4
Sr(NO3)2   211.65  2.986  3  1.585  1.631  4  4-4    4-4
CsNO3      194.92  3.685  2  1.774  1.825  4  4½-4½  4-4
Na2CO3     106.00  2.509  3  0.944  0.970  4  3-3    3½-3½
MgCO3      84.33   3.037  2  0.931  0.970  4  3-3    3½-3½
K2CO3      138.20  2.428  3  1.272  1.222  4  4-3    3½-3½
CaCO3      100.09  2.711  2  1.238  1.222  4  4-3    3½-3½
BaCO3      197.37  4.43   2  1.494  1.532  4  4½-4½  3½-3½
FeCO3      115.86  3.8    2  1.022  0.976  5  4-3    3½-3½
CoCO3      118.95  4.13   2  0.966  0.976  5  4-3    3½-3½
Cu2CO3     187.09  4.40   3  0.950  0.976  5  4-3    3½-3½
ZnCO3      125.39  4.44   2  0.947  0.976  5  4-3    3½-3½
Ag2CO3     275.77  6.077  3  1.015  1.096  5  4-4    3½-3½
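The V column of Table 13 can be recomputed directly from the molecular mass, density, and group count, using the stated conversion factor 14.908. A verification sketch using a few of the tabulated rows:

    # V = m / (14.908 * d * n): molecular volume in natural units per
    # volumetric group, from mass m, density d, and group count n.
    data = {"NaNO3": (85.01, 2.261, 2), "KNO3": (101.10, 2.109, 2),
            "CaCO3": (100.09, 2.711, 2), "Ag2CO3": (275.77, 6.077, 3)}
    for compound, (m, d, n) in data.items():
        print(compound, round(m / (14.908 * d * n), 3))
    # Gives 1.261, 1.608, 1.238, and 1.015, matching the tabulated V values.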

The specific electric rotation (c) for the compounds with the normal orientation is 4, as in the valence one binary compounds. Those with the magnetic orientation take the neutral value 5. The applicable specific magnetic rotations for the positive component and the negative radical are shown in the columns headed ab1 and ab2 respectively. Columns 2, 3, and 4 give the molecular mass (m), the density of the solid compound (d), and the number of volumetric units in the molecule (n). Here, again, as in the earlier tables, the calculated and
empirical values are not exactly comparable, as the measured values of the densities have been used directly, rather than being projected back to zero temperature, a refinement that would be required for accuracy, but is not justified at this early stage of the investigation. In this table there are five pairs of compounds, such as Ca(NO3)2 and KNO3, in which the inter-group distances are the same, and the only difference between the pairs, so far as the volumetric factors are concerned, is in the number of structural groups. Because of the uncertainties involved in the measured densities, it is difficult to reach firm conclusions on the basis of each pair considered individually, but the average volume per group, calculated from the density, in the five two-group structures is 1.267, whereas in the five three-group structures the average is 1.261. It is evident from this that the volumetric equality of the group and the independent atom which we noted in the case of the NH4 radical is a general proposition, in this class of compounds at least. This is a point that will have a special significance when we take up consideration of the liquid volume relations. In closing the discussion in this chapter it is appropriate to reiterate that the values of the inter-atomic and inter-group distances derived from theory apply to the separations as they would exist if the equilibrium were reached at zero temperature and zero pressure. In the next two chapters we will consider how these distances are modified when the solid structure is subjected to finite pressures and temperatures.

Chapter 4

Compressibility

One of the simplest physical phenomena is compression, the response of the time region equilibrium to external forces impressed upon it. With the benefit of the information developed earlier, we are now in a position to begin an examination of the compression of solids, disregarding for the present the question of the origin of the external forces. For this purpose we introduce the concept of pressure, which is defined as force per unit area.

P = F/s²  (4-1)

In many cases it will be convenient to deal with pressure on a volume basis rather than on an area basis. We therefore multiply both force and area by distance, s, which gives us the alternative equation:

P = Fs/s³ = E/V  (4-2)

In the region outside unit distance, where the atoms or molecules of matter are independent, the total energy of an aggregate can thus be expressed in terms of pressure and volume as

E = PV  (4-3)

As we will find in the next chapter when we begin consideration of thermal motion, a condition of constant temperature is a condition of constant energy, other things being equal. Equation 4-3 thus tells us that for an aggregate in which the cohesive forces between the atoms or molecules are negligible, an ideal gas, the volume at constant temperature is inversely proportional to the pressure. This is Boyle's law, one of the well-established relations of physics. For application to the time region in which the solid equilibrium is located, the second power of the volume must be substituted for the first power, in accordance with the general inter-regional relation previously established. The time region equivalent of Boyle's law is therefore

PV² = k  (4-4)

In terms of volume this becomes

V = k/P½  (4-5)

This equation tells us that at constant temperature the volume of a solid is inversely proportional to the square root of the pressure. The pressure represented by the symbol P in this equation is, of course, the total effective pressure; that is, the pressure equivalent of all of the forces acting in opposition to the rotational forces of the atom. The force due to the progression of the natural reference system opposes the rotational forces, and acts in parallel with the external compressive forces, but it has the same magnitude regardless of whether or not any such external forces are present. It therefore exerts what we may call an internal pressure, an already existing level of pressure to which an external pressure becomes an addition. In order to conform to established usage and to avoid confusion, the symbol P will hereafter refer to the external pressure only, the total pressure being expressed as P0 + P. On this basis, equation 4-5 may be restated as

V = k/(P0+P)½  (4-6)

Compression is normally expressed in terms of relative rather than absolute volumes, the reference volume being the volume at zero external pressure, where equation 4-6 has the form

V0 = k/P0½  (4-7)

Dividing equation 4-6 by equation 4-7, and rearranging, we obtain

V/V0 = P0½/(P0+P)½  (4-8)

As this equation brings out, the internal pressure, P0, is the key factor in the compression of solids. Inasmuch as this pressure is a result of the progression of the natural reference system which, in the time region, is carrying the atoms inward in opposition to their rotational forces (gravitation), the inward force acts only on two dimensions (an area), and the magnitude of the pressure therefore depends on the orientation of the atom with respect to the line of the progression. As indicated in connection with the derivation of the inter-regional ratio, there are 156.444 possible positions of a displacement unit in the time region, of which a fraction az represents the area subjected to pressure, a and z being the effective displacements in the active dimensions. The letter symbols a, b, and c are used as indicated in Chapter 10, Volume I. The displacement z is either the electric displacement c or the second magnetic displacement b, depending on the orientation of the atom.

From the principle of equivalence of natural units it follows that each natural unit of pressure exerts one natural unit of force per unit of cross-sectional area per effective rotational unit in the third dimension of the equivalent space. However, the pressure is measured in the units applicable to the effect of external pressure. The forces involved in this pressure are distributed to the three spatial dimensions and to the two directions in each dimension. Only those in one direction of one dimension–one sixth of the total–are effective against the time region structure. Applying this 1/6 factor to the ratio az/156.444, we have for the internal pressure per rotational unit at unit volume, P0 = az/938.67

(4-9)

This expression may now be generalized to apply to y rotational units and V units of volume, as follows: P0 = azy/(938.67V)

(4-10)

The force exerted by the progression of the natural reference system is independent of the geometrical arrangement of the atoms, and the volume term in equation 4-10 refers to what we may call the three-dimensional atomic space, the cube of the inter-atomic distance, rather than to the geometric volume. We will therefore replace V by S0³. This gives us the internal pressure equation in final form: P0 = azy/(938.67S0³)

(4-11)

The value derived from this equation is the magnitude of the internal pressure in terms of natural units. To obtain the pressure in terms of any conventional system of units it is only necessary to apply a numerical coefficient equal to the value of the natural unit of pressure in that system. This natural unit was evaluated in Volume I as 5.282 × 10¹² dynes/cm². The corresponding values in the systems of units used in the reports of the experiments with which comparisons will be made in this chapter are:

1.554 × 10⁷ atm
1.606 × 10⁷ kg/cm²
1.575 × 10⁷ megabars

In terms of the units used by P. W. Bridgman, the pioneer investigator in the field, in most of his work, equation 4-11 takes the form

P0 = 17109 azy/S0³ kg/cm²  (4-12)
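As a worked example of equation 4-12, a sketch using the lithium entries from Table 14 below:

    # Lithium, from Table 14: a = 4, z = 1, y = 1, S0^3 = 1.151.
    a, z, y, s0_cubed = 4, 1, 1, 1.151
    p0 = 17109 * a * z * y / s0_cubed       # internal pressure, kg/cm^2
    print(round(p0))                        # about 59,500 = 59.5 M kg/cm^2

    # Equation 4-8 then gives the relative volume under external pressure P:
    for p in (10000, 50000, 100000):        # kg/cm^2
        print(p, round((p0 / (p0 + p)) ** 0.5, 3))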

The internal pressure thus calculated for any specific substance is not usually constant through the entire external pressure range. At low total pressures, the orientation of the atom with respect to the line of progression of the natural reference system is determined by the thermal forces which, as we will see later, favor the minimum values of the effective cross-sectional area. In the low range of total pressures, therefore, the cross-section is as small as the rotational displacements of the atom will permit. In accordance with Le Chatelier's Principle, a higher pressure, either internal or external, applied against the equilibrium system causes the orientation to shift, in one or more steps, toward higher displacement values. At extreme pressures the compressive force is exerted against the maximum cross-section: 4 magnetic units in one dimension and 8 electric units in another. Similarly, only one of the magnetic rotational units in the atom participates in the radial component y of the resistance to compression at the low
pressures, but further application of pressure extends the participation to additional rotational units, and at extreme pressures all of the rotational units in the atom are involved. The limiting value of y is therefore the total number of such units. The exact sequence in which these two kinds of factors increase in the intermediate pressure range has not yet been determined, but for present purposes a resolution of this issue is not necessary, as the effect of any specific amount of increase is the same in both cases. Helium and neon, the first two of the inert gases, the elements that have no effective rotation in the electric dimension, take the absolute minimum compression factors: one rotating unit with one effective unit of displacement in each of the two effective dimensions. The azy factors for these elements can be expressed as 1-1-1. In this notation, which we will use for convenience in the subsequent discussion, the numerical values of the compression factors are given in the same order as in the equations. It should be noted that the absolute minimum compression, that applicable to the elements of least displacement, is explicitly defined by the factors 1-1-1. The value of the factor a increases in the higher members of the inert gas series because of their greater magnetic displacement. Because of their negative displacement in the electric dimension, which, in this context, is equivalent to the zero displacement of the inert gases, the electronegative elements follow the inert gas pattern, taking the minimum 1-1-1 factors in the lowest members of the lowest rotational groups, and values that are higher, but still generally well below those of the corresponding electro-positive elements, as the displacement increases in either or both of the atomic rotations. None of the elements of the electronegative divisions below electric displacement 7 has the 4-8 az factors initially, although they are capable of these high levels, and can eventually reach them under appropriate conditions. All of the electropositive elements studied by Bridgman have the full 4 units in one dimension; that is, a = 4. The value of z for the alkali metals is equal to the electric displacement, one unit, and since y takes the minimum value under low pressure conditions, the compression factors for these elements are 4-1-1. The displacement 2 elements (calcium, etc.) take the intermediate values 4-2-1 or 4-3-1. The greater displacements of the elements that follow have a double effect. They increase the internal pressure directly by enlarging the effective cross-section, and this higher internal pressure then has the same effect as a greater external pressure in causing a further increase in the compression factors. Most of these elements therefore utilize the full displacements of the active cross-section dimensions from the start of compression; that is, 4-4-1 (az = ab, two magnetic dimensions) in some of the lower group elements and the transition elements of Group 4A, and 4-8-1, or 4-8-n (az = ac, one magnetic and one electric dimension) in the others. The factors that determine the internal pressures of the compounds that have been investigated thus far fall mainly in the intermediate range, between 4-1-1 and 4-4-1. NaCl, for instance, has 4-2-1 initially, and shifts to 4-3-1 in the pressure range between 30 and 50 M kg/cm2. AgCl has 4-3-1 initially, and carries these factors up to a transition point near Bridgman's pressure limit of 100 M kg/cm2. 
CaF2 has the factors 4-4-1 from the start of compression. The initial values of the internal pressure of most of the inorganic compounds examined in this investigation are based on one or another of these three patterns. Those of the organic compounds are mainly 4-1-1, 4-2-1, or the intermediate value 4-1½-1.

Compression is ordinarily measured in terms of relative volume, and most of the discussion in this chapter will deal with the subject on this basis, but for some purposes we will be interested in the compressibility, the rate of change of volume under pressure. This rate is obtained by differentiating equation 4-8:

(1/V0)(dV/dP) = P0^½ / [2(P0 + P)^3/2]   (4-13)

The compressibility at P0, the initial compressibility, is of particular interest. For all practical purposes it is the same as the compressibility at one atmosphere, this pressure being only a small fraction of the internal pressure P0. The initial compressibility may be obtained from equation 4-13 by letting P equal zero. The result is

(1/V0)(dV/dP)|P=0 = 1/(2P0)   (4-14)

Since the initial compressibility is a quantity that can be measured, its simple and direct relation to the internal pressure provides a significant confirmation of the physical reality of that theoretical property of matter. Initial compressibility factors derived theoretically for those elements on which consistent compressibility data are available for comparison, the internal pressures calculated from these factors, and the initial compressibilities corresponding to the calculated internal pressures are listed in Table 14, together with measured values of the initial compressibility at room temperature. Two sets of experimental values are given, one from Bridgman and one from a more recent compilation. Values of S0³, except those marked with asterisks, are computed from the inter-atomic distances (S0) in the tables of Chapter 2. Where the structure is anisotropic, the S0³ value shown is the product of one of the distances given in the earlier tabulations and the square of the other. The reason for the occurrence of the indicated deviations from the Chapter 2 values will be explained later.
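Before turning to the tabulated comparisons, it may be helpful to make the arithmetic explicit. The short sketch below regenerates a few Table 14 entries; the proportionality constant of 17.12 M kg/cm² per unit of a·z·y/S0³ is simply inferred here by fitting the tabulated P0 column (the defining equation for the internal pressure appears earlier in the chapter and is not reproduced in this excerpt), so the sketch should be read as an illustrative check rather than as a statement of the theory.

```python
# Sketch: regenerating Table 14 entries.  P0 = K * a*z*y / S0^3, where the
# constant K = 17.12 M kg/cm2 is inferred by fitting the tabulated P0 values.
# Equation 4-14 then gives the initial compressibility as 1/(2*P0).

K = 17.12  # M (thousand) kg/cm2, inferred from Table 14

elements = {
    # name: (S0^3, a, z, y)
    "Li": (1.151, 4, 1, 1),
    "Zn": (0.903, 4, 4, 1),
    "W":  (0.953, 4, 8, 3),
}

for name, (s03, a, z, y) in elements.items():
    p0 = K * a * z * y / s03          # internal pressure, M kg/cm2
    # P0 is in units of 1000 kg/cm2, so 1/(2*P0) is multiplied by 1000 to
    # express the compressibility in the 10^-6 per kg/cm2 units of Table 14.
    compressibility = 1000 / (2 * p0)
    print(f"{name}: P0 = {p0:6.1f} M kg/cm2,"
          f" initial compressibility = {compressibility:5.2f} x 10^-6")
# Li 59.5 / 8.40, Zn 303.3 / 1.65, W 1724.5 / 0.29 -- matching the tabulated
# 59.5 / 8.42, 303 / 1.65, 1723 / 0.29 within rounding.
```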

Table 14: Initial Compressibility

                                               Initial Compressibility × 10⁶
           S0³      a-z-y    P0 (M kg/cm²)     Calc.    Obs.3    Obs.4
Li         1.151    4-1-1        59.5           8.42     8.41     8.46
Be         0.482    4-4-1       568             0.88     0.87     0.98
C(dia.)    0.147    4-6-1      2793             0.18     0.18     0.18
Na         2.048    4-1-1        33.4          14.97    15.1     14.42
Mg         1.291    4-4-1       212             2.36     2.86     2.77
Al         0.915    4-5-1       374             1.34     1.30     1.36
Si         0.497    4-4-1       551             0.91     0.31     0.99
K          3.659    4-1-1        18.7          26.74    31.0     30.4
Ca         2.588    4-3-1        79.3           6.31     5.51     6.45
Ti         1.033    4-8-1       530             0.94     0.77     0.93
V          0.729    4-8-1       751             0.67     0.59     0.61
Cr         0.603    4-8-1       908             0.55     0.50     0.52
Mn         0.705    4-8-1       777             0.64     0.76     1.65
Fe         0.603    4-8-1       908             0.55     0.57     0.58
Co         0.603*   4-8-1       908             0.55     0.52     0.51
Ni         0.603*   4-8-1       908             0.55     0.50     0.53
Cu         0.652    4-6-1       630             0.79     0.70     0.72
Zn         0.903    4-4-1       303             1.65     1.64     1.64
Ge         0.603    4-4-1       454             1.10     1.33     1.27
Rb         4.616    4-1-1        14.8          33.78    38.7     31.4
Sr         3.268    4-3-1        62.8           7.96     7.9      8.46
Zr         1.306    4-6-1½      472             1.06     1.06     1.18
Nb         0.921    4-8-1½      892             0.56     0.55     0.58
Mo         0.764*   4-8-2      1433             0.35     0.34     0.36
Ru         0.764*   4-8-2      1433             0.35     0.34     0.31
Rh         0.764    4-8-2      1433             0.35     0.36     0.36
Pd         0.823    4-8-1½      998             0.50     0.51     0.54
Ag         0.956    4-8-1       573             0.87     0.96     0.97
Cd         1.118    4-4-1       245             2.04     1.89     2.10
In         1.165*   4-4-1       235             2.13              2.38
Sn         0.913*   4-4-1       300             1.67     1.64     0.80
Sb         1.325*   4-4-1       207             2.42     2.32     2.56
Cs         5.774    4-1-1        11.9          42.0     59.0     49.1
Ba         2.686*   4-2-1        51.0           9.80              9.78
La         2.044    4-4-1       134             3.73     3.39     4.04
Ce         1.893    4-4-1       145             3.45     3.45     4.10
Pr         1.758*   4-4-1       156             3.21              3.21
Nd         1.758*   4-4-1       156             3.21              3.00
Sm         1.758*   4-4-1       156             3.21              3.34
Gd         1.346*   4-4-1       203             2.46              2.56
Dy         1.346    4-4-1       203             2.46              2.55
Ho         1.346*   4-4-1       203             2.46              2.47
Er         1.346*   4-4-1       203             2.46              2.38
Tm         1.346*   4-4-1       203             2.46              2.47
Yb         2.167*   4-2-1        63.2           7.92              7.38
Lu         1.346*   4-4-1       203             2.46              2.38
Ta         1.027*   4-8-2      1066             0.47     0.47     0.49
W          0.953*   4-8-3      1723             0.29     0.28     0.30
Ir         0.823    4-8-3      1996             0.25              0.28
Pt         0.823    4-8-2      1330             0.38     0.35     0.35
Au         0.953    4-8-1½      862             0.58     0.56     0.57
Tl         1.631    4-4-1       168             2.98     3.31     2.74
Pb         1.249*   4-4-1       219             2.25     2.29     2.29
Bi         1.249    4-3-1       164             3.05     2.71     3.11
Th         1.758    4-8-1       311             1.61              1.81
U          0.984    4-8-1       556             0.90     0.94     0.99

In most cases the difference between the calculated and measured compressibilities is within the probable experimental error. Substantial deviations from the calculated values are to be expected in the case of low melting point elements such as the alkali metals, unless corrections have been applied to the empirical data, as there is an additional component in the initial volume of such substances. Elsewhere, the differences between the calculated compressibilities and either of the two sets of experimental values are, on the average, no greater than the differences between the experimental results.

The process of increase in the compression factors is repeated at successively higher pressure levels until the maximum factors for the element are reached. Because of the nature of this compression pattern, a convenient method of analyzing the experimental values of the volume of various substances under compression is obtained by expressing equation 4-8 in the form

(V0/V)² = 1 + P/P0   (4-15)

According to this equation, if we plot the reciprocals of the squares of the relative volumes against the corresponding total pressure ratios, we should obtain a straight line intersecting the zero pressure ordinate at the reference volume 1.00. The slope of the line is determined by the magnitude of the internal pressure, P0. Fig. 1(a) is a curve of this kind for the element tin, based on Bridgman's experimental values.
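As a concrete illustration, the linearization can be applied directly to the Bridgman tin volumes tabulated later in this chapter (Table 15); a minimal sketch, using the internal pressure of 300 M kg/cm² from Table 14:

```python
# Sketch: the linearization of equation 4-15 applied to Bridgman's tin
# volumes from Table 15.  With P0 = 300 M kg/cm2 (Table 14), the quantity
# (V0/V)^2 should rise along the straight line 1 + P/300.

pressures = [0, 5, 10, 15, 20, 25, 30]                     # M kg/cm2
observed = [1.000, .991, .982, .975, .966, .960, .951]     # V/V0, Table 15

for p, v in zip(pressures, observed):
    print(f"P = {p:3d}:  (V0/V)^2 = {(1 / v) ** 2:.3f},"
          f"  1 + P/P0 = {1 + p / 300:.3f}")
```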

Figure 1: Compression Patterns

Where there is a transition to a higher set of compression factors within the experimental range, and the magnitude of P0 changes, the volumes diverge from the original line and follow a second straight line, the slope of which is determined by the new compression factors. On preparing curves of this kind for the other elements investigated by Bridgman, we find that about two-thirds of them actually do conform to a single straight line up to the 30,000 kg/cm² pressure limit of his earlier work. His studies of the less compressible substances, such as the higher elements of the electropositive divisions, were not carried beyond this level, but he measured the compression up to 100,000 kg/cm² on many other elements, and most of them were found to undergo a transition in which the effective internal pressure increases without any volume discontinuity. The compression curve for such a substance consists of two straight line segments connected by a smooth transition curve, as in Fig. 1(b), which represents Bridgman's values for silicon. In addition to the changes of this type, commonly called second order transitions, some solid substances undergo first order transitions in which there is a modification of the crystal structure and a volume discontinuity at the transition point. The effective internal pressure generally changes during a transition of this kind, and the resulting volumetric pattern is similar to that of KCl, Fig. 1(c). With the exception of some values which are rather erratic and of questionable validity, all of Bridgman's results follow one of these three patterns or some combination of them. The antimony curve, Fig. 1(d), illustrates one of the combination patterns. Here a second order transition between 30,000 and 40,000 kg/cm² is followed by a first order transition at a higher pressure. The numerical values corresponding to these curves are given in the tables that follow.

The experimental second order curves are smooth and regular, indicating that the transition process takes place freely when the appropriate pressure is reached. The first order transitions, on the other hand, show considerable irregularity, and the experimental results suggest that in many substances the structural changes at the transition points are subject to a variable amount of delay due to internal conditions in the solid aggregate. In these substances the transition does not take place at a well-defined pressure, but somewhere within a relatively broad transition zone, and the exact course of the transition may vary considerably between one series of measurements and another. Furthermore, there are many substances which appear to experience similar delays in achieving volumetric equilibrium even where no transitions take place. The compression curves suggest that a number of the reported transitions are actually volume adjustments which merely reflect delayed response to the pressure applied earlier. For example, in the barium curve based on Bridgman's results there are presumably two transitions, one between 20,000 and 25,000 kg/cm², and the other between 60,000 and 70,000 kg/cm². Yet the experimental volumes at 60,000 and 100,000 kg/cm² are both very close to the values calculated on the basis of a single straight line relation. It is quite probable, therefore, that this element actually follows one linear relation at least up to the vicinity of 100,000 kg/cm².

The deviations from the theoretical curves that are found in the experimental volumes of substances with relatively high melting points are generally within the experimental error range, and those larger deviations that do make their appearance can, in most cases, be explained on the foregoing basis. The compression curves for substances with low melting points show systematic deviations from linearity at the lower pressures, but this is a normal pattern of behavior resulting from the proximity of the change of state. As will be brought out in detail in our examination of the liquid state, the physical state of matter is basically a property of the individual atom or molecule. The state of the aggregate merely reflects the state of the majority of its individual constituents. Consequently, a solid aggregate at any temperature near the melting point contains a specific proportion of liquid molecules. Since the volume of a liquid molecule differs from that of a solid molecule, the volume of the aggregate is modified accordingly. The amount of the volume deviation in each case can be calculated by methods that will be described in the subsequent discussion of the liquid volume relations.

Table 15 compares the results of the application of equation 4-8 with Bridgman's measurements on some of the elements that maintain the same internal pressure all the way up to his pressure limit of 100,000 kg/cm².
In many cases he made several series of measurements on the same element. Most of these results agree within about 0.003, and it does not appear that listing all of the individual values in the tabulations would serve any useful purpose. The values given in Table 15, and in the similar tables that follow, are those obtained in experiments that were carried to the full 100,000 kg/cm² pressure level. Where the high pressure measurements were started at some elevated pressure, or where the measurement interval was greater than usual, the gaps have been filled from the results of other Bridgman experiments.

Table 15: Relative Volumes Under Compression

Pressure       Zn 4-4-1        Zr 4-6-1½       In 4-4-1        Sn 4-4-1
(M kg/cm²)   Calc.    Obs.   Calc.    Obs.   Calc.    Obs.   Calc.    Obs.
0            1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
5             .992    .992    .995    .995    .988    .988    .992    .991
10            .984    .984    .990    .989    .980    .980    .984    .982
15            .976    .977    .985    .983    .970    .967    .976    .975
20            .969    .969    .980    .978    .960    .955    .968    .966
25            .961    .964    .975    .973    .951    .948    .961    .960
30            .954    .954    .970    .969    .942    .936    .954    .951
35            .947            .965    .964    .933    .932    .947
40            .940    .939    .960    .960    .925    .919    .940    .936
50            .927    .925    .951    .946    .909    .903    .926    .923
60            .914    .912    .942    .937    .893    .888    .913    .909
70            .902    .900    .933    .929    .878    .874    .901    .897
80            .890    .889    .925    .922    .864    .860    .889    .886
90            .879    .878    .917    .916    .851    .847    .878    .875
100           .868    .868    .909    .910    .838    .835    .867    .864
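The calculated columns of tables of this kind contain no free parameters; they follow directly from equation 4-8 and the internal pressures of Table 14. A minimal sketch:

```python
# Sketch: regenerating the "Calc." columns of Table 15 from equation 4-8,
# V/V0 = (1 + P/P0)^(-1/2), with the Table 14 internal pressures.

internal_pressure = {"Zn": 303, "Zr": 472, "In": 235, "Sn": 300}  # M kg/cm2

for name, p0 in internal_pressure.items():
    volumes = [(1 + p / p0) ** -0.5 for p in (25, 50, 100)]
    print(name, " ".join(f"{v:.3f}" for v in volumes))
# Zn .961 .927 .867; Zr .975 .951 .908; In .951 .908 .838; Sn .961 .926 .866
# -- agreeing with the tabulated values within one unit in the last place.
```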

Table 16 extends the volume comparisons to representative elements of the classes that are subject to transitions within the experimental range of pressures. Transitions reported by the investigator or indicated by the theoretical calculations are reflected in the dual sets of compression factors shown in the column headings. In these tabulations the position of the upper branch of each curve has been fixed by using the experimental volume at a selected pressure in the straight line segment above the transition (identified by the symbol R) as a reference point. Thus the slope of this upper branch of the curve is determined theoretically, but its position relative to the 1/V² scale is empirical. Some work has been done toward extension of the theoretical development to a determination of the exact position of the upper section of each curve, but this project is not far enough advanced to justify any discussion of it at this time.

Table 16: Relative Volumes Under Compression

Pressure       Al 4-5-1, 4-8-1   Si 4-4-1, 4-8-1   Ca 4-3-1, 4-4-1   Sb 4-4-1, 4-4-1½
(M kg/cm²)     Calc.    Obs.     Calc.    Obs.     Calc.    Obs.     Calc.    Obs.
0              1.000   1.000     1.000   1.000     1.000   1.000     1.000   1.000
5               .993    .993      .996    .995      .970    .969      .988    .987
10              .987    .987      .991    .990      .943    .942      .977    .975
15              .981    .981      .987    .986      .917    .918      .966    .964
20              .974    .975      .982    .981      .895    .897      .955    .954
25              .968    .969      .978    .978      .878    .878      .945    .944
30              .964    .964      .974    .974      .862    .861      .935    .934
35                                                  .847    .845      .925    .925
40              .957    .958      .966    .968      .832    .832      .916    .917
50              .949    .951      .960    .962      .805R   .805      .899    .899
60              .942    .944      .956    .957      .780    .780      .888    .886
70              .935    .937      .952    .952      .758    .748      .875    .875
80              .928    .929      .948    .948      .737    .732      .864R   .864
90              .922    .922      .944    .944      .718    .716               .815
100             .915R   .915      .940R   .940      .701    .702               .803

Pressure       Ba 4-2-1, 4-3-1   La 4-4-1, 4-8-1   Pr 4-4-1, 4-4-1½   U 4-8-1, 4-8-2
(M kg/cm²)     Calc.    Obs.     Calc.    Obs.     Calc.    Obs.      Calc.    Obs.
0              1.000   1.000     1.000   1.000     1.000   1.000      1.000   1.000
5               .955    .955      .982    .981      .984    .983       .996    .995
10              .915    .914      .965    .963      .970    .967       .991    .990
15              .880    .879      .949    .947      .955    .953       .987    .986
20              .848    .841      .933    .933      .942    .940       .983    .981
25              .820    .814      .918    .917      .929    .927       .979    .978
30              .794    .789      .904    .905      .916    .915       .975    .973
35              .771    .770      .891    .893      .904    .904       .971    .971
40              .750    .747      .878    .881      .893    .893       .967    .966
50              .712    .712      .858    .863      .878    .878       .960    .960
60              .679    .682      .845    .846      .863    .863       .956    .955
70              .650    .639      .833    .832      .849R   .849       .952    .951
80              .625    .618      .821    .819      .835    .836       .949    .947
90              .603    .598      .809    .808      .822    .823       .945    .944
100             .582    .580      .798R   .798      .810    .811       .941R   .941

Compressibility patterns of compounds are theoretically identical with those of the elements, and this theoretical conclusion is confirmed by compression data for a representative group of inorganic compounds, presented in Table 17.

Table 17: Relative Volumes Under Compression

Pressure       NaCl 4-2-1, 4-2-1½   NaI 4-2-1, 4-2-1½   KCl 4-2-1, 4-3-1   ZnS 4-4-1, 4-4-1½
(M kg/cm²)     Calc.    Obs.        Calc.    Obs.       Calc.    Obs.      Calc.    Obs.
0               .994   1.000         .987   1.000        .994   1.000       .995   1.000
5               .979    .982         .964    .970        .973    .974       .991    .994
10              .964    .966         .942    .944        .953    .952       .986    .988
15              .950    .951         .922    .922        .934    .933       .982    .982
20              .937R   .937         .903R   .902        .916R   .916       .977R   .977
20*                                                      .803R   .803
25              .924    .924         .885    .886        .791    .789       .973    .972
30              .912    .912         .868    .871        .779    .778       .969    .967
35              .900    .901         .853    .858        .768    .768       .964    .963
40              .889    .892         .840    .840        .757    .758       .960    .961
50              .867    .865         .819    .816        .741    .742       .952    .954
60              .847    .848         .799    .795        .727    .723       .945    .947
70              .829    .832         .781    .777        .714    .710       .940    .940
80              .815    .817         .765    .761        .702    .698       .934    .934
90              .802    .803         .749    .747        .690    .688       .929    .929
100             .790R   .790         .734R   .734        .679R   .679       .924R   .924

* KCl: volume immediately beyond the first order transition at this pressure.

Pressure       AgCl 4-3-1     CsBr 4-3-1, 4-4-1   NH4Cl 4-2-1, 4-4-1   KNO3 4-3-1, 4-3-2
(M kg/cm²)     Calc.    Obs.  Calc.    Obs.       Calc.    Obs.        Calc.    Obs.
0              1.000   1.000   .984   1.000       1.000   1.000         .894   1.000
5               .990    .989   .962    .971        .974    .973         .878    .882
10              .980    .979   .942    .947        .950    .951         .862    .862
15              .971    .969   .923    .925        .928    .933         .847    .846
20              .961    .960   .905R   .905        .910    .918         .833    .831
25              .952    .952   .888    .888        .900    .905         .820R   .820
30              .944    .942   .871    .870        .889    .891         .807    .804
35              .935    .937   .856    .859        .879    .883
40              .927    .926   .842    .840        .869    .867         .783    .781
50              .911    .910   .815    .814        .851    .846         .761    .762
60              .895    .896   .790    .792        .833    .828         .744    .745
70              .881    .883   .777    .773        .817    .812         .733    .732
80              .867    .871   .760    .757        .801    .798         .723    .720
90              .854    .860   .743    .742        .787    .785         .712    .711
100             .841    .835   .728R   .728        .773R   .773         .703R   .703

As might be expected from their less uniform composition, transitions are somewhat more common in the compounds, but otherwise there is no difference in the compression curves. The curve for KCl, shown graphically in Fig. 1 and by numerical values in Table 17, is of special interest because it includes a sharp first order transition in which there is a substantial decrease in the basic volume while the compression factors remain unchanged. The magnitude of the volume reduction that takes place indicates that there is a reorientation of the atomic rotations in which the neutral specific electric rotation 5 is substituted for the normal rotation 4 as the effective relative value. The theoretical volumes beyond the transition point, as shown in the table, are based on the smaller atomic volume corresponding to the higher rotation. Up to 20,000 kg/cm² the volume follows the curve corresponding to compression factors 4-2-1 and S0³ = 1.222, which produce an internal pressure of 112.7 M kg/cm². At the transition point the basic volume (S0³) drops to 0.976, increasing the internal pressure to 141.1 M kg/cm². The compression then continues on this basis up to the vicinity of 45,000 kg/cm², where the compression factors change from 4-2-1 to 4-3-1, and the internal pressure rises accordingly.

As in the compression of the elements, the theoretical calculations do not always confirm the transitions reported by the experimenters. On the other hand, these calculations show that a large proportion of the compounds, including six of the eight in Table 17, undergo either a transition or some other process in which they eliminate a volume component in the pressure range below 5000 kg/cm². The effect on the compression curve is to cause the linear segment of the curve to intersect the zero pressure ordinate at a volume below 1.000. The origin of these volume adjustments is still uncertain. The occurrence of a number of observable first order transitions at relatively low pressures suggests that some early second order transitions may also be taking place. But it is also possible that voids in the structure may be eliminated in the early stages of compression, or that there are geometrical readjustments.

The structural characteristics of the organic compounds make them particularly susceptible to such geometrical readjustments. Because of their low melting points, their volumes under low pressure also include the additional component that exists near the change of state. It appears, however, that in a wide range of compounds elimination of these extra volume components is substantially complete at some pressure well below the 40,000 kg/cm² level to which Bridgman's measurements on solid organic compounds were carried. This means that there is a fairly wide pressure range in which these compounds follow the normal compression pattern. The following comparison of theoretical and observed volume ratios for benzene and some of its polynuclear derivatives gives an indication of how the elimination of the excess volume progresses. A measured ratio lower than the theoretical means that some of the excess volume is eliminated in the pressure range for which the ratio is measured, and the amount of the difference is an indication of the amount by which the normal loss of volume due to compression is increased.

Benzene
Ratio (M kg/cm²)   Calc.   Obs.
40/20              .938    .920
40/25              .954    .943
40/30              .970    .964
40/35              .985    .984

Ratio 40/25        Calc.   Obs.
Benzene            .954    .943
Naphthalene        .954    .950
Anthracene         .954    .953

As these figures indicate, benzene is just getting rid of the last of the excess volume at the pressure limit of the experiments, and there is no linear section of the benzene compression curve on which the slope can be measured for comparison with the theoretical value. With increased molecular complexity, however, the linear section of the curve lengthens, and for compounds with characteristics similar to those of anthracene there is a 15,000 kg/cm² interval in which the measured volumes should follow the theoretical line. Compounds of this nature have magnetic rotation 3-3 and electric rotation 4. The effective value of S0³ is therefore 0.812, and where the compression factors are 4-1½-1, the resulting internal pressure is 127.2 M kg/cm². As shown in the values tabulated for benzene, which were computed on the basis of this internal pressure, the ratio of the volume at 40,000 kg/cm² to that at 25,000 kg/cm² should be 0.954 for all organic compounds with characteristics (molecular complexity, melting point, compression factors, etc.) similar to those of anthracene. Table 18 shows that this theoretical conclusion is corroborated by Bridgman's measurements.
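The 0.954 figure is a direct consequence of equation 4-8; a one-line check, assuming only the internal pressure just derived:

```python
# Sketch: the theoretical 40/25 volume ratio for anthracene-like compounds.
# Equation 4-8 gives V = V0*(1 + P/P0)**-0.5, so the ratio of the volumes at
# two pressures is sqrt((P0 + P1)/(P0 + P2)), independent of V0.

p0 = 127.2                                  # M kg/cm2, from S0^3 = 0.812
ratio = ((p0 + 25) / (p0 + 40)) ** 0.5
print(f"V(40)/V(25) = {ratio:.3f}")         # 0.954, the Table 18 figure
```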

Table 18: Measured Volume Ratios, 40/25 M kg/cm² (theoretical ratio: .954)

Urea                    .954      p-Nitroiodobenzene       .955
Nitrourea               .956      o-Chlorobenzoic acid     .954
Cyanamide               .953      m-Chlorobenzoic acid     .953
o-Xylene                .956      p-Chlorobenzoic acid     .954
p-Xylene                .956      o-Bromobenzoic acid      .954
Triphenyl methane       .953      m-Bromobenzoic acid      .954
o-Diphenyl benzene      .954      p-Bromobenzoic acid      .954
m-Diphenyl benzene      .955      m-Iodobenzoic acid       .955
p-Diphenyl benzene      .955      p-Iodobenzoic acid       .953
Chlorobenzene           .954      p-Nitroaniline           .954
o-Nitrochlorobenzene    .956      o-Acetyl toluidine       .954
o-Nitrobromobenzene     .955      Tetrahydronaphthalene    .953
p-Nitrobromobenzene     .953      Anthracene               .953
o-Nitroiodobenzene      .953      Acenaphthene             .955

At the time the theoretical values listed in the foregoing tables were originally calculated, Bridgman's results constituted almost the whole of the experimental data then available in the high pressure range, and his experimental limit at 100,000 kg/cm² was the boundary of the empirical knowledge of the effect of high pressure. In the meantime the development of shock wave techniques by American and Russian investigators has made it possible to measure compressions at pressures up to several million atmospheres. With the benefit of these new measurements we are now able to extend the correlation between theory and experiment into the region of the maximum compression factors.

The nature of the response of the compression factors to the application of pressure has already been explained, and the maximum factors for each group of elements have been identified. However, the magnitude of the base volume (S0³) also enters into the determination of the internal pressure, and coincidentally with the increase in these factors there is a trend toward a minimum base volume. In themselves, modifications of the crystal structure play only a small part in the compressibility picture. Application of sufficient pressure causes a solid to assume one of the crystal forms corresponding to the closest packing of the atoms, the face-centered cubic or close-packed hexagonal for isometric crystals, and the nearest equivalent structures if the crystals are anisometric. If some different crystal form exists at zero pressure, the volume decrease due to the change to one of the close-packed forms shows up as a percentage reduction in all subsequent volumes, but the compressibility is not otherwise affected. However, a difference in crystal structure often indicates a difference in the relative orientation of the atomic rotations. Any such change in orientation alters the internal pressure, and consequently has a significant effect on the compressibility.

Application of pressure tends to favor what may be called "regular" structures at the expense of those structures that are able to exist only because of special conditions applicable to the particular elements involved. This tendency is evident from the start of the compression process, and is responsible for the large number of deviations from the Chapter 2 values of the inter-atomic distances that are identified by asterisks in Table 14. For example, the five elements from chromium to nickel have a number of different inter-atomic distances at low pressure, and are able to crystallize in alternate forms. In the early stages of compression, however, all of these elements, except manganese, orient themselves on the basis of the neutral relative rotation 10, and have an internal pressure that reflects the corresponding value of S0³, which is 0.603. At still higher pressures vanadium shifts to the same relative rotation and joins the group. Manganese probably does likewise, but empirical confirmation of this change is still lacking. Thus the range of variation of the atomic arrangements is greatly reduced by external pressure. One of the collateral effects is that the amount of uncertainty in the identification of the rotation orientation, and the resulting base volume, is minimized.

Most of the elements that change to a lower base volume at the start of compression maintain this new value of S0³ throughout the remainder of the present range of the shock wave experiments. Those that do not make this change in the early stages of compression generally do so at some higher pressure. Only a few keep the same base volume up to the shock wave pressure limit. Still fewer undergo a second transition to a lower base volume. Thus the general pattern involves one reduction of the base volume in the pressure range from zero external pressure up to the limit of the shock wave experiments. This pattern is reflected in the twelve series of measurements that have been selected for comparison with the theoretical values. Of the twelve elements included, only two, copper and chromium, have the same base volume in the shock wave range as at zero pressure. Four continue with the values of S0³ applicable to the early stages of compression, the values listed in Table 14, and six change to a lower base volume somewhere above Bridgman's pressure limit. The minimum base volumes, the corresponding maximum compression factors, and the resulting internal pressures for these elements are shown in Table 19.

Table 19: Maximum Internal Pressures

        c       a-b      S0³      a-z-y    P0 (M kg/cm²)
V       10      4-3      0.603    4-8-2    1816
Cr      10      4-3      0.603    4-8-3    2724
Co      10      4-3      0.603    4-8-3    2724
Ni      10      4-3      0.603    4-8-3    2724
Cu      8-10    4-3      0.652    4-8-3    2519
Mo      10      4-4      0.764    4-8-4    2866
Ag      8-10    4-4      0.823    4-8-4    2661
W       10      4-4½     0.822    4-8-5    3330
Au      10      4-4½     0.822    4-8-5    3330
Tl      5-10    4-4½     1.074    4-8-5    2549
Pb      5-10    4-4½     1.074    4-8-5    2549
Th      5       4½-4½    1.631    4-8-5    1678
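These internal pressures follow from the same relation that generates the P0 column of Table 14; a minimal sketch, again using the constant of 17.12 M kg/cm² inferred earlier from that table:

```python
# Sketch: maximum internal pressures of Table 19 from P0 = K * a*z*y / S0^3,
# with K = 17.12 M kg/cm2 inferred earlier by fitting Table 14.

K = 17.12
rows = {
    # element: (S0^3, a, z, y) at the maximum compression factors
    "V":  (0.603, 4, 8, 2),
    "Cr": (0.603, 4, 8, 3),
    "W":  (0.822, 4, 8, 5),
    "Th": (1.631, 4, 8, 5),
}
for name, (s03, a, z, y) in rows.items():
    print(f"{name}: P0 = {K * a * z * y / s03:6.0f} M kg/cm2")
# Prints 1817, 2725, 3332, 1679 against the tabulated 1816, 2724, 3330, 1678.
```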

Here again, as in the pressure range of the Bridgman experiments, the theoretical development is not yet far enough advanced to enable specifying the exact locations of the upper sections of the compression curves. Nor is it yet clear in all cases just how many of the possible intermediate values of the compression factors are actually utilized as the pressure increases. What we are able to do at the present rather early stage of the development of the theory is to demonstrate that in this extreme high pressure range, as well as at the lower pressures of the preceding tables, the volume varies inversely with the square root of the total pressure, strictly in accordance with the theory. In this connection it should be noted that the section of each compression curve that is based on the maximum value of the internal pressure is long enough to make the square root pattern clear and distinct. Furthermore, we are able to show that the slope of the last section of the experimental curve for each element is identical with the theoretical slope determined by the calculated maximum values of the internal pressure, and that the slope of each of the intermediate sections is in agreement with one of the possible intermediate values of that internal pressure. An exact theoretical definition of the curves will have to wait for further progress along the lines discussed earlier. In the meantime, the amount of theoretical information already available will serve as a means of testing the validity of each set of empirical results, and will also enable a reasonable amount of extrapolation of the compression curves beyond the present limits of the shock wave technology.

Table 20 is a comparison of the theoretical volumes, based on an empirical reference volume for each of the sections of the curves, as in the preceding tables, with the shock wave results obtained at Los Alamos⁵ on the elements that were investigated over the widest range of pressures. Unless there is an increase in the compression factors in the vicinity of 100,000 atmospheres, the compression curves established on the basis of Bridgman's measurements extend into the lower range of these shock wave experiments. In these cases the theoretical volumes up to the first change in the compression factors are calculated on the basis of the reference volume selected from the Bridgman data, and no reference point is identified in this table.

Table 20: Shock Wave Compressions

P           W                          Au                         Mo
(10⁶ atm)  a-z-y   Calc.   Obs.     a-z-y    Calc.   Obs.     a-z-y   Calc.   Obs.
0.1        4-8-3   .972    .970     4-8-1½   .946    .953     4-8-2   .966    .966
0.2                .946    .944     4-8-3    .911    .917             .936    .937
0.3                .922    .921              .888R   .888             .908    .912
0.4                .900    .901              .867    .864     4-8-3   .885    .890
0.5        4-8-4   .880    .882              .847    .843             .868    .870
0.6                .865    .866              .828    .825             .851    .852
0.7                .850    .851              .811    .810             .836    .836
0.8                .836R   .836              .794    .796             .822    .821
0.9                .823    .824     4-8-5    .780    .783             .808    .807
1.0                .810    .812              .771    .772             .795R   .795
1.1                .798    .800              .762R   .762             .783    .783
1.2        4-8-5   .787    .790              .754    .752             .771    .772
1.3                .778    .780              .745    .743     4-8-4   .761    .762
1.4                .770    .771              .737    .735             .752    .752
1.5                .762R   .762              .730    .728             .743R   .743
1.6                .754    .754              .722    .720             .734    .734
1.7                .747    .746              .715    .714             .726    .726
1.8                .739    .738              .708    .708
1.9                .732    .731              .701    .702
2.0                .725    .725              .694    .696
2.1                .718    .718

P           Cr                         Pb                         V
(10⁶ atm)  a-z-y    Calc.   Obs.    a-z-y    Calc.   Obs.     a-z-y    Calc.   Obs.
0.1        4-8-1½   .955R           4-4-1½   .858    .865     4-8-1    .939    .945
0.2                 .924            4-4-3    .796R   .796     4-8-1½   .900    .902
0.3                 .895                     .753    .751              .867R   .867
0.4                 .869                     .716    .718              .838    .838
0.5                 .845                     .691    .693              .811    .812
0.6                 .823            4-8-3    .673R   .673              .787    .790
0.7        4-8-3    .805                     .656    .656              .765    .770
0.8                 .794                     .640    .642     4-8-2    .750    .753
0.9                 .783                     .628    .630              .736    .737
1.0                 .772R           4-8-5    .619R   .619              .723R   .723
1.1                 .762                     .610    .609              .710    .709
1.2                 .752                     .602    .600              .698    .697
1.3                 .742                     .594    .593              .687    .686
1.4                 .733                     .586    .586

P           Co                         Ni                         Cu
(10⁶ atm)  a-z-y    Calc.   Obs.    a-z-y    Calc.   Obs.     a-z-y    Calc.   Obs.
0.1        4-8-1½   .953    .956    4-8-1½   .953    .954     4-8-1    .945    .940
0.2                 .921    .920             .921    .919              .898    .897
0.3                 .893    .890             .893    .889     4-8-1½   .865    .864
0.4                 .867    .865             .867    .865              .838    .836
0.5                 .843R   .843             .843R   .843              .814R   .814
0.6                 .821    .823             .821    .825              .792    .794
0.7                 .801    .806             .801    .808     4-8-3    .772    .777
0.8                 .782    .791    4-8-3    .790    .794              .760    .762
0.9        4-8-3    .769    .776             .779    .780              .749    .749
1.0                 .759    .764             .768    .768              .738    .737
1.1                 .749    .752             .758    .757              .728    .726
1.2                 .739    .741             .748    .747              .718    .716
1.3                 .730    .731             .739    .738              .708    .707
1.4                 .721    .721             .730    .729              .699    .698
1.5                 .712R   .712             .721R   .721              .690R   .690
1.6                 .704    .704

P           Ag                         Tl                         Th
(10⁶ atm)  a-z-y   Calc.   Obs.     a-z-y    Calc.   Obs.     a-z-y   Calc.   Obs.
0.1        4-8-1   .922    .929     4-4-3    .850    .853     4-8-1   .869    .870
0.2        4-8-2   .879    .881              .787    .783     4-8-2   .792    .795
0.3                .848    .845              .736R   .736             .747    .744
0.4                .820    .817     4-8-3    .702    .703             .710    .707
0.5                .794R   .794              .678R   .678             .677R   .677
0.6                .771    .775              .656    .658     4-8-3   .652    .652
0.7        4-8-4   .752    .759              .637    .642             .632R   .632
0.8                .741    .744     4-8-5    .623    .628             .613    .614
0.9                .730    .731              .614    .616             .596    .599
1.0                .720R   .720              .605R   .605     4-8-5   .583    .585
1.1                .710    .710              .597    .596             .572    .573
1.2                .701    .700              .588    .587             .562    .562
1.3                .692    .692              .581    .580             .553    .553
1.4                .683    .684              .573    .573             .544    .544
1.5                .675    .677              .566    .567             .535    .535
1.6                .667    .670

A rather surprising feature of these comparisons is that the agreement between the shock wave results and the theoretical volumes is as close as the agreement between Bridgman's static values and the theory. It is true that this set of measurements was deliberately selected for the comparison, and it represents the best results rather than the average, but in any event the close correlation is a significant confirmation of the validity of both the shock wave techniques and the theoretical relations.

The question that now arises is what course the compressibility follows beyond the pressure range of this table. In some cases a transition to a smaller base volume appears to be possible. Copper, for instance, may shift to the rotations of the preceding electropositive elements at some pressure above that of the tabulation. Aside from such special cases, the factors that determine the compressibility in the range below two million atmospheres have reached their limits. At the present stage of the investigation, however, the possibility that some new factor may enter into the picture at extreme pressures cannot be excluded. A "collapse" of the atomic structure of the kind envisioned by the nuclear theory is, of course, impossible, but as matters now stand we are not in a position to say that all aspects of the compressibility situation have been explored. It is conceivable that there may be some, as yet unknown, capability of change in the atomic motions that would increase the resistance to pressure beyond what now appears to be the ultimate limit.

Some shock wave measurements have been made at still higher pressure levels, and these should throw some light on the question. Unfortunately, however, the results are rather ambiguous. Three of the elements included in these experiments, lead, tin, and bismuth, follow the straight line established in Table 20 up to the maximum pressures of about four million atmospheres. On the other hand, five elements on which measurements were carried to maximums between three and five million atmospheres show substantially lower compressions than a projection of the Table 20 curves would indicate. The divergence in the case of gold, for example, is almost eight percent. But there are equally great differences between the results of different experiments, notably in the case of iron. Whether or not some new factor enters into the compression situation at pressures above those of Table 20 will therefore have to be regarded as an open question.

Chapter 5

Heat

If an atom is subjected to an external force of a transient nature, such as that involved in a violent contact, a motion is imparted to it. Where the magnitude of the force is great enough, the atom is ejected from the time region and the inter-atomic equilibrium is destroyed. If the force is not sufficient to accomplish this ejection, the motion is turned back at some intermediate point and it becomes a vibratory, or oscillating, motion.

Where two or more atoms are combined into a molecule, the molecule becomes the thermal unit. The statements about atoms in the preceding paragraph are equally applicable to these molecular units. In order to avoid continual repetition of the expression "atoms and molecules," the references to thermal units in the discussion that follows will be expressed in terms of molecules, except where we are dealing specifically with substances, such as aggregates of metallic elements, in which the thermal units are definitely single atoms. Otherwise the individual atoms will be regarded, for purposes of the discussion, as monatomic molecules.

The thermal motion is something quite different from the familiar vibratory motions of our ordinary experience. In the vibrations that we encounter in everyday life, there is a continuous shift from kinetic to potential energy, and vice versa, which results in a periodic reversal of the direction of motion. In such a motion the point of equilibrium is fixed, and is independent of the amplitude of the vibration. In the thermal situation, however, any motion that is inward in the context of the fixed reference system is coincident with the progression of the natural reference system, and it therefore has no physical effect. Motion in the outward direction is physically effective. From the physical standpoint, therefore, the thermal motion is a net outward motion that adds to the gravitational motion (which is outward in the time region) and displaces the equilibrium point in the outward direction.

In order to act in the manner described, coinciding with the progression of the natural reference system during the inward phase of the thermal cycle and acting in conjunction with gravitation in the outward phase, the thermal vibration must be a scalar motion. Here again, as in the case of the vibratory motion of the photons, the only available motion form is simple harmonic motion. The thermal oscillation is identical with the oscillation of the photon except that its direction is collinear with the progression of the natural reference system rather than perpendicular to it. However, the suppression of the physical effects of the vibration during the half of the cycle in which the thermal motion is coincident with the reference system progression gives this motion the physical characteristics of an intermittent unidirectional motion, rather than those of an ordinary vibration. Since the motion is outward during half of the total cycle, each natural unit of thermal vibration has a net effective magnitude of one half unit.

Inasmuch as the thermal motion is a property of the individual molecule, not an aspect of a relation between molecules, the factors that come into play at distances less than unity do not apply here, and the direction of the thermal motion, in the context of a stationary reference system, is always outward. As indicated earlier, therefore, continued increase in the magnitude of the thermal motion eventually results in destruction of the inter-atomic force equilibrium and ejection of the molecule from the time region. It should be noted, however, that the gravitational motion does not contribute to this result, as it changes direction at the unit boundary. The escape cannot be accomplished until the magnitude of the thermal motion is adequate to achieve this result unassisted.

When a molecule acquires a thermal motion it immediately begins transferring this motion to its surroundings by means of one or more of several processes that will be considered in detail at appropriate points later in this and the subsequent volumes. Coincident with this outflow there is an inflow of thermal motion from the environment, and, in the absence of an externally maintained unbalance, an equilibrium is ultimately reached at a point where inflow and outflow are equal. Any two molecules or aggregates that have established such an equilibrium with each other are said to be at the same temperature.

In the universe of motion defined by the postulates of the Reciprocal System, speed and energy have equal standing from the viewpoint of the universe as a whole. But on the low speed side of the neutral axis, where all material phenomena are located, energy is the quantity that exceeds unity. Equality of motion in the material sector is therefore synonymous with equal energy. Thus a temperature equilibrium is a condition in which inflow and outflow of energy are equal. Where the thermal energy of a molecule is fully effective in transfer on contact with other units of matter, its temperature is directly proportional to its total thermal energy content. Under these conditions,

E = kT   (5-1)

In natural units the numerical coefficient k is eliminated, and the equation becomes

E = T   (5-2)

Combining equation 5-2 with equation 4-3, we obtain the general gas equation, PV = T, or, in conventional units, where R is the gas constant,

PV = RT   (5-3)

These are the relations that prevail in the "ideal gas state." Elsewhere the relation between temperature and energy depends on the characteristics of the transmission process. Radiation originates three-dimensionally in the time region, and makes contact one-dimensionally in the outside region. It is thus four-dimensional, while temperature is only one-dimensional. We thus find that the energy of radiation is proportional to the fourth power of the temperature.

Erad = kT⁴   (5-4)

This relation is confirmed observationally. The thermal motion originating inside unit distance is likewise four-dimensional in the energy transmission process. However, this motion is not transmitted directly into the outside region in the manner of radiation. The transmission is a contact process, and is subject to the general inter-regional relation previously explained. Instead of E = kT⁴, as in radiation, the thermal relation is E² = k′T⁴, or

E = kT²   (5-5)

A modification of this relation results from the distribution of the thermal motion over three dimensions of time, while the effective component in thermal interchange is only one-dimensional. This is immaterial as long as the thermal motion is confined to a single rotational unit, but the effective component of the thermal motion of magnetic rotational displacement n is only 1/n³ of the total. We may therefore generalize equation 5-5 by applying this factor. Substituting the usual term heat (symbol H) for the time region thermal energy E, we then have

H = T²/n³   (5-6)

The general treatment of heat in conventional physical theory is empirically based, and is not significantly affected by the new theoretical development. It will not be necessary, therefore, to give this subject matter any attention in this present work, where we are following a policy of not duplicating information that is available elsewhere, except to the extent that reference to such information is required in order to avoid gaps in the theoretical development. The thermal characteristics of individual substances, on the other hand, have not been thoroughly investigated. Since they are of considerable importance, both from the standpoint of practical application and because of the light that they can shed on fundamental physical relationships, it is appropriate to include some discussion of the status of these items in the universe of motion.

One of the most distinctive thermal properties of matter is the specific heat, the heat increment required to produce a specific increase in temperature. This can be obtained by differentiating equation 5-6:

dH/dT = 2T/n³   (5-7)

Inasmuch as heat is merely one form of energy, it has the same natural unit as energy in general, 1.4918 × 10⁻³ ergs. However, it is more commonly measured in terms of a special heat energy unit, and for present purposes the natural unit of heat will be expressed as 3.5636 × 10⁻¹¹ gram-calories, the equivalent of the general energy unit. Strictly speaking, the quantity to which equation 5-7 applies is the specific heat at zero pressure, but the pressures of ordinary experience are very low on a scale where unit pressure is over fifteen million atmospheres, and the question as to whether the equation holds good at all pressures, an issue that has not yet been investigated theoretically, is of no immediate concern. We can take the equation as being applicable under any condition of constant pressure that will be encountered in practice.

The natural unit of specific heat is one natural unit of heat per natural unit of temperature. The magnitude of this unit can be computed in terms of previously established quantities, but the result cannot be expressed in terms of conventional units because the conventional temperature scales are based on the properties of water. The scales in common use for scientific purposes are the Celsius or Centigrade, which takes the ice point as zero, and the Kelvin, which employs the same units but measures from absolute zero. All temperatures stated in this work are absolute temperatures, and they will therefore be stated in terms of the Kelvin scale. For uniformity, the Kelvin notation (°K, or simply K) will also be applied to temperature differences instead of the customary Celsius notation (°C).

In order to establish the relation of the Kelvin scale to the natural system, it will be necessary to use the actual measured value of some physical quantity involving temperature, just as we have previously used the Rydberg frequency, the speed of light, and Avogadro's number to establish the relations between the natural and conventional units of time, space, and mass. The most convenient empirical quantity for this purpose is the gas constant. It will be apparent from the facts developed in the discussion of the gaseous state in a subsequent volume of this series that the gas constant is the equivalent of two-thirds of a natural unit of specific heat. We may therefore take the measured value of this constant, 1.9869 calories, or 8.31696 × 10⁷ ergs, per gram mole per degree Kelvin, as the basis for conversion from conventional to natural units. This quantity is commonly represented by the symbol R, and this symbol will be employed in the conventional manner in the following pages. It should be kept in mind that R = 2/3 natural unit. For general purposes the specific heat will be expressed in terms of calories per gram mole per degree Kelvin in order to enable making direct comparisons with empirical data compiled on this basis, but it would be rather awkward to specify these units in every instance, and for convenience only the numerical values will be given. The foregoing units should be understood.

Dividing the gas constant by Avogadro's number, 6.02486 × 10²³ per g-mole, we obtain the Boltzmann constant, the corresponding value on a single molecule basis: 1.38044 × 10⁻¹⁶ ergs/deg. As indicated earlier, this is two-thirds of the natural unit, and the natural unit of specific heat is therefore 2.07066 × 10⁻¹⁶ ergs/deg. We then divide unit energy, 1.49175 × 10⁻³ ergs, by this unit of specific heat, which gives us 7.20423 × 10¹² degrees Kelvin, the natural unit of temperature in the region outside unit distance (that is, for the gaseous state of matter).

We will also be interested in the unit temperature on the T³ basis, the temperature at which the thermal motion reaches the time region boundary. The 3/4 power of 7.20423 × 10¹² is 4.39735 × 10⁹. But the thermal motion is a motion of matter and involves the 2/9 vibrational addition to the rotationally distributed linear motion of the atoms. This reduces the effective temperature unit by the factor 1 + 2/9, the result being 3.5978 × 10⁹ degrees K.

On first consideration, this temperature unit may seem incredibly large, as it is far above any observable temperature, and also much in excess of current estimates of the temperatures in the interiors of the stars, which, according to our theoretical findings, can be expected to approach the temperature unit. However, an indication of its validity can be obtained by comparison with the unit of pressure, inasmuch as temperature and pressure are both relatively simple physical quantities with similar, but opposite, effects on most physical properties, and should therefore have units of comparable magnitude.
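The chain of conversions developed in the foregoing paragraphs can be verified numerically. The sketch below reproduces it end to end, using only the values quoted in the text; the last line anticipates the condensed-state unit derived in the paragraphs that follow.

```python
# Sketch: numerical verification of the unit conversions quoted above.

avogadro = 6.02486e23         # molecules per g-mole
gas_constant = 8.31696e7      # ergs per g-mole per deg K
unit_energy = 1.49175e-3      # natural unit of energy, ergs

boltzmann = gas_constant / avogadro           # 1.38044e-16 ergs/deg
unit_specific_heat = 1.5 * boltzmann          # R is 2/3 of the natural unit
gas_unit = unit_energy / unit_specific_heat   # 7.20423e12 deg K

t3_basis = gas_unit ** 0.75                   # 4.39735e9 deg K, the T^3 basis
effective = t3_basis / (1 + 2 / 9)            # 3.5978e9 deg K, after the 2/9
                                              #   vibrational addition
condensed = effective ** (1 / 3) / 3          # 510.8 deg K, the condensed-state
                                              #   unit derived below

print(f"{gas_unit:.5e}  {t3_basis:.5e}  {effective:.5e}  {condensed:.1f}")
```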
The conventional units, the degree K and the gram per cubic centimeter, have been derived from measurements of the properties of water, and are therefore approximately the same size. Thus the ratio of natural to conventional units should be nearly the same in temperature as in pressure. The value of the temperature unit just calculated, 3.5978 × 10⁹ degrees K, conforms to this theoretical requirement, as the natural unit of pressure derived in Volume I is 5.386 × 10⁹ g/cm³.

Except insofar as it enters into the determination of the value of the gas constant, the natural unit of temperature defined for the gaseous state plays no significant role in terrestrial phenomena. Here the unit with which we are primarily concerned is that applicable to the condensed states. Just as the gaseous unit is related to the maximum temperature of the gaseous state, the lower unit is related to the maximum temperature of the liquid state. This is the temperature level at which the unit molecule escapes from the time region in one dimension of space. The motion in this low energy range takes place in only one scalar dimension. We therefore reduce the three-dimensional unit, 3.5978 × 10⁹ K, to the one-dimensional basis, and divide it by 3 because of the restriction to one dimension of space. The natural unit applicable to the condensed state is then 1/3 × (3.5978 × 10⁹)^1/3 degrees K = 510.8°K.

The magnitude of this unit was evaluated empirically in the course of a study of liquid volume carried out prior to the publication of The Structure of the Physical Universe in 1959. The value derived at that time was 510.2, and this value was used in a series of articles on the liquid state that described the calculation of the numerical values of various liquid properties, including volume, viscosity, surface tension, and the critical constants. Both the 510.2 liquid unit and the gaseous unit were listed in the 1959 publication, but the value of the gaseous unit given there has subsequently been increased by a factor of 2 as a result of a review of the original derivation.

Since the basic linear vibrations (photons) of the atom are rotated through all dimensions, they have active components in the dimensions of any thermal motion, whatever that dimension may be, just as they have similar components parallel to the rotationally distributed motions. As we found in our examination of the effect on the rotational situation, this basic vibrational component amounts to 2/9 of the primary magnitude. Because the thermal motion is in time (equivalent space), its scalar direction is not fixed relative to that of the vibrational component. This vibrational component will therefore either supplement or oppose the thermal specific heat. The net specific heat, the measured value, is the algebraic sum of the two. This vibrational component does not change the linear relation of the specific heat to the temperature, but it does alter the zero point, as indicated in Fig. 2.

Figure 2

In this diagram the line OB′ is the specific heat curve derived from equation 5-7, assuming a constant value of n and a zero initial level. If the scalar direction of the vibrational component is opposite to that of the thermal motion, the initial level is positive; that is, a certain amount of heat must be supplied to neutralize the vibrational energy before there is any rise in temperature. In this case the specific heat follows the line AA′, parallel to OB′ and above it. If the scalar direction of the vibrational component is the same as that of the thermal motion, the initial level is negative, and the specific heat follows the line CC′, likewise parallel to OB′ but below it. Here there is an effective temperature due to the vibrational energy before any thermal motion takes place. Although this initial component of the molecular motion is effective in determining the temperature, its magnitude cannot be altered and it is therefore not transferable. Consequently, even where the initial level is negative, there is no negative specific heat. Where the sum of the negative initial level and the thermal component is negative, the effective specific heat of the molecule is zero.

It should be noted in passing that the existence of this second, fixed, component of the specific heat confirms the vibrational character of the basic constituent of the atomic structure, the constituent that we have identified as a photon. The demonstration that there is a negative initial level of the specific heat curve is a clear indication of the validity of the theoretical identification of the basic unit in the atomic structure as a vibratory motion.

Equation 5-7 can now be further generalized to include the specific heat contribution of the basic vibration: the initial level, which we will represent by the symbol I. The net specific heat, the value as measured, is then

dH/dT = 2T/n³ + I   (5-8)

Where there is a choice between two possible states, as there is between the positive and negative initial levels, the probability relations determine which of the alternatives will prevail. Other things being equal, the condition of least net energy is the most probable, and since the negative initial level requires less net energy for a given temperature than the positive initial level, the thermal motion is based on the negative level at low temperatures unless motion on this basis is inhibited by structural factors.

Addition of energy in the time region takes place by means of a decrease in the effective time magnitude, and it involves eliminating successive time units from the vibration period. The process is therefore discontinuous, but the number of effective time units under ordinary conditions is so large that the relative effect of the elimination of one unit is extremely small. Furthermore, observations of heat phenomena of the solid state do not deal with single molecules but with aggregates of many molecules, and the measurements are averages. For all practical purposes, therefore, we may consider that the specific heat of a solid increases in continuous relation to the temperature, following the pattern defined by equation 5-8.

As pointed out earlier in this chapter, the thermal motion cannot cross the time region boundary until its magnitude is sufficient to overcome the progression of the natural reference system without assistance from the gravitational motion; that is, it must attain unit magnitude. The maximum thermal specific heat, the total increment above the initial level, is the value that prevails at the point where the thermal motion reaches this unit level. We can evaluate it by giving each of the terms T and n in equation 5-7 unit value, and on this basis we find that it amounts to 2 natural units, or 3R. The normal initial level is -2/9 of this 3R specific heat, or -2/3 R. The 3R total is then reached at a net positive specific heat of 2⅓ R.

Beyond this 3R thermal specific heat level, which corresponds to the regional boundary, the thermal motion leaves the time region and undergoes a change which requires a substantial input of thermal energy to maintain the same temperature, as will be explained later. The condition of minimum energy, the most probable condition, is maintained by avoiding this regional change by whatever means are available. One such expedient, the only one available to molecules in which only one rotational unit is oscillating thermally, is to change from a negative to a positive initial level. Where the initial level is +2/3 R instead of -2/3 R, the net positive specific heat is 3⅔ R at the point where the thermal specific heat reaches the 3R limit. The regional transition is not required until this higher level is reached. The resulting specific heat curve is shown in Fig. 3.

Inasmuch as the magnetic rotation is the basic rotation of the atom, the maximum number of units that can vibrate thermally is ordinarily determined by the magnetic displacement. Low melting points and certain structural factors impose some further restrictions, and there are a few elements, and a large number of compounds, that are confined to the specific heat pattern of Fig. 3, or some portion of it. Where the thermal motion extends to the second magnetic rotational unit, to rotation two, we may say, using the same terminology that was employed in the inter-atomic distance discussion, the Fig. 3 pattern is followed up to the 2⅓ R level. At that point the second rotational unit is activated. The initial specific heat level for rotation two is subject to the same n³ factor as the thermal specific heat, and it is therefore 1/n³ × 2/3 R = 1/12 R. This change in the negative initial level raises the net positive specific heat corresponding to the thermal value 3R from 2.333 R to 2.917 R, and enables the thermal motion to continue on the basis of the preferred negative initial level up to a considerably higher temperature.

Figure 3

When the rotation two curve reaches its end point at 2.917 R net positive specific heat, a further reduction of the initial level by a transition to the rotation three basis, where the higher rotation is available, raises the maximum to 2.975 R. Another similar transition follows, if a fourth vibrating unit is possible. The following tabulation shows the specific heat values corresponding to the initial and final levels of each curve. As indicated earlier, the units applicable to the second column under each heading are calories per gram mole per degree Kelvin.

Vibrating    Effective Initial Level      Maximum Net Specific Heat
Units                                     (negative initial level)
1            -0.667 R      -1.3243        2.3333 R      4.6345
2            -0.0833 R     -0.1655        2.9167 R      5.7940
3            -0.0247 R     -0.0490        2.9753 R      5.9104
4            -0.0104 R     -0.0207        2.9896 R      5.9388
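All of these entries follow from the 1/n³ factor; a minimal sketch (the calorie figures differ very slightly from the tabulated ones, evidently because of the rounding of R used in the original computation):

```python
# Sketch: initial levels and maximum net specific heats for n vibrating
# units.  The initial level is -(2/3)R / n^3 and the thermal limit is 3R,
# so the maximum net value on the negative initial level is (3 - 2/(3n^3))R.

R = 1.9869  # cal per g-mole per deg K

for n in (1, 2, 3, 4):
    initial = -(2 / 3) / n**3      # in units of R
    max_net = 3 + initial          # 3R thermal limit plus the initial level
    print(f"n={n}: initial {initial:+.4f} R = {initial * R:+.4f} cal,"
          f" max net {max_net:.4f} R = {max_net * R:.4f} cal")
```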

Ultimately the maximum net positive specific heat that is possible on the basis of a negative initial level is attained. Here a transition to a positive initial level takes place, and the curve continues on to the overall maximum. As a result of this mechanism of successive transitions, each number of vibrating units has its own characteristic specific heat curve. The curve for rotation one has already been presented in Fig. 3. For convenient reference we will call this a type two curve. The different type one curves, those of two, three, and four vibrating units, are shown in Fig. 4. As can be seen from these diagrams, there is a gradual flattening and an increase in the ratio of temperature to specific heat as the number of vibratory units increases. The actual temperature scale of the curve applicable to any particular element or compound depends on the thermal characteristics of the substance, but the relative temperature scale is determined by the factors already considered, and the curves in Fig. 4 have been drawn on this relative basis.

Figure 4

As indicated by equation 5-8, the slope of the rotation two segment of the specific heat curve is only one-eighth of the slope of the rotation one segment. While this second segment starts at a temperature corresponding to 2 1/3 R specific heat, rather than from zero temperature, the fixed relation between the two slopes means that a projection of the two-unit curve back to zero temperature always intersects the zero temperature ordinate at the same point regardless of the actual temperature scale of the curve. The slopes of the three-unit and four-unit curves are likewise specifically related to those of the earlier curves, and each of these higher curves also has a fixed initial point. We will find this feature very convenient in analyzing complex specific heat curves, as each experimental curve can be broken down into a succession of straight lines intersecting the zero ordinate at these fixed points, the numerical values of which are as follows:

Vibrating   Specific Heat at 0º K (projected)
Units
1           -0.6667 R   -1.3243
2            1.9583 R    3.8902
3            2.6327 R    5.2298
4            2.8308 R    5.6234

These values and the maximum net specific heats previously calculated for the successive curves enable us to determine the relative temperatures of the various transition points. In the rotation three curve, for example, the temperatures of the first and second transition points are proportional to the differences between their respective specific heats and the 3.8902 initial level of the rotation two segment of the curve, as both of these points lie on this line. The relative temperatures of any other pair of points located on the same straight line section of any of the curves can be determined in a similar manner. By this means the following relative temperatures have been calculated, based on the temperature of the first transition point as unity.

Vibrating   Relative Temperature
Units       Transition Point   End Point
1           1.000               1.80
2           2.558               4.56
3           3.086               9.32
4           3.391              17.87
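The transition-point column can be reproduced directly from the two preceding tabulations. A minimal sketch in Python, assuming only the proportionality rule just stated (two points on the same straight segment have temperatures proportional to their heights above that segment's projected zero-degree level):

# Sketch: derive the relative transition-point temperatures from the
# tabulated specific heat levels (cal per g-mol per deg K).
transitions = [4.6345, 5.7940, 5.9104, 5.9388]   # specific heat at transitions 1-4
intercepts  = [-1.3243, 3.8902, 5.2298, 5.6234]  # projected 0 K levels, segments 1-4

T = [1.0]                          # first transition temperature taken as unity
for k in range(1, len(transitions)):
    b = intercepts[k]              # segment k+1 carries transitions k and k+1
    T.append(T[-1] * (transitions[k] - b) / (transitions[k - 1] - b))

print([round(t, 3) for t in T])    # -> [1.0, 2.558, 3.086, 3.391]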

The curves of Figs.3 and 4 portray what may be called the "regular" specific heat patterns of the elements. These are subject to modifications in certain cases. For instance, all of the electronegative elements with displacements below 7 thus far studied substitute an initial level of -0.66 for the normal -1.32. Another common deviation from the regular pattern involves a change in the temperature scale of the curve at one of the transition points, usually the first. For reasons that will be developed later, the change is normally downward. Inasmuch as the initial level of each segment of the curve remains the same, the change in the temperature scale results in an increase in the slope of the higher curve segment. The actual intersection of the two curve segments involved then takes place at a level above the normal transition point.

There are some deviations of a different nature in the upper portions of the curves where the temperatures are approaching the melting points. These will not be given any consideration at this time because they are connected with the transition to the liquid state and can be more conveniently examined in connection with the discussion of liquid properties.

As mentioned earlier, the quantity with which this and the next two chapters are primarily concerned is the specific heat at zero external pressure. In Chapter 6 the calculated values of this quantity will be compared with measured values of the specific heat at constant pressure, as the difference between the specific heat at zero pressure and that at the pressures of observation is negligible. Most conventional theory deals with the specific heat at constant volume rather than at constant pressure, but our analysis indicates that the measurement under constant pressure corresponds to the fundamental quantity.

CHAPTER 6

Specific Heat Patterns

Fig.5 is a specific heat curve derived from experimental data. The points shown in this graph are the measured values of the specific heat of silver. The accompanying solid lines are the segments of the theoretical four-unit curve of Fig.4 with the temperature scale located empirically. While the curve defined by the plotted points has the same general shape as the theoretical curve, it is quite different in appearance inasmuch as the sharp angles of the theoretical curve have been replaced by smooth and gradual transitions. The explanation of this difference lies in the manner in which the measurements are made.

As indicated by equation 5-8 and the curves in Figs.3 and 4, the specific heat of an individual molecule can be represented by a succession of straight lines. Experimental observations, however, are not made on single molecules, but on aggregates of molecules, and the observed temperature of the aggregate is the average of many different individual molecular temperatures, which are distributed about the average in accordance with the probability relations. Midway between the transition points the relation between temperature and specific heat for most of the individual molecules is such that their specific heats lie on the same straight line in the diagram. The average consequently lies on the same line, and coincides with the true molecular specific heat corresponding to the average temperature. In the neighborhood of a transition point, however, the molecules that are individually at the higher temperatures cannot continue on the same line beyond the 3R limit, and must conform to a lower curve based on a higher number of rotating units. This operates to reduce the specific heat of the aggregate below the true molecular value for the prevailing average temperature.

In the silver curve, Fig.5, for example, the true atomic specific heat at 75º K is 4.69. This would also be the average specific heat of the silver aggregate at that temperature if the silver atoms were able to continue vibrating on the basis of one rotating unit up to the point beyond which the probability distribution is negligible. But at a specific heat of 2 1/3 R (4.633) the vibration changes to the two-unit basis. Those atoms in the probability distribution that have specific heats above this level cannot conform to the one-unit line, but must follow a line that rises at a much slower rate. The lower specific heat of these atoms reduces the average specific heat of the aggregate, and causes the aggregate curve to diverge more and more from the straight line relation as the proportion of atoms reaching the transition point increases. The divergence reaches a maximum at the transition temperature, after which the specific heat of the aggregate gradually approaches the upper atomic curve. Because of this divergence of the measured (aggregate) specific heats from the values applicable to the individual atoms, the specific heat of silver at 75º K is 4.10 instead of 4.69.

Figure 5: Specific Heat - Silver

A similar effect in the opposite direction can be seen at the lower end of the silver curve. Here the specific heat of the aggregate (the average of the individual values) could stay on the one-unit theoretical curve only if it were possible for the individual specific heats to fall below zero. But there is no negative thermal energy, and the atoms which are individually at temperatures below the point where the curve intersects the zero specific heat level all have zero thermal energy and zero specific heat. Thus there is no negative deviation from the average, and the positive deviation due to the presence of atoms with individual temperatures above zero constitutes the specific heat of the aggregate. The specific heat of a silver atom at 15º K is zero, but the measured specific heat of a silver aggregate at an average temperature of 15º K is 0.163.

Evaluation of the deviations from the linear relationship in these transitional regions involves the application of probability mathematics, the validity of which was assumed as a part of the Second Fundamental Postulate of the Reciprocal System. For reasons previously explained, a full treatment of the probability aspects of the phenomena now under discussion is beyond the scope of this work, but a general consideration of the situation will enable us to arrive at some qualitative conclusions which will be adequate for present purposes.

At the present stage of development of probability theory there are a number of probability functions in general use, each of which seems to have advantages for certain applications. For the purpose of this work the appropriate function is one that expresses the results of pure chance without modification by any other factor. Such a function is strictly applicable only where the units involved are all exactly alike, the distribution is perfectly random, the units are infinitely small, the variability is continuous, and the size of the group is infinitely large. The ordinary classes of events around which most present-day probability theory has been constructed, such as coin and dice experiments, obviously fail to meet these requirements by a wide margin. Coins, for instance, are not continuously variable with an infinite number of possible states. They have only two states, heads and tails. This means that a major item of uncertainty has become almost a certainty, and the shape of the probability distribution curve has been altered accordingly. Strictly speaking, it is no longer a true probability curve, but a combination curve of probability and knowledge.

The basic physical phenomena do conform closely to the requirements of a system in which the laws of pure chance are valid. The units are nearly uniform, the distribution is random, the variability is continuous, or nearly continuous, and the size of the group, although not infinite, is extremely large. If any of the probability functions in general use can qualify as representing pure chance, the most likely prospect is the so-called "normal" probability function, which can be expressed as

y = 1/√(2π) e^(-x²/2)   (6-1)

Tables of this function and its integral to fifteen decimal places are available.6 It has been found in the course of this work that sufficient accuracy for present purposes can be attained by calculating probabilities on the basis of this expression, and it has therefore been utilized in all of the probability applications discussed herein, without necessarily assuming the absolute accuracy of this function in these applications, or denying the existence of more accurate alternatives. For example, Maxwell's asymmetric probability distribution is presumably accurate in the applications for which it was devised (a point that has not yet been examined in the context of the Reciprocal System), and it may also apply to some of the phenomena discussed in this work. However, the results thus far obtained, particularly in application to the liquid properties, favor the normal function. In any event it is clear that if any error is introduced by utilizing the normal function it is not large enough to be significant in this first general treatment of the subject matter.

On the foregoing basis, the distribution of molecules with different individual temperatures takes the form of a probability function φ(t), where t is the deviation from the average temperature. The contribution of the φ(t) molecules at any specified temperature to the deviation of the specific heat from the theoretical value corresponding to the average temperature depends not only on the number of these molecules but also on the magnitude of the specific heat deviation attributable to each molecule; that is, the difference between the specific heat of the molecule and that of a molecule at the average temperature of the aggregate. Since the specific heat segment from which the deviation takes place is linear, this deviation is proportional to the temperature difference t, and may be represented as kt. The total deviation due to the φ(t) molecules at temperature t is then kt φ(t), and the sum of all deviations in one direction (positive or negative) may be obtained by integration.

It is quite evident that the deviations of the experimental specific heat curves from the theoretical straight lines, both at the zero level and at the transition point, have the general characteristics of the probability curves. However, the experimental values are not accurate enough, particularly in the temperature range of the lower transition, to make it worth while to attempt any quantitative correlations between the theoretical and experimental results. Furthermore, there is still some theoretical uncertainty with respect to the proper application of the probability function that prevents specifying the exact location of the probability curve. The uncertain element in the situation is the magnitude of the probability unit. Equation 6-1 is complete mathematically, but in order to apply it, or any of its derivatives, to any physical situation it is necessary to ascertain the physical unit corresponding to the mathematical unit.

One pertinent question still lacking a definite answer is whether this probability unit is the same for all substances. If so, the lower portion of the curve, when reduced to a common temperature base, should be the same for all substances with the -1.32 initial level. On this basis, the specific heat of the aggregate at the temperature T0, where the theoretical curve intersects the zero axis, should be a constant. Actually, most of the elements with the -1.32 initial level do have a measured specific heat in the neighborhood of 0.20 at this point, but a few others show substantial deviations from this value. It is not yet clear whether this is a result of variability in the probability unit, or reflects inaccuracies in the experimental values. Whether all of the curves with the same maximum deviation (0.20) are coincident below T0 is likewise still somewhat uncertain. There is a greater spread in the observed specific heats below 0.20 than can be ascribed to errors in measurement, but most of the scatter can probably be explained as the result of lack of thermal equilibrium. At these low temperatures it no doubt takes a long time to establish equilibrium, and even an accurate measurement will not produce the correct result unless the aggregate is in thermal equilibrium. It is significant that the specific heats of the common elements which have been studied most extensively deviate only slightly from a smooth curve in this low temperature region. Fig.6, which shows the measured values of the specific heats of six of these elements on a temperature scale relative to T0, demonstrates this coincidence.

If the probability unit is the same for all, or most, of the elements, as these data suggest, the deviation of the experimental curve from the theoretical curve for the single atom at the first transition point, T1, should also have a constant value.
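The averaging mechanism described in the last few paragraphs can be illustrated numerically. The following sketch in Python (the relative temperature scale, the distribution width sigma, and the sample points are illustrative assumptions, not values from the text) averages the theoretical piecewise-linear specific heat of an individual molecule over a normal distribution of individual temperatures. The output shows the two effects discussed above: a positive aggregate specific heat at T0, where the single-molecule value is zero, and a deficit at the transition point, where the corner of the theoretical curve is rounded off:

# Illustrative sketch (hypothetical parameters): average the piecewise-linear
# molecular specific heat over a normal distribution of molecular temperatures
# to show how the sharp corners of the theoretical curve are smoothed out
# in measurements on an aggregate.
import math

def molecular_c(T):
    # Single-molecule specific heat, in units of R: a straight line from
    # -2/3 R at zero temperature to the 2 1/3 R transition at T = 1
    # (relative scale), then a segment with one-eighth the slope.
    # Negative thermal energy being impossible, the value is floored at zero.
    c = -2.0/3.0 + 3.0*T if T <= 1.0 else 7.0/3.0 + (3.0/8.0)*(T - 1.0)
    return max(c, 0.0)

def aggregate_c(T_avg, sigma=0.08, steps=401):
    # Average molecular_c over N(T_avg, sigma^2) by direct summation.
    total = weight = 0.0
    for i in range(steps):
        t = -4.0*sigma + 8.0*sigma*i/(steps - 1)   # span +/- 4 sigma
        w = math.exp(-t*t/(2.0*sigma*sigma))
        total += w * molecular_c(T_avg + t)
        weight += w
    return total/weight

for T in (2.0/9.0, 0.5, 1.0, 1.3):   # T0, mid-segment, transition point, above
    print(f"T = {T:5.3f}   molecule {molecular_c(T):6.3f} R   aggregate {aggregate_c(T):6.3f} R")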
Preliminary examination of the curves of the elements that follow the regular pattern indicates that the values of this deviation actually do lie within a range extending from about 0.55 to about 0.70. Considerable additional work will be required before these curves can be defined accurately enough to determine whether there is complete coincidence, but present indications are that the deviation at T1 is, in fact, a constant for all of the regular elements, and is in the neighborhood of three times the deviation at T0.

Figure 6: Specific Heat - Low Temperatures

With the benefit of the foregoing information as to the general nature of the deviations from the theoretical curves of Chapter 5 due to the manner in which the measurements are made, we are now prepared to examine the correlation between the theoretical curves and the measured specific heats. In order to arrive at a complete definition of the specific heat of a substance it is not only necessary to establish the shapes of the specific heat curves, the objective at which most of the foregoing discussion is aimed, but also to define the temperature scale of each curve. Although the theoretical conclusions with respect to these two aspects of the specific heat situation, like all other conclusions in this work, are derived by developing the consequences of the fundamental postulates of the Reciprocal System of theory, they are necessarily reached by two different lines of theoretical development. For this reason a more meaningful comparison with the experimental data can be presented if we deal with these two aspects independently. In this chapter, therefore, the experimental values will be compared graphically with theoretical curves in which the temperature scales are empirical. Chapter 7 will complete the definition of the curves by deriving the relevant temperature magnitudes.

The curves of Fig.7 are typical of those of most of the elements.7 As indicated in Fig.4, the final straight line segment of each curve occupies the greater part of the temperature range of the solid state in the case of the high melting point elements. The significant features of the curves are therefore confined to the lower temperatures, and in order to bring them out more clearly only the lower temperature range (up to 300º K) is shown in the illustrations that follow. The remaining sections of the curves of Fig.7 are extensions of the lines shown in the diagram, except in the case of tungsten, which undergoes a transition to the four-unit status at about 325º K.

Figure 7: Specific Heat

Fig.8 is a similar group of specific heat curves for four of the electronegative elements with the -0.66 initial level. Aside from this higher initial level these curves are identical with those of Fig.7 when all are reduced to a common temperature scale. The transition to the two-unit vibration takes place at 4.63 (2¹/3 R) regardless of the higher initial level. This point will be given further consideration in Chapter 7. The upper portions of the lead and antimony curves, which are not shown on the graph, are extensions of the lines in the diagram. Arsenic and silicon have transitions at temperatures above 300º K.

Figure 8: Specific Heat

As noted in Chapter 5, there are a number of elements that undergo a modification of the temperature scale at the first transition point. Two curves with the modified second segment are shown in Fig.9. These two curves actually apply to four elements, as the specific heat of lithium follows the aluminum curve, while that of ruthenium coincides with the molybdenum curve. Coincidence of the specific heat curves of different elements, as in the instances mentioned, is not as uncommon as might be expected. The number of possible curve patterns is quite limited, and, as we will see in the next chapter, where the nature of the change in the temperature scale will be examined, the temperature factors are confined to specific values mainly within a relatively narrow range.

Also included in Fig.9 is an example of a specific heat curve for an element which undergoes an internal rearrangement that modifies the thermal pattern. The measurements shown for samarium follow the regular pattern up to the vicinity of the first transition point at 35º K. Some kind of modification of the molecular structure is evidently initiated at this point in lieu of, or in addition to, the normal transition to the two-unit vibrational status. This absorbs a considerable quantity of heat, which manifests itself as an addition to the measured specific heat over the next portion of the temperature range. By about 175º K the readjustment is complete, and the specific heat returns to the normal curve. Most of the other rare earth elements undergo similar readjustments at comparable temperatures. Elsewhere, if changes of this kind take place at all, they almost always occur at relatively high temperatures. The reason for this peculiarity of the rare earth group is, as yet, unknown.

Figure 9: Specific Heat

All of the types of deviations from the regular pattern that have been discussed thus far are found in the electronegative elements of the lower rotational groups. There is also an additional source of variability in the specific heats of these elements, as their atoms can combine with each other to form molecules. The result is a wide enough variety of behavior to give almost every one of these elements a unique specific heat curve. Of special interest are those cases in which the variation is accomplished by omitting features of the regular pattern. The neon curve, for example, is a single straight line from the -1.32 initial level to the melting point. The specific heat curve of the hydrogen molecule, Fig.10, is likewise a single straight line, but hydrogen has no rotational specific heat component at all, and this line therefore extends only from the negative initial level, -1.32, to the specific heat of the positive initial level, +1.32, at which point melting takes place.

The specific heats of binary compounds based on the normal orientation, simple combinations of Division I and Division IV elements, follow the same pattern as those of the electropositive elements. In these compounds each atom behaves as an individual thermal unit, just as it would in a homogeneous aggregate of like atoms. The molecular specific heats of such compounds are twice as large as the values previously determined for the elements, not because the specific heat per atom is any different, but because there are two atoms in each formula molecule.

Figure 10: Specific Heat - Hydrogen

The curves for KCl and CaS, Fig.11, illustrate the specific heat pattern of this class of compounds. Some binary compounds of other structural types conform to the same regular pattern as in the curve for AgBr, also shown in Fig.11.

Figure 11

As in the elements, there is also a variation of this regular pattern in which certain compounds of the electronegative elements have a higher initial level, but in compounds such as ZnO and SnO this level is zero, rather than -0.66, as it is in the elements.

Some of the larger molecules similarly act thermally as associations of independent atoms. CaF2 and FeS2 are typical. More often, however, two or more of the constituent atoms of the molecule act as a single thermal unit. For example, both the KHF2 molecule, which contains four atoms, and the CsClO4 molecule, which contains six, act thermally as three units. In the subsequent discussion the term thermal group will be used to designate any combination of atoms that acts as a single thermal unit. Where individual atoms participate in thermal motion jointly with groups of atoms, the individual atoms will be regarded as monatomic groups. On this basis we may say that there are three thermal groups in each of the KHF2 and CsClO4 molecules.

The great majority of compounds not only form thermal groups but also alter the number of groups in the molecule as the temperature varies. A common pattern is illustrated by the chromium chlorides. CrCl2 acts as a single thermal group at very low temperatures; CrCl3 as two. The initial specific heat levels are -1.32 and -2.64 respectively. There is a gradual increase in the average number of thermal groups per molecule up to the first transition point, at which temperature all atoms are acting independently. At the initial point of the second segment of the curve this independent status is maintained, and above the transition temperature the CrCl2 molecule acts as three thermal groups, while CrCl3 has four.

At the present stage of the investigation we can determine from theory the possible ways in which a molecule can split up into thermal groups, but we are not yet able to specify on theoretical grounds just which of these possibilities will prevail at any given temperature, or where the transition from one to the other will take place. The theoretical information thus far developed does, however, enable us to analyze the empirical data and to establish the specific heat pattern of each substance; that is, to determine just how it acts thermally. Aside from some cases, mainly involving very large molecules, where the specific heat pattern is unusually complex, and those instances where experimental errors lead to erroneous interpretation, it is possible to identify the effective number of thermal groups at the critical points of the curves. Once this information is available for any substance, the definition of its specific heat curve is essentially complete, except for the temperature scale, the determinants of which will be identified in Chapter 7. Where n is the number of active thermal groups in a compound, the initial level is -1.32 n, the initial point of the second segment of a Type 1 curve is 3.89 n, and the first transition point is 4.63 n.

The tendency of the atoms of multi-atom molecules to form thermal groups is particularly evident where the molecules contain radicals, because of the major differences in the cohesive forces that are responsible for the existence of the radicals. The extent to which the association into thermal groups is maintained naturally depends on the relative strength of the cohesive and disruptive forces. Those radicals such as OH and CN in which the bonds are very strong act as single thermal groups under all ordinary conditions. Those with somewhat weaker bonding (CO3, SO4, NO3, etc.) also act as single units at the lower temperatures. Thus we find that at the initial points of both the first and second segments of the specific heat curves there are two groups in MnCO3, three in Na2CO3, four in KAl(SO4)2, five in Ca3(PO4)2, and so on. At higher temperatures, however, radicals of this class split up into two or more thermal groups. Still weaker radicals such as ClO4 constitute two thermal groups even at low temperatures.

It was mentioned in Volume I that the boundary line between radicals and groups of independent atoms is rather indefinite. In general, the margin of bond strength required for a structural radical is relatively large, and we find many groups commonly recognized as radicals crystallizing in structures such as the CaTiO3 cube in which the radical, as such, plays no part. The margin required in thermal motion is much smaller, particularly at the lower temperatures, and there are many atomic groups that act thermally in the same manner as the recognized radicals. In Li2CO3, for example, the two lithium atoms act as a single thermal group, and the specific heat curve of this compound is similar to that of MgCO3 rather than of Na2CO3.

Extension of the thermal motion by breaking some of the stronger bonds at the higher temperatures gives rise to a variety of modifications of the specific heat curves. For example, MoS2 has only two thermal groups in the lower range, but the S2 combination breaks up as the temperature rises, and all atoms begin vibrating independently. VCl2 similarly goes from one group to three. Splitting of the radical accounts for a change from two groups to three in SrCO3, from one to three in AgNO3, and from two to six in (NH4)2SO4. All of these alterations take place at or prior to the first transition point. Other compounds make the first transition on the initial basis and break up into more thermal groups later.
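These critical levels are simple multiples of the group count n, so they can be generated directly. A minimal sketch in Python, using the group counts cited in the discussion above:

# Sketch of the thermal-group rules stated above: for n active thermal
# groups the initial level is -1.32 n, the initial point of the second
# segment of a Type 1 curve is 3.89 n, and the first transition point
# is 4.63 n (all in cal per gram mole per degree K).

def critical_levels(n):
    return (-1.32 * n, 3.89 * n, 4.63 * n)

# Group counts taken from the examples in the text.
for formula, n in [("KHF2", 3), ("CsClO4", 3), ("MnCO3", 2),
                   ("Na2CO3", 3), ("KAl(SO4)2", 4), ("Ca3(PO4)2", 5)]:
    initial, second, t1 = critical_levels(n)
    print(f"{formula:11s} n={n}:  initial {initial:+6.2f}"
          f"   2nd segment initial {second:6.2f}   T1 level {t1:6.2f}")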
In a common pattern, a radical that acts as one thermal group at the low temperatures splits into two groups in the temperature range of the second segment of the curve, just as the CO3 radical in SrCO3, PbCO3 and similar compounds does at a lower level. There are a number of structures such as KMnO4 and KIO3, where this increases the number of groups in the molecule from two to three. Pb3(PO4)2, in which there are two radicals, goes from five to seven, and so on.

The effect of water of crystallization is variable, depending on the strength of the cohesion. For example, BaCl2.2H2O acts as three thermal groups at the lower temperatures, the water molecules being firmly bound to the atoms of the compound. As the temperature increases these bonds give way, and the molecule begins vibrating on a five-group basis. In Al2(SO4)3.6H2O and in NH4Al(SO4)2.12H2O the bonds with the water molecules remain fixed through the entire experimental range, up to about 300º K, and the thermal groups in these hydrates are five and six respectively, just as in the corresponding anhydrous compounds.

An example of a drastic change in thermal behavior due to the disruption of inter-atomic bonds by thermal forces is shown in Fig.12. The radical CrO3 in the compound AgCrO3 is a single thermal group at very low temperatures. There is a gradual separation into two groups in the temperature range up to the first transition point, and the change to the two-unit vibration is made on the basis of a two-group radical. At about 150º K all four atoms in the radical begin vibrating independently, and the molecule undergoes a transition from the second segment of a three-group curve to the second segment of a five-group curve. At about 250º K the compound makes the normal transition to three-unit vibration, continuing as five thermal groups.

The compounds used as examples in the foregoing discussion were selected mainly on the basis of the availability of experimental data within the significant temperature ranges. For an accurate definition of the slope of each of the straight line segments of any empirical curve it is necessary to have measurements in the temperature range where the deviations due to the proximity of a transition point are negligible. The examples have been taken from among those of the experimental results that satisfy this requirement.

Figure 12: Specific Heat - AgCrO3

In a theoretical treatment of specific heat such as that in this present work it is necessary to deal with this quantity on a per molecule basis. For practical application, however, it is more convenient to use the specific heat per unit of mass, and most of the collected data are expressed in this manner. It should be noted that the effect of association into thermal groups is to reduce the specific heat per unit of mass. For this reason, the specific heat of most complex compounds is relatively low at low temperatures, and rises toward the values applicable to individual atoms as increasing temperature breaks up the original thermal groups.

The simplest organic compounds, those composed of only two or three structural units, generally divide into no more than two thermal groups. Many of the somewhat larger organic molecules, particularly among the ring structures and branched compounds, follow the same rule. The specific heat relations of these compounds are similar to those of the inorganic compounds, except that there are more organic compounds in which the thermal motion is restricted to one rotational unit. These substances, the hydrocarbons and some other compounds of the lower elements, undergo a transition to a positive initial level on reaching their first (and only) transition point. The resulting specific heat curve, the one illustrated in Fig.3, is not much more than a straight line with a bend in it. A few compounds, including ethane and carbon monoxide, even omit the bend, and do not make the transition to the positive initial level.

Further addition of structural units, such as CH2 groups, to the simple organic compounds results in the activation of internal thermal groups, units that vibrate thermally within the molecules. The general nature of the thermal motion of these internal groups is identical with that of the thermal motion of the molecule as a whole. But the internal motion is independent of the molecular thermal motion, and its scalar direction (inward or outward) is independent of the scalar direction of the molecular motion. Outward internal motion is thus coincident with outward molecular motion during only one quarter of the vibrational cycle. Since the effective magnitude of the thermal motion, which determines the specific heat, is the scalar sum of the internal and molecular components, each unit of internal motion adds one-half unit of specific heat during half of the molecular cycle. It has no thermal effect during the other half of the cycle, when the molecule as a whole is moving inward.

Because of the great diversity of the organic compounds, the specific heat patterns occur in a correspondingly large variety. The effect of internal motion in those organic compounds in which it is present is well illustrated, however, by the specific heats of the normal paraffins. The values of the initial levels and the specific heat at T1 for the compounds of this series in the range from C3 (propane) to C16 (hexadecane) are listed in Table 21, together with the number of internal thermal units in the molecule of each compound.

Table 21: Specific Heats - Paraffin Hydrocarbons

              Internal        Initial Levels     Specific Heat
              Thermal Units   1st       2nd      at T1
Propane        0              -2.64     2.64       9.27
Butane         0              -2.64     2.64       9.27
Pentane        2              -2.64     6.62      13.90
Hexane         3              -2.64     7.95      16.22
Heptane        4              -3.96     9.27      18.54
Octane         5              -3.96    10.59      20.86
Nonane         6              -5.30    11.92      23.18
Decane         7              -5.30    13.24      25.49
Hendecane      8              -5.30    14.57      27.81
Dodecane       9              -6.62    15.89      30.12
Tridecane     10              -6.62    17.22      32.44
Tetradecane   11              -6.62    18.54      34.76
Pentadecane   12              -6.62    19.86      37.08
Hexadecane    13              -7.95    21.19      39.39
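Two regularities run through Table 21, as the discussion following the table brings out: each internal thermal unit adds 2.317 (half of the 4.63 transition value) to the specific heat at T1, above the two-group base of 9.27, and in the higher compounds the second-segment initial level is 1.324 n, where n is the number of structural groups in the chain. A minimal sketch in Python reproduces the tabulated values to within 0.01 rounding:

# Sketch: regenerate the Table 21 values for the normal paraffins from
# the two regularities stated in the accompanying discussion.

def paraffin(carbons, internal_units):
    t1 = 9.27 + 2.317 * internal_units          # specific heat at T1
    # second-segment initial level: 2.64 for propane/butane, else 1.324 n,
    # where n is the number of structural groups (CH3 and CH2) in the chain
    second_initial = 2.64 if internal_units == 0 else 1.324 * carbons
    return second_initial, t1

for name, c, units in [("pentane", 5, 2), ("octane", 8, 5), ("hexadecane", 16, 13)]:
    second, t1 = paraffin(c, units)
    print(f"{name:10s}  2nd initial {second:6.2f}   T1 specific heat {t1:6.2f}")
# matches the tabulated 6.62/13.90, 10.59/20.86, and 21.19/39.39 to within 0.01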

Propane and butane have only the two molecular thermal groups corresponding to the positive and negative ends of the molecules, and their specific heat at T1 is the normal two-group value: 9.27. Beginning with two internal groups in pentane, each added CH2 structural group becomes an internal thermal unit, and adds 2.317 to the total specific heat of the molecule at the transition point. The initial level of the first segment of the specific heat curve is -2.64 (the two-group value) in the lower compounds, and changes slowly, adding units of -1.32, as the length of the chain increases. The initial level of the second segment is 2.64 in butane and propane. In the higher compounds, each of which consists of n structural groups (CH2 and CH3), this second initial level is 1.324 n. The values thus derived theoretically are all consistent with the experimental curves. In a few cases the intersection of the two curve segments may not coincide with the calculated specific heat of the transition point, but these deviations, if they are real, are small enough to be explainable on the basis of changes in the temperature factors, the nature of which will be one of the subjects of discussion in Chapter 7.

Branching of a hydrocarbon chain tightens the structure and tends to reduce the number of internal thermal units. For example, octane has five internal thermal units, and a specific heat of 20.86 at the transition point. But 2,2,4-trimethylpentane, a branched compound with the same composition, has no internal motion at all, and the T1 specific heat of this compound is 9.27, identical with that of the C3 paraffin, propane. Ring formation has a similar effect. Ethylbenzene and the xylenes, which are also C8 compounds, have some internal motion, but their T1 specific heats are 11.59 (one internal unit) and 13.90 (two internal units) respectively, well below the octane level. In Fig.13 the specific heat curves of hexane (straight chain) and benzene (ring), both C6 hydrocarbons, are contrasted.

The subject matter of this and the preceding five chapters consists of various aspects of the volumetric and thermal relations of material substances. The study of these relations was the principal avenue of approach to the clarification of basic physical processes that ultimately led to the identification of the physical universe as a universe of motion, and the determination of the nature of the fundamental features of that universe. These relations were examined in great detail over a period of many years, during which thousands of experimental results were analyzed and studied. Incorporation of the accumulated mass of information into the theoretical structure was the first task undertaken after the formulation of the postulates of the Reciprocal System of theory, and it has therefore been possible to present a reasonably complete description of each of the phenomena thus far discussed, including what we may call the small-scale effects.

Beginning with the next chapter, we will be dealing with subjects not covered in the inductive phase of the theoretical development. In this second phase, the deductive development, we are extending the application of the theory to all of the other major fields of physical science, in order to demonstrate that it is, in fact, a general physical theory. Obviously, where the area to be covered is so large, no individual investigator can expect to carry the development into great detail. Consequently, some of the conclusions expressed in the subsequent pages with respect to the small-scale features of the areas covered are subject to a degree of uncertainty. In other cases it will be necessary to leave the entire small-scale pattern for some future investigation.

Figure 13: Specific Heat

CHAPTER 7

Temperature Relations

As explained in introducing the comparisons of the theoretical specific heats with experimental results, the curves in Figs.5 to 13 verify only the specific heat pattern, the temperature scale of each curve being adjusted to the empirical results. In order to complete the definition of the curves we will now turn our attention to the temperature relations.

All of the distinctive properties of the different kinds of matter are determined by the rotational displacements of the atoms of which these substances are composed, and by the way in which the displacements enter into the various physical phenomena. As stated in Volume I,

The behavior characteristics, or properties, of the elements are functions of their respective displacements. Some are related to the total net effective displacement... some are related to the electric displacement, others to the magnetic displacement, while still others follow a more complex pattern. For instance, valence, or chemical combining power, is determined by either the electric displacement or one of the magnetic displacements, while the inter-atomic distance is affected by both the electric and magnetic displacement, but in different ways.

The great variety of physical phenomena, and the many different ways in which different substances participate in these phenomena, result from the extension of this "more complex pattern" of behavior to a still greater degree of complexity. One of these more complex patterns was examined in Chapter 4, where we found that the response of the solid structure to compression is related to the cross-section against which the pressure is exerted. The numerical magnitude involved in this relation is determined by the product of the effective cross-sectional factors, together with the number of rotational units that participate in the action, a magnitude that determines the force per unit of the cross-section. Inasmuch as one of the dimensions of the cross-section may take either the effective magnetic displacement, represented by the symbol b in the earlier discussion, or the electric displacement, represented by the symbol c, two new symbols were introduced for purposes of the compressibility chapter: the symbol z to represent the second displacement entering into the cross-section (either b or c), and the symbol y to represent the number of effective rotational units (related to the third of the displacements). The a-b-c factors were thus represented in the form a-z-y.

The values of these factors relative to the positions of the elements in the periodic table follow the same general pattern in application to specific heat as in compressibility, and most of the individual values are either close to those applying to compressibility or systematically related to those values. We will therefore retain the a-z-y symbols as a means of emphasizing the similarity. But the nature of the thermal relations is quite different from that of the relations that apply to compressibility. The temperature is not related to a cross-section; it is determined by the total effective rotation. Consequently, instead of the product, azy, of the effective rotational factors, the numerical magnitude defining the temperature scale of the thermal relations is the scalar sum, a+z+y, of these rotational values.

This kind of quantity is quite foreign to conventional physics. The scalar aspect of vectorial motion is recognized; that is, speed is distinguished from velocity. But orthodox physical thought does not recognize the existence of motion that is inherently scalar. In the universe of motion defined by the postulates of the Reciprocal System of theory, on the other hand, all of the basic motions are inherently scalar. Vectorial motions can exist only as additions to certain kinds of combinations of the basic scalar motions. Scalar motion in one dimension, when seen in the context of a stationary spatial reference system, has many properties in common with vectorial motion. This no doubt accounts for the failure of previous investigators to recognize its existence. But when motion extends into more than one dimension there are major differences in the way these two types of motion present themselves (or do not present themselves) to observation.

Any number of separate vectorial motions of a point can be combined into a single resultant, and the position of the point at any specified time can be represented in a spatial system of reference. This is a necessary consequence of the fact that vectorial motion is motion relative to that system of reference. But scalar motions cannot be combined vectorially. The resultant of scalar motion in more than one dimension is a scalar sum, and it cannot be identified with any one point in spatial coordinates. Such motion is therefore incapable of representation in a spatial reference system of the conventional type. It does not follow, however, that inability to represent this motion within the context of the severely limited kind of reference system that we are accustomed to use means that such motion is non-existent. To be sure, our direct perception of physical events is limited to those that can be represented in this type of reference system, but Nature is not under any obligation to stay within the perceptive capabilities of the human race. As pointed out in Chapter 3, Volume I, where the subject of reference systems was discussed at length, there are many aspects of physical existence (that is, many motions, combinations of motions, or relations between motions) that cannot be represented in any single reference system.

This is not, in itself, a new or unorthodox conclusion. Most modern physicists, including all of the leading theorists, have realized that they cannot accommodate all of present-day physical knowledge within the limitations of fixed spatial reference systems. But their response has been the drastic step of cutting loose from physical reality, and building their fundamental theories in a shadow realm where they are free from the constraints of the real world. Heisenberg states their position explicitly. "The idea of an objective real world whose smallest parts exist objectively in the same sense as stones and trees exist, independently of whether or not we observe them... is impossible," 8 he says. In the strange half-world of modern physical theory the only realities are mathematical symbols. Even the atom itself is "in a way only a symbol," 9 Heisenberg tells us. Nor is it required that symbols be logically related or understandable. Nature, these front-rank theorists contend, is inherently ambiguous and subject to uncertainties of a fundamental and inescapable nature. "The world is not intrinsically reasonable or understandable," Bridgman explains. "It acquires these properties in ever-increasing degree as we ascend from the realm of the very little to the realm of everyday things." 10

What the Reciprocal System of theory has done in this area is to show that once the true status of the physical universe as a universe of motion is recognized, and the properties of space and time are defined accordingly, there is no need for the retreat from reality, or for the attempt to blame Nature for the prevailing inability to understand the basic relations. The existence of phenomena not capable of representation in a spatial reference system is a fact that we must come to terms with, but the contribution of the Reciprocal System has been to show that the phenomena outside the scope of the conventional spatial reference systems can be described and evaluated in terms of the same real entities that exist within the reference system.
The scalar sum of the magnitudes of motions in different dimensions, the quantity that we will now use in analyzing the temperature relations, is an item of this nature. It is just as real as any other physical quantity, and its components, the motions in the individual dimensions, are motions of the same nature as those one-dimensional scalar motions that are capable of representation in the spatial reference systems, even though the scalar sum cannot be so represented in any manner accessible to our direct perception.

In the theoretical minimum situation, where the effective thermal factors are 1-0-0, and the scalar sum of these factors is one unit, the temperature of the initial negative level is one unit out of the total of 128 that corresponds to the full 510.7 degrees temperature unit of the condensed states. But since the thermal motion is effective in only one direction, the ratio becomes 1/256, and the zero point temperature, T0, the temperature at which the thermal motion counterbalances the negative initial level of vibration, is 1.995º K. For a substance with thermal factors a, z, and y, and the normal 2/9 initial specific heat level, we then have

T0 = 1.995 (a+z+y) degrees K   (7-1)

This value completes the definition of the specific heat curves by defining the temperature scales. It will be more convenient, however, to work with another of the fixed points on the curves, the first transition point, T1. As this is the unit specific heat level on the initial linear section of the curve, while T0 is 2/9 unit above the initial point, the temperature of the first transition point is

T1 = 8.98 (a+z+y) degrees K   (7-2)
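A minimal sketch of these two relations in Python, using the molybdenum factors 4-8-2 discussed later in this chapter (the 129º K comparison value is the empirical figure quoted there):

# Sketch of equations 7-1 and 7-2: the zero-point and first-transition
# temperatures follow from the scalar sum a+z+y of the thermal factors.

def T0(a, z, y):
    return 1.995 * (a + z + y)   # equation 7-1, degrees K

def T1(a, z, y):
    return 8.98 * (a + z + y)    # equation 7-2, degrees K

# Molybdenum: low-temperature thermal factors 4-8-2 (from the text);
# the calculated T1 of about 126 K compares with the empirical 129 K.
print(f"Mo   T0 = {T0(4, 8, 2):6.1f} K   T1 = {T1(4, 8, 2):6.1f} K")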

Thermal factors of the elements for which reliable specific heat patterns are available, and the corresponding theoretical first transition temperatures (T1), are listed in Table 22, together with the T1 values derived from curves of the type illustrated in Figs.5 to 13, in which the temperature scale is empirical. In effect, this is a comparison between theoretical and experimental values of the temperature scales of the specific heat curves. The experimental values are subject to some uncertainty, as they have been obtained by inspection from graphs in which the linear portions of the curves were also drawn from visual inspection. Greater accuracy could be attained by using more sophisticated techniques, but the time and effort required for this refinement did not appear to be justified for the purposes of this initial investigation of the subject.

The compressibility factors derived in Chapter 4, with a few values restated in different, but equivalent, terms, are shown in the table for comparison with the corresponding thermal factors. The principal determinants of the compressibility values, aside from the effect of the pressure level itself (including the internal pressure), were found to be the magnitude and sign (positive or negative) of the displacement in the electric dimension. The rotational group to which the element belongs (determined by the magnetic displacements) is much less significant. In the thermal situation the rotational group becomes the dominant influence.

The elements of Group 3B (magnetic displacements 3-3), midway in the group order, generally have thermal factors close to the compression values. In half of the 3B elements included in the table the deviation is no more than one unit. But in each direction from this central group there is a systematic deviation from the compressibility values, upward in the lower groups and downward in the higher groups. Every element above number 42, molybdenum, that is included in the table, with one exception, has thermal factors either equal to or less than the corresponding compressibility factors. Every element below molybdenum, with three exceptions (two of which are alkali metals), has thermal factors that are either equal to or greater than the corresponding compressibility factors.

It was noted in Chapter 4 that in compression the lowest electropositive elements do not take the minimum 1-1-1 factors of their electronegative counterparts, but have a = 4 in all of the elements of this class investigated by Bridgman. The reason for this difference in behavior is not yet known (although it is no doubt connected with the all-positive nature of the rotational displacement of these elements), but it is even more pronounced in the thermal factors. Except for the alkali metals above sodium, which, as noted above, have thermal factors even lower than the compressibility values, the lower electropositive elements not only maintain the 6-unit minimum (4-1-1 or equivalent) but raise the effective magnitudes of their thermal factors still farther by omitting the n = 1 section of the specific heat curve based on equation 5-6, and going immediately to n = 2, which increases the temperature scale by a factor of 8. This pattern is followed by boron and carbon, and in part, by beryllium. The corresponding members of the next higher group, magnesium, aluminum, and silicon, also have n = 2 from the start of the thermal motion, but here the second unit is one-dimensional rather than three-dimensional. Beryllium combines the two patterns. It has the same thermal factors as lithium, but a dimensional multiplier halfway between those of lithium and boron, the two adjoining elements.

Table 22: Effective Rotational Factors

[Two-panel table giving, for each element from lithium through bismuth, the compressibility factors (Comp.), the thermal factors (Therm.), the dimensional multiplier n where applicable, the total effective value (Tot.), and the calculated and observed first transition temperatures T1, in degrees K. The tabulation is garbled in this copy; the rows that can be recovered with confidence are shown below.]

Element   Comp.    Therm.   n   Tot.   T1 Calc.   T1 Obs.
Li        4-1-1    4-2-1    2    14      126        131
Y         4-2-1    4-3-1          8       72         71
Zr        4-8-1    4-4-1          9       81         84
Mo        4-8-2    4-8-2         14      126        129
Ru        4-8-2    4-8-2         14      126        128

The option of one dimension or three dimensions is open whenever motion advances from one unit to two units, but not under any other conditions. Three-dimensional motion of one displacement unit is meaningless, as 1³ = 1. After two units there is no option, as there cannot be more than two units in linear succession, for reasons that were discussed in Volume I. But two-unit motion can be either one-dimensional or three-dimensional. At the point where the advance from one to two units takes place, the motion is therefore able to take the dimensions that are best suited to the existing situation. A one-dimensional increase in the value of n results in increasing the temperature scale by a factor of 2 rather than 8.

The alkali metals, which diverge from the normal electropositive behavior in a number of respects because of their low electric displacement, follow the same pattern as the elements listed in the preceding paragraph, but one step lower, as indicated in the following comparison:

Group            1B      2A      2B
Alkalis          n=2     4-x-x   1-1-x
Other Positive   n=8     n=2     4-x-x

As we found in the specific heat investigation, the electronegative elements below displacement 7 have a half-size initial negative specific heat level: 1/9 unit instead of the normal 2/9. It might be expected that this would result in a net effective specific heat of 8/9 unit, or 2 2/3 R, at the transition point, instead of the 7/9 unit (2 1/3 R) that exists when the initial negative level is 2/9 unit. But it is quite clear from the measured specific heat values that this is not true. The first transition point in the specific heat curves of the electronegative elements is 2 1/3 R, just as it is in the curves with the 2/9 unit (2/3 R) negative initial level. Apparently the restriction that prevents the existence of the more negative initial level in the specific heat of these elements is gradually eliminated as the temperature rises, so that at the transition point the effective negative component of the specific heat is the normal 2/9 unit.

The thermal factors of the higher inert gases, krypton and xenon, which have no rotation in the electric dimension, are 1-1-0 rather than 1-1-1, as in compressibility. This is a peculiarity of the mathematics, and has no physical significance. In both cases the meaning of the symbols is that the effective magnitude is determined entirely by the factors a and z. In multiplication this requires a unit value in the y position, whereas in addition a zero is required for the same purpose. But this equivalence of the 1-1-1 compressibility and 1-1-0 thermal factors does not mean that 1-1-1 thermal and 1-1-0 thermal are equivalent. The 1-1-1 thermal combination is the minimum for a substance with effective rotational displacement in all three dimensions. Where the thermal factors drop to 1-1-0, as indicated for rubidium and cesium, there is no effective displacement in the electric dimension, and the thermal motion is following the inert gas pattern. Such behavior is uncommon, but it is not without precedent in other properties. We found in Chapter 1, for instance, that a number of elements, including the halogens, the elements corresponding to the alkalis on the opposite side of the inert gases, have inter-atomic distances in one or two dimensions that are similarly based on magnetic rotation only.

Since the empirical values listed in Table 22 are subject to a considerable degree of uncertainty, small differences between them and the calculated values have no significance. In some cases, however, the discrepancy is large enough to be real, and further study of the thermal relations of these elements will be required. Only one of the experimental values shown in the table, one of those applicable to chromium, is too far from any theoretical temperature to be capable of explanation on the basis of the theoretical information now available.

As brought out in the discussion of the general pattern of the specific heat curves in Chapter 5, in many substances there is a change in the temperature scale of the curve at the first transition point (T1), as a result of which the first and second segments of the curve do not intersect at the 2 1/3 R end point of the lower segment of the curve in the normal manner. This change in scale is due to a transition to the second set of thermal factors given, for the elements in which it occurs, in Table 22. With the benefit of the information that we have developed regarding the factors that determine the temperature scale, we can now examine the quantitative aspects of these changes. As an example, let us look at the specific heat curve of molybdenum, Fig.9, which, as previously noted, also applies to ruthenium. The thermal factors applicable to these elements at low temperatures are 4-8-2, identical with the compressibility factors.
The first transition point, specific heat 4.63, is reached at 126º K on the basis of these factors. The corresponding empirical temperatures, determined by inspection of the trend of the experimental values of the specific heats, are 129 for molybdenum and 128 for ruthenium, well within the range of uncertainty of the techniques employed in estimating the empirical values. If the thermal factors remained constant, as they do in the “regular” pattern followed by such elements as silver, Fig.5, there would be a transition to n = 2 at this 126º K temperature, and the specific heat above this point would follow the extension of a line from the initial level of 3.89 to 4.63 at 126º K. But instead of continuing on the 4-8-2 basis, the thermal factors decrease to 4-6-2 at the transition point. These factors correspond to a transition temperature of 108º K. The specific heat of the molecule therefore undergoes an isothermal increase at 126º K to the extension of a line from the initial level of 3.89 to 4.63 at 108º K, and follows this line at higher temperatures. The effect of the isothermal increase in the specific heat of the individual molecules is, of course, spread out over a substantial temperature range in application to a solid aggregate by the distribution of molecular velocities.

The temperatures of the subsequent transition points and the end points of the various segments of the specific heat curves can be calculated from the temperatures of the first transition points by applying the relative values listed in Chapter 5 to the appropriate values of T1. An approximate agreement between the empirical data and the higher transition points thus calculated is indicated, but the angles at which the upper segments of the curves intersect are too small to permit any close empirical definition of the temperature of intersection. The only one of the end points that has any real significance is the end point of the last segment of the curve applicable to the substance under consideration. This is the temperature limit of the solid. Any further addition of heat initiates the transition to the liquid state.

Inasmuch as it is the individual molecule that reaches its thermal limit at the solid end point, it is the individual molecule that makes the transition to the liquid state. Physical state is thus basically a property of the individual molecule rather than a property of the aggregate, as it is regarded in conventional physical theory. The state of the aggregate is merely a reflection of the state of the majority of its constituents. Recognition of this fact some forty years ago, in the early stages of the investigation that led to the results now being reported, was a major step in the clarification of physical fundamentals that ultimately opened the door to the formulation of a general physical theory.

The liquid state has long been an enigma to conventional physics. As expressed by V. F. Weisskopf, “A liquid is a highly complex phenomenon in which the molecules stay together yet move along each other. It is by no means obvious why such a strange object should exist.” 11 Weisskopf goes on to speculate as to what the outcome would be if physicists knew the fundamental principles on which atomic structure is based, as present-day theory sees them, but “had never had occasion to see structures in nature.” He doubts if these theorists would ever be able to predict the existence of liquids. In the Reciprocal System of theory, on the other hand, the liquid state is a necessity, an intermediate condition that must necessarily exist between the solid and gaseous states.
When the thermal motion of a molecule reaches equality with the inward progression of the natural reference system in one dimension of the region outside unit distance, the cohesive force in that dimension is eliminated. The molecule is then free to move in that dimension, while it is held in a fixed position, or a fixed average position, in the other dimensions by the cohesive forces that are still operative. The temperature at which the freedom in one dimension is reached is the melting point of the aggregate, because any additional thermal energy supplied to the aggregate is absorbed in changing the state of additional molecules until the remaining content of solid molecules reaches the percentage that can be accommodated within the liquid aggregate. These remaining solid molecules are gradually converted to the liquid state in a temperature range above the melting point. Thus the liquid aggregate in this range contains a percentage of solid molecules, while the solid aggregate in a similar temperature range below the melting point contains a percentage of liquid molecules. The presence of these “foreign” molecules has a significant effect on the physical properties of matter in both of these temperature ranges, an effect which, as we will see in the subsequent discussion of the liquid state, can be evaluated accurately by using probability relations to determine the exact proportions in which molecules of the two states exist at each temperature.

While the end point of the solid state is the temperature at which the intermolecular forces reach an equilibrium at the unit level, arrival at this end point does not mean automatic entry into the liquid state. It merely means that the cohesive forces of the solid are no longer operative in all three dimensions, and therefore do not prevent the free movement in one dimension of space that is the distinguishing characteristic of the liquid state. The significant point here is that a liquid molecule is limited to certain specific temperatures. A liquid aggregate can take any temperature within the liquid range, but only because the aggregate temperature is an average of a large number of the restricted individual values. This same restriction to one of a limited set of values also applies to the temperature of the solid molecule, but in the vicinity of the melting point the solid is at a high time region temperature level, where the proportionate change from one possible value, n units, to the next, n + 1 units, is small. The motion of the liquid state, on the other hand, is in the region outside unit space, and is equivalent to gas motion in the one dimension in which the thermal energy exceeds the solid state limit. As we saw in Chapter 5, temperatures in the vicinity of the melting point are very low on the scale applicable to this outside region, and the proportionate change from n to n + 1 is large. The intervals between the possible temperatures of liquid molecules are therefore large enough to be significant.

Because of the limitation of the liquid temperatures to specific values, the temperature at which a molecule qualifies as a liquid is not the end point temperature of the solid state, but a higher value that includes the increment necessary to bring the end point temperature up to the next available liquid level. This makes it impossible to calculate melting points from solid state theory alone. Such calculations will have to wait until the relevant liquid theory is developed in a subsequent volume in this series, or elsewhere. But the temperature increment beyond the solid end point is small compared to the end point temperature itself, and the end point is not much below the melting point. A few comparisons of end point and melting point temperatures will therefore serve to confirm, in a general way, the theoretical deductions as to the relation between these two magnitudes.

There is a considerable degree of uncertainty in the experimental results at the high temperatures reached by the melting points of many of the elements, and there are also some theoretical aspects of the thermal situation in the vicinity of the melting point that have not yet been fully explored. The examples for discussion in this initial approach to the subject have been selected from among those in which these uncertain elements are at a minimum.

First, let us look at element number 19, potassium. This element has a specific heat curve of the type identified by the notation n = 3 in Fig.4. Its thermal factors are 2-1-1, and it maintains the same factors throughout the entire solid range. As indicated in Chapter 5, the end point temperature of this type of curve is 9.32 times the temperature of the first transition point. This leads to an end point temperature of 336º K. The measured melting point is 337º K. In this case, then, the solid end point and the melting point happen to coincide within the limits of accuracy of the investigation.

Chlorine, an element only two steps lower in the atomic series than potassium, but a member of the next lower group, has the lower type of specific heat curve, with n = 2. The end point temperature of this curve is 4.56 on the relative scale where the first transition point is unity. The thermal factors that determine the transition point, and are applicable to the first segment of the curve, are 4-2-1, but if these factors are applied to the end point they lead to an impossibly high temperature. It is thus apparent that the factors applicable to the second segment of the curve are lower than those applicable to the first segment, in line with the previously noted tendency toward a decrease in the thermal factors with increasing temperature. The indicated factors applicable to the end point in this case are the same 2-1-1 combination that we found in potassium. They correspond to an end point temperature of 164º K, just below the melting point at 170º K, as the theory requires.

Next we look at two curves of the n = 4 type, the end point of which is at a relative temperature of 17.87. On the basis of thermal factors 4-6-1, the absolute temperature of the end point is 1765º K, which is consistent with the melting points of both cobalt (1768) and iron (1808). Here, too, the indicated factors at the end point are lower than those applicable to the first segment of the specific heat curve, but in this case there is independent evidence of the decrease. Cobalt, which has the factors 4-8-2 in the first segment, is already down to 4-6-1 at the second transition point, while iron, the initial factors of which are also 4-8-2, has reached 4-6-2 at this point, with two more segments of the curve in which to make the additional reduction.

Compounds of elements above group 1B, or having a significant content of such elements, follow one or another of the Type 1 patterns that have been illustrated by examples from the elements. The hydrocarbons and other compounds of the lower group elements have specific heat curves of type 2 (Fig.3), in which the end point is at a relative temperature of 1.80. As an example of this class we can take ethylene. The thermal factors of these lower group compounds are limited to 1-1-1, 2-1-1, and the combination value 1¹/2-1-1. As we found in Volume I, however, the two groups of atoms of which ethylene and similar compounds are composed are inside one time region unit of distance.
They therefore act jointly in thermal interchange rather than acting independently in the manner of two inorganic radicals, such as those in NH4NO3. Each group contributes to the thermal factors of the molecule, and the value applicable to the molecule as a whole is the sum of the two components. Ethylene uses the 1-1-1 and 1¹/2-1-1 combinations. A difference of this kind between the two halves of an organic molecule is quite common, and no doubt reflects the lack of symmetry between the positive and negative components that was the subject of comment in the discussion of organic structure. The combined factors amount to a total of 6¹/2 units. This corresponds to a transition point at 58º K, which agrees with the empirical curve, and an end point at 104º K, coincident with the observed melting point.

The joint action of the two ends of an organic molecule that combines their thermal factors in the temperature determination is maintained when additional structural units are introduced between the end groups. As brought out in Chapter 6, such an extension of the organic type of structure into chains or rings also results in the activation of additional thermal motions of an independent nature within the molecules. The general nature of this internal motion was explained in the previous discussion. The same considerations apply to the transition point temperature, except that the internal motion is independent of the molecular motion in vectorial direction as well as in scalar direction. It is therefore distributed three-dimensionally, and the fraction in the direction of the molecular motion is 1/8 rather than 1/2. Each unit of internal motion thus adds 1/8 of 8.98 degrees, or 1.12 degrees K, to the transition point temperature. With the benefit of this information we are now able to compute the temperatures corresponding to the specific heats of the paraffin hydrocarbons of Table 21. These values are shown in Table 23.

Table 23: Temperatures of Critical Points – Paraffin Hydrocarbons

Thermal Factors

                     Trans. Point                    End Point
                     Factors               Total     Factors               Total
Propane              1-1-1      1-1-1      6         1-1-1      1-1-0      5
Butane               1-1-1      1-1-¹/2    5¹/2      2-1-1      1¹/2-1-1   7¹/2
Pentane              1¹/2-1-1   1¹/2-1-1   7         2-1-1      2-1-1      8
Hexane and above     2-1-1      1¹/2-1-1   7¹/2      2-1-1      2-1-1      8

Temperatures

                 Internal           End Point Factors       End      Melting
                 Units       T1     Internal   Total        Point    Point
Propane          0           54     0          5            81       85
Butane           0           50     1          8¹/2         137      138
Pentane          2           65     1          9            145      143
Hexane           3           71     3          11           178      179
Heptane          4           72     3          11           178      182
Octane           5           73     5          13           210      216
Nonane           6           74     5          13           210      220
Decane           7           75     7          15           242      243
Hendecane        8           76     7          15           242      247
Dodecane         9           77     8          16           259      263
Tridecane        10          79     8          16           259      268
Tetradecane      11          80     9          17           275      279
Pentadecane      12          81     9          17           275      283
Hexadecane       13          82     10         18           291      291

The first section of this table traces the gradual increase in the thermal factors as the molecule makes the transition from a simple combination of two structural groups, with properties that are similar to those of inorganic binary compounds, except for the joint thermal action due to the short inter-group distance, to a long-chain organic structure. The increase in the factors follows a fairly regular course in this range except in the case of butane. If the experimental values of the specific heat of this compound are accurate, its transition point factors drop back from the total of 6 that applies to propane to 5¹/2, whereas they would be expected to advance to 6¹/2. The reason for this anomaly is unknown. At the C6 compound, hexane, the transition to the long-chain status is complete, and the thermal factors of the higher compounds as far as hexadecane (C16), the limit of the present study, are the same as those of hexane.

In the second section of the table the transition point temperatures are calculated on the basis of 8.98 degrees K per molecular thermal factor, as shown in the upper section of the table, plus 1.12 degrees per effective unit of internal motion. The number of internal motions shown in Column 1 for each compound is taken from Table 21. Columns 3 and 4 are the values entering into the calculation of the solid end point, Column 5. As the table indicates, some of the internal motions that exist in the molecule at the transition temperature are inactive at the end point. However, the active internal motion components are thermally equivalent to the molecular motions at this point, rather than having only 1/8 of the molecular magnitude as they do at T1. This is a result of the general principle that the state of least energy takes precedence (in a low energy environment) in cases where alternatives exist. Below the transition point the internal thermal motions are necessarily one-dimensional. Above T1 they are free to take either the one-dimensional or three-dimensional status. The energy at any given temperature above T1 is less on the three-dimensional basis. This transition therefore takes place as soon as it can, which is at T1. At the melting point the energy requirement is greater after the transition to the liquid state. Consequently, this transition does not take place until it must, because there is no alternative. A return to one-dimensional internal thermal motion is an available alternative that will delay the transition. This motion therefore gradually reverts back to the one-dimensional status, reducing the energy requirement, and the solid end point is not reached until all effective thermal factors are at the 8.98 temperature level. The end point temperature of Column 5 is then 8.98 x 1.80 = 16.164 times the total number of thermal factors shown in Column 4.

The calculated transition points are all in agreement with the empirical curves within the margin of uncertainty in the location of these curves. As can be seen by comparing the calculated solid end points with the melting points listed in the last column, the end point values are also within the range of deviation that is theoretically explainable on the basis of discrete values of the liquid temperatures. It is quite possible that there is some “fine structure” involved in the thermal relations of solid matter that has not been covered in this first systematic theoretical treatment of the subject. Aside from this possibility, it should be clear from the contents of this and the two preceding chapters that the theory derived by development of the consequences of the postulates of the Reciprocal System is a correct representation of the general aspects of the thermal behavior of matter.
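Since the calculations described in this chapter reduce to simple arithmetic on the factor totals, they are easy to cross-check. The following short Python sketch (our own illustration; the function names and data layout are not part of the original text) recomputes the critical temperatures of Table 23, along with the element end points cited earlier. All constants come directly from the text: 8.98 degrees K per thermal factor at T1, 1/8 of 8.98 = 1.12 degrees per internal unit, and 8.98 x 1.80 = 16.164 degrees per effective end point factor.

```python
# Cross-check of the critical temperatures discussed in this chapter.
DEG_PER_FACTOR = 8.98            # deg K per molecular thermal factor at T1
DEG_PER_INTERNAL = 8.98 / 8      # ~1.12 deg K per effective internal unit
DEG_PER_END_FACTOR = 8.98 * 1.80 # 16.164 deg K per end point factor (type 2)

def transition_point(factors, internal_units):
    """First transition point T1, in deg K."""
    return DEG_PER_FACTOR * factors + DEG_PER_INTERNAL * internal_units

def solid_end_point(total_end_factors):
    """Solid end point, in deg K, for the type 2 (relative 1.80) curves."""
    return DEG_PER_END_FACTOR * total_end_factors

# (T1 factor total, internal units at T1, end point factor total) from Table 23
paraffins = {
    "propane":    (6,   0, 5),
    "butane":     (5.5, 0, 8.5),
    "pentane":    (7,   2, 9),
    "hexane":     (7.5, 3, 11),
    "hexadecane": (7.5, 13, 18),
}
for name, (f, i, e) in paraffins.items():
    print(f"{name:11s} T1 = {transition_point(f, i):5.1f}  "
          f"end point = {solid_end_point(e):5.1f}")

# Element end points: relative end point value x T1 (factor sum x 8.98).
# Potassium: n = 3 curve, relative 9.32, factors 2-1-1 (sum 4) -> ~335
# (the text gives 336; measured melting point 337).
print("K  end point:", 9.32 * DEG_PER_FACTOR * 4)
# Chlorine: n = 2 curve, relative 4.56, end point factors 2-1-1 -> ~164.
print("Cl end point:", 4.56 * DEG_PER_FACTOR * 4)
# Cobalt/iron: n = 4 curve, relative 17.87, factors 4-6-1 (sum 11) -> ~1765.
print("Fe/Co end point:", 17.87 * DEG_PER_FACTOR * 11)
```

Running the sketch reproduces the tabulated values (54, 81 for propane; 71, 178 for hexane; and so on) to within the rounding used in the table.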

CHAPTER 8

Thermal Expansion

As indicated earlier, addition of thermal motion displaces the inter-atomic equilibrium in the outward direction. A direct effect of the motion is thus an expansion of the solid structure. This direct and positive result is particularly interesting in view of the fact that previous theories have always been rather vague as to why such an expansion occurs. These theories visualize the thermal motion of a solid as an oscillation around an equilibrium position, but they fail to shed much light on the question as to why that equilibrium position should be displaced as the temperature rises. A typical “explanation” taken from a physics text says, “Since the average amplitude of vibration of the molecules increases with temperature, it seems reasonable that the average distance between the atoms should increase with temperature.” But it is not at all obvious why this should be “reasonable.” As a general proposition, an increase in the amplitude of a vibration does not, in itself, change the position of equilibrium. Many discussions of the subject purport to supply an explanation by stating that the thermal motion is an anharmonic vibration. But this is not an explanation; it is merely a restatement of the problem. What is needed is a reason why the addition of thermal energy produces such an unusual result. This is what the Reciprocal System of theory supplies. According to this theory, the thermal motion is not an oscillation around a fixed average position; it is a simple harmonic motion in which the inward component is coincident with the progression of the natural reference system, and therefore has no physical effect. The outward component is physically effective, and displaces the atomic equilibrium in the outward direction.

From the theoretical standpoint, thermal expansion is a relatively unexplored area of physical science. Measurement of the expansion of different substances at various temperatures is being pursued vigorously, and the volume of empirical data in this field is increasing quite rapidly. However, the practical effect of the change in the coefficient of expansion due to temperature variation is of little consequence, and for most purposes it can be disregarded. As stated in the physics text from which the “explanation” of the expansion was taken, “Accurate measurements do show a slight variation of the coefficient of expansion with the temperature. We shall ignore such variations.” This lack of significant practical application has limited the amount of theoretical attention that the subject has heretofore received. But one of the principal objectives of this present work is to demonstrate that the Reciprocal System is a general physical theory. However limited the practical use of the thermal expansion information may be, we will want to show that this expansion can be explained on the same basis as the other properties of matter, using the same principles and relations that are applied to those other properties, with only such modifications as are required by considerations peculiar to the expansion.

In general, the volumetric behavior of a solid in response to the application of heat is analogous to that of a confined gas, the differences being limited to those items which depend on whether the point of equilibrium between any two of the constituent atoms is inside or outside unit distance. At constant pressure, the general gas equation (5-3), which describes the relation between the principal properties of the ideal gas, reduces to

V = kT (8-1)

This is Charles’ Law. It tells us that at constant pressure the volume of an ideal gas (one that is entirely free from time region forces) is directly proportional to the absolute temperature. The relation E = PV (equation 4-3) is merely a restatement of the definition of pressure, in a different form, and is therefore valid in the time region (inside unit distance) as well as in the ideal gas state. Since E = kT² (equation 5-5) in the time region, it follows that in this region

PV = kT² (8-2)

At constant pressure this reduces to

V = kT² (8-3)

In our consideration of volume changes in solid structures due to the addition of thermal energy we will usually be interested mainly in the coefficient of thermal expansion, or derivative of volume with respect to temperature. This is obtained by differentiating equation 8-3:

dV/dT = 2kT (8-4)

Aside from the numerical constant k, this equation is identical with the specific heat equation 5-7, where the value of n in that equation is unity. Thus there is a close association between thermal expansion and specific heat up to the first transition temperature defined in Chapter 5. For all of the elements on which sufficient data are available to enable locating the transition point, this transition temperature is the same for thermal expansion as it is for specific heat. Each element has a negative initial level of the expansion coefficient, the magnitude of which has the same relation to the magnitude at the transition point as in specific heat; that is, 2/9 in most cases, and 1/9 in some of the electronegative elements. It follows that if the coefficient of expansion at the transition point is equated to the 4.63 specific heat value, the first segment of the expansion curve is identical with the first segment of the specific heat curve. Beyond the transition point the thermal expansion curve follows a course quite different from that of the specific heat, because of the difference in the nature of the two phenomena.
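A minimal numeric sketch of equations 8-3 and 8-4 follows. The constant k here is arbitrary, chosen only to exhibit the quadratic form of the time region volume-temperature relation; the function names are ours, not part of the original text.

```python
# Minimal sketch of equations 8-3 and 8-4 (time region, constant pressure).
# The value of k is arbitrary here; it merely sets the scale of the curve.
def volume(T, k=1.0e-9):
    return k * T**2          # V = kT^2  (equation 8-3)

def expansion_coefficient(T, k=1.0e-9):
    return 2.0 * k * T       # dV/dT = 2kT  (equation 8-4)

# Below the first transition point the coefficient grows linearly with T,
# paralleling the first segment of the specific heat curve (equation 5-7
# with n = 1), as the text notes.
for T in (50, 100, 150):
    print(T, volume(T), expansion_coefficient(T))
```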

Since the term n³ is absent from the thermal expansion equation, the modification of the expansion curve that takes place where motion of single units is succeeded by multi-unit motion involves a change in the coefficient k. The expansion is related to the effective energy (that is, to the temperature), irrespective of the relation between total energy and effective energy that determines the specific heat above the first transition point. The magnitude of the constant k that determines the slope of the upper segment of the expansion curve is determined primarily by the temperature of the end point of the solid state. For purposes of this present discussion, the solid end point will be regarded as coincident with the melting point. As brought out in Chapter 7, this is, in fact, only an approximate coincidence. But the present examination of thermal expansion is limited to its general features. Evaluation of the exact quantitative relations will not be feasible until a more intensive study of the situation is undertaken, and even then it will be difficult to verify the theoretical results by comparison with empirical data because of the large uncertainties in the experimental values. Even the most reliable measurements of thermal expansion are subject to an estimated uncertainty of ±3 percent, and the best available values for some elements are said to be good only to within ±20 percent. However, most of the measurements on the more common elements are sufficiently accurate for present purposes, as all that we are here undertaking to do is to show that the empirically determined expansions agree, in general, with the theoretical pattern.

The total expansion from zero temperature to the solid end point is a fixed quantity, the magnitude of which is determined by the limitation of the solid state thermal motion (vibration) to the region within unit distance. At zero temperature the gravitational motion (outward in the time region) is in equilibrium with the inward progression of the natural reference system. The resulting volume is s0³, the initial molecular volume. At the solid end point the thermal motion is also in equilibrium with the inward progression of the natural reference system, as this is the point at which the thermal motion is able to cross the time region boundary without assistance from gravitation. The thermal motion up to the end point of the solid state thus adds a volume equal to the initial volume of the molecule. Because of the dimensional situation, however, only a fraction of the added volume is effective in the region in which it is measured; that is, outside unit space.

For an understanding of the dimensional relations that are involved it is necessary to realize that all of the phenomena of the solid state take place inside unit space (distance), in what we have called the time region. The properties of motion in this region were discussed in detail at appropriate points in Volume I. This discussion will not be repeated here, but a brief review of the general situation, with particular reference to the dimensions of motion, may be helpful. According to the fundamental postulates of the Reciprocal System, space exists only in association with time as motion, and motion exists only in discrete units. From this it follows that space and time likewise exist only in discrete units. Consequently, any two atoms that are separated by one unit of space cannot move any closer together in space, as this would require the existence of fractional units.
These atoms may, however, accomplish the equivalent of moving closer together in space by moving outward in time. All motion in the time region, the region inside unit space, is motion of this kind: motion in time (equivalent space) rather than motion in actual space.

The first unit of thermal motion is a one-dimensional motion in time. At the transition point, T1, this motion has reached the full one-unit level. As already explained, only half of this unit is physically effective. One fully effective unit is required for escape from the time region, and the motion therefore enters a second time region unit. In this second unit a three-dimensional distribution of the motion is possible. But the motion in time that takes place in the time region has only a scalar connection with motion in the region outside unit space, which is motion in space. This is equivalent to a one-dimensional contact. Thus only one dimension of the three-dimensional time region motion is effective beyond the regional boundary. The effective fraction of the motion is 1/8 of one unit, or 1/16 of the total two-unit time region motion. The expansion is proportional to the effective component of the motion, and this means that the volumetric expansion from zero temperature to the solid end point, as measured in the region outside unit space, is also 1/16, or 0.0625, of the initial volume. On a one-dimensional (linear) basis, this is 0.0205.

This is the relative expansion that would take place provided that no change in the volumetric determinants of the substance occurs above the reference temperature (usually room temperature). But such changes occur more often than not, and, as has been explained, the volume changes accompanying an increase in temperature are normally in the direction of increased volume. The total expansion is 0.0625 of the initial volume corresponding to the volume at the solid end point. Where this theoretical initial volume is greater than the reference volume projected to zero temperature, the expansion expressed relative to the smaller volume is correspondingly increased. It follows that in most cases the linear expansion, as measured, is somewhat above 0.0205, generally in the range from this value up to about 0.028.

The increase in volume at the higher temperature, where it occurs, is generally due to structural rearrangements. The changes take place either in the inter-atomic distance, by reason of transitions from one of the types of orientation discussed in Chapter 1 to another, or in the crystal structure, or both. The expansion is related to the inter-atomic distance (s0) rather than to the geometrical volume, and it is independent of the geometrical arrangement, but, as indicated in the preceding paragraph, a modification of the geometry does affect the relation of the volume at the solid end point to the reference volume at zero temperature. In the NaCl type of structure the edge of the unit cube is equal to the inter-atomic distance. This cube contains one atom, and the ratio of the measured volume to what we may call the three-dimensional space, the cube of the inter-atomic distance, is therefore unity. In the body-centered cube the edge is 2/√3 times the inter-atomic distance. Since the unit cube of this type contains two atoms, the ratio of volume to three-dimensional space is 0.770. The one-dimensional space, the edge of a hypothetical cube containing one atom, is then 0.9165 for the body-centered cube and 1.00 for the NaCl type structure. Transitions from one type of structure to the other modify the spatial relations accordingly. The values applicable to all five of the principal isometric crystal structures of the elements are listed in the following tabulation.

Face-centered cube          0.8909
Close-packed hexagonal      0.8909
Body-centered cube          0.9165
Simple (NaCl) cube          1.0000
Diamond (ZnS) cube          1.1547
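The tabulated values follow directly from the cell geometry of each structure. The following sketch is our own cross-check, not part of the original text: each value is the edge of a hypothetical cube containing one atom, expressed as a multiple of the inter-atomic (nearest-neighbor) distance.

```python
# Cross-check of the one-dimensional space values tabulated above.
# For each structure: cube edge (in units of the inter-atomic distance d)
# and the number of atoms per unit cell; the one-dimensional space is the
# cube root of the volume per atom.
import math

structures = {
    # name: (edge / d, atoms per cell)
    "Face-centered cube": (math.sqrt(2), 4),
    "Body-centered cube": (2 / math.sqrt(3), 2),
    "Simple (NaCl) cube": (1.0, 1),
    "Diamond (ZnS) cube": (4 / math.sqrt(3), 8),
}
for name, (edge, atoms) in structures.items():
    volume_per_atom = edge**3 / atoms
    print(f"{name:20s} {volume_per_atom ** (1/3):.4f}")

# Output: 0.8909, 0.9165, 1.0000, 1.1547, matching the tabulation.
# Close-packed hexagonal shares the 0.8909 value of the face-centered
# cube, since both are closest packings with the same volume per atom.
```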

The second segment of the thermal expansion curve has no negative initial level, because there is a positive expansion (that of the first segment) into which the initial level can extend. Like the transition from the liquid to the solid state, the transition from single units of motion to multi-unit motion involves a change in the zero datum applicable to temperature. The temperature T0, corresponding to the initial negative level, is eliminated, and the temperature of the end point, T1, of the first segment of the curve, which is 9/2 T0 on this segment, is reduced to 7/2 T0 on the second segment. As brought out in Chapter 7, the minimum of the zero point temperature, T0, is equivalent to one of the 128 dimensional units that correspond to one full temperature unit, 510.8 degrees K. As the temperature rises, additional units of motion are activated, and the corresponding value when all 128 units are fully effective is thus 7/2 x 510.8 = 1788 degrees K. Under the same maximum conditions, the second unit of thermal motion, from T1 to the solid end point, adds an equal magnitude. Thus the temperature of this theoretical full-scale solid end point is 3576 degrees K.

The total expansion coefficient at T1 on the first segment of the expansion curve, and at the initial point of the second segment, is then 0.0205/3576. However, this coefficient is subject to a 1/9 initial level. This makes the net effective coefficient 8/9 x 0.0205/3576 = 5.2 x 10⁻⁶ per degree K. Where the end point temperature (which we are equating to the melting point, Tm, for present purposes) is below 3576, the average coefficient of expansion is increased by the ratio 3576/Tm, inasmuch as the total expansion up to the solid end point is a fixed magnitude. If the first temperature unit, up to T1, were to take its full share of the expansion, the coefficient at T1 on the first segment of the expansion curve, and at the initial point of the second segment, would also be increased by the same ratio. But in the first unit range of temperature the thermal motion takes place in one time region dimension only, and there is no opportunity to increase the total expansion by extension into additional dimensions in the manner that is possible when a second unit of motion is involved. (Additional dimensions do not increase the effective magnitude of one unit, as 1ⁿ = 1.) The total expansion corresponding to the first unit of motion (speed) can be increased by extension to additional rotational speed displacements, but this is possible only in full units, and is limited to a total of four, the maximum magnetic displacement.

As an example, let us consider the element zirconium, which has a melting point of 2125º K. The melting point ratio is 3576/2125 = 1.68. Inasmuch as this is less than two full units, the expansion coefficient of zirconium remains at one unit (5.2 x 10⁻⁶) at the initial point of the second segment of the curve, and the difference has to be made up by an increase in the rate of expansion between this initial point and Tm; that is, by an increase in the slope of the second section of the expansion curve. The expansion pattern of zirconium is shown graphically in Fig.14.

Figure 14: Thermal Expansion

Now let us look at an element with a lower melting point. Titanium has a melting point of 1941º K. The ratio 3576/1941 is 1.84. This, again, is less than two full units. Titanium therefore has the same one unit expansion coefficient at the initial level as the elements with higher melting points. The melting point of palladium is only a little more than 100 degrees below that of titanium, but this difference is just enough to put this element into the two unit range. The ratio computed from the observed melting point, 1825º K, is 1.96, and is thus slightly under the two unit level. But in this case the difference between the melting point and the end point of the solid state, which we are disregarding for general application, becomes important, as it is enough to raise the 1.96 ratio above 2.00. The expansion coefficient of palladium at the initial point of the second segment of the curve is therefore two units (10.3 x 10⁻⁶), and the expansion follows the pattern illustrated in the second curve in Fig.14.

The effect of the difference between the solid end point and the melting point can also be seen at the three unit level, as the melting point ratio of silver, 3576/1234 = 2.90, is raised enough by this difference to put it over 3.00. Silver then has the three unit (15.5 x 10⁻⁶) expansion coefficient at the upper initial point, as shown in the upper curve in Fig.14. At the next unit level the element magnesium, with a ratio of 3.87, is similarly near the 4.00 mark, but in this instance the end point increment is not sufficient to close the gap, and magnesium stays on the three unit basis. None of the elements for which sufficient data were available for comparison with the theoretical curves has a melting point in the four unit range from 715 to 894º K. But since the magnetic rotation is limited to four units, the four unit initial level also applies to the elements with melting points below 715º K. This is illustrated in Fig.15 by the curve for lead, melting point 601º K.
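The unit assignments just described reduce to simple arithmetic, as in the following sketch. This is our own illustration; the melting points and the borderline adjustments for palladium and silver are those quoted in the text, and the variable names are ours.

```python
# Sketch: initial expansion coefficients from melting point ratios.
# One coefficient unit is 8/9 x 0.0205/3576 per degree K (the text rounds
# this to 5.2e-6). The number of units at the initial point of the second
# segment is the ratio 3576/Tm taken in full units (maximum four), except
# where the small gap between the solid end point and the melting point is
# enough to lift a near-integer ratio over the next unit level (Pd, Ag).
UNIT = (8 / 9) * 0.0205 / 3576   # per degree K

elements = {                      # melting points quoted in the text (deg K)
    "zirconium": 2125, "titanium": 1941, "palladium": 1825,
    "silver": 1234, "magnesium": 923, "lead": 601,
}
# Borderline cases where the end point increment closes the gap:
promoted = {"palladium", "silver"}

for name, Tm in elements.items():
    ratio = 3576 / Tm
    units = min(int(ratio) + (1 if name in promoted else 0), 4)
    units = max(units, 1)
    print(f"{name:10s} ratio {ratio:4.2f} -> {units} unit(s), "
          f"coefficient ~{units * UNIT:.2e} per deg K")
```

The sketch reproduces the assignments in the text: one unit for zirconium and titanium, two for palladium, three for silver and magnesium, and the four unit maximum for lead.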

Figure 15: Thermal Expansion

As can be seen in Fig.14, the expansion coefficient of silver, as measured experimentally, deviates from the straight line relation in the vicinity of T1. This deviation is not due to experimental error or to structural readjustments. It is a result of the nature of the transition from the one unit expansion below T1 to the multi-unit expansion above this temperature. Unlike the specific heat transition, where the increments represented by the second segment of the specific heat curve add to the specific heat at T1, the expansion represented by the second segment of the expansion curve replaces the expansion represented by the first segment. The initial level of the second segment at zero temperature is the unit (or n-unit) level reached at the end of the first segment. This means that at T1 the molecule undergoes an isothermal expansion to the level of the second segment at that temperature. In the aggregate the individual molecular expansions are spread out over a temperature range by the distribution of molecular velocities, and they appear as a bulge in the expansion curve. Coincidentally, there is a downward deviation in the curve, similar to that in the experimental specific heat curves, due to the effect of the transition to the more nearly horizontal second segment of the curve. The net effect of these two types of deviation from the theoretical curve applying to the single molecule depends on their relative magnitude, and on the temperature range over which the deviations are distributed. The curves of Fig.14 have been selected from among those in which the net deviation is at a minimum, in order to minimize uncertainties in the definition of the upper sections of the curves, and to make it clear that these linear sections actually terminate at the calculated initial levels. More commonly, the bulge is quite pronounced, as in the curves for gold and lead, Fig.15.

When the effect of this systematic deviation from the linear relation in the vicinity of the transition point is taken into consideration, all of the electropositive elements included in the compilation of expansion data utilized in the investigation,12 except the rare earth elements, have expansion curves that follow the theoretical pattern within the range of accuracy of the experimental results. Most of the rare earths have the one unit expansion coefficient (5.2 x 10⁻⁶) at the initial level of the second segment of the curve, although their melting points are in the range where coefficients of two, or in some cases three, units would be normal. The reason for this, the only deviation from the general pattern in the expansion curves of these elements, is as yet unknown, but it is no doubt connected with the other peculiarities of the rare earth elements that were noted earlier.

The electronegative elements of Division III follow the regular pattern. The lowest melting point in this group is that of mercury, 234º K, well below the lowest value for any of the electropositive elements investigated, but this descent to a lower melting point does not introduce any new behavior. The upper segment of the expansion curve for mercury, defined by the empirical data in Fig.15, definitely terminates at the four unit level (20.7 x 10⁻⁶), as required by the theory. Thus the theoretical relations are applicable to the full temperature range of the first three divisions. As noted earlier, the borderline elements of Division IV, those with negative electric displacement 4, are capable of acting as members of either Division III or Division IV. The expansion curve for lead, Fig.15, follows the normal Division III pattern. The lower borderline elements, tin and germanium, have curves in which the initial levels, like those of the rare earths, are lower than the values corresponding to the melting points. Otherwise, these curves are also normal.

Very little is known about the expansion of the elements of negative displacement below 4. The theoretical development has not yet been extended to a consideration of the effect of the strongly electronegative character of these elements on the volume relations, and the empirical data are both meager and conflicting. This Division IV situation is part of the general problem of anisotropic expansion, a subject to which the Reciprocal System of theory has not yet been applied. The measurements previously cited that apply to anisotropic crystals were made on polycrystalline material in which the expansion in different directions is averaged as a result of the random orientation in the aggregate. Both this issue of anisotropic expansion and the application of the thermal expansion theory to compounds and alloys are still on the waiting list for future investigation. There is no reason to believe that such an investigation will encounter any serious difficulties, but for the present other matters are being given the priority.

CHAPTER 9

Electric Currents

Another set of properties of matter that we will want to consider results from the interaction between matter and one of the sub-atomic particles, the electron. As pointed out in Volume I, the electron, M 0-0-(1), in the notation used in this work, is a unique particle. It is the only particle constructed on the material rotational base, M 0-0-0, (negative vibration and positive rotation) that has an effective negative rotational displacement. More than one unit of negative rotation would exceed the one positive rotational unit of the rotational base, and would result in a negative value of the total rotation. Such a rotation of the basic negative vibration would be unstable in the material environment, for reasons that were explained in the previous discussion. But in the electron the net total rotation is positive, even though it involves one positive and one negative unit, as the positive unit is two-dimensional while the negative unit is one-dimensional.

Furthermore, the independent one-dimensional nature of the rotation of the electron and its positive counterpart, the positron, leads to another unique effect. As we found in our analysis of the rotations that are possible for the basic vibrating unit, the primary rotation of atoms and particles is two-dimensional. The simplest primary rotation has a one-unit magnetic (two-dimensional) displacement, a unit deviation from unit speed, the condition of rest in the physical universe. The electric (one-dimensional) rotation, we found, is not a primary rotation, but merely one that modifies a previously existing two-dimensional rotation. Addition of the one-unit space displacement of the electron rotation to an existing effective two-dimensional rotation increases the total scalar speed of that rotation. But the one-dimensional rotation of the independent electron does not modify an effective speed; it modifies unit speed, which is zero from the effective standpoint. The speed displacement of the independent electron, its only effective component, therefore modifies only the effective space, not the speed. Thus the electron is essentially nothing more than a rotating unit of space.

This is a concept that is rather difficult for most of us when it is first encountered, because it conflicts with the idea of the nature of space that we have gained from a long-continued, but uncritical, examination of our surroundings. However, the history of science is full of instances where it has been found necessary to recognize that a familiar, and apparently unique, phenomenon is merely one member of a general class, all members of which have the same physical significance. Energy is a good example. To the investigators who were laying the foundation of modern science in the Middle Ages the property that moving bodies possess by reason of their motion–“impetus” to those investigators; “kinetic energy” to us–was something of a unique nature. The idea that a motionless stick of wood contained the equivalent of this “impetus” because of its chemical composition was as foreign to them as the concept of a rotating unit of space is to most individuals today. But the discovery that kinetic energy is only one form of energy in general opened the door to a major advance in physical understanding. Similarly, the finding that the “space” of our ordinary experience, extension space, as we are calling it in this work, is merely one manifestation of space in general opens the door to an understanding of many aspects of the physical universe, including the phenomena connected with the movement of electrons in matter.

In the universe of motion, the universe whose details we are developing in this work, and whose identity with the observed physical universe we are demonstrating as we go along, space enters into physical phenomena only as a component of motion, and the specific nature of that space is, for most purposes, irrelevant, just as the particular kind of energy that enters into a physical process usually has no relevance to the outcome of the process. The status of the electron as a rotating unit of space therefore gives it a very special role in the physical activity of the universe. It should be noted at this time that the electron that we are now discussing carries no charge.
It is a combination of two motions, a basic vibration and a rotation of the vibrating unit. As we will see later, an electric charge is an additional motion that may be superimposed on this two-component combination. The behavior of charged electrons will be considered after some further groundwork has been laid. For the present we are concerned only with the uncharged electrons.

As a unit of space, the uncharged electron cannot move through extension space, since the relation of space to space does not constitute motion. But under appropriate conditions it can move through ordinary matter, inasmuch as this matter is a combination of motions with a net positive, or time, displacement, and the relation of space to time does constitute motion. The present-day view of the motion of electrons in solid matter is that they move through the spaces between the atoms. The resistance to the electron flow is then considered to be analogous to friction. Our finding is that the electrons (units of space) exist in the matter, and move through that matter in the same manner as the movement of matter through extension space. The motion of the electrons is negative with respect to the net motion of material objects. This is illustrated in the following diagram:

Line X in the diagram is a representation of a scalar magnitude of extension space, as it appears in the conventional reference system. Line A shows the effect of a unit of motion of a material object M through that space. The object that was originally coincident with spatial unit 1 is now coincident with spatial unit 2. Line B shows what happens if the original motion of object M is followed by a unit of electron motion. Just as object M moved through space X in line A, so space X (the electrons) moves through object M in line B. In one unit of motion (line A) object M advances from spatial unit 1 to spatial unit 2. In the following unit of the inverse type of motion (line B) the numbered spatial locations advance one unit relative to object M. This brings M back into coincidence with spatial unit 1, the same result that would have followed if object M had moved backward in the absence of any electron movement. Thus the movement of space (electrons) through matter is equivalent to a negative movement of matter through space. It follows that the voltage differential that causes the electron motion, and the stress in any substance that absorbs the motion, are likewise negative.

Directional movement of electrons through matter will be identified as an electric current. If the atoms of the matter through which the current passes are effectively at rest relative to the structure of the solid aggregate as a whole, uniform motion of the electrons (space) through matter has the same general properties as motion of matter through space. It follows Newton’s first law of motion, and can continue indefinitely without addition of energy. This situation exists in the phenomenon known as superconductivity that has been observed experimentally in many substances at very low temperatures. But where the atoms of a material aggregate are in effective motion thermally, movement of electrons through the matter adds to the spatial component of the thermal motion (that is, increases the speed), and thereby imparts energy (heat) to the moving atoms.

The magnitude of the current is measured by the number of electrons (units of space) per unit of time. Units of space per unit of time is the definition of speed, hence the electric current is a speed. From a mathematical standpoint it is immaterial whether a mass is moving through extension space or space is moving through the mass. Thus in dealing with the electric current we are dealing with the mechanical aspects of electricity, and the current phenomena can be described by the same mathematical equations that are applicable to ordinary motion in space, with appropriate modifications for differences in conditions, where such differences exist. It would even be possible to use the same units, but for historical reasons, and as a matter of convenience, a separate system of units is utilized in present-day practice.

The basic unit of current electricity is the unit of quantity. In the natural system it is the spatial aspect of one electron, which has a speed displacement of one unit. Quantity, q, is therefore equivalent to space, s. Energy has the same status in current flow as in the mechanical relations, and has the space-time dimensions t/s. Energy divided by time is power, 1/s. A further division by current, which has the dimensions of speed, s/t, then produces electromotive force (emf) with the dimensions 1/s x t/s = t/s². These are, of course, the space-time dimensions of force in general. The term “electric potential” is commonly used as an alternative to emf, but, for reasons to be discussed later, we will not use “potential” in this sense. Where a more convenient term than emf is appropriate, we will use the term “voltage,” symbol V.

Dividing voltage, t/s², by current, s/t, we obtain t²/s³. This is resistance, symbol R, the only electrical quantity thus far considered that is not equivalent to a familiar mechanical quantity. Its true nature is revealed by an examination of its space-time structure. The dimensions t²/s³ are equivalent to mass, t³/s³, divided by time, t. Resistance is therefore mass per unit time. The relevance of such a quantity can easily be seen when it is realized that the amount of mass entering into the motion of space (electrons) through matter is not a fixed quantity, as it is in the motion of matter through extension space, but a quantity whose magnitude depends on the amount of movement of the electrons. In motion of matter through extension space the mass is constant while the space depends on the duration of the movement. In the current flow the space (number of electrons) is constant while the mass depends on the duration of the movement. If the flow is only momentary, each electron may move through only a small fraction of the total amount of mass in the circuit, whereas if it continues for a longer time the entire circuit may be traversed repeatedly. In either case, the total mass involved in the current flow is the product of the mass per unit time (the resistance) and the time of flow. In the movement of matter through extension space, the total space is determined in the same manner; that is, it is the product of the space per unit time (velocity) by the time of movement.

In dealing with resistance as a property of matter we will be interested mainly in the specific resistance, or resistivity, which is defined as the resistance of a unit cube of the substance under consideration.
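Before turning to resistivity in detail, these space-time assignments can be cross-checked mechanically. The following sketch is our own illustration, anticipating the consistency checks made below: each quantity is represented as a pair of exponents (t-power, s-power), and the combinations used in the text are verified by simple exponent arithmetic.

```python
# Sketch: space-time dimensional bookkeeping for the electrical quantities.
# Each quantity is a pair (t_power, s_power); e.g., speed s/t is (-1, 1).
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

def div(a, b):
    return (a[0] - b[0], a[1] - b[1])

QUANTITY = (0, 1)          # q = s
CURRENT = (-1, 1)          # I = s/t (a speed)
ENERGY = (1, -1)           # E = t/s
TIME = (1, 0)
MASS = (3, -3)             # m = t^3/s^3

POWER = div(ENERGY, TIME)            # 1/s     -> (0, -1)
VOLTAGE = div(POWER, CURRENT)        # t/s^2   -> (1, -2)
RESISTANCE = div(VOLTAGE, CURRENT)   # t^2/s^3 -> (2, -3)

assert RESISTANCE == div(MASS, TIME)               # resistance = mass per unit time
assert VOLTAGE == mul(CURRENT, RESISTANCE)         # V = IR
assert ENERGY == mul(mul(VOLTAGE, CURRENT), TIME)  # E = VIt
assert ENERGY == mul(VOLTAGE, QUANTITY)            # W = Vq
print("All space-time dimensional checks pass.")
```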
Resistance is directly proportional to the distance traveled by the current and inversely proportional to the cross-sectional area of the conductor. It follows that if we multiply the resistance by unit area and divide by unit distance we arrive at a quantity with the dimensions t²/s² that reflects only the inherent characteristics of the material and the environmental conditions (principally temperature and pressure) and is independent of the geometrical structure of the conductor. The reciprocals of resistance and resistivity are conductance and conductivity, respectively.

With the benefit of the clarification of the space-time dimensions of resistance we can now go back to the empirically determined relations between resistance and other electrical quantities, and verify the consistency of the space-time identifications.

Voltage: V = IR = s/t x t²/s³ = t/s²
Power: P = I²R = s²/t² x t²/s³ = 1/s
Energy: E = I²Rt = s²/t² x t²/s³ x t = t/s

This energy equation demonstrates the equivalence of the mathematical expressions of the electrical and mechanical phenomena. Since resistance is mass per unit time, the product of resistance and time, Rt, is equivalent to mass, m. The current, I, is a speed, v. The electrical energy expression RtI² is thus dimensionally equivalent to the kinetic energy expression ¹/2mv². In other words, the quantity RtI² is the kinetic energy of the electron motion. Instead of using resistance, time, and current, we may put the energy expression into terms of voltage, V (equivalent to IR), and quantity, q (equivalent to It). The expression for the magnitude of the energy (or work) is then W = Vq.

Here we have a definite confirmation of the identification of electric quantity as the equivalent of space. Force, as described in one of the standard physics textbooks, is “an explicitly definable vector quantity that tends to produce a change in the motion of objects.” Electromotive force, or voltage, conforms to this description. It tends to cause motion of the electrons in the direction of the voltage gradient. Energy in general is the product of force and distance. Electrical energy, as Vq, is the product of force and quantity. It follows that electrical quantity is equivalent to distance: the same conclusion that we derived from the nature of the uncharged electron.

In conventional scientific thought the status of electrical energy as one form of energy in general is accepted, as it must be, since it can be converted to any of the other forms, but the status of electrical, or electromotive, force as one form of force in general is not accepted. If it were, the conclusion stated in the preceding paragraph would be inescapable. But the clear verdict of the observed facts is disregarded because there is a general impression that electrical quantity and space are entities of a totally different nature. The early investigators of electrical phenomena recognized that the quantity measured in volts has the characteristics of a force, and they named it accordingly. Contemporary theorists reject this identification because it conflicts with their views as to the nature of the electric current. W. J. Duffin, for instance, gives us a definition of electromotive force (emf), and then says,

In spite of its name, it is clearly not a force but is equal to the work done per unit positive charge in taking a charge completely around [the electric circuit]; its unit is therefore the volt. 13

Work per unit of space is force. This author simply takes it for granted that the moving entity, which he calls a charge, is not equivalent to space, and he therefore deduces that the quantity measured in volts cannot be a force. Our finding is that his assumptions are wrong, that the moving entity is not a charge, but is a rotating unit of space (an uncharged electron). The electromotive force, measured in volts, is then, in fact, a force. In effect, Duffin concedes this point when he tells us, in another connection, that ―V/n [volts per meter] is the same as N/C [newtons per coulomb].‖ 14 Both express the voltage gradient in terms of force divided by space. Conventional physical theory does not pretend to give us any understanding of the nature of either electrical quantity or electric charge. It simply assumes that inasmuch as scientific investigation has hitherto been unable to produce any explanation of its nature, the electric charge must be a unique entity, independent of the other fundamental physical entities, and must be accepted as one of the ―given‖ features of nature. It is then further assumed that this entity of unknown nature that plays the central role in electrostatic phenomena is identical with the entity of unknown nature, electrical quantity, that plays the central role in current electricity. The most significant weakness of the conventional theory of the electric current, the theory based on the foregoing assumptions, as we now see it in the light of the more complete understanding of physical fundamentals derived from the theory of the universe of motion, is that it assigns two different, and incompatible, roles to the electrons. These particles, according to present-day theory, are components of the atomic structure, yet at least some of them are presumed to be free to accommodate themselves to any electrical forces applied to the conductor. On the one hand, each is so firmly bound to the remainder of the atom that it plays a significant part in determining the properties of that atom, and a substantial force (the ionization potential) must be applied in order to separate it from the atom. On the other hand, these electrons are so free to move that they will respond to thermal or electrical forces whose magnitude is only slightly above zero. They must exist in a conductor in specific numbers in order to account for the fact that the conductor is electrically neutral while carrying current, but at the same time they must be free to leave the conductor, either in large or small quantities, if they acquire sufficient kinetic energy. It should be evident that the theories are calling upon the electrons to perform two different and contradictory functions. They have been assigned the key position in both the theory of atomic structure and the theory of the electric current, without regard for the fact that the properties that they must have in order to perform the functions required by either one of these theories disqualify them for the functions that they are called upon to perform in the other. In the theory of the universe of motion, each of these phenomena involves a different physical entity The unit of atomic structure is a unit of rotational motion, not an electron. It has the quasi-permanent status that is required of an atomic constituent. The electron,

without a charge, and without any connection with the atomic structure, is then available as the freely moving unit of the electric current.

The fundamental postulate of the Reciprocal System of theory is that the physical universe is a universe of motion, one in which all entities and phenomena are motions, combinations of motions, or relations between motions. In such a universe none of the basic phenomena are unexplainable. “Unanalyzables,” as Bridgman called them, do not exist. The basic physical entities and phenomena of the universe of motion–radiation, gravitation, matter, electricity, magnetism, and so on–can be defined explicitly in terms of space and time. Unlike conventional physical theory, the Reciprocal System does not have to leave its basic elements cloaked in metaphysical mystery. It does not have to exclude them from physical inquiry, in the manner of the following statement from the Encyclopedia Britannica:

The question “What is electricity?” like the question “What is matter?” really lies outside the realm of physics and belongs to that of metaphysics.15

In a universe composed entirely of motion, an electric charge applied to a physical entity must necessarily be a motion. Thus the problem faced in the theoretical investigation was not to answer the question, What is an electric charge?, but merely to determine what kind of motion manifests itself as a charge. The identification of the charge as an added motion not only clarifies the relation between the charged electron that is observed experimentally and the uncharged electron that is known only as the moving entity in the electric current, but also explains the interchanges between the two that are the principal support for the currently popular opinion that only one entity, the charge, is involved.

It is not always remembered that this opinion achieved general acceptance only after a long and spirited controversy. There are similarities between static and current phenomena, but there are also significant differences. Inasmuch as no theoretical explanation of either kind of electric effect was available at the time, the question to be decided was whether to regard the two as identical because of the similarities, or as disparate because of the differences. Once made, the decision in favor of identity has persisted, even though much evidence against its validity has accumulated in the meantime.

The similarities are of two general types: (1) some of the properties of charged particles and electric currents are alike, and (2) there are observable transitions from one to the other. Identification of the charged electron as an uncharged electron with an added motion explains both types of similarities. For instance, a demonstration that a rapidly moving charge has the same magnetic properties as an electric current was a major factor in the victory won by the proponents of the “charge” theory of the electric current many years ago. But our findings are that the moving entities are electrons, or other carriers of the charges, and the existence or non-existence of electric charges is irrelevant. The second kind of evidence that has been interpreted as supporting the identity of the static and current electrons is the apparent continuity from the electron of current flow to the charged electron in such processes as electrolysis. Here the explanation is that the electric charge is easily created and easily destroyed. As everyone knows, nothing more than a slight amount of friction is sufficient to generate an electric charge on many surfaces, such as those of present-day synthetic fabrics. It follows that wherever a

concentration of energy exists in one of these forms that can be relieved by conversion to the other, the rotational vibration that constitutes a charge is either initiated or terminated in order to permit the type of electron motion that can take place in response to the effective force.

It has been possible to follow the prevailing policy, regarding the two different quantities as identical, and utilizing the same units for both, only because the two different usages are entirely separate in most cases. Under these circumstances no error is introduced into the calculations by using the same units, but a clear distinction is necessary in any case where either calculation or theoretical consideration involves quantities of both kinds.

As an analogy, we might assume that we are undertaking to set up a system of units in which to express the properties of water. Let us further assume that we fail to recognize that there is a difference between the properties of weight and volume, and consequently express both in cubic centimeters. Such a system is equivalent to using a weight unit of one gram, and as long as we deal separately with weight and volume, each in its own context, the fact that the expression “cubic centimeter” has two entirely different meanings will not result in any difficulties. However, if we have occasion to deal with both quantities simultaneously, it is essential to recognize the difference between the two. Dividing cubic centimeters (weight) by cubic centimeters (volume) does not result in a pure number, as the calculations seem to indicate; the quotient is a physical quantity with the dimensions weight/volume. Similarly, we may use the same units for electric charge and electric quantity as long as they are employed independently and in the right context, but whenever the two enter into the same calculation, or are employed individually with the wrong physical dimensions, there is confusion.

This dimensional confusion resulting from the lack of distinction between the charged and uncharged electrons has been a source of considerable concern, and some embarrassment, to the theoretical physicists. One of its major effects has been to prevent setting up any comprehensive systematic relationship between the dimensions of physical quantities. The failure to find a basis for such a relationship is a clear indication that something is wrong in the dimensional assignments, but instead of recognizing this fact, the current reaction is to sweep the problem under the rug by pretending that it does not exist. As one observer sees the picture:

In the past the subject of dimensions has been quite controversial. For years unsuccessful attempts were made to find “ultimate rational quantities” in terms of which to express all dimensional formulas. It is now universally agreed that there is no one “absolute” set of dimensional formulas.16

This is a very common reaction to long years of frustration, one that we encountered frequently in our examination of the subjects treated in Volume I. When the most strenuous efforts by generation after generation of investigators fail to reach a defined objective, there is always a strong temptation to take the stand that the objective is inherently unattainable. “In short,” says Alfred Lande, “if you cannot clarify a problematic situation, declare it to be ‘fundamental,’ then proclaim a corresponding ‘principle’.”17 So physical science fills up with principles of impotence rather than explanations.

In the universe of motion the dimensions of all quantities of all kinds can be expressed in terms of space and time only. The space-time dimensions of the basic mechanical quantities were identified in Volume I. In this chapter we have added those of the quantities involved in the flow of electric current. The chapters that follow will complete this task by identifying the space-time dimensions of the electric and magnetic quantities that make their appearance in the phenomena due to charges of one kind or another, and in the magnetic effects of electric currents.

This clarification of the dimensional relations is accompanied by a determination of the natural unit magnitudes of the various physical quantities. The system of units commonly utilized in dealing with electric currents was developed independently of the mechanical units on an arbitrary basis. In order to ascertain the relation between this arbitrary system and the natural system of units it is necessary to measure some one physical quantity whose magnitude can be identified in the natural system, as was done in the previous determination of the relations between the natural and conventional units of space, time, and mass. For this purpose we will use the Faraday constant, the observed relation between the quantity of electricity and the mass involved in electrolytic action. Multiplying this constant, 2.89366 x 10¹⁴ esu/g-equiv., by the natural unit of atomic weight, 1.65979 x 10⁻²⁴ g, we arrive at 4.80287 x 10⁻¹⁰ esu as the natural unit of electrical quantity.

The magnitude of the electric current is the number of electrons per unit of time; that is, units of space per unit of time, or speed. Thus the natural unit of current could be expressed as the natural unit of speed, 2.99793 x 10¹⁰ cm/sec. In electrical terms it is the natural unit of quantity divided by the natural unit of time, and amounts to 3.15842 x 10⁶ esu/sec, or 1.05353 x 10⁻³ amperes. The conventional unit of electrical energy, the watt-hour, is equal to 3.6 x 10¹⁰ ergs. The natural unit of energy, 1.49175 x 10⁻³ ergs, is therefore equivalent to 4.14375 x 10⁻¹⁴ watt-hours. Dividing this unit by the natural unit of time, we obtain the natural unit of power, 9.8099 x 10¹² ergs/sec = 9.8099 x 10⁵ watts. A division by the natural unit of current then gives us the natural unit of electromotive force, or voltage, 9.31146 x 10⁸ volts. Another division by current brings us to the natural unit of resistance, 8.83834 x 10¹¹ ohms.

The basic quantities of current electricity and their natural units in electrical terms can be summarized as follows:

Dimensions    Quantity      Natural Unit
s             quantity      4.80287 x 10⁻¹⁰ esu
s/t           current       1.05353 x 10⁻³ amperes
1/s           power         9.8099 x 10⁵ watts
t/s           energy        4.14375 x 10⁻¹⁴ watt-hours
t/s²          voltage       9.31146 x 10⁸ volts
t²/s³         resistance    8.83834 x 10¹¹ ohms
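Since these figures are tied together by simple multiplications and divisions, the whole chain can be checked mechanically. The following is a minimal Python sketch, not part of the original text; it assumes only the conventional conversion factors 1 coulomb = 2.99793 x 10⁹ esu and 1 watt = 10⁷ ergs/sec, and the natural unit of time from Volume I (which is also the quotient of the quantity and current figures given above). The variable names are ours.

    # Retrace the natural-unit chain from the Faraday constant (cgs-esu values
    # quoted in the text).
    FARADAY = 2.89366e14            # esu per gram-equivalent
    NAT_ATOMIC_WEIGHT = 1.65979e-24 # g, natural unit of atomic weight
    NAT_TIME = 1.520655e-16         # sec, natural unit of time (Volume I)
    NAT_ENERGY = 1.49175e-3         # erg, natural unit of energy
    ESU_PER_COULOMB = 2.99793e9     # conventional esu-coulomb conversion
    ERG_PER_WATT_HOUR = 3.6e10

    nat_quantity = FARADAY * NAT_ATOMIC_WEIGHT               # 4.80287e-10 esu
    nat_current = nat_quantity / NAT_TIME / ESU_PER_COULOMB  # 1.05353e-3 A
    nat_energy_wh = NAT_ENERGY / ERG_PER_WATT_HOUR           # 4.14375e-14 Wh
    nat_power = NAT_ENERGY / NAT_TIME * 1e-7                 # 9.8099e5 W
    nat_voltage = nat_power / nat_current                    # 9.31146e8 V
    nat_resistance = nat_voltage / nat_current               # 8.83834e11 ohms

    for name, value, unit in [
        ("quantity", nat_quantity, "esu"), ("current", nat_current, "A"),
        ("energy", nat_energy_wh, "Wh"), ("power", nat_power, "W"),
        ("voltage", nat_voltage, "V"), ("resistance", nat_resistance, "ohm"),
    ]:
        print(f"{name:<12} {value:.5e} {unit}")

Dividing the quantity figure by the current figure recovers the natural unit of time used in the sketch, so the chain is internally consistent.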

Another electrical quantity that should be mentioned, because of the key role that it plays in the present-day mathematical treatment of magnetism, is “current density,” which is

defined as “the quantity of charge passing per second through unit area of a plane normal to the line of flow.” This is a strange quantity, quite unlike any physical quantity that has previously been discussed in the pages of this and the preceding volume, in that it is not a relation between space and time. When we recognize that the quantity is actually current per unit of area, rather than “charge” (a fact that is verified by the units, amperes per square meter, in which it is expressed), its space-time dimensions are seen to be s/t x 1/s² = 1/st. These are not the dimensions of a motion, or of a property of a motion. It follows that this quantity, as a whole, has no physical significance. It is merely a mathematical convenience.

The fundamental laws of current electricity known to present-day science–such as Ohm’s Law, Kirchhoff’s Laws, and their derivatives–are empirical generalizations, and their application is not affected by the clarification of the essential nature of the electric current. The substance of these laws, and the relevant details, are adequately covered in existing scientific and technical literature. In conformity with the general plan of this work, as set forth earlier, these subjects will not be included in our presentation.

This is an appropriate time to make some comments about the concept of “natural units.” There is no ambiguity in this concept, so far as the basic units of motion are concerned. The same is true, in general, of the units of the simple scalar quantities, although some questions do arise. For example, the unit of space in the region inside unit distance, the time region, as we are calling it, is inherently just as large as the unit of space in the region outside unit distance, but as measured it is reduced by the inter-regional ratio, 156.444, for reasons previously explained. We cannot legitimately regard this quantity as something less than a full unit, since, as we saw in Volume I, it has the same status in the time region that the full-sized natural unit of space has in the region outside unit distance. The logical way of handling this situation appears to be to take the stand that there are two different natural units of distance (one-dimensional space), a simple unit and a compound unit, that apply under different circumstances.

The more complex physical quantities are subject to still more variability in the unit magnitudes, because these quantities are combinations of the simpler quantities, and the combination may take place in different ways and under different conditions. For instance, as we saw in our examination of the units of mass in Volume I, there are several different manifestations of mass, each of which involves a different combination of natural units and therefore has a natural unit of its own. In this case, the primary cause of variability is the existence of a secondary mass component that is related to the primary mass by the inter-regional ratio, or a modification thereof, but some additional factors introduce further variability, as indicated in the earlier discussion.
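For readers who wish to verify the space-time assignments of this chapter mechanically, including the 1/st result for current density, the following Python sketch (added for illustration only; the exponent-pair encoding is an arbitrary convenience of ours) represents each dimension sᵃtᵇ as the integer pair (a, b):

    # Each space-time dimension s^a * t^b is represented by the pair (a, b).
    def mul(x, y):
        return (x[0] + y[0], x[1] + y[1])

    def div(x, y):
        return (x[0] - y[0], x[1] - y[1])

    S, T = (1, 0), (0, 1)          # space and time
    CURRENT = div(S, T)            # I is a speed, s/t
    RESISTANCE = (-3, 2)           # R is t^2/s^3
    QUANTITY = S                   # q = It is space, s

    assert mul(CURRENT, RESISTANCE) == (-2, 1)                 # V = IR     -> t/s^2
    assert mul(mul(CURRENT, CURRENT), RESISTANCE) == (-1, 0)   # P = I^2 R  -> 1/s
    assert mul(mul(mul(CURRENT, CURRENT), RESISTANCE), T) == (-1, 1)  # E = I^2 Rt -> t/s
    assert mul((-2, 1), QUANTITY) == (-1, 1)                   # W = Vq     -> t/s
    assert div(CURRENT, mul(S, S)) == (-1, -1)                 # current density -> 1/st
    print("all space-time identities verified")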

CHAPTER 10

Electrical Resistance

While the motion of the electric current through matter is equivalent to motion of matter through space, as brought out in the discussion in Chapter 9, the conditions under which each type of motion is encountered in our ordinary experience emphasize different aspects of the common fundamentals. In dealing with the motion of matter through extension space we are primarily concerned with the motions of individual objects. Newton’s laws of motion, which are the foundation stones of mechanics, deal with the application of forces to initiate or modify the motions of such objects, and with the transfer of motion from one object to another. Our interest in the electric current, on the other hand, is concerned mainly with the continuous aspects of the current flow, and the status of the individual objects that are involved is largely irrelevant. The mobility of the spatial units in the current flow also introduces some kinds of variability that are not present in the movement of matter through extension space. Consequently, there are behavior characteristics, or properties, of material structures that are peculiar to the relation between these structures and the moving electrons. Expressing this in another way, we may say that matter has some distinctive electrical properties. The basic property of this nature is resistance. As pointed out in Chapter 9, resistance is the only quantity participating in the fundamental relations of current flow that is not a familiar feature of the mechanical system of equations, the equations that deal with the motion of matter through extension space.

Present-day ideas as to the origin of electrical resistance are summarized by one author in this manner:

Ability to conduct electricity…is due to the presence of large numbers of quasi-free electrons which under the action of an applied electric field are able to flow through the metallic lattice…Disturbing influences…impede the free flow of electrons, scattering them and giving rise to a resistance.18

As indicated in the preceding chapter, the development of the theory of the universe of motion arrives at a totally different concept of the nature of electrical resistance. The electrons, we find, are derived from the environment. It was brought out in Volume I that there are physical processes in operation which produce electrons in substantial quantities, and that, although the motions that constitute these electrons are, in many cases, absorbed by atomic structures, the opportunities for utilizing this type of motion in such structures are limited. It follows that there is always a large excess of free electrons in the material sector of the universe, most of which are uncharged. In this uncharged state the electrons cannot move with respect to extension space, because they are inherently rotating units of space, and the relation of space to space is not motion. In open space, therefore, each uncharged electron remains permanently in the same location with respect to the natural reference system, in the manner of a photon. In the context of the stationary spatial reference system the uncharged electron, like the photon, is carried outward at the speed of light by the progression of the natural reference system. All material aggregates are thus exposed to a flux of electrons similar to the continual bombardment by photons of radiation. Meanwhile there are other processes, to be discussed later, whereby electrons are returned to the environment. The electron population of a material aggregate such as the earth therefore stabilizes at an equilibrium level.

These processes that determine the equilibrium electron concentration are independent of the nature of the atoms of matter and of the atomic volume. The concentration of electrons is therefore uniform in electrically isolated conductors where there is no current flow. It follows that the number of electrons involved in the thermal motion of atoms of matter is proportional to the atomic volume, and the energy of that motion is determined by the effective rotational factors of the atoms. The atomic volume and thermal energy therefore determine the resistance.

Those substances whose rotational motion is entirely in time (Divisions I and II) have their thermal motion in space, in accordance with the general rule governing addition of motions, as set forth in Volume I. For these substances zero thermal motion corresponds to zero resistance, and the resistance increases with the temperature. This is due to the fact that the concentration of electrons (units of space) in the time component of the conductor is constant for any specific current magnitude, and the current therefore increases the thermal motion by a specific proportion. Such substances are conductors.

Where there are two dimensions of rotation in space, as in many of the elements of Division IV, the thermal motion, which requires two open dimensions because of the finite diameters of the moving electrons, is necessarily in time. In this case, zero temperature corresponds to zero motion in time. Here the resistance is initially extremely high, but decreases with an increase in temperature. Substances of this kind are known as insulators or dielectrics.

Where there is only one dimension of spatial rotation, as in Division III, the elements of greatest electric displacement, those closest to the electropositive divisions, are able to follow the positive pattern, and are conductors. The Division III elements of lower electric displacement follow a modified time motion pattern, with resistance decreasing from a high, but finite, level at zero temperature. These substances of intermediate characteristics are semiconductors.

For the present we will be concerned primarily with the resistance of conductors, and will further limit the discussion to what may be called the “regular” pattern of conductor resistance. A limitation of this kind is necessary at the present stage of the investigation because the large element of uncertainty in the experimental information on the resistivity of the various conducting materials makes the clarification of the resistance relations a slow and difficult process. The early stages of the development of the Reciprocal System of theory, prior to the publication of the first edition of this work in 1959, which were very productive in the non-electrical areas, made relatively little progress in dealing with the electrical properties, largely because of conflicts between the theoretical deductions and some experimental results that have since been found to be incorrect. The increasing scope and accuracy of the experimental work in the intervening years has improved this situation very materially, but the basic problem still remains. Ideally it should be possible to deduce all of the pertinent information from theoretical premises alone, without reference to experimental determinations, but as a practical matter this is not feasible.
A few steps can be, and have been, taken on a purely theoretical basis, particularly where the previous development of the theory has cast some important new light on the subject matter, but from the practical standpoint an extensive

and detailed investigation in any area is possible only if the theoretical study and the checking of the theoretical conclusions against experimental and observational data go hand in hand. It follows that where empirical data are lacking, progress is difficult, and where they are seriously wrong, it is essentially impossible.

Unfortunately, resistance measurements are subject to many factors that introduce uncertainty into the results. The purity of the specimen is particularly critical because of the great difference between the resistivities of conductors and dielectrics. Even a very small amount of a dielectric impurity can alter the resistance substantially. Conventional theory has no explanation for the magnitude of this effect. If the electrons move through the interstices between the atoms, as this theory contends, a few additional obstacles in the path should not contribute significantly to the resistance. But, as we saw in Chapter 9, the current moves through all of the atoms of the conductor, including the impurity atoms, and it increases the heat content of each atom in proportion to its resistance. The extremely high dielectric resistance results in a large contribution by each impurity atom, and even a very small number of such atoms therefore has a significant effect. Semiconducting elements are less effective as impurities, but they may still have resistivities thousands of times as great as those of the conductor metals.

The resistance also varies under heat treatment, and careful annealing is required before reliable measurements can be made. The adequacy of this treatment in many, if not most, of the resistance determinations is questionable. For example, G. T. Meaden reports that the resistance of beryllium was lowered more than fifty percent by such treatment, and comments that “much earlier work was clearly based on unannealed specimens.”19 Other sources of uncertainty include changes in crystal structure or magnetic behavior that take place at different temperatures or pressures in different specimens, or under different conditions, often with substantial hysteresis effects. Ultimately, of course, it will be desirable to bring all of these variables within the scope of the theoretical treatment, but for the present our objective will have to be limited to deducing from the theory the nature and magnitude of the changes in resistance resulting from temperature and pressure variations in the absence of these complicating factors, and then to show that enough of the experimental results are in agreement with the theory to make it probable that the discrepancies, where they occur, are due to one or more of these factors that modify the normal values.

Inasmuch as the electrical resistance is a product of the thermal motion, the energy of the electron motion is in equilibrium with the thermal energy. The resistance is therefore directly proportional to the effective thermal energy; that is, to the temperature. It follows that the increment of resistance per degree is a constant for each (unmodified) substance, a magnitude that is determined by the atomic characteristics. The curve representing the relation of the resistivity to the temperature, in application to a single atom, is thus linear. Like the curves representing the temperature variation of the other properties that we examined in the earlier chapters, and for the same reasons, the initial level of the resistivity curve is negative.
From this initial level to the melting point the resistivity of an unmodified atom (one that has not undergone a structural rearrangement or other change that modifies the resistance relations) follows a single straight line, rather than a curve composed of two or more segments of different slopes, as in the specific heat and

thermal expansion curves. This limitation to a single line is characteristic of the electron relations, and is due to the fact that the electron has only one rotational displacement unit, and therefore cannot shift to a multi-unit type of motion in the manner of the complex atomic structures. A somewhat similar change in the resistivity curve does occur, however, if the factors that determine the resistance are modified by some rearrangement of the kind mentioned earlier. As P. W. Bridgman commented in discussing some of his results, after a change of this nature has taken place, we are really dealing with a different substance. The curve for the modified atom is also a straight line, but it is not collinear with the curve of the unmodified atom. At the point of transition to the new form the resistivity of the individual atom abruptly changes to a different straight line relation. The resistivity of the aggregate follows a transition curve from one line to the other, as usual.

At the lower end of the temperature range, the resistivity of the solid aggregate follows another transition curve of the same nature as those that we found in the curves representing the properties discussed earlier. The relation of the resistance to the temperature in this temperature range is currently regarded as exponential, but as we saw in other cases of the same kind, it is actually a probability curve that reflects the resistivity of the diminishing number of atoms that are still individually above the temperature at which the atomic resistivity reaches the zero level. The curve for the solid aggregate also diverges from the single atom curve at the upper end, due to the increasing proportion of liquid molecules in the solid aggregate.

In this case, again, two values are required for a complete definition of the linear curve; either the coordinates of two points on the curve, or the slope of the curve and the location of one fixed point. A fixed point that is available from theoretical premises is the zero point temperature, the point at which the curve for the individual atom reaches the zero resistance level. The theoretical factors that determine this temperature are the same as those applying to the specific heat and thermal expansion curves, except that since the resistivity is an interaction between the atom and the electron, it is effective only when the motions of both objects are directed outward. The theoretical zero point temperature normally applicable to resistivity is therefore twice that applicable to the properties previously considered.

Up to this point the uncertainties in the experimental results have had no effect on the comparison of the theoretical conclusions with experience. It is conceded that the relation of resistivity to temperature is generally linear, with deviations from linearity in certain temperature ranges and under certain conditions. The only question at issue is whether these deviations are satisfactorily explained by the Reciprocal System of theory. When this question is considered in isolation, without taking into account the status of that system as a general physical theory, the answer is a matter of judgment, not a factual matter that can be resolved by comparison with observation. But we have now arrived at a place where the theory identifies some specific numerical values. Here agreement between theory and observation is a matter of objective fact, not one that calls for a judgment.
But agreement within an acceptable margin can be expected only if (1) the experimental resistivities are reasonably accurate, (2) the zero point temperatures applicable to specific heat (which are being used as a base) were correctly determined,

and (3) the theoretical calculation and the resistivity measurement refer to the same structure.

Table 24 applies equation 7-1, with a doubled numerical constant, and the rotational factors from Table 22, to a determination of the temperatures of the zero levels of the resistance curves of the elements included in the study, and compares the results with the corresponding points on the empirical curves. The amount of uncertainty in the resistivity measurements is reflected in the fact that for 11 of these 40 elements there are two sets of experimental results that have been selected as the “best” values by different data compilers.20 In three other cases there are substantial differences in the experimental results at the higher temperatures, but the curves converge on the same value of the zero resistivity temperature. In a situation where uncertainties of this magnitude are prevalent, it can hardly be expected that there will be anywhere near a complete agreement between the theoretical and experimental values. Nevertheless, if we take the closer of the two “best” experimental results in the 11 two-value cases, the theoretical and experimental values agree within four degrees in 26 of the 40 elements, almost two-thirds of the total.

The rare earth elements were not included in this study because the resistances of these elements, like so many of their other properties, follow a pattern differing in some respects from that of most other elements, including a transition to a new structural form at a relatively low temperature, accompanied by a major decrease in the slope of the resistivity curve. Because of this low temperature transition it is difficult to locate the zero point temperature from the empirical data, but in 9 of the 13 elements of this group for which sufficient data are available to enable an approximate identification of this temperature, it appears to be between 10 and 20 degrees K. The theoretical range for these elements, as indicated by the factors listed in Table 22, is from 12 to 20 degrees. Here again, then, the measured resistivities of two-thirds of the elements are at least approximately in agreement with the theoretical values.

The existence of this amount of agreement, in spite of all of the influences tending to generate discrepancies, is about as good a confirmation of the validity of the theory, as a general proposition, as can be expected under the existing circumstances. Furthermore, it is not unlikely that there are alternate resistance patterns that result in explainable deviations from the calculated values, and some of the larger discrepancies may be thus accounted for when an investigation of broader scope is undertaken.

Table 24: Temperature of Zero Resistance

              Li     Na     Mg     Al     K      Sc     Ti     V
Total Factors 14     6      12     14     4      10     14     12
T₀ Calc.      56     24     48     56     16     40     56     48
T₀ Obs.       56     30     45     57-60  17     33     54     45

              Ru     Rh     Pd     Ag     Cd     In     Sn     Sb
Total Factors 14     13     10     8      5      12     7      8
T₀ Calc.      56     52     40     32     20     48     28     32
T₀ Obs.       44-58  44-55  39     28-35  18     19     25     24-35

              Cr     Fe     Co     Ni     Cu     Zn     Ga     As     Rb     Y      Zr     Mo
Total Factors 14     16     14     14     12     8      4      12     2      8      9      14
T₀ Calc.      56     64     56     56     48     32     16     48     8      32     36     56
T₀ Obs.       69     73     64-78  55     46-49  27     31     42     11     28     30-45  36-55

              Cs     Ba     Hf     Ta     W      Re     Ir     Pt     Au     Hg     Tl     Pb
Total Factors 2      4      8      8      12     10     11     8      6      4      4      4
T₀ Calc.      8      16     32     32     48     40     44     32     24     16     16     16
T₀ Obs.       8      26     32     30     46-55  45     28-46  33     18     7      16     12
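Two regularities in Table 24 lend themselves to a mechanical check: every calculated zero point is exactly four degrees per unit of the total factors (the doubled numerical constant of equation 7-1, as applied here), and the agreement count quoted above can be reproduced from the tabulated numbers. The following Python sketch is ours; the data are simply transcribed from Table 24, with two-value observations given as pairs:

    # (element, total factors, observed T0 or (low, high) pair), from Table 24
    data = [
        ("Li", 14, 56), ("Na", 6, 30), ("Mg", 12, 45), ("Al", 14, (57, 60)),
        ("K", 4, 17), ("Sc", 10, 33), ("Ti", 14, 54), ("V", 12, 45),
        ("Ru", 14, (44, 58)), ("Rh", 13, (44, 55)), ("Pd", 10, 39),
        ("Ag", 8, (28, 35)), ("Cd", 5, 18), ("In", 12, 19), ("Sn", 7, 25),
        ("Sb", 8, (24, 35)), ("Cr", 14, 69), ("Fe", 16, 73), ("Co", 14, (64, 78)),
        ("Ni", 14, 55), ("Cu", 12, (46, 49)), ("Zn", 8, 27), ("Ga", 4, 31),
        ("As", 12, 42), ("Rb", 2, 11), ("Y", 8, 28), ("Zr", 9, (30, 45)),
        ("Mo", 14, (36, 55)), ("Cs", 2, 8), ("Ba", 4, 26), ("Hf", 8, 32),
        ("Ta", 8, 30), ("W", 12, (46, 55)), ("Re", 10, 45), ("Ir", 11, (28, 46)),
        ("Pt", 8, 33), ("Au", 6, 18), ("Hg", 4, 7), ("Tl", 4, 16), ("Pb", 4, 12),
    ]

    hits = 0
    for name, factors, obs in data:
        calc = 4 * factors                        # four degrees per factor unit
        cands = obs if isinstance(obs, tuple) else (obs,)
        best = min(abs(calc - o) for o in cands)  # closer of the two "best" values
        hits += best <= 4
    print(f"{hits} of {len(data)} elements agree within four degrees")  # 26 of 40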

For the second defining value of the resistivity curves we can use the temperature coefficient of resistivity, the slope of the curve, a magnitude that reflects the inherent resistivity of the conductor material. The temperature coefficient as given in the published physical tables is not the required value. This is merely a relative magnitude, the incremental change in resistivity relative to the resistivity at a reference temperature, usually 20 degrees C. What is needed for present purposes is the absolute coefficient, in microhm-centimeters per degree, or some similar unit. Some studies have been made in this area, and as might be expected, it has been found that the electric (one-dimensional) speed displacement is the principal determinant of the resistivity, in the sense that it is responsible for the greatest amount of variation. However, the effective quantity is not usually the normal electric displacement of the atoms of the element involved, as this value is generally modified by the way that the atom interacts with the electrons. The conclusions that have been reached as to the nature and magnitude of these modifications are still rather tentative, and there are major uncertainties in the empirical values against which the theoretical results would normally be checked to test their validity. The results of these studies have therefore been omitted from this volume, in conformity with the general policy of restricting the present publication to those results whose validity is firmly established.

The experimental difficulties that introduce uncertainties into the correlations between the theoretical and experimental values of the resistivity do not play as large a role in the relative resistance under compression. The compression results therefore give us a more definite and unequivocal picture. Again, however, this initial exploration of the subject, as it appears in the context of the Reciprocal System of theory, will have to be confined to the “regular” pattern, the one followed by most of the metallic conductors. Because the movement of electrons (space) through matter is the inverse of the movement of matter through space, the inter-regional relations applicable to the effect of pressure on resistance are the inverse of those that apply to the change in volume under pressure. We found in Chapter 4 that the volume of a solid under compression conforms to the relation PV² = k. By reason of the inverse nature of the electron movement, the corresponding equation for electrical resistance is

P²R = k (10-1)

As in the compressibility equation, the symbol P in this expression refers to the total effective pressure. If we give the internal component of this total the designation P₀, as in the volume compressibility discussion, and limit the term P to the externally applied pressure, the equation becomes

(P + P₀)²R = k (10-2)
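Expressed relative to the zero-external-pressure value, equation 10-2 takes the form R/R₀ = (P₀/(P₀ + P))². As a minimal illustration (ours; it assumes the tungsten internal pressure of 1733 M kg/cm² listed in Table 25 below), this relation reproduces the calculated tungsten column of Table 26:

    def relative_resistance(p_ext, p0):
        """Equation 10-2 in relative form: R/R0 = (P0 / (P0 + P))**2."""
        return (p0 / (p0 + p_ext)) ** 2

    P0_TUNGSTEN = 1733.0  # M kg/cm^2, from Table 25 (same in comp. and res.)
    for p in range(0, 101, 10):
        print(f"{p:3d}  {relative_resistance(p, P0_TUNGSTEN):.3f}")
    # Output: 1.000, 0.989, 0.977, 0.966, 0.955, 0.945, 0.934,
    #         0.924, 0.914, 0.904, 0.894 -- the W "Calc." column of Table 26.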

The general situation with respect to the values of the internal pressure applicable to resistance is essentially the same as that encountered in the study of compressibility. Some elements maintain the same internal pressure throughout Bridgman’s entire pressure range, some undergo second order transitions to higher P₀ values, and others are subject to first order transitions, just as in the volume relations. However, the internal pressure applicable to resistance is not necessarily the same as that applicable to volume. In some substances, tungsten and platinum, for example, these internal pressures actually are identical at every point in the pressure range of Bridgman’s experiments. In another, and larger, class, the applicable values of P₀ are the same as in compression, but the transition from the lower to the higher pressure takes place at a different pressure. The values for nickel and iron illustrate this common pattern. The initial reduction in the volume of nickel took place on the basis of an internal pressure of 913 M kg/cm². Somewhere between an external pressure of 30 M kg/cm² (Bridgman’s pressure limit on this element) and 100 M kg/cm² (the initial point of later experiments at very high pressure) the internal pressure increased to 1370 M kg/cm² (from azy factors 4-8-1 to 4-8-1½). In the resistance measurements the same transition occurred, but it took place at a lower external pressure, between 10 and 20 M kg/cm². Iron has the same internal pressures in resistance as nickel, with the transition at a somewhat higher external pressure, between 40 and 50 M kg/cm². But in compression this transition did not appear at all in Bridgman’s pressure range, and was evident only in the shock wave experiments carried to much higher pressures.

Table 25 is a comparison of the internal pressures in resistance and compression for the elements included in the study. The symbol x following or preceding some of the values indicates that there is evidence of a transition to or from a different internal pressure, but the available data are not sufficient to define the alternate pressure level.

Table 25: Internal Pressures in Resistance and Compression
(Bridgman’s pressure range; P₀ in M kg/cm²)

      Comp.       Res.                     Comp.       Res.
Be    571-856     1285               Pd    1004        1004-1506
Na    33.6-50.4   33.6-50.4-134.4    Ag    577-x       577-866
Al    376-564     564-1128           Cd    246-x       246-554
K     18.8        x-37.6             In    236         236-354
V     913-x       1370               Sn    302         226-453
Cr    x-913       x-457              Ta    1072        1206-x
Mn    293-1172    586-1172           W     1733        1733
Fe    913         913-1370           Ir    2007        1338-2007
Ni    913-1370    913-1370           Pt    1338        1338
Cu    845-1266    1266               Au    867         650-867
Zn    305         305-610            Tl    x-253       169-x
As    274-548     274-548-822        Pb    221-331     165-441
Nb    897-1196    1794               Bi    165-331     x-662
Mo    1442        1442-2121          Th    313-626     626-1565
Rh    1442        1442               U     578-1156    419-838

The amount of difference between the two columns of the table should not be surprising. The atomic rotations that determine the azy factors are the same in both cases, but the possible values of these factors have a substantial range of variation, and the influences that affect the values of these factors are not identical. In view of the participation of the electrons in the resistivity relations, and the large impurity effects, neither of which enters into the volume relations, some difference in the pressures at which the transitions take place can be considered normal. There is, at present, no explanation for those cases in which the internal pressures indicated by the results of the compression and resistance measurements are widely divergent, but differences in the specimens can certainly be suspected.

Table 26 compares the relative resistances calculated from equation 10-2 with Bridgman’s results on some typical elements. The data are presented in the same form as the compressibility tables in Chapter 4, to facilitate comparisons between the two sets of results. This includes showing the azy factors for each element rather than the internal pressures, but the corresponding pressures are available in Table 25. As in the compressibility tables, values above the transition pressures are calculated relative to an observed value as a reference level. The reference value utilized is indicated by the symbol R following the figure given in the “calculated” column.

Table 26: Relative Resistance Under Compression

Pressure       W 4-8-3          Pt 4-8-2         Rh 4-8-2         Cu 4-8-1½
(M kg/cm²)   Calc.   Obs.     Calc.   Obs.     Calc.   Obs.     Calc.   Obs.
   0         1.000   1.000    1.000   1.000    1.000   1.000    1.000   1.000
  10          .989    .987     .985    .981     .986    .984     .984    .982
  20          .977    .975     .971    .963     .973    .968     .969    .965
  30          .966    .963     .957    .947     .960    .953     .954    .949
  40          .955    .951     .943    .931     .947    .939     .940    .934
  50          .945    .940     .929    .916     .934    .925     .925    .920
  60          .934    .930     .916    .903     .922    .912     .912    .907
  70          .924    .920     .903    .891     .910    .900     .898    .895
  80          .914    .911     .890    .880     .897    .889     .885    .884
  90          .904    .903     .878    .870     .886    .880     .872    .875
 100          .894    .895     .866    .861     .875    .872     .859    .866

Pressure     Ni 4-8-1,4-8-1½  Fe 4-8-1,4-8-1½  Pd 4-6-2,4-6-3   Zn 4-4-1,4-4-2
(M kg/cm²)   Calc.   Obs.     Calc.   Obs.     Calc.   Obs.     Calc.   Obs.
   0         1.000   1.000    1.000   1.000    1.000   1.000    1.000   1.000
  10          .978    .982     .978    .977     .980    .980     .938    .937
  20          .960    .965     .958    .956     .961    .960     .881    .887
  30          .946    .948     .937    .936     .943    .942     .836    .847
  40          .933    .933     .918    .919     .925    .925     .810    .812
  50          .919    .918     .901    .903     .907    .909     .786    .783
  60          .907    .904     .889    .888     .891    .894     .762    .756
  70          .894    .892     .875    .875     .880    .881     .740    .733
  80          .882    .880     .864    .862     .868    .862     .719    .713
  90          .870    .869     .853    .851     .858    .858     .699    .695
 100          .858R   .858     .841R   .841     .847R   .847     .679R   .679
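The reference-value procedure can also be checked numerically. The sketch below is our reconstruction of that procedure for nickel, using the 913 and 1370 M kg/cm² internal pressures from Table 25 and the observed .858 value at 100 M kg/cm² as the reference level; it reproduces the Ni “Calc.” column to within rounding:

    def rel_r(p, p0):
        """Equation 10-2 in relative form: R/R0 = (P0 / (P0 + P))**2."""
        return (p0 / (p0 + p)) ** 2

    P0_LOW, P0_HIGH = 913.0, 1370.0   # Ni internal pressures, M kg/cm^2
    P_REF, R_REF = 100.0, 0.858       # observed reference point (Table 26)

    for p in range(0, 101, 10):
        if p <= 10:   # below the 10-20 M kg/cm^2 transition: lower P0 applies
            r = rel_r(p, P0_LOW)
        else:         # above it: higher P0, scaled through the reference value
            r = R_REF * rel_r(p, P0_HIGH) / rel_r(P_REF, P0_HIGH)
        print(f"{p:3d}  {r:.3f}")
    # Output: 1.000, 0.978, 0.960, 0.946, 0.933, 0.920, 0.907,
    #         0.894, 0.882, 0.870, 0.858 (the table rounds the 50-unit entry to .919)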

In those cases where the correct assignment of azy factors and internal pressures above the transition point is not definitely indicated by the corresponding compressibility values, the selections from among the possible values are necessarily based on the empirical measurements, and they are therefore subject to some degree of uncertainty. Agreement between the experimental and the semi-theoretical values in this resistance range therefore validates only the exponential relation in equation 10-2, and does not necessarily confirm the specific values that have been calculated. The theoretical results below the transition points, on the other hand, are quite firm, particularly where the indicated internal pressures are supported by the results of the compressibility measurements. On this basis, the extent of agreement between theory and observation in the values applicable to those elements that maintain the same internal pressures through the full 100,000 kg/cm² pressure range of Bridgman’s measurements is an indication of the experimental accuracy. The accuracy thus indicated is consistent with the estimates made earlier on the basis of other criteria.

Inasmuch as the difference in the form of the compressibility equation, PV² = k (equation 4-4), and that of the pressure-resistance equation, P²R = k (equation 10-1), is a requirement of the general reciprocal relation between space and time specified in the postulates of the Reciprocal System of theory, the joint verification of these two equations is a significant addition to the mass of evidence confirming the validity of this reciprocal relation, the cornerstone of the quantitative expression of the theory of the universe of motion.

CHAPTER 11

Thermoelectric Properties

As brought out in Chapter 9, the equivalent space in which the thermal motion of the atoms of matter takes place contains a concentration of electrons, the magnitude of which is

determined, in the first instance, by factors that are independent of the thermal motion. In the thermal process the atoms move through the electron space as well as through the equivalent of extension space. Where the net time displacement of the atoms of matter provides a time continuum in which the electrons (units of space) can move, a portion of the atomic motion is communicated to the electrons. The thermal motion in the time region environment therefore eventually arrives at an equilibrium between motion of matter through space and motion of space (electrons) through matter.

It should be noted particularly that the motion of the electrons through the matter is a part of the thermal motion, not something separate. A mass m attains a certain temperature T when the effective energy of the thermal motion reaches the corresponding level. It is immaterial from this standpoint whether the energy is that of motion of mass through equivalent space, or motion of space (electrons) through matter, or a combination of the two. In previous discussions of the hypothesis that metallic conduction of heat is due to the movement of electrons, the objection has been raised that there is no indication of any increment in the specific heat due to the thermal energy of the electrons. The development of the Reciprocal System of theory has not only provided a firm theoretical basis for what was previously no more than a hypothesis–the electronic nature of the conduction process–but has also supplied the answer to this objection. The electron movement has no effect on the specific heat because it is not an addition to the thermal motion of the atoms; it is an integral part of the combination motion that determines the magnitude of that specific heat.

Because the factors determining the electron capture from and loss to the environment are independent of the nature of the matter and the amount of thermal motion, the equilibrium concentration is the same in any isolated conductor, irrespective of the material of which the conductor is composed, the temperature, or the pressure. All of these factors do, however, enter into the determination of the thermal energy per electron. Like the gas pressure in a closed container, which depends on the number of molecules and the average energy per molecule, the electric voltage within an isolated conductor is determined by the number of electrons and the average energy per electron. In such an isolated conductor the electron concentration is uniform. The electric voltage is therefore proportional to the thermal energy per electron.

The energy level at which the electrons are in thermal equilibrium with the atoms of a conductor depends on the material of which the conductor is composed. If two conductors of dissimilar composition, copper and zinc, let us say, are brought into contact, the difference in the electron energy level will manifest itself as a voltage differential. A flow of electrons will take place from the conductor with the higher (more negative) voltage, the zinc, to the copper until enough electrons have been transferred to bring the two conductors to the same voltage. What then exists is an equilibrium between a smaller number of relatively high energy electrons in the zinc and a greater number of relatively low energy electrons in the copper.

In this example it is assumed that the voltages in the conductors are allowed to reach an equilibrium. Some more interesting and significant effects are produced where equilibrium is not established.
For instance, a continuing current may be passed through the two conductors. If the electron flow is from the zinc to the copper, the electrons leave the zinc

with the relatively high voltage that prevails in that conductor. In this case the lower voltage of the electrons in the copper conductor cannot be counterbalanced by an increase in the electron concentration, as all of the electrons that enter the copper under steady flow conditions pass on through. The incoming electrons therefore lose a portion of their energy content in the process of conforming to the new environment. The difference is given up as heat, and the temperature in the vicinity of the zinc-copper junction increases. If the section of the conductor under consideration is part of a circuit in which the electrons return to the zinc, this process is reversed at the copper-zinc junction. Here the energy level of the incoming electrons rises to conform with the higher voltage of the zinc, and heat is absorbed from the environment to provide the electron energy increment. This phenomenon is known as the Peltier effect.

In the Peltier effect a flow of current causes a difference between the temperatures at the two junctions. The Seebeck effect is the inverse process. Here a difference in temperature between the two junctions causes a current to flow through the circuit. At the heated junction the increase in thermal energy raises the voltage of the high energy conductor, the zinc, more than that of the low energy conductor, the copper, because the size of the increment is proportional to the total energy. A current therefore flows from the zinc into the copper, and on to the low temperature junction. The result at this junction is the same as in the Peltier effect. The net result is therefore a transfer of heat from the hot junction to the cold junction.

Throughout the discussion in this volume, the term “electric current” refers to the movement of uncharged electrons through conductors, and the term “higher voltage” refers to a greater force, t/s², due to a greater concentration of electrons or its equivalent in a greater energy per electron. This electron flow is opposite to the conventional, arbitrarily assigned, “direction of current flow” utilized in most of the literature on current electricity. Ordinarily the findings of this work have been expressed in the customary terms of reference, even though in some cases those findings suggest that an improvement in terminology would be in order. In the present instance, however, it does not appear that any useful purpose would be served by incorporating an unfortunate mistake into an explanation whose primary purpose is to clarify relationships that have been confused by mistakes of other kinds.

A third thermoelectric phenomenon is the Thomson effect, which is produced when a current is passed through a conductor in which a temperature gradient exists. The result is a transfer of heat either with or against the temperature gradient. Here the electron energy in the warm section of the conductor is either greater or less than that in the cool section, depending on the thermoelectric characteristics of the conductor material. Let us consider the case in which the energy is greater in the warm section. The electrons that are in thermal equilibrium with the thermally moving matter in this section have a relatively high energy content. These energetic electrons are carried by the current flow to the cool section of the conductor. Here they must lose energy in order to arrive at a thermal equilibrium with the relatively cold matter of the conductor, and they give up heat to the environment.
If the current is reversed, the low energy electrons from the cool section travel to the warm section, where they absorb energy from the environment to attain thermal equilibrium. Both of these processes operate in reverse if the material of the conductor is one of the class of substances in which the effective voltage decreases with an increase in the temperature.

There are also some substances in which the response of the voltage to a temperature increment changes direction at some specific temperature level. A similar reversal of the Thomson effect occurs whenever a change of this kind takes place.

The quantitative measure of the capability to produce the thermoelectric effects is the thermoelectric power of the various conductor materials. This is the electric voltage, expressed either relative to some reference substance, usually lead, or as an absolute value measured against a superconducting material. Neither the theoretical study nor the experimental measurements are far enough advanced to make a quantitative comparison of theory with experimental results feasible at this time, but some of the general considerations that are involved in the quantitative determination can be deduced from theoretical premises.

The basic difference between the thermal motion of the electrons and that of the atoms of matter is in the location of the initial level, or zero point. The zero for the thermal motion of the atoms is the equilibrium condition, in which the atom is stationary in a three-dimensional coordinate system of reference because the motion imparted to it by the progression of the natural reference system is counterbalanced by the oppositely directed gravitational motion. On the other hand, the zero for the thermal motion of the electrons, the magnitude of the motion of the electrons in the absence of thermal motion, is the natural zero, which, in the context of the stationary reference system, is unit speed, the speed of light. The measure of the energy of the electron motion in matter is the deviation of the speed upward or downward from this unit level.

The fact that the zero energy levels of the positive and negative electron motion are coincident explains why each thermoelectric effect is a single phenomenon in which the zero level is merely a point in a continuous succession of magnitudes, rather than a discontinuous phenomenon such as the resistance to current flow. The difference between a small positive electron speed and a small negative electron speed is itself relatively small, and within the limits of what can be accomplished by a change in the conditions to which the conductor is subject. Such a change in conditions may therefore reverse the motion. But a substance that is a conductor in one temperature or pressure range does not become an insulator in another range, because the positive zero is the equivalent of the negative infinity, rather than the negative zero, and in application to the atomic motion there is, as a consequence, an immense gap between a small positive thermal speed and a small negative speed.

The status of the electron motion as positive or negative is determined by the position that the interacting atom occupies in its rotational group, in the same manner as the effective electric displacement of the atom. Each of these rotational groups consists of two divisions that are positive from the atomic standpoint, followed by two negative divisions. But since the electron is a single rotating system, instead of a double system of the atomic type, the various subdivisions of the atomic series are reduced to half size in application to the electrons. The reversals from positive to negative therefore occur at every divisional boundary in electronic processes, rather than at every second division.
Identification of individual elements as positive or negative from the thermoelectric standpoint is necessarily subject to some qualifications because, as previously mentioned, some elements are positive in one temperature range and negative in another, but a

reasonably good test of the theoretical conclusions can be accomplished by comparing the sign of the thermoelectric power as observed at zero degrees C with the divisional status of the elements for which thermoelectric data are available in one of the recent compilations. Table 27 presents such a comparison, omitting the Division I elements of displacements 1 and 2.

Table 27: Thermoelectric Power

Division I:    Al+  Ce+
Division II:   Co-  Fe-  Ni-  Mo+  Pd-
Division III:  Cu+  Zn+  Ge+  Ag+  Cd+  In+  Sn+  W+  Ir+  Pt-  Au+  Hg-  Tl+
Division IV:   Si-  Pb-  Bi-

The reason for the omissions from the tabulation is that the first two Division I elements of each rotational group follow a distinctive pattern of their own. In these elements the factor controlling the thermoelectric power is the magnetic rotational displacement, rather than the electric displacement. Because of the single rotation of the electron, the range of magnetic displacements from 1-1 to 4-4 becomes two divisions, with a reversal of sign at the boundaries. For reasons of symmetry, the interior section from 2-2 to 3-3 constitutes one division, in which the displacement one elements, sodium, potassium, and rubidium, have negative thermoelectric voltages. The corresponding members of the outer groups, lithium and cesium, have positive voltages. The displacement two elements may follow either the magnetic or the electric pattern. One of those included in the reference tabulation, calcium, has the same negative voltage as its neighbor, potassium, but magnesium, the corresponding member of the next lower group, takes the positive voltage of the higher Division I elements.

While the theoretical development that is being described in this work has not yet been extended to the quantitative aspects of the thermoelectric effects thus far discussed, it is of interest to note that the relation of the thermoelectric power to temperature has many of the characteristics that we encountered in our previous examination of the response of other properties of matter to temperature changes. This is well illustrated in Fig. 16, which shows the relation between temperature and the absolute thermoelectric power of platinum. Without the captions it would be difficult to distinguish this diagram from one applicable to thermal expansion, or to the specific heat of an element of one of the lower groups. This is no accident. The curves look alike because the same basic factors are applicable in all of these cases.

Fig. 16: Absolute Thermoelectric Power - Platinum

In the platinum curve the initial level is positive and the increments due to higher temperature are negative. This behavior is reversed in such elements as tungsten, which has a negative initial level and positive temperature increments up to a temperature of about 1400 K. Above this temperature there is a downward trend. This downward portion of the curve (linear, as usual) is the second segment. At the present stage of the theoretical development it appears probable that a general rule is involved here; that is, the second segment of each curve, the multi-unit segment, is directed toward more negative values, irrespective of the direction of the first (single-unit) segment.

Another thermoelectric effect is the conduction of heat. This is a process that is more important from a practical standpoint than those effects that were considered earlier, and it has therefore been given more attention in the present early stage of the development of the theory of the universe of motion. Although the examination of the subject was a somewhat incidental feature of the review of electric current phenomena undertaken in preparation for the new edition of this work, it has produced a fairly complete picture of the heat conductivity of the principal class of conducting metals, together with a general idea of the manner in which other elements deviate from the general pattern. It was possible to achieve these results in the limited time available because, as it turned out, the metallic conduction of heat is not a complex process, involving difficult concepts such as phonons, orbitals, relaxation processes, electron scattering, and so on, as seen by conventional physics, but a very simple process, capable of being defined by equally simple mathematics, closely related to the mathematical relations governing purely mechanical processes.

In the first situation discussed in this chapter, that in which two previously isolated conductors of different composition are brought into contact, the electron energies in the two conductors are necessarily unequal. As brought out there, the contact results in the establishment of an equilibrium between a larger number of less energetic electrons in one conductor and a smaller number of more energetic electrons in the other. Such an equilibrium cannot be established between two sections of a homogeneous conductor, because in this case there is no influence that requires either the individual electron energy or the electron concentration to take different values in different locations. If the environmental conditions are uniform, both the energy distribution and the electron concentration attain uniformity throughout the conductor.

However, if one end of a conductor composed of a material such as iron is heated, the energy content of the electrons at that location is increased, and a force differential is generated. Under the influence of the force gradient some of the hot electrons move toward the cold end of the conductor. At that end the newly arrived electrons give up heat in the process of reaching a thermal equilibrium with the atomic motion, and join the concentration of cold electrons previously existing at this location. The resulting higher electron pressure causes a flow of cold electrons back toward the hot end of the conductor. None of the characteristic electrical effects are produced in this process, because the two oppositely directed electron flows are equal in magnitude, and the effects produced by one current are cancelled by those produced by the other. The only observable result is a transfer of heat from the hot end of the conductor to the cold end.

It should be noted that no electrostatic potential difference is involved in either of these current flows. This is one of the obstacles in the way of a simple explanation of heat conduction in the context of conventional physical theory, where electric currents are assumed to result from differences in potential. As explained in Chapter 9, our finding is that all of the forces causing flow of current in the conductor under consideration, that due to the excess energy of the hot electrons, that due to the increased concentration of electrons at the cold end, and that due to electric voltage in general, are forces of a mechanical type, not electrostatic forces.

If the material of the conductor is a substance such as copper in which the voltage decreases (becomes less negative) as the temperature rises, the same result is produced in an inverse manner. Here the effective energy of the electrons at the hot end of the conductor is lower than that of the cold electrons. A flow of cold electrons into the hot region therefore takes place. These electrons absorb heat from the environment to attain thermal equilibrium with the matter of the conductor. The resulting increased concentration of hot electrons is then relieved by a flow of some of these electrons back toward the cold end of the conductor. Here, again, the two oppositely directed electron flows produce no net electrical effects.

The conduction of heat in metals by movement of electrons is essentially the same process as the convection of heat by movement of gas or liquid molecules. In a closed system, energetic molecules from a hot region move toward a cold region, while a parallel flow carries an equal number of cold molecules back to the hot region. There is only one significant difference between the two heat transfer processes. Because the fluid molecules are subject to a gravitational effect, heat transfer by convection is relatively rapid if it is assisted by a thermally caused difference in density, whereas it is much slower if the diffusion of the hot molecules operates against the gravitational force.

The quantitative measure of the ability of the electron movement to conduct heat is known as the thermal conductivity. Its magnitude is determined primarily (perhaps entirely) by the effective specific heat and the temperature coefficient of resistivity, both of which are inversely related to the conductivity.
There is a possibility that it may also be affected to a minor degree by some other influences not yet identified, but in any event, all of the modifying influences other than the specific heat are independent of the temperature, within the range of accuracy of the measurements of the thermal conductivity, and they can be combined into one constant value for each substance. The thermal conductivity of the substance is then this constant divided by the effective specific heat:

Thermal conductivity = k/cp   (11-1)

As we saw in the earlier chapters, the specific heat of the conductor materials follows a straight line relation to the temperature in the upper portion of the temperature range of the solid state, and the resistance is linearly related to the temperature at all points. At these higher temperatures, therefore, there is a constant relation between the thermal conductivity and the electrical conductivity (the reciprocal of the resistivity). This relation is known as the Wiedemann-Franz law.

The relation expressed in this law breaks down at the lower temperatures, as soon as the specific heat drops below the original straight line. However, the failure of the relation does not occur as soon as would be expected from the normal specific heats of the metals, most of which begin to drop away from the upper linear segment of the curve in the neighborhood of room temperature. The reason for the extension of the high temperature linear relation to a lower temperature in application to thermal conductivity is that the specific heat under the conditions applicable to thermal conduction is not subject to all of the limitations that apply to the transmission of thermal energy by contact between atoms of matter. Instead of going through some intermediate steps, as in the measured specific heats, the effective specific heat in thermal conduction continues on the high temperature basis down to the point where multi-unit motion is no longer possible, and a transition to a single unit basis is mandatory.

The temperature designated as T0 in the previous discussion, the point at which the specific heat curve reaches the zero level, is the same in thermal conduction as in the atomic contacts, but in the interaction between the electrons and the atoms the single rotating system of the electron adds one half unit to the one unit initial level of the double system of the atom. The initial level of the modified specific heat curve is therefore 1½ units (-1.98) instead of the usual one unit (-1.32). This makes the slope of the curve somewhat steeper than that of the initial segment of the normal specific heat curve defined in Chapter 5.

The deviation of the thermal conductivity from the constant relation expressed by the Wiedemann-Franz law is the problem with which any theory of thermal conductivity has to deal, and since the explanation derived from the Reciprocal System of theory attributes this deviation to the specific heat pattern, the best way to demonstrate the validity of the explanation appears to be to work backward from the measured thermal conductivities (reference 21), calculate the corresponding theoretical specific heats from equation 11-1, and then compare these calculated specific heats with the theoretical pattern just described.
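For orientation, the conventional statement of the Wiedemann-Franz law is κ/σ = LT, where κ is the thermal conductivity, σ the electrical conductivity, and L the Lorenz number, approximately 2.44 × 10⁻⁸ W Ω K⁻². The back-calculation described in the preceding paragraph can be sketched in a few lines of code. This is a minimal illustration of equation 11-1 rearranged, not Larson's own computation; apart from the copper coefficient 24.0 quoted below, the conductivity values and the parameters of the theoretical line are hypothetical placeholders.

```python
# Sketch of the back-calculation described above: given measured thermal
# conductivities kappa(T) and the constant k of equation 11-1, recover the
# effective specific heat as c_p = k / kappa and compare it with a linear
# theoretical segment. All numbers except K_COPPER are illustrative only.

K_COPPER = 24.0  # numerical coefficient of equation 11-1 for copper (from the text)

# Hypothetical measured thermal conductivities, in watts cm^-2 deg^-1
# (units as given in the text); placeholders, not reference data.
measured = {100: 4.8, 200: 4.3, 300: 4.0, 400: 3.8, 500: 3.6}

def effective_specific_heat(k_const, conductivity):
    """Equation 11-1 rearranged: c_p = k / (thermal conductivity)."""
    return k_const / conductivity

def theoretical_line(temperature, initial_level=-1.98, slope=0.024):
    """Hypothetical straight-line segment of the modified specific heat
    curve; the initial level is the 1.5-unit value quoted in the text,
    the slope is a placeholder."""
    return initial_level + slope * temperature

for temperature, kappa in measured.items():
    cp_effective = effective_specific_heat(K_COPPER, kappa)
    cp_line = theoretical_line(temperature)
    print(f"T = {temperature:3d} K   c_p(effective) = {cp_effective:5.2f}   "
          f"c_p(line) = {cp_line:5.2f}")
```

A comparison of this kind, carried out against real measurements rather than placeholders, is what Figs. 17 through 19 summarize.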

Figure 17: Effective Specific Heat in Thermal Conductivity (copper)

Fig. 17 is a comparison of this kind for the element copper, for which the numerical coefficient of equation 11-1 is 24.0, where thermal conductivities are expressed in watts cm⁻² deg⁻¹. The solid lines in this diagram represent the specific heat curve applicable to the thermal conductivity of copper, as defined in the preceding discussion. For comparison, the first segment of the normal specific heat curve of this element is shown as a dashed line. As in the illustrations of specific heat curves in the preceding chapters, the high temperature extension of the upper segment of the curve is omitted in order to make it possible to show the significant features of the curve more clearly. As the diagram indicates, the specific heats calculated from the measured thermal conductivities follow the theoretical lines within the range of the probable experimental errors, except at the lower and upper ends of the first segment, where transition curves of the usual kind reflect the deviation of the specific heat of the aggregate from that of the individual atoms. Similar data for lead and aluminum are presented in Fig. 18.

Figure 18: Effective Specific Heat in Thermal Conductivity (lead and aluminum)

The pattern followed by the three elements thus far considered may be regarded as the regular behavior, the one to which the largest number of the elements conform. No full scale investigation of the deviations from this basic pattern has yet been undertaken, but an idea of the nature of these deviations can be gained from an examination of the effective specific heat of chromium, Fig. 19. Here the specific heat and temperature values in the low temperature range have only half the usual magnitude. The negative initial specific heat level is -1.00 rather than -2.00, the temperature of zero specific heat is 16 K rather than 32 K, and the initial level of the upper segment of the curve is 2.62 instead of 5.23. But this upper segment of the modified curve intersects the upper segment of the normal curve at the Néel point, 311 K, and above this temperature the effective specific heat of chromium in thermal conductivity follows the regular specific heat pattern as defined in Chapter 5.

Figure 19: Effective Specific Heat in Thermal Conductivity (chromium and antimony)

Another kind of deviation from the regular pattern is seen in the curve for antimony, also shown in Fig. 19. Here the initial level of the first segment is zero instead of the usual negative value. The initial level of the second segment is the half-sized value 2.62. Antimony thus combines the two types of deviation that have been mentioned.

As indicated earlier, it has not yet been determined whether any factors other than the resistivity coefficient enter into the constant k of equation 11-1. Resolution of this issue is complicated by the wide margin of uncertainty in the thermal conductivity measurements. The authors of the compilation from which the data used in this work were taken estimate that these values are correct only to within 5 to 10 percent in the greater part of the temperature range, with some uncertainties as high as 15 percent. However, the agreement between the plotted points in Figs. 17, 18, and 19, and the corresponding theoretical curves shows that most of the data represented in these diagrams are more accurate than the foregoing estimates would indicate, except for the aluminum values in the range from 200 to 300 K.

In any event, we find that for the majority of the elements included in our preliminary examination, the product of the empirical value of the factor k in equation 11-1 and the temperature coefficient of resistivity is between 0.14 and 0.18. Included are the best known and most thoroughly studied elements, copper, iron, aluminum, silver, etc., and a range of k values extending all the way from the 25.8 of silver to 1.1 in antimony. This rather strongly suggests that when all of the disturbing influences such as impurity effects are removed, the empirical factor k in equation 11-1 can be replaced by a purely theoretical value k/r, in which a theoretically derived conversion constant, k, in the neighborhood of 0.15 watts cm⁻² deg⁻¹ is divided by a theoretically derived coefficient of resistivity.

The impurity effects that account for much of the uncertainty in the general run of thermal conductivity measurements are still more prominent at very low temperatures. At least on first consideration, the theoretical development appears to indicate that the thermal conductivity should follow the same kind of a probability curve in the region just above zero temperature as the properties discussed in the preceding chapters. In many cases, however, the measurements show a minimum in the conductivity at some very low temperature, with a rising trend below this level. On the other hand, some of the elements that are available in an extremely pure state show little or no effect of this kind, and follow curves similar to those encountered in the same temperature range during the study of other properties. It is not unlikely that this will prove to be the general rule when more specimens are available in a pure enough state. It should be noted that an ordinary high degree of purity is not enough. As the data compilers point out, the thermal conductivities in this very low temperature region are "highly sensitive to small physical and chemical variations of the specimens."
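Before leaving the constant k, the suggested replacement k = c/r lends itself to a simple arithmetical illustration. The k values below are those quoted in the text; the constant 0.15 is the midpoint of the quoted 0.14 to 0.18 range; and the implied coefficients are consequences of that assumption, not theoretically derived values.

```python
# Illustrative inversion of the suggested relation k = c / r: assuming the
# conversion constant c sits at the midpoint of the 0.14-0.18 range quoted
# in the text, the implied resistivity coefficient for each element is
# r = c / k. The k values are those quoted in the text.

C_CONVERSION = 0.15  # assumed conversion constant, watts cm^-2 deg^-1

k_values = {"copper": 24.0, "silver": 25.8, "antimony": 1.1}

for element, k in k_values.items():
    implied_r = C_CONVERSION / k
    print(f"{element:8s}  k = {k:5.1f}   implied coefficient r = {implied_r:.4f}")
```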

CHAPTER 12

Scalar Motion

It was recognized from the beginning of the development of the theory of the universe of motion that the basic motions are necessarily scalar. This was stated specifically in the first published description of the theory, the original (1959) edition of The Structure of the Physical Universe. It was further recognized, and emphasized in that 1959 publication, that the rotational motion of the atoms of matter is one of these basic scalar motions, and therefore has an inward translational effect, which we can identify as gravitation. Throughout the early stages of the theoretical development, however, there was some question as to the exact status of rotation in a system of scalar motions, inasmuch as rotation, as ordinarily conceived, is directional, whereas scalar quantities, by definition, have no directions. At first this issue was not critical, but as the development of the theory was extended into additional physical areas, more types of motion of a rotational character were encountered, and it became necessary to clarify the nature of scalar rotation. A full scale investigation of the subject was therefore undertaken, the results of which were reported in The Neglected Facts of Science, published in 1982.

The existence of scalar motion is not recognized by present-day physics. Indeed, motion is usually defined in such a way that scalar motion is specifically excluded. This type of motion enters into observable physical phenomena in a rather unobtrusive manner, and it is not particularly surprising that its existence remained unrecognized for a long time. However, a quarter of a century has elapsed since that existence was brought to the attention of the scientific community in the first published description of the universe of motion, and it is hard to understand why so many individuals still seem unable to recognize that there are several observable types of motion that cannot be other than scalar.

For instance, the astronomers tell us that the distant galaxies are all moving radially outward away from each other. The full significance of this galactic motion is not apparent on casual consideration, as we see each of the distant galaxies moving outward from our own location, and we are able to locate each of the observed motions in our spatial reference system in the same manner as the familiar motions of our everyday experience. But the true character of this motion becomes apparent when we examine the relation of our Milky Way galaxy to this system of motions. Unless we take the stand that our galaxy is the only stationary object in the universe, an assumption that few scientists care to defend in this modern era, we must recognize that our galaxy is moving away from all of the others; that is, it is moving in all directions. And since it is conceded that our galaxy is not unique, it follows that all of the widely separated galaxies are moving outward in all directions. Such a motion, which takes place uniformly in all directions, has no specific direction. It is completely defined by a magnitude (positive or negative), and is therefore scalar.

A close examination of gravitation shows that the gravitational motion is likewise scalar, differing from the motion of the galaxies only in that it is negative (inward) rather than positive (outward). The resemblance to the motion of the galaxies can easily be seen if we consider a system of gravitating objects isolated in space–perhaps a group of galaxies relatively close to each other. From our knowledge of the gravitational effects we can deduce that each of these objects will move inward toward all of the others. Here again the motion is scalar. Each object is moving inward in all directions.

A small-scale example of the same kind of motion can be seen in the motion of spots on the surface of an expanding balloon, often used as an analogy by those who undertake to explain the nature of the motions of the distant galaxies. Here, too, each individual is moving outward from all others. If the expansion is terminated, and succeeded by a contraction, the motions are reversed, and each spot then moves inward toward all others, as in the gravitational motion.

In the case of the expanding balloon there is a known physical mechanism that is causing the expansion, and our understanding of this mechanism makes it evident that all locations on the balloon surface are moving. The spots on this surface have no motion of their own. They are merely being carried along by the movement of the locations that they occupy. According to the astronomers' current view, the recession of the distant galaxies is the same kind of a process. As Paul Davies explains:

Many people (including some scientists) think of the recession of the galaxies as due to the explosion of a lump of matter into a pre-existing void, with the galaxies as fragments rushing through space. This is quite wrong… The expanding universe is not the motion of the galaxies through space away from some centre, but is the steady expansion of space. 22

Here, again, it is the locations that are moving, carrying the galaxies along with them. But in this case there is no known physical mechanism to account for the movement. Like the expansion of the balloon, the "steady expansion of space" is merely a description, not an explanation, of the movement. All that the observations tell us is that an outward scalar motion of physical locations is taking place, carrying the galaxies with it.

The postulates of the Reciprocal System of theory, the theory of the universe of motion, generalize this type of motion. They define a universe in which scalar motion of physical locations is the basic form of motion from which all physical entities and phenomena are derived. The manner in which this type of motion manifests itself to observation therefore has an important bearing on the nature of fundamental physical phenomena.
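The balloon analogy lends itself to a simple numerical check, sketched below. The point layout and the scale factor are arbitrary choices made for illustration. Scaling the radius multiplies every pairwise separation by the same factor, so each spot recedes from every other spot, whichever one an observer elects to treat as stationary.

```python
# Illustrative model of distributed scalar motion: spots on the surface of a
# uniformly expanding sphere. Every spot recedes from every other spot (all
# pairwise distances grow), even though no spot has a direction of motion of
# its own; the "motion" is a uniform scaling of the locations themselves.

import itertools
import math
import random

def random_spot():
    """Random point on the unit sphere (normalized Gaussian coordinates)."""
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

random.seed(0)
spots = [random_spot() for _ in range(6)]

# Expand the balloon: scale every location from radius 1.0 to radius 1.5.
expanded = [tuple(1.5 * coordinate for coordinate in spot) for spot in spots]

for i, j in itertools.combinations(range(len(spots)), 2):
    before = distance(spots[i], spots[j])
    after = distance(expanded[i], expanded[j])
    # Each separation grows by the same ratio, no matter which spot is
    # arbitrarily designated as the stationary reference point.
    print(f"spots {i}-{j}: {before:.3f} -> {after:.3f} (ratio {after / before:.2f})")
```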

This situation is a good example of the way in which important information is often overlooked because no one spends the time and effort that are required in order to make a thorough study of a seemingly unimportant observation. It has long been recognized that the motion of spots on the surface of an expanding balloon is, in some way, different from the ordinary motions of our everyday experience. The mere fact that this balloon motion is so widely used as an analogy in explaining the recession of the distant galaxies is clear evidence of this general recognition. But the galaxies seem to be a special case, and expanding balloons do not play any significant part in normal physical activity. Consequently, no one has been much interested in the physics of these objects, and this admittedly unique kind of motion was never subjected to a critical examination prior to the investigation of scalar motion that was undertaken in the course of the theoretical development reported in the several volumes of this work.

The finding that the fundamental motion of the universe is scalar revolutionizes this situation. The motions of the galaxies, gravitating objects, and spots on the surface of an expanding balloon are obviously the kind of motions–scalar motions–that our theory identifies as fundamental.

Scientists are usually, with ample justification, reluctant to accept a hypothesis that postulates the existence of phenomena that are unknown to observation. It should therefore be emphasized that scalar motion is not an unobserved phenomenon; it is an observed phenomenon that has not heretofore been recognized in its true character. Once the motions identified in the foregoing paragraphs have been critically examined, and their scalar character has been recognized, the existence of scalar motion is no longer a hypothesis; it is a demonstrated physical fact. The existence of other scalar motions, as required by the theory of the universe of motion, is then a natural and logical corollary, and those observed phenomena that have the theoretical properties of scalar motion can legitimately be identified as scalar motions.

A one-dimensional scalar motion of a physical location is defined by a magnitude, and can therefore be represented one-dimensionally as a point, or an assemblage of points, moving along a straight line. Introduction of a reference point–that is, coupling the motion to the reference system at a specific point in that system–enables distinguishing between positive motion, outward from the reference point, and negative motion, inward toward the reference point. The direction imputed to the motion may be a constant direction, as in the case of the translational motion of the photon, the direction of which is determined by chance at the time of emission, unless external factors intervene. The key point disclosed by our investigation is that the direction is not necessarily constant. A discontinuous, or non-uniform, change of direction could be maintained only by a repeated application of an external force, but it has been known from the time of Galileo that a continuous and uniform change of position or direction is just as permanent and just as self-sustaining as a condition of rest. Our finding merely extends this principle to the assignment of direction to scalar motion.

As an illustration, let us consider the motion of point X, originating at point A, and initially proceeding in the direction AB in three-dimensional space.
Then let us assume that line AB is rotated around an axis perpendicular to it, and passing through point A. This does not change the inherent nature or magnitude of the motion of point X, which is still moving radially outward from point A at the same speed as before. What has been changed is the direction of the movement, which is not a property of the motion itself, but a feature of the relation between the motion and three-dimensional space. Instead of continuing to move outward from A in the direction AB, point X now moves outward in all directions in the plane of rotation. If that plane is then rotated around another perpendicular axis, the outward motion of point X is distributed over all directions in space. It is then a rotationally distributed scalar motion.

The results of such a distributed scalar motion are totally different from those produced by a combination of vectorial motions in different directions. The combined effects of the magnitudes and directions of vectorial motions can be expressed as vectors. The results of addition of these vectors are highly sensitive to the effects of direction. For example, a vectorial motion AB added to a vectorial motion AB′ of equal magnitude, but diametrically opposite direction, produces a zero resultant. Similarly, vectorial motions of equal magnitude in all directions from a given point add up to zero. But a scalar motion retains the same positive (outward) or negative (inward) magnitude regardless of the manner in which it is directionally distributed.

None of the types of scalar motion that have been identified can be represented in a fixed spatial reference system in its true character. Such a reference system cannot represent simultaneous motion in all directions. Indeed, it cannot represent motion in more than one direction. In order to represent a system of two or more scalar motions in a spatial reference system it is necessary to define a reference point for the system as a whole; that is, the scalar system must be coupled to the reference system in such a way that one of the moving locations in the scalar system is arbitrarily defined as motionless (from the scalar standpoint) relative to the reference system. The direction imputed to the motion of each of the other objects, or physical locations, in the scalar system is then its direction relative to the reference point. For example, if we denote our galaxy as A, the direction of the motion of distant galaxy X, as we see it, is AX. But observers in galaxy B, if there are any, see galaxy X as moving in the different direction BX, those in galaxy C see the direction as CX, and so on. The significance of this dependence of the direction on the reference point can be appreciated when it is contrasted with the corresponding aspect of vectorial motion. If an object X is moving vectorially in the direction AX when viewed from location A, it is also moving in this same direction AX when viewed from any other location in the reference system.

It should be understood that the immobilization of the reference point in the reference system applies only to the representation of the scalar motion. There is nothing to prevent an object located at the reference point, the reference object we may call it, from acquiring an additional motion of a vectorial character. For example, the expanding balloon may be resting on the floor of a moving vehicle, in which case the reference point is in motion vectorially. Where an additional motion of this nature exists, it is subject to the same considerations as any other vectorial motion.

The coupling of a system of scalar motions to a fixed reference system at a reference point does not alter the rate of separation of the members of the scalar system.
The arbitrary designation of the reference point as motionless (from the scalar standpoint) therefore makes it necessary to attribute the motion of the reference point, or object, to the other points or objects in the system. This conclusion that the observed change of position of an object B is due, in part, to the motion of some different object A may be hard for those who are thinking in terms of the conventional view of the nature of motion to accept, but it can easily be verified by consideration of a specific example. Any two spots on the surface of an expanding balloon, for instance, are moving away from each other; that is, they are both moving. While spot X moves away from spot Y, spot Y is coincidentally moving away from spot X. Placing the balloon in a reference system does not alter these motions. The balloon continues expanding in exactly the same way as before. The distance XY continues to increase at the same rate, but if X is the reference point, it is motionless in the reference system (so far as the scalar motion is concerned), and the entire increase in the distance XY, including that due to the motion of X, has to be attributed to the motion of Y.

The same is true of the motions of the distant galaxies. The recession that is measured is merely the increase in distance between our galaxy and the one that is receding from us. It follows that a part of the increase in separation that we attribute to the recession of the other galaxy is actually due to motion of our own galaxy. This is not difficult to understand when, as in the case of the galaxies, the reason why objects appear to move faster than they actually do is obviously the arbitrary assumption that our location is stationary. What is now needed is a recognition that this is a general proposition. The same result follows whenever a moving object is arbitrarily taken as stationary for reference purposes. The motion of any reference point of a scalar motion is seen, by the reference system, in the same way in which we view our motion in the galactic system; that is, the motion that is frozen by the reference system is seen as motion of the distant objects.

This transfer of motion from one object to another by reason of the manner in which scalar motion is represented in the reference system has no significant consequences in the galactic situation, as it makes no particular difference to us whether galaxy X is receding from us, or we are receding from it, or both. But the questions as to which objects are actually moving, and how much they are moving, have an important bearing on other scalar motions, such as gravitation.

With the benefit of the information now available, it is evident that the rotation of the atoms of matter described in Volume I is a rotationally distributed negative (inward) scalar motion. By virtue of that motion, each atom, irrespective of how it may be moving, or not moving, vectorially, is moving inward toward all other atoms of matter. This inward motion can obviously be identified as gravitation. Here, then, we have the answer to the question as to the origin of gravitation. The same thing that makes an atom an atom–the scalar rotation–causes it to gravitate.

Although Newton specifically disclaimed making any inference as to the mechanism of gravitation, the fact that there is no time term in his equation implies that the gravitational effect is instantaneous. This, in turn, leads to the conclusion that gravitation is "action at a distance," a process in which one mass acts upon another distant mass without an intervening connection.
There is no experimental or observational evidence contradicting the instantaneous action. As noted in Volume I, even in astronomy, where it might be presumed that any inaccuracy would be serious, in view of the great magnitudes involved, "Newtonian theory is still employed almost exclusively to calculate the motions of celestial bodies." 23

However, instantaneous action at a distance is philosophically unacceptable to most physicists, and they are willing to go to almost any lengths to avoid conceding its existence. The hypothesis of transmission through a "luminiferous ether" served this purpose when it was first proposed, but as further studies were made, it became obvious that no physical substance could have the contradictory properties that were required of this hypothetical medium. Einstein's solution was to abandon the concept of the ether as a "substance"–something physical–and to introduce the idea of a quasi-physical entity, a phantom medium that is assumed to have the capabilities of a physical medium without those limitations that are imposed by physical existence. He identifies this phantom medium with space, but concedes that the difference between his space and the ether is mainly semantic. He explains, "We shall say our space has the physical property of transmitting waves, and so omit the use of a word (ether) we have decided to avoid." 24

Since this space (or ether) must exert physical effects without being physical, Einstein has difficulty defining its relation to physical reality. At one time he asserts that "according to the general theory of relativity space is endowed with physical qualities," 25 while in another connection he says that "The ether of the general theory of relativity [which he identifies as space] is a medium which is itself devoid of all mechanical and kinematical qualities." 26 Elsewhere, in a more candid statement, he concedes, in effect, that his explanations are not persuasive, and advises us just to "take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement." 27

Einstein's successors have added another dimension to the confusion of ideas by retaining this concept of space as quasi-physical, something that can be "curved" or otherwise manipulated by physical influences, but transferring the "ether-like" functions of Einstein's "space" to "fields." According to this more recent view, matter exerts a gravitational effect that creates a gravitational field, this field transmits the effect at the speed of light, and finally the field acts upon the distant object. Various other fields–electric, magnetic, etc.–are presumed to coexist with the gravitational field, and to act in a similar manner.

The present-day "field" is just as intangible as Einstein's "space." There is no physical evidence of its existence. All that we know is that if a test object of an appropriate type is placed within a certain region, it experiences a force whose magnitude can be correlated with the distance to the location of the originating object. What existed before the test object was introduced is wholly speculative. Faraday's hypothesis was that the field is a condition of stress in the ether. Present-day physicists have transferred the stress to space in order to be able to discard the ether, a change that has little identifiable meaning. As R. H. Dicke puts it, "One suspects that, with empty space having so many properties, all that had been accomplished in destroying the ether was a semantic trick. The ether had been renamed the vacuum." 28 P. W. Bridgman, who reviewed this situation in considerable detail, arrived at a similar conclusion.
The results of analysis, he says, "suggest that the role played by the field concept is that of an intellectual dummy, which cancels out of the final result." 29

The theory of the universe of motion gives us a totally different view of this situation. In this universe the reality is motion. Space and time have a real existence only where, and to the extent that, they actually exist as components of motion. On this basis, extension space, the space that is represented by the conventional reference system, is no more than a frame of reference for the spatial magnitudes and directions of the entity, motion, that actually exists. It follows that extension space cannot have any physical properties. It cannot be "curved" or modified in any other way by physical means. Of course, the reference system, being nothing but a human contrivance, could be altered conceptually, but such a change would have no physical significance.

The status of extension space as a purely mental concept devised for reference purposes, rather than a physical entity, likewise means that this space is not a container, or background, for the physical activity of the universe, as assumed by conventional science. In that conventional view, everything physically real is contained within the space and time of the spatio-temporal reference system. When it becomes necessary to postulate something outside these limits in order to meet the demands of theory construction, it is assumed that such phenomena are, in some way, unreal. As Werner Heisenberg puts it, they do not "exist objectively in the same sense as stones or trees exist." 30

The development of the theory of the universe of motion now shows that the conventional spatio-temporal system of reference does not contain everything that is physically real. On the contrary, it is an incomplete system that is not capable of representing the full range of motions which exist in the physical universe. It cannot represent motion in more than one scalar dimension; it cannot represent a scalar system in which all elements are moving; nor can it correctly represent the position of an individual object that is moving in all directions simultaneously (that is, an object whose motion is scalar, and therefore has no specific direction). Many of the other shortcomings of this reference system will not become apparent until we examine the effects of very high speed motion in Volume III, but those that have been mentioned have a significant impact on the phenomena that we are now examining.

The inability to represent more than one dimension of scalar motion is a particularly serious deficiency, inasmuch as the postulated three-dimensionality of the universe of motion necessarily permits the existence of three dimensions of motion. Only one dimension of vectorial motion is possible, because all three dimensions of space are required in order to represent the directions of this one-dimensional motion, but scalar motion has magnitude only, and a three-dimensional universe can accommodate scalar motion in all three of its dimensions.

Since the conventional reference system cannot represent all of the distributed scalar motions, and present-day science does not recognize the existence of any motions that cannot be represented in that system, it has been necessary for the theorists to make some arbitrary assumptions as a means of compensating for the distortion of the physical picture due to this deficiency of the reference system. One of the principal steps taken in this direction is the introduction of the concept of "fundamental forces," autonomous entities that exist in their own right, and not as properties of something more basic. The present tendency is to regard these so-called fundamental forces as the sources of all physical activity, and the currently popular goal of the theoretical physicists, the formulation of a "grand unified theory," is limited to finding a common denominator of these forces.

Gravitation is, in a way, an exception, as the currently popular hypothesis as to the nature of the gravitational force, Einstein's general theory of relativity, does attempt an explanation of its origin. According to this theory, the gravitational force is due to a distortion of space resulting from the presence of matter. So far as can be determined from the scientific literature, no one has the slightest idea as to how such a distortion of space could be accomplished. Arthur Eddington expressed the casual attitude of the scientific community toward this issue in the following statement: "We do not ask how mass gets a grip on space-time and causes the curvature which our theory postulates." 31 But unless the question is asked, the answer is not forthcoming. In Newton's theory the gravitational force originates from mass in a totally unexplained manner. In Einstein's theory it is a result of a distortion, or "curvature," of space that is produced by mass in a totally unexplained manner. Thus, whatever its other merits may be, the current theory (general relativity) accomplishes no more toward accounting for the origin of the gravitational force than its predecessor.

In order to arrive at such an explanation we need to recognize that force is not an autonomous entity; it is a property of motion. The motion of an individual mass unit is measured in terms of speed (or velocity). The total amount of motion in a material aggregate is then the product of the speed and the number of mass units, a quantity formerly called "quantity of motion," but now known as momentum. The rate of change of the motion of the individual unit is acceleration; that of the total quantity of motion is force. The force is thus the total quantity of acceleration. The significance of this, in the present connection, is that force not only produces an acceleration when applied to a mass (a fact that is currently recognized), it is an acceleration prior to that application (a fact that is currently overlooked or disregarded). In other words, the acceleration is simply transferred. For example, when a rocket is fired, the total "quantity of acceleration" available for application to the rocket (the force) is the sum of the quantities of acceleration of the individual particles of the gas produced from the propellant. The division of this total quantity among the mass units of the rocket determines the acceleration of each individual unit, and therefore of the rocket as a whole.

Since force is a property of a motion rather than an autonomous entity, it follows that wherever there is a force there must also be a motion of which the force is a property. This leads to the conclusion that a gravitational force field is a region of space in which gravitational motions exist. In the context of conventional physical thought this conclusion is unacceptable, since there are no moving entities in an unoccupied field. The information about the nature of scalar motion developed in the earlier pages clarifies this situation.
A material aggregate is moving gravitationally in all directions, but the conventional spatial reference system is unable to represent a system of motions of this nature in its true character. As previously indicated, where the scalar motion AB of object A (the massive object now under consideration) toward object B (the test mass) cannot be represented in the reference system because of the limitations of that system, this motion AB is shown as a motion BA; that is, a motion of the test mass B toward the massive object A, constituting an addition to the actual motion of that test mass. Because of the spherical distribution of the scalar motions of the atoms of mass A, the magnitude of the motion imputed to mass B depends on its distance from A, and is inversely proportional to the square of that distance. Thus each point in the region surrounding A corresponds to a specific fraction of the motion of A, representing the amount of motion that would be imputed to a unit mass, if that mass is actually placed at this particular point.

Here, then, is the explanation of the gravitational field (and, by extension, all other fields of the same nature). The field is not something physically real in the space; "for the modern physicist as real as the chair on which he sits," 32 as asserted by Einstein. Nor is it, as Faraday surmised, a stress in the ether. Neither is it some kind of a change in the properties of space, as envisioned by present-day theorists. It is simply the pattern of the magnitudes of the motions of one mass that have to be imputed to other masses because of the inability of the reference system to represent scalar motion as it actually exists.

No doubt this assertion that what appears to be a motion of one object is actually, in large part, a motion of a different object is somewhat confusing to those who are accustomed to conventional ideas about motion. But once it is realized that scalar motion exists, and because it has no inherent direction it may be distributed over all directions, it is evident that the reference system cannot represent this motion in its true character. In the preceding analysis we have determined just how the reference system does represent this motion that it cannot represent correctly. This may appear to be a return to the action at a distance that is so distasteful to most scientists, but, in fact, the apparent action on distant objects is an illusion created by the introduction of the concept of autonomous forces to compensate for the shortcomings of the reference system. If the reference system were capable of representing all of the scalar motions in their true character, there would be no problem. Each mass would then be seen to be pursuing its own course, moving inward in space independently of other objects.

In this case, accepted scientific theory has gone wrong because prejudice supported by abstract theory has been allowed to override the results of physical observations. The observers keep calling attention to the absence of evidence of the finite propagation time that current theory ascribes to the gravitational effect, as in this extract from a news report of a conference at which the subject was discussed:

When it [the distance] is astronomical, the difficulty arises that the intermediaries need a measurable time to cross, while the forces in fact seem to appear instantaneously. 33

But it is assumed that we must accept either a finite propagation time or action at a distance, which, as Bridgman once said, is "a concept to which many physicists have a violent allergy." 34 Einstein's theory, which supports the propagation hypothesis, has therefore been accorded a status superior to the observations. The following statement from a physicist brings this point out explicitly:

Nowadays we are also convinced that gravitation progresses with the speed of light. This conviction, however, does not stem from a new experiment or a new observation, it is a result solely of the theory of relativity. 35

This is another example of a practice that has been the subject of critical comment in several different connections in the preceding pages of this and the earlier volume. Overconfidence in the existing body of scientific knowledge has led the investigators to assume that all alternatives in a given situation have been considered. It is then concluded that an obviously flawed hypothesis must be accepted, in spite of its shortcomings, because "there is no other way." Time and again in the earlier pages, development of the theory of the universe of motion has shown that there is another way, one that is free from the objectionable features. So it is in this case. It is not necessary either to contradict observation by assuming a finite speed of propagation or to accept action at a distance.

Some of the most significant consequences of the existence of scalar motion are related to its dimensions. This term is used in several different senses, two of which are utilized extensively in this work. When physical quantities are resolved into component quantities of a fundamental nature, these component quantities are called dimensions. Identification of the dimensions, in this sense, of the basic physical quantities has been an important feature of the development of theory in the preceding pages. In a different sense of the term, it is generally recognized that space is three-dimensional.

Conventional physics recognizes motion in three-dimensional space, and represents motions of this nature by lines in a three-dimensional spatial coordinate system. But these motions which exist in three dimensions of space are only one-dimensional motions. Each individual motion of this kind can be characterized by a vector, and the resultant of any number of these vectors is a one-dimensional motion defined by the vector sum. All three dimensions of the spatial reference system are required for the representation of one-dimensional motion, and there is no way by which the system can indicate a change of position in any other dimension. However, the postulate that the universe of motion is three-dimensional carries with it the existence of three dimensions of motion. Thus there are two dimensions of motion in the physical universe that cannot be represented in the conventional spatial reference system.

In common usage the word "dimensions" is taken to mean spatial dimensions, and reference to three dimensions is ordinarily interpreted geometrically. It should be realized, however, that the geometric pattern is merely a graphical representation of the relevant physical magnitudes and directions. From the mathematical standpoint an n-dimensional quantity is one that requires n independent magnitudes for a complete definition. Thus a scalar motion in three dimensions, the maximum in a three-dimensional universe, is defined in terms of three independent magnitudes. One of these magnitudes–that is, the magnitude of one of the dimensions of scalar motion–can be further divided dimensionally by the introduction of directions relative to a spatial reference system. This expedient resolves the one-dimensional scalar magnitude into three orthogonally related sub-magnitudes, which together with the directions, constitute vectors. But no more than one of the three scalar magnitudes that define a three-dimensional scalar motion can be sub-divided vectorially in this manner.

Here is a place where recognition of the existence of scalar motion changes the physical picture radically. As long as motion is viewed entirely in vectorial terms–that is, as a change of position relative to the spatial reference system–there can be no motion other than that represented in that system. But since scalar motion has magnitude only, there can be motion of this character in all three of the existing dimensions of the physical universe. It should be emphasized that these dimensions of scalar motion are mathematical dimensions. They can be represented geometrically only in part, because of the limitations of geometrical representation. In order to distinguish these mathematical dimensions of motion from the geometric dimensions of space in which one dimension of the motion takes place, we are using the term "scalar dimension" in a manner analogous to the use of the term "scalar direction" in the earlier pages of this and the preceding volume.
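In symbols, and purely as a restatement of the foregoing in conventional notation: a three-dimensional scalar motion is specified by three independent magnitudes, and only one of them can be resolved vectorially.

```latex
% A three-dimensional scalar motion: three independent magnitudes.
(s_1,\; s_2,\; s_3)
% One of these, say s_1, can be assigned directions relative to the spatial
% reference system, resolving it into an ordinary vector of magnitude s_1:
\vec{v} = (v_x,\, v_y,\, v_z), \qquad
\lvert\vec{v}\rvert = \sqrt{v_x^2 + v_y^2 + v_z^2} = s_1
```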

CHAPTER 13

Electric Charges

The history of the development of a mathematical understanding of electricity and magnetism has been one of the great success stories of science and engineering. With the benefit of this information, a type of phenomena totally unknown up to a few centuries ago has been harnessed in a manner that has revolutionized life in the more advanced human societies. But in a strange contrast, this remarkable record of success in the identification and application of the mathematical relations involved in these phenomena coexists with an almost complete lack of understanding of the basic nature of the quantities with which the mathematical expressions are dealing.

In order to have a reasonably good conceptual understanding of electricity and magnetism, we need to be able to answer questions such as these: What is an electric charge? What is magnetism? What is an electric current? What is an electric field? What is mass? What is the relation between mass and charge? How are electric and magnetic forces produced? How do they differ from the gravitational force? How are these forces transmitted? What is the reason for the direction of the electromagnetic force? Why do masses interact only with masses, charges with charges? How are charges induced in electrically neutral objects?

Conventional science has no answers for most of these questions. To rationalize the failure to discover the explanations, the physicists tell us that we should not ask the questions:

The question "What is electricity?"–so often asked–is… meaningless. 36 (E. N. daC. Andrade)

What is electricity?... Definitions that cannot, in the nature of the case, be given, should not be demanded. 37 (Rudolf Carnap)

The difficulty in accounting for the origin of the basic forces is likewise evaded. It is observed that matter exerts a gravitational force, an electric charge exerts an electric force, and so on, but the theorists have been unable to identify the origin of these forces. Their reaction has been to evade the issue by characterizing the forces as autonomous, "fundamental conceptions of physics" that have to be taken as given features of the universe. These forces are then assumed to be the original sources of all physical activity.

So far as anyone knows at present, all events that take place in the universe are governed by four fundamental types of forces. 38

As pointed out in Chapter 12, this assumption is obviously invalid, as it is in direct conflict with the accepted definition of force. But those who are desperately anxious to have some kind of a theory of the phenomena that are involved close their eyes to this conflict.

After having "solved" the problem of the origin of the forces by assuming it out of existence, the theorists have proceeded to solve the problem of the transmission of the basic forces in a similar manner. Since they have no explanation for this phenomenon, they provide a substitute for an explanation by equating this transmission with a different kind of phenomenon for which they believe they have at least a partial explanation. Electromagnetic radiation has both electric and magnetic aspects, and is unquestionably a transmission process. In their critical need for some kind of an explanation of the transmission of electric and magnetic forces, the theory constructors have seized on this tenuous connection, and have assumed that electromagnetic radiation is the carrier of the electrostatic and magnetostatic forces. Then, since the gravitational force is clearly analogous to those two forces, and can be represented by the same kind of mathematical expressions, it has been further assumed that some sort of gravitational radiation must also exist.

But there is ample evidence to show that these forces are not transmitted by radiation. As brought out in Volume I, gravitation and radiation are processes of a totally different kind. Radiation is an energy transmission process. A quantity of radiant energy is produced in the form of photons. The movement of these photons then carries the energy from the point of origin to a destination, where it is delivered to the receiving object. No movement of either the originating object or the receiving object is required. At either end of the path the energy is recognizable as such, and is readily interchangeable with other forms of energy.

Gravitation, on the other hand, is not an energy transmission process. The (apparent) gravitational action of one mass upon another does not alter the total external energy content (potential plus kinetic) of either mass. Each mass that moves in response to the gravitational force acquires a certain amount of kinetic energy, but its potential energy is decreased by the same amount, leaving the total unchanged. As stated in Volume I, gravitational, or potential, energy is purely an energy of position; that is, for any specific masses, the mutual potential energy is determined entirely by their spatial separation.
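A concrete check on this statement is the elementary case of a mass m falling through a height h. The figures are standard mechanics, offered only as an illustration of the energy bookkeeping just described:

```latex
\Delta E_{\text{kinetic}} = +\,mgh, \qquad
\Delta E_{\text{potential}} = -\,mgh, \qquad
\Delta E_{\text{kinetic}} + \Delta E_{\text{potential}} = 0
```

The kinetic gain exactly offsets the potential loss, so nothing is transmitted to the system from outside; the potential term depends only on the separation h.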

All that has been said about gravitation is equally applicable to electrostatics and magnetostatics. Each member of any system of two or more objects (apparently) interacting electrically or magnetically has a potential energy determined by the magnitudes of the charges and the intervening distance. As in the gravitational situation, if the separation between the objects is altered by reason of the static forces, an increment of kinetic energy is imparted to one or more of the objects. But its, or their, potential energy is decreased by the same amount, leaving the total unchanged. This is altogether different from a process such as electromagnetic radiation which carries energy from one location to another. Energy of position in space cannot be propagated in space. The concept of transmitting this kind of energy from one spatial position to another is totally incompatible with the fact that the magnitude of the energy is determined by the spatial separation.

As stated earlier, the coexistence of an almost total lack of conceptual understanding of electric and magnetic fundamentals with a fully developed system of mathematical relations and representations seems incongruous. In fact, however, this is the normal initial result of the manner in which scientific investigation is usually handled. A complete theory of any physical phenomenon consists of two distinct components, a mathematical formulation and a conceptual structure, which are largely independent. In order to constitute a complete and accurate definition of the phenomenon, the theory must be both conceptually and mathematically correct. This is a result that is difficult to accomplish. In most cases it is practically mandatory to approach the conceptual and mathematical issues separately, so that this very complex problem is reduced to more manageable dimensions. We either develop a mathematically correct theory that is conceptually imperfect (a "model"), and then attack the problem of reconciling this theory with the conceptual aspects of the phenomena in question, or alternatively, develop a theory that is conceptually correct, as far as it goes, but mathematically imperfect, and then attack the problem of accounting for the mathematical forms and magnitudes of the physical relations.

As matters now stand in conventional science, the requirement of conceptual validity is by far the most difficult to meet. With the benefit of the mathematical techniques now available it is almost always possible to devise an accurate, or nearly accurate, mathematical representation of a physical relation on the basis of those physical factors that are known to enter into the particular situation, and the currently accepted concepts of the nature of these factors. The prevailing policy, therefore, is to give priority to the mathematical aspects of the phenomena under consideration. Vigorous mathematical analysis is applied to models which admittedly represent only certain portions of the phenomena to which they apply, and which, as a consequence, are conceptually incorrect, or at least incomplete. Attempts are then made to modify the models in such a way that they move closer to conceptual validity while maintaining their mathematical validity.

There is a sound reason for following this "mathematics first" policy in the normal course of physical investigation. The initial objective is usually to arrive at a result that is useful in practical application; that is, something that will produce the correct mathematical answers to practical problems. From this standpoint, the issue of conceptual validity is essentially irrelevant. However, scientific investigation does not end at this point. Our inquiry into the subject matter is not complete until we (1) arrive at a conceptual understanding of the physical phenomena under consideration, and (2) establish the nature of the relations between these and other physical phenomena. A mathematical relation that is unexplained conceptually is of little or no value toward accomplishing these objectives. It cannot be extrapolated beyond the range for which its validity has been experimentally or observationally verified without running the risk of exceeding the limits of its applicability (as will be demonstrated in Volume III). Nor can it be extended to any area other than the one in which it originated.

As it happens, however, many physical problems have resisted all attempts to discover the conceptually correct explanations. Many of the frustrated theorists have reacted by abandoning the effort to achieve conceptual validity, and are now contending that mathematical agreement between theory and observation constitutes "experimental verification." Obviously this is not true. Such a "verification," or any number of similar mathematical correlations, tells us only that the theory is mathematically correct. As has been emphasized at several points in the preceding discussion, mathematical validity does not, in any way, assure conceptual validity. It gives no indication whether the interpretation that is being given to the mathematical relations is right or wrong. The inevitable result of the currently prevailing policy is to overload physical science with theories that are mathematically correct but conceptually wrong.

Solutions for the many long-standing problems of physical science clearly cannot be obtained as long as the attacks on the problems are terminated when mathematical agreement is achieved. But even if this defect in present practice is corrected, it is doubtful whether the answers to most of these difficult problems can be obtained by the prevailing method of devising a mathematical solution first, and then looking for a conceptual explanation. The reason is that a valid mathematical expression can be constructed to fit almost any model. As Einstein states the case:

It is often, perhaps even always, possible to adhere to a general theoretical foundation by securing the adaptation of the theory to the facts by means of artificial additional assumptions. 39

Consequently, the mathematical expressions cannot be relied upon to furnish the necessary clues to a conceptual understanding. The important contribution of the Reciprocal System of theory to the solution of these problems is that it enables us to attack them from the opposite direction; that is, first arriving at a conceptual understanding by deduction from very general basic relations, and then developing the mathematical aspects of the established conceptual relationships. In other words, instead of getting a mathematical answer and then looking for a conceptual explanation to fit it, we start by getting a conceptual answer and then look for a mathematical way of expressing it. In general, this is a much simpler procedure, but it could not be utilized on any extensive scale until a unified general theory was available, so that conceptual answers could be obtained by deductive processes. The Reciprocal System of theory satisfies this requirement.

The clarification of the basic aspects of electricity and magnetism provides a dramatic example of the power of this new method of approach to physical problems. It is no longer necessary to deny the existence of answers to the questions listed at the beginning of this
chapter, or to content ourselves with pseudo-answers such as the "curved space" explanation of gravitation. Two of these questions, "What is mass?" and "What is an electric current?", have already been answered in the previous pages of this and the preceding volume. Those involving magnetism will be answered in the general discussion of that subject which begins with Chapter 19, and the process of induction of charges will be explained in Chapter 18. The answers to all of the other questions in the list will be developed in this present chapter. When these presentations are complete, we will have provided simple and logical explanations for every one of these items with which present-day science is having so much difficulty.

In the universe of motion all physical entities and phenomena are motions, combinations of motions, or relations between motions. It follows that the development of the structure of the theory that describes this universe is primarily a matter of determining just what motions and combinations of motions can exist under the conditions specified in the postulates. Thus far in our discussion of electrical phenomena we have been dealing only with translational motion, the movement of electrons through matter, and the various effects of that motion, the mechanical aspects of electricity, so to speak. We will now turn our attention to the electrical phenomena that involve rotational motion.

As we saw in Volume I, gravitation is a three-dimensional rotationally distributed scalar motion. Objects having only one or two effective dimensions of scalar rotation were found to exist, but these objects, the sub-atomic particles, have only a limited role in physical phenomena. In view of the general pattern of generating motions of greater complexity by combining motions of different kinds, the possibility of superimposing one-dimensional or two-dimensional scalar rotation on gravitating objects to produce phenomena of a more complex nature naturally suggests itself. On analyzing the situation, however, we find that the addition of ordinary rotationally distributed motion in less than three dimensions to the gravitational motion would merely modify the magnitude of that motion, and would not result in any new kinds of phenomena.

There is, however, a modification of the rotational distribution pattern that we have not yet explored. Three general types of simple motion (scalar motion of physical locations) have thus far been considered: (1) translation, (2) linear vibration, and (3) rotation. We now need to recognize that there is a fourth type: rotational vibration, a motion that is related to rotation in the same way that linear vibration is related to translational motion. Vectorial motion of this type is uncommon (the motion of the hairspring of a watch is an example), and it is largely ignored in conventional physical thought, but it plays an important part in the basic motion of the universe. At the atomic level, rotational vibration is a rotationally distributed scalar motion that is undergoing a continuous change from outward to inward and vice versa. As in linear vibration, the change of scalar direction must be continuous and uniform in order to be permanent. Like the motion of the photon of radiation, it is therefore a simple harmonic motion.
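In conventional vectorial terms (a restatement for orientation only, not part of the theoretical development), a rotational vibration such as that of the hairspring is a torsional simple harmonic motion: the angular displacement varies as

θ(t) = θ₀ sin(2πt/T)

reversing direction continuously and uniformly over each period T, exactly as the linear displacement of a vibrating particle does along a line.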
As noted in the discussion of thermal motion in Chapter 5, when such a simple harmonic motion is added to an existing motion it is coincident with that motion (and therefore ineffective) in one of the scalar directions, and has an effective magnitude in the other scalar direction. Every added motion must conform to the rules for the combination of scalar motions that were set forth in Volume I. On this basis, the effective scalar direction of a self-sustaining rotational vibration must be outward, in opposition to the inward rotational motion with which it is associated. A similar addition with an inward scalar direction is not stable, but can be maintained by an external influence, as we will see later.

A scalar motion in the form of a rotational vibration will now be identified as a charge. A one-dimensional motion of this type is an electric charge. In the universe of motion, any basic physical phenomenon such as a charge is necessarily a motion, and the only question to be answered by an examination of its place in the physical picture is what kind of a motion it is. We find that the observed electric charge has the properties that the theoretical development identifies as those of a one-dimensional rotational vibration, and we can therefore equate the two.

It is interesting to note that conventional science, which has been at so much of a loss to explain the origin and nature of the charge, does recognize that it is scalar. For instance, W. J. Duffin reports that experiments which he describes show that "charge can be specified by a single number," thus justifying the conclusion that "charge is a scalar quantity."40 However, in current physical thinking this electric charge is regarded as one of the fundamental physical entities, and its identification as a motion will no doubt be a surprise to many persons. It should therefore be emphasized that this is not a peculiarity of the theory of the universe of motion. Irrespective of our findings, based on that theory, a charge is necessarily a motion on the basis of the definitions that are employed in conventional physics, a fact that is disregarded because it is inconsistent with present-day theory.

The key factor in this situation is the definition of force. It was brought out in Chapter 12 that force is a property of motion, not something of a fundamental nature that exists in its own right. An understanding of this point is essential to the development of the theory of charges, and some further consideration of the relevant facts is therefore appropriate in the present connection.

For application in physics, force is defined by Newton's second law of motion. It is the product of mass and acceleration, F = ma. Motion, the relation of space to time, is measured on an individual mass unit basis as speed, or velocity, v (that is, each unit moves at this rate), or on a collective basis as momentum, the product of mass and velocity, mv, formerly called by the more descriptive name "quantity of motion." The time rate of change of the magnitude of the motion is dv/dt (acceleration, a) in the case of the individual unit, and m dv/dt (force, ma) when measured collectively. Thus force is, in effect, defined as the time rate of change of the magnitude of the total quantity of motion; the "quantity of acceleration," we might call it.

From this definition it follows that a force is a property of a motion. It has the same standing as any other property, and is not something that can exist as an autonomous entity. The so-called "fundamental forces of nature," the presumably autonomous forces that are currently being called upon to explain the origin of the basic physical phenomena, are necessarily properties of underlying motions; they cannot exist as independent entities. Every "fundamental force" must originate from a fundamental motion.
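Set out compactly, the definitional chain just described runs:

v = ds/dt                        speed (each individual unit moves at this rate)
mv                               momentum, the "quantity of motion"
a = dv/dt                        acceleration, the rate of change per unit
F = d(mv)/dt = m dv/dt = ma      force, the "quantity of acceleration"

Each line is a property or a rate of change of the motion in the line above it; nothing in the chain stands as an independent entity.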
This is a logical requirement of the definition of force, and it is true regardless of the physical theory in whose context the situation is viewed. Present-day physical science is unable to identify the motions that the definition of force requires. An electric charge, for instance, produces an electric force, but so far as can be determined from observation, it does this on its own initiative. There is no indication of any antecedent motion. This apparent contradiction of the definition of force is currently being handled by ignoring the requirements of the definition, and treating the electric force as an entity generated in some unspecified way by the charge. The need for an evasion of this kind is now eliminated by the identification of the charge as a rotational vibration. It is now clear that the reason for the lack of any evidence of a motion being involved in the origin of the electric force is that the charge itself is the motion.

An electric charge is thus a one-dimensional analog of the three-dimensional motion of an atom or particle that we identified as mass. The space-time dimensions of mass are t3/s3. In one dimension this is t/s. Rotational vibration is a motion similar to the rotation that constitutes mass, differing only in the periodic reversal of scalar direction. It follows that the electric charge, a one-dimensional rotational vibration, also has the dimensions t/s. The dimensions of the other electrostatic quantities can be derived from those of charge. The electric field intensity, a quantity that plays an important part in many of the relations involving electric charges, is the charge per unit area, t/s x 1/s2 = t/s3. The product of field intensity and distance, t/s3 x s = t/s2, is a force, the electric potential.

For the same reasons that apply to the production of a gravitational field by a mass, the electric charge is surrounded by a force field. However, there is no interaction between mass and charge. As brought out in Chapter 12, a scalar motion that alters the separation between A and B can be represented in the reference system either as a motion AB (a motion of A toward B) or a motion BA (a motion of B toward A). Thus the motions AB and BA are not two separate motions; they are merely two different ways of representing the same motion in the reference system. This means that scalar motion is a mutual process, and cannot take place unless the objects A and B are capable of the same kind of motion. Consequently, charges (one-dimensional motions) interact only with charges, masses (three-dimensional motions) only with masses.

The linear motion of the electric charge analogous to gravitation is subject to the same considerations as the gravitational motion. As noted earlier, however, it is directed outward rather than inward, and therefore cannot be added directly to the basic vibrational motion in the manner of the rotational motion combinations. This restriction on outward motion is due to the fact that the outward progression of the natural reference system, which is always present, extends to the full unit of outward speed, the limiting value. Further outward motion can be added only after an inward component has been introduced into the motion combination. A charge can therefore exist only as an addition to an atom or sub-atomic particle.

Although the scalar direction of the rotational vibration that constitutes the charge is always outward, both positive (time) displacement and negative (space) displacement are possible, as the rotational speed may be either above or below unity, and the rotational vibration must oppose the rotation.
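As a parenthetical illustration of the dimensional statements above, the space-time bookkeeping is simple enough to be mechanized. The sketch below is ours, not part of the theoretical development; it treats a space-time dimension as a pair of exponents and reproduces the derivations just given.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ST:
        """Space-time dimensions t^a s^b, held as the exponent pair (a, b)."""
        a: int  # exponent of time
        b: int  # exponent of space

        def __mul__(self, other):
            return ST(self.a + other.a, self.b + other.b)

        def __truediv__(self, other):
            return ST(self.a - other.a, self.b - other.b)

    mass = ST(3, -3)        # t3/s3, three-dimensional
    charge = ST(1, -1)      # t/s, the one-dimensional analog
    area = ST(0, 2)         # s2
    distance = ST(0, 1)     # s

    field_intensity = charge / area         # t/s x 1/s2 = t/s3
    potential = field_intensity * distance  # t/s3 x s = t/s2, a force

    assert field_intensity == ST(1, -3)
    assert potential == ST(1, -2)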
The existence of both types of displacement introduces a rather awkward question of terminology. From a logical standpoint, a rotational vibration with a space displacement should be called a negative charge, since it opposes a positive rotation, while a rotational vibration with a time displacement should be called a positive charge. On this basis, the term "positive" would always refer to a time displacement (low speed), and the term "negative" would always refer to a space displacement (high speed). Use of the terms in this manner would have some advantages, but so far as the present work is concerned, it does not seem advisable to run the risk of adding further confusion to explanations that are already somewhat handicapped by the unavoidable use of unfamiliar terminology to express relationships not previously recognized.

For present purposes, therefore, current usage will be followed, and the charges on positive elements will be designated as positive. This means that the significance of the terms "positive" and "negative" with respect to rotation is reversed in application to charge. In ordinary practice this should not introduce any major difficulties. In this present discussion, however, a definite identification of the properties of the different motions entering into the combinations that are being examined is essential for clarity. To avoid the possibility of confusion, the terms "positive" and "negative" will be accompanied by asterisks when used in the reverse manner. On this basis, an electropositive element, which has low speed rotation in all scalar dimensions, takes a positive* charge, a high speed rotational vibration. An electronegative element, which has both high speed and low speed rotational components, can take either type of charge. Normally, however, the negative* charge is restricted to the most negative elements of this class, those of Division IV.

Many of the problems that arise when scalar motion is viewed in the context of a fixed spatial reference system result from the fact that the reference system has a property, location, that the scalar motion does not have. Other problems originate for the inverse reason: scalar motion has a property that the reference system does not have. This is the property that we have called scalar direction, inward or outward. We can resolve this latter problem by introducing the concept of positive and negative reference points. As we saw earlier, assignment of a reference point is essential for the representation of a scalar motion in the reference system. This reference point then constitutes the zero point for measurement of the motion. It will be either a positive or a negative reference point, depending on the nature of the motion. The photon originates at a negative reference point and moves outward toward more positive values. The gravitational motion originates at a positive reference point and proceeds inward toward more negative values. If both of these motions originate at the same location in the reference system, the representation of both motions takes the same form in that system. For example, if an object is falling toward the earth, the initial location of that object is a positive reference point for purposes of the gravitational motion, and the scalar direction of the movement of the object is inward. On the other hand, the reference point for the motion of a photon that is emitted from that object and moves along exactly the same path in the reference system is negative, and the scalar direction of the movement is outward.
One of the deficiencies of the reference system is that it is unable to distinguish between these two situations. What we are doing in using positive and negative reference points is compensating for this deficiency by the use of an auxiliary device. This is not a novel expedient; it is common practice. Rotational motion, for instance, is represented in the spatial reference system with the aid of an auxiliary quantity, the number of revolutions. Ordinary vibrational motion can be accurately defined only by a similar expedient. Scalar motion is not unique in its need for such auxiliary quantities or directions; in this respect it differs from vectorial motion only in that it has a broader scope, and therefore transcends the limits of the reference system in more ways.

Although the scalar direction of the rotational vibration that constitutes the electric charge is always outward, positive* and negative* charges have different reference points. The motion of a positive* charge is outward from a positive reference point toward more negative values, while that of a negative* charge is outward from a negative reference point toward more positive values. Thus, as indicated in the accompanying diagram, Fig. 20, while two positive* charges (line a) move outward from the same reference point, and therefore away from each other, and two negative* charges (line c) do likewise, a positive* charge moving outward from a positive reference point, as in line b, is moving toward a negative* charge that is moving outward from a negative reference point. It follows that like charges repel each other, while unlike charges attract. As the diagram indicates, the extent of the inward motion of unlike charges is limited by the fact that it eventually leads to contact. The outward movement of like charges can continue indefinitely, but it is subject to the inverse square law, and is therefore reduced to negligible levels within a relatively short distance.

[Figure 20: relative motions of charges, each moving outward from its own reference point; line (a) two positive* charges, line (b) a positive* and a negative* charge, line (c) two negative* charges.]
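The rule the diagram expresses can be put in a single line. The following fragment is only a restatement of that rule (the function name is ours):

    def interaction(charge_a, charge_b):
        """Like charges share a reference point and both move outward, hence
        apart (repulsion); unlike charges move outward from opposite reference
        points, hence toward each other (attraction). See Figure 20."""
        return "repel" if charge_a == charge_b else "attract"

    assert interaction("positive*", "positive*") == "repel"    # line (a)
    assert interaction("positive*", "negative*") == "attract"  # line (b)
    assert interaction("negative*", "negative*") == "repel"    # line (c)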

Since rotational vibration exists only as a modifier of rotation, there are no separate units of vibrational mass that can be added or subtracted directly in the manner of the alpha particle. But the mass of the compound neutron has the same single (atomic weight) unit value as the vibrational mass unit, and like the latter, it is a single rotating system (from the material standpoint). It is therefore interchangeable with the vibrational mass. In our numerical notation, it can be expressed as 0-1. This equivalence of the neutron mass and the unit of vibrational mass makes it possible to modify isotopes by adding or removing compound neutrons. Thus we may start with the mass two isotope of hydrogen, H2, and by adding a compound neutron obtain the mass three isotope, H3.

H2 + n => H3
2-0 + 0-1 => 2-1

Beta radioactivity is a conversion process rather than an ordinary addition process. An isotope that is above the zone of stability has one or more units of magnetic displacement, ¹/2-¹/2-0, in the form of rotational vibration, superimposed on units of the magnetic rotation of the atom. These vibrational units are only half the size of the rotational units. Addition of a second half-size unit to one of the combinations of unit rotation and unit rotational vibration is therefore required to produce an additional rotational unit. This cannot be accomplished by direct addition, as a rotational unit is not capable of accepting more than one vibrational unit. However, an unstable isotope is subject to influences that cause it to eject displacement. (That is what makes it unstable.) An isotope above the stability zone ejects a cosmic neutrino, (¹/2)-(¹/2)-1, and an electron, 0-0-(1). This ejection is equivalent to addition of displacement ¹/2-¹/2-0, the addition that is required to convert one of the half-size vibrational units to a rotational unit. Neither of the ejected particles has any effective primary mass. No change in mass therefore takes place in this process (b- radioactivity). The original isotope with rotational mass 2Z and vibrational mass n becomes an isotope with rotational mass 2(Z+1), that is, an isotope of the next higher element, and vibrational mass n-2. The total mass of the combination of motions remains the same, but two units of vibrational mass have been converted to rotational mass, and the combination has moved closer to the zone of stability. If it is still outside that zone, the ejection process is repeated.

Where an isotope is below the zone of stability (deficient in vibrational mass) the process described in the foregoing paragraphs is reversed. In this process, b+ radioactivity, a unit of rotational mass is converted to two units of vibrational mass by ejection of a material neutrino, ¹/2-¹/2-(1), together with a positron, 0-0-1. The isotope of element Z, with rotational mass 2Z and vibrational mass n, then becomes an isotope of element Z-1, with rotational mass 2(Z-1) and vibrational mass n+2.

These are the basic radioactive processes. The actual course of events in any particular case depends on the initial situation. It may involve only one such event; it may consist of several successive events of the same kind; or a combination of the basic processes may be required to complete the transition to a stable condition. In natural beta radioactivity under terrestrial conditions a single beta emission is usually sufficient, as the unstable isotopes are seldom very far outside the zone of beta stability. However, under some other environmental conditions the amount of radioactivity required in order to attain beta stability is very substantial, as we will see in Volume III. In natural alpha radioactivity, the mass that must be ejected may amount to the equivalent of several alpha particles even in the terrestrial environment. The loss of this rotational mass necessitates beta emission to restore the equilibrium between rotational and vibrational mass. Alpha radioactivity is thus usually a complex process.

As an example, we may trace the steps involved in the radioactive decay of uranium. Beginning with U238, which is just over the borderline of stability, and has the long half life of 4.5 x 10^9 years, the first event is an alpha emission.

U238 => Th234 + He4
184-54 => 180-54 + 4-0

This puts the vibrational mass outside the zone of stability, and two successive beta events follow promptly, bringing the atom back to another isotope of uranium.

Th234 => Pa234
180-54 => 182-52

Pa234 => U234
182-52 => 184-50

Two successive alpha emissions now take place, with a considerable delay between stages, as both U234 and the intermediate product Th230 have relatively long half lives. These two events bring the atomic structure to that of radium, the prototype of the radioactive elements.

U234 => Th230 + He4
184-50 => 180-50 + 4-0

Th230 => Ra226 + He4
180-50 => 176-50 + 4-0

After another somewhat shorter time interval, a rapid succession of decay events begins. Half life periods in this phase of the decay range from days down to as low as seconds. Three more alpha emissions start the sequence.

Ra226 => Rn222 + He4
176-50 => 172-50 + 4-0

Rn222 => Po218 + He4
172-50 => 168-50 + 4-0

Po218 => Pb214 + He4
168-50 => 164-50 + 4-0

By this time the vibrational mass of 50 units is well above the zone of stability, the center of which is theoretically 43 units at this point. The next event is therefore a beta emission.

Pb214 => Bi214
164-50 => 166-48

This isotope is still above the stable zone, and another beta emission is in order, but a further alpha emission is also imminent, and the next step may take either direction. In either case, the emission is followed by one of the alternate kind, and the net result of the two events is the same regardless of which step is taken first. We may therefore regard this as a double decay.

Bi214 => Pb210 + He4
166-48 => 164-46 + 4-0

After some delay due to the 22 year half life of Pb210, two successive beta emissions and one alpha event occur.

Pb210 => Bi210
164-46 => 166-44

Bi210 => Po210
166-44 => 168-42

Po210 => Pb206 + He4
168-42 => 164-42 + 4-0

The lead isotope Pb206 is within the stability limits both with respect to total mass (alpha) and with respect to the ratio of vibrational to rotational mass (beta). The radioactivity therefore ends at this point.

The unstable isotopes that are responsible for natural beta radioactivity in the terrestrial environment originate either as by-products of alpha radioactivity or as a result of atomic transformations originated by high energy processes, such as those initiated by incoming cosmic rays. Alpha radioactivity is mainly the result of past or present inflow of material from regions where the magnetic ionization level is below that of the local environment. In those regions where the magnetic ionization level is zero, or near zero, all of the 117 possible elements are stable, and there is no alpha radioactivity. The heavy element content of young matter is low because atom building is a cumulative process, and this young matter has not had time to produce more than a relatively small number of the more complex atoms. But probability considerations make it inevitable that some atoms of the higher groups will be formed in the younger aggregates, particularly where older matter dispersed into space by explosive processes has been accreted by these younger structures. Thus, although aggregates composed primarily of young matter have a much lower heavy element content than those composed of older matter, they do contain an appreciable number of the very heavy elements, including the trans-uranium elements that are absent from terrestrial matter. The significance of this point will be explained in Volume III.

If matter from a region of zero magnetic ionization is transferred to a region such as the surface of the earth, where the ionization level is unity or above, the stability limit in terms of atomic number drops, and radioactivity is initiated. Whether the material constituents of the earth acquired the unit magnetic ionization level at the time that the earth assumed its present status as a planet, or reached this level at some earlier or later date, is not definitely indicated by the information now available. There is some evidence suggesting that this change took place in a considerably earlier era, but in any event the situation with respect to the activity of the elements now undergoing alpha radioactivity is essentially the same. They originated in a region of zero, or near zero, magnetic ionization, and either remained in that region while the magnetic ionization level increased, or in some manner, the nature of which is immaterial in the present connection, were transferred to their present locations, where they have become radioactive for the reasons stated.

As noted above, another source of natural radioactivity is atomic rearrangement resulting from interaction of material atoms with high energy particles, principally the cosmic rays and their derivatives. In such reactions stable isotopes of one kind or another are converted into unstable isotopes, and the latter then become sources of radioactivity, mostly of the beta type. The level of the beta radioactivity produced in this manner is quite low. The very intense activity of the same general nature that is indicated by the radio and x-ray emission from certain kinds of astronomical objects originates by means of a different process, examination of which will be deferred until the nature and behavior of the objects from which the emissions are observed are developed in Volume III.

The processes that constitute natural radioactivity can be duplicated experimentally, together with a great variety of similar atomic transformations which presumably also occur naturally under appropriate circumstances, but have been observed only under experimental conditions. We may therefore combine our consideration of natural beta radioactivity, the so-called artificial radioactivity, and the other experimentally induced transformations into an examination of atomic transformations in general. Essentially, these transformations, regardless of the number and type of atoms or particles involved, are no different from the simple addition and decay reactions previously discussed. The most convenient way of describing these more complex events is to treat them as successive processes in which the reacting particles first join in an addition reaction and subsequently eject one or more particles from the combination. According to some of the theories currently in vogue, this is the way in which the transformations actually take place, but for present purposes it is immaterial whether or not the symbolic representation conforms to physical reality, and we will leave this question in abeyance.

The formation of the isotope P30 from aluminum, the first artificial radioactive reaction discovered, may be represented as

Al27 + He4 => P30 + n1
26-1 + 4-0 => 30-1 => 30-0 + 0-1

In this case the two phases of the reaction are independent, in the sense that any combination which adds up to 30-1 can produce P30 + n1, while there are many ways in which the 30-1 resultant of the combination of Al27 + He4 can be broken down. The final product may, for instance, be Si30 + H1.

The usual method of conducting transformation experiments is to accelerate a small atomic or sub-atomic unit to a very high velocity and cause it to impinge on a target. In
general, the degree of fragmentation of the target atoms depends on the relative stability of those atoms and the kinetic energy of the incident particles. For example, if we use hydrogen atoms against an aluminum target at a relatively low energy level, we get results similar to those produced in the Al27 + He4 reaction previously discussed. Typical equations are

Al27 + H1 => Mg24 + He4
26-1 + 2-(1) => 28-0 => 24-0 + 4-0

Al27 + H1 => Si27 + n1
26-1 + 2-(1) => 28-0 => 28-(1) + 0-1

Greater energies cause further fragmentation and result in such rearrangements as

Al27 + H1 => Na24 + 3 H1 + n1
26-1 + 2-(1) => 28-0 => 22-2 + 6-(3) + 0-1

The general principle that the degree of fragmentation is a function of the energy of the incident particles has an important bearing on the relative probabilities of various reactions at very high temperatures, and will have further consideration later. In the extreme situation where the target atom is heavy and inherently unstable, the fragments may be relatively large. In this case, the process is known as fission. The difference between the fission process and the transformation reactions previously described is merely a matter of degree, and the same relationships apply.

Although it is possible in some instances to transform one stable isotope into another by an appropriate process, the more general rule is that if the original reactants are stable the major product is unstable, and therefore radioactive. The reason is, of course, that the stable isotopes have vibrational to rotational mass ratios within the stability zone, and any change in the ratio tends to move it out of that zone. As an example, the P30 isotope formed in the reaction between aluminum and helium atoms is below the stability zone; that is, it is deficient in vibrational mass. It therefore decays by the b+ process to form a stable silicon isotope.

P30 => Si30
30-0 => 28-2

In the radioactive reactions of the heavy elements the products often have substantial excesses of vibrational mass, and in these cases successive beta emissions take place, resulting in decay chains in which the unstable isotopes move step by step toward stability. One of the relatively long chains of this type that has been identified is the following:

Xe140 => Cs140 => Ba140 => La140 => Ce140
108-32 (19) => 110-30 (19) => 112-28 (20) => 114-26 (21) => 116-24 (22)
The figures in parentheses refer to the number of units of vibrational mass corresponding to the center of the zone of stability, as calculated for each element from equation 24-1.
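As an illustrative aside, the bookkeeping in these decay steps can be checked mechanically. The sketch below is ours, not part of the development; it encodes the alpha and beta rules used in the preceding equations (an alpha emission removes four units of rotational mass; a beta emission converts two units of vibrational mass into rotational mass) and runs them over the chain just listed.

    def alpha(rot, vib):
        # Alpha emission: a He4 unit (4-0) leaves; rotational mass drops by 4.
        return rot - 4, vib

    def beta_minus(rot, vib):
        # Beta emission: two units of vibrational mass become rotational mass.
        return rot + 2, vib - 2

    # The Xe140 chain; the stability-zone centers are the parenthesized
    # values quoted above from equation 24-1.
    names   = ["Xe140", "Cs140", "Ba140", "La140", "Ce140"]
    centers = [19, 19, 20, 21, 22]

    rot, vib = 108, 32  # Xe140
    for i, (name, center) in enumerate(zip(names, centers)):
        assert rot + vib == 140            # total mass is conserved
        print(f"{name}: {rot}-{vib}, excess vibrational mass {vib - center}")
        if i < len(names) - 1:
            rot, vib = beta_minus(rot, vib)

    # One alpha step from the uranium series checks the other rule:
    assert alpha(184, 54) == (180, 54)     # U238 => Th234 + He4

Running it reproduces the figures in the chain, including the excess of 13 vibrational units at Xe140 and 2 at Ce140 that are noted in the next paragraph.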

The original product Xe140 has 13 excess vibrational units, and is thus far outside the stability zone. Successive beta emissions convert two-unit quantities of vibrational mass to rotational mass, while the stable amount of vibrational mass gradually increases as the atomic number rises. On reaching Ce140 the excess has been reduced to two units. This is within the stability margin, and the radioactivity therefore ceases.

The foregoing description of the atomic transformation processes has been confined to the essential element of the transformation, the redistribution of the primary mass, and the collateral effects have either been ignored or left for later treatment. In the latter category are the mass-energy relations, which will be discussed in Chapter 27. The electric charges carried by some of the reacting substances, or the reaction products, have no significance in the present connection, as they only affect the energy relations.

On first consideration it might appear that the addition processes discussed in the preceding pages would provide the answer to the problem of accounting for the existence of the heavier members of the series of chemical elements. In current practice this is taken for granted, and the question to be answered is accepted as being merely the issue as to which specific one or more of these processes is operative. The currently accepted hypothesis is that the raw material from which the elements are formed is hydrogen, and that mass is added to hydrogen by means of the addition processes. It is recognized that (with certain exceptions that will be considered later) the addition mechanisms are high energy processes. Atoms approaching each other at low or moderate speeds normally rebound, and take up positions at equilibrium distances. The additions take place only where the speeds are high enough to overcome the resistance, and these speeds generally involve disruption of the structure of the target atoms, followed by some recombination. The only place now known in our galaxy where the energy concentration is at the level required for operation of these processes on a major scale is in the interiors of the stars. The accepted hypothesis therefore is that the atom building takes place in the stellar interiors, and that the products are subsequently scattered into the environment by supernova explosions.

It has been demonstrated by laboratory experiments, and more dramatically in the explosion of the hydrogen bomb, that the mass 2 and mass 3 isotopes of hydrogen can be stimulated to combine into the mass 4 isotope of helium, with the release of large quantities of energy. This hydrogen conversion process is currently the most powerful source of energy known to science (aside from some highly speculative ideas that involve carrying gravitational attraction to hypothetical extremes). The attitude of the professional physicists has always been that the most energetic process known to them must necessarily be the process by which energy is generated in the stars (even though they have had to revise their concept of the nature of this process twice already, the last time under very embarrassing circumstances). The current belief of both the physicists and the astronomers therefore is that the hydrogen conversion process is unquestionably the primary stellar energy source. It is further assumed that there are other addition processes operating in the stars by which atom building beyond the helium level is accomplished.

It will be shown in Volume III that there is a mass of astronomical evidence demonstrating conclusively that this hydrogen conversion process cannot be the means by which the stellar energy is generated. But even without this evidence that demolishes the currently accepted assumption, any critical examination of the fundamentals of atom building will make it clear that high energy processes, inherently destructive, are not the answer to the problem. It is true that the formation of helium from isotopes of hydrogen proceeds in the right direction, but the fact is that the increase in atomic mass that results from the hydrogen conversion reaction is an incidental effect of a process that operates toward a different end. The primary objective of that process, the objective that supplies the probability difference that powers the process, is the conversion of unstable isotopes into stable isotopes. The fuel for the known hydrogen conversion process, that of the hydrogen bomb and the experiments aimed at developing fusion power, is a mixture of these unstable hydrogen isotopes. The operating principle is merely a matter of speeding up the conversion, causing the reactants to do rapidly what they will do slowly, of their own accord, if not subjected to stimulation.

It is freely asserted that this is the same process as that by which energy is generated in the stars, and that the fusion experiments are designed to duplicate the stellar conditions. But the hydrogen in the stars is mainly in the form of the stable mass one isotope, and there is no justification for assuming that this stable atomic structure can be induced to undergo the kind of a reaction to which the unstable isotopes are subject by reason of their instability. The mere fact that the conversion process would be exothermic, if it occurred, does not necessarily mean that it will take place spontaneously. The controlling factor is the relative probability, not the energy balance, and so far as we know, the mass one isotope of hydrogen is just as probable a structure as the helium atom under any physical conditions, other than those, to be discussed in Chapter 26, that lead to atom building.

At high temperatures the chances of atomic break-up are improved, but this does not necessarily increase the proportion of helium in the final product. On the contrary, as noted earlier, a greater kinetic energy results in more fragmentation, and it therefore favors the smaller unit rather than the larger. A certain amount of recombination of the fragments produced under these high temperature conditions can be expected, particularly where the extreme conditions are only temporary, as in the explosion of the hydrogen bomb, but the relative amounts of the various possible products of recombination are determined by probability considerations. Inasmuch as stable isotopes are more probable than unstable isotopes (that is what makes them stable), formation of the stable helium isotope from the atomic and sub-atomic fragments takes precedence over recombination of the unstable isotopes of hydrogen. But the mass one hydrogen isotope that is the principal constituent of the stars is just as stable as helium, and it has the advantage, in a high energy environment, of being the smaller unit, which makes it less susceptible to fragmentation, and more capable of recombination if disrupted.
Thus it cannot be expected that recombination of fragments into helium, under high energy conditions, will occur on a large enough scale to constitute a major source of stellar energy.

In this connection, it should be noted that the general tendency of high energy reactions in the material sector of the universe is to break down existing structures rather than build larger ones. The reason for this should be evident. The material sector is the low speed sector, and the lower the speed of matter the more pronounced its material character becomes; that is, the more it deviates from the speeds of the cosmic sector. It follows that, in general, the lower the speed the greater the tendency to form combinations of the material type. Conversely, higher speeds lessen the material character of the matter, and not only inhibit further combination, but tend to disrupt the combinations already existing. Furthermore, this increase in the amount of negative displacement (thermal or translational motion) is not conducive to building up positive displacement in the form of mass. Thus the net result of the reactions in the high speed environment of the stellar interiors can be expected to decrease, rather than increase, the average atomic weight of the matter participating in these reactions.

An analogous process in a more familiar energy range is the pyrolysis of petroleum. Cracking of a paraffinic oil, for instance, yields products that, among other things, include substantial quantities of complex aromatic compounds. For example, one of those that makes its appearance is anthracene, a 24-atom molecule. There are few, if any, of the ring compounds, even the smaller ones, in the original material. Thus it is evident that the high temperatures of this process have not only broken down the original hydrocarbon molecules into smaller molecules or atoms, but have also allowed some recombination into larger molecular units. Nevertheless, the general result of the cracking process is a drastic reduction in the average size of the molecules, the greater part of the mass being reduced to hydrogen, methane, and carbon.

The point that needs to be recognized is that this is what high energy processes do to combinations such as atoms, regardless of whether those atoms are combinations of particles, as contended by conventional physics, or combinations of different forms of motion, as deduced from the postulates of the theory of the universe of motion. Such processes disrupt some or all of the original combinations. In the chaotic conditions generated by the application of powerful forces there is a certain amount of recombination going on alongside the disintegration. This may result in the appearance of some new combinations (isotopes), which may suggest that atom building is occurring. But, in fact, these constructive events are merely incidental results of a destructive process.

In the universe of motion, the raw material for atom building consists of massless particles, the decay products of the cosmic rays. Conversion of these particles into simple atoms of matter, and production of increasingly more massive atoms from the original units, is a slow and gradual constructive process, not a high energy destructive process. This assertion as to the general character of the atom building process is confirmed by the astronomical evidence, which, as will be brought out in Volume III, shows that atom building is taking place throughout the universe, not merely at special locations and under special conditions, as envisioned in present-day theories. The details of the atom building processes in the universe of motion will be the subject of the next chapter.

CHAPTER 26

Atom Building

Several chapters of Volume I were devoted to tracing the path followed by the matter that is ejected into the material sector of the universe from the inverse, or cosmic, sector in the form of cosmic rays. As brought out there, the cosmic atoms that constitute the cosmic rays, three-dimensional rotational combinations with net speeds greater than unity, are broken down into massless particles; that is, particles with effective rotation in less than three dimensions. These particles are then reassembled into material atoms, three-dimensional rotational combinations with net speeds less than unity. The processes by which this rebuilding is accomplished have not yet been observed, nor has the applicable theory been fully clarified. It was stated in the earlier volume that our conclusions in this area were necessarily somewhat speculative. Additional theoretical development in the meantime has placed these conclusions on a much firmer basis, and it would now be in order to call them tentative rather than speculative.

As brought out in Chapter 25, the currently prevailing opinion is that atom building is carried on by means of addition processes of the type discussed in that chapter. For the reasons that were specified, we find it necessary to reject that conclusion, and to characterize these processes, to the extent that they actually occur, as minor and incidental activities that have no significant influence on the general evolutionary pattern in the material sector of the universe. However, as noted in the earlier discussion, there is one addition process that actually does occur on a large enough scale to justify giving it some consideration before we turn our attention to broadening the scope of the explanation of the atom building process introduced in Volume I. This addition process that we will now want to examine is what is known as "neutron capture."

The observed particle known as the "neutron" is the one that we have identified as the compound neutron. It has the same type of structure as the mass one hydrogen isotope; that is, it is a double rotating system with a proton type rotation in one component and a neutrino type rotation in the other. In the hydrogen isotope the neutrino rotation has the material composition M ¹/2-¹/2-(1). In the compound neutron it has the cosmic composition C (¹/2)-(¹/2)-1. The net displacements of this particle are M ¹/2-¹/2-0, the same as those of the massless neutron.

The compound neutron is fully compatible with the basic magnetic (two-dimensional) rotational displacement of the atoms, and since it carries no electric charge it can penetrate to the vicinity of an atom much more easily than the particles that normally interact in the charged condition. Consequently, the compound neutrons are readily absorbed by atoms. On first consideration, therefore, neutron capture would appear to be a likely candidate for designation as the primary atom building process. Nevertheless, the physicists relegate it to a minor role. The prevailing downgrading of the potential of neutron capture is mainly due to the physicists' commitment to other processes that they believe to be responsible for the energy production in the stars. If, as now believed, the continuing additions to the atomic masses are made as a collateral feature of the stellar energy production processes, neutron capture can have only a limited significance. Some support for this conclusion is derived
from the finding that there is no stable isotope of mass 5. As the textbooks point out, the neutron capture process would come to a stop at this point. In the universe of motion this argument is invalid. As we saw in Chapter 24, isotopic stability is determined by the level of magnetic ionization. The lack of a stable isotope of mass 5 is peculiar to the unit ionization level, the level that happens to exist at the surface of the earth at the present time. In earlier eras, when the magnetic ionization level was lower, the obstacle at mass 5 was absent, or at least not fully effective, and in the future when the ionization level has risen, this obstacle will again be minimized or removed.

We must nevertheless concur with the prevailing opinion that neutron capture is not the primary atom building process, because even though the mass 5 obstacle can be circumvented, there are not anywhere near enough of the compound neutrons to take care of the atom building requirements. These particles are produced in limited quantities in reactions of a special nature. Atom building, on the other hand, is an activity of vast proportions that is going on continuously in all parts of the universe.

The compound neutron is actually a very special kind of combination of motions. The reason for its existence is that there are some physical circumstances under which two-dimensional rotation is ejected from matter. In the material atoms the two-dimensional rotation is associated with mass because of the way in which it is incorporated into the atomic structure. There is no way in which this mass can be given up, because the process by which it originated, bringing a massless particle to rest in the fixed spatial reference system, is irreversible. The two-dimensional speed displacement is therefore forced into the only available alternative, the compound neutron structure, even though this structure is inherently one of low probability.

Let us turn now to the process which, according to the findings reported in Volume I, is, in fact, the primary means whereby atom building is actually accomplished. As brought out in that earlier discussion, the principal product of the decay of cosmic atoms, the original constituents of the cosmic rays, is the massless neutron, M ¹/2-¹/2-0. This particle can combine with an electron, M 0-0-(1), or eject a positron, M 0-0-1, to form a neutrino, M ¹/2-¹/2-(1).

On the basis of the principles governing the combination of motions, as defined in Volume I, simple combinations of motions do not produce stable structures unless the added motion has some characteristic opposed to that of the original. However, this restriction does not apply to a combination with a neutrino, as this particle has a net total speed displacement of zero, and the added motion is therefore the only active unit in the combination. Thus a massless neutron can be added to a neutrino. Some significant consequences ensue. All massless particles are moving outward at the speed of light (unit speed) relative to the conventional spatial reference system. But when the neutrino, M ¹/2-¹/2-(1), combines with the massless neutron, M ¹/2-¹/2-0, the displacements of the combination are M 1-1-(1), which means that the combination has an active inward two-dimensional rotational displacement in a three-dimensional type of structure. The addition of inward motion in the third scalar dimension brings the consolidated particle to rest in the spatial reference system. The results of this sequence of events were described in Volume I.
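The displacement arithmetic in these combinations can be verified directly. The following sketch is illustrative only (the tuple representation and the names are ours); it writes each particle as its speed displacements in the two magnetic dimensions and the electric dimension, with the parenthesized space displacements of the text's notation entered as negative values, and checks the additions just described.

    from fractions import Fraction

    half = Fraction(1, 2)

    # Displacements as (magnetic, magnetic, electric).
    massless_neutron = (half, half, 0)    # M 1/2-1/2-0
    electron         = (0, 0, -1)         # M 0-0-(1)
    positron         = (0, 0, 1)          # M 0-0-1
    neutrino         = (half, half, -1)   # M 1/2-1/2-(1)
    proton           = (1, 1, -1)         # M 1-1-(1)

    def combine(p, q):
        return tuple(a + b for a, b in zip(p, q))

    def eject(p, q):
        return tuple(a - b for a, b in zip(p, q))

    # A massless neutron forms a neutrino either by combining with an
    # electron or by ejecting a positron.
    assert combine(massless_neutron, electron) == neutrino
    assert eject(massless_neutron, positron) == neutrino

    # Adding another massless neutron to the neutrino gives M 1-1-(1),
    # the proton, which comes to rest in the spatial reference system.
    assert combine(neutrino, massless_neutron) == proton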
As noted in Volume I, although the massless neutron and the neutrino have no effective mass, they do have the two-dimensional analog, t2/s2, of the three-dimensional property, t3/s3, that is known as mass. When one of these particles, moving at the speed of light relative to the spatial reference system, comes to rest in the gravitationally bound system represented by the reference coordinates, the unit translational speed thereby eliminated provides the necessary energy, t/s, to convert the two-dimensional quantity, the internal momentum, as we have called it, to the three-dimensional quantity, mass. The product of this process, with rotational displacements 1-1-(1) and a mass of one atomic weight unit, is the proton.

In conventional physics the proton is regarded as a positively* charged particle that constitutes the nucleus of the hydrogen atom. We find that it is, in fact, a particle, which may or may not carry a positive* electric charge. We also find that as a particular kind of motion (not as a particle) it is a constituent of the hydrogen atom. It is not, however, a "nucleus." The mass one hydrogen isotope is a double rotating system in which the proton type of motion is combined with a motion of the neutrino type. The atom is formed by direct combination of the proton and the neutrino, but the existence of the particles as particles terminates when the combination takes place. At this point the motions that previously constituted the particles become constituent motions of the combination structure, the atom.

This is an appropriate point at which to make some general comments about the successive combinations of different types of motions that are the essence of the atom building process. The key to an understanding of this situation is a recognition of the fact that these are scalar motions. The only inherent property of a scalar motion is its positive or negative magnitude, and the representation of that magnitude in the spatial reference system is subject to change in accordance with the conditions prevailing in the environment. The same scalar motion can be either translational, rotational, vibrational, or a rotational vibration, and it is free to switch from one of these to another to conform to changed conditions. Such a change is a zero energy process, as previously defined, merely a rearrangement.

This is the same kind of a situation that we encountered in Chapter 17 in connection with ionization. As noted there, ionization of a particle can take place by means of any one of a number of different processes: absorption of radiant energy, capture of electrons, contact with fast moving particles, etc. Since the motions that are involved are of different types, it might appear that we are confronted with a difficult problem when we attempt to explain these processes as interchange of motions. But the situation is simple when it is viewed in scalar terms. The only inherent property of these scalar motions (the vibratory photon motion, the rotational electron motion, the translational motion of the atom or particle) is the magnitude. It follows that the magnitude is the only property that is necessarily transmitted unchanged in an interaction. The coupling to the reference system that distinguishes the photon from the electron, or from translational motion, is free to conform to the new environment. In ionization it takes the form of a rotational vibration, regardless of the type of the antecedent motion.

Production of the hydrogen atom in the manner described in the preceding pages terminates the role of the direct addition processes in atom building.
The essential step in this process is to bring the massless neutrons from their normal motion at the speed of light (stationary in the natural reference system) to a condition of rest in the fixed spatial reference system. As pointed out in Volume I, this requires the existence of rotational motion in all three scalar dimensions, since the particle is capable of moving at the speed of light (relative to the spatial reference system) in any vacant dimension. The massless neutron does not have the necessary three dimensions of motion, but combination with the neutrino provides the required addition to the neutron dimensions. This combination, 1-1-(1), has a net total three-dimensional rotational displacement (mass) of one unit.

The 1-1-(1) particle, the proton, thus produced cannot accept another massless neutron because of the two-dimensional nature of that particle. Nor can it accept a combination of the massless neutron with a neutrino, as that combination constitutes another proton, and consolidation of two protons is subject to the opposing factors previously considered in connection with the direct combination of atoms. Beyond the mass one hydrogen stage, therefore, atom building takes place mainly by means of an ionization process that we will now consider.

The neutrinos in the decay products of the cosmic rays are subject to contacts with other particles, particularly photons of radiation. Some of these contacts result in magnetic ionization; that is, a two-dimensional rotational vibration is imparted to the neutrino. Since this is a one-unit displacement in opposition to the one unit of two-dimensional rotational displacement in the neutrino, the resultant net rotational displacement in these two dimensions is zero. As can readily be seen, such a charge could not be applied to a massless neutron. This particle already has zero displacement in the electric dimension, and if the one unit in the magnetic dimensions is neutralized the particle would have no effective speed displacement, and would be reduced to the status of the rotational base, the rotational equivalent of nothing at all. At the primitive level magnetic ionization is therefore confined to the neutrino.

The magnetic ionization process was discussed at length in Chapters 24 and 25, and the steps through which the original ionization of the neutrinos is passed on to the atoms were described in considerable detail. At this time we will take a look at the mass relations, with the objective of demonstrating that the process by which mass is added during the events previously described is irreversible (up to the destructive limits defined in Chapter 25), and that magnetic ionization is therefore an atom building process of such broad scope that it is clearly the predominant means of accomplishing the formation of the heavier elements.

As explained previously, since the magnetically charged neutrino has no active speed displacement other than the one negative unit in the electric dimension, it is, in effect, a rotating unit of space vibrating in the magnetic dimensions. A material atom, which is a time structure (net displacement in time), can exist in this space of the neutrino just as in any other space. Such an atom is continually moving from one space unit to another. If it enters the space of a neutrino, the rotational vibration of the space unit (the neutrino) is equivalent to, and in equilibrium with, a similar, but oppositely directed, rotational vibration of the atom. When the atom again passes into another space unit it is a matter of chance whether the vibration goes with it, or is left with the space unit (the neutrino). Thus some of the magnetic charges originally imparted to the neutrinos in a material aggregate are transferred from the neutrinos to the atoms.

Neutrinos, whether charged or uncharged, move at unit speed relative to the spatial reference system, and their occasional periods of coincidence with atoms of matter are possible only because of the finite magnitude of the units of space and time. If the magnetic charge stays with the atom when the atom and neutrino separate, the charge, which is moving at unit speed while it is associated with the neutrino, is brought to rest in the spatial reference system. Elimination of the unit of outward speed provides the unit of displacement required for the addition of rotation in the third scalar dimension, and enables the unit of magnetic (two-dimensional) speed displacement to be absorbed by the atom. Inasmuch as this unit that is absorbed has only half the mass of the full rotational unit, and has no rotation at all in the third dimension, it enters the atom as a unit of vibrational mass. If this puts the isotopic weight of the atom outside the zone of stability, some of the vibrational mass is converted to rotational mass in the manner previously described, moving the atom to a position higher in the atomic series.

The transition from the massless state (stationary in the natural reference system) to the material status cannot be reversed in the material environment, as there is no available process for going directly from rotation to translation. The sub-atomic particles are subject to neutralization reactions in which oppositely directed rotations cancel each other, causing their speed displacements to revert to the translational status. But direct combination of two multi-unit atoms is difficult to accomplish. Because of the reversed direction of the forces in the time region, there is a strong force of repulsion between two such structures when they approach each other. Furthermore, each atom is a combination of motions in different scalar dimensions, and even if two atoms acquire sufficient relative speed to overcome the resistance and make effective contact, they cannot join unless the displacements in the different dimensions reach the proper conditions for combination simultaneously. With few, if any, exceptions, the additions to the masses of the atoms are therefore permanent (up to the time that one of the destructive limits is reached).

Here, then, the first application of this atom building process is complete. By means of the successive steps that have been identified, the magnetic rotational speed displacement of the massless neutron produced by cosmic ray decay (the only active property of that particle) is converted into an addition to the mass of an atom. Successive additions of the same kind move the atom up the atomic series. Atom building in intergalactic space is slow because of the low density of matter, but the amount of time spent in this stage is so long that there is sufficient opportunity for production of a finite quantity of all of the 117 possible elements, in proportions determined by the relative probabilities. After this initial period, the existing matter is increasingly concentrated into large aggregates. This speeds up the atom building, but meanwhile there are processes in operation that destroy some of the heavier elements.

A significant aspect of the theoretical findings reported in this and the immediately preceding chapters is the important role of the massless particles, entities which, with the exception of the photon and the neutrino, are not recognized by conventional science.
As brought out in the discussion earlier in this chapter, the characteristic feature of these particles is that they have no capability of independent motion, and are therefore stationary in the natural system of reference. It follows that they are moving at unit speed (the speed of light) in the context of the conventional spatial reference system.

According to our findings, there are three categories of material particles (combinations of motions without enough rotational displacement to form the atomic type of structure). These are (1) massless particles, (2) similar particles that have acquired mass, and (3) particles with structures intermediate between those of class (2) and the full atomic structure.

Table 36 lists the sub-atomic particles of the material sector. The mass one hydrogen isotope is included in this list because of its intermediate type structure, although it is generally regarded as a full scale atom. Electric charges that may be present are not shown, except in the case of the one-dimensional charged particles, where they provide the rotational vibration that brings these particles into the gravitationally bound system. Charges applied to other particles in the list have no significant effect on the phenomena now being considered.

Table 36: The Subatomic Particles

Massless Particles
               photon
  M 0-0-0      rotational base
  M 0-0-(1)    electron
  *M ½-½-(1)   charged neutrino
  M 0-0-1      positron
  M ½-½-(1)    neutrino
  M ½-½-0      massless neutron

Particles With Mass
  -M 0-0-(1)   charged electron
  +M 0-0-1     charged positron
  M 1-1-(1)    proton

Intermediate Systems
  M 1-1-(1) C (½)-(½)-1   compound neutron
  M 1-1-(1) M ½-½-(1)     mass 1 hydrogen

  * gravitational charge
  - negative electric charge
  + positive electric charge

An exact duplicate of the Table 36 list exists in the cosmic sector, with the speed displacements inverted. In this case the particles are built on the cosmic rotational base, represented as C 0-0-0, rather than the material rotational base, M 0-0-0. The particles not listed in Table 36 that the physicists claim to have discovered (mesons, etc.) are combinations of the cosmic type, either particles from the cosmic sub-atomic list, or full-sized cosmic atoms (where the presumed discoveries are authentic). It is even possible that some of the events of extremely short duration attributed to transient particles may be originated by cosmic chemical compounds.
Recognition of the place of the massless particles in the evolutionary pattern of matter is one of the advances in understanding that has given us the present consistent, and apparently correct, explanation of the transition from cosmic to material (and vice versa). The 1959 publication identified the cyclic nature of the universe, and gave an account of the manner in which the transitions between sectors take place. At that time, however, the existence of the massless particles had not yet been discovered theoretically, and the particle now identified as the compound neutron was thought to be the intermediary by means of which intersector transfer is accomplished. When it was finally realized that the theory requires the existence of a massless neutron, the door to a new understanding of the transition process was opened. It then became evident that the transition is not directly from cosmic to material, but from cosmic (moving inward in time) to neutral (no motion relative to the natural reference system), and then to material (moving inward in space).

This finding revolutionized our concept of the position of the massless particles in the physical picture. It can now be seen that these particles—the neutrino (known to conventional science), the massless electron and massless positron (previously identified as the moving particles in electric currents), the massless neutron, the rotational base, and the gravitationally charged neutrino (discovered theoretically)—are the constituents of a hitherto unknown subdivision of physical existence, a neutral state of the basic units of matter, intermediate between the states of the cosmic and material sectors.

Inasmuch as the atom building process operates by means of successive additions of single units, the relative proportions of the various elements in a material aggregate are directly related to the age of the matter, and inversely related to the atomic number. However, there are a number of collateral factors that modify the basic relations. As we have seen, production of the mass one isotope of hydrogen is a relatively simple matter, involving nothing more than a union of two simple particles. The next step is more difficult because it requires the formation of a double system in which there are effective rotational displacements in both components. The great majority of the material atoms are therefore still in the hydrogen stage. The first full double system, helium, atomic number 2, is in second place, as would be expected. Beyond this level, the atomic rotation becomes more complex, and factors other than the required number of additions of mass units introduce numerous irregularities into what would otherwise be a regular decrease of abundance with atomic number.

Evidently a single addition to the atomic rotation introduces a degree of asymmetry that decreases stability, as the even-numbered elements are generally more abundant than the odd-numbered ones. For instance, the ten most abundant elements beyond hydrogen in the earth's crust include seven even-numbered elements, and only three with odd atomic numbers. The zone of isotopic stability is likewise wider in the even-numbered than in the odd-numbered elements, as would be expected if they are inherently more stable.
Many of the odd-numbered group have only one stable isotope, and there are five within the 117 element range of the terrestrial environment that have no stable isotope at all (in that environment). On the other hand, no even-numbered element, other than beryllium, has fewer than two stable isotopes. The same kind of symmetry effect can be seen in the first additions of rotation in the magnetic dimensions. The positive elements of Group 2A, lithium, beryllium, and boron, are relatively scarce, while the corresponding members of Group 2B, sodium, magnesium, and aluminum, are relatively abundant. At higher levels this effect is not apparent, probably because the successive additions to these heavier elements are smaller in proportion to the total mass, while the effects of other factors become more significant.

One of the features of the rotational patterns of the elements that introduces variations in their susceptibility to the addition of mass, and corresponding variations in the proportions in which the different elements occur in material aggregates, is the change in the magnetic rotation that takes place at the midpoint of each rotational group. For example, let us again consider the 2B group of elements. The first three of these elements are formed by successive additions of positive electric displacement to the 2-2 magnetic rotation. Silicon, the next element, is produced by a similar addition, and the probability of its formation does not differ materially from that of the three preceding elements. Another such addition, however, would bring the speed displacement to 2-2-5, which is unstable. In order to form the stable equivalent, 3-2-(3), the magnetic displacement must be increased by one unit in one dimension. The probability of accomplishing this result is considerably lower than that of merely adding one electric displacement unit, and the step from silicon to phosphorus is consequently more difficult than the additions immediately preceding. The total amount of silicon in existence therefore builds up to the point where the lower probability of the next addition reaction is offset by the larger number of silicon atoms available to participate in the reaction. As a result, silicon should theoretically be one of the most abundant of the post-helium elements. The same considerations should apply to the elements at the midpoints of the other rotational groups, when due consideration is given to the general decrease in abundance that takes place as the atomic number increases.

As we will see in Volume III, there are reasons to believe that the composition of ordinary matter at the end of the first phase of its existence in the material sector, the dust cloud phase, conforms to these theoretical expectations. However, the abundances of the various elements in the region accessible to direct observation, a region in a later stage of development, give us a different picture. The total heavy element content does increase with the age of the matter. A representative evaluation finds the percentage of elements heavier than helium ranging from 0.3 in the globular clusters, theoretically the youngest stellar aggregates that are observable, to 4.0 in the Population I stars and interstellar dust in the solar neighborhood, theoretically the oldest matter within convenient observational range. These are approximations, of course, but the general trend is clear. The peaks in the abundance curve that should theoretically exist at the midpoints of the rotational groups also make their appearance at the appropriate points in the lower groups of elements.
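The displacement arithmetic used above can be summarized compactly. The short sketch below encodes the atomic numbers of the a-b-0 bases implied by the examples in the text (the inert gases: helium at 2-1, neon at 2-2, argon at 3-2, and so on up to the 5-4 base at element 118; the intermediate bases are inferred from the structure of the series rather than stated in this passage) and the equivalence under which an overloaded combination a-b-c is rewritten with one more magnetic unit as (a+1)-b-(c - 2b²). The function names and the sign convention (negative c for the parenthesized displacements) are editorial choices, not Larson's notation.

```python
# Atomic numbers of the a-b-0 bases (the inert gases), as implied by the
# examples in the text: helium 2-1, neon 2-2, argon 3-2, and so on, up to
# the 5-4 base at the overall limit, element 118.
GROUP_BASE = {
    (2, 1): 2, (2, 2): 10, (3, 2): 18, (3, 3): 36,
    (4, 3): 54, (4, 4): 86, (5, 4): 118,
}

def atomic_number(a, b, c):
    """Atomic number of the element with displacements a-b-c. A negative c
    denotes displacement in the opposite direction, written a-b-(c) in the
    text's notation."""
    return GROUP_BASE[(a, b)] + c

def positive_equivalent(a, b, c):
    """Rewrite a-b-c with one more magnetic unit: a-b-c -> (a+1)-b-(c - 2b^2).
    Both forms denote the same element."""
    return (a + 1, b, c - 2 * b * b)

# Examples taken from the text:
assert atomic_number(2, 2, 4) == 14                  # silicon
assert positive_equivalent(2, 2, 5) == (3, 2, -3)    # unstable 2-2-5 -> 3-2-(3)
assert atomic_number(3, 2, -3) == 15                 # phosphorus
assert positive_equivalent(4, 4, 31) == (5, 4, -1)   # 4-4-31 -> 5-4-(1)
assert atomic_number(5, 4, -1) == 117                # the stability limit
```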
The situation with respect to carbon is somewhat uncertain, because the observations are conflicting, but silicon is relatively abundant compared to the neighboring elements, as it theoretically should be, and iron, the predominant member of the trio of elements at the midpoint of Group 3A, is almost as abundant as silicon. But when we turn to the corresponding members of the 3B group, ruthenium, rhodium, and palladium, we find a totally different situation. Instead of being relatively abundant, as would be expected from their positions in the atomic series just ahead of another increase in the magnetic displacement, these elements are rare. This does not necessarily mean that the relative probability effect due to the magnetic displacement step is absent, as all of the neighboring elements are likewise rare. In fact, all elements beyond the iron-nickel group exist only in comparatively minute quantities. Estimates indicate that the combined amount of all of these elements in existence is less than one percent of the existing amount of iron.

It does not appear possible to explain the relative abundances in terms of the probability concept alone. A fairly substantial decrease in abundance compared to iron would be in order if the age of the local system were such as to put the peak of probability somewhere in the vicinity of iron, but this should still leave the ruthenium group among the relatively common elements. The nearly complete elimination of the heavy elements, including this group, which should theoretically be quite plentiful, requires the existence of some additional factor: either (1) an almost insurmountable obstacle to the formation of elements beyond the iron group, or (2) a process that destroys these elements after they are produced.

There is no indication of the existence of any serious obstacle that interferes with the formation of the heavy elements. So far as we can determine, the atom building process is just as applicable to the heavy elements as to the light ones. The building of the heavy elements is endothermic, but this should not be a serious obstacle, and in any event it does not apply below Group 4A, and therefore has no bearing on the scarcity of the 3B and lower division 3A elements. The peculiar distribution of abundances therefore seems to require the existence of a destructive process that prevents the accumulation of any substantial quantities of the elements heavier than the iron group, even though they are produced in the normal amounts. We have already seen, in Chapter 17, that such a process exists. This process will be examined in detail in Volume III, where it will be shown that the theoretical results of the process are in full agreement with the observed distribution of abundances of the elements.

The entire atom building process described in this chapter is duplicated in the cosmic sector, with space and time interchanged. Here inverse mass is added to move the elements up the cosmic atomic series.
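As a compact summary of the particle inventory developed in this chapter, the Table 36 list can be written down as a small data structure, with the cosmic duplicate obtained by switching the rotational base, as stated above. The tuple representation, the sign convention (negative numbers for parenthesized displacements), and the function names are editorial conveniences, a sketch rather than anything given in the text.

```python
# Table 36 as a data structure. Displacements are (a, b, c); negative values
# stand for the parenthesized entries of the a-b-(c) notation, and the
# prefixes '*', '-', '+' mark gravitational, negative electric, and positive
# electric charges respectively.
SUBATOMIC = {
    "photon":           None,                        # vibrational motion only
    "rotational base":  [("M", (0, 0, 0))],
    "electron":         [("M", (0, 0, -1))],
    "positron":         [("M", (0, 0, 1))],
    "neutrino":         [("M", (0.5, 0.5, -1))],
    "charged neutrino": [("*M", (0.5, 0.5, -1))],
    "massless neutron": [("M", (0.5, 0.5, 0))],
    "charged electron": [("-M", (0, 0, -1))],
    "charged positron": [("+M", (0, 0, 1))],
    "proton":           [("M", (1, 1, -1))],
    # Intermediate systems combine two rotating components:
    "compound neutron": [("M", (1, 1, -1)), ("C", (-0.5, -0.5, 1))],
    "mass 1 hydrogen":  [("M", (1, 1, -1)), ("M", (0.5, 0.5, -1))],
}

def invert_base(prefix):
    """Swap the material base M for the cosmic base C, and vice versa."""
    return prefix.replace("M", "#").replace("C", "M").replace("#", "C")

def cosmic_duplicate(components):
    """The cosmic sector contains an exact duplicate of the Table 36 list,
    with the speed displacements inverted: the same combinations built on
    the cosmic rotational base C 0-0-0 instead of the material M 0-0-0."""
    if components is None:
        return None
    return [(invert_base(prefix), disp) for prefix, disp in components]

print(cosmic_duplicate(SUBATOMIC["proton"]))  # [('C', (1, 1, -1))]
```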

CHAPTER 27

Mass and Energy

The discovery of the mass-energy relation E = mc² by Einstein was a significant advance in physical theory, and has already had some far-reaching physical applications. It is, of course, entirely consistent with the Reciprocal System of theory. Indeed, this theory provides the explanation of the relation that has heretofore been lacking. It is not always recognized that, in the light of current physical thought, this is a very strange relation. Why should the relation between mass and energy be expressible in terms of speed? Einstein supplied no explanation. He derived the relation from the mathematical expression of his theory of relativity, but a mathematical derivation does not explain anything until an interpretation of the mathematics gives that derivation a physical meaning. The information that has been missing is now supplied by the Reciprocal System. In the universe of motion defined by that system of theory, mass and energy are both reciprocal speeds, differing only in dimensions, mass being three-dimensional, while energy is one-dimensional. Unit energy is therefore the product of unit mass and the second power of unit speed, the speed of light.

This finding as to the true significance of the mass-energy relation has an important effect on its applicability. It shows that the current belief that a quantity of energy always has a certain mass associated with it is erroneous. Reciprocal speed can exist either as mass, or as energy, but not both simultaneously. A quantity of mass, three-dimensional scalar motion, is equivalent to a quantity of energy, one-dimensional scalar motion, only when three-dimensional motion is actually transformed into one-dimensional motion, or vice versa. In other words, an existing quantity of mass does not correspond to any existing energy, but to the quantity of energy that would come into existence if the mass were actually converted into energy.

For this reason, Einstein's hypothesis of an increase in mass accompanying increased velocity is inconsistent with our findings. The kinetic energy increment could increase the mass only if it were converted to mass by some appropriate process, and in that event it would cease to be kinetic energy; that is, the corresponding velocity would no longer exist. Actually, this hypothesis of Einstein's is inconsistent with his valid concept of the conversion of mass into energy, regardless of the point of view from which the question is approached. Mass cannot be an accompaniment of kinetic energy, a quantity that increases as the energy increases, and also an entity that can be converted into kinetic energy, a quantity that increases as the energy decreases. The two concepts are mutually exclusive.

In the theoretical universe of motion now being described, the mass-energy relation is applicable only to those processes in which mass disappears and energy appears, or vice versa. The most familiar process of this kind is the interchange between mass and energy that takes place as a result of radioactivity, or similar atomic transformations. As we saw in Chapter 25, the primary mass is conserved in these reactions. In the radioactive disintegration Ra226 → Rn222 + He4, for example, the total primary mass of the original radium atom was 226. The primary mass of the residual radon atom, 222, and that of the ejected alpha particle, 4, likewise add up to 226. Thus any mass-energy conversion involved in atomic transformations of this kind is confined to the secondary mass.
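The two quantitative points just made, the dimensional relation E = mc² and the conservation of primary mass in the Ra226 example, can be checked with a few lines of arithmetic. The sketch below uses the standard CGS value of c; it is an editorial illustration, not a calculation taken from the text.

```python
C = 2.99793e10  # speed of light in cm/s (CGS)

def energy_equivalent(mass_grams):
    """Energy in ergs that would come into existence if the given mass were
    actually converted: E = m * c**2. On the view stated above, this energy
    does not coexist with the mass; it appears only on conversion."""
    return mass_grams * C**2

# One gram of mass, fully converted, corresponds to roughly 9 x 10^20 ergs.
print(f"{energy_equivalent(1.0):.3e} erg")

# Primary mass is conserved in the disintegration Ra226 -> Rn222 + He4,
# so any mass-energy conversion is confined to the secondary mass.
assert 226 == 222 + 4
```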
Current scientific opinion regards this secondary mass component as the mass which, according to accepted theory, is associated with the "binding energy" that holds the hypothetical constituents of the hypothetical atomic nucleus together. It must be conceded that this "binding energy" concept fits in very well with the prevailing ideas as to the nature of the atomic structure, but it should be remembered that the entire nuclear concept of the atom is purely hypothetical. No part of it has been verified empirically. Even Rutherford's original conclusion that most of the mass of the atom is concentrated in a small nucleus—the hypothesis from which the present-day atomic theory was derived—is not supported except on the basis of the assumption that the atoms are in contact in the solid state, an assumption that we now find is erroneous. And every additional step that has been taken in the long series of adjustments and modifications to which the theory has been subjected as a means of extricating it from difficulties has involved one or more further assumptions, as pointed out in Chapter 18. Thus the fact that the "binding energy" concept is consistent with this aggregate of hypotheses has no physical significance. All available evidence is consistent with our finding that the difference between the observed total mass and the primary mass is a secondary mass effect due to motion within the time region, and that the conversion of this secondary mass to energy is responsible for the energy production during radioactivity or other atomic transformations.

The nature of the secondary mass was explained in Volume I. The magnitudes of this quantity applicable to the sub-atomic particles and the hydrogen isotopes were also calculated. Some studies were made on the higher elements during the early stages of the investigation, and it was shown in the first edition of this work that there is a fairly regular decrease in the secondary mass of the most abundant isotope of the elements in the range from lithium to iron. Beyond iron the values are irregular, but the secondary mass (negative in this range) remains in the neighborhood of the iron value up to about the midpoint of the atomic series, after which it gradually decreases, and returns to positive values in the very heavy elements. The effect of this secondary mass pattern is to make both the growth process in the light elements and the decay process in the heavy elements exothermic.

From the foregoing, it follows that the secondary mass in the lower half of the atomic series, with the exception of hydrogen, is negative. This conflicts with the general belief that mass is always positive, but our previous development of theory has shown that the observed mass of an atom is the algebraic sum of the mass equivalents of the speed displacements of the constituent rotations. Where a rotation is negative, the corresponding mass component is also negative. The net total mass of a material atom is always positive only because the magnetic rotation is necessarily positive in the material sector of the universe, and the magnetic rotation is the principal component of the total. Just why the minimum in the secondary mass is at or near the midpoint of the atomic series rather than at one of the extremes is still unknown, but a similar pattern was noted in some of the material properties examined in the preceding pages of this and the earlier volume, and it is not unlikely that there is a common cause.

Many investigators have devoted considerable effort to the study and analysis of atomic transformations that might possibly serve as the source of the energy generated in the sun and other stars. The general conclusion has been that the most likely reactions are those in which hydrogen is converted into helium, either directly or through a series of intermediate reactions. Hydrogen is the most abundant element in the stars, and in the universe as a whole. This hydrogen conversion process, if actually in operation, could therefore furnish a substantial supply of energy.
But, as brought out in Chapter 25, there is no actual evidence that the conversion of ordinary hydrogen, the H1 isotope, to helium is a naturally occurring process in the stars or anywhere else. Even without the new information supplied by the investigation here being reported, there are many reasons to doubt that this process is actually operative, and to question whether it would supply enough energy to meet the stellar requirements if it were in operation. It obviously fails by a wide margin to account for the enormous energy output of the quasars and other compact astronomical objects. As one astronomer states the case, the problem of accounting for the energy of the quasars "is widely considered to be the most important unsolved problem in theoretical astrophysics." 106

The catastrophic effect that the invalidation of the hydrogen conversion process as the stellar energy source would otherwise have on astronomical theory, leaving it without any explanation of the manner in which this energy is generated, is avoided by the fact that the development of the Reciprocal System of theory has revealed the existence of not only one, but two hitherto unknown physical phenomena, each of which is far more powerful than the hydrogen conversion process. These newly discovered processes are not only capable of meeting the energy requirements of the stable stars, but also the far greater requirements of the supernovae and the quasars (when the quasar energies are scaled down to the true magnitudes from the inflated values based on the current interpretations of the redshifts of these objects).

Perhaps some readers may find it difficult to accept the thought that there could be hitherto unknown processes in operation in the universe that are vastly more powerful than any previously known process. It might seem that anything of that magnitude should have made itself known to observation long ago. The explanation is that the results of these processes are known observationally. Extremely energetic events are prominent features of present-day astronomy. What has not been known heretofore is the nature of the processes whereby the enormous energies are generated. This is the information that the theory of the universe of motion is now supplying.

In Chapter 17 we examined one of these processes, the conversion of mass to energy that results when the matter in the interior of a star reaches the destructive thermal limit. This is the long-continuing process that supplies the relatively modest (on the astronomical scale) amount of energy necessary to meet the requirements of the stable stars. It also accounts for the large energy output of one kind of supernova, as we will see in Volume III. At this time we will take a look at what happens when a star arrives at a different kind of destructive limit.

The destructive limit identified in Chapter 17 is reached when the total of the outward displacements (thermal and electric ionization) reaches equality with one of the inward rotational displacements of the atom, reducing the net displacement of the combination to zero, and destroying its rotational character. A similar destructive limit is reached when the inward displacements (rotation and gravitational charge) are built up to a level that, from the rotational standpoint, is the equivalent of zero. This concept of the equivalent of zero is new to science, and may be somewhat confusing, but its nature can be illustrated by consideration of the principle on which the operation of the stroboscope is based. This instrument observes a rotating object in a series of views at regular intervals. If the interval is adjusted to equal the rotation time, the various features of the rotating object occupy the same positions in each view, and the object therefore appears to be stationary. A similar effect was seen in the early movies, where the wheels of moving vehicles often appeared to stop rotating, or to rotate backward.
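The stroboscope principle invoked here can be put in elementary numerical terms. In the sketch below, a uniform rotation is sampled at a fixed interval; when the interval equals the rotation period, every sampled position coincides and the rotation is invisible to the sampling. The sampling framing and the numbers are editorial, offered only as an illustration of the analogy, not as part of the theory.

```python
import math

def sampled_positions(period, interval, n_samples=5):
    """Angular positions (radians, mod 2*pi) of a uniformly rotating point
    seen at regular sampling intervals, as by a stroboscope."""
    omega = 2 * math.pi / period
    return [(omega * interval * k) % (2 * math.pi) for k in range(n_samples)]

# Interval equal to the rotation period: every view shows the same position
# (all zeros, up to floating-point rounding), so the object appears stationary.
print(sampled_positions(period=1.0, interval=1.0))

# A slightly longer interval: the object appears to creep forward slowly,
# like wagon wheels in early films appearing to stop or run backward.
print(sampled_positions(period=1.0, interval=1.05))
```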
In the physical situation, if a rotating combination completes its cycle in a unit of time, each of the displacement units of the combination returns to the same circumferential position at the end of each cycle. From the standpoint of the macroscopic behavior of the motion, the positions at the ends of the time units are the only ones that have any significance—that is, what happens within a unit has no effect on other units—and, under the conditions specified, these positions lie in a straight line in the reference system. This means that there is no longer any factor tending to keep the units together as a rotational combination (an atom). Consequently, they separate as linear motions, and mass is transformed into energy. It should be understood, however, that this transformation at the destructive limit has no effect on the motion itself. Scalar motion has no property other than its positive or negative magnitude, and that remains unchanged. What is altered is the coupling to the reference system, which is subject to change at the end of any unit, if the conditions existing at that point are favorable for such a change.

The emphasis on the ends of the units of motion in the foregoing discussion is a reflection of the nature of the basic motions, as defined in the fundamental postulates of the Reciprocal System of theory. According to these postulates, the basic units of motion are discrete. This does not mean that the motion proceeds by a succession of jumps. On the contrary, motion is inherently a continuous progression. A new unit of the progression begins at the point where the preceding unit ends, so that continuity, in this sense, is maintained from unit to unit, as well as within units. But since the units are separate entities, the effects of the events that take place in one unit cannot be carried forward to the next (although the combination of the internal and external features of the same unit may be effective, as in the case of the primary and secondary mass). The individual units of motion may continue on the same basis, but the coupling of the motion to the reference system is subject to change to conform to whatever conditions may exist at the end of a unit. When the atom has returned to the situation that existed at the original zero, as is true if the end of the rotational cycle coincides with the end of the time unit, the motion has reached a new starting point, a new zero, we may say.

For the reasons previously given, the limiting value, the equivalent of zero in each scalar dimension, is eight units of one-dimensional, or four units of two-dimensional, rotational displacement. In the notation used herein, the latter is a 4-4 magnetic combination. However, as indicated in Chapter 24, the destructive limit is not reached until the displacement in the electric dimension also arrives at the equivalent of the last magnetic unit. A rotational combination (atom) is therefore stable, at zero magnetic ionization, up to 4-4-31, or the equivalent 5-4-(1), which is element 117. One more step reaches the limit at which the rotational motion terminates.

If the rotational limit is reached in atoms whose individual magnetic ionization is above the general level in the aggregate of which these atoms are constituents, the effect of approaching the limit is that the atoms become radioactive, and eject portions of their masses in the form of alpha particles, or other fragments. This prevents the building of elements heavier than number 117, but it does not result in destruction of primary mass such as that which occurs at the destructive thermal limit. Thus the radioactivity is a means of avoiding the destructive effects of reaching the limiting value of the magnetic displacement.

This situation is analogous to a number of others that are more familiar. For example, we saw in Chapter 5 that the limiting value of the specific heat of a solid is reached at a relatively low temperature. Beyond this limit the atom, or molecule, enters the liquid state. The transition requires a substantial energy input, and since the lower energy states are more probable in a low energy environment, the atom avoids the need to provide the energy increment by changing to a different thermal vibration pattern, if it has the capability of so doing. The atoms of the heavier elements make several changes of this kind as new limiting values of the specific heat are encountered at successively higher temperatures. Eventually, however, a point is reached at which no further expedients of this kind are available, and the atom must pass into the liquid state. Similarly, the probabilities favor the continued existence of the combination of motions that constitutes the atom, as long as this is possible. The destructive effects of arriving at the displacement limit are therefore avoided by the ejection of mass. But here, too, as in the case of the specific heat, a point is eventually reached where the level of magnetic ionization tending to increase the atomic mass prevents further ejection of mass from the atom, and arrival at the destructive limit can no longer be avoided.

The consequences of reaching this rotational displacement limit at the equivalent of zero are qualitatively identical with those of reaching the thermal displacement limit at zero. The various rotational components cancel out, and the motion reverts to the linear basis. This transforms mass into kinetic energy, most of which is imparted to the residue of the atoms, or to other matter in the environment. The remainder goes into electromagnetic radiation.

From a quantitative standpoint, there are some significant differences between the two phenomena. The thermal limit applies only to the heaviest element that is present in the aggregate in a significant quantity, and the rate at which this element arrives at the limit is regulated by a process that will be discussed in Volume III. The elements lower in the atomic series are not affected. Furthermore, the conversion of rotational to linear displacement (mass to energy) at the thermal limit does not necessarily apply to more than one of the magnetic displacement units of the atom, and a large part of the atomic mass may therefore remain intact, either as a residual atom or a number of fragments. Consequently, the thermal limit has no catastrophic effect until the temperature reaches the destructive limit of an element, iron, that is present in relatively large quantities. On the other hand, arrival at the magnetic displacement limit affects the entire mass of each atom, and the only portion of the mass of an aggregate that remains intact is that in the outer portions of the aggregate where the magnetic ionization level is lower than in the deeper interior. There is no process that limits the rate of disintegration at this destructive limit.
The resulting explosion, known as a Type II supernova, is therefore much more powerful (relative to the mass of the exploding star) than the Type I supernova that occurs at the thermal limit, although its full magnitude is not evident from direct observation, for reasons that will be explained in Volume III.

While the thermal disintegration process is operative in every star, it does not necessarily proceed all the way to destruction of the star. The extent to which the mass of the star, and consequently the temperature, increases depends on its environment. Some stars will accrete enough mass to reach the temperature limit and explode; others will not. But the increase in the magnetic ionization level is a continuing process in all environments, and it necessarily results in arrival at the magnetic destructive limit when sufficient time has elapsed. This limit is thus essentially an age limit.

A process related to those that have been described in the foregoing paragraphs is the sequence of events that counterbalances the conversion of three-dimensional motion (mass) into one-dimensional motion (energy) in the stars. The energy that is generated by atomic disintegration leaves the stars in the form of radiation. According to present-day views, this radiation moves outward at the speed of light, and most of it eventually disappears into the depths of space. The theory of the universe of motion gives us a very different picture. It tells us that inasmuch as the photons of radiation have no capability of independent motion relative to the natural datum, they remain stationary in the natural reference system, or move inward at the speed of the emitting object. Each photon therefore eventually encounters, and is absorbed by, an atom of matter. The net result of the generation of stellar energy by atomic disintegration is thus an increase in the thermal energy of other matter.

As will be explained in Volume III, the matter of the universe is subject to a continuing process of aggregation under the influence of gravitation. Consequently, all matter in the material sector, with the added thermal energy, is ultimately absorbed by one of the giant galaxies that are the end products of the aggregation process. When supernova explosions in the interior of one of these giant galaxies become frequent enough to raise the average particle speed above the unit level, some of the full units of speed thus made available are converted into rotational motion, creating cosmic atoms and particles. This cosmic atom building, which theoretically operates on a very large scale in the galactic interiors, has been observed on a small scale in experiments, the results of which were discussed in Volume I. In the experiments, the high energy conditions are only transient, and the cosmic atoms and particles that are produced from the high level kinetic energy quickly decay into particles of the material system. Some such decays no doubt also occur in the galactic interiors, but in this case the high energy condition is quasi-permanent, favoring continued existence of the cosmic units until ejection of the quasar takes place. In any event, the production of these rotational combinations has increased the amount of existing cosmic or ordinary matter at the expense of the amount of existing energy, thus reversing the effect of the production of energy by disintegration of atoms of matter.

In concluding this last chapter of a volume dealing with the properties of matter, it will be appropriate to call attention to the significant difference between the role that matter plays in conventional physical theory, and its status in the theory of the universe of motion. The universe of present-day physical science is a universe of matter, one in which the presence of matter is the central fact of physical existence.
In this universe of matter, space and time provide the background, or setting, for the action of the universe; that is, according to this view, physical phenomena take place in space and in time.

As Newton saw them, space and time were permanent and unchanging, independent of each other and of the physical activity taking place in them. Space was assumed to be Euclidean ("flat" in the jargon of present-day mathematical physics), and time was assumed to flow uniformly and unidirectionally. All magnitudes, both of space and of time, were regarded as absolute; that is, not dependent on the conditions under which they are measured, or on the manner of measurement. A subsequent extension of the theory, designed to account for some observations not covered by the original version, assumed that space is filled with an imponderable fluid, the ether, which interacts with physical objects.

Einstein's relativity theories, which have replaced Newton's theory as the generally accepted view of the theoretical physicists, retain Newton's concept of the general nature of space and time. To Einstein these entities constitute a background for the action of the universe, just as they did for Newton. Instead of being a three-dimensional space and a one-dimensional time, independent of each other, as they were for Newton, they are amalgamated into a four-dimensional spacetime in Einstein's system, but they still have exactly the same function; they form the framework, or container, within which physical entities exist and physical events take place. Furthermore, these basic physical entities and phenomena are essentially identical with those that exist in Newton's universe. It is commonly asserted that Einstein eliminated the ether from physical theory. In fact, however, what he actually did was to eliminate the name "ether," and to apply the name "space" to the concept previously called the "ether." Einstein's "space" has the same kind of properties that were formerly assigned to the ether, as he admits in the following statement:

"We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there still exists an ether." 25

The downfall of Newtonian physics was due to a gradual accumulation of discrepancies between theory and observation, the most critical being the results of the Michelson-Morley experiment and the measurements of the advance of the perihelion of Mercury, neither of which could be explained within the limits of Newton's system. Some modification of that system was obviously necessary. The question, as it stood around the end of the nineteenth century, was what form the revision of Newton's ideas should take.

As brought out in Chapter 13, in order to qualify as "theory," in the full meaning of the term, the treatment of a physical phenomenon must cover not only its mathematical aspects, but also its physical aspects; that is, it must provide a conceptual understanding of the entities and relations to which the mathematics refer. However, the general tendency in recent years has been to concentrate on the mathematical development and to omit the parallel conceptual development, substituting conceptual interpretations of the individual mathematical results. Richard Feynman describes the present situation in this manner:

"Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics." 56

In his attack on the problem of revising Newton's theory, Einstein not only adopted this policy of widening the latitude for theory construction by restricting his development to the mathematical aspects of the subject under consideration, and thereby avoiding any conceptual limitations on his basic assumptions, but went a step farther, and loosened the normal mathematical constraints as well. He first introduced a high degree of flexibility into the numerical values by discarding "the idea that co-ordinates must have an immediate metrical meaning [an expression that he defines as the existence of a specific relationship between differences of coordinates and measurable lengths and times]." 36 As C. Moller describes this theoretical picture:

"In accelerated systems of reference the spatial and temporal coordinates thus lose every physical significance; they simply represent a certain arbitrary, but unambiguous, numbering of physical events." 107

Along with this flexibility of physical measurement, which greatly increased the latitude for making additional assumptions, Einstein introduced a similar flexibility into the geometry of spacetime by assuming that it is distorted or "curved" by the presence of matter. The particular aim of this expedient was to provide a means of dealing with gravitation, a key issue in the general problem. One textbook explains the new view in this manner:

"What we call a gravitational field is equivalent to a 'warping' of time and space, as if it were a rubbery sort of material that stretched out of shape near heavy bodies." 108

The basis for this assertion is an assumption, the assumption that, for some unspecified reason, space and matter exert an influence upon each other. "Space acts on matter, telling it how to move. In turn, matter acts on space, telling it how to curve." 109 (Misner, Thorne, and Wheeler) But neither Einstein nor his successors have given us any explanation of how such interactions are supposed to take place—how space "tells" matter, or vice versa. Nor does the theory explain inertia, an aspect of the gravitational situation that has given the theorists considerable trouble. As Abraham Pais sums up this situation:

"It must also be said that the origin of inertia is and remains the most obscure subject in the theory of particles and fields." 110

Today there is a tendency to call upon Mach's principle, which attributes the local behavior of matter to the influence of the total quantity of matter in the universe. Misner, Thorne, and Wheeler say that "Einstein's theory identifies gravitation as the mechanism by which matter there (the distant stars) influences inertia here." 111 But, as indicated in the statement by Pais, this explanation is far from being persuasive. It obviously gives us no answer to the question that baffled Newton: How does gravitation originate? Indeed, there is something incongruous about the acceptance of Mach's principle by the same scientific community that is so strongly opposed to the concept of action at a distance.

The fact is that neither Newton's theory nor Einstein's theory tells us anything about the "mechanism" of gravitation. Both take the existence of mass as something that has to be accepted as a given feature of the universe, and both require that we accept the fact that masses gravitate, without any explanation as to how, or why, this takes place. The only significant difference between the two theories, in this respect, is that Newton's theory gives us no reason why masses gravitate, whereas Einstein's theory gives us no reason why masses cause the distortion of space that is asserted to be the reason for gravitation.

As Feynman sums up the situation, "There is no model of the theory of gravitation today, other than the mathematical form." 56 The concept of a universe of motion now provides a gravitational theory that not only explains the gravitational mechanism, but also clarifies its background, showing that mass is a necessary consequence of the basic structure of the universe, and does not have to be accepted as unexplainable. This theory is based on a new, and totally different, view of the status of space and time in the physical universe. Both Newton and Einstein saw space and time as the container for the constituents of the universe. In the theory of the universe of motion, on the other hand, space and time are the constituents of the universe, and there is no container. On this basis, the space of the conventional spatio-temporal reference system is just a reference system—nothing more. Thus it cannot be curved or otherwise altered by the presence or action of anything physical. Furthermore, since the coordinates of the reference system are merely representations of existing physical magnitudes, they automatically have the "metrical meaning" that Einstein eliminated from his theory to attain the flexibility without which it could not be fitted to the observations.

The theory of the universe of motion is the first physical theory that actually explains the existence of gravitation. It demonstrates that the gravitational motion is a necessary consequence of the properties of space and time, and that the same thing that makes an atom an atom, the rotationally distributed scalar motion, also causes it to gravitate. Additionally, the same motion is responsible for inertia. Of course, this return to absolute magnitudes and mathematical rigidity invalidates the conceptual interpretations of Einstein's solutions of the problems raised by the observed deviations from the consequences of Newton's theory, and requires finding new answers to these problems. But these answers have emerged easily and naturally during the course of the development of the details of the new theory. In most cases no changes in the existing formulation of the mathematical relations have been required.

While Einstein's modification of Newton's theory was almost entirely mathematical, our modification of the Newton-Einstein system is primarily conceptual, because the errors in currently accepted theory are nearly all in the conceptual interpretation of the observations and measurements; that is, in the prevailing understanding of the meaning of the mathematical terms and the relations between them. The changes that the new theory makes in the conceptual aspects of the gravitational situation do not affect any of the valid mathematical results of Einstein's theory. For example, most of the mathematical consequences of the general theory of relativity that have led to its acceptance by the scientific community are derived from one of its postulates, the Principle of Equivalence, which states that gravitation is the equivalent of an accelerated motion. In the theory of the universe of motion, gravitation is an accelerated motion. It follows that any conclusion that can legitimately be drawn from the Principle of Equivalence, such as the existence of gravitational redshifts, can likewise be derived from the postulates of the theory of the universe of motion in exactly the same form.

The agreement between the two theories that exists in these subsidiary areas, and in the mathematical results, does not extend to the fundamentals of gravitation. Here the theories are far apart. The theoretical development reported in the several volumes of this work shows that the attempt to resolve physical issues by mathematical means—the path that has heretofore been followed in dealing with fundamental physics—precludes any significant conceptual changes in theory, whereas, as our findings have demonstrated, there are major errors in the basic assumptions upon which the mathematical theories have been constructed. Until comparatively recently it was not feasible to locate and correct these errors, because access to a large amount of factual information is indispensable to such an undertaking, and the available supply of information was simply not adequate. Continued research has overcome this obstacle, and the development of the theory of the universe of motion has now identified the "machinery," not only of gravitation, but of physical processes in general. We are now able to identify the common denominator of all of the fundamental physical entities, and by defining it, we define the entire structure of the physical universe.

References

1. Weisskopf, V. F., Lectures in Theoretical Physics, Vol. III, Britten, J. Downs, and B. Downs, editors, Interscience Publishers, New York, 1961, p. 80.
2. Wyckoff, R. W. G., Crystal Structures, and supplements, Interscience Publishers, New York, 1948 and following.
3. All compression data in the range below 100,000 kg/cm² not otherwise identified are from the works of P. W. Bridgman, principally his book, The Physics of High Pressure, G. Bell & Sons, London, 1958, and a series of articles in the Proceedings of the American Academy of Arts and Sciences.
4. Kittel, Charles, Introduction to Solid State Physics, 5th edition, John Wiley & Sons, New York, 1976.
5. McQueen and Marsh, Journal of Applied Physics, July 1960.
6. National Bureau of Standards, Tables of Normal Probability Functions, Applied Mathematics Series, No. 23, Washington, DC, 1953.
7. Specific heat data are mainly from Hultgren et al., Selected Values of the Thermodynamic Properties of the Elements, American Society for Metals, Metals Park, OH, 1973, and Thermophysical Properties of Matter, Vol. 4, Touloukian and Buyko, editors, IFI Plenum Data Co., New York, 1970, with some supplemental data from original sources.
8. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, p. 189.
9. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New York, 1952, p. 55.
10. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955, p. 186.
11. Weisskopf, V. F., American Scientist, July-Aug. 1977.
12. Values of the coefficient of expansion are from Thermophysical Properties of Matter, op. cit., Vol. 12.
13. Duffin, W. J., Electricity and Magnetism, 2nd edition, John Wiley & Sons, New York, 1973, p. 122.
14. Ibid., p. 52.
15. Robinson, F. N. H., in Encyclopedia Britannica, 15th edition, Vol. 6, p. 543.
16. Stewart, John W., in McGraw-Hill Encyclopedia of Science and Technology, Vol. 4, p. 199.
17. Lande, Alfred, Philosophy of Science, 24-309.
18. Meaden, G. T., Electrical Resistance of Metals, Plenum Press, New York, 1965, p. 1.
19. Ibid., p. 22.
20. The resistance values are from Meaden, op. cit., supplemented by values from other compilations and original sources.
21. Values of the thermal conductivity are from Thermophysical Properties of Matter, op. cit., Vol. I.
22. Davies, Paul, The Edge of Infinity, Simon & Schuster, New York, 1981, p. 137.
23. Alfven, Hannes, Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966, p. 92.
24. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938, p. 185.
25. Einstein, Albert, Sidelights on Relativity, E. P. Dutton & Co., New York, 1922, p. 23.
26. Ibid., p. 19.
27. Einstein and Infeld, op. cit., p. 159.
28. Dicke, Robert H., American Scientist, Mar. 1959.
29. Bridgman, P. W., The Way Things Are, Harvard University Press, 1959, p. 153.
30. Heisenberg, Werner, Physics and Philosophy, op. cit., p. 129.
31. Eddington, Arthur S., The Nature of the Physical World, Cambridge University Press, 1933, p. 156.
32. Einstein and Infeld, op. cit., p. 158.
33. Science News, Jan. 31, 1970.
34. Bridgman, P. W., The Way Things Are, op. cit., p. 151.
35. Von Laue, Max, Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp, The Library of Living Philosophers, Evanston, IL, 1949, p. 517.
36. Andrade, E. N. da C., An Approach to Modern Physics, G. Bell & Sons, London, 1960, p. 10.
37. Carnap, Rudolf, Philosophical Foundations of Physics, Basic Books, New York, 1966, p. 234.
38. Gardner, Martin, The Ambidextrous Universe, Charles Scribner's Sons, New York, 1979, p. 200.
39. Einstein, Albert, Albert Einstein: Philosopher-Scientist, op. cit., p. 21.
40. Duffin, W. J., op. cit., p. 25.
41. Ibid., p. 281.
42. Gerholm, Tor R., Physics and Man, The Bedminster Press, Totowa, NJ, 1967, p. 135.
43. Ibid., pp. 147, 151.
44. Davies, Paul, Space and Time in the Modern Universe, Cambridge University Press, 1977, p. 139.
45. Einstein, Albert, Albert Einstein: Philosopher-Scientist, op. cit., p. 67.
46. Lorrain and Corson, Electromagnetism, W. H. Freeman & Co., San Francisco, 1978, p. 95.
47. Maxwell, J. C., Royal Society Transactions, Vol. CLV.
48. Rojansky, Vladimir, Electromagnetic Fields and Waves, Dover Publications, New York, 1979, p. 280.
49. Smythe, William R., McGraw-Hill Encyclopedia, op. cit., Vol. 4, p. 338.
50. Kip, Arthur, Fundamentals of Electricity and Magnetism, 2nd edition, McGraw-Hill Book Co., New York, 1969, p. 136.
51. Duffin, W. J., op. cit., p. 301.
52. Ibid., p. 313.
53. Ibid., p. 302.
54. Bleaney and Bleaney, Electricity and Magnetism, 3rd edition, Oxford University Press, New York, 1976, p. 64.
55. Dobbs, E. R., Electricity and Magnetism, Routledge & Kegan Paul, London, 1984, p. 50.
56. Feynman, Richard, The Character of Physical Law, MIT Press, 1967, p. 39.
57. Duffin, W. J., op. cit., p. 3.
58. Rogers, Eric M., Physics for the Inquiring Mind, Princeton University Press, 1960, p. 550.
59. McCaig, Malcolm, in Permanent Magnets and Magnetism, D. Hadfield, editor, John Wiley & Sons, New York, 1962, p. 18.
60. Park, David, Contemporary Physics, Harcourt, Brace & World, New York, 1964, p. 122.
61. Mellor, J. W., Modern Inorganic Chemistry, Longmans, Green & Co., London, 1919.
62. Rogers, Eric M., op. cit., p. 407.
63. Park, David, op. cit., p. 15.
64. Margenau, Henry, Open Vistas, Yale University Press, 1961, p. 72.
65. Thorne, Kip S., Scientific American, Dec. 1974.
66. Misner, Thorne, and Wheeler, Gravitation, W. H. Freeman & Co., New York, 1973, p. 620.
67. Feynman, Richard, The Feynman Lectures on Physics, Vol. II, Addison-Wesley Publishing Co., Menlo Park, CA, 1977, p. 1-3.
68. Bueche, F. J., Understanding the World of Physics, McGraw-Hill Book Co., New York, 1981, p. 328.
69. Pais, Abraham, Subtle is the Lord, Oxford University Press, New York, 1982, p. 34.
70. Hanson, Norwood R., in Scientific Change, edited by A. C. Crombie, Basic Books, New York, 1963, p. 492.
71. Schrödinger, Erwin, Science and Humanism, Cambridge University Press, 1952, p. 22.
72. Schrödinger, Erwin, Science and the Human Temperament, W. W. Norton & Co., New York, 1935, p. 154.
73. Heisenberg, Werner, Physics and Philosophy, op. cit., p. 129.
74. Feynman, Richard, The Character of Physical Law, op. cit., p. 168.
75. Jastrow, Robert, Red Giants and White Dwarfs, Harper & Row, New York, 1967, p. 41.
76. Harwit, Martin, Cosmic Discovery, Basic Books, New York, 1981, p. 243.
77. Ford, K. W., Scientific American, Dec. 1963.
78. Hulsizer and Lazarus, The New World of Physics, Addison-Wesley Publishing Co., Menlo Park, CA, 1977, p. 254.
79. Bahcall, J. N., Astronomical Journal, May 1971.
80. Duffin, W. J., op. cit., p. 165.
81. Hulsizer and Lazarus, op. cit., p. 255.
82. McCaig, Malcolm, op. cit., p. 35.
83. Anderson, J. C., Magnetism and Magnetic Materials, Chapman and Hall, London, 1968, p. 1.
84. McCaig, Malcolm, Permanent Magnets in Theory and Practice, John Wiley & Sons, New York, 1977, p. 339.
85. Duffin, W. J., op. cit., p. 156.
86. McCaig, Malcolm, Permanent Magnets, op. cit., p. 341.
87. Ibid., p. 8.
88. Shortley and Williams, Elements of Physics, 2nd edition, Prentice-Hall, New York, 1955, p. 717.
89. Lorrain and Corson, op. cit., p. 360.
90. Feynman, Richard, The Feynman Lectures on Physics, op. cit., Vol. II, p. 13-1.
91. Bueche, F. J., op. cit., p. 259.
92. Ibid., p. 267.
93. Rogers, Eric M., op. cit., p. 562.
94. Feather, Norman, Electricity and Matter, Edinburgh University Press, 1968, p. 104.
95. Kip, Arthur, op. cit., p. 278.
96. Ibid., p. 283.
97. Duffin, W. J., op. cit., p. 210.
98. Lorrain and Corson, op. cit., p. 334.
99. Handbook of Chemistry and Physics, 66th edition, Chemical Rubber Publishing Co., Cleveland, 1976.
100. Martin, D. H., Magnetism in Solids, MIT Press, 1967, p. 9.
101. Dobbs, E. R., op. cit., p. 54.
102. Lorrain and Corson, op. cit., p. 237.
103. Duffin, W. J., op. cit., p. 60.
104. Concepts in Physics, CRM Books, Del Mar, CA, 1979, p. 266.
105. Davies, Paul, The Accidental Universe, Cambridge University Press, 1982, p. 60.
106. Mitton, Simon, Astronomy and Space, Vol. I, edited by Patrick Moore, Neale Watson Academic Publishers, New York, 1972.
107. Moller, C., The Theory of Relativity, Clarendon Press, Oxford, 1952, p. 226.
108. Hulsizer and Lazarus, op. cit., p. 222.
109. Misner, Thorne, and Wheeler, op. cit., p. 5.
110. Pais, Abraham, op. cit., p. 288.
111. Misner, Thorne, and Wheeler, op. cit., p. 543.

DEWEY B. LARSON: THE COLLECTED WORKS Dewey B. Larson (1898-1990) was an American engineer and the originator of the Reciprocal System of Theory, a comprehensive theoretical framework capable of explaining all physical phenomena from subatomic particles to galactic clusters. In this general physical theory space and time are simply the two reciprocal aspects of the sole constituent of the universe–motion. For more background information on the origin of Larson‘s discoveries, see Interview with D. B. Larson taped at Salt Lake City in 1984. This site covers the entire scope of Larson‘s scientific writings, including his exploration of economics and metaphysics.

Physical Science The Structure of the Physical Universe The original groundbreaking publication wherein the Reciprocal System of Physical Theory was presented for the first time. The Case Against the Nuclear Atom ―A rude and outspoken book.‖

The Universe of Motion The third volume of the revised edition of The Structure of the Physical Universe, applying the theory to astronomy. The Liquid State Papers A series of privately circulated papers on the liquid state of matter.

Beyond Newton ―...Recommended to anyone who thinks the subject of gravitation and general relativity was opened and closed by Einstein.‖

The Dewey B. Larson Correspondence Larson‘s scientific correspondence, providing many informative sidelights on the development of the theory and the personality of its author.

New Light on Space and Time A bird‘s eye view of the theory and its ramifications.

The Dewey B. Larson Lectures Transcripts and digitized recordings of Larson‘s lectures.

The Neglected Facts of Science Explores the implications for physical science of the observed existence of scalar motion. Quasars and Pulsars Explains the most violent phenomena in the universe.

The Collected Essays of Dewey B. Larson Larson‘s articles in Reciprocity and other publications, as well as unpublished essays.

Metaphysics Beyond Space and Time A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science Nothing but Motion The first volume of the revised edition of The Structure of the Physical Universe, developing the basic principles and relations. Basic Properties of Matter The second volume of the revised edition of The Structure of the Physical Universe, applying the theory to the structure and behavior of matter, electricity and magnetism.

The Road to Full Employment
The scientific answer to the number one economic problem.
The Road to Permanent Prosperity
A theoretical explanation of the business cycle and the means to overcome it.

From: http://www.reciprocalsystem.com/um/index.htm

The Universe of Motion
DEWEY B. LARSON
Volume III of a revised and enlarged edition of The Structure of the Physical Universe

Preface
1. Introduction
2. Galaxies
3. Globular Clusters
4. The Giant Star Cycle
5. The Later Cycles
6. The Dwarf Star Cycle
7. Binary and Multiple Stars
8. Evolution–Globular Cluster Stars
9. Gas and Dust Clouds
10. Evolution–Galactic Stars
11. Planetary Nebulae
12. Ordinary White Dwarfs
13. The Cataclysmic Variables
14. Limits
15. The Intermediate Region
16. Type II Supernovae

17. Pulsars
18. Radiative Processes
19. X-ray Emission
20. The Quasar Situation
21. Quasar Theory
22. Verification
23. Quasar Redshifts
24. Evolution of Quasars
25. The Quasar Populations
26. Radio Galaxies
27. Pre-Quasar Phenomena
28. Inter-Sector Relations
29. The Non-Existent Universe
30. Cosmology
31. Implications
References

Preface

This volume applies the physical laws and principles of the universe of motion to a consideration of the large-scale structure and properties of that universe, the realm of astronomy. Inasmuch as it presupposes nothing but a familiarity with physical laws and principles, it is self-contained in the same sense as any other publication in the astronomical field. However, the laws and principles applicable to the universe of motion differ in many respects from those of conventional physical science. For the convenience of those who may wish to follow the development of thought all the way from the fundamentals, and are not familiar with the theory of the universe of motion, I am collecting the most significant portions of the previously published books and articles dealing with that theory, and incorporating them, together with the results of some further

studies, into a series of volumes with the general title The Structure of the Physical Universe. The first volume, which develops the fundamental physical relations, has already been published as Nothing but Motion. This present work is designated as Volume III. Volume II, Basic Properties of Matter, will follow.

As stated in Nothing But Motion, the development of thought in these books is purely theoretical. I have formulated a set of postulates that define the physical universe, and I have derived all of my conclusions in all physical fields by developing the necessary consequences of those postulates, without introducing anything from any other source. A companion volume, The Neglected Facts of Science, shows that many of the theoretical conclusions, including a number of those that differ most widely from conventional scientific thought, can also be derived from purely factual premises, if some facts of observation that have heretofore been overlooked or disregarded are taken into consideration.

As explained in the introductory chapter of this volume, astronomy is the great testing ground for physical theory. Here we can ascertain whether or not the physical relations established under the relatively moderate conditions that prevail in the terrestrial environment still hold good under the extremes of temperature, pressure, size, and speed to which astronomical entities are subjected. In order to be valid, the conclusions derived from theory must agree with all facts definitely established by astronomical observation, or at least must not be inconsistent with any of them. To show that such an agreement exists, I have compared the theoretical conclusions with the astronomical evidence at each step of the development. It should be understood, however, that this comparison with observation is purely for the purpose of verifying the conclusions; the observations play no part in the process by which these conclusions were reached.

There are substantial differences of opinion, in many instances, as to just what the observations actually do mean. As in the particle physics situations discussed in previous publications, the "observed facts" in astronomy are often ten percent observation and ninety percent interpretation. In those cases where the astronomers are divided, the most that any theoretical work can do is to agree with one of the conflicting opinions as to what has been observed. I have therefore identified the sources of all of the astronomical information that I have used in the comparisons. Since this work is addressed to scientists in general, rather than to a purely astronomical audience, I have taken information from readily accessible sources, where possible, in preference to the original reports in the astronomical literature.

Once again, as in the preface to Nothing But Motion, I have to say that it is not feasible to acknowledge all of the many individual contributions that have been made toward developing the details of the theoretical system and bringing it to the attention of the scientific community, but I do want to renew my expression of appreciation of the efforts of the officers and members of the organization that has been promoting understanding and acceptance of my results.
Since the earlier volume was published, this organization, founded in 1970 as New Science Advocates, has changed its name to the International Society of Unified Science, in recognition of its increased activity in foreign countries, three of which are currently represented on the Board of Trustees.

Publication of this present volume has been made possible through the efforts of Rainer Huck, who has acted as business manager of the publishing project; Jan Sammer, who has handled all of the many operations involved in taking the work from the manuscript stage to the point at which it was ready for the printers; and my wife, without whose encouragement and logistic support the book could not have been written. Also participating were Eden G. Muir, who prepared the illustrations, and Ronald Blackburn, Maurice Gilroy, Frank Meyer, and Robin Sims, who assisted in the financing.

March 1984
D. B. Larson

Copyright © 1959, 1984 by Dewey B. Larson. All rights reserved.

CHAPTER 1

Introduction

This volume is a continuation of a series which undertakes to determine the characteristics that the physical universe must necessarily have if it is composed entirely of discrete units of motion, and to show that the universe thus defined is identical, item by item, with the observed physical universe. The specific objective of this present volume is to extend the physical relations and principles developed in the earlier volumes to a description of the large-scale features of the universe of motion. This is the field of astronomy, and the pages that follow will resemble an astronomical treatise. In order to avoid misunderstanding, therefore, we will begin by emphasizing that this is not an astronomical work, in the usual sense.

Astronomy and astrophysics are based on facts determined by observation. Their objective is to interpret these facts and relate them to each other in a systematic manner. The primary criterion by which the results of these interpretive activities are judged is how well they account for, and agree with, the relevant observational data. But astronomical data are relatively scarce, and often conflicting. Opinion and judgment therefore play a very large part in the decisions that are made between conflicting theories and interpretations. The question to be answered, as it is usually viewed, is: which is the best explanation? In practice this means: which fits best with current interpretations in related astronomical areas.

The conclusions that are expressed in this work, on the other hand, are derived from the postulated properties of space and time in a universe of motion, and they are independent of the astronomical observations. These conclusions must, of course, be consistent with all that is definitely known from observation, but whatever observational information may exist, or may not exist, plays no part in the development of thought that arrives at the conclusions that are stated. Observed astronomical objects and phenomena are not being described and discussed in this work as a foundation on which to construct theory. They

are introduced only for the purpose of showing that these observations are consistent with the conclusions derived from theory. Thus the present volume is not an astronomical work, which interprets and systematizes the information derived from astronomical observation; it is a physical work, which extends the development of physical theory in the two preceding volumes into the astronomical field, confirming the previously derived laws and principles by showing that they still apply under extreme conditions.

The availability of this accurate new physical theory, developed and verified in other fields where the facts are more readily accessible, now gives us a source of information about astronomical matters that is not subject to the limitations that are inherent in the procedures that the astronomers must necessarily employ. It gives us a unique opportunity to examine the subject matter of astronomy from an outside viewpoint completely independent of any conclusions that have been reached from the results of astronomical observation.

The record of advancement of astronomical knowledge has been largely a story of the invention and utilization of new and more powerful instruments. The optical telescope, the spectroscope, the photographic plate, the radio telescope, the x-ray telescope, the photoelectric cell—these and the major improvements that have been made in their power and accuracy are the principal landmarks of astronomical progress. It is a matter of considerable significance, therefore, that in application to astronomical phenomena, the theory of the universe of motion, the Reciprocal System of theory, as we are calling it, has the characteristics of a new instrument of exceptional power and versatility, rather than those of an ordinary theory.

Astronomy has many theories, of course, but the products of those theories are quite different from the results obtained from an instrument, inasmuch as they are determined primarily by what is already known, or is believed to be known, about astronomical phenomena. This existing knowledge, or presumed knowledge, is the raw material from which the theory is constructed, and conformity with the data already accumulated, and the prevailing pattern of scientific thought, is the criterion by which the conclusions derived from the theory are tested. The results obtained from an instrument, on the other hand, are not influenced by the current state of knowledge or opinion in the area involved. (The interpretation of these results may be so influenced, but that is another matter.) If those results conflict with accepted ideas, it is the ideas that must be changed, not the information that the instrument contributes.

The point now being emphasized is that the Reciprocal System, like the instrument and unlike the ordinary theory, is wholly independent of what is known or believed about the phenomena under consideration. Stars and galaxies are found in the existing astronomical theories because they are put into these theories. In the theoretical picture they are aggregates of matter, they exert gravitational forces, they emit radiation, and so on, because this information was put into the theories. They theoretically generate the energy that is required to maintain the radiation by converting matter to energy, because this, too, was put into the astronomical theories.
They conform to the basic laws of physics and chemistry; they follow the principles laid down by Faraday, by Maxwell, by Newton, and by Einstein, because these laws and principles were put into the theories. To this vast amount of knowledge and pseudo-knowledge drawn from the common store, the theorist adds a few assumptions of

his own that bear directly on the point at issue and, after subjecting the entire mass of material to his reasoning processes, he arrives at certain conclusions. Such a theory, therefore, does not see things as they are; it sees them in the context of existing observational information and existing patterns of thought. We cannot get a quasar, for instance, out of such a theory until we put a quasar, or something from which, within the context of existing thought, a quasar can be derived, into the theory.

On the other hand, the existing concepts of the nature of astronomical objects cannot be put into an instrument. One cannot tell an instrument what it should see or what it should record, other than by limiting the scope of its application, and it therefore sees things as they are, not as the scientific community thinks that they ought to be. If there are quasars, the appropriate instrument, appropriately utilized, sees quasars. Every new instrument uncovers many errors in accepted thinking about known phenomena, while at the same time it reveals the existence of other phenomena that were not only unknown, but in many instances wholly unsuspected.

The Reciprocal System of theory is like an instrument in that it, too, is independent of existing scientific thought. Stars and galaxies composed of matter appear in this theory, but neither these objects nor the matter itself are put into the theory; they are consequences of the theory: results that necessarily follow from the only things that are put into the theory, the postulated properties of space and time. The astronomical objects that appear in the theory are subject to the basic physical laws, they exert gravitational forces, they emit radiation, and so on, not because these things were put into the theory, but because they are products of the development of the theory itself. All of the entities and relations that constitute the theoretical universe of motion are consequences of the fundamental postulates of the system. While we can hardly say, a priori, that this system of theory sees things as they are, we can say that it sees things as they must be if the physical universe is a universe of motion. If there are quasars, then this theory, like an appropriate instrument, and independently of any previous theoretical or observational information, sees quasars.

Indeed, it did see quasars, somewhat indistinctly, to be sure, but definitely, long before the astronomers recognized them. As will be brought out in detail in Chapter 20, this prediscovery development of theory identified the quasars, together with some related phenomena that were not distinguished from them at this stage of the theoretical study, as high-speed products of galactic explosions (not yet discovered observationally), defined their principal properties, and described their ultimate fate. Like the invention of the telescope, the development of this new and powerful theoretical instrument now gives the astronomer an opportunity to widen his horizons, to get a clear view of phenomena that have hitherto been hazy and indistinct, and to extend his investigations into areas that are totally inaccessible to the instruments previously available.
The picture obtained from this new instrument differs in many respects from present-day astronomical ideas—very radically in some instances—but the existence of such differences is clearly inevitable in view of the limited amount of observational information that has been available to the astronomers, and the consequent highly tentative nature of much of the astronomical theory currently in vogue. As has been demonstrated in the preceding volumes, the correct explanation of a physical situation

often differs from the prevailing ideas to a surprising degree even where the current theories have been successful enough in practice to win general acceptance. In astronomy, where comparatively few issues have been definitely settled, and differences of opinion are rampant, it can hardly be expected that the correct explanations will leave the previous theoretical structure intact.

This work does not attempt to cover the entire astronomical field. Much of the attention of the astronomers is centered on individual objects. They determine the distance to Sirius, the atmospheric pressure on Mars, the temperature of the sun's photosphere, the density of the moon, and so on, none of which is relevant to the objectives of this present work, except to the extent that some individual fact or quantity may serve to illustrate a general proposition. Furthermore, the scope of the work, both in the number of subjects covered, and in the extent to which the examination of each subject has been carried, has been severely limited by the amount of time that could be allocated to the astronomical portion of a project equally concerned with many other fields of science. The omissions from the field of coverage, in addition to those having relevance only to individual objects, include (1) items that are not significantly affected by the new findings and are adequately covered in existing astronomical literature, and (2) subjects that the author simply has not thus far gotten around to considering. Attention is centered principally on the evolutionary patterns, and on those phenomena, such as the white dwarfs, quasars, and related objects, with which conventional theory is having serious difficulties.

One of the recalcitrant problems of major significance is the question as to the origin of the galaxies.

There are a great many things that the cosmologist not only does not know, but also finds severe difficulty in envisaging a path towards finding out . . . In particular, how did the galaxies form? The encyclopaedias and popular astronomical books are full of plausible tales of condensations from vortices, turbulent gas clouds and the like, but the sad truth is that we do not know how the galaxies came into being.1 (Laurie John)

Gerrit Verschuur foresees major changes in current views:

With what perspective will someone fifty years from now read our astronomical journals and books? . . . I feel that in the area of understanding galaxies we might well leave present ideas farther behind than in any other area of astronomy.2

Most astronomers apparently believe that the question as to the origin of the stars is closer to a solution, but when the issue is squarely faced they are forced to admit that no tenable theory of star formation has yet been devised. For example, I. S. Shklovsky (or Shklovskii), a prominent Russian astronomer whose views will be quoted frequently in these pages, concedes that the star formation process is still in "the realm of pure speculation." He describes the situation in this manner:

It is natural to suppose that the connection [between O and B stars and dust clouds] should be a genetic one, with the stars in the associations being formed from condensing clouds of gas and dust. Nevertheless . . . the problem [of proof] has

not yet been definitively solved . . . the situation has turned out to be all too complicated. New technological developments . . . may ultimately lift the star formation problem from the realm of pure speculation and make it an exact science.3

Our first concern in this present work will be with these two basic problems. As we saw in Volume I, the large-scale action of the universe is cyclic. The contents of the sector of the universe in which we live, the material sector, originate in a primitive, widely dispersed form, and undergo a process of aggregation into large units. Ultimately the aggregates of maximum size are explosively ejected into an inverse sector of the universe, the cosmic sector. A similar process takes place in that sector, culminating in an explosive ejection of the major aggregates of cosmic matter back into the material sector.

The two preceding volumes have described the aggregation process in the material sector insofar as it applies to the primary units: atoms and sub-atomic particles. The incoming matter from the cosmic sector arrives in the form of cosmic atoms. The structure of these atoms is incompatible with existence in the material sector (that is, at speeds less than that of light), and they decay into sub-atomic particles that are able to accommodate themselves to the material environment. Over a long period of time these particles combine to form simple atoms, after which the atoms absorb additional particles to form more complex atoms (heavier elements). Meanwhile the atoms are subject to a continual increase in ionization, the ultimate result of which is to bring each atom to a destructive limit. At this point all, or part, of the rotational motion (mass) of the atom is converted to linear motion (kinetic energy). This atomic aggregation process, previously described in detail, thus terminates in destruction of the atom, or a portion thereof, rather than in ejection into the cosmic sector. In order to understand how the ejection takes place we will have to examine matter from a different standpoint. Heretofore we have been looking at the behavior of the individual units, the atoms. Now we will need to turn our attention to the behavior of material aggregates. This is the principal subject of the present volume.

Let us begin our consideration of these aggregates with a pre-aggregate situation, a volume of extension space (the space of the conventional reference system) in which there is a nearly uniform distribution of widely separated hydrogen atoms and sub-atomic particles, the initial products derived from the incoming cosmic matter: the cosmic rays. Coexisting with this primitive material there is usually a small admixture of matter that has been scattered into space by explosive processes, mainly gas and dust, but including some larger aggregates up to stellar size. There may even be a few small groups of stars. All this material is subject to the two general forces of the universe, gravitation and the force due to the outward progression of the natural reference system. The nature of the aggregates that are formed is determined by the properties of these two forces. Three general types of aggregates can be distinguished: (1) dust particles, (2) stars and related aggregates, (3) galaxies and related aggregates. In the diffuse matter under consideration, the progression of the natural reference system is the dominant force except at very great distances.
As we saw in Volume I, the direction of this progression is outward, but the natural outward direction, to which this

progression conforms, is away from unity, because the natural datum level is unity, not zero. Inside unit space, "away from unity" is inward as seen in the reference system. Inasmuch as the sizes of the atoms and sub-atomic particles put them into what we have called the time region, the region inside unit space, there is nothing to prevent random motion of one from bringing it within unit distance of another. When this occurs, the progression of the reference system moves these objects inward toward each other until they reach equilibrium positions where the gravitational motion and the progression are balanced. Such contacts are infrequent because of the very low densities and temperatures, but over a long period of time these infrequent contacts are sufficient to build up molecules and dust particles.

Nothing larger than a dust particle can be formed by this contact process, because as soon as the diameter of the aggregate reaches unit distance, 4.56 × 10⁻⁶ cm, the direction of the progression of the natural reference system, relative to the conventional spatial coordinate system, is reversed. Outward from unity becomes outward from each other, and the particles move apart. Inter-atomic forces of cohesion operate against this outward progression, and permit the maximum size of relatively complex particles such as the silicates to exceed the natural unit of distance to a limited extent. The maximum attainable diameter is something less than one micron (10⁻⁴ cm). This is the explanation of the "surprising" fact noted by Otto Struve:

It is surprising that the particles of all clouds are of about the same size. . . . There must be a mechanism that prevents the particles from growing larger than one micron.4

Average grain sizes are closer to the unit of distance, which is equivalent to about 0.05 micron. Simon Mitton reports average values ranging from 0.02 microns for iron to 0.15 microns for silicates.5

Each of the individual entities with diameters greater than unity existing in the primitive diffuse volume of matter—molecules, dust particles, and bits of debris from disintegrated larger aggregates—is far outside the gravitational limits of its neighbors, and the progression of the natural reference system therefore tends to move them apart, but this outward motion is opposed, not only by the gravitational forces of the neighbors, but also by the inward motion due to the combined gravitational effect of all masses within the effective distance. If we start from a given point in the region of diffuse matter, and consider spheres of successively larger radius, the progression of the natural reference system is much greater than the gravitational effect originally, but the total gravitational force is directly proportional to the mass—that is, to the cube of the radius, where the density is uniform—whereas the effect of distance is a decrease proportional to the square of the radius. The net gravitational force that the mass included within the concentric spheres exerts against a particle at the outer boundary in each case therefore increases in direct proportion to the radius of the sphere. Hence, although the gravitational motion (or force) at the shorter distances is almost negligible compared to the progression of the natural reference system, equilibrium is eventually reached at some very great distance.
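The scaling argument in the preceding paragraph can be restated compactly. The notation below is conventional Newtonian shorthand, introduced here purely for illustration; it is not drawn from the text itself. For a uniform density ρ, a sphere of radius r has mass M(r) = (4/3)πρr³, and the gravitational effect on a particle at its boundary varies as M(r)/r², so

\[
F(r) \;\propto\; \frac{M(r)}{r^{2}} \;=\; \frac{\tfrac{4}{3}\pi\rho\,r^{3}}{r^{2}} \;=\; \tfrac{4}{3}\pi\rho\,r ,
\]

which grows linearly with the radius. If the outward progression contributes an effectively constant term, a balance is necessarily reached at some sufficiently large r, the equilibrium distance referred to above. (As a side check on the figures quoted earlier: the unit distance of 4.56 × 10⁻⁶ cm is 0.0456 micron, the "about 0.05 micron" of the grain-size discussion.)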

Beyond the point of equilibrium the particles of matter are being pulled inward toward the center of the spherical aggregate. But coincidentally, the gravitational forces acting from other similar centers are being exerted on the particles in the same region of space, and the net result is that there is a movement in both directions that leaves a relatively clear space between adjacent aggregates. The original immense volume of very diffuse matter thus separates into a number of large autonomous gravitationally bound aggregates.

Current astronomical thought regards the condensation of a cloud of dust or gas as a matter of the relative strength of the gravitational force and the opposing thermal forces. On this basis, it is difficult to account for any large-scale condensation. As expressed by Gold and Hoyle:

Attempts to explain both the expansion of the universe and the condensation of galaxies must be very largely contradictory so long as gravitation is the only force field under consideration. For if the expansive kinetic energy of matter is adequate to give universal expansion against the gravitational field it is adequate to prevent local condensation under gravity, and vice versa. That is why, essentially, the formation of galaxies is passed over with little comment in most systems of cosmology.6

In the universe of motion the inward and outward forces arrive at an equilibrium, as indicated in the foregoing paragraphs. No condensation would take place if this equilibrium persisted, but the continued introduction of new matter from the cosmic sector alters the situation. The added mass strengthens the gravitational force, and initiates a contraction. The decrease in the distance between particles increases the gravitational force still further. The contraction is thus a self-reinforcing process, and once it is started it accelerates.
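The runaway character of this contraction is easy to exhibit numerically. What follows is a minimal sketch in arbitrary units, assuming a constant outward term P standing in for the progression and a simple inverse-square inward term for gravitation; the specific numbers (initial radius, mass-influx rate) are illustrative assumptions, not quantities taken from the text.

```python
# Toy sketch of the self-reinforcing contraction described above.
# Arbitrary units throughout; P (the constant outward term) and the
# mass-influx rate are illustrative assumptions, not values from the text.

def simulate(r0=100.0, m0=1.0, influx=0.001, P=1.0e-4, dt=1.0, max_steps=60000):
    """Track the radius of a diffuse sphere whose boundary feels a constant
    outward term P and an inward gravitational term m/r**2 (G taken as 1).
    The system starts at equilibrium (P = m0/r0**2); the steady addition of
    mass tips the balance inward, and the contraction then feeds on itself."""
    r, m = r0, m0
    snapshots = []
    for step in range(max_steps):
        m += influx * dt                  # continual introduction of new matter
        net = P - m / r**2                # > 0: drifting apart; < 0: contracting
        r += net * dt                     # first-order sketch of the contraction
        if step % 5000 == 0:
            snapshots.append((step, r, -net))
        if r <= 1.0:                      # call the aggregate "condensed" here
            break
    return snapshots, step, r

if __name__ == "__main__":
    snapshots, final_step, r = simulate()
    for step, radius, rate in snapshots:
        print(f"step {step:6d}   radius {radius:8.2f}   contraction rate {rate:.2e}")
    if r <= 1.0:
        print(f"collapse condition (r <= 1) reached at step {final_step}")
    else:
        print("no collapse within the allotted steps")
```

Running this prints a contraction rate that is imperceptible for thousands of steps and then grows explosively, which is the qualitative point of the paragraph above: once the influx of new matter tips the balance, the process accelerates without limit.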

The two processes that have been described, the gradual contraction of the very large diffuse aggregate and the consolidation of the individual atoms and sub-atomic particles into molecules and dust particles, take place coincidentally. The drastic reduction in the number of separate units in the aggregate resulting from the consolidation results in an excess of empty space within the contracting volume, and causes the contracting sphere of matter to break up into a large number of smaller aggregates separated by nearly empty space. The product is a globular cluster, in which a large number of sub-masses—up to a million or more—are contained within the overall gravitational limit of a large spherical aggregate. Each of the sub-masses is outside the gravitational limits of its neighbors, and is therefore moving away from them, but it is being pulled inward by the gravitational force of the entire aggregate.

Many of the internal condensations take place around the remnants of disintegrated galaxies that are scattered through the contracting material. In that case, the relatively massive core thus provided makes the mass a self-contracting unit. Where no such nuclei are available, the forces of the globular cluster as a whole confine the sub-masses, and the contraction continues under the influence of these external forces until the density is adequate to continue the process.

This is where the astronomers' current theories of star formation are stopped cold. They envision the formation as taking place in the galaxies, but there are no gas or dust clouds—in our galaxy or in any other, so far as we know—that have anywhere near the critical density, or have any way of increasing their density to the critical level.

Basically there does not appear to be enough matter in any of the hydrogen clouds in the Milky Way that would allow them to contract and be stable. Apparently our attempt to explain the first stages in star evolution has failed.7 (G. Verschuur)

If the contraction of the sub-masses contained within the globular cluster is permitted to continue without interference from outside agencies, the gravitational energy of position (the potential energy) of their constituent units—atoms, particles, etc.—is gradually transformed into kinetic energy, and the temperature of the aggregate consequently rises. At some point, the mass becomes self-luminous, and it is then recognized as a star. The globular cluster, as we observe it, consists of an immense number of stars, separated by great distances, and forming a nearly spherical aggregate. As the foregoing discussion brings out, however, the star cluster stage is preceded by a stage in which the constituent units, or sub-masses, of the globular cluster are prestellar gas clouds rather than stars. The existence of such structures has some important consequences that will be explored as we proceed.

No new assumptions or concepts have had to be introduced in order to derive this picture of the stellar condensation process in the depths of space. We have simply taken the physical principles and relations previously obtained from a development of the consequences of the basic postulates as to the nature of space and time, as described in the previous volumes of this work, and have applied them to the problems at hand. The results of this study not only give us a clear picture of how the formation of stars takes place, but also show that the formation occurs under conditions that necessarily exist throughout immense regions of space. The production of sufficient star clusters of the globular type to meet the requirements of the later phases of evolutionary development is thus shown to be a natural and inevitable consequence of the premises of the theory.

The globular clusters are actually small aggregates of the same general nature as the galaxies. "There is no absolutely sharp cutoff distinguishing galaxies from globular clusters,"8 says Martin Harwit. The process just described thus provides the answers for both of the major astronomical problems identified earlier: the formation of stars and the formation of galaxies. As noted earlier, present-day astronomy has no tenable theory of galaxy formation. In the words of W. H. McCrea, "We do not yet know how to tackle the problem."9 The situation with respect to the formation of stars is somewhat different, in that, although it is evident that the mechanism of star formation is not yet understood, there is a general impression that the dust clouds in the galaxies must be the locations in which this mechanism is operating.

In such cases as this, where the general trend of thought in any field is on the wrong track, the reason almost invariably is the uncritical acceptance of some erroneous conclusion or conclusions.
As will be brought out in detail in the pages that follow, astronomy has unfortunately been the victim of two particularly far-reaching errors. The latter portion of this volume will examine a wide variety of phenomena in which the true

relations have not heretofore been recognized because the general submission to Einstein's dictum that speeds in excess of that of light are impossible has diverted inquiry into unproductive channels. The theories applicable to the more familiar astronomical objects that will be discussed in the earlier chapters have been led astray by another erroneous conclusion also imported from the physicists. This costly mistake is the conclusion that the energy production process in the stars is the conversion of hydrogen to helium and successively heavier elements. As brought out in Volume II, the development of the consequences of the postulates that define the universe of motion arrives at a totally different conclusion as to the nature of the process by which the stellar energy is produced.

Inasmuch as there is no direct way of determining just what is happening in the interiors of the stars, all conclusions with respect to this energy generation process have to be based on considerations of an indirect nature. Thus far, the thinking about this subject has been dominated by the physicists' insistence that the most energetic process known to them must necessarily be the process whereby the stars generate their energy, regardless of any evidence to the contrary that may exist in other scientific areas. The fact that they have had to change their conclusions as to the nature of this process twice already has not altered this attitude. The most recent change, from the gravitational contraction hypothesis to the hydrogen conversion hypothesis, was preceded by a long and acrimonious dispute with the geologists, whose evidence showed that geological history required a great deal more time than was allowed by the gravitational contraction process. Ultimately the physicists had to concede defeat.

It might be assumed that the embarrassing outcome of this controversy would have engendered a certain amount of caution in the claims made for the newest hypothesis, but there is no indication of it. Today there is ample astronomical evidence that the physicists' current hypothesis is wrong, just as there was ample geological evidence in the nineteenth century that their then current hypothesis was wrong. But they are no more willing to listen to the astronomical evidence today than they were to the geological evidence of the earlier era. The astronomers are less combative than the geologists, and are not inclined to challenge the physicists' dicta. So they are ignoring the evidence from their own field, and accommodating their theories to the hydrogen conversion hypothesis.

Curiously enough, the only real challenge to that hypothesis at the present time comes from a rather unlikely source, an experiment whose execution is difficult, and whose interpretation is open to question. This is an experiment designed to measure the rate of emission of neutrinos by the sun. The number of neutrinos observed is far less than that predicted on the basis of the prevailing theories. "This is a terrible puzzle,"10 says Hans Bethe.

The neutrino experiment is one of the most interesting to be carried out in astronomy in recent years, and seems to be giving the most profound and unexpected results. The least that we can conclude is that until the matter is settled, we must treat all the theoretical predictions about stellar interiors with a bit of caution.11 (Jay M.
Pasachoff)

The mere fact that the hydrogen conversion process can be seriously threatened by a marginal experiment of this kind emphasizes the precarious status of a hypothesis that rests almost entirely on the current absence of any superior alternative. The hypothesis of

energy generation by ordinary combustion processes held sway in its day on the strength of the same argument. Then gravitational contraction was recognized as more potent, and became the physicists' orthodoxy, defended furiously against attacks by the geologists and others. Now the hydrogen conversion process is the canonical view, resting on exactly the same grounds that crumbled in the two previous instances. In each case the contention was that there is no other tenable alternative. But in both of these earlier cases it turned out that there was such an alternative. Even without the contribution of the theory of the universe of motion, which shows that, in fact, there is a logical and rational alternative, it should be evident from past experience that the assertion that "there is no other way" is wholly unwarranted. Without this crutch, the hydrogen conversion process is no more than a questionable hypothesis, a very provisional conclusion that must stand or fall on the basis of the way that its consequences agree with physical observations. Unfortunately the astronomers, whose observations are the ones against which the hypothesis can be tested, have taken it as an established fact, and have accorded it a status superior to their own findings, adjusting their interpretations of their own observations to agree with the physicists' hypothesis.

We need go no farther than the first deduction that is made from the assumed existence of the hydrogen conversion process to encounter a glaring example of the way in which this pure assumption is allowed to override the astronomical evidence. In application to the question of stellar ages, this hypothetical process leads to the conclusion that the hot, massive stars of the O and B classes are very young, as their output of energy is so enormous that, on the basis of this hypothesis, their supply of fuel cannot last for more than a relatively short time. It then follows that these stars must have been formed relatively recently, and somewhere near their present locations. No theory that calls for the formation of stars within the galaxies is plausible so long as the theorists are unable to explain how stars can be formed in this kind of an environment. One that, in addition, requires the most massive and most energetic of all stars to be very young, astronomically speaking, converts the implausibility into an absurdity. Even some of the astronomers find this conclusion hard to swallow. For instance, Bart J. Bok once observed that

It is no small matter to accept as proven the conclusion that some of our most conspicuous supergiants, like Rigel, were formed so very recently on the cosmic scale of time measurements.12

In the context of the theory of the universe of motion, the formation of single stars, or small groups of stars, by condensation from galactic dust or gas clouds is not possible. In addition to all of the other problems that have baffled those who have attempted to devise a mechanism for this purpose, the new theory discloses that there is a hitherto unrecognized force operating against such a condensation, the force due to the outward progression of the natural reference system, which makes condensation still more difficult. No known force other than gravitation is capable of condensing diffuse material into a star, and gravitation can accomplish this result only on a wholesale scale, under conditions in which an immense number of stars are formed jointly from a gas and dust medium of vast proportions.

On this basis, the globular clusters are the youngest aggregates of matter, and the stars of these clusters are the youngest of all stars. Thus the astronomers have their age sequence upside down. It may be hard to believe that the present structure of astronomical theory could contain such a major error in its basic framework. But, as we will see when we examine the various astronomical phenomena in the pages that follow, even the astronomers themselves admit that the theoretical conclusions based on the currently accepted age sequence are inconsistent with the observations all along the line. Of course they are reluctant to make any blanket statement to this effect, but if we add up their comments concerning the individual items, this is what they amount to.

In the quotations from astronomical sources that will be introduced in connection with the discussion of these various subjects we will find that the individual inconsistencies and contradictions are characterized as "puzzling," "curious," "confusing," "difficult to explain," "not yet understood," and so on. Some of the more candid writers concede that the theoretical understanding is unsatisfactory, referring to a particular inconsistency as "an impressive challenge to theoreticians," admitting that it "imperils" currently accepted theory, or "conflicts with current models," reporting that "severe problems remain" in arriving at understanding, or even that the observations constitute an "apparent defiance" of modern theory. The existence of this multitude of commonly recognized contradictions and inconsistencies is a clear indication that there is something radically wrong with the foundations of present-day astronomical theory. What the development of the theory of a universe of motion has done is to identify the mistake that has been made. Uncritical acceptance of an assumption made by the physicists has led to a conclusion regarding the ages and evolution of stars that is upside down.

CHAPTER 2

Galaxies

From the finding that the initial product of the large-scale aggregation process in the material sector of the universe is the globular cluster, it follows that galaxies are formed by consolidation of globular clusters. This conclusion is in direct conflict with the prevailing astronomical opinion, which is described by John B. Irwin as follows:

The Milky Way system, like other galaxies, is thought to have originated from a condensation or collapse of the intergalactic medium, which event resulted in a system of stars. The reason for the collapse is not known, and the details of the process are uncertain.13

As might be expected where neither the antecedents of the process nor the details are in any way understood, this explanation has encountered serious difficulties, and is currently in deep trouble. As expressed by Virginia Trimble in a report of a conference at which this situation was discussed at some length, "The conventional wisdom concerning galaxy formation and evolution is beginning to leak badly at the seams." In the concluding portion of her report she notes that "Fall, Hogan, and Rees (Cambridge) have

considered the case of a galaxy assembled entirely out of pre-existing star clusters," and she makes this comment:

The discerning reader will long since have noticed where we are headed—if there are problems making the biggest things (clusters of galaxies) first, then perhaps we should try making the smallest things (stars or clusters of stars) first.14

Such a reversal of thinking on the subject is difficult in the context of present-day astronomical theory, because so much of that theory has been specifically tailored to fit the "big things first" viewpoint, but, as we will see in the following pages, if the observational evidence is taken at its face value and not twisted to conform to the prevailing theories, the problems disappear. In the universe of motion the galaxies are, in fact, "assembled entirely out of pre-existing star clusters," as the Cambridge astronomers suggested.

Unlike the individual stars, whose spheres of gravitational control meet at locations of minimum gravitational force, so that each star is outside the gravitational limits of its neighbors, the original boundaries of the aggregate that ultimately becomes a globular cluster meet those of its neighbors at locations of maximum gravitational force. The contraction of the aggregates leaves the gravitational effect at these locations unchanged, while the increase in mass due to the influx of material from the cosmic sector adds a significant increment. Each of the globular clusters is thus well within the gravitational limits of the adjoining clusters. Consequently, there is a general tendency for the clusters to move toward each other and combine. When such a combination does occur, the combined unit exerts a stronger gravitational force within wider spatial limits, and both the accretion of diffuse material and the attraction of nearby clusters are speeded up. Like the contraction of the pre-cluster aggregate, the contraction of the group of clusters leading to combination is thus a self-reinforcing process.

It should be noted in this connection that consolidation of two clusters is inevitable if their mutual gravitational attraction continues to act without interference from outside sources (that is, gravitational forces of other aggregates). There has been a rather general belief that, because of the immense distances between the stars in a cluster or other aggregate, two such structures could pass through each other with little or no actual contact. Fred Hoyle expresses this general opinion in this statement:

Think of the stars as ordinary household specks of dust. Then we must think of a galaxy as a collection of specks a few miles apart from each other, the whole distribution filling a volume about equal to the Earth. Evidently one such collection of specks could pass almost freely through another.15

Our finding that the stars occupy equilibrium positions throws a considerably different light on this situation. A stellar aggregate such as a cluster has the general characteristics of a viscous liquid, and collision of two such aggregates involves an inelastic impact similar to the impact of one liquid aggregate upon another. In each case there is a certain amount of penetration while the kinetic energy of the incoming mass is being absorbed, but the eventual result is consolidation. The incoming mass meets a wall, not a passageway.

This liquid-like nature of the aggregates of stars, deduced theoretically and confirmed observationally by the behavior characteristics of the galaxies and star clusters that will be examined in the subsequent pages, has a major effect on the phenomena in which these objects participate. It invalidates many of the conclusions, such as the one expressed by Hoyle in the statement just quoted, and a great many mathematical calculations, that rest on the hypothesis of free movement of the constituent stars of an aggregate.

Consolidation of two globular clusters produces an aggregate which not only has double the mass of a cluster, but also, because the impact is not exactly central in the usual case, has a rotational motion that was absent in the original clusters. Instead of an oversize cluster, we may therefore regard the combination as an aggregate of a new type: a small galaxy. For a period of time after its formation such a galaxy has a rather confused and disorderly structure, and is therefore classified as irregular, but in time the disruptions due to the collision are smoothed out, and the galaxy assumes a more regular form. By reason of the rotational motion that is now present, the galactic structure deviates to some extent from the nearly spherical shape of the original clusters, and it is now classed as an elliptical galaxy.

If some larger unit does not capture this small elliptical galaxy, it continues growing by accretion of dust and gas, and occasionally picks up another globular cluster. In the earlier stages, each such capture of a cluster disorganizes the galactic structure and puts the galaxy back into the irregular class for a time, but as it increases in size the galaxy gradually becomes able to swallow a cluster without any major effect on its own structure. By this time, however, some combinations of small galaxies begin to take place. Here, again, a structural irregularity develops, and persists for a time. In this stage the aggregates are reported to be "several hundred times larger than the dwarf elliptical galaxies."18

As long as the captured clusters are mature—that is, fully consolidated into stars—the amount of dust in an elliptical or small irregular galaxy is relatively minor. Eventually, however, one or more of the captives is a cluster of dust and gas clouds, an immature globular cluster, rather than a mature cluster of stars. The mixing of this large amount of dust and gas with the stars of the galaxy alters the dynamics of the rotation, and causes a change in the galactic structure. If the dust cloud is captured while the galaxy is still quite small, the result is likely to be a reversion to the irregular status until further growth of the galaxy takes place. Because of the relative scarcity of the immature clusters, however, most captures of these objects occur after the elliptical galaxy has grown to a substantial size. In this case the result is that the structure of the galaxy opens up and a spiral form develops.

There has been a great deal of speculation as to the nature of the forces responsible for the spiral structure, and no adequate mathematical treatment of the subject has appeared. But from a qualitative standpoint there is actually no problem, as the forces which are definitely known to exist—the rotational forces and the gravitational attraction—are sufficient in themselves to account for the observed structure. As already noted, the galactic aggregate has the general characteristics of a heterogeneous viscous liquid.
A spiral structure in a rotating liquid is not unusual; on the contrary, a striated or laminar structure is almost always found in a rapidly moving heterogeneous fluid, whether the

motion is rotational or translational. Objections have been raised to this explanation, generally known as the "coffee cup" hypothesis, on the ground that the spiral in a coffee cup is not an exact replica of the galactic spiral, but it must be remembered that the coffee cup lacks one force that plays an important part in the galactic situation: the gravitational attraction toward the center of the mass. If the experiment is performed in such a manner that a force simulating gravitation is introduced, as, for instance, by replacing the coffee cup with a container that has an outlet at the bottom center, the resulting structure of the surface of the water is very similar to the galactic spiral.

In this kind of a rotational structure the spiral is the last stage, not an intermediate form. By proper adjustment of the rotational velocity and the rate of water outflow the original dispersed material on the water surface can be caused to pull in toward the center and assume a circular or elliptical shape before developing into a spiral, but the elliptical structure precedes the spiral if it appears at all. The spiral is the end product. The manner in which the growth of the galaxy takes place has a tendency to accentuate the spiral form, but the rotating liquid experiment shows that the spiral will develop in any event when the necessary conditions exist. Furthermore, this spiral is dynamically stable. We frequently find the galactic spirals characterized as unstable and inherently short-lived, but the experimental spiral does not support this view. From all indications, the spiral structure could persist indefinitely if the mass and rotational velocity remained constant.

The conclusion that the spiral arms are quasi-permanent features of the galaxies is currently contested on other grounds, as in the following quotation from an astronomy textbook:

The trouble is that this idea predicts the arms should be nearly fixed structures almost as old as the galaxy itself, whereas actually they are young regions only a few million years old.16

The assertion that the spiral arms are "young regions" is based on the presence of hot, massive stars, currently considered to be young on the strength of the prevailing assumption as to the nature of the stellar energy generation process. The evidence that invalidates this hypothesis, which will be presented at appropriate points in the pages that follow, thus cuts the ground from under this argument. A simple numerical illustration of how differential rotation alone winds material into a trailing spiral pattern is sketched below.
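The following sketch is not a model of the viscous-liquid mechanism described in the text; it only illustrates the generic geometric point that differential rotation, by itself, shears initially radial material into a trailing spiral. The rotation law ω(r) = 1/r is an arbitrary assumption chosen for simplicity.

```python
# Toy illustration: differential rotation winds a radial line of tracer
# points into a trailing spiral. The rotation law omega(r) = 1/r is an
# arbitrary assumption; nothing here is taken from the text's own model.
import math

def wound_line(t, n_tracers=10, r_max=10.0):
    """Positions, at time t, of tracers that started on the +x axis.
    Inner tracers rotate faster, so the line shears into a spiral."""
    points = []
    for i in range(1, n_tracers + 1):
        r = r_max * i / n_tracers
        theta = t / r                 # omega(r) = 1/r  =>  theta(t) = t/r
        points.append((r, theta, r * math.cos(theta), r * math.sin(theta)))
    return points

if __name__ == "__main__":
    for t in (0.0, 10.0, 40.0):
        print(f"t = {t:5.1f}")
        for r, theta, x, y in wound_line(t):
            turns = theta / (2.0 * math.pi)
            print(f"  r = {r:5.1f}   turns = {turns:5.2f}   (x, y) = ({x:7.2f}, {y:7.2f})")
```

A run shows the inner tracers completing many turns while the outer ones lag, so a snapshot of the positions traces out a spiral. In the account given in the text this winding would be moderated by the liquid-like cohesion of the aggregate and the central gravitational pull, which is why the galactic spiral can be quasi-permanent rather than winding up indefinitely.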
A spiral galaxy consists of a nucleus, approximately spherical, and a system of curving arms extending outward from the nucleus. In the smaller and younger objects the nucleus is small, the arms are thick and widely separated, and the general structure can be described as loose. As these galaxies grow older and larger, the nucleus becomes more prominent, the rotational velocity increases, and the greater velocity causes the arms to thin out and wind up more tightly. Ultimately the arms disappear entirely and the nearly spherical nucleus becomes the galaxy. At this stage the shape of the galaxy is the same as that of the smallest and youngest of the galaxies that have attained a stable form, and these giant old galaxies are generally included in the elliptical category. But putting such widely different aggregates into the same class simply on the basis of their form leads to confusion, and cannot be considered good practice. Fortunately, the term "spheroidal" is being used to some extent in this connection, and since it is quite appropriate, we will classify these oldest and largest of the stellar aggregates as spheroidal galaxies.

As the foregoing discussion brings out, the primary criterion of the age of galaxies is size, with shape as a secondary characteristic varying in direct relation to size. It must be realized, of course, that accidents of environment and other factors will affect this situation to some extent, so that there are some deviations from the normal pattern, but in general the ages of the various types of galactic structures stand in the same order as their sizes.

The passage of time also brings other observable results that confirm the ages indicated by the sizes of the galaxies. One of these is a decrease in abundance. In the evolutionary course as outlined, each aggregate is growing at the expense of its environment. The smaller units are feeding on atoms, small particles, and stray stars. The larger aggregates pull in not only all material of this kind in their vicinity, but also any of the small aggregates that are within reach. As a result of this cannibalism the number of units of each size progressively decreases with age. Observations show that the existing situation is in full agreement with the theoretical expectation, as the order of abundance is the inverse of the age sequence indicated by the galactic size and shape. The giant spheroidal galaxies, the senior members of the galactic family, are relatively rare, the spirals are more common, the elliptical galaxies are abundant, and the globular clusters exist in enormous numbers.

It is true that the observed number of small elliptical galaxies, those in the range just above the globular clusters, is considerably lower than would be predicted from the age sequence, but it is evident that this is a matter of observational selection. When the majority of galaxies are observed at such distances that only the large types are visible, it is not at all strange that the number of small elliptical galaxies actually identified is less than the number which, according to the theory, should exist. The many additional elliptical galaxies discovered within the Local Group in very recent years, increasing the already high ratio of elliptical to spiral galaxies in the region accessible to detailed observation, emphasize the effect of the selection process.

Conventional astronomical theory neither requires nor excludes the existence of large numbers of these dwarf galaxies, and because they are too inconspicuous to demand attention from an observational standpoint, little notice has been taken of them until recently. Since our development leads to the conclusion that they are, next to the globular clusters, the most numerous of the astronomical aggregates, it is worth noting that the astronomers are beginning to recognize their abundance. For instance, a recent (1980) comment suggests that these dwarfs "may be the most common type of galaxy in the universe."17 This is what the theory of the universe of motion says that they must be.

Other observational indications of age will be examined later, after some more foundations have been laid, but these will merely supply additional confirmation.
At this time it should be noted that all three of the criteria thus far discussed are in agreement that the observed galaxies and sub-galaxies can be placed in a sequence consistent with the theoretical deduction that there is a definite evolutionary path in the material sector of the universe extending from dispersed atoms and sub-atomic particles through multimolecular dust particles, clouds of atoms and particles, stars, clusters of stars, elliptical galaxies, and spiral galaxies to the giant spheroidal galaxies which constitute the final stage of the material phase of the great cycle of the universe. It is possible, of course, that some of these units may have remained inactive from the evolutionary standpoint for

long periods of time, perhaps because of a scarcity of available "food" for accretion in their particular regions of space, and such units may be chronologically older than some of the aggregates of a more advanced type. Such variations as these are, however, merely minor fluctuations in a well-defined evolutionary pattern.

"One of the continuing mysteries," says Virginia Trimble, "is why galaxies should have the range of masses they do."14 The foregoing explanation of the evolution of the galaxies shows why. The galaxies originate as globular clusters and grow by capture until they reach a size limit at which their existence terminates. Galaxies therefore exist in all sizes between these two limits.

Next we turn to a different kind of evidence that gives further support to the theoretical conclusions. In the preceding discussion it has been demonstrated that the deductions as to continual growth of the material aggregates by capture of matter from the surroundings are substantiated by the definite correlation between the size, shape, and relative abundance of the various types of galaxies and clusters. Now we will examine some direct evidence of captures of the kind required by the theory. First we will consider evidence which indicates that certain captures are about to take place, then evidence of captures actually in progress, and finally evidence of captures that have taken place so recently that their traces are still visible.

The observed positions and motions of the globular clusters provide the most abundant evidence of impending captures, but the total amount of information about these clusters now available is sufficient to justify a separate chapter. The capture of clusters by galaxies will therefore be discussed in Chapter 3, in connection with the general consideration of the role of these objects.

Capture of galaxies by larger galaxies is much less common than capture of globular clusters, simply because the clusters are very much more abundant. We may deduce, however, that there should be a few galaxies on the road to capture by each of the giant galaxies. This is confirmed by the observation that the nearer large spirals have "satellites," which are nothing more than small galaxies that are within the gravitational range of a larger aggregate, and are being pulled in to where they can be conveniently swallowed. The Andromeda spiral, for instance, has at least eight satellites: the elliptical galaxies M 32, NGC 147, NGC 185, and NGC 205, and four small galaxies that have been named Andromeda I, II, III, and IV. The Milky Way galaxy is also accompanied by at least six fellow travelers, the largest of which are the two Magellanic Clouds and the elliptical galaxies in Sculptor and Fornax. The expression "at least" must be included in both cases, as it is by no means certain that all of the small elliptical galaxies in the vicinity of these two large spirals have been identified. As one report summarizes the situation, the dwarf galaxies "cluster in swarms about the giant galaxies." The author goes on to say, "Why this should be is not yet understood; but theorists believe that it could be telling us much about the way galaxies form."18 In the light of the information presented in the foregoing pages, it should be evident that what these observations are telling us is simply that the original products are undergoing a process of consolidation into larger aggregates.

Some of these galactic satellites not only occupy the kind of positions required by theory, and to that extent support the theoretical conclusions, but also contribute evidence of the second class: indications that the process of capture is already under way. The so-called "irregular" galaxies were not given a separate place in the age-size-shape sequence previously established, as it appears reasonably certain that these galaxies, which constitute only a small percentage of the total number of galaxies that have been observed, are merely galaxies belonging to the standard classes which have been distorted out of their normal shapes by factors related to the capture process.

The Large Magellanic Cloud, for instance, is big enough to be a spiral, and it contains the high proportion of advanced type stars that is characteristic of the spirals. Why, then, is it irregular rather than spiral? The most logical conclusion is that the answer lies in the proximity of our own giant system; that the Cloud is in the process of being swallowed by our big spiral, and that it has already been greatly modified by the gravitational forces that will eventually terminate its existence as an independent unit. We can deduce that the Large Cloud was actually a small spiral at one time, and that the "rudimentary" spiral structure that is recognized in this galaxy is actually a vestigial structure.

The Small Cloud has also been greatly distorted by the same gravitational forces, and its present structure has no particular significance. From the size of this Cloud we may deduce that it was a late elliptical or early spiral galaxy before its structure was disrupted. The conclusion that it is younger than the Large Cloud, which we reach on the basis of the relative sizes, is supported by the fact that the Small Cloud contains a mixture of the type of stars found in the globular clusters, currently called Population II, and the type found in the spiral arms, currently called Population I, whereas the stars of the Large Cloud are predominantly of Population I.

The long arm of the Large Cloud, which extends far out into space on the side opposite our galaxy, is a visible record of the recent history of the Cloud. The gravitational attraction of the Galaxy is exerted on each component of the Cloud individually, as well as on the structure as a whole, since the Cloud is an assembly of discrete units in which the cohesive and disruptive forces are in balance. This balance is precarious at best, and when an additional gravitational force is superimposed on the equilibrium within the Cloud some of the stars are detached from the aggregate. The difference between the forces exerted by our galaxy on the nearest stars of the Cloud and those exerted on the most distant stars was unimportant when the Cloud was far away, but as it approached the Galaxy this force differential increased to significant levels. As the main body was speeded up by the increasing gravitational pull some stragglers failed to keep up with the faster pace, and once they had fallen behind, the force differential became even greater. The Cloud therefore left a luminous trail behind it marking the path along which it had traveled.
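The magnitude of this force differential is easy to illustrate. The following sketch, in Python, uses round-number values chosen purely for illustration (the galactic mass and satellite diameter assumed here are not figures quoted in the text) to show how the differential pull across a satellite grows as the distance to the galaxy shrinks:

# Rough estimate of the gravitational force differential across a
# satellite cloud, per unit mass. All values are illustrative assumptions.
G = 4.301e-3        # gravitational constant, pc (km/s)^2 per solar mass

M_galaxy = 1.0e11   # assumed mass of the large galaxy, solar masses
d_cloud = 5.0e3     # assumed diameter of the satellite, parsecs

def pull(r):
    """Gravitational acceleration toward the galaxy at distance r (parsecs)."""
    return G * M_galaxy / r**2

for r in (3.0e5, 5.0e4):   # a distant approach versus a close approach
    near = pull(r - d_cloud / 2)   # pull on the cloud's near side
    far = pull(r + d_cloud / 2)    # pull on the cloud's far side
    print(f"distance {r:.0e} pc: differential is "
          f"{(near - far) / pull(r):.1%} of the mean pull")

At the larger distance the near and far sides feel nearly the same force; at the smaller distance the differential is a substantial fraction of the total pull, which is the condition under which stragglers begin to be detached.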

This is no isolated phenomenon. Small galaxies may be pulled into larger units without leaving visible evidence behind, as the amount of material involved is too small to be detected at great distances, but when two large galaxies approach each other we commonly see luminous trails of the same nature as the one that has just been discussed.

Fig. 1 is a diagram of the structural details that can be seen in photographs of the galaxies NGC 4038 and 4039. Here we can see that one galaxy has come up from the lower right of the diagram and has been pulled around in a 90 degree bend. The other has moved down from the direction of the top center and has been deflected toward the first galaxy. When the action is complete there will be one large spiral moving forward to its ultimate destiny, leaving the stray stars trailing behind the galaxies to be pulled in individually, or be picked up by some other aggregate that will come along at a later time.

Several thousand "bridges" that have developed from interaction between galaxies are reported to be visible in photographs taken with the 48-inch Schmidt telescope on Mount Palomar. Some of these are trailing arms similar to those in Fig. 1. Others are advance units that are rushing ahead of the main body. The greater velocity of these advance stars is also due to the gravitational differential between the different parts of the incoming galaxy, but in this case the detached stars are the closest to the source of the gravitational pull and are therefore subject to the greatest force.

Irregularities of one kind or another are relatively common in the very small galaxies, but these are not usually harbingers of coming events like the gravitational distortions of the type experienced by the Magellanic Clouds. Instead, they are relics of events that have already happened. Capture of a globular cluster by a small galaxy is a major step in the evolution of the aggregate. Consolidation with another small galaxy is a revolutionary event. Since the relatively great disturbance of the galactic structure due to either of these events is coupled with a slow return to normal because of the low rotational velocity, the structural irregularities persist for a longer time in these smaller galaxies. The number of small irregular aggregates visible at any particular time is correspondingly large.

Although the general spiral structure of the larger galaxies is regained relatively soon after a major consolidation because of the high rotational velocities that speed up the mixing process, there are features of some of these structures that seem to be correlated with recent captures. We note, for instance, that a number of spirals have semi-detached masses, or abnormal concentrations of mass within the spiral arms, that are difficult to explain as products of the recent development of the spiral itself, but could easily be the result of recent captures. The outlying mass NGC 5195, seemingly attached to one of the arms of M 51, for example, has the appearance of a recent acquisition (although there is some difference of opinion as to the true status of this object). The lumpy distribution of matter in M 83 gives this galaxy the aspect of a recent mixture which has not yet been thoroughly stirred; NGC 4631 looks as if it contains a still undigested mass; and so on.

A study of the "barred" spiral galaxies also leads to the conclusion that these objects are galactic unions that have not yet reached the normal form. The variable factor in this case appears to be the length of time required for consolidation of the central masses of the combining galaxies. If the original lines of motion intersect, the masses are no doubt intermixed quite thoroughly at the time of contact, but an actual intersection of this kind is not required for consolidation.
All that is necessary is that the directions of motion be such as to bring one galaxy into the general vicinity of the other. The gravitational force then accomplishes the change of direction that is necessary in order to bring about a contact of the two objects. Where the gap to be closed by gravitational action is relatively large, the rotational forces may establish the characteristic spiral form in the outer regions

of the combination before the consolidation of the central masses is complete, and in the interim the galactic structure is that of a normal spiral with a double center.

[Figure 2: (a) the barred spiral NGC 1300; (b) the arms of M 51; (c) M 51 with the central portions of the arms removed]

Fig. 2(a) shows the structure of the barred spiral galaxy NGC 1300. Here the two prominent arms terminate at the mass centers a and b, each of which is connected with the galactic center c by a bridge of dense material that forms the bar. On the basis of the conclusions in the preceding paragraph, we may regard a and b as the original nuclei of galaxies A and B, the two aggregates whose consolidation produced NGC 1300. The gravitational forces between a and b are modifying the translational velocities of these masses in such a manner as to cause them to spiral in toward their common center of gravity, the new galactic nucleus, but this process is slowed considerably after the galaxy settles down to a steady rotation, as only the excess velocity above the rotational velocity of the structure as a whole is effective in moving the mass centers a and b forward in their spiral paths. In the meantime the gravitational attraction of each mass pulls individual stars out of the other mass center, and builds up a new galactic nucleus between the other two. As NGC 1300 continues on its evolutionary course, we can expect it to gradually develop into a structure such as that in Fig. 2(b), which shows the arms of M 51. Fig. 2(c) indicates how M 51 would look if the central portions of the arms were removed. The structural similarity to NGC 1300 is obvious.

Additional evidence of relatively recent captures will be developed in Chapter 8 after some further groundwork has been laid. Meanwhile the evolutionary pattern of the constituent stars of the clusters and galaxies will be defined, and it will be shown that the stellar evolution corresponds with the pattern of evolution of the galaxies, as described in this present chapter. All in all, the results obtained from these various lines of inquiry add up to an overwhelming mass of evidence confirming the validity of the theoretical process of galactic evolution beginning with dispersed matter and ending with the giant spheroidal galaxies.

This picture of continuous growth from globular cluster to spheroidal galaxy extending over a period of many billion years is in direct conflict with the prevailing astronomical view, which regards the galaxies as having been formed directly from dispersed matter in an early stage of an evolutionary universe, and having remained in essentially the same condition in which they were originally formed. The difference between this view and that derived from the Reciprocal System of theory is graphically illustrated by an argument offered by Shklovsky in support of the contention that a process of star formation must be operative in the Galaxy. He points out that at least one of the stars of the Galaxy "dies" each year in a supernova explosion, and then argues that "In order that the stellar tribe should not become extinct, just as many new stars on the average must be formed annually in our Galaxy."19 While our findings portray the Galaxy as not only pulling in single stars on a continuous basis, but also periodically swallowing a globular cluster, and even an occasional small galaxy, Shklovsky is not even willing to concede the capture of one star per year.

The same viewpoint is reflected in the current tendency to try to explain the globular clusters detected in intergalactic space as outgoing rather than incoming. These "intergalactic tramps," says one text, "may actually be globular clusters that escaped from our Galaxy."20 Even the halo stars surrounding the Galaxy tend to be regarded as escapees from the original galactic system rather than as incoming matter.

In a strange juxtaposition alongside this uncompromising orthodox view, there is a widespread and growing recognition of the prevalence of galactic cannibalism. For example, Joseph Silk tells us that "It seems that the giant galaxies have grown at the expense of other galaxies in their cluster."21 M. J. Rees elaborates on the same theme:

We can see many instances where galaxies seem to be colliding and merging with each other, and in rich clusters such as Coma the large central galaxies may be cannibalizing their smaller neighbors . . . Many big galaxies—particularly the so-called cD galaxies in the centers of clusters—may indeed be the result of such mergers.22

There is also an increased willingness to recognize the observational indications of galactic collisions. After a number of years during which the collision hypothesis applied earlier to such powerful radio emitters as Cygnus A was regarded as a mistake, it has resurfaced, and is now widely accepted. We now frequently encounter unequivocal statements such as this: "Several hundred collisions or near collisions between galaxies have been photographed in the past 20 years."23

The concepts of galactic cannibalism, of galaxies "growing," of "capture," and of "collision," are the concepts appertaining to the theory developed in this work, not to the theory currently accepted by the astronomers. Whether or not the investigators who are using these concepts realize that they are striking at the foundations of orthodox theory is not clear, but in any event, that is the effect of the present trend of thought. These present-day investigators and theorists are providing an increasing amount of significant support for the conclusions detailed in this volume.

One more question about the aggregation process remains to be considered. We have found thus far in our examination of this process that the original stellar aggregates, the

globular clusters, enter into combinations, which continue growing until they reach the status of giant spheroidal galaxies. The question now arises: is this the end of the aggregation process, or do the galaxies combine into super-galactic aggregates? The existence of many definite groups of galaxies with anywhere from a dozen to a thousand members would seem to provide an immediate answer to this question, but the true status of these groups or clusters of galaxies is not as evident as that of the stars or the galaxies. Each of the stars is a definite unit, constructed according to a specific pattern from subsidiary units that are systematically related to each other. The same can be said of the galaxies. It is by no means obvious, however, that this statement can be applied to the clusters of galaxies. So let us turn to a theoretical examination of the question.

The globular cluster, we found, originates as a contracting aggregate of diffuse matter in which numerous centrally concentrated sub-aggregates are forming. Because of their central concentration these sub-aggregates, which eventually become stars, meet their neighbors at locations of minimum gravitational effect, and their net movement is therefore outward away from each other. Dispersed aggregates of nearly uniform density, on the other hand, meet their neighbors at locations where the gravitational effect is at a maximum. They exist as separate entities only because of competition between the various centers, which limits each aggregate to the minimum stable size. When open space is made available by reason of contraction of the individual units, these aggregates, the globular clusters, move inward toward each other.

If we now consider a still larger volume of space, there are no large-scale aggregates corresponding to the stars; that is, centrally concentrated aggregates that are outside the gravitational limits of their neighbors. But in their original condition, the assemblage of globular clusters constitutes a dispersed aggregate similar to the dispersed aggregate of gas and dust particles, but on a larger scale. Applying the same principles as before, we can deduce that there exists a gravitationally determined limiting size of the aggregates of clusters (which we will call groups) corresponding to the limiting size of the aggregates of gas and dust (the globular clusters). We could continue this hierarchy of aggregates, and contemplate an aggregate of groups, but before this next level of structure has time to materialize, the life span of the constituent stars has terminated. Thus the groups of globular clusters, which eventually become groups of galaxies, are the largest structural units. The hierarchical theory, in which there are clusters, clusters of clusters, and so on indefinitely, is thus excluded. This theory has maintained a certain amount of support in astronomical circles over the years, but on the basis of the foregoing findings it is no longer tenable.

The theoretically defined groups of galaxies are not necessarily, or even usually, coincident with the currently recognized aggregates called clusters of galaxies. The members of each of the classes of aggregates that we have defined, clusters and groups, are moving inward toward each other. The inward motion of the smaller units, the clusters, is much the faster. It follows that the net motion of the outer clusters of adjoining groups carries them away from each other, even though the groups of which they are components are moving inward.
Consequently, the amount of empty space between groups continually increases. Ultimately the inward motion of the groups would reverse this trend, if it continued, but before this can happen the time limit intervenes.

Inasmuch as the new groups form in the regions of space left empty by the recession or disintegration of previously existing groups of galaxies—the "holes" in space reported by the astronomers—the sizes of the resulting aggregates of galaxies are determined by the sizes of the vacant spaces. This is a matter of chance, and the individual values are no doubt distributed over a considerable range, but we can conclude that there is an average size, probably including some hundreds of visible galaxies and many hundreds of invisible dwarfs, to which most aggregates will conform approximately, with a relatively small number substantially above or below this average.

On this basis, the largest units in which gravitation is effective toward consolidation of its components are the groups of galaxies. Each such group is formed jointly with a number of adjoining groups. These groups begin separating immediately, but until the outward movement produces a clear-cut separation, their identity as distinct individuals is not apparent to observation. Here, then, is the explanation of the large "clusters" and "superclusters" of galaxies. These are not structural units in the same sense as stars or galaxies, or the groups of galaxies that we have been discussing. Each consists of a number of independent groups, formed simultaneously in the same general region of space, and separating so slowly that the processes of galaxy formation and growth are well under way before the units have moved far enough apart to be recognized as separate entities. Some of the mathematical aspects of these cluster relationships will be explored further in Chapter 15.

CHAPTER 3

Globular Clusters

In the preceding chapter we saw that galaxies (small ones, called globular clusters) condense out of diffuse material, grow by accretion and capture, and finally at an advanced age reach the limiting size, that of a giant spheroidal galaxy. This is the essence of the large-scale evolutionary process in the material sector of the universe, the subject of the first half of this volume. The next several chapters will be devoted to examining the most significant details of this process. We will first turn our attention to the galaxies, junior grade, the globular clusters.

It should be noted, in this connection, that current astronomical theory has no explanation for either the formation of the clusters or their existence in their present form. It is generally assumed that the clusters are products of the process of galaxy formation, but this provides no answer to the problem, in view of the absence of anything more than vague and tentative ideas as to how the galaxies were formed.

The clusters are spherical, or nearly spherical, aggregates containing from about 20,000 stars to a maximum that is subject to some difference of opinion, but is probably in the neighborhood of a million stars. These are contained in a space with a diameter from about 5 to perhaps 25 parsecs. The parsec is a unit of distance equivalent to 3.26 light years. Both of these units are in common use in astronomy, and in order to conform to the

language in which the information extracted from the astronomical literature is expressed, both units will be employed in the pages that follow.

The structure of these clusters has long been a mystery. The problem is that only one force of any significant magnitude, that of gravitation, has been definitely identified as operative in the clusters. Inasmuch as the gravitational force increases as the distance decreases, the force that is adequate to hold the cluster together should be more than adequate to draw the constituent stars together into one single mass, and why this does not happen has never been ascertained. Obviously some counter force is acting against gravitation, but the astronomers have been unable to find any such force.

Orbital motion naturally suggests itself, in view of the prevalence of such motion among astronomical objects, but the rotations of the clusters, if they are rotating at all, are far too small to account for the outward force. For example, K. Cudworth, reporting on a study of M 13, says that "no evidence of cluster rotation was found."24 It is recognized that this is a problem that calls for an answer. "Why then is the rotation of globular clusters so small?"25 ask Freeman and Norris. Those who dislike having to concede that there is a significant gap in astronomical knowledge here are inclined to make much of the fact that a few clusters do show some signs of rotation. For instance, Omega Centauri is slightly flattened, and some indication of rotation has been found in the spectra of M 3. But a showing that some clusters rotate is meaningless. All must be rotating quite rapidly to give any substance to the hypothesis that rotational forces are counterbalancing the gravitational attraction. If even one cluster is not rotating, or is rotating only slowly, this is sufficient to demonstrate that rotation is not the answer to the problem. Thus it is clear that rotation does not provide the required counter force.
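A rough order-of-magnitude check shows how large the deficiency is. The short Python sketch below uses round-number values that are assumptions chosen for illustration, not figures quoted in the text (a half-million solar masses within a ten-parsec radius), to estimate the rotation speed that would be required for rotational support:

import math

# Rotation speed required to balance the self-gravity of a globular
# cluster. The mass and radius are illustrative assumptions.
G = 4.301e-3   # gravitational constant, pc (km/s)^2 per solar mass
M = 5.0e5      # assumed cluster mass, solar masses
r = 10.0       # assumed cluster radius, parsecs

v_required = math.sqrt(G * M / r)   # circular speed for rotational support
print(f"required rotation speed: about {v_required:.0f} km/s")

The result is on the order of 15 km/s, whereas the rotations actually reported for the clusters, where any are detected at all, amount to no more than a few kilometers per second.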
The suggestion has also been made that these clusters may be similar to aggregates of gas molecules, in which the individual units maintain a wide separation, on the average. But such an explanation requires both high stellar speeds and frequent collisions, neither of which can be substantiated by observation. Furthermore, the existence of the gaseous type of structure depends on elastic collisions, and the impact of stars upon stars, if it were possible, would certainly not be elastic. Indeed, a rather large degree of fragmentation could be expected. Together with the large kinetic energies that would be required to counterbalance the weight of the overlying layers of stars, this would result in a physical condition in the central regions of the clusters very different from that existing in the outlying regions. Here, again, no such effect is observed.

The astronomers are reluctant to concede that such a conspicuous problem as that of the structure of these clusters is without an acceptable solution, and the general tendency is to assume that the possibilities mentioned in the preceding paragraphs will somehow develop into an answer at some future time. It is therefore significant that exactly the same problem exists with respect to the observed dust and gas clouds in the Galaxy, and here, where the processes suggested as possible explanations of the cluster structure clearly do not apply, the theorists are forced to admit that this is "a major unanswered question." The dust cloud situation will be discussed in Chapter 9.

As in so many of the phenomena previously examined, the answer to this problem is provided by the outward progression of the natural reference system relative to the conventional stationary system of reference. Because of the way in which the cluster is

formed, every constituent star is outside the gravitational limits of its neighbors, and therefore has a net outward motion away from each of them. Coincidentally, all of the stars in the cluster are subject to a motion toward the center of the aggregate by reason of the gravitational effect of the cluster as a whole. Near this center, where the gravitational effect of the aggregate is at a minimum, the net motion is outward. But in the outer regions of the cluster, where the gravitational motion exceeds the progression of the reference system, the net motion is inward. The outer stars thus exert a force on the inner ones, confining them to a finite volume, in much the same way that the fabric of a balloon confines the gas that it encloses. The immense region of space around each star is thus reserved for that star alone, irrespective of the stellar motions. Whether or not the cluster acquires a rotation is immaterial. It is equally stable in a static condition.
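The confinement mechanism just described can be put in simple numerical terms. In the sketch below (Python, with arbitrary illustrative units; the linear growth of the inward effect is an added modeling assumption, corresponding to a roughly uniform star density, so that the mass interior to a radius varies as the cube of the radius while the attraction falls off as its square), a constant outward progression is compared with an inward gravitational effect that increases with distance from the cluster center:

# Net motion of a star at various distances from the cluster center,
# comparing a constant outward progression with an inward gravitational
# effect that grows linearly with radius (uniform-density assumption).
# All values are in arbitrary illustrative units.
R = 10.0            # assumed cluster radius
progression = 1.0   # outward progression, the same at every radius
grav_at_edge = 2.0  # assumed inward gravitational effect at the boundary

for r in [0.5, 2.0, 4.0, 8.0, 10.0]:
    inward = grav_at_edge * (r / R)
    net = progression - inward
    print(f"r = {r:4.1f}: net motion {abs(net):.2f} "
          f"{'outward' if net > 0 else 'inward'}")

The stars inside the crossover radius (with these numbers, at the halfway point) press outward while the outer stars press inward, which is the balloon-like confinement described above.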
This question as to the structure of the globular clusters is only one of many physical situations in which an equilibrium exists between gravitation and a hitherto unidentified counter force. Because of the lack of understanding of the nature and origin of this force, the general tendency has been to ignore it, and either to grope for some other kind of answers, as in the globular cluster case, or to evade the issue in some manner. One of the few authors who has recognized that an "antagonist" to gravitation must exist is Karl Darrow. "This essential and powerful force has no name of its own," Darrow points out in an article published in 1942. "This is because it is usually described in words not conveying directly the notion of force."26 By this means, Darrow says, the physicist "manages to avoid the question." In spite of the clear exposition of the subject by Darrow (a distinguished member of the Scientific Establishment), and the continually growing number of cases in which the "antagonist" is clearly required in order to explain the existing relations, the physicists have "managed to avoid the question" for another forty years. The development of the theory of a universe of motion has now revealed that the interaction between two oppositely directed forces plays a major role in many physical processes all the way from inter-atomic events to major astronomical phenomena. We will meet the "antagonist" to gravitation again and again in the pages that follow.

Like gravitation, this counter force, which we have identified as the force due to the outward progression of the natural reference system relative to the conventional system of reference, is radial in the globular cluster, and since these two are the only forces that are operative to any significant degree during the formative period, the contraction of the original cloud of dust and gas into a cluster of stars is accomplished without introducing any appreciable amount of rotation. This is the answer to the question posed by Freeman and Norris. As noted in Chapter 2, consolidation of two or more of these clusters to form a small galaxy usually results in a rotating structure. The same result could be produced on a smaller scale if the cluster picks up a stray group of stars or a small dust cloud. Some such event, or gravitational effects during the approach to the Galaxy, probably accounts for the small amount of rotation that does exist in some clusters.

The compression of the cluster structure reduces the inter-stellar distances to some extent, but they are still immense. Current estimates put the density at the center of the cluster at about 50 stars per cubic parsec, as compared to one star per ten cubic parsecs in the solar vicinity.27 This corresponds to a reduction in separation by a factor of eight. Since the local separation exceeds 1½ parsecs, or five light years, the average separation in the central regions after compression is still more than half of a light year, or 3 × 10^12 miles, an enormous distance.
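The arithmetic can be checked directly: separation varies as the inverse cube root of the number density, and the five-light-year figure converts readily into miles. A short Python verification, using only the numbers quoted above:

# Verification of the separation figures quoted in the text.
central_density = 50.0   # stars per cubic parsec, at the cluster center
local_density = 0.1      # stars per cubic parsec, in the solar vicinity

# Separation scales as the inverse cube root of number density.
ratio = (central_density / local_density) ** (1.0 / 3.0)
print(f"separation reduced by a factor of {ratio:.1f}")   # about 8

local_separation_ly = 5.0     # light years, as stated in the text
miles_per_ly = 5.88e12        # miles in one light year
central_ly = local_separation_ly / ratio
print(f"central separation: {central_ly:.2f} light years, "
      f"or about {central_ly * miles_per_ly:.1e} miles")

The output, roughly 0.6 light years, confirms the "more than half of a light year" figure given above.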

For general application to the inter-stellar distances, the term "star system" has to be substituted for the word "star" as used in the foregoing paragraphs, but star systems in this sense are rare in the globular clusters. The origin and nature of double and multiple systems will be discussed in Chapter 7.

In assessing the significance of the various available items of information about the globular clusters, to which we will now turn our attention, it should be kept in mind that all of the conclusions that have been reached in this work concerning these individual items are derived from the same source as the foregoing explanations of the origin and structure of the globular clusters; that is, from the postulates that define the universe of motion.

As indicated in the preceding chapter, the observations of the globular clusters add materially to the amount of evidence confirming the theoretical conclusions as to the growth of the galactic aggregates by the capture process. On the basis of this theory, each galaxy is pulling in all of the clusters within its gravitational limits. We can therefore expect all galaxies, except those that are still very young and very small, to be surrounded by a concentration of globular clusters moving gradually inward. Inasmuch as the original formation of the clusters took place practically uniformly throughout all of the space under the gravitational control of each galaxy (except for a very large-scale radial effect that will be discussed later), the concentration of clusters should theoretically continue to increase as the galaxy is approached, until the capture zone is reached. Furthermore, the number of clusters in the immediate vicinity of each galaxy should theoretically be a function of the gravitational force and the size of the region within the gravitational limit, both of which are related to the size of the galaxy.

These theoretical conclusions are confirmed by observation. A few clusters have been found accompanying such small galaxies as the member of the Local Group located in Fornax; there are several in the Small Magellanic Cloud and two dozen or more in the Large Cloud; our Milky Way galaxy has 150 to 200, when allowance is made for those which we cannot see for one reason or another; the Andromeda spiral, M 31, has the same number or more; NGC 4594, the "Sombrero" galaxy, is reported to have "several hundred" associated clusters; while the number surrounding M 87 is estimated to be from one to two thousand. These numbers of clusters are definitely in the same order as the galactic sizes indicated by observation and by criteria previously established. The Fornax—Small Cloud—Large Cloud—Milky Way sequence is not open to question. M 31 and our own galaxy are probably close to the same size, but there are indications that M 31 is slightly larger. The dominant nucleus in NGC 4594 shows that this galaxy is still older and larger, while all of the characteristics of M 87 suggest that it is near the upper limit of galactic size.

Observation gives us only what amounts to an instantaneous picture, and to support the validity of the theoretical deductions we must rely primarily on the fact that the positions

of the clusters as observed are strictly in accord with the requirements of the theory. It is significant, however, that such information as is available about the motions of the clusters of our own galaxy is also entirely consistent with the theoretical findings. In the words of Struve, we know "that the orbits of the clusters tend to be almost rectilinear, that they move much as freely falling bodies attracted by the galactic center."28 According to the theory of the universe of motion, this is just exactly what they are.

We see the globular clusters as a roughly spherical halo extending out to a distance of about 100,000 light years from the galactic center. There is no definite limit to this zone. The cluster concentration gradually decreases until it reaches the cluster density of intergalactic space, and individual clusters have been located out as far as 500,000 light years. This distribution of the clusters is completely in agreement with the theoretical conclusion that the clusters do not constitute parts of the galactic structure, but are independent units that are on the way to capture by the Galaxy. Both the spherical distribution and the greater concentration in the immediate vicinity of the Galaxy are purely geometrical consequences of the fact that the gravitational forces of the Galaxy are pulling the clusters in from all directions at a relatively constant rate.

On the basis of the theoretical findings described in the preceding pages, the globular clusters are the youngest of the visible astronomical structures, and the stars of which they are composed (aside from an occasional older star or a small group of stars obtained from the environment in which the cluster condensed) are the youngest members of the stellar population. One of the observable consequences of this youth is supplied by the composition of the matter in the cluster stars. Inasmuch as the build-up of the heavier elements, according to the theoretical findings, is a continuing process, offset only to a limited extent by the destruction of those atoms that reach one or the other of the destructive limits, the proportion of heavy elements in any aggregate increases with age. It can be expected, then, that the stars of the globular clusters, with only a few exceptions, are composed of relatively young matter, in which the heavy element content is low.

The evidence concerning the stellar composition is somewhat limited, as the observations reflect only the conditions in the outer regions of the stars, and are influenced to a substantial degree by the character of the material currently being accreted from the environment. "Detailed studies of the composition of stars," says J. L. Greenstein, "can be made only in their atmospheres."29 However, the differences in the reported values are too large to leave any doubt as to the general situation.
For example, the percentage of elements above helium in the average globular cluster is reported to be lower by a factor of 10 or more than the corresponding percentage in the sun.30

Current astronomical theory concedes that the matter in the stars of the globular clusters is matter of a less advanced type than that in the spiral arms, but to reconcile this fact with the prevailing ideas as to the age of the clusters it invokes the assumptions (1) that the heavier elements were produced in the stellar interiors, (2) that they were ejected therefrom in supernova explosions, and (3) that the stars with the greater heavy element content were formed from this ejected material. This is an ingenious theory, but it is being called upon to explain a situation that is decidedly abnormal. The normal expectation would, of course, be that the youngest matter would be found in the youngest structures. A theory that postulates a reversal of the normal relationships is not ordinarily

given serious consideration unless some strong evidence in its favor can be produced, but in this case there is no observational evidence to support any of the three assumptions. Indeed, there is some evidence to the contrary, as in the following report:

The relative abundance of these [heavy] elements in the supernova is not very different from their abundance in the sun. If the supernovae synthesize heavy elements out of lighter ones in the course of their explosion, none of that material is initially seen in the rapidly expanding debris.31 (Robert P. Kirshner)

This is an example of the way in which, as noted in Chapter 1, the astronomical community is disregarding or distorting the evidence from observation in order to avoid contradicting the physicists' conclusions as to the nature of the stellar energy generation process. The failure to find any evidence of the predicted increase in the concentration of heavy elements in the supernova products is, in itself, a serious blow to a theory that rests entirely on assumptions, but it is only one of a long list of similar conflicts and inconsistencies that we will encounter as we proceed with our examination of the astronomical field.

As will be demonstrated in the pages that follow, all of the relevant astronomical evidence that is available is consistent with the theoretical identification of the course of galactic evolution outlined in the preceding pages, and is more than ample to confirm its validity. In fact, the available data concerning the globular clusters are sufficient in themselves to provide a conclusive verification of the theoretical conclusions set forth in this work. The remainder of this chapter will review these globular cluster data, and will indicate their relevance to the point at issue. The various items of information that have been accumulated will be described briefly. Each description will then be followed by a short discussion, indicating the manner in which this item is related to the point that is being demonstrated: the validity of the new conclusions with respect to the place of the clusters in the evolutionary sequence.

1. Observation: The globular cluster structure is stable.

Comment: The explanation of the hitherto inexplicable structure of the clusters has already been discussed, but it should be included in the present review of the evidence contributed by the observations. The fact that the explanation of the cluster structure is provided by the existence of the same hitherto unrecognized factor that accounts for the recession of the distant galaxies is particularly significant.

2. Observation: The proportion of heavy elements in the stars of the globular clusters is considerably lower than in the stars and interstellar material in the solar neighborhood.

Comment: Like item number 1, this fact, already discussed, is being included in the list so that it will appear in the summary of the evidence.

3. Observation: Some globular clusters contain appreciable numbers of hot stars.

Comment: This observed fact is very disturbing to the supporters of current theories. Struve, for example, called the presence of hot stars an "apparent defiance" of stellar evolutionary theory.32 But it is entirely in harmony with the theory of the universe of motion. Some stars, or groups of stars, are separated from the various aggregates by explosive processes, and are scattered into intergalactic space. As the globular clusters form from dispersed material they incorporate any of these strays that happen to be present. Others are captured as the clusters move through space. The presence of a small component of older and hotter stars in some of the young globular clusters is thus normal in the universe of motion. On the other hand, if the clusters have always existed in the outer regions of the galaxies, and are composed of very old stars, in accordance with conventional astronomical theory, the hot stars (which in this theory are young) should have disappeared long ago.

4. Observation: Some clusters also contain nebulous material.

Comment: Helen S. Hogg, writing in the Encyclopedia Britannica, says, "Puzzling features in some globular clusters are dark lanes of nebulous material." It is difficult, she says, "to explain the presence of distinct, separate masses of unformed material in old systems."33 Quite true. But it is easy to explain the presence of such material in young systems, which the clusters are, according to the findings of this work.

5. Observation: There is an increasing amount of evidence indicating that very large dust clouds are being pulled into the Galaxy.

Comment: This observed phenomenon has not yet been fitted into conventional astronomical theory. It is part of the cannibalism that is contrary to the premises of that theory, but is not yet clearly recognized in that light. In the universe of motion, the significance of these incoming dust clouds is clear. They are simply unconsolidated globular clusters, aggregates that have been, or are about to be, captured by the Galaxy before they have had time to complete the process of star formation. Considerable information concerning the structure of these unconsolidated clusters, and the nature of the processes that they undergo after entering the Galaxy, is now available, and will be examined in Chapter 9.

6. Observation: Aside from the somewhat exceptional instances where nebulous material is present, the globular clusters show little evidence of the presence of dust.

Comment: Current astronomical theory ascribes this to age, assuming that over a long period of time the original dust will have been formed into stars, or captured by stars. Our finding is that the nature of the globular cluster condensation process results in almost all of the dust and gas of which the cluster was originally composed being brought under the gravitational control of the stars. In this condition the dust is not observable as a separate phenomenon. Evidence of the existence of dust aggregates is observed only where the normal condensation

process has been subject to some disturbing influence, or where a dust cloud has been captured.

7. Observation: Globular clusters exist in a zone surrounding our galaxy that extends out to a distance of at least 100,000 light years from the galactic center, and in similar locations around other galaxies. The existence of a substantial number of clusters in intergalactic space is also indicated.

Comment: The crucial point in this connection is the number of intergalactic clusters. According to conventional theory, the formation of the globular clusters was part of the formation of the galaxies, and there should be no clusters between the galaxies other than a few strays. In the universe of motion intergalactic space is the original zone of formation of the clusters, and the concentration around each galaxy is merely a geometric result of the gravitational motion toward the galaxy from all directions. On this basis there should be no definite limits to the cluster zone. The clusters should just thin out gradually until they reach the approximately uniform density in which they exist in space that is free of large aggregates of matter. The total number of intergalactic clusters should thus be very large. The amount of information currently available is not sufficient to produce a definitive answer to the question as to how common these intergalactic clusters actually are, but the increasing number of discoveries of distant clusters is highly favorable to the new theory. The growing realization that dwarf galaxies, not much larger than globular clusters, may be "the most common type of galaxy in the universe" is a significant step toward recognition that intergalactic space is well populated with globular clusters. Indeed, some of the aggregates that are now being identified as dwarf galaxies may actually be globular clusters. Current estimates of the size of these dwarf galaxies, which put the average at about one million stars, are within the range of the estimates of the sizes of the globular clusters that have been made by other observers.

8. Observation: The number of clusters associated with each galaxy is a function of the mass of the galaxy.

Comment: Either theory can produce a satisfactory explanation of this fact. On the basis of conventional theory the material from which the clusters are formed should constitute a fairly definite proportion of the total galactic raw material, and a larger galaxy should therefore provide material for more clusters. The Reciprocal System of theory asserts that the clusters are being drawn in from surrounding space, and that the more massive galaxies gather more clusters because they exert stronger gravitational forces throughout larger volumes of space.

9. Observation: The distribution of clusters around the Galaxy is nearly spherical, and there is no evidence that the cluster system participates to any substantial degree in galactic rotation.

Comment: This is difficult to reconcile with conventional theory. If the formation of the clusters was a part of the galaxy formation as a whole, it is hard to explain why one part of the structure acquired a high rotational velocity while another part of the same structure acquired little or none. B. Lindblad has suggested that the Galaxy is composed of sub-systems of different degrees of flattening, each rotating at a different rate. This, however, is simply a description, not an explanation. The Reciprocal System of theory provides a simple and straightforward explanation. According to this theory the clusters are not part of the Galaxy, but are external objects being drawn into the Galaxy by gravitational force. On this basis the reason why the clusters do not participate in the galactic rotation is obvious. The nearly spherical distribution is also explained by the theoretically near uniform distribution of the clusters in the volume of space from which they were drawn.

10. Observation: Interstellar distances in the outer regions of the globular clusters are comparable to those in the solar neighborhood. Present estimates are that the distances in the central regions are less by a factor of about eight.27

Comment: The significant point about the foregoing is that the variations in interstellar distance are relatively minor, and even in the locations of greatest density the distances between the stars are enormous. Conventional theory has no explanation for this state of affairs. In fact, the observed limitation on the minimum distance between stars is ignored in current astronomical thought, and close approaches of stars are features of a number of astronomical theories. The finding of this work is that the immense size of the minimum distance between stars (other than that between members of binary or multiple systems) is not accidental; it is a result of the inability of a star (or star system) to come within the gravitational limit of another. The stars do not approach each other more closely because they cannot do so.

11. Observation: The "orbits" of the clusters are rectilinear. As expressed by Struve in the statement previously quoted, the clusters "move much as freely falling bodies attracted by the galactic center."

Comment: Our findings are that this is exactly what they are, and that the observed motions are therefore just what we should expect. Conventional theory can explain such motions only by assuming extremely elongated elliptical orbits with relatively frequent passage of the clusters through the galactic structure. In view of the liquid-like nature of this structure, as deduced from the postulates that define the universe of motion, such passages through the galaxy are clearly impossible. Even without this information, however, it should be rather obvious that there is some reason why the observed minimum separation between the stars in the solar neighborhood (the only region in which we can determine the minimum) is so large. There is no justification for assuming that this reason, whatever it may be, is any less applicable to the stars of the globular clusters. The factors that determine this minimum separation bar the passage of any stellar aggregate through any other such aggregate, irrespective of what their nature may

be. The conventional explanation of the observed inward motions of the clusters also conflicts with the following observation.

12. Observation: Clusters closer to the galactic center are somewhat smaller than those farther out. Studies indicate a difference of 30 percent between 10,000 parsecs and 25,000 parsecs.34

Comment: If the "elongated orbit" theory were correct, the present distances from the galactic center would have no significance, as a cluster could be anywhere in its orbit. But the existence of a systematic difference between the closer and more distant clusters shows that the present positions do have a significance. Since the visible diameter of the average cluster is in the neighborhood of 100 light years, and the actual overall dimensions are undoubtedly greater, there is a substantial gravitational differential between the near and far sides of a cluster at distances within 100,000 light years. We can therefore deduce that the clusters are experiencing an increasing loss of stars as they approach the Galaxy, both by acceleration of the closest stars and by retardation of the most distant. The effect of slow losses of this kind on the shape of the aggregate is minor, and the detached stars remerge with the general field of stars that is present in the same zone as the cluster. The process of attrition is therefore unobservable in any direct manner, but we can verify its existence by the comparison of sizes as noted above. From the observed differences it appears that the clusters lose more than half of their mass by the time they reach what may be regarded as the capture zone, the region in which the gravitational action on the cluster structure is relatively severe. (If the star content varies roughly as the cube of the visible diameter, a 30 percent reduction in size corresponds to retaining only about a third of the original stars.) The loss of stars due to gravitational differentials is substantially less in the case of a cluster approaching a small elliptical galaxy. Thus we find that an elliptical galaxy in Fornax, a member of the Local Group with a mass of about 2 × 10^9 solar equivalents, "contains about five globulars that are bigger than those in our galaxy."

13. Observation: There is also an increase in the heavy element content of the cluster stars as the distance from the galactic center decreases.

Comment: This is another systematic correlation with radial distance that contradicts the "elongated orbit" theory. It is also inconsistent with the currently prevailing assumption that the globular clusters are component parts of the Galaxy and were formed in conjunction with the rest of the galactic structure.

14. Observation: The globular clusters range in size from a few tens of thousands to over a million stars. No stable stellar aggregates have been found between this size and the multiple star systems consisting of a few stars separated by very short distances comparable to the diameters of planetary orbits.

Comment: This is a very striking situation for which present-day astronomical theory has no explanation. A study of the problem by S. Von Hoerner was able to conclude only that "the reasons must lie in the original conditions under which the

clusters were formed."35 This is true, but it is not an explanation. What is needed is the information derived from theory in Chapter 2, the nature of those "conditions under which the clusters were formed." As brought out there, no star can be formed within the gravitational limit of an existing star or multiple star system, since the gravitational pull of that star or star system prevents the accumulation of sufficient star-forming material. (Binary and multiple stars, as we will see later, are formed by division of existing stars, not by condensation of new stars.) Stars formed outside the gravitational limit of an existing star are subject to a net outward motion. The cluster is held together only by reason of the gravitational attraction that the cluster as a whole exerts on its constituent stars. A cluster must therefore exceed a certain minimum size in order to be gravitationally stable. Such clusters originate only where large numbers of stars are formed contemporaneously from dust and gas clouds of vast proportions.

The foregoing discussion has considered 14 sets of facts, derived from observation, that represent the most significant items of information about the globular clusters now available, aside from a few items that we will not be in a position to appraise until after some further background information has been developed. The deductions from the postulates of the universe of motion that have been described supply a full and detailed explanation of every one of these sets of facts. The performance of conventional astronomical theory, on the other hand, is definitely unsatisfactory, even if it is given the benefit of the doubt where definitive answers to the questions at issue are unavailable.

Evaluation of the adequacy of explanations is, of course, a matter of judgment, and the exact score will differ with the appraiser, but an evaluation on the basis of the comments that were made in the preceding discussion leads to the conclusion that conventional theory provides explanations that are tenable, on the basis of what is known from observation, for only three of the 14 items (1, 6, 8). It supplies no explanation at all for five items (2, 7, 9, 10, 14), and the explanation it advances is inconsistent with the observed facts in six cases (3, 4, 5, 11, 12, 13). Five more sets of observations that are pertinent to this evaluation will be examined in Chapter 9, and with the addition of these items the total score for conventional astronomical theory is 4 items explained, 7 with no explanation, and 8 explanations inconsistent with observation. The significance of these numbers is obvious.

CHAPTER 4

The Giant Star Cycle

Thus far we have been concerned with the globular clusters and their successors as aggregates of stars. Now we will turn our attention to the individual stars of which these aggregates are constructed. As we saw in Chapter 1, the stars originate as dust and gas clouds. There is no clear line between dust cloud and star. Until comparatively recently stars could be detected only by means of their radiation in the visible range, and this established a lower limit at about 2500 K. During the last few decades instruments of greatly extended range have been developed, and stars of normal characteristics are now being observed down to the neighborhood of 1000 K. Infrared objects of a nature not yet clearly determined, with surface temperatures as low as 300 to 700 K, have been reported.

From theoretical considerations we deduce that at some point after the interior of a contracting cloud of dust and gas has been raised to a high temperature by gravitational energy, a relatively rapid rise in the temperature of the entire aggregate occurs when the destructive limit of the heaviest element present is reached in the central regions, and conversion of mass to energy begins. As explained in Volume II, both the thermal energy of the matter in the star and its ionization energy are space displacements, and when the total of these space displacements reaches equality with one of the rotational time displacements of an atom, the opposite displacements neutralize each other, and the rotation reverts to the linear basis. In other words, both the ionization and a portion of the matter of the atoms are converted into kinetic energy. Inasmuch as all atoms are fully ionized before the temperature limit is reached, and the heavier atoms are capable of acquiring a greater degree of ionization than the lighter ones, the amount of thermal energy required to bring the total space displacement up to the limit is less for the heavier elements. The limiting temperature is therefore inversely related to the atomic mass.

Production of increasingly heavier elements is a continuing process that begins with the original entry of primitive matter from the cosmic sector. The pre-stellar dust cloud therefore contains a small proportion of newly formed heavy elements, together with whatever heavy element content there may have been in the fragments of older matter incorporated from the surroundings. Inasmuch as the entire structure of the cloud is fluid, the heavy elements make their way to the center. As the temperature in the central regions rises, successively lighter elements reach their destructive limits and are converted to energy. Activation of this second energy source necessitates an immediate and substantial increase in the temperature of the aggregate in order to produce enough radiation to reach equilibrium with the greater energy generation. Thus there is not a gradual rise of the surface temperature of the aggregate from the near zero of interstellar space up to the levels recognized as those of stars, but rather a long period of no more than minor warming, followed by a quite sudden jump to the temperature of an infrared star. The objects cooler than 1000 K generally display some peculiar characteristics that distinguish them from normal stars, and make it difficult to draw definite conclusions as to their true nature.

The most significant evolutionary changes that take place in the stars as they grow older can conveniently be shown on a graph in which the luminosity (expressed as magnitude) is plotted against some measurement representing the surface temperature. In its original form, this Hertzsprung-Russell, or H-R, diagram utilized an arbitrary spectral classification as the temperature variable, but the present tendency is to use a color index, which accomplishes the same result. The textbooks still retain the H-R diagram, probably for historical reasons, but the color-magnitude, or CM, diagram is now in general use by the observers.

The CM diagram of the globular cluster M3 is shown in Fig. 3. In this diagram the points representing the magnitudes applicable to the individual stars fall mainly within the crosshatched area. Identification of the locations marked O, A, B, and C has been added to the conventional diagram for purposes of this present discussion. The mass, density, and central temperature of the globular cluster stars are related to the variables of the CM diagram, and although they are subject to modification by other factors, so that they cannot be represented accurately in this two-component diagram, they can be located approximately, and adding them to the framework of the diagram for reference purposes facilitates understanding of the theoretical development.

Accurate measurements of magnitudes in the area of the diagram occupied by the globular cluster stars are difficult to obtain. S. J. Inglis points out that "There is no red giant whose mass we know with any degree of certainty."36 But we can relate these magnitudes to the evolutionary pattern of the stars, and thus arrive at approximations of their values. We know, for instance, that the line BC, the main sequence, is the location of gravitational equilibrium. The stars on this line are therefore at approximately the same density. The density at C is actually greater than that at B by a factor of 3 or 4, because of the compression due to the larger stellar mass, but since the equilibrium densities along the main sequence are more than a million times greater than those in the early portions of area O, the difference between B and C is negligible on the scale of the diagram. We may therefore draw lines parallel to BC and treat them as lines of equal density for analytical purposes. Similarly, the line AB theoretically represents a condition of constant mass. The theory further indicates that the central temperatures are determined by the stellar mass. Lines parallel to AB can thus be regarded as lines of equal mass and central temperature. On the basis of the explanation of the line AC that will be developed in the following pages, this line represents a condition in which condensation of a dust cloud of nearly uniform density is proceeding at a rate determined by gravitational forces. We may call it a line of constant growth.
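The geometry of such auxiliary lines can be illustrated with the standard relation for a spherical radiator, which is independent of any particular theory of the stellar interior. A closely related construction, lines of constant radius, shows why loci of this kind plot as straight, parallel lines on a magnitude versus log-temperature diagram (the text's equal-density lines involve the mass as well, so this is a simplified illustration):

```latex
% Luminosity of a spherical radiator (Stefan-Boltzmann):
%   L = 4 \pi R^2 \sigma T^4
% Taking logarithms and converting luminosity to bolometric
% magnitude, M_bol = C_1 - 2.5 \log L:
\begin{aligned}
\log L            &= C_0 + 2\log R + 4\log T \\
M_{\mathrm{bol}}  &= C_1' - 5\log R - 10\log T
\end{aligned}
% For fixed R, M_bol is linear in log T with the same slope in every
% case, so the constant-radius loci form a family of parallel
% straight lines, the kind of family the text draws parallel to BC.
```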

Fig. 4 is a reproduction of the M 3 diagram with the lines representing these other variables added. These lines provide a good indication of the way in which the several variables are related in different regions of the diagram, and reference to the pattern of this illustration will be helpful in interpreting the CM diagrams that will be introduced later. The relations represented by the auxiliary lines in Fig. 4 apply to the stars of the globular cluster type only. As we will see later, the corresponding relations—the lines of equal mass, for instance—are altogether different for other classes of stars. This is a fact that has not heretofore been recognized, an oversight that is responsible for many errors in the orthodox interpretations of the CM diagrams.

All of the stars of a globular cluster condensed from the same dispersed aggregate of primitive material, but the conditions affecting the rate of condensation varied, and the evolutionary stages of the stars therefore differ. Consequently, the stars of a cluster such as M3 are spread out over a range of the stellar evolutionary pattern on the CM diagram. The earliest of the visible stars are the coolest but, by reason of the immense area from which they are radiating, their luminosity is relatively high. These stars therefore occupy positions in the upper right of the diagram, in the general area marked O. The remainder of this chapter will give a general description of the paths that these stars follow when they leave this area. Further details will be added in Chapter 8, after some additional groundwork has been laid.

The stars of these globular clusters exist in two size ranges. The great majority are small, in the neighborhood of the solar mass or below. Another portion of the total consists of stars that are substantially larger. We can identify the latter as stars that had a fragment of preexisting material as a nucleus for condensation of the pre-stellar dust and gas cloud. The smaller stars are those that did not enjoy this advantage. The fragments incorporated into the stars were usually small, as the explosions that scattered them into space were violent enough to reduce the greater part of the original structure to dust, gas, and small aggregates. The growth of the stellar structure follows essentially the same course whether or not it contains a small fragment as a nucleus. The important difference is that it takes a very long time to build a dust particle up to an aggregate of fragment size. A pre-stellar aggregate that has a fragment to start with therefore has a big head start over those that have to build all the way from dust particles, and it is able to establish gravitational control over a larger volume of the protocluster. Thus, even though the stars of both of these groups are nearly alike at their points of origin in area O, those of one group have a much greater potential for growth.
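To put numbers on the earlier statement that the coolest, most diffuse stars in area O are nevertheless highly luminous, here is a minimal sketch applying the same radiation relation. The radius of 1,000 solar radii chosen for the diffuse star is an illustrative assumption, not a figure from the text:

```python
# Luminosity of a cool but very diffuse star relative to the sun,
# using L = 4*pi*R^2*sigma*T^4 (taken as a ratio, so constants cancel).
# The 1000 R_sun radius is an assumed, illustrative value.
T_SUN = 5772.0        # K, solar effective temperature
T_star = 1000.0       # K, an infrared star near the detection threshold
R_ratio = 1000.0      # assumed radius, in solar radii

L_ratio = R_ratio**2 * (T_star / T_SUN)**4
print(f"L/L_sun = {L_ratio:.0f}")   # roughly 900: cool, yet luminous
```

A star only a sixth as hot as the sun at the surface still radiates hundreds of times the sun's output if its radiating area is large enough, which is why the early protostars sit in the upper right of the diagram.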

The supply of dust and gas available for capture is, in effect, exhausted for the first group by the time they reach the vicinity of point A. These stars then cease to grow, and they no longer continue on the path OC. Instead they make a sharp turn and move downward on a relatively steep slope, reaching gravitational equilibrium on the main sequence at point B. Along the path AB the gravitational contraction continues, but because the mass is no longer increasing, the central temperature remains approximately constant. The decrease in the size of the radiating surface results in an increase in the surface temperature, but coincidentally the corresponding increase in density increases the resistance to the flow of heat from the center of the star to the surface. These two oppositely directed processes just about counterbalance each other, and the net result, including the effect of the energy contributed by the contraction, is a small increase in surface temperature. The combination of a decrease in the radiating surface and a relatively small temperature change results in a rapid decrease in the luminosity.

With the benefit of this information as to the nature of the changes that take place along the evolutionary path OAB of the small stars, it can now be seen that the stars on the path OAC are subject to the same factors, except that there is a continuous addition of more matter, and a consequent increase in the central temperature. As a result, the increase in surface temperature is much greater than that along the line AB, and the decrease in luminosity is smaller, leading to a nearly horizontal movement across the CM diagram.

Arrival at the main sequence, at either point B or point C, eliminates any further generation of energy from gravitational contraction. Each star then has to establish a thermal equilibrium on the basis of the atomic energy generation alone. For this purpose it moves up or down the main sequence to the point where the dissipation of energy by radiation is in balance with the energy production. The main sequence is the location where the stars spend most of the latter part of their lives. It has been estimated that about 95 percent of the observable stars are on this sequence (although it should be understood that the observable stars do not constitute a representative sample of the stars as a whole).

For convenient reference in the subsequent discussion we will designate the stars on the evolutionary paths OAB or OAC as Class A, and those of the main sequence as Class B. The stars of Class A and Class B coincide, in general, with those currently called Population II and Population I respectively. The reason for the reversal of the sequence is that it puts the classes into the correct evolutionary order. Since the younger stars are those currently called Population II, the A classification is more appropriate.

In the context of the star and cluster formation process deduced from the postulates that define the universe of motion, the foregoing explanation of the CM diagram of the globular clusters is essentially self-evident, but the astronomers cannot take this simple and logical view of the situation. They did so in an earlier era, but they have changed their ideas. As one author states, "Present knowledge has forced a nearly complete reversal of this view." This "knowledge," he says, is partly observational and partly theoretical.
The "observational" items that he cites are (1) "red giants are common in globular clusters and elliptical galaxies, systems which are known to be of great age . . . and in which star formation has ceased countless ages ago," and (2) "red giants do not appear in greater numbers in the nebulous regions of the Galaxy, as they would certainly do if they had been formed recently from the great gas and dust clouds of space."37

As can easily be seen, these so-called "observational" items are, in fact, purely theoretical. Their application to the points at issue depends entirely on the prevailing theories of stellar formation and of stellar ages. As long as the astronomers were basing their conclusions on the evidence from their own field, they arrived at an understanding of the evolutionary course of the globular cluster stars very similar to that which we now derive from the Reciprocal System of theory. But it became evident that this conclusion is inconsistent with the physicists' contention that the stellar energy is generated by the hydrogen conversion process (this is the "present knowledge" cited in the quotation above). This pure assumption by the physicists is the only basis for the assertion that the globular clusters "are known to be of great age." There is no astronomical basis for that conclusion. But since the astronomers are unwilling to challenge the physicists' assertions, they have, as indicated in the quoted comment, "completely reversed" their own ideas, and have accommodated their theories to the requirements of the hydrogen process. On this basis, the stars of the globular clusters are old stars.

The evolutionary path obviously has to start in region O of the diagram, since the protostars are necessarily diffuse and cool. It is generally recognized that the red giant stars of the globular clusters are stars of the same type as the protostars. Shklovsky, for instance, concedes that the massive protostars in a late stage of their evolution "have all the characteristics of giant stars."38 But since the astronomers now see the red giants of the globular clusters as old stars, they cannot accept the conclusion that these are identical objects. As a consequence of this inability to recognize the identity, astronomical theory first has to put the stars through the evolutionary process as protostars, and then, after a hypothetical sojourn on the main sequence, bring them back for another experience as giant stars. These giants then have to make their way, in some as yet unexplained manner, directly from their position in region O of the CM diagram to the region of the early white dwarfs, which is located in the diametrically opposite corner of the diagram. As expressed by L. H. Aller in an understatement of classic proportions, "the details of its [the giant star's] evolution are uncertain."39

When the stars of the globular clusters and dwarf galaxies are recognized as relatively young objects, only one step beyond the dense dust cloud, or protostar, stage, the necessity for these contortions in the theoretical evolutionary path is eliminated. The infrared protostars are precursors of the red giants; they are already giants and on the way to becoming red. From this cool and diffuse state they follow one or the other of the two alternate paths to gravitational equilibrium on the main sequence.

After a star has achieved both gravitational and thermal equilibrium, and has settled down to a somewhat stable condition, its subsequent course depends on the environment. If this environment is relatively free from dust and gas, the star may not be able to generate enough energy to replace that lost by radiation, because of a shortage of heavy elements. In that case it moves slowly down the main sequence to the point where the radiation has been reduced enough to balance input and output.
Whether or not this movement ever continues far enough to lower the central temperature below the lowest destructive limit, so that the star loses its energy supply and ceases to be a star, is not clearly indicated at the present stage of the theoretical development. As matters now stand, however, it seems probable that any aggregate that is once able to attain the stellar status on the main sequence will remain a star. The continual replenishment of the supply of heavy elements by means of the atomic building process described in Volume II is an important factor in this situation. It plays a major role even where there is a significant amount of accretion, as there is only a very small proportion of heavy elements in the accreted matter. Since the amount of atom building is proportional to the mass of the aggregate, the same rate of heavy element formation that maintains the stellar status of the smaller stars is sufficient to add materially to the fuel supply of a larger star. The automatic reduction in the amount of radiation which takes place in response to a decrease in the generation of energy enables a star to adjust to a rather wide range of environmental conditions, and since changes in these conditions occur only on an extremely long time scale, many of the main sequence stars maintain approximately the same pattern of thermal behavior for extended periods of time (fortunately for the human race).

But accretion from the environment plays a very important part in the general evolutionary picture. In the globular clusters the growth comes entirely, or almost entirely, from the remaining portions of the original pre-stellar dust and gas cloud. But accretion of matter also takes place from whatever environments the stars enter after consolidation of the original dust and gas is complete. Such accretion is common in the post-globular cluster stages, and has a significant effect on many astronomical phenomena, as we will see in the pages that follow. For reasons that will be discussed in Chapter 8, the accretion by the average star in the outer regions of a spiral galaxy exceeds the losses due to radiation, and this star therefore moves up the main sequence. Stars in regions of greater dust and gas concentrations evolve still more rapidly, and the process also speeds up as the stars become more massive, since the stronger gravitational forces draw material from larger regions of space. As the stars increase in mass, the central temperatures increase accordingly, and successively higher destructive limits are reached, making additional elements available as fuel for the energy generation process.

Since none of the heavy elements is present in more than a relatively small quantity in a region of minimum accretion, the availability of an additional fuel supply due to reaching the destructive limit of one more element is not sufficient to cause any significant change in the energy balance of the stars in the lower half of the main sequence. The rate of accretion increases as the stars move up the sequence, but because of the corresponding increase in mass and total energy content, they are able to absorb greater fluctuations. The main sequence stars are therefore relatively quiet and unspectacular as they gradually make their way along the evolutionary path.

The chemical composition of the stars and the distribution of elements in the stellar interiors are debatable subjects, but the deductions that have been made from the principles established in the earlier development of theory do not conflict with actual observations; they merely conflict with some interpretations of those observations. While the gravitational segregation of the stellar material which theoretically puts a high concentration of the heavier elements into the central core is not entirely in agreement with current astronomical thought, it should be emphasized that such a segregation is the normal result in a fluid medium subject to gravitational forces, and a theory which requires the existence of normal conditions is never out of order where the true situation is observationally unknown. Furthermore, even though the conclusions that have been reached as to the amount of heavy elements present in the stellar interiors are beyond the possibility of direct verification, it will be brought out in the subsequent discussion of the solar system that some strong evidence as to the internal constitution of the stars can be obtained from collateral sources.

Current ideas as to stellar composition are based almost entirely on spectroscopic information. These data are useful, but they have a limited applicability, as they only tell us what conditions prevail in the outer regions of the stars. Even from this restricted standpoint the evidence may actually be misleading, as the spectroscopic results are affected to a significant degree by the character of the material currently being picked up through the accretion process. The observed differences in the stellar spectra that can be attributed to variations in chemical composition are probably more indicative, in many cases, of the environments in which the stars happen to exist at the moment than of the true composition of the stars themselves.

The presence of substantial amounts of elements such as technetium, for example, in the outer regions of some stars poses a formidable problem if we are to regard this as an actual indication of the composition of the stars. It is doubly difficult for present-day astronomical theory. If the technetium is manufactured in the regions of maximum temperature in the center of each star, in accordance with the majority opinion at the moment, there is a serious problem in explaining how this material gets to the surface against the density gradient. L. H. Aller makes this comment: "How the star gets the heavy elements from the core to the surface without exploding provides an impressive challenge to theoreticians."40 Shklovsky regards this emergence from the central regions as impossible, and contends that "Only nuclear reactions in the surface layers of the stars can account for the presence of technetium lines in type S stellar spectra."41 But this merely replaces one question with another. Just how the conditions necessary for initiating atomic reactions can be attained in these surface layers is an equally difficult problem. On the other hand, the technetium content at the surface of the star is easily explained on the basis that the observed amounts of this material have been derived from the captured material. This element is stable, according to the findings detailed in Volume II, wherever the magnetic ionization level is zero, and relatively heavy concentrations could be produced in areas that are left undisturbed for long periods of time.

As indicated earlier, the gradual and uneventful progress of the growing stars up the main sequence is due to the relatively small size of the increments of energy that result from the attainment of the destructive limits of successively lighter elements.
When the destructive limit of nickel is reached, there is a change in the situation, as this element is present, both in the stars and in the interstellar matter, in quantities that are substantially greater than those of any heavier element. It could be expected, then, that the attainment of this temperature limit would result in some observable enhancement of the thermal activity of the stars that are involved. Such increased activity is observed in a special class of stars located near the top of the main sequence. These Wolf-Rayet stars are somewhat less massive than the stars of the O class, the highest on the main sequence, but they have about the same luminosity, and they are associated with the O stars in the disk of the Galaxy. Their principal distinguishing characteristic is a very disturbed condition in their surface layers, with ejection of material that forms an expanding shell around each star. These special conditions lead to the existence of a distinctive spectrum. It appears probable that the Wolf-Rayet star is the one whose central temperature has reached the destructive limit of nickel. We may interpret its observed characteristics as indicating that arrival at this temperature limit has resulted in an increase in the production of energy that is large enough to cause violent internal activity, and ejection of matter from the star, without being enough to initiate a full-scale explosion. On this basis, the star remains in the Wolf-Rayet condition until the greater part of the nickel is consumed. It then resumes accreting mass (probably picking up most of what was ejected) and reverts to the O status.

The foregoing comments on the Wolf-Rayet stars apply only to those known as Population I Wolf-Rayets. The Wolf-Rayet designation is also applied to some of the central stars of planetary nebulae, but there is little justification for putting these two groups of stars into the same class. This issue will be discussed in Chapter 11.

When the temperature corresponding to the destructive limit of iron is reached, the situation is more drastically changed. This element is not limited to very small quantities, or even to moderate quantities like the nickel content. It is present in concentrations that represent an appreciable fraction of the total stellar mass. The sudden arrival of this quantity of matter at its destructive limit activates a source of far more energy than the star is able to dissipate through the normal radiation mechanism. The initial release of energy from this source therefore blows the whole star apart in a tremendous explosion. According to current estimates, iron is more than 20 times as abundant in the stars as nickel. If the amount of nickel is sufficient to bring the star to the verge of an explosion, as the behavior of the Wolf-Rayet stars appears to indicate, the amount of iron is far more than is needed in order to cause an explosion. The explosion thus takes place as soon as the first portions of this element are converted into energy. The remainder, together with the overlying lighter material, is dispersed by the explosive forces.

The carry-over of material from one cycle to the next enables the amount of iron and lighter elements to continue building up as the age of the system increases, whereas the heavier elements have to start from scratch after the explosion, except for some limited quantities of the elements close to iron that have escaped destruction. This accounts for what George Gamow called the "surprising shape of the empirical curve [of abundance of the elements],"42 the existence of distinctly different patterns above and below iron.

The explosion that theoretically occurs at the destructive limit of iron is consistent with observation, as it can be identified with the observed phenomenon known as a Type I supernova.
However, the characteristics of the supernova explosion, as derived from theory, conflict, in some respects, with current astronomical opinion. One of these conflicts concerns the kind of stars that are subject to becoming Type I supernovae. Inasmuch as the temperature of a star is a function of its mass, the temperature limit at which the explosion takes place is also a mass limit. According to our theory, then, the stars that reach the destructive temperature limit and become Type I supernovae are hot massive stars, and they are all nearly alike.

The astronomers concede the existence of a stellar mass limit. Since there is a recognized relation between stellar mass and temperature along the main sequence, the existence of a mass limit carries with it the existence of a temperature limit, as required by the theory of the universe of motion. Neither limit has an explanation in terms of conventional astronomical theory, and the observed cut-off in the mass distribution function was unexpected. "It is a surprise," say Jastrow and Thompson, "that there also appears to be an upper limit to the mass of a star."43 These authors put the limit at about 60 solar masses. Other observers place it at about 100. The astronomers also admit that all Type I supernovae are very much alike. The observations of these phenomena are thus consistent with our theoretical findings.

Furthermore, the temperature limit can be reached in any galaxy, and Type I supernovae should therefore occur in all classes of galaxies. They are the only kind that can occur regularly, according to our findings, in elliptical and small irregular galaxies. Spirals, such as our Milky Way, and the giant spheroidal galaxies contain both Type I and Type II supernovae, the latter resulting from a different kind of stellar explosion that we will examine in detail in Chapter 16. As we will see there, the Type II explosion is the result of reaching an age limit. Except where some stray old star has been picked up by a young aggregate, stars cannot reach the age limit in young galaxies. This accounts for the observed restriction of the Type II supernovae to the older and larger galaxies.

All that is known about the Type I supernovae is thus entirely consistent with the theory of the universe of motion. On the other hand, the observations are almost totally inconsistent with conventional astronomical theory. The astronomers have been almost completely baffled by the supernova phenomenon. Most investigators are reluctant to admit that they are up against a blank wall, and tend to describe the situation in ambiguous terms, such as the following, taken from a recent report on one aspect of the supernova problem: "The exact mechanism by which a star becomes a supernova is not yet known."44 The insertion of the word "exact" into this statement implies that the general behavior of the supernovae is understood, and that only the details are lacking. But the truth is that the astronomers have nothing but speculations to work with, and some of the more candid observers admit this. R. P. Kirshner, for instance, concedes that the "models" thus far proposed for the origin of supernovae are no more than speculative, and adds this comment: "The train of events leading to a supernova of Type I is more mysterious than that leading to one of Type II, since a Type I supernova is expected to be the explosion of a star about as massive as the sun. Since such a star can comfortably settle down to being a white dwarf, something unusual must happen for it to explode as a supernova."31

This is a good example of the problems in astronomy that have been created by the elevation of the physicists' assumption as to the nature of the stellar energy process to a status superior to that of the astronomical observations. As Kirshner brings out in his statement, the Type I supernova is mysterious not so much because little is known about it, but because that which is known from observation conflicts with two items that are "known" from deductions based on generation of energy by the hydrogen conversion process. The conclusion that a star of about one solar mass can "comfortably settle down to becoming a white dwarf" is wholly dependent on the status of the red giants as old stars. This, in turn, is based entirely on the assumption as to the nature of the energy generation process. The further conclusion that these "old" red giants develop into white dwarfs rests on the equally unsupported assumption that the white dwarfs are still older than the red giants, and that there must be some progression from one to the other. The astronomical evidence disproving these assumptions will be presented at appropriate points in the subsequent pages. The fact now being emphasized is that Kirshner's "mystery" is simply a conflict between the astronomical observations and the consequences of the physicists' assumption that the astronomers accept as gospel.

The same conflict exists with respect to the other item of "knowledge" cited by Kirshner, the identification of the Type I supernova with the explosion of a star of about one solar mass. This is another conclusion that rests entirely on the physicists' hydrogen conversion hypothesis. On the basis of this hypothesis, it has been concluded that the stars of the elliptical galaxies and small irregulars are very old. Conventional theory indicates that the more massive stars (which, according to the theory, are short-lived) would have been eliminated from these old aggregates by evolutionary processes. The deduction, then, is that "before their outburst type I supernovae were very old stars whose mass was at most only slightly (say 10 to 20 percent) greater than the mass of the sun."45 But this does not fit into the rest of conventional astronomical theory at all. As P. Maffei puts it, "This result has caused some problems to theoreticians."46 Kirshner points out that the supernova explosion is not the fate that present-day theory predicts for the small stars. Furthermore, the identification of the supernovae with the small stars, whose mass varies over a wide range, leaves the theory without any explanation for one of the few things about the Type I supernovae that definitely is known: these explosions are all very much alike.

In the light of the points brought out in the foregoing paragraphs, it is evident that the astronomers cannot legitimately claim to have a tenable theory of supernovae. In this case, then, as in so many of the others that have been, or will be, discussed in this volume, the deductions from the theory of the universe of motion are simply filling a vacuum, providing explanations that conventional astronomical theory has been unable to supply.

CHAPTER 5

The Later Cycles

Only a relatively small proportion of the mass of the star needs to be converted into energy in order to produce the Type I supernova explosion. The remainder, constituting the bulk of the original mass, is blown away from the explosion location at high speeds.

We therefore find the site of such an explosion surrounded by a cloud of material moving rapidly outward. The prevailing view is that the entire mass is dispersed into interstellar space. As expressed by Shklovsky, "The gaseous material expelled during the outburst forever breaks its connection with the exploding star and travels out into interstellar space, interacting with the interstellar medium."47 In this particular case he is referring specifically to supernovae of Type II, but his subsequent comments make it clear that these remarks apply to Type I as well. It is evident that a large part of the matter ejected into space is actually dispersed in this manner, but there is likewise a significant part of the total that does not escape. As we will see in Chapter 6, the matter in the central portion of the star does not participate in the expansion into space. Because the speeds generated by the explosion are distributed over a wide range, another substantial portion of the ejected mass is restricted to relatively moderate outward speeds.

One factor that has a bearing on this situation is that the Type I explosion takes place in the center of the star rather than throughout the structure. Consequently, much of the ejected material does not come out in the form of finely divided debris, but consists of portions of the outer sections of the star. These are ejected in aggregates of various sizes, what we would call fragments if we were dealing with solid matter. Such quasi-fragments have lower initial velocities than the small particles or individual atoms, since the acceleration imparted by a given pressure decreases as a function of the mass, where the density is uniform. They also expand quickly from their highly compressed initial state, which reduces their temperature drastically and makes them invisible. The visible portions of the Type I supernova remnants are mainly the fastest particles.

During their outward travel, these explosion products are subject to the gravitational effect of the total mass until the fastest components reach the gravitational limit, and to a gradually decreasing effect thereafter. It follows that the slower components are subject to gravitational retardation, as well as to some resistance from the interstellar medium, for a very long period of time. If we take the previously cited figure of 60 solar masses as the size of the exploding star, and assume that a third of the total mass goes into energy, the outer portions of the explosion products are subject to the gravitational effect of 40 solar masses. In Chapter 14 we will develop an equation for calculating the gravitational limit, and from this equation we will find that the gravitational limit of an aggregate of 40 solar masses is 23 light years, or 7 parsecs. The radii of the observed Type I supernova remnants in the Galaxy average about 5 parsecs.48 Thus the expansion of these remnants, great as it has been, has not even taken the fastest of the explosion products beyond the gravitational limit of the aggregate as yet. Clearly, many of the slower products cease moving outward long before they reach the gravitational limit of the remaining mass.

At this stage, where the expansion ceases, there is a cloud of cold and very diffuse material occupying a tremendous expanse of space. But unlike the large dust and gas clouds in the galactic arms, this material is under gravitational control.
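The arithmetic behind the figures just quoted can be checked directly; the only outside input needed is the conversion 1 parsec = 3.26 light years. The equation for the limit itself is left for Chapter 14; only the numbers given above are used here:

```python
# Checking the figures quoted in the text.
initial_mass = 60                      # solar masses (the cited mass limit)
remaining = initial_mass * (1 - 1/3)   # one third converted to energy
print(f"remaining mass: {remaining:.0f} solar masses")       # 40

LY_PER_PC = 3.26       # light years per parsec
limit_ly = 23          # gravitational limit quoted for 40 solar masses
print(f"gravitational limit: {limit_ly / LY_PER_PC:.1f} pc") # about 7

# Observed Type I remnant radii average about 5 pc, i.e. still inside
# the 7 pc gravitational limit, as the text concludes.
```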
The gravitational effect of the mass as a whole on each individual particle is small because of the huge distances involved, but a net gravitational force does exist, and once the expansion has ceased, a contraction is initiated. Another long interval must elapse while this initially minute force does its work, but ultimately the constituent particles are pulled back to

where the internal temperature of the mass can rise enough to reactivate the energy generation process, and the star is reborn. This star is now back in area O of the CM diagram, first as an infrared star, and later, as it contracts and increases in temperature, as a red giant. This red giant resembles the first generation of stars of the same type, but it is not identical with them. It has gone around the cycle and through the explosion process, and has undergone some modifications in so doing.

The most significant respect in which the new stars of the second cycle differ from their counterparts of the first cycle is that the second cycle star has a gravitationally stable core. The first cycle star condensed from a practically uniform dispersed aggregate. As noted earlier, some of these stars had nuclei on which to build, but only in rare instances is this anything more than a small fragment. Thus, until it reaches the critical density, such a star is simply a contracting dust and gas cloud. On the other hand, the aggregate of matter from which the second cycle star condenses is heavily concentrated towards the center, the site of the supernova explosion. The gravitational contraction therefore proceeds much faster in this central region, and a large part of the mass of the star reaches a state of gravitational equilibrium by the time that the atomic energy process is initiated. The newly formed second cycle star is thus a two-component system, a stable core with a large contracting outer envelope.

In this combination structure, the luminosity is determined by the amount of energy generated. This, in turn, depends on the mass, which is concentrated mainly in the core. But the surface temperature corresponding to a given luminosity depends on the volume of the star, and this is mainly the volume of the envelope. Thus the surface temperature of the early second cycle star is similar to that of an early first cycle star, while the luminosity is similar to that of a main sequence star. Instead of being concentrated in one region in the upper right of the CM diagram in the manner of the early first cycle stars, the early stars of the second cycle occupy a band along the right of the diagram similar to the upper part of the main sequence on the left. We will designate this type of star as Class C. Adding the number of the cycle, these stars of the second cycle are Class 2C stars.

After the initial movement downward from region O to a position determined by the stellar mass, the evolution of the Class 2C stars, resulting from continuation of the process of condensing the outer envelope, leaves the luminosity practically unchanged, but the surface temperature increases because of the reduction in the size of the radiating surface. This second cycle star thus moves almost horizontally across the CM diagram if it is in a region of minimum accretion. Any further accretion that takes place puts the terminal point, the location at which the star reaches gravitational equilibrium, higher on the diagram.

The evolutionary paths of the Class C stars are therefore totally different from those of Class A, the stars of the first cycle. The Class C pattern is illustrated in Fig. 5. The numbers shown with the names of the prominent stars identified in the diagram are the masses in solar units. As can be seen from these values, the mass scale for the Class C stars on the right of the diagram is practically identical with that of the Class B (main sequence) stars on the left.
The line XY then represents the evolutionary path of a star of about five solar masses that is accreting only the remnants of its original dispersed matter. If the star condenses within a dust cloud, or enters such a cloud before the consolidation of the diffuse matter is complete, the increase in mass by accretion from the cloud moves the star upward on the diagram, and the resulting path is similar to the line XZ.

It should be noted that although the evolutionary path of the Class C stars in the CM diagram is quite different from that of the Class A stars, and the significance of positions in the diagram, in terms of variables other than temperature and luminosity, is also quite different, the result of the evolutionary development is the same in both cases. The evolution carries the stars from a cool and very diffuse condition in region O of the diagram to a position on the main sequence that is determined by the stellar mass. And it accomplishes the movement by means of the same process in both cases, gravitational contraction, a process that is known to be operative under the existing conditions.

In sharp contrast to this straightforward gravitationally powered process, conventional astronomical theory offers a bizarre succession of twists and turns that attempt to reconcile the observational data with the upside down evolutionary sequence based on the purely hypothetical hydrogen conversion process as the source of stellar energy. As already noted, this theory requires a movement from region O, the red giant region of the CM diagram, to the main sequence, but then finds it necessary to reverse the movement and bring the stars back to the red giant region again. The theorists have not been able to define this reverse movement without making the mass of the star an independent quantity. They have therefore abandoned any systematic connection between mass and position in the diagram, aside from that which exists along the main sequence. As Shklovsky puts it, the stars move on the diagram "in a rather meandering fashion."49

This assumption that the temperature and luminosity of a star can be totally independent of the mass is another inherently improbable hypothesis. Both of these quantities are determined by the mass along the main sequence, and the idea that the connection is completely severed under other conditions is unrealistic. Furthermore, it runs into an obvious difficulty when the hypothetical evolutionary line again intersects the main sequence on the road from red giant to white dwarf.

If we examine the hypothetical evolutionary path without regard to its "meanderings," what we find is a "turnoff" from the main sequence at a point asserted to be determined by the mass of the star, a horizontal movement to the right, and then a turn upward that continues on a diagonal line to the red giant region. From there the path extends back to the left along a rather indefinite horizontal course. A diagram that purports to demonstrate the agreement between this theoretical pattern and the observations accompanies almost all discussions of the subject in astronomical literature. This is a composite diagram, combining the CM diagrams of a number of star clusters. (See, for instance, reference 50.) Aside from the question as to the direction of movement, which cannot be determined from observation, the hypothetical evolutionary path agrees, in general, with the CM diagram of the globular clusters. It could hardly do otherwise, since it was deliberately designed to fit the globular cluster pattern. The agreement of the composite diagram with the hypothetical evolutionary pattern is therefore significant only to the extent that there is agreement in the case of clusters other than those of the globular cluster type.

In Chapter 10 we will find that some clusters, such as M 67 and NGC 188, that are classified as open clusters are actually fragments of globular clusters that have not yet lost all of their globular characteristics. To arrive at the true significance of the composite diagram we need to eliminate the clusters of this type, as well as the normal globular clusters, and examine the extent of agreement between the remaining open clusters and the theoretical pattern. When we do this we find that there is no correlation whatever. These clusters have stars along the main sequence, and in the immediate vicinity thereof, and one of them also contains some red giants. But there is no trace of the evolutionary pattern that the diagram is supposed to corroborate. The evidence that is asserted to support the contention that the stars of the open clusters "evolve off the main sequence" simply does not exist.

A recognition of the true evolutionary pattern, as derived from the theory of the universe of motion, makes it possible to understand the real meaning of the association of certain kinds of stars with dust clouds that has led to the belief that the stars are being formed within the clouds. Two such types of association are recognized. O associations are composed of stars of the O and B types, the largest and hottest of all stars. T associations are groups of stars of the T Tauri class, much smaller and cooler than the O and B stars. "Often, but not always, the T-associations coincide with O-associations."51 The prevailing belief that the hot massive stars are young leads to the conclusion that they were formed somewhere near their present locations.
Taken together with the observed association between the O and B stars and nebulosities, this indicates that the stars of the O associations have been formed by condensation of portions of the gas and dust clouds in which they are now located. This hypothesis is currently accepted by most astronomers, but, as brought out in Chapter 1, they are unable to explain how stars can be

formed from clouds of such low density. "This process," says Simon Mitton, "is almost a total mystery."52 Development of the theory of the universe of motion does not provide any way whereby the dust and gas clouds of the Galaxy can condense into stars. On the contrary, it identifies still another force opposing such a condensation, the force due to the outward progression of the natural reference system, and it indicates that condensation cannot take place unless the clouds are either very much larger or very much denser than anything that exists in the Galaxy. However, it is clear from the information brought to light by this development that what is actually happening is accretion of matter from the dust and gas clouds by previously existing stars. These stars already in existence are not limited by the factor that prevents dust and gas particles from condensing into a stellar aggregate under galactic conditions: the net motion of each particle outward away from all others. All particles within the gravitational limit of an existing star have a net inward motion toward the star, and are on the way to capture.

The clouds of dust and gas in the Galaxy are subject to forces that tend to spread them out and dissipate them. It follows that the identifiable clouds are relatively recent acquisitions by the Galaxy. As such, they are associated mainly with the relatively recent stellar acquisitions, the Class 1A stars. As we have seen, these stars are initially divided into two groups, a large group of small stars that reach gravitational equilibrium in the lower portion of the main sequence, and a smaller group of large stars that reach equilibrium well above the midpoint of that sequence. We can therefore expect the products of accretion to existing stars from the gas and dust clouds to be of two kinds, one group of hot massive stars and one group of small and relatively cool stars. These two groups required by the theory can obviously be identified with the O associations and the T associations respectively. Both groups contain some of the Class 2 stars that have been mixed with the Class 1 population since entry of the younger stars into the Galaxy.

The positions of the O and T associations in the CM diagram are entirely consistent with the accretion explanation. The upper portion of the main sequence, in which the O stars are located, cannot be reached without some accretion from the environment. The largest Class 1 stars reach the main sequence considerably below this level, and the Class 2 (and later) red giants, reconstituted from part of the matter of the O type star that exploded, are necessarily somewhat less massive than the O stars. "There are no super red giants which would correspond to the evolution of an O-type star."36 (S. J. Inglis) The stars of all classes therefore have to grow at the expense of their environment in order to reach the O status.

The T Tauri stars are found in a location generally described as "above" the lower portion of the main sequence. Inasmuch as this sequence runs diagonally across the diagram, it is equally correct to say that these stars are located somewhat to the right of the main sequence. As can be seen from Fig. 5, this is consistent with an ongoing accretion from the surrounding cloud of dust and gas. A star that is accreting substantial quantities of such material is in essentially the same condition as one that is consolidating the final remnants of the material dispersed by a supernova explosion.
As we saw earlier, a star of this latter type (Class 2C) is moving horizontally across the CM diagram from right to left. In the latter part of this movement it occupies a position similar to that in which the

T Tauri stars are found. The T Tauri position is thus in full accord with the accretion explanation. The observation that there are "erratic changes in brightness"53 of these stars is also consistent with the finding that they are accreting material from the environment in substantial, and probably variable, quantities.

Let us now take a closer look at the pattern of events in the interior of an aggregate that has just become a star (of any cycle) by activating the atomic disintegration process as a source of energy. The additional energy thus released causes a rapid expansion of the star. This expansion has a cooling effect, which is most pronounced in the central regions, and as the temperature in these regions drops below the recently attained destructive limit, the energy generation process itself is shut off, accentuating the cooling effect. Eventually this cooling stops the expansion and initiates a contraction of the star, whereupon the temperature again rises, the destructive limit is once more reached, and the whole process is repeated. A newly formed star of either Class A or Class C is therefore variable in the amount of its radiation, an intrinsic variable, as it is called, to distinguish it from stars of the class whose variability is due to external causes. "Almost all these stars [those below 1700 K] are, as we had expected, long period variables,"54 report Neugebauer and Leighton, pioneer investigators of the infrared stars. Not all cool stars are young, but an old cool star has had time to reach gravitational equilibrium, and it is therefore small, whereas the young stars are still very diffuse (such a star has been described as nothing but a red hot vacuum), and hence they are very large. Inasmuch as they are radiating from a much larger area, their total radiation is much greater than that of old stars of the same surface temperature. The bright infrared stars are thus the newly formed variables.

The length of the cycle, or period, of a variable star depends on the relation of the magnitude of the energy released by atomic disintegration to the total energy content of the star. When the star is very young, and its temperature is barely above the stellar minimum, the rate of energy generation is large compared to the total energy of the star, and the swings from the "on" to the "off" position of the energy production mechanism are relatively large. Such stars are therefore long-period variables. As a star grows older, its temperature and energy content increase, because the average energy production exceeds the radiation in this stage of the evolutionary cycle. The fluctuations in the rate of energy production thus represent a constantly decreasing proportion of the total energy of the star. Both the period and the magnitude of the variation (measured in terms of percent change in radiation) therefore decrease with time.

As the average temperature of the star rises, a point is eventually reached at which the temperature in the central regions during the low phase of the cycle no longer drops below the destructive limit of the heaviest element present. But this does not put an end to the variability, because by this time, or very soon thereafter, the high point of the temperature cycle reaches the destructive limit of the next lighter element, and generation of energy by destruction of this element takes place in the same kind of an on and off cycle.
The fluctuations never cease entirely, but they decrease in magnitude, and are no longer evident observationally after the temperature stabilizes, or when the total energy of the star becomes so large that the effect of the variations is negligible on the scale of the observations.
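The on-and-off mechanism described above behaves like a thermal relaxation oscillator, and its qualitative features can be seen in a toy simulation. Everything here, including the threshold switching, the parameter values, and the linear heating and cooling rates, is an illustrative assumption introduced for this sketch, not a model given in the text:

```python
# Toy relaxation oscillator: energy generation switches on when the
# central temperature reaches the "destructive limit" and off once
# expansion cooling has pulled it back below the limit. All parameter
# values are illustrative assumptions, not figures from the text.
T = 0.95            # central temperature, in units of the destructive limit
generating = False
last_on = None

for step in range(200):
    if not generating and T >= 1.0:
        generating = True            # limit reached: energy release begins
        if last_on is not None:
            print(f"cycle length: {step - last_on} steps")
        last_on = step
    elif generating and T < 0.98:
        generating = False           # cooled below the limit: release stops
    # on: energy release drives expansion, which cools the center;
    # off: gravitational contraction heats it again
    T += -0.002 if generating else 0.001
```

Narrowing the hysteresis band in this sketch (switching off at, say, 0.995 instead of 0.98, so that each swing is a smaller fraction of the star's energy content) shortens the cycle and reduces its amplitude, which parallels the text's statement that both the period and the magnitude of the variation decrease as the star's total energy grows.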

One star, the sun, is so close to us that even small variations in energy production should be detectable. This subject has not yet been studied in the context of the universe of motion, but some aspects of the sun's behavior are known to be variable. The observed fluctuations in the sunspot activity are particularly noticeable. The origin of these spots is unknown, but no doubt they are initiated in some manner by the energy production process. Hence they may be giving us an indication of the variations in the output of that process that would be expected from the periodic changes. There are also some relatively long range variations, such as the decrease in energy output that caused the Little Ice Age in the seventeenth century and the increase that is producing the gradual warming in the twentieth, which may be due to variations in the nature or amount of the material accreted from the environment. In any event these are subjects that warrant some investigation. It is possible that such an investigation can be extended to more distant stars not currently classified as variables. Some observations of "variations in activity similar to the sun's 11-year sunspot cycle" in nearby stars have already been made.55

The theoretical explanation of the process whereby the heavier elements are built up, as set forth in Volume II, defines it as a continuous capture process that is taking place throughout the entire extent of the material sector. In the primitive aggregate of diffuse material, and in the early dust and gas clouds, the magnetic ionization level is zero, which means that there is no obstacle to the formation of any of the 117 possible elements. The time spent in this first of the evolutionary stages is so long that all of the elements are represented in the constituent dust of the clouds by the time the protostar stage is reached. Inasmuch as the build-up of the atomic structure is a step-by-step process, the initial abundance of the elements is an inverse function of the atomic mass (with some modification by other factors), but even a small amount of the very heavy elements is sufficient to initiate the atomic disintegration that provides the increment of energy which raises the dense dust cloud to stellar status. By the time the initial supply of heavy elements is exhausted, the stellar fuel has been replenished by accretion of material from the environment, and by the continued operation of the atomic building process. All accreted matter has some heavy element content, but the addition to the fuel supply is not limited to this amount. Any matter that adds significantly to the total mass of the star serves the purpose of activating an additional source of energy. The increase in mass increases the central temperature of the star, and it thereby makes more fuel available through reaching the destructive limits of lighter elements.

Correlation of the central temperature with the mass carries with it the implication that the principal fuel supply at any given mass level is provided by a specific element. Most of the very heavy elements are present only in small concentrations, and this makes it difficult, in most instances, to distinguish the points at which destruction of an additional element begins. There is, however, a relatively wide mass range, indicated by the crosshatched area in Fig. 6, in which the variability is sufficiently regular to make it evident that a single element is the principal energy source.
The distinctive character of the variability in this region, which we will identify as the Cepheid zone, extends through a wide enough range of central temperatures to indicate that the energy is being derived primarily from an element that is present in the star in a higher concentration than that reached by any element of greater atomic number. The particular element that is involved cannot be positively identified without further investigation, but since lead is not only the first moderately abundant element in the descending order of atomic mass, but also the only such element in the upper portion of the atomic series, we may, at least tentatively, correlate the destructive thermal limit of this element, number 82, with the central temperature corresponding to the mass range of the Cepheid zone. It should be noted in this connection that lead is the heaviest element that is stable against radioactivity in a region of unit magnetic ionization, and it therefore occupies a preferred position somewhat similar to that of iron.

The long period variables that precede the Cepheids on the evolutionary path can be correlated with the elements above lead in the atomic series. Here the quantities of energy generated as the successive destructive limits are reached are smaller, inasmuch as these elements are relatively scarce, but each increment of energy has a greater effect on the stellar equilibrium because of the smaller heat storage capacity of these low temperature stars. This accentuates the effect of minor variations in the incoming flow of matter from the environment, and as a result these long period variables are less regular than the Cepheids. In general, these stars are not separable into easily recognizable classes on the order of the Cepheids, but some groups of a somewhat similar nature have been identified. The RV Tauri variables, for instance, are found between the red, Mira-type long period variables and the Cepheids.56

There are 35 elements heavier than lead in the primitive material from which the globular cluster stars were formed. The destructive limit of each of these elements establishes a central temperature for a particular group of stars in the same manner that the destructive limit of lead (presumably) establishes the central temperature, and consequently the characteristic properties, of the Cepheids. Most of the Class C stars are probably at the unit magnetic ionization level, reducing the number of stable elements above lead to ten, and the RV Tauri stars account for one of these, but trying to divide all of the variables earlier than the Cepheids into groups, and to identify the elements that constitute the principal energy source for each, is clearly impractical, as matters now stand, even if only nine more groups are involved.

The stars located in the area where the Class A evolutionary line AC crosses the Cepheid zone are known as RR Lyrae stars. They are abundant in the globular clusters, and for this reason are also called cluster variables. However, they are not the only Class A Cepheid stars in these clusters. One kind of globular cluster star that we have not yet considered is a star that condenses on a large nucleus: either a pre-existing small star or an aggregate of planetary mass. When condensation of a star takes place on a nucleus of this size, the line of evolutionary development is similar to that of the Class C giants, and is shifted upward on the CM diagram relative to the Class 1A path. This line enters the Cepheid zone at a location where the mass and central temperature are the same as in the region of the RR Lyrae stars, but the density and surface temperature are lower, while the luminosity is higher. The astronomers know the stars of this type as Population II Cepheids, or W Virginis stars. In our terminology they are Class 1 Cepheids.

The changes that take place in the stars during their trip around the cycle have some effect on the position that a star of a given kind occupies within its zone of the CM diagram. One such result is the existence of a third kind of Cepheid star. A giant Class C star of the second or later cycle moves through the Cepheid zone if it has a large initial mass, or is subject to heavy accretion. As would be expected, the general characteristics of this kind of Cepheid are similar to those of the Class 1 Cepheids. Indeed, it is only relatively recently that the existence of two distinct groups of these large Cepheid type stars has been recognized. But it is now known that the Class 2C (Population I) Cepheids are more massive, and are located higher (about 1½ magnitudes) on the CM diagram than those of Class 1.57 They are also quite uniform in size and other properties. Both the large mass and the similarity in properties of these Class 2C stars are explained by our finding that they are stars reconstituted from the products of Type I supernovae, which are explosions of stars that have reached the mass limit, and are therefore both very large and very much alike. These characteristics carry over into their products. As would also be expected, the RV Tauri variables previously mentioned are likewise separable into two distinct groups similar to the two classes of Cepheids.56

On the other side of the Cepheid zone the controlling factors are reversed. The heat storage capacity of the star is much greater, because of the higher temperatures and greater mass. Consequently, any variations, either in the rate of accretion or in the abundance of heavy elements in the accreted matter, are, to a large extent, smoothed out.

The Cepheid stars have played an important part in the advancement of astronomical knowledge because there is a specific relation between their periods and their luminosities. This is understandable as another result of the interrelation between the different properties of the stars that has been the subject of much of the discussion in this and the preceding chapter. It probably applies to most other kinds of intrinsic variables as well as to the Cepheids, but these other types of variable stars are less common and, so far, less clearly identified. It is also doubtful if any of these other classes of stars are as uniform as the Cepheids. The period-luminosity relation for the Cepheids, when properly calibrated, enables the absolute magnitude of a Cepheid star to be determined from the period, an observable quantity. The relation of the absolute magnitude to the observed magnitude then indicates the distance to the star, thereby providing a means of measuring distances up to millions of light years, far beyond the limits of ordinary methods of measurement, as illustrated in the sketch below.
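The following sketch shows how the relation is applied in practice. The calibration constants and the sample star are illustrative assumptions, not values taken from this work; the calibration is a representative published fit for classical Cepheids.

    # A minimal sketch: distance to a Cepheid from its period and apparent
    # magnitude. The calibration M = -2.81 * log10(P) - 1.43 (P in days) is a
    # representative published fit for classical Cepheids, assumed here for
    # illustration; the star itself is hypothetical.
    import math

    def absolute_magnitude(period_days):
        # period-luminosity (Leavitt) relation, visual band
        return -2.81 * math.log10(period_days) - 1.43

    def distance_parsecs(apparent_mag, absolute_mag):
        # distance modulus m - M = 5 log10(d) - 5, solved for d
        return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

    M = absolute_magnitude(10.0)   # 10-day period -> M of about -4.2
    d = distance_parsecs(15.0, M)  # apparent magnitude 15.0
    print(round(M, 2), round(d))   # about -4.24 and 70,000 parsecs

A faint ten-day Cepheid thus lies some 70,000 parsecs away, well over 200,000 light years; brighter, longer-period Cepheids extend the reach into the millions of light years mentioned above.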

The explanation of the pulsations of the Cepheids and other similar types of variable stars given in this work is, of course, quite different from that found in the astronomical literature. The astronomers envision the pulsation as a mechanical vibration—just like a bell, as one textbook puts it. But the observed characteristics of the pulsation contradict this hypothesis.

A peculiar fact . . . is that the maximum brightness occurs near the time of most rapid expansion, while minimum brightness coincides with the most rapid contraction. This is contrary to any theory which assumes a simple pulsation of the entire stellar body. It might indeed seem that the star should be brightest and hottest shortly after the contraction has brought it to a state of highest density and pressure.58 (R. Burnham)

Like so many of the other "peculiar facts" that are noted, but disregarded, in current practice, this one is giving us a message. It is telling us that the prevailing theory of the pulsation is wrong. The theory of the universe of motion now reveals just what it is that is wrong. The pulsation is not a mechanical vibration; it is thermally powered. The interplay between two processes, expansion and energy generation, is the cause of the periodicity. The maximum brightness occurs near the time of maximum expansion because this is the point at which the generation of energy at the maximum rate has persisted for the longest time.

Except for some portions of the stellar content of nickel and other elements close to iron that escape because of local variations in the conditions in the central regions of a star, the elements heavier than iron are destroyed in the production of energy during the life of the star, and in the Type I supernova explosion which terminates that life if the star arrives at the temperature limit of iron. The building up of these elements has to start approximately from scratch again, but the period of expansion and re-aggregation of the explosion products is long enough to bring the heavy element concentration in the second-generation protostars back somewhere near that in the protostars of the first cycle. Meanwhile the concentration of iron and the elements of lower atomic mass has been increasing without interruption, and the total heavy element content of the Class 2C stars (usually expressed as the percentage of elements above hydrogen or above helium, or as the ratio of the heavier elements to hydrogen) is substantially greater than that of the Class 1A stars.

The same atom building processes are effective in the environment of the stars, the interstellar space. The heavy element content is determined by the age of the matter, irrespective of whether that matter is in the form of dust and gas or is incorporated into stars. As noted in Chapter 3, the current view of the astronomers is that the heavy elements are formed in the interiors of the stars and are scattered into the environment by supernova explosions. On this basis, the heavy element content of young stars is greater than that of old stars because the proportion of heavy elements in the "raw material" available for star building increases as the galaxy ages.

Although this view currently enjoys quite general acceptance, more and more anomalies are appearing as evidence from observation continues to accumulate. In addition to the many items of evidence contradicting this hypothesis that have been discussed in the previous pages of this volume, we may now note that there is evidence indicating that the heavy element content in the interstellar matter of the local environment is not increasing. Martin Harwit has considered this situation at some length. He observes that the "similarity in abundances" (that is, in chemical composition, as indicated by the spectra) of different classes of stars in the Galaxy—B stars, red giants, planetary nebulae, etc.—is "somewhat puzzling."59 These similarities lead him to this conclusion: "These analyses show that throughout the lifetime of the Galaxy the interstellar matter has had an almost unchanged composition." This is definitely in conflict with the basic premise underlying the currently accepted explanation of the difference in composition between the "old" and "young" stars: the assumption that the interstellar medium is continually enriched with heavy elements "cooked" in the stars and scattered into the environment.

Of course, our findings also require the heavy element content of any given quantity of matter to increase with age, but the existing interstellar matter is not the same matter that occupied this region of space in earlier times. All galaxies are pulling in diffuse material from their surroundings, material which, according to our findings, is relatively young. For example, Harwit refers to a recently discovered, apparently continuous, infall of gas from outside the Galaxy. As noted in Chapter 2, the larger galaxies are also capturing some immature globular clusters in which the constituent dust clouds have not yet consolidated into stars. Meanwhile, the stars are accreting the older interstellar matter. It is quite likely that these two processes come close enough to offsetting each other to leave the average composition of the interstellar material in the local environment nearly constant. Harwit's conclusion as to the constancy of composition is therefore consistent with the theory of the universe of motion insofar as it applies to the situation in the outer regions of spiral galaxies. The proportion of heavy elements should theoretically be greater in the older regions of the galaxies, but these are not accessible to detailed observation, as matters now stand.

CHAPTER 6

The Dwarf Star Cycle

At the very high temperatures prevailing in the interiors of the stars at the upper end of the main sequence, the thermal velocities are approaching the unit level, and when these already high velocities are further increased by the energy released in the supernova explosion the speeds of many of the interior atoms rise above unity. The results of speeds above the unit level were discussed briefly in Volume I, but a more detailed consideration will now be required, as these greater-than-unit speeds, which play no part in the physical activity of our terrestrial environment, are involved in a wide variety of astronomical phenomena.

The discovery of the existence of speeds greater than that of light is one of the most significant results of the development of the theory of the universe of motion. It has opened the door to an understanding of many previously obscure or puzzling phenomena and relations. But some of the concepts that are involved in dealing with these very high speeds are new and unfamiliar. For that reason many persons find them hard to accept on the strength of theoretical reasoning alone, regardless of how solid a base that reasoning may have. The results of recent research reported in The Neglected Facts of Science, published in 1982, should be very helpful to these individuals, as that research has shown that many of the new findings derived from the theory of the universe of motion can also be derived from purely factual premises, independently of any theory, thus providing an empirical validation of the theoretical results. Among these theoretical conclusions that are now provided with factual proof are the items with which we are presently concerned: the existence of greater-than-unit speeds, and the characteristics of motion at these speeds.

In order to emphasize the point that the theoretical findings in this area, however strange they may appear in the light of previously accepted ideas, are fully confirmed by observed facts and logical deductions from those facts, the description of the basic motions of the universe, for purposes of the theoretical development in this work, has been taken from the purely factual derivation given in the 1982 publication. This factual development was made possible by recognition of the physical evidence of the existence of scalar motion, and a detailed analysis of the properties of motion of this nature. The scalar nature of the basic motions of the universe is an essential feature of the Reciprocal System of theory, and has been emphasized from the time of its first presentation. The points brought out in the extract from the 1982 book are simply the necessary consequences of the existence of these basic scalar motions. However, in order to follow the development of thought, it will be necessary to bear in mind some of the special features of scalar motion that were brought out in the previous volumes of this work.

Although scalar motion, by definition, has no direction, in the usual sense of that term, it can be either positive or negative. When such motions are represented in a reference system, the positive and negative magnitudes appear as outward and inward respectively. For convenient reference, these are designated as "scalar directions." Inasmuch as a scalar motion is simply the relation between a space magnitude and a time magnitude, it can be measured either as speed, the relation of space to time, or as inverse speed, the relation of time to space. Inverse speed was identified, in Volume I, as energy. A reciprocal relation, such as that between space and time in motion, is symmetrical about the unit value; that is, speeds of 1/n (which we have identified as motion in space) are equivalent to inverse speeds, or energies, of n/1, whereas energies of 1/n (which we have identified as motion in time) are equivalent to speeds of n/1.
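The reciprocity described in the preceding paragraph reduces to very simple arithmetic. The following sketch merely restates it; the numerical values are arbitrary illustrations.

    # A minimal sketch, in natural units, of the reciprocity just described:
    # a speed of 1/n is equivalent to an inverse speed (energy) of n/1, and
    # an energy of 1/n is equivalent to a speed of n/1.
    def equivalent_energy(speed):
        # inverse speed (energy) corresponding to a given speed
        return 1.0 / speed

    def equivalent_speed(energy):
        # speed corresponding to a given inverse speed (energy)
        return 1.0 / energy

    print(equivalent_energy(0.25))  # speed 1/4  -> energy 4/1
    print(equivalent_speed(0.25))   # energy 1/4 -> speed 4/1
    print(equivalent_energy(1.0))   # unit speed -> unit energy (symmetry point)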
With the benefit of this understanding of the relevant factors that may be unfamiliar, we may now begin the extract from the published description of the high-speed regions.

Photons of radiation have no capability of independent motion, and are carried outward at unit speed by the progression of the natural reference system, as shown in (1), Fig. 7. All physical objects are moving outward in the same manner, but those objects that are subject to gravitation are coincidentally moving inward in opposition to the outward progression. When the gravitational speed of such an object is unity, and equal to the speed of progression of the natural reference system, the net speed relative to the fixed spatial reference system is zero, as indicated in (2). In (3) we see the situation at the maximum gravitational speed of two units. Here the net speed reached is -1, which, by reason of the discrete unit limitation, is the maximum in the negative direction.

An object moving with speed combination (2) or (3) can acquire a translational motion in the outward scalar direction. One unit of the outward translational motion added to combination (3) brings the net speed relative to the fixed reference system, combination (4), to zero. Addition of one more translational unit, as in combination (5), reaches the maximum speed, +1, in the positive scalar direction. The maximum range of the equivalent translational speed in any one scalar dimension is thus two units. The five combinations are summarized in the sketch below.
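The net speeds of the five combinations just enumerated can be verified by simple unit arithmetic. The sketch below merely restates the content of Fig. 7 in computational form.

    # A minimal sketch tabulating the five speed combinations of Fig. 7.
    # Each net speed (in natural units) is the sum of the outward progression
    # of the natural reference system (+1), the inward gravitational motion
    # (0, -1, or -2 units), and any added outward translational units.
    combinations = [
        ("(1) photon",                   +1,  0, 0),
        ("(2) gravitation at unity",     +1, -1, 0),
        ("(3) gravitation at two units", +1, -2, 0),
        ("(4) plus one translational",   +1, -2, 1),
        ("(5) plus two translational",   +1, -2, 2),
    ]
    for label, progression, gravitation, translation in combinations:
        print(label, "net speed:", progression + gravitation + translation)
    # Net speeds: +1, 0, -1, 0, +1 -- a total range of two units.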

As indicated in Fig. 7, the independent translational motions with which we are now concerned are additions to the two basic scalar motions, the inward motion of gravitation and the outward progression of the natural reference system. The net speed after a given translational addition therefore depends on the relative strength of the two original components, as well as on the size of the addition. That relative strength is a function of the distance. The dependence of the gravitational effect on distance is well known. What has not heretofore been recognized is that there is an opposing motion (the outward progression of the natural reference system) that predominates at great distances, resulting in a net outward motion. The outward motion (recession) of the distant galaxies is currently attributed to a different cause, the hypothetical Big Bang, but this kind of ad hoc assumption is no longer necessary. Clarification of the properties of scalar motion has made it evident that this outward motion is something in which all physical objects participate. The outward travel of the photons of radiation, for instance, is due to exactly the same cause. Objects such as the galaxies that are subject to gravitation attain a full unit of net speed only where gravitation has been attenuated to negligible levels by extreme distances. The net speed at the shorter distances is the resultant of the speeds of the two opposing motions. As the distance decreases from the extreme values, the net outward motion likewise decreases, and at some point, the gravitational limit, the two motions reach equality, and the net speed is zero. Inside this limit there is a net inward motion, with a speed that increases as the effective distance decreases. Independent translational motions, if present, modify the resultant of the two basic motions.

The units of translational motion that are applied to produce the speeds in the higher ranges are outward scalar units superimposed on the motion equilibria that exist at speeds below unity, as shown in combination (5), Fig. 7. The two-unit maximum range in one dimension involves one unit of speed, s/t, extending from zero speed to unit speed, and one unit of inverse speed, t/s, extending from unit speed to zero inverse speed. Unit speed and unit energy (inverse speed) are equivalent, as the space-time ratio is 1/1 in both cases, and the natural direction is the same; that is, both are directed toward unity, the datum level of scalar motion. But they are oppositely directed when either zero speed or zero energy is taken as the reference level. Zero speed and zero energy in one dimension are separated by the equivalent of two full units of speed (or energy), as indicated in Fig. 8.

In the foregoing paragraphs we have been dealing with full units. In actual practice, however, most speeds are somewhere between the unit values. Since fractional units do not exist, these speeds are possible only because of the reciprocal relation between speed and energy, which makes an energy of n/1 equivalent to a speed of 1/n. While a simple speed of less than one unit is impossible, a speed in the range below unity can be produced by addition of units of energy to a unit of speed. The quantity 1/n is modified by the conditions under which it exists in the spatial reference system (for reasons explained in the earlier volumes), and appears in a different mathematical form, usually 1/n². Since unit speed and unit energy are oppositely directed when either zero speed or zero energy is taken as the reference level, the scalar direction of the equivalent speed 1/n² produced by the addition of energy is opposite to that of the actual speed, and the net speed in the region below the unit level, after such an addition, is 1 - 1/n². Motion at this speed often appears in combination with a motion 1 - 1/m² that has the opposite vectorial direction. The net result is then 1/n² - 1/m², an expression that will be recognized as the Rydberg relation that defines the spectral frequencies of atomic hydrogen—the possible speeds of the hydrogen atom.
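Since the text identifies this expression with the Rydberg relation, it may be helpful to see that relation evaluated numerically. The sketch below uses the standard empirical form and constant from conventional spectroscopy; nothing in it is derived from the present theory.

    # A sketch evaluating the standard Rydberg formula for hydrogen,
    # 1/wavelength = R * (1/n^2 - 1/m^2) with m > n, using the accepted
    # empirical value of R. Illustrative only; R is not derived in this work.
    R = 1.0967758e7  # Rydberg constant for hydrogen, per meter

    def wavelength_nm(n, m):
        # wavelength in nanometers for the transition m -> n
        return 1e9 / (R * (1.0 / n**2 - 1.0 / m**2))

    # Balmer series (n = 2): the first three visible lines
    for m in (3, 4, 5):
        print(m, "->", round(wavelength_nm(2, m), 1), "nm")
    # Prints roughly 656.5, 486.3, and 434.2 nm (vacuum values).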

The net effective speed 1 - 1/n² increases as the applied energy n is increased, but inasmuch as the limiting value of this quantity is unity, it is not possible to exceed unit speed (the speed of light) by this inverse process of adding energy. To this extent we can agree with Einstein's conclusion. However, his assertion that higher speeds are impossible is incorrect, as there is nothing to prevent the direct addition of one or two full units of speed in the other scalar dimensions. This means that there are three speed ranges. Because of the existence of these three ranges with different space and time relationships, it will be convenient to have a specific terminology to distinguish between them. In the subsequent discussion we will use the terms low speed and high speed in their usual significance, applying them only in the region of three-dimensional space, the region in which the speeds are 1 - 1/n². The region in which the speeds are 2 - 1/n², that is, above unity but below two units, will be called the intermediate region, and the corresponding speeds will be designated as intermediate speeds. Speeds in the 3 - 1/n² range will be called ultra high speeds. This terminology is summarized in the sketch below.
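Stated as a classification rule, the terminology just introduced looks like this. The sketch is a minimal restatement of the three ranges; the treatment of the exact boundary values is an illustrative choice.

    # A minimal sketch of the three speed ranges, in natural units (unit
    # speed = the speed of light). Net speeds take the forms 1 - 1/n^2,
    # 2 - 1/n^2, and 3 - 1/n^2 in the three ranges. The handling of the
    # exact boundary values is an illustrative choice.
    def speed_range(net_speed):
        if net_speed < 1.0:
            return "low/high speed (1 - 1/n^2): motion in space"
        elif net_speed < 2.0:
            return "intermediate speed (2 - 1/n^2): motion in time"
        else:
            return "ultra high speed (3 - 1/n^2)"

    for s in (0.5, 1.5, 2.5):
        print(s, "->", speed_range(s))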
The foregoing paragraphs conclude the portions of the text of The Neglected Facts of Science that are relevant to the intermediate speed range. Consideration of speeds in the ultra high range will be deferred to later sections of this volume, as the phenomena now under review are limited to speeds below two units. However, one point that was mentioned in the extract from the 1982 publication, which should have some further emphasis in view of its importance in the present connection, is the status of unit speed. The true datum level of scalar motion, the physical zero, as we called it in the earlier volumes, is unit speed, not either of the mathematical zero points. This is significant, because it means that the second unit of motion, as measured from zero speed, does not add to the first unit. It replaces that unit. Although the use of zero speed as a reference level makes it appear that the sequence of units is 0, 1, 2, the status of unit speed as the true physical zero means that the correct sequence is -1, 0, +1. The importance of this point is its effect on the second unit of motion. This second unit is not the spatial motion (speed) of the first unit plus a unit of motion in time (energy), but the unit of motion in time only.

The speeds of the fast-moving products of the supernova explosions that we are now undertaking to examine are in the intermediate range, where motion is in time. Instead of being blown outward in space in the same manner as the products that are ejected at speeds below unity, these intermediate speed products are blown outward in time. In both cases, the atoms, which were in relatively close contact in the hot massive star, are widely separated in the explosion product, but in the intermediate speed product the separation is in time rather than in space. This does not change either the mass or the volumetric characteristics of the atoms of matter. But when we measure the density, m/V, of the giant stars we include in V, because of our method of measurement, not only the actual equilibrium volume of the atoms, but also the empty three-dimensional space between the atoms, and the density of the star calculated on this basis is something of a totally different order from the actual density of the matter of which it is composed. Similarly, where the atoms are separated by empty time rather than by empty space, the volume obtained by our methods of measurement includes the effect of the empty three-dimensional time between the atoms, which reduces the equivalent space (the apparent volume), and again the density calculated in the usual manner has no resemblance to the actual density of the stellar material. In the giant stars the empty space between the atoms (or molecules) decreases the measured density by a factor which may be as great as 10⁵ or 10⁶. The time separation produces a similar effect in the opposite direction, and the second product of the explosion is therefore an object of small apparent volume, but extremely high density: a white dwarf star. A numerical illustration of this inverse effect follows.
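The following sketch puts rough numbers on the inversion just described. The reference density and the separation factor are illustrative assumptions (the factor is taken from within the 10⁵ to 10⁶ range just cited); the symmetry of the two directions is the point being illustrated.

    # A minimal sketch of the density inversion described above. Empty space
    # between the atoms divides the measured density m/V; empty time between
    # the atoms is equivalent to less space, and multiplies it by a factor of
    # the same general size. Both numbers below are illustrative assumptions.
    reference_density = 1.4   # g/cm^3, roughly the mean density of the sun
    separation_factor = 1e5   # within the 10^5 to 10^6 range cited above

    giant_density = reference_density / separation_factor        # spatial separation
    white_dwarf_density = reference_density * separation_factor  # time separation

    print(giant_density)        # about 1.4e-05 g/cm^3
    print(white_dwarf_density)  # about 1.4e+05 g/cm^3, white dwarf range

The multiplied figure falls in the same general range as the observed white dwarf densities quoted below.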
The name "white dwarf" was applied to these stars in the early days just after their discovery, when only a few of them were known. These had temperatures in the white region of the spectrum, and the designation that was given them was intended to distinguish them from the red dwarfs in the lower portions of the main sequence. In the meantime it has been found that the temperature range of these stars extends to much lower levels, leading to the use of such expressions as "red white dwarf." But by this time the name "white dwarf" is firmly established by usage, and it will no doubt be permanent, even though it is no longer appropriate.

When judged by terrestrial standards, the calculated densities of these white dwarfs are nothing less than fantastic, and the calculations were originally accepted with great reluctance, after all alternatives that could be found were ruled out for one reason or another. The indicated density of Sirius B, for instance, is about 130,000 g/cm³, that of Procyon B is estimated at 900,000 g/cm³, while other stars of this type have still greater densities. In the light of the relationships developed in this work, however, it is clear that this very high density is no more out of line than the very low density of the giant stars. Each of these phenomena is simply the inverse of the other. Donald Lynden-Bell expresses the conventional wisdom on the subject in this statement:

We know already that some stars have collapsed to a size only ten times larger than that at which they would become black holes.60

In the face of this asserted "knowledge" it may not be easy to accept the idea that these objects have, in fact, expanded to their present size; that is, their components have moved outward away from each other in time, and the small size that we observe is merely a result of the way in which the expansion in time appears in the spatial reference system. But this conclusion is a necessary consequence of basic physical principles whose validity has been demonstrated in the preceding volumes of this series, and, as we will see in the subsequent pages, it produces explanations of the white dwarf properties that are in full agreement with all of the definitely established observational information.

Unfortunately, the amount of observational information with respect to the white dwarfs that has been accumulated thus far is very limited, and much of what is available is of questionable accuracy. This scarcity of reliable information is due to a combination of causes. The first is that the white dwarfs have been known for only a relatively short time. The first one to be seen, the "pup" companion of Sirius, the dog star, was observed in 1862, but it was not until about 1915 that the distinctive character of the properties of this star was recognized, and theories to account for these properties were not developed until considerably later. The second reason for the lack of information is the dimness of these objects, which makes them very difficult to see, and limits both the number of stars that can be observed and the amount of information that can be obtained from each.

The third factor that has led to confusion in this area is the lack of a correct theoretical explanation of the white dwarf structure. As indicated in the statement quoted above, the currently accepted theory envisions an atomic collapse. It is asserted that the energy supply of a star is eventually exhausted, and that when energy generation ceases, the star collapses into a hypothetical state called "degenerate matter," in which the space between the hypothetical constituents of the atoms is eliminated, and these constituents are squeezed into a close-packed condition. As explained by Robert Jastrow:

With its fuel gone it [the star] can no longer generate the pressures needed to maintain itself against the crushing force of gravity, and it begins to collapse once more under its own weight.61

Joseph Silk's explanation is essentially the same:

The pressure exerts an outward force that withstands the gravity of the star, as long as there is sufficient hydrogen present in the stellar core to produce helium . . . After the supply of nuclear energy runs out and fails to provide adequate heat and pressure, gravitational collapse must ensue.62

This is an astounding conclusion. To put it into the proper perspective, it should be realized that the hypothetical collapse is something that is supposed to take place within the atom; that is, the pressure exerted on the atoms becomes so great that they are unable to withstand it. But, in fact, the pressure to which the atoms of the condensed gas are subjected is not significantly altered by the cooling that results when and if the energy generation ceases. Each atom is subject to the pressure due to the weight of all overlying matter in any event, regardless of whether that matter is hot or cold. The pressure due to the thermal motion has nothing to do with conditions inside the atom; it merely introduces additional space between the atoms. Certainly, this added space would be eliminated if the star cooled down by reason of the exhaustion of the energy supply, but this would not change the conditions to which the atoms are subjected.

The books from which an earlier generation of Americans learned to read contained a story about a man who was returning home from the city with a heavy sack of flour that he had purchased. He was afraid that the weight of the flour would be too much for the horse that he was riding, so to lighten the load on the horse he held the sack in his arms. In those days, the children that read the story found it hilarious, but now we are confronted with essentially the same thing in different language, and we are expected to take it seriously.

Some writers seem to suggest the existence of a kinetic component that would add to the static pressure against the central atoms. Paolo Maffei gives us this version of the "collapse":

Eventually, when all the lighter energy-producing elements have been depleted, energy will no longer be generated in the interior of the sun. In the absence of the internal pressure that supported them, the outer shells will rapidly fall toward the center due to gravitational attraction. In the course of this very rapid collapse, the atoms will be squeezed together ever more tightly, and the electrons will be disassociated from the nuclei.63

But the assumption that a star could cool down rapidly enough to increase the total pressure significantly is nothing short of outrageous. There is no reason to believe that the heat transfer process within the star will be any faster during the cooling process than in the normal outward flow. Indeed, the cooling will be slowed considerably by the release of gravitational energy as the outer portions of the star move inward. Furthermore, even under the most extreme assumptions, the critical pressure at which the atomic collapse is presumed to occur could be reached only in the very large stars, since the central atoms in the smaller stars can obviously withstand pressures immensely greater than the static pressure to which they are normally subject. (We know this to be true because atoms of the same kind do withstand these immensely greater pressures in the large stars.) Thus the collapse, if it occurred at all, could occur only in the stars which current theory says do not collapse, but explode. And no one bothers to explain how the layers of matter outside the central regions of the star, which certainly are not subjected to any excessive pressure, are induced to participate in the degeneracy.

The truth is that the question as to how matter gets from its normal state into this hypothetical degenerate condition is given scant attention. The astronomers have arrived at an explanation of the extremely high density of the white dwarfs that appears reasonable in the context of the currently accepted theory of atomic structure. That theory portrays the atoms in terms of individual constituents separated by large amounts of space. Elimination of this space seems to be a logical way of accounting for the enormous increase in density. No direct evidence bearing on this issue is currently available, and the hypothesis is therefore free from any direct conflict with observation. Having this (to them) satisfactory explanation of the density of the white dwarf, the astronomers have apparently considered it obvious that the stars must get from their normal condition to this white dwarf state in some way. Consequently, they have not considered it necessary to look very closely into the question as to how the collapse is to be accomplished.

Eddington is often credited with having provided the "explanation" of the white dwarfs.64 But an examination of one of his discussions of the subject, such as that in the chapter on "The Constitution of the Stars" in his New Pathways in Science,65 reveals that the whole point of his discussion is to show that the existence of degenerate matter is consistent with accepted atomic theory. He does not address the question as to how this degeneracy is to be accomplished, except to comment that it can be produced by pressure, which gets us nowhere, as he offers no suggestion as to how the necessary pressure could be produced—the same lacuna that is so evident in the more recent discussions of the subject. Where such a suggestion is attempted, it is usually an obvious absurdity. Here is an example:

Gravitation tends to squeeze the star to smaller and smaller dimensions, but every contraction only strengthens the force, thereby compelling further contraction . . . Its [the star's] contraction accelerates all the time for the reasons just explained, and it would collapse outright into a black hole if forces were not generated to counteract the gravitational contraction.
Such a force is the thermal pressure of the gas . . . the pressure eventually begins to balance gravitation.66 (M. J. Plavec)

This not only conflicts with the previously noted fact that the thermal pressure does not alter the pressure exerted against the atoms, but is also specifically contradicted by direct observation, as we know from experience that matter in which thermal pressure is not "generated to counteract the gravitational contraction"—that is, matter that is near zero absolute temperature—does not "collapse into a black hole." It remains in the condition that we call the solid state, in which there is a definite minimum distance between the atoms. This is an equilibrium distance, and it can be reduced by application of pressure, but there is no observational indication of any kind of a limit, even though pressures as high as five million atmospheres have been reached in experiments.

The truth is that there is no empirical evidence to support the assumption that gravitation operates within atoms. Observations show only that there is a gravitational effect between atoms (and other discrete particles). Furthermore, the behavior of matter under compression demonstrates that there is a counterforce, an antagonist to gravitation (the same one that we encountered earlier in our examination of the structure of the globular clusters), that limits the extent to which the gravitational force can decrease the interatomic distance. Plavec's contention that collapse into a black hole will take place unless forces, such as the thermal pressure, are "generated" to oppose gravitation is contradicted by the observed behavior of matter, which shows that the necessary counterforce is inherent in the structure of matter itself, and does not have to be generated by a supplementary process.

In order to clear the way for the "collapse" hypothesis, it is first necessary to assume that there is a limit to the strength of the counterforce, an assumption that is entirely ad hoc, since current science has not even identified the nature of this force, to say nothing of establishing its limits, if any. Then it is further necessary to assume that the gravitational force operates within the atom and that the opposing force is not similarly operative to any significant extent. The combination of these latter assumptions is inherently improbable, and in view of the lack of any indication of a limit to the resistance to compression, the first assumption has no more claim to plausibility. The theory of atomic collapse is thus simply an excursion into the realm of the imagination.

In the universe of motion stars cannot and do not collapse. The results that are currently attributed to this hypothetical collapse are produced by the expansion of the fastest products of the supernova explosion into time. The factor that controls the course of development of the white dwarf stars is the inversion of physical properties in the intermediate speed region. As we have seen, the expansion into time increases the amount of three-dimensional time occupied by this star. This is equivalent to a decrease in the volume of space; that is, the equivalent spatial dimensions are reduced, resulting in an increase in density when measured as mass per unit of volume. Contraction of the matter of the white dwarf star under pressure has the opposite effect, just as it does in the case of ordinary matter. Pressure thus reduces the density measured on this same basis. The constituents of a white dwarf star, like those of any other star, are subject to the gravitational effect of the structure as a whole, and the atoms in the interior are therefore under a pressure.
The natural direction of gravitation is always toward unity. In the intermediate region (speeds above unity), as in the time region (distances below unity) that we explored in the earlier volumes, toward unity is outward in the context of a fixed spatial reference system, the datum level of which is zero. Thus the gravitational force in the white dwarf star is inverse relative to the fixed system of reference. It operates to move the atoms closer together in time, which is equivalent to farther apart in space. At the location where the pressure due to the gravitational force is the strongest, the center of the star, the compression in time is the greatest, and since compression in time is equivalent to expansion in space, the center of a white dwarf is the region of lowest density. As we will see later, this inverse density gradient plays an important part in determining the properties of the white dwarfs.

Another effect of the inversion at the unit level can be seen in the relation of the size of the white dwarf to its mass. References are made in the astronomical literature to the "curious" fact that "the more massive a white dwarf is, the smaller its radius."67 When the true nature of the white dwarf is understood, this is no longer curious. A massive cloud of matter expanding into space occupies more space than one of less mass, and the radius of the massive cloud is therefore greater. A massive cloud of matter expanding into time similarly occupies more time than one of less mass, and the radius of the massive cloud (measured as a spatial quantity) is therefore smaller, inasmuch as more time is equivalent to less space. The sketch below illustrates the observed trend.
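For a sense of the magnitudes involved, the following sketch reproduces the observed inverse trend using the conventional empirical scaling (radius proportional to the inverse cube root of the mass). That scaling is borrowed purely as a description of the observations, not as an endorsement of the degenerate-matter model disputed above, and the normalization is an assumption chosen to roughly match Sirius B.

    # A rough illustration of the observed inverse mass-radius trend for
    # white dwarfs. The scaling R ~ M^(-1/3) is the conventional empirical
    # approximation, used here only as a fit to the observed trend; the
    # normalization is chosen to roughly match Sirius B (about one solar
    # mass and 0.008 solar radii). All values are illustrative.
    def white_dwarf_radius(mass_solar):
        # radius in solar radii for a given mass in solar masses
        return 0.008 * mass_solar ** (-1.0 / 3.0)

    for m in (0.5, 1.0, 1.4):
        print(m, "solar masses ->", round(white_dwarf_radius(m), 4), "solar radii")
    # The radius decreases as the mass increases -- the reverse of the
    # behavior of ordinary stars.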
Astronomical observations give us only occasional disconnected glimpses of the white dwarf stars as they move through the various stages of their existence, but we can arrive at a theoretical picture of their evolution that is in full agreement with the little that is observationally known. The following paragraphs will outline the general nature of the evolutionary development, which will be considered in detail in Chapters 11, 12, and 13.

In what may be called Stage 1, the immediate post-ejection period following the supernova explosion in which the white dwarf is formed, this star is expanding in time. This means that from a spatial standpoint it is contracting in equivalent space. In this stage, the constituent particles, newly raised to intermediate speeds, are emitting radiation at radio frequencies as they move toward isotopic stability at these speeds. (The process by which the radiation is generated will be examined in Chapter 18.) Such a star is observable only as an otherwise unidentifiable source of radio emission. A great many such sources—"blank fields," as they are known to the observers—have been located, and presumably many of these are outgoing white dwarfs.

During this expansion stage energy is being lost to the environment, and there is little generation of energy to replace the losses. Energy production by atomic disintegration is reduced as the temperature rises in the range above unity, as this decreases the inverse temperature, which determines the destructive limits of the elements in the intermediate speed range. Since unity is the natural datum for physical activity, the critical level at which the disintegration of the atom takes place is unit equivalent temperature, corresponding to the speed of light, regardless of whether the predisintegration temperature is above or below the unit level. A deviation upward from unity (a decrease in inverse speed) has the same effect on the process as a downward deviation of the same magnitude (a decrease in speed). Inasmuch as the maximum speed is well above unity, only the very heavy elements are initially available as fuel.

When the energy loss to the environment has been sufficient to terminate the contraction in equivalent space, a process of re-expansion begins. The energy loss continues throughout this second evolutionary stage. As the expansion proceeds, and as the temperature falls toward unity, energy production increases to some extent, since successively lighter elements reach their destructive limits in the same manner as in the inverse situation on the opposite side of the unit temperature level. But the supply of elements heavier than iron was reduced to near zero before the supernova explosion, and the expanding white dwarf therefore has very little fuel for energy generation. The atom building process and the accretion of matter from the environment eventually begin replenishing the supply, but this proceeds at a relatively slow pace. Furthermore, the white dwarf does not have the benefit of gravitational energy, such as that which is released by the contraction of the giant stars, because the effect of gravitation in time is the inverse of the effect of gravitation in space.

Because of the energy losses, the temperatures of the constituents of a white dwarf continually decrease, and eventually they begin dropping below the unit level. As this reversion to the lower speed range proceeds, the star is gradually converted from the status of a white dwarf (a star whose constituents move at intermediate speeds) to that of an ordinary main sequence star (one whose constituents move at speeds below the unit level). The evolution of the white dwarf is thus directed toward the same end as the evolution of the giant stars; that is, a restoration of the state of gravitational and thermal equilibrium that was destroyed by the supernova explosion. In the case of the red giant, the explosion produced a cool and diffuse aggregate, which had to contract and heat in order to reach the equilibrium condition. In the case of the white dwarf, the explosion produced a dense hot aggregate that had to expand and cool in order to reach the same equilibrium condition.

Since the astronomers do not recognize the true nature of the white dwarf star, they have had great difficulty in charting an evolutionary course for these objects. As noted earlier, they have developed a theory of stellar evolution that takes the stars as far as the red giant stage. They regard the white dwarfs as being in the last stage on the road to stellar oblivion. It follows, so they conclude, that the stars must, in some way, get from red giant to white dwarf. The amount of progress that has been made toward putting some substance into this pure assumption during the last twenty years can be seen by comparing the following two statements:

We know remarkably little about evolution in population I after the red giants.68 (J. L. Greenstein, 1960)

The details of the process by which the red giants evolve into white dwarfs are poorly understood.69 (R. C. Bohlin, 1982)

But when a pure assumption of this kind is repeated again and again, its dubious antecedents are eventually forgotten, and it begins to be accepted as established knowledge. The remarkable way in which the status of this assumption as to the location of the evolutionary path has been elevated by the process of repetition, without any addition to the observational support, can be seen from the following statement from an astronomy textbook, in which the "poorly understood" and purely hypothetical evolutionary course becomes a certainty:

We do not know precisely what happens [to the red giants] at this point, but we are sure that shortly thereafter the star moves rapidly to the left on the H-R diagram and then downward, fading out slowly in the lingering death of the white dwarf.70

Even in the light of conventional theory, the hypothesis that the stars "move rapidly to the left on the H-R diagram [from the red giant region] and then downward," meanwhile shedding mass, is untenable. Movement to the left from the red giant region involves an increase in the mass of a Class 1 star, and either an increase or a constant mass for a member of one of the later classes. The stars in the upper left of the diagram are the most massive of all of the known stars. The mass loss assumed to be taking place during this hypothetical leftward movement is incompatible with the observed mass relationships. Nor is there any explanation as to how this assumed loss of mass could take place. Shklovsky, for instance, concedes that "we simply do not understand exactly how material is ejected from the envelopes of such [red giant] stars."71

Furthermore, even where matter is actually ejected from a star, this does not necessarily mean that it leaves the system. When the issue is squarely faced, it is apparent that there is no evidence of any significant loss of mass from any star system, other than the stars that explode as supernovae. There are, of course, many types of stars that eject mass, either intermittently or on a nearly continuous basis, but they do not give their ejecta anywhere near enough velocity to reach the gravitational limit and escape from the gravitational control of the star of origin. This ejected matter therefore eventually returns to the star from which it originated.

In this connection, it should be noted that although the relation of the stellar mass to the variables of the CM diagram is different for the different classes of stars, our findings show that it is fixed for any one of these classes. Stars that are following an evolutionary course that involves an increase of mass cannot lose mass and still continue on that course. This not only rules out the theoretical loss of mass by stars such as the red giants, which show no evidence of any significant outflow of matter, but also means that the observed ejection of material by stars like the Wolf-Rayets is a cyclical process of the kind discussed in the preceding paragraph. We will encounter this same kind of cyclical ejection process in a more extensive form in the case of the planetary nebulae, which will be examined in Chapter 11.

The present chapter is the first in this volume that involves a full-scale application of the reciprocal relation between space and time, the most significant consequence of the postulate of a universe composed entirely of motion. Some of the conclusions of the preceding chapters depend in part on this relationship, but the entire content of this chapter rests on the inverse relation between the effects of an expansion into space and those of an expansion into time. The concept of an object becoming more compact (from the spatial standpoint) as it expands will no doubt be a difficult one for many individuals (although, for some reason, most seem to be quite comfortable with the fantastic "holes" in space—black holes, white holes, wormholes, etc.—that figure so prominently in present-day cosmological speculations).
But the validity of the reciprocal relation between space and time has been demonstrated in many hundreds of applications in the preceding volumes, and it provides the complete and consistent explanation of the white dwarfs that conventional astronomical theory is unable to supply.

The theory of white dwarfs in the universe of motion contains none of the awkward gaps that are so conspicuous in currently accepted astronomical theory. In the context of this new theory, both the nature of the white dwarfs and their properties—those properties that are so different from those of the familiar objects of everyday life—are necessary consequences of the event in which these stars originated: the supernova explosion. And these properties define the ultimate fate of these objects. There is no need to assume a stellar "death" for which there is no observational evidence. The destiny of the white dwarf, an eventual return to the main sequence, is implicit in the physical characteristics that make it the kind of star that it is.

CHAPTER 7

Binary and Multiple Stars

The prevalence of binary and multiple systems is one of the most striking facts that has emerged from the astronomers' observations of the stars, but they have not been able thus far to find an explanation for the existence of these star systems that is plausible enough to attain general acceptance. A number of different types of theories have been proposed, but all are subject to serious difficulties. As one astronomy textbook describes the situation:

Our hopes of understanding all stars would brighten if we could explain exactly how binary and multiple stars form . . . Unfortunately we cannot.72

In view of this embarrassing lack of understanding of one of the most prominent features of stellar existence, it is significant that the development of the theory of the universe of motion provides a detailed account of the origin of these binary and multiple systems, not as something of a separate nature, but as an integral part of the explanation of the stellar evolutionary process. Furthermore, this explanation of the origin of these systems carries with it an explanation of the diversity of the components, another item that has hitherto puzzled the investigators. A half century ago, James Jeans made the following comment about this situation, an observation that is equally appropriate today:

Reverting to the special problem of binary systems, it is hard to see how the two constituents can be of the same age, and yet they can only be of different ages if they have come together as the result of capture, a contingency which is so improbable that it can be ruled out as a possible origin for the normal binary system . . . Clearly some piece of the puzzle is missing.73

The existence of two distinct products of the supernova explosions, with speeds in different ranges, is the piece of the puzzle that has been missing. On the basis of the theory of the Type I supernovae outlined in Chapters 4 and 6, every star that has been through one such explosion is now a star system consisting of two components: an A component on or above the main sequence, and a B component on or below the main sequence. This means that the seemingly incongruous associations of stars of very different types that are so common are perfectly normal developments. Combinations of giant and dwarf stars, for example, are not freaks or accidents; they are the natural initial products of the process that produces the second-generation stars.

The significance of the term "star system" introduced earlier should now be apparent. A star system, in this sense, consists of two or more stars, or aggregates of sub-stellar size, that have been produced by subdivision of a single star. Inasmuch as the constituents of such a system have originated inside the gravitational limit of the original star, they are gravitationally connected, rather than having a net outward motion away from each other, as is true of the individual stars. The term "binary" is frequently used by astronomers in an inclusive sense to cover all systems with more than one component, but for the purposes of this present work it will be restricted to the double systems. The star systems with more than two components will be called multiple systems.

In the early stages the pairing varies with the evolutionary age of the system. Immediately after the explosion the A component is merely a cloud of dust and gas which appears as nebulosity surrounding the white dwarf B component. Later the cloud develops into a pre-stellar aggregate, and then into a giant infrared star. Since these aggregates are invisible, except under special circumstances, the white dwarf appears to be alone during this phase. When the giant star gets into the high luminosity range this situation is likely to be reversed, as the bright star then overpowers its relatively faint companion. Further progress eventually brings the giant down to the main sequence. The development of the white dwarf is slower, and there is usually a stage in which a main sequence star (the former giant) is paired with a white dwarf, as in Sirius and Procyon. Finally the white dwarf, too, reaches the main sequence, and thereafter both components progress upward along the same path. The upper (more advanced) portions of the main sequence therefore contain no associations of dissimilar stars. Many of these stars are binaries, but they are pairs of the same or closely related types.

There are some differences in composition. The white dwarf gets the lion's share of the heavy elements in the supernova process, and even though it accretes the same kind of matter as the giants, it has a larger content of "metals" in the main sequence stage. The Wolf-Rayet stars appear to reflect this difference. Their distribution and relative size indicate (on the basis of the theory discussed in Chapter 4) that they are former white dwarfs. They are less massive than the O and B stars with which they are associated.74 As noted earlier, they are probably rich in nickel, a white dwarf characteristic, and they are "closely confined to the plane of the Galaxy,"75 indicating that they are stars of Class 2 or later. No Wolf-Rayet stars have been found in the Orion Nebula, where O and B stars of Class 1 are plentiful.74

It is suspected that all [Wolf-Rayets] may be components of close pairs, the W stars revolving with larger O type companions, a situation that may provide an important clue to the still mysterious behavior of Wolf-Rayet stars.76

This ‖suspected― pairing with O type stars, reported by an astronomy textbook, is fully in accord with our theoretical findings. The O star is the A component of the binary system, the former giant, while the Wolf-Rayet star is the B component, the former white dwarf. The astronomers have been unable to arrive at any explanation as to why so many stars are binary, and they are even more at a loss to explain the frequent occurrence of pairs of a very dissimilar nature. The pairing of these dissimilar objects is an anomaly in the context of conventional astronomical theory, which pictures the two stars in a binary system as following the same evolutionary path, and therefore occupying very different locations on that path if they are stars of different types. This presumed difference in evolutionary status is hard to reconcile with the rather obvious probability that the two stars of such a system have a common origin. The fact that the white dwarf is normally (probably always) the less massive of the two stars exacerbates this problem. Double stars . . . often present the strange circumstance that the more massive star is still a main sequence subject, while the less massive star has reached the white dwarf stage. If the two stars are of the same age, and have always been a physical pair, then the more massive star should evolve faster than the other.77 Dean B. McLaughlin makes this comment on a specific situation: It is curious that several other novalike variables, as well as two recurrent novae, T Coronae Borealis and RS Ophiuchi, have red giant stars for companions.78 From the standpoint of the findings of this work, there is nothing at all ‖curious― about this situation. Nor is it a ‖strange circumstance― that the more massive star is on the main sequence. The seeming anomaly is actually an observational repudiation of current astronomical theory. It exposes the Falsity of the assumption upon which the current theory is based: the assumption that all stars follow the same evolutionary course, and that the main sequence stars precede the white dwarf stars on this course. Our finding is that the two constituents of a binary system follow totally different paths, and at any specific time they are equally far advanced on their respective paths. The path back to the main sequence is, however, somewhat longer for the white dwarfs, which accounts for the variety of the combinations. Because of the nature of the process by which they were formed, all of the stars of the white dwarf class, including the novae and related variables, are accompanied by stars or pre-stellar aggregates on or above the main sequence, These companions are not always visible, particularly if they are still in the pre-stellar stage, but if they are observable, they are either giants, sub-giants, or main sequence stars, It is true that some of the observed double stars do not fit into this evolutionary picture on the basis of their reported composition, For example Capella is said to be a pair of giants, Neither of these stars can qualify as the B component of a binary, On the basis of the theory of the universe of motion, we must therefore conclude that Capella is actually a multiple system rather than a double star, and that it has two unseen white dwarf or faint main sequence components, The Algol type stars, in which the main sequence star is paired with a sub-giant of a somewhat smaller mass, are similarly indicated as multiple systems. The main sequence star cannot be the B component because it is the larger of

the two units and has already attained the equilibrium status, while the sub-giant cannot be the B component because it is above the main sequence. We must therefore conclude that at least one of these stars has undergone a second explosion, and that a faint B companion accompanies it. This assessment of the situation is supported by the fact that in Algol itself at least one, and possibly two, small B components have been located observationally.

The second explosive event attributed to such stars as Capella and Algol is a normal development that can be expected to occur in any star system of an advanced evolutionary age, if it is in an appropriate environment. Chronological age alone will not produce this result, as there is no progress up the main sequence unless sufficient material is available for accretion. But where there is an adequate supply of "food" in the environment, the stars continue moving around the cycle until their life span is terminated by a process that will be discussed in Chapter 15. Each passage of a single star through the explosive stage of the cycle results in the production of a binary system (unless the B component is below stellar size, a possibility that we will examine shortly). The number of stars in the system thus continues to increase with age, as long as sufficient material for accretion is available. Systems with as many as six components are found within the present observational range, and considerations that will be discussed later indicate that even larger systems may exist in the older regions of large spiral and spheroidal galaxies. The status of these multiple systems as combinations of separately produced binaries is clearly indicated by their structures.

"In triple systems . . . two stars commonly co-rotate in a close orbit, and a third star revolves around the pair at a great distance. In quadruple systems, such as Mizar, two close pairs are likely to revolve around each other at a great distance."79 (W. K. Hartmann)

The local star group, the concentration of stars in the immediate vicinity of the sun, is composed mainly of Class B stars, those of the main sequence, and since there is ample evidence, such as that contributed by their heavy element content, that these are second generation products, Class 2B, they should be largely binaries. This theoretical conclusion is confirmed by observation. "Single stars are a minority."80 Most of the recognized binary systems have main sequence stars in both positions, but there are some main sequence-white dwarf combinations. Few, if any, giant-white dwarf systems are recognized in this region, but this is probably due to the effect of the time factor on the number of stars in each part of the cycle, as the interval during which the giant stars are visible is of short duration compared to the time spent by the white dwarfs in their evolutionary development.

It should be noted in this connection that this local group is representative only of a particular evolutionary stage, not of stellar systems in general, and the proportions in which the various types of stars occur in this local region are not indicative of the composition of the stellar population as a whole. The white dwarf, for instance, is an explosion product, a star of the second or later generation, and stars of this type are almost totally absent from stellar systems such as the globular clusters, which are composed almost exclusively of first generation stars, those which have not yet passed

through the explosion phase of the cycle. It should not be assumed, therefore, that the high proportion of white dwarfs in the local region indicates a similar high proportion throughout the universe, or even throughout the Galaxy. The same caveat should be applied to the estimate, quoted in Chapter 4, that 95 percent of all stars are located on the main sequence. This estimate does not give sufficient consideration to the fact that few of the early type stars, those of the globular clusters and the early elliptical galaxies, have reached this evolutionary stage. These aggregates, which constitute the great majority of stellar systems (although they do not necessarily contain the majority of all stars), are composed almost entirely of Class 1A stars, those that have not yet reached the main sequence. The number of stars of the later classes in these aggregates is no more than can be explained on the basis of the strays, the scattered remnants of disintegrated older structures.

The observers recognize the almost complete absence of the various types of binary stars from these young aggregates, but it remains unexplained in current thought. Burnham, for instance, comments that "For some reason not fully understood, eclipsing binaries appear to be very rare in globular star clusters."81 Likewise novae are scarce. "There are only two cases of novae in globular star clusters,"82 he says. "The search for binaries in the center of globular clusters has been totally unsuccessful,"83 reports Bart J. Bok. Shklovsky concedes that for Population II stars in general, multiplicity is "fairly rare."84

No reason for this near absence of binaries from Population II (Class 1A) stars is given in the astronomical literature. Nor is much support given to the rather half-hearted efforts to explain the origin of the double and multiple systems. The sad fact is that the astronomers are trapped by their upside down evolutionary sequence. The striking difference in the abundance of binaries between two groups of stars that admittedly differ primarily in age shows that this must be an evolutionary effect. But since the astronomers regard the group with almost no binaries as the older, they have to find one process by means of which the binaries are produced in the original star formation, or very soon thereafter, and another process whereby the combinations are uncoupled at some later evolutionary stage. Even the origin of the binaries is without any explanation that is taken seriously, and no explanation at all has been advanced to account for the uncoupling.

When the correct evolutionary direction is recognized, one half of this problem disappears. Only one process remains to be explained: the production of binary systems at some stage of the evolutionary development. In the context of the theory of the universe of motion, this is seen to be a necessary consequence of the division between motion in space and motion in time that takes place in the products of extremely violent explosions. Here, then, this theory provides a complete and consistent explanation of an important feature of the astronomical universe that is without any explanation in terms of conventional astronomical theory.

The clarification of the situation that is accomplished by the new theory does not end at this point. Because of the lack of understanding of the basic principles that are involved, the astronomers are unable to distinguish between cause and effect in these phenomena.
For example, Shklovsky expresses the current astronomical opinion in this statement:

"Enough has been said to conclude that the doubling of a star decisively controls its evolution."85

As the points brought out in the preceding discussion demonstrate, this view of the situation is upside down, like so many other aspects of currently accepted theory. Instead of the doubling of the star determining its evolution, the evolutionary development of the star results in the doubling. The conventional view expressed by Shklovsky really does not explain anything; it merely replaces one question with another. The question as to what causes the evolution becomes a question as to what causes the doubling. On the other hand, the answer derived from the theory of the universe of motion is complete. This theory explains why stars evolve, why this evolution terminates in an explosive event, and how the doubling of the star results from the explosion.

In a statement quoted in the first volume of the present series, Richard Feynman commented that "Today our theories of physics, the laws of physics, are a multitude of different parts and pieces that do not fit together very well."86 This description is even more appropriate in application to the theories of astronomy.

"Despite its tradition, which stretches back many millennia, astronomy does not appear to qualify as a mature science in [Thomas] Kuhn's sense of the word—a science with an established framework of theory and understanding."87 (Martin Harwit)

The binary star theory is one of the individual "parts and pieces" that has little connection with anything else. The existence of binaries is simply taken as given, and a set of conclusions with respect to some of the observed binary phenomena is then drawn from this existence, without fitting these conclusions, and the phenomena to which they refer, into the rest of astronomical theory. This comment is not intended as a criticism; it is simply a statement of one of the aspects of astronomy, as it now exists, that needs to be taken into consideration in order to understand why the theoretical development in this series of volumes arrives at so many conclusions that differ radically from the prevailing astronomical thought. Inasmuch as the astronomers have no general structure of theory, either in physics or in astronomy, with which to work, they have had no option but to proceed on this piecemeal basis. Actually, they have made impressive progress in identifying and clarifying the "multitude of different parts and pieces." What is now needed is to put these parts and pieces together, turning them right side up where necessary, and fitting them together in the correct manner. This is the task that the general physical theory derived from the postulates that define the universe of motion is now prepared to handle.

With the benefit of the information supplied by this new theoretical system, it can now be seen that the behavior characteristics of the binary star systems are inherent in the stars themselves. There is no need to invent processes that call for interaction between the components. Hypothetical processes of this nature are the current orthodoxy.

"Interacting double stars—i.e., those in which gas flows from one star to the other—are in vogue to explain many peculiar celestial phenomena. The subject has become a bandwagon during the last decade or less."88 (David A. Allen)

In many binary systems the separation between the stars is relatively small, and some interaction between them is a definite possibility (although it should be remembered that where one of the two stars is a white dwarf, there is a separation in time as well as in space, and the stars are not actually as close to each other as they appear to be). But the current tendency is to use the hypothesis of mass transfer from one member of a binary system to the other as a kind of catch-all, to explain away any aspect of binary star behavior that is not accounted for in any other way. The remarkable extent to which this hypothetical mass transfer process is currently being stretched is well illustrated by the purported resolution of what is called the "Algol paradox." As noted earlier in this chapter, the two principal components of Algol are a relatively large and hot main sequence star and a less massive, cooler subgiant.

"Here lies a paradox. The more massive B or A star should be the one to expand first, yet the less massive star is the more evolved giant. Why? Is there a fundamental mistake in our idea of stellar evolution?"89 (W. K. Hartmann)

Very little is actually known about the conditions that exist in these binary systems, and still less is known about the events that have taken place earlier in the lives of these stars. Thus, at the present level of instrumentation and techniques there is no way of disproving a hypothesis about these binaries, and the astronomers have taken full advantage of the freedom for invention. "Theoretical studies have resolved the paradox," Hartmann says. It is simply assumed that the smaller star was originally the larger, and that after having achieved the more advanced status, it obligingly transferred most of its mass to its companion. In other binary star situations, such as in the cataclysmic variables, the transfer explanation can be used only if the movement is in the opposite direction. So it is cheerfully assumed, in this case, that the transfer is reversed. As Shklovsky explains:

"It seems that the hot component has already passed through its evolution and, at some epoch in the past, transferred much of its material to its companion star. But now the companion is returning the favor by restoring to the evolved star the material 'borrowed' many millions of years ago."90

Of course, we have to keep in mind the difficulties under which the astronomers carry on their work, but nevertheless, there are limits to what can legitimately be classified as scientific. Acceptance of untestable ad hoc assumptions as the resolution of problems, or giving them any status other than that of highly tentative suggestions for study, is incompatible with good scientific practice. It inevitably leads to wrong answers. The correct answer to Hartmann's question is: Yes, there is a fundamental mistake in current ideas of stellar evolution. The so-called "paradoxes" are actually observational contradictions of a theory that has no foundation in fact.

In addition to the binaries, we also observe a considerable number of stars in the local region, which appear to be single. Some of these may actually be single stars that have drifted in as a result of the mixing process that occurs by reason of the rotational motion of the Galaxy, but others are double stars in which one of the components is unobservable. We have already noted that the A component of a binary is invisible during a portion of the early evolutionary stage, and all we see under these conditions is a lone white dwarf.
The components of the white dwarfs are not dispersed in space, and these

stars do not participate in this kind of a retreat into obscurity, but they become invisible for other reasons. As we will find in Chapter 11, they cannot be seen at all until they cool down to a certain critical temperature. Later a bright giant or main sequence companion may overpower them, or they may simply be too dim to be observable at any considerable distance.

Inasmuch as the maximum speed produced by the supernova explosions that we are considering is less—usually considerably less—than two units, the distribution of speeds above and below unity is asymmetric, with the greater part of the mass taking the lower speeds. For this reason, even though some of the matter ejected into space escapes from the gravitational control of the remnants of the star, the amount of retained slow speed material still exceeds the inward-moving mass in most, and probably all, cases. The giant member of the binary system therefore has the greater mass. In Sirius, for example, the main sequence star, originally the giant, has more than twice the mass of the dwarf. Since even the smallest star is subject to a Type II supernova explosion at the age limit, it is evident that in many instances the mass of the dwarf component is below the minimum required for a star, in which case the final product is a single star with one or more relatively small and cool attendants: a planetary system.

In the supernova explosion the material near the center of the star is obviously the part of the mass that acquires greater-than-unit speed, and disperses into time. The remainder of the stellar material is dispersed outward into space. In view of the segregation of heavy and light components which necessarily takes place in a fluid aggregate under the influence of gravitational forces, the chemical composition of the two components of the explosion products differs widely. Most of the lighter elements will have been concentrated in the outer portions of the star before the explosion, those heavier than the nickel-iron group will have been converted to energy, except for the stray atoms mixed in with other material, and the recent acquisitions that had not had time to sink to the center, while the central portions of the star contained a high concentration of the iron group elements. When the explosion occurs, the outward moving material, which we will call Substance A, consists mainly of light elements, with only a relatively small proportion of high density matter.

It can be deduced that the composition of Substance B, the matter of the inward-moving component, is subject to a considerable amount of variation. The exploding stars differ in their chemical composition. No doubt there are also differences in some of their physical properties—rotational speed, for example. Because of these differences in the stars from which they originate, the size and composition of the white dwarf components of the explosion products is also variable. If this component is small, it can be expected to be composed almost entirely of the iron group elements. The large white dwarfs contain a greater proportion of the lighter materials.
In each of the two products of the stellar explosions that we are now considering the primary gravitational forces are directed radially toward the center of the mass of the dispersed material. Hence, unless outside agencies intervene, it is to be expected that any capture of one subsidiary aggregate by another will result in consolidation, the formation of a binary or multiple system being ruled out by the absence of non-radial motions. Ultimately, then, the greater part of the matter of the larger of the two components, the

material dispersed in space, will be collected into one unit. The smaller component then acquires orbital motion around the larger, consolidation being unlikely in this case, as neither unit will be moving directly toward the other unless by pure chance. The ultimate result is a system in which a mass, or a number of masses, composed primarily of Substance B is moving in an orbit, or orbits, around a central star of Substance A. If the B component is of stellar size, the system is a binary star; if it is smaller, the product is a planet, or a planetary system. Because of interaction during the final stages of the formation process, some of the unconsolidated fragments may take up independent orbital positions, constituting planetary satellites. This provides an explanation of the origin of the solar system, a matter that has been the subject of much speculation among the members of the human race, who occupy a planet of that system.

On the foregoing basis we may conclude that at the beginning of the formative period of the solar system, after the gravitational forces had almost completed the task of aggregating the masses dispersed by the supernova explosion, a large mass of Substance A, with some small subsidiary aggregates and considerable dispersed matter not yet incorporated into the central mass, was approaching a much smaller and less consolidated mass of Substance B. When the combination of the two systems took place under the influence of the mutual gravitational attraction, the major aggregates of the B component acquired orbital motion around the large central mass of the A component. In the process of assuming their positions, these newly constituted planets encountered local aggregates of Substance A which had not yet been drawn into the central star, and under appropriate conditions these aggregates were captured, becoming satellites of the planets. At the end of this phase all major units had been incorporated into a stable system in which planets composed of Substance B were revolving around a star composed of Substance A, and smaller aggregates of Substance A were similarly in orbit as planetary satellites.

Small fragments are subject to being pulled out of their normal paths by the gravitational forces of the larger masses which they may approach, and while orbital motion of these fragments is entirely possible, the chances of being drawn into one of the larger masses increase as the size decreases. We may therefore deduce that during the latter part of the formative period all of the larger members of the system increased their masses substantially by accretion of fragments of Substance A in various sizes from planetesimals down to atoms and sub-atomic particles. Some smaller amounts of Substance B, in assorted sizes, were also accreted. After the situation had stabilized, the central star, the sun, consisted primarily of Substance A, with a small amount of Substance B derived from the heavy portions of the original Substance A mix and the accretions of Substance B. Each planet consisted of a core of Substance B and an outer zone of Substance A, the surface layer of which contained some minor amounts of Substance B acquired by capture of small fragments. The planetary satellites, which had comparatively little opportunity to capture material from the surroundings because of their small masses and the proximity of their larger neighbors, were composed of Substance A with only a small dilution of Substance B.
It can also be deduced that after the formative period ended, further accretion took place at a slower rate from the remains of the original material, from newly produced matter, and

from matter entering the system out of interstellar space, but the general effect of these subsequent additions did not differ greatly from that of the accretions during the formative period, and did not change the nature of the result.

This is the theoretical picture as it can be drawn from the information developed in the earlier pages. Now let us look at the physical evidence to see how well this picture agrees with observation. The crucial issue is, of course, the existence of distinct Substances A and B. Both the deduction as to the method of formation of the planetary systems and the underlying deduction as to the termination of the dense phase of the stellar cycle at the destructive limit would be seriously weakened if no evidence of a segregation of this kind could be found. Actually, however, there is no doubt on this score. Many of the fragments currently being captured by the earth reach the surface in such a condition that they can be observed and analyzed. These meteorites definitely fall into two distinct classes, the irons and the stones, together with mixtures, the stony-irons. The approximate average composition is as follows:

Chemical Composition of Meteorites

    Irons                      Stones
    Iron      0.90             Iron        0.25
    Nickel    0.08             Oxygen      0.35
    Other     0.02             Silicon     0.18
    Total     1.00             Magnesium   0.14
                               Other       0.08
                               Total       1.00

The composition of the iron meteorites is in full agreement with the conclusion that these are fragments of pure Substance B. The stony meteorites have obviously been unable to retain any volatile constituents, and when due allowance is made for this fact their composition is entirely consistent with a status as Substance A. The existence of the mixed structures, the stony-irons, is easily explained on the basis of the previous deductions as to the composition of the various sizes of white dwarfs. It is also reported that the iron meteorites contain practically no uranium or thorium, whereas stony meteorites do.91 This is another piece of information that fits in with the theoretical picture. The energy generation process exhausted the supply of very heavy elements in the central regions of the stars, from which the iron meteorites (Substance B) are derived, before the supernova explosion occurred. But the outer regions of these stars, the source of the stony meteorites (Substance A), contained portions of the heavy element content of the accreted matter that had not yet made their way down to the center.

The evidence from the meteorites thus gives very strong support to those aspects of the theory that require the existence of two distinct explosion products, Substances A and B. There is no proof that the meteorites actually originated contemporaneously with the planets in the manner described, but this is immaterial so far as the present issue is concerned. The theoretical process that has been outlined is not peculiar to the solar system; it is applicable to any system reconstituted after a supernova explosion, and the existence of distinct stony and iron meteorites is just as valid proof of the existence of distinct Substances A and B whether the fragments originated within the solar system, or

have drifted in from some other system that, according to the theory, originated in the same manner. The support given to the theory by the composition of the meteorites is all the more impressive because the segregation of the fragmentary material into two distinct types on such a major scale has been very difficult to explain on the basis of previous theories.

Additional corroboration of the theoretical deductions is provided by the spectra of novae. Since these are stars of the white dwarf class, they are composed of Substance B as originally formed. However, the white dwarfs accrete matter from the surroundings in the same manner as other stars, and within a relatively short time the original star is covered by a layer of Substance A. This material is essentially the same as that in the outer regions of stars of other types, and the composition of the stellar interior is therefore not revealed by the spectra obtained during the pre-nova and post-nova stages. But when the nova explosion occurs, some of the Substance B from the interior of the star forces its way out, and the radiation from this material can be observed along with the spectrum from the exterior. As would be expected from theoretical considerations, the explosion spectra often show strong indications of highly ionized iron.92

Another theoretical deduction that can be compared with the evidence from observation is the nature of the distribution of Substances A and B in the planetary system. The sun has a relatively low density, and we can undoubtedly say that it consists primarily of Substance A, as required by the theory. Whether or not it actually contains the predicted small amount of Substance B cannot be determined on the basis of the information now available. The planet that is most accessible to observation, the earth, definitely conforms to the theoretical requirement that it should consist of a core of Substance B with an overlying mantle of Substance A. The observed densities of the other inner planets, together with such other pertinent information as is available, likewise make it practically certain that they are similarly constituted.

The prevailing astronomical opinion is that the differentiation which produced the iron cores occurred after the formation of the planets. This necessitates the assumption that these aggregates passed through a molten, or semi-molten stage, during which the iron "drained into metallic cores."93 Although this theory is still the one that appears most frequently in the astronomical literature, it received what is probably a fatal blow from the results of the Mariner 10 mission to Mercury. A report of these results reads in part as follows:

"Somehow in the region where Mercury formed from the dust and gas of the primeval nebula, it first gathered iron-rich materials to form a dense core before adding the outer shells of less dense material. Planetologists here (Jet Propulsion Laboratory) feel this to be true because there is no evidence revealed by Mariner that Mercury could have gone through a subsequent hot period during which iron-rich materials could have differentiated and formed the core."94

These observations indicating that the core formation preceded the acquisition of the lighter material are fully in accord with the theory of planetary formation derived in the foregoing pages, a theory which places the differentiation of the iron from the lighter elements in the pre-supernova star, rather than in the planets.

The observational situation with respect to the major planets is less clearly defined. The densities of these planets are much lower than those of the earth and its neighbors, but this is to be expected, since they have been able, by reason of greater size and lower temperature, to retain the lighter elements, particularly hydrogen, that have been lost by the inner planets. The observations indicate that the outer regions of these major planets are composed largely of these light elements. This leaves the internal composition an open question. It seems, however, that there must have been some kind of a gravitationally stable nucleus in each case to initiate the build-up of the light material, and it is entirely possible that this original mass, which is now the core of the planet, is composed of Substance B. Jupiter has a total mass 317 times that of the earth, and even if the core represents only a small fraction of the total mass, it could still be many times as large as the earth's core. We may thus conclude that, although the observational data on the outer planets do not definitely confirm the theoretical deduction that they have inner cores of Substance B, the observed properties are consistent with that finding. Since it is highly probable that all of the planets have the same basic structure, this lack of any definite conflict between theory and observation is significant.

The satellites present a similar picture. The verdict with respect to the distant satellites, like that with respect to the distant planets, is favorable to the theory, but not conclusive. The available evidence is consistent with the theory that the inner cores of these satellites, as well as their outer regions, are composed of Substance A, but it does not definitely exclude other possibilities. The satellite that we know best, like the planet that we know best, gives us an unequivocal answer. The moon is definitely composed of materials similar to the stony meteorites and the earth's crust; that is, it is practically pure Substance A, as it theoretically should be.

It is appropriate to point out that this theory of planetary origin derived by extension of the development of the consequences of the fundamental postulates of the Reciprocal System is independent of the temperature limitations that have constituted such formidable obstacles to most of the previous efforts to account for the existing distribution of material. The fact that the primary segregation of Substance A from Substance B antedated the formation of the solar system explains the existence of distinct core and mantle compositions without the necessity of postulating either a liquid condition during the formative period, or any highly speculative mechanism whereby solid iron can sink through solid rock.

This explanation of the formation of the system also accounts for the near coincidence of the orbital planes of the planets, and for the distribution of the planetary orbits in distance from the sun. It has been recognized for two hundred years that the planets are not distributed haphazardly, but occupy positions at distances that are mathematically related in a regular sequence. This relation, called Bode's Law (although discovered by Titius), has never been explained, and since present-day scientists are reluctant to concede that there are answers which they are unable to find, the present tendency is to regard it as a mere curiosity. "It

is probable that the law is no more than an interesting relation of a coincidental nature,"95 says one textbook.

The basic principles governing this situation were explained in Chapter 6. The white dwarf is moving in time, and the speeds of its constituents are distributed in the range between one and two units. Increments of speed above the unit level are limited to unit values, but since the motion in the intermediate speed range is distributed over the full three dimensions of time, the applicable units are the three-dimensional units. As we saw earlier, the two linear units from zero to the one-dimensional limit correspond to eight (2³) three-dimensional units. The constituents of the white dwarf are thus distributed to a number of distinct speed levels, with a maximum of seven. The distances in equivalent space at the point of maximum expansion are similarly distributed. In the subsequent contraction back to the equilibrium condition these separations are maintained unchanged, although the individual constituents move from one level to the next lower whenever they lose a unit of speed.

During the contraction in time (equivalent to a re-expansion in space) there are two processes in operation. The gravitational force of the aggregate as a whole is pulling the particles in toward the center of mass. Coincidentally, each of the subdivisions of this aggregate defined by the different speed levels is individually consolidating, since all particles in each subdivision are moving at the same speed, and are therefore at rest relative to each other, aside from their mutual gravitational motion. The rate at which each process takes place depends mainly on the mass that is involved and the distance through which the constituents have to travel. If the total mass is relatively large, the central aggregation proceeds rapidly, and the local concentrations are pulled in to the center before they have a chance to develop very far. Where the total mass is relatively small the distances involved are about the same, and the central force is therefore weaker. In this case the subsidiary aggregates have time to form, and the consolidation of these aggregates into one central mass may not be complete by the time the white dwarf becomes subject to the gravitational effect of its companion in the binary system.

Up to this point the subsidiary aggregates are all in a straight line spatially. They are distributed over three dimensions of time, but the spatial equivalent of this time is a scalar quantity, and it appears in the spatial reference system in linear form. When the white dwarf reaches the vicinity of its giant or main sequence companion, and is pulled out of its original line of travel by the gravitational force of the companion, the various subsidiary aggregates go into orbit at distances from the companion that reflect their separations, as well as the amount by which the line of movement of the white dwarf is offset from a direct central impact on its companion.

Bode's Law reproduces these distances, as they appear in the solar system, as far as the planet Uranus. It provides no explanation as to where its elements come from, but it does correctly identify these elements as two fixed quantities and one variable. The fixed quantities are properties of the particular star system (the solar system), and therefore have to be obtained empirically; they cannot be calculated from theoretical premises.
The first of these represents the distance in actual space between the A component and the closest of the planetary masses at the time the orbital motion was established. It is the same for all planets, and has the value 0.4 in terms of the astronomical unit, the mean radius of the

earth's orbit. Our finding confirms the value that appears in Bode's Law. The second constant is related to factors such as the masses of the two components of the binary system, and the magnitude of the explosion in which they were produced. In Bode's Law it has the value 0.3. We arrive at a somewhat lower value, 0.267. The variable in the distance relation is the speed level of the motion in time.

There are several factors involved in this relation that make it more complex than the simple sequence in Bode's Law. Two of these factors enter into the values in the first half of the group of planets. There is a 1½ step in the numerical sequence that does not appear in Bode's Law. As we have seen in the earlier volumes, this value frequently appears in such a sequence where the quantity involved is complex, so that it is feasible to have a combination of one-unit and two-unit components. Apparently the big jump from one to two (a one hundred percent increase) favors such an intermediate value, which is relatively rare at the higher levels. The second special factor that enters into the situation we are now considering is that, for reasons explained in Volume I, all magnitudes in equivalent space appear in the spatial reference system as second powers of the original quantities. The distances below n = 4 can thus be expressed by the relation d = 0.267n² + 0.4. In this lower range the results obtained from this expression are practically identical with those obtained from Bode's Law, as indicated in Table I, where the observed distances are compared with those calculated from the two equations.

TABLE I
PLANETARY DISTANCES

    Planet           n      Calc.    Obs.    Bode's Law
    Mercury          0       0.40     0.40      0.40
    Venus            1       0.70     0.70      0.70
    Earth            1½      1.00     1.00      1.00
    Mars             2       1.50     1.50      1.60
    Asteroids        3       2.80     2.80      2.80
    Neutral point    4       4.70     4.95       —
    Jupiter         (4)      5.20     5.20      5.20
    Saturn          (3)      8.90     9.50     10.00
    Uranus          (2)     19.60    19.20     19.60
    Neptune         (1½)    34.50    30.00       —
    Pluto            —        —      39.40     38.80
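The calculated column of Table I can be reproduced directly from the two relations. The following sketch evaluates the relation d = 0.267n² + 0.4 given above for the inner group, and the inverse relation d = 76.8/n² + 0.4 that is developed for the outer planets in the paragraphs following the table. The function names are illustrative only, and the results agree with the tabulated figures to within a few hundredths of an astronomical unit.

    # Sketch (illustrative only): reproduce the "Calc." column of Table I
    # from the two relations developed in this chapter.

    def inner_distance(n):
        # Spatial side of the neutral point: n = 0 to 4.
        return 0.267 * n ** 2 + 0.4

    def outer_distance(n):
        # Temporal side: n runs back down from 4, distances inverse to n^2.
        return 76.8 / n ** 2 + 0.4

    inner = [("Mercury", 0), ("Venus", 1), ("Earth", 1.5), ("Mars", 2),
             ("Asteroids", 3), ("Neutral point", 4)]
    outer = [("Jupiter", 4), ("Saturn", 3), ("Uranus", 2), ("Neptune", 1.5)]

    for name, n in inner:
        print(f"{name:14s} n = {n:<4}   d = {inner_distance(n):6.2f}")
    for name, n in outer:
        print(f"{name:14s} n = ({n})   d = {outer_distance(n):6.2f}")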

In this half of the total distance range, the increments of distance add directly, even though they are the results of increments of motion in time (equivalent space), because they correspond to the first half of the eight-unit speed range, which is on the spatial side of the neutral point. Beyond this point, on the temporal side, the relations are inverted. The n values (number of units from the appropriate zero) move back down, and the distances in equivalent space, expressed in spatial terms, are inversely related to the value of n². Furthermore, the transition from space to time at the midpoint involves a change in the gravitational effect from one positive unit (gravitation in space) to one negative unit (gravitation in time), a net change of two units. On this basis, the neutral point is one unit (0.267) above the 4.7 distance corresponding to n = 4 on the space side. One more such unit brings the distance to 5.2. This is 4.8 plus the 0.4 initial value. For

the more distant planets, the 4.8 applicable to n = 4 is increased in inverse proportion to n², resulting in the values shown in the table. The applicable equation is d = 76.8/n² + 0.4. The agreement between the observed and calculated distances is not as close for these outer planets as for the inner group, but it is probably as close as can be expected, except in the case of Pluto. Bode's Law could have a place for Pluto, but only at the expense of omitting Neptune. This is not acceptable, as Neptune is a giant planet, while Pluto is a small object of uncertain status. It appears likely that the inverse speed range corresponding to n = 1½ is the maximum that was reached by the parent white dwarf, and that both Neptune and Pluto condensed in this relatively wide distance range. This would account for the fact that the calculated value for n = 1½ falls between the observed distances of the two planets.

This explanation of the interplanetary distances implies that almost all small stars of the second generation or later have similar planetary systems in orbit, a point that we will consider in another connection later. Otherwise, the clarification of the distance situation is not of any special importance in itself. It is significant, however, that when we put together the different properties that the motion of the white dwarf constituent of a small binary system must possess, according to the theory of the universe of motion, we arrive at a series of interplanetary distances that are almost identical with the observed values. This numerical agreement between theory and measurement is a substantial addition to the evidence supporting the theoretical conclusions as to the nature of motion in the upper speed ranges.

The white dwarf is the only object with component speeds greater than the speed of light that is involved in the astronomical phenomena thus far discussed in this volume, the phenomena that take up about 80 percent of a standard astronomy textbook. But the remainder of this work will be concerned mainly with objects whose components, and often the objects themselves, are moving at upper range speeds. A full understanding of the nature and properties of the white dwarf will contribute materially to clarification of the more complex phenomena of the intermediate and ultra high-speed ranges that will be discussed in the pages that follow.

The smaller components of the solar system include interplanetary dust and gas, meteorites, asteroids, and comets. The asteroids are aggregates of Substance B, from 1000 km in diameter downward, which were never captured by planets, and did not accrete enough material to become planets in their own right. Most of the large asteroids are located in the "asteroid belt" between Mars and Jupiter, and represent the core of a potential planet that failed to complete its consolidation because of the gravitational effect of nearby Jupiter. The orbits of the asteroids are subject to modification by the gravitational forces of the planets, and occasionally one is deflected into an orbit that results in capture by the earth. Those that reach the earth intact, or in fragmentary form, are the previously mentioned iron, or stony-iron, meteorites. Stray aggregates of Substance A similarly captured are the stony meteorites. Most of these latter objects, like the asteroids, date from the original formation of the solar system.
Comets are relatively small aggregates of material drawn in by the sun from distant locations within its gravitational limit. Unless the incoming material happens to make a direct hit, it goes into a very elongated orbit on the first approach. At each return it loses part of its mass and reduces the size of its orbit. Eventually its entire contents are either

absorbed by one of the larger bodies of the solar system or distributed in the space surrounding those bodies. The earth participates in this process in a relatively small way, capturing both individual particles (sporadic meteors) and meteor swarms, which are portions of the detached cometary material that follow previous orbits of their parent comets. The current view is that there must be a "reservoir" of comets at some relatively large distance from the sun. In a sense, this is true, as the long period comets spend the greater part of their lives in the outer portions of their orbits. But this reservoir is merely a storage location, not a source. There is a certain residual amount of dust and gas within the gravitational limit of the sun, some inflow of diffuse matter from interstellar space, and a small, but continuous, formation of new matter from the incoming cosmic rays. Thus new cometary matter is constantly being made available. The number of comets in the system is probably now at an equilibrium level where the rate of formation is equal to the rate of loss due to evaporation from the comets and eventual capture of the remnants.

The contents of this chapter identify some of the factors that have a bearing on the question as to where planets are likely to exist, a question that excites a great deal of interest because it is a key element in any assessment of the possibility of the existence of life—particularly intelligent life—elsewhere in the universe. The B component of a binary system is either a star or a planetary system, not both. This eliminates all binary stars, and since all Class 1 stars are automatically excluded, it confines the possibility of planets to single Class 2 stars (such as the sun), or to single components of multiple systems (Class 3 and later). Inasmuch as a long period of reasonably stable conditions is probably required for the development of life—certainly for the emergence of any of the higher forms of life—the Class C stars of the second and later cycles, and the stars high on the main sequence, all of which are subject to relatively rapid change, can also be crossed off the list; the combined exclusion rules are summarized in the sketch at the end of this discussion.

This wholesale exclusion of so many different classes of stars may seem to limit the possibility of the existence of extra-terrestrial life very drastically, but in fact, these conspicuous and well-publicized stars constitute only a minor part of the total galactic population. The great majority of the stars of the Galaxy are small, and relatively cool, stars in the lower sections of the main sequence. As we will see in Chapter 12, there is a lower limit to the mass of a white dwarf star, and when the B component of a system is below this limit it cannot attain stellar status. This implies the existence of an immense number of planets among the smaller systems. Of course, there are requirements as to size, temperature, etc., that a planet must meet in order to be available as an abode for life, but there is a zone in each system within which a planet of an appropriate size is quite likely to meet the other requirements. Since Bode's Law (as revised) is applicable to all systems in which the conditions are favorable for planet formation (the small systems), it is probable that all of these systems have at least one planet in the habitable zone. The findings of this work thus increase the probability that there are a very large number of habitable planets—earth-like planets, let us say—in our own galaxy, as well as in other spiral galaxies.
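The exclusion logic can be restated compactly. The sketch below merely summarizes the criteria of the preceding paragraphs; the record type, the field names, and the numerical class labels are assumptions introduced for the illustration, not terminology from the theoretical development.

    from dataclasses import dataclass

    @dataclass
    class StarSystem:
        star_class: int          # 1, 2, 3, ... (generation/cycle class)
        variant: str             # "A", "B", or "C" within the class
        is_single: bool          # single star, or single component of a multiple system
        upper_main_sequence: bool

    def planets_possible(s: StarSystem) -> bool:
        # Planets require a sub-stellar B component: binaries are excluded,
        # and Class 1 stars have not yet passed through the explosion stage.
        return s.is_single and s.star_class >= 2

    def life_candidate(s: StarSystem) -> bool:
        # Life additionally requires long-term stability, which excludes
        # Class C stars and stars high on the main sequence.
        return (planets_possible(s)
                and s.variant != "C"
                and not s.upper_main_sequence)

    # The sun: a single Class 2 star low on the main sequence.
    sun = StarSystem(star_class=2, variant="B", is_single=True,
                     upper_main_sequence=False)
    assert planets_possible(sun) and life_candidate(sun)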
There are few, if any, in the galaxies smaller than the spirals—the ellipticals and the small irregulars—because they are composed almost entirely of Class 1

stars. The situation in the giant spheroidals is not yet clear. There are multitudes of lower main sequence systems in these giants, and these can be expected to have the usual proportion of planetary systems. However, the intense activity that, as we will see later, is taking place in the interior zones of these giants no doubt rules out the existence of life. Whether enough of this activity carries over into the outer parts of these galaxies to exclude life in these areas as well is uncertain. The oldest of these giants are probably lifeless. As we will find in Chapter 19, there is a strong X-ray emission from these mature galaxies, and this is probably lethal. So far as we know at present, however, there may be outlying regions in some of the younger galaxies of this class in which the conditions are just as favorable for life as in the spirals.

In today's science fiction, where life in other worlds is a favorite motif, the habitations of the alien civilizations are identified with familiar names, for reasons that are understandable. The thrilling action that the authors of these works describe takes place on planets that circle Sirius, or Arcturus, or some other well-known star. But according to our findings, few, if any, of these familiar stars are capable of having a habitable planet in orbit, and are also old enough to have developed complex forms of life. Sirius, for instance, has a white dwarf companion instead of a planetary system. Arcturus is a young Class C star. The astronomers do not make the mistake of identifying the environments of these stars as the abode of life, but they avoid it by making a different mistake. In selecting the target of their first systematic attempt at interstellar communication (1974) they were misled by their current view of the evolutionary direction of the stars. This initial effort was directed at the globular cluster M 13, on the assumption that it is a very old structure in which the processes that lead to life have had ample time in which to operate. We now find that the globular clusters are relatively young structures which, aside from a few stray stars that have been picked up from the environment, are composed entirely of Class 1 stars. These cluster stars have not been through the explosion process, and therefore have no planetary companions at all. As matters now stand, the available information indicates that habitable planets are plentiful, but that the planets on which life probably exists are not located in any systems that we can call by name. The stars that they are orbiting are undistinguished, anonymous, and, with few, if any, exceptions, unseen stars of the lower main sequence.

CHAPTER 8

Evolution - Globular Cluster Stars

Even though a globular cluster may contain as many as a million stars, it is too small to have any major effect on the structure of a large spiral galaxy such as ours when a capture takes place. But since this capture occurs practically on our doorstep, we are able to trace the progress of the clusters into the main body of the galaxy, and to read their history in considerable detail. This process is too slow to be followed observationally, but we can accomplish essentially the same thing by identifying clusters in successively later

stages of development, and establishing the order in which the various changes take place.

As brought out in Chapter 3, the globular clusters are being drawn in toward the galaxy from the surrounding space by gravitational forces, and the observed concentration of the clusters thus far located within a sphere that has a radius of about 100,000 light years is merely a geometrical effect. The clusters move "as freely falling bodies attracted by the galactic center," and they do not participate, to any significant extent, in the rotation of the Galaxy. Thus the observations indicate that the clusters are on the way to capture by the Galaxy. The increasing strength of the gravitational forces as the clusters approach closer to the Galaxy has a disruptive effect on the positional equilibrium within the clusters. The outer stars tend to be stripped away, and the clusters therefore decrease in size as they approach. Observations reported in Chapter 3 indicate that a cluster loses more than one third of its mass by the time it reaches a position within 10,000 parsecs of the galactic center. In the capture zone, the region in which the structure of the clusters begins to be disrupted, the losses are still greater, and at the time when contact is made with the Galaxy the remaining stars are numbered in the tens of thousands rather than in the original hundreds of thousands.

On entry into the rapidly rotating galactic disk still further disintegration occurs, and the globular cluster separates into a number of open clusters. These are relatively small groups, most being in the range from around a dozen to a few hundred stars, although a few have as many as a thousand. The total mass of a small cluster of this kind is not large enough to produce a gravitational attraction that is equal to the outward progression of the natural reference system, even when augmented by the gravitational effect of the galaxy as a whole. The open clusters are therefore expanding at measurable rates.

One of the results of this rapid expansion is that the lifetime of these clusters is relatively short. In order to account for the large number of such clusters now in existence, which runs into the thousands—one estimate96 is 40,000, when due allowance is made for the fact that only a small fraction of the total can be identified from our position in the galaxy—there must be some process in operation that continually replenishes the supply. The astronomers have been unable to find any such process. Like other members of the human race, they are reluctant to admit that they are baffled, so the general tendency at present is to assume that the open clusters must originate by means of the star formation process that they believe is taking place in dense dust clouds. But this explanation simply cannot stand up. If the cohesive forces in these clouds are strong enough to form a cluster they are certainly strong enough to maintain it. The observed expansion thus contradicts the hypothesis of formation near the present cluster sites. Of course, it is conceivable that some clusters formed under certain favorable conditions might at some later date encounter conditions that would cause them to disintegrate, but all open clusters are disintegrating, and astronomical theory has to explain this fact. No stable stellar aggregate exists in the range between the globular clusters and the multiple star systems.
If the issue is squarely faced, it is clear that conditions in the Galaxy are favorable for dissolution of the clusters, whereas the existing clusters must have been formed under conditions favorable for such formation.

Those astronomers who do face the issue recognize that current theory has no satisfactory answer to the problem, notwithstanding the wide range of possibilities that has been explored. Bok and Bok, who discuss the question at some length, conclude that at least some classes of clusters are not being replaced. The most conspicuous clusters, the Pleiades, Hyades, etc., are disintegrating, and these authors say that "there seem to be no others slated to take their place." Likewise they conclude that the "open clusters with stars of spectral types A and later . . . may be a vanishing species."97

The obvious answer cannot be ignored completely. Bok and Bok concede that "one might be tempted to think about dismembered globular clusters as possible future Pleiades-like clusters," but since this conflicts with the prevailing ideas as to the direction of stellar evolution, they resist the temptation, and dismiss the idea as impossible. Here, again, the physicists' assumption as to the nature of the energy generation process must be supported, whatever the cost to astronomy may be. The two considerations that they say "show how impossible this would be" are first, that the spectral changes required in going from globular to Pleiades-like clusters are impossible, and second, that the "rate of evaporation for globular clusters is far too slow."

The first of these objections is simply a reiteration of the upside down evolutionary sequence that the astronomers have adopted to conform to the physicists' assumptions. As already explained, the evolutionary path for all stars is from globular cluster to main sequence, not vice versa. And the globular clusters that fall into the Galaxy do not shrink slowly by evaporation; they are torn apart quickly by the rotating matter of the galactic disk. The piece of information that has been lacking in the astronomers' view of the situation is the existence of an interstellar force equilibrium that gives an aggregate of stars the physical characteristics of a viscous fluid. The entry of the cluster into the galaxy is physically similar to the impact of one fluid aggregate on another. All of the elements of the problem fall into place when it is viewed in the light of the theory of the universe of motion.

The conclusion as to the origin of the open clusters derived from this theory is reinforced by the available data on the properties of these stellar groups. One of these properties is the density of the group. Any gravitationally bound group of stars has a density greater than that of the field of stars in its environment. Inasmuch as the aggregate of stars in the Galaxy has the characteristics of a liquid, a stellar group whose density exceeds the density of the field stars will fall toward the galactic plane. This is a necessary consequence of the gravitational differential, and the descent will take place regardless of the nature of the influences that are responsible for the separation between the field stars, and regardless of whether the clusters fall into the Galaxy, as asserted by the theory of the universe of motion, or originate somewhere within that structure, in accordance with present-day astronomical theory. Even the much looser "associations" participate in this response to the gravitational differential.98 Since the clusters are falling objects, those that are higher above the galactic plane are younger, on the average, than those lower down. One of the most conspicuous members of the higher class is M 67, about 440 parsecs above the plane.
At the other extreme are objects such as the double cluster in Perseus, which is in the general vicinity of the plane. It follows directly from the relative positions of the two classes that the clusters of the M 67 class are the younger and those of the Perseus class are the older.

This conclusion derived from the relation of position to cluster density is corroborated by direct observation of density changes. Inasmuch as the clusters are expanding, their densities are decreasing with age. While the density of any individual cluster may reflect the particular conditions to which it has been subject, the average density of the clusters of each class should depend mainly on the amount of expansion that has occurred. It therefore follows that the clusters with the higher average density are the younger, and those with the lower average density are the older. Studies show that the clusters of the M 67 class have the higher average density.99 Hence these are the young clusters, and the clusters of the Perseus class are relatively old—the same conclusion that we reach from a consideration of the positions above the galactic plane.

Both of these indications of relative age are observed properties of the clusters, and are independent of the astronomical theory in whose context they are viewed. In this case, then, we have something that is very rare in astronomy: a direct observational indication of the direction of evolution. Here we have positive proof that the stars of the main sequence are older than the stars of the globular cluster type (the kind of which M 67 is composed). This negates the basic premise on which current theory of stellar evolution is founded. That theory asserts that the stars of the upper main sequence are necessarily young because the supply of hydrogen for production of energy will be exhausted in these stars in a relatively short time. The proof that these stars are not young now turns the argument upside down. The demonstrated fact that they are relatively old stars shows that hydrogen is not the stellar fuel.

With the addition of this evidence to the many items previously noted, we now have a positive definition of the direction of evolution of the stars of the globular and open clusters, and by extension, a definition of the direction of stellar evolution in general. In order to see just how this information fits into the theoretical picture, we will now turn to a consideration of the evolution of the stars in the clusters.

Inasmuch as the remains of disintegrated stars and galaxies are scattered throughout all space, and atoms of matter are continually forming in this space from the decay products of the cosmic rays, there is a certain minimum amount of material subject to accretion in any environment in which a star may be located. Immediately after the formation of a globular cluster star by condensation of a portion of a protocluster, this thin diet of primitive material, and the atom building that takes place within it, are all that is available for growth, and the evolution of the stellar structure is correspondingly slow. The stars of the globular clusters are therefore in an early stage of development. Aside from a few strays from older systems that have been incorporated during the formation of the cluster, the distant clusters contain only Class 1 stars: infrared stars, red giants, sub-giants, long-period Class 1A variables, and variables of the RR Lyrae and associated types. To these, the clusters closer to the Galaxy add some Class 1B stars of the lower main sequence.

As noted in Chapter 4, the CM diagram provides a picture of the most significant changes that take place in the constituent stars of the globular clusters.
The first stage of their evolution, after they become observable in area O of the diagram, is a contraction under the influence of the combined gravitational forces of each star itself and the cluster as a whole. This ends for each star when it reaches gravitational equilibrium on the line BC, the main sequence. Thus the paths OAB and OAC on the CM diagram of M 3 are the routes followed by the stars of this cluster in the continuation of the process by which they originated. The locations along this path represent what we may call evolutionary ages. A star at point B or point C has traveled the entire length of its path OAB or OAC.

Although it is common practice to refer to the pre-stellar aggregate as a dust cloud, it is actually a gas cloud with a small dust content. Thus the physical aspect of the evolution of the newly formed stars is defined by the behavior of an isolated gaseous aggregate subjected to a continuing increase in temperature and pressure under the influence of gravitational forces. Since the matter of the star is above the critical temperature by the time that the pressure reaches significant levels, it has been assumed that the star is gaseous throughout its structure. As expressed in one textbook, "Because the sun (a star) is so hot throughout all its volume, all of its matter must be in the gaseous state."100

This statement is valid on the basis of the conventional definition of the gaseous state, in which this state has no density limit, but the investigation upon which this work is based (see Volume II) has shown that this definition leads to some erroneous conclusions. In particular, it leads to the conclusion that all matter above the critical temperature conforms to the gas laws: the general gas equation PV = RT and its derivative relations. This is not true. In fact, these laws do not apply to matter at all. They apply only to the empty space between the atoms or molecules of the gas. At very low densities the volume of a gas aggregate, as measured, consists almost entirely of empty space, and the gas laws are therefore applicable. As soon as the density increases to the point where the volume occupied by the particles of matter begins to constitute an appreciable proportion of the total, a correction for the deviation of the volume from that of the "ideal gas" (the empty space) must be applied. A further increase in density ultimately brings the aggregate to a critical point at which the correction becomes the entire volume; that is, the empty space has been completely eliminated. The aggregate is now a condensed gas.

Inasmuch as conventional physics has no theoretically based relations from which to compute the magnitudes of the various properties of gas aggregates at high pressures, and relies for this purpose on empirical relations restricted to a relatively low pressure range, the existence of this third condensed state of matter was not detected prior to the development of the theory of the universe of motion. In the light of this theory, however, the existence of the condensed gas state is a necessary consequence of the nature of physical state. In the gaseous state the individual units, atoms or molecules, are separated by more than one unit of space, and are therefore moving freely as independent particles. In the condensed states (solid, liquid, and condensed gas) the separation has been reduced to the equivalent of less than a unit of space (by the introduction of time). Here the individual particles occupy fixed (solid state) or spatially restricted (liquid and condensed gas) positions in which they are subject to a set of relations quite different from the gas laws.
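The distinction described above can be made concrete with a rough numerical sketch. The covolume value b used below is a hypothetical figure chosen purely for illustration, not a quantity taken from the text; the sketch simply shows how the fraction of "empty space" in a measured gas volume shrinks as the pressure rises, until nothing but the molecular volume remains.

```python
# Illustrative sketch only: treating the measured gas volume as empty space
# (the part the gas laws describe) plus the space occupied by the molecules
# themselves. R is the conventional gas constant; b is a hypothetical
# covolume per mole, chosen only to make the trend visible.

R = 0.082057  # liter-atm / (mol K)

def measured_volume(P_atm, T_K, n_mol=1.0, b=0.03):
    """Measured volume = 'ideal gas' volume (the empty space) plus the
    volume occupied by the molecules themselves (n_mol * b)."""
    return n_mol * R * T_K / P_atm + n_mol * b

def empty_space_fraction(P_atm, T_K, n_mol=1.0, b=0.03):
    V = measured_volume(P_atm, T_K, n_mol, b)
    return (V - 1.0 * b) / V

for P in (1, 10, 100, 1000):
    print(P, "atm:", round(empty_space_fraction(P, 373.0), 4))

# At low pressure the aggregate is almost entirely empty space, so the gas
# laws apply; as the pressure rises, the molecular volume becomes an
# appreciable fraction of the total, and when the empty space is gone the
# aggregate is, in the text's terms, a condensed gas.
```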
As brought out in Volume II, for example, the volume of a solid aggregate is inversely proportional to the square root of the total pressure, including the internal pressure, rather than inversely proportional to the external pressure as in the gaseous state. A study of the volumetric relations carried out in the course of the investigation on which this work is based has disclosed that the transition to condensed gas takes place within the temperature and pressure range of much of the experimental work reported in the scientific literature. For instance, application of the theoretical relations to the volumetric data on water at 100°C indicates that the transition from the gaseous state to the condensed gas state begins at about 600 atm pressure and is completed at about 3000 atm. Above this level the condensed gas volume can be computed by means of the relations that apply to the liquid state. The temperatures in the stars are, of course, vastly greater, but so are the pressures, and both the gaseous and condensed gas states exist within the stellar temperature and pressure range, a fact that has an important bearing on the evolutionary pattern of the stars.

One important property shared by all of the condensed states is that an aggregate in any one of these states has a definite surface. This is not true of a gas cloud. Such an aggregate simply thins out with the radial distance until it reaches the density of the surrounding medium. This point is generally recognized in the case of star clusters and galaxies, which are structures of the same kind, differing only in that the constituent units are stars rather than particles. The fact that the dimensions of these objects, as observed, depend on the limiting magnitude reached by the observations is well known, but the corresponding phenomenon in the stars, if it is recognized at all, is not emphasized in the astronomical literature. This is no doubt due, at least in part, to the observational difficulties. The dimensions of the stars of the dust cloud classes can only be observed by means of special techniques of limited applicability (such as interference methods) or under special circumstances (such as in eclipsing variables), and the absence of surfaces has not been evident enough to attract attention. The only star that is readily accessible to observation, the sun, belongs to the other class of stars, those that do have definite surfaces.

The condensation of a dust and gas cloud under the influence of gravitational forces is an equilibrium process: not a static equilibrium like that of the stars on the main sequence, where the variables react in such a way as to maintain constant relations, but a dynamic equilibrium, in which the interactions between the variables maintain a uniform pattern of change in their relations. Consequently, all of the clouds condensing into stars follow the same evolutionary path, differing only in the rate at which they move along that path. At any given stage of the contraction process along the line OA on the CM diagram, all stars therefore have the same effective mass and volume (aside from the variations that are responsible for the width of the line), irrespective of the size of the dust clouds from which they are drawing their material.

In this first part of the evolutionary path the continuing condensation of the stellar aggregate is made possible only by the assistance of the gravitational effect of the cluster as a whole, as this early type of star is not a self-gravitating object. As indicated in the earlier discussion, however, the gravitational forces of the star are strengthened as it becomes denser, and at a certain point, designated A on the CM diagram, Fig. 3, the star reaches the critical density where it becomes self-gravitating; that is, it is capable of further contraction toward gravitational stability without outside assistance.
Beyond the point at which the critical density is reached, the two processes, the original growth process and the self-gravitation, are in competition. The outcome depends on the relative rapidity of the processes.

If the growth of the star has taken place all the way from particle size, without the benefit of any gravitationally stable core, the contents of the parent dust cloud are practically exhausted by the time that the star reaches the critical density at point A. In this event the self-gravitation initiated at A proceeds at a more rapid rate than the growth by accretion. The star then pulls away from its surroundings and moves directly down the diagram along the line AB, the line of constant mass. If the star did have a pre-existing fragment as a nucleus, growth along the line OAC is able to continue. As noted in Chapter 4, the availability of even a very small fragment as a nucleus for condensation gives a star a big advantage over the majority, which have to start from particles. Because of the much larger amount of dust and gas over which they are able to establish gravitational control, these stars that had the head start are usually able to follow the line AC all the way to point C, or at least to the vicinity of that point. In some cases there is a tendency for the observed paths to bend downward shortly before reaching C, indicating that the material for growth has been exhausted. In other cases the trend in the vicinity of point C is upward. This is no doubt due to accelerated accretion from favorable environments.

Inasmuch as the process by which the primitive cloud of matter was formed, as described in Chapter 1, produces essentially the same initial conditions in each cluster, the equilibrium conditions are practically the same for all clusters. It follows that the critical points A and C on the line OAC are the same for all of these clusters. This conclusion refers, of course, to the true values, the absolute magnitudes. But the astronomers' evaluation of absolute magnitudes is subject to a considerable degree of uncertainty. For present purposes, therefore, it appears to be advisable to deal with the observed magnitudes, using the observed magnitude at some identifiable location in each diagram as a reference point. The resulting diagram is identical with that which would result from the use of the correct absolute magnitudes, except that the magnitude scale is shifted by an amount that reflects the effect of distance and obscuration.

There are some other factors (chemical composition, for instance), in addition to the evolutionary development, that affect the variables represented on the CM diagram, and these factors, together with the observational uncertainties, result in rather wide evolutionary paths. Aside from these effects, however, the foregoing theoretical conclusions indicate that the upper sections of all CM diagrams of globular clusters should be identical, to the extent that the evolution of each cluster has progressed. Fig. 9 shows that this theoretical pattern is followed by six of the most prominent globular clusters. The outlined areas in each cluster diagram show the observed star locations. The boundaries of these areas have been located by inspection of diagrams published in the astronomical literature. Greater accuracy is possible, but this would call for an expenditure of time and effort that did not appear to be justified for the purposes of this somewhat preliminary survey of the situation.

The theoretical evolutionary lines, the diagonal lines in the diagrams, are the same for all clusters, except that in each case the reference point determines the magnitude scale. Whatever differences in the lengths and slopes of these lines may exist between the individual diagrams are due to differences in the scales of the original diagrams from which the data were taken. The upper of the three points identified on each line is the reference point. The point corresponding to a B-V color index of 1.4 has been selected as the reference point in most of the CM diagrams in this volume, because it is usually quite clearly defined by the observations, but where the 1.4 location is uncertain some better defined location has been substituted. What the diagrams show is that if the location of the reference point is taken to represent the absolute value of the luminosity, then the points A and C on the line OAC, as previously defined, have the correct theoretical relation to the reference value, within the accuracy of the representation. Some of the evolutionary paths tend to diverge from the theoretical line as they approach the main sequence at point C, but the deviation is within the range of the processes previously mentioned as being applicable in this region.

These considerations that apply to the upper section of the diagram, the line OAC, are likewise applicable to the lower sections, the line AB and the relevant portion of the main sequence, which have been identified observationally for only a limited number of clusters. It then follows that when the location of any one point is specified in the manner that has just been described, the M 3 pattern can be applied to a determination of the entire theoretical pattern of any globular cluster. The complete CM diagrams thus obtained for two of the clusters of Fig. 9 are shown in Fig. 10. These clusters clearly conform to the theoretical pattern.

It is true that there is considerable variability in the line AB, but this is easily understood as a result of the expansion and contraction of the cluster during the travel toward the Galaxy. As explained in Chapter 3, the cluster is subject to substantial loss of stars during its approach, because of differential gravitational effects. These losses alter the equilibrium in the cluster, and tend to cause density fluctuations. The variations in the cluster density have a corresponding effect on the pressure that is exerted on the individual stars by the gravitational force of the cluster as a whole, thus transmitting the density fluctuations to the stars. If the cluster and its constituent stars are expanding as the stars approach point A, the contraction along the line AB is delayed to some extent, and the evolutionary path is displaced to the left of the line. Then, when the expansion phase of the density cycle is succeeded by a contraction, the path is displaced to the right at some location farther down the line. There may even be another swing to the left before the main sequence is reached. As can be seen in the diagrams, this cyclic effect is at a minimum in M 13, but it shows up clearly in such clusters as M 3 and M 5.

The red giant section OA of the CM diagram of a globular cluster is usually well defined, even where the limiting magnitude to which the observations have been carried cuts off most of the lower portions of the diagram. Since only one observed point is required in order to establish the complete Class 1 diagram, and any point in this well-defined giant section will serve the purpose, it is not difficult to define the theoretical CM diagram for an ordinary globular cluster. Furthermore, if the observations extend to the main sequence, the accuracy of the diagram thus defined can be verified by the observed positions of the main sequence stars. Thus, as indicated by the diagrams already introduced, there is little question as to the position of the evolutionary paths. Uncertainties arise only in the case of the very distant clusters that are observed with such difficulty that only the most luminous stars can be identified.

Even at these distances the diagrams are often well defined. For instance, Fig. 11 shows the relation between the theoretical OAC line and the observed locations of the stars of two of the most distant clusters for which data are available. These clusters, NGC 6356 and Abell 4, have uncorrected magnitudes at a B-V color index of 1.4 of 16.2 and 18.2 respectively. These compare with 12.1 for M 13 and 10.4 for NGC 6397, the cluster closest to the sun. The luminosity of the most distant of these four clusters is less than that of the closest by a factor of more than a thousand.

A point that should be noted in connection with the evolutionary pattern of the globular clusters is that the difference in luminosity (on the logarithmic scale) between point B and point A, 5.6 magnitudes, is twice the difference between point B and point C, which is 2.8 magnitudes. The significance of this relation will be discussed in Chapter 11.

Identification of the globular cluster pattern as a fixed relationship provides a simple and potentially accurate method of determining the distances to the clusters. Inasmuch as the theoretical findings indicate that the pattern is identical for all the clusters, it follows that the absolute magnitude corresponding to any specific color index is the same for all. Like the evolutionary pattern itself, we will have to determine this absolute magnitude empirically for the present, but once we have it for one cluster, we can apply it to all clusters. A value of 4.6 at a B-V color index of 0.4 on the main sequence has been selected on the basis of two criteria. First, this agrees with the currently accepted values applicable to the nearest clusters, which are presumably the most favorably situated for accurate observation; and second, this value yields practically the same average as the observational values given by W. E. Harris for a long list of clusters.101 These previously reported values from observation should average close to the correct magnitude if there are no systematic errors, even though the error range of the individual values is conceded to be quite wide. The distance moduli (the differences between the absolute and apparent magnitudes) calculated on the 4.6 basis are compared with those given in the tabulation by Harris in Table II. A few distant clusters not listed by Harris are also included.
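As a check on the factor-of-a-thousand statement above, the standard magnitude-to-brightness conversion can be applied directly. This is conventional astronomical arithmetic, not anything specific to the present theory:

```python
# A difference of delta_m magnitudes corresponds to a brightness ratio of
# 10**(0.4 * delta_m), since each magnitude step is a factor of 10**0.4.
def brightness_ratio(m_faint, m_bright):
    return 10 ** (0.4 * (m_faint - m_bright))

# Uncorrected magnitudes at B-V = 1.4 quoted in the text:
# Abell 4 at 18.2 versus NGC 6397 at 10.4.
print(round(brightness_ratio(18.2, 10.4)))  # about 1300: "more than a thousand"
```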
For the benefit of those readers who are not much at home with the astronomers' magnitude system, the distances are expressed in terms of light years in the last column of the table. The potential accuracy of the method is not fully attained in the present work because of the previously mentioned approximate nature of the process employed in identifying the reference point for each cluster. But even so, more than half of the values calculated on this basis in the course of the present study agree with the values given by Harris within his estimate of the probable error range. Most of the observers' original reports do not specify whether the values shown in their diagrams have been corrected for the reddening due to dust along the line of travel of the radiation. The theoretical distances in the table have been calculated on the assumption that no such correction has been applied to the plotted values. If this is incorrect in any specific case, the calculated distance will be modified accordingly.

The main sequence, as defined by the astronomers,102 has an absolute magnitude of about 3.8 at a B-V color index of 0.4. This puts it 0.8 magnitudes above the position obtained for the globular clusters from the study of the CM diagrams. The significance of this difference will be discussed later.

The evolutionary age of each cluster is indicated by its position on the CM diagram. The overall range from the earliest to the latest type of star in the cluster remains about the same, but the positions of both the front end of the age sequence, the location of the most advanced stars, and the rear end, the location of the least advanced, move forward. Bart J. Bok comments that the branch of the diagram of the cluster Omega Centauri that is occupied by the red giants "is unusually long," and also that the data "do not reveal the full extent of the main sequence."87 He attributes the length of the giant branch to a high degree of variability in the metal content. Our analysis shows, however, that both of the features of the diagram that Bok mentions are aspects of the same thing. They indicate that Omega Centauri is not as far advanced from the evolutionary standpoint as a cluster such as M 13, for example. There are stars in Omega Centauri that are earlier (that is, farther to the right in the CM diagram) than the earliest stars in M 13, while not enough stars have reached the main sequence to give this cluster a main sequence population comparable to that of M 13.

TABLE II
DISTANCES - GLOBULAR CLUSTERS

                               Distance Modulus
Cluster           Reddening   Harris    This Work   Distance (lt. years)
Abell 4                                   20.2            357,000
Palomar 14                                20.0            326,000
NGC 6256                                  19.2            225,000
Kron 3                                    18.8            188,000
NGC 6356            0.21      17.07      18.0            130,000
M 14                0.58      16.9       17.7            113,000
Palomar 5                                 17.2             90,000
NGC 6235            0.38      16.6       16.4             62,000
NGC 6144            0.36      15.6       15.8             47,000
NGC 1851            0.07      15.4       15.8             47,000
NGC 6535            0.36      16.1       15.6             43,000
M 79                0.00      15.65      15.3             37,000
NGC 5053            0.03      16.00      15.3             37,500
NGC 288             0.00      14.70      14.9             31,000
M 68                0.03      15.01      14.7             28,000
M 15                0.07      15.26      14.6             27,000
M 3                 0.00      15.00      14.6             27,000
M 30                0.01      14.60      14.5             26,000
M 5                 0.07      14.51      14.4             26,000
M 2                 0.19      14.30      14.3             23,500
Omega Centauri      0.11      13.92      14.3             23,500
47 Tuc.             0.04      13.46      14.2             22,500
M 71                0.28      13.90      14.1             21,500
NGC 3201            0.28      14.15      14.1             21,500
M 13                0.02      14.35      14.1             21,500
M 22                0.35      13.55      13.9             19,700
M 92                0.01      14.50      13.9             19,700
M 55                0.07      14.00      13.9             19,000
M 4                 0.01      13.20      13.1             14,300
NGC 6752            0.01      13.20      13.1             13,700
NGC 6397            0.13      13.30      12.3              9,400
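The light-year figures in the last column follow from the distance moduli by the standard astronomical relation m - M = 5 log10(d) - 5, with d in parsecs. A minimal sketch of the conversion, checked against a few table entries:

```python
LY_PER_PARSEC = 3.2616  # light years per parsec

def distance_ly(distance_modulus):
    """Convert a distance modulus m - M to light years, using the
    standard relation m - M = 5 log10(d_pc) - 5."""
    d_pc = 10 ** ((distance_modulus + 5.0) / 5.0)
    return d_pc * LY_PER_PARSEC

# Spot checks against the "This Work" column of Table II:
for name, mu in [("M 13", 14.1), ("NGC 6397", 12.3), ("Abell 4", 20.2)]:
    print(name, round(distance_ly(mu), -2))

# M 13 -> ~21,500; NGC 6397 -> ~9,400; Abell 4 -> ~357,600,
# matching the rounded values in the table. If a reddening correction
# (conventionally about 3.1 x E(B-V) magnitudes) were applied, the
# modulus, and hence the distance, would be reduced accordingly.
```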

The evolutionary age of the matter of which the stars of a cluster are composed is, of course, related to the age of the cluster, but these ages are not coincident. The chronological age of the matter includes not only the time spent in the star cluster stage, but also the time spent in the diffuse stage that precedes condensation into a star. This is subject to considerable variation. Furthermore, there are circumstances under which the evolution of the matter proceeds much faster than the evolution of the cluster. Thus, although the older clusters are, in general, composed of older matter, there is no direct relation. Some examples of accelerated evolution of matter in clusters will be examined in Chapter 9.

The question as to the ages of the globular clusters has received a great deal of attention from the astronomers, because the clusters are presumed to have been formed within a relatively short time after the Big Bang in which most astronomers now believe the universe originated. On this basis, as Bok points out in a recent discussion of the subject, the clusters "seem to be the oldest objects in the Milky Way."83 But it is agreed that the concentration of "metals" (heavy elements) in an astronomical object is an indicator of its age, and as Bok acknowledges, there are differences in the metal content of the clusters that "imperil" the current theory of their formation. Some, particularly those most distant from the galactic center, are relatively metal-poor, while others have substantially greater metal content. Harris and Racine give us this assessment of the situation: "It is plain that the maximum [Fe/H] decreases roughly linearly with log R [distance from the galactic center], even out to about 100 kpc."104

Many astronomers are beginning to recognize that this radial dependence of the cluster ages, as indicated by the metal abundances, is inconsistent with present-day astronomical theory. Bok, for example, recognizes that something is wrong here. He states the case in this manner: "The spread of ages for the globular clusters conflicts with current models of how the galaxy evolved."83

Our finding is that almost all of the conclusions in this area that have been reached on the basis of current astronomical theory are wrong, either in whole or in part. On first consideration it may seem unlikely that errors would be made on such a wholesale scale, but actually this is an inevitable result of the manner in which astronomical conclusions have to be reached under present conditions, where there is no general theoretical structure connecting the various astronomical areas. In the absence of the restraints that would be imposed by such a general structure, wrong theories and wrong interpretations of observations are able to reinforce each other and resist correction. In the case now under consideration, a wrong theory of stellar energy generation, a wrong theory of the origin of the universe, and a wrong theory of stellar evolution provide mutual support for each other, and for the wrong interpretation of the place of the globular clusters in the astronomical picture. Correction of these errors one by one is not feasible, because a change in only one of the erroneous hypotheses introduces obvious contradictions with those that are retained. All of the major errors that are relevant to the point at issue have to be corrected simultaneously in order to arrive at a consistent system of thought. This is the objective of the present work.

CHAPTER 9

Gas and Dust Clouds

As explained in Chapter 1, the original aggregates into which the primitive dispersed matter separates are the predecessors of the globular clusters. At first they are merely masses of the primitive matter in gravitational equilibrium, but they are caused to contract for reasons previously stated, and they eventually arrive at a density sufficient to justify calling them clouds of dust and gas. If these clouds remain undisturbed for a sufficient length of time, they ultimately condense into globular clusters of stars, as indicated in the earlier chapters.

Although the protoclusters are probably somewhere near the same size, they are subject to different conditions because of such factors as the amount of fragmentary old material present, and the position of the protocluster in what we have called the group. Consequently, the rate at which condensation into stars takes place is subject to considerable variation. In the preceding pages we have been tracing the development of the faster aggregates, but we have now reached the point where the slower group enters into the evolutionary process in a significant way. We will therefore want to take a look at what has happened to the slower aggregates while all of this development of the faster ones has been going on.

The slower aggregates are subject to the same external gravitational forces as the faster group. Thus they undergo the same kind of combination and capture processes as their more advanced counterparts. It is possible that some of them may remain isolated long enough to complete the process of consolidation into clusters of stars. In that event they follow the course that we have been describing. But because of the difference in the amount of time required for completion of the condensation process, many of the slower aggregates are captured while they are still in the gas and dust cloud stage. As a result, they enter into the galactic structure as clouds of particles rather than as stars.

We have already noted (in Chapter 3) the existence of evidence indicating that the Galaxy is capturing some globular clusters in a pre-stellar stage. One report reads as follows:

The most striking result of surveys of the distribution and motions of neutral hydrogen away from the galactic plane is the discovery of several high velocity hydrogen clouds or concentrations, nearly all having negative (approaching) radial velocities of up to about 200 km/s.104

Here we see that the unconsolidated clusters, like the globular star clusters, are moving "as freely falling objects attracted by the galactic center,"28 in accordance with the conclusions that we derive from the theory of a universe of motion. The observed approach of these aggregates implies that there have been captures of similar aggregates in the past, and that the remains of these immature globular clusters are present in the Galaxy. Unlike the star clusters, which are broken into relatively small units as soon as they fall into the rotating disk, the particles of which the clouds are composed are able to penetrate into the interstellar spaces, and they envelop the stars that they encounter, rather than colliding with their radial force fields. A cloud of this kind therefore tends to maintain its identity for a substantial period of time, although its shape may be greatly modified by the objects that it encounters.

Until quite recently no evidence of gas and dust aggregates of globular cluster size had been found within the Galaxy. Smaller aggregates (nebulae, as they are called) have been recognized ever since the early days of astronomy, some bright, others dark. Only within the last few years has it begun to be recognized that many, perhaps most, of these identified nebulae are portions of much larger aggregates. For instance, Bok and Bok report that the Orion nebula, the most conspicuous of these objects, is actually a part of a larger cloud with a total mass of 50,000 to 100,000 solar units (comparable to the size of the globular clusters that are being captured). They characterize the Orion nebula as "just a little sore spot of ionized hydrogen in the larger complex."105

Still more recently it has been found that there are many larger clouds of gas in the Galaxy that have masses comparable to those of the large globular clusters, in the range from 100,000 to 200,000 solar masses. According to a report by Leo Blitz, these giant clouds are about 20 times as numerous in the Galaxy as the globular clusters. Both of these characteristics (size and abundance) are in agreement with what would be expected on the basis of the theoretical origin of the clouds as captured immature globular clusters. The gas cloud is less subject to loss of mass in approaching the Galaxy than the star clusters because of the vastly greater number of units involved (particles in one case, stars in the other), while, as already noted, it is not subject to being broken up by contact with the moving stars of the Galaxy in the manner of the globular star clusters.

The report by Blitz contributes some further information that verifies the identification of these giant gas clouds with the immature globular clusters. "The density of the gas in each cloud," he says, "is 100 times greater than the average density of the interstellar medium."106 It is difficult, probably impossible, to explain the formation of a distinct aggregate of this size within a rotating galaxy, and since the observed density establishes the cloud as a definite unit, distinct from the interstellar medium, the observations lend strong support to the theoretical conclusion that the clouds were formed outside the Galaxy and captured later. Furthermore, Blitz also reports that "the gas in each cloud is organized into clumps whose density is 10 times greater than the average density in the cloud." He adds that some clumps with much greater density have been observed. The nature of these "clumps" is practically obvious in the light of our findings. Here, of course, are the immature stars of the immature globular cluster. The clumps that are larger than the average are the aggregates that would have followed the upper branch OAC of the Class 1 evolutionary path if capture by the Galaxy had not intervened to prevent the consolidation that would have given these clumps the status of stars.
The simple history of these gas and dust clouds, as derived from theory (formation by the globular cluster process, capture by the Galaxy, mixing with the galactic stars, and eventual expansion into and merging with the interstellar medium), is in direct conflict with the upside down evolutionary view derived from the physicists' assumption as to the nature of the stellar energy generation process. Since the astronomers have accepted that erroneous view of the direction of evolution, they are forced to invent processes whereby the normal course of events is reversed. Instead of originating as massive aggregates and being gradually disintegrated by the rotational forces of the Galaxy, forces that are known to exist and to operate in that direction, the astronomers find it necessary to assume the existence of some unknown counterforce that causes the clouds to form and grow to their present size against the normal direction of change. "Some mechanism must be continually forming them in the galaxy," says Blitz. But he admits that the mechanisms thus far suggested ("density waves," magnetic effects, etc.) are not convincing. "The solution to the problem of how the complexes form does not seem to be close at hand," he concludes. This is another understatement of the kind that is so common in the astronomers' comments on their problems. The solution not only is not "close at hand"; it is not perceptible even in the far distance. The problem is still further complicated for the astronomers because their theory requires the clouds to form and then disperse again, while they remain in the same environment and subject to the same forces.

The specific words used in the quotation in the preceding paragraph are worth a few comments, as they are repeated over and over again in current astronomical literature, and they epitomize the attitude that has made it possible for such a large theoretical structure of an imaginary nature to develop in the astronomical field. Some mechanism must exist, the author says, to take care of the problems that are encountered in trying to reconcile the observations with the deductions from the basic premises of the current theory. We have met this contention many times in the earlier pages, and we will encounter it again and again in the pages that follow. The observed facts stubbornly refuse to cooperate with the theorists, but the basic assumptions from which the theoretical conclusions are derived, particularly the assumption as to the nature of the stellar energy generation process, are sacrosanct. They cannot be questioned. There must be something, somewhere ("some mechanism") that brings the recalcitrant facts into line, current astronomical thought insists.

One of the reasons why the astronomers are having so much difficulty in dealing with the dust and gas clouds in the Galaxy is that they have never arrived at an understanding of their structure: just what it is that maintains them in their existing condition. As explained by Blitz in his article:

Under normal circumstances the pressure inside a cloud roughly balances the cloud's self-gravitation, which would tend to collapse the cloud if its action were unopposed. What generates the pressure is a major unanswered question.

The truth is that this is the same "major unanswered question" that the astronomers face with respect to the structure of the globular clusters. They have managed to avoid conceding their inability to explain the cluster situation, but they have no option in the case of the clouds, as the opportunities for ad hoc assumptions that would enable them to evade the issues are too limited. There is clearly no rotation, and the temperature, which reveals the particle velocities, is observable. As conceded in the foregoing quotation, it is clear that there is something missing in the current understanding of the physics of the clouds.

The theory of the universe of motion identifies this missing ingredient as the outward progression of the natural reference system relative to the conventional stationary system of reference. Once again we meet the antagonist to gravitation. Both the particles in the cloud and the stars in the cluster are subject to the outward progression as well as to the inward gravitational motion. Main sequence stars are gravitationally stable; that is, the inward gravitational force acting on their outermost atoms exceeds the outward force due to the progression of the reference system. In aggregates of stars or dispersed particles, on the other hand, the net force acting on their outer units is outward unless the mass of the aggregate exceeds a certain limit. For aggregates of the type that we are now considering, this limit is in the neighborhood of the mass of a large globular cluster. Any mass smaller than this limit is subject to expansion and loss of its outer units. The rate of loss depends on the size of the units, the mass of the aggregate relative to the limiting value, the speed of movement of the constituent units (the temperature, in the case of the clouds), and the external forces exerted on the aggregate, if any.

For the gas and dust clouds that exist in the Galaxy, all of these factors are favorable to a slow rate of loss. The units are very small, the clouds themselves are large, the temperature is very low, and the net external forces exerted on the clouds are small. It appears probable, therefore, that the existence of a cloud as a distinct unit eventually terminates as a result of processes other than escape of its outer particles, principally the mixing action that takes place by reason of the motion of the associated stars. The effects of this process are clearly visible. The aggregates, originally spherical, are now observed to be irregular in shape and often elongated.

Accretion of matter from a cloud by the stars enveloped within it during the mixing process reduces the mass of the diffuse aggregate substantially while the gradual destruction of the cloud is taking place. This accretion explains the presence of "new" stars in the clouds, especially the hot stars of the O and B classes, whose existence in these locations is currently ascribed to condensation directly from the dust and gas. The association of O and B type stars with gas and dust clouds is well established. Since the astronomers regard these stars as very young, astronomically speaking, they have concluded that the stars must have been formed from the clouds, somewhere near their present locations. Our finding that they are relatively old changes this picture drastically. There is now no reason why we must assume, in the face of all of the evidence to the contrary, that the dust and gas clouds in the spiral arms condense into stars. The simple and logical explanation of the presence of these stars in the clouds is that they are stars of the galactic population that have been mixed into the incoming dust and gas, and have grown to their present size by accretion from the clouds. This explanation fits all of the observational evidence, and it accounts for the existence of stars of these types by the operation of simple processes that are known to be capable of producing the observed results, and are known to be operative under the conditions existing in the clouds.

The extent to which accretion of material by the stars takes place has long been subject to differences of opinion. Some astronomers regard it as minimal. S. P. Wyatt, for instance, says that "There is virtually no replenishment from the outside."107 The most that he is willing to concede is the capture of an occasional meteoroid.
In fact, however, even a planet does better than that, in spite of strong competition from the sun. It is reported that "there is an extremely large flux of meteoroids near the planet Jupiter."108 The truth is that the astronomers' conclusions as to the amount of accretion by stars have been little more than guesswork. The existence of some accretion is well established, notwithstanding assertions such as that by Wyatt. The only open question concerns the quantities. In this connection it is significant that within very recent years the general astronomical opinion has moved a long way in the direction of recognizing the importance of dust and gas in the universe, from a concept of interstellar and intergalactic space as essentially empty, to a realization that the total amount of matter in these regions is very large, and may even exceed the amount that has been gathered into stars.

Calculations on which adverse conclusions regarding accretion are based generally assume that the stars are moving through the gas and dust clouds, and that this motion prevents any substantial amount of accretion. Our theoretical study indicates, however, that these clouds are participating in the rotation of the Galaxy in the same manner as the stars, and that the stars are therefore nearly stationary with respect to the clouds, a situation that is much more favorable to accretion. Bok and Bok specifically say that "the interstellar gas partakes in the general rotation of the galaxy."109

From the theoretical standpoint, there is nothing uncertain about the accretion situation. In the cyclic universe of motion everything that enters the material sector must be counterbalanced by the ejection of its equivalent. As we will see in the final chapters of this volume, only the explosion products of stars and stellar aggregates can acquire the speed that is needed in order to cross the regional boundary. It follows that all of the gas and dust formed from the primitive matter that enters this sector must either be condensed into stars or accreted by stars. We have already seen that the condensation into stars is not complete. As we trace the pattern of stellar behavior, it will also become evident that a great deal of material escapes from the stars before the final explosive events in which they are ejected from the material sector, and another substantial amount is scattered into space in connection with those explosions. Some of this dispersed matter is incorporated into the globular cluster stars as they are formed, but the rest has to be picked up by existing stars sooner or later. The average star must therefore increase in mass quite considerably during its lifetime.

It is true that matter is being converted into energy in the stars, and is being lost from them by radiation, but in a cyclic universe all processes are in equilibrium. The mass loss by conversion to radiant energy is necessarily counterbalanced by an equivalent conversion of radiation to matter in processes of the inverse nature. Thus the existence of the radiation process does not alter the fact that all of the mass entering the material sector in dispersed form must be aggregated into stars in order to be ejected back into the cosmic sector to keep the cycle in equilibrium.

The foregoing theoretical conclusions can be summarized by stating that they indicate that the dust and gas in interstellar and intergalactic space exists in much greater quantities, and plays a much greater part in the evolutionary development of stars and galaxies, than the astronomers have been willing to concede on the basis of their observations.
Since there is no source of empirical information other than these observations, we have heretofore had to rely on the cogency of the reasoning by which our conclusions were reached, together with the absence of any actual evidence that would contradict those conclusions. Now, however, the situation has been revolutionized by the results of observations with the Infrared Astronomical Satellite (IRAS).

The first observations with this satellite show that dust (and presumably gas) does, indeed, exist in interstellar space on the massive scale required by the theory of the universe of motion. As reported in an article in a current periodical (March 1984), "Dust is what IRAS found everywhere."349 The discovery, also reported in this article, of substantial quantities of dust surrounding Vega and Fomalhaut, together with indications that similar concentrations may exist around 50 other stars, is particularly relevant to the accretion situation. After a quarter of a century, the astronomers are finally arriving at the same kind of view of the stellar environment as that which was derived from theory, and described in the first edition of this work, published in 1959.

The accretion process is theoretically applicable to stars of all kinds, but if the cloud in which the accretion takes place is located well above the galactic plane, as is true of the Orion nebula and some of the others that are frequently characterized as "birthplaces" of stars, it is probable that most of the stars intermixed with the nebulae are of the globular cluster type. In this event, the effect of the accelerated accretion is to move the stars to the left from their positions on the two branches of the evolutionary path, and to distribute them along nearly horizontal lines intersecting the main sequence at relatively high temperatures. This is where the Orion stars are actually found.110

Occasionally some astronomer does concede that the O and B stars in the nebulae may be accretion products. For instance, George Gamow, like most of his colleagues, minimized the importance of the accretion process, but nevertheless admitted that "it is not impossible that the . . . Blue Giants found in spiral arms are actually old stars formed during the original process which were rejuvenated by accretion."111 But the orthodox astronomical view at present is that the stars of the O and B associations are new stars condensed out of the dust and gas clouds by some thus far unidentified process. Wyatt, for example, refers to "the unquestionable evidence that stars form out of interstellar matter."112 Here, then, the same textbook author who tells us that the strong gravitational forces of a stable galactic star are capable of "virtually no" accretion of matter is, at the same time, contending that the galactic dust clouds, which are known to exert no net gravitational force on their constituents, are in some unknown way able to pull those constituents together to form a star. These two propositions are obviously incompatible, and their coexistence illustrates the disconnected and compartmentalized nature of present-day astronomical theory.

The absence of any general structure of theory encourages reliance on negative rather than positive evidence. Since the theorist has no explanation whose validity he can prove, what he attempts to do is to devise an explanation that cannot be disproved. In this connection it is interesting to follow the chain of reasoning by which one prominent astronomer arrives at the currently orthodox conclusion with respect to condensation of stars from dust and gas clouds. The following are the essential statements from the five paragraphs in which he outlines the development of thought:

There are virtually no clouds observed in which gravity is strong enough to overwhelm the temperature effects based on the measurements that can be made at present . . . There may be a way out of this dilemma . . .
We really do not yet know how much molecular hydrogen lies in typical atomic hydrogen clouds. Such a situation is tailor-made for any theoretician to work with because there are no data that could contradict any assumption made about the amount of additional matter in the clouds. . . We assume it [the cloud] must have enough matter to cause it to contract.113 (Gerrit Verschuur)

This explanation of the background of one of the current theories of star formation in the galactic gas and dust clouds should make it evident why the astronomers are having so much difficulty in getting down to details. Verschuur is simply assuming the problem out of existence. Other theorists rely on some different assumptions (a hypothetical process to supplement the effect of gravitation, for example), but they all operate on the same principle; that is, they construct their hypotheses in such a way that "there are no data that could contradict" the assumptions. As might be expected, all details are vague. Verschuur admits that "We are far from understanding all the details of how clouds actually become stars."114 Perhaps the best assessment of the situation is that it illustrates the validity of this comment from the British scientific journal Nature (1974):

Indeed, a great many theoretical astronomers delight in a situation where there is just enough evidence to make model building worthwhile, but not enough to prove that their favored model is incorrect.115

The effect of the availability of dust and gas on the rate of evolution is illustrated by the globular clusters that are located in the Large Magellanic Cloud (LMC). Here the gravitational distortion of the structure of the Cloud has resulted in an irregular distribution of the dust and gas, and some globular clusters have entered regions of relatively high density. The rotational forces that would normally break up the clusters as they approach the central plane of the galaxy (the LMC) have also been greatly reduced by the gravitational distortion. As a result, some of the globular clusters remain intact in dusty regions for a long enough period to permit their constituent stars to reach an evolutionary stage comparable to that of the stars of the open clusters. While the shape and size of these clusters are those of normal globular clusters, their stars are members of Class 1B, like those of the open clusters.

We can correlate the evolutionary stages of the stars in the two Magellanic Clouds with the galactic ages, although the more heterogeneous populations of these larger aggregates make this correlation less specific than the corresponding results of the globular cluster study. The most significant observation, in this connection, is that the LMC has many red supergiant stars associated with hot blue stars in hydrogen clouds. As brought out in Chapter 5, these two very different types of stars are closely related from the evolutionary standpoint. The hot blue star (Class 1B) is near the supernova stage. The red giant of the second cycle (Class 2C) is the first visually observable post-supernova star. The presence of these red giants thus identifies the LMC as an aggregate in which the most advanced stars have reached the second evolutionary cycle. Stars of this class are not found in the Small Magellanic Cloud (SMC).116 Nor have any supernova remnants been located there.117 Their absence indicates that the most advanced stars of this galaxy are still in the first cycle. The concentration of Cepheid variables per unit of volume is much higher in the SMC.118 This is consistent with the evidence from the giants, as the first Cepheids are Class 1A stars, and the evolution around the cycle reduces the number of stars of the earlier classes.

The conclusion to be drawn from these observations is that the main body of the SMC is composed of stars of Classes 1A and 1B, whereas the average star in the LMC is in a more advanced evolutionary stage. The number of 1A stars has decreased, and some of the 1B stars have passed into the 2C stage. The stellar compositions of the two galaxies thus support the conclusion, based on their relative sizes, that the LMC is older than the SMC. They also provide the answer to a question asked in the book from which the data cited above were taken: "Why has the Large Cloud so many more very young stars than the Small Cloud?"119 The answer is that the "very young" stars to which the questioner refers are actually relatively old second-generation stars, and the LMC has more of these stars than the SMC because it is an older galaxy.

While the gas and dust clouds in the Galaxy are undergoing the changes that have been described, their constituents are also aggregating into larger units; that is, atoms are combining to form molecules and dust particles. It has been known for many years that a number of the elements above helium are present in these clouds, but recently it has been discovered that these elements are, to some extent, organized into molecules. Over fifty different molecules, some of considerable complexity, have been identified so far. In view of the extremely low density and low temperature of the clouds, which limit the frequency of contact of the constituents, the observed amount of molecule formation was not anticipated. The results of this present investigation indicate, however, that the conditions in the clouds are much more favorable for combination, up to a certain limit, than previously believed. The reason was explained in Chapter 1. Inside unit distance, 4.56×10⁻⁶ cm, the net motion, other than thermal, is inward until an equilibrium point is reached. At the very low temperatures of the clouds, estimated at about 10 K (reference 106), capture on contact, or even on a near miss, therefore has a high probability. As brought out in Volume II, physical state is inherently a property of the individual molecule. At 10 K even the hydrogen molecule is in the solid state. The contact process is thus capable not only of producing a variety of molecules, but also of building up solid aggregates to sizes in the neighborhood of unit distance. As noted in the earlier discussion, the cohesive forces of the molecules enable the maximum size of the dust particles to exceed unit distance by a relatively small amount. Any further increment puts the particle into the region where the net motion is outward, and gravitational control over dispersed matter is possible only in very large aggregates.

With the benefit of the information contained in this and the preceding chapters, we are now in a position to complete the comparison of the Reciprocal System and conventional astronomical theory from the standpoint of their ability to explain what is now known about the globular clusters. This addition to the comparison in Chapter 3 will be set up in the same manner as the original, and since 13 sets of observed facts were discussed in that chapter, we will begin with number 14.

14. Observation: The stars of the globular clusters are confined to the region above and to the right of the main sequence in the CM diagram, and to a relatively short section of the main sequence.

Comment: Both theories have explanations for the observed situation. Opinions as to their relative merits will no doubt differ, as long as this situation is considered in isolation.

15. Observation: Some clusters (M 67, for example) are classified as open clusters on the basis of size, shape, and location, but have CM diagrams very similar to those of the globular clusters.

Comment: It is difficult to account for the existence of these hybrid clusters in terms of the totally different cluster origins portrayed in conventional theory. The derivation from the theory of the universe of motion arrives at a simple and straightforward explanation. It identifies M 67 and the others of the same general type as former globular clusters, or parts thereof, which have only recently reached the galactic disk. The modification of the cluster structure under the influence of the strong rotational forces of the Galaxy is already under way, but the acceleration of the evolution of the stars by reason of the availability of more dust and gas for accretion is a slower process, and it has not yet had time to show much effect.

16. Observation: The observed motions of the stars in the open clusters show that these groups are disintegrating at a relatively rapid rate. The large number of these clusters now in existence, in spite of the short indicated life, means that some process of replenishment of the supply must be in operation.

Comment: As indicated in the discussion of this subject in Chapter 8, current astronomical theory has nothing to offer on this problem but pure speculation. The theory of the universe of motion identifies the source of the replacements.

17. Observation: Studies indicate that clusters similar to M 67 have a greater density and are located higher above the galactic plane than clusters that resemble the double cluster in Perseus.

Comment: The significance of these observations has also been noted earlier. They constitute prima facie evidence that the accepted view of the direction of evolution of the clusters and their constituent stars is wrong.

18. Observation: In addition to globular clusters of the normal type, the Magellanic Clouds contain some clusters that have the size and shape of globular clusters, but are composed of stars that resemble those of the open clusters in the Galaxy.

Comment: As Bart J. Bok pointed out in the statement quoted in Chapter 8, the existence of stars of different evolutionary ages in globular clusters is inconsistent with current astronomical theory, which views these clusters as having been formed early in the history of the universe. But it is easily understood on the basis of the theory of the universe of motion.

Summarizing, we can add to the previous count one set of facts (number 14) explained by current astronomical theory, two (15 and 16) without any explanation, and two (17 and 18) for which the current explanations are inconsistent with the observed facts. As reported in Chapter 3, this makes the total score for current astronomical theory 4 items explained, 7 with no explanation, and 7 with untenable explanations. In sharp contrast to this dismal record, the deductions from the postulates that define the universe of motion, which are totally independent of any input from astronomical observation, lead to explanations for all 18 items that are fully consistent with the observations.

This globular cluster situation is not an isolated case. It is merely a particularly conspicuous example of the results of basing astronomical theory on pure assumptions. The principal assumptions that have been made, and the manner in which they have been utilized to construct a wholly imaginary astronomical universe, will be reviewed in Chapter 28, after the pertinent information that can be derived from the theory of the universe of motion has been more fully developed.

CHAPTER 10

Evolution - Galactic Stars

When a globular cluster finally falls into the Galaxy and becomes subject to the forces of the galactic rotation, some rather drastic changes take place, and the CM diagram of the cluster is modified to the point where it is no longer recognizable without some understanding of the effects that are produced by the galactic forces. These effects are illustrated in Fig.12, which is the CM diagram of the cluster M 71. In this, and the other CM diagrams that will follow, any areas in which the star concentration is sufficiently above average to warrant special consideration are cross-hatched, while sparsely populated areas that may or may not belong in the diagram are outlined by dashed lines. M 71 is on the borderline, and has been classified as an open cluster by some observers, although it is now more commonly regarded as a globular.120 From this uncertainty as to its true status we can deduce that it is a globular cluster that has reached the edge of the galactic disk and is on the way to becoming an open cluster, or, more likely, will break up into a number of open clusters. The CM diagram of this cluster is described by Burnham as having a "red giant sequence resembling that of a globular," with "an unusually large scatter and a steeper slope than normal," but lacking the usual horizontal branch and extension to the main sequence. Thus, even for the astronomers, this diagram leaves a great deal to be explained. In the context of the new information developed in this volume, this diagram has even less resemblance to that of a normal globular cluster, as a "steep slope" of any of the lines in the diagram is inadmissible. The theoretical positions of all three of the evolutionary lines are fixed. The portion of the diagram in the upper right that is being identified as a wide giant branch is too steep to be the red giant line OA, and the slope of the cross-hatched section at the lower end of the diagram is not steep enough to be the evolutionary line AB. The diagram looks like a misfit.

So let us examine the situation from a theoretical standpoint. When the cluster enters the rotating stream, the immediate effect is that the loosely attached matter is stripped away, both stars from the cluster as a whole, and particles from the individual stars. As noted earlier, the differential gravitational forces are already reducing the sizes of the clusters very significantly as they approach the Galaxy, and this loss of stars is accelerated when the rotational forces are added to the radial gravitational effect. Reduction in size has the collateral result of reducing the central condensation. The globular clusters do not move freely through the field of stars in the manner described by Hoyle in the statement quoted in Chapter 2; they have to push the stars aside in order to clear their paths. But the individual stars do move through the interstellar medium. In so doing they lose the unconsolidated material by which they were surrounded, and from which they were drawing the additional mass that enabled them to follow the normal evolutionary paths. The loss of this material stops the growth of the star, and prevents it from reaching the critical density by the accretion route. However, the star is still subject to the compressive forces due to the gravitational effect of the cluster as a whole, and these forces, together with the self-gravitation of the star, compress the existing gaseous aggregate, and move it downward on the CM diagram along a line of constant mass. The theoretical results of the stripping action on the locations of the stars in the CM diagram are illustrated in Fig.13. Diagram (a) is the regular cluster diagram for a cluster in which the most advanced stars have just recently reached the main sequence. Diagram (b) shows where these stars would be if the cluster remained isolated long enough to permit the evolutionary development to bring most of the stars down to the main sequence, with only the least advanced stars still on the path AB. If the cluster falls into the galaxy while it is in the condition shown in (a), the atmospheres of dust and gas from which the stars along the
path OA are growing are swept away. These stars are then unable to move forward along this line. Instead of continuing on to the vicinity of point A before the supply of material for accretion is exhausted, they are deprived of this material almost immediately on entering the rotating stream. As a result, each star along the line OA leaves that line from whatever location it may happen to occupy at the time of entry, and moves downward on the diagram along a path parallel to AB, a line of constant mass.

Thus the effect of the interaction with the interstellar medium is to replace the relatively narrow path AB with a path that has the same slope and length, but has a width equal to OA. This path has a lower limit XX′, parallel to OA, that represents the extent to which evolutionary progress has taken place since the beginning of the capture process. As the evolution continues, the line XX′ moves downward on the diagram. The theoretical CM diagram for a captured cluster in a relatively early stage is then similar to (c).

When the last stars have left OA on the downward path, their positions lie along a line YY′, parallel to OA, constituting an upper limit to the stellar positions on the diagram. Summarizing this process, in the first interval after the entry of the cluster into the rotational stream the stars are located in the area between OA and the limit XX′. As the downward movement continues, the last stars leave OA, and in the next stage the star locations are between XX′ and YY′. Finally XX′ is cut off by the main sequence, and in this last portion of the downward movement, the stars are located between YY′ and the main sequence, as indicated in diagram (d). After the first stars reach the condition of gravitational equilibrium, the main sequence population continues to increase throughout the remainder of the evolutionary development. If we apply diagram (c), which shows the theoretical positions of the stars of a newly captured cluster, to the M 71 situation, everything falls into line. M 71 shows both of the characteristics previously mentioned as those of a greatly reduced globular cluster that is entering the fringes of the rotating galactic disk: a relatively low central condensation and a relatively small size. Its diameter is said to be about 30 light years. Double this value would still be below average. The giant clusters exceed 200 light years. The relation of the observed locations of the stars of this cluster to the theoretical diagram is shown in Fig. 14. Here we see that the observations fit neatly within the theoretical parallelogram. The absence of identifiable stars on the line AC, the horizontal branch, is explained by two results of the stripping process: (1) no new stars are moving into the AC region, and (2) the relatively small number of stars that were located on this line prior to the start of the capture process were scattered over the triangular area ABC by the same kind of a downward movement that occurs in the more heavily populated region on the other side of the path AB.
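The geometry just described is simple enough to be captured in a few lines of code. The following Python sketch is purely illustrative: the coordinates chosen for O, A, and B are hypothetical placeholders (the theory fixes these lines mathematically, but their numerical positions are not tabulated here), and the point is only to show how stars stripped at different places along OA trace parallel constant-mass tracks that fill the parallelogram of Fig. 14.

# Illustrative model of the stripping geometry described above. The
# coordinates are hypothetical placeholders, not the theory's fixed values:
# x is color index (B-V), y is absolute magnitude, and since larger
# magnitudes are fainter, "downward" on the diagram means increasing y.

O = (1.6, 0.0)   # initial point of the accretion line OA (assumed)
A = (0.4, -1.0)  # upper end of OA (assumed)
B = (0.6, 4.0)   # foot of the contraction line AB on the main sequence (assumed)

def position(f, progress):
    """Diagram position of a star stripped at fraction f along OA (0 at O,
    1 at A) after moving `progress` (0..1) of the way along a track
    parallel to AB, the line of constant mass."""
    start = (O[0] + f * (A[0] - O[0]), O[1] + f * (A[1] - O[1]))
    d = (B[0] - A[0], B[1] - A[1])  # direction and length of AB
    return (start[0] + progress * d[0], start[1] + progress * d[1])

# Stars stripped at different points of OA follow parallel constant-mass
# tracks, so at any one epoch they fill a band of width OA bounded by the
# moving limits XX' and YY' -- the parallelogram seen in Fig. 14.
for f in (0.0, 0.5, 1.0):
    print(f"entry at f={f}: position after 30% of the track ->",
          position(f, 0.3))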

The M 71 pattern is not uncommon. Five other clusters out of those examined in this
investigation also show the same kind of evidence that they are just entering the rotational stream. Only one is in the intermediate range where both the upper (YY′) and lower (XX′) limits are observable. The more advanced clusters that are limited to the lower section of the diagram between YY′ and the main sequence are again fairly numerous. But here we find that a new factor has entered into the determination of position on the CM diagram. The main sequence sections of some of these more advanced clusters are well defined, and they show that the clusters in this stage of evolution are subject to an upward displacement of the main sequence. In the cluster M 67, which is regarded as the prototype of this class of cluster, the shift is about 2.6 magnitudes. Fig.15 is the CM diagram of M 67. As can be seen, this diagram is similar to those of M 71 and other newly captured clusters, but a considerable number of the stars of the cluster have reached the main sequence, and they do not lie on the line BC, the lower line in Fig. 15. Instead, they follow a line parallel to BC, but above it by the amount of the displacement. Otherwise, the stellar positions are entirely normal. It is particularly significant that the upper limit of the populated area, the line designated as YY′, is sharp and distinct, because this line has a definite theoretical relation to the evolutionary pattern. It has to be parallel to the theoretical line OA, which is specifically defined mathematically, even though M 67 actually has no stars in the upper areas of the complete globular cluster diagram.

In order to understand the origin of the displacement of the main sequence, the gravitational shift, as we will call it, the nature of the equilibrium on the main sequence needs to be recognized. Basically, this is an equilibrium between the gravitational force (or motion) and the force (or motion) of the progression of the natural reference system. In the dust cloud state in which the giant stars originate there are two gravitational components, the self-gravitation of the star and the gravitational effect of the cluster in which the star is located.

The net resultant of all forces is inward, and the star therefore contracts. As the contraction proceeds, the net inward force weakens, and ultimately the point is reached where the inward and outward forces are equal. This is the main sequence of the cluster. Two of the three force components, the progression of the natural reference system and the self-gravitation of the star, are constant for a star of a given mass and volume, but the third component is variable, and it determines the location of the main sequence equilibrium. The stars in a globular cluster occupy equilibrium positions where there is no net force in either direction. In this case, therefore, the variable force component is zero in the equilibrium condition, if the contraction is completed within the cluster. Here the stellar equilibrium within the cluster is identical with that of an isolated star in space. The stars of the Galaxy also occupy equilibrium positions, but the galactic situation is not a full three-dimensional equilibrium. It has been attained in part by balancing a portion of the inward gravitational effect of the galaxy as a whole against the outward component of the rotational motion. This is a one-dimensional vectorial motion, and while it counterbalances the gravitational motion so far as the representation in the conventional spatial reference system is concerned, it does not offset the full effect of a motion such as gravitation that is effective in all three scalar dimensions. Thus there is a second gravitational component in the main sequence force equilibrium of the galactic stars. The component due to self-gravitation at equilibrium is reduced accordingly; that is, the contraction of the star stops at a lower density (or the star expands back to that density). This puts the main sequence of the galactic stars somewhat higher on the CM diagram than the main sequence of the globular cluster stars. As indicated earlier, the difference is about 0.8 magnitudes. This is a theoretical conclusion that takes us into a hitherto unexplored area of astronomy, but it is not without observational support. We note, for instance, that when the main sequence of the clusters is lowered to the 4.6 level, the area of the diagram included between this and the galactic main sequence at 3.8 magnitude includes the positions of a group of stars known as sub-dwarfs. "The location of metal-poor subdwarfs is puzzling," say M. and G. Burbidge, "because they seem less bright than [galactic] main sequence stars of comparable surface temperature and hence lie below the main sequence." But then these authors go on to give us the information about the subdwarf stars which, in the light of the theoretical conclusions that we have just reached, provides the explanation: These subdwarfs . . . are not traveling with the sun in its giant orbit around the hub of our galaxy, and consequently they are moving with high speeds relative to the sun and in one general direction—that opposite to the direction in which the galactic rotation is carrying the sun.102 According to our findings, these are stars that have escaped from globular clusters, and have entered the Galaxy from outer space. The fact that they are relatively metal-poor supports this conclusion.
But in any event, whatever their origin may have been, the significant point is that they are not "traveling with the sun"; that is, they are not participating (or not participating fully) in the rotation that we find to be the cause of the 0.8 magnitude gravitational shift of the galactic field stars. Actually, they can hardly avoid being affected to some extent by the rotational forces. It follows that they should theoretically be
distributed throughout the region between the two main sequence locations. This is just where they are found. Another item of evidence supporting the theoretical identification of the 0.8 magnitude difference as a gravitational shift will be forthcoming in Chapters 11 and 12, where it will be shown that the gravitational equilibrium applicable to objects moving in time is related to the 4.6 magnitude level, rather than to that of the galactic main sequence. With the benefit of the foregoing information we are now in a position to explain the gravitational shifts of M 67 and other open, or galactic clusters. M 67 is a remnant, or fragment, of a globular cluster that has quite recently fallen into the galaxy. It has reached the point where it has begun building up a main sequence population, although its slower stars are still in the process of completing their evolution along the globular cluster path AB and its rightward extension. It is one of the earliest of the objects classified as open clusters, and has the principal characteristics of a recent arrival: a star population that is large for an open cluster, a relatively compact structure, and a position high above the galactic plane. The big decrease from the globular cluster size and the entry into the galactic disk have destroyed the structural stability that existed in the parent globular cluster, and M 67 has begun the expansion that will ultimately terminate its existence as a separate entity. Now that they are within the Galaxy, the M 67 stars are subject to the same forces as the galactic field stars, and in addition are subject to the residual cohesive force of the cluster. Expressing this in another way, we can say that the stars of the main sequence of the open cluster have not yet completed their transition to gravitational equilibrium. The temporary equilibrium represented by their main sequence positions includes a diminishing component from the gravitational force of the cluster as a whole. The cluster stars will not reach main sequence positions comparable to those of the field stars of the Galaxy until the cluster expansion is complete, and this extra force component is eliminated. In the meantime, the main sequence of each cluster will be above that of the field stars by an amount depending on the remaining cohesive force of the cluster. This gravitational shift is greatest where the clusters are young, large, and compact, like M 67, and decreases as the cluster becomes older, smaller, and looser. As we saw earlier, when galaxies reach the size at which they capture substantial numbers of globular clusters they also begin to pull in some unconsolidated clusters, aggregates that are still merely clouds of dust and gas. These clouds arrive too late in the elliptical stage of galactic evolution to have much effect on the properties of the observed elliptical galaxies, although they may be responsible for the occurrence of concentrations of blue stars in some of these galaxies. But when the elliptical structure spreads out to form the spiral, the stars of the galaxy are mixed with the recent acquisitions of dust and gas. The stage is then set for a period of rapid advance along the path of stellar evolution, as the availability of this kind of a supply of material accelerates the evolutionary process. During the time that the mixing is taking place the dust and gas exist in widely different concentrations in different parts of the galactic structure. 
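For readers who want to attach a physical scale to the displacements quoted above (0.8 magnitudes for the field stars, about 2.6 for M 67), the conversion from a magnitude difference to a brightness ratio is the standard Pogson definition of the magnitude scale used throughout conventional astronomy, not anything specific to this theory; the following short sketch simply applies it to the figures cited in the text.

# Brightness ratios corresponding to the main sequence displacements
# quoted above, using the standard Pogson magnitude scale: a difference
# of delta_m magnitudes is a brightness factor of 10**(0.4 * delta_m).

def luminosity_ratio(delta_m):
    """Brightness factor corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_m)

for delta_m in (0.8, 2.5, 2.6):
    print(f"shift of {delta_m} mag -> factor of {luminosity_ratio(delta_m):.1f}")

# A 0.8 magnitude shift is roughly a factor of 2 in brightness; the 2.6
# magnitude shift of M 67 means its main sequence stars appear about 11
# times brighter than galactic field stars of the same color.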
The average concentration of the incoming dust and gas in the outlying regions that it reaches first is sufficient to support an accretion rate that results in a continuing increase in the mass of the average star. After arrival at the main sequence, the
very small stars, those whose growth was cut off prematurely by the entry of the cluster into the Galaxy, take up relatively permanent positions in the lower sections of this sequence, while the larger stars accrete matter and move upward along this path. Since the stars of a cluster, aside from the few captured strays, were all formed in the same event, and are of approximately the same age, most clusters occupy only a limited sector of the evolutionary cycle. The active sector does not expand appreciably, but merely moves forward as the cluster ages and passes through the various evolutionary stages. In the Hyades, Fig.16(a), a cluster somewhat older than M 67, a few stars still remain on the contraction path AB, but the majority have reached the main sequence. Fig.16(b) represents a still more advanced cluster, the Pleiades, in which the last stragglers have attained gravitational equilibrium, and the main body of the active stars has moved up along the main sequence. Whether or not the Pleiades cluster is actually older than the Hyades is uncertain, as the evolutionary age is not necessarily coincident with the chronological age. The Pleiades are located in an observable nebulosity, and the accelerated accretion from this source may account for the more advanced evolutionary stage. The possible variations in the rate of development of these nearby clusters are of particular interest in connection with the possibility that many of the open clusters in the local region of the Galaxy may be fragments of the same disintegrated globular cluster. It has already been recognized that some of these clusters are similar enough to imply a common origin. This has been suggested, for example, in the case of Praesepe and the Hyades.121 The principal objection that has been raised to this hypothesis is that the clusters are too far apart (the distance between these two is over 450 light-years) to have originated in the same event. This conclusion is, of course, based on conventional astronomical theory. When it is realized that the open clusters are fragments of globular clusters, this objection is eliminated, as it is evident that fragments of a disintegrated cluster could be distributed over much greater distances than those that are observed.

In any event, the greater density of the M 67 class of clusters and their higher galactic latitude, taken together with the observed expansion of all open clusters, definitely establish the M 67 class as younger than the main sequence clusters such as the Pleiades and the Hyades. This conclusion, previously reached, is now corroborated by the relative magnitudes of the gravitational shifts. Those of the M 67 class average about 2.5 magnitudes, while those of the main sequence clusters are not much above the 0.8 level of the field stars. Extension of the findings with respect to accretion by the main sequence stars indicates that continued development of the Pleiades cluster will eventually bring the hottest stars in this group to the destructive limit at the top of the main sequence, and will cause these stars to revert back to the red giant status via the explosion route. In the Perseus double cluster, Fig.17, such a process has already begun. Here the main body of stars is in the region just below the upper limit of the main sequence, but a number of red giants are also present. We can identify these giants as explosion products, stars of Class 2C, rather than new stars, Class 1A, as this identification keeps all of the stars in the cluster in an unbroken sequence along the evolutionary path, whereas if these were young stars of the first generation they would be unrelated to the remainder of the cluster. The presence of 2C giants implies that there are also young white dwarfs in this cluster, but they may be still in the invisible stage. Some binary stars are also reported to be present in clusters such as the Hyades and the
Pleiades. In these clusters, however, the A components of the binaries are on the main sequence, and there is a wide evolutionary gap between them and the Class 1 main sequence stars of the clusters. There are several possible explanations of their presence: (1) they are not actually members of the clusters, (2) they are strays, older stars that were picked up during the condensation of the globular clusters, or during their subsequent travels, or (3) they were stars from the horizontal branch of the same globular cluster whose vertical branch produced the Class 1 stars of the open cluster. The cluster diagrams indicate that the stars of the two branches reach the main sequence at about the same time. Consequently there is an evolutionary gap between them that is just about right to account for the presence of some Class 2 (binary) stars in the Class 1 main sequence clusters. It seems probable that alternative (3) is the source, or at least the principal source, of these binary stars. It is important to note at this point that in the context of the theory of the universe of motion, the presence of observable nebulosity is not necessary to account for the position of the hotter stars of the cluster at the top of the main sequence. As explained earlier, the theory definitely requires continued stellar growth even under conditions where the density of the stellar medium is no greater than average. This is something that cannot be confirmed observationally with currently available instruments and techniques, but neither can it be disproved. Thus, this aspect of the theory is not inconsistent with anything that is actually known, which is all that is required in the case of an integrated general theory that is fully verified in other areas.

It is significant, in this connection, that current astronomical theory is inconsistent with the observations. This theory places the star formation in dense galactic nebulae. The location most commonly cited as a stellar birthplace is the Great Nebula in Orion, and the association between this nebula and a large group of hot O and B type stars is offered as evidence of
recent formation from the existing dust and gas cloud. But no nebulosity can be detected in the Perseus cluster, or in NGC 2362, another similar cluster that has been extensively studied, or in a number of other clusters in which O and B stars are prominent, while most of the main sequence clusters, such as the Pleiades, that do have associated nebulosity have no O type stars. It is commonly recognized that there is a contradiction here that calls for an explanation, but since such contradictions abound in astronomy, it is not taken as seriously as the situation actually warrants. Some of the open clusters evidently carry over into Class 2B, as there are a large number of loose, somewhat irregular, clusters that have second generation characteristics. Here we find a substantial proportion of giant and subgiant stars, indicating that the clusters are either considerably older or considerably younger than a main sequence cluster such as the Pleiades. These clusters do not have the characteristics of the M 67 class, the predecessors of the Pleiades type of cluster, and their structure (or lack of structure) indicates that they have undergone considerable modification. We can therefore conclude that they are older, and that their giant stars belong to Class 2C. This conclusion is supported by evidence indicating that large proportions of the stars of these clusters are binaries. Up to this point no more than casual consideration has been given to the rotation of the various astronomical objects that have been discussed, because the significance of the information available on this subject is not clearly indicated as long as each individual situation is considered in isolation. We have now reached the point, however, where we can put together enough information from different sources to show that there is a general correlation between rotation and age throughout the astronomical universe. The earliest structures, both the globular clusters and the stars of which they are composed, have little or no rotation. As explained earlier, this is easily understood as a consequence of star and cluster formation under conditions in which only radial forces are operative to any significant degree. But it confronts conventional astronomical theory with difficult problems. The desperate attempts of the theorists to read some signs of rotation into the observations of the globular clusters as a means of accounting for the stability of these structures have already been discussed. In application to the stars, this problem is somewhat less acute, as the stars actually do rotate, and the issue here is a matter of origin and magnitude. According to J. L. Greenstein, the average rotational speeds of stars of spectral class G and fainter are less than 25 km/sec. His estimates of the giant stars show an increasing trend up to about 200 km/sec for spectral classes A3 to A7, with a decrease thereafter. The peak for the "dwarf" class (that is, the main sequence stars) is placed at a somewhat higher luminosity, in classes B5 to B7, and is estimated at 250 km/sec.29 The existence of these peaks does not mean that the rotation actually decreases in the largest stars. These are surface velocities, and the decrease is merely a reflection of the slowing of the speed of the outer layers of these stars, a differential effect that is evident even in stars as small as the sun. Current theory offers no explanation as to why speeds of these particular magnitudes should exist.
Indeed, Verschuur points out that, on the basis of the prevailing theories, they should be much greater.

The simplest calculations for star formation suggest that all stars should be spinning very, very fast as a result of their enormous contraction from cloud to star, but they do not do so. Why not? The answer is far from known at present.114 Furthermore, there is direct evidence that the rotational speed is a function of age. For example, A. G. Davis Philip reports that the rotational velocities of Ap and Am stars decrease with increasing cluster age (which is decreasing age, according to our findings).122 We might also note that the question as to what happens to the rotational speed as stars go through the contortions that are required by present-day evolutionary theory receives practically no attention. Against this background, the simple, observationally confirmed, picture of the rotational situation derived from the theory of the universe of motion provides a striking contrast. On the basis of this theory, all of the primary astronomical objects—stars, star clusters, and galaxies—originate with little or no rotation, and acquire rotational velocities as a consequence of the evolutionary processes. This increase in velocity is primarily due to angular momentum imparted to these objects during the accretion of matter. Globular clusters, which have little opportunity for accretion, acquire little or no rotation. The larger galaxies and the stars of the upper main sequence, which grow rapidly, on the astronomical time scale, increase their rotational velocities accordingly. From the nature of the evolutionary processes, as they have been described in the preceding pages, it is apparent that no aggregate consists entirely of a single stellar class. However, the very young aggregates approach this condition quite closely, inasmuch as they are composed of young stars, and the only dilution by older material results from picking up an occasional stray that has been ejected from an older aggregate. Aside from these interlopers, the earlier globular clusters are pure Class 1A, and their CM diagrams are somewhere between a concentration at the initial point of the diagram at the extreme end of the red giant region and a distribution similar to that of M 3, Fig.3. As brought out in the preceding pages, the evolutionary ages of the observable globular clusters are correlated with their distances from the Galaxy. On first consideration, the existence of such a relation may seem rather surprising, but it is an inevitable result of the kind of a cluster formation process that was described in Chapters 1 and 2. In the equilibrium condition from which the contraction of the group of proto-clusters begins, the proto-clusters in the outer regions of the group are moving inward, exerting a compressive force on those closer to the center of the group. Thus there is a density gradient from the periphery of the group to one or more central locations, just as there is a similar gradient from the outer regions of the clusters to their centers after they begin contracting individually. These density centers are the locations in which the condensation into stars first takes place, and the combination of the clusters into galaxies begins. Ultimately they become the locations of the major galaxies of each group. The density gradient from the periphery of the proto-group to the condensation centers then takes the form of a gradient from the gravitational limits of the major galaxies to the locations of those galaxies.

The basic physical process in the material sector of the universe is aggregation in space. Growth of the aggregates proceeds by a mechanism called capture, if it occurs on an individual basis, or condensation, if it takes place on a collective basis. The rate of growth is primarily a matter of the density of the medium from which the material is being drawn. Condensation does not occur at all unless the density exceeds a certain critical value. Capture is not so limited, but the rate at which it occurs depends on the probability of making contact, and that probability is a function of the spatial density of the entities subject to capture. All of the aggregation processes therefore speed up as the clusters move toward the Galaxy and into a denser environment. This accounts for the evolutionary changes, already noted, that take place during the travel of the globular clusters from the distant regions of intergalactic space to the point at which they end their existence as separate entities by falling into the Galaxy. The aggregation of matter on the atomic scale that produces successively heavier elements follows the same general course as the aggregation of the dust and gas particles into stars. The atom-building process, as described in the previous volumes of this series, is also a capture process, and it, too, proceeds at a rate that depends on the density of matter in the environment. Current estimates of the densities in the different regions through which the clusters pass give a general indication of the magnitudes that are involved. The following are some recent figures:123

Region                          Density (g/cm³)
Intergalactic space             10⁻³¹
Space near edge of galaxy       10⁻²⁸
Interstellar space              10⁻²⁴
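Since the text asserts only that the capture rate is a function of the spatial density, the short sketch below makes the simplest possible assumption, direct proportionality, in order to turn the tabulated densities into relative accretion rates. The proportionality itself is an illustrative assumption for this sketch, not a result derived from the theory.

# Relative accretion rates implied by the densities tabulated above,
# under the simplifying assumption (for illustration only) that the
# capture rate is directly proportional to the density of the medium.

densities = {
    "intergalactic space":       1e-31,  # g/cm^3
    "space near edge of galaxy": 1e-28,
    "interstellar space":        1e-24,
}

base = densities["intergalactic space"]
for region, rho in densities.items():
    print(f"{region:26s} {rho:8.0e} g/cm^3  relative rate ~ {rho / base:.0e}")

# The step from 1e-31 to 1e-28 is the factor-of-1000 increase cited in
# the text for the journey from intergalactic space to the galactic edge.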

On this basis, the density increases by a factor of 1000 during the travel of the cluster from a distant point of origin to the edge of the Galaxy. Here, then, is the explanation for the differences in composition between the distant clusters and those near the Galaxy that were described earlier. After entry into the galactic environment the increase in density and the corresponding evolutionary changes are still more rapid. It is not possible to follow the evolutionary cycle of the stars in the distant galaxies in the same detail as in our own galaxy, but we can apply our findings from the study of evolution in the Galaxy to an explanation of some of the changes in the observable features of these other galaxies. We can deduce that the small elliptical galaxies, including the distorted members of this class currently classified as irregular, are more advanced than the average distant globular cluster, and are in an evolutionary stage comparable to that of the most mature of those clusters. On the basis of the classification that we have set up, this means that they are composed of a mixture of the 1A and 1B classes of stars. The older and larger elliptical galaxies (not including the giant spheroidals, which are not classified as elliptical in this work) are in the same evolutionary stage as the earliest open clusters, and the CM diagrams of M 67 and the Hyades are representative of the phases through which these elliptical galaxies pass. It should be noted, however, that because of the continuing capture of younger aggregates, the early end of the age distribution is not cut off
in the galaxies as it is in the clusters. The CM diagram for an elliptical galaxy in the same evolutionary stage as the Hyades would extend the sector occupied by the Hyades stars all the way back through the globular cluster sector to the original zone of star formation. The rapid evolution in the early spiral stage eliminates most of the 1A stars, except those in the incoming stream of captured material. Aging of these spirals then results in the production of second generation stars, beginning with Classes 2C and 2D. All of these stars, both the giants (2C) and the white dwarfs (2D), are moving toward the main sequence, on reaching which they enter class 2B, the class to which the sun and its immediate neighbors belong. There are no giants among these local stars, but the presence of white dwarfs in such systems as Sirius and Procyon, and the existence of planets, shows that the local stars passed through the explosion phase fairly recently. We may interpret the lack of giants as indicating that the former giants, such as Sirius, have had time to get back to the main sequence, while their slower white dwarf companions are still on the way. It is not certain that all of the nearby stars actually belong in this same evolutionary group, as some younger or older stars may also be present as a result of the mixing due to the rotation of the galaxy and the gravitational differentials, but there are no obvious incongruities. The 2B stars in the regions of average accretion or above move upward along the main sequence in the same manner as they did when they were 1B stars of the first cycle, and again undergo the Type I supernova explosions. Eventually they recondense into stars of the third cycle, Classes 3C and 3D. These are three-member systems, if only one of the stars of the Class 2 binary system has exploded, or four-member systems if both have gone through the explosion phase. As indicated earlier, a considerable number of such multiple systems are known. Theoretically, this movement around the cycle will continue until the matter of which the star is composed reaches its age limit, providing that the environment is favorable for growth, but as mentioned in the discussion of the spiral structure, the contents of the galaxies are in a physical condition that has the general characteristics of a viscous liquid. In such an aggregate the heavier material moves toward the center of gravity, displacing the lighter units, which are concentrated preferentially in the outer regions. This process is slow and irregular because of the viscosity and the effects of the galactic rotation, but there is a general tendency for the older and heavier systems to sink toward the galactic center, into regions where the supply of material for accretion is limited. One six-member system, Castor, is frequently mentioned in the astronomical literature, but apparently systems of this size, systems of the fourth cycle, are scarce in the readily observable regions of the Galaxy. In view of the smaller amount of material available to the stars in the unobservable regions closer to the galactic center, and the increased competition for the material that is available, because of the higher concentration of stars, it is quite possible that the movement around the evolutionary path is limited to four or five cycles. Some evidence suggesting continuation to additional cycles is available from the cosmic rays. 
As explained in Volume 1, the nature of the process whereby matter is transferred from the material sector to the cosmic sector, and vice versa, is such that this matter is near its age limit before being ejected from the sector of origin. The cosmic iron content of the cosmic rays (the incoming matter from the cosmic sector) is something on the order of 50 times that
of the estimated iron content of the local main sequence (Class 2B) stars. If taken at its face value, this indicates that the evolutionary development, which causes the increase in the iron content, must extend into more than two or three additional cycles beyond the 2B stage. However, as noted earlier, the spectra of the stars tell us only what is present in the outer regions, and there is reason to believe that the iron content of the older stars in the local environment is substantially greater than indicated by the spectroscopic data. For the present it seems appropriate to interpret the cosmic ray composition as evidence favoring the higher iron content of the Class 2B stars rather than as indicating evolution beyond four or five cycles. In either case, however, the continuation of the accretion process into a number of cycles means that the proportion of large stars (products of the explosion of stars of maximum size) in the galactic population increases as time goes on. Inasmuch as the oldest stars are concentrated toward the galactic center, it follows that the number of large stars in the central regions of the Galaxy is considerably greater than would be expected from the proportions in which they are observed in the local environment. As we will see later, the presence of this large population of big stars in the central regions of the major galaxies has some important consequences. The fact that the development of the spiral structure antedates the appearance of the second-generation stars makes it possible to define the general distribution of the stellar classes of the Milky Way galaxy and similar spirals. With the qualification "except for strays from older systems," which has to be understood as attached to all statements in the discussion of stellar populations, we may say that the stars of the second and later generations, Class 2C and later, are confined to the galactic disk (including the arms) and the nucleus. The early first generation stars, Class 1A, are distributed throughout the outer structure. They constitute practically the entire halo population. The main sequence stars of the first generation, Class 1B, occupy an intermediate position, most prominently in the spiral arms. The identification of the conspicuous hot and luminous stars of the upper main sequence with the spiral arms was the step that led to the original concept of two distinct stellar populations. However, the information that has been developed herein shows that the galactic arms actually contain a very heterogeneous population, including not only stars from the entire first evolutionary cycle, but also stars from several, perhaps nearly all, of the later cycles. Observational difficulties limit our ability to follow the evolution of the galaxies beyond the stage of the spiral arms by studying the individual stars, but we can derive some further information from the character of the light that is being received from the inner regions. Since the stars in the galactic nucleus are older than those in the disk, they should be more advanced from an evolutionary standpoint, on the average. This difference in age is reflected in a difference in color. However, the correlation is not directly between color and age, but between color and the positions of the stars in the evolutionary cycle. It should be realized that the great majority of all stars are red.
Consequently, we can expect red light under all conditions except where the stellar population includes an appreciable number of the relatively rare blue and white stars of the upper end of the main sequence, and
then only because the emission from these hot stars is so much greater than that from the red stars that even a small proportion of them has a major effect on the color of the aggregate as a whole. The hottest stars may be thousands of times as luminous as the average Class 1 star. Thus the color of a galaxy, or a portion thereof, does not identify the stage of evolution of the constituent stars. It merely tells us that the aggregate does, or does not, contain a significant number of stars in that part of an evolutionary cycle which extends into the upper end of the main sequence. The particular cycle to which these stars belong cannot be determined from this information, but since the color changes in galaxies take place gradually, the characteristics of the light emitted by a galaxy, or one of its constituent parts, supplement the evolutionary criteria previously identified. The integrated light from the larger elliptical galaxies belongs to the spectral type G (yellow). In the early spirals the emission rises to type F (yellow white), or even type A (white) in some cases, because of the large number of Class 1B stars that move up to the higher levels of the main sequence. As these stars pass through the explosion stage and revert to the 2C and lower 2B status, accumulating to a large extent in the galactic nucleus, the light gradually shifts back toward the red, and in the oldest spirals the color is much like that of the ellipticals. Summarizing the color cycle, we may say that the early structures are red, because they are relatively cool; there is only a small change in the character of the light during the development of the elliptical galaxy, then a rapid shift toward the blue as the transition from elliptical to spiral takes place, and finally a slow return toward the red as the spiral ages. Current astronomical theory correctly identifies the stars of the nuclear regions of the galaxies as older than those in the spiral arms, but reaches this conclusion by offsetting one error with another. This theory identifies the globular cluster stars as older than the main sequence stars of the galactic arms. This is incorrect. But then the theory equates the stars of the nucleus with those of the globular clusters. This, too, is an error, but it reverses the first error and puts the stars of the nucleus in the correct age sequence relative to those of the galactic arms. However, this superposition of errors leaves the astronomers with an open contradiction of their basic assumption as to the relation between the age of a star and its content of heavy elements. This embarrassing conflict between current theory and the observations is beginning to be a subject of comment in the astronomical literature. For example, a 1975 review article reports measurements indicating that the "dominant stellar population in the nuclear bulges of the Galaxy and M 31 consists of old metal-rich stars."124 As the authors point out, this reverses the previous ideas, the ideas that are set forth in the astronomy textbooks. The expression "old metal-rich stars" is, in itself, a direct contradiction of present-day theory. The whole fabric of the accepted evolutionary theory rests on the hypothesis that old stars are metal-poor. The existence of a greater metal content in the central regions of the galaxies is apparently not contested.
Harwit makes this comment: "There also seems to exist abundant evidence that the stars, at least in our Galaxy and in M 31, have an increasingly great metal abundance as the center of the galaxy is approached. The nuclear region appears to be particularly metal rich, and this seems to indicate that the evolution of chemical elements is somehow speeded up in these regions."8

In the light of our findings it is, of course, unnecessary to assume any speeding up of stellar evolution in the central regions of the Galaxy. All that is needed is to recognize that the stars in these regions are the oldest in the galaxy, and their evolution has continued for a long period of time. This chapter completes our discussion of the more familiar areas of the astronomical universe. In the remainder of this volume we will be exploring hitherto uncharted regions, aspects of astronomy where the currently accepted ideas are almost completely wrong, because of the strangely unquestioning acquiescence in Einstein's assumption that the experimentally observed decrease in acceleration at high speeds is due to an increase in mass, and that speeds in excess of that of light are therefore impossible. As has been demonstrated in the course of the development of the theory of the universe of motion, the speed of light is a limit applying only to one-dimensional motion in space, and there are vast regions of the universe in which motion takes place in time, or in multi-dimensional space. Most of these are inaccessible to observation from our position in the universe, but some of the entities and phenomena of these regions do have observable effects on the material sector, the sector in which we make our observations. These effects will constitute the subject matter of the remaining chapters. Since these subjects will be approached from a totally different direction, the conclusions that will be reached will differ radically, in many cases, from those currently accepted by the astronomical community. As we begin our consideration of these new, unfamiliar, and perhaps disturbing, findings in the admittedly poorly understood areas of astronomy, it will therefore be well to bear in mind what the theory of the universe of motion has been able to do in the presumably quite well understood astronomical areas. It has produced an evolutionary theory that turns the conventional astronomical theories upside down, and it has identified a variety of observational data that confirm the validity of the revised evolutionary sequence, including two sets of observations, the densities of the different classes of open clusters and the metal content of the stars in the central regions of the galaxies, that provide definite proof that evolution takes place in the reverse direction. This ability of the new theory to correct a major error in current thought with respect to the phenomena of the better known regions should inspire some confidence in the validity of the conclusions that are derived from that theory in the relatively unknown astronomical areas, particularly when it is remembered that scarcity of observational information is not a major handicap to a purely theoretical structure of thought, whereas it is usually fatal to theories, like most of those in astronomy, that rest entirely on observation.

CHAPTER 11

Planetary Nebulae

Inasmuch as the system of reference by means of which we define the positions of physical objects in the material sector of the universe, the sector in which we are located, is stationary in space, but moving at the speed of light in time, we cannot detect objects moving in time, except during an extremely short interval while they pass through our reference system, and then only atom by atom. As explained earlier, however, if the net
total three-dimensional scalar speed is below the point of equal division between motion in space and motion in time, any time motion component included in the total acts as a modifier of the spatial motion—that is, as a motion in equivalent space—rather than as an independent motion in actual time. The nature of the modification depends on the magnitude and dimensions of the motion being modified. The participation of time motion in combinations of motion that are multi-dimensional in space (ultra high speeds) will be discussed later, in another connection. The motion with which we are now concerned, motion at intermediate speeds, is one-dimensional, but the original unit of speed (motion in space) has been extended linearly to a second unit, which is a unit of motion in time. Because of the effect of this time component, the successive spatial positions of an object moving freely at an intermediate speed do not lie on a straight line in the reference system as they would if the speed were less than unity. Motion in time has no direction in space. The spatial direction of each successive unit of the time component of the intermediate speed is therefore determined by chance. However, the average position of the freely moving object follows the straight line of the purely spatial motion, because the total three-dimensional motion is still on the spatial side of the sector boundary. As a result of this time effect, the radiation from a white dwarf in its early stages is not received from the surface of the star itself, but from a much larger area centered on the average stellar location. When the inherently weak radiation from this (spatially) very small star is further diluted by being spread out over this wide area it is reduced below the observable level. It follows that the white dwarfs expanding back toward the material sector (evolutionary stage 2) are not observable at all as long as their surface temperature is above the level corresponding to the unit speed boundary. On that boundary the change of position in time (equivalent space), relative to the natural datum, the unit speed level, is zero, and the radiation from the white dwarf is received at full strength. The white dwarf stars therefore become observable at this point. Our first concern will be with the relatively large stars, those whose mass exceeds a certain critical level that we will identify later. The detailed study of the white dwarf stars and related phenomena in the context of the theory of the universe of motion is still in the early stages, and we are not yet in a position to calculate the entry temperature for this class of white dwarf, but it can be evaluated empirically, and is found to be in the neighborhood of 100,000 K. At this temperature, where the relatively large white dwarf enters its third evolutionary stage, it is still a gas and dust cloud in equivalent space; that is, it is in the gaseous state. In this gaseous state in time the B-V color index for a given temperature is different from that of the stars on the spatial main sequence. We find empirically that the color index corresponding to the 100,000 K temperature of the incoming white dwarfs is about -0.3. On the main sequence this index corresponds to a temperature of about 30,000 K. Theoretically, these temperatures should be related by a factor of three. On entry into the observable region, the white dwarf is moving in all three dimensions of time (equivalent space).
The radiation from this star, the wavelength of which determines the color, is one-dimensional. From the color standpoint, therefore, the radiation consists of three
independent components, each of which has the wavelength and color corresponding to one third of the total rate of emission of thermal energy. The temperature, on the other hand, is determined by the total energy emission. Thus the color index of the newly arrived white dwarf of the class we are now considering is the same as that of a spatial main sequence star with a temperature one third that of the white dwarf. Since we do not have the theoretically correct values at this time, we will continue using 100,000 K and 30,000 K, with the understanding that these values refer to a temperature of about 100,000 K and a temperature one third as large, approximately 30,000 K. The location of the -0.3 index on the CM diagram coincides, in general, with the position of a rather obscure class of stars known as the hot subdwarfs. The "evolutionary status" of these stars "has not been really understood,"125 say Kudritzke and Simon, but current opinion apparently favors the suggestion that "On the way to becoming a white dwarf, while it is still very hot and just before the thermonuclear reactions cease, a star may find temporary stability in the region below the main sequence."102 In short, this is presumed to be a way station on the totally unexplained, and poorly defined, route by which, according to current theory, a red giant becomes a white dwarf. Observational information about these hot subdwarfs is still scarce and somewhat uncertain. A 1961 report by K. Hunger, et al., says that "little is known about their precise location in the H-R diagram."126 These authors make the following comments on matters that are relevant to the present discussion: (1) a major fraction of these stars are binaries, (2) some of them are central stars of planetary nebulae, and (3) the mass of one of them, the star HD 49798, has been evaluated as 1.5 solar masses. According to our findings, all of these stars are binaries. The relevance of the other two items will appear as our examination of these stars proceeds. During the interval between the supernova explosion that produced the white dwarf and the reentry of that star into the reference system, where it is subject to observation, the portion of the original material ejected into space at less-than-unit speeds has also undergone some changes. Immediately following the explosion, the density of the material moving outward was sufficient to carry everything in the vicinity along with it, and the visible object was a rapidly expanding cloud of matter. As the expansion proceeded, the density of the cloud decreased, and in time a point was reached where the outgoing matter passed through the interstellar material rather than carrying that material with it. Eventually the outward motion of the ejected matter came to a halt, and inward motion began under the influence of gravitation, as explained in Chapter 4. The existence of the hot subdwarfs suggests that the turnaround time is less for the material dispersed in time than for the material dispersed in space, and that the hot star is visible for a time before there is any substantial inflow of material from the environment. But eventually the matter that is being pulled back by the gravitational forces begins falling into the star. The first material of this kind reaching one of these newly arrived white dwarf stars, the hot subdwarfs, encounters the extremely high temperature of this object, and is heated to such an extent that it is ejected back into the surroundings.
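Before following the inflow of material any further, it may be worth restating the color relation used earlier in this chapter in compact form. The sketch below simply encodes the factor-of-three division of the radiation into three one-dimensional components described above; the 100,000 K figure is the approximate empirical entry temperature already quoted, and nothing else is assumed.

# Sketch of the color relation stated above: the thermal emission of the
# incoming white dwarf is divided among three independent one-dimensional
# components, so its color matches that of a spatial main sequence star
# radiating at one third of the white dwarf's total temperature.

def color_equivalent_temperature(t_total, components=3):
    """Temperature whose color a newly observable white dwarf mimics."""
    return t_total / components

t_entry = 100_000  # K, approximate empirical entry temperature quoted above
print(color_equivalent_temperature(t_entry))
# ~33,000 K, i.e. roughly the 30,000 K main sequence temperature that
# corresponds to a B-V color index of about -0.3.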
Since both the incoming and the outgoing material are at a very low density, there is only a limited amount of interaction, and the cold material continues to flow inward through the outward moving matter.

When the incoming matter reaches the hot surface of the star it is not only heated to a very high temperature, but is also strongly ionized. The outgoing ionized matter emits visible radiation, and we therefore see a sphere of ionized matter centered on the young white dwarf. The radiation from ionized atoms occurs when they drop to a lower state of ionization, and as a consequence the greater part of it takes place after the ejected material has traveled far enough to lose a substantial part of its original ionization energy, and before that energy is all dissipated. This leaves a nearly invisible region in the interior of the sphere. To the observer, the resulting structure has the appearance of a ring. Such an object is a planetary nebula. Here we see the significance of the observation, cited above, that some of the hot subdwarfs are central stars of planetary nebulae. According to our deductions from theory, all of the hot subdwarfs will become central stars of planetary nebulae in due course. The planetaries are all far distant from our location, and it is difficult to get an accurate observational picture of the complicated processes that are under way in them. Consequently, there is considerable difference of opinion as to just what is happening. "Although we seem to comprehend the broad outlines of their formation and development, much of what we see is confusing and not at all well understood."127 (James B. Kaler) The outward motion of the ionized gas in the typical large nebula is well established, and the general tendency is to take this as indicating that the central star, which is conceded to be a white dwarf, or on the way to becoming a white dwarf, is ejecting mass into its surroundings as a part of the process which, according to current ideas, will eventually reduce it to a burned-out cinder. This observed outward flow of matter seems, on first consideration, to define the nebula as an expanding cloud of material. But there are strong indications that this simple view is incorrect. One significant point is that the nebulae are not actually expanding at the rates indicated by the measured speeds of the outgoing matter. Indeed, some of the nebulae are not expanding at all. For instance, velocity measurements indicate that the diameter of NGC 2392, the Eskimo Nebula, is increasing at the rate of about 68 miles per second. But no definite increase in size is shown in photographs taken 60 years apart. Our findings now indicate that the prevailing view of the nebulae as expanding structures is incorrect. Instead of being a rapidly dissipating cloud of material ejected from the central star in a single burst, or succession of closely spaced bursts, our analysis indicates that the planetary nebula is a relatively permanent ionization sphere through which the outgoing stream of material flows. We might compare it to the visible area of a river illuminated by the beam of a searchlight. A report by M. and W. Liller concedes that "Very possibly, all planetary nebulae are ionization spheres,"129 but contends that in general these ionization spheres are expanding, although at a slower rate than would be indicated by the measured velocities. The size of the ionization sphere depends on the temperature of the central star and on the density of the nebula, increasing with higher central temperature and decreasing with higher density. The temperatures of the central stars necessarily decrease from the 100,000 K initial level. There is little or no loss of matter from the planetary nebula

system, since the initial speeds of the ejected matter, while high by terrestrial standards, are not anywhere near sufficient to carry the outgoing matter to the gravitational limit before being slowed down. In the meantime, additional matter is being drawn in from the environment. Thus the general trend of both temperature and density is in the direction of reducing the size of the ionization sphere (the observable nebula). It does not follow, however, that this decrease is continuous and uniform throughout the planetary nebula stage. On the contrary, the flow conditions in the nebula are such that fluctuations of a major nature can be expected, particularly in the early portion of this evolutionary stage. The inward flow of material toward the central star is not observable. Under ordinary conditions very diffuse material at great distances and low temperatures cannot be detected by any means now available. A part of this incoming matter is ionized by the radiation from the star, but the ionizing effect increases as the star is approached, and the transitions to lower ionization states that cause the emission of radiation are minimized. Radiation from the incoming matter is thus minor, and it has not been identified. We can, however, deduce that the initial inflow of the material, when the hot white dwarf first establishes a definite position in space, is relatively heavy, as the site of a recent supernova explosion is well filled with explosion products. This relatively large amount of incoming matter encounters the maximum 100,000 K temperature, is strongly ionized on contact, and is ejected at a high speed. Thus a large ionization sphere is quickly established when the action begins. The outward movement of the relatively large amount of ejected matter retards the inward flow of matter to some extent. This has two effects. It reduces the amount of material reaching the central star, thereby reducing the amount of ejection, and diminishing the outward flow. Coincidentally, the incoming material that is being held back by the outward flow builds up a concentration in the regions beyond the ionization sphere. Eventually the reduced outward flow is unable to hold back this concentration of material that is being pulled inward by gravitational forces, and there is a surge of matter toward the central star. This recreates the original situation (at a somewhat lower level, since the temperature of the central star has decreased in the meantime), and the whole process is repeated. During the time that the outward flow predominates, the density within the ionization sphere is decreasing, while because of the reduction in the inflow of cold matter, the surface temperature of the central star remains approximately constant. The ionization sphere therefore expands slowly. When the inward surge of matter occurs, these conditions undergo a rapid change. The density within the ionization sphere increases sharply, and the surface temperature of the central star decreases. The result is a rapid contraction of the ionization sphere. After these effects of the surge have run their course, the heavier outward flow and the expansion of the ionization sphere are resumed, but in the meantime the internal temperature of the central star has dropped, and the surface temperature does not regain its former level. The expansion therefore starts from a smaller size than before, and the next surge occurs before the size of the ionization sphere reaches its earlier maximum. 
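The surge cycle just described lends itself to a brief numerical sketch. The growth and contraction factors below are illustrative assumptions only, chosen to exhibit the qualitative pattern deduced above (slow expansion, abrupt contraction, and a declining maximum as the central star cools); nothing in them is derived from the theory.

    # Toy model of the surge cycle in a planetary nebula.
    # All numerical factors are invented for illustration only.

    def surge_cycles(n_cycles=5):
        temperature = 100000.0   # central-star surface temperature, K
        radius = 1.0             # ionization-sphere radius, arbitrary units
        maxima = []
        for _ in range(n_cycles):
            # Expansion phase: outward flow predominates; density in the
            # sphere falls while the surface temperature stays roughly
            # constant, so the sphere grows slowly.
            radius *= 1.10
            maxima.append(radius)
            # Surge phase: accumulated outer material falls inward; density
            # rises, the surface temperature drops, and the sphere contracts
            # sharply, more deeply in each successive cycle.
            temperature *= 0.90
            radius *= 0.75 * (temperature / 100000.0)
        return maxima

    print(surge_cycles())   # each successive maximum is smaller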
Thus, as the successive expansions and contractions continue, the size of the nebula gradually decreases. Eventually the open space in the center is eliminated, or at least drastically reduced. The older nebulae are therefore relatively small, and have filled, or partially filled, centers.

One observed phenomenon that tends to confirm the validity of the foregoing explanation of the general behavior of the planetary nebulae is the existence of faint outer rings in some of the nebulae. These are just the kind of remnants that would be left behind if there is a relatively rapid periodic decrease in the size of the ionization sphere, as indicated in the foregoing theoretical account of the process. The currently favored explanation is that the rings were produced by explosive outbursts from the central star that preceded the outburst to which the main portion of the nebula is attributed. But there is no evidence that such explosive outbursts occur, nor does present-day astronomical theory have any explanation of how they could originate.

This is only one of many conflicts between the pattern of evolution of the planetary nebulae, as derived from the theory of the universe of motion, and the view currently prevailing among the astronomers. In that currently accepted view, these nebulae are seen as expanding objects, and the largest ones are therefore regarded as the oldest. But the temperature relations specifically contradict this hypothesis. Examination of the data reported for a selected group of "prominent" nebulae130 shows that the temperatures of the central stars range from about 100,000 K to about 30,000 K. The sizes of the nebulae vary widely, but those members of the sample group with temperatures in the neighborhood of 100,000 K all have diameters of a minute of arc or more, while almost all of those at the lower end of the temperature range have diameters of less than 30 seconds. The idea that a "dying star" that "has come close to the end of its life . . . destined soon to become a white dwarf, the last stage before it disappears from view altogether"129 is steadily increasing in temperature from 30,000 K to 100,000 K during the planetary stage is preposterous. Even on the basis of the astronomers' own theory, the temperature trend must be downward.

It is true that the luminosity of the central star increases substantially as the size of the nebula decreases. The data reported for the sample group show that at the high end of the range of nebular sizes the average magnitude of the central star is about 14. The four with Messier numbers have magnitudes 13.5, 14, 15, and 16.5. From this level the luminosities increase rapidly, and at the low end of the nebular size range the average magnitude is about 11, a change of three magnitudes, equivalent to a roughly fifteenfold increase in luminosity. These are observed, rather than absolute, magnitudes, but the correction for distance, if available, would not change the general picture. Coincidentally, the temperatures of the central stars are decreasing. It is evident that what is happening here is that while the total emission is decreasing as the stellar temperature drops, a larger proportion of that total is coming directly from the central star, rather than being passed on to the nebula and emitted from there. The decrease in temperature is the salient feature of the change that takes place with time, and it establishes the direction of evolution unequivocally.

Turning now to the question of the location of the planetaries on the CM diagram, the first point to be noted is that we are now dealing with stars that are quite different from those on the upper side of the main sequence.
As we saw in Chapter 6, these stars, Class D stars, as we are calling them, were dispersed in time by the supernova explosion, rather than being dispersed in space by the process with which we are more familiar. When they again become visible as stars they are expanding back toward the main sequence, instead of contracting. The color indexes and luminosities of these stars can be measured, and they can therefore be represented on a CM diagram. But, as we have already seen in the case of the Class C stars, the other variable properties of the stars do not necessarily maintain the same relations to the color index-luminosity function in the different star classes. For instance, the Class C mass at a given point in the diagram is usually quite different from the mass of a Class A star at that same point. The truth is that, with the exception of the spatial main sequence, which is common to all, the CM diagrams of the different classes of stars are different diagrams.

Fig. 4 and the accompanying discussion in Chapter 4 bring out the fact that the major properties of Class A stars, other than those on which the CM diagram is based, are specifically related to the variables of the diagram, so that the stars of this class are alike if they have the same position on the diagram. This is not true, in general, of a Class A star and a Class C star at the same location. Similarly, if the Class D central star of a planetary nebula occupies the same position in the diagram as a certain Class A star, this does not mean that the two are alike. On the contrary, they are very different, because of their dissimilarity in the properties that are not portrayed by the diagram. This issue does not arise in the case of most of the stars of the dwarf classes, as they are well below the spatial main sequence, but some of the large hot subdwarfs and central stars of the planetary nebulae are close to, or even above, the location of the main sequence. It should be recognized that the diagram is misleading in these cases, and that the stars of these two dwarf classes are actually very different from stars whose motion is in space. In this volume, all Class D stars will be regarded as "below the main sequence" for the purposes of the discussion.

The temperature of about 100,000 K at which the white dwarf reaches the observable region is far above the level of the environment in the material sector of the universe. In order to reach a point of thermal equilibrium in that sector the star must cool down to a level within the sector energy range (below unit speed). This cannot be accomplished in one continuous operation; a three-step process is required. Conversion to the one-dimensional material status can take place only on a single unit basis, in which a single unit of one-dimensional time motion converts to a single unit of one-dimensional space motion. The star must first cool down to a limiting temperature at which the individual atoms at the stellar surface are in the unit condition in time. This is the temperature that we have identified empirically as approximately 30,000 K. Here the transition from motion in time to motion in space takes place. The third step in the process, a further cooling to the equilibrium temperature, then follows.

From the foregoing, we find that the planetary nebulae are located on the conventional color-magnitude version of the H-R diagram between the two vertical lines drawn in Fig. 18, representing temperatures of 100,000 K and 30,000 K respectively. The plotted points are the locations of the planetary nebulae in the tabulation by G. O. Abell (reference 131). All of these points fall within the temperature limits defined by the specified lines.
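The three-step sequence can be restated compactly as a classification of the central star by its surface temperature. The two thresholds are those identified in the text; the bands around them are schematic, and the function is merely a summary device, not a calculation from the theory.

    # The 100,000 K entry limit and the approximately 30,000 K unit level
    # are taken from the discussion above; the band widths are arbitrary.

    def cooling_step(surface_temp_k):
        if surface_temp_k > 100000:
            return "not yet observable (above the entry level)"
        if surface_temp_k > 31000:
            return "step 1: cooling as motion in time (planetary stage)"
        if surface_temp_k >= 29000:
            return "step 2: unit-by-unit conversion from motion in time to motion in space"
        return "step 3: cooling in space toward thermal equilibrium"

    for t in (120000, 70000, 30000, 12000):
        print(t, "K ->", cooling_step(t))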

For an understanding of the positions and evolutionary changes illustrated by Fig. 18 we need to review some of the findings of the previous volumes of this series with respect to natural units. According to the fundamental postulates of the theory of the universe of motion, the basic constituent of the universe, motion, is limited to discrete units. Since all physical phenomena in this universe are motions, combinations of motions, or relations between motions, it follows from the discrete nature of the units of motion that all of these subsidiary phenomena must also be limited to discrete units. The basic units of space, time, mass, energy, etc., were evaluated in Volume I. However, these simple units are not directly applicable to complex phenomena. Here a compound unit usually applies, a combination of the simple primary units. For example, the primary unit of space has been evaluated as 4.56×10⁻⁶ cm. But within a unit of space there are compound motions in which the spatial units are modified by certain combinations of units of time. As a result, the phenomena in this region are not related to the simple units of space, but to a compound (or modified) unit of space that amounts to 0.0064 of the full-sized natural unit, or 2.92×10⁻⁸ cm.

Because of the general applicability of the discrete unit limitation, we can deduce that wherever we encounter a critical value of some kind, we are dealing with a compound unit, or a small number of such units. It is not usually possible to evaluate the compound unit in terms of the simple units of which it is composed until after the theoretical relations that are applicable have been clarified in considerable detail. For instance, in the case of the space units, the factor 0.0064 that relates the compound unit to the simple unit is something that one would not be likely to find unless he had a very good idea as to where to look for it. The development of the theory of the universe of motion has not yet been applied to the quantitative aspects of astronomical phenomena on an extensive enough scale to permit evaluation of more than a limited number of the compound astronomical units. But the mere knowledge that some particular magnitude is a compound unit, or a small whole number of such units, is very often helpful.

In the present instance, we are able to make use of a feature of the evolutionary pattern of the globular clusters that was mentioned, but not discussed, in Chapter 8. As noted there, the difference in luminosity between point A and point B on the CM diagram, on the logarithmic scale, is twice the difference between point B and point C. Inasmuch as these points are all critical points in the evolutionary pattern, the difference in magnitude between any two of them is presumably n compound units, where n is a small whole number.
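The arithmetic of the compound space unit cited above is easily verified, and the same discrete-unit bookkeeping can be applied to the cluster diagram. The only inputs here are the values quoted in the text.

    # Compound unit of space: 0.0064 of the full natural unit.
    natural_unit_cm = 4.56e-6
    compound_unit_cm = 0.0064 * natural_unit_cm
    print(compound_unit_cm)          # 2.9184e-08, i.e. about 2.92e-8 cm

    # Luminosity differences on the cluster diagram: if B to C is one
    # compound unit (n = 1), then A to B, being twice as large, is n = 2.
    b_to_c_units = 1
    a_to_b_units = 2 * b_to_c_units
    print(a_to_b_units)              # 2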

The nature of this compound unit has not yet been determined, but the logarithmic magnitude scale suggests a dimensional relation, and leads to the surmise that the magnitudes at points B, C, and A are 1, 2, and 3 respectively. There is, of course, a large hypothetical component in this conclusion, at the present stage of theoretical and observational knowledge, but we can treat it like any other hypothesis; that is, develop its consequences and compare them with observation. As will be seen in the discussion that follows, the consequences of this hypothesis do, in fact, agree with the available observational information. Within the limits to which the correlation has been carried, the hypothesis has been verified.

The particular value of this hypothesis is that it gives us a means of locating the critical points in the white dwarf section of the CM diagram. In the earlier volumes it was established that the boundary between motion in space and motion in time has a finite width, and that there are two natural units between the respective unit levels. It follows that if, as we have concluded, point B corresponds to one unit in the spatial direction (+1, we may say), then a point one unit lower on the extension of the line AB corresponds to zero, and point B', two units lower, corresponds to -1; that is, one unit in the temporal direction. The line A'B' parallel to BC is then the equivalent of the main sequence for motion in time.

With the benefit of this information we can now define the evolutionary paths of the planetary stars. Fig. 19 compares these paths with those of the giant stars. The line OAB is the evolutionary pattern of a giant star that has a mass of about 1.1 solar units at point B. Such a star originates with a smaller mass, but accretes material as it moves along line OA, and reaches the 1.1 mass level at point A. This is the critical density level, where the star acquires the ability to contract by means of its own gravitation, without the aid of outside forces. This contraction carries it down to the point of gravitational equilibrium at B. This star that begins its life as a red giant is in a state of thermal equilibrium; that is, it is radiating the same amount of heat that it generates. But its density is extremely low, far below the level of stability. Its evolutionary course beyond point A, unless modified by accretion, is therefore along a line of constant central temperature toward the main sequence, the location of gravitational equilibrium.

The early white dwarf, on the other hand, is already in a state of gravitational equilibrium, from the standpoint of gravitation in space, while it is too hot to be thermally stable. This star therefore moves along the line of gravitational equilibrium for motion in time, the equivalent of the spatial main sequence, toward a condition of thermal equilibrium. The (inverse) volume of the white dwarf star at any given surface temperature is determined by the mass. Thus the more massive stars reach the 100,000 K temperature level while their inverse volume, from which they radiate (a point that will be considered further in Chapter 12), is greater, and their luminosity is consequently higher. The incoming white dwarfs are thus distributed along the 100,000 K line in accordance with their masses. At the 1.1 level, identified as A' in Fig. 19, the white dwarf occupies a critical position somewhat similar to the critical density position at point A on the giant path. This white dwarf of 1.1 solar masses is the smallest star that has sufficient total thermal energy to maintain the 100,000 K surface temperature in the gaseous type of gravitational equilibrium.

This critical mass star that originates at point A' moves down along the line A'B', gradually converting its outermost atoms from three-dimensional motion in time to one-dimensional motion in time. This conversion is completed at B'. Further cooling then transforms the one-dimensional motion in time at B' to one-dimensional motion in space at B. Thus the giant and dwarf stars of the same mass eventually arrive at the same point on the spatial main sequence.

Giant stars whose growth ceases at some point a between O and A follow a path ab parallel to AB, terminating at point b on the main sequence. Stars that continue adding matter beyond point A have a different evolutionary pattern, as previously explained. In the dwarf region it is the star with a mass less than 1.1 solar units that has a different pattern of evolution, one that will be examined in Chapter 12.

Since the mass of the white dwarf is constant during its movement along the line of gravitational equilibrium, it follows that each mass has its own equilibrium line. Thus the equivalent of the main sequence for motion in time is a series of lines parallel to the spatial main sequence. The larger stars that we are now considering originate along the 100,000 K temperature line at locations above point A'. Such a star moves along a path a'b' from a', the point of origin, to b', a point on the line B'B. It then converts to motion in space at point B in the same manner as the star of mass 1.1, but B is not a location of thermal equilibrium for the more massive star. A further movement along the main sequence is required in order to reach the point of thermal stability. The final position of this star in the CM diagram is a point somewhere between B and C, the exact location depending on the mass.

On the diagram, the movement from B to x, the final location, appears anomalous, as a decrease in temperature normally corresponds to a movement to the right. This is another illustration of the fact that the CM diagrams of stars of different classes, or even different sub-classes, are actually different diagrams. The temperature corresponding to a given color index is much lower on the spatial main sequence than on the equivalent path a'b' for motion in time. The movement of the Class D stars toward the left after reaching the spatial main sequence is not a temperature effect, but a result of this difference in the significance of positions in the diagram. The cooling star is actually at a considerably lower temperature in its final position at point x than it was at point b', even though it is farther to the left.

Inasmuch as the white dwarfs are contracting in time rather than in space, the spatial compression due to the gravitational motion toward the galactic center has no effect on these Class D stars. The evolutionary paths shown in Fig. 19 therefore meet at the globular cluster level of the spatial main sequence, rather than at the position of the galactic field stars. The final position, designated x, on the spatial main sequence is, however, subject to the gravitational shift, and the last phase of the conversion from motion in time to motion in space includes an upward movement of 0.8 magnitudes, as well as the movement to the left from B to x. As noted in Chapter 10, the observed Class D pattern is strong evidence of the reality of the gravitational shift.

Observational information about the two classes of relatively large white dwarfs that we have been considering, the hot subdwarfs and their successors, the central stars of the planetary nebulae, is very limited, but the positions on the CM diagram indicated by the available data are entirely consistent with the evolutionary pattern that we have derived from theory. The dashed line in Fig. 18 outlines the locations of the hot subdwarfs as given by M. and G. Burbidge (reference 102). This indicated area is clearly consistent with the theoretical conclusions. As noted earlier, the locations of the representative group of planetary nebulae identified in Fig. 18 are also within the theoretical limits.

Because of their decrease in temperature, the movement of these nebulae on the CM diagram must be, at least generally, from left to right. (Even the adherents of conventional astronomical theory concede this. See, for instance, the diagram by Pasachoff, reference 132.) Some further confirmation of the theoretical findings can therefore be obtained by examination of the relation of the diameters of the planetaries on the Abell list to their locations on the diagram. Fig. 20 is a reproduction of Fig. 18, with the diameters in parsecs shown alongside the points indicating the locations. As might be expected, in view of the diversity of the conditions under which the nebulae exist, and to which the observations are subject, there are wide unexplained variations in the individual values, but the general trend is clear.
Disregarding the group of nebulae below the line A'B', which are subject to some special considerations that we will examine in Chapter 12, there are 16 nebulae with an average diameter of 0.84 parsecs in the left half of the identified nebular region, and 7 nebulae with an average diameter of 0.47 parsecs in the right half.
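The comparison just made is a simple two-group average over the Abell diameters. The individual values are not reproduced in this text, so the lists below are placeholders standing in for them; only the group sizes and the resulting averages are taken from the discussion above.

    # Placeholder diameters (parsecs); stand-ins, not the Abell data.
    left_half = [0.84] * 16     # hotter (younger) side of the diagram
    right_half = [0.47] * 7     # cooler (older) side of the diagram

    def mean(values):
        return sum(values) / len(values)

    # The hotter side shows the larger average size: the nebulae are
    # shrinking, not expanding, as they age.
    print(mean(left_half), mean(right_half))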

Observational information on the masses of the planetary stars is minimal, but the little that is available is consistent with the existence of a lower limit at 1.1 solar masses, or at least not inconsistent with it. An average of 1.2 solar masses has been suggested.133 As noted earlier, the mass of one of the pre-planetary stars, the hot subdwarfs, has been determined as 1.5 on the same scale. We will see in Chapter 13 that the mass of another star, which we will identify as a former planetary star, has been calculated at 2.1 solar masses. These results are too few in number to confirm the theoretical minimum, but they do point in that direction.

Inasmuch as this shortage of empirical information exists, in one degree or another, throughout the entire range of the white dwarf phenomena, it is again appropriate to call attention to the fact that the validity of the general principles and relations that have been, and will be, applied to explaining these phenomena has already been firmly established in physical fields where factual data are abundant and reliable. Thus, even though the correlations between theory and observation that are possible in such areas as that of the white dwarfs are too limited to provide positive confirmation of the validity of the theoretical conclusions, the fact that these conclusions are consistent with what is known from observation is sufficient, in conjunction with the validity of the principles on which they are based, to establish a strong probability that they are correct.

It was noted in Chapter 6 that some of the central stars of planetary nebulae are currently identified as Wolf-Rayet stars. The identification is based on their high temperatures and spectra that are similar to those of the massive Wolf-Rayets. In other respects these objects are quite different. As described by Smith and Aller, the central stars of the planetary nebulae are believed to have masses in the neighborhood of solar, and absolute magnitudes fainter than -3, whereas the Wolf-Rayet stars in the other class are believed to average about ten solar masses and to have absolute magnitudes brighter than -4. A comparison of typical stars of these two classes leads these authors to the conclusion that they have a "totally different evolutionary status." They admit that they "are led to wonder how many different stages of evolution can yield the Wolf-Rayet form of spectrum."134 "It is a further problem," says Anne B. Underhill, "to understand . . . why this physical state may occur early in the life of a massive star (the Wolf-Rayet stars of Population I) and late in the life of a star of small mass (the Wolf-Rayet stars of the disk population)."75

This problem is resolved by our finding that the planetary stage follows almost immediately after the Wolf-Rayet stage; that is, the true Wolf-Rayet is a late pre-explosion star, whereas the central star of a planetary nebula currently confused with the Wolf-Rayet is an early post-explosion star. The similarity of the spectra is no doubt due to the existence of very high temperatures in both cases, and to the presence, in both classes of stars, of matter from the stellar interiors that has been brought to the surface by explosive activity.

According to the general description of the dwarf star cycle given in Chapter 4, and the identification of the evolutionary pattern in Fig. 19, the white dwarfs ultimately make their way back to positions on the spatial main sequence.
We have now traced the course of one group of these stars along lines parallel to the main sequence from their points of entry into the observable region to positions somewhere near the low temperature limit in the neighborhood of 30,000 K. As has been indicated, the next move will be upward toward the main sequence. Before discussing the nature of the change that takes place in this final dwarf stage, however, it will be advisable to examine another group of white dwarf stars that also has to undergo this final transition to the material status.

CHAPTER 12

Ordinary White Dwarfs

The previous discussion of the white dwarf stars has been directed at the products of the Type I supernovae, the explosions that take place at the temperature limit to which matter is subject. As already mentioned, a similar explosion, known as a Type II supernova, takes place when matter reaches an age limit. This is intrinsically a more violent process, and in its extreme manifestations it produces results that are quite different from those of the Type I supernovae. Discussion of these results and the manner in which they are produced will be deferred to the later chapters. At this time we will want to note that under less extreme conditions the results of the Type II supernovae are identical with those of Type I, except that the products are smaller. The explanation is that the unique character of the products of the extreme Type II supernovae is due to the ultra high level of the speed imparted to these products by the combination of a large explosion (that is, one involving a large star) and an extremely energetic process. The products of Type I supernovae do not reach this speed level, even though the exploding star is one of maximum size, because the process is less violent. Similarly, the products of a Type II supernova do not reach the ultra high level if the exploding star is small, even though they have the benefit of the very energetic process.

Although the age limit can be reached by stars of any size, and the white dwarf products of Type II explosions extend through a wide size range, the great majority of those that exist in the outer regions of the galaxies are small, simply because the great majority of the stars in these regions are small. Many of these small white dwarfs are below the minimum size of 1.1 solar masses that applies to the central stars of the planetary nebulae. Our next objective will be to examine the evolutionary course of these smaller stars, ordinary white dwarfs, as we will call them.

As we saw in Chapter 11, the 1.1 lower mass limit of the planetary nebula region is the white dwarf mass below which the energy content of the star is not sufficient to maintain a gaseous structure in gravitational equilibrium. This is analogous to the critical density of the giant stars. It should be understood that the term "giant" refers to the volume, not to the mass. Most of these giants are low mass stars. Such stars, whose first stage of evolution carries them along the path OA, are unable to reach the critical density in the dust cloud (gaseous) condition, and have to call upon the compressive forces of the aggregate in which they are located to aid in developing a compact, gravitationally stable core in order to increase the average density to the required level. What exists here is a situation in which the inward-directed forces operate to force the matter of the star into a gravitationally stable condition. When the star is too small for this condensation to take place in a single operation, applicable to the star as a whole, it proceeds on a two-component basis, in which one component, the central core, is compressed to the condensed gas state, while the remainder of the stellar aggregate continues on the gaseous basis, gradually converting to condensed gas as the star moves down toward the main sequence.

In the case of the white dwarfs there is no gravitational problem, as the white dwarf aggregate is always under gravitational control, but the smaller stars, those with masses less than 1.1 solar units, do not have enough energy content to maintain the surface temperature at 100,000 K in the gaseous state. Hence they, too, have to proceed on a two-component basis, developing a condensed gas component like that of the smaller stars of the giant class. However, the fact that the motion of the constituents of the white dwarf is in time rather than in space introduces some differences. Because of the inverse density gradient in the white dwarf stars this relatively heavy condensed gas component takes the form of an outer shell, rather than that of an inner core. Then, the presence of this shell reduces the radiation temperature to that of a condensed gas surface. This is the same surface condition that exists along the line B'B, the 30,000 K temperature line of the planetaries. Thus the 100,000 K line above point A' becomes a 30,000 K line below that level.

The existence of an outer shell has been recognized observationally, but because of the prevailing theory of white dwarf structure this has been interpreted as a zone of ordinary matter surrounding the hypothetical degenerate matter of which the white dwarf, according to current astronomical theory, is composed. Greenstein reports that there is a non-degenerate envelope about 65 miles deep.135 On the basis of our findings, the thickness of the shell at the time of entry into the observable region depends on the size of the star. A white dwarf just below the critical 1.1 mass needs only a thin shell, but the required thickness increases as the mass of the star decreases.

As brought out in Chapter 11, the central star of a planetary nebula moves down the CM diagram along the line A'B', or a parallel line above it, to the level at 30,000 K where the energy content of the outer thermal units of this gaseous aggregate is on the boundary between motion in time and motion in space. Here the transition from units of temporal motion to units of spatial motion takes place. But since the ordinary white dwarfs have to develop an outer shell of condensed gas before they become observable, the energy content of their outer thermal units is already below the unit speed boundary. A transition to motion in space on the basis of the full-sized unit is therefore impossible. These small stars have to cool to a lower critical temperature at which their outer thermal units are at the level of the smaller compound units of the condensed gas state, a state in which the atoms occupy equilibrium positions inside unit distance, in what we have called the time region.

The 30,000 K and 100,000 K temperatures along the line at the left of the CM diagram are critical values in the sense in which this term was used in the discussion of the luminosity scale of the diagram. We may therefore deduce, by analogy with the situation in the region above the main sequence, that the drop from 100,000 K to 30,000 K at the point A' involves one of the compound natural units of luminosity. The 30,000 K equivalent of the line A'B' is then a parallel line one unit lower in the diagram.
This line constitutes the lower boundary of the zone occupied by the ordinary white dwarfs.
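The boundary lines that have now been identified can be collected into a single tabulation. The unit values follow the assignments made earlier in this chapter (point B at +1 on the compound luminosity scale); the encoding is only a bookkeeping summary, not a calculation from the theory.

    # Positions of the boundary lines, in compound luminosity units.
    boundary_units = {
        "spatial main sequence through B": +1,
        "extension of AB, one unit below B": 0,
        "line A'B' (temporal unit level)": -1,
        "lower boundary of the ordinary white dwarf zone": -2,
    }

    def entry_temperature_k(mass_solar):
        # Above the 1.1 solar-mass critical level the star enters the
        # observable region as a gaseous aggregate at 100,000 K; below
        # that level the condensed gas shell limits it to 30,000 K.
        return 100000 if mass_solar >= 1.1 else 30000

    for line, units in boundary_units.items():
        print(units, line)
    print(entry_temperature_k(1.5), entry_temperature_k(0.6))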

Above point A' the constituents of the white dwarf stars are moving freely in time; that is, they constitute gaseous aggregates in time. It follows that they radiate from the surface corresponding to an inverse volume. The more massive stars of this group (the hot subdwarfs and the planetary stars) have the greater inverse volume and are therefore the more luminous. Below point A' the outer layers of the stars are in the condensed gas state, in which they are confined within limited volumes of space. These stars radiate from the spatial surface, the surface corresponding to a direct volume. The more massive stars of this class have the larger inverse volume, and therefore the smaller direct volume (a theoretical conclusion that, as we have noted earlier, is confirmed observationally). Consequently, they are less luminous than the smaller stars of the same class.

There may be some question as to why there should be a difference between the radiation pattern of the gaseous state and that of the condensed gas state when the motion is in time, since we do not encounter any such difference in dealing with motion in space. The stars on or above the spatial main sequence radiate in space regardless of their physical state. The answer to this seeming contradiction is that condensed gas aggregates radiate in time if they are condensed in time, whereas they radiate in space if they are condensed in space. The outer shells of the white dwarfs condense in space.

From their initial locations along the entry line, the cooling ordinary white dwarfs move down the CM diagram along lines parallel to the spatial main sequence in the same manner, and for the same reasons, as the planetary stars, within the relatively narrow band between A'B' and the lower zone boundary. Since the radiation from these stars is in space, the color-temperature relation applicable to this radiation is the same as that which applies to the stars of the spatial main sequence. The evolutionary lines of the ordinary white dwarfs therefore continue to their individual temperature limits, rather than terminating at the extension of the low temperature limit of the planetaries. Consideration of the question as to the location of these low temperature limits of the ordinary white dwarfs will be deferred to the next chapter. At this time we will merely note that the evolutionary lines followed in the cooling of these stars do not reach the position of the lower portion of the spatial main sequence, which bends sharply downward beyond 4000 K. James Liebert reports that there is a cut-off between magnitudes 15 and 16.

This fact that the range of the white dwarfs stops short of the main sequence has come as an unwelcome surprise to the astronomers. Greenstein makes this comment: "An anomaly has been found in the number and relative frequency of cool, red white dwarfs. It has been expected that these would be very common, but, in fact, objects more than 10,000 times fainter than the sun are rare."136

Main sequence dwarfs are observed all the way down to about magnitude 19, and it has been anticipated that the white dwarf population would extend to comparable levels. The observed cut-off at a higher luminosity confronts astronomical theory with an awkward problem. The evolutionary sequence, according to orthodox ideas, is protostar to main sequence star to red giant to white dwarf to black dwarf.
One of the biggest problems that arises in the attempts to reconcile this theoretical sequence with the observations is how to account for the changes in mass that are required if this sequence is followed. As already noted, the theorists are experiencing major difficulties in accounting for the reduction in mass that is necessary if the red giant is to evolve into a white dwarf. They have no explanation at all for an increase in mass during the evolution of the star. The existence of main sequence stars smaller than the white dwarf minimum thus puts them into a difficult position. Liebert, arguing from the premises of accepted theory, states that the observed cut-off implies either (1) an error in the calculations, or (2) a decreased white dwarf birthrate about 10¹⁶ years ago.137

In the context of the theory of the universe of motion, aggregates of intermediate speed matter are produced in all sizes from the maximum downward. But the smaller aggregates are unable to complete their consolidation into single compact entities. As explained in Chapter 7, in connection with the formation of planetary systems, the pattern of gravitational forces in the aggregates of intermediate speed matter favors complete consolidation of the larger aggregates, but becomes more favorable to multiple products as the total mass decreases. On this basis, the reason for the absence of white dwarfs below a mass of about 0.20 solar units is not that white dwarf aggregates of smaller sizes do not exist, but that these smaller aggregates are not able to complete their consolidation, and remain as groups of objects of less than stellar size.

The existence of this lower mass limit applying to the white dwarfs is one of the reasons for the big difference in luminosity between the planetary stars and the ordinary white dwarfs that has puzzled the observers. As Richard Stothers puts it, there is a "luminosity gap" between the coolest planetary star and the hottest of the ordinary white dwarfs.138 Some attempts have been made to explain this gap in terms of stellar composition. Greenstein, for instance, tells us that "The only possible explanation of their low luminosity is that hydrogen must now comprise less than 0.00001 of the mass of a dwarf star."135 Like so many other astronomical pronouncements, what this assertion really means is that its author is unable to find any other explanation within the limits of currently accepted theory.

From the theory of the universe of motion we find that the "gap" between the luminosities is mainly due to the reduction in the luminosity of the ordinary white dwarfs by reason of the outer shell of condensed gas that characterizes these stars. The luminosity difference is increased by the existence of the mass minimum, as this eliminates the small stars that would be the most luminous members of this class.

We can now see the significance of the group of planetary nebulae immediately below the line A'B' on the CM diagram. These are stars that are small enough to require an outer shell, but so close to the dividing line that the shell is too thin to block much of the radiation from the interior. These out-of-place planetaries are found only in a very limited region of the diagram, because as soon as they cool a little more and move down the evolutionary path a short distance the shell thickness increases enough to cut off the planetary type of radiation.

In our examination of the behavior of ordinary white dwarfs we will, as usual, draw upon various sources in the astronomical literature for the observational information that is needed, but the specific comparisons with the theoretical pattern will deal mainly with a group of 60 white dwarfs on which all of the major physical properties have been determined—absolute magnitudes and color indexes by J. L. Greenstein (reference 139), and masses and temperatures by H. L. Shipman (reference 140). Fig. 21 is the CM diagram for this group of stars.

All but three of the masses of the sample group fall within the evolutionary band that has been identified. The average decrease in luminosity is more rapid than that indicated by the theoretical evolutionary lines, but this faster drop is due to known causes. At the upper end of the evolutionary band the entire distribution of masses is shifted upward to some extent. In this early white dwarf stage, when the outer shells are relatively thin, some of the radiation from the interior is evidently penetrating the shell, increasing the luminosity beyond the normal levels. This is a weaker form of the same effect that was noted in connection with the existence of planetary nebulae below the line A'B'. In the remainder of the band the average luminosity gradually drops away from the theoretical line in the same manner, and for the same reason, that the spatial main sequence turns downward in its lower sections. This is a result of the gradual decrease in the frequencies of the radiation from the stars, which shifts more and more of the radiation into the optically invisible ranges as the temperature drops.
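The inverse ordering of mass and luminosity below point A' can be illustrated with a rough radiating-surface calculation. The inverse scaling of the spatial radius with mass is an assumed stand-in for the exact theoretical relation, so the numbers mean nothing in absolute terms; only the ordering matters.

    import math

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def relative_luminosity(mass_solar, surface_temp_k):
        # Below A' the star radiates from its spatial surface, and the
        # larger (time-region) mass corresponds to the smaller spatial
        # volume; an inverse radius-mass scaling is assumed here.
        radius = 1.0 / mass_solar          # arbitrary units
        area = 4.0 * math.pi * radius**2
        return SIGMA * area * surface_temp_k**4

    # At equal surface temperature the more massive star is the fainter.
    print(relative_luminosity(0.3, 15000) > relative_luminosity(0.9, 15000))  # True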

The general relation between mass and luminosity is definitely inverse, as required by the theory. While the positions of the individual members of the three mass groups identified by symbols in Fig. 21 are somewhat scattered, those of the smaller stars are all in the upper portion of the populated areas of the diagram, while those of the group with masses above 0.8 solar units are all in the lower portion. Most of the stars of the intermediate group, those with masses between 0.4 and 0.8, are close to the average.

As noted earlier, the lower section of the evolutionary band of the ordinary white dwarfs is not cut off at the 0.4 color index in the manner of the planetary stars, but continues on to a limit somewhere in the neighborhood of magnitude 16. The faintest star in the sample group has magnitude 15.73. The number of stars below the 0.4 color index in this sample group is rather small, but this is undoubtedly a matter of observational selection. All of the white dwarfs are relatively dim, and the observational difficulties resulting from this cause increase as the stars age and become less luminous. The available data on these objects therefore come preferentially from the earlier, more luminous, stars. As we will see later, "the most numerous kind of white dwarf" is the cool, dim type of star that populates the lower luminosity range, beyond a color index of 0.3 or 0.4, the same range that is so poorly represented in the sample group. The question as to what happens to the stars that reach the lower limit of this white dwarf evolutionary path will be the subject of discussion in the next chapter.

The foregoing findings as to the evolutionary course of the ordinary white dwarfs now enable us to extend the theoretical CM diagram of the planetary stars, Fig. 19, to include the stars of this smaller class, and to show how the zone occupied by these ordinary white dwarfs is related to the positions of the other classes of stars. For comparison, this enlarged diagram, Fig. 22, also indicates the location of the ordinary white dwarfs as identified in the illustration accompanying the previously cited article by M. and G. Burbidge.102

The spectra of the white dwarfs show a considerable amount of variation, and on the basis of this variability these stars are customarily assigned to a number of different classes. Greenstein distinguishes nine classes, and the designations that he has applied in his tabulation141 are in general use. However, the basic distinction appears to be between the hydrogen-rich stars, designated as Class DA, a few hybrid classes, particularly DAF, and the balance, which are helium-rich. Much of the discussion in the literature is carried on in terms of DA and non-DA. H. M. Van Horn, for instance, comments that "The existence of white dwarfs with non-DA (hydrogen deficient) spectra has not yet been satisfactorily explained."142

Because of this lack of an acceptable explanation, the astronomers have not reached any consensus on the question as to whether the observed differences that have led to the distinction between the various classes reflect actual differences in composition, or are products of processes that take place during the evolution of the stars. The theoretical development in this work leads to the conclusion that these differences are primarily evolutionary. Before discussing the theoretical reasons why changes take place in the atmospheres of the white dwarfs as they age, we will first examine the evidence which demonstrates that these stars do, in fact, undergo significant changes as they progress along their evolutionary paths.

In this case, as is usual in astronomy, the observations give us only what amounts to an instantaneous picture, and do not specifically indicate whether the regularities that are observed are time related. This is the reason for the existing uncertainty. But the new information developed in the foregoing pages has now provided a basis from which we can approach the question. As shown in Fig. 21, the ordinary white dwarfs of different masses follow parallel cooling lines on the CM diagram, with the smaller stars at the top of the luminosity range and the larger ones at the bottom. From this demonstrated fact that the lines parallel to the main sequence in the white dwarf region of the diagram are lines of equal mass, as the theory requires them to be, it follows that on a plot of mass against the B-V color index, Fig. 23, where the lines of equal mass are horizontal, the distance from the left side of the diagram along any one of these lines represents time; that is, it measures the amount of evolutionary development.

The general trend obviously is from the hydrogen-rich stars, Class DA, to the classes marked x on the diagram, the DC group, we might call them, all of which are classified by Shipman as helium-rich. At temperatures above a dividing line in the vicinity of 8000 K the great majority are DA stars, with only about ten percent in the DC group. Below this temperature all of the stars fall into the DC group or a transitional class. A specific segment of the general transition from the DA status to that of the DC group can be recognized in the larger stars. Greenstein defines a class DAF, in which the hydrogen lines characteristic of the DA spectrum are weaker, and Ca II lines are present. This is followed by Class DF, in which Ca II appears but hydrogen does not. Evolution through the entire sequence DA, DAF, DF is taking place in stars with masses above 0.50 solar units.

Next let us turn to the question as to what causes this shift from a hydrogen atmosphere to a helium atmosphere as the white dwarf ages. The astronomers have no answer to this question. As explained by James Liebert in a 1980 review article, "The existence of nearly pure helium atmosphere degenerates over a wide range of temperatures has long been a puzzle."137 The "cooler helium-rich stars," he reports, are "the most numerous kind of white dwarf." Furthermore, the concentration of still heavier elements in the atmospheres of these stars is also too high to be explainable on the basis of current astronomical theory. Since the interior of the white dwarf is in an unusual physical state (this is true regardless of whether the matter is "degenerate," as seen by conventional theory, or expanding into time, as seen by the theory of a universe of motion), the matter in the atmosphere, which is normal, must have been accreted from the environment. Liebert points out that "The metals in the accreted material should diffuse downward, while hydrogen should remain in the convective layer. Thus the predicted metals-to-hydrogen ratios would be at or below solar (interstellar) values, yet real DF-DG-DK stars have calcium-to-hydrogen abundance ratios ranging from about solar to well above solar."137

The only possibility that Liebert is able to suggest as a solution to the "puzzle" is that the hydrogen accretion must be "blocked by some mechanism." This is clearly a "last resort" kind of hypothesis, lacking in plausibility, and wholly without factual support. On the other hand, the explanation of the structure of the white dwarf derived from the postulates that define the universe of motion requires just the kind of situation that is found by the observers. As Liebert says, on the basis of conventional theory, "the metals in the accreted material should diffuse downward." But on the basis of the theory described in this work, the center of the white dwarf is the region of least density. According to this theory, then, the hydrogen should "diffuse downward," and the metals should remain in the outer regions. The helium, too, should remain behind while the lighter hydrogen sinks. The observed distribution of the three components, hydrogen, helium, and metals, in the classes of stars identified by Liebert is exactly what the theory of the universe of motion tells us it should be in the older white dwarfs.

The presence of hydrogen atmospheres in the earlier stars, and the gradual nature of the transition to helium atmospheres, are due to slow transmission of physical effects across any boundary between motion in space and motion in time. Originally, the white dwarf, located in the middle of the debris left by the supernova explosion, was able to accrete matter at a relatively rapid rate. Inasmuch as these accreted explosion products consisted mainly of hydrogen, the accretion gave the white dwarf a hydrogen atmosphere. But there was a small proportion of helium and other heavier elements in the accreted matter. Long-continued preferential movement of hydrogen into the stellar interior therefore resulted in a gradual increase in the proportion of heavier elements in the atmosphere. Meanwhile the accretion rate was decreasing as the white dwarf and its giant companion swept up the residue from the explosion. Eventually the incoming hydrogen passed into the interior of the star as fast as it arrived. Beyond this point, which we have located in the vicinity of 8000 K, the atmosphere of the white dwarf is predominantly helium. In view of the complete inability of the astronomers to find any tenable explanation of these helium atmospheres within the limits of accepted physical and astronomical theory, the agreement with the theory of the universe of motion is impressive.

This is an appropriate point at which to emphasize one of the most significant aspects of a general physical theory, one that derives all of its conclusions in all physical fields by deduction from a single set of basic premises, independently of any information from observation. The development of such a theory not only produces explanations for known phenomena that have hitherto resisted explanation, but also, because of its purely theoretical foundations, is able to supply explanations in advance for phenomena that have not yet been discovered. Items of this anticipatory character have had only a minor impact on the presentation in the preceding pages of this volume, as the subject matter thus far covered has been confined almost entirely to phenomena that were already known prior to the first publication of the theory of the universe of motion in 1959. But the remainder of this volume will deal mainly with astronomical phenomena that have been discovered, or at least recognized in their true significance, since 1959. The explanations that will be given for these phenomena will be taken directly from the 1959 publication, or derived by extension of the findings described therein. One entire chapter (Number 20) will be devoted to describing the predictions made in 1959 with respect to the origin and properties of a then unknown group of objects that are now identified with the quasars, pulsars, and related objects. The phenomenon that we are now considering, the existence of helium atmospheres in certain classes of white dwarf stars, is a more limited example of the same kind of anticipation of the observational discoveries. Here the explanation was provided before the need for it was recognized.
The essential feature of this explanation is the inverse density gradient. The existence of this inverse gradient is not an ad hoc assumption that has been formulated to fit the observations, in the manner of so many of the "explanations" offered by conventional theory. It is something that is definitely required by the basic postulates of the theory of the universe of motion, and was so recognized, and set forth in the published works, long before the existence of the helium atmospheres was reported by the observers, and the need for an explanation of this seeming anomaly became evident. The 1959 publication stated specifically that "The center of a white dwarf star is the region of lowest density."

Once the existence of the inverse density gradient was recognized, the presence of helium atmospheres in the older white dwarf stars could have been deduced, independently of any observations, if the investigations had been extended into more detail. This was not feasible as a part of the original project, because of the limited amount of time that could be allocated to astronomical studies in an investigation covering the fundamentals of all major branches of physical science. The answer to the problem of the helium concentration was, however, available for immediate use as soon as the problem was specifically recognized.

In the pages that follow, this experience will be repeated time and time again. We will encounter a long succession of recent discoveries—some of a minor character, like the helium atmospheres; others that have a major significance to astronomy—and we will find simple and logical explanations of these discoveries ready and waiting in the physical principles that were previously derived from the postulates of the theory of the universe of motion. This ready availability of deductively derived answers to current problems is something that conventional astronomical theory does not have. The astronomers first have to make the discovery, and then look for an explanation of what they have found. Almost all important new discoveries come as surprises.

Thus it is to be expected, in a rapidly growing field of knowledge, that there will be many phenomena that are still unexplained, or not satisfactorily explained, in terms of accepted theories and concepts. This situation is not looked upon as particularly serious, inasmuch as explanations of a more or less plausible character can reasonably be expected to be forthcoming for most of these items as more observations are made and the general level of knowledge in the relevant areas rises. But the prevalence of these issues of a work-in-progress nature tends to obscure the fact that among the unexplained phenomena there are some that clearly cannot be reconciled with the accepted theories, and that therefore provide definite proof that there is something seriously wrong in the currently prevailing structure of theory.

Spontaneous movement of heavy atoms against the density gradient does not occur in the real world. Technetium cannot rise from the core of a normal star to the surface through an overlying volume of hydrogen. Helium and the metals cannot remain on the surface of a normal star, or a highly condensed star, while hydrogen sinks to the center. Inasmuch as the observations show that technetium is present in the surface layers of some stars, and the heavier elements do remain in the surface layers of some of the white dwarfs, it is evident that the current theories are wrong in some essential respects. In the first of these cases, there is adequate evidence to show that technetium is present in stars of normal characteristics; that is, stars in which matter is not being ejected from the interior explosively.
It then follows that the technetium is not produced in the core of the star in accordance with the prevailing ideas. In the white dwarf situation, there is adequate evidence to demonstrate that the concentration of heavy elements in the outer layers of certain classes of these stars is greater than that in the matter that is being accreted from the environment. Here, then, the hydrogen is preferentially sinking into the stellar interior. In this case it necessarily follows that the white dwarf is not a normal star, or a star composed of "degenerate" matter, but a star with an inverse density gradient.
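The contrast on which this conclusion rests can be reduced to a single rule. The atomic weights below are standard values; the function encodes nothing more than the direction of settling in the two density structures discussed above.

    # Direction of gravitational settling under the two density structures.
    ATOMIC_WEIGHT = {"hydrogen": 1, "helium": 4, "calcium": 40}

    def sinks_to_center(element, inverse_gradient):
        # Normal star: density greatest at the center, so heavy atoms sink.
        # White dwarf (inverse gradient): density least at the center, so
        # the lightest element sinks while the heavier ones remain in the
        # outer layers.
        heavier = ATOMIC_WEIGHT[element] > ATOMIC_WEIGHT["hydrogen"]
        return not heavier if inverse_gradient else heavier

    for el in ATOMIC_WEIGHT:
        print(el, "sinks in a white dwarf:", sinks_to_center(el, True))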

CHAPTER 13

The Cataclysmic Variables

The white dwarf situation is a good example of the way in which an erroneous basic concept can cause almost endless confusion in an area where the information from observation is erroneously interpreted. This is one of the two most misunderstood areas in astronomy (aside from cosmology, which belongs in a somewhat different category), and it is significant that the other badly confused area, the realm of the quasars and associated phenomena, is another victim of the same basic error: a misunderstanding of the cause of the extremely high density of such objects as white dwarfs and quasars.

The wrong conclusion as to the nature of the very dense state of matter leads to an equally wrong conclusion as to the ultimate destiny of the stars that attain this state: the conclusion that they must, in the end, sink into oblivion as black dwarfs, cold, lifeless remnants that play no further part in the activity of the universe. This is the basis for the assumption, already discussed, that the white dwarfs must have evolved from the red giants. Extension of this line of thought then leads to the conclusion that, except for "freaks," the stars of the high density classes should line up in some kind of an evolutionary sequence. As previously noted, the position of the planetary nebulae in the CM diagram has been interpreted by the astronomers as indicating that they are the first products of the unidentified hypothetical process that carries the red giants into the white dwarf region. It then follows that the central stars of the planetary nebulae must evolve into the ordinary white dwarfs. Shklovsky regards this as incontestable. "There can be no question," he says, "but that the stable object into which the nucleus of a planetary nebula evolves should be a white dwarf."143 But even this essential step in the hypothetical evolutionary course runs into difficulties. Aller and Liller give us this assessment of the situation:

Our evidence indicates that they [the central stars of the planetary nebulae] evolve into white dwarfs, but we do not yet know whether they represent an intermediate stage for most stars or not. Neither do we know from what specific kinds of stars they may evolve.144

This problem persists all the way down the line. The theorists not only have difficulty in explaining how the planetaries evolve from the red giants, and how the ordinary white dwarfs evolve from the planetaries; they are also confronted with the problem of how to account for the existence of a variety of high density objects for which their evolutionary sequence has no place. The novae, for instance, must fit into the picture somehow. But there does not seem to be any place for them in the astronomers' version of the evolutionary path. "Nova outbursts are too rare to be a typical stage in stellar evolution,"145 says Robert P. Kraft. Because of the lack of any explanation consistent with the accepted theories of stellar evolution, there is a rather general tendency to dismiss the novae and related objects, the cataclysmic variables, as aberrations. For example, one astronomy textbook offers this comment: "Very little is known about the reason for a nova's outburst. It appears that something has gone wrong with the process of nuclear energy generation in the star."146

Development of the theory of the universe of motion now shows that the planetaries and the ordinary white dwarfs follow parallel, rather than sequential, evolutionary paths. All of these dwarf stars enter the observable region along a critical temperature line at the left of the CM diagram, and move downward and to the right along parallel lines as they cool (evolutionary stage 3). On reaching the temperature at which a transition to motion in space takes precedence over further cooling of the atoms moving in time, a temperature that is determined by the stellar mass, each star converts to motion in space. This change takes it upward on the CM diagram (evolutionary stage 4). The general nature of the conversion process is the same for all of these stars, but the specific character of the observable results depends on the magnitudes of the factors involved. Our next objective will be to examine the details of this process.

As successive portions of the intermediate speed matter of which the two classes of white dwarf stars are composed cross the unit speed boundary in their continuing loss of thermal energy, they form local concentrations of gas—bubbles, we may say—with particle speeds in the range below unity. Because of the inverse density gradient in the interior of the white dwarf star, these gas bubbles move downward to the center, the location of lowest density, and accumulate there. Some interchange takes place between the gas and the surrounding intermediate speed matter, tending to convert part of the gas back to intermediate speeds, but this interchange is slower than the oppositely directed movement across the unit boundary that produces the gas in the outer regions. A gas pressure therefore builds up at the center of the star. When this pressure is high enough, the compressed gas breaks through the overlying material, and the very hot matter from the interior is exposed briefly at the surface of the star, increasing its luminosity by a factor that may be as high as 50,000. The star also becomes an X-ray emitter. The significance of this emission will be discussed in Chapter 19.

Within a relatively short time (astronomically speaking) the small amount of matter brought to the surface by the outburst cools, and the star gradually returns to its original status. A white dwarf is inconspicuous and, since the first observed events of this kind could not be correlated with previously identified objects, they were thought to involve the formation of entirely new stars. As a result, the inappropriate term nova was applied to this phenomenon. From the foregoing description it is apparent that the nova process is periodic. As soon as one gas accumulation is ejected, the compressive and thermal forces in the interior of the star begin working toward development of a successor.
Inasmuch as the gravitational forces operating within the star are gradually expanding it toward the condition of equilibrium for motion in space represented by the spatial main sequence (that is, they are drawing the constituent atoms closer together in time), the resistance to the gas pressure that builds up in the center of the star decreases as the star moves through this stage of its existence. The decreasing resistance shortens the time interval between explosions. The first event of this kind may not occur for a very long time after the beginning of the observable life of the star, but as the star approaches closer to the point of full conversion to motion in space the time interval decreases, and a number of novae have repeated within the last 100 years.

Novae are relatively infrequent phenomena, and observationally difficult because of the relatively short duration of the active period, and the rapid changes that take place during this time. Meaningful information about them is consequently limited. The theoretical conclusions with respect to this stage of the evolution of the stars on the dwarf side of the main sequence can therefore be compared with observation only to a very limited extent. We will have to be content, in most cases, with a showing that the theoretical findings are not inconsistent with what has been observed.

Two of the brightest novae, T Coronae Borealis and RS Ophiuchi, are in the class known as recurrent novae, having repeated three or four times during the period in which they have been subject to observation. This is another name that is not very appropriate, as some novae of the more common "classical" type have also been observed to repeat their outbursts, and theoretical considerations indicate that all will eventually repeat many times. T Coronae is estimated to have a mass of 2.1 solar units,147 which puts it, and presumably RS Ophiuchi, in the class of the larger white dwarfs, those that were formerly the central stars of planetary nebulae. This large mass is consistent with the high luminosity of the two novae that have been mentioned.

The nature of the nova process is the same regardless of the size of the star that is involved. In all cases there is a pressure build-up that eventually breaks through the overlying layers of the star. But there are differences in the rate of pressure increase, and in the weight of matter through which the confined gas must force its way in order to escape, and the variability in these factors results in major differences in the character of the outbursts from different classes and sizes of stars. In the white dwarfs of the larger (planetary) class, the luminosity and temperature changes required to move a star from the point on the evolutionary line where it begins its final transition to motion in space to the appropriate main sequence position on the line segment BC are relatively small, averaging about three magnitudes, and they are accomplished quite rapidly. This accounts for the short interval between the outbursts of these stars.

On the other side of the dividing line the situation is quite different. The first stars of the smaller class, the ordinary white dwarfs, not only enter the observable region at a much lower luminosity, but undergo a greater decrease in luminosity and temperature as they cool, so that when they arrive at the point where they are ready to begin the transition from motion in time to motion in space they have a long way to go, as Fig. 21 clearly indicates. The time between outbursts is correspondingly long. On the other hand, the magnitude of the outburst is not related to the amount of energy decrease involved in the transition, but to the size of the star, which determines the resistance to escape of the confined gas.
Even the largest of the novae produced by ordinary white dwarfs are therefore less violent than those of the T Coronae class, although their range of magnitudes is greater. Initially they repeat only at very long intervals, too long for more than one event to have occurred during the time that observations of these phenomena have been carried on.

The observers classify novae as slow, fast, or very fast, depending on the rate at which the luminosity develops and returns to normal. Aside from details of the spectra, which are not being covered in this work, available quantitative information about these objects includes the maximum and minimum luminosity, together with the difference between the two: the total luminosity range. The distances to the novae are not known, and the absolute magnitudes are therefore unavailable. The most significant luminosity measurement is the total range, which is independent of the distance, except to the extent that there has been absorption of light in passing through the intervening matter. Table III compares the ranges of the group of novae tabulated by McLaughlin (reference 148) with the assigned classifications and with the number of days required for the luminosity to decline seven magnitudes, a rough check on the validity of the classification.

Some general conclusions can be drawn from this information. Theoretically the earliest outbursts of the largest novae should be the fastest, and should have the maximum magnitude range, since these largest stars are at the bottom of the white dwarf evolutionary band. Both the rate of luminosity change and the magnitude range should decrease as the white dwarf star ages. The mass does not change significantly. The slowest novae with the smallest magnitude range should therefore be those in which the stars are at the low end of the nova size range, and also near the end of their nova stage. In between these two extremes, the magnitude range is determined by the size and age of the nova. Average range may indicate either an old large nova, or a young small one, or one that is near average in both respects.

Table III
NOVAE

Nova          Range          Class   Decline
              (magnitudes)           (days)
CP Pup         16.6           VF       140
V450 Cyg      >14.0           S         —
DQ Her         13.6           S        8880
EL Aql         13.5           F         —
GK Per         13.3           VF       300
CP Lac         13.2           VF       154
V476 Cyg       12.5           VF       170
V603 Aql       11.9           VF       260
Q Cyg          11.8           VF       250
RR Pic         11.5           S       1000
CT Ser        >11.0           F?        —
V630 Sgr       11.0           VF       123
T Aur          11.0           S       1800
V528 Aql       10.7           F         —
DK Lac         10.5           F        500
V465 Cyg       10.1           S?        —
V360 Aql      >10.0           VF        —
V606 Aql        9.9           F        320
DL Lac          9.8           F        300
V604 Aql       >9.2           F        230
XX Tau         >9.0           F
V356 Aql        9.0           S
HR Lyr          8.7           S
EU Ser         >8.6           F
T CrB*          8.6           VF
DM Gem          8.5           VF
V841 Oph        8.3           S
DO Aql         >7.9           S
DN Gem          7.9           VF
V849 Oph       >7.6           S
T Pyx**         7.6           S
V1017 Sgr**     7.5           S
WZ Sge**        7.4           F
RS Oph*         6.7           VF

* "recurrent"

The conclusions to which the observations point are the same features that we find when we apply pure reasoning to the properties of space and time as defined in the postulates of the Reciprocal System of theory. The explosive event that is required by the theory produces exactly the kind of an association of three related objects—a central galaxy with a radio galaxy on one side and a quasar diametrically opposite—that Arp has identified in his studies. The ultra high speed imparted to the quasar by the tremendous amount of energy released in the galactic explosion exists in a second dimension of motion, and provides a second redshift component, related to, but distinct from, the normal recession redshift, and the mathematical statement of that relation, as derived from the theory, is identical with the relation between the measured values. While the pattern of redshift values illustrated in Fig. 25 is conclusive in itself, it does not exhaust all of the corroboration of the theory that we can extract from Arp's associations.

The distances of the radio emitters from the central galaxy also have a significance in this respect. As explained in Chapter 15, gravitation is effective in all three scalar dimensions, and therefore operates against the explosion-generated motion as well as against the normal recession. As a result, the net explosion speed is initially small, and increases with the distance in the same manner, except for the two-dimensional effect, as the recession speed. On the other hand, since the greater part of the explosion speed is initially applied to overcoming the effect of gravitation, which operates within the fixed spatial reference system, there is a rapid change of position in the reference system during this initial period when the net total speed, including the scalar speed not capable of representation in this reference system, is quite small. The rate of change of position then decreases as gravitation is gradually overcome and the net speed increases. Thus the theory leads to the decidedly unconventional conclusion that the faster the quasar moves in the explosion dimension, the less its position in space changes.

According to the theory, the relative spatial speed of the quasar, the component that manifests itself by changing the quasar position in space, is the difference between 1.0, the speed of light, and the explosion component of the quasar redshift, 3.5z½ in the quasars of Table IV. The relative speed of the radio galaxy is the average outward speed of the stars that fail to reach the 1.0 speed level, and are therefore ejected in space rather than becoming constituents of the quasar. Since the distribution of these speeds was initially the tail of a probability curve from 1.0 downward, the average at the time of observation should be somewhat above 0.5, and nearly the same in all cases.

Here, again, Arp's associations provide a sample that we can test to see if this theoretical requirement is met. In these associations we can measure the ratio of the distances of the two ejected objects from the central galaxy, since the three objects lie on a straight line. Inasmuch as the distance traveled since the explosion is proportional to the average spatial speed, the distance ratio thus determined is also the ratio of the average speeds. Applying this ratio to the spatial speed in the explosion dimension derived from the redshift measurement, we arrive at the speed of the radio galaxy.

For this test we are able to use only those associations in which all three components (central galaxies, quasars, and radio galaxies) have been clearly identified. Four of the associations listed in Table IV are within the 10,000 km/sec range in which identification of the central galaxy is feasible, but the radio galaxy in association 148 is unidentified optically. Its approximate location is known, and it can therefore be included in the study, along with the three associations that are clearly identified, with the understanding that the results on 148 are subject to some uncertainty. Table V shows the observational data on these four associations, and the speeds of the radio galaxies as calculated from these data.

Even where the central galaxy cannot be identified, it is still possible that the radio galaxy associated with a particular quasar may have been correctly identified, as the radio galaxies can be detected at distances well beyond those at which the features distinguishing a "peculiar" galaxy can be recognized. The correlations in our analysis have therefore been made on the basis of the radio galaxy, if the necessary redshift measurement is available, rather than the central galaxy, for all distances greater than that of association 148, as indicated by the symbol R in the third column of the table.

TABLE V

Association   Excess     Spatial   Distance   R.G.
Number        Redshift   Speed     Ratio      Speed
134             .155      .845       .73       .62
160             .312      .688       .91       .62
125             .566      .434      1.35       .59
148             .695      .305      2.57       .78
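The arithmetic behind the last column of Table V, described in the paragraph that follows, is a single multiplication. As a minimal check, in Python, with the table's values hard-coded (none of this is part of the theory itself, merely a verification of the tabulated numbers):

# Sketch: recompute the R.G. Speed column of Table V.
# Per the text, the relative spatial speed of the quasar is 1.0 minus the
# excess (explosion) redshift, and the radio galaxy speed is that spatial
# speed scaled by the measured distance ratio.
associations = {
    134: (0.155, 0.73),
    160: (0.312, 0.91),
    125: (0.566, 1.35),
    148: (0.695, 2.57),
}
for number, (excess_redshift, distance_ratio) in associations.items():
    spatial_speed = 1.0 - excess_redshift
    rg_speed = distance_ratio * spatial_speed
    print(f"{number}: spatial speed {spatial_speed:.3f}, R.G. speed {rg_speed:.2f}")
# Prints .62, .63, .59, .78; association 160 comes out .63 against the
# tabulated .62, a difference attributable to rounding.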

Column 2 of the table gives the explosion redshift of the quasar in the association identified in Column 1. Column 3 is the relative spatial speed of the quasar, the difference between unity and the value in Column 2. Column 4 is the measured distance ratio. Multiplying Column 4 by Column 3, we arrive at the speed of the radio galaxy, relative to an explosion speed of 1.0. These results, given in Column 5, meet the requirements set forth earlier; that is, they arrive at essentially the same speed for all four radio galaxies (if we make allowance for the lack of certainty in the position of the radio galaxy in association 148), and this calculated speed is within the limits that we can establish from more direct considerations. Furthermore, a very wide range of quasar speeds is included, as the theoretical spatial speed of the quasar 3C 273 in association 134 is twice that of 3C 345 in association 125, and almost three times that of 3C 254 in association 148. The downward trend in the relative distance of the quasars from the central galaxy as the speed increases is unmistakable.

Verification of a theoretical conclusion of this nature, one that is nothing short of outrageous in the context of conventional theory, is particularly significant because it shows that a drastic change in fundamental theory is required before the full range of physical phenomena can be understood. The customary process of adjustment and modification of existing theory by means of additional ad hoc assumptions is clearly incapable of dealing with discrepancies of this magnitude. No amount of tinkering with the conventional theory of motion can reconcile a decrease in the rate of change of spatial position with an increase in speed. Some new light on the general situation is indispensable.

A related phenomenon that is equally inexplicable in terms of conventional physical thought is the nearly constant separation of the radio emitting regions in most quasars. Although the distances to different quasars vary over an extremely wide range, the apparent separation of the two radio components is usually close to a constant value. For example, Table VI shows the separations (in seconds of arc) measured by D. E. Hogg,246 excluding three values that will be considered later.

Table VI
COMPONENT SEPARATIONS

Quasar      Separation     Quasar        Separation
3C 181          6.0        3C 273          19.6
3C 204         31.4        3C 275.1        13.2
3C 205         15.8        3C 280.1        19.0
3C 207          6.7        3C 288.1         6.4
3C 208         10.5        3C 336          21.7
3C 249.1       18.8        3C 432          12.9
3C 261         10.8        MSH 13-011       7.8
3C 268.4        9.4

Similar measurements by Macdonald and Miley include a substantial proportion of larger separations, but these authors comment that their list includes many objects in which the radio components are so far distant from the optical center that, in their words, "If the radio structures of the larger QSOs were not symmetric about the optical QSO they might not have been identified."247 This suggests that the quasars with the larger component separations represent a different group of objects, the members of which have a second observable set of laterally displaced components. Such a hypothesis is supported by a further comment from the investigators which indicates that, in some instances, both types of component separation are present in the same structures. "Many sources," they say, "have large scale structure but small scale components dominate."

The almost constant angular separation of such a large proportion of these radio components of quasars stands out as an observed fact for which conventional astronomical theory has no explanation. As expressed by K. I. Kellerman, "either: the linear dimensions of radio sources depend on redshift in just such a way as to cancel the geometrical effects of the redshift, or: the geometric effect of the redshift on apparent size is negligibly small."248 Since neither of these alternatives can be accommodated within the boundaries of conventional theory, astronomy, Kellerman says, is confronted with a paradox.

In approaching the question theoretically, we note first that the outward radial movement of the quasars is beyond the limits of the reference system, and it is therefore incapable of representation in that system. As explained in connection with the derivation of the applicable general principles in Volume II, motion in a second dimension is normally excluded from representation in the spatial reference system because the presence of motion in the original dimension preempts the full capacity of the reference system. But when representation of the motion in the original scalar dimension is ruled out for some reason, representation of the motion in the second dimension becomes possible. The lateral motion of the distant quasars is analogous to the lateral magnetic motion discussed in Volume II. As in electromagnetism, the motion in the second dimension of the intermediate speed range appears in the reference system with a direction perpendicular to the line of motion in the original dimension. In the case of the quasars, this direction is perpendicular to the line of sight.

The recession speed in the second dimension is the same as in the dimension coincident with the reference system, but as observed it is reduced by the interregional ratio, 156.444. Since it originates in a two-dimensional region, it is observed as a second power quantity. Thus the ratio of the lateral to the radial motion is (2/156.444)². In the terms in which the astronomers generally express the lateral displacement, this observable recession in the lateral direction amounts to 16.9 seconds of arc. Inasmuch as the outward motion of a quasar has a specific direction, as seen in the spatial reference system, the lateral motion is confined to one specific perpendicular line. As noted earlier, however, scalar motion does not distinguish between the direction AB and the direction BA. The lateral recession outward from point X is therefore divided equally between a direction XA and the opposite direction XB by the operation of probability. Matter moving translationally at upper range speeds thus appears in the reference system in two locations equidistant from the line of motion in the coincident dimension (the optical line of sight, in most cases), and separated by 33.8 seconds of arc.

It does not follow, however, that the separation observed from the earth will be this large. If the quasar is a distant one, no evidence of its existence can be detected here until the radiation has had time to travel the long intervening distance. When first received, this radiation will disclose only the situation that existed at the location and time of ejection,
before the lateral recession had begun. The progress of the recession will be revealed gradually by the radiation subsequently received, but the observed recession will lag behind the true magnitude by the time required for the travel of the radiation, until the observed separation reaches the limiting value. In the meantime, the separation will be observed at some value intermediate between zero and the maximum.
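The 16.9 and 33.8 second figures can be checked directly. A minimal sketch, on the assumption (mine, not stated in the text) that the ratio (2/156.444)² is to be read as a small angle expressed in radians:

import math

# Sketch: lateral-to-radial ratio (2/156.444)^2, read as a small angle in
# radians and converted to seconds of arc (the radian reading is an
# assumption made here to reproduce the quoted figures).
INTERREGIONAL_RATIO = 156.444
ARCSEC_PER_RADIAN = 180 * 3600 / math.pi      # about 206265

total_lateral = (2 / INTERREGIONAL_RATIO) ** 2 * ARCSEC_PER_RADIAN
per_direction = total_lateral / 2             # divided equally between XA and XB
print(f"per direction: {per_direction:.1f} arcsec")   # 16.9
print(f"separation:    {total_lateral:.1f} arcsec")   # 33.7, quoted as 33.8 (= 2 x 16.9)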

This explains why the observed separations vary, and are generally less than the calculated 33.8 seconds of arc. As can be seen from the foregoing explanation, these observed separations should be related to the time that has elapsed since the explosive event that produced the fast-moving products from which the radiation is being emitted. The relation of the optical and radio emissions provides a rough indication of this time. The ratio of these emissions is affected by the evolutionary changes that take place in the various stages of the existence of the quasar, but by limiting our consideration to a homogeneous group of objects we can minimize the effect of these changes. For such a group the radio emission should decrease with time, as the isotopic adjustment progresses toward completion, and the ratio of optical to radio emission should increase accordingly. The magnitude of this ratio should therefore give us an approximate measure of the relative quasar ages. An appropriate group of this kind consists of the six quasars in Hogg's list with redshifts above 1.00 for which luminosity data are available in the tabulations in Chapter 25. Examination of these data indicates that the approximate ratio (RL) of optical to radio luminosity is related to the separation of the radio components (S) by the expression S = 83RL + 3.0. Separations calculated on this basis are compared with Hogg's measurements in Table VII.

Table VII
COMPONENT SEPARATIONS

                         Separation
Quasar        RL       Calc.    Obs.
3C 204       0.279     26.2     31.4
3C 208       0.113     12.4     10.5
3C 432       0.112     12.3     12.9
3C 268.4     0.075      9.2      9.4
3C 181       0.033      5.7      6.6
3C 280.1     0.031      5.6     19.0
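The calculated column of Table VII comes straight from the expression S = 83RL + 3.0 just given; a quick check, with the RL values hard-coded from the table:

# Sketch: reproduce the "Calc." column of Table VII from S = 83*RL + 3.0,
# where RL is the approximate ratio of optical to radio luminosity.
quasars = {
    "3C 204":   0.279,
    "3C 208":   0.113,
    "3C 432":   0.112,
    "3C 268.4": 0.075,
    "3C 181":   0.033,
    "3C 280.1": 0.031,
}
for name, rl in quasars.items():
    separation = 83 * rl + 3.0
    print(f"{name}: {separation:.1f} arcsec")
# Prints 26.2, 12.4, 12.3, 9.2, 5.7, 5.6 -- the tabulated calculated values.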

All but one of these correlations are within the range of variation that can be expected in view of the diversity of the conditions affecting the individual quasars. The reason for the discrepancy in the values applicable to the quasar 3C 280.1 is not known, but it could be the result of a second, very recent, outburst that has renewed the radio emission. On this basis, the low RL value is produced by the radiation from the second explosive event, whereas the 19.0 figure is the separation between the products of the earlier event.

The separations greater than about 35 seconds of arc that are included in the reports that were quoted, those of the three quasars omitted from the tabulation of the Hogg results, and a larger number from the work of Macdonald and Miley, are due to a different cause. They are the results of actual motion of the ultra high speed dust and gas from which the radio emission originates, motion that has taken this material away from the location where the optical radiation is being produced. This is the process by which the separation of the radio components of the radio galaxies originates, and it will be examined in connection with the discussion of these objects in Chapter 26. As we will see there, this process is not operative beyond a distance of 1.00 in the explosion dimension (total redshift 1.081). Thus there should be no component separations exceeding 33.8 seconds of arc by more than the observational error at redshifts above 1.081. This is consistent with the findings of both of the investigations cited.

In addition to the major explosive events that produce the larger radio aggregates, there is also a continuing series of explosions of a more limited character (to be explained in Chapter 24) in the older quasars. In some instances these result in scattered centers of emission along the normal lateral line, but a large proportion of the total energy is generated by the radioactivity of the short-lived isotopes, which is observed at or near the optical location. As we will see shortly, there is also another factor that confines some of the radio emission in the older quasars to the center position. Thus there is a tendency toward three, rather than two, major locations of radio emission. The prevalence of the three-component pattern is illustrated in the data reported by Macdonald and Miley. These investigators say that only 6 of the 36 objects for which they determined radio structures are definitely double, whereas 23 have, or may have, a third component at the center. The remaining 7 are more complex.

The finding that the radio emission from the distant quasars originates mainly at the same spatial location as the optical emission, but that we see it at two or more locations in the reference system, is another conclusion that appears outrageous in the context of current physical thought, but like the equally unconventional findings previously discussed, it is in agreement with the physical observations, and provides the explanations for aspects of those observations that are in conflict with conventional astronomical theory. In reality, it is not a strange or unusual phenomenon; it is merely unfamiliar. Multiple images produced by other means—mirrors, for instance—are commonplace.

All radiation from a quasar is subject to the same considerations, but the stellar constituents from which the optical emission originates are usually moving at speeds below the two-unit level. Thus the optical position of a quasar normally shows no lateral displacement. In some stars, however, the internal speeds may be in the ultra high range. In that event, both the optical and the radio emissions originate from the laterally displaced locations. The recently discovered cases of "twin quasars," which are thought to be duplicate images produced by gravitational lenses, may well be single quasars with ultra high speed optically emitting components.

When the quasars have reached the point where their net speed exceeds two units and enters the cosmic range, the gravitational effect is inverted, and motion in time replaces motion in space. This eliminates the lateral recession in equivalent space that is responsible for the double character of the radio structure, as seen in the spatial reference system. The radiation from the quasar is still observable until the motion in time has continued long enough to destroy the status of the quasar as a spatial aggregate, and in the meantime this radiation is observed in the undisplaced radial location. Observations indicate that many of the oldest of the visible quasars are in this transitional stage. A substantial proportion of those quasars that, on the basis of criteria such as the presence of absorption redshifts, large radio emission, and high z values, are in an advanced stage of development, show no spatial extension other than that corresponding to the spatial dimensions of the optical objects. Thus the theory of the universe of motion provides an explanation of the major features of the quasar structures.
Kellerman's "paradox," we find, is simply a message from nature, and it is the same message that we get from our analysis of the redshifts in Arp's associations. It tells us that inasmuch as the lateral displacements, like the excess redshift, are directly related to the recession, and are therefore observable effects of motion, the conventional narrow view of motion, which limits it to speeds less than that of light and to effects that can be represented within a three-dimensional spatial system of reference, must be broadened. But this is not something new that we are just now finding out by examination of the astronomical situation. It is a direct consequence of the inherent nature of the motion of which the universe is composed, and it plays just as significant a part in the fundamental physical relations as in the astronomical phenomena we are now considering. The principles here being applied were developed deductively in the earlier volumes, and were there utilized in application to many physical phenomena. For example, the physical principle that explains why radio sources are double (or triple) is not peculiar to this particular application; it is a general property of scalar motion that has previously been shown to provide the explanation for such diverse phenomena as the induction of electric charges and the deflection of light by massive objects.

As demonstrated in this and the preceding chapter, the deductions from the Reciprocal System of theory, incorporating this more comprehensive view of the nature of motion, are in full agreement with the results of observation in the quasar areas examined thus far. In the pages that follow it will be shown that this one-to-one correspondence between the theoretical deductions and the observational results is maintained throughout the entire range of the quasar phenomena. Some of the features of the account of the origin and nature of the quasars thus derived are in conflict with current astronomical thought, to be sure, but this merely reveals the erroneous nature of much of the current thinking. For example, present-day theory sees no way in which the forces necessary to eject a galactic fragment can be built up within a galaxy. "Obviously a normal assemblage of stars cannot be hurled about like a snowball," says Arp. However, the observational evidence makes it clear that fragments are ejected under some circumstances; that is, they are hurled about like snowballs. Current astronomical literature is full of references to, and hypotheses dependent upon, ejection of "assemblages of stars" from galaxies. In explaining how this is possible, and indeed, inevitable in the normal course of galactic evolution, the Reciprocal System is simply filling a conceptual vacuum.

CHAPTER 23

Quasar Redshifts

Although some of the objects now known as quasars had previously been recognized as belonging to a new and different class of phenomena, because of their peculiar spectra, the actual discovery of the quasars can be said to date from the time, in 1963, when Maarten Schmidt identified the spectrum of the radio source 3C 273 as being shifted 16 percent toward the red. Most of the other identifying characteristics originally ascribed to the quasars have had to be qualified as more data have been accumulated. One early description, for example, defined them as "star-like objects identified with radio sources." But present-day observations show that in most cases the quasars have complex structures that are definitely un-starlike, and there is a large class of quasars from which no significant radio emission has been detected. But the high redshift has continued to be the hallmark of the quasar, and its distinctive character has been more strongly emphasized as the observed range of values has been extended upward. The second redshift measured, that of 3C 48, is 0.369, substantially above the first measurement, 0.158. By early 1967, when about 100 redshifts were available, the highest value on record was 2.223, and at the present writing it is up to 3.78.

Extension of the redshift range above 1.00 raised a question of interpretation. On the basis of the previous understanding of the origin of the Doppler shift, a recession redshift above 1.00 would indicate a relative speed greater than that of light. The general acceptance of Einstein's contention that the speed of light is an absolute limit made this interpretation unacceptable to the astronomers, and the relativity mathematics were invoked to resolve the problem. Our analysis in Volume I shows that this is a misapplication of these mathematical relations. In the situations to which those relations actually do apply, there are contradictions between values obtained by direct measurement and those obtained by indirect means, such as, for instance, arriving at a speed measurement by dividing coordinate distance by clock time. In these instances the relativity mathematics (the Lorentz equations) are applied to the indirect measurements to bring them into conformity with the direct measurements, which are accepted as correct. The Doppler shifts are direct measurements of speeds, and require no correction. A redshift of 2.00 indicates a relative outward motion with a scalar magnitude of twice the speed of light.

While the high redshift problem was circumvented in conventional astronomical thought by this sleight-of-hand performance with the relativity mathematics, the accompanying distance-energy problem has been more recalcitrant, and has resisted all attempts to resolve it, or to evade it. Reference was made to this problem in Chapter 21, but inasmuch as it constitutes a crucial issue, for which the theory of the universe of motion has an answer, while conventional theory does not, a review of the situation will be appropriate in the present connection.

If the quasars are at cosmological distances—that is, the distances corresponding to the redshifts on the assumption that they are ordinary recession redshifts—then the amount of energy that they are emitting is far too great to be explained by any known energy generation process, or even any plausible speculative process. On the other hand, if the energies are reduced to credible levels by assuming that the quasars are less distant, then conventional science has no explanation for the large redshifts. Obviously something has to give. One or the other of these two limiting assumptions has to be abandoned. Either there are hitherto undiscovered processes that generate vastly more energy than any process now known, or there are hitherto unknown factors that increase the quasar redshifts far beyond the normal recession values.

For some reason, the rationale of which is difficult to understand, the majority of astronomers seem to believe that the redshift alternative is the only one that requires a revision or extension of existing physical theory. The argument most frequently advanced against the contentions of those who favor a non-cosmological explanation of the redshifts is that a hypothesis that requires a change in physical theory should be accepted only as a last resort. What these individuals are overlooking is that this last resort is the only thing left. If modification of existing theory to explain the redshifts is ruled out, then existing theory must be modified to explain the magnitude of the energy generation. Furthermore, the energy alternative is much more drastic, inasmuch as it not only requires the existence of some totally new process, but also involves an enormous increase in the scale of the energy generation, a rate far beyond anything now known. All that is required in the redshift situation, on the other hand, even if a solution on the basis of known processes cannot be obtained, is a new process.
This process is not called upon to explain anything more than is currently recognized as being within the capability of the known recession process; it merely has to account for the production of the redshifts at less distant spatial locations. Even without the new information derived in the

development of the theory of the universe of motion, it should be evident that the redshift alternative is by far the better way out of the existing impasse between the quasar energy and redshift theories. It is therefore significant that this is the explanation that emerges from the application of the Reciprocal System of theory to the problem. Such considerations are somewhat academic, as we have to accept the world as we find it, whether or not we like what we find. It is worth noting, however, that here again, as in so many instances in the preceding pages, the answer that emerges from the new theoretical development takes the simplest and most logical path. Indeed, the answer to this quasar problem does not even involve breaking as much new ground as expected by those astronomers who favor a non-cosmological explanation of the redshifts. As they see the situation, some new physical process or principle must be invoked in order to add a "non-velocity component" to the recession redshift of the quasars. But we find that no such new process or principle is needed. The additional redshift is simply the result of an added speed; one that has hitherto escaped recognition because it is not capable of representation in the conventional spatial reference system.

The preceding chapter explained the nature and origin of the second component of the redshifts of the quasars, the explosion-generated component, and showed that the validity of this explanation is confirmed by an analysis of the three-member "associations" identified by Halton Arp. In this present chapter we will examine the quasar redshifts in more detail. As indicated in the preceding pages, the limiting value of the explosion speed, and redshift, is two net units in one dimension. If the explosion speed is divided equally between the two active dimensions of the intermediate region, the quasar can convert to motion in time when the explosion component of the redshift in the initial dimension is 2.00, and the total quasar redshift is 2.326.

At the time Quasars and Pulsars was published, only one quasar redshift that exceeded the 2.326 value by any substantial amount had been reported. As pointed out in that work, the 2.326 redshift is not an absolute maximum, but a level at which conversion of the motion of the quasar to a new status, which it will ultimately assume in any event, can take place. Thus the very high value 2.877 attributed to the quasar 4C 05.34 either indicated the existence of some process whereby the conversion that is theoretically able to occur at 2.326 is delayed, or else was an erroneous measurement. Inasmuch as no other data bearing on the issue were available, it did not appear advisable to attempt to decide between the two alternatives at that time. In the subsequent years, many additional redshifts above 2.326 have been found, and it has become evident that extension of the quasar redshifts into these higher levels is a frequent occurrence. The theoretical situation has therefore been reviewed, and the nature of the process that is operative at the higher redshifts has been ascertained. As we have seen, the 3.5 redshift factor that prevails below the 2.326 level is the result of an equal division of seven equivalent space units between a dimension that is parallel to the dimension of the spatial motion and a perpendicular dimension.
Such an equal division is the normal result of the operation of probability where there are no influences that favor one distribution over another, but other distributions are not totally excluded. There is a small, but not negligible, probability of an unequal distribution. Instead of the normal 3½ - 3½ distribution of the seven units of speed, the division may become 4 - 3, 4½ - 2½, etc. The total number of quasars with redshifts above the level corresponding to the 3½ - 3½ distribution is relatively small, and any random group of moderate size—say 100 quasars—would not be expected to contain more than one, if any. A representative random group of quasars examined in Chapter 25 has none. An asymmetric dimensional distribution has no significant observable effects at the lower speed levels (although it would produce anomalous results in a study such as the analysis of Arp's associations in Chapter 22 if it were more common), but it becomes evident at the higher levels because it results in redshifts exceeding the normal 2.326 limit.

Because of the second power nature of the inter-regional relation, the 8 units involved in the explosion speed, 7 of which are in the intermediate region, become 64 units, 56 of which are in that region. The possible redshift factors above 3.5 therefore increase in steps of 0.125. The theoretical maximum, corresponding to a distribution to one dimension only, would be 7.0, but the probability becomes negligible at some lower level, apparently in the neighborhood of 6.0. The corresponding redshift values range up to a maximum of about 4.0. The largest redshifts thus far measured are as follows:

                        Redshift
Quasar        Observed   Calculated   Factor
2000-330        3.78        3.75      6.000
OQ 172          3.53        3.54      5.625
2228-393        3.45        3.47      5.500
OH 471          3.40        3.40      5.375
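The calculated values in this list can be verified from the relations already stated: for these very distant objects the recession component z is at its 0.326 maximum, and the total redshift is z plus the redshift factor times z½. A minimal sketch in Python:

import math

# Sketch: total quasar redshift = z + factor * sqrt(z), with the recession
# component z at its 0.326 maximum for these very distant objects.
Z_MAX = 0.326
for quasar, factor in [("2000-330", 6.000), ("OQ 172", 5.625),
                       ("2228-393", 5.500), ("OH 471", 5.375)]:
    total = Z_MAX + factor * math.sqrt(Z_MAX)
    print(f"{quasar}: {total:.2f}")
# Prints 3.75, 3.54, 3.47, 3.39; the last entry differs from the tabulated
# 3.40 only through rounding of z**0.5.  The same relation, evaluated at
# factors descending from 5.25 in steps of 0.125, generates the "Calc."
# column of Table VIII below.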

An increase in the redshift factor due to a change in the dimensional distribution does not involve any increase in the distance in space. All quasars with redshifts of 2.326 and above are therefore at approximately the same spatial distance. This is the explanation of the seeming inconsistency involved in the observed fact that the brightness of the quasars with extremely high redshifts is comparable to that of the quasars in the redshift range around 2.00.

The stellar explosions that initiate the chain of events leading to the ejection of a quasar from the galaxy of origin reduce a large part of the matter of the exploding stars to kinetic and radiant energy. The remainder of the stellar mass is broken down into gas and dust particles. A portion of this dispersed material penetrates into the sections of the galaxy surrounding the region where the explosions take place, and when one such section is ejected as a quasar it contains some of this fast-moving dust and gas. Since the maximum particle speeds are above those required for escape from the gravitational attraction of the individual stars, this material gradually makes its way outward, and eventually assumes the form of a cloud of dust and gas around the quasar—an atmosphere, we might call it. The radiation from the constituent stars of the quasar passes through this atmosphere, giving rise to absorption lines in the spectrum. The dispersed material surrounding a relatively young quasar is moving with the main body, and the absorption redshift is therefore approximately equal to the emission value.

The constituent stars grow older during the time that the quasar moves outward, and in the later stages of its existence some of these stars reach their destructive limits. These stars then explode as Type II supernovae in the manner previously described. As we have seen, such explosions eject one cloud of explosion products outward into space, and another similar cloud outward into time (equivalent to inward in space). When the explosion speed of the products ejected into time is superimposed on the speed of the quasar, which is already near the sector boundary, these products pass into the cosmic sector and disappear. The outward motion of the explosion products ejected into space is equivalent to an inward motion in time. It therefore opposes the motion of the quasar, which is outward in time. If this inward motion could be observed independently it would produce a blueshift, as it is directed toward our location, rather than away from it. But since this motion occurs only in combination with the outward motion of the quasar, its effect is to reduce the net outward speed and the magnitude of the redshift. Thus the slower-moving products of the secondary explosions move outward in the same manner as the quasar itself, and their inverse speed components merely delay their arrival at the point where conversion to motion in time takes place.

A quasar in one of these later stages of its existence is thus surrounded not only by an atmosphere moving with the quasar itself, but also by one or more independent clouds of particles moving away from the quasar in time (equivalent space). Each cloud of particles gives rise to an absorption redshift differing from the emission value by the magnitude of the inward speed imparted to these particles by the internal explosions. As pointed out in the discussion of the nature of scalar motion, any object that is moving in this manner may also acquire a vectorial motion. The vectorial speeds of the quasar components are small compared to their scalar speeds, but they may be large enough to cause some measurable deviations from the scalar values. In some cases this results in an absorption redshift slightly above the emission value. Because of the inward direction of the speeds resulting from the secondary explosions, all other absorption redshifts differing from the emission values are below the emission redshifts.

The speed imparted to the ejected particles has no appreciable effect on the recession redshift z. Like the increase in effective speed beyond the 2.326 level, therefore, the change has to take place in the redshift factor, and it is limited to steps of 0.125, the minimum change in that factor. The possible absorption redshifts of a quasar thus exist in a regular series of values differing by 0.125z½. Inasmuch as the value of z for the quasars reaches a maximum at 0.326, and all variability of the redshifts above 2.326 results from changes in the redshift factor, the theoretical values of the possible absorption redshifts above the 2.326 level are identical for all quasars, and coincide with the possible values of the emission redshifts.

Since most of the observable high redshift quasars are relatively old, their constituents are in a state of violent activity. This vectorial motion introduces a margin of uncertainty into the measurements of the emission redshift, and makes it impossible to demonstrate an exact correlation between theory and observation. The situation is more favorable in the case of the absorption redshifts because the measured absorption values for each of the more active quasars constitute a series, and a series relation can be demonstrated even where there is a substantial degree of uncertainty in the individual values. This is illustrated in Table VIII, which compares the measured absorption redshifts of three of the high redshift quasars with the theoretically possible values.
TABLE VIII
ABSORPTION REDSHIFTS

Redshift
Factor    Calc.   OH 471   4C 05.34   0830+115
5.25      3.33    3.34
5.125     3.25    3.25
5.00      3.18    3.18                 3.19
4.875     3.11    3.12
4.75      3.04
4.625     2.97    2.97                 2.95
4.50      2.90    2.91     2.88        2.91
4.375     2.83             2.81
4.25      2.75    2.77     2.77
4.125     2.68
4.00      2.61             2.59
3.875     2.54             2.49
3.75      2.47    2.47
3.625     2.40
3.50      2.33
3.375     2.25             2.22
3.25      2.18             2.18
3.125     2.11             2.13

The correlation is impressive in the case of the quasar OH 471. With the exception of the value at redshift factor 3.75, all of the observed redshifts are within 0.01 of the theoretical values, and only one of the first seven theoretically possible absorption redshifts is missing from the observed list. In this instance the agreement between the values is close enough to be conclusive in itself. The differences between the theoretical and measured values for the other quasars in the table are typically about 0.02. Since the interval between successive theoretical redshifts is only 0.07, the 0.02 discrepancy is uncomfortably large, when each correlation is considered individually. But when all of the values for the quasar 4C 05.34 are compared, as a series, with the series of theoretical values, the two series clearly agree. The data for the third quasar in the table are more scattered, but the general trend of the values is similar.

Because the explosion redshift is the product of the redshift factor and z½, each quasar with a recession speed (z) less than 0.326 has its own set of possible absorption redshifts, the successive members of each series differing by 0.125z½. One of the largest systems in this range that has been studied thus far is that of the quasar 0237-233, the observed redshifts of which are compared with the theoretical values in Table IX. An asterisk indicates an average of two or more measured values. Similar data for the quasars PHL 938 and 0424-131 are included in the tabulation. The theoretical absorption redshifts in this table are calculated from the observed emission redshifts (indicated by the symbol Em) and are therefore subject to any errors that may have been made in the determination of the emission shifts. Apparently no major errors are involved, as the correlations between theory and observation are just as close as in Table VIII, where the theoretical absorption values are independent of any measurements.

In general, the negative component added to the particle speed by the secondary (internal) explosions is limited to about 1.50, but in some cases absorption redshifts 2.00 or more below the emission values have been reported. The significance of these very low values is still uncertain. Since the speed of the secondary explosion products is independent of that of the main body of the quasar, the dimensional distribution of this speed may be different from that of the speed of the quasar, and it is not unlikely that the low redshifts are due to combinations of explosion speed and change in the dimensional distribution. There is no currently available information against which this hypothesis can be checked, and the very low values have therefore been omitted from the tabulations.

Absorption redshifts have been identified in many quasar spectra, but the number of rich systems thus far located is relatively small. This is significant because the length of the absorption series is an indication of the extent to which disintegration of the quasar by destruction of its constituent stars has taken place. Some quasars are already so badly disintegrated that they will probably never reach the point at which they convert to motion in time while they are still in the form of aggregates of stars. No doubt the number of these rich systems will be increased to some extent as more observations are made, but it seems evident, on the basis of the information now available, that they are a minority. Most of the larger quasars apparently convert to motion in time while the quasar structure is still practically intact.

TABLE IX
ABSORPTION REDSHIFTS

            0237-233           PHL 938            0424-131
Factor    Calc.    Obs.      Calc.    Obs.      Calc.    Obs.
3.5       2.223   2.223Em    1.955   1.955Em    2.165   2.165Em
3.375     2.154   2.176
3.25
3.125     2.019   2.013
3.0       1.948   1.955
2.875
2.75                         1.588   1.592      1.763   1.768
2.625                                           1.696   1.715
2.5       1.674   1.673*     1.465   1.463
2.375     1.605   1.623*                        1.561   1.579
2.25      1.536   1.526*                        1.494   1.532
2.125                        1.281   1.261
2.00      1.399   1.364      1.220   1.227*
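The theoretical values in Table IX can be regenerated from the emission redshifts alone, as the text describes: the emission value fixes z through the normal factor of 3.5, and each lower factor then yields one possible absorption redshift. A sketch of the computation (Python; solving the quadratic for z½ is the only step not spelled out above):

import math

# Sketch: given a measured emission redshift E = z + 3.5*sqrt(z), solve for
# sqrt(z), then compute the possible absorption redshifts z + f*sqrt(z)
# at redshift factors f descending from 3.5 in steps of 0.125.
def absorption_series(emission):
    root_z = (-3.5 + math.sqrt(3.5 ** 2 + 4 * emission)) / 2  # positive root
    z = root_z ** 2
    return {f / 1000: z + (f / 1000) * root_z
            for f in range(3500, 1875, -125)}  # factors 3.5 down to 2.0

for name, em in [("0237-233", 2.223), ("PHL 938", 1.955), ("0424-131", 2.165)]:
    series = absorption_series(em)
    print(name, {f: round(v, 3) for f, v in series.items()})
# For example, 0237-233 gives 2.154 at factor 3.375 and 1.674 at factor 2.5,
# matching the tabulated calculated values.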

The reason for the difference in behavior between these two classes of quasars is that two different processes are involved. Demise of the quasar within the spatial reference system is due to age. When the great majority of the stars that constitute the fast-moving galactic fragment that we call a quasar have reached the age limit of matter, and have individually disintegrated, the quasar ceases to exist as such, irrespective of where it may happen to be at that time. On the other hand, the disappearance of the quasar at the sector boundary, the point at which it begins moving in actual time, is a matter of speed, and consequently of distance. A quasar that originates at a distant location begins moving outward in time away from our location when the net total explosion speed relative to our galaxy,

including the component due to our distance from the point of origin of the quasar, reaches the two-unit level. However, the transition from gravitation in space to gravitation in time does not take place until the explosion speed alone is two units. A quasar that has left our field of view by reason of the sector limit is still observable from other locations closer to the point of origin until the gravitational transition occurs. Ordinarily a long period of time is required to bring a significant number of the stars of a quasar up to the age limit that initiates explosive activity. Consequently, absorption redshifts differing from the emission values do not usually appear until a quasar reaches the redshift range above 1.75. From the nature of the process, however, it is clear that there will be exceptions to this general rule. The outer, more recently accreted, portions of the galaxy of origin are composed mainly of younger stars, but special conditions during the growth of the galaxy, such as a relatively recent consolidation with another large aggregate, may have introduced a concentration of older stars into the portion of the structure of the galaxy that was thrown off in the explosion. These older stars then reach their age limits and initiate the chain of events that produces the absorption redshifts at a stage of the quasar life that is earlier than usual. It is unlikely, however, that the number of old stars included in any newly ejected quasar is ever large enough to generate the amount of internal activity that would lead to an extensive absorption redshift system. In the higher redshift range a new factor enters into the situation and accelerates the trend toward more absorption redshifts. A substantial amount of explosive activity is normally required in order to impart the increments of speed to the dust and gas components of the quasar that are necessary for the production of absorption systems. Beyond an explosion speed of two units, however, this limitation no longer applies. Here the diffuse components are subject to the environmental influences of the cosmic sector, which tend to reduce the inverse speed (equivalent to increasing the speed), thus producing additional absorption redshifts in the normal course of quasar evolution, without the necessity of further generation of energy in the quasars. Above this level, therefore, ‖the quasars . . . all show strong absorption lines.― Strittmatter and Williams, from whose review of the subject the foregoing statement was taken, go on to say that It is as if there were a threshold for the presence of absorbing material at emission redshifts of about 2.2.234 This empirical conclusion agrees with our theoretical finding that there is a definite sector boundary at redshift 2.326. In addition to the absorption redshifts in the optical spectra, to which the foregoing discussion refers, some absorption redshifts have also been found at radio frequencies. The first such discovery, in the radiation from the quasar 3C 286, generated considerable interest because of a rather widespread impression that the radio absorption requires an explanation different from that applicable to absorption at optical frequencies. The original investigators Concluded that the radio redshift is due to absorption by neutral hydrogen in some galaxy lying between us and the quasar. 
Since the absorption redshift is about 80 percent of the emission redshift in this case, they regarded the observations as evidence in favor of the cosmological redshift hypothesis. On the basis of the theory of the universe of motion, the radio observations do not introduce anything new. The absorption process that operates in the quasars is applicable to all radiation frequencies,

and the existence of an absorption redshift at a radio frequency has the same significance as the existence of an absorption redshift at an optical frequency. The measured radio redshifts of 3C 286 in emission and absorption are 0.85 and 0.69 respectively. At redshift factor 2.75, the theoretical absorption redshift corresponding to the emission value of 0.85 is 0.68.
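These figures are easily verified. On the relations employed throughout this volume, the total redshift is the sum of the recession component z and an explosion component f z½, where f is the applicable redshift factor: 3.5 in emission, and 2.75 for the absorption system in question. The short calculation below is an editorial verification of the quoted values, not part of the original study:

```python
import math

def recession_component(total_redshift, factor=3.5):
    """Solve z + factor * sqrt(z) = total_redshift for the recession
    redshift z (a quadratic in sqrt(z))."""
    x = (-factor + math.sqrt(factor**2 + 4 * total_redshift)) / 2
    return x**2

# 3C 286: emission redshift 0.85 at the normal factor 3.5
z = recession_component(0.85)
print(round(z + 2.75 * math.sqrt(z), 2))   # 0.68, the theoretical absorption value

# Sector limit: explosion component 3.5 * sqrt(z) equal to 2.00
z_limit = (2.00 / 3.5) ** 2
print(round(z_limit + 2.00, 3))   # 2.327; the 2.326 in the text is this value truncated
```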

CHAPTER 24

Evolution of Quasars

On the basis of the theoretical findings outlined in the preceding pages, the isotopic readjustment activity in the ejected fragment of the exploding galaxy that constitutes the quasar is at a high level in the initial stage immediately following the explosion. The radio emission is correspondingly strong. As time goes on the internal activity gradually subsides, and eventually radio emission ceases, or at least declines to unobservable levels. This radio-quiet stage comes to an end when the constituent stars of the quasar begin to arrive at their age limits in substantial numbers, and the explosions of these stars renew the isotopic adjustment activity. Radio emission then resumes.

The most distant of the quasars that have been identified belong to the class of radio-emitting quasars that follows the radio-quiet stage, Class II, as we will call it. Below a redshift of about 1.00, however, both classes are present, and in order to distinguish between the two it is necessary to identify some properties in which there is a systematic difference between the values applicable to the two classes of objects. Ultimately it should be possible to establish such lines of demarcation from pure theory, but for the present we will have to rely on semi-empirical distinctions. We can expect, for instance, that the evolution of the quasars from the early to the later stages will be accompanied by color changes. Identification of certain specific color characteristics that vary systematically with the quasar age will be sufficient for present purposes. A full explanation of the reason for the observed differences can be left for future investigation.

As noted earlier, the colors of astronomical objects are customarily expressed in terms of color indexes. At this time we will be concerned mainly with the U-B index, which is the difference between the magnitude measured through an ultraviolet filter and that measured through a blue filter. Later we will introduce the B-V index, the color index that we used in dealing with the radiation from the stars, which is the difference between the blue magnitude and the visual, or photographic, magnitude, obtained through a yellow filter. The empirical data indicate that in the quasars the U-B index is a rough indication of temperature.

In main sequence stars the U-B index is positive; that is, more energy is received in the blue range. (It should be remembered that the magnitude scale is inverse.) This index is also positive in ordinary galaxies, which are composed mainly of such stars. Because of the inversion that takes place when the speed of light is exceeded, the theoretical development indicates that in the quasars the color trend should be reversed, and the U-B index should be negative, indicating that more energy is received in the ultraviolet range. All of the U-B values quoted in this chapter are negative, and should be so understood.
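Since the color indexes carry much of the weight in what follows, it may help to make the definitions concrete. A minimal sketch; the magnitudes are hypothetical, chosen only to illustrate the sign convention:

```python
def color_index(mag_short, mag_long):
    """Color index: magnitude through the shorter-wavelength filter minus
    magnitude through the longer one (U-B: ultraviolet minus blue;
    B-V: blue minus visual)."""
    return mag_short - mag_long

# The magnitude scale is inverse, so a negative U-B means more energy
# is received in the ultraviolet than in the blue.
print(round(color_index(16.2, 16.8), 2))   # U-B = -0.6: ultraviolet-bright, as in quasars
print(round(color_index(17.0, 16.8), 2))   # U-B = +0.2: blue-bright, as in main sequence stars
```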

The number of quasars on which fairly complete measurements are now available runs into the hundreds, and it will not be feasible to analyze all of these data in a work of a broad general nature. Our examination will therefore have to be limited to a representative sample. The group of quasars studied in Quasars and Pulsars was one for which the redshifts and color data were tabulated by M. and G. Burbidge in their book Quasi-Stellar Objects.250 It includes all of the quasars for which these data were available up to the time of publication, and is therefore free from selection effects, except insofar as it favors the objects that are the most accessible to observation. No significant modifications of the conclusions drawn from the original study have been necessary, and the following discussion will be taken from the earlier publication, with the addition of the results from some subsequent studies, mainly of the same group of objects, those listed in the Burbidge Table 3.1.

The color indexes are determined primarily by the internal activity (the temperatures, together with the isotopic adjustments and their consequences) within the quasars. The pattern of change during the evolution of the quasars should therefore be capable of being evaluated on a purely theoretical basis. Such a project is beyond the scope of this work, but the general nature of the changes that take place in the indexes, as empirically determined, shows a definite qualitative correlation with the changes that theoretically occur in the generation and dissipation of energy. We can therefore set up some defining criteria for these quasar classes on a semi-empirical basis.

In the original study reported in Quasars and Pulsars the division was established at U-B = 0.60 and an absolute radio flux (R.F.) of 6.0, measured at 178 MHz. All quasars with U-B values less than 0.60 were placed in Class I. Those having higher U-B, but R.F. below 6.0, were found to be continuous with the low U-B quasars in their properties, and were also assigned to Class I. The high R.F.-high U-B quasars form a discontinuous group with quite different properties, and were identified as members of Class II.

Fig.26 shows the relation between U-B and R.F. for those of the Class I quasars listed in the Burbidge Table 3.1 for which the necessary information is available. This diagram is essentially equivalent to the astronomers' "two-color diagram," except that the scales have been inverted because we are here dealing with phenomena of an inverse region, the region of intermediate speeds. We will use both colors later, with and without the radio flux. It would be convenient to define the quasar classes on the basis of color alone, and some progress in this direction will be made when the B-V index is introduced later in this chapter, but color criteria that are capable of defining these classes in a manner that is free from ambiguity have not yet been developed.

When a quasar is first ejected from the galaxy of origin, its constituents are in a state of violent activity, and its radio flux is abnormally high. Only one of the quasars included in the group under consideration is still in this very early stage. This is 3C 196, which has

U-B = 0.43 and absolute R.F. = 43. Its redshift is 0.871, of which 0.054 is the normal recession component. In this work the symbol z is used to represent the normal recession redshift only. The explosion-generated component, usually 3.5z½ but subject to modification of the redshift factor 3.5, will hereafter be designated as q, and the total quasar redshift will be represented by the symbol Z. We then have the relation Z = z + q. We will also want to recognize that the redshift component q represents an equivalent distance (that is, a distance in the spatial equivalent of time), and we will call this the quasar distance. The quasar distance of 3C 196 is 0.817, which makes it one of the most distant Class I objects in the Burbidge table.

After the initial spurt of activity in a quasar dies down to some extent, it can be found in the zone designated "early" in the upper left of Fig.26. As it ages, and its activity drops still further, it moves to the right (toward lower R.F.) and downward (toward higher U-B). Ultimately it passes the zero radio flux line and enters the radio-quiet stage. The tabulated data show that at the time they were compiled no Class I quasars had been detected at quasar distances greater than 0.900, and no objects of this class that are old enough to have U-B indexes above 0.60 had been found beyond a quasar distance of approximately 0.700.

The significance of these figures lies in the fact that the high R.F. quasars with U-B indexes above 0.60 (Class II) can be detected beyond a quasar distance of 0.700. Indeed, we can follow them all the way out to the ultimate limit at 2.00. It is clear, then, that these more distant objects are not in the same condition in which they were when they were originally ejected. In order to move into the range in which they are now observed these distant Class II quasars must have undergone some process that released a substantial amount of additional radiant energy at radio wavelengths.

We have already deduced that such a process exists. Because of the long period of time during which a quasar is traveling outward before it arrives at the point where it converts to the cosmic status, some of its constituent stars reach the age that corresponds to the destructive limit. Secondary Type II explosions then occur. Obviously, this is just the kind of a process that is required in order to explain the emergence of a second class of radio-emitting quasars at distances beyond the observational limit of Class I objects.

It should be noted that a secondary series of explosions is a natural sequel to the original explosion of the giant galaxy. That original explosion was initiated as soon as enough of the oldest stars in the galaxy reached their age limits. The stars in the ejected fragment, the quasar, were younger, but many of them were also well advanced in age, and after another long period of time some of these necessarily arrived at the age limit. The original stellar explosions occurred outside the portion of the galaxy that was ejected as a quasar; that is, they took place in the interior of the giant galaxy of which the quasar is a fragment. Thus the radio emission from a Class I quasar is mainly a result of the extremely violent ejection. On the other hand, the secondary explosions occur in the body of the quasar itself, and the emission from the Class II quasars comes directly from the exploding stars.
This difference in origin is reflected in the relation between the U-B index and the radio flux, enabling us to utilize this relation as a means of distinguishing between the two classes. Fig.27 is a plot of U-B vs. R.F. for the Class II quasars in the Burbidge table. As can be seen, the points representing these objects fall entirely outside the section of the diagram occupied by the quasars of Class I. There is no indication in

this diagram that the Class II quasars follow any kind of an evolutionary pattern, but we will give this question some consideration later.

The quasar 3C 273 is of particular interest. This is definitely a Class II quasar, according to the criteria that have been defined, but its distance is far out of line with that of all other known objects of its class. No other Class II quasar in the group we are now examining has a quasar distance less than 0.315, whereas the quasar distance of 3C 273 is only 0.156. Ordinarily we can consider that when we measure the redshift of an object we are also determining its maximum possible age, as this age cannot be greater than the time required to move out to its present position. On this basis, we would interpret the low redshift of 3C 273 as an indication that it is an unusually young Class II quasar. This could be true. It was pointed out in the earlier discussion that the secondary explosions may occur relatively soon after the original ejection, inasmuch as some of the stars in the

galactic fragment that is ejected as a quasar may already be near the age limit at the time of the explosion. Very young Class II quasars are therefore definitely possible. But 3C 273 is not necessarily young. It may be very much older than the 0.156 quasar distance would indicate, as the general relation between redshift and age does not hold good at very short distances, where the magnitude of the possible random motion is comparable to that of the recession. Two galaxies that are separated by a distance in the neighborhood of their mutual gravitational limit can maintain this separation almost indefinitely, and the width of the zone in which there is little or no relative motion is increased considerably if there is random motion with an inward component. Hence 3C 273 may have spent a long time near its present position relative to our Milky Way galaxy, and may be just as old as the quasars at distances around 0.300. The observational information currently available is not adequate to enable making a definite decision between these alternatives, but where we have a choice between attributing an unusual situation to a chance coincidence that has resulted in an object of a relatively rare type being located very close to us, or attributing it to a unique characteristic which we know that the object in question does possess (its proximity), the latter is clearly entitled to the preference pending the accumulation of further evidence. We therefore conclude tentatively that 3C 273 is at least as old as the Class II quasars in the vicinity of quasar distance 0.300.

The position of 3C 273 in Fig.27 is indicated by a triangle. As can be seen from the diagram, this quasar is among the weaker radio emitters in its class (although we receive a large radio flux from it because it is so close) but, so far as its properties are concerned, it is not abnormal, or even a borderline case. Its proximity therefore provides a unique opportunity to observe at relatively close range a member of a class of objects that can otherwise be found only at great distances.

Further experience in application of the U-B criterion to distinguishing the quasar classes has indicated that it is somewhat ambiguous in the region of high U-B values and low radio emission. The selection criteria have therefore been adjusted by introducing the B-V index. In this region of high (more negative) U-B values, the location where the original criteria proved to be deficient, there are some quasars with low radio emission that have absorption redshifts. As brought out in Chapter 23, this is an indication of advanced age, which places them in Class II. These objects have B-V indexes in the upper portion of the full range of values, whereas the indexes of the relatively low redshift quasars in this region, which can be expected to be Class I objects, fall in the lower portion of this range. We may tentatively establish a dividing line at B-V = +0.15, and instead of assigning all quasars with low radio emission and high U-B indexes to Class I, we will put those members of this group that have B-V indexes above 0.15 in Class II. Until such time as we are able to base the selection criteria on a theoretical rather than an empirical foundation we can hardly expect precision, but this change to a two-color basis undoubtedly brings us closer to the correct line of demarcation. The revised color index pattern for quasars at distances below 1.00 is shown in Table X.
Included in this revision is a change in the U-B classification boundary from 0.60 to 0.59.

The identification of the evolutionary status of the quasars by color and radio flux (or distance) enables us to use the data on the other observable features of the quasars to verify the theoretical conclusions as to the differences between the classes, and between the earlier and later members of each class, something that we could not do if these features entered into the criteria by which the classes are identified. For instance, we have deduced from theoretical premises that the absorption which gives rise to the absorption redshift lines in the quasar spectra takes place in clouds of material accelerated to high inverse speeds by internal supernova explosions in these objects. No absorption occurs, therefore, until these explosions take place on a sufficiently large scale. As noted earlier, this point is not reached until the quasar is somewhere in the radio-quiet stage, while it is evident from the nature of the requirements for the production of multiple absorption redshift systems that multiplicity will not appear until a still higher level of activity is reached.

Table X
QUASAR CLASSES

Class      U-B (negative values)   B-V (positive values)   R.F.
I early    Below 0.59                                      Below 6.0
I late     Above 0.59              Below 0.15              Below 6.0
II early   Above 0.59              Above 0.15              Below 6.0
II late    Above 0.59                                      Above 6.0
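The criteria of Table X reduce to a short procedure. The sketch below is an editorial restatement of the classification logic (following the text, the U-B test takes precedence, and the tabulated U-B values are the magnitudes of negative indexes); it is not part of the original study:

```python
def classify_quasar(u_b, b_v, rf):
    """Tentative class assignment per Table X and the accompanying discussion.

    u_b : magnitude of the (negative) U-B index, e.g. 0.43 for U-B = -0.43
    b_v : B-V index (positive convention of the text), or None if unmeasured
    rf  : absolute radio flux at 178 MHz, on the relative scale of the text
    """
    # U-B below 0.59 places a quasar in Class I (early) regardless of radio flux.
    if u_b < 0.59:
        return "I early"
    # Among high-U-B objects, strong radio emission marks late Class II.
    if rf > 6.0:
        return "II late"
    # Low radio emission and high U-B: resolved by the B-V dividing line at +0.15.
    if b_v is not None and b_v > 0.15:
        return "II early"
    return "I late"

# Hypothetical index values, for illustration only:
print(classify_quasar(0.45, 0.10, 2.0))   # I early
print(classify_quasar(0.70, 0.25, 1.5))   # II early
```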

On the basis of this evolutionary pattern, we can deduce the following rules regarding the occurrence of absorption redshifts:

1. Class I quasars have no absorption redshifts.
2. Absorption redshifts approximating the emission values are possible throughout most of the radio-quiet region, and in the Class II radio-emitting quasars.
3. Absorption redshifts differing from the emission values by more than the amount that can be attributed to random motion are possible only in Class II quasars and relatively old radio-quiet quasars.

A check of 29 quasars with absorption redshifts listed in a 1972 compilation by Burbidge and O'Dell251 shows that all of these objects are in compliance with the foregoing rules when the assignment to classifications is made on the basis that has been specified. Here, then, we have a significant confirmation of the theoretical description of the conditions under which the absorption redshifts occur.

It was noted earlier that there would be a further advantage in being able to distinguish the two classes of radio-emitting quasars by color alone, without having to consider the magnitude of the radio emission. As indicated in Fig.28, which is a combination of Fig.26 and 27 with the B-V index substituted for the radio flux, this is almost accomplished by the resulting two-color diagram. There is some uncertainty along the dividing line at the 0.15 B-V index, and there is one deviant object, 3C 280.1, which has a B-V index of 0.13, although its redshift far exceeds the Class I limit. Otherwise, the two classes of quasars are located in separate portions of the diagram, as in Fig.26 and 27. The deviation of 3C 280.1 from the normal range of B-V indexes is probably due to the same cause as

the deviation of this quasar from the normal radio pattern, as shown in Table VII, Chapter 22.

Thus far we have been looking at the color indexes and radio flux as means of differentiating between the various classes of quasars. Now we will want to examine the significance of the changes that take place in these quantities during the evolution of the quasars. The magnitudes of all of the properties that we are now considering undergo evolutionary changes. Thus any one of them can serve as an indicator of quasar age. Obviously, however, the properties that change most uniformly with time are the best indicators, and on this basis we may consider the radio flux in Fig.26 and 27 as indicating the quasar age. These diagrams thus show how the quasar temperature (U-B) varies with age (R.F.). We now find that the B-V index follows approximately the same trend as the

radio flux, which means that this index is also an indicator of age, and can be substituted for the radio flux in the diagrams.

The U-B indexes of the earliest Class I quasars fall in the range from about -0.40 to -0.59. As these quasars age, the index moves almost horizontally to the vicinity of B-V = +0.15, and then turns sharply downward on the diagram (toward more negative values) as the radio-quiet zone is approached. The B-V index of the earliest Class I quasar in the sample under examination is 0.60. This index decreases as the quasar ages, reaching positive or negative values near zero at the radio-quiet boundary. The U-B indexes of the Class II quasars range from -0.59 to about -1.00, with no apparent systematic variation. The corresponding B-V indexes for most of the Class II quasars with relatively low redshifts (below 0.750) are in the neighborhood of +0.20. Beyond 0.750 the index increases, and the maximum values, around 0.60 or 0.70, are reached near the 1.00 distance. This peak is followed by a decrease to a level at which most values are comparable to those of the early members of this class.

While the actual mathematical relations between the internal activity of the quasars and their color indexes have not yet been examined in the light of the Reciprocal System of theory, the evolutionary pattern followed by the values of these indexes, as described in the preceding paragraph, shows a definite qualitative correlation with the changes that theoretically take place in the generation and dissipation of energy. In Class I the initial energy is high, but it gradually subsides, as no continuing source of large amounts of energy is available to these objects. Both color indexes respond to this change by moving toward more negative values as the quasars age.

In Class II the initial activity develops slowly, as it originates from many small events rather than from one big event, and the Class II quasars do not reach the high temperatures that are characteristic of the early Class I objects. The lowest (least negative) U-B values in Class II are in the neighborhood of the dividing line at -0.59, and the full range extends to about -1.00. The five radio-quiet quasars in the Burbidge tables for which color indexes are given have U-B indexes in the range from -0.78 to -0.90. It follows that only those quasars with U-B indexes between about -0.75 and the -0.59 limit can be regarded as having a temperature increment due to the secondary explosions, and even in this group, which includes about 40 percent of the total number of Class II quasars, the increment is not large. There is no systematic change with age in the U-B indexes of these Class II objects. This is understandable on the basis of the conclusion that this index is related to the temperature, as the temperature variations in Class II are due to events that can take place at any time during the Class II stage of quasar existence.

The pattern of values of the B-V index that was previously described indicates that the processes that determine the magnitude of this index are increasing in strength throughout the Class II stage. The specific nature of these processes has not yet been established, but obviously they are aspects of the motion of the quasar constituents, and for the present we can use the very general term "internal activity" in referring to them. As the quasar distance increases, the average age of the observable quasars rises, inasmuch as the age range is continually being extended.
This increase in age is accompanied by a corresponding increase in internal activity, and, below a quasar

distance of 1.00, by an increase in the B-V index. As already mentioned, this index decreases beyond the 1.00 distance, probably because of a decrease in the intensity of the internal activity due to the dimensional distribution of the various properties of the quasars that occurs in this distance range.

Inasmuch as the concentration of energetic material in the interior of the giant spheroidal galaxy from which a quasar was ejected was built up gradually over a long period of time, the isotopic adjustments taking place in this material at the time of the ejection are mainly of the long-lived types. Thus the decrease in radio emission and "internal activity" in the early quasar stage should be quite gradual. The temperature, on the other hand, is raised to a very high level by the explosion, and can be expected to take a very sharp initial drop. We would normally expect, therefore, that the early Class I stage would begin with an exponential decrease in the U-B index (temperature) as a function of the B-V index (age). But this is not at all what Fig.28 indicates. There is little, if any, decrease in the U-B index in the early Class I stage. Let us see, then, if we can account for the observed situation.

One obvious possibility is that the rapid decrease in the temperature precedes the earliest quasar stage. On this basis, the temperature of the newly ejected galactic fragment drops rapidly to a certain level, which we can identify as that of the earliest Class I quasars (U-B = -0.40 ± 0.10), remains at this level to about B-V = +0.15, and then resumes a rapid drop to a minimum level near 1.00. On first consideration, this may appear to be another of the combinations of ten percent fact and ninety percent speculation that are so common in the relatively uncharted areas of physics and astronomy. However, there actually is in existence a class of objects, not currently identified as quasars, that occupies the position in this U-B vs. B-V diagram in which the theoretical very early group of quasars would fall if the foregoing explanation of the nature of the early evolutionary pattern is correct.

Like the quasars, these objects are abnormally small, very powerful extragalactic bodies. Their existence was first recognized when the radiation from the "variable star" BL Lacertae was found to have some very peculiar properties. Several dozen similar objects have since been located. Because their properties are in some respects unique, they have been placed in a new astronomical category. However, no consensus has been reached on a name for these objects. As matters now stand, we have a choice between BL Lac objects, lacertids, and lacertae. The last of these terms will be used in the discussion that follows.

Most of the differences between the lacertae and the quasars are merely matters of degree, as would be expected if the lacertae are very young quasars. For instance, the evidence of association with giant galaxies is much stronger than in the case of the quasars. Joseph S. Miller describes the results of a recent (1981) investigation in which both lacertae and quasars were examined as follows: "We conclude that the data are consistent with all BL Lac objects being located in luminous giant elliptical galaxies . . . No galaxy components were definitely detected for any of the QSOs in this study."252 These observations are consistent with the status of the lacertae as pre-quasar explosion products.
The observed galaxies are the giants—spheroidal, in the terminology of this work—from which these objects were ejected. The parent galaxies are more likely to be

observed while the explosion products are still in the lacertae stage immediately following ejection because these products have not yet had time to travel very far. By the time the quasar stage is reached the ejected fragment has moved farther away from the galaxy of origin, and the association between the two is not necessarily evident.

All known lacertae are radio sources, whereas many, perhaps most, quasars are radio quiet. Here again, the difference is accounted for if we accept the conclusion that the lacertae are the initial products of the galactic explosions; that is, they are in the violent post-ejection stage. This conclusion is supported by the observation that "the BL Lac type objects appear to be very closely related to violently variable QSO's like 3C 279 and 3C 345 (two quasars of Early Class I)."253 The reason for the lack of radio-quiet lacertae is then evident. The violent internal activity that produces the radiation at radio frequencies continues throughout both the lacertae and early Class I stages.

It has been found that the bright lacertae are not associated with extended radio sources,254 whereas most quasars of the early classes do show such an association. Here, again, extreme youth is the explanation. The extended sources have simply not had time to develop.

The radiation from the lacertae includes optical, radio, and infrared components, all of which are to be expected from young explosion products moving at upper range speeds. No x-ray radiation has been detected. This, too, is consistent with the theoretical evolutionary status of the lacertae. There are no x-rays in very young explosion products, as we saw earlier in the case of the supernovae. Objects that lose energy after having been accelerated to upper range speed levels emit x-rays. By the time the ejected fragment reaches the quasar stage, some loss of energy has taken place, and production of x-rays has begun.

A clear picture of the relation between lacertae and quasars is provided by the respective colors. To illustrate this point, the colors of a representative group of lacertae254 have been added to Fig.28, and the enlarged diagram is shown in Fig.29. Quite clearly, the positions of the lacertae in this two-color diagram are fully consistent with the theoretical conclusion that these objects are the initial products of the galactic explosions, and precede the early Class I quasars in the evolutionary development of the ejecta from the explosions. Except for a few objects that have penetrated into the Class II region of the diagram, the evolutionary path of the lacertae joins that of the Class I quasars in a smooth transition, and the combined path follows the pattern that, as explained earlier, we would expect the galactic explosion products to follow in their early stages, on the basis of the theory that we have developed.

One more of the distinctive characteristics of the lacertae remains to be examined:

"The most intriguing difference between quasars and lacertae is that the quasars have strong emission lines in their spectra that the lacertae lack. The reason for this is not yet understood."255 (Disney and Veron)

This, too, is readily explained on the basis of the theoretical description of the immediate post-ejection conditions. The principle that plays the most important role in this situation has been encountered repeatedly in connection with other phenomena discussed in the

preceding pages, but it is one of those items that is so foreign to existing physical thought that it may be a source of conceptual difficulty for many readers. A more detailed discussion is therefore appropriate at this point, where the relevant observational evidence is more extensive than in the applications considered earlier.

For reasons already specified, the radioactivity and the accompanying emission of radiation at radio frequencies decline slowly throughout the Class I quasar stages. This decline is illustrated in Fig.30. Here the absolute radio emissions are plotted against the U-B color indexes (indicative of the temperature) in steps of 0.02 of the index. This procedure results in some values that are averages of two or three individual emissions, thereby smoothing the resulting curve to some extent. The circled points indicate the average values. Those not so identified are single values. As might be expected from the nature of the radio emission process, there are a few widely divergent values, but the general trend is clearly represented by a line such as that in the diagram, which conforms to the theoretical expectation.

The optical situation is more complicated, because the stellar component speeds that are produced by acquisition of a part of the explosion energy are much lower than those of the gas and dust particles that supplied the original explosion energy. These stellar components therefore return to the speed range below unity during the evolution of the Class I quasars. The effect on the optical emission is shown in Fig.31, which is similar to Fig.30, with the absolute optical luminosities substituted for the radio emissions. (The methods of calculating the absolute values of both the optical and the radio emissions will be explained in Chapter 25.) Here we see that the luminosity remains nearly constant in the initial range, up to about U-B = -0.50. It then begins a rapid rise to a point in the neighborhood of -0.59. At this point the emission drops by one half. During the late Class I stage, which follows, there is a moderately fast decrease to a level below 0.05 at the point of entry into the radio-quiet zone.
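The averaging procedure described for Fig.30 (grouping the individual emissions into U-B steps 0.02 wide and averaging within each step) is simple enough to state exactly. A minimal sketch, with invented sample values, since the tabulated data are not reproduced here:

```python
from collections import defaultdict

def bin_average(pairs, step=0.02):
    """Average the emissions of all objects falling in the same U-B step,
    mirroring the smoothing described for Fig.30."""
    bins = defaultdict(list)
    for u_b, emission in pairs:
        bins[round(u_b / step)].append(emission)   # index of the 0.02-wide step
    return {round(key * step, 2): round(sum(vals) / len(vals), 3)
            for key, vals in sorted(bins.items())}

# Invented sample values, purely to show the mechanics:
sample = [(-0.40, 1.2), (-0.41, 1.4), (-0.44, 0.9), (-0.45, 1.1)]
print(bin_average(sample))   # {-0.44: 1.0, -0.4: 1.3}
```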

Since the stellar component speeds that are primarily responsible for the magnitude of the optical luminosity are subject to the same conditions that apply to the radio emission (that is, a gradual decay of the effects of the explosive ejection), the peak in the luminosity curve is somewhat surprising on first consideration. But, in fact, two different processes are involved. The isotopic adjustments that produce the radio emissions decrease gradually in intensity as more and more of them are completed. The optical emission is a function of the temperature; that is, of the speeds of the component particles. In the low speed range with which we are all familiar, the rate of emission of radiation increases with the component speeds (the temperature). It might seem that a still further increase in the speed would lead to a still greater rate of emission. But in the universe of motion directions are reversed at the unit level. Consequently, the same factors that cause the radiation to increase as the component speeds approach unity from lower levels also operate to increase the radiation as unit speed is approached from the higher levels. It

follows that the radiation is at a maximum at the unit level, and decreases in both directions. Applying this principle to the Class I quasars, we see that in the U-B range as far as 0.45 the component speeds are nearly constant as they slowly approach their maximum, and begin to decrease. Then the continued radiation losses, with no comparable replacements, accelerate the rate of decrease. During this interval, while the speeds are still above unity, the decrease in speeds results in an increase in the rate of emission, which reaches a peak at unit speed. As the diagram indicates, this peak coincides with the dividing line between Classes I and II at U-B = 0.59. Beyond this point the speed drops into the range below unity, the range in which a decrease in temperature results in a decrease in the radiation. Like gravitation, the radiation process is operative in both of the active dimensions of the intermediate region. Half of the radiation is therefore eliminated at the unit speed level.

The lack of emission lines in the spectra of the lacertae is another result of this radiation pattern. The immediate post-explosion speeds of the gaseous component of the explosion products are very high, probably close to the two-unit level. As brought out in Chapter 15, this is the zero for motion in time, and the physical condition of an aggregate at this temperature is similar to that of an aggregate at a temperature near the zero of motion in space. The explanation of the lack of emission lines, then, is that the temperatures of the gases in the lacertae are too high to produce a line spectrum. At these extremely high

temperatures (low inverse temperatures) the aggregate is in a condition in time that is analogous to a solid structure in space, and like the latter it radiates with a continuous spectrum. This is another example of the same phenomenon that we noted in Chapter 16 in connection with the continuum emission from the Crab Nebula. By the time the quasar stage is reached, the temperature has dropped enough to give the aggregate the normal characteristics of a gas, including a line spectrum.

It was evident from the time of the earliest studies of the different classes of quasars, reported in Quasars and Pulsars, that the -0.59 value of the U-B index marks some kind of a physical division, and this was one of the criteria on which the classification of the quasars in that publication was set up. It can now be seen that the -0.59 U-B level corresponds to unit temperature.

The fact that the evolutionary path of the Class I quasars (including the lacertae) contains a horizontal section, rather than decreasing somewhat uniformly from the initial to the final state, as might be expected where there is no source of replacement for the energy that is being lost by radiation, is explained by the transition from two-dimensional to one-dimensional motion. The energy of the second dimension of motion in the intermediate speed range is analogous to the heats of fusion and vaporization. When the change to one-dimensional motion takes place, the energy of motion in the other dimension becomes available to maintain the temperature, and the U-B index, at a constant level for a time before the decreasing trend is resumed.

Incorporation of the lacertae into the path of development now completes the evolutionary picture of the Class I explosion products, from the time they are ejected from the galaxy of origin to their entry into the radio-quiet stage. Some of these objects may disappear during that stage, for reasons that will be explained in the next chapter. The remainder eventually undergo secondary explosions and attain the Class II status. There is no systematic relation between the temperature and age in Class II, because both the time at which the secondary explosions occur and their magnitude are subject to major variations. Each individual Class II quasar does, however, follow a course that eventually brings it to the point where it crosses the sector boundary and disappears.

There are many pitfalls in the way of anyone who attempts to follow a long chain of reasoning from broad general principles to specific details, and since this is an initial effort at applying the Reciprocal System of theory to the internal structural features of the quasars, it must be conceded that modification of some of the conclusions that have here been reached is likely to be necessary as observational knowledge continues to accumulate, and further advances in theoretical understanding are made in related areas. However, the general picture of the quasar structure and evolution derived from theory corresponds so closely with the information now at hand that there seems little reason to doubt its validity, particularly since that picture was developed easily and naturally from the same premises on which the earlier conclusions regarding the origin and nature of the quasars were based. It is especially significant that nothing new is required to explain either the existence or the properties of the quasars (including the lacertae). Of course, nothing new can be put into a purely deductive theory of this kind.
Introduction of additional hypotheses or ad hoc assumptions of the kind normally employed in the adjustment of theories to fit new observations is excluded by the basic design of the theoretical system, which calls for

deriving all conclusions from a single set of premises, and from these only. Some new principles and hitherto unknown phenomena are certain to be revealed by any new theoretical development of this magnitude, and many such discoveries have, in fact, been made in the course of the theoretical studies thus far undertaken. Such items as those utilized in the foregoing applications of the theory to the various aspects of the quasar situation (the status of all physical phenomena as more or less complex relations between space and time, the inversion of these relations at unit levels, the role of time as equivalent space, and the asymmetric transmission of physical effects across unit boundaries) are all new to science. But these are not peculiar to the quasars; they are general principles, immediate and direct consequences of the basic postulates, the kind of features that distinguish the universe of motion from the conventional universe of matter, and they were discovered and employed in a variety of applications decades before the quasar study was undertaken.

All of the novel principles deduced from theory and utilized in this work were explicitly stated in the initial presentation of the Reciprocal System of theory in the first edition of this work, published in 1959, years before the quasars were discovered. Furthermore, many of the consequences of these general principles, in the form of physical phenomena and relations, that are now seen to play important parts in explaining the origin and evolution of the quasars were likewise pointed out in detail in that 1959 publication, four years before Maarten Schmidt measured the redshift that ushered in the era of the quasar "mystery." The status of stellar aggregates as structures in positional equilibrium, which permits the building up of internal pressures in the galaxies and the ejection of fragments; the existence of two distinct divisions of the explosion products, ejected in opposite directions, one moving at normal speed and the other at a speed in excess of that of light; the reduction in the apparent spatial size of aggregates whose components move at upper range speeds; the generation of large amounts of radiation at radio wavelengths from the explosion products; and the eventual disappearance of the ultra high speed material were all derived from theory and discussed in the published work, not only long before the discovery of the quasars, but years before any definite evidence of the galactic explosions that produce the quasars was found.

CHAPTER 25

THE QUASAR POPULATIONS

Now that we have identified the different classes of quasars, located them in the evolutionary course of development, and established criteria by which to distinguish one from another, it will be of interest to undertake what we may describe as a census, to get an idea as to the relative numbers of observable objects of the various classes, the factors that are responsible for the differences between these classes, and the effect of the evolutionary development on these various populations.

The list of known quasars is continually being extended, both by increasing the capability of the available instrumentation, and by more use of the existing equipment. A complete survey

of the observable quasars is therefore impossible, as matters now stand. The best that we can do is to examine all those on which the necessary information was available up to some particular date. Under the circumstances there is no advantage to a very large sample. As the modern poll takers have demonstrated, a relatively small sample is adequate if it is actually representative. Rather than attempting to cover all of the quasars currently known, we will therefore review and update the results of a study made some years ago on the same group of quasars examined in the studies reported in the preceding chapter, those on which the relevant data were available in 1967.

The total number of quasars included in the 1967 tabulation by the Burbidges is 102, but color indexes were not available for 26 of these. The study was therefore confined to the other 76. Of these, 45, or sixty percent, were quasars of Class II. The spatial distribution of these objects is quite uniform out to a quasar distance of 1.00. On the two-dimensional basis that we have seen is applicable to the intermediate speed range, two independent distributions are possible in three-dimensional space. The existing quasars can be located either in the scalar dimension that is represented in the conventional spatial reference system, or in a dimension that is perpendicular to it. It follows that only half of the existing quasars are visible.

There are 20 visible Class II quasars within a 1.00 radius (quasar distance) and 5 within 0.50. Both of these figures represent the same density: 20 quasars in a sphere of radius 1.00 (on the two-dimensional basis the visible number varies as the square of the distance, and 20 × 0.50² = 5). We may therefore take this as the true density of Class II quasars observable in this distance range with the 3C instruments and procedures. The total number of these quasars in equivalent space is twice this number, or 40 per spherical unit. In the second quasar distance unit, from 1.00 to 2.00, there is another division between two perpendicular dimensions, which again reduces the visibility by one half, cutting the visible number to one quarter of the total. This means that where the actual quasar population remains unchanged, only 10 Class II quasars per spherical unit are visible in the quasar distance range from 1.00 to 2.00.

The number of Class II quasars calculated on the foregoing basis for spheres of successively larger radius is compared with the observed number in Table XI. There are a number of factors that cause some deviations from the theoretical distribution at very short distances, but the number of quasars involved is so small that the effect on the distribution pattern is negligible. Except for the normal amount of random fluctuation, the theoretical distribution is maintained throughout the quasar distance range up to about 1.80. Beyond this point there is a slow decrease as the normal limit at 2.00 (total redshift 2.326) is approached, and an increasing number of quasars become unobservable because they cross the boundary into the cosmic sector.

The relation of the number of visible quasars to the distance has been a matter of much interest to the astronomers because of the bearing that it has, or may have, on the question as to whether the density of matter in the universe is decreasing, as required by the Big Bang cosmological theory. This has been a hotly contested subject, but the present consensus, as reported by H. L.
Shipman, is that "Quasars were far more abundant in the early universe than they are now."256 But this conclusion is based on the assumption that the quasars are distributed three-dimensionally, and the data of Table XI that confirm the two-dimensional distribution, together with the corroborative evidence presented earlier, cut the ground out from under the astronomers' conclusions. From these data it is evident that there has been no change in the quasar density during the time interval represented by the quasar distance of

2.00. The close correlation between the calculated and observed quasar distributions not only demonstrates the uniformity of the quasar density throughout space, but also confirms the validity of the theoretical principles on which the calculations were based. It should be emphasized that this is not merely a case of providing a viable alternative to the currently accepted view of the situation. The fact that uniformity of distribution on the two-dimensional basis has been demonstrated not only for the total number of radio-emitting quasars in a representative sample, but also individually for each of the three classes of objects included in this total, puts the findings on a firm basis. The essential concept of the Big Bang theory is thus invalidated.

The data for the other two classes of radio-emitting quasars, early Class I and late Class I, are included in Table XI. Here the distribution is reproduced with space densities of 40 and 60 quasars per spherical unit respectively. We thus find that the predominance of Class II quasars in the observed list does not reflect the true situation. Instead of being a 40 percent minority, the Class I objects actually constitute about 70 percent of the total number of radio-emitting quasars.

TABLE XI
DISTRIBUTION OF QUASARS

Class II (Number = 20q² to distance 1.00; 10q² + 10 beyond)

Quasar Distance   Calc.   Obs.
     0.1             0      0
     0.2             1      1
     0.3             2      1
     0.4             3      5
     0.5             5      5
     0.6             7      5
     0.7            10      7
     0.8            13      8
     0.9            16     15
     1.0            20     20
     1.1            22     23
     1.2            24     25
     1.3            27     29
     1.4            30     31
     1.5            33     32
     1.6            36     34
     1.7            39     36
     1.8            42     41
     1.9            46     44
     2.0            50     45

Class I—Early (Number = 20q²)

Quasar Distance   Calc.   Obs.
     0.1             0      0
     0.2             1      0
     0.3             2      0
     0.4             3      2
     0.5             5      2
     0.6             7      7
     0.7            10     11
     0.8            13     14
     0.9            16     16

Class I—Late (Number = 30q²)

Quasar Distance   Calc.   Obs.
     0.1             0      0
     0.2             1      1
     0.3             3      3
     0.4             5      7
     0.5             8      9
     0.6            11     11
     0.7            15     14
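The calculated columns of Table XI follow directly from the visibility argument given above. A minimal sketch of the computation, offered as an editorial illustration using the functional forms taken from the table itself:

```python
def visible_class_ii(q):
    """Calculated number of visible Class II quasars within quasar distance q.
    Half of the 40-per-spherical-unit population is visible below 1.00; only
    a quarter of the additional population is visible from 1.00 to 2.00."""
    return 20 * q**2 if q <= 1.0 else 10 * q**2 + 10

def visible_class_i_early(q):
    return 20 * q**2   # density 40 per spherical unit, half visible

def visible_class_i_late(q):
    return 30 * q**2   # density 60 per spherical unit, half visible

# Reproduce a few calculated entries of the table:
for q in (0.5, 0.9, 1.3, 2.0):
    print(q, round(visible_class_ii(q)))
# 0.5 5, 0.9 16, 1.3 27, 2.0 50
```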

The sample on which the study was conducted contains no quasars with quasar distances above 2.00, a fact which indicates that the asymmetric redshift factors, discussed in Chapter 23, that lead to redshifts exceeding the normal limit are relatively uncommon.

Although we know the quasars (and other astronomical objects as well) only as sources of radiation, the amount of information that can be extracted from this radiation is surprisingly large; so large, in fact, that much of it will not be needed for purposes of the kind of a general survey of the various quasar populations that we are now undertaking. The current status of the quasars as astronomy's greatest mystery is not due to a lack of sufficient information, but to the astronomers' inability, thus far, to construct the kind of a theoretical framework that would enable placing the many items of information that now seem irrelevant or contradictory in their proper places relative to each other, and to the astronomical universe as a whole. Availability of a purely deductive system of theory, in which all conclusions are derived by development of the consequences of the fundamental properties of space and time, now provides what is needed.

Our present undertaking is to examine the primary characteristics of the different classes of quasars and to show how they fit into the general picture. We will make use of the information developed in the preceding chapters, particularly that referring to the color indexes, the recession redshift (and distance), z, and the quasar distance (and redshift), q. The other magnitudes with which we will be mainly concerned are the optical luminosity, l, its absolute value, L, and the radio emission or flux, for which we will use the customary symbol S.

The optical radiation as received is ordinarily expressed in terms of the astronomical magnitude scale. This system of measurement is presumably satisfactory to the astronomers, since they continue using it, but it is confusing to just about everyone else. Actually, it is a historical accident. The magnitudes were originally ordinal numbers: simply positions in a series. The brightest stars were designated as stars of the first magnitude, the next brightest as stars of the second magnitude, and so on. Later these magnitudes were adjusted to conform to a specific mathematical relation, so that they became a measurement scale, but in order to avoid major changes the upside-down ordinal sequence was retained. Thus the stars with the greatest numerical magnitude are not the brightest, but the faintest. For the same reason, the numerical scale, which for convenience is exponential, was constructed on an awkward basis in which 2.5 magnitudes are equivalent to a factor of 10.

It has been necessary to refer to astronomical magnitudes to some extent in this work in order to maintain contact with the astronomical literature. To facilitate translating these values into terms that are more familiar to most readers of this volume, the following table of equivalents may be helpful:

Factor   Magnitude Difference
   2          0.75
   4          1.50
   5          1.75
   8          2.25
  10          2.50
  50          4.25
 100          5.00
1000          7.50
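The conversion embodied in this table is purely logarithmic, at 2.5 magnitudes per factor of 10. A minimal sketch, including the magnitude-15 reference level that the text adopts just below:

```python
import math

def magnitude_difference(factor):
    """Magnitude difference corresponding to a given luminosity factor
    (2.5 magnitudes per factor of 10; the scale is inverse)."""
    return 2.5 * math.log10(factor)

def relative_luminosity(magnitude, reference=15.0):
    """Luminosity as a fraction of the magnitude-15 reference level."""
    return 10 ** ((reference - magnitude) / 2.5)

print(round(magnitude_difference(5), 2))      # 1.75, as in the table
print(round(relative_luminosity(16.75), 3))   # 0.2, the example used below
```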

The quantity that is being measured in terms of the magnitude scale is the luminosity of the

object. For our present purposes we will want to deal with the actual luminosity, and we will therefore convert the magnitudes to luminosities. In order to keep the numerical values within a convenient range we will state the luminosity in terms of the increments of magnitude above 15, converted to the luminosity basis. Such values represent the ratio of the measured luminosity to the luminosity corresponding to visual magnitude 15. For example, the value 0.200 indicates a luminosity one fifth of the reference level. As indicated by the foregoing tabulation, reducing the luminosity by a factor of 5 adds 1.75 to the magnitude. The value 0.200 thus corresponds to magnitude 16.75.

We will be concerned mainly with the absolute luminosity, the actual emission from the quasar, rather than with the observed value, which varies with the distance. For this purpose, we will establish a reference datum at the point where q is 1.00 and z is 0.08. The absolute luminosity will be expressed in terms of the measured value projected to this datum by the appropriate relation. No doubt some exception will be taken to the use of an unorthodox measurement scale in the comparisons that follow, but in addition to generating values that are more convenient to handle, this different scale of measurement will help to avoid the confusion that might otherwise arise from the fact that the basis for projecting the observed luminosity to the absolute system is not the same in our calculations as in conventional practice, and the calculated absolute luminosities corresponding to the observed values will not usually agree.

The same considerations apply to the radio emission values. The values given in the tables are absolute emissions recalculated from the data of Sandage,257 and expressed on a relative basis similar to that used for the optical emission.

As we have seen in the preceding pages, the distinctive characteristics of the quasars and related astronomical objects are due to their greater-than-unit speeds. However, in undertaking to follow the course of development of these objects it will be necessary to recognize that the quasar is a complex object with many speeds, each of which may vary independently of the others. These include:

1. Quasar speeds. The quasars are ejected with scalar speeds exceeding two units. During the interval in which it is restrained by gravitation, each quasar has a speed of z in space, due to the normal recession, and a net speed of 3.5z½ in time (equivalent space) in the dimension of the spatial reference system. The observed quasar redshift is a measure of the scalar total of these two redshift components.

2. Stellar speeds. The pre-explosion activity and the violent explosion raise the speeds of most of the constituent stars of the ejected galactic fragment (the quasar) above the unit level. It is this intermediate speed of the stars of the quasar, and the consequent expansion into time, that are responsible for the small apparent sizes of the quasars. They are galactic equivalents of the white dwarf stars.

3. Stellar component speeds. The speeds of the individual atomic and molecular components of the stars (temperatures) are independent of the speeds of the stars. Like the stellar speeds, they are increased to levels in the intermediate range by the energy released during the explosion, but they are subject to radiation losses, while the speeds of the stars are not affected by radiation. Consequently, the speeds

(temperatures) of the stellar components decrease relatively rapidly, and in most quasars they return to the speed range below unity at the end of the early Class I stage. The stellar speeds, on the contrary, remain in the intermediate range throughout the entire life of the quasar.

4. Independent particle speeds. Dust and gas particles are accelerated to high speeds in the stellar and galactic explosions, and they retain these speeds (temperatures) longer than the atomic and molecular constituents of the stars because of the lower rate of radiation in the gaseous state. Radio emission therefore continues through both Class I stages.

As indicated in the foregoing paragraphs, the explosive forces impart the initial speeds of the quasar system. Prior to the explosion that produces the quasar, the interior of the giant galaxy of origin is in a state of violent activity resulting from a multitude of supernova explosions. The products of these explosions are confined to this interior region by the overlying stellar aggregate, which, as pointed out earlier, has physical characteristics resembling those of a viscous liquid. The dust and gas particles in the agitated interior are moving with speeds greater than that of light. When the internal pressure finally becomes great enough to blow out a section of the overlying material as a quasar, a large quantity of this fast-moving material becomes part of the quasar aggregate. The violent readjustments resulting from the explosion accelerate a substantial proportion of the component stars of the quasar to these same intermediate speeds.

After the initial sharp decrease during the lacertae stage, the status of the quasar speeds at the beginning of the early Class I stage is as follows: The quasar as a whole is moving unidirectionally outward at ultra high (above two units) speed, but is subject to the gravitational effect of the galaxy of origin. This results in the net speed reflected in the observed redshift, z + 3.5z½. The constituent stars of the quasar are moving at intermediate (between one and two units) speeds, and are therefore expanding into time, causing the apparent spatial dimensions of the quasar to decrease. The atomic and molecular constituents of the stars are likewise moving at intermediate speeds, with similar results, putting the stars into the white dwarf condition. The gas and dust particles, which acquired upper range speeds prior to the explosion, undergo a relatively slow speed decrease. All matter accelerated to a higher speed level by the explosion is experiencing isotopic adjustments, and is therefore emitting strong radiation at radio wavelengths.

As the quasar ages and moves away from the galaxy of origin, its net outward speed increases because of the continual reduction of the retarding gravitational force. All of the internal speeds decrease, because the large initial energy content is supplied by the galactic explosion, and there is no active source of energy in the quasar itself, other than the normal stellar generation processes, which are wholly inadequate to maintain the high energy concentration that exists initially. The internal motions therefore lose energy in radiation and other interactions with the environment. This decrease in the internal activity results in a corresponding decrease in the optical luminosity.

In determining the true, or absolute, luminosity from the observed value, one of the factors that must be taken into consideration is the effect of the distribution to two

perpendicular planes. This applies to the radiation as well as to our ability to see the quasars, and it means that only half of the radiation originating from the quasar components that are moving at speeds below unity is included in the observed luminosity. If the quasar components from which the radiation originates are moving at intermediate speeds, the distribution of the radiation is extended to the full eight units of the intermediate region. In calculating the absolute luminosity, the measured value is thus subject to an increase by a factor of 2 or 8. The limitation of the intermediate range speeds (temperatures) to the early Class I stage restricts the application of the ratio of 8 to l to this class. For all other classes of quasars the ratio is 2 to 1. The other determinant of the relation between the observed and absolute luminosities is the distance. The magnitude of this effect depends on the route by which the radiation travels. The normal recession in space of a quasar elected from a nearby galaxy is small, and the quasar motion is therefore primarily in time from the very start. Consequently, the radiation from this object travels back to us through time. On the other hand, a quasar ejected from a distant galaxy is receding at a high speed in space at the time of the explosion, and a substantial period of time elapses before the motion in time in the explosion dimension reaches the recession level. In the meantime the radiation from this quasar travels back through space. Eventually, however, the continually increasing net explosion speed exceeds the speed of the recession, after which the travel of the radiation from this distant quasar, like that from the one nearby, takes place through time. On this basis, the radiation from the lacertae, the quasars of early Class I, the youngest members of Late Class I, and a few small, rapidly evolving, members of the radio-quiet class, travels in space. That from the remainder of late Class I, most of the radio-quiet quasars, and the quasars of Class II, travels in time. Quasars that are very close, where random motion in space plays a significant role, may continue on the space travel basis beyond the normal transition point. Because of the two-dimensional distribution of the quasar radiation originating in the intermediate speed range, the radiation received through space is proportional to the first power of the distance in space, z. Inasmuch as q = 3.5 z½ it is also proportional to q². The distribution of the radiation in time is likewise two-dimensional, and the quasar radiation received through time is proportional to the first power of the distance in time (equivalent space), q. In the discussion that follows all distances will be identified in terms of q (time) or q² (space). Table XII gives the observational data for the early Class I quasars in the group under consideration, expressed in the terms that have been described, together with two calculated values, the quasar distance, q, and the visibility limit. This visibility limit is the approximate luminosity that a quasar of a given class and distance must have in order to be located by a survey with the equipment and techniques available to the observers whose results constitute the quasar sample that is being examined. A purely theoretical determination of this limit would require a quantitative evaluation of the capabilities of the equipment in use at the time the observations were made, an undertaking that is not feasible as a part of the present investigation. 
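The two conversions just described reduce to a short calculation. The following is a minimal sketch (in Python; the function names are merely illustrative), assuming only the relations stated above: the observed redshift is the sum of the recession component z and the explosion component 3.5z½, the quasar distance q is that explosion component, and the luminosity scale is referred to visual magnitude 15.

```python
import math

def quasar_distance(z_obs):
    # The observed redshift is z + 3.5*sqrt(z), where z is the recession
    # component and q = 3.5*sqrt(z) is the explosion component, so that
    # z_obs = (q/3.5)**2 + q. Solving this quadratic for q:
    return 6.125 * (math.sqrt(1.0 + 4.0 * z_obs / 12.25) - 1.0)

def relative_luminosity(m):
    # Luminosity relative to visual magnitude 15; a factor of 5 per 1.75
    # magnitudes, i.e. the conventional 2.5 log scale.
    return 10.0 ** (-0.4 * (m - 15.0))

print(round(quasar_distance(0.344), 3))      # 0.335, as in Table XII below
print(round(relative_luminosity(16.75), 3))  # ~0.200, the example given above
```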
The visibility limits for the quasars of the various classes have therefore been determined empirically from the minimum luminosities of the observed Class II quasars; that is, it is assumed for present purposes that the limiting luminosity actually observed approximates the true limit. The faintest magnitudes reached in the results here being studied were 19.44 (3C 280.1), 19.35 (3C 2), and 19.25 (1116+12). The corresponding absolute luminosities are 0.025, 0.017, and 0.037. The quasar distance of 3C 2 is 0.962. If we assume that this quasar, which has the lowest luminosity of any Class II object in the sample group, is almost at the visibility limit, we can take a luminosity of 0.016 (magnitude 19.50) as the limit at q = 1.00. The corresponding limits, on the q basis, for 3C 280.1 and 1116+12 are then 0.020 and 0.029 respectively; that is, both of these quasars are close to the limit of visibility. This should be sufficient to justify using 0.016 for the visibility limit on the q basis for the purposes of our investigation.
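These projections can be checked directly. A small sketch, assuming the first-power (q basis) projection described above, under which the absolute luminosity is q times the luminosity converted from the observed magnitude:

```python
# Projecting the three faintest observed magnitudes to the q = 1.00 datum
# on the q (time travel) basis; magnitudes and distances are from the text.
faintest = {"3C 280.1": (19.44, 1.480),
            "3C 2":     (19.35, 0.962),
            "1116+12":  (19.25, 1.841)}
for name, (m, q) in faintest.items():
    print(name, round(q * 10.0 ** (-0.4 * (m - 15.0)), 3))
# Output is approximately 0.025, 0.018, and 0.037; compare the 0.025, 0.017,
# and 0.037 quoted above (the small difference for 3C 2 is rounding).
```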

TABLE XII
CLASS I QUASARS - EARLY TYPE

Quasar       z      q     U-B    B-V     S      m     Limit      L
1049-09    .344   .335   -.49   +.06    .17   16.79   .057    .172
3C 48      .367   .357   -.58   +.42   1.49   16.2    .065    .337
1327-21    .528   .507   -.54   +.10    .31   16.74   .132    .413
3C 279     .538   .516   -.56   +.26    .76   17.8    .136    .162
3C 147     .545   .523   -.59   +.35   2.79   16.9    .140    .381
3C 275.1   .557   .534   -.43   +.23   3.77   19.00   .146    .057
3C 345     .595   .569   -.50   +.29    .72   16.8    .166    .495
3C 261     .614   .586   -.56   +.24    .25   18.24   .176    .140
3C 263     .652   .621   -.56   +.18    .48   16.32   .197    .913
3C 207     .684   .650   -.42   +.43    .43   18.15   .216    .186
3C 380     .692   .657   -.59   +.24   2.61   16.81   .221    .653
1354+19    .720   .682   -.55   +.18    .42   16.02   .238   1.455
3C 254     .734   .695   -.49   +.15    .78   17.98   .247    .247
3C 138     .760   .718   -.38   +.23   1.33   17.9    .264    .285
3C 196     .871   .817   -.43   +.60   3.25   17.6    .342    .486
0922+14    .895   .838   -.52   +.54    .23   17.96   .360    .365

The objects that have been used for the evaluation of this limit are quasars of Class II, in which, as we have seen, the radiation travels through time (on the q basis). The radiation from most of the Class I quasars travels through space, and this modifies the visibility limits. The principal factor that enters into this situation is that there is a difference between the brightness, or luminosity, of an astronomical object and what we may call the intensity of the radiation, if the radiating matter is moving at a speed greater than unity (the speed of light). This difference arises because of the introduction of a second time component at the higher speed. At speeds less than unity the only time entering into the radiation process is the clock time. At higher speeds there are also changes in position in three-dimensional time (relative to the natural datum). Here it becomes necessary to distinguish between the time of the progression of the natural reference system, the time that is registered on a clock, and the total time involved in the physical phenomenon under consideration. This total time is the sum of the clock time and the change in time location.

Ability to detect radiation with equipment of a given power is determined by the intensity of the radiation, the radiation per unit of time. Distribution of the radiation over additional units of time reduces the intensity. The luminosity, however, is measured as the amount of radiation received during the total time corresponding to a unit of clock time (one of the components of the total), and it is not affected by the number of units involved in this total. If the radiation travels through time its magnitude is a scalar quantity in spatial terms. It therefore has no geometrical distribution, and is received at full strength. However, if radiation from an object in the intermediate speed range travels through space it is distributed in the spatial equivalent of time; that is, in equivalent space. As we saw in Chapter 23, the full distribution extends over 64 effective units. Only two of these are collinear with the scalar dimension of the spatial reference system. Thus the radiation received through space from an object in the intermediate region per unit of total time, the intensity of the radiation, is 1/32 of the total emission. It follows that the visibility limit for travel in space corresponding to the 0.016 limit for travel in time is 32 x 0.016 = 0.512. This is the limit applying at quasar distance 1.00. For other distances, the limits are 0.016 q (time travel) and 0.512 q² (space travel). The limits shown in Table XII and the tables of the same nature that will follow have been calculated on this basis. While this general distribution of the radiation over the full 64 units in time does not affect the luminosity, we have already found that there are other distributions in space that reduce the ratio of the observed radiation to the original emission by a factor of 8 for the early Class I quasars and a factor of 2 for all others. The ratio of intensity to luminosity for motion through space is then the ratio of intensity to emission, 1/32, divided by the ratio of luminosity to emission, 1/2 or 1/8. This gives us 1/4 for the early Class I quasars and 1/16 for the others. The significance of these ratios is that they enable us to determine the visibility limits in terms of the observed magnitudes (luminosities) for those Class I quasars whose radiation travels through space. The 1/4 ratio tells us that quasar radiation originating in the intermediate speed range and received through space (q² basis) has only one quarter of the intensity that it would have if travel through time (q basis) were possible. This is equivalent to a difference of approximately 1.5 magnitudes. The q² limit corresponding to the 19.50 magnitude of the q limit applicable to the quasar sample under investigation is thus 18.00. While the equipment used in collecting the data included in this sample was capable of observing Class II quasars at 19.50 magnitude, early Class I quasars, whose radiation travels through space, had to be 1.5 magnitudes (4 times) brighter in order to be detected. The reality of the 18.00 limit can be seen by inspection of the values in Table XII. Only one of the magnitudes in this list exceeds this limit by more than the amount that can be expected in view of the variability in the luminosity of these extremely active objects. The one exception, 3C 275.1, is a very strong radio emitter, with the largest radio output of any quasar in the sample under examination. It was probably located optically in an intensive search with powerful equipment.
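The numerical content of the foregoing paragraphs reduces to two small formulas. A sketch, assuming the values stated above (the empirical limit 0.016 at q = 1.00 for travel in time, and 32 × 0.016 = 0.512 for travel in space):

```python
import math

def visibility_limit(q, travel):
    # Limiting absolute luminosity at quasar distance q. The 0.016 figure is
    # the empirical q = 1.00 limit established above; 0.512 = 32 * 0.016.
    return 0.016 * q if travel == "time" else 0.512 * q * q

# Magnitude equivalents of the intensity/luminosity ratios:
print(round(-2.5 * math.log10(1 / 4), 2))    # 1.51, the ~1.5 magnitude step
print(round(-2.5 * math.log10(1 / 16), 2))   # 3.01, the 3.0 magnitude step

# The limits in Table XII are on the space (q²) basis; e.g. 3C 48 at q = 0.357:
print(round(visibility_limit(0.357, "space"), 3))   # 0.065
```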

The gradual decrease in the energy level of the quasars that we observe in the early Class I stage continues during the late Class I stage, as indicated in both the radio emission (Fig. 30) and the optical luminosity (Fig. 31). Since the spatial change of position is initially very slow, the travel of the radiation is still mainly in space (q² basis) at the start of the late stage, but by its end the radiation from many of the smaller objects (those below about 0.50 absolute luminosity) is reaching us through time (q basis). Coincidentally, the color indexes become less reliable as an indicator of quasar age, as the smaller aggregates evolve more rapidly. These factors introduce some uncertainty into the determination of the absolute luminosity of objects of this class. Any individual late Class I quasar outside of the local region in which random motion is significant may be just beyond the early stage, so that its radiation is still traveling in space, or it may have originated nearby, so that the currently indicated distance represents travel in time. Usually, however, the relation of the luminosities calculated on the two different bases to the applicable visibility limits indicates the correct alternative. Most of the quasars whose absolute luminosities calculated on the q² basis are above the q² limits probably have true luminosities in the neighborhood of the values calculated on that basis. Conversely, where the luminosity on the q basis is only slightly above the corresponding limit, the quasar radiation probably travels through time. In those cases where the luminosity calculated on the q basis is substantially above the q limit, but the quasar does not qualify as visible on the q² basis, the absolute luminosity is somewhere between the q and q² values, and its true magnitude cannot be determined from the information now available.


Luminosity data for the late Class I quasars of the reference list are given in Table XIII. The basis (either q or q²) on which each of the absolute luminosities in the last column was calculated is indicated by the column in which the corresponding visibility limit is shown. For these quasars, whose luminosity to emission ratio is ½, the intensity to luminosity ratio becomes 1/16. This corresponds to a magnitude difference of 3.0, which puts the visibility limit for this quasar class at 16.50. The limiting magnitudes for the different classes of quasars are summarized in this tabulation:

                      Intensity   Luminosity   I/L    Limiting Magnitude
Time travel               1           1         1           19.50
Early Class I            1/32        1/8       1/4          18.00
Other space travel       1/32        1/2       1/16         16.50

TABLE XIII
CLASS I QUASARS - LATE TYPE

Quasar         z      q     U-B     B-V     S      m     Limit(q)  Limit(q²)     L
2135-14      .200   .197    -.83   +.10         15.53                .020      .048
1217+02      .240   .235    -.87   +.02    .06  16.53                .028      .027
PHL 1093     .260   .255   -1.02   +.05         17.07      .004                .038
PHL 1078     .308   .301    -.81   +.04         18.25      .005                .015
3C 249.1     .311   .303    -.77   -.02    .22  15.72                .047      .095
3C 277.1     .320   .312    -.78   -.17    .20  17.93      .005                .021
3C 351       .371   .360    -.75   +.13    .33  15.28                .066      .200
3C 47        .425   .411    -.65   +.05    .58  18.1       .007                .024
PHL 658      .450   .435    -.70   +.11         16.40                .097      .104
3C 232       .534   .513    -.68   +.10    .18  15.78                .135      .257
3C 334       .555   .532    -.79   +.12    .35  16.41                .145      .155
MSH 03-19    .614   .586    -.65   +.11    .60  16.24                .176      .219
MSH 13-011   .626   .596    -.66   +.14    .48  17.68      .010                .051
3C 57        .68    .646    -.73   +.14    .01  16.40                .214      .230

The limitation of the late Class I quasars to the shorter distances is a conspicuous feature of Table XIII, as there are absolute luminosities among this group of objects that are high even by the standards of the Class II quasars, which can be seen all the way out to the 2.00 sector limit. No quasars in Table XIII have a quasar distance beyond 0.646. This early cut-off is a result of the 16.50 limiting magnitude, together with the steep rise of the visibility limit on the q² basis applying to space travel. Quasars originating nearby and moving out to a greater distance have passed out of the Class I stage before traveling this far, whereas most of those originating beyond 0.500 are cut off by the rapidly rising visibility limit, which is up to 0.128 at this point. The most distant late Class I quasar in the list, 3C 57, is a relatively large fragment, with absolute luminosity 0.230, just above the 0.214 visibility limit corresponding to this distance. The existence of the 16.50 magnitude limit is clearly demonstrated in the table. Nine of the quasars in this list have a high enough luminosity in proportion to the visibility limit to make it probable that their radiation is transmitted through space, and none of these is appreciably above 16.50 magnitude (that is, less luminous).

A comparison of the values in Table XIII with those of Table XII shows the extent of the decrease in energy emission that takes place as the Class I quasars grow older. Because the early Class I quasars are products of extremely violent galactic explosions, their emission is very high, both at optical and radio frequencies, much above that of any other quasar class. In the absence of any adequate source of replacement of the energy that is lost by radiation the internal activity gradually subsides, and the average emission in the late Class I stage is much lower. The maximum emission in the early class, both optical and radio, is six times the maximum of the late class. The average optical luminosity of the quasars of early Class I is four times the average of those of late Class I. The average radio emission in early Class I is also four times the average emission of those members of the late class for which radio data were available. Since the radio and optical radiation are produced by different processes their decline as a result of the gradual decrease in the internal energy content of the quasars does not necessarily have to proceed at exactly the same rate, but the fact that the relative emissions of the two groups are the same for both types of radiation is a significant confirmation of the validity of the theoretical relations on which the calculations are based.

The Class I radio-quiet quasars are a distinctive and quite homogeneous group, and some consideration of their place in the general picture is appropriate, but only two of them appear in the sample under examination. In order to have an adequate sample, the quasars of this class listed in the 1972 compilation by Burbidge and O'Dell251 have been added to those in the 1967 list. Table XIV gives the emission data for these quasars. As would be expected on theoretical grounds, these are small objects, their average luminosity being only 0.018, whereas the average of those of the late Class I radio-emitting quasars of Table XIII that are in the same distance range is 0.064. The reason for this difference is that the smaller quasars have less energy to start with, and they dissipate it more rapidly because of their greater ratio of surface area to mass. They consequently pass through the various stages of evolution in less time, and some of them reach the radio-quiet stage while the larger Class I quasars of the same age are still radio emitters.

TABLE XIV
CLASS I RADIO-QUIET QUASARS

Quasar        z      q     Limit      L
B 234       .060   .060    .001     .006
B 264       .095   .094    .002     .016
TON 256     .131   .130    .009*    .015*
B 154       .183   .180    .003     .007
B 340       .184   .181    .003     .030
BSO-2       .186   .183    .003     .006
B 114       .221   .217    .003     .015
PHL 1186    .270   .264    .004     .010
B 46        .271   .265    .004     .020
PHL 1194    .299   .292    .005     .029
RS 32       .341   .332    .005     .009
PHL 1027    .363   .353    .006     .054
PHL 1226    .404   .391    .006     .020
B 312       .450   .435    .007     .010

*q² basis
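The 0.018 average cited above can be recomputed directly from the last column of Table XIV; a quick sketch:

```python
# Average absolute luminosity of the Class I radio-quiet quasars in Table XIV
L_column = [0.006, 0.016, 0.015, 0.007, 0.030, 0.006, 0.015,
            0.010, 0.020, 0.029, 0.009, 0.054, 0.020, 0.010]
print(round(sum(L_column) / len(L_column), 3))   # 0.018, as stated in the text
```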

This more advanced evolutionary status is reflected in the mode of travel of the radiation. While the radiation from the majority of the late Class I radio-emitting quasars travels in space, all but one of the radio-quiet quasars in Table XIV has reached the stage where the travel of the radiation is in time. One of the factors that contributes to this result is that the visibility limit of these small objects on the q² basis is reached relatively soon. Only three of the 14 quasars listed in Table XIV have absolute luminosities over 0.020. The visibility limit on the q² basis corresponding to 0.020 luminosity is at a quasar distance of about 0.200. This means that a Class I radio-quiet quasar whose radiation travels in space is visible only within this relatively short distance. As in the case of the Class I radio emitters, the limitation on the distance of the radio-quiet quasars whose radiation travels in time is a result of evolutionary development. By the time these objects have moved from their relatively near locations of origin out to a quasar distance of about 0.400 their optical emission has decreased to the point where it is not detectable with equipment of the kind used by the investigators whose results are reported in Table XIV. The most distant quasar of this group is at a quasar distance of 0.435. There are no radio-quiet objects between this distance and q = 1.136 in either of the two samples that we are examining. They reappear in the range beyond 1.136. The factors that are responsible for this distribution pattern will be considered later in this chapter.
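The 0.200 figure follows from inverting the q² limit; a one-line check:

```python
import math
# Distance at which the q²-basis visibility limit (0.512 * q**2) equals a
# given absolute luminosity L: q = sqrt(L / 0.512).
print(round(math.sqrt(0.020 / 0.512), 3))   # 0.198, the "about 0.200" above
```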

There is considerable doubt as to the true status of some of the small objects that have been classified as quasars. A recent (1982) news item reports that B 234, the closest object in Table XIV (z = 0.060), and B 272, another object that has been regarded as a nearby quasar (z = 0.040), are H II galaxies, in which the radiation originates in large regions of ionized hydrogen.258 The members of this recently recognized class of galaxies appear to be in the size range of small spirals, and in approximately the same evolutionary stage, but they have not yet acquired the spiral structure. It is possible that more of the small nearby "quasars" are actually galaxies of this new class, but this should not change any of the conclusions reached herein, other than the estimate of the minimum quasar size, which might be increased slightly.

Inasmuch as the Class II stage is the last of the phases through which a quasar passes between its origin and its disappearance, a normal Class II quasar has been traveling outward for a very long time. It therefore follows that the absolute luminosity of such an object should approximate the value calculated on the q basis. Table XV gives the luminosity data thus calculated for the Class II quasars from the reference list that are nearer than q = 1.00. There is one exceptional case in this tabulation. As noted earlier, when a relatively large quasar is very close to the location from which we are observing it, the outward movement may be retarded long enough to enable the quasar to reach Class II status before the transition from radiation travel in space to travel in time. The quasar 3C 273 is in this condition.
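Putting the distance factors and the 2 and 8 distribution factors together gives the full conversion from observed magnitude to absolute luminosity. A sketch of the whole pipeline (function name illustrative), with 3C 273, the q² exception just mentioned, and an early Class I quasar as checks:

```python
def absolute_luminosity(m, q, mode):
    # Project an observed magnitude to the q = 1.00 datum. Modes follow the
    # rules developed above: "time" = q basis; "space" = 2 * q**2 (general
    # space travel); "early" = 8 * q**2 (early Class I space travel).
    observed = 10.0 ** (-0.4 * (m - 15.0))
    factor = {"time": q, "space": 2.0 * q * q, "early": 8.0 * q * q}[mode]
    return factor * observed

print(round(absolute_luminosity(12.8, 0.156, "space"), 3))
# 0.369, matching 3C 273 in Table XV
print(round(absolute_luminosity(17.6, 0.817, "early"), 3))
# 0.487; compare 0.486 for 3C 196 in Table XII (rounding)
```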

TABLE XV
CLASS II QUASARS - BELOW q = 1.00

Quasar          z       q     U-B    B-V     S      m     Limit      L
3C 273        .158    .156   -.85   +.21   1.50   12.8    .012    .369
2251+11       .323    .315   -.84   +.20    .15   15.82   .005    .148
1510-08       .361    .351   -.74   +.17    .35   16.52   .006    .087
1229-02       .388    .376   -.66   +.48    .20   16.75   .006    .075
3C 215        .411    .398   -.66   +.21    .21   18.27   .006    .020
2344+09       .677    .643   -.60   +.25    .30   15.97   .010    .263
PHL 923       .717    .679   -.70   +.20          17.33   .011    .079
3C 286        .849    .797   -.82   +.22   2.21   17.30   .013    .096
3C 454.3      .859    .806   -.66   +.47   2.13   16.10   .013    .293
1252+11       .871    .817   -.75   +.35    .26   16.64   .013    .181
3C 309.1      .904    .846   -.77   +.46   1.33   16.78   .014    .164
0957+00       .906    .847   -.71   +.47    .23   17.57   .014    .080
3C 336        .927    .866   -.79   +.44    .69   17.47   .014    .089
MSH 14-121    .940    .877   -.76   +.44    .95   17.37   .014    .099
3C 288.1      .961    .895   -.82   +.39    .56   18.12   .014    .050
3C 245       1.029    .955   -.83   +.45    .68   17.25   .015    .120
CTA 102      1.037    .962   -.79   +.42   1.91   17.32   .015    .114
3C 2         1.037    .962   -.96   +.79    .83   19.35   .015    .017
3C 287       1.055    .977   -.65   +.63   1.24   17.67   .016    .084
3C 186       1.063    .984   -.71   +.45    .95   17.60   .016    .090

Table XVI is a similar presentation of the corresponding data for the Class II quasars at quasar distances greater than 1.00. The objective of separating the Class II objects into these two groups is to show that, from a luminosity standpoint, the two groups are practically identical. The range of values in each case is about the same, and the average luminosity for the group below 1.00 is 0.126, while that for the more distant group is 0.138. In both the average and the maximum luminosities there is a small increase at the far end of the distance range, above 1.70, due to the changes that take place as the sector limit at 2.00 is approached, changes that were previously discussed in connection with the redshifts (Chapter 23) and the color indexes (Chapter 24). Otherwise, wherever we draw out a random sample of Class II objects we obtain practically the same luminosity mixture.

TABLE XVI
CLASS II QUASARS - ABOVE q = 1.00

Quasar          z       q      U-B    B-V     S      m     Limit      L
3C 208        1.110   1.024  -1.00   +.34    .98   17.42   .016    .111
3C 204        1.112   1.026   -.99   +.55    .19   18.21   .016    .053
1127-14       1.187   1.090   -.70   +.27   1.51   16.90   .017    .190
BSO-1         1.241   1.136   -.78   +.31          16.98   .018    .183
1454-06       1.249   1.142   -.82   +.36    .45   18.0    .018    .072
3C 181        1.382   1.254  -1.02   +.43   1.02   18.92   .020    .034
3C 268.4      1.400   1.269   -.69   +.58    .73   18.42   .020    .055
3C 446        1.403   1.271   -.90   +.44   1.48   18.4    .020    .056
PHL 1377      1.436   1.298   -.89   +.15          16.46   .021    .339
3C 298        1.439   1.301   -.70   +.33   3.30   16.79   .021    .250
3C 270.1      1.519   1.367   -.61   +.19   1.03   18.61   .022    .049
3C 280.1      1.659   1.480   -.70   -.13    .80   19.44   .024    .025
3C 454        1.757   1.559   -.95   +.12    .82   18.40   .025    .069
3C 432        1.805   1.597   -.79   +.22    .93   17.96   .026    .104
PHL 3424      1.847   1.630   -.90   +.19          18.25   .026    .082
PHL 938       1.93    1.695   -.88   +.32          17.16   .027    .232
3C 191        1.953   1.713   -.84   +.25   1.18   18.4    .027    .075
0119-04       1.955   1.715   -.72   +.46    .39   16.88   .027    .304
1148-00       1.982   1.736   -.97   +.17    .84   17.60   .028    .158
PHL 1127      1.990   1.742   -.83   +.14          18.29   .028    .084
3C 9          2.012   1.759   -.76   +.23    .41   18.21   .028    .091
PHL 1305      2.064   1.800   -.82   +.07          16.96   .029    .295
0106+01       2.107   1.833   -.70   +.15    .56   18.39   .029    .081
1116+12       2.118   1.841   -.76   +.14    .90   19.25   .029    .037
0237-23       2.223   1.922   -.61   +.15    .74   16.63   .031    .429
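The stated averages for the two groups can be verified from the last columns of Tables XV and XVI; a sketch:

```python
# Average absolute luminosities of the two Class II groups
below_1 = [0.369, 0.148, 0.087, 0.075, 0.020, 0.263, 0.079, 0.096, 0.293, 0.181,
           0.164, 0.080, 0.089, 0.099, 0.050, 0.120, 0.114, 0.017, 0.084, 0.090]
above_1 = [0.111, 0.053, 0.190, 0.183, 0.072, 0.034, 0.055, 0.056, 0.339, 0.250,
           0.049, 0.025, 0.069, 0.104, 0.082, 0.232, 0.075, 0.304, 0.158, 0.084,
           0.091, 0.295, 0.081, 0.037, 0.429]
print(round(sum(below_1) / len(below_1), 3))   # 0.126
print(round(sum(above_1) / len(above_1), 3))   # 0.138
```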

This does not mean that the optical characteristics of all Class II quasars are identical; it merely means that whatever differences do exist are distributed throughout the Class II evolutionary stage. There are periods in the life of Class II quasars when the internal explosive activity is at a level above normal, but these active periods are not confined to any one phase of the Class II existence, and may occur at any time.

One of the significant results of the near identity between these two quasar groups at much different distances, when their absolute luminosities are calculated by means of the first power relation derived from theory, is to supply another confirmation of that relation; that is, to confirm the two-dimensional nature of the quasar radiation. The validity of this relationship was demonstrated in Quasars and Pulsars by a direct correlation between quasar distance and the average luminosities of small groups of quasars in which all group members are at approximately the same distance. Now the relation is verified in a different manner by showing that the distribution of luminosities calculated on this first power basis is, with the one exception that has been noted, independent of the distance. Obviously, sample groups from different sections of the range of distances would not show the close approach to uniformity that is evident in the tables unless the basis for reducing observed to absolute luminosity is correct. The identification of the Class II quasars above q = 1.00 is positive, as no other quasars have quasar distances in this range. It then follows that the agreement between the properties of the two groups of Class II quasars also validates the criteria by which the members of the group below 1.00 were differentiated from the Class I quasars that exist in the same distance range.

It is clear from the entries in Table XVI that the quasars do not thin out gradually with distance, as expected on the basis of conventional theory. On the contrary, there is evidently a sharp cut-off at some point just beyond the last object of the sample group (quasar distance 1.922). This is not due to decreased visibility, as the visibility limit at the 1.922 distance is 0.031, far below the 0.133 average luminosity of the Class II quasars. It must result from some other limiting factor that comes into operation at this distance. This is in full agreement with the theoretical conclusion that the quasars that retain the normal 3½-3½ distribution of the intermediate region units of motion convert to motion in time, and disappear from view, at quasar distance 2.00.

The radio-quiet quasars included in Table XVI are relatively large objects, their average absolute luminosity being 0.145, in sharp contrast to the Class I radio-quiet quasars of Table XIV, which average only 0.018. A substantial size is thus indicated as a requirement for attaining the Class II radio-quiet status. This is understandable when we consider the nature of the process that is responsible for the Class II activity. As we have seen, the Class II stage is initiated when a considerable number of the stars of the quasar reach their age limits and undergo supernova explosions. If some or all of the explosion products are confined within the interior of the structure, the quasar becomes a Class II radio emitter. If it is not big enough, or compact enough, to confine these products, they are ejected as they are produced, or at intervals, and the quasar gradually disintegrates.

The luminosity data for the various classes of quasars are summarized in Table XVII. The most conspicuous feature of this tabulation is the high luminosity of the early Class I objects. However, when we consider the enormous disparity in size between the exploding galaxy that produced the early Class I quasar and the exploding fragment that constitutes the Class II quasar, the difference in luminosity between these two classes is easily accounted for. The relatively low emission of the late Class I objects is obviously a result of the energy losses during the time that has elapsed since the galactic explosion. At the end of the Class I stage, the quasars are in what we may call a condition of minimum internal activity.

TABLE XVII
QUASAR LUMINOSITIES

Class                    Max.    Min.    Av.   Max/Min
I-Early                 1.455    .057   .422     25
I-Late (under 0.76)      .257    .024   .155     11
I-Late (over 0.76)       .155    .015   .057     10
I-Radio Quiet            .054    .006   .017      9
II-Below 1.00            .369    .017   .126     22
II-Above 1.00            .429    .025   .138     17
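The Max/Min column can be recomputed from the Max and Min entries; a quick sketch:

```python
# Max/Min ratios for the classes in Table XVII
classes = {"I-Early": (1.455, 0.057), "I-Late (under 0.76)": (0.257, 0.024),
           "I-Late (over 0.76)": (0.155, 0.015), "I-Radio Quiet": (0.054, 0.006),
           "II-Below 1.00": (0.369, 0.017), "II-Above 1.00": (0.429, 0.025)}
for name, (mx, mn) in classes.items():
    print(name, round(mx / mn, 1))
# 25.5, 10.7, 10.3, 9.0, 21.7, 17.2 -- compare the tabulated 25, 11, 10, 9,
# 22, 17 (the Max and Min entries are themselves rounded values).
```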

Table XVII separates the 14 quasars of late Class I into two groups of 7 each, with the dividing line at U-B = 0.76. The ratio of maximum to minimum luminosity in these two subgroups is practically identical, indicating that the decrease in internal activity continues throughout the late Class I stage, as would be expected from theoretical considerations, and that the difference between the tabulated values for the two groups reflects a decrease in the luminosity level because of the reduced activity, rather than a difference in the sizes of the quasars in the two groups. We may thus conclude that the absolute luminosity of a radio-emitting quasar of minimum size in a condition of minimum internal activity is about 0.015. As indicated earlier, the radio-quiet quasars in the Class I distance range differ from the coexisting radio emitters mainly in size. Addition of this radio-quiet class brings the minimum size down to 0.006, or, to make some allowance for the rather small sample, let us say 0.005.

Some question may be raised as to why there should be a minimum size; that is, why the explosion does not produce debris of all sizes, from sub-atomic particles up to some maximum size of fragment. The answer is that the quasar is the whole cloud of ultra high speed matter ejected by the explosion, including stars, star fragments, dust, and gas. We see the cloud as a discrete object because of the great distances that are involved.

The maximum luminosities vary considerably more than the minimum. This is evidently due to the fact that in the quasars, as well as in the pre-explosion galaxies, the internal activity can build up to a higher level in the larger aggregates before breaking through the overlying layers of material. The effect of this factor is shown by the ratios of maximum to minimum luminosities, which range from 17 to 25 in the active quasar classes, but average only about 10 in the relatively inactive late Class I groups. Since each of the larger quasars passes through all of the stages represented by the various radio-emitting classes, the range of sizes should be the same in each, if the sample is representative. The 0.155 maximum of the subgroup of late Class I in which the U-B index is over 0.76 should therefore be the maximum value comparable to the 0.015 minimum that we found to be applicable under the condition of minimum internal activity. Since the sample is small, there may be some larger objects elsewhere, but the continuity of the maximum-minimum ratio throughout Class I indicates that the 0.155 value is at least close to the maximum. Furthermore, the quasar 3C 334, which has the 0.155 luminosity, may still have somewhat more than the minimum internal activity. These possibilities tend to counterbalance each other. It thus appears that a value of about 0.150 is acceptable as the maximum absolute quasar luminosity under conditions of minimum internal activity.

What we now want to consider is the meaning of these maximum and minimum values in terms of the masses of the quasars; that is, are they consistent with the theoretical conclusion that the quasar is a fragment of a giant spheroidal galaxy? The various factors that enter into this situation are not yet defined clearly enough to enable an accurate calculation, but an approximation is all that is needed in order to answer the question as stated. The most convenient way of obtaining this answer is to make a direct comparison between a quasar and the galaxy from which it was ejected, both of which are at the same spatial distance. The logical pair for this purpose is the one that we know the best, the quasar 3C 273 and its associate, the giant galaxy M 87.

The largest uncertainty in this evaluation is in the relative mass-to-light ratios of these two objects. It is known that there is a systematic increase in this ratio as the size of the galaxy increases, as would be expected from the theoretical information about the galactic structure developed in the preceding chapters. A recent review by Faber and Gallagher reported relative values for spiral galaxies ranging from 1.7 for the smaller class to 10 for the large S0 spirals.259 Information with respect to the giant spheroidal galaxies, the parent objects of the quasars, was reported to be scarce, but the available data indicated a substantially higher ratio, probably at least 20. The increase in the mass-to-light ratio with the size of the galaxy is mainly due to the increasing amount of confined high density, high temperature material in the galactic interiors. At the level of minimum internal activity the quasars contain much less of this dispersed material, without the confinement. The stars are still moving at upper range speeds, and the star density remains high, but this does not affect the mass-to-light ratio, which is determined primarily by the extent to which upper range speeds exist in the constituents of the stars. As previously noted, these constituents return to temperatures below unity at the end of the early Class I stage. The mass-to-light ratios of the quasars in the minimum activity condition should therefore approximate those of the smaller spiral galaxies. An estimate of 2 should be reasonable. This means that the ratio of the masses of the minimum activity quasars to those of the galaxies of origin is less by a factor of about 10 than the ratios of the luminosities.

As indicated in Table XVII, the Class II quasars are about twice as luminous as the quasars in the stages of minimum internal activity. This brings the mass-to-light ratio of 3C 273 down to about 1/20 of that applicable to M 87. The observed magnitudes of M 87 and 3C 273 are 9.3 and 12.8 respectively. The corresponding ratio of luminosities is 25. Applying the correction for the difference in the mass-to-light ratios, we arrive at the conclusion that M 87 is 500 times as massive as 3C 273. From the data in the tables in this chapter it appears that 3C 273 is somewhere near the maximum quasar size. On this basis, then, only about 0.2 percent of the mass of a giant galaxy is ejected in the form of a quasar, even when the fragment is one of maximum size. This is only a very small portion of the galaxy, but the galaxy itself is so immense (about 10¹² stars, according to current estimates) that 0.2 percent of its mass is a huge aggregate of matter. It is equivalent to about two billion stars, enough to constitute a small spiral galaxy.
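The arithmetic of this comparison is compact enough to set out in full; a sketch using the figures quoted above:

```python
# The M 87 / 3C 273 mass comparison, step by step, using the values in the text.
m_M87, m_3C273 = 9.3, 12.8
luminosity_ratio = 10.0 ** (0.4 * (m_3C273 - m_M87))   # ~25
ml_M87 = 20.0           # mass-to-light estimate for a giant spheroidal galaxy
ml_3C273 = 2.0 / 2.0    # small-spiral value of 2, halved for Class II activity
mass_ratio = luminosity_ratio * ml_M87 / ml_3C273      # ~500
print(round(luminosity_ratio), round(mass_ratio))      # 25 502, i.e. roughly 500
print(f"{1 / mass_ratio:.1%}")                         # ~0.2% of the galactic mass
print(0.002 * 1e12)                                    # ~2e9 stars in the fragment
```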
The smallest quasar, radio quiet by the time we observe it, represents only about 0.007 percent of the galactic mass (a mere chip, one might say), yet it, too, is a very large object by ordinary standards, as it contains approximately 70 million stars, the equivalent of about 100 large globular clusters, or a dwarf elliptical galaxy.

The data examined in this volume, and the two that preceded it, together with the interpretation of these data in terms of the quasar theory derived from the postulates of the Reciprocal System, give us a picture of the quasars that is complete and wholly consistent. As this analysis shows, if a fragment of a giant galaxy, of a size consistent with the theory, has been ejected at a speed greater than that of light, as required by the theory, then the optical emission from the constituent stars of the fragment, occurring at a rate consistent with the normal emission from such stars, at the distance theoretically indicated by the redshift, and distributed in space and its equivalent in the manner required by the theory, will be received here on earth in just the quantities that are actually observed. There are no inconsistencies of the kind that are so conspicuous in the application of conventional theory to the quasars. All of the observations fit easily and naturally into the theoretical structure. As brought out in the preceding pages, this is true not only of the general situation, but also of the minor details. The correlation between theory and observation provides individual confirmation of many of the special features of the theory, such as the first power relation between distance and luminosity, the changes in color and distribution of the radiation that take place when the speed exceeds one or another of the unit levels, the special characteristics of the early type quasars, the differences between the limiting magnitudes of the various quasar types, etc.

Furthermore, the theory from which all of these results have been obtained is not something that has been constructed to fit the observations. Each and every conclusion that has been reached is a necessary consequence of the basic assumptions as to the properties of space and time. The theoretical development shows that just because space and time have these postulated properties, quasars must exist, and they must have exactly the characteristics that are now revealed by observation.

CHAPTER 26

Radio Galaxies

As predicted in the first edition of this work, the fast-moving products of galactic explosions that are now known as quasars were discovered in the course of observations of radiation at radio wavelengths. A dozen years earlier, the first radio galaxy, Cygnus A, had been identified. The optical object corresponding to this radio source was found to have the appearance of two galaxies in collision. When another very strong radio emitter, Centaurus A, was discovered and identified with an optical object, NGC 5128, that likewise appeared to be a pair of colliding galaxies, the galactic collision hypothesis became the favored explanation of the origin of the extra-galactic radio emission, although no one could explain how collisions could produce the observed radiation. As more radio observations accumulated, it became clear that the great majority of the radio sources are not colliding galaxies. The necessity of providing some other explanation for most of the sources raised doubts as to the validity of the collision hypothesis, and "by 1960 the colliding-galaxy theory of radio sources had all but expired." Ten or twelve years later the pendulum had swung back in the other direction. The authors of the foregoing comment on the situation in 1960 saw it this way in 1973:

We suspect that in NGC 3921 and similar objects one is witnessing the vigorous tumbling together or merger of what until recently were two quite separate galaxies.260

A realization that galactic collisions, once thought to be rare events, are actually quite common, has been a significant factor in this change of attitude. There are many galaxies with distorted shapes, and it has been found that a substantial number of these are, or at least appear to be, double structures of some kind, suggesting that two separate galaxies are, or have been, interacting. In NGC 5128 what we apparently see is a spiral galaxy plowing into the middle of a giant elliptical galaxy. This view is supported by the observation that "the gaseous disk is apparently rotating much faster than the elliptical component."261 The galaxy NGC 4650A is reported to have a similar structure, with an elliptical core and an outer spiral galaxy revolving around the core.

In considering the situation from a theoretical standpoint, the first point to be noted is that colliding galaxies produce radio frequency radiation by the same process as any other strong radio emitter; that is, the radiation comes from particles that have been accelerated to upper range speeds. The acceleration can be produced in any one of a number of different ways. Thus it is quite possible that some of the observed radio emission may be a result of galactic collisions, even though in the majority of the radio emitters it results from explosive processes. It would be expected, however, that the explosions, the more violent of the two processes, would produce the stronger radiation. What needs to be explained, then, in the case of the two sources Cygnus A and Centaurus A, is the exceptional strength of their radio emissions.

The answer that we obtain from the theory is that the strong emission is not a direct result of the collision but an indirect result, in which sources of radiation already present are released. It is evident from observation that in each case one of the two colliding objects is a giant. We have previously deduced that the interiors of such giant galaxies contain concentrations of intermediate and ultra high speed matter, enough to make these galaxies strong sources of radio emission even when their structures are intact and only a small part of the radiation that is produced is able to pass through the material that overlies the producing zone. These giant galaxies are large enough, and stable enough, to be able to absorb globular clusters or small galaxies without any significant disturbance of their own structures, but a collision with a large spiral can be expected to result in some disruption of the outer structure of the giant, allowing the escape of large quantities of explosion products from the interior. Here, then, is a source of radiation that is easily able to account for the strong emission from the two objects in question. The alteration of the normal pattern of development of the internal activity by escape of matter and radiation during collisions is not likely to have any long run significance.
It can be expected that when the consolidation of the two galaxies is complete the new galactic structure will be able to contain the material moving with upper range speeds, and the build-up of this material will be resumed, continuing to the ultimate limit in the normal manner. However, some drastic changes in the pattern of evolution of the galaxy may result if the large-scale explosive activity is premature. This possibility will be explored in the next chapter.

Two objections have been raised to the collision hypothesis in application to NGC 5128: (1) the "dark lane is wider than would be normal for the disk in a spiral galaxy," and (2) the "lane is more disturbed than the matter in the disk of a spiral galaxy should be."263 Neither of these objections is tenable once it is understood that the stars of a galaxy occupy equilibrium positions. Disturbance of this equilibrium by contact with another galaxy generates effects that extend over great distances. The recent discovery that NGC 5128 is a strong x-ray source supports the conclusion that a collision has disrupted the structure of the giant galaxy, as the intermediate speed component of the matter escaping from the central region of the galaxy begins emitting x-rays as soon as its temperature drops below the unit level. Inasmuch as the x-ray radiation originates from matter moving at less than unit speed, it should be emitted mainly from the optical location rather than the radio locations. This theoretical conclusion is confirmed by observation.264

Whether the speeds responsible for the radio emission of the colliding galaxies are produced in part by the collision, or whether the fast-moving matter released by the rupture of the outer layers of the larger galaxy is the sole source of this radiation, is not definitely indicated by the information now available, but the indications are that the contribution of the collision is no more than minor. The disruption of the outer structure of a giant spheroidal galaxy is clearly the process that leads to the greatest release of radio-emitting material, and it accounts for the fact that such objects as Cygnus A are extremely strong radio emitters. Smaller amounts of such material escape from other galaxies and from quasars under special conditions. The giant galaxy M 87, for example, apparently has a hole in its outer structure through which ultra high speed matter is escaping in the form of a jet. In still another class of radio emitting objects, the Seyfert galaxies, the containment is quite limited, and the ultra high speed material escapes continuously, or at short intervals. These latter two classes of objects will be given some further consideration in the next chapter.

Another special kind of radio galaxy is the one known as the N galaxy. Most of these objects are far distant. Consequently they have not been studied as extensively as those more accessible to observation, and the amount of information about them that is now available is rather limited. For this reason, whatever conclusions we may reach with respect to them will have to be somewhat tentative. However, the theory that has been developed from the postulates of the universe of motion requires the existence of a class of objects with the same characteristics as those thus far observed in the N galaxies. On the basis of the information currently available, it thus appears probable that the N galaxies are the objects that the theory calls for.

Inasmuch as there is no gravitational effect beyond a quasar distance of 1.00, the explosion speed has no component in the dimension of the reference system in the range from 1.00 to 2.00. From our point of view, therefore, a quasar originating beyond q = 1.00 remains at its original spatial location (subject to the normal recession) during its entire life span. Ordinarily the radiation from the quasar overpowers that of the galaxy of origin, and the quasar appears to be alone. In some circumstances, however, the presence of the galaxy can be detected. Furthermore, we can deduce from probability considerations that some of the quasars are located directly behind the heavily populated galactic centers from which, according to the theory, they originate. In this case the quasar radiation is absorbed and reradiated. This means that there should exist a class of galaxies in which the galactic nucleus is abnormally bright and emits radiation with some of the spectral characteristics of the radiation from the quasars. The distinguishing feature of the N galaxies is a nucleus of this nature, and it is now conceded that "the spectra and colors of quasars are similar to those of the nuclei of N galaxies."265 Indeed, the similarities between these galaxies and the quasars are so evident that it has been suggested that all quasars may be N galaxies with very prominent nuclei.

One specific observation that has been interpreted as evidence in favor of this hypothesis is a change of three magnitudes (a factor of about 16) in the emission from the galaxy X Comae. This leads the observers to conclude that this is "an object that apparently can change temporarily from an N-type galaxy to a QSO." This, they say, "clearly supports the hypothesis" that quasars are simply very bright galactic nuclei.266 However, the explanation provided by the theory presented in this work is not only equally consistent with the observations, but also explains how and why the change takes place, something that is conspicuously lacking in the "bright nucleus" hypothesis. If the quasar is behind the galaxy from which it was ejected, as we have concluded that the N galaxies are, it is quite possible for changes to occur, as the galaxy rotates, in the amount of matter through which the quasar radiation must pass. Such changes are probably no more than minor in the usual case, but they obviously can extend all the way from a condition in which the entire radiation from the quasar is absorbed and reradiated, so that we have nothing but an N galaxy, to a condition in which that radiation passes through essentially unchanged, and we see only a quasar.

It has also been reported267 that in some of the objects of this class the quasar component is "off center" with respect to the underlying galaxy. This is very difficult to explain on the basis of the hypothesis that the N galaxy is a galaxy with a quasar core, but it is easily understood if what is being observed is a galaxy with a quasar almost directly behind the galactic center. Another significant observation is that "the underlying galaxy [of the N system] has the same colors as a giant elliptical (E) galaxy."265 This supports the theoretical finding that the underlying galaxy in the N system is a galaxy of maximum size (and age) that exploded and ejected the quasar.

Further support for this explanation comes from the observation that the N galaxies are x-ray emitters. After having been raised to the radio-emitting speed level by the strong radiation from the quasar, some of the gas and dust of the N galaxy loses energy in its interaction with the other galactic constituents, and returns to the lower speed range. This initiates x-ray emission.
"All optically known N galaxies out to a red shift of 0.06 are detected as x-ray emitters."227

The general run of radio galaxies-those that are not members of special classes such as the ones that have been described-are explosion products. As we saw earlier, a radio galaxy is normally produced jointly with each quasar. It is also possible that in some galaxies large-scale supernova activity may begin before the galaxy has reached the size that makes it capable of resisting internal pressures in the ultra high range. In that event, the galactic explosion will be less violent, and the major explosion product will not attain the ultra high speed that characterizes the quasar. Instead, it will be a radio galaxy. In all cases, however, a radio galaxy is an ordinary galaxy, differing from the other members of its class only in that it contains gas and dust that has been accelerated to speeds greater than that of light, and is therefore undergoing the isotopic adjustments that produce radiation at radio frequencies. Many quasars are strong radio sources, as could be expected from the fact that secondary explosions take place in the older quasars, giving them a source of replacement for the particles and the energy that are dissipated. As we saw in our examination of the absorption spectra, the particle speeds are actually increasing in the older quasars. Radio galaxies, on the other hand, are limited to the original supply of matter and energy that they acquire in the explosive event. It should be noted, however, that the strength of the radiation from the distant quasars is greatly overestimated in current practice, because the absolute value of the emission is calculated on the basis of a three-dimensional distribution. As explained earlier, the actual distribution is two-dimensional. In those scientific areas where data from observation and experiment are scarce and subject to a variety of interpretations, the generally accepted choice from among the alternatives often fluctuates in a manner reminiscent of the changes of fashion in clothing. The changing attitudes toward the process responsible for the generation of the radiation from the radio galaxies that were mentioned earlier in this chapter now appear to be entering still another phase. The ‖high fashion― in today's astrophysical theory is the black hole. Wherever problems are encountered, the current practice is to call upon the black hole to provide the answer. So it was probably inevitable that black holes would find.their way into the theory of the radio galaxies. Just how the black hole accomplishes the observed result is not explained. We are simply expected to say ‖black hole― as we would say ‖open sesame,― and take it for granted that we have the answers. For example, K. I. Kellerman reports evidence supporting the ‖speculation that the efficient transport of energy from the black hole to the extended radio lobes occurs by what is commonly referred to as a relativistic beam or jet.― 268 The basic questions as to how and why a black hole produces a ‖relativistic beam― are passed over without comment. Since the astronomers know of no means of producing strong radio radiation other than the synchrotron process, they assume that this process must be operating, even though they realize that, as matters now stand, there is no plausible explanation of how the conditions necessary for the operation of this process could be produced on such an enormous scale. J. S. Hey tells us that. The synchrotron theory has remained undisputed as the principal process of radio emission. 
But the problems of the production of relativistic particles and their

replenishment by repeated activity have prompted a great deal of speculation . . . There are at least as many theories as there are theoretical astronomers.269 As this statement indicates, the astronomers' view of this phenomenon has not advanced beyond the highly speculative stage. H. L. Shipman sums up the situation in this manner: We have no definite explanation for the appearance of even the most common form of radio galaxy, the double radio galaxy.270 Here again, the theory of the universe of motion produces the answers to the problems in the course of a systematic and orderly development of the consequences of its basic postulates, without the necessity of making any further assumptions, and without calling upon any black holes or any other figments of the imagination. This theory tells us that, except for some minor contributions from processes such as galactic collisions, the energy of the radio radiation is produced explosively. Gas and dust particles are accelerated to upper range speeds, and radiation at radio frequencies is then produced in the manner described in Chapter 18. Where conditions are such that the speed of certain particles drops back below the unit level at some stage of the evolution of the explosion products, x-ray emission takes place, as also explained in an earlier chapter. Where the maximum explosion speeds are in the intermediate range, below two units, the explosion products expanding in time have no other motions. The radio emission therefore takes place from the original spatial position-that is, the optical location-of the exploding object, except to the extent that some of the intermediate speed matter may be entrained in the outward-moving low speed products. The general run of white dwarfs and many other radio emitters are therefore single radio sources. Explanation of these sources presents no particular problem, except the basic requirement of accounting for the production of strong radio radiation. Current astronomical theory has nothing to offer as a means of meeting this requirement except the synchrotron process, which, as brought out earlier, is wholly inadequate. But the isotopic adjustment process discussed in Chapter 18 provides an explanation that is in full agreement with the observations. The most glaring deficiency in the current astronomical views regarding the radio radiation is the one that authors such as Shipman are conceding in their discussions of the subject: the lack of any plausible explanation of the structure of the extended sources. Our finding is that these sources are expanding clouds of matter not essentially different, except in the distribution of their component motions, from the other strong radio sources that we have examined. In all explosive events within our ordinary experience we observe that an expanding cloud of material is ejected from the exploding object. A supernova remnant is such a cloud. One of the rather surprising results of the development of the consequences of the postulates of the theory of the universe of motion is the finding that the white dwarf, a small compact object, is likewise an expanding cloud of material. It is essentially the same kind of thing as the cloud that is expanding in space, differing only in that it is expanding into time, and is therefore contracting when viewed from the spatial standpoint. This difference in behavior is easily understood when the inverse nature of motion in time (as compared to motion in space) is taken into consideration. Expansion

into space increases the spatial size of one cloud of explosion products. Expansion into time decreases the size of the other. The ‖mysterious― pulsars have an equally simple explanation. They are merely moving white dwarfs. The ordinary white dwarf, as we have seen, is a stationary expanding object; stationary in space (aside from ordinary vectorial motion) and expanding in time. The pulsar is moving at ultra high speed, the next higher speed range. This object therefore adds another motion, expanding in time like any other white dwarf, and, in addition, moving translationally in a dimension of space other than the one represented in the conventional spatial reference system. The quasars have the same kind of a combination of motions as the pulsars. Thus we can describe both of these classes of objects as stationary in the dimension of the reference system (except for the normal recession and possible random motion in space), expanding into time (equivalent space), and having a linear motion in a second spatial dimension. Here the explosive inerease of speed into the ultra higb range has resulted in the addition of two more motion components to the original spatial motion, an expansion and a translational motion. Because of the alternation of space and time in the basic motion, one of these added components must be motion in time and the other motion in space. In the case of the quasars and the pulsars, the expansion is in time and the translation is in space. But, as we saw in Chapter 15, where the theoretical situation was examined, it is equally possible, under appropriate circumstances, for the expansion to take place in space (that is, in the second spatial dimension) and the translation to take place in time. This produces the same results, except that space and time are interchanged. Here we have expansion in space and translation in time. Although the combination of motions is essentially the same in both cases, the observed phenomena are totally different, because of the limitations of the spatial reference system. To observation, quasars and pulsars are small, very compact, contracting objects. Inverting the roles of space and time in this description, we find that the explosion products of the inverse type are large, very diffuse, expanding objects. In both cases, the motion in the early stages, immediately following the explosion, is modified by gravitation. As we saw in the case of the quasars, the spatial motion in the second scalar dimension is normally unobservable, but for a time subsequent to the explosion this unobservable scalar motion is acting against gravitation. The gradual elimination of the gravitational effect allows the progression of the natural reference system that was counterbalanced by gravitation to become effective, reversing the change of position in the reference system that resulted originally from gravitation. It was noted in Chapter 22 that this process results in an observable movement in space during the early part of the quasar life, gradually decreasing, and terminating at a quasar speed of 1 .00. ln this instance, Case I, as we will call it, an object that is expanding in time, and is therefore compact in space, undergoes a linear outward motion in space. In the inverse situation, Case II, an object that is expanding in space, and therefore extends over a large spatial volume, undergoes a linear outward motion in time (unobservable). In both cases, the first portion of the spatial motion operates against gravitation, and the gravitational
change of position that is eliminated is observable. Thus in Case I there is an observable linear translational motion that terminates at a quasar distance of 1.00, where the net gravitational motion reaches zero. In Case II there is an observable linear expansion terminating at the same 1.00 distance. Beyond this point the expansion takes the normal spherical form that results from a random distribution of directions.

A rapidly moving stream of particles is commonly called a jet. Thus the spatial expansion at ultra high speeds takes the form of a jet and sphere combination. As we saw earlier, scalar motion does not distinguish between the direction AB and the opposite direction BA. It follows that where there is no obstacle in the way of the expansion, two oppositely directed jet and sphere combinations originate at each explosion site. The objects inversely related to the quasars and pulsars therefore manifest themselves by a radiation pattern that can be described as having a dumbbell shape.

This widely dispersed matter is not generally regarded as an “object” in the same sense in which this term is applied to a quasar, but actually the two are identical in form, aside from the inversion of space and time. The quasars and pulsars are compact in space and spread out over a very large expanse of time. The radio-emitting dumbbell is compact in time and spread out over a very large expanse of space. Both of these kinds of objects are essentially nothing but expanding clouds of explosion products. The difference between them, as they appear to our observation, is due to the manner in which we are observing them; that is, we are able to detect changes of position in three dimensions of space, but our direct apprehension of time is limited to the scalar progression. We detect other motion in time only by its effect, if any, on spatial positions.

Deviations from the dumbbell pattern are caused by obstructions in the way of travel of the explosion products, by supplementary explosive activity, by vectorial motion of the galaxy of origin during the expansion stage, or by interaction with neighboring galaxies. The structure of the radio-emitting cloud of matter thus has a considerable amount of diversity, but the division into two somewhat symmetrical regions is generally apparent, except where a specific direction is imparted to the motion of the explosion products by escape through a single orifice.

Distant radio galaxies are subject to the same lateral displacement of the radio image that applies to the quasars, but this displacement is small compared to that resulting from the linear expansion of the explosion products, and it is generally obscured by elements of the structure due to that expansion. However, as noted in Chapter 22, both the large scale structure and the small scale displacements are observed in some cases.

The ultra high speed motion in the interiors of the giant galaxies is thermal motion, in which the directions of the motions of the individual particles are continually changing because of repeated contacts of the moving particles. When the galactic explosion occurs, those of the ultra high speed particles that escape from the galaxy are incorporated into the two major explosion products. Here the forces tending to confine this material are inadequate to accomplish total confinement, and the ultra high speed thermal motion is therefore gradually converted into ultra high speed linear outward motion.
Thus both of the major products of the galactic explosion, the quasar and the radio galaxy, are ejecting the dumbbell type of radio-emitting clouds.

As brought out in the theoretical discussion in Chapter 15, the ultra high speed particles expanding into space in the combination jet and sphere pattern are moving at the same total speed as the pulsars. Thus their ultimate fate is the same. Except for a relatively small proportion that are slowed down sufficiently by environmental factors to reduce their speeds below the two-unit level, the individual particles of the expanding cloud of matter eventually cross the boundary and escape into the cosmic sector in the same manner as the pulsars and the quasars. The x-ray radiation from the relatively small number of particles that return to the lower speed ranges is too widely scattered to be observable. Optical radiation is visible only from entrained material in the early jet stage. The ultra high speed expansion is therefore primarily a radio phenomenon.

In addition to the components moving at less than unit speed, and the components moving at ultra high speed that have just been discussed, the products of the most violent explosions also include particles moving with intermediate speeds. As we have seen in the earlier pages, motion in this speed range (the speeds of the components of the white dwarfs) does not change the position in space. From a spatial standpoint, the particles that constitute this intermediate speed component are motionless. The spatial densities of the outward-moving material are high enough to carry most of this otherwise motionless matter with the streams, but some of it remains at the explosion site. In those cases where the size of this remainder is substantial, the radio emission pattern has three main centers rather than only two. Some of the entrained intermediate speed matter may also drop out of the stream during the jet stage, resulting in local concentrations of material, often called “knots,” in the jet.

The discussion of the cloud of ultra high speed matter that produces the dumbbell type of radio-emitting structure in this chapter completes the identification of the different types of motion combinations that are involved in the phenomena of the upper speed ranges. Summarizing these findings, it can be said that, although the objects included in this category show a wide diversity of shapes and sizes, all the way from tiny, but extremely dense, aggregates to very diffuse clouds of material spread out over vast regions of space, they can all be described as fast-moving clouds of matter, either clouds of particles or clouds of stars. The very diffuse objects are clouds of matter widely dispersed in space by the forces of the explosions. The very compact objects are clouds of matter widely dispersed in time by forces of the same kind. The variations in the way in which these clouds appear to observation are due to the differences between motion in space and motion in time, and to the variability in the manner in which these different motions are distributed among the three speed levels of the material sector of the universe. The relations between the different kinds of observed objects are brought out clearly by the comparison in Table XVIII.
Here we see that all of the new type of objects discovered by the astronomers during the last few decades, from the rather commonplace supernova remnants to the “mysterious” quasars, are explosion products, differing in the way in which they appear to observation because some are aggregates of particles while others are aggregates of stars, and because there are variations in two properties of the motions of their components: the speed level (which determines whether the motion is in space or in time), and the motion distribution, unidirectional (linear) or random (expansion or contraction). An additional
variation is due to the fact that some of these objects (the white dwarfs, for example) are single entities while others are combinations in which a relatively compact object, such as a radio galaxy, is associated with an extended cloud of material. In this connection, it should be understood that expansion in time, like any other time motion, acts as a modifier of the spatial dimension of a cloud, that is, as a contraction in equivalent space, as long as the total motion of the object has a net spatial resultant. Thus, even though motion in time is not, in itself, observable, the decrease in the size of an astronomical object due to expansion in time can be observed.

TABLE XVIII
MOTION COMBINATIONS AT UPPER RANGE SPEEDS

                                           Speed level
                                        1       2       3
Aggregates of particles
  White dwarf - early                           ET
  White dwarf - late                            CT
  White dwarf remnant                   ES
  Pulsar - outgoing                             ET      LS
  Pulsar - incoming                             CT      LS
  Pulsar remnant
    Component A                         ES
    Component B                                 LT      ES
Aggregates of stars
  Quasar                                        ET      LS
  Radio Galaxy                          LS
    Intermediate speed gas component            ET
    Associated radio cloud                      LT      ES

E expanding    C contracting    L moving linearly
S in space     T in time
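As an illustrative reading of the table, consider the quasar line: the entry ET at the intermediate level and LS at the ultra high level describe an object that is expanding in time, and is therefore compact in space, while moving linearly in space, the combination of motions identified earlier in this chapter. The associated radio cloud of the radio galaxy carries the inverse combination, LT and ES: an unobservable translation in time together with an expansion in space that takes the jet and sphere form.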

It is appropriate to emphasize that the explanations that emerge from the application of the Reciprocal System of theory to the extremely compact objects, and related phenomena, that have been brought within the scope of astronomical observation in very recent years, are not drawn from the land of fantasy in the manner of “black holes,” “degenerate matter,” and the like, but are simple and direct results of two aspects of motion that have not been recognized by previous investigators: motion in time and motion at speeds exceeding that of light. When the full range of motions is recognized, the explanations of the newly discovered objects and phenomena emerge easily and naturally, each taking its specific place in the evolutionary pattern of the material sector of the universe.

This characteristic of the theoretical development continues what has been one of the outstanding features of the previously described results of the application of the theory of the universe of motion to the astronomical field. Instead of being a collection of unrelated classes of entities, each originating under a special set of circumstances, all of the observed astronomical objects
are found to have their definite places in an evolutionary path resulting from aggregation under the influence of gravitation. We have seen, for instance, that the formation of stars and galaxies is not the result of hypothetical processes that operate only under very special conditions, as assumed in present-day astronomy. Instead, the formation of each class of objects takes place at the appropriate point in the evolutionary path as the direct result of gravitational aggregation, a process that is known to exist and to be operative under the conditions existing at the point of formation of the particular object.

The situation with respect to the other phenomena that have been examined in the preceding pages is similar. It was not necessary to call upon processes that require the existence of special conditions of an unusual nature to explain the strong radiation at radio or x-ray frequencies that is received from certain classes of objects. Here again, the observed phenomena are explained by means of processes that necessarily take place at certain stages of the evolutionary development. Nor do we have to follow the astronomers' practice of evading the task of accounting for such phenomena as the cataclysmic variables by calling them “freaks.” These phenomena have places on the evolutionary path that are just as specific as those of the better known astronomical objects.

The view of the newly discovered compact objects and other “puzzling” features of the large-scale activity of the universe that we obtain by applying the physical principles developed in the two preceding volumes of this work differs quite radically from the way in which these phenomena are portrayed in current astronomical theory. But when it is realized that the astronomical theories in these areas are based almost entirely on assumptions, it should be evident that such conflicts are inevitable. The astounding extent to which astronomical science has degenerated into science fiction will be described in Chapter 29. In the interim we will examine a few phenomena that were not taken up earlier because it was evident that they could be more conveniently considered after the role of the quasars and associated phenomena had been clearly defined.

CHAPTER 27

Pre-Quasar Phenomena

In the preceding pages we have seen that a Type II supernova in the outer regions of the galaxy, originating from a relatively large star, produces a pulsar that moves away from the explosion site at ultra high speed, and also an assortment of products of smaller sizes and lower speeds, both above and below unit speed (the speed of light). We have also seen that when large numbers of these supernova explosions occur in the interiors of the oldest and largest galaxies (as most of them do, since the oldest stars are concentrated in the central regions of these galaxies), the pressure that is built up by the fast-moving explosion products ultimately blows out a section of the overlying layers of the galaxy. This fragment then moves off at ultra high speed as a quasar. Now we will want to give some consideration to the events that precede this ejection.
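It may be helpful at this point to restate schematically the speed classification employed throughout these chapters (a summary of the usage in the preceding pages, expressed in natural units, unit speed being the speed of light):

    v < 1        low speed: motion in space, fully represented in the
                 conventional spatial reference system
    1 < v < 2    intermediate speed: motion in time (equivalent space),
                 with no change of position in the reference system
    v > 2        ultra high speed: adds a linear motion in a second
                 spatial dimension; matter that remains in this range
                 ultimately escapes into the cosmic sector

On this basis the constituents of the white dwarfs are in the intermediate range, while the pulsars and the quasars have ultra high speed components.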

The fact that the energy of each of the major explosive events comes from an accumulation of relatively small (compared to the final energy release) energy increments contributed by explosions of individual stars not only establishes the normal pre-quasar pattern, but also determines the kind of variations from the normal pattern that are possible.

Since any small galaxy, or even a globular cluster, may incorporate a few remnants of disintegrated old galaxies, Type II supernovae may occur in any aggregate, but they are relatively rare in the small young structures, and most of their products escape immediately from these structures. However, when a galaxy reaches the stage in which some of its constituent stars other than the strays begin to arrive at their age limits, the number of supernovae in the galactic interiors, where the oldest stars are concentrated, increases dramatically.

Coincident with the increase in age, a galaxy also increases in size, and the interior regions in which the explosive activity is taking place are enclosed by a continually growing wall of overlying matter. In the ordinary course of events this growth leads the increase in internal activity by a sufficient margin to prevent the escape of any large amount of explosion products until the quasar stage is reached. The normal pre-quasar period is therefore characterized by a slow, but steady, build-up of intermediate and ultra high speed matter in the galactic interiors.

With the possible exception of one class of galaxies that we will consider shortly, the galaxies of the normal evolutionary sequence, those that will eventually eject quasars if they are not captured by larger aggregates before they reach the critical age, show no structural evidence of the activity that we find from theory is taking place in their interiors. There are, however, two observable phenomena that indicate the existence and magnitude of this activity. One of these is radio emission. The magnitude of the radiation at radio frequencies indicates the rate at which isotopic adjustments are taking place in matter recently accelerated to speeds greater than unity by the supernova explosions. It has been shown by Fanti et al. that the amount of radio emission is related to the brightness, and hence to the size, of both spiral and elliptical galaxies, as the theory requires.271 All of the more advanced spirals are radio emitters, and the giant spheroidal galaxies are strong radio emitters.

Further evidence of the presence of upper range speeds in the galactic interiors is provided by the high density that is characteristic of the central cores of the larger galaxies. According to current estimates, the density in the core of our Milky Way galaxy is 30 or 40 times as great as would normally be expected, while the central regions of M 87, the nearest, and consequently the best known of the giants, are estimated to be at least 80 times the normal density. Current efforts to explain these abnormal densities are based on the assumption that there must be a large number of high density objects in these central regions: white dwarfs, or the hypothetical neutron stars or black holes. The development of the theory of the universe of motion now reveals that the extremely high density of all of the compact astronomical objects (white dwarf stars, pulsars, x-ray emitters, galactic cores, quasars, etc.) is due to the same cause: speeds in excess of unity (the speed of light).
The conventional explanation of the high density of the white dwarfs is based on the idea of a “collapse” of the atomic structure, and it therefore cannot be extended to an aggregate composed of stars. The effect of upper range speeds, on the other hand, is independent of the nature of the moving entities. The reduction in the
effective distance between objects by reason of these speeds is a specific function of the speed, irrespective of whether these objects are atoms or stars. Thus, the high density of the central regions of the larger galaxies is not due to the presence of unusual concentrations of very dense objects, but to the distortion of the scale of the reference system that results from the high speeds of the normal constituents of the galactic interiors. The cores of these galaxies are in the same physical condition as the white dwarf stars and the quasars; that is, their density is abnormally high because introduction of the time displacement of the upper range speeds has reduced the equivalent space occupied by the central portions of the galaxies. In brief, we may say that the reason for the abnormal density in the older and larger galaxies is that these galaxies have white dwarf cores: not white dwarfs in the core, but cores in which the constituent stars and particles are in the same condition as the constituent particles of a white dwarf star.

We do not have enough information to enable tracing the build-up of the internal activity of the galaxies from its beginning, but we do have some knowledge about the interior of a galaxy that is not yet very far along this road. This is our own Milky Way galaxy, where we have the advantage of proximity, and can observe details that would otherwise be beyond the scope of our instruments. A small region, known as Sagittarius A, apparently located at the dynamic center of the galaxy, has some unusual characteristics which indicate that it is the kind of a core that we could expect in a spiral of moderate age.

The picture is not entirely clear as yet, but as one report puts it, “Radio observations indicate that something quite unusual is going on at the center of our own galaxy.”231 Another observer draws the same conclusion from the infrared emission which, he says, is “so intense that it cannot easily be interpreted unless we believe that something very special is occurring there.”272 Here we have another instance of the association of strong radio and strong infrared emission that was discussed in Chapter 14, an association that the astronomers have never been able to explain. Referring particularly to the quasars, Shipman calls this “the infrared puzzle.”273

Both of these types of radiation are characteristic of matter that is moving at upper range speeds. Their existence in the core of the Milky Way galaxy shows that this galaxy has already developed the kind of an intermediate speed core (a white dwarf core) that we would theoretically expect in a galaxy of this size and age. The optical radiation from the core is unobservable because of absorption in the intervening matter, but some information as to the size and properties of this core has been derived from infrared and radio measurements. It is generally assumed that the radiation at about two microns wavelength is thermal, and that its intensity is proportional to the star density. As indicated in Chapter 14, our findings are in agreement with this conclusion.
On this basis it is estimated that there are about 70 million solar masses within 10 parsecs of the center of the core, and that the density in the innermost volume of 0.1 parsec radius is 100 million times the star density in the vicinity of the sun.274 On first consideration such a concentration may seem incredibly large, but when it is realized that this observed high spatial density is actually a very low density in time, it becomes evident that the observed magnitude is not out of line with other limiting densities. For
instance, the density of solid matter at zero temperature and pressure is in the neighborhood of 100 million times the density of the most diffuse stars.275

The radiation in the near infrared comes from stars that are moving at upper range speeds (which accounts for their high spatial density), but are composed of particles whose speeds (temperatures) are in the range below unity (which accounts for the thermal character of the radiation). In addition to this type of radiation, there is also a very intense radiation in the far infrared, a non-thermal radiation that “is presumed to be synchrotron radiation.”276 In the light of the findings detailed in the preceding pages, it is evident that this presumption is incorrect, and that the non-thermal radiation, both the infrared and the associated radio emission, originates from isotopic adjustments in matter that has been accelerated to upper range speeds.

The existence of radiation of this nature identifies the Milky Way galaxy as one that has a good start toward the build-up of matter with speeds in the upper ranges which will eventually lead to the kind of a gigantic explosion that ejects a quasar. “Astronomers,” says Hartmann, “are still groping for explanations of what is happening at the center of the Milky Way.”277 Here is the framework of the explanation that they are looking for.

As noted earlier, the evidence of internal activity increases as the galaxy becomes older and larger. We are not yet able to make a quantitative determination of the maximum size from theoretical premises, but we know from theory that such a limit exists, and this is confirmed by observation. Fred Hoyle points out that “Galaxies apparently exist up to a certain limit and not beyond.”278 Rogstad and Ekers give us an idea as to the location of that “certain limit.” They report that an absolute photographic magnitude of about -20 is a necessary condition for a spheroidal galaxy to be a strong radio source.279 Some of the giant galaxies that are in the neighborhood of this limiting size have jets of high speed material issuing from their central regions. The nature and properties of these jets were examined in Chapter 26. Our present concern is with their origin.

Such a jet is a conspicuous feature of the giant galaxy M 87. Like the quasar 3C 273, with which it is associated, M 87 is of special interest because it is the only member of its class near enough to be accessible to detailed investigation. This object has all of the features that theoretically distinguish a galaxy that has reached the end of the road. It is a giant spheroidal, with the greatest mass of any galaxy for which a reasonably good estimate can be made; it is an intense radio source, one of the first extragalactic sources to be identified; and a jet of high speed material emitting strongly polarized light can be seen originating from the interior of the galaxy. These indications of explosive activity are so evident that they were recognized in the original application of the Reciprocal System of theory to the astronomical field, just as soon as the theoretical limits to the life of the galaxies were discovered, long before any observational evidence of galactic explosions was recognized by the astronomers. The 1959 publication contained this statement: “It would be in order to identify this galaxy [M 87], at least tentatively, as one which is now undergoing a cosmic explosion.”
Jets such as that issuing from M 87 are obviously produced under conditions in which pressure is released in a specific direction. Since the galactic explosion that produces a quasar blows out a particular segment of the outer structure of the galaxy, the spatial motion of the quasar is given such a direction. Similar conditions may exist where
fragmentary material is ejected, and in that event the initial ejection takes the form of a jet. The observable astronomical jets are fast-moving streams of unconsolidated material with individual speeds (temperatures) that extend into the upper ranges. On this basis they should theoretically be strong emitters of radiation at radio wavelengths, particularly at the ends of the jets, and the radiation should be highly polarized. These deductions, based on the theoretical relations developed in the earlier pages, are in agreement with the observations.

The theoretical development likewise accounts for a remarkable feature of the jets that is inexplicable in the context of current astronomical theory. This is the nearly uniform thickness of the M 87 jet and others of a similar nature. The hypothesis that the astronomers have invoked to account for the radio emission and the polarization would result in a rather rapid expansion and dissipation of the jet. Why this does not occur is, to them, a mystery. Simon Mitton makes this comment:

The thickness of the jet is only tens of light years, so there must be a powerful constraint to the natural expansion of the gas.280

The development of the theory of the universe of motion identifies this “powerful constraint.” Aside from some entrained low speed matter, the constituents of these jets are atoms and particles moving at speeds in the two upper ranges. At these speeds the cloud of particles that constitutes the jet is expanding into time, rather than into space, and its spatial dimensions are decreasing slightly, rather than increasing.

The available evidence does not indicate specifically how the jet originated. It is possible that the hole in the outer structure of the galaxy through which the material of the jet is issuing may be the result of a collision similar to that which seems to have taken place in NGC 5128 and some other radio galaxies. However, the relatively small cross-section of the jet and the absence of any indication of major distortion of the galactic structure suggest that the jet is more likely to be an after-effect of the ejection of a quasar or other explosion product. It no doubt takes an appreciable time to close the opening left by the ejection of a section of the outer wall of the galaxy, and during this interval there must be some loss of energetic material from the interior. If this is a correct interpretation of the situation, the leakage now visible as a jet will eventually terminate as the outer wall of the galaxy reforms and closes the existing gap.

There is at least one quasar in the immediate vicinity that could have been ejected from M 87 recently. According to Arp, the average recession speeds of the galaxies in different parts of the region around M 87 range from about 400 km/sec more than the speed of M 87 to about 400 km/sec less.244 Any quasar or radio galaxy whose normal recession is within about 0.0015 of the recession of M 87 is therefore a probable member of the cluster of galaxies centered around M 87, and is a possible product of an explosion of that galaxy. Included within these limits is a quasar, PKS 1217+02, with a redshift of 0.240, which is equivalent to a recession shift of 0.0045 (almost the same as that of M 87). There are also several radio galaxies in the same neighborhood, with redshifts that qualify them as possible partners of this quasar.
It thus appears likely that PKS 1217+02 and one of the nearby galaxies, perhaps 3C 270, with redshift 0.0037, were ejected in a relatively recent explosion.
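These figures can be verified directly from the quasar redshift relation developed in the earlier chapters, in which the explosion-generated component of the quasar redshift is 3.5 times the square root of the recession component (a worked check, on that basis):

    recession of PKS 1217+02:    z = 0.0045
    explosion component:         3.5 × (0.0045)^1/2 = 0.235
    total quasar redshift:       0.0045 + 0.235 ≈ 0.240 (the observed value)

Similarly, the 400 km/sec spread in recession speeds reported by Arp corresponds to a redshift spread of 400/(3 × 10^5) ≈ 0.0013, consistent with the 0.0015 membership criterion applied above.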

Of course, it is not possible to reconstruct the exact sequence of events in this crowded area where there are so many galaxies that are interacting with each other, but it is clear that the whole range of explosion products is present, from the very old Class II quasar, 3C 273, to the jet of M 87, which originated only yesterday, astronomically speaking. There may even have been an explosion of M 87 that did not produce a quasar. It has been noted that the galaxy M 84 (radio source 3C 272.1) is aligned with the M 87 jet in such a manner as to suggest that this galaxy may have been formed from material ejected in a more violent period of the activity of the galaxy that preceded the production of the jet.281 The present activity of M 87 could well be the concluding phase of that explosive event.

Ultimately, after a number of ejections have occurred, an exploding galaxy will have lost so much of its substance that it will be unable to resume its normal shape and once more confine the explosion products to the interior of the structure. Thereafter the pressures necessary for the ejection of fragments of the galaxy will not be generated, and the products of the supernova explosions will be expelled at more moderate speeds in the form of clouds of dust and gas. The galaxy M 82, the first in which definite evidence of an explosion was recognized, seems to be in this stage. Photographs of the galaxy taken with the 200-inch Palomar telescope show immense clouds of material moving outward, and the galactic structure appears badly distorted.282 Just how large M 82 may have been in its prime, before it began ejecting mass, cannot be determined from observation, but presumably it was in the giant class. At present it is in the range of the spirals.

Sooner or later the remnants of all such overage galaxies will be gathered into their younger and larger neighbors. The eventual fate of M 82 is clearly foreshadowed by a comment in an article by A. R. Sandage that the evidence of explosive events in this galaxy was discovered in a survey of “a group of visible galaxies centered on the giant spiral galaxy M 81.”282 Capture of one after another of this group of galaxies will eventually build M 81 up to the maximum size. The giant thus produced will then continue on its way to the ultimate destruction that it, in turn, will experience, leaving behind remnants that will be incorporated into the new galaxies that form in the regions of space that are vacated.

Identification of the galaxies that, like M 82, are in the process of disintegration, is complicated by the fact that galaxies in the process of consolidation display many of the same features. A collection of galaxies with these features, an “Atlas of Peculiar Galaxies,” compiled by Halton Arp, probably contains a mixture of both types. The galactic combinations should outnumber the disintegrations by a rather wide margin, since many combinations are required to produce one giant spheroidal candidate for disintegration.

Before turning to another subject, it may be of interest to note that the astronomers have been so frustrated in their attempts to understand M 82 as an exploding galaxy that they are now shifting to other ideas in the hope of getting something that they can fit into the prevailing structure of astronomical theory. The following is a recent statement by Harwit:

The brightest far-infrared extragalactic source known (M 82) at one time was thought to be an exploding galaxy because hydrogen is seen streaming out of its central portions at velocities of 1,000 kilometers a second. Energetic processes that we do not yet understand appear to be active in this galaxy.283

This is a good example of the operation of one of the policies that has taken modern astronomical theory out of the real world and into the land of fantasy. M 82 exhibits some of the characteristic features of an exploding object. Recognition of these features naturally led at first to the conclusion that an explosion was in process in the interior of the galaxy. But as more information has been accumulated, difficulties have been experienced in reconciling this information with current theories as to the nature of such an explosion. As Harwit reports, the situation is not understood. The rather obvious implication of these difficulties is that the current astronomical theories, insofar as they apply to the M 82 situation, are wrong, but rather than pursuing this line of thought the theorists have chosen to develop some new hypotheses to replace the explosion assumption, hypotheses that are more speculative and less subject to disproof by testing against observation. The present-day tendency toward this kind of a retreat from reality will be given some further consideration in the next chapter.

Another kind of galactic phenomenon results from what we may call premature explosive activity. A galaxy may, for example, capture a number of relatively old stars quite early in its life, or it may even pick up some old star clusters or a remnant of a disintegrated galaxy. These older stars will reach their age limits and explode before the galaxy arrives at the stage where such explosions are normal events. If the premature activity of this nature is not extensive, the energy that is released is absorbed in the normal motions of the galaxy. But where a considerable number of stars (those in a captured cluster, for instance) reach the age limit in advance of the normal time, some significant results may follow.

If large scale activity of this kind begins when the galaxy is in an earlier stage in which it is smaller and less compact than the giant spheroidals, the concentration of explosion products in the interior may break through the overlying material before the pressure required for the ejection of a quasar is attained. The theoretical results of this kind of a situation are observed in a class of objects first identified and described by Carl Seyfert, and known as Seyfert galaxies. These are spiral galaxies, much smaller than the giant spheroidals, and by reason of their spiral structure, in which much of the mass is spread out in the form of a disk, their central regions are relatively exposed, rather than being buried under the outlying portions of the galaxies, as in the giants. The action that is going on in the Seyferts is thus more accessible to observation.

Present-day astronomical theory is totally unable to account for the observed properties of these galaxies. With reference to the facts that are now known about them, D. W. Weedman makes the comment that “The reason for their existence remains one of the most pressing astronomical mysteries.”284 As in so many of the “mysteries” examined in the preceding pages, however, these observations are readily explained in the context of the universe of motion. The biggest enigma for the astronomers is the magnitude of the energy emission.
Radiation of the upper range types, radio and far infrared, is being
emitted from these Seyfert galaxies in the same manner as from the core of the Milky Way galaxy, but at an immensely greater rate. As reported by Neugebauer and Becklin:

The amount of power such galaxies radiate in the infrared corresponds to as much as 10¹¹ times the power output of the sun. This is approximately the amount of power radiated by all the stars in our galaxy at all wavelengths.168

“Conventional concepts of nuclear physics are woefully inadequate in accounting for such a large energy output from such a miniscule region,”285 says Mitton. The astronomers' perplexity is still more vividly expressed in this statement:

One cannot help wondering what strange machine is hidden at the center of that galaxy [NGC 1275, a Seyfert] and others similar to it. Such prodigious emission of energy and matter from a region that appears to be shrinking, the more we study it, poses questions to which we have no answers.286 (P. Maffei)

Now that we have established the nature of the quasars, the finding that the Seyferts are premature quasars identifies the source of the energy, and eliminates the problem of the size of the emitting region. This region appears small only if we look at it as a spatial domain, which it is not. It is actually a large region containing a huge number of stars, but its extension is in time, rather than in space.

The violent motion required by the theory of the universe of motion has been detected in the cores of these Seyfert galaxies. R. J. Weymann reports that the emission spectra of the Seyfert galaxies “indicate that the gases in them are in a high state of excitation and are traveling at high speeds in clouds or filaments. Outbursts probably occur from time to time, producing new high-velocity material.” This, of course, is a description of the state of affairs that the theory says should exist, not only in the Seyfert galaxies, but in the cores of the giant spheroidals as well.

To the astronomers the whole situation is a “puzzle” because, unlike the Reciprocal System, conventional astronomical theory provides no means, other than gravitation, of confining high-speed material within a galaxy, and gravitational forces are hopelessly inadequate in this case. Weymann summarizes the situation in this manner:

If we accept the fact that the gas inside the tiny core of a Seyfert galaxy is moving at the high apparent velocity indicated by the spectra, and if we assume that the gas is not held within the core by gravitation, we must explain how it is replaced or conclude that the violent activity observed in the core is a rare transient event caused by some explosive outburst.

But the latter possibility, he concedes, is inadmissible, because the Seyfert galaxies “cannot be considered particularly rare.” Hence this piece of observational evidence that is such a significant and valuable item of confirmation of the theory described in this work, not only the theory of the Seyfert galaxies, but the whole theory of the galactic explosion phenomena, including the quasars, is nothing but another enigma to conventional theory.

Weymann also points out that the spectral characteristics of the light from the nuclei of these Seyfert galaxies are quite different from those of the light coming from the outlying regions.

Ordinary stars (such as our sun) emit more yellow light than blue light. This is also the case if one observes a Seyfert galaxy through an aperture that admits most of the light from the galaxy. As the aperture is reduced to accept light only from the central regions, however, the ultraviolet and blue part of the spectrum begins to predominate.231

This is another piece of information that fits neatly into the general theoretical picture. We have deduced from theory that the predominantly yellow light (positive U-B) that we receive from ordinary galaxies is characteristic of matter moving with speeds less than that of light, while the predominantly ultraviolet light (negative U-B) is characteristic of matter moving with upper range speeds. Now we observe an otherwise normal galaxy with a nucleus in which there is some unusual activity. From theoretical considerations we identify this activity as being due to a series of supernova explosions that are accelerating some particles or aggregates of matter to speeds in excess of the speed of light, and we find that the light from this galaxy displays just the characteristics that the theory requires.

The existence of some kind of an unidentified energetic process in the interiors of the Seyferts (a “strange machine,” as Maffei called it in the statement previously quoted) is generally recognized. Simon Mitton makes this comment:

The variations in NGC 1068 [a Seyfert galaxy] require a non-thermal mechanism for the generating source of the intense infrared emission . . . Because of the difficulties with the hot dust concept, Rieke and Low prefer to attribute the radiation to a mysterious non-thermal source.287

As reported by Mitton, it is now generally agreed that there is sufficient evidence to show that there are “periodic explosions in the Seyfert nucleus that blast debris into the surrounding regions.” But these explosions are unexplained in current astronomical thought. “All models of Seyfert nuclei ultimately rely on the ad hoc existence of a primary energy source.”285

The theory developed herein resolves all of these issues. Furthermore, it explains the periodic nature of the explosive activity. This is one of the most difficult aspects of the situation from the standpoint of current theory. Observations confirm the existence of high speed matter in the interiors of the Seyfert galaxies in the intervals between explosions, but, as pointed out by Weymann, conventional astronomical theory has no way of explaining the build-up and containment of this very energetic material. In this case, as in so many others, the Reciprocal System, by providing an explanation, is filling a conceptual vacuum.

The same factor that makes the internal activity of the Seyfert galaxies more accessible to observation than that of the giant spheroidals, the thinner layers of overlying material, also limits the kind of products that can result from this activity. In these smaller galaxies it is not possible to build up the great concentration of energy that is necessary in order to eject a quasar, and the emissions of material therefore take less energetic forms. The most common result is nothing more than an outflow of matter in an irregular pattern, but in some instances small fragments of the galaxy are ejected, without the ultra high speed of the quasar.

Because of the periodicity of the explosive events in the Seyfert galaxies the nature and magnitude of the radiation from the products are variable. Immediately after an outburst the galaxy is a strong radio and infrared emitter, as noted in Chapter 18. As time goes on, the isotopic adjustments are completed and this radiation therefore decreases. As a result, the radio emission from some of the Seyferts is little, if any, greater than that from the average spiral galaxy.

Except for that portion which is entrained in the outgoing low speed matter, the intermediate speed products of the explosion remain in the immediate vicinity of the galaxy because of the absence of translational motion in space in the intermediate speed range. Ultimately this material cools enough to drop back below the unit speed level. This initiates isotopic adjustments of the inverse nature, producing x-rays. Thus some Seyferts are strong x-ray emitters, while in others little or no x-ray radiation is detected,264 depending on the stage in which the galaxy happens to be when observed. As would be expected, the stronger sources, both radio and x-ray, are subject to large variations.

It is quite evident that there is some kind of a connection between the Seyferts and the quasars. As expressed by Weymann, “Except for an apparent difference in luminosity, Seyfert galaxies and quasars represent essentially similar phenomena.”231 Many astronomers believe that quasars are simply distant Seyfert galaxies, the basis for such a conclusion being the finding that a number of quasars are surrounded by diffuse matter that has the same redshift as the quasar itself. It is difficult, however, to see why this conclusion should necessarily follow from the observed facts. Some of the reports specify that what has been observed is “nebulosity” that presumably indicates the presence of hot gas. But the presence of hot gas surrounding an object does not preclude that object from being a quasar. Indeed, our findings with respect to the origin of the quasars indicate that they must be surrounded by hot gas in their early stages, and probably are in their later stages as well. Nor is the hypothesis as to the identity of the Seyferts and the quasars entitled to any more credence because an association has been found between a quasar and a galaxy of the same redshift. The logical conclusion in this case is that the previous classification was in error, and that the observed object is actually a Seyfert galaxy.

The Seyferts are difficult to identify at great distances because the cores are so much more luminous than the surrounding structure. It can be expected, therefore, that improvements in instrumentation and procedures will result in identifying an increasing number of objects of this type among the distant objects now classified as quasars.

Only a small proportion of the spiral galaxies have thus far been recognized as Seyferts. Weedman estimates about one percent.284 Even a substantial increase over this percentage would be consistent with the theoretical status of the Seyferts as deviants from the normal evolutionary pattern, the pattern that culminates in the production of quasars.

The analog of the Seyfert galaxy is not the quasar but the giant spheroidal galaxy from which the quasar was ejected. Both of these types of galaxies are subject to periodic outbursts in which quantities of dust, gas, and galactic fragments are ejected.
But the giant galaxy also ejects quasars and diffuse material at ultra high speeds, while the Seyfert explosions are not powerful enough to accelerate any of their products into the ultra high range. Consequently there are no counterparts of the quasars in the Seyfert
products. Nor do these products have any of the other ultra high speed properties, such as the characteristic radio structure.

No Seyfert galaxy exhibits a double radio structure such as that found in most radio galaxies and quasars.286 (P. Maffei)

To conclude the discussion of the pre-quasar situation, we turn now to the earliest ancestors of the giant galaxies that produce the quasars, the globular clusters. The general run of stars of these clusters are far too young to become supernovae, but as emphasized in the earlier pages, the dispersed material from which the globular clusters were formed contained a few remnants of disintegrated galaxies: stars and small star clusters. These are incorporated into the newly formed globular clusters, usually serving as nuclei for the cluster formation. They are already well along the way to their limiting age, and may reach it while the cluster is still an independent unit.

In a large cluster, one that has not yet undergone the attrition that takes place in the immediate vicinity of a galaxy, the amount of material overlying the central regions is sufficient to withstand a considerable amount of internal pressure. Any ultra high speed explosion products probably escape, but those that are moving at less than unit speed are largely confined, while the intermediate speed products, aside from those that are entrained in the outward-moving material, remain at the location of origin, inasmuch as they have no spatial motion components. The presence of these intermediate speed products results in the existence of a high density region in the center of the cluster, a small-scale replica of those in the cores of the large galaxies.

After the few very old stars are gone there is no replacement of the energy lost from the explosion products, and their temperature therefore decreases. At some point it drops below the unit level. This initiates x-ray emission. A 1977 publication reported that seven “x-ray stars” had been found in the globular clusters of our galaxy.288 Unlike the returning white dwarfs, whose x-ray emission is observable only when the material from the interiors of these stars breaks through the overlying low speed matter in a nova explosion, these “x-ray stars” are actually concentrations of explosion products similar to those in the observable supernova remnants, and they continue their emission in the manner of those remnants, gradually decreasing as the isotopic adjustments are completed.

CHAPTER 28

Inter-sector Relations

Unquestionably, the most intriguing new finding that has emerged from the development of the theory of the universe of motion is the existence of an inverse sector of the universe that duplicates the material sector which has heretofore been regarded as the whole of physical existence. As might be expected, this finding has met with a cold reception by those scientists who adhere strongly to orthodox lines of thought. This is, in a way, rather inconsistent, as these same individuals have been happy to extend hospitality to the same ideas in different forms. The concept of an “antiuniverse” composed of antimatter surfaced almost as soon as antimatter was established as a
physical reality; the hypothesis of “multiple universes” gets a respectful hearing from the scientific Establishment; and astronomical literature is full of speculations about “holes” that may constitute links between those universes: black holes, white holes, wormholes, etc. It should therefore be emphasized that the theory of the universe of motion which identifies an inverse sector is not based on radical departures from previous thinking, but on concepts that were already familiar features of scientific thought.

Actually, all that has been done in the extension of the new theory into this area is to take the vague concept of an antiuniverse, put it on a solid factual foundation, and develop it in logical and mathematical detail. Many of the conclusions that have been reached in the course of this development are new, to be sure, but they are implicit in the antiuniverse concept.

Observational identification of antimatter in our local environment shows that the observed universe and the antiuniverse are not totally isolated from each other; some entities of the “anti” type exist in observable form in our familiar physical universe. It is only one step farther, a logical additional step, to a realization that this implies that the complex entities of the observed type may have components of the “anti” nature. Once this point is recognized, it can be seen that the unorthodox conclusions that have been reached in the preceding pages are simply the specific applications of the antiuniverse concept.

For example, additions to the linear component speeds (temperatures) decrease the density of ordinary astronomical objects. It follows from the inverse nature of the “anti” sector, the cosmic sector, as we are calling it, that addition of speeds of the “anti” character increases the density. Similarly, addition of rotational motion in space to an atom of matter decreases the isotopic mass, while addition of rotational motion of the inverse type (motion in time) increases the isotopic mass. And so on.

The new theoretical development has merely taken the familiar idea of a universe of motion, and the equally familiar idea of existence in discrete units only, and has followed these ideas to their logical consequences, an operation that was made possible by the only real innovation that the new development introduces into physical theory: the concept of a universe composed entirely of discrete units of motion.

With the benefit of this new concept, it has been possible to define the physical universe in terms of the two postulates stated in Volume I. The contents of this present volume describe the detailed development of the consequences of these postulates, as they apply to astronomy. Before concluding this description, and taking up consideration of some of the other consequences and implications of the findings, it will be appropriate to give further attention to the few, but important, direct contacts between the two sectors of the universe.

In one sense, the two primary sectors of the universe, the material and the cosmic, are clearly differentiated. The phenomena of the material sector take place at net speeds that cause changes of position in space, whereas the phenomena of the cosmic sector take place at net speeds that cause changes of position in time. But the space and time of the material sector are the same space and time that apply to the cosmic sector. For this
reason, the phenomena of each sector are also, to some degree, phenomena of the other as well. Some of the observable effects of this inter-sector relationship have already been discussed.

In Volume I the cosmic rays that originate in the cosmic sector were considered in substantial detail, and in the preceding chapters of this present volume similar consideration has been given to the quasars and pulsars that are on their way to the cosmic sector. In these areas previously examined, we have been dealing with phenomena in which physical objects acquire speeds, or inverse speeds, that cause them to be ejected from one sector into the inverse sector, where the combinations of motions that constitute these objects are transformed into other combinations that are compatible with the new environment.

In addition to these actual interchanges of matter between sectors, there are also situations in which certain phenomena of one sector make observable contact with the other sector because of this point that has just been brought out: the fact that the events of both sectors involve the same space and time.

As we have seen in the earlier pages, the dominant physical process in each sector is aggregation under the influence of gravitation. In the material sector gravitation operates to draw the units of matter closer together in space to form stars and other aggregates. When portions of this matter are ejected into the cosmic sector in the form of quasars and pulsars, gravitation ceases to operate in space. This leaves the outward progression of the natural reference system unopposed, and that progression, which carries the constituent units of the spatial aggregates outward in all directions, destroys the spatial structures and leaves their contents in the form of atoms and particles widely dispersed in both space and time. Meanwhile, gravitation in time has become operative, and as it gradually increases in strength it draws the dispersed matter into stars and other aggregates in time. These aggregates then go through the same kind of an evolutionary course as that followed by the aggregates in space.

As this description indicates, the basic physical units maintain the continuity of their existence regardless of the interchanges between sectors, merely altering their distribution in space and time. In the material sector they are distributed throughout the full extent of the three dimensions of the spatial reference system, but they move only through the restricted region of time traversed in a linear progression. In the cosmic sector these distributions are reversed.

Contacts between the entities of the material sector and those of the cosmic sector are therefore limited. In view of the relatively low density of matter in the universe as a whole, a cosmic entity moving one-dimensionally through three-dimensional space will, on the average, have to travel a long way before encountering a material object. Nevertheless, some such encounters are continually taking place.

The key factor in this situation is the nature of the relation between space and time. Not until comparatively recently was it realized that such a relation actually exists. Even in Newton's day these two entities were still regarded as being totally independent. The current view is that time is one-dimensional, and constitutes a kind of quasi-space which joins with three dimensions of space to form a four-dimensional space-time framework, within which physical objects move one-dimensionally.
While this four-dimensional space-time concept is relatively new, the basic idea of space and time as the elements of a
framework, or setting, for the activity of the universe is one of long standing. Indeed, it is so deeply embedded in physical thought that it is very difficult to recognize the existence of any alternative. The problem involved in making a break with this familiar habit of thought is illustrated by the fact that even in the first edition of this work, the postulates of the theory being described were still expressed in terms of “space-time.” Eventually, however, it was realized that space-time is actually motion.

Throughout the development of thought concerning this subject, it has been recognized by everyone that motion is a relation between space and time. The magnitude of the motion, the speed or velocity, has been expressed accordingly, in terms of centimeters per second, or some equivalent. The four-dimensional concept embraced by current science assumes that a totally different kind of relation also exists. In application to entities of a fundamental nature, such a duality is inherently improbable, and the development of the theory of the universe of motion now indicates that the assumption is erroneous. Our finding is that any relation between space and time is motion or an aspect of motion.

It is now evident that the concept of space-time employed in conventional physical theory, and carried over into the early stages of the development of the theory of the universe of motion, is a partial, and rather confused, recognition of the nature of the fundamental relation between space and time. This so-called “space-time,” a simple relation between a space magnitude and a time magnitude, is the basic scalar relation between space and time; that is, “space-time” is actually scalar motion.

Fundamentally, this relation is mathematical. Its dimensions are therefore mathematical, or scalar, dimensions. From the mathematical standpoint, an n-dimensional quantity is simply one that requires n independent scalar magnitudes for a complete definition. It follows that in a three-dimensional universe there can be three scalar dimensions of motion. The spatial aspect of one (and only one) of these can be represented geometrically in a reference system of the conventional coordinate type. Here we are dealing with three dimensions of space, but only one dimension of motion. The reference system is not capable of representing motion in the other two scalar (mathematical) dimensions. But the fact that the same space and time are involved in all types of motion means that there are some effects of the motion in these other dimensions that are observable, at least indirectly, in the reference system. The force of gravitation, for example, is reduced by a distribution over all three dimensions, and only a fraction of it is effective in the space of the reference system.

Use of the term “dimension” in both mathematical and geometric applications leads to some confusion. The term is usually interpreted geometrically, and many persons are puzzled by the introduction of scalar dimensions of motion into the physical picture. It has therefore been suggested that some different designation ought to be substituted for “dimension” in one of the two applications. However, all dimensions are inherently mathematical. The geometric dimensions are merely representations of numerical magnitudes.

Motion at a speed less than unity causes a change of position in space. The three-dimensionality of the universe applies to the spatial aspect of this motion as well as to the motion as a whole.
The space involved in one of the scalar dimensions of motion can

therefore be resolved into three independent components, which can be represented geometrically. Since no more than three dimensions exist, there is no basis, within threedimensional geometry, for representation of the spatial aspects of the other two scalar dimensions of motion, except under certain special conditions discussed in the preceding pages. Motion at a speed greater than unity causes a change of position in threedimensional time. If independent, this motion cannot be represented in the spatial reference system. However, if it is a component of a combination of motions in which the net total speed is on the spatial side of the neutral level, the temporal speed acts as a modifier of the spatial speed; that is, as a motion in equivalent space. From the foregoing it can be seen that the universe is not four-dimensional, as seen by conventional science, nor is it six-dimensional (three dimensions of space and three dimensions of time), as some students of the Reciprocal System of theory have concluded. We live in a three-dimensional universe. Just how these three dimensions manifest themselves in any specific case depends on the individual circumstances. Two physical entities make contact when they occupy adjacent units of either space or time. It is commonly believed that the essential condition for contact is to reach the same point in space at the same time, but this is not necessarily true. Objects located in the spatial reference system must be at the same stage of the progression-that is, the same clock time-in order to make contact, but this is only because there is a space progression paralleling the time progression recorded by the clock, and unless two such objects are at the same stage of this space progression they are not at the same spatial location. Two objects that are in contact in space are not usually at the same location in threedimensional time. Likewise, objects that are in contact in time are usually at different spatial locations. This fact that the spatial contact is independent of the time location accounts for the containment of the material moving at upper range speeds in the interiors of the giant galaxies prior to the explosions that produce the quasars. Since the components of this high speed aggregate are expanding into time at speeds in excess of the speed of light, it might be assumed that they would quickly escape from the galaxy. But the increased separation in time does not alter the spatial relations. The equilibrium structure in space that exists in the outer regions of the giant galaxy is able to resist penetration by the high speed material in the same manner in which it resists penetration by matter moving at less than unit speed. The motions of cosmic entities in time are similarly restrained by contacts with cosmic structures, but these phenomena are outside our field of observation. The phenomena of the cosmic sector with which we are now concerned are the observable events which involve contacts of material objects with objects that are either partially or totally cosmic in character. Interaction of a purely material unit with a cosmic unit, or a purely cosmic unit with a material unit follows a special pattern. Where the structures are identical, aside from the inversion of the space-time relations, as in the case of the electron and the positron, they destroy each other on contact. Otherwise, the contact is a relation between a space magnitude and a time magnitude, which is motion. 
Viewed from a geometrical standpoint, these entities move through each other. Thus matter, which is primarily a time structure, moves through space, while the uncharged electron, which is essentially a rotating space unit, moves through matter.

Material and cosmic atoms, and most sub-atomic particles, are composite structures that include both material (spatial) and cosmic (temporal) components. Inter-sector contacts between such objects therefore have results similar to those of contacts between material objects. To an observer, such a contact appears to be the result of a particle entering the local environment from an outside source. These results are indistinguishable from those produced by an incoming cosmic atom. The contact will therefore be reported as a cosmic ray event. The cosmic atoms involved in these events are moving at the ordinary inverse speeds of the cosmic sector, rather than at the very high inverse speeds of the atoms that are ejected into the material sector as cosmic rays. The most energetic of the reported cosmic ray events therefore probably result from these random encounters.

One other cosmic event that has an observable effect in the material sector is a catastrophic explosion, such as a supernova or a galactic explosion, that happens to coincide with the time of the spatial reference system. The radiation received in the material sector from ordinary cosmic stars is widely dispersed in space, because only a few of the atoms of each of these stars are located in the small amount of space that is common to the cosmic star and the spatial reference system as they pass through each other. But a cosmic explosion releases a large amount of radiation in a very small space, just as an explosion of the material type releases a large amount of radiation in a very short time. We can thus expect to observe some occasional very short emissions of strong radiation at cosmic frequencies (that is, the inverse of the frequencies of the radiation from the corresponding explosions of the material type).

Both the theoretical investigations and the observations in this area are still in the early stages, and it is premature to draw firm conclusions, but it seems likely that the theoretical short, but very strong, emissions of radiation can be identified with some of the gamma ray "bursts" that are now being reported by the observers. A reported "new class of astronomical objects" is described in terms suggesting cosmic origin. These objects, says the report, "emit enormous fluxes of gamma radiation for periods of seconds or minutes and then the emission stops."289 Martin Harwit tells us that "remarkably little is known about gamma ray bursts,"290 and elaborates on that assessment by citing an observers' summary of the existing situation, the gist of which is contained in the following statement:

Neither the indicated direction or coincidence in times of occurrence have yet established an association between these bursts and any other reported astrophysical phenomena. Even today, 1978, with 71 bursts cataloged, and with improved directional resolution available, the sources of these bursts remain unidentified without even a strong suggestion of the class or classes of objects responsible.291

In addition to these events involving contacts between the entities of the two sectors, there are other phenomena which result from the fact that photons of radiation exist on the sector boundary, and therefore participate in the activities of both sectors. This is a consequence of the status of unit speed as the speed datum, the physical zero, as we called it in the earlier discussion.
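Stated compactly, and offered only as a notational sketch in the natural units the theory has already introduced (unit speed, the speed of light, taken as 1): speed is the ratio s/t of space to time, inverse speed (energy) is the ratio t/s, and the datum is

$$\frac{s}{t} = \frac{t}{s} = 1,$$

so that a photon, which moves at unit speed in the conventional reference system, shows zero deviation from this datum, while the inversion at the unit boundary turns a material-sector frequency of ν into the cosmic-sector value 1/ν in these units. The paragraphs that follow develop the same points in words.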
From the standpoint of the natural reference system, a speed of unity measured from zero speed, and an inverse speed of unity measured from zero energy (inverse speed), are equal to each other, and equal to zero. An object moving at this speed relative to the conventional spatial reference system, or to an equivalent temporal reference system, is not moving at all from the natural standpoint. The photons of radiation, which move at unit speed in the conventional reference system, are thus stationary in the natural system of reference, regardless of whether they originate from objects in the material sector, or from objects in the cosmic sector. It follows that they are observable in both sectors.

Because of the inversion of space and time at the unit level, the frequencies of the cosmic radiation are the inverse of those of the radiation in the material sector. Cosmic stars emit radiation mainly in the infrared, rather than mainly at optical frequencies, cosmic pulsars emit x-rays rather than radio frequency radiation, and so on. But these individual types of radiation are not recognized as such in the material sector because, as we found earlier, the atoms of matter that are aggregated in time to form cosmic stars, galaxies, etc., are widely separated in space. The radiation from all types of cosmic aggregates is received from these widely dispersed atoms as a uniform mixture of very low intensity that is isotropic in space. This "background radiation" is currently attributed to the scattered remnants of the radiation originated by the Big Bang, which are presumed to have cooled to their present state, equivalent to an integrated temperature of about 3 K, in the billions of years that are supposed to have elapsed since that hypothetical event occurred.

The Big Bang is one of the major features of the universe as it appears in modern astronomical theory. The next chapter will present a comparison of this astronomical universe with the universe of motion defined by the postulates of this work. It will be shown that, although the building blocks of the astronomers' universe are observed entities (stars, galaxies, etc.) that exist in the real sense, the universe that they have constructed as a setting for these real objects is a purely imaginary structure that has no resemblance to the real physical world.

Inasmuch as science claims to have methods and procedures that are capable of arriving at the physical truth, it may be hard to understand how the astronomers, who presumably utilize scientific methods, could have reached such very unscientific results. But an examination of astronomical literature quickly shows just what has gone wrong. The astronomers have followed the lead of a modern scientific school whose methods and procedures do not conform to the rigid standards of traditional science. Of course, this assertion will be vigorously denied by those whose activities are thus characterized. So let us see just what is involved in this situation.

Aside from gathering information, the traditional way of extending scientific knowledge is by means of what is known as the hypothetico-deductive method. This method involves three essential steps: (1) formulation of a hypothesis, (2) development of the consequences thereof, and (3) verification of the hypothesis by comparing these consequences with the facts disclosed by observation and measurement. The nature of this process allows a wide latitude for the construction of the basic hypotheses. On the other hand, the constraints on item (3), the verification process, are extremely rigid.
In order to qualify as an established item of scientific knowledge, a hypothesis must be capable of being stated explicitly, so that it can be tested against observation or measurement. It must be so tested in a large number of separate applications distributed over the entire field to which this item is applicable, it must agree with observation in a substantial number of these tests, and it must not be inconsistent with observation in any case.

It is important to bear in mind that a physical proposition of a general nature, the kind of hypothesis that enters into the framework of the astronomers' universe, cannot be verified directly in the manner in which we can verify a simple assertion such as "Water is a compound of oxygen and hydrogen." Direct verification of a general relation would require an impracticable number of separate correlations. In this case, therefore, it is necessary to rely on probability considerations. Each comparison of one of the consequences of a hypothesis with observed or measured facts is a test of that hypothesis. Disagreement is positive. It constitutes disproof. If even one case is found in which a conclusion that definitely follows from the hypothesis is in conflict with a positively established fact, that hypothesis, in its existing form, is disproved. Agreement in any one comparison is not conclusive, but if the tests are continued, every additional test that is made without encountering a discrepancy reduces the probability that any discrepancy exists. By making a sufficient number and variety of such tests, the probability that there is any conflict between the consequences of the hypothesis and the physical facts can be reduced to a negligible level. Just where this level is located is a matter of opinion, but the principle that is involved is the same as that which applies to any other application of the probability laws.

Many positive correlations are required in order to establish a probability strong enough to validate a hypothesis. If only a few tests can be made, the probability of validity remains too low to be acceptable. To illustrate the effect of a small number of correlations with empirical knowledge, let us consider one of the coin tossing experiments that are used extensively in teaching probability mathematics. We will assume, for purposes of the illustration, that the participants have not been given the opportunity to examine the coin that will be used in the experiment. Thus there is a small possibility that this coin is a phony object with heads on both sides. If the first toss comes up heads, this is consistent with a hypothesis that a two-headed coin is being used, but clearly, this one case of agreement with the hypothesis does not change the situation materially. The odds are still overwhelmingly in favor of the coin being genuine. Not until a substantial number of successive heads have been tossed, perhaps nine or ten, would the double-headed coin hypothesis be taken seriously, and a still longer run would be required before the hypothesis could be considered validated.

The effect of the number of trials, or tests, on the probability of the validity of a hypothesis is independent of the nature of the proposition being tested. Astronomical conclusions are subject to the same considerations as any other hypotheses, including the hypothesis of the double-headed coin. But very few of the key features of the astronomers' picture of the basic structure of the universe are supported by more than one or two correlations with observation. Some have none at all.
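To put rough numbers on the coin illustration, here is a minimal sketch of the underlying probability arithmetic. The prior of 1 in 1000 for the two-headed coin is an assumption chosen purely for illustration; the point is how little one or two favorable outcomes move the odds.

```python
from fractions import Fraction

def posterior_two_headed(n_heads, prior=Fraction(1, 1000)):
    """Probability that the coin is two-headed after n consecutive heads.

    A genuine coin yields n heads with probability (1/2)**n; a two-headed
    coin yields them with probability 1. Bayes' theorem combines these
    likelihoods with the prior to give the updated probability.
    """
    p_genuine = Fraction(1, 2) ** n_heads   # likelihood under "genuine coin"
    numerator = prior * 1                   # likelihood under "two-headed" is 1
    denominator = numerator + (1 - prior) * p_genuine
    return numerator / denominator

for n in (1, 2, 10, 20):
    print(f"{n:2d} heads: P(two-headed) = {float(posterior_two_headed(n)):.6f}")

# One or two heads leave the probability near the 0.001 prior; ten heads
# raise it to roughly 0.5; twenty make the hypothesis nearly certain.
```

The same arithmetic underlies the observation below that an analogy exhibiting a dozen points of similarity is equivalent to a dozen positive correlations.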
The fact that one or two cases of agreement between theory and observation, where they exist, do not add significantly to the probability of validity thus means that these crucial astronomical conclusions are unconfirmed. As scientific products they are incomplete. The final step in the standard scientific procedure, verification, has not been carried out. To make matters worse, many of the conclusions are not merely unverified. The processes by which they have been reached are such that a large proportion of them are necessarily wrong. The reason is that these conclusions rest, in whole or in part, on general principles that are invented. The status of invention as a source of physical theory was discussed in Volume I, but a review of the points brought out in that discussion that are relevant to the astronomical situation will be appropriate at this time.

Modern physical theory is a hybrid structure derived from two totally different sources. In most physical areas, the small-scale theories, those that apply to the individual physical phenomena and the low-level interactions, are products of induction from factual premises. Many of the general principles, those that apply to large-scale phenomena, or to the universe as a whole, are invented. "The axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention,"292 is Einstein's contention.

There is a great deal of misunderstanding as to the role of experience in the first step of the scientific process, the formulation of a hypothesis, largely because of the language that is used in discussing it. For example, in describing "how we look for a new [physical] law," Richard Feynman tells us, "First we guess it."293 This would seem to leave the door wide open, and such statements are widely regarded as sanctioning free use of the imagination in theory construction. But Feynman goes on to stipulate that the hypothesis must be a "good guess," and enumerates a number of criteria that it must satisfy in order to qualify as "good." Before he is through he concedes that "what we need is imagination, but imagination in a terrible strait-jacket."294 What Feynman calls a "good" guess is actually one that has a substantial probability of being correct. As he points out, there are an "infinite number of possibilities" if invention is unrestricted. The probability of any specific one of these being correct is consequently near zero. The scientific way of arriving at a reasonably probable hypothesis (the way that Feynman describes, even though some of his language would lead us to think otherwise) is to utilize inductive processes such as extrapolation, analogy, etc., to obtain the kind of "inference from experience" to which Einstein objects.

A hypothesis derived inductively, that is, an inference from experience, is, in effect, pretested to a considerable extent. For instance, an analogy in which a dozen or so points of similarity are noted is equivalent to an equal number of positive correlations subsequent to the formulation of a hypothesis. Thus the inductive theory has a big head start over its inventive counterpart, and is within striking distance of proof of validity from the very start. But inductive reasoning requires a factual foundation. Inferences cannot be drawn from experience unless we have had experience of the appropriate nature. In many of the fundamental areas the necessary empirical foundations for the application of inductive processes have not been available. The result has been a long-standing inability to find answers to many of the major problems of the basic areas of physics. Continued frustration in the search for these answers is the factor that has led to the substitution of inventive for inductive methods.

A similar situation exists throughout most of the astronomical field, where normal inductive methods are difficult to apply because of the scarcity of empirical information and the unfamiliar, and seemingly abnormal, nature of many of the observed phenomena. The astronomers have therefore followed the example of the inventive school of physicists, and have drawn upon their imaginations for their hypotheses. Application of this policy has resulted in replacement of the standard scientific process of theory construction by a process of "model building." This process starts with a "free invention," a "castle in the air," as H. L. Shipman describes it. Beginning with "a small, neat castle in the air," he says, you "patch on extra rooms and staircases and cupolas and porticos."295 The result is not a theory, an explanation or description of reality; it is a model, something that, as Shipman explains, is merely intended to facilitate understanding of the real world. "The model world exists only in people's minds,"296 he says.

The fatal weakness of this kind of a program, based on invention, is that inventive hypotheses are inherently wrong. The problems that they attempt to solve almost invariably exist because some essential piece, or pieces, of information are missing. This rules out obtaining the answer by inductive methods, which must have empirical information on which to build. Without the essential information the correct answer cannot be obtained by any means (except by an extremely unlikely accident). The invented answer drawn from the imagination to serve as the basis for a model is therefore necessarily wrong.

Of course, the erroneous invented theories, or models, cannot meet the standard tests of validity, and the same process of invention has been applied to the development of expedients for evading the verification requirements. Not infrequently these are employed to evade actual conflicts with the observed facts. Chief among them is the ad hoc assumption. When the consequences of a hypothesis are developed, and it is found that they disagree with observation in some respects, instead of taking this as disproof of the validity of the hypothesis, the theorist uses his ingenuity to invent a way out of the difficulty that cannot be tested, and therefore cannot be disproved. He then assumes this invention to be valid. Like the invented theories themselves, and for the same reasons, these inventions that take the form of ad hoc assumptions are inherently wrong.

Another of the expedients frequently employed to justify acceptance of a hypothesis whose validity has not been, or cannot be, tested is the "There is no other way" argument that we have had occasion to discuss at a number of points in the preceding pages. No further comment should be necessary on the usual form of this argument, but we often meet it in a somewhat different form in astronomy. There are many astronomical phenomena about which very little is known, and only one or two correlations with observation are possible, as matters now stand. There is a rather widespread impression that, under the circumstances, if a hypothesis is consistent with observation in these instances, its validity is established. Here the argument is that the hypothesis has been tested in the only way that is possible, and has withstood that test.
The fallacy involved in calling this a verification can be seen when it is realized that the limitation of the testing of a hypothesis by reason of the unavailability of more than one or two tests is equivalent to discontinuing the coin-tossing tests of the double-headed hypothesis after the first or second toss. The truth is that the increase in the probability of validity of a hypothesis that results from a favorable outcome of one or two tests is insignificant, regardless of the reasons for the limitation of the testing to these cases. What the current practice amounts to is that instead of proof the astronomers are offering us absence of disproof. Shipman makes this comment about the situation in one of the poorly tested areas:

To a great extent this picture [of stellar evolution] is based on limited models, blind faith, and a few observed facts.297

"Blind faith" may be appropriate in religion, but it is totally unscientific. One of the most unfortunate results of the reliance on absence of disproof is that it favors departures from reality in the construction of theories. The farther a hypothesis diverges from reality, the less opportunity there is for checking it against established facts, and the more difficult it is to disprove. By the time the speculation reaches such concepts as the black hole, all contact with reality has long since been lost. For example, examination of the case in favor of the black hole as the explanation of the x-ray source Cygnus X-1, the object that is supposed to provide the best observational evidence for the existence of a black hole, reveals that this case is argued entirely on the basis of what this object is not. It is not a white dwarf, so it is claimed, because it is larger than the accepted unverified hypothesis as to the nature of the white dwarf stars will permit. It is not a neutron star, because, for the same reason, the observations conflict with the accepted unverified hypothesis as to the nature of the hypothetical neutron stars. "It is difficult to explain Cygnus X-1 as anything but a black hole,"298 says Shipman.

In less credulous times, the inability of an investigator to find a viable explanation for a phenomenon would have been regarded as an indication that his job is still unfinished. But now we are expected to accept the best that he can do as the best that can be done. In justice to this author, however, it should be noted that, although he accepts the existence of the black hole as "probable" on the strength of the foregoing argument, he evidently has some qualms about giving unreserved support to such an excursion into the land of fantasy, because he goes on to say:

Black holes are, so far, entirely theoretical objects . . . It is very tempting, especially for people who like science fiction, to succumb to the Pygmalion syndrome and endow their model black holes with a reality that they do not yet possess.299

It is, of course, true that the opportunities for gathering factual information are severely restricted in astronomy, where experimentation is not possible and observation is limited by the immense distances and very long times that are involved in the phenomena under consideration. The structure of astronomical theory thus rests on a very narrow factual base, and it is to be expected that more than the usual amount of speculation will enter into astronomical thinking. But the presence of this speculative component in current thought is all the more reason for taking special precautions to maintain a strict distinction between those items that have met the test of validity and those that are still unverified. In order to preserve the contact with reality, it is particularly important to avoid pyramiding unverified results.

Here the demands of science collide with the interests of the scientists, especially the theorists. Advancement of theoretical knowledge is a slow and difficult task. Few of those who undertake this task ever accomplish anything of a lasting nature, other than minor modifications of some features of previous thought. But the professional scientists of the present day are under intense pressure to produce results of some kind. Financial support, personal prestige, and professional advancement all depend on arriving at something that can be published. As expressed among the university faculties, "Publish or perish." So the theorists concentrate their efforts mainly in the far-out regions where there is only a minimum of those inconvenient facts that are the principal enemies of theories, and they fill the scientific literature with products that cannot be tested because they have too few contacts with physical reality. It is the pyramiding of these untestable hypotheses that has produced the imaginary universe of modern astronomy that we will examine in the next chapter.

To the extent that the theorists make any attempt to justify their wholesale use of imaginary entities and phenomena in the construction of their models, they rely on the contention that "there is no other way"; that the amount of factual information available for their use is totally inadequate to provide the foundation for theoretical development. This is a specious argument. It serves the purpose of the individual whose primary aim is to find something that he can publish, but it makes no contribution toward the advancement of knowledge. On the contrary, to the extent that the imaginary results are accepted, it places obstacles in the way of real advances.

Furthermore, the lack of factual information is not nearly as acute as the astronomers depict it. It is true that the amount of information about individual phenomena is often quite limited, but this is not peculiar to astronomy. It is common to all areas of inquiry, and science has found ways of overcoming this handicap. For example, information in several areas may sometimes be pooled. The concept of "energy," which has played an important part in the development of physical theory, did not emerge from the study of any one individual area. It was derived by the process known as abstraction, involving the use of data from many such areas. It would have been equally possible for the astronomers to have abstracted the property of "extremely high density" from a number of different astronomical phenomena, and to have examined it in the light of the large amount of factual information thus collected. This might well have resulted in the discovery of the true cause of this high density before it was brought to light by the theory of the universe of motion. Such considerations are now no more than academic in application to astronomy, since it has been demonstrated in the preceding pages that the physical principles developed from the postulates that define the universe of motion are capable of dealing with the whole range of astronomical phenomena.
But one of the things that many scientists have envisioned is the eventual application of scientific methods and procedures to the solution of the problems of some of the non-scientific branches of thought that have long been mired down in confusion and contradiction. Before anything of this kind can be accomplished, it will obviously be necessary for the scientific profession itself to return to the traditional methods and procedures that are responsible for its record of achievement. The black holes, the quarks, the Big Bang, and similar fantasies are the products that are publicized as the fruits of scientific research in the media, and the ordinary individual cannot be expected to realize that the remarkable accomplishments of science over the past several centuries have not been made by such flights of fancy, but by a steady application of the traditional methods of science to one problem after another, testing each answer as it is obtained, and building up a solid and stable structure of theory brick by brick. If science is to be applied to economics, for example, it will have to be in this slow, careful, and painstaking way. Economics already has too many of the economic equivalents of the black hole.

CHAPTER 29

The Non-existent Universe

Chapter 28 completes the description of the new view of astronomical phenomena that we get from the development of the theory of the universe of motion, to the extent that this development has thus far been carried. Before beginning consideration of a different aspect of the physical universe in the final two chapters of this volume, it will be appropriate to take a second look at the universe that this new understanding replaces, the non-existent universe of the imaginative theorists that plays such a major role in present-day physics and astronomy. The non-existent entities and phenomena that make up this phantom universe have been discussed in detail in the earlier pages, but since this discussion has been distributed over three volumes, there would seem to be some merit in a recapitulation that brings the major astronomical items together, so that the connections between the various items can be recognized, and the almost incredible extent of this realm of fantasy that has grown up within the boundaries of the scientific disciplines can be fully appreciated.

Construction of this elaborate network of figments of the imagination would have been impossible in the prosaic and conservative science of Galileo and Newton, but when the progress of experimental and observational discovery carried empirical knowledge beyond the range of Newtonian theories, and thereby undermined their authority, Einstein was able to secure acceptance of his contention that his distinguished predecessors were wrong in believing that "the basic concepts and laws of physics . . . were derivable by abstraction, i.e., by a logical process, from experiments."292 General acquiescence in his dictum that "the axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention" opened the gates to a free and unrestrained exercise of the imagination. Accordingly, Bohr pioneered the idea of inventing new physical laws for application in those areas where problems were encountered in applying the established laws and principles, Einstein introduced the concept of flexible magnitudes, Heisenberg promulgated a principle of uncertainty to legitimize discrepancies, and soon the era of scientific invention was in full swing.

Now we are going to examine the structure of fantasy that has been erected by those who have taken advantage of this license to give free rein to the imagination under the banner of science, so that we can see just how far the universe of modern astronomy has diverged from the universe of physical reality. Although it is fictional, this imaginary universe has a logical structure. It is carefully reasoned from specified premises. But some of these premises involve departures from reality. These are assumptions, free inventions, in areas where the true facts were unknown, or not yet recognized, prior to the investigation reported in this work. With the aid of such assumptions to complete their foundations, the inventive astronomers have been able to build an elaborate structure of theory extending far beyond the limits of the real universe and into the land of fantasy.

As pointed out in Chapter 28, the retreat from reality is primarily due to the fact that little or no attempt has been made to subject the inventions, and the theoretical conclusions based upon them, to the standard tests of validity. Inasmuch as the ties that bind this structure of theory to the solid ground of observed and measured facts have been severed only at a few specific points, it is usually difficult to determine by examination of any one particular physical situation just how much is fact and how much is fiction. But we can establish a clear line of demarcation between the real and the fictional by identifying the points at which the false assumptions have been made, and following the lines of reasoning, based on these assumptions, that lead to the kind of non-existent entities, phenomena, and relations that populate the phantom universe of present-day science.

We will be concerned mainly with the astronomical fantasies, not only because astronomy is the primary subject matter of this present volume, but also because it deals with the physical extremes, and therefore has the effect of magnifying the departures from reality. It is here, in the astronomical field, that we find the black holes, the degenerate matter, the singularities, and other extravagances of fertile imaginations. But the initial points of departure from the real world are at a more fundamental level. The physicists are the ones that first strayed from the straight and narrow path. Astronomy has suffered the consequences.

Of course, the astronomers do not recognize the remarkable extent to which their discipline has taken on a fictional character, but at least some of them realize that there is little connection between their theoretical universe and what is actually observed. As Harwit puts it, there is "a gap between theorists and observers." He comments on the "remarkable detachment" between observation and theory, and goes on to say:

The astrophysical concepts that lead us to an understanding of cosmic phenomena have a history that is all but decoupled from the actual discovery of the phenomena . . . Theory and observation pursue their own somewhat separate ways, and the major cosmic phenomena continue to be discovered mostly by chance.236

It is also beginning to be recognized that this gap will eventually have to be closed by means of a reconstruction of basic theory. As noted elsewhere in this volume, there is a tendency in astronomical circles to expect this reconstruction to take place in the fundamental physical laws, rather than in astronomy itself.
As demonstrated in this work, a drastic revision of physical fundamentals is indeed required, but such a revision necessarily has significant repercussions on the superstructure that the astronomers have erected on the physical foundations that must now be rebuilt. At least some members of the astronomical community are beginning to recognize this point. For example, Geoffrey Burbidge, Director of the Kitt Peak National Observatory, made this comment in a recent (1983) interview:

My suspicion is that Chip Arp [Mount Wilson and Las Campanas Observatories] is right, and some of the main pillars of extragalactic astronomy are going to tumble down.300

After all, astronomy is merely large-scale physics, and the astronomers are in the awkward position of having to place the foundations of their theoretical structure in what Paul Davies (one of the most enthusiastic of the current generation of fantasy-constructors) describes as "the Alice-in-Wonderland world of the New Physics, a world alive with paradoxes, mysteries, and discontinuities."301 As might be expected in an Alice-in-Wonderland world, the retreat from reality starts at the very base of the theoretical structure. This can be seen in the following comparison:

1. In the imaginary universe: The fundamental constituents of the universe are elementary units of matter. In the real universe: There are no elementary units of matter.

The word "elementary" in this context means "irreducible." In earlier eras matter was regarded as elementary, in this sense, and since it was known to consist of discrete units, the existence of an elementary unit of matter was taken for granted. One of the major objectives of investigators in the physical field has been to identify the elementary unit. In the meantime, however, the discovery of processes whereby matter can be transformed into non-matter, and vice versa, has provided concrete proof that matter is not elementary. Since matter and radiation, for example, are interconvertible, they must necessarily be different forms of the same thing. And since matter cannot qualify as radiation, nor radiation as matter, it follows that neither can be elementary. Both must be forms of the elementary entity. Thus there are no elementary particles of matter in the real universe.
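The interconvertibility referred to here is quantitatively well established in conventional physics. As a worked example (standard textbook values, cited only to document the transformation itself, not any particular interpretation of it), the mutual destruction of an electron and a positron mentioned in the preceding chapter converts the entire rest mass of both particles into radiation:

$$E = 2\,m_e c^2 \approx 2 \times 0.511\ \text{MeV} = 1.022\ \text{MeV},$$

typically carried away as two photons of about 0.511 MeV each.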

2. In the imaginary universe: The elementary units of matter are quarks. In the real universe: There are no quarks.

Non-existent particles obviously cannot be found by the normal scientific process of discovery. They have to be invented. There seems to be a general impression that if the inventions are held to a minimum in any specific case, the development of thought is still scientific; that is, it continues to be a study of nature. But this view greatly underestimates the effect of a single deviation from reality. The original step into the phantom world may be relatively harmless. In itself, the issue as to whether or not there is an irreducible unit of matter has no significant effect on the general physical situation. But one false step leads to another, and soon the development of thought is far out of touch with reality. No invention can anticipate the results of future empirical discoveries. Consequently, the history of inventive theories is one of never-ending modifications and adjustments, usually moving farther and farther away from the original point of contact with empirical facts. The quark hypothesis is the end result (so far) of the effort to identify the non-existent elementary particle, or particles, of matter, and it carries this process to the point of absurdity. The quark is purely hypothetical. There is no actual evidence of the existence of anything of this kind. Indeed, one of the principal activities of "elementary particle physics" is dreaming up plausible reasons why such evidence cannot be found.

3. In the imaginary universe: The atom is constructed of particles that are made up of quarks. In the real universe: The atom is an integral unit that has no "parts."

The quarks are not the only postulated particles that the investigators cannot find in the real world. They cannot find the particles that are supposed to be constructed of quarks either. They confuse this issue by giving these imaginary particles, the hypothetical constituents of the atoms, the same names as observed particles such as electrons and neutrons. But calling different objects by the same name does not make them the same kind of objects. Regardless of what they are called, objects belong in the same category only if they have the same properties. The properties that have to be ascribed to the hypothetical sub-atomic particles in order to make it theoretically possible for them to be constituents of atoms differ widely from the properties of the observed particles that are called by the same names. Stability, for instance, is an essential property of any atomic constituent, including the hypothetical particle that is currently called a "neutron." The observed neutron is not stable. It lives only about 15 minutes. Similarly, the properties that the hypothetical atomic constituent currently called an "electron" must have in order to fit into its prescribed place in the atomic structure are quite different from those of the observed electron. We can deal with these imaginary electrons only on a statistical basis, and as Herbert Dingle points out, we can make these statistical methods effective "only by ascribing to the particles properties not possessed by any imaginable objects at all."302 Furthermore, as many leading theorists tell us, the atomic electron cannot be regarded as a "real" particle. It does not "exist objectively,"337 they say. The idea that the real world can be constructed of elementary units that are not real, that do not even "exist objectively," is the kind of absurdity that is characteristic of the Wonderland of the imaginary universe.

4. In the imaginary universe: The atom has a "nuclear" structure in which a positively charged nucleus containing most of the mass is surrounded by negatively charged electrons. In the real universe: The atom is a single integral unit, not a collection of parts. The experimental "nucleus" is actually the atom itself, and contains all of the mass.

Even though there are no "elementary" particles of matter, the "smallest" or "simplest" particles of matter can be identified, and if these small or simple particles had the properties that would qualify them as constituents of the larger particles, it would be in order to postulate that the larger particles are so constituted. But since we know that matter is not composed of elementary units of matter, there is no justification for assuming that the atoms must necessarily be constructed of smaller particles of matter. It follows that there is no reason why there must be atomic constituents.
This eliminates any grounds that may have existed for conjuring up imaginary constituents such as quarks, or for inventing modifications of known particles to make them suitable as building blocks. Since no real particles capable of meeting the requirements that apply to constituents of atoms can be found, the logical conclusion (the one that has been reached in this work from different premises) is that the atom is not constructed of subsidiary units. The prevailing concept of a "nuclear" structure is a hypothetical assemblage of imaginary particles; assumption piled upon assumption.

5. In the imaginary universe: Atomic behavior is governed by a set of laws differing in significant respects from the laws governing the behavior of macroscopic matter. In the real universe: The same physical laws are applicable everywhere.

The inventive theorists find it necessary to invent new laws (a) to account for the hypothetical behavior of the non-existent constituents of the atom, and (b) to account for the phenomena of the region inside unit distance, where the inversion that occurs at all unit levels (not yet recognized by conventional science) alters the manner in which the physical laws apply. Even with an unlimited license for making ad hoc assumptions, the builders of the imaginary universe have not been able to devise a set of laws for their atoms that is logical and self-consistent. In order to justify holding on to their concept of the nature of the atomic structure they have therefore advanced the strange contention that their atom has these incomprehensible characteristics because nature itself is illogical and inconsistent in the realm of the very small.

6. In the imaginary universe: At the atomic level the universe is illogical and incomprehensible. In the real universe: Phenomena at the atomic level have the same character as those at the macroscopic level.

The physicists' atom is not a real physical entity: The modern atom is "the solution of a wave equation, and nothing more."303 (E. N. da C. Andrade) It is "in a way, only a symbol."304 (Werner Heisenberg) The hypothetical electron constituent of the atom is an "abstract thing, no longer intuitable in terms of the familiar aspects of everyday experience."305 (Henry Margenau)

The theory of that atom (the quantum theory) is incomprehensible: "I think I can safely say that nobody understands quantum mechanics."306 (Richard Feynman) "An understanding of the 'first order' is . . . almost by definition, impossible for the world of atoms."307 (Werner Heisenberg)

As these statements from prominent scientists demonstrate, present-day science does not even pretend that its atom belongs to the world of reality. But it asks us to believe the preposterous assertion that the reality which admittedly does not exist at the atomic level is somehow acquired in the course of combining these phantom atoms into macroscopic structures. P. W. Bridgman states the case specifically in these words:

The world is not intrinsically reasonable or understandable; it acquires these properties in ever-increasing degree as we ascend from the realm of the very little to the realm of everyday things.308

This is utter nonsense, quite out of character for Bridgman, one of the keenest analysts that the scientific profession has produced. A real structure can be built of real bricks. An imaginary structure can be built of imaginary bricks. But a real structure cannot be built of these imaginary bricks. What Bridgman has described is not the world as it actually exists, but the physicists' understanding of that world. A real world can be built of real entities that the physicists do not understand. Bridgman has used the term "not understandable" where the correct term is "not understood." The practice of treating that which is not understood as not understandable is quite common, but obviously without justification. If this unwarranted extrapolation is removed from Bridgman's statement, it becomes something like this: The world is not fully understood. It is understood to an increasing degree as we ascend from the realm of the very little to the realm of everyday things.

Here we have a correct description of the situation as it stood prior to the development of the theory of the universe of motion described in this and the preceding volumes. The point that is being brought out in this present chapter is that, in the absence of an understanding of the phenomena of "the realm of the very little," the theorists have invented a universe that they can manipulate to produce imaginary solutions for whatever problems they may encounter.

Thus far in our examination of the framework of this non-existent universe we have been following the physicists' line of reasoning based on the assumption (now known to be contrary to fact) that the basic entities of the universe are elementary units of matter, a development of thought that arrives at an imaginary structure of the atom of matter. Next we will trace a similar line of reasoning based on a contrafactual assumption as to the nature of the energy generation process in the stars, and we will examine the fantastic features of the imaginary world that result from the merging of these two lines of thought.

7. In the imaginary universe: The light elements are the fuel for the energy generation in the stars. In the real universe: The heavy elements are the stellar fuel.

Like the nuclear atom, the hydrogen conversion process appeared plausible when it was first proposed. Direct observation of the energy production is not possible, but the assertion that the energy is produced by the only process then known that appeared capable of meeting the requirements seemed reasonable at that time. However, as soon as the astronomical consequences of the production of energy by this process were examined, it should have been clear that this is not the process that the stars utilize in the real world. A multitude of astronomical observations are in conflict with the consequences of this assumption.

8. In the imaginary universe: The hot, massive stars are young. The stars of the globular clusters are old. In the real universe: The hot, massive stars are the oldest stars of their respective generations. The stars of the globular clusters are relatively young.

The stellar age sequence in the imaginary universe of present-day astronomy is one of the direct consequences of the assumption as to the nature of the energy generation process, and it is a classic example of how an erroneous assumption in one limited area can have consequences of a far-reaching nature. So far as the energy generation process itself is concerned, the question as to which constituents of the star supply the energy is not a critical issue, as long as the energy source is adequate and controllable. But the indirect results of this error have been disastrous. The general acceptance of the hydrogen conversion process as the stellar energy source has seduced the astronomers into embracing an upside down view of the entire evolutionary process. If they had been presented with this entire package as a whole, and had realized that it was all dependent on an assumption as to the nature of an unobservable process, it is unlikely that this package would ever have been accepted. But here, as in so many other cases, most of the fictional components of theories are the results of extended lines of reasoning in which the crucial role of the erroneous basic assumptions tends to be obscured. Many astronomers are uneasy about this situation, and recognize that a fictional element has entered into astronomy somewhere. Maffei makes this comment:

We are now moving beyond those concepts and the knowledge familiar to us in the first half of this century, and we are entering a world in which science and fantasy intertwine.309

It is evident, however, that there is no general understanding of how far the current astronomical thinking has diverged from reality, or where the excursions into the land of fantasy have originated. Item number 8 is one of the major points of departure. Another consequence of the erroneous assumption as to the nature of the stellar energy generation process that has played a significant part in diverting astronomical theory into fantasyland is the conclusion that the stars eventually run out of fuel.

9. In the imaginary universe: The light element fuel supply of a star is eventually exhausted, and the star ultimately cools down to the temperature of interstellar space. In the real universe: The fuel supply is continually replenished by accretion of matter from the environment.

At this point the lines of development from the basic products of the imagination that we have identified thus far join to produce some further non-existent phenomena.

10. In the imaginary universe: "With its fuel gone it [the star] can no longer generate the pressure necessary to maintain itself against the crushing force of gravity."61 In the real universe: Gas pressure operates in all directions equally, downward as well as upward. The gravitational forces therefore remain the same regardless of the magnitude of the gas pressure.

The structure of matter at zero absolute temperature, where thermal forces are absent, arrives at an equilibrium condition, in which the gravitational force is counterbalanced by an opposing force that has not been identified by conventional science, other than as an "antagonist."26 There is no observational indication that this force is subject to any kind of a limit, and we now find that in the universe of motion no such limit exists. The "antagonist" is the force generated by the progression of the natural reference system relative to the conventional reference system, and it cannot be overcome by the gravitational force, however great that force may be.

11. In the imaginary universe: "The crushing force of gravity" acting against the interior atoms of the star, after the elimination of the gas pressure, collapses their structure. In the real universe: (a) Elimination of the gas pressure, if it occurred, would not increase the force acting on the central atoms. (b) The structure of the atom does not collapse under pressure.
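Point (a) can be illustrated with the standard equilibrium bookkeeping itself; the relation below is the conventional one, offered here only as a sketch. In hydrostatic equilibrium, the pressure at radius r inside a star of outer radius R is fixed by the weight of the overlying layers:

$$P(r) = \int_r^R \frac{G\,m(r')\,\rho(r')}{r'^2}\,dr',$$

where m(r') is the mass interior to radius r' and ρ is the local density. Temperature does not appear explicitly; it can influence the pressure only indirectly, through the density distribution, so the central material bears the full weight of the layers above it whether the star is hot or cold, which is the point restated in the discussion that follows.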

The "collapse" is an imaginary breakdown of the structure of the imaginary nuclear atom. In this hypothetical atomic structure the imaginary positively and negatively charged constituents are widely separated (on the atomic scale), leaving nothing but empty space in the greater part of the volume occupied by the atom. The collapse is presumed to eliminate most of this empty space, and bring the atomic constituents into contact. There is ample observational evidence to support the theoretical conclusion that such a collapse is impossible. The mere existence of stars that are 50 or 100 times as massive as the sun is positive proof that the inter-atomic equilibrium is able to withstand the greatest pressures of which we have any definite knowledge, those which exist at the center of such a star. The contention that this pressure is increased when, and if, the star cools because of the exhaustion of the fuel supply is pure nonsense. The matter in the center of the star is subject to the full pressure due to the weight of the overlying material regardless of whether that material is hot or cold.

12. In the imaginary universe: The collapse of the atomic structure converts the matter of the star into a strange hypothetical state called "degenerate matter." In the real universe: There is no degenerate matter.

In this connection, it should be realized that the "collapse" is not merely an assumption that has no observational support. It is an assumption that is specifically contradicted by the observed facts. As pointed out above, the existence of very massive stars is definite proof that the inter-atomic equilibrium is maintained under the greatest pressures that are known to be brought against it, immensely greater than the maximum pressures reached in the smaller stars, the ones that are presumed to collapse into the degenerate state. The truth is that the collapse is merely another addition to the chain of inventions. It is a mythical collapse of a hypothetical assemblage of imaginary particles. The degenerate matter is an imaginary product of that mythical collapse.

13. In the imaginary universe: The speed of light is an absolute limit on the speed of material objects. In the real universe: The speed of light is the limiting speed in one of the three scalar dimensions in which motion can take place.

Here, again, the product of the imagination is specifically contradicted by observation and measurement. As brought out in detail in Volume I of this work, and in other previous publications, the Doppler shifts of the quasars are direct speed measurements, and values exceeding 1.00 indicate speeds greater than that of light. The customary application of Einstein's relativity mathematics to reduce these speeds below the 1.00 level is an unwarranted use of a relationship developed for, and justified in, a totally different kind of a situation. In this case, what the erroneous assumption has done is the inverse of the results of the other basic errors that have been discussed. Those others opened the door to imaginative ideas having no connection with reality; that is, they resulted in the extension of physical and astronomical theory into areas that do not exist. General acceptance of the assumption of an absolute limit at the speed of light has prevented extension of the theory into some areas of the universe that actually do exist. It has blocked any investigation of the phenomena of the realm of the very fast, and has enabled the fantasies of the "degenerate matter" type to be taken seriously because they have had no competition.
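The two readings of a quasar redshift z contrasted here can be set side by side. In the conventional relativistic interpretation (the application that the foregoing paragraph calls unwarranted), the shift for radial recession is

$$1 + z = \sqrt{\frac{1+\beta}{1-\beta}}, \qquad \beta = \frac{v}{c},$$

so that any finite z, however large, corresponds to a speed below that of light. Read instead as a direct speed measurement, as stated above, z = v/c, and a measured shift greater than 1.00 is a speed greater than that of light.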

14. In the imaginary universe: The white dwarf is an aggregate of degenerate matter produced by the collapse of a star of small or moderate size. In the real universe: The white dwarf is one of the products of a supernova explosion. It is composed of ordinary matter that has been accelerated to speeds in excess of that of light, and is therefore expanding outward in time (equivalent to inward in space).

The white dwarf is an aggregate of ordinary matter produced from another aggregate of such matter (a star) by one of the processes to which ordinary matter is subject, and it has the properties of ordinary matter. Its only distinctive observable feature is the magnitude of one of these properties, its density. Conventional science has no explanation for densities in the range in which the white dwarf densities fall, because it accepts the dictum of the inventors of the imaginary universe that speeds greater than that of light (the speeds that are responsible for the high density) do not exist.

15. In the imaginary universe: The ordinary white dwarf eventually cools and becomes a black dwarf: a dead star. In the real universe: The white dwarfs lose energy to the environment. In the case of those produced by Type I or relatively small Type II supernovae, this energy loss eventually reverses the process that is responsible for the small size and high density of the white dwarfs, and expands them back into main sequence stars. There are no dead stars.

The black dwarf is purely hypothetical. There is no observational evidence that any such objects exist. Like so many other features of the non-existent universe of present-day astronomy, the black dwarf hypothesis survives only because the existing astronomical facilities are not capable of producing the physical evidence that would demonstrate that there are no such objects.

One of the problems that the astronomers have encountered in building their imaginary universe is that the consequences of some of their basic assumptions do not agree with the consequences of some of the others. The white dwarf is a case in point. It is the result of lines of reasoning based on the erroneous assumptions that have been identified in the foregoing paragraphs. But another assumption, likewise accepted by most astronomers, leads to a totally different result.

According to conventional physics we should expect stars at the ends of their lives to contract under their own gravity until their gravitational fields become so strong that light no longer escapes from them and they become invisible.310

The feature of conventional physics to which this statement refers is Einstein's assumption that gravitation is a distortion of space-time due to the presence of matter.

16. In the imaginary universe: Gravitation is a distortion of space-time and therefore acts within the atoms as well as between them. In the real universe: Gravitation is a motion of the individual units (atoms and sub-atomic particles) and therefore acts only between the units.

The space-distortion assumption of item 16 is another of the basic departures from reality that have taken the astronomers' perception of the universe into the land of fantasy. From the space distortion hypothesis the theorists have derived the concept of self-gravitation of the atom. It is assumed that application of sufficient external force brings matter to a critical point where this self-gravitation becomes effective. Beyond this point the atoms continue contracting by virtue of their own gravity. This process is quite different from the "collapse" envisioned in the theory that leads to the astronomers' conception of the white dwarf. Thus there are two competing theories in this area. To further complicate the situation, the results of observation do not agree with either of these theories. The statement quoted above as to the conclusions of "conventional physics" goes on to say: "in fact, we observe the reverse. Stars typically explode at a certain critical phase of their lives." Faced with this real-life observation, which could not be ignored, the astronomers have worked out a compromise between the observations and their two theories. As it happens, they have never been able to ascertain what stars explode, or why the explosions occur. In the absence of this information, the latitude for ad hoc assumptions is almost unlimited, and the theorists have been able to put enough of them together to construct an explanation that meets the current liberal standards of acceptability; that is, there is not enough information available to disprove it. It is assumed that, for some unspecified reason, large stars are unable to collapse quietly into white dwarfs in the manner of their smaller counterparts, and instead terminate their lives with explosions. Then it is further assumed that only the explosion products reach the self-gravitation stage.

17. In the imaginary universe: Stars that exceed a certain mass limit terminate their existence with explosive events that leave residues denser than the white dwarfs. In the real universe: Every star eventually reaches either a mass limit or an age limit, and explodes, producing a white dwarf, or its inverse equivalent, or both.

Presumably the hypothetical critical density is somewhat above that of the hypothetical degenerate matter. As one investigator in this field remarks, "precision is not possible, because we do not know enough about the properties of matter at the 'supernuclear densities' of a white dwarf." But according to the astronomers' theory, there must be a physical state intermediate between the white dwarf and the self-gravitating object. To meet this demand the theorists again call upon the remarkable property of the imaginary neutron, that of becoming stable whenever stability is required by a theory.

18. In the imaginary universe: The high density products of explosions of stars in the intermediate size range are neutron stars. They are observed as pulsars. In the real universe: The pulsars are fast-moving white dwarfs. There are no neutron stars.

The general impression today is that the status of the pulsars as neutron stars is an established fact, although as Martin Harwit admits in a statement quoted earlier, the astronomers "have no theories that satisfactorily explain just how a massive star collapses to become a neutron star."184 The problems involved in explaining the properties of the pulsars in terms of neutron stars are equally intractable. F. G. Smith, one of the leading investigators in the field, concedes, in another of the earlier references, that little is known about either the origin or the mechanism of the pulsars.183 Our development shows that the neutron star is a typical product of the imagination. The inability to define its properties is not surprising. The properties of non-existent entities are always difficult to define precisely. The pulsars are actually white dwarf stars produced by supernova explosions that are powerful enough to give some of their products speeds in the ultra high range. These result in outward translational motion, as well as the expansion into time that is characteristic of all white dwarfs.

19. In the imaginary universe: The terminal events in the lives of the largest stars produce compact objects whose density is above the critical level. These are black holes. In the real universe: There are no limits on the size of white dwarfs, other than those that apply to all stars. There are no black holes.

"Of all the conceptions of the human mind from unicorns to gargoyles to the hydrogen bomb perhaps the most fantastic is the black hole . . . Like the unicorn and the gargoyle, the black hole seems much more at home in science fiction or in ancient myth than in the real universe."201 This comment by K. S. Thorne, one of the enthusiastic searchers for evidence of these "fantastic" phenomena, is an eminently correct assessment of the situation. This author goes on to assert that, "Nevertheless, the laws of modern physics virtually demand that black holes exist." This, too, is true, but only because the particular "laws of modern physics" to which he refers are not the laws of the solid and stable areas of physics. They are the laws of the phantom universe. Without the self-gravitation concept, the theorists have no way of producing the extreme densities of the black holes. But once they invoke the aid of this concept they have no way of stopping it. Indeed, it must accelerate. The same imaginary process that accounts for the existence of black holes in the imaginary universe therefore limits these entities to no more than a transient existence. The black hole contracts to a point.

20. In the imaginary universe: There is no limit to the process of contraction by self-gravitation. It therefore continues until the entire star has shrunk to a mere point: a singularity. In the real universe: There are no singularities.

One of the recognized principles of logic, the branch of thought upon which scientific procedure is organized, is the reductio ad absurdum, in which the falsity of a proposition is established by demonstrating that a logical development of its consequences leads to an absurdity (stated formally below). The singularity is an absurdity. It is totally foreign to all that we actually know about the physical universe. It therefore follows that there is an error somewhere in the line of thought that produced this absurd result. The findings of this present investigation have now identified many such errors, but even without this new information it should be clear that every assumption in the lines of thought leading to the singularity is open to doubt until the situation is clarified. The general assumption that the existence of black holes is at least quasi-permanent is, in effect, a denial of the validity of the singularity hypothesis. But those who have so much to say about the extraordinary properties of black holes are silent on the question as to why, or how, the contraction process should stop at this black hole stage. Such details, it seems, are unimportant in a universe of the imagination.
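The reductio principle appealed to above can be stated compactly. As a minimal formal rendering (standard natural-deduction notation, supplied here for reference rather than taken from the text):

$$ \frac{\Gamma,\; P \,\vdash\, \bot}{\Gamma \,\vdash\, \neg P} $$

That is, if adjoining a proposition P to a body of accepted premises Γ permits the derivation of an absurdity (⊥), then P must be rejected. The argument of the text applies this schema with the singularity as the absurdity and the assumptions leading to it as the propositions under test.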

21. In the imaginary universe: The existing physical universe originated in a gigantic explosion: the Big Bang. In the real universe: There was no Big Bang. The information now available does not indicate how the universe originated, or whether it had an origin.

In the singularity hypothesis the observed limits of gravitational contraction are ignored, and this concept is carried to the point of absurdity. In the Big Bang hypothesis the same treatment is accorded to the concentration of energy. We find from observation that the greatest concentration of energy (matter and the motion of matter) in the material sector of the universe is in a giant spheroidal galaxy containing somewhere in the neighborhood of 10¹² stars, and we have reasons to believe, even without the positive information derived from the theory of the universe of motion, that this is a limiting concentration imposed by natural laws. The Big Bang theory ignores this limitation, and again the result is an absurdity: a hypothetical event whose antecedents are completely unknown, whose mechanism cannot be explained, and whose results, as we will see in Chapter 30, do not agree with what we actually observe.

A comparison of the Big Bang theory (which describes the theoretical results of an extremely large explosion) with the astronomers' theory of the origin of black holes in supernova events (which describes the theoretical results of large explosions) provides a good illustration of the inconsistencies so prevalent in the imaginary universe. In their study of the ultimate fate of large stars, the theorists have produced a hypothesis, based on the concept of self-gravitation derived from Einstein's theories, that specifies the results of a supernova explosion. If the same hypothesis is applied to the Big Bang explosion, the result of the Big Bang will be an immense black hole, or singularity, surrounded by a relatively small amount of material expanding in space. This obviously is not the universe that we observe, so the astronomers simply repudiate Einstein and his gravitational theories, so far as their application to the Big Bang is concerned, and invent another, very different, theory for this special situation.

This concludes the description of the principal features of the imaginary universe that modern theorists have constructed to explain the phenomena that they have not been able to bring within the bounds of the current understanding of the universe of physical reality. It is not feasible to examine the immense amount of detail into which the development of this imaginary universe has been carried: the elaborate computer-designed fictitious evolutionary paths of the stars, for instance, or the remarkably detailed (but somewhat discordant) accounts of what happens in the first few seconds after the hypothetical Big Bang, the comprehensive description of the insides of the imaginary black holes, and so on. But the points that have been covered in the preceding pages should be sufficient to indicate the extent of this imaginary universe, and the major part that it plays in present-day physics and astronomy. It should also be noted that this description is limited to those items with which most astronomers agree, as matters now stand. The imaginations of the theorists are by no means restricted to the areas that have been covered here. A host of books and articles are currently explaining in great detail the hypothetical properties of other non-existent entities and processes.
"Holes" are the current fad, and new kinds are appearing in profusion. Some are merely variations of the plain black hole (mini black holes, superholes, rotating black holes, expanding black holes, etc.), while others step out boldly with new concepts: white holes, for example, or even "wormholes." The hypothetical conditions existing in the first minutes after the imaginary Big Bang are likewise high fashion at the moment, and are being called upon to provide explanations for the formation of galaxies, the origin of the background radiation, the production of those chemical elements that are not otherwise accounted for, and a variety of other items.

This is indeed a happy time for the theorists. They live in an era in which the universe of the imagination is the prevailing orthodoxy, and they are provided with a fertile field in which to work, one in which there are only a bare minimum of those inconvenient observed or measured facts that have been the downfall of so many of the cherished products of their less fortunate predecessors. The case in favor of the most typical features of the imaginary universe, such items as degenerate matter or singularities, is entirely negative; that is, it rests on the absence of any observational evidence that specifically disproves these hypotheses. Thus the farther one of these products of the imagination departs from reality, the easier it is to meet the requirements for acceptance by the scientific community.

One of the strangest features of the whole situation is that while the theorists are letting their imaginations run wild, and indulging in speculations of the most fantastic character, all in the name of science, they are religiously observing a taboo that prevents them from investigating the one hitherto unexplored area of the real universe in which the answers to many of their problems can be found: the region of speeds greater than that of light. There is nothing irrational or illogical about such speeds. Indeed, up to the beginning of the present century there was no suggestion that there might be any inherent limitation on speed. But Einstein has laid down an interdict that prohibits the exploration of the consequences of motion at speeds greater than that of light, and since a challenge to this ukase is unthinkable in the present-day scientific community, the astronomers are barred from even speculating about the immense field of physical existence at speeds greater than that of light, the field to which the entire latter half of this present volume has been devoted. Current physical and astronomical theory stops dead at the speed of light. Inductive reasoning, or exercise of the imagination, beyond this point is, in effect, prohibited.

The construction of the astronomers' imaginary universe has been a gigantic task because of the never-ending revisions, re-adjustments, and corrections that have been required by the new information continually being produced by the work of the observers and experimenters. Those who have participated in the undertaking are very proud of what has been accomplished, and those who are now chronicling their endeavors characterize them in superlatives, such as the following from Paul Davies, referring specifically to the elucidation of the hypothetical details of the epoch immediately following the imaginary Big Bang:

The study of this violent primeval epoch must rank as one of the most exciting intellectual adventures of modern science.311

No doubt this task has been exciting for those who have been engaged in carrying it out, and in this sense it is an "adventure," but the primary aim of science is to increase our knowledge of nature, and from a scientific standpoint the psychological reactions of the investigators are irrelevant. The only legitimate scientific criterion by which the feats of the imagination involved in constructing the imaginary universe can be judged is whether or not they have, in fact, added to our knowledge of nature. They certainly have not done so directly, since false information is not an addition to knowledge. Perhaps these excursions into the land of fantasy may have stimulated some thinking along lines that eventually produced some items of real knowledge. However, it is more probable that the net result of the effort expended on the investigation of the properties of non-existent entities and phenomena has been to obstruct the advance of knowledge, rather than to facilitate it. As pointed out in the discussion of this subject in The Neglected Facts of Science, "It would appear that the main purpose served by inventing a theory is to enable the scientific community to avoid the painful necessity of admitting that they have no answer to an important problem."312

In any event, there is no longer any need for a science fiction approach to astronomy. The development of the theory of the universe of motion has provided a solid foundation of positive knowledge and a comprehensive theoretical framework that enables fitting all of the observed phenomena into their proper places in the grand design.

CHAPTER 30

Cosmology

Long before the first records of human activity were scratched on rocks or indented into clay, the more thoughtful members of the human race were already wondering about the origin of the world in which they found themselves living, and about its ultimate fate. We know this to be true because these first records indicate that the thinking about such matters had already reached a rather high level of sophistication. That early thinking was, of course, purely speculative; the connection between the premises on which it was based and the conclusions that were reached was too nebulous to justify calling it inductive reasoning. Furthermore, these speculative ideas relied almost entirely on supernatural processes, and they were essentially religious in character. In the course of time, as various fields of thought split off from religion, and secular branches of knowledge were originated, the questions as to the origin and fate of the universe came to be accepted as philosophical issues. Such subjects as cosmology and cosmogony were therefore defined, until quite recently, as subdivisions of philosophy. Within the present century, however, some physical phenomena have been discovered that are believed to have a bearing on these issues, and as a result, most of the theoretical activity in this area is now carried on in scientific terms, and even though it is just about as speculative as ever, it is regarded as scientific. As expressed by Hermann Bondi, "Nowadays we regard cosmology as a branch of science, or to be more precise, a branch of astronomy."313

Bondi defines cosmology as "the field of thought that deals with the structure and history of the universe as a whole." An astronomy textbook gives this somewhat more explicit definition: "Cosmology is concerned with the nature and origin of the entire universe: its structure today, its past, and its future."314 The scope of the subject, as thus defined, is greatly extended beyond the earlier objectives. We may, indeed, regard the modern additions to cosmology as a separate field of knowledge. This is the view taken by the Encyclopedia Britannica, which places cosmology under two separate headings: "Cosmology, in astronomy" and "Cosmology, philosophical." In this work the subject will be divided in essentially the same way. This chapter will examine the aspects of astronomy that are generally classed as cosmological, and Chapter 31 will then take up a consideration of the implications of our physical and astronomical findings on questions of a more philosophical nature.

Present-day cosmological theories can be described as variations of two themes. Ever since Hubble's discovery of the recession of the distant galaxies, accounting for this recession has been regarded as the number one requirement of such a theory. The current favorite, the Big Bang theory, assumes that an enormous explosion at some time in the remote past hurled the entire contents of the universe out into space at the tremendous speeds now observed. One variation of this theory sees the expansion as continuing indefinitely, and the ultimate fate of the universe as a condition in which its constituent parts are separated by distances too great for any interaction. An alternative view is that the expansion will ultimately reach a limit, and will be succeeded by a contraction that will terminate with another Big Bang, the cycle being repeated indefinitely. These theories based on a Big Bang are evolutionary in character. They depict the universe as undergoing a continual change from an initial to a final state, with or without a reversal, depending on the particular version of the theory.

The Steady State theories, the only alternatives to the Big Bang that have been taken very seriously, portray the universe as unchanging in its general aspects. In fact, one approach to this type of theory bases it on a "Perfect Cosmological Principle," which asserts that this uniformity is a fundamental principle of nature. In order to maintain the uniformity, the steady state concept, in its present form, requires the continual creation of new matter from which new galaxies can be formed to fill the spaces left vacant by the outward movement of the previously existing galaxies.

The fortunes of these rival theories have fluctuated as new observational discoveries have posed difficulties for one or the other of them, and as revisions of the theories have been made to accommodate them to the new information. As matters now stand, the steady state type of theory is at a low ebb. It has for years been contending against observational data which are asserted to indicate that there are more faint radio sources at great distances than would be found under steady state conditions. In 1965 it received another blow when an isotropic background radiation was discovered and attributed to the remnants of the Big Bang.
The present tendency on the part of the astronomers is to conclude that the Steady State theories are "almost certainly excluded by two independent sets of facts,"315 and to accept the Big Bang theory as having been established by default, there being no other contenders.
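For orientation, the observational relation that both rival theories set out to explain is Hubble's law, which can be stated minimally as follows (the numerical range quoted for the constant is typical of values discussed at the time of writing, and is supplied here for illustration, not taken from the text):

$$ v = H_0\, d, \qquad z \approx \frac{v}{c} \quad (z \ll 1) $$

where v is the recession speed attributed to a galaxy at distance d, and estimates of the Hubble constant H₀ have generally ranged from about 50 to 100 km/s per megaparsec. On this relation a galaxy at 100 megaparsecs recedes at roughly 5,000 to 10,000 km/s. Everything beyond this bare regularity is, on the argument of the text, theoretical superstructure.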

In view of the very limited amount of factual data available in this area, and the open questions as to the relevance of these data to the points at issue, the near unanimity of astronomical opinion is clearly a bandwagon effect. As J. N. Bahcall pointed out in a recent (1971) article, "We frequently settle important scientific issues by acclamation rather than observation."316 The general acceptance of the Big Bang theory is a prize example of this wholly unscientific practice. A few words of caution are being heard. For instance, Bernard Lovell had this to say:

No one acquainted with the contortions of theoretical astrophysicists in the attempt to interpret the successive observations of the past few decades would exhibit great confidence that the solution in favour of the hot big bang would be the final pronouncement in cosmology.317

Fred Hoyle states the case more bluntly. He tells us, "I have little hesitation in saying that a sickly pall now hangs over the big-bang theory."

One of the problems involved in making a critical examination of invented theories is that they are generally vague enough to leave room for differences of opinion on major details, often on vital details. Current scientific literature is full of references to different "interpretations" of various theories of this type. The Big Bang cosmological theory is no exception. In fact, the differences between the interpretations of this theory are so extreme that these interpretations actually constitute different theories rather than different versions of the same theory. For this reason, the comments and criticisms that apply to one are not necessarily applicable to another. To cope with this situation we will first consider the original form of the theory, in which a highly concentrated aggregate of matter "explodes and ejects the galaxies in all directions."318 Subsequently we will give some attention to the more recent interpretations. The principal objections to the original Big Bang theory, as seen in the context of conventional astronomical thought, without taking into account the new information derived from the theory of the universe of motion, which will be considered later, can be summarized as follows:

1. The Big Bang is pure assumption. There are no physical principles from which it can be deduced that all of the matter in the universe would ever gather together in one location, or from which it can be deduced that an explosion would occur if the theoretical aggregation did take place.

2. Theorists have great difficulty in constructing any self-consistent account of the conditions existing at the time of the hypothetical Big Bang. Attempts at mathematical treatment usually lead to concentration of the entire mass of the universe at a point. "The central thesis of Big Bang cosmology," says Joseph Silk, "is that about 20 billion years ago, any two points in the observable universe were arbitrarily close together. The density of matter at this moment was infinite."319 This concept of infinite density is not scientific. It is an idea from the realm of the supernatural, as most scientists realize when they meet infinities in other physical contexts. Richard Feynman puts it in this manner: "If we get infinity [when we calculate] how can we ever say that this agrees with nature."235 This point alone is enough to invalidate the Big Bang theory in all of its various forms.
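The infinity objected to in item 2 can be exhibited in the standard formalism itself. In the matter-dominated Friedmann model (a textbook result, cited here for reference rather than derived in the text), distances scale with a factor a(t) and the mean density varies as its inverse cube:

$$ a(t) \propto t^{2/3}, \qquad \rho(t) = \rho_0 \left( \frac{a_0}{a(t)} \right)^{3} \longrightarrow \infty \quad \text{as } t \to 0 $$

Followed back to t = 0, the model thus delivers the infinite density that Silk describes, which is precisely the feature that the foregoing objection identifies as an intrusion of the non-scientific.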

3. The scale of the magnitudes involved is far out of line with experience, or even any reasonable extrapolation from experience.

4. As noted in Chapter 29, the results attributed to the Big Bang are inconsistent with the physical and astronomical theories currently employed in application to supernova explosions.

5. It is difficult, if not impossible, to account for the isotropy of the observed universe on the basis of the Big Bang hypothesis. As expressed by Dennis Sciama, this is "a headache to the astrophysicist."320 This problem is particularly acute in reference to the background radiation that is currently supposed to provide the best support for the theory.

6. The problem of the formation of the galaxies has never been solved in the context of this theory. "Moreover," says W. H. McCrea, "those who have explored it most fully seem to be the ones who are most convinced that almost no progress has been made."321 H. L. Shipman concedes that this is a significant point. "Since galaxies exist, it is embarrassing that we can't make galaxies in a hot, Big Bang cosmology."322

7. The theory provides no explanation for a large number of physical phenomena that are directly connected with the evolution of the hypothetical explosion products.

8. Because of this lack of tie-in with observational information, the number of deductions that can be made from the theory is very limited. This minimizes the possibility of conflict with observation, and gives the impression that there are few criticisms that can be leveled against the theory from the observational standpoint. In reality, however, what this means is that the theory cannot be tested.

This is a devastating list of criticisms to be leveled against one of the most highly publicized elements of present-day astronomical thought. Most astronomers are reluctant to subject the currently favored hypotheses in their field to critical scrutiny, but it is obvious that these objections to the Big Bang demolish most of the arguments advanced in favor of that theory in its original form. A large segment of the astronomical community has therefore abandoned the original concept, and has substituted other, very different, ideas, retaining only the Big Bang name. We now find many assertions such as the following in the astronomical literature:

Many people (including some scientists) think of the recession of the galaxies as due to the explosion of a lump of matter into a pre-existing void, with the galaxies as fragments rushing through space. This is quite wrong . . . the expanding universe is not the motion of the galaxies through space, away from some center, but is the steady expansion of space.323 (Paul Davies)

This conceptual change eliminates some of the serious objections to the original Big Bang hypothesis, but what does not seem to be realized by its proponents is that it also eliminates the explanatory character of that hypothesis. The original Big Bang is based on an analogy with observed explosions. Matter, we know, has an internal energy content that, under appropriate circumstances, can be released explosively. The Big Bang is assumed to accomplish such a release on a gigantic scale. But this explosive process propels matter through space, the effect that Davies specifically repudiates. In order to produce "steady expansion of space" explosively it would be necessary to have either a means of applying the energy of matter to space, something that is totally foreign to physical science as we know it, or a source of energy in space itself, something of which there is no indication whatever. Consequently, there is neither observational nor theoretical justification for the assumption that the concept of an explosion is applicable to space. Thus the new version of the Big Bang expressed by Davies eliminates the "bang." In fact, it eliminates all explanatory content from the hypothesis, and reduces it to nothing more than a restatement of the observational situation. It merely asserts that the space between galaxies is continually increasing.

Another alternative to the original hypothesis calls for replacing the Big Bang with a multitude of little bangs.

The theory seems to call for enormous numbers of small bangs . . . all essentially simultaneous, close together, and nearly identical.324 (Lyman Spitzer, Jr.)

This suggestion avoids the fatal weakness of the space expansion version of the Big Bang described by Davies, but only at the expense of introducing many other problems, such as the question as to how the explosions are synchronized, the exacerbation of the isotropy problem, etc. Consequently, the little bang hypothesis has received little attention thus far. The principal significance of the present-day swing away from the original Big Bang concept in all but name is that it demonstrates a recognition on the part of those who are supporting the revised hypotheses that the objections to the original Big Bang are insurmountable.

An examination of the astronomers' Steady State theory, again without considering the new knowledge made available by the development reported in this work, discloses the following major objections:

1. In this theory the expansion is a pure assumption. No mechanism for accomplishing it is provided.

2. The theory requires the continuous creation of matter, which conflicts with the conservation laws. Like the concept of infinite magnitudes, this is a resort to the supernatural.

3. The theory has no explanation for the formation of galaxies, a key factor in the events that this theory purports to explain.

4. The theory has no explanation for the observed background radiation (aside from a suggestion by Fred Hoyle325 that approximates what we now find to be the true explanation, but was not taken seriously).

5. In this theory the oldest galaxies are removed from the system by "disappearing beyond the time horizon" to maintain the unchanging galactic composition. This hypothesis breaks down when the galaxy from which the universe is being observed becomes the oldest within the observational limits. Thereafter the age of the oldest galaxy within these limits continually increases, violating the basic premise of the theory.

6. The theory provides no explanation for a large number of physical phenomena that are directly connected with the evolutionary pattern that it predicts.

7. Because of this lack of detail, it is untestable.

A critical examination of this "theory" quickly shows that it is not a theory, nor even a hypothesis. It is merely an unelaborated idea, the idea that is contained in what is known as the Perfect Cosmological Principle. Most astronomers accept, at least on a tentative basis, the Cosmological Principle, which asserts that the universe appears the same, aside from small scale irregularities, from all locations in space. The Perfect Cosmological Principle extends this idea to include the assertion that it likewise appears the same from all locations in time. This extension has considerable appeal on broad philosophical grounds, but in order to give it the status of a cosmological hypothesis that can be subjected to scientific tests of its validity, it is necessary to identify and postulate mechanisms whereby the uniformity that is called for can be maintained. There are four major requirements: (1) a source of raw material for the formation of new galaxies, (2) a mechanism for accomplishing this formation, (3) a mechanism for implementing the galactic recession, and (4) a means of removing the over-age galaxies from the system. The Steady State "theory" proposed by a group of astronomers does not come anywhere near providing these details that would convert it from a mere idea into a testable hypothesis. Its protagonists have suggested a continuous process of creation as the source of the new matter, and have offered a process of disappearance over the time horizon as an answer to the problem of removing the over-age galaxies. The latter, as already noted, is unacceptable. No attempt has been made to account for the formation of galaxies, or for the observed recession, in the context of the theory.

The Big Bang is a full-fledged hypothesis, not merely an idea like its competitor, but if cosmology is to deal with the universe as a whole, as indicated by the definitions quoted earlier, it is not a theory of cosmology. It deals only with the origin of the universe and with the galactic recession, aside from a misapplication of the second law of thermodynamics, and says nothing at all about the large number and variety of phenomena that constitute the activities of the universe as a whole. Calling it a cosmological theory is equivalent to asserting that the galactic recession is the only thing of any significance that occurs in the universe subsequent to its origin.

It should be evident that, even on the basis of previously available observational information, without the benefit of the new knowledge contributed by the theory of the universe of motion, neither of these present-day cosmological theories is anywhere near tenable in its present form. The only justification for giving either of them any consideration at all is the rather tenuous possibility that a continuing effort to overcome, or at least minimize, their many shortcomings might eventually result in the construction of a viable theory by a process of modification. But the case for these theories is not currently being argued on these grounds. What we are being told is that there is no alternative.

When astronomers express dissatisfaction with both the Big Bang and the Steady State concepts of the universe, they are in trouble, because it is hard to imagine radical alternatives.326 (Nigel Calder)

On the next page of his book, however, the author makes a statement that illustrates where the trouble lies: why alternatives to these untenable theories are so hard to find. "The only way anyone has thought of to avoid this conclusion [that the contents of the universe were formerly much more closely crowded together than they are now]," he says, "is to suppose that . . . less matter existed in the universe than does now." Here again we meet the ubiquitous "only way" argument. As in so many similar cases examined in the earlier pages of this and the preceding volumes, the so-called "only way" has that status only if it is assumed that the relevant portions of currently accepted physical and astronomical theory are correct in all respects. This is a totally unwarranted assumption. Any impasse such as that which exists in this case calls for a critical examination of the premises on which the accepted view of the situation is based. The long list of cases in which the investigation reported in this work uncovered new alternatives where it had been generally accepted, on the basis of assurances from Einstein and other leading scientists, that no such alternatives existed, is a graphic illustration of the need for a more critical examination of the foundations on which the current ideas rest.

What makes alternatives to the existing ideas so difficult to find is that a totally new view of some essential element in the situation is usually required before the alternative possibilities can be recognized. It is quite unlikely that the author of this work would have been able to identify all of the many previously unrecognized alternatives that have provided the answers to longstanding problems discussed in these volumes if he had not had the benefit of a general physical theory that enabled him to arrive at these alternatives by a straightforward process of deduction. The cosmologists have been at a disadvantage, in that they have not had any assistance of this kind. The Big Bang and Steady State theories are the only alternatives that they have been able to see in the context of the current physical and astronomical theories, and they have not explored the possibility that these theories might be wrong. Their inability to see the true picture is understandable, but this does not make their conclusions any more acceptable. As this work has demonstrated, astronomy has not yet produced enough data on which to build a tenable cosmological theory, and there is no indication that it is likely to do so in the foreseeable future.

The present data in cosmology are still limited, ambiguous, and fragmentary, and they all depend on complex instruments stretched right to the limits of their sensitivity and performance.327 (Martin Rees)

A significant feature of this situation is that the spectacular increase in the scope and quantity of observational information in the astronomical field in the last few decades has not resulted in any significant progress toward an understanding of the cosmological problem. The case in favor of any cosmological theory is still being argued mainly on the basis of the shortcomings of the alternatives. Each step forward from the observational standpoint seems to introduce new difficulties. This accumulation of unsolved problems is a clear indication of the need for new ideas. In his book, The Structure of Scientific Revolutions, Thomas Kuhn points out that the need for a new and better theory is generally indicated by "a state of growing crisis."

The emergence of new theories is generally preceded by a period of pronounced professional insecurity. As one might expect, that insecurity is generated by the persistent failure of the puzzles of normal science to come out as they should. Failure of existing rules is the prelude to the search for new ones.328

The existence of such a crisis in astronomy and cosmology is revealed by the current reactions to the inability of accepted theory to deal with the many problems now confronting these disciplines. More and more scientists are coming to realize that some basic changes in the existing structure of theory will be required. Typical of the comments now being made in increasing numbers are the following:

In some places, too, the extraordinary thought begins to emerge that the concepts of physical science as we appreciate them today in all their complexity may be quite inadequate to provide a scientific description of the ultimate state of the universe.329 (Bernard Lovell)

It [radio astronomy] is at present producing more and more data that cast more and more doubt on the big bang and other evolutionary cosmologies, and it will probably continue to do so until someone is able to propose an entirely new approach to cosmology; for example, proposing a new physical law whose consequences can be tested by astronomers.330 (G. Verschuur)

Clearly, the physics of radio galaxies and quasars, the nature of the red shift, and perhaps fundamental physics itself are being questioned by these measurements [recent radio observations].331 (K. I. Kellerman)

Astronomers are looking more and more toward a revision of physical theory as an answer to their currently outstanding problems. In the statements quoted above, Lovell suggests that the concepts of physical science may be inadequate; Kellerman says that fundamental physics is subject to question; and Verschuur predicts that a new physical law will be required. The physicists do not offer much resistance to these conclusions. They have problems of their own that are equally as recalcitrant as those that baffle the astronomers, and they realize that their theories are in need of some overhauling. Feynman, for instance, tells us that "All the principles that are known are inconsistent with each other, so something has to be removed."332 He defines the problem in these terms: "We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere . . . and in that disagreement it must agree with nature."294 As Feynman concedes, this is an "extremely difficult" assignment.

The irony of the situation is that the greater part of the difficulty is not inherent in the problem; it is gratuitously introduced by the investigators themselves. Feynman's statements show just where the trouble lies. When he says that the "new view . . . has to agree with everything that is known," he is using the word "known" in the sense of "positively established." This is the only sense in which the statement is valid. But when he says that "the principles that are known are inconsistent with each other," he is using the word "known" in the sense of "currently accepted." The practice of elevating the popular opinion of the moment to the status of established truth is the root of the present difficulty. It not only stands in the way of finding the answers to unsolved problems, but also prevents recognition of those answers if and when they are obtained in spite of all obstacles.

Replacement of an erroneous theory of long standing is difficult enough without this unnecessary handicap, as scientists, like their counterparts in other fields of human activity, are reluctant to change ideas to which they are accustomed. In principle, new ideas are welcome, but in practice those that disturb previous lines of thought encounter an atmosphere of hostility. The following comment by Geoffrey Burbidge, reported in a news item, describes the existing situation:

As is always the case when scientific questions are really fundamental, new ideas which, if they prevail, will overturn the old ones, are resisted by all means, in the name of science, but by any means that come to hand.333

The theory derived from the postulates that define the universe of motion, and presented in this work, encounters this antagonism in full force when it is extended into the astronomical field, because it conflicts with many cherished ideas, some of very long standing. The astronomers should realize, however, that when they reach the point where they have to hoist the distress signal, and call for help by way of a "drastic revision" of physical theory, they must expect some similar major changes in astronomical theory. The changes required by the theory of the universe of motion are far-reaching, to be sure, but nothing less will serve the purpose.

While the case in favor of this new theory is affirmative (that is, it is demonstrated in the preceding pages that the physical universe does, in fact, conform to the principles and relations derived from the postulates of the theory), the new findings that have emerged from the development of the consequences of the postulates have added still further dimensions to the case against both of the astronomers' cosmological theories. For example, the finding that matter is subject to a destructive temperature limit precludes the existence of a concentration of matter such as that assumed in the Big Bang hypothesis. Likewise, the finding that the net motion of the galaxies is inward within the gravitational limits, and outward in two dimensions beyond 1.00 redshift, rather than always outward in three dimensions, invalidates the recession explanation in all versions of the Big Bang. These examples could be multiplied manifold.

The universe of motion described in this work is a universe of the steady state type. It conforms to the Perfect Cosmological Principle on which the astronomers' Steady State theory is based; that is, the large-scale features of the universe are unchanging, both in space and in time. But it is also evolutionary, differing from the Big Bang theory in that the evolution is a continuing process: a cyclic evolution rather than a linear evolution. This cyclic feature eliminates the need for continuous creation of matter, one of the principal objections to the astronomers' Steady State theory, while it also negates the prediction of a cold and lifeless ultimate state of the universe, a feature of the original Big Bang theory that is philosophically distasteful to many scientists. Thus the cosmological aspects of the theory of the universe of motion combine the more desirable features of the astronomers' cosmological theories, while avoiding the most objectionable aspects of each.
Unlike its predecessors, which, as noted earlier, are limited to providing explanatory hypotheses for only a few of the cosmological aspects of the universe, the results of the theoretical development now being described constitute a comprehensive cosmological theory in which the evolutionary development of the constituents of the universe (atoms, molecules, stars, galaxies, etc.) is an integral part of the cosmological process. This understanding derived from the theory of the universe of motion participates in the proof of the validity of the theory as a whole that is accomplished by the application of the probability relations. It may, however, be of interest to supplement this proof by a summary of the items that are relevant to the validity issue. Most of the content of such a summary can be expressed by the statement that none of the objections against either the Big Bang or the Steady State theory identified in the preceding pages is applicable to this cyclic theory. The following additional points should be noted:

1. No ad hoc assumptions are employed. All conclusions are derived deductively from the postulates that define the universe of motion.

2. The expansion of the material sector of the universe, as indicated by the recession of the distant galaxies, is a direct consequence of these postulates.

3. The high degree of isotropy of the matter in the universe is a result of the fact that the matter entering from the cosmic sector is distributed in space in accordance with probability considerations.

4. The background radiation currently attributed to the remnants of the Big Bang is the cosmic equivalent of starlight and other observed radiation of the material sector. It is isotropic because it is emitted by cosmic sector matter that is aggregated in time but dispersed in space.

5. The formation of stars, star clusters, and galaxies is a logical and natural part of the aggregation process deduced theoretically.

6. No creation of matter is required.

7. No special scheme for getting rid of the mature galaxies is necessary. The existing matter moves in a closed system.

8. The cosmological theory is a part of a general physical theory, applicable to all physical phenomena. There are innumerable opportunities to test its validity by correlation with observation.

This item number 8 is the key element in the whole situation. As Martin Rees pointed out in a statement quoted earlier, the serious handicap under which present-day cosmology labors is the lack of an adequate supply of relevant and reliable data. Without a solid base from which to work, no refinement of the reasoning process will enable reaching correct conclusions. Irwin Shapiro makes this comment:

All chains of reasoning in cosmology are elastic. Almost any observation interpreted to support one conclusion can, in the hands of a moderately adroit theoretician, be reinterpreted to support the opposite.334

The availability of a general theory of the physical universe now supplies the solid theoretical foundation that has been lacking, not only in cosmology, but in astronomy as well. This fully integrated theoretical structure, applicable to the entire range of physical phenomena throughout the universe, enables formulating the general physical principles from purely theoretical premises, and verifying them in areas that are readily accessible to observation. We are then able to apply them with confidence to fields such as cosmology, where the information from observation is meager, or, in many cases, non-existent.

CHAPTER 31

Implications A scientific theory, such as the one described in the several volumes of this work, the theory of the universe of motion, consists of a set of assumptions that define the theory, together with the consequences of these assumptions, developed by applying logical and mathematical processes to the basic premises. The ordinary scientific theory covers only a limited portion of the total scientific field, and it is therefore an addition to established scientific knowledge rather than an independent structure. Hence it necessarily utilizes various items from the currently accepted body of scientific knowledge in the development of its consequences. The theory of the universe of motion, on the other hand, deals with the physical universe as a whole, and is entirely selfcontained. All of the conclusions as to the consequences of this theory are derived from the basic postulates without introducing anything from any other source. We have now arrived at the point, however, where it should be recognized that the foregoing statement applies to the theory of the universe of motion as a scientific product. Science itself is not entirely self-contained. In order to make scientific investigation possible, and to give meaning to the results thereof, it is necessary to make certain preliminary assumptions of a philosophical nature. The validity of these assumptions is accepted by the workers in the field of science as a condition of becoming scientists, and since these assumptions form a background for all scientific work, they are not ordinarily mentioned in scientific discourse, except in those instances where the topics under consideration are on the borderline between science and philosophy. In this concluding chapter of the present volume we will undertake to examine some of the questions that arise along this borderline, and in preparation for that examination we will want to look at the philosophical underpinnings of physical science: ‖the metaphysical presuppositions of science,― 335 as one writer calls them. These include the following: (a) lt is assumed that the universe is rational. (b) It is assumed that the same physical laws and principles apply throughout the universe. (c) It is assumed that the results of specific physical actions are reproducible. (d) It is assumed that the subject of scientific investigation is an objectively real universe. (e) It is assumed that physical changes (effects) result from causes. (f) It is assumed that the results of scientific investigation, when verified in accordance with standard scientific practice, are certain and permanent. (g) It is

assumed that the laws and principles of the physical universe are, in effect, restrictions, and that whatever they do not prohibit exists. Most members of the scientific community simply take these assumptions as axiomatic. Indeed, the great majority of rank and file scientists would be quite surprised to find that anyone questions such assumptions as the rationality of the universe, for example. But some exceptions have been taken to specific items in the list, mainly by individuals who are particularly interested in the philosophical aspects of science. An element of uncertainty has thus been introduced into the substratum of physical science. The development of the theory of the universe of motion has now clarified this situation, and has demonstrated that the criticisms of these basic assumptions are invalid. It appears, however, that a few of the criticisms that have been offered are of sufficient interest, in view of the publicity that they have received, to warrant some discussion in this work. The assumptions to which the following comments refer are identified by the same letter symbols that were used in the earlier listing. (a) If the universe were not rational, the scientific objective of arriving at a systematic understanding of the activities of the universe would be an impossible task. It is true that, as noted in Chapter 29, some prominent scientists have characterized the realm of the very small as irrational, but what this amounts to is excluding this domain from the scientific field. Our findings indicate that this exclusion is unnecessary. (b) In present-day practice, the Principle of Uniformity, as we may call it, has not been accepted in its entirety, because the theorists have been unable to find explanations on this basis for the phenomena of some special areas, such as the sub-atomic region or the interiors of the stars. However, it is accepted in a kind of a selective way, and regarded as applying whenever it does not inconvenience the theorists, but leaving open the possibility of deviations in special situations. The clarification of the physical relations in the far-out regions that has been,accomplished by the development described in this work has now shown that there are no exceptions to this general principle. The difficulties in the special areas that have led to suggestions as to exceptions have been due to inadequate understanding of the phenomena in these areas. (c) The assumption of reproducibility is usually stated in terms of the reproducibility of experiments, but it is equally applicable to any other type of physical action. (d) One school of philosophy contends that the universe exists only in our minds. This is a difficult position to contravene, as its defenders can simply extend it to apply to the premises of any adverse argument. But as scientists, we can dismiss this point of view as irrelevant. A subjective universe cannot be distinguished from an objectively real universe by any means at our command, and from a scientific standpoint where there is no distinction there is no difference. A modification of this point of view that has some support among scientists concedes reality only to the information received by the senses. The advocates of this interpretation point out (correctly) that we do not perceive physical objects directly; we have direct knowledge only of the ‖sense-data.― Our concepts of physical objects are theoretical constructs based on these data. The conclusion that they have drawn from this is that only

the sense data have objective reality, and that all else is a creation of the human mind. As expressed by G. C. McVittie: A preferable alternative to the doctrine of the rational External World is to regard science as a method of correlating sense-data . . . On4this view, the corpus of sense-data may, or may not, form a rational whole, but the human mind by selecting classes of data succeeds in grouping them into rational systems . . . Unobservables such as light, atoms, eiectromagnetic and gravitational fietds, etc., are not constituents of an independently existing rational External World; they are but concepts useful in the manufacture of systems of correlation.336 Other observers have adopted an intermediate position, conceding reality to some features of the universe, primarily macroscopic objects; but denying the reality, in this same sense, of other features-atoms and electrons, for example. Heisenberg cautions us specifically that we must not regard the smallest parts of matter as being objectively real in the same sense in which rocks and trees are real.337 ‖Atoms are neither things nor objects,― he says, ‖atoms are paris of observational situations. ‖338 In another attempt to describe this strange half-world in which the ‖official― school of modern physics places the basic units of matter, he characterizes the atom as ‖in a way, only a symbol. ‖339 The theory of the universe of motion has provided a definitive answer to these questions about reaGty. There i.s an external universe independent of the human race, and independent of any observations that they may make. The physical universe is a universe of motion; that is, motion is the reality of which the universe is composed. Motions and combinations thereof are therefore ‖real― in any ordinary sense of the word. The relations between these motions have a somewhat different status, and whether they can be considered real depends on how that term is defined. In any event, some of the ‖unobservables― of modern physics, the nucleus of the atom, for instance, are wholly non-existent. Some, such as electromagnetic and gravitational fields, are merely special ways of looking at physical situations-that is, describing the relations between motionsand belong in the same category in which we place such concepts as the center of gravity or the poles of the earth. But the smallest subdivisions of matter, the atoms and subatomic particles, have exactly the same claim to reality as the largest aggregates of matter; the smallest subdivisions of electricity, the electrons, have the same claim to reality as the heaviest electric currents; and so on. Whether or not the entity in question is observable, as matters now stand, is irrelevant. It should be understood, however, that reality, as defined above, is physical reality; that is, the reality of the universe of motion. This does not necessarily exclude the possibility that there may be reality of a different nature: a nonphysical reality. (e) The same frustrations that have led modern scientists to invent theories where their efforts to apply inductive reasoning to their problems have encountered difficulties have also impelled them to jettison any of the previously accepted scientific or philosophical principles that might happen to stand in the way of the inventions. Some are even ready to discard logic, one of the foundations of the structure of scientific knowledge. For example, F. 
For example, F. Waismann asserts that "Quantum physics presents a strong case against traditional logic"340-an upside-down conclusion, if there ever was one. But the favorite target of those who seek to make things easier for the theorists is the connection between cause and effect. Like Waismann, most of the others who are attempting to brush aside those principles that stand in the way of the currently fashionable ideas rely primarily on the quantum theory, with some assistance from relativity and other theoretical products of the modern era, according these theories a status superior to that of the previously accepted principles.

As it happens, this quantum theory that is now being used as ammunition with which to attack some of the essential features of traditional scientific procedure is itself based on a sound principle, existence only in discrete units, that was derived by one of these standard scientific procedures: generalization of empirical findings. The development of the theory of the universe of motion has now shown that this discrete unit principle is one of the key elements in the basic framework of the physical universe. But because conventional science is unaware of the directional reversals that take place at the unit levels, it has not been able to arrive at a theoretical explanation of events inside unit distance that is consistent with the established laws of physics. This put the theorists in a position where it seemed that they either had to give up quantum theory or sacrifice some of the established philosophical principles. They chose the latter course, and quantum theory, as now constituted, defies not only logic, but also causality and continuity of existence (that is, it asserts that an object may exist at point A at one time and at point B at another time without having been anywhere in the interim). The abandonment of causality is particularly stressed by the expositors of the theory, as in the following statement:

Whenever he [the physicist] penetrates to the atomic, or electronic level in his analysis, he finds things acting in a way for which he can assign no cause, for which he can never assign a cause, and for which the concept of cause has no meaning, if Heisenberg's principle is right. This means nothing more nor less than that the law of cause and effect must be given up.341 (P. W. Bridgman)

In the universe of motion all entities and phenomena are motions, combinations of motions, or relations between motions. It follows that any physical event X involves modification of an existing motion combination A by another motion or combination B. Motion B is then the cause of event X. However, the initial combination A was itself the result of a previous event Y, in which a then existing motion combination C was modified by a motion D to produce combination A. Thus D can also be regarded as a cause of event X. In fact, any physical event has what amounts to an infinite number of causes. This event is the intersection of two or more causal systems, and might be compared to a major river, which is the result of continual joining of the products of intersection of an almost infinite number of rivulets. Thus the conclusions of quantum theory leading to the abandonment of causality must be rejected.

In this connection, however, it is necessary to distinguish between causality and determinism. "There is some disagreement among scientists about the concept of causality. Among many it is essentially equivalent to the notion of determinism."342 (R. B. Lindsay) But there is a distinct difference between the two concepts. Causality implies nothing more than the existence of a cause for every physical event.
Determinism includes the further premise that the same cause applied to the same kind of situation always produces the same result. In the universe of conventional science, which is a universe of matter, non-material causes act upon material "things," and there are grounds for concluding that the same cause should produce the same effect if applied to the same thing under the same conditions. However, the real world does not act in this manner, and the reaction of "modern science" has been to throw the baby out with the bath water; that is, to reject causality. Our finding that both matter and non-material phenomena are manifestations of motion now resolves the problem. On this basis, cause and effect are simply aspects of the interaction of motions. Causality is maintained in all cases, because a motion cannot be changed except by an interaction with another motion (since nothing exists but motions). But, as we have seen in the preceding pages, there are continual interchanges between different kinds of motion-between scalar and vectorial motion, between one-dimensional motion and two or three-dimensional motion, between motion in space and motion in time-and many of these interchanges involve redetermination of direction or magnitude by chance processes. Because of this intervention by chance, the exact results of such interactions are unpredictable. Thus, while causality is maintained throughout the physical world, determinism is ruled out.
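The distinction can be made concrete with a toy simulation (a hypothetical sketch, not anything taken from the text itself): every change of state below has an identifiable cause, satisfying causality, yet a chance redetermination of direction at each interaction makes the exact outcome unpredictable, which is all that the rejection of determinism requires.

```python
import random

def interact(state, cause):
    # Causality: the change in 'state' is produced by 'cause'; nothing
    # happens without an interaction.  Chance: the direction of the
    # resulting change is redetermined at each interaction, as the text
    # describes for interchanges between types of motion.
    direction = random.choice([-1, 1])
    return state + direction * cause

# The same cause applied to the same initial situation, in two runs.
run_a, run_b = 0, 0
for _ in range(10):
    run_a = interact(run_a, cause=1)
    run_b = interact(run_b, cause=1)

# Every step in both runs had a cause, but the end states generally
# differ: causality without determinism.
print(run_a, run_b)
```
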
(f) On the basis of this assumption, physical science has a permanent, and ever-growing, core of positively established knowledge. This is the view of traditional science, a view that is still accepted by the great majority of scientists. But the general relaxation of scientific standards that has accompanied the introduction of inventive theories in modern times has confused the situation to the point where there is no longer any clear distinction between today's best guess and established fact. This has led to a contention on the part of some scientists and philosophers that no scientific findings are positively established, an assertion that is welcomed in some quarters because it tends to excuse the deficiencies of many unverified theories. "The notion that scientific knowledge is certain is an illusion,"343 says Marshall Walker.

This point of view is based largely on an unrealistic concept of "certainty." It is true that no physical statement can be verified with what we may call mathematical certainty, in which the probability of error is zero. Because of the nature of physical observations, the best that we can do in any physical situation is to arrive at a point where the probability of error is negligible: a physical certainty, we may say. But from a practical standpoint, this physical certainty is fully equivalent to mathematical certainty. Drawing a distinction between the two is meaningless hairsplitting. A theory is verified when its validity is established with physical certainty.

In this connection, it is important to recognize that scientific statements can be verified only if they are properly expressed, so that they stay within the limits to which comparisons with observation can be made. Much of the erroneous thinking in this area is due to a lack of precision in defining the items that are involved. For example, we cannot ordinarily verify a statement in the form y = 3x, where x and y are physical variables (unless this proposition can be incorporated into one of greater scope that can be verified as a whole). In order to be verifiable, the statement will usually have to be put into the form: Within the limits x = a and x = b, y = 3x to an accuracy of one part in 10ⁿ.
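A minimal sketch of this kind of bounded verification, using hypothetical observation data and an assumed accuracy of one part in 10⁴ (none of these particulars come from the text):

```python
def verified(observations, a, b, slope=3.0, rel_tol=1e-4):
    # Test the bounded statement: within the limits x = a and x = b,
    # y = slope * x to an accuracy of 'rel_tol'.  Points outside [a, b]
    # are ignored, because the statement makes no claim about them.
    for x, y in observations:
        if a <= x <= b and abs(y - slope * x) > rel_tol * abs(slope * x):
            return False  # a deviation beyond the stated accuracy
    return True

# Hypothetical data: the bounded claim holds on [1, 10] even though the
# unbounded statement y = 3x fails badly at x = 100.
data = [(1.0, 3.00001), (5.0, 15.0001), (10.0, 30.0002), (100.0, 350.0)]
print(verified(data, a=1.0, b=10.0))  # True: verified within its limits
```
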

When thus expressed and validated by comparison with the results of observation, this statement constitutes exact and permanent knowledge, regardless of whether some future findings may show that the relation is invalid somewhere outside the limits specified, or that there is a deviation of less than one part in 10ⁿ under some circumstances. As Lecomte du Nouy points out, "science has never had to retract an affirmation based on facts that are well established within accurately defined limits."344

In support of his assertion that there is no certain scientific knowledge, Walker tells us that "New models are often quite radically different from their predecessors, and often require the abandonment of ideas that have long been considered obvious and axiomatic." This comment illustrates one of the common errors in thought that underlie the denial of scientific certainty. Walker bases his conclusion on the observation that many "models," and presumably "obvious and axiomatic" ideas, ultimately had to be abandoned. But the truth is that few models ever qualify as scientific knowledge. Models do not attempt to cover all aspects of the phenomena with which they deal (if they did, they would be theories, not models), and consequently they are inherently erroneous, either in part or in their entirety. The failure of these models to stand the test of time therefore has no relevance to the status of firmly established knowledge. Likewise, if an assertedly "obvious and axiomatic" idea can be definitely verified, it then constitutes scientific knowledge, and is both certain and permanent. If it fails the test of comparison with the observed facts, then it is not, and never was, "obvious and axiomatic," nor is it scientific knowledge, and the necessity of discarding it has no significance in the present context.

(g) This principle is commonly expressed in the statement that "What can exist does exist." K. W. Ford puts it in this manner:

One of the elementary rules of nature is that, in the absence of a law prohibiting an event or phenomenon, it is bound to occur with some degree of probability. To put it simply and crudely: Anything that can happen does happen.345

This author uses the word "happen" rather than "exist," but as he notes in another connection, at the basic level "there is no clear distinction between what is and what happens."346 This principle is not as well known, as a principle of nature, among scientists in general as those previously discussed, but they all employ it, usually unconsciously, in a great variety of applications. It is this principle that provides the justification for interpolation and extrapolation. It has been the key factor in such theoretical anticipations as Mendeleev's prediction of previously unknown elements, Dirac's prediction of the positron, and myriads of other, less dramatic, scientific advances. And it is the essence of the lines of reasoning that are being employed in the current attempts to evaluate the possibility of life elsewhere in the universe. As can be seen in these illustrations, the absence of a prohibition is first established in one area. The principle that what can exist does exist is then invoked to justify the assertion that the phenomenon in question also exists in the other areas. The validity of this principle, in application to the physical universe, has been clearly established by our findings. In many cases, entities or phenomena that would otherwise exist, on the basis of this principle, are excluded by adverse probabilities or other specific factors.
Aside from these exclusions, all of the entities or phenomena that are theoretically possible within the area thus far covered in the investigation have their counterparts in the observed physical universe. It is true that only a relatively small portion of the universe as a whole has been examined in the context of the new theoretical system, but the area of coverage includes the basic phenomena of all of the major subdivisions of physical science, and many thousands of individual items. The probability that there is any violation of this principle anywhere in the universe has thus been reduced to a negligible level.

Addition of these philosophical principles to the physical knowledge set forth in this and the preceding volumes now puts us in a position where we are able to arrive at answers to some long-standing questions about fundamental issues. We will begin with the first of these:

1. Is the physical universe finite or infinite?

In past discussion of this subject it has usually been assumed that the question reduces to a matter of whether or not space is finite. Those who favor the finite alternative generally envision some kind of a space curvature, a geometry that permits space to be finite, yet unbounded. As brought out in Volume I, space as ordinarily conceived-extension space, in terms of this work-is not a physical entity. It is merely a reference system, a purely mental construction. As such, it can be thought of as infinite. But the space that actually exists in a physical sense is the space aspect of the existing motion of the universe. The question as to whether this space is finite or infinite therefore becomes a question as to whether the amount of motion in the universe is finite. The finding that the activity of the universe is cyclic answers this question immediately. A cyclic system is a closed system; it is finite. In the universe of motion, spatial structures exist only for a limited time; that is, a limited segment of the time progression. Temporal structures (in the cosmic sector) exist only during a limited segment of the space progression.

The principal obstacle that stands in the way of acceptance of the idea of a finite universe is the observed outward motion of the photons of light and other electromagnetic radiation. On first consideration, it would seem that, regardless of what the aggregates of matter may be doing, the radiation is being dispersed outward into space, and is eventually lost from the universe as we know it. But we now find that this apparent outward movement of the photons is an illusion due to the inward movement of the gravitationally bound system from which we are doing our observing. The photons actually have no capability of independent motion. This is why the physicists have never been able to find a mechanism for the "propagation of radiation." There is no such propagation, and therefore no need for a mechanism. The prevailing impression is that Einstein provided an explanation for this phenomenon, but, in fact, what he did was to dismiss the problem as too difficult. In a statement quoted in Volume I, he characterizes the situation in this manner:

Our only way out . . . seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement.347
Since the photons of radiation remain at their points of origin, in the natural system of reference, their ultimate fate is not to be lost in the depths of space, as observations from our locations in the universe of motion appear to indicate. We are doing our observing from locations that are moving inward at high rates of speed, and our observations are distorted accordingly. All photons remain in the space over which the matter of the universe is distributed. It follows that they must ultimately encounter, and be absorbed by, matter. They are then transformed into thermal motion, or participate in the atom building process by which radiation is reconverted into matter. A small fraction of the total are able to pass into the cosmic sector, appearing there as a "background radiation" of the type discussed in Chapter 30.

2. Did the universe evolve from a primitive condition, or has it been in the same condition in which we now observe it during its entire existence?

The results of the development of theory in the preceding pages of this and the previous volumes are consistent with either of these alternatives. The evolution in each sector begins with matter in a primitive dispersed condition, but it does not necessarily follow that there was ever a time at which all matter was in this condition. In any event, even if the universe did originate in a primitive condition, theoretical considerations indicate that it would eventually arrive at an equilibrium such as that which now appears to exist.

3. Did the universe have a beginning, or has it always existed?

The two parts of this question are not as mutually exclusive as they appear to be. We can answer the second part affirmatively, but this does not necessarily mean that the answer to the first part is negative. Such words as "always" and "before" presuppose the existence of time. "Always" means "during all time." "Before" means "at an earlier time." The universe has always existed; that is, it has existed throughout all time, because time exists only as a constituent of that physical universe. In the sense in which it is being asked, the first part of this question is meaningless, as it assumes that the existence of time is independent of the existence of the universe. Whether or not the question might have a real significance on the basis of something other than a sequence in time is beyond the scope of the present work.

4. Will the universe eventually come to an end?

All individual objects in the physical universe, including the earth and the solar system, have finite life spans, and their existence will eventually terminate. But there is nothing in the physical system that would end the existence of the universe as a whole. The physical universe is a self-contained and self-perpetuating mechanism. It will continue on the present basis indefinitely, unless it is destroyed by some outside agency. The question as to whether any such outside agency exists will be considered later.

5. Was the universe created by some agency?

The development of theory in this work sheds no light on the question of creation. The only thing that exists in the physical universe is motion. Our theory, as it now stands, defines what motion is, and what it does, but not how it originated, or whether it had an origin. Since time, in a universe of motion, exists only as an aspect of that motion, the universe and time are coeval. On this basis, the universe has existed always-during all time-regardless of whether or not it originated from an act of creation.
Neither the theory of the universe of motion, nor the many hitherto unrecognized physical facts uncovered during its development, gives any indication as to whether a creation occurred. This remains a wide open question, so far as science is concerned.

6. Is the activity of the physical universe purposeful, or is it simply mechanistic?

The finding that the physical universe consists entirely of a finite quantity of motion means that it is purely mechanistic. However, this does not preclude the possibility that the existence of this machine may have a purpose. This is an issue on which our study of the mechanism sheds no light, although it does clear the way for a study of the problem.

7. Is the human race merely part of the machine, or does it, in some way, have an independent role?

Conventional science takes a somewhat ambivalent attitude toward this question. It portrays the universe as strictly mechanistic, and yet introduces the concept of an "observer," whose presence is presumed to have a significance with respect to the outcome of physical processes. The effect of the new information derived from the development of the theory of the universe of motion on our understanding of the relation of the human race to its physical environment has been explored in connection with an extension of the physical investigation into the non-physical fields, the results of which will be reported in a separate publication.

8. Are we alone, or is there intelligent life elsewhere in the universe?

This is a long-standing question that has entered a new phase since the development of communication processes that are, at least potentially, capable of transmitting and receiving messages from distant planets. It is now a lively subject of discussion and speculation, and some steps have been taken toward a systematic search for evidence of extra-terrestrial life. This question can be subdivided into the following three parts:

1. Are there other locations in the universe in which the physical conditions are suitable for the existence of life?
2. Does life necessarily develop in some fraction of the suitable locations?
3. Where life exists, does it necessarily evolve into intelligent life under the most favorable conditions?

The results obtained from the theory of the universe of motion enable us to give an affirmative answer to the first of these subsidiary issues. As brought out in Chapter 7, our findings indicate not only that there are an enormous number of planetary systems, but also that the planets in these systems are distributed in distance from their controlling stars in accordance with Bode's Law (as revised; see the sketch following this answer). This means that the great majority of the systems include at least one planet within the habitable zone, a planet that may be suitable for the development of the higher forms of life. Inasmuch as the results reported in the several volumes of this work do not extend into the biological field, they do not provide answers for the other two subdivisions of the main question. However, these results have verified the status of the postulates of the theory of the universe of motion as a correct definition of the physical universe. If life is a physical phenomenon, then it, too, is defined by these postulates. Thus the theory opens an avenue of approach to these other two issues. A preliminary study along these lines has been included in the extension of the physical investigation that was mentioned in the answer to question 7.
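Larson's revised form of Bode's Law is developed in Chapter 7 and is not reproduced here; the sketch below computes the classical Titius-Bode distances purely as an illustration of the kind of distance regularity referred to (the planetary figures are ordinary published values, not results from the text):

```python
def titius_bode(n):
    # Classical Titius-Bode relation, in astronomical units: a = 0.4 for
    # the innermost planet, then a = 0.4 + 0.3 * 2**n for n = 0, 1, 2, ...
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# Predicted versus rough observed semi-major axes (AU).
planets = [("Mercury", None, 0.39), ("Venus", 0, 0.72), ("Earth", 1, 1.00),
           ("Mars", 2, 1.52), ("Ceres", 3, 2.77), ("Jupiter", 4, 5.20),
           ("Saturn", 5, 9.54), ("Uranus", 6, 19.19)]
for name, n, observed in planets:
    print(f"{name:8s} predicted {titius_bode(n):6.2f}  observed {observed:6.2f}")
```
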

9. If there are intelligent beings elsewhere in the universe, will we eventually be able to make some kind of contact with them?

At the present stage of our knowledge, any answer to this question would be pure speculation.

10. Is there anything outside (that is, independent of) the universe of motion?

This is probably the most important question that can be asked by members of the human race. Many persons, particularly those with strong religious ties, will be inclined to contest this assertion, having in mind issues that are more directly connected with their specific beliefs. But we can safely predict that if these alternative questions are carefully examined it will be found that they have no meaning unless this question number 10 can be answered affirmatively.

Conventional science gives us a negative answer. It regards space and time as constituting a background, or setting, in which physical entities exist, and in which physical activity takes place. All existence, according to this view, is in space and in time. It then follows that there cannot be any existence outside of space and time. The prevailing scientific opinion is that this is an incontrovertible conclusion. Furthermore, it is claimed that every fact to which we have access can reasonably be explained in terms of the physical universe alone, as would be expected on the basis of the foregoing assertions. Although it is generally conceded that this is the verdict of science at the present stage of knowledge, it is, to most scientists, an unwelcome conclusion. The great majority of these individuals have some kind of religious or philosophical convictions about non-physical existence that they are not willing to give up, regardless of how strong a case against the reality of such an existence science may present. For some this has created a very difficult situation. As expressed by du Nouy:

It cannot be contested that the heart of many men is the stage of a conflict between the strictly intellectual activity of the brain, based on the progress of science, and the intuitive, religious self. The greater the sincerity of the man, the more violent is the conflict.348

The fact that the clarification of the physical relationships in our study of the universe of motion has opened the door to an extension of this study into the non-physical realm thus has a profound significance. The physical findings clearly demolish what previously seemed to be an unassailable case against the reality of outside existence. Even the most casual consideration of the claim that every known fact has a reasonable explanation in physical terms is sufficient to show that the validity of this claim rests entirely on a subjective assessment of what constitutes a reasonable explanation in each individual case. The prevailing scientific position with respect to evidence of non-physical existence thus amounts to nothing more than a refusal to recognize any evidence that is offered in favor of such existence. It follows that the scientific rejection of the possibility of existence outside the physical universe has no basis other than the premise that all existence is in space and in time.

In the universe of motion, this is not true. Space and time do not constitute a container for the entities and phenomena of that universe; they are contents of the universe. Once this is understood, the obstacle in the way of non-physical existence disappears. The results of the investigation here being reported show that the physical universe consists entirely of a specific finite quantity of a particular kind of motion. The question at issue now becomes: Can anything exist other than this quantity of this kind of motion?

This is an issue that can be investigated by standard scientific methods and procedures. We cannot apply the purely deductive method by which we have derived the answers to similar questions within the boundaries of the physical universe after establishing the validity of the fundamental postulates of the Reciprocal System of theory, as we have no assurance that the laws and principles of the physical universe are applicable to the outside region. We can, however, postulate the applicability of those of the previously established principles that are not subject to any obvious regional limitations, and test the validity of that postulate in the regular manner. In so doing, we are using one of the versatile tools of inductive reasoning: the extrapolation process. We are making the kind of an "inference from experience" upon which scientific theory was based before the "inventive" school of Einstein and his successors gained control of the scientific Establishment.

First, we assume the validity of the Principle of Uniformity, identified as Principle (b) in the list given at the beginning of this chapter. This principle then carries with it the validity of the other items in the list that are relevant to the point at issue, particularly the rationality of the outside existence, principle (a), and the assertion that what can exist does exist, principle (g). We know from observation that motion can exist. Our observations tell us only that it exists in a certain form and in a certain finite quantity, but there is no indication of any kind of a limiting factor that would restrict it to this form and to this quantity. Principle (g) therefore tells us that motion can exist in other forms and in other quantities if our hypothesis as to the applicability of the Principle of Uniformity to the outside existence is valid.

Having formulated this hypothesis by extrapolating the principles and relations that we have established in the physical universe, we are then ready to verify it in the standard manner, by developing the consequences of the hypothesis and comparing them with observation. Notwithstanding the scientific contention that all observed phenomena can be explained on a purely physical basis, it quickly becomes evident, when the verification process is undertaken, that many of the effects of non-physical existence required by the uniformity hypothesis are, in fact, observable. Their true status as unexplained non-physical phenomena has not heretofore been recognized because they coexist with many unexplained physical phenomena, and have not been distinguished from these obscure features of physical existence.
The findings of this extension of the investigation of the physical universe into the non-physical region are much too voluminous to be included with the physical results, and will be described in a separate publication, but it would not be appropriate to conclude the discussion in this volume without calling attention to the manner in which the clarification of the properties of the physical universe sets the stage for a confirmation of the reality of existence outside that universe. The more complete understanding of physical existence opens the door to an exploration of existence as a whole, including those non-physical areas that have hitherto had to be left to religion and related branches of thought. It is now evident that our familiar material world is not the whole of existence, as modern science would have us believe. It is only a part—perhaps a very small part—of a greater whole.

References

1. John, Laurie, Cosmology Now, Taplinger Publishing Co., New York, 1976, p. 85.
2. Verschuur, Gerrit, Starscapes, Little, Brown and Co., Boston, 1977, p. 143.
3. Shklovskii, I. S., Stars, W. H. Freeman & Co., San Francisco, 1978, p. 66.
4. Struve, Otto, Elementary Astronomy, Oxford University Press, New York, 1959, p. 296.
5. Mitton, Simon, Exploring the Galaxies, Charles Scribner’s Sons, New York, 1976, p. 86.
6. Gold and Hoyle, Paper 104, Paris Symposium on Radio Astronomy, edited by Ronald N. Bracewell, Stanford University Press, 1959.
7. Verschuur, Gerrit, op. cit., p. 102.
8. Harwit, Martin, Astrophysical Concepts, John Wiley & Sons, New York, 1973, p. 43.
9. McCrea, W. H., Cosmology Now, edited by Laurie John, op. cit., p. 94.

10. Bethe, Hans, Technology Review (MIT), June 1976.
11. Pasachoff, Jay M., Astronomy Now, W. B. Saunders Co., Philadelphia, 1978, p. 135.
12. Bok, Bart J., The Astronomer’s Universe, Cambridge University Press, 1958, p. 91.
13. Irwin, John B., Sky and Telescope, Nov. 1973.
14. Trimble, Virginia, Earth and Extraterrestrial Sciences, Mar. 1978.
15. Hoyle, Fred, Frontiers of Astronomy, Harper & Bros., New York, 1955, p. 278.
16. Hartmann, William K., Astronomy: The Cosmic Journey, Wadsworth Publishing Co., Belmont, CA, 1978, p. 365.
17. Hirshfeld, Alan, Sky and Telescope, Apr. 1980.
18. Couper, Heather, 1978 Yearbook of Astronomy, p. 190.
19. Shklovskii, I. S., op. cit., p. 60.
20. Hartmann, William K., op. cit., p. 386.
21. Silk, Joseph, The Big Bang, W. H. Freeman & Co., San Francisco, 1980, p. 177.
22. Rees, M. J., The State of the Universe, edited by G. T. Bath, The Clarendon Press, Oxford, 1980, p. 35.

23. Jastrow and Thompson, Astronomy, Fundamentals and Frontiers, 2nd edition, John Wiley & Sons, New York, 1974, p. 231.
24. Cudworth, K., Astronomical Journal, July 1976.
25. Freeman and Norris, Annual Review of Astronomy and Astrophysics, 1981.
26. Darrow, Karl, Scientific Monthly, Mar. 1942.
27. Hogg, Helen S., McGraw-Hill Encyclopedia of Science and Technology, 1982, p. 13-53.
28. Struve, Otto, Sky and Telescope, July 1955.
29. Greenstein, J. L., McGraw-Hill Encyclopedia, p. 13-49.
30. Pasachoff, Jay M., op. cit., p. 87.
31. Kirshner, Robert P., Scientific American, Dec. 1976.
32. Struve, Otto, Sky and Telescope, June 1960.
33. Hogg, Helen S., Encyclopedia Britannica, 15th edition, p. 17-605.
34. Lohmann, W., Zeitschrift für Astrophysik, Aug. 1953.
35. Von Hoerner, Sebastian, Astrophysical Journal, Mar. 1957.
36. Inglis, Stuart J., Planets, Stars and Galaxies, 3rd edition, John Wiley & Sons, New York, 1961, p. 309.
37. Burnham, Robert, Jr., Burnham’s Celestial Handbook, Dover Publications, New York, 1978, p. 1294.
38. Shklovskii, I. S., op. cit., p. 110.
39. Aller, L. H., Encyclopedia Britannica, 15th edition, p. 17-600.
40. Ibid., p. 17-602.
41. Shklovskii, I. S., op. cit., p. 144.
42. Gamow, George, The Creation of the Universe, Viking Press, New York, 1952, p. 46.
43. Jastrow and Thompson, op. cit., p. 133.
44. Herbst and Assousa, Scientific American, Aug. 1979.
45. Shklovskii, I. S., op. cit., p. 225.
46. Maffei, Paolo, Monsters in the Sky, The MIT Press, 1980, p. 129.
47. Shklovskii, I. S., op. cit., p. 227.
48. Ibid., p. 285.
49. Ibid., p. 186.
50. Ibid., p. 193.
51. Ibid., p. 109.

52. Mitton, Simon, op. cit., p. 89.
53. Aller, L. H., op. cit., p. 17-596.
54. Neugebauer and Leighton, Scientific American, Aug. 1968.
55. Wilson, Olin C., et al., Scientific American, Feb. 1981.
56. Baker and Fredrick, Astronomy, Ninth edition, Van Nostrand Reinhold Co., New York, 1971, p. 393.
57. Kraft, Robert P., Scientific American, July 1959.
58. Burnham, Robert, Jr., op. cit., p. 590.
59. Harwit, Martin, op. cit., pages 24 and 345.
60. Lynden-Bell, Donald, Cosmology Now, op. cit., p. 50.
61. Jastrow, Robert, Red Giants and White Dwarfs, Harper & Row, New York, 1967, p. 41.
62. Silk, Joseph, op. cit., p. 257.
63. Maffei, Paolo, op. cit., p. 205.
64. See, for instance, Hartmann, op. cit., p. 295.
65. Eddington, Arthur, New Pathways in Science, University of Michigan Press, 1959, Chapter VII.
66. Plavec, M. J., McGraw-Hill Encyclopedia, p. 13-118.
67. Shklovskii, I. S., op. cit., p. 165.
68. Greenstein, J. L., Stellar Atmospheres, University of Chicago Press, 1960, p. 676.
69. Bohlin, R. C., et al., Astrophysical Journal, Jan. 15, 1982.
70. Jastrow and Thompson, op. cit., p. 182.
71. Shklovskii, I. S., op. cit., p. 194.
72. Hartmann, William K., op. cit., p. 338.
73. Jeans, James, The Universe Around Us, fourth edition, Cambridge University Press, 1947, p. 236.
74. Allen, David, 1973 Yearbook of Astronomy.
75. Underhill, Anne B., Annual Review of Astronomy and Astrophysics, 1968.
76. Baker and Fredrick, op. cit., p. 372.
77. Burnham, Robert, Jr., op. cit., p. 407.
78. McLaughlin, Dean B., Sky and Telescope, May 1946.
79. Hartmann, William K., op. cit., p. 334.
80. Ibid., p. 333.
81. Burnham, Robert, Jr., op. cit., p. 995.

82. Ibid., p. 1263.
83. Bok, Bart J., Scientific American, Mar. 1981.
84. Shklovskii, I. S., op. cit., p. 207.
85. Ibid., p. 215.
86. Feynman, Richard, The Character of Physical Law, MIT Press, 1967, p. 30.
87. Harwit, Martin, Cosmic Discovery, Basic Books, New York, 1981, p. 57.
88. Allen, David K., 1981 Yearbook of Astronomy, p. 201.
89. Hartmann, William K., op. cit., p. 337.
90. Shklovskii, I. S., op. cit., p. 214.
91. Davis and Day, Water, Doubleday & Co., New York, 1961, p. 117.
92. Basko, M. M., Annals of the New York Academy of Sciences, Feb. 15, 1980.
93. Hartmann, William K., op. cit., p. 209.
94. News item, New Scientist, Apr. 11, 1974.
95. Ebbighausen, E. G., Astronomy, Charles E. Merrill Books, Columbus, Ohio, 1966, p. 57.
96. Hogg, Helen S., Encyclopedia Britannica, 15th edition, p. 17-604.
97. Bok and Bok, The Milky Way, 4th edition, Harvard University Press, 1974, p. 249.
98. Shklovskii, I. S., op. cit., p. 112.
99. Struve, Otto, Sky and Telescope, Apr. 1960.
100. Ebbighausen, E. G., op. cit., p. 76.
101. Harris, W. E., Astronomical Journal, Dec. 1976.
102. Burbidge, M. and G., Scientific American, June 1961.
103. Harris and Racine, Annual Review of Astronomy and Astrophysics, 1979.
104. Larson, R. B., Nature, Mar. 3, 1972.
105. Bok and Bok, op. cit., p. 97.
106. Blitz, Leo, Scientific American, Apr. 1982.
107. Wyatt, Stanley P., Principles of Astronomy, third edition, Allyn and Bacon, Boston, 1977, p. 562.
108. News item, Sky and Telescope, Oct. 1975.
109. Bok and Bok, op. cit., p. 160.
110. Hartmann, William K., op. cit., p. 284.
111. Gamow, George, op. cit., p. 94.

112. Wyatt, Stanley P., op. cit., p. 568.
113. Verschuur, Gerrit, op. cit., p. 102.
114. Ibid., p. 105.
115. News item, Nature, July 12, 1974.
116. Thackeray, A. D., The Magellanic Clouds, edited by Andre B. Muller, D. Reidel Publishing Co., Dordrecht, Holland, 1971, p. 14.
117. Westerlund, Bengt, ibid., p. 31.
118. Payne-Gaposchkin, Cecilia, ibid., p. 36.
119. Oort, J. H., ibid., p. 189.
120. Burnham, Robert, Jr., op. cit., p. 1546.
121. Ibid., p. 347.
122. Philip, A. G. D., Sky and Telescope, July 1978.
123. Hartmann, William K., op. cit., p. 309.
124. Van den Bergh, Sidney, Annual Review of Astronomy and Astrophysics, 1975.
125. Kudritzki and Simon, Astronomy and Astrophysics, Dec. 1, 1978.
126. Hunger, K., et al., Astronomy and Astrophysics, Mar. 1, 1981.
127. Kaler, James B., Sky and Telescope, Feb. 1982.
128. Burnham, Robert, Jr., op. cit., p. 943.
129. Liller and Liller, Scientific American, Apr. 1963.
130. Burnham, Robert, Jr., op. cit., p. 2120.
131. Abell, G. O., Astrophysical Journal, Apr. 1966.
132. Pasachoff, Jay M., op. cit., p. 143.
133. Aller and Liller, Nebulae and Interstellar Matter, edited by Middlehurst and Aller, Univ. of Chicago Press, 1968, p. 558.
134. Smith and Aller, Astrophysical Journal, Mar. 1, 1971.
135. Greenstein, J. L., Scientific American, Jan. 1959.
136. Greenstein, J. L., McGraw-Hill Encyclopedia, p. 14-633.
137. Liebert, James, Annual Review of Astronomy and Astrophysics, 1980.
138. Stothers, Richard, Astronomical Journal, Dec. 1966.
139. Greenstein, J. L., Astronomical Journal, May 1976.
140. Shipman, H. L., Astrophysical Journal, Feb. 15, 1979.

141. Greenstein, J. L., Stellar Atmospheres, op. cit., p. 689.
142. Van Horn, H. M., Physics Today, Jan. 1979.
143. Shklovskii, I. S., op. cit., p. 198.
144. Aller and Liller, op. cit., p. 483.
145. Kraft, Robert P., Scientific American, Apr. 1962.
146. Ebbighausen, E. G., op. cit., p. 101.
147. McLaughlin, Dean B., Stellar Atmospheres, J. L. Greenstein, editor, op. cit., p. 640.
148. Ibid., p. 593.
149. Kraft and Luyten, Astrophysical Journal, Oct. 1, 1965.
150. Joy, A. H., Stellar Atmospheres, J. L. Greenstein, editor, op. cit., p. 668.
151. Burnham, Robert, Jr., op. cit., p. 225.
152. Joy, A. H., op. cit., p. 666.
153. Haro, Guillermo, Non-Stable Stars, edited by G. H. Herbig, Cambridge University Press, 1957, p. 26.
154. Burnham, Robert, Jr., op. cit., p. 174.
155. Ibid., p. 123.
156. Warner, Brian, Sky and Telescope, Nov. 1973.
157. Burnham, Robert, Jr., op. cit., p. 928.
158. Ibid., p. 218.
159. Schatzman, E., Stellar Structure, edited by Aller and McLaughlin, Univ. of Chicago Press, 1965, p. 329.
160. Joy, A. H., op. cit., p. 672.
161. Gallagher and Starrfield, Annual Review of Astronomy and Astrophysics, 1978.
162. McLaughlin, Dean B., Stellar Atmospheres, op. cit., p. 647.
163. Walker, Marshall, The Nature of Scientific Thought, Prentice-Hall, Englewood Cliffs, N.J., 1963, p. 132.
164. Davies, Paul, The Runaway Universe, Harper & Row, New York, 1978, p. 159.
165. Jeans, James, op. cit., p. 279.
166. Ibid., p. 281.
167. Gorenstein and Tucker, Scientific American, Nov. 1978.
168. Neugebauer and Becklin, Scientific American, Apr. 1973.
169. Jastrow and Thompson, op. cit., p. 250.

170. Branch, David, Astrophysical Journal, Sept. 15, 1981.
171. Minkowski, R., Nebulae and Interstellar Matter, edited by Middlehurst and Aller, op. cit., p. 629.
172. Shklovskii, I. S., op. cit., p. 297.
173. Ibid., p. 226.
174. Kowal, Charles T., Astronomical Journal, Dec. 1968.
175. Poveda and Woltjer, Astronomical Journal, Mar. 1968.
176. Shklovskii, I. S., op. cit., p. 257.
177. Minkowski, R., op. cit., p. 652.
178. Ibid., p. 658.
179. Mitton, Simon, The Crab Nebula, Charles Scribner’s Sons, New York, 1978, p. 42.
180. Ibid., p. 56.
181. Shklovskii, I. S., op. cit., p. 270.
182. Poveda and Woltjer, Astronomical Journal, Mar. 1968.
183. Smith, F. G., Pulsars, Cambridge University Press, 1977, p. 9.
184. Harwit, Martin, Cosmic Discovery, op. cit., p. 243.
185. Ibid., p. 327.
186. Hoyle, Fred, From Stonehenge to Modern Cosmology, W. H. Freeman & Co., San Francisco, 1972, p. 62.
187. Shklovsky, I. S., Publications of the Astronomical Society of the Pacific, Apr.-May 1980.
188. Smith, F. G., op. cit., p. 220.
189. Ibid., p. 169.
190. News item, Sky and Telescope, Dec. 1979.
191. Smith, F. G., op. cit., p. 229.
192. Ibid., p. 228.
193. Ibid., p. 91.
194. Ibid., p. 103.
195. Manchester and Taylor, Pulsars, W. H. Freeman & Co., San Francisco, 1977, p. 226.
196. Smith, F. G., Nature, Dec. 5, 1970.
197. Manchester and Taylor, op. cit., p. 15.
198. Taylor and Manchester, Annual Review of Astronomy and Astrophysics, 1977.
199. Manchester and Taylor, op. cit., p. 18.

200. Ibid., p. 6.
201. Thorne, Kip S., Scientific American, Dec. 1974.
202. Ruderman, M., Annals of the New York Academy of Sciences, Feb. 15, 1980.
203. Smith, F. G., Pulsars, op. cit., p. 171.
204. Burbidge and Burbidge, Quasi-Stellar Objects, W. H. Freeman & Co., San Francisco, 1967, p. 52.
205. Mitton, Simon, Exploring the Galaxies, op. cit., p. 179.
206. Bok and Bok, op. cit., p. 168.
207. Shklovskii, I. S., Stars, op. cit., p. 256.
208. Moore and Stockman, Astrophysical Journal, Jan. 1, 1981.
209. Mitton, Simon, Exploring the Galaxies, op. cit., p. 108.
210. Ibid., p. 180.
211. Mason and Cordova, Sky and Telescope, July 1982.
212. Fabian, Andrew C., Earth and Extraterrestrial Sciences, Feb. 1973.
213. Giacconi, R., quoted by Fabian, ibid.
214. Fabian and Pringle, New Scientist, Feb. 7, 1974.
215. Hartmann, William K., op. cit., p. 371.
216. Kylafis, N. D., et al., Annals of the New York Academy of Sciences, Feb. 15, 1980.
217. Holt and McCray, Annual Review of Astronomy and Astrophysics, 1982.
218. Shklovskii, I. S., Stars, op. cit., p. 384.
219. Gursky and Van den Heuvel, Scientific American, Mar. 1975.
220. Shklovskii, I. S., Stars, op. cit., p. 400.
221. News item, Nature, Jan. 26, 1973.
222. Cocke, W. J., et al., Nature, Sept. 26, 1970.
223. Radhakrishnan, V., et al., Nature, Feb. 1, 1969.
224. Giacconi, R., Physics Today, May 1973.
225. Shklovskii, I. S., Stars, op. cit., p. 244.
226. Charles and Culhane, Scientific American, Dec. 1975.
227. Rothchild, R. E., Earth and Extraterrestrial Sciences, Mar. 1979.
228. Giacconi, R., Scientific American, Feb. 1980.
229. Mitton, Simon, The Crab Nebula, op. cit., p. 172.

230. Verschuur, Gerrit, Starscapes, op. cit., p. 171.
231. Weymann, R. J., Scientific American, Jan. 1969.
232. Harwit, Martin, Cosmic Discovery, op. cit., p. 23.
233. Jastrow and Thompson, op. cit., p. 254.
234. Strittmatter and Williams, Annual Review of Astronomy and Astrophysics, 1976.
235. Feynman, Richard, op. cit., p. 155.
236. Harwit, Martin, Cosmic Discovery, op. cit., p. 244.
237. Shklovskii, I. S., Stars, op. cit., p. 288.
238. Mitton, Simon, Exploring the Galaxies, op. cit., p. 135.
239. Bahcall and Hills, Astrophysical Journal, Feb. 1, 1973.
240. News item, Nature, Sept. 7, 1968.
241. Bohuski and Weedman, Astrophysical Journal, Aug. 1, 1979.
242. Hewish, A., Annual Review of Astronomy and Astrophysics, 1970.
243. Verschuur, Gerrit, Starscapes, op. cit., p. 116.
244. Arp, Halton, Astrophysical Journal, May 1967.
245. Arp, Halton, private communication.
246. Hogg, D. E., Astrophysical Journal, Mar. 1969.
247. Macdonald and Miley, Astrophysical Journal, Mar. 1, 1971.
248. Kellerman, K. I., Astronomical Journal, Sept. 1972.
249. Arp, Halton, Science, Dec. 17, 1971.
250. Burbidge and Burbidge, op. cit., Chapter 3.
251. Burbidge and O’Dell, Astrophysical Journal, Dec. 15, 1972.
252. Miller, Joseph S., Publications of the Astronomical Society of the Pacific, Dec. 81-Jan. 82.
253. Rieke and Lebofsky, Annual Review of Astronomy and Astrophysics, 1979.
254. Stein, W. A., et al., Annual Review of Astronomy and Astrophysics, 1976.
255. Disney and Veron, Scientific American, Aug. 1977.
256. Shipman, H. L., Black Holes, Quasars, and the Universe, Houghton Mifflin Co., Boston, 1980, p. 215.
257. Sandage, Allan R., Astrophysical Journal, Nov. 15, 1972.
258. News item, Sky and Telescope, Jan. 1982.
259. Faber and Gallagher, Annual Review of Astronomy and Astrophysics, 1979.

260. Toomre and Toomre, Scientific American, Dec. 1973.
261. Dufour and Van den Bergh, Sky and Telescope, Nov. 1978.
262. Overbye, Dennis, Sky and Telescope, July 1979.
263. Jastrow and Thompson, op. cit., p. 240.
264. Gursky and Schwartz, Annual Review of Astronomy and Astrophysics, 1977.
265. Metz, William D., Science, Sept. 21, 1973.
266. Bond and Sargent, Astrophysical Journal Letters, Nov. 1, 1973.
267. Kristian, Jerome, Astrophysical Journal, Jan. 15, 1973.
268. Kellerman, K. I., Annals of the New York Academy of Sciences, Feb. 15, 1980.
269. Hey, J. S., The Evolution of Radio Astronomy, Neale Watson Academic Publications, New York, 1973, p. 169.
270. Shipman, H. L., Black Holes, op. cit., p. 204.
271. Fanti, R., et al., Astronomy and Astrophysics, Apr. 1, 1973.
272. Verschuur, Gerrit, Starscapes, op. cit., p. 157.
273. Shipman, H. L., Black Holes, op. cit., p. 180.
274. Mitton, Simon, Exploring the Galaxies, op. cit., p. 112.
275. Hartmann, William K., op. cit., p. 290.
276. Ibid., p. 374.
277. Ibid., p. 375.
278. Hoyle, Fred, Galaxies, Nuclei, and Quasars, Harper & Row, New York, 1965, p. 4.
279. Rogstad and Ekers, Astrophysical Journal, Aug. 1969.
280. Mitton, Simon, Exploring the Galaxies, op. cit., p. 107.
281. Burbidge and Burbidge, Nature, Oct. 4, 1969.
282. Sandage, Allan R., Scientific American, Nov. 1964.
283. Harwit, Martin, Cosmic Discovery, op. cit., p. 145.
284. Weedman, Daniel, Annual Review of Astronomy and Astrophysics, 1977.
285. Mitton, Simon, 1973 Yearbook of Astronomy.
286. Maffei, Paolo, op. cit., p. 288.
287. Mitton, Simon, Exploring the Galaxies, op. cit., p. 120.
288. Clark, George W., Scientific American, Oct. 1977.
289. Leventhal and MacCallum, Scientific American, July 1980.

290. Harwit, Martin, Cosmic Discovery, op. cit., p. 146.
291. Ibid., p. 147.
292. Einstein, Albert, The Structure of Scientific Thought, edited by E. H. Madden, Houghton Mifflin Co., Boston, 1960, p. 82.
293. Feynman, Richard, op. cit., p. 156.
294. Ibid., p. 171.
295. Shipman, H. L., Black Holes, op. cit., p. 19.
296. Ibid., p. 16.
297. Ibid., p. 63.
298. Ibid., p. 98.
299. Ibid., p. 66.
300. Burbidge, Geoffrey, Sky and Telescope, Sept. 1983.
301. Davies, Paul, Science Digest, Sept. 1983.
302. Dingle, Herbert, A Century of Science, Hutchinson’s Publications, London, 1951, p. 315.
303. Andrade, E. N. da C., An Approach to Modern Physics, G. Bell & Sons, London, 1959, p. 134.
304. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New York, 1952, p. 55.
305. Margenau, Henry, Quantum Theory, Vol. I, edited by D. R. Bates, Academic Press, New York, 1961, p. 6.
306. Feynman, Richard, op. cit., p. 129.
307. Heisenberg, Werner, op. cit., p. 38.
308. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955, p. 186.
309. Maffei, Paolo, Beyond the Moon, MIT Press, Cambridge, Mass., 1978, p. 301.
310. News item, New Scientist, Oct. 17, 1968.
311. Davies, Paul, The Runaway Universe, op. cit., p. 33.
312. Larson, Dewey B., The Neglected Facts of Science, North Pacific Publishers, 1982, p. 58.
313. Bondi, Hermann, Cosmology Now, op. cit., p. 11.
314. Jastrow and Thompson, op. cit., p. 259.
315. Ibid., p. 271.
316. Bahcall, J. N., Astronomical Journal, May 1971.
317. Lovell, Bernard, Cosmology Now, op. cit., p. 8.
318. Alfven, H., Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966, p. 100.

319. Silk, Joseph, op. cit., p. 61.
320. Sciama, Dennis, Cosmology Now, op. cit., p. 67.
321. McCrea, W. H., ibid., p. 91.
322. Shipman, H. L., Black Holes, op. cit., p. 256.
323. Davies, Paul, The Edge of Infinity, Simon and Schuster, New York, 1981, p. 137.
324. Spitzer, Lyman, Jr., Searching Between the Stars, Yale University Press, 1982, p. 5.
325. See discussion in Verschuur, Starscapes, op. cit., p. 190.
326. Calder, Nigel, The Violent Universe, The Viking Press, New York, 1969, p. 121.
327. Rees, Martin, Cosmology Now, op. cit., p. 129.
328. Kuhn, Thomas, The Structure of Scientific Revolutions, University of Chicago Press, 1962, p. 67.
329. Lovell, Bernard, Cosmology Now, op. cit., p. 7.
330. Verschuur, Gerrit, The Invisible Universe, Springer-Verlag, New York, 1974, p. 139.
331. Kellerman, K. I., Physics Today, Oct. 1973.
332. Feynman, Richard, op. cit., p. 160.
333. de Vaucouleurs, G., Sky and Telescope, Aug. 1978.
334. Shapiro, Irwin, Technology Review (MIT), Dec. 1975.
335. Walker, Marshall, op. cit., p. 28.
336. McVittie, G. C., General Relativity and Cosmology, Chapman & Hall, London, 1956, p. 5.
337. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, p. 129.
338. Heisenberg, Werner, Physics and Beyond, Harper & Row, New York, 1971, p. 123.
339. Heisenberg, Werner, Philosophic Problems of Nuclear Science, op. cit., p. 55.
340. Waismann, F., Turning Points in Physics, Interscience Publishers, New York, 1959, p. 154.
341. Bridgman, P. W., op. cit., p. 93.
342. Lindsay, R. B., The Role of Science in Civilization, Harper & Row, New York, 1963, p. 84.
343. Walker, Marshall, op. cit., p. 6.
344. du Nouy, P. L., The Road to Reason, Longmans Green & Co., New York, 1949, p. 20.
345. Ford, K. W., Scientific American, Dec. 1963.
346. Ford, K. W., The World of Elementary Particles, Blaisdell Publishing Co., New York, 1963, p. 214.
347. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938, p. 159.
348. du Nouy, P. L., Between Knowing and Believing, David McKay Co., New York, 1966, p. 239.

349. Gold, Michael, Science 84, Mar. 1984.
350. Schorn, Ronald A., Sky and Telescope, Feb. 1984.
351. Hoyle, Fred, Science Digest, May 1984.

DEWEY B. LARSON: THE COLLECTED WORKS

Dewey B. Larson (1898-1990) was an American engineer and the originator of the Reciprocal System of Theory, a comprehensive theoretical framework capable of explaining all physical phenomena from subatomic particles to galactic clusters. In this general physical theory space and time are simply the two reciprocal aspects of the sole constituent of the universe–motion. For more background information on the origin of Larson’s discoveries, see Interview with D. B. Larson taped at Salt Lake City in 1984. This site covers the entire scope of Larson’s scientific writings, including his exploration of economics and metaphysics.

Physical Science

The Structure of the Physical Universe: The original groundbreaking publication wherein the Reciprocal System of Physical Theory was presented for the first time.

Nothing but Motion: The first volume of the revised edition of The Structure of the Physical Universe, developing the basic principles and relations.

Basic Properties of Matter: The second volume of the revised edition of The Structure of the Physical Universe, applying the theory to the structure and behavior of matter, electricity and magnetism.

The Universe of Motion: The third volume of the revised edition of The Structure of the Physical Universe, applying the theory to astronomy.

The Case Against the Nuclear Atom: "A rude and outspoken book."

Beyond Newton: "...Recommended to anyone who thinks the subject of gravitation and general relativity was opened and closed by Einstein."

New Light on Space and Time: A bird’s eye view of the theory and its ramifications.

Quasars and Pulsars: Explains the most violent phenomena in the universe.

The Neglected Facts of Science: Explores the implications for physical science of the observed existence of scalar motion.

The Liquid State Papers: A series of privately circulated papers on the liquid state of matter.

The Collected Essays of Dewey B. Larson: Larson’s articles in Reciprocity and other publications, as well as unpublished essays.

The Dewey B. Larson Correspondence: Larson’s scientific correspondence, providing many informative sidelights on the development of the theory and the personality of its author.

The Dewey B. Larson Lectures: Transcripts and digitized recordings of Larson’s lectures.

Metaphysics

Beyond Space and Time: A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science

The Road to Full Employment: The scientific answer to the number one economic problem.

The Road to Permanent Prosperity: A theoretical explanation of the business cycle and the means to overcome it.