A Comprehensive Model of Reality Based on Quantum Physics Worldview
Salman M Salman
Physics Department, Alquds University, Palestine.
Abstract
We propose a comprehensive model of reality based on the assertions of the quantum physics worldview, and examine its effects on our conception of objective reality. The model adopts ideas from many interpretations, but also adds new concepts related to a wider meaning of a generalized uncertainty principle, with new rules dictating the mixing and reduction of states, and the emerging types of statistics, locality, entanglement, and consciousness. In addition, we identify the roles of the principle of least action, the second law of thermodynamics, conservation laws, and other concepts related to the role of symmetries. The model relies strongly on logical consistency and experimental support.
Many details related to certain phenomena are addressed in a novel way. Concepts regarding existence and reality are discussed from logical and physical perspectives, leading to a new definition of objective reality. In addition, we address the concepts of nominal particle size, the generalized inverse square law, and a single particle occupying more than one of the mixed states, as in interference. Some suggestions are made about field quantization using simpler methods for field theories. We comment on gauge field theories and the Higgs mechanism, which we consider an arbitrary assumption rather than a necessity. We present logical arguments that favor particle/energy fundamentality over fields.
A comparison between this model and other interpretations regarding these concepts is made. The conclusion is that it is feasible to build a working, logically consistent, and comprehensive model of reality that can address many of the contradictions raised during the past century. Finally, we propose certain experiments to test the feasibility of some key points and concepts.
The work is divided into three parts. Section I introduces the problem and relevant issues and classifies the quantum interpretations into four classes, namely, minimalists, accommodating, objective realists, and maximalists. We give some comparisons and evaluations in terms of reality. Section II represents the core of the article. It introduces the model's basic assumptions underlying the generalized uncertainty principle. The model adds core assumptions on the definition and the objective conditions of mixing and reduction of states, the nominal size of entities, and certain developments related to the three main types of statistics. Section III brings some examples and experiments for the model's justification and predictions, and possible physics extensions based on the proposed model.
Keywords: Quantum Mechanics; QM Interpretations; Entities (energy and matter); Forces and Fields (FF); Space-Time (ST); Dynamic Laws (DL); Boundary Conditions (BC); Forms of Statistics; Generalized Uncertainty Principle; Quantum Mixing and Reduction; Measurement; Nonlocality; Entanglement.
I. Introduction and Preview
It was hard to describe reality beyond the positivist culture of the past 300 years, which is based on experimental science and rational logic. It is even harder to find a consistent, logical, and practical solution or theory for the underlying processes that produce what we see. A review of the thoughts of the scientists and philosophers of the last century who handled this issue tells of the level of difficulty [1-2]. However, in judging their works and views from the current vantage point, one may underestimate the complexity, because since then many refinements and experimental results have made the current view better informed and have trivialized some early objections [3-5].
More importantly, the challenging discoveries of QM started at a time when the positivist school of modern philosophy and science was strong and hard to challenge [6]. The concepts of Greek logic and Renaissance positivism influenced the formulation of the new theories in their early development, which proposed major departures from the old philosophy. Relativity and quantum mechanics represented two major challenges to the established order, and in many aspects they generated controversies.
Some scientists and philosophers of the 20th century resisted the implications of QM, because some hints suggested by the discoveries seemed to support concepts connected to a metaphysical worldview [7-9]. Resistance to relativity started high, but then it was reduced to certain features related to the implications of speeds beyond the speed of light, which were not taken seriously by the founders or developers. In this work we focus more on the implications of QM, but we will address certain issues in relativity.
It is understandable to see reservation and resistance from early positivist science toward concepts that may lead to the approval of conclusions connected to a supernatural, non-local worldview. We have to admit that the negative tones have affected a healthy and objective evolution of the discoveries. However, the sheer success of theories and experiments forced significant transformations on 19th-century positivism, yielding what is called logical positivism or empiricism [6].
QM and relativity started as necessities, not choices, as is the case for many great theories. We will not review the historical development of the interpretations or the issues on which they differ. Our comparative review of some of the major interpretations addresses only particular questions of concern in relation to our model. For a preliminary review one can refer to [1-5].
I.1 Classification of QM Interpretations
We categorize the views concerning the implications and interpretations of the QM dynamic laws into four major groups:
1- The Minimalists hold positions ranging from skepticism about the actual meaning of the formulations to acceptance of the formalism with minimal modifications. These include the EPR objections; some ensemble or statistical interpretations; consistent histories; and parts of the hidden-variables interpretations [10-14].
2- The Accommodating try to preserve the classical worldview while providing viable answers to the new questions. They are more open to modifications, provided the modifications converge to the classical view. Leaders of this view include Heisenberg, Bohr, Born, and Pauli [15-18].
3- The Objective Realists are willing to incorporate new ideas as part of a new objective worldview. Bohm and de Broglie proposed deterministic development through non-local hidden variables [14]; the information interpretation tries to explain the controversial Dirac and von Neumann wave function reduction, and the non-local entanglement with its long-range ramifications [19]; Penrose and Stapp suggested an independent objective reality that incorporates consciousness as an outcome of the same laws that govern matter [21]. They propose deeper levels of actualization that may incorporate issues classically considered metaphysical [22-23].
4- The Maximalists tried to apply the new assertions of QM to the letter, as absolute realities, and they are willing to change the worldview dramatically by introducing unlimited modifications, including the complete correlation of all processes in the universe. Major interpretations include Everett's many worlds and many minds [24-25].
In our classification the terms may need specific description.
The Minimalists consider reality as one of their judging criteria, which includes determinism, locality, and logical completeness. But their definitions of these terms are not universal terms of reference for reality. In some respects they resist modifications even when that keeps, or sometimes creates, contradictions. The strength of their arguments lies in the minimal additions needed to describe the new phenomena. However, they rejected new ideas and new understandings of reality by proposing paradoxes that do not survive reality tests, and sometimes logical tests, once moved to levels beyond the classical axioms. For example, the EPR proposals use arguments to discredit the theory and show its incompleteness by insisting on the absolute accuracy of some circular assumptions [10-11].
The Ensemble Interpretation explains fairly successfully the consistency of the new formulations using familiar ideas (the ensemble and ordinary statistics). The interpretation is mostly consistent, because it answers many questions without adding major concepts or assumptions. However, in their quest to keep the old order, its proponents reject possible additions to the concepts (e.g., built-in consciousness) and ignore major outstanding questions that are not answered by the ensemble ideas (interference and diffraction). They practically dismiss the single-particle state [13]. The consistent histories interpretation claims to solve the interference question [12], [27], but it offers a description more than an explanation. It did not address a condition for the occurrence beyond what Feynman originally proposed, which does not offer a universal quantitative limit. The main point was that interference will show only if the related histories that produce it are there, and otherwise it will not show. In some sense the pointers allow for the possibilities but do not identify the condition controlling the yes or no. The interpretation's claim of solving the double-slit paradox is not objectively convincing and does not provide new information or predictions beyond the statistical ensemble, although it may provide a basis for machine-learning statistical methods and their possible claim to natural intelligence.
The Accommodating were practical at the start. They tried to incorporate the new discoveries without altering the philosophical conception of reality. However, they were not consistent in many explanations. Bohr's introduction of the complementarity and correspondence principles served the transition and kept speculative ideas in check, but that did not last long. Heisenberg proposed the uncertainty principle, and Born conceptualized the relation between the probability density and the square of the wave function. In some respects the three are the creators of indeterminism and the associated probabilistic aspects of many natural processes, which produced other controversial issues [18-19]. It seems we have to address these ideas in depth to resolve most of the outstanding problems.
One prominent issue is their seeming insistence on the centrality of measurement for realizations. Even though they did not limit measurement to human participation, accepting it as a registration of actualities, they did not emphasize objective interaction with other entities as the only requirement needed to make transitions between states. Their view of the actuality of the wave function and its role was hesitant at best.
The mixing of states and the probabilities were considered potentialities that can be realized when a measurement or an interaction reduces the wave function, but they did not suggest a mechanism or a condition that triggers the reduction besides measurement [15]. Moreover, the uncertainty principle Heisenberg discovered was not utilized to its full potential, as we will discuss extensively in this article.
Their amplification of the collapse's dependence on measurement overshadowed the more objective independent collapse that might require non-local phenomena. Their hesitance to consider the reality of non-locality made them vulnerable to the logic of the EPR and Schrödinger paradoxes, which could have been answered had they changed their interpretation a little. In fact, many of the EPR paradoxes can be trivialized by the ensemble interpretation. What is more critical, however, is to expand the mixing-reduction scope to include cases of outcomes of mixing or correlations (interference, entanglement).
They remained strongly attached to the classical conception, which resisted considering new phenomena within the real domain, such as the emergence of macroscopic consciousness as a result of mixing, which is not similar to the cases described by the ensemble approach [13].
The Objective Realists may not appear so in the eyes of others, because they insist on the reality of the formalism (the states, the distributions, and, for some, the state reduction). We consider them realists because they try to explain the phenomena in forms that ontologically enrich the scope of real things compared to the classical worldview. In addition, they were able to make a strong case by incorporating the new formalism without losing the objectivity of the world.
One of their problems, however, is the insistence on considering the realization of states always as a result of reduction of the wave function (without proving that), in full contrast with the ordinary realizations familiar in nature (statistical realizations are widespread in the universe without wave function reduction). On this point the ensemble interpretation gains stronger points. Furthermore, they did not explain the emergence of quantum outcomes from mixing conditions at the classical level (superconductivity), even though Penrose devoted a long discussion to showing the fantastic expression of QM in the classical world. Penrose is also a strong promoter of the emergence of consciousness, but from reduction rather than from mixing.
In a recent experiment at Toronto, Sacha Kocsis et al. [26] claimed to catch interference patterns consistent with the Bohm calculations. The evidence is not conclusive, but it supports the position that a certain amount of interaction can kill the interference, and that if we can detect the path without consuming this amount, the pattern can show. This is generally in agreement with Feynman's proposition [12], which states an expectation but does not define the underlying boundary. Their framework is open to refinements, but that may require adding new concepts and physical constraints. Determinism is not the strong part of the Bohm formulation.
The Maximalists try to push the literal QM formalism to alter the worldview dramatically, and they are in some sense driven to do so. The many-worlds interpretations are not consistent with a finite universe. They claim other worlds are created somewhere else for each realization of a possibility, and each event occurrence or measurement separates them and prevents detection. Because of the accumulative nature of this approach, it must have reached infinity long ago. Hence we have infinite undetectable universes that we cannot deal with afterwards. It ends up nowhere. The usefulness of the interpretation is the removal of the centrality of the wave function collapse, which was not explained convincingly by the other interpretations (Bohr invoked measurement; Penrose relates it to some type of fundamental gravity-connected interaction [20]). The many-worlds view is counterintuitive, and even though the universal wave function is theoretically acceptable, it is a remote possibility and can be removed without harming reality or the formalism.
I.2 Outstanding Questions
In conclusion, some important assertions of QM were not addressed or were treated casually. If dealt with differently, they could influence our understanding with a small number of new assumptions. We may need to revisit the meaning of the following assertions.
The uncertainty of coupled operators [16] was treated at first as a practical measurement problem. Later it was accepted that the uncertainty is not connected to measurement, but is a natural width of the operator values. The uncertainty in the result of a measurement is not less than some limit even if we have better resolution.
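For reference, the standard (Robertson) form of this relation for two observables A and B is

ΔA · ΔB ≥ ½ |⟨[A, B]⟩|,

which for position and momentum, with [x, p] = iħ, reduces to Δx Δp ≥ ħ/2. This is the natural width referred to here; it does not shrink with better instrument resolution.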
This assertion is verified through ensembles and not a single particle, which is expected. Accordingly, the uncertainty for a single particle was overlooked. In addition, the question of what happens if the uncertainty is less than the set limit was not addressed. The assertion that the measured natural width of non-commuting operators is greater than some value is not logically equivalent to denying processes that do not cumulatively satisfy these requirements (a single-particle measurement may yield values less than the limit), especially when we consider that things can happen before measurement (the evolution of the state function). The conclusion was that the limit means things cannot be considered real until we perform some measurement, and that weakened the argument. That said, it is unusual to hold such a conception while most daily theoretical and experimental physics takes for granted the use of the uncertainty relations to explain the role of virtual particles in fundamental interactions, invariant masses, or lifetime calculations. We will have more on this.
The non-locality of entangled states [28]: Yes, we have a local world, but QM theory suggests non-locality, with indications. Bell showed that QM is not compatible with locality, but many people did not accept it, and others considered it a case against the reality of QM. The question is: can we avoid non-locality without losing parts of reality? Alternatively, if we accept non-locality, is it possible to identify the conditions and processes that yield results consistent with the local world? Here again we can see virtual things significantly ruling the world of interactions, while we still call them not real. We will follow up on this too.
States mixing-reduction, entanglement, and emergence: The main missing point is the impact of mixing on classical phenomena. Is it possible to have interactions without mixing at some point? The other question is how to define the quantitative transformation conditions between mixing and reduction.
Types of possibilities: An important issue that was not treated with enough distinction is the statistical nature of things. For many, statistical phenomena reflect the manifestation of a degree of ignorance [29]. That may be true for many statistical phenomena and for how distributions are envisioned. Statistical mechanics tries to predict the evolution of distributions at the macroscopic level without knowing the details of the processes at the microscopic level. That is ingenious, because it solves many practically impossible problems and deals with macroscopic phenomena effectively. But perhaps a main point was missed here.
Finding ways to predict certain macroscopic parameters from unknown details by defining the rules of behavior of ensembles is very important for forming practical solutions and realizing the rules governing statistical phenomena. If we had insisted on knowing every detail at the micro level, we could have failed and never discovered these rules, which are part of the dynamic laws. Emergence is one important phenomenon that cannot be noticed if we focus only on the single states.
The success of these laws may have weakened the search for the underlying processes that can happen in principle. There is a difference between waiving some details of the interacting components of a large ensemble, because that knowledge does not enrich the understanding, and dismissing the phenomena at the single-particle level.
We may ignore single processes that do not impact the net outcome of the ensemble's macroscopic parameters. But in many cases we may end up overlooking emergent phenomena that can be traced with full accuracy to the fundamental interactions of the single particle. In that respect it is not always safe to shelve the knowledge of the behavior of single entities, even if we can know the overall behavior by other means.
In the process of interpreting the implications of QM, there is some confusion between what the single particle can do and what the ensemble propagates, because QM handles both with the same set of rules: the same function that applies to a single particle as a probability function ends up creating a probability distribution for an ensemble. However, if we do not count certain actions at the single-particle level, macroscopic statistical predictions may fail to explain the outcome, even if they predict its macroscopic behavior.
While admitting the importance of the ensemble interpretation, we may miss important fine differences between options (what we will call types of possibilities or statistics). These are different from fermion-boson statistics, which are connected to the spin-space structure. Three types can be seen in the QM formulations:
Type I possibilities are mutually exclusive, like atomic decays and most ordinary statistical phenomena with irreversible processes. These possibilities are independent (eigenstates), and the operator matrix yields only diagonal elements. If the states are mixed, they will separate into independent states. Additional constraints will be discussed in the next section. This type can be fully described by ensemble statistics as commonly used in quantum mechanics.
Type II possibilities can mix in principle, but may not do so, because the states represent a sequence in time or in some other parameter, or because mixing or not mixing yields the same result. They appear mixed as a result of some parameter they share in common (many angular momentum states that are degenerate in energy; many position states sharing the same momentum). We may consider the mixing real or not, because the end result can be reached the same way by ensemble statistics. They are different from Type I, because they share one common degeneracy, and they are generally reversible.
Two modes for this type of mixing can be suggested: particles occupy the mixed states according to the probability function and stay still, or each single particle oscillates between the states following the probability function. The latter is not expected to be feasible for orthogonal states without external action, because otherwise, if we select a certain polarized sample out of a mix, it would relax back to the original distribution without external interactions, and that should not be possible. This point was a source of confusion that resulted in overuse of the wave function reduction when in reality there is no such reduction. In fact, if this point is addressed correctly, ordinary mixing of states and the universal state function lose much of their associated mysteries.
Type III is fundamentally different from the first two types. The Type I probability distribution reflects diagonal elements, and any mixing of higher orders can always be reduced to the original eigenstates. For Type II the states can remain degenerate in some parameter, but they are fundamentally separate states.
Here, two or more states are actually mixed, meaning each particle acts as if the states are occupied at the same time. It is possible they may not be representable as a linear sum of the eigenstates, because they represent independent spaces. This is similar to the case of a particle's spin and position: these are independent domains, and the particle can have both of them concurrently. The difference here is that this happens in the same space, which is not possible locally, and may require non-local influence to allow such a configuration. Surely these states cannot be measured, but their mixing may manifest itself through behavior, as if it is what is actually happening. That may seem impossible and contrary to reality or to the linearity of QM.
We consider here cases where the mixed states coexist at the single-particle level, and results are detected based on the mixing rather than on separation or reduction, as in the case of Larmor oscillations (this is not the same as a failure to remove the energy degeneracy of angular momentum states). The best case is interference (there is no way to describe the state function of the two paths of a double slit as a linear sum of two independent eigenstates and claim that the total probability is the square of the first plus the square of the second plus a mixing term that is zero for measurably separate states).
Even if the mixing term is not zero, we know that interference is complete and cannot be described in this form. This means every particle must be considered to pass through the two slits. That is consistent with Feynman's assertion that if a particle is detected then the two states are not mixed and interference will not show. These results of mixing are completely different from atomic energy levels, which represent the solution to the Schrödinger equation. We cannot dismiss this fundamental difference. The ensemble interpretation cannot explain it, and attempts to do so are not convincing, as described in [13]. Consistent histories also does not offer a clue about the process; it describes interference just as the traditional phase difference accurately describes it, while implicitly assuming these are actually two waves. We know this has nothing to do with the interaction of different waves, because the interference stays even at arbitrarily low intensity. The complementarity principle fails, because if we describe it as a wave, a single photon cannot divide itself, because the wavelength would be different. The only way is to consider this an independent class.
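In standard notation the distinction reads as follows (a conventional restatement, not a new result). A Type I (incoherent) combination of the two slit states gives a detection probability

P = |ψ1|^2 + |ψ2|^2,

while the Type III (actually mixed) single-particle case gives

P = |ψ1 + ψ2|^2 = |ψ1|^2 + |ψ2|^2 + 2 Re(ψ1*ψ2),

where the cross term 2 Re(ψ1*ψ2) is the interference contribution that persists at arbitrarily low intensity.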
Addressing the above points may result in a worldview that can accommodate the classical world and incorporate processes that are hard to explain or envision classically or with the available interpretations. We present the model description in the part that follows.
II. Model Of Reality Description
In this model of fundamental interactions and particles, we adopt mostly the standard model worldview about space-time, matter-energy, and the basic forces and fields, but with additions connected to our topic. The unsolved issues and calculations regarding the strong, electroweak, and gravitational interactions are not addressed in this model, but logically they can be.
II.1 Some Clarifications
When we call something objective, we mean it has its own reality and does not require observers to assert it [40]. But then no one would know anything about it. Taking this point into account, to acknowledge an objective reality one needs to assert: 1- it must have objective existence, and 2- it must have shown indicators of reality through experiments or inference. In this we do not differ from the standard positivist point of view, but we are open to more possible outcomes. In this sense, dynamic laws and boundary conditions are objective realities, and we can infer their role through observations and measurements.
Human observers are only needed to realize the objective realities within the human domain. Without that, they will continue to exist, but we will not be aware of their existence. We may run into problems when we encounter their actions unaccounted for, but we may not figure that out for a long time. This means our knowledge of reality is not complete. There is always a possibility of discovering new realities, and our view should be modified accordingly (a triviality). Things do not have to be detected directly; they can be detected through their impact, with repeatability. Again we assert trivial things, but this is usually the case when trivial axioms are challenged.
Logically there are limits to our vision: we cannot know what was before the existence of Things, and how they came to exist. Sometimes we cannot know how long they last, or why they are the way they are. These points are logically beyond the realm of science and the description of objective reality. Luckily, we do not need this regularly to find good practical solutions for the existing world, but there are a few exemptions.
We can classify existing Things into two major types: observables, which can be detected objectively, independent of the point of view of the detector; and unobservable Things, which are not physically measurable but have a measurable impact on observables, either one step away or through more complicated hierarchies. A subgroup of the unobservables are Things that have not yet shown their impact on other things; they are potential discoveries, or they may never show their impact, because their domain of influence is not connected directly to observables [40].
According to our definitions in the next section, Entities can be observable or unobservable. Forces and Fields, Dynamic Laws, and Boundary Conditions are all Things that are not observable themselves, but have shown their influence. New laws will not be counted until they show their impact.
A second point is that we assume all Things exist in the same universe in the sense of space-time. Since space and time are considered part of the Things, especially after their demotion by relativity, there must be pre-Things that we cannot comprehend. It is better to stop at this point and not spend a lot of time speculating, even though sometimes it is feasible to infer this preexistence or deny it. Big bang research spends a lot of time on how Things started, albeit claiming it only wants to study the first few seconds of existence in order to explain the later evolution.
The second category is nonexistence: potential observables that never materialized, or that will never have any impact on any other things, including unobservable things.
This covers a good part of the imaginations that cannot be realized. The human brain is capable of imagination and can create combinations limited only by the permutations. A good part of these combinations deserves the label of nonexistence, but some (lies) can be installed in minds and taught by others as real things, and they will appear realized.
This misplaced trust makes what follows from it appear real according to our definitions. That opens the door to accepting the big lies of history and wrong ideologies or acts as real things, which is the case in many respects. Here we seem to be mixing true-false with right-wrong. If we observe the definition, they must be considered things, albeit ones that may need to be classified in some separate class. We may need to add other criteria to discriminate between these non-things, which are just creations of some mind's experience, and other things. Here there are many points of view. Some may ask who we are to decide, putting value on some and dismissing others out of existence. It is a very hard question.
Objectively we cannot deny them, but from subjective natural morality we prefer to put them in a different class compared to real good things (a value judgment). Most of us would feel better if we could actually define criteria for good and bad in a sense based on truth.
Some good definers are the boundary conditions; others include consistency; and physics generally trusts mathematics as a universal definer. If a result is derived from established mathematics, it is considered a good theory, and we tend to accept it as part of real existence. Reliance on mathematics is good, but it is not always successful, because mathematics is based on logic, and logic can be built from non-existing things, just like the lies. Here experiment is usually what separates sensible logic from nonsense, but we know this is not that simple.
Mathematics is like instrument flying. If all conditions are accounted for in the operating algorithm, the result will be fine and the flight will be safe. If an unexpected variable comes in, and the mathematics cannot tell where to go, a crash may occur. Realism can truly rely on established mathematics, but something from outside the logic is usually needed to foresee the unexpected and act accordingly. In most mathematical derivations we introduce solutions or tricks from outside the local scheme to keep the flight going. If we stick to pure logical mathematics, we usually stumble.
It seems this will never be fully resolved. We may have to maximize cross-testing, logical consistency, and experiment. These are old questions, and the rise of positivism was meant to reduce the mess in the previous world. Its initial criteria were stringent, which was justifiable but not correct. Meanwhile, old positivism's reduction of the world to the existence of observables only has lost dominance in the last 100 years, and the new positivists are more accommodating of things beyond the observable. We have to keep in mind, however, that if we allow opening this Pandora's box, we can fall into a world of lies and helpless ideologies even within the heart of science itself, because corruption usually flourishes around the powerful and influential.
The optimum position is to stay reluctantly open to new existences, accepting only what is necessary in the mathematical sense of logic, and what the experimental implications require where it is hard to explain things without them. They should be persistent and repeatable under similar conditions. They should not simply disappear after removing certain conditions, never to come back when we reapply them. They should be self-consistent, and finally have sufficient experimental evidence. Taking these considerations into account, we can start our model, relying strongly on logical consistency and experimental support.
II.2 Definition of Reality and Things
II.2.1 Things: include Entities (energy and matter); Forces and Fields (FF); Space-Time (ST); Dynamic Laws (DL); and Boundary Conditions (BC). We postulate that they exist and have their own objective reality. The observer is not needed in principle to describe the objective existence of Things. Things other than Entities show objective existence through their influences on Entities. In what follows we address the fundamental nature of space-time, forces and fields, and dynamic laws without details; we will deal with them in another article.
II.2.2 Objective Entities: (energy and matter) exist inside a space-time that is objective, and are influenced by other entities through the fundamental forces and fields [42-43], [8]. These forces are objective realities connected to the existence of matter and energy, and they do not need an observer to estimate their role or effects. The interaction between various Entities is governed by the Dynamic Laws, which are universal, represent a reality, and start acting instantly whenever Entities exist. Without the Entities there is not much we can infer about these Things. Some theories assume forces and fields are Entities that precede energy and matter. A lot of the current development in particle theory takes this course and works on establishing the rules of mass creation from fields. It is hard to subscribe to such a path, for reasons that we will discuss in this work.
II.2.3 Boundary Conditions: can be summarized in four principal universal laws: least action, maximum entropy, the generalized uncertainty principle, and the symmetry principle.
1. The Principle of Least Action: as commonly understood. If many options are available to a particle or a system of particles, the least energy-consuming option will be selected, provided the other principles are satisfied.
2. The Principle of Maximum Freedom or Entropy: as commonly understood. If many possibilities exist for an ensemble, the selections will be made so as to maximize freedom, provided the other principles are satisfied. The competing requirements of this principle and least action usually end in an optimized outcome [37].
3. Symmetries and Theories: Energy, momentum, and other conserved quantities represent symmetries, including those of space-time, with some variations in the applicability of the conservation laws.
4. The Generalized Uncertainty Principle: connects the values of non-commuting operators. The following assumptions apply:
1. The coupled quantities can be measured with combined uncertainties greater than or equal to their natural width (Δai Δbi ≥ hi = ci[ai, bi] for each couple i), where ci is a constant, generally complex.
2. (New) For any state x, if the combined uncertainty of any couple i is less than hi, the coupled operators cannot be measured. Within that limit, the operators can violate their conservation laws until the limit is reached or exceeded.
3. For single states, the operator variance applies to an ensemble and has no meaning for a single particle. For fermions this is not allowed, while for bosons it is preferred and superposition occurs.
4. (New) For different states x, y, the distributions of the difference functions δai = (aix − aiy) and δbi = (bix − biy) for the coupled operators i should be greater than hi for the states to be considered separate (Δ(δai) Δ(δbi) ≥ hi = ci[δai, δbi]). Otherwise we have one particle occupying two mixed states, or two particles entangled through two mixed states.
5. (New) If the difference function is less than the limit for all coupled operators i, complete mixing occurs and produces results based on that mixing. If this applies only for some i, then the states have partial mixing connected by the relevant coupled operators i. Partial or full mixing applies to single-particle dynamics (summarized symbolically after this list).
6. (New) The uncertainties in the coupled operators momentum-position and energy-time are correlated with the corresponding de Broglie wave energies and momenta.
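The criterion of assumptions 4 and 5 can be summarized symbolically (in our notation):

Δ(δai) Δ(δbi) ≥ hi for every coupled pair i → the states are separate, and reduction/ensemble statistics apply;
Δ(δai) Δ(δbi) < hi for every coupled pair i → complete mixing at the single-particle level;
the limit is crossed only for some pairs i → partial mixing through those pairs.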
Justifications: Assumption 1 is the traditional relation, which is considered derivable from the wave equation (in fact it is the other way around). The second assumption is new and allows for values less than the commutator limit, provided they cannot be measured. The new point here is allowing violation of the conserved quantities within the given limits. Assumption 3 is the basis for the statistical ensemble logic, and it is applicable to systems of particles.
Assumptions 4 and 5 are new. Here we define the correlation between two states based on the uncertainty limit. Single-state values are within that limit. Two states must have a variance greater than the limit to be considered separate. The other important implication of this assumption is that mixing applies to the single particle: it behaves within an ensemble as if it actually occupies the mixed states. This mixing creates outcomes for the single particle that would differ if the single particle did not act so. Mixing in this sense represents a unique state where the ensemble picture is not sufficient to explain the outcome.
The de Broglie wave relations are special cases of the generalized principle. The wavelength and frequency represent what we call the associated information of the wave speed. The actual physical speed and the uncertainty in the position of the particle are defined by the Compton wavelength, which defines the nominal size of the fundamental particle: below that value there is no definite information, and the interaction of the particle is measurable only outside that size.
II.2.4 Implications of the boundary conditions on the interaction topology and geometry
For more than one state, the differences represent the momentum or energy transfers, with their corresponding uncertainties in position or lifetime.
It is familiar in atomic and nuclear physics to estimate the lifetime from the δE δt limit, while the momentum transfer is governed by the δx δp limit. The new assumption here is that the coupled-operator difference between two states dictates the mixing/reduction or correlation/separation.
The forms of the coupled-operator dependencies in the various representations are derivable directly from the de Broglie relations. Another derivation comes from the classical Lagrange and Hamilton relations [66-69].
1. Superposition: For a photon, λ = h/p defines the natural uncertainty in the position. For two identical photons moving in the same direction with the same phase, the momenta and energies of the photons add, and superposition gives a total wave amplitude of 2A and an energy density of 4A^2 [31]. Using the de Broglie relations in connection with the uncertainty relations, p1 + p2 = h(λ1 + λ2)/λ1λ2, and the position uncertainty is Δx = λ1λ2/(λ1 + λ2), which equals λ/2 for two identical waves. The relative intensity is e = 4e1, with e1 = hν1/Δx1.
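As a worked check using the definitions above: for two identical photons (λ1 = λ2 = λ), p1 + p2 = 2h/λ, so Δx = λ/2, against Δx1 = λ for a single photon. The pair's energy 2hν confined within λ/2 gives an energy density of 4hν/λ = 4e1, reproducing the superposition result of amplitude 2A and intensity proportional to 4A^2.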
2. Effective size of particles and the field interaction starting point: For fields or particles, the definition of size is related to the Compton and de Broglie wavelengths. For photons these are all the same. For massive particles we define the Compton wavelength as h/γmc, derived from transforming the particle to energy, which gives the field-equivalence wavelength. The largest value occurs when the particle is at rest, and the smallest when the particle's speed approaches the speed of light. We call this wavelength the nominal size of the particle in one dimension; it applies with full meaning to fundamental particles only. The particle cannot be localized within a smaller distance, and the effective field of the particle (electromagnetic, strong, or gravitational, and possibly different for weak fields) applies outside this region and cannot be defined within the nominal size.
The center-of-mass concept and the point particle are based on approximate premises. There is no point particle in nature, and the point center of mass only works in the classical sense. A point particle is just a converging result of the spherical symmetry of a distribution of the components' effect in space. Hence we should not take the point-particle concept literally. For a single particle, spherical symmetry makes it possible to consider it a point when we deal with it from outside the object. Again, a size must be attributed to the point particle when we apply its field outside. If we can define the distribution inside the particle, we can solve the problems as in the classical sense, with quantum effects included. Otherwise (for the fundamental spherically symmetric particle) we can approximate its action as a point particle outside the nominal size, but we cannot define it inside using the same methods, because if we assume it has a uniform distribution, then the potential at the center point will be infinite. For composite non-relativistic particles, λ = h/mc as usual.
We suggest applying this condition to field quantization and the particle dimension. The de Broglie wavelength is always greater than this limit, and hence the information wave is always ahead of the interaction, which fits reasonably with non-locality. This is strictly the case for non-composite particles.
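The ordering claimed here follows in one line: for a particle of speed v, the de Broglie wavelength is h/γmv while the nominal size defined above is h/γmc, so their ratio is c/v ≥ 1, with equality only in the light-speed limit; the information wavelength can never be smaller than the nominal size.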
For leptons, the particle size defines the starting point of operation on other particles, or the startup of the field. For quarks it should work the same way on each quark alone, but because the nominal size of the quarks covers the whole region of the baryon or meson, the quarks do not exhibit clear forces on each other within that length and behave as free. If the distance increases beyond the quark size, the gluon field is too strong to let them leave. Alternatively, if the interaction field breaks the strong gluon attraction between the quarks, there is always enough energy to create new combinations of quarks or new particles, and this is consistent with confinement.
3. Composite particles: In atoms, the electron's nominal size (~10^-11 m) is smaller than the distance between the nucleus and the ground-state electron (~10^-10 m), and that enables the Coulomb field action. For the nucleons inside the nucleus the same principle applies, because the nominal size of each nucleon (~1 fermi), albeit composite, is smaller than the distance between the nucleons (1-10 fermi). On the other hand, the electron size is larger than the nucleus, which means that at close distances the Coulomb potential washes away and the electron can pass the nucleus freely, without falling into it due to the assumed singularity at zero distance.
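A numeric check with standard constants supports these orderings (using the h/mc convention; the exact figure shifts with the 2π convention chosen): for the electron, h/(me c) ≈ 2.4 × 10^-12 m, well below the Bohr radius of ≈ 5.3 × 10^-11 m; for a nucleon, h/(mN c) ≈ 1.3 fm, comparable to or below typical internucleon spacings; and the electron's nominal size is indeed orders of magnitude larger than a nuclear radius of a few fm.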
The quarks' nominal size is larger than the composite particle size for a certain range of relative energies of the quarks and the composite particle. This means the quark states can significantly overlap and represent a mixed state. It is not necessary to address a violation of the Pauli principle, because other quantum numbers distinguish the quarks (color for the same quark type). In fact, for a particle occupying two or more mixed states the Pauli principle does not apply.
4. Field Quantization: instead of δ(x−y), the region of integration should be greater than the nominal size of the exchange boson, and it should apply outside the nominal size of the quark or lepton. When many fields are present, each field's interaction is calculated separately and then added.
5. Particles and Fields: the field makes a good instrument for solving many outstanding problems, but it is hard to imagine a self-occurring field, except maybe the gravitational one (due to mass-energy equivalence). Start from the gravitational field of gravitons and translate the effective size at the field point into h/p = λ; this is related to the field strength at that point and defines the possible outcomes.
The Planck units of mass (or energy), length, and time are built on the three constants h, G, and c. They suggest a smallest length, the time needed to cross it, and the effective energy required to localize within that length. They suggest practically the same conclusions as those above. The derivation of the shortest length comes from combining the constants, l_P = √(ħG/c^3). The origin is implicitly based on our discussion above. Some examples may justify the point.
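For completeness, the standard combinations and magnitudes are: l_P = √(ħG/c^3) ≈ 1.6 × 10^-35 m; t_P = √(ħG/c^5) = l_P/c ≈ 5.4 × 10^-44 s; and m_P = √(ħc/G) ≈ 2.2 × 10^-8 kg, or about 1.2 × 10^19 GeV/c^2.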
6. The generalized inverse law [38]: The general energy-time relation δE δt = ħ means virtual energy can be created, in violation of conservation, within the allowed time scale. The interaction energy equals hν or mc^2, and the time is r/c; hence hν·r/c = mc^2·r/c = εħ, with ε < 1. This is the maximum violation allowed within the time frame. Hence hν or mc^2 ∝ ħc/r, indicating an inverse law (ħ = Planck constant/2π).
From experiment, for gravity ε = 10^-42, for the electromagnetic interaction ε = 1/137, and for the strong interaction ε = 1. If for each case we set hν equal to the potential energy, this yields for the Coulomb field ke^2 = εe ħc, with εe and e variables [65]. For gravity, εG ħc = Gm^2, and we need to define m and εG. The strong field is given by εs ħc/r = the gluon energy. For the weak force the exchange is massive: (mW or mZ)c^2 = εw ħc/r. With mW or mZ fixed, there is a maximum r for the operation to apply, from the nominal size outward. For example, in neutron decay, mW c^2 = εw ħc/r with εw = 1 yields r equal to a fraction of a fermi, which is comparable to the nucleon size. It seems the nominal size concept works for all interactions, albeit narrowly for the weak interaction.
For gravity, we have two arbitrary values (the minimum mass and εG). For the electromagnetic case, using e as the minimum charge and the electron's nominal size at rest yields the fine-structure constant. For the strong interaction we need to address the minimum gluon energy, assuming εs = 1.
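These ε values can be checked against standard constants (a sketch; the gravitational normalization is one common choice, here the electron-proton pair): for the Coulomb field, εe = ke^2/ħc ≈ 1/137, the fine-structure constant; for gravity, εG = G me mp/ħc ≈ 3 × 10^-42, consistent with the quoted order of 10^-42; and the strong coupling at nuclear scales is of order 1.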
7. Preliminary observations: If me is the minimum rest mass, then εG is 10^-42 and neutrinos must have zero mass. Virtual photons are potential emissions from charged particles. For a charge e at some point, the potential photon energy arriving at another charge e located at a distance r is given by U = εħc/r = ke^2/r. For multiple charges, add the combinations of all possible virtual photons. Example: a charge n1e acting on a charge n2e. The total number of combinations is n1n2 photons, yielding n1n2 εħc/r. For different distances, add similarly.
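Note that this counting reproduces the classical form: n1n2 virtual photons, each contributing εe ħc/r, give U = n1n2 εe ħc/r = k(n1e)(n2e)/r, since εe ħc = ke^2 by the definition above.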
II.2.5 Impact of Symmetries on Theories: Energy, momentum, and other conserved quantities represent symmetries, including those of space-time, with some variations in the applicability of the conservation laws. We will follow the outcome of the standard model as the base definition for Things, but we may differ in the mathematical development. In the following section we review briefly most of the symmetry-based field theories and identify their advantages and limitations.
1. Gauge theories: successes and limitations [74-75], [87-88]; electroweak gauge theories [31]. Gauge theories are a class that selects some symmetry invariance of the Lagrangian under a certain transformation. For a class of fields considered the real source and generators, the transform is tuned until something fits the experimental data. Examples include the successful classical electrodynamics vector potential, and the electroweak SU(2), which assumes a symmetry breaking that produces four bosons: three with mass (two charged, one neutral) and one massless. The generators are two fields. A connected one is the Higgs mechanism, shared with three other similar publications in 1964 utilizing the same idea, producing the four electroweak bosons and then assuming the Higgs as the creator of mass (but claiming this is a result of the mathematics is not true) [48-56].
2. Higgs models with mass and charge generation from fields [31], [77, 78, 85]: The creation of masses, fractional charges, and spins from field quantization is not convincing [89].
3. The standard model is based on similar symmetries: weak isospin, quarks and leptons, and exchange particles [81-82], [87-88]. The classification of particles and patterns borrows heavily from the angular momentum symmetries [70-73], creating the isospin. Discrepancies of the standard model include proton decay, a fifth force, and magnetic monopoles, although the model predicts them [32].
The standard model [87] is mainly developed from experimental data and only partially based on a generalized SU(3) to include the quarks, the electroweak leptons, and the associated bosons. The model contains 25 particles in total (6 leptons, 6 quarks, 4 electroweak bosons, 8 gluons, and the Higgs particle). Among these 25 particles, 11 were already discovered, and the 8 gluons were parametrized to fit the data. The 3 weak bosons and the last doublet of quarks predicted by the model were tuned to meet the estimates expected from experimental data.
There is no compelling reason that masses are created from fields, or that fields existed before any masses or charges. Gauge transforms can satisfy the invariance of infinitely many fields. It seems more logical to take it the other way around. In both cases we cannot explain why they existed, being fields or particles. Hence this does not add an argument for either assumption, but we can use some limit checks.
Without masses and charges, fields disappear completely. In addition, we can zero the fields at some point by simply putting the masses or charges in a certain configuration, but we cannot zero the masses except through energy transformation, and because of particle-antiparticle asymmetry it is not possible to transform all masses into energy. It seems the mass, or its energy equivalent, is more fundamental. Electromagnetic fields can be rendered zero by adding negative to positive charges, but the charges will remain while canceling each other's fields [47], [84].
In addition, if the fields are the fundamental quantities, then particles and antiparticles must be created in equal amounts, even though there is no clear solid mechanism to explain this. We still have to handle the problem of the asymmetry between particles and antiparticles. The spontaneous symmetry breaking solution for this question is incomplete at best. Using symmetry breaking to explain any case we cannot otherwise explain seems less like science and more like dogma.
The field relations originating from mass and charge do not have these limitations. In addition, gauge fields are arbitrary and represent inverse functions. If we start with mass, charge, and spin, we can produce all the needed fields and effects without imposing conditions or allowing arbitrariness. We can always use the outcome of the potentials to track back to the original cause without introducing exotic assumptions, but we cannot do it the other way without such assumptions, as the Higgs mechanism proposes.
The process of creating mass from fields is bizarre. It admits that the field at first does not have this capacity, but because the symmetry is broken spontaneously, this allows for charge creation, which cannot exist without mass. Creating charge means creating mass; so be it, mass is created, hence our process creates mass. This is a typical self-fulfilling cycle [48-56].
4. QFT summary: field quantization is based on creation-annihilation operators using a certain gauge and an invariant field; defining the field operators; breaking symmetries; then producing results. QFT considers the fields as the originators of the particles. The model has problems with large degrees of freedom and non-local control of location. According to relativity they cannot be particles, but that is not a satisfactory conclusion, because it takes general relativity as proven to be correct or original. There are divergences due to the point localization of particle self-energies, and a divergence between the proper and effective mass; why must particles be points? Most of these problems come from general relativity.
5. Renormalization divergence: this is a result of taking the field action at a point. The whole concept is flawed. One can create fields and interactions with no need for field quantization; ordinary virtual bosons can describe the field. Otherwise, you need to span the universe to create the fields. Do fields exist on their own, without originators in charge, mass, or nuclear charge? Then how could we measure the field without a charge in some local region?
The Higgs mechanism's main assumption is that since there is no charged boson with zero mass, if there is a charged particle it must have mass. The Higgs field is just an arbitrary gauge field that can produce the intended result. Spontaneous symmetry breaking is used because there is no other way to allow for the mass creation. The degeneracy of the potential states is more similar to the degeneracy of the position state of a free particle with well-defined momentum. We do not conclude that symmetry is broken if the position of the particle is identified at some point, because it can be predicted in principle within the wavelength. The gauge potential can break the symmetry if there is some momentum induced in a certain direction by the field.
If there is absolute symmetry, like the spin singlet with no preferred direction, it will not break unless there is enough interaction to break the singlet itself. The Mexican-hat potential is a simple case of Type I statistics, like throwing a die and getting one face.
The Higgs mechanism and symmetry-breaking terms are needed by SU(2) to produce the particles with mass. Using normal terms, the theory predicts the possibility of massive bosons, because by assumption charged bosons must be there, and charged particles must have mass (a circular argument).
Gauge theories are a good example of applying the uncertainty principle as a boundary condition in addition to other conditions. The problem with gauge theories is that they are many and arbitrary, and you have to pick the right one mainly by trial and error. Even though they are good and useful, they cannot give the whole picture. The standard model is based on a collection of gauge theories, acquiring most of the needed information from experiments.
II.2.6 Remarks about symmetries and theories
- It is hard to envision quantum gravity based on general relativity.
- A more feasible quantum gravity could be reached along the lines of the standard model, if one can pick the right gravity U(1) gauge fields, similar to the e.m. fields.
- The main problems of quantum field theories are divergence and the insistence on locality [84].
- Dark matter and dark energy are unnecessary concepts. The missing 95% is a theory prediction that is not necessarily correct. There is no experimental evidence of this missing 95%, unless we assume the universe is 20 times the mass of the visible universe based on the space expansion (a circular argument) [30].
- Absolute locality is not justified, and the confinement of free quarks is not explained.
- Gravity is not addressed within a generalized framework.
- Supersymmetries represent another layer of mathematics with no tangible outcome.
II.2.7 Origins of Interactions (new)
1. Uncertainty in the values of the basic physical quantities is important in triggering interactions.
2. The principles of least action and maximum entropy are the underlying controls over state realization.
3. Four fundamental quantities reside on particles and define their identities (mass m, electric charge e, strong charge g, and spin).
4. There are no unique definers of the origin of these quantities; any option will require other quantities to be the original ones [62-65].
II.3 A Model of Universal Existence
To address the above questions, we propose the following vision of existence. The start is an eternal, open, infinite universe with absolute parameters. We cannot tell how it operates, because we cannot have physical local access to it, except maybe a glimpse through non-local extensions. This assumption is not critically necessary, but it can provide wider insight into what occurs in the finite universe.
In this vision, our created physical universe is a bubble that started at some space-time point from within the eternal universe, and the relativity of space-time also started then. If you do not like the eternal-universe background, just omit it, but then you have to assume the universe started at some point [39, 41]. The energy was available as photons or gravitons, but not gluons or weak bosons. At some condensed-energy singularity the process starts, similar to the big bang description.
We can suggest many initial conditions for the starting point that can be localized if we accept the eternal universe; there are a few suggestions we will propose separately.
The big bang is heavily based on gravity, but that is not the same as being fundamentally dependent on general relativity; it is not. We do not yet have a solid theory of gravity, and borrowing from general relativity raises more questions than answers. The black hole concept has a deep connection to gravity: it is a high-density mass with a huge field around it that can affect light [44], [58-61].
Gravity's effect on photons has no derivation from relativity. It is just an assumption that makes sense; even if there is no measurable experimental evidence to confirm it, there is surely nothing against it. The mass-energy relationship is rooted in gravity [76].
There is a genuine need for special relativity, which can be described as the relativity of space-time and energy-momentum, and the invariance of the space-time and energy-momentum four-vectors. It is real, and we need it to predict things accurately at high speeds; it is so successful that we cannot function without it. Gravity is a big field to investigate, and special relativity is needed especially in the high energy regime.
To describe gravity, a U(1) symmetry similar to electromagnetism seems close, but there are central differences in a few aspects. The relative strength is very small (about 10⁻³⁷ of the electromagnetic strength), and for gravity the field charge and the inertial mass are the same. This is unique to gravity, but the gravitational inverse square law is so similar to the electric case that it is tempting to develop quantum gravity dynamics along a similar path. Electrodynamics has failings because of problems with interpreting the infinities of the probability function integrals; meanwhile, renormalization is used to solve many such problems. Quantum field theory has more problems [84, 90], but it can be utilized.
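As a rough numerical check of the quoted relative strength, the following sketch (our own illustration; the choice of two protons as test particles and the constant values are assumptions, not part of the text) compares the gravitational and Coulomb forces. Since both follow the same inverse square law, the separation cancels out of the ratio.

    # Hedged order-of-magnitude check of gravity vs. electromagnetism.
    # The 10^-37 figure depends on which particle mass is used; the
    # proton-proton case is assumed here.
    G  = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    k  = 8.988e9         # Coulomb constant, N m^2 C^-2
    e  = 1.602e-19       # elementary charge, C
    mp = 1.673e-27       # proton mass, kg

    ratio = (G * mp**2) / (k * e**2)   # F_grav / F_coulomb at any common separation
    print(f"gravity/EM for two protons: {ratio:.1e}")   # ~8e-37

For two electrons the same ratio comes out near 10⁻⁴³, so the 10⁻³⁷ figure should be read as an order-of-magnitude statement.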
Our proposal is to start along the lines of quantum gravity dynamics. We end up with a theory with mass at its center and charge and spin as critical components. We will accept mass, charge, and spin as axioms (we may or may not have to do that for charge and spin) [85].
There is no advantage in deriving the mass while abstracting the fields. It is understandable to use the effect to figure out the cause through backtracking; we use that frequently in derivations, but then we are just verifying the outcome from the basic mass assignments. Why they came out that way is addressed for the excited states, but not for the basic doublets. Sometimes cause and effect are equivalent (a necessary-and-sufficient, iff relation), and then it becomes a matter of style. We choose the option with the fewest floating parameters and maximum determinism (mass-energy).
II.4 Statistical Foundations of Dynamic Laws
According to our proposal, the uncertainty principle defines the mixing behavior. Quantum mechanics is non-deterministic within this definition, and it is deterministic if the choices are separated beyond the uncertainty limits.
The solution of the wave or dynamic equation provides the possibilities for the Entities' behavior. These are generally independent or separate, and the collection provides the possible states. For example, the energy spectrum of the hydrogen atom or the harmonic oscillator offers the possible states, which are not all satisfied at once by a single particle. Combinations of solutions are applicable to a collection of particles, and the given occupation distribution is an initial condition. We can prepare any mix and it will satisfy the equation. If the lower energy states are not occupied, the system makes the needed transitions to reach a solution optimized for action and entropy, governed by the boundary conditions. Here the system seems mixed while in reality there is no mixing or state reduction. In this particular problem the uncertainty limit represents the ensemble distribution for the operators.
If two states are degenerate in some operator (say energy), they will be occupied evenly by the particle or the ensemble, provided other parameters allow it (for fermions the states must differ in at least one operator value). For single particle solutions, mixing means the particle can oscillate between the states provided the boundary conditions are observed. Alternatively, if we start with a preparation with equal occupancy for these states, it will remain steady provided other parameters do not dictate a change.
For coupled operators, the values do not need to be identical for the two mixed states; it is enough to consider them mixed if the operator difference is within the commutator limit. If the mixing is complete, the single particle must occupy the two states at once, because doing so violates no rule while it gains entropy. True mixing here will produce outcomes based on the mixing, and any ensemble measurement will yield an outcome that reflects the fact of mixing. According to our re-phrasing of the uncertainty principle, the deciding factors that reduce the possibilities to single outcomes are the ones that give optimum freedom and least energy and that are separate from the nearest state. Truly mixed states remain entangled; they cannot be separated, and their observable state reflects the mixing. Otherwise the states separate fully or partially, with their occupation dependent on the amplitudes before the moment of separation. If the uncertainty limit were zero instead of h_i, there would be no true mixing of states and only ordinary statistics would be needed. As noted in part I, mixing of states can come in three types:
Type I: Exclusive states that mix as potentialities
They are possible states that are mutually exclusive: all their coupled observables' products exceed the limit (Δa_i Δb_i ≥ h_i for all i). They are not mixed, and measurement yields their uncertainties. Their output will always be one of the possibilities, with a complete removal of the degeneracy for the operators in question (ordinary die possibilities). Their seeming mixing is not real, and the probabilities are arbitrary, set by the initial conditions.
Two main schemes follow. In the first, the system runs through situations that can end in any of the final solutions before it actually settles. An example is dice: they are thrown at certain angular and linear speeds and from a certain initial projectile angle. If we take all these values into account, and the die faces are 100% equivalent, then we can predict the outcome exactly. Instead, we throw them randomly, allowing the initial conditions to cover the whole possible range and giving each face equal probability to show. This is, in some sense, a preparation with equal probability. The system rotates during the flight, and if we change the strike moment the face will change.
Mathematical representation can describe the faces as mixed before the strike. Since the die cannot end up on two faces at the same time, the system seems to reduce to one face at the moment of strike. The mixing is not real, because what we measure is whatever ends up at the strike, which is unique: the system has exclusive results. We can modify the probability outcome if we tamper with the faces to make one preferable, or always throw from a certain initial point. The process is fully deterministic, and the statistical outcome is due to the randomness of the initial conditions, which is arbitrary. This is ordinary statistics, and we cannot infer much from a single try.
The other scheme is to start with a certain potential and find all possible solutions, which we usually classify according to energy. We end up with a spectrum of energies, and the system is allowed to be in any of them but prefers the lowest, based on the universal boundary conditions (least action). Each particle can be followed and defined, but that is not needed. There is no mixing in this process for a single particle; it is only relevant to the initial preparation. Type I interactions yield one of the possibilities; over many trials they yield the distribution of the initial probabilities available for the single particle states. Examples of Type I: particle decay, dice, the position of a free particle, and spontaneous symmetry breaking.
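A minimal simulation of the dice example may help fix the idea: the outcome is a deterministic function of the initial conditions, and the uniform statistics emerge only from the arbitrary spread of those conditions. The deterministic map below is a hypothetical stand-in, not a physical model of a tumbling die.

    # Type I statistics sketch: deterministic dynamics, random initial conditions.
    import random
    from collections import Counter

    def die_face(angle, spin, speed):
        # hypothetical deterministic map from initial conditions to a face 1..6
        return int((angle * 7.31 + spin * 3.17 + speed * 5.53) * 1e4) % 6 + 1

    counts = Counter(
        die_face(random.random(), random.random(), random.random())
        for _ in range(100_000)
    )
    print(sorted(counts.items()))   # roughly equal occupation of the six faces

Fixing one of the initial conditions (always throwing from the same point) biases the distribution, matching the remark above that the probabilities are arbitrary and set by the preparation.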
Type II: Inclusive states and partial mixing
They can coexist, but at least one coupled observables' product exceeds h_i while at least one other couple's product is below it (Δa_i Δb_i ≤ h_i for some i, and Δa_j Δb_j ≥ h_j for some j). These are partially mixed states, and their distribution function reflects the probabilities for a single particle or an ensemble. Because the states are mixed for at least one couple, they have some form of reversibility in general.
In Type II interactions, if the degeneracy of one or more coupled observables is removed, the distribution of the newly observed quantity follows the original probability distribution. If all mixing is removed, it yields a Type I distribution. If the interactions do not remove any mixing they may not yield information, and if they do, it is more of Type III. Types I and II can be treated in full using ensemble statistics, provided at least one degeneracy is removed during the interaction. Examples: partially filled mixed states of angular momentum with degenerate energy, the Zeeman and Stark effects, and Stern-Gerlach splitting. If an interaction or a measurement removes the degeneracy, the states separate according to the initial probability distributions.
Type III: Inclusive states with complete mixing
They are possible states that are fully inclusive (Δa_i Δb_i ≤ h_i for all i), with a_i and b_i representing the difference functions for the coupled parameters. They come as two or more states of a single particle, or one combined state for two or more particles. The result of their complete mixing can be recorded by measurement. Type III interactions do not remove any mixing, but they reflect it. For these types, an ensemble statistics description does not provide information or explain the result of mixing. If we try to find out the partial probabilities provided during mixing, we may remove the phenomenon (trying to find out which slit the particle passes through kills the interference).
Examples: interference, and entangled states of a spin singlet (the π0 decay outcome, pair annihilation, degenerate spin states in atoms, Cooper pairs, superfluidity, and the superposition of electromagnetic waves).
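The three-way classification above can be stated compactly as a rule on the uncertainty products of the coupled operators. The following sketch is only an illustration of that rule; the pair values and limits h_i are placeholders, not physical numbers.

    # Classification rule from the text: compare each coupled uncertainty
    # product with its limit h_i.
    def mixing_type(pairs):
        """pairs: list of (uncertainty_product, limit h_i) for the coupled operators."""
        below = [prod <= h for prod, h in pairs]
        if all(below):
            return "Type III: complete mixing (states occupied at once)"
        if any(below):
            return "Type II: partial mixing (some couples mixed, some separated)"
        return "Type I: exclusive potentialities (ordinary statistics)"

    print(mixing_type([(0.2, 1.0), (0.5, 1.0)]))  # all below the limit -> Type III
    print(mixing_type([(0.2, 1.0), (3.0, 1.0)]))  # mixed verdicts      -> Type II
    print(mixing_type([(2.0, 1.0), (3.0, 1.0)]))  # all above the limit -> Type I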
II.5 Non-Locality and Reality
Dynamic Laws and Boundary Conditions are not local; they span the whole universe. Only Entities and Forces are local, because they require an exchange of energy that is governed by the speed of light limit. Any function of the dynamic laws or boundary conditions spans the universe instantly, and mixing or breaking can occur instantly if the condition is satisfied, independent of the physical space separation of the components connected through non-commuting operators. The "information exchange" can happen without energy exchange or movement of signals; it is part of the state description. Measurements or interactions are local. Mixing is governed by a limit that prevents measurement anyway, but traces of the state of mixing remain afterward in the probability and statistical distributions. An alternative scheme is to rely on the de Broglie wave as the carrier of information and the propagator of the dynamic laws' control, with v_info ≥ c (the speed of light) and v the particle's physical speed:
v_info = νλ = E/p = c  (m = 0)
v_info = mc²/(mv) = c²/v, with ∞ ≥ v_info ≥ c  (m > 0)
v_info equals c when the physical speed equals c (or m = 0), and tends to ∞ as a particle with rest mass approaches rest.
For massless particles, energy-carrying signals move in step with their de Broglie waves, which helps explain why Maxwell's equations are always exact. For m > 0 particles, the de Broglie wave speed is distinct from the physical speed: massive particles move at finite speeds while their de Broglie waves show interference and diffraction, with information and couplings always leading the physical movement. This can indicate non-locality.
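A small numerical sketch of the information speed defined above may be useful; it simply evaluates v_info = E/p, which gives c for m = 0 and c²/v for m > 0 (units with c = 1 are our assumption, chosen only for readability).

    # Information (de Broglie phase) speed vs. physical speed, c = 1 units.
    c = 1.0

    def v_info(v, m):
        if m == 0:
            return c          # massless: signal and de Broglie wave move in step
        return c**2 / v       # massive: information speed c^2/v >= c

    for v in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {v:.2f}c  ->  v_info = {v_info(v, m=1):.2f}c")
    # v -> 0 gives v_info -> infinity; v -> c gives v_info -> c, as in the text.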
Fundamental interactions and non-locality
Two descriptions of the basic interactions are generally accepted: the field description, and action at a distance via the exchange of particles. The exchange of intermediate particles, combined with the uncertainty principle, can bring a cohesive picture that explains the short range and the lifetimes of short-lived particles and jets. In addition, this leads to the generalized inverse square laws, as shown above.
Fields are easier to use for formal solutions to the problem, but they are much harder to accept logically. The EPR thought experiment denies the QM uncertainties because they may require non-locality: how could a small charge located at some point span the whole universe with its influence, its action becoming instantaneous everywhere? Fields end up producing non-locality even though they are built on a locality principle. To avoid this problem the cluster decomposition theorem was suggested; it is not necessary or derivable, but just an add-on condition to prevent non-locality [75].
II.6 The Logical Completeness of the Model
The quantum solutions to the equation of motion of any Entity represent the options or states available for that Entity to take. The probability variation between these options is governed by the requirements of maximum freedom (ΔS ≥ 0) and least action (δ∫L dt = 0). The various options of behavior (states) can occur concurrently, creating mixed states under certain conditions governed by the uncertainty principle as defined previously. The invariance of the conserved quantities is preserved whenever there is a physically observable state. These states exist on their own, without a human observer, and occur according to the optimization point of maximum entropy, least action, and the invariance of the universal quantities. The behavior of a given Entity is governed by the basic interactions, the dynamic laws, and the boundary conditions. Logically, this means all laws can form a complete closed set that governs all behaviors.
III Examples and Extensions
In addition to the basic explanations, we will address a number of historical questions that have feasible answers. These problems are mostly focused on the quantum laws.
III.1 Class 1 Problems
1.1 Spontaneous symmetry breaking Consider a path from the top of a hemisphere: the path width is governed by the uncertainty limit, and that includes many position states to mix. States of position within this width are mixed, but other paths cannot coexist with each other if the difference is greater than the limit. The occurrence of one path, due to some action, removes the others and the symmetry is broken. The actual path taken can be influenced by the optimization of the energy and entropy boundary conditions, but in this case we assumed the paths are identical [79-80].
Classically, symmetry breaking happens when some interruption of classical magnitude occurs, and such magnitudes are far above the minimum quantum limits. In QM this can only happen if the perturbation is greater than that allowed by the limit. This means many possibilities can coexist as potentialities, but they cannot mix beyond the position limit. They break either because some interaction induces it, or because breaking serves the optimization of energy and entropy. The second case applies to spontaneous emission; it does not strictly apply here.
1.2 Free particles and plane waves The solution for a free particle is a plane wave that is free to take any energy and, for each energy, to reach any position. That does not mean, however, that the wave is not localized (even if m = 0). The probability distribution is flat, and the particle or the wave can take any position. The wave function squared tells about the overall possibilities (-∞ < x < ∞), but does not describe the time dependence of the position state. These position states are mixed within some bandwidth (|Δx| ≤ h/p = λ, the de Broglie wavelength, when m > 0). Bands separated from their neighbors by more than the wavelength (|Δx_i − Δx_(i±1)| ≥ λ) are mutually exclusive.
For a wave with momentum p, the position width is the de Broglie λ (or the Compton wavelength when m > 0). The state band occupied as a function of time can be inferred or measured by connecting with the starting position through ct or vt. The non-normalizability of the plane wave solution does not reflect a fundamental failure of the solution; it just says the particle is free to be anywhere in space, and because the probability is uniform and the space is infinite, there is no finite integral to normalize. If the probability density integrated over all space yields a finite value, the particle position distribution can be defined. This usually occurs when the probability density decays asymptotically to zero, or when the range of operation is finite, as for bound states.
We can always normalize the probability over a finite region, provided no singularities occur in the region. In that case the particle is considered bound and the energy is quantized. Normalized only over the finite space or states, the particle is guaranteed to be within that space; this does not tell where the particle will be. The probability distribution depends on the choice of the boundary conditions.
The free particle plane wave solution means the same thing as the pure massless wave. The localization is within the minimum of the Compton wavelength, and that is natural and does not require interactions or measurements. If the position is measured with resolution finer than that wavelength, the measurement will induce a change in the momentum. There is no need for the wave packet concept: even though we can localize energy and position using superposition, it does not tell much about the particle.
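To illustrate the normalization point above, the following sketch box-normalizes a plane wave over a finite region [0, L], where the probability density is uniform (1/L) and the integral is finite; the numbers are illustrative only.

    # Box normalization of a plane wave over a finite region.
    import numpy as np

    L_box = 10.0
    x = np.linspace(0.0, L_box, 2001)
    k = 3.0                                      # wavenumber p/hbar, arbitrary
    psi = np.exp(1j * k * x) / np.sqrt(L_box)    # box-normalized plane wave

    dx = x[1] - x[0]
    norm = np.sum(np.abs(psi)**2) * dx           # Riemann sum of |psi|^2
    print(f"integral of |psi|^2 over [0, L] = {norm:.3f}")   # ~1: uniform density 1/L

Over the infinite line the same uniform density gives a divergent integral, which is the non-normalizability discussed above, not a failure of the solution.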
1.3 Schrödinger cat This can be explained by ensemble statistics. The particle structure allows for two possibilities (natural decay or no decay), with probabilities set by the time constant. The state of no decay is separate from the state of decay, and their coupled parameters are fully above the uncertainty limit. Before the decay, the dynamics of the internal structure of the particle define the states of the components; some of them produce decay and others do not, and since the probability of these states is governed by the field inside the particle, the probability of decay can be calculated. The net conclusion is that the decay and no-decay states cannot be mixed. By assumption, the state of the cat is a function of the state of the atom, and the cat is not part of the system that may trigger the decay. Result: the cat will never be in a mixed state of life and death. There is no universal state function that combines the atom and the cat in a mixed manner, because they describe different spaces, just like spin space and ordinary space. In addition, states of decay-and-life or no-decay-and-death are not allowed by assumption, except by coincidence.
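A minimal ensemble-statistics sketch of the cat setup, along the lines just described: each trial ends in exactly one of the exclusive outcomes, and only the ensemble shows the decay statistics. The time constant and observation time below are arbitrary choices.

    # Ensemble statistics for the cat: every trial is definitely one outcome.
    import math
    import random

    tau, t, trials = 1.0, 0.7, 100_000
    p_decay = 1.0 - math.exp(-t / tau)     # decay probability for a single atom

    dead = sum(random.random() < p_decay for _ in range(trials))
    print(f"P(decay) = {p_decay:.3f}, observed dead fraction = {dead/trials:.3f}")
    # Each individual cat is alive or dead; no trial is ever "mixed".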
III.2 Class 2 Problems
2.1 Partially filled atomic orbits For an atom with a single uncoupled electron in the S state, the initial state must have a certain spin direction, and the electron is polarized. Assuming no interactions and no thermal vibrations, the atom will stay in that state. It will not rotate, because other orbits differ by ΔLΔφ ≥ h. The direction will not be sharp, though, because the electron can oscillate around that direction by an angle φ0 such that ΔLΔφ0 ≤ h. If it is possible to prepare a sample of polarized atoms and to reduce the interactions below the limit, the distribution will remain polarized.
2.2 Zeeman and Stark effects The degeneracy of states is lifted after the application of the field. The new states are separated by a measurable energy that shows up as photon transitions. The distribution of the separated states is the same as the single particle probability distribution for the same setup.
III.3 Class 3 Problems
Condition of Quantum Interference For the superposition of two monochromatic coherent light waves, ordinary interference is possible in principle if more than two photons coincide within a span of λ, i.e., if the photon arrival rate exceeds the light frequency. Practically, it requires more than that to get meaningful patterns. This is equivalent to an intensity greater than about 10¹⁵ photons/s for the visible spectrum (each visible photon carrying ∼4 × 10⁻¹⁹ J, a power of order 10⁻⁴ W), which is a fairly low intensity. Classical interference can then happen between two photons passing through the two slits, and the quantum phenomenon is smeared. If we can decrease the intensity enough to prevent any coincidence between photons, interference will be due only to quantum mixing. The intensity must be below the above limit to see the quantum effect, which is hard.
We apply the commutation relations limit to get the conditions for quantum mixing. For the longitudinal motion with order m, Δx_max = 0 and Δp_l = (h/λ)(cos θ_m2 − cos θ_m1), so the limit is never reached and longitudinal motion does not harm interference. For the transverse motion with order m, the transverse momentum difference is Δp_t = (h/λ)(sin θ_m2 − sin θ_m1) < (h/λ)(d/D) for all orders m, with d the slit spacing, D the distance to the screen, and λ the photon wavelength. Taking Δx_t = d, the mixing condition Δx_t Δp_t < h becomes (h/λ)(d²/D) < h, i.e., D > d²/λ; this is best satisfied for low orders. If D < d²/λ, quantum interference should go away, provided there is no coincidence. For typical waves and slits, d = 10³λ, so D > 10⁶λ; for red light, D > 50 cm. To prevent coincidence, the intensity of typical red light passing the slits must be below about 10¹⁵ photons/second. This extremely low intensity puts a stringent limit on testing experiments.
If the condition applies, the two paths are possible at the same time and there is no preferred slit, because they are within the uncertainty limit and will remain mixed. The behavior of the photons will follow the usual interference pattern (the dark fringes are forbidden states and the bright ones are allowed). The energy of the beam is localized at the bright fringes, which means the dark sections do not receive any photons; this is not the same as the two photon fields cancelling. For the classical case we assume the superposition of the two interfering photons creates the fringes. If we try to check which slit the photons pass through, we have to put in a detector: the interaction may make the difference between the two paths greater than the uncertainty limit, and the two paths lose the mixing condition. The photon then takes the path of least action from where it strikes the slits. If we could build a detector that tells the photon path without exceeding the limit, interference should remain; but that is a small possibility, because mixing itself is not measurable even though its consequences are.
Role of potentials in the Aharonov-Bohm effect [33-35] Non-locality governs the behavior of the photon states, because wherever we put the screen, the pattern positions are figured out instantly. This is important because, in the classical case, the photons do not need to know the screen position a priori to define the configuration, while for the quantum effect the photons must know beforehand in order to figure out where to land.
The following notes are in order:
- The interference appears weaker if the screen is moved away from the slits, because the fringe width is proportional to D, the distance to the screen, yielding a wider area per fringe and hence weaker intensity.
- Δp_t between the two paths in the transverse direction can exceed the limit for screens closer than some minimum distance D_min, and quantum interference disappears. If the screen is beyond that minimum, interference continues for any order.
- Δx_t is equal to d, and hence interference should disappear when the distance d between the slits is too large compared to the wavelength of the particles.
- D_min increases with d², which simplifies the test proposed in the next note. But if d is too large, that will kill interference altogether, because the uncertainty limit is reached even for very small Δp_t.
- It is possible to set up an experiment to measure D_min using highly monochromatic coherent laser beams. If we build slits with d = 10³λ, then D_min = 10⁶λ ≈ 50 cm. That is good for testing the hypothesis; the problem is how to operate at very low intensities.
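The two experimental numbers quoted in these notes can be checked numerically. The sketch below assumes a HeNe-like wavelength of 633 nm (our choice, not the text's) and evaluates D_min = d²/λ together with the no-coincidence photon rate c/λ.

    # Numerical check of D_min and the coincidence-free intensity bound.
    h, c = 6.626e-34, 2.998e8
    lam = 633e-9                   # red laser wavelength, m (assumed)
    d = 1e3 * lam                  # slit spacing d = 10^3 lambda, as in the text

    D_min = d**2 / lam             # minimum screen distance for quantum mixing
    rate_max = c / lam             # photons/s above which photons overlap within lambda
    power_max = rate_max * (h * c / lam)

    print(f"D_min = {D_min:.2f} m")              # ~0.63 m: the ~50 cm scale quoted above
    print(f"rate  < {rate_max:.2e} photons/s")   # ~4.7e14 /s, i.e. order 10^15
    print(f"power < {power_max:.2e} W")          # ~1.5e-4 W for a coincidence-free beam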
Filled orbital angular momentum states (II and III): For an ideal non-interacting atom where the S state, for example, is filled with two electrons, the state is mixed and the electrons are entangled.
There is no preferred direction in space, because of the spherical symmetry; all directions have equal probability of being occupied. There is no reason to assume that the two electrons randomly occupy some direction and stick to it. Other directions are allowed and are not exclusive, because ΔS = 0 between any two states in opposite directions. There are two possibilities: either the electrons occupy all states and behave as clouds, or they rotate continuously through all directions like a spherical wave. Both scenarios give the same measurement in any experiment. It is not clear whether it is possible to measure a macroscopic phenomenon reflecting a preferred possibility. The symmetry can only break if there is enough interaction to knock an electron out.
If we could measure the spin of the two electrons without breaking other symmetries (removing the energy degeneracy), we would get equal probability for all directions. After the measurement the two particles would remain entangled as if there had been no measurement, but this is an unlikely possibility, because measurement usually removes the energy degeneracy if magnetic or electric fields are used.
Pion decay The pion, with zero spin, decays into two photons or an electron and a positron, moving in opposite directions in the pion rest frame. This is similar to the two electrons within the S state of an atom, differing only in that the products are located increasingly farther away from each other. There is no preferred direction in space. The photons or particles are entangled even though they are far apart. Measuring the spin in some arbitrary direction for particle one must yield the opposite direction for the other. Here again, if measuring the spin breaks no degeneracy (momentum or energy, though this is unlikely), the system will stay mixed and we can repeat the experiment forever, yielding the same opposite directions. The spin states rotate in time and there is no preferred direction. But if we decide to measure the two spins simultaneously along two directions separated by an arbitrary angle θ away from π, that can break their entanglement if ΔlΔφ exceeds the limit.
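For reference, the standard quantum prediction for the spin correlations of two particles in a singlet state can be evaluated directly. The sketch below computes the textbook singlet correlation E(a, b) = −cos θ used in EPR-type discussions; it is an evaluation of the standard formula, not a simulation of the mechanism proposed here.

    # Textbook singlet-state spin correlation vs. analyzer angle.
    import numpy as np

    def singlet_correlation(theta):
        """Expectation of the product of the two +/-1 spin outcomes for
        analyzers separated by angle theta."""
        return -np.cos(theta)

    for theta in (0.0, np.pi / 2, np.pi):
        print(f"theta = {theta:.2f} rad -> E = {singlet_correlation(theta):+.2f}")
    # theta = 0 gives E = -1: perfectly opposite outcomes, as described above.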
Pair production and pair annihilation Pair annihilation does not require use of the principle, because energy transfer can happen through combining the positron and electron to create photons. For pair production, the photon does not have a rest frame, and that requires interaction with other particles to ensure conservation of momentum. After the production, the e+e- pair forms a polarized triplet that is not spherically symmetric. This is Type II.
3.3 The EPR argument as proposed originally may not be logically viable, because the central point of the suggestion is as follows: "if we adhere to locality, then there is no way for the quantum predictions to be realized; the uncertainty conditions do not hold for independent particles whose momenta and positions are not coupled, hence it should be possible to measure them with full accuracy". The problem with this logic is that the early QM interpretations assumed the uncertainties are due to measurement, and that allowed a feasible EPR argument for the simultaneous measurement of x1 and p2 to get p1 and x2 by inference.
According to EPR, x2−x1 and p2+p1 commute, so they can be measured exactly at the same time (in fact, we argue, they do not, because they contain coupled operator portions). If we measure x1 we can infer the value of x2; this means we have actually measured x2−x1 and p1+p2 prior to the measurement of x1 and p2. But according to QM, when we measure x1 exactly, p1 will not be accurate, and if the value of p1+p2 stays the same, or at least defined, p2 will not be accurate enough to infer p1: a contradiction. Alternatively, if p1+p2 loses accuracy to keep p2 exact, then x2−x1 loses accuracy due to the inaccuracy of x2. The only possibility for EPR is to measure p2 simultaneously with x1 in order to prevent the effects of changing p1 on p1+p2. The question then is whether we can measure p2 and x1 simultaneously.
According to the suggested generalized uncertainty principle, it is not straightforward to claim that case. There is no way to measure x1 and p2 instantaneously if the state before measurement was x2−x1 and p2+p1, because the two particles are then entangled. Measuring x1 and p2 will break this entanglement, and hence the whole argument fails. If they could stay connected to give exact predictions, then there must be some non-local interaction that keeps them in an exact eigenstate. This takes us back to square one: assuming locality ends up in non-locality.
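The formal premise of the EPR argument, that x2−x1 and p1+p2 commute under the standard canonical relations, can be checked symbolically. The bookkeeping below is our own scaffolding and assumes only [x_i, p_j] = iħδ_ij; the objection above, that the combinations contain coupled operator portions, is a separate claim of the proposed model.

    # Symbolic check that [x2 - x1, p1 + p2] = 0 under canonical commutators.
    import itertools

    hbar = 1.0
    def basic(a, b):
        """Commutator of elementary operators, given as ('x', i) or ('p', i)."""
        if a[0] == 'x' and b[0] == 'p' and a[1] == b[1]:
            return 1j * hbar
        if a[0] == 'p' and b[0] == 'x' and a[1] == b[1]:
            return -1j * hbar
        return 0.0

    # [x2 - x1, p1 + p2], expanded by bilinearity over the signed terms
    A = [(+1, ('x', 2)), (-1, ('x', 1))]
    B = [(+1, ('p', 1)), (+1, ('p', 2))]
    total = sum(sa * sb * basic(a, b) for (sa, a), (sb, b) in itertools.product(A, B))
    print(total)   # 0j: the pair is jointly measurable in standard QM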
III.4 Extensions
This section is more reflective, and the proposed ideas carry a degree of speculation; they need not be taken literally. Some ideas, especially consciousness, are worth serious consideration and refinement.
Entities' built-in consciousness The set of behaviors of Entities resulting from the interactions initiated by the fundamental forces, governed by the dynamic laws under the constraints of the boundary conditions, represents our fundamental definition of their nano-consciousness. In that sense consciousness is a built-in component of the definition of the objective Entity. The selection from choices and the reduction processes are governed by the chain of interactions the Entity encounters. No matter how complex or simple these entities are, they carry with them their effective sum of nano-consciousness.
For a single particle, the behaviors and options are limited to a degree that we do not subscribe to calling consciousness. For an aggregation of matter that does not increase the possibilities (accumulative structures of repeated and constrained patterns, like solids), the options available can be fewer than the sum available to the single particle components. Their overall interaction with their surroundings, per component particle, is then reduced, and the reduction can be dramatic enough to turn off the dynamics of the nano-consciousness and remove the quantum freedom available to single particles. This does not mean quantum effects disappear at the microscopic level; in fact, many quantum effects still govern the overall structure of the solid, albeit the solid itself does not interact with its surroundings as a quantum object. This is the classical behavior of matter generally.
The aggregate consciousness of a bulk of material can be much less than that of a single particle (removal of the quantum effects, or decoherence); the effective consciousness per component particle is reduced to a vanishing value. We call such structures inanimate. In that respect, the degree of effective consciousness depends on the complexity of the structure that keeps the total externally interacting consciousness equal to or greater than the sum of the nano-consciousness. Depending on the complexity of the structure, the consciousness can take complicated degrees.
Theoretically, a complex structure can multiply the sum of its nano-consciousness components by many orders of magnitude; the maximum is proportional to the permutations available. We claim the human brain represents one of the most complex and advanced forms of structure, multiplying the nano components into a large macro-consciousness. The degree of complexity is defined by the number of options available for the state of the entity to take. The human brain is built in a way that amplifies the quantum phenomena governing single particles in such a structure. Even though the local processes do not reflect a quantum phenomenon, their governing behavior does (the firing of an axon takes a long time compared to quantum phenomena and the behavior seems classical, but the timing of the firing and the conditions of its occurrence are highly sensitive to the dendrite potentials, which are quantum in their variation).
A single variation may lead to coordinated firings in a controlled way. The possibilities can reach infinitely large values, and they evolve according to the fundamental dynamic laws. Thinking processes are part of the available solutions, constrained by the basic laws and the optimization of the brain chemistry. In addition, the structure can grow more complex if it can store the history of choices. In some respects the brain structure can accumulate these options and store them for later use (creating experiences), and some structures can emerge to control others and make selections in some preferred way, serving the wellbeing of the brain and the human it is built to serve. The evolution follows the basic dynamic laws under the boundary conditions.
The spectrum between the simplest form, a single particle generally governed by the basic laws (nano-consciousness), and the most complex is filled with many degrees of complexity, so the overall macro-consciousness is nearly a continuum from the simplest to the most complicated. We do not claim here that the brain's basic chemistry is quantum mechanical; in fact most of the dynamics is classical, and the thermal energy guarantees the removal of quantum phenomena at the level of neural cells or axons. But, as said earlier, the critical junctions for the potential between the cells show highly sensitive responses: a very small variation in the local potential can differentiate between macroscopic states. In that respect the brain components form an apparatus that can register infinitesimal variations within the QM domain. The net result is a dominance of QM phenomena over the overall behavior and functionality of the brain (the emergent functions represent the amplification).
References
[1] Peter J. Lewis, “Interpretations of Quantum mechanics”, IEP, https://www.iep.utm.edu/int-qm/
[2] Graham P. Collins, “The Many Interpretations of Quantum Mechanics”, Scientific American, November 19, 2007, https://www.scientificamerican.com/article/the-many-interpretations-of-quantum-mechanics/
[3] Tom Siegfried, “Tom’s Top 10 interpretations of quantum mechanics”, Science news, February 5, 2014, https://www.sciencenews.org/blog/context/tom%E2%80%99s-top-10-interpretations-quantum-mechanics
[4] Wikipedia, “Interpretations of quantum mechanics”, https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics
[5] Ian T. Durham, “What are current interpretations of quantum mechanics, how do they differ from each other”, quora, March 2012, https://www.quora.com/What-are-current-interpretations-of-quantum-mechanics-and-how-do-they-differ-from-each-other
[6] Thomson Gale, “Positivism, International Encyclopedia of the Social Sciences”, 2008, https://www.encyclopedia.com/philosophy-and-religion/philosophy/philosophy-terms-and-concepts/positivism#E
[7] Andrei Khrennikov, “On the problem of completeness of QM: von Neumann against Einstein, Podolsky, and Rosen”, arXiv: 0804.2006 [quant-ph], 17 May 2008, v4, https://arxiv.org/abs/0804.2006
[8] Michel Paty. “The nature of Einstein’s objections to the Copenhagen interpretation of quantum mechanics”, Foundations of physics, 1995, pp.183-204. <halshs-00170463>, https://halshs.archives-ouvertes.fr/halshs-00170463/document
[9] V. K. Ignatovich, "On EPR paradox, Bell's inequalities, and experiments that prove nothing", April 28, 2009, arXiv:quant-ph/0703192, https://arxiv.org/pdf/quant-ph/0703192.pdf
[10] Wikipedia, “EPR paradox”, 2017, https://en.wikipedia.org/wiki/EPR_paradox
[11] Fine, Arthur, “The Einstein-Podolsky-Rosen Argument in Quantum Theory”, The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/entries/qt-epr/
[12] Robert B. Griffiths, “Consistent Quantum Theory 1st Edition”, Chapter 13, Cambridge University Press, 2002, http://quantum.phys.cmu.edu/CQT/chaps/cqt13.pdf
[13] L. Ballentine, "Quantum Mechanics: A Modern Development", World Scientific, 2000, https://www.amazon.com/Quantum-Mechanics-Development-Leslie-Ballentine/dp/9810241054
[14] Goldstein, Sheldon, "Bohmian Mechanics", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/sum2017/entries/qm-bohm/; http://www.bohmian-mechanics.net/research_papers.html
[15] Faye, Jan, “Copenhagen Interpretation of Quantum Mechanics”, The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2014/entries/qm-copenhagen/
[16] Hilgevoord, Jan and Uffink, Jos, “The Uncertainty Principle”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/entries/qt-uncertainty/
[17] Susanne Dambeck, “Bohr & Heisenberg: Two Physicists in Occupied Copenhagen”, Lindau Nobel Laureate Meetings, 2015, https://www.lindau-nobel.org/bohr-heisenberg-two-physicists-in-occupied-copenhagen/
[18] Wikipedia, "Born rule", https://en.wikipedia.org/wiki/Born_rule; Information philosopher, "The Interpretation of Quantum Mechanics", http://www.informationphilosopher.com/solutions/scientists/born/Interpretation.html
[19] Information philosopher, “Collapse of the Wave Function”, http://www.informationphilosopher.com/quantum/collapse/
[20] Roger Penrose, Martin Gardner, “The Emperor’s New Mind: Computers, Minds and the Laws of Physics”, Oxford University Press, 1989, https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind
[21] Roger Penrose, “Shadows Of The Mind: A Search for the Missing Science of Consciousness”, Oxford University Press, Sep 1995, https://altexploit.files.wordpress.com/2017/07/roger-penrose-shadows-of-the-mind_-a-search-for-the-missing-science-of-consciousness-oxford-university-press-1994.pdf
[22] Siewert, Charles, “Consciousness and Intentionality”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/spr2017/entries/consciousness-intentionality/
[23] Henry Stapp, "Mind, Matter and Quantum Mechanics", Springer, 2009, https://www.springer.com/la/book/9783540896531
[24] Vaidman, Lev, “Many-Worlds Interpretation of Quantum Mechanics”, The Stanford Encyclopedia of Philosophy (Fall 2016), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/fall2016/entries/qm-manyworlds/
[25] Thomson Gale, “Many Worlds/Many Minds Interpretation of Quantum Mechanics”, Encyclopedia of Philosophy, 2006 https://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-and-maps/many-worldsmany-minds-interpretation-quantum-mechanics
[26] Sacha Kocsis, Boris Braverman, Sylvain Ravets, Martin J. Stevens, Richard P. Mirin, L. Krister Shalm, Aephraim M. Steinberg, "Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer", Science, 03 Jun 2011, http://science.sciencemag.org/content/332/6034/1170
[27] Griffiths, Robert B., “The Consistent Histories Approach to Quantum Mechanics”, The Stanford Encyclopedia of Philosophy (Spring 2017), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/spr2017/entries/qm-consistent-histories/
[28] Bub, Jeffrey, “Quantum Entanglement and Information”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/entries/qt-entangle/
[29] Felix Bloch, “Fundamentals of Statistical Mechanics”, Manuscript and Notes of Felix Bloch, Imperial College and World scientific, 2000.
[30] https://medium.com/starts-with-a-bang/how-is-the-universe-bigger-than-its-age-7[38]5cd59c605
[31] http://guava.physics.uiuc.edu/~nigel/courses/569/Essays_Fall2007/files/xianhao_xin.pdf
[32] https://www.quora.com/What-particles-have-been-predicted-by-the-Standard-Model-but-not-observed
[33] http://faculty.georgetown.edu/jmm67/papers/WhichGaugeMatters.pdf
[34] https://arxiv.org/ftp/arxiv/papers/1605/1605.05470.pdf
[35] Anastasovski, P.K., Bearden, T.E., Ciubotariu, C. et al. Found Phys Lett (2002) 15: 561. https://doi.org/10.1023/A:1023985620088
[36] Wikipedia, "Friedmann–Lemaître–Robertson–Walker metric", https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric
[37] Afshin Shafiee and Majid Karimi, "On the relationship between entropy and information", arXiv:quant-ph/0611061v2, 6 Nov 2006
[38] https://physics.stackexchange.com/questions/126987/what-is-the-range-of-the-validity-of-coulombs-law; Robert L. Oldershaw, "The Meaning of the Fine Structure Constant", arXiv:0708.3501v2 [physics.gen-ph]
[39] Fred Adams, "Origins of Existence: How Life Emerged in the Universe", May 11, 2010, Free Press.
[40] Wikipedia, "Existence", https://en.wikipedia.org/wiki/Existence
[41] Stephen Hawking, "The Origin of the Universe", http://www.hawking.org.uk/the-origin-of-the-universe.html
[42] Wikipedia, "Fundamental interaction", https://en.wikipedia.org/wiki/Fundamental_interaction
[43] Encyclopædia Britannica, "Fundamental interaction", published January 16, 2009, accessed April 01, 2019, https://www.britannica.com/science/fundamental-interaction
[44] Clara Moskowitz, "Hunt Is On for Gravity Waves in Space-Time", August 15, 2012, https://www.space.com/17112-gravitational-waves-ligo-black-holes.html
[45] Wikipedia, "Einstein field equations", https://en.wikipedia.org/wiki/Einstein_field_equations
[46] http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/funfor.html
[47] Wikipedia, “ Physics beyond the Standard Model “, https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mode
[48] Bruno Maddox, "The Higgs Boson: Steaming Particle of Bull$#!%", September 11, 2012, https://www.gq.com/story/higgs-boson-rolf-dieter-heuer
[49] Wikipedia, "Higgs field", https://simple.wikipedia.org/wiki/Higgs_field
[50] Wikipedia, "Higgs mechanism", https://en.wikipedia.org/wiki/Higgs_mechanism
[51] Michael Schirber, "Focus: Nobel Prize—Why Particles Have Mass", October 11, 2013, Physics 6, 111, https://physics.aps.org/articles/v6/111
[53] Michael Brown, "Origin of electric charge", https://physics.stackexchange.com/questions/57199/origin-of-electric-charge
[54] Manoj Kumar Singh, "What is the origin of charge of electron?", https://www.researchgate.net/post/What_is_the_origin_of_charge_of_electron
[55] Sankha Das, "What is the origin of charge?", https://www.quora.com/What-is-the-origin-of-charge
[56] Erik Haeffner, "A Physical Origin for Mass and Charge", arXiv:physics/0010050v4 [physics.gen-ph]
[57] Wikipedia, "General relativity", https://en.wikipedia.org/wiki/General_relativity
[59] R. Jay GaBany, "A Singular Place", https://www.cosmotography.com/images/supermassive_blackholes_drive_galaxy_evolution.html
[60] Nola Taylor Redd, "Black Holes: Facts, Theory & Definition", October 20, 2017, https://www.space.com/15421-black-holes-facts-formation-discovery-sdcmp.html
[61] European Southern Observatory, "Taking the First Picture of a Black Hole", https://www.eso.org/public/outreach/first-picture-of-a-black-hole/blog/
[62] H. D. Zeh, "Feynman's Interpretation of Quantum Theory", arXiv:0804.3348v6
[63] Hermann Nicolai, "Viewpoint: Vanquishing infinity", August 17, 2009, Physics 2, 70, https://physics.aps.org/articles/v2/70; Wikipedia, "Renormalization", https://en.wikipedia.org/wiki/Renormalization
[64] Max Tegmark, "Infinity Is a Beautiful Concept – And It's Ruining Physics", February 20, 2015, http://blogs.discovermagazine.com/crux/2015/02/20/infinity-ruining-physics/#.XKNMefdvbIV
[65] Robert L. Oldershaw, "The Meaning of the Fine Structure Constant", arXiv:0708.3501v2 [physics.gen-ph]
[66] Ryan D. Reece, "A Derivation of the Quantum Mechanical Momentum Operator in the Position Representation", http://www.hep.upenn.edu/~rreece/docs/notes/derivation_of_quantum_mechanical_momentum_operator_in_position_representation.pdf
[67] "Position operator in the momentum representation from knowing the momentum operator in the position representation", physics.stackexchange, Nov 5, 2013, https://physics.stackexchange.com/questions/86824/how-to-get-the-position-operator-in-the-momentum-representation-from-knowing-the/86935
[68] Wikipedia, "Energy operator", https://en.wikipedia.org/wiki/Energy_operator
[69] Zhi-Yong Wang, Cai-Dong Xiong, "How to Introduce Time Operator", arXiv:quant-ph/0609211v2
[70] Wikipedia, "Angular momentum operator", https://en.wikipedia.org/wiki/Angular_momentum_operator
[71] Wikipedia, "Rotation operator (quantum mechanics)", https://en.wikipedia.org/wiki/Rotation_operator_(quantum_mechanics)
[72] L. Del Debbio, "Rotations", https://www2.ph.ed.ac.uk/~ldeldebb/docs/QM2/chap4.pdf
[73] Larry Sorensen, "Angular Momentum", https://faculty.washington.edu/seattle/physics227/reading/reading-24-25.pdf
[74] Wikipedia, "Quantum field theory", https://en.wikipedia.org/wiki/Quantum_field_theory
[75] Wikipedia, "Cluster decomposition theorem", https://en.wikipedia.org/wiki/Cluster_decomposition_theorem
[76] https://www.scientificamerican.com/article/was-einstein-the-first-to-invent-e-mc2/
[77] Erick J. Weinberg, "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking", arXiv:hep-th/0507214v1
[78] Wikipedia, "Spontaneous symmetry breaking", https://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking
[79] T. W. B. Kibble, "Spontaneous symmetry breaking in gauge theories", Phil. Trans. R. Soc. A 373: 20140033, 2015, http://dx.doi.org/10.1098/rsta.2014.0033
[80] Ling-Fong Li, "Introduction to Spontaneous Symmetry Breaking", http://galileo.phys.virginia.edu/~pqh/Li.pdf
[81] Chris Quigg, Robert Shrock, "Gedanken Worlds without Higgs: QCD-Induced Electroweak Symmetry Breaking", arXiv:0901.3958v2 [hep-ph]
[82] Ovidiu Cristinel Stoica, "The Standard Model Algebra – Leptons, Quarks, and Gauge from the Complex Clifford Algebra Cl6", arXiv:1702.04336v3 [hep-th]
[83] B. A. Robson, "Progressing Beyond the Standard Model", Advances in High Energy Physics, Volume 2013, Article ID 341738, 12 pages, http://dx.doi.org/10.1155/2013/341738
[84] Frank Wilczek, "Quantum Field Theory", http://web.mit.edu/physics/people/faculty/docs/wilczek_quantum_field_theory.pdf
[85] Debashis Sen, "Of Mass, Charge and Spin, the basic attributes of matter – Their physical origin", January 2011, DOI: 10.13140/2.1.3419.7445
[86] T. Z. Olesen, N. D. Vlasii, U.-J. Wiese, "From Doubled Chern-Simons-Maxwell Lattice Gauge Theory to Extensions of the Toric Code", arXiv:1503.07023v1 [hep-lat]
[87] Pascal Paganini, "The Standard Model", http://polywww.in2p3.fr/~paganini/ch7.pdf
[88] Wikipedia, "History of quantum field theory", https://en.wikipedia.org/wiki/History_of_quantum_field_theory
[89] Alfred Scharff Goldhaber, "Fractional Charge Definitions and Conditions", Journal of Mathematical Physics 44, 3607 (2003), https://doi.org/10.1063/1.1586793
[90] David Tong, "Quantum Field Theory", http://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf
[91] Wikipedia, "Einstein field equations", https://en.wikipedia.org/wiki/Einstein_field_equations
[92] "Layman's explanation and understanding of Einstein's field equations", physics.stackexchange, Apr 29, 2015, https://physics.stackexchange.com/questions/179082/laymans-explanation-and-understanding-of-einsteins-field-equations
[93] Kirsten Hacker, "Did Albert Einstein steal the work on relativity from his wife?", quora, https://www.quora.com/Did-Albert-Einstein-steal-the-work-on-relativity-from-his-wife
[94] Cjohnson, "Einstein's Discovery of General Relativity, 1905-1915", Discover Magazine, November 29, 2005, http://blogs.discovermagazine.com/cosmicvariance/2005/11/29/einsteins-discovery-of-general-relativity-1905-1915/#.XKNcRPdvbIV
About the Author
The author is a retired Professor of High Energy Physics from Alquds University, Palestine. His PhD work studied the channeling of high energy particles in bent crystals (experiment E660, Fermilab and the State University of New York, 1980-1982). He worked on neutrino oscillations (E776) at Brookhaven Laboratory and the University of Illinois (1982-1985), and was a visiting scientist and professor with the CLEO collaboration at Cornell University (1992-1995). He is the senior scientific advisor for the Palestine Academy of Science & Technology, Palestine, co-founder of the Alquds-Jülich cooperation (2012-2018), and a member of the SESAME council and technical committee (1999-2018).
E-mail: salman@planet.edu
Summary
The cornerstone concept, with far-reaching consequences, is the following:
Mixed states are defined as states that one particle can occupy at the same time, or that can be shared by many particles through condensation or fermion entanglement.
The condition for states to mix is that the uncertainty differences between the coupled state variables stay within the limit: mixed states can occur if all physically correlated operators do not commute and their uncertainty products are less than the Planck constant.
In such cases single particles can occupy one or more mixed states concurrently, with no direct possibility of measurement but with measurable consequences that can be traced to this condition.
Multiple particles can occupy such states, yielding entangled states and far-reaching statistics.
If the uncertainties exceed the Planck limit for one or more couples of the physical operators, the states are fully or partially separated; in such situations the relevant operators can be measured, and a single particle cannot occupy more than one state. We call this condition the reduction of states.
Some significant implications, with possibly direct tests and explanations:
- Double slit interference and single slit diffraction are classical direct tests of the assumptions.
- Particle or wave size is defined by this concept.
- The interaction starting point is defined accordingly.
- Divergences in field theories can be removed without artificial procedures using this concept.
- Quark confinement is explained.
- The generalized inverse law is derived.
- Quantum gravity can be developed on reasonably stable grounds.
- Unification of the basic interactions obeys this overarching rule.
- When mixing is removed, the reduced solution follows the combined dictates of the principle of least action and the maximization of entropy.
- The particle-energy concept of the universe explains the fields model.
- Field quantization comes as a natural outcome.
- The long and short range nature of the basic interaction forces originates from this concept.
- Many other problems discussed in the paper originate from this core assumption.
The direct advantages are maximum explanatory power, predictability, and quantitative conditions.