Monday, May 16, 2016

Planckons and elsymons. Part 3


Contents

1. Hypothetical planckonic gas – thoughts versus
      cosmological data
2. The question of mutual interaction of planckons
3. The problem of speed
4. Minimal distance between planckons. The existence of limit (not asymptotic) value of potential energy and force
5. What is the system’s mass at the absolutely smallest distance between its elements?
À propos. Associations, reflections and thoughts


1. Hypothetical planckonic gas – thoughts versus
    cosmological data
Let’s suppose that at the beginning we had the planckonic "gas", all the elements of which are equivalent. This gas, even if quantitatively limited, cannot in the long run create a uniform static continuum, even if such is its initial state. After all, we remember planckon’s properties. Hence we have "gas" (in quotes). [By the way, can there be the absolutely initial state, that is the state from which time begins its existence? Is the existence of no-time at all possible?]  
  If the system is spatially limited, there must also be uncompensated gravity which attracts planckons, particularly those located in the outermost regions of that space. In such a situation the system itself has a centre (a gravitational one, too). In this case, stability is disturbed.
  However, if the system is unlimited, infinite, then (under the condition of stability and uniformity) we have a situation in which there is no relative movement, as a result of the full balancing of forces (the resultant force acting on each element is zero). In the context of thoughts on cosmological questions, this would also be absolute, timeless eternity. But here we are; the very existence of the author of this statement precludes such a possibility. [Of course, assuming that there is no external, transcendental factor which would initiate movement, some "force majeure", a primum mobile, disturbing that eternal peace. Would that be enough to conjure the Universe, with all its changeability, out of an infinite continuum? And what about this "force majeure": when did it deem it worthwhile, and for what reason? At what moment of no-time...? In other words: was it at all possible in the nonexistence of time? After all, time implies changeability, and so the will to act on the part of this transcendent factor already indicates the existence of time. Something for those who like to mull over such things. As for myself, I don't recommend any search for the reasons and ways by which planckons came into being.] Fortunately, we are not dealing with transcendental (not to say theological) factors. We are dealing with the real Universe known to us, which is dynamic, not static, and has no centre, as reflected in the cosmological principle (and thus in Hubble's law); there is no space outside of it. The existence of a centre, or of extra space, would go against the cosmological principle, which we assume a priori. It would also contradict the correctly forecast temperature of the CMB radiation, obtained in empirical study. There is more about this in the article dealing with the CMB radiation. So far, this is a rather isolated view, even though the cosmological principle is widely accepted...
[Currently it is thought that the diameter of the observable Universe is approx. 92 billion light years; check the appropriate internet sites, such as: https://en.wikipedia.org/wiki/Observable_universe. Is this the last word of science? It is the latest word of its current representatives. So far, the Universe can (potentially) be one way or another - according to the Friedmann-Einstein equation (with the addition in the form of the cosmological constant).
     If the Universe is inherently flat, things can be a lot simpler, and its current diameter would be of the order of 30 billion light years. In addition, if there was no inflation, which was supposed to push a large part of the Universe beyond visual range (because there was, after all, the phase transformation, followed by the Hubble expansion), this 92 billion could be put aside as an oddity from the turn of the century. So it is not about the expansion of autonomous space but only about the relative motion of objects, whose speed limit defines the outer border of the flat space occupied by the Universe. The only complication (here I would turn to mathematicians-topologists) would be the absolute limit on the birth certificate of existing space, so as to keep it within Hubble dimensions. "If it is flat, it should be infinite" – but this is not the case, which is undoubtedly proved by the existence of variation, and by the existence of the CMB radiation. This is not the case also because of the c limit, which, in confrontation with the manner of expansion (relative movements, rather than expanding space), actually indicates historicity, variability – there was once the beginning of this State of Affairs. The State of Affairs, rather than the Beginning of Everything.]
So let's leave the option of a uniform and static gas to one side. Because of the various interactions taking place in the planckon gas, it can be expected that it develops in the direction of forming more or less complex systems, and elsymon lumps. We are not dealing here with an ideal gas in which the particles are points which "don't feel" whether they are attracted or repelled, and where their encounters are perfectly elastic collisions occurring at zero range. That's how it is generally imagined. Actually we are dealing here with repulsion at a very close range, but not at zero range.¹

2. The question of mutual interaction of planckons

Planckons, although very small, create a different world, the world of gravity. This makes them different from the perfect gas particles just as nature differs from mathematics (that is, they require different math...). They have certain sizes, but at the same time they can penetrate into each other (but not permeate through each other) being (physically) truly elementary creations. All the substantial matter is built out of them, including radiation and every particle and each body. So as to create (on paper) the principles of construction of stable planckon systems, the basis for the construction of elsymons (in the sense of inner balance and their relative resistance to destructive external influences), one should, above all, examine the interaction between two planckons forming an isolated system and connect the findings of this article with the findings contained in the preceding article, in the section entitled: "How to build elementary particles?"
At this stage we will describe the system in a static manner, i.e., without considering the relative movement of (our two) planckons. [In a particular case, both orbit around the common centre of mass; if in a circle, then such a state is in fact equivalent to immobility (if you don't take into account outside witnesses). In the extreme case, the vibrations take place along the axis connecting them.] We'll explore the dynamics further on. We will calculate the force of attraction (and repulsion) as a function of the distance between their centres. We have examined the matter in relation to the system of two material points**. It will be interesting to see whether we get results consistent with the results of that study, which has the characteristics of generality and universality. It's a kind of test for the planckon model.
As we know, in a very short range mass deficit becomes an important factor. This must be therefore taken into account in our research. We should also recognize that in the case of relative motion of planckons, the results of this research should remain in force. The planckons set apart attract each other by force of gravity, and according to the assumptions they make a closed system. Let one of them be at the origin of the coordinate system. Let’s consider the magnitude of force depending on 
the distance of the second planckon from the origin of the coordinate system.  Intuition suggests the graph shown above. Does this graph correspond to reality? We will check it in our study.
     We'll start, obviously, from deriving an equation for calculating the force. We will rely here on Newton's law of universal gravitation, and we will take into account the fact that, with the mutual approach of its components, the mass of the system gradually decreases. Because in the formula for the resultant mass there are no restrictions on the value of r, the formula for the force which we get should be valid for any distance. This formula can be considered a modification of Newton's law. So as to take into account the mass deficit, we rely, of course, on the formula:

m* = 2M(1 − L/(2r))
derived in the first part of this essay (comprising three parts); here M denotes the Planck mass and L the Planck length. Thus we obtain the following formula:

F = Gm*²/r² = 4GM²(1 − L/(2r))²/r²
After dividing by 4, because the mass of one element is equal to m*/2, we obtain an expression very similar to formula (7) in the article on material points:

F = GM²(1 − L/(2r))²/r²     (**)

We see that the force is positive over the whole range of r values, and that this formula does not indicate a change in the direction of the force (attraction – repulsion), because it does not, for now, recognize the possibility of repulsion. We know that repulsion takes place when the distance between the centres of the planckons is less than half the Planck length. To impose this condition, formula (**) should include a factor that satisfies our need:

Γ = (r − L/2) / |r − L/2|
The same as in the case of the interaction of two material points, and in the case when we were determining the potential energy of the planckons' interaction. Thus we receive:

F = Γ·GM²(1 − L/(2r))²/r²     (***)
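As a sketch, the reconstructed force law (***) can be put into a few lines of Python. The constants, the function names and the explicit sign test implementing Γ are my own choices, made to match the results quoted in this essay (zero force at L/2, repulsion below it):

```python
import math

G = 6.674e-11      # gravitational constant (SI)
hbar = 1.055e-34   # reduced Planck constant
c = 2.998e8        # speed of light

M = math.sqrt(hbar * c / G)     # Planck mass, ~2.18e-8 kg
L = math.sqrt(hbar * G / c**3)  # Planck length, ~1.62e-35 m

def gamma(r):
    """Sign factor Γ: +1 (attraction) for r > L/2, -1 (repulsion) below."""
    return 1.0 if r > L / 2 else -1.0

def force(r):
    """Reconstructed force (***) between two planckons at distance r."""
    return gamma(r) * G * M**2 * (1 - L / (2 * r))**2 / r**2

assert force(2 * L) > 0           # attraction at larger separations
assert abs(force(L / 2)) == 0.0   # the force vanishes exactly at L/2
assert force(L / 3) < 0           # repulsion below L/2
```

The Γ factor is implemented as a plain sign test, which is all that formula (***) requires.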
Let's explore the function F(r). We differentiate it and set the derivative equal to zero in search of an extremum (or inflection point):

dF/dr = Γ·(2GM²/r³)·(1 − L/(2r))·(L/r − 1) = 0   ⇒   r = L/2 or r = L
In this calculation, as one can see, the factor Γ plays no role. Most importantly, we see that our function has an extremum (or an inflection point), and actually in two places. Although it does not fit intuitive predictions, the r values that we received correspond to those received in respect of the two material points. This is encouraging. Continuing our research, we find that r = L is a maximum, while r = L/2 is an inflection point (not a minimum, as for smaller r values the force is negative (factor Γ < 0)). Here is the graph:
The force magnitude at these points:

F(L/2) = 0,   F(L) = GM²/(4L²)
Exactly the same as what we received with respect to the material points. The horizontal asymptote:

F → 0 as r → ∞ (the OX axis)
Again we are puzzled by the huge maximum value of the force:

F_max = GM²/(4L²) = c⁴/(4G) ≈ 3·10⁴³ N
We found it already in the article on material points. That's a huge number! It's a completely different world! But is it really different?
Characteristics of nature do not depend on scale. It's the same nature. Perceptible nature is simply more complex. [If we mix blue and yellow powder, we get, in our perception, the colour green. Today, our truth is green.] What is given to our senses and can reach our consciousness, being the result of overlapping and mixing of stimuli, appears to us as a different quality and constitutes the basis for the theories that we invent and develop. Today's description of nature is necessarily of a phenomenological character. Of course, it's not enough. The world of "sub-dimensions," although it requires a different approach, is no different. In it, the same fundamental rules apply.
But coming back to that enormous force, it helps illustrate how strong gravity is (at the appropriate scale). It's not the first time that we have come to this result. We could have expected it, remembering the result of exploring the interaction of two material points, a result which was of a more general character. This solution also confirms the already expressed view that gravity is the basis of all interactions (including the strong ones, of course). As for the graph (above), it is basically identical with the graph describing the interaction of two identical material points².
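The numbers just quoted can be cross-checked numerically. This sketch (using the force law reconstructed above; the identification F_max = c⁴/4G follows from the defining Planck formulas) confirms that the attraction peaks at r = L and that the peak is of the order of 10⁴³ N:

```python
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
M = math.sqrt(hbar * c / G)     # Planck mass
L = math.sqrt(hbar * G / c**3)  # Planck length

def force(r):
    """Attraction branch (r > L/2) of the reconstructed force law."""
    return G * M**2 * (1 - L / (2 * r))**2 / r**2

# the force grows on (L/2, L) and falls beyond L: the maximum sits at r = L
assert force(0.9 * L) < force(L) > force(1.1 * L)

f_max = force(L)
# F(L) = G M^2 / (4 L^2), which reduces to c^4 / (4 G)
assert abs(f_max - c**4 / (4 * G)) / f_max < 1e-9
print(f"{f_max:.2e} N")   # roughly 3e43 N
```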
It is also noteworthy that within a certain range (L/2, L) the force increases with distance (until it reaches its maximum value, which we just discussed). This is not consistent with our habitual thought (force in a central field decreases with distance). It may be illustrated by stretching a rubber band. Interestingly, such is the character of gluon-quark interactions. We are talking about so-called asymptotic freedom and the colour confinement resulting from it, which makes it impossible to observe quarks directly. Judging by the results of our considerations, it can be expected that the range of "gluing" of quarks is so short that it can be compared with the Planck length. Perhaps quarks were indeed the primary structural elements of the panelsymon at the moment preceding the start of expansion. And what about gluons? In the dual gravity model they rather lose their raison d'être. In a fit of arrogance I would even say that the concept indicating the need for a "force transmitting" particle in relation to the gravitational field is not adequate. This is the fundamental field, the base for other interactions. Gravitons? This is not the right direction. We can do without them. They are just the product of a particular paradigm. A theory can always be adjusted (to comply with the paradigm). Some are starting to see (or just suspect, not being able to substantiate their suspicion) that the search for the graviton is a Sisyphean task. Am I wrong? Let someone prove it. A tip: look for facts of nature (rather than arguments based on one theory or another) contradicting dual gravity; in other words, give us the facts which cannot be rationally explained on this basis. It is worth remembering at this point that this model generates a lot of anticipations that can be checked even now (after liberating oneself from the current habits of thought).
A field is a being that cannot be separated from matter. In fact, matter, each body and every particle, is a field, the gravitational field, in a very condensed form. Even Einstein's famous formula (E = mc²) can serve as an illustration of this fact. But this is not enough. So as to apprehend the truth about gravity, one must descend much deeper, toward the "sub-dimensions" of elementary gravity, towards planckons. The derived formula testifies to it. The planckon is the ultimate condensation of the gravitational field, as well as its elementary source, the only one that exists. Therefore, there is no infinite continuity into depth, just as there is no physical singularity. Here resides the source of quantization (and of the gravitational field).

3. The problem of speed

Let's assume that the initial distance between two planckons is very large (mathematically, it tends to infinity). Leaving one of them at the origin of the coordinate system, we can say that the second falls on it unhampered under the influence of the gravitational force. [We do not take into account the presence of other planckons here, because, as usual, we consider the simplest case, the "raw material", an elementary system. The physics of complex systems is not really different, but the computational aspect, even in the case of just three components, becomes dominant and requires approximation methods. For us only the physical aspect is of importance, the concept at its source. This model is designed for testing.]
During the fall the force of attraction increases, and therefore so does the acceleration. It is interesting to ask what the falling speed is³. In our study we will answer three questions which concern fundamentals:
1.   What is the distance between the centres of planckons at the moment when the speed is maximal?
2.   What is the value of the maximum speed?
3.   What is the minimum distance between their centres at the moment when they come to rest?
  
   The answer to the first question is immediate. This is the distance at which the force of attraction comes to zero, that is, half the Planck length. To answer the second question, we have to start by calculating the work of the propelling force (until it comes to zero) in the range from infinity to half the Planck length. This work is equal to the increment of kinetic energy. From there, the road to calculating the speed. It can be expected that this speed may reach relativistic values. This immediately evokes serious concern about the computational complexities expected in such a case. Deeper reflection (thanks to this concern) leads, however, to the conclusion that in this case the relativistic effect should not be taken into account. That's because at the Planck scale this effect does not occur. The Planck mass is not subject to relativistic increase, as it is unequivocally elementary and invariant with respect to any transformation, independent of the choice of the reference system. More importantly, we do not need to observe our planckon. Observe? Using photons? Here the (observational) effects of the special theory of relativity are irrelevant. Besides, a truly elementary being must be invariant! So let's calculate the work:
                                                                            dW = -Fdr
The minus sign expresses the fact that during the fall the displacement is negative while the work is positive (during the approach under the force of attraction). The repulsion phase will be indicated by the Γ factor. Let us substitute into the above expression the value of the force from formula (***). We get:

dW = −Γ·GM²(1 − L/(2r))²/r² dr
So as to calculate the maximum speed, this expression should be integrated within the appropriate limits:

W = ∫_{L/2}^{∞} GM²(1 − L/(2r))²/r² dr
Here the Γ factor is positive, therefore it is omitted. We calculate this integral and obtain the following:

W = (2/3)·GM²/L
because:

∫ (1 − L/(2r))²/r² dr = −1/r + L/(2r²) − L²/(12r³) + C,

and between the limits L/2 and ∞ this gives 2/(3L).
Assuming (on the basis of the defining Planck formulas):

GM²/L = Mc²
We get:

W = (2/3)·Mc²
Let's note that the coefficient 2/3 already occurred in the article dealing with the potential energy. It also appears in cosmological considerations based on the Friedmann equation (this, after all, is also worth noticing). If we accept as well-founded the basic definition of kinetic energy, which should not be a problem also in relation to planckons, we get:

(1/2)·Mv² = (2/3)·Mc²   ⇒   v = (2/√3)·c ≈ 1.155c
Thus this speed is greater than the speed of light. This should not cause trouble, due to the invariance of the planckon (I pointed this out above). It corresponds to our hypothesis regarding the very early phase of the Big Bang (URELA), although we are talking here about approach, not expansion. But in accordance with the principle of conservation of energy, these speeds should be the same, because after stopping, the planckon accelerates under repulsive forces. The obtained speed is the relative speed of the nearest neighbours. The relative speed of the elements most distant from each other may be much greater. Let's note also that in the description of complex systems – and such was the exploding Universe – simple extrapolations lead nowhere. The calculation of that "much greater" speed using the apparatus employed here is not possible, given also the specific topology of the system, certainly different from the one known to us from direct experience, given the non-linearity of spatial relations.
We received speed greater than c. By how much? This is easy to calculate. Here is the source of excessive kinetic energy, which dissipated during the phase transformation, giving birth to temperature. It's also the kind of indication for estimating the initial temperature (the highest in the history of Universe), and at the same time for confirming the whole concept. 
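As a numerical cross-check (a sketch assuming the reconstructed force law above and the plain definition of kinetic energy), one can integrate the attraction from L/2 outward, recover the 2/3 coefficient, and read off the speed:

```python
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
M = math.sqrt(hbar * c / G)     # Planck mass
L = math.sqrt(hbar * G / c**3)  # Planck length

def force(r):
    """Attraction branch (r > L/2) of the reconstructed force law."""
    return G * M**2 * (1 - L / (2 * r))**2 / r**2

# work of the attraction from (effectively) infinity down to r = L/2,
# by a midpoint rule; the tail beyond 2000 L contributes < 0.1 %
n, r_max = 200_000, 2000 * L
h = (r_max - L / 2) / n
work = h * sum(force(L / 2 + (i + 0.5) * h) for i in range(n))

work_exact = 2 * G * M**2 / (3 * L)   # closed form, with the 2/3 coefficient
assert abs(work - work_exact) / work_exact < 1e-2

# equate the work to the kinetic energy (1/2) M v^2 of the falling planckon
v = math.sqrt(2 * work_exact / M)
print(v / c)   # 2/sqrt(3) ≈ 1.1547: greater than c
```

The excess over c is thus a fixed factor of 2/√3, about 15.5 percent.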
So let us now answer the third question: "What is the minimum distance?" Our planckon, while approaching, crossed the point at which the force came to zero and the speed reached its maximum value. Continuing its motion, it faces resistance. As it moves on, it meets the growing repulsive force. Finally it stops. What's next, you already know. We are interested in the minimum distance corresponding to zero speed. To calculate it, let's start with the calculation of the value of the work (integral) over the new interval:

W' = ∫_{x}^{L/2} GM²(1 − L/(2r))²/r² dr
Let's note that this time the factor Γ is negative, hence the lack of a minus in the expression for the work. In addition, we express the sought distance x using the Planck length: x = nL. Our aim is thus to find the value of n. Let us therefore take note of the change in kinetic energy:

ΔE_k = −W' = −(2/3)·Mc²
So we get:

(GM²/L)·(1/n − 1/(2n²) + 1/(12n³) − 2/3) = (2/3)·GM²/L
And hence the equation:

16n³ − 12n² + 6n − 1 = 0
It has only one solution: n = 1/4. Therefore, the minimum distance is equal to the quarter of the Planck length. A clear solution was to be expected. No solution or more than one solution would render questionable all my efforts, since the solution should be unequivocal. The concept itself would have to be rejected. I was therefore under considerable stress when I was solving this equation for the first time (checked it several times).
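Under the reconstruction used here, the energy balance reduces to the cubic 16n³ − 12n² + 6n − 1 = 0, which factors as (4n − 1)(4n² − 2n + 1) = 0; the quadratic factor has negative discriminant, so n = 1/4 is indeed the unique real solution. A short sketch makes the uniqueness explicit:

```python
def cubic(n):
    """Energy-balance equation in n = x / L (reconstructed form)."""
    return 16 * n**3 - 12 * n**2 + 6 * n - 1

# n = 1/4 is a root (exactly, even in floating point)
assert cubic(0.25) == 0.0

# the remaining quadratic factor 4n^2 - 2n + 1 has a negative
# discriminant, so there are no other real solutions
discriminant = (-2)**2 - 4 * 4 * 1
assert discriminant < 0

# spot-check the factorisation (4n - 1)(4n^2 - 2n + 1)
for n in (-1.0, 0.5, 2.0):
    assert abs(cubic(n) - (4 * n - 1) * (4 * n**2 - 2 * n + 1)) < 1e-9
```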
The result was a source of a powerful experience. The fact that this result surprises with its elegance can mean that we are pretty close to the truth. But this is not the end, because the gravitational mass of such a system immediately arouses curiosity. Calculating it was not a problem. For now, however, please be patient. I think we should first calculate the potential energy at the moment of the closest approach. We need to hurry up before the start of repulsion, so as to make it before the Big Bang.

4. Minimal distance between planckons
   Let us return to the preceding article. For the record, there we dealt with, inter alia, the potential energy of the planckons' interaction. The graph shown there asymptotically approaches the energy axis (OY). As it turns out, there is a minimum distance to which two planckons can approach each other. It is equal to a quarter of the Planck length. It is easy to calculate that the corresponding potential energy is four times greater than the planckon's rest energy. So there is no asymptote. The graph ends at a certain place. This is significant. There is one truth, and the asymptote does not lead to it. This is, however, shown only in (truly) elementary systems. But if that is the case there, then in matter as a whole the use of asymptotes in the description of macroscopic systems is only apparent and means that there are limitations in the given method of description. Here's one more, this time "philosophical", confirmation, if not of the validity of the chosen path, then at least of the legitimacy of investigations in this direction. Is it only for the philosophical elegance? This seemingly trivial statement carries nonetheless some weight.
And what is the magnitude of the repulsive force at that threshold distance? Finding it is no longer a problem. Substituting into the general formula for the force (***), taking into account Γ = −1, and using the familiar formula:

GM²/L² = c⁴/G
we get:

F(L/4) = −16·c⁴/G
The maximum repulsive force is thus 64 times greater than the maximum force of attraction. So here as well, there is no asymptote. The story ends at this value of the force. The Pauli exclusion principle is a direct consequence of this very fact.
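With the same reconstructed force law, the 64:1 ratio can be confirmed directly: |F(L/4)| = 16c⁴/G against the maximum attraction c⁴/(4G) at r = L. A minimal sketch:

```python
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
M = math.sqrt(hbar * c / G)     # Planck mass
L = math.sqrt(hbar * G / c**3)  # Planck length

def force_magnitude(r):
    """Magnitude of the reconstructed planckon-planckon force."""
    return G * M**2 * (1 - L / (2 * r))**2 / r**2

# |F(L/4)| = 16 c^4 / G ...
assert abs(force_magnitude(L / 4) - 16 * c**4 / G) / (16 * c**4 / G) < 1e-9

# ... which is exactly 64 times the maximum attraction c^4 / (4G) at r = L
ratio = force_magnitude(L / 4) / force_magnitude(L)
assert abs(ratio - 64.0) < 1e-6
```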

5. What is the system’s mass at the absolutely smallest distance between its elements?
This predictable question was asked at the end of the third chapter. As we shall see, it was a good one.
It is obvious that at this particular moment the mass of the system is negative. Its value is expressed by the equation which we already know:

m* = 2M(1 − L/(2r))
along with this, a reminder:

r = L/4
So we get the following:

m* = 2M(1 − L/(2·(L/4))) = 2M(1 − 2)
As we know, from the defining formulas (Planck dimensions) it comes out that:

M = √(ħc/G) ≈ 2.18·10⁻⁸ kg
Here's the final result:

m*_min = −2M ≈ −4.35·10⁻⁸ kg
Amazing, although by intuition it was to be expected. Absolute symmetry! When they are infinitely far from one another, their total mass is, of course, equal to 2M. The system has the same magnitude of mass when the distance between its elements is minimal, but the mass is negative. Yes, minimal! They can't get any closer to each other, just as there is no greater distance than the infinite one. Also, the principle of conservation of energy remains valid and categorically excludes any further approach (it's a question of the magnitude of the mass). Again the Pauli exclusion principle! And actually, it is also an expression of the principle of conservation of energy. And probably just here lies its mystery. Is it not like quantization (of gravity)? Infinity on the "one side of the mirror" becomes a definite quantity on its other side. Alleluia! This cry expresses everything...
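This symmetry can be traced with the mass formula reconstructed from the results quoted here (a sketch; the function name and the Planck-unit convention M = L = 1 are mine):

```python
def system_mass(r, M=1.0, L=1.0):
    """System mass of two planckons, with the gravitational mass deficit
    (reconstructed formula, in Planck units M = L = 1)."""
    return 2 * M * (1 - L / (2 * r))

assert system_mass(1e12) > 1.999999   # far apart: the mass tends to +2M
assert system_mass(0.5) == 0.0        # r = L/2: zero mass (photons appear here)
assert system_mass(0.25) == -2.0      # r = L/4: mass = -2M, the mirror of +2M
```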

À propos
Associations, reflections and thoughts
¹) It is interesting from the psychological point of view that something like this is even considered (a perfectly elastic rebound at zero range), as if it really took place in nature. Collisions of free electrons with individual photons, treated as central elastic collisions of bullets, may serve here as a particular example*. In any case, that's the way the matter is perceived by young people interested in science. And that's how the Compton effect is often presented in schools (wrongly of course, but for a somewhat different reason). However, physically, a perfectly elastic rebound at zero range is simply absurd. "But this is just the kind of approximation" – one could say... An approximation distancing us from reality, deceiving the imagination which we want to develop in the young so that they would grasp this reality in the best way possible. If already in school such and similar assumptions are used a priori, it is no wonder that quite often even scientists are automatically and mindlessly prone to incorrect judgements. Stereotypes formed in youth live their own lives, and the more irrational they are, the stronger. I do not want to give examples from outside physics. Ad rem. Let's try to describe (qualitatively) what can happen when an electron approaches a photon. [Let's try to describe it in a kind of mechanistic way, without referring to quantum mechanics. Which does not mean that it's a better way. It means that it's worth trying.] The speed of the electron relative to other bodies is here of no importance. Other bodies have nothing to do with it, are far beyond our system, and know nothing about that electron. In addition, their varied speeds, from zero to the speed of light, in relation to our electron, make it pointless to deal with its kinetic energy, the value of which depends on the reference system – unambiguity is out of the question.
And yet, among astronomers there are some (I deliberately do not cite names, because they are quite well known and quoted – so I know about them) who explain cosmic X-radiation as follows: "high-energy electrons, during a collision with photons of the CMB radiation, transmit their energy, turning these into X-ray, or even gamma-ray, photons."
Let's freeze the shot at the moment of the closest approach of an electron to a photon. Both players are elsymons. The gravitational field of the electron, when it approaches the photon, causes deformation of both of them. Polarization occurs in the system of the constituent planckons' vibrations (gravitational induction). Thus, the photon becomes a source of an uncompensated gravitational field. It attracts the electron (itself being attracted). [For this reason light is deflected in a strong gravitational field (not necessarily due to the curvature of space).] Here we have an analogy to electrostatics – electrostatic induction (like scraps of paper in the field of a rubbed object). In our system the interaction is gravitational. As we know, in a gravitational field the wavelength of radiation increases (contrary to popular belief, not because of gravitational time dilation – anyway, so I dare to believe), but here it takes place only at the moment of contact***. Immediately after comes radiation, and thus the photon returns to its original state, to its original energy. This cannot be captured in an experiment. What can be observed, however, is the scattering, in accordance with the principle of conservation of momentum. Already this suggests that the model of the photon as a gravitational elsymon is consistent. The change of momentum itself proves that it participates in the interaction. What interaction? Gravitational, of course.
By the way, returning to the interaction of the photon, it is worth noting that the disruptions occurring in its internal vibrations, and thus its induced gravity, can be caused only by massive particles (even an electron). A photon alone is not a source of the gravitational field. Not surprisingly, light beams do not interact with each other. The planckon model explains this fact as well. [And if the particle has negative mass? Then the photon deflects in the opposite direction. I think that in the future it will be possible to check this. Scattering of light on neutrinos? Perhaps in this way it will be discovered that neutrinos actually have negative mass... More about neutrinos in an essay devoted to them.]
Although all this sounds rather logical, research is essential... even if you, dear reader, consider it all "fantasy for the poor." If that's what you think, it means that you either have not read everything, or you are already programmed. Experiment is of crucial importance, and so far no one has even attempted to refute the model that, with a good dose of arrogance, I have presented in my works. The known facts rather confirm it (or at least do not contradict it).
²) This enormous force makes me think at this point of the hypothesis of inflation. There, too, some huge force caused the sudden, even exponential increase in the size of this (This? Actually a singularity, in spite of being zero...) which was to become the Universe. [Just for the record, inflation was expected to start (because of this singular zero) only after some time. A sly dodge. Would this make it more credible?] In that case it was some not fully specified force, vacuum energy, but we know that it was gravity. Moreover, this force (surprisingly) is called repulsive gravity, and is based on the known Planck mass, incomparably larger than the mass of any known particle. I invented the absolutely elementary being (with that mass), thanks to which the vacuum energy has its specific source. Inflation is the expression of intuition, and Urela, based on dual gravity, concretises things on the basis of causality (and not according to the scheme: empirical fact → fitting hypotheses, hypotheses based on the paradigms and theories currently in force, resulting from the creative inventiveness of those involved in the subject). It is significant that Alan Guth invented some "repulsive gravity" yet had not thought about the possibility of dual gravity. It is thanks to the concept of dual gravity that there is no need to use such hyper-abstract ideas as Higgs bosons, Higgs fields, the energy of the vacuum, false vacuum, the inflaton field; there is no need (in this field of research) to use hyper-mathematics, and no need for hyperspace and supersymmetry. So what was achieved starting from quantum premises and from the top deserves great admiration. From the top, that is, on the basis of observable data; moreover, without taking into account gravity, which is, after all (according to the common habit of thought), a super-weak interaction.
Yes, without taking into account gravity, and yet, in spite of all, taking it into account at the Planck level in a contrived hypothesis (inflation) without a sound conceptual basis. It is praiseworthy (for intuition). For me it was a lot easier because I recognized gravity as the basic interaction, and all others resulting from complexity, and I accepted its duality as the base. Moreover, my view was not obstructed by sky-high fluctuations (existing in this range of dimensions), excluding insight because of the quantum conditions (already a priori). This is not a great revolution. It’s a minor treatment, a small step forward (rather than backward). By the way, this convergence (Inflation with Urela) would indirectly indicate that indeed gravity is the key to everything. No wonder that it wasn’t possible to co-opt it to other interactions as some secondary Cinderella. The fact that now (probably not only in my work) it triumphs by its magnitude, fully justifies the previous failures. The obtained (directly in several ways) "universal force" could be an indication. On this occasion it is worth pointing out that the existence of gravitational repulsion is, according to our concept, a natural consequence of the existence of attraction. Awareness of the existence of gravitational mass deficit also enabled clear understanding of why gravity in our observable surroundings is so weak.
³) Can the relative speed of planckons be other than c? Judging by their absolute elementarity? Other speeds are relative, as is the magnitude of force... But that is from the point of view of an observer... who actually does not exist... In our investigation planckons are material points, and their internal features as absolutely elementary beings actually lie beyond this research. A planckon feels that it is attracted (or repelled). By what? That is none of its business. Let us add that during the first phase of the Explosion the relative speed was supposed to exceed, even by much, the invariant speed (c). That, however, concerns the pace of expansion, the speed at which the radius of the Universe increased. Or maybe, in fact, the relative speed was equal exclusively to c? The speed of the nearest neighbours, of course. And what about the speed of further neighbours? There must have been quite a lot of these neighbours in every direction, and also in this setup the cosmological principle - which is the expression of symmetry, homogeneity and isotropy on the global scale - was certainly in force****. According to this principle the relative speed is proportional to the mutual distance. In this case it would be a multiple of the invariant c. Was "c", then, the only possible basis for the relative speed? Is it possible that there was no acceleration in the initial phase? No Urela (or inflation)? Or maybe the view that the relative speed of planckons cannot be other than c is simply wrong, as per the suggestion at the beginning of this digression. Because if it cannot be other than c, then why is it (in line with the cosmological principle) n-fold greater in relation to more distant neighbours? This relativity would actually contradict its invariance. Thus, either all planckons moved in relation to all others at the speed c, regardless of the distance between them, or this speed is not a necessity, that is, the relative speed may be other than c. 
In the first case there is no question of expansion, of development - only absolute stasis, as all move relative to all others at the same speed or, to put it otherwise, do not move at all. Just a reminder: we are interested in mutual repulsion in the first phase of the Big Bang. This case can therefore be dismissed. So we have the second case. Which means that the invariant c inherently lies in the planckon itself, while outside of it the case is open. Let us note that space is built of progressive, relative movement (as already observed in the main text). So there would be no expansion if the only speed were the invariant speed c. Besides, planckons are endowed with mass. Photons are something else; the mutual speed c is reserved exclusively for them. It is a kind of external parameter, unlike in the case of planckons, for which c is something internal. For the record, photons appeared at the moment when the mass of the system was equal to zero (at the end of Urela, at the commencement of the phase transformation). For a system of two planckons, the distance between their centres in this state is equal to half the Planck length. Á propos, such (field-like) is the meaning of the Planck length. And what about the "crust" that determines the planckon's dimensions? Encrusted is our imagination. Later came chaos, and space began to expand at the speed constituting the upper limit of the relative speeds of elements endowed with mass. We will soon learn that the minimum distance at the beginning of the Big Bang was equal to a quarter of the Planck length. This will give some foothold for further research.
Planckons, and actually only they, repelled each other (at very short range), although their assemblies, which formed particles, attracted each other with relatively small forces, being gravitationally almost saturated systems. Hence the weakness of gravity within our perception. It is interesting that our poor planckon is personally responsible for this whole mess called the Universe.

*) Let us leave aside quantum mechanics with its uncertainty; the "mechanistic" approach should suffice. According to a certain hypothesis which tries to explain the formation of X-rays somewhere in space, electrons with great kinetic energy transmit, through collision, part of their energy to photons of the CMB radiation (microwaves), transforming them into photons of X-radiation and even gamma radiation. Sounds nice. The problem is that kinetic energy is relative, that is, its magnitude depends on the reference system. It may be, for example, zero with respect to one of the observers. The photon energy, by contrast, is what it is and nothing else (hv), and it determines the type of radiation. "And what about the Doppler effect?" It does not apply in this case, because there is no source of radiation; a photon moves at an invariant speed relative to any observer. What takes place between an electron and an encountered photon has nothing to do with the infinite number of other objects outside the system. Thus even the "great energy" of a (free) electron in relation to some of these objects cannot affect the result of its interaction with a photon. And what kind of interaction are we dealing with here anyway? After all, as we remember, the photon is electrically neutral.
**) Article (first in this series) entitled: “The dual nature of gravity”  
***) The existence of gravitational time dilation (here, in particular, with regard to the photon in our example) is something quite troublesome if you look at it as a whole. We would have, among other things, a mismatch of local time to global cosmic time - an uncountable number of such mismatches (even without digging into black holes). Which of the measured times is actually the global time? Or maybe this global, shared one simply does not exist. If so, how did it happen that the Universe exploded as a whole at one moment, being a self-adjusted object for the whole duration of Urela (or inflation)? How is it that the horizon of the Universe, no matter which way we look, is equally far away, that there is a cosmological principle whose observable confirmation is Hubble's law? Is the H factor (which, as is known, determines the age of the Universe) exactly the same for all objects equidistant from us? Or does it differ because of different gravitational environments? If so, we have an additional possible source of uncertainty in determining the value of this coefficient. And more generally, could there be global processes (of a cosmological nature) without full time coordination? Or maybe they do not happen (and no dark energy...). Nothing but trouble. 
     A less troublesome solution would result from dispensing with time dilation (or recognizing it as a mental shortcut satisfying the mathematical requirements, not an actual physical effect). The energy of a photon passing through a gravitational field consists of its own hv energy (in the non-gravitational area) and the negative potential energy of the photon at a given point of the field. The total energy is therefore smaller, which manifests itself in a longer wave. By this reasoning we come to a formula very similar to that predicted by GTR - without involving time. A change in the rate of the passage of time would make the "new" photon's features permanent, because it simply "missed the train": after leaving the gravity field, if it came too late, the photon would not return to its primary form. However, the gamma rays and X-rays coming from the vicinity of neutron stars (and possibly black holes), where the gravitational field is very strong, rather testify to the possibility that photons "weakened" in the gravity field return to their original form after leaving it.
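The claim that this energy-balance reasoning yields "a formula very similar to that predicted by GTR" can be illustrated numerically. A minimal sketch, under my own assumption that the energy balance gives the redshift factor (1 − GM/rc²), compared with the exact Schwarzschild factor √(1 − 2GM/rc²); the article's actual formula is not reproduced here, so the specific form is hypothetical:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def energy_balance_factor(M, r):
    """Assumed photon-energy ratio from the balance: hv plus negative potential energy."""
    return 1.0 - G * M / (r * c**2)

def schwarzschild_factor(M, r):
    """Exact gravitational redshift factor from GTR."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c**2))

# A photon leaving the surface of the Sun
M_sun, R_sun = 1.989e30, 6.96e8
f1 = energy_balance_factor(M_sun, R_sun)
f2 = schwarzschild_factor(M_sun, R_sun)
print(f1, f2)   # both ~ 1 - 2.1e-6; they differ only at second order in GM/rc^2
```

In weak fields the two factors agree to second order in GM/rc², which is why the energy-balance formula is observationally hard to distinguish from GTR's.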
     "And what about black holes?" one may immediately ask. I have resolved this problem as well, and of course on the basis of dual gravity. A black hole is a gravitationally closed object, without singularities, and the matter inside is well known to us, though (in former stars) very condensed. The average density of such an object is inversely proportional to the square of its mass. It is easy to calculate and check for a suitably massive galactic nucleus; it may even be lower than the density of water. Most importantly in the context of our discussion, there is no time dilation. It is simply not needed - everything fits without it. Dilation would rather disrupt things (this playing with a false t).
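The inverse-square dependence of average density on mass is easy to check if, as a working assumption for this sketch, we take the Schwarzschild radius as the object's size:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

def mean_density(M):
    """Average density of a sphere whose radius is the Schwarzschild radius r_s = 2GM/c^2.
    Algebraically this equals 3c^6 / (32 pi G^3 M^2), i.e. it scales as 1/M^2."""
    r_s = 2.0 * G * M / c**2
    return M / ((4.0 / 3.0) * math.pi * r_s**3)

# A galactic nucleus of ~10^9 solar masses is already far less dense than water (1000 kg/m^3),
# while a 10^7 solar-mass one is still denser - the 1/M^2 scaling in action.
print(mean_density(1e9 * M_sun))   # ~18 kg/m^3
print(mean_density(1e7 * M_sun))   # ~1.8e5 kg/m^3
```

The crossover to water density falls near 1.4·10⁸ solar masses, well within the range of observed galactic nuclei.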
     Time dilation does exist, though not gravitational-local but kinematic - also global, cosmological, observed in distant objects because of their relatively high receding speed. This can be noted (and even calculated), because the space of the Universe is flat. [I dealt with this matter in a series of articles under the title "Horizontal Disaster" and, of course, in one of the books already published.] Hubble's law, discovered observationally (empirically), indirectly implies the existence of global time. Let us add to that the amazing uniformity of the background radiation (Penzias, Wilson). Its minor heterogeneity is not caused by heterogeneity in the passage of time, but by the chaos that formed during the phase transformation - the chaos and the local non-homogeneities in temperature already at the very beginning of the Hubble expansion.

****) By the way, as we already know, this original network of the panelsymon did not have to be, and rather was not, a network of identical cubes (as we might imagine). [Although in this case isotropy concerns not the basic elements but the basic systems.] Rather, regular tetrahedra and regular dodecahedra connected by planckonic bridges. They are the ones, one might suppose, which constitute the structural basis of all particles (including photons). As a result, the masses of all the particles are (relatively) similar. This is a very important conclusion (see the preceding article). Moreover, this "crystal" structure forms the basis for short-range anisotropy, which does not, however, affect the homogeneity of the whole. [It is exactly this anisotropy that played its role during the dispersion of the elements of that monocrystal in the phase transformation - played its role by affecting the development of matter so that it gained the qualities of chaos. If absolute isotropy had reigned at the beginning, if only planckons had formed the network, then chaos could not have arisen during the phase transformation, and fluctuations could not have taken place. There would be no large-scale structures, there would be no galaxies, and we would not exist. The primary anisotropy of the panelsymon was also the source of the fractalization of matter known today.] Hence the diversity of the types of particles and the limited (not infinite) number of their kinds. Maybe this is also the cause of the predominance of matter over antimatter. After the bursting of the panelsymon (the phase transformation) only those systems remained that could exist as stable/permanent ones (without interference from outside). Perhaps these are internally resonant systems, variable in a cyclical (reversible) manner. Thanks to this, (when in isolation) they do not emit any radiation - these are the particles that have the right to exist. ["Radiation"? Of course, in principle it would not be electromagnetic radiation. So what kind? "Gravitational"? 
(On the subject of gravitons I have already spoken. Let me remind you that a system is stable when the changes occurring in it are cyclical and its energy does not change.) Probably none. And what if there were an internally resonant particle? That would be the day! - systems other than the possible ones could not form, if only for reasons of energy. Here any speculation just does not make sense.] The fact that most of them (the internally resonant ones) decay means that there is an external factor that causes decay. I already drew attention to this in the preceding article (in the reference with three stars). This factor is mainly neutrinos (or photons - with their secret, inductively induced gravity? Very possible.) Local anisotropy could also result in the release of free planckons or very massive aggregates, all of them with considerable mass and minimal dimensions, creating the global network of dark matter. With the passage of time, matter (as we know it) accumulated around their local densifications (fluctuations), then the first swarms of stars (already 200 million years after the Big Beginning), and after approximately one billion years they started creating systems which we detect as quasars. These, over the next few billion years, would evolve into galaxies. About the formation of galaxies I wrote elsewhere, in an essay devoted to this topic.


December 2013 

















niedziela, 1 maja 2016

Planckons and elsymons. Part 2

Planckons and elsymons
Part 2

Contents    
1. The potential energy of a system of two planckons. Mathematical model. Potential energy niches - conclusions concerning planckons’ capacity to create structures.
2. How to build elementary particles? The basic repetitive elements of the structure of particles.
3. What is dark matter? Panelsymon - the Universe at the start. Urela and the phase transition. Dark matter and the density parameter. Why are the masses of galaxies basically similar to each other? How were the islands of future galaxies formed?
Reflections.

1. The potential energy of a system of two planckons
     One can ask: what constitutes an "encouraging" factor for planckons to form aggregates - particles? Answer: there should be a niche of potential energy. So let us consider how the potential energy of a system of two planckons changes with distance. Of course, we have to take into account the system's mass defect, which occurs when the distance between the planckons diminishes. Here it should be noted that such a niche cannot exist if gravity means only attraction. According to the currently prevailing (even obligatory) view, gravity means attraction - repulsion is out of the question. In such a situation it is difficult to find motivation for undertaking research on the structure of elementary particles, as if such studies were simply not possible. Nowadays this is a generally held opinion. The argument behind it is based on the uncertainty principle, which implies fluctuations that undermine the possibility of assessing the linear sizes of particles. For this reason it is difficult to talk about their structure (particularly for leptons), so it is best to assume that they are point objects. But then how did all this structuring of matter come about - the chemical elements, atoms, subatomic particles...? Despite the expected reaction of many readers, the problem exists. It is, in fact, the basic problem. We have here the paradigm of observability conditioning the possibility of acquiring knowledge.
   Is such a niche really possible? Judging by the qualitative considerations in the previous article* – yes. Will the quantitative considerations confirm our suppositions? We'll see.
   On the basis of previous findings (formulas: (8), (9), (10) in the preceding article) we can write the formula for the potential energy of the system, taking into account the mass defect: 
As you can see, the resultant gravitational mass of the system was divided by two, so as to obtain the mass per planckon. 

   The matter, however, does not end here. It should be expected that the potential energy at distances of less than half the Planck length should be positive (being negative for larger distances) - judging by the discussion regarding a system of two material points (the more general case), since repulsion is involved there. At the minimum distance**, the (positive) potential energy should have its maximum value. In the case of an isolated system of two planckons we would basically be dealing with cyclical movement, with vibrations. It brings to mind the oscillating Universe. The potential energy is equal to zero when, of course, the mutual distance is equal to half the Planck length, in which case the gravitational mass of the system is equal to zero. This can be symbolically expressed as follows:

From formula (11) it transpires, however, that the potential energy should always be negative. That would be correct if gravity meant only attraction, but it turns out that this is not the case. In the formula for the energy we should therefore take this into account by adding a factor, defined as follows (just as we did in the first article***):
And here we come to the final form of the expression for potential energy:
For convenience sake let’s assume that: 

Let’s also note that, on the basis of formula (10):
Thus we get the following formula for potential energy:
 The function E(p) can be explored. I would suggest this to high school students (although today that is a lost cause). Below we see a sketch of the graph of the potential energy, expressed, of course, in Mc² units. Note the consistency of this result with the previous ones, in particular the zeroing of the system's mass when the distance between the planckons equals half the Planck length. 
Judging by the graph, we see (surprisingly?) a minimum of the potential energy, corresponding to a distance equal to 3/2 of the Planck length, with the value -8/27 Mc². This corresponds to an energy of 3.67·10^18 GeV - by the way, an energy two orders of magnitude higher than the energies considered by GUT (Grand Unification Theory). This energy minimum helps in the process of connecting planckons into systems, which include, one can assume, all particles except photons and neutrinos. Yes, neutrinos, due to their unique characteristics distinguishing them from other particles; I will explore them in the third part of the book. Also "at the crossroads", at the 1/2 point, a niche is formed (beyond it there is attraction, before it repulsion), and that is the place for planckon systems, with the proviso that the potential energy there is equal to zero. This also corresponds to zero mass of the system; it is the place where photons are formed. By the way, let us note the perfect convergence of this graph with the graph illustrating the changes of the potential energy of two material points (article five). This was to be expected (taking into account that R = 2L).
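The formula images from the article are not reproduced here, so the sketch below uses a form of E(p) that I reconstructed to match every value quoted in the text - zero at p = 1/2, a minimum of −8/27 Mc² at p = 3/2, positive (repulsive) below p = 1/2 - namely E(p) = −sgn(2p−1)·(1/p)·(1 − 1/2p)² in Mc² units, with p the distance in Planck lengths. This functional form is my assumption, not the author's published formula:

```python
# Hypothetical reconstruction of the potential energy E(p) of two planckons,
# in units of Mc^2, with p = distance / Planck length.
def E(p):
    s = 1.0 if p > 0.5 else -1.0   # sign factor: attraction beyond L/2, repulsion below
    return -s * (1.0 / p) * (1.0 - 1.0 / (2.0 * p))**2

# Zero of the potential energy at p = 1/2 (zero gravitational mass - the photon niche)
print(E(0.5))                       # → 0.0

# Minimum at p = 3/2 with depth -8/27, located by a simple grid scan
ps = [0.3 + 0.001 * k for k in range(3000)]
p_min = min(ps, key=E)
print(p_min, E(p_min))              # ~1.5, ~-8/27 = -0.2963

# Depth of the niche in GeV: (8/27) of the Planck energy (~1.22e19 GeV)
planck_energy_GeV = 1.22e19
print(8.0 / 27.0 * planck_energy_GeV)   # ~3.6e18 GeV, the value quoted in the text
```

Whatever the exact published formula, any E(p) with these fixed points reproduces the niche structure discussed above.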
     As it transpires, we did not find room for neutrinos here. It is a sign that, despite everything, our insight is only partial. But I think we have moved a little forward, because there are more options whose exclusion will bring us closer to the solution. By the way, it could mean that neutrinos are simply complex systems (let us remember that our "theoretical" deliberations apply only to systems made of two planckons). One cannot explore neutrinos without paying attention to their very special properties. This could (already) imply that they did not have to come into being in some potential energy hole, but in some other way. As I have already noted, the matter is described elsewhere. Besides, according to one of the hypotheses, they diverged even before the occurrence of the phase transformation.
   So let us continue our interpretation. The value of the potential energy obtained at the minimum, -8/27, makes one wonder, as it happens to be the third power of -2/3. Let us also note that this value occurs at the abscissa 3/2. This too makes one wonder, if only because the product of these two numbers gives unity (the minus expresses attraction), which, among other things, means certainty. This actually corresponds to the principle of "completeness", according to which reality in its objective form is an ideal towards which all its approximate descriptions strive - descriptions which, under the name of theories, aspire to describe Nature accurately. Among other things, this is exactly why scientists look for aesthetics in their equations - a kind of unwritten criterion in the search for truth. They seek solutions in the form of whole numbers or simple fractions, or special numbers such as π. Particular attention should also be paid to the number φ = 1.61803..., called the golden ratio, which is the ratio of the lengths of the two parts of the golden section, as well as the limit toward which the ratio of consecutive numbers in the Fibonacci sequence tends. We will return to it in a moment. This is by no means some kabbalah fun. Although who knows, maybe cabalism is based, among other things, on qualities of Nature (yet) unfathomable to us, described symbolically in the Torah (?).
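The two properties of φ invoked here - the golden-section ratio and the Fibonacci limit - can be checked in a few lines:

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio, 1.61803...

# The ratio of consecutive Fibonacci numbers tends to phi
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b
print(b / a, phi)                      # both ~1.6180339887

# phi also satisfies the golden-section property: 1/phi = phi - 1
print(abs(1 / phi - (phi - 1)))        # ~0.0
```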
   Let us note that the number 2 represents a fundamental alternative; it expresses the existence of two mutually exclusive options (up-down, right-left, true-false, attraction-repulsion, etc.). In this context the number 3 defines the number of elements of a basic optional set: plus, minus, zero; in addition it indicates the symmetry of the world - the zero symbol separates "equally" two opposites, the combination of which also makes zero. 

This is a common thing in nature. Take, for example, the absolute quantitative equality of positive and negative charges. It could mean that the (electrically) charged beings arose from the dissociation of some primary creation. This in itself indirectly indicates the existence of structuring (and therefore atomisticity) somewhere deep, below the threshold of observability. A place providing an opportunity for planckons (and future doctoral students) to show off their capacities.
In this context, the third power of the number -2/3 is perhaps (perhaps? rather quite naturally) an expression of the three-dimensionality of space. I already noted this in the fifth article. 

If we were some two-dimensional creatures, instead of -8/27 we would get 4/9... And if we were four-dimensional? Perhaps in the formula which we would then obtain, the exponent would be 4 and the resulting fraction would amount to 16/81. That is a product of imagination which may well be erroneous, especially as my head sometimes brims with jokes. By the way, the energy would then be positive, which would mean a maximum, not a niche. The particles would not exist, at least at this point. Even if it is a joke.
After all, volume is determined by the third power of length. Three-dimensionality. This is worthy of consideration, although it does not converge with the findings of superstring theory. Apparently, to reach micro-reality from above (being obligated, moreover, by the paradigm of observability), we should equip ourselves with some additional dimensions. From the bottom, at the source, however, everything looks simpler. Perhaps at the source the space created by the multitude of planckons is three-dimensional, while the assumed multi-dimensionality is associated with the complexity of the systems forming reality, and constitutes a condition enabling a description of this reality by the means available to our perception.

In our perception there are three types of interactions (the weak interaction, concerning neutrinos, lies outside the limits of our perception - to understand why, it may be worth reading the essay devoted to neutrinos). If we assign to each of these three interactions the three dimensions necessary for the full understanding of dynamic systems, we get nine dimensions altogether. Remember superstrings? Well, such loose associations.
Moreover, if all the interactions ultimately boil down to gravity (as the result of the complexity of structures), no wonder that getting to the elementary structure while unaware of this fact requires additional procedures that extend the data space by additional "dimensions" (and greatly complicate the mathematical modelling). So one might think; but the internal features of the planckon itself still remain a puzzle, and they suggest that to describe them we would have to broaden our spatial "imagination" still further.****
   To complete this issue, in the formula (13), by replacing the mass with the expression defining it (Formula (2) in the preceding article), we obtain another formula for potential energy:
There are (at least in my mind) various elsymons; the number of possibilities is enormous, almost infinite. However, not every system has the features of durability; not every one of them can be a particle with a bearable lifetime. It is not my purpose (and there is no time for it) to seek formal rules of selection and exclusion enabling the construction (on paper) of creations with the characteristics of specific particles, although it is not an impossible task. And if someone embarks on it, it will turn out that the currently accepted Standard Model is a special case, or actually a "local" regularity - after taking into account dual gravity and, of course, all that the concept of the absolutely elementary being brings. We even know which direction to follow. We will talk about it further on. I hope that I will be able to leave the continuation of this exploration (a fascinating research topic) to the young and willing. [Does this sound like the dream of someone affected by Asperger syndrome?]   

   In this context, the existence of "instability" (particle decays) is in itself an interesting subject, and the question "What is its cause?" is not at all trivial. After all, "gravity is only and exclusively a bonding factor." The durability of a system is associated with the concept of balance. A given system is in stable equilibrium when its state is characterized by the minimum of potential energy. For example, a body located at the bottom of a pit is in a state of stable equilibrium. If gravity were only attraction, there would be no problem... and we would not exist. The pit would actually be a one-sided, endlessly deep chasm. Everything would fall into it and nothing would be left behind (another matter whence it would have come, to fall afterwards). It is hard to speak here of balance and stability. Somehow nobody talks about it. The problem ceases to exist if we do not talk about it. And if someone speaks out, well, that is his problem...

It is fascinating how Everything could have arisen in such a situation and, moreover, expand. Scientists have found a way; there is a way for everything. Thus, in spite of all, we exist, with all due respect for the graciously ruling singular black-hole bureaucracy armed with vacuum power.

     
So let us ask: what is it that nevertheless makes the existence of stable systems possible? This mysterious factor that on the one hand does not allow unlimited collapse, and on the other hand causes systems to decay. It is certainly a kind of repulsion. What could be its source? Certainly not electrostatics: as we know, particle decays always involve the participation of neutrinos, which do not interact electromagnetically. We have one more reason to ponder.
     Thus the existence of repulsion changes the situation. Not only that. On the one hand we realized that there were certain problems, and even internal contradictions, arising from the traditional view of gravity; on the other hand these problems have been (ideologically) resolved. But that is not all. Let us go back to our planckons, which permeate one another. As previously noted, there are two niches of potential energy enabling the creation of systems-particles. In particular, where the distance between the centres of the planckons is equal to half the Planck length, there is the niche of zero mass - the photon niche. If the distance is even smaller, the mass of the system becomes negative, which manifests itself as a force of repulsion. As the mutual distance between the planckons becomes smaller, the numerical value of the (positive) potential energy rapidly increases. Thus the system of planckons resembles a spring - certainly not a spring at rest, but a vibrating one, as we will find out further on. This is also reminiscent of the Big Bang and, along with it, of whatever preceded it: gravitational repulsion, concurrent over the entire volume.
2. How to build elementary particles?
     We already have quite a solid "ideological" base for thought, maybe even for preliminary determinations concerning criteria for the construction of particles-elsymons. Let me gather these thoughts, scattered all over the text (this one and the preceding ones), into one whole.
      While considering the potential energy of a system of two planckons, we came to a conclusion pointing to the existence of two minima, two places best suited for planckons to connect with each other. That is quite an important criterion, but the graph of the potential energy which we obtained relates to a system of two planckons, while actual particles are probably quite complex combinations of such systems. And what kind of arrangements are they? They should be durable, that is, when left alone they should not disintegrate. They should therefore be of a cyclical character. I explained the matter in the reference with three stars in the previous article. That is, a particle is a cyclic arrangement: a complex system in which continuous (and ceaseless) changes take place, but these changes are cyclical. It is about vibrations. The planckons forming a particle, and also, independently, their systems (sub-systems of the whole), vibrate. These sub-systems are coupled together. It can be assumed that the distribution of the vibrations of the whole entity is of a Fourier character: the vibrations are "tailored" to each other. The system should be stable; instability causes rapid rupture of the system. Of course, in the case of a system of planckons there is no electromagnetic radiation during these changes - not only because the changes are cyclical, but also because the photon itself is just such a system.
     But that is not all. An additional criterion concerning the structure of particles is based on the (admittedly quite surprising) hypothesis of the saturation of the gravitational field (I wrote about it in the preceding article). The concept of gravitational-field saturation has not been taken into consideration so far, because it is not consistent with mainstream research and, obviously, with the current set of beliefs. The impetus for thought in this new direction (apart from very old, student-era speculations) is the fact that not everything which constitutes current knowledge fits like a glove, makes a monolith. After all, we know that the present state of knowledge no longer anticipates anything new, while astronomical observations generally surprise, and scientists have to adapt to them by (not necessarily justified) creation of new entities. A distinct example of this approach is so-called dark energy (even awarded a Nobel Prize). Elsewhere I presented a (quantitative) model explaining the supernova effect which gave rise to the invention of dark energy - a model, consistent with observations, anticipating the magnitude of the dimming as a function of distance. This creation of new beings left and right by today's science is quite symptomatic (along with the strong opposition that these words arouse amongst many members of the community of physicists; I, though also a physicist, will soon be removed from this fraternity - to my joy, because it will be a sign that they have read what I have written). But this (the creation of new entities) is a fact, even if today's geniuses make fun of phlogiston.
     The new concept should, of course, be checked, if only because it shatters the current order of things. Check so as to reject. Check because... it constitutes a considerable contribution to possible heuristics. It directly opens a cornucopia of research topics, among them traditional research problems, including those still regarded as insoluble, as well as topics far beyond the horizon of expectations of today's science and cognitive awareness. The implications may be numerous, significant to the extent that excluding, without serious study and research, the considerations based on the model presented in my works would not be too wise. Therefore all this should be rejected in advance and, of course, wrapped in silence... And that is what is actually happening...
     In the preceding article, to illustrate things, I used a (somewhat infantile) model of hands, whose limited number expresses the existence of saturation. I stated there that the matter is worth describing primarily in relation to truly elementary planckon systems. It can be said that nature is minimalist. A planckon therefore has the least number of hands by which it can, first, connect with other planckons and thus create the simplest elementary spatial arrangement, while, second, this simplest system is still a source of the gravitational field enabling it to create more complex systems. Thus a planckon has four hands. Thanks to them, an elementary four-walled system comes into being (a regular tetrahedron), also having four hands. Why exactly four? Four points (four planckons forming a regular tetrahedron) define a uniform three-dimensional space (three-dimensional space is determined by four points, the vertices of a tetrahedron, just as three points determine a plane, and two a straight line). Naturally, the system is not static. Planckons vibrate. One can initially ask: How? Do all four approach each other and then move apart in concurrent phases, causing cyclical changes in the size of the tetrahedron? Or maybe they oscillate pair by pair in opposite phases? Here the description would be more complicated. I encourage you to think and search for ways in which the system remains stable. This is of course an elementary system, which links with other identical systems, also vibrating. It is therefore about the stability of a system which can be made of many of these interconnected tetrahedra. In the case of a single tetrahedron, given its elementary character, there is no question of higher harmonics.
   Planckons can also create a dodecahedron*****. Its walls are regular pentagons. It has 20 vertices, which means that it is composed of twenty planckons. Does it, therefore, have twenty outreaching hands? That might follow, but not immediately. "The sides of the regular pentagons may be of the same length as the sides of a tetrahedron, but here there are also longer diagonals (smaller mass deficit, greater mass). Only the tetrahedron is devoid of diagonals." However, the quoted sentence does not take into account the existence of saturation of the gravitational field: planckons on both sides of a diagonal do not feel each other's existence. In our model we would need longer hands, but these do not exist, since all hands have the same length. All free hands reach outwards. It is worth a thought, even if it is a very childish model, since it serves as a model of quantum gravity. And if we remember Gauss's law, we will immediately conclude that inside there is no gravitational field. Inside this "ball" gravity does not exist.
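As a small arithmetic check of the claim that only the tetrahedron is devoid of diagonals (while the dodecahedron has many), one can compare the number of vertex pairs with the number of edges for the two solids discussed. This is my own illustrative sketch, not part of the author's text; the vertex and edge counts used are the standard properties of the Platonic solids.

```python
from math import comb

# Vertex (V) and edge (E) counts of the two Platonic solids discussed in the text.
solids = {"tetrahedron": (4, 6), "dodecahedron": (20, 30)}

for name, (V, E) in solids.items():
    pairs = comb(V, 2)      # all distinct vertex pairs
    diagonals = pairs - E   # pairs not joined by an edge
    print(f"{name}: {V} vertices, {E} edges, {diagonals} diagonals")
# tetrahedron has 0 diagonals; the dodecahedron has 160 (face and space diagonals)
```

So every pair of tetrahedron vertices is joined by an edge, while in the dodecahedron the vast majority of vertex pairs are not, which is exactly the situation where the author's saturation idea (no "longer hands") comes into play.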
   Apart from this, the dodecahedral system should be given priority, if only because of the relationship of the regular pentagon to the golden ratio, which manifests itself as a natural feature of a huge number of physical systems. Particularly noteworthy here are living organisms, in which body proportions are based on the golden ratio, and the question "Why do we (not just us) have five fingers?" is in this context not at all trivial (unless we are the heroes of a cartoon film). Judging by all this, it can be expected that the gravitational mass of a dodecahedral form is five times greater than the mass of a tetrahedral form (20 planckons vs. 4). Is that a far-reaching simplification? You can check it out if you won't be as lazy as I am.
     And maybe these two forms separately make up two different types of particles: leptons and hadrons? The proton would then be, for example, a system made only of dodecahedral cells (so the quarks as well?), while the electron would be a system made solely of tetrahedra. Such clean connections would be very stable and impossible to break by the means at our disposal. Let's recall the magnitude of the minimum potential energy in a system of two planckons: it amounts to 3.67 · 10^18 GeV. And in a tetrahedron the binding energy is not much smaller. Indeed, we have only two absolutely stable particles. And what about the neutrino? If the neutrino does not decay either, then its construction is probably different. By the way, I think that the cause of all particle decays is the background neutrinos. If so, are they also dangerous to themselves? Somehow, until now, no one has observed neutrino decays, although the phenomenon of their oscillation has been discovered. But that is not decay into something else. Moreover, the neutrino itself should also be built from such tetrahedra. Tetrahedron will not break tetrahedron. In the third part of the book I will devote a special series of articles to neutrinos.
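For scale, the quoted 3.67 · 10^18 GeV can be compared with the Planck energy, E_P = √(ħc⁵/G) ≈ 1.22 · 10^19 GeV; the quoted value comes out at roughly 0.3 E_P. A quick sanity check with CODATA constants (my own illustration, not the author's calculation):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602176634e-10  # joules per GeV

# Planck energy E_P = sqrt(hbar * c^5 / G), converted to GeV
E_planck = math.sqrt(hbar * c**5 / G) / J_PER_GEV
print(f"E_Planck ≈ {E_planck:.3e} GeV")                       # ≈ 1.22e19 GeV
print(f"3.67e18 GeV / E_Planck ≈ {3.67e18 / E_planck:.2f}")   # ≈ 0.30
```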
     On the other hand, mixed systems are easier to break, especially by separating subsystems containing dodecahedral and tetrahedral forms connected to each other. The neutrino would do the job, judging by the previous suggestions. The binding energy between these two forms is weaker than between the planckons making up a given form, or between identical forms. That is a rather reasonable assumption. "Homogeneous" systems would be, in principle, impervious to any of the neutrinos ["in principle", because the particles μ and τ (leptons) also disintegrate]. There are only two inviolable systems: the electron and the proton. This is also an important tip.
     And what about the neutrino? What does it do at the moment of breaking a particle? It is probably enough that by its intrusion, by its own field forcing resonance (what a fantasy), it disturbs the order of the particle's autonomous vibrations, and that causes its disintegration. In addition, some evidence suggests that the neutrino is gravitationally (not merely apparently) repulsive. But this is not an explanation; it is at most a qualitative premise.
     If at the same time the neutrino does not fall apart (spontaneously on its own or by force), this would provide an argument that it is indeed responsible for the decay of other particles. Experiment seems to confirm this (the presence of neutrinos during the process of particle decay). It is also a sign that it has a unique feature. Is it repulsion? How else? How else would it perform its breaking role? Otherwise neutrinos would attract each other, unable to cause decay in their own environment. Perhaps the phenomenon of oscillation testifies to this. I have already drawn attention to this possibility in the article on dual gravity (the first article of this series, and the fifth overall). There I pointed out that two systems of negative gravitational mass attract each other. In the light of those comments and conclusions, the hypothesis that the mass of the neutrino is negative is not as crazy as one might think at first glance. In addition, it could shed some light on the conditions in which this particle came into being. Probably earlier than the other particles.
     Separate attention should be given to the systems forming the elementary charge, occurring both in leptons and in hadrons. These are also absolutely stable systems, permanently attached to one or the other kind. It is interesting how this being is built. Is there a third form, or a one-of-a-kind combination of the two known forms (combined in exactly two ways), an absolutely permanent combination? Or maybe, as (yet another) crazy idea, some structural polarity, something that can be associated with a pair of sex chromosomes, XX and XY, this time forming two poles. Continuing this exciting association, we note that there is an asymmetry: the feminine element is on top. We will return to this modelling.
    Quarks are also noteworthy: what differentiates them? And what about their fractional charge? What structural conditions determine membership in a particular one of the three generations (two quarks + two leptons)? And what about the decaying leptons (μ and τ)? What, in structural terms, is the excitation of an electron or other particle? No, I do not have answers to these questions, or to many others. Will the mechanistic insight be able to deal with them? And how is it in fact? Well, untamed imagination. But in spite of all this we have moved a little forward, which does not mean necessarily in the right direction. And if so, does it mean that I have to take care of everything?
     Returning to the vibrations occurring in planckon systems, especially in the context of the above considerations, we note that because of the symmetry of both considered forms, the vibrations in each of them should be coordinated. It may be noted that these elementary geometrical beings are not walled off from their neighbours. They can even, to some extent, penetrate each other and create more complex systems. The condition of their stability, and in fact of their existence, is that the vibrations, despite the complexity, must also be mutually coordinated. So non-cyclical systems cannot exist; they would be absolutely unstable. This, with the increasing complexity of systems, reduces the number of possibilities. We are aware that the number of particle species is not unlimited. The possibility of constructing standard models of particles can be taken as evidence. A lack of restrictions would have made it impossible.
     It is also significant that the resultant mass of these systems is much smaller than the mass of a single planckon. The masses of all particles known to us do not differ very much from each other. I pointed this out in the previous article: they are comparable due to the existence of the two stable, unbreakable forms. These masses are relatively small due to the significant gravitational saturation. These are the particles known to us.
     Among them photons stand out as completely saturated. It is also known that the numbers of positive and negative charges are exactly equal in the entire Universe. Why? Apparently they resulted from dissociation of... what? Does the (unknown to us today) connection of the elementary structural forms of positive and negative charges create a photon? Not quite, surely, given the diversity of photons in terms of energy. This would rather be a kind of addition to a repetitive form of varying complexity: identical chains (maybe chain rings?) with different numbers of links, having defined, characteristic, repeatable tips, for example XX (see above). There will be more on this topic in the third part of the book, in an article entitled "Wave-corpuscular duality in the deterministic version". Today, what we know about photons is that they are bosons transferring electromagnetic interactions. It is worth some pondering, some wheeling and dealing. Dual gravity gives its blessing.
   Ahead of us, of course, are still a lot of questions, a lot of research problems, maybe even more than before I went nuts. That's for sure. For example: What form, what planckon system, has the characteristics of what we perceive as electric charge (something unique, something repeatable, something that occurs in the majority of particles)? What is the (structural, planckonic) nature of the magnetic field? After all, the magnetic field is the result of charge movement. How does the process of particle-antiparticle annihilation unfold? How do these entities actually differ in terms of structure? What system creates a strongly interacting form? What is the structure of quarks? Etc. A heap of questions. Is that bad? It's great. Too bad I'm too old to take care of all this. But there are still many young people, moreover more talented than I am in terms of workshop efficiency.
   The description presented above is of a qualitative character. Only suggestions. You will most likely, dear reader, call them fantasies. I do not recommend, however, rejecting them dismissively in advance, for those who endure reading further may be rewarded. In any case you have to admit that, so far, no one has tried to deal with the problem of the structure of particles. It is simply not considered. Does this mean that it should not be considered? That it is too early? So far, there was no foothold. And now? Rejecting the orthodox understanding of the theory of relativity and quantum mechanics...
     To sum it up, we have affirmed:
a) the possible existence of the duality of gravity, based on a definition of gravitational mass different from the currently accepted one;
b) the existence, taken as fact, of an absolutely elementary being;
c) the existence of structure, graininess, discreteness in the fabric of matter – we accepted it as a truth of nature. In this light, even if there are no planckons, which here model this absolutely elementary being, the attempt to describe the structure of particles is totally legitimate, despite the fact that so far no one has tackled this problem. It is simply a natural consequence of exploration, and one needs no genius to say so. An appropriate awareness of how much remains to be learned, plus a bit of fantasy. This was only a qualitative description. I think it may indicate the direction for further, this time quantitative, inquiry. So we have a source of rather uncommon heuristics. That's what I think. I would add that quite a lot of things can be explained on the basis of the planckon model, moreover without negating those characteristics of matter within our knowledge confirmed by observations and experiments. At the same time, one can quite easily "materialize" the matter waves and, without beating about the bush, provide a simply mechanistic model of corpuscular-wave duality. Matter remains the same matter. And what about dark matter?
 3. What is dark matter?
   We noted above the existence of two niches of potential energy where numerous and varied elsymons can be "hatched": photons separately, "normal" particles separately. This particular feature of the two-planckon system gives a strong inducement to considerations concerning both particles (perceptible) on the micro scale and on the scale of the Universe, especially at the very beginning of the expansion. Until now this was impossible. This "impossibility" is justified, moreover, by the tradition of quantum uncertainty: the beginning of everything is (in accordance with that tradition) necessarily hazy and impossible to clarify. Another thing is that in our thought experiment we brought (two) planckons closer to each other, whereas in the Universe, at the very beginning, at time zero, all planckons already formed one integrated, general system of elsymons: the panelsymon.
     This Universe "at the start" was, in structural terms, similar to a monocrystal. I called it the Panelsymon. Its very rapid expansion, as a result of the gravitational repulsion of its constituent planckons, led to a gradual loosening of the bonds between the components and consequently, at some point, to their breaking and, as a result, the dispersion of various elsymons. At that moment a phase transition took place. Chaos ensued. The final result of the changes taking place was the creation of the material environment, which included photons (quantitatively dominant) and the other massive particles known (and not known) to us.
     There was also "debris": remnants of "retarded" elsymons and, especially, free planckons which until then had served as connectors between the structures and which, after the separation, became autonomous particles. As we already know, planckon dimensions are smaller than the planckon's own gravitational horizon. Searching for them (visually) is therefore useless, although their total gravitational mass should actually be very large, given the huge mass of a single planckon in comparison with the masses of the particles we know.
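The remark that a planckon's dimensions are smaller than its own gravitational horizon can be illustrated with ordinary Planck units: the Schwarzschild radius of a Planck-mass object is algebraically exactly twice the Planck length, so an object of Planck mass and roughly Planck size would indeed sit inside its horizon. This is a sketch of mine, assuming the planckon has Planck mass; the text itself does not give the numbers.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)      # Planck mass  ≈ 2.18e-8 kg
l_planck = math.sqrt(hbar * G / c**3)   # Planck length ≈ 1.62e-35 m
r_schw   = 2 * G * m_planck / c**2      # horizon radius of a Planck-mass object

print(f"m_Planck ≈ {m_planck:.3e} kg, l_Planck ≈ {l_planck:.3e} m")
print(f"r_schw / l_Planck = {r_schw / l_planck:.6f}")  # exactly 2, algebraically
```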
     Their separation would be related to the very rapid expansion, exceeding the speed of light (URELA)******, and to the not fully coordinated scattering of matter after its termination, resulting from the aforementioned phase transformation. The conditions of chaos (including fractality) did the rest. In addition to the matter that evolved to make Us the observable fact (I should be put under special observation), there was also a lot of debris, a kind of waste tip at a building site.
     In the chaos caused by the phase transformation there appeared clusters of planckon matter. There, thanks to the extremely strong gravity, matter began to accumulate. In this matter stars started to form, and later, in these places, galaxies. Now we know what dark matter is. It is simply clusters of planckons. No wonder that dark matter does not radiate, that it is only a source of gravity. This is confirmed, willy-nilly, by studies conducted in recent years: gravitational lensing of light coming from very distant galaxies. As you can see, this model of dark matter is the most consistent and logical. And it doesn't require bringing to life new, occasional beings. Ockham grows a beard.
     Galaxies were formed in fractal condensations of planckons forming dark matter, while the planckons scattered in intergalactic space form a fairly homogeneous network filling the Universe. In these vast spaces the forces acting on a single body (say, a test body) compensate each other almost completely. So we have vacuum and, of course, vacuum energy. Is all this impossible? Is it impossible because it is too simple and does not require equations? Or impossible because no one of importance came up with this idea, and it occurred to me? Why hasn't it occurred to them? Because, a trifle, one needed to define gravitational mass a little differently and accept as possible the existence of an absolutely elementary being (no matter whether a planckon or something else).
     All these planckons, taken together, constitute a major contribution to the total mass of the Universe. They form, I repeat, the dark matter sought by scholars, whose visual detection, for by now obvious reasons, is not possible. Now, if we take into account the existence of the planckon network embracing the whole Universe, we have reason to believe that we can live happily without dark energy to ensure that the density parameter******* of the Universe is equal to unity. We'll come back to it. As for dark energy, in an essay under the telling title "Horizontal Catastrophe" I will lay it to rest and send it into eternal oblivion. Indeed, as I noted above, despite the enormous mass of the planckon network, the forces acting on each object in intergalactic space compensate each other almost entirely. Thus it is difficult to directly detect its potential contribution to the unit value of the density parameter.
     We'll come back to this topic¹ when we get to the cosmological issues.
    The process of scattering of elsymons, in all the diversity of their types, gained, as mentioned above, the characteristics of chaos and fractality. This (taking into account the existence of the extremely massive "dark" matter) would explain the observed heterogeneity in the distribution of visually detectable matter. In summary, we can say that the above-mentioned common elsymon (the panelsymon), with the structural characteristics of a monocrystal, burst at some point during the rapid expansion and "shattered" (like a wine glass hitting the floor). Chaos ensued, and by the same token Temperature came into being. Only then (!). There also appeared the non-gravitational interactions, strong and electromagnetic, plus of course radiation (photons) and the massive particles, and gravity turned into a force of attraction. And there also came the density fluctuations responsible for the large-scale structure observable today. And how were the galaxies formed?
    I have often wondered about it: Why are the masses of galaxies of the same order of magnitude? Of course, not counting the small satellite galaxies. How did this happen? Around 200 million years after the Big Bang, the stars began to appear. Matter was everywhere still sufficiently dense, so stars should have filled the whole space of the Universe. If this set had been absolutely homogeneous, matter would never have congested in numerous centres. According to the cosmological principle, the Universe does not have a clearly specified centre.
      Fortunately, there were (thanks to chaos, i.e. the phase transformation) density fluctuations. However, ordinary fluctuations could cause only concentrations of stellar matter of arbitrary sizes, without any guarantee that they would form galaxies. Fluctuations rather caused the formation of the large-scale structures: galaxy clusters, "walls". And galaxies, as I pointed out above, have comparable masses. This cannot be explained by fluctuations. So here we have a legitimate question: What would limit the size (and mass) of the "islands" out of which galaxies were formed?
     And here the meritorious dark matter plays its part (together with the concept of mass defect based on the new definition of gravitational mass). Fluctuations in the concentration of matter apply also to planckons (dark matter). The places of their greater density began to attract, pulling in nearby stars. Yes, but why are the masses of galaxies similar to each other? Well, the planckonic matter in those clusters kept gathering and densifying, creating a clear centre. However, as the matter densified, the mass deficit of such a system was increasing. Finally, regardless of the number of planckons, the mass of the nucleus of such a grouping, at its deepest level, remained close to zero. As a result, the resultant gravitational masses, regardless of the number of planckons in a given "island", were almost equal.
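The argument can be written down schematically. The notation below is my own (the author gives no formula here), following the mass-deficit idea used throughout the text:

```latex
% Gravitational mass of a cluster of N planckons, per the mass-deficit idea:
%   the binding energy approaches the total rest energy as the cluster densifies,
%   so the resultant gravitational mass tends to zero independently of N.
\[
  M_{\mathrm{grav}}(N)
  \;=\; N\,m_{\mathrm{pl}} \;-\; \frac{\lvert E_{\mathrm{bind}}(N)\rvert}{c^{2}}
  \;\xrightarrow[\;\lvert E_{\mathrm{bind}}\rvert \,\to\, N m_{\mathrm{pl}} c^{2}\;]{}\; 0
\]
```

On this reading, the cores of different "islands" end up with nearly equal (near-zero) gravitational masses regardless of how many planckons they contain, which is the author's proposed answer to why galaxy masses are comparable.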
     If there is no other explanation, we have here, by the way, an indirect confirmation of the whole concept of dual gravity. Why such a mass, and not another? That is one of the secrets of the Big Bang and, of course, of the features of the planckon itself. Knowing these features, it will be possible in the future to answer this question as well.
     Only planckons could densify in this manner. The nucleons couldn't possibly undergo such a process; atomic nuclei are incompressible. After all, the concrete objects that we identify as particles are systems of unbreakable forms (tetrahedral and dodecahedral) with infinitesimal masses (and with high relative speeds, that is, temperature). It's hard to imagine them concentrating in the chaos of the first moments of the Universe. In the beginning they were a relatively homogeneous element. The densification of massive stars and galactic nuclei into gravitationally closed creations was something that would happen only in a distant future, at least several hundred million years later.
   We can already now try to describe the process of the formation of galaxies. After about 200 million years, the temperature was low enough for the process of formation of the first stars to begin. This process intensified with time; matter was then still dense enough. The formation of clusters of dark matter, constituting gravitational centres drawing in the matter of gas and stars, also had to take some time. This had to last no less than a billion years, which is understandable after taking into account the dimensional scale of the systems (in comparison with the dimensions of the stars themselves). So at this stage we have swarms of stars of the first generation, attracted by the islands of dark matter. It's not surprising that the masses of the created groups of stars were of the same order of magnitude, even if they contained completely different numbers of planckons. And what about the stars themselves? The stars pulled toward the centre pressed on each other from all sides. This had to result in a thermonuclear hyper-explosion involving billions of stars. So we have quasars; we have the primary (and the most abundant) source of metals. Not some supernovae, capricious and extremely rare. Let's note that the matter from which the solar system formed had already existed for at least six billion years. I described the creation of galaxies elsewhere, in an essay entitled "How were galaxies formed?". Everything's there. If someone wants to discuss the topics raised here, let him or her first get acquainted with the content of that article.

   Meanwhile, the number of hypotheses about the nature of the alleged dark matter keeps on growing. An example is the hypothetical WIMPs (Weakly Interacting Massive Particles), which for obvious reasons do not participate in electromagnetic interactions. One more unnecessary being. Astronomers are searching for their tracks. What would they say about planckons and elsymons? They'd say nothing...

Reflections
   ¹ Perhaps these planckons also form an invisible homogeneous network, which to some extent can be compared to the famous aether. Would that be its physical meaning? Speaking of this, one could give some thought to the essence of the speed of light. Is it only an "internal" matter of electromagnetic interactions? Most likely not. In the model of the Universe which I have introduced, in particular in the articles of Part 2 and, obviously, in my books published in 2010********, the speed of light is primarily the speed of expansion of the Universe, which is the upper limit of the set of relative speeds of objects, and its invariance results directly from the cosmological principle – there will still be a lot about this in later articles. According to the view that I express on other occasions, the existence of space is a direct result of this movement (and that is the reason why the space of the Universe is flat). I will explain this briefly in the next article.
   Therefore, because the Universe is isotropic on the global scale (according to the cosmological principle), and since the speed of light is the speed of its linear expansion, the speed c should be the same everywhere and in all directions, that is, not dependent on the choice of reference system, or to put it otherwise: absolute. The magnitude c is a parameter of the Universe, and not only the speed of propagation of electric and magnetic fields. No wonder that aether could not be detected in the Michelson-Morley experiment. Since c primarily determines the rate of expansion of the Universe, it cannot depend on the reference system, which is local by definition. Today everything, especially in the context of the opinions expressed above, appears simple. However, over a century ago, when the Universe was considered infinite and static, Einstein's postulate of the invariance of the parameter c made a revolution and paved the way for the physics of the twentieth century. The invariance of the speed of light is the basis of the special theory of relativity. In those former (?) times the invariance of the speed of light was not at all associated with the cosmological principle. And today? I guess it still is not. In those old days there were, in this regard, only Maxwell's equations and some mismatches in relation to classical mechanics based on Galilean relativity. And that was actually the spur, putting it simply, for the research (conducted by many scientists) which ultimately led to the special theory of relativity.
   And now, going back to the aether (needless to say, rejected by Einstein), hereby (tentatively) reactivated in name, it can be expected that the propagation of light occurs as a wave of (longitudinal) densifications and rarefactions in the global planckon network. Fine, but how do we reconcile this with the fact that the electromagnetic wave is a transverse wave? Has this anything to do with the speed of propagation? And what about wave-corpuscular duality (photons being strictly material creations)? One hypothesis chasing another, and along the way the old questions remain valid, even if we have a few less of them. A lot of things still need some thought. So let's add something else. In view of the fact that the Universe expands, the distances between the individual planckons of our network get larger. It follows that the speed at which light propagates should gradually decrease. It's interesting that the same conclusion, or actually supposition, can be reached in another way, on the basis of cosmological considerations (about this in subsequent articles). Astronomical observation (the study of quasar spectra) seems to indicate that the fine structure constant α = e²/ħc is increasing. As we can see, the observationally attested variability of this parameter may mean a variation of the universal constants. One of the options, in view of the increase in the fine structure constant, is that the speed of light diminishes. If the speed of light gradually decreases, the expansion of the Universe will end at a minimum (if not zero) value of the parameter c. Then there will be an inversion, and the Universe will begin to collapse. Judging by some calculations that I will quote later, this is to happen when the age of the Universe is of the order of 10^20 years. That, and not the density parameter Ω, is the basis of the model of the oscillating Universe described in this book and in the first of the books mentioned below.
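As a reference point for the α = e²/ħc discussion, here is its present-day value computed in SI units (where the Gaussian-units expression e²/ħc becomes e²/4πε₀ħc). If e and ħ are held fixed, a decreasing c would indeed make α grow, which is the direction of the argument in the text. My illustration, with CODATA values:

```python
import math

e    = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# SI equivalent of the Gaussian-units alpha = e^2 / (hbar c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ≈ {alpha:.9f}, 1/alpha ≈ {1/alpha:.3f}")  # 1/alpha ≈ 137.036

# With e and hbar fixed, alpha scales as 1/c: a smaller c gives a larger alpha.
alpha_slower_c = e**2 / (4 * math.pi * eps0 * hbar * (0.99 * c))
print(alpha_slower_c > alpha)
```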

   Is it F as in Fantasy, or maybe rather F as in Fiction? What about the aforementioned Higgs particles? Better let them keep quiet. Maybe...

*) Part 1. Bereshit... (In the beginning…)
**) As it will turn out (in the next article), there is a minimum distance to which two planckons can approach each other. No closer (the Pauli exclusion principle?). So there is a maximum positive potential energy of their interaction. The graph does not reach higher (so it is not an asymptote). At this point the function reaches an absolute maximum. It is a notable thing, also in the philosophical context. Facts of nature, ontological truths, are not mathematical constructs, in spite of the judgements of those whose reflectiveness cannot go beyond a mathematical upper limit. I have already pointed this out in different contexts.
***) "The dual character of gravity"
****) Here is some other example of such an enlargement. In 1919 Albert Einstein received a letter whose sender was a mathematician from Königsberg, Theodor Kaluza. In his work, described in the letter, Kaluza presents the possibility of joining "in one equation" two fundamental interactions: gravity (general theory of relativity) and electromagnetism (Maxwell’s theory). The condition for this, as it turned out, was to introduce an additional, fourth spatial dimension. The letter surprised Einstein. Kaluza's article had to wait two years before it appeared under the title: "The problem of unity in physics." This delay did not matter, because the idea of Kaluza found fertile soil only after fifty years of continuous development of science. Today, we are talking about hyperspace having eleven dimensions (including time), where the complete unification of all known interactions is taking place. Superstring theory is the expression of this new approach. Its development is M-theory. It is an encouraging option for the future. And I dare to turn back the course of history? Personally, I have no such intention, although I do not deny that history repeats itself.
*****) The other Platonic solids (the cube, the octahedron, and the icosahedron) are not suitable for our purposes. It is easy to find out why.
******) Urela (ultra-relativistic acceleration) is a process of my invention: accelerated expansion at the very beginning of the Big Bang, a (much more consistent) alternative to "inflation", which in principle is accepted by the scientific community on a "better than nothing" basis, while awaiting something better. It is based on the mutual repulsion of the planckons forming, at the beginning, the network of the "monocrystal" squeezed to its limits: the panelsymon. Urela ended with the phase transformation, at the moment when the gravitational mass of the system came to zero (and the connections between the nodes of the "crystal" broke). It's all in the text (this one and the next).
*******) Density parameter: the ratio of the average density of the Universe to its critical density. It is used primarily in cosmology based on the general theory of relativity (the Friedmann equation) as a criterion defining the development trend of the Universe. It is equal to unity if the geometry of the Universe is flat, that is, Euclidean – which is ascertained observationally. In truth, this is "flatness on a knife edge" or, if you will, balancing on a tightrope, yet inviolable during the whole history of the Universe (starting from the phase transformation). This is, in short, the so-called flatness problem, which characterized Friedmann's cosmology. There will be quite a lot on this subject. It will also turn out that, as a result of my inquiry, this problem disappears. The cosmology of the twentieth century dealt with this problem via the hypothesis of inflation, which I will also dispose of.
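For orientation, the critical density in this footnote's definition, ρ_crit = 3H₀²/8πG, evaluates to a few proton masses per cubic metre. A quick check of my own, assuming the conventional round value H₀ ≈ 70 km/s/Mpc:

```python
import math

G   = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22            # metres per megaparsec
H0  = 70e3 / MPC           # 70 km/s/Mpc converted to s^-1 (assumed round value)

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # Friedmann critical density
print(f"rho_crit ≈ {rho_crit:.2e} kg/m^3") # ≈ 9.2e-27 kg/m^3

m_proton = 1.6726e-27                      # proton mass, kg
print(f"≈ {rho_crit / m_proton:.1f} proton masses per m^3")
```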
********) 1. Józef Gelbard –  “Let’s fantasize about the Universe  I. Oscillating? That’s not that simple.” ISBN: 978-83-62740-06-2  
2. “Let’s fantasize about the Universe II. Into the depths of matter: gravity in sub-dimensions.” ISBN: 978-83-62740-13-0


Next article: Planckons and elsymons. Part 3   
Some implications of the foregoing findings. The problem of force, speed; what is the absolutely minimal distance between planckons? Pauli exclusion principle differently.  
Multidimensionality. Planckons contra strings.