An Algorithmic Approach to Solving the Continuum Hypothesis

The continuum hypothesis has remained unresolved for well over a century. In other words, can it be answered completely? By refuting the cultural competence continuum [1], one can link the problem to the mathematical continuum, and it is possible to disprove the continuum hypothesis [2]. Going a step further, one may extend our mathematical system (by employing a more powerful set theory) and settle the continuum problem through three conditional cases. This is similar to the three cases of the discriminant when solving a quadratic equation. Hence, the proposed algorithmic flowchart can best settle and depict the problem. From the above, one can further conclude that when mathematics (such as ZFC set theory) is extended into new systems (such as forcing axioms), experts can solve important mathematical problems such as CH. Indeed, there are different types of such mathematical systems, much as there were different ancient mathematical notations. Different cultures have different ways of representation, which recalls a Chinese saying: "different villages have different laws." However, the primary purpose of mathematical notation was originally to remember and to communicate. This indicates that the basic purpose of developing any new mathematical system is to help solve a natural phenomenon in our universe.


Introduction
In discussing the contradiction in the continuum hypothesis, set theory can be used as a reference, because set theory originated in Cantor's work on numbers and the conceptual properties of their sets. That work introduced the well-known continuum hypothesis problem. However, there are some alternative disproofs. One is a master's thesis written by this author, which applies a version of the cultural competence continuum (mathematics has a strong relationship with culture and thus a close connection with the continuum hypothesis). The other disproof can be demonstrated through mathematical analysis. Both types of disproof are outlined in the following sections.

A Philosophical Issue in our Number Line System
By employing mathematical analysis, the continuum hypothesis can be disputed as follows. First, theorems from number theory can be used to approximate natural and real numbers by different sequences [3], namely Farey sequences and fractions together with Diophantine approximation for real numbers. The sequence of continued-fraction convergents will converge to any given irrational number. This is because, from the Euclidean algorithm, one can clearly see that every term of the infinite expansion must be smaller than the previous one [3]. In fact, the sequences approximating different rational and irrational numbers then form different infinite series (N.B. the above approximation is completely different from the Mandelbrot set: here only number theory is applied to approximate real numbers, whereas the Mandelbrot set belongs to the fractal geometry of nature, describing shapes that are neither ordinary straight lines nor smooth arcs).
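As a numerical illustration of the convergence just described, the following Python sketch computes the continued-fraction convergents of √2 and checks that each one approximates the target more closely than the last. The target value and the number of terms are arbitrary choices for illustration, not part of the argument:

```python
from fractions import Fraction
import math

def convergents(x, n):
    """Return the first n continued-fraction convergents of x as Fractions."""
    # Standard recurrence: p_k = a_k * p_{k-1} + p_{k-2}, and likewise for q_k.
    a = int(x)
    p_prev, p = 1, a
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    frac = x - a
    while len(out) < n:
        x = 1.0 / frac
        a = int(x)
        frac = x - a
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

target = math.sqrt(2)
cs = convergents(target, 8)
errors = [abs(float(c) - target) for c in cs]

# Each convergent lies strictly closer to sqrt(2) than the one before it.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
print([str(c) for c in cs[:5]])   # ['1', '3/2', '7/5', '17/12', '41/29']
```

The strictly shrinking errors are exactly the monotone behaviour of the approximating sequence described above.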
Conversely, in this case, every former term is smaller than the next. By using a suitable substitution, one may turn these series into power series. At the same time, an analytic function can always be represented by a power series, which is what the infinite Taylor series requires. Thus, one can find an analytic function f(z) together with the Taylor series error term. It should also be noted that the Laurent series is simply an extension of the Taylor series that admits negative powers.
Furthermore, one may find the expansion's residue through the residue theorems of complex analysis [4]. The residues of a Laurent series can be used to evaluate many types of integrals. This suggests that it is possible to evaluate the cardinality of each approximation series. Indeed, all numbers share a commonality: any number can be approximated by a best-fitting fraction. The continuum hypothesis asks whether there is a cardinality between those of the natural and real numbers, and this is what the present paper sets out to dispute. If there were structures with different cardinalities lying between the natural and real numbers, each step upward having diverse properties, there would be no (super) common numerical device, the fractional approximation (in the form of rational numbers), for expressing all numbers. In fact, such an approximation is only a potential infinity and can be expressed in rational form, yet it differs from the real numbers. Hence, a contradiction occurs (although the integers form a subset of the real numbers ℝ, ℝ has additional properties beyond those of ℤ; thus ℝ and ℤ have different properties, which contradicts the fact that both can be expressed through the same (super) common numerical device, fractional approximation by rational numbers). The method above is only a proposed outline of a disproof, applying pure mathematical and complex analysis together with number theory. In such a case, Gödel's incompleteness theorem would not apply to the continuum hypothesis problem (the case of the surreal numbers can be referenced to establish a new set of numbers made up of the aforementioned fractional approximations). At the same time, the real numbers can be eliminated. It is worth mentioning that each irrational number can be sandwiched between two rational numbers.
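As a concrete instance of evaluating an integral by residues: f(z) = 1/(1 + z²) has a simple pole at z = i with residue 1/(2i), so the residue theorem gives ∫ dx/(1 + x²) over the real line as 2πi · 1/(2i) = π. The following Python check compares a crude numerical integral against that residue-theorem value; the grid and cut-off are arbitrary illustrative choices:

```python
import math

# f has a simple pole at z = i in the upper half-plane with residue 1/(2i),
# so the residue theorem predicts the real-line integral equals pi.
f = lambda x: 1.0 / (1.0 + x * x)

# Crude trapezoidal rule over a wide symmetric interval.
a, b, n = -10_000.0, 10_000.0, 400_000
h = (b - a) / n
total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
integral = h * total

# Agreement with the residue-theorem value pi, up to the truncated tails.
assert abs(integral - math.pi) < 1e-2
```

The small deficit from π comes entirely from the discarded tails beyond ±10 000, each of size about 1/10 000.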
In such a case, the number system would be modified to contain only fractions, without the set of real numbers. However, this act may violate the well-ordering principle, which states: "Every non-empty set of positive integers must contain a least element." [5] That is, the set of integers must contain a well-ordered subset, namely the natural numbers. That said, it can be shown that there is no smallest positive fractional number. Thus, the well-ordering principle, and with it the natural numbers, cannot be guaranteed (from some perspectives this may even induce a contradiction between the assurance of a least element in a non-empty set of positive integers and the fact that there is no smallest positive fraction). In that case, the problem becomes a philosophical discussion or an open-ended question without an absolute answer. This implies that our number line system needs to be amended or modified in order to eliminate the aforementioned controversial puzzle.

A Disproof of the Cultural Competence Continuum
There is also a similar disproof of the cultural competence continuum. A study by this author found that children tend to focus only on their passions during free time at home. Without parental supervision, boys tend to enjoy playing computer games, while girls prefer chatting with one another [1]. However, both excessive gaming and chatting are likely to have adverse effects on academic results. Thus, parents must implement stronger measures to monitor their children's ICT usage at home. In some cases, children respond negatively to these measures, which can lead to serious conflict. Under these circumstances, professional intervention (such as a social worker) might be necessary. These conditions allow the consequent behaviour to be studied in detail. What would be the best method for addressing this type of behaviour? The answer might be to allow 'passionate learning'. This consists of a well-balanced lifestyle, effective study methods, and a strong parent-child relationship. Hence, children would be able to study in an enjoyable and relaxing environment. Furthermore, parents should be educated about a philosophy of mediation (well-balanced monitoring) together with maintaining a healthy school-family balance. Although cultural differences exist between countries, ICT education has common values. These include the need for parental education (changing parents' attitudes towards handling ICT requests from their children by mediating the use of messaging platforms for non-educational purposes); good use of child psychology (enforcing well-accepted ICT usage policies to establish a passion for learning by using educational software); and a better educational philosophy (how to educate children about ICT usage at home, such as avoiding pornography during Internet searches). Common values in ICT education must exist in all cultures; the cultural competence continuum would be valid only if these common values did not exist.
More specifically, the cultural continuum model is disputed because it assumes that individuals are cognisant of a range of behaviours arising from ethnic diversity. This author believes, however, that ICT education shares common values as a consequence of (humanised) domino behaviour, and is thus independent of diverse cultures or intra-societal differences. This clearly contradicts the prescribed model. One may employ an algorithmic method here, just like the three cases of the discriminant in solving a quadratic equation. In conclusion, if the common values for ICT education hold, the continuum becomes invalid and is thus independent of culture. Otherwise, diverse cultures imply a cultural continuum.

Literature Review: A History of the Continuum Problem
The origin of the continuum problem may stem from ancient Greece, where scholars were interested in the smallest components of matter. They debated the concept of "infinity", which has two different modern meanings: I) the limiting value of a converging series, the so-called "actual infinity"; II) "potential infinity", which is not an exact numerical value but merely an approximation. For example, 0.333333… is equivalent to 1/3: the former (0.33333…) is a potential infinity with infinitely many decimal places, a process with an ever-growing portion of digits representing it, while the latter is a fractional approximation in the form of a rational number. More specifically, potential infinity is a process that continues to extend, yet remains finite at any stage [6]. On the other hand, consider the infinite sum 1/2 + 1/4 + 1/8 + …, which has a limiting value equal to one; here something infinite exists as a completed object [6], which is an actual infinity. These two examples show the difference between the infinities, which cannot be mixed together when applying the concept of mathematical infinity.
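The distinction can be made concrete with partial sums: the potentially infinite process 1/2 + 1/4 + 1/8 + … falls short of 1 at every finite stage, yet 1 is exactly its limiting value. A short Python illustration (the cut-off of 20 terms is arbitrary; powers of 1/2 are exact in binary floating point, so the comparisons below are exact):

```python
partial = 0.0
sums = []
for k in range(1, 21):
    partial += 0.5 ** k        # add 1/2, 1/4, 1/8, ...
    sums.append(partial)

# Every finite stage falls short of 1: the potential infinity ...
assert all(s < 1.0 for s in sums)
# ... while the shortfall after n terms is exactly 2^-n, vanishing in the
# limit: the completed object, an actual infinity, has value 1.
assert 1.0 - sums[-1] == 0.5 ** 20
print(sums[:3])   # [0.5, 0.75, 0.875]
```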
From the perspective of the ancient Greeks, there were processes that could be repeated indefinitely when dividing everyday matter [7]. This was when the concept of potential infinity was conceived. The ancient Greeks held that actual infinity was not a process in time; rather, it was an infinity that existed all at once (it had an infinite number of elements). Specifically, the process of potential infinity is unending, but its value is finite at any specific time (it contains only finitely many elements). Historically, the motivation that inspired Georg Cantor to develop set theory and point-set topology came from the following question: "Can a function have more than one representation by a trigonometric series?" [8] Before Cantor (1845-1918), there were at least two defects in mathematics: I) mathematicians had difficulty formulating precise definitions and were often governed by intuition or geometric pictures, usually treating real numbers as geometric points on a line; II) they considered only those functions given by analytic expressions [8]. Cantor's work was significant as it contributed to the foundations of mathematics and built upon the ancient Greeks' rigorous and precise mathematical ideas. In addition, the author would like to provide some new thoughts and possibilities regarding this subject.
According to Ferreirós [9], in 1870 Cantor was able to provide a simplified proof of the following: whenever there is a real function, one can always find a unique representation by a Fourier series. Two years later, Cantor generalised this uniqueness result, allowing an infinite set of exceptional points at which the series may diverge or fail to coincide with the function. He also introduced the concept of derived sets (the sets of limit points of a point set P). Derived sets later became a very important tool for both the theory of real functions and the theory of integration [9]. Cantor also showed that there are some infinite sets of points that are not relevant to the representation question for real functions. In 1874, he proved that the algebraic numbers are denumerable (in one-to-one correspondence with the natural numbers), while the set of real numbers is non-denumerable. In addition, the set of derived points is also denumerable. Cantor then observed that there was a link between his 1874 results and the continuum [9]. This indicated his interest in "the labyrinth of infinity and the continuum".
The rest of Cantor's theory has been described in Lam, 2016. To illustrate the continuum problem simply, David Hilbert presented the theory of infinite numbers in a 1924 lecture [10] as follows: suppose there is a hotel with an infinite number of rooms, all of which are fully occupied. When a new guest arrives, the manager asks every guest to move to the room whose number is one greater. As a result, all guests still have a room, and the newcomer occupies Room 1.
In this arrangement, every guest shifts one room higher and Room 1 becomes free (one may compare this with the case ℵ0 + 1 = ℵ0). Later, a coach arrives with an infinite number of passengers. The hotel manager solves this problem by asking the original guests to move to the even-numbered rooms, while the coach passengers are assigned to the odd-numbered rooms.
This way of handling an additional infinite coachload of people can be compared with the case ℵ0 + ℵ0 = ℵ0. Similarly, when infinitely many coaches arrive, each carrying infinitely many passengers, the manager assigns rooms in diagonal order across the coaches (compare with the case ℵ0 × ℵ0 = ℵ0). To summarise, the above is only a brief history of the discovery of infinity. In practice, the Grand Hotel story is an infinite nesting and carries a sense of formalism. In the following section, we shall continue to discuss the continuum hypothesis problem in greater detail, including its relation to Gödel's incompleteness theorems.
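The three hotel manoeuvres can be written down as explicit room-reassignment maps and checked on a finite window of room numbers. The diagonal numbering below is one standard choice of Cantor pairing, assumed here purely for illustration; the maps themselves are defined on all of ℕ:

```python
# One new guest: the guest in room n moves to room n + 1, freeing Room 1.
shift = lambda n: n + 1

# One infinite coach: old guest n -> even room 2n; passenger m -> odd room 2m - 1.
to_even = lambda n: 2 * n
to_odd = lambda m: 2 * m - 1

# Infinitely many coaches: passenger m (>= 1) of coach c (>= 0, with coach 0
# standing for the current guests) is housed along anti-diagonals c + m = 1, 2, ...
def diagonal_room(c, m):
    d = c + m                        # index of the anti-diagonal
    return d * (d - 1) // 2 + c + 1  # rooms filled one anti-diagonal at a time

# On the first 10 anti-diagonals, rooms 1..55 are each used exactly once.
pairs = [(c, d - c) for d in range(1, 11) for c in range(0, d)]
rooms = [diagonal_room(c, m) for c, m in pairs]
assert sorted(rooms) == list(range(1, 56))

# Even and odd rooms never collide, and shifting everyone frees Room 1.
assert set(map(to_even, range(1, 100))).isdisjoint(map(to_odd, range(1, 100)))
assert min(map(shift, range(1, 100))) == 2
```

The three assertions mirror ℵ0 + 1 = ℵ0, ℵ0 + ℵ0 = ℵ0 and ℵ0 × ℵ0 = ℵ0 respectively, restricted to a finite viewing window.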

Comments on Gödel's Incompleteness Theorems
When discussing the mathematical continuum hypothesis, it is common to refer to Gödel's incompleteness theorems. This is because the theorems underlie the result that the hypothesis is independent of (undecidable in) ZFC (Zermelo-Fraenkel set theory with the axiom of choice). The theorems assert that either incompleteness or inconsistency exists in every sufficiently strong formal system. This implies: 1. under a given set of axioms, there are always questions that cannot be answered; 2. a set of axioms can be shown consistent only by appeal to another set of axioms. Although Gödel's incompleteness theorems are widely applied, especially in mathematics, there are still comments that must be addressed: 1. some readings take the theorems to conflict with the axiom of choice, since they presuppose a constructive view of mathematics; yet without such an axiom there would, for instance, be no guarantee of a basis for every vector space; 2. the law of excluded middle would also be invalid, which would imply that the law of non-contradiction cannot hold, rendering much of classical logic false. This is because the law states that "a statement can only be either true or false", whereas Gödel believed there should also be the case of the undecidable. In fact, for the first incompleteness theorem Gödel used the statement "this statement is not provable" instead of the classical "this statement is false". The latter case is similar to what this author has mentioned before [11] regarding the "liar paradox". Indeed, the liar sentence cannot be used directly, because "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic; this result is known as Tarski's undefinability theorem.
Finally, George Boolos used the Berry paradox to sketch an alternative proof of the first incompleteness theorem.
Indeed, Gödel's incompleteness theorems were grounded in Platonism. Philosophers such as Wittgenstein argued against him (the anti-Platonists); Wittgenstein's Tractatus Logico-Philosophicus already challenged such views. Another well-known example is: "Let us suppose I prove the unprovability (in Russell's system) of P; then by this proof I have proved P. Now if this proof were one in Russell's system, I should in this case have proved at once that it belonged and did not belong to Russell's system. That is what comes of making up such sentences. But there is a contradiction here!" [12].
However, Wittgenstein disliked formalisation and as a result posed the following statement: "The curse of the invasion of mathematics by mathematical logic is that now any proposition can be represented in a mathematical symbolism, and this makes us feel obliged to understand it. Although of course this method of writing is nothing but the translation of vague ordinary prose" [12].
The significance of the above statements is that they contributed to philosophers and logicians searching for an "ideal language" [12]. In brief, Wittgenstein's work suggested that there is no meta-mathematics and that, ultimately, our arithmetic may be inconsistent [12]. Furthermore, if one assumes that the proof relation of naive arithmetic is recursive, the argument challenges Gödel's standard perspective and hence his results. As such, whether Gödel is correct depends primarily on one's philosophical view, namely whether one is a follower of Platonism. Finally, there are always comments to be made on Gödel's incompleteness theorems. One might even imagine, under an anti-Platonist view, the chance of another cardinal between the natural numbers and the real numbers. As a result, one might continue to refine the hierarchy and discover more cardinals. Similarly, the issue of extending to a new model by exploring new axioms is included in the following algorithmic flowchart diagram. It is hoped that when old axioms and set existence become invalid, one may continue the process of refining cardinals.

Main Results: An Algorithmic Flowchart that Solves the Continuum Hypothesis
While the continuum hypothesis problem is well known, most people believe that it was solved in the 1960s. But is this really the case? Up until now, there has been a great deal of discussion and many proofs regarding the problem. With reference to the tower of transfinite mathematics [13], this author has tried to develop an algorithmic flowchart that transforms and summarises the crucial stages of solving the continuum hypothesis. The author also hopes that the flowchart will be of assistance to future studies. Initially, one begins by setting ℵj (where j = 0) equal to the first ordinal/cardinal (i.e., the cardinality of the set of natural numbers), and C (or ℵi, where i = 1) equal to the cardinal of ℝ (i.e., the set of real numbers).
The algorithm then attempts to refine the ordinals beyond ℕ according to the axioms of ZF. By checking whether the ordinals violate the axioms of ZF, one extends to a new model by exploring new axioms. The following step is designed to find the intermediate cardinals/ordinals; this branch coincides with the false output of the ZF-axiom-violation check. The algorithm then checks whether the existence of the set is legitimate. If not, it returns to refining the ordinals beyond ℕ. If so, Cohen's forcing extension is applied to find large inaccessible cardinals. The procedure continues until "0 = 1" or its equivalent cardinal is discovered; the whole process then terminates, or else returns to finding inaccessible cardinals.
Simply put, the algorithmic flowchart checks, searches, and classifies in four main stages: 1. ordinals between the natural and real numbers; 2. cardinals/ordinals between the real numbers and the start of the large cardinals κ = ℵk; 3. cardinals after κ = ℵk until "0 = 1" or its equivalent cardinal; 4. termination status: consistency collapses.
Diagram 1. An elementary flowchart for solving the continuum hypothesis problem
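Since the flowchart figure cannot be reproduced in text, the control flow it depicts can be summarised as a symbolic Python sketch. This is purely illustrative: the state names follow the four stages listed above, and every set-theoretic check is a stub rather than a real computation.

```python
def continuum_flowchart(max_steps=10):
    """Symbolic trace of the proposed flowchart; every predicate is a stub."""
    state, trace = "refine_ordinals", []
    for _ in range(max_steps):
        trace.append(state)
        if state == "refine_ordinals":
            # Stage 1: refine w, w + 1, ..., e_0, ... against the ZF axioms.
            state = "check_zf_axioms"
        elif state == "check_zf_axioms":
            # A violation forces an extension to a new model with new axioms.
            state = "extend_model_new_axioms"
        elif state == "extend_model_new_axioms":
            state = "find_intermediate_cardinals"
        elif state == "find_intermediate_cardinals":
            # Stage 2/3: if set existence is legitimate, apply forcing.
            state = "forcing_extension"
        elif state == "forcing_extension":
            # Stage 4: halt when "0 = 1" (consistency collapses).
            state = "halt_0_equals_1"
        else:
            break
    return trace

trace = continuum_flowchart()
assert trace[0] == "refine_ordinals" and trace[-1] == "halt_0_equals_1"
```

In the actual flowchart the failed checks loop back to earlier stages; the linear trace here shows only one pass through the four stages.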

Discussion of the Results
In this section, this author will first discuss the following mathematical terms together with the previous algorithmic results:

Ordinal Numbers
1. Every finite well-ordered set is isomorphic to a unique natural number. An ordinal number is a well-ordered set a such that for each element ξ ∈ a, X(ξ) = ξ, where X(ξ) = {x ∈ a : x < ξ} is the segment of elements of a preceding ξ. Define the successor ω⁺ = ω ∪ {ω}; for any two elements y and z of ω⁺, y ≤ z in ω⁺ iff y ≤ z in ω, or z = ω. Then ω⁺ is an ordinal number, because if ξ ∈ ω⁺ then either ξ ∈ ω or ξ = ω, and in both cases ξ = X(ξ). Thus (ω⁺)⁺, ((ω⁺)⁺)⁺, … are ordinal numbers as well.
The refining process of ordinals ω, ω + 1, ω + 2, … runs until it meets ω + ω (i.e., ω × 2), the second limit ordinal beyond the first ordinal (cardinal) of the natural numbers, ℵ0.
The procedure continues with ω × 3, ω × 4, … up to the infinitely large ordinal epsilon zero. It should be noted that the epsilon numbers are the fixed points of the exponential map, i.e., the ordinals satisfying the equation ε = ω^ε. They are ordinals as well as a collection of transfinite numbers. The least such ordinal is ε0, where ε0 = sup{ω, ω^ω, ω^(ω^ω), ω^(ω^(ω^ω)), …}. The process views this as an intermediate status because it can continue indefinitely; one may not even be able to fit all these ordinals into a countable set.
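For ordinals below ω^ω, the refining sequence can be modelled concretely: write each ordinal in Cantor normal form as a list of (exponent, coefficient) pairs with exponents strictly descending, and comparison becomes plain lexicographic order. A minimal sketch (this finite encoding is an illustrative device, not part of the flowchart):

```python
# Ordinals below w^w in Cantor normal form: [(exponent, coefficient), ...]
# with exponents strictly descending, e.g. w*2 + 3  ->  [(1, 2), (0, 3)].

def ord_lt(a, b):
    """Compare two CNF ordinals; list comparison is exactly lexicographic."""
    return a < b

NAT_5 = [(0, 5)]            # 5
OMEGA = [(1, 1)]            # w
OM_P1 = [(1, 1), (0, 1)]    # w + 1
OM_X2 = [(1, 2)]            # w * 2  (= w + w)
OM_SQ = [(2, 1)]            # w^2

# The refining process visits a strictly larger ordinal at every step.
chain = [NAT_5, OMEGA, OM_P1, OM_X2, OM_SQ]
assert all(ord_lt(x, y) for x, y in zip(chain, chain[1:]))

# Ordinal addition is not commutative: 1 + w = w, but w + 1 > w.
assert ord_lt([(0, 1)], OMEGA) and ord_lt(OMEGA, OM_P1)
```

Lexicographic order works here because a higher leading exponent dominates any tail, exactly as ω^2 dominates every ω·m + n.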
When the collection of all countable ordinal numbers is formed into a set, it is called ω1. Clearly, ω1 is an ordinal number larger than every countable ordinal, and is thus uncountable. Hence, by the definition of ℵ1, there is no cardinal number between ℵ0 and ℵ1, even if one does not assume the axiom of choice.

Cardinal Numbers
Every set is equipotent to a unique cardinal number. Equipotent means that a bijective mapping exists between two sets, say set A and set B.
Suppose α is an ordinal number. Then the power set P(α) is a set with the following properties (Leung & Chen, 1970): I. α is equipotent to a proper subset of P(α); and II. α is not equipotent to P(α). Suppose π is the ordinal number of the well-ordered set P(α), and define the set B = {β ∈ π : β ≈ α}, where ≈ means that two sets are equipotent.
When there is another ordinal number γ with γ ≈ α, one will have γ < π and hence γ ∈ π. (On the contrary, if γ ≥ π and γ ≈ α, then P(α) would be equipotent to a subset of α, although P(α) has strictly more elements than α.) One may conclude that the set B consists of all ordinal numbers that are equipotent to α. Thus, a cardinal number is an ordinal number α such that α ≤ β for all ordinal numbers β that are equipotent to α; in other words, α is the least element of the set of all ordinal numbers equipotent to it.
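Equipotence in the sense above can be witnessed explicitly: the map n → 2n is a bijection between ℕ and the even numbers, so the two sets share the same cardinal even though one is a proper subset of the other. A small Python check on a finite window (the window size is an arbitrary choice):

```python
# n -> 2n witnesses that N and the even naturals are equipotent.
f = lambda n: 2 * n
f_inv = lambda m: m // 2

window = range(0, 1000)
evens = [f(n) for n in window]

assert len(set(evens)) == len(evens)           # injective on the window
assert all(f_inv(f(n)) == n for n in window)   # the inverse recovers n
assert all(m % 2 == 0 for m in evens)          # the image lies in the evens
```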
As shown above, ℵ0, the cardinal of the natural numbers, is the least countably infinite ordinal (and cardinal) ω. If the refining process is continued, there are indeed countable ordinal numbers beyond ω (i.e., ℵ0): they are ω + 1, ω + 2, …, ω × 2, …, ε0, and so on, lying before ℵ1, the cardinal of the real numbers and the least uncountable cardinal. At this point the process terminates and begins searching for new axioms together with new models for the inaccessible cardinals, since the existence of the set is violated (passing from countable to uncountable) and hence questioned. This author is of the opinion that: i) the ordinals are indeed well-ordered, but there is no largest countable ordinal below ω1 (and in front of ℵ1), since these ordinals become arbitrarily large; ii) internally, each ordinal can be presented graphically in terms of 'matchsticks': the ordinal ω², for instance, is built from matchsticks for the ordinals of the form ω·m + n, where m and n are natural numbers, and the resulting plot resembles damped resonance in harmonic motion; iii) it should be noted that ℵ1 = 2^ℵ0 is independent of (undecidable in) axiomatic set theory; this is the continuum hypothesis problem. One can go a step further (by transfinite induction) to ℵα+1 = 2^ℵα, which is known as the generalised continuum hypothesis.

Class
Within set theory (depending on the foundational context), a class is defined as a collection of sets whose members all unambiguously share a common property. In Zermelo-Fraenkel (ZF) set theory, for example, classes are informal, while in von Neumann-Bernays-Gödel set theory a proper class is an entity that is not a member of any other entity. In ZF set theory, two examples are the equivalence classes of sets and the equipotence classes of sets [14].
According to Cameron [15], p. 45, the ordinal numbers do not form a set, but rather an ordered class. If one follows the steps of Zermelo's hierarchy (p. 48), one can construct V, the 'class' of all sets, and "On", the class of all ordinal numbers, i.e., V = ⋃α∈On Vα, where Vα is the set of all sets constructed at stage α (the stage isomorphic to the ordinal number α). Furthermore, Vα ⊆ Vβ for α < β [15]. Hence, using Zermelo's construction, one can explain why the collection of all ordinals is a class, as well as establish a progressively larger hierarchy of ordered sets. It is worth noting that an unrestricted definition of a class leads to Russell's paradox, concerning R, the collection of all sets that are not members of themselves: I. if R contains itself then, by definition, R must be a set that is not a member of itself, which is a contradiction; II. if R does not contain itself, then R is one of the sets that are not members of themselves, and so must belong to R, also a contradiction. To solve the problem, one of the following methods can be used. Method I: alter the logical language or first-order logic, so that the axioms of set theory are expressed in another way. Russell succeeded in developing such a theory with an altered logical language. However, he faced a problem when defining arithmetic through pure logic, an enterprise later shown to be incomplete by Gödel. Because Peano arithmetic cannot be completely formalised in this way, this author believes the approach is not feasible and, as such, does not recommend it.
Method II: alter the axioms of set theory in order to retain the logical language as expressed. The paradox is resolved by allowing only the construction of subsets of existing sets, {x ∈ z : φ(x)}; it follows that there is no set containing all sets, which is a useful result. This approach may be the most suitable means of repairing the defect (of proper classes) arising from Zermelo's construction. This author proposes adding new axioms when refining the previously violated ZF axioms in the algorithmic flowchart, instead of resorting to higher-order logic. In addition, first-order (predicate) logic differs from propositional logic in that it has quantifiers such as the symbol ∀; it can be viewed as an extension of traditional propositional logic.
In brief, the issue of the size of a set can be resolved if a class-based approach is employed. Classes prevent an over-expanding set, which would otherwise lead to Russell's paradox. In this setting, Zermelo's construction points towards the inaccessible cardinals of transfinite mathematics [13].
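Russell's paradox discussed above can be mimicked in code: model the naive comprehension R = {x : x ∉ x} as a membership test and ask whether R ∈ R. The question can never settle, which Python reports as unbounded recursion. A minimal sketch (the class name and the recursion cap are illustrative choices):

```python
import sys

sys.setrecursionlimit(200)   # keep the inevitable regress short

class RussellSet:
    """Naive comprehension R = {x : x not in x}, modelled as a membership test."""
    def __contains__(self, x):
        # x is in R exactly when x is not a member of itself.
        return x not in x

R = RussellSet()
try:
    R in R                   # R in R iff R not in R: no consistent answer
    verdict = "decided"
except RecursionError:
    verdict = "paradox: membership never settles"

assert verdict.startswith("paradox")
print(verdict)
```

The infinite regress is the computational shadow of the logical contradiction; axiomatic set theory avoids it by forbidding the unrestricted comprehension in the first place, as in Method II above.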

Inaccessible Cardinals
According to Cameron [15], a cardinal α is inaccessible when the following three conditions hold simultaneously: I. α > ℵ0; II. for any cardinal λ < α, we have 2^λ < α; III. the union of fewer than α ordinals, each smaller than α, is smaller than α. When the size of a set is so large that its existence is questionable, the concept of class, explained above, should be applied. Larger inaccessible cardinals can then be found using the technique of forcing (extension) developed by Cohen in 1963. If κ is a cardinal of uncountable cofinality, one can find a forcing extension in which 2^ℵ0 = κ. This author's algorithmic process is expected to continue searching through the large cardinals until it terminates at the condition "0 = 1" or its equivalent cardinal. The results on obtaining inaccessible cardinals can be achieved by the method of forcing extension [16], since consistency breaks down for cardinals larger than the Reinhardt cardinal.
In each of the above cases, the algorithmic approach to the continuum hypothesis lists all the feasible ordinals, together with the intermediate cardinals, up to the largest one, at which point consistency may collapse as the refinement continues. We shall now proceed to the remaining parts.
Gödel's constructible sets: the relative consistency of the continuum hypothesis (CH) with respect to the axioms of Zermelo-Fraenkel set theory with the axiom of choice (the first half of the continuum hypothesis problem) was proved by Gödel. That is: if ZF(C) is consistent, then ZFC + CH is consistent (where C denotes the axiom of choice); equivalently, if ZF(C) is consistent, then ZFC ⊬ ¬CH. Gödel showed that any set-theoretic universe U fulfilling the axioms of Zermelo-Fraenkel set theory contains a sub-universe L ⊆ U, called "the universe of constructible sets", which satisfies the axioms of Zermelo-Fraenkel set theory together with the axiom of choice and the generalised continuum hypothesis.
Cohen's forcing extension method: to solve the second half of the continuum hypothesis problem, Cohen introduced the method of forcing, in which a set-theoretic universe is extended by adding new subsets to infinite sets that already exist in the initial universe. Historically, the first application of forcing added sufficiently many new subsets of ω, known as Cohen reals; as a result, the cardinality of the power set of ω (in the extended universe) jumped to at least ℵ2. Cohen thus concluded the consistency of ¬CH relative to the axioms of set theory: if ZF(C) is consistent, then ZFC + ¬CH is also consistent. Hence, Hilbert's continuum hypothesis problem was settled.

Category Theory
There are, however, defects in Cohen's method of finding large cardinals. One need not be concerned with a single universe of sets, since there is a lack of understanding regarding the sequence of cardinals; what matters are the operations required to construct the sets. Mathematical practice requires a unique universe of discourse, yet category theory requires several levels of universes, which competes with Cohen's perspective. Indeed, when dealing with higher and higher levels of classes, one studies increasingly large categories. The simplest means of handling the problem is to use Tarski-Grothendieck set theory. Modern mathematicians work as though there is only one universe of discourse; they consider the axiomatisation ZF + a, which characterises "the particular" universe of discourse, where a is a proposition about the inner structure of the universe. By definition, a universe (in set theory, type theory, category theory, and the foundations of mathematics) is the collection of entities one wants to consider in a given situation; philosophically, it is a domain of discourse. Thus, as an alternative, category theory, or even a Grothendieck universe, can be used when dealing with increasingly high levels of classes.

Model Theory
This is the study of mathematical structures, such as groups, fields, graphs, and the universes of set theory, using mathematical logic and a formal language. With regard to the continuum hypothesis problem, model theory can be used to investigate the structure of the large cardinals or even to exploit their topological hierarchy. Indeed, among the large cardinals one finds categories such as strongly compact cardinals, supercompact cardinals, and extendible cardinals. In other words, there are meeting points between topology and model theory, for instance concerning the cardinal invariants p and t [17].
Inner model: In 2007, mathematicians discovered a separable space with an uncountable closed discrete subset that satisfies a certain relative version of countable paracompactness. This showed the existence of inner models with measurable cardinals [18].
Outer model: Having been unable to obtain larger cardinals other than the Woodin cardinal, mathematicians tackled the problem from another direction and acquired L-like properties in a forcing extension that preserves the large cardinals. By doing so, it was possible to handle arbitrarily large cardinals [19].
Alternatively, there may be a genuinely deep inconsistency; the HOD conjecture points towards one. While the proposed algorithmic flowchart mainly follows a traditional route to settling the continuum hypothesis, there are certainly other methods, such as model theory and category theory. Although there can be cardinals after "0 = 1", this author ends the flowchart there, completing the algorithm. The flowchart can be developed further towards a very deep inconsistency; however, that problem must be addressed using a new axiom of set theory, the Wholeness Axiom [20].
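The three conditional cases of the proposed flowchart, analogous to the discriminant cases of a quadratic equation, can be sketched schematically in Python. The function name and the case labels below are purely illustrative encodings of this author's flowchart, not standard terminology or library calls:

```python
def ch_flowchart(extension):
    """Schematic encoding of the proposed three-way flowchart.

    `extension` names the assumed extension of ZFC; the mapping of
    inputs to outcomes is illustrative only, mirroring the three
    conditional cases described in the text.
    """
    if extension == "type-1 forcing axiom":
        # Force an uncountable subset into a countable set and
        # optimise the mappings (the Euler-Lagrange step below).
        return "CH disproved"
    elif extension == "type-2 forcing axiom":
        # Force a countable subset into an uncountable set, up to
        # the Axiom of Wholeness to avoid inconsistency.
        return "CH solved"
    else:
        # Weak ZFC alone: independence (Goedel and Cohen).
        return "CH undecidable"

print(ch_flowchart("type-1 forcing axiom"))  # prints CH disproved
```

Just as the sign of a discriminant routes a quadratic into one of three outcomes, the choice of extension routes the problem into one of the three terminal cases.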

Axiom of Wholeness
The basic principle of the Wholeness Axiom is that it omits the schema instances of j-formulas; the inconsistency arising from the Replacement Axiom is thereby avoided, and the Axiom of Choice is allowed without any modification to Replacement. Indeed, the wholeness axioms are what we wanted as the "ultimate axioms of infinity", bounded by inconsistency with ZFC. There is also an Ultimate "L", which theoretically extends our orderly world of constructible sets to include all large cardinals. Some (including this author) may believe that Ultimate L implies V = HOD; however, rank-into-rank axioms may not be consistent with this. If we assume consistency, the strength of the wholeness axioms is strictly increasing along their hierarchy.
In other words, if j: Vλ → Vλ witnesses a rank-into-rank cardinal, then ⟨Vλ, ∈, j⟩ is a model of the Wholeness Axiom. Therefore, if the wholeness axioms are consistent with ZFC, then they are also consistent with ZFC + V = HOD.

Conclusion
The continuum hypothesis problem has existed for nearly a century and began with the question of whether a function can be expressed by a trigonometric series. Using Fourier series, Cantor was able to represent such functions, and the representation question ultimately led to the famous continuum hypothesis. The limitation of this paper is that the flowchart assumes the classical continuum hypothesis results of Gödel and Cohen, and the author's algorithmic flowchart is an intuitive and elementary aid to personal understanding and scholarly study. There is still a large amount of new research on the continuum hypothesis problem, such as the development of the inner and outer model programs. It is also clear that there are numerous ways of attacking the problem, each possibly resting on an alternative view of set theory, e.g., New Foundations, conceived by Willard Van Orman Quine. As a result of this study, the author hopes that more people will be encouraged to look for creative solutions to the continuum hypothesis problem; the algorithmic flowchart outlined here is just one of many tools that can be employed. From the physics point of view, the continuum hypothesis concerns the width and height of our universe, as described in Olsen and Naschie [13]. Hence, a computer program implementing the algorithmic flowchart would be extremely useful in solving the problem. Applications of the continuum hypothesis usually focus on the electromagnetic spread spectrum; this author suggests that energy harvesting would benefit the most, which will be discussed further in the next paper. Another application is the supercomputer project "MareNostrum" in Spain, which simulates the beginning of our universe and other phenomena within it.

Cover Letter
Adopting an algorithmic approach in the form of a flowchart suggests a way of solving the continuum problem. What is the significance of finding an answer to this issue? The reply is that we can then address problems such as transfinite induction and recursion.2 These issues are related to hypercomputation3 (super-Turing computation), which refers to models of computation that give non-Turing-computable outputs. One case of hypercomputation is a machine that handles the halting problem. Physically, there are several models of hypercomputation; the three most common are:4
1. Accelerated Turing machines using superluminal particles
2. Relativistic computers
3. Quantum computing
Specifically, I am interested in the quantum model of hypercomputation. In fact, the quantum model could be based on probabilistic quantum modelling or even quantum Bayesianism (QBism). Recently, scientists performed an experiment showing that reality can turn back perfectly to its original status regardless of whether somebody changed a quantum bit (qubit) after receiving it in the past. In simple terms, the study found that when a qubit is sent to the present, it returns to its recent status no matter whether someone changed it in the past.5 This implies the possibility of hiding information. However, from my perspective, we may compute the changes made by the person in the past (after the qubit is sent back to the past) via the probabilistic model or QBism. This principle would evaluate the effectiveness of a quantum computer.6 At the same time, there is criticism of hypercomputation. This may lead to biological hypercomputation, or a subclass of the computational theory of mind. Therefore, the following question may arise: can we model our nervous system or brain? In practical terms, there are neural types of hypercomputation which try to model these biological components.
In this regard, I would like to present my HKLam theory, which may be generalised somewhat; however, an artificial neural network (ANN) could lose the probabilistic property of a Bayesian network (i.e., it cannot estimate the probability of an event given prior observations or prior knowledge7). My proposed generalised version of HKLam theory is as follows: "Multiple sources of input (with hidden layers) feed multiple outputs in an artificial neural (causal-dependency) network (ANCDN) mediated by random variables; these are subsequently mapped to multiple layers of domino causal events with a suitable linear transformation. The converse of this theory is also true when foreseeing the evolution of the source." Remark that only the generalised version, the ANCDN, has a probabilistic relationship like that of a Bayesian network; thus, we can apply Bayesian inference for analysis. That is, if we modify a normal ANN into a causal-dependency one, another kind of analysis (like that of a Bayesian network, or the causal-dependency one) becomes available to explore.8,9 This will be my final version of the generalised HKLam theory.
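The probabilistic property referred to above, which a plain ANN lacks but a Bayesian network (and hence the proposed ANCDN) retains, is ordinary Bayesian updating of a belief given evidence. A minimal sketch, with purely hypothetical numbers:

```python
def bayes_posterior(prior, like_h, like_not_h):
    """Posterior P(H|E) from prior P(H), likelihood P(E|H),
    and P(E|not H), by Bayes' rule."""
    p_e = like_h * prior + like_not_h * (1.0 - prior)  # total probability of E
    return like_h * prior / p_e

# Hypothetical numbers: prior belief 0.3, P(E|H) = 0.9, P(E|~H) = 0.2.
post = bayes_posterior(0.3, 0.9, 0.2)
print(round(post, 3))  # prints 0.659
```

This is the kind of "probability of an event given prior observations" that motivates modifying an ANN into a causal-dependency network in the first place.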
Indeed, I am the sole author of this paper and, as far as I know, I have quoted all relevant material with suitable corresponding citations. If I discover any new and important uncited material for this paper after publication, I will send an email to make the necessary amendments.
Yours Faithfully, Lam Kai Shun

Declaration
I declare that all of the content presented in this paper is purely my own work and that all of the references have been properly quoted under corresponding citations (as far as I know). There are no conflicts of interest for this paper, and no funding sources. I thank my former Department's professors, Prof. Siu Man Keung and Dr. Leung Kam Tim. I also thank the library of the University of Hong Kong for kindly lending me books related to this paper's referenced work; the library inspires me very much.

Remarks
1. This author notes that, originally, the set of fractional approximations to real numbers is countable, because it maps the natural numbers to the rationals, which must be countable. Later, the sum of all these approximating rational numbers is mapped to an irrational (real) number; but the irrational numbers are uncountable, so the sums (which are rational) become uncountable at the same time. This contradicts the fact that the set of all rational numbers must be countable.
N -> Q -> R (R is an uncountable set); N -> P(N); n |-> n where P(N) denotes the power set of natural numbers; Q -> P(Q); q |-> q where P(Q) denotes the power set of rational numbers.
In other words, one can find a set of rational fractional approximations to approximate the real (mostly irrational) numbers. In that case, the whole set of approximating fractions (which is rational) should obviously be uncountable, contradicting the fact that the set of all rational numbers must be countable. Hence, there must be something wrong in our present set theory of number. This further implies that the theorems surrounding the continuum hypothesis also fail, or that it will be disproved in the coming days when one finds an intermediate uncountable set with cardinality strictly between those of N and R.
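The countability of the rationals invoked in this remark can be exhibited concretely by an explicit enumeration of Q. The sketch below uses the standard Calkin-Wilf sequence, which lists every positive rational exactly once; this enumeration is an illustration of countability and is not part of the original argument:

```python
from fractions import Fraction

def rationals():
    """Enumerate every positive rational exactly once, via the
    Calkin-Wilf sequence: q_{n+1} = 1 / (2*floor(q_n) + 1 - q_n)."""
    q = Fraction(1, 1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)

gen = rationals()
first = [str(next(gen)) for _ in range(6)]
print(first)  # prints ['1', '1/2', '2', '1/3', '3/2', '2/3']
```

Any limit of such approximations, however, may be irrational; the enumeration covers only the rational approximants themselves.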
2. This author also notes that, practically, there are forcing axioms, obtained by iterated forcing. The aim of the forcing method is to give us a generic bijection between countable and uncountable sets. That is the idea of non-pathology, which can preserve uncountability at a minimum (Justin, 2010 10). The exact quantification of non-pathology can thus yield forcing axioms of different strengths (corresponding to different uncountabilities) at a minimum. Therefore, if one can find the best forcing axiom with optimised or well-balanced pathology, it is possible that the continuum hypothesis will then be disproved. This means one can construct the best optimised set X, in which one adds uncountably many elements (a subset of ℵ1) to the countable set. Practically, there may be an infinite number of mappings to consider in the optimisation. To solve the problem, we may apply the Euler-Lagrange equation to these mappings and hence compute the best optimised mapping. The required set's cardinal will then lie strictly between those of N and R, and CH is disproved in ZF(C) + FA. In other words, |X| > ℵ1 and |X| < 2^ℵ0; this implies the existence of a set X lying between ℵ1 and 2^ℵ0, i.e., ℵ1 ≠ 2^ℵ0.
The main reason for the above result comes from the Baire Category Theorem: "No compact space can be covered by countably many nowhere dense sets." (Moore et al., 2016, p. 1). According to them, there are some natural classes of compact spaces for which the Baire Category statement is always true; this is known as a forcing axiom. 3. Gödel's incompleteness theorems violate the mathematical philosophy of intuitionism: a mathematical statement is intuitively either true or false, and undetermined cases do not exist. Otherwise, the basic concepts of mathematical intuitionism are violated.
Case I: During a period of influenza, one must decide either to have the vaccine or not; there is no case between taking and not taking it. Case II: A coin has only two sides; the undetermined case can never exist, since the coin cannot have a third side. Hence, a contradiction may occur with Kurt Gödel's incompleteness theorems in such cases, since he stated that one cannot determine the consistency of an axiomatic system. Case III: In a conversation among human beings in any language, such as English, there are only complimentary and derogatory terms; there are no terms lying between the two sides, while the middle terms have the semantics of expressing only objective views or true facts. They are in no way "neither complimentary nor derogatory". 4. This author's algorithmic flowchart is an intuitive and elementary aid to personal understanding and scholarly study. There is still a large amount of new research on the continuum hypothesis problem, such as the development of the inner and outer model programs and large cardinals. This is related to the width and height of our universe, as one may be more interested in the why and how; this project tries to give an answer. (As in Lingamneni 2017, for any given model of set theory V, the inner model L of the universe consists only of constructible sets. Roughly speaking, besides the existence of Whitehead groups, there are also non-free Whitehead groups; these are the so-called width independence phenomena. There is also the other half of the proof, the height independence phenomena, namely those large cardinals whose existence is inconsistent with V = L. These large cardinals can be constructed by meta-mathematical techniques.) 5. This author remarks that the continuum hypothesis is undecidable when people focus only on the weak theory of ZFC, where Gödel's incompleteness theorem remains valid.
This is because there are fewer philosophical and mathematical theories within ZFC that can be applied to CH. Whenever one can develop the corresponding axioms or theories through Cohen's forcing extension method with the two types of iterative forcing axioms, CH can ultimately be either disproved or solved. 6. A finalised three-way flowchart design for solving the mathematical continuum hypothesis problem. 7. Hence, and conclusively, Gödel's incompleteness theorems as applied to CH may finally be circumvented when one agrees to extend ZFC with the forcing extension method and finds its iterative forcing axioms. To sum up, the continuum hypothesis problem indeed falls into: (i) the Disproved Case, by forcing an uncountable subset into a countable set (i.e., the type 1 forcing axiom) and solving the infinite-mappings problem by the Euler-Lagrange equation until the optimised mapping (or set) is obtained; (ii) the Solved Case, by forcing a countable subset into an uncountable set (i.e., the type 2 forcing axiom) up to the Axiom of Wholeness to avoid inconsistency; (iii) the Undecidable Case, where only weak ZFC is valid (i.e., without any forcing axiom extensions), which obviously indicates the independence of the continuum hypothesis from ZFC (Zermelo-Fraenkel set theory with the axiom of choice). 8. We may solve the infinite-mapping problem by the following steps: (i) for each forcing of an uncountable subset into the countable set, we may construct a set, say Ui; to each set Ui there corresponds a cardinality, say cardi, and a path length li, and we may sum piecewise each cardi * li; (ii) the summation over cardi can then be transformed into an integration with respect to a small change in the path length li.
Consider the problem of finding an extremal function y = f(card) such that y' = df(card)/d(card), y1 = f(card1) and y2 = f(card2). (iii) Once we have f(card), we may form the respective functional J[y] = ∫ from l1 to l2 of L(card, y(card), y'(card)) d(card), where l1, l2 are constants (path lengths), y is twice continuously differentiable, y'(card) = dy(card)/d(card), and L(card, y(card), y'(card)) is also twice continuously differentiable with respect to its arguments. N.B. One may need to apply the change-of-variable method, but the mathematical details are beyond the scope of this paper.
In such a case, the functional J[y] will attain its minimum at the function f (or J[f] = 0) if and only if we can solve the respective Euler-Lagrange equation; that is, we have found the wanted minimising function f with the corresponding path length l. Indeed, the mathematical details of computing the function f are beyond the scope of the present article; I only outline the general principles (steps), or the mathematical method, for the disproof part of the continuum hypothesis.
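As a concrete illustration of the Euler-Lagrange step, take the simplest choice L = (y')², whose Euler-Lagrange equation y'' = 0 has straight lines as its extremals. The discretised minimisation below recovers this numerically; the functional and the boundary values are illustrative assumptions, not the actual cardinal-to-path-length mapping of the argument above:

```python
def minimise_functional(y_left, y_right, n=20, sweeps=2000):
    """Minimise the discretised functional J[y] = sum ((y[i+1]-y[i])/h)^2 * h
    with fixed endpoints, by relaxation.  Setting the gradient with respect
    to each interior y[i] to zero gives y[i] = (y[i-1] + y[i+1]) / 2,
    the discrete form of the Euler-Lagrange equation y'' = 0."""
    y = [y_left] * n + [y_right]          # n + 1 grid points on [0, 1]
    for _ in range(sweeps):
        for i in range(1, n):             # relax each interior point
            y[i] = 0.5 * (y[i - 1] + y[i + 1])
    return y

y = minimise_functional(0.0, 1.0, n=10)
# The minimiser is (numerically) the straight line y(x) = x.
print(all(abs(y[i] - i / 10) < 1e-9 for i in range(11)))  # prints True
```

The same relaxation idea extends to richer Lagrangians, though for a general L one would solve the full Euler-Lagrange equation rather than the simple averaging rule used here.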
There is no doubt that there are infinitely many mappings, or pathologies, for the Suslin tree in the case of the disproof of the continuum hypothesis; we need to find the minimum path with the best optimisation. Indeed, I have just outlined the disproof of the continuum hypothesis by the Euler-Lagrange equation above. This is because the main concern of this paper is the recommendation of the novel three-way algorithm (or flowchart diagram): 1) Disproof; 2)