Commutative Hypercomplex Mathematics

Clyde M. Davenport
cmdaven@comcast.net

Introduction

Below, we shall give a comprehensive sketch of the 4-D commutative hypercomplex algebra (not quaternions) and its associated function theory and analysis. The great advantage is a complete, classical 4-D function theory, something that is impossible with quaternions and other noncommutative or nonassociative systems. It has significant potential for replacing or complementing vector analysis in science and engineering calculations. Fortunately, a wide audience should be able to follow the discussion, because the commutative hypercomplex math is derived directly from well-known, fundamental concepts, such as groups, rings, calculus, complex variables, matrices, complex function theory, and vector analysis. For a discussion of elementary group and ring theory, see any good introductory text on abstract algebra, such as Herstein, 1986. There will be no occasion for deep theorems and complicated proofs.

First, some background. We need to review the development of vector analysis in order to see where a new type of mathematics might make a contribution.

It is not widely taught, nor do many people know, that Sir William Rowan Hamilton developed the quaternions (4-D numbers similar to those that we will discuss here, except noncommutative) in the 1830s specifically for field calculations (this was before vector analysis was developed). Scientists and engineers of the time vehemently resisted their use. Something in the mindset at the time simply could not accept the notion that there could be a "fourth dimension," especially if it was claimed to be time. They persisted with a primitive combination of component-by-component calculation and extensive use of geometry. By the late 1800s, their field calculations were accompanied by elaborate geometrical figures that resembled the 2 × 4 framing of a house.

To allay their aversions, around 1880, J.W. Gibbs in America and O. W. Heaviside in Britain reformulated quaternion analysis so that all expressions would be constrained to three dimensions or less. For example, in the cross product of two three-dimensional vectors they arbitrarily set i×i = j×j = k×k = i×j×k = 0 so that the result would come out as another 3-D vector. The quaternion product of two three-dimensional vectors is ab = -a·b + a×b, which has a scalar part and a 3-D vector part (i.e., is 4-D). It is the same with vector operators; for example,

∇a = -∇·a + ∇×a .

Therefore, Gibbs and Heaviside avoided the quaternion product notation, and used only the dot and [modified] cross product components in what they cleverly renamed as vector analysis. Scientists and engineers accepted this subterfuge because it met their prejudice about 3-D being inviolate and it did not have the word "quaternion" mentioned anywhere. Nevertheless, vector analysis is a form of quaternion analysis [Crowe, 1967].

Hamilton developed the quaternion algebra by trial and error. Apparently, he had a prejudice of his own: that every nonzero element (i.e., having at least one nonzero component) should have a multiplicative inverse. By adopting this view, he was led directly to quaternions, because we now know that the quaternions make up the only 4-D division algebra. What he didn't realize was that the quaternions form a group ring [i.e., the 1,i,j,k elements and their negatives form a group of order eight (the quaternion group, of course), and elements of the form 1x+iy+jz+kw, with x,y,z,w real, form a ring]. He didn't realize it because the notions of group and ring hadn't been developed at that time. We now know that there are exactly five distinct groups of order eight upon which group rings of 4-D elements may be constructed [Burnside, 1911]. The fact that we exclusively use the quaternion case (vector analysis) in science and engineering apparently stems from the fact that it was discovered first and the others were not examined for potential application when they were eventually uncovered. For a timeline on the development of quaternion analysis, see Jeff Biggus' quaternion history page.

There is no doubt that vector analysis is an effective tool. It underlies almost all scientific and engineering calculations, and might be considered as a major foundation block of our modern technological society. Putting humans on the moon in 1969, for example, was a gigantic problem in vector analysis. However, we can see from the above that it is an ad hoc system, one which often obscures as much as it reveals. We shall see that the commutative hypercomplex mathematics can add simplification, ease, and theoretical insight.

Group algebras, including those mentioned here, were first studied and described over one hundred years ago [Peirce, 1881], [Study, 1889]. No less than Dedekind published a paper [Dedekind, 1885] describing algebras that are direct sums of copies of the complex field, including the commutative hypercomplex algebra that will be described below. Accordingly, I do not claim original discovery of the commutative hypercomplex algebra, but do claim origination of certain of its representations, interpretations, and the formulation of the function theory and analysis that shall be constructed upon it, below. Recent work on 4-D function theories is given in [Price, 1991] and [Kantor and Solodovnikov, 1989]. They, of course, use a different mathematical approach and a different notational system than that in the present work.

Commutative Hypercomplex Algebra

Because we are working with numbers that have four distinct components, we must construct a formal algebra upon a basis group and (in the present case) a commutative ring. We must do this, in particular, so that we may have a consistent definition of multiplication and division of 4-D numbers, and consequently a consistent algebra. That being so, we are greatly constrained by the fact that there are only five distinct basis groups from which to choose.

In order to keep this manageable for an Internet reader, we shall merely sketch the line of reasoning and the main results. For convenience, we shall use D to refer to the commutative hypercomplex algebra. This choice alludes to "duplex space," whose significance will become evident below. The notation also suggests a natural progression from the classical complex algebra, C. Readers wanting more detail may refer to the author's monograph [Davenport(2), 1991] or his contribution to the book by Ablamowicz(1), et al., 1996. The aforementioned monograph is out of print, but a Web search via www.worldcat.org with the keywords "commutative hypercomplex" yields a partial list of science libraries wherein a copy may be found.

We shall be aiming our formulation at physics applications, so we will use the notation Z=1x+iy+jz+kct, with x,y,z,ct real, for an element of the algebra D. In the fourth component, t represents time, and c is a scale factor depending upon the medium in which we are working; in a vacuum, it would be the speed of light. What we really desire for these elements is that they would form a field, but it was long ago proved that no such field can exist [Frobenius, 1877]. Nevertheless, we shall see that something analogous to a field is possible and will satisfy all our requirements attendant upon creating a function theory and analysis and applying them to physics applications.

Basis Group

We start by establishing a group upon the basis elements and their negatives {1,i,j,k,-1,-i,-j,-k}. We must have a group so that multiplication is closed on the set. We cannot have, for example, ij=m, with m arbitrary. We must include the negatives because, for example, ii = -1. The group must be Abelian (commutative) because we ultimately want multiplication of elements of D to be commutative. The necessary conditions for a group are as follows:

Let S={1,i,j,k,-1,-i,-j,-k} be the set of basis elements and let a,b,c be elements of the set; then we must have the following properties for a commutative group:

a· b = c       Closure under multiplication

a·(b· c) = (a· b)·c   Associativity

a·(b+c) = a·b+a·c  Distributivity (strictly a ring property; listed here because the ring built below will need it)

a· b = b· a       Commutativity

a·b = 1 for some b    Inverses exist for every element

We simply define the multiplication rules among the elements of the set S in such a way as to satisfy the above properties. We create a multiplication table. We know that we must define ii = -1 in order to include classical complex variable behavior. This is a small enough problem that we can simply draw an 8 × 8 table of the basis elements and their negatives, then fill in the products by trial and error in such a way as to get uniqueness of products within each column and each row, and diagonal symmetry of the table. This will satisfy all of the properties except associativity (which we prove, below). Figure 1 shows the resulting multiplication table:

Figure 1. Multiplication Table for the Basis Group

Note the diagonal symmetry. The summarized result is:

ij = ji = k      jk = kj = -i      ki = ik = -j
ii = jj = -kk = -1      ijk = 1

The group identity element is 1. The second line, here, indicates that every element has a multiplicative inverse. Associativity is immediately proved if we can find a real matrix representation, which we shall do below; hence we have a group. The fact that the group is Abelian assures that the ring that we will construct upon it will be commutative. With only a little manipulation, one may verify that the group is C2 × C4, where Cn is the cyclic group of order n. [ASIDE: Thanks to Peter Jack, who pointed out that it is not the same as the dihedral group of symmetries of the square.] [NOTE: There are two other eighth-order commutative groups, but neither has an element of cyclic order 4, which is necessary for complex-like behavior.]
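The table and the C2 × C4 identification can be checked mechanically. The sketch below rebuilds the eight-element group from the stated rules and tests every group property by brute force; the sign-and-name encoding and the helper names mul and order are mine, not the author's.

```python
# Build the basis group {±1, ±i, ±j, ±k} from the stated rules and verify
# the commutative-group properties.  Elements are (sign, name) pairs.

# Products of the positive basis elements, from the multiplication table:
POS = {
    ('1','1'): (1,'1'), ('1','i'): (1,'i'), ('1','j'): (1,'j'), ('1','k'): (1,'k'),
    ('i','1'): (1,'i'), ('i','i'): (-1,'1'), ('i','j'): (1,'k'), ('i','k'): (-1,'j'),
    ('j','1'): (1,'j'), ('j','i'): (1,'k'), ('j','j'): (-1,'1'), ('j','k'): (-1,'i'),
    ('k','1'): (1,'k'), ('k','i'): (-1,'j'), ('k','j'): (-1,'i'), ('k','k'): (1,'1'),
}

def mul(a, b):
    s, n = POS[(a[1], b[1])]
    return (a[0] * b[0] * s, n)

G = [(s, n) for s in (1, -1) for n in ('1', 'i', 'j', 'k')]

assert all(mul(a, b) in G for a in G for b in G)                  # closure
assert all(mul(a, b) == mul(b, a) for a in G for b in G)          # commutative
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in G for b in G for c in G)                      # associative
assert all(any(mul(a, b) == (1, '1') for b in G) for a in G)      # inverses

def order(a):
    p, n = a, 1
    while p != (1, '1'):
        p, n = mul(p, a), n + 1
    return n

# No element of order 8 (so not C8), while i has order 4 (so not C2xC2xC2);
# for an Abelian group of order eight, that forces C2 x C4.
assert max(order(g) for g in G) == 4 and order((1, 'i')) == 4
```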

A matrix representation of the basis elements will prove to be very useful, but it is not intuitively obvious how to construct one. I happened upon the following while constructing 4-D Cauchy-Riemann conditions by trial and error:

The fact that this is a faithful representation may be verified by simple matrix multiplication and comparison with the multiplication table given earlier. These matrices are orthogonal, with determinant +1. Considering them as unit rotation operators, the application of all four in any order (e.g., 1kji; they are commutative) brings the rotated object back to its original position: 1ijk=1.

Recall, above, that we said that we would defer a proof of associativity for the basis group until later. Here, we have a 4X4 real matrix representation of the group, which completes the requirement, because real matrix multiplication is associative.
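That argument can be made concrete. The matrices below are derived here from the multiplication table via the left-regular action (each basis element is represented by its multiplication action on the ordered basis 1, i, j, k); the author's matrices may differ from these by a similarity transform, but any faithful representation settles associativity. The helper names mm, neg, and transpose are mine.

```python
# A faithful 4x4 real representation: the matrix of each basis element is
# its left-multiplication action on the ordered basis (1, i, j, k).

I4 = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]
MI = [[0,-1,0,0], [1,0,0,0], [0,0,0,-1], [0,0,1,0]]    # i
MJ = [[0,0,-1,0], [0,0,0,-1], [1,0,0,0], [0,1,0,0]]    # j
MK = [[0,0,0,1], [0,0,-1,0], [0,-1,0,0], [1,0,0,0]]    # k

def mm(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(4)) for c in range(4)]
            for r in range(4)]

def neg(A):
    return [[-v for v in row] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# The representation reproduces the whole multiplication table ...
assert mm(MI, MJ) == MK and mm(MJ, MK) == neg(MI) and mm(MK, MI) == neg(MJ)
assert mm(MI, MI) == neg(I4) and mm(MJ, MJ) == neg(I4) and mm(MK, MK) == I4
assert mm(MI, mm(MJ, MK)) == I4                        # ijk = 1

# ... and each matrix is orthogonal, as the text asserts.
for M in (I4, MI, MJ, MK):
    assert mm(transpose(M), M) == I4
```

Since real matrix multiplication is associative, the basis group is associative, which completes the group proof.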

The basis elements also have a 2 × 2 complex matrix representation; it is:

These are the commutative counterparts of the Pauli spin matrices of physics. If one were to recast quantum mechanics using commutative hypercomplex mathematics (and I have no doubt that it could be done), then these matrices would play an important role.
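One valid 2 × 2 complex representation is sketched below, diagonal because the algebra splits into two complex eigenvalues (see the canonical form, further on); the author's matrices may differ from these by a change of basis, and the name mm2 is mine.

```python
# One 2x2 complex representation of 1, i, j, k (diagonal form; any
# similarity transform of it is equally valid).

ONE2 = [[1, 0], [0, 1]]
I2   = [[1j, 0], [0, 1j]]     # i -> diag( i,  i)
J2   = [[1j, 0], [0, -1j]]    # j -> diag( i, -i)
K2   = [[-1, 0], [0, 1]]      # k -> diag(-1,  1)

def mm2(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

assert mm2(I2, J2) == K2                          # ij = k
assert mm2(J2, K2) == [[-1j, 0], [0, -1j]]        # jk = -i
assert mm2(K2, I2) == [[-1j, 0], [0, 1j]]         # ki = -j
assert mm2(I2, I2) == [[-1, 0], [0, -1]]          # ii = -1
assert mm2(K2, K2) == ONE2                        # kk = +1
```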

Commutative Ring

We must now define the allowed operations of addition, subtraction, multiplication, and division among elements of the form Z=1x+iy+jz+kct. We would hope that we could do so in such a fashion that numbers of the aforementioned form would form a classical number field because, then, they would have all of the useful properties of the real or complex variables. However, as we mentioned earlier, no such field exists, strictly speaking. We shall have to settle for the next best thing, a commutative ring with unity.

Let D be the set of 4-D numbers {Z=1x+iy+jz+kct | x,y,z,ct real}. Let a,b,c be elements of the set; then the following properties must be satisfied:

a+b = c       Addition is closed on D

a+b = b+a     Addition is commutative

(a+b)+c = a+(b+c) Addition is associative

a+0 = a      An additive zero exists [0 = 1·0+i·0+j·0+k·0]

a+(-a) = 0     Additive inverses exist

a·b = c        Multiplication is closed on D

1a = a1 = a    A unit element exists

a·(b·c) = (a·b)·c   Multiplication is associative

a·b = b·a     Multiplication is commutative

a·(b+c) = a·b+a·c  Multiplication is distributive

a·b = 1      For every b with det(b) ≠ 0, a multiplicative inverse exists

Note that, in the last requirement, if there is only one zero (noninvertible) element, then we would have a field. For our present case, read on.

We shall now define the operations of addition, subtraction, multiplication, and division in such a way as to comply with the ring requirements, above. The ring elements have the form Z=1x+iy+jz+kct. Addition and subtraction are performed term-by-term, the same as for vectors. Exactly as for the complex variable case, multiplication of two elements is done by multiplying each term of the second element by every term of the first element, with reduction of the 1,i,j,k cross products by use of the group multiplication table, followed by collection of like terms. The result, for Z1=1x1+iy1+jz1+kct1 and Z2=1x2+iy2+jz2+kct2, is:

Z1Z2 = 1(x1x2 - y1y2 - z1z2 + c²t1t2)
     + i(x1y2 + y1x2 - z1ct2 - ct1z2)
     + j(x1z2 + z1x2 - y1ct2 - ct1y2)
     + k(x1ct2 + ct1x2 + y1z2 + z1y2)
Multiplication of 4-D elements is commutative because multiplication is commutative in the basis group and in the x,y,z,ct real numbers.
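Written out in components, the product can be machine-checked; the sketch below derives nothing new, it just encodes the term-by-term multiplication against the group multiplication table. The tuple encoding (x, y, z, ct) and the name hmul are mine.

```python
# Commutative hypercomplex product of Z1 = 1x1+iy1+jz1+kct1 and
# Z2 = 1x2+iy2+jz2+kct2, with elements stored as tuples (x, y, z, ct).

def hmul(a, b):
    x1, y1, z1, t1 = a          # t stands for the whole ct component
    x2, y2, z2, t2 = b
    return (x1*x2 - y1*y2 - z1*z2 + t1*t2,
            x1*y2 + y1*x2 - z1*t2 - t1*z2,
            x1*z2 + z1*x2 - y1*t2 - t1*y2,
            x1*t2 + t1*x2 + y1*z2 + z1*y2)

A, B, C = (1, 2, -3, 5), (2, -1, 4, 3), (3, 0, 1, -2)

assert hmul(A, B) == hmul(B, A)                       # commutative
assert hmul(hmul(A, B), C) == hmul(A, hmul(B, C))     # associative
assert hmul((1, 0, 0, 0), A) == A                     # 1 is the unit
# Restricted to 1x + iy, it is ordinary complex multiplication:
assert hmul((3, 4, 0, 0), (1, 2, 0, 0)) == (-5, 10, 0, 0)
```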

Next, we need a definition of multiplicative inverse, or division by, elements of the form Z=1x+iy+jz+kct. It is not intuitively obvious what form it might take, and that might be a serious problem, except that we can construct a matrix representation of the element, then take the inverse of that. We do so as follows: We have a matrix representation of the basis elements 1,i,j,k. We substitute the matrices into the Z=1x+iy+jz+kct form and perform simple matrix addition to telescope the element into the form of a single matrix:

[ASIDE: We could state all subsequent developments, including the algebra, function theory, and analysis, in terms of these 4 × 4 real matrices. As a theoretical basis, it would be very sound. However, for applications we shall find a much more practical representation, further below.]

The above matrix form has the usual matrix inverse, itself expandable into the vector form. It is remarkable that the matrix inverse of the typical matrix element of is another matrix having the same distinctive pattern of entries:


The reader may verify that, in the vector form, if one multiplies Z by Z⁻¹ (or vice-versa), one obtains a result of unity. As I mentioned earlier: not intuitively obvious. All of the other conditions for a ring are satisfied, as the reader may easily verify. It is a commutative ring with unity, and fails to be a field only because of the following. The term det(Z) is the determinant of the 4-D element in matrix form, and can be evaluated by means of an elementary cofactor expansion. The result, after suitable rearrangement, is:

det(Z) = [(x-ct)² + (y+z)²] [(x+ct)² + (y-z)²],

which is zero under the conditions (x=ct, y=-z) or (x=-ct, y=z); therefore inverses do not exist under those conditions. This is probably the point at which Hamilton discarded this particular algebra on his way ultimately to quaternions. Many readers, upon learning that there are zero divisors, dismiss this algebra, this simple group ring, as if it were somehow invalid. However, the zero divisors are not scattered at random in the 4-D space. They lie in two orthogonal, 4-D hyperplanes (see below) and cause something similar to analytic branch cuts in classical complex-valued functions. Indeed, the 4-D quotient is an ordinary analytic function: f(Z)=Z⁻¹, and the fact that it has planar singularities should surprise no one. We don't dismiss and discard the whole body of complex analytic functions, for example, because some exhibit such untidy features as discontinuities, essential singularities, and analytic branch cuts.
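The matrix form, the determinant factorization, and the zero-divisor planes can all be checked numerically. The matrix layout below is my reconstruction via the left-regular action (columns are Z·1, Z·i, Z·j, Z·k); the author's layout may differ by a basis permutation, which changes nothing essential. The names zmat, det4, and mm are mine.

```python
# Matrix form of Z = 1x + iy + jz + kct, its determinant factorization,
# and the zero-divisor planes.  Pure-Python 4x4 linear algebra.

def zmat(x, y, z, ct):
    return [[x,  -y,  -z,  ct],
            [y,   x, -ct,  -z],
            [z, -ct,   x,  -y],
            [ct,  z,   y,   x]]

def det4(M):                      # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] *
               det4([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def mm(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(4)) for c in range(4)]
            for r in range(4)]

x, y, z, ct = 3, -2, 1, 4
assert det4(zmat(x, y, z, ct)) == \
       ((x - ct)**2 + (y + z)**2) * ((x + ct)**2 + (y - z)**2)

# On the planes (x=ct, y=-z) and (x=-ct, y=z) the element is a zero divisor:
assert det4(zmat(5, 7, -7, 5)) == 0 and det4(zmat(-5, 7, 7, 5)) == 0

# The product of two such matrices keeps the same distinctive pattern:
P = mm(zmat(1, 2, 3, 1), zmat(2, -1, 0, 3))
assert P == zmat(P[0][0], P[1][0], P[2][0], P[3][0])
```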

The fact of zero divisors is not a problem, here; rather, it will prove to be very useful. To explain, we will need yet another representation of the algebra. Having a ring, we are justified in rearranging the typical element into the form

Z = [(x-ct) + i(y+z)](1-k)/2 + [(x+ct) + i(y-z)](1+k)/2
This awkward-looking expression reveals some remarkable properties. If we use the notation

e1 = (1-k)/2,   e2 = (1+k)/2,
ζ1 = (x-ct) + i(y+z),   ζ2 = (x+ct) + i(y-z),

then we have Z = ζ1e1 + ζ2e2, where:

ζ1, ζ2 are classical complex variables
(e1)ⁿ = e1, n a positive integer
(e2)ⁿ = e2
e1e2 = (0,0,0,0), the hypercomplex product
e1·e2 = 0, vector dot product
e1 + e2 = 1
-e1 + e2 = k


Consequently, writing Z = ζ1e1 + ζ2e2 and W = ω1e1 + ω2e2, the ring operations can be written as follows:

Z ± W = (ζ1 ± ω1)e1 + (ζ2 ± ω2)e2
ZW = ζ1ω1e1 + ζ2ω2e2
Z⁻¹ = (1/ζ1)e1 + (1/ζ2)e2,   ζ1 ≠ 0 and ζ2 ≠ 0
We shall call this the canonical form of the algebraic notation because of its fundamental simplicity. We have decomposed the algebra into two copies of the classical complex field, just as Dedekind wrote in 1885. However, these are not just any two copies of the complex plane. They have orientation with respect to the x,y,z,ct coordinate frame: The reader may verify that although they are each defined "everywhere" and for all times (each is a function of x,y,z,ct), they share only the point (0,0,0,0) in common! They are, in fact, mutually orthogonal in four dimensions; i.e., the reader may verify that Z1 = 1x+iy-jy+kx is a general position vector in the first noninvertible plane, Z2 = 1x+iy+jy-kx is the same in the second noninvertible plane, and that Z1·Z2 = 0 by the classical dot product rules.
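A quick numerical check of the canonical decomposition follows. The tuple encoding and the helper names hmul, eigen, and from_eigen are mine; the two complex eigenvalues are represented as Python complex numbers.

```python
# Canonical form: Z = z1*e1 + z2*e2 with e1 = (1-k)/2, e2 = (1+k)/2,
# z1 = (x-ct)+i(y+z), z2 = (x+ct)+i(y-z).

def hmul(a, b):
    x1, y1, z1, t1 = a
    x2, y2, z2, t2 = b
    return (x1*x2 - y1*y2 - z1*z2 + t1*t2,
            x1*y2 + y1*x2 - z1*t2 - t1*z2,
            x1*z2 + z1*x2 - y1*t2 - t1*y2,
            x1*t2 + t1*x2 + y1*z2 + z1*y2)

E1 = (0.5, 0.0, 0.0, -0.5)        # (1 - k)/2
E2 = (0.5, 0.0, 0.0,  0.5)        # (1 + k)/2

assert hmul(E1, E1) == E1 and hmul(E2, E2) == E2      # idempotent
assert hmul(E1, E2) == (0.0, 0.0, 0.0, 0.0)           # e1e2 = 0

def eigen(p):
    x, y, z, ct = p
    return complex(x - ct, y + z), complex(x + ct, y - z)

def from_eigen(z1, z2):
    return ((z1.real + z2.real) / 2, (z1.imag + z2.imag) / 2,
            (z1.imag - z2.imag) / 2, (z2.real - z1.real) / 2)

A, B = (1.0, 2.0, -3.0, 4.0), (2.0, -1.0, 0.5, 3.0)
assert from_eigen(*eigen(A)) == A                     # decomposition is exact

a1, a2 = eigen(A)
b1, b2 = eigen(B)
assert eigen(hmul(A, B)) == (a1 * b1, a2 * b2)        # multiplication acts
                                                      # eigenvalue-wise
# e1 behaves as an eigenvector: Z*e1 equals (z1 as the element 1Re + iIm)*e1
assert hmul(A, E1) == hmul((a1.real, a1.imag, 0.0, 0.0), E1)
```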

Actually, the above statements need some clarification. They hold as long as we are dealing with true 4-D numbers (i.e., x,y,z,ct all real and nonzero). However, if Z=1x+iy or Z=1x (either classical complex or real), then the eigenvalues in canonical notation will be equal.

The above operations can be used to show that the algebra is isomorphic to C×C [Davenport(4), 1991], so we could base the algebra simply on pairs of complex numbers with their peculiar structure, without further recourse to vectors or geometrical arguments. This is the origin of the expressions "C×C space" and "duplex space." Some readers might lose interest at the first mention of something so simple as C×C. I encourage them to read on, because there are some remarkable consequences of this very fact.

That is not all, concerning the unusual properties of the canonical form. It so happens that, in the matrix form, [(x-ct)+i(y+z)], [(x+ct)+i(y-z)], and their complex conjugates are eigenvalues of the typical element Z of the ring D. Moreover, e1 and e2 are eigenvectors (i.e., Ze1 = [(x-ct)+i(y+z)]e1 and Ze2 = [(x+ct)+i(y-z)]e2, whether in the canonical form or with all elements stated in terms of matrices). From matrix theory, the determinant is the product of the eigenvalues:

det(Z) = [(x-ct)² + (y+z)²] [(x+ct)² + (y-z)²]

It is remarkable that any 4 × 4 real matrix would yield its determinant, eigenvalues, and eigenvectors by inspection. Even more remarkable, the 4 × 4 matrices are a faithful representation of the ring elements, meaning that they form a ring. Therefore, when one multiplies two of them, the result is another 4 × 4 matrix with the same distinctive structure; ditto, when one takes the inverse of one of them. Everything that we do here, including functions and operators, could be stated entirely in terms of 4 × 4 real matrices. That fact puts everything on very sound mathematical footing, but the canonical form is much more convenient for calculations.

The 4-D vector and the canonical form provide two different interpretations for the 4-D space with which we are working. If t is time and is considered to be uniformly increasing, then the vector form 1x+iy+jz+kct implies that our three-space 1x+iy+jz and everything in it is moving uniformly along the time axis with a speed c. Conversely, the eigenvalues (x-ct)+i(y+z) and (x+ct)+i(y-z) indicate that the 4-D space can be viewed as a pair of moving, orthogonally oriented classical complex planes, one moving in the positive x and one in the negative x direction. Our choice of coordinate frame orientation in space is arbitrary, so in the canonical viewpoint we can express the 4-D space and all actions within it in terms of a pair of complementary actions, one moving radially away from the source and one collapsing radially onto the source position. For example, any kind of wave motion about an infinitesimal element source can be broken down into an outgoing and a complementary incoming wave motion.

Something needs to be said about measure and metric on the algebra . First, the 4-D vector magnitude

as an operation is not defined within the algebra because of the multiplication rule, such that, in most cases,

[We leave it to the reader to prove that the above relation is an equality only when one or more of the associated eigenvalues is zero. Hint: use the canonical form notation.] All operations such as this must conform to the general definition

Accordingly, for vector length we define the modulus as:

It has all of the proper classical complex variable properties, but note that it is not a scalar quantity. It does not explicitly return the 4-D length. Nevertheless, the modulus as defined implicitly embodies length information about a vector Z, because

In light of the above, a metric that is representable within the D-space algebra is:

Secondly, the hypercomplex conjugate is defined in accordance with the standard operator definition:

This has all of the expected classical complex variable properties except one:

except in special cases. As with the modulus, this operation does not by itself return the 4-D length, unlike the corresponding operation on the classical complex variables.

The ring contains a group of 4-D orthogonal transformations. Let us denote it by . If U=1u+iv+jw+ks denotes an element of the group, then the conditions

u² + v² + w² + s² = 1,
us - vw = 0

are sufficient to cause the 4 × 4 matrix form to be orthogonal. They also cause the eigenvalues to be of unit magnitude. The resulting orthogonal transformations can be stated and applied in the matrix form, the canonical form, or the vector form. The result is the same. [The following update was added 8/31/03 - CMD] Although the transformation is orthogonal in four dimensions, it is not always so when viewed in only three dimensions. A rigid rotation in four dimensions may not appear as a rigid rotation in the three space dimensions, and vice-versa. However, there is a frame of reference wherein it will so appear. For the object to be rotated, let the points be denoted by a four-vector of the following form:

Note that this represents a simple change in the coordinate frame of reference over our standard form, given earlier (a rotation + reflection, with determinant -1). Now, we know from elementary matrix theory that the trace of a matrix is invariant under orthogonal transformations. In the present case, the trace is 4ct, hence in this frame, t is invariant under, is unaffected by, and does not participate in, the orthogonal transformation. Consequently, if the remaining spatial three-space is being orthogonally rotated, it follows that the rotation is that of a rigid body.
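The two conditions on U can be spot-checked with the matrix form. The layout zmat below is my reconstruction of the left-regular matrix pattern; a different faithful layout gives the same conclusion.

```python
# U = 1u + iv + jw + ks with u^2+v^2+w^2+s^2 = 1 and us - vw = 0 gives an
# orthogonal 4x4 matrix whose eigenvalues lie on the unit circle.

def zmat(u, v, w, s):
    return [[u,  -v,  -w,  s],
            [v,   u,  -s, -w],
            [w,  -s,   u, -v],
            [s,   w,   v,  u]]

def mm(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(4)) for c in range(4)]
            for r in range(4)]

def transpose(A):
    return [list(col) for col in zip(*A)]

u = v = w = s = 0.5                   # norm 1, and us - vw = 0.25 - 0.25 = 0
U = zmat(u, v, w, s)

assert mm(transpose(U), U) == [[1.0 if r == c else 0.0 for c in range(4)]
                               for r in range(4)]

# eigenvalues (u-s) + i(v+w) and (u+s) + i(v-w) have unit magnitude
assert abs(complex(u - s, v + w)) == 1.0
assert abs(complex(u + s, v - w)) == 1.0
```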

We conclude this Section with the assertion that the D-space algebra is analogous to a field. Although it fails the test of a single zero element, under a broadened view of the "zero element" it is analogous to a field. We argue as follows: Recall that the algebra is isomorphic to C×C, the direct product of two copies of the complex field. C×C has two maximal ideals. In our parlance, we can define them as:

N1 = { ζe1 | ζ any classical complex number },   N2 = { ζe2 | ζ any classical complex number },

where e1 = (1-k)/2 and e2 = (1+k)/2 as before; these are the two noninvertible planes.
The mathematical union N of the two ideals (the totality of all the noninvertible elements) has all of the properties of a multiplicative zero element. For example, in the language of sets,

* Elements of N do not have multiplicative inverses in D.

* If Z is an element of D, then ZN = NZ = N.

* If Z1Z2 is an element of N, this implies that either Z1 or Z2 (or both) is an element of N.

* If a complex-valued function f(z) is undefined at (0,0), then the corresponding 4-D function f(Z) is undefined throughout the set N.

Although not stated in the traditional formalism, this is an equivalence relation. Under this broadened view of the zero element, D is analogous to a field.

Physicists would say that N is a closed subspace, cut off from the rest of the universe (the D space) by the relativistic limit. See Davenport(7), 1991 for further details.

4-D Function Theory

How would one define an analytic function of one independent variable of the form Z=1x+iy+jz+kct? It is not obvious how to do so, but this is where the several different representations of the algebra come in handy. The matrix representation is key. Previous researchers have shown that the way to define a function of a matrix variable is to diagonalize the matrix (so that it exhibits its eigenvalues), and apply the function to each eigenvalue [MacDuffee, 1946], [Bellman, 1960]. The canonical form, here, already exhibits its eigenvalues and they are complex variables, so an analytic function definition is immediate:

f(Z) = f(ζ1)e1 + f(ζ2)e2,   ζ1 = (x-ct) + i(y+z),   ζ2 = (x+ct) + i(y-z)

The 4-D function f(Z) is analytic if both f(ζ1) and f(ζ2) are analytic in the classical complex variable sense. This looks trivial, uninteresting - just two copies of a classical complex-valued function. However, the two eigenvalues have structure. Each of them is a function of all four of the real coordinates. The true complexity is not revealed until a function is expanded back into the vector form. To do so, expand each of the classical complex expressions f(ζ1) and f(ζ2) into their real and imaginary parts, expand e1 = (1-k)/2 and e2 = (1+k)/2, then simply perform all of the indicated multiplications and collect like terms. The result is:

For example,

Each component of this, or any other, analytic function so defined obeys a 4-D Laplace's equation, as we shall explain, below. Analytic functions such as this represent a gravitation-like distortion, or mapping, of the entire four-space. Notice that the four function components are very tightly linked. If one changes any parameter value in one component, all of the other components adjust their values in lockstep. This is also the behavior of electromagnetism [see the Electromagnetism page].

Additionally, 4-D functions make very pretty 3-D fractals; see the Julia fractal topic on the POV-Ray site for the mathematical details and Dave Makin's Makin Magic page for example 3-D images; return here by use of the browser "Back" button. A Web search for "hypercomplex fractals" turns up further examples.

As an aside, everything that we do here properly subsumes and extends the corresponding classical complex variable concepts. For example, in the 4-D exponential function, if one sets z=ct=0, one is left with exp(Z) = exp(x)[cos(y)+isin(y)], the complex variable case. Moreover, inasmuch as a function of one 4-D variable reduces to the same function applied to two different complex variables, no new questions arise about existence, uniqueness, completeness, internal consistency, or similar requirements, over what is already known for the complex variable case.
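As a concrete instance, the 4-D exponential can be implemented through the canonical form and checked against both claims; all helper names (hexp, eigen, from_eigen, hadd, hmul) are mine.

```python
import cmath

def eigen(p):
    x, y, z, ct = p
    return complex(x - ct, y + z), complex(x + ct, y - z)

def from_eigen(z1, z2):
    return ((z1.real + z2.real) / 2, (z1.imag + z2.imag) / 2,
            (z1.imag - z2.imag) / 2, (z2.real - z1.real) / 2)

def hexp(p):
    z1, z2 = eigen(p)
    return from_eigen(cmath.exp(z1), cmath.exp(z2))

def hadd(a, b):
    return tuple(s + t for s, t in zip(a, b))

def hmul(a, b):
    x1, y1, z1, t1 = a
    x2, y2, z2, t2 = b
    return (x1*x2 - y1*y2 - z1*z2 + t1*t2,
            x1*y2 + y1*x2 - z1*t2 - t1*z2,
            x1*z2 + z1*x2 - y1*t2 - t1*y2,
            x1*t2 + t1*x2 + y1*z2 + z1*y2)

# With z = ct = 0 it reduces to the classical complex exponential:
x, y = 0.3, 1.2
ex = hexp((x, y, 0.0, 0.0))
w = cmath.exp(complex(x, y))
assert abs(ex[0] - w.real) < 1e-12 and abs(ex[1] - w.imag) < 1e-12
assert abs(ex[2]) < 1e-12 and abs(ex[3]) < 1e-12

# And the law of exponents holds for the full 4-D variable:
A, B = (0.1, 0.4, -0.2, 0.3), (0.2, -0.1, 0.5, -0.3)
lhs, rhs = hexp(hadd(A, B)), hmul(hexp(A), hexp(B))
assert all(abs(s - t) < 1e-9 for s, t in zip(lhs, rhs))
```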

Because of the way that 4-D analytic functions are defined, they have all the same properties as the corresponding complex-valued functions, and we can use all the same notation as for the complex variables. We have truly extended the complex analysis to treat a 4-D variable. This result is not possible with the noncommutative quaternions, as shown by Scheffers, 1893. The only unexpected property is that there are multiple noninvertible elements; that is, whenever either eigenvalue is zero. However, for that to occur, we must have x=ct or x= -ct. If we interpret t as time and c as the speed of light, then x= ±ct means that some coordinate is moving at the speed of light. This is the relativistic limit, which physics tells us that no material body can reach. Therefore, we are free to use the D-space mathematics on any real-world problem. If there is a potential problem with the relativistic limit, the mathematics will automatically tell us where and when the problem will occur.

The Cauchy-Riemann conditions are an extremely important part of classical 2-D complex variable theory. The same is true for the 4-D case. Because of the way that a 4-D function is defined as a pair of classical complex functions, the 4-D Cauchy-Riemann equations are immediate [but messy to develop; see Davenport(6), 1991]. Using the notation F(Z)=1U+iV+jW+kS for an analytic function that has been expanded into the vector form, the result is:

Observe that the two upper left hand equations are the traditional Cauchy-Riemann conditions. The 4-D relations have many and far-reaching consequences. We present some of them, below.

Carefully note that the C-R conditions are not simply a set of PDEs that have a specific "solution." Rather, they are a set of conditions that hold for any analytic function. They can be combined with a typical linear or nonlinear PDE, and the combined solution will be analytic.

Hypercomplex Analysis

Our task here is to define operators such as derivative and integral for functions of a 4-D variable. They must be compatible with the function definition that we already have, and they must be amenable to formulation with the various forms of notation for the algebra. The obvious choice is to apply each operator to the eigenvalues; for the derivative,

dF/dZ = f′(ζ1)e1 + f′(ζ2)e2
For example, if we have a function

,

then its 4-D derivative is:

Again, this looks deceptively simple and uninteresting, but because of the form of the eigenvalues ζ1 and ζ2, there is a very great deal going on behind the scenes. All of the other operators from classical complex analysis are defined similarly. The result is that we can apply all of the powerful tools of complex analysis to four-space problems.
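The eigenvalue-wise derivative can be checked against a brute-force 4-D difference quotient formed with the algebra's own multiplication and inversion; a numerical sketch, with all helper names mine.

```python
import cmath

def eigen(p):
    x, y, z, ct = p
    return complex(x - ct, y + z), complex(x + ct, y - z)

def from_eigen(z1, z2):
    return ((z1.real + z2.real) / 2, (z1.imag + z2.imag) / 2,
            (z1.imag - z2.imag) / 2, (z2.real - z1.real) / 2)

def apply(f, p):                         # operators act eigenvalue-wise
    z1, z2 = eigen(p)
    return from_eigen(f(z1), f(z2))

def hinv(p):                             # multiplicative inverse
    z1, z2 = eigen(p)
    return from_eigen(1 / z1, 1 / z2)

def hmul(a, b):
    x1, y1, z1, t1 = a
    x2, y2, z2, t2 = b
    return (x1*x2 - y1*y2 - z1*z2 + t1*t2,
            x1*y2 + y1*x2 - z1*t2 - t1*z2,
            x1*z2 + z1*x2 - y1*t2 - t1*y2,
            x1*t2 + t1*x2 + y1*z2 + z1*y2)

Z  = (1.0, 2.0, -0.5, 0.3)
dZ = (1e-6, 2e-6, 5e-7, -1e-6)           # small, invertible increment

exact = apply(lambda w: 3 * w * w, Z)    # d(Z^3)/dZ = 3 Z^2
FZdZ  = apply(lambda w: w**3, tuple(a + b for a, b in zip(Z, dZ)))
FZ    = apply(lambda w: w**3, Z)
quotient = hmul(hinv(dZ), tuple(p - q for p, q in zip(FZdZ, FZ)))

assert all(abs(p - q) < 1e-4 for p, q in zip(exact, quotient))
```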

The 4-D Cauchy-Riemann conditions have a number of interesting consequences that are direct extensions of those for the complex variable case [Davenport(8), 1991]. We merely state these consequences, below. The reader may easily verify them by direct substitution of the C-R conditions. In the following, we use the notation F(Z)=1U+iV+jW+kS for an analytic function that has been expanded into the vector form,

∇ is the 4-D operator corresponding to the 3-D del operator of vector analysis, and ∇² is the 4-D scalar del (Laplacian) operator.

The first of these says that the derivative of a 4-D analytic function is the same within a sign in all four coordinate directions. The first two equalities are the same as for complex variables. These equations can be used to reduce a partial differential equation in several real, independent variables to an ordinary differential equation in one 4-D variable. By doing so, we would be imposing continuity conditions on the PDE, because the Cauchy-Riemann conditions are a statement of continuity. PDEs are typically derived with the assumption of continuity, but without its explicit inclusion because convenient means have not been available. Note carefully that we are not constraining any potential solution, because the C-R conditions hold for any and all analytic functions.

The second and third lines indicate that the 4-D gradient of an analytic function is the same within a sign in all four coordinate directions. The fourth line indicates that all four vector components of an analytic function obey a 4-D Laplace's equation, just as the components of a complex-valued function obey a 2-D Laplace's equation. That is the same as saying that the 4-D components each obey a 3-D wave equation, because the unitary transformation x'=x, y'=y, z'=z, ct'=ict, where i is the classical imaginary, transforms each into a wave equation. The last line says that the four-gradient (not the 3-D vector gradient!) of any analytic function is always and everywhere zero. It is just a succinct statement of the Cauchy-Riemann conditions because it follows so directly from them. In fact, all of these relations are extensions of the corresponding complex variable cases.
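The 4-D Laplace property is easy to test numerically with second-order central differences, here on the components of the 4-D exponential; helper names are mine.

```python
import cmath

def eigen(p):
    x, y, z, ct = p
    return complex(x - ct, y + z), complex(x + ct, y - z)

def F(p):
    """exp(Z) expanded into its four vector components (x, y, z, ct)."""
    w1, w2 = (cmath.exp(w) for w in eigen(p))
    return ((w1.real + w2.real) / 2, (w1.imag + w2.imag) / 2,
            (w1.imag - w2.imag) / 2, (w2.real - w1.real) / 2)

P, h = (0.4, -0.7, 0.2, 0.1), 1e-2

for comp in range(4):                    # components U, V, W, S in turn
    lap = 0.0
    for axis in range(4):                # directions x, y, z, ct in turn
        plus, minus = list(P), list(P)
        plus[axis] += h
        minus[axis] -= h
        lap += (F(tuple(plus))[comp] - 2 * F(P)[comp]
                + F(tuple(minus))[comp]) / h**2
    # U_xx + U_yy + U_zz + U_ct,ct vanishes up to discretization error
    assert abs(lap) < 1e-3
```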

The algebra, function theory, and analysis of the space have a number of interesting rotational invariants under the orthogonal group within D. Some are expected as an extension of the 3-D case, but some are new in the 4-D space. Because of the matrix formulation, the eigenvalues, determinant, and vector norm, of course, are invariant. It might not be anticipated that all analytic functions would be invariant under 4-D orthogonal transformations, but they are, because they are defined on the (invariant) eigenvalues. The eigenvectors e1 and e2 are invariant under both rotations and application of functions or other operators, because the transformations and the functions are applied to only the eigenvalues. The x coordinate is left unchanged under an orthogonal transformation, indicating that the orthogonal group treats the x axis as a preferred direction in three dimensions. However, we showed, above, that a simple change of coordinate frame can make the ct coordinate, instead, invariant under orthogonal transformations.

The following invariants, however, might not be anticipated: If Z=1x+iy+jz+kct is an element of and F(Z)=1U+iV+jW+kS is an analytic function, then the quantities

(similarly for V,W,S gradient components) are all invariant under 4-D rotations. See Ablamowicz(2), et al., 1996 for details.

In conclusion, I believe that I have not just developed a generalization of the complex numbers, but the generalization. I have found an infinite sequence of algebras and systems of analysis that treat independent variables of 1, 2, 4, 8, ..., 2ⁿ, ... dimensions, and that obey the same axioms as for the complex variables. It can be completely stated in any of the following forms: 4-D vectors, 4 × 4 real matrices, 2 × 2 complex matrices, eigenvalue/eigenvector (canonical) form, and pairs of classical complex numbers with a certain structure.

All of the algebraic properties, functions, analysis, notation, etc. carry forward. The fourth-order system can analytically treat the entire four-space, meaning, in my opinion, that it can be used to describe any physics effects therein. All of physics and engineering could be recast in commutative hypercomplex notation, and would enormously benefit from the computational ease and insight that would be afforded. See the Electromagnetic Theory and Special Relativity pages for examples. Because the hypercomplex math is built so directly upon such solid, elementary math concepts, it cannot be dismissed without also dismissing elementary group, ring, matrix, and complex variable theory. I believe that it has great potential usefulness.