Reverse Engineering a solution to the Magic Square riddle

I have been interested in magic squares since early childhood. I always wondered how such a symmetric structure could even exist, yet it was nearly impossible to find sources that went deep enough to explain why it works.
In ancient times they were considered sacred by some cultures (the Vedic tradition) and metaphysical by others (the Renaissance), while some studied them as mathematical structures (Euler, Ramanujan). Nowadays they are regarded as just a curious mathematical structure, a subject of recreational mathematics. The history of magic squares is currently accepted to be at least 4000 years old. They look like this:
Each row and column has the same sum of numbers, 34 in this case, called the magic sum. Notice that if we divided each number in the matrix by its magic sum, we would get what mathematicians call a doubly stochastic matrix.
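This is easy to check numerically. A quick sketch with numpy (the square used here is the order 4 Lo-Shu decomposed later in this document):

```python
import numpy as np

# The order-4 magic square used later in this document (magic sum 34)
M = np.array([[16,  3, 10,  5],
              [ 2, 13,  8, 11],
              [ 7, 12,  1, 14],
              [ 9,  6, 15,  4]])

D = M / 34  # divide every entry by the magic sum

# every row and every column of D now sums to 1,
# which is exactly the definition of a doubly stochastic matrix
print(D.sum(axis=1))  # row sums -> all 1.0
print(D.sum(axis=0))  # column sums -> all 1.0
```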
Not only do the rows and columns share the same sum, but so do the two main diagonals and the broken diagonals (like 10, 11, 7, 6); in this case even every 2×2 sub-square and the corner values have the same sum. It is just fascinating that you can do this with numbers. In this work I will call any such matrix satisfying these conditions a magic square (not only magic squares of the harmonic series).
What is intriguing about them is that they seem to form a closed algebra: you can add two magic squares and get another one, or multiply three magic squares and get one too. But why must it be three? It is quite odd; you can try it yourself.
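Closure under addition is easy to verify numerically; here is a minimal sketch (the second square is just a rotation of the first, and a rotation of a magic square is again magic):

```python
import numpy as np

def is_magic(M):
    """All rows, columns and both main diagonals share one sum."""
    s = M[0].sum()
    return (np.all(M.sum(axis=1) == s) and np.all(M.sum(axis=0) == s)
            and M.trace() == s and np.fliplr(M).trace() == s)

A = np.array([[16,  3, 10,  5],
              [ 2, 13,  8, 11],
              [ 7, 12,  1, 14],
              [ 9,  6, 15,  4]])
B = np.rot90(A)  # a rotation of a magic square is again magic

assert is_magic(A) and is_magic(B)
assert is_magic(A + B)  # the sum is magic too (magic sum 68)
```

The row, column, and diagonal conditions are all linear, which is why any sum (or scalar multiple) of magic squares stays magic; multiplication is the harder case.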
Also, if you place weights in these ratios on a square board, its center of gravity will be exactly in the middle. I have also seen them appear in the context of quantum mechanics. So they somehow link to reality, and it even seems like they link two different theories together.
Anyway, I think there is more to them than just a party trick. I want to show here that they might allow us to reduce the time complexity of N×N matrix multiplication from O(N³) to O(1) in the case of the harmonic series, and to O(N²) in the case of any magic square.
As a disclaimer I'd like to add that I have no formal education in the field; this work is just the result of my creative endeavor, shared in the hope it could inspire someone more versed in the science. My approach to mathematics is intuitive and creative rather than rigorous, and that is what I enjoy, so in this context the work is unavoidably quite speculative, which I admit is very unusual for mathematics as a field. I have been working on this alone, since I don't know anyone with an interest in math. My best buddy was AI, but relying on it solely without knowing myself what is going on isn't the direction I wanted to go.

Prime magic squares

By prime magic squares I don't mean magic squares consisting of prime numbers, but magic squares which can be added together to form any magic square of a given order.
For order 4, the minimum number of 1's in each row must be larger than 1, so let's try 2 of them.
I have guessed 8 results, and this agrees with combinatorics. I make sure to satisfy the two-1's condition by making two rows identical, so there are only 2 unique rows, and each column can contain only two 1's. So I get 2 (options) × 4 (columns) = 8 options.
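One way to check this guess is brute force. The sketch below enumerates all order 4 matrices of 0's and 1's with exactly two 1's per row and filters for the conditions on columns and both main diagonals; the exact count depends on which extra conditions (broken diagonals, 2×2 blocks) one imposes, so it prints the count rather than assuming the 8 above:

```python
import numpy as np
from itertools import combinations, product

# all 0/1 rows containing exactly two 1's
two_one_rows = [np.array([1 if i in pair else 0 for i in range(4)])
                for pair in combinations(range(4), 2)]

found = []
for choice in product(two_one_rows, repeat=4):
    M = np.array(choice)
    # rows already sum to 2; require columns and both diagonals too
    if (np.all(M.sum(axis=0) == 2)
            and M.trace() == 2
            and np.fliplr(M).trace() == 2):
        found.append(M)

print(len(found))  # how many 0/1 squares pass these conditions
```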
And we should be able to reconstruct any magic square of this order with them like so:
This is quite similar to the Birkhoff–von Neumann theorem for doubly stochastic matrices. A formal proof is needed here, but that is beyond the scope of the intuitive mathematics I do and enjoy.

Booleans

I mentioned earlier that there are quite a few algorithms to produce magic squares and that I suspect they form a closed algebra. In the end it does not matter which algorithm we choose, but each algorithm can produce different results, so to combine them under common rules we need to settle on one method (you could have an algorithm whose last step just swaps two rows; the result is a magic square too, but it would be unpredictable). So let's say we have algorithms A1 and A2 which produce the same matrix; they form magic squares M1 and M2. The table is read the same way as a Markov chain: a row represents the original state, a column its destination, and the matrix entry is the probability of the transition.
This is a representation of the transition matrix of a chosen algorithm A1. A2 is any different algorithm with the same result, and the negations are the same algorithms but starting with the numbers in descending order.
Boolean matrix operations behave a bit differently.
The addition is our logical AND: Axiom1 AND Axiom2
The subtraction is then: Axiom1 AND ¬Axiom2
And the multiplication of X x Y = Z is:
This is a boolean matrix multiplication formula. It allows us to chain implications. Like so:
Since A1² is an identity matrix, it means that A1³ = A1. Here we see the first glimpse of why multiplying three magic squares yields another one, while multiplying two does not.
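For readers who want to experiment, here is a minimal sketch of boolean matrix multiplication and of the A² = I, hence A³ = A, chain (the 3×3 swap permutation here is only an illustrative stand-in, not one of the algorithms above):

```python
import numpy as np

def bool_matmul(X, Y):
    # Z[i,j] = 1 iff there is some k with X[i,k] = 1 AND Y[k,j] = 1
    return ((X @ Y) > 0).astype(int)

# an involutive permutation (it just swaps two states), so A @ A = I
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
I = np.eye(3, dtype=int)

assert np.array_equal(bool_matmul(A, A), I)   # A^2 = I
A3 = bool_matmul(bool_matmul(A, A), A)
assert np.array_equal(A3, A)                  # hence A^3 = A
```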
To make ¬A2 I will make the same matrix, but swap 1's and 0's.
Once we try to multiply A1, ¬A1, A2, ¬A2 together we see that:
A1 = A1³ = ¬A2² × A2
¬A1 = ¬A1³ = A2² × ¬A2
So given that such an algorithm exists, it behaves exactly the same as our + and − operations.

Possible algorithm

For this work it is not that important which specific algorithm we use, but an example is needed of what I mean by the negation of an algorithm. I figured out this algorithm by intuition; it is quite similar to a tensor product.
By a negative algorithm I mean one where I swap different cells in each step. Alternatively, I can use exactly the same algorithm but start with the numbers arranged in descending order. This algorithm doesn't care whether you start filling from the top-left, top-right, bottom-up, etc.
It also works for complex numbers and quaternions (4D numbers); that it works for even higher dimensions is quite possible, but untested. This is what sparked my curiosity when I discovered it.
You can also scale the algorithm via the tensor product to get 16×16 or 64×64 matrices (higher orders than that are not tested).

Conditions for the algorithm

The well-known, original magic sum formula was probably derived from the harmonic series sum S = n(n+1)/2:
But n = N² and ∑magic = S/N, where N is the order of the matrix, so we have: ∑magic = N(N² + 1)/2.
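Plugging n = N² into S = n(n+1)/2 and dividing over the N rows gives the formula above; a minimal check:

```python
def magic_sum(N):
    # S = n(n+1)/2 with n = N^2, divided over the N rows
    return N * (N * N + 1) // 2

print(magic_sum(3))  # 15, the classic 3x3 magic sum
print(magic_sum(4))  # 34, matching the square above
```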
I can rewrite it in such a way that N₁ is the first number of the arithmetic progression and N_{N²} is the last: ∑magic = N(N₁ + N_{N²})/2.
The reason this equality holds is that if I add the first and last number, or the 2nd and the 2nd-from-last, I get half of the magic sum.
If the difference d between N_i and N_{i+1} is constant, then this form is more universal: it holds for all real numbers and not just the harmonic series.
The origin of the harmonic series sum formula is in this visual proof:
Here I identify that the goal is to split this rectangle's area in half, but I don't constrain it to just the diagonal split. If you arrange the numbers of a magic square (with positive numbers) from smallest to largest, you must inevitably get this form of diagonal split too.
This matrix would then become:
This leads us to anti-symmetric functions.
In magic squares each pair of numbers (first and last, 2nd and 2nd-from-last, etc.) satisfies N_i + N_{N²−i+1} = ∑magic / 2.
So I can make the function anti-symmetric by subtracting ∑magic / 4 from each value. This is sort of an anchor for all the proofs regarding magic squares: one just needs to prove the function (array of numbers) is anti-symmetric, and then a magic square is possible.
The sum of these pairs is then always constant:
For the anti-symmetric function this means:
So if I multiply the array of anti-symmetric numbers by −1 and reverse it, I get the same array.
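This centering step can be verified directly (a numpy sketch for order 4):

```python
import numpy as np

a = np.arange(1, 17, dtype=float)  # the numbers of an order-4 square
centered = a - a.mean()            # subtract 34/4 = 8.5

# before centering, the pair sums N_i + N_(17-i) are constant (17 = 34/2)
assert np.all(a + a[::-1] == 17)

# after centering the array is anti-symmetric:
# negating it and reversing it give the same array
assert np.allclose(-centered, centered[::-1])
```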
Later we will show that the native number system of the order 4 magic square has the property F₈ = −Fₙ₋₈,
which says the number system itself is anti-symmetric.
So I start with the matrix in its anti-symmetric form:
Anti-symmetric condition implies:
I will do the 2nd algorithm first:
For the inverse algorithm I get:
So in the end we have three restrictions in total under which the algorithm works: an anti-symmetric function, plus two additional symmetries applied within each half of that function.

Atomic magic squares

So a magic square made of complex numbers is possible. The earlier definition of a prime magic square is therefore not universal, since what I did earlier works only for real numbers; evidently something else is going on here, linking magic squares to higher-dimensional numbers. I call these atomic magic squares, due to their fundamental role and their most basic shape.
The first step is to convert the multi-step algorithm into a single step: one permutation matrix.
1. Place a single '1' in the j-th position of a matrix filled with 0's
2. Apply the algorithm
3. The final matrix when flattened into a 16-element column vector becomes the j-th column of the new permutation matrix.
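The three steps above can be sketched generically like so (the 180° rotation used here is only a hypothetical stand-in for the actual algorithm):

```python
import numpy as np

def algorithm_to_permutation(alg, n=4):
    """Turn any matrix-rearranging algorithm `alg` (an n x n -> n x n
    function) into a single n^2 x n^2 permutation matrix."""
    P = np.zeros((n * n, n * n), dtype=int)
    for j in range(n * n):
        probe = np.zeros(n * n)
        probe[j] = 1                    # 1. a single 1 in the j-th cell
        out = alg(probe.reshape(n, n))  # 2. apply the algorithm
        P[:, j] = out.flatten()         # 3. flattened result -> j-th column
    return P

# stand-in algorithm (NOT the one from this text): rotate 180 degrees
P = algorithm_to_permutation(lambda M: M[::-1, ::-1])

# the result is a permutation matrix, hence unitary: P P^T = I
assert np.array_equal(P @ P.T, np.eye(16, dtype=int))
```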
For alg. A:
For alg. ¬A:
It is clear now that the solution I propose isn't universal: you must discover at least one magic square of the given order to do this.
Also, let's call the initial state before the algorithm is applied an anti-symmetric function (which basically means I subtract ∑magic/N from each cell; more on that later).
Now I take the initial matrix filled with numbers from a centered function, rearrange it into a column vector, multiply it by the new permutation-matrix representation of the algorithm, and then reshape it back into a matrix.
Both of the permutation matrices are unitary. All eigenvalues of a unitary matrix must lie on the unit circle in the complex plane (i.e., their absolute value is 1). By tracing the permutations, we find that the algorithm has an order of 16 (applying it 16 times returns every element to its starting position). This means its eigenvalues are precisely the 16th roots of unity — the 16 points on the unit circle that form a regular 16-gon.
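The unit-circle property is easy to verify numerically (a sketch; the permutation below is the Ξ matrix from the script later in this section, written compactly as the column index of the single 1 in each row):

```python
import numpy as np

# the Xi permutation from the script below, entered compactly:
# row i has its single 1 in column cols[i]
cols = [15, 2, 9, 4, 1, 12, 7, 10, 6, 11, 0, 13, 8, 5, 14, 3]
Xi = np.zeros((16, 16), dtype=int)
Xi[np.arange(16), cols] = 1

eigenvalues = np.linalg.eigvals(Xi)

# every eigenvalue of a permutation (hence unitary) matrix
# has absolute value 1, i.e. lies on the unit circle
assert np.allclose(np.abs(eigenvalues), 1.0)
```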
A key property of complex numbers on the unit circle is that their product is also on the unit circle.
As you scale your system up with the tensor product (N → ∞), the number of eigenvalues on the unit circle increases exponentially. In the infinite limit, these discrete points become dense and cover the entire circle.
The next logical step is to find the corresponding eigenvectors (vn). This involves solving the system of linear equations for each eigenvalue.
Solving this will give us the explicit set of 16 fundamental "atomic magic squares" that the system supports. And every magic square will be a linear combination of the 16 atoms. Which is a direct consequence of the Spectral Theorem.
To keep the document shorter I will not present the solution in the complex number system, but move straight to the hexadecimal metallic mean number system. In this representation the numbers look much better, and it also fixes some behavior due to rounding errors.
The main argument for this number system is the fact that we deal with roots of unity. It is quite similar to the golden ratio, but while the golden ratio can be derived from the pentagon, I do this for the 16-gon.
I assign the ⋔ symbol to the new metallic mean. The characteristic equation is:
Express 1 in polar form and find 16th roots:
finding some nice properties:
(This kind of explains why you can use one algorithm for both real and complex numbers: magic squares operate in their own number system.)
Next, a number on the unit circle has its inverse equal to its complex conjugate, which means that:
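These properties can be checked numerically (a sketch, taking ⋔ = e^(iπ/8) as a primitive 16th root of unity):

```python
import cmath

pitchfork = cmath.exp(1j * cmath.pi / 8)  # a primitive 16th root of unity

assert abs(pitchfork ** 16 - 1) < 1e-12   # 16th root: ^16 returns to 1
assert abs(pitchfork ** 8 + 1) < 1e-12    # ^8 lands on -1 (anti-symmetry)
# on the unit circle, the inverse equals the complex conjugate
assert abs(1 / pitchfork - pitchfork.conjugate()) < 1e-12
```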
Also, to derive its infinite series:
(This will be the missing link for proving the permutation algorithm, because the centered function condition is the consequence of the number system.)
Now where were we?
"The next logical step is to find the corresponding eigenvectors (vn). This involves solving the system of linear equations for each eigenvalue.
Solving this will give us the explicit set of 16 fundamental "atomic magic squares" that the system supports. And every magic square will be a linear combination of the 16 atoms. Which is a direct consequence of the Spectral Theorem."
To solve it I run this AI-assisted script:
import numpy as np

def inner_product(matrix_a, matrix_b):
    """
    Calculates the inner product of two matrices.
     = sum(A_ij * B_ij*)
    """
    vec_a = matrix_a.flatten()
    vec_b = matrix_b.flatten()
    return np.dot(vec_a, vec_b.conj())

def format_in_pitchfork_basis(c_number):
    """
    Formats a complex number c_number in the \pitchfork polar basis.
    c = |c| * e^(i*theta) ≈ |c| * \pitchfork^k
    """
    # Handle the zero case
    if np.isclose(c_number, 0):
        # Add padding for alignment
        return "0.00000000      " 
    
    magnitude = np.abs(c_number)
    angle = np.angle(c_number)
    
    # Find the closest power k
    k = int(np.round(angle / (np.pi / 8))) % 16
    
    # Return the formatted string with 8 decimal places
    return f"{magnitude:10.8f} * \u22D4^{k:<2}"  # \u22D4 is the unicode for ⋔

def analyze_and_decompose(target_matrix, permutation_matrix):
    """
    1. Finds the TRUE eigenvectors of the permutation matrix.
    2. Decomposes the target matrix using this true basis.
    3. Reconstructs the matrix to verify success.
    4. Displays the coefficients AND the eigenvectors in the \pitchfork system.
    """
    print("--- 1. Calculating TRUE Eigenbasis from \u039E Matrix ---")
    
    eigenvalues, eigenvectors = np.linalg.eig(permutation_matrix)
    print(f"  Successfully found {len(eigenvalues)} numerically exact eigenvectors (pure notes).")

    # --- 2. Center the Target Matrix ---
    print("\n--- 2. Decomposing Target Lo-Shu (Centered) ---")
    mean_value = np.mean(target_matrix)
    centered_target = target_matrix - mean_value
    print(np.round(centered_target, 8)) # Use higher precision here too

    # --- 3. Calculate Coefficients and Display Notes ---
    print("\n--- 3. Calculating TRUE Decomposition Coefficients and Notes ---")
    print("  (Coefficients c_k = )")
    print("  (Showing \u2694-based polar format and standard complex format)\n")
    
    coefficients = []
    
    sort_indices = np.argsort(np.angle(eigenvalues))
    eigenvalues = eigenvalues[sort_indices]
    eigenvectors = eigenvectors[:, sort_indices]
    
    # Create a vectorized version of the formatting function
    vectorized_format = np.vectorize(format_in_pitchfork_basis)

    for i in range(len(eigenvalues)):
        lambda_k = eigenvalues[i]
        eigenvector_i = eigenvectors[:, i]
        eigenmatrix_i = eigenvector_i.reshape((4, 4))
        
        # Calculate coefficient
        coefficient = inner_product(centered_target, eigenmatrix_i)
        coefficients.append(coefficient)

        # Format coefficient and eigenvalue
        pitchfork_format = format_in_pitchfork_basis(coefficient)
        pitchfork_lambda = format_in_pitchfork_basis(lambda_k)
        
        print(f"-----------------------------------------------------------------------------")
        print(f"  Note {i+1:2d} (for \u03BB = {pitchfork_lambda} approx {lambda_k:.8f}):")
        
        # Print coefficient with 8 decimal places
        print(f"    Coefficient c_{i+1:2d}: {pitchfork_format} (approx {coefficient.real:10.8f} + {coefficient.imag:10.8f}j)")
        
        # Format and print the 4x4 "pure note" matrix
        formatted_note_matrix = vectorized_format(eigenmatrix_i)
        
        print(f"    Shape of \u03D1_{i+1:2d} (Pure Note):")
        # Set formatter to print strings without quotes and align
        with np.printoptions(formatter={'str_kind': lambda x: x}, linewidth=200):
            print(formatted_note_matrix, "\n")


    # --- 4. Reconstruct the Matrix from its Notes ---
    print("-----------------------------------------------------------------------------")
    print("--- 4. Reconstructing Matrix from Coefficients and TRUE Notes ---")
    reconstructed_matrix = np.zeros((4, 4), dtype=complex)
    
    for i in range(len(coefficients)):
        coefficient = coefficients[i]
        eigenvector_i = eigenvectors[:, i]
        eigenmatrix_i = eigenvector_i.reshape((4, 4))
        
        reconstructed_matrix += coefficient * eigenmatrix_i

    reconstructed_original = reconstructed_matrix.real + mean_value
    print("\n  Reconstructed Matrix (Real Part + Mean):")
    print(np.round(reconstructed_original, 8)) # Use higher precision here too

    # --- 5. Verification ---
    print("\n--- 5. Verification ---")
    is_reconstruction_correct = np.allclose(target_matrix, reconstructed_original)
    if is_reconstruction_correct:
        print("  SUCCESS! The reconstructed matrix perfectly matches the original target Lo-Shu.")
    else:
        print("  Failure. The reconstructed matrix does not match the original.")
        

# --- Define the Matrices ---

# The \Xi permutation matrix
Xi = np.array([
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1], [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0], [0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
    [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0],
    [0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],
    [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],
    [0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0], [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0], [0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0]
])

# The classic Lo-Shu matrix to be decomposed
L_classic = np.array([
    [16, 3, 10, 5],
    [2, 13, 8, 11],
    [7, 12, 1, 14],
    [9, 6, 15, 4]
])

# --- Run the Analysis ---
# Set numpy print options for better readability
np.set_printoptions(precision=8, suppress=True)

analyze_and_decompose(L_classic, Xi)
The results for each eigenvalue are:
These are the true building blocks of order 4 magic squares. If I used the plain complex number system, I would lose half of the solutions, because the pairs would appear identical; it would also be subject to rounding errors.
Every magic square is a linear combination of these matrices.
The formula for any coefficient cₙ is:

Algebra of magic squares

Their behavior is a direct consequence of the underlying number system, which for order 4 magic squares is the hexadecimal metallic mean.
Hence magic squares are closed under addition and multiplication.
The underlying number system is also commutative, associative, and distributive.

Reverse engineering addition and multiplication

Addition

First I start with harmonic series magic squares. The fact that the difference between two consecutive numbers is constant makes things much easier.
I use N_i = N_{i−1} + ∇; I also guess that the 4 stands for the order of the matrix.
And now I should be able to do addition knowing only the spacing ∇ between numbers and the magic sums.
To reconstruct the matrix I use the anti-symmetric forms and afterwards apply the algorithm.
So in essence, when working with the harmonic series I don't need to know the full appearance of the magic square; I can work only with the delta and the magic sums, then reconstruct the matrix at the very end. The same applies to multiplication.
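Here is a numerical sketch of this bookkeeping. The second square below is a hypothetical affine variant of the Lo-Shu (an assumption made only to have two arithmetic-progression squares at hand); the sum square is then fully described by the summed delta and the summed magic sum:

```python
import numpy as np

L = np.array([[16,  3, 10,  5],
              [ 2, 13,  8, 11],
              [ 7, 12,  1, 14],
              [ 9,  6, 15,  4]])

M1, d1 = L, 1          # spacing 1, magic sum 34
M2, d2 = 3 * L - 2, 3  # hypothetical second square: entries 3k - 2

S1 = int(M1.sum(axis=1)[0])  # 34
S2 = int(M2.sum(axis=1)[0])  # 94

M = M1 + M2
# the sum square is fully described by (d1 + d2, S1 + S2)
assert np.all(M.sum(axis=1) == S1 + S2)
assert np.all(M.sum(axis=0) == S1 + S2)
assert np.all(np.diff(np.sort(M.flatten())) == d1 + d2)
```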

Multiplication

I start with guessing again (which makes everything a conjecture). The magic sums seem to simply multiply, but the delta function is trickier. I guessed:
Now, it is a dead end to treat the scaling constants ∑magic and N as constants; they are likely an array of numbers too. Let's replace the constants with some function Ξ(k), which I will try to analyze at the end:
where a₁, b₁, c₁ ... are the elements of the functions.
The resulting array for d’(k) would be constructed from the product of the four functions at each index k:
The 1st and 16th members must add to the same value as the 2nd and 15th, etc., because a′(k), b′(k), c′(k) are centered, hence:
Here we have another argument for why we need an odd number of multiplications to produce a Lo-Shu: the signs cancel out:
This tells me two things: firstly, the scaling function is symmetric; secondly, the scaling function is independent of the magic squares undergoing multiplication. Hence I need to find only one such function, and it should apply to all order 4 magic squares. So I will do:
From any dot product:
I apply the earlier formula to figure out the numbers of the scaling function (done on the other 3 matrices):
The next step is to convert p′(k) → p(k); for that we need to know the resulting magic sum of the non-centered Lo-Shu, which is just the product of the magic sums.
I add 23120/N = 5780 (for N = 4) to each member of p′(k) to get p(k) and make the matrix on which I apply the ¬Ξ transformations:
And it is a match!
It might not make sense to do this for such a simple multiplication, but imagine doing it for matrices of order 123 or more and not having to compute a single dot product.
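At minimum, the multiplying of the magic sums is guaranteed for the rows and columns of a triple product, which is easy to verify (a sketch; the full claim, diagonals included, can be tested the same way with an is_magic check):

```python
import numpy as np

L = np.array([[16,  3, 10,  5],
              [ 2, 13,  8, 11],
              [ 7, 12,  1, 14],
              [ 9,  6, 15,  4]])

P = L @ L @ L  # product of three magic squares

# every row and column of the product sums to 34^3 = 39304
assert np.all(P.sum(axis=1) == 34 ** 3)
assert np.all(P.sum(axis=0) == 34 ** 3)
```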

The geometry of magic squares

All the numbers live on the unit circle, and all the pure notes contain the origin. One way to represent them would be like so:
But this doesn’t visualise the space; it visualises the vectors of the space, the pure notes. The space must be formed by all possible combinations of the vectors that could define a state, hence I argue it will be a product. The Cartesian product of circles leads us to the N-dimensional torus. The idea behind the multiplication is that there is one circle for every point on the other circle. So essentially the space is a multi-dimensional donut.

Speculations

I'm not really sure whether the whole work isn't just speculation, or mathematical nonsense. But this section is a brainstorming essay on the meaning of the findings. Anyway, thank you for reading this far! I would love to hear where I made mistakes, so message me 🙂

Principle of least action viewed from a different angle.

Firstly, the reduction of matrix multiplication. I wonder what a redefined least-action principle would look like. It would be a principle of least computation, where nature, instead of choosing the path of least time, would choose the path which is fastest or easiest to calculate. This could explain why we are even able to calculate it.

Time symmetry

Next I wonder if we could describe everything with magic squares, the same way we describe it with Markov chains. Let's say 1 is sunny and -1 is cloudy:
I can either ask "where will I get": then I start at a row and look at the columns and their probabilities. But it is completely legitimate to ask "how did I get here": now I have reversed the direction of time, and I see that I got cloudy with 20% chance after a sunny day and with 30% chance after a cloudy day. So I ask: where is the missing 50%?
Doubly stochastic matrices provide a framework in which time-symmetry works. If I reverse the arrow of time, rows become columns and vice versa.
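A small sketch of the weather example (the 0.8 and 0.7 entries are reconstructed from the 20% and 30% in the text, since each row must sum to 1):

```python
import numpy as np

# the weather chain from above (rows = today, columns = tomorrow)
P = np.array([[0.8, 0.2],   # sunny  -> sunny / cloudy
              [0.7, 0.3]])  # cloudy -> sunny / cloudy

assert np.allclose(P.sum(axis=1), 1)  # forward in time: rows sum to 1
print(P.sum(axis=0))                  # backward in time: [1.5, 0.5]

# a doubly stochastic chain is consistent in both directions
D = np.array([[0.8, 0.2],
              [0.2, 0.8]])
assert np.allclose(D.sum(axis=1), 1) and np.allclose(D.sum(axis=0), 1)
```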

Absorbing states

Let's consider a state from which one cannot escape, or a state where information is lost. From cosmology these can be the cosmos, black holes, cosmological horizons... Let's make another matrix with only three possible states: 1, 0, -1, where 0 is the absorbing state.
Now, in a universe where time symmetry worked as I hypothesised earlier, it would be impossible to enter a state which cannot be left. This immediately triggers links in my head to the light-speed limit, or absolute zero.
It would also resolve the death paradox: being born might be an act of entering a state from some unspecified other state, death is just a transition to another state, and staying in the alive state has some probability less than 1, which agrees with our observations.
One way to think about this would be spiritual, but we are just food, water, air, etc.. My view on this is both spiritual and scientific.
Absorbing states could also be the key to some interpretation of gravity. There is a cosmic horizon for each massive object, and a Rindler horizon for each accelerating body. A horizon leads to a reduction of the state space, which would lead to a drop in entropy, i.e. a reduction of all the possible states a point mass can enter. So the sensation of acceleration could be a consequence of the 2nd law of thermodynamics. Maybe.

Universe shape

Living inside a toroidal space would mean we can never reach the end.