I'm still reading this, but if this checks out, this is one of the most significant discoveries in years.
Why use splines, polynomials, or haphazardly chosen basis functions if you can just fit (via gradient descent) your data or wave functions to the proper computational EML tree?
Got a multidimensional, multivariate function to model (from random samples or a full map)? Just run gradient descent and convert it to an approximant EML tree.
Or perform gradient descent on an EML function tree "phi" so that the derivatives in the Schrödinger equation match.
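To make that concrete, here's a minimal sketch of what I mean (my own toy, not from the paper): fix a tiny EML tree shape and fit only its leaf constants by gradient descent, with eml(x, y) = exp(x) − ln(y) as defined in the paper. The tree shape, the toy target, and the finite-difference gradients are all my assumptions.

```
import numpy as np

def eml(x, y):
    # the paper's primitive: eml(x, y) = exp(x) - ln(y)
    return np.exp(x) - np.log(y)

# Fixed (hypothetical) tree shape f(x; a, b) = eml(eml(x, a), b);
# only the leaf constants a, b are trained.
def model(x, a, b):
    return eml(eml(x, a), b)

def loss(a, b, xs, ys):
    return np.mean((model(xs, a, b) - ys) ** 2)

xs = np.linspace(0.1, 1.0, 50)
ys = model(xs, 2.0, np.e)              # toy target, exactly representable

a, b, lr, eps = 1.0, 1.0, 1e-3, 1e-6
for _ in range(20000):
    # crude finite-difference gradients, to keep the sketch dependency-free
    ga = (loss(a + eps, b, xs, ys) - loss(a - eps, b, xs, ys)) / (2 * eps)
    gb = (loss(a, b + eps, xs, ys) - loss(a, b - eps, xs, ys)) / (2 * eps)
    a, b = a - lr * ga, b - lr * gb

print(a, b, loss(a, b, xs, ys))        # should drift toward a = 2, b = e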
But as I said, still reading, this sounds too good to be true, but I have witnessed such things before :)
It's interesting to see his EML approach, whereas mine was more about generating a context-sensitive homoiconic grammar.
I've had lots of success combining spectral neural nets (GNNs, FNOs, Neural Tangent Kernels) with symbolic regression, using Operad Theory and Category Theory as my guiding mathematical machinery.
To draw an analogy from Bertrand Russell's Principia Mathematica: he derived fully expanded expressions, i.e. trees whose leaves may only refer to the same literals. Everyone claimed this madness underscored how formal verification of natural mathematics was a fool's errand, but nevertheless we see successful projects like Metamath (us.metamath.org) where this exponential blow-up does not occur. It is easy to see why: instead of representing proofs as full trees, the proofs are represented as DAGs. The same optimization would be required for EML to prevent exponential blow-up.
Put differently: we could allow extra buttons besides {1, EML}. For example, to capture unary functions the authors mentally add an 'x' button, so now the RPN calculator has {1, EML, x}; and if you want multivariate functions it becomes an RPN calculator with extra buttons, e.g. {1, EML, x, y, z}.
But why stop there? In Metamath, proofs are compressed: if an expression or wff was proven before in the same proof, its first subproof is given a number, and any subsequent invocation of that N'th subproof refers to this number. Why only recall the input parameters x, y, z but not recall earlier computed values/functions?
In fact, every proof in Metamath's set.mm that uses this DAG compressibility could be split into a main proof, with the repeatedly used substatements automatically converted into explicitly separate lemma proofs; in that case Metamath could dispose of the single-proof DAG compression (but it would force proofs to be split up into lemmas plus a main proof).
None of the proven theorems in Metamath's set.mm displays the feared exponential blow-up.
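A sketch of what that DAG optimization could look like for EML trees (my framing, not the paper's): hash-consing, so each structurally distinct subtree is allocated exactly once and all repeats are shared.

```
# Hash-consing: every structurally distinct EML subtree is allocated once,
# so the "tree" is really a DAG and repeated subexpressions are free.
_pool = {}

def node(op, *args):
    key = (op, args)
    return _pool.setdefault(key, key)   # return the one shared node

ONE = node('1')

def eml(x, y):
    return node('eml', x, y)

# the expression for e^e, built twice; the second build reuses the first
a = eml(eml(ONE, ONE), ONE)
b = eml(eml(ONE, ONE), ONE)
print(a is b, len(_pool))               # True 3
```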
That is sort of comparable to how NAND simplifies scaling.
Division is hell on gates.
Having a single component was the reason scaling went the way it did.
There was only one gate structure that had to improve to make chips smaller; if a chip had used 3 different kinds, the scaling would have required more than one parallel innovation to land (sort of like how LED lighting had to wait for blue).
If you need two or more components, then you have to keep switching tools instead of hammer, hammer, hammer.
Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and faster. Anyone who thinks that eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.
Same reason all boolean logic isn't performed with combinations of NAND – it's computationally inefficient. Polynomials are (for their expressivity) very quick to compute.
And are a much less arbitrary choice than NAND, vs. NOR, XOR, etc.
If you use transistors as conceptual digital-logic primitives, where power dissipation isn't a thing, Pass Logic is "The Way".
Pass logic. Digital. [0]
This is extremely basic digital circuit design. You can create digital circuits as compositions of gates. But you can often implement the same logic, with fewer transistors, using pass logic.
Pass logic is also great for asynchronous digital design.
Moreover, the amplifying function must exist at least in some gates, to restore the logic levels, but there is no need for it to exist in all gates.
For instance, any logic circuit can be implemented using AND gates and/or OR gates made with diodes, which have no gain, i.e. no amplifying function, together with 1-transistor inverters, which provide both the NOT function and the gain needed to restore the logic levels.
Logic functions such as NAND can be expressed in several ways using simpler components, which correspond directly to the possible hardware implementations.
Nowadays, the most frequent method of logic implementation is by using parallel and series connections of switches, for which the MOS transistors are well suited.
Another way to express the logic functions is by using the minimum and maximum functions, which correspond directly with diode-based circuits.
All the logic functions can also be expressed using the 2 operations of the binary finite field, addition (a.k.a. XOR) and multiplication (a.k.a. AND).
This does not lead to simpler hardware implementations, but it can simplify some theoretical work by making algebraic results available. Actually, this way of expressing logic is the one that should properly have been named Boolean logic, as George Boole's contribution was precisely replacing "false" and "true" with "0" and "1" and reinterpreting the classic logic operations as arithmetic operations.

It is very weird to see in some programming languages a data type named Boolean whose possible values are "false" and "true", instead of the historically correct Boolean values "0" and "1", which can also be much more useful in programming than "false" and "true" by simplifying expressions, especially in array operations (which is why array-oriented languages like APL use "0" and "1", not "false" and "true").
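The GF(2) view is trivial to demo; here 0/1 are ordinary integers, AND is multiplication mod 2, XOR is addition mod 2, and the other connectives fall out as polynomials in those two:

```
# Boolean logic as arithmetic in GF(2): AND is multiplication mod 2,
# XOR is addition mod 2; everything else is a polynomial in those two.
AND = lambda a, b: (a * b) % 2
XOR = lambda a, b: (a + b) % 2

NOT  = lambda a: XOR(1, a)                      # 1 + a
OR   = lambda a, b: XOR(XOR(a, b), AND(a, b))   # a + b + ab
NAND = lambda a, b: NOT(AND(a, b))              # 1 + ab

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', AND(a, b), XOR(a, b), OR(a, b), NAND(a, b))
```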
But transistors can be N-channel or P-channel, so a transistor is not a single logical primitive the way, e.g., NAND gates are.
Because the EML basis makes simple functions (like +) hard to express.
Not to diminish this very cool discovery!
This also re-opens a lot of "party pooper" results in mathematics, such as the impossibility of representing solutions to the general quintic (fine print: if we restrict ourselves to arithmetic and roots/radicals). In mathematics and physics there have been a lot of "party pooper" results that later gave way to more profound and interesting positive results once the question was properly rephrased. A negative result for a myopic question isn't very informative on its own.
It seems like a neat parlour trick, indeed. But significant discovery?
> Submitted on 23 Mar 2026 (v1), last revised 4 Apr 2026 (this version, v2)
A stack of zeros and ones can be encoded in a single number by bit-shifting and incrementing.
Pushing a 0 onto the stack is equivalent to doubling the number.
Pushing a 1 is equivalent to doubling and adding 1.
Popping is equivalent to dividing by 2, where the remainder is the popped bit.
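In code (the sentinel bit at the bottom is my addition, so that leading zeros survive):

```
def push(n, bit):
    return 2 * n + bit        # shift left, set the low bit

def pop(n):
    return n // 2, n % 2      # (rest of stack, popped bit)

n = 1                         # sentinel bottom bit, so leading zeros survive
for bit in (0, 1, 1):
    n = push(n, bit)
print(bin(n))                 # 0b1011

n, top = pop(n)
print(top, bin(n))            # 1 0b101
```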
I use something not too far off for my daily programming, based on a similar idea: Rejoice is a concatenative programming language in which data is encoded as multisets that compose by multiplication. Think Fractran without the rule-searching, or Forth without a stack.
```
look at this paper: https://arxiv.org/pdf/2603.21852
now please produce 2x+y as a composition on EMLs
```
Opus (paid) - claimed that "2" is circular. Once I told it that ChatGPT had already done this, it finished successfully.
ChatGPT (free) - did it on the first try.
Grok - produced an estimate of the depth of the formula.
Gemini - success.
Deepseek - assumed some pre-existing knowledge of what EML is; unable to fetch the PDF from the link, unable to consume the PDF via "Attach file".
Kimi - produced a long output, then stopped and asked me to upgrade.
GLM - looks ok.
TIL you can taunt LLMs. I guess they exhibit more competitive spirit than I thought.
It got a result.
""" Consider a mathematical function EML defined as `eml(x,y)=exp(x)−ln(y)`
Please produce `sin(x)/x` as a composition on EMLs and constant number 1 (one). """
So, what happens if you take say the EML expression for addition, and invert the binary tree?
Reminds me of the Iota combinator, one of the smallest formal systems that can be combined to produce a universal Turing machine, meaning it can express all of computation.
eml(z, eml(x, 1))
= e^z - ln(eml(x, 1))
= e^z - ln(e^x)
= e^z - x
and the claim is that, after it's expanded, z will be such that this whole thing is equal to -x. But with some algebra, that happens only if e^z = 0,
and there is no complex number z that satisfies this equation. Indeed, if we laboriously expand the given formula for z (the left branch of the tree), we see that it goes through ln(0) and similar compound expressions. x^-1 has the same problem.
Both formulae work ...sort of... if we allow ln(0) = -Infinity and some other moxie, such as x / Infinity = 0 for all finite x.
Looks like it computes ln(1) = 0, then e - ln(0) = +inf, then e - ln(+inf) = -inf.
> EML-compiled formulas work flawlessly in symbolic Mathematica and IEEE754 floating-point… This is because some formulas internally might rely on the following properties of extended reals: ln 0 = −∞, e^(−∞) = 0.
And then it follows with:
> But EML expressions in general do not work ‘out of the box’ in pure Python/Julia or numerical Mathematica.
Thus, the paper's completeness claim depends on a non-standard arithmetic convention (ln(0) = -∞), not just the complex numbers it primarily advertises. The paper is transparent about this, but the caveat is buried on page 11 rather than foregrounded as a core one. Your comment deserves credit for flagging it.
This is the standard convention when doing operations in the extended real number line, i.e. in the set of the real numbers completed with positive and negative infinities.
When the overflow exception is disabled, any modern CPU implements the operations with floating-point numbers as operations in the extended real number line.
So in computing this convention has been standard for more than 40 years, while in mathematics it has been standard for a couple of centuries or so.
As always in mathematics, when evaluating expressions, i.e. when computing any kind of function, you must be very well aware of which sets you are operating within.
If you work with real numbers (i.e. in a computer you enable the FP overflow exception), then ln(0) is undefined. However, if you work with the extended real number line, which is actually the default setting in most current programming languages, then ln(0) is well defined and it is -∞.
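This is easy to check: numpy follows the IEEE754/extended-real convention (pure-Python math.log(0.0) raises instead), so the -x construction discussed above goes straight through ln 0 = -∞ and e^(-∞) = 0. A sketch following the evaluation chain given earlier in the thread; the particular tree for the constant -∞ is my transcription of that chain:

```
import numpy as np

def eml(x, y):
    return np.exp(x) - np.log(y)

with np.errstate(divide='ignore'):       # let ln(0) be -inf, IEEE754-style
    zero = eml(1, eml(eml(1, 1), 1))     # e - ln(e^e)  = 0
    pinf = eml(1, zero)                  # e - ln(0)    = +inf
    ninf = eml(1, pinf)                  # e - ln(+inf) = -inf
    x = 7/3
    print(eml(ninf, eml(x, 1)))          # exp(-inf) - ln(e^x) = 0 - x = -x
```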
−z = 1 − (e − ((e − 1) − z))
https://gist.github.com/CGamesPlay/9d1fd0a9a3bd432e77c075fb8...
That's awesome. I always wondered if there is some way to do this.
A quick Google search brings up https://github.com/pr701/nor_vm_core, which has the basic idea.
[1] https://github.com/tromp/AIT/blob/master/ait/minbase.lam
And i is obviously `sqrt(-1)`
I read the paper. Is there a table covering all other math operations translated to eml(x,y) form?
You can find this link on the right side of the arxiv page:
https://arxiv.org/src/2603.21852v2/anc/SupplementaryInformat...
e^ix = cos x + i sin x
which means: e^-ix = cos -x + i sin -x
= cos x - i sin x
so adding them together: e^ix + e^-ix = 2 cos x
cos x = (e^ix + e^-ix) / 2
So I guess the real part of that. Multiplication, division, addition and subtraction are all straightforward. So are hyperbolic trig functions. All other trig functions can be derived as per the above.
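For the record, this is a one-liner to sanity-check in Python's cmath:

```
import cmath

x = 0.7
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
print(cos_x, cmath.cos(x))   # both ~0.7648
print(sin_x, cmath.sin(x))   # both ~0.6442
```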
Posts like these are the reason I check HN every day.
> Everyone learns many mathematical operations in school: fractions, roots, logarithms, and trigonometric functions (+, −, ×, /, sqrt, sin, cos, log, …), each with its own rules and a dedicated button on a scientific calculator. Higher mathematics reveals that many of these are redundant: for example, trigonometric ones reduce to the complex exponential. How far can this reduction go? We show that it goes all the way: a single operation, eml(x, y), replaces every one of them. A calculator with just two buttons, EML and the digit 1, can compute everything a full scientific calculator does. This is not a mere mathematical trick. Because one repeatable element suffices, mathematical expressions become uniform circuits, much like electronics built from identical transistors, opening new ways to encoding, evaluating, and discovering formulas across scientific computing.
Simply because Boolean algebra doesn't have that many functions, and all of them are very simple to implement.
A complex Boolean function made out of NANDs (or the like) is only a little more complex than the same function made out of the other operators.
Implementing even simple real functions out of eml() seems to me to add a lot of computational complexity, even with both exp() and ln() implemented in hardware in O(1). I'm thinking of stuff like sum(), div() and mod().
Of course, I might be badly wrong, as I am not a mathematician (not even by far).
But I don't see, at the moment, the big win on this.
It's a derivation of the Y combinator from Ruby lambdas.
It's one of those facts that tends to blow minds when it's first encountered, I can see why one would name a company after it.
More on topic:
> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, sqrt, and log has always required multiple distinct operations.
I was taught that these were all hypergeometric functions. What distinction is being drawn here?
When you have a function with many parameters it becomes rather trivial to express simpler functions with it.
You could find a lot of functions with 4 parameters that can express all elementary functions.
Finding a binary operation that can do this, like in TFA, is far more difficult, which is why it has not been done before.
A function with 4 parameters can actually express not only any elementary function, but an infinity of functions with 3 parameters, e.g. by using the 4th parameter to encode an identifier for the function that must be computed.
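A throwaway sketch of that selector trick (the dispatch table and all names here are invented for illustration): one 4-parameter function whose last argument just picks which 3-parameter function to compute.

```
import math

# one 4-parameter function; the 4th argument is a dial that selects
# among arbitrarily many 3-parameter functions
def f(x, y, z, k):
    table = {
        0: lambda x, y, z: x + y + z,
        1: lambda x, y, z: x * y - z,
        2: lambda x, y, z: math.exp(x) - math.log(y) + z,
    }
    return table[k](x, y, z)

print(f(1, 2, 3, 0), f(1, 2, 3, 1))   # 6 -1
```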
Granted, but the claim in the abstract says:
>> computing elementary functions such as sin, cos, sqrt, and log has always required multiple distinct operations
And I don't see how this is true as to hypergeometric functions in a way that isn't shared by the approach in the paper.
> Finding a binary operation that can do this, like in TFA, is far more difficult, which is why it has not been done before.
> A function with 4 parameters can actually express not only any elementary function, but an infinity of functions with 3 parameters, e.g. by using the 4th parameter to encode an identifier for the function that must be computed.
These statements seem to be in direct conflict with each other; you can use the second parameter of a binary function to identify a unary function just as you can use the fourth parameter of a quaternary function to identify a trinary one.
With binary functions you can compose them using a very complex composition graph.
With unary functions you can compose them only linearly, so in general it is impossible to make a binary function with unary functions.
You can make binary functions from unary functions only by using at least one other binary function. For instance, you can make multiplication from squaring, but only with the help of binary addition/subtraction.
So the one function that can be used to generate the others by composition must be at least binary, in order to be able to generate functions with an arbitrary number of parameters.
This is why in mathematics there are many domains where the only required primitives are a small number of binary functions, but there is none where strictly unary functions are sufficient. (However, it may be possible to restrict the binary functions to very simple functions, e.g. making a tuple from components, for instance the CONS function of LISP I.)
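The squaring example spelled out: xy = ((x+y)² − (x−y)²)/4, where the binary +/− has to come from outside the unary primitive.

```
def square(u):              # the lone unary primitive
    return u * u

def mul(x, y):
    # xy = ((x + y)^2 - (x - y)^2) / 4; squaring alone can't get here,
    # the binary +/- must be supplied as an extra primitive.
    return (square(x + y) - square(x - y)) / 4

print(mul(7, 6))            # 42.0
```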
I have replied to your last statement:
> "you can use the second parameter of a binary function to identify a unary function just as you can use the fourth parameter of a quaternary function to identify a trinary one."
As I have explained above, what you propose does not work. It works in functions with 3 or more parameters, but it does not work in binary functions, because you cannot make binary functions from unary functions (without using some auxiliary binary functions).
What comes to mind as an alternative, which I would subjectively find finer, is "axe". Think axiom or axiology.
Anyone with other suggestions? Or even remarks on this one?
Of course the redundant primitives aren't free, since they add code size or die area. In choosing how many primitives to provide, the designer of a numerical library aims to make a reasonable tradeoff between that size cost and the speed benefit.
This paper takes that tradeoff to the least redundant extreme because that's an interesting theoretical question, at the cost of transforming commonly-used operations with simple hardware implementations (e.g. addition, multiplication) into computational nightmares. I don't think anyone has found a practical application for their result yet, but that's not the point of the work.
This work is about continuous systems, even if the reduction of many kinds of functions to compositions of a single kind of function is analogous to the reductions of logic functions to composing a few kinds or a single kind of logic functions.
I actually do not value the fact that the logic functions can be expressed using only NAND. For understanding logic functions it is much more important to understand that they can be expressed using either only AND and NOT, or only OR and NOT, or only XOR and AND (i.e. addition and multiplication modulo 2).
Using just NAND or just NOR is a trick that does not provide any useful extra information. There are many things, including classes of mathematical functions or instruction sets for computers, which can be implemented using a small number of really independent primitives.
In most or all such cases, after you arrive at the small set of maximally simple and independent primitives, you can reduce them to only one primitive.
However, that one primitive is not a simpler primitive but a more complex one, which can do everything the maximally simple primitives can do, and which can recreate those primitives by composition with itself.
Because of its higher complexity, it does not actually simplify anything. Moreover, generating the simpler primitives by composing the more complex primitive with itself leads to redundancies, if implemented thus in hardware.
This is a nice trick, but as I have said, it does not improve in any way the understanding of that domain or its practical implementations in comparison with thinking in terms of the multiple simpler primitives.
For instance, one could take a CMOS NAND gate as the basis for implementing a digital circuit, but you understand better how CMOS logic actually works when you understand that the AND function and the NOT function are localized in distinct parts of that CMOS NAND gate. This understanding is necessary when you have to design other gates for a gate library, because even if using a single kind of gate is possible, the performance of this approach is quite sub-optimal, so you almost always have to design separately, e.g., a XOR gate, instead of making it from NAND gates or NOR gates.
In CMOS logic, NAND gates and NOR gates happen to be the simplest gates that can restore at output the same logic levels that are used at input. This confuses some people to think that they are the simplest CMOS gates, but they are not the simplest gates when you remove the constraint of restoring the logic levels. This is why you can make more complex logic gates, e.g. XOR gates or AND-OR-INVERT gates, which are simpler than they would be if you made them from distinct NAND gates or NOR gates.
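The classic illustration: XOR from four NANDs, which is logically correct but wasteful next to a purpose-built CMOS XOR (a CMOS NAND is 4 transistors, so this costs about 16).

```
def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    # textbook 4-NAND XOR; in CMOS roughly 16 transistors,
    # versus far fewer for a purpose-built XOR / pass-gate design
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor(a, b))
```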
nix run github:pveierland/eml-eval
EML Evaluator — eml(x, y) = exp(x) - ln(y)
Based on arXiv:2603.21852v2 by A. Odrzywołek
Constants
------------------------------------------------------------------------------
1 K=1 d=0 got 1 expected 1 sym=ok num=ok [simplify]
e K=3 d=1 got 2.718281828 expected 2.718281828 sym=ok num=ok [simplify]
0 K=7 d=3 got 0 expected 0 sym=ok num=ok [simplify]
-1 K=17 d=7 got -1 expected -1 sym=ok num=ok [simplify]
2 K=27 d=9 got 2 expected 2 sym=ok num=ok [simplify]
-2 K=43 d=11 got -2 expected -2 sym=ok num=ok [simplify]
1/2 K=51 d=15 got 0.5 expected 0.5 sym=ok num=ok [simplify]
-1/2 K=67 d=17 got -0.5 expected -0.5 sym=ok num=ok [simplify]
2/3 K=103 d=19 got 0.6666666667 expected 0.6666666667 sym=ok num=ok [simplify]
-2/3 K=119 d=21 got -0.6666666667 expected -0.6666666667 sym=ok num=ok [simplify]
sqrt2 K=85 d=21 got 1.414213562 expected 1.414213562 sym=ok num=ok [simplify]
i K=75 d=19 got i expected i sym=ok num=ok [i²=-1, simplify]
pi K=153 d=29 got 3.141592654 expected 3.141592654 sym=ok num=ok [simplify]
Unary functions (x = 7/3)
------------------------------------------------------------------------------
exp(x) K=3 d=1 got 10.3122585 expected 10.3122585 sym=ok num=ok [simplify]
ln(x) K=7 d=3 got 0.8472978604 expected 0.8472978604 sym=ok num=ok [simplify]
-x K=17 d=7 got -2.333333333 expected -2.333333333 sym=ok num=ok [simplify]
1/x K=25 d=8 got 0.4285714286 expected 0.4285714286 sym=ok num=ok [simplify]
x - 1 K=11 d=4 got 1.333333333 expected 1.333333333 sym=ok num=ok [simplify]
x + 1 K=27 d=9 got 3.333333333 expected 3.333333333 sym=ok num=ok [simplify]
2x K=67 d=17 got 4.666666667 expected 4.666666667 sym=ok num=ok [simplify]
x/2 K=51 d=15 got 1.166666667 expected 1.166666667 sym=ok num=ok [simplify]
x^2 K=41 d=10 got 5.444444444 expected 5.444444444 sym=ok num=ok [simplify]
sqrt(x) K=59 d=16 got 1.527525232 expected 1.527525232 sym=ok num=ok [simplify]
Binary operations (x = 7/3, y = 5/2)
------------------------------------------------------------------------------
x + y K=27 d=9 got 4.833333333 expected 4.833333333 sym=ok num=ok [simplify]
x - y K=11 d=4 got -0.1666666667 expected -0.1666666667 sym=ok num=ok [simplify]
x * y K=41 d=10 got 5.833333333 expected 5.833333333 sym=ok num=ok [simplify]
x / y K=25 d=8 got 0.9333333333 expected 0.9333333333 sym=ok num=ok [simplify]
x ^ y K=49 d=12 got 8.316526261 expected 8.316526261 sym=ok num=ok [simplify]

Although you also need to encode where to put the input.
The real question is what emoji to use for eml when written out.
Some Emil or another, I suppose. Maybe the one from Ratatouille, or maybe this one: https://en.wikipedia.org/wiki/Emil_i_L%C3%B6nneberga
I'm kidding, of course. You can encode anything in bits this way.
Exp and ln: isn't the operation its own inverse, depending on the parameter? What a neat find.
This is a function from ℝ² to ℝ. It can't be its own inverse; what would that mean?
and
eml(eml(1,x),1) = e^e / x
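Quick numerical check of that identity (plain Python suffices; nothing here passes through ln 0):

```
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

x = 7/3
print(eml(eml(1, x), 1))        # exp(e - ln x) - ln 1 = e^e / x
print(math.e**math.e / x)       # ~6.4947 for both
```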
Table 4 shows the "size" of the operators when fully expanded to "eml" applications, which is quite large for +, -, ×, and /.
Here's one approach which agrees with the minimum sizes they present:
eml(x, y ) = exp(x) − ln(y) # 1 + x + y
eml(x, 1 ) = exp(x) # 2 + x
eml(1, y ) = e - ln(y) # 2 + y
eml(1, exp(e - ln(y))) = ln(y) # 6 + y; construction from eq (5)
ln(1) = 0 # 7
After you have ln and exp, you can invert their applications in the eml function:
eml(ln x, exp y) = x - y # 9 + x + y
Using a subtraction-of-subtraction to get addition leads to the cost of "27" in Table 4; I'm not sure what formula leads to 19, but I'm guessing it avoids the expensive construction of 0 by using something simpler that cancels:
x - (0 - y) = x + y # 25 + {x} + {y}

xy = eml(eml(1, eml(eml(eml(1, eml(eml(1, eml(1, x)), 1)), eml(1, eml(eml(1, eml(y, 1)), 1))), 1)), 1)
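These constructions check out numerically. Note that the zero construction needs ln 0 = -∞, as discussed elsewhere in the thread, so I use numpy rather than pure-Python math; the leaf counts in the comments are from the list above.

```
import numpy as np

def eml(x, y):
    return np.exp(x) - np.log(y)

EXP = lambda a: eml(a, 1)                 # 2 + a leaves
LN  = lambda a: eml(1, EXP(eml(1, a)))    # 6 + a leaves, the eq (5) construction
SUB = lambda a, b: eml(LN(a), EXP(b))     # 9 + a + b leaves
ZERO = LN(1.0)                            # 7 leaves

x, y = 7/3, 5/2
with np.errstate(divide='ignore'):        # ln 0 = -inf shows up inside 0 - y
    print(SUB(x, y))                      # x - y = -0.1666...
    print(SUB(x, SUB(ZERO, y)))           # x - (0 - y) = x + y = 4.8333...
```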
From Table 4, I think addition is slightly more complicated?
const eml = (x,y) => Math.exp(x) - Math.log(y);
const mul = (x,y) => eml(eml(1,eml(eml(eml(1,eml(eml(1,eml(1,x)),1)),eml(1,eml(eml(1,eml(y,1)),1))),1)),1);
console.log(mul(5,7));
> 35.00000000000001

For larger or negative inputs you get a NaN, because ECMAScript has limited precision and doesn't handle imaginary numbers.
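A sketch of the same tree in Python's cmath, which is one way around the NaNs: complex exp/log make every intermediate well defined, and the principal-branch ambiguity cancels because exp(log(w)) == w. (My port; same composition as the JS above.)

```
import cmath

def eml(x, y):
    return cmath.exp(x) - cmath.log(y)

# same composition as the JS version, ported verbatim
def mul(x, y):
    return eml(eml(1, eml(eml(eml(1, eml(eml(1, eml(1, x)), 1)),
                              eml(1, eml(eml(1, eml(y, 1)), 1))), 1)), 1)

print(mul(5, 7))     # ~ (35+0j)
print(mul(-5, 7))    # ~ (-35+0j), where the float version NaNs out
```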
exp(a) = eml(a, 1)
ln(a) = eml(1, eml(eml(1, a), 1))
Plugging those in is left as an exercise for the reader.
But no...
This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding.
> Elementary functions such as exponentiation, logarithms and trigonometric functions are the standard vocabulary of STEM education. Each comes with its own rules and a dedicated button on a scientific calculator;
What?
and:
> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, √, and log has always required multiple distinct operations. Here we show that a single binary operator
Yeah, this is done by using tables and series. His method does not actually facilitate the computation of these functions.
There is no such thing as "continuous mathematics". Maybe he meant to say continuous functions?
Looking at page 14, it looks like he reinvented the concept of the vector-valued function or something. The whole thing is rediscovering something that already exists.
The point of this paper is not to revolutionize how a scientific calculator functions overnight; it's to establish a single binary operation that can reproduce the rest of the typical continuous elementary operations via repeated application, analogous to how a NAND or NOR gate creates all of the discrete logic gates. Hence "continuous mathematics", as opposed to discrete mathematics. It seems to me you're being overly negative without solid reasoning.
But he didn't show this, though. I skimmed the paper many times. He creates multiple branches of these trees on the last page, so it's not truly a single nested operation.
"All" is a tall claim. Have a look at https://perso.ens-lyon.fr/jean-michel.muller/FP5.pdf for example. Jump to slide 18:
> Forget about Taylor series
> Taylor series are local best approximations: they cannot compete on a whole interval.
There is no need to worry about "sh-tt-ng" on their result when there is so much to learn about other approximation techniques.