.....sometimes, when snorkeling in a turquoise ocean, relaxed and contemplating the
colourful and familiar inhabitants of the sea, all of a sudden strange creatures
emerge from the deep, arousing strangeness, shudder and bewilderment. Then either
they disappear again after a while, leaving behind a faint memory of uneasiness
fading with time, or, upon closer scrutiny and acquaintance with the properties of
their natural environment, they become as natural and peaceful to the mind as all
those creatures hitherto known to one. In other words, there is a way to place
your unnamed identity into a natural environment so as to make its appearance
completely obvious, and this has to do with some very classical and concrete
mathematics (with the pun on *Graham et al.* intended). It also shows how someone
*could* come up with this formula, and in demonstrating this we proceed in the
classical dialectic way: Thesis – Antithesis – Synthesis.

**Thesis** First of all, you have to place all participants of your identity
into their natural home. Now your Eulerian numbers, as you define them, are far
away from home – one reason for this being that your definition is combinatorial,
whereas it is difficult to find a natural combinatorial definition of their
counterparts, the Bernoulli numbers, to which you wish to relate them, since
these are noninteger rational numbers. To assign them to their natural home
one has, as their name suggests, to direct one's attention to their founding
father, the great Leonhard Euler. So on one wonderful day in the mid-eighteenth
century, Herr Euler decided, for reasons hinted at below, to get interested in
finding a rational expression for the formal power series
\begin{equation*}
P_n(x):=\sum_{k=0}^{\infty} k^n x^k \quad,\quad
\text{with $n$ fixed, but arbitrary.}\tag{0}
\end{equation*}
And thus he achieved this: he started with the identity
\begin{equation*}
1+x+x^2+x^3+\cdots = \dfrac{1}{1-x},\tag{1}
\end{equation*}
differentiated it and multiplied it with $x$, obtaining thus
\begin{equation*}
x+2x^2+3x^3+\cdots = \dfrac{x}{(1-x)^2},
\end{equation*}
differentiated it and multiplied it with $x$, obtaining thus
$$
x+2^2x^2+3^2x^3+\cdots = \dfrac{x^2+x}{(1-x)^3},
$$
differentiated it and multiplied it with $x$, obtaining thus
$$
x+2^3x^2+3^3x^3+\cdots = \dfrac{x^3+4x^2+x}{(1-x)^4},
$$
and so on, arriving at his sought-for rational expression
\begin{equation*}
\fbox{$0^nx^0+1^nx^1+2^nx^2+3^nx^3+\cdots =
\sum_{k=0}^{\infty} k^n x^k =
\dfrac{A_n(x)}{(1-x)^{n+1}}$} \tag{2}
\end{equation*}
(with the prescription $0^n:=[n=0]$, an Iverson bracket), where $A_n(x) := \sum_{k=0}^n A(n,k)x^k$
are monic polynomials, called the *Eulerian polynomials*, whose nature reveals
itself by contemplating the step $n-1 \rightarrow n$: write down this formula for $
n-1$, differentiate it and multiply by $x$, thus arriving at the recursion
\begin{equation*}
A_n(x) = x(1-x)A_{n-1}'(x) + nxA_{n-1}(x) \quad,\quad
A_0(x) = 1.\tag{3}
\end{equation*}
This puts the Eulerian numbers $A(n,k)$ into their natural home as the
coefficients of the Eulerian polynomials. From (3) it is straightforward to derive their recursion
$$\tag{3a}
A(n,k) = kA(n-1,k)+(n-k+1)A(n-1,k-1) \quad,\quad
A(0,0) = 1,
$$
and from (2) the explicit formula
$$
A(n,k) = \sum_{i=0}^k (-1)^i\binom{n+1}{i} (k-i)^n
= \sum_{j=0}^k (-1)^{k-j}\binom{n+1}{k-j} j^n.
$$
(That this definition of the Eulerian numbers coincides with your combinatorial
one – up to an index shift, see the **Remark** below – is by no means
obvious and one of the many strange ways mathematics takes; it is mainly due to the fact that, ominously, the number of $n$-permutations having $k-1$ descents obeys the same recursion law (3a) as the Eulerian numbers.) So our working
definition of the Eulerian numbers $A(n,k)$ is
definition of the Eulerian numbers $A(n,k)$ is

**Definition** The Eulerian numbers $A(n,k), n,k \in \mathbb{N}$, are
defined by
$$
\fbox{$A(n,k) := \text{coefficient of $x^k$ in the Eulerian polynomial $A_n(x)$.}$}
$$
**Antithesis** Now to the next species, the Bernoulli numbers $B_n$. Their
natural home is the arena of the centuries-old battle for a closed formula
for the power sums
\begin{equation*}
S_n(k) := 1^n+2^n+3^n+\cdots +k^n =
\sum_{j=1}^k j^n\tag{4}.
\end{equation*}
The first instances were known throughout the centuries to countless scholars
tormented with pointless inductions:
$$
\begin{split}
S_1(k) & = \dfrac{1}{2}k^{2} + \dfrac{1}{2}k
= \dfrac{k(k+1)}{2};\\
S_2(k) & = \dfrac{1}{3}k^{3} + \dfrac{1}{2}k^{2} +
\dfrac{1}{6}k
= \dfrac{k(k+1)(2k+1)}{6}\\
& = \dfrac{(2k)(2k+1)(2k+2)}{24}; \\
S_3(k) & = \dfrac{1}{4}k^{4} + \dfrac{1}{2}k^{3} +
\dfrac{1}{4}k^{2}
= \dfrac{k^2(k+1)^2}{4}.
\end{split}
$$
What these first instances lead one to conjecture is true – $S_n(k)$ is a
polynomial in $k$ of degree $n+1$ with leading term $\dfrac{k^{n+1}}{n+1}$ and no
constant term; this is easily proved by an ingenious telescoping argument due to
Pascal. Replacing $k$ by an indeterminate $x$ gives the *power sum
polynomials $S_n(x)$*, and one of the Holy Grails of mathematics for centuries was
to exhibit a closed formula for these power sum polynomials.

Things acquired momentum in the seventeenth century, when Calculus was lying in
the air and the desire grew to compute the areas and volumes of everything one
could lay one's greedy claws on. In the course of these events the suspicion grew
that the key to a closed formula for the $S_n(x)$ lay in other time-withered
remnants of traditional lore bestowed upon us by the ancient ones – the *figurate
numbers*. These were already known to the Pythagoreans, since they fitted well with
their belief in a mythical unity of number and form – algebra and geometry, in
modern terms – and this fundamental dichotomy is alive and in best shape today
in the form of the hot topic called *Arithmetic Geometry*. Not turning quite such big
wheels, the Pythagoreans coined the concept of *figurate numbers*: numbers given by the number
of elements of elementary, well-known geometric configurations – pebbles on the
beach, say. Ordered by dimension $n$, they form families $F(k,n) = F_n(k)$, $k=1,2
,3,\dots$ as follows:

$n=0$, $F(k,0)$: in dimension 0, there is no place to move and you can place the
pebbles just one at a time, so

$F(k,0) = 1, 1, 1, \dots$

$n=1$, $F(k,1)$: in dimension 1, you can put one pebble after the other in line,
one at a time, and so you build the naturals:

$F(k,1) = 1, 1+1=2, 1+1+1=3, \dots$

$n=2$, $F(k,2)$: in dimension 2, you have room to pile the patterns of dimension
1 on top of each other in a plane in triangular shapes, and so you build the
*triangular numbers*:

$F(k,2) = 1, 1+2=3, 1+2+3=6, \dots$

$n=3$, $F(k,3)$: in dimension $3$, you have room to pile the patterns of
dimension 2 on top of each other in space in tetrahedral shapes, and so you build
the *triangular pyramidal* or *tetrahedral numbers*:

$F(k,3) = 1, 1+3=4, 1+3+6=10, \dots$

From there on, visualization breaks down, but the pattern should be clear: the
figurate numbers $F(k,n)$ conform to the recursion
\begin{equation*}
F(k,n) = \sum_{j=1}^k F(j,n-1) \quad,\quad F(k,0) = 1.
\tag{5}
\end{equation*}
Another Holy Grail of ancient times was to give a closed formula for the
figurate numbers $F(k,n)$. Such a formula was found, then forgotten, then
rediscovered time and again; this colourful and interesting story is told in [1].
At any rate, the answer is, in modern terms,
\begin{equation*}
\fbox{$F(k,n) = \dbinom{k+n-1}{n}$},\tag{6}
\end{equation*}
which, as we will see, comprises the key to paradise.
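The recursion (5) and the closed form (6) are easy to confront by machine; a small Python sketch (the function name is mine):

```python
from math import comb

def figurate(k, n):
    """F(k,n) via the recursion (5): F(k,n) = sum_{j=1}^k F(j,n-1), F(k,0) = 1."""
    if n == 0:
        return 1
    return sum(figurate(j, n - 1) for j in range(1, k + 1))

triangular = [figurate(k, 2) for k in range(1, 6)]   # 1, 3, 6, 10, 15
tetrahedral = [figurate(k, 3) for k in range(1, 6)]  # 1, 4, 10, 20, 35
```

Comparing `figurate(k, n)` with `comb(k + n - 1, n)` for small $k, n$ confirms (6).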

At this point, things begin to become interesting. The $F(k,n)$ are polynomials in
$k$ of degree $n$, and (5) reads, because of (6),
\begin{equation*}
\binom{k+n-1}{n} = \sum_{j=1}^k \binom{j+n-2}{n-1},\tag{7}
\end{equation*}
and this is a closed expression for a sum analogous to (4): the accumulated sum of the values of a polynomial at ascending arguments. What is more, it is the key to the Holy Grail pertaining to (4) – at least partly, since it leads to a recursion for a closed form of the power sum polynomials. To see this, let us take a closer look at the first instances of (7). First rewrite (7) as
$$
\dfrac{(k+n-1)(k+n-2) \cdots k}{1\cdot 2 \cdots n} =
\sum_{j=1}^k
\dfrac{(j+n-2)(j+n-3) \cdots j}{1\cdot 2 \cdots (n-1)}.
$$
For $n=2,3,4,\dots$ this gives the following equations by expanding the numerators
under the sum sign:
$$
\begin{split}
\dfrac{(k+1)k}{1\cdot2}
&= \sum_{j=1}^k \dfrac{j}{1}\\
&= S_1(k);\\
\dfrac{(k+2)(k+1)k}{1\cdot2\cdot3}
&= \sum_{j=1}^k \dfrac{(j+1)j}{1\cdot2} \\
&= \dfrac{1}{2}S_2(k)+\dfrac{1}{2}S_1(k);\\
\dfrac{(k+3)(k+2)(k+1)k}{1\cdot2\cdot3\cdot4}
&= \sum_{j=1}^k \dfrac{(j+2)(j+1)j}{1\cdot2\cdot3}\\
&= \dfrac{1}{6}S_3(k)+\dfrac{1}{2}S_2(k)+\dfrac{1}{3}S_1(k);
\end{split}
$$
and so on, yielding a triangular system for the $S_n(k)$, which can then be
recursively solved. This system of three equations, for instance, yields the
expressions for $S_1(k), S_2(k), S_3(k)$ given above.
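This triangular elimination can be carried out mechanically. Here is a small Python sketch (all function names and the coefficient-list representation are mine, not from the text) that reproduces $S_1, S_2, S_3$ with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply two coefficient lists (index = power of the variable)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def binom_poly(offsets, denom):
    """Coefficients of prod_{i in offsets} (x + i), divided by denom."""
    p = [Fraction(1)]
    for off in offsets:
        p = poly_mul(p, [Fraction(off), Fraction(1)])
    return [c / denom for c in p]

def power_sums(N):
    """Solve the triangular system arising from (7) for S_1(k), ..., S_N(k)."""
    S = {}
    for n in range(1, N + 1):
        lhs = binom_poly(range(n + 1), factorial(n + 1))  # C(k+n, n+1) as poly in k
        c = binom_poly(range(n), factorial(n))            # C(j+n-1, n) expanded in j
        coeffs = lhs[:]
        for m in range(1, n):                 # move the known lower S_m to the left
            for i, a in enumerate(S[m]):
                coeffs[i] -= c[m] * a
        S[n] = [a / c[n] for a in coeffs]     # leading coefficient c[n] = 1/n!
    return S

S = power_sums(3)
# S[3] == [0, 0, 1/4, 1/2, 1/4], i.e. S_3(k) = k^2/4 + k^3/2 + k^4/4
```

Evaluating `S[3]` at $k=4$ gives $1+8+27+64=100$, as it should.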

So this detour via the figurate numbers was a possible route to the Holy Grail
of a closed expression for the power sums. The time was ripe in the seventeenth
century for this approach; it was recognized and taken by many, among them Fermat,
Pascal, Wallis, and Jakob Bernoulli, although none of them walked the road to the
end. Fermat announced in letters to Mersenne and Roberval in autumn 1636 (see [4])
that he had proved (6) and that this had enabled him to solve the general power
sum problem, but otherwise, as was his wont, gave no further details; Pascal, in
his famous *Traité du triangle arithmétique*, published 1665, also had (6), but
then went his own way to a recursive solution of the power sum problem by
providing his celebrated telescoping argument (still founded on binomials, of course); Wallis, in his *Arithmetica Infinitorum* of 1656, used the method sketched above to obtain $S_n(k)$ up to order 4, saying one could proceed further this way “as far as one likes”.

And then enter Jakob Bernoulli with his *magnum opus*, the *Ars conjectandi*, in which he laid the foundations of modern probability theory, posthumously published in 1713 (for an annotated English translation see [5]). There, in Part II, he begins with a careful description of the combinatorial foundations of his enterprise, the numbers of “combinations of distinct elements of a given exponent”, as he called them – just the binomial coefficients, as we would say today: the number of ways to choose $k$ elements from $n$ given ones. From the start he links them to the property (5) of the figurate numbers by systematically generating them in increasing order of the exponents and arranging them in a familiar pattern, which can be seen in *loc. cit.* at the bottom of p. 87 (very strange that he does not mention Pascal here). After some elaborations on figurate numbers – where he provides, among other things, a proof of (6) in the verbose Fermat–Pascal style (Fermat himself being rather reticent in disclosing details, “verbose” here refers to the manner of formulating results: cumbersome and in natural language, owing to the lack of a developed system of mathematical notation such as we have today) – he turns, in a *scholium*
(*loc. cit.*, pp. 95-99) to the power sum problem
and Wallis' treatment of it, for whatever reason, and uses the above recursion via the figurate numbers, as did Wallis, to push the computation of the $S_n(k)$ far beyond Wallis, up to $n=10$ (see
*loc. cit.*, p. 96). If he had pursued this road
to the end, he might have landed at the full recursion scheme sketched above,
involving the Stirling numbers of the first kind, and at its inversion, leading to
the closed formula for the $S_n(k)$ in terms of linear combinations of binomials
with coefficients the Stirling numbers of the second kind; but those numbers were
in 1713 still awaiting their discovery by James Stirling in 1730. So he used the
figurate ansatz just for computations to gather data, and, fortunately, he stopped there. Fortunately so, because in this data he perceived a special structure which all his forerunners had failed to perceive, and which he probably also would have failed to perceive had he followed the Stirling way to the end, because it is not visible there (as the authors of [2] put it on p. 289: “*Bernoulli would probably not have discovered his numbers if he had taken this route.*”).

What Jakob Bernoulli perceived, and what brought him immortal fame, was the
following. At each stage $n$ there is just one new rational number $B_n$ joining
the game of building $S_n(k)$. If we order the $S_n(k)$ by falling powers, it
enters at the very right as the coefficient of the linear term (there is no
constant term, by definition of the $S_n(k)$). At each subsequent step this number
travels one place to the left, to the next higher power, and gets multiplied by a well-defined rational number, in such a way that it is determined by the lower $B_m$ via the normalization $S_n(1) = 1$. Bernoulli's description of how to build the $S_n(k)$ can be seen on pp. 96-97 of
*loc. cit.* (unfortunately in Latin; an English translation is on pp. 47-48 of [2]).

What Jakob Bernoulli was saying can be rephrased in modern terms as

**Theorem** *The power sum polynomials $S_n(x)$ obey the following
recursion scheme
$$
\text{$S_n(x) = n\int_0^x S_{n-1}(t) dt + B_n x\quad$ for
$n\ge1$} \quad,\quad S_0(x) = x,
$$
with the $B_n$ recursively defined by*
$$
S_n(1) = n\int_0^1 S_{n-1}(t) dt + B_n = 1
\quad,\quad n=0,1,2,\dots
$$
How smoothly this recursion works can be seen from the following minimalistic MAPLE procedure.

```
S := proc(n)
  local T, B;
  if n = 0 then
    x;
  else
    T := n*int(S(n-1), x);
    B := 1 - eval(T, x = 1);
    T + B*x;
  end if;
end proc;
```
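For readers without MAPLE at hand, the same recursion can be sketched in Python with exact rationals (coefficient lists indexed by power; the names and representation are mine):

```python
from fractions import Fraction

def power_sum(n):
    """Return (coefficients of S_n(x), B_n) via the recursion of the Theorem:
       S_n(x) = n * int_0^x S_{n-1}(t) dt + B_n x, normalized by S_n(1) = 1."""
    if n == 0:
        return [Fraction(0), Fraction(1)], Fraction(1)   # S_0(x) = x, B_0 = 1
    prev, _ = power_sum(n - 1)
    # T(x) = n * integral from 0 to x of S_{n-1}(t) dt
    T = [Fraction(0)] + [Fraction(n) * a / (i + 1) for i, a in enumerate(prev)]
    B = 1 - sum(T)       # B_n = 1 - T(1), forcing S_n(1) = 1
    T[1] += B            # add the correction term B_n * x
    return T, B

bernoullis = [power_sum(n)[1] for n in range(9)]
# B_0..B_8 = 1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```

Note that this normalization yields $B_1 = +\tfrac12$, consistent with the definition of the $B_n$ used throughout this answer.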

**Corollary** *The power sum polynomials have the explicit expression
$$
S_n(x) = \dfrac{1}{n+1}\sum_{j=0}^{n} \binom{n+1}{j}
B_{j} x^{n-j+1}
$$
with the $B_n, n=0,1,2,\dots$, called the* Bernoulli numbers, *recursively given by*
$$
\sum_{j=0}^{n} \binom{n+1}{j} B_{j} = n+1.
$$
For the corollary just note that the linear term of $S_n(x)$ is exactly $B_nx$.
Iterating the recursion prescription $k-1$ times yields that the term of degree $k$
of $S_{n+k-1}(x)$ is
$$
\dfrac{n+k-1}{k}\cdots\dfrac{n+2}{3}\cdot
\dfrac{n+1}{2}B_n x^k =
\dfrac{1}{n+k}\binom{n+k}{k}B_nx^k,
$$
– where the factors in the numerator stem from the factor in front of the
integral, and the factors in the denominator stem from integrating $x^j$, $j=1,2,
\dots, k-1$ – and the claim follows by reindexing. The recursion for the $B_n$ is
just a reformulation of the condition $S_n(1) = 1$.

How Jakob Bernoulli arrived at this perception – or prescription, as you like –
remains shrouded in mystery and open to ceaseless speculation, since not a
single sign of explanation has been handed down to us. What he achieved this way
is remarkable: instead of having the computational load of determining all the $S_j(x)$, $j < n$, to obtain $S_n(x)$, the task is now reduced to recursively
determining a sequence of rational numbers $B_n$, with the rest of the structure
of the $S_n(x)$ completely determined. And it brings us to our working definition of
the Bernoulli numbers $B_n$:

**Definition** The Bernoulli numbers $B_n, n=0,1,2,\dots$
are defined by
$$
\fbox{$\begin{split}
B_n :&= \text{coefficient of the linear term of the}\\
&\hspace{1.5em}\text{$n$-th power sum polynomial $S_n(x)$}\\
&= S_n'(0).
\end{split}$}
$$

**Synthesis** Our final aim is to bring the definitions of the $A(n,k)$ and
the $B_n$ under one roof to produce the formula
\begin{equation*}
\forall n\ge 2:\quad\sum_{m=1}^{n} (-1)^m \frac{A(n,m)}
{\binom{n}{m}}=(n+1) B_n.\tag{8}
\end{equation*}
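Before bringing the two definitions together, (8) can at least be checked numerically – a hedged experiment, not a proof. The following Python sketch (function names are mine) takes the Eulerian numbers from the explicit formula after (2) and the $B_n$ from the Corollary's recursion:

```python
from fractions import Fraction
from math import comb

def eulerian(n, k):
    """A(n,k) via the explicit formula following (2)."""
    return sum((-1) ** i * comb(n + 1, i) * (k - i) ** n for i in range(k + 1))

def bernoulli(n):
    """B_n via the Corollary's recursion sum_{j=0}^n C(n+1,j) B_j = n+1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append((m + 1 - sum(comb(m + 1, j) * B[j] for j in range(m))) / Fraction(m + 1))
    return B[n]

def lhs(n):
    """Left-hand side of (8)."""
    return sum(Fraction((-1) ** m * eulerian(n, m), comb(n, m)) for m in range(1, n + 1))

# e.g. n = 4: lhs(4) == -1/6 == 5 * B_4
```

For odd $n \ge 3$ both sides vanish, as they must, since $B_n = 0$ there.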
**Remark** The notation “$A(n,m)$” in the Wikipedia entry for the Eulerian
numbers is an unfortunate one, because it conflicts with standard use, e.g. in [0], where $A(m,n)$ refers to the Eulerian number $\left\langle {n \atop m+1}
\right\rangle$ in the Knuth–Karamata notation, and the notation $a(n,m)$ is used for $
\left\langle {n \atop m} \right\rangle$. The confusion comes about because there
are two conventions with regard to the Eulerian polynomials. One convention – to
which I adhere – denotes them $A_n(x)$ and has them defined via (2), so they are
of degree $n$. The other denotes them $a_n(x)$ or $S_n(x)$ and has them
defined so that $xa_n(x) = A_n(x)$; in this convention the $n$-th Eulerian
polynomial unfortunately has degree $n-1$, which is confusing and error-prone. The
Wikipedia entry is somewhat vague about the status of the $A_n(x)$. $\hspace{30em}
\square$

Now Euler's primary interest in (0) was not to prove a formula like (8), but to
put $x:=-1$ in (2) for $n=1,2,\dots$, which would give him
$$
-1^n+2^n-3^n+4^n \mp \cdots = \dfrac{A_n(-1)}{2^{n+1}}
$$
or
$$
\zeta(-n) = 1^n+2^n+3^n+4^n+ \cdots =
\dfrac{\sum_k (-1)^k A(n,k)}{(2^{n+1}-1)2^{n+1}} \quad,
\quad n=1,2, \dots
$$
where
$$
\zeta(s) := \sum_{k=1}^{\infty} k^{-s} \quad,\quad
s \in (1,\infty) \subseteq \mathbb{R}
$$
is the (real) *zeta function*, one of Euler's many prestigious brainchildren. By
this method – which later became known as *Abel summation* – Euler managed to
“regularize”, i.e. to give a reasonable finite value to, a divergent expression,
here $\zeta(-n)$, some 80 years before Abel. Together with his equally prestigious
formulae
$$
\zeta(2n) = (-1)^{n+1}\dfrac{(2\pi)^{2n}}{2(2n)!}B_{2n}
\quad,\quad n=1,2, \dots
$$
this led him to formulate the functional equation of the zeta function and to
verify it on the integers with the help of these formulas, which leads to the
equalities
$$
\forall n \ge 1:\,\,\dfrac{\sum_k (-1)^k A(n,k)}
{(2^{n+1}-1)2^{n+1}} = -\dfrac{B_{n+1}}{n+1},
$$
another string of identities between the Eulerian and the Bernoulli numbers
(however different from (8)). Since, as we now know, these cases imply the full
functional equation for the full complex zeta function, he conjectured it (for the
real case) and, in some sense, implicitly proved it, some hundred years before
Riemann, without having notions and tools like analytic continuation at his
disposal. For all this see [3].
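These equalities can be spot-checked numerically, keeping in mind that $\zeta(-n) = -B_{n+1}/(n+1)$, whence the sign; for instance, $n=1$ yields the famous $\zeta(-1) = -\tfrac{1}{12}$. A Python sketch (function names are mine):

```python
from fractions import Fraction
from math import comb

def eulerian(n, k):
    """A(n,k) via the explicit formula following (2)."""
    return sum((-1) ** i * comb(n + 1, i) * (k - i) ** n for i in range(k + 1))

def bernoulli(n):
    """B_n via the recursion sum_{j=0}^n C(n+1,j) B_j = n+1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append((m + 1 - sum(comb(m + 1, j) * B[j] for j in range(m))) / Fraction(m + 1))
    return B[n]

def zeta_neg(n):
    """Euler's regularized zeta(-n) = sum_k (-1)^k A(n,k) / ((2^(n+1)-1) 2^(n+1))."""
    s = sum((-1) ** k * eulerian(n, k) for k in range(n + 1))
    return Fraction(s, (2 ** (n + 1) - 1) * 2 ** (n + 1))

# zeta_neg(1) == -1/12, zeta_neg(3) == 1/120
```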

Nevertheless, (2) is of help also in our situation because of the following

**Lemma** *Let $f(x) = \sum_{k=0}^{\infty} a_k x^k$ be a formal power
series. Define the* summation series $\Sigma f(x)$ of $f(x)$ *as $\Sigma f(x) :=
\sum_{k=0}^{\infty} (\Sigma a_k) x^k$ with $\Sigma a_k := \sum_{j=0}^k a_j$. Then
for a formal power series $g(x) = \sum_{k=0}^{\infty} b_k x^k$ there holds*
$$
g(x) = \Sigma f(x) \iff g(x) = \dfrac{f(x)}{1-x}.
$$
For this just note $\dfrac{f(x)}{1-x} = \left(\sum_{k=0}^{\infty} a_k x^k\right)\left(\sum_{l=0}^{
\infty} x^l\right)$ and multiply out following the rules of the Cauchy product.

Then, as a corollary, we obtain

**Theorem** *The generating function of the values of the power sum
polynomials at nonnegative integers is given by*
\begin{equation*}\tag{9}
\sum_{k=0}^{\infty} S_n(k) x^k = \dfrac{A_n(x)}{(1-x)^{n+2}}
= \sum_{m=0}^n A(n,m) \dfrac{x^m}{(1-x)^{n+2}}.
\end{equation*}
For this, the summation series of (0) is just $\sum_{k=0}^{\infty} S_n(k) x^k$.
Apply the lemma and use (2).
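Formula (9) can be tested coefficientwise: using $[x^j]\,(1-x)^{-(n+2)} = \binom{j+n+1}{n+1}$ and the Cauchy product, the coefficient of $x^k$ on the right-hand side must equal $S_n(k)=\sum_{j=1}^k j^n$ (for $n\ge1$; at $n=0$ the term $0^0x^0$ in (0) shifts the summation series by $\frac{1}{1-x}$). A small Python sketch, names mine:

```python
from math import comb

def eulerian(n, k):
    """A(n,k) via the explicit formula following (2)."""
    return sum((-1) ** i * comb(n + 1, i) * (k - i) ** n for i in range(k + 1))

def coeff(n, k):
    """[x^k] A_n(x)/(1-x)^(n+2), by Cauchy product with the binomial series."""
    return sum(eulerian(n, m) * comb(k - m + n + 1, n + 1) for m in range(min(n, k) + 1))

def power_sum_value(n, k):
    """The power sum S_n(k) = 1^n + ... + k^n."""
    return sum(j ** n for j in range(1, k + 1))
```

For example, `coeff(2, 3)` gives $A(2,1)\binom{5}{3} + A(2,2)\binom{4}{3} = 10 + 4 = 14 = S_2(3)$.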

And now our journey has come to an end. Formula (9) will give a closed form
for the $S_n(x)$ as a linear combination of certain binomials with
coefficients the Eulerian numbers. Taking the coefficient of the linear term will
then, in conjunction with our definition of the $B_n$, provide (8). In order not to spoil the fun of doing those computations for
oneself: DETAILS ARE LEFT TO THE READER....

—

[0] Comtet, L. –
*Advanced Combinatorics.* D. Reidel 1974.

[1] Edwards, A.W.F. –
*Pascal's arithmetical triangle.* Dover Publications 2019.

[2] Graham, R.L. et al. –
*Concrete Mathematics* (2nd edition). Addison-Wesley 1994.

[3] Euler, L. –
Remarques sur un beau rapport entre les séries des puissances tant directes que
réciproques. *Mémoires de
l'académie des sciences de Berlin 17*, 1768, pp. 83-106.

[4] Knoebel, A. et al. –
*Mathematical Masterpieces.* Undergraduate Texts in Mathematics.
Springer 2007.

[5] Sylla, E.D. –
*Jacob Bernoulli – The Art of Conjecturing.*
The Johns Hopkins University Press 2006.
