^{th} grade I antagonized one teacher by solving problems of this sort using more obscure number theoretic functions such as σ(n), the sum of the positive divisors of n. Since σ(4)=7, σ(σ(4))=σ(7)=8 and σ(σ(σ(4)))=σ(8)=15, many of these problems become more or less trivial if one is allowed to use this function. Similarly, one cute puzzle is to represent π using four fours; the only solutions I am aware of all involve either inverse trigonometric functions or even more esoteric functions such as the gamma function or the Riemann zeta function. A similar game involves rolling 4 dice and having everyone try to find a way to represent the number 24 with the numbers on all four dice (variants exist involving drawing cards from a deck as well). One can, however, ask a more well-defined question: given a set of n ks and some fixed set of allowed operations, can we represent a given m? Or, for a given m, can we produce easy bounds on how many ks we need to represent m? How small can we choose n if we want to be able to represent all m less than some given x using up to n ks?
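The dice version of the game lends itself to a quick brute-force check. Here is a sketch in Python (the function name `can_make` and the choice of +, −, ×, ÷ as the allowed operations are my assumptions about the rules, not part of the original description); exact rational arithmetic avoids floating-point surprises with division:

```python
from fractions import Fraction

def can_make(target, nums):
    """Return True if the given numbers can be combined into `target`
    using the binary operations +, -, *, /, each number used exactly once."""
    def solve(vals):
        if len(vals) == 1:
            return vals[0] == target
        # Pick an ordered pair, combine it, and recurse on the smaller list.
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                a, b = vals[i], vals[j]
                candidates = [a + b, a - b, a * b]
                if b != 0:
                    candidates.append(a / b)
                if any(solve(rest + [c]) for c in candidates):
                    return True
        return False

    return solve([Fraction(x) for x in nums])

print(can_make(24, [4, 6, 1, 1]))  # True, e.g. 4 * 6 * 1 * 1
print(can_make(24, [1, 1, 1, 1]))  # False: four 1s can reach at most 4
```

The exhaustive pair-combination search also finds the famously non-obvious solutions, such as 8/(3 − 8/3) = 24 for the dice roll 3, 3, 8, 8.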

Assume for a moment that our allowed operations are addition, multiplication, subtraction and division. Define gk(n) to be the minimum number of ks we need to represent n. Thus, for example, g1(6)=5 since 6=(1+1)*(1+1+1), and it is not hard to show that we in fact require 5 1s for 6. Now, for any given k we may write 1 as k/k, and we may write k=1+1+…+1. Thus, we have gk(n) ≤ 2g1(n) and g1(n) ≤ k gk(n). It is thus natural to just think about g1(n). However, even this is a very difficult function to work with, so it is natural to try to understand g1(n) by thinking about the slightly easier function f(n), which is exactly the same as g1(n) except that we only allow multiplication and addition as operations.
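To get a feel for g1, one can tabulate it by brute force: for each budget of m ones, build the set of all rational values expressible with exactly m ones under +, −, ×, ÷. A small sketch (the function names and the search cutoff `max_ones` are my own choices; `Fraction` keeps division exact):

```python
from fractions import Fraction

def reachable(max_ones):
    """reach[m] = set of all values expressible using exactly m ones
    and the binary operations +, -, *, /."""
    reach = {1: {Fraction(1)}}
    for m in range(2, max_ones + 1):
        vals = set()
        for i in range(1, m):
            for a in reach[i]:
                for b in reach[m - i]:
                    vals.add(a + b)
                    vals.add(a - b)
                    vals.add(a * b)
                    if b != 0:
                        vals.add(a / b)
        reach[m] = vals
    return reach

def g1(n, max_ones=6):
    """Minimum number of 1s needed to represent n, searched up to max_ones."""
    reach = reachable(max_ones)
    for m in range(1, max_ones + 1):
        if Fraction(n) in reach[m]:
            return m
    return None  # not representable within the budget

print(g1(6))  # 5, via 6 = (1+1)*(1+1+1)
```

The sets grow quickly with m, so this only scales to small budgets, but it is enough to confirm claims like g1(6)=5 by exhaustion.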

For the remainder of this post we are going to work with f(n), the minimum number of 1s needed to represent n where we can only use multiplication and addition. What can we say about f(n)? It is not hard to show that for all n, f(n) ≤ 1 + 3 log2 n. To see this, observe that f(1)=1, f(2)=2, and for any given n, if we can represent n with k 1s then we can represent 2n and 2n+1 with at most k+3 1s, since 2n=(1+1)*n and 2n+1=(1+1)*n+1. Thus, f(2n) ≤ f(n)+3 and f(2n+1) ≤ f(n)+3, and so the desired result follows by induction. It is also not hard to prove that f(n) ≥ 3 log3 n. Despite the highly elementary nature of these bounds, substantially better bounds on f are difficult to come by. Richard Guy’s “Unsolved Problems in Number Theory”, as last updated in 2003, does not give substantially better bounds.
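Since the top of a minimal expression for n is either a sum or a product of two subexpressions, f satisfies f(n) = min over all splits n=a+b and all factorizations n=d*(n/d) of f(a)+f(b), which gives a direct dynamic program. A sketch (the function name is my own choice):

```python
import math

def f_table(N):
    """f[n] = minimum number of 1s needed to represent n using + and * only."""
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = 1 + f[n - 1]  # fallback: n = (n-1) + 1
        # Top-level addition: n = a + b with 2 <= a <= n/2.
        for a in range(2, n // 2 + 1):
            best = min(best, f[a] + f[n - a])
        # Top-level multiplication: n = d * (n // d).
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

f = f_table(100)
print(f[6], f[10], f[11])  # 5 7 8
```

Tabulating f this way also lets one check the elementary bounds numerically: in the computed range, 3 log3 n ≤ f(n) ≤ 1 + 3 log2 n holds for every n.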

There are many questions about representations of these forms that are still unsolved. For example, it is conjectured that if p is prime then f(p)=1+f(p-1). That is, if p is a prime, the most efficient representation comes from taking a representation of p-1 and then adding 1. For example, f(10)=7 from the representation 10=(1+1)*(1+1+1+1+1), and f(11)=8, which we get from 11=(1+1)*(1+1+1+1+1)+1. Better bounds on f and similar functions turn out to be related to open problems in complexity theory, including the P=NP problem. Entry F26 in Guy’s aforementioned book gives a summary of what is known about this and related problems, along with a bibliography. These simple questions about the behavior of f are good examples of why number theory is such an interesting area of mathematics. One can ask very simple, naïve questions which are unsolved. Moreover, those questions often turn out to be related to deep, difficult mathematics.
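The conjecture about primes is easy to spot-check numerically. A self-contained sketch (the cutoff 1000 and the helper names are arbitrary choices of mine; the dynamic program recomputes f by splitting each n into a sum or a product):

```python
import math

def f_table(N):
    """f[n] = minimum number of 1s needed to represent n using + and * only."""
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = 1 + f[n - 1]
        for a in range(2, n // 2 + 1):
            best = min(best, f[a] + f[n - a])
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

N = 1000
f = f_table(N)
# Collect primes p <= N where f(p) != 1 + f(p-1).
exceptions = [p for p in range(2, N + 1)
              if is_prime(p) and f[p] != 1 + f[p - 1]]
print(exceptions)  # [] in this range
```

At least in this small range, every prime behaves as the conjecture predicts; of course, a finite check proves nothing about the conjecture itself.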