Friday, August 6, 2010

Integer Complexity: An Upper Bound Update

It looks like I may have a final upper bound on the integer complexity problem. Recall that we defined f(n) to be the minimum number of 1s needed to write n using only 1s, addition, and multiplication. Thus f(6) = 5, because we can write 6 = (1+1)(1+1+1). Harry Altman and I have been working on improving the lower and upper bounds on this function. Prior posts about this problem can be found under the label integer complexity. Harry has a large number of posts on the subject as well, but they aren't all tagged, so I'll just direct readers to the most recent one.
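For concreteness, the definition can be checked with a short dynamic program (a sketch of mine, not from the post itself): f(n) is the minimum of f(a) + f(b) over all ways to write n as a sum a + b or a product a·b of smaller numbers.

```python
# Compute f(n), the minimum number of 1s needed to build n
# from 1s using addition and multiplication, for n up to N.
N = 100
INF = float("inf")
f = [INF] * (N + 1)
f[1] = 1
for n in range(2, N + 1):
    # Best way to split n as a sum a + (n - a).
    best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
    # Best way to split n as a product d * (n // d).
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            best = min(best, f[d] + f[n // d])
    f[n] = best

print(f[6], f[7], f[13])  # 5 6 8
```

Running this also confirms that powers of 3 hit the lower bound: f(27) = 9 = 3·log_3 27.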

I'm writing this post to announce that we have a tight upper bound for f(n). Prior to our work, it was known that for n ≥ 2, 3 log_2 n ≥ f(n) ≥ 3 log_3 n. It is clear that the lower bound is hit infinitely often (whenever n is a power of 3). We (and by "we" I mean mostly Harry) have worked on understanding which n come close to the lowest possible value, and on using this to attack other conjectures about f, such as the conjecture that if n = 2^a 3^b, then the most efficient representation of n is the obvious one. I've also been working on improving the upper bound, and every time I thought I was done improving it, I realized another way to tweak my proof and shave the bound down slightly. At this point, however, I'm confident that this cannot happen again with the techniques in question, barring a substantially new insight. So I'm now announcing the new upper bound: for n ≥ 2, f(n) ≤ α log_2 n, where α = 70/log_2(102236160) = 2.630853... This constant is just small enough for us to also state the weaker but prettier result that f(n) ≤ (19/5) ln n, since α/ln 2 ≈ 3.7955 < 19/5. Unfortunately, the proof in question is very ugly, involving a large amount of case checking as well as nasty calculation of inequalities. I'm in the process of writing up the last bit. If readers are interested, I can email a draft of the proof once it is fully written.
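The constant and the conversion to the natural-log form can be sanity-checked numerically (this is just arithmetic, not any part of the proof):

```python
import math

# The announced constant: alpha = 70 / log_2(102236160).
alpha = 70 / math.log2(102236160)
print(alpha)  # approximately 2.6308...

# alpha * log_2(n) = (alpha / ln 2) * ln(n), and alpha / ln 2
# is just under 19/5, giving the prettier bound f(n) <= (19/5) ln n.
print(alpha / math.log(2))  # approximately 3.7955, which is < 3.8
```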

8 comments:

Sniffnoy said...

Yaaaaaaay!

Jr said...

This and P \neq NP! A great week for mathematics!

Joshua said...

Jr, I'm reading the P != NP proof now and unfortunately don't have the background to understand a fair bit of it. But the material I do understand doesn't look obviously wrong. However, more technically minded people have brought up a few potential issues with the proof; http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%E2%89%A0np/ has a good list. It does seem very clear that even if Deolalikar's proof is fundamentally flawed, it contributes interesting new techniques which can be used on other problems as well.

Now, I'll just go try to pretend for a few moments that your comparison between Deolalikar's work and our work was serious and bask in the flattery...

Jackie said...

Am I missing something? It would seem that if x is prime, then f(x) = x, so x would be the upper bound.

Joshua said...

Jackie, you can combine multiplication and addition. So for example, f(7)=6 because I can write 7=1+(1+1)(1+1+1). Similarly, f(13)=8 because I can write 13 = 1 + (1+1+1)(1+1+1+1).

Does that clarify things?

Jackie said...

Yes, I didn't realize you could combine the operations.

Jack said...

A related but more difficult and practical problem might be this: you are given a real number and a tolerance, and need to find the most compact way to represent the number within that tolerance using small integers (and maybe pi and e) and any algebraic operations. It seems that we often come across weird numbers empirically, where compressing them into an algebraic expression would be useful (as you did by moving to 19/5 in your post) or might hint at a deeper structure.

Joshua said...

Jack, that's a much deeper problem, but there's been a lot of work on questions like that. For example, there's the theory of continued fractions, which asks, for a given irrational number, what the best approximations by simple rational numbers are (where one rational number is simpler than another if it has a smaller denominator). There's also work on Kolmogorov complexity, which asks, for a given string, how simple a computer program that outputs that string can be.
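As a rough illustration of the continued fraction idea (the choice of pi and all the code here are my own sketch, not anything from the comment above):

```python
from fractions import Fraction
import math

# Extract the first few partial quotients of the continued
# fraction of pi: [3; 7, 15, 1, 292, ...].
x = math.pi
quots = []
t = x
for _ in range(5):
    ai = int(t)
    quots.append(ai)
    t = 1 / (t - ai)

def convergent(partial_quotients):
    """Fold a list of partial quotients into a single fraction."""
    frac = Fraction(partial_quotients[-1])
    for q in reversed(partial_quotients[:-1]):
        frac = q + 1 / frac
    return frac

# The convergents are the best rational approximations for their
# denominator size: 3, 22/7, 333/106, 355/113, 103993/33102.
for k in range(1, len(quots) + 1):
    c = convergent(quots[:k])
    print(c, float(c))
```

Note how 355/113 already matches pi to six decimal places with a three-digit denominator, which is exactly the kind of "compression" of a messy number into a simple expression being asked about.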

Unfortunately, general answers to these sorts of questions turn out to be very difficult.