The Bad, Bad, Bad, Bad Question

I have taught a second-semester course on discrete mathematics for several years. Early on, I would ask the following standard question on assignments when we got to asymptotic analysis.

Show that

$$\sum_{i=1}^{n} i \in \Theta(n^2)$$

One of the prerequisites for this course is Calc I, and so we cover big-O, big-theta, and big-omega with proof techniques based on the definitions (showing inequalities) as well as the limit method. The problem is that students would immediately try to apply the limit method (since it is simpler and easier to use than the definition) to this question and produce a proof something like the following.
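For readers who have not seen it, the limit method is the standard one (stated here a bit informally): assuming $f(n), g(n) > 0$ and that the limit exists,

$$\lim_{n\to\infty}\frac{f(n)}{g(n)} = \begin{cases} 0 & \Rightarrow\ f(n) \in o(g(n)) \subseteq O(g(n)), \\ c,\ 0 < c < \infty & \Rightarrow\ f(n) \in \Theta(g(n)), \\ \infty & \Rightarrow\ f(n) \in \omega(g(n)) \subseteq \Omega(g(n)). \end{cases}$$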

Let $f(n) = \sum_{i=1}^{n} i$ and $g(n) = n^2$. Then we have that

$$\lim_{n\to\infty}\frac{f(n)}{g(n)} = \lim_{n\to\infty}\sum_{i=1}^{n}\frac{i}{n^2} = \sum_{i=1}^{n}\lim_{n\to\infty}\frac{i}{n^2} = \sum_{i=1}^{n} 0 = 0$$

Therefore, $\sum_{i=1}^{n} i \in o(n^2)$.

Which of course is completely wrong. First, the result is incorrect: the limit method here actually shows that $\sum_{i=1}^{n} i \in o(n^2)$ (that is, little-o), which implies big-O but precludes big-theta. However, indeed $\sum_{i=1}^{n} i \in \Theta(n^2)$. The error in the supposed proof is an invalid calculus operation. Though the limit of a sum is equal to the sum of the limits of its terms, that rule applies only to a fixed number of terms; in this case the number of terms is not constant. The number of terms grows with $n$, so we cannot simply push the limit inside the summation.
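One standard counterexample makes the failure of that step concrete: take the sum $\sum_{i=1}^{n} \frac{1}{n}$. For every $n$ the sum equals exactly $1$, so its limit is $1$; but every individual term tends to $0$, so "splitting the limit" the same way would give $0$:

$$\lim_{n\to\infty}\sum_{i=1}^{n}\frac{1}{n} = \lim_{n\to\infty} 1 = 1, \qquad\text{whereas}\qquad \sum_{i=1}^{n}\lim_{n\to\infty}\frac{1}{n} = \sum_{i=1}^{n} 0 = 0.$$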

After a few semesters of continually getting these wrong answers, I changed the question. I gave it in two parts: in the first part, I gave them the typical incorrect proof, told them it was incorrect, and asked them to identify why the proof was wrong. The second part asked them to prove the original inclusion using a definition-based proof. Most students were able to do the second part, but far too many still didn't understand why the limit proof was wrong.
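For context, a definition-based argument for the original inclusion can be quite short; one way to set it up (keeping only the top half of the terms for the lower bound) is

$$\sum_{i=1}^{n} i \le \sum_{i=1}^{n} n = n^2 \qquad\text{and}\qquad \sum_{i=1}^{n} i \ge \sum_{i=\lceil n/2\rceil}^{n} i \ge \frac{n}{2}\cdot\frac{n}{2} = \frac{n^2}{4},$$

so $\sum_{i=1}^{n} i \in \Theta(n^2)$ with constants $\frac{1}{4}$ and $1$ for all $n \ge 1$.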

Yet again, I changed the question. The first part changed so that I asked them to work through a limit proof for $f(n) = \sum_{i=1}^{n} i$ and $g(n) = n^2$, in which case the summation simplifies to

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2},$$

which is the well-known Gauss's formula and which is something that we go over in class several times when analyzing standard algorithms.

The intention was that when they go through the proof with the closed form using Gauss's formula, they'll get that $\sum_{i=1}^{n} i \in \Theta(n^2)$, which is incongruous with the original proof, which shows that it is little-o.
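Concretely, with the closed form the limit comes out to a positive constant,

$$\lim_{n\to\infty}\frac{n(n+1)/2}{n^2} = \lim_{n\to\infty}\left(\frac{1}{2} + \frac{1}{2n}\right) = \frac{1}{2},$$

so a correct application of the limit method places the sum in $\Theta(n^2)$, not $o(n^2)$.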

Despite even this tweak, most students still do not get this question correct, or they struggle greatly with it. Some ignore the fact that we go over Gauss's formula multiple times; some lack the calculus rigor to see the problem; some are still struggling to grasp the concept of asymptotic analysis.

I had always thought that presenting this question provided value in that it gave students an opportunity to avoid a common mistake and, at the same time, to gain a deeper understanding by examining WHY the proof is wrong. I saw it as a sort of companion to the "bad" proof typically presented to students that $2 = 1$ (by hiding an invalid division-by-zero operation). However, I've slowly come to think that the question is far more trouble than it is worth. The only thing it succeeds in doing is getting every student to show up for office hours, where I end up explaining the entire thing dozens of times. I could achieve the same thing by presenting it during lecture. Or I could just get rid of the question altogether.
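For anyone who hasn't seen that one, the usual version runs roughly as follows; cancelling $a - b$ from both sides in the middle is the hidden division by zero, since $a = b$ means $a - b = 0$:

$$\begin{aligned} a &= b \\ a^2 &= ab \\ a^2 - b^2 &= ab - b^2 \\ (a+b)(a-b) &= b(a-b) \\ a + b &= b \\ 2b &= b \\ 2 &= 1. \end{aligned}$$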