Fri Sep 22 23:01:15 CDT 2006
Here are some notes on logarithmic running times.
Here is a function where each pass through the loop divides the problem size
by a factor of two.
void countIterations(int n) {
    int numIterations = 0;          // c1
    for (int i = n;                 // c2
         i >= 1;                    // c3
         i /= 2) {                  // c4
        numIterations++;            // c5
    }
    cout << numIterations << endl;  // c6
}
Then the overall runtime (as you can guess) will be logarithmic; but what is
T(n) exactly? Well, we know that the number of iterations of the loop will be
something like log_2(n), where the 2 comes from the fact that we divide by 2
each time.
Clearly, the statements associated with c3, c4, and c5 are going to depend
logarithmically on n. The most precise way to say this is that the
"numIterations++" happens floor(log_2(n)) + 1 times: i takes the values n,
n/2, n/4, ... (integer division), and the last iteration runs when i reaches
1. (The simpler guess ceil(log_2(n)) gives the same count whenever n is not an
exact power of two, but it is off by one when it is -- try n = 1.)
Therefore, we get a T(n) that is something like this:
T(n) = c1 * 1 +
       c2 * 1 +
       c3 * (floor(log_2(n)) + 2) +
       c4 * (floor(log_2(n)) + 1) +
       c5 * (floor(log_2(n)) + 1) +
       c6 * 1
     = O(log(n))
Let's do an experiment to validate this analysis. When I run this code with
varying values of n, I get the following results:
n | log_2(n) | actual countIterations output
--------+----------+------------------------------
1 | 0 | 1
10 | 3.32 | 4
100 | 6.64 | 7
1000 | 9.96 | 10
10000 | 13.28 | 14
100000 | 16.60 | 17
1000000 | 19.93 | 20
We can see that the output really does grow as log_2(n), so the function is
O(log(n)); this validates our analysis.