The fact is, it’s difficult to determine the exact runtime of an algorithm: it depends on the speed of the computer’s processor, among other things. So instead of talking about the runtime directly, we use Big O Notation to talk about how quickly the runtime grows as the input size grows.
With Big O Notation, we use the size of the input, which we call n. So we can say things like the runtime grows “on the order of the size of the input” (O(n)) or “on the order of the square of the size of the input” (O(n²)). Our algorithm may have steps that seem expensive when n is small but are eventually eclipsed by other steps as n gets larger. For Big O analysis, we care most about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large.
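To make this concrete, here is a small Python sketch (the function and the step counts are hypothetical, purely for illustration) of an algorithm with both a linear step and a quadratic step. Counting the operations shows how the O(n²) step eclipses the O(n) step as n grows:

```python
def count_steps(n):
    """Count the work done by a hypothetical algorithm that has
    both a linear pass and a quadratic pairwise-comparison step."""
    linear_steps = 0
    quadratic_steps = 0

    # A single pass over the input: O(n)
    for i in range(n):
        linear_steps += 1

    # Comparing every element with every other element: O(n^2)
    for i in range(n):
        for j in range(n):
            quadratic_steps += 1

    return linear_steps, quadratic_steps


for n in (10, 100, 1000):
    lin, quad = count_steps(n)
    print(f"n={n}: linear={lin}, quadratic={quad}")
```

At n = 10 the quadratic step does 100 operations to the linear step’s 10; at n = 1000 it does 1,000,000 to the linear step’s 1,000. This is why Big O analysis keeps only the fastest-growing term: the linear work quickly becomes a rounding error.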