in reply to An informal introduction to O(N) notation
I think it's useful to look at a more formal way of saying what Big O means after having read this informal orientation. In the end, it's not that hard to understand.
You can express the complexity of an algorithm as a complete function that includes constants (startup cost) and constant factors (for example, if you iterate over a list three times), based on the assumption that all primitive actions (+, -, /, *, assignment, ...) cost 1 "step" of time. Let's call this function f(n,m,...), where n,m,... are all the variable factors that influence the problem. How can we now find a function g(n,m,...) so that f(n,m,...) = O(g(n,m,...))? Simple:
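As a sketch of how such an f() might be obtained (the function name and the exact choice of which operations to count are my own illustration, not part of the original post), consider counting the primitive steps of a simple summation loop:

```python
def summed(values):
    """Sum a list while counting primitive 'steps' (here: adds and assignments)."""
    steps = 0
    total = 0            # 1 assignment
    steps += 1
    for v in values:     # n iterations
        total += v       # per iteration: 1 addition + 1 assignment = 2 steps
        steps += 2
    return total, steps

# For a list of length n this simplified count gives f(n) = 2n + 1.
total, steps = summed(list(range(10)))
assert steps == 2 * 10 + 1
```

The count deliberately ignores the loop's own bookkeeping; whichever convention you pick only changes the constants, which (as shown below) Big O discards anyway.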
f(n,m,...) = O(g(n,m,...)) means that there exists a constant "c", for which c * g(n,m,...) >= f(n,m,...) is true for large enough n,m,...
If you think about this relation, it's clear that you can forget all constants and constant factors in your program, because a large enough c will always suffice to make c*g() dominate. And because any f() with f() = O(n^2) also satisfies f() = O(n^3), you should always find the smallest g() for which f() = O(g()) is true, in order to have a representative g().
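To make the "large enough c" idea concrete (the function f(n) = 3n^2 + 5n + 7 is my own made-up example, not from the original post), one can check numerically that c = 4 and g(n) = n^2 satisfy the definition once n is large enough:

```python
def f(n):
    return 3 * n**2 + 5 * n + 7   # hypothetical exact step count

def g(n):
    return n**2                   # candidate growth function

c = 4
# 4n^2 >= 3n^2 + 5n + 7  <=>  n^2 - 5n - 7 >= 0, which holds for all n >= 7,
# so f(n) = O(n^2): the constants 5 and 7 vanish into the choice of c.
assert all(c * g(n) >= f(n) for n in range(7, 1000))
```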
f(n,m,...) = Ω(g(n,m,...)) means that there exists a constant "c", for which c * g(n,m,...) <= f(n,m,...) is true for large enough n,m,...
This is the lower bound. And finally, Θ marks a tight bound, combining both:
f(n,m,...) = Θ(g(n,m,...)) means that f(n,m,...) = O(g(n,m,...)) and f(n,m,...) = Ω(g(n,m,...))
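Continuing the same made-up f(n) = 3n^2 + 5n + 7 from above, a quick numeric check that both bounds hold with g(n) = n^2, giving f(n) = Θ(n^2):

```python
def f(n):
    return 3 * n**2 + 5 * n + 7   # hypothetical exact step count

def g(n):
    return n**2

# Lower bound (Omega): 3 * g(n) <= f(n) holds for every n >= 0.
# Upper bound (Big O): f(n) <= 4 * g(n) holds for every n >= 7.
# Both at once is exactly the Theta condition.
assert all(3 * g(n) <= f(n) <= 4 * g(n) for n in range(7, 1000))
```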
http://fruiture.de
Replies are listed 'Best First'.
Re: Re: An informal introduction to O(N) notation
by dakkar (Hermit) on Mar 25, 2003 at 14:26 UTC
by Anonymous Monk on Aug 29, 2007 at 14:14 UTC
Re: Re: An informal introduction to O(N) notation
by theorbtwo (Prior) on Jan 18, 2003 at 21:54 UTC