Examine what Big-O notation is and explain its role in evaluating the efficiency of algorithms.
Definition:
"f(x)=O(g(x))" as "x\\to \\infin"
if the absolute value of f(x) is at most a positive constant multiple of g(x) for all sufficiently large values of x. That is, f(x)=O(g(x)) if there exists a positive real number M and a real number x0 such that
"|f(x)|\\le Mg(x)" for all "x\\ge x_0"
Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity.
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
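As a sketch of how this classification shows up in practice, the snippet below counts the comparisons made by a linear scan (which grows as $O(n)$) and by binary search on sorted input (which grows as $O(\log n)$); the function names and the comparison-counting approach are assumptions added here for illustration, not taken from the original text.

```python
# Illustrative sketch: count comparisons to observe O(n) vs O(log n) growth.
# Function names and the counting scheme are assumptions for demonstration.

def linear_search_count(sorted_list, target):
    """Number of comparisons a linear scan makes; grows linearly with input size."""
    comparisons = 0
    for value in sorted_list:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search_count(sorted_list, target):
    """Number of comparisons binary search makes; grows logarithmically with input size."""
    comparisons = 0
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

if __name__ == "__main__":
    # Worst case: the target is absent, so the linear scan inspects every element.
    for n in (1_000, 10_000, 100_000):
        data = list(range(n))
        print(n, linear_search_count(data, n), binary_search_count(data, n))
```

Running this shows the linear count growing in direct proportion to the input size while the binary-search count increases only by a few comparisons each time the input grows tenfold, which is exactly the growth-rate distinction that Big O notation captures.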
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
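To illustrate the "upper bound only" point with an example added here: the statement $f(n) = O(n^2)$ does not claim that $f$ actually grows as fast as $n^2$. The function $f(n) = n$ satisfies $n = O(n)$, and it also satisfies $n = O(n^2)$, because $n \le 1 \cdot n^2$ for all $n \ge 1$; both are valid upper bounds, but only the first is tight.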