Prashant | Mon, 24 Aug, 2020 | 176

To compare algorithms, let us define a few objective measures:

**Execution times?** Not a good measure, because execution times are specific to a particular computer.

**Number of statements executed?** Not a good measure, since the number of statements varies with the programming language as well as with the style of the individual programmer.

**Ideal solution?** Let us assume that we express the running time of a given algorithm as a function of the input size n (i.e., f(n)) and compare these different functions corresponding to running times.

This kind of comparison is independent of machine time, programming style, etc.
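As a minimal sketch of this idea, we can count an algorithm's basic operations instead of timing it, giving a machine-independent running time f(n). The function and its step counter below are illustrative, not from the original text:

```python
# Count basic operations rather than wall-clock time, so the result
# does not depend on the machine or the programming style.
def sum_of_list(values):
    steps = 0          # operation counter (instrumentation, not part of the algorithm)
    total = 0
    for v in values:   # the loop body runs once per element
        total += v
        steps += 1     # one addition per element
    return total, steps

# For an input of size n, the loop performs n additions, so f(n) = n.
total, steps = sum_of_list(list(range(10)))
print(total, steps)  # 45 10
```

Comparing such functions (for example f(n) = n versus f(n) = n^2) tells us which algorithm scales better, regardless of the hardware it runs on.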

To analyze the given algorithm, we need to know which inputs make the algorithm take less time (perform well) and which inputs make it take a long time. We have already seen that an algorithm can be represented in the form of an expression. That means we represent the algorithm with multiple expressions: one for the case where it takes less time and another for the case where it takes more time.

There are basically three types of analysis:

**Worst Case:** The slowest time to complete.

**Best Case:** The fastest time to complete.

**Average Case:** Run the algorithm many times, using many different inputs drawn from some distribution that generates these inputs; compute the total running time (by adding the individual times) and divide by the number of trials.
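The three cases above can be illustrated with a linear search, whose running time depends on where (or whether) the target appears. The function and sample data are assumptions for illustration:

```python
# Linear search: scan the list until the target is found.
def linear_search(values, target):
    for i, v in enumerate(values):
        if v == target:
            return i      # found after i + 1 comparisons
    return -1             # not found: all n comparisons were made

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))   # best case: target is first, 1 comparison -> 0
print(linear_search(data, 5))   # worst case: target is last, n comparisons -> 4
print(linear_search(data, 8))   # target absent: also n comparisons -> -1
```

Here the best case is f(n) = 1 and the worst case is f(n) = n; the average case, assuming the target is equally likely to be at any position, is about n/2 comparisons.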