DAA Notes


LECTURE PLAN
Subject code & Subject Name: 141401 Design and Analysis of Algorithms

ALGORITHM

Informal Definition: An algorithm is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output. Thus an algorithm is a sequence of computational steps that transforms the i/p into the o/p.

Formal Definition: An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In addition, all algorithms should satisfy the following criteria.

1. INPUT: Zero or more quantities are externally supplied.
2. OUTPUT: At least one quantity is produced.
3. DEFINITENESS: Each instruction is clear and unambiguous.
4. FINITENESS: If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.
5. EFFECTIVENESS: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.

Issues in the study of algorithms:

How to devise or design an algorithm → creating an algorithm.
How to express an algorithm → definiteness.
How to analyse an algorithm → time and space complexity.
How to validate an algorithm → fitness.
How to test an algorithm → checking for errors.

Recursive Algorithms:



A recursive function is a function that is defined in terms of itself. Similarly, an algorithm is said to be recursive if the same algorithm is invoked in its body. An algorithm that calls itself is directly recursive. An algorithm A is said to be indirectly recursive if it calls another algorithm which in turn calls A. Recursive mechanisms are extremely powerful, but even more importantly, many times they can express an otherwise complex process very clearly. For these reasons we introduce recursion here. The following two examples show how to develop recursive algorithms.

In the first, we consider the Towers of Hanoi problem, and in the second, we generate all possible permutations of a list of characters.



Time-Space Tradeoff

1. Towers of Hanoi:

(Figure: three towers, labeled Tower A, Tower B and Tower C, with the disks stacked on Tower A.)

It is fashioned after the ancient Tower of Brahma ritual. According to legend, at the time the world was created, there was a diamond tower (labeled A) with 64 golden disks. The disks were of decreasing size and were stacked on the tower in decreasing order of size from bottom to top. Besides this tower there were two other diamond towers (labeled B and C). Since the time of creation, Brahman priests have been attempting to move the disks from tower A to tower B, using tower C for intermediate storage. As the disks are very heavy, they can be moved only one at a time. In addition, at no time can a disk rest on top of a smaller disk. According to legend, the world will come to an end when the priests have completed this task.

A very elegant solution results from the use of recursion. Assume that the number of disks is n. To get the largest disk to the bottom of tower B, we move the remaining n-1 disks to tower C and then move the largest to tower B. Now we are left with the task of moving those n-1 disks from tower C to tower B. To do this, we have towers A and B available. The fact that tower B has a disk on it can be ignored, as that disk is larger than the disks being moved from tower C, and so any disk can be placed on top of it.
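This recursive strategy translates almost directly into code. A minimal C sketch (function and parameter names are illustrative, not from the notes) that prints the sequence of moves for n disks from tower A to tower B using tower C:

#include <stdio.h>

/* Move n disks from tower 'from' to tower 'to', using 'via' for intermediate storage. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0)                                             /* no disks left: nothing to do          */
        return;
    hanoi(n - 1, from, via, to);                            /* move the top n-1 disks out of the way */
    printf("Move disk %d from %c to %c\n", n, from, to);    /* move the largest disk                 */
    hanoi(n - 1, via, to, from);                            /* move the n-1 disks back on top of it  */
}

int main(void)
{
    hanoi(3, 'A', 'B', 'C');                                /* 2^3 - 1 = 7 moves                     */
    return 0;
}

The number of moves M(n) satisfies M(n) = 2M(n-1) + 1 with M(1) = 1, i.e. M(n) = 2^n - 1; for the 64 disks of the legend this is 2^64 - 1 moves.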




Asymptotic Notations

Analysis

Best case: This analysis constrains the input, other than its size, resulting in the fastest possible running time.

Worst case: This analysis constrains the input, other than its size, resulting in the slowest possible running time.

Average case: This type of analysis results in the average running time over every possible type of input.

Complexity: Complexity refers to the rate at which the required storage or time grows as a function of the problem size.

Asymptotic analysis: Expressing the complexity in terms of its relationship to some known function. This type of analysis is called asymptotic analysis.

Asymptotic notation:

Big 'oh': The function f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c*g(n) for all n, n ≥ n0.

Omega: The function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) ≥ c*g(n) for all n, n ≥ n0.

Theta: The function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n, n ≥ n0.
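As a quick illustration of these definitions (a worked example, not from the notes): consider f(n) = 3n + 2. Since 3n + 2 ≤ 4n for all n ≥ 2, f(n) = O(n) with c = 4 and n0 = 2. Since 3n + 2 ≥ 3n for all n ≥ 1, f(n) = Ω(n). Taking c1 = 3, c2 = 4 and n0 = 2 then shows f(n) = Θ(n).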


Properties of big-Oh notation

1. Fastest growing function dominates a sum:
   O(f(n) + g(n)) is O(max{f(n), g(n)}).
2. Product of upper bounds is an upper bound for the product:
   If f is O(g) and h is O(r), then fh is O(gr).
3. f is O(g) is transitive:
   If f is O(g) and g is O(h), then f is O(h).
4. Hierarchy of functions:
   O(1), O(log n), O(n^(1/2)), O(n log n), O(n^2), O(2^n), O(n!).

Some Big-Oh's are not reasonable.

Polynomial Time algorithms: An algorithm is said to be polynomial if it is O(n^c), c > 1. Polynomial algorithms are said to be reasonable: they solve problems in reasonable times. Coefficients, constants and low-order terms are ignored; e.g. if f(n) = 2n^2 then f(n) = O(n^2).

Exponential Time algorithms: An algorithm is said to be exponential if it is O(r^n), r > 1. Exponential algorithms are said to be unreasonable.

Classifying Algorithms based on Big-Oh:
A function f(n) is said to be of at most logarithmic growth if f(n) = O(log n).
A function f(n) is said to be of at most quadratic growth if f(n) = O(n^2).
A function f(n) is said to be of at most polynomial growth if f(n) = O(n^k), for some natural number k > 1.
A function f(n) is said to be of at most exponential growth if there is a constant c such that f(n) = O(c^n), and c > 1.
A function f(n) is said to be of at most factorial growth if f(n) = O(n!).
A function f(n) is said to have constant running time if the size of the input n has no effect on the running time of the algorithm (e.g., assignment of a value to a variable); the equation for this algorithm is f(n) = c.
Other logarithmic classifications: f(n) = O(n log n) and f(n) = O(log log n).


Conditional asymptotic notation

When we start the analysis of algorithms it becomes easier if we restrict our initial attention to instances whose size satisfies a certain condition (such as being a power of 2). Consider n as the size of the integers to be multiplied. The algorithm works directly if n = 1, which takes a microseconds for a suitable constant a. If n > 1, the algorithm proceeds by multiplying four pairs of integers of size n/2 (or three if we use the better algorithm), and it takes some linear amount of time to carry out the additional tasks. The worst-case time taken by this algorithm is given by the function T: N → R≥0 (where R≥0 denotes the set of non-negative reals) recursively defined by

T(1) = a
T(n) = 4T([n/2]) + bn    for n > 1
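As an illustration (assuming n is a power of 2, so that the condition can later be removed with the smoothness rule below), repeated expansion of this recurrence gives T(n) = 4^k T(n/2^k) + bn(2^k - 1); taking k = log2 n, so that n/2^k = 1, yields T(n) = a*n^2 + bn(n - 1), hence T(n) ∈ Θ(n^2). The same expansion applied to the variant with only three half-size multiplications, T(n) = 3T(n/2) + bn, yields T(n) ∈ Θ(n^(log2 3)), roughly Θ(n^1.585), which is why it is called the better algorithm above.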

Conditional asymptotic notation is a simple notational convenience. Its main interest is that we can eliminate the condition after using it to analyse the algorithm. A function F: N → R≥0 is eventually non-decreasing if there is an integer threshold n0 such that F(n) ≤ F(n+1) for all n ≥ n0. This implies by mathematical induction that F(n) ≤ F(m) whenever m ≥ n ≥ n0.

Consider b ≥ 2 as an integer. A function F is b-smooth if it is eventually non-decreasing and satisfies the condition F(bn) ∈ O(F(n)). In other words, there must exist a constant C (depending on b) such that F(bn) ≤ C F(n) for all n ≥ n0. A function is said to be smooth if it is b-smooth for every integer b ≥ 2.

Most functions encountered in the analysis of algorithms are smooth, such as log n, n log n, n^2, or any polynomial whose leading coefficient is positive. However, functions that grow too fast, such as n^(log n), 2^n or n!, are not smooth, because the ratio F(2n)/F(n) is unbounded. For example, (2n)^(log(2n)) = 2n^2 * n^(log n), which shows that (2n)^(log(2n)) is not in O(n^(log n)), because 2n^2 is never bounded above by a constant.

Functions that are bounded above by a polynomial are usually smooth, provided they are eventually non-decreasing; and even if they are not eventually non-decreasing, they are often in the exact order of some other function that is smooth. For instance, let b(n) denote the number of bits equal to 1 in the binary expansion of n; for instance b(13) = 3 because 13 is written as 1101 in binary. Consider F(n) = b(n) + log n. It is easy to see that F(n) is not eventually non-decreasing, and therefore not smooth, because b(2^k - 1) = k whereas b(2^k) = 1 for all k. However, F(n) ∈ Θ(log n), which is a smooth function.

Removing condition from the conditional asymptotic notation

A useful property of smoothness is that if F is b-smooth for some specific integer b ≥ 2, then it is in fact smooth. To prove this, consider any two integers a and b (not smaller than 2). Assume that F is b-smooth; we must show that F is a-smooth as well.


Consider C and n0 as constants such that F(bn) ≤ C F(n) and F(n) ≤ F(n+1) for all n ≥ n0. Let i = ⌈log_b a⌉. By definition of the logarithm,

a ≤ b^i.

Consider n ≥ n0. It is easy to show by mathematical induction from the b-smoothness of F that F(b^i n) ≤ C^i F(n). But F(an) ≤ F(b^i n) because F is eventually non-decreasing and b^i n ≥ an ≥ n0. It follows that F(an) ≤ C' F(n) for C' = C^i, and therefore F is a-smooth.

Smoothness rule

Smooth functions are interesting because of the smoothness rule. Consider F: N → R≥0 (where R≥0 denotes the set of non-negative reals) a smooth function and T: N → R≥0 an eventually non-decreasing function, and consider an integer b ≥ 2. The smoothness rule states that T(n) ∈ Θ(F(n)) whenever T(n) ∈ Θ(F(n) | n is a power of b). The rule applies equally to the O and Ω notations.

The smoothness rule lets us assert directly that T(n) ∈ Θ(n^2), provided n^2 is a smooth function and T(n) is eventually non-decreasing. The first condition is immediate, since n^2 is obviously non-decreasing and (2n)^2 = 4n^2. The second can be demonstrated from the recurrence relation by mathematical induction. Therefore conditional asymptotic notation is a stepping stone that yields the final result unconditionally: T(n) = Θ(n^2).

Some properties of asymptotic order of growth

Asymptotic growth rate

· O(h(n)): class of functions F(n) that grow no faster than h(n).
· Ω(h(n)): class of functions F(n) that grow at least as fast as h(n).
· Θ(h(n)): class of functions F(n) that grow at the same rate as h(n).


Recurrence equations

Recursion:

Recursion may be defined in the following ways:
- The nested repetition of an identical algorithm is recursion.
- It is a technique of defining an object/process by itself.
- Recursion is a process by which a function calls itself repeatedly until some specified condition has been satisfied.

When to use recursion:

Recursion can be used for repetitive computations in which each action is stated in terms of a previous result. There are two conditions that must be satisfied by any recursive procedure:

1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.

In making the decision about whether to write an algorithm in recursive or non-recursive form, it is always advisable to consider a tree structure for the problem. If the structure is simple, then use the non-recursive form. If the tree appears quite bushy, with little duplication of tasks, then recursion is suitable.

The recursive algorithm for finding the factorial of a number is given below.

Algorithm: factorial-recursion

Input: n, the number whose factorial is to be found.
Output: f, the factorial of n.
Method:
    if (n = 0)
        f = 1
    else
        f = factorial(n - 1) * n
    if end
algorithm ends.
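A direct C rendering of this pseudocode (a sketch; argument checking and overflow handling are omitted) is:

#include <stdio.h>

/* Recursive factorial: each call gets nearer to the stopping case n == 0. */
unsigned long factorial(unsigned int n)
{
    if (n == 0)
        return 1;                                  /* decision criterion for stopping        */
    return (unsigned long)n * factorial(n - 1);    /* stated in terms of the previous result */
}

int main(void)
{
    printf("%lu\n", factorial(5));                 /* prints 120 */
    return 0;
}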


Solving recurrence equations

A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs. Special techniques are required to analyze the time and space required.

RECURRENCE RELATIONS

EXAMPLE 1: QUICK SORT

T(n) = 2T(n/2) + O(n)
T(1) = O(1)

In the above case, the presence of T on both sides of the equation signifies a recurrence relation. Using the SUBSTITUTION METHOD, the equation is expanded repeatedly to produce the final result:

T(n) = 2T(n/2) + n
     = 2(2T(n/2^2) + n/2) + n = 2^2 T(n/2^2) + n + n
     = 2^2 (2T(n/2^3) + n/2^2) + n + n = 2^3 T(n/2^3) + n + n + n
     . . .
     = 2^k T(n/2^k) + kn

When n/2^k = 1, i.e. k = log n, this gives T(n) = n T(1) + n log n, so T(n) = O(n log n).

EXAMPLE 2: BINARY SEARCH

T(n) = O(1) + T(n/2)
T(1) = 1

Above is another example of a recurrence relation, and again it is solved by substitution:

T(n) = T(n/2) + 1
     = T(n/2^2) + 1 + 1
     = T(n/2^3) + 1 + 1 + 1
     . . .
     = T(1) + log n

so T(n) = O(log n).
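The recurrence corresponds to discarding half of the search range on each call. A recursive C sketch of binary search over a sorted array (names are illustrative, not from the notes) is:

/* Search for key in the sorted array a[low..high]; return its index, or -1 if it is absent. */
int binary_search(const int a[], int low, int high, int key)
{
    if (low > high)
        return -1;                                    /* empty range: O(1) base case       */
    int mid = low + (high - low) / 2;
    if (a[mid] == key)
        return mid;
    if (key < a[mid])
        return binary_search(a, low, mid - 1, key);   /* recurse on the left half: T(n/2)  */
    return binary_search(a, mid + 1, high, key);      /* recurse on the right half: T(n/2) */
}

Each call does a constant amount of work before recursing on half of the range, which is exactly the T(n) = O(1) + T(n/2) behaviour analysed above.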

Analysis of linear search

Let us assume that we have a sequential file and we wish to retrieve an element matching a key 'k'; then we have to search the entire file from the beginning till the end to check whether an element matching k is present in the file or not. There are a number of complex searching algorithms to serve the purpose of searching. The linear search and binary search methods are relatively straightforward methods of searching.


Sequential search: In this method, we start to search from the beginning of the list and examine each element till the end of the list. If the desired element is found we stop the search and return the index of that element. If the item is not found and the list is exhausted, the search returns a zero value.

In the worst case the item is not found, or the search item is the last (nth) element. In both situations we must examine all n elements of the array, so the order of magnitude or complexity of the sequential search is n, i.e., O(n). The execution time for this algorithm is proportional to n; that is, the algorithm executes in linear time.

The algorithm for sequential search is as follows.

Algorithm: sequential search
Input: A, vector of n elements; K, search element
Output: j, index of K
Method:
    i = 1
    while (i <= n) do
    {
        if (A[i] = K) then
        {
            j = i
            return
        }
        i = i + 1
    }
    j = 0
algorithm ends.
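A minimal C sketch of the same idea (0-based indexing, so it returns -1 rather than 0 when the key is absent; names here are illustrative) is:

/* Return the index of the first element of a[0..n-1] equal to key, or -1 if no element matches. */
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)     /* examines up to n elements: O(n) in the worst case */
        if (a[i] == key)
            return i;
    return -1;
}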


Looping statements take the following forms.

While loop:
while <condition> do
{
    . . .
}

For loop:
for variable := value1 to value2 step step do
{
    . . .
}

repeat-until:
repeat
    . . .
until <condition>

8. A conditional statement has the following forms:

    if <condition> then <statement>
    if <condition> then <statement 1> else <statement 2>

Case statement:

    case
    {
        :<condition 1>: <statement 1>
        . . .
        :<condition n>: <statement n>
        :else: <statement n+1>
    }

9. Input and output are done using the instructions read and write.

10. There is only one type of procedure: Algorithm. The heading takes the form

    Algorithm Name(<parameter list>)


As an example, the following algorithm finds and returns the maximum of n given numbers:

algorithm Max(A, n)
// A is an array of size n
{
    Result := A[1];
    for I := 2 to n do
        if A[I] > Result then Result := A[I];
    return Result;
}

In this algorithm (named Max), A and n are procedure parameters; Result and I are local variables.
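For comparison, a direct C translation (a sketch using 0-based indexing; it assumes n >= 1) might look like:

/* Return the largest of the n values in A[0..n-1]; assumes n >= 1. */
int max_of(const int A[], int n)
{
    int result = A[0];               /* corresponds to Result := A[1] in the pseudocode */
    for (int i = 1; i < n; i++)
        if (A[i] > result)
            result = A[i];
    return result;
}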

Next we present two examples to illustrate the process of translating a problem into an algorithm.

TUTORIAL 2

Selection Sort:

Suppose we must devise an algorithm that sorts a collection of n >= 1 elements of arbitrary type.

A simple solution is given by the following:

(From those elements that are currently unsorted, find the smallest and place it next in the sorted list.)

Algorithm:
for i := 1 to n do
{
    Examine a[i] to a[n] and suppose the smallest element is at a[j];
    Interchange a[i] and a[j];
}

Finding the smallest element (say at a[j]) and interchanging it with a[i]:

We can solve the latter problem using the code,

t := a[i];


a[i] := a[j];
a[j] := t;

The first subtask can be solved by assuming the minimum is a[i], checking a[i] against a[i+1], a[i+2], ..., and whenever a smaller element is found, regarding it as the new minimum; finally, a[n] is compared with the current minimum.



Putting all these observations together, we get the algorithm Selection sort.
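Combining the two subtasks, a C sketch of selection sort (0-based indexing; names are illustrative, not taken from the notes) is:

/* Sort a[0..n-1] into non-decreasing order: repeatedly select the smallest
   remaining element and swap it into the next position of the sorted prefix. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int j = i;                        /* index of the smallest element seen so far */
        for (int k = i + 1; k < n; k++)   /* examine a[i+1..n-1]                       */
            if (a[k] < a[j])
                j = k;
        int t = a[i];                     /* interchange a[i] and a[j]                 */
        a[i] = a[j];
        a[j] = t;
    }
}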

Theorem:

Algorithm selection sort(a, n) correctly sorts a set of n >= 1 elements; the result remains in a[1:n] such that a[1] <= a[2] <= ... <= a[n].