Analysis and Design of Algorithms (ADA) Two-Mark Questions and Answers


UNIT 1

1. Define algorithm.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

2. What are the characteristics of an algorithm?
• Simplicity
• Generality
• Definiteness
• Finiteness
• Effectiveness

3. What are the three parts of an algorithm?
• Precondition
• Body of the algorithm
• Postcondition

4. What criteria are used to identify the best algorithm?
• Order of growth
• Time complexity
• Space complexity

5. Define time and space complexity.
Time complexity indicates how fast an algorithm runs: T(P) = compile time + run time Tp, where Tp is estimated from the number of basic operations (additions, subtractions, multiplications, ...) the program performs. Space complexity indicates how much extra memory the algorithm needs; it has three components: instruction space, data space and environment space.

6. Difference between compile time and running time.
• "Compile time" can refer to the amount of time required for compilation; "running time" refers to the time required to execute the program.
• The operations performed at compile time usually include syntax analysis, various kinds of semantic analysis and code generation. Type checking, storage allocation, and even code generation and code optimization may be done either at compile time or at run time, depending on the language and compiler.
• Compile time occurs before link time (when the outputs of one or more compiled files are joined together) and before running time; running time occurs after compile time.

DAA TWO MARK QUESTION AND ANSWER


7. Find the time complexity for the summation of N numbers.
The time complexity for the summation of N numbers is O(N).

8. What are the asymptotic notations used to express the time complexity of an algorithm?
• Big Oh (O)
• Big Omega (Ω)
• Big Theta (Θ)
• Little Oh (o)
• Little Omega (ω)

9. Difference between best-case and worst-case complexities.
The best-case complexity of an algorithm is its efficiency for the best-case input of size N: an input of size N for which the algorithm runs fastest among all possible inputs of that size. The worst-case complexity of an algorithm is its efficiency for the worst-case input of size N: an input of size N for which the algorithm runs longest among all possible inputs of that size.

10. What is the space complexity of the following algorithm?

void n(void)
{
    int p, a, b, c;
    p = a + b - c;
    printf("%d", p);
}

Space complexity = space needed for p, a, b, c = 8, a constant (the algorithm uses O(1) extra space).

11. Define asymptotic notation.
The efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency. To compare and rank such orders of growth, three notations are used (Big Oh, Big Omega and Big Theta); these are called asymptotic notations.

12. What are the 3 standard notations? a) Big Oh (O) b) Big Omega (Ω) c) Big Theta (Θ)


13. Define Big-Oh.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

14. Define conditional asymptotic notation.
Let f be an eventually nonnegative function and let X be a set of natural numbers (the condition, e.g. "n is a power of 2"). O(f | X), read "O of f when X", is the nonempty set of functions t : N → R for which there exist a constant c > 0 and a threshold n0 such that, for every n ∈ X with n ≥ n0, 0 ≤ t(n) ≤ c·f(n). The bound is thus required to hold only for the values of n that satisfy the condition X.

15. Define the smoothness rule.
Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called smooth if it is eventually nondecreasing and f(2n) ∈ Θ(f(n)). Let t(n) be an eventually nondecreasing function and f(n) be a smooth function. If t(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2, then t(n) ∈ Θ(f(n)) for any n.

16. What are the common running times in Big-Oh?
The common running times in Big-Oh are:
• O(log n)
• O(n)
• O(n log n)
• O(n^c), c > 1

17. Define recurrence relation.
An equation that defines M(n) not explicitly as a function of n, but implicitly as a function of its value at another point, namely n−1, is called a recurrence relation. For example, M(n) = M(n−1) + 1 with M(0) = 0.

18. What are the types of recurrence relations (by form of the sequence)?
• Geometric sequence
• Arithmetic sequence
• Linear sequence


19. Define linear search.
In computer science, linear search or sequential search is a method for finding a particular value in a list that consists of checking every one of its elements, one at a time and in sequence, until the desired one is found.

20. What are the best-, worst- and average-case complexities of linear search?
• Best case – O(1)
• Average case – O(N)
• Worst case – O(N)

21. Define the little-oh and little-omega notations.
f(x) = o(g(x)) is read as "f(x) is little-oh of g(x)". Intuitively, it means that g(x) grows much faster than f(x); it assumes that f and g are both functions of one variable. Formally, it states that lim (x→∞) f(x)/g(x) = 0.
For nonnegative functions f(n) and g(n), f(n) is little-omega of g(n) if and only if f(n) = Ω(g(n)) but f(n) ≠ Θ(g(n)). This is denoted f(n) = ω(g(n)).

22. Define the substitution method.
The substitution method is a method of solving a system of equations wherein one of the equations is solved for one variable in terms of the other variables. For recurrence relations, the value of the function at smaller arguments is substituted repeatedly (forward or backward) until a pattern emerges.

23. Give the master theorem.
Let T(n) be a monotonically increasing function that satisfies
    T(n) = a·T(n/b) + f(n),  T(1) = c,
where a ≥ 1, b ≥ 2, c > 0. If f(n) ∈ O(n^d) with d ≥ 0, then
    T(n) ∈ O(n^d)           if a < b^d,
    T(n) ∈ O(n^d log n)     if a = b^d,
    T(n) ∈ O(n^(log_b a))   if a > b^d.
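A worked application of the master theorem above, using merge sort's recurrence (merge sort also appears among the divide-and-conquer examples in Unit 2):

```latex
T(n) = 2\,T(n/2) + O(n)
\quad\Longrightarrow\quad
a = 2,\; b = 2,\; f(n) \in O(n^d) \text{ with } d = 1.
\]
\[
\text{Since } a = b^d \ (2 = 2^1), \text{ the second case applies: }
T(n) \in O(n^d \log n) = O(n \log n).
```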

24. Give the general plan for analyzing a recursive algorithm.
• Decide on a parameter indicating an input's size.
• Identify the algorithm's basic operation.
• Check whether the number of times the basic operation is executed depends only on the size of the input. If it also depends on some additional property, the worst-case, average-case and, if necessary, best-case efficiencies have to be investigated separately.
• Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.


25. Give the general plan for analyzing a non-recursive algorithm.
• Decide on a parameter indicating an input's size.
• Identify the algorithm's basic operation.
• Check whether the number of times the basic operation is executed depends only on the size of the input. If it also depends on some additional property, the worst-case, average-case and, if necessary, best-case efficiencies have to be investigated separately.
• Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.

27. Define data space, instruction space and environment space.
• Data space: the space needed to store all constant and variable values.
• Instruction space: the space needed to store the compiled version of all program instructions.
• Environment space: the environment stack is used to save the information needed to resume execution of partially completed functions. Each time a function is invoked, the following data are saved on the environment stack:
  o The return address.
  o The values of all local variables.

29. What methods are available for solving recurrence relations?
• Forward substitution
• Backward substitution
• Smoothness rule
• Master theorem

30. Write down the problem types.
• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems

31. What are the types of recurrence relations?
• Homogeneous recurrence relations
• Non-homogeneous recurrence relations

34. Define Order Of Growth. An order of growth of an algorithm is a set of functions whose asymptotic growth behavior is considered equivalent to the given algorithm. For example, 2n, 100n and n + 1 belong to the same order of growth, which is written O(n) in “Big-Oh notation” and often called “linear” because every function in the set grows linearly with n.


UNIT 2

1. What is the order of an algorithm?
A standard notation used to represent the computing time of an algorithm is termed the order of the algorithm and is denoted by O.

2. What are the objectives of sorting algorithms?
The objective of a sorting algorithm is to rearrange the records so that their keys are ordered according to some well-defined ordering rule.

3. What is meant by an optimal solution?
A solution to an optimization problem which minimizes (or maximizes) the objective function is termed an optimal solution. An optimization problem is a computational problem in which the object is to find the best of all possible solutions; more formally, to find a solution in the feasible region which has the minimum (or maximum) value of the objective function.

5. Find the number of comparisons made by sequential search in the worst case and the best case.
The numbers of comparisons made by sequential search in the worst and best cases are n and one key comparisons, respectively.

6. Define the knapsack problem.
The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most useful items.

7. Define the divide-and-conquer strategy.
Divide and conquer (D&C) is an important algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

8. Give some examples of the divide-and-conquer method.
Some examples of the D&C method are binary search, the Euclidean algorithm, the Cooley–Tukey fast Fourier transform (FFT) algorithm and merge sort.


9. Write the control abstraction for the D&C technique.
The control abstraction for the D&C technique is as follows:

procedure DAndC(p, q)      // the data are the elements from p to q
{
    if Small(p, q) then
        Solve(p, q)
    else
    {
        m ← Divide(p, q)
        Combine(DAndC(p, m), DAndC(m+1, q))
    }
}

Sometimes this type of algorithm is known as a control-abstract algorithm, as it gives only an abstract flow. This way of breaking down a problem has found wide application in sorting, selection and searching algorithms.

10. Explain the idea behind binary search.
Binary search is a fast way to search a sorted array. The idea is to look at the element in the middle. If the key is equal to it, the search is finished. If the key is less than the middle element, do a binary search on the first half; if it is greater, do a binary search on the second half.

11. What are the advantages of binary search?
Binary search is an optimal searching algorithm with which we can search for the desired element very efficiently.
Applications:
• Binary search is an efficient method for searching for a desired record in a database.
• It is used for solving nonlinear equations with one unknown.

12. What is the time complexity of binary search?
• Best case – O(1)
• Average case – O(log2 n)
• Worst case – O(log2 n)

13. What is the time complexity of the straightforward method of finding the min and max?
• The best case occurs when the elements are in increasing order; the number of element comparisons is then n − 1.
• The worst case occurs when the elements are in decreasing order; the number of element comparisons is then 2(n − 1).
• The average case occurs when a[i] is greater than max half the time.


14. Define the greedy technique.
The greedy method is the most straightforward design technique and can be applied to a wide variety of problems.

15. Define a feasible solution.
A feasible solution is a solution that satisfies the constraints.

16. Define an optimal solution.
An optimal solution is a feasible solution that optimizes the objective function. In general, finding an optimal solution is computationally hard.

17. What is the control abstraction of the greedy technique?
• Select() – the function that selects an input from a[] and removes it.
• Feasible() – a Boolean-valued function that determines whether x can be included in the solution vector.
• Union() – the function that combines x with the solution and updates the objective function.

18. What is meant by container loading?
The container-loading problem is the problem of loading a ship with the maximum number of containers.

19. Define complexity class.
A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
• The type of computational problem: the most commonly used problems are decision problems; however, complexity classes can also be defined based on function problems, counting problems, optimization problems, promise problems, etc.
• The model of computation: the most common model of computation is the deterministic Turing machine, but many complexity classes are based on nondeterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
• The resources that are being bounded and the bounds: these two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.

21. Applications of the greedy algorithm:
• Container loading
• Knapsack problem
• Topological sorting
• Bipartite cover
• Single-source shortest paths
• Minimum-cost spanning trees


22. How are the choices made at each step in the greedy method?
The choice made at each step of problem solving in the greedy approach is:
• Feasible – it should satisfy the problem's constraints.
• Locally optimal – among all feasible choices, the best one is made.
• Irrevocable – once a particular choice is made, it cannot be changed in subsequent steps.

23. General plan for a divide-and-conquer algorithm:
• Divide the problem into two or more smaller sub-problems.
• Conquer the sub-problems by solving them recursively.
• Combine the solutions of the sub-problems into the solution for the original problem.

UNIT 3

1. What is the difference between dynamic programming and a greedy algorithm?

Greedy algorithm:
1. Computes its solution by making choices in a serial fashion (it never looks back; choices are irrevocable).
2. Does not use the principle of optimality, i.e., there is no test by which one can tell that the greedy algorithm will lead to an optimal solution.

Dynamic programming:
1. Computes the solution bottom up: it builds the solution from smaller sub-problems and tries many possibilities and choices before arriving at the optimal set of choices.
2. Uses the principle of optimality.

2. What is the difference between dynamic programming and divide and conquer?

Divide and conquer:
1. Divides the given problem into many sub-problems, finds the individual solutions and combines them to get the solution of the main problem.
2. Follows a top-down technique.
3. Splits the input only at specific points (e.g., the midpoint).
4. The sub-problems are independent of one another.

Dynamic programming:
1. Many decision sequences are examined, and all the overlapping sub-instances are considered.
2. Follows a bottom-up technique.
3. Splits the input at every possible point rather than at one particular point.
4. The sub-problems overlap, i.e., they are not independent.


3. What is dynamic programming?
Dynamic programming is an algorithm design method in which the solution to the main problem can be viewed as the result of a sequence of decisions. It avoids calculating the same thing twice by keeping a table of known results of sub-problems.

4. What are the basic elements of dynamic programming?
• Substructure
• Table structure
• Bottom-up computation

5. Define substructure, table structure and bottom-up computation.
A problem is said to have optimal substructure if the globally optimal solution can be constructed from locally optimal solutions to sub-problems. We can either build up solutions of sub-problems from small to large (bottom-up) or save results of solutions of sub-problems in a table (top-down + memoization); this forms the table structure.

6. What are the applications of dynamic programming?
• Fibonacci numbers
• Longest increasing subsequence
• Minimum-weight triangulation
• The partition problem
• Approximate string matching

7. Define a multistage graph.
A multistage graph is a graph G = (V, E) with V partitioned into k ≥ 2 disjoint subsets such that:
o if (a, b) is in E, then a is in Vi and b is in Vi+1 for some subsets in the partition; and
o |V1| = |Vk| = 1.
The vertex s in V1 is called the source; the vertex t in Vk is called the sink. G is usually assumed to be a weighted graph. The cost of a path from node v to node w is the sum of the costs of the edges in the path. The "multistage graph problem" is to find a minimum-cost path from s to t. Each set Vi is called a stage in the graph.

8. Define the principle of optimality.
It states that in an optimal sequence of decisions, each subsequence must be optimal. To use dynamic programming, the problem must observe the principle of optimality: whatever the initial state is, the remaining decisions must be optimal with regard to the state following from the first decision.


9. Give the formula for cost(i, j) of a multistage graph in both the forward and the backward approach.

Forward approach:
cost(i, j) = min { c(j, l) + cost(i+1, l) : l ∈ Vi+1, (j, l) ∈ E }
with cost(k−1, j) = c(j, t) if (j, t) ∈ E, and cost(k−1, j) = ∞ if (j, t) ∉ E.

Backward approach:
bcost(i, j) = min { bcost(i−1, l) + c(l, j) : l ∈ Vi−1, (l, j) ∈ E }
with bcost(2, j) = c(1, j) if (1, j) ∈ E, and bcost(2, j) = ∞ if (1, j) ∉ E.

10. Derive the formula for Floyd's algorithm and the running time of the algorithm.
For k = 1 to n:
    D^k(i, j) = min { D^(k−1)(i, j), D^(k−1)(i, k) + D^(k−1)(k, j) }
where D^0 is the weight matrix of the graph. Three nested loops over k, i and j give a running time of O(n³).