CL-I LAB MANUAL 2010-11


PUNE INSTITUTE OF COMPUTER TECHNOLOGY, DHANKAWADI, PUNE – 43.

LAB MANUAL
ACADEMIC YEAR: 2010-2011
DEPARTMENT: COMPUTER ENGINEERING
CLASS: B.E.
SEMESTER: I
SUBJECT: COMPUTER LABORATORY-I

INDEX OF LAB EXPERIMENTS
(Expt. No. – Problem Statement – Revised on)

PART I: Principles of Compiler Design

1. Write a LEX program to count the number of characters, words and lines, and remove the C and C++ comments from a given input text file. Create an output text file that consists of the contents of the input file with line numbers, and display the total number of characters, words and lines. (Revised on 02/07/2010)

2. Implement a lexical analyser for a subset of C using LEX. The implementation should support error handling. (22/06/2009)

3. Implement a natural language parser using LEX & YACC. (22/06/2009)

4. Write an ambiguous CFG to recognise an infix expression and implement a parser that recognises the infix expression using YACC. Provide the details of all conflicting entries in the parser table generated by LEX and YACC and how they have been resolved. (A calculator can be taken as an application.) (22/06/2009)

5. Write an attributed translation grammar to recognise declarations of simple variables, "for", assignment, if and if-else statements as per the syntax of C or Pascal, and generate equivalent three-address code for the given input made up of the constructs mentioned above, using LEX and YACC. Write code to store the identifiers from the input in a symbol table and also to record other relevant information about the identifiers. Display all records stored in the symbol table. (22/06/2009)

6. Laboratory Project: For a small subset of C with essential programming constructs, write a compiler using LEX and YACC (to be carried out in a group of 4 to 6 students). (22/06/2009)

PART II: Operating System

7. Study of Various Commands in Unix/Linux. This assignment includes general commands like grep, locate, chmod, chown, ls, cp etc. It also includes the various system calls: File System Calls: read(), write(), open() etc.; Process System Calls: fork(), execv(), execl() etc.; Inter-process System Calls: pipe(), popen(), fifo(), signal() etc. Each command should be written as per the format specified below. (Revised on 22/06/2009)

   COMMAND NAME: command name
   FORMAT: command [option(s)] argument(s)
   DESCRIPTION: A brief description of what the command does.
   OPTIONS: A list of the most useful options and a brief description of each.
   ARGUMENTS: Mandatory or optional arguments.
   EXAMPLE: A simple example of how to use the command.

8. Using the fork system call, create a child process, suspend it using the wait system call and transfer it into the Zombie state. (22/06/2009)

9. Write a program for Client-Server communication using the following inter-process communication mechanisms: 1. Unnamed pipe 2. Named pipe 3. Semaphore (General). (22/06/2009)

10. File management using low-level file access system calls such as write, read, open, lseek, fstat. (22/06/2009)

11. Implement an Alarm clock application using signals. (22/06/2009)

12. Create a program which has three threads: 1. Display Seconds 2. Display Minutes 3. Display Hours. (22/06/2009)

13. Write and insert a module in the Linux Kernel. (22/06/2009)

PART III: Design and Analysis of Algorithms
Note: Compute the time and space complexity for the following assignments.

14. Implement using the divide and conquer strategy (any one): • Merge Sort and Randomized Quick Sort (recursive and non-recursive), and compare recursive and non-recursive versions • Multiplication of 2 'n'-bit numbers where 'n' is a power of 2. (22/06/2009)

15. Implement using the Greedy Method: Minimal spanning tree / Job scheduling. (22/06/2009)

16. Find the shortest path for the multistage graph problem (single source shortest path and all pairs shortest path). (22/06/2009)

17. Implement the 0/1 Knapsack problem using Dynamic Programming, Backtracking and Branch & Bound strategies. Analyse the problem with all three methods. (22/06/2009)

18. Implement with Backtracking (any one): 1. Implement the 'n' queens problem with backtracking; calculate the number of solutions and the number of nodes generated in the state space tree. 2. For the assignment problem of 'n' people to 'n' jobs with cost of assigning C(i, j), find the optimal assignment of every job to a person with minimum cost. (22/06/2009)

19. Implement the following using Branch and Bound: Traveling salesperson problem. (22/06/2009)

Head of Department (Computer Engineering)          Subject Coordinator (Prof. Archana Ghotkar)

STUDENT ACTIVITY FLOW-CHART

START
1. Get imparted knowledge from the Lab Teacher.
2. Design the application.
3. Consult the Lab Teacher. If the design is not accepted, make the suggested modifications and consult again; if accepted, proceed.
4. Write the program, execute it and test it for different inputs.
5. Demonstrate to the lab teacher for different inputs. If not accepted as complete, make the suggested modifications and demonstrate again; if completed, proceed.
END

Revised on: 02/07/2010

TITLE: Using LEX

PROBLEM STATEMENT/DEFINITION: Write a LEX program to count the number of characters, words and lines in a given input text file. Create an output text file that consists of the contents of the input file as well as line numbers.

OBJECTIVE:
• Understand the importance and usage of the LEX automated tool.

S/W PACKAGES AND HARDWARE APPARATUS USED: Windows 2000 / Linux with support for the LEX utility; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
1. A. V. Aho, R. Sethi, J. D. Ullman, "Compilers: Principles, Techniques, and Tools", Pearson Education, ISBN 81-7758-590-8.
2. J. R. Levine, T. Mason, D. Brown, "Lex & Yacc", O'Reilly, 2000, ISBN 81-7366-061-X.
3. K. Louden, "Compiler Construction: Principles and Practice", Thomson Brookes/Cole (ISE), 2003, ISBN 981-243-694-4.

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Source Code
• Output
• Conclusion

Theory:
Lex is a tool for generating programs that perform pattern-matching on text, i.e. scanners: programs which recognize lexical patterns in text. The description of the scanner is given as pairs of regular expressions and C code, called rules. Lex generates as output a C source file called lex.yy.c.

Format of the input file: The general format of a lex source file is:

Definitions
%%
Rules
%%
User subroutines

• The definitions section is the place to define macros and to import header files written in C. It is also possible to write any C code here, which will be copied verbatim into the generated source file.
• The rules section is the most important section; it associates patterns with C statements. Patterns are simply regular expressions. When the lexer sees some text in the input matching a given pattern, it executes the associated C code. This is the basis of how lex operates.
• The user subroutines section contains C statements and functions that are copied verbatim to the generated source file. These statements presumably contain code called by the rules in the rules section. In large programs it is more convenient to place this code in a separate file and link it in at compile time.

How the input is matched: When the generated scanner is run, it analyzes its input looking for strings which match any of its patterns. If it finds more than one match, it takes the one matching the most text (for trailing context rules, this includes the length of the trailing part, even though it will then be returned to the input). If it finds two or more matches of the same length, the rule listed first in the flex input file is chosen. Once the match is determined, the text corresponding to the match (called the token) is made available in the global character pointer yytext, and its length in the global integer yyleng. The action corresponding to the matched pattern is then executed (a more detailed description of actions follows), and then the remaining input is scanned for another match. If no match is found, then the default rule is executed: the next character in the input is considered matched and copied to the standard output. Thus, the simplest legal flex input is a file containing only the rule delimiter

%%

which generates a scanner that simply copies its input (one character at a time) to its output.


Actions in lex: The action to be taken when an ERE is matched can be a C program fragment or one of the special actions described below; the program fragment can contain one or more C statements, and can also include special actions. Four special actions are available:

| : The action '|' means that the action for the next rule is the action for this rule.
ECHO: Write the contents of the string yytext on the output.
REJECT: Usually only a single expression is matched by a given string in the input. REJECT means "continue to the next expression that matches the current input", and causes whatever rule was the second choice after the current rule to be executed for the same input. Thus, multiple rules can be matched and executed for one input string or overlapping input strings.
BEGIN: The action BEGIN newstate; switches the scanner to the start condition newstate.


Algorithm:
1. Write a lex input file and save it with a .l extension (for example first.l, as used in the test run below).
2. Generate a C file using the command 'lex first.l'. This creates a C file named lex.yy.c.
3. Compile the C file using the command 'gcc lex.yy.c -lfl'. This links the generated scanner against the fl (flex) library.
4. Execute the program using the command './a.out myinput.c', passing the input text file as the argument.

Test Input:
[root@localhost Lex&Yacc]# lex first.l
[root@localhost Lex&Yacc]# cc lex.yy.c -ll
[root@localhost Lex&Yacc]# ./a.out myinput.c

//myinput.c
/* hello world!!!! this is a program to test my first lex assignment */
#include
#include
main()
{
// this is a single line comment
int num,i;
printf("\nEnter the number: ");
scanf("%d",&num);
if(num

The output redirection operator will create count.txt if it does not exist or overwrite it if it already exists. (The file does not, of course, require the .txt extension, and it could have just as easily been named count, lines or anything else.)

The following is a slightly more complex example of combining a pipe with redirection to a file:

echo -e "orange \npeach \ncherry" | sort > fruit

The echo command tells the computer to send the text that follows it to standard output, and its -e option tells the computer to interpret each \n as the newline symbol (which is used to start a new line in the output). The pipe redirects the output from echo -e to the sort command, which arranges it alphabetically, after which it is redirected by the output redirection operator to the file fruit.

As a final example, and to further illustrate the great power and flexibility that pipes can provide, the following uses three pipes to search the contents of all of the files in the current directory and display the total number of lines in them that contain the string Linux but not the string UNIX:

cat * | grep "Linux" | grep -v "UNIX" | wc -l

In the first of the four segments of this pipeline, the cat command, which is used to read and concatenate (i.e., string together) the contents of files, concatenates the contents of all of the files in the current directory. The asterisk is a wildcard that represents all items in a specified directory, and in this case it serves as an argument to cat to represent all objects in the current directory. The first pipe sends the output of cat to the grep command, which is used to search text. The "Linux" argument tells grep to return only those lines that contain the string Linux. The second pipe sends these lines to another instance of grep, which, in turn, with its -v option, eliminates those lines that contain the string UNIX. Finally, the third pipe sends this output to wc -l, which counts the number of lines and writes the result to the display screen.


Algorithm:

1. Unnamed Pipe:
   1. Create the two pipe ends using the pipe() call.
   2. Create a child process using the fork() call.
   3. Close the read end for the client, i.e. the child, and write data on the write end.
   4. Close the write end for the server, i.e. the parent, and read the data written by the client from the read end.

2. Named Pipe:
   1. Create two FIFOs using the mkfifo() call.
   2. Open one for reading (FIFO1) and the other for writing (FIFO2).
   3. Send data from the client.
   4. Wait for data from the client and print the same.

3. Semaphore:
   Server:
   1. A semaphore set is obtained with the semget() function using a specific semaphore key, and its value is set to one.
   2. While not reset by the client, continue; else read the data from the file written by the client.
   Client:
   1. The semaphore set is obtained with the semget() function using the server's specific semaphore key, with the value set to 1.
   2. Write data to the file, reset the semaphore value and break.

Test Input:
1. Named Pipe
Client:
[root@localhost Programs]# gcc -o client namedclient.c
[root@localhost Programs]# ./client hellopictsctr
Server:
[root@localhost Programs]# gcc -o server namedserver.c
[root@localhost Programs]# ./server
2. Unnamed Pipe:
[root@localhost Programs]# gcc unnamed.c
[root@localhost Programs]# ./a.out
3. Semaphore
Client:
[root@localhost Programs]# gcc semclient.c
[root@localhost Programs]# ./a.out
Server:
[root@localhost Programs]# gcc semserver.c
[root@localhost Programs]# ./a.out


Test Output:
1. Named Pipe
Server:
Half duplex Server: Read from Pipe: hellopictsctr
Half duplex Server: Converting string: HELLOPICTSCTR
2. Unnamed Pipe:
Parent process: Enter the data to the pipe: hellopictsctr
Child process: Pipe read successfully: hellopictsctr
3. Semaphore
Client:
Enter data: hellopictsctr
Server:
Waiting for clients to update…
Updated : hellopictsctr
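For reference, a minimal C sketch of the unnamed-pipe case (step 1 of the algorithm above) is given below. It follows the sample run, with the parent writing and the child reading; the file and message names (unnamed.c, "hellopictsctr") are only illustrative, not the prescribed solution.

/* unnamed.c - hedged sketch of parent-to-child communication over an unnamed pipe */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) {           /* create both pipe ends */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {              /* child acts as the reader */
        close(fd[1]);               /* close the unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("Child process: Pipe read successfully: %s\n", buf);
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                   /* parent acts as the writer */
    strcpy(buf, "hellopictsctr");
    write(fd[1], buf, strlen(buf) + 1);
    close(fd[1]);
    wait(NULL);                     /* reap the child */
    return 0;
}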


Revised On: 22/06/2009

TITLE: System Call

PROBLEM STATEMENT/DEFINITION: Using the fork system call, create a child process, suspend it using the wait system call, and transfer it into the Zombie state.

OBJECTIVE:
• To understand the concept of the Zombie state and learn the fork and wait system calls.
• To implement the fork system call to create a child process and transfer it into the Zombie state.

S/W PACKAGES AND HARDWARE APPARATUS USED: Linux Fedora 4; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• Advanced Unix Programming by Richard Stevens
• Vijay Mukhi's "The 'C' Odyssey: UNIX" – Gandhi

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Algorithm
• Source code
• Compilation steps
• Output
• Conclusion


Theory:
On Unix and Unix-like computer operating systems, a zombie process or defunct process is a process that has completed execution but still has an entry in the process table, this entry being still needed to allow the process that started the zombie process to read its exit status. The term zombie process derives from the common definition of zombie - an undead person. In the term's colorful metaphor, the child process has died but has not yet been reaped.

When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes. However, the process's entry in the process table remains. The parent can read the child's exit status by executing the wait system call, at which stage the zombie is removed. The wait call may be executed in sequential code, but it is commonly executed in a handler for the SIGCHLD signal, which the parent is sent whenever a child has died.

After the zombie is removed, its process ID and entry in the process table can then be reused. However, if a parent fails to call wait, the zombie will be left in the process table. In some situations this may be desirable, for example if the parent creates another child process, it ensures that it will not be allocated the same process ID. As a special case, under Linux, if the parent explicitly ignores SIGCHLD (sets the handler to SIG_IGN, rather than simply ignoring the signal by default), all child exit status information will be discarded and no zombie processes will be left.

A zombie process is not the same as an orphan process. An orphan process is a process that is still executing, but whose parent has died. Orphans don't become zombie processes; instead, they are adopted by init (process ID 1), which waits on its children.

Zombies can be identified in the output from the Unix ps command by the presence of a "Z" in the STAT column. Zombies that exist for more than a short period of time typically indicate a bug in the parent program. As with other leaks, the presence of a few zombies isn't worrisome in itself, but may indicate a problem that would grow serious under heavier loads. Since there is no memory allocated to zombie processes except for the process table entry itself, the primary concern with many zombies is not running out of memory, but rather running out of process ID numbers.

To remove zombies from a system, the SIGCHLD signal can be sent to the parent manually, using the kill command. If the parent process still refuses to reap the zombie, the next step would be to remove the parent process. When a process loses its parent, init becomes its new parent. Init periodically executes the wait system call to reap any zombies with init as parent.
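The reaping pattern described above (wait() called from a SIGCHLD handler so that no zombie is left behind) might look roughly like the following sketch; the function name is illustrative and this is not the assignment's required program.

#include <stdio.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap_child(int signum)
{
    /* wait() collects the dead child's exit status, so its process-table entry is freed */
    wait(NULL);
}

int main(void)
{
    signal(SIGCHLD, reap_child);      /* register the handler before forking */

    if (fork() == 0)                  /* child: terminate immediately */
        _exit(0);

    sleep(5);                         /* parent keeps running; the handler reaps the child,
                                         so ps shows no Z entry for it */
    printf("parent done\n");
    return 0;
}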


Algorithm:
1. Include <sys/types.h>, <sys/wait.h> and <unistd.h> along with other header files.
2. Call the fork system call.
3. In the child process, print the child process and parent process IDs.
4. In the parent process, print the parent process ID.
5. Execute the wait() system call in the parent.
6. While the program is executing, use the ps command to see the zombie child.

Test Input: None

Test Output:
[root@localhost Programs]# gcc -o zombie zombie.c
[root@localhost Programs]# ./zombie
Child process. PID = 3952 Parent PID = 3951
Parent process. PID = 3951

//On terminal 2
[root@localhost Programs]# ps -al
F S UID PID  PPID C PRI NI ADDR SZ   WCHAN TTY   TIME     CMD
0 S 0   3951 3037 0 82  0  -    396        pts/0 00:00:00 zombie
1 Z 0   3952 3951 0 82  0  -    0    exit  pts/0 00:00:00 zomb
4 R 0   3953 3099 0 77  0  -    1069       pts/1 00:00:00 ps
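A minimal C sketch consistent with the algorithm and the sample run above might look as follows; the sleep interval and messages are illustrative, and the zombie is visible in ps -al while the parent is sleeping, before it calls wait().

/* zombie.c - hedged sketch: the child exits first and stays a zombie until the parent waits */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {                          /* child */
        printf("Child process. PID = %d Parent PID = %d\n", getpid(), getppid());
        return 0;                            /* child exits and becomes a zombie ... */
    }

    printf("Parent process. PID = %d\n", getpid());
    sleep(30);                               /* ... until the parent calls wait(); run
                                                ps -al on another terminal now to see
                                                the Z entry */
    wait(NULL);                              /* reap the zombie */
    return 0;
}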


Revised On: 22/06/2009

TITLE: File management

PROBLEM STATEMENT/DEFINITION: File management using low-level file access system calls such as write, read, open, lseek, fstat.

OBJECTIVE:
• To understand and learn the read, write, open, lseek and fstat system calls.
• To implement the above system calls for file management.

S/W PACKAGES AND HARDWARE APPARATUS USED: Linux Fedora 4; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• The Design of the UNIX Operating System by Maurice Bach
• Vijay Mukhi's "The 'C' Odyssey: UNIX" – Gandhi

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Algorithm
• Source code
• Compilation steps
• Output
• Conclusion


Theory: Various file access system calls are described below:

1. write(): The write() system call is used to write data to a file or other object identified by a file descriptor. The prototype is
   #include <unistd.h>
   ssize_t write(int fildes, const void *buf, size_t nbyte);
   fildes is the file descriptor, buf is the base address of the area of memory that data is copied from, and nbyte is the amount of data to copy. The return value is the actual amount of data written; if this differs from nbyte then something has gone wrong.

2. read(): The read() system call is used to read data from a file or other object identified by a file descriptor. The prototype is
   #include <unistd.h>
   ssize_t read(int fildes, void *buf, size_t nbyte);
   fildes is the descriptor, buf is the base address of the memory area into which the data is read and nbyte is the maximum amount of data to read. The return value is the actual amount of data read from the file. The file pointer is advanced by the amount of data read.

3. open(): The open() system call is usually used with two parameters, although an extra parameter can be used under certain circumstances. The prototype is
   #include <fcntl.h>
   int open(const char *path, int oflag);
   The return value is the descriptor, or -1 if the file could not be opened. The first parameter is the path name of the file to be opened and the second parameter is the opening mode, specified by bitwise ORing one or more of the following values: O_RDONLY, O_WRONLY, O_RDWR etc.

4. lseek(): The lseek() system call allows programs to manipulate the read/write pointer directly, providing the facility for direct access to any part of the file. It has three parameters and the prototype is
   #include <sys/types.h>
   #include <unistd.h>
   off_t lseek(int fildes, off_t offset, int whence);
   fildes is the file descriptor, offset is the required new value of (or alteration to) the offset, and whence has one of the three values SEEK_SET, SEEK_CUR, SEEK_END.

5. fstat(): The fstat() system call obtains the same information about an open file known by the file descriptor fd. The prototype is
   #include <sys/stat.h>
   int fstat(int fd, struct stat *sb);


Algorithm:
1. Include <fcntl.h>, <sys/stat.h> and <unistd.h> along with other header files.
2. Accept the filename and the choice of operation from the user.
3. If the choice is read, open the file in mode O_RDONLY, read and display the contents and close the file.
4. If the choice is write, open the file in mode O_WRONLY, accept the data to be written, write it into the file and close the file.
5. If the choice is to append to the file, open it in mode O_RDWR. Using lseek, position the pointer to the end of the file, accept the data to be written, write it into the file and close the file.
6. If the choice is to check the file status, open the file in any mode, use fstat() to get the file status and display the same.

Test Input:
[root@localhost Programs]# gcc fileop.c
[root@localhost Programs]# ./a.out
Enter the filename: /root/programs/test.c
Enter choice:

Test Output:
Choice: Write
Enter data: I am student of Pict, Pune.
Data written
Choice: Append
Enter data: I am studying in BE.
Data written
Choice: Read
File contents are: I am student of Pict, Pune. I am studying in BE.
Choice: File Status
File Status is:
ID of the device containing the file: 2056
File serial number: 133702
Mode of file: 33188
Size in bytes: 80
Last access: Wed Sep 26 22:26:05 2007
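A compact sketch tying the above calls together is shown below; the file name, messages and fixed strings are illustrative (they mirror the sample run), and error checking is kept to a minimum.

/* hedged sketch of open/write/lseek/read/fstat on one file */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    const char *first  = "I am student of Pict, Pune. ";
    const char *second = "I am studying in BE.";
    struct stat sb;
    ssize_t n;

    int fd = open("test.c", O_RDWR | O_CREAT, 0644);   /* open (or create) the file */
    if (fd == -1) { perror("open"); return 1; }

    write(fd, first, strlen(first));                   /* write */

    lseek(fd, 0, SEEK_END);                            /* append: move to end of file */
    write(fd, second, strlen(second));

    lseek(fd, 0, SEEK_SET);                            /* read back from the beginning */
    n = read(fd, buf, sizeof(buf) - 1);
    buf[n] = '\0';
    printf("File contents are: %s\n", buf);

    fstat(fd, &sb);                                    /* file status */
    printf("Size in bytes: %ld\n", (long)sb.st_size);

    close(fd);
    return 0;
}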


Revised On: 22/06/2009

TITLE: Signals

PROBLEM STATEMENT/DEFINITION: Implement an Alarm clock application using signals.

OBJECTIVE:
• To understand the concept of signals.
• To implement an Alarm clock using the SIGALRM signal.

S/W PACKAGES AND HARDWARE APPARATUS USED: Linux Fedora 4; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• Advanced Unix Programming by Richard Stevens
• Vijay Mukhi's "The 'C' Odyssey: UNIX" – Gandhi

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Algorithm
• Source code
• Compilation steps
• Output
• Conclusion


Theory:
Signals, in short, are various notifications sent to a process in order to notify it of various "important" events. By their nature, they interrupt whatever the process is doing at that moment, and force it to handle them immediately. Each signal has an integer number that represents it (1, 2 and so on), as well as a symbolic name that is usually defined in the file /usr/include/signal.h or one of the files included by it directly or indirectly (HUP, INT and so on; use the command 'kill -l' to see a list of signals supported by your system).

Each signal may have a signal handler, which is a function that gets called when the process receives that signal. The function is called in "asynchronous mode", meaning that nowhere in your program do you have code that calls this function directly. Instead, when the signal is sent to the process, the operating system stops the execution of the process and "forces" it to call the signal handler function. When that signal handler function returns, the process continues execution from wherever it happened to be before the signal was received, as if this interruption never occurred.

Note for "hardwarists": if you are familiar with interrupts (you are, right?), signals are very similar in their behavior. The difference is that while interrupts are sent to the operating system by the hardware, signals are sent to the process by the operating system, or by other processes. Note that signals have nothing to do with software interrupts, which are still sent by the hardware (the CPU itself, in this case).

Signals are usually used by the operating system to notify processes that some event occurred, without these processes needing to poll for the event. Signals should then be handled, rather than used to create an event notification mechanism for a specific application. When we say that "signals are being handled", we mean that our program is ready to handle such signals that the operating system might be sending it (such as signals notifying that the user asked to terminate it, or that a network connection we tried writing into was closed, etc.). Failing to properly handle various signals would likely cause our application to terminate when it receives such signals.

The most common way of sending signals to processes is using the keyboard. There are certain key presses that are interpreted by the system as requests to send signals to the process with which we are interacting:

Ctrl-C: Pressing this key causes the system to send an INT signal (SIGINT) to the running process. By default, this signal causes the process to immediately terminate.


Ctrl-Z: Pressing this key causes the system to send a TSTP signal (SIGTSTP) to the running process. By default, this signal causes the process to suspend execution.

Ctrl-\: Pressing this key causes the system to send an ABRT signal (SIGABRT) to the running process. By default, this signal causes the process to immediately terminate. Note that this redundancy (i.e. Ctrl-\ doing the same as Ctrl-C) gives us some better flexibility. We'll explain that later on.

Another way of sending signals to processes is done using various commands, usually internal to the shell:

kill: The kill command accepts two parameters: a signal name (or number), and a process ID. Usually the syntax for using it goes something like:

kill -<signal> <PID>

For example, in order to send the INT signal to the process with PID 5342, type:

kill -INT 5342

This has the same effect as pressing Ctrl-C in the shell that runs that process. If no signal name or number is specified, the default is to send a TERM signal to the process, which normally causes its termination, and hence the name of the kill command.

fg: On most shells, using the 'fg' command will resume execution of the process (that was suspended with Ctrl-Z), by sending it a CONT signal.

A third way of sending signals to processes is by using the kill system call. This is the normal way of sending a signal from one process to another. This system call is also used by the 'kill' command or by the 'fg' command. Here is an example code that causes a process to suspend its own execution by sending itself the STOP signal:

#include <unistd.h>     /* standard unix functions, like getpid() */
#include <sys/types.h>  /* various type definitions, like pid_t */
#include <signal.h>     /* signal name macros, and the kill() prototype */

/* first, find my own process ID */
pid_t my_pid = getpid();

/* now that i got my PID, send myself the STOP signal. */


kill(my_pid, SIGSTOP);

An example of a situation when this code might prove useful is inside a signal handler that catches the TSTP signal (Ctrl-Z, remember?) in order to do various tasks before actually suspending the process. We will see an example of this later on.

Most signals may be caught by the process, but there are a few signals that the process cannot catch, and which cause the process to terminate. For example, the KILL signal (-9 on all unices I've met so far) is such a signal. This is why you usually see a process being shut down using this signal if it gets "wild". One process that uses this signal is a system shutdown process. It first sends a TERM signal to all processes, waits a while, and after allowing them a "grace period" to shut down cleanly, it kills whichever are left using the KILL signal.

STOP is also a signal that a process cannot catch, and it forces the process's suspension immediately. This is useful when debugging programs whose behavior depends on timing. Suppose that process A needs to send some data to process B, and you want to check some system parameters after the message is sent, but before it is received and processed by process B. One way to do that would be to send a STOP signal to process B, thus causing its suspension, then run process A and wait until it sends its oh-so-important message to process B. Now you can check whatever you want to, and later on you can use the CONT signal to continue process B's execution, which will then receive and process the message sent from process A.

Now, many other signals are catchable, and this includes the famous SEGV and BUS signals. You have probably seen numerous occasions when a program has exited with a message such as 'Segmentation Violation - Core Dumped' or 'Bus Error - core dumped'. In the first case, a SEGV signal was sent to your program due to accessing an illegal memory address. In the second case, a BUS signal was sent to your program due to accessing a memory address with invalid alignment. In both cases, it is possible to catch these signals in order to do some cleanup - kill child processes, perhaps remove temporary files, etc. Although in both cases the memory used by your process is most likely corrupt, it's probable that only a small part of it was corrupt, so cleanup is still usually possible.


Algorithm:
1. Write the function to be invoked on receipt of the signal.
2. Use the signal system call with SIGALRM in the signum field and the address of the function written in (1) as the function argument, to register the user function.
3. Accept the number of seconds after which to signal the alarm.
4. Use the alarm function with the number of seconds accepted in (3) to invoke the function written in (1) after the specified interval.

Test Input:
[root@localhost Programs]# gcc -o signal signals.c
[root@localhost Programs]# ./signal
Enter the alarm interval in seconds: 5

Test Output:
1
2
3
4
******** ALARM ********
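A minimal sketch of the alarm clock following the algorithm above might look like this; the prompt and tick output mirror the sample run, but the details are illustrative.

/* signals.c - hedged sketch of an alarm clock built on SIGALRM */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static void on_alarm(int signum)
{
    printf("******** ALARM ********\n");
    _exit(0);
}

int main(void)
{
    int seconds, i;

    signal(SIGALRM, on_alarm);            /* step 2: register the handler */

    printf("Enter the alarm interval in seconds: ");
    scanf("%d", &seconds);                /* step 3: accept the interval */

    alarm(seconds);                       /* step 4: schedule SIGALRM */

    for (i = 1; ; i++) {                  /* tick once per second until the signal arrives */
        sleep(1);
        printf("%d\n", i);
    }
    return 0;
}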


Revised On: 22/06/2009

TITLE: MultiThreading

PROBLEM STATEMENT/DEFINITION: Create a program which has three threads: 1. Display Seconds 2. Display Minutes 3. Display Hours; and synchronize them.

OBJECTIVE:
• To understand the concept of multithreading.
• To implement a digital clock by creating three threads and joining them.

S/W PACKAGES AND HARDWARE APPARATUS USED: Linux Fedora 4; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• Advanced Unix Programming by Richard Stevens
• The Design of the UNIX Operating System by Maurice Bach

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Algorithm
• Source code
• Compilation steps
• Output
• Conclusion


Theory:
We can think of a thread as basically a lightweight process. In order to understand this, let us consider the two main characteristics of a process:

Unit of resource ownership -- A process is allocated:
• a virtual address space to hold the process image
• control of some resources (files, I/O devices, ...)

Unit of dispatching -- A process is an execution path through one or more programs:
• execution may be interleaved with other processes
• the process has an execution state and a dispatching priority

If we treat these two characteristics as being independent (as does modern OS theory):

• The unit of resource ownership is usually referred to as a process or task. Such processes have:
  o a virtual address space which holds the process image
  o protected access to processors, other processes, files, and I/O resources
• The unit of dispatching is usually referred to as a thread or a lightweight process. Thus a thread:
  o has an execution state (running, ready, etc.)
  o saves thread context when not running
  o has an execution stack and some per-thread static storage for local variables
  o has access to the memory address space and resources of its process -- all threads of a process share this: when one thread alters a (non-private) memory item, all other threads (of the process) see that, and a file opened by one thread is available to the others

Benefits of Threads vs Processes
If implemented correctly then threads have some advantages over (multi) processes. They take:
• Less time to create a new thread than a process, because the newly created thread uses the current process address space.
• Less time to terminate a thread than a process.
• Less time to switch between two threads within the same process, partly because the newly created thread uses the current process address space.



• Less communication overheads -- communicating between the threads of one process is simple because the threads share everything: the address space, in particular. So, data produced by one thread is immediately available to all the other threads.

Example: A file server on a LAN
• It needs to handle several file requests over a short period
• Hence it is more efficient to create (and destroy) a single thread for each request
• Multiple threads can possibly be executing simultaneously on different processors

Thread Levels
There are two broad categories of thread implementation:
• User-Level Threads -- thread libraries.
• Kernel-Level Threads -- system calls.
There are merits to both; in fact some OSs allow access to both levels (e.g. Solaris).

User-Level Threads (ULT)
At this level, the kernel is not aware of the existence of threads -- all thread management is done by the application by using a thread library. Thread switching does not require kernel mode privileges (no mode switch) and scheduling is application specific.

Kernel activity for ULTs:
• The kernel is not aware of thread activity but it is still managing process activity
• When a thread makes a system call, the whole process will be blocked, but for the thread library that thread is still in the running state
• So thread states are independent of process states

Advantages and inconveniences of ULT
Advantages:
• Thread switching does not involve the kernel -- no mode switching
• Scheduling can be application specific -- choose the best algorithm
• ULTs can run on any OS -- only needs a thread library

Disadvantages:
• Most system calls are blocking and the kernel blocks processes -- so all threads within the process will be blocked



• The kernel can only assign processes to processors -- two threads within the same process cannot run simultaneously on two processors

Kernel-Level Threads (KLT)
At this level, all thread management is done by the kernel. There is no thread library, but an API (system calls) to the kernel thread facility exists. The kernel maintains context information for the process and the threads; switching between threads requires the kernel. Scheduling is performed on a thread basis.

Advantages and inconveniences of KLT
Advantages:
• The kernel can simultaneously schedule many threads of the same process on many processors
• Blocking is done on a thread level
• Kernel routines can be multithreaded

Disadvantages:
• Thread switching within the same process involves the kernel; e.g. if we have 2 mode switches per thread switch, this results in a significant slowdown.

Algorithm:
1. Create three threads pertaining to the display of hours, minutes and seconds.
2. Set the variables hh, mm and ss to the values corresponding to the local time.
3. In each thread's handling function, invoke the sleep function with an argument that depends upon the type of thread.

Test Input:
[root@localhost Programs]# gcc gthread.c -lpthread
[root@localhost Programs]# ./a.out

Test Output:
15:2:17
15:2:18
15:2:19
15:2:20
15:2:21
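A hedged sketch of the three-thread clock following this algorithm is given below; the thread function names and display format are illustrative, and proper synchronization of the shared counters is left as part of the assignment.

/* gthread.c - hedged sketch: one thread per time unit, main thread displays the clock */
#include <stdio.h>
#include <time.h>
#include <pthread.h>
#include <unistd.h>

static int hh, mm, ss;                    /* shared clock state */

static void *run_seconds(void *arg)
{
    while (1) { sleep(1); if (++ss == 60) ss = 0; }
    return NULL;
}

static void *run_minutes(void *arg)
{
    while (1) { sleep(60); if (++mm == 60) mm = 0; }
    return NULL;
}

static void *run_hours(void *arg)
{
    while (1) { sleep(3600); if (++hh == 24) hh = 0; }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2, t3;
    time_t now = time(NULL);
    struct tm *lt = localtime(&now);      /* step 2: start from the local time */
    hh = lt->tm_hour; mm = lt->tm_min; ss = lt->tm_sec;

    pthread_create(&t1, NULL, run_seconds, NULL);   /* step 1: three threads */
    pthread_create(&t2, NULL, run_minutes, NULL);
    pthread_create(&t3, NULL, run_hours, NULL);

    while (1) {                           /* main thread displays the clock */
        printf("%d:%d:%d\n", hh, mm, ss);
        sleep(1);
    }
    return 0;
}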


Revised On: 22/06/2009

TITLE: Insertion of a module in the Kernel

PROBLEM STATEMENT/DEFINITION: Write and insert a module in the Linux Kernel.

OBJECTIVE:
• To implement a program by writing a module and inserting it into the kernel using a Makefile.

S/W PACKAGES AND HARDWARE APPARATUS USED: Linux Fedora 4; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• Linux Kernel Programming by Michael Beck

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory
• Algorithm
• Source code
• Compilation steps
• Output
• Conclusion


Theory:
What exactly is a kernel module? Modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system. For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system. Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image. Besides having larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality.

You can see what modules are already loaded into the kernel by running lsmod, which gets its information by reading the file /proc/modules.

How do these modules find their way into the kernel? When the kernel needs a feature that is not resident in the kernel, the kernel module daemon kmod execs modprobe to load the module in. (modprobe is the command used to load a single module into the kernel. modprobe will automatically load all base modules needed in a module stack, as described by the dependency file modules.dep. If the loading of one of these modules fails, the whole current stack of modules loaded in the current session will be unloaded automatically.) modprobe is passed a string in one of two forms:

• A module name like softdog or ppp.
• A more generic identifier like char-major-10-30.

If modprobe is handed a generic identifier, it first looks for that string in the file /etc/modprobe.conf.[2] If it finds an alias line like: alias char-major-10-30 softdog

it knows that the generic identifier refers to the module softdog.ko. Next, modprobe looks through the file /lib/modules/version/modules.dep, to see if other modules must be loaded before the requested module may be loaded. This file is created by depmod -a and contains module dependencies. For example, msdos.ko requires the fat.ko module to be already loaded into the kernel. The requested module has a dependency on another module if the other module defines symbols (variables or functions) that the requested module uses.


Lastly, modprobe uses insmod to first load any prerequisite modules into the kernel, and then the requested module. modprobe directs insmod to /lib/modules/version/[3], the standard directory for modules. insmod is intended to be fairly dumb about the location of modules, whereas modprobe is aware of the default location of modules, knows how to figure out the dependencies and load the modules in the right order. So for example, if you wanted to load the msdos module, you'd have to either run:

insmod /lib/modules/2.6.11/kernel/fs/fat/fat.ko
insmod /lib/modules/2.6.11/kernel/fs/msdos/msdos.ko

or: modprobe msdos

What we've seen here is: insmod requires you to pass it the full pathname and to insert the modules in the right order, while modprobe just takes the name, without any extension, and figures out all it needs to know by parsing /lib/modules/version/modules.dep. Linux distros provide modprobe, insmod and depmod as a package called module-init-tools. In previous versions that package was called modutils. Some distros also set up some wrappers that allow both packages to be installed in parallel and do the right thing in order to be able to deal with 2.4 and 2.6 kernels. Users should not need to care about the details, as long as they're running recent versions of those tools.


Algorithm:
1. Write a .c program which contains the functionality that is to be implemented.
2. Write a Makefile.
3. Go to the directory where both these files are stored and run the make command. A .ko file will be created.
4. To insert the module, use insmod.
5. To remove it, use rmmod.
6. To check the messages, see the file /var/log/messages or use the dmesg command.

Test Input: a .c file (the module source)
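As a reference point, a minimal module source (here called hello.c, an illustrative name) might look like the sketch below; it only logs messages that can be read with dmesg, and the Makefile line it assumes is noted in the leading comment.

/* hello.c - hedged sketch of a minimal loadable kernel module.
 * Assumed to be built, as in the algorithm above, with a Makefile whose
 * core line is:  obj-m += hello.o  */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module inserted\n");   /* visible via dmesg */
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module removed\n");
}

module_init(hello_init);
module_exit(hello_exit);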


Revised On: 22/06/2009

TITLE: DIVIDE AND CONQUER

PROBLEM STATEMENT/DEFINITION: Implement using the divide and conquer strategy (any one):
1. Merge Sort and Randomized Quicksort (recursive and non-recursive) and compare the recursive and non-recursive versions
2. Multiplication of 2 'n'-bit numbers where 'n' is a power of 2

OBJECTIVE:
• To understand the divide and conquer algorithmic strategy
• To implement searching and sorting using the divide and conquer strategy, and applications of divide and conquer
• Analyze the above algorithms and verify the analysis by executing the programs on different inputs

S/W PACKAGES AND HARDWARE APPARATUS USED: Windows 2000, Turbo C++; PC with the configuration Pentium IV 1.7 GHz, 128 MB RAM, 40 GB HDD, 15'' Color Monitor, Keyboard, Mouse.

REFERENCES:
• Fundamentals of Algorithmics by Brassard
• Fundamentals of Algorithms by Horowitz / Sahni, Galgotia
• Introduction to Algorithms by Cormen / Charles, PHI

STEPS: Refer to the student activity flow chart, theory, algorithm, test input, test output.

INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Theory (covering the concept of Divide and Conquer)
• Algorithms
• Analysis of the above for time and space complexity
• Program code
• Output for different inputs, comparing time complexity
• Conclusion


Theory
Divide and conquer strategy
In general, divide and conquer is based on the following idea. The whole problem we want to solve may be too big to understand or solve at once. We break it up into smaller pieces, solve the pieces separately, and combine the separate pieces together.

We analyze this in some generality: suppose we have a pieces, each of size n/b, and merging takes time f(n). (In the heapification example a = b = 2 and f(n) = O(log n), but it will not always be true that a = b -- sometimes the pieces will overlap.)

The easiest way to understand what's going on here is to draw a tree with nodes corresponding to subproblems, labeled with the size of the sub-problem:

                 n
          /      |      \
        n/b     n/b     n/b
       / | \   / | \   / | \
       . . .   . . .   . . .

For simplicity, let's assume n is a power of b, and that the recursion stops when n is 1.

Notice that the size of a node depends only on its level: size(i) = n/(b^i).
What is the time taken by a node at level i? time(i) = f(n/b^i).
How many levels can we have before we get down to n = 1? For the bottom level, n/b^i = 1, so n = b^i and i = (log n)/(log b).
How many items are at level i? a^i.

Putting these together we have

    T(n) = sum_{i=0}^{(log n)/(log b)} a^i f(n/b^i)

This looks messy, but it's not too bad. There are only a few terms (logarithmically many) and often the sum is dominated by the terms at one end (f(n)) or the other (n^(log a/log b)). In fact, you will generally only be a logarithmic factor away from the truth if you approximate the solution by the sum of these two, O(f(n) + n^(log a/log b)).


Let's use this to analyze heapification. By plugging in the parameters a = b = 2, f(n) = log n, we get

    T(n) = 2 sum_{i=0}^{log n} 2^i log(n/2^i)

Rewriting the same terms in the opposite order, this turns out to equal

    T(n) = 2 sum_{i=0}^{log n} n/2^i log(2^i)
         = 2n sum_{i=0}^{log n} i/2^i

which is at most 2n sum_{i=0}^{infty} i/2^i = O(n).
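To connect this analysis to the first option of the assignment, here is a hedged C sketch of recursive merge sort (a = b = 2 subproblems of size n/2 with f(n) = O(n) merging, so the recurrence above gives T(n) = O(n log n)); the sample array and names are illustrative.

#include <stdio.h>
#include <stdlib.h>

/* merge the two sorted halves a[lo..mid] and a[mid+1..hi] */
static void merge(int a[], int lo, int mid, int hi)
{
    int *tmp = malloc((hi - lo + 1) * sizeof(int));
    int i = lo, j = mid + 1, k = 0;

    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];

    for (k = 0; k <= hi - lo; k++)
        a[lo + k] = tmp[k];
    free(tmp);
}

static void merge_sort(int a[], int lo, int hi)
{
    int mid;
    if (lo >= hi) return;            /* a single element is already sorted */
    mid = lo + (hi - lo) / 2;        /* divide ... */
    merge_sort(a, lo, mid);          /* ... conquer each half ... */
    merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);           /* ... and combine */
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7, 3};
    int n = sizeof(a) / sizeof(a[0]);
    int i;

    merge_sort(a, 0, n - 1);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}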