[Notes] Linux Administration - B. Sc. IT. (Sem 5)

July 9, 2017 | Author: Mitesh Sawant | Category: Linux, File System, Booting, Unix, Operating System



An Introduction to UNIX, Linux, and GNU. 

In recent years Linux has become a phenomenon. Various organizations have adopted it, including some government departments and city administrations.

Major hardware vendors like IBM and Dell now support Linux, and major software vendors like Oracle support their software running on Linux.

Linux truly has become a viable operating system, especially in the server market.

What is Unix?

History:
1. The UNIX operating system was originally developed at Bell Laboratories, once part of the telecommunications giant AT&T. It was designed in the 1970s for Digital Equipment PDP computers.
2. UNIX has become a very popular multiuser, multitasking operating system for a wide variety of hardware platforms, from PC workstations to multiprocessor servers and supercomputers.
3. UNIX is a trademark administered by The Open Group, and it refers to a computer operating system that conforms to a particular specification.
4. This specification, known as The Single UNIX Specification, defines the names of, interfaces to, and behaviors of all mandatory UNIX operating system functions.
5. The specification is largely a superset of an earlier series of specifications, the P1003, or POSIX (Portable Operating System Interface) specifications, developed by the IEEE (Institute of Electrical and Electronics Engineers).
6. Commercial versions of UNIX: IBM AIX, HP-UX, Sun Solaris.
7. Free versions: FreeBSD and Linux.
8. In the past, compatibility among different UNIX systems has been a real problem, although POSIX was a great help in this respect.

UNIX Philosophy/Characteristics

1) Simplicity
i) Many of the most useful UNIX utilities are very simple and, as a result, small and easy to understand.
ii) Larger, more complex systems are guaranteed to contain larger, more complex bugs, and debugging is a chore that should be avoided.
iii) KISS, "Keep It Small and Simple," is a good technique to learn.

2) Focus
i) It is often better to make a program perform one task well than to feature-bloat programs, because such programs are difficult to maintain.
ii) Programs with a single purpose are easier to improve as better algorithms or interfaces are developed.
iii) In UNIX, small utilities are often combined to perform more demanding tasks when the need arises, rather than trying to anticipate a user's needs in one large program.

3) Reusable Components
i) Make the core of an application available as a library.
ii) Well-documented libraries with simple but flexible programming interfaces can help others to develop variations or apply the techniques to new application areas.
iii) Examples include the dbm database library, which is a suite of reusable functions rather than a single database management program.

4) Filters
i) Many UNIX applications can be used as filters; that is, they transform their input and produce output.
ii) UNIX provides facilities that allow quite complex applications to be developed from other UNIX programs by combining them in new ways.
iii) This kind of reuse is enabled by the reusable-components feature.

5) Open File Formats
i) The more successful and popular UNIX programs use configuration files and data files that are plain ASCII text or XML.
ii) This is a good choice for application development.
iii) It enables users to use standard tools to change and search for configuration items, and to develop new tools for performing new functions on the data files.

6) Flexibility
i) This means writing the program so that it is network-aware and able to run across a network as well as on a local machine.
ii) You can't anticipate exactly how ingeniously users will use your program. Try to be as flexible as possible in your programming, and try to avoid arbitrary limits on field sizes or numbers of records.
iii) Never assume that you know everything that the user might want to do.
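The filter idea above can be demonstrated with a short pipeline: each small tool transforms its input and passes the result on. The sentence used here is just sample data; the tools (tr, sort, uniq) are standard UNIX utilities.

```shell
# Count word frequencies by chaining small filters:
# tr splits the words onto separate lines, sort groups duplicates,
# uniq -c counts them, and sort -rn puts the most frequent word first.
printf 'the quick fox and the lazy dog and the cat\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -1
```

No single tool in the chain "knows" about word counting; the behavior emerges from the combination, which is exactly the point of the filter philosophy.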

What Is Linux?
1. Linux is a freely distributed implementation of a UNIX-like kernel, the low-level core of an operating system.
2. Because Linux takes the UNIX system as its inspiration, Linux and UNIX programs are very similar. In fact, almost all programs written for UNIX can be compiled and run on Linux.
3. Some commercial applications sold for commercial versions of UNIX can run unchanged in binary form on Linux systems.

Linux History:
4. Linux was developed by Linus Torvalds at the University of Helsinki, with the help of UNIX programmers from across the Internet.
5. It began as a hobby of Linus Torvalds, who was inspired by Andy Tanenbaum's Minix, a small UNIX-like system, but it has grown to become a complete system in its own right.
6. Linux is copyrighted, but it may be freely distributed.
7. Versions of Linux are now available for a wide variety of computer systems using many different types of CPUs.
8. Even some handheld PDAs and Sony's PlayStation 2 and 3 run Linux. If it's got a processor, someone somewhere is trying to get Linux running on it.

The GNU Project and the Free Software Foundation
1. Linux owes its existence to the cooperative efforts of a large number of people. The operating system kernel itself forms only a small part of a usable development system.
2. Commercial UNIX systems traditionally come bundled with applications that provide system services and tools, but for Linux systems, these additional programs have been written by many different programmers and have been freely contributed.
3. The Linux community (together with others) supports the concept of free software, that is, software that is free from restrictions, subject to the GNU General Public License (the name GNU stands for the recursive GNU's Not Unix).
4. Although there may be a cost involved in obtaining the software, it can thereafter be used in any way desired, and it is usually distributed in source form.
5. The Free Software Foundation was set up by Richard Stallman, the author of GNU Emacs, one of the best-known text editors for UNIX and other systems.
6. Stallman is a pioneer of the free software concept and started the GNU Project, an attempt to create an operating system and development environment that would be compatible with UNIX.
7. The GNU Project has already provided the software community with many applications that closely mimic those found on UNIX systems, released under the GNU license.
8. A few major examples of software from the GNU Project distributed under the GPL are GCC (a compiler collection) and GDB (a source code-level debugger).
9. Many other packages have been developed and released using free software principles and the GPL, including spreadsheets, source code control tools, compilers and interpreters, Internet tools, graphical image manipulation tools such as the Gimp, and two complete object-based environments: GNOME and KDE.
10. There is now so much free software available that, with the addition of the Linux kernel, it could be said that the goal of creating GNU, a free UNIX-like system, has been achieved with Linux.

Linux Distributions 

Linux is actually just a kernel. You can obtain the sources for the kernel to compile and install it on a machine and then obtain and install many other freely distributed software programs to make a complete Linux installation.

These installations are usually referred to as Linux systems, because they consist of much more than just the kernel. Most of the utilities come from the GNU Project of the Free Software Foundation.


Backing Up and Restoring Files

1. Backups are needed to protect data against computer failure, and against people who may harm the system or others' property if the system administrator is not prepared to handle it.
2. We back up important files so that in the event of a failure of hardware, security, or administration, the system can be up and running again with minimal disruption.
3. Backups can be written to high-capacity tape drives or stored on disks.
4. It is often more sensible to back up only user accounts and system configuration files, because the basics can be reinstalled from the distribution CDs; that is quicker and easier than restoring them from a tape archive.
5. The system administrator must decide what to back up and how frequently to back it up.
6. The system administrator may maintain a series of incremental backups, i.e. adding only the files that have changed since the last backup, or multiple full backups.
7. The system administrator may use RAID (redundant array of independent disks) for backup, which is multiple hard drives all containing the same data as insurance against the failure of any one of them, in addition to other backup systems.
8. The system administrator should test a restore at least once, during non-critical hours.
9. The administrator needs to formulate a plan for bringing the system back up in the event of a failure.
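The full-versus-incremental idea in the points above can be sketched with GNU tar's snapshot mechanism. All paths here are throwaway temporary directories, so the sketch is safe to run anywhere; it is guarded because --listed-incremental is a GNU tar feature.

```shell
# Minimal incremental-backup sketch, assuming GNU tar is available.
if tar --version 2>/dev/null | grep -q 'GNU tar'; then
  backup_dir=$(mktemp -d)
  data_dir=$(mktemp -d)
  echo "report v1" > "$data_dir/report.txt"

  # Full (level-0) backup: the snapshot file records what was saved and when.
  tar --create --listed-incremental="$backup_dir/snapshot" \
      --file="$backup_dir/full.tar" -C "$data_dir" .

  # A file changes; the next run with the same snapshot saves only the changes.
  echo "report v2" > "$data_dir/report.txt"
  tar --create --listed-incremental="$backup_dir/snapshot" \
      --file="$backup_dir/incr.tar" -C "$data_dir" .

  tar --list --file="$backup_dir/incr.tar"   # only the changed file is inside
fi
```

A real backup plan would point these commands at real data directories and a tape or disk target, and rotate the snapshot files per backup level.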


Configuring a Secure System
1. There is a common thread in Linux system administration, something that is a constant presence in everything you do: the security of the computer and the integrity of its data.
2. The system administrator's task, first and foremost, is to make certain that no data on the machine or network are likely to become corrupted, whether by hardware or power failure, by misconfiguration or user error (to the extent that the latter can be avoided), or by malicious or inadvertent intrusion from elsewhere.
3. No one involved in computing can have failed to hear of the succession of increasingly serious attacks upon machines connected to the Internet, such as DDoS attacks, e-mail attacks, and worms.
4. Depending on how and to what a Linux machine is connected, the sensitivity of the data it contains, and the uses to which it is put, security can be as simple as turning off unneeded services, monitoring the Red Hat Linux security mailing list to make sure that all security advisories are followed, and otherwise engaging in good computing practices to make sure the system runs robustly.
5. It can also be an almost full-time job, involving levels of security permissions within the system and the systems to which it is connected, elaborate firewalling to protect not just Linux machines but machines that, through their use of non-Linux software, are far more vulnerable, and physical security, i.e. making sure no one steals the machine itself.
6. For any machine that is connected to any other machine, security means hardening against attack and making certain that no one is using your machine as a platform for launching attacks against others.
7. If you are running Web, FTP, or mail servers, make sure that passwords are not easily guessed and not made available to unauthorized persons, that disgruntled former employees no longer have access to the system, and that no unauthorized person may copy files from your machine or machines.
8. Your job as a system administrator is to strike the right balance between maximum utility and maximum safety, all the while bearing in mind that confidence in a secure machine today says nothing about the machine's security tomorrow.


Creating and Maintaining User Accounts
1. On a Linux machine an account must be created for each user, and the system administrator does this.
2. The system administrator can give users the option to change their own passwords.
3. In Red Hat Enterprise Linux, the system administrator can configure a setting that prompts users to change their password periodically.
4. The system administrator can stop a person from accessing an account.
5. The system administrator can restrict web surfing, e.g. a user can surf the web but has access only to particular sites.
6. The system administrator can block certain websites.
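A typical account lifecycle (create, set a password, enforce aging, lock) can be sketched with the standard account tools. The account name jsmith and the password are made up for illustration, and the commands require root, so they are guarded.

```shell
# Account management sketch; 'jsmith' is a hypothetical user name.
# These commands modify the system, so run them only as root on a test box.
if [ "$(id -u)" -eq 0 ] && command -v useradd >/dev/null 2>&1; then
  useradd -m jsmith                          # create the account and its home dir
  echo 'jsmith:ChangeMe.123' | chpasswd      # set an initial password
  chage -M 90 jsmith                         # require a new password every 90 days
  usermod -L jsmith                          # lock the account (stop access)
  grep '^jsmith:' /etc/passwd                # confirm the entry exists
fi
```

chage implements the periodic-password-change policy from point 3, and usermod -L implements point 4 by disabling the password without deleting the account.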


Installing and Configuring Application Software
1. It is possible for individual users to install some applications in their home directories, the drive space set aside for their own files and customizations; such applications are not available to other users without the intervention of the system administrator.
2. If an application is to be used by more than one user, it probably needs to be installed higher up in the Linux file hierarchy, which is a job that can be performed by the system administrator only.
3. The administrator can limit which users may use which applications by creating a "group" for that application and enrolling individual users into that group.
4. New software packages might be installed in /opt if they are likely to be upgraded separately from the Red Hat Linux distribution itself.
5. By doing so, it is simple to retain the old version until you are certain the new version works and meets expectations.
6. Some packages may need to go in /usr/local or even /usr, if they are upgrades of packages installed as part of Red Hat Linux.
7. The location of the installation usually matters only if you compile the application from source code.
8. Red Hat Package Manager (RPM) packages automatically go where they should.
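The basic RPM operations behind point 8 look like the following. The file name example.rpm is a placeholder, so the install/upgrade/erase commands are shown as comments; only the harmless query runs, and it is guarded in case rpm is absent.

```shell
# Package management sketch; 'example.rpm' / 'example' are placeholder names.
if command -v rpm >/dev/null 2>&1; then
  rpm -qa | head -5           # query all: list a few installed packages
  # rpm -ivh example.rpm      # install a new package (needs root and a real file)
  # rpm -Uvh example.rpm      # upgrade in place; RPM picks the install location
  # rpm -e example            # erase (uninstall) a package
fi
```

For a package you compile from source instead, the ./configure --prefix=/opt/appname convention is the usual way to honor the /opt placement described in point 4.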


Installing and Configuring Servers
1. In Linux, "server" has a very broad meaning. The standard Red Hat Linux graphical user interface (GUI) requires a graphical layer called XFree86. This is a server. It runs even on a standalone machine with one user account, and it must be configured.
2. Likewise, printing in Linux takes place only after you have configured a print server.
3. You cannot have a graphical desktop without an X server.
4. But you can have World Wide Web access without a Web server, File Transfer Protocol (FTP) access without running an FTP server, and Internet e-mail capabilities without ever starting a mail server.
5. Whenever a server is connected to machines outside your physical control, security issues arise. You want users to have easy access to the things they need, but you don't want to open up the system you're administering to the whole wide world.
6. The system administrator needs to know exactly which servers are needed and how to employ them, because it is bad practice and a potential security nightmare to enable services that the system is not using and does not need.
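Auditing which services are enabled, per point 6, can be sketched as below. chkconfig is the Red Hat era tool; systemctl is its modern replacement. Both calls are guarded so the sketch degrades gracefully on systems that have neither.

```shell
# List configured services so unneeded ones can be identified and disabled.
if command -v chkconfig >/dev/null 2>&1; then
  chkconfig --list                                   # services per runlevel
elif command -v systemctl >/dev/null 2>&1; then
  systemctl list-unit-files --type=service | head -5 # enabled/disabled units
fi
# Disabling an unneeded service then takes one root command, e.g.:
#   chkconfig telnet off          (SysV)
#   systemctl disable telnet.socket   (systemd)
```

The principle is the same either way: enumerate what starts at boot, and turn off anything the machine's role does not require.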


Monitoring and Tuning Performance

1. System tuning is an ongoing process aided by a variety of diagnostic and monitoring tools.
2. Some performance decisions are made at installation time, while others are added or tweaked later.
3. A good example is the hdparm utility, which can be used to squeeze the best performance from your disk hardware.
4. Proper monitoring allows you to detect a misbehaving application that might be consuming more resources than it should or failing to exit completely on close.
5. Possibly most important, careful system monitoring and diagnostic practices give you an early heads-up when a system component is showing early signs of failure, so that any potential downtime can be minimized.
6. Combined with resources for determining which components are best supported by the system, performance monitoring can in some cases lead to replacement components that are far more robust and efficient.
7. Careful system monitoring, plus wise use of the built-in configurability of Linux, allows you to squeeze the best possible performance from your existing equipment.
8. This ranges from customizing video drivers, to applying special kernel patches, to simply turning off unneeded services to free memory and processor cycles.
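A minimal monitoring pass using the standard tools might look as follows. hdparm needs root and a real disk device (/dev/sda is an assumption here), so that call is guarded; the other tools are guarded too in case they are not installed.

```shell
# Quick performance snapshot with standard diagnostic tools.
command -v uptime >/dev/null 2>&1 && uptime               # load averages
command -v vmstat >/dev/null 2>&1 && vmstat 1 2 | tail -1 # memory, swap, CPU
if [ "$(id -u)" -eq 0 ] && [ -b /dev/sda ] && command -v hdparm >/dev/null 2>&1; then
  hdparm -tT /dev/sda       # benchmark cached vs. buffered disk reads
fi
```

Running a snapshot like this regularly (e.g. from cron) and comparing results over time is what turns raw numbers into the early warning described in point 5.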


The Linux System Administrator
1. Linux is a multiuser, multitasking operating system from the ground up, and hence the system administrator has flexibility and responsibility far beyond those of other operating systems.
2. New tools: administering a system is a sophisticated task, so the system administrator should be acquainted with new tools in order to manage the system.
3. Make no mistake: every system administrator decides which application software and peripherals a machine should have. If a user decides to install a particular application, we can say that the user has taken on the mantle of system administrator.
4. Precaution against attacks: when a system is connected to the Internet it is prone to attacks, such as Distributed Denial of Service (DDoS) attacks or e-mail macro viruses.
5. Understand control of Linux: the Linux system administrator is more likely to understand the necessity of active system administration than are those who run whatever came on the computer, assuming that things came from the factory properly configured.
6. Definition: the Linux system administrator is the person who has "root" access, which is to say the one who is the system's "super user."
7. A standard Linux user is limited as to the things he or she can do with the underlying engine of the system.
8. But the "root" user has unfettered (unrestricted) access to everything: all user accounts, their home directories and the files therein, all system configurations, and all files on the system.


Using Tools to Monitor Security

1. Crackers are clever people who aim to break into other people's computers, whether to amuse themselves or to commit larceny. If there is a vulnerability in a system, they will find it.
2. The Linux development community is quick to find potential exploits and to find ways of slamming shut the door before crackers can enter, by making available new, patched versions of packages in which potential exploits have been found.
3. The first and best security tool is making sure that whenever a security advisory is issued, you download and install the repaired package.
4. As good as the bug trackers are, their job is sometimes reactive. Preventing the use of your machine for nefarious purposes and guarding against intrusion are, in the end, your responsibility alone.
5. If your machine is connected to the Internet, you will be amazed at the number of attempts that are made to break into it, and you will be struck by how critical an issue security is.
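Two everyday security checks follow from the points above: seeing which ports the machine is listening on, and reviewing recent logins. ss is the modern replacement for netstat, so both are tried; each call is guarded so the sketch runs on any system.

```shell
# Which services are exposed to the network right now?
if command -v ss >/dev/null 2>&1; then
  ss -tln                     # TCP ports currently listening
elif command -v netstat >/dev/null 2>&1; then
  netstat -tln
fi
# Who has logged in recently? (last reads the wtmp log; may be empty)
{ command -v last >/dev/null 2>&1 && last -5; } || true
```

Any listening port you cannot explain, or a login you do not recognize, is exactly the kind of early signal these notes say the administrator must chase down.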



Boot Loaders
1. For any operating system to boot on standard PC hardware, you need what is called a boot loader.
2. The boot loader is the first software program that runs when a computer starts.
3. It is responsible for handing over control of the system to the operating system.
4. The boot loader resides in the Master Boot Record (MBR) of the disk, and it knows how to get the operating system up and running.
5. The main choices that come with Linux distributions are GRUB (the Grand Unified Bootloader) and LILO (Linux Loader).
6. GRUB is the most common boot loader shipped with newer Linux distributions, and it has many more features than LILO.
7. Both LILO and GRUB can be configured to boot other, non-native operating systems.


Bootstrapping
Bootstrapping is done in the following two phases.

1) Kernel Loading
i) Once GRUB has started and you have selected Linux as the operating system to boot, the first thing to get loaded is the kernel.
ii) No operating system exists in memory at this point, and PCs (by their unfortunate design) have no easy way to access all of their memory.
iii) Thus, the kernel must load completely into the first megabyte of available random access memory (RAM). In order to accomplish this, the kernel is compressed.
iv) The head of the file contains the code necessary to bring the CPU into protected mode (thereby removing the memory restriction) and to decompress the remainder of the kernel.

2) Kernel Execution
i) With the kernel in memory, it can begin execution. It knows only whatever functionality is built into it, which means any parts of the kernel compiled as modules are useless at this point.
ii) At the very minimum, the kernel must have enough code to set up its virtual memory subsystem and root file system (usually, the ext3 file system).
iii) Once the kernel has started, a hardware probe determines what device drivers should be initialized.
iv) From here, the kernel can mount the root file system; the root file system plays a role similar to that of the C drive in Windows.
v) The kernel mounts the root file system and starts a program called init.



GRUB
1) Most modern Linux distributions use GRUB as the default boot loader during installation.
2) GRUB is the default boot loader for Fedora, Red Hat Enterprise Linux (RHEL), OpenSUSE, Mandrake, Ubuntu, and a host of other Linux distributions.
3) GRUB aims to be compliant with the Multiboot Specification and offers many features.

GRUB Boot Process / Boot Process
1) The GRUB boot process happens in stages.
2) Each stage is handled by special GRUB image files, with each stage helping the next stage along.
3) Two of the stages are essential; the other stages are optional and depend on the particular system setup.

Stages in the GRUB Boot Process

Stage 1
i) The image file used in this stage is essential and is used for booting up GRUB in the first place.
ii) It is usually embedded in the MBR of a disk or in the boot sector of a partition.
iii) The file used in this stage is appropriately named stage1.
iv) A Stage 1 image can next either load Stage 1.5 or load Stage 2 directly.

Stage 2
i) The Stage 2 images actually consist of two types: the intermediate (optional) Stage 1.5 images and the actual stage2 image file.
ii) The optional images are called Stage 1.5; they serve as a bridge between Stage 1 and Stage 2.
iii) The Stage 1.5 images are file system specific, i.e. each one understands the semantics of one particular file system.
(a) The Stage 1.5 images have names of the form x_stage_1_5, where x is a file system type such as e2fs, reiserfs, fat, jfs, minix, or xfs. For example, the Stage 1.5 image required to load an operating system (OS) that resides on a File Allocation Table (FAT) file system has a name like fat_stage1_5.
(b) The Stage 1.5 images allow GRUB to access several file systems. When used, a Stage 1.5 image helps to locate the Stage 2 image as a file within the file system.
iv) The actual stage2 image is the core of GRUB.
v) It contains the code to load the kernel that boots the OS, it displays the boot menu, and it also contains the GRUB shell from which GRUB commands can be entered.
vi) The GRUB shell is interactive and helps to make GRUB flexible. For example, the shell can be used to boot items that are not currently listed in GRUB's boot menu or to bootstrap the OS from an alternate supported medium.
vii) Other types of Stage 2 images are the stage2_eltorito image, the nbgrub image, and the pxegrub image.
viii) The stage2_eltorito image is a boot image for CD-ROMs.
ix) The nbgrub and pxegrub images are both network-type boot images that can be used to bootstrap a system over the network (using Bootstrap Protocol [BOOTP], Dynamic Host Configuration Protocol [DHCP], Preboot Execution Environment [PXE], Etherboot, or the like).
x) A quick listing of the contents of the /boot/grub directory of most Linux distributions will show some of the GRUB images.
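The listing mentioned in point x can be done directly. The loop is guarded because the directory's location varies (legacy GRUB uses /boot/grub, while GRUB 2 on some distributions uses /boot/grub2) and it may be absent entirely, e.g. inside a container.

```shell
# Show the GRUB image files, wherever this system keeps them.
for dir in /boot/grub /boot/grub2; do
  if [ -d "$dir" ]; then
    echo "== $dir =="
    ls "$dir"      # on legacy GRUB, expect stage1, *_stage1_5, stage2, ...
  fi
done
```

On a legacy-GRUB Red Hat system the output typically includes stage1, several x_stage_1_5 files, and stage2, matching the stages described above.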


LILO
1. LILO, short for Linux Loader, is a boot manager.
2. It allows you to boot multiple operating systems, provided each system exists on its own partition.
3. In addition to booting multiple operating systems, with LILO you can choose various kernel configurations or versions to boot. This is especially handy when you're trying kernel upgrades before adopting them.
4. Configuring LILO is straightforward: a configuration file (/etc/lilo.conf) specifies which partitions are bootable and, if a partition is Linux, which kernel to load.
5. When the /sbin/lilo program runs, it takes this partition information and rewrites the boot sector with the necessary code to present the options as specified in the configuration file.
6. At boot time, a prompt (usually lilo:) is displayed, and you have the option of specifying the operating system; a default is selected after a timeout period.
7. LILO loads the necessary code, the kernel, from the selected partition and passes full control over to it.
8. LILO is what is known as a two-stage boot loader. The first stage loads LILO itself into memory and prompts you for booting instructions with the lilo: prompt or a colorized boot menu.
9. Once you select the OS to boot and press Enter, LILO enters the second stage, booting the Linux operating system.
10. LILO has somewhat fallen out of favor with most of the newer Linux distributions. Some of the distributions do not even give you the option of selecting or choosing LILO as your boot manager.
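A sketch of the /etc/lilo.conf file described in point 4 is shown below. The device names (/dev/hda*), kernel version, and labels are illustrative assumptions, not values from these notes.

```
# Hypothetical /etc/lilo.conf for a dual-boot machine.
boot=/dev/hda                 # write the boot sector to the MBR of the first disk
prompt                        # show the lilo: prompt at boot
timeout=50                    # wait 5 seconds, then boot the default entry

image=/boot/vmlinuz-2.4.20    # a Linux kernel entry
    label=linux
    root=/dev/hda2            # partition holding the root file system
    read-only

other=/dev/hda1               # a non-Linux OS on its own partition
    label=windows
```

After editing this file you must run /sbin/lilo (as root) so the boot sector is rewritten; unlike GRUB, LILO does not read its configuration at boot time.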



rc Scripts
1. The /etc/inittab file specifies which scripts to run when runlevels change.
2. These scripts are responsible for either starting or stopping the services that are particular to the runlevel.
3. Because of the number of services that need to be managed, rc scripts are used.
4. The main one, /etc/rc.d/rc, is responsible for calling the appropriate scripts in the correct order for each runlevel.
5. Such a script could easily become extremely unmanageable; to keep this from happening, a slightly more elaborate system is used.
6. For each runlevel, a subdirectory exists in the /etc/rc.d directory.
7. These runlevel subdirectories follow the naming scheme rcX.d, where X is the runlevel. For example, all the scripts for runlevel 3 are in /etc/rc.d/rc3.d.
8. In the runlevel directories, symbolic links are made to scripts in the /etc/rc.d/init.d directory.
9. Instead of using the name of the script as it exists in the /etc/rc.d/init.d directory, however, the symbolic links are prefixed with an S if the script is to start a service, or with a K if the script is to stop (or kill) a service.
10. These two letters are case-sensitive. You must use uppercase letters, or the startup scripts will not recognize them.
11. In many cases, the order in which these scripts are run makes a difference. For example, you can't start services that rely on a configured network interface without first enabling and configuring the network interface.
12. To enforce order, a two-digit number is suffixed to the S or K. Lower numbers execute before higher numbers; for example, /etc/rc.d/rc3.d/S10network runs before /etc/rc.d/rc3.d/S55sshd (S10network configures the network settings, and S55sshd starts the Secure Shell [SSH] server).
13. The scripts pointed to in the /etc/rc.d/init.d directory are the workhorses: they perform the actual process of starting and stopping services.
14. When /etc/rc.d/rc runs through a specific runlevel's directory, it invokes each script in numerical order.
15. It first runs the scripts that begin with a K and then the scripts that begin with an S. For scripts starting with K, a parameter of stop is passed. Likewise, for scripts starting with S, the parameter start is passed.
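The K-then-S, lowest-number-first ordering can be simulated with plain shell globbing in a scratch directory, since shell globs expand in sorted order. The script names below are illustrative stand-ins for real rc3.d links.

```shell
# Build a fake runlevel directory and show the order /etc/rc.d/rc would use.
rcdir=$(mktemp -d)
touch "$rcdir/S10network" "$rcdir/S55sshd" "$rcdir/K20nfs" "$rcdir/K80random"

# K* scripts run first (passed "stop"), in ascending numeric order ...
for script in "$rcdir"/K*; do
  echo "$(basename "$script") stop"
done
# ... then S* scripts (passed "start"), also in ascending order.
for script in "$rcdir"/S*; do
  echo "$(basename "$script") start"
done
```

This is why the two-digit prefix works at all: no scheduler is involved, just lexicographic file-name ordering.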



The init Process and Runlevels
1. The init process is the first non-kernel process that is started, and, therefore, it always gets the process ID number of 1.
2. init reads its configuration file, /etc/inittab, and determines the runlevel at which it should start.
3. Essentially, a runlevel dictates the system's behavior.
4. Each level (designated by an integer between 0 and 6) serves a specific purpose.
5. The runlevel named by an initdefault entry is selected if one exists; otherwise, you are prompted to supply a runlevel value.

Run Level : Description

0 : Halt the system
1 : Enter single-user mode
2 : Multiuser mode, but without Network File System (NFS)
3 : Full multiuser mode (normal)
4 : Unused
5 : Same as runlevel 3, except using an X Window System login rather than a text-based login
6 : Reboot the system


6. When it is told to enter a runlevel, init executes a script, as dictated by the /etc/inittab file.
7. The default runlevel that the system boots into is determined by the initdefault entry in the /etc/inittab file.
8. If, for example, the entry in the file is id:3:initdefault:, the system will boot into runlevel 3. If, on the other hand, the entry is id:5:initdefault:, the system will boot into runlevel 5, with the X Window subsystem running and a graphical login screen.
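Reading the initdefault entry out of an inittab can be sketched as follows; a here-document stands in for the real /etc/inittab so the example runs anywhere.

```shell
# Parse the default runlevel from a sample inittab.
sample_inittab=$(cat <<'EOF'
# Default runlevel for this machine:
id:3:initdefault:
si::sysinit:/etc/rc.d/rc.sysinit
EOF
)
# inittab fields are colon-separated: id : runlevel : action : process
runlevel=$(printf '%s\n' "$sample_inittab" | awk -F: '$3 == "initdefault" {print $2}')
echo "default runlevel: $runlevel"
```

Swapping the sample for the real file (awk -F: '$3 == "initdefault" {print $2}' /etc/inittab) gives the same answer on a SysV-init system.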


Commonly used RAID Structures in Linux

1) RAID 1 IN SOFTWARE i) RAID stands for Redundant Array of Independent Disks. RAID 1, known as disk mirroring, is a redundant RAID disk mode. ii) A mirror of the first disk is kept on the other disks. iii) If all disks crash but one, all data can still be recovered. iv) To work properly, RAID1 needs two or more disks, and zero or more spare disks.

2) RAID 5 IN SOFTWARE i) RAID 5 combines the ability to use a large number of disks while still maintaining some redundancy. ii) It uses three or more disks, and spare disks are optional. iii) The final RAID 5 array contains the combined capacity of all disks except one. iv) The equivalent of one disk's worth of space is taken up by the parity information, which is written evenly across all the disks. v) A RAID 5 array can survive the loss of one disk, but if more than one disk fails, all data is lost. 3) RAID IN HARDWARE i) The principles of the software RAID levels also apply to hardware RAID setups.

ii) The main difference is that in hardware RAID the disks have their own RAID controller with built-in software that handles the RAID disk setup and I/O. iii) To Linux, the hardware RAID interface is transparent, so the hardware RAID disk array looks like one giant disk. 4) STORAGE AREA NETWORKS i) A Storage Area Network (SAN) is a high-availability, high-performance disk storage structure that enables several systems to store their partitions on the same large disk array. ii) A server handles this large disk array, as well as such things as disk failover and regular information backup. iii) This system provides reliable storage and fast access, since the servers are connected to the disk array through a SCSI link or through Fibre Channel.
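Software RAID as described above is built on Linux with mdadm. The device names /dev/sdb1 and /dev/sdc1 are made up, and creating an array is destructive, so the command is guarded to run only as root with those devices present; the capacity arithmetic for RAID 5 runs unconditionally.

```shell
# RAID 1 (mirror) creation sketch; /dev/sdb1 and /dev/sdc1 are hypothetical.
if [ "$(id -u)" -eq 0 ] && [ -b /dev/sdb1 ] && [ -b /dev/sdc1 ] \
   && command -v mdadm >/dev/null 2>&1; then
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
fi

# RAID 5 capacity rule from point iii/iv: one disk's worth goes to parity.
disks=4
disk_gb=500
usable_gb=$(( (disks - 1) * disk_gb ))
echo "RAID 5 over $disks x ${disk_gb}GB disks gives ${usable_gb}GB usable"
```

A RAID 5 array would use --level=5 with three or more devices (plus optional --spare-devices), and /proc/mdstat shows the array's health afterward.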


Disk Partitioning
To see how your Linux disks are currently partitioned and what file systems are on them, look at the /etc/fstab file.

Partitioning an x86 machine
1. When partitioning an x86 PC, you need to be mindful of the limitations present in the x86 architecture.
2. You are allowed to create four primary partitions, and primary partitions are the only partitions that are bootable.
3. You can create more partitions if you make logical partitions; logical partitions are set inside a primary partition.
4. So if you choose to make logical partitions, you are allowed to make only three primary partitions for operating system use, and the fourth partition is dedicated to hosting the logical partitions.

Mounting other OS partitions/slices
1. Not only can Linux read other operating systems' file systems, it can also mount disk drives from other systems and work with their partition tables.
2. However, it is necessary to compile two options into the kernel to do this.
3. You must have both file system support and file partitioning support turned on in the kernel.
4. Usually file system support is compiled as a module by default, but disk partition support usually has to be explicitly compiled in.


5. Some common partitioning schemes that Linux supports are: x86 partitions, BSD disklabel, Solaris x86, Unixware, Alpha, OSF, SGI, and Sun.
6. Mounting other operating systems' partitions is helpful if you need to put a Sun hard disk into a Linux machine, for example. You may need to do this if the original Sun system has gone bad and you need to recover the information that was on its disk, or if the disk is the target of a forensic computer crime investigation and you need to copy its contents to another machine to preserve evidence.
7. This method takes advantage of the fact that copying a large amount of data is much faster across a SCSI connection than across a network.
8. If you need to copy a large amount of raw disk data across a network, you can use the Network Block Daemon, which enables other machines to mount a disk on your machine as if it were on their machine.
9. When running the Network Block Daemon, make sure that you have the appropriate partition support compiled into the kernel.
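Inspecting partitions and the fstab mapping can be sketched as below. fdisk -l needs root, so it is guarded; the fstab line parsed here is a sample string, not the real file, so the parsing part runs anywhere.

```shell
# Show partition tables of attached disks (root only).
if [ "$(id -u)" -eq 0 ] && command -v fdisk >/dev/null 2>&1; then
  fdisk -l 2>/dev/null | head -20
fi

# /etc/fstab maps devices to mount points; parse a sample entry.
# Fields: device, mount point, fs type, options, dump flag, fsck order.
sample_fstab='/dev/hda2  /usr  ext3  defaults  1 2'
set -- $sample_fstab     # intentional word-splitting into positional params
echo "device=$1 mountpoint=$2 fstype=$3"
```

The same six-field layout is what you read when, as the section's opening line suggests, you look at the real /etc/fstab to see how the disks are partitioned and mounted.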


File System Structure

1. Red Hat has chosen to follow the standards outlined in the Filesystem Hierarchy Standard (FHS). 2. The FHS provides specific requirements for the placement of files in the directory structure; placement is based on the type of information contained in the file. 3. Basically, two categories of file information exist: shareable or unshareable, and variable or static. 4. Shareable files are files that can be accessed by other hosts; unshareable files can be accessed only by the local system. 5. Variable files contain information that can change at any time on its own, without anyone actually changing the file. A log file is an example of such a file. A static file contains information that does not change unless a user changes it. Program documentation and binary files are examples of static files. 6. Linux's method of mounting its file systems in a flat, logical, hierarchical manner has advantages over the file system mounting method used by Windows. 7. Linux references everything relative to the root file system point /, whereas Windows has a different root mount point for every drive. 8. If you have a /usr partition that fills up in Linux, you can create another file system called /usr/local and move your /usr/local data from /usr to the new file system definition. 9. This practice frees up space on the /usr partition, and is an easy way to bring your system back up to a fully functional state.


10. This trick wouldn't work on a Windows machine, because Windows maps its file locations to static device disk definitions.
11. You would have to change programs' file references from c:\ to d:\, and so forth. Linux's file system management is another good reason to use Linux on your production servers instead of Windows.
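The /usr/local relocation described above can be sketched as a root-session transcript. This is a hedged example, not a definitive procedure: the device name /dev/hdb1 and the temporary mount point are hypothetical, and every command here requires root.

```
# create a file system on the new disk's partition (hypothetical device)
mke2fs /dev/hdb1

# mount it temporarily and copy the existing /usr/local data over
mount /dev/hdb1 /mnt/newlocal
cp -a /usr/local/. /mnt/newlocal/

# replace the old directory contents with the new mount
rm -rf /usr/local/*
umount /mnt/newlocal
mount /dev/hdb1 /usr/local

# add an /etc/fstab entry so the mount persists across reboots
echo '/dev/hdb1  /usr/local  ext2  defaults  1 2' >> /etc/fstab
```

After the final mount, paths under /usr/local keep working unchanged, which is exactly why the flat hierarchy makes this trick possible.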

The / directory
1) The / directory is called the root directory and is at the top of the file system structure.
2) In many systems, the / directory is the only partition on the system, and all other directories are mounted under it.
3) The primary purposes of the / directory are booting the system and correcting any problems that might be preventing the system from booting.
4) According to the FHS, the / directory must contain, or have links to, the following directories:
i) bin — This directory contains command files for use by the system administrator or other users. The bin directory cannot contain subdirectories.
ii) boot — On Red Hat systems, this is the directory containing the kernel, the core of the operating system. Also in this directory are files related to booting the system, such as the boot loader.
iii) dev — This directory contains files with information about devices, either hardware or software devices, on the system.
iv) etc — This directory and its subdirectories contain most of the system configuration files. If you have the X Window System installed on your system, the X11 subdirectory is located here. Networking-related files are in the subdirectory sysconfig. Another subdirectory of etc is skel, which is used to populate new users' home directories when the users are created.
v) home — This directory contains the directories of users on the system. Subdirectories of home are named for the user to whom they belong.
vi) lib — The shared system libraries and kernel modules are contained in this directory and its subdirectories.
vii) mnt — This directory is the location of the mount point for temporarily mounted file systems, such as a floppy or CD.
viii) opt — This directory and its subdirectories are often used to hold applications installed on the system.
ix) proc — Information about system processes is included in this directory.
x) sbin — Contained in this directory are system binaries used by the system administrator or the root user.
xi) tmp — This directory contains temporary files used by the system.
xii) usr — This directory is often mounted on its own partition. It contains shareable, read-only data. Subdirectories can be used for applications, typically under /usr/local.
xiii) var — Subdirectories and files under var contain variable information, such as system logs and print queues.


Linux—Supported File Systems
 One reason that Linux supports so many file systems is the design of its Virtual File System (VFS) layer. The VFS layer is a data abstraction layer between the kernel and the userspace programs that issue file system commands.
 The VFS layer avoids duplicating common code across file systems. It provides a fairly universal, backward-compatible method for programs to access all of the different forms of file system support: a single, small API set accesses each of the file system types, which simplifies programming file system support.

Standard disk file systems
1) MINIX
i) Minix holds an important spot in Linux history, since Torvalds was trying to make a better Minix than Minix. He wanted to improve upon the 16-bit Minix kernel by making a better 32-bit kernel that did everything Minix did and more. This historical connection is the main reason why Minix file system support is available in Linux.
ii) One reason to run a Minix file system is that it has less metadata overhead than ext2, so it is a preferable choice for boot and rescue disks. Rescue disks made with older versions of Red Hat Linux use a Minix file system.

2) EXT2
i) ext2 has become the standard file system for Linux. It is the next generation of the ext file system. The ext2 implementation has not changed much since it was introduced with the 1.0 kernel back in 1993. Since then, a few new features have been added; one of these was sparse super blocks, which increases file system performance.
ii) ext2 was designed to make it easy to add new features, so that it can constantly evolve into a better file system. Users can take advantage of new features without reformatting their old ext2 file systems. ext2 also has the added bonus of being designed to be POSIX compliant. New features still in the development phase are access control lists, undelete, and on-the-fly compression.
iii) ext2 is flexible, can handle file systems as large as 4 TB, and supports long filenames up to 1,012 characters. In case user processes fill up a file system, ext2 normally reserves about 5 percent of disk blocks for exclusive use by root so that root can easily recover from that situation. Modern Red Hat boot and rescue diskettes now use ext2 instead of Minix.

3) EXT3
i) The extended 3 file system is a new file system introduced in Red Hat 7.2. ext3 provides all the features of ext2, plus journaling and backward compatibility with ext2. The backward compatibility enables you to still run kernels that are only ext2-aware with ext3 partitions.
ii) You can upgrade an ext2 file system to an ext3 file system without losing any of your data. This upgrade can be done during an update to Red Hat 7.2.
iii) ext3 support comes in the kernels provided with the Red Hat 7.2 distribution. If you download a kernel from somewhere else, you need to patch the kernel to make it ext3-aware, using the kernel patches from the Red Hat FTP site. It is much easier to just stick with kernels from Red Hat.
iv) ext3's journaling feature reduces the time it takes to bring the file system back to a sane state if it has not been cleanly unmounted (that is, in the event of a power outage or a system crash).

v) Under ext2, when a file system is uncleanly unmounted, the whole file system must be checked. This takes a long time on large file systems. ext3 keeps a record of uncommitted file transactions and applies only those transactions when the system is brought back up.

vi) A cleanly unmounted ext3 file system can be mounted and used as an ext2 file system. This capability can come in handy if you need to revert to an older kernel that is not aware of ext3. The kernel simply sees the ext3 file system as an ext2 file system.
vii) ext3's journaling feature involves a small performance hit to maintain the file system transaction journal. Therefore it is recommended that you use ext3 mostly for your larger file systems, where the journaling performance hit is made up for by the time saved in not having to run fsck on a huge ext2 file system.
4) REISERFS
i) The Reiser file system is a journaling file system designed for fast server performance. It is more space-efficient than most other file systems because it does not take up a minimum of one block per file.
ii) If you write a bunch of really small files to disk, reiserfs squeezes them all into one block instead of writing one small file per block as other file systems do. Reiserfs also does not have fixed space allocation for inodes, which saves about 6 percent of your disk space.
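The in-place ext2-to-ext3 upgrade mentioned above is normally done by adding a journal with tune2fs. A hedged sketch follows; the device name and mount point are hypothetical, and both commands require root:

```
# add a journal to an existing ext2 file system, turning it into ext3
tune2fs -j /dev/hda2

# mount it as ext3 (or update its /etc/fstab entry accordingly)
mount -t ext3 /dev/hda2 /home
```

Because of the backward compatibility described above, the same partition can still be mounted as ext2 by an older kernel after a clean unmount.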

5) SYSTEMV
i) Linux currently provides read support for SystemV partitions; write support is experimental.
ii) The SystemV file system driver currently supports AFS/EAFS/EFS, Coherent FS, SystemV/386 FS, Version 7 FS, and Xenix file systems.

6) UFS
i) ufs is used in Solaris and early BSD operating systems. Linux provides read support; write support is experimental.
7) FAT
i) FAT is one of a few different file systems used with Windows over the years. Almost every computer user has used FAT at one time or another, since it was the base file system at the heart of all early Windows operating systems.
ii) FAT was originally created for QDOS and used on 360K (double-density, double-sided) floppy disks. Its address space has since been extended from 12 bits to 32 bits, so it can handle very large file systems. Nowadays it is possible to create FAT file systems over a terabyte in size.

8) NTFS
i) NTFS is the next generation of HPFS. It comes with all business versions of Microsoft operating systems beginning with Windows NT.
ii) Unlike FAT, it is a b-tree file system, which gives it a performance and reliability advantage over FAT.
9) HPFS
i) The High Performance File System first came with OS/2 Version 1, created by Microsoft. It is the standard OS/2 file system.
ii) Since OS/2 usage has dropped off significantly, HPFS has become a relatively uncommon file system.




10) HFS
i) The Hierarchical File System is used with older versions of Mac OS. Macintosh-formatted files have two parts: a data fork and a resource fork.
ii) The resource fork contains Macintosh operating system-specific information such as dialog boxes and menu items. The data fork contains the meat of the file data, and is readable by Unix.
iii) Linux-to-Macintosh file conversion tools are available bundled in the macutils package. The tools included in that package are: macunpack, hexbin, macsave, macstream, binhex, tomac, and frommac. See the previous section called "Important File System Commands" to read more about these tools.
iv) The modern Mac OS file system, hfs+, is currently not supported under Linux.



Logical Volume Manager (LVM)
1. The Logical Volume Manager (LVM) enables you to be much more flexible with your disk usage than you can be with conventional old-style partitions.
2. Normally, if you create a partition, you have to keep the partition at that size indefinitely.
3. For example, if your system logs have grown immensely and you have run out of room on your /var partition, increasing the partition size without LVM is a big pain. You would have to get another disk drive, create a /var mount point on it too, and copy all your data from the old /var to the new /var disk location.
4. With LVM in place, you could add another disk, assign that disk to be part of the /var volume, and then use the LVM file system resizing tool to increase the file system size to match the new volume size.
5. Normally you might think of disk drives as independent entities, each containing some data space, but when you use LVM, you need a new way of thinking about disk space.
6. First you have to understand that space on any disk can be used by any file system. A volume group is the term used to describe various disk spaces (either whole disks or parts of disks) that have been grouped together into one pool.
7. Volume groups are then subdivided into logical volumes.
8. Logical volumes are akin to the historic idea of partitions. You can then use a file system creation tool such as mke2fs to create a file system on the logical volume.
9. The Linux kernel sees a logical volume in the same way it sees a regular partition.
10. Some Linux tools for working with logical volumes are pvcreate for creating physical volumes, vgcreate for creating volume groups, vgdisplay for showing volume groups, and mke2fs for creating a file system on your logical volume.
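The tools listed above fit together as a short root-session sequence. This is a hedged sketch only: the device names /dev/hdb and /dev/hdc, the volume group name vg_var, the logical volume name lv_var, and the 10G size are all hypothetical.

```
# mark whole disks (or partitions) as LVM physical volumes
pvcreate /dev/hdb /dev/hdc

# group the physical volumes into one volume group
vgcreate vg_var /dev/hdb /dev/hdc
vgdisplay vg_var              # verify the group and its free space

# carve a logical volume out of the group
lvcreate -L 10G -n lv_var vg_var

# create a file system on the logical volume and mount it
mke2fs /dev/vg_var/lv_var
mount /dev/vg_var/lv_var /var
```

The kernel then treats /dev/vg_var/lv_var like any regular partition, as noted above.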


Memory file systems and virtual file systems

These file systems do not exist on disk in the same way that traditional file systems do. They either exist entirely in system memory, or they are virtual because they are an interface to system devices. Some of them are:
1. CRAMFS
a. cramfs is designed to cram a file system onto a small ROM, so it is small, simple, and able to compress things well. The largest file size is 16MB, and the largest file system size is 256MB.
b. Since cramfs is so compressed, it isn't instantly updateable. The mkcramfs tool needs to be run to create or update a cramfs disk image.
c. The image is created by compressing files one page at a time, which enables random page access.
d. The metadata is not compressed, but it has been optimized to take up much less space than in other file systems.
e. For example, only the low 8 bits of the gid are stored. This saves space but also presents a potential security issue.
2. TMPFS
a. tmpfs is structured around the idea that whatever is put in the /tmp file system is accessed again shortly.
b. tmpfs exists solely in memory, so what you put in /tmp doesn't persist between reboots.
c. Mounting /tmp as an in-memory file system is a performance boost, and is done by default in Solaris.
d. Mounting /tmp as an in-memory file system hasn't traditionally been done in Linux, because the ext2 file system has pretty good performance already.
e. But for those who feel that they need the performance gains of storing /tmp in memory, this option is now available in Linux.
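An in-memory /tmp can be requested with a one-line /etc/fstab entry. A hedged example follows; the 100 MB size cap is an arbitrary illustration, not a recommendation:

```
# /etc/fstab entry: mount a size-capped tmpfs on /tmp
tmpfs   /tmp   tmpfs   size=100m   0 0
```

Since tmpfs lives in memory, anything under /tmp disappears at reboot, exactly as described above.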

3. RAMFS
a. ramfs is basically cramfs without the compression.
4. /DEV/PTS
a. /dev/pts is a lightweight version of devfs. Instead of having all the device files supported in the virtual file system, it provides support only for virtual pseudo-terminal device files. /dev/pts was implemented before devfs.
5. DEVFS
a. The Device File System (devfs) is another way to access "real" character and block special devices on your root file system.
b. The old way used major and minor numbers to register devices. devfs enables device drivers to register devices by name instead.


System-wide Shell Configuration Scripts:
1. These files determine the default environment settings of system shells and what functions are run every time a user launches a new shell.
2. These files and scripts are located in /etc and affect all shells used on the system.
3. An individual user can also set up a default configuration file in their home directory that affects only their own shells.
4. This ability is useful in case the user wants to add some extra directories to their path, or some aliases that only they use.
5. When these files are used in the home directory, the file names remain the same, except that they have a '.' in front of them. So /etc/bashrc affects bash shells system-wide, but /home/User/.bashrc affects only that user's shells.

Some of the configuration scripts are as follows:
 SHELL CONFIG SCRIPTS: BASHRC, CSH.CSHRC, ZSHRC
1. bashrc is read by bash, csh.cshrc is read by tcsh, and zshrc is read by zsh.
2. These files are read every time a shell is launched, not just upon login, and they determine the settings and behaviors of the shells on the system.
3. These are the places to put functions and aliases.
Note :

tcsh is a Unix shell based on and compatible with the C shell (csh). It is essentially the C shell with programmable command line completion, command-line editing, and a few other features.


The Z shell (zsh) is a Unix shell that can be used as an interactive login shell and as a powerful command interpreter for shell scripting.

 PROFILE
1. This file is read upon login by all shells except tcsh and csh. Bash falls back to reading it if there is no .bash_profile.
2. profile is a good place to set paths, because it is where you set environmental variables that are passed on to child processes in the shell.
3. Including more default paths than are necessary can pose a security risk.
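A typical use of profile is extending the default search path. A minimal sketch, assuming you want /usr/local/bin on every user's path (the directory is just an example):

```shell
# append a directory to the search path and export it to child processes
PATH="$PATH:/usr/local/bin"
export PATH
echo "$PATH"
```

Because profile is read at login, every child process of the login shell inherits the exported PATH.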


BASH

Read upon Startup

1. /etc/profile
2. ~/.bash_profile
3. ~/.bash_login
4. ~/.profile

Read upon Log-out

1. ~/.bash_logout

TCSH
1. /etc/csh.cshrc
2. /etc/csh.login

Config Files in User Home Directory
1. ~/.tcshrc or ~/.cshrc
2. ~/.history
3. ~/.login
4. ~/.cshdirs


ZSH
1. /etc/zshenv
2. ~/.zshenv
3. /etc/zprofile
4. ~/.zprofile
5. /etc/zshrc
6. ~/.zshrc
7. /etc/zlogin

Non-Login Shell
1. ~/.bashrc


Read upon Log-out
1. ~/.zlogout
2. /etc/zlogout

System configuration files in the /etc/sysconfig directory
The system configuration files in the /etc/sysconfig directory are quite varied. These files set the parameters used by many of the system hardware devices, as well as the operation of some system services. Some of the files in the /etc/sysconfig directory are:
1. /etc/sysconfig/apmd:
a. apmd contains configuration information for the Advanced Power Management daemon.
b. This is most useful for laptops rather than servers, since it contains many settings for suspending your Linux machine and restoring it again.
c. Some settings in this file include when you should be warned that battery power is getting low, and whether Linux should sync with the hardware clock when the machine comes back from suspend mode.

2. /etc/sysconfig/clock
a. This file contains information on which time zone the machine is set to, and whether or not it is using Greenwich Mean Time for its system clock.
b. This file controls the interpretation of values read from the system clock.
c. UTC=value, where value is a Boolean: true indicates that the hardware clock is set to Universal Time; any other value indicates that it is set to local time.
3. /etc/sysconfig/amd
a. amd is the file system automounter daemon. It automatically mounts an unmounted file system whenever a file or directory within that file system is accessed.
b. File systems are automatically unmounted again after a period of disuse.
c. This file is where you add amd options to be applied every time amd runs. Options include specifying where amd activity should be logged, and how long amd should wait before unmounting an idle file system.
4. /etc/sysconfig/ups
a. This file contains information on what UPS is attached to your system.
b. You can specify your UPS model, to make it easier for the Linux system to communicate with the UPS when it needs to shut down the system.
5. /etc/sysconfig/irda
a. This file controls how infrared devices on your system are configured at startup.

6. /etc/sysconfig/keyboard
i. This file controls the behaviour of the keyboard.
7. /etc/sysconfig/mouse
i. This file is used by /etc/init.d/gpm to specify information about the available mouse.


8. /etc/sysconfig/network
i. This file is used to specify information about the desired network configuration.
9. /etc/sysconfig/rhn
i. This directory contains configuration files and GPG keys for Red Hat Network. No files in this directory should be edited by hand.
10. /etc/sysconfig/iptables
i. This file stores information used by the kernel to set up packet-filtering services at boot time or whenever the service is started. You should not modify this file by hand unless you are familiar with how to construct iptables rules.
ii. If you are running an X server, you can type system-config-securitylevel from a terminal prompt, or select Applications – System Settings – Security Level from the main menu.


System Configuration Files
1. The system configuration files in the /etc directory are the first places a system administrator goes after installing a system to set it up.
2. The /etc directory is probably the most often visited directory by a system administrator, after their own home directory and /var/log.
3. All of the system-wide important configuration files are found either in /etc or in one of its subdirectories.
4. An advantage to keeping all system configuration files under /etc is that it is easier to restore configurations for individual programs.
5. It is important that files in /etc be modifiable only by appropriate users. Generally this means being modifiable only by root, because these files are so important and their contents so sensitive (such as users' hashed passwords and the host's SSH keys).
6. Hence it is important to keep the file permissions set properly on everything in /etc.
7. Almost all files should be owned by root, and nothing should be world-writable.
8. Some files, such as /etc/shadow, where users' hashed passwords are stored, and /etc/wvdial.conf, which stores dial-up account names and passwords, should not even be world-readable.
9. The /etc/sysconfig directory contains configuration scripts written and configured by Red Hat and Red Hat administration tools.
10. /etc/sysconfig contains both system and networking configuration files.
11. Putting these files in /etc/sysconfig distinguishes them from other /etc configuration files not designed by Red Hat.


12. Keeping these files in a separate directory reduces the risk of other developers writing configuration files with the same names and putting them in the same place as existing config files.
13. The system configuration files can fall within a few different functions. Some specify system duties, such as logging and automatically running programs with cron.
14. Some system configuration files set the appearance of the system, such as colors for directory listings and the banners that pop up when someone logs in.

Note :- cron scripts are scripts that run at specified intervals (similar to scheduled triggers).


System environmental settings
 The following files deal with system environmental settings.
1. /etc/motd: This file contains the message that users see every time they log in.
2. issue: Whatever is in this file shows up as a pre-login banner on your console.
3. issue.net: This file generally contains the same thing as /etc/issue. It shows up when you attempt to telnet into the system.
4. aliases: /etc/aliases is the email aliases file for the Sendmail program; Postfix uses /etc/postfix/aliases.
a. For example, the root user's mailbox can be aliased as:
b. root: taleen
c. root: [email protected]
5. fstab: fstab contains important information about your file systems, such as what file system type the partitions are, where they are located on the hard drive, and what mount point is used to access them.
6. grub.conf: The /etc/grub.conf file is a symbolic link to the actual file, which is located at /boot/grub/grub.conf.
7. cron files: cron is a daemon that executes commands according to a preset schedule that a user defines. It wakes up every minute and checks all cron files to see what jobs need to be run at that time.
a. User crontab files are stored in /var/spool/cron/.
b. System cron files are stored in the /etc directory: cron.d, cron.daily, cron.hourly, cron.monthly, cron.weekly.
8. syslog.conf: The syslog daemon logs any notable events on your local system. It can store logs in a local file or send them to a remote log host for added security. It can also accept logs from other machines when acting as a remote log host. These options and more, such as how detailed the logging should be, are set in the syslog.conf file.
a. Authentication privilege messages contain somewhat sensitive information, so they are logged to /var/log/secure. That file can be read by root only, whereas /var/log/messages is sometimes set to be readable by everyone.
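A user crontab line gives minute, hour, day of month, month, and day of week, followed by the command. A hedged example (the script path is hypothetical):

```
# run a backup script at 2:30 a.m. every day
30 2 * * * /usr/local/bin/backup.sh
```

A user would install such a line with crontab -e, after which cron's once-a-minute check picks it up automatically.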


9. ld.so.conf: This configuration file is used by ldconfig, which configures dynamic linker runtime bindings. It contains a listing of directories that hold shared libraries. Shared library files typically end with .so, whereas static library files typically end with .a, indicating they are an archive of objects.
10. logrotate.conf: logrotate.conf and the files within the logrotate.d directory determine how often your log files are rotated by the logrotate program. Log rotation refers to the process of deleting older log files and replacing them with more recent ones. logrotate can automatically rotate, compress, remove, and mail your log files. Log files can be rotated based on size or on time, such as daily, weekly, or monthly.
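A logrotate stanza, placed in logrotate.conf or as a file under logrotate.d, might look like the following hedged sketch (the log file name is hypothetical):

```
# rotate this log weekly, keep 4 old copies, compress them,
# and don't complain if the file is missing
/var/log/example.log {
    weekly
    rotate 4
    compress
    missingok
}
```

Swapping weekly for daily or monthly, or adding a size directive, changes the rotation trigger as described above.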


Classless Interdomain Routing (CIDR)
1. CIDR was invented several years ago to keep the Internet from running out of IP addresses.
2. The class system of allocating IP addresses can be very wasteful. Anyone who could reasonably show a need for more than 254 host addresses was given a Class B address block of 65,534 host addresses.
3. Even more wasteful was allocating companies and organizations Class A address blocks, which contain over 16 million host addresses.
4. People realized that addresses could be conserved if the class system were eliminated. By accurately allocating only the amount of address space that was actually needed, the address space crisis could be avoided for many years.
5. This solution was first proposed in 1992 as a scheme called supernetting. Under supernetting, the class subnet masks are extended so that a network address and subnet mask can, for example, specify multiple Class C subnets with one address.
6. For example, if you needed about a thousand addresses, you could supernet 4 Class C networks together.
7. CIDR will probably keep the Internet happily in IP addresses for the next few years at least.
8. After that, IPv6, with 128-bit addresses, will be needed. Under IPv6, even careless address allocation would comfortably enable a billion unique IP addresses for every person on earth.
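The supernetting arithmetic above can be checked directly: four Class C (/24) networks aggregate into one /22, and usable hosts are 2^(32 - prefix) minus the network and broadcast addresses. A small shell sketch:

```shell
prefix=22
# usable hosts = 2^(32 - prefix) - 2 (network and broadcast addresses)
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "A /$prefix network provides $hosts usable host addresses"
```

This yields 1022 usable addresses in one block, versus 4 × 254 = 1016 when the same space is split into four separate Class C networks.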


Configuring Dynamic Host Configuration Protocol
1. Using DHCP, you can have an IP address and other information automatically assigned to the hosts connected to your network.
2. This method is quite efficient and convenient for large networks with many hosts, because the process of manually configuring each host is quite time consuming.
3. By using DHCP, you can ensure that every host on your network has a valid IP address, subnet mask, broadcast address, and gateway, with minimum effort on your part.
4. You should have a server configured for each of your subnets, and each host on the subnet needs to be configured as a DHCP client.
5. You may also need to configure the server that connects to your ISP as a DHCP client if your ISP dynamically assigns your IP address.

Setting up the server
1. The program that runs on the server is dhcpd, and it is included as an RPM on Red Hat 7.2 installation CD 2.
2. Look for the file dhcp-2.0pl5-1.i386.rpm and use Gnome-RPM (the graphical RPM tool) from the desktop, or use the rpm command from a command prompt, to install it.
3. In Red Hat Linux the DHCP server is controlled by the text file /etc/dhcpd.conf.
4. If this file does not exist on your server, you can create it using a text editor. Be sure to use the proper addresses for your network.
5. To start the server, run the command dhcpd. To ensure that the dhcpd program runs whenever the system is booted, you should put the command in one of your init scripts.
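A minimal /etc/dhcpd.conf might look like the following hedged sketch; every address here is a hypothetical private-network example, to be replaced with your own values:

```
# hand out leases on the network
subnet netmask {
    option routers;
    option subnet-mask;
    option domain-name-servers;
    default-lease-time 21600;
    max-lease-time 43200;
}
```

The range statement defines the pool of addresses dhcpd may lease; the option lines supply the gateway, netmask, and name server each client receives along with its address.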

Configuring the client
1. First you need to check whether the DHCP client is installed on your system. You can check for it by issuing the command: which dhcpcd
2. If the client is on your system, you will see the location of the file.
3. If the file is not installed, you can find it on Red Hat Installation CD 1.
4. Install the client using the rpm command. After you install the client software, start it by running the command dhcpcd.
5. Each of your clients will now receive its IP address, subnet mask, gateway, and broadcast address from your DHCP server.
6. Since you want this program to run every time the computer boots, you need to place the command in the /etc/rc.local file. Now whenever the system starts, this daemon will be loaded.


Setting Up a Network Interface Card

Loopback Interface
1. Even if the computer is not connected to outside networks, internal network functionality is required for some applications.
2. This address is known as the loopback, and its IP address is
3. You should check that this network interface is working before configuring your network cards.
4. To do this, you can use the ifconfig utility to get some information. If you type ifconfig at a console prompt, you will be shown your current network interface configuration.
5. If your loopback is configured, ifconfig shows a device called lo with the address If this device and address are not shown, you can add the device by using the ifconfig command as follows: ifconfig lo
6. You then need to use the route command to give the system a little more information about this interface. For this you type: route add -net
7. You now have your loopback set up, and the ifconfig command shows the device lo in its listing.

Configuring the network card
1. Configuring a network card follows the same procedure as configuring the loopback interface.
2. You use the same command, ifconfig, but this time use the name eth0 for an Ethernet device.
3. You also need to know the IP address, the netmask, and the broadcast address.

4. These numbers vary depending on the type of network being built.
5. For an internal network that never connects to the outside world, any IP numbers can be used; however, certain ranges (defined in RFC 1918) are reserved for these networks:

RESERVED NETWORK NUMBERS

Network Class    Network Addresses
A       -
B      -
C     -

6. If you are connecting to an existing network, you must have its IP address, netmask, and broadcast address. You also need to have the router and domain name server addresses.
7. In this example, you configure an Ethernet interface for an internal network. You need to issue a command of the form: ifconfig eth0 <ip-address> netmask <netmask> broadcast <broadcast-address>
8. As a result, a file is created in /etc/sysconfig/network-scripts called ifcfg-eth0.
9. You can check this file by issuing the following command:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
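The resulting ifcfg-eth0 file looks roughly like the following sketch; all addresses shown are hypothetical private-network examples:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values)
DEVICE=eth0
BOOTPROTO=static
IPADDR=
NETMASK=
BROADCAST=
ONBOOT=yes
```

ONBOOT=yes tells the network init script to bring the interface up automatically at boot.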

Note : A broadcast address is a logical address at which all devices connected to a multiple-access communications network are enabled to receive datagrams. A message sent to a broadcast address is typically received by all network-attached hosts, rather than by a specific host.


Configuring an internal network
1. Now you have a network device configured for one computer. To add additional computers to your network, you need to repeat this process on the other computers you want to add.
2. The only change is that you need to assign a different IP address to each machine; for example, the second computer on your network could take the next host address in the same network, the third the one after that, and so on.
3. In addition to configuring the network cards on each of the computers in the network, three files on each computer need to be modified. These files are all located in the /etc directory and they are:
i. /etc/hosts
ii. /etc/host.conf
iii. /etc/resolv.conf
4. The /etc/host.conf file contains configuration information for the name resolver and should contain the following:
order hosts, bind
multi on
5. This configuration tells the name resolver to check the /etc/hosts file before attempting to query a nameserver, and to return all valid addresses for a host found in the /etc/hosts file instead of just the first.
6. The /etc/hosts file contains the names of all the computers on the local network.
7. For a small network, maintaining this file is not difficult, but for a large network keeping the file up to date is often impractical.
8. The /etc/resolv.conf file provides information about the name servers employed to resolve hostnames, for example:
# cat /etc/resolv.conf
search rcn.com
nameserver <ip-address>

Note : /etc/host.conf is a short, plain text file that specifies how host (i.e., computer) names on a network are resolved, i.e., matched with their corresponding IP addresses.
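Taken together, the three resolver files for a small hypothetical network might contain something like the following; all names and addresses are illustrative examples only:

```
# /etc/hosts — names of the local machines (example addresses)   localhost.localdomain localhost   one.example.net   one   two.example.net   two

# /etc/host.conf — check /etc/hosts first, return all matching addresses
order hosts, bind
multi on

# /etc/resolv.conf — fall back to this name server for other names
search example.net
nameserver
```

With this setup, local names resolve instantly from /etc/hosts, and only unknown names generate a query to the name server.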


TCP/IP
1. TCP/IP is an acronym for Transmission Control Protocol/Internet Protocol, and refers to a family of protocols used for computer communications.
2. TCP and IP are just two of the separate protocols contained in the group of protocols developed by the Department of Defense, sometimes called the DoD Suite, but more commonly known as TCP/IP.
3. In addition to the Transmission Control Protocol and the Internet Protocol, this family also includes the Address Resolution Protocol (ARP), Domain Name System (DNS), Internet Control Message Protocol (ICMP), User Datagram Protocol (UDP), Routing Information Protocol (RIP), Simple Mail Transfer Protocol (SMTP), Telnet, and many others.
4. To be able to send and receive information on the network, each device connected to it must have an address.
5. The address of any device on the network must be unique and have a standard, defined format by which it is known to any other device on the network.
6. Each device's address consists of two parts:
a. a Network-layer address, which is the IP address assigned to the device, and
b. a Media Access Control (MAC) address, which is the hardware address of the device's network interface.
7. Devices that are physically connected to each other (not separated by routers) have the same network number but different node, or host, numbers.
8. This would be typical of an internal network at a company or university. These types of networks are now often referred to as intranets.

Data Transfer in TCP/IP
1) The data transfer is accomplished by breaking the information into small pieces of data called packets or datagrams.
2) It is necessary to break data into small pieces for two reasons: sharing resources and error correction.

a) Sharing resources
   (1) If two computers are communicating with each other, the line is busy. If these computers were sharing a large amount of data, other devices on the network would be unable to transfer their data.
   (2) When long streams of data are broken into small packets, each packet is sent individually, and the other devices can send their packets in between the packets of the long stream.
b) Error correction
   (1) Because the data is transmitted across media that are subject to interference, the data can become corrupt.
   (2) One way to deal with corruption is to send a checksum along with the data. A checksum is a running count of the bytes sent in the message; if the receiver's own count does not match it, the data was damaged in transit.
3) Each packet is made up of two parts: the header, which contains the address and reassembly instructions, and the body, which contains the data.
4) Keeping all this information in order is the job of the protocol. A protocol is a set of rules that specifies the format of the packets and how they are used.
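The checksum idea in b) can be sketched with the standard cksum utility (TCP's real checksum algorithm differs, but the detection principle is the same):

```shell
# Simulate a sent "packet" and a corrupted received copy, and show that
# comparing checksums exposes the corruption.
f=$(mktemp)

printf 'hello network' > "$f"          # the data as sent
sum_sent=$(cksum "$f" | awk '{print $1}')

printf 'hellp network' > "$f"          # one byte corrupted in transit
sum_recv=$(cksum "$f" | awk '{print $1}')

# The receiver compares checksums: a mismatch means the data was damaged.
if [ "$sum_sent" != "$sum_recv" ]; then echo "corruption detected"; fi
rm -f "$f"
```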


Understanding Network Classes
1. All addresses must have two parts: the network part and the node, or host, part.
2. Addresses used in TCP/IP networks are four bytes long, called IP addresses, and are written in standard dot notation, which means decimal numbers separated by dots.
3. The decimal numbers must be within the numeric range of 0 to 255 to conform to the one-byte requirement.
4. IP addresses are divided into classes, the most significant being classes A, B, and C, depending on the value of the first byte of the address.

Class      First Byte
Class A    0-127
Class B    128-191
Class C    192-223

5. The reason for the class division is to enable efficient use of the address numbers.
6. If the division were simply the first two bytes for the network part and the last two bytes for the host part, then no network could have more than 2^16 (65,536) hosts.
7. This would be impractical for large networks and wasteful for small networks.
8. There are a few ways to assign IP addresses to the devices, depending on the purpose of the network.
9. If the network is internal, an intranet, not connected to an outside network, any class A, B, or C network number can be used.
10. Although this is possible, in the real world this approach would not allow for connecting to the Internet.
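The class ranges in the table above can be checked mechanically; a small shell sketch (the addresses used are illustrative):

```shell
# Classify an IPv4 address by its first octet, following the classful
# ranges in the table (A: 0-127, B: 128-191, C: 192-223).
ip_class() {
    first=${1%%.*}                     # first octet: text before the first dot
    if   [ "$first" -le 127 ]; then echo A
    elif [ "$first" -le 191 ]; then echo B
    elif [ "$first" -le 223 ]; then echo C
    else echo "D/E"                    # multicast/experimental, beyond A-C
    fi
}

ip_class 10.0.0.1      # prints A
ip_class 172.16.5.9    # prints B
ip_class 192.168.1.1   # prints C
```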

Understanding Subnetting

1. Connecting to the outside world requires a few more steps, including configuring a router, obtaining an IP address, and actually making the connection.
2. IP numbers are not assigned to hosts; they are assigned to network interfaces on hosts.
3. Subnetting is the process of dividing a single network address into multiple networks.
4. Even though many computers on an IP network have a single network interface and a single IP number, a single computer can have more than one network interface.
5. In the current (IPv4) implementation, IP numbers consist of 4 (8-bit) bytes, for a total of 32 bits of available information.
6. This system results in large numbers, even when they are represented in decimal notation. To make them easier to read and organize, they are written in what is called dotted quad format.
7. For example, take the internal network IP address 192.168.1.1. Each of the four groups of numbers can range from 0 to 255.
8. Binary notation: if a bit is set to 1 it is counted, and if it is set to 0 it is not counted. The binary notation for 192.168.1.1 is:
   11000000.10101000.00000001.00000001
9. The dotted quad notation from this binary is:
   (128+64).(128+32+8).(1).(1) = 192.168.1.1

Note : See How to convert IP into Binary
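The conversion from dotted quad to binary can be done with a short shell function, testing each bit of every octet from 128 down to 1:

```shell
# Convert a dotted-quad IPv4 address to its 32-bit binary notation.
ip_to_binary() {
    out=""
    oldifs=$IFS; IFS=.; set -- $1; IFS=$oldifs   # split into four octets
    for octet in "$@"; do
        bits=""
        for weight in 128 64 32 16 8 4 2 1; do   # test each bit, high to low
            if [ $(( octet & weight )) -ne 0 ]; then
                bits="${bits}1"
            else
                bits="${bits}0"
            fi
        done
        out="${out}${bits}."
    done
    echo "${out%.}"                              # drop the trailing dot
}

ip_to_binary 192.168.1.1   # 11000000.10101000.00000001.00000001
```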


Interpreting IP numbers

1. IP numbers can have three possible meanings:
   a. The first of these is the address of a network, which is the number representing all the devices that are physically connected to each other.
   b. The second is the broadcast address of the network, which is the address that enables all devices on the network to be contacted.
   c. The last meaning is an actual interface address.
2. Look at a Class C network as an example:
   a. The network number itself is the Class C address whose host part is all zeros.
   b. A host address on this network is any address whose host part is neither all zeros nor all ones.
   c. The network broadcast address is the address whose host part is all ones.
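These three meanings can be computed mechanically from an address and its prefix length; a shell sketch (the 192.168.1.37/24 address is illustrative):

```shell
# Derive the network and broadcast addresses of an IPv4 address by
# converting it to a 32-bit integer and applying the netmask.
ip_to_int() {                     # dotted quad -> 32-bit integer
    oldifs=$IFS; IFS=.; set -- $1; IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
int_to_ip() {                     # 32-bit integer -> dotted quad
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}
network_of() {                    # network_of <ip> <prefixlen>
    mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
    int_to_ip $(( $(ip_to_int "$1") & mask ))        # host bits cleared
}
broadcast_of() {                  # broadcast_of <ip> <prefixlen>
    mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
    int_to_ip $(( $(ip_to_int "$1") | (~mask & 0xFFFFFFFF) ))  # host bits set
}

network_of 192.168.1.37 24     # 192.168.1.0   (the network address)
broadcast_of 192.168.1.37 24   # 192.168.1.255 (the broadcast address)
```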

Before you subnet your Network
1. First you need to decide the number of hosts on each of your subnets so you can determine how many IP addresses you need.
2. Every IP network has two addresses that cannot be used: the network IP number itself and the broadcast address.
3. Whenever you subnetwork the IP network, you are creating additional addresses that are unusable.
4. Every time you subnet you are creating these two unusable addresses per subnet, so the more subnets you have, the more IP addresses you lose.

5. Hence the point is: don't subnet your network more than necessary.
6. If you wanted to divide your Class C network into two subnetworks, you would change the first host bit to one, and you would get a netmask of 11111111.11111111.11111111.10000000, or 255.255.255.128.
7. This would give you 126 possible IP numbers for each of your subnets, because you lose two IP addresses for each subnet.
8. If you want to have four subnetworks, you need to change the first two host bits to ones, which gives you a netmask of 255.255.255.192. You would have 62 available host IP addresses in each subnet of your Class C network.
9. Now all you need to do is assign the appropriate numbers for the network, the broadcast address, and the IP addresses for each of the interfaces, and you're nearly done.
10. To create subnets for Class A and B networks, you follow the same procedure as that shown for Class C networks.
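The arithmetic above generalizes to any prefix length: the number of usable host addresses is 2 raised to the number of host bits, minus the network and broadcast addresses.

```shell
# Usable host addresses for a given prefix length:
# 2^(32 - prefix) minus the network and broadcast addresses.
usable_hosts() {
    echo $(( (1 << (32 - $1)) - 2 ))
}

usable_hosts 24   # 254 (an undivided Class C)
usable_hosts 25   # 126 (Class C split into two subnets)
usable_hosts 26   # 62  (Class C split into four subnets)
```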


Gateways and Routers
1. After subnetting, individual network segments cannot communicate with each other directly; to configure a path between them we use a router.
2. A router is necessary for separate networks to communicate with each other; each network must be connected to a router in order for this communication to take place.
3. The router that is connected to each network is called its gateway.
4. In Linux, you can use a computer with two network interfaces to route between two or more subnets. To be able to do this you need to make sure that IP forwarding is enabled.
5. You can check this by entering the following query at a command prompt:
   cat /proc/sys/net/ipv4/ip_forward
6. If forwarding is enabled, the number 1 is displayed; if forwarding is not enabled, the number 0 is displayed.
7. To enable IP forwarding if it is not already enabled, type the following command:
   echo "1" > /proc/sys/net/ipv4/ip_forward
8. Assume that a computer running Linux is acting as a router for your network. It has two network interfaces to the local LANs, using the lowest available IP address in each subnetwork on its interface to that network.


Configuring NFS Client
1. Configuring a client system to use NFS involves making sure that the portmapper and the NFS file-locking daemons statd and lockd are available, adding entries to the client's /etc/fstab for the NFS exports, and mounting the exports using the mount command.
2. Make sure that the portmapper is running on the client system using the portmap initialization script: service portmap status
3. If the output says portmap is stopped (it shouldn't be), start the portmapper: service portmap start
4. Presumably, you have already started nfslock on the server, so all that remains is to start it on the client system: service nfslock start
5. Now mount the file system. To mount /home from the server, use the following command as root:
   mount -t nfs bubba:/home /home
6. If you want to mount by specifying client mount options:
   mount -t nfs bubba:/home /home -o rsize=8192,hard
   a. rsize sets the NFS read buffer size to n bytes.
   b. hard enables failed NFS file operations to continue retrying after reporting "server not responding" on the system.
7. Using Automount Services
   a. The easiest way for client systems to mount NFS exports is to use autofs, which automatically mounts file systems not already mounted when the file system is first accessed.

b. Autofs uses the automount daemon to mount and unmount file systems that automount has been configured to control.
c. Autofs uses a set of map files to control automounting. A master map file, /etc/auto.master, associates mount points with secondary map files. The secondary map files, in turn, control the file systems mounted under the corresponding mount points.
d. For example, consider the following /etc/auto.master autofs configuration file:
   /home /etc/auto.home
   /var /etc/auto.var --timeout 600
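As a sketch, the client-side pieces fit together like this (the server name bubba comes from the examples above; the mount options and the auto.home map entry are illustrative assumptions):

```
# /etc/fstab : a static NFS mount of bubba:/home (options illustrative)
bubba:/home   /home   nfs   rsize=8192,hard,intr   0 0

# /etc/auto.master : hand /home over to the map in /etc/auto.home
/home   /etc/auto.home   --timeout 600

# /etc/auto.home : mount each user's directory from bubba on first access;
# the '&' is replaced by the key (the username) being looked up
*   -rw,hard,intr   bubba:/home/&
```

Note that a directory is either listed in /etc/fstab or managed by autofs, not both; the two entries above show the alternatives side by side.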


NFS Advantages
1. The biggest advantage NFS provides is centralized administration: it is much easier to back up a file system stored on a server than it is to back up /home directories scattered throughout the network, on systems that are geographically dispersed and that might or might not be accessible when the backup is made.
2. NFS, especially when used with NIS, makes it trivially simple to update key configuration files, provide access to shared disk space, or limit access to sensitive data.
3. NFS can also conserve disk space and prevent duplication of resources, because file systems that change infrequently or that are usually read-only, such as /usr, can be exported as read-only NFS mounts.
4. It is easy to upgrade applications employed by users throughout a network, because it simply becomes a matter of installing the new application and changing the exported file system to point at the new application.
5. End users benefit from NFS when it is combined with NIS, because users can log in from any system, even remotely, and still have access to their home directories and see a uniform view of shared data.
6. Users can protect important or sensitive data that would be impossible or time-consuming to recreate by storing it on an NFS-mounted file system that is regularly backed up.

Note : The Network Information Service, or NIS (originally called Yellow Pages or YP) is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network.


NFS Disadvantages
1. As a distributed, network-based file system, NFS is sensitive to network congestion: heavy network traffic slows down NFS performance.
2. Similarly, heavy disk activity on the NFS server adversely affects NFS performance; in such cases, NFS clients seem to be running slowly because disk reads and writes take longer.
3. If the disk or system exporting vital data or applications becomes unavailable for any reason (say, due to a disk crash or server failure), no one can access that resource.
4. NFS suffers potential security problems because its design assumes a trusted network, not a hostile environment in which systems are constantly being probed and attacked.
5. The primary weakness of most NFS implementations based on protocol versions 1, 2, and 3 is that they are based on standard (unencrypted) remote procedure calls (RPC).
6. Never use NFS version 3 or earlier on systems that front the Internet, because clear-text protocols are trivial for anyone with a packet sniffer to intercept and interpret.


NFS Security
1) General NFS Security
   i) One NFS weakness, in general terms, is the /etc/exports file. If a cracker is able to spoof or take over a trusted address, an address listed in /etc/exports, your exported NFS mounts are accessible.
   ii) Another NFS weak spot is the normal Linux file system access controls that take over once a client has mounted an NFS export: once mounted, normal user and group permissions on the files govern access control.
   iii) To address these problems, use host access control to limit access to services on your system, particularly the portmapper, which has long been a target of exploit attempts; you should also put in entries for lockd, statd, mountd, and rquotad.
   iv) Configure your firewall with the following rule:
      iptables -A INPUT -f -j ACCEPT
      This rule accepts all packet fragments except the first one (which is treated as a normal packet), because NFS does not work correctly unless you let fragmented packets through the firewall.
   v) More generally, wise application of IP packet firewalls, using netfilter, dramatically increases NFS server security.
   vi) netfilter is stronger than NFS daemon-level security or even TCP Wrappers because it restricts access to your server at the packet level.
   vii) NFS over TCP is currently experimental on the server side, so you should almost always use UDP on the server. However, TCP is quite stable on NFS clients. Nevertheless, using packet filters for both protocols on both the client and the server does no harm.

2) Server Security consideration a) On the server, always use the root_squash option in /etc/exports. NFS helps you in this regard because root squashing is the default, so you should not disable it (with no_root_squash) unless you have an extremely compelling reason to do so, such as needing to provide boot files to diskless clients. b) With root squashing in place, the server substitutes the UID of the anonymous user for root’s UID/GID (0), meaning that a client’s root account cannot change files that only the server’s root account can change.

3) Client Security Considerations
   a) On the client, disable SUID (set UID) root programs on NFS mounts by using the nosuid option.
   b) The nosuid mount option prevents a server's root account from creating an SUID root program on an exported file system, logging in to the client as a normal user, and then using the SUID root program to become root on the client.


NFS SERVER CONFIGURATION AND STATUS FILES
1) The server configuration file is /etc/exports, which contains a list of file systems to export, the clients permitted to mount them, and the export options that apply to client mounts.
2) Each line in /etc/exports has the following format:
   dir [host(options)] [host(options)] ...
   i) dir specifies a directory or file system to export.
   ii) host specifies one or more hosts permitted to mount dir.
   iii) options specifies one or more mount options.

3) If you omit host, the listed options apply to every possible client system.
4) If you omit options, the default mount options will be applied.
5) host can be specified as a single name, an NIS netgroup, as a group of hosts using the form address/netmask, or as a group of hosts using the wildcard characters ? and *. Multiple host(options) entries are accepted, which enables you to specify different export options depending on the host or hosts mounting the directory.
6) When specified as a single name, host can be any name that DNS or the resolver library can resolve to an IP address. If host is an NIS netgroup, it is specified as @groupname. The address/netmask form enables you to specify all hosts on an IP network or subnet.
7) Example lines in an /etc/exports file:
   i) /usr/local *.example.com(ro)
      This line permits all hosts with a name of the format somehost.example.com to mount /usr/local as a read-only directory.
   ii) /usr/devtools
      This line uses the address/netmask form, in which the netmask is specified in Classless Inter-Domain Routing (CIDR) format. In CIDR format, the netmask is given as the number of bits (/24, in this example) used to determine the network address. A /24 CIDR address allows any host with an IP address in the corresponding range, excluding the network address itself and the broadcast address, to mount /usr/devtools read-only.
   iii) /home
      This line specifies that hosts with an IP address in the given range may mount /home in read-write mode.
   iv) /projects @dev(rw)
      This line permits any member of the NIS netgroup named dev to mount /projects in read-write mode.
   v) /var/spool/mail (rw)
      This line permits only the host with the listed IP address to mount /var/spool/mail.
   vi) /opt/kde
      This line allows any host using RPCSEC_GSS security to mount /opt/kde in read-only mode.
8) Various other options are available besides ro and rw, among them secure, insecure, sync, and async.
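Putting the patterns above together, a complete /etc/exports might look like this (all hostnames, addresses, and the netgroup name are illustrative; the gss/krb5 entry uses the older pseudo-hostname syntax for RPCSEC_GSS exports):

```
# dir              host(options)
/usr/local         *.example.com(ro)
/usr/devtools      192.168.1.0/24(ro)
/home              192.168.1.0/255.255.255.0(rw)
/projects          @dev(rw)
/var/spool/mail    192.168.1.1(rw)
/opt/kde           gss/krb5(ro)
```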


NFS SERVER DAEMONS
1. Providing NFS services requires the services of six daemons:
   a. /sbin/portmap (portmap): enables NFS clients to discover the NFS services available on a given server
   b. /usr/sbin/rpc.nfsd (nfsd): provides all NFS services except file locking and quota management
   c. /usr/sbin/rpc.mountd (mountd): processes NFS client mount requests
   d. /sbin/rpc.statd (statd): implements NFS lock recovery when an NFS server system crashes
   e. /usr/sbin/rpc.rquotad (rquotad): provides file system quota information about NFS exports to NFS clients using file system quotas
   f. /sbin/rpc.lockd (lockd): starts the kernel's NFS lock manager
2. The NFS daemons should be started in the following order to work properly:
   a. portmap
   b. nfsd
   c. mountd
   d. statd
   e. rquotad (if necessary)


3. The above list omits lockd because nfsd starts it on an as-needed basis, so you should rarely, if ever, need to invoke it manually.
4. The Red Hat Linux initialization script for NFS, /etc/rc.d/init.d/nfs, takes care of starting up the NFS server daemons.
5. By default, the startup script starts eight copies of nfsd in order to enable the server to process multiple requests simultaneously.
6. To change this value, make a backup copy of /etc/rc.d/init.d/nfs, edit the script, and change the line that reads RPCNFSDCOUNT=8.
7. Change the value from 8 to one that suits you. Busy servers with many active connections might benefit from doubling or tripling this number.
8. If file system quotas for exported file systems have not been enabled on the NFS server, it is unnecessary to start the quota manager, rquotad, but the Red Hat initialization script starts rquotad whether quotas have been enabled or not.


NFS SERVER SCRIPTS AND COMMANDS
1) Three initialization scripts control the required NFS server daemons: /etc/rc.d/init.d/portmap, /etc/rc.d/init.d/nfs, and /etc/rc.d/init.d/nfslock.
2) If you ever need to start the NFS server manually, follow these steps:
   i) service portmap start
   ii) service nfs start
   iii) service nfslock start
3) Conversely, to shut down the server, reverse the start procedure:
   i) service nfslock stop
   ii) service nfs stop
   iii) service portmap stop
4) The exportfs command enables you to manipulate the list of current exports on the fly without needing to edit /etc/exports.
5) To add a new export to etab and to the kernel's internal table of NFS exports without editing /etc/exports, use the following syntax:
   i) exportfs -o opts host:dir
      opts, host, and dir use the same syntax as that described for /etc/exports.
   ii) Consider the following command:
      exportfs -o async,rw /var/spool/mail
   iii) This command exports /var/spool/mail with the async and rw options to the specified host.
6) To remove an exported file system, use the -u option with exportfs. For example, the following command unexports the /home file system:
   exportfs -v -u /home
   The -v option causes exportfs to report what it is doing.

7) The showmount command provides information about clients and the file systems they have mounted.
8) The showmount command queries the mount daemon, mountd, about the status of the NFS server. Its syntax is:
   showmount [-adehv] [host]
   (a) showmount -a  // displays client hostnames and mounted directories
   (b) showmount -d  // displays only the directories clients have mounted
   (c) showmount -e  // displays the NFS server's list of exported file systems
9) The nfsstat command displays detailed information about the status of the NFS subsystem. You can also check that the server daemons are running, for example:
   $ service nfs status
   rpc.mountd (pid 4358) is running…


Planning NFS Installation
NFS server configuration is uncomplicated; you can divide server configuration into four steps:
1. Design
2. Implementation
3. Testing
4. Monitoring

1) Design :
 Designing a useful NFS server involves the following measures:
   (a) Selecting the file systems to export
   (b) Choosing which users (or hosts) are permitted to mount the exported file systems
   (c) Selecting a naming convention and mounting scheme that maintains network transparency and ease of use
   (d) Configuring the server and client systems to follow the convention
 A few general rules exist to guide the design process; you need to take into account site-specific needs such as which file systems to export, the amount of data that will be shared, the design of the underlying network, what other network services you need to provide, and the number and type of servers and clients.

2) Implementation :


 With the design in place, implementation is a matter of configuring the exports and starting the appropriate daemons. 3) Testing :  Testing, obviously, makes sure that the naming convention and mounting scheme works as designed and identifies potential performance bottlenecks. 4) Monitoring :  Monitoring, finally, extends the testing process: you need to ensure that exported file systems continue to be available and that heavy usage or a poorly conceived export scheme does not adversely affect overall performance.


NFS (Network File System)
1. NFS, the Network File System, is the most common method for providing file sharing services on Linux and Unix networks.
2. It is a distributed file system that enables local access to remote disks and file systems.
3. NFS is also a popular file sharing protocol, so NFS clients are available for many non-Unix operating systems, including the various Windows versions, MacOS, VAX/VMS, and MVS.
4. NFS uses a standard client/server architecture. The server portion consists of the physical disks containing shared file systems and several daemons that make the shared file systems visible to, and available for use by, client systems on the network; this process is normally referred to as exporting a file system.
5. Server daemons also provide for file locking and, optionally, quota management on NFS exports.
6. NFS clients simply mount the exported file systems, known as NFS mounts, on their local system just as they would mount file systems on local disks.
7. Many sites store users' home directories on a central server and use NFS to mount the home directory when users log in or boot their systems.
8. In this case, the exported directories must be mounted as /home/username on the local (client) systems, but the export itself can be stored anywhere on the NFS server, say, /exports/users/username.
9. Another common scheme is to export public data or project-specific files from an NFS server and to enable clients to mount these remote file systems anywhere they see fit on the local system.
10. NFS is also used to provide diskless clients, such as X terminals or the slave nodes in a cluster, with their entire file system, including the kernel image and other boot files.
11. NFS version 4, the version available in RHEL, offers significant security and performance enhancements over older versions of the NFS protocol, and adds features NFS has historically lacked, such as replication (the ability to duplicate a server's exported file systems on other servers) and migration (the capability to move file systems from one NFS server to another without affecting NFS clients).

Note :- Daemon : In Unix and other multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.


What is Samba ? 1. Computers running Windows 95 or greater use a protocol called Server Message Block (SMB) to communicate with each other and to share services such as file and print sharing. 2. Using a program called Samba, you can emulate the SMB protocol and connect your Red Hat Network to a Windows network to share files and printers. 3. The Linux PC icon appears in the Windows Network Neighborhood window, and the files on the Linux PC can be browsed using Windows Explorer.

Installing Samba
1. Before you can use Samba to connect to the Windows computers, it must first be installed on the Linux PC.
2. All current distributions of Linux include Samba, but it may not have been installed during the system installation.
3. All current distributions of RHEL include three Samba packages: samba, samba-client, and samba-common.
4. Even if it has been installed, you should always check for the latest version, to find out whether any problems have been fixed by the latest release, and to install it if necessary.
5. To see if Samba is installed on your system, type the following:
   rpm -q samba
6. If Samba is not installed, the command returns the output shown in the figure.


7. If Samba is installed, the RPM query returns the version number, as shown in the figure.

8. The latest version of Samba (2.0.7 as of this writing) can be obtained at Samba's Web site, located at http://www.samba.org.
9. After downloading the Samba RPM file, install it as follows ("name of file" is the version number downloaded):
   rpm -i samba(name of file)

Using the Compressed Version
1. If you are unable to download the RPM version, or you want to compile the program yourself, download the file samba-latest.tar.gz.
2. Extract the file using the following command:
   tar -xvzf samba-latest.tar.gz
3. Change to the directory containing the extracted files (usually /usr/src) and then type:
   ./configure


4. Press Enter and wait for the command prompt to return. From the command prompt, type:
   make
5. Press Enter and wait for the command prompt to return. Finally, type make install from the command prompt.
6. If all goes well, Samba is installed and the command prompt returns.


Configuring the Samba Server
1) Beginning with version 2.0, Samba includes a utility called SWAT, the Samba Web Administration Tool. This tool makes setting up Samba very easy.
2) The Samba configuration file is called smb.conf and is placed in the /etc/samba directory by the installation program; an smb.conf file was created during the installation that can be used for reference and modification.
3) SWAT enables you to use a Web browser as the interface to /etc/smb.conf and makes the necessary modifications to this file.
4) The smb.conf file is divided into several sections. Each section contains a list of options and values in the format: option = value. Some of the sections are:
5) [global] : The first section of the smb.conf file is the [global] section. Various options are available, each with a range of values.
   Example:
   [global]
   workgroup = ONE
   Explanation: This is the global section. workgroup = ONE is the name of the workgroup shown in the Identification tab of the network properties box on the Windows computer.
6) [homes]


[homes] is used to enable the server to give users quick access to their home directories.
   Example:
   [homes]
   comment = Home Directories
   read only = No
   Explanation: comment = Home Directories is a comment line for the benefit of anyone reading the file. read only = No specifies that users can write to their directories.
7) [printers] : This section sets the options for printing.
   Example:
   [printers]
   path = /var/spool/samba
   Explanation: path = /var/spool/samba is the location of the printer spool directory.
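Assembling the sections above, a minimal smb.conf might read as follows (the server string, browseable, and printable lines are illustrative additions, not from these notes):

```
# /etc/samba/smb.conf : a minimal example combining the sections above
[global]
   workgroup = ONE
   server string = Red Hat Samba Server

[homes]
   comment = Home Directories
   read only = No
   browseable = No

[printers]
   comment = All Printers
   path = /var/spool/samba
   printable = Yes
```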

Note : Spool Directory : spool refers to the process of placing data in a temporary working area for another program to process. The most common use is in writing files on a magnetic tape or disk and entering them in the work queue (possibly just linking it to a designated folder in the file system) for another process. Spooling is useful because devices access data at different rates. Spooling allows one program to assign work to another without directly communicating with it.


Creating Samba Users
1) You can convert all of your system users to Samba users by running the following command:
   cat /etc/passwd | mksmbpasswd.sh > /etc/samba/smbpasswd
2) Then create passwords for your users by using the smbpasswd command and the user's name, as shown here:
   [root@localhost terry]# smbpasswd terry
   Output:
   New SMB password:
   Retype new SMB password:
   Password changed for user terry

Starting the Samba Server
1) The last step is to start the Samba daemon. The command to start it is: service smb start
2) Example:
   [root@localhost terry]# /sbin/service smb start
   Output:
   Starting SMB services: [OK]
   Starting NMB services: [OK]


Connecting to a Samba client:
1) We can connect our system to other systems running the SMB protocol.
2) All Microsoft operating systems use the SMB protocol for communication, but we are not limited to Microsoft networks: we can connect to whichever computer supports the SMB protocol.
3) In order to make the connection, the Linux machine should be configured with the Samba server.
4) We can connect to another computer by two methods: either the smbclient utility or the smbmount command.
5) smbclient
   i) You can log in to any Windows PC from the Red Hat system.
   ii) You will be prompted for a password to log in and will get some information about the Windows system.
   iii) Syntax: smbclient //computername/sharename
   iv) Example: [root@localhost terry]# smbclient //terrycollings/c
6) smbmount
   i) Another way to make the files on a Samba client accessible on your Red Hat system is to mount the client file system on your file system.
   ii) The syntax for this command is: smbmount //computername/directory /mysystem/mount/point
   iii) Example:
      [root@localhost terry]# smbmount //terrycollings/c /mnt/windows
      Output: Password:


iv) You can then change to the directory on your system where you mounted the Windows system:
   [root@localhost terry]# cd /mnt/windows

Connecting from a windows PC to the samba Server 1. After configuring samba and starting samba daemon you are ready to test your connection on the Windows PC. 2. For systems running Windows 2000 or XP, no configuration is required. 3. On the Windows computer, Double-click the My Network Places icon from the desktop. 4. In the Network Places window, you should now see a listing for the Red Hat computer.


Providing a Caching Proxy Server
1. A caching proxy server is software (or hardware) that stores (caches) frequently requested Internet objects such as Web pages, JavaScripts, and downloaded files closer (in network terms) to the clients that request those objects.
2. When a new request is made for a cached object, the proxy server provides the object from its cache instead of allowing the request to go on to the source.
3. That is, the local cache serves the requested object as a proxy, or substitute, for the actual server.
4. The motivation for using a caching proxy server is twofold:
    To provide accelerated Web browsing by reducing access time for frequently requested objects
    To reduce bandwidth consumption by caching popular data locally, that is, on a server that sits between the requesting client and the Internet
5. The Web proxy used on Linux is called Squid.
6. The following rpmquery command will show you whether Squid is installed:
   $ rpmquery squid
   squid-2.5.Stables-1.FC3.1
7. If Squid is not installed, you'll obviously need to install it before proceeding.
8. Squid provides full support for SSL.
Note : Secure Sockets Layer (SSL) is a cryptographic protocol that provides communication security over the Internet.


Squid Installation / Configuration of Squid
The configuration process includes the following steps:
1. Verify the kernel configuration
2. Configure Squid
3. Modify the Netfilter configuration
4. Start Squid
5. Test the configuration

1) Verifying the Kernel Configuration
   i) We use the term "verifying the kernel configuration" because Squid relies on kernel features such as IP forwarding and Netfilter.
   ii) Netfilter is important because it supports features such as IP tables, connection tracking, full NAT (Network Address Translation), and support for the REDIRECT target.
   iii) To check IP forwarding:
      # sysctl -n net.ipv4.ip_forward
      1
   iv) The command above checks whether IP forwarding is enabled: if the output is 1, IP forwarding is enabled; if the output is 0, it is disabled.
   v) If it returns 0, enable it using the command:
      # sysctl -w net.ipv4.ip_forward=1
      net.ipv4.ip_forward = 1


Note :
IP forwarding, also known as Internet routing, is the process used to determine which path a packet or datagram is sent along.
Netfilter is a framework that provides hook handling within the Linux kernel for intercepting and manipulating network packets. Put more concretely, Netfilter is invoked, for example, by the packet reception and send routines from/to network interfaces.
iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall.
Connection tracking allows the kernel to keep track of all logical network connections or sessions, and thereby relate all of the packets that make up a connection.
Network Address Translation (NAT) is the process of modifying IP address information in IP packet headers while in transit across a traffic-routing device.
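The check-and-enable step above can be wrapped in a small shell helper. This is a sketch (the function name is hypothetical); it only interprets the 0/1 value that `sysctl -n net.ipv4.ip_forward` prints, so it runs without root privileges:

```shell
# Interpret the 0/1 value printed by `sysctl -n net.ipv4.ip_forward`
interpret_ip_forward() {
    case "$1" in
        1) echo "IP forwarding is enabled" ;;
        0) echo "IP forwarding is disabled" ;;
        *) echo "unexpected value: $1" ;;
    esac
}

interpret_ip_forward 1    # prints "IP forwarding is enabled"
interpret_ip_forward 0    # prints "IP forwarding is disabled"
```

On a live system you would feed it the real value: interpret_ip_forward "$(sysctl -n net.ipv4.ip_forward)".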

2) Configuring Squid
i) The Squid configuration file on Fedora Core and RHEL systems is /etc/squid/squid.conf.
ii) The initialization script that controls Squid is /etc/rc.d/init.d/squid, which reads default values from /etc/sysconfig/squid.
iii) Useful squid.conf directives:
cache_effective_group : Identifies the group Squid runs as
cache_effective_user : Identifies the user Squid runs as
httpd_accel_host : Defines the hostname of the real HTTP server (if using acceleration)
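A minimal /etc/squid/squid.conf sketch for the settings described in this section (the directive names are standard Squid options; the ACL name and the 192.168.1.0/24 network are assumptions for an example LAN):

```
http_port 3128                      # default Squid proxy port
cache_effective_user squid          # run as an unprivileged user
cache_effective_group squid         # run as an unprivileged group
acl our_lan src 192.168.1.0/24      # example LAN address range
http_access allow our_lan           # allow the LAN to use the proxy
http_access deny all                # refuse everyone else
```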

3) Modifying Netfilter
i) Modifying your Netfilter firewall rules is an optional step.
ii) If you are not using Netfilter to maintain a firewall or to provide LAN access to the Internet, this step can be skipped entirely.
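If you do use Netfilter and want Squid to act as a transparent proxy, the usual change is a REDIRECT rule that sends the LAN's outbound HTTP traffic to Squid's port. A sketch (the interface name eth0 is an assumption):

```
# Redirect web traffic entering on the LAN interface to Squid on port 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```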

4) Starting Squid
i) To start Squid:
# service squid start
O/P : Starting squid:


ii) To make Squid start and stop automatically:
# chkconfig --level 0123456 squid off
# chkconfig --level 345 squid on
5) Testing the Configuration
i) First configure the Web browser.
ii) If you are using Firefox, select Edit > Preferences to open the Preferences dialog box.
iii) On the General tab, click Connection Settings to open the Connection Settings dialog box, then click the Manual proxy configuration radio button and type the hostname or IP address of the proxy server in the HTTP Proxy text box. Type 3128 in the accompanying Port text box.
iv) Click OK to close the Connection Settings dialog box and OK again to save your changes and close the Preferences dialog box.


Time Server
What is a Time Server ?
1. A time server is a daemon that runs on one machine and to which other systems (clients on your LAN) synchronize their system clocks.
2. In the common case, you synchronize your time server's clock against one or more reference time servers situated outside the LAN.
3. In some cases, the time server synchronizes its time against a specially designed clock; this hardware clock is a separate, single-purpose hardware device that maintains extremely accurate time.
Selecting a Time Server Solution
 For a time server, your options fall into three categories:
Hardware
Software
Both hardware and software
Hardware Solution :
1. This involves installing a high-resolution clock in your facility and then configuring client systems to synchronize their system clocks against that dedicated device.
2. This device gets time signals broadcast by the United States Naval Observatory (USNO) or the National Institute of Standards and Technology (NIST).
3. E.g., a cesium fountain clock.


Note : NIST-F1 is a cesium fountain clock, or atomic clock, at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, and serves as the United States' primary time and frequency standard.
Software Solution :
i) The software solution is the best fit for small organizations and networks; this solution is built on the NTP protocol.
ii) NTP (Network Time Protocol) is an open standard that defines how Internet time servers work and how clients can communicate with these time servers to maintain accurate time.
iii) The best precision is about 232 picoseconds.
iv) NTP consists of a daemon (ntpd), a small set of utility programs (ntpq, ntpdc, ntpdate, and so on), and the all-important configuration file, /etc/ntp.conf.
v) The NTP daemon performs two tasks, acting as both a client and a server.
(a) Server : It listens for time synchronization requests and provides the time in response.
(b) Client : It communicates with other time servers to get the correct time and adjusts the local system accordingly.
vi) Some of the NTP utility programs are :
(a) ntpdate : Sets the system date and time via NTP
(b) ntpdc : Controls the NTP daemon, ntpd
(c) ntp-keygen : Generates public and private keys for use with NTP
(d) ntpq : Queries the NTP daemon.


Configuring the Time Server
You will need to do the following tasks in order to configure a time server :
1. Install the NTP software.
2. Locate suitable time servers to serve as reference clocks.
3. Configure your local time server.
4. Start the NTP daemon on the local time server.
5. Make sure that the NTP daemon responds to requests.

 Installing NTP
1. Installing the NTP software is simple. Use the rpmquery command to make sure that the ntp package is installed:
2. Example :
$ rpmquery ntp
Output : ntp-4.2.0.a.20040617-4
3. If it is installed, the version name is displayed. If the ntp package isn't installed, install it using the installation tool of your choice before proceeding.

 Selecting Reference Clocks
1. NTP is hierarchical and organizes time servers into several strata (levels) to reduce the load on any given server or set of servers.
2. Stratum 1 servers are referred to as primary servers, stratum 2 servers as secondary servers, and so on.


3. There are more secondary servers than primary servers. Secondary servers sync to primary servers, and clients sync to secondary or tertiary servers.
4. NTP also provides for syncing to pool servers, a large class of publicly accessible secondary servers maintained as a public service for use by the Internet-connected computing community at large.
5. You can look up the pool servers using the following command:
$ host pool.ntp.org
 Procedure for configuration of an NTP Server
1. Add the following lines to /etc/ntp.conf:
broadcast autokey
crypto pw serverpassword
keysdir /etc/ntp
2. Generate the key files and certificates using the following commands:
# cd /etc/ntp
# ntp-keygen -T -I -p serverpassword
3. If ntpd is running, restart it:
# service ntpd restart
4. If ntpd is not running, start it:
# service ntpd start
5. Use the following chkconfig commands to make sure that ntpd starts at boot time in all multiuser run levels:
# chkconfig --level 0123456 ntpd off
# chkconfig --level 345 ntpd on
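For a client machine syncing against the pool servers mentioned above, a minimal /etc/ntp.conf sketch looks like this (the pool hostnames are the public NTP pool; the driftfile path is the usual Fedora/RHEL default):

```
# Reference servers drawn from the public pool
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
# Record the local clock's drift between restarts
driftfile /var/lib/ntp/drift
```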


Optimizing NFS
1. Using a journaling file system offers many advantages for an NFS server; for example, in the event of a crash, a journaling file system recovers much more quickly than a non-journaling file system.
2. Spread NFS-exported file systems across multiple disks and, if possible, multiple disk controllers. The purpose of this strategy is to avoid disk hot spots, which occur when I/O operations concentrate on a single disk or a single area of a disk.
3. Replace IDE disks with Serial ATA disks, and if you have the budget for it, use Fibre Channel disk arrays.
4. If your NFS server is using RAID, use RAID 1/0 to maximize write speed and to provide redundancy in the event of a disk crash.
5. RAID 5 seems compelling at first because it ensures good read speeds, which is important for NFS clients, but RAID 5's write performance is lackluster, and good write speeds are important for NFS servers.
6. Consider replacing 10-Mbit Ethernet cards with 100-Mbit Ethernet cards throughout the network.
7. A common NFS optimization is to minimize the number of write-intensive NFS exports.
8. In extreme cases, re-segmenting the network might be the answer to NFS performance problems.


Note :
The disk controller is the circuit that enables the CPU to communicate with a hard disk, floppy disk, or other kind of disk drive.
Network segmentation in computer networking is the act of splitting a computer network into subnetworks, each being a network segment. The advantages of such splitting are primarily boosting performance and improving security.
The Fibre Channel electrical interface is one of two related standards that can be used to physically interconnect computer devices.
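On the client side, a common complement to the server tuning above is raising the NFS read/write buffer sizes in /etc/fstab (the server name, export path, and 8 KB sizes are illustrative, era-typical values):

```
# /etc/fstab entry on an NFS client: larger transfer buffers, hard mount
nfsserver:/export/home  /mnt/home  nfs  rsize=8192,wsize=8192,hard,intr  0 0
```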


Optimizing Samba Networking
1. The most important consideration for optimizing Samba performance is file transfer speed between the Samba server and its clients. There are options that can be set in the /etc/samba/smb.conf file that will increase file transfer performance between the client and server.
2. Socket options : A good choice for tuning your local network is socket options = IPTOS_LOWDELAY TCP_NODELAY.
3. DNS proxy : Setting this option to no prevents the server from doing unnecessary name lookups and increases system performance.
4. Debug level : This option should be set to 2 or less. Setting the debug level higher than 2 causes the server to flush the log file after each operation, a time-consuming process indeed.
5. Level 2 oplocks : This option provides for caching of downloaded files on the client machine. Setting this option to true can increase system performance.

Note : A network socket is an endpoint of an inter-process communication flow across a computer network.
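The tuning options above would appear in the [global] section of /etc/samba/smb.conf roughly as follows (a sketch, not a complete configuration):

```
[global]
    socket options = IPTOS_LOWDELAY TCP_NODELAY
    dns proxy = no            # avoid unnecessary name lookups
    log level = 2             # keep logging cheap (the "debug level")
    level2 oplocks = yes      # let clients cache downloaded files
```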


Using Your Linux Machine as a Server
 The following protocols can be configured on a Linux machine, letting it act as a server :
1) http
i) The most common Web server used on Linux is Apache. Apache is started out of a system's rc scripts.
ii) Apache is easily configurable, and its configuration files live in /etc/httpd/conf/.
iii) While Apache can be set to listen on many different network ports, the most common port it listens on is port 80.
2) sshd
i) The secure shell daemon (sshd) is started out of the system's rc scripts.
ii) Its global system configuration files are in /etc/ssh, and users' ssh configuration files are in $HOME/.ssh/.
iii) The ssh server listens on port 22.
3) ftpd
i) ftpd is the File Transfer Protocol daemon. Its configuration files ftpaccess, ftpconversions, ftpgroups, ftphosts, and ftpusers are located in the /etc directory.
ii) The FTP daemon uses ports 20 and 21 to listen for and initiate FTP requests.


4) dns
i) The Domain Name Service (DNS), which maps IP addresses to hostnames, is served by the named program. Its configuration file is named.conf in the /etc directory.
ii) It listens on port 53.


Configuring Linux Firewall Packages
1) Linux provides a few different mechanisms for system security. One of these mechanisms is Linux's firewall packages.
2) Two of the firewalling packages available are tcp-wrappers and ipchains.
3) tcp-wrappers is a minimalistic packet filtering application to protect certain network ports.
4) ipchains is a packet filtering firewall.
5) Another important firewall package, called iptables, is the successor to ipchains.
1. tcp-wrappers
a. The TCP Wrapper program is a network security tool whose main functions are to log connections made to inetd services and to restrict certain computers or services from connecting to the tcp-wrapped computer.
b. TCP Wrappers works only on programs that are started from inetd, so services such as sshd, apache, and sendmail cannot be "wrapped" with tcp-wrappers.
2. ipchains
a. ipchains is Linux's built-in IP firewall administration tool. Using ipchains enables you to run a personal firewall to protect your Linux machine.
b. If the Linux machine is a routing gateway for other machines on your network, it can act as a packet filtering network firewall if more than one network interface is installed.


Configuring the xinetd Server

What is the xinetd Server ?
1. xinetd starts at boot time and waits and listens for connections to come in on the ports assigned to the services in its configuration files.
2. One of the most notable improvements of xinetd over inetd is that anyone can start network services; with inetd, only root can start a network service.
3. xinetd supports encrypting plain-text services, such as the ftp command channel, by wrapping them in stunnel.
4. xinetd also enables you to do access control on all services based on different criteria, such as remote host address, access time, remote hostname, and remote host domain.
5. xinetd also takes the extra security step of killing servers that aren't in the configuration file and those that violate the configuration's access criteria.

Configuration of xinetd Server :

1. The xinetd.conf file is easier to read through and customize; all the files in the /etc/xinetd.d directory are read into the xinetd.conf file as well.
2. Each service started by xinetd gets its own dedicated file in the /etc/xinetd.d directory.


3. This way you can tell, with a glance at the xinetd.d file listing, which services are started by it.
4. xinetd configuration file:
#
# Simple configuration file for xinetd
#
# Some defaults, and include /etc/xinetd.d/
defaults
{
    instances      = 60
    log_type       = SYSLOG authpriv
    log_on_success = HOST PID
    log_on_failure = HOST
    cps            = 25 30
}
includedir /etc/xinetd.d
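A typical per-service file in /etc/xinetd.d follows the same syntax. This sketch of /etc/xinetd.d/telnet shows a service shipped disabled (the attribute names are standard xinetd syntax; the server path follows the usual Fedora/RHEL layout):

```
service telnet
{
    disable         = yes
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
}
```

Changing disable = yes to disable = no and reloading xinetd enables the service.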

Comparing xinetd and Standalone
1. Which services are standalone, and which are started from inetd or xinetd? Sometimes it can get confusing keeping track of how to start a certain service, or how to keep it from running in the first place.
2. In order to control a service, you need to know what spawns it.
3. xinetd or inetd services are started from inetd, or from xinetd on newer systems. Each of these services should ideally have its own file in the /etc/xinetd.d directory, so you should look in that directory to enable or disable these services.
4. Examples of xinetd services:
rlogin : a service similar to telnet, but it enables trust relationships between machines
rsh : remote shell

5. Standalone services are started from the rc scripts specifically written for them in the rc directories. You can enable or disable these services from those directories.
6. Examples of standalone services:
apache : Web server
sshd : ssh server

Note : Stunnel is an open-source multi-platform computer program, used to provide universal TLS/SSL tunneling service. TLS/SSL : Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide communication security over the Internet.


Less Secure Services
 These are insecure services that should not be used, since they trust that the network is absolutely secure. Their secure equivalents should be used instead.
 Using these services should be discouraged, because all their traffic is sent over the network in plain text. This means that anyone with a common sniffer such as tcpdump can see every keystroke that is typed in the session, including your users' passwords.
 Some of these services are :
1) telnet
i) telnet is a protocol and application that enables someone to have access to a virtual terminal on a remote host. It resembles text-based console access on a Unix machine.
ii) All telnet traffic, including passwords, is sent in plain text, so the Secure Shell (ssh) command should be used instead if at all possible, because ssh provides an equivalent interface to telnet, with increased features and, most importantly, encrypted traffic and passwords.
iii) Example (the user and host names are illustrative):
[user@host ~]$ telnet xena
Trying ...
Connected to xena
Welcome
login:

2) ftp
i) ftp is a ubiquitous (present everywhere) file transfer protocol that runs over ports 20 and 21. For transferring software packages from anonymous ftp repositories, such as ftp.redhat.com, ftp is still the standard application to use.

ii) However, for personal file transfers, you should use scp. scp encrypts the traffic, including passwords.
iii) Once you have successfully logged on to an ftp server, you can type help for a list of available commands. Two important commands to remember are put, to move a file from your machine to the remote machine, and get, to pull a file from the remote server to your machine. To send multiple files you can use mput, and to retrieve multiple files you can use mget. ls or dir gives you a listing of files available for download from the remote side.
3) rsync
i) rsync is an unencrypted file transfer program that is similar to rcp.
ii) It includes the added feature of allowing just the differences between two sets of files on two machines to be transferred across the network.
iii) Because it sends traffic unencrypted, it should be tunneled through ssh; otherwise, don't use it. The rsync server listens on port 873.
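Tunneling rsync through ssh, as recommended above, is done with rsync's -e option (the paths, user, and host names here are illustrative):

```
# Transfer only the differences, with all traffic encrypted by ssh
rsync -av -e ssh /home/user/project/ user@remotehost:/backup/project/
```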

4) rlogin
i) rlogin is a remote login program that connects your terminal to a remote machine's terminal.
ii) rlogin is an insecure protocol, because it sends all information, including passwords, in plain text.
5) rsh
i) rsh is an unencrypted mechanism to execute commands on remote hosts.


ii) Normally you specify a command to be run on the remote host on rsh's command line, but if no command is given, you are logged into the remote host using rlogin.
iii) rsh's syntax : rsh remotehostname remotecommand
6) finger
i) finger enables users on remote systems to look up information about users on another system.
ii) Generally finger displays a user's login name, real name, terminal name, idle time, login time, office location, and phone number.
iii) You should disable finger outside of your local network, because user information gathered from it could be used to compromise your system. The finger daemon listens on port 79.
7) talk and ntalk
i) talk and ntalk are real-time chat protocols. The talk server runs on port 517 and the ntalk server runs on port 518.
ii) To send someone a chat request, type talk or ntalk username@hostname.
iii) If their server is running a talk or ntalk daemon and they are logged in, they will see a message inviting them to chat with you.
iv) talk and ntalk aren't as popular as they once were, since instant messenger clients have become very popular.


Secure Services
 There are common services, such as telnet and ftp, that were written in the days when everyone trusted everybody else on the Internet. These services send all of their traffic in plain text, including passwords. Since plain-text traffic is extremely easy to eavesdrop on by anyone between the traffic's source and destination, secure replacements have been developed.
 These replacements provide stronger authentication controls and encrypt all their traffic to keep your data safe.
 Some of these secure services are :
1) SSH
i) Secure Shell, also known as SSH, is a secure telnet replacement that encrypts all traffic, including passwords, using a public/private encryption key exchange protocol.
ii) It provides the same functionality as telnet, plus other useful functions, such as traffic tunneling.
iii) To set up an SSH tunnel (the port numbers, user, and host names here are illustrative):
ssh -N -L 16510:localhost:16510 user@remotehost
2) scp
i) Secure Copy, also known as scp, is part of the SSH package. It is a secure alternative to rcp and ftp, because, like SSH, the password is not sent over the network in plain text.
ii) You can scp files to any machine that has an ssh daemon running.
iii) Syntax : scp user@host1:file1 user@host2:file2


3) sftp
i) Secure File Transfer Program, also known as sftp, is an FTP client that performs all its functions over SSH.
ii) Syntax : sftp user@host:file file

Note : Traffic Tunneling : Computer networks use a tunneling protocol when one network protocol (the delivery protocol) encapsulates a different payload protocol. Remote Copy Protocol (RCP), a command on the Unix operating systems, is a protocol that allows users to copy files to and from a file system residing on a remote host or server on the network.


Checking Configuration / Using DNS Tools for Troubleshooting

 The three very useful tools for troubleshooting DNS problems that are included with BIND are :
nslookup
host
dig
1) nslookup
i) nslookup is one of several programs that can be used to obtain information about your name servers.
ii) This information helps you to fix problems that may have occurred.
2) host
i) host enables you to find the IP address for a specified domain name; all that is required is the domain name of the remote host.
ii) The following command shows its use (the address in the output is omitted here):
# host tactechnology.com
tactechnology.com has address ...
3) dig
i) dig can be used for debugging and obtaining other useful information.
ii) The basic syntax is: dig (@server) domainname (type)


Configuring a Caching DNS Server
1. Begin by verifying the zone information in /etc/named.conf. When you installed the BIND package, the /etc/named.conf file was created, and it contained zone information for your localhost.
2. Next, you need to check the configuration of the /var/named/named.local file. This file contains the domain information for the localhost, and is typically created when BIND is installed.
3. You need to check the /etc/nsswitch.conf file to be sure it contains the following line:
hosts: files nisplus dns
4. You need to check the /etc/resolv.conf file to make sure that the IP address (127.0.0.1) of your localhost is listed as a name server.
5. Finally, you need to check that your /etc/host.conf contains the word bind.
6. After you have completed all of the previous steps, it is time to start the named daemon and check your work.
7. Type service named start at a command prompt, press Enter, wait for the prompt to return, and then type rndc status and press Enter.
8. You will see output similar to this:
number of zones: 8
debug level: 0
xfers running: 0
...
server is up and running
9. You have successfully configured and started your caching name server.
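Steps 3 to 5 above correspond to the following file contents (the nameserver line points resolution, on the server itself, at the local caching server):

```
# /etc/resolv.conf
nameserver 127.0.0.1

# /etc/host.conf
order hosts,bind
```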


Configuring a Master Server / Primary Master Server

1. The /etc/named.conf file on the master server needs to be modified.
2. This modification is the same as the slave configuration, i.e., adding two zones, one for forward and one for reverse lookup.
3. Add the following lines for the forward lookup:
zone "tactechnology.com" {
    notify no;
    type master;
    file "tactech.com";
};
4. Add the following lines for the reverse lookup:
zone "1.168.192.in-addr.arpa" {
    notify no;
    type master;
    file "tac.rev";
};
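The file "tactech.com" named in the forward zone lives in /var/named. A minimal sketch of its contents (the serial number and host addresses are assumed example values on the 192.168.1.0 network implied by the reverse zone):

```
$TTL 86400
@       IN  SOA  ns1.tactechnology.com. root.tactechnology.com. (
                 2017070901  ; serial
                 21600       ; refresh
                 3600        ; retry
                 604800      ; expire
                 86400 )     ; minimum TTL
        IN  NS   ns1.tactechnology.com.
ns1     IN  A    192.168.1.1
www     IN  A    192.168.1.2
```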


Configuring a Slave Server / Secondary Master Server

1. On the server you want to be the slave, go to the /etc/named.conf file and add two more zones, one for the forward lookup of your server and one for the reverse lookup.
2. For the forward lookup, you need to add the following (the masters statement lists the master server's IP address, which is omitted here):
zone "tactechnology.com" {
    notify no;
    type slave;
    file "tactech.com";
    masters { ...; };
};
3. For the reverse lookup you add the following:
zone "1.168.192.in-addr.arpa" {
    notify no;
    type slave;
    file "tac.rev";
    masters { ...; };
};
4. After modifying the /etc/named.conf file, the configuration of the slave server is complete and you can move on to configuring the master server.


Examining Server Configuration Files
1. Five files are needed to set up the named server; three of the files are required regardless of the configuration as a master, slave, or caching-only server, and two files are used only on the master server.
2. The three required files are:
a. named.conf : Found in the /etc directory, this file contains global properties and sources of configuration files.
b. named.ca : Found in /var/named, this file contains the names and addresses of root servers.
c. named.local : Found in /var/named, this provides information for resolving the loopback address for the localhost.
3. The two additional files required for the master domain server are:
a. Zone : This file contains the names and addresses of the servers and workstations in the local domain and maps names to IP addresses.
b. Reverse zone : This file provides information to map IP addresses to names.
4. The following script is used to start the BIND server:
/etc/rc.d/init.d/named : This is the BIND server initialization file used to start BIND. A sample file is installed with the RPM from the installation CDs.


 Understanding DNS
1. While browsing, we enter names that are easy to remember, and the names are resolved into numbers (IP addresses) that computers find easy to understand.
2. Enabling efficient human/machine interaction is the function of name-address resolution.
3. The first part of a domain name is the name of the company, institution, or organization. The next part, after the period (dot), is called the top-level domain (TLD).

There are several types of TLDs; some of them are :
TOP-LEVEL DOMAIN
.com : A TLD typically used to register a business
.edu : A TLD typically used to register an educational institution
.gov : A TLD given to a US government agency
.mil : A TLD used by a branch of the US military
.org : A TLD used by noncommercial organizations

4. When you type in a hostname, your system uses its resources to resolve names into IP addresses. One of these resources is /etc/nsswitch.conf (name service switch), which contains a line telling the system where to look for host information.
5. Within the nsswitch file, one of the local files searched is the /etc/hosts file.
6. The hosts file has the IP addresses and hostnames that you used on your sample network.
7. But the hosts file is not a manageable solution on a large network, because it is an impossible task to keep it up to date; you cannot have control over every IP address.
8. After the system looks in the hosts file and fails to find the address, the next file checked is /etc/resolv.conf. This file contains the IP addresses of computers that are known as domain name servers, and these are listed in /etc/resolv.conf simply as name servers.
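The search order described in the steps above is driven by the hosts: line in /etc/nsswitch.conf. A tiny shell sketch of how that line decomposes into an ordered list of lookup sources (the line contents are the example from step 3 of the caching-server configuration):

```shell
# The hosts: line from /etc/nsswitch.conf (example contents)
line="hosts: files nisplus dns"

order="${line#hosts:}"   # strip the "hosts:" key, leaving the source list
set -- $order            # split the list on whitespace

echo "first source searched: $1"    # files  (the /etc/hosts file)
echo "last source searched:  $3"    # dns    (the servers in /etc/resolv.conf)
```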


Understanding Types of Domain Servers
1. A top-level domain server, one that provides information about the top-level domains, is typically referred to as a root name server.
2. A search for www.abc.edu looks to the root name server for .edu for information.
3. The root name server then directs the search to a lower-level domain name server until the information is found.
4. You can see an example of this by using the nslookup or dig command to search for the root name server for .edu.
5. The three types of local domain name servers are master (or primary), slave (or secondary), and caching-only servers.
a. Master
i. The master contains all the information about the domain and supplies this information when requested.
ii. A master server is listed as an authoritative server when it contains the information you are seeking and it can provide that information.
b. Slave
i. The slave is intended as a backup in case the master server goes down or is not available.


ii. The slave server contains the same information as the master and provides it when requested if the master server cannot be contacted.
c. Caching
i. A caching server does not provide information to outside sources; it is used to provide domain information to other servers and workstations on the local network.
ii. The caching server remembers the domains that have been accessed. Use of a caching server speeds up searches, since the domain information is already stored in memory.


 Configuring Sendmail
1. A number of mail transport agents are available for Red Hat Linux, including qmail, smail, and Sendmail. The most widely used MTA is Sendmail, which is the default MTA for Red Hat Linux.
2. Before you start to configure Sendmail, be sure that it's installed on your computer.
3. The following example shows how to check using the rpmquery command. The output shows not only that Sendmail is installed, but which version of Sendmail is installed:
# rpmquery -a | grep sendmail
sendmail-cf-8.13.4-1.1
...
sendmail-8.13.4-1.1
4. If Sendmail is installed, make sure that it starts at boot time. You can use the following chkconfig command to verify this:
# chkconfig --list sendmail
O/P : sendmail 0:off 1:off 2:on 3:on 4:on 5:on 6:off
5. If you don't see this type of output, use the following chkconfig commands to correct it:
# chkconfig --level 0123456 sendmail off
# chkconfig --level 2345 sendmail on
6. Another technique you can use is to telnet to localhost, port 25. If Sendmail is running you'll see something resembling the following:
# telnet localhost 25
Trying ...
Connected ...
QUIT
Type QUIT to exit the session.


7. Here is a key line a Red Hat system administrator might want to edit in /etc/mail/sendmail.cf. The line begins with the uppercase letters DS and occurs near line 100:
# "Smart" relay host (may be null)
DS
8. The following example specifies the host named mailbeast.example.com as the mail relay host; change the DS line to define the name of the mail relay host:
# "Smart" relay host (may be null)
DSmailbeast.example.com


 Introducing SMTP
1. The MTA sends messages, and all messages are sent between MTAs using SMTP.
2. SMTP is the TCP/IP protocol for transferring e-mail messages between computers on a network.
3. SMTP specifies message movement between MTAs and the path the message takes, i.e., messages may go directly from the sending MTA to the receiving MTA or through other MTAs on other network computers.
4. These other computers briefly store the message before forwarding it to another MTA, if it is local to that MTA, or to a gateway that sends it to an MTA on another network.
5. The SMTP protocol can transfer only ASCII text. It can't handle fonts, colors, graphics, or attachments.
6. If you want to be able to send these items, you need to add another protocol to SMTP.
7. The protocol you need is called Multipurpose Internet Mail Extensions, or MIME. MIME enables you to add colors, sounds, and graphics to your messages while still enabling them to be delivered by SMTP. In order for MIME to work, you must have a MIME-compliant MUA.
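A minimal SMTP exchange, as you would see it by hand over telnet to port 25, looks like this (the addresses are examples; server replies are abbreviated but use the standard SMTP reply codes):

```
HELO client.example.com
250 Hello client.example.com
MAIL FROM:<sender@example.com>
250 Ok
RCPT TO:<recipient@example.com>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test message

Hello - sent over plain SMTP.
.
250 Ok: queued
QUIT
221 Bye
```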

Understanding POP3
1. POP3 is the Post Office Protocol version 3. This protocol runs on a server that is connected to a network and continuously sends and receives mail.
2. The POP3 server stores any messages it receives. POP3 was developed to solve the problem of what happens to messages when the recipient is not connected to the network.


3. Without POP3, the message could not be delivered if the recipient were offline. But with POP3, when you want to check your e-mail, you connect to the POP3 server to retrieve the messages that were stored by the server.
4. After you retrieve your messages, you can use the MUA on your PC to read them. Of course, your MUA has to understand POP3 to be able to communicate with the POP3 server.
5. The messages you retrieve to your PC are then typically removed from the server. This means that they are no longer available to you if you want to retrieve them to another PC.
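The retrieve-then-delete cycle described above maps onto a short POP3 session (the username, password, and message sizes are examples; successful server replies begin with +OK):

```
+OK dovecot ready.
USER bob
+OK
PASS secret
+OK Logged in.
LIST
+OK 1 messages:
1 512
.
RETR 1
+OK 512 octets
(message headers and body follow)
.
DELE 1
+OK Marked to be deleted.
QUIT
+OK Logging out, messages deleted.
```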


 Maintaining Email Security
1) Protecting against Eavesdropping
i) Your mail message goes through more computers than just yours and your recipient's because of store-and-forward techniques.
ii) All a cracker has to do to snoop through your mail is use a packet sniffer program to intercept passing mail messages.
2) Using Encryption
i) Many email products enable your messages to be encrypted (coded in a secret pattern) so that only you and your recipient can read them.
ii) IBM's Lotus Notes provides email encryption, for example. One common method is to sign your messages using digital signatures, which makes it possible for people to confirm that a message purporting to come from you did in fact come from you.
iii) Fedora Core and RHEL ship with GNU Privacy Guard, or GPG, which provides a full suite of digital signature and encryption services.
3) Using a Firewall
i) If you receive mail from people outside your network, you should set up a firewall to protect your network. The firewall is a computer that prevents unauthorized data from reaching your network.
ii) For example, if you don't want anything from ispy.com to penetrate your net, put your net behind a firewall. The firewall blocks out all ispy.com messages.


4) Don't Get Bombed, Spammed, or Spoofed
1. Bombing
a. Bombing happens when someone continually sends the same message to an email address, either accidentally or maliciously.
b. If you reside in the United States and you receive 200 or more copies of the same message from the same person, you can report the bomber to the FBI.
2. Spamming
a. Spamming is a variation of bombing. A spammer sends unsolicited (unwanted) email to many users (hundreds, thousands, and even tens of thousands).
3. Spoofing
a. Spoofing happens when someone sends you email from a fake address.


Serving Email with POP3 and IMAP
Windows systems used as desktop network clients ordinarily do not have an MTA of their own. Such systems access email using IMAP or POP3.

Setting up an IMAP Server
1. The IMAP implementation configured here is the Dovecot IMAP server.
2. First, make sure the Dovecot package is installed.
3. The following rpmquery command shows you whether the Dovecot package is installed. If not, install the dovecot package before proceeding:
# rpmquery dovecot
dovecot-0.99.14-4.fc4
4. Configure the dovecot service to start when the system boots. Use the following commands to start dovecot at boot time:
# chkconfig --levels 0123456 dovecot off
# chkconfig --levels 345 dovecot on
5. Connect to the IMAP server, again using telnet:
$ telnet localhost imap
Trying ...
Connected to localhost.localdomain (...)
logout
6. To test the POP3 server, connect to it as a mortal user using telnet:
$ telnet localhost pop3
Trying ...
Connected to localhost.localdomain
+OK dovecot ready.
quit
+OK Logging out

Tracing the Email Delivery Process
Before examining the e-mail delivery process, we will review some basic concepts about e-mail.
1) Email:
i) An email message, just like a letter sent through regular mail, begins with a sender and ends with a recipient.
ii) We use TCP/IP protocols to provide e-mail service on the Internet.
2) Email Programs:
i) A Mail User Agent (MUA) for users to be able to read and write e-mail.
ii) A Mail Transfer Agent (MTA) to deliver the e-mail messages between computers across a network.
iii) A Local Delivery Agent (LDA) to deliver messages to users' mailbox files.
iv) A mail notification program to tell users that they have new mail.

Email Delivery Process:
1) Mail User Agent (MUA)
i) To be able to send mail, you, or your users, need a program called a Mail User Agent (MUA). The MUA, also called a mail client, enables users to write and read mail messages.
ii) Two types of MUAs are available: graphical user interface (GUI) clients, such as Netscape Messenger, and command-line clients, such as Pine.


2) Mail Transfer Agent (MTA)
i) Whether your MUA is a GUI or command-line interface, after the message is composed, the MUA sends it to the mail transfer agent (MTA).
ii) The MTA is the program that sends the message out across the network and does its work without any intervention by the user.
iii) The MTA installed by default on your Red Hat system is called Sendmail.
iv) The MTA reads the information in the To section of the e-mail message and determines the IP address of the recipient's mail server.
v) Then the MTA tries to open a connection to the recipient's server through a communication port, typically port 25.
vi) If the MTA on the sending machine can establish a connection, it sends the message to the MTA on the recipient's server using the Simple Mail Transfer Protocol (SMTP).
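The SMTP exchange described in step vi) is a sequence of short text commands. The block below simply prints the client side of a hypothetical dialogue (the hostname and addresses are invented for illustration); a real MTA would write these lines to port 25 of the recipient's server and read a numeric reply code after each one.

```shell
# Print the client side of a minimal SMTP dialogue (nothing is sent anywhere).
printf '%s\n' \
  'HELO coondog.example.com' \
  'MAIL FROM:<bubba@example.com>' \
  'RCPT TO:<somebody@example.org>' \
  'DATA' \
  'Subject: test message' \
  '' \
  'Hello from SMTP.' \
  '.' \
  'QUIT'
```

The lone `.` on a line by itself ends the DATA section; that detail is part of the SMTP protocol, not a quirk of this sketch.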

3) Local Delivery Agent (LDA)
i) After the LDA receives the message from the MTA, it places the message in the receiver's mailbox file, which is identified by the username.
ii) On your Red Hat system this is a program called procmail. The location of the user's mailbox file is /var/spool/mail/.
iii) The final step in the process happens when the user who is the intended receiver of the message reads the message. The user does this using the MUA on his or her PC.


4) Mail Notifier
(1) An optional program is a mail notifier that periodically checks your mailbox file for new mail. If you have such a program installed, it notifies you of the new mail.
(2) If new mail has arrived, the shell displays a message just before it displays the next system prompt. It won't interrupt a program you're running.
(3) You can adjust how frequently the mail notifier checks and even which mailbox files to watch.
(4) If you are using a GUI, there are mail notifiers available that play sounds or display pictures to let you know that new mail has arrived.
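As a concrete example of point (3), the bash shell's built-in notifier is controlled by the MAIL and MAILCHECK variables. This is a minimal sketch; the mailbox path is the conventional Red Hat location described earlier.

```shell
# Tell bash which mailbox file to watch and how often (seconds) to check it.
MAIL=/var/spool/mail/$USER
MAILCHECK=60

echo "watching $MAIL every $MAILCHECK seconds"
```

Setting MAILCHECK to a smaller value makes the "You have new mail" notice appear sooner after delivery.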


Using the Postfix Mail Server
1. Postfix is a mail transport agent used every day at sites that handle thousands, and even tens of thousands, of messages per day.
2. The best part is that Postfix is fully compatible with Sendmail at the command level.
3. The similarity is deliberate, for Postfix was designed to be a high-performance, easier-to-use replacement for Sendmail.

Switching to Postfix
1. By default, Fedora Core and RHEL use Sendmail. Switching to Postfix is simple, but before doing so, stop Sendmail:
# service sendmail stop
2. The next step is to make sure Postfix is installed:
$ rpmquery postfix
postfix-2.2.2-2

Configuring Postfix
1. The configuration file is /etc/postfix/main.cf. The following variables need to be checked or edited:
a. Domain name:
mydomain = example.com
b. Local machine name:
myhostname = coondog.example.com
c. Domain name appended to unqualified addresses:
myorigin = $mydomain
This causes all outgoing mail to have your domain name appended.

d. The mydestination variable tells Postfix which addresses it should deliver locally:
mydestination = $myhostname, localhost, localhost.$mydomain

2. Postfix supports a far larger number of configuration variables than the four just listed, but these are the mandatory changes you have to make.
3. Create or modify the /etc/aliases file. At the very least, you need aliases for postfix, postmaster, and root so that mail sent to those addresses reaches a real person. Example:
postfix: root
postmaster: root
root: bubba
4. After creating or modifying the aliases file, regenerate the alias database using Postfix's newaliases command:
# /usr/sbin/newaliases
5. The last step is to start Postfix:
# service postfix start
Starting postfix: [ OK ]
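Collected in one place, the mandatory main.cf edits from step 1 might look like the fragment below (example.com and coondog are the chapter's example names; substitute your own domain and hostname):

```
# /etc/postfix/main.cf (fragment)
mydomain = example.com
myhostname = coondog.example.com
myorigin = $mydomain
mydestination = $myhostname, localhost, localhost.$mydomain
```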


Using SFTP

1. As an alternative to configuring vsftpd to play nicely with SSL, you can use sftp-server, a program that is part of the OpenSSH (Secure Shell) suite of secure client and server programs.
2. sftp-server implements the server-side portion of the SFTP protocol.
3. In order to use sftp, you need to have the OpenSSH-related packages installed. The following query checks whether they are installed:
$ rpmquery openssh{,-clients,-askpass,-server}
If installed, the output resembles:
openssh-4.0p1-2
...
openssh-server-4.0p1-2
4. The next step is to make sure the following line appears in /etc/ssh/sshd_config:
Subsystem sftp /usr/libexec/openssh/sftp-server
5. This directive tells sshd to execute the program /usr/libexec/openssh/sftp-server to service the SFTP subsystem.
6. Finally, restart the SSH daemon using the following command:
# service sshd restart
7. An important difference between clear-text FTP and secure FTP is that sftp does not support anonymous FTP; users will always be prompted to provide a password unless they have set up SSH keys.
8. But you can configure vsftpd to provide anonymous FTP and then use OpenSSH to provide secure, authenticated FTP service for users that have valid accounts on the system. The two services can exist side by side because sftp runs over SSH on port 22, while FTP uses port 21.
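A quick way to confirm the Subsystem line from step 4 is a grep. The sketch below uses a stand-in temporary file so it can run anywhere; on a real server you would point grep at /etc/ssh/sshd_config itself.

```shell
# Stand-in for /etc/ssh/sshd_config; on a real system grep the actual file.
f=$(mktemp)
echo 'Subsystem sftp /usr/libexec/openssh/sftp-server' > "$f"

# Count matching Subsystem lines; 1 means the SFTP subsystem is configured.
grep -cE '^Subsystem[[:space:]]+sftp' "$f"

rm -f "$f"
```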

Advanced FTP Server Configuration
1) Running vsftpd from xinetd
i) The standard vsftpd installation does not install a configuration file for xinetd; however, vsftpd's author provides a sample xinetd configuration file.
ii) Starting vsftpd from xinetd is not recommended, because the performance will be crummy.
2) Enabling Anonymous Uploads
i) Anonymous uploads pose all sorts of security risks: imagine someone uploading a virus or Trojan to your server, or your FTP server becoming a warez server, that is, a transfer point for illegal or cracked copies of software, music, and/or movies.
ii) Enabling anonymous uploads is not recommended, but if such a requirement arises, follow the steps below.
iii) Edit /etc/vsftpd/vsftpd.conf and change or add the entries shown in the table below.




Directive                Description
anon_umask=077           Sets umask used for files created by the anonymous user to 077
anon_upload_enable=YES   Permits anonymous uploads
write_enable=YES         Permits FTP write commands


iv) Create a directory for anonymous uploads:
# mkdir /var/ftp/incoming
v) Change the permissions of /var/ftp/incoming:
# chmod 770 /var/ftp/incoming
# chmod g+s /var/ftp/incoming
The first chmod command gives the user and group full access to /var/ftp/incoming. The second command turns on the set-group-ID bit on the directory, causing any file created in this directory to have its group ownership set to the group that owns the directory.
vi) Make the ftp user the group owner of /var/ftp/incoming:
# chgrp ftp /var/ftp/incoming
vii) Restart vsftpd:
# service vsftpd restart
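The permission steps above can be tried safely in a scratch directory. In this sketch mktemp stands in for /var/ftp/incoming, and the chgrp step is omitted because it needs the real ftp group.

```shell
# Create a scratch stand-in for /var/ftp/incoming.
d=$(mktemp -d)/incoming
mkdir "$d"

chmod 770 "$d"   # user and group: full access; others: none
chmod g+s "$d"   # set-group-ID bit: new files inherit the directory's group

stat -c '%a' "$d"   # prints 2770: the leading 2 is the set-group-ID bit
```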

3) Enabling Guest User FTP Accounts
i) A guest user, referred to in vsftpd's documentation as a virtual user, is any nonanonymous login that uses a login name that does not exist as a real login account on the FTP server.
ii) Command to create a guest user account:
# useradd -d /var/ftp/rhlnsa3 -s /sbin/nologin authors
iii) This command creates the user authors with a home directory of /var/ftp/rhlnsa3 and a login shell of /sbin/nologin, which disables local logins for that account.


4) Running vsftpd over SSL
i) vsftpd's Secure Sockets Layer (SSL) support convincingly answers one of the primary criticisms of FTP: passing authentication information in clear text.
ii) vsftpd can use SSL to encrypt FTP's control channel, over which authentication information is passed, and FTP's data channel, over which file transfers occur. To use SSL with vsftpd, you need to set at least ssl_enable=YES in /etc/vsftpd/vsftpd.conf.
iii) Add the following entries to /etc/vsftpd/vsftpd.conf:
Directive                    Description
ssl_enable=YES               Enables vsftpd's SSL support
allow_anon_ssl=YES           Permits anonymous users to use SSL
force_local_data_ssl=YES     Forces local users' FTP sessions to use SSL on the data connection
force_local_logins_ssl=YES   Forces local users' FTP sessions to use SSL for the login exchange
ssl_tlsv1=YES                Enables vsftpd's TLS version 1 protocol support

iv) Create a self-signed RSA certificate file:
# cd /usr/share/ssl/certs
# make vsftpd.pem
v) Start or restart the vsftpd service:
# service vsftpd start


Note: umask (user mask) is a command and a function in POSIX environments that sets the file mode creation mask of the current process, which limits the permission modes for files and directories created by the process.
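A short demonstration of the note above: with a umask of 077, a newly created file is readable and writable by its owner only. The directory and filename here are scratch values.

```shell
# Work in a throwaway directory so nothing real is touched.
d=$(mktemp -d)
(
  cd "$d"
  umask 077         # mask out all group and other permission bits
  touch secret.txt  # default 666 minus the mask leaves 600
  stat -c '%a' secret.txt   # prints 600
)
```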


Configuring User Level FTP Access

1) The /etc/vsftpd/ftpusers file contains a list of user or account names, one per line, that are not allowed to log in using FTP; this file is used to increase security.
2) In general, /etc/vsftpd/ftpusers is used to prevent privileged user accounts, such as root, from using FTP to obtain access to the system.
3) The following listing shows the default /etc/vsftpd/ftpusers file:
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
4) So, to prevent a user named bubba from logging in to the system via FTP, add bubba to the end of /etc/vsftpd/ftpusers.

Configuring vsftpd Features / More about /etc/vsftpd/vsftpd.conf
1. The most important (and potentially the longest) vsftpd configuration file is /etc/vsftpd/vsftpd.conf.


2. The configuration directives in this file enable you to exercise fine-grained control over vsftpd's behavior.
3. The configuration file itself has a pleasantly simple format: each line is either a comment, which begins with #, or a directive. Directives have the form option=value.
4. Most of the configuration options are Boolean, so they are either on or off, or, rather, YES or NO.
5. A second group of configuration options takes numeric values, and a third, considerably smaller set accepts string values.
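As an illustration of points 3 through 5, here is a hypothetical vsftpd.conf fragment mixing the three kinds of values (these directive names exist in vsftpd, but the values are examples only):

```
# Boolean directive: YES or NO
anonymous_enable=NO
# Numeric directive
max_clients=50
# String directive
ftpd_banner=Welcome to the example FTP service
```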

Disabling Anonymous FTP
1. The easiest way is to remove the ftp user from /etc/passwd and /etc/group.
2. Commands:
# cp -p /etc/passwd /etc/passwd.ftp
# cp -p /etc/group /etc/group.ftp
# userdel -r ftp
userdel: /var/ftp not owned by ftp, not removing
# find / -user 50 | xargs rm -r
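The find / -user 50 step locates files by numeric owner (UID 50 is the ftp user on the chapter's system). The same idiom can be tried harmlessly against a scratch directory and your own UID:

```shell
# Make a scratch directory containing one file owned by the current user.
d=$(mktemp -d)
touch "$d/leftover"

# List files under $d owned by our own UID, mirroring: find / -user 50
find "$d" -type f -user "$(id -u)"
```

In the real cleanup the matches are piped to xargs rm -r instead of being listed.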


Configuring vsftpd

1. You might not have installed vsftpd. To find out, execute the command:
# rpmquery vsftpd
2. The output shows either
vsftpd-2.0.1-5                    // If installed
or
package vsftpd is not installed   // If not installed
3. If vsftpd is installed, configure it to start at boot time using the chkconfig command:
# chkconfig --levels 0123456 vsftpd off
# chkconfig --levels 345 vsftpd on
4. Alternatively, you can use the graphical Service Configuration tool. To do so, type system-config-services at a command prompt or select Main Menu -> System Settings -> Server Settings -> Services.
5. In the Service Configuration window, scroll down to the vsftpd entry near the bottom of the services scroll box.
6. Click the check box next to vsftpd to enable it and then click Save to save your changes. Select File -> Exit to close the Service Configuration tool after saving your changes.


7. Next, add the following line to the bottom of /etc/vsftpd/vsftpd.conf, the vsftpd configuration file:
listen=YES
This entry configures vsftpd to run as a standalone daemon.
8. Now start vsftpd:
# service vsftpd start
9. Finally, try to log in as an anonymous user. You can use a login name of ftp or anonymous:

$ ftp localhost
Connected to localhost (...)
220 (vsFTPd 2.0.1)
Name (localhost:bubba): ftp
331 Please specify the password.
Password:
230 Login successful.


Introducing vsftpd
1. The default FTP server daemon on Fedora Core and RHEL systems is vsftpd, the Very Secure FTP Daemon, which has a project Web site at http://vsftpd.beasts.org/.
2. vsftpd is extremely lightweight in that it makes sparing use of system resources and does not rely on system binaries for parts of its functionality.
3. vsftpd offers additional security features and usability enhancements, some of which are:
a. Support for virtual IP configurations
b. Support for so-called virtual users
c. Can run as a standalone daemon or from inetd or xinetd
d. Configurable on a per-user or per-IP basis
e. Bandwidth throttling
f. IPv6-ready

Note: System binaries are all the program files (including libraries) you have installed. What Tripwire (an intrusion-detection alarm) does is save a checksum of every program in a database so it can later check whether a file has been corrupted.
A virtual IP address (VIP or VIPA) is an IP address assigned to multiple domain names, servers, or applications residing on a single server, rather than being tied to a specific server or network interface card (NIC).



