
Storage Area Network

A Seminar Report

Submitted by
Dheeraj Kumar (Roll No. 2015024143)
MCA II Year

Under the Guidance of
Mr. Muzammil Hasan (AP, CSED)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Madan Mohan Malaviya University of Technology, Gorakhpur (U.P.), INDIA
Session 2015-16

CONTENTS

1. Acknowledgement
2. Introduction
3. Architecture
4. Need of SAN
5. H/W and S/W Requirements
6. Storage Models
7. Comparison
8. Components of SAN
9. Advantages
10. Disadvantages
11. Conclusion
12. References

ACKNOWLEDGEMENT I am very grateful to my seminar guide Mr. Muzammil Hasan, Asst. Professor, Dept. of Computer Science & Engineering, for his consistent and unfailing support and continuous encouragement throughout this seminar preparation work at the university. He has provided me with guidance and encouragement enabling me to achieve the goal I had set for myself. The freedom provided to me during my tenure here is a model to follow. I express my sincere thanks to Dr. Rakesh Kumar (Prof. and Head of the Computer Science & Engineering Department) and other staff members for providing necessary facilities for successful completion of this work.

I would also like to acknowledge the efforts of the lab administration and support staff, who ensured that I always had the necessary computing resources operational. I would like to thank all of my colleagues for their constructive suggestions and criticism during the development of this work.

INTRODUCTION
A storage area network (SAN) moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device. A SAN is typically assembled from three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activity. Storage area networks are managed centrally, and Fibre Channel (FC) SANs have a reputation for being expensive, complex and difficult to manage. The emergence of iSCSI has reduced these challenges by encapsulating SCSI commands into IP packets for transmission over an Ethernet connection rather than an FC connection. Instead of learning, building and managing two networks -- an Ethernet local area network (LAN) for user communication and an FC SAN for storage -- an organization can now use its existing knowledge and infrastructure for both LANs and SANs.
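The iSCSI encapsulation described above can be sketched in a few lines. This is an illustrative sketch only: the CDB layout follows the standard 10-byte SCSI READ(10) format, but the surrounding header is a simplified stand-in for a real iSCSI PDU (whose 48-byte Basic Header Segment is defined in RFC 7143), and the magic number is invented for the example.

```python
import struct

def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Pack a 10-byte SCSI READ(10) command descriptor block.

    Layout: opcode (0x28), flags, 4-byte logical block address,
    group number, 2-byte transfer length in blocks, control byte.
    """
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, num_blocks, 0)

def encapsulate(cdb: bytes) -> bytes:
    """Prefix the CDB with a toy header (magic + length) so it can
    ride in a TCP/IP payload -- a simplified stand-in for an iSCSI
    PDU, not the real wire format."""
    return struct.pack(">HH", 0xBEEF, len(cdb)) + cdb

# Ask for 8 blocks starting at logical block 2048.
packet = encapsulate(build_read10_cdb(lba=2048, num_blocks=8))
```

Once framed this way, the request travels over ordinary TCP/IP to the storage target, which unwraps and executes the SCSI command.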

Architecture
There are mainly four kinds of network storage systems: server-attached RAID, centralized RAID, NAS and SAN. The former two have been used for many years, yet the limitations of their usability, scalability, data backup and migration make it difficult for them to meet modern application demands. The latter two are the primary network storage systems today, and each has its own advantages. But each also has its own limitations, and neither alone can keep up with rapidly growing network applications. One proposed remedy is a network storage architecture that integrates NAS and SAN over IP: the High Performance Storage Network (HPSN). First, with the help of a Global Multi-Protocol File System (GMPFS), HPSN unifies NAS and SAN and meets the requirements of high scalability and capacity. Second, HPSN can provide block I/O and file I/O services at the same time through an iSCSI module, gaining the advantages of NAS.

Fig.: Architecture of a SAN

Need of SAN
A storage area network (SAN) is a network which provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small- to medium-sized business environments. A SAN does not provide file abstraction, only block-level operations; however, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems.
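The block-level access just described can be illustrated with a short sketch: the helper below addresses storage purely by logical block address, the way a SAN initiator does, with no notion of files. The in-memory buffer stands in for a real block device (an actual initiator would open something like /dev/sdb instead), so the names and sizes here are illustrative assumptions.

```python
import io

BLOCK_SIZE = 512  # a common logical block size

def read_blocks(device, lba: int, count: int = 1) -> bytes:
    """Block-level read: locate data by logical block address (LBA)
    rather than by file name or path."""
    device.seek(lba * BLOCK_SIZE)
    return device.read(count * BLOCK_SIZE)

# An in-memory "disk" of 8 zeroed blocks stands in for a SAN volume.
disk = io.BytesIO(bytes(8 * BLOCK_SIZE))
chunk = read_blocks(disk, lba=2, count=3)
```

A shared-disk file system layers the familiar file abstraction on top of exactly this kind of raw block interface.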

Hardware and Software Requirements

Hardware Requirements:
- Minimum 4 GB RAM
- Core 2 Duo processor, 2.0 GHz
- Hard disk drive of at least 1 TB for creating the storage server

Software Requirements:
- ESXi Server 4.0 (64-bit)
- vSphere Client 4.0
- Microsoft Windows XP / 7 environment

Storage Models
Network storage is commonly deployed in three models: direct attached storage (DAS), network attached storage (NAS) and storage area networks (SANs). DAS attaches disks directly to a single server; NAS exposes file-level storage over the LAN; and a SAN provides consolidated, block-level storage over a dedicated storage network. The three models are compared in detail in the following section.

Fig.: Storage models

Comparison
A storage area network (SAN) applies a networking model to storage in the data center. SANs operate behind the servers to provide a common path between servers and storage devices. Unlike server-based direct attached storage (DAS) and file-oriented network attached storage (NAS) solutions, SANs provide block-level access to data that is shared among computing resources. The predominant SAN technology is implemented in a Fibre Channel (FC) configuration, although newer configurations are becoming popular, including iSCSI and Fibre Channel over Ethernet (FCoE). The media on which the data is stored is also changing. With the growth of SANs and the worldwide dominance of Internet Protocol (IP), using IP networks to transport storage traffic is at the forefront of technical development. IP networks provide increasing levels of manageability, interoperability and cost-effectiveness. By converging storage with existing IP networks (LANs/MANs/WANs), immediate benefits are seen through storage consolidation, virtualization, mirroring, backup and management. The convergence also provides increased capacity, flexibility, expandability and scalability.

Direct Attached Storage (DAS): DAS is the traditional method of locally attaching storage devices to servers via a direct communication path between the server and the storage devices. The connectivity between the server and the storage devices is on a dedicated path, separate from the network cabling, and access is provided via an intelligent controller. The storage can only be accessed through the directly attached server. This method was developed primarily to address shortcomings in the drive bays of host computer systems.

Network Attached Storage (NAS): NAS is a file-level access storage architecture with storage elements attached directly to a LAN. It provides file access to heterogeneous computer systems. Unlike other storage systems, the storage is accessed directly via the network, as shown in Figure 2. An additional layer is added to address the shared storage files. This system typically uses NFS (Network File System) or CIFS (Common Internet File System), both of which are IP applications. A separate computer usually acts as the "filer", which is basically a traffic and security access controller for the storage; the filer may also be incorporated into the storage unit itself.

Storage Area Networks (SANs): A SAN is connected behind the servers. SANs provide block-level access to shared data storage; block-level access refers to addressing specific blocks of data on a storage device, as opposed to file-level access, and one file will span several blocks. SANs provide high availability and robust business continuity for critical data environments. SANs are typically switched-fabric architectures using Fibre Channel (FC) for connectivity.
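Since one file spans several blocks, a shared-disk file system must map byte ranges onto runs of logical blocks. A minimal sketch of that mapping follows; the block size and starting LBA are arbitrary example values, and real file systems also handle non-contiguous extents, which this sketch ignores.

```python
import math

BLOCK_SIZE = 512  # bytes per logical block (example value)

def blocks_for_file(file_size: int, start_lba: int) -> list:
    """Return the logical block addresses a file of file_size bytes
    occupies, assuming it is stored contiguously from start_lba."""
    count = math.ceil(file_size / BLOCK_SIZE)
    return list(range(start_lba, start_lba + count))

# A 1300-byte file starting at block 100 spans three 512-byte blocks.
spans = blocks_for_file(1300, start_lba=100)
```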

Fig.: DAS, NAS and SAN

Components of SAN
File Server: Multiple servers from different vendors, running different operating systems, can all be connected to a storage area network (SAN). Unlike DAS (direct attached storage), all servers connected to the SAN can share all of the storage available, which reduces the overall cost per megabyte for your business.

Storage Area Network Fabric: The SAN fabric is effectively the SAN network that connects the servers to the storage. The fabric is made up of 2 Gb/s Fibre Channel switches (supplied by multiple vendors, including Cisco, Brocade and IBM) which manage the connectivity from each server's HBA (Host Bus Adapter) to the SAN storage.

Host Bus Adapters (HBAs): An HBA is a PCI adapter that connects a server to the SAN fabric. Each HBA installed is referred to as a host. Two HBAs can be installed in each server for additional resilience.
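The resilience gained from dual HBAs comes down to path failover: if an I/O fails on one path, the initiator retries on the other. A minimal sketch, with each path modelled as a callable; real multipathing lives in the OS driver stack (for example Linux device-mapper multipath), not in application code like this.

```python
def read_with_failover(paths, lba: int) -> bytes:
    """Try each HBA path in order; return the first successful read.
    `paths` is a list of callables, one per installed HBA."""
    last_error = None
    for path in paths:
        try:
            return path(lba)
        except OSError as err:
            last_error = err  # this path is down; fail over to the next
    raise last_error

def primary(lba):
    raise OSError("HBA 0: link down")   # simulate a failed path

def secondary(lba):
    return b"\x00" * 512                # healthy path returns one block

data = read_with_failover([primary, secondary], lba=7)
```

The read succeeds despite the dead primary link, which is exactly the benefit the second HBA buys.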

Fiber Cabling: High-speed fiber-optic cabling used to interconnect servers, storage and tape backup devices.

Tape Library: Linking a tape library into the SAN fabric provides a fast and reliable way to back up critical data. Data stored on the SAN is transferred directly to the tape library using "serverless" backup technology. This reduces the load on each server and ensures data is backed up within the available time window.

Management Software: The management software enables individual components to be configured and optimized for performance. It monitors the network for bottlenecks, enabling IT managers to pre-empt problems and adjust accordingly.
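The bottleneck monitoring described above can be sketched as a sliding-window check on per-port throughput. The window size and 100 MB/s floor below are arbitrary example thresholds, not values from any particular SAN management product.

```python
from collections import deque

class ThroughputMonitor:
    """Flag a port whose recent average throughput drops below a
    floor -- the kind of early-warning check SAN management
    software performs on fabric ports."""

    def __init__(self, window: int = 5, floor_mb_s: float = 100.0):
        self.samples = deque(maxlen=window)  # most recent readings
        self.floor = floor_mb_s

    def record(self, mb_per_s: float) -> None:
        self.samples.append(mb_per_s)

    def is_bottleneck(self) -> bool:
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) < self.floor

mon = ThroughputMonitor(window=3, floor_mb_s=100.0)
for reading in (120.0, 80.0, 60.0):  # throughput sagging over time
    mon.record(reading)
```

Keeping only a small window of recent samples means one transient dip does not raise an alarm, but a sustained sag does.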

Fig.: Components of a SAN

Advantages
- Automatic backup
- High data storage capacity
- Reduced cost across multiple servers
- Faster disaster recovery
- Data sharing
- Improved backup and recovery
- High performance

Disadvantages
- Very expensive
- Requires highly skilled technical staff
- Hard to maintain
- Not affordable for small businesses

CONCLUSION

A storage area network (SAN) is a network designed to attach computer storage devices, such as disk array controllers and tape libraries, to servers. A SAN allows a machine to connect to remote targets such as disks and tape drives on a network for block-level I/O. SAN architectures are not for everyone. But if your applications demand continuous operation and the benefits of universal access to data, then you are encouraged to upgrade to an enterprise SAN. Whatever your reasons, it is a superb architecture around which to operate your business.

REFERENCES

1. A. Gallatin, J. Chase, and K. Yocum, "Trapeze/IP: TCP/IP at Near-Gigabit Speeds", Proceedings of the USENIX Technical Conference (FreeNix Track), June 1999.
2. R. Van Meter, G. Finn, and S. Hotz, "VISA: Netstation's Virtual Internet SCSI Adapter", ASPLOS-VIII, October 1998.
3. A. Benner, "Fibre Channel: Gigabit Communications and I/O for Computer Networks", McGraw-Hill, 1996.
4. J. Satran et al., "iSCSI", IETF Work in Progress (IPS group), http://www.ietf.org/html.charters/ips-charter.html, 2001.
5. H. K. J. Chu, "Zero-copy TCP in Solaris", Proceedings of the USENIX 1996 Annual Technical Conference, January 1996.
