Online Doctor Finder
Chapter 1 Introduction ________________________________________________________________________
The purpose of this project is to develop an Online Doctor Finder system that provides customers/patients with the facility to search for doctors and book appointments online. The system will provide all of its facilities to a customer once their credentials [user id and password] are authenticated, including viewing account information, changing the address, retrieving a forgotten password, performing transactions, and viewing appointments. The system should also support an online enrolment facility for new customers. The administrator should be able to perform various operations, such as entering all hospital details for customers and providing an easy search facility to users. When customers want to take an appointment, they must register first; the admin then verifies their status after checking all details. The administrator also has the privilege to close a customer's account at the customer's request. The customer should be able to access his/her account from anywhere simply by entering the correct user id and password.
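The authentication step described above (matching a user id and password before granting access) can be sketched as follows. This is an illustrative Python sketch only; the class and method names are hypothetical, and the actual project would implement this inside ASP.Net with SQL Server rather than an in-memory store.

```python
import hashlib
import hmac
import os

class AuthService:
    """Minimal credential store: online enrolment plus user-id/password check."""

    def __init__(self):
        self._users = {}  # user_id -> (salt, password digest)

    def enroll(self, user_id, password):
        # Store a salted PBKDF2 digest rather than the plain password.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[user_id] = (salt, digest)

    def authenticate(self, user_id, password):
        record = self._users.get(user_id)
        if record is None:
            return False
        salt, digest = record
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(candidate, digest)
```

Only when `authenticate` returns true would the customer be shown the account pages (appointments, profile, and so on).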
Chapter 2 System Analysis ________________________________________________________________________
2.1 Identification of Need
Need to locate a provider quickly? Our online Doctor Finder (provider search) gives you flexibility in a simple format. Be sure to check your criteria on the provider-search webpage most appropriate for your plan. This online Doctor Finder helps you find a perfect match for your medical needs: it provides basic professional information on virtually every licensed physician. While it is our goal to provide the most up-to-date information, our provider network is constantly developing, so always verify that the provider you choose is participating in our network before you receive services. Schedule appointments 24 hours a day, 7 days a week: whether it's 2:00 AM and your office is closed or it's 2:00 PM and your phones are busy, be there for your patients and fill your schedule too.
Turn your website traffic into real appointments. Potential patients are visiting your site right now, and leaving. In a matter of minutes, Doctor Finder can allow these visitors to book appointments with you instantly.
You receive the appointment details! Patients provide their reason for visit and insurance information, so your practice always runs smoothly. We send several appointment reminders to make sure your patients show up on time. Patients can even book appointments directly from your personal website.
Search hint: The optimal way to search for a physician by name is to search by last name only and the state. You may also want to perform a "sounds-like" search if you are unsure of the exact spelling of a name or city, or if your search did not return the desired results. This option is available beneath the name and address fields on the "Search by Physician Name" page. The optimal way to search for a physician by specialty is to select a specialty and state. If your search result is larger than the predetermined limit, you will be asked to narrow the search by adding a city and/or zip code. Occasionally, a physician is difficult to locate because:
The physician has moved to a different state and the AMA has not yet received the new address;
A small number of physicians have requested a "no contact" designation on their AMA records (no contact records are managed like an unlisted phone number and are not released);
Physicians without active licenses do not appear in DoctorFinder;
The physician's name may have a space in it, like "Mc Donald" (use of the space is required);
DoctorFinder uses the primary medical specialty designated by the physician (your physician may practice more than one medical specialty).
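The "sounds-like" search mentioned above can be sketched with a phonetic encoding such as Soundex, so that "Smith" and "Smyth" (or "McDonald" and "Mc Donald") match. This Python sketch uses a simplified Soundex variant for illustration; the field names and the exact algorithm are assumptions, not the AMA's actual implementation.

```python
def soundex(name):
    """Simplified Soundex: first letter plus up to three digit codes."""
    letters = "".join(ch for ch in name.upper() if ch.isalpha())
    if not letters:
        return ""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    result = letters[0]
    prev = codes.get(letters[0], "")
    for ch in letters[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result += code
        if ch not in "HW":  # H and W do not reset the previous code
            prev = code
    return (result + "000")[:4]

def find_physicians(directory, last_name, state, sounds_like=False):
    """Filter a physician list by state and exact or sounds-like last name."""
    wanted = soundex(last_name) if sounds_like else last_name.lower()
    hits = []
    for doc in directory:
        if doc["state"] != state:
            continue
        key = soundex(doc["last_name"]) if sounds_like else doc["last_name"].lower()
        if key == wanted:
            hits.append(doc)
    return hits
```

With `sounds_like=True`, a search for "Smith" in NY would also return a "Smyth" in NY, which is exactly the behaviour the search hint recommends when the spelling is uncertain.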
2.2 Preliminary Investigation
In this phase, the development team visits the customer and studies their system, investigating the need for possible software automation. By the end of the preliminary investigation, the team furnishes a document containing its specific recommendations for the candidate system, including personnel assignments, costs, the project schedule, and target dates. The main tasks of the preliminary investigation phase are:
Investigate the present system and identify the functions to be performed
Identify the objectives of the new system. In general, an information system benefits a business by increasing efficiency, improving effectiveness, or providing a competitive advantage
Identify problems and suggest a few solutions
Identify constraints, i.e. the limitations placed on the project, usually relating to time, money and resources
Evaluate feasibility - whether the proposed system promises sufficient benefit to invest the additional resources necessary to establish the user requirements in greater detail
To conclude the preliminary examination, the systems analyst writes a brief report to management in which the following are listed:
The problem that triggered the initial investigation
The time taken by the investigation and the persons consulted
The true nature and scope of the problem
The recommended solutions for the problem and a cost estimate for each solution
The analyst should then arrange a meeting with management to discuss the report and other matters if need be. The end result, or deliverable, from the preliminary investigation phase is either a willingness to proceed further or the decision to abandon the project.
2.3 Feasibility Study
It is a test of a system proposal according to its workability, impact on the application area, ability to meet user needs, and effective use of resources. It focuses on four major questions:
1. What are the user's demonstrable needs and how does a candidate system meet them?
2. What resources are available for the given candidate system? Is the problem worth solving?
3. What are the likely impacts of the candidate system on the application area?
4. How well does it fit within the application area?
These questions revolve around investigation and evaluation of the problem, identification and description of candidate systems, specification of the performance and cost of each system, and final selection of the best system. The objective of the feasibility study is not to solve the problem but to acquire a sense of its scope. During the analysis, the problem definition is crystallized and the aspects of the problem to be included in the system are determined. Feasibility analysis serves as a decision phase that asks: is there a new and better way to do the job that will benefit the user, and what are the costs and savings of the alternatives? Three key considerations are involved in feasibility analysis: economic, technical, and behavioural.
2.3.1 Economic Feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness of a candidate system. More commonly known as cost-benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If the benefits outweigh the costs, the decision is made to design and implement the system. The benefits and savings expected from this candidate system are mainly in terms of time: when a user can directly handle a task through the interfaces provided by the system, without the burden of coding for every kind of modification, a great deal of time and human effort is saved.
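A cost-benefit comparison of this kind reduces to simple arithmetic: compare the expected annual savings against the up-front and running costs. The sketch below uses entirely hypothetical figures for illustration.

```python
def payback_period_years(initial_cost, annual_benefit, annual_operating_cost):
    """Years needed for the net annual saving to recover the up-front cost."""
    net_annual_saving = annual_benefit - annual_operating_cost
    if net_annual_saving <= 0:
        return None  # the system never pays for itself
    return initial_cost / net_annual_saving

# Hypothetical figures: Rs 100,000 to build, Rs 60,000/year saved,
# Rs 20,000/year to operate -> payback in 2.5 years.
years = payback_period_years(100_000, 60_000, 20_000)
```

If the payback period falls within the system's expected useful life, the benefits outweigh the costs and the project is economically feasible.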
There was a need to estimate the cost of the resources (manpower and computing systems) required for the development of the system, and a full cost estimate was prepared prior to project kick-off. Procurement costs, consultation costs, equipment purchases, installation costs, and management costs are involved in developing the new proposed system. Beyond these start-up costs, however, no new costs are required for operating system software, communications equipment installation, recruitment of new personnel, or disruption to the rest of the system. There is also no need to purchase special application software, make software modifications, or invest in training and data collection; only a meagre documentation preparation cost is involved. Lastly, there is a system maintenance, depreciation, or rental cost associated with the new system.
2.3.2 Technical Feasibility
Technical feasibility centers on the existing computer system (hardware, software, etc.) and the extent to which it can support the proposed addition. This phase involves financial considerations to accommodate technical enhancements; if the budget is a serious constraint, the project is judged not feasible. Technical feasibility is one of the most difficult areas to assess at this stage of systems engineering, because if the right assumptions are made, anything might seem possible. The considerations normally associated with technical feasibility include:
1) Development Risk:
Can the system element be designed so that necessary function and performance are achieved within the constraints uncovered during the analysis of the present system?
The new system proposes to bring significant changes to the present system to make it more efficient. The new system proposed meets all the constraints requirements and performance requirements identified for the system to become successful.
2) Resource availability:
Is skilled staff available for the development of the new proposed system? Are any other necessary resources available to build the system? The participants working on the proposal are seniors who have sufficient knowledge and the learning skills required for the development of the new system. No other special resources are needed, and the system can be developed using the computing and non-computing resources available within the present system.
3) Technology availability:
Has the relevant technology progressed to a state that will support the system?
Technology in the form of related work is already available in the commercial world and has been successfully used in many areas, so no special technology needs to be developed. The new system is fully capable of meeting the performance, reliability, maintainability, and predictability requirements. Social and legal feasibility encompasses a broad range of concerns including contracts, liability, and infringement; since the system is being developed by students of the institute themselves, there are no such concerns involved. The degree to which alternatives are considered is often limited by cost and time constraints, but variations should be considered that could provide alternative solutions to the defined problem. Alternative systems providing all the functionality of the desired system are not available, and hence the present solution is itself the most complete solution of the defined problem. The system has a feasibility of around 95% of being implemented, and the candidate system is fully supported by the existing computer system (hardware, software, etc.).
2.3.3 Behavioural Feasibility
People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strongly the users are likely to react to the development of the system. The introduction of the candidate system will not require special effort to educate, sell, and train the users in new ways of operating. As far as performance is concerned, the candidate system will help attain accuracy with the least response time and a minimum of programmer effort through its user-friendly interface.
2.4 Project Planning
Planning begins with process decomposition. The project schedule provides a road map for the software project manager. Using the schedule as a guide, the project manager can track and control each step in the software engineering process.
2.4.1 Project Tracking
1. Complete specification of the system, including the framing of policy, etc. (weeks 1-2)
2. List of tables and the attributes of each of them
3. High-level design: use case diagram, class diagram, etc.
4. Detailed design: pseudo code or algorithm for each activity
5. Implementation of the detailed design (milestone 4)
6. Implementation of the front end of the system
7. Screens giving the various options for each login
8. Screens for each of the options
9. Screens connected to the database, updating front-end data as required
10. Integrating the front end with the database
11. The system thoroughly tested by running all the test cases written for it (weeks 11-12)
12. Issues found during the previous milestone fixed, and the system ready for the final review (weeks 12-14)
2.5 Project Scheduling
[Gantt chart: process phases plotted against time (16 weeks), from requirement analysis in weeks 1-2 through testing in weeks 10-14.]
2.6 Software Requirement Specification
2.6.1 An Introduction to ASP.Net
ASP.Net is a web development platform which provides a programming model, a comprehensive software infrastructure, and the various services required to build robust web applications for PCs as well as mobile devices. ASP.Net works on top of the HTTP protocol and uses HTTP commands and policies to set up two-way browser-to-server communication and cooperation. ASP.Net is part of the Microsoft .Net platform. ASP.Net applications are compiled code written using the extensible and reusable components or objects present in the .Net framework; this code can use the entire hierarchy of classes in the .Net framework. ASP.Net application code can be written in either of the following languages:
Visual Basic .Net
J#
ASP.Net is used to produce interactive, data-driven web applications over the internet. It includes a large number of controls, such as text boxes, buttons, and labels, for assembling, configuring, and manipulating code to create HTML pages.
ASP.Net Web Forms Model: ASP.Net web forms extend the event-driven model of interaction to the web applications. The browser submits a web form to the web server and the server returns a full markup page or HTML page in response. All client side user activities are forwarded to the server for stateful processing. The server processes the output of the client actions and triggers the reactions. Now, HTTP is a stateless protocol. ASP.Net framework helps in storing the information regarding the state of the application, which consists of:
Page state
Session state
The page state is the state of the client, i.e., the content of the various input fields in the web form. The session state is the collective information obtained from the various pages the user has visited and worked with, i.e., the overall session state. To clarify the concept, take the example of a shopping cart: the user adds items from one page, say the items page, and the total collected items and price are shown on a different page, say the cart page. HTTP alone cannot keep track of all the information coming from the various pages; the ASP.Net session state and server-side infrastructure keep track of the information collected globally over a session. The ASP.Net runtime carries the page state to and from the server across page requests and incorporates the state of the server-side components in hidden fields. This way the server becomes aware of the overall application state and operates in a two-tiered connected way. ASP.Net Component Model: The ASP.Net component model provides the various building blocks of ASP.Net pages. Basically it is an object model which describes:
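The shopping-cart scenario above can be sketched as a minimal server-side session store keyed by a session id. This Python sketch is for illustration only; in the real project, ASP.Net's built-in Session object plays this role.

```python
class SessionStore:
    """Server-side session state keyed by a session id (e.g. a cookie value)."""

    def __init__(self):
        self._sessions = {}

    def session(self, session_id):
        # Create the per-user state on first access, like ASP.Net's Session object.
        return self._sessions.setdefault(session_id, {"cart": []})

    def add_item(self, session_id, name, price):
        # Called from the "items" page.
        self.session(session_id)["cart"].append((name, price))

    def cart_total(self, session_id):
        # Called from the "cart" page: items added on other pages are still visible.
        return sum(price for _, price in self.session(session_id)["cart"])
```

Because the state lives on the server and is looked up by session id on every request, the cart page can total items that were added on the items page, even though HTTP itself is stateless.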
Server-side counterparts of almost all HTML elements or tags.
Server controls, which help in developing complex user-interface for example the Calendar control or the Gridview control. ASP.Net is a technology, which works on the .Net framework that contains all web-related functionalities. The .Net framework is made of an object-oriented hierarchy. An ASP.Net web application is made of pages. When a user requests an ASP.Net page, the IIS delegates the processing of the page to the ASP.Net runtime system.
The ASP.Net runtime transforms the .aspx page into an instance of a class which inherits from the base class Page of the .Net framework. Therefore, each ASP.Net page is an object, and all its components, i.e., the server-side controls, are also objects.
Components of .Net Framework 3.5
Before going on to the next session on Visual Studio .Net, let us look at the various components of the .Net framework 3.5 and the job they perform:
1. Common Language Runtime (CLR): It performs memory management, exception handling, debugging, security checking, thread execution, code execution, code safety, verification, and compilation. Code that is directly managed by the CLR is called managed code. When managed code is compiled, the compiler converts the source code into a CPU-independent intermediate language (IL) code; a just-in-time (JIT) compiler then compiles the IL code into native code, which is CPU-specific.
2. .Net Framework Class Library: It contains a huge library of reusable classes, interfaces, structures, and enumerated values, which are collectively called types.
3. Common Language Specification: It contains the specifications for the .Net-supported languages and the implementation of language integration.
4. Common Type System: It provides guidelines for declaring, using, and managing types at runtime, and for cross-language communication.
5. Metadata and Assemblies: Metadata is the binary information describing the program, stored either in a portable executable (PE) file or in memory. An assembly is a logical unit consisting of the assembly manifest, type metadata, IL code, and a set of resources such as image files.
6. Windows Forms: This contains the graphical representation of any window displayed in the application.
7. ASP.Net and ASP.Net AJAX: ASP.Net is the web development model, and AJAX is an extension of ASP.Net for developing and implementing AJAX functionality. ASP.Net AJAX contains the components that allow the developer to update data on a website without a complete reload of the page.
8. ADO.Net: It is the technology used for working with data and databases. It provides access to data sources like SQL Server, OLE DB, and XML, and allows connection to data sources for retrieving, manipulating, and updating data.
9. Windows Workflow Foundation (WF): It helps in building workflow-based applications in Windows. It contains activities, a workflow runtime, a workflow designer, and a rules engine.
10. Windows Presentation Foundation (WPF): It provides a separation between the user interface and the business logic. It helps in developing visually stunning interfaces using documents, media, two- and three-dimensional graphics, animations, and more.
11. Windows Communication Foundation (WCF): It is the technology used for building and running connected systems.
12. Windows CardSpace: It provides safety for accessing resources and sharing personal information on the internet.
13. LINQ: It imparts data-querying capabilities to .Net languages using a syntax similar to the traditional query language SQL.
ASP source code runs on the web server: the ASP server dynamically generates the HTML and sends the HTML output to the client's web browser.
2.6.2 Why use ASP?
Microsoft ASP.NET is more than just the next generation of Active Server Pages (ASP). It provides an entirely new programming model for creating network applications that take advantage of the Internet.
New Application Models
Improved Performance and Scalability
2.6.3 The Advantages of ASP
ASP has a number of advantages over many of its alternatives. Here are a few of them:
1. ASP.NET drastically reduces the amount of code required to build large applications.
2. With built-in Windows authentication and per-application configuration, your applications are safe and secure.
3. It provides better performance by taking advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box.
4. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.
5. It provides simplicity, as ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration.
6. The source code and HTML are kept together, so ASP.NET pages are easy to maintain and write. Also, the source code is executed on the server, which gives the web pages a great deal of power and flexibility.
7. All processes are closely monitored and managed by the ASP.NET runtime, so that if a process dies, a new process can be created in its place, which helps keep your application constantly available to handle requests.
8. It is a purely server-side technology, so ASP.NET code executes on the server before it is sent to the browser.
9. Being language-independent, it allows you to choose the language that best applies to your application, or to partition your application across many languages.
10. ASP.NET makes for easy deployment. There is no need to register components because the configuration information is built in.
11. The web server continuously monitors the pages, components, and applications running on it. If it notices any memory leaks, infinite loops, or other illegal activities, it immediately destroys those activities and restarts itself.
12. It works easily with ADO.NET using data-binding and page-formatting features, producing applications that run faster and handle large volumes of users without performance problems.
In short, ASP.NET, the next-generation version of Microsoft's ASP, is a programming framework used to create enterprise-class web sites, web applications, and technologies. Applications developed in ASP.NET are accessible on a global basis, leading to efficient information management. Whether you are building a small business web site or a large corporate web application distributed across multiple networks, ASP.NET will provide all the features you could possibly need, and at an affordable cost: FREE!
2.6.4 An Introduction to RDBMS
A Relational Database Management System (RDBMS) is an information system that presents information as rows contained in a collection of tables, each table possessing a set of one or more columns. Nowadays, the relational database is at the core of the information systems of many organizations, public and private, large and small. Informix, Sybase, and SQL Server are RDBMSs with worldwide acceptance. Oracle is one of the powerful RDBMS products that provide efficient and effective solutions for database management.
2.6.5 The Features of SQL Server
Scalability and Performance
Realize the scale and performance you've always wanted. Get the tools and features necessary to optimize performance, scale up individual servers, and scale out for very large databases.
High Availability: SQL Server 2008 AlwaysOn provides flexible design choices for selecting an appropriate high-availability and disaster-recovery solution for your application. SQL Server AlwaysOn was developed for applications that require high uptime and need protection against failures within a data center (high availability) as well as adequate redundancy against data-center failures.
Virtualization Support: Microsoft provides technical support for SQL Server 2005 and later versions in the following supported hardware virtualization environments:
Windows Server 2008 and later versions with Hyper-V
Microsoft Hyper-V Server 2008 and later versions
Configurations that are validated through the Server Virtualization Validation Program (SVVP)
Replication : Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.
Enterprise Security :
SQL Server delivers the most secure database among leading database vendors
SQL Server solutions provide everything you need to adhere to security and compliance policies—out of the box. This includes the most up to date encryption technologies built on our Trustworthy Computing initiatives.
Management Tools: SQL Server Management Studio is an integrated environment for accessing, configuring, managing, administering, and developing all components of SQL Server. SQL Server Management Studio combines a broad group of graphical tools with a number of rich script editors to provide access to SQL Server to developers and administrators of all skill levels.
SQL Server Management Studio combines the features of Enterprise Manager, Query Analyzer, and Analysis Manager, included in previous releases of SQL Server, into a single environment. In addition, SQL Server Management Studio works with all components of SQL Server such as Reporting Services and Integration Services. Developers get a familiar experience, and database administrators get a single comprehensive utility that combines easy-to-use graphical tools with rich scripting capabilities.
Spatial and Location Services
Complex Event Processing (StreamInsight)
Integration Services-Advanced Adapters
Integration Services-Advanced Transforms
Analysis Services-Advanced Analytic Functions
Business Intelligence Clients
Master Data Services
Minimum system requirements are listed below.
Table: Hardware and Software Requirements
Processor: Intel Core i3
RAM: 256 MB or more
Operating System: Windows 2000 Server, Windows XP, 2007
Hard disk space: 1 GB
Web browser: Internet Explorer 5.0 or higher, Google Chrome
Front end: Visual Basic .Net 2010
Database: SQL Server 2005
2.7 Software Engineering Paradigm Applied
Conceptual Model: The first consideration in any project development is to define the project's life-cycle model. The software life cycle encompasses all the activities required to define, develop, test, deliver, operate, and maintain a software product. Different models emphasize different aspects of the life cycle, and no single model is appropriate for all types of software. It is important to define a life-cycle model for each product because the model provides a basis for categorizing and controlling the various activities required to develop and maintain a software product. A life-cycle model enhances project manageability, resource allocation, cost control, and product quality. There are many life-cycle models, such as:
The Waterfall Model
The Prototyping Model
The Waterfall Model:
The model used in the development of this project is the Waterfall model, for reasons such as:
The model is more controlled and systematic
All the requirements are identified at the time of initiating the project.
2.8 Use Case Diagrams, ER Diagrams
Use Case Diagram:
Activity Diagram:
Chapter 3 System Design __________________________________________________________________
3.1 Modularisation details.
There are three categories of users who can use the application:
1. Customer (login user)
2. Admin (super user)
3. Doctor (login doctor)
The whole application can be divided into the following modules.
1) Admin Module
Manage profiles: The admin manages the profiles of users as well as doctors who have registered themselves, after checking all their details. Here, managing a profile means the admin activates or authorises the current user/doctor to become a member of the website; only after becoming a member can a user or doctor carry out transactions.
Change password: The admin can also change their own password.
View all doctors and search for a doctor: The admin has full authority to view all details regarding doctors, and can search for a doctor by entering some basic details (e.g. city, state, specialist, name).
View all users and search for a user: The admin has full authority to view all users registered on the website, and can search for any particular user in the database (e.g. by active status, city, state, name).
Feedback from users and doctors: A user/doctor can send feedback to the admin, and the admin can reply to the user/doctor by email or mobile number.
Contact-us details: On this page, all contact details of the website are given so that anyone, including visitors and guests, can easily get in touch by call, message, or email.
Add doctor details.
Verify a user account before user login (activate/deactivate account).
Verify a doctor account before doctor login (with an option to block doctor details).
Submit news, and update or delete news.
Logout.
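The admin's verify/activate workflow (a user or doctor self-registers, and can only log in once the admin approves their details) can be sketched as a small state machine. This Python sketch is illustrative; names are hypothetical, and the real project would keep this state in SQL Server.

```python
class AccountRegistry:
    """Accounts start 'pending'; the admin activates or deactivates them."""

    def __init__(self):
        self._accounts = {}  # user_id -> {"role": ..., "status": ...}

    def register(self, user_id, role):
        # Self-registration by a user or doctor; not yet allowed to log in.
        self._accounts[user_id] = {"role": role, "status": "pending"}

    def verify(self, user_id, approve):
        # Admin decision after checking the submitted details.
        self._accounts[user_id]["status"] = "active" if approve else "deactivated"

    def can_login(self, user_id):
        account = self._accounts.get(user_id)
        return account is not None and account["status"] == "active"
```

The same `verify` step covers both users and doctors, since the role is stored alongside the status.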
2) Doctor Module
Login
General profile update
Make profile: education, hospital, degree, profile photo, degree snapshot
View user queries and mark them solved
Inbox (view all messages sent by users)
Send feedback
3) User Module
Profile update
Change password
Send feedback
Search for a doctor (e.g. by name, city, state, specialist, hospital)
Send a query to a doctor after searching for a disease specialist
View query results
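The doctor search offered to users (by name, city, state, specialist, or hospital, in any combination) can be sketched as a filter over doctor records. This Python sketch is illustrative; the field names are assumptions, and the real project would express this as a SQL query against SQL Server.

```python
def search_doctors(doctors, **criteria):
    """Return doctors matching every supplied criterion (name, city, state, ...)."""
    matches = []
    for doctor in doctors:
        if all(doctor.get(field, "").lower() == value.lower()
               for field, value in criteria.items()):
            matches.append(doctor)
    return matches
```

Because each keyword argument is optional, the same function serves a search by city alone, by specialist alone, or by any combination of the listed fields.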
4) Visitor Module
View current news
Search for a doctor by name, city, state, specialist, hospital
About us
Contact us
Services
Number of registered users on the website
Doctor list shown on the left side
[Table fragments: doctor attributes (Dr_education, Dr_hospital); user attributes (U_type (a/d/u), Email, State); reply attributes (Reply, Reply by, Reply date).]
3.3 Database Design
Chapter 4 Coding
(Source code listing omitted.)
Chapter 5 Testing
5.1 Testing Techniques & Strategies. Testing is vital to the success of any system. Testing is done at different stages within the development phase. System testing makes a logical assumption that if all parts of the system are correct, the goals will be achieved successfully. Inadequate testing or no testing leads to errors that may come up after a long time when correction would be extremely difficult. Another objective of testing is its utility as a user-oriented vehicle before implementation. The testing of the system was done on both artificial and live data.
5.1.1 Test Strategy
The purpose of the project test strategy is to document the scope and methods that will be used to plan, execute, and manage the testing performed within the project. The purpose of the testing is to ensure that, based on the solutions designed, the system operates successfully.
5.1.2 Unit Testing
Unit testing focuses verification efforts on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test is white-box oriented, and the steps can be conducted in parallel for multiple components.
5.1.3 Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build the program structure dictated by the design. Integration testing was conducted module by module, verifying that correct data passes between the modules. The interfaces were tested thoroughly so that no unpredictable event occurs on pressing any button.
5.1.4 Validation Testing
At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Validation is successful when the software functions in a manner that can be reasonably expected by the customer. Software validation is achieved through a series of black-box tests that demonstrate conformity with the requirements. After each validation test case has been conducted, either the function or performance characteristics conform to the specification and are accepted, or a deviation from the specification is uncovered and a deficiency list is created. In this project, testing was done from the user's perspective: everything was integrated, it was verified that data passes from one class to another properly, and the desired output appears when proper input is given. It was also verified that proper errors are generated when wrong inputs are given.
5.1.5 White Box Testing It focuses on the program control structure. Here all statements in program have been executed at least once during testing and that all logical conditions have been exercised.
5.1.6 System Testing System testing is done when the entire system has been fully integrated. The purpose of the system testing is to test how the different modules interact with each other and whether the system provides the functionality that was expected. It consists of the following steps.
User Acceptance Testing
5.1.7 Regression Testing It is retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made, and that the modified system still meets its requirements. It is performed whenever the software or its environment is changed.
5.1.8 Functional Testing Functional Testing is performed to test whether a component or the product as a whole will function as planned and as actually used when sold.
5.1.9 Black Box Testing
This testing is designed to uncover errors in the functional requirements without regard to the internal workings of a program. It focuses on the information domain of the software, deriving test cases by partitioning the input and output domains of a program in a manner that provides thorough test coverage.
5.1.10 Equivalence Partitioning In the equivalence partitioning method, inputs are divided into classes that the program should treat identically, and the output is checked for a representative input from each class. In MUM we checked this by entering different inputs into the system, verifying that it works correctly for every class of input.
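The idea can be sketched in Python. The age validator, its 1–120 valid range, and the representative values below are hypothetical, not part of the actual system; the point is that one input per equivalence class suffices:

```python
# Equivalence partitioning sketch: a hypothetical validator for a patient's
# age field (valid range assumed to be 1-120). Inputs split into three
# equivalence classes; one representative per class is enough to test.

def is_valid_age(age):
    """Return True if age falls in the assumed valid range 1-120."""
    return isinstance(age, int) and 1 <= age <= 120

# One representative input per equivalence class:
# below the range, inside the range, above the range.
representatives = {"below": -5, "inside": 40, "above": 500}

for name, value in representatives.items():
    print(name, is_valid_age(value))
```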
5.1.11 Boundary Value Analysis In boundary value analysis we check the values at the boundaries, such as the 0th row or the last row. Sometimes an array is used from position 1 while it actually takes values from the 0th position, so the system fails at boundary values. In MUM we tested the application at all boundary values to check whether it works correctly.
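A minimal sketch of the same idea, again with a hypothetical 1–120 valid range: the test cases sit exactly on, and one step either side of, each boundary, which is where off-by-one mistakes (such as starting an array at position 1 instead of 0) show up:

```python
# Boundary value analysis sketch for an assumed valid age range of 1-120:
# test at each boundary and one step on either side of it.

def is_valid_age(age):
    return 1 <= age <= 120

# Boundary cases: just below, on, and just above each boundary.
boundary_cases = [0, 1, 2, 119, 120, 121]
results = {age: is_valid_age(age) for age in boundary_cases}
```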
RS1 (Essential): The system should have a login. A login box should appear when the system is invoked. The logins are assigned by the Admin when the user opens an account.
RS2 (Essential): The system should have help screens. Help about the various features of the system should be provided in sufficient detail in a Q&A format. The policies (like commission charged for various operations) should also be part of the help.
RS3 (Essential): The system should 'lock' the login id if a wrong password is entered 3 times in a row. After 2 false attempts the user should be given a warning, and at the 3rd false attempt the account should be locked.
RS4 (Desirable): The user should have the facility to change his passwords. The login password and the transaction password should be different. This is a must so as to prevent fraudulent users from logging into the system.
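Requirement RS3's lock-after-three-failures rule can be sketched as follows. The in-memory account store, field names, and return messages are stand-ins for illustration, not the system's actual implementation:

```python
# Sketch of requirement RS3: warn after the 1st and 2nd failed logins,
# lock the account on the 3rd. The account store is a hypothetical
# in-memory stand-in for the real user table.

accounts = {"patient1": {"password": "secret", "failures": 0, "locked": False}}

def login(user_id, password):
    acct = accounts.get(user_id)
    if acct is None:
        return "unknown user"
    if acct["locked"]:
        return "locked"
    if password == acct["password"]:
        acct["failures"] = 0       # successful login resets the counter
        return "ok"
    acct["failures"] += 1
    if acct["failures"] >= 3:      # 3rd false attempt locks the id
        acct["locked"] = True
        return "locked"
    return "warning"               # shown after the 1st and 2nd false attempts

results = [login("patient1", "bad"), login("patient1", "bad"),
           login("patient1", "bad"), login("patient1", "secret")]
```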
5.1.12 Resource Management
Roles and Responsibilities
The responsibilities assigned to the testing roles are:
1. Prepare and update test cases
2. Test the builds
3. Verify bug fixes
4. Prepare test results for each build
5. Prepare a defect summary report for each build
6. Prepare the test plan
7. Review the test cases
8. Verify and suggest changes in the test strategy
9. Review the test plan
10. Oversee the testing activities
5.1.13 Test Schedule Since the project deliverables are dynamic, so are the test schedules. The schedule rests on the following assumptions:
All functional requirements are properly defined and meet users' needs.
The developers perform adequate unit testing and code review before sending modules to QA.
The developers fix all defects identified during unit testing prior to system testing; otherwise, the defects should be mentioned in the release notes.
The application will be delivered on the expected delivery date according to the schedule. Delivery and downtime delays shall cause adjustments in the test schedule and can become a risk for on time product delivery.
QA team should be involved in initial project discussions and should have a working knowledge of the proposed production system prior to beginning integration and system testing.
Change control procedures are followed.
The number of test cases has a direct impact upon the amount of time it takes to execute the test plan
During the test process, all required interfaces are available and accessible in the QA environment
Testing occurs on the most current version of the build in the QA environment
All incidents identified during testing are documented by QA, and the priority and severity are assigned based upon previously defined guidelines
The Project Manager is responsible for the timely resolution of all defects
Defect resolution does not impede testing
Communication between all groups on the project is paramount to the success of the project, therefore QA should be involved in all relevant project communication
Sufficient time is incorporated into the schedule not only for testing, but also for unit testing by developer, test planning, verification of defect fixes, and regression testing by QA
5.1.15 Defect Classification The following defect priorities are defined according to their precedence:
Urgent: The defect must be resolved immediately for the next build drop, as testing cannot proceed further.
High: The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.
Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
Low: The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.
The following defect severities are defined according to their precedence:
Causes Crash: The defect results in the failure of the complete software system, of a sub-system, or of a software unit (program or module) within the system.
Critical: The defect results in the failure of the complete software system, of a subsystem, or of a software unit within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives that will yield the desired result.
Major: The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
Minor: The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
Enhancement: The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred.
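The priority scale above can be modeled as an ordered type so that defect lists sort by precedence; the sample defects below are invented for illustration:

```python
# Sketch: the defect priorities above modeled as an ordered scale, so
# triage code can sort a defect list by precedence. Names follow the text;
# the sample defects are made up.
from enum import IntEnum

class Priority(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    URGENT = 4

defects = [("typo on help page", Priority.LOW),
           ("next build drop blocked", Priority.URGENT),
           ("report totals wrong", Priority.MEDIUM)]

# Highest-precedence defects come first in the triage queue.
triage_order = sorted(defects, key=lambda d: d[1], reverse=True)
```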
5.1.16 Summary This chapter documented the results of the quality assurance procedure and the variety of tests performed on the implemented system in order to verify its completeness, correctness, and user acceptability. This completes the system development process. The system developed satisfies all the requirements of the user and is therefore ready for deployment at the user site.
5.2 Debugging & Code Improvement In computing, debugging is the process of locating and fixing or bypassing bugs (errors) in computer program code or in the engineering of a hardware device. To debug a program or hardware device is to start with a problem, isolate the source of the problem, and then fix it. A user of a program who does not know how to fix the problem may learn enough about it to be able to avoid it until it is permanently fixed. When someone says they've debugged a program or "worked the bugs out" of it, they imply that they fixed it so that the bugs no longer exist. Debugging is a necessary process in almost any new software or hardware development effort, whether a commercial product or an enterprise or personal application program. For complex products, debugging is done as the result of the unit test for the smallest unit of a system, again at component test when parts are brought together, again at system test when the product is used with other existing products, and again during customer beta test, when users try the product out in a real-world situation. Because most computer programs and many programmed hardware devices contain thousands of lines of code, almost any new product is
likely to contain a few bugs. Invariably, the bugs in the functions that get most use are found and fixed first. An early version of a program that has lots of bugs is referred to as "buggy." Debugging tools (called debuggers) help identify coding errors at various development stages. Some programming language packages include a facility for checking the code for errors as it is being written.
Chapter 6 System Security Measures
As the use of the Web grows on both Intranets and the public Internet, information security is becoming crucial to organizations. The Web provides a convenient, cheap, and instantaneous way of publishing data. Now that it is extremely easy to disseminate information, it is equally important to ensure that the information is only accessible to those who have the rights to use it. With many systems implementing dynamic creation of Web pages from a database, corporate information security is even more vital. Previously, strict database access or specialized client software was required to view the data. Now anyone with a Web browser can view data in a database that is not properly protected. Never before has information security had so many vulnerable points. As the computing industry moves from the mainframe era to the client/server era to the Internet era, a substantially increasing number of points of penetration have opened up. For much of Internet security, database specialists have had to rely on network administrators implementing precautions such as firewalls to protect local data. Because of the nature of Intranet/Internet information access, however, many security functions fall into a gray area of responsibility. This chapter describes the primary areas where security falls within the domain of the DBA, who must create the information solutions. New security procedures and technology are pioneered daily, and this chapter explains the various security systems involved in solving the current problems. It should provide a primer for further study of Web security and a framework for understanding current security methodology. For Web security, you must address three primary areas:
1. Server security -- ensuring security relating to the actual data or private HTML files stored on the server
2. User-authentication security -- ensuring login security that prevents unauthorized access to information
3. Session security -- ensuring that data is not intercepted as it is broadcast over the Internet or Intranet. You can view these layers as layers of protection. For each layer of security added, the system becomes more protected. Like a chain, however, the entire shield may be broken if there is a weak link. Server Security Server security involves limiting access to data stored on the server. Although this field is primarily the responsibility of the network administrator, the process of publishing data to the Web often requires information systems specialists to take an active hand in installing and implementing the security policy. The two primary methods in which information from databases is published to the Web are the use of static Web pages and active dynamic Web page creation. These two methods require almost completely different security mechanisms. Static Web Pages
Static Web pages are simply HTML files stored on the server. Many database specialists consider static page creation the simplest and most flexible method of publishing data to the Web. In a nutshell, a client program is written to query data from a database and generate HTML pages that display this information. When published as static Web pages, Web files can be uploaded to any server; for dynamic creation, however, the Web server usually must be modified (or new scripts or application software installed). Static pages have the secondary advantage of being generated by traditional client/server tools such as Visual Basic or PowerBuilder. Because almost any development system can output text files, only the necessary HTML codes must be added to make them Web pages. The creation of the pages, therefore, uses standard methods of database access control such as database security and login controls. Once created, the files must be uploaded to the Web server. Protecting the documents stored there occurs in the same manner that any other Web documents would be secured. One of the most straightforward ways to protect sensitive HTML documents is to limit directory browsing. Most FTP and Web servers allow directories to be configured so that files stored within them may be read but the files may not be listed in the directory. This technique prevents any user who does not know the exact filename from accessing it. Access may be permitted by simply distributing the exact filenames to authorized personnel.
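The static-page workflow described above can be sketched as follows. The table, column names, and the sqlite3 in-memory database are illustrative stand-ins for whatever client tool and production database are actually used:

```python
# Static page generation sketch: query rows from a database and build an
# HTML document from each. Table and column names are made up for
# illustration; sqlite3 stands in for the real database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doctors (name TEXT, specialty TEXT)")
conn.execute("INSERT INTO doctors VALUES ('Dr. Rao', 'Cardiology')")

pages = []
for name, specialty in conn.execute("SELECT name, specialty FROM doctors"):
    # Only the necessary HTML codes are wrapped around the query output.
    html = f"<html><body><h1>{name}</h1><p>{specialty}</p></body></html>"
    pages.append(html)
```

The resulting files would then be uploaded to the Web server like any other static documents.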
Directories may also be protected using the integrated operating system security. Some Web servers allow security limitations to be placed on particular folders or directories using standard operating system techniques (such as file attributes) and then use this security to restrict access. This implementation will vary among Web servers. Security implementations that control access to particular files or folders fall under the user-authentication category of security (described in a later section of this chapter). Dynamic Page Generation Favored by large organizations, this method is gaining popularity as the technology to generate Web pages instantly from a database query becomes more robust. A dynamic Web page is stored on the Web server with no actual data but instead a template for the HTML code and a query. When a client accesses the page, the query is executed, and an HTML page containing the data is generated on the fly. The necessary data is filled into the slots defined in the template file in much the same way that a mail merge occurs in a word-processing program. A program may be active on the Web server to generate the necessary Web page, or a CGI script might dynamically create it. One of the first security issues that a DBA must confront is setting up access to the database from the Web server. Whether using a CGI script, server-based middleware, or a query tool, the server itself must have access to the database. Database Connections With most of the dynamic connectors to databases, a connection with full access must be granted to the Web server because various queries will need to access different tables or views to construct the HTML from the query. The danger is obvious: A single data source on the server must be given broad access capabilities. This makes server security crucial. For example, an ODBC data source given full administrator access could potentially be accessed by any other program on the server.
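The template-with-slots mechanism can be sketched like this; the template text and field names are invented for illustration:

```python
# Dynamic page sketch: a stored template with "slots" is filled from a
# query-result row when the page is requested, much like a mail merge.
from string import Template

page_template = Template(
    "<html><body><h1>$name</h1><p>Specialty: $specialty</p></body></html>")

def render(row):
    """Fill the template slots from one query-result row (a dict)."""
    return page_template.substitute(row)

html = render({"name": "Dr. Rao", "specialty": "Cardiology"})
```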
A program could be designed to retrieve private information from a data source regardless of whether the program's author is permitted access. This security problem is most dangerous on a system where users are allowed to upload CGI scripts or programs to run on the server. To prevent unauthorized access to your data, make sure that the server that owns the database connector is physically secure and does not permit unrestricted program execution. Table Access Control Standard table access control, if featured in the user authentication system, is more important on Web applications than on traditional client/server systems. DBAs are often lax in restricting access to particular tables because few users would know how to create a custom SQL
query to retrieve data from the database. Most access to a database on a client/server system occurs through a specifically built client that limits access from there. Not so with Web-based applications: Client/server development requires substantial experience, but even some novices can program or modify HTML code, and most user productivity applications such as word processors or spreadsheets that can access databases also save documents as HTML pages. Therefore, more solutions will be created by intermediate users -- and so valid security is a must. Remember, a little knowledge can be a dangerous thing. User-Authentication Security Authentication security governs the barrier that must be passed before the user can access particular information. The user must have some valid form of identification before access is granted. Logins are accomplished in two standard ways: using an HTML form or using an HTTP security request. If a pass-through is provided to normal database access, traditional security controls can be brought into play. Figure 1 shows an example of a standard security login through Netscape Communications Corp.'s Netscape Navigator browser. The HTML login is simply an HTML page that contains the username and password form fields. The actual IDs and passwords are stored in a table on the server. This information is brought to the server through a CGI script or some piece of database middleware for lookup in a user identification database. This method has the advantage of letting the DBA define a particular user's privileges. By using a table created by the DBA, numerous security privileges specific to a particular project can be defined. Once a login has occurred, a piece of data called a "cookie" can be written onto the client machine to track the user session. A cookie is data (similar to a key and a value in an .ini file) sent from the Web server and stored by a client's browser.
The Web server can then send a message to the browser, and the data is returned to the server. Because an HTTP connection is not persistent, a user ID could be written as a cookie so that the user might be identified for the duration of the session. HTML form login security, however, must be implemented by hand. Often this means reinventing the wheel. Not only must a database table or other file be kept to track users and passwords, but authentication routines must be performed, whether through a CGI script or via another method. Additionally, unless a secured connection is used (see the section on SSL later in this chapter), both the username and password are broadcast across the network, where they might be intercepted.
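A minimal sketch of HTML-form login with a session cookie, under the caveats above: the user table, field names, and session store are hypothetical stand-ins, and a real system would hash passwords and require a secured (SSL) connection, since otherwise both values cross the network in the clear:

```python
# HTML-form login sketch: check the submitted form fields against a user
# table, then hand the browser an opaque session cookie. All names here
# are illustrative; passwords are stored in plain text only for brevity.
import secrets

user_table = {"patient1": "secret"}   # stand-in for the ID/password table
sessions = {}                         # cookie value -> user id

def handle_login(form):
    user, pwd = form.get("username"), form.get("password")
    if user_table.get(user) != pwd:
        return None                   # login page would be shown again
    cookie = secrets.token_hex(16)    # opaque value sent via Set-Cookie
    sessions[cookie] = user           # identifies the user on later requests
    return cookie

cookie = handle_login({"username": "patient1", "password": "secret"})
```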
HTML form login is excellent when security of the data is not paramount yet specialized access controls are required. Browser login is most useful when it is integrated with existing database security through some type of middleware. Even with users properly authenticated, additional security concerns arise. Session Security After the user has supplied proper identification and access is granted to data, session security ensures that private data is not intercepted or interfered with during the session. The basic protocols of the network do not set up a point-to-point connection, as a telephone system does. Instead, information is broadcast across a network for reception by a particular machine. TCP/IP is the basic protocol for transmission on the Internet. The protocol was never designed for security, and as such it is very insecure. Because data sent from one machine to another is actually broadcast across the entire network, a program called a "packet sniffer" can be used to intercept information packets bound for a particular user. Therefore, even though a user has properly logged onto a system, any information that is accessed can be intercepted and captured by another user on the network. There is no easy way to prevent this interception except by encrypting all of the information that flows both ways.
Public and Private Key Security The world of encryption is often a fairly arcane field of study. The growth -- as well as the insecurity -- of the Internet has forced users unfamiliar with even the basic concepts of cryptography to become at least acquainted with its common implementations. Two basic types of encryption are used in Web security: secret-key security (using a single key) and public-key security (using two keys). Secret-key security (which is also known as symmetrical-key security) is somewhat familiar to most people. A Little Orphan Annie decoder ring is a common example. The secret key, in this case the decoder ring, is used by each party to encrypt and decrypt messages. Both parties must have access to the same private key for them to exchange messages. If the key is lost or exposed, the system is compromised. Public-key security is a little more complicated. With public-key security, each individual holds two keys, one public and one private. The public key is freely published, and the private key is kept private. Once a message is encrypted with one key, it cannot be decoded without the other key. Using this type of encryption, someone can take a data file and encode it with your public key. Only your private key can be used to decode it. Likewise, if you encode a data file with your
private key, it can only be decoded with your public key. Therefore, the receiver of the data file knows that it came from you because only your private key can generate a file that can be decoded by the public key. This is so reliable, in fact, that it is admissible in a court of law. Only you, or someone with access to your private key, could possibly have created data that can be decoded with your public key. The primary difference between implementing these two systems is computational. Using a secret-key system, encryption and decryption can take place between 100 and 10,000 times faster than for the equivalent data using a public-key system. The secret-key systems often use a smaller key, perhaps even a user password. The public-key systems use computers to generate the keys, each of which is usually 512 or 1024 bits long. That's about 50 to 100 characters long -- not easy to remember off the top of your head. Most Internet systems use a combination of the two to provide secure communication. Typically they use the public-key encryption system to encrypt a secret key (usually machine-generated based on a time code). Both the server and the client encrypt a secret key with their private keys and send the encrypted data and their public keys to each other. Alternatively, the public keys might be retrieved from a trusted third party such as a Certificate Server (described later in this chapter). The public keys are now used to decode the data, so both the client and the server now have secret keys. When exchanging information, the data is encrypted with the secret key and sent between the machines. This system combines the authentication and extra security of a public-key system with the speed and convenience of a secret-key system.
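The hybrid handshake can be illustrated with deliberately toy primitives: textbook RSA over tiny demonstration primes for the public-key step, and a one-byte XOR cipher for the fast secret-key step. Neither is remotely secure; the sketch only shows how the keys and data flow:

```python
# Toy sketch of the hybrid scheme: a session secret key travels under
# public-key encryption (textbook RSA, tiny demo parameters -- NOT secure),
# then the bulk data travels under fast symmetric (XOR) encryption.

n, e, d = 3233, 17, 2753          # toy RSA modulus, public/private exponents

def rsa(msg, exp):
    return pow(msg, exp, n)

secret_key = 65                   # "machine-generated" session key
sent = rsa(secret_key, e)         # encrypted with the receiver's public key
recovered = rsa(sent, d)          # only the private key recovers it

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)   # same key encrypts and decrypts

ciphertext = xor_cipher(b"patient record", recovered)
plaintext = xor_cipher(ciphertext, secret_key)
```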
Chapter 7 Cost Estimation of Project
7.1 Cost Estimation Cost in the project is due to the requirements in software, hardware, and human resources. The size of the project is the primary cost factor; the other factors have a lesser effect. The Constructive Cost Model (COCOMO), developed by Boehm, helps to estimate the total effort in terms of person-months of the technical staff.
Overview of COCOMO The COCOMO cost estimation model is used by thousands of software project managers, and is based on a study of hundreds of software projects. Unlike other cost estimation models, COCOMO is an open model, so all of the details are published, including:
The underlying cost estimation equations
Every assumption made in the model (e.g. "the project will enjoy good management")
Every definition (e.g. the precise definition of the Product Design phase of a project)
The costs included in an estimate are explicitly stated (e.g. project managers are included, secretaries aren't)
Because COCOMO is well defined, and because it doesn't rely upon proprietary estimation algorithms, Costar offers these advantages to its users:
COCOMO estimates are more objective and repeatable than estimates made by methods relying on proprietary models
COCOMO can be calibrated to reflect your software development environment, and to produce more accurate estimates
Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, and yet powerful enough to plan and control large projects. Typically, you'll start with only a rough description of the software system that you'll be developing, and you'll use Costar to give you early estimates about the proper schedule and staffing levels. As you refine your knowledge of the problem, and as you design more of the system, you can use Costar to produce more and more refined estimates. Costar allows you to define a software structure to meet your needs. Your initial estimate might be made on the basis of a system containing 3,000 lines of code. Your second estimate might be more refined so that you now understand that your system will consist of two subsystems (and you'll have a more accurate idea about how many lines of code will be in each of the subsystems). Your next estimate will continue the process -- you can use Costar to define the components of each subsystem. Costar permits you to continue this process until you arrive at the level of detail that suits your needs. One word of warning: It is so easy to use Costar to make software cost estimates that it's possible to misuse it -- every Costar user should spend the time to learn the underlying
COCOMO assumptions and definitions from Software Engineering Economics and Software Cost Estimation with COCOMO II. Introduction to the COCOMO Model The most fundamental calculation in the COCOMO model is the use of the Effort Equation to estimate the number of Person-Months required to develop a project. Most of the other COCOMO results, including the estimates for Requirements and Maintenance, are derived from this quantity. Source Lines of Code The COCOMO calculations are based on your estimates of a project's size in Source Lines of Code (SLOC). SLOC is defined such that:
Only Source lines that are DELIVERED as part of the product are included -- test drivers and other support software are excluded
SOURCE lines are created by the project staff -- code created by applications generators is excluded
One SLOC is one logical line of code
Declarations are counted as SLOC
Comments are not counted as SLOC
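The counting rules above can be sketched as a simplified counter that skips blank lines and comments. It treats each remaining physical line as one logical line, which glosses over the multi-line-statement distinction discussed below:

```python
# Sketch of the SLOC rules: count logical source lines, including
# declarations but excluding comments and blank lines. Simplified --
# each non-blank, non-comment physical line counts as one SLOC.

def count_sloc(source: str, comment_prefix: str = "#") -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """# a comment, not counted
x = 1
y = 2

# another comment
print(x + y)
"""
sloc = count_sloc(sample)  # counts the two declarations and the print
```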
The original COCOMO 81 model was defined in terms of Delivered Source Instructions, which are very similar to SLOC. The major difference between DSI and SLOC is that a single Source Line of Code may be several physical lines. For example, an "if-then-else" statement would be counted as one SLOC, but might be counted as several DSI. The Scale driver In the COCOMO II model, some of the most important factors contributing to a project's duration and cost are the Scale Drivers. You set each Scale Driver to describe your project; these Scale Drivers determine the exponent used in the Effort Equation. The 5 Scale Drivers are:
Precedentedness (PREC)
Development Flexibility (FLEX)
Architecture / Risk Resolution (RESL)
Team Cohesion (TEAM)
Process Maturity (PMAT)
Note that the Scale Drivers have replaced the Development Mode of COCOMO 81. The first two Scale Drivers, Precedentedness and Development Flexibility, actually describe much the same influences as the original Development Mode did. Cost Drivers COCOMO II has 17 cost drivers – you assess your project, development environment, and team to set each cost driver. The cost drivers are multiplicative factors that determine the effort required to complete your software project. For example, if your project will develop software that controls an airplane's flight, you would set the Required Software Reliability (RELY) cost driver to Very High. That rating corresponds to an effort multiplier of 1.26, meaning that your project will require 26% more effort than a typical software project. COCOMO II defines each of the cost drivers, and the Effort Multiplier associated with each rating. Check the Costar help for details about the definitions and how to set the cost drivers. The COCOMO II Effort Equation The COCOMO II model makes its estimates of required effort (measured in Person-Months – PM) based primarily on your estimate of the software project's size (as measured in thousands of SLOC, or KSLOC):
Effort = 2.94 * EAF * (KSLOC)^E

Where EAF is the Effort Adjustment Factor derived from the Cost Drivers, and E is an exponent derived from the five Scale Drivers.
As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of 1.00 and an exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source lines of code, COCOMO II estimates that 28.9 Person-Months of effort are required to complete it: Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
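That arithmetic can be checked directly; the function below simply restates the effort equation, with 2.94 as the COCOMO II effort constant:

```python
# The COCOMO II effort equation, checked numerically for the example above.

def cocomo_effort(ksloc, eaf, exponent, a=2.94):
    """Person-Months: Effort = A * EAF * (KSLOC)^E."""
    return a * eaf * ksloc ** exponent

# All-Nominal drivers: EAF = 1.00, E = 1.0997, size = 8 KSLOC.
effort = cocomo_effort(ksloc=8, eaf=1.0, exponent=1.0997)
print(round(effort, 1))  # the 28.9 Person-Months quoted above
```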
Effort Adjustment Factor The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers corresponding to each of the cost drivers for your project. For example, if your project is rated Very High for Complexity (effort multiplier of 1.34) and Low for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost drivers are rated Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and 1.09, i.e. 1.46:

Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months

The COCOMO II Schedule Equation The COCOMO II schedule equation predicts the number of months required to complete your software project. The duration of a project is based on the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

Where Effort is the effort from the COCOMO II effort equation, and SE is the schedule equation exponent derived from the five Scale Drivers. Continuing the example, and substituting the exponent of 0.3179 calculated from the scale drivers, yields an estimate of just over a year and an average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 Months
Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people
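The schedule figures can be checked the same way, with 3.67 as the COCOMO II schedule constant:

```python
# The COCOMO II schedule equation and average staffing, checked numerically.

def cocomo_duration(effort_pm, se, c=3.67):
    """Months: Duration = C * (Effort)^SE."""
    return c * effort_pm ** se

duration = cocomo_duration(effort_pm=42.3, se=0.3179)
staffing = 42.3 / duration          # Person-Months divided by Months
print(round(duration, 1), round(staffing, 1))
```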
Future scope & Further Enhancement of the Project ________________________________________________________________________
This project can be enhanced in the future to provide further functionality to its customers.
References ________________________________________________________________________
1. Herbert Schildt, "The Complete Reference – ASP.NET Using C#".
2. Cay Horstmann and Gary Cornell, "ASP.Net Volume I".
3. Phil Hanna, "The Complete Reference – AJAX 2.0".
4. Anisha Bhakaria, "JSP in 21 Days".
5. Roger Pressman, "Software Engineering".
6. Grady Booch, "UML Guide".
7. Ivan Bayross, "SQL, PL/SQL".
8. Bill Kennedy, "HTML Guide".
10. Henry Korth, "Database System Concepts".
11. www.sun.com, "ASP Documentation".
12. www.oracle.com, "Oracle Documentation".
13. www.google.co.in, "Google Search".
Glossary ________________________________________________________________________
API – Application Programming Interface.
DBMS – Database Management System: a complex set of programs that control the organization, storage, and retrieval of data.
GUI – Graphical User Interface: an interface with windows, buttons, and menus used to carry out tasks.
SqlClient (ADO.Net) – Database connectivity classes.
SqlClient API – Supports application-to-manager communications.
SqlClient Driver API – Supports manager-to-driver implementation communications.
ODBC – Open Database Connectivity.
Project – Any piece of work that is undertaken or attempted.
Report – A written document describing the findings of some individual or group.
SQL – Structured Query Language.