S01 System Overview and Architecture W ADMS
November 20, 2023 | Author: Anonymous | Category: N/A
Network Manager Standard technical documentation System overview and architecture
Contents

1. System overview and architecture
   1.1. Conceptual configuration
        1.1.1. Overview
        1.1.2. System components
        1.1.3. System configuration principles
   1.2. Software architecture
        1.2.1. Operating systems
        1.2.2. Virtual environment
        1.2.3. Configurable parameters
        1.2.4. Client server architecture
   1.3. Configuration control, redundancy and failure management
        1.3.1. Supervision of the control system
        1.3.2. System and subsystem start
        1.3.3. Automatic and manual start of the system
        1.3.4. Start of the application servers
        1.3.5. Start of the operator stations
        1.3.6. Start of remote communication servers
        1.3.7. Device supervision
        1.3.8. System interconnect
        1.3.9. Supervision of application servers
        1.3.10. Supervision of the human machine interface
        1.3.11. Supervision of master station power supply
        1.3.12. Subsystem supervision and switch-over
        1.3.13. Operating modes
        1.3.14. Application server subsystem
        1.3.15. System time keeping
        1.3.16. Automatic time synchronization
        1.3.17. Independent system health monitoring
   1.4. Emergency center
        1.4.1. Multi master emergency center
        1.4.2. Synchronized emergency center
        1.4.3. System copy/cold emergency
   1.5. Data architecture
        1.5.1. Database hierarchies
        1.5.2. Message handling
        1.5.3. Database integrity
        1.5.4. Study database
© 2018 ABB | All Rights Reserved | S01 System Overview and Architecture w ADMS.docx
1. System overview and architecture
Section 1 describes the generic Network Manager architecture, its major components, the structure of the database, system redundancy, and failure monitoring. This section also describes the environment for developing software and the diagnostic and system maintenance tools included with the baseline offering.
1.1. Conceptual configuration

Network Manager systems are configured to meet high availability and performance requirements for power system control. Implementation of a main control system and one or more systems for emergency control is supported. The system includes a set of components that can be combined and implemented in a flexible way to meet the requirements of each individual installation with respect to availability and performance.

1.1.1. Overview

The Network Manager platform offers Energy Management System (EMS), Generation Management System (GMS), and Advanced Distribution Management System (ADMS) functionality. The ADMS includes integrated SCADA, Outage Management System (OMS), and Distribution Management System (DMS) applications, operating on a single network model with integrated operator functionality. It has been built upon decades of research and development to provide an integrated, comprehensive operations management solution. Figure 1 provides a conceptual diagram of the Network Manager ADMS functionality.

Figure 1 - Fully-Integrated Distribution Operations Management System
The design of Network Manager is modular, in order to effectively meet the needs of particular distribution organizations. This design permits customers to incrementally add particular modules, such as the SCADA, OMS, and DMS applications, as their business needs change.

SCADA – Distribution SCADA infrastructure is shown at the bottom of Figure 1. Network Manager SCADA collects analog and status data from RTUs and IEDs, and provides functionality such as control, alarms, events, and tagging. Network Manager's OMS and DMS applications can utilize Network Manager SCADA, in an
integrated platform, or be integrated with a third-party SCADA through ICCP or other types of interfaces.

Single Dynamic Distribution Network Model – As shown in the center of Figure 1, Network Manager ADMS utilizes a common distribution network model. This greatly simplifies system maintenance, as it eliminates the need to build, maintain, and synchronize multiple data models. An additional benefit is coordination of planned and unplanned outages, including those due to temporary lines, line cuts, manually-dressed device operations, and sub-transmission operations. The single dynamic distribution model also means that the DMS applications always utilize the as-operated state of the distribution network, with the present location and state of switching devices, capacitor banks, and customer loads.

OMS and DMS Functionality – Network Manager includes the industry's most proven OMS and DMS applications. This includes years of experience with electrical applications such as unbalanced load flow, fault location analysis, and restoration switching analysis. The OMS has evolved through decades of helping distribution organizations of all sizes meet their challenges, including some of the largest IOUs in the US. Advanced OMS functions are available such as nested outages, partial restoration, automated ETR calculation, and referrals.

Integrated Operator Graphical User Interface – The Network Manager Graphical User Interface provides a consistent user interface across the operational functions in an organization. This can include distribution SCADA, DMS, OMS, and even transmission SCADA if required. The result is improved operator effectiveness and flexibility, as well as reduced maintenance and training costs.
Packaged Business Intelligence for Distribution Organizations – ABB provides packaged Business Intelligence solutions, specifically for distribution organizations, as an option to Network Manager. These solutions address an organization's reporting, situational awareness, and business intelligence needs, allowing individuals across the distribution organization to understand what is happening through standard KPIs, dashboards, and reports that come out-of-the-box. With minimal training, custom pages can also be designed. This gives operations, management, and others across the organization an improved picture of the current situation, and it provides customer service representatives the information they need to respond promptly to customers inquiring about service issues. Additional information on Packaged Business Intelligence solutions is provided in other ABB documentation.

Integration with Other Utility Systems – ABB has in-depth experience in systems integration with numerous suppliers of operational and business IT systems, including GIS, CIS, SCADA, IVR, Mobile Workforce Management, Asset Management, and other systems.
1.1.2. System components

Operational system components

- The Avanti database, for real-time data and message sharing between application programs. The Avanti database management system is a high-performance, real-time database management system especially designed to meet the high requirements of process supervision and control systems. Avanti includes a message passing function for communication between application programs.
- Application servers, each including a set of application programs. Application programs can be moved between application servers without changes as long as the interface to the outside world is the Avanti database and message passing system. Depending on the individual system, separate application servers may be created for SCADA functions, EMS network applications, Generation Management, and Distribution Management.
- User interface, a state-of-the-art, high-performance, full-graphics Human Machine Interface solution for all Network Manager applications.
- Database supporting the Information Storage and Retrieval functionality. An Oracle RDBMS is used with the associated tools commonly used for data mining and reporting.
- PCU400 Front-end servers, to manage communication with RTUs, IEDs and Substation Automation Systems. They provide flexibility, performance and scalability in a cost-effective manner.
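The decoupling described above — application programs that exchange data only through the database and message passing, so they can move between servers unchanged — can be sketched in miniature. The names below (`MessageBus`, queue names) are illustrative only and are not part of the actual Avanti API:

```python
import queue
from collections import defaultdict

class MessageBus:
    """Minimal stand-in for a message-passing service between
    application programs: named queues, send and receive."""
    def __init__(self):
        self._queues = defaultdict(queue.Queue)

    def send(self, queue_name, message):
        self._queues[queue_name].put(message)

    def receive(self, queue_name, timeout=0.1):
        try:
            return self._queues[queue_name].get(block=True, timeout=timeout)
        except queue.Empty:
            return None  # nothing pending on this queue

# A SCADA program publishes a measurement; an EMS program consumes it.
# Neither program knows which server the other runs on.
bus = MessageBus()
bus.send("ems_input", {"point": "BUS1.kV", "value": 132.6})
msg = bus.receive("ems_input")
print(msg["value"])  # 132.6
```

Because both sides address a named queue rather than each other, relocating a program to another server only changes where the queue is served, not the program itself.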
Maintenance tools

A set of tools is used to maintain and develop programs and databases, as well as the configuration of each individual system, in accordance with its functional scope.
1.1.3. System configuration principles

The Network Manager has a rich set of functionality and an open architecture, making it an ideal solution for deploying a highly available, high-performing real-time control system. Built-in functionality such as distributed processing, bumpless failover, virtualization support and replicated/synchronized server concepts enables a large number of configuration options to meet each customer's needs. This functionality is described in the sections below.
Open architecture

The proposed system architecture utilizes standard, commercially available hardware and third-party software products. Together with well-structured and documented Application Programming Interfaces (APIs), it supports the integration of external applications, developed by third parties or by electric utilities, into the standard Network Manager solution. Since the APIs are guaranteed to be backward compatible, applications developed by third parties or by the customer are independent of any release upgrade of the basic Network Manager platform and applications.
Distributed processing

Network Manager offers the possibility to distribute SCADA/EMS applications across multiple servers connected over the Local Area Network. Which applications run on each server is configurable and is decided based on the expected workload for each application. The distribution of the various applications to different servers is seamless to the operator. Typically, all functions execute on one server, but for very large networks, high application execution frequencies and/or very demanding calculations, the built-in distributed processing capabilities of Network Manager can be utilized.
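At its core, the workload-based distribution described above is a configurable mapping from application to server. The sketch below is purely illustrative (the names and the table format are hypothetical; in the real system this mapping is configuration data held in the database, not code):

```python
# Hypothetical placement table: application -> assigned server.
# Tuning the distribution means editing this data, not the programs.
PLACEMENT = {
    "scada":        "server_a",
    "network_apps": "server_b",   # demanding calculations isolated
    "agc":          "server_a",
}

def servers_in_use(placement):
    """Distinct servers the configured applications are spread across."""
    return sorted(set(placement.values()))

def apps_on(placement, server):
    """Applications assigned to one server."""
    return sorted(app for app, srv in placement.items() if srv == server)

print(servers_in_use(PLACEMENT))        # ['server_a', 'server_b']
print(apps_on(PLACEMENT, "server_a"))   # ['agc', 'scada']
```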
System distribution, redundancy and failover System application distribution, redundancy and failover are provided using standard Network Manager Avanti and Oracle data distribution and replication mechanisms. The system can be configured with up to six servers of the same type forming a server group. Both the Avanti and Oracle based redundancy and failover solutions result in a smooth transition with no data loss for all SCADA and EMS applications. The same Avanti based failover mechanism is used for SCADA, including RTU and ICCP communications and various EMS applications, namely AGC and Network Applications. In this manner, zero data loss at the EMS application level is also guaranteed. Additional failover capabilities include:
- Hot standby redundancy for all real-time applications, including SCADA and EMS applications;
- Real-time failover for all real-time servers, including applications;
- No need for off-line synchronization of servers after standby server start-up;
- Data acquisition from RTUs continues uninterrupted during the failover;
- Independent failovers in all server pairs, i.e., failover in one server pair will not lead to failovers for other servers;
- Line-by-line failover for RTU lines in the Front-Ends.
Operational modes

The various servers within a server group are assigned different operational modes.

On-line – Real-time operational tasks are executed in the On-line server.

Hot Standby – A Hot Standby server takes on the On-line server tasks in the event of a failure or upon user command. That is, a server in the Hot Standby mode is ready to become "On-line" by "failover" (automatic state transition) or
"switchover" (manually initiated state transition). The change is performed without any loss of data.

Synchronized – A server in the Synchronized mode is ready to become Hot Standby on operator request. No data transfers are required during a transition from Synchronized to Hot Standby state. The Hot Standby and Synchronized modes are identical from a database point of view, except that they behave differently in a failover situation. In a failover scenario the synchronized servers can be configured to:
- require manual operation to take over the role of On-line/Hot Standby, or
- automatically change state to On-line/Hot Standby in case of loss of either or both.
Synchronized servers are typically used in the Emergency system concept or for added redundancy.

Replicated – A replicated server receives database updates in real-time just like the standby and synchronized servers. However, it always runs in read-only mode and the replication is one-way only. The replicated server is typically used for secure read-only access to real-time and historical data for external users and for study executions.

Off-line – An Off-line server or device is not communicating with other elements of the SCADA System and is not capable of participating in any SCADA System activity.
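The mode ladder described above behaves like a small state machine. The transition table below is a hedged sketch inferred only from this description — the real supervision logic is considerably richer and the event names are illustrative:

```python
# Allowed mode transitions, inferred from the description above.
TRANSITIONS = {
    ("synchronized", "promote"):    "hot standby",
    ("hot standby", "failover"):    "on-line",   # automatic transition
    ("hot standby", "switchover"):  "on-line",   # manually initiated
    ("off-line", "start"):          "hot standby",
    ("on-line", "stop"):            "off-line",
}

def next_mode(mode, event):
    """Return the new mode, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(mode, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in mode {mode!r}")

print(next_mode("hot standby", "failover"))   # on-line
```

Encoding the legal transitions as data makes it easy to verify that, for example, a Synchronized server cannot jump directly to On-line without first being promoted to Hot Standby.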
Mode transitions Mode transitions are initiated automatically by the function for supervision of the application servers when needed to maintain system availability. In addition, the operator can initiate transitions manually.
System databases

The central component of the Network Manager architecture is the system database, which serves as the real-time repository for all power system models and control system data. This central data repository is based on the Network Manager Avanti database. The database architecture provides functions for the administration of a number of parallel databases within one application server. In addition to the real-time database, which is used for real-time process supervision and control, one can create several study databases. A study database is normally a copy of parts of the real-time database, which can be used for different study purposes for EMS applications, data testing and operational training. This provides a powerful facility to run programs in a true operational environment without affecting the actual real-time system.

The same database definitions can be used in all application servers. It is possible to logically designate each database file and message queue to a certain server database. If the file or queue is not defined as local at creation time, a remote definition to the appropriate server is made. It is possible to merge a number of logical databases together into one common database. This makes the system easy to transform between different distributed and non-distributed implementations. Redundant server configurations are fully supported. All specified updates of the on-line application server are transferred to the hot standby application server. For all system and database access, Application Programming Interfaces (APIs) and Message Passing services are used. A comprehensive set of tools for the initial definition and maintenance of the database and data structures is included.
1.2. Software architecture

1.2.1. Operating systems

The Network Manager platform and application software run on a combination of Red Hat Enterprise Linux 7, Windows Server 2012 R2 and Windows 10. The components distributed across these operating systems include the SCADA Applications, EMS Applications, OMS/DMS Applications, Domain Controllers, PSE Access Server, Front End Processors, Replicated Server, Operator Workstations, Data Engineering Thin Client Server and the Historian Server (UDW). The operating systems used in the Network Manager system make the best use of each operating system's advantages:
- Linux – performance and reliability;
- Windows Server – Active Directory;
- Windows 10 – usability.

1.2.2. Virtual environment

Network Manager is fully supported in a virtual environment. The complete Network Manager system, including the production servers, can be deployed in the virtual environment, or a combination of physical servers (for production) and virtual servers (for supporting systems) can be used. The preferred virtualization platform is VMware vSphere, an industry-leading virtualization platform for building cloud infrastructures. When running Network Manager in vSphere, availability and performance can be retained or increased at a low total cost of ownership. The Network Manager baseline releases are continuously tested on the VMware vSphere platform.
1.2.3. Configurable parameters

The Network Manager system is built to be flexible to meet users' evolving needs.
The following functions are all controlled by means of parameters stored in the database; that is, they can be tuned by means of database population:
- System and subsystem division;
- Authority areas for consoles and operators;
- Event processing, including audible alarms;
- Event and alarm storage;
- Printouts;
- Disturbance data collection and presentation;
- Historical information system, Utility Data Warehouse (UDW);
- Control system supervision.

1.2.4. Client server architecture

Network Manager ADMS, as shown in Figure 2, is a modular, client-server architecture. The client-server architecture was selected for its ability to utilize distributed computing, in line with industry trends: network data models are becoming larger, operations require greater detail, and the number of system users is expanding. The client-server architecture provides the scalability, performance, and flexibility needed to meet these requirements. Redundancy of major system components is provided. For all system and database access, Oracle Data Bus and Message Passing services are used.

Figure 2 - Client server architecture
The ADMS Oracle Database Server is the central data repository for Network Manager ADMS, and is implemented using Oracle database technologies. It contains the network model, customer data, information about customer calls, outages, crews, and other data. The data repository can be located at a single physical
location or distributed over several servers. Comprehensive utility functions are provided for the initial definition and maintenance of data structures.

The ADMS Network Model Server includes system supervision, functions to maintain synchronization of the Oracle database and the contents of shared memory, outage analysis, message brokering, and other functions. Both the ADMS Oracle Database Server and the ADMS Network Model Server operate on Linux.

ADMS Application Servers run the Network Analysis Applications: Unbalanced Load Flow, Restoration Switching Analysis, and Volt/VAR Optimization. They operate on the Windows Server operating system.

The OMS Web Server allows browser-based clients to access the system, and supports tabular displays for Calls, Dispatch, Crew Administration, Executive Reports, Reports, and Administration.

The Network Manager ADMS full-function client user interface runs on a desktop PC. The functions running on the PC include the graphical map interface program (Power System Explorer - PSE) and the Operations Management Interface (OMI). The PSE application includes local graphical display of the electrical network and land base data. In addition, tracing, display navigation, load allocation, load flow, ratio loads and restoration switching analysis applications are built into this program. The OMI is a separate executable that runs on the client and is seamlessly integrated with the PSE program. The OMI displays real-time tabular lists of outages, trouble calls, crews, tags, switching plans and temporary network objects. These lists allow the user to sort and filter listed objects and to pan the PSE graphical map to the location of a selected object. The full-function operator client is designed to require minimal server involvement in order to perform its functions.
Network Manager ADMS architecture is highly scalable and provides fast response times to user requests, since most requests are processed locally on the client PC. In Network Manager ADMS implementations that also include Network Manager SCADA, the full-function Power System Explorer (PSE) operator client provides display of analog and status values on the geographic world map, common tagging between one-line displays and the geographic world map, and interlock checking (loop creation, interruption of customers, etc.) resulting from control actions.
1.3. Configuration control, redundancy and failure management
1.3.1. Supervision of the control system

The supervision of the control system includes a set of activities that detect hardware and software failures and configure the system to maintain critical operations under such conditions. The supervision monitors both individual devices and critical functions or processes within the system.
The supervision of the control system covers the following:
- The Application Servers Subsystem (the computers running application servers and their required peripheral and support hardware);
- The Human Machine Interface Subsystem (operator stations, large screen controllers, printers);
- The auxiliary support equipment, if such equipment is supervised.
The control system supervision can be divided into the following parts:
- System and subsystem start;
- Device supervision;
- Subsystem supervision and switch-over;
- System time-keeping.
Each of these is described in a subsequent section.
1.3.2. System and subsystem start

A start takes place at the system, subsystem and device levels. At the system level, a start activates the system and proceeds until it is fully operational, starting from a state where all servers in the system have functional operating systems. At the subsystem and device levels, a start activates a fully operational and integrated subsystem or device that communicates with the other parts of the system, from a state where the processors in the subsystem/device have functional operating systems. In general, a start comprises the following activities:
- Initialize the system data structures;
- Activate the software.

1.3.3. Automatic and manual start of the system

A start is accomplished either automatically or manually. It can be manually initiated from a terminal window of the master application server, or through the operator interface if at least one server is already running in on-line mode. For a manual start, the following start options are available:
- Start of the run-time system with a warm or cold copy of the database;
- Start of an off-line system with or without a database;
- Internal, external, or manually entered time source.
An automatic start is initiated in the following situations:
- When a switchover condition is detected by the on-line server and the other server is not in standby mode;
- Following recovery from a general power failure of the system.
Database initialization

A cold or warm start of the database may be selected at system start.
At a cold start, all dynamic process data is marked as non-current in the database, and historical information, events and alarms are erased. A status check is requested to update process data with telemetered values. A cold start is typically not used in a production system, since information is lost unless dynamic data is restored. A warm start uses the latest database copy available on bulk storage. All process data in the database is considered valid, and a status check is requested to update the database with interim changes.
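The difference between the two start modes can be sketched as follows. This is a toy model; `RealTimeDb` and its fields are illustrative, not Network Manager internals:

```python
import copy

class RealTimeDb:
    """Toy real-time database: process points plus event history."""
    def __init__(self):
        self.points = {}       # point name -> {"value": ..., "current": bool}
        self.events = []       # historical events and alarms
        self.status_check_requested = False

    def cold_start(self):
        # Mark all dynamic process data non-current and erase history,
        # then request a status check to refresh from telemetry.
        for p in self.points.values():
            p["current"] = False
        self.events.clear()
        self.status_check_requested = True

    def warm_start(self, saved_copy):
        # Restore the latest database copy; data is considered valid,
        # and a status check picks up interim changes.
        self.points = copy.deepcopy(saved_copy.points)
        self.events = list(saved_copy.events)
        self.status_check_requested = True

db = RealTimeDb()
db.points["BKR1"] = {"value": "closed", "current": True}
db.events.append("BKR1 closed")
saved = copy.deepcopy(db)            # the "latest copy on bulk storage"

db.cold_start()
print(db.events, db.points["BKR1"]["current"])    # [] False

db.warm_start(saved)
print(db.points["BKR1"]["value"], len(db.events)) # closed 1
```

The sketch shows why a cold start is avoided in production: the history is gone and every point must be re-telemetered, whereas a warm start resumes from the saved copy with only interim changes to reconcile.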
1.3.4. Start of the application servers

An ordinary login procedure is used to manually start and stop the application servers. However, it is also possible to start and stop the application servers remotely, from the operator interface. An application server configured for automatic start will be started automatically if it was in operation when the failure occurred. This means that a boot start, e.g., after a power failure, will not start more application servers than were operating at the time of the failure.

A start procedure is used to return an application server in a failed or off-line mode to normal operational mode. If there is currently no on-line server, the start procedure brings the server up in on-line mode. If there is currently an on-line server, the start procedure initializes the server to hot standby mode and initiates a warm-up. The warm-up procedure updates the standby database to reflect current data and synchronizes the server clock with system time.
1.3.5. Start of the operator stations

The operator stations are started just like any standard Windows program, either manually or automatically after a reboot.
1.3.6. Start of remote communication servers

The Remote Communication Server (RCS) system consists of:

- The RCS Application. The RCS Application program executes in the same computer as the SCADA application server. It controls the communication lines to the Remote Terminal Units, forwards commands to the process, and collects data from it;
- The Process Communication Units (PCUs). The PCUs handle the communication with the Remote Terminal Units. The communication protocols are implemented in the PCUs.
The Remote Communication Server (RCS) subsystem starts automatically, as a cold start, when power is restored to the Process Communication Units (PCUs) following a power failure, or manually on request from an Operator Station. When communication with the application server has been established, the Remote Communication Servers are initialized. The initialization comprises the following steps:
- The RCS database from the on-line application server is downloaded to the PCU;
- The tables in the PCUs that define the structure of the RTU polling scheme are initialized;
- Communication with RTUs is established and data collection is started.
When a start of the RCS subsystem occurs, a status check of all RTUs is initiated to provide the current state of all monitored devices.
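The initialization sequence above can be sketched as an ordered procedure. The function and data-structure names are illustrative, not the actual PCU400 interface:

```python
def initialize_rcs(pcu, online_server):
    """Hypothetical sketch of RCS initialization, following the three
    steps described above plus the final RTU status check."""
    log = []

    # 1. Download the RCS database from the on-line application server.
    pcu["rcs_db"] = dict(online_server["rcs_db"])
    log.append("database downloaded")

    # 2. Initialize the tables defining the RTU polling scheme.
    pcu["poll_tables"] = sorted(pcu["rcs_db"].keys())
    log.append("poll tables initialized")

    # 3. Establish communication with the RTUs and start data collection.
    pcu["collecting"] = True
    log.append("data collection started")

    # Finally, a status check of all RTUs provides the current state
    # of all monitored devices.
    log.append("status check of %d RTUs" % len(pcu["poll_tables"]))
    return log

pcu = {}
server = {"rcs_db": {"RTU-1": {}, "RTU-2": {}}}
print(initialize_rcs(pcu, server)[-1])   # status check of 2 RTUs
```

The ordering matters: the polling tables are derived from the downloaded database, and the status check is only meaningful once data collection is running.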
1.3.7. Device supervision

Network Manager provides continuous, automatic supervision of devices depending on their supervision status. SNMP is used to monitor network-attached devices for conditions that warrant administrative attention.
Device supervision status

The supervision status of each monitored device in the system is continuously updated. Devices can have the following supervision statuses:

- In Service;
- Out of Service;
- Operable;
- Inoperable.
A unit is set to 'In Service' and 'Out of Service' by manual entry. A unit that is 'In Service' is automatically supervised and given 'Operable' or 'Inoperable' status depending on whether errors are detected. Using standard features of the Human Machine Interface, the status is shown on:
- Configuration diagrams showing all supervised devices in the configuration and their status;
- Alarm and event lists;
- Communication statistics reports.
Error statistics from the monitoring of RTUs and serial communications are shown in a communications report. The report covers device status and error counters for each RTU. The error counters are reset at midnight, when the communications report is printed; prior to clearing the error counters, the reports are printed on their assigned printers. However, reports can be displayed or printed at any time to show present status and counter values. Each change in device supervision status initiates event and alarm processing.
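The report-then-reset cycle for the error counters can be sketched as follows. This is a toy model; the real report layout and counter set are richer:

```python
def midnight_report(rtus):
    """Render the communications report, then reset the error
    counters -- the report is produced before the counters clear."""
    lines = ["RTU        errors"]
    for name, counters in sorted(rtus.items()):
        lines.append(f"{name:<10} {counters['errors']}")
    for counters in rtus.values():
        counters["errors"] = 0      # reset only after reporting
    return "\n".join(lines)

rtus = {"RTU-1": {"errors": 3}, "RTU-2": {"errors": 0}}
report = midnight_report(rtus)
print(report.splitlines()[1].split())   # ['RTU-1', '3']
print(rtus["RTU-1"]["errors"])          # 0
```

The ordering guarantees that each day's report captures the counts accumulated since the previous midnight before they are lost to the reset.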
1.3.8. System interconnect

The subsystems and devices communicate over, and are supervised by, a single or dual local area network (LAN) using the TCP/IP protocol. The dual-LAN design provides redundant communication and load sharing to enhance communication throughput.
Redundancy is maintained by connecting redundant devices to different LANs; load sharing is achieved by connecting different subsets of devices to the LANs. The LANs are generally interconnected with the customer's WAN infrastructure through firewalls. The communication software in the application servers maintains an image of the current LAN structure in the database.
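One common way to exploit a dual LAN of this kind is to send each message on both networks and discard the duplicate at the receiver, so a single LAN failure is invisible to the application. The document does not specify Network Manager's exact scheme, so the sequence-number deduplication below is an illustrative sketch only:

```python
class DualLanReceiver:
    """Accept each message once, from whichever LAN delivers it first.
    Messages carry a monotonically increasing sequence number."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def on_message(self, lan, seq, payload):
        if seq in self.seen:
            return False            # duplicate copy from the other LAN
        self.seen.add(seq)
        self.delivered.append(payload)
        return True

rx = DualLanReceiver()
rx.on_message("LAN-A", 1, "measurand update")
rx.on_message("LAN-B", 1, "measurand update")   # duplicate, discarded
rx.on_message("LAN-B", 2, "breaker event")      # LAN-A copy lost: still delivered
print(rx.delivered)   # ['measurand update', 'breaker event']
```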
1.3.9. Supervision of application servers

The application servers are supervised:

- within an application server group;
- by communication between the master application server and the other application server groups.

The latter is only applicable in the case of a distributed system.
Supervision within a server group

Each server, its main memory, disk memory, and software must have operable status for the server itself to have operable status and the capability to perform its functions. The servers monitor their own operation and the operation of their peripheral devices.
Test and check messages
Each application server supervises the operation of the other corresponding servers. This mutual supervision is accomplished with test messages and check messages. Test messages are transmitted every two seconds from each server to the other on the local area network (LAN) or, alternatively, on a separate supervision link. Check messages are transmitted from the on-line server to the standby server every two seconds over the inter-processor data link on the LAN, and the standby server acknowledges each check message over the same link. The check message indicates whether the server has detected a failure in its own operation or in a critical peripheral. The LAN and the optional separate link are always in service and cannot be set to out-of-service status. If the standby server fails to receive both the test message and the check message for ten seconds, the on-line server is considered to have failed and its status is set to inoperable. Likewise, if the on-line server does not receive the acknowledgment messages from the standby, the standby server is considered to have failed and its status is set to inoperable. Any change of operational status detected in the on-line server initiates event processing. The specific status of each server, as well as any messages generated, is maintained in the on-line server database and included in the information passed to the standby server to keep the standby database up to date. Critical errors in the standby server are reported to the on-line server over the data link and are reflected in the Control System Event List, which is maintained by the on-line server.
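The timeout rule above (failure declared only when both the 2-second test message and the 2-second check message have been absent for ten seconds) can be sketched as follows. This is an illustrative Python sketch, not the product's implementation; all class and function names are assumptions.

```python
TEST_INTERVAL = 2.0   # seconds between test/check messages (from the text)
TIMEOUT = 10.0        # seconds of silence before a peer is declared failed

class PeerMonitor:
    """Tracks the last time each supervision message type was seen."""

    def __init__(self, now=0.0):
        self.last_test = now    # last test message received
        self.last_check = now   # last check message received

    def on_test_message(self, now):
        self.last_test = now

    def on_check_message(self, now):
        self.last_check = now

    def peer_status(self, now):
        # The peer is inoperable only if BOTH message types are overdue.
        test_lost = now - self.last_test >= TIMEOUT
        check_lost = now - self.last_check >= TIMEOUT
        return "inoperable" if (test_lost and check_lost) else "operable"

monitor = PeerMonitor(now=0.0)
monitor.on_test_message(4.0)              # peer traffic continues
status_alive = monitor.peer_status(8.0)   # both channels still recent

silent = PeerMonitor(now=0.0)             # peer goes silent at t = 0
status_dead = silent.peer_status(12.0)    # both channels overdue
```

Requiring both channels to be silent avoids false failovers when only one path (for example the separate supervision link) is disturbed.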
Memory access protection supervision
Memory protection is provided to prevent a process from interfering with other processes or with the operating system.
A process that attempts to write to a protected area of memory is terminated and an event is generated. If the violating program is "critical", a switchover is requested; otherwise the program is restarted.
Software supervision
The execution of software is supervised to detect failures in the following critical functions:
- Timed execution of programs (periodic or at a designated time);
- Message handling;
- Disk-resident database accesses;
- Inter-server data transfers.
If an error is detected in any of these functions, the failure is reported to the other server.
Data verification
Programs verify the data they access and process. The detection of a data error initiates event processing. Execution of the program is terminated with a "data violation" code and the program is restarted.
1.3.10. Supervision of the human machine interface
The Human Machine Interface equipment comprises:
- Operator Stations;
- Printers.
Supervision of these devices is performed as described in the following sections.
Supervision of operator stations
The application server periodically polls the Operator Stations for status information. The reply message from each Operator Station includes the status of the display monitors connected to it. If no reply is received from an Operator Station before the next poll, the status of that Operator Station is set to inoperable and event processing is initiated. The display processor, display monitors, keyboard, and hard copy device belonging to that Operator Station are all given status "inoperable". A local Operator Station function displays a message stating that communication with the application server has been lost. When the Operator Station status is set back to operable, the start procedure for that station is initiated, and the currently connected display monitors are also set to operable status.
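The poll-based rule above (a station that misses one poll cycle becomes inoperable) can be sketched as follows. All names and the poll interval are illustrative assumptions; the text does not give a concrete value.

```python
POLL_INTERVAL = 5.0  # seconds; illustrative, not specified in the text

class StationSupervisor:
    """Marks operator stations inoperable when a poll goes unanswered."""

    def __init__(self, stations):
        self.last_reply = {s: 0.0 for s in stations}
        self.status = {s: "operable" for s in stations}

    def on_reply(self, station, now):
        # A reply restores the station to operable status.
        self.last_reply[station] = now
        self.status[station] = "operable"

    def poll(self, now):
        for station, last in self.last_reply.items():
            # No reply within a whole poll cycle -> inoperable.
            if now - last > POLL_INTERVAL:
                self.status[station] = "inoperable"

sup = StationSupervisor(["OS1", "OS2"])
sup.on_reply("OS1", now=10.0)   # OS1 answers, OS2 stays silent
sup.poll(now=12.0)
os1_status = sup.status["OS1"]
os2_status = sup.status["OS2"]
```

In the real system the status change would also cascade to the station's attached devices and initiate event processing, as described above.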
Supervision of printers
Report printers are supervised by cyclically checking whether the printer queue is stalled. Detection of an error status from the device initiates event processing, and the supervision status of the device is set to inoperable. When a printer has inoperable status, all output for that device is directed to the backup device.
1.3.11. Supervision of master station power supply
The Uninterruptible Power Supply (UPS) equipment for the control center comprises the following units:
- Rectifier;
- Battery;
- Inverter;
- Static switch (for supply directly from the power network).
Alarm status
If the master station includes a local Remote Terminal Unit, alarm status outputs from the power supply equipment can be connected to it and presented on a dedicated picture.
1.3.12. Subsystem supervision and switch-over
The Application Servers Subsystem includes redundant devices so that the failure of critical devices within these subsystems does not cause the subsystem itself to fail. This is achieved by configuring these subsystems with pairs of critical devices; either device in a pair can execute all the functions assigned to the pair. The supervision status of a critical device, set by the device supervision function, determines whether that device can execute the subsystem functions. Subsystem supervision monitors the supervision status of critical devices within a subsystem, assigns subsystem functions to specific devices, and switches functions to other devices in response to device failures or on manual request. Subsystem supervision also monitors the supervision status of Remote Terminal Units and reconfigures communications when a failure is detected, if redundant communication paths are available.
1.3.13. Operating modes
The operating mode of a critical device within a subsystem characterizes the functions that are assigned to that device. Critical devices are usually redundant in the system configuration. The operating modes allowed for such a device depend on the supervision status of the device and the operating mode of its paired redundant device. The following operating modes are defined:
- On-line;
- Standby;
- Synchronized;
- Off-line.

1.3.14. Application server subsystem
Application server modes
During normal operation, in each application server group, one server and its fixed set of system components (mass storage devices and dedicated controllers) are assigned the on-line mode and perform the on-line functions. Another server is fully operational and ready to take over the on-line processing immediately (hot standby mode). The group may include a third fully operational server (synchronized) that may take over the role of hot standby; such a takeover is always manually initiated. Failure monitoring is enabled in all servers. The switchover control process is enabled in the on-line and hot standby servers. Most peripheral devices, Human Machine Interface devices, and communication circuits are connected to the on-line server. Some devices, however, may be connected to the hot standby server depending on its current use.
Automatic mode switchover (failover)
Automatic transitions are initiated as a result of failures in critical devices detected by the device supervision function. When an application server in hot standby mode detects a failure of the on-line server, the hot standby server terminates all background activities, changes to on-line mode, and sets the previous on-line server to off-line mode. If an on-line application server detects a failure within its own system, and there is a corresponding server in standby mode, the on-line server sends a switchover request for the standby server to assume the on-line role. The new on-line application server switches all peripheral devices, operator stations, and Remote Communication Servers to itself and initiates event processing to alarm the switchover. When the change to on-line mode is completed, background tasks can be manually initiated on the new on-line server. System operation continues with the new on-line server; no operator action is needed to perform the switchover. After switchover, immediate operability is achieved:
- The Human Machine Interface devices are normally connected to the new on-line server. The operator does not need to log in again; however, current dialogs are canceled;
- The same pictures as before the switchover are presented. All process data are completely updated and valid;
- The system is ready to perform dialogs concerning process control;
- Process data received from PCUs during the switchover are resent to the new on-line server to prevent loss of data.
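The automatic failover described above can be sketched as a minimal state transition: on detecting an on-line failure, the hot standby is promoted and the failed server is set off-line. This is an illustrative Python sketch under assumed names; it omits the device switching, event processing, and database handover the text describes.

```python
class ServerGroup:
    """Minimal mode bookkeeping for one application server group."""

    def __init__(self):
        self.modes = {"A": "on-line", "B": "hot-standby"}
        self.events = []

    def detect_failure(self, failed):
        if self.modes.get(failed) != "on-line":
            return  # only an on-line failure triggers automatic failover
        # Promote the hot standby to on-line ...
        for name, mode in self.modes.items():
            if mode == "hot-standby":
                self.modes[name] = "on-line"
        # ... and set the previous on-line server off-line.
        self.modes[failed] = "off-line"
        self.events.append(f"switchover: {failed} set off-line")

group = ServerGroup()
group.detect_failure("A")   # supervision reports server A failed
```

The appended event stands in for the alarm that the real system raises so operators are informed even though no manual action is needed.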
Standby application server failures
When a failure of the hot standby processor is detected, the mode of that server is set to off-line. All peripherals are switched to the on-line server.
Switchover diagnosis
Whenever a failure initiates a switchover between servers, a report describing the status of the faulty system prior to the switchover is stored for later fault diagnosis.
Operation without a standby
When no standby server is available, the operation of the on-line server is modified. The on-line server continues to monitor the status of the previous hot standby servers and returns to normal operation as soon as an operational hot standby is available. In the interim, the system attempts to keep operating under conditions that would normally cause a switchover, as follows:
- When a non-critical program fails, an operation alarm is issued with an identification of the program. The program is automatically restarted;
- When a critical program or operating system software fails, the system saves status data for post analysis and initiates a start of the system;
- Upon restoration following a power failure, a start of the system is initiated automatically.
Manual mode transitions
Transitions between the different modes may be initiated manually from an Operator Station, if the operator has the appropriate authority. The system rejects subsequent manual transition requests while an active transition is in progress.
1.3.15. System time keeping
The Time Synchronization Subsystem comprises an external clock as the source for the time base, and the necessary interface equipment to synchronize the clocks in the application servers and Process Communication Units (PCUs). The time in the on-line application server is designated as the Standard Time. The system also accommodates the transition between Standard Time (the internal time) and Daylight Savings Time. The time used for presentation is called Calendar Time and can be offset from the Standard Time. The on-line master application server distributes both Standard Time and Calendar Time to the hot standby application server, distributed server groups, and the Operator Stations, but only Standard Time is distributed to the PCUs. The PCUs also receive a minute pulse directly from the external clock to assure the time synchronization. The PCUs, in turn, distribute Standard Time to the RTUs to achieve an accurate system-wide time setting; this time is used by the RTUs for time tagging. To meet local standards, the time format can be adapted to those most commonly used.
Start
At start, time is automatically read from the external clock, if available in the configuration, and translated to Standard Time, which is applied to the system. Otherwise, the source of Standard Time can be selected from the internal time-of-year clock or entered manually.
1.3.16. Automatic time synchronization
Automatic time synchronization is achieved in the following steps.
Master station synchronization
The Master Station receives time synchronization messages from a time system based on a GPS satellite receiver. The on-line master application server periodically sends time synchronization messages to the hot standby servers, the Operator Stations, and the Remote Communication Servers. The Remote Communication Servers additionally receive a minute pulse from the external clock via dedicated lines. In a system with a distributed configuration, the master server provides time synchronization for all application servers.
Remote terminal unit synchronization
The clock in each RTU is synchronized by a time synchronization message from the Remote Communication Servers. The time synchronization takes into account baud rate and transmission delays, which are set via data maintenance in the master station.
Manual setting
The Calendar Time can be adjusted from an operator station. The adjustment (a number of seconds) is specified by manual data entry on a Control System Picture. Once the value has been entered, the system changes the time in the desired direction until the adjustment has been achieved.
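One reading of "changes the time in the desired direction until the adjustment has been achieved" is a gradual slew rather than a single step. The following sketch, under that assumption, splits an entered correction into small increments; the step size and function name are illustrative, not values from the text.

```python
def slew_schedule(adjustment_seconds, max_step=0.5):
    """Return the per-tick increments that realize the full adjustment.

    adjustment_seconds may be negative (setting the time back).
    """
    steps = []
    remaining = float(adjustment_seconds)
    sign = 1.0 if remaining >= 0 else -1.0
    remaining = abs(remaining)
    while remaining > 0:
        step = min(max_step, remaining)   # never exceed the slew rate
        steps.append(sign * step)
        remaining -= step
    return steps

schedule = slew_schedule(3)   # operator enters a +3 second adjustment
total = sum(schedule)
```

Slewing in small steps keeps time monotonic for time-tagged data while the correction is being applied, which is why clock adjustments in control systems are usually not applied as one jump.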
Daylight savings time
The date and time of the change from Standard Time to Daylight Savings Time, and vice versa, are fetched from the standard Linux/UNIX Daylight Savings Time table and stored in the database. Six dates for future changes between Standard Time and Daylight Savings Time are displayed for the operator in a Control System Picture. The changes affect the Calendar Time offset and have no effect on Standard Time, which is still used internally for time tagging. The translation between Standard Time and Calendar Time is used in the following functions:
- Event Processing;
- Trend Presentation;
- Reports;
- Time Selection;
- Time activation of programs.
Event processing
Events are time tagged with Standard Time and sorted by this time. This ensures that the events are presented in the order in which they occur. The presentation of events on pictures and reports uses Calendar Time.
Trend presentation
For days that contain 23 or 25 hours, the curves in trend pictures are presented as continuous curves and the time axis is adjusted; that is, one hour is missing or one hour is doubled. The following is an example of a trend when Calendar Time has been set back one hour (in the autumn):
Figure 3 - Trend presentation
Reports
In reports, transition days are presented as having 23 or 25 hours, and the day calculations use these numbers of hours, respectively.
Time selection and paging
Calendar Time is entered for time selection and paging. These functions use the appropriate Calendar Time offset to access time-tagged data.
Time activation of programs
Time activation of programs is performed according to Calendar Time.
1.3.17. Independent system health monitoring
The Independent System Health Monitor provides an independent way to supervise the health of critical components (for example, alarm processing and transmission applications) in the Network Manager system. The monitoring is done by independently running SNMP agents that supervise the various system functions in parallel with the standard supervision functionality. The SNMP agents respond to requests from an SNMP monitor, which can display information and alerts based on the Network Manager system status. Network Manager includes an SNMP monitor for the workstations, which makes the operators aware of any issues with the system, in addition to integration with third-party monitoring systems.
Monitoring of Network Manager
Health applications monitor the various components of the Network Manager system. The health information for each supervised component is:
- Status (Not Started, Good, Bad);
- Reason;
- Timestamp.
Below is a list of the monitored applications.
Alarm processing
The following is supervised by the alarm processing health application:
- Processes;
- Queues;
- Whether an alarm list is full;
- Whether an alarm list is broken;
- Whether an alarm can be created by an indication status change.
If a process or a queue vital to alarm processing does not work as expected, the health is considered bad. The health is also considered bad if an alarm list is full or broken. The health is considered good only if all supervised parts of the alarm processing are working as expected.
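The aggregation rule used by these health applications (good only if every supervised part works, bad if any part fails) can be sketched as follows. The function and the part names are illustrative assumptions.

```python
def aggregate_health(parts):
    """parts: mapping of supervised part -> bool (working as expected?)."""
    if not parts:
        return "Not Started"   # nothing supervised yet
    # Good only when ALL parts work; any single failure makes it Bad.
    return "Good" if all(parts.values()) else "Bad"

alarm_processing = {
    "processes": True,
    "queues": True,
    "alarm_list_not_full": True,
    "alarm_list_not_broken": True,
    "alarm_creation_works": True,
}
healthy = aggregate_health(alarm_processing)

alarm_processing["alarm_list_not_full"] = False   # an alarm list fills up
unhealthy = aggregate_health(alarm_processing)
```

The same all-or-nothing rule recurs in the data acquisition, ICCP, SPL, and Calculation health applications described below, only with different supervised parts.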
Data acquisition
The following is supervised by the data acquisition health application:
- Processes;
- Queues.

If a process or a queue vital to data acquisition does not work as expected, the health is considered bad. The health is considered good only if all supervised parts of data acquisition are working as expected. Stale data and secondary source supervision are part of the data acquisition health application.
ICCP
The following is supervised by an ICCP health application:
- Processes;
- Queues;
- Whether data is received and processed correctly.

If a process or a queue vital to ICCP does not work as expected, the health is considered bad. The health is also considered bad if data is not received or processed correctly. The health is considered good only if all supervised parts of ICCP are working as expected.
Alarm rate
The alarm rate per second is provided by the SNMP agent, and the threshold for issuing an alert should be configured in the SNMP Monitor.
AGC
The AGC health application supervises that each AGC program is executed as expected. If not, the health is considered bad.
Network apps
The Network Apps health application supervises that the Real-time State Estimator sequence, Real-time State Estimation, and Real-Time Security Analysis are executed as expected. If not, the health is considered bad.
SPL
The following is supervised by the SPL health application:
- Processes;
- Queues.

If a process or a queue vital to SPL does not work as expected, the health is considered bad. The health is considered good only if all supervised parts of SPL are working as expected.
Calculation
The following is supervised by a Calculation health application:
- Processes;
- Queues.

If a process or a queue vital to Calculation does not work as expected, the health is considered bad. The health is considered good only if all supervised parts of Calculation are working as expected.
Heartbeat visualization application
The Heartbeat Visualization Application (HVA) is an SNMP monitor, typically installed on the operator workstations, that regularly requests health information from Network Manager by sending SNMP requests. The HVA indicates the health status by an icon, a balloon tip, and an audible alarm.
Icon
The HVA indicates the health status by an icon in the system tray (located in the Windows taskbar). The icon shifts color depending on the health of the requested system:
- Green: The health of the requested system is OK;
- Yellow: No response from the system, or no SCADA online found;
- Red: At least one part of the system is reported erroneous.
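The traffic-light rule above can be sketched as a small mapping. The status values and function name are assumptions for illustration only, not the HVA's actual interface.

```python
def icon_color(responses):
    """responses: per-part statuses from the SNMP poll, or None if no reply."""
    if responses is None or not responses:
        return "yellow"   # no response from the system / no SCADA online found
    if any(status == "bad" for status in responses):
        return "red"      # at least one part reported erroneous
    return "green"        # every part reports OK

ok = icon_color(["good", "good", "good"])
fault = icon_color(["good", "bad", "good"])
silent = icon_color(None)
```

Note that "no response" is deliberately distinct from "bad": yellow means the monitor cannot tell, while red means a part has positively reported a fault.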
Balloon tip
When the icon shifts color, a balloon tip displays the reason for the change of status.
Audible alarm
An audible alarm can be configured for the HVA. If the health is reported bad, the alarm sounds until it is acknowledged or the health status changes. An audible alarm is also generated when there is no communication with IHM on the application servers.
1.4. Emergency center
Network Manager's flexible architecture allows for various emergency center configurations. The details of the configurations are described in the following sections.
1.4.1. Multi master emergency center
Figure 4 - Multi master emergency center
[Figure 4 shows two sites connected over a WAN: each site has an online server and a hot standby server, each with its own RTDB; manual entry sync links the sites, and PCU400 front ends connect the system to the RTUs.]
The multi master emergency control center concept allows for two active masters, one in each control center, where both masters execute the complete data acquisition, SCADA, and application functionality. Both systems receive data from the field devices, while only one is able to send out controls. The multi master concept has the advantage that each system executes the full set of applications, from data acquisition to advanced applications, completely independently. This reduces the risk that a fault in one of the servers would jeopardize the operation of the whole system, at both the main and the emergency site.
Manual entry sync
All data coming from field devices or through external interfaces connected to both systems is processed in each system and does not require any synchronization between the sites. However, all locally created data in the main center needs to be synchronized in a reliable and efficient manner. The ABB Network Manager multi master emergency concept handles this through its integrated sync process, which supports synchronization of:
- Tags;
- Locally collected or manually entered measurand values and indication status;
- Alarm acknowledgement and deletion.
Locally collected or manually entered measurand values and indication states from the following sources can be synchronized between the systems:
- Manual entry;
- ICCP (provided the external partner does not have redundant links);
- External interfaces (provided the external system does not exist at both sites);
- Calculation results.
To ensure consistent alarm and event lists between the systems, alarm acknowledgement and deletion are synchronized between the systems for power system data (e.g. measurands, indications, transmission lines, stations) and for control system data (e.g. RTUs, com-lines). The synchronization is based on Object Identity and Time, compensating for minor deviations in the time stamps of the two systems.
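The matching rule above (same object identity, time stamps within a small tolerance) can be sketched as follows. The tolerance value and all names are assumptions for illustration; the text does not state the actual tolerance.

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(seconds=2)   # assumed, not specified in the text

def find_matching_alarm(ack, alarms):
    """Find the alarm in the peer site that an acknowledgement applies to."""
    for alarm in alarms:
        if alarm["object_id"] != ack["object_id"]:
            continue
        # Compensate for minor time stamp deviations between the sites.
        if abs(alarm["time"] - ack["time"]) <= TOLERANCE:
            return alarm
    return None

site2_alarms = [
    {"object_id": "LINE_14", "time": datetime(2018, 5, 1, 10, 0, 1)},
    {"object_id": "RTU_7",   "time": datetime(2018, 5, 1, 10, 0, 1)},
]
ack = {"object_id": "LINE_14", "time": datetime(2018, 5, 1, 10, 0, 0)}
match = find_matching_alarm(ack, site2_alarms)
```

Matching on identity plus a time window, rather than exact time stamps, is what lets two independently time-tagging masters agree on which alarm was acknowledged.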
Data engineering
One centralized data engineering environment is used in the multi master emergency configuration. The centralized environment is used to maintain both sites, avoiding redundant data entry and ensuring consistency between the sites. An additional advantage of the multi master emergency site is that database changes can be applied to and tested on one of the sites first, letting that site execute with the new configuration for a while before adding it to the other site. Using this procedure for database maintenance adds an extra level of security to the database maintenance process.
External interfaces
Typically, external interfaces such as ICCP, ELCOM, and file exchange are active on both sites; data is therefore processed locally and no data synchronization is required. However, sometimes the external systems do not have the capability to interface with a total of four servers (online/hot standby at each site). To support this scenario, Network Manager has the capability to synchronize specific, local-only data across the sites.
1.4.2. Synchronized emergency center
Figure 5 - Synchronized emergency center
[Figure 5 shows two sites connected over a WAN: Site 1 has an online and a hot standby server, Site 2 has two sync standby servers, each server with its own RTDB; database sync links the sites, and PCU400 front ends connect the system to the RTUs.]
The Synchronized Emergency Center is based on the Network Manager database synchronization functionality, in which a server group can consist of up to six redundant servers: one online, one hot standby, and up to four synchronized servers. The servers can be distributed across one to six different sites, allowing for one or more emergency sites. The concept keeps all databases always in sync and ensures that the emergency center is always ready to take over operations using a bumpless failover/switchover. The synchronized emergency center can be configured to run in different modes:
- Require manual intervention to start up;
- Take over immediately when the on-line server fails (running in hot standby mode);
- Automatically replace any or both missing server functions (on-line and/or hot standby) in the main center.
The advantages of the synchronized emergency center are:
- The databases are always kept in sync and, apart from the split mode scenario, do not require any re-sync/merge of data when moving to or back from emergency operations;
- It is one logical system (made up of up to six servers in each server group), which makes the system easy to maintain;
- Having additional synchronized servers increases the redundancy of the main system, since the system can fail over to emergency site servers;
- Communication redundancy is added, since the front-end computers at both sites can be utilized by the online servers.
The mode in which the emergency center runs can be changed using the operator workstation maintenance screens, provided the person has the right authority.
Split mode for synchronized emergency
The Split Mode functionality handles the more complicated scenario where a Network Manager system is configured so that, in an emergency, it can be separated into two masters controlling different parts of the network when there is no communication between the centers. Running with two servers online and no communication between them means that manually entered data, historical data, tags, etc. are updated only in the respective system, and must be merged when going back to normal operations. The Network Manager split mode functionality takes care of the complete process: entering split mode, running the two masters in parallel, and going back to normal operations, including merging of the required data.
1.4.3. System copy/cold emergency
The system copy/cold emergency center is a complete copy of the production system, updated periodically with the source database, real-time database, historical database, and displays. The function can be used either as a stand-alone emergency site or to update other non-production systems such as a PDS. The update function can be executed manually or set to automatic mode, for example once per day or once per week. To avoid disturbing on-line operation, the copy function can be configured to utilize either a synchronized or a replicated server rather than the online/hot standby servers.
1.5. Data architecture
Avanti, which may be regarded as a shell around the database, controls all database manipulation; thereby, the objectives of data independence and integrity are achieved. Aside from comprehensive data management, there are facilities for message handling that ensure the integrity of all messages in the system. In addition, the Avanti facilities:
- Enable flexible data structures suitable to a variety of applications;
- Connect between different databases;
- Centralize database accesses to achieve independence of physical storage location;
- Provide general data manipulation facilities independent of storage format and physical database structure;
- Provide physical access facilities for functions with extremely high access time requirements;
- Perform data conversion as specified by the user;
- Ensure the integrity of data against destruction;
- Contain recovery and restart functions in the event of system failure;
- Provide concurrent multi-program update/retrieval of data from the database.
- Provide efficient inter-process communication;
- Permit concurrently open databases;
- Ensure multi-user support on multi-CPUs - true real-time behavior;
- Give efficient data transfer between database and user;
- Provide high-speed disk I/O with efficient caching;
- Provide an SQL-based query language (the Avanti Query Language) for use in terminal sessions as well as in programs;
- Provide a Database Definition Language (DDL) with redefinition possibilities;
- Provide study databases, so that simulation and training can be done in a copy of the real-time database;
- Support redundant and distributed configurations;
- Provide distributed access facilities;
- Provide a powerful database editor and other tools;
- Allow primary memory resident files for fast data access;
- Allow users to subscribe to Avanti data.
Figure 6 - Overall database structure
Data description
The description of the logical and physical data structures is contained in the Data Catalog, which is stored within the database itself. In a specific customer system, the Data Catalog is a complete documentation of the database structures. The structures are examined and modified using the database definition tool Avanti Define (ADF).
Data manipulation
All data manipulation is performed using the Data Manipulation Language (DML). The transformation of the logical address into the physical data storage, and vice versa, is transparent to the application program and the user. The data manipulation language commands are available from C, C++, and FORTRAN programs. The comprehensive repertoire of commands enables the programmer to store, retrieve, modify, and delete data.
Data services
The data services include special access functions as well as utility functions for generation, backup, maintenance, and performance monitoring:
- Start and restart;
- Transfers to the hot standby application server;
- Message handling;
- Backup copying of the database for recovery purposes;
- Security copying of the main memory database;
- Synchronization of the database in the on-line and hot standby application servers;
- Database definition and generation;
- Access;
- Statistics;
- Interactive database manipulation;
- Access-time measurement;
- Message system analysis;
- General database replication functions.
Design
The design concept of Avanti is chosen to provide high-performance real-time database management combined with a true logical view of the database. The data structures defining the layout of the database, as well as its buffer areas, are designed for added flexibility. Redundant server configurations are fully supported by Avanti: all specified updates of the on-line application server are transferred to the hot standby application server.
Distributed access
The distributed access facility of Avanti makes message queues and database files accessible from anywhere within a computer network. To the user process, the queue or file appears to be a part of the currently assigned local database (see Figure 7).
Figure 7 - Distributed access
1.5.1. Database hierarchies
The Avanti DBMS supports hierarchies of databases. A sub-database can share files, or parts of files, with its parent database. This facility is used to create study databases. A study database is a sub-database that shares a description of a supervised power process with its parent database; however, the actual process status is not shared, and the study database keeps a local variant of that information. The process can then be simulated and the result stored in the study database. As the study database has the same structure as the parent database, the result of the simulation can be presented in the same way as data from the real process; the presentation programs need only assign the study database instead of the real-time database. A hierarchy of databases is called a database set.
1.5.2. Message handling

Avanti message handling facilities are an integral part of the system. They are designed as a powerful means for inter-program and inter-server communication without requirements for synchronized execution. Avanti message handling provides a number of advantages to the user:

- It provides a flexible way to transfer collections of data items of arbitrary size between programs;
- Avanti stores an arbitrary number of messages in the database in chronological order;
- The database storage of the messages gives full database backup support;
- The messages are put in FIFO (First In First Out) queues connected to the receiving processes;
- Multiple queues to a single program are supported, with priorities between the queues; the priority can be overridden;
- Message data are stored in primary memory for minimal delay of the user program;
- A message can be sent to another program in 'wait mode': execution in the sending program is not resumed until the receiving program has handled the message. The receiving program can optionally send return data, for instance status codes, back to the sending program.
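The combination of FIFO ordering within a queue and priorities between queues can be sketched as follows. This is a simplified illustration (in-memory only, no database backing or wait mode), not Avanti's actual message API.

```python
from collections import deque

class MessageSystem:
    """Toy sketch of prioritized FIFO queues: a receiving program may own
    several queues; the dispatcher serves the highest-priority non-empty
    queue first, and within each queue messages keep FIFO order."""

    def __init__(self):
        self.queues = {}   # priority -> deque of messages

    def send(self, priority, message):
        self.queues.setdefault(priority, deque()).append(message)

    def receive(self):
        # Lower number = higher priority; scan queues in priority order.
        for priority in sorted(self.queues):
            if self.queues[priority]:
                return self.queues[priority].popleft()
        return None   # nothing pending

ms = MessageSystem()
ms.send(2, "low: log rotation")
ms.send(1, "high: breaker tripped")
ms.send(1, "high: alarm ack")

assert ms.receive() == "high: breaker tripped"   # priority queue served first
assert ms.receive() == "high: alarm ack"         # FIFO within a queue
assert ms.receive() == "low: log rotation"
```

In the real system the queues are held in the database (giving backup support and recovery), and 'wait mode' would block the sender until the receiver replies; the sketch shows only the ordering semantics.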
1.5.3. Database integrity

Database integrity is the ability of the system to maintain the values of data items as they were initially placed in the database. Avanti provides powerful facilities to ensure safe operation in case of malfunction in any part of the system, and keeps system downtime due to an invalid database to a minimum. In addition to the automatic integrity mechanisms built into Avanti, there are a number of preventive measures outside the database management system that can be taken, with or without the aid of Avanti utility functions. The facilities and measures for database integrity assurance are as follows:
- DML command analysis;
- Access restrictions;
- Cyclical backup of the main memory database;
- Data transfers in redundant server systems. The basic idea is that all programs that make database modifications are message-controlled. The use of message queues for the transfer of data between programs results in unbroken chains: if the database of a hot standby application server, or the copy used for restarting the system, contains only one of the messages in such a chain, the associated programs will continue their execution upon restart, and no data will be lost;
- Synchronizing of files in distributed systems: synchronizing is the updating of the real-time database of a hot standby application server with the amendments and additions made in the real-time database of the corresponding on-line application server during the time the standby application server has been non-operational; synchronizing is also performed after database generation;
- Synchronizing at start-up of a hot standby application server;
- Synchronizing after database population;
- Manual synchronizing.

Avanti provides functions for administration of a number of databases within one application server program.

1.5.4. Study database

The database used for process supervision and control is referred to as the real-time database. The study database, on the other hand, is normally a copy of parts of the real-time database and is used for the following purposes:
- Interactive studies of different alternatives to run the process;
- Test of programs and data;
- Operator training.
The study database is a powerful facility for running programs in a true operational environment without affecting the actual real-time system (see Figure 8). A number of features are incorporated in the study database concept:
- Definition possible on both on-line and replicated servers;
- Read-only files in common with the real-time database;
- Unique files can be included which are not defined in the real-time database;
- Files in primary and secondary memory;
- Message-controlled programs can, without changes, access both real-time and study databases; the database identity is transparent to the programs.
Figure 8 - Study database concept
Database transparency

The study database is composed of a global part, shared with the real-time database, and a unique study-database part. Study applications can only read the data in the global part, while data in the unique part can be both read and updated. This design ensures that study applications never update or contaminate real-time information. The results from the study applications are stored in the study database. The same tabular and single-line displays can be used to show results from both the real-time and the study database.
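The read-only guarantee on the global part can be sketched as a thin access layer. This is a hypothetical illustration (the view class and keys are invented); the point is that writes to shared data are rejected while study results land in the unique part.

```python
class StudyView:
    """Toy view enforcing the split described above: reads fall through
    to the shared global part, writes are only allowed in the unique
    study part."""

    def __init__(self, global_part):
        self._global = global_part   # shared with the real-time database
        self._study = {}             # unique, writable study part

    def read(self, key):
        if key in self._study:
            return self._study[key]
        return self._global.get(key)

    def write(self, key, value):
        if key in self._global:
            raise PermissionError(f"{key} belongs to the global part (read-only)")
        self._study[key] = value

rt = {"topology.line_7": ("bus_a", "bus_b")}
view = StudyView(rt)

view.write("result.line_7.flow_mw", 95.3)   # study result: allowed
rejected = False
try:
    view.write("topology.line_7", None)      # shared global data: rejected
except PermissionError:
    rejected = True
assert rejected and rt["topology.line_7"] == ("bus_a", "bus_b")
```

Because reads are transparent across both parts, the same display programs can present real-time data and study results without knowing which part a value came from.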
Calculated power system results are stored in the database in the same way as acquired data.
Data initialization

A planning case used in a study database can be established from the following sources:
- The real-time database;
- Saved study cases;
- Old planning cases (from off-line media).
A planning case can be initialized from the real-time database by copying the process data and real-time network model to the corresponding study process data and study network model in a study database.
Modification of old planning cases

An existing planning case can be modified interactively to reflect new operating conditions. Comprehensive interactive functionality is available, including:

- Opening/closing of breakers and disconnects;
- Modification of active/reactive loads, on an individual or system/area basis;
- Modification of generator active power and voltage set points; the generator outputs can be updated on an individual or system basis;
- Updating of transformer tap changer ratios and voltage set points;
- Changing of VAR generation.
Management of study cases

Network Manager supports storage, retrieval and maintenance of study cases. The following operations are available:

- Creation and storage of study cases;
- Retrieval of saved cases;
- Deletion of saved cases.
The number of cases that can be stored is limited only by the size of the available disk storage. Each saved case is identified by a unique name and its creation date.
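The three case-management operations can be sketched with a small store keyed by unique name, each entry stamped with its creation date. This is an illustrative sketch only; it is not Network Manager's actual case-management interface.

```python
from datetime import date

class CaseStore:
    """Toy study-case store: each saved case is identified by a unique
    name plus its creation date, supporting create/retrieve/delete."""

    def __init__(self):
        self._cases = {}

    def save(self, name, data):
        if name in self._cases:
            raise ValueError(f"case name must be unique: {name}")
        self._cases[name] = {"created": date.today().isoformat(),
                             "data": data}

    def retrieve(self, name):
        return self._cases[name]["data"]

    def delete(self, name):
        del self._cases[name]

store = CaseStore()
store.save("peak_winter_2023", {"load_mw": 1450})
assert store.retrieve("peak_winter_2023") == {"load_mw": 1450}
store.delete("peak_winter_2023")
```

In the real system the cases live on disk (hence the disk-space limit); the sketch only mirrors the naming and lifecycle rules.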
Study database configuration options
The study database is supported either in the real-time server or in one or more replicated servers.
© 2018 ABB | All Rights Reserved | S01 System Overview and Architecture w ADMS.docx
31