Uniformance® Robust Data Collection User Guide
R310

Copyright, Notices, and Trademarks
© Honeywell International Inc. 1998 – 2012. All Rights Reserved.
While this information is presented in good faith and believed to be accurate, Honeywell disclaims the implied warranties of merchantability and fitness for a particular purpose and makes no express warranties except as may be stated in its written agreement with and for its customers. In no event is Honeywell liable to anyone for any indirect, special or consequential damages. The information and specifications in this document are subject to change without notice.
Honeywell, Experion, PlantScape, TotalPlant, Uniformance PHD, and Business FLEX are U.S. registered trademarks of Honeywell International Inc. Other brand or product names are trademarks of their respective owners.

Release Information
Document Revision: 14
Document Revision Date: February, 2012
Document ID: pim3501
Document Revisions:
Revision   PAR   Description
13         n/a   Revised the document for R300.
14         n/a   Revised the document for R310.

Honeywell Process Solutions
1860 W. Rose Garden Ln
Phoenix, Arizona 85027-2708 USA

Support and Other Contacts

United States and Canada
Contact: Honeywell Solution Support Center
Phone: 1-800-822-7673
Calls are answered by dispatcher between 6:00 A.M. and 4:00 P.M. Mountain Standard Time.
Emergency calls outside normal working hours are received by an answering service and returned within one hour.
Mail: Honeywell HPS TAC, MS L17
1860 W Rose Garden Ln
Phoenix, Arizona 85027-2708

Europe
Contact: Honeywell TAC-EMEA
Phone: +32-2-728-2732
Facsimile: +32-2-728-2696
Mail: TAC-BE02, Hermes Plaza, Hermeslaan 1H, B-1831 Diegem, Belgium

Pacific
Contact: Honeywell Global TAC – Pacific
Phone: 1300-300-4822 (toll free within Australia); +61-8-9362-9559 (outside Australia)
Facsimile: +61-8-9362-9564
Mail: Honeywell Limited Australia
5 Kitchener Way
Burswood 6100, Western Australia
Email: GTAC@

India
Contact: Honeywell Global TAC – India
Phone: +91-20-66039400
Facsimile: +91-20-66039800
Mail: Honeywell Automation India Ltd.
56 and 57, Hadapsar Industrial Estate
Hadapsar, Pune – 411 013, India
Email: Global-TAC-India@

Korea
Contact: Honeywell Global TAC – Korea
Phone: +82-80-782-2255 (toll free within Korea)
Facsimile: +82-2-792-9015
Mail: Honeywell Co., Ltd
4F, Sangam IT Tower B4-4 Block
1590, DMC Sangam-dong, Mapo-gu,
Seoul, 121-835, Korea
Email: Global-TAC-Korea@

People’s Republic of China
Contact: Honeywell Global TAC – China
Phone: +86-21-52574568
Mail: Honeywell (China) Co., Ltd
33/F, Tower A, City Center, 100 Zunyi Rd.
Shanghai 200051, People’s Republic of China
Email: Global-TAC-China@

Singapore
Contact: Global TAC – South East Asia
Phone: +65-6580-3500
Facsimile: +65-6580-3501; +65-6445-3033
Mail: Honeywell Private Limited
Honeywell Building
17, Changi Business Park Central 1
Singapore 486073
Email: GTAC-SEA@

Taiwan
Contact: Global TAC – Taiwan
Phone: +886-7-536-2567
Facsimile: +886-7-536-2039
Mail: Honeywell Taiwan Ltd.
17F-1, No.
260, Jhongshan 2nd Road, Cianjhen District, Kaohsiung, Taiwan, ROC
Email: Global-TAC-Taiwan@

Japan
Contact: Global TAC – Japan
Phone: +81-3-6730-7160
Facsimile: +81-3-6730-7228
Mail: Honeywell Japan Inc.
New Pier Takeshiba, South Tower Building,
20th Floor, 1-16-1 Kaigan, Minato-ku,
Tokyo 105-0022, Japan
Email: Global-TAC-JapanJA25@

Elsewhere
Call your nearest Honeywell office.

World Wide Web
Honeywell Solution Support Online: /ps

Training Classes
Honeywell Automation College:

Contents

1. ABOUT THIS DOCUMENT
1.1 Who Should Use this Guide
1.2 What is in this Guide
1.3 Contact Us
2. INTRODUCING ROBUST DATA COLLECTION
2.1 RDC Functionality
2.2 RDC Configuration Overview
    RDI/Link Configuration Requirements
    RDC Port Number Usage
    RDI Setup Utility
    Interface.Dat File Requirements
    Interfaces_CustomConfig.Dat File
    Time Synchronization Requirements
2.3 Single Collector Node to Shadow Configuration
2.4 Dual Collector Node to Shadow Configuration
2.5 Summary of RDC Features
2.6 Real-time System Description
2.7 Required Software Components
2.8 History Recovery
    Data Collection During History Recovery
    History Recovery for RDC Scheme without a Standby Collector
    Duration of History Recovery
    RDC Caching during History Recovery
3. ROBUST DATA COLLECTION - DUAL COLLECTOR MODE
3.1 Dual Collector RDC Architecture
3.2 Fail-over Functionality
    Automatic Fail-over
    Manual Fail-over
3.3 Short Duration Data Loss
4. CONFIGURING ROBUST DATA COLLECTION
4.1 General Guidelines
4.2 Determine the Port Numbers
4.3 RDC Configuration Checklist
4.4 RDI-Specific Documents
4.5 Prepare the System Environment
    Update Hosts File on PHD Servers
    Update Services File to Reserve RDC Ports
4.6 To Define
Source System Tag Attributes and Data Types
4.7 Complete the RDI Type Configuration Form
4.8 Complete the Interfaces (RDI's & Links) Form
4.9 Complete the RDC Configuration Form
4.10 Verify PHD Configuration on Each RDC Node
    To Increase Maximum Tags (PhdParams.Dat)
5. INSTALL INTERFACES ON RDC NODES
5.1 Run RDISetup
    To Run RDISetup for RDC Nodes
5.2 History Recovery on RDC Nodes
5.3 Interpret Interfaces.Dat File on RDC Nodes
5.4 Define and Start Interfaces on a Running PHD System
6. CONFIGURING PHD TAGS
6.1 Tags on RDC Shadow Interfaces
    Tag Field Usage
6.2 Tags on RDC Collector Interfaces
6.3 Implement an RDC Watch Dog Tag (Optional)
    WATCHDOG_TIMER Parameter
    Watch Dog Tag Configuration Guidelines
    To Implement a Watch Dog Tag
7. MODIFY RDC REGISTRY SETTINGS
7.1 Enable RDC Disk Caching (Optional)
7.2 Enable Interface to Execute in Standby and Active Modes
8. MONITOR RDC STATUS
8.1 Access RDC Status Display
8.2 RDC Status Display Examples
8.3 Use NSCAN Parameter To Monitor Status
9. TROUBLESHOOT RDC
9.1 Watchdog-related Symptoms
APPENDIX A – RDC CONFIGURATION EXAMPLE
    Example – RDC Topology
    RDC Data Specification Form
    Example – RDC Data Specification Form
    Example – RDC Entries in Interfaces.Dat File
    Interfaces.Dat – RDC Shadow Node S37
    Interfaces.Dat – Collector Node APP49
    Interfaces.Dat – Collector Node APP50
APPENDIX B – RDC FLOW CHARTS
    RDC Flow Charts
    Active RDI Flow
    Standby RDI Flow
    Shadow RDI Flow
GLOSSARY
INDEX

1. About This Document

1.1 Who Should Use this Guide
This guide is intended for those experienced in the configuration and commissioning of PHD.

1.2 What is in this Guide
The following table shows the information in each section of this guide:

This section…                                  Contains this information…
Introducing Robust Data Collection             Detailed description of what Robust Data Collection accomplishes when operating in single or dual collector mode.
Robust Data Collection - Dual Collector Mode   Additional details about an RDC scheme operating in dual collector mode.
Configuring Robust Data Collection             Instructions on how to configure Robust Data Collection.
Install Interfaces on RDC Nodes                Instructions on how to install Robust Data Collection on the shadow and collector nodes.
Monitor RDC Status                             How to access and interpret the RDC status display.
Appendix A – RDC Configuration Example         An example RDC topology with associated PHD Configuration Tool forms and Interfaces.Dat files.
Appendix B – RDC Flow Charts                   Flow charts of RDC operation.

1.3 Contact Us
If you have any comments or concerns about this documentation, please e-mail us at: support@. Ensure that you type Uniformance Documentation in the subject line of your e-mail message.

2. Introducing Robust Data Collection

2.1 RDC Functionality
Robust Data Collection (RDC) is the methodology used to transfer data from a collector node PHD server to the shadow PHD server. A shadow server is a remote PHD node used to gather and store process information from one or more PHD collector nodes. RDIs and Links (interfaces) on the collector system send the real-time values directly to the shadow server. The shadow and collector nodes use the same RDBMS database and, therefore, share the same tags. The shadow node provides a buffer between the client environment and the process environment. The PHD collector nodes collect real-time data while the PHD Shadow nodes serve process history data to end users.
The end user has access to process data without having to connect to a node on the real-time system.

On a per-interface basis, Robust Data Collection provides the ability to configure data collection in the following modes:
∙ single collector to shadow server
∙ dual collector to shadow server (not available for Links)

When dual collector nodes are implemented, one collector acts as the primary collector, sending data to both the standby and the shadow servers. The standby collector acts as a warm backup node. If the standby collector detects that the primary is not in an ACTIVE state, the standby takes over data collection.

Note: Links can only be configured to use RDC in single collector to shadow server mode.

2.2 RDC Configuration Overview

RDI/Link Configuration Requirements
The ability to implement RDC is provided for all interfaces supported by PHD. Each interface must be configured to act in collector or shadow operation. This behavior is available in the common base interface definition structures used by all RDIs and Links.

You use the PHD Configuration Tool (previously named TPI) to enter the SQL Server data:
∙ Parameters must exist in the RDI Types configuration form.
∙ Interfaces must exist in the Interfaces (RDI's & Links) configuration form for both collector and shadow nodes.
∙ An entry must exist in the Robust Data Collection configuration form to provide the port numbers for each interface that is to participate in an RDC scheme.

The RDC configuration form contains a graphic diagram similar to the following figure for entry of the RDC data. You use the form to identify the behavior for collector and shadow functionality.
The form contains fields for each machine that is to run a copy of the RDI in an RDC scheme. An RDC scheme uses either two or three machines, depending on the functionality required by the site:
∙ SHADOW
∙ ACTIVE
∙ STANDBY (only required for dual collector systems)

Figure 1 – RDC Configuration Form – Single Collector Example

RDC Port Number Usage
PHD 150 and prior releases had the Shadow node query (or poll) the collector node for historical and current data. PHD 201 and later releases use a push technology for transferring data from the collector node to the shadow node.

Communication from the collector computer to the shadow computer requires the configuration and use of unique TCP/IP port addresses between the computers. The ports used must not be used by any other applications resident on or accessing the computers. The receiving-end computer (which, for simple configurations, is the shadow computer) listens on the identified port. The sending-end computer (which, for simple configurations, is the collector computer) transmits data to the identified port. All subsequent communications for the interface between the affected computers use this identified port.

Even during a period of History Recovery, the collector node continues to transmit current tag values to the shadow node. The shadow accumulates these values and processes them only after completing the processing of all historical values associated with the interface on the collector node.

RDI Setup Utility
After entering the SQL Server data on the systems, you run the RDISetup utility (the replacement for the RDI_Services program) on each PHD Server to automatically configure the required SET commands in the Interfaces.Dat file.
The utility may also have to copy the RDI DLL file.

Interface.Dat File Requirements
After running RDISetup, the Interfaces.Dat file contains the required RDC SET commands.

Interfaces_CustomConfig.Dat File
Prior to Uniformance 210, the RDC SET commands were manually entered by the user into the Interfaces_CustomConfig.Dat file. On pre-210 systems, manual modifications to the Interfaces_CustomConfig.Dat file were required for any environment where RDC configuration was employed, because RDI_Services did not provide the extended configuration requirements for RDC configuration. On 210 and greater systems, the Interfaces_CustomConfig.Dat file is used only for custom interface configurations (for example, the WATCHDOG_TIMER parameter), and RDI_Services is replaced by the RDISetup utility.

Time Synchronization Requirements
Honeywell highly recommends time synchronization across nodes if Robust Data Collection is being implemented.

REFERENCE: Refer to Microsoft documentation for information about Windows time synchronization mechanisms.

2.3 Single Collector Node to Shadow Configuration
In the example form shown in Figure 1, the shadow computer is being told to listen on port 54200 for incoming communication, and the collector computer is being told to send communications to port 54200 on the shadow computer.
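Conceptually, this arrangement is an ordinary TCP push: the shadow listens on the agreed port, and the collector connects to that port and transmits values. The following sketch (a minimal illustration only, with hypothetical tag and value formats; PHD's actual wire protocol is proprietary and not shown) demonstrates the listener/sender pattern:

```python
import socket
import threading

def shadow_listener(state, received):
    # Shadow side: listen on the identified port and accept pushed values.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A real deployment uses the fixed port from the RDC form (e.g. 54200);
    # here the OS picks a free port so the sketch runs anywhere.
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["event"].set()
    conn, _ = srv.accept()
    with conn:
        received.append(conn.recv(1024).decode())
    srv.close()

def collector_push(port, value):
    # Collector side: connect to the shadow's identified port and transmit.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(value.encode())

state = {"event": threading.Event()}
received = []
t = threading.Thread(target=shadow_listener, args=(state, received))
t.start()
state["event"].wait()
collector_push(state["port"], "MYTAG1=42.0")
t.join()
```

In a real RDC deployment the port is the fixed number entered on the RDC configuration form and must be reserved for that one interface; this is why each interface needs its own unique port on the shadow.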
For this configuration, the RDISetup utility inserts the following commands into the Interfaces.Dat file:

On the Shadow computer:
SET MYRDI1:MODE SHADOW
SET MYRDI1:ACTIVENODE COLLECTR1/54200

On the Collector computer:
SET MYRDI1:MODE ACTIVE
SET MYRDI1:ACTIVENODE MYSHADOW/54200

For each additional configured interface using RDC, the user must add another port on the shadow and collector computers.

Note: Honeywell recommends that when configuring single collector/shadow configurations, the user increment the port number on the shadow computer by two for each RDI:
SET MYRDI1:ACTIVENODE COLLECTR1/54200
SET MYRDI2:ACTIVENODE COLLECTR1/54202
SET MYRDI3:ACTIVENODE COLLECTR2/54204

2.4 Dual Collector Node to Shadow Configuration
In the example in Figure 2, the user added a second collector (standby) computer to the collection environment, and has set up the shadow computer to listen on port 54201 for data from the standby computer COLLECTR1B. For this configuration, RDISetup will insert the following commands into the Interfaces.Dat file:

On the Shadow computer:
SET MYRDI1:ACTIVENODE COLLECTR1/54200
SET MYRDI1:STANDBYNODE COLLECTR1B/54201

Figure 2 – RDC Configuration Form – Dual Collector Example

The steps identified must be followed for each and every interface running on the collector computer for which the collected data is to be replicated to the shadow computer. For each interface running on any computer, a unique port must be used so that, should two separate collectors replicate collected data to the same shadow computer, each interface on the shadow has a unique port.

The forms in Figure 3 set up the following configuration for the shadow computer:
∙ Listen on port 54203 for data from standby computer COLLECTR1B for RDI MYRDI2.
∙ Listen on port 54205 for data from standby computer COLLECTR2B for RDI MYRDI3.

For this configuration, RDISetup inserts the following commands into the Interfaces.Dat file.

On the Shadow computer:
SET MYRDI2:ACTIVENODE COLLECTR1/54202
SET MYRDI2:STANDBYNODE COLLECTR1B/54203
SET MYRDI3:ACTIVENODE COLLECTR2/54204
SET MYRDI3:STANDBYNODE COLLECTR2B/54205

Figure 3 – RDC Configuration Forms – Additional Dual Collector Examples

The following figure summarizes the SET commands generated for RDIs in the previous examples.

Figure 4 – Dual Collector to Shadow Server – Topology

2.5 Summary of RDC Features
The following is a list of the main features of Robust Data Collection:
∙ Configuration of a Standby Server for redundancy.
∙ Automatic fail-over to the Standby Server upon failure of the Active Server.
∙ Manual fail-over to the Standby Server for software maintenance on the Active Server.
∙ Automatic data recovery by the Active Server from the Standby Server.
∙ Standard access to PHD data from the Standby Server while in Standby Mode.
∙ No dual collection of data from the source system by the Standby Server while in Standby Mode.
∙ Interface type independent.

2.6 Real-time System Description
The critical element is the ability of the source system to provide a lost-connectivity indication to the interface. In general, most interfaces rely on a connectivity state, which is maintained by the interface layer to the source system. The capabilities of the interface layer may vary. As an example, the File Access RDI does not connect directly to a source system and, therefore, does not have a connectivity state. Thus, it does not support the switchover capability.
Honeywell can provide further clarification upon request.

2.7 Required Software Components
In all cases, the collector computer uses an interface that communicates with the raw data provider (it is a source system collector node).
Example: RDILXS.dll, RDIOPC.dll, or PHDEXPInterface.dll
In all cases, the shadow computer uses either RDIShadow.dll or PHDEXPInterface.dll.

From R310 onward, RDIs can be reentrant; in some cases, however, an RDI may not be. If a non-reentrant RDI is being used, then each interface defined on a collection computer must have a unique name. It is recommended that this name be different from the supplied rdiXXX.dll name.

Example (where 'rdiname' is identical on both nodes):
RDIShadow.dll is copied to RDI<rdiname>.dll on the shadow node.
RDILXS.dll is copied to RDI<rdiname>.dll on the collector node.

Figure 5 – RDC Software Components Diagram

2.8 History Recovery

Data Collection During History Recovery
When using Robust Data Collection with active/standby collectors, history recovery occurs concurrently with data collection. During history recovery, the collected data is buffered and then applied to the archives once history recovery has completed. Continuation of real-time data collection during history recovery alleviates the problem of data gaps.

The RDI Server buffers the real-time data that is collected while history recovery is in progress. The data is buffered until the history recovery is complete. Upon completion, the real-time data collected during history recovery is transferred to the PHD archives.
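This buffer-then-apply behaviour can be pictured with a short sketch. The class and field names below are illustrative only; the RDI Server's internal buffering is not exposed by the product:

```python
import heapq

class RecoveryBuffer:
    """Sketch of buffering live values while history recovery runs,
    then applying everything to the archive in timestamp order."""
    def __init__(self):
        self.recovering = True
        self.pending = []   # live values held back during recovery (min-heap by timestamp)
        self.archive = []   # stand-in for the PHD archives

    def store(self, timestamp, tag, value):
        if self.recovering:
            heapq.heappush(self.pending, (timestamp, tag, value))
        else:
            self.archive.append((timestamp, tag, value))

    def finish_recovery(self, recovered_history):
        # Historical values covering the outage are written first ...
        for item in sorted(recovered_history):
            self.archive.append(item)
        # ... then the buffered live values, in time-sequenced order.
        while self.pending:
            self.archive.append(heapq.heappop(self.pending))
        self.recovering = False

buf = RecoveryBuffer()
buf.store(105, "MYTAG1", 1.0)                 # live value during recovery: buffered
buf.store(110, "MYTAG1", 2.0)
buf.finish_recovery([(100, "MYTAG1", 0.5)])   # outage data recovered from the source
buf.store(120, "MYTAG1", 3.0)                 # after recovery: written directly
```

The archive ends up ordered by timestamp (100, 105, 110, 120) even though the recovered value arrived last, which is the essential property the RDI Server preserves.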
When the interface retrieves real-time data during a history recovery, it records the data in the PHD archives in the correct time-sequenced order.

History Recovery for RDC Scheme without a Standby Collector
With 210 and later (and 201.1.6 or later and 202.1.2 or later), history recovery occurs from the source system through the Collector node to the Shadow server, even if a secondary (Standby) collector is NOT configured in RDC.

In previous releases under RDC, history recovery occurred from the secondary collector only. When RDC history recovery was required in previous releases with a single collector, a Gateway RDI on the shadow was used with the Remote Peer RDI on the collector.

With release 310, history recovery may occur from the source system through the Buffer to the Shadow server, even if a secondary (Standby) collector is not configured in Robust Data Collection. In RDC, you can implement a shadow server and a single collector as a PHD-to-PHD type of interface: a shadow interface runs on the Shadow RDC node and a history-recoverable interface runs on the buffer node.

Duration of History Recovery
For systems running PHD 200 and later software, the interfaces on the Shadow RDC node automatically perform full history recovery (all collected history is recovered). The default configuration may be modified if the site requires history recovery to be dependent on the amount of data being collected by the collector node or the duration of an outage. If full history recovery is not a site requirement, then the parameters MIN_HISTRECMN and MAX_HISTRECMN (with appropriate settings) must be configured. For instructions on adding these parameters, refer to the section History Recovery on RDC Nodes.

RDC Caching during History Recovery
During history recovery, simultaneously collected data is buffered. By default the values are buffered in memory. On a system with a large number of tags or interfaces, history recovery could take a long time.
The longer history recovery takes, the more collected values need to be buffered. To minimize the amount of memory required by the RDI Server to buffer values, a mechanism exists to cache the values to a disk file; memory overhead is thus reduced and larger systems are supported. This requires a modification to the registry. For instructions on modifying the registry, refer to section 7.1, Enable RDC Disk Caching (Optional).

3. Robust Data Collection - Dual Collector Mode

3.1 Dual Collector RDC Architecture
Dual collector RDC mode is only available for RDIs. Links do not support this mode of operation. The following figure illustrates the architecture for RDI support of the fail-over capability.

Attention: The concept of 'Active' and 'Standby' is RDI-based, not node-based. A computer node can host both Active and Standby RDIs; this is beneficial for load balancing. For simplicity, the following examples segregate Active and Standby RDIs on separate computer nodes.

On the Active system, the RDIs collect data directly from the source system. They send the data to the Active PHD Server and to the partner RDIs on the Standby Server. The RDIs on the Standby Server initialize, but do not communicate with the source system for the tag values. They rely on the RDIs on the Active system for the tag values. The values are pushed to the Standby Server so that the values are always up to date.

Figure 6 – Dual Collector Fail-over Architecture

If the Active system goes down (loss of network connectivity or RDI shutdown), the Standby RDIs begin collecting data from the source system and provide these values to the Standby PHD Server. The Standby continues to collect these values until the Active system comes back up and begins collecting.

When the Active system comes back up, it initiates a history recovery from the Standby Server and simultaneously begins collecting from the source system.
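This take-over and recovery sequence can be condensed into a minimal model. The names here are hypothetical; the real logic is internal to the RDI and the PHD Server:

```python
class CollectorPair:
    """Minimal model of active/standby behaviour: the standby collects from
    the source only while the active is down, and the returning active
    performs a history recovery from the standby before resuming normally."""
    def __init__(self):
        self.active_up = True
        self.standby_collecting = False
        self.history_recoveries = 0

    def active_fails(self):
        # Standby detects the failure and starts collecting from the source.
        self.active_up = False
        self.standby_collecting = True

    def active_returns(self):
        # Active recovers history from the standby and resumes collection;
        # the standby stops talking to the source system (no dual collection).
        self.history_recoveries += 1
        self.active_up = True
        self.standby_collecting = False

pair = CollectorPair()
pair.active_fails()       # standby takes over
pair.active_returns()     # active recovers history, standby stands down
```

The key invariant the sketch captures is that at most one of the pair collects from the source system at any time, which is what prevents duplicate collection.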
See the Active RDI Flow in Appendix B – RDC Flow Charts.

Similarly, when a Standby system has been down and comes back up, it does a history recovery and begins receiving real-time values from the Active system. The Standby always has a full history of the data collected on the Active Server. When a fail-over condition occurs, any application that redirects to the Standby has access to the history data in addition to the real-time data. See the Standby RDI Flow in Appendix B – RDC Flow Charts.

The mechanism for transferring data from the Active to the Standby RDIs is also used to transfer data between the collector and shadow systems (see the following figure). Upon Active collector fail-over, RDIs on the Standby system send the real-time values directly to the shadow server.

Figure 7 – Dual Collector to Shadow Server RDC Architecture

When a Shadow Server is receiving from a collector system consisting of an Active/Standby pair, the Shadow Server is able to receive data from the Standby system on failure of the Active Server. See the Shadow RDI Flow in Appendix B – RDC Flow Charts.

3.2 Fail-over Functionality
The Standby system
∙ provides data storage,
∙ provides data collection,
∙ provides data transfer to a shadow, and
∙ is accessible to client applications.

The fail-over functionality is available on all source systems supported by PHD.
Automatic Fail-over
If the Active server is no longer in an ACTIVE state, or if network communication is lost, the Standby server takes over collection of data from the source system and continues the collection/storage of this data in PHD. For automatic fail-over to occur, there must be at least one collected tag on the RDI (with a frequency as slow as one minute) to ensure that RDC is aware that the RDI on the primary went down.

Automatic Failure Detection by Standby System
RDC provides the ability for the Standby system to detect the failure of the Active RDI collection system and automatically begin the collection of data. Fail-over occurs only when
∙ the primary RDI is down or inaccessible, and
∙ the watchdog tag test has failed (if configured).

Note: External errors such as node isolation, DCS errors, or bad data will not trigger an RDC fail-over unless they cause the RDI to fail.

Optional Watchdog Tag
A watchdog tag is an optional feature of RDC, intended to protect the user from duplicate data collection between the primary and standby servers in case of a network communication failure. During a network failure, the standby PHD server checks the state of the watchdog tag to see if the primary is continuing to operate normally. If the watchdog tag is being updated, the standby will not start collection, thus avoiding duplicate data collection.

Note: A watchdog tag failure on its own will not cause RDC to fail over. An RDC fail-over occurs only when
∙ the primary RDI is not ACTIVE, or
∙ the primary RDI cannot be reached over the network, and
∙ the watchdog tag is not current.
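Taken together, the conditions above amount to a single decision rule. The following is a hypothetical restatement of that rule, not product code; when no watchdog tag is configured, treat the watchdog as "not current" so that primary failure alone triggers fail-over:

```python
def should_fail_over(primary_rdi_active, primary_reachable, watchdog_current):
    """Standby takes over only if the primary RDI is not ACTIVE or cannot be
    reached over the network, AND the watchdog tag is not current.
    Pass watchdog_current=False when no watchdog tag is configured."""
    primary_failed = (not primary_rdi_active) or (not primary_reachable)
    return primary_failed and not watchdog_current

# Network outage, but the watchdog tag is still being updated: no fail-over,
# which is exactly the duplicate-collection protection the watchdog provides.
assert should_fail_over(True, False, True) is False
# Primary RDI down and watchdog stale: the standby takes over.
assert should_fail_over(False, True, False) is True
```

Note in particular that a stale watchdog by itself does not trigger fail-over: with a healthy, reachable primary the function returns False regardless of the watchdog state.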