TWS - Admin

BGOUTHAMSABARIES · 141 slides · Aug 02, 2024

Slide Content

IBM Workload Scheduler v9.2 Overview

Introduction IBM Workload Automation is a portfolio of products provided by IBM to automate all workload management tasks. The scheduling features of IBM Workload Automation help you plan every phase of your workload production.

IBM Workload Scheduler – Architecture

IBM Workload Scheduler – Architecture An IBM Workload Scheduler network consists of a set of linked workstations on which you perform job processing. A network is composed of one or more domains, each having a domain manager workstation acting as a management hub, and one or more agent workstations.

Architecture Master domain manager: A workstation acting as the management hub for the network. It manages all your scheduling objects. Agent: The computer system where you run your tasks (jobs). DWC: A Web-based user interface available for viewing and controlling scheduling activities in production in IWS.

Backup Master domain manager: A workstation that can act as a backup for the master domain manager when problems occur. It is effectively a master domain manager, waiting to be activated. Architecture

Domain Manager: The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager. All workstations in a distributed Tivoli Workload Scheduler network are organized in one or more domains. Architecture

IWS Components - Connections

Port A port number is a way to identify a specific process to which an Internet or other network message is to be forwarded when it arrives at a server. When a service (server program) is initially started, it is said to bind to its designated port number. When a client program wants to use that server, it must also request to bind to the designated port number.

IWS Ports overview

IWS Ports

IWS WAS Ports

IWS Processes The communication management in the IWS workload scheduler is performed on each workstation by the processes below: the Network Management process (netman), the Event Management process (monman), a pass-through between netman and mailman (writer), the Mail Management process (mailman), the Production Management process (batchman), and the Job Management process (jobman).
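
As a quick health check on a UNIX workstation, these engine processes can be listed from the shell (an illustrative sketch only; the process names are the standard IWS ones, while paths and the owning user vary by installation):

ps -ef | grep -E 'netman|mailman|batchman|jobman|writer|monman' | grep -v grep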

IBM Workload Scheduler uses the TCP/IP protocol for network communication. The node name and the port number used to establish the TCP/IP connection are set for each workstation in its workstation definition. A store-and-forward technology is used by IBM Workload Scheduler to maintain consistency and fault tolerance at all times across the network by queuing messages in message files while the connection is not active. When TCP/IP communication is established between systems, IBM Workload Scheduler provides bi-directional communication between workstations using links. IWS Communication

Connection Initialization – One Way

Connection Initialization – Two Way

Message Files for inter-process communication IWS uses message queues for local inter-process communication. These files have a '.msg' extension.

Job Processing

Processes Explored The management of communication between workstations and local job processing, together with the notification of state updates, is performed on each IBM Workload Scheduler workstation by a series of management processes that are active while the engine is running.

Processes Explored

Processes Explored

Processes Tree

Process Tree – Windows / UNIX On Windows platforms there is an additional service, the Tivoli Token Service, which enables IBM Workload Scheduler processes to be launched as if they were issued by the IBM Workload Scheduler user.

Workstation - Types The computer system where you run your jobs and job streams is called a workstation . When you define a job or job stream in the IBM Workload Scheduler database you identify the workstation definitions for the physical or virtual computer systems on which your job is scheduled to run.

Workstation - Types

Workstations - MDM A workstation acting as the management hub for the network. The master domain manager is the highest-level workstation of an IBM Workload Scheduler network. It contains or connects to the relational database that stores scheduling object definitions. It manages all your scheduling objects. It also creates or updates a production file when the plan is created or extended and then distributes the file to the network. It performs all logging and reporting for the network. It can perform the role of event processing server for the event-driven workload automation feature. The master domain manager workstation must be installed with this role.

Workstations - BMDM A workstation that can act as a backup for the master domain manager when problems occur. It is a master domain manager, waiting to be activated. Define a backup master domain manager at installation to point to either the database being used by the master domain manager or to a mirror of that database. In this way the backup master domain manager has the latest data available to it at all times.

Workstations – Domain Manager A workstation that controls a domain and that shares management responsibilities for part of the IBM Workload Scheduler network. Install this component if you need a multi-domain network and you want to manage workload by assigning it to a predefined workstation that is to run your workload statically. To define a domain manager , install a fault-tolerant agent on your workstation and then define it as  manager  in the workstation definition.

Workstations – Dynamic Domain Manager Install this component if you need a multi-domain network and you want to manage your workload both statically and dynamically. All domains below the master domain have dynamic domain managers to manage the workstations in their domain. When you install a dynamic domain manager the workstation types listed below are created in the database: fta (fault-tolerant agent component manually configured as domain manager), broker (broker server component), agent (dynamic agent component).

Workstations – Backup DDM A workstation which can act as a backup for the dynamic domain manager, when problems occur. It is effectively a dynamic domain manager, waiting to be activated. When you install a backup dynamic domain manager the workstation types listed below are created in the database: fta (fault-tolerant agent component manually configured as domain manager), broker (broker server component), agent (dynamic agent component).

Workstations – Standard Agent A workstation that receives and runs jobs only under the control of its domain manager. It is installed as an agent, and then configured as a standard agent workstation when you define the workstation in the database.

Workstations – Fault Tolerant Agent A workstation that receives and runs jobs. If there are communication problems with its domain manager, it can run jobs locally. It is installed as an agent, and then configured as a fault-tolerant agent workstation when you define the workstation in the database. This workstation is recorded in the IBM Workload Scheduler database as  fta .

Workstations – Extended Agent Extended agents are used to extend the job scheduling functions of IBM Workload Scheduler to other systems and applications. A workstation that has a host and an access method. The host is any other workstation, except another extended agent. The access method is an IBM-supplied or user-supplied script or program that is run by the host whenever the extended agent is referenced in the production plan. It must be physically hosted by a fault-tolerant agent (up to 255 extended agents per fault-tolerant agent) and then defined as an extended agent in the database. Extended Agent: an agent configured to run third-party application jobs such as SAP R/3, Oracle, and PeopleSoft.

Workstations – Workload Broker A workstation that manages the lifecycle of Workload Broker jobs in Workload Broker. It is installed and configured as a dynamic workload broker workstation in the database. Workload Broker: a workstation that runs both existing job types such as UNIX/Windows jobs and advanced job types.

Workstations – Dynamic Agent A workstation that manages a wide variety of job types, for example, specific database or FTP jobs, in addition to existing job types. This workstation is automatically created and registered when you install the dynamic agent. Because the installation and registration processes are performed automatically, when you view the agent in the Dynamic Workload Console it appears as updated by the Resource Advisor Agent. Dynamic agents connect directly to a master domain manager or to a dynamic domain manager. In the database, the workstation is recorded as type agent.

Workstations – Pool A workstation grouping a set of dynamic agents with similar hardware or software characteristics to submit jobs to. IBM Workload Scheduler balances the jobs among the dynamic agents within the pool and automatically reassigns jobs to available dynamic agents if an agent is no longer available. To create a pool of dynamic agents in your IBM Workload Scheduler environment, define a workstation of type pool hosted by the workload broker workstation, then select the dynamic agents you want to add to the pool. A computer system group is automatically defined in the workload broker database together with its associated dynamic agents. Pool: a logical workstation that groups a set of agents with similar hardware or software characteristics to run jobs on. It is hosted by the workload broker.

Workstations – Dynamic Pool A workstation grouping a set of dynamic agents that is dynamically defined based on the resource requirements you specify. This workstation maps all the dynamic agents in your environment that meet the requirements you specified. The resulting pool is dynamically updated whenever a new suitable agent becomes available.  Dynamic Pool A logical workstation that groups a set of agents, dynamically defined based on resource requirements.

IBM Workload Scheduler – Installations

IBM Workload Scheduler – Release Notes IBM Workload Scheduler Release Notes for v9.2: http://www-01.ibm.com/support/docview.wss?uid=swg27041032 The Release Notes for Tivoli Workload Scheduler, version 9.2 contain the following topics: interoperability tables; fix packs; installation limitations and problems, and their workarounds; software limitations and workarounds; APARs fixed in this release (APAR: Authorized Program Analysis Report; problems and their fixes in the release); documentation updates. To download the appropriate package for your operating system, see the Tivoli Workload Scheduler download page. For detailed system requirements for all operating systems, see the Detailed System Requirements page. A complete list of new or changed functions in this release is documented in the "Overview -> Summary of enhancements" section of the information center at the following link: Summary of enhancements.

IBM Workload Scheduler – Licensing IBM licenses software for use either for a fixed term or indefinitely (depending on the type of license obtained), and as long as the Licensee complies with the terms of the license agreement. Indefinite use of software - "License + SW Subscription and Support 12 Months " The "License + Software Subscription and Support 12 Months" license grants the right to: indefinitely use the specific version/release of the software obtained. receive Software Subscription and Support (previously referred to as Software Maintenance) for a period of 12 months. While in effect, Software Subscription and Support authorizes Licensee to use the most current commercially available version, release, or update, should any be made available as well as receive support for the Program. Fixed Term license of software - "Initial Fixed Term License + SW Subscription and Support" The "Initial Fixed Term License + Software Subscription and Support" term license grants the right to: use the software on premise for a limited period only (most commonly of 12 months) receive Software Subscription and Support (previously referred to as Software Maintenance) for the period of the term. While in effect, Software Subscription and Support authorizes Licensee to use the most current commercially available version, release, or update, should any be made available as well as receive support for the Program. At the end of each fixed term (most commonly 12 months), the license may be renewed for an additional 12 month fixed term (at the prevailing price)

IBM Workload Scheduler – Licensing TWS licensing can be performed (based on the needs) using PVU (Processor Value Unit) licensing or using per-jobs licensing. 1. PVU (Processor Value Unit) licensing: a unit of measure by which the program can be licensed. The number of PVU entitlements required is based on the processor technology defined by processor value, brand, type, and model number; in other words, it is based on the CPU cores available to the physical or virtual machine (server) where TWS is deployed and runs jobs. A detailed list can be found on the IBM site in the PVU table. PVU = no. of sockets (chips) x no. of processor cores per socket x PVU per core, as per the PVU table. 2. Per-jobs licensing: in other words, you license the number of jobs run (a.k.a. the 10 Monthly Jobs license). E.g. number of daily executed jobs: 500; number of monthly executed jobs (multiply by 31): 15,500; number of groups of 10 jobs: 1,550 (divide by 10, as the part is per 10 Monthly Jobs). List price of IBM Workload Automation for 10 jobs equals X; the list price is calculated as 1,550 job packs of 10 jobs multiplied by X.
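
As an illustration of the PVU formula above (the socket and core counts and the 70 PVU-per-core rating are hypothetical; always use the official IBM PVU table for real sizing):

PVU entitlements = 2 sockets x 8 cores per socket x 70 PVU per core = 1,120 PVU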

Core, Socket, CPU

IBM Workload Scheduler – Download Software To download the appropriate package for your operating system, see the  Tivoli ®  Workload Scheduler download page . Select the required Part Number to download.

IBM Workload Scheduler – Installation methods The methods to install a master domain manager or its backup, or a dynamic domain manager or its backup: Launchpad: the launchpad automatically accesses and runs the related installation setup file in interactive mode. The installation from the launchpad can be driven and simplified according to the deployment model you chose. Installation Wizard: the Installation Wizard guides you through the installation steps. Silent Installation: installation is done using a customized file called a response file. The installation script uses the given response file, which contains the configuration settings.

IBM Workload Scheduler – TWA Instances TWS TDWC

IBM Workload Scheduler – TWA Instances

IBM Workload Scheduler – Installation methods The method to install an agent is the twsinst script.
Show command usage and version:
twsinst -u | -v
Install a new instance:
twsinst -new -uname username -password user_password -acceptlicense yes|no [-addjruntime true|false] [-agent dynamic|fta|both] [-company company_name] [-displayname agentname] [-domain user_domain] [-gateway local|remote|none] [-gwid gateway_id] [-hostname host_name] [-inst_dir install_dir] [-jmport port_number] [-jmportssl true|false] [-lang lang_id] [-master master_cpu_name] [-port port_number] [-skip_usercheck] [-stoponcheckprereq] [-tdwbhostname host_name] [-tdwbport tdwbport_number] [-thiscpu workstation] [-work_dir working_dir]
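
A hedged example of installing a dynamic agent with twsinst (every value below, such as the user, password, host names and ports, is a placeholder, not a recommendation):

./twsinst -new -uname twsuser -password MyPassw0rd -acceptlicense yes -agent dynamic -hostname agent01.example.com -master MDMCPU -port 31111 -tdwbhostname mdm.example.com -tdwbport 31116 -inst_dir /opt/IBM/TWA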

IBM Workload Scheduler – Agent Path The agent also uses the same default path structure, but has its own separate installation directory: TWA_home/TWS/ITA/cpa. Note: the agent also installs some files outside this path. If you have to share, map, or copy the agent files (for example when configuring support for clustering), share, map, or copy these files as well. UNIX and Linux operating systems: /etc/teb/teb_tws_cpa_agent_<TWS_user>.ini, /opt/IBM/CAP/EMICPA_default.xml, /etc/init.d/tebctl-tws_cpa_agent_<TWS_user> (on Linux and Solaris), /etc/rc.d/init.d/tebctl-tws_cpa_agent_<TWS_user> (on AIX), /sbin/init.d/tebctl-tws_cpa_agent_<TWS_user> (on HP-UX). Windows operating systems: %windir%\teb\teb_tws_cpa_agent_<tws_user>.ini

IBM Workload Scheduler – Agent Path The agent uses the following configuration files, which you might need to modify. JobManager.ini: this file contains the parameters that tell the agent how to run jobs. You should only change the parameters if advised to do so in the Tivoli Workload Scheduler documentation or requested to do so by IBM Software Support. Its path is TWA_home/TWS/ITA/cpa/config/JobManager.ini. JobManagerGW.ini: when a dynamic agent is installed and -gateway local|remote is specified, this file contains the same parameters as the JobManager.ini file except for the following difference: the ResourceAdvisorUrl parameter points to the dynamic workload broker, and not the master domain manager. The JobManagerGW.ini file is installed in TWA_home/TWS/ITA/cpa/config/JobManagerGW.ini. ita.ini: this file contains parameters which determine how the agent behaves.

IBM Workload Scheduler – TDWB Path The files that give the dynamic scheduling capability are installed in the following path: TWA_home/TDWB

IBM Workload Scheduler – TDWC Path The Dynamic Workload Console can be installed in the path of your choice, but the default installation path is as follows: On Windows C:\Program Files\IBM\TWAUI On UNIX /opt/IBM/TWAUI

IBM Workload Scheduler – Websphere Application Server Path The WebSphere Application Server is automatically installed when you create a new Tivoli Workload Automation instance. You can specify any path for the installation. The default installation path is TWA_home/WAS. For the Dynamic Workload Console: C:\Program Files\IBM\TWA\WAS or /opt/IBM/TWA/WebSphere

IBM Workload Scheduler – CLI Path The command line client is installed outside all Tivoli Workload Automation instances. Its default path is: UNIX /opt/ibm/TWS/CLI, Windows C:\Program Files\IBM\TWS\CLI

IBM Workload Scheduler – TWS Registry The list of components installed in a Tivoli Workload Automation instance: UNIX /etc/TWA, Windows %windir%\TWA. Each Tivoli Workload Automation instance is represented by a file called twainstance<instance_number>.TWA.properties. Attention: do not edit the contents of this file.

IBM Workload Scheduler – TWS Registry

DB Schema

DB Schema

IBM Workload Scheduler – Configuring

Configuring MDM After installing the MDM: configure the CPU definition; configure the MASTERDM domain definition; configure the FINAL & FINALPOSTREPORTS job streams; set the global options; set the local & user options; configure user authentication to allow user authorization on actions & objects; set connection security to enable SSL for inter-component communication; initiate JnextPlan.

Configuring MDM – CPU Definition Add the CPU definition to the TWS DB using the composer utility. Example:
CPUNAME CORETWSMDM
  DESCRIPTION "MASTER CPU tws92"
  OS UNIX
  NODE nmetcvtws0002
  TCPADDR 31111
  DOMAIN MASTERDM
  FOR MAESTRO
    TYPE MANAGER
    AUTOLINK ON
    BEHINDFIREWALL OFF
    FULLSTATUS ON
END
Workstation Definition
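
Assuming the definition above is saved in a plain text file (the file name mdm_cpu.txt is just a placeholder), it can be loaded and then verified with composer, for example:

composer add mdm_cpu.txt
composer display cpu=CORETWSMDM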

Configuring MDM – MASTERDM Domain Definition A domain is a group of workstations consisting of one or more agents and a domain manager. The domain manager acts as the management hub for the agents in the domain. Definition:
domain domainname
  [description "description"]
  * manager workstation
  [parent domainname | ismaster]
end
Domain Definition

Configuring MDM – Setting Up Environment Variables On Windows operating systems, run the tws_env.cmd shell script to set up both the PATH and TWS_TISDIR variables. For example, if Tivoli Workload Scheduler is installed in the %ProgramFiles%\IBM\TWA\TWS directory, the PATH variable is set as follows: c:\Program Files\IBM\TWA\TWS;c:\Program Files\IBM\TWA\TWS\bin. On UNIX and Linux operating systems, source the tws_env shell script to set up both the PATH and TWS_TISDIR variables. For example, if Tivoli Workload Scheduler is installed in the default directory /opt/IBM/TWA/TWS, tws_env.sh sets the variables as follows: PATH=/opt/IBM/TWA/TWS:/opt/IBM/TWA/TWS/bin:$PATH; export PATH; TWS_TISDIR=/opt/IBM/TWA/TWS; export TWS_TISDIR. The tws_env script has two versions: tws_env.sh for Bourne and Korn shell environments, and tws_env.csh for C shell environments.
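
For example, assuming the default UNIX installation path shown above, a shell session can be prepared before running conman or composer with one of the following (pick the version matching your shell):

. /opt/IBM/TWA/TWS/tws_env.sh        (Bourne/Korn shell)
source /opt/IBM/TWA/TWS/tws_env.csh  (C shell)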

Configuring MDM – Setting Up Global Options [optman] optman manages the Tivoli Workload Scheduler global options. You can list, show and change them.
optman [-u | -v]
optman [<connectionParams>] chg {<option> | <shortName>} = <value>
optman [<connectionParams>] ls
optman [<connectionParams>] show {<option> | <shortName>}
global options
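
Typical optman calls, run as the <TWS_user> on the master domain manager (the option name enDbAudit comes from the audit settings described later in this deck):

optman ls
optman show enDbAudit
optman chg enDbAudit=1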

Configuring MDM – Setting Up Local Options [localopts file] Set local options in the localopts file. Changes do not take effect until netman is stopped (conman "shut;wait") and restarted (StartUp). A template file containing default settings is located in TWA_home/TWS/config/localopts. During the installation process, a working copy of the local options file is installed as TWA_home/TWS/localopts. localopts file

Configuring MDM – Setting Up User Options [useropts file] Set the user options you require for each user on a workstation who needs them in the useropts file. The concept of the useropts file is to contain values for localopts parameters that must be personalized for an individual user. The files must be located in the user_home/.TWS directory of the user. When Tivoli Workload Scheduler needs to access data from the localopts file, it looks first to see if the property it requires is stored only or also in the useropts file for the user, always preferring the useropts file version of the value of the key. If a property is not specified when invoking the command that requires it, or inside the useropts and localopts files, an error is displayed. The main use of the useropts file is to store the user-specific connection parameters used to access the command line client. These are the following keys, which are not stored in the localopts file: Username: user name used to access the master domain manager. The user must be defined in the security file on the master domain manager (see Configuring user authorization (Security file)). Password: password used to access the master domain manager. The presence of the ENCRYPT label in the password field indicates that the specified setting has been encrypted; if this label is not present, you must exit and access the interface program again to allow the encryption of that field. A useropts file is created for the <TWS_user> during the installation, but you must create a separate file for each user that needs to use user-specific parameters on a workstation. This useropts file can be updated with user credentials using the 'composer au' command.

Configuring MDM – Setting Up Security [security file] The way IBM Workload Scheduler manages security is controlled by a configuration file named the security file. This file controls activities such as: linking workstations; accessing command-line interface programs and the Dynamic Workload Console; performing operations on scheduling objects in the database or in the plan. In the file you specify, for each user, what scheduling objects the user is allowed to access, and what actions the user is allowed to perform on those objects. A template file named TWA_home/TWS/config/Security.conf is provided with the product. During installation, a copy of the template file is installed as TWA_home/TWS/Security.conf, and a compiled, operational copy is installed as TWA_home/TWS/Security.

Configuring MDM – Setting Up Security [security file] To modify the security file, perform the following steps: 1. Navigate to the TWA_home/TWS directory, from where the dumpsec and makesec commands must be run. 2. Run the dumpsec command to decrypt the current security file into an editable configuration file: dumpsec > outputfileName. 3. Modify the contents of the editable security configuration file. 4. Run the makesec command to encrypt the security file and apply the modifications: makesec filename. Security File Template
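
Putting the steps together, a minimal editing session looks like the sketch below (mysec.txt is a placeholder file name):

cd <TWA_home>/TWS
dumpsec > mysec.txt
vi mysec.txt          (edit the access rules)
makesec mysec.txt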

Configuring MDM – Setting Up Security[security file]

Configuring MDM – Setting Up Jnext The FINAL job stream is placed in production every day and runs JnextPlan before the start of a new day. The FINALPOSTREPORTS job stream, responsible for printing post-production reports, follows the FINAL job stream and starts only when the last job listed in the FINAL job stream (SWITCHPLAN) is completed successfully. The installation creates the <TWS_INST_DIR>\TWS\Sfinal file that contains the FINAL and FINALPOSTREPORTS job stream definitions. 1. Log in as <TWS_user> or as administrator. 2. Set the environment variables. 3. Add the FINAL and FINALPOSTREPORTS job stream definitions to the database by running the following command from the /opt/IBM/TWA/TWS directory using composer: composer add Sfinal. 4. Run JnextPlan from the <twshome>/TWS directory. This generates the production plan.

Configuring MDM – Audit An auditing option is available to track changes to the database and the plan. Database auditing: you can track changes to the database in a file, in the database itself, or in both. All user modifications are logged, including the current definition of each modified database object. If an object is opened and saved, the action is logged even if no modification has been done. Plan auditing: you can track changes to the plan in a file. All user modifications to the plan are logged. Actions are logged whether they are successful or not. Each audit log provides audit information for one day, from 00:00:00 UTC to 23:59:59 UTC, regardless of the time zone of the local workstation, but the log file is only created when an action is performed or the WebSphere Application Server is started. The files are called yyyymmdd, and are created in the following directories: <TWA_home>/TWS/audit/plan and <TWA_home>/TWS/audit/database

Configuring MDM – Enabling Audit The auditing option is enabled by setting the following two entries in the global options, using optman: enPlanAudit = 0|1 and enDbAudit = 0|1. A value of 1 (one) enables auditing and a value of 0 (zero) disables auditing. Auditing is disabled by default on installation of the product. To initiate database auditing, you must shut down Tivoli Workload Scheduler completely. When you restart Tivoli Workload Scheduler, the database audit log is initiated. Plan auditing takes effect when JnextPlan is run. The header record fields are separated by vertical bars (|), as follows: HEADER|<GMT_date>|<GMT_time>|<local_date>|<local_time>|<object_type>|<workstation>|<user_ID>|<version>|<level>. The log file entries are in the following format: <log_type>|<GMT_date>|<GMT_time>|<local_date>|<local_time>|<object_type>|<action_type>|<workstation>|<user_ID>|<object_name>|<action_data_fields>
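
For example, both audit types can be switched on with the commands below (a sketch only; as noted above, database auditing only starts after Tivoli Workload Scheduler is completely stopped and restarted, and plan auditing after the next JnextPlan):

optman chg enPlanAudit=1
optman chg enDbAudit=1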

IBM Workload Scheduler – Workstation Definitions A workstation is a scheduling object that runs jobs. It is usually an individual computer on which jobs and job streams are run. A workstation definition is required for every computer that runs jobs in the IBM Workload Scheduler network. Primarily, workstation definitions refer to physical workstations. However, in the case of extended agents, the workstations are logical definitions that must be hosted by a physical workstation. If you are defining a workstation that manages the lifecycle of Tivoli Workload Scheduler Workload Broker type jobs in Tivoli Dynamic Workload Broker, it represents the Tivoli Dynamic Workload Broker Bridge component of the server that acts as a router for the workstation pool managed by Tivoli Dynamic Workload Broker. Workstation Definition Note: a Dynamic Agent workstation definition is added to the DB automatically if enAddWorkstation / aw = yes.

IBM Workload Scheduler – Workload Broker A workstation that runs both existing job types and job types with advanced options. It is the broker server installed with the master domain manager and the dynamic domain manager. It can host one or more of the following workstations: extended agent remote engine pool dynamic pool agent. This definition includes the following agents: dynamic agent Tivoli Workload Scheduler for z/OS agent agent for z/OS This workstation is recorded in the Tivoli Workload Scheduler database as  broker . Workstation Definition

IBM Workload Scheduler – Dynamic Agent Definition & Configuration A workstation that manages a wide variety of job types, for example, specific database or FTP jobs, in addition to existing job types. This workstation is automatically created and registered in the Tivoli Workload Scheduler database when you install the dynamic agent. The dynamic agent is hosted by the workload broker workstation. Because the installation and registration processes are performed automatically, when you view the dynamic agent in the Dynamic Workload Console it appears as updated by the Resource Advisor Agent. Dynamic agents can be grouped in pools and dynamic pools. This workstation is recorded in the Tivoli Workload Scheduler database as agent. Configuring Dynamic Agent Note: a Dynamic Agent workstation definition is added to the DB automatically if enAddWorkstation / aw = yes.

IBM Workload Scheduler – Pool Definition A logical workstation that groups a set of dynamic agents with similar hardware or software characteristics to submit jobs to. Tivoli Workload Scheduler balances the jobs among the dynamic agents within the pool and automatically reassigns jobs to available dynamic agents if a dynamic agent is no longer available. This workstation is recorded in the Tivoli Workload Scheduler database as  pool . Workstation Definition

IBM Workload Scheduler – Dynamic Pool Definition A logical workstation that groups a set of dynamic agents, which is dynamically defined based on the resource requirements you specify and hosted by the workload broker workstation. For example, if you require a workstation with low CPU usage and Windows installed to run your job, you specify these requirements using the Dynamic Workload Console or the composer command. When you save the set of requirements, a new workstation is automatically created in the Tivoli Workload Scheduler database. This workstation maps all the dynamic agents in your environment that meet the requirements you specified. The resulting pool is dynamically updated whenever a new suitable dynamic agent becomes available. Jobs scheduled on this workstation automatically inherit the requirements defined for the workstation. This workstation is hosted by the workload broker workstation and recorded in the Tivoli Workload Scheduler database as d-pool. Workstation Definition

Job Environment Overview On each workstation, jobs are launched by the batchman production control process. The batchman process resolves all job dependencies to ensure the correct order of job processing, and then queues a job launch message to the jobman process. Each of the processes launched by jobman, including the configuration scripts and the jobs, retains the user name recorded with the logon of the job. The jobman process starts a job monitor process that begins by setting a group of environment variables, and then runs a standard configuration script named TWS_home/jobmanrc which can be customized. The jobmanrc script sets variables that are used to configure, locally on the workstation, the way jobs are launched, regardless of the user.

.jobmanrc vs. jobmanrc On UNIX workstations, the local configuration script .jobmanrc permits users to establish a desired environment when processing their own jobs. On Windows workstations, the local configuration script djobmanrc.cmd is run if it exists in the user's Documents and Settings directory, which is represented by the environment variable %USERPROFILE% and depends on the Windows language installation. The djobmanrc.cmd script is run by the jobmanrc.cmd script. Unlike the jobmanrc script, the .jobmanrc script can be customized to perform different actions for different users. Each user defined as tws_user can customize the .jobmanrc script in the home directory to perform pre- and post-processing actions. jobmanrc is used to set the desired environment before each job is run. A standard configuration script template is provided as TWS_home/config/jobmanrc. The jobmanrc script sets variables that are used to configure, locally on the workstation, the way jobs are launched, regardless of the user.

Dynamic Workload Console The Dynamic Workload Console is a Web portal application. It can manage the workload on both z/OS and distributed systems at the same time. There is no need to install and maintain components on each user's PC: only a web browser is required to access the interface. It allows many concurrent users and up-to-the-minute real-time monitoring. As IWS 9.x is compliant with OSLC, DWC is integrated with Jazz for Service Management [JazzSM]. DASH is the UI for Jazz for Service Management. IBM Dashboard Application Services Hub provides a single console for administering IBM products and related applications. Open Services for Lifecycle Collaboration is an open community creating specifications for integrating tools; these specifications allow conforming independent software and product lifecycle tools to integrate their data and workflows in support of end-to-end lifecycle processes. Jazz for Service Management brings together the Open Services for Lifecycle Collaboration (OSLC) community's open specifications for linking data and other shared integration services, including a linked data registry service, administration services, dashboard service, reporting service, and security services. It underpins client-defined management scenarios such as cloud, performance monitoring, and IT service management. https://<dwchostname>:16311/ibm/console

Dynamic Workload Console – Architecture

Configuring DWC After Installing DWC, Configure Engine Connection. Configure User Repository Configure Users & Groups - Roles. Configuring High Availability Configuring SSO Configure SSL. DWC Global Settings

Configuring DWC Engine Connection After Installing DWC, Configure Engine Connection. Configure User Repository Configure Users & Groups - Roles. Configuring High Availability Configuring SSO Configure SSL. DWC Global Settings

IBM Workload Scheduler – Administration

Stop & Start of IWS Application on a Workstation Stop & start commands The type of operating system installed on the workstation determines how Tivoli Workload Scheduler processes can be started from the command line.
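
As a sketch, on a UNIX workstation the engine is commonly stopped and restarted from the <TWA_home>/TWS directory with:

conman "stop;wait"    (stop batchman and the other production processes)
conman "shut;wait"    (also stop netman)
./StartUp             (restart netman)
conman start          (restart the production processes)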

Stop & Start of IWS Broker Stop command: <TWS home>/wastools/stopBrokerApplication.sh. Start command: <TWS home>/wastools/startBrokerApplication.sh. The type of operating system installed on the workstation determines how Tivoli Workload Scheduler processes can be started from the command line.

IWS Workstation Status

IWS Workstation Status Listing of the show cpu states in the output of conman " showcpus ": The state of the workstation’s links . Up to five characters are displayed as follows: [L] [T|H|X] [I] [J] [W|H|X] [F]  where:  L - The primary link is open (linked) to its domain/upper manager.  T - This flag is displayed if the fault-tolerant agent is directly linked to the domain manager from where you run the command.  H - The workstation is linked through its host.  X - The workstation is linked as an extended agent (x-agent).  I - The jobman program has completed startup initialization.  J - The jobman program is running.  W - The writer process is active on the workstation.  F - The workstation is fully linked through primary and all secondary connections. This flag appears only if the enSwfaultTol global option is set to YES using the optman command line on the master domain manager, and it indicates that the workstation is directly linked to its domain manager and to all its full backup domain managers.  NOTE: If the workstation running conman is the extended agent’s host, the state of the extended agent is LXI JX  If the workstation running conman is not the extended agent’s host, the state of the extended agent is  LHI JH 

IWS Workstation Status The state of the monitoring agent.  Up to three characters are displayed as follows: [M] [ E|e ] [D]  where:  M - The monman process is running.  E - The event processing server is installed and running on the workstation.  e - The event processing server is installed on the workstation but is not running.  D - The workstation is using an up-to-date package monitoring configuration.  The state of the WebSphere Application Server .  A one-character flag is displayed, if the application server is installed: [A|R]  where:  A - The WebSphere Application Server was started.  R - The WebSphere Application Server is restarting.  The flag is blank if the application server is down or if it was not installed

WAS Utilities Show Host Properties (displays WAS ports): <TWS home>/wastools/showHostProperties.sh. Show Security (displays the security configurations): <TWS home>/wastools/showSecurityProperties.sh. Show Datasource (displays the DB configurations): <TWS home>/wastools/showDataSourceProperties.sh. Backup Config (backs up the WAS configurations): <TWS home>/wastools/backupConfig.sh. Restore Config (restores the WAS configurations): <TWS home>/wastools/restoreConfig.sh

WAS Utilities Change Host Properties (applies changes to the WAS ports): <TWS home>/wastools/changeHostProperties.sh <filename>. Change Security (applies the security configurations): <TWS home>/wastools/changeSecurityProperties.sh <filename>. Change Datasource (applies the DB configurations): <TWS home>/wastools/changeDataSourceProperties.sh <filename>. Change Trace Properties (changes the trace properties): <TWS home>/wastools/changeTraceProperties.sh. Restore Config (restores the WAS configurations): <TWS home>/wastools/restoreConfig.sh

WAS Utilities – Change Password Use the changePassword script to change the passwords of any of the following users: Tivoli Workload Scheduler instance owner (<TWS_user>), WebSphere Application Server user, database (J2C) user for either Oracle or DB2. Change Password (updates the password): <TWS home>/wastools/changePassword.sh. Syntax: changePassword.sh -user <USERID> -password <PASSWORD> [-wasuser <WASUSER>] [-waspassword <WASPASSWORD>] [-usroptshome <HOMEDIR>]. Note: using changeSecurityProperties.sh, the DB & TWS user credentials can be changed.
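
An illustrative invocation for changing the <TWS_user> and WebSphere Application Server passwords (all user names and passwords below are placeholders):

./changePassword.sh -user twsuser -password NewPassw0rd -wasuser wasadmin -waspassword WasPassw0rd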

Logs & Traces Logs are messages that provide you with information, give you warning of potential problems, and inform you of errors. Traces are messages for IBM Software Support that provide in-depth information about IBM Workload Scheduler processes. These logs and traces are recorded in <TWShome>/stdlist/logs and <TWShome>/stdlist/traces respectively, with the file name <DATE>_TWSMERGE.log. The Tivoli Workload Scheduler log files are switched every day, at the time set in the startOfDay global option (optman). You can customize the information written to the log files by modifying selected parameters in its properties file.

CCLog CCLog is a logging engine that creates log files in a defined structure. CCLog file locations: the log files it produces are stored in different places, depending on the settings in the localopts file. merge stdlists = yes: <TWA_home>/TWS/stdlist/logs/<yyyymmdd>_NETMAN.log (the log file for netman) and <TWA_home>/TWS/stdlist/logs/<yyyymmdd>_TWSMERGE.log (the log file for all other processes). merge stdlists = no: <TWA_home>/TWS/stdlist/logs/<yyyymmdd>_<process_name>.log. CCLog switching: the Tivoli Workload Scheduler log files are switched every day, at the time set in the startOfDay global option (optman). The CCLog properties file is <TWA_home>/TWS/TWSCCLog.properties, where <TWA_home> is the directory where Tivoli Workload Scheduler is installed.

Modifying IWS logging level 1. Edit <TWA_home>/TWS/TWSCCLog.properties. 2. Modify tws.loggers.msgLogger.level. This determines the type of messages that are logged. Change this value to log more or fewer messages, as appropriate, or on request from IBM Software Support. Valid values are: INFO (all log messages are displayed in the log; the default value), WARNING (all messages except informational messages are displayed), ERROR (only error and fatal messages are displayed), FATAL (only messages which cause Tivoli Workload Scheduler to stop are displayed). 3. Save the file. The change is immediately effective.
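
For instance, to log only warnings, errors and fatal messages, the relevant line in <TWA_home>/TWS/TWSCCLog.properties would read (property name as given above):

tws.loggers.msgLogger.level=WARNING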

Modifying IWS tracing level Edit < TWA_home >/TWS/ TWSCCLog.properties Modify tws.loggers.trc <component>.level. This determines the type of trace messages that are logged. Change this value to trace more or fewer events, as appropriate, or on request from IBM Software Support. Valid values are: DEBUG_MAX : Maximum tracing. Every trace message in the code is written to the trace logs. DEBUG_MID : Medium tracing. A medium number of trace messages in the code is written to the trace logs. DEBUG_MIN : Minimum tracing. A minimum number of trace messages in the code is written to the trace logs. INFO :All informational, warning, error and critical trace messages are written to the trace. The default value. WARNING :All warning, error and critical trace messages are written to the trace. ERROR :Only error and critical messages are written to the trace. CRITICAL :Only messages which cause Tivoli Workload Scheduler to stop are written to the trace. 3. Save the file. The change is immediately effective.

Modifying DWC tracing level Follow these steps to activate the Dynamic Workload Console traces at run time: Log in to the Dynamic Workload Console as administrator of the embedded WebSphere® Application Server In the Dynamic Workload Console navigation pane select Settings > Websphere Admin Console Click Launch Websphere Admin Console. In the navigation tree, click Troubleshooting > Logs and Trace > server name (for example tdwcserver ) > Diagnostic Trace. Select: Configuration:If you want to apply the changes to the trace settings after having restarted the server. Run time:If you want to apply the changes to the trace settings without restarting the server. Click Change Log Detail Levels under Additional Properties . Choose the packages for which you want to activate the traces. For the Dynamic Workload Console traces, make this selection: Scroll down to com.ibm.tws.* and expand the tree Click com.ibm.tws.webui.* Either select All Messages and Traces or click Messages and Trace Levels and choose the trace level you require. Click OK > Save. Stop and start the server, if necessary.

Modifying DWC tracing level [Alternate method] 1. Edit the following server.xml XML file: <tdwc_install_dir>/AppServer/profiles/<your_profile>/config/cells/<your_cell>/nodes/<your_node>/servers/<your_server>/server.xml [for example /app/IBM/JazzSM/profile/config/cells/JazzSMNode01Cell/nodes/JazzSMNode01/servers/server1/server.xml or /app/IBM/TWA/WAS/TWSProfile/config/cells/TWSNodeCell/nodes/TWSNode/servers/server1/server.xml]. 2. Change the value assigned to the property startupTraceSpecification from com.ibm.tws.webui.*=info to com.ibm.tws.webui.*=all. 3. Save the changes. 4. Stop and start the WAS server. When you enable tracing at run time, the traces are stored in the following file: /app/IBM/TWA/WAS/TWSProfile/logs/server1/SystemOut.log or /app/IBM/JazzSM/profile/logs/server1/SystemOut.log

Disabling DWC tracing level 1. Edit the following server.xml XML file: <tdwc_install_dir>/AppServer/profiles/<your_profile>/config/cells/<your_cell>/nodes/<your_node>/servers/<your_server>/server.xml [for example /app/IBM/JazzSM/profile/config/cells/JazzSMNode01Cell/nodes/JazzSMNode01/servers/server1/server.xml or /app/IBM/TWA/WAS/TWSProfile/config/cells/TWSNodeCell/nodes/TWSNode/servers/server1/server.xml]. 2. Change the value assigned to the property startupTraceSpecification from com.ibm.tws.webui.*=all to com.ibm.tws.webui.*=info. 3. Save the changes. 4. Stop and start the server. When tracing is enabled at run time, the traces are stored in the following file: /app/IBM/TWA/WAS/TWSProfile/logs/server1/SystemOut.log or /app/IBM/JazzSM/profile/logs/server1/SystemOut.log

Logs & Traces – WAS The log and trace files for the WebSphere Application Server can be found in: Application server run time log and trace files: <WAS_profile_path>/logs/server1/SystemOut.log and <WAS_profile_path>/logs/server1/trace.log. Ex: /app/IBM/TWA/WAS/TWSProfile/logs/server1/SystemOut.log, /app/IBM/JazzSM/profile/logs/server1/trace.log. Trace files containing messages related to the plan replication in the database: <WAS_profile_path>/logs/server1/PlanEventMonitor.log.0 and <WAS_profile_path>/logs/server1/PlanEventMonitor.log.1

Modifying WAS Tracing Level 1. Log on to the computer where Tivoli Workload Scheduler is installed as the following user: UNIX: root; Windows: any user in the Administrators group. 2. Access the directory <TWA_home>/wastools. 3. Run the script. UNIX: ./changeTraceProperties.sh -user <TWS_user> -password <TWS_user_password> -mode <trace_mode>. Windows: changeTraceProperties.bat -user <TWS_user> -password <TWS_user_password> -mode <trace_mode>. <trace_mode> is one of the following: active_correlation (all communications involving the event correlator are traced), tws_all_jni (all communications involving the jni code are traced; the jni code refers to code in shared C libraries invoked from Java; this option is used by, or under the guidance of, IBM Software Support), tws_all (all Tivoli Workload Scheduler communications are traced), tws_alldefault (resets the trace level to the default level imposed at installation), tws_cli (all Tivoli Workload Scheduler command line communications are traced), tws_conn (all Tivoli Workload Scheduler connector communications are traced), tws_db (all Tivoli Workload Scheduler database communications are traced), tws_info (only information messages are traced; the default value), tws_planner (all Tivoli Workload Scheduler planner communications are traced).

Modifying WAS Tracing Level (continued) Additional <trace_mode> values: tws_utils (all Tivoli Workload Scheduler utility communications are traced), tws_broker_all (all dynamic workload broker communications are traced), tws_broker_rest (only the communication between dynamic workload broker and the agents is traced), tws_bridge (only the messages issued by the workload broker workstation are traced). After running the script, stop and restart the application server.

Logs & Traces – Dynamic Agent The log messages are written in the following file: <TWA_home>/TWS/stdlist/JM/JobManager_message.log. The trace messages are written in the following files: <TWA_home>/TWS/stdlist/JM/ITA_trace.log, <TWA_home>/TWS/stdlist/JM/JobManager_trace.log, <TWA_home>/TWS/JavaExt/logs/javaExecutor0.log. Logging information about job types with advanced options: you can use the logging.properties file [TWA_home/TWS/JavaExt/cfg/logging.properties] to configure the logging process for job types with advanced options, with the exception of the Executable and Access Method job types. Set the logging level (from INFO to WARNING, ERROR, or ALL) in the following keywords: .level (defines the logging level for the internal logger), com.ibm.scheduling (defines the logging level for the job types with advanced options; to log information about job types with advanced options, set this keyword to ALL).

Modifying Dynamic Agent Tracing Level Trace files are enabled by default for the dynamic agent. To modify the related settings you can either edit the [JobManager.Logging] section in the JobManager.ini file and restart the dynamic agent, or use one or more of the following command-line commands, without stopping and restarting the dynamic agent. The commands can be found in <TWA_home>/TWS/ITA/cpa/ita. The syntax for the commands is as follows: enableTrace (sets the trace to the maximum level, producing a verbose result); disableTrace (sets the traces to the lowest level); showTrace [> trace_file_name.xml] (displays the current settings defined in the [JobManager.Logging] section of the JobManager.ini file for the dynamic agent traces; you can also redirect the [JobManager.Logging] section to a file to modify it, save the modified file, and use the changeTrace command to make the changes effective immediately); changeTrace [trace_file_name.xml] (reads the file containing the modified trace settings and implements the changes immediately and permanently, without stopping and restarting the dynamic agent).

Logs & Archived Files - Maintenance Log files are produced from a variety of Tivoli Workload Scheduler activities. Other activities produce files which are archived after they have been used. Old stdlist files can be removed with: <TWS home>/bin/rmstdlist <# of retention days>
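
For example, to remove stdlist files older than 10 days (the retention value is illustrative; the command is typically run as the <TWS_user>):

<TWS home>/bin/rmstdlist 10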

Logs & Archived Files - Maintenance Dynamic agent job logs are kept according to the MaxAge parameter in <TWA_home>/TWS/ITA/cpa/config/JobManager.ini. The default is 2 days.

Plan – Types of Plans

The production plan contains all the jobs and job streams that are scheduled to run within the production period, together with their depending objects and all workstation definitions. Scheduling object definitions stored in the database (jobs, job streams) become instances in the production plan, where they can be monitored and modified. The production plan data is created by the MDM, stored in the Symphony file and replicated in the database. Production Plan

A preproduction plan is used to identify in advance the job stream instances and the job stream dependencies involved in a specified time period. The preproduction plan contains: The job stream instances to be run during the covered time period. The external dependencies that exist between the job streams and jobs included in different job streams. It improves performance when generating the production plan by preparing in advance a high-level schedule of the anticipated production workload. Preproduction Plan

The pre-production plan is stored in the DB in the JSI_JOB_STREAM_INSTANCES and JDP_JOB_STREAM_INSTANCE_DEPS tables. It calculates in advance the job stream instances for the next days (from 7 to 14 by default) and resolves external dependencies by calculating which instance of the predecessor job stream should be the actual predecessor. The management of the pre-production plan is completely automated, with extensions/replans triggered automatically at the end of UpdateStats or at the beginning of MakePlan. Preproduction Plan

A Symnew plan is a temporary plan. It is an intermediate production plan that covers the whole time the new production plan that is being generated will cover. It is replaced by the production plan as soon as it starts. Symnew Plan

Creating Production Plan – JnextPlan The production plan contains information about the jobs to run, on which agent, and what dependencies must be satisfied before each job can start. While creating or extending the production plan, we define the production period, which can span from a few hours to multiple days (by default it lasts 24 hours). The JnextPlan script generates the production plan (written into the Symphony file) and distributes it across the IBM Workload Scheduler network.

To extend the production plan at a fixed interval, for example every day, a job stream called FINAL can be scheduled to run at the end of each production day. This job stream contains the jobs that create the production plan: it checks and starts the application server to connect to the DB, creates the production plan taking data from the DB and an intermediate plan called the preproduction plan, includes the uncompleted schedules from the previous production period into the current plan, and distributes the production plan in the IWS network. Automating production plan processing

Another job stream called FINALPOSTREPORTS runs after FINAL to check whether the plan is loaded in the database. In addition, it updates job statistics and creates post-production reports: it monitors the replication process of the Symphony file in the DB and logs job statistics. Automating production plan processing

Creating & automating Production Period The JnextPlan script is used to manage the entire process of moving from an old to a new production plan (Symphony), including its activation across the IBM Workload Scheduler network. Every time you run JnextPlan, all workstations are stopped and restarted. It can be run from the MDM only.
JnextPlan [-V | -U] |
  [-from mm/dd/[yy]yy [hh[:]mm [tz | timezone tzname]]]
  {-to mm/dd/[yy]yy [hh[:]mm [tz | timezone tzname]] |
   -for [h]hh[:]mm [-days n] | -days n}
  [-noremove]
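
For example, assuming the -for value expresses hours and minutes as in the syntax above, a typical daily extension of 24 hours could be requested with:

JnextPlan -for 2400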

Creating & automating Production Period The  JnextPlan  script is composed of the following sequence of commands and specialized scripts, each managing a specific aspect of the production plan generation: Startappserver : This command is invoked to start the WebSphere Application Server if it is not already running. Makeplan : MakePlan  invokes internally the  planman  command line.  MakePlan  performs the following actions: Creates a new plan or extends the current plan and stores the information in an intermediate production plan called Symnew file containing: All the scheduling objects (jobs, job streams, calendars, prompts, resources, workstations, domains, files, users, dependencies) defined in the selected time period. Prints preproduction reports.

Creating & automating Production Period Switchplan: this script invokes internally the stageman command. SwitchPlan performs the following actions: stops IBM Workload Scheduler processes; generates the new Symphony file starting from the intermediate production plan created by MakePlan, i.e. merges the old Symphony file with SymNew; archives the old plan file with the current date and time in the schedlog directory; runs "planman confirm" to update plan status information in the DB (e.g. plan end date and current run number); creates a copy of the Symphony file called Sinfonia to distribute to the workstations; restarts IBM Workload Scheduler processes, which distribute the Sinfonia file to the workstation targets for running the jobs in the plan.

Creating & automating Production Period UpdateStats : This script invokes internally the  logman  command. UpdateStats performs the following actions: Logs job statistics. Checks the policies and if necessary extends the preproduction plan. Updates the preproduction plan reporting the job stream instance states. CreatePostReports : This job just creates a report

Plan Mirroring on Database In version 9.1, IBM Workload Scheduler introduced a new copy of the plan in the database. This is called 'Plan Mirroring'. This copy of the plan is used only for plan monitoring from the UI (and from Java APIs) and is not used for scheduling purposes, which continue to work using the Symphony file to assure consistency between the master and the agents. This change has greatly improved the scalability and performance of the UI.

Plan Mirroring on Database The 'Plan Mirroring' on the DB is just a copy of the Symphony file and does not have any impact on job scheduling. Of course, since it is used for monitoring, it is still a critical component. The new component introduced for plan mirroring is PlanUpdate, a set of threads running inside the Application Server that are responsible for any update to the Plan Mirroring. In detail, there is a main PlanUpdate thread receiving messages from mirrorbox.msg, and a set of sub-threads, each with its own mirrorbox_#.msg message queue. The main thread handles the resyncs, distributes the messages to the sub-threads, and processes some kinds of messages directly. The sub-threads mainly process messages about jobs and job streams.

Plan Mirroring on Database – Sync Phases There are two different flows/phases that alternate during normal operations and assure that the Plan Mirroring is aligned with the Symphony file; they both run on the actual master. Resync: this phase takes a snapshot of the Symphony file that is then loaded into the DB. Apply Messages: this is the normal running phase. As batchman updates the Symphony file by applying the messages in Intercom.msg coming from other IWS components or other agents, the Plan Mirroring is also updated by applying the same messages queued in mirrorbox.msg.

Plan Mirroring on Database – Resync Phase Resync: this phase takes a snapshot of the Symphony file that is then loaded into the DB. Resyncs are automatically triggered when a new Symphony file is started (when batchman starts at the end of SwitchPlan), when a switch master is performed, or when the mirrorbox.msg queue is full (this prevents issues on the DB from having any impact on scheduling activities). Resyncs can also be forced manually using planman resync. At any time it is possible to check the synchronization status of the mirroring using planman checksync. This is also used inside FINALPOSTREPORTS in order to monitor that the automatic resync of the new Symphony file has completed successfully.
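
A minimal check-and-repair sequence from the master domain manager command line, using the commands referenced above:

planman checksync    (verify that the mirrored plan is aligned with the Symphony file)
planman resync       (force a new snapshot of the Symphony file into the database)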

Plan Mirroring on Database – Resync Phase

Plan Mirroring on Database – Apply Message Phase Apply Message: since 9.1, every message that needs to be sent to the local batchman is duplicated and also sent to PlanUpdate via mirrorbox.msg. PlanUpdate processes the message and applies the change to the Plan Mirroring, just as batchman does for the Symphony file. Job and job stream messages are not processed directly by the main PlanUpdate thread but are first distributed to a sub-thread that finally handles them.

Conclusion Brief summary of today’s session