scpo_Technical_Implementation_Basics.pptx

Thirupathis9 93 views 40 slides Jun 02, 2024

About This Presentation

JDA Blue Yonder technical implementation


Slide Content

Tech Training Workshops
Basics of Tech Implementation
Vipul Kohli & Mark Rodriguez
May 2016

AGENDA
- Tech Planning
- Key Tasks and Deliverables by Phase
- Difference between Cloud v/s At Client
- Regular & Fast Track Implementations
- General Tips
- Integration Approach – IGP or not
- UDTs v/s UDCs
- Batch Framework
- Basic Housekeeping Tasks for SCPO

Technical Planning
- This should be one of the first activities, if not the first, for the tech team.
- Vital, since it helps align all stakeholders around common objectives and eliminates surprises:
  - Client IT – Mgmt., PM, Dev Leads, DBAs, Sys Admins
  - Client Business Sponsors and Business Leads
  - JDA team members – Functional / Tech / PM
  - Partner / SI team members
- Objective:
  - Define all tasks / deliverables for the implementation, along with a RACI for RICEFs (Reports / Integrations / Config. / Extensions / Forms). A must for projects with a partner.
  - Identify key decisions that are required. Actual decisions will most probably be made later.

Technical Planning (Cont.)
Topics to be covered for Tech Planning:
- Solution Footprint / Project Scope & Timelines
- HW Architecture & Software Pre-reqs. – what comes from JDA and what the client is responsible for
- Optional Software/HW – Load Balancer / Proxy Server / Enterprise Scheduler
- Environments Strategy – number of environments and how to manage changes
- Integration Approach
- Batch Development
- Security Set-up – OS / DB / Application
- Roles and Responsibilities – Client Team v/s JDA Team

Typical Implementation
- No different from any other implementation of commercial packaged enterprise software.
- A typical implementation of Demand & Fulfillment can last anywhere between 6 – 12 months:
  - Depends on scope
  - Shorter if only Demand or only Fulfillment is being implemented
- The bulk of the technical effort is spent on:
  - Integration
  - Batch Development & Automation
- A little less, but still significant, effort is required for:
  - Installation & Architecture
  - Performance Tuning
- Base code for the product is never customized (no option to do so).

Copyright © 2014 JDA Software Group, Inc. Confidential

Technical Deliverables

Prepare Phase
- Tech Planning — Deliverables: Project Plan; Integration Approach; Environment Strategy; Roles & Responsibilities. Resources: JDA Tech Lead, Client IT Lead, JDA & Client PMs.

Design Phase
- Tech Requirements Review — Deliverables: Technical Design Doc. Resources: JDA Tech Lead, Client IT Lead.
- Fill Out / Revisit the Sizing Survey — Deliverables: Formal JDA Sizing Proposal. Resources: JDA Tech & Functional Leads, Sizing SME.
- Solution Prototyping — Deliverables: Installed Prototype Env.; Initial Data Mapping; One-time Data Loads. Resources: JDA Tech and Functional Consultants, Client IT.
- Performance Review — Deliverables: Perf. Inputs for Env. Set-up; Performance Test Plan. Resources: JDA Perf. Services / Tech Con, Client IT.

Technical Deliverables (cont.)

Construct Phase
- Set up Dev Environment — Deliverables: Install Software; Set up Batch Framework; Set up Security; Configure JDA environment (set up UDCs, UDTs); AYI documentation. Resources: JDA Tech, Client IT.
- Develop Interfaces — Deliverables: Finalized Data Mapping; Interface Scripts. Resources: Inbound – Client IT; Outbound – JDA.
- Develop JDA Scripts / Reports — Deliverables: Functional/Technical Specs; SQL, SRE XML scripts, Monitor Script; Reports; Unit Test scripts with sample data. Resources: JDA Tech.
- Develop Batch Flow — Deliverables: Batch Flow Documentation (spreadsheet / Visio) for Daily/Weekly/Monthly Batch. Resources: JDA Functional & Tech.

Technical Deliverables (cont.)

Validate Phase
- Set up UAT Environment — Deliverables: Install Software + Patches (as required); Set up Batch Framework; Promote JDA Configuration (using ECM) and Batch Scripts / Interfaces. Resources: JDA Tech & Client IT.
- Set up Batch Scheduler — Deliverables: Fully Automated Batch Schedule with Dependencies; Supporting Documentation. Resources: JDA & Client IT.
- Run SIT / UAT — Deliverables: Initial Data Loads for full volumes; Fully Automated Batch; Defect Fixes (as required). Resources: JDA & Client IT.
- Conduct Performance Tuning — Deliverables: Updated SRE & DB Config.; Performance Report; Performance Baseline. Resources: JDA Perf. Services & JDA Tech.
- Develop Cutover Plan — Deliverables: Detailed Cutover Plan. Resources: JDA Functional & Tech.

Technical Deliverables (cont.)

Deploy Phase
- Execute Cutover as per Cutover Plan — Deliverables: Install Software + Patches (as required); Set up Security – OS / DB / Application; Promote JDA Configuration (using ECM) and Batch Scripts / Interfaces; Apply SRE & DB Config changes for performance; Set up batch scheduler and promote the batch schedules; Run initial loads; Initiate Nightly / Weekly Batches. Resources: JDA & Client Teams.
- Post Go-Live Support — Deliverables: Provide support as required; Complete knowledge transfer to client teams. Resources: JDA Tech.

Technical Deliverables (cont.)

Transition Phase
- Completed Transition to Support / Cloud — Deliverables: Performance Baseline Doc for the production environment; Finalize transition documentation; Archive all project documentation in JCES. Resources: JDA, Support and Cloud Teams.

Working with JDA Cloud
- For the purposes of a tech implementation, JDA Cloud replaces the client infrastructure teams: DBA support / Sys Admin / Scheduler set-up / Batch monitoring.
- The Cloud team will take care of environment provisioning, but JDA Tech needs to own validation of the environment.
- Be involved with planning for environments: ask for sizing details & the Cloud SOW.
- Be aware of Oracle license implications for your clients.
- You may have a choice of operating system platform for JDA servers.
- Be aware of JDA Cloud processes (Service Desk) and associated lead times.
- Work with the Cloud SDM & Cloud PM to agree on roles & responsibilities.

Fast Track Implementations (a.k.a. NOW)
- Limited in functional scope, with some limitations in volumes / user counts.
- From a technical implementation point of view, there is no difference in deliverables.

General Tips
- Stick with JDA standards (installs / Batch Framework / documentation templates).
- Review the SOW and be aware of the solution scope.
- Review the solution design and be aware of the performance impact of design decisions.
- Involve client resources early for knowledge transfer.
- Be on the good side of DBAs and Sys Admins: share detailed information in advance of tasks. No surprises.
- Establish rules of engagement early in the project – especially on roles and responsibilities / enhanced access.

Integration Approach
- For inbound integrations, SCPO has adopted an "Open Integration" approach: agnostic to the integration tools used.
- Some influencing factors are:
  - The client's integration tool / platform of choice
  - The available skill set to develop and support these integrations
- JDA can support the entire spectrum of integration approaches, by degree of sophistication: from simple (flat files with SQL*Loader) to complex (an ETL tool with complex transformations).

Integration Approach (cont.)
- All referential integrity is built in at the database level. This dictates the sequence for populating tables for any given application. E.g., before loading SKU, ITEM and LOC would have to be loaded.
- SCPO & Collab provide a separate set of shadow tables (and associated Oracle stored procedures) in a separate schema – IGP.
- Once data is loaded into these IGP tables, Oracle stored procedures are invoked to move the data to the application tables.
- Other options, such as direct-to-table loads, are also often employed for high-volume implementations.
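The load-order constraint can be sketched as a small driver loop. Here `load_table` is a hypothetical stand-in for the real load step (a SQL*Loader control file or an ETL job per table); only the ITEM/LOC-before-SKU ordering comes from the slide:

```shell
#!/bin/sh
# Load parent tables before child tables so that the database's
# referential-integrity constraints are satisfied.
# load_table is a hypothetical stand-in for the real load step
# (e.g. a SQL*Loader invocation or an ETL job per table).
load_table() {
    echo "loading $1"
}

# ITEM and LOC must be loaded before SKU, which references both.
for t in ITEM LOC SKU; do
    load_table "$t" || { echo "load failed for $t" >&2; exit 1; }
done
```

The same driver shape scales to any application: list the parent tables first, abort on the first failure so a child table is never loaded against missing parents.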

Interface Generation Program (IGP)

[Diagram] Source system(s) feed, via the integration tool of choice, the IGP schema tables (no constraints): INTINS_SKU for inserts, INTUPD_SKU for updates, INTUPS_SKU for inserts & updates. A stored procedure then moves the data to the target table (SKU) in the SCPO schema, where all constraints are enabled; rejected records land in INTERR_SKU, and jobs are tracked in INTJOBS.

Technical Development Effort
Technical effort is primarily in two areas:
- Development of inbound integrations from source systems to IGP
  - Requires knowledge of the source systems
  - Dependent on the source system technology platform
  - Typically done by client resources; JDA functional resources assist with data mapping
- Scripting effort for invoking stored procedures and subsequent error checking
  - A single generic wrapper is used, with different job ids / stored procedures passed as parameters
  - JDA resources will guide / help with development

[Diagram] Source → IGP (inbound integrations) → JDA, via the IGP stored procedures.
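The single-generic-wrapper idea can be sketched as below. The function name, arguments, and the stubbed-out sqlplus call are illustrative only, not the actual JDA scripts:

```shell
#!/bin/sh
# One generic wrapper serves every IGP interface: the job id and
# the stored procedure to invoke are passed in as parameters.
# The database call is stubbed out; a real version would run
# sqlplus and then check the INTERR_ tables for rejected records.
run_igp_load() {
    job_id=$1
    proc=$2
    echo "job $job_id: invoking $proc"
    # A real implementation might do something like:
    #   sqlplus -s "$DB_CONN" <<EOF
    #   exec $proc;
    #   EOF
    # ...then query INTERR_* / INTJOBS for the outcome.
}

run_igp_load 101 "INT_SKU_LOAD"   # hypothetical job id and procedure
```

Because only the parameters vary, adding a new interface means one new PROCESS-table entry rather than a new script.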

If We Use IGP
PROs
- No need to worry about referential integrity / constraints: taken care of by the IGP stored procedures.
- Provides error handling in the form of the INTERR_ tables.
CONs
- Extra step of loading into the IGP tables (e.g. using Informatica) and then loading to the target tables via the IGP stored procedures.
- Extra scripting / job scheduling to manage the IGP-to-SCPO step.

[Diagram] Source → ETL → IGP → SCPO, via the IGP stored procedures.

If We Do Not Use IGP
PROs
- One less step: saves time.
- Less development of additional scripts.
CONs
- Referential integrity checks have to be built into the integration.
- The ETL layer would have to handle rejected records and provide tracking/status of jobs.

[Diagram] Source → ETL tool → SCPO.

IGP or Not?
- Use IGP for master data tables for sure (DMDUNIT, ITEM, LOC / PRODUCT / LOCATION / PLANNING ITEM).
- Maybe consider alternate ways for high-volume tables (HIST / SKU / DFUVIEW).
- Direct inserts for tables like DFU / demand PARAM tables / DFUMAP / DFUTOSKU.
- For Collab, PL/SQL scripts for loading Planning Component data (with certain assumptions).
- When not using IGP, account for error handling.

IGP Best Practices
- Generate IGP only for the tables required, not for the entire schema.
- Disable triggers and invoke the stored procedures explicitly.
- Disable Net Change logic for SCPO (pre & post processing steps in IGP.properties).
- Build a backup table for the INTJOBS table (JDA_INTJOBS) to keep track of job history for IGP loads, retaining only the latest run in INTJOBS.
- Use a wrapper script to invoke IGP (as part of the Batch Framework).
- Real-time integration v/s batched processing – typically the latter.

SCPO – UDTs v/s UDCs

User Defined Tables (UDTs)
- Useful for storing custom data specific to a customer which may not easily map to an existing table.
- Table added at the Oracle level (along with a PK) and then enrolled using JDA Foundation utilities. Once enrolled, further columns can be added to the UDT using Data Model Manager.
- Create UDTs in the JDA SCPO schema only.
- The UDT must have at least one primary key for Flexible Editor and Compound Workspaces to work properly.

User Defined Columns (UDCs)
- Useful for storing custom data specific to a customer which maps directly to an existing JDA table, or used for search criteria / scripting logic.
- UDCs can only be added using Data Model Manager.
- Create UDCs on Demand and Fulfillment tables in the SCPO schema.
- UDCs can be removed through the Data Model Manager too.

SCPO – UDTs v/s UDCs (Cont.)

User Defined Tables (UDTs)
- Talk to Support / Tech Con before dropping / altering a UDT.
- Rigorous procedures and change control should be followed when creating UDTs.
- UDTs need to be created and enrolled in each and every environment. Ongoing changes to existing UDTs can be promoted using the JDA Foundation change management utilities (MCM).
- Any schema changes to a UDT require the corresponding IGP tables to be regenerated.

User Defined Columns (UDCs)
- NEVER alter/remove a UDC at the DB level.
- Rigorous procedures and change control should be followed when creating UDCs. Get sign-offs on the creation of UDCs from the Data Architect & business teams, justifying the need for them.
- UDCs can be promoted from one environment to another using the JDA Foundation change management utilities (MCM).
- Any additions/deletions of UDCs require the corresponding IGP tables to be regenerated.

SCPO – UDTs v/s UDCs (Cont.)

User Defined Tables – Best Practices
- UDTs can be stand-alone or modeled on a "like" table in the SCPO schema.
- Follow a naming convention: start with "UDT_".
- Define a default value for non-PK columns in the UDT, if possible.

User Defined Columns – Best Practices
- UDCs should not be made part of the primary key on base SCPO tables.
- Avoid creating UDCs on output tables for JDA processes (HIST / FCST / PLANARRIV etc.).
- Follow a naming convention ("U_").
- Define a default value for all UDCs, if possible.

Avoid building a parallel application using UDTs and UDCs.

UDCs – Factors to Consider
Before creating a UDC, always consider:
- How will the UDC be populated – integration or user maintained?
- A UDC will never be read or populated by a core JDA process, but it can impact the performance of the process.
- Null or not null? Default value?
- UDCs on a base table are preferred, assuming a limited / manageable number of UDCs:
  - Will support key user navigation / searches / scripting logic
  - Take less space
  - Intuitive for UI navigation

UDTs – Factors to Consider
- What will it be used for – viewing data, or supporting some user-maintained process?
- How will the UDT be populated – integration or user maintained?
- A UDT will never be read or populated by a core JDA process, and has no bearing on performance either.
- Impact on user navigation / usability – requires additional training and attention from users.
- Space – takes more than a UDC.

Batch Framework
- It is JDA's intellectual property.
- Built to minimize time and effort for batch development:
  - New resource introduction: familiarity
  - Standardized reporting of batch metrics
  - Scheduler set-up
- Simplifies specific activities:
  - Performance tuning: configurable AWR and other Oracle metrics
  - Debugging: logging
- Developed for SCPO; wrapper scripts available for Windows / Unix / Linux.
- Can be easily extended to other Oracle-based apps.

Batch Framework (Cont.)
- Must be used for all new implementations – Cloud or on-premises. Consider for use in upgrades.
- Simplified installation and deployment utility on the GTS Team site.

Deployment on the batch server:
- Directories and scripts owned by the JDA instance owner.
- One framework installation for each JDA environment.
- Zip file with framework components and default directory structure. Batchenv must be updated for each environment.

Folder            Description
…/                Root folder for the batch framework and custom processes.
…/fwk             Core scripts and configuration files used by the batch framework.
…/fwk/dbinstall   Scripts for creating the framework database objects.
…/batch           All custom batch scripts, e.g. ksh, sql, xml etc.
…/batch/ksh       Custom shell scripts.
…/batch/sql       Custom sql scripts.
…/batch/xml       Xml files for SRE etc.
…/doc             Documents related to the framework and environment.
…/log             All log files generated by the batch framework.
…/tmp             All temporary files generated by the framework and other scripts.
…/data            Data files in and out from the environment.
…/data/in         Inbound data files.
…/data/out        Outbound data files.
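The default layout above can be recreated with a short script, which is handy when sketching or validating an environment; the root path here is a placeholder, since the real install unpacks this structure from the framework zip:

```shell
#!/bin/sh
# Recreate the batch-framework directory skeleton from the table
# above under BATCH_ROOT (placeholder path; the real install
# ships this layout inside the framework zip file).
BATCH_ROOT=${BATCH_ROOT:-/tmp/batch_fwk_demo}

for d in fwk fwk/dbinstall batch/ksh batch/sql batch/xml \
         doc log tmp data/in data/out; do
    mkdir -p "$BATCH_ROOT/$d"
done
```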

Batch Framework (Cont.)
Deployment on the database:
- Framework components owned by a dedicated account, JDABATCH.
- Package for DDL support created in each application schema.
- Scripts to create the JDABATCH user and related objects.
- Scripts to grant typical privileges to JDABATCH for running processes.

Installation steps:
1. Review the scripts located in fwk/dbinstall.
2. Execute install_fwk.sql using sqlplus from a DBA account. This step installs the jdabatch schema and the core framework components. Review the output for errors.
3. Review install_others.sql. Make a copy and update it according to environment requirements.
4. Execute the modified install_others.sql using sqlplus from a DBA account. This step installs the DDL utility in the application schemas and grants privileges on the application schemas to JDABATCH. Review the output for errors.

Batch Framework (Cont.)
For batch development:
- Review the wrapper scripts.
- Follow naming conventions for scripts – dmd_ / ful_ / mp_.
- Tie pre & post processing to the main script as required – <script_name>_pre / _post.
- Update the PROCESS table with an appropriate entry.
- Execute the process with the appropriate command line to unit test.
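The `<script_name>_pre / _post` convention can be sketched as a small dispatcher: run the optional pre and post steps around the main step whenever those scripts exist. The function name, directory argument, and `.sh` extension are illustrative; the real framework dispatches by the PROCESS-table type columns (SQL / SRE etc.):

```shell
#!/bin/sh
# Sketch of the <script_name>_pre / _post convention: run the
# optional pre and post steps around the main step when present.
# File names and the .sh extension are illustrative; the real
# framework dispatches by PROCESS-table type (SQL / SRE / ksh).
run_steps() {
    name=$1
    dir=$2
    for step in "${name}_pre" "$name" "${name}_post"; do
        if [ -x "$dir/$step.sh" ]; then
            "$dir/$step.sh" || { echo "step $step failed" >&2; return 1; }
        fi
    done
}
```

A call like `run_steps dmd_calc_model /path/to/batch/ksh` would then execute `dmd_calc_model_pre.sh`, `dmd_calc_model.sh`, and `dmd_calc_model_post.sh` in order, stopping at the first failure.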

Examples

Example 1 – Update DmdPostDate

    ---------------------------------------------------------
    -- NAME    : dmd_upd_dmdpost.sql
    -- PURPOSE : Set demand post date to last Sunday.
    ---------------------------------------------------------
    DECLARE
       v_dfu_count NUMBER;
       v_dmdpost   DATE := trunc(sysdate, 'D');
    BEGIN
       pkg_log.info('New dmdpostdate = ' || to_char(v_dmdpost, 'MM/DD/YYYY'));

       UPDATE dfu SET dmdpostdate = v_dmdpost;
       v_dfu_count := SQL%ROWCOUNT;
       pkg_job.add_volume(v_dfu_count);
    EXCEPTION
       WHEN OTHERS THEN
          pkg_job.fatal_error;
          raise;
    END;
    /
    COMMIT;
    exit

Examples

Example 1 – Update DmdPostDate

Configuration (PROCESS table):

    Process      : dmd_upd_dmdpost
    Descr        : Update demand post date for all DFUs.
    Process_Type : SQL

Run process:

    run_process.ksh dmd_upd_dmdpost

Examples

Example 2 – Calc Model (pre)

    ---------------------------------------------------------
    -- NAME    : dmd_calc_model_pre.sql
    -- PURPOSE : Prepare for calculate model for all DFUs.
    ---------------------------------------------------------
    BEGIN
       pkg_log.debug('Truncate PROCESSDFU');
       scpomgr.pkg_ddl.truncate_table('PROCESSDFU');

       pkg_log.debug('Truncate DFUEXCEPTION');
       scpomgr.pkg_ddl.truncate_table('DFUEXCEPTION');

       pkg_log.debug('Truncate SMOOTHSEASON');
       scpomgr.pkg_ddl.truncate_table('SMOOTHSEASON');
    EXCEPTION
       WHEN OTHERS THEN
          pkg_job.fatal_error;
          raise;
    END;
    /
    COMMIT;
    exit

Examples

Example 2 – Calc Model (main)

    <?xml version='1.0' encoding='UTF-8'?>
    <!--
      NAME    : dmd_calc_model.xml
      PURPOSE : Calculate model for all DFUs as batch.
    -->
    <batchJob xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="process_request.xsd"
              logFilePrefix="dmd_calc_model"
              batchName="dmd_calc_model">
      <processRequest processName="Demand.CalcModel" nodeConfigurationName="BATCH">
        <processOptionSet optionSetName="Batch_CalcModel" isPublic="false"/>
        <searches>
          <search enterpriseName="JDA" searchName="Batch_CalcModel" isPublic="false">
          </search>
        </searches>
        <jobOptions>
          <traceCategoryRange>
            <category>PROCESS_STATUS</category>
            <category>APPLICATION_EXCEPTION</category>
          </traceCategoryRange>
        </jobOptions>
      </processRequest>
    </batchJob>

Examples

Example 2 – Calc Model (post)

    ---------------------------------------------------------
    -- NAME    : dmd_calc_model_post.sql
    -- PURPOSE : Post processing for calculate model.
    ---------------------------------------------------------
    DECLARE
       v_count NUMBER;
    BEGIN
       scpomgr.pkg_ddl.gather_table_stats(table_name => 'FCSTDRAFT', degree => 8);

       -- Capture volume if info turned on.
       IF pkg_log.is_info THEN
          SELECT num_rows INTO v_count
            FROM all_tables
           WHERE table_name = 'FCSTDRAFT'
             AND owner = 'SCPOMGR';
          pkg_job.add_volume(v_count);
       END IF;
    EXCEPTION
       WHEN OTHERS THEN
          pkg_job.fatal_error;
          raise;
    END;
    /
    COMMIT;
    exit

Examples

Example 2 – Calc Model

Configuration (PROCESS table):

    Process          : dmd_calc_model
    Descr            : Run Calc Model for all DFUs.
    PreProcess_Type  : SQL
    Process_Type     : SRE
    PostProcess_Type : SQL

Run process:

    run_process.ksh dmd_calc_model

Housekeeping Tasks
- Essential for maintaining a stable, functional solution over time.
- Must be built into the Daily / Weekly / Monthly batch.
- Common scripts are available as part of the Batch Framework.

Housekeeping Tasks (Cont.)
Tasks with no impact to functional data: old log files (App Server / SRE / Batch logs) – archive and then clear.

- XML log files generated by SRE processes — *.log.xml with timestamp older than 7 days; <JDA_INSTALL>/config/logs, from wherever the batch is run.
- Log files generated by JDA Node Pools — manu_sre*.log with timestamp older than 14 days; <JDA_INSTALL>/config/logs, on each install from where the node pools are run.
- Files generated by FE Exports and Imports — ImportExport_*.csv, ImportExport_*.log; location is configurable, on each install from where the node pools are run.
- Any jdacore files generated due to process failures — jdacore_*.xml with timestamp older than 7 days; <JDA_INSTALL>/config/logs, on each install from where the node pools are run.
- Application Server log files — manu_*.log on each managed node in the application cluster; <JDA_INSTALL>/config/logs, on app servers.
- Files generated during Export / Import — *txt_from_clob, *csv_from_clob on each managed node; <JDA_INSTALL>/config/JDADomain, on app servers.
- WebLogic logs — access.log*, WebWORKS?.log* with timestamp older than 14 days, on each managed node in the application cluster; <JDA_INSTALL>/config/JDADomain/servers/WebWORKS1/logs and <JDA_INSTALL>/config/JDADomain/servers/WebWORKS2/logs, on app servers.
- Log files generated by batch scripts — *.log; location is client specific, on the app or SRE server from where the batch is run.
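The archive-then-clear pattern in the table above can be sketched as one reusable function; the directory names are parameters, and the age window and file pattern vary per file type (7 vs 14 days). This is an illustrative sketch, not one of the framework's shipped scripts:

```shell
#!/bin/sh
# Archive-then-clear pattern for old log files, per the table
# above. Directory names are parameters; the age window and file
# pattern vary by file type (7 vs 14 days).
archive_old_logs() {
    log_dir=$1
    archive_dir=$2
    days=$3
    pattern=$4
    mkdir -p "$archive_dir"
    # Move matching files older than $days days, then compress them.
    find "$log_dir" -name "$pattern" -mtime +"$days" \
         -exec mv {} "$archive_dir/" \;
    gzip -f "$archive_dir"/* 2>/dev/null || true
}

# Example: SRE XML logs older than 7 days (path is illustrative)
# archive_old_logs "$JDA_INSTALL/config/logs" /archive/sre 7 '*.log.xml'
```

A real job would also prune the archive directory itself on a longer cycle, so the archive does not grow without bound.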

Housekeeping Tasks (cont.)
SRE_JOB_SUMMARY – weekly. Retain logs for the batch user for longer durations (3 months).

    -------- WWFMGR.SRE_JOB_SUMMARY --------
    -- Retain records for all other users for up to 7 days
    DELETE FROM wwfmgr.sre_job_summary
     WHERE submission_date < (sysdate - 7)
       AND user_name NOT IN ('Daemon', 'BatchUser');

    -- Retain records for up to four months for the batch user
    DELETE FROM wwfmgr.sre_job_summary
     WHERE submission_date < trunc(sysdate - 120, 'MM')
       AND user_name = 'BatchUser';

- BATCHMGR.JOB_SUMMARY – same frequency and retention duration as above.
- JDA_INTJOBS – same frequency and retention duration as above.
- INTERR_ tables – clear before a new load / maybe retain for a week. Built in as part of the IGP load scripts.

Housekeeping Tasks (cont.)
Tasks with functional impact (application data), mostly built in as part of batch scripts / functional design:
- Purge demand history older than the required duration. CDS will enforce this.
- Purge HISTFCST older than a set duration (e.g. forecast dates > 1 year).
- Purge DFUEXCEPTION / SKUEXCEPTION – clear at the beginning of the batch.
- Purge SKUPROJSTATIC / SKUSTATSTATIC / FCSTPERFSTATIC – before running the batch job.
- Process for discontinuing & deleting DFUs based on business logic (e.g. inactive DMDUNITs, store closings, inactive assortments etc.). Design the process to include changes on upper-level DFUs as well. A similar process applies for SKUs.
- Keep PLANNING_ITEMs (Collab) in sync with changes in Demand / Fulfillment.