Tech Training Workshops Basics of Tech Implementation Vipul Kohli & Mark Rodriguez May 2016
AGENDA
- Tech Planning
- Key Tasks and Deliverables by Phase
- Difference between Cloud vs. At Client
- Regular & Fast Track Implementations
- General Tips
- Integration Approach – IGP or not
- UDTs vs. UDCs
- Batch Framework
- Basic Housekeeping Tasks for SCPO
Technical Planning
This should be one of the first activities, if not the first, for the tech team. It is vital since it helps align all stakeholders around common objectives and eliminates surprises.
Stakeholders:
- Client IT – Mgmt., PM, Dev Leads, DBAs, Sys Admins
- Client Business Sponsors and Business Leads
- JDA Team Members – Functional / Tech / PM
- Partner / SI Team Members
Objective:
- Define all tasks / deliverables for the implementation, along with RACI for RICEFs (Reports / Integrations / Config. / Extensions / Forms). A must for projects with a partner
- Identify key decisions that are required (the actual decisions will most probably be made later)
Technical Planning (Cont.)
Topics to be covered for Tech Planning:
- Solution Footprint / Project Scope & Timelines
- HW Architecture & Software Pre-reqs. What comes from JDA and what the client is responsible for
- Optional Software/HW – Load Balancer / Proxy Server / Enterprise Scheduler
- Environments Strategy – number of environments and how to manage changes
- Integration Approach
- Batch Development
- Security Setup – OS / DB / Application
- Roles and Responsibilities – Client Team vs. JDA Team
Technical Deliverables

Prepare Phase
Activity: Tech Planning
Deliverables: Project Plan; Integration Approach; Environment Strategy; Roles & Responsibilities
Resources: JDA Tech Lead; Client IT Lead; JDA & Client PMs

Design Phase
Activity: Tech Requirements Review
Deliverables: Technical Design Doc
Resources: JDA Tech Lead; Client IT Lead

Activity: Fill Out / Revisit the Sizing Survey
Deliverables: Formal JDA Sizing Proposal
Resources: JDA Tech & Functional Leads; Sizing SME

Activity: Solution Prototyping
Deliverables: Installed Prototype Env.; Initial Data Mapping; One-Time Data Loads
Resources: JDA Tech and Functional Consultants; Client IT

Activity: Performance Review
Deliverables: Perf. Inputs for Env. Setup; Performance Test Plan
Resources: JDA Perf. Services / Tech Con; Client IT
Technical Deliverables (cont.)

Construct Phase
Activity: Set Up Dev Environment
Deliverables: Install Software; Set Up Batch Framework; Set Up Security; Configure JDA Environment – Set Up UDCs, UDTs; AYI Documentation
Resources: JDA Tech; Client IT

Activity: Develop Interfaces
Deliverables: Finalized Data Mapping; Interface Scripts
Resources: Inbound – Client IT; Outbound – JDA

Activity: Develop JDA Scripts / Reports
Deliverables: Functional/Technical Specs; SQL and SRE XML Scripts; Monitor Scripts; Reports; Unit Test Scripts with Sample Data
Resources: JDA Tech

Activity: Develop Batch Flow
Deliverables: Batch Flow Documentation – Spreadsheet / Visio for Daily/Weekly/Monthly Batch
Resources: JDA Functional & Tech
Technical Deliverables (cont.)

Validate Phase
Activity: Set Up UAT Environment
Deliverables: Install Software + Patches (as required); Set Up Batch Framework; Promote JDA Configuration (using ECM) and Batch Scripts / Interfaces
Resources: JDA Tech & Client IT

Activity: Set Up Batch Scheduler
Deliverables: Fully Automated Batch Schedule with Dependencies; Supporting Documentation
Resources: JDA & Client IT

Activity: Run SIT / UAT
Deliverables: Initial Data Loads for Full Volumes; Fully Automated Batch; Defect Fixes (as required)
Resources: JDA & Client IT

Activity: Conduct Performance Tuning
Deliverables: Updated SRE & DB Config.; Performance Report; Performance Baseline
Resources: JDA Perf. Services & JDA Tech

Activity: Develop Cutover Plan
Deliverables: Detailed Cutover Plan
Resources: JDA Functional & Tech
Technical Deliverables (cont.)

Deploy Phase
Activity: Execute Cutover per the Cutover Plan
Deliverables: Install Software + Patches (as required); Set Up Security – OS / DB / Application; Promote JDA Configuration (using ECM) and Batch Scripts / Interfaces; Apply SRE & DB Config Changes for Performance; Set Up Batch Scheduler and Promote the Batch Schedules; Run Initial Loads; Initiate Nightly / Weekly Batches
Resources: JDA & Client Teams

Activity: Post Go-Live Support
Deliverables: Provide Support as Required; Complete Knowledge Transfer to Client Teams
Resources: JDA Tech
Technical Deliverables (cont.)

Transition Phase
Activity: Complete Transition to Support / Cloud
Deliverables: Performance Baseline Doc for the Production Environment; Finalized Transition Documentation; Archive All Project Documentation in JCES
Resources: JDA, Support and Cloud Teams
Integration Approach
For inbound integrations, SCPO has adopted an "Open Integration" approach – agnostic to the integration tools used. Some influencing factors:
- Client's integration tool / platform of choice
- Available skill set to develop and support these integrations
JDA can support the entire spectrum of integration approaches. The degree of sophistication ranges from simple (flat files with SQL*Loader) to complex (an ETL tool with complex transformations).
Integration Approach (cont.)
- All referential integrity is built in at the database level. This dictates the sequence for populating tables for any given application. E.g., before loading SKU, ITEM and LOC would have to be loaded
- SCPO & Collab provide a separate set of shadow tables (and associated Oracle stored procedures) in a separate schema – IGP
- Once data is loaded into these IGP tables, Oracle stored procedures are invoked to move the data to the application tables
- Other options, such as direct-to-table loads, are also often employed for high-volume implementations
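Because the constraints are enforced in the database, loads must follow dependency order. A minimal sketch of the SKU example above – column lists are simplified for illustration; the actual SCPO tables carry many more mandatory columns:

```sql
-- Parents first: ITEM and LOC must exist before SKU, which references both.
INSERT INTO item (item) VALUES ('ITEM_001');
INSERT INTO loc  (loc)  VALUES ('DC_01');

-- Child last: the SKU row's foreign-key columns now resolve.
INSERT INTO sku  (item, loc) VALUES ('ITEM_001', 'DC_01');
COMMIT;
```

Reversing this order would fail with an ORA-02291 parent-key-not-found error.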
Interface Generation Program (IGP)
SKU example: source systems load data through the integration tool of choice into the IGP schema staging tables – INTINS_SKU (for inserts), INTUPD_SKU (for updates), INTUPS_SKU (for inserts & updates) – which have no constraints enabled. A stored procedure then moves the data to the target SKU table in the SCPO schema, where all constraints are enabled. Rejected records land in INTERR_SKU, and job tracking is recorded in INTJOBS.
Technical Development Effort
Technical effort is primarily in two areas:
- Development of inbound integrations from source systems to IGP
  - Requires knowledge of the source systems; dependent on the source system technology platform
  - Typically done by client resources; JDA functional resources assist with data mapping
- Scripting effort for invoking the stored procedures and subsequent error checking
  - A single generic wrapper is used, with different job ids / stored procedures passed as parameters
  - JDA resources will guide / help with development
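Conceptually, each wrapper invocation runs an anonymous block like the one below. The procedure name, parameter, and INTERR_ column are hypothetical placeholders, not the actual IGP signatures:

```sql
DECLARE
  v_err_count NUMBER;
BEGIN
  -- Move staged SKU rows from the IGP schema to the SCPO target table.
  -- (Illustrative procedure name and parameter; the real IGP SP differs.)
  igp.move_sku(p_job_id => 1001);

  -- Error checking: rows failing constraint checks land in INTERR_SKU.
  SELECT COUNT(*) INTO v_err_count
  FROM   igp.interr_sku
  WHERE  job_id = 1001;

  IF v_err_count > 0 THEN
    raise_application_error(-20001,
      v_err_count || ' SKU rows rejected; see INTERR_SKU');
  END IF;
END;
/
```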
If We Use IGP
Pros:
- Do not have to worry about referential integrity / constraints – taken care of by the IGP stored procedures
- Provides error handling in the form of the INTERR_ tables
Cons:
- Extra step of loading into the IGP tables (e.g. using Informatica) and then loading to the target tables via the IGP stored procedures
- Extra scripting / job scheduling to manage the IGP-to-SCPO step
If We Do Not Use IGP
Pros:
- One less step – saves time
- Less development of additional scripts
Cons:
- Referential integrity checks have to be built into the integration
- The ETL layer would have to handle rejected records and provide tracking/status of jobs
IGP or Not?
- Use IGP for master data tables for sure (DMDUNIT, ITEM, LOC / PRODUCT / LOCATION / PLANNING ITEM)
- Consider alternate ways for high-volume tables (HIST / SKU / DFUVIEW)
- Direct inserts for tables like DFU / demand PARAM tables / DFUMAP / DFUTOSKU
- For Collab, PL/SQL scripts for loading planning component data (with certain assumptions)
- When not using IGP, account for error handling
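For direct loads that bypass IGP, one common Oracle option for the error handling is DML error logging: bad rows are diverted to an error table instead of aborting the load. A sketch, assuming a staging table STG_DFU and a simplified DFU column list (both illustrative):

```sql
-- One-time setup: create an error table for DFU (name is our choice).
BEGIN
  DBMS_ERRLOG.create_error_log(dml_table_name     => 'DFU',
                               err_log_table_name => 'ERR$_DFU');
END;
/

-- Direct insert: rows violating constraints go to ERR$_DFU,
-- tagged 'nightly_load', rather than failing the whole statement.
INSERT INTO dfu (dmdunit, dmdgroup, loc)
SELECT dmdunit, dmdgroup, loc
FROM   stg_dfu
LOG ERRORS INTO err$_dfu ('nightly_load') REJECT LIMIT UNLIMITED;
COMMIT;
```

This mirrors what the INTERR_ tables give you under IGP: a per-load record of rejects that can be reviewed and reprocessed.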
SCPO – UDTs vs. UDCs

User Defined Tables (UDTs):
- Useful for storing custom data specific to a customer which may not easily map to an existing table
- Table added at the Oracle level (along with a PK) and then enrolled using JDA foundation utilities. Once enrolled, further columns can be added to the UDT using Data Model Manager
- Create UDTs in the JDA SCPO schema only
- The UDT must have at least one primary key for Flexible Editor and Compound Workspaces to work properly

User Defined Columns (UDCs):
- Useful for storing custom data specific to a customer which maps directly to an existing JDA table, or used for search criteria / scripting logic
- UDCs can only be added using Data Model Manager
- Create UDCs on demand and fulfillment tables in the SCPO schema
- UDCs can be removed through Data Model Manager too
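A UDT therefore starts as plain Oracle DDL in the SCPO schema, with a primary key, before being enrolled with the JDA foundation utilities. A hypothetical example following the conventions described later in this deck (table and column names are illustrative):

```sql
-- Hypothetical UDT following the UDT_ naming convention.
CREATE TABLE scpomgr.udt_store_attrib (
  loc       VARCHAR2(50) NOT NULL,
  region    VARCHAR2(20) DEFAULT 'UNASSIGNED',  -- default on a non-PK column
  open_date DATE,
  -- PK is required for Flexible Editor and Compound Workspaces.
  CONSTRAINT pk_udt_store_attrib PRIMARY KEY (loc)
);
```

Enrollment itself is done through the JDA foundation utilities and Data Model Manager, not SQL.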
SCPO – UDTs vs. UDCs (Cont.)

User Defined Tables (UDTs):
- Talk to Support / Tech Con before dropping / altering a UDT
- Rigorous procedures and change control should be followed when creating UDTs
- UDTs need to be created and enrolled in each and every environment. Ongoing changes to existing UDTs can be promoted using the JDA-foundation-provided change management utilities (MCM)
- Any schema changes to a UDT require the corresponding IGP tables to be regenerated

User Defined Columns (UDCs):
- NEVER alter/remove a UDC at the DB level
- Rigorous procedures and change control should be followed when creating UDCs. Get sign-offs on the creation of UDCs from the Data Architect & business teams, justifying the need for them
- UDCs can be promoted from one environment to another using the JDA-foundation-provided change management utilities (MCM)
- Any additions/deletions of UDCs require the corresponding IGP tables to be regenerated
SCPO – UDTs vs. UDCs (Cont.)

Best Practices – UDTs:
- UDTs can be standalone or modeled on a "like" table in the SCPO schema
- Follow a naming convention: start with "UDT_"
- Define a default value for non-PK columns in the UDT, if possible

Best Practices – UDCs:
- UDCs should not be made part of the primary key on base SCPO tables
- Avoid creating UDCs on output tables for JDA processes (HIST / FCST / PLANARRIV etc.)
- Follow a naming convention ("U_")
- Define a default value for all UDCs, if possible

Avoid building a parallel application using UDTs and UDCs.
UDCs – Factors to Consider
Before creating a UDC, always consider:
- How will the UDC be populated – integration or user maintained?
- A UDC will never be read or populated by a core JDA process, but it can impact the performance of that process
- Null or not null? Default value?
- UDCs on a base table are preferred, assuming a limited / manageable number of UDCs: they support key user navigation / searches / scripting logic, take less space, and are intuitive for UI navigation
UDTs – Factors to Consider
- What will it be used for? Viewing data or supporting some user-maintained process
- How will the UDT be populated – integration or user maintained?
- A UDT will never be read or populated by a core JDA process, and it has no bearing on performance
- Impact on user navigation / usability – requires additional training and attention from users
- Space – more than UDCs
Batch Framework
- It is JDA's intellectual property
- Built to minimize the time and effort for batch development: easier introduction of new resources (familiarity); standardized reporting of batch metrics; scheduler setup
- Simplifies specific activities: performance tuning (configurable AWR and other Oracle metrics); debugging; logging
- Developed for SCPO. Wrapper scripts available for Windows / Unix / Linux. Can be easily extended to other Oracle-based apps
Batch Framework (Cont.)
- Must be used for all new implementations – cloud or on premises. Consider for use in upgrades
- Simplified installation and deployment utility on the GTS Team site

Deployment on Batch Server:
- Directories and scripts owned by the JDA instance owner
- One framework installation for each JDA environment
- Zip file with framework components and default directory structure
- Batchenv must be updated for each environment

Folder – Description:
- …/ – Root folder for the batch framework and custom processes
- …/fwk – Core scripts and configuration files used by the batch framework
- …/fwk/dbinstall – Scripts for creating the framework database objects
- …/batch – All custom batch scripts, e.g. ksh, sql, xml etc.
- …/batch/ksh – Custom shell scripts
- …/batch/sql – Custom SQL scripts
- …/batch/xml – XML files for SRE etc.
- …/doc – Documents related to the framework and environment
- …/log – All log files generated by the batch framework
- …/tmp – All temporary files generated by the framework and other scripts
- …/data – Data files in and out of the environment
- …/data/in – Inbound data files
- …/data/out – Outbound data files
Batch Framework (Cont.)
Deployment on Database:
- Framework components owned by a dedicated account, JDABATCH
- Package for DDL support created in each application schema
- Scripts to create the JDABATCH user and related objects
- Scripts to grant typical privileges to JDABATCH for running processes

Installation Steps:
- Review the scripts located in fwk/dbinstall
- Execute install_fwk.sql using sqlplus from a DBA account. This step installs the JDABATCH schema and the core framework components. Review the output for errors
- Review install_others.sql. Make a copy and update it according to environment requirements
- Execute the modified install_others.sql using sqlplus from a DBA account. This step installs the DDL utility in the application schemas and grants privileges on the application schemas to JDABATCH. Review the output for errors
Batch Framework (Cont.)
For batch development:
- Review the wrapper scripts
- Follow naming conventions for scripts – dmd_ / ful_ / mp_
- Tie pre & post processing to the main script as required – <script_name>_pre / _post
- Update the PROCESS table with the appropriate entry
- Execute the process with the appropriate command line to unit test
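Registering a new script with the framework might look like the following; the PROCESS column names are taken from the configuration examples later in this deck, but the exact table definition may differ per framework version:

```sql
-- Register the demand script with the batch framework (illustrative values;
-- verify the PROCESS table definition in your framework installation).
INSERT INTO process (process, descr,
                     preprocess_type, process_type, postprocess_type)
VALUES ('dmd_upd_dmdpost', 'Update demand post date for all DFUs.',
        NULL, 'SQL', NULL);
COMMIT;
```

The process can then be unit tested from the command line with the wrapper, e.g. run_process.ksh dmd_upd_dmdpost.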
Examples
Example 1 – Update DmdPostDate

---------------------------------------------------------
-- NAME    : dmd_update_dmdpost
-- PURPOSE : Set demand post date to last Sunday.
---------------------------------------------------------
DECLARE
  v_dfu_count NUMBER;
  v_dmdpost   DATE := trunc(sysdate, 'D');
BEGIN
  pkg_log.info('New dmdpostdate = ' || to_char(v_dmdpost, 'MM/DD/YYYY'));

  UPDATE dfu SET dmdpostdate = v_dmdpost;
  v_dfu_count := SQL%ROWCOUNT;
  pkg_job.add_volume(v_dfu_count);
EXCEPTION
  WHEN OTHERS THEN
    pkg_job.fatal_error;
    raise;
END;
/
COMMIT;
exit
Examples
Example 1 – Update DmdPostDate

Configuration (PROCESS table):
Process: Dmd_upd_dmdpost
Descr: Update demand post date for all DFUs.
PreProcess_Type: (none)
Process_Type: SQL
PostProcess_Type: (none)

Run process: run_process.ksh dmd_upd_dmdpost
Examples
Example 2 – Calc Model (pre)

---------------------------------------------------------
-- NAME    : dmd_calc_model_pre.sql
-- PURPOSE : Prepare for calculate model for all DFUs.
---------------------------------------------------------
BEGIN
  pkg_log.debug('Truncate PROCESSDFU');
  scpomgr.pkg_ddl.truncate_table('PROCESSDFU');

  pkg_log.debug('Truncate DFUEXCEPTION');
  scpomgr.pkg_ddl.truncate_table('DFUEXCEPTION');

  pkg_log.debug('Truncate SMOOTHSEASON');
  scpomgr.pkg_ddl.truncate_table('SMOOTHSEASON');
EXCEPTION
  WHEN OTHERS THEN
    pkg_job.fatal_error;
    raise;
END;
/
COMMIT;
exit
Examples
Example 2 – Calc Model (post)

---------------------------------------------------------
-- NAME    : dmd_calc_model_post.sql
-- PURPOSE : Post processing for calculate model.
---------------------------------------------------------
DECLARE
  v_count NUMBER;
BEGIN
  scpomgr.pkg_ddl.gather_table_stats(table_name => 'FCSTDRAFT', degree => 8);

  -- Capture volume if info logging is turned on.
  IF pkg_log.is_info THEN
    SELECT num_rows INTO v_count
    FROM   all_tables
    WHERE  table_name = 'FCSTDRAFT'
    AND    owner = 'SCPOMGR';
    pkg_job.add_volume(v_count);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    pkg_job.fatal_error;
    raise;
END;
/
COMMIT;
exit
Examples
Example 2 – Calc Model

Configuration (PROCESS table):
Process: Dmd_calc_model
Descr: Run Calc Model for all DFUs.
PreProcess_Type: SQL
Process_Type: SRE
PostProcess_Type: SQL

Run process: run_process.ksh dmd_calc_model
Housekeeping Tasks
- Essential for maintaining a stable, functional solution over time
- Must be built into the daily / weekly / monthly batch
- Common scripts are provided as part of the Batch Framework
Housekeeping Tasks (Cont.)
Tasks with no impact to functional data: old log files (App Server / SRE / Batch logs) – archive and then clear.

Description: XML log files generated by SRE processes
Extension / Search Criteria: *.log.xml with timestamp older than 7 days
Location: <JDA_INSTALL>/config/logs
Servers: From wherever the batch is run

Description: Log files generated by JDA node pools
Extension / Search Criteria: manu_sre*.log with timestamp older than 14 days
Location: <JDA_INSTALL>/config/logs
Servers: On each install from where the node pools are run

Description: Files generated by FE exports and imports
Extension / Search Criteria: ImportExport_*.csv, ImportExport_*.log
Location: Configurable
Servers: On each install from where the node pools are run

Description: Any jdacore files generated due to process failures
Extension / Search Criteria: jdacore_*.xml with timestamp older than 7 days
Location: <JDA_INSTALL>/config/logs
Servers: On each install from where the node pools are run

Description: Application server log files
Extension / Search Criteria: manu_*.log, on each managed node in the application cluster
Location: <JDA_INSTALL>/config/logs
Servers: On app servers

Description: Files generated during export / import
Extension / Search Criteria: *txt_from_clob, *csv_from_clob, on each managed node
Location: <JDA_INSTALL>/config/JDADomain
Servers: On app servers

Description: WebLogic logs
Extension / Search Criteria: access.log*, WebWORKS?.log*, with timestamp older than 14 days, on each managed node in the application cluster
Location: <JDA_INSTALL>/config/JDADomain/servers/WebWORKS1/logs, <JDA_INSTALL>/config/JDADomain/servers/WebWORKS2/logs
Servers: On app servers

Description: Log files generated by batch scripts
Extension / Search Criteria: *.log
Location: Client specific
Servers: On app or SRE server, from where the batch is run
Housekeeping Tasks (cont.)
- WWFMGR.SRE_JOB_SUMMARY – weekly. Retain logs for the batch user for a longer duration (about four months):

-- Retain records for all other users for up to 7 days
DELETE FROM wwfmgr.sre_job_summary
WHERE submission_date < (sysdate - 7)
  AND user_name NOT IN ('Daemon', 'BatchUser');

-- Retain records for up to four months for the batch user
DELETE FROM wwfmgr.sre_job_summary
WHERE submission_date < trunc(sysdate - 120, 'MM')
  AND user_name = 'BatchUser';

- BATCHMGR.JOB_SUMMARY – same frequency and retention duration as above
- JDA_INTJOBS – same frequency and retention duration as above
- INTERR_ tables – clear before a new load / maybe retain for a week. Built in as part of the IGP load scripts
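The BATCHMGR.JOB_SUMMARY cleanup follows the same pattern; assuming its columns mirror SRE_JOB_SUMMARY (verify against the actual table before use), it would look like:

```sql
-- Retain 7 days for normal users, about four months for the batch user
-- (column names assumed to mirror WWFMGR.SRE_JOB_SUMMARY).
DELETE FROM batchmgr.job_summary
WHERE submission_date < (sysdate - 7)
  AND user_name NOT IN ('Daemon', 'BatchUser');

DELETE FROM batchmgr.job_summary
WHERE submission_date < trunc(sysdate - 120, 'MM')
  AND user_name = 'BatchUser';
COMMIT;
```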
Housekeeping Tasks (cont.)
Tasks with functional impact (application data) – mostly built in as part of batch scripts / functional design:
- Purge demand history older than the required duration. CDS will enforce this
- Purge HISTFCST older than a set duration (e.g. forecast dates > 1 year)
- Purge DFUEXCEPTION / SKUEXCEPTION – clear at the beginning of the batch
- Purge SKUPROJSTATIC / SKUSTATSTATIC / FCSTPERFSTATIC – before running the batch job
- Process for discontinuing & deleting DFUs based on business logic (e.g. inactive DMDUNITs, store closings, inactive assortments etc.). Design the process to include changes to upper-level DFUs as well. A similar process applies to SKUs
- Keep PLANNING_ITEMs (Collab) in sync with changes in Demand / Fulfillment
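The HISTFCST purge, for example, can be a simple dated delete wired into the weekly batch. A sketch assuming a one-year retention and a date column named FCSTDATE (the column name and retention are illustrative; match them to the actual design):

```sql
-- Purge forecast history older than one year (illustrative column name).
DELETE FROM histfcst
WHERE fcstdate < add_months(trunc(sysdate), -12);
COMMIT;
```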