Streamlining End-to-End Testing Automation

abagmar · 75 slides · Jun 21, 2024

About This Presentation

Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines

Automating end-to-end (e2e) tests for Android and iOS native apps, and web apps, within Azure build and release pipelines poses several challenges. This session dives into the key challenges and the repeatable...


Slide Content

Streamlining End-to-End Test Automation
Anand Bagmar
Software Quality Evangelist
@BagmarAnand

About Me
Anand Bagmar (@BagmarAnand)

Ground Reality
•Distributed teams, hybrid working, different network setups and speeds
•Many teams (>100)
•Mac, Windows and Linux laptops; different software versions; certificates, policies and multiple VPNs
Inconsistent Developer & SDET experience
•Test execution environment setup is tedious
Test automation toolset
•System Tests (e2e & component UI): teswiz (Appium, Selenium, Applitools, JDK 17)
•Emulator/Simulator setup (Android SDK, Xcode)
•API & API Workflow tests: karate
•Contract tests: Specmatic
•Unit tests, Sonar code quality checks
Complex path to production
•Many environments, test data, branches
•Configuring appropriate test execution in build and release pipelines
CI execution
•ADO agents: Windows Server & Linux agents
•Firewall restrictions to download dependencies
•Direct access prohibited to CI agents
•Multiple node & JDK versions
•Connectivity issues to application-under-test
•No browsers/devices on CI agents
@BagmarAnand

Path to Production
@BagmarAnand

Getting a simple automated test to run consistently for all Developers and Testers, and in CI (ADO), is painful!
•Setup
•Execution (full or specific tests)
@BagmarAnand

Challenges of End-2-End Test Automation
•Ensuring Test Environment Consistency
•Coordinated Test Execution
•Test setup & execution on CI Agents
@BagmarAnand

Solutions Implemented
@BagmarAnand

#1: Consistent Environment Setup
@BagmarAnand

•Setup important applications on Mac
•https://gist.github.com/anandbagmar/92b9f92298b1e17fa32c3404ad115871
•Script to setup Android SDK on Mac
•https://github.com/anandbagmar/AppiumJavaSample/blob/master/setupAndroidSDK.sh
•Script to setup Android SDK on Linux
•https://github.com/anandbagmar/AppiumJavaSample/blob/master/setup_linux.sh
Test Authoring Environment Setup
@BagmarAnand
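For flavour, a minimal sketch of the kind of toolchain such a Mac setup script installs. This is illustrative only; the gist linked above is the actual script, and it assumes Homebrew is already installed.

# Sketch: core toolchain for authoring e2e tests on a Mac (not the gist's contents)
brew install openjdk@17              # JDK 17 used by teswiz
brew install node                    # node/npm for tooling and Appium
brew install --cask android-studio   # Android Studio bundles the Android SDK and emulators
npm install -g appium                # Appium server for mobile tests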

•Node script to install all dependencies (for system tests):
•https://github.com/znsio/getting-started-with-teswiz/blob/main/package.json
•npm install – and you are ready!
Test Execution Environment Setup
@BagmarAnand

#2: Test Automation Framework support
@BagmarAnand

•Setup should be simple – Ex:
•git pull
•./gradlew build
•No code change required for
•Running tests against any environment (local, dev, qa, staging, prod, etc.)
•Test data and environment configurations are separately maintained
•Running all or a subset of tests
•Tests should run from command-line
Test Automation Framework Criteria
@BagmarAnand
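For illustration, a minimal sketch of that "no code change" command-line experience; the environment/tag variable names are hypothetical, not necessarily teswiz's exact configuration keys.

git pull
./gradlew build                                   # one-time setup / compile
TARGET_ENVIRONMENT=qa TAG=@smoke ./gradlew run    # run a subset against the qa environment
TARGET_ENVIRONMENT=prod ./gradlew run             # run the full suite against prod configuration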

teswiz: Your Ultimate Open-Source Solution to Automate Real-User Scenarios!
@BagmarAnand

Architecture
@BagmarAnand

[Architecture diagram: Test Authoring, Execution Setup, running tests from the CLI or a CI Tool, Execution Reports, Feature coverage]
@BagmarAnand

•Web browsers
•Mobile-web browsers
•Android apps
•iOS apps
•Windows desktop apps
•Electron apps
Platform support
@BagmarAnand

•Open source framework to automate real-user scenarios
•Multi-user
•Multi-device
•Multi-app
•Set up a HARD-GATE for your functional tests!
Unique capabilities of teswiz
@BagmarAnand

•Cloud device farm integrations
•Applitools AI for validations
•Comprehensive reports with trend analysis, feature coverage, failure analysis using AI-ML
•CLI
•Configurable via defaults, property files and environment variables
Unique capabilities of teswiz
@BagmarAnand

CI Execution
@BagmarAnand

#3: Node Setup
@BagmarAnand

Use the right node version
@BagmarAnand

Use the right node version
@BagmarAnand
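One common way to meet this, as a sketch: assume nvm is available on laptops and CI agents, and the repo carries an .nvmrc file pinning the version. This is not necessarily the exact pipeline step shown on the slide.

# .nvmrc in the repo pins the node version, e.g. "18.19.0"
nvm install "$(cat .nvmrc)"   # install the pinned version if missing
nvm use "$(cat .nvmrc)"       # switch the shell / agent to it
node --version                # verify before running npm install / tests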

#4: Script for downloading artifacts
@BagmarAnand

•For System Tests, the artifact (apk/app) could have been generated from another pipeline
•This artifact needs to be available on the local or cloud device before tests can start execution
Script for downloading artifacts – Why?
@BagmarAnand

•Understand the CI tool APIs
•Script downloads the Android/iOS artifact for:
•Specific branch
•Latest successful build, or a specific build number
Script for downloading artifacts
@BagmarAnand
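A hedged sketch of such a download script using the Azure DevOps Build REST API (requires curl and jq). The organisation, project, pipeline id, branch and artifact name are placeholders, and the author's actual script may differ.

#!/usr/bin/env bash
# Sketch: download the latest successful build's artifact from Azure DevOps.
set -euo pipefail
ORG="https://dev.azure.com/your-org"      # placeholder
PROJECT="your-project"                    # placeholder
DEFINITION_ID=42                          # pipeline that builds the apk/app
BRANCH="refs/heads/main"
ARTIFACT="app-release"
AUTH=(-u ":${AZURE_DEVOPS_PAT}")          # PAT with Build (read) scope

# 1. Find the latest successful build for the branch
BUILD_ID=$(curl -s "${AUTH[@]}" \
  "${ORG}/${PROJECT}/_apis/build/builds?definitions=${DEFINITION_ID}&branchName=${BRANCH}&statusFilter=completed&resultFilter=succeeded&\$top=1&api-version=7.0" \
  | jq -r '.value[0].id')

# 2. Download that build's artifact as a zip and unpack it
curl -s "${AUTH[@]}" -o artifact.zip \
  "${ORG}/${PROJECT}/_apis/build/builds/${BUILD_ID}/artifacts?artifactName=${ARTIFACT}&api-version=7.0&%24format=zip"
unzip -o artifact.zip -d ./artifacts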

•Script uploads the Android/iOS artifact to your device farm
OR
•Teswiz can upload it automatically for you
Script for downloading artifacts – Bonus!
@BagmarAnand
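As an illustration of the upload step, BrowserStack App Automate accepts the artifact via a single authenticated call; your device farm's API will differ.

# Upload the downloaded apk/app to the device farm (BrowserStack example, illustrative path)
curl -u "${BROWSERSTACK_USERNAME}:${BROWSERSTACK_ACCESS_KEY}" \
  -X POST "https://api-cloud.browserstack.com/app-automate/upload" \
  -F "file=@./artifacts/app-release.apk"
# The response contains an app_url (bs://...) to pass to the test run / framework config.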

#5: Proxy handling
@BagmarAnand

•Understand which dependencies in your framework need proxy information. Ex:
•Gradle/maven
•Downloading newer versions of browser drivers
•Any external connectivity
•The framework should be configurable to pass this at test execution time. Ex:
•No proxy required for local laptop execution
•Proxy required when running tests from CI
Proxy Handling
@BagmarAnand
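A sketch of what a helper like the updateGradlePropertiesForDevOps.sh script on the next slide might do: append Gradle's standard proxy properties only for CI runs. The CI flag and proxy host/port values are placeholders.

#!/usr/bin/env bash
# Append proxy settings to gradle.properties only when running in CI.
if [ "${RUNNING_IN_CI:-false}" = "true" ]; then
  cat >> gradle.properties <<EOF
systemProp.http.proxyHost=${PROXY_HOST}
systemProp.http.proxyPort=${PROXY_PORT}
systemProp.https.proxyHost=${PROXY_HOST}
systemProp.https.proxyPort=${PROXY_PORT}
systemProp.http.nonProxyHosts=localhost|*.internal.example.com
EOF
fi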

updateGradlePropertiesForDevOps.sh
@BagmarAnand

#6: Downloading dependencies - Uber jar
@BagmarAnand

•To reduce the number of dependencies to be downloaded, teswiz is built as an uber jar
•Specify only “teswiz” as a dependency in your test framework
Uber jar
@BagmarAnand

•Run as a java process
•./gradlew run
Uber jar
@BagmarAnand
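Both invocation styles, side by side; the jar path and name are illustrative.

./gradlew run                                  # via Gradle, with teswiz as the single declared dependency
java -jar build/libs/my-e2e-tests-all.jar      # or directly as a java process (illustrative jar name)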

#7: Run browser in docker
@BagmarAnand

•CI agents may not have browsers installed
•The installed browser may be an older version
Run browser in docker – Why?
@BagmarAnand

•Should support any OS/architecture
•Should allow choosing the browser (ex: Firefox, Chrome, etc.)
•Should allow starting the containers with specific project names and dynamic ports to prevent conflicts between multiple test executions
•Should support specifying proxy information
•Can be used on local laptops as well as in CI executions
Run browser in docker
@BagmarAnand
https://github.com/znsio/teswiz/blob/main/dockerContainers.sh
https://github.com/znsio/teswiz/blob/main/docker-compose-v3.yml
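A minimal sketch of the idea behind those scripts; the linked dockerContainers.sh and docker-compose-v3.yml are the real implementation, and the image, port and variable choices here are illustrative.

#!/usr/bin/env bash
# Start a Selenium standalone browser container with a unique name and dynamic port,
# so that parallel runs on the same agent do not clash.
set -euo pipefail
BROWSER="${BROWSER:-chrome}"              # chrome, firefox, edge ...
PROJECT="e2e-${BUILD_BUILDID:-local}"     # BUILD_BUILDID is set by Azure DevOps
PORT=$(( (RANDOM % 1000) + 4444 ))        # naive dynamic port selection

docker run -d --name "${PROJECT}-${BROWSER}" \
  --shm-size=2g \
  -p "${PORT}:4444" \
  -e HTTPS_PROXY="${HTTPS_PROXY:-}" \
  "selenium/standalone-${BROWSER}:latest"

echo "RemoteWebDriver endpoint: http://localhost:${PORT}/wd/hub"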

#8: Template for running tests in build pipeline
@BagmarAnand

Path to Production
@BagmarAnand

•Create templates
•Reuse with appropriate configuration parameters
Running tests in Build Pipeline
@BagmarAnand

#9: Task Groups for running tests in release pipeline
@BagmarAnand

Path to Production
@BagmarAnand

•Create Task Groups
•Include in each relevant stage of Release pipeline
Running tests in Release Pipeline
@BagmarAnand

#10: Hard Gate - Make your tests valuable!
@BagmarAnand

@BagmarAnand
What is a Hard Gate? Why is it required?
https://github.com/znsio/teswiz/blob/main/docs/HardGate.md
•Automated tests should allow you to take decisions on product quality

@BagmarAnand
What is a Hard Gate? Why is it required?
https://github.com/znsio/teswiz/blob/main/docs/HardGate.md
•For every test execution cycle:
•Passing tests are expected to pass
•Known Failing tests are supposed to fail, unless:
•The product (bug) is fixed, OR
•The test is fixed/updated
If either criterion is not met, the build should fail!

@BagmarAnand
Hard Gate - Make your tests valuable!
https://github.com/znsio/teswiz/blob/main/docs/HardGate.md
Build passes if the Hard Gate criteria are met.
Build fails if:
•one or more passing tests have failed, or
•one or more known-failing tests have passed
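A simplified, count-based illustration of the gate; teswiz implements this internally as described in HardGate.md, and the numbers and result sources here are placeholders.

#!/usr/bin/env bash
# Fail the pipeline unless actual results match the expected pass / known-fail split.
EXPECTED_PASS=42         # maintained by the team
EXPECTED_KNOWN_FAIL=3    # known product bugs / known failing tests
ACTUAL_PASS="$1"         # supplied by the test run's summary
ACTUAL_FAIL="$2"

if [ "${ACTUAL_PASS}" -ne "${EXPECTED_PASS}" ] || [ "${ACTUAL_FAIL}" -ne "${EXPECTED_KNOWN_FAIL}" ]; then
  echo "Hard Gate violated: expected ${EXPECTED_PASS} passing / ${EXPECTED_KNOWN_FAIL} known-failing, got ${ACTUAL_PASS} / ${ACTUAL_FAIL}"
  exit 1   # build fails
fi
echo "Hard Gate criteria met - build passes"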

@BagmarAnand
Functional (e2e) Automation as a Hard Gate!
https://github.com/znsio/teswiz/blob/main/docs/HardGate.md

#11: Applitools Visual AI for validations
@BagmarAnand

AI-powered Validations
@BagmarAnand

With Applitools AI: 90% less code to write & maintain, with infinite coverage. Every element is validated to look & work perfectly.
Use Applitools Ultrafast Grid (UFG):
•Test is simpler – one call to Applitools (eyes.checkWindow()) validates the full screen
•Run the test once
•Get results from all browsers automatically
•Less test data
•No additional load on the application environment
@BagmarAnand

•Works for all platforms
•Native & hybrid apps – Android, iOS
•Web browsers
•Desktop applications
•Electron applications
•Seamless scaling using Applitools Ultrafast Grid
@BagmarAnand
Applitools Visual AI

Specify as many browsers with viewports and devices as required for validation.
You do not need to do cross-browser validation at the end anymore!
@BagmarAnand
AI-powered Cross Browser Test Automation

#12: reportportal as a Central reporting server
@BagmarAnand

A central reporting server for your organization
@BagmarAnand

Test Execution Real-time Status
•See progress of launches currently in progress
•Can also see details of tests that are currently running, up to the point of execution
@BagmarAnand

Test Execution Details – Device farm report link & Device logs
•The link to the device farm test execution dashboard is available in the result
•teswiz attaches browser logs/device logs automatically to the result in ReportPortal
@BagmarAnand

Test Execution Details – with screenshots
•The test result includes screenshots as captured by the test
@BagmarAnand

Test Execution Details – Applitools Visual AI Validation Results
•The test result includes the status of Applitools Visual AI validation
•Link to the Applitools dashboard is available in the result
@BagmarAnand

Test Execution Trend Analysis
•Each test shows the trend of its execution – giving an indication of (in)stability
@BagmarAnand

Test Results – Next Steps
•After investigating the failed tests, mark the failures with appropriate reasons (as configured)
@BagmarAnand

Auto-analysis of failed tests
•Analyse the failure reasons with the Auto-Analyzer, based on Machine Learning
@BagmarAnand

Auto Analysis of Test Failures
•Why waste time marking a test failed for the same reason as last time?
•ReportPortal can do this automatically for you with the Auto Analysis and Pattern Analysis features
@BagmarAnand

Test Result Visualization: Configure simple and understandable reports
•Create as many dashboards as relevant for the team
•Dashboards may be for different personas/roles, giving appropriate information
@BagmarAnand

•teswiz and karate test frameworks can automatically upload test results to your ReportPortal server
•sendToReportPortal: https://github.com/znsio/sendToReportPortal/blob/main/importResultsAndUpdateAttributes.sh
•Can upload JUnit test results generated by any type of tests to ReportPortal with relevant test execution metadata
reportportal.io
@BagmarAnand
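A hedged sketch of such an import using ReportPortal's launch import API; the linked script is the real implementation, and the server URL, project name and token are placeholders.

#!/usr/bin/env bash
# Zip JUnit XML results and import them into ReportPortal as a launch.
RP_URL="https://reportportal.example.com"   # placeholder
RP_PROJECT="my_project"                     # placeholder

zip -j results.zip build/test-results/test/*.xml

curl -s -X POST "${RP_URL}/api/v1/${RP_PROJECT}/launch/import" \
  -H "Authorization: Bearer ${RP_TOKEN}" \
  -F "file=@results.zip;type=application/zip"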

Challenges
•Ensuring Test Environment Consistency
•Coordinated Test Execution
•Test setup & execution on CI Agents
Solutions
Consistent environment setup
Test Automation Framework support
Node setup
Script for downloading artifacts
Proxy Handling
Downloading dependencies - Uber Jar
Browsers in docker
Template for build pipelines
Task groups for release pipelines
Hard Gate
AI for validations
Central reporting server
Summary
@BagmarAnand

@BagmarAnand
Anand Bagmar (@BagmarAnand)
Thank you