Unit-5 Selenium.pptx — about the Selenium tool

ranjith_kssr5 · 1 view · 99 slides · Oct 08, 2025
About This Presentation

A brief overview of the Selenium testing tool.


Slide Content

Selenium – Introduction, Selenium Features, JavaScript Testing

Selenium Introduction

Selenium Components

Benefits of Selenium IDE

Drawbacks of Selenium IDE

Selenium RC

Selenium Features: Selenium is an open-source, portable web testing framework. Selenium IDE provides a record-and-playback feature for authoring tests without the need to learn a test scripting language. Selenium supports various operating systems, browsers, and programming languages.

Selenium Features: Selenium can be integrated with build tools like Ant and Maven for source code compilation, and with testing frameworks like TestNG for application testing and report generation. Selenium requires fewer resources than most other test automation tools.

JavaScript Testing: A JS test framework is a tool for examining the functionality of JavaScript web applications. It helps ensure all the components work properly, and it makes bugs easy to identify. JavaScript unit testing is a method in which JavaScript test code is written for a web page or application module, combined with the HTML as an inline event handler, and executed in the browser to check that all functionality works as desired. These unit tests are then organized into test suites.
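The deck keeps its examples language-agnostic, so the shape of a unit test suite can be sketched with Python's built-in unittest module, structurally analogous to a JS test suite; the `slugify` function under test is an invented example.

```python
import unittest

def slugify(title):
    """Hypothetical page-module function under test (invented example)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # Each test_* method is one unit test; the class groups them into a suite.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Selenium Web Driver"), "selenium-web-driver")

# Run the suite from the command line with: python -m unittest <this_file>
```

The same describe/assert structure appears in the JS frameworks listed below (Mocha, Jest, Jasmine); only the syntax differs.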

JavaScript Testing: Karma is a test runner for JavaScript unit tests; Jasmine is a Cucumber-like behavior testing framework; Protractor is used for AngularJS.

The following JavaScript testing frameworks are helpful for unit testing in JavaScript: 1. Unit.js: an assertion library for JavaScript that runs on Node.js and in the browser. It works with any test runner and unit testing framework, such as Mocha, Jasmine, Karma, Protractor (an E2E test framework for Angular apps), QUnit, etc.

2. Mocha: a test framework running both on Node.js and in the browser. Mocha runs tests serially, which keeps asynchronous testing simple and allows flexible, accurate reporting while mapping uncaught exceptions to the correct test case.

3. Jest: an open-source JavaScript testing framework, designed mainly to work with React and React Native-based web applications.

4. Jasmine: a popular JavaScript behavior-driven development framework for unit testing JavaScript applications. It provides utilities to run automated tests for both synchronous and asynchronous code, and is also highly useful for front-end testing.

5. Karma: a Node-based test tool that lets you run your JavaScript tests across multiple browsers. Technically a test runner, it makes test-driven development fast, fun, and easy.

Testing backend integration points: The backend of a software application refers to the server side of the application and includes components like databases, application logic, APIs, server configuration, etc. Backend testing involves testing these server-side components to ensure they meet functionality, reliability, performance, and security requirements.

Testing backend integration points: Some critical aspects of backend testing include:
Testing database schemas and queries
Validating application APIs and integrations
Checking server configurations
Analyzing performance under load
Testing error and exception handling
Testing security protections

Testing backend integration points: SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) are two primary web service communication styles that require regular testing to ensure optimal functioning and security. Various tools are available to facilitate the testing process, such as SoapUI for SOAP and Postman for REST, providing a user-friendly interface and clear analysis of responses.

Testing backend integration points: REST (Representational State Transfer) is an architectural pattern widely adopted for modern web application development. It can manage key application components such as files, media, or objects. REST takes an API-first approach and uses the HTTP protocol for communication.

Testing backend integration points: SOAP means Simple Object Access Protocol, an API protocol designed with security and consistency of the data exchange process in mind. It uses the XML data format for communication and can only be used for web services. It can consume more bandwidth, as SOAP messages carry a large amount of data. In Java, the REST Assured library is commonly used for automated testing of REST services.
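The core idea behind tools like Postman, SoapUI, and REST Assured is the same: send a request to the service and assert on the status code and body. That idea can be sketched with only the Python standard library; the `/users` endpoint and its payload are invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A stand-in backend exposing an invented /users endpoint.
class FakeBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"users": [{"name": "Alan Turing"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FakeBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "test": call the endpoint and assert on status and payload.
resp = urlopen(f"http://127.0.0.1:{server.server_port}/users")
payload = json.loads(resp.read())
assert resp.status == 200
assert payload["users"][0]["name"] == "Alan Turing"
server.shutdown()
```

A dedicated API testing tool adds request collections, environments, and reporting on top of this same request/assert loop.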

Test-driven development Test-driven development (TDD) is a software development practice that emphasizes writing tests before writing the actual code. It follows a cyclical process of writing a failing test, writing the minimum code to make the test pass, and then refactoring the code.

Test-driven development: TDD is usually described as a sequence of events:
1. Implement the test
2. Verify that the new test fails
3. Write code that implements the tested feature
4. Verify that the new test passes together with the old tests
5. Refactor the code

Test-driven development: Implement the test. Start by writing the test, and write the code afterwards. To be able to write the test, the developer must find all relevant requirement specifications, use cases, and user stories.

Test-driven development: Verify that the new test fails. The newly added test should fail, because nothing yet implements the behavior. Write code that implements the tested feature. The code we write doesn't yet have to be particularly elegant or efficient.

Test-driven development: Verify that the new test passes together with the old tests. Refactor the code.
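The red-green-refactor loop above can be sketched with Python's unittest; the `is_valid_username` feature and its rule ("letters and spaces only, non-empty") are invented for the example.

```python
import unittest

# Step 3: minimal implementation, written only after the tests below
# existed (in the "red" phase they failed with a NameError).
def is_valid_username(name):
    # Just enough code to make the tests pass; refactor later.
    return bool(name) and name.replace(" ", "").isalpha()

class UsernameTest(unittest.TestCase):
    # Step 1: the tests were written first, from the (invented) requirement
    # "usernames contain only letters and spaces, and are non-empty".
    def test_accepts_letters_and_spaces(self):
        self.assertTrue(is_valid_username("Alan Turing"))

    def test_rejects_empty_and_digits(self):
        self.assertFalse(is_valid_username(""))
        self.assertFalse(is_valid_username("user42"))

# Steps 4-5: run the suite (python -m unittest) and refactor on a green bar.
```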

REPL-driven development: A Read-Eval-Print Loop (REPL) is a running environment (or program). A programmer can send input (code or data) to it (Read), have it evaluated (Eval), and have the result printed (Print) by the REPL. REPL-driven development encourages the programmer to work in small steps, experiment, and interact with the environment by continuously evaluating their code. This style of development is very common when working with interpreted languages such as Lisp, Python, Ruby, and JavaScript.

REPL-driven development: We write small functions that are independent and do not depend on global state. The focus is on writing small functions with no or very few side effects. We can combine this style of development with unit testing.

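The workflow can be illustrated with a Python REPL session: define a small side-effect-free function, evaluate it immediately, and later freeze the observed behaviour into assertions. The `word_count` function is a made-up example.

```python
# A typical REPL session (">>>" lines are typed, the rest is printed):
#
# >>> def word_count(text):
# ...     return len(text.split())
# >>> word_count("read eval print loop")
# 4
# >>> word_count("")
# 0
#
# Because the function is small and has no side effects, the behaviour
# observed interactively can be captured directly as unit-test assertions:
def word_count(text):
    return len(text.split())

assert word_count("read eval print loop") == 4
assert word_count("") == 0
```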
A complete test automation scenario: Example: Consider a complete test automation scenario for an organization's user database web application. The application consists of the following layers: a web frontend, a JSON/REST service interface, an application backend layer, and a database layer.

A complete test automation scenario: The test code will work through the following phases during execution: unit testing of the backend code; functional testing of the web frontend, performed with the Selenium web testing framework; functional testing of the JSON/REST interface, executed with SoapUI.

A complete test automation scenario: All the tests are run in sequence, and when all of them succeed, the result can be used as the basis for deciding whether the application stack is deemed healthy enough to deploy to a test environment, where manual testing can commence.
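The "run all stages in sequence, gate deployment on overall success" logic can be sketched as a small driver. The stage names mirror the phases above; each stage is represented here by a placeholder function rather than a real test-runner invocation.

```python
def run_pipeline(stages):
    """Run (name, fn) stages in order; stop at the first failure."""
    for name, stage in stages:
        if not stage():
            print(f"stage failed: {name}. Not healthy enough to deploy.")
            return False
    print("all stages passed. Deploy to the test environment.")
    return True

# Placeholder stages; in reality these would invoke the unit-test runner,
# Selenium, and SoapUI respectively.
stages = [
    ("backend unit tests", lambda: True),
    ("Selenium frontend tests", lambda: True),
    ("SoapUI JSON/REST tests", lambda: True),
]
healthy = run_pipeline(stages)
```

In a CI server, each lambda would be replaced by a call that runs the real tool and returns whether its exit status was zero.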

Manually testing our web application: Start a fresh test. This resets the database backend to a known state and sets up the testing scenario so that manual testing can proceed from a known state. The tester then points a browser at the application's starting URL.

Click on the Add User link. Add a user: enter a username (Alan Turing, in our test case). Save the new user. A success page will be shown.

Click on the Search User link. Search for Alan Turing. Verify that Alan is present in the result list.
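The manual steps above map almost one-to-one onto Selenium WebDriver calls. The sketch below uses Selenium's Python bindings; the base URL, link texts, and element names are assumptions about the application's markup, not taken from the deck. The browser-driving code is kept inside `main()` so the module can be loaded without Selenium or a browser installed.

```python
def main(base_url="http://localhost:8080/"):
    # Imports deferred: this function needs the selenium package and a browser.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get(base_url)                                   # start URL
        driver.find_element(By.LINK_TEXT, "Add User").click()  # Add User link
        driver.find_element(By.NAME, "username").send_keys("Alan Turing")
        driver.find_element(By.NAME, "save").click()           # success page
        driver.find_element(By.LINK_TEXT, "Search User").click()
        driver.find_element(By.NAME, "query").send_keys("Alan Turing")
        driver.find_element(By.NAME, "search").click()
        # Verify that Alan is present in the result list.
        assert "Alan Turing" in driver.page_source
    finally:
        driver.quit()
```

The same script, run unattended in the pipeline, replaces the manual click-through while exercising the identical scenario.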

Deployment of the system: Deployment systems. Deployment refers to the act of moving code or software from a development environment to a test, staging, or production environment, where it ultimately becomes accessible to end users. This phase involves tasks such as configuring servers, installing software components, and ensuring the application is ready for use. Deployment is a technical process that focuses on the infrastructure and technical aspects of getting the software up and running.

Why are there so many deployment systems? There are many deployment systems because deploying software can be complex. For example, consider an application with three main components: a web server (manages website requests), an application server (runs the main program), and a database server (stores the data). Manual deployment takes time and may lead to mistakes, and the manual work can be tedious.

Why are there so many deployment systems? Many companies have multiple servers and applications, and each server and application has different methods for updating. An application can run on a physical server, a virtual machine, or a container, and each requires its own kind of deployment system.

Main functions of deployment systems: automatically place the application on the server; apply updates or new versions of the application; set up the necessary configuration to run the application properly.

Virtualization stacks: Virtualization is a process that allows the creation of multiple simulated computing environments from a single pool of physical system resources. It is often used to run multiple operating systems on the same hardware at the same time.

Virtualization stacks: Virtualization is made possible by a software layer called the hypervisor. This software abstracts the resources of its host system (CPU, memory, storage space, or network bandwidth) and dynamically allocates them among a number of virtual machines (VMs) running on the system, based on the resource requests it receives.

Virtualization stacks: Virtualization allows you to create virtual computers (virtual machines) inside the physical system. These virtual machines act like real systems with their own virtual hardware. For example, to test newly developed software on different operating systems, you can use virtual machine software.

Virtualization stacks: Hypervisors are key to virtualization. A hypervisor is a special program that creates and manages virtual machines, sharing computer resources such as CPU and memory among the various VMs. There are two types of hypervisors. Bare-metal hypervisor: doesn't need a host OS, runs directly on the computer hardware (e.g., VMware ESXi). Hosted hypervisor: runs on top of an OS (e.g., KVM on Linux).

Executing code on the client

Puppet: Puppet is a system management tool for centralizing and automating the configuration management process. Puppet is also used as a software deployment tool. It is open-source configuration management software widely used for server configuration, management, deployment, and orchestration of various applications and services across an organization's whole infrastructure.

Puppet: Puppet is specially designed to manage the configuration of Linux and Windows systems. It is written in Ruby and uses its own Domain Specific Language (DSL) to describe system configuration.

What can Puppet do? For example, suppose you have an infrastructure with about 100 servers. As a system admin, it's your role to ensure that all these servers are always up to date and running with full functionality. (Figure: a system admin working manually on the servers.)

What can Puppet do? To do this, you can use Puppet, which allows you to write simple code that can be deployed automatically on these servers. This reduces human effort and makes the process fast and effective. (Figure: Puppet automates server management.)

Puppet performs the following functions:
It allows you to define distinct configurations for every host.
It continuously monitors servers to confirm that the required configuration is in place and has not been altered; if the configuration changes, Puppet reverts the host to the pre-defined configuration.
It provides control over all configured systems, so a centralized change takes effect automatically.
It is also used as a deployment tool, since it automatically deploys software to the systems.
It implements infrastructure as code, because policies and configurations are written as code.

Deployment Models: There are two deployment models for configuration management tools. Push-based deployment model: initiated by a master node. The master server pushes the configurations and software to the individual agents; after verifying a secure connection, the master runs commands remotely on the agents (for example, Ansible and SaltStack). Pull-based deployment model: initiated by the agents. Individual servers contact a master server, verify and establish a secure connection, download their configurations and software, and then configure themselves accordingly (for example, Puppet and Chef).

Deployment Models: Puppet is based on a pull deployment model, where the agent nodes check in regularly (every 1800 seconds by default) with the master node to see if anything needs to be updated on the agent. If so, the agent pulls the necessary Puppet code from the master and performs the required actions.

Example: Master-Agent setup. The master: a Linux-based machine with the Puppet master software installed on it. It is responsible for maintaining configurations in the form of Puppet code; the master node can only be Linux. The agents: the target machines managed by Puppet, with the Puppet agent software installed on them. An agent can be configured on any supported operating system, such as Linux, Windows, Solaris, or macOS.

Puppet Master Agent Communication

Puppet Master Agent Communication: Step 1) Once connectivity is established between the agent and the master, the Puppet agent sends data about its state to the Puppet master server. These are called facts: the information includes the hostname, kernel details, IP address, file name details, etc. (Agent sends facts to master.)

Puppet Master Agent Communication: Step 2) The Puppet master uses this data and compiles a list of the configuration to be applied to the agent. This list of configuration to be performed on an agent is known as a catalog. The changes could include package installations, upgrades or removals, file system creation, user creation or deletion, server reboots, IP configuration changes, etc. (Master sends a catalog to the agent.)

Puppet Master Agent Communication: Step 3) The agent uses this catalog to apply any required configuration changes on the node. If there is no drift in the configuration, the agent performs no changes and leaves the node running with the same configuration. (Agent applies the configuration.)

Puppet Master Agent Communication: Step 4) Once done, the node reports back to the Puppet master, indicating that the configuration has been applied and completed. The four types of Puppet building blocks are: resources (inbuilt functions); classes (combinations of different resources in a single unit); manifests (files of Puppet DSL code, kept in a directory); modules (collections of files and directories such as manifests and class definitions).
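The four-step facts/catalog exchange can be modelled with plain data structures. The fact names and the catalog-compilation rule below are invented; real Puppet catalogs are compiled from manifests written in the Puppet DSL.

```python
# Step 1: the agent gathers facts about itself (invented fact set).
facts = {"hostname": "web01", "os": "Linux", "ipaddress": "10.0.0.5"}

# Step 2: the master compiles a catalog from the facts (invented rule:
# Linux hosts must have the ntp package installed and the service running).
def compile_catalog(facts):
    if facts["os"] == "Linux":
        return [("package", "ntp", "installed"), ("service", "ntp", "running")]
    return []

# Step 3: the agent applies only the resources that drift from current state.
def apply_catalog(catalog, current_state):
    changed = []
    for kind, name, desired in catalog:
        if current_state.get((kind, name)) != desired:
            current_state[(kind, name)] = desired
            changed.append((kind, name))
    return changed  # Step 4: this report goes back to the master

state = {("package", "ntp"): "installed"}   # service not yet running: drift
changed = apply_catalog(compile_catalog(facts), state)
# Only the drifted resource is changed; a second run changes nothing.
```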

ANSIBLE: Ansible is an automation tool primarily used for configuration management, application deployment, and task management. With Ansible, repetitive tasks like updates, backups, creating users and assigning permissions, and even system reboots can be done efficiently. Similar tools include Puppet, Chef, Salt, Juju, and CFEngine.

Why use Ansible? A few benefits. Agentless simplicity: there is no need to install or manage additional software on the target machines. Ease of use: Ansible uses YAML, a straightforward language for writing automation scripts (called playbooks), which is easy to learn and read.

Why use Ansible: Idempotency: Ansible ensures that tasks are performed only when necessary, meaning running the same playbook multiple times will not change the system unless needed. Scalability: You can manage small or large-scale environments with the same Ansible playbook, making it ideal for enterprises. Extensibility: Ansible has a wide range of built-in modules that let you automate almost anything — network configurations, cloud provisioning, container management, and more
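Idempotency means re-running the same playbook is safe: a task changes the system only when the actual state differs from the desired state. The toy task below (the key/value names are invented) mimics how configuration modules report "changed" only when they actually alter something.

```python
def ensure_present(config, key, value):
    """Idempotent task: report 'changed' only when something actually changed."""
    if config.get(key) == value:
        return {"changed": False}   # already in the desired state: no-op
    config[key] = value
    return {"changed": True}

cfg = {}
first = ensure_present(cfg, "max_clients", 200)   # applies the change
second = ensure_present(cfg, "max_clients", 200)  # re-run: nothing to do
# first reports changed=True, second reports changed=False
```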

Ansible Architecture: Ansible's architecture is based on three main components: the control node, the managed nodes, and the inventory.

Control Node This serves as the machine where Ansible is installed. Imagine you have 10 servers where you need to update software. Without an automation tool like Ansible, you would have to log into each server one by one to perform the update. But with Ansible installed on a control node, you can create a playbook on the control node and run it. Ansible will automatically log into each of the 10 servers and perform the update for you.

Managed nodes: Managed nodes are the machines that Ansible controls. No special software needs to be installed on these nodes: Ansible communicates with them over SSH, and the only requirement is that Python is installed, which is typically pre-installed on most Linux systems. This simplicity makes Ansible lightweight and easy to set up across multiple machines.

Inventory: The inventory is a file that lists all the managed nodes (or hosts) that Ansible will control; it defines the targets for your automation tasks. Managed nodes can be grouped, for example by function (e.g., webservers, databases, load_balancers) or by environment (e.g., staging, production).

Ansible playbooks Ansible playbooks are written in YAML. They are files that define a series of tasks to be executed on the managed nodes. They group multiple modules, which are executed from top to bottom. With a playbook, you can orchestrate the steps of any manually ordered process.
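A minimal playbook sketch of this "series of tasks" idea; the host group `webservers`, the package `nginx`, and the task details are assumptions for illustration, not taken from the deck.

```yaml
# Hypothetical playbook: keep nginx installed and running on the webservers group.
- name: Configure web servers
  hosts: webservers        # group defined in the inventory
  become: true             # run tasks with privilege escalation
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```

Run it with `ansible-playbook -i inventory site.yml`; because both tasks are idempotent, re-running the playbook reports no changes once the hosts are in the desired state.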

Chef: Chef is a popular open-source configuration management tool used for automating the deployment, configuration, and management of software and infrastructure across different platforms.

It is designed to help system administrators and DevOps engineers automate the process of deploying and managing infrastructure, applications, and services in a scalable and reliable manner. Chef uses a declarative approach to define and manage infrastructure as code (IaC), which means that system administrators can define the desired state of their infrastructure using code.

Chef Architecture

Chef server: The Chef server contains all configuration data; it also stores the cookbooks, recipes, and metadata that describe each node managed by the Chef client. Configuration details are delivered to a node through the Chef client. Chef workstation: This machine holds all the configuration data, which can later be pushed to the central Chef server. Several Chef command-line utilities are available on the workstation, which can be used to interact with nodes, update configurations, etc.

Chef workstation: The workstation has two main components. Knife utility: a command-line tool used to communicate with the central Chef server from the workstation; adding, removing, and changing node configurations on the central Chef server are carried out using knife. A local Chef repository: the place where every configuration component of the Chef server is stored; this repository can be synchronized with the central Chef server (using knife).

Chef client & node: A node is any physical or virtual machine in your network that is managed by the Chef server. The Chef client is a piece of software that runs on each node and securely communicates with the Chef server to get the latest configuration instructions, which it uses to bring the node to its desired state. Chef uses Ruby as its reference language for creating cookbooks and defining recipes, with an extended DSL for specific resources.

Cookbooks: Cookbooks are created in Ruby, with a domain-specific language used for specific resources. A cookbook contains recipes, which specify the resources to be used and the order in which to use them. The cookbook contains all the details of the work to be done and changes the configuration of the Chef node.

Advantages of Chef: continuous deployment; increased system robustness; adaptation to the cloud; managing data centers and cloud environments.

SaltStack: SaltStack, commonly known as Salt, is an open-source infrastructure automation and configuration management tool. SaltStack is designed to handle dynamic, fast-paced environments with its real-time remote execution capabilities. It can manage and automate a variety of IT operations, from configuring servers to orchestrating complex application deployments.

SaltStack Key Features:
Push-based architecture: the Master pushes commands and configuration changes out to the agents (minions) on your systems.
Declarative configuration: define desired states using YAML or Jinja templates instead of complex scripting.
Scalability and flexibility: handles large infrastructure across various systems and environments (physical, virtual, cloud).
Remote execution: execute commands and manage tasks across your infrastructure effortlessly.
Security and control: fine-grained access control, encryption, and auditing functionality.
Module library: a rich ecosystem of community-developed modules for diverse tasks and integrations.

SaltStack operates on a Master-Minion architecture: Salt Master: The central server that controls and manages the infrastructure. It sends commands and configurations to Salt Minions. Salt Minions: Agents that run on the managed systems and receive instructions from the Salt Master. Minions are responsible for executing commands, applying states, and reporting back to the Master

Core Concepts in SaltStack: 1. States: SaltStack uses states to define the desired configuration of managed nodes. States are written in YAML and describe the desired state of a system, including which packages should be installed, which services should be running, and more. 2. Grains: Grains are static information about the system, such as the OS version, IP address, or installed software. They help target specific minions for commands or state applications.
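Grain-based targeting can be sketched as filtering a minion inventory by grain values. The minion ids and grains below are invented; real Salt targeting uses its own matcher syntax on the master.

```python
# Each minion reports static grains to the master (invented inventory).
minions = {
    "web01": {"os": "Ubuntu", "role": "webserver"},
    "web02": {"os": "Ubuntu", "role": "webserver"},
    "db01":  {"os": "CentOS", "role": "database"},
}

def target(minions, **grains):
    """Return minion ids whose grains match every key=value given."""
    return sorted(
        mid for mid, g in minions.items()
        if all(g.get(k) == v for k, v in grains.items())
    )

webs = target(minions, role="webserver")  # both web minions
centos = target(minions, os="CentOS")     # only the database minion
```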

Core Concepts in SaltStack 3. Pillars Pillars are like Grains but are defined on the Master and provide secure data to Minions. They are used for storing sensitive information like passwords or keys that you don’t want to expose. 4. Modules SaltStack has various modules, including execution modules (for executing commands) and state modules (for applying states). They provide granular control over the infrastructure.

Core Concepts in SaltStack 5. Reactor The Reactor system in SaltStack allows for automated event-driven responses. For example, if a service crashes, the Reactor can detect the event and trigger a recovery action automatically. 6. Salt Mine The Salt Mine is a feature that allows Minions to store arbitrary data on the Salt Master. This data can be accessed by other Minions and is useful for coordinating actions.

SaltStack Work Flow: The Master broadcasts or pushes configuration changes to relevant Minions. Minions receive the commands and compare them to their current state (stored locally or on the Master). If any discrepancies exist, Minions execute the necessary commands to achieve the desired state. Minions send back execution results to the Master, providing valuable insights and audit trails.

Chef and Puppet are both agent-based and designed for large-scale deployments with complex configurations. They have steep learning curves and use their own domain-specific languages for resource declaration. Ansible, on the other hand, is agentless and uses YAML for its playbooks, making it more lightweight and easier to use for simpler configurations and cloud deployments. SaltStack is also agent-based; its states are written in YAML (optionally templated with Jinja) and its modules in Python, making it highly scalable and well suited to large, complex, dynamic deployments.