7. Module3 - Sorting&Searching Techniques.pdf

nithish2173 10 views 87 slides Sep 14, 2025

About This Presentation

Python


Slide Content

BACSE101
Problem Solving Using Python
Sorting Algorithms:
•Bubble sort
•Selection sort
•Insertion sort
•Quick sort
•Merge sort
Searching Algorithms:
•Linear search
•Binary search

Sorting Techniques - Intro
•Sorting refers to rearrangement of a given array or list of elements according to a comparison operator on the elements.
•The comparison operator is used to decide the new order of elements in the respective data structure.
Why Sorting Algorithms are Important
•Sorting algorithms are essential in Computer Science as they simplify complex problems and improve efficiency.
•They are widely used in searching, databases, divide and conquer strategies, and data structures.
Key Applications:
Organizing large data sets for easier handling and printing
Enabling quick access to the k-th smallest or largest elements
Making binary search possible for fast lookups in sorted data
Solving advanced problems in both software and algorithm design

Sorting Techniques - Types
Basic Sorting Algorithms
Bubble Sort
Repeatedly swaps adjacent elements if they are in the wrong order.
Time: O(n²) Space: O(1)
Selection Sort
Selects the minimum element and places it at the correct position in each pass.
Time: O(n²) Space: O(1)
Insertion Sort
Inserts each element into its correct position in the sorted part.
Time: O(n²), best: O(n) Space: O(1)
Advanced Sorting Algorithms
Quick Sort
Divides the array using a pivot and recursively sorts the partitions.
Time: O(n²) worst, O(n log n) average Space: O(log n)
Merge Sort
A divide-and-conquer algorithm that splits and merges arrays in sorted order.
Time: O(n log n) Space: O(n)

Note: Time and Space Complexity – Efficiency Levels
The "O" in Big-O notation is short for the words "Order of".
1. O(1) → "Order of 1" (constant time)
The algorithm takes the same amount of time, no matter how big the input is. Always the same speed.
2. O(n) → "Order of n" (linear time)
Time increases directly with input size. Double the input → double the time.
3. O(n²) → "Order of n squared" (quadratic time)
Time grows like the square of the input size. If n = 10 → ~100 steps; n = 100 → ~10,000 steps.
4. O(log n) → "Order of log n" (logarithmic time)
Time grows very slowly compared to input size. If n = 1,000 → only ~10 steps; n = 1,000,000 → only ~20 steps.
5. O(n log n) → "Order of n log n" (linearithmic time)
Between linear and quadratic. Scales better than O(n²).
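The growth rates above can be observed empirically. A minimal sketch (the function names are illustrative, not from the slides) that counts the comparisons done by a quadratic nested loop, as in Bubble or Selection Sort, versus a halving loop, as in Binary Search:

```python
def quadratic_steps(n):
    # Count iterations of a nested loop over all pairs: roughly n^2/2 steps.
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
    return steps

def halving_steps(n):
    # Count how many times n can be halved down to 1: roughly log2(n) steps.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, quadratic_steps(n), halving_steps(n))
```

Running this shows the quadratic count exploding (45 → 4,950 → 499,500) while the halving count barely grows (3 → 6 → 9), which is exactly the O(n²) vs. O(log n) contrast described above.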

Bubble Sort Algorithm - Intro
•Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements if they are in the wrong order.
•This algorithm is not suitable for large data sets, as its average and worst-case time complexities are quite high.

Bubble Sort Algorithm - Working
Working of Bubble Sort Algorithm:
•We sort the array using multiple passes. After the first pass, the maximum element goes to the end (its correct position). In the same way, after the second pass, the second largest element goes to the second-last position, and so on.
•In every pass, we process only those elements that have not already moved to their correct positions. After k passes, the largest k elements must have been moved to the last k positions.
•In a pass, we consider the remaining elements, compare all adjacent pairs, and swap whenever a larger element is before a smaller one. By the end of the pass, the largest of the remaining elements is at its correct position.

Bubble Sort Algorithm - Example
Write a Python program to sort a given list of integers in ascending order using the optimized Bubble Sort algorithm.
The program should:
1. Accept a list of unsorted integers.
2. Compare adjacent elements and swap them if they are in the wrong order.
3. After each pass, place the largest element at its correct position.
4. Optimize the process by stopping early if no swaps occur in a pass (indicating that the list is already sorted).
5. Finally, display the sorted array.
•Example Input:
arr = [64, 34, 25, 12, 22, 11, 90]
•Example Output:
Sorted array: 11 12 22 25 34 64 90

Bubble Sort Algorithm – Example
Stage 1: Problem Analysis Chart
Input data:
•A list of unsorted numbers, e.g. [64, 34, 25, 12, 22, 11, 90].
Process:
1. Traverse the array multiple times.
2. Compare adjacent elements.
3. Swap them if they are in the wrong order.
4. After each pass, the largest element moves to its correct position.
5. Repeat until the array is fully sorted.
6. Stop early if no swaps occur (optimization).
Output:
•A sorted list of numbers in ascending order, e.g. [11, 12, 22, 25, 34, 64, 90].

Bubble Sort Algorithm – Example
Stage 2: Algorithm
•Step 1: Start
•Step 2: Take an array arr of size n.
•Step 3: Repeat steps for i = 0 to n-1:
Set a flag swapped = False.
For each j = 0 to n-i-2:
Compare arr[j] and arr[j+1].
If arr[j] > arr[j+1], swap them and set swapped = True.
If swapped == False, break (array already sorted).
•Step 4: Print the sorted array.
•Step 5: Stop.

Bubble Sort Algorithm – Example
Stage 3: Programming

# Optimized Python program for implementation of Bubble Sort
arr = [64, 34, 25, 12, 22, 11, 90]
n = len(arr)
# Traverse through all array elements
for i in range(n):
    swapped = False
    # Last i elements are already in place
    for j in range(0, n - i - 1):
        # Swap if the element found is greater than the next element
        if arr[j] > arr[j + 1]:
            arr[j], arr[j + 1] = arr[j + 1], arr[j]
            swapped = True
    if not swapped:
        break

print("Sorted array:")
for i in range(len(arr)):
    print(arr[i], end=" ")

Output:
Sorted array:
11 12 22 25 34 64 90

Time and Space Complexity Analysis of Bubble Sort:
•The time complexity of Bubble Sort is O(n²) in the worst-case scenario, and the space complexity of Bubble Sort is O(1).
•Bubble Sort only needs a constant amount of additional space during the sorting process.
Complexity Type | Complexity
Time Complexity | Best: O(n), Average: O(n²), Worst: O(n²)
Space Complexity | Worst: O(1)

Bubble Sort: Pros and Cons
•Advantages of Bubble Sort:
Bubble sort is easy to understand and implement.
It does not require any additional memory space.
It is a stable sorting algorithm, meaning that elements with the same key value maintain their relative order in the sorted output.
•Disadvantages of Bubble Sort:
Bubble sort has a time complexity of O(n²), which makes it very slow for large data sets.
Bubble sort has almost no or limited real-world applications. It is mostly used in academics to teach different ways of sorting.
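The stability claim above can be checked with a small sketch (the (key, label) record format is illustrative): bubble sort a list of pairs by key only, and observe that items with equal keys keep their original relative order.

```python
def bubble_sort_by_key(pairs):
    # Stable bubble sort on (key, label) tuples, comparing keys only.
    items = list(pairs)
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - i - 1):
            # Strict '>' never swaps equal keys, so their order is preserved.
            if items[j][0] > items[j + 1][0]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break
    return items

data = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
print(bubble_sort_by_key(data))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')] -- "b" stays before "d", "a" before "c"
```

The key design point is the strict `>` comparison: using `>=` instead would still sort correctly but would swap equal keys past each other, destroying stability.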

Selection Sort Algorithm - Intro
•Selection Sort is a comparison-based sorting algorithm.
•It sorts an array by repeatedly selecting the smallest (or largest) element from the unsorted portion and swapping it with the first unsorted element.
•This process continues until the entire array is sorted.
Selection Sort Algorithm - Working
1. First we find the smallest element and swap it with the first element. This way we get the smallest element at its correct position.
2. Then we find the smallest among the remaining elements (i.e., the second smallest) and swap it with the second element.
3. We keep doing this until all elements are moved to their correct positions.

Selection Sort Algorithm -Working

Selection Sort Algorithm - Example
Write a Python program to sort a given list of integers in ascending order using the Selection Sort algorithm.
The program should:
1. Divide the list into a sorted and an unsorted portion.
2. Find the minimum element from the unsorted portion in each pass.
3. Swap it with the first element of the unsorted portion.
4. Repeat until the entire list is sorted.
5. Finally, display the sorted array.
•Example Input: arr = [64, 25, 12, 22, 11]
•Example Output: Sorted array: [11, 12, 22, 25, 64]

Selection Sort Algorithm – Example
Stage 1: Problem Analysis Chart
Input data:
•An unsorted array of integers. Example: [64, 25, 12, 22, 11]
Process:
1. Start from the first element.
2. Find the minimum element in the unsorted portion.
3. Swap it with the current position.
4. Move the boundary between the sorted and unsorted portions.
5. Repeat until all elements are sorted.
Output:
•The sorted array in ascending order. Example: [11, 12, 22, 25, 64]

Selection Sort Algorithm – Example
Stage 2: Algorithm
•Step 1: Start
•Step 2: Read the array of integers
•Step 3: Set n = len(arr)
•Step 4: For i from 0 to n-2:
Assume the element at i is the minimum (min_idx = i)
For j from i+1 to n-1:
If arr[j] < arr[min_idx], update min_idx = j
Swap arr[i] and arr[min_idx]
•Step 5: Print the sorted array.
•Step 6: Stop.

Selection Sort Algorithm – Example
Stage 3: Programming

# Python program for implementation of Selection Sort
arr = [64, 25, 12, 22, 11]
print("Original array:", arr)
n = len(arr)
for i in range(n - 1):
    # Assume the current position holds the minimum element
    min_idx = i
    # Iterate through the unsorted portion to find the actual minimum
    for j in range(i + 1, n):
        if arr[j] < arr[min_idx]:
            # Update min_idx if a smaller element is found
            min_idx = j
    # Move minimum element to its correct position
    arr[i], arr[min_idx] = arr[min_idx], arr[i]
print("Sorted array:", arr)

Output:
Original array: [64, 25, 12, 22, 11]
Sorted array: [11, 12, 22, 25, 64]

Complexity Analysis of Selection Sort
•Time Complexity: O(n²), as there are two nested loops:
One loop to select an element of the array one by one = O(n)
Another loop to compare that element with every other array element = O(n)
Therefore overall complexity = O(n) * O(n) = O(n*n) = O(n²)
•Auxiliary Space: O(1), as the only extra memory used is for temporary variables.
Complexity Type | Complexity
Time Complexity | O(n²)
Space Complexity | O(1)

Selection Sort: Pros and Cons
Advantages of Selection Sort
Easy to understand and implement, making it ideal for teaching basic sorting concepts.
Requires only a constant O(1) extra memory space.
It requires a smaller number of swaps (or memory writes) compared to many other standard algorithms. Only cycle sort beats it in terms of memory writes. Therefore, it can be a simple algorithm choice when memory writes are costly.
Disadvantages of Selection Sort
Selection sort has a time complexity of O(n²), which makes it slower compared to algorithms like Quick Sort or Merge Sort.
Does not maintain the relative order of equal elements, which means it is not stable.

Applications of Selection Sort Algorithm:
•Perfect for teaching fundamental sorting mechanisms and algorithm design.
•Suitable for small lists where the overhead of more complex algorithms isn't justified, and where memory writing is costly, as it requires fewer memory writes compared to other standard sorting algorithms.
•The Heap Sort algorithm is based on Selection Sort.

Insertion sort
Insertion sort is a simple sorting algorithm that works by iteratively inserting each element of an unsorted list into its correct position in a sorted portion of the list.

Steps:
•The insertion sort function takes an array arr as input. It first calculates the length of the array (n). If the length is 0 or 1, the function returns immediately, as an array with 0 or 1 elements is considered already sorted.
•For arrays with more than one element, we start with the second element of the array, as the first element is assumed to be sorted.
•Compare the second element with the first element; if the second element is smaller, swap them.
•Move to the third element, compare it with the first two elements, and put it at its correct position.
•Repeat until the entire array is sorted.

Example:

arr = [23, 1, 10, 5, 2]
Initial:
•Current element is 23.
•The first element in the array is assumed to be sorted.
•The sorted part until the 0th index is: [23]
First Pass:
•Compare 1 with 23 (current element with the sorted part).
•Since 1 is smaller, insert 1 before 23.
•The sorted part until the 1st index is: [1, 23]

Second Pass:
•Compare 10 with 1 and 23 (current element with the sorted part).
•Since 10 is greater than 1 and smaller than 23, insert 10 between 1 and 23.
•The sorted part until the 2nd index is: [1, 10, 23]
Third Pass:
•Compare 5 with 1, 10, and 23 (current element with the sorted part).
•Since 5 is greater than 1 and smaller than 10, insert 5 between 1 and 10.
•The sorted part until the 3rd index is: [1, 5, 10, 23]

Fourth Pass:
•Compare 2 with 1, 5, 10, and 23 (current element with the sorted part).
•Since 2 is greater than 1 and smaller than 5, insert 2 between 1 and 5.
•The sorted part until the 4th index is: [1, 2, 5, 10, 23]
Final Array:
•The sorted array is: [1, 2, 5, 10, 23]

Python Code:

# Python program for implementation of Insertion Sort
## arr = list(map(int, input().split()))
arr = [12, 11, 13, 5, 6]  # Original array
print("Original array:", arr)
n = len(arr)  # Length of the array
# Insertion Sort logic
for i in range(1, n):  # Iterate starting from the second element
    key = arr[i]  # The current element, to be inserted in the right position
    j = i - 1
    # Move elements greater than key one position ahead
    while j >= 0 and key < arr[j]:
        arr[j + 1] = arr[j]  # Shift elements to the right
        j -= 1
    arr[j + 1] = key  # Insert the key in the correct position
# Print the sorted array
print("Sorted array:", arr)
for i in range(len(arr)):
    print(arr[i], end=" ")

Output:
Original array: [12, 11, 13, 5, 6]
Sorted array: [5, 6, 11, 12, 13]
5 6 11 12 13

Advantages and Disadvantages of Insertion Sort
Advantages
•Simple and easy to implement.
•Stable sorting algorithm.
•Efficient for small lists and nearly sorted lists.
•Space-efficient as it is an in-place algorithm.
•Adaptive: the amount of work done is directly proportional to the number of inversions (out-of-order pairs) in the input.
•For example, no shifting happens for an already sorted array, and it takes O(n) time only.
Disadvantages
•Inefficient for large lists.
•Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for
most cases.
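The adaptive behaviour noted above can be illustrated with a small sketch (the shift counter is added here for illustration; it is not part of the slide code): insertion sort on an already sorted list performs zero shifts, while a fully reversed list forces one shift per inversion.

```python
def insertion_sort_shifts(arr):
    # Insertion sort that also counts how many element shifts it performs.
    a = list(arr)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and key < a[j]:
            a[j + 1] = a[j]  # shift one element to the right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return a, shifts

print(insertion_sort_shifts([1, 2, 3, 4, 5]))  # 0 shifts: already sorted, O(n) work
print(insertion_sort_shifts([5, 4, 3, 2, 1]))  # 10 shifts: every pair is an inversion
```

The shift count equals the number of inversions in the input, which is why nearly sorted data is the best case for insertion sort.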

Quick sort
•QuickSort is a sorting algorithm based on Divide and Conquer that picks an element as a pivot and partitions the given array around the picked pivot, placing the pivot in its correct position in the sorted array.
•It works on the principle of divide and conquer, breaking down the problem into smaller sub-problems.

There are mainly three steps in the algorithm, plus a base case:
1. Choose a Pivot: Select an element from the array as the pivot. The choice of pivot can vary (e.g., first element, last element, random element, or median).
2. Partition the Array: Rearrange the array around the pivot. After partitioning, all elements smaller than the pivot will be on its left, and all elements greater than the pivot will be on its right. The pivot is then in its correct position, and we obtain the index of the pivot.
3. Recursively Call: Recursively apply the same process to the two partitioned sub-arrays (left and right of the pivot).
4. Base Case: The recursion stops when there is only one element left in the sub-array, as a single element is already sorted.

Here's a basic overview of how the QuickSort algorithm works.

Choice of Pivot:
There are many different choices for picking pivots.
•Always pick the first (or last) element as a pivot:
The implementation below picks the last element as pivot. The problem with this approach is that it ends up in the worst case when the array is already sorted.
•Pick a random element as a pivot:
This is a preferred approach because it does not have a pattern for which the worst case happens.
•Pick the median element as pivot:
This is an ideal approach in terms of time complexity, as we can find the median in linear time and the partition function will always divide the input array into two halves. But it takes more time on average, as median finding has high constants.
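The random-pivot variant mentioned above can be sketched briefly (this is a recursive, list-building sketch for clarity; the deck's own implementation later uses an explicit stack and sorts in place, and the function name is illustrative):

```python
import random

def quicksort_random(arr):
    # Recursive QuickSort with a randomly chosen pivot, which avoids the
    # already-sorted worst case of always picking the first/last element.
    if len(arr) <= 1:
        return arr  # base case: 0 or 1 elements are already sorted
    pivot = random.choice(arr)
    left = [x for x in arr if x < pivot]    # elements smaller than the pivot
    mid = [x for x in arr if x == pivot]    # the pivot (and any duplicates)
    right = [x for x in arr if x > pivot]   # elements greater than the pivot
    return quicksort_random(left) + mid + quicksort_random(right)

print(quicksort_random([10, 80, 30, 90, 40]))  # [10, 30, 40, 80, 90]
```

Note that this sketch trades the in-place partitioning of classic QuickSort for readability: it copies sublists, so it uses O(n) extra space rather than O(log n).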

Illustration of QuickSort Algorithm:

Algorithm
Step 1: Initialization
Set n = length of arr.
Initialize a stack with the full array range:
stack = [(0, n - 1)]
Record the start time for performance measurement.
Step 2: Process the Stack
Repeat until the stack is empty:
Pop (low, high) from the stack.
If low < high:
Choose the pivot as the last element:
pivot = arr[high]
Initialize i = low - 1 (index of smaller element)

Algorithm
Step 3: Partitioning
For each j from low to high - 1:
If arr[j] < pivot:
Increment i
Swap arr[i] and arr[j]
After the loop:
Place the pivot in its correct position:
arr[i + 1], arr[high] = arr[high], arr[i + 1]
Set pi = i + 1 (partition index)
Step 4: Push Subarrays to Stack
Push the left subarray (low, pi - 1) to the stack.
Push the right subarray (pi + 1, high) to the stack.
Repeat the process until the stack is empty.
Step 5: Output
Record the end time.
Print the sorted array.
Print the execution time.

Python Code:

# Python program for implementation of Quick Sort
import time

## arr = list(map(int, input().split()))
arr = [10, 80, 30, 90, 40]  # Input array
# Display the original array before sorting
print("Original array:", arr)
start_time = time.time()

# Initialize a stack with the full range of the array
# This simulates the recursive calls of Quick Sort
n = len(arr)
stack = [(0, n - 1)]

while stack:  # Process the stack until it's empty
    low, high = stack.pop()  # Pop the top range from the stack
    if low < high:
        pivot = arr[high]  # Choose the last element as the pivot
        i = low - 1  # Index of smaller element
        # Partitioning: move elements smaller than pivot to the left
        for j in range(low, high):
            if arr[j] < pivot:
                i += 1
                # Swap arr[i] and arr[j]
                arr[i], arr[j] = arr[j], arr[i]
        # Place the pivot in its correct position
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        pi = i + 1  # Partition index
        # Push left and right subarrays to the stack for further sorting
        stack.append((low, pi - 1))   # Left subarray
        stack.append((pi + 1, high))  # Right subarray

end_time = time.time()
# Display the sorted array
print("Sorted array:", arr)
print(f"Execution time: {end_time - start_time:.6f} seconds")

Output:
Original array: [10, 80, 30, 90, 40]
Sorted array: [10, 30, 40, 80, 90]

Advantages of Quick Sort:
•It is a divide-and-conquer algorithm that makes it easier to solve
problems.
•It is efficient on large data sets.
•It has a low overhead, as it only requires a small amount of memory
to function.
•It is Cache Friendly as we work on the same array to sort and do not
copy data to any auxiliary array.
•Fastest general purpose algorithm for large data when stability is not
required.

Disadvantages of Quick Sort:
•It has a worst-case time complexity of O(n²), which occurs when the pivot is chosen poorly.
•It is not a good choice for small data sets.
•It is not a stable sort: if two elements have the same key, their relative order will not be preserved in the sorted output, because elements are swapped according to the pivot's position (without considering their original positions).

Merge sort
•Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves.
•The merge() function is used for merging two halves.
•merge(arr, l, m, r) is the key process: it assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.

How does Merge Sort work?
Here's a step-by-step explanation of how merge sort works:
Divide: Divide the list or array recursively into two halves until it can no longer be divided.
Conquer: Each subarray is sorted individually using the merge sort algorithm.
Merge: The sorted subarrays are merged back together in sorted order. The process continues until all elements from both subarrays have been merged.
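The divide/conquer/merge steps above map directly onto a short recursive sketch (the deck's own implementation later simulates the recursion with an explicit stack; the function name here is illustrative):

```python
def merge_sort(arr):
    # Divide: split until the sublist has at most 1 element (base case).
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Conquer: sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge: combine the two sorted halves in order.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:     # '<=' keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])         # copy any remaining elements
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 10]))  # [10, 27, 38, 43]
```

This version returns a new sorted list rather than sorting in place, which matches the recursive description above at the cost of O(n) extra space.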

Illustration of Merge Sort:

Let's look at the working of the example above:
Divide:
•[38, 27, 43, 10] is divided into [38, 27 ] and [43, 10] .
•[38, 27] is divided into [38] and [27] .
•[43, 10] is divided into [43] and [10] .
Conquer:
•[38] is already sorted.
•[27] is already sorted.
•[43] is already sorted.
•[10] is already sorted.

Merge:
•Merge [38] and [27] to get [27, 38].
•Merge [43] and [10] to get [10, 43].
•Merge [27, 38] and [10, 43] to get the final sorted list [10, 27, 38, 43].
•Therefore, the sorted list is [10, 27, 38, 43].

Algorithm
Input:
arr = [12, 11, 13, 7, 6, 5]
Step 1: Initialize
Determine the length of the array: n = len(arr)
Create a stack to simulate recursive calls: stack = [(0, n - 1)]
Create an empty list to store subarray boundaries: subarrays = []
Step 2: Simulate Recursive Splitting
Repeat until the stack is empty:
Pop (l, r) from the stack.
If l < r:
Compute midpoint: m = (l + r) // 2
Push the left half (l, m) and the right half (m + 1, r) back to the stack.
Store the merge boundary (l, m, r) in subarrays.
This simulates the recursive breakdown of the array into smaller parts.

Algorithm
Step 3: Merge Subarrays in Reverse Order
For each (l, m, r) in reversed(subarrays):
Create temporary arrays:
L = arr[l to m]
R = arr[m+1 to r]
Initialize pointers:
i = 0 → index for L
j = 0 → index for R
k = l → index for arr (where merged values go)
Merge L and R into arr[l to r]:
Compare L[i] and R[j]
Place the smaller value into arr[k]
Increment i or j, and always increment k
Copy remaining elements:
If any elements remain in L, copy them to arr
If any elements remain in R, copy them to arr

Algorithm
Step 4: Final Output
After all merges are complete, the array is sorted.
Print the result:
Sorted array: [5, 6, 7, 11, 12, 13]

Python Code:

# Python program for implementation of Merge Sort
# Input array
arr = [12, 11, 13, 7, 6, 5]
print("Original array:", arr)

# Length of the array
n = len(arr)

# Create a stack to simulate recursion
stack = [(0, n - 1)]
subarrays = []

# Step 1: Break the array into subarrays (simulate recursive calls)
while stack:
    l, r = stack.pop()
    if l < r:
        m = (l + r) // 2
        stack.append((l, m))
        stack.append((m + 1, r))
        subarrays.append((l, m, r))

# Step 2: Merge subarrays in reverse order (bottom-up)
for l, m, r in reversed(subarrays):
    n1 = m - l + 1
    n2 = r - m
    # Create temp arrays
    L = [0] * n1
    R = [0] * n2
    # Copy data to temp arrays
    for i in range(n1):
        L[i] = arr[l + i]
    for j in range(n2):
        R[j] = arr[m + 1 + j]
    # Merge the temp arrays back into arr[l..r]
    i = j = 0
    k = l  # index where merged elements go
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]
            i += 1
        else:
            arr[k] = R[j]
            j += 1
        k += 1
    # Copy remaining elements of L[]
    while i < n1:
        arr[k] = L[i]
        i += 1
        k += 1
    # Copy remaining elements of R[]
    while j < n2:
        arr[k] = R[j]
        j += 1
        k += 1

# Print the sorted array
print("Sorted array:", arr)

Output:
Original array: [12, 11, 13, 7, 6, 5]
Sorted array: [5, 6, 7, 11, 12, 13]

Advantages and Disadvantages of Merge Sort:
Advantages
•Stability: Merge sort is a stable sorting algorithm, which means it maintains the relative order of equal elements in the input array.
•Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(N log N), which means it performs well even on large datasets.
•Simple to implement: The divide-and-conquer approach is straightforward.
•Naturally parallel: The subarrays are merged independently, which makes merge sort suitable for parallel processing.

Disadvantages:
•Space complexity: Merge sort requires additional memory to store the merged sub-arrays during the sorting process.
•Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires additional memory to store the sorted data. This can be a disadvantage in applications where memory usage is a concern.
•Merge Sort is slower than QuickSort in general, as QuickSort is more cache friendly because it works in-place.

Searching Algorithms:
•Linear search
•Binary search

Searching Algorithms -Intro
•Searching algorithms are critical in computer science for identifying
specific data elements.
•This tutorial will concentrate on array searching techniques.
•The two predominant algorithms utilized depend on the nature of the
input array.
1.Linear Search
2.Binary Search

Searching Algorithms - Types
1. Linear Search:
Utilized for unsorted arrays,
this method involves sequential comparisons of the target item with array elements,
operating in O(n) time complexity.
2. Binary Search:
Applied to sorted arrays,
this algorithm initially assesses the middle element and, upon finding a match, returns;
otherwise, it recursively searches either the left or right segment based on the comparison outcome,
exhibiting superior efficiency with a time complexity of O(log n).


Linear Search Algorithm
•In Linear Search, each element of the array is examined to determine equality with the target element.
•Upon finding a match, the index of the current element is printed.
•Conversely, if no match is located, a message indicating the element's absence is printed.
•Linear search is alternatively referred to as sequential search.

Linear Search Algorithm - Example
•Given an array arr[] of n integers and an integer element x, find whether element x is present in the array. Print the index of the first occurrence of x in the array, or print "it doesn't exist".
•Test cases:
1. Input: arr[] = [1, 2, 3, 4], x = 3
Output: 2
Explanation: There is one test case with the array as [1, 2, 3, 4] and the element to be searched as 3. Since 3 is present at index 2, the output is 2.
2. Input: arr[] = [10, 8, 30, 4, 5], x = 5
Output: 4
Explanation: For array [10, 8, 30, 4, 5], the element to be searched is 5 and it is at index 4. So, the output is 4.
3. Input: arr[] = [10, 8, 30], x = 6
Output: it doesn't exist
Explanation: The element to be searched is 6 and it is not present, so we print "it doesn't exist".

Linear Search Algorithm – Example
Stage 1: Problem Analysis Chart
Input data:
•An integer n → size of the array
•Array arr[] of n integers
•An integer x → element to be searched
Process:
•Start from the first element of the array.
•Compare each element with x.
•If arr[i] == x, print index i and stop.
•If the loop finishes without finding x, print "it doesn't exist".
Output:
•Index of the first occurrence of x in arr[]
•OR message: "it doesn't exist"

Linear Search Algorithm – Example
Stage 2: Algorithm
Step 1: Start
Step 2: Read n
Step 3: Read array arr[0..n-1]
Step 4: Read element x
Step 5: For i = 0 to n-1
If arr[i] == x then
Print i
Stop
Step 6: If not found, Print "it doesn't exist"
Step 7: Stop

Linear Search Algorithm – Example with explanation
•For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30.

Linear Search Algorithm – Example
Stage 3: Programming

arr = [2, 3, 4, 10, 40]
x = 10
n = len(arr)
# Iterate over the array in order to find the key x
for i in range(0, n):
    if arr[i] == x:
        result = i
        break
else:
    # for-else: runs only if the loop finished without a break
    result = -1

if result == -1:
    print("Element is not present in array")
else:
    print("Element is present at index", result)

Output:
Element is present at index 3

Time and Space Complexity of Linear Search
Algorithm:
•Time Complexity:
Best Case:In the best case, the key might be present at the first index. So the
best case complexity is O(1)
Worst Case:In the worst case, the key might be present at the last index i.e.,
opposite to the end from which the search has started in the list. So the
worst-case complexity is O(N) where N is the size of the list.
Average Case:O(N)
•Auxiliary Space:O(1) as except for the variable to iterate through the
list, no other variable is used.

Applications of Linear Search Algorithm:
•Unsorted Lists: When we have an unsorted array or list, linear search is most commonly used to find any element in the collection.
•Small Data Sets: Linear Search is preferred over binary search when we have small data sets.
•Searching Linked Lists: In linked list implementations, linear search is commonly used to find elements within the list. Each node is checked sequentially until the desired element is found.
•Simple Implementation: Linear Search is much easier to understand and implement as compared to Binary Search or Ternary Search.

Linear Search: Pros, Cons, and Applications
•Advantages of Linear Search Algorithm:
Linear search can be used irrespective of whether the array is sorted or not. It can be used on arrays of any data type.
Does not require any additional memory.
It is a well-suited algorithm for small datasets.
•Disadvantages of Linear Search Algorithm:
Linear search has a time complexity of O(N), which in turn makes it slow for large datasets.
Not suitable for large arrays.
•When to use Linear Search Algorithm?
When we are dealing with a small dataset.
When you are searching for a dataset stored in contiguous memory.

Binary Search Algorithm
•Binary Search is a searching algorithm that operates on a sorted or monotonic search space, repeatedly dividing it into halves to find a target value or optimal answer in logarithmic time O(log N).
Conditions to apply Binary Search Algorithm in a Data Structure:
•To apply the Binary Search algorithm:
The data structure must be sorted.
Access to any element of the data structure should take constant time.

Binary Search Algorithm - Intro
Below is the step-by-step algorithm for Binary Search:
•Divide the search space into two halves by finding the middle index "mid".
•Compare the middle element of the search space with the key.
•If the key is found at the middle element, the process is terminated.
•If the key is not found at the middle element, choose which half will be used as the next search space.
-> If the key is smaller than the middle element, then the left side is used for the next search.
-> If the key is larger than the middle element, then the right side is used for the next search.
•This process is continued until the key is found or the total search space is exhausted.

Binary Search Algorithm – Example with explanation
•Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.

Binary Search Algorithm – Example
Stage 1: Problem Analysis Chart
Input data:
•A sorted array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}
•Target element x = 23
Process:
1. Initialize low = 0, high = n-1.
2. Calculate mid = (low + high) // 2.
3. Compare arr[mid] with x.
•If arr[mid] == x → Found, return mid.
•If arr[mid] < x → Search in the right half (low = mid + 1).
•If arr[mid] > x → Search in the left half (high = mid - 1).
4. Repeat steps 2–3 until low > high.
5. If not found → Print "it doesn't exist".
Output:
•Index of the element 23 in the array (if present).
•OR message: "it doesn't exist"

Binary Search Algorithm – Example
Stage 2: Algorithm
Step 1: Start
Step 2: Read n
Step 3: Read array arr[0..n-1]
Step 4: Read element x
Step 5: Initialize low = 0, high = n-1
Step 6: While low <= high
mid = (low + high) // 2
If arr[mid] == x
Print mid
Stop
Else if arr[mid] < x
low = mid + 1
Else
high = mid - 1
Step 7: If not found, Print "it doesn't exist"
Step 8: Stop

Binary Search Algorithm – Example
Stage 3: Programming

arr = [2, 3, 4, 10, 40]
x = 10
low = 0
high = len(arr) - 1
result = -1

while low <= high:
    mid = low + (high - low) // 2
    # Check if x is present at mid
    if arr[mid] == x:
        result = mid
        break
    # If x is greater, ignore the left half
    elif arr[mid] < x:
        low = mid + 1
    # If x is smaller, ignore the right half
    else:
        high = mid - 1

if result != -1:
    print("Element is present at index", result)
else:
    print("Element is not present in array")

Output:
Element is present at index 3

Complexity Analysis of Binary Search Algorithm:
•Time Complexity:
Best Case: O(1)
Average Case: O(log N)
Worst Case: O(log N)
•Auxiliary Space:O(1).
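In practice, Python already ships binary search in the standard library's bisect module. A short sketch (the wrapper function name is illustrative) that uses bisect_left to find the first occurrence of a key in a sorted list, matching the O(log N) behaviour analysed above:

```python
from bisect import bisect_left

def binary_search_first(arr, x):
    # Return the index of the first occurrence of x in sorted arr, or -1.
    i = bisect_left(arr, x)  # leftmost insertion point for x
    if i < len(arr) and arr[i] == x:
        return i
    return -1

print(binary_search_first([2, 5, 8, 12, 16, 23, 38, 56, 72, 91], 23))  # 5
print(binary_search_first([2, 5, 8, 12], 7))  # -1
```

Because bisect_left returns the leftmost insertion point, this also handles duplicates correctly, returning the first matching index rather than an arbitrary one.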

Applications of Binary Search Algorithm:
•Searching in sorted arrays
•Finding the first/last occurrence or the closest match in a sorted array
•Database indexing: used in B-trees and similar structures for fast data lookup.
•Debugging in version control: tools like git bisect use binary search to isolate faulty commits.
•Network routing & IP lookup: efficiently find routing entries in tables sorted by address ranges.
•File systems & libraries: fast search through sorted directories or symbol tables.
•Gaming/graphics: collision detection or ray tracing using sorted spatial data.
•Machine learning tuning: efficient hyperparameter search (e.g., learning rate, thresholds).
•Optimization problems & competitive programming: solve boundary-value challenges by narrowing the search space.
•Advanced data structures: binary search trees, self-balancing BSTs, and fractional cascading rely on binary search logic.