Elak1 greedy presentation for design and analysis of algorithms.ppt

elakkiyaelakar · Mar 02, 2025

About This Presentation

Greedy approach


Slide Content

Fundamental Techniques
There are some algorithmic tools that are quite specialised: they are good for the problems they are intended to solve, but they are not very versatile. There are also more fundamental (general) algorithmic tools that can be applied to a wide variety of data structure and algorithm design problems.

The Greedy Method
An optimisation problem (OP) is a problem that involves searching through a set of configurations to find one that minimises or maximises an objective function defined on these configurations. The greedy method solves a given OP by going through a sequence of (feasible) choices: the sequence starts from a well-understood starting configuration and then iteratively makes the decision that seems best among all those that are currently possible.

The greedy approach does not always lead to an optimal solution. The problems that have a greedy solution are said to possess the greedy-choice property. The greedy approach is also used in the context of hard (difficult to solve) problems in order to generate an approximate solution.

Fractional Knapsack Problem
In the fractional knapsack problem we are given a set S of n items, s.t. each item i has a positive benefit b_i and a positive weight w_i, and we wish to find the maximum-benefit subset that doesn't exceed a given weight W. We are also allowed to take arbitrary fractions of each item.

I.e., we can take an amount x_i of each item i such that

  0 ≤ x_i ≤ w_i for each i ∈ S, and  Σ_{i ∈ S} x_i ≤ W.

The total benefit of the items taken is determined by the objective function

  B = Σ_{i ∈ S} b_i · (x_i / w_i).

In the solution we use a heap-based priority queue (PQ) to store the items of S, where the key of each item is its value index b_i / w_i. With the PQ, each greedy choice, which removes an item with the greatest value index, takes O(log n) time, so the fractional knapsack algorithm can be implemented in O(n log n) time.
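Below is a minimal Python sketch of this greedy strategy, given as an illustration rather than the slides' own pseudocode; the representation of items as (benefit, weight) pairs and the function name fractional_knapsack are assumptions made here.

```python
import heapq

def fractional_knapsack(items, W):
    # items: list of (benefit, weight) pairs with positive weights
    # W: knapsack capacity
    # Heap-based PQ keyed by value index b_i / w_i; heapq is a min-heap,
    # so keys are negated to pop the item with the greatest index first.
    pq = [(-b / w, b, w) for (b, w) in items]
    heapq.heapify(pq)

    total_benefit = 0.0
    remaining = W
    while pq and remaining > 0:
        _, b, w = heapq.heappop(pq)   # greedy choice: best value index
        take = min(w, remaining)      # take the whole item, or a fraction
        total_benefit += b * (take / w)
        remaining -= take
    return total_benefit
```

For example, fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns 240.0: the first two items are taken whole, and two thirds of the third item fills the remaining capacity.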

The fractional knapsack problem satisfies the greedy-choice property, hence:
Thm: Given an instance of the fractional knapsack problem with a set S of n items, we can construct a maximum-benefit subset of S, allowing for fractional amounts, with total weight at most W, in O(n log n) time.

Task Scheduling
Suppose we are given a set T of n tasks, s.t. each task i has a start time s_i and a completion time f_i. Each task has to be performed on a machine, and each machine can execute only one task at a time. Two tasks i and j are non-conflicting if f_i ≤ s_j or f_j ≤ s_i. Two tasks can be executed on the same machine only if they are non-conflicting.

The task scheduling problem is to schedule all the tasks in T on the fewest machines possible in a non-conflicting way.

Task Scheduling (algorithm)
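Here is a short Python sketch consistent with the description in the analysis below; the (start, finish) pair representation and the name task_schedule are illustrative assumptions, not the original pseudocode.

```python
import heapq

def task_schedule(tasks):
    # tasks: list of (start, finish) pairs
    # Returns a list of machines, each a list of the tasks assigned to it.
    machines = []   # machines[m] holds the tasks scheduled on machine m
    free = []       # min-heap of (finish time of last task, machine index)

    for s, f in sorted(tasks):              # consider tasks by start time
        if free and free[0][0] <= s:
            _, m = heapq.heappop(free)      # some machine is free at time s
        else:
            m = len(machines)               # otherwise allocate a new one
            machines.append([])
        machines[m].append((s, f))
        heapq.heappush(free, (f, m))
    return machines
```

E.g., task_schedule([(1, 4), (2, 5), (4, 7)]) schedules (1, 4) and (4, 7) on one machine and (2, 5) on a second.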

Task Scheduling (analysis)
In the algorithm TaskSchedule, we begin with no machines and consider the tasks in a greedy fashion, ordered by their start times. For each task i, if we have a machine that can handle task i, then we schedule i on that machine; otherwise, we allocate a new machine, schedule i on it, and repeat this greedy selection process until we have considered all the tasks in T.

The task scheduling problem satisfies the greedy-choice property, hence:
Thm: Given an instance of the task scheduling problem with a set of n tasks, the algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.

Divide and Conquer
Divide: if the input size is small, then solve the problem directly; otherwise, divide the input data into two or more disjoint subsets.
Recur: recursively solve the sub-problems associated with the subsets.
Conquer: take the solutions to the sub-problems and merge them into a solution to the original problem.
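As an illustration (not taken from the slides), merge sort follows this divide-recur-conquer pattern exactly:

```python
def merge_sort(a):
    # Divide: an input of size <= 1 is solved directly.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Recur: solve the two sub-problems.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Conquer: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Its running time satisfies exactly the recurrence T(n) = 2T(n/2) + bn analysed on the following slides.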

Divide and Conquer
To analyse the running time of a divide-and-conquer algorithm we utilise a recurrence equation, where T(n) denotes the running time of the algorithm on an input of size n, and we characterise T(n) using an equation that relates T(n) to values of T for problem sizes smaller than n, e.g.,

  T(n) = b for n < 2, and T(n) = 2T(n/2) + bn for n ≥ 2.

Substitution Method
One way to solve a divide-and-conquer recurrence equation is to use the iterative substitution method, a.k.a. the plug-and-chug method. E.g., having

  T(n) = 2T(n/2) + bn,

we get

  T(n) = 2(2T(n/4) + b(n/2)) + bn = 4T(n/4) + 2bn,

and after i − 1 substitutions we have

  T(n) = 2^i · T(n/2^i) + i·bn,

and for i = log n we get

  T(n) = n·T(1) + bn·log n, which is O(n log n).

Recursion Tree (visual approach)
In the recursion tree method, some overhead (forming a part of the recurrence equation) is associated with every node of the tree. E.g., having

  T(n) = 2T(n/2) + bn,

the overhead corresponds to the summand +bn. The value of T(n) corresponds to the sum of all overheads: in this example, each of the log n levels of the tree contributes a total overhead of bn, so the depth of the tree times the overhead at each level gives O(n log n).

Guess-and-Prove
In the guess-and-prove method the solution to a recurrence equation is guessed and then proved by mathematical induction. We guess that T(n) = O(n log n), so we have to prove that T(n) < C·n·log n for some constant C and large enough n. We use the inductive assumption that

  T(n/2) < C·(n/2)·log(n/2) = C·(n/2)·(log n − 1) = (C·n·log n)/2 − C·n/2.

Then

  T(n) = 2T(n/2) + bn < 2((C·n·log n)/2 − C·n/2) + bn = C·n·log n + (b − C)·n < C·n·log n, for any C > b.

The Master Method
The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), for constants a ≥ 1 and b > 1:
1. if f(n) is O(n^{log_b a − ε}) for some ε > 0, then T(n) is Θ(n^{log_b a});
2. if f(n) is Θ(n^{log_b a} · log^k n) for some k ≥ 0, then T(n) is Θ(n^{log_b a} · log^{k+1} n);
3. if f(n) is Ω(n^{log_b a + ε}) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some c < 1, then T(n) is Θ(f(n)).
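As a worked example (added here for illustration), the recurrence T(n) = 7T(n/2) + bn^2 for Strassen's algorithm below has a = 7, b = 2 and f(n) = bn^2; since n^{log_2 7} ≈ n^{2.808} and bn^2 is O(n^{2.808 − ε}) for a small ε > 0, case 1 applies and gives T(n) = Θ(n^{log 7}).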

Matrix Multiplication
Suppose we are given two n × n matrices X and Y, and we wish to compute their product Z = X·Y, which is defined so that

  Z[i,j] = Σ_{k=0}^{n−1} X[i,k] · Y[k,j],

which naturally leads to a simple O(n^3)-time algorithm.
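A direct rendering of this definition (an illustrative sketch, with matrices as lists of lists):

```python
def mat_mul(X, Y):
    # Naive O(n^3) product of two n x n matrices.
    n = len(X)
    Z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Z[i][j] += X[i][k] * Y[k][j]   # Z[i,j] = sum_k X[i,k]*Y[k,j]
    return Z
```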

Matrix Multiplication
Another way of viewing this product is in terms of sub-matrices:

  X = [A B; C D],  Y = [E F; G H],  Z = X·Y = [I J; K L],

where

  I = AE + BG,  J = AF + BH,  K = CE + DG,  L = CF + DH.

However, this gives a divide-and-conquer algorithm with running time T(n), s.t. T(n) = 8T(n/2) + bn^2 = O(n^3).

Strassen's Algorithm
Define seven matrix products:

  S1 = A·(F − H)
  S2 = (A + B)·H
  S3 = (C + D)·E
  S4 = D·(G − E)
  S5 = (A + D)·(E + H)
  S6 = (B − D)·(G + H)
  S7 = (A − C)·(E + F)

Having the S_i's, we can represent I, J, K, L:

  I = S5 + S6 + S4 − S2
  J = S1 + S2
  K = S3 + S4
  L = S5 − S3 + S1 − S7

Thus, we can compute Z = X·Y using seven recursive multiplications of matrices of size (n/2) × (n/2), where

  T(n) = 7T(n/2) + bn^2.

One can prove, e.g., using the Master Theorem, that:
Thm: We can multiply two n × n matrices in O(n^{log 7}) = O(n^{2.808}) time.
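Putting the pieces together, here is a compact recursive sketch (illustrative only; it assumes n is a power of two and omits the cutoff to the naive method that a practical implementation would use):

```python
def strassen(X, Y):
    # Strassen's algorithm for n x n matrices given as lists of lists.
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Split X into blocks A, B, C, D and Y into blocks E, F, G, H.
    A = [r[:h] for r in X[:h]]; B = [r[h:] for r in X[:h]]
    C = [r[:h] for r in X[h:]]; D = [r[h:] for r in X[h:]]
    E = [r[:h] for r in Y[:h]]; F = [r[h:] for r in Y[:h]]
    G = [r[:h] for r in Y[h:]]; H = [r[h:] for r in Y[h:]]

    def add(P, Q): return [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    def sub(P, Q): return [[p - q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    # The seven recursive products S1..S7.
    S1 = strassen(A, sub(F, H)); S2 = strassen(add(A, B), H)
    S3 = strassen(add(C, D), E); S4 = strassen(D, sub(G, E))
    S5 = strassen(add(A, D), add(E, H))
    S6 = strassen(sub(B, D), add(G, H))
    S7 = strassen(sub(A, C), add(E, F))

    # Combine into the four blocks of Z.
    I = sub(add(add(S5, S6), S4), S2)   # I = S5 + S6 + S4 - S2
    J = add(S1, S2)
    K = add(S3, S4)
    L = sub(add(sub(S5, S3), S1), S7)   # L = S5 - S3 + S1 - S7
    return [I[r] + J[r] for r in range(h)] + [K[r] + L[r] for r in range(h)]
```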

Dynamic Programming
The dynamic programming (DP) algorithm-design technique is similar to the divide-and-conquer technique. The main difference is in replacing (possibly) repetitive recursive calls by references to already computed values stored in a special table.
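A classic toy illustration of this idea (not from the slides): the naive recursive Fibonacci takes exponential time, but storing computed values in a table makes it linear.

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the "special table" of already computed values
def fib(n):
    # Without the cache, the repetitive recursive calls take exponential
    # time; with it, each value is computed only once and then looked up.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```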

The DP technique is used primarily for optimisation problems. We very often apply DP where a brute-force search for the best solution is infeasible. However, DP is efficient only if the problem has a certain amount of structure that we can exploit:

Simple sub-problems: there must be a way of breaking the whole optimisation problem into smaller pieces sharing a similar structure.
Sub-problem optimality: an optimal solution to the global problem must be a composition of optimal sub-problem solutions.
Sub-problem overlap: optimal solutions to unrelated sub-problems can contain sub-problems in common.

0-1 Knapsack Problem
The 0-1 knapsack problem is the knapsack problem where taking fractions of items is not allowed, i.e., each item s_i ∈ S, for 1 ≤ i ≤ n, must be entirely accepted or rejected. Item s_i has a benefit b_i (s.t. b_1 ≤ b_2 ≤ … ≤ b_n) and an integer weight w_i. We have the following objective:

  maximise Σ_{s_i ∈ T} b_i subject to Σ_{s_i ∈ T} w_i ≤ W, where T ⊆ S.

Exponential solution: we can easily solve the 0-1 knapsack problem in O(2^n) time by testing all possible subsets of items. Unfortunately, exponential complexity is not acceptable for large n, and we rather have to focus on a nice characterisation of sub-problems in order to use the DP approach.

Let S_k = {s_i : i = 1, 2, …, k}, and let B[k,w] be the maximum total benefit of a subset of S_k from among all those subsets having total weight at most w. We have B[0,w] = 0 for each w ≤ W, and

  B[k,w] = B[k−1,w]                                 if w_k > w,
  B[k,w] = max{ B[k−1,w], B[k−1, w−w_k] + b_k }     otherwise.

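The 01Knapsack algorithm builds the table B row by row. Here is a Python sketch consistent with the recurrence above and the loop analysis below; the concrete code is an illustration, not the original pseudocode.

```python
def knapsack_01(items, W):
    # items: list of (benefit, weight) pairs with integer weights
    # W: integer knapsack capacity
    n = len(items)
    # B[k][w]: max benefit of a subset of the first k items of weight <= w
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):             # outer loop: n iterations
        b, wk = items[k - 1]
        for w in range(W + 1):            # inner loop: at most W+1 iterations
            if wk > w:
                B[k][w] = B[k - 1][w]     # item k does not fit
            else:
                B[k][w] = max(B[k - 1][w], B[k - 1][w - wk] + b)
    return B[n][W]
```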

The running time of the 01Knapsack algorithm is dominated by the two nested for-loops, where the outer one iterates n times and the inner one iterates at most W times. Hence:
Thm: The 01Knapsack algorithm finds the highest-benefit subset of S with total weight at most W in O(nW) time.