Numerical Method Analysis: Algebraic and Transcendental Equations (Non-Linear)


About This Presentation

Numerical Method Analysis- Solution of Algebraic and Transcendental Equations (Non-Linear Equation). Algorithms- Bisection Method, False Position Method, Newton-Raphson Method, Secant Method, Successive Approximation Method.

Visit here for the code implementation- https://github.com/MinhasKama...


Slide Content

2014
Numerical Method Analysis
Solution of Algebraic and Transcendental Equations





Minhas Kamal
BSSE0509


Solution of Algebraic and
Transcendental Equations

An equation of the type f(x) = 0 is either algebraic or transcendental. These types
of equations can be solved by using two types of methods-

1. Direct Method: This method gives the exact value of all the roots directly in a
finite number of steps.

2. Indirect or Iterative Method: Iterative methods are best suited for computer
programs to solve an equation. They are based on the concept of successive
approximation.
In the Iterative Method there are two ways to solve an equation-
i. Bracketing Method: We take two initial points such that the root lies in
between them. Example- Bisection Method, False Position Method.
ii. Open End Method: We take one or two initial values where the root may
be anywhere. Example- Newton-Raphson Method, Successive Approximation
Method, Secant Method.
Below we shall discuss these methods in detail.

Bisection Method
The Bisection Method is a root-finding method that repeatedly bisects
an interval and then selects a subinterval in which a root must lie for further
processing.
Suppose that the bisection method is used to find a root of the polynomial-
f(x) = x³ − x − 2
Now, two numbers a and b have to be found such that f(a) and f(b) have opposite
signs. For the function, a=1 and b=2 satisfy this condition, as-
f(1) = 1³ − 1 − 2 = −2 < 0, and f(2) = 2³ − 2 − 2 = 4 > 0
In the first iteration the midpoint is-

c = (a + b) / 2 = (1 + 2) / 2 = 1.5
The function value is-
f(c) = 1.5³ − 1.5 − 2 = −0.125
Because f(c) is negative (the same sign as f(a)), the value of a is replaced with 1.5. As this
continues, the interval between a and b becomes increasingly smaller, converging on the root of
the function; since the interval halves at every step, after n iterations the root is known to
within (b − a)/2ⁿ. See this happen in the table below.

Iteration a b c f(c)
1 1.00000 2.00000 1.50000 −0.12500
2 1.50000 2.00000 1.75000 1.60937
3 1.50000 1.75000 1.62500 0.66601
4 1.50000 1.62500 1.56250 0.25219
5 1.50000 1.56250 1.53125 0.05911
6 1.50000 1.53125 1.51562 −0.03405
7 1.51562 1.53125 1.52343 0.01225
8 1.51562 1.52343 1.51953 −0.01097
9 1.51953 1.52343 1.52148 0.00062
So we get 1.52148 as a root of the polynomial, using the tolerance ε = 0.001 on |f(c)|.
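
All of the C functions in this and the later sections call a small helper f() that is not shown
on these slides. A minimal sketch of it, assuming it simply evaluates the example polynomial
f(x) = x³ − x − 2 (plus the header needed for printf), would be-

#include <stdio.h>   /* for printf() used by the method functions below */

/* assumed helper (not in the original slides): evaluates f(x) = x^3 - x - 2 */
double f(double x){
    return x*x*x - x - 2;
}
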
Here is a function in C that uses this method to solve equations-

/**
 * takes two initial values and shortens the interval from both sides
 **/
double BisectionMethod(){
    double root=0;

    double a=1, b=3;
    double c=0, fc=0;

    int loopCounter=0;
    if(f(a)*f(b) < 0){
        while(1){
            loopCounter++;
            c=(a+b)/2;                          // midpoint of the current interval
            fc=f(c);

            if(fc<0.00001 && fc>-0.00001){      // |f(c)| is small enough: c is the root
                root=c;
                break;
            }

            if((f(a))*(fc) < 0){                // the root lies in [a, c]
                b=c;
            }else{                              // the root lies in [c, b]
                a=c;
            }
        }
    }
    printf("It took %d loops.\n", loopCounter);

    return root;
}
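
A note on the stopping test: the loop ends when |f(c)| < 0.00001, not when the interval [a, b]
itself is small. An equally common choice is to stop once (b − a)/2 drops below the tolerance,
which bounds the error in c directly.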



False Position Method
This method produces a new root estimate at every iteration and gradually approaches a more
precise solution. The method uses this equation for finding the estimated root-

c = (a·f(b) − b·f(a)) / (f(b) − f(a))

Let’s take the previous function,
f(x) = x³ − x − 2
For a=1 and b=2, f(a) and f(b) have opposite signs. So in the first iteration-

c = [1*4 – 2*(-2) ] / [4 – (-2)] = 1.3333
The function value is-
f(c) = 1.3333³ − 1.3333 − 2 = −0.9629

So we get the following table-
Iteration c f(c)
1 1.3333 −0.9629
2 1.4626 −0.3333
3 1.5040 −0.1018
4 1.5163 −0.0298
5 1.5199 −0.0086
6 1.5209 −0.0025
7 1.5212 −0.0007

So, we got root =1.5212 for the polynomial.
Here is a function in C that uses this method to solve equations-

/**
 * takes two initial values and shortens the interval from one side
 **/
double FalsePosition(){
    double root=0;

    double a=1, b=2;
    double c=0, fc=0;

    int loopCounter=0;
    if(f(a)*f(b) < 0){
        while(1){
            loopCounter++;

            c=(a*f(b) - b*f(a)) / (f(b) - f(a));   // where the chord through (a,f(a)) and (b,f(b)) crosses zero
            fc=f(c);

            if(fc<0.00001 && fc>-0.00001){         // |f(c)| is small enough: c is the root
                root=c;
                break;
            }

            if((f(a))*(fc) < 0){                   // the root lies in [a, c]
                b=c;
            }else{                                 // the root lies in [c, b]
                a=c;
            }
        }
    }
    printf("It took %d loops.\n", loopCounter);

    return root;
}



Newton-Raphson Method
The Newton-Raphson method is a simple, fast and well-known method for finding roots of a
function f(x). It uses one initial point to find the root, and it uses not only the function
itself but also its derivative.
The derivative of the previous function is-
f′(x) = 3x² − 1
For this function we take x1 = 1 and we try to find x2 with this equation-

x2 = x1 − f(x1) / f′(x1)

In every iteration we update the value of x1. So the first time we get-

x2 = 1 − f(1) / f′(1) = 1 − (−2) / 2 = 2

So-

f(x2) = f(2) = 4
Here is the table-
Iteration X2 f(x2)
1 2.0000 4.0000
2 1.6363 0.7453
3 1.5303 0.0539
4 1.5214 0.0003


So, we got root =1.5214.
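The NewtonRaphson() function below also calls a helper f2() for the derivative, which is
likewise not shown on these slides. A minimal sketch, assuming it evaluates f′(x) = 3x² − 1,
would be-

/* assumed helper (not in the original slides): evaluates the derivative f'(x) = 3x^2 - 1 */
double f2(double x){
    return 3*x*x - 1;
}
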
Here is the code of the function-

/**
 * uses one initial value and gradually moves it toward the real root
 **/
double NewtonRaphson(){
    double root=0;

    double x1=1;
    double x2=0;

    int loopCounter=0;
    while(1){
        loopCounter++;

        x2 = x1 - (f(x1)/f2(x1));               // Newton step: f2() returns the derivative f'(x)

        if(f(x2)<0.00001 && f(x2)>-0.00001){    // |f(x2)| is small enough: x2 is the root
            root=x2;
            break;
        }

        x1=x2;
    }
    printf("It took %d loops.\n", loopCounter);

    return root;
}



Successive Approximation Method
In this method we first take an initial value and check whether it is approximately the root.
If it is not, we compute the next approximation from the previous one, and in this way the
successive approximations move toward the root.
Take the previous example. We shall first find g(x)-

Let x³ − x − 2 = 0
⟹ x³ = x + 2
⟹ x = ∛(x + 2)
So, g(x) = ∛(x + 2)

Now for the initial value we take x = 1. So then-

g(1) = ∛(1 + 2) = 1.44225

We get-

x − g(x) = 1 − 1.44225 = −0.44225

Here is the table-

Iteration g(x) x-g(x)
1 1.44225 -0.44225
2 1.50989 -0.06764
3 1.51972 -0.00982
4 1.52114 -0.00141
5 1.52134 -0.00020

The method finds 1.52134 as the root.
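
The FixedPoint() function below calls a helper g(), again not shown on these slides. A minimal
sketch, assuming it evaluates g(x) = ∛(x + 2) with cbrt() from <math.h>, would be-

#include <math.h>   /* for cbrt() */

/* assumed helper (not in the original slides): evaluates g(x) = cbrt(x + 2) */
double g(double x){
    return cbrt(x + 2);
}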

The code is below-

/**
 * uses one initial value and gradually moves it toward the real root
 **/
double FixedPoint(){
    double root=0;
    double x=1;

    int loopCounter=0;
    while(1){
        loopCounter++;

        if( (x-g(x)) <0.00001 && (x-g(x)) >-0.00001){   // |x - g(x)| is small enough: x is the root
            root = x;
            break;
        }

        x=g(x);                                         // next approximation
    }
    printf("It took %d loops.\n", loopCounter);

    return root;
}




Secant Method
The secant method is very similar to the Newton-Raphson method. The Newton-Raphson method
needs the derivative of the function at every step, which is a drawback; the secant method
avoids this by approximating the derivative from the two most recent points.
For our equation we take two initial values (the root may or may not lie between them),
x0 = 1 and x1 = 2, and see if x1 is the root-

f(x1) = 4
If it is not, then we compute a new value x2 where-

x2 = (x0·f(x1) − x1·f(x0)) / (f(x1) − f(x0)) = (1·4 − 2·(−2)) / (4 − (−2)) = 1.333

(This is algebraically the same as x2 = x1 − f(x1)·(x1 − x0) / (f(x1) − f(x0)), the usual
secant update.) We set x0 = x1 and x1 = x2 and repeat the process till f(x1) ≈ 0.



See this happen in the following table-


Iteration X0 X1 f(X1)
1 1.0000 2.0000 4.00000
2 2.0000 1.3333 -0.96296
3 1.3333 1.4626 -0.33333
4 1.4626 1.5311 0.05862
5 1.5311 1.5209 -0.00269
6 1.5209 1.5213 -0.00002

So we get root = 1.5213.
Here is the C function-
/**
 * uses two initial values; both values approach the root
 **/
double Secant(){
    double root=0;

    double x0=1;
    double x1=2;
    double x2=0;

    int loopCounter=0;
    while(1){
        loopCounter++;

        if(f(x1)<0.00001 && f(x1)>-0.00001){            // |f(x1)| is small enough: x1 is the root
            root=x1;
            break;
        }

        x2 = ((x0*f(x1))-(x1*f(x0))) / (f(x1)-f(x0));   // secant update

        x0=x1;
        x1=x2;
    }
    printf("It took %d loops.\n", loopCounter);

    return root;
}
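
Finally, as a usage sketch (not part of the original slides), a small main() that runs all five
methods, assuming the helper definitions f(), f2() and g() given earlier, could look like this-
every call should print a root value close to 1.5214.

int main(void){
    printf("Bisection      : %f\n", BisectionMethod());
    printf("False Position : %f\n", FalsePosition());
    printf("Newton-Raphson : %f\n", NewtonRaphson());
    printf("Fixed Point    : %f\n", FixedPoint());
    printf("Secant         : %f\n", Secant());

    return 0;
}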