A root of a function is a value of its argument at which the function evaluates to zero. This may not be the smallest or the largest value of the function, but the place where a function equals zero is often important. For example, if a function for the error in a calculation can be found, then finding the place where the error is zero would be important. In one dimension, the problem being solved is as follows:

find x such that f(x) = 0 (11.1)

or in other words, find the value of x that results in f(x) being equal to zero. The function could be quite complicated, but for the technique to work, it should have a derivative.

The basis of many root finding procedures is Newton’s method. The procedure begins with a guess at the right answer. The guess in many cases does not have to be very accurate, but is simply a starting point. If a range of values is given within which to find the solution, the center of that range may be a good starting guess. Here is a problem to start with:

f(x) = (x-1)^{3} between x = -2 and x = 12 (11.2)

the center of the range is x = 5.

The initial guess is called x_{0}, and here x_{0} = 5. The function value there, f(x_{0}), is 64. The algorithm now says that the next guess for x, x_{1}, will be as follows:

x_{1} = x_{0} - f(x_{0})/f'(x_{0}) (11.3)

where f'(x_{0}) is the derivative of f at the point x = x_{0}. This is a problem: the derivative of f has to be calculated. A numerical method for doing so is examined a little later in this chapter, so for now let's do the calculus on paper and then write a function based on the result. The derivative of (x-1)^{3} is 3(x-1)^{2}, which expands to 3x^{2} - 6x + 3.

# Roots of a function
def objective(x):
    return (x - 1) * (x - 1) * (x - 1)

def deriv(x):
    return 3 * x * x - 6 * x + 3

# Range is -2 to +12
x = 5.
fx = 1000.
delta = 0.000001
print("Step 0: x=", x, " obj = ", objective(x))
i = 1
while abs(fx) > delta:
    f = objective(x)
    ff = f / deriv(x)
    x = x - ff
    fx = objective(x)
    print("Step ", i, ": x=", x, " obj = ", fx)
    i = i + 1

Step 0: x= 5.0  obj =  64.0
Step  1 : x= 3.666666666666667  obj =  18.96296296296297
Step  2 : x= 2.7777777777777777  obj =  5.618655692729766
Step  3 : x= 2.185185185185185  obj =  1.6647868719199308
...
Step  14 : x= 1.0137019495631274  obj =  2.5724508967303e-06
Step  15 : x= 1.0091346330420865  obj =  7.622076731056633e-07

The correct answer in this case is x = 1.0, so the method gets to within 0.009 of the correct root in 15 steps. Depending on the application, this could be fine. What if the initial guess were terrible? If the process starts at x = 500, it takes 27 steps but actually ends a little closer to the right answer (x = 1.0087). Starting at x = -500 also takes 27 steps.
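One way to explore this sensitivity to the initial guess is to package the loop above as a function that returns both the final estimate and the number of steps taken. This is a sketch, not from the book; the name newton_steps is illustrative:

```python
# Newton's method as a reusable function, so that different starting
# guesses can be compared. (newton_steps is an illustrative name.)

def objective(x):
    return (x - 1) * (x - 1) * (x - 1)

def deriv(x):
    return 3 * x * x - 6 * x + 3

def newton_steps(x0, delta=0.000001):
    """Run Newton's method from x0; return (estimate, steps taken)."""
    x = x0
    steps = 0
    while abs(objective(x)) > delta:
        x = x - objective(x) / deriv(x)
        steps += 1
    return x, steps

x, n = newton_steps(5.0)        # the example above: 15 steps
x2, n2 = newton_steps(500.0)    # a terrible guess: 27 steps
```

For this particular function each Newton step shrinks the distance to the root by a factor of 2/3, which is why even a starting point 500 away needs only a dozen more steps than one 4 away.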

It's possible that there is no root. What happens in that case? The program keeps looking: it overshoots, then swings back and forth indefinitely. To prevent this from happening, it is common to place a limit on the number of times the program will try. When this limit is exceeded, an error is reported indicating that there is no solution.
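A sketch of that safeguard, using f(x) = x*x + 1, which has no real root (the name newton_limited and the max_iter parameter are illustrative, not from the book):

```python
# Newton's method with a maximum iteration count. The test function
# f(x) = x*x + 1 is never zero for real x, so the loop cannot converge.

def f(x):
    return x * x + 1

def fprime(x):
    return 2 * x

def newton_limited(x0, delta=0.000001, max_iter=100):
    """Return an estimate of a root, or None if max_iter is exceeded."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) <= delta:
            return x
        d = fprime(x)
        if d == 0:
            return None        # flat spot: the step is undefined
        x = x - f(x) / d
    return None                # gave up: probably no root

print(newton_limited(0.5))     # prints None: no real root exists
```

Returning None (or raising an exception) after max_iter attempts is what keeps the program from oscillating forever.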

This first example has illustrated some common concepts used in numerical analysis, the mathematical discipline concerned with computing the values of mathematical functions and operations. The common concepts include the following:

The **initial guess**: It is relatively common to have a numerical algorithm begin at a guessed value.

The **delta**: It is also common to have an algorithm stop when the change in the result, or some mathematical quantity such as the function value, becomes smaller than a specified threshold, called delta.

**Iteration**: Numerical methods frequently repeat a calculation, expecting it to converge on the correct result, using the previously calculated value as the new starting point.

**Maximum iterations**: A numerical method is assumed not to be converging (getting close enough to the right answer) if it has not done so after a specified number of attempts, at which point it gives up.

Source: Parker James R. (2021), *Python: An Introduction to Programming*, Mercury Learning and Information; Second edition.