# Differentiation in Python

Determining the derivative of a function is often thought of as a symbolic operation, with a result that is valid for any value of the function's argument. This may not always be true, and it may not be easy to do in the general case. Think about what the previous algorithm does: it needs the derivative of a function at one specific point. Can that be determined if the algebraic form of the function is not known? Yes, it can, to within some degree of accuracy.

The derivative of a function at a point x is the slope of the curve defined by that function at that point. The definition of the derivative of f at the point x is as follows:

    f'(x) = lim (h → 0) [f(x + h) - f(x)] / h

As h gets smaller and smaller, the quotient approaches what is called a limit in calculus. This formula is essentially the mathematical definition of a derivative. On a computer, h can be made quite small, but it can never be zero. If the expression above is used with a small fixed h, it gives an estimate of the derivative that works in many cases. It is based on sampling two points of the function each time. An improvement can be made by using more points. For example, the four-point (central-difference) formula

    f'(x) ≈ [f(x - 2h) - 8f(x - h) + 8f(x + h) - f(x + 2h)] / (12h)

uses four points and often produces better results.
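As a quick illustration of the difference in accuracy, the two estimates can be compared on a function whose derivative is known exactly. This sketch uses math.sin (whose derivative is math.cos); the test point and the value of h are arbitrary choices:

```python
import math

x, h = 1.0, 0.01
true = math.cos(x)  # exact derivative of sin at x

# Two-point (forward difference) estimate
two_pt = (math.sin(x + h) - math.sin(x)) / h

# Four-point (central difference) estimate
four_pt = (math.sin(x - 2*h) - 8*math.sin(x - h)
           + 8*math.sin(x + h) - math.sin(x + 2*h)) / (12*h)

print(abs(two_pt - true))   # error of the two-point estimate
print(abs(four_pt - true))  # error of the four-point estimate: far smaller
```

For the same h, the four-point formula's error is typically several orders of magnitude smaller than the two-point formula's.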

Coding this uses a function passed as a parameter. It makes sense that the function to be differentiated would be a parameter to the function that differentiates it; the other parameters will be x, the point at which the derivative is evaluated, delta, the accuracy desired, and niter, the maximum number of iterations. The calculation should take place in a try-except block so that numerical errors will be caught. The two-point and the four-point versions of the function that performs numerical differentiation are as follows:
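A minimal sketch of the two functions, consistent with the description above, might look like this. The starting value of h, the halving schedule, and the convention of returning both the estimate and the number of iterations used are assumptions, not the book's exact listing:

```python
def deriv1(f, x, delta=1e-6, niter=20):
    # Two-point estimate: halve h until successive estimates
    # differ by less than delta, or niter iterations pass.
    h = 1.0
    prev = (f(x + h) - f(x)) / h
    for i in range(niter):
        h = h / 2.0
        try:
            d = (f(x + h) - f(x)) / h
        except ArithmeticError:
            return prev, i          # numerical error: return last good estimate
        if abs(d - prev) < delta:
            return d, i + 1
        prev = d
    return prev, niter

def deriv2(f, x, delta=1e-6, niter=20):
    # Four-point estimate, using the same iteration scheme.
    h = 1.0
    prev = (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)
    for i in range(niter):
        h = h / 2.0
        try:
            d = (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)
        except ArithmeticError:
            return prev, i
        if abs(d - prev) < delta:
            return d, i + 1
        prev = d
    return prev, niter
```

Returning the iteration count along with the estimate lets a caller see how quickly each formula converged.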

Testing these functions is an excellent demonstration. First, a function to be differentiated is written. The previous example on finding roots has a simple one (renamed as f1):

```python
def f1(x):
    return (x - 1) * (x - 1) * (x - 1)
```

That example also has a function that represents the derivative of f1 at the point x (renamed df1):

```python
def df1(x):
    return 3 * x * x - 6 * x + 3
```

The function df1() should return the exact derivative of f1(), and can be used to check the value returned by deriv1() or deriv2(). Create a loop that runs over a range of x values and compares the value returned by df1() with those returned by deriv1() and/or deriv2():

```python
for i in range(1, 20):
    x = i * 1.0
    f = f1(x)
    df = df1(x)
    mydf, n0 = deriv1(f1, x)      # estimate and iteration count
    mydf2, n1 = deriv2(f1, x)
    print(f, df, mydf, n0, "", mydf2, n1)
```

The output shows, for each x, the function value, the true derivative, and the two estimates with their iteration counts. Both functions give excellent results in very few iterations in this case. Of course, some functions present more difficulties than simple polynomials do (see Press et al. in the References).

Source: Parker, James R. (2021), Python: An Introduction to Programming, Second edition, Mercury Learning and Information.