A Taylor expansion is useful when you want to know the general behavior of a function with respect to small changes in one of the parameters that the function depends on.
A situation that comes up a lot in physics is the “early time” behavior of some system, described by a function f(t) that depends on time t. “Early time” suggests t \approx 0, but that definition is not precise enough. What really matters is that t is small compared to some time scale \tau inherent to the problem.
Let’s make this concrete: consider radioactive decay N(t) = N_0(1/2)^{t/\tau_{1/2}} where N is the number of nuclei remaining after a time t if the nucleus has a half-life of \tau_{1/2} and N_0 is the number of nuclei that we started with at t=0. This can be rewritten as N(t) = N_0 \exp(-\gamma t) where \gamma is the radioactive decay rate.
The two forms for N(t) are equivalent when we make the identification \gamma = \ln(2)/\tau_{1/2}.
To understand the behavior of N(t) at early times, t \approx 0, we perform a Taylor expansion of N(t) \approx N_0 \left [1 - \gamma t + \gamma^2 t^2/2 - \cdots \right ]. The early time behavior of the number of nuclei is simply a linear decrease in time with slope -N_0 \gamma. The intrinsic time scale in this example is the radioactive half-life \tau_{1/2}.
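Here is a minimal numerical sketch of this early-time behavior. The particular numbers (10^6 starting nuclei, a 10 s half-life) are hypothetical, chosen just to show how the linear approximation tracks the exact exponential when t is well short of \tau_{1/2}:

```python
import math

# Hypothetical values: 1e6 nuclei, half-life of 10 seconds
N0 = 1.0e6
tau_half = 10.0
gamma = math.log(2) / tau_half  # decay rate, gamma = ln(2)/tau_half

def n_exact(t):
    # Exact decay law N(t) = N0 * exp(-gamma * t)
    return N0 * math.exp(-gamma * t)

def n_linear(t):
    # First-order Taylor expansion of N(t) about t = 0
    return N0 * (1 - gamma * t)

for t in [0.1, 1.0, 5.0]:  # times compared against tau_half = 10 s
    rel_err = abs(n_exact(t) - n_linear(t)) / n_exact(t)
    print(f"t = {t:4.1f} s  relative error of linear approximation: {rel_err:.2e}")
```

The relative error grows roughly as (\gamma t)^2/2, so the straight-line approximation is excellent for t \ll \tau_{1/2} and visibly breaks down as t approaches the half-life.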
As a practical matter, this is super useful if you are doing an experiment to determine the half-life of a new, unknown isotope. If you plot the number of nuclei remaining vs. time and see a straight line, you immediately know that your measurement time so far is well short of the half-life. On the other hand, if the plot starts to deviate from a simple line, then you know that your measurement time is on the order of the half-life.
Another example is related to feedback theory. A real-world example is the cruise control in your car. In this case, your car measures your present speed v_r and compares it to your desired or set speed v_s. The difference v_r - v_s is the deviation. To adjust your speed, the car creates an error signal u = -G(v_r - v_s), which in this case is simply proportional to the deviation. (There are more sophisticated ways to generate the error signal; see PID control, for example.) The error signal is a command to the controller, some combination of the accelerator pedal and the brake pedal in your car, telling it to speed up or slow down depending on the deviation. Note that if you are going too fast, the error signal is negative, which tells the controller to slow down.
The factor G>0 in the error signal converts the deviation which is in speed units to something that the controller can understand. Typically this is a voltage to the controller where a larger voltage to the controller tells the controller to make the car go faster. In this case, the G would have units of volts per (miles per hour).
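The proportional scheme above can be sketched in a few lines of code. Everything here is hypothetical: the gain G, the factor k converting controller volts into acceleration, and the time step are made-up numbers chosen only to show the sign convention and that the deviation shrinks:

```python
# Minimal sketch of proportional cruise control; all numbers are hypothetical.
# u = -G * (v_r - v_s): going too fast gives a negative signal (slow down).

def error_signal(v_r, v_s, G=0.5):
    """Convert a speed deviation (mph) into a controller voltage (V)."""
    return -G * (v_r - v_s)

def step(v_r, v_s, G=0.5, k=0.2, dt=0.1):
    # k converts controller volts into acceleration (mph per second per volt)
    return v_r + k * error_signal(v_r, v_s, G) * dt

v_s = 65.0   # set speed (mph)
v_r = 70.0   # current speed (mph): too fast, so the error signal is negative
for _ in range(100):
    v_r = step(v_r, v_s)
print(f"speed after feedback settles: {v_r:.2f} mph")
```

Each step shrinks the deviation by a constant factor, so the speed relaxes toward the setpoint, which is exactly the behavior a purely linear error signal produces.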
In this example, we seem to imply that the error signal can be as large as needed and that the conversion from deviation to error signal is completely linear. In reality, this is rarely the case.
This is where Taylor expansions come in handy. We can study the behavior of the controller response f(x), where x is the parameter that we're trying to control, such as car speed. For the controller response f(x) to be useful for creating an error signal, there needs to be a regime near the desired setpoint x_s where f(x) \approx m (x-x_s) + b, where m is the slope and b is the offset. Note that x-x_s is simply the deviation, and this form of f(x) can be found by a Taylor expansion of f(x) around x = x_s.
The Taylor expansion allows us to determine m and b. But that is not all! We can carry the expansion to second order in (x-x_s):
f(x) \approx f(x_s) + \frac{df(x_s)}{dx} (x-x_s) + \frac{1}{2} \frac{d^2f(x_s)}{dx^2} (x-x_s)^2 + \cdots
The Taylor expansion allows us to determine under what conditions the second order term is small enough that the linear term is dominant. This is important for feedback because the error signal needs to depend linearly on the deviation. The practical consequence is that the Taylor expansion tells us not only the slope with respect to the deviation but also what range of deviations is small enough for the feedback to work as intended.
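As a concrete sketch, the Taylor coefficients can be estimated numerically with finite differences, and the ratio of the first and second derivatives then gives the range of deviations over which the response is linear to a chosen tolerance. The response function and setpoint below are hypothetical:

```python
# Estimate m, b, and the linear range of a controller response f(x)
# about a setpoint x_s. The function and numbers are hypothetical.

x_s = 2.0  # hypothetical setpoint

def f(x):
    # Hypothetical controller response: linear near x_s, curved away from it
    return 3.0 * x + 0.5 * x**2

h = 1e-5
# Finite-difference estimates of the Taylor coefficients about x_s
b = f(x_s)                                        # offset f(x_s)
m = (f(x_s + h) - f(x_s - h)) / (2 * h)           # slope df/dx at x_s
c = (f(x_s + h) - 2 * f(x_s) + f(x_s - h)) / h**2  # curvature d2f/dx2 at x_s

# The quadratic term (c/2)(x-x_s)^2 stays below 10% of the linear term
# m(x-x_s) when |x - x_s| < 0.2 * |m/c|
linear_range = 0.2 * abs(m / c)
print(f"b = {b:.3f}, m = {m:.3f}, linear for |x - x_s| < {linear_range:.3f}")
```

This is the practical payoff of keeping the second-order term: it converts "the deviation must be small" into an actual number.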
A common example in an experimental physics laboratory is locking the frequency (color or wavelength) of a laser to an atomic transition using a feedback loop. The frequency of light needed to excite an atomic transition is given by \omega_a = (E_b - E_a)/\hbar where E_a is the energy of the atomic ground state and E_b is the energy of the desired excited state.
The frequency of the laser \omega_l is determined by the distance between the two mirrors that make up the laser cavity. Very small changes can be made to the mirror spacing by sending voltages to a PZT, a piezoelectric transducer attached to one of the mirrors.
The idea here is to determine whether the laser is exciting the atomic transition and then feedback any deviations back to the laser cavity and make small adjustments to the mirror spacing to keep the laser frequency \omega_l equal to the atomic transition frequency \omega_a.
It turns out that when \omega_l = \omega_a, the atoms glow maximally and this glow can be picked up by a photodetector. Our goal is to create an error signal that monitors the amount of glowing as measured by the photodetector and send a compensating voltage to the PZT to adjust the mirror spacing.
The amount that the atoms glow has the functional form f(\omega_l) = \frac{A}{(\omega_l - \omega_a)^2 + \gamma^2}, where A depends on the photodetector and the angle at which it views the atoms, and \gamma depends only on properties of the atoms themselves.
If you perform a Taylor expansion of f(\omega_l) about \omega_a, you’ll find that: f(\omega_l) \approx \frac{A}{\gamma^2} - \frac{A}{\gamma^4}(\omega_l - \omega_a)^2 + \cdots.
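A quick numerical check makes the problem with this expansion obvious: the glow signal is symmetric about resonance, so its expansion has no linear term. The values of A, \gamma, and \omega_a below are hypothetical:

```python
# Check (with hypothetical A, gamma, omega_a) that the glow signal is flat
# to first order at resonance: f(w) ~ A/g^2 - (A/g^4)(w - w_a)^2
A, gamma, w_a = 1.0, 2.0, 100.0

def f(w):
    # Lorentzian glow signal as a function of laser frequency w
    return A / ((w - w_a)**2 + gamma**2)

for delta in [0.01, 0.1, 0.5]:  # detunings small compared to gamma
    approx = A / gamma**2 - (A / gamma**4) * delta**2
    print(f"delta = {delta}: exact = {f(w_a + delta):.6f}, "
          f"quadratic approx = {approx:.6f}")
```

Because f(\omega_a + \delta) = f(\omega_a - \delta), the signal alone cannot tell you which side of resonance you are on, which is exactly why it is useless as an error signal.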
This is not useful since we want a form that is linear in the deviation (\omega_l-\omega_a). It turns out, using tricks like frequency modulation, we can get a signal from the photodetector that is the derivative of f(\omega_l) with respect to \omega_l. The derivative signal has a more useful Taylor expansion at \omega_l \approx \omega_a:
\frac{df}{d\omega_l} \approx -\frac{2A}{\gamma^4}(\omega_l-\omega_a) + \frac{4A}{\gamma^6}(\omega_l - \omega_a)^3 + \cdots
This expansion tells us that the slope is -2A/\gamma^4 and that the laser will stay locked so long as the deviation is small enough that the cubic term is negligible compared to the linear term:
\frac{4A}{\gamma^6}|\omega_l - \omega_a|^3 \ll \frac{2A}{\gamma^4}|\omega_l-\omega_a|
or equivalently:
|\omega_l - \omega_a| \ll \frac{\gamma}{\sqrt{2}}
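This lock range can be checked numerically by comparing the exact derivative signal against its leading linear term. The values of A and \gamma are hypothetical:

```python
import math

# Compare the exact derivative signal to its linear Taylor term
# (hypothetical A and gamma).
A, gamma = 1.0, 2.0

def dfdw(delta):
    # Exact derivative of f with respect to omega_l; delta = omega_l - omega_a
    return -2 * A * delta / (delta**2 + gamma**2)**2

def linear_term(delta):
    # Leading Taylor term: slope -2A/gamma^4 times the deviation
    return -(2 * A / gamma**4) * delta

bound = gamma / math.sqrt(2)  # lock range from the Taylor expansion
for frac in [0.05, 0.5, 1.0]:
    delta = frac * bound
    rel_err = abs(dfdw(delta) - linear_term(delta)) / abs(linear_term(delta))
    print(f"delta = {frac:.2f} * gamma/sqrt(2): relative error = {rel_err:.3f}")
```

Deep inside the lock range the linear term is an excellent approximation, while right at |\omega_l - \omega_a| = \gamma/\sqrt{2} the error is of order one, confirming that the inequality marks where the feedback stops behaving linearly.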