I have done a little work using branching process models to study extinction in ecology and the emergence of infectious diseases. One of the questions that comes up in those studies is how sensitive an extinction probability is to changes in the system. In some cases, this turns out to be a tricky question, requiring the generalization of L'Hopital's rule to systems of equations. Here's how the story goes...
As it is usually taught to calculus students, L'Hopital's rule is stated as follows:
Theorem:
If \( \lim_{p\rightarrow p_0} f(p) = \lim_{p\rightarrow p_0} g(p) = 0 \) (and the limit on the right below exists) then \begin{gather} \lim_{p\rightarrow p_0} \frac{f(p)}{g(p)} = \lim_{p\rightarrow p_0} \frac{\frac{df}{dp}(p)}{\frac{dg}{dp}(p)} . \end{gather}
One way to explain this is to rewrite it as a problem about a parameterized linear equation: calculate \(x(p)\), where \(x\) solves an equation of the form \[ g(p) x - f(p) = 0.\] Naturally, in cases where \(g(p) \neq 0\), \[ x = \frac{ f(p) } { g(p) } .\] If \(g(p) = 0\) but \(f(p) \neq 0\), then \(x\) is infinite. If \(g(p_0) = 0\) and \(f(p_0) = 0\), then Taylor expansion comes to the rescue: \begin{align} g(p) x - f(p) &= \left[ g(p_0) + \frac{dg}{dp}(p_0) \Delta p + O(\Delta p^2) \right] x \\ & \quad \quad - \left[ f(p_0) + \frac{df}{dp}(p_0) \Delta p + O(\Delta p^2) \right] \\ &\approx \frac{dg}{dp}(p_0) \Delta p \; x - \frac{df}{dp}(p_0) \Delta p , \end{align} and our solution \(x(p_0)\) is given by L'Hopital's rule. Of course, we've made a subtle mistake: \(x\) is an implicit function of \(p\), so we also need to include its expansion. However, since that expansion's terms will always be of higher order, they have no effect on the leading-order solution.
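To make the linear-equation reading concrete, here is a tiny numerical sketch. The choice \(f(p) = \sin p\), \(g(p) = p\), \(p_0 = 0\) and the sample points are my own illustration, not anything from the analysis above.

```python
import numpy as np

# Linear-equation reading of L'Hopital's rule: x(p) solves g(p) x = f(p).
# Illustrative choice (assumption): f(p) = sin(p), g(p) = p, p0 = 0, so the
# rule predicts x(p0) = f'(0) / g'(0) = cos(0) / 1 = 1.
f, df = np.sin, np.cos
g, dg = (lambda p: p), (lambda p: 1.0)

for p in [1e-1, 1e-3, 1e-6]:
    print(p, f(p) / g(p))            # x(p) = f(p) / g(p) approaches 1 as p -> 0

print("L'Hopital limit:", df(0.0) / dg(0.0))
```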
Now, the observant reader will notice that this alternative interpretation is a way to rescue our solution of a singular linear equation. But what if, instead of a scalar linear equation, we had a singular system depending on a parameter? As it happens, this problem can also be solved with a little ingenuity.
Suppose we need to solve \[ A(t) x - b(t) = 0 \] at \(t = 0\), but \(A(0)\) is singular AND \(b(0)=0\), although for \(t \neq 0\), \(b(t)\) is always in the column space of \(A(t)\). Well, if we Taylor expand, \begin{align} A(t) x(t) - b(t) &= \left[ A(0) + \frac{dA}{dt}(0) \Delta t + O(\Delta t^2) \right] \left[ x(0) + \frac{dx}{dt}(0) \Delta t + O(\Delta t^2) \right] \\ &\quad\quad - \left[ b(0) + \frac{db}{dt}(0) \Delta t + O(\Delta t^2) \right] \\ &= \left[ A(0) x(0) - b(0) \right] \\ &\quad \quad + \left[ \frac{dA}{dt}(0) x(0) + A(0) \frac{dx}{dt}(0) - \frac{db}{dt}(0) \right] \Delta t + O(\Delta t^2) \\ &\approx \left[ A(0) x(0) - b(0) \right] + \left[ \frac{dA}{dt}(0) x(0) + A(0) \frac{dx}{dt}(0) - \frac{db}{dt}(0) \right] \Delta t . \end{align} Now, the leading-order equation looks under-determined, but that is exactly what we were hoping for. By assumption, \(b(0)\) lies in the column space of \(A(0)\), so \(A(0) x(0) = b(0)\) has solutions, but the components of \(x(0)\) in the nullspace of \(A(0)\) are left undetermined. The first-order equation \[ A(0) \frac{dx}{dt}(0) = \frac{db}{dt}(0) - \frac{dA}{dt}(0) x(0) \] only has a solution if we choose the nullspace components of \(x(0)\) so that \[ \frac{db}{dt}(0) - \frac{dA}{dt}(0) x(0) \] lies in the column space of \(A(0)\). Thus, \(x(0)\) is exactly determined by Taylor expansion of the problem. Well, actually, just like the traditional L'Hopital's rule, things may fail at this point; if they do, we'll need to include higher-order terms, or we've shown there is no solution. But when studying root collisions with positive velocity, this first-order condition is sufficient.
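Here is a minimal numerical sketch of this recipe. The function name `lhopital_solve` and the 2x2 example are my own, and the code assumes the generic case in which the first-order solvability condition already pins down \(x(0)\).

```python
import numpy as np

def lhopital_solve(A0, dA, b0, db, tol=1e-12):
    """Leading-order solution x(0) of A(t) x(t) = b(t) when A(0) is singular
    but b(0) lies in the column space of A(0).

    The nullspace component of x(0) is fixed by requiring that
    db/dt - dA/dt x(0) lie in the column space of A(0), so that the
    first-order equation for dx/dt(0) is solvable."""
    U, sigma, Vt = np.linalg.svd(A0)
    small = sigma <= tol * sigma[0]      # singular values treated as zero
    V_null = Vt[small].T                 # basis for the nullspace of A(0)
    W = U[:, small]                      # basis for the left nullspace of A(0)

    # Any particular solution of A(0) x = b(0); lstsq returns the minimum-norm one.
    x_part, *_ = np.linalg.lstsq(A0, b0, rcond=None)

    # Solvability of A(0) dx/dt = db/dt - dA/dt x(0) means the right-hand side
    # must be orthogonal to the left nullspace of A(0):
    #   W^T (db - dA (x_part + V_null c)) = 0
    #   =>  (W^T dA V_null) c = W^T (db - dA x_part)
    M = W.T @ dA @ V_null
    rhs = W.T @ (db - dA @ x_part)
    c = np.linalg.solve(M, rhs)          # needs the "positive velocity" condition
    return x_part + V_null @ c

# Example: A(t) = [[1, 1], [1, 1 + t]] is singular at t = 0 with b(0) in its
# column space; for t != 0 the exact solution is x(t) = (t, 2), so x(0) = (0, 2).
A0 = np.array([[1.0, 1.0], [1.0, 1.0]])
dA = np.array([[0.0, 0.0], [0.0, 1.0]])
b0 = np.array([2.0, 2.0])
db = np.array([1.0, 3.0])
print(lhopital_solve(A0, dA, b0, db))    # ~ [0., 2.]
```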
For sensitivity analysis in multitype branching processes, we have nonlinear systems of the form \begin{gather} s(t) = G(s(t),t), \end{gather} where \(G\) is the probability generating function and \(s(t)\) is the corresponding vector of extinction probabilities, and we'd like to determine the solution of the linear system obtained by differentiating in \(t\), \begin{gather} \label{eq:sense} \left[ I - \frac{\partial G}{\partial s} \right] \frac{\mathrm{d} s}{\mathrm{d}t} = \frac{\partial G}{\partial t} . \end{gather} However, for a critical process this system is under-determined: at the fixed point \(s = 1\), the matrix \(I - \partial G/\partial s\) is singular, and since \(G(1,t) = 1\) for every \(t\), the right-hand side \(\partial G/\partial t\) vanishes there as well, which is exactly the situation handled above. So we augment the system using the method above. Differentiating once more gives the sensitivity equation for a critical process with PGF \(G\), variable \(s\), and parameter \(t\): \begin{gather} \left( \frac{\partial^2 G}{\partial s^2} \frac{\mathrm{d} s}{\mathrm{d}t} + 2 \frac{\partial^2 G}{\partial s \partial t} \right) \frac{\mathrm{d} s}{\mathrm{d}t} + \left( \frac{\partial G}{\partial s} - I \right) \frac{\mathrm{d}^2 s}{\mathrm{d}t^2} = 0 , \end{gather} where the \(\partial^2 G/\partial t^2\) term has dropped out because \(G(1,t) = 1\) identically in \(t\).
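As a quick sanity check, here is a single-type sketch of the augmented equation. The Poisson\((1+t)\) offspring model and the use of scipy's brentq root finder are my own illustration: in the scalar critical case \(\partial G/\partial s = 1\), so the second term drops and the non-trivial branch gives \(\mathrm{d}s/\mathrm{d}t = -2\, G_{st}/G_{ss}\), which for this model is \(-2\), consistent with the classical result that the survival probability of a slightly supercritical process grows like \(2(m-1)/\sigma^2\).

```python
import numpy as np
from scipy.optimize import brentq

# Single-type example (assumption): Poisson(1 + t) offspring, so the PGF is
# G(s, t) = exp((1 + t)(s - 1)).  At t = 0 the process is critical and the
# extinction probability is s = 1.
def G(s, t):
    return np.exp((1.0 + t) * (s - 1.0))

# Partial derivatives of G at the critical point (s, t) = (1, 0).
G_ss = 1.0   # d2G/ds2   = (1 + t)^2 G            -> 1
G_st = 1.0   # d2G/ds dt = (1 + (1 + t)(s - 1)) G -> 1

# Scalar form of the augmented equation at criticality (dG/ds = 1):
#   (G_ss s' + 2 G_st) s' = 0, whose non-trivial branch is
s_prime = -2.0 * G_st / G_ss
print("predicted ds/dt at criticality:", s_prime)        # -2.0

# Compare with the non-trivial root of s = G(s, t) for small t > 0.
for t in [1e-2, 1e-3, 1e-4]:
    q = brentq(lambda s: s - G(s, t), 0.0, 1.0 - t)
    print(t, (q - 1.0) / t)                               # -> -2 as t -> 0
```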