Posts

2024-02-11: Symbolic algebra and typing

2023-08-01: Population waves

2023-05-18: Math of telephone billing mystery

2023-05-05: Franklin and DNA

2023-04-25: On angle and dimension

2023-02-20: On Leonardo da Vinci and Gravity

2022-04-29: Fabricating Evidence to catch Carmen Sandiego

2022-03-04: Probabilistic law of the excluded middle

2020-05-04: Archimedes and the sphere

2019-05-16: Glow worms return

2019-04-11: Original memetic sin

2019-01-31: The theory of weight

2018-11-06: Origins of telephone network theory

2018-10-24: Modern thought

2018-09-10: Feeding a controversy

2018-06-11: Glow worm distribution

2018-04-23: Outlawing risk

2017-08-22: A rebuttal on the beauty in applying math

2017-04-22: Free googles book library

2016-11-02: In search of Theodore von Karman

2016-09-25: Amath Timeline

2016-02-24: Math errors and risk reporting

2016-02-20: Apple VS FBI

2016-02-19: More Zika may be better than less

2016-02-17: Dependent Non-Commuting Random Variable Systems

2016-01-14: Life at the multifurcation

2015-09-28: AI ain't that smart

2015-06-24: Mathematical Epidemiology citation tree

2015-03-31: Too much STEM is bad

2015-03-24: Dawn of the CRISPR age

2015-02-12: A Comment on How Biased Dispersal can Preclude Competitive Exclusion

2015-02-09: Hamilton's selfish-herd paradox

2015-02-08: Risks and values of microparasite research

2014-11-10: Vaccine mandates and bioethics

2014-10-18: Ebola, travel, president

2014-10-17: Ebola comments

2014-10-12: Ebola numbers

2014-09-23: More stochastic than?

2014-08-17: Feynman's missing method for third-orders?

2014-07-31: CIA spies even on congress

2014-07-16: Rehm on vaccines

2014-06-21: Kurtosis, 4th order diffusion, and wave speed

2014-06-20: Random dispersal speeds invasions

2014-05-06: Preservation of information asymmetry in Academia

2014-04-16: Dual numbers are really just calculus infinitesimals

2014-04-14: More on fairer markets

2014-03-18: It's a mad mad mad mad prisoner's dilemma

2014-03-05: Integration techniques: Fourier--Laplace Commutation

2014-02-25: Fiber-bundles for root-polishing in two dimensions

2014-02-17: Is life a simulation or a dream?

2014-01-30: PSU should be infosocialist

2014-01-12: The dark house of math

2014-01-11: Inconsistencies hinder pylab adoption

2013-12-24: Cuvier and the birth of extinction

2013-12-17: Risk Resonance

2013-12-15: The cult of the Levy flight

2013-12-09: 2013 Flu Shots at PSU

2013-12-02: Amazon sucker-punches 60 minutes

2013-11-26: Zombies are REAL, Dr. Tyson!

2013-11-22: Crying wolf over synthetic biology?

2013-11-21: Tilting Drake's Equation

2013-11-18: Why \(1^{\infty} \neq 1\)

2013-11-15: Adobe leaks of PSU data + NSA success accounting

2013-11-14: 60 Minutes misreport on Benghazi

2013-11-11: Making fairer trading markets

2013-11-10: L'Hopital's Rule for Multidimensional Systems

2013-11-09: Using infinitesimals in vector calculus

2013-11-08: Functional Calculus

2013-11-03: Elementary mathematical theory of the health poverty trap

2013-11-02: Proof of the circle area formula using elementary methods

Fiber-bundles for root-polishing in two dimensions

In some of the theoretical biology modeling I do, we try to keep problems low-dimensional. A useful model is one that we can understand intuitively, and high-dimensional spaces are usually not intuitive. In practice, this means working in one dimension. But I keep running into situations where there is some equation we want to solve that has a few more dimensions than one. These situations can be rather frustrating: the problem is still, in some sense, simple, but the only numerical tools we have for solving it are very general and sporadically unreliable. They may work most of the time, yet fail one time in twenty, and I have little idea why they fail in that particular situation or how to fix the problem.

Root-finding is a great example of my problem. If you check Wikipedia, there is an elaborate discussion of different approaches to scalar root-finding, with various pros and cons. But as soon as you go from one dimension to two, there is almost nothing left to say. Sure, there are arbitrary-dimensional approaches such as Picard's generic fixed-point iteration scheme and the vector version of Newton's method. But Picard iteration only finds stable fixed points, and Newton's method can fail for less obvious reasons. You would think there would be something useful to say between the 1-d case and the n-d case.
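
For concreteness, here is a minimal numpy sketch of both generic schemes, assuming the map is available as a vector function F (the helper names, tolerances, and iteration caps are my own choices for illustration, not from any particular library):

```python
import numpy as np

def picard(F, x0, tol=1e-12, maxit=200):
    """Generic fixed-point (Picard) iteration x <- F(x).
    Converges only when the fixed point is attracting."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        x_new = F(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Vector Newton's method on G(x) = F(x) - x, given the map's Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        G = F(x) - x
        if np.linalg.norm(G) < tol:
            return x
        # The Jacobian of G(x) = F(x) - x is J(x) - I.
        x = x - np.linalg.solve(J(x) - np.eye(x.size), G)
    raise RuntimeError("Newton's method did not converge")
```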

Numerical Recipes has a helpful discussion of the difficulties of root-finding in two or more dimensions. There are some ideas out there, like multi-dimensional bisection methods, but they don't get us far. High-dimensional root-finding is a hard problem. But so is minimization in an arbitrary landscape, and we tackle such problems all the time these days, with apparent success. In fact, one can make a convincing argument that root-finding in one dimension is much easier than minimization in one dimension. So why aren't there better methods?

In the summer of 2013, I was studying a two-dimensional nonlinear black-box root-finding problem and discovered a useful numerical approach, based on a fiber-bundle decomposition of the 2-dimensional space into 1-dimensional fibers on which fast scalar root-finders can be employed. This idea was probably first discovered decades ago by somebody else, but their work has unfortunately not received enough promotion to reach my ears. So let me give an outline of my version of the idea, and hopefully we can get it a little more attention.

I would like to solve for the fixed point of a map represented by a system of two equations \begin{gather} x_{t+1} = f(x_t, y_t), \quad y_{t+1} = g(x_t, y_t). \end{gather} When $f$ and $g$ are both smooth, we expect our map to behave linearly near a fixed point $(x^*, y^*)$. So, let's make some weakly non-linear vector fields to use as examples of our problem.
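
The original post's examples aren't reproduced here, so the map below is a hypothetical stand-in: the coefficients are invented, chosen so that the origin is a fixed point whose linearization has one eigenvalue outside the unit circle (so plain fixed-point iteration fails there, while Newton's method succeeds):

```python
import numpy as np

eps = 0.1  # strength of the non-linearity

def f(x, y):
    """First coordinate of the map: a linear part plus a weak quadratic term."""
    return 0.5 * x + 0.2 * y + eps * x * y

def g(x, y):
    """Second coordinate of the map."""
    return -0.3 * x + 1.4 * y + eps * x**2

def F(p):
    """Vector form (x, y) -> (f, g), convenient for the generic solvers."""
    return np.array([f(p[0], p[1]), g(p[0], p[1])])
```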

Now, the basic idea is that we want to somehow use all our wonderful 1-d methods to solve this 2-dimensional equation. Unfortunately, the way to do this isn't immediately clear. In minimization problems, we can take a cross-section of the potential, find the 1-d minimum, and then track that minimum as we vary the cross-section. But with root-finding, when we take a cross-section of the vector field, there are usually no local stationary points. So we have to induce some kind of local structure on this cross-sectional fiber that we can then study across fibers.

One naive classical idea is to study the local displacement from the fiber. If we calculate the magnitude of the displacement of the vector field at each point, then \[ (x^*, y^*) \in \operatorname{argmin}_{x,y} \; ( f(x,y) - x )^2 + ( g(x,y) - y )^2. \] Thus, we can use optimization methods to solve for the root. But as I mentioned above, optimization methods are relatively inefficient in one dimension compared to native root-finding methods. Isn't there a better way, one that uses more information from the vector field than just magnitudes?
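
As a sketch of this naive approach, one can hand the squared displacement of the example map above to a generic optimizer (the starting guess here is arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

def residual_sq(p):
    """Squared displacement of the map at p; zero exactly at a fixed point."""
    x, y = p
    return (f(x, y) - x)**2 + (g(x, y) - y)**2

result = minimize(residual_sq, x0=np.array([0.5, 0.5]))
print(result.x)  # approximate fixed point (x*, y*)
```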

Broyden's method and other quasi-Newton methods can definitely do better than optimization, but they don't always converge. I'd like a method for two dimensions with robustness like that of bracketing methods: it might be a little slow, but you know it's going to work.
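
SciPy does ship a Broyden solver; a minimal sketch of applying it to the displacement field of the example map looks like this, with the caveat that convergence from any particular starting guess is not guaranteed, which is exactly the complaint:

```python
import numpy as np
from scipy.optimize import broyden1

def displacement(p):
    """G(p) = F(p) - p; fixed points of the map are roots of G."""
    x, y = p
    return np.array([f(x, y) - x, g(x, y) - y])

# Works from a good initial guess, but offers no bracket-style guarantee.
p_star = broyden1(displacement, [0.5, 0.5])
```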

Here is a different idea that can work really well if you get lucky. Along our cross-sectional fiber, look for places where the inner or outer geometric product with the fiber's tangent space vanishes. If $u$ is a vector in the tangent space of our fiber, then the inner product vanishes where the vector field is perpendicular to the fiber, \[ [\, f(x,y)-x, \; g(x,y) - y \,] \cdot u = 0, \] and the outer product vanishes where the vector field is parallel to the fiber, \[ [\, f(x,y)-x, \; g(x,y) - y \,] \wedge u = 0. \] The only place both vanish is at a fixed point $(x^*, y^*)$. So, given a $u$, we can use scalar root-finding to construct a curve where the inner product vanishes, and then use scalar root-finding along that curve to find where the outer product simultaneously vanishes. Or we can do things in the opposite order. Since in both cases the functions are smooth, we can use super-linear methods both times, making 2-d root-finding a super-linear process.
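
Here is a minimal sketch of the two-stage solve for the example map above, taking vertical lines as fibers so that $u = (0, 1)$: the inner product then reduces to $g(x,y) - y$, and on the resulting curve the outer product reduces to $f(x,y) - x$. Brent's method is my stand-in for "a fast scalar root-finder", and the brackets are assumptions about where the sign changes lie:

```python
from scipy.optimize import brentq

def inner_zero(x, y_lo=-2.0, y_hi=2.0):
    """Along the vertical fiber at x (tangent u = (0, 1)), find the y where
    the inner product vanishes, i.e. where g(x, y) - y = 0.  Assumes the
    bracket [y_lo, y_hi] contains a sign change."""
    return brentq(lambda y: g(x, y) - y, y_lo, y_hi)

def outer_on_curve(x):
    """Restricted to the curve y = inner_zero(x), the outer product with
    u = (0, 1) reduces to the scalar f(x, y) - x."""
    return f(x, inner_zero(x)) - x

# Both products vanish simultaneously only at the fixed point, so one more
# scalar solve along the curve polishes the root.
x_star = brentq(outer_on_curve, -1.0, 1.0)
y_star = inner_zero(x_star)
print(x_star, y_star)  # ~ (0, 0) for the example map above
```

The same two-stage solve works with the roles of the inner and outer products swapped; the bracketed scalar solves are where the method inherits its robustness.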

Of course, there are several ways the idea might fall down. We need a domain that already contains a root, and it must be tight enough around the root that non-linear terms don't cause the scalar root-finding to fail; what "tight enough" actually means will depend on the orientation we've picked for our fiber bundle. And we haven't even established the existence of our reduced manifolds. I'll write more later, but for starters, here's a picture.

The idea here can be extended to higher dimensions, although we lose a little more leverage because of the greater geometric flexibility in three or more dimensions. While outer products are 1-dimensional objects in 2-d, they are in general $(n-1)$-dimensional objects in $n$-dimensional spaces. In 3 dimensions, we would first have to decompose the space into parallel planes, and then decompose those planes into lines.