For certain problems, Galerkin projection doesn't work (or doesn't work well) for finding the expansion coefficients. Alternative methods include minimizing the residual of the PDE, and treating the coefficients as variational parameters (e.g. when solving a Schrödinger-like equation). It would be helpful to have demonstrations of this in the RBM notebook (Introduction to Reduced-Basis Methods in Nuclear Physics — Reduced Basis Methods in Nuclear Physics), to compare with the harmonic oscillator example. We may also want to demonstrate examples where one or more methods do not work.
Did you wrangle anyone else into looking at this? It might be worth making a notebook based on the HO or Gross-Pitaevskii example in a fork of the nuclear-rbm book, and then we can merge it in as a sub-section.
Amy sounded interested in working on it. Our initial plan was to start with the HO example, so we can easily compare the methods.
This is a cool topic. I just want to suggest one related application to consider: determining an optimal affine decomposition for a non-affine operator.
For example, in the EIM chapter of the rbm jupyter-book, we decompose a non-affine scattering potential into M affine terms, each the product of one principal component of the training set of operators (as the parameter varies) with a coefficient function of the parameter. These coefficient functions are determined by requiring that the approximation interpolate the operator at a set of M points on the problem domain. This method has trouble extrapolating outside the bounds of the training set in parameter space.
An extension could be to evaluate the operator at N > M points on the domain and devise a least-squares optimization to determine the best-fit coefficient functions. Since simply evaluating the operator (without solving the equation) is relatively cheap, many points (N >> M) can be used, hopefully improving robustness compared to the exactly determined problem.
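As a toy illustration of that extension (the Gaussian family, the number of components, and the evenly spaced sample points are all made-up choices here, not what the EIM chapter actually uses):

```python
import numpy as np

# toy non-affine operator: V(x, s) = exp(-s x^2), with s the parameter
x = np.linspace(0.0, 3.0, 200)
V = lambda s: np.exp(-s * x**2)

# principal components of a training set swept over the parameter
train = np.linspace(0.5, 2.0, 20)
U, _, _ = np.linalg.svd(np.array([V(s) for s in train]).T,
                        full_matrices=False)
M = 4
F = U[:, :M]                       # the M affine terms f_k(x)

# evaluate at N > M points on the domain (evenly spaced, for illustration)
N = 12
idx = np.linspace(0, len(x) - 1, N).astype(int)

# for a new parameter, evaluate V only at those N points, then least-squares
# fit the coefficients a_k(s) of the expansion sum_k a_k(s) f_k(x)
s_new = 1.3
a, *_ = np.linalg.lstsq(F[idx], V(s_new)[idx], rcond=None)
V_approx = F @ a
err = np.max(np.abs(V_approx - V(s_new)))
```

The full operator evaluation on the grid appears only to measure `err`; the fit itself touches just the N sampled entries.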
Actually, yes! This idea for empirical interpolation is exactly what we would want in an ideal world.
Some of these ideas have fancy-sounding names like “gappy POD reconstruction”, in case you want to take a stab at the literature.
In an ideal world, you would evaluate the potential you want to interpolate and directly project it onto the principal components for reconstruction. This is equivalent to solving a least-squares regression problem where we fit a function V(x, s) with a sum a_k * f_k(x) over the principal components f_k. Projecting (or solving the normal equations, whichever you want to call it) gives the solution, but it would take operations of the same order as the ones we are trying to avoid by using the interpolation.
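A tiny sketch of that equivalence, with random stand-ins for the components and the target (nothing here is from the actual rbm code):

```python
import numpy as np

rng = np.random.default_rng(1)
# orthonormal "principal components" as the columns of F
F, _ = np.linalg.qr(rng.normal(size=(200, 4)))
v = rng.normal(size=200)            # target function on the full grid

# direct projection: a_k = <f_k, v>, one pass over the full grid
a_proj = F.T @ v

# the same thing posed as least squares: min_a || F a - v ||
a_lsq, *_ = np.linalg.lstsq(F, v, rcond=None)

print(np.allclose(a_proj, a_lsq))   # True: identical for orthonormal F
```

Both routes cost on the order of the full grid size, which is exactly the cost the interpolation is meant to avoid.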
Gappy methods aim to get the most bang for your buck by using only a few well-chosen values of the potential to approximate the true projection. The trick is finding good enough points (both when we use M points and when N > M). I know there are a few ideas in the literature, but I haven’t sat down to do a survey; if someone would do that I would be eternally grateful.
For now I will say that I implemented a version of the ‘maxvol’ algorithm to select exactly M points, and it is what we have been using so far. I know people have developed extensions that select N > M points in a principled way (rect_maxvol, for example), but I haven’t played with them enough to tell you if they’re any good for our problems; we should go and check.
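For reference, a minimal sketch of the square maxvol idea, written as a greedy row-swap loop. This is my own toy version under the usual assumptions (tall full-rank matrix), not the implementation in the rbm code and not the optimized algorithm from the literature:

```python
import numpy as np
from scipy.linalg import qr

def maxvol(F, tol=1.05, max_iter=200):
    """Pick M rows of the tall matrix F (n x M) whose M x M submatrix
    has (locally) maximal |det|.  Greedy sketch, not optimized."""
    n, M = F.shape
    # initialize with the pivots of a column-pivoted QR of F.T,
    # i.e. a reasonable set of "most independent" rows of F
    _, _, piv = qr(F.T, pivoting=True)
    idx = np.array(piv[:M])
    for _ in range(max_iter):
        B = F @ np.linalg.inv(F[idx])   # rows idx of B form the identity
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:         # no swap grows |det| by more than tol
            break
        idx[j] = i                      # swapping grows |det| by |B[i, j]|
    return np.sort(idx)
```

With the components `F` from an EIM-style decomposition, `idx = maxvol(F)` would give the M interpolation points, and `F @ np.linalg.inv(F[idx]) @ v[idx]` the interpolant of a function `v` known only at those points.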
We are exploring this idea as well, mainly to create an effective operator which, depending on the nature of your problem, can be made to track, via its kernel or eigenspaces, the curve that your database traces in principal-component space.
We are still exploring different methods to “guess” the Hamiltonian, with varying levels of success.
@KyleB, this was exactly our motivation: as the dimension of the parameter space increases, interpolation becomes more dubious.