Optical Model RBM

Hello, finally getting back to you!

I was about to try to spit out some numbers, but I noticed that if I switch to a potential with imaginary components (I tried adding them just to the diagonal parts), I get an assertion error in coupled_channels_example() at:

assert np.isclose(complex_det(S), 1.0)

I think this is OK, meaning the assert was only meant to apply in the case of real potentials, but @KyleB can you confirm? Thank you.

Hey, sorry! I know I said I’d look at this but I’ve been caught up with other things.

Yes, that is correct: the coupled channels example uses a real (i.e. Hermitian) interaction, so the assert was a sanity check that the S-matrix was indeed unitary, which would not be the case for complex interactions.
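For the complex case, a more direct sanity check is the unitarity defect itself. A minimal sketch, assuming S comes out of the solver as a square complex numpy array:

    import numpy as np

    def unitarity_defect(S):
        # Return || S^dag S - I ||, which should be ~0 for a real (Hermitian)
        # interaction. With an absorptive (complex) potential, flux leaves the
        # coupled channels, so S^dag S < I and |det(S)| < 1, which is exactly
        # why the det-based assert fires once imaginary parts are switched on.
        identity = np.eye(S.shape[0], dtype=complex)
        return np.linalg.norm(S.conj().T @ S - identity)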

I really really promise I will help out more with this, just need a few days to get a paper draft to my collaborators that I’ve been promising for months lol

For the record, I packaged the R-matrix code into a little Python package that lives here: GitHub - beykyle/lagrange_rmatrix: Solver for the radial Bloch-Schrödinger equation in the continuum using the calculable R-matrix method on a Lagrange-Legendre mesh

Although I don’t think the implementation has changed much from what I uploaded to the code space.

Hi! Sorry for the wait.

I uploaded a “timeRmatrix.py” to the repository. It has the parameters defined as in our initial notebook (with the freedom to set gamma ≠ beta), which can be set by passing keyword arguments.
It takes about 6-8 seconds to run a calculation on my personal computer; the speed seems to depend a bit on the parameter values. It takes about 1 second more if I ask for a PNG figure.

Regarding the discussion on the parameter space, I imagine there is at least one extra parameter we want to allow to vary, the collision energy, no? If we want to stay at 4 varied parameters, we might give up the imaginary parts and fix W0 = 0.

I was thinking: in principle we could train the RBM not on the wave functions but, for instance, on the S-matrix. Would that make sense?

To avoid having to get familiar with yet another version, I did not try to reimplement this on the separately packaged version, which would, however, be superior from a “good practices” perspective.

In the initial notebook (and in the Runge-Kutta implementation), the transition potential is a Gaussian centered at the origin, while in the R-matrix example it is a “surface-peaked Gaussian”. I could see a rationale for both (do we want the coupling between different channels to act chiefly at the surface or in the nuclear interior?). Do you remember if there was a specific reason why you picked the latter in the newer implementation?

That was only for the single-channel case, correct? It would be cool to have the same check in the coupled channels case, but I am a bit tired of tinkering with the high-fidelity solver; I want to get the RBM going!

That is correct, I do as well haha! Thanks so much for taking a look at this.

Yeah, I just picked surface-peaked to mimic transition potentials that act primarily amongst valence nucleons at the surface, but I think either way would be a good test problem.
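For concreteness, here is a minimal sketch of the two forms under discussion (function and parameter names are placeholders, not the ones in the notebook):

    import numpy as np

    def central_coupling(r, v0=1.0, beta=1.0):
        # Gaussian transition potential centered at the origin: interior coupling.
        return v0 * np.exp(-((r / beta) ** 2))

    def surface_coupling(r, v0=1.0, R=4.0, a=1.0):
        # Surface-peaked Gaussian: coupling concentrated near the surface r = R.
        return v0 * np.exp(-(((r - R) / a) ** 2))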

This is an interesting idea. To do what we typically call the Reduced Basis Method, we need to actually come up with a basis in Hilbert space to project the Schrödinger equation onto, e.g. a Galerkin weak form of the SE. In principle, I believe the S-matrix should commute with the Hamiltonian, so the bases they describe should be equivalent given the same asymptotic scattering boundary conditions; maybe projecting onto the eigenvectors of the S-matrix in channel space could get us somewhere.

I think what you’re describing would be better formulated as a variational method. There has been some interesting work on this (and, in fact, the initial literature on reduced basis methods in scattering used the Kohn variational principle to find a stationary principle for the K-matrix [related to the S-matrix but for a different choice of basis for the asymptotic scattering BCs]). This paper is a good review of variational approaches to scattering: https://iopscience.iop.org/article/10.1088/1361-6471/ac83dd/pdf

In my opinion, the Galerkin emulators, where you just project onto a reduced basis and solve the SE there, may be more amenable to the CC problem. For example, in this work, the variational emulator required 4 total emulators (2^2 for a 2-channel problem); I think with the Galerkin method we could do it with just one.
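Roughly, the Galerkin step I mean looks like the sketch below (names are placeholders, and the boundary-condition bookkeeping of the actual scattering problem is omitted):

    import numpy as np

    def galerkin_solve(B, H, b):
        # B: (N, n) columns are reduced-basis vectors, n << N
        # H: (N, N) high-fidelity operator at the current parameter values
        # b: (N,)   source / boundary term
        H_hat = B.conj().T @ H @ B   # small (n, n) projected operator
        b_hat = B.conj().T @ b       # projected source
        coeffs = np.linalg.solve(H_hat, b_hat)
        return B @ coeffs            # lift the solution back to the full space

The point is that this is one small solve per parameter sample, covering all channels at once, rather than one emulator per channel pair.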

I need to finish up a draft of a paper tomorrow, but after that I should be able to actually take some time and get the RBM started.

One thing I’ve been thinking about lately is using Padé approximants to help factorize non-affine operators: Padé approximant - Wikipedia
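As a toy illustration of the approximant itself (nothing to do with our specific operators yet), scipy can build one from Taylor coefficients:

    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    # Toy non-affine parameter dependence f(theta) = exp(-theta): replace the
    # truncated Taylor series with a rational [3/2] Pade approximant, which
    # typically stays accurate over a wider parameter range than the series.
    taylor = [(-1) ** k / factorial(k) for k in range(6)]
    p, q = pade(taylor, 2)  # denominator degree 2, numerator degree 3

    theta = 2.5
    print(np.exp(-theta), p(theta) / q(theta))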

Hey @Simone, I have pushed a few quick changes:

  1. formatting and imports
  2. removed the rmatrixse.py file, importing lagrange-rmatrix instead

Everything still seems to be working, as long as you have lagrange-rmatrix installed, e.g.:

 pip install lagrange-rmatrix

Hi, thank you so much for the updates and the interesting reply on RBM vs. variational. This weekend I have a pleasant but demanding surprise; I hope to give our project some time in the coming days.

That’s cool! But if we stop at changing the depths we won’t need it, right? I would suggest starting from there. The matrix elements of the kinetic energy operator do not depend on the parameters, while the others would just be parameter factors times fixed matrix elements.
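Something like this offline/online split is what I mean (all matrices here are random stand-ins, just to show the shape of the computation):

    import numpy as np

    rng = np.random.default_rng(0)
    N, n = 200, 4                                 # high-fidelity vs. reduced dimension (placeholders)
    B = np.linalg.qr(rng.normal(size=(N, n)))[0]  # stand-in orthonormal reduced basis
    T = rng.normal(size=(N, N))                   # kinetic matrix: parameter-independent
    V_unit = rng.normal(size=(N, N))              # potential matrix at unit depth

    # Offline, done once: project the fixed matrices onto the reduced basis.
    T_hat = B.T @ T @ B
    V_hat = B.T @ V_unit @ B

    # Online, per sample: changing the depth V0 only rescales a fixed matrix.
    def reduced_hamiltonian(V0):
        return T_hat + V0 * V_hat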

Do you think it would be sensible to just start out with a “random” basis (just choosing a couple of solutions for parameter values picked by hand), just to see what we get and, later, how that compares with better-thought-out choices?
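That is, something along these lines (a sketch; the snapshot columns would be high-fidelity wave functions at a handful of hand-picked parameter values):

    import numpy as np

    def basis_from_snapshots(snapshots, tol=1e-10):
        # snapshots: (N, k) array whose columns are solutions at hand-chosen
        # parameter values; the SVD drops near-redundant directions.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        keep = s / s[0] > tol
        return U[:, keep]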

I believe we tried this at some point, right @Edgard? Do you recall the performance?

Yes, agreed: the depths enter affinely, so no decomposition is needed.

I think that would be a great place to start!

We did try Padé approximants for an initial decomposition of nonlinear potentials. At that point we were not extremely familiar with empirical interpolation, so it made sense to do it that way.

The performance was not better than the nested empirical interpolation we did later, and certainly not better than the black-box stuff that is being attempted now. However, I can see a case for Padé approximants for potentials where we care about a large enough range of parameters that the asymptotic behavior needs to be captured precisely.

That being said, I’m always in favor of trying well-established (and powerful) techniques. I remember that at the summer school people were presenting very complicated Bayesian interpolation techniques that could easily be replaced by Padé approximants. @KyleB, what specifically were you planning to tackle with Padé approximants?